- Specify the null and alternative hypotheses (\(H_0\) and \(H_1\)) in terms of the population mean \(\mu\).
\[ \begin{align} H_0: \mu &= \mu_0 \\ H_1: \mu &\neq \mu_0 \end{align} \]
\[ \alpha = 0.05 \]
\[ \widehat{\mu} = \overline{X} \sim \mathscr{N}\left(\mu_0, \frac{\sigma}{\sqrt{n}}\right) \]
\[ \begin{align} x_{\text{obs}} &= \{ x_1, x_2, \ldots, x_n \} \\ \widehat{\mu}_{\text{obs}} &= \overline{x}_{\text{obs}} = \frac{1}{n} \sum_{i=1}^{n} x_i \end{align} \]
\[ \begin{align} P(|\overline{X} - \mu_0| \geq |\overline{x}_{\text{obs}} - \mu_0| \mid H_0) < \alpha &\rightarrow \text{reject } H_0 \\ \text{otherwise} &\rightarrow \text{fail to reject } H_0 \end{align} \]
Specify the null and alternative hypotheses (\(H_0\) and \(H_1\)) in terms of a population parameter \(\theta\).
Specify the type I error rate – denoted by the symbol \(\alpha\) – you are willing to tolerate.
Specify the sample statistic \(\widehat{\theta}\) that you will use to estimate the population parameter \(\theta\) in step 1 and state how it is distributed under the assumption that \(H_0\) is true.
Obtain a random sample and use it to compute the sample statistic from step 3. Call this value \(\widehat{\theta}_{\text{obs}}\).
If \(\widehat{\theta}_{\text{obs}}\) or a more extreme outcome is very unlikely to occur under the assumption that \(H_0\) is true, then reject \(H_0\). Otherwise, do not reject \(H_0\).
\(\theta\) is a population parameter that you are interested in estimating. E.g., in the case of a Normal test, \(\theta\) is the population mean \(\mu\).
\(\widehat{\theta}\) is a sample statistic that you use to estimate \(\theta\). E.g., in the case of a Normal test, \(\widehat{\theta}\) is the sample mean \(\overline{x}\).
The reason we bother to write the steps in terms of \(\theta\) and \(\widehat{\theta}\) is that the steps are general and can be applied to any hypothesis test.
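The five steps above can be sketched in code for the Normal (Z) test case. This is a minimal illustration, not a prescribed implementation: the sample data, \(\mu_0\), and \(\sigma\) below are made-up values, and a known population standard deviation is assumed.

```python
# A sketch of the five hypothesis-testing steps for a Z test on a
# population mean, assuming sigma is known. All numbers are illustrative.
from statistics import NormalDist

# Step 1: H0: mu = mu_0  vs  H1: mu != mu_0
mu_0 = 100
sigma = 15                       # known population SD (assumption)

# Step 2: type I error rate we are willing to tolerate
alpha = 0.05

# Step 4: a random sample and the observed sample statistic
x = [104, 98, 112, 107, 95, 110, 101, 99, 108, 106]
n = len(x)
x_bar_obs = sum(x) / n

# Step 3: under H0, the sample mean is Normal(mu_0, sigma / sqrt(n))
sampling_dist = NormalDist(mu_0, sigma / n ** 0.5)

# Step 5: two-sided p-value -- the probability of x_bar_obs or a more
# extreme outcome under H0 -- compared against alpha
p_value = 2 * sampling_dist.cdf(mu_0 - abs(x_bar_obs - mu_0))
decision = "reject H0" if p_value < alpha else "fail to reject H0"
print(p_value, decision)
```

Note that only steps 1 and 3 change when the test targets a different parameter \(\theta\); the overall recipe is the same.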
Statistical tables were the primary tool for finding p-values before computers.
Tables provided pre-calculated critical values and p-values for various distributions at common significance levels.
The relevant test statistic would be calculated by hand from the sample data and then looked up in the appropriate table to find the corresponding p-value.
Consider just the Normal distribution defined by a mean \(\mu_X\) and standard deviation \(\sigma_X\).
There are infinitely many possible values for \(\mu_X\) and \(\sigma_X\), so there would be infinitely many tables.
In practice only the standard normal distribution was tabulated.
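A quick numerical check of why one table suffices: a probability under any \(\mathscr{N}(\mu_X, \sigma_X)\) equals the standard-normal CDF \(\Phi\) evaluated at the corresponding \(z\)-score, which is exactly what the printed tables provided. The values of \(\mu_X\), \(\sigma_X\), and \(x\) below are arbitrary.

```python
# Standardizing reduces any normal probability to a standard-normal one.
from statistics import NormalDist

mu_X, sigma_X = 100, 15
x = 120

# Probability computed directly from N(mu_X, sigma_X) ...
p_direct = NormalDist(mu_X, sigma_X).cdf(x)

# ... equals Phi(z), the standard-normal CDF of the z-score --
# the single quantity the old tables tabulated.
z = (x - mu_X) / sigma_X
p_table = NormalDist(0, 1).cdf(z)

print(p_direct, p_table)   # the two values agree
```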
\[ \begin{align} X &\sim \mathscr{N}(\mu_X, \sigma_X) \\ Z &= \frac{X - \mu_X}{\sigma_X} \\ Z &\sim \mathscr{N}(0, 1) \end{align} \]
\[ \begin{align} z_{obs} &= \frac{x_{obs} - \mu_X}{\sigma_X} \\ z_{obs} &\sim \mathscr{N}(0, 1) \end{align} \]
\[ \begin{align} z_{obs} &= \frac{\overline{x}_{obs} - \mu_\overline{X}}{\sigma_\overline{X}} \\ z_{obs} &\sim \mathscr{N}(0, 1) \end{align} \]
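The two transformations above can be checked by simulation: standardizing a single observation divides by \(\sigma_X\), while standardizing a sample mean divides by \(\sigma_{\overline{X}} = \sigma_X / \sqrt{n}\). The values of \(\mu_X\), \(\sigma_X\), and \(n\) below are arbitrary.

```python
# Simulation sketch of both z-transformations. Each should yield values
# with mean ~0 and standard deviation ~1.
import random

random.seed(0)
mu_X, sigma_X, n = 50, 10, 25
sigma_xbar = sigma_X / n ** 0.5          # standard error of the mean

z_x, z_xbar = [], []
for _ in range(20_000):
    sample = [random.gauss(mu_X, sigma_X) for _ in range(n)]
    x_bar = sum(sample) / n
    z_x.append((sample[0] - mu_X) / sigma_X)      # single observation
    z_xbar.append((x_bar - mu_X) / sigma_xbar)    # sample mean

def mean_sd(zs):
    m = sum(zs) / len(zs)
    sd = (sum((v - m) ** 2 for v in zs) / len(zs)) ** 0.5
    return m, sd

m_x, sd_x = mean_sd(z_x)
m_xbar, sd_xbar = mean_sd(z_xbar)
print(m_x, sd_x, m_xbar, sd_xbar)   # all near 0, 1, 0, 1
```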
We use capital letters to denote random variables and lower case letters to denote specific observations from those random variables.
We may also use subscripts (e.g., \(z_{obs}\)) to further clarify whether we are referencing a random variable or a specific observation.
Even if we neglect to follow this convention perfectly, it will usually be clear from context which is which.
Given a normal distribution with a mean (\(\mu_X\)) of 100 and a standard deviation (\(\sigma_X\)) of 15, you calculate a sample mean (\(\overline{x}\)) of 108 from a random sample of size \(n=10\). What is the \(z\)-score for this sample mean?
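One way to check your answer, using the numbers given in the question and the \(z\)-transformation of a sample mean from above:

```python
# z-score for a sample mean: divide by the standard error sigma_X / sqrt(n).
mu_X, sigma_X, n = 100, 15, 10
x_bar = 108

se = sigma_X / n ** 0.5          # approx 4.743
z = (x_bar - mu_X) / se
print(round(z, 2))               # about 1.69
```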
What is true about the \(Z\)-transformed distributions of \(X\) and \(\overline{X}\)?
The \(Z\)-transformed distribution of \(X\) is normal with a mean of 0 and a standard deviation of 1 but the \(Z\)-transformed distribution of \(\overline{X}\) is not.
The \(Z\)-transformed distribution of \(\overline{X}\) is normal with a mean of 0 and a standard deviation of 1 but the \(Z\)-transformed distribution of \(X\) is not.
They are both normal with a mean of 0 and a standard deviation of 1.
Neither is normal with a mean of 0 and a standard deviation of 1.