2025

Why Effect Size?

  • NHST \(p\)-values tell us whether there is an effect, but not how big the effect is.

  • Effect size helps quantify the magnitude of differences or relationships.

  • Important for interpreting practical significance and conducting power analyses.

  • Effect sizes are not influenced by sample size the way \(p\)-values are.

  • Always report effect size alongside NHST results.

One-sample t-test

  • Metric: \(d\) — Cohen’s \(d\) for a single group

\[ d = \frac{\bar{X} - \mu_0}{s} \]

  • \(\bar{X}\): sample mean

  • \(\mu_0\): population mean under the null hypothesis

  • \(s\): sample standard deviation

  • \(d\) tells us how many standard deviations the sample mean is from the hypothesised mean.
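As a minimal sketch (in Python, with invented scores and a hypothetical null value of 50), \(d\) can be computed directly from the definition:

```python
import math

def cohens_d_one_sample(x, mu0):
    """Cohen's d for a one-sample t-test: (mean - mu0) / sample SD."""
    n = len(x)
    mean = sum(x) / n
    # Sample standard deviation (n - 1 in the denominator)
    s = math.sqrt(sum((xi - mean) ** 2 for xi in x) / (n - 1))
    return (mean - mu0) / s

scores = [52, 55, 48, 60, 57, 51, 54, 58]   # hypothetical sample
print(round(cohens_d_one_sample(scores, 50), 3))
```
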

Independent Samples t-test

  • Metric: \(d\) — Cohen’s \(d\) for independent groups

\[ d = \frac{\bar{X}_1 - \bar{X}_2}{s_p} \]

  • \(\bar{X}_1, \bar{X}_2\): means of the two groups
  • \(s_p\): pooled standard deviation

\[ s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}} \]

  • \(d\) tells us how many pooled standard deviations separate the two group means.
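A sketch of the two-group version, assuming two small made-up groups; note the pooled SD weights each group's variance by its degrees of freedom:

```python
import statistics

def cohens_d_independent(x1, x2):
    """Cohen's d for two independent groups, using the pooled SD."""
    n1, n2 = len(x1), len(x2)
    s1, s2 = statistics.stdev(x1), statistics.stdev(x2)  # sample SDs
    # Pooled SD: df-weighted average of the two variances
    sp = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(x1) - statistics.mean(x2)) / sp

group_a = [14, 17, 15, 16, 18, 13]   # hypothetical data
group_b = [11, 12, 14, 10, 13, 12]
print(round(cohens_d_independent(group_a, group_b), 3))
```
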

Repeated Measures t-test

  • Metric: \(d_z\) — Cohen’s \(d_z\) for paired samples

\[ d_z = \frac{\bar{D}}{s_D} \]

  • \(\bar{D}\): mean of the difference scores

  • \(s_D\): standard deviation of the difference scores

  • \(d_z\) is the size of the mean change relative to variability in those changes.
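The paired version works on difference scores. A sketch with invented before/after measurements:

```python
import statistics

def cohens_dz(pre, post):
    """Cohen's d_z: mean difference score over the SD of the differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    return statistics.mean(diffs) / statistics.stdev(diffs)

pre  = [20, 22, 19, 24, 21]   # hypothetical before/after scores
post = [23, 25, 20, 27, 24]
print(round(cohens_dz(pre, post), 3))
```
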

One-way ANOVA

  • Metric: \(\eta^2\) — Eta-squared

\[ \eta^2 = \frac{SS_{\text{between}}}{SS_{\text{total}}} \]

  • \(SS_{\text{between}}\): variability due to group differences

  • \(SS_{\text{total}}\): total variability in the data

  • \(\eta^2\) is the proportion of total variance explained by the group factor.
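The sums-of-squares decomposition can be sketched directly (hypothetical groups; in practice you would read these SS terms off the ANOVA table):

```python
def eta_squared(groups):
    """Eta-squared for a one-way ANOVA: SS_between / SS_total."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    ss_total = sum((x - grand_mean) ** 2 for x in all_scores)
    # Between-group SS: squared distance of each group mean from the grand mean
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups)
    return ss_between / ss_total

groups = [[4, 5, 6], [7, 8, 9], [10, 11, 12]]   # hypothetical groups
print(round(eta_squared(groups), 3))            # 0.9
```
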

One-way Repeated Measures ANOVA

  • Metric: \(\eta_p^2\) — Partial eta-squared

\[ \eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}} \]

  • \(SS_{\text{effect}}\): variability due to the factor of interest

  • \(SS_{\text{error}}\): residual variability after accounting for subjects

  • \(\eta_p^2\) is the proportion of non-subject-related variance explained by the factor.
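A sketch of the full decomposition for a one-way repeated-measures design (subjects-by-conditions data, invented here). The key step is removing the subject SS before forming the error term:

```python
def partial_eta_squared_rm(data):
    """Partial eta-squared for a one-way repeated-measures design.
    data[i][j] = score of subject i under condition j."""
    n_subj, n_cond = len(data), len(data[0])
    grand = sum(x for row in data for x in row) / (n_subj * n_cond)
    # SS for the condition (effect) factor
    cond_means = [sum(row[j] for row in data) / n_subj for j in range(n_cond)]
    ss_effect = n_subj * sum((m - grand) ** 2 for m in cond_means)
    # SS for subjects, which is removed from the error term
    subj_means = [sum(row) / n_cond for row in data]
    ss_subject = n_cond * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_error = ss_total - ss_effect - ss_subject
    return ss_effect / (ss_effect + ss_error)

data = [[3, 5, 7],   # hypothetical: 4 subjects x 3 conditions
        [4, 6, 8],
        [2, 5, 9],
        [5, 6, 8]]
print(round(partial_eta_squared_rm(data), 3))
```
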

Two-way Factorial ANOVA

  • Metric: \(\eta_p^2\) — Partial eta-squared (for each factor and interaction)

\[ \eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}} \]

  • Used separately for:
    • Main effect A
    • Main effect B
    • Interaction A \(\times\) B
  • Each \(\eta_p^2\) quantifies how much of the explainable variance is due to that factor.

Mixed ANOVA

  • Metric: \(\eta_p^2\) — Partial eta-squared

  • Same formula applies to:

    • Between-subject factors
    • Within-subject factors
    • Their interactions
  • Gives the proportion of variance explained by each effect, with variance due to the other effects excluded from the denominator.

Correlation

  • Metric: Pearson’s \(r\), also known as the Pearson product-moment correlation coefficient.

\[ r = \frac{\text{cov}(X, Y)}{s_X s_Y} \]

  • \(\text{cov}(X, Y)\): covariance between \(X\) and \(Y\)

  • \(s_X, s_Y\): standard deviations of \(X\) and \(Y\)

  • \(r\) quantifies the strength and direction of a linear relationship between two variables.
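A sketch of the definition, with hypothetical paired observations (both the covariance and the SDs use \(n - 1\), so the correction cancels):

```python
def pearson_r(x, y):
    """Pearson correlation: covariance divided by the product of the SDs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    sx = (sum((a - mx) ** 2 for a in x) / (n - 1)) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / (n - 1)) ** 0.5
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]   # hypothetical paired observations
y = [2, 4, 5, 4, 6]
print(round(pearson_r(x, y), 3))
```
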

Partial Correlation

  • Metric: \(r_{XY\cdot Z}\) — partial correlation coefficient

\[ r_{XY\cdot Z} = \frac{r_{XY} - r_{XZ}r_{YZ}}{\sqrt{(1 - r_{XZ}^2)(1 - r_{YZ}^2)}} \]

  • \(r_{XY}\): correlation between \(X\) and \(Y\)

  • \(r_{XZ}, r_{YZ}\): correlations with the control variable \(Z\)

  • \(r_{XY\cdot Z}\) is the relationship between \(X\) and \(Y\) with the effect of \(Z\) removed.
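The formula takes only the three pairwise correlations as input, so a sketch is very short (the correlation values below are invented for illustration):

```python
def partial_correlation(r_xy, r_xz, r_yz):
    """Partial correlation of X and Y, controlling for Z."""
    return (r_xy - r_xz * r_yz) / (((1 - r_xz**2) * (1 - r_yz**2)) ** 0.5)

# Hypothetical pairwise correlations
print(round(partial_correlation(0.6, 0.4, 0.5), 3))   # 0.504
```
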

Simple Linear Regression

  • Metric: \(R^2\) — the coefficient of determination.

\[ R^2 = \frac{SS_{\text{regression}}}{SS_{\text{total}}} \]

  • \(SS_{\text{regression}}\): variability explained by the model

  • \(SS_{\text{total}}\): total variability in the outcome

  • \(R^2\) is the proportion of variance in \(Y\) explained by \(X\).
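A sketch that fits the least-squares line and then forms the SS ratio (hypothetical data):

```python
def r_squared(x, y):
    """R^2 for simple linear regression: SS_regression / SS_total."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Least-squares slope and intercept
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    a0 = my - b * mx
    y_hat = [a0 + b * a for a in x]
    ss_reg = sum((yh - my) ** 2 for yh in y_hat)
    ss_total = sum((c - my) ** 2 for c in y)
    return ss_reg / ss_total

x = [1, 2, 3, 4, 5]   # hypothetical predictor and outcome
y = [2, 4, 5, 4, 6]
print(round(r_squared(x, y), 3))
```

With a single predictor, this value is simply the square of Pearson's \(r\) between \(X\) and \(Y\).
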

Multiple Regression

  • Metric: \(sr_X^2\) — squared semi-partial correlation

\[ sr_X^2 = R^2_{\text{full}} - R^2_{\text{without X}} \]

  • \(R^2_{\text{full}}\): total \(R^2\) with all predictors

  • \(R^2_{\text{without X}}\): \(R^2\) when predictor \(X\) is removed

  • \(sr_X^2\) is the unique contribution of \(X\) to explaining \(Y\), over and above other predictors.

  • This differs from partial correlation, which removes shared variance from both \(X\) and \(Y\).

  • summary(lm()) does not report \(sr_X^2\) directly, but it can be calculated from the full and reduced models.
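For the two-predictor case, both \(R^2\) values can be obtained from the pairwise correlations via the standard two-predictor identity, which makes the full-minus-reduced comparison easy to sketch (the correlation values below are invented):

```python
def sr2_two_predictors(r_y1, r_y2, r_12):
    """Squared semi-partial correlation of predictor 1 in a two-predictor
    regression, computed as R^2(full) - R^2(without predictor 1)."""
    # Standard identity for R^2 with two predictors
    r2_full = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)
    r2_reduced = r_y2**2          # model with predictor 2 only
    return r2_full - r2_reduced

# Hypothetical correlations: outcome with each predictor, and between predictors
print(round(sr2_two_predictors(0.5, 0.4, 0.3), 3))   # 0.159
```
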

Summary Table (1/2)

Test | Metric | Equation | Interpretation
One-sample t | \(d\) | \((\bar{X} - \mu_0)/s\) | Mean vs. null, in SD units
Independent t | \(d\) | \((\bar{X}_1 - \bar{X}_2)/s_p\) | Between groups, in pooled SDs
Paired t | \(d_z\) | \(\bar{D}/s_D\) | Mean change in SDs
One-way ANOVA | \(\eta^2\) | \(SS_{\text{between}}/SS_{\text{total}}\) | Proportion of variance explained
RM one-way ANOVA | \(\eta_p^2\) | \(SS_{\text{effect}}/(SS_{\text{effect}} + SS_{\text{error}})\) | Adjusts for within-subject variability
Two-way / mixed ANOVA | \(\eta_p^2\) | Same as above | Applied to each effect

Summary Table (2/2)

Test | Metric | Equation | Interpretation
Correlation | \(r\) | \(\text{cov}(X, Y)/(s_X s_Y)\) | Strength of linear relationship
Partial correlation | \(r_{XY\cdot Z}\) | See above | Controls for \(Z\)
Simple regression | \(R^2\) | \(SS_{\text{reg}}/SS_{\text{total}}\) | Variance in \(Y\) explained by \(X\)
Multiple regression | \(sr_X^2\) | \(R^2_{\text{full}} - R^2_{\text{w/o } X}\) | Unique variance explained by \(X\)

\(\eta^2\), \(\eta_p^2\), and \(R^2\)

  • We’ve seen ANOVA can be viewed as an instance of a general linear model.

  • When an ANOVA is treated as a GLM, summary(lm()) reports \(R^2\) and adjusted \(R^2\).

  • When using ezANOVA(), we instead get generalized eta-squared (\(\eta_G^2\)).

  • Are they related?

\(\eta^2\), \(\eta_p^2\), and \(R^2\)

  • \(R^2\) represents the proportion of total variance in the outcome explained by all predictors in the model.

  • If you fit a one-way ANOVA using lm(): \(R^2 = \eta^2\)

  • This is because there’s only one source of explained variance in a one-way ANOVA.

  • With multiple factors, \(R^2\) is the proportion of variance explained by all factors combined, not just one.

\(\eta^2\), \(\eta_p^2\), and \(R^2\)

  • Adjusted \(R^2\) corrects \(R^2\) for the number of predictors, penalizing complexity.

  • Like \(R^2\) it’s a model-level summary. It is not effect-specific.

  • For a linear model with a single predictor, \(R^2\) equals \(\eta^2\); adjusted \(R^2\) is slightly smaller but approaches both as the sample size grows.

\(\eta^2\), \(\eta_p^2\), and \(R^2\)

  • \(\eta_p^2\) isolates the contribution of a single factor, ignoring other effects, when you have multiple factors (e.g., a two-way or mixed ANOVA).

  • It is not directly comparable to \(R^2\) or adjusted \(R^2\) because it does not consider the total variance in the outcome, only the variance explained by that specific factor relative to its error term.

Generalized Eta-Squared (\(\eta_G^2\))

  • This is what ezANOVA() (from the ez package) reports by default. It was proposed by Olejnik & Algina (2003) to address limitations of both \(\eta^2\) and \(\eta_p^2\) in complex designs.

\[ \eta_G^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}} + SS_{\text{subject}}} \]

  • (This is the form for a one-way repeated-measures design; in general, the denominator adds every source of variance not attributable to manipulated factors.)

  • More comparable across between- and within-subjects designs.

  • Controls for the inflation seen in repeated-measures designs (where \(\eta_p^2\) tends to be high).
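For a one-way repeated-measures design, \(\eta_G^2\) only changes the denominator relative to \(\eta_p^2\): the subject SS goes back in. A sketch with invented subjects-by-conditions data:

```python
def generalized_eta_squared_rm(data):
    """Generalized eta-squared for a one-way repeated-measures design:
    SS_effect / (SS_effect + SS_error + SS_subject).
    data[i][j] = score of subject i under condition j."""
    n_subj, n_cond = len(data), len(data[0])
    grand = sum(x for row in data for x in row) / (n_subj * n_cond)
    cond_means = [sum(row[j] for row in data) / n_subj for j in range(n_cond)]
    ss_effect = n_subj * sum((m - grand) ** 2 for m in cond_means)
    subj_means = [sum(row) / n_cond for row in data]
    ss_subject = n_cond * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_error = ss_total - ss_effect - ss_subject
    # Subject variance stays in the denominator, unlike partial eta-squared
    return ss_effect / (ss_effect + ss_error + ss_subject)

data = [[3, 5, 7], [4, 6, 8], [2, 5, 9], [5, 6, 8]]   # hypothetical data
print(round(generalized_eta_squared_rm(data), 3))
```

Because the subject SS is retained, this value is smaller than the corresponding \(\eta_p^2\) for the same data, which is exactly the inflation control described above.
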

Summary Table

Metric | Scope | Numerator | Denominator | Good for…
\(\eta^2\) | Whole model | \(SS_{\text{effect}}\) | \(SS_{\text{total}}\) | One-way ANOVA, simple designs
\(\eta_p^2\) | Per effect (ignores others) | \(SS_{\text{effect}}\) | \(SS_{\text{effect}} + SS_{\text{error}}\) | Focused effects in factorial designs
\(\eta_G^2\) | Per effect, generalizable | \(SS_{\text{effect}}\) | \(SS_{\text{effect}} + SS_{\text{error}} + SS_{\text{subject}}\) | Mixed/repeated designs, meta-analysis
\(R^2\) | Whole model | \(SS_{\text{model}}\) | \(SS_{\text{total}}\) | Regression-style reporting
Adjusted \(R^2\) | Whole model | Penalized \(SS_{\text{model}}\) | \(SS_{\text{total}}\) | Comparing models with different numbers of predictors