
One area in statistics where I see conflicting advice is how to analyze pre-post data. A few years ago, I received a call from a distressed client. Nancy had asked for advice about how to run a repeated measures analysis. The advisor told Nancy that actually, a repeated measures analysis was inappropriate for her data.

Nancy was sure repeated measures was appropriate. This advice led her to fear that she had grossly misunderstood a very basic tenet in her statistical training.

Nancy had measured a response variable at two time points for two groups. The intervention group received a treatment and a control group did not. Participants were randomly assigned to one of the two groups. The researcher measured each participant before and after the intervention.

Nancy was sure that this was a classic repeated measures experiment. It has one between-subjects factor (treatment group) and one within-subjects factor (time).

The advisor insisted that this was a classic pre-post design, and that the way to analyze pre-post data is not with a repeated measures ANOVA, but with an ANCOVA. In ANCOVA, the dependent variable is the post-test measure. The pre-test measure is not an outcome, but a covariate. This model assesses the differences in the post-test means after accounting for pre-test values. The advisor said repeated measures ANOVA is only appropriate if the outcome is measured multiple times after the intervention.

The more she insisted repeated measures didn't work in Nancy's design, the more confused Nancy got. This kind of situation happens all the time, in which a colleague, a reviewer, or a statistical consultant insists that you need to do the analysis differently. Sometimes they're right, but sometimes, as was true here, the two analyses answer different research questions. Nancy's research question was whether the mean change in the outcome from pre to post differed in the two groups. This is directly measured by the time*group interaction term in the repeated measures ANOVA.
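To make the contrast concrete, here is a minimal sketch in Python with statsmodels, using simulated pre-post data (the numbers, group labels, and effect size are hypothetical, not Nancy's data). With only two time points, the time*group interaction from a repeated measures ANOVA is equivalent to comparing the groups on their change scores, so the change-score regression below stands in for that interaction test, while the ANCOVA regresses the post-test on group with the pre-test as a covariate.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pre-post data: two randomized groups measured twice
rng = np.random.default_rng(42)
n = 50  # participants per group
pre = rng.normal(100, 15, 2 * n)
group = np.repeat(["control", "treatment"], n)
effect = np.where(group == "treatment", 5.0, 0.0)  # assumed treatment effect
post = 0.7 * pre + 30 + effect + rng.normal(0, 8, 2 * n)
df = pd.DataFrame({"pre": pre, "post": post, "group": group})

# Repeated measures question: does the pre-to-post CHANGE differ by group?
# With two time points, the time*group interaction is equivalent to a
# one-way ANOVA (or t-test) on the change scores.
df["change"] = df["post"] - df["pre"]
rm_equiv = smf.ols("change ~ C(group)", data=df).fit()

# ANCOVA question: do POST-test means differ by group, after
# adjusting for the pre-test (pre is a covariate, not an outcome)?
ancova = smf.ols("post ~ pre + C(group)", data=df).fit()

print("Change-score group effect:", rm_equiv.params["C(group)[T.treatment]"])
print("ANCOVA group effect:      ", ancova.params["C(group)[T.treatment]"])
```

In randomized designs like Nancy's, the two group coefficients usually land close together; the point of the sketch is that they estimate answers to different questions — mean difference in change versus adjusted difference in post-test means.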
