Detecting, Interpreting, and Analyzing Program Effects
17 important questions on Detecting, Interpreting, and Analyzing Program Effects
What is an effect size statistic?
- a statistical index that expresses the magnitude of a program effect in a standardized form, so that effects measured on different outcome scales can be compared
The end product of an impact assessment is an estimate of the program effect: the difference between the outcomes observed for program participants and the outcomes they would have experienced without the program.
The ability of an impact assessment to detect and describe program effects depends in large part on the magnitude of the effect relative to the background statistical noise, that is, on the signal-to-noise ratio achieved by the design.
Ways to characterize magnitude of the program effect
- as a percentage increase or decrease (only meaningful for measures with a true zero, not for measures with arbitrarily scaled units)
- an effect size statistic, such as the standardized mean difference (see the sketch below)
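As an illustration, a minimal Python sketch of the standardized mean difference; the function name and the group score lists are hypothetical, not from the study material.

```python
import numpy as np

def standardized_mean_difference(treatment, control):
    """Effect size: difference in group means divided by the pooled standard deviation."""
    treatment, control = np.asarray(treatment, float), np.asarray(control, float)
    n_t, n_c = len(treatment), len(control)
    pooled_sd = np.sqrt(((n_t - 1) * treatment.var(ddof=1) +
                         (n_c - 1) * control.var(ddof=1)) / (n_t + n_c - 2))
    return (treatment.mean() - control.mean()) / pooled_sd

# Hypothetical outcome scores for program and control groups
print(standardized_mean_difference([72, 68, 75, 80, 77, 70], [65, 70, 66, 72, 68, 64]))
```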
When the effect of a program on an outcome is zero, we should not expect an effect size of exactly zero in an impact evaluation. Why?
Because chance fluctuations in sampling and measurement (statistical noise) will produce some nonzero difference between groups even when the true effect is zero. We do not want so much noise that we are likely to mistake such a difference for a program effect, so we assess the chance that an observed effect is really just statistical noise through statistical significance testing.
How do you assess the signal-to-noise ratio?
If the difference between the mean outcomes of the intervention and control groups is statistically significant, the significance test is telling us that the signal-to-noise ratio, under the test's assumptions, is such that statistical noise is unlikely to have produced an effect as large as the one observed in the data when the real effect is zero.
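A minimal sketch of such a significance test, assuming independent intervention and control samples and using SciPy's t-test; the scores are hypothetical.

```python
from scipy import stats

# Hypothetical outcome scores for the two groups
intervention = [72, 68, 75, 80, 77, 70, 74, 69]
control = [65, 70, 66, 72, 68, 64, 71, 63]

# Independent-samples t-test: the signal (mean difference) relative to the noise (sampling variability)
t_stat, p_value = stats.ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value below the chosen alpha level (e.g., .05) would be called statistically significant
```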
Statistical significance does not mean practical significance or importance.
It simply indicates a result that is unlikely to be due to chance and is thus a minimum requirement for a meaningful result.
Statistical significance testing is basically an all-or-nothing test: if the observed difference is significant, it is large enough to be discussed as a program effect; if it is not significant, that claim cannot be made.
How can you control for Type I error?
- by setting the alpha level (the significance criterion, conventionally .05) for the statistical test, which caps the risk of declaring an effect when the true effect is zero
How can you control for Type II error?
1) decide the smallest effect size the design should reliably detect
2) decide how much risk of Type II error to accept
3) design the impact evaluation with the sample size and type of statistical test that will yield the desired level of statistical power (see the power-analysis sketch below)
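A minimal power-analysis sketch for the three steps above, assuming an independent-groups design; the minimum detectable effect size (0.30), alpha (.05), and target power (.80) are illustrative choices, not values from the study material.

```python
from statsmodels.stats.power import TTestIndPower

min_effect = 0.30          # step 1: smallest standardized effect size to detect reliably
alpha, power = 0.05, 0.80  # step 2: accepted Type I risk and target power (Type II risk = .20)

# Step 3: sample size per group for a two-sided independent-samples t-test
n_per_group = TTestIndPower().solve_power(effect_size=min_effect, alpha=alpha, power=power)
print(f"required sample size per group: {n_per_group:.0f}")
```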
Use of control variables to reduce statistical noise
Control variables act to:
- absorb outcome variance that is unrelated to the program, which reduces statistical noise and increases the precision and statistical power of the impact estimate (see the sketch below)
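A minimal sketch of covariate adjustment with a control variable, assuming a baseline pretest is available; the data frame, variable names, and numbers are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical evaluation data: treatment indicator, baseline pretest, and outcome
df = pd.DataFrame({
    "treat":   [1, 1, 1, 1, 0, 0, 0, 0],
    "pretest": [60, 72, 55, 68, 63, 70, 58, 66],
    "outcome": [70, 81, 66, 78, 66, 74, 60, 69],
})

unadjusted = smf.ols("outcome ~ treat", data=df).fit()
adjusted = smf.ols("outcome ~ treat + pretest", data=df).fit()  # pretest as a control variable

# With the control variable absorbing unrelated outcome variance, the standard error of the
# treatment coefficient should shrink, i.e., the effect estimate becomes more precise
print(unadjusted.bse["treat"], adjusted.bse["treat"])
```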
Practical significance of program effects:
Interpretation of statistical effects on outcome measures whose values are not inherently meaningful requires comparison with some external referent that puts the effect size in a practical context.
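One common external referent, sketched below, is to translate a standardized effect size into a percentile shift on a roughly normal outcome distribution; the effect size of 0.30 is an illustrative value, not one from the study material.

```python
from scipy.stats import norm

effect_size = 0.30  # hypothetical standardized mean difference from an impact evaluation

# An average control-group member sits at the 50th percentile of the control distribution;
# with the program, that person would be expected to move to roughly this percentile:
print(f"50th percentile -> {norm.cdf(effect_size) * 100:.0f}th percentile")
```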
If subgroups are defined at the start of the impact analysis, on the basis of characteristics measured before the program, there will be no selection bias.
Selection bias comes into play with emergent subgroups, that is, subgroups defined by characteristics or experiences that appear only after the program has begun.
What happens if you do not conduct a moderator analysis?
- only the average effect across the full sample is estimated, which can mask differential effects: a program that works well for some subgroups and poorly for others may appear uniformly weak or uniformly effective
Important role of moderator analysis
- to test evaluators' expectations about which differential effects should appear (see the sketch below)
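A minimal sketch of a moderator analysis, assuming a hypothetical binary subgroup variable defined before the program; the treatment-by-subgroup interaction term tests whether the program effect differs across subgroups. Data and variable names are illustrative.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: treatment status, an a priori subgroup indicator, and the outcome
df = pd.DataFrame({
    "treat":    [1, 1, 1, 1, 0, 0, 0, 0],
    "subgroup": [1, 0, 1, 0, 1, 0, 1, 0],
    "outcome":  [80, 70, 82, 69, 66, 65, 68, 64],
})

# "treat * subgroup" expands to treat + subgroup + treat:subgroup; a nonzero interaction
# coefficient indicates a differential (moderated) program effect
model = smf.ols("outcome ~ treat * subgroup", data=df).fit()
print(model.params["treat:subgroup"])
```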
Proximal outcomes in program impact assessment are all expected to function as mediators: the more immediate changes through which the program produces its effects on distal outcomes.
Mediator variables are interesting for two reasons:
- they help explain how the program produces its effects, by identifying the intermediate (proximal) changes through which distal outcomes are reached
- testing for mediator relationships hypothesized in the program logic is another way of probing the evaluation findings to determine whether they are fully consistent with what is expected if the program is in fact having the intended effects (see the sketch below)
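A minimal sketch of a simple mediation probe using the product-of-coefficients idea, assuming a hypothetical proximal outcome (the mediator) and a distal outcome; this illustrates the logic rather than reproducing the study material's procedure.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: treatment, a proximal outcome (mediator), and a distal outcome
df = pd.DataFrame({
    "treat":    [1, 1, 1, 1, 0, 0, 0, 0],
    "mediator": [8, 9, 7, 8, 5, 6, 4, 5],
    "distal":   [75, 80, 72, 76, 64, 68, 60, 63],
})

# Path a: does the program move the mediator?
path_a = smf.ols("mediator ~ treat", data=df).fit().params["treat"]
# Path b: does the mediator predict the distal outcome, holding treatment constant?
path_b = smf.ols("distal ~ treat + mediator", data=df).fit().params["mediator"]

print("indirect (mediated) effect ~", path_a * path_b)
```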
How are meta-analyses constructed?
1) impact evaluations of similar programs are systematically searched for and collected according to explicit eligibility criteria
2) the program effects on selected outcomes are then encoded as effect sizes using an effect size statistic; other descriptive information about each study is also recorded
3) all of this is put into a database, and various statistical analyses are conducted on the variation in effects and the factors associated with that variation (see the sketch below)
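A minimal sketch of the pooling step, assuming each included study contributes an effect size and its sampling variance; the numbers are hypothetical, and the fixed-effect, inverse-variance weighted mean shown here is only one of the analyses such a database supports.

```python
import numpy as np

# Hypothetical database: effect size and sampling variance from each included study
effect_sizes = np.array([0.20, 0.35, 0.10, 0.45])
variances = np.array([0.02, 0.05, 0.01, 0.08])

# Fixed-effect (inverse-variance weighted) mean effect size and its standard error
weights = 1.0 / variances
pooled_effect = np.sum(weights * effect_sizes) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled effect = {pooled_effect:.2f} (SE = {pooled_se:.2f})")
```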