Detecting, Interpreting, and Analyzing Program Effects

17 important questions on Detecting, Interpreting, and Analyzing Program Effects

Effect size statistic

A statistical formulation of an estimate of program effect that expresses its magnitude in a standardized form comparable across outcome measures that use different units or scales. Two of the most commonly used effect size statistics are the standardized mean difference and the odds ratio.
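As an illustration, a minimal sketch of how these two statistics can be computed (the function names and the pooled-SD formulation are my own, not from the summary):

```python
import numpy as np

def standardized_mean_difference(treatment, control):
    """Mean difference divided by the pooled standard deviation (Cohen's d)."""
    t, c = np.asarray(treatment, float), np.asarray(control, float)
    pooled_sd = np.sqrt(((len(t) - 1) * t.var(ddof=1) + (len(c) - 1) * c.var(ddof=1))
                        / (len(t) + len(c) - 2))
    return (t.mean() - c.mean()) / pooled_sd

def odds_ratio(successes_t, n_t, successes_c, n_c):
    """Odds of a positive outcome in the treatment group relative to the control group."""
    return (successes_t / (n_t - successes_t)) / (successes_c / (n_c - successes_c))
```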

End product of an impact assessment is

a set of estimates of the program's effects

Ability of an impact assessment to detect and describe program effects depends in large part on

the magnitude of the effects the program produces

Ways to characterize magnitude of the program effect

- the numerical difference between the means of the two outcome values (the most direct way, but very specific to the particular measurement instrument used; see the sketch after this list)
- a percentage increase or decrease (only meaningful for measures that have a true zero, not meaningful for measures with arbitrarily scaled units)
- an effect size statistic (sketched under the first question above)
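A minimal sketch of the first two ways, on invented numbers:

```python
# Hypothetical mean outcome scores for the two groups (illustrative only).
mean_treatment, mean_control = 78.0, 72.0

raw_difference = mean_treatment - mean_control        # in the instrument's own units
percent_change = 100 * raw_difference / mean_control  # only meaningful with a true zero
print(f"difference: {raw_difference:.1f} points; change: {percent_change:.1f}%")
```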

When the effect of a program on an outcome is zero, we should not expect an effect size of exactly zero in an impact evaluation. Why?

Due to statistical noise.
We do not want so much noise that we are likely to mistake a chance difference for a program effect --> assess the chance that an observed effect is actually statistical noise through statistical significance testing.

How do you assess signal-to-noise-ratio?

- estimate both the program effect (the signal) and the background statistical noise

If the difference between the mean outcomes for an intervention group and a control group is statistically significant, the significance test is telling us that the signal-to-noise ratio, under its assumptions, is such that statistical noise is unlikely to have produced an effect as large as the one observed in the data when the real effect is zero.
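A minimal sketch of such a test on simulated data, assuming scipy is available (the group sizes and score distributions are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical outcome scores; in a real evaluation these come from the study data.
intervention = rng.normal(loc=75, scale=10, size=120)
control = rng.normal(loc=72, scale=10, size=120)

t_stat, p_value = stats.ttest_ind(intervention, control)
# A small p-value means noise alone is unlikely to produce a difference this large.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```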

Statistical significance does/does not mean practical significance or importance.

does not
It is simply a result that is unlikely to be due to chance, and is thus a minimum requirement for a meaningful result.

Statistical significance testing is basically an all-or-nothing test: if the observed difference is significant, it is large enough to be discussed as a program effect; if it is not significant, that claim cannot be made.

How can you control for Type I error?

The maximum acceptable chance of a Type I error (finding a significant effect when the real effect is zero) is set by the researcher when an alpha level for statistical significance is selected for the statistical test to be applied.
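A small simulation (my own illustration, not from the summary) of what selecting alpha controls: when the real effect is zero, significant results still occur, at a rate of about alpha:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, trials = 0.05, 2000

# Simulate many studies in which the true program effect is zero.
false_positives = sum(
    stats.ttest_ind(rng.normal(size=50), rng.normal(size=50)).pvalue < alpha
    for _ in range(trials)
)
print(f"Type I error rate ~ {false_positives / trials:.3f}")  # close to alpha = 0.05
```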

How can you control for Type II error?

Requires configuring the research design so that it has adequate statistical power:

1) decide the smallest effect size the design should reliably detect
2) decide how much risk of Type II error (missing a real effect) to accept
3) design the impact evaluation with a sample size and type of statistical test that will yield the desired level of statistical power (see the sketch after this list)
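These three steps map directly onto a standard power calculation; a minimal sketch, assuming statsmodels is available:

```python
from statsmodels.stats.power import TTestIndPower

smallest_effect = 0.3  # step 1: smallest standardized effect size worth detecting
alpha = 0.05           # acceptable Type I error risk
power = 0.80           # step 2: accept a 20% Type II error risk (power = 1 - beta)

# Step 3: solve for the sample size that yields the desired power.
n_per_group = TTestIndPower().solve_power(effect_size=smallest_effect,
                                          alpha=alpha, power=power)
print(f"required sample size per group: {n_per_group:.0f}")  # roughly 175
```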

Use of control variables to reduce statistical noise

Control variables act to:

magnify the statistical effect size and thereby allow the same power to be attained with a smaller sample
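A sketch of the mechanism on simulated data (variable names and coefficients are invented): adding a covariate that explains outcome variation shrinks the standard error of the estimated program effect, which is equivalent to gaining power.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
treat = rng.integers(0, 2, n)    # 0 = control, 1 = intervention
pretest = rng.normal(50, 10, n)  # control variable (e.g., a baseline score)
outcome = 2.0 * treat + 0.8 * pretest + rng.normal(0, 5, n)

# Without the control variable, pretest variation stays in the error term.
m1 = sm.OLS(outcome, sm.add_constant(np.column_stack([treat]))).fit()
# With the control variable, that variation is removed from the noise.
m2 = sm.OLS(outcome, sm.add_constant(np.column_stack([treat, pretest]))).fit()

print("SE of program effect without covariate:", round(m1.bse[1], 2))
print("SE of program effect with covariate:   ", round(m2.bse[1], 2))
```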

Practical significance of program effects:

For stakeholders and evaluators to interpret and appraise program effects, the effects must be translated into terms that are relevant to the social conditions the program aims to improve.

Interpretation of statistical effects on outcome measures whose values are not inherently meaningful requires comparison with some external referent that puts the effect size in a practical context.
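One widely used external referent of this kind (a sketch, not taken from the summary) converts a standardized mean difference into a percentile: the share of the control distribution that the average program participant exceeds, assuming roughly normal outcomes:

```python
from scipy.stats import norm

d = 0.5  # hypothetical standardized mean difference
print(f"the average participant outscores {norm.cdf(d):.0%} of the control group")  # ~69%
```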

If subgroups are defined at the start of the impact analysis, will there be selection bias or no selection bias?

no selection bias

Selection bias comes into play in emerging subgroups (those defined only after the data are examined).

What happens if you do not conduct a moderator analysis?

An overall positive effect could mask the fact that the program was ineffective with a critical subgroup, or the overall program effect could be negligible, suggesting that the program was ineffective even though it worked for some subgroups.

Important role of moderator analysis

avoid premature conclusions about program effectiveness based only on overall mean program effects

test evaluators' expectations about what differential effects should appear
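A minimal sketch of a moderator analysis as an interaction term in a regression, on invented data where the program only works for one subgroup:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),     # program exposure
    "subgroup": rng.integers(0, 2, n),  # hypothetical moderator (e.g., age band)
})
# Invented data: the effect exists only where subgroup == 1.
df["outcome"] = 3.0 * df.treat * df.subgroup + rng.normal(0, 5, n)

# A significant treat:subgroup interaction flags a differential program effect.
model = smf.ols("outcome ~ treat * subgroup", data=df).fit()
print(model.summary().tables[1])
```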

Proximal outcomes in program impact assessment are all _____

mediator variables

Mediator variables are interesting for 2 reasons

- they help us better understand what change processes occur among targets as a result of exposure to the program --> allows informed consideration of ways to enhance that process and improve the program to attain better effects
- testing for the mediator relationships hypothesized in the program logic is another way of probing the evaluation findings to determine whether they are fully consistent with what is expected if the program is in fact having its intended effects (see the sketch after this list)
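A minimal sketch of the second point, probing a hypothesized mediator with the product-of-coefficients (a x b) approach on invented data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 400
treat = rng.integers(0, 2, n)
mediator = 1.5 * treat + rng.normal(0, 1, n)    # hypothetical proximal outcome
outcome = 2.0 * mediator + rng.normal(0, 2, n)  # hypothetical distal outcome
df = pd.DataFrame({"treat": treat, "mediator": mediator, "outcome": outcome})

a = smf.ols("mediator ~ treat", data=df).fit().params["treat"]
b = smf.ols("outcome ~ mediator + treat", data=df).fit().params["mediator"]
print(f"indirect (mediated) effect ~ a * b = {a * b:.2f}")  # close to 1.5 * 2.0
```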

How are meta-analyses constructed?

1) reports of all available impact assessment studies of a particular intervention or type of program are first collected
2) the program effects on selected outcomes are then encoded as effect sizes using an effect size statistic; other descriptive information is also recorded
3) all of this is put into a database, and various statistical analyses are conducted on the variation in effects and the factors associated with that variation (step 3 is sketched after this list)
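Step 3 can be illustrated with a minimal fixed-effect pooling of the encoded effect sizes (the numbers are invented):

```python
import numpy as np

# Hypothetical standardized mean differences and their variances, one per study.
effects = np.array([0.30, 0.12, 0.45, 0.25])
variances = np.array([0.02, 0.05, 0.04, 0.03])

weights = 1 / variances  # precision (inverse-variance) weights
pooled = np.average(effects, weights=weights)
se = np.sqrt(1 / weights.sum())
print(f"pooled effect = {pooled:.2f} (SE = {se:.2f})")
```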
