More on experiments, confounding and obscuring variables
36 important questions on "More on experiments, confounding and obscuring variables"
What do we call a design mistake where the researcher does not construct the independent variable carefully enough, so that another variable happens to vary systematically along with the intended variable and therefore offers an alternative explanation for the results?
What do we call errors in the results that occur because the different groups of participants differ systematically from each other?
What do we call the phenomenon where behavior changes naturally and spontaneously over time, without any outside influence?
What do we call the spontaneous improvement of a patient with a disorder, without a clear cause for that improvement?
How can you tackle the maturation threat?
What do we call external factors that influence the dependent measure and affect almost all subjects in a study at the same time as the treatment, which could cause confounding? Think of winter depression or spring happiness in a study on a depression treatment.
How can you tackle history effects, and why does that work?
What do we call the phenomenon where the group average is lower on the posttest than on the pretest because extreme scorers on the pretest simply score less extremely on the posttest, even without any treatment in between?
Why, with regression to the mean, do participants who scored extremely on the pretest not score as extremely on the posttest?
How can you detect regression to the mean in a line graph of two groups?
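A minimal simulation (a Python sketch, not taken from the study material) makes regression to the mean concrete: when each test score is true ability plus test-day luck, the most extreme pretest scorers drift back toward the group mean on the posttest even though nothing happened in between.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Each score = stable true ability + test-day luck; there is no treatment.
true_ability = rng.normal(100, 10, n)
pretest = true_ability + rng.normal(0, 10, n)
posttest = true_ability + rng.normal(0, 10, n)

# Select the most extreme pretest scorers (top 5%).
extreme = pretest >= np.percentile(pretest, 95)

print(f"extreme group, pretest mean:  {pretest[extreme].mean():.1f}")   # ~129
print(f"extreme group, posttest mean: {posttest[extreme].mean():.1f}")  # ~115
print(f"overall posttest mean:        {posttest.mean():.1f}")           # ~100
# The extreme group's posttest mean falls back toward the overall mean:
# part of its extreme pretest score was just lucky noise.
```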
What do we call the phenomenon that can change the mean in a pretest-posttest design because a specific type of participant fails to show up for the posttest? Because of this, the mean can drop if the highest-scoring participants do not show up for the posttest.
How can you tackle the attrition threat?
What do we call the phenomenon where a participant's result changes as a result of taking the test more than once?
What's the difference between testing threat and practice effects?
How do you tackle the testing threat?
- use a posttest-only design
- use alternative forms of the test for the pretest and posttest
- use a comparison group
What do we call the phenomenon where a measuring instrument changes over time between pretest and posttest, such as an observer's judgement of behavior shifting?
How do you tackle instrumentation threat?
- use a posttest-only design
- use clear coding manuals for the observers.
What do we call an external factor that affects only one level of the independent variable?
What do we call the phenomenon where people consistently fail to show up for the posttest in only one level of the independent variable, rather than in both groups?
What do we call the phenomenon where the observer's expectations influence their interpretation of the results?
What do we call the phenomenon where participants guess what the study is about and change their behavior in the expected direction?
What do we call the phenomenon where people actually improve after receiving what they believed was a genuine treatment when in fact it was not?
What do we call a study design with a placebo group and a treatment group in which neither the observers nor the subjects know who is in which group?
What do we call it when the independent variable does not make a difference in the dependent variable?
What are possible reasons for a null effect?
- weak manipulations
- insensitive measures
- ceiling and floor effects
Which reason for a null effect is described here: the differences between the levels of the independent variable are not big enough to produce a difference in the dependent variable, or the levels of the independent variable are not extreme enough to influence the dependent variable at all?
Which reason for a null effect is described here: the instruments cannot measure precisely enough to detect the small differences in the results of the experiment, making it look as if there is no difference when there actually is one?
What do we call the phenomenon that occurs when all the scores on a test are squeezed together at the maximum end of the possible scores?
What do we call the phenomenon that occurs when all the scores on a test are clustered at the low end of the possible scores?
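A short sketch (Python; all numbers are invented for illustration) shows how a ceiling effect can produce a null effect: if the test only runs from 0 to 10, two groups whose true performance differs end up with nearly identical observed means because high scores pile up at the maximum.

```python
import numpy as np

rng = np.random.default_rng(1)

# True performance differs between groups, but the test only runs
# from 0 to 10, so high scores pile up at the ceiling.
control = rng.normal(9, 2, 1000)
treatment = rng.normal(11, 2, 1000)  # genuinely better, on average

obs_control = np.clip(control, 0, 10)
obs_treatment = np.clip(treatment, 0, 10)

print(f"true difference:     {treatment.mean() - control.mean():.2f}")
print(f"observed difference: {obs_treatment.mean() - obs_control.mean():.2f}")
# The observed difference shrinks toward zero: a ceiling effect can
# hide a real effect of the manipulation. A floor effect is the mirror
# image, with scores clustered at the minimum.
```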
What do we call too much unsystematic variability within each group of an experiment?
What do we call a human or instrument factor that can inflate or deflate a subject's score on a measurement?
What influences variability within groups?
- measurement errors
- individual differences
- situation noise
What do we call external factors that cause variability within groups?
How can situation noise be tackled?
What do we call the probability that a study finds an effect, given that the effect really exists in the population?
How can you increase the power of an experiment? (Each lever is illustrated in the sketch below.)
- strengthen the manipulation of the independent variable
- use bigger samples, which decreases the standard error
- decrease variability within groups, which decreases the standard deviation
- use a less strict significance level (decrease the level of confidence)
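A Monte Carlo sketch (Python with NumPy and SciPy; the effect size, sample size, and alpha are illustrative assumptions, not values from the study material) estimates power by simulating many experiments and counting how often a two-sample t-test detects a real effect. Changing each parameter reproduces the levers listed above.

```python
import numpy as np
from scipy import stats

def estimated_power(effect=0.5, sd=1.0, n=30, alpha=0.05, reps=5000, seed=0):
    """Fraction of simulated experiments in which a two-sample t-test
    detects a true mean difference of `effect` (i.e., statistical power)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        control = rng.normal(0, sd, n)
        treatment = rng.normal(effect, sd, n)
        _, p = stats.ttest_ind(treatment, control)
        if p < alpha:
            hits += 1
    return hits / reps

print(f"baseline:                 {estimated_power():.2f}")
print(f"stronger manipulation:    {estimated_power(effect=1.0):.2f}")
print(f"bigger sample:            {estimated_power(n=120):.2f}")
print(f"less within-group noise:  {estimated_power(sd=0.5):.2f}")
print(f"stricter alpha (less power): {estimated_power(alpha=0.01):.2f}")
```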