Introduction to Data Analysis - pages 280-287


Cronbach’s alpha

Cronbach's alpha is a measure of internal consistency, that is, how closely related a set of items are as a group. It is considered to be a measure of scale reliability. A “high” value for alpha does not imply that the measure is unidimensional.
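As a minimal sketch of how the statistic works (the data and function name here are illustrative, not from the source), Cronbach's alpha can be computed from the item variances and the variance of the total scores: alpha = k/(k-1) * (1 - sum of item variances / variance of totals), where k is the number of items.

```python
from statistics import pvariance

def cronbach_alpha(rows):
    """Cronbach's alpha for a list of respondent rows (one score per item)."""
    k = len(rows[0])                 # number of items on the scale
    items = list(zip(*rows))         # transpose: one tuple of scores per item
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in rows])
    return k / (k - 1) * (1 - item_var / total_var)

# Four respondents answering three perfectly consistent items:
scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
print(round(cronbach_alpha(scores), 3))  # → 1.0 (maximal internal consistency)
```

With perfectly consistent items alpha reaches 1.0; in practice, values around 0.7-0.9 are usually read as acceptable to good reliability.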

In essence, both translation validity types attempt to assess the degree to which you have accurately translated your construct into its operationalization.


1. Face validity
A check that, on its face, the operationalization seems like a good translation of the construct.
2. Content validity
A check of the operationalization against the relevant content domain for the construct.

How is criterion-related validity different from translation validity?


In translation validity, the question is how well you translated the idea of the construct into its manifestation; no other measures come into play. In criterion-related validity, you usually make a prediction about how the operationalization will perform on some other measure, based on your theory of the construct. The difference among the criterion-related validity types lies in the criteria they use as the standard for judgment.

Convergent and Discriminant validity

Convergent and discriminant validity are both considered subcategories of construct validity. The important thing to recognize is that they work together; if you can demonstrate that you have evidence for both, you have by definition demonstrated that you have evidence for construct validity. However, neither one alone is sufficient for establishing construct validity.

Threats to construct validity

Before we launch into a discussion of the most common threats to construct validity, let's recall what a threat to validity is.

In a research study you are likely to reach a conclusion that your program was a good operationalization of what you wanted and that your measures reflected what you wanted them to reflect. Would you be correct? How will you be criticized if you make these types of claims? How might you strengthen your claims? The kinds of questions and issues your critics will raise are what I mean by threats to construct validity.

Threats to construct validity: Mono-Method Bias

Mono-method bias refers to your measures or observations, not to your programs or causes. Otherwise, it's essentially the same issue as mono-operation bias. With only a single version of a self-esteem measure, you can't provide much evidence that you're really measuring self-esteem. Your critics will suggest that you aren't measuring self-esteem -- that you're only measuring part of it, for instance. Solution: try to implement multiple measures of key constructs and try to demonstrate (perhaps through a pilot or side study) that the measures you use behave as you theoretically expect them to.

Threats to construct validity: Interaction of Testing and Treatment

Does testing or measurement itself make the groups more sensitive or receptive to the treatment? If it does, then the testing is in effect a part of the treatment, it's inseparable from the effect of the treatment. This is a labeling issue (and, hence, a concern of construct validity) because you want to use the label "program" to refer to the program alone, but in fact it includes the testing.

Threats to construct validity: Restricted Generalizability Across Constructs

This is what I like to refer to as the "unintended consequences" threat to construct validity. You do a study and conclude that Treatment X is effective. In fact, Treatment X does cause a reduction in symptoms, but what you failed to anticipate was the drastic negative consequences of the side effects of the treatment. When you say that Treatment X is effective, you have defined "effective" as only the directly targeted symptom. This threat reminds us that we have to be careful about whether our observed effects (Treatment X is effective) would generalize to other potential outcomes.

The "Social" Threats to Construct Validity

I've set aside the other major threats to construct validity because they all stem from the social and human nature of the research endeavor.

1.1 Hypothesis Guessing
1.2 Evaluation Apprehension
1.3 Researcher Expectancies

How to improve statistical power?

The rule of thumb in social research is that you want statistical power to be at least 0.8. Several factors interact to affect power. Here are some general guidelines you can follow in designing your study that will help improve statistical power and thus the conclusion validity of your study:
• Increase the sample size
• Increase the level of significance
• Increase the effect size
The ratio of the signal to the noise in your research is often called the effect size.
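The guidelines above can be sketched numerically. This is a simplified normal-approximation power calculation for a two-sided, two-sample comparison (the function name and the sample numbers are illustrative assumptions, not from the source); it shows that raising the sample size, the significance level, or the effect size each raises power.

```python
import math
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test:
    power ~ Phi(d * sqrt(n/2) - z_crit), where d is the effect size."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    return z.cdf(d * math.sqrt(n_per_group / 2) - z_crit)

# A medium effect (d = 0.5) with 64 people per group lands near the 0.8 target:
print(round(approx_power(0.5, 64), 2))

# Each guideline raises power relative to that baseline:
print(approx_power(0.5, 100) > approx_power(0.5, 64))              # bigger sample
print(approx_power(0.5, 64, alpha=0.10) > approx_power(0.5, 64))   # higher alpha
print(approx_power(0.8, 64) > approx_power(0.5, 64))               # bigger effect
```

Note the trade-off in the second guideline: a more lenient significance level buys power at the cost of a higher Type I error rate.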
