Validity of Data: Cause-and-Effect Relationship Definition


The relationships described so far are rather simple binary relationships. Sometimes we want to know whether different amounts of the program lead to different amounts of the outcome -- a continuous, dose-response relationship. Even when a relationship is observed, it's possible that there is some other variable or factor that is causing the outcome. This is sometimes referred to as the "third variable" or "missing variable" problem, and it's at the heart of the issue of internal validity.

What are some of the possible plausible alternative explanations? Just go look at the threats to internal validity (see single group threats, multiple group threats, or social threats) -- each one describes a type of alternative explanation. In order for you to argue that you have demonstrated internal validity -- that you have shown there's a causal relationship -- you have to "rule out" the plausible alternative explanations.

How do you do that?

One of the major ways is with your research design. Let's consider a simple single group threat to internal validity, a history threat. Let's assume you measure your program group before they start the program to establish a baseline, you give them the program, and then you measure their performance afterwards in a posttest.

You see a marked improvement in their performance, which you would like to infer was caused by your program. One of the plausible alternative explanations is that you have a history threat -- it's not your program that caused the gain but some other specific historical event.

For instance, it's not your anti-smoking campaign that caused the reduction in smoking but rather the Surgeon General's latest report that happened to be issued between the time you gave your pretest and posttest. How do you rule this out with your research design?

One of the simplest ways would be to incorporate the use of a control group -- a group that is comparable to your program group with the only difference being that they didn't receive the program.

Establishing the internal and external validity of experimental studies.

But they did experience the Surgeon General's latest report. If you find that they didn't show a reduction in smoking even though they did experience the same Surgeon General report, you have effectively "ruled out" the Surgeon General's report as a plausible alternative explanation for why you observed the smoking reduction. In most applied social research that involves evaluating programs, temporal precedence is not a difficult criterion to meet because you administer the program before you measure effects.
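The control-group logic can be sketched with a few lines of code. All of the numbers below are invented for illustration: both groups experienced the Surgeon General's report between the two measurements, but only the program group shows a large pretest-to-posttest drop, so the report alone cannot explain that drop.

```python
# Hypothetical smoking rates (cigarettes per day) -- invented for illustration.
# Both groups experienced the Surgeon General's report between measurements;
# only the program group received the anti-smoking program.
program_group = {"pretest": 20.0, "posttest": 12.0}
control_group = {"pretest": 20.0, "posttest": 19.5}

def change(group):
    """Pretest-to-posttest change; a negative value means smoking declined."""
    return group["posttest"] - group["pretest"]

program_change = change(program_group)   # drop in the program group
history_change = change(control_group)   # drop attributable to history alone

# The change attributable to the program, over and above the history threat:
net_effect = program_change - history_change

print(f"Program group change: {program_change}")
print(f"Control group change (history alone): {history_change}")
print(f"Net program effect after ruling out history: {net_effect}")
```

If the control group had shown the same drop as the program group, the net effect would be near zero and the history threat would remain a plausible explanation.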

And, establishing covariation is relatively simple because you have some control over the program and can set things up so that you have some people who get it and some who don't (the "if X" and "if not X" conditions).

Typically the most difficult criterion to meet is the third -- ruling out alternative explanations for the observed effect. The first criterion, that the cause must occur before the effect, is known as temporal precedence. In the example above, the students had to become all-star athletes before their popularity and self-confidence improved.

For example, let's say that you were conducting an experiment to see if making a loud noise would cause newborns to cry. In this example, the loud noise would have to occur before the newborns cried. In both examples, the causes occurred before the effects, so the first criterion was met. Second, whenever the cause happens, the effect must also occur. Consequently, if the cause does not happen, then the effect must not take place. The strength of the cause also determines the strength of the effect.

Think about the example with the all-star athlete. The research study found that popularity and self-confidence did not increase for the students who did not become all-star athletes. Let's assume we also found that the better the students' rankings in sports -- that is, the stronger they became in athletics compared to their peers -- the more popular and confident they became. For this example, criterion two is met.

Let's say that for our newborn experiment we found that as soon as the loud noise occurred, the newborn cried, and that the newborns did not cry in the absence of the sound.

We also found that the louder the sound, the louder the newborn cried. In this example, we see that the strength of the loud sound also determines how hard the newborn cries.

Again, criterion two has been met for this example.