Exam 2 Flashcards

1
Q

Internal Validity

A

the extent to which the independent variable, rather than some other factor, is what caused the change in the dependent variable

2
Q

Selection effects

A

groups differ systematically before the manipulation (e.g., through self-selection or intentional, nonrandom grouping), so the groups were not equivalent to begin with

3
Q

Order effects

A

exposure to one level of the independent variable affects participants' behavior in subsequent conditions

4
Q

Design confounds

A

another variable systematically varies along with the IV and is actually what causes the change in the dependent variable

5
Q

Maturation

A

participants changed or matured over time
potential solution: a comparison group

6
Q

history threat

A

Maybe something happened in culture/society/environment to cause the change.
potential solution: a comparison group measured over the same amount of time

7
Q

regression threat

A

an outlier: performance at one measurement was unusually different from the norm, and with time the behavior 'regresses' to the mean
a comparison group can show how scores decrease or increase relative to the treatment group
if both groups start in the same place, it is not regression.

8
Q

attrition threat

A

participants left the study; a problem when the dropouts' scores differed systematically from those who remained (e.g., they were extreme scorers)
potential solution: re-analyze the pretest without the participants who dropped out

9
Q

testing threats

A

A type of order effect where there is a change in participants as a result of experiencing the DV (the test) more than once.

10
Q

instrumentation threat

A

the measurement instrument itself changed over time
potential solution: use a posttest-only design with a comparison group

11
Q

Preventing testing threat

A

One way to prevent testing threats is not to use a pretest (posttest-only design). Another way is to use alternative forms of the test at pretest and posttest.
Having a comparison group is also helpful. You can rule out testing threats if both groups take the pretest and the posttest, but the treatment group exhibits a larger change than the comparison group.

12
Q

observer bias

A

bias caused by researchers' expectations influencing how they interpret the results (e.g., expecting participants to improve and rating them as improving)
potential solutions: blind rater codes behavior (bring in someone to code behavior who does not know the purpose of the study); masked design (participants know which condition they are in, but the researcher does not)

13
Q

demand characteristics

A

participants figure out what the research is about and change their behavior accordingly
potential solution: a double-blind study

14
Q

placebo effects

A

Participants expect a change and manifest it.
solution: a special comparison group that receives the placebo therapy or placebo medication; neither the people working with the participants nor the participants know who is in which group (double-blind placebo control study)

15
Q

why might a study result in null effects?

A

It might be that the IV really didn't affect the DV. Other times a null result occurs because the study wasn't designed or conducted properly: the IV actually did affect the DV, but some obscuring factor got in the way of the researchers detecting the difference.
There are two types of obscuring factors: (1) there might not have been enough difference between groups, or (2) there might have been too much variability within groups.

16
Q

Null effects: not enough between group differences

A

the difference between group means may be too small to detect; common causes are weak manipulations, insensitive measures, and ceiling and floor effects

17
Q

how to solve not enough between group differences

A

Weak manipulations: Use a manipulation check; If needed, rerun the study with a stronger manipulation.

Insensitive measures: Use a refined scale.

Ceiling and floor effects: Use DV measures that allow variability; Use a manipulation check for IV.

18
Q

ceiling effects

A

everybody gets the highest score; questions are too easy

19
Q

floor effects

A

everybody gets the lowest score; questions are too hard

20
Q

Manipulation check

A

a second DV included in a study to make sure the IV manipulation worked

21
Q

Why could there be too much within groups variability?

A

measurement error, situation noise, and individual differences; having too much noise can get in the way of detecting between-group differences.

22
Q

how to solve for individual differences

A
  1. Change the design: use a within-groups design instead of an independent-groups design. Each person then receives both levels of the IV, so individual differences are controlled for, and it's easier to see the effect of the IV when individual differences aren't obscuring between-groups differences. You can also use a matched-groups design: pairs of participants are matched on an individual-differences variable, which likewise makes the effect of the IV easier to see.
  2. Add more participants: if it's not feasible to change to a within-groups or matched-groups design, try adding more participants. This will lessen the effect that any one participant has on the group average.
23
Q

Power

A

the likelihood that a study will yield a statistically significant result when the IV really has an effect.
Studies with a lot of power are more likely to detect true differences.
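One way to see this definition in action (not part of the deck) is a simulation sketch: generate many two-group studies with a known true effect and count how often the test comes out significant. All names and numbers here are hypothetical, and a z-test with known SD stands in for a t-test to keep the code standard-library-only.

```python
import random
import statistics

random.seed(1)

def simulated_power(n, effect=0.5, trials=2000, alpha=0.05):
    """Fraction of simulated two-group studies (true effect = `effect` SDs,
    n participants per group) whose two-tailed z-test is significant."""
    crit = statistics.NormalDist().inv_cdf(1 - alpha / 2)  # e.g., 1.96 at alpha=.05
    se = (2 / n) ** 0.5  # standard error of the mean difference (both SDs = 1)
    hits = 0
    for _ in range(trials):
        a = statistics.fmean(random.gauss(0, 1) for _ in range(n))
        b = statistics.fmean(random.gauss(effect, 1) for _ in range(n))
        hits += abs(b - a) / se > crit  # True counts as 1
    return hits / trials

low_power = simulated_power(n=10)   # small samples: the true effect is often missed
high_power = simulated_power(n=50)  # larger samples: the true effect is usually found
```

With a medium effect (0.5 SD), the larger samples detect the effect most of the time while the small samples usually miss it, even though the true effect is identical in both runs.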

24
Q

There really is no difference? Not enough variability between levels?

A

your IV simply does not have a causal effect on your DV

25
Q

How to solve for measurement error

A

Measurement error is any factor that can inflate or deflate a person's true score on the DV. The goal is to keep measurement error as small as possible.

Use reliable, precise measurements: measurement error is reduced when researchers use measurement tools that are reliable (internal, interrater, and test-retest) and valid (i.e., have good construct validity).

Measure more instances: if a researcher can't find a measurement tool that's reliable and valid, the best alternative is to measure a larger sample of participants; random errors will tend to cancel each other out with more people in the sample.
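A small sketch of the "measure more instances" idea, with hypothetical numbers: random errors have a mean of zero, so they largely cancel out, and the mean of many noisy measurements lands close to the true score.

```python
import random
import statistics

random.seed(0)
TRUE_SCORE = 100  # hypothetical true score on the DV

def observed_mean(n, error_sd=15):
    # Each observation = true score + random measurement error.
    return statistics.fmean(TRUE_SCORE + random.gauss(0, error_sd) for _ in range(n))

few = observed_mean(5)      # noisy estimate of the true score
many = observed_mean(2000)  # random errors largely cancel out
```

With 2,000 measurements the observed mean sits within a couple of points of 100, while a handful of measurements can miss by much more.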

26
Q

how to solve for situation noise

A

external distractions of any kind that obscure between-groups differences and cause variability within groups.
Example: This includes smells, sights, and sounds that might distract participants and increase within-groups variability; it adds unsystematic variability to each group situation by controlling the surroundings of an experiment that might affect the DV
solve by testing participants in a quiet room with no outside odors, and so on

27
Q

One way Anova

A

used to determine whether there are any statistically significant differences between the means of three or more independent (unrelated) groups
compares the means between the groups you are interested in and determines whether any of those means are statistically significantly different from each other.
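A minimal sketch of the computation behind a one-way ANOVA, using made-up scores for three independent groups (if SciPy is available, `scipy.stats.f_oneway` returns the same F):

```python
import statistics

def one_way_anova_f(groups):
    """F = mean square between groups / mean square within groups."""
    grand_mean = statistics.fmean(x for g in groups for x in g)
    k = len(groups)                        # number of groups
    n_total = sum(len(g) for g in groups)  # total number of scores
    ss_between = sum(len(g) * (statistics.fmean(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - statistics.fmean(g)) ** 2
                    for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within

# Hypothetical scores for three independent groups:
f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [6, 7, 8]])  # F = 21.0
```

Here the group means (2, 3, 7) differ far more than the scores within each group do, so F is large.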

28
Q

t-test versus one way anova

A

If there are two groups, use a t-test. If there are three or more groups, use a one-way ANOVA.

29
Q

Within Samples Variance in one way ANOVA

A

the variance within one condition

30
Q

Between samples variance

A

variance between ALL conditions

31
Q

three main assumptions that must be met in order for your ANOVA test to be valid and meaningful

A

The dependent variable is normally distributed in each group that is being compared in the one-way ANOVA
There is homogeneity of variances
There is independence of observations

32
Q

The dependent variable is normally distributed in each group that is being compared in the one-way ANOVA

A

So, for example, if we were comparing three groups (e.g., amateur, semi-professional and professional rugby players) on their leg strength, their leg strength values (dependent variable) would have to be normally distributed for the amateur group of players, normally distributed for the semi-professionals and normally distributed for the professional players.

33
Q

There is a homogeneity of variances

A

This means that the population variances in each group are equal. If you use SPSS Statistics, Levene's Test for Homogeneity of Variances is included in the output when you run a one-way ANOVA.

34
Q

there is an independence of observations

A

This is mostly a study design issue and, as such, you will need to determine whether you believe it is possible that your observations are not independent based on your study design (e.g., group work/families/etc).

35
Q

Explain why ANOVAs are needed (why not just run multiple t-tests?)

A

Every time you conduct a t-test there is a chance that you will make a Type I error (false positive). This error is usually 5% (if your alpha is .05 – it is 1% if your alpha is .01).
- By running two t-tests on the same data you increase your chance of "making a mistake" to about 10% (essentially doubling your chance of a Type I error).
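The arithmetic behind this card: with k independent tests each run at level alpha, the chance of at least one false positive is 1 - (1 - alpha)^k. (This is a sketch; multiple t-tests on the same data are not fully independent, so it is an approximation.)

```python
def familywise_error_rate(alpha, k):
    # Probability of at least one Type I error across k independent tests,
    # each conducted at significance level alpha.
    return 1 - (1 - alpha) ** k

two_tests = familywise_error_rate(0.05, 2)    # 0.0975, close to "doubled"
three_tests = familywise_error_rate(0.05, 3)  # ~0.1426
```

The error rate keeps climbing as more tests are added, which is why an ANOVA (one omnibus test) is preferred over many pairwise t-tests.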

36
Q

total variance ANOVA

A

How much an individual score differs from the grand mean
-Grand mean (Gm) = overall mean of all scores

37
Q

Within group variance

A

How much an individual score differs from its own group mean

38
Q

Between group variance

A

How much a group mean differs from the grand mean
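The three variance cards above fit together as a sum-of-squares identity, SS_total = SS_between + SS_within, sketched here with hypothetical data:

```python
import statistics

groups = [[1, 2, 3], [2, 3, 4], [6, 7, 8]]  # hypothetical scores, three groups
grand_mean = statistics.fmean(x for g in groups for x in g)

# Total: each individual score vs. the grand mean.
ss_total = sum((x - grand_mean) ** 2 for g in groups for x in g)
# Within: each individual score vs. its own group mean.
ss_within = sum((x - statistics.fmean(g)) ** 2 for g in groups for x in g)
# Between: each group mean vs. the grand mean, weighted by group size.
ss_between = sum(len(g) * (statistics.fmean(g) - grand_mean) ** 2 for g in groups)
```

For these numbers the identity works out as 48 = 42 + 6: the total variability splits cleanly into its between-group and within-group parts.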

39
Q

Understand the impact of within group variance on an F- Value

A

the F statistic is the ratio of the variance between the groups (the levels of the IV) to the variance within the groups; the larger the within-group variance, the smaller F becomes.
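To see the impact concretely, here is a sketch with hypothetical data comparing two datasets that have identical group means but different within-group spread; tripling the spread shrinks F:

```python
import statistics

def f_stat(groups):
    # F = MS_between / MS_within for a one-way ANOVA.
    grand = statistics.fmean(x for g in groups for x in g)
    k, n = len(groups), sum(len(g) for g in groups)
    ss_b = sum(len(g) * (statistics.fmean(g) - grand) ** 2 for g in groups)
    ss_w = sum((x - statistics.fmean(g)) ** 2 for g in groups for x in g)
    return (ss_b / (k - 1)) / (ss_w / (n - k))

tight = [[1, 2, 3], [2, 3, 4], [6, 7, 8]]   # group means 2, 3, 7; small spread
wide = [[-1, 2, 5], [0, 3, 6], [4, 7, 10]]  # same group means, 3x the spread

f_tight = f_stat(tight)  # large F: between-group differences stand out
f_wide = f_stat(wide)    # small F: within-group variance obscures them
```

The between-group variance is identical in both datasets; only the denominator changed, dropping F from 21 to about 2.3.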

40
Q

Repeated measures ANOVA

A

a one-way ANOVA used to determine whether there are any statistically significant differences between different levels of your independent variable across one group of participants

41
Q

when to use RM Anova

A

(1) when participants are measured multiple times to see changes to an intervention or something over time; or
(2) when participants are subjected to more than one condition/trial and you want to compare their responses across those conditions.

42
Q

Between Treatments/ Between Conditions Variance(RM ANOVA)

A

the variance between different levels of the IV

43
Q

Within Treatments/ Within Groups Variance

A

variance within each level of the IV

44
Q

advantage of a repeated measures ANOVA

A

whereas within-group variability (SSw) expresses the error variability (SSerror) in a between-subjects ANOVA, a repeated measures ANOVA can further partition this error term, reducing its size.
- This has the effect of increasing the value of the F-statistic due to the reduction of the denominator and leading to an increase in the power of the test to detect significant differences between means

45
Q

Between-Subjects Variability

A

variability that comes from individual participants' overall score levels across treatments; it reflects stable individual differences