Types of Research Design 1 Flashcards

1
Q

Pre-experimental design

A
One-shot study:                     T   O
One-group pre-test and post-test:   O1  T   O2
Static group comparison:            T   O1
                                    ---------
                                        O2
2
Q

True experimental design

A

R  T   O1
R      O2
Randomised groups design

R  T1  O1
R  T2  O2
R      O3
Extending levels - randomised groups design
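
A minimal Python sketch (an illustration, not part of the card) of what "R" stands for in these diagrams: participants are randomly assigned to a treatment group and a control group before any treatment (T) or observation (O). The participant IDs and group sizes are hypothetical.

import random

# "R": random assignment of participants to groups before T and O.
participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical IDs

random.shuffle(participants)
half = len(participants) // 2
treatment_group = participants[:half]  # receives T, then observed (O1)
control_group = participants[half:]    # no treatment, observed only (O2)

print("Treatment group:", treatment_group)
print("Control group:  ", control_group)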

3
Q

Factorial designs

A

IV 1 has 3 levels (A1, A2, A3)
IV 2 has 2 levels (B1, B2)

Gives 6 groups to which participants are randomly assigned
(A1B1, A1B2, A2B1, A2B2, A3B1, A3B2)

Test (O) is performed after each treatment
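
A minimal Python sketch (assumed for illustration, not part of the card) showing how crossing the 3 levels of IV 1 with the 2 levels of IV 2 produces the 6 cells listed above, and one way participants could be randomly assigned to them. The participant IDs are hypothetical.

import itertools
import random

# Crossing the levels of the two IVs gives the 6 factorial cells.
iv1_levels = ["A1", "A2", "A3"]  # IV 1: 3 levels
iv2_levels = ["B1", "B2"]        # IV 2: 2 levels
cells = list(itertools.product(iv1_levels, iv2_levels))
print(cells)  # [('A1', 'B1'), ('A1', 'B2'), ('A2', 'B1'), ..., ('A3', 'B2')]

# Randomly assign hypothetical participants across the 6 cells.
participants = [f"P{i:02d}" for i in range(1, 13)]
random.shuffle(participants)
assignment = {cell: [] for cell in cells}
for i, person in enumerate(participants):
    assignment[cells[i % len(cells)]].append(person)

for cell, group in assignment.items():
    print(cell, group)  # the test (O) follows each cell's treatment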

4
Q

One group time series

A

O1 O2 O3 O4 T O5 O6 O7 O8

5
Q

Define Causal-comparative research

A

Research used when the IV cannot be manipulated in humans, or when it is impractical to do so, so groups that already differ on the IV are compared.

For example: gender, age, intelligence, ethnicity

6
Q

Give 4 points of contrast between causal-comparative and experimental research

A

In experimental research, IV is manipulated by the researcher (active variable)

In causal comparative, IV has already occurred (attribute variable)

In experimental research, assignment to groups must be random

In causal comparative, assignment to groups is pre-determined

7
Q

Define Validity and Reliability

A

Validity: degree to which a measuring instrument measures what it is supposed to measure (accuracy)

Reliability: degree of consistency with which a measuring instrument measures whatever it is measuring

8
Q

Define Measurement reliability

A

Observed Score = true score + error score (systematic or random)
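
A minimal simulation sketch (illustrative only, not from the card): each observed score is a true score plus random error, so two administrations of the same test agree only imperfectly. The correlation between the two sets of observed scores is one common way to estimate reliability; all numbers below are made up.

import random
import statistics

random.seed(1)
true_scores = [random.gauss(50, 10) for _ in range(200)]  # hypothetical

def administer(true_scores, error_sd):
    # observed score = true score + random error score
    return [t + random.gauss(0, error_sd) for t in true_scores]

def pearson(x, y):
    # simple Pearson correlation, used here as a reliability estimate
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

test = administer(true_scores, error_sd=5)
retest = administer(true_scores, error_sd=5)
print(round(pearson(test, retest), 2))  # closer to 1.0 = more consistent measurement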

9
Q

State the sources of Measurement error

A

Participants (mood, motivation, fatigue)
Testing (poor instruction, encouragement)
Scoring (competence, nature of scoring)
Instrumentation (poor calibration, inaccurate recording)

10
Q

State sources of Measurement validity

A

Logical validity

Content validity

Criterion validity

11
Q

Define Logical validity

A

Degree to which the measure obviously involves the performance being measured

12
Q

Define Criterion validity

A

Degree to which scores on a test are related to some standard or criterion
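
A minimal sketch (illustrative only, with made-up numbers): criterion validity is often quantified as the correlation between scores on the test and scores on an accepted standard (the criterion).

import statistics

test_scores      = [12, 15, 14, 18, 20, 22, 19, 25]  # new test (hypothetical)
criterion_scores = [30, 34, 33, 40, 44, 47, 41, 52]  # accepted standard (hypothetical)

mx, my = statistics.mean(test_scores), statistics.mean(criterion_scores)
num = sum((a - mx) * (b - my) for a, b in zip(test_scores, criterion_scores))
den = (sum((a - mx) ** 2 for a in test_scores)
       * sum((b - my) ** 2 for b in criterion_scores)) ** 0.5
print(round(num / den, 2))  # a high correlation suggests strong criterion validity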

13
Q

Define Content validity

A

Checking that the content of the test measures what it is intended to measure.

14
Q

Define Concurrent validity

A

Instrument correlated with a criterion administered at about the same time

15
Q

Define Predictive validity

A

Degree to which the measurement can accurately predict future measurements

16
Q

Define Construct validity

A

Degree to which a test measures a hypothetical construct