Task/Lecture 2 Flashcards

1
Q

Reliability

A

the ability of a measure to produce similar results when it is taken several times.
- measuring physical variables → measure a fixed quantity and use the observed variation to derive the precision.
- measuring psychological variables → the precision of the estimate is called the margin of error.
- measuring judgments/ratings of multiple observers → establish the degree of agreement among observers with a statistical measure of interrater reliability (kappa; see the formula below).
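
For reference, the kappa statistic mentioned above is commonly Cohen's kappa (an assumption here; the lecture may use another interrater index). It compares the observed agreement with the agreement expected by chance:

\kappa = \frac{p_o - p_e}{1 - p_e}

where p_o is the observed proportion of agreement between the observers and p_e is the proportion of agreement expected by chance; \kappa = 1 means perfect agreement and \kappa = 0 means agreement no better than chance.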

2
Q

Testing reliability

A

Test-retest reliability: administering the same test twice, separated by a relatively long interval of time.
→ best used for stable characteristics.

Parallel-forms reliability: same as test-retest, but the second administration uses a different form that is supposedly equivalent to the first.
→ deals with the problem of participants memorizing their responses from the first administration.

Split-half reliability: two parallel forms of the test are intermingled in a single test and administered together in one sitting.
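
A minimal sketch of the split-half idea (invented data, not from the lecture): score the two intermingled halves separately and correlate the half scores.

import numpy as np

# rows = participants, columns = items (6 dichotomously scored items, made up)
responses = np.array([
    [1, 1, 0, 1, 1, 0],
    [1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
])

# split the items into two halves, e.g. odd vs. even item positions
half_a = responses[:, 0::2].sum(axis=1)
half_b = responses[:, 1::2].sum(axis=1)

# split-half reliability = correlation between the two half scores
# (in practice this is often stepped up with the Spearman-Brown formula,
# not covered on this card, because each half is only half as long as the test)
r_half = np.corrcoef(half_a, half_b)[0, 1]
print(f"split-half correlation: {r_half:.2f}")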

3
Q

Accuracy

A

a measure that produces results that agree with a known standard.
→ Bias: the difference between the standard value and the average score of the measure.
→ The precision of a measure limits the accuracy of a single measurement. However, a measure can be very precise but not at all accurate.
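
A small numerical sketch (invented readings) of the distinction: the spread of repeated readings reflects precision, while the offset from the known standard is the bias.

import numpy as np

standard = 100.0                                    # known standard value
readings = np.array([103.1, 102.9, 103.0, 103.2])   # repeated measurements

precision = readings.std(ddof=1)    # small spread -> very precise (~0.13)
bias = readings.mean() - standard   # systematic offset from the standard (~+3.05)

print(f"precision (SD): {precision:.2f}")
print(f"bias: {bias:+.2f}")          # precise, yet clearly not accurate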

4
Q

Validity

A

the extent to which something measures what you intend to measure.

Face validity: how well an instrument appears, on the face of it, to measure what it was designed to measure.

Content validity: how well the content of the measure samples the knowledge, skills or behaviors that the test is intended to measure.

Criterion-related validity: how well a test score can be used to infer an individual’s value on some “criterion” measure.
Concurrent validity: scores on the test and the criterion are collected at the same time → high correlation = high concurrent validity.
Predictive validity: scores predict directly related behavior in the future.

Construct validity: how well the questionnaire measures the underlying theoretical construct → high construct validity if people who score high also behave as predicted by the theory.

5
Q

Adequacy of a dependent measure

A

Sensitivity: some measures of a dependent variable may be insensitive to the manipulation, whereas other measures of the same variable, under the same conditions, clearly show an effect.

Range effects: occur when the values of a variable have an upper or lower limit (a ceiling or floor) that is encountered during the course of the observation. Range effects compress the highest (or lowest) data points, so a real difference between the groups may show up as no statistically reliable difference (see the sketch below).
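
A simulation sketch of a ceiling effect (simulated scores, not lecture data): the upper limit of the scale compresses the higher-scoring group, shrinking the observed difference between the groups.

import numpy as np

rng = np.random.default_rng(1)
max_score = 10                              # upper limit of the measure

true_a = rng.normal(7, 2, size=1000)        # group A: true mean 7
true_b = rng.normal(11, 2, size=1000)       # group B: true mean 11 (above the ceiling)

observed_a = np.clip(true_a, 0, max_score)  # recorded scores cannot exceed 10
observed_b = np.clip(true_b, 0, max_score)

print(true_b.mean() - true_a.mean())            # ~4: the true difference
print(observed_b.mean() - observed_a.mean())    # noticeably smaller: the ceiling effect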

6
Q

Demand characteristics

A

cues that are unintentionally provided by the researcher or the research context concerning the expected behavior from participants.
- Cooperative attitude: participants may try to conform to the demand characteristics
- Apprehensive attitude: participants are worried about what will happen to them
- Negative attitude: participants may try to ruin the experiment
- Social desirability: participants want to come across as favorably as possible.

7
Q

Experimenter bias

A

the behavior of the experimenter can sometimes influence the outcome of the experiment, often unintentionally. It can result from:
- Expectancy effects: a researcher may have preconceived ideas about how participants should behave and subtly communicate it to them.
- Treating different experimental groups differently: in such a way as to confirm the hypothesis being tested.

Solutions:
- Single-blind technique: the experimenter does not know which treatment a subject has been assigned to.
- Double-blind technique: neither the experimenter nor the participant knows, at the time of testing, which treatment the participant is receiving.
- Automation: make the experiment as automated as possible, e.g. by using computers.

8
Q

Detecting/correcting problems

A

Pilot study: a small-scale version of a study used to establish the procedures, materials, and parameters to be used in the full study.
- can save time and money
- can clarify instructions, improve procedures, determine appropriate levels of the independent variables and the reliability and validity of the observational methods.

Manipulation checks: test whether the independent variables had the intended effects on the participants.
- used to determine why an independent variable failed to produce an effect and whether the participants perceived the experiment in the manner in which you intended

9
Q

Quantifying behavior

A

Frequency method: record the number of times a particular behavior occurs within a time period.

Duration method: record how long a particular behavior lasts.

Intervals method: divide your observation period into discrete time intervals and then record whether a behavior occurs within each interval.
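
A small sketch (invented timestamps) showing how the three methods quantify the same observation session; each episode of the behavior is a (start, end) pair in seconds.

# one 90-second observation session
episodes = [(3, 8), (15, 21), (40, 43), (55, 70)]
session_length = 90      # seconds observed
interval_size = 30       # seconds per interval for the intervals method

# Frequency method: number of times the behavior occurred
frequency = len(episodes)

# Duration method: total time the behavior lasted
duration = sum(end - start for start, end in episodes)

# Intervals method: did the behavior occur at all within each interval?
intervals = []
for i in range(0, session_length, interval_size):
    occurred = any(start < i + interval_size and end > i for start, end in episodes)
    intervals.append(occurred)

print(frequency)   # 4
print(duration)    # 5 + 6 + 3 + 15 = 29 seconds
print(intervals)   # [True, True, True]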

10
Q

Sampling strategy

A

Time sampling: alternate between scanning the group for a specific period of time and recording the observed behaviors during the next period
→ appropriate when behavior occurs continuously.

Individual sampling: select a single subject for observation over a given time period and then record the observed behavior.
→ appropriate when you want to preserve the organization of an individual’s behavior over time rather than note how often particular behaviors occur.

Event sampling: observe only one behavior and record all instances.
→ appropriate when one behavior of interest can be clearly defined and focused on.

11
Q

Naturalistic observation

A

non-experimental

observing your subjects in their natural environments without making any attempt to control or manipulate variables.

+ insight into how behavior occurs in the real world → high external validity
- can't investigate underlying causes

→ has to be unobtrusive, so use indirect measures when possible.

12
Q

Ethnography

A

researcher becomes immersed in the behavioral/social system being studied. In most cases it is conducted in field settings → field researcher.

  • participant or nonparticipant.
  • conducted overtly or covertly.
  • nonparticipant covert observation is essentially naturalistic observation.
  • participant covert observation carries ethical issues → subjects cannot give informed consent → such violations can be acceptable if the results promise a significant contribution to science.
13
Q

Designing a questionnaire

A

Demographics: characteristics of the participants, used as predictor variables. Non-demographic items, such as attitudes, might also be used as predictor variables.

Criterion variable: represents the behavior of interest; measured by one or several items.

14
Q

Different types of questions

A

Open-ended items: allow the participant to respond in their own words.
+ more complete/accurate info
- may not be the answer you need

Restricted items (closed-ended items): provide a limited number of specific response alternatives.
+ easy to summarise
- poor info
- may not reflect the participant's opinion

Partially open-ended items: like restricted items, but they include an "other" option, giving the participant the opportunity to give an answer that is not listed.

Rating scales: restricted items that use a rating scale rather than discrete response alternatives.
- Stretched rating scale: one that has a finer scaling format at one end - may set up a demand characteristic to respond on the finer end of the scale.
- Likert scale: provides a series of statements to which participants indicate degrees of agreement or disagreement (e.g., from 1 = strongly disagree to 5 = strongly agree).

15
Q

Types of surveys

A

Mail survey: the questionnaire is mailed directly to participants, who complete and return it at their leisure.
+ easy
- nonresponse bias

Internet survey: posting surveys on the internet
+ easy
- nonresponse bias
- sample may not represent the entire population

Telephone surveys: participants are contacted through phone calls.
- laws may prohibit this

Group-administered surveys: administering the questionnaire to a large group of individuals that you have at your disposal.
+ large amount of data
+ reduced volunteer bias
- may not be taken seriously
- can't really ensure anonymity
- pressure on participants

Face to face interviews: direct conversation with the participant.
Structured interview: the interviewer asks prepared questions
Unstructured interview: the interviewer has a general idea of the issues to discuss, but no prepared questions.
- behavior of the interviewer may affect answers
- social context may affect responses.

Mixed-mode surveys: employing more than one survey technique
+ increases the likelihood of reaching potential respondents
- respondents may answer the same question differently depending on the mode in which it is delivered.

16
Q

Nonresponse bias

A

participants who fail to return the questionnaire might differ significantly from those who do return it

17
Q

Kuder-Richardson formula

A

the average of all the split-half reliabilities that could be derived from splitting the questionnaire into two halves in every possible way.
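
For reference, the version usually meant by this description is KR-20, which applies to questionnaires whose items are scored dichotomously (this mapping is an assumption; the lecture may present it differently):

r_{KR20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^{2}}\right)

where k is the number of items, p_i is the proportion of respondents endorsing (or answering correctly) item i, q_i = 1 - p_i, and \sigma_X^{2} is the variance of the total scores.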