Task 3 (Chapters 5, 8 and 9) Flashcards

1
Q

How do you choose variables?

A

Research tradition

Choosing variables based on theory

Availability of new techniques/equipment

2
Q

What is the reliability of a measure?

A

Reliability of a measure concerns its ability to produce similar results when repeated measurements are made under identical conditions.

3
Q

What is the relation between variability and reliability?

A

The more variability, the less reliability

4
Q

How do you measure the reliability of a physical measure?

A

Height and weight are assessed by repeatedly measuring a fixed quantity of the variable

Precision represents the range of variation to be expected on repeated measurement (precise measurements show a small range of variation)

5
Q

How do you measure the reliability of population estimates?

A

Measures of opinion, attitude, and similar psychological variables

It is problematic to estimate the average value of a variable in a given population based on a sample drawn from that population

The precision of the estimate is called the margin of error
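As a rough sketch (assuming a 95% confidence level, a simple random sample, and the usual normal approximation — all choices not specified in the card), the margin of error for a sample proportion can be computed like this:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Margin of error for a sample proportion p_hat from n respondents.
    z = 1.96 corresponds to a 95% confidence level."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# e.g. 52% agreement in a survey of 1000 people
print(round(margin_of_error(0.52, 1000), 3))  # → 0.031, about ±3 percentage points
```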

6
Q

How do you measure the reliability of judgements or ratings by multiple observers?

A

Establish the degree of agreement among observers by using a statistical measure of interrater reliability.

     → How much agreement is there between raters?
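One common such statistic is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with hypothetical ratings (the data are illustrative):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categories to the same items."""
    n = len(rater_a)
    # observed agreement: fraction of items both raters categorized the same way
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # expected chance agreement, from each rater's marginal category frequencies
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["yes", "yes", "no", "yes", "no", "no"]
b = ["yes", "no",  "no", "yes", "no", "yes"]
print(round(cohen_kappa(a, b), 2))  # → 0.33: modest agreement beyond chance
```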
7
Q

How do you measure the reliability of psychological tests or measures (e.g., intelligence, anxiety)?

A

The basic strategy is to administer the assessment twice to a large group of individuals, and then determine the correlation between the two sets of scores.

High correlation = greater reliability (a reliability of r = 0.95 or higher is considered high)

8
Q

What are the three ways to test the reliability of psychological tests or measures?

A

Test-retest reliability

Parallel forms reliability

Split-half reliability

9
Q

How does test-retest reliability work? What are its limitations? What is it best for assessing?

A
  • Administering the same test twice, separated by a long interval of time
  • Participants could respond in the same way because they recall their initial answers
  • It is best for assessing stable characteristics (such as intelligence)
10
Q

How does parallel (alternate) forms reliability work?

Limitations?

A

Same as test-retest, except the form on the second testing is replaced by a parallel form containing items equivalent to the original.

Differences in test performance can be due to nonequivalence of the forms.

11
Q

How does split-half reliability work?

A

Two parallel forms of the test are intermingled in a single test and administered together in one testing.

Responses from the two forms are separated and scored individually.

The quantity being measured has no time to change.

12
Q

What is accuracy?

A

A measure is accurate if it produces results that agree with a known standard.

Individual values may not agree with the standard, so you average all values; it is that average that has to equal the standard.

13
Q

What is the difference between the average and the standard called?

A

Bias
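For example (hypothetical repeated readings against a known 50.0 g calibration standard):

```python
from statistics import mean

# hypothetical repeated weighings of a known 50.0 g standard
readings = [50.3, 50.1, 50.4, 50.2, 50.5]
standard = 50.0

bias = mean(readings) - standard
print(round(bias, 2))  # → 0.3: this scale reads about 0.3 g too high
```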

14
Q

What is precision?

A

The range of variation that is expected

15
Q

What is the validity of a measure?

A

The extent to which a measure measures what you intend to measure

16
Q

What are the different types of validity?

A

Face validity

Content validity

Criterion-related validity

Construct validity

17
Q

What is face validity?

A

How well a measurement instrument appears to measure what it was designed to measure (judging by appearance)

18
Q

What is content validity?

A

How adequately the content of a test samples the knowledge, skills or behaviors that the test is intended to measure

19
Q

What is criterion-related validity?

A

How adequately a test score can be used to infer an individual's value on some criterion measure

20
Q

What two types of criterion-related validity are there?

A

Concurrent validity

Predictive validity

21
Q

What is concurrent validity?

A

When test scores and criterion scores are collected at about the same time

22
Q

What is predictive validity?

A

Comparing scores on your test with the value of a criterion measure observed at a later time

23
Q

What is construct validity?

A

When a test is designed to measure a “construct”: a variable that is not directly observable and that has been developed to explain behaviour on the basis of a theory (cognitions, happiness, etc.)

24
Q

What are the differences between criterion and construct validity?

A

A construct is more about abstractions, while a criterion is just one variable.

A construct is theoretical and cannot be directly observed, while a criterion is more general and already established.

25
Q

What is the sensitivity of a dependent measure?

A

How much your measure responds to your manipulation

26
Q

What are range effects?

A

Occur when the values of a variable have an upper and a lower limit

If a bathroom scale measures up to 100 kg and you put something of 200 kg on it, it will still show 100 kg

A psychology questionnaire that is too hard or too easy for participants has the same problem
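The bathroom-scale example can be sketched as a measure that clips at its upper limit (a hypothetical illustration):

```python
def scale_reading(true_weight_kg, max_kg=100):
    """A bathroom scale that tops out at max_kg: values above it are clipped."""
    return min(true_weight_kg, max_kg)

print(scale_reading(80))   # → 80: within range, measured correctly
print(scale_reading(200))  # → 100: the true value is hidden by the upper limit
```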

27
Q

What are the two distinct cases for range effects?

A

Floor effects

Ceiling effects

28
Q

What are floor effects?

A

Variable reaches lowest possible value

29
Q

What are ceiling effects?

A

Variable reaches highest possible value

30
Q

What are behavioral measures?

A

Measures that record the actual behaviour of subjects

A good indicator of overt behaviour

31
Q

Behavioral measures: what does frequency of responding do?

A

Count the number of occurrences over a specified period

32
Q

Behavioral measures: what does latency do?

A

Measures the amount of time it takes for subjects to respond to a stimulus

33
Q

What are physiological measures?

A

Measures that record the participant's bodily functions

They provide fairly accurate information about the state of arousal within the participant's body

Psychological states must be inferred from physical states

34
Q

What are self report measures?

A

Participants self-report on variables

You can't be sure the participants are telling the truth, as they have a tendency to present themselves in a socially desirable manner

35
Q

Self-report measures: what is a rating scale, and what is Likert scaling?

A

Rating from 1-10

Likert scaling: participants indicate the degree to which they agree or disagree with a statement on a 5-point scale

36
Q

Self-report measures: Q-sort methodology

A

Qualitative measuring technique; establishing evaluative categories and sorting items into those categories

37
Q

What are implicit measures?

A

Measure responses that are not under direct conscious control

E.g., the IAT (Implicit Association Test)

38
Q

How should we understand implicit measures and the role they play in an experiment?

A

The participant is not an object but a human being, and the experiment is a relation between the participant, their attitudes, and the experimental context.

Participants assess you and the laboratory and draw inferences about what the experiment is about.

39
Q

Implicit measures: what are demand characteristics?

A

Cues provided by the researcher and the context that communicate to the participant the purpose of the study (or the responses expected of the participant)

40
Q

Implicit measures: what are role attitude cues?

A

May signal to participants that a change in their attitudes is needed to conform to their role as research participants

41
Q

Implicit measures: what is experimenter bias?

A

When the behaviour of the experimenter influences the behaviour of the participants, and thereby the outcome of the experiment

42
Q

Implicit measures: what are expectancy effects?

A

When an experimenter develops preconceived ideas about the capabilities of the participants

43
Q

How can we prevent experimenter bias?

A

Single-blind technique

Double-blind technique

44
Q

What is the single-blind technique?

A

Either the experimenter or the participant (but not both) does not know which experimental condition a subject has been assigned to

45
Q

What is the double-blind technique?

A

A technique to reduce experimenter bias in which neither the experimenter nor the participant knows at the time of testing which treatment the participant is receiving

46
Q

What does automating research mean?

A

Using technology to eliminate experimenter effects and increase the precision of measures

47
Q

What is a pilot study?

A

A small-scale version of a study used to establish the procedures, materials, and parameters to be used in the full study

48
Q

What is a manipulation check?

A

Tests whether or not your independent variables had the intended effects on your participants

49
Q

What are some ways of quantifying behavior in observational studies?

A

Frequency method
Duration method
Intervals method: a helpful method for observing multiple behaviors at the same time

Recording single events or behaviour sequences

50
Q

What are the different types of sampling?

A

Time sampling
Individual sampling
Event sampling
Recording

51
Q

Name the different types of observations

A

Naturalistic observation

Ethnography

Sociometry

Case history

Archival research

Content analysis

52
Q

What is naturalistic observation?

A

Observing subjects in their natural environments without making any attempt to control or manipulate variables

53
Q

How do you make naturalistic observations?

A

You have to make unobtrusive observations so the subjects don't know they are being observed

54
Q

What are the advantages and disadvantages of naturalistic observation?

A

Advantages: gives insight into how behaviour occurs in the real world; observations are not tainted by laboratory settings (high external validity)

Disadvantages: only a description of observed behaviour can be derived from this method; no investigation of the underlying causes of behaviour.

It is also time consuming and expensive

55
Q

What is ethnography?

A

Becoming immersed in the behavioral or social system being studied.

56
Q

What do we use ethnography for?

A

To study and describe the functioning of cultures through study of social interactions and expressions between people and groups

57
Q

How do you perform ethnographical observations?

A

Conducting observations using participant observation (you act as a functioning member of the group) or non-participant observation (observing as a non-member)

Deciding whether to conduct observations overtly (group members know) or covertly/undercover (group members unaware)

58
Q

What is sociometry?

A

Identifying and measuring interpersonal relationships within a group

59
Q

Example of sociometry?

A

Have research participants evaluate each other along some dimension

60
Q

What are case history observations?

A

Descriptive technique in which you observe and report on a single case (or a few cases)

61
Q

Limitations of case history observations?

A

Purely descriptive

62
Q

What are archival research observations?

A

A non-experimental strategy that involves studying existing records

It requires having specific research questions in mind

63
Q

Limitations of archival research?

A

Purely descriptive; causal relationships cannot be established

64
Q

What are content analysis observations?

A

Used to analyze a written or spoken record (or other meaningful matter) for the occurrence of specific categories of events, items, or behaviour.

65
Q

What should a content analysis be like?

A

Should be objective

Should be systematic: include articles not in favor of your position as well

Should have generality: findings should fit within a theoretical, empirical, or applied context.

66
Q

Limitations of content analysis

A

Purely descriptive; centers on the durability of findings

67
Q

What is survey research?

A

Research where you directly question your participants about their behaviour and underlying attitudes, beliefs, and intentions.

68
Q

What kind of study is survey research?

A

It is a correlational study

69
Q

Limitations of survey research?

A

Usually does not permit you to draw causal inferences from your data

70
Q

What are the steps for designing a questionnaire?

A
  1. Clearly define the topic of your study
  2. Decide which demographics to collect
  3. Write the questionnaire items
  4. Decide the order of questions in the questionnaire
71
Q

What are demographics?

A

Characteristics of participants (age, sex, marital status, occupation, income, education)

72
Q

How are demographics used?

A

They're used as predictor variables during analysis of the data, to determine whether participant characteristics correlate with or predict responses to other items in the survey. (Demographics should not be presented first in the questionnaire; the first question should be engaging.)

73
Q

What are the different types of writing questionnaire items?

A
  1. Open-ended items
  2. Restricted items
  3. Partially open-ended items
  4. Rating scales
74
Q

How do open-ended items work?

A

Allow participants to respond in their own words

75
Q

How do Restricted items (close-ended items) work?

A

Provide limited number of specific response alternatives

76
Q

How do partially open-ended items work?

A

They resemble restricted items but provide an additional “other” category, giving an opportunity to give an answer not listed among the specific alternatives

77
Q

How do rating scales work?

A

They are a variation on restricted items, using a rating scale rather than specific response alternatives.

78
Q

How should the order of questions in a questionnaire be? How is a questionnaire more effective?

A

Sensitive questions should be towards the end

Questionnaire is more effective if the organization is coherent and the questions follow a logical order and relate to each other

79
Q

How can you administer a questionnaire?

A
  • mail survey
  • internet survey
  • telephone survey
  • group-administered survey
  • face-to-face interviews
  • mixed-mode survey

80
Q

What are the two ways to assess the reliability of a questionnaire?

A

Repeated administration

Single administration

81
Q

How do you assess the reliability of a questionnaire through repeated administration?

A
  • test-retest reliability
  • use parallel forms to avoid the problem with test-retest reliability

82
Q

How do you assess the reliability of a questionnaire through single administration?

A

Split-half reliability: splitting the questionnaire into equivalent halves and deriving a score from each half

Applying the Kuder-Richardson formula

83
Q

How do you apply the Kuder-Richardson formula?

A

It yields the average of all split-half reliabilities that could be derived from the questionnaire's items.

The resulting number should lie somewhere between 0 and 1

The higher the number, the greater the reliability of questionnaire
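For dichotomous (0/1) items this is commonly the KR-20 formula. A minimal sketch with hypothetical data (the function name and data are illustrative):

```python
from statistics import pvariance

def kr20(item_responses):
    """KR-20 reliability for a test of 0/1 items.
    item_responses: one row of item scores per participant."""
    k = len(item_responses[0])   # number of items
    n = len(item_responses)      # number of participants
    totals = [sum(row) for row in item_responses]
    # sum over items of p*q, where p = proportion answering the item correctly
    pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in item_responses) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / pvariance(totals))

responses = [
    [1, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
]
print(round(kr20(responses), 2))  # → 0.77
```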

84
Q

How do we increase the reliability of a questionnaire?

A
  • increase the number of items on the questionnaire
  • standardize administration procedures (keep timing, lighting, ventilation, instructions to participants, and instructions to administrators constant)
  • score the questionnaire carefully
  • make sure items on the questionnaire are clear, well written, and appropriate for the sample