Research Methods Flashcards

1
Q

Aim [definition]:

A

A statement of what the researcher intends to find out in a research study.
For example: Investigating the effect of caffeine on memory

2
Q

Debriefing [definition]:

A

A post-research interview designed to inform participants of the true nature of the study and to restore them to the state they were in at the start of the study

3
Q

How is debriefing useful? [2]:

A
  • It is a means of dealing with ethical issues
  • It can be used to get feedback on the procedures of the study

4
Q

Independent variable [definition]:

A

The variable that the researcher deliberately manipulates (changes) in an experiment

5
Q

Dependent variable [definition]:

A

The variable measured by the researcher; any change in it should depend on the independent variable

6
Q

Control variable [definition]:

A

A variable that is kept constant across all conditions so that it cannot affect the dependent variable

7
Q

Confounding Variable [definition]:

A

A variable in the study that is not the IV but varies systematically with the IV

8
Q

Extraneous variables [3]:

A
  • Do NOT vary systematically with the IV
  • They do not act as an alternative IV but instead have an effect on the DV
  • They are nuisance variables
9
Q

Internal validity [definition]:

A

The degree to which an observed effect was due to the experimental manipulation rather than other factors such as confounding/extraneous variables

10
Q

External validity [definition]:

A

The degree to which a research finding can be generalised to other settings (ecological validity)

11
Q

Validity vs Reliability:

A
Reliability = consistency of a measure
Validity = accuracy of a measure
12
Q

Confederate [2]:

A

An individual in a study who has been instructed by the researcher how to behave
- e.g. the 'learner' in Milgram's obedience study

13
Q

Directional hypothesis [2]:

A
  • States the direction of the predicted difference between two conditions
  • example: Women will have higher scores than men will on Hudson’s self-esteem scale
14
Q

Non-directional hypothesis [2]:

A
  • Predicts simply that there is a difference between conditions of the IV
  • For example: There will be a difference between men’s scores and women’s scores on Hudson’s self-esteem scale
15
Q

Pilot study [definition]:

A
  • A small-scale trial run of a study to test any aspects of the design, to make improvements before the final study
16
Q

When do psychologists use a directional hypothesis?

A

When past research suggests that the findings will go in a particular direction

17
Q

When is a non-directional hypothesis used?

A

When there is no past research on the topic studied or past research is contradictory

18
Q

What are 3 types of experimental design?

A
  • Repeated measure design
  • Independent measure design
  • Matched pairs design
19
Q

Repeated measures design [3]:

A

ALL participants experience ALL levels of the IV
+ Participant variables are reduced since it’s the same person in every condition
+ Fewer people are needed as they take part in all conditions

20
Q

Limitations of repeated measure design [2]:

A
  • Order effects, e.g. getting tired in later conditions. Can be reduced by counterbalancing
  • Participants may guess the aim of the experiment and behave a certain way, e.g. purposely doing worse in the second half. Can be avoided by using a cover story
21
Q

Independent measure design [2]:

A

Participants are placed in separate groups and only experience one level of the IV each
+ Avoids order effects

22
Q

Limitations of independent measure design [2]:

A
  • Participant variables, e.g. different abilities or characteristics, may differ between groups [can be reduced by randomly allocating participants]
  • Needs more participants than repeated measure
23
Q

Matched pairs design [3]:

A

Participants are matched by key characteristics or abilities, related to the study
+ Reduces participant variables
+ Reduces order effects

24
Q

Limitations of matched pairs design [3]:

A
  • If one participant drops out you lose 2 PPs’ data
  • Very time-consuming trying to find closely matched pairs
  • Impossible to match people exactly
25
Lab experiments [2]:
- Conducted in an environment controlled by researcher | - Researcher manipulates the IV
26
lab experiment examples [2]:
- Milgram’s experiment on obedience | - Bandura’s Bobo doll study
27
Strengths of lab experiments [2]:
- Easier to replicate because a standardised procedure is used | - Allow for precise control of extraneous and independent variables
28
Weakness of lab experiments:
- The artificiality of the setting may produce unnatural behaviour that does not reflect real life (low ecological validity)
29
Field experiments [3]:
- Conducted in the participant's everyday setting | - Researcher manipulates the IV, but in a real-life setting, so extraneous variables cannot be fully controlled | - Example: Hofling's hospital study on obedience (nurses in a hospital were tested using a drug in a medicine cabinet)
30
Strengths of field studies [2]:
- Behaviour in a field experiment is more likely to reflect real life because of its natural setting | - There is less likelihood of demand characteristics affecting the results, as participants may not know they are being studied (in covert experiments)
31
Weaknesses of field experiments [2]:
- There is less control over extraneous variables that might bias the results. - This makes it difficult for another researcher to replicate the study in exactly the same way.
32
Natural experiments [3]:
- Conducted in everyday life | - Researcher does NOT manipulate the IV because it occurs naturally | - Example: Hodges and Tizard's attachment research (1989) compared the development of adopted children with children who spent their lives with their biological families
33
Strengths of natural experiments [3]:
- Behaviour in a natural experiment is more likely to reflect real life because of its natural setting | - There is less likelihood of demand characteristics affecting the results, as participants may not know they are being studied | - Can be used in situations where it would be ethically unacceptable to manipulate the independent variable, e.g. researching stress
34
Weaknesses of natural experiments [2]:
- They may be more expensive and time-consuming than lab experiments | - There is no control over extraneous variables that might bias the results, making the study difficult to replicate
35
Quasi experiments [3]:
- Can be done in a controlled environment | - The IV is not manipulated; it is a pre-existing difference between people | - Example: Sheridan and King (1972) tested obedience in each gender by having participants give a puppy shocks of increasing strength; male obedience was 54% and female obedience was 100%
36
Strengths of quasi experiment:
Allows comparisons between different types of people
37
Weaknesses of quasi experiments [2]:
- Participants may be aware they are being studied, creating demand characteristics - The dependent variable may be an artificial task reducing mundane realism
38
Mundane realism [definition]:
The degree to which the procedures in an experiment are similar to events that occur in the real world
39
Single blind design:
Participant is not aware of the research aims and/or which condition of the IV they are in
40
Double blind design [2]:
- Both participant and researcher are unaware of the condition of the IV and the aim | - The person conducting the experiment is less likely to give away the aim of the experiment
41
Experimental realism:
If the researcher makes an experimental task sufficiently engaging, the participant pays attention to the task and not to the fact that they are being observed
42
Generalisation [definition]:
Applying the findings of a particular study to the wider target population
43
Opportunity sample [3]:
People who are the most convenient or available are recruited | + Easiest method because you can just use the first suitable participants | - Biased sample because it draws on only a small part of the population
44
Random sample [3]:
Uses random methods, like picking names out of a hat | + Unbiased: all members of the target population have an equal chance of being chosen | - Time-consuming (requires a list of all population members)
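The lottery method can be sketched in Python (a minimal illustration; `target_population` is a hypothetical list standing in for a real list of all population members):

```python
import random

# Hypothetical target population: random sampling requires a full list
# of every member, which is why it is time-consuming to set up.
target_population = ["P" + str(i) for i in range(1, 101)]  # P1..P100

# Draw 10 names "out of the hat": every member has an equal chance
# of selection and nobody can be picked twice.
sample = random.sample(target_population, k=10)
print(sample)
```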
45
Stratified sample [3]:
Strata (subgroups) within a population are identified, then members of each stratum are chosen in proportion | + More representative of the population than other samples | - Very time-consuming
46
Systematic sample [3]:
A predetermined system is used to select participants + Unbiased as it uses an objective system - Not truly random
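The predetermined system can be sketched in Python (hypothetical sampling frame; the every-5th rule is just an example):

```python
# Hypothetical sampling frame of 100 ordered names.
population = ["P" + str(i) for i in range(1, 101)]

# Predetermined system: select every 5th person, starting from the first.
# Objective, but not truly random - the selection is fixed once the list is ordered.
systematic_sample = population[::5]
print(len(systematic_sample))  # 20
```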
47
Volunteer sample [3]:
Advertised in a newspaper, on a noticeboard or on the internet, and people volunteer | + Gives access to a variety of participants, which can make the sample more representative | - Sample is biased because volunteers tend to be more highly motivated to be helpful
48
Random techniques [3]:
- Random number table - Random number generator - Lottery method (pulling names out of a hat)
49
Ethical issues [6]:
- Deception - Informed consent - Privacy - Confidentiality - Protection from harm - Right to withdraw
50
Deception- Participant POV [3]:
- It's unethical. - The researcher should not deceive anyone without good cause. - Deception prevents informed consent
51
Deception- researcher's POV [2]:
- Can be necessary otherwise participants may alter behaviour - Can be dealt with by debriefing participant when study is completed
52
Informed consent- Participant POV [2]:
- They should be told what they will be required to do in the study so that they know what they are agreeing to | - It is a basic human right
53
Informed consent- Researcher POV [2]:
- Means revealing true aims of the study | - Can get presumptive consent
54
Right to withdraw- Participant POV [3]:
- It is an important right | - Allows participants to leave if they feel uncomfortable | - The right to withdraw may be compromised if payment was used as an incentive
55
Right to withdraw- Researcher POV [3]:
- Can lead to a biased sample if people leave - They lose money if the person was paid and withdrew - Researcher has to inform the participant of this right before the study
56
Protection from harm- Participant POV [2]:
- Nothing should happen to them during a study that causes harm - It is acceptable if the harm is no greater than what the subject would experience in ordinary life
57
Protection from harm- Researcher POV [3]:
- Some more important questions in psychology involve a degree of distress to participants - It is difficult to guarantee protection from harm - Harm is acceptable if the outcome is more beneficial than the harm
58
Confidentiality- Participant POV [2]:
- The Data Protection Act makes confidentiality a legal right | - It is only acceptable for personal data to be recorded if the data is not made available in a form that identifies participants
59
Confidentiality- Researcher's POV [3]:
- Can be difficult because the researcher wishes to publish the findings - A researcher can guarantee anonymity but it may still be obvious who the subjects were - Researchers should not record the names of participants
60
Privacy- Participant POV:
- People do not expect to be observed in certain situations
61
Privacy- Researcher POV [2]:
- It may be difficult to avoid invasion of privacy when studying participants in public - Do not study anyone without informed consent unless in a public place and displaying public behaviour
62
BPS ethical guideline strengths and weaknesses [3]:
+ The guidelines give researchers clear direction | - In places they are vague and open to interpretation | - The guidelines absolve individuals of responsibility, because researchers can justify their research simply by claiming they followed the guidelines
63
Controlled observation [definition]:
A form of investigation in which behaviour is observed but under conditions where certain variables have been organised by the researcher
64
Covert observations [definition]:
Observing people without their knowledge | - Knowing that behaviour is being observed is likely to alter the participant's behaviour
65
Inter-observer reliability [definition]:
The extent to which there is agreement between 2 or more observers involved in observations of a behaviour
66
Naturalistic observation [definition]:
An observation carried out in an everyday setting, in which the investigator does not interfere in any way but merely observes the behaviours in question
67
Non-participant observation [definition]:
The observer is separate from the people being observed
68
Overt observation [definition]:
Observational studies where participants are aware they are being observed
69
Participant observation [definition]:
Observations made by someone who is also participating in the activity being observed, which may affect their objectivity
70
Naturalistic observation evaluation [2]:
+ Gives a realistic picture of spontaneous behaviour (high ecological validity) | - Little control over everything else that is happening (an unknown variable may be causing the behaviour being observed)
71
Controlled observation evaluation [2]:
+ Observer can focus on particular aspects of behaviour | - Control comes at the cost of a more artificial environment
72
Covert observation evaluation [2]:
+ Behaviour is more natural | - Participants cannot give consent
73
Overt observation [-]:
- Participants are aware they are being watched and may behave unnaturally
74
Participant Observation evaluation [3]:
+ May provide insight into behaviour from the 'inside' - Likely to be overt and so have participant awareness issues - Might be biased
75
Non-participant observation evaluation [2]:
+ Observers are likely to be more objective because they are not part of the group being observed | - More likely to be covert, so there are ethical issues
76
Event sampling [definition]:
An observational technique in which a count is kept of the number of times a certain behaviour occurs
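A minimal sketch of an event-sampling tally in Python (the behaviour codes are made up):

```python
from collections import Counter

# Hypothetical observation record: one entry per occurrence of a behaviour,
# coded using predefined behavioural categories.
events = ["smile", "talk", "smile", "frown", "smile", "talk"]

# Event sampling keeps a count of how many times each behaviour occurs.
tally = Counter(events)
print(tally["smile"])  # 3
```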
77
Time sampling [definition]:
An observational technique in which the observer records behaviours at fixed time intervals
78
Structured interview [definition]:
Any interview with predetermined questions
79
Unstructured interview [definition]:
The interview starts with some general aims and possibly some questions, and lets the interviewee's answers guide subsequent questions
80
Questionnaire evaluation [3]:
+ Can reach large numbers of people easily (large sample) | + Respondents may be more willing to give personal information in a questionnaire than in an interview | - Can only be completed by literate people, so the sample is biased
81
Structured interview evaluation [4]:
+ Can be easily repeated because the questions are standardised | + Easier to analyse than an unstructured interview | - Low reliability: different interviewers behave differently | - Interviewer bias
82
Unstructured interview evaluation [3]:
+ Yields more detailed information than a structured interview | - Requires more skilled interviewers than a structured interview | - In-depth questions may lack the objectivity of predetermined ones
83
Correlation [definition]:
A relationship between two variables
84
Correlations [3]:
- The participant provides data for both variables | - In a correlational design there are no independent or dependent variables, only co-variables | - A correlation is used only when testing the relationship between two variables
85
Structured observation [definition]:
A researcher uses various systems to organise observation such as behavioural categories and sampling procedures
86
What happens in unstructured observations?
The researcher records all relevant behaviour but has no system
87
Features of structured observations [2]:
- Behavioural categories | - Time/event sampling
88
Rules of behavioural categories [3]:
- Categories should be objective - Cover all possible component behaviours - Categories should be mutually exclusive
89
What are the self-report techniques? [3]:
- Structured interview - Unstructured interview - Questionnaire
90
Rules of writing a questionnaire [3]:
- Questions must be clear - Biased questions can lead a participant to give a particular answer - Questions need to be written so that the answers are easy to analyse
91
What to add in a questionnaire [2]:
- Filler questions to distract participant from true aim | - Easier questions first
92
Meta analysis [definition]:
When a researcher looks at findings from a number of different studies and produces a statistic to represent the overall effect
93
Review [definition]:
A consideration of a number of studies that have investigated the same topic in order to reach a general conclusion about a particular hypothesis
94
Content analysis [definition]:
A type of observational study where behaviour is observed INDIRECTLY in written or verbal materials (interviews, questionnaires)
95
Effect size [definition]:
A measure of the strength of the relationship between two variables
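One widely used effect size for a difference between two groups is Cohen's d; a minimal sketch with made-up scores (this is one measure among several, not the only definition of effect size):

```python
from statistics import mean, stdev

# Hypothetical scores for two groups of participants.
group_a = [5, 6, 7, 8, 9]
group_b = [3, 4, 5, 6, 7]

# Cohen's d: difference between the means divided by the pooled standard deviation.
pooled_sd = ((stdev(group_a) ** 2 + stdev(group_b) ** 2) / 2) ** 0.5
d = (mean(group_a) - mean(group_b)) / pooled_sd
print(round(d, 2))  # 1.26
```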
96
Meta analysis strengths [2]:
+ Increases the validity of conclusions, as they are based on a wider sample of participants | + Groups of studies on the same topic often contradict one another; meta-analysis helps us reach an overall statistical conclusion
97
Limitations of meta analysis [2]:
- Experimental designs in different studies may vary, so the research is never truly comparable | - Pooling them to calculate an effect size may therefore not be appropriate
98
Why is the mean the most sensitive measure of central tendency? *
It takes account of the exact distance between all the values of data
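The sensitivity of the mean can be shown with Python's `statistics` module and a made-up score set containing one extreme value:

```python
from statistics import mean, median, mode

# Hypothetical scores with one extreme value (50).
scores = [2, 3, 3, 4, 50]

# The mean uses the exact value of every score, so the outlier pulls it up;
# the median and mode ignore exact distances and are unaffected.
print(mean(scores))    # 12.4
print(median(scores))  # 3
print(mode(scores))    # 3
```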
99
When is a scattergram used in psychology?
When displaying a correlation between co-variables
100
When is a line graph used in psychology?
With continuous data
101
When is a histogram used in psychology?
- When frequency of continuous data is shown | - Cannot be drawn with data in categories
102
When is a bar chart used in psychology? [2]:
- When data is not continuous | - Can be used with categorical/nominal data
103
When is a table used in psychology?
when displaying raw data
104
Skewed distribution [definition]: *
There is a number of extreme values on one side
105
Positive skewed distribution =
Most scores cluster on the left; the tail of extreme values extends to the right
106
Negative skewed distribution =
Most scores cluster on the right; the tail of extreme values extends to the left
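A quick way to check the direction of a skew is to compare the mean and the median (made-up data; in a positive skew the right-hand tail drags the mean above the median):

```python
from statistics import mean, median

# Hypothetical positively skewed data: scores cluster at the low end,
# with a tail of extreme values stretching to the right.
positive_skew = [1, 2, 2, 3, 3, 4, 15]

# The tail pulls the mean above the median.
print(mean(positive_skew) > median(positive_skew))  # True
```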
107
Quantitative data =
Data in numerical form (quantities)
108
Qualitative data =
Data that cannot be expressed in numbers, e.g. words or pictures
109
Quantitative data evaluation [2]:
+ Easy to analyse using descriptive stats or stats tests | - Data may oversimplify reality
110
Qualitative data evaluation [2]:
+ Provides richer and detailed information about people's experiences - Complexity makes it more difficult to analyse/summarise and draw conclusions from
111
Primary data evaluation [2]:
+ researcher has control of the data and how it is collected - Lengthy and expensive process
112
Secondary data [definition]:
Information used in research that was collected by someone else
113
Secondary data evaluation [2]:
+ It is simpler and cheaper to access someone else's data | - Data may not exactly fit the needs of the study
114
When is a sign test used? [3]:
- Paired or related data - Repeated measure design - matched pair design
115
How to sign test:
The calculated value S is the number of times the less frequent sign occurs
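A minimal sketch of the sign-test calculation, using hypothetical before/after scores from a repeated measures design:

```python
# Hypothetical paired scores (repeated measures design).
before = [10, 12, 9, 15, 11, 8, 14]
after = [12, 14, 8, 18, 13, 10, 13]

# Record the sign of each difference; ties (no change) are dropped.
diffs = [a - b for b, a in zip(before, after) if a != b]
plus = sum(1 for d in diffs if d > 0)
minus = sum(1 for d in diffs if d < 0)

# S is the number of times the less frequent sign occurs;
# it must be <= the critical value to be significant.
s_value = min(plus, minus)
print(s_value)  # 2
```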
116
Nominal data [definition]:
Named data which can be separated into discrete categories which do not overlap
117
Ordinal data [definition]:
Data which is placed into some kind of order or scale
118
Interval data [definition]:
Data which comes in the form of a numerical value where the difference between points is standardised and meaningful
119
What are the types of data? [4]:
- Nominal data - Ordinal data - Interval data - Ratio
120
Peer review evaluation
*
121
Content analysis [definition]:
A type of observational study where behaviour is observed indirectly in visual or verbal material
122
Thematic analysis [explanation]:
Themes or categories are identified and data is organised into these themes
123
Content analysis [3]:
- Researcher has to pick whether they are using a time or event sample - Then they have to pick behavioural categories to tally - Data is represented by analysing it quantitatively or qualitatively
124
Example of quantitative content analysis [4]:
- Anthony Manstead & Caroline McCulloch - Interested in the way men and women were presented in TV ads - Observed 170 ads over one week - Focused on the adult figures & recorded the frequency of desired behaviours in a table
125
Thematic analysis [3]:
- A qualitative form of content analysis - Qualitative data is summarised by identifying repeated themes - A very lengthy process because everything is analysed in depth
126
Strengths of content analysis [2]:
+ Has high ecological validity because it is based on observations of what people actually do | + Can be replicated because the sources can be accessed by others
127
Weaknesses of content analysis [2]:
- Observer bias reduces the OBJECTIVITY and VALIDITY of findings, because different observers may interpret the behavioural categories differently | - Likely to be culturally biased, because observers judge behaviours by their own standards
128
What are the intentions of thematic analysis? [3]:
- To impose some kind of order on the data - To summarise the data, reducing its volume - To ensure the 'order' imposed is representative of the data
129
Case study [2]:
- detailed study of an individual | - provide a rich record of human experience
130
Case study example [3]:
- Henry Molaison: his hippocampus was removed because of epileptic seizures, leaving him unable to form new memories - Little Hans - Phineas Gage: survived an iron rod through the brain
131
Case studies strengths [2]:
+ Idiographic: in-depth data provides new insights | + Allow us to investigate rare instances of human behaviour, e.g. the Romanian orphanages
132
Case studies weaknesses [2]:
- Difficult to generalise/apply to the population | - Ethical issues such as confidentiality, informed consent and protection from harm
133
Inter-observer reliability [definition]:
The extent to which there is an agreement between two or more observers in an experiment
134
Test-retest reliability [definition]:
The same test or interview is given to the same participants on a different occasion to see whether they get the same results
135
How to asses reliability [3]:
- Have two or more observers make separate records and then compare them - Inter-observer reliability is the extent to which they agree - Calculated using correlation coefficients
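The comparison step can be sketched by computing a Pearson correlation coefficient between two observers' tallies (hypothetical counts; a value near +1 means high agreement):

```python
from statistics import mean

# Hypothetical tallies from two observers using the same behavioural categories.
observer_1 = [4, 7, 2, 9, 5]
observer_2 = [5, 6, 2, 10, 4]

# Pearson correlation coefficient between the two records.
m1, m2 = mean(observer_1), mean(observer_2)
cov = sum((x - m1) * (y - m2) for x, y in zip(observer_1, observer_2))
var1 = sum((x - m1) ** 2 for x in observer_1)
var2 = sum((y - m2) ** 2 for y in observer_2)
r = cov / (var1 * var2) ** 0.5
print(round(r, 2))  # 0.94
```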
136
Improving inter-observer reliability [2]:
- Clearer behavioural categories (may not have been clear before) - Observers may need more practice using the categories
137
Improving reliability [2]:
- Reduce ambiguity of items in tests | - Standardise procedure
138
Concurrent validity [definition]:
Establishing validity by comparing an existing test with the one you are interested in
139
Ecological validity [definition]:
The ability to generalise a research effect beyond the research setting
140
Face validity [definition]:
The extent to which test items look like what the test claims to measure
141
Mundane realism [definition]:
How a study mirrors the real world/ is it realistic?
142
Temporal validity [definition]:
whether research can be generalised beyond the time period of the study
143
Validity [definition]:
whether an observed effect is a genuine one
144
How to improve validity [2]:
- If face validity is low, rewrite the questions | - If internal/external validity is low, use a better research design
145
What are the features of science? [5]:
- Empirical methods - Objectivity - Replicability - Theory construction - Hypothesis testing
146
What are empirical methods?
When information is gained through observation or experimentation rather than from unfounded beliefs
147
Theory construction [explanation]: | [2]:
Facts alone are meaningless; theories/explanations must be constructed to make sense of the facts - This can be done through hypothesis testing
148
Falsifiability [definition]:
The possibility that a statement or hypothesis can be proven wrong
149
Type 1 error [definition]:
When a researcher rejects a null hypothesis that's true
150
Type 2 error [definition]:
When a researcher accepts a null hypothesis that is not true
151
When is p ≤ 0.01 used?
When a researcher is replicating another study because results need to be more certain
152
What is a parametric test? [3]:
- A test requiring interval or ratio level data - The data must be drawn from a population with a normal distribution - Both samples must have equal variances
153
Non- parametric tests of difference [4]:
- Wilcoxon test - Mann-whitney - Sign test - Chi-square
154
Parametric tests of difference [2]:
- Related t test | - Unrelated t test
155
Tests of correlation [2]:
- Spearman's Rho (non-parametric) | - Pearson's R (parametric)
156
Wilcoxon test [3]:
- Hypothesis states a difference - Related data (repeated measure/ matched pairs) - Ordinal data
157
Wilcoxon significance [2]:
- Calculated value of ‘T’ must be ≤ the critical value to be significant - If not significant we accept the null
158
Mann-Whitney [3]:
- Hypothesis states a difference - Unrelated data (independent measure) - Ordinal data
159
Mann-Whitney significance [2]:
- Calculated value of ‘U’ must be ≤ the critical value to be significant - If not significant we accept the null
160
Related t- test [3]:
- Hypothesis states a difference - Related data (repeated measure/ matched pairs) - Interval data
161
Related t-test significance [2]:
- Calculated value of ‘t’ must be ≥ the critical value to be significant - If not significant we accept the null
162
Unrelated t-test [3]:
- Hypothesis states a difference in data - Unrelated data (independent measure) - Interval data
163
Unrelated t-test significance [2]:
- Calculated value of ‘t’ must be ≥ the critical value to be significant - If not significant we accept the null
164
Spearman's Rho [3]:
- Hypothesis states a corrrelation - Related data (repeated measure/ matched pairs) - Ordinal data
165
Spearman's Rho significance [2]:
- Calculated value of ‘rho’ must be ≥ the critical value to be significant - If not significant we accept the null
166
Pearson's R [3]:
- hypothesis states a correlation - Related data (repeated measure/ matched pair) - Interval data
167
Pearson's R significance [2]:
- Calculated value of ‘r’ must be ≥ the critical value to be significant - If not significant we accept the null
168
Chi square [3]:
- Hypothesis states a difference/ association - Unrelated/ independent data - Nominal data
169
Chi square significance [2]:
- Calculated value of ‘χ²’ must be ≥ the critical value to be significant - If not significant we accept the null
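The χ² statistic itself is computed from observed and expected frequencies; a minimal sketch with made-up nominal counts:

```python
# Hypothetical observed category counts and the counts expected
# under the null hypothesis.
observed = [30, 14, 34, 45, 57, 20]
expected = [20, 20, 30, 30, 45, 45]

# Chi-square: sum of (observed - expected)^2 / expected over all categories.
chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi_square, 2))  # 31.92
```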
170
Sign test [3]:
- Hypothesis states a difference - Related data (repeated measure/ matched pairs) - Nominal data
171
Sign test significance [2]:
- Calculated value of ‘S’ must be ≤ the critical value to be significant - If not significant we accept the null
172
One-tailed test [definition]:
Form of test used with a directional hypothesis
173
Two-tailed test [definition]:
Form of test used with a non-directional hypothesis
174
Degrees of freedom
*
175
levels of measurement =
Nominal, ordinal, interval, ratio
176
Report structure [6]:
1. Abstract 2. Introduction 3. Method 4. Results 5. Discussion 6. References
177
What is an abstract?
A summary of the study, e.g. its aims, hypotheses and method
178
What measure of central tendency is used for nominal data?
Mode
179
What measure of central tendency is used for ordinal data?
Median
180
What measure of central tendency is used for interval/ratio data?
Mean
181
What is the order of robustness for the measures of central tendency?
1. Mean 2. Median 3. Mode