Research Methods Flashcards

(123 cards)

1
Q

define “observation”

A

a non-experimental technique in which the researcher watches and records the spontaneous/natural behaviour of ppt. without manipulating the levels of an IV

2
Q

define “controlled observation”

A

aspects of the environment are controlled, in an attempt to give ppt. the same experience. This is often conducted in a laboratory setting.

3
Q

give a strength for controlled observations

A

controlling the environment and giving the same experience reduces the likelihood that extraneous variables are responsible for observed behaviour

4
Q

give a weakness for controlled observations

A

the artificiality of the observational environment may result in unnatural behaviour, unlike the behaviour shown in real-world situations.

5
Q

define “naturalistic observation”

A

takes place in the “real world”, in places the ppt. are likely to spend their time, such as school, work or their homes.

6
Q

give a strength for naturalistic observations

A

high realism: ppt. are more likely to show naturalistic behaviour.
high external validity: behaviour is more likely to be generalisable to other situations.

7
Q

give a weakness for naturalistic observations

A

uncontrolled extraneous variables may be responsible for the behaviour observed, resulting in lower internal validity

8
Q

define “overt observation”

A

the ppt. can see the researcher, and are aware their behaviour is being observed as part of an observational study.

9
Q

give a strength for overt observations

A

ethical: the principle of informed consent means ppt. agree to take part in the research knowing what they are signing up for.

10
Q

give a weakness of overt observations

A

demand characteristics are likely: if the ppt. know they are being observed, they may try to show behaviour that they think the researcher wants to see. social desirability bias may also be a factor, with ppt. acting to “look cool”.

11
Q

define a covert observation

A

the ppt. are not aware they are being observed and they can’t see someone taking notes/recordings.

12
Q

give a strength of covert observations

A

as ppt. are unaware they are being observed, they are far more likely to show naturalistic behaviour free from demand characteristics and social desirability bias.

13
Q

define a participant observation

A

the researcher joins the group being observed and takes part in the group’s activities and conversations.

14
Q

give a strength of participant observations

A

by taking part, the researcher may build rapport; greater trust and comfort could lead to the ppt. behaving more naturally and disclosing more.

15
Q

give a weakness of participant observations

A

researchers can lose objectivity: interpretation of behaviour becomes biased, seen only from the ppt. perspective. sometimes termed “going native”.

16
Q

define “non-participant observation”

A

the researcher is separate from the ppt., recording observations without taking part in the group's activities.

17
Q

give a strength of non-participant observations

A

the researcher is more likely to remain objective in the interpretation of the ppt. behaviour

18
Q

give a weakness of non-participant observations

A

due to a lack of trust/rapport with the ppt., the researcher may miss important insights, or the ppt. may not behave naturally.

19
Q

define “operationalised”

A

clearly defining a variable so that it can be measured

20
Q

define “observational design”

A

the choice of behaviours to record and how they are measured

21
Q

define “operationalised behavioural categories”

A

the behaviours need to be clearly identifiable and measurable, e.g. aggression = number of pushes, punches and kicks.

22
Q

define “time sampling”

A

researcher records all relevant behaviour at set points, e.g. everything for 15s every 10 mins over a 1-hour observation.
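
As a rough illustration (not part of the card), the example schedule above can be generated in a few lines of Python:

```python
# Time-sampling schedule: record for 15 s at the start of every
# 10-minute interval across a 1-hour observation (all times in seconds;
# figures taken from the example above).
OBSERVE_FOR = 15
INTERVAL = 10 * 60
TOTAL = 60 * 60

windows = [(start, start + OBSERVE_FOR) for start in range(0, TOTAL, INTERVAL)]
for start, end in windows:
    print(f"record behaviour from {start}s to {end}s")
# -> six recording windows: 0-15s, 600-615s, ..., 3000-3015s
```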

23
Q

define “event sampling”

A

researcher records/tallies every time a behaviour occurs from the list of operationalised behavioural categories.

24
Q

strength for time sampling

A

more flexibility, as unexpected types of behaviour can be recorded

25
Q

weakness for time sampling

A

can miss behaviour that happens outside of the recording periods

26
Q

define "assessing reliability" in relation to observations

A

even with clear behavioural categories, interpreting observed behaviour can be affected by bias. researchers should assess the reliability of their own observation by checking it is consistent with another researcher's observation.

27
Q

define inter-observer/rater reliability

A

two or more observers conduct the same observation:
-agree and use the same checklist/tally of operationalised categories
-the observation is conducted separately by each observer
-compare the two independently produced data sets
a test of correlation (e.g. Spearman's rho) can assess the strength of the relationship between the two data sets. a correlation of 0.8 or stronger is generally accepted.
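
A minimal Python sketch of this comparison, assuming scipy is installed (the tallies are invented):

```python
# Correlate two observers' independent tallies of the same
# operationalised behavioural categories (hypothetical data).
from scipy.stats import spearmanr

observer_a = [12, 7, 3, 9, 5]   # tallies per behavioural category
observer_b = [11, 8, 2, 10, 4]

rho, p = spearmanr(observer_a, observer_b)
print(f"Spearman's rho = {rho:.2f}")
if rho >= 0.8:  # the conventional threshold from the card
    print("acceptable inter-observer reliability")
```
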
28
Q

define "self-report techniques"

A

the ppt. reveal personal information about themselves (e.g. behaviours, emotions, beliefs, attitudes and memories) in response to a series of questions.

29
Q

define "open questions"

A

the ppt. can answer in any way they choose (e.g. "what do you think about…"). produces qualitative data.

30
Q

define "closed questions"

A

the question is phrased in a way which limits ppt. responses. produces quantitative data.

31
Q

give a strength for closed questions

A

quantitative data allows easy data analysis across large numbers of ppt. responses.

32
Q

give a strength for open questions

A

as the ppt. have the freedom to choose their responses, this can be argued to lead to more valid responses.

33
Q

what is the difference between correlations and experiments

A

experimental designs require manipulation of the independent variable and a measurement of the resulting change in the dependent variable. in a correlational study, no variables are manipulated; two co-variables are measured and compared to look for a relationship.

34
Q

define "co-variables"

A

the two factors/variables that are measured/collected by the researcher and then compared to each other.

35
Q

define "correlation coefficient"

A

represents both the strength and direction of the relationship between the co-variables as a number between -1 and +1.

36
Q

how are correlation coefficients calculated

A

using statistical tests. a correlation coefficient equal to or greater than 0.8 is usually judged to show a strong correlation.
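
A small illustration of reading a coefficient's sign and strength, with invented co-variable data (assumes scipy):

```python
# Correlation coefficient between two measured co-variables
# (hypothetical data).
from scipy.stats import pearsonr

hours_of_sleep = [4, 5, 6, 7, 8, 9]
mood_rating    = [3, 4, 4, 6, 7, 8]

r, p = pearsonr(hours_of_sleep, mood_rating)
print(f"r = {r:+.2f}")
# sign gives the direction (+/-), magnitude gives the strength:
# near +1 strong positive, near -1 strong negative, near 0 no relationship
```
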
37
Q

what is a weakness of correlations

A

correlation does not show causation. a correlation does not show which co-variable led to the change in the other co-variable.

38
Q

define content analysis

A

an indirect observational method used to analyse human behaviour by studying human artefacts. content analysis is often performed on the written word (non-numerical/qualitative data) or on write-ups of spoken words (transcripts). this is transformed into quantitative data.

39
Q

how do you perform a content analysis

A

1) decide a research question
2) select a sample
3) coding
4) work through the data
5) data analysis
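
A toy Python sketch of the coding step (steps 3-4); the coding frame and sample texts are invented:

```python
# Tally how often each coded category appears in a sample of texts,
# turning qualitative material into quantitative data.
coding_frame = {
    "aggression": ["fight", "hit", "argue"],
    "affection":  ["hug", "thank", "praise"],
}
sample = ["they argue and fight a lot", "a hug and a thank you"]

tallies = {category: 0 for category in coding_frame}
for text in sample:
    for category, keywords in coding_frame.items():
        tallies[category] += sum(text.count(word) for word in keywords)
print(tallies)  # e.g. {'aggression': 2, 'affection': 2}
```
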
40
Q

define test-retest reliability

A

run the content analysis again on the same sample and compare the two sets of data.

41
Q

define inter-rater reliability

A

a second rater also performs the content analysis with the same set of data and the same behavioural categories. compare the two sets of data.

42
Q

give a strength for content analysis

A

the "artefacts" are usually not created for research but are taken from the real world. this means content analysis has high external validity, and findings should be generalisable to other real-world situations.

43
Q

define thematic analysis

A

researchers start by attempting to identify the deeper meaning of the text by reading it first, and allowing themes to emerge.

44
Q

how do you perform a thematic analysis

A

-collect text/turn recordings into text
-read the text first to spot patterns
-re-read the text and look for emergent themes
45
Q

define case studies

A

a range of data collected from an individual, group or institution. mainly data is collected using interviews and observations, but content analysis can be performed on written evidence, and even experimental techniques can be included.

46
Q

give examples of things case studies are usually conducted on

A

-psychologically unusual individuals
-unusual events
-organisational practices
-typical individuals within a demographic

47
Q

what is the main form of data collected from case studies

A

qualitative data

48
Q

what are snapshot case studies

A

case studies that look at behaviour over a short period of time

49
Q

what are longitudinal case studies

A

changes in the behaviour of ppt. over a long period of time (years)

50
Q

give examples of where case studies are used

A

-in clinical psychology: Broca's research on the patient "Tan"
-in psychodynamic psychology: Freud's case study of Little Hans
-in childhood psychology: the case study of Genie

51
Q

strength of case studies

A

the holistic approach of conducting research is favoured by humanist psychologists, who argue the depth of detail gives highly valid insights and a true reflection of a person's experience.

52
Q

give a limitation of case studies

A

findings from one individual's case cannot be generalised to wider populations. also, exact replication of case studies is impossible.
53
Q

define the word "aim"

A

a clearly phrased general statement about what the investigator intends to research. can include the purpose of the study, for example following on from the findings of previous research to develop a theory.

54
Q

what is a hypothesis

A

a precise, testable statement including the levels of the independent variable and the dependent variable (or both co-variables for a correlational study).

55
Q

define operationalisation in terms of hypotheses

A

operationalised variables are carefully stated, demonstrating exactly how they are to be measured. for example, the dependent variable would be "the number of words recalled", not "recall"; the independent variable would need to state both levels.

56
Q

what is a null hypothesis

A

a null hypothesis states that there is no change in the measurement of the dependent variable as a result of the manipulation of the independent variable.

57
Q

what is an alternative hypothesis

A

also known as the research hypothesis, states that there is a change in the measurement of the dependent variable as a result of the manipulation of the independent variable.

58
Q

what is hypothesis testing

A

data is collected and statistical testing is conducted on the data. this provides evidence; if the evidence is strong enough, the null hypothesis can be rejected and the alternative hypothesis accepted.
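
A minimal sketch of this decision in Python, assuming scipy and made-up recall scores:

```python
# If the evidence against the null is strong enough (p below a chosen
# alpha), reject the null in favour of the alternative hypothesis.
from scipy.stats import ttest_ind

condition_a = [12, 15, 14, 13, 16, 15]  # e.g. words recalled per ppt.
condition_b = [10, 11, 9, 12, 10, 11]

t, p = ttest_ind(condition_a, condition_b)
if p < 0.05:
    print(f"p = {p:.3f} < 0.05: reject the null hypothesis")
else:
    print(f"p = {p:.3f}: retain the null hypothesis")
```
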
59
Q

what is a non-directional hypothesis (two-tailed)

A

states that there is a difference in the measurement of the dependent variable (as a result of the manipulation of the IV) but not the direction the results will go.

60
Q

what is a directional hypothesis (one-tailed)

A

states that there is a difference in the measurement of the dependent variable (as a result of the manipulation of the IV) and says which way the results will go.

61
Q

what is falsifiability

A

the more a theory is able to withstand attempts to falsify it, the greater the confidence we have in that theory, but our confidence can never reach 100% certainty.

62
Q

define "target population"

A

every member of the group that the investigator plans to study. as the target population could contain millions of people, they cannot all be studied.

63
Q

define random sampling

A

each member of the target population has a mathematically equal chance of being in the experiment's sample.

64
Q

how do you conduct random sampling

A

-the researcher needs a full list of the entire target population
-all names are entered into a container
-a number of names equal to the sample size are pulled from the container
-the names pulled out form the sample
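
A minimal sketch of this procedure in Python, with a hypothetical list of names standing in for the "container":

```python
# Random sampling: every member of the target population has an equal
# chance of selection.
import random

target_population = [f"ppt_{i}" for i in range(1, 101)]  # the full list
sample = random.sample(target_population, 10)            # draw 10 names
print(sample)
```
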
65
Q

what is a strength of random sampling

A

avoids researcher bias, as the researcher cannot choose the ppt. they want to form the sample.

66
Q

what is a limitation of random sampling

A

-could result in an unrepresentative sample, perhaps not representing all minority groups
-time consuming

67
Q

define systematic sampling

A

ppt. are chosen from a list of the target population. every nth ppt. is chosen to form the sample.

68
Q

how do you conduct systematic sampling

A

-obtain a full list of the target population
-choose every nth ppt.
-repeat the process until the sample is chosen
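
A minimal sketch in Python, with a hypothetical list and n = 10:

```python
# Systematic sampling: take every nth name from the full list of the
# target population.
target_population = [f"ppt_{i}" for i in range(1, 101)]
n = 10
sample = target_population[n - 1::n]  # ppt_10, ppt_20, ..., ppt_100
print(sample)
```
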
69
Q

strength of systematic sampling

A

reduces researcher bias

70
Q

weakness of systematic sampling

A

-time consuming
-may not be representative

71
Q

define opportunity sampling

A

the researcher directly asks available members of the target population to take part in the research.

72
Q

what is a weakness of opportunity sampling

A

-researcher bias
-sample may not be representative of the whole target population

73
Q

what is volunteer sampling

A

also known as self-selecting sampling: ppt. offer to take part after finding out about the research, most likely through adverts.

74
Q

what is a weakness of volunteer sampling

A

-volunteer bias

75
Q

what is stratified sampling

A

by selecting from within strata, the characteristics of ppt. within the sample are in the same proportion as found within the target population.

76
Q

how do you conduct stratified sampling

A

1) strata/subgroups are identified along with their proportion in the target population (e.g. gender, ethnicity, education level)
2) random sampling is then used to select the number of ppt. required from each stratum
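
A minimal Python sketch of both steps (the strata and sample size are invented):

```python
# Stratified sampling: sample from each stratum in proportion to its
# share of the target population, using random sampling within strata.
import random

strata = {
    "left-handed":  [f"L_{i}" for i in range(10)],   # 10% of population
    "right-handed": [f"R_{i}" for i in range(90)],   # 90% of population
}
sample_size = 20
total = sum(len(members) for members in strata.values())

sample = []
for name, members in strata.items():
    k = round(sample_size * len(members) / total)  # proportional share
    sample += random.sample(members, k)            # random within stratum
print(sample)  # 2 left-handed + 18 right-handed ppt.
```
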
77
Q

strength of stratified sampling

A

-representative sample
-no researcher bias

78
Q

weakness of stratified sampling

A

-time consuming
-the researcher chooses the strata, which may introduce some bias

79
Q

what is the repeated measures design

A

the same ppt. complete two or more levels of the independent variable.

80
Q

what is the independent groups design

A

different ppt. complete the two or more levels of the independent variable. ppt. are randomly allocated to each condition to avoid researcher bias.

81
Q

what type of data does the independent groups design produce

A

unrelated data.

82
Q

what is a weakness of the independent groups design

A

ppt. variables: if more ppt. with a particular characteristic (e.g. age) are randomly allocated to one of the groups, this can influence the measurement of the DV (an extraneous variable).

83
Q

what type of data does the repeated measures design produce?

A

related data

84
Q

what is a weakness of the repeated measures design?

A

order effects: taking part in the first condition could influence performance in the second condition. ppt. are also more likely to figure out the aim of the study and alter their behaviour = demand characteristics.

85
Q

how can you control order effects

A

by using counterbalancing (ABBA): half the ppt. complete condition A first then B, the others do B then A.
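
A minimal sketch of ABBA counterbalancing in Python (ppt. names are hypothetical):

```python
# Half the ppt. complete condition A then B, the other half B then A,
# so order effects cancel out across the sample.
import random

ppts = [f"ppt_{i}" for i in range(1, 21)]
random.shuffle(ppts)  # random allocation to the two orders
half = len(ppts) // 2
orders = {p: ("A", "B") for p in ppts[:half]}
orders.update({p: ("B", "A") for p in ppts[half:]})
print(orders["ppt_1"])  # this ppt.'s condition order
```
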
86
Q

what is the matched pairs design

A

different ppt. take part in each of the two or more conditions of the experiment. ppt. are first assessed and ranked on a characteristic (e.g. aggression), and then the top two ppt. are randomly assigned to separate conditions.

87
Q

what are strengths of the matched pairs design

A

-reduces ppt. variables
-no order effects

88
Q

what are weaknesses of the matched pairs design

A

-time consuming
-needs more ppt.
-there may still be ppt. variables, as matched ppt. are similar but not identical

89
Q

what variables are measured in a correlational study

A

co-variables. in a correlational study, two measured co-variables are assessed for a relationship.

90
Q

what is an independent variable

A

the variable which the researcher manipulates

91
Q

what is a dependent variable

A

the variable which the researcher measures

92
Q

what is an extraneous variable

A

any other variable which can influence the measurement of the dependent variable.

93
Q

what is a confounding variable

A

a variable other than the independent variable that changes systematically between the levels of the IV.

94
Q

what is a pilot study

A

a small-scale version of the main research study conducted before the main study.
95
Q

what are the aims of conducting pilot studies

A

to improve the quality of the main research study by assessing the experience of the ppt. in the pilot. this is because pilot studies can reveal:
-unexpected extraneous variables that need to be controlled…

96
Q

what is informed consent in regards to ethical issues

A

consent is not valid if ppt. are not informed of what they are agreeing to. so, before the research, ppt. should be made aware of the aims and consequences of taking part in research.

97
Q

define the "right to withdraw"

A

ppt. (as part of giving informed consent) should be told they can withdraw from the study at any stage with no adverse consequences (e.g. losing payment for their time). this includes withdrawing their data.

98
Q

define confidentiality

A

ppt. personal data should be kept securely by the researcher, and not shared. when the research is published, it should not include the identity of ppt. or information that could reveal the identity of ppt.

99
Q

define debriefing

A

once the research is complete, the researcher should offer a debriefing. this would reveal any information withheld, such as the existence of other groups.

100
Q

define peer review

A

before publication in a journal, an author's scientific paper is assessed by people who are experts in the same scientific area.

101
Q

define reliability

A

saying results are reliable is another way of saying the results are consistent. if the researchers replicate their study exactly, they will get similar results.

102
Q

define external reliability

A

the extent to which a measure is consistent when repeated (e.g. the results of a study are consistent with an exact replication at a different time and/or with different ppt.)

103
Q

define internal reliability

A

the extent to which different parts of a measure are consistent with each other.

104
Q

what method would you use to assess internal reliability

A

split-half method:
-split the test into two parts
-ppt. complete both parts
-test the strength of the correlation between the two parts of the measure
-a strong correlation indicates internal reliability
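
A minimal sketch of the split-half check in Python, assuming scipy and invented item scores:

```python
# Correlate ppt. totals on the odd and even items of a test; a strong
# correlation suggests internal reliability.
from scipy.stats import pearsonr

scores = [            # each row = one ppt.'s scores on a 6-item test
    [1, 0, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 0],
]
odd_half  = [sum(row[0::2]) for row in scores]  # items 1, 3, 5
even_half = [sum(row[1::2]) for row in scores]  # items 2, 4, 6

r, _ = pearsonr(odd_half, even_half)
print(f"split-half correlation r = {r:.2f}")
```
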
105
Q

what method would you use to assess external reliability

A

test-retest: repeat the study using the same procedures/measuring devices at different times and test the correlation between the two versions.

106
Q

how would you use inter-observer reliability to assess external reliability

A

two or more observers record behaviours during the same observation using the same behavioural categories; then, they test the correlation between each tally of behaviour to identify if the behavioural categories are appropriately operationalised.

107
Q

define internal validity

A

questions the cause and effect relationship between the change the researcher made to the independent variable (IV) and the observed change in the dependent variable (DV). if the change in the DV was influenced by ANY OTHER FACTOR than the IV, the findings lack internal validity.

108
Q

define external validity

A

questions if a study's findings can be generalised beyond the study: from the sample used to the target population, and from the experimental set-up to other "real world" settings and activities.

109
Q

define social desirability bias

A

ppt. hide their genuine opinions/behaviours and instead act/respond in a more socially acceptable way to "look good".

110
Q

define mundane realism

A

the extent to which the task/materials/activities used in an experimental set-up are similar to the stimuli experienced in the real world.

111
Q

define population validity

A

the extent to which the sample used in the study is representative of the target population.

112
Q

define temporal validity

A

the extent to which the findings of a study can be generalised to other time periods. generally asked of older studies.

113
Q

define face validity

A

does the test appear to measure what it claims to be measuring?

114
Q

define concurrent validity

A

the extent to which data from a newly created test is similar to an established test of the same variable conducted at the same time. a test of correlation assesses this; there is high concurrent validity if the strength of the correlation is +0.8 or higher.
115
Q

what is the empirical method

A

the process of collecting data from direct experience. in psychological research, this is the data we gather from direct observation of ppt. this includes observation but also experimentation, self-report, case studies and content analysis.

116
Q

define objectivity

A

data should be collected and interpreted in ways that avoid bias, meaning the data is not influenced by the researcher's opinions or expectations. research that has been affected by bias produces subjective conclusions.

117
Q

what are ways to improve objectivity

A

systematic data collection, double-blind procedures, peer review

118
Q

define replicability

A

scientists are required to carefully record their methods and produce standardised procedures so that other scientists can repeat the research.

119
Q

define falsifiability

A

Karl Popper argues that the ability to collect supporting evidence for a theory is not enough for that theory to be genuinely scientific. for a theory to be genuinely scientific, it needs to be constructed in a way that allows it to be empirically tested. this means the theory can be tested in a way that could demonstrate it is not true.

120
Q

define paradigm shift

A

the philosopher of science Thomas Kuhn (1962) suggests scientific fields develop through a series of "scientific revolutions" known as paradigm shifts.

121
Q

what are the stages of an inductive/bottom-up process of theory construction

A

1) observation
2) construct a testable hypothesis
3) conduct an experiment and gain experimental data
4) propose a theory that explains the results

122
Q

define meta-analysis

A

a process that collects and combines the results of a range of previously published studies asking similar research questions. the data collected is compared and reviewed together, and part of this review can include statistically combining all the data to produce an overall effect size and conclusion.
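
A minimal fixed-effect sketch of the "overall effect size" step in Python (all numbers are invented; real meta-analyses use dedicated tooling):

```python
# Combine per-study effect sizes with inverse-variance weights to get
# a single pooled estimate (hypothetical effect sizes and variances).
effects   = [0.30, 0.45, 0.25, 0.50]  # e.g. Cohen's d from four studies
variances = [0.02, 0.05, 0.03, 0.04]

weights = [1 / v for v in variances]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
print(f"overall effect size = {pooled:.2f}")
```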