Research methods Flashcards

1
Q

main methods

A
  • experiments
  • observation
  • self-report
  • correlations
2
Q

other methods

A
  • meta analysis
  • case studies
  • content analysis
3
Q

introduction

experiments

A
  • one of the main methods
  • allows measurement of the effect of one variable on another
4
Q

introduction

IV

A

change
(independent: ppl can change it)

5
Q

DV

A

measure

6
Q

introduction

observation

A
  • aim is to watch behaviour without manipulating it
  • reduces bias and increases validity, as this is natural behaviour
  • e.g. watching CCTV footage, observing through a two-way mirror
7
Q

introduction

self-report

A
  • questionnaires
  • interviews
8
Q

introduction

correlation

A

(most forgotten)
- rather than seeing how one variable affects the other, we see whether two variables are associated
- e.g. do students who study longer get better grades?
- a scattergraph/scattergram is used
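As a rough illustration of the grades example, here is a minimal Python sketch (the study-hours and grade values are invented, purely hypothetical) that computes a correlation coefficient for two co-variables, the kind of association a scattergram visualises.

```python
# A rough sketch (hypothetical values) of correlating two co-variables.
from statistics import mean

study_hours = [2, 4, 5, 7, 9]       # co-variable 1 (invented)
grades      = [48, 55, 60, 71, 80]  # co-variable 2 (invented)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# A value close to +1 means a strong positive association (longer study, better grades).
print(round(pearson_r(study_hours, grades), 2))
```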

9
Q

meaning

reliable

A

consistency

10
Q

meaning

valid

A

accuracy

11
Q

directional

A

when the direction of the difference or relationship is stated in the hypothesis

12
Q

non-directional

A

when a difference or relationship is predicted but its direction is not stated

13
Q

3 criteria for observation

A

covert/overt
participant/non-participant
controlled/natural

14
Q

natural

A

watching and recording behaviour in the setting where it would naturally occur e.g. a nature documentary
+ fewer demand characteristics and less social desirability
+ high ecological validity
- can't control extraneous variables
- hard to replicate, unreliable?

15
Q

controlled

A

watching and recording behaviour within a structured environment where variables are controlled e.g. Zimbardo
+ can control extraneous variables
+ easier to replicate, reliable
- demand characteristics, social desirability

16
Q

overt

A

watching and recording when ppts know they are being watched e.g. CCTV
+ fewer ethical issues
- higher chance of ppt reactivity

17
Q

covert

A

observing and recording without ppts' knowledge e.g. hidden cameras, the Tearoom Trade study
+ high ecological validity
- ethical issues
- psychological harm, anxiety

18
Q

ppt

A

when the researcher disguises themselves as a member of the group they are observing e.g. Jake Peralta
+ increased insight
+ may increase validity
- subjectivity
- deception

19
Q

non ppt

A

when the researcher remains outside of the group they are watching e.g. an exam invigilator
+ more objective
- may miss out on valuable insight

20
Q

observational design

unstructured

A

when the researcher records everything they see
+ qualitative detail
- hard to pay attention to everything

21
Q

Behavioural categories

A

when a target behaviour is broken down into components that are observable, measurable and specific
e.g. leaving the room, laughing

22
Q

practice behavioural categories

anger

A

shouting
clenched fists
arched eyebrows

23
Q

practice behavioural categories

affection

A

smiling
hugging
holding hands

24
Q

observational design

sampling methods

A

refers to how often data is recorded, not how ppts are selected

event
time

intervals in time sampling refer to the time between observations

25
# sampling method event sampling
counting the number of times a particular event occurs
26
# sampling method time sampling
recording behaviour within a pre-established time frame e.g. every 30 seconds
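A minimal sketch of the two sampling methods, assuming an invented list of times (in seconds) at which the target behaviour was seen: event sampling tallies every occurrence, while time sampling only records within fixed 30-second intervals.

```python
# Hypothetical times (in seconds) at which the target behaviour was seen.
occurrences = [5, 12, 31, 44, 47, 95, 121]
observation_length = 150   # total observation in seconds
interval = 30              # time-sampling interval

# Event sampling: count every occurrence of the behaviour.
event_tally = len(occurrences)

# Time sampling: record only whether the behaviour occurred in each interval.
time_sample = [
    any(start <= t < start + interval for t in occurrences)
    for start in range(0, observation_length, interval)
]

print(event_tally)   # 7 occurrences in total
print(time_sample)   # [True, True, False, True, True]
```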
27
Inter-observer reliability
this is when 2 or more observers make consistent judgments about the data they recorded 1. carry out a pilot study using behavioural categories 2. collect observed data from both observers (independently) 3. check for a correlation between the two sets of data 4. if there's a strong correlation there is high inter-observer reliability
28
# A03 structured
quantitative numerical results, less detail, easy to compare
29
# A03 non structured
qualitative data more detail, observer bias
30
# A03 behavioural categories
+ structured, less open to interpretation, findings are more objective - overlapping categories e.g. offensive language, swearing - dustbin categories - missed-out categories
31
# A03 event sampling
+ useful when the target behaviour is infrequent - we may overlook other important information if we are too focused on one thing e.g. fixated on one child in the playground
32
# A03 Time sampling
+ fewer observations need to be made so less overwhelming, fresh eyes, quicker analysis - we may miss important events during the intervals so data doesn't have high internal validity
33
Pilot study
- a small-scale trial run of the investigation - the study is initially conducted with a small sample to check that the procedure & materials are appropriate - this is also done with TV shows + gives us the opportunity to improve the study + ensures that the study runs smoothly + saves us time, money and effort
34
stats we use to describe trends/ patterns and differences in data
- MoCT: mean, median, mode - measures of dispersion: range, IQR - reading & interpreting graphs, tables, bar, line - graphical displays - dispersion
35
Distribution
the spread of frequency data for a particular variable (how data is distributed) types: normal/skewed (positively or negatively)
36
why look at distribution
- tells us about the frequency data for a particular variable across a particular population - it can show differences across populations and is used to identify statistical infrequency
37
# Distributions what i need to know
- definition of each type - how to draw a distribution using MoCT - interpreting a distribution based on measures of central tendency, which may include describing how the distribution may be skewed, or comparing distributions from different data sets
38
Normal distribution
a symmetrical spread of frequency that forms a bell-shaped curve. The mean, median and mode are all located at the highest point of the curve [see physical card]
39
Skewed distribution
a spread of frequency data that is not symmetrical, where data clusters to one end
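A minimal sketch (invented test scores, purely for illustration) of how the measures of central tendency behave: in a roughly symmetrical set they sit together at the peak, while a few extreme high scores drag the mean above the median and mode (a positive skew).

```python
from statistics import mean, median, mode

# Invented test scores for illustration only.
symmetrical = [4, 5, 5, 6, 6, 6, 7, 7, 8]
positively_skewed = [4, 5, 5, 5, 6, 6, 7, 15, 20]

for scores in (symmetrical, positively_skewed):
    # In the symmetrical set the three measures coincide at the peak;
    # in the skewed set the mean is dragged towards the high tail.
    print("mode:", mode(scores), "median:", median(scores), "mean:", round(mean(scores), 1))
```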
40
Explain why the psychologist did a pilot study
To test whether ppts understand the standardised instructions, the timing etc., so adjustments can be made before the real study to save time and money
41
Descriptive statistics
the use of graphs, tables and summary statistics to analyse trends in sets of data - measures of central tendency - measures of dispersion
42
measures of central tendency
the general term for any measure of the average value in a set of data
43
measures of dispersion
how far scores vary and differ from one another (the spread of scores)
44
the measures
CT: mean, mode, median; D: range, standard deviation
45
mean
the arithmetic average calculated by adding up all the values in a set of data and dividing by the number of values there are
46
median
the central value in a set of data when values are arranged from lowest to highest
47
Experimental design
- the different ways in which the testing of ppts can be organised in relation to experimental conditions e.g. independent groups, repeated measures, matched pairs
48
Independant groups
ppts are allocated to different groups where each group represents one condition - when having more than 1 group we usually have an experimental condition and a control condition - ppts will be split and allocated to one of the two groups - this means that all ppts only experience one condition each - scores/behaviour from each group will then be compared - mean+compare
49
Repeated measures design
all ppts take part in all conditions of the experiment - with this design we still have an experimental and a control condition - however ppts will take part in all conditions one at a time - we usually use this method to test before/after
50
# experimental design controls
randomisation, standardisation, counterbalancing
51
single blind procedures
when ppts are unaware of what condition/group they're in | prevents demand characteristics
52
double blind procedures
when both participants and researchers are unaware of what condition/group they're in | prevents investigator effects
53
# controls Randomisation
the use of chance in order to control for the effects of bias when designing materials and deciding the order of conditions - this is different from random allocation | random allocation only refers to randomly assigning ppts to their conditions/groups
54
what should we randomise in an experiment?
- alphabetical order - difficulty of words - meaning of words this controls demand characteristics and investigator effects
55
Standardisation
using exactly the same formalised procedures and instructions for all ppts in a research study - if conditions aren't the same for all ppts they may have different experiences, unreliable - also controls for investigator effects - e.g. task instructions, word lists
56
Counterbalancing
when ppts in a repeated measures design are split in half, with half the ppts completing the conditions in reverse order - we use this control when we have a repeated measures design - aims to prevent order effects, the main confounding variable we may come across using this design
57
Explain how to counterbalance
- split ppts into 2 groups - get the 1st group to complete the conditions in one order - get second group to do conditions in opposite order - put the data together and compare each condition (not each group)
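A minimal sketch of the allocation described above, assuming six hypothetical participants and two conditions labelled A and B.

```python
# Six hypothetical participants, two conditions A and B.
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]
half = len(participants) // 2

# First half complete A then B, second half complete B then A.
orders = {p: ("A", "B") for p in participants[:half]}
orders.update({p: ("B", "A") for p in participants[half:]})

for p, (first, second) in orders.items():
    print(p, "does condition", first, "then condition", second)

# Scores are later grouped by condition (all A vs all B), not by group.
```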
57
Range and standard deviation
both measure the spread of data - standard deviation is just a more sophisticated measure. Rule: the bigger the standard deviation value, the wider the spread of data and the more variation in scores/data; the smaller the standard deviation value, the closer the spread of data and the less variation in scores/data (scores are roughly at the same level)
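A minimal sketch of that rule of thumb, using two invented sets of scores; the second set is more spread out, so its range and standard deviation come out larger.

```python
from statistics import pstdev  # population standard deviation

consistent = [10, 11, 10, 9, 10]  # invented scores, close together
varied     = [2, 18, 5, 15, 10]   # invented scores, spread out

for scores in (consistent, varied):
    spread = max(scores) - min(scores)                 # range
    print("range:", spread, "SD:", round(pstdev(scores), 2))
# The second set gives the larger range and SD, i.e. more variation.
```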
58
quantitative
data that can be counted, usually given as numbers e.g. scores on a test; can be analysed statistically and converted into graphs/charts
59
qualitative
data that is expressed in words and is non-numerical e.g. interviews, diaries, observation notes - it may be converted into quantitative data later on for further analysis
60
# evaluation qualitative
+ richness of detail + allows ppts to give thoughts and opinions + data tends to be more meaningful + high validity - can be difficult to analyse/compare - hard to find trends and patterns - analysed under subjective interpretation - may lead to researcher bias, low internal validity
61
# evaluation quantitative
+ less subjective + easier to analyse + conclusions can be drawn more quickly - may include less detail - may not fully represent real life/behaviour, lacking external validity
62
Presentation of quantitative data
- tables: usually display the mean and SD for each group - bar charts: graphs that show the frequency of each variable represented by the height of the bars - scattergraphs: graphs that represent the strength/direction of a relationship between co-variables
63
How to draw a graph
1. labelled x-axis (operationalised) 2. descriptive title 3. labelled y-axis 4. correctly plotted data (bars with gaps)
64
Bar charts vs Histograms
bar: categorical data e.g. class A vs class B, tall vs short; histograms: bars are connected as the data runs on a continuous scale
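A minimal plotting sketch of that difference (hypothetical data; assumes matplotlib is installed): the bar chart takes discrete categories, the histogram bins a continuous variable into touching bars.

```python
import matplotlib.pyplot as plt

fig, (left, right) = plt.subplots(1, 2)

# Bar chart: discrete categories, drawn as separate bars.
left.bar(["class A", "class B"], [14, 9])
left.set_title("bar chart (categorical)")

# Histogram: a continuous variable binned into touching bars.
reaction_times = [0.31, 0.42, 0.40, 0.55, 0.38, 0.47, 0.61, 0.44, 0.52, 0.36]
right.hist(reaction_times, bins=5)
right.set_title("histogram (continuous)")

plt.show()
```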
65
extraneous variable
any variable that has the potential to affect the DV; ppt/situational variables - affect the person/setting
66
confounding
any variable that affects only one group/condition in a study - they vary systematically with the IV
67
ppt variables | examples
fatigue, knowledge of task, energy, concentration, mood
68
situational variables | examples
temperature of room, distractions, noise outside, comfort
69
confounding variables
an extraneous variable that only affects one group e.g. Y7s watching a movie when observing whether Y9s or Y7s have better concentration levels
70
Investigator effects
anything that a researcher may say or do (consciously or subconsciously) that may influence the behaviour of a ppt e.g. researcher's movements, tone of voice
71
social desirability
- when a ppt changes behaviour in order to be favoured by others - if the true answer is taboo/embarrassing you may change it to avoid judgment
72
demand characteristics
when the ppt changes their behaviour due to guessing the aim of the study
73
Leading questions
"what do you think about Rishi Sunak?" "what do you think about our so-called PM Rishi Sunak?" no longer testing what we aim, low internal validity
74
when is it suitable to carry out a case study rather than an experiment
- get more qualitative data - in depth - specific scenarios - historical context - things that aren't observable/operationalisable
75
case study
an in-depth, detailed investigation of an individual or a small group (e.g. victims of the same event); they use a wide variety of techniques including retrospective data, interviews, observations and psychological testing; they are usually longitudinal (long pieces of research) and are carried out over a long time period
76
# evaluation case studies | strengths
+ in depth, more qualitative data, high explanatory power, can dis/prove theories, theoretical value + richer in detail, can explore sensitive topics, can provide direction for new hypotheses, practical value
77
# evaluation case studies | weaknesses
- low generalisability, looks at one person, small sample, low population validity - retrospective data - may be issues such as data being outdated and not useful to the modern day - must rely on self-report - social desirability/poor memory, inaccurate, low internal validity - subjective, bias, investigator effects, researchers may only look into areas they are interested in, low internal validity - ethical issues, breach of confidentiality e.g. using their real name, psychological harm - accessing trauma
78
# types of experiments lab
when the researcher manipulates an IV to test its effect on the DV in a controlled setting e.g. Skinner, Bandura
79
# types of experiments field
when the researcher manipulates an IV to test its effect on the DV in a natural setting e.g. a bus, a park
80
# evaluation Natural
rely on natural events, so control is limited + the events (IV) are real and the effects are real, can be sure the DV is natural, high ecological validity - the events are rare so you can't check for reliability, no replication - lots of extraneous variables we cannot control, low internal validity
81
# evaluation Quasi
+ high control (just like a lab), high internal validity - we cannot control group allocation - if the IV is within ppts we cannot control how we separate them into groups - there may be existing differences between our groups (other than the IV) that affect our study - these are confounding variables
82
weakness of quasi and natural
neither method allows us to establish cause and effect - the purpose of an experiment is to measure the effect that one variable has on another - due to the lack of control we can never truly know whether it is in fact the IV that changes the DV (it could be other extraneous variables), this lowers the internal validity
83
# evaluation Lab
+ high control, minimise extraneous variables, high internal validity + easy to replicate, reliable + less likely to have ethical issues - lacks ecological validity, artificial task/setting doesn't reflect real life
84
# evaluation Field
- low control, extraneous variables, low internal validity - harder to replicate, real life is unpredictable - ethical issues more likely to arise, informed consent, right to withdraw, deception, goes against BPS guidelines + high ecological validity, ppts are more likely to behave naturally, natural setting/task, reflects real life
85
Natural experiment
when the change in the IV is not brought about by the researcher (naturally occurring event); they just record the effect on the DV e.g. the number of students that applied before the fees tripled, how many flights were booked to Ukraine before/after the war
86
Quasi
when the IV pre-exists within ppts and has not been determined by anyone; technically not a real experiment, no manipulation e.g. different testosterone levels in males and females, something that's fixed, IQ
87
open questions
qualitative data, no fixed response, detailed
88
closed questions
“yes or no” answers
89
Content analysis
A research technique that enables the indirect study of behaviour by examining visual, written or verbal material
91
Content analysis steps
1. choosing a sampling method (time/event sampling) 2. coding the data (using behavioural categories) 3. choosing a method of data representation (qualitative/quantitative)
93
Coding
deciding on the specific behaviours/codes you wish to focus on in the content analysis
94
1. Choosing sampling method content analysis
Time or event sampling. Depending on the material you are analysing you will choose either to tally the chosen behaviour/code or to record it within timed intervals. E.g. written material - event sampling; e.g. speech/film - time sampling
95
2. Coding (content analysis)
Decide on the specific behaviours/codes you wish to focus on in the content analysis. E.g. for an episode of The Simpsons you may choose behaviours that represent aggression such as shouting, hitting, insulting language. For a speech from the Prime Minister you may choose words like 'I', 'us', 'we'. Then you are ready to carry out the content analysis
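A minimal sketch of coding with event sampling, using an invented one-line transcript and invented aggression codes: each code is tallied wherever it appears in the written material.

```python
# Invented one-line transcript and invented aggression codes.
transcript = "he was shouting and hitting the door, then shouting again"
codes = ["shouting", "hitting", "insulting"]

tallies = {code: transcript.count(code) for code in codes}
print(tallies)   # {'shouting': 2, 'hitting': 1, 'insulting': 0}
```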
96
3. Data representation
Qualitative or quantitative? Displaying data with a qualitative method: - grouping together commonly used phrasing (verbal aggression) - simplifying observations by putting them into categories e.g. physical, verbal. Displaying data with a quantitative method: - bar chart - compare behaviours - pie chart - show proportions. Finish the content analysis by drawing conclusions from the data found
97
Content analysis RELIABILITY (often forgotten in exam)
INTER-RATER RELIABILITY (don't say observer) - two or more raters code the same material independently and the sets of data are correlated to check for agreement
98
Thematic analysis
a qualitative approach where, instead of counting pre-set codes, the researcher identifies recurring themes (ideas that keep cropping up) within the material and uses them to summarise it
99
Strengths of content and thematic analysis
+ few ethical issues as the material (e.g. TV programmes, newspapers) is usually already in the public domain + based on real communication so high external validity + easy to replicate as the sources are available to others
100
Weaknesses of content and thematic analysis
- material is analysed outside the context it was produced in, so the researcher may attribute motives that were not intended - coding and themes can be subjective, so there is a risk of researcher bias, lowering validity
101
Falsifiability
The principle that a theory cannot be considered scientific unless it admits the possibility of being proven untrue. Karl Popper - a philosopher who insisted that a theory must be falsifiable to be deemed scientific
102
The theory of falsifiability
even if a theory has been tested multiple times this doesn't mean it's true, it just hasn't been proven false yet. E.g. claiming the earth is flat can be disproven; '118 elements exist' is treated as a fact, but this changes when people find another element. Falsifiable theories are therefore the strongest as they have not yet been disproven, despite efforts. E.g. believing there are only white swans will be disproven when you see a black swan
103
Paradigms
A set of shared assumptions and agreed methods within a scientific discipline. Thomas Kuhn - a paradigm separates a scientific discipline from a non-scientific discipline. He suggested that social sciences are seen as 'pre-sciences' because they lack a universally accepted paradigm; psychology has too much internal disagreement, such as conflicting approaches and issues in debates, unlike biology (evolution) and physics (gravity). He said that progress within a particular science occurs when there's a scientific revolution - the result of this is known as a paradigm shift
104
Paradigm shift
A result of a scientific revolution: a significant change in the dominant unifying theory within a scientific discipline. 1. a handful of researchers begin to question the accepted paradigm 2. the questioning begins to gather popularity and pace 3. eventually there is too much contradictory evidence to ignore and a paradigm shift occurs. Relevance/importance: allows a science to shift from one paradigm to another
105
Example of paradigm shifts in psychology
Approaches
106
THEORY
T theory construction H hypothesis testing E empirical methods O objectivity R replicability Y!
107
For exam (features of science )
- define each term - identify examples of each feature - discuss why the feature is important (for science)
108
T theory construction
A theory is a set of general laws and principles that have the ability to explain particular events or behaviours. Construction happens by gathering evidence via direct observation. Psychological theories provide understanding by explaining regularities in behaviour. Theories can develop via an inductive or deductive model: inductive (hypothesis testing leading to theory), deductive (theory leading to hypothesis testing)
110
Inductive
Hypothesis testing leads to theory. E.g. Milgram and Asch
111
Deductive
Theory leading to hypothesis testing E.g. 44 thieves affectionless psychopathy
112
Hypothesis testing
This is when the validity of a theory is tested - findings are then used to support or challenge the theory - a good theory must be able to generate testable expectations. E.g. of a theory that hasn't: Freud's psychosexual stages, role of defence mechanisms
113
Empiricism
When information is gathered via direct observation and experiment rather than unfounded beliefs: "I will believe it when I see it". Arguing that something exists is no longer enough to be deemed scientific - there must be concrete (empirical) evidence to support claims. E.g. of empirical evidence: Skinner's rats, Pavlov's dogs
114
Objectivity
when all sources of personal bias are minimised so they do not distort/influence the research process. Objectivity is the basis of empirical methods. Researchers must keep a critical distance from their research to ensure that expectations don't affect it; this can be done by not involving yourself in the testing/research, avoiding investigator effects. Methods must be carefully controlled to ensure the highest levels of objectivity e.g. reduce extraneous variables, higher level of validity
115
Replicability
the extent to which scientific procedures and findings can be repeated by another researcher. If a scientific theory is to be trusted its findings must be repeatable across a number of different contexts and circumstances; this makes the findings more reliable. Replication can also determine the validity of a theory; replication can strengthen the reliability of findings as well as the validity of theories, leading to credibility
116
Reliability
Consistent methods/results. A measuring device is said to be reliable if it produces consistent results every time it's used e.g. scales, stopwatch, ruler - concrete measures. But in psychology can we be sure that abstract measures are reliable? e.g. psychological tests and questionnaires
117
Method to improve reliability
Test-retest reliability
118
Test - retest reliabiltiy
A method of assessing the reliability of a questionnaire or psychological test by assessing the same person or group on two separate occasions; if the measure produces the same answers on the second round it's considered reliable. E.g. IQ tests - your IQ should be the same today as it is next week. There must be enough time between your first and second test (memory). Scores from the first and second test are then correlated to check for similarity - looking for a positive correlation
119
Results: Inter-observer reliability
The extent to which there is agreement between two or more researchers involved in observing/measuring behaviour; avoids bias and subjectivity. Researchers generally run a pilot study to test their behavioural categories/variables/questionnaires before the real thing. Data sets are then correlated to check similarity - the correlation coefficient must be higher than 0.8 for the data to have high inter-observer reliability
120
Test retest reliability steps
1. Give participants the test/questionnaire and collect the data 2. give the same participants the same test/questionnaire on a later occasion and collect the data 3. compare the data collected from each participant on both occasions 4. look for a correlation between the scores; if the correlation is higher than 0.8 the test/questionnaire has high test-retest reliability
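A minimal sketch of step 4, with invented scores for five hypothetical participants on the two occasions: the two sets of scores are correlated and compared against the 0.8 rule of thumb from the card above.

```python
# Invented scores for five participants on the two occasions.
first_test  = [30, 25, 40, 35, 28]
second_test = [31, 24, 39, 36, 27]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs)
    sy = sum((y - my) ** 2 for y in ys)
    return cov / (sx * sy) ** 0.5

r = pearson_r(first_test, second_test)
print(round(r, 2), "- reliable" if r > 0.8 else "- not reliable")
```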
121
Interobserver reliability steps
1. Run a pilot study and have the researchers measuring/observing the same thing e.g. behavioural categories 2. collect data from each researcher and compare 3. check the correlation between the sets of data 4. if there is a correlation above 0.8 there is high inter-rater reliability
122
Ways of improving reliability
Questionnaires: closed questions, avoid jargon and leading questions. Interviews: training, no leading questions, structured interviews. Experiments: control variables (lab experiment), more replication, standardised conditions. Observations: operationalise behavioural categories
123
Assessing and improving accuracy
How accurate the findings or measuring devices are (are we measuring what we think we're measuring?)
124
Types of validity
internal, ecological, historical, population
125
Assessing validity
Face validity, concurrent validity
126
Internal validity
Refers to whether the observed effect on our dependent variable is due to the manipulation of the independent variable and not anything else - social desirability, extraneous variables, demand characteristics, investigator effects, researcher bias, confounding variables
127
External validity
the extent to which findings from a study are generalisable beyond the study itself
128
Ecological validity
the extent to which findings from an artificial study are generalisable to other settings. High: Bickman's obedience study; low: Asch, lab experiments, Baddeley's deep-sea divers
129
Historical validity
the extent to which findings are generalisable to the present day. High: Lorenz, imprinting, biological stuff; low: Asch, Jacobs' digit span test
130
Population validity
the extent to which findings from the sample are generalisable to the target population. High: zumroos meta-analysis; low: Little Albert / Little Hans
131
Face validity
The extent to which a measure looks like it measures what it claims to measure. If the item looks like it measures what it claims to, then we can say it has face validity. E.g. items on an IQ test have face validity
132
Concurrent validity
Establishing validity by comparing your measure to an existing measure. To assess validity we can compare our measure to an already established, validated measure; if the items on our measure are very similar to/match the items on the validated test then our measure has concurrent validity. In addition, if the results from our measure are very similar to/match the results yielded from the established measure then our measure has concurrent validity. E.g. if someone completes a validated IQ test they should get similar results on our test
133
Improving validity
Experiments - internal: better experimental design (e.g. matched pairs instead of repeated measures), lab setting, high control; external: field instead of lab method, representative stratified sample. Questionnaires - face validity: change questions to make them relevant; concurrent validity: remove questions
134
economy
state of a country/ region in terms of the production and consumption of goods and services
135
context: economy
How does what we learn from the findings of psychological research influence, affect, benefit or devalue the economy? Economy: government, law, healthcare (NHS), education - have we learned anything from research that can help save money / show us how we're wasting money? - does research point to better ways of investigating?
136
Economy e.g. treating mental illnesses
- Absence from work costs the economy roughly £15 billion a year - reports show that a third of those absences are due to mild illnesses such as depression, stress and anxiety - psychological research may lead to improvements in treating mental illnesses e.g. drug therapy - this will help people manage their conditions more effectively meaning they can take less time off work
137
economy - psychopathology
- drug therapy = time and cost effective - flooding = time and cost effective, may be more likely to drop out - CBT = takes longer, tackles the root cause (long-term) - treatments are most effective when CBT and drugs are combined so both are worth the investment
138
Economy in attachment
- Field - role of the father, may encourage longer paternity leave - mothers may feel discouraged from going back to work due to Bowlby's findings on the importance of a primary caregiver - what does this mean for nannies if the mother isn't willing to let anyone else raise them
139
Economy in memory
- cognitive interview: training, formation of questions, may not be worth it; 2/4 parts still effective so maybe a waste to teach all four; good investment as it catches criminals - EWT: training to avoid leading questions benefits the economy due to the changes
140
Economy in social influence
- NSI: may discourage people from substance abuse, smoking, drinking (social norm campaigns) - social change: people will use less plastic, good implications - legitimacy of authority: encourages people to actually work, increases productivity
141
peer review
an assessment of scientific work by specialists in the field - psychological research is often published through conferences, textbooks, journal articles; however before a piece of research can be published it must be subject to peer review - this involves every written aspect being scrutinised by a small group of experts (peers) within that field - the experts must be objective and unknown to the author of the research - prevents bias e.g. favouritism, bribery
142
1/3 main aims of peer review
To validate the quality and relevance of the research - each element of the research is assessed for quality and accuracy, to check that there are no mistakes e.g. sample, design, spelling/grammar mistakes
143
2/3 peer review aim
To suggest amendments or improvements - if minor inaccuracies are found the reviewers can suggest suitable changes to the research in order to improve it - if there are major issues and concerns the reviewer can suggest that the research should be withdrawn and not be published - if mistakes and inaccuracies go unnoticed the publisher can be sued. E.g. Andrew Wakefield published a study stating that MMR vaccinations were linked to the development of autism. Following a peer review he was the subject of a two-year investigation. His conclusions were based on unscientific work (conflict of interest, selective reporting of data, fabrication of data, unethical dealings with children); due to this he lost his position on the medical council and his licence to practise medicine
144
3/3 Aims of peer review
To allocate research funding - running a peer review can be helpful to decide whether a particular research project should be awarded funding - this may be done in association with a government-run funding organisation e.g. the Medical Research Council - these organisations may be particularly interested in establishing which research projects are most worthwhile
145
Peer review process
1. The writer sends their work off for peer review 2. the research is received by a group of experts within that subject field 3. the research is checked for relevance and validity 4. the work is sent back/flagged up for amendments 5. once the work is appropriate it's sent off for publishing
146
Evaluation: anonymity of peer review
Having anonymous reviews increases the likelihood of the reviews being honest and accurate - however some reviewers may take advantage of this: a reviewer may recognise the author and criticise their work harshly (e.g. revenge if they have crossed them in the past, competition). This may be because they are in direct competition with the author for funding for their own research - because of this some researchers favour an open review system where the identity of the reviewer is made public
147
publication bias
journal editors tend to prefer to publish significant research and positive results - this is to increase the credibility and popularity of the publication - however, what happens when research does not find 'significant results'? This is the bias problem
148
149
self- reporting technique
Any method by which a person is asked to state or explain their own feelings, opinions, behaviours and experiences related to a given topic
150
Why might you want to use self-report method
- subjective - in the past - get a lot of detail
151
questionnaires
a written set of questions used to assess a person's thoughts and/or experiences. Tip: make sure you specify it's a question, not a statement
152
Writing good questions: what to avoid
- overusing jargon - use of emotive language/leading questions - double-barrelled questions (two questions asked in one) - double negatives
153
Jargon
technical terms only specialists are familiar with
154
Questionnaires, closed questions, 3 types
- Likert scales (indicate how strongly a ppt agrees with a statement) - rating scales (indicate how strongly a ppt feels about a topic) - fixed choice option (tick all that apply)
155
Questionnaires benefits
+ time and cost effective, can be distributed in large volumes (post/online) and lots of data can be collected quickly, the researcher doesn't even need to be present
156
Questionnaires limitations
- response bias: people may just respond similarly to each question once they're bored - data: open questions gather qualitative data, rich in detail but difficult to analyse; closed questions gather quantitative data which is easy to compare and analyse - demand characteristics or social desirability bias: people may be ashamed of their true response or find their response taboo
157
Interviews
a live encounter where an interviewer asks a set of questions to assess an interviewee's thoughts and/or experiences; this can take place face to face or over the phone
158
structured interviews
When questions are predetermined and in a fixed order + questions are the same for each participant( standardized) which makes the interview easy to replicate this increases the reliability of the data + less room for bias - unable to deviate from questions or expand on answers
159
unstructured interviews
When there is no set list of questions and a topic is discussed generally, free-flowing + lots of flexibility, ppts can elaborate and the interviewer can gather more detail which may lead to higher explanatory power - much more difficult to analyse, may be lots of irrelevant data
160
overall conclusion for self report
-People may still give dishonest answers due to social desirability
161
semi structured
when questions are pre-determined but there is freedom to ask follow up questions
162
Explain how to counterbalance
1. Split ppts into 2 groups 2. the 1st group completes the first condition then the second 3. the 2nd group completes the conditions in the opposite order to prevent order effects 4. put the data together and compare each condition
163
Standard deviation rule of thumb
The bigger the number the further each score is from the mean, more variation/spread in scores/data
164
Remembering different distributions
“N” negative skew looks like a backwards N
165
Descriptors of strength
0.1-0.3 weak correlation, 0.4-0.6 moderate correlation, 0.7-1 strong correlation
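A minimal sketch applying those bands to a correlation coefficient (the sign gives the direction, the size gives the strength); the cut-offs follow the card above.

```python
def describe(r):
    """Describe a correlation coefficient using the bands on the card."""
    size = abs(r)
    if size >= 0.7:
        strength = "strong"
    elif size >= 0.4:
        strength = "moderate"
    else:
        strength = "weak"
    direction = "positive" if r >= 0 else "negative"
    return strength + " " + direction + " correlation"

print(describe(0.82))   # strong positive correlation
print(describe(-0.35))  # weak negative correlation
```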
167
Inferential stats table acronym
Space Weather Really Contains Many UFO’s Chasing Small Pigs
169
Inferential stats table filled out
Test of difference, related design: sign test, Wilcoxon, related t-test. Test of difference, unrelated design: chi-squared, Mann-Whitney, unrelated t-test. Test of association/correlation: chi-squared, Spearman's rho, Pearson's r. (Within each column the tests cover nominal, ordinal then interval data)
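A minimal sketch of reading that table as a lookup: pick the test from the design (related, unrelated, correlation) and the level of measurement (nominal, ordinal, interval).

```python
# Lookup keyed by (design, level of measurement).
TESTS = {
    ("related",     "nominal"):  "sign test",
    ("related",     "ordinal"):  "Wilcoxon",
    ("related",     "interval"): "related t-test",
    ("unrelated",   "nominal"):  "chi-squared",
    ("unrelated",   "ordinal"):  "Mann-Whitney",
    ("unrelated",   "interval"): "unrelated t-test",
    ("correlation", "nominal"):  "chi-squared",
    ("correlation", "ordinal"):  "Spearman's rho",
    ("correlation", "interval"): "Pearson's r",
}

print(TESTS[("related", "ordinal")])       # Wilcoxon
print(TESTS[("correlation", "interval")])  # Pearson's r
```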
171
probability def
a measure of the likelihood that a particular event will occur, where 0 indicates statistical impossibility and 1 statistical certainty