Research methods Flashcards
main methods
- experiments
- observation
- self-report
- correlations
other methods
- meta analysis
- case studies
- content analysis
introduction
experiments
- main methods
- allows measurement of the effect of one variable on another
introduction
IV
change
(independent variable, the one the researcher can change)
DV
measure
introduction
observation
- aim is to watch the behaviour without manipulation
- removes bias and increases validity as this is their natural behaviour
- e.g. watching CCTV cameras, two-way mirror
introduction
self-report
- questionnaires
- interviews
introduction
correlation
(most forgotten)
- rather than seeing how one variable affects the other, we see if two variables are associated
- e.g. do students who study longer get better grades
- scattergraph/gram used
meaning
reliable
consistency
meaning
valid
accuracy
directional
when the direction of the difference has been predicted (e.g. "group A will score higher than group B")
non-directional
when the direction of the difference has not been predicted (only that there will be a difference)
3 criteria for observation
covert/overt
ppt/non-ppt
controlled/natural
natural
watching and recording behaviour in a setting where it would naturally occur e.g. nature documentary
+ less demand characteristics and social desirability
+ high ecological validity
- can't control extraneous variables
- hard to replicate, unreliable
controlled
watching and recording behaviour within a structured environment where variables are controlled e.g. Zimbardo
+ can control extraneous variables
+ easier to replicate, reliable
- demand characteristics, social desirability
overt
watching and recording when ppts know they are being watched e.g. CCTV
+ fewer ethical issues
- higher chance of ppt reactivity
covert
observing and recording without ppts' knowledge e.g. hidden cameras, the Tearoom Trade study
+high ecological validity
- ethical issues
- psychological harm, anxiety
ppt
when the researcher disguises themselves within the group they are observing e.g. Jake Peralta
+increased insight
+may increase validity
- subjectivity
- deception
non ppt
when the researcher remains outside of the group they are watching e.g. invigilator
+ more objective
- may miss out on valuable insight
observational design
unstructured
when the researcher records everything they see
+ qualitative
- hard to pay attention to everything
Behavioural categories
when a target behaviour is broken into components that are observable, measurable and specific
e.g. leaving the room, laughing
practice behavioural categories
anger
shouting
clenched fists
arched eyebrows
practice behavioural categories
affection
smiling
hugging
holding hands
observational design
sampling methods
refers to how often data is recorded, not how ppts are selected
event
time
the intervals in time sampling refer to the time between observations, not the total length of the observation
sampling method
event sampling
counting the number of times a particular event occurs
sampling method
time sampling
recording behaviour within a pre-established time frame e.g. every 30 seconds
Inter-observer reliability
this is when 2 or more observers make consistent judgements about the data they record
1. carry out a pilot study using the behavioural categories
2. collect observed data from both observers (independently)
3. check for a correlation between the two sets of data
4. if there's a strong correlation there is high inter-observer reliability
A03
structured
quantitative
numerical results, less detail, easy to compare
A03
unstructured
qualitative data
more detail, observer bias
A03
behavioural categories
+structured, less open to interpretation
findings are more objective
- overlapping categories e.g. offensive language, swearing
- dustbin categories
- missed out categories
A03
event sampling
+useful when the target behaviour is infrequent
- we may overlook other important information if we are too focused on one event e.g. fixated on one child in the playground
A03
Time sampling
+ fewer observations need to be made so less overwhelming, fresh eyes, quicker analysis
- we may miss important events during the intervals so the data doesn't have high internal validity
Pilot study
- a small-scale trial run of the investigation
- the study is initially conducted with a small sample to check that the procedure & materials are appropriate
- this is also done with TV shows
+ gives us the opportunity to improve the study
+ ensures that study runs smoothly
+ saves us time, money and effort
stats we use to describe trends/ patterns and differences in data
- MoCT: mean, median, mode
- measures of dispersion: range, IQR
- reading & interpreting graphs, tables, bar, line
- graphical displays
- dispersion
Distribution
the spread of frequency data for a particular variable (how data is distributed)
types: normal/skewed (positively or negatively)
why look at distribution
- tells us about the frequency data for a particular variable across a particular population
- it can show differences across populations and is used to identify statistical infrequency
Distributions
what i need to know
- definition of each type
- how to draw a distribution using MoCT
- interpreting distributions based on measures of central tendency, may include describing how the distribution may be skewed, or comparing distributions from different data sets
Normal distribution
a symmetrical spread of frequency that forms a bell-shaped curve. The mean, median and mode are all located at the highest point of the curve
[see physical card]
Skewed distribution
a spread of frequency data that is not symmetrical, where data clusters to one end
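One way to spot skew from the measures of central tendency, with made-up scores (in a positive skew the mean is dragged above the median by the extreme high values):

```python
import statistics

# Hypothetical positively skewed data: most scores low, a few very high
scores = [2, 3, 3, 4, 4, 5, 5, 6, 15, 20]

mean = statistics.mean(scores)      # pulled upwards by the extreme scores
median = statistics.median(scores)  # unaffected by the extremes

# Positive skew: mode < median < mean
print(mean > median)  # True for this data set
```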
Explain why the psychologist did a pilot study
To test whether ppts understand the standardised instructions, timing etc., so they can make adjustments before the real study to save time and money
Descriptive statistics
the use of graphs, tables and summary statistics to analyse trends in sets of data
- measures of central tendency
- measures of dispersion
measures of central tendency
the general term for any measure of the average value in a set of data
measures of dispersion
how far scores vary and differ from one another (the spread of scores)
the measures
CT- mean
- mode
- median
D- range
- standard deviation
mean
the arithmetic average calculated by adding up all the values in a set of data and dividing by the number of values there are
median
the central value in a set of data when values are arranged from lowest to highest
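The definitions above, applied to a made-up data set (Python's `statistics` module implements all three measures of central tendency):

```python
import statistics

data = [3, 5, 5, 6, 7, 9, 14]  # hypothetical set of scores

print(statistics.mean(data))    # (3+5+5+6+7+9+14) / 7 = 7
print(statistics.median(data))  # middle value once sorted = 6
print(statistics.mode(data))    # most frequent value = 5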
Experimental design
- the different ways in which the testing of ppts can be organised in relation to experimental conditions
e.g. independent groups, repeated measures, matched pairs
Independent groups
ppts are allocated to different groups where each group represents one condition
- when having more than 1 group we usually have an experimental condition and a control condition
- ppts will be split and allocated to one of the two groups
- this means that all ppts only experience one condition each
- scores/behaviour from each group will then be compared
- mean + compare
Repeated measures design
all ppts take part in all conditions of the experiment
- with this design we still have an experimental and control condition
- however ppts will take part in all conditions one at a time
- we usually use this method to test before/after
experimental design
controls
randomisation
standardisation
counterbalancing
single blind procedures
when ppts are unaware of what condition/group they're in
prevents demand characteristics
double blind procedures
when both participants and researchers are unaware of what condition/group they're in
prevents investigator effects
controls
Randomisation
the use of chance in order to control for the effects of bias when designing materials and deciding the order of conditions
- this is different from random allocation
random allocation only refers to randomly assigning
ppts to their conditions/groups
what should we randomise in an experiment?
- alphabetical order
- difficulty of words
- meaning of words
this controls demand characteristics and investigator effects
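A minimal sketch of both ideas with hypothetical materials and ppt labels — `random.shuffle` randomises the word list, while `random.sample` does random allocation (the separate idea the card distinguishes):

```python
import random

# Randomisation of materials: shuffle a hypothetical word list into a
# chance order so word position can't be biased by the researcher
words = ["cat", "river", "justice", "apple", "cloud", "theory"]
random.shuffle(words)
print(words)

# Random allocation (a different concept): assign ppts to groups by chance
ppts = ["P1", "P2", "P3", "P4", "P5", "P6"]
group_a = random.sample(ppts, k=3)               # 3 ppts picked by chance
group_b = [p for p in ppts if p not in group_a]  # the remaining 3
print(group_a, group_b)
```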
Standardisation
using exactly the same formalised procedures and instructions for all ppts in a research study
- if conditions aren't the same for all ppts they may have different experiences, unreliable
- also controls for investigator effects
- e.g. task instructions, word lists
Counterbalancing
when ppts in a repeated measures design are split in half, with half the ppts completing the conditions in reverse order
- we use this control when we have a repeated measures design
- aims to prevent order effects, the main confounding variable we may come across using this design
Explain how to counterbalance
- split ppts into 2 groups
- get the 1st group to complete the conditions in one order
- get second group to do conditions in opposite order
- put the data together and compare each condition (not each group)
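The counterbalancing steps above can be sketched with hypothetical ppt labels and two conditions A and B:

```python
# Counterbalancing for a repeated measures design: half the ppts do
# A then B, the other half B then A, so order effects (practice/
# fatigue) are spread evenly across the two conditions.
ppts = ["P1", "P2", "P3", "P4", "P5", "P6"]

half = len(ppts) // 2
orders = {}
for p in ppts[:half]:
    orders[p] = ["A", "B"]   # first group: one order
for p in ppts[half:]:
    orders[p] = ["B", "A"]   # second group: opposite order

# Each condition is completed first by exactly half the ppts
firsts = [order[0] for order in orders.values()]
print(firsts.count("A"), firsts.count("B"))  # 3 3
```

Data is then compared condition-by-condition (all the A scores vs all the B scores), not group-by-group, just as the card says.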
Range and standard deviation
both measure the spread of data- standard deviation is just a more sophisticated measure
Rule:
bigger the standard deviation value, the wider the spread of data, more variation in scores/data
smaller the standard deviation value, the closer the spread of data, less variation in scores/data, scores are roughly the same level
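The rule above, illustrated with two made-up data sets that share the same mean but differ in spread:

```python
import statistics

# Two hypothetical data sets, both with mean 10 but different spread
narrow = [9, 10, 10, 10, 11]
wide = [2, 6, 10, 14, 18]

# Range: difference between highest and lowest score
print(max(narrow) - min(narrow))  # 2
print(max(wide) - min(wide))      # 16

# Standard deviation: bigger value = wider spread / more variation
print(statistics.stdev(narrow) < statistics.stdev(wide))  # True
```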
quantitative
data that can be counted, usually given as numbers
e.g. scores on test
can be analysed statistically and converted into graphs/ charts
qualitative
data that is expressed in words and non-numerical
e.g. interviews, diary, observation notes
- it may be converted into quantitative later on for further analysis
evaluation
qualitative
+ richness of detail
+ allows ppts to give thoughts and opinions
+ data tends to be more meaningful
+ high validity
- can be difficult to analyse/compare
- hard to find trends and patterns
- analysed under subjective interpretation
- may lead to researcher bias, low internal validity
evaluation
quantitative
+less subjective
+ easier to analyse
+ conclusions can be drawn quicker
- may include less detail
- may not fully represent real life/behaviour, lacking external validity
Presentation of quantitative data
- tables: usually display the mean and SD for each group
- bar charts: graphs that show the frequency of each variable represented by the height of bars
- scattergraphs: graphs that represent the strength/direction of a relationship between co-variables
How to draw a graph
- labelled x-axis (operationalised)
- descriptive title
- labelled y-axis
- correctly plotted data (bars with gaps)
Bar charts vs Histograms
bar: categorical data e.g. class A vs class B, tall vs short
histograms: bars are connected as the data runs on a continuous scale
extraneous variable
any variable that has the potential to affect the DV
ppt/situational variables - affect the person/setting
confounding
any variable that affects only one group/condition in a study - they vary systematically with the IV
ppt variables
examples
fatigue
knowledge of task
energy
concentration
mood
situational variables
examples
temp of room
distractions
noise outside
comfort
confounding variables
an extraneous variable that only affects one group e.g. y7s watching a movie when observing whether y9s or y7s have better concentration levels
Investigator effects
anything that a researcher may say or do (consciously or subconsciously) that may influence the behaviour of a ppt
e.g. researcher's movements, tone of voice
social desirability
- when a ppt changes behaviour in order to be favoured by others
- if the true answer is taboo/embarrassing you may change it to avoid judgment
demand characteristics
when the ppt changes their behaviour due to guessing the aim of the study
Leading questions
“what do you think about Rishi Sunak?”
“what do you think about our so-called PM Rishi Sunak?”
no longer testing what we aimed to test, low internal validity
when is it suitable to carry out a case study rather than an experiment
- to get more qualitative data
- in depth
- specific scenarios
- historical context
- things that aren't observable/operationalisable
case study
an in-depth, detailed investigation of an individual or a small group (e.g. victims of the same event)
they use a wide variety of techniques including retrospective data, interviews, observations and psychological testing
they are usually longitudinal (long pieces of research) and are carried out over a long time period
evaluation
case studies
strengths
+ in depth, more qualitative data, high explanatory power
+ can disprove/prove theories, theoretical value
+ richer in detail, can explore sensitive topics
+ can provide direction for new hypotheses, practical value
evaluation
case studies
weaknesses
- low generalisability, looks at one person, small sample, low population validity
- retrospective data - may be issues such as the data being outdated and not useful to the modern day
- must rely on self-report - social desirability/poor memory, inaccurate, low internal validity
- subjective, bias, investigator effects, researchers may only look into areas they are interested in, low internal validity
- ethical issues, breach of confidentiality e.g. using their real name, psychological harm - accessing trauma
types of experiments
lab
when the researcher manipulates an IV to test its effect on the DV in a controlled setting
e.g. Skinner, Bandura
types of experiments
field
when the researcher manipulates an IV to test its effect on the DV in a natural setting
e.g. bus, park
evaluation
Natural
rely on natural events, so control is limited
+ the events (IV) are real so the effects are real; we can be sure the DV is natural, high ecological validity
- the events are rare so you can't check for reliability, no replication
- lots of extraneous variables we cannot control, low internal validity
evaluation
Quasi
+ high control (just like lab), high internal validity
- we cannot control group allocation
- if the IV is within ppts we cannot control how we separate them into groups - there may be existing differences between our groups (other than the IV) that affect our study
- these are confounding variables
weakness of quasi and natural
neither method allows us to establish cause and effect
- the purpose of an experiment is to measure the effect that one variable has on another
- due to the lack of control we can never truly know whether it is in fact the IV that changes the DV (it could be other extraneous variables); this lowers the internal validity
evaluation
Lab
+high control, minimise extraneous variables, high internal validity
+ easy to replicate, reliable
+ less likely to have ethical issues
- lacks ecological validity, artificial task/setting doesn't reflect real life
evaluation
Field
- low control, extraneous variables, low internal validity
- harder to replicate, real life is unpredictable
- ethical issues more likely to arise: informed consent, right to withdraw, deception; goes against BPS guidelines
+ high ecological validity, ppts are more likely to behave naturally, natural setting/task, reflects real life
Natural experiment
when the change in the IV is not brought about by the researcher (naturally occurring event)
they just record the effect on the DV
e.g. the number of students that applied before the fees tripled
how many flights were booked to Ukraine before/after the war
Quasi
when the IV pre-exists within ppts and has not been determined by anyone; technically not a real experiment, no manipulation
e.g. different testosterone levels in males and females
something thats fixed, IQ
open questions
qualitative data
no fixed response
detailed
closed questions
“yes or no” answers
Content analysis
A research technique that enables the indirect study of behaviour by examining visual, written or verbal material
Content analysis steps
1. choosing a sampling method (time/event sampling)
2. coding the data (using behavioural categories)
3. choosing a method of data representation (qualitative/quantitative)
- Choosing a sampling method (content analysis)
Time or event sampling
Depending on the material you are analysing, you will choose to either tally the chosen behaviour/code (event sampling) or record within timed intervals (time sampling)
E.g. written material - event sampling
E.g. speech/film - time sampling
- Coding (content analysis)
Decide on the specific behaviours/codes you wish to focus on in the content analysis
E.g. for an episode of The Simpsons you may choose behaviours that represent aggression, such as shouting, hitting, insulting language
For a speech from the prime minister, you may choose words like "I", "us", "we"
Then you are ready to carry out the content analysis
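The coding step above amounts to tallying how often each chosen code appears in the material. A minimal sketch with a made-up snippet of speech and the pronoun codes from the card:

```python
# Hypothetical speech to be coded (event sampling of text)
speech = "we will rebuild and we will recover because us working together matters"

codes = ["i", "us", "we"]  # collective-pronoun codes, as in the PM example

# Tally each code word across the lower-cased, whitespace-split speech
tallies = {code: speech.lower().split().count(code) for code in codes}
print(tallies)  # {'i': 0, 'us': 1, 'we': 2}
```

The tallies are quantitative data, ready for a bar or pie chart in the data representation step.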
- Data representation
Qualitative or quantitative?
Displaying data with a qualitative method:
- Group together commonly used phrasing (e.g. verbal aggression)
- Simplify observations by putting them into categories e.g. physical, verbal
Displaying data with a quantitative method:
- Bar chart - compare behaviours
- Pie chart - show proportions
Finish the content analysis by concluding on the data found
Content analysis RELIABILITY (often forgotten in exam)
INTER-RATER RELIABILITY - don't say "observer"
Thematic analysis
identifying implicit or explicit ideas (themes) that recur within qualitative data; themes emerge from the data rather than being decided in advance
Strengths of content and thematic analysis
+ uses material that already exists, so few ethical issues (no ppts are directly studied)
+ easy to replicate as the material is often publicly available, reliable
+ high external validity, based on real communications
Weaknesses of content and thematic analysis
- material is studied out of context, so the researcher may attribute meanings that were not intended
- coding/interpretation can be subjective, observer bias
- descriptive rather than explanatory
Falsifiability
The principle that a theory cannot be considered scientific unless it admits the possibility of being untrue
Karl Popper - a philosopher who insisted that a theory must be falsifiable to be deemed scientific
The theory of falsifiability
even if a theory has been tested multiple times this doesn't mean it's true, it just hasn't been proven false yet
E.g. the claim that the earth is flat could be, and was, disproven
"118 elements" is treated as fact, but this changes when people find another element
Falsifiable theories that survive testing are therefore the strongest, as they have not yet been disproven despite efforts
E.g. believing there are only white swans will be disproven when you see a black swan
Paradigms
A set of shared assumptions and agreed methods within a scientific discipline
Thomas Kuhn - a paradigm separates scientific disciplines from non-scientific disciplines
He suggested that social sciences were 'pre-sciences' as they lack a universally accepted paradigm
Psychology has too much internal disagreement, such as conflicting approaches and debates, unlike biology (evolution) or physics (gravity)
He said that progress within a particular science occurs when there's a scientific revolution - the result of this is known as a paradigm shift
Paradigm shift
A result of a scientific revolution: a significant change in the dominant unifying theory within a scientific discipline
1. a handful of researchers may begin to question the accepted paradigm
2. this questioning begins to gather popularity and pace
3. eventually there is too much contradictory evidence to ignore and a paradigm shift occurs
It is relevant/important in science to shift from one paradigm to another
Example of paradigm shifts in psychology
Approaches
THEORY
T theory construction
H hypothesis testing
E empirical methods
O objectivity
R replicability
Y!
For exam (features of science )
- define each term
- identify examples of each feature
- discuss why the feature is important (for science)
T theory construction
A theory is a set of general laws and principles that have the ability to explain particular events or behaviours.
Constructed by gathering evidence via direct observation
Psychological theories provide understanding by explaining regularities in behaviour
Theories can develop via an inductive or deductive model
Inductive(hypothesis testing leading to theory)
Deductive(theory to hypothesis testing)
Inductive
Hypothesis testing leads to theory
E.g. Milgram and Asch
Deductive
Theory leading to hypothesis testing
E.g. Bowlby's 44 thieves study (affectionless psychopathy)
Hypothesis testing
This is when the validity of a theory is tested - the findings are then used to support or refute the theory
- a good theory must be able to generate testable expectations
E.g. theories that haven't: Freud's psychosexual stages, role of defence mechanisms
Empiricism
When information is gathered via direct observation and experiment, rather than unfounded beliefs
“ I will believe it when I see it”
Arguing that something exists is no longer enough to be deemed as scientific- there must be concrete(empirical) evidence to support claims
E.g. Of empirical Evidence
Skinner's rats, Pavlov's dogs
Objectivity
when all sources of personal bias are minimised to not distort/ influence research process
Objectivity is the basis of empirical methods
Researchers must maintain a critical distance from their research to ensure that their expectations don't affect it
this can be done by not involving yourself in the testing/research, avoiding investigator effects
methods must be carefully controlled to ensure the highest levels of objectivity e.g. reducing extraneous variables
leading to a higher level of validity
Replicability
the extent to which scientific procedures and findings can be repeated by another researcher
If a scientific theory is to be trusted, its findings must be repeatable across a number of different contexts and circumstances; this makes the findings more reliable
replication can also determine the validity of a theory; replication can strengthen the reliability of findings as well as the validity of theories, leading to credibility
Reliability
Consistency of methods/results: a measuring device is said to be reliable if it produces consistent results every time it's used
E.g. scales, stopwatch, ruler - concrete measures
but in psychology can we be sure that abstract measures are reliable? psychological tests and questionnaires
Method to improve reliability
Test-retest reliability
Test-retest reliability
A method of assessing the reliability of a questionnaire or psychological test by assessing the same person or group on two separate occasions
if the measure produces the same answers on the second occasion it's considered reliable
E.g. IQ tests - your IQ should be the same today as it is next week
there must be enough time between the first and second test (so ppts can't remember their answers)
Scores from the first and second test are then correlated to check for similarity - looking for a positive correlation
Inter-observer reliability
The extent to which there is agreement between two or more researchers involved in observing/measuring behaviour
avoids bias and subjectivity
researchers generally run a pilot study to test their behavioural categories/variables/questionnaires before the real thing
data sets are then correlated to check similarity - the correlation coefficient must be higher than 0.8 for the data to have high inter-observer reliability
Test retest reliability steps
1. Give participants the test/questionnaire and collect the data
2. give the same participants the same test/questionnaire on a later occasion and collect the data
3. compare the data collected from each participant on both occasions
4. look for a correlation between the scores; if the correlation is higher than 0.8 the test/questionnaire has high test-retest reliability
Interobserver reliability steps
- Run a pilot study and have both researchers measure/observe the same thing e.g. using the behavioural categories
- collect the data from each researcher and compare
- Check the correlation between the sets of data
- If there is a correlation of 0.8 or above there is high inter-rater reliability
Ways of improving reliability
Questionnaires: closed questions, avoid jargon and leading questions
Interviews: training, no leading questions, structured interviews
Experiments: control variables (lab experiment) more replication, standardised conditions
Observations: operationalise behavioural categories
Assessing and improving validity (accuracy)
How accurate the findings or measuring devices are (are we measuring what we think we're measuring?)
Types of validity
internal, ecological, historical, population
Assessing validity
Face validity, concurrent validity
Internal validity
Refers to whether the observed effect on our dependent variable is due to the manipulation of the independent variable and not anything else
- social desirability, extraneous variables, demand characteristics, investigator effects, researcher bias, confounding variables
External validity
the extent to which findings from a study are accurate outside the study
Ecological validty
the extent to which findings from an artificial study are generalisable to other settings
high: Bickman's obedience study
low: Asch, lab experiments, Baddeley's deep sea divers
Historical validity
the extent to which findings are generalisable to the present day
high: Lorenz, imprinting, biological studies
low: Asch, Jacobs' digit span test
Population validity
the extent to which findings from the sample are generalisable to the target population
high: van IJzendoorn's meta-analysis
low: Little Albert / Little Hans
Face validity
The extent to which a measure looks like it measures what it claims to measure
If the item looks like it measures what it claims to, then it has face validity
E.g. items on an IQ test have face validity
Concurrent validity
Establishing validity by comparing your measure to an existing measure
To assess validity we can compare our measure to an already established, validated measure
if the items on our measure are very similar to/match the items on the validated test then our measure has concurrent validity
in addition, if the results from our measure are very similar to/match the results yielded from the established measure then our measure has concurrent validity
E.g. if someone completes a validated IQ test they should get similar results on our test
Improving validity
Experiments
Internal: better experimental design (e.g. matched pairs instead of repeated measures), lab setting, high control
External: field instead of lab method, representative stratified sample
Questionnaires
face validity: change questions to make them relevant
concurrent validity: remove questions
economy
the state of a country/region in terms of the production and consumption of goods and services
context: economy
How do the findings of psychological research influence, benefit or devalue the economy?
Economy: government, law, healthcare (NHS), education
- have we learned anything from research that can help save money / show us how we're wasting money?
- does research point to better ways of investing?
Economy e.g. treating mental illnesses
-Absence from work costs the economy roughly £15 billion a year
- reports show that a third of those absences are due to mild illnesses such as depressions stress and anxiety
- psychological research may lead to improvements in treating mental illnesses e.g. drug therapy
- this will help people manage their conditions more effectively, meaning they can take less time off work
economy - psychopathology
-Drug therapy= time and cost effective
- flooding= time and cost effective, may be more likely to drop out
- CBT= takes longer, tackles root cause( long-term)
- treatments are most effective when CBT and drugs are combined so both are worth the investment
Economy in attatchment
- Field: role of the father, may encourage longer paternity leave
- mothers may feel discouraged from going back to work due to Bowlby's findings on the importance of a primary caregiver
- what does this mean for nannies, if the mother isn't willing to let anyone else raise the child?
Economy in memory
- cognitive interview: training and the formation of questions may not be worth it; 2 of the 4 parts are still effective so it may be a waste to teach all four; a good investment as it catches criminals
- EWT: training interviewers not to ask leading questions benefits the economy
Economy in social influence
- NSI: may discourage people from substance abuse, smoking, drinking - social norm campaigns
- social change: people will use less plastic - good implications
- legitimacy of authority: encourages people to actually work, increases productivity
peer review
an assessment of scientific work by specialists in the field
- psychological research is often published through conferences, textbooks, journal articles
however, before a piece of research can be published it must be subject to peer review
- this involves every written aspect being scrutinised by a small group of experts (peers) within that field
- the experts must be objective and unknown to the author of the research - this prevents bias e.g. favouritism, bribery
1/3 main aims of peer review
To validate quality and relevance of the research
- each element of the research is assessed for quality and accuracy - so that there are no mistakes
E.g. sampling, design, spelling/grammar mistakes
2/3 peer review aim
To suggest amendments or improvements
- if minor inaccuracies are found the reviewers can suggest suitable changes to the research in order to improve it
- if there are major issues and concerns reviewers can suggest that the research should be withdrawn and not published
- if mistakes and inaccuracies go unnoticed the publisher can be sued
E.g. Andrew Wakefield published a study stating that MMR vaccinations were linked to the development of autism. Following a peer review he was the subject of a two-year investigation. His conclusions were based on unscientific work (conflict of interest, selective reporting of data, fabrication of data, unethical dealings with children)
due to this he lost his position on the medical council and his licence to practice medicine
3/3 Aims of peer review
To allocate research funding
running a peer review can be helpful in deciding whether a particular research project should be awarded funding
- this may be done in association with government-run funding organisations e.g. the Medical Research Council
- these organisations may be particularly interested in establishing which research projects are most worthwhile
Peer review process
1. The writer sends their work off for peer review
2. the research is received by a group of experts within that subject field
3. the research is checked for relevance and validity
4. the work is sent back / flagged up for amendments
5. once the work is appropriate it's sent off for publishing
Evaluation: anonymity of peer review
Having anonymous reviews increases the likelihood of the reviews being honest and accurate
- however some reviewers may take advantage of this:
- a reviewer may recognise the author and criticise their work harshly (e.g. revenge if they have crossed them in the past, competition)
This may be because they are in direct competition with the author for funding for their own research
- because of this some researchers favour an open review system where the identity of the reviewer is made public
publication bias
journal editors tend to prefer to publish significant research and positive results
- this is to increase the credibility and popularity of the publication
- however, research that does not find 'significant results' may never be published, creating a bias problem (sometimes called the 'file drawer problem')
self-report techniques
Any method by which a person is asked to state or explain their own feelings, opinions, behaviours and experiences related to a given topic
Why might you want to use a self-report method?
- captures subjective experience
- can ask about events in the past
- can gather a lot of detail
questionnaires
a written set of questions used to assess a person’s thoughts and/or experiences
tip: make sure you specify it's a question, not a statement
Writing good questions: what to avoid
-Overusing jargon
- use of emotive language/ leading questions
- double-barrelled questions (two questions asked in one)
-double negatives
Jargon
technical terms only specialists are familiar with
Questionnaires: closed questions, 3 types
- Likert scales (indicate how strongly a ppt agrees with a statement)
- Rating scales (indicate how strongly a ppt feels about a topic)
- fixed-choice options (tick all which apply)
Questionnaires benefits
+ time and cost effective; can be distributed in large volumes (post/online) and lots of data can be collected quickly; the researcher doesn't even need to be present
Questionnaires limitations
- response bias: people may just respond similarly to each question once they're bored
- data: open questions gather qualitative data, which is rich and detailed but difficult to analyse
- (by contrast, closed questions gather quantitative data, which is easy to compare and analyse)
- demand characteristics or social desirability bias
- people may be ashamed of their true response or find their response taboo
Interviews
a live encounter where an interviewer asks a set of questions to assess an interviewee's thoughts and/or experiences
this can take place face to face or on the phone
structured interviews
When questions are predetermined and in a fixed order
+ questions are the same for each participant (standardised), which makes the interview easy to replicate; this increases the reliability of the data
+ less room for bias
- unable to deviate from the questions or expand on answers
unstructured interviews
When there are no set questions and a topic is discussed generally; free-flowing
+ lots of flexibility: the interviewee can elaborate and the interviewer can gather more detail, which may lead to higher explanatory power
- much more difficult to analyse; there may be lots of irrelevant data
overall conclusion for self report
-People may still give dishonest answers due to social desirability
semi structured
when questions are pre-determined but there is freedom to ask follow up questions
Explain how to counterbalance
1. Split the participants into 2 groups
2. the 1st group completes the first condition then the second
3. the 2nd group completes the conditions in the opposite order, to prevent order effects
4. put the data together and compare each condition
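As an illustration only (not exam content), the four counterbalancing steps can be sketched in Python; the participant labels and condition names "A"/"B" are hypothetical:

```python
# Counterbalancing: split participants into two groups and give each
# group the two conditions in opposite orders.

participants = ["P1", "P2", "P3", "P4", "P5", "P6"]  # hypothetical sample

# Step 1: split into 2 groups
half = len(participants) // 2
group_1 = participants[:half]
group_2 = participants[half:]

# Steps 2-3: group 1 runs condition A then B; group 2 runs B then A
orders = {p: ["A", "B"] for p in group_1}
orders.update({p: ["B", "A"] for p in group_2})

# Step 4: data from both groups are pooled per condition, so order
# effects (practice, fatigue) are balanced across the two conditions.
print(orders["P1"])  # ['A', 'B']
print(orders["P6"])  # ['B', 'A']
```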
Standard deviation rule of thumb
The bigger the number, the further each score is from the mean: more variation/spread in the scores/data
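A quick sketch of this rule of thumb, using Python's statistics module and two made-up score sets that share the same mean but differ in spread:

```python
import statistics

# Two hypothetical sets of scores, both with mean 10
tight = [9, 10, 10, 11]   # scores close to the mean -> small SD
spread = [2, 6, 14, 18]   # scores far from the mean -> large SD

print(statistics.mean(tight), statistics.mean(spread))     # 10 10
print(statistics.stdev(tight) < statistics.stdev(spread))  # True
```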
Remembering difference distributions
“N” negative skew looks like a backwards N
Descriptors of strength
0.1-0.3 weak correlation
0.4-0.6 moderate correlation
0.7-1 strong correlation
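These bands can be written as a small helper function; this is a rough sketch of the rule of thumb above (applied to the magnitude, so negative correlations count too), and the "negligible" label for values below 0.1 is my own assumption:

```python
def describe_strength(r):
    """Rule-of-thumb descriptor for a correlation coefficient r."""
    m = abs(r)  # -0.8 is as strong as +0.8
    if m >= 0.7:
        return "strong"
    if m >= 0.4:
        return "moderate"
    if m >= 0.1:
        return "weak"
    return "negligible"  # below 0.1: assumed label, not in the notes

print(describe_strength(0.85))  # strong
print(describe_strength(-0.5))  # moderate
print(describe_strength(0.2))   # weak
```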
Inferential stats table acronym
Space
Weather
Really
Contains
Many
UFO’s
Chasing
Small
Pigs
Inferential stats table filled out
(rows: level of measurement; columns: test of difference with a related design, test of difference with an unrelated design, test of association/correlation)
- Nominal: Sign test | Chi-squared | Chi-squared
- Ordinal: Wilcoxon | Mann-Whitney | Spearman's rho
- Interval: Related t-test | Unrelated t-test | Pearson's r
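The filled-out table can be expressed as a lookup for self-testing; the function name choose_test and the key labels are made up for this sketch:

```python
# (level of data, design) -> statistical test, following the table above.
# "related" = repeated measures / matched pairs; "unrelated" = independent groups.
TEST_TABLE = {
    ("nominal", "related"): "Sign test",
    ("nominal", "unrelated"): "Chi-squared",
    ("nominal", "correlation"): "Chi-squared",
    ("ordinal", "related"): "Wilcoxon",
    ("ordinal", "unrelated"): "Mann-Whitney",
    ("ordinal", "correlation"): "Spearman's rho",
    ("interval", "related"): "Related t-test",
    ("interval", "unrelated"): "Unrelated t-test",
    ("interval", "correlation"): "Pearson's r",
}

def choose_test(level, design):
    """Return the appropriate inferential test for a study design."""
    return TEST_TABLE[(level.lower(), design.lower())]

print(choose_test("ordinal", "unrelated"))  # Mann-Whitney
print(choose_test("nominal", "related"))    # Sign test
```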
probability def
a measure of the likelihood that a particular event will occur, where 0 indicates statistical impossibility and 1 statistical certainty