Research Methods Flashcards
The Scientific Process:
Define a pilot study and state why one is conducted
a small scale trial run of the investigation/experiment.
They are conducted to find out whether the study works and whether there are any methodological issues which can be addressed before the full study is launched
The Scientific Process:
What are the ethical guidelines determined by the British Psychological Society
Informed consent
Right to withdraw
Deception
Debriefing of Study
Protection from Harm
Confidentiality
Competence
The Scientific Process:
What are the types of sampling techniques
random
opportunity
volunteer
systematic
stratified
The Scientific Process:
Define random sampling and state its advantages and disadvantages
randomly select ppts from the target population and ask if they are willing to take part in the study
A: produces unbiased results
D: requires a list of everyone in the target population and is very time consuming
The Scientific Process:
Define opportunity sampling and state its advantages and disadvantages
also known as convenience sampling. select whoever is most readily available, e.g. people around you
A: saves time, effort and money.
D: produces biased results and high rejection rate
The Scientific Process:
Define volunteer sampling and state its advantages and disadvantages
also known as self-selection. advertise the study so that people put themselves forward to take part
A: less time consuming and low rejection rate
D: only a certain type of person volunteers - produces biased results
The Scientific Process:
Define systematic sampling and state its advantages and disadvantages
also known as interval sampling. select every nth person in a list of the target population
A: avoids researcher bias
D: time consuming and requires list of everyone in target population
The Scientific Process:
Define stratified sampling and state its advantages and disadvantages
the sample is made up of subgroups (strata) in the same proportions as they occur in the target population
A: produces a representative sample population
D: requires list of everyone in target population and time consuming
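A minimal Python sketch of the three list-based techniques above (random, systematic, stratified); the population, group sizes and proportions are made up purely for illustration.

```python
import random

# Hypothetical target population: 60 students and 40 teachers (names made up)
population = [f"student_{i}" for i in range(60)] + [f"teacher_{i}" for i in range(40)]
sample_size = 10

# Random sampling: every member has an equal chance of being selected
random_sample = random.sample(population, sample_size)

# Systematic (interval) sampling: select every nth person in the list
n = len(population) // sample_size
systematic_sample = population[::n][:sample_size]

# Stratified sampling: the sample keeps the same proportions as the target
# population (here 60% students, 40% teachers -> 6 students, 4 teachers)
students = [p for p in population if p.startswith("student")]
teachers = [p for p in population if p.startswith("teacher")]
stratified_sample = random.sample(students, 6) + random.sample(teachers, 4)

print(random_sample, systematic_sample, stratified_sample, sep="\n")
```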
The Experimental Method:
Define the two types of hypotheses
Alternate - prediction of what the researchers think the results will be in favour of
Null - opposite of the alternate hypothesis. The IV does not affect the DV
The Experimental Method:
Define the two types of alternate hypotheses
Non-directional - predicts that there will be a difference but the researchers don’t know in what way the DV will be affected by the IV
Directional - researchers predict in what way the IV will affect the DV (often only used if there has been previous research)
The Experimental Method:
What factors other than the IV could affect the DV and how
Extraneous - variables other than the IV which could affect the DV; they are predictable and can be controlled by the researchers
Participant (internal) - variables due to the participants such as age, gender, social class etc.
Situational (external) - outside factors which could affect the DV such as time of day, light, temperature etc.
Demand Characteristics - when ppts change their behaviour as a result of realising the aim of the study.
Investigator Effects - where the actions of the investigator result in a change in ppt behaviour and alter the results.
The Experimental Method:
What is the difference between reliability and validity
Reliability - how consistent the results of the study are when it is repeated, i.e. whether previous or future studies investigating the same thing produce the same results
Validity - Does the study investigate what it intends to investigate.
The Experimental Method:
Define the types of reliability
Internal - was the study conducted consistently within itself i.e. standardisation, replicability
External - was the study consistent over time i.e. whether the results of test-retest were the same or not
The Experimental Method:
Define the types of Validity
internal - does the IV (rather than extraneous variables) affect the DV
external - the task generalisability
ecological - the setting generalisability
temporal - do the results still hold up today
The Experimental Method:
Name the types of experiments
Lab
Field
Natural
Quasi
The Experimental Method:
State the features of a lab experiment
- tightly controlled environments
- experimenter deliberately manipulates the IV across conditions
- experimenter measures the DV
- experimenter controls extraneous variables
- procedure and instructions are standardised
The Experimental Method:
State the features of a field experiment
- conducted in natural environments
- experimenter deliberately manipulates the IV across conditions
- experimenter measures the DV
- experimenter controls some of the extraneous variables
The Experimental Method:
State the features of a natural/quasi experiment
- experimenter has no control over the IV
- experimenter measures the DV
- experimenter has no control over any extraneous variables
The Experimental Method:
Identify the difference between natural and quasi experiments
Natural - the IV is a naturally occurring event which the researchers simply observe
Quasi - the IV is an existing difference between ppts (e.g. age or gender), so one group of ppts is compared to another
The Experimental Method:
Name the three types of experimental designs
Independent Measures
Repeated Measures
Matched Pairs
The Experimental Method:
Define Independent Measures Design and state its advantages and disadvantages
Using different participants per condition.
A: Reduces chance of ppts realising what the aim of the study is and either displaying demand characteristics or the screw you effect
D: individual differences between the groups are not controlled and may affect the results
The Experimental Method:
Define Repeated Measures Design and state its advantages and disadvantages
Using the same participants in every condition
A: Eliminates individual differences
D: Creates the order effect and increased possibility of demand characteristics and the screw you effect, as a result of participants being more likely to figure out what the aim of the study is
The Experimental Method:
Define Matched Pairs Design and state its advantages and disadvantages
Pairing participants who are similar on a relevant variable, then placing one member of each pair in each condition so the groups are matched
A: Reduces risk of individual differences, order effects, demand characteristics and the screw you effect
D: Time consuming and doesn’t fully eliminate individual differences
The Case Study Method:
Define a case study
an in-depth, naturalistic investigation of a single person or small group of people who have experienced a singular event; the findings may result in the creation of a new theory to explain any unusual behaviour observed
The Case Study Method:
Give possible evaluations for case studies
- poor generalisability
- low reliability as the study cannot be replicated
- over-reliance on case study support for a theory
The Observation Method:
Define an observation
a systematic measure of spontaneously occurring behaviour
watching someone either at a specific event (event sampling) or at timed intervals (time sampling)
The Observation Method:
Define the types of observations
Naturalistic - observing an individual in their natural environment
Controlled - observing an individual in a controlled environment (lab)
Structured - Used when potential behaviours have been predicted by the researchers
Unstructured - Used when researchers don’t know in advance which behaviours will occur, so all relevant behaviour is recorded
Participant - researcher joins in the behaviour of the participants
Non-participant - researcher does not participate with the participants
Covert - participants don’t know they are being observed
Overt - participants know they are being observed
The Observation Method:
How do we establish inter-observer reliability
- researchers collectively decide how behavioural categories will be operationalised
- different researchers observe the same scene by themselves
- they collect their data independently of each other
- they compare their results
- if their scores correlate at +0.8 or above (PMCC) then the observation has inter-observer reliability (see the sketch below)
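A minimal sketch of the final correlation step described above, assuming the two observers' tallies have already been collected and that scipy is available; the numbers are hypothetical.

```python
from scipy.stats import pearsonr

# Hypothetical tallies of the same behaviour, recorded independently by
# two observers watching the same scene
observer_a = [4, 7, 2, 9, 5, 6, 3, 8]
observer_b = [5, 7, 3, 9, 4, 6, 2, 8]

r, _ = pearsonr(observer_a, observer_b)  # Pearson's PMCC
print(f"r = {r:.2f}")
if r >= 0.8:
    print("Inter-observer reliability established")
else:
    print("Behavioural categories need re-operationalising")
```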
The Observation Method:
How do we establish validity in observations
validity depends on whether or not the participants are aware they are being observed: if they are aware, demand characteristics, the screw you effect or other behavioural changes become possible, lowering validity
The Self-Report Method:
What are the two ways in which self-reports can be used when conducting a study
- questionnaires
- interviews
The Self-Report Method:
Define the types of questionnaires
Open - collecting qualitative data by asking open-ended questions
Closed - collecting quantitative data by giving participants a fixed set of responses to choose from
The Self-Report Method:
Define interviews used as a self-report method of a study
- natural and flexible approach to questioning
- interviews can be structured, unstructured or semi-structured
- questions can be open or closed resulting in either qualitative or quantitative data collection
The Self-Report Method:
Define the types of interviews
Structured - interviewers ask pre-written questions whilst following a set format
Unstructured - interviewers explore topics the participants want to explore using open questions
Semi-structured - pre-written questions as well as letting participants lead the interview
The Correlation Method:
What are the features of correlations
- Measures the relationship between two covariables
- relationship is either positive or negative
- relationship can be strong or weak
- correlations are represented by a scattergram and analysed using Spearman’s Rank Correlation Coefficient or Pearson’s Product Moment Correlation Coefficient (PMCC) and produces quantitative data
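A minimal sketch (assuming scipy and matplotlib are available; the covariables and values are invented) of how the two coefficients and a scattergram could be produced.

```python
import matplotlib.pyplot as plt
from scipy.stats import pearsonr, spearmanr

# Hypothetical covariables: hours of revision and test score for 8 ppts
revision_hours = [1, 2, 3, 4, 5, 6, 7, 8]
test_scores = [35, 42, 40, 55, 60, 58, 72, 80]

r_pmcc, _ = pearsonr(revision_hours, test_scores)        # Pearson's PMCC
r_spearman, _ = spearmanr(revision_hours, test_scores)   # Spearman's rank
print(r_pmcc, r_spearman)  # both positive, indicating a positive correlation

# Scattergram of the relationship
plt.scatter(revision_hours, test_scores)
plt.xlabel("Hours of revision")
plt.ylabel("Test score")
plt.show()
```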
Thematic and Content Analysis:
Describe thematic analysis
- data is collected through case studies and by interviewing participants using open-ended questions
- once the interviews are complete, detailed transcripts are made of the interview process and the researcher familiarises themselves with these notes
- the researcher then attempts to identify any common themes that appear throughout the data - this may take several run throughs of all the data
Thematic and Content Analysis:
Describe content analysis
- go through data several times to become familiar with the content
- identify any categories that can be used to dissect the data
- tally the frequency of each category
- produces categorical (nominal) data which can be displayed in graphs and statistically analysed
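A minimal sketch of the tallying step; the categories and coded units are invented, and in practice they come from the researcher's own coding of the material.

```python
from collections import Counter

# Hypothetical coded data: each item is the category assigned to one unit
# of the material (e.g. one sentence of an interview transcript)
coded_units = ["praise", "aggression", "praise", "affection",
               "aggression", "praise", "praise", "affection", "aggression"]

# Tally the frequency of each category -> nominal (categorical) data
tallies = Counter(coded_units)
print(tallies)  # Counter({'praise': 4, 'aggression': 3, 'affection': 2})
```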
Data Handling and Analysing:
Define primary data
data collected directly from participants by the researchers for their own research aim.
researchers complete studies themselves
e.g. anyone except Van Ijzendoorn
Data Handling and Analysing:
Define secondary data
data collected by researchers for a different research aim but can be used to support other research aims.
researchers use other people’s data (meta-analysis)
e.g. Van Ijzendoorn
Data Handling and Analysis:
What is descriptive data/statistics
used to describe and summarise quantitative data
this includes;
measures of central tendency - mean, median, mode
measures of dispersion - range, standard deviation
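A minimal sketch using Python's standard statistics module (the scores are invented) to compute the measures listed above.

```python
import statistics

# Hypothetical scores from one condition of a study
scores = [12, 15, 15, 18, 20, 22, 25]

# Measures of central tendency
print(statistics.mean(scores))    # mean
print(statistics.median(scores))  # median
print(statistics.mode(scores))    # mode

# Measures of dispersion
print(max(scores) - min(scores))  # range
print(statistics.stdev(scores))   # standard deviation (sample)
```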
Data Handling and Analysis:
Where is the mean, median and mode found in a normal distribution
All at the same point - the peak of the graph (mean = median = mode)
Data Handling and Analysing:
Where is the mean, median and mode found in a positively skewed distribution
Mode - peak
Median - right of mode
Mean - right of median
Data Handling and Analysing:
Where is the mean, median and mode found in a negatively skewed distribution
Mode - peak
Median - left of mode
Mean - left of median
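A minimal worked example (invented data) confirming the ordering described in the two skew cards above: in positively skewed data the mode sits at the peak, the median to its right and the mean furthest right.

```python
import statistics

# Hypothetical positively skewed data: most scores are low, a few are very high
skewed_scores = [2, 3, 3, 3, 4, 4, 5, 6, 9, 15]

mode = statistics.mode(skewed_scores)      # 3   (at the peak)
median = statistics.median(skewed_scores)  # 4.0 (to the right of the mode)
mean = statistics.mean(skewed_scores)      # 5.4 (to the right of the median)
print(mode, median, mean)
```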
Inferential Statistical Testing:
What are the types of data collected
Nominal
Ordinal
Interval
Inferential Statistical Testing:
What are the features of nominal data
all data is in categories
Inferential Statistical Testing:
What are the features of ordinal data
- data is ranked/ordered
- gaps between the data points are unequal/unknown (there are no fixed units of measurement)
Inferential Statistical Testing:
What are the features of interval data
- data is ranked/ordered
- the data has a true zero (has starting point)
- gaps between data is equal (data is in intervals)
Inferential Statistical Testing:
What is the S value when conducting a sign test
S = the frequency of the less frequently occurring sign (+ or -) across the participants’ difference scores
Inferential Statistical Testing:
What is the N value when conducting a sign test
N = the number of rows (participants), excluding any rows which show no difference (no sign)
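A minimal sketch (invented before/after scores) showing how S and N would be worked out from a table of paired scores.

```python
# Hypothetical before/after scores for 8 ppts in a repeated measures design
before = [10, 12, 9, 15, 11, 13, 14, 10]
after = [12, 14, 9, 18, 10, 16, 17, 13]

signs = []
for b, a in zip(before, after):
    if a > b:
        signs.append("+")
    elif a < b:
        signs.append("-")
    # rows with no difference contribute no sign and are excluded from N

n = len(signs)                               # N = rows showing a change
s = min(signs.count("+"), signs.count("-"))  # S = frequency of the less common sign
print(f"N = {n}, S = {s}")                   # here N = 7, S = 1
```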
Inferential Statistical Testing:
What is a type 1 error
where the researcher wrongly accepted the alternate hypothesis
this means they believed there was a difference or relationship when there wasn’t
also known as a false positive or an error of optimism
Inferential Statistical Testing:
What is a type 2 error
the researcher wrongly accepted the null hypothesis
this means they believed there was no difference or relationship when there was
also known as a false negative or an error of pessimism
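A minimal simulation (assuming numpy and scipy are available; an independent t-test is used purely as an illustrative stats test) showing why testing at p < 0.05 carries roughly a 5% risk of a Type 1 error when the null hypothesis is actually true.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
false_positives = 0
n_experiments = 1000

for _ in range(n_experiments):
    # Both groups are drawn from the SAME population, so the null hypothesis is true
    group_a = rng.normal(loc=0, scale=1, size=30)
    group_b = rng.normal(loc=0, scale=1, size=30)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < 0.05:  # the researcher would wrongly accept the alternate hypothesis
        false_positives += 1

print(false_positives / n_experiments)  # roughly 0.05 - the Type 1 error rate
```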
The Scientific Process:
What is the role of peer review
Researchers separate from the study review the study and its methodology to ensure it is credible, valid, reliable and not susceptible to ethical criticism
The Scientific Process:
What are the features of science and their meaning
- Objectivity - findings are not influenced by the researchers’ personal opinions or bias, so they cannot be misinterpreted
- Empiricism - knowledge comes from what we can directly observe
- Replicability - equal results when replicating investigations
- Falsifiability - a theory can, in principle, be tested and proven wrong
- Paradigm - the majority of those in the field share the same beliefs on a particular topic
The Scientific Process:
What is included in a written report
- Title - an informative sentence or two summarising the study
- Abstract - brief (150-200 word) summary of the report
- Introduction - introduces the background of the study to the reader making reference to past research. Ideas behind the research
- Method - how the study was conducted to allow it to be replicated
- Results - Shows the findings of the study as well as stats tests
- Discussion - summary of results and explanation of what the findings mean
- Appendices - section for raw data
- References - list of past research mentioned in the report
The Scientific Process:
How do you write a reference for a journal/research article
Surname, First initial. (year of publication). Title of research article. Name of journal, volume number, page range
e.g.
Bandura, A., Ross, D., and Ross, S. A. (1961). Transmission of Aggression through Imitation of Aggressive Models. Journal of Abnormal and Social Psychology, 63, 572-582.
The Scientific Process:
How do you write a reference for a book
Surname, First initial. (Year of publication). Title of book. Place of publication: Publisher.
e.g.
Duck, S. (1992). Human relationships. London: Sage.
Designing a study:
What features are included in designing an experiment
- IVs and DVs
- standardisation
- data analysis
- ethics
Designing a study:
What features are included in designing an observation
- type of observation
- behavioural checklist
- reliability
- ethics
Designing a study:
What features are included in designing a correlation
- covariables
- no IVs or DVs
- stats tests
- ethics
- data presentation (scattergram)
Evaluations:
Give possible evaluations for theories
- Research support/opposition
- Practical applications
- Issues and Debates - free will/deterministic, reductionist/holistic, alpha/beta bias
- Face/Temporal Validity
- Explanatory power
Evaluations:
Give possible evaluations for treatments
- Effectiveness - research support/opposition
- Ethics - protection from harm (side effects)
- Cost vs benefit - implications on the economy
- Aetiological fallacy - just because the treatment works, doesn’t mean the theory is correct
- Side effects
- Half-life of drugs - cost
Evaluations:
Give possible evaluations for studies
- Ethics
- Ecological/Internal/External Validity
- Generalisability - can the findings be generalised to the wider population
- Andro/ethnocentric
- Methodology - standardisation, replicability (reliability)
- Practical applications