Scientific processes (Research Methods) Flashcards
An ‘aim’ in research is…
A general statement of what the researcher wants to investigate
A hypothesis is
A clear testable statement that states a relationship or effect between variables
A one-tailed hypothesis is
Directional - it predicts the direction of the outcome (e.g. which group will perform better)
A two-tailed hypothesis is
Non-directional - it predicts there will be a difference/relationship but not in which direction
When do researchers tend to use a directional hypothesis?
When there is previous research on the topic
When do researchers tend to use a non-directional hypothesis?
When there is no previous research or it is contradictory
Independent variable
the variable that the researcher MANIPULATES or that changes naturally (the cause)
Dependent variable
the variable the researcher MEASURES (the effect)
What is meant by ‘levels of the IV’
The experimental conditions participants are in e.g. if the IV is Amount of Caffeine the levels may be ‘Caffeine’ and ‘No Caffeine’
What is the term for clearly defining your variables in terms of how they can be measured?
Operationalisation
What is the ‘baseline’ condition called in an experiment?
Control group
An extraneous variable is
A variable outside of the IV which has the capability to affect the DV if not controlled
How is an extraneous variable different to a confounding variable?
EVs don't systematically vary with the IV, so they only have the capability to affect the DV if left uncontrolled. CVs do vary systematically with the IV, so it's impossible to tell whether the IV or the CV has affected the DV - the results have been confounded!
Participants react to cues from the researcher/environment and this is known as
Participant reactivity
These are cues from the researcher or the research situation regarding the AIM which may lead participants to change their behaviour.
Demand characteristics
What are the behavioural consequences of demand characteristics?
The please-you effect (over-performing to please the researcher) or the screw-you effect (under-performing to sabotage the research).
The investigator may (consciously or unconsciously) affect the participant’s behaviour, this is known as…
Investigator effects
A method to control the effects of bias when designing materials and deciding the order of experimental conditions
Randomisation
A method of controlling for investigator effects by keeping all procedures the same for each participant
Standardisation
A control method where the participant is not aware of the research aims to prevent demand characteristics
Single blind procedure
A control method where neither the participants nor the researcher conducting the study knows the research aims (or which condition participants are in), preventing both demand characteristics and researcher bias
Double Blind Procedure
What is meant by experimental designs?
The different ways in which participants are organised in relation to the experimental conditions
Name the experimental design: different participants complete different levels of the IV and the two separate groups are compared.
Independent measures design
Name the experimental design: Pairs of participants are matched on a variable relevant to the DV with one being assigned to condition A and the other to B. The two separate groups are then compared.
Matched pairs design
Name the experimental design: all participants take part in all conditions of the experiment and their performance across conditions is then compared.
Repeated measures design
1 strength and 1 weakness of an independent measures design
+ no order effects, less chance of demand characteristics
- more participant variables, more time consuming and costly
1 strength and 1 weakness of repeated measures design
+ controls for participant variables, more economical
- more chance of order effects, more chance of demand characteristics
1 strength and 1 weakness of a matched pairs design
+ controls for participant variables, reduces order effects and demand characteristics
- can never match participants exactly, time consuming/costly
What is meant by order effects?
Performance in a second set of conditions is improved (practice effect) or worsened (fatigue/boredom effect) compared to the first
How might you control for order effects?
Counterbalancing or use an independent measures design
What is counterbalancing?
The ABBA technique - half the participants complete the conditions in the order AB and the other half in the order BA, spreading order effects evenly across conditions.
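The ABBA split can be sketched in Python (participant names and condition labels here are invented for illustration):

```python
# Hypothetical participant list; conditions "A" and "B" stand for the
# two levels of the IV.
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]

half = len(participants) // 2
orders = {}
for i, p in enumerate(participants):
    # First half complete A then B; second half complete B then A,
    # so practice/fatigue effects are spread across both conditions.
    orders[p] = ["A", "B"] if i < half else ["B", "A"]

print(orders)
```

In a real study participants would also be randomly allocated to the AB or BA group rather than split by list position.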
How can you control participant variables?
Repeated measures design, matched pairs design, random allocation to conditions
What is meant by the ‘population’
A group of people who are the focus of the research from which the sample is drawn
The group of people who take part in the research and are presumed to represent a larger target population are called…
Sample
The methods used to collect your sample are collectively known as
Sampling techniques
What is meant by a random sample?
All members of the target population have an equal chance of being selected for the sample
How is a random sample collected?
- Get a complete list of all names of people in the target population
- assign each name a number
- use a lottery method to select the required number (picking names from a hat, or a computer random number generator)
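The lottery method can be sketched in Python (the population list and sample size are invented for illustration):

```python
import random

# Hypothetical target population - in practice this would be the
# complete list of names for the whole target population.
population = ["Amy", "Ben", "Cara", "Dev", "Ella", "Finn", "Gita", "Hal"]

# Lottery method: every member has an equal chance of selection,
# and no one can be picked twice.
rng = random.Random(42)  # seeded only so the example is repeatable
sample = rng.sample(population, k=3)
print(sample)
```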
1 Strength and 1 weakness of random sampling
+ potentially unbiased due to the laws of chance, increasing internal validity
- time consuming, and could still be unrepresentative, particularly if some refuse to take part (then it's more like a volunteer sample)
What is systematic sampling?
Selecting every nth person in a population
How is a systematic sample collected?
- Create a sampling frame (organised list of everyone in the population e.g. alphabetical)
- Choose a sampling system (every 3rd or 5th for example)
- start your sampling from a random point
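The three steps above can be sketched in Python (the sampling frame and interval are invented for illustration):

```python
import random

# Hypothetical sampling frame: an organised (alphabetical) list of
# everyone in the population.
frame = sorted(["Amy", "Ben", "Cara", "Dev", "Ella", "Finn", "Gita", "Hal"])

n = 3                                  # sampling system: every 3rd person
start = random.Random(1).randrange(n)  # random starting point in the first n
sample = frame[start::n]               # then take every nth person
print(sample)
```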
1 Strength and 1 weakness of systematic sampling
+ Objective as the researcher has no influence over participant selection
- time consuming and if participants refuse it becomes biased like a volunteer sample
What is stratified sampling?
Where the composition of a sample matches the composition of a population based on its subgroups (or strata)
How is a stratified sample collected?
- Identify the strata (or subgroups)
- Work out each stratum's representative proportion of the population, and therefore of the sample
- Use random sampling to select the number needed from each stratum
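The proportion calculation can be sketched in Python (the strata, member lists and sample size are invented for illustration):

```python
import random

# Hypothetical population grouped into strata (e.g. year groups).
strata = {
    "Year 12": ["A1", "A2", "A3", "A4", "A5", "A6"],  # 6 of 10 people = 60%
    "Year 13": ["B1", "B2", "B3", "B4"],              # 4 of 10 people = 40%
}

sample_size = 5
total = sum(len(members) for members in strata.values())
rng = random.Random(0)

sample = []
for name, members in strata.items():
    # This stratum's share of the sample mirrors its share of the
    # population; members are then selected at random within it.
    k = round(sample_size * len(members) / total)
    sample.extend(rng.sample(members, k))

print(sample)
```

With a sample of 5, this takes 3 people from Year 12 (60%) and 2 from Year 13 (40%).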
1 Strength and 1 weakness of Stratified Sampling
+ representative as it reflects the proportions of the population
- cannot represent all differences so can’t get a completely accurate representation of the population
What is opportunity sampling?
The sample is made up of anyone who is willing and available at the time of the research
How is an opportunity sample collected?
The researcher would ask anyone who is around at the time of the study if they would take part
1 Strength and 1 weakness of opportunity sampling
+ quick and convenient
- unrepresentative, as participants may have something in common if they're all free and available at that time; also open to researcher bias, as the researcher chooses the participants
What is a volunteer sample?
Participants self-select i.e. volunteer to take part in response to an advert
How is volunteer sampling conducted?
The researcher places an advert in a relevant place (poster, newspaper or magazine, online) and waits for responses
1 Strength and 1 weakness of volunteer sampling
+ easy and not time consuming on the part of the researcher
- sample suffers from volunteer bias
What is meant by generalisability?
The extent to which the findings of research using a sample can be broadly applied to the population
Who is responsible for creation of ethical guidelines?
BPS (British Psychological Society)
Name the 4 major principles of the ethics code
- Respect
- Responsibility
- Competence
- Integrity
What method would an ethics committee use in determining if a piece of research is ethically acceptable?
Cost- benefit analysis (does the benefit of the research outweigh the costs?)
What is meant by informed consent?
Participants are aware of the aims and procedures of the research and their right to withdraw (without penalty) as well as how their data will be used before deciding to take part in the study
How do you get informed consent?
Issue a consent letter/brief with all relevant information that participants can sign (parents sign if under 16)
What is meant by deception?
Deliberately misleading participants about the true nature of the study (aims, procedures or nature of confederates) meaning you don’t obtain informed consent
What is meant by protection from harm?
Participants should not, as a consequence of their participation, be placed at a greater risk of physical or psychological harm than in daily life. Participants should have the right to withdraw if they wish as part of their protection from harm.
If you have deceived/exposed participants to harm - what should researchers do?
- Full debrief (including revealing the true nature of the study and how the data will be used)
- Provide the right to withdraw/withhold data
- Offer counselling if relevant
What is meant by confidentiality?
Participants have the right to control their own information (the right to privacy): data should not be personally identifiable (it should be anonymised or coded), institutions/locations should not be named, and data should be stored in line with the Data Protection Act.
What is meant by a pilot study?
A small scale version of the research conducted prior to the main study.
What is the aim of pilot studies?
Check materials/procedures and review before the larger scale study
In a structured observation how does the researcher record behaviour?
Using a predetermined set of behavioural categories (or behavioural checklist)
Which type of observation records behaviour continuously?
Unstructured observations
In an observation the researcher counts the number of instances a particular behaviour is displayed. This is known as…
Event sampling
In an observation the researcher records what behaviour is occurring at pre-established intervals of time (e.g. every 10 seconds). This is known as…
Time sampling
Questions for which there is no fixed choice and participants are free to answer in as much or little detail as they choose
Open questions
Questions which have a fixed set of responses determined by the question setter
Closed questions
A form of closed question where respondents indicate their agreement with a statement on a 5 point scale (strongly disagree to strongly agree)
Likert Scale
A form of closed question where respondents indicate their strength of feeling towards a statement/between two semantic opposites (e.g. happy to sad)
Rating Scale
A form of closed question where there is a list of possible options and respondents select those which apply to them
Fixed choice options
What should be avoided during question design?
- Jargon
- emotive language
- leading questions
- double barrelled questions (two questions in one)
- double negatives (I am not unhappy - agree or disagree)
When conducted research is assessed by others who specialise in the same field to ensure high quality this is known as
Peer Review
When does peer review happen?
Before research can become part of a journal
What are the aims of peer review?
- Allocate funding decisions
- Validate quality and relevance of research (looking for fraud also)
- Suggest amendments and improvements
Anonymity in peer review can be a problem, why?
Anonymous reviewers may use the process to criticise rival research or researchers without accountability
Publication bias in peer review can be a problem, why?
Journals tend to prefer significant, headline-grabbing findings, so non-significant results go unpublished - the file drawer phenomenon
Why might someone bury groundbreaking research in the process of peer review
It challenges the status quo
We must consider the impact of psychological research on which factor representing financial sustainability?
The economy
Name the term: refers to consistency, i.e. the ability to get the same results. If a study is repeated using the same method, design and measurements, and the same results are obtained
Reliability
Name the term: the extent to which a particular measure used in an investigation (e.g. a questionnaire or test administered) is consistent within itself
Internal reliability
Name the term: the extent to which the results of a measure are consistent from one use to another
External reliability
Explain how the split half method tests reliability
Split the items of the test in two; each participant's scores on the two halves are then compared using correlational analysis
Explain how the test-retest method tests reliability
Participants complete the same task twice on two different occasions; scores from the two tasks (the test and the retest) are compared using correlational analysis
Explain how you would establish inter-observer reliability
Two observers would carry out the observation separately and then the observers' scores would be analysed using a correlation
What score on a correlation would indicate reliability
0.8 or above
How do you increase reliability?
- Operationalising variables
- Pilot studies
- Standardisation
Name the term: concerns accuracy; the degree to which something overall measures what it intends to
Validity
Name the term: concerns whether the research is accurate in itself, and whether the researcher has measured what they intended
Internal validity
Name the term: whether the results are still accurate in other settings
External validity
Name the term: The extent to which a measure, at ‘face value’, looks like it is measuring what it intends to
Face validity
Name the term: correlating scores on a new test of unknown validity with another test that is known to be valid and trusted to check for accuracy
Concurrent validity
Name the term: the extent to which the results are considered an accurate representation of other people
Population validity
Name the term: the extent to which results are considered accurate outside the research setting
Ecological validity
Name the term: the extent to which results are considered accurate across time
Temporal Validity
Name the term: refers to the view that gathering data and evidence from experience (sensory information) is central to the scientific method, rather than simply relying upon our own viewpoints.
empiricism
Name the term: the extent to which research or materials/procedures are able to be repeated
replicability
Name the term: not open to interpretation - using critical distance to analyse information rather than subjectivity
objectivity
Name the term: - a set of shared assumptions and agreed methods within a scientific discipline (these may change)
paradigms and paradigm shifts
Name the term: the opportunity to refute a claim and prove it as false
falsifiability
Name the section of a report: A brief summary (150 – 200 words) of the key points of the study that appears at the start of the report
Abstract
Name the section of a report: Background to the research area and rationale (why the study was conducted). The background will include a literature review of relevant past studies and theories.
Introduction
Name the section of a report: Describes how the study was carried out in sufficient detail for someone else to be able to replicate it.
Method
Name the section of a report: Summarises the findings of the research clearly and accurately. There is normally a section on descriptive statistics and also inferential statistics.
Results
Name the section of a report: This section explains what the results mean and is broken down into several sections including looking at modifications and implications for further research
Discussion
Name the section of a report: Information on sources of information used in the report shown in alphabetical order.
References
Name the section of a report: Copies of materials that are not suited to any other section of the report
Appendices
What method of referencing is used
Harvard
Outline how an end of text reference should be written
- Author surname then comma and initial followed by full stop
- Publication year in brackets
- Article title with no capitals apart from the first word and full stop at end
- Journal title in italics with a comma
- Volume & issue (in brackets) followed by comma
- Page numbers with hyphen in between and full stop at end
What information goes in an in text reference?
Surname of researchers, year of publication (page numbers only if it's a direct quote)
If there are two researchers in the reference use a
&
If there are three or more researchers in the reference use
et al.
What are the purposes of referencing?
- To avoid plagiarism.
- Provide a theoretical framework for the topic.
- To acknowledge direct quotes.
- To provide evidence to support arguments.
- So that readers can check how much preparation has gone into your work and can find extra information