Research Methods Flashcards
Qualitative data
Non-numerical
Rich and detailed
Used for attitudes, beliefs and opinions
Collected in real-life settings
Subjective
For example:
Words, opinions, emotions
Feelings and descriptions
Quantitative data
Objective
Numerical form
Lacks detail
High in reliability
Collected in artificial (lab) settings
For example:
Numerical scores and counts used to measure behaviours
Define primary data and evaluate it
Information that has been obtained first-hand by a researcher for the purpose of a research project.
In psychology data is often gathered directly from participants as part of an experiment.
Strengths:
- authentic data
Limitations:
- requires time and effort
Define secondary data and evaluate it
Information that has already been collected by someone else and so pre-dates the current research project.
In psychology such data might include the work of other psychologists or government statistics.
Strengths:
- inexpensive
- easily accessible
Limitations:
- substantial variation in the quality and accuracy of data
Meta analysis
The process of combining the findings from a number of studies on a particular topic.
The aim is to produce an overall statistical conclusion based on a range of studies.
Important: A meta-analysis should not be confused with a review where a number of studies are compared and discussed.
What are some positives and negatives about Meta-analysis?
Positives:
- allows us to create a larger, more varied sample
- results can be generalised more widely
Negatives:
- prone to publication bias
- not all relevant studies may be selected
- therefore, conclusions might be biased.
Imposition problem
Respondents might not be able to answer adequately because the questions limit what they are able to say, and may not reflect the issues that respondents themselves feel are important.
Independent variable
The IV is directly manipulated by the experimenter. The different values/levels of the IV are known as conditions.
Dependent variable
The DV is measured to see how the different levels of the IV have affected it.
Demand characteristics
Any aspect of a study that influences participants to behave or answer in the way they think is expected of them.
Social desirability bias
A demand characteristic - the tendency of participants to answer questions or behave in a manner that will be viewed favourably by others.
Investigator effect
An investigator effect is anything the investigator does that has an effect on a participant's performance in a study.
For example: rapport, lack of standardised instructions
Experimental condition
The one in which the IV is present
Control condition
The one where the IV is not present
Open question
They do not have fixed responses, and so they allow the participant to answer however he/she wishes. They generate qualitative data.
Closed question
They restrict the participant to a predetermined set of responses and generate quantitative data.
Interviewer effect
Because an interview is a social interaction, the interviewer’s appearance or behaviour may influence the respondent’s answers.
What are the 4 types of experimental methods?
- Laboratory
- Field
- Natural
- Quasi
Explain what is meant by “causal relationship”
- It refers to cause and effect.
- Does the IV really cause the change in the DV?
What are the 6 types of observations a researcher can choose from?
- Covert
- Overt
- Participant
- Non-participant
- Naturalistic
- Controlled
What are the 2 observation sampling methods?
- Event sampling
- Time sampling
What is a structured interview and evaluate it?
A quantitative research method where the interviewer asks a set of prepared closed-ended questions in the form of an interview schedule, which he/she reads out exactly as worded. The interview is standardised.
Strengths:
- Easy to replicate
- Needs only a short amount of time
Limitations:
- Not flexible as an interview schedule must be followed
- Answers lack detail and only create quantitative data
What is an unstructured interview and evaluate it?
The researcher asks open-ended questions based on a specific research topic, and will try to let the interview flow like a natural conversation. The interviewer modifies his/her questions to suit the interviewee's specific experiences.
More useful in gathering qualitative data.
Strengths:
- More flexibility as questions can be modified
- Qualitative data helps the researcher to develop a sense of a person's understanding of a situation
- More validity because of more detail
Limitations:
- Time-consuming
- Certain skills may be needed by the interviewer
- Interviewers may bias the respondent's answers
- Interviewees may show demand characteristics and social desirability bias
What are questionnaires and evaluate them?
Questionnaires are used to ask a large sample of people for information on a specific topic.
The purpose is to get an accurate representation of the target population by using a sample, so that results can be generalised.
Strengths:
- As participants can remain anonymous they are less likely to respond in a socially desirable way (more sensitive issues and topics can be asked about)
- Easily replicable and data can be collected from a large number of people relatively quickly and cheaply
- Large sample size increases population validity
Limitations:
- Social desirability bias occurs when participants manipulate their responses in order to present themselves more favourably
- Misinterpretation of questions is more likely to occur and cannot be corrected as it could be in an interview
What are case studies and evaluate them?
An in-depth study conducted on an individual person or small group of people. They are often longitudinal and incorporate a range of other techniques including interviews and questionnaires.
Strengths:
- High ecological validity as the environment is realistic (increased generalisability)
- Case studies can use different methods, being able to yield lots of rich and detailed data that can be used to prompt or aid further research
Limitations:
- Lacks population validity as the sample size is severely restricted (no generalisation)
- Participants often possess very unique characteristics (difficult to generalise findings)
- Data is often collected retrospectively about past events, meaning information may be forgotten or incorrectly recalled (decreased value of insight)
What are pilot studies?
- Small scale trials of the actual study
- They check standardised procedures and the design of the study
- To determine the time needed for the research itself and the specific parts
- How much it may cost
- To identify possible extraneous variables and then eliminate them
- Practice for the researchers
- To ask for participants' feedback about the experiment
- To identify any discrepancies and adjust the procedure accordingly
What is a correlation?
Correlations are a research method that investigates the relationship between two co-variables (do they vary together?)
What are co-variables?
Variables that are examined for a relationship to see if they vary together
Define correlation coefficient
A number that measures the strength and direction of a correlation (the relationship between two co-variables). A correlation coefficient can range between -1.0 (perfect negative) and +1.0 (perfect positive).
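For illustration only (not part of the original card), here is a minimal Python sketch of how a correlation coefficient (Pearson's r) can be calculated for two co-variables; the variable names and scores are made-up assumptions.

```python
# Minimal sketch: Pearson's correlation coefficient for two co-variables.
# The scores are hypothetical illustration data, not from any real study.
from math import sqrt

hours_revised = [2, 4, 6, 8, 10]        # co-variable X (made up)
test_score = [35, 48, 55, 70, 78]       # co-variable Y (made up)

n = len(hours_revised)
mean_x = sum(hours_revised) / n
mean_y = sum(test_score) / n

# Sum of the products of deviations, divided by the product of the
# square roots of the summed squared deviations.
numerator = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(hours_revised, test_score))
denominator = sqrt(sum((x - mean_x) ** 2 for x in hours_revised)) * \
              sqrt(sum((y - mean_y) ** 2 for y in test_score))

r = numerator / denominator
print(round(r, 2))  # close to +1.0 = strong positive correlation
```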
What are the 3 types of correlations?
Positive correlation – as one variable increases, the other variable also increases
Negative correlation – as one variable increases, the other variable decreases
No correlation – there is no relationship between the covariables
What is an aim?
An aim states the intent or purpose of the study and always starts with “to investigate” …
What are hypotheses and what are the 3 types?
A hypothesis is a prediction of the investigation's outcome that makes explicit reference to the IV and the DV.
- Experimental hypothesis
- Correlation hypothesis
- Null hypothesis
Define Null hypothesis
This predicts that a statistically significant effect or relationship will not be found.
Outline the 2 types of experimental hypothesis
Directional hypothesis (one-tailed)
- only use a directional hypothesis that states a predicted outcome if existing research suggests the direction of the results
Non directional hypothesis (two-tailed)
- do not predict the direction of results, they simply predict “a significant difference”
What is an experimental design and name the 3 different types?
Experimental designs describe how researchers group and organise their participants across the conditions of an experiment.
- Repeated measures (same group)
- Independent groups (different participants participate in different conditions)
- Matched pairs (participants are matched on similar characteristics)
Random allocation/random assignment
Participants are randomly assigned to the different groups, such as the experimental group or the control group.
This can happen through flipping a coin or using a random number generator.
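A minimal sketch, assuming Python's built-in random module, of how random allocation could be carried out; the participant IDs and group labels are hypothetical.

```python
# Minimal sketch: randomly allocating participants to two conditions.
import random

participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]  # hypothetical IDs

random.shuffle(participants)              # every ordering is equally likely
half = len(participants) // 2
experimental_group = participants[:half]  # condition where the IV is present
control_group = participants[half:]       # condition where the IV is absent

print("Experimental:", experimental_group)
print("Control:", control_group)
```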
Explain counterbalancing
ABBA-method
- Have some participants sit condition A first and then some sit condition B first.
- Although order effects occur for each participant, they balance each other out in the results because they occur equally in both conditions.
Randomisation
It is used in the presentation of trials to avoid any systematic errors that the order of the trials might present.
Standardisation
Processes in the research are kept the same.
Under these circumstances changes in data can be attributed to the IV.
Explain “population” and “target population” in the context of sampling
Population - a large group of individuals in which a researcher may be interested in studying
Target population - the group to which the findings should be generalised
Name the 5 types of sampling
- Opportunity sampling
- Volunteer sampling
- Random sampling
- Systematic sampling
- Stratified sampling
Participant variables
These are characteristics of the participants that could affect the results (DV). Participant variables are ONLY an issue in independent groups designs!
For example: IQ, age, learning difficulties
Situational variables
These are characteristics of the environment that may influence the participants' behaviour.
For example: order effects, time of day, temperature, noise
Define confounding variables
An extraneous variable that has not been controlled is known as a confounding variable. Changes in the DV may be due to the confounding variable, rather than the IV, therefore the outcome is meaningless.
What are extraneous variables and name the 4 main types?
Any variables, other than the IV, that could affect the DV and so threaten the causal relationship.
- participant variables
- experimenter effect/investigator effect
- demand characteristics
- situational variables
Name ethical issues which could come up in research
- Deception
- Informed consent
- Protection of participants
- Confidentiality
Which organisation publishes the ethical guidelines that psychologists must consider before conducting and publishing research?
The British Psychological Society (BPS) publishes a code of ethics that psychologists must follow. Before conducting any research psychologists must consider a cost/benefit analysis of the short and long-term consequences.
Define descriptive statistics
The use of graphs, tables and summary statistics to identify trends and analyse sets of data.
Define the measures of central tendency and name the 3 main types.
The general term for any measure of the average value in a set of data.
- Median
- Mode
- Mean
Define mean as a measure of central tendency and evaluate it
Add the values of all numbers together and divide the total by the number of values.
Strengths:
- the most sensitive measure as it takes all values into account
- Therefore, more representative of the data as a whole
Limitations:
- easily distorted by extreme values
Define mode as a measure of central tendency and evaluate it
The mode is the value that occurs most often. The mode is the only average that can have no value, one value or more than one value. Putting the numbers in order can help.
Strengths:
- easy to calculate
- For some data, it is the only method you can use
Limitations:
- very crude (vague)
Define median as a measure of central tendency and evaluate it
If you place a set of numbers in order, the median is the number in the middle. If there are two middle numbers, the median is the mean of those two.
Strengths:
- Extreme scores do not affect it
- easy to calculate
Limitations:
- less sensitive and accurate than the mean as not all scores are included in the final calculation
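For illustration, a short Python sketch computing all three measures of central tendency with the built-in statistics module; the scores are made up.

```python
# Minimal sketch: mean, median and mode on made-up scores.
import statistics

scores = [3, 5, 5, 6, 7, 9, 30]   # hypothetical data; 30 is an extreme value

print(statistics.mean(scores))    # ~9.29 - pulled upwards by the extreme value
print(statistics.median(scores))  # 6 - unaffected by the extreme value
print(statistics.mode(scores))    # 5 - the most frequent score
```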
Define Dispersion and name the 2 main measures
Describes the spread of data around a central value (mean, median or mode). They tell us how much variability there is in the data.
- Range
- Standard deviation
Explain standard deviation and evaluate it
A standard deviation is a measure of how dispersed the data is in relation to the mean. Low standard deviation means data is clustered around the mean, and high standard deviation indicates data is more spread out.
Strengths:
- much more precise than the range (includes all values)
Limitations:
- can be distorted by a single extreme value.
Explain range and evaluate it
The range is the difference between the highest and lowest values in a set of numbers. Subtract the lowest number in the distribution from the highest and add 1.
Strengths:
- easy to calculate
Limitations:
- only takes into account the two most extreme values (unrepresentative of the whole data)
- extreme scores will influence the result disproportionately
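For illustration, a minimal Python sketch of both measures of dispersion; the scores are made up, and the +1 added to the range follows the convention stated in the card above.

```python
# Minimal sketch: range and standard deviation on made-up scores.
import statistics

scores = [4, 6, 7, 8, 10, 13]     # hypothetical data

# Range: highest minus lowest, plus 1 (textbook convention used above)
print(max(scores) - min(scores) + 1)        # 10

# Standard deviation: spread of the scores around the mean
print(round(statistics.pstdev(scores), 2))  # 2.89 (population SD; use stdev() for a sample)
```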
Name and explain the 3 types of distribution
Normal distribution: the graph resembles a bell shape; the highest point is at the centre of the chart, the left and right-hand sides are symmetrical, and the tails extend to each side of the graph.
Positively skewed distribution: the graph has a long tail extending to the right and its highest point is shifted to the left; the mean is pulled towards the tail, so the mean is greater than the median and mode.
Negatively skewed distribution: the graph has a long tail extending to the left and its highest point is shifted to the right; the mean is pulled towards the tail, so the mean is less than the median and mode.
Explain a bar chart
- each bar represents a different category of data, and this is denoted by the spaces between them.
- different categories of data are known as discrete data.
- the bars can be drawn in any order.
Explain a histogram
- Mainly used to present frequency distributions of interval data (numerical data)
- The horizontal axis is a continuous scale
- There are no spaces between the bars, because the bars are not considered separate categories
Explain a line graph
- It displays information as a series of data points called “markers” connected by straight line segments.
- It is similar to a scatter graph except that the measurement points are ordered and joined with straight line segments.
Explain a scatter graph
- Used to present relationships between quantitative variables when the variable on the x-axis (typically the IV) has a large number of levels
- Each point represents an individual
- There are no lines connecting the points
- The straight line that best fits the points in the scatter graph, called the “regression line” or line of best fit, can also be included
Define validity
- Validity means whether something measures what it claims to measure.
- How true or legitimate something is as an explanation of behaviour.
Define internal validity
Refers to the controls within a study. Studies with high internal validity control extraneous variables, investigator effects, demand characteristics and avoid poorly operationalised variables.
Define external validity
Relates to factors outside of the investigation. The external validity is affected by the internal validity – you cannot generalise findings of a study that was low in internal validity.
There are different types of external validity:
- Population validity (can the results be generalised to the target population)
- Cultural validity (can the results be generalised across cultures)
- Temporal validity (do the findings hold true over time)
- Ecological validity (can the findings be generalised to everyday settings; low mundane realism = lower ecological validity)
How can validity be assessed?
Face validity
- Concerns the issue of whether a self-report method looks like it is measuring what the researcher intended to measure. Establishing face validity requires only an intuitive judgement, for example by asking an expert to check the measure.
Concurrent validity
- Measures how well a new test compares to a well-established test
- To do this, participants are given both measures and the scores are compared. Similar scores on both assessment tools suggest there is high concurrent validity
Define reliability
If a research method is reliable it will give you consistent results every time it is used
Define internal reliability
- Concerns the extent to which something is consistent within itself
- Different parts of a test should give consistent results
Define external reliability
- Concerns the extent to which a test measures consistently over time
- The test should always give consistent results regardless of when it is used
Ways to assess reliability
Test-Retest Reliability
- The same test is given to participants on two separate occasions. It is said to be reliable if the two sets of scores are consistent/similar. You need to leave enough time between each test to avoid order effects.
Inter-Observer Reliability
- Used to assess reliability in an observation: the extent to which researchers observing independently agree on the behaviours being observed. There is reliability if the observers record consistent scores for the agreed, operationalised behavioural categories.
Split half method
- Measures internal reliability by splitting a test into two halves and having participants complete both. If the two halves of the test produce similar results, this indicates that the test has internal reliability.
Evaluate qualitative data
Strengths:
- More in-depth data/more insight
- Allows respondents to “speak for themselves”
Limitations:
- Difficult to make comparisons
- Small samples = cannot be generalised
- Low reliability, as difficult to repeat the exact context of research
- Time consuming
- Expensive per person researched
Evaluate quantitative data
Strengths:
- bigger samples increase population validity
- easier to display in graphs
- inexpensive
- replicable
Limitations:
- does not take emotions or opinions into account
- participants are not able to give detail in their answers
Define concurrent validity
Assessing concurrent validity involves comparing a new test with an existing test (of the same nature) to see if they produce similar results. If both tests produce similar results, then the new test is said to have concurrent validity.