Research Methods Flashcards
What are case-studies?
An in-depth investigation, description and analysis of a single individual, group, institution or event.
They tend to be longitudinal.
Data may be collected from the individual themselves, or those close to them.
This data can be qualitative or quantitative, though it tends to be more qualitative - interviews, observations etc. produce qualitative data, while psychological tests may produce quantitative data.
It can involve the analysis of unusual individuals or events
What are some benefits of case studies?
. Provide rich, detailed insight - preferable to the more ‘superficial’ data an experiment might produce, as case studies collect data over a long period of time rather than at a single brief moment - likely to increase the validity of the data
. Enables the study of unusual behaviour - some behaviours and conditions are rare and can’t be studied using other methods eg. the case of HM. Being able to study unusual behaviour can help us understand ‘normal’ functioning
. Can be used to generate hypotheses for future research
What are some limitations of case studies?
. Prone to researcher bias - Conclusions are based on the subjective interpretation of the researcher, which can reduce the validity of the study
. Participants’ accounts may be biased - Personal accounts tend to come from participants and family members, who may be prone to inaccuracy or memory decay, especially if childhood events are being recounted. This means the evidence provided may be low in validity
. Limited generalisability - often study unique individuals eg. with certain medical conditions - can’t be applied to wider population
What is content analysis?
An observational research technique that enables the indirect study of behaviour through the examination of the communications people produce. These communications can be spoken interactions (conversations), written forms (emails) or examples from the media (TV, magazines). The aim is to summarise these communications so that overall conclusions can be drawn. It is composed of coding and thematic analysis
What is coding?
The first stage of content analysis. Some data sets may be extremely large, so information is categorised into meaningful units. This then allows for each instance of the chosen categories to be counted up eg. how many times a particular word or phrase appears in a text. It produces quantitative data.
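As a hedged illustration, the tallying step of coding might look like the minimal Python sketch below; the transcript and category labels are invented, and it assumes the researcher has already assigned each unit of the communication to a category.

```python
from collections import Counter

# Hypothetical transcript, already broken into units and assigned to the
# researcher's coding categories (eg. types of comment in a conversation)
coded_units = ["praise", "criticism", "praise", "question", "praise", "criticism"]

# Count how many times each category appears - this is the quantitative data
tallies = Counter(coded_units)
for category, count in tallies.items():
    print(f"{category}: {count}")
```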
What is a thematic analysis?
A form of content analysis which produces qualitative data. It is focused on the identification of themes (recurrent ideas) within a piece of communication. It tends to be more descriptive than coding
What is the final step of thematic analysis?
When the researcher is satisfied that their chosen themes represent the communication they have studied, they may collect a new set of data to test the validity of the themes and categories. Assuming the themes explain the new data adequately, a final report is drawn up
What are some strengths of Content analysis?
. Allows researchers to bypass many of the ethical issues associated with psychological research - Much of the material an analyst hopes to study already exists in the public domain, so there are fewer issues in obtaining permission. This may also provide greater external validity, and ‘sensitive’ data can be used with the consent of those who produced it
. Flexible - can produce both quantitative and qualitative data, depending on research aims
What are some limitations of Content analysis?
. May be affected by the researcher’s subjective interpretations - Content analysis tends to involve an indirect study of communications, which are studied outside of the context they were produced in. This creates a danger of the researcher attributing opinions and motivations to the ‘author’ and their communications that were not originally present. As such, it may suffer from a lack of objectivity, especially when thematic analysis is used, as it tends to be more descriptive
— However, many modern researchers tend to be aware of this risk, and reference their biases in their final reports
What is reliability?
How consistent a form of measurement or its findings are - if a particular measurement is made twice and produces the same result, it can be described as reliable
How can reliability be assessed?
. Test-retest method
. Inter-observer reliability
What is test-retest reliability?
A method of assessing reliability which involves administering the same test on the same person/group on different occasions. If the test is reliable, the results obtained should be the same, or at least similar.
There must be sufficient time between the test and retest in order to ensure that the participant can’t recall their answers, but not so long that their attitudes/opinions/abilities may have changed.
Example - Two sets of scores from a questionnaire are obtained on two different occasions, and correlated. If the correlation is significant and positive, we can assume the test has high reliability
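A minimal sketch of that correlation step, assuming made-up questionnaire scores and using scipy’s Pearson correlation; the +0.80 cut-off is the rule of thumb given later in these cards.

```python
from scipy.stats import pearsonr

# Hypothetical scores from the same eight participants on two occasions
test_scores = [12, 18, 15, 22, 9, 17, 20, 14]
retest_scores = [13, 17, 16, 21, 10, 18, 19, 15]

r, p_value = pearsonr(test_scores, retest_scores)
print(f"test-retest correlation: r = {r:.2f}, p = {p_value:.3f}")

# Rule of thumb from these cards: reliability is acceptable if r exceeds +0.80
print("Reliable" if r > 0.80 else "Not reliable")
```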
What is inter-observer reliability?
The extent to which there is agreement between observers in their observations of a behaviour. Observers watch the same sequence of events, but record their observations privately. This may take place before the actual research, in a pilot study, in order to check that behavioural categories are being applied in the same way. Alternatively, a comparison of events may be reported at the end of the study.
The data collected by the observers is correlated to assess reliability.
In content analysis, this is referred to as inter-rater reliability. In interviews, it is inter-interviewer reliability
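A similar sketch for inter-observer reliability, assuming two observers’ hypothetical tallies of the same behaviour across ten observation intervals; a rank-based (Spearman) correlation is used here because the tallies are being treated as ordinal, which is one reasonable choice rather than the only one.

```python
from scipy.stats import spearmanr

# Hypothetical tallies of a target behaviour recorded independently by two
# observers watching the same ten observation intervals
observer_a = [3, 5, 2, 7, 4, 6, 1, 5, 3, 4]
observer_b = [2, 5, 3, 7, 4, 5, 1, 6, 3, 4]

rho, p_value = spearmanr(observer_a, observer_b)
print(f"inter-observer correlation: rho = {rho:.2f}")  # should exceed +0.80
```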
What is the correlation coefficient for reliability?
It should exceed +0.80
How can reliability be improved in Questionnaires?
A questionnaire that produces low test-retest reliability may require some items to be deselected or rewritten eg. open questions that are prone to misinterpretation may be replaced with closed questions, which are less ambiguous
How can reliability be improved in Interviews?
Ideally, the same interviewer would be used for each interview within the study. However this may not always be possible. Instead, all interviewers must be properly trained, so that they are not asking leading or ambiguous questions. Structured interviews can improve reliability, as the interviewer is controlled by a fixed list of questions.
How can reliability be improved in Observations?
Reliability can be improved by ensuring all behavioural categories have been operationalised (making abstract concepts measurable). Categories should not overlap (eg. hugging and cuddling) and all possible behaviours should be covered on the checklist
Observers should also be trained in using behavioural categories, and should be able to discuss the categories with each other so that they can be applied more consistently
How can reliability be improved in Experiments?
Procedures are the focus of reliability in experiments. In order to achieve reliability, researchers must use standardised procedures (establishing consistent methods of obtaining data)
What is validity?
The extent to which an observed effect is genuine - whether a psychological test, observation or experiment produces a legitimate result. It can be further divided into:
. Internal validity - whether the researcher has measured what they intend to measure
. External validity - The extent to which findings can be applied beyond the research setting - how far it accurately applies to the wider world
What is Internal validity?
Whether the findings/observed effects in a piece of research are due to the manipulation of the independent variable, rather than another factor.
Demand characteristics can severely limit internal validity, as participants’ behaviour is shaped by what they see as the researcher’s expectations, rather than the IV
What is External validity?
The extent to which findings can be generalised beyond the research setting to other populations, settings and time periods
. Ecological validity - A form of external validity concerned with applying findings to other settings and situations. It is not necessarily the setting of the research (eg. field, lab) that affects this - the task can also lower ecological validity eg. using word lists to test memory
. Temporal validity - A form of external validity - the extent to which the findings of a piece of research, or concepts from a particular theory, can be applied to other time periods and eras
How can validity be assessed?
. Face validity - Whether a test, scale, or measure appears to measure what it is supposed to measure. This can be determined by ‘eyeballing’ the measuring instrument, or having it checked by a professional eg. does a test of anxiety look like it measures anxiety?
. Concurrent validity - Where the results obtained from a psychological measure are close to, or match, the results from another recognised and well-established test. Close agreement between the two sets of data indicates that the new test has high concurrent validity - if the correlation exceeds +0.80
How can the validity of Experiments be improved?
. Using a control group - Allows researcher to better assess whether changes in the dependent variable were due to the effects of the independent variable
. Using standardised procedures - Minimises the impact of participant reactivity and investigator effects - single/double blind procedures may also be used to achieve this
How can the validity of Questionnaires be improved?
Many questionnaires and psychological tests incorporate a lie scale within their questions. This assesses the consistency of a participant’s responses, and aims to control the effects of social desirability bias. Validity can also be enhanced by assuring participants that all data will remain anonymous
How can the validity of Observations be improved?
. Observational research tends to produce data high in ecological validity, as there is minimal intervention by the researcher. This is especially true for covert observations
. Behavioural categories that are too broad, overlapping, or ambiguous can reduce validity
How can the validity of Qualitative research be improved?
. Qualitative research is typically seen as having higher ecological validity than quantitative methods, as the detail and depth associated with qualitative methods eg. case studies, interviews is seen as better reflecting participants’ realities.
However, the researcher may still have to demonstrate the interpretative validity of their conclusions. This can be shown through the coherence of the researcher’s narrative, and by including direct quotes from participants
Validity is also enhanced through triangulation - using multiple sources of evidence eg. data compiled through interviews with friends and family, personal diaries, observations etc.
What is interpretive validity?
The extent to which the researcher’s interpretation of events matches that of their participants
What sections is a scientific report made up of?
. Abstract
. Introduction
. Method
. Results
. Discussion
. Referencing
What is an abstract?
A short summary which outlines the major elements of the research - the aims, hypotheses, procedure, results and conclusions. The key details of the report.
What is an introduction?
A literature review of the chosen area of research. It details the theories, concepts and studies relevant to the current study. It should follow a logical progression, beginning broadly and becoming more specific throughout until the aims and hypotheses of the current research are presented
What is a method?
A description of what the researcher did/their procedure. It should include sufficient detail, so that other researchers would be able to replicate the study if they wished. It should include:
. Design - it is clearly stated what design is used eg. independent groups, naturalistic observation - an explanation for the choice of design should be given
. Sample - Information related to the people involved in the study - Number of people involved, biological/demographic information, sampling method, target population
. Apparatus/materials - Description of the assessment materials, and other relevant materials
. Procedure - Details of the steps involved in the experiment, from beginning to end. Includes a record of everything said to participants - briefing, standardised instructions, debriefing
. Ethics - An explanation of how any ethical concerns were addressed within the study
What is in the results section?
A summary of the key findings of the investigation. It is likely to include descriptive statistics eg. tables, graphs, charts, measures of central tendency. It also includes inferential statistics eg. the choice of statistical test, critical values, the level of significance, and which hypothesis was rejected
If qualitative methods have been used, there will likely be an analysis of themes/categories
What is the discussion?
A consideration of what the findings of the research study tell us in terms of psychological theory.
Results are summarised in verbal, rather than statistical form. These results should be discussed within the context of the evidence presented in the introduction, as well as other relevant research.
The researcher should discuss the limitations of the present investigation, and how these could be addressed in the future.
The wider implications of the research are considered. This can include real-world applications of what has been discovered, and what contributions the research has made to the existing knowledge base within the field
What is referencing?
Full details of any source material cited in the report eg. journal articles, books, websites
How are journal articles referenced?
Author(s), Date, Article title, Journal name (in italics), Volume (issue), Page numbers
How are books referenced?
Author(s), Date, Title of Book (in italics), Place of publication, Publisher
How are websites referenced?
Source, Date, Title, Weblink, Date accessed
What are the Features of science?
. Paradigms and paradigm shifts
. Theory construction and hypothesis testing
. Falsifiability
. Replicability
. Objectivity and the empirical method
What is meant by paradigms and paradigm shifts?
. Paradigm - A set of shared assumptions and agreed methods within a scientific discipline
. Paradigm shift - The result of a scientific revolution where there is a significant change in the dominant, unifying theory in a scientific discipline
Kuhn (1962) suggested that what distinguishes scientific from non-scientific disciplines is a shared paradigm. Natural sciences are characterised by having a number of principles at their core eg. the theory of evolution in biology. However, the social sciences, such as psychology, lack a universally accepted paradigm, as there are too many internal disagreements and conflicting approaches. Kuhn argued that they are better seen as a pre-science.
Kuhn also said that progress within an established science occurs when there is a scientific revolution - a handful of researchers begin to question an accepted paradigm, this criticism gathers popularity and pace, and eventually a paradigm shift occurs as there is too much contradictory evidence to ignore.
What is theory construction and hypothesis testing?
. Theory construction - The process of developing an explanation for the causes of behaviour by systematically gathering evidence via direct observation, and then organising this into a coherent account
. Hypothesis testing - A key feature of a theory is that it should produce statements which are testable. Through this, a theory can be falsified. Theories should suggest a number of possible hypotheses, which can then be tested using systematic and objective methods. If the evidence supports the theory, it is strengthened; if it refutes the theory, it may need to be revised. The process of deriving new hypotheses from an existing theory is known as deduction.
What is a theory?
A set of general laws or principles that have the ability to explain particular events or behaviours
What is Falsifiability?
The principle that a theory cannot be considered scientific unless it admits the possibility of being proven false.
Popper (1934) argued that the key criterion of a scientific theory is falsifiability. Genuine scientific theories should hold themselves up for hypothesis testing and the possibility of being proven false. Even if a scientific principle has been successfully tested, it isn’t necessarily true - it just hasn’t been proven false. Theories which survive multiple attempts to falsify them are considered the strongest, not because they have been proven right, but because they haven’t been proven wrong. Pseudosciences cannot be falsified.
What is Replicability?
. The extent to which scientific procedures and findings can be repeated by other researchers
An important aspect of Popper’s approach to features of science is replicability - if a scientific theory is to be trusted, its findings must have been shown to be repeatable across a number of different contexts.
Replication is also important in assessing validity, as by repeating a study across different circumstances we can determine how far its findings can be generalised. To enable replication, psychologists must report their investigations with precision and rigour
What is meant by Objectivity and the Empirical method?
. Objectivity - All sources of personal bias are minimised so as to not distort or influence the research process
. Empirical method - Scientific approaches that are based on the gathering of evidence through direct observation and experience
Researchers must maintain objectivity in investigations - they must maintain a ‘critical distance’ during research, not allowing their personal views or biases to affect the data they collect, or to influence the behaviour of participants being studied. In general, methods associated with higher levels of control eg. lab studies, tend to be the most objective
Objectivity is based on the empirical method, which emphasises the importance of data collection through direct, sensory experience. Examples include the experimental method and the observational method
What is meant by participant reactivity?
The tendency for participants to react to cues from the researcher or the research environment
What are investigator effects?
Any effect of the investigator’s behaviour (whether conscious or unconscious) on the research outcome (DV). This can include the design of the study, and the selection of and interaction with participants.
What is a single blind trial?
A form of research design where a participant is not aware of research aims, or of the condition of the experiment they are involved in
What is a double blind trial?
Where neither the participant nor researcher conducting the study are aware of the research aims or other important details of the study, and thus have no expectations that might alter a participants behaviour
What is social desirability bias?
A tendency for respondents to answer questions in a way that presents them in a better light
What is meant by standardisation?
Using the same formalised procedures and instructions for all participants in a research study
What are statistical tests?
Inferential tests - Tests used to determine whether a difference or correlation found within an investigation is statistically significant
What is meant by ‘significant’?
A difference or association greater than could have occurred by chance alone
What factors determine what statistical test is to be used?
- Whether the researcher is looking for a DIFFERENCE OR CORRELATION/ASSOCIATION - can be derived from the wording of the hypothesis
- If the researcher is looking for a difference, what EXPERIMENTAL DESIGN is being used - a related design (repeated measures or matched pair) or unrelated (independent groups)
- Whether the data is NOMINAL, ORDINAL, OR INTERVAL
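These three decisions can be pulled together in a small helper, sketched below; the function and argument names are invented, and the mapping simply encodes the rules given in the test cards that follow.

```python
def choose_statistical_test(looking_for: str, design: str, data_level: str) -> str:
    """Return the test suggested by these cards for a given combination.

    looking_for: "difference" or "correlation"
    design:      "related" or "unrelated" (only matters for tests of difference)
    data_level:  "nominal", "ordinal" or "interval"
    """
    if looking_for == "correlation":
        return {"nominal": "Chi-squared",
                "ordinal": "Spearman's rho",
                "interval": "Pearson's r"}[data_level]
    # Tests of difference
    if design == "unrelated":
        return {"nominal": "Chi-squared",
                "ordinal": "Mann-Whitney",
                "interval": "Unrelated t-test"}[data_level]
    return {"nominal": "Sign test",
            "ordinal": "Wilcoxon",
            "interval": "Related t-test"}[data_level]


# Example: a difference, repeated measures (related) design, ordinal ratings
print(choose_statistical_test("difference", "related", "ordinal"))  # Wilcoxon
```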
When is a Chi-squared test used?
- When the researcher is looking for a DIFFERENCE
- UNRELATED DESIGN
- NOMINAL DATA
or - The researcher is looking for an ASSOCIATION
- The data is NOMINAL
When is a Mann-Whitney test used?
- The researcher is looking for a DIFFERENCE
- UNRELATED DESIGN
- ORDINAL DATA
When is an Unrelated T-test used?
- The researcher is looking for a DIFFERENCE
- UNRELATED DESIGN
- INTERVAL DATA
When is a Sign test used?
- The researcher is looking for a DIFFERENCE
- RELATED DESIGN
- NOMINAL DATA
When is a Wilcoxon test used?
- The researcher is looking for a DIFFERENCE
- RELATED DESIGN
- ORDINAL DATA
When is a Related T-test used?
- The researcher is looking for a DIFFERENCE
- RELATED DESIGN
- INTERVAL DATA
When is a Spearman’s rho test used?
- The researcher is looking for an association/correlation
- ORDINAL DATA
When is a Pearson’s R test used?
- The researcher is looking for an association/correlation
- INTERVAL DATA
What is Nominal Data?
A form of quantitative data, where data is represented in the form of categories. Eg. Asking people if they like chocolate - those who say yes are one category, those who say no are another.
Nominal Data is discrete, as each item of data can only appear in one category eg. if you ask people their favourite football team, each person’s vote will only appear in one category
What is Ordinal Data?
A form of quantitative data, where data is ordered in some way. Eg. asking people how much they like psychology on a scale of 1 to 10
Ordinal Data does not have equal intervals between each unit - it would not make sense to say that someone who rated psychology an 8 likes it two times as much as someone who rated it a 4
It can be seen as lacking precision as it is based on subjective opinion - eg. two people may attach different worth to a rating of 4 on a 1-10 scale. As such, in statistical tests ordinal data is converted - raw scores are turned into ranks eg. 1st, 2nd, 3rd - and it is the ranks that are used
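A minimal sketch of that conversion using scipy’s rankdata, with invented 1-10 ratings; tied scores share the average of the ranks they would occupy.

```python
from scipy.stats import rankdata

# Hypothetical 1-10 ratings of psychology from six participants
ratings = [7, 4, 9, 4, 6, 10]

# Statistical tests on ordinal data work with ranks rather than raw scores
print(rankdata(ratings))  # [4.  1.5 5.  1.5 3.  6. ]
```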
What is Interval Data?
A form of quantitative data, where the data is based on numerical scales that includes units of equal, precisely defined size eg. scales of measurement such as time, temperature - if you recorded how long it took students to take a test, you would be recording interval data.
What is a Directional Hypothesis?
States the direction of the difference or relationship
What is a Non-Directional Hypothesis?
Does not state the direction of the difference or relationship
What is a Null Hypothesis?
The statement of no difference, correlation or association between the variables being studied
What do statistical tests do?
Determine whether the observed effect is statistically significant, and by extension whether we accept or reject the null hypothesis
What is the usual level of significance in psychology?
0.05 or 5%
The probability that the observed effect occurred by chance must be equal to or less than 5% (p ≤ 0.05) for the result to be significant
Why can psychologists never be 100% certain about a result?
All members of a population haven’t been tested under all possible circumstances
What is a calculated value?
When a statistical test has been calculated, the result is a number - the calculated value. To test for statistical significance, the calculated value must be compared to a critical value. Each statistical test has its own table of critical values.
For some tests, the calculated value must be greater than or equal to the critical value. For others, the calculated value must be less than or equal to the critical value
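A small sketch of that comparison; the function name is invented and the sign test figures are made up purely for illustration (real critical values come from the published tables for each test).

```python
def is_significant(calculated: float, critical: float,
                   calculated_must_exceed: bool) -> bool:
    """Compare a calculated value against a critical value.

    For tests such as Chi-squared, Spearman's rho and Pearson's r the
    calculated value must be greater than or equal to the critical value;
    for the Sign test, Mann-Whitney and Wilcoxon it must be less than or
    equal to the critical value.
    """
    return calculated >= critical if calculated_must_exceed else calculated <= critical


# Hypothetical Sign test: calculated S = 2 against an illustrative critical value of 3
print(is_significant(calculated=2, critical=3, calculated_must_exceed=False))  # True
```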
How can researchers determine what critical value to use?
- If the test is one-tailed or two-tailed - A one-tailed test is used for a directional hypothesis, while a two-tailed test is used for a non-directional hypothesis
- The number of participants in the study, or N value
- The level of significance, or P value - the 0.05 value is standard
Why might a different P value/level of significance be used?
In studies where there is a human cost eg. drug trials, or in ‘one-off’ studies that couldn’t be replicated - in these cases, a more stringent level such as 0.01 may be used - the lower the p value used, the stricter the test and the lower the risk of a false positive
What is a Type 1 error?
When the null hypothesis is rejected when it is in fact true, and the alternative hypothesis is accepted - sometimes referred to as a false positive, as the researcher claims to have found a significant effect that doesn’t exist. More likely to occur when the significance level is too lenient (too high, eg. 0.1)
What is a Type 2 error?
When the null hypothesis is accepted when it is in fact false, and the alternative hypothesis is rejected - a false negative. More likely to occur when the significance level is too stringent (too low, eg. 0.01)
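A quick simulation can show why the 0.05 level produces Type 1 errors roughly 5% of the time. In the sketch below (invented numbers, using numpy and scipy), both groups are drawn from the same population, so every ‘significant’ difference found is a false positive.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=1)
n_simulations = 2000
false_positives = 0

for _ in range(n_simulations):
    # Two samples from the SAME population - there is no real difference
    group_a = rng.normal(loc=100, scale=15, size=30)
    group_b = rng.normal(loc=100, scale=15, size=30)
    _, p = ttest_ind(group_a, group_b)
    if p < 0.05:
        false_positives += 1  # a Type 1 error

print(f"Type 1 error rate: {false_positives / n_simulations:.3f}")  # roughly 0.05
```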
What is an Aim?
A general statement of what the researcher intends to investigate - the purpose of the study
What is a hypothesis?
A clear, precise and testable statement that states the relationship between the variables being investigated. It can be directional or non-directional
When is a directional hypothesis used?
When a theory or the findings of a previous piece of research suggest a particular outcome
What is meant by the Experimental Method?
Involves the manipulation of an Independent Variable to measure the effect on the Dependent Variable. Experiments may be laboratory, field, natural, or quasi
What is Operationalisation?
Clearly defining variables in terms of how they can be measured
What are Extraneous Variables?
Any variable, other than the Independent Variable, that may affect the Dependent Variable if not controlled. They do not vary systematically with the Independent Variable.
What are Confounding Variables?
A form of extraneous variable that DOES vary systematically with the Independent Variable. Therefore we can’t tell if any change in the Dependent Variable is due to the manipulation of the Independent Variable, or the confounding variable
What are Demand characteristics?
Any cue from the researcher or research situation that may be interpreted by participants as revealing the purpose of the investigation. This may lead to the participant changing their behaviour. This unnatural behaviour is an extraneous variable.
What is Randomisation?
The use of chance methods to control for the effects of bias when designing materials and deciding the order of experimental conditions. It aims to reduce extraneous/confounding variables, specifically investigator effects
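A minimal sketch of what ‘chance methods’ can mean in practice, using Python’s random module; the word list and condition labels are invented.

```python
import random

# Shuffle the order of materials (eg. a word list in a memory study) so the
# order is decided by chance rather than by the researcher
word_list = ["apple", "river", "candle", "mirror", "garden", "pencil"]
random.shuffle(word_list)
print(word_list)

# Decide the order of experimental conditions by chance for one participant
conditions = ["A", "B"]
print(random.sample(conditions, k=len(conditions)))
```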
What is Standardisation?
Using the same formalised procedures and instructions for all participants in a research study
What is meant by Experimental design?
The different ways participants can be organised in relation to the experimental conditions
What are the 3 types of experimental design?
. Independent groups
. Repeated measures
. Matched pairs
What is an Independent groups design?
Where two or more separate groups of participants are placed into different conditions of the experiment. One group represents one experimental condition. Group 1 would do Condition A, and Group 2 Condition B
What is a Repeated measures design?
Where all participants experience both/all conditions of the experiment. A participant would first take part in Condition A, then later take part in condition B.
What is a matched pairs design?
Participants are paired together based on variables, or a single variable, relevant to the experiment. Eg. in a memory study participants may be matched based on IQ. A participant from each pair is then placed in each condition.
Evaluate the use of an Independent groups design:
- Problems with validity - Participants in each group differ in terms of participant variables. If researchers find a difference between the groups in the dependent variable, they can’t be certain whether this is because of the independent variable or participant variables. These differences can act as confounding variables, reducing the validity of the study. To deal with this, researchers may use random allocation, but this doesn’t fully solve the issue
- More expensive and time consuming - Each participant is only used once, so more time and money has to be spent recruiting participants. Twice as many participants are needed to produce the same amount of data as a repeated measures design
+ No order effects - Participants only take part in one condition, so they are less likely to guess the aims of the study or to show practice or fatigue effects, increasing the internal validity of the study
Evaluate the use of a Repeated measures design:
- Order effects - Participants appear in both conditions, so may experience order effects. These could take the form of fatigue effects, where participants become tired or bored by the time they take part in the second condition, or practice effects, where they perform better in the second condition as they have had practice with the task in the first. Order acts as a confounding variable. To deal with it, researchers may use counterbalancing, where half the participants do condition A first, then condition B, while the other half do condition B first, then condition A (a short sketch of this follows this card).
- Demand characteristics - When participants do all conditions of the experiment it becomes more likely that they will work out the aim of the study. This is why demand characteristics are more of a problem in Repeated measures designs.
+ Participant variables are controlled - The same participants are used in each condition, so participant variables are reduced leading to higher validity
+ Less costly and time consuming - The same participants are used in all conditions, so fewer participants have to be recruited than in an independent groups or matched pairs design
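A minimal sketch of the counterbalancing mentioned above, with invented participant labels: the pool is shuffled and split, so half complete condition A then B and the other half B then A.

```python
import random

# Hypothetical participant pool for a repeated measures study
participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]
random.shuffle(participants)

# Half do A then B, the other half B then A, so order effects are balanced out
half = len(participants) // 2
for p in participants[:half]:
    print(p, "-> condition A then B")
for p in participants[half:]:
    print(p, "-> condition B then A")
```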
Evaluate the use of a Matched pairs design:
+ Order effects - Not a problem as participants only take part in one condition
+ Demand characteristics - Also less of a problem as participants are only taking part in one condition, so are less likely to guess the research aims
+ Participant variables are reduced - participants are matched on key characteristics, increasing the study’s validity
- Matching is never perfect - participants can never be fully matched, so there may still be important differences affecting the dependent variable
- Costly and time consuming - Because different participants are used in each condition, but also because a pre-test may be required to ensure matching of pairs is accurate and effective
What are the 4 main types of experiment?
. Laboratory experiment
. Field experiment
. Natural experiment
. Quasi experiment
What is a Laboratory experiment?
An experiment that takes place in a highly controlled environment where the researcher manipulates the Independent variable and records the effect on the Dependent variable.
Participants may be required to travel to the location of the study, but the location may not always be a lab eg. it could be a school. Strict control of extraneous variables is maintained.
Evaluate the use of Laboratory experiments:
+ High control of confounding and extraneous variables - this means the researcher can be more certain that any effect on the dependent variable is the result of the manipulation of the Independent variable. We can be more certain about establishing cause and effect, increasing the research’s internal validity
+ High replicability - Replication is more possible than in other types of experiment as there is a high level of control. This can help to ensure that new extraneous variables are not introduced when repeating an experiment
- Low Generalisability - Lab experiments may lack generalisability as the environment used may be artificial and unlike everyday life. Participants may behave differently due to the lab setting, so their behaviour can’t be generalised beyond the research setting (low external validity)
- Risk of Demand characteristics - Participants are usually aware that they are being tested in a lab experiment, so even if they don’t know why there is still the risk of demand characteristics
- Low mundane realism - The tasks participants are asked to carry out are often unrepresentative of everyday activities eg. recalling a random list of words in a memory study - this is often not how memory is used in everyday life
What is a Field experiment?
An experiment that takes part in a natural setting where the researcher manipulates the Independent variable and records the effect on the Dependent variable.
The researcher travels to the participants’ usual environment
Evaluate the use of Field Experiments:
+ Higher mundane realism - The independent variable is manipulated in a more natural, everyday setting, so participants may produce behaviour that is more natural and authentic, especially if participants are unaware that they are being studied (high external validity)
- Less control of confounding and extraneous variables - this means that cause and effect relationships in field experiments are more difficult to establish. It also means that replication is less possible
- Ethical concerns - If participants are unaware that they are being studied they cannot consent to being studied - so the research may violate ethical guidelines as an invasion of privacy
What is a Natural experiment?
The researcher has no control or manipulation of the independent variable - it is naturally occurring - but they do still measure the effect on the dependent variable. The dependent variable may also be naturally occurring eg. exam results. The research setting can be natural or a lab environment.
Evaluate the use of Natural Experiments:
+ Access to unique research opportunities - Natural experiments provide an opportunity to research areas that may not otherwise be studied for practical or ethical issues (eg. effects of institutionalisation and Romanian orphan studies)
+ High external validity - As they involve the study of real-world issues and problems as they occur eg. the effects of a natural disaster on stress levels
- Limited replicability - A naturally occurring event may only happen rarely, limiting opportunities for similar research
- Participants might not be randomly allocated to experimental conditions - eg. in Romanian orphan studies, the independent variable was whether children were adopted early or late, but there may have been pre-existing differences between participants that affected the results. Poor social skills could be seen as a result of long-term institutionalisation, but the children might have been in an institution for a long time because they already had poor social skills, which made them less attractive to adoptive parents. This makes it harder to be confident that the IV caused the DV; if the DV is measured in a lab setting, realism may also be reduced and demand characteristics may be an issue
What are Quasi-experiments?
Have a pre-existing Independent variable (eg. age, gender) which isn’t determined by the researcher. It is an existing difference. The dependent variable may be naturally occurring, or be devised by the experimenter and measured in the field or a lab.
Evaluate the use of Quasi-experiments:
+ Carried out under controlled conditions - so have higher replicability
- Participants can’t be randomly allocated to conditions, as the conditions are based on existing differences, so there may be confounding variables
- The Independent variable is not deliberately changed by the researcher, so we cannot claim it has caused any observed change (cause and effect)
What is a ‘Population’?
A group of people the researcher is interested in studying eg. women in their fifties. Also known as the target population, as it is a chosen subset of the wider population. From this, a sample is drawn.
What is a ‘Sample’?
A smaller group of people who are part of the target population. Ideally, the sample is representative of the target population so that the findings of a study are generalisable.