Research Methods Flashcards
• independent variable
the variable in an experiment that is specifically manipulated or is observed to occur before the dependent, or outcome, variable, in order to assess its effect or influence. Independent variables may or may not be causally related to the dependent variable
• dependent variable
the outcome that is observed to occur or change after the occurrence or variation of the independent variable in an experiment, or the effect that one wants to predict or explain in correlational research
• extraneous variable
a measure that is not under investigation in an experiment but may potentially affect the outcome or dependent variable and thus may influence results. Such potential influence often requires that an extraneous variable be controlled during research
• confounding variable
an independent variable that is conceptually distinct but empirically inseparable from one or more other independent variables
• operationalisation
the process of defining a variable or concept in terms of the specific, observable and measurable operations or procedures used to manipulate or measure it, so that the concept is tied to something that can actually be recorded
• Directional hypothesis
a scientific prediction stating (a) that an effect will occur and (b) whether that effect will specifically increase or specifically decrease, depending on changes to the independent variable
• Non-directional hypothesis
a hypothesis that one experimental group will differ from another without specification of the expected direction of the difference
• Correlation Definition
the degree of a relationship (usually linear) between two variables, which may be quantified as a correlation coefficient
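For illustration, a minimal Python sketch (with made-up co-variables, not data from any study) of how a Pearson correlation coefficient might be computed:

```python
# A minimal sketch: Pearson correlation for two hypothetical co-variables.
from math import sqrt

hours_revised = [2, 4, 6, 8, 10]      # hypothetical co-variable 1
exam_score    = [35, 50, 55, 70, 80]  # hypothetical co-variable 2

n = len(hours_revised)
mean_x = sum(hours_revised) / n
mean_y = sum(exam_score) / n

# covariance and standard deviations (population form)
cov  = sum((x - mean_x) * (y - mean_y)
           for x, y in zip(hours_revised, exam_score)) / n
sd_x = sqrt(sum((x - mean_x) ** 2 for x in hours_revised) / n)
sd_y = sqrt(sum((y - mean_y) ** 2 for y in exam_score) / n)

r = cov / (sd_x * sd_y)
print(round(r, 2))  # close to +1, indicating a strong positive correlation
```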
• Laboratory experiment definition
scientific study conducted in a laboratory or other such workplace, where the investigator has some degree of direct control over the environment and can manipulate the independent variables
• Field experiment definition
a study that is conducted outside the laboratory in a “real-world” setting. Participants are exposed to one of two or more levels of an independent variable and observed for their reactions; they are likely to be unaware of the research. Such research often is conducted without random selection or random assignment of participants to conditions and without deliberate experimental manipulation of the independent variable by the researcher
• Natural experiment definition
the study of a naturally occurring situation as it unfolds in the real world. The researcher does not exert any influence over the situation but rather simply observes individuals and circumstances, comparing the current condition to some other condition
• Quasi experiment definition
research in which the investigator cannot randomly assign units or participants to conditions, cannot generally control or manipulate the independent variable, and cannot limit the influence of extraneous variables
• BPS
a professional organization, founded in 1901, that is the representative body for psychologists and psychology in the United Kingdom. By royal charter, it is charged with national responsibility for the development, promotion, and application of psychology for the public good
• Physical Harm
defined as pain, injury, illness or impairment caused by another
• Psychological Harm
emotional or cognitive disturbances resulting from another’s actions
• Deception
any distortion of or withholding of fact with the purpose of misleading others
• Informed Consent
a person’s voluntary agreement to participate in a procedure on the basis of his or her understanding of its nature, its potential benefits and possible risks, and available alternatives
• Right to Withdraw
participants must be informed that they can leave the study at any point if they wish, and are under no obligation to disclose a reason why if they do
• Privacy and Confidentiality
the right of patients and others (e.g., consumers) to control the amount and disposition of the information they divulge about themselves
• Repeated Measure definition
an experimental design in which the effects of treatments are seen through the comparison of scores of the same participant observed under all the treatment conditions
• Independent Group definition
a study in which individuals are assigned to only one treatment or experimental condition and each person provides only one score for data analysis
• Matched Pair definition
a study involving two groups of participants in which each member of one group is paired with a similar person in the other group, that is, someone who matches them on one or more variables that are not the main focus of the study but nonetheless could influence its outcome
• Order effects
in within-subjects designs, the influence of the order in which treatments are administered, such as the effect of being the first administered treatment
• Counter balancing
arranging a series of experimental conditions or treatments in such a way as to minimize the influence of extraneous factors, such as practice or fatigue, on experimental results
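As a rough illustration, a short Python sketch (hypothetical participants and conditions) of ABBA-style counterbalancing, where the order of two conditions alternates across participants:

```python
# A minimal sketch: alternating the order of two conditions, A and B,
# so practice or fatigue effects are spread evenly across both conditions.
participants = ["P1", "P2", "P3", "P4"]

for i, p in enumerate(participants):
    order = ["A", "B"] if i % 2 == 0 else ["B", "A"]
    print(p, order)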
• Randomisation
in experimental design, the assignment of participants or units to the different conditions of an experiment entirely at random, so that each unit or participant has an equal likelihood of being assigned to any particular condition
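A minimal sketch, assuming a simple two-condition experiment with made-up participant labels, of how random allocation to conditions might be carried out:

```python
# A minimal sketch: random allocation of participants to two conditions,
# so each person has an equal chance of ending up in either condition.
import random

participants = [f"P{i}" for i in range(1, 11)]  # hypothetical participants
random.shuffle(participants)                    # chance procedure
half = len(participants) // 2
condition_a, condition_b = participants[:half], participants[half:]
print(condition_a)
print(condition_b)
```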
• Demand Characteristics
in an experiment or research project, cues that may influence or bias participants’ behavior, for example, by suggesting the outcome or response that the experimenter expects or desires. Such cues can distort the findings of a study
• The Single blind method
a procedure in which participants are unaware of the experimental conditions under which they are operating
• The Double blind method
a procedure in which both the participants and the experimenters interacting with them are unaware of the particular experimental conditions
• Pilot Study
a small, preliminary study designed to evaluate procedures and measurements in preparation for a subsequent, more detailed research project
• Random sampling definition
a process for selecting a sample of study participants from a larger potential group of eligible individuals, such that each person has the same fixed probability of being included in the sample and some chance procedure is used to determine who specifically is chosen
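For illustration, a minimal Python sketch (hypothetical sampling frame) of simple random sampling, where a chance procedure determines who is chosen:

```python
# A minimal sketch: simple random sampling from a full list (sampling frame)
# of a hypothetical population; each person has the same chance of selection.
import random

population = [f"Student{i}" for i in range(1, 101)]  # hypothetical sampling frame
sample = random.sample(population, k=10)             # chance procedure, no repeats
print(sample)
```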
• Stratified sampling definition
the process of selecting a sample from a population comprised of various subgroups (strata) in such a way that each subgroup is represented
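A minimal sketch, with made-up strata and proportions, showing how a stratified sample could be drawn so each subgroup is represented in proportion to its size:

```python
# A minimal sketch: proportional stratified sampling from hypothetical strata.
import random

strata = {
    "year_1": [f"Y1_{i}" for i in range(60)],  # 60% of the population
    "year_2": [f"Y2_{i}" for i in range(40)],  # 40% of the population
}
total = sum(len(members) for members in strata.values())
sample_size = 20

sample = []
for name, members in strata.items():
    k = round(sample_size * len(members) / total)  # proportional allocation
    sample.extend(random.sample(members, k))       # random choice within each stratum
print(sample)
```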
• Systematic sampling definition
a type of sampling process in which all the members of a population are listed and then some objective, orderly procedure is applied to randomly choose specific cases
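A minimal sketch (hypothetical population list and interval) of systematic sampling, taking every k-th member after a random starting point:

```python
# A minimal sketch: systematic sampling — list the population, pick a random
# starting point within the first interval, then take every k-th member.
import random

population = [f"Employee{i}" for i in range(1, 101)]  # hypothetical list
k = 10                        # sampling interval (population size / sample size)
start = random.randrange(k)   # random starting point in the first interval
sample = population[start::k]
print(sample)                 # 10 members, evenly spread through the list
```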
• Opportunity sampling definition
any process for selecting a sample of individuals or cases that is neither random nor systematic but rather is governed by chance or ready availability
• Volunteer Sampling definition
participants self-select to become part of a study
• Controlled Observation definition
an observation made under standard and systematic conditions rather than casual or incidental conditions
• Naturalistic Observation definition
data collection in a field setting, without laboratory controls or manipulation of variables
• Overt Observation definition
an observation in which those being observed are aware that they are being watched
• Covert Observation definition
participant observation in which the identity of the researcher, the nature of the research project, and the fact that participants are being observed are concealed from those who are being studied
• Participant Observation definition
a quasi-experimental research method in which a trained investigator studies a pre-existing group by joining it as a member, while avoiding a conspicuous role that would alter the group processes and bias the data
• Mean
the numerical average of a set of scores, computed as the sum of all scores divided by the number of scores
• Mode
the most frequently occurring score in a set of data
• Median
the midpoint in a distribution, that is, the score or value that divides it into two equal-sized halves
• Range
a measure of dispersion obtained by subtracting the lowest score in a distribution from the highest score
• Standard Deviation
a measure of the variability of a set of scores or values within a group, indicating how narrowly or broadly they deviate from the mean
• Measures of central tendency
a single value that attempts to describe a set of data by identifying the central position within that set of data
• Measures of dispersion
the spread of scores within a data set; examples include the range, standard deviation, and variance
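For illustration, a short Python sketch using the standard statistics module to compute these measures of central tendency and dispersion for a small, made-up set of scores:

```python
# A minimal sketch: measures of central tendency and dispersion for made-up scores.
import statistics

scores = [4, 8, 6, 5, 3, 8, 9, 7, 8, 2]

print("mean:  ", statistics.mean(scores))    # sum of scores / number of scores -> 6
print("median:", statistics.median(scores))  # midpoint of the ordered scores  -> 6.5
print("mode:  ", statistics.mode(scores))    # most frequent score             -> 8
print("range: ", max(scores) - min(scores))  # highest minus lowest            -> 7
print("sd:    ", statistics.stdev(scores))   # sample standard deviation
```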
• How to calculate a percentage
(value/total value)×100%.
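A minimal worked example of this formula, with made-up numbers:

```python
# A minimal sketch: applying (value / total value) x 100 to hypothetical figures.
value = 30          # e.g. participants who improved (hypothetical)
total_value = 120   # e.g. all participants (hypothetical)

percentage = (value / total_value) * 100
print(percentage)   # 25.0 (%)
```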
• How to interpret data
assemble the information you’ll need, develop findings, develop conclusions, develop recommendations.
• How to write an application of data
Understand what the data mean, and what those findings can be applied to
• How to write an implication of data
Understand what a set of data is showing and what it means for the subject of the data
• Bar chart, Scatter-gram and Histogram
Bar charts display discrete categories along the x axis, with the height of each separate bar showing its value on the y axis. Scatter-grams plot two co-variables against each other on the x and y axes, so each point represents one participant's pair of scores. Histograms depict continuous data grouped into intervals, shown as adjoining bars.
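As a rough illustration (made-up data, and assuming the matplotlib library is available), a short sketch contrasting the three displays:

```python
# A minimal sketch: bar chart vs scattergram vs histogram with made-up data.
import matplotlib.pyplot as plt

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3))

# Bar chart: discrete categories on the x axis, separate bars
ax1.bar(["Group A", "Group B", "Group C"], [12, 7, 9])
ax1.set_title("Bar chart")

# Scattergram: each point is one participant's pair of co-variable scores
ax2.scatter([2, 4, 6, 8, 10], [35, 50, 55, 70, 80])
ax2.set_title("Scattergram")

# Histogram: continuous data grouped into adjoining intervals (bins)
ax3.hist([3.1, 4.2, 4.8, 5.0, 5.3, 5.9, 6.1, 6.4, 7.0, 8.2], bins=5)
ax3.set_title("Histogram")

plt.tight_layout()
plt.show()
```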
• Normal distribution
a theoretical distribution in which values pile up in the center at the mean and fall off into tails at either end. When plotted, it gives the familiar bell-shaped curve expected when variation about the mean value is random
• Positive distribution
a skewed distribution in which most scores are concentrated at the lower end, with a long tail extending toward the higher values; the mean is greater than the median, which is greater than the mode
• Negative distribution
a skewed distribution in which most scores are concentrated at the higher end, with a long tail extending toward the lower values; the mean is less than the median, which is less than the mode
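For illustration, a small made-up set of positively skewed scores showing how the long tail pulls the mean above the median and mode:

```python
# A minimal sketch: in a positively skewed data set, mean > median > mode.
import statistics

skewed_scores = [2, 3, 3, 3, 4, 4, 5, 6, 9, 15]   # tail toward the high end

print(statistics.mode(skewed_scores))    # 3    (mode)
print(statistics.median(skewed_scores))  # 4.0  (median)
print(statistics.mean(skewed_scores))    # 5.4  (mean > median > mode)
```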
• Questionnaire definition
a set of questions or other prompts used to obtain information from a respondent about a topic of interest, such as background characteristics, attitudes, behaviours, personality, ability, or other attributes. A questionnaire may be administered with pen and paper, in a face-to-face interview, or via interaction between the respondent and a computer or website
• Structured Interview definition
a method for gathering information, used particularly in surveys and personnel selection, in which questions, their wordings, and their order of administration are determined in advance. The choice of answers tends to be fixed and determined in advance as well. With structured interviews, answers can be aggregated and comparisons can be made across different samples or interview periods; interviewees can be assessed consistently (e.g., using a common rating scale); and order effects are minimized
• Unstructured Interview definition
an interview that is highly flexible in terms of the questions asked, the kinds of responses sought, and the ways in which the answers are evaluated across interviewers or across interviewees. For example, a human resource staff member conducting an unstructured interview with a candidate for employment may ask open-ended questions so as to allow the spontaneity of the discussion to reveal more of the applicant’s traits, interests, priorities, and interpersonal and verbal skills than a standard predetermined question set would
• Case study definition
an in-depth investigation of a single individual, family, event, or other entity. Multiple types of data (psychological, physiological, biographical, environmental) are assembled
• Self Report Technique
methods of gathering data where participants provide information about themselves without interference from the experimenter
• Open Question
in an interview, a question that encourages the respondent to answer freely in his or her own words, providing as much or as little detail as desired
• Closed Question
a test or survey item in which several possible responses are given and participants are asked to pick the correct response or the one that best matches their preference
• Primary Data
information cited in a study that was gathered directly by the researcher from his or her own experiments or from first-hand observation
• Secondary Data
information cited in a study that was not gathered directly by the current investigator but rather was obtained from an earlier study or source. The data may be archived or may be accessed through contact with the original researcher
• Meta-analysis
a quantitative technique for synthesizing the results of multiple studies of a phenomenon into a single result by combining the effect size estimates from each study into a single estimate of the combined effect size or into a distribution of effect sizes
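As a rough illustration only, a minimal sketch of one common approach (a fixed-effect, inverse-variance weighted combination) using made-up effect sizes; this is one possible method, not the only way a meta-analysis is carried out:

```python
# A minimal sketch: combining hypothetical effect sizes by weighting each study
# by the inverse of its sampling variance (fixed-effect illustration only).
effect_sizes = [0.40, 0.55, 0.30]   # hypothetical effect sizes (e.g. Cohen's d)
variances    = [0.04, 0.02, 0.05]   # hypothetical sampling variances

weights = [1 / v for v in variances]
combined = sum(w * d for w, d in zip(weights, effect_sizes)) / sum(weights)
print(round(combined, 3))           # single combined effect size estimate
```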
• Quantitative Data
information expressed numerically, such as test scores or measurements of length or width. These data may or may not have a real zero, but they have order and often equal intervals
• Qualitative Data
information that is not expressed numerically, such as descriptions of behaviour, thoughts, attitudes, and experiences. If desired, qualitative data can often be expressed quantitatively through codification
• Correlations
Correlations are very useful as a preliminary research technique. They allow researchers to identify a link that can be further investigated through more controlled research. This is useful as it allows the findings to form the basis of another investigation.
• Correlations
A limitation is low external validity, which is when the findings of an investigation cannot be applied in other settings. There may also be an unmeasured third variable influencing one or both of the co-variables that is not considered, which limits the use of correlation.
• Laboratory experiments
It is easier to replicate a laboratory experiment as a standardized procedure is used
• Laboratory experiments
Laboratory experiments allow for precise control of extraneous and independent variables, so a cause-and-effect relationship can be established.
• Field experiments
A strength of using the field experiment design is that it can be conducted ethically, meaning the procedure is carried out in such a way that no one is excessively or unnecessarily harmed. This is important because it means the experiment can be repeated safely.
• Field experiments
A limitation of field experiments is the reduced control over extraneous variables. This lowers internal validity, meaning it is harder to be sure that the independent variable, rather than some other factor, caused the observed effect.
• Natural experiment
A strength of using natural experiments is their high ecological validity, as the behaviour is studied in the environment where it naturally occurs. Because the variables are left unchanged, the findings reflect real life, which means the results are more easily generalisable.
• Natural experiment
One limitation of using natural experiments is the lack of control over confounding variables. Unnoticed or unanticipated events may skew the results in some way, and this is especially likely in natural settings because the researchers allow everything to occur without control. This is a problem because such variables act unpredictably and can render the findings meaningless.
• Quasi experiment
Beginning research with non-equivalent groups presents a threat to internal validity and is a key weakness of quasi-experimental designs. Internal validity refers to the degree to which a researcher can be sure that the treatment was responsible for the change in the experimental group. If the researcher does not start with equivalent groups, they cannot be sure that the treatment was the sole factor causing change, because pre-existing differences between the groups may also contribute to it. Therefore, not using random methods to construct the experimental and control groups increases the potential for low internal validity.
• Quasi experiment
Some quasi-experimental research designs offer the benefit of comparison between groups that can be statistically analysed. For example, if an experimental group of elderly arthritis sufferers is given a treatment and the control group receives no treatment, the findings could reveal a statistically significant difference in pain relief or mobility in the treated group. This is a major advantage because it helps the researcher make inferences about the possible existence of a cause-and-effect relationship for the treatment.
• Repeated Measure
A benefit of using repeated measures (using the same participants for both manipulations) is that it allows the researcher to exclude the effects of individual differences that could occur if two different people were used instead (Howitt & Cramer, 2011). Factors such as IQ, ability, age and other important variables remain the same in repeated measures as it is the same person taking part in each condition (Field, 2011), avoiding one of the disadvantages of using independent groups.
• Repeated Measure
Another weakness of repeated-measures is the need for additional experimental materials. For example, if a study was testing how Factor A and Factor B affected participants’ memory for learning lists, in repeated-measures the researcher would require a different list of words for participants to memorise for both Factor A and B, whereas in independent groups the same list could be used for each factor because each group only sees the material once. Therefore, in using repeated-measures the individual differences of participants are reduced but this instead produces problems with individual differences between the materials participants are exposed to. Therefore results may be due to these differences in materials rather than the independent variable in question. The materials must therefore be carefully examined to ensure equal quality in factors such as difficulty.
• Independent Group
A strength of the independent measures design is that, because participants take part in only one condition, they are less likely to become bored or practised, and therefore the experiment is more likely to measure natural, real-life behaviour.
• Independent Group
Individual differences in participants can sometimes lead to differences in the groups' results. This can lead to false conclusions that the different conditions caused the results when it was really just individual differences between the participants. Random allocation of participants to conditions can help with this problem.
• Matched Pair
Reduced Participant Variables: As researchers pair participants with people that share similar characteristics, there is a reduction in participant variables and lurking variables. This makes it easier to attribute any changes within the pairs to the treatment being studied.
• Matched Pair
Loss of Data: If a participant drops out of the study, the data of the participant that they are paired with will no longer be useful in this research design. Therefore, in such a case, the researcher will lose two participants’ data.
• Random sampling
Because individuals who make up the subset of the larger group are chosen at random, each individual in the large population set has the same probability of being selected. This creates, in most cases, a balanced subset that carries the greatest potential for representing the larger group as a whole.
• Random sampling
In simple random sampling, an accurate statistical measure of a large population can only be obtained when a full list of the entire population to be studied is available. In some instances, details on a population of students at a university or a group of employees at a specific company are accessible through the organisation to which that population belongs.
• Stratified sampling
Stratified random sampling accurately reflects the population being studied because researchers are stratifying the entire population before applying random sampling methods. In short, it ensures each subgroup within the population receives proper representation within the sample. As a result, stratified random sampling provides better coverage of the population since the researchers have control over the subgroups to ensure all of them are represented in the sampling.
• Stratified sampling
Unfortunately, this method of research cannot be used in every study. The method’s disadvantage is that several conditions must be met for it to be used properly. Researchers must identify every member of a population being studied and classify each of them into one, and only one, subpopulation. As a result, stratified random sampling is disadvantageous when researchers can’t confidently classify every member of the population into a subgroup. Also, finding an exhaustive and definitive list of an entire population can be challenging.
• Systematic sampling
Clustered selection, a phenomenon in which randomly chosen samples are uncommonly close together in a population, is eliminated in systematic sampling. Random samples can only deal with this by increasing the number of samples or running more than one survey. These can be expensive alternatives.
• Systematic sampling
The systematic method assumes the size of the population is available or can be reasonably approximated. For instance, suppose researchers want to study the size of rats in a given area. If they don’t have any idea how many rats there are, they cannot systematically select a starting point or interval size.
• Opportunity sampling
Sometimes opportunity sampling is the only available method of data collection. If the study is preliminary and the data do not have to be exact, then this type of sampling is efficient. Because it takes less time, opportunity sampling is cheaper than other forms of sampling, although those other forms may be more representative of the general population.
• Opportunity sampling
The main disadvantage of opportunity sampling is selection bias. No one likes to be rejected; if approaching people to take a survey in the mall, a researcher is likely to pick people who make eye contact, smile, or give other nonverbal cues that they will likely consent to take the survey. Additionally, researchers are more likely to choose people similar to themselves both socially and culturally. These factors introduce a bias into the sample selection.
• Volunteer Sampling
Volunteer sampling often achieves a large sample size by reaching a wide audience, for example with online advertisements. This is useful as a larger sample is more likely to be representative.
• Volunteer Sampling
Those who respond to the call for volunteers may all display similar characteristics (such as being more trusting or cooperative than those who did not apply) thus increasing the chances of yielding an unrepresentative sample.
• Controlled Observation
Controlled observations are fairly quick to conduct which means that many observations can take place within a short amount of time. This means a large sample can be obtained resulting in the findings being representative and having the ability to be generalized to a large population.
• Controlled Observation
Controlled observations can lack validity due to the Hawthorne effect/demand characteristics. When participants know they are being watched they may act differently.
• Naturalistic Observation
Like case studies, naturalistic observation is often used to generate new ideas. Because it gives the researcher the opportunity to study the total situation it often suggests avenues of inquiry not thought of before. This can lead to more accurate findings.
• Naturalistic Observation
These observations are often conducted on a micro (small) scale and may lack a representative sample (biased in relation to age, gender, social class or ethnicity). This may result in the findings lacking the ability to be generalized to wider society.
• Overt Observation
Overt Observation is ethical as it is possible to inform participants in advance and obtain informed consent. This works to avoid psychological or physical harm.
• Overt Observation
Behaviour can be distorted through investigator effects in which the participant changes their behaviour through social desirability bias. This creates results which differ from reality.
• Covert Observation
Investigator effects are unlikely, meaning that participants' behaviour will be genuine. This means accurate results are more likely.
• Covert Observation
Less ethical as participants are not aware they are taking part and cannot give fully informed consent. This means they may unwillingly experience psychological harm.
• Questionnaire
One of the biggest advantages is being able to ask as many questions as you like. The full scope of a topic can be covered by asking several questions, both qualitative and quantitative.
• Questionnaire
While there are many positives to questionnaires, dishonesty can be an issue. Respondents may not be 100% truthful with their answers. This can happen for a variety of reasons, including social desirability bias and attempting to protect privacy.
• Structured Interview
A structured interview is reliable: the results are easy to evaluate and analyse, and the interview can be replicated as required. This makes the process of understanding the results much easier.
• Structured Interview
A large number of applicants have to be interviewed to make a comparison, so the process is time-consuming and requires more questions to be designed in advance. It also uses intensive resources.
• Unstructured Interview
The informality of the interview allows the researcher to build a relationship of trust and understanding known as a rapport, with the interviewee. This may put the interviewee at ease and encourage them to open up and give more truthful answers.
• Unstructured Interview
Being in-depth explorations, unstructured interviews can take a long time to conduct, which in turn limits the number of interviews that can be carried out. This means they will have a smaller sample size, possibly making the findings unrepresentative.
• Ordinal measurement level
The ordinal level of measurement groups variables into categories, just like the nominal scale, but also conveys the order of the variables.
• Nominal measurement level
It classifies and labels variables qualitatively. In other words, it divides them into named groups without any quantitative meaning.
• Interval measurement level
The interval level is a numerical level of measurement which, like the ordinal scale, places variables in order. Unlike the ordinal scale, however, the interval scale has a known and equal distance between each value on the scale.