Research Methods Flashcards
reliability
refers to the consistency of a research study or measuring test; if findings from research are replicated consistently, they are reliable
external reliability
the extent to which a measure varies from one use to another
test-retest reliability
a type of external reliability; the degree to which test scores remain unchanged when measuring a stable individual characteristic on different occasions; stability of the test over time; stability of scores; testing the same individual on two or more separate occasions should yield the same results
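In practice, test-retest reliability is usually assessed by correlating scores from the two administrations. A minimal sketch in Python (the scores are made-up illustration data, not from any real study):

```python
# Test-retest reliability: correlate the same test given on two occasions.
# A high correlation means scores are stable over time.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [10, 12, 9, 15, 11]   # hypothetical scores, first administration
time2 = [11, 13, 9, 14, 12]   # same people, second administration

r = pearson(time1, time2)     # close to 1.0 => high test-retest reliability
```

A common rule of thumb is to treat r around .80 or above as acceptable stability, though the cutoff depends on what is being measured.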
inter-rater reliability
a type of external reliability; refers to the degree to which different raters give consistent estimates of the same behavior; can be used for interviews and observations; two people conduct the interviews/observations separately, then come back and compare scores/notes; if the data are similar, the measure is reliable; if the data are not similar, reliability can be improved by training observers in agreed techniques and ensuring behavior categories are operationalized (objectively defined)
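Inter-rater agreement is often quantified with Cohen's kappa, which corrects simple percent agreement for agreement expected by chance. A minimal sketch (the behavior codes and ratings below are hypothetical illustration data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    # Observed agreement: proportion of items both raters coded the same.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick a category independently.
    expected = sum((ca[c] / n) * (cb[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Two observers independently coding the same six behavior samples.
rater_a = ["aggressive", "aggressive", "play", "play", "aggressive", "play"]
rater_b = ["aggressive", "play", "play", "play", "aggressive", "play"]

kappa = cohens_kappa(rater_a, rater_b)  # 1.0 = perfect, 0 = chance-level
```

Low kappa is the numeric signal that observer training or better-operationalized behavior categories are needed.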
internal reliability
extent to which a measure is consistent within itself
split-half reliability
a type of internal reliability; measures the extent to which all parts of the test contribute equally to what is being measured; done by comparing the results of one half of a test with the results from the other half; if the two halves provide similar results, this suggests the test has internal reliability; a quick and easy way to measure reliability; only effective with large questionnaires in which all questions measure the same construct
validity
a test is valid if it measures what it claims to measure
internal validity
refers to whether the effects observed in a study are due to the manipulation of the independent variable and not some other factor; there is a causal relationship between the IV and DV; can be improved by controlling extraneous variables, using standardized instructions, counterbalancing, and eliminating demand characteristics
external validity
refers to the extent to which the results of a study can be generalized to other settings (ecological validity), other people (population validity) and over time (historical validity); can be improved by setting experiments in more natural settings and using random selection in samples
content validity
how well an instrument (test, questionnaire) measures a theoretical construct
face validity
a type of content validity; the degree to which an assessment, or test, subjectively appears to measure the variable, or construct, that it is supposed to measure
construct validity
a type of content validity; how well the instrument measures what it claims to measure in terms of hypothesis and theory
criterion validity
refers to how well one measure predicts the outcome for another measure
concurrent validity
a type of criterion validity; demonstrated when a test correlates well with a measure that has previously been validated; the two measures in a study are taken at the same time
predictive validity
a type of criterion validity; one measure occurs earlier and is meant to predict some later measure
scientific method
a set of assumptions, attitudes, goals, and procedures for asking and answering questions about nature (which is assumed to be lawful, determined, and understandable)
hypothesis
part of the scientific method; a somewhat tentative statement or proposition concerning a relationship among variables (always subject to empirical test, they must be capable of disproof)
theory
part of the scientific method; a proposition or integrated set of propositions that attempts to explain the available facts concerning some phenomenon; functions of a theory: to be scientifically useful, a theory must do both of the following - explanation (theories describe, organize and summarize available facts) and prediction (a theory must also predict new facts and relationships, not presently known)
pseudoscience
consists of statements, beliefs, or practices that claim to be both scientific and factual but are incompatible with the scientific method
variable
a property or measure whereby the members of a group or set differ from one another (ex: age, gender, test scores, performances on specified tasks, etc.)
independent variable (IV)
aka factor or treatment; any variable that is either systematically manipulated or purposefully selected, in order to determine its effect on behavior
manipulation check
secondary evaluation of an experiment; used to determine the effectiveness of the manipulation; a way to help ensure that the IV has effectively been manipulated or that the participants understood the IV in the way that the researcher planned
dependent variable (DV)
the behavior or change in behavior that is monitored as a function of the IV
confounds
variables that have affected the results (DV), apart from the IV; a confounding variable could be an extraneous variable that has not been controlled
operationalization
concretely defining how you will measure an abstract construct; refers to how you will define and measure a specific variable as it is used in your study; make it absolutely clear what you mean by the terms/behaviors/variables as they were studied and measured in your experiment; if you do not operationalize, it is very difficult to compare findings of different studies of the same behavior; advantages include - it generally provides a clear and objective definition of even complex variables and makes it easier for other researchers to replicate a study and check for reliability
replicability
refers to whether a particular method and finding can be repeated with different/same people and/or on different occasions, to see if the results are similar; if we get the same results over and over again under the same conditions, we can be sure of their accuracy beyond reasonable doubt; this gives us the confidence that the results are reliable and can be used to build up a body of knowledge or a theory; vital in establishing a scientific theory
sample
a specified selection of individuals or things chosen for study from the target population; sampling is the process of selecting them; since results will be generalized back to the target population, samples should be as representative (typical) of the target population as possible
generalization
refers to the extent to which we can apply findings of our research to the target population we are interested in; generalize from the sample to the target population; the more representative the sample, the more confident the researcher can be that the results can be generalized to the target population
random sampling
occurs when every member of a target population has an equal chance of being selected; example - picking names out of a hat; two criteria must be met to ensure randomness - every individual or thing in the population has the same chance of being chosen for the sample, and selection of one individual or thing in no way influences the selection of another; quasi-random sample - a number of variations on random sampling have been developed, referred to as "quasi-random" procedures (for ex: a computer program that picks every 50th individual in the target population, also called systematic sampling)
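Both procedures can be sketched with Python's standard library random module (the population of 500 numbered individuals is hypothetical, and the fixed seed is only there to make the sketch reproducible):

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

population = list(range(1, 501))  # hypothetical target population of 500 people

# Simple random sample: every member has an equal chance of selection,
# and picking one member does not influence picking another.
simple = random.sample(population, 20)

# Quasi-random (systematic) sample: random starting point, then every 25th member.
start = random.randrange(25)
systematic = population[start::25]
```

random.sample draws without replacement, matching the names-out-of-a-hat idea; the systematic variant is only as good as the ordering of the list, since any pattern in that ordering carries into the sample.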
convenience sampling
aka incidental or accidental sample; any subset of a population chosen b/c it is readily accessible; in this type of sampling, the extent to which you can generalize results (technically known as external validity) is critically dependent on the type of study; if you are doing a study on social interactions, sampling just college students would be inappropriate; strength - quick, convenient and often the most economical method of sampling (most common); weakness - can give very unrepresentative samples and is often biased on the part of the researcher, who may choose subjects who will be "helpful"
survey design
research method used for collecting data from a predefined group of individuals (for ex: college students) to gain insights or info about various topics
experiment
a procedure carried out to either support, refute, or validate a hypothesis; gives insight into the cause-and-effect relationship between two or more variables; determines what outcome happens when something is manipulated
control group
group composed of participants who do not receive the experimental treatment; these people are randomly assigned to this group and should closely resemble the participants in the experimental group (those who receive the treatment); the standard to which comparisons are made in an experiment; comparisons between the two groups are used to measure the effect of the treatment
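Random assignment to control vs. experimental conditions can be sketched as a shuffle-and-split (the participant IDs and group size are hypothetical, and the seed only makes the sketch reproducible):

```python
import random

random.seed(7)  # reproducible for the sketch

participants = list(range(1, 21))  # 20 hypothetical participant IDs
random.shuffle(participants)       # every ordering is equally likely

control      = participants[:10]   # no treatment; baseline for comparison
experimental = participants[10:]   # receives the treatment

# The treatment effect is then estimated by comparing the two groups' DV scores.
```

Because the split follows a random shuffle, pre-existing differences between people are spread across both groups on average, which is what makes the control group a fair comparison standard.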
quasi-experiment
methods that introduce something approximating experimental control when it is not possible to fulfill the requirement of "real" experiments that there be equivalent groups at the outset; thus, quasi-experiments are not "true" experiments; involves the manipulation of an IV without random assignment of participants to conditions or order of conditions; eliminates the directionality problem b/c the IV is manipulated, but does not eliminate the problem of confounding variables b/c there is no random assignment to conditions; generally higher in internal validity than correlational studies but still lower than true experiments
observational study (method)
the sole goal of this approach is description; it includes naturalistic observation, participant observation, case studies, surveys and questionnaires, and archival research; problems with this approach include - it is descriptive only, so it is impossible to establish cause-effect relationships, and there are many possible sources of bias in reporting, including those you bring to the situation and those that are a function of the current conditions; useful approach - serves a useful purpose by leading to hypotheses that might not otherwise be available, and that then can be tested using more rigorous methods
qualitative data
non-numerical data expressed in words (for ex: extracted from a diary); cannot be counted directly, but can be placed in categories whose frequencies can then be counted; gathered through open questions in questionnaires and/or observational studies
case study
in-depth investigations of a single person, group, event, or community; data is typically gathered from a variety of sources and by using different methods; info is mainly biographical; the case study itself is not a research method, but researchers select methods of data collection and analysis that will generate material suitable for case studies; advantages - provides lots of data, insight for further research; disadvantages - lacking scientific rigor, little generalizability, difficult to replicate, time consuming and expensive
archival study
research method that requires searching and seeking out info from past records; for ex: census data or past survey data
data
characteristics or info, usually numerical, that are collected through observation; in a more technical sense, data are a set of values of qualitative or quantitative variables about one or more persons or objects
quantitative data
numerical data (for ex: reaction time or number of mistakes); represents how much or how long, how many there are of something; a tally of behavioral categories and closed questions in a questionnaire collect quantitative data
nominal measurement type
a distribution in which constancies or differences are stated only in qualitative terms; a simple frequency headcount found in discrete categories (ex: categories of response, such as agree-indifferent-disagree, number of children, gender, etc.); least precise; data are usually frequencies of responses in the different categories
ordinal measurement type
a distribution in which we classify in terms of serial position but with no exact measurement (for ex: rankings or ratings on any physical or psychological scale, such as rank in class, etc.); there is some quantification, but unequal intervals, and no true zero point; sometimes rankings obscure serious discontinuities in the distribution
interval measurement type
a distribution in which variables are expressed in equal units but without a true zero point (ex: test scores, performance on psychological scales); quite precise, even without a true zero point
ratio measurement type
measures are expressed in equal units and taken from a true zero point (for ex: length, weight, frequency of conditioned responses, amplitudes of responses, etc.); this is the most precise method of measurement
coding
the process of labeling segments of data to capture what they are about or saying; reflects the interpretation the researcher gives to the data; mostly used in qualitative research
ethics
moral principles that govern a person’s behavior or the conducting of an activity
debriefing
APA code of ethics 8.08; psychologists provide a prompt opportunity for participants to obtain appropriate info about the nature, results, and conclusions of the research, and they take reasonable steps to correct any misconception that participants may have of which the psychologists are aware; if scientific or humane values justify delaying or withholding this info, psychologists take reasonable measures to reduce the risk of harm; when psychologists become aware that research procedures have harmed participants, they take reasonable steps to minimize harm
IRB
stands for institutional review board; also known as an independent ethics committee, ethical review board, or research ethics board; it is a type of committee that applies research ethics by reviewing the methods proposed for research to ensure that they are ethical
Human Subjects Protection
the fundamental principle of human subjects protection is that people should not (in most cases) be involved in research without their informed consent, and that subjects should not incur increased risk of harm from their research involvement, beyond the normal risks inherent in everyday life
APA style
writing style and format for academic documents such as scholarly journal articles and books; it is commonly used for citing sources within the field of behavioral and social sciences
plagiarism
the act of presenting the words, ideas, or images of another as your own; it denies authors or creators of content the credit they are due; if you model a study after one conducted by someone else, give credit to the author of the original study
biases
systematic error introduced into sampling or testing by selecting or encouraging one outcome or answer over others (this can be from the researcher side or participant side)
social desirability
a type of bias; tendency of some respondents to report an answer in a way they deemed to be more socially acceptable than would be their “true” answer; they do this to project a favorable image of themselves and to avoid receiving negative evaluations
demand characteristics
a type of bias; the clues in an experiment that lead the participants to think they know what the researcher is looking for (ex: researcher body language)
focus groups
a group interview involving a small number of demographically similar people; reactions to specific researcher-posed questions are studied; they are used in market research and studies of people’s political views; the discussion can be guided or open
funnel debriefing
includes participants’ thoughts/suspicions about what the research was on/about; at the end of the study, ask the participant: what did you think it was about? what did you think about your participation in the study? any concerns/suspicions about the study?; then debrief them fully on what the research experiment was about; not the same as a manipulation check; looking to see if the participant is suspicious about any aspect of the study (what they thought); helps the researcher judge the quality of participants and their responses