Research Flashcards
Inductive Research
When researchers aim to infer theoretical concepts and patterns from observed data (i.e., theory-building research).
Deductive Research
When researchers aim to test concepts and patterns informed by theory using new empirical data (i.e., theory-testing research). Deductive research is often employed through the use of the scientific method.
The Scientific Method
Refers to the standardized set of techniques that build scientific knowledge by informing how researchers make valid observations, interpret results, and generalize findings. Must meet the following four characteristics: replicability, precision, falsifiability, and parsimony.
Replicability
If the same study is repeated by another team of researchers, it should yield identical or nearly identical results as the initial study.
Precision
Moving a theoretical concept from an abstraction to a precise operational definition, allowing for other researchers to measure the same defined concepts through similar or varied methodologies.
Falsifiability
All theories must be stated in ways that clearly identify a route by which the theory could be disproven or falsified.
Parsimony
When the data produce multiple explanations for the same phenomenon, researchers must accept and prioritize the least complex and most logically economical explanation.
Construct
An abstract concept that is specifically chosen to explain a given phenomenon.
Descriptive Research
Research that is directed at making careful observations and detailed documentation of an identified phenomenon. Observations here are based on the scientific method.
Epistemology
Refers to our assumptions about the best way to study the world.
Exploratory Research
Research conducted in new areas of inquiry, where the goals of the research are:
1) to scope out the magnitude or extent of a particular phenomenon, problem, or behavior
2) to generate some initial ideas about that phenomenon
3) to test the feasibility of undertaking a more extensive study regarding that phenomenon
Ontology
Refers to our assumptions about how we see the world.
Operational Definitions
Used to define constructs in terms of how they will be empirically measured.
Operationalization
The process of designing precise measures for abstract theoretical constructs.
Sampling
The process of selecting a subset of the target population from which researchers wish to collect data.
Unit of Analysis
Refers to the person, collective group, or object who/that is the target of the investigation.
Variable
A measurable representation of an abstract construct.
Internal Validity
Also referred to as causality; examines whether the observed change in a dependent variable is indeed caused by a corresponding change in a hypothesized independent variable and NOT by variables extraneous to the research context. Essentially, is the data congruent with the hypothesis and measured variables, as opposed to other factors not accounted for?
External Validity
Also referred to as generalizability; refers to whether the observed associations can be generalized from the sample to the population, to other people, organizations, contexts, or time.
Construct Validity
Examines how well a given instrument scale is measuring the theoretical construct that it is expected to measure.
Statistical Conclusion Validity
Examines the extent to which conclusions derived using a statistical procedure are valid.
Experimental Studies
Studies that are intended to test cause-effect relationships (hypotheses) in a tightly controlled setting by separating the cause from the effect in time, administering the cause to one group of subjects (treatment group), but not to the other group (control group), and observing how the main effects vary between subjects in the two groups. In a true experimental design, the subjects must be randomly assigned to each group. Otherwise, it is considered “quasi-experimental.”
Field Surveys
Non-experimental designs that do not control for or manipulate independent variables or manualized treatments, but instead measure operationally defined variables and test their effects using statistical methods. They capture snapshots of practices, beliefs, or situations from a random sample of subjects in field settings through a survey questionnaire, or less frequently, through a structured interview.
Secondary Data Analysis
An analysis of data that has previously been collected and tabulated by other sources.
Case Research (Case Studies)
An in-depth investigation of a problem in one or more real-life settings over an extended period of time.
Focus Group Research
A type of research that involves bringing in a small group of subjects (typically 6 to 10 people) at one location, and having them discuss a phenomenon of interest for a period of 1-2 hours.
Ethnography
An interpretive research design emphasizing that a research phenomenon must be studied within the context of its native culture. The researcher is deeply immersed in a certain culture over an extended period of time (8 months to 2 years), and during that period, engages, observes, and records the daily life of the studied culture, and theorizes about the evolution and behaviors in that culture.
Survey Research
A research method involving the use of standardized questionnaires or interviews to collect data about people and their preferences, thoughts, and behaviors in a systematic manner.
Interview Survey
Interviews are more personalized forms of data collection methods than questionnaires and are conducted by trained interviewers using the same research protocol as questionnaire surveys.
Qualitative Analysis
The analysis of qualitative data such as text data from interview transcripts. Qualitative analysis is largely dependent upon the researcher’s analytic and integrative skills and personal knowledge of the social context where the data was collected.
Quantitative Analysis
The analysis of numeric data using statistical techniques; statistics-driven and largely independent of the researcher.
Grounded Theory
An inductive technique of interpreting recorded data about a social phenomenon to build theories about that phenomenon. Develops theory by letting meaning emerge from the data or to be “grounded” in data.
Mean
The simple average of all values in a given distribution.
Median
The middle value within a range of values in a distribution.
Mode
The most frequently occurring value in a distribution of values.
Standard Deviation
A measure of dispersion that corrects for outliers by using a formula that takes into account how close or how far each value lands relative to the distribution mean.
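The mean, median, mode, and standard deviation above can all be computed with Python's standard library; a minimal sketch (the sample values are made up for illustration):

```python
import statistics

values = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical scores

mean = statistics.mean(values)      # simple average of all values
median = statistics.median(values)  # middle value of the sorted distribution
mode = statistics.mode(values)      # most frequently occurring value
stdev = statistics.pstdev(values)   # population standard deviation: dispersion around the mean

print(mean, median, mode, stdev)  # 5 4.5 4 2.0
```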
Correlation
A number between -1 and +1 denoting the strength of the relationship between two variables.
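As a sketch of how that number is computed, Pearson's r (covariance divided by the product of the standard deviations) can be written in plain Python; the sample data are invented:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length variables."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A perfectly linear positive relationship yields r very close to +1
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```

Python 3.10+ also ships statistics.correlation, which computes the same quantity.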
Inter-Rater Reliability
Also called inter-observer reliability, this is a measure of consistency between two or more independent raters (observers) of the same construct.
Test-Retest Reliability
Measures the consistency between two measurements (tests) of the same construct administered to the same sample at two different points in time.
Split-Half Reliability
Measures the consistency between two halves of a construct measure.
Internal Consistency Reliability
Measures consistency between different items of the same construct.
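Internal consistency is most often reported as Cronbach's alpha, which compares the sum of the individual item variances to the variance of respondents' total scores. A rough sketch, assuming numerically scored items (the data below are made up):

```python
import statistics

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding every respondent's score on that item."""
    k = len(item_scores)
    item_var_sum = sum(statistics.pvariance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # each respondent's total score
    return (k / (k - 1)) * (1 - item_var_sum / statistics.pvariance(totals))

# Three items answered identically by four respondents: perfect consistency, alpha of about 1.0
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```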
Validity
Refers to the extent to which a measure adequately represents the underlying construct that it is supposed to measure.
Ordinal Scales
Scales that measure rank-ordered data, such as the ranking of students in a class as first, second, or third based on their GPA or test scores.
Interval Scales
Scales where the values measured are not only rank ordered but are also equidistant from adjacent attributes; e.g., the temperature scale, where the difference between 30 and 40 degrees is the same as between 80 and 90 degrees.
Nominal Scales
Also called categorical scales; measure categorical data. These scales are used for variables or indicators that have mutually exclusive attributes (e.g., religion or gender).
Likert Scale
Measures ordinal data in social science research. This scale includes Likert items, which are simply worded statements (e.g., with responses ranging from strongly disagree to strongly agree).
Program Evaluation
Asks whether an intervention or treatment program is effective in its intended purpose: does what we do work?
Empirically-Supported Treatments (EST)
ESTs are manualized treatments for specific populations/disorders that have been evaluated as effective through controlled trials.
Evidence-Based Practices (EBP)
Treatments and interventions employed by therapists that are informed by current research findings regarding client population or concerns that guide clinical expertise and adapting treatment based upon the unique contextual factors of each client encounter.
Practice-Based Evidence
Unlike an EST, practice-based evidence is not necessarily informed by empiricism; instead, it is informed exclusively by the therapist’s direct experience of work with clients.
Multi-Dimensional Family Therapy
An evidence-based family therapy model found to be effective in treating adolescent substance abuse.
Family Based Interventions
An exhaustive literature review found family therapy to be an effective treatment for the following disorders of childhood and adolescence: ODD, ADHD, aggressive behaviors, conduct disorder, delinquency, substance abuse, anxiety, depression, child abuse, eating disorders, emotional problems, and first episode psychosis.
Parent-Child Interaction Therapy (PCIT)
Reduces recidivism of child abuse in families over time.
Emotionally Focused Couples Therapy
70% of couples receiving this treatment experience moderate to significant benefits when addressing relational distress, and over 73% when addressing psychosexual problems.
Major Mental Illness (SPMI)
Psychoeducational approaches to family therapy have been found to be effective in helping families cope with the stressors associated with serious and persistent mental illness.
Ratio Scales
Interval data with an absolute, not an arbitrary, zero point; e.g., time, mass, length, duration.
Correlation Coefficient
When used as a reliability coefficient, it measures a test’s ability to yield consistent results each time it is applied. Symbolized by the letter “r,” which ranges from -1 through 0 to +1. The higher the coefficient, the more reliable the test.
Alternate Form Reliability
Two separate but equivalent versions of the test are given with a time period in between. Both versions are given in succession to the same group. This measures both equivalence between the two forms of test and stability.
Factors That Impact Reliability
Length of the test, range or variability in scores, and guessing. Interpretation of the reliability coefficient: generally, above .80 is considered acceptable.
Face Validity
Refers to the extent to which a test appears to measure a particular construct. It asks, “does it look like a reasonable test for whatever purpose it is being used?”
Logical Content Validity
Refers to the method the developer engaged with to make sure the required content was included in the test.
Convergent Validity
Explores whether a construct, such as OCD, correlates with theoretically relevant variables, for example, obsessive thoughts or compulsive behaviors. With convergent validity, it is crucial to establish statistically significant correlations between the instrument itself and relevant variables.
Discriminant Validity
Establishes that theoretically non-relevant variables are not associated with scores on the measurement. The goal of discriminant validity is to find an instrument that does not correlate with variables that should not be correlated because they are irrelevant to the theoretical construct being measured.
Criterion Validity
Looks at the relationship between scores obtained using an instrument of interest and scores obtained using an existing, “standard” criterion instrument’s score.
Concurrent Validity
Determined when test scores and criterion measurements are either made at the same time (concurrently) or close to each other.
Predictive Validity
A correlation coefficient that represents the degree to which one instrument’s score predicts an individual’s score in a future situation (e.g., a driving test).
Nomothetic
Explanations focus on a class of events and attempt to specify the conditions that seem common to all those events.
Longitudinal
Study over time. Observes trends in the same population.
Cross-Sectional
Study at one point in time. Studies current trends and attitudes.
Closed-Ended Questions
Fixed set of alternatives covering all possible, theoretically relevant options, determined in advance. Allows easy data handling.
Open-Ended Questions
Respondents develop their own responses, so response options are not predictable. Requires more complex data handling.
Dependent Variable
Variable whose changes are being measured.
Independent Variable
This is a variable that is being manipulated and the impact of such manipulation upon the dependent variable is being studied. An independent variable in an experiment is called a factor.
Extraneous Variables
Influences on the dependent variable from a source other than the independent variable.
Holding Variables Constant
Systematically selecting a homogeneous sample from a heterogeneous population and randomly assigning the sample to experimental or control conditions.
Matching Participants
Experimental group members are paired with similar control group members for data analysis.
Blocking Variables
A combination of matching and randomization.
Double-Blind
Both the subjects and those who evaluate the outcome are ignorant of which treatment was given.
Idiographic Explanation
Explanations focus on a single person, event, or situation and attempt to specify all the conditions that helped produce it.
Bimodal
A distribution with two most-frequently occurring scores
Multimodal
A distribution with two or more most-frequently occurring scores.
Bell Shaped Curve
Unimodal; the largest cluster of scores falls in the center, and frequency decreases toward the extremes in both directions. Maximum height (mode) at the mean.
T-Test
A formula for evaluating the means of two groups. It is used in comparing two groups such as in an experiment that involves controlling a variable in each group and looking for a difference in outcome.
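A sketch of the underlying computation for an independent-samples t statistic, assuming equal group variances (the scores are invented); in practice a library routine such as scipy.stats.ttest_ind would be used:

```python
import statistics

def two_sample_t(a, b):
    """Independent-samples t statistic with a pooled variance estimate."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    se = (pooled_var * (1 / na + 1 / nb)) ** 0.5  # standard error of the mean difference
    return (statistics.mean(a) - statistics.mean(b)) / se

treatment = [5, 6, 7, 8]  # hypothetical outcomes, treatment group
control = [1, 2, 3, 4]    # hypothetical outcomes, control group
t = two_sample_t(treatment, control)  # roughly 4.38: a large group difference
```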
ANOVA (Analysis of Variance)
This is a statistical technique that is used to compare and contrast the means of two or more populations.
ANCOVA (Analysis of CoVariance)
This is a statistical technique that compares and contrasts one variable in two or more populations. ANCOVA allows you to statistically remove the effects of covariates, independent variables that are not of interest in the study.
Respect for Persons
We recognize the personal dignity and autonomy of individuals and we should provide special protection of those persons with diminished autonomy (Belmont Report)
Beneficence
We have an obligation to protect persons from harm by maximizing anticipated benefits and minimizing possible risks of harm (Belmont Report)
Justice
The benefits and burdens of research should be distributed fairly (Belmont Report)
Institutional Review Board (IRB) Criteria
Risks to subjects are to be minimized and are reasonable in relation to anticipated benefits. Selection of subjects is equitable. Informed consent is sought and documented. Monitoring of data collection ensures subject safety. Provisions to protect privacy and maintain confidentiality.
Informed Consent
Refers to telling potential research participants about all aspects of the research that might reasonably influence their decision to participate.
Privacy
Participants have the right to privacy and have the following three ways to protect privacy: editing the data (including destroying it), anonymity, confidentiality.
Court & Legislative Challenges
Privileged communication is generally NOT extended to social researchers, and in criminal cases, the right of the public to protection supersedes research confidentiality.
Harm, Distress, and Benefit
Sources of harm include exposure to powerful psychological stimuli and revelation of a deviant lifestyle. Distress can be alleviated through debriefing.
Effect Size
A statistic that compares the effects of different treatment interventions across studies even though the studies used different types of outcome measures (.2= low, .5= moderate, .8= strong)
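The benchmarks in parentheses match Cohen's conventions for d, the standardized mean difference; a minimal sketch with invented group scores:

```python
import statistics

def cohens_d(a, b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

# Because d is unit-free, effects measured on different outcome scales can be compared
d = cohens_d([4, 5, 6, 7], [3, 4, 5, 6])  # about 0.77, between moderate and strong
```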
Research Hypothesis (H1)
A tentative and testable prediction about how the independent variable will cause or explain changes in the dependent variable.
Null Hypothesis (Ho)
A rival to the research hypothesis; predicts that there is NO relationship between the independent variable and the dependent variable. The null hypothesis asserts that any observed relationship between the IV and the DV is explained by sampling error; a relationship does NOT exist in the population.
Statistical Significance
When the change in the DV is largely due to the independent variable or treatment rather than to sampling error, the result is statistically significant. Testing for it means assessing the probability of obtaining the observed results if the null hypothesis were true.
Type I Error
Rejecting a true null hypothesis. Sampling error is why we rejected the null. We determine that there are group differences when, in fact, there are not. It is like a false positive. (At the .05 level, there is a 5% chance of making this error.)
Type II Error
Failing to reject a false null hypothesis. We determine there are no group differences but there really are. We determine there’s no effect on the DV, but there really is. It’s like a false negative.
Significance Level
When we can conclude that sampling error has less than a 5% chance of playing a role in the change of the DV, we have statistical significance. Symbol: p < .05 (read: the probability is less than .05).
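The 5% Type I error rate can be checked by simulation: draw both groups from the same population (so the null hypothesis is true by construction) and count how often a t test still rejects it. A seeded sketch using the two-tailed .05 critical value of 2.101 for df = 18:

```python
import random
import statistics

def two_sample_t(a, b):
    """Independent-samples t statistic with a pooled variance estimate."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (pooled_var * (1 / na + 1 / nb)) ** 0.5

random.seed(42)     # fixed seed so the simulation is repeatable
CRITICAL_T = 2.101  # two-tailed .05 critical value for df = 18 (n = 10 per group)

trials, rejections = 2000, 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(10)]
    b = [random.gauss(0, 1) for _ in range(10)]  # same population: any "difference" is sampling error
    if abs(two_sample_t(a, b)) > CRITICAL_T:
        rejections += 1  # a Type I error: rejecting a true null hypothesis

type_i_rate = rejections / trials  # hovers near 0.05
```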