Research Flashcards
in an experiment, this is the group of research participants that does not receive the new treatment being studied. The control group is compared to the experimental/intervention group (the group that receives the new treatment) to see whether the new treatment works.
Control Group
in research, a collection of subjects who are matched with and compared to a control group in all relevant respects, except that they are also subject to the specific variable being tested (that is, they receive the new treatment/intervention).
Experimental/Intervention Group
An experimental design that measures the effect of an intervention by randomly assigning participants to either an experimental/intervention group or a control group. These designs are difficult to conduct in social work practice because of the ethics of withholding treatment from those who need it when participants are randomly assigned to the control group.
For example, a new drug to treat depression is developed. Participants are randomly
assigned to either a control group (receiving a placebo/sugar pill) or the treatment group
(receiving the new drug). The Beck Depression Inventory (BDI) is given at the beginning
and end of treatment to see whether the treatment group saw a greater decrease in
depressive symptoms than the control group.
Randomized Controlled Trial
refers to the use of chance procedures to ensure that each participant has the same opportunity to be assigned to any given group.
Random Assignment
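The chance procedure described above can be sketched in Python's standard library. This is a minimal illustration, not a clinical randomization protocol; the participant IDs and the even two-way split are assumptions for the example.

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle participants so every ordering is equally likely,
    then split them evenly into a control and a treatment group."""
    rng = random.Random(seed)   # seed only to make the sketch reproducible
    pool = list(participants)
    rng.shuffle(pool)           # the chance procedure: equal opportunity for any group
    midpoint = len(pool) // 2
    return pool[:midpoint], pool[midpoint:]  # (control group, treatment group)

# Hypothetical participant IDs:
control, treatment = randomly_assign(["P1", "P2", "P3", "P4", "P5", "P6"], seed=42)
```

Because assignment depends only on the shuffle, no characteristic of a participant influences which group they end up in.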
The prefix ‘quasi’ means ‘resembling.’ So this is a design that resembles a randomized controlled trial but does not involve random assignment to a control group and an experimental group; instead, the researcher assigns participants to the treatment and control groups using some criterion other than random assignment. It is commonly used in field research where random assignment is difficult or not possible.
Example: You can use the exact same example as the randomized controlled trial with one difference: the researcher places the participants in the treatment or control group using some other criteria besides random assignment.
Quasi-Experimental Design
Research where the subject serves as their own control, rather than using another individual/group.
Example: a social worker administers the BDI to a client before beginning a specific treatment to establish a baseline, and then administers the BDI again after the client receives the treatment to see whether the treatment was effective in decreasing their BDI score.
Single Subject Design
in this design participants are asked to retrospect (literally,
to ‘look back’) and try to remember what they were like at an earlier time point.
Example: researchers could ask older teenagers how they were disciplined as kids.
Retrospective Design
researchers collect data at a single point in time from participants of different ages.
● For example, researchers might hypothesize that people become more traditional in their attitudes and more resistant to social change as they get older. To study this, they might get participants in their 20s, 40s, and 60s to complete a measure of traditionalism and
then test whether there is a positive correlation between age and traditionalism.
Cross-sectional Design
the same people are measured at different ages.
● For example, researchers could follow the development of babies who experienced
developmental delays.
Longitudinal Design
a combination of cross-sectional and longitudinal designs. At the first point, groups of people from several different ages are measured. If the design were to stop there, it would be a simple cross-sectional design, but these groups are then followed over time, incorporating the longitudinal aspect.
Cross-sequential Design
A mutual relationship between two variables; a change in one variable is associated with a change in the other variable.
● For example: there is a positive correlation between height and weight. Taller people
tend to be heavier and vice versa.
● Correlation (a pattern between two variables) does not always mean causation (that one variable causes the other).
○ For example: Someone may find that children who get tutoring receive worse
grades than children who do not receive tutoring. There is a correlation between
tutoring and lower grades, but tutoring does not cause the lower grade (it is likely
that tutoring is sought out because of the child’s low grade).
Correlation
When performing an experiment, we look at the effect the BLANK has on the dependent variable. This is the variable changed (or controlled/manipulated) in a scientific experiment.
Independent Variable
the variable tested and measured in a scientific experiment. An easy way to remember this is that it is dependent on the independent variable. As the experimenter changes the independent variable, the effect on the BLANK is observed and recorded.
For example: Someone is testing the effect of a new antidepressant. The new
medication is the independent variable and the level of depression is the dependent
variable.
Dependent Variable
The degree to which different people give similar scores for the same observations; refers to the consistency of a measure.
Inter-rater Reliability
The process of searching published work to find out what is already known about a research topic.
Literature Review
The arithmetic average; a measure of central tendency. To find the BLANK, add up the values in the data set and then divide by the number of values that you added.
Mean
The middle score. To find the BLANK, list the values of the data set in numerical order
and identify which value appears in the middle of the list.
Median
The value that occurs most frequently. To find the BLANK, identify which value in the data
set occurs most often.
Mode
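The three measures of central tendency above can be computed with Python's standard library; the scores below are a small made-up data set for illustration.

```python
from statistics import mean, median, mode

scores = [2, 3, 3, 5, 7, 10]

print(mean(scores))    # (2 + 3 + 3 + 5 + 7 + 10) / 6 = 5
print(median(scores))  # even count, so average the two middle values: (3 + 5) / 2 = 4
print(mode(scores))    # 3 occurs most often
```

Note that with an even number of values the median is the average of the two middle values, as in this example.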
A statement that no relationship exists between study variables.
Null hypothesis
A questionnaire or other data-gathering instrument administered to a subject just before a period of inquiry; it provides a baseline for comparison with the end results.
Pretest
A questionnaire or other data-gathering instrument administered to a subject at the
end of a specific period of inquiry.
Posttest
A procedure for testing and validating a questionnaire or other instrument by administering it to a small group of respondents from the intended test population. The procedure helps determine whether the test items possess the desired qualities of measurement and discrimination, and it reveals other problems before the instrument is put to widespread use.
Pilot Study
The degree to which a tool measures what it claims to measure. For example: the
Beck Depression Inventory is supposed to measure a person’s level of depression. It has validity if it measures that. It would not have validity if, in reality, it measures a person’s level of anxiety and not their level of depression.
Validity
The degree to which an instrument measures the characteristic being investigated (essentially the same as the general term ‘validity’).
construct validity
The confidence that can be placed in the cause-and-effect relationship
in a study
internal validity
The extent to which an effect in research can be generalized to other
populations, settings, and treatment variables
external validity
The extent to which the results of a particular test, or measurement,
correspond to those of a previously established measurement for the same construct.
You want to make sure the test accurately measures what it is supposed to measure.
One way to do this is to look for other tests that have already been found to be valid
measures of your construct, administer both tests, and compare the results of the tests
to each other.
concurrent validity
this involves testing a group of subjects for a certain construct and then comparing them with results obtained at some point in the future. For example: you want to predict the risk factors for high school dropout. You create a survey for 10th graders and then later look at the dropout rates of the surveyed students to see whether the results predicted dropping out.
predictive validity
the overall consistency of a measure. Higher BLANK indicates a measure will produce statistically similar results under consistent experimental conditions. Example: if two different social workers administer the same interview to a client, do they get the same results?
Reliability
data that you can measure. This includes things like age, the
number of times a behavior occurred, blood pressure, temperature, etc. It should be
an unbiased observation/measurement.
Objective Data
data that is given from the viewpoint of the client (or
someone in the client’s life) and that is not measurable. This includes things like how a person
feels or their pain level. This is someone’s personal evaluation.
Subjective Data
Systematic investigations that include inductive, in-depth studies of
individuals, groups, organizations, or communities. They focus on the ‘why’ and ‘how’ of
decision making to better understand human behavior.
Qualitative Research
systematic investigations that include descriptive and inferential statistical analysis.
Quantitative Research