Exam No. 2 Flashcards
Errors in Research/Reasoning:
Overgeneralization: Occurs when we unjustifiably conclude that what is true for some cases is true for all cases. Selective or Inaccurate Observation: Choosing to look only at things that are in line with our preferences or beliefs (selective), or basing conclusions on faulty perceptions of empirical reality (inaccurate). Illogical Reasoning: Prematurely jumping to conclusions or arguing on the basis of invalid assumptions. Resistance to Change: The reluctance to change our ideas in light of new information.
How Social Science Avoids These Errors:
Overgeneralization - To avoid: Use systematic procedures for selecting the individuals or groups to study, so that study subjects are representative of the individuals or groups we want to generalize to. Selective or Inaccurate Observation - To avoid: Measure and sample phenomena systematically. Illogical Reasoning - To avoid: Use explicit criteria for identifying causes and for determining whether these criteria are met in particular instances. Resistance to Change - To avoid: Use the scientific method to lessen the tendency to answer questions about the social world from ego-based commitments, excessive devotion to tradition, or unquestioning respect for authority.
Goals of Research
The four most important goals of social research are (1) description, (2) exploration, (3) explanation, and (4) evaluation.
The Four Goals of Research
(1) Description: Research in which social phenomena are defined and described. The findings simply describe differences or variations in social phenomena. (2) Exploration: Seeks to find out how people get along in the setting under question, what meanings they give to their actions, and what issues concern them. The goal is to learn “what’s going on here?” (3) Explanation: Seeks to identify causes and effects of social phenomena and to predict how one phenomenon will change or vary in response to variation in another phenomenon. (4) Evaluation: Describes or identifies the impact of social policies and programs.
Research Process
Research Strategy: When conducting social research, we try to connect theory with empirical data—the evidence we obtain from the real world. Researchers may make this connection in one of two ways: deductively or inductively.
Research Circle
A diagram of the elements of the research process, including theories, hypotheses, data collection, and data analysis
Deductive Research
The type of research in which a specific expectation is deduced from a general premise and is then tested
Inductive Research
The type of research in which general conclusions are drawn from specific data. Begins with specific data, which are then used to develop (induce) a theory to explain the data. Rather than starting at the top of the research circle with a theory, the inductive researcher starts at the bottom of the circle with data and then moves up to a theory.
Anomalous
Patterns in the data that don’t seem to fit the theory.
Serendipitous
Unexpected patterns in data, which stimulate new ideas or theoretical approaches
Qualitative Research is often
inductive—the researcher observes a social interaction or interviews social actors in depth, and then develops an explanation for what has been found.
Descriptive Research:
Starts with data and proceeds only to the stage of making empirical generalizations; it does not generate entire theories. Valid description is critical in all research. Good description of data is the cornerstone for the scientific research process and an essential component of understanding the social world.
Social scientists evaluate their research questions in terms of three criteria:
(1) feasibility given the time and resources available, (2) social importance, and (3) scientific relevance.
Theory:
A logically interrelated set of propositions about empirical reality i.e., the social world as it actually exists
Hypothesis:
A tentative statement about empirical reality involving a relationship between two or more variables. States a relationship between two or more variables—characteristics or properties that can vary, or change.
Variable:
A characteristic or property that can vary (take on different values or attributes).
Dependent Variable:
A variable that is hypothesized to vary depending on or under the influence of another variable. The variable that depends on another, at least partially, for its level. The predicted result in a hypothesis.
Independent Variable:
A variable that is hypothesized to cause, or lead to, variation in another variable. The independent variable is the hypothesized cause.
Direction of Association:
A pattern in a relationship between two variables—that is, the value of a variable tends to change consistently in relation to change in the other variable. The direction of association can be either positive or negative.
Cross-Sectional Designs:
A study in which data is collected at only one point in time. Cross-sectional designs are limited because data collected at a single time cannot establish the time order of cause and effect.
Longitudinal Designs:
A study in which data is collected that can be ordered in time; also defined as research in which data is collected at two or more points in time.
Approximating Longitudinal Designs:
Techniques for better estimating cause and effect when only cross-sectional data are available, for example by asking retrospective questions about the past.
Retrospective Research:
Gathers information about the past
Trend (repeated cross-sectional design):
A longitudinal study in which data is collected at two or more points in time from different samples of the same population
Cohort Study Design:
A cohort consists of individuals or groups with a common starting point (e.g., year of birth or year of graduation). A longitudinal study in which data are collected at two or more points in time from individuals in a cohort.
Panel Study:
A longitudinal study in which data is collected from the same individuals—the panel—at two or more points in time. Uses a single sample that is studied at multiple points across time. The same people will be asked questions on multiple occasions. A panel design allows us to determine how individuals change, as well as how the population as a whole has changed.
Units of Analysis:
The entities being studied, whose behavior is to be understood.
Units of Analysis - Individuals:
A unit of analysis in which individuals are the source of data and the focus of conclusions
Units of Analysis - Groups/Organizations:
A unit of analysis in which groups are the source of data and the focus of conclusions
Social Artifacts:
Products of social beings. Cultural artifacts are all the things that are created by humans, including the built environment, furniture, technological devices, clothing, art and music, advertising, and language; the list is truly endless.
Causal Fallacies:
Errors in reasoning related to the unit of analysis.
Ecological Fallacy:
An error in reasoning in which conclusions about individual-level processes are drawn from group-level data. A group-level finding from data is used to draw (erroneous) conclusions about individuals.
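A numeric sketch (all numbers invented, in the spirit of Robinson's classic immigrant-literacy example) shows how a group-level pattern can mislead about individuals:

```python
# Hypothetical regions: (share of immigrants, native literacy, immigrant literacy).
# Within BOTH regions, immigrants are less literate than natives.
regions = {
    "A": (0.10, 0.60, 0.40),
    "B": (0.30, 0.90, 0.70),
}

overall = {}
for name, (p_imm, lit_nat, lit_imm) in regions.items():
    # Overall literacy is a weighted average of the two subgroups.
    overall[name] = (1 - p_imm) * lit_nat + p_imm * lit_imm
    print(f"Region {name}: {p_imm:.0%} immigrants, overall literacy {overall[name]:.2f}")

# Region B has MORE immigrants and HIGHER overall literacy (0.84 vs. 0.58).
# Inferring from this group-level pattern that immigrants are more literate
# than natives would be an ecological fallacy.
```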
Reductionist Fallacy (reductionism):
An error in reasoning that occurs when incorrect conclusions about group-level processes are based on individual-level data.
Ethical Guidelines:
No Harm to Participants: Balance potential benefits of research against possible harm to subjects. Voluntary Participation: Subjects should be free to choose whether to participate. Anonymity and Confidentiality: Anonymity: The researcher cannot identify or associate information with a given subject. Confidentiality: The researcher can link information to a subject but promises not to do so. Deceiving Subjects: Usually unethical unless justified by compelling scientific or administrative concerns. Analysis and Reporting: Ethical obligations to the scientific community. Withholding Desirable Treatment: Fairness in the distribution of treatments.
Codes of Professional Ethics
Formal codes of conduct that describe acceptable and unacceptable professional behavior
Institutional Review Boards (IRBs):
Committees that review proposals for biomedical or behavioral research at institutions that seek federal funding for research. They weigh risks to subjects against benefits of the research and determine whether research procedures include adequate safeguards for the safety and protection of subjects.
Levels of Measurement:
The type of information that can be gained from the values/attributes of each variable; the mathematical precision with which the values/attributes of a variable can be expressed.
LOM - Nominal Level:
Variables that vary in kind but not amount. Values/attributes for the variables are just labels.
LOM - Ordinal Level:
Variables with values/attributes that are rank ordered. Different values represent more or less of the variable.
LOM - Interval Level:
Variables with values/attributes that are logically rank ordered and where the distance between values is measured in fixed, equal units (but there is no true zero point).
LOM - Ratio Level:
An interval-level variable with a true zero point.
LOM - Points to Consider
A given variable can often be measured at different levels, depending on how its values/attributes are defined.
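A compact way to review which comparisons each level supports (the example variables are illustrative, not from the cards above):

```python
# Each level of measurement supports all operations of the levels
# listed before it, plus one more (example variables invented).
levels = {
    "nominal": ("marital status", ["equal / not equal"]),
    "ordinal": ("social class (low/middle/high)",
                ["equal / not equal", "greater / less"]),
    "interval": ("temperature in Fahrenheit",
                 ["equal / not equal", "greater / less", "add / subtract"]),
    "ratio": ("annual income in dollars",
              ["equal / not equal", "greater / less", "add / subtract",
               "multiply / divide (true zero)"]),
}

for level, (example, ops) in levels.items():
    print(f"{level:<8} e.g. {example:<32} supports: {', '.join(ops)}")
```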
Measurement Validity:
Whether a measure adequately reflects the meaning of the concept under consideration. Exists when an indicator measures what we think it measures.
Types of Validity:
Face Validity: The measure developed agrees with our common understanding of a particular concept. Considered very weak and subjective. Criterion Validity: Results from one (1) measure match those obtained with a more direct or an already validated measure of the same concept.
Types of Criterion Validity:
Concurrent Validity: A criterion test conducted at the same time yields scores that are closely related to the scores on the measure. Predictive Validity: A measure is validated by predicting scores on a criterion measured in the future.
Other Types of Validity:
Construct Validity: The results from one (1) measure match those obtained from other measures of the same concept specified in theory. Content Validity: A measure covers the full range of the concept's meaning.
Generalizability:
Exists when a conclusion holds true for the population, group, setting, or event that we say it does, given the conditions that we specify; It is the extent to which a study can inform us about persons, places, or events that were not directly studied
Sample Generalizability:
Exists when a conclusion based on a sample, or subset, of a larger population holds true for that population
Cross-Population Generalizability (External Validity):
Exists when findings about one group, population, or setting hold true for other groups, populations, or settings
Causal Validity:
Exists when a conclusion that A leads to, or results in, B is correct. Refers to the truthfulness of an assertion that A causes B.
Belmont Report
Report in 1979 of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research stipulating three basic ethical principles for the protection of human subjects: Respect for persons, beneficence, and justice
Beneficence
Minimizing possible harms and maximizing benefits.
Confidentiality
Provided by research in which identifying information that could be used to link respondents to their responses is available only to designated research personnel for specific research needs.
Certificate of Confidentiality
Document issued by the National Institutes of Health to protect researchers from being legally required to disclose confidential information.
Debriefing
A researcher’s informing subjects after an experiment about the experiment’s purposes and methods and evaluating subjects’ personal reactions to the experiment.
Federal Policy for the Protection of Human Subjects
Federal regulations codifying basic principles for conducting research on human subjects; used as the basis for professional organizations’ guidelines.
Health Insurance Portability and Accountability Act (HIPAA)
A U.S. federal law passed in 1996 that guarantees, among other things, specified privacy rights for medical patients, in particular those in research settings.
Institutional Review Board (IRB)
A group of organizational and community representatives required by federal law to review the ethical issues in all proposed research that is federally funded, involves human subjects, or has any potential for harm to subjects.
Justice
As used in human research ethics discussions, distributing benefits and risks of research fairly
Nuremberg War Crime Trials
Trials held in Nuremberg, Germany, in the years following WWII, in which the former leaders of Nazi Germany were charged with war crimes and crimes against humanity.
Obedience Experiments (Milgram’s)
A series of famous experiments conducted during the 1960s by Stanley Milgram, a psychologist from Yale University, testing subjects’ willingness to cause pain to another person if instructed to do so.
Prison Simulation Study (Zimbardo’s)
Famous study from the early 1970s, organized by Stanford psychologist Philip Zimbardo, demonstrating the willingness of average college students to quickly become harsh disciplinarians when put in the role of (simulated) prison guards over other students; usually interpreted as demonstrating an easy human readiness to become cruel.
Respect for Persons
In human subjects ethics discussions, treating persons as autonomous agents and protecting those with diminished autonomy.
Tearoom Trade
Book by Laud Humphreys investigating the social background of men who engage in homosexual behavior in public facilities; controversially, he did not obtain informed consent from his subjects.
Tuskegee Syphilis Study
Research study conducted by a branch of the U.S. government, lasting roughly 50 years (ending in the 1970s), in which a sample of African American men diagnosed with syphilis were deliberately left untreated, without their knowledge, so that researchers could learn about the lifetime course of the disease.
Reliability
A measurement procedure yields consistent scores when the phenomenon being measured is not changing.
Test-Retest Reliability
A measurement showing that measures of a phenomenon at two points in time are highly correlated, if the phenomenon has not changed, or the measures have changed only as much as the phenomenon itself.
Interitem Reliability (Internal Consistency)
An approach that calculates reliability based on the correlation between multiple items used to measure a single concept.
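One widely used interitem statistic is Cronbach's alpha. A minimal sketch with invented scores for five respondents on three items:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
from statistics import pvariance

items = [
    [4, 5, 3, 2, 4],  # item 1: scores for five respondents (invented)
    [4, 4, 3, 2, 5],  # item 2
    [5, 5, 2, 3, 4],  # item 3
]

k = len(items)
totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
sum_item_var = sum(pvariance(item) for item in items)
alpha = (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))
print(f"Cronbach's alpha = {alpha:.2f}")  # values nearer 1.0 indicate higher consistency
```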
Alternate-Forms Reliability
A procedure for testing the reliability of responses to survey questions in which subjects have been asked slightly different versions of the questions or when randomly selected halves of the sample have been administered slightly different versions of the questions.
Split-Halves Reliability
Reliability achieved when responses to the same questions by two randomly selected halves of the sample are about the same.
Interobserver Reliability
When similar measurements are obtained by different observers rating the same people, events, or places.
Concept
A mental image that summarizes a set of similar observations, feelings, or ideas
Conceptualization
The process of specifying what we mean by a term. In deductive research, conceptualization helps translate portions of an abstract theory into testable hypotheses involving specific variables. In inductive research, conceptualization is an important part of the process used to make sense of related observations.
Constant
A number that has a fixed value in a given situation; a characteristic or value that does not change
Operation
A procedure for identifying or indicating the value of cases on a variable.
Operationalization
The process of specifying the operations that will indicate the value of cases on a variable.
Content Analysis
A research method for systematically and quantitatively analyzing characteristics of messages.
Closed-Ended (Fixed-Choice) Question
A survey question that provides preformatted response choices for the respondent to circle or check.
Mutually Exclusive
A variable’s attributes (or values) are mutually exclusive when every case can be classified as having only one attribute (or value).
Exhaustive
Every case can be classified as having at least one attribute (or value) for the variable.
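Whether a variable's categories are mutually exclusive and exhaustive can be checked mechanically. A minimal sketch with hypothetical age categories:

```python
# Hypothetical age categories for a closed-ended survey question.
categories = ["under 18", "18-34", "35-64", "65 and over"]

def classify(age):
    """Map an age to exactly one category (mutually exclusive);
    the final catch-all makes the category set exhaustive."""
    if age < 18:
        return "under 18"
    if age <= 34:
        return "18-34"
    if age <= 64:
        return "35-64"
    return "65 and over"

# Every test case falls into exactly one valid category.
for age in (5, 17, 18, 34, 35, 64, 65, 99):
    assert classify(age) in categories
print("categories are exhaustive and mutually exclusive for these cases")
```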
Open-Ended Questions
A survey question to which respondents reply in their own words, either by writing or by talking.
Index
A composite measure based on summing, averaging, or otherwise combining the responses to multiple questions that are intended to measure the same concept
Scale
A composite measure based on combining the responses to multiple questions pertaining to a common concept after these questions are differentially weighted, such that questions judged on some basis to be more important for the underlying concept contribute more to the composite score.
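The contrast between an index (unweighted combination) and a scale (differentially weighted combination) fits in a few lines; the responses and weights are invented:

```python
# One respondent's answers to four questions (scored 1-5) measuring one concept.
responses = [3, 4, 2, 5]
# Hypothetical weights: questions 2 and 4 judged more central to the concept.
weights = [1.0, 2.0, 1.0, 1.5]

index_score = sum(responses)  # index: simple unweighted sum
scale_score = sum(r * w for r, w in zip(responses, weights))  # weighted scale

print(index_score)  # 14
print(scale_score)  # 20.5
```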
Triangulation
The use of multiple methods to study one research question.
Population
The entire set of individuals or other entities to which study findings are to be generalized.
Sample
A subset of a population used to study the population as a whole.