767 Midterm Notes (Imported) Flashcards
How does science differ from common sense?
1) Not reality itself, but the best approximation of reality 2) Tests hypotheses/theories 3) Controls variables/causes 4) Pursues relations 5) Focuses on testable, not metaphysical, questions
Charles Sanders Peirce's 4 methods of knowing/fixing belief (+ 1)
1) Method of tenacity 2) Method of authority 3) A priori method / method of intuition 4) Method of science 5) One's own direct experiences
method of tenacity (method of knowing)
people hold firmly to the truth because they’ve always believed it to be the truth
method of authority (method of knowing)
believing in "established beliefs" received from others, particularly authority figures; superior to tenacity because findings can allow slow progress
a priori method/method of intuition (method of knowing)
propositions that "agree with reason" and not necessarily with experience; people have natural inclinations toward truth. The difficulty lies in deciding whose reason is right.
method of science (method of knowing)
beliefs not determined by anything human, whereby the method can yield conclusions that are the same for every person. Includes self-correction/built-in checks along the way to remain unbiased
2 broad views of science
1) Static view 2) Dynamic/heuristic view
static view of science
science as an activity that contributes systematized information to the world (common among laypeople); emphasis on the present state of knowledge and adding to it
dynamic/heuristic view of science
science as an activity that scientists perform, emphasis on theory and approaches that are fruitful for further research; discovery
2 views on function of science
1) science as a discipline or activity aimed at improving things/making progress
2) science is to establish general laws/theories that explain phenomena to create predictability
Sampson’s 2 opposing views of science
1) conventional/traditional perspective
2) nontraditional/sociohistorical perspective
Conventional/traditional perspective in Sampson’s view of science
science as a mirror of nature that presents nature without bias; goal is to describe with accuracy what the world is like
Nontraditional/sociohistorical perspective in Sampson’s view of science
scientists as storytellers and not neutral arbitrators
What is purpose of science?
Theory
What is theory?
- A set of interrelated concepts that present a systematic view of phenomena by specifying relations among variables, with the purpose of explaining and predicting the phenomena
- a good theory is one that cannot fit all phenomena; it should be possible to find an occurrence that would contradict it. Modest, limited, and specific research aims are good
- simple explanation is preferred (Occam’s Razor)
Scientific approach
special systematized form of reflective thinking and inquiry based on Dewey’s analysis
Scientific problem
a statement that asks "what relationship exists between 2+ variables?"
Criteria of good scientific problem
- Problem should express a relationship between 2+ variables
- Should be stated clearly and unambiguously in question form
- Statement should imply possibilities of empirical testing to be scientific
Hypothesis
conjectural statement of the relationship between 2+ variables
Criteria for good hypothesis
1) A hypothesis is a statement about the relations between variables
2) A hypothesis carries clear implications for testing the stated relations
3 reasons why hypotheses are important tools of scientific research
1) They are working instruments of theory
2) Can be tested and shown to be probably true or false (predictive)
3) They are tools for the advancement of knowledge because they let thinkers test ideas apart from their own biases
What are errors of a hypothesis/problem?
1) They are not ethical questions/value questions (no should, better than, ought)
2) Problems that are too general/vague cannot be tested
3) Problems that are too specific are not generalizable
4) Problems/hypotheses need to reflect the multivariable nature of reality
2 levels on which scientists operate
1) Theory-hypothesis-construct 2) Observation
What is a concept?
expresses an abstraction formed by generalization from particulars, i.e. weight
What is a construct?
concepts that are consciously invented for scientific purpose
Constitutive definition of construct
defines a construct using other constructs
Operational definition
assigns meaning to constructs by specifying the activities/operations necessary to measure it, and evaluate the measurement
2 types of operational definitions
1) Measured 2) Experimental
Measured type of operational definition
describes how variable will be measured
Experimental type of operational definition
states details (operations) of the experimenter’s manipulation of a variable, i.e. reinforcement can be operationally defined by giving details of how subjects will be rewarded
Variables
symbols to which numerals/values are assigned
Dichotomous/binary variables
only 2 values, i.e. 1 or 0, yes or no
Polytomous variable
more than 2 values
Continuous variable
taking on an ordered set of values within a certain range
Categorical variable
grouped by either having or not having the characteristic defining the subset
Independent variable
presumed cause of the dependent variable (antecedent)
Dependent variable
the presumed effect (consequent)
Manipulated variable/active variable/stimulus variable
any variable that is manipulated in experiment
Measured variable/attribute variable/response variable
variables that can't be manipulated; also called attribute/subject-characteristic variables, organismic variables, or individual-differences variables
Latent variable
unobserved, presumed to underlie observed measured variable, i.e. intervening or construct variable
Characteristics of the scientific revolution of the 17th century
1) using observations to correct apparent errors rather than using to support theories
2) active observations, experiments
3) controlling for intervening variables, such as by random assignment or adding control groups
INUS condition
an Insufficient but Non-redundant part of an Unnecessary but Sufficient condition; i.e., the event doesn't need it in order to happen, but with the full condition the event will happen (a match leading to a forest fire). Most causes are more accurately called INUS conditions
Effect
the difference between what did happen and what would have happened; this follows the counterfactual model (Hume): something contrary to fact. Hence, what would have happened if the cause variable were not there? How are results different under that condition?
John Stuart Mill: causal relationship exists if
1) The cause preceded the effect
2) The cause was related to the effect
3) We can find no plausible alternative explanation for the effect other than the cause
Correlation
does not prove causation; does not indicate which variable came first
Causal description
describing the consequences attributable to deliberately varying a treatment; this is what experiments do better than causal explanation, though the line between the two is not clear
Moderator variable
explains the conditions under which the effect holds
Mediator variables
explains the causal effect
Randomized experiment
Sir Ronald Fisher; various treatments being contrasted are assigned to experimental units by chance
Quasi-experiment
lacks random assignment; uses self-selection or administrator selection
Natural experiment
naturally occurring contrast between a treatment and comparison condition
Nonexperimental designs/correlational design/passive observational design
situations in which a presumed cause and effect are identified and measured, but other structural features of experiments are missing; i.e. no random assignment, no design elements such as pretests and control groups
How is strength of experimentation defined?
ability to illuminate causal inferences
2 kinds of generalizations
1) Construct validity generalizations 2) External validity generalizations
Construct validity generalizations
inferences about the constructs that research operations represent (representation)
External validity generalizations
inferences about whether the causal relationship will remain with variations in persons, settings, etc (extrapolation)
Grounded theory of causal generalization (scientists make causal generalizations by using 5 closely related principles):
1) Surface Similarity
2) Ruling out irrelevancies
3) Making discriminations
4) Interpolation and extrapolation
5) Causal explanation
Surface Similarity
assess the similarities between the study operations and the target of generalization
Ruling out irrelevancies
identify those things that do not change a generalization
Making discriminations
Clarify key discriminations that limit generalization
Interpolation and extrapolation
construct data of unsampled values within the range of the sampled instances and extrapolate beyond the sampled range
Causal explanation
develop and test explanatory theories about pattern of effects
The Kuhnian critique about the scientific revolution
science has consisted of many different and incommensurable paradigms; theory-free observation is impossible
According to Ellis 1991b, aim of science
1) What (observe and describe) 2) Why (explain and predict)
Main points of Ellis 1991b
- Supervision models not tested empirically; poorly organized
- Supervisors must be critical consumers of research and pursue scientifically rigorous research
- Primary aim of science is theory
Rules of science according to Ellis 1991b
1) need to be explicit about propositions and list one’s biases
2) acknowledge the theory influence guiding research
3) focus on scientific rigor rather than type of research for critique
4) focus on empirical validation and foster rival explanations
paradigm (Ponterotto 2005)
set of interrelated assumptions about the social world which provides a philosophical and conceptual framework for the organized study of that world
positivism (Ponterotto 2005)
form of philosophical realism, belief in tightly controlled experimental study, theory verification
postpositivism (Ponterotto 2005)
updated positivism, belief in an imperfectly apprehendable objective reality, theory falsification
constructivism-interpretivism (Ponterotto 2005)
relativist, assumes multiple equally valid and apprehendable realities constructed in mind of each individual, brought to light via intense researcher-participant interaction
critical-ideological (Ponterotto 2005)
disrupts and challenges the status quo; no one theory, but sees the criticalist as the central tool, who uses his work for social criticism; focus on power relations
ontology/nature of reality and being (Ponterotto 2005)
1) positivists: there is 1 reality (naïve realism)
2) postpositivists: there is 1 reality but it is imperfectly understood
3) constructivists: multiple constructed realities, subjective
4) critical-ideological: reality is shaped by environment and power relations
epistemology/relationship between knower (participant) and would-be knower (researcher) (Ponterotto 2005)
1) positivists: emphasize objectivism, everything assumed to be independent, no bias
2) postpos: acknowledges researcher may have bias, but objectivity is a guideline
3) const-int: subjective, reality is socially constructed, interaction between researcher and participant is central to describing the “lived experience” of the participant
4) crit-ide: work collaboratively through interactions toward empowerment
axiology/role of researcher values in the scientific process (Ponterotto 2005)
1) positivists and postpositivists: no place for values in the research process
2) const-int: researchers need to acknowledge their own values, not eliminate them
3) crit-ide: hope and expect value biases to influence the research process/outcome
realist’s view of science (Heppner et al. 1992)
1) Knowledge is a social and historical product and cannot be obtained only by studying individuals in isolation
2) Experiences of an individual, observable or not, are appropriate topics of study
3) Focus of research should not be on events and relationships among events, but on underlying causal properties of structures
what is the importance of being trained in and incorporating scientific thinking in practice? (Heppner et al. 1992)
hypothesize about client, collect data, test, develop model, and predict (Pepinsky)
Philosophy of science (Corso, 1967)
makes explicit and systematizes basic assumptions about the world
Guidelines for scientific method (Corso, 1967)
- distinguish between observations and inferences
- selection of a problem, which is simplified to a specific question
- come up with a hypothesis
- design a controlled testing situation (that’s appropriate for the question)
- analyze and interpret the data
- evaluate findings and generalize findings
science (Corso, 1967)
continuous, cumulative and self-correcting, therefore students need to develop questioning attitudes
determinism is associated with (Corso, 1967)…
the notion of control (e.g., in conducting experiments) and prediction
Understanding consists of (Corso, 1967)
description (classification, ordering, correlational) and explanation.
Assumptions that scientists make (Corso, 1967)
orderly universe, space, time, and matter
main point of Chamberlin 1897 article
multiple working hypotheses
two modes of thinking (Chamberlin, 1897)
(a) imitative, repetitive (b) creative and independent - can look at old subject matter but critically and through a new lens
3 phases in the history of mental evolution (Chamberlin, 1897)
(a) ruling theory (b) working hypothesis (c) multiple working hypotheses
ruling theory (Chamberlin, 1897)
(a) the need to provide an explanation even before evidence is found
(b) attachment to a given theory - biases that limit different views, increased tendency to fit data to theory
(c) how a theory becomes a ruling theory: premature explanation -> tentative theory -> adopted theory -> ruling theory
working hypothesis (Chamberlin, 1897)
(a) used as a means to determine facts rather than to establish a proposition
(b) is a mode rather than an end (which, according to Chamberlin, is what the ruling theory was)
(c) one is as likely to grow attached to a working hypothesis, which can then become the controlling idea
multiple working hypotheses (Chamberlin, 1897)
(a) to overcome the notion of controlling idea, use multiple working hypotheses
(b) allows complexity, avoids notion of singular cause
According to Platt (1967), certain fields advance at a greater speed because
of an accepted method of doing things that is taught systematically and is cumulative
According to Platt (1967), we need to teach how to
sharpen inductive inferences
According to Platt (1967), shouldn’t be _______, rather _______.
method-oriented, problem-oriented
According to Platt (1967), science only advances with
disconfirming evidence
According to Platt (1967), Chamberlin’s proposal of multiple hypotheses
is right on! It is a cure for being too narrow in view; it forces one to consider alternative hypotheses
According to Platt (1967), statistics
1) They are tools, but one needs to be flexible in using them 2) Don't be overreliant on statistics and methodology
What are the main points of Serlin (1987)?
addresses inappropriate sampling procedures, hypothesis-testing procedures, and the notion of atheoretical research
According to Serlin (1987), Sampling
(a) difficulty of true random sampling in psychology, and the tendency to use samples of convenience
(b) notion of a representative sample: which demographics are important (race, gender, SES, etc.) is determined by the questions asked; "good enough" sampling
(c) theory must guide the selection of sampling procedure, and in what ways it must be representative
(d) random sample allows generalizability to sampled population, and a non-random sample allows generalizations to a hypothetical population
According to Serlin (1987), Philosophy of Science and Statistics (7 points)
(a) cannot prove theories but can certainly disprove them
(b) critiques the role of statistics when it is used as a replacement for really thinking about the results
(c) use theory as a basis for the interpretation of statistics
(d) statistical results can be used to inform theories
(e) need to use tests and confidence interval procedures that help determine the size of the observed effect
(f) any theoretical development results in differentiating between relevant and less relevant variables - these theories need to be tested
(g) stats and theory inform each other
What are the main points of Skinner (1956)?
1) Empirical analysis is better than a formal one 2) Graduate school doesn't really teach its students to become scientists, because most programs explicitly focus on model building and theory construction in statistics, which is only one method of science (a distinction between scientific research and stats) 3) Difficulties of a practicing scientist: work habits haven't been formalized
According to Skinner (1956), case studies may illustrate…
(a) Shift gears upon unexpected findings
(b) flexibility in research design;
(c) different types of measurements can be informative
(d) never felt the need to use explicit formal hypotheses
(e) larger samples and greater number of apparatuses, lower the flexibility (modifications become more cumbersome)
(f) recognize that common suggestions such as increasing n to get significance may not be as practically useful as exploring the existing research design and looking at the variables - explore the why behind discrepancy between data and the theory
According to Skinner (1956), the purpose of experimental analysis of behavior is to
devise techniques that reduce effects of idiosyncrasies unless that is what is being observed
According to Skinner (1956), stats can be useful but
it is problematic when used blindly
According to Kerlinger and Lee, data is defined as
the research results from which inferences are drawn (usually numerical).
According to Kerlinger and Lee, research data is defined as
the result of systematic observation and analysis used to make inferences and arrive at conclusions
According to Kerlinger and Lee, analysis is defined as
the categorizing, ordering, manipulating, and summarizing of data to obtain answers to research questions
According to Kerlinger and Lee, the purpose of analysis is
to reduce data to intelligible and interpretable form so relationships of research problems can be studied and tested.
According to Kerlinger and Lee, the purpose of statistics is
manipulate and summarize numerical data and to compare the obtained results to chance expectations.
According to Kerlinger and Lee, interpretation functions to
take the results of analysis, make inferences pertinent to the research relations studied, and draw conclusions about these relations
According to Kerlinger and Lee, 2 methods of interpretation
1) The relations within the research study and its data are interpreted. 2) The broader meaning of the research data is sought.
According to Kerlinger and Lee: Quantitative data come in 2 general forms
frequencies and continuous measures
According to Kerlinger and Lee: continuous measures are
associated with continuous variables
According to Kerlinger and Lee: Frequencies are
the numbers of objects in sets and subsets
According to Kerlinger and Lee: The first step in any analysis is
Categorization
According to Kerlinger and Lee: Five rules for categorization
1) Categories are set up according to the research problem and purpose. 2) The categories are exhaustive. 3) The categories are mutually exclusive and independent. 4) Each category is derived from one classification principle. 5) Any categorization scheme must be on one level of discourse (conceptual clarity).
According to Kerlinger and Lee: Frequency Distributions
1) Primarily used for descriptive purposes 2) Can also serve other research purposes (e.g., comparing test score distributions)
According to Kerlinger and Lee: Graphs and Graphing
1) A powerful tool of analysis 2) A two-dimensional representation of a relation or relations 3) Displays relations and their nature (strength, direction, linearity/non-linearity)
According to Kerlinger and Lee: Measures of Relations
1) Product-moment coefficient of correlation, correlation ratio, etc. 2) Express the extent to which pairs of sets or ordered pairs vary concomitantly 3) Show magnitude and (usually) the direction of the relation
According to Kerlinger and Lee: Analysis of Variance and Related Methods
method of identifying, breaking down, and testing for statistical significance variances that come from different sources of variation
According to Kerlinger and Lee: Profile Analysis
the assessment of similarities of the profiles of individuals or groups
According to Kerlinger and Lee: Multivariate Analysis
general term used to categorize a family of analytical methods whose chief characteristic is the simultaneous analysis of k independent variables and m dependent variables
According to Kerlinger and Lee: Multiple regression
analyzes the common and separate influences of two or more independent variables on a dependent variable
According to Kerlinger and Lee: An index is an
observable phenomenon that is substituted for a less-observable phenomenon. It is a number that is a composite of two or more numbers
According to Kerlinger and Lee: Indices are important because
they simplify comparisons
According to Kerlinger and Lee: Ratio
composite of two numbers that relates one number to the other in fractional or decimal form
According to Kerlinger and Lee: Proportion is a
fraction with the numerator one of two or more observed frequencies and the denominator the sum of the observed frequencies
According to Kerlinger and Lee: Percentage is a
proportion multiplied by 100.
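The ratio/proportion/percentage definitions above are simple arithmetic; a minimal Python sketch (my own illustration, not from Kerlinger and Lee; function names are made up):

```python
def ratio(a, b):
    """Ratio: one number related to another in fractional/decimal form."""
    return a / b

def proportion(f, frequencies):
    """Proportion: one observed frequency over the sum of all observed frequencies."""
    return f / sum(frequencies)

def percentage(f, frequencies):
    """Percentage: a proportion multiplied by 100."""
    return proportion(f, frequencies) * 100

freqs = [30, 70]  # e.g., 30 "yes" responses, 70 "no"
print(ratio(30, 70))          # 30 related to 70 in decimal form
print(proportion(30, freqs))  # 0.3
print(percentage(30, freqs))  # 30.0 (within floating-point error)
```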
According to Kerlinger and Lee: indices can be dangerous because
they are often a mixture of two fallible measures
According to Kerlinger and Lee: Interpreting negative and inconclusive results can be contributions to science if we
can show that the methodology, measurement, and analysis were adequate
According to Kerlinger and Lee: What should you do with unpredicted and unexpected findings?
Don’t ignore them! But treat them with suspicion.
According to Kerlinger and Lee: can one prove theory?
No, nothing is ever proved. Interpretation of research data culminates in conditional probabilistic statements of the “If p, then q” kind.
According to Kerlinger and Lee: Categorical or nominal variables
are those variables where y = {0,1}, 0 and 1 being assigned on the basis of the object x either possessing or not possessing some defined property or attribute.
According to Kerlinger and Lee: Continuous variables
are those variables where y = {0,1,2,…,k}, or some numerical system where the numbers indicate more or less of the attribute in question
According to Kerlinger and Lee: A crosstab is
a numerical tabular presentation of data, usually in frequency or percentage form, in which variables are cross-partitioned.
According to Kerlinger and Lee: principal use of crosstabs
study the relations between categorical or nominal data
According to Kerlinger and Lee: what are the purposes of crosstabs?
to explore relations, to organize data, to control variables, and to sensitize the researcher to the design and structure of research problems.
According to Kerlinger and Lee: Degrees of freedom defines
the latitude of variation contained in a statistical problem
According to Kerlinger and Lee: what are the four purposes of statistics?
1) To reduce large quantities of data to manageable and understandable form. 2) To aid in the study of populations and samples. 3) To aid in decision making. 4) To aid in making reliable inferences from observational data; helping to make decisions among hypotheses.
According to Kerlinger and Lee: Statistical inferences have 2 characteristics
1) Inferences are usually from samples to populations. 2) Inferences are used even when investigators are not interested in the populations, or only secondarily interested in them.
According to Kerlinger and Lee: What is a binomial system?
consists of two admissible outcomes. When something is “counted in” because it possesses the attribute in question, it is assigned a 1. If it does not possess the attribute, it is assigned a 0.
According to Kerlinger and Lee: What is The Law of Large Numbers?
the larger the sample, the closer the sample value approaches the true (population) value.
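A quick simulation of the Law of Large Numbers (my own sketch, not from the text): the mean of fair 0/1 coin flips approaches the population value of 0.5 as the sample grows.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def sample_mean(n):
    """Mean of n fair coin flips coded 1 = heads, 0 = tails; true value is 0.5."""
    return sum(random.randint(0, 1) for _ in range(n)) / n

# Larger samples drift closer to the population value.
for n in (10, 1_000, 100_000):
    print(n, sample_mean(n))
```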
According to Kerlinger and Lee: What is a normal curve and why is it important?
Chance events tend to distribute themselves in a normal curve. The most important statistical reason for using the normal curve is to be able to interpret the probabilities of the statistics one calculates easily.
According to Kerlinger and Lee: What is a standard deviation?
a length along the base line of the curve from the mean or middle of the baseline out to the right or left to the point where the curve inflects.
According to Kerlinger and Lee: Z-scores (standard scores)
1) Linear transformations of raw scores 2) Expressed in "standard deviation units" 3) Can be compared to one another across different z-score distributions
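A minimal sketch of the z-score transformation (my own illustration): subtract the mean, divide by the standard deviation.

```python
import statistics

def z_scores(scores):
    """Transform raw scores into standard-deviation units from the mean."""
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)  # population SD; sample SD is also common
    return [(x - mean) / sd for x in scores]

raw = [2, 4, 4, 4, 5, 5, 7, 9]   # mean 5, population SD 2
print(z_scores(raw))             # first score: (2 - 5) / 2 = -1.5
```

Because each z-score is expressed in its own distribution's SD units, scores from different distributions become comparable.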
According to Kerlinger and Lee: What is variability?
measure of the dispersion of the set of scores. It tells us how much the scores are spread out.
According to Kerlinger and Lee: What is sampling variance?
Difference between statistics of multiple samples
According to Kerlinger and Lee: What is systematic variance?
the variation that can be accounted for
According to Kerlinger and Lee: What is between-groups variance?
variance that reflects systematic differences between groups of measures. Between-groups variance is a term that covers all cases of systematic differences between groups, experimental and nonexperimental
According to Kerlinger and Lee: What is experimental variance?
variance engendered by active manipulation of independent variables
According to Kerlinger and Lee: What is error variance?
fluctuation or varying of measures that is unaccounted for. It's the variance left over in a set of measures after all known sources of systematic variance have been removed from the measures.
According to Kerlinger and Lee: What is covariance?
the relationship between two or more variables. It is an unstandardized correlation coefficient
According to Kerlinger and Lee: A priori probability is defined as
probability of event is # of favorable cases divided by the total # of equally possible cases: p = f / (f +u); p = probability; f = favorable cases; u = unfavorable cases
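The a priori formula p = f / (f + u) as a one-liner (my own sketch):

```python
def a_priori(favorable, unfavorable):
    """p = f / (f + u): favorable cases over all equally possible cases."""
    return favorable / (favorable + unfavorable)

# Rolling a 3 on a fair die: 1 favorable case, 5 unfavorable cases.
print(a_priori(1, 5))  # 1/6 ≈ 0.1667
```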
According to Kerlinger and Lee: A posteriori probability is defined as
relative long-term frequency. Ratio of the # of times an event occurs to the total number of trials
According to Kerlinger and Lee: Relative frequency is defined as
1) probability value itself, and 2) the weight of evidence associated with it. *Important for the behavioral sciences.
According to Kerlinger and Lee: What is conditional probability?
the probability of A, given B. In conditional probability, our foreknowledge of a characteristic reduces the sample space (i.e., the denominator in the probability equation); we are dealing with a pertinent subset of U, in which the variable is related to the criterion variable.
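A worked example of conditional probability (my own sketch): foreknowledge of B shrinks the denominator to the subset where B holds.

```python
def conditional(p_a_and_b, p_b):
    """P(A | B) = P(A and B) / P(B)."""
    return p_a_and_b / p_b

# Die roll: A = "roll a 2", B = "roll an even number".
# P(A and B) = 1/6 and P(B) = 3/6, so P(A | B) = 1/3.
print(conditional(1 / 6, 3 / 6))
```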
According to Kerlinger and Lee: Bayes' theorem is the basis for
Confirmatory factor analysis, SEM, and discriminant analysis
According to Kerlinger and Lee: What is sampling?
taking a portion of a population or universe as a representation of that population or universe.
According to Kerlinger and Lee, what is random sampling?
method of drawing samples from a population such that every possible sample of a particular size has an equal chance of being selected. Resulting samples are called ‘random samples.’
According to Kerlinger and Lee, what is sampling with replacement?
placing each individual back into the population so that the probability remains the same
According to Kerlinger and Lee, what is sampling without replacement?
not replacing individuals chosen from a population
According to Kerlinger and Lee, what is representative sampling?
to be typical of a population – the sample has approximately the same characteristics as the population relevant to the research question.
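The with/without-replacement distinction in code (my own sketch using Python's standard library): `random.sample` draws without replacement, `random.choices` with replacement.

```python
import random

random.seed(42)
population = list(range(1, 11))  # ten individuals

# Without replacement: each individual can be drawn at most once.
without = random.sample(population, 5)

# With replacement: each draw is "put back", so repeats are possible and
# every draw has the same probability.
with_repl = random.choices(population, k=5)

print(without)
print(with_repl)
```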
According to Kerlinger and Lee, what is randomization?
the assignment of members of the universe to experimental treatments such that, for any given assignment to a treatment, every member of the universe has an equal probability of being chosen for that assignment (random assignment)
According to Kerlinger and Lee, what is the purpose of randomization?
spread out individuals with varying characteristics equally among treatment groups. No guarantee that this actually happens.
According to Kerlinger and Lee, what is the importance of sample size (with regard to randomization)?
use as large a sample as possible – gives the principle of randomness and randomization a chance to work
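Randomization (random assignment) can be sketched as shuffling the subject pool and dealing it into groups (my own illustration; the function name is hypothetical):

```python
import random

def randomize(subjects, n_groups):
    """Shuffle subjects, then deal them into n_groups treatment conditions."""
    pool = list(subjects)
    random.shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

random.seed(1)
treatment, control = randomize(range(20), 2)
print(len(treatment), len(control))  # 10 10
```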
According to Kerlinger and Lee, what is probability sampling?
uses some form of random sampling in one or more of its stages
According to Kerlinger and Lee, what is stratified sampling?
population is divided into strata (men and women, Latino / white, etc.) and then random samples are drawn from the stratified groups.
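Stratified sampling as code (my own sketch; the strata labels are made up): divide the population into strata, then sample randomly within each.

```python
import random

def stratified_sample(strata, n_per_stratum):
    """Draw a simple random sample (without replacement) from each stratum."""
    return {name: random.sample(members, n_per_stratum)
            for name, members in strata.items()}

random.seed(7)
strata = {"men": list(range(0, 50)), "women": list(range(50, 100))}
print(stratified_sample(strata, 5))
```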
According to Kerlinger and Lee, what is the purpose of random sampling?
helps to prevent, or curtail, bias
According to Kerlinger and Lee, what is cluster sampling?
successive random sampling of units, or sets and subsets. A cluster is a group of things of the same kind. Also referred to as area sampling (school districts, etc.)
According to Kerlinger and Lee, what is two-stage cluster sampling?
successive random sampling of units, or sets and subsets; then select a random sample of the elements and measure those elements
According to Kerlinger and Lee, what is systematic sampling?
order the elements of the population, figure out number needed for sample and pick every kth person (every 10th person on the list)
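Systematic sampling in code (my own sketch): order the population, compute k = N / n, take every k-th element.

```python
def systematic_sample(population, n):
    """Take every k-th element of an ordered population (k = N // n)."""
    k = len(population) // n
    return population[::k][:n]

roster = list(range(1, 101))  # 100 people, already ordered
print(systematic_sample(roster, 10))  # every 10th: 1, 11, 21, ..., 91
```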
According to Kerlinger and Lee, what is non-probability sampling?
Sampling that does not use any form of random sampling
According to Kerlinger and Lee, what is quota sampling?
Type of non-probability sampling that uses strata (sex, race, region, etc.) to select sample members that are representative or typical (public opinion polls use this)
According to Kerlinger and Lee, what is purposive sampling?
Type of non-probability sampling that uses judgment and deliberate effort to obtain representative samples by including presumably typical areas or groups in the sample (marketing research)
According to Kerlinger and Lee, what is accidental sampling?
The weakest form of non-probability sampling, but the most widely used – e.g., classes of seniors in high school, sophomores in college
According to Winnicot & Winnicot, regression allows
prediction – how will Y be affected by X in a given situation
According to Winnicot & Winnicot, a “good fit” for the regression is the
line that minimizes error the most
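The "good fit" line is the one minimizing squared prediction error (ordinary least squares); a minimal sketch of the slope/intercept formulas (my own illustration):

```python
def least_squares(xs, ys):
    """Slope and intercept of the line minimizing squared prediction error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]         # exactly y = 2x + 1
print(least_squares(xs, ys))  # recovers slope 2.0, intercept 1.0
```

The fitted line then supports prediction: for a new X, predict Y as slope * X + intercept.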
According to Levin, experimental method is best for
manipulating and controlling causal variables and identifying specific factors as the causes of a behavior
According to Levin, the method of observation is
careful and systematic observation to observe behavior and its antecedents in the absence of outside intervention.
According to Levin, what is the definition of manipulation?
creating or selecting discrete levels of a variable and comparing responses across levels
According to Levin, what is the definition of control?
holding extraneous variables (anything other than the IV that could affect an experiment) constant across levels of the IV
According to Levin, what is the definition of randomization?
assigning subjects randomly to treatment conditions
According to Levin, what is the definition of the independent variable?
variable that is manipulated (by way of assigning subjects to treatment groups)
According to Levin, what is the definition of the dependent variable?
measure of performance or other outcome of the IV
According to Levin, what is the definition of subject characteristics?
Inherent characteristics of the participants that can affect the DV
According to Levin, what is Statistical inference based upon?
the notion that research data represent a random sample from some population where everyone in that population has an equal chance of being selected.
According to Levin, what is the definition of subject bias?
when a subject’s belief about what he/she should do in an experiment affects his/her response
According to Levin, what is the definition of experimenter bias?
when an experimenter’s measurement and treatment of the data influence the outcome of the research
According to Levin, what is the definition of double-blind investigation?
neither experimenter nor participant knows which experimental group the participant is in.
According to Levin, what is the definition of the method of observation?
observation, recording and classification of behavior to determine relationships between variables
According to Levin, what is the purpose of longitudinal studies?
They can help us understand trends and successive changes over time
According to Levin, what is Between subjects manipulation?
subjects receive different levels of the IV. Advantage – rules out possible contaminating influences introduced when a subject is in more than one treatment condition.
According to Levin, what is matching?
A more elaborate way than complete randomization to achieve control of subject variables: match the subjects in each group on a potentially confounding variable
According to Levin, what is Independent random groups design?
subjects are assigned to either the treatment condition or the control condition. Groups are treated alike in all ways except the IV and are randomized into treatment conditions.
According to Levin, what are matched pairs?
A between-subjects design: take 2 subjects with similar traits and put one in the treatment group, one in the control group.
According to Levin, what is Randomized blocks design?
Subjects are rank ordered on the basis of a numerical subject variable such as scores on a pretest, then arranged in blocks corresponding in size to the number of levels of the IV. For an IV with 3 levels, the 3 highest scorers form the 1st block, the next 3 highest the next block, and so on; the subjects within each block are then distributed across the conditions.
According to Levin, what is factorial design?
All possible combinations of the levels of two or more variables are included in the experiment. Separate (main) effects are considered as well as the interaction effects between variables.
According to Levin, what is Correlation?
a way to quantify the relationship between paired scores
According to Levin, what is Regression?
a way to use correlation information to predict values of one of the scores.
According to Levin, what is linear relationship?
A two-dimensional scatter plot of the data can be fit with a straight line (the line of best fit).
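These ideas can be made concrete with a minimal least-squares sketch (the paired scores here are made up, not from Levin): fit y = a + b·x to paired data and use the line to predict new values.

```python
# Minimal least-squares regression sketch: fit y = a + b*x to paired scores
# and use the resulting line to predict new values (all data are made up).

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # The slope that minimizes the sum of squared errors ("line of best fit").
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = fit_line(xs, ys)

def predict(x):
    # Regression's purpose on these cards: predict Y from X.
    return a + b * x
```

The slope and intercept come directly from the usual least-squares formulas; any line other than this one would have a larger sum of squared errors for these data.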
According to Kerlinger and Lee: what must we do when examining differences between means, correlations, etc.?
Compare relative difference, not only absolute difference
According to Kerlinger and Lee: what is the relation of n to statistical significance?
Large ns make statistical significance more likely (even when effect sizes are quite small)
According to Kerlinger and Lee, what is a substantive hypothesis?
conjectural statement about the relationship between variables, but is not testable until translated into operational terms
According to Kerlinger and Lee, what is a statistical hypothesis?
a conjectural statement in quantitative/statistical terms deduced from the relations of the substantive hypothesis… a prediction of how the stats used in analysis will turn out.
According to Kerlinger and Lee, what is standard error?
The SD of the sampling distribution of any given measure. Since it is usually impossible to measure the entire population, we use the sample mean to stand for the population mean. Testing this mean uses the SE of the mean, or sampling error. The SE of the mean is not the same as measuring the whole population, but is a measure of the dispersion of the distribution of sample means.
According to Kerlinger and Lee: can we prove/accept the null hypothesis?
No. We can fail to reject the null hypothesis but cannot prove that it is true, so we do not accept it.
According to Kerlinger and Lee, what is Monte Carlo Demonstration?
Use computer-generated random numbers to obtain solutions to mathematical, statistical, numerical, and verbal problems. Such demonstrations can show, for example, that the SD of the sample means (the SE of the mean) approximates the population SD divided by the square root of n.
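A small Monte Carlo sketch of this kind of demonstration (the population parameters and sample counts are arbitrary choices, not from Kerlinger and Lee): draw many random samples, and compare the SD of the sample means to σ/√n.

```python
# Monte Carlo sketch of the standard error of the mean: draw many random
# samples from a population and check that the SD of the sample means
# approximates population_sd / sqrt(n).
import random
import statistics

random.seed(0)          # fixed seed so the demonstration is repeatable
population_sd = 10.0
n = 25

# Draw 2000 samples of size n from a normal population, recording each mean.
sample_means = [
    statistics.mean(random.gauss(50.0, population_sd) for _ in range(n))
    for _ in range(2000)
]

empirical_se = statistics.pstdev(sample_means)   # SD of the sample means
theoretical_se = population_sd / n ** 0.5        # sigma / sqrt(n) = 2.0
```

With 2000 replications the empirical SE lands very close to the theoretical value, which is the point of such demonstrations.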
According to Kerlinger and Lee, what is The central limit theorem?
states that if samples are drawn from a population at random, the means of the samples will tend to be normally distributed. If a distribution approximates normality, we can assess it against the known properties of the normal curve.
According to Kerlinger and Lee: why is the standard error important?
We determine statistical significance by dividing the statistic by the SE for that statistic.
What is type 1 error?
Rejecting the null hypothesis in the sample when it is true in the population.
What is type 2 error?
Failing to reject the null hypothesis in the sample when it is false in the population.
Reducing alpha (e.g. .05 -> .01) has what effect?
Reduces the probability of a Type I error
Reducing beta (e.g. .2 -> .1) has what effect?
Reduces the probability of a Type II error
According to Kerlinger and Lee: What are the 5 steps of hypothesis testing?
- State statistical hypothesis.
- State null hypothesis.
- Compute the test statistic using empirical data.
- Decision: reject or do not reject null.
- Leap of inference from our decision to the actual problem.
What is the main point of Maxwell & Delaney, 1990?
Discusses differences in philosophy of science. Rejects the notion of deductive science as pure objectivity; atheoretical investigation is impossible.
According to Maxwell & Delaney, 1990: Science has two main assumptions
1. Lawfulness of nature 2. Finite causation
According to Maxwell & Delaney, 1990: Lawfulness of nature refers to
nature is understandable, uniform, and the principle of causality (Hume indicates that correlation is all we can know about causality… also, we no longer accept constant conjunction, the idea that the cause is necessary and sufficient for the effect. We now know that cause and effect are influenced by the context.)
According to Maxwell & Delaney, 1990, what is Finite Causation?
causes are not limitless, but we must determine which elements are the cause and under what conditions the cause creates the effect.
According to Maxwell & Delaney, 1990: Positivism means
All knowledge in this stage is based on positive (certain, sure) methods of science.
According to Maxwell & Delaney, 1990: positivism relies on
Verifiability Criterion of Meaning: a proposition is meaningful if and only if it can be verified empirically.
According to Maxwell & Delaney, 1990: Popper’s Falsificationism states that
Progress occurs by falsifying theories. However, due to the limitations of data collection and stat. analysis, it may be impossible to disprove a theory. Also, ‘disproved’ theories may still be the best current explanation. The converse of this is confirmationism, which is logically invalid but not entirely useless.
According to Maxwell & Delaney, 1990: Kuhn posits paradigms as
that which defines universally accepted scientific knowledge and defines ‘normal science’. Anomalies slowly occur that disconfirm ‘normal science’ and thus, the paradigm, so a scientific revolution occurs to create a new paradigm.
According to Maxwell & Delaney, 1990: what is the problem with Kuhn’s characterization of paradigms?
Kuhn’s account implies that there is no objective truth and that science does not progress toward truth, which is not generally accepted by modern scientists.
According to Maxwell & Delaney, 1990: What is Realism?
There is an objective truth, and it is possible to discover it. However, lower-level sciences (e.g., chemistry) are not entirely adequate to explain everything in higher-level sciences (e.g., psychology).
According to Maxwell & Delaney, 1990: What is Validity?
Correctness of our proposition of how things work.
According to Maxwell & Delaney, 1990: What are the 4 broad types of validity?
1. Statistical Conclusion Validity 2. Internal Validity 3. Construct Validity 4. External Validity
According to Maxwell & Delaney, 1990: What is Statistical Conclusion Validity?
Correctness of our proposition regarding relationship between variables. Threats include low power because of small and/or diverse sample, unreliable measures, and violations of statistical assumptions
According to Maxwell & Delaney, 1990: What is Internal Validity?
Approximate truth regarding inferences about cause and effect relationships. Threats include attrition, confounding variables, selection bias, maturation, history, regression.
According to Maxwell & Delaney, 1990: What is Construct Validity?
Given there is a valid causal relationship, is the interpretation of the constructs involved correct? Threats include experimenter bias, treatment diffusion, and resentful demoralization.
According to Maxwell & Delaney, 1990: What is External Validity?
Can I generalize these findings across populations, settings, and time?
What is the main point of Pepinsky & Pepinsky?
Discusses the importance of observation and inference within clinical work
According to Pepinsky & Pepinsky, observation is
A statement of that which is given by immediate sensory experience
According to Pepinsky & Pepinsky, What are the purposes of observation?
- Serve as a basis for inference about client behavior
- Facilitate the restatement of inferences as meaningful
- Afford a means of verifying hypotheses
According to Pepinsky & Pepinsky, how does one expand one’s sample of client data?
Through direct observations of the client; through the counselor’s knowledge of clinical/experimental studies of behavior, of society and the client’s culture, and of the subcultures the client takes part in.
According to Pepinsky & Pepinsky, what is Direct observation?
focus on behavior of client, behavior of counselor, and their interaction.
According to Pepinsky & Pepinsky, what is Indirect observation?
The client’s social history, test scores, and anecdotal reports from individuals outside the agency.
According to Pepinsky & Pepinsky, what is inference?
tentative conclusion based on observational data. Inferences can be evaluated only if the observations that led the counselor to make the inference are made explicit.
According to Pepinsky & Pepinsky, What is a prediction?
A prediction may be derived from an inference (which is based on observations) and may be regarded as a hypothesis. A prognosis is a special kind of prediction.
What are the major points of Gelso 1979?
- The expectation that our grad programs produce people who exemplify the scientist-practitioner model is unrealistic, as many students entering these programs are ambivalent about research, and the counselor/research roles can be conflicting.
- There is a large amount of research on counseling supervision, but not research supervision. Many programs are lacking in encouraging students to be researchers.
What are the major points of Gelso & Lent, 2000?
- The research training environment affects research self-efficacy (RSE), which affects research productivity.
- Personality (investigative) is positively related to RSE + positive research outcome expectations. Generally, gender does not have an effect. As graduate students go through their programs, they gain more RSE.
- Negative attitudes toward research can be improved by teaching students that all studies are flawed, teaching varied approaches to research, encouraging students to look inward for research ideas, showing students that science and practice are wedded, and making stats instruction relevant to research.
What are the major points of Gelso, 2003?
*Students’ research attitudes and productivity are positively affected when: faculty model appropriate scientific behavior and attitudes; scientific behavior is reinforced both formally and informally; students are involved in research early in their training in a non-threatening way; it is emphasized that all studies are flawed; varied approaches to research are taught; students are shown how science and practice are wedded; and research is made a partly social experience.
According to Kerlinger and Lee, what is the basic principle that underlies statistical tests?
Comparison of obtained results to chance expectations
According to Kerlinger and Lee, what are statistics/purpose of statistics (4 things)?
1) A measure calculated from a sample, representative of the population.
2) Aid in the study of population and samples.
3) Aid in decision making
4) Aid in making reliable inferences from observable data.
What is the effect of increasing sample size on variance?
The variance of the sample statistic decreases as it comes closer and closer to the “true” value in the population
What are the characteristics of the normal curve?
1. Unimodality 2. Symmetry 3. Mathematical properties
What are properties of Z scores?
Standard scores with a mean of 0 and an SD of 1; in a normal distribution nearly all z scores fall between about -3 and +3.
What percentage of scores fall within +/- 1 SD of the mean in a normal distribution?
~68%
What percentage of scores fall within +/- 2 SDs of the mean in a normal distribution?
~95%
What percentage of scores fall within +/- 3 SDs of the mean in a normal distribution?
~99.7%
How does one use the Normal Probability Curve of frequency data to interpret data?
Specify the probability that chance events will occur (e.g., the chance of a score falling outside ±2 SDs is about 5%; outside ±3 SDs, about 0.3%).
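These areas under the normal curve can be computed directly; a minimal sketch using the standard normal distribution from the Python standard library:

```python
# Areas under the standard normal curve, via statistics.NormalDist
# (Python 3.8+). cdf(x) gives the area to the left of x.
from statistics import NormalDist

z = NormalDist()  # standard normal: mean 0, SD 1

within_1sd = z.cdf(1) - z.cdf(-1)   # ~0.683 (the "~68%" card)
within_2sd = z.cdf(2) - z.cdf(-2)   # ~0.954 (the "~95%" card)
within_3sd = z.cdf(3) - z.cdf(-3)   # ~0.997 (the "~99.7%" card)

outside_2sd = 1 - within_2sd        # ~0.046, i.e. about 5%
outside_3sd = 1 - within_3sd        # ~0.003, i.e. about 0.3%
```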
How does one calculate the Standard Error of Means?
SD (population SD approximated by sample SD) divided by the square root of cases in the sample
What is the standard error of the mean?
The standard deviation of the sampling distribution of the mean (conceptually, the SD of an infinite number of sample means); only chance error makes the sample means fluctuate.
What does it mean for a model to be linear?
No terms have powers greater than 1.
What is the theoretical rationale of the t-ratio approach?
When we compare two means we want to ask do the means differ significantly? Or is the difference within the bounds of chance? (e.g., Does A differ from B beyond the difference expected by chance?)
What is the theoretical rationale of the Analysis of Variance Approach?
- A difference of two or more groups can be tested for significance.
- This method uses variances entirely, pitting two variances against one another.
- One variance (due to the independent, or experimental, variable) is pitted against a variance due to error or randomness
- Compare the “between group variance” with the “within group variance”
What is within group variance?
The variance of the scores within each group around their own group mean (error variance). (The variance of the group means is the between-group variance.)
What is standard variance of the mean?
The standard error of the mean, squared (i.e., the variance of the sampling distribution of the mean)
What is the F-Ratio?
- Between Group Variance / Within Group Variance
- Your result is compared to the F table. If it is higher than the critical value in the table, the result is statistically significant
How do F and T relate to one another?
- F= t^2
- t = √F
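The F = t² relation for two groups can be verified by hand; a sketch with made-up data, computing the pooled-variance t and the one-way ANOVA F side by side:

```python
# Verify F = t^2 for two groups: compute the pooled-variance t statistic
# and the one-way ANOVA F ratio from the same (made-up) data.
import statistics

g1 = [4.0, 5.0, 6.0, 5.5, 4.5]
g2 = [6.0, 7.0, 8.0, 7.5, 6.5]
n1, n2 = len(g1), len(g2)
m1, m2 = statistics.mean(g1), statistics.mean(g2)

# Pooled within-group variance, then the t ratio.
sp2 = (sum((x - m1) ** 2 for x in g1) +
       sum((x - m2) ** 2 for x in g2)) / (n1 + n2 - 2)
t = (m1 - m2) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

# One-way ANOVA: between-group variance / within-group variance.
grand = statistics.mean(g1 + g2)
ms_between = (n1 * (m1 - grand) ** 2 + n2 * (m2 - grand) ** 2) / (2 - 1)
ms_within = sp2
F = ms_between / ms_within   # equals t squared
```

For these data t = -4 and F = 16, so the identity holds exactly; with more than two groups only F applies.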
When does one use an F test over a t test?
- When comparing more than two groups F is used (t for two groups only).
What is the effect on mean of adding a constant to all variable scores?
Increases the mean by the constant
What is the effect on variance of adding a constant to all variable scores?
No effect on variance
What is the effect on between-group variance of adding a constant to the scores of one group only?
Increases (changes) the between-group variance; adding the same constant to all scores would leave it unchanged
What is the effect on within-group variance of adding a constant to all variable scores?
Does not affect within group variance
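These effects can be checked directly; a sketch with made-up group data and an arbitrary constant, covering both adding a constant to every score and adding it to one group only:

```python
# Effects of adding a constant (made-up data).
# Case 1: constant added to EVERY score -> mean shifts, variance unchanged.
# Case 2: constant added to ONE group only (like a treatment effect) ->
#         between-group variability grows, within-group variability doesn't.
import statistics

g1 = [3.0, 5.0, 7.0]
g2 = [4.0, 6.0, 8.0]
c = 10.0

# Case 1: shift all of g1's scores.
shifted = [x + c for x in g1]
mean_shift = statistics.mean(shifted) - statistics.mean(g1)          # == c
var_change = statistics.variance(shifted) - statistics.variance(g1)  # == 0

# Case 2: shift only g2's scores and compare the variance components.
def between_var(a, b):
    grand = statistics.mean(a + b)
    return (len(a) * (statistics.mean(a) - grand) ** 2 +
            len(b) * (statistics.mean(b) - grand) ** 2)

def within_ss(a, b):
    return (sum((x - statistics.mean(a)) ** 2 for x in a) +
            sum((x - statistics.mean(b)) ** 2 for x in b))

g2_shifted = [x + c for x in g2]
```

Comparing `between_var(g1, g2)` with `between_var(g1, g2_shifted)` shows the between-group component grows, while `within_ss` is identical before and after the shift.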
What is the relevance to the F test of not having independence of between group and within group variance?
F-test assumptions are violated
In calculating a one-way analysis of variance, SSt = (equation)
SSb + SSw
In calculating a one-way analysis of variance, what is SSt (in words)?
Total sum of squares
In calculating a one-way analysis of variance, what is SSb (in words)?
Between-groups sum of squares
In calculating a one-way analysis of variance, what is SSw (in words)?
Within-groups sum of squares
What does the significance of t or F tests tell us?
That it is likely that an effect exists (is unlikely to be due to error)
What does the significance of t or F tests not tell us?
Significance does not tell us magnitude of the effect/relation
What is factorial analysis of variance?
two or more independent variables vary independently or interact with each other to produce variation in a dependent variable
How would one represent the factorial analysis of variance in the GLM?
y = a0 + A + B + AB + e, where A and B are the main effects, AB is the interaction, and e is error
What is interaction?
The working together of two or more independent variables in their influence on the dependent variable.
The influence of one independent variable on a dependent variable depends on the level of another independent variable.
What is first order interaction?
2 independent variables
What are separate independent effects called?
Main effects
What is disordinal interaction?
A crisscross pattern of interaction in which the effect of one independent variable reverses direction across the levels of another independent variable
What are three possible causes of interaction?
1) “True” interaction, where variance is contributed by an interaction that “really” exists between two variables in their mutual effect on a third variable.
2) Error – a significant interaction occurring by chance.
3) An extraneous, unwanted interaction is operating on one level of an experiment but not at another.
What are Advantages and Virtues of Factorial Design and Analysis of Variance?
1) Enables researchers to manipulate and control two or more variables simultaneously. Can control other variables.
2) More precise than one-way analysis
3) Enables researchers to hypothesize interactions because the interactive effects can be directly tested.
When participants are assigned to the experimental group at random within a factorial analysis of variance design, the only estimate of chance variation is
within groups
What do Tracey and Glidden cite as the 3 major problems with current research?
1) Focusing on wrong question or questions
2) Poor Assumptive Specification
3) Viewing components of research as discrete
What recommendations do Tracey and Glidden give?
- Theory must drive the research design and analysis, including which tests to use.
- The shift should go from “How should I test this?” to “What do I wish to be able to say?”
- This transfers the research process into the realm of logical argument.
What are 3 crucial aspects for reasoned arguments endorsed by Tracey and Glidden?
1) Assumption set completeness
2) Assumption accuracy
3) Conclusions following logically from assumptions
What do Tracey and Glidden suggest asking regarding the complete specification of assumptions?
1) Why is this assumption relevant to the argument I am constructing?
2) Why do I believe this assumption is true or at least defensible?
According to Tracey and Glidden - The aspects of reasoned argument (completeness, accuracy, and logic) need to be manifested in each of the four basic components of research:
1) the theoretical underpinnings of phenomena
2) the research design
3) the measurement model
4) the analysis
What do Tracey and Glidden stress via “Lack of Integration of Component Research”?
The four components of research (substantive theory, design, measurement, analysis) should be viewed not as discrete units but as a part of an integrated whole.
Tracey and Glidden state that good theory contains
1) Relevant assumptions about empirical events
2) Rules for systematic interactions among constructs of theoretical interest (both of which influence decisions about appropriate design and analysis)
3) Operational definitions (guide choices about corresponding psychological measurements)
According to Tracey and Glidden - In setting measures, a researcher must commit to positions on 9 dimensions of a study’s underlying theory and constructs of interest:
1) Concreteness
2) Type of measure
3) Perspective
4) Realism
5) Level
6) Biases
7) Sensitivity
8) Idiographic vs. nomothetic
9) Dimensionality
In Tracey and Glidden’s 9 dimensions of underlying theory, what is meant by concreteness?
The degree to which the construct of interest is overt and easily grasped
In Tracey and Glidden’s 9 dimensions of underlying theory, what is meant by type of measure?
The underlying properties of the construct of interest
In Tracey and Glidden’s 9 dimensions of underlying theory, what is meant by perspective?
The focus that is most appropriate in construct assessment
In Tracey and Glidden’s 9 dimensions of underlying theory, what is meant by realism?
Where the measurement is to take place and focuses on the naturalness of the situations in which assessment occurs
In Tracey and Glidden’s 9 dimensions of underlying theory, what is meant by level?
The specificity of the measurement
In Tracey and Glidden’s 9 dimensions of underlying theory, what is meant by biases?
Any variance in the measures unrelated to the construct of concern
In Tracey and Glidden’s 9 dimensions of underlying theory, what is meant by sensitivity?
The specific construct regions where a measure is expected to differentiate individuals.
In Tracey and Glidden’s 9 dimensions of underlying theory, what is meant by idiographic vs. nomothetic?
The reference comparison desired: focus on the individual’s trait (idiographic) or on how the individual’s trait compares to others’ (nomothetic).
In Tracey and Glidden’s 9 dimensions of underlying theory, what is meant by dimensionality?
The clarity of definition of the construct of interest