301 E Flashcards
Variable
Any factor or attribute that can assume two or more values
Qualitative variable
Represents properties that differ in type
Quantitative variable
Represents properties that differ in amount
Discrete variables
Between two adjacent values there are no intermediate values
Continuous variables
Intermediate values are possible
Situational variables
A characteristic that differs across environments and stimuli
Subject variable
A characteristic that differs across individuals
Hypothetical constructs
Underlying characteristics or processes that are not directly observed but instead inferred from measured behavior or outcomes
Mediator variable
A variable that provides a causal link in the sequence between the independent and dependent variables
Moderator variable
A factor that alters the strength or direction of the relation between the independent and dependent variables
Operational definition
Refers to defining a variable in terms of the procedures used to measure or manipulate it
Measurement
The process of systematically assigning values to represent attributes of organisms, objects, or events
Scale of measurement
Refers to rules for assigning scale values to measurements
Nominal scale
The scale values represent only qualitative differences in the attribute of interest
Ordinal scale
The different scale values represent relative differences in the amount of some attribute
Interval scale
Equal distances between adjacent values on the scale
Ratio scale
True zero point
Accuracy
Represents the degree to which the measure yields results that agree with a known standard
Systematic error
Also called bias
A consistent degree of error that occurs with each measurement
Reliability
Assessed by examining the consistency of measurement
Random measurement error
Random fluctuations that occur during measurement and cause the obtained scores to deviate from the true score
Test retest reliability
Determined by administering the same measure to the same participants on two or more occasions under the same conditions
Split half reliability
The items that compose a test are divided into two subsets and the correlation between them is determined
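A minimal Python sketch (not part of the original cards) of the split-half idea, using made-up item scores and an arbitrary odd/even split; the Spearman-Brown step is one common way to estimate full-test reliability from the half-test correlation.

```python
# Illustrative only: fabricated data, odd/even split chosen arbitrarily.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(50, 10))   # 50 participants x 10 test items

odd_half = scores[:, 0::2].sum(axis=1)       # items 1, 3, 5, ...
even_half = scores[:, 1::2].sum(axis=1)      # items 2, 4, 6, ...

r_halves = np.corrcoef(odd_half, even_half)[0, 1]    # correlation between the two subsets
spearman_brown = (2 * r_halves) / (1 + r_halves)     # estimate for the full-length test
print(f"half-to-half r = {r_halves:.2f}, corrected reliability = {spearman_brown:.2f}")
```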
Inter observer reliability
Represents the degree to which independent observers share agreement in observations
Validity
Can we truthfully infer that a measure actually does what it claims to do?
Face validity
Concerns the degree to which the items on a measure appear to be reasonable
Content validity
Represents the degree to which items on a measure adequately represent the entire range or set of items that could be included
Criterion validity
Addresses the relation between scores on a measure and a relevant outcome (criterion)
Construct validity
A measure truly assesses the construct that it claims to assess
Convergent validity
Scores on a measure should correlate highly with scores on other measures of same construct
Discriminant validity
Scores on a measure should not correlate too strongly with scores on measures of other constructs
Experimental control
Manipulate the independent variable
Choose the dependent variable
Regulate the environment
Confounding variable
A factor that covaries with the independent variable in such a way that we can no longer determine which one caused the change
Between subjects design
Different participants are assigned to each condition
Random assignment
Each participant has an equal probability of being assigned to any one of the conditions in the experiment
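A minimal Python sketch (not from the cards) of simple random assignment: every participant independently has the same chance of landing in any condition. Participant IDs and condition names are made up.

```python
# Illustrative only: each participant gets an equal chance of any condition.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]
conditions = ["treatment", "control"]

assignment = {p: random.choice(conditions) for p in participants}
print(assignment)
```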
Within subjects design
Each participant engages in every condition of the experiment one or more times
Counterbalancing
A procedure in which the order of conditions is varied so that no condition has an overall advantage
Experimental condition
Involves exposing participants to a treatment or an active level of the independent variable
Control condition
Participants do not receive treatment
Independent groups design
Participants are randomly assigned to various conditions
Block randomization
We conduct one round of all the conditions, then another, and so on, with the order of conditions randomized within each round
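A minimal Python sketch (assumed details, not from the cards) of block randomization: each round contains every condition exactly once, in a freshly shuffled order.

```python
import random

conditions = ["A", "B", "C"]
n_rounds = 4

schedule = []
for _ in range(n_rounds):
    block = conditions[:]      # one round containing every condition once
    random.shuffle(block)      # randomize the order within this round
    schedule.extend(block)

print(schedule)                # e.g. ['B', 'A', 'C', 'C', 'A', 'B', ...]
```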
Matching variable
A characteristic on which we match sets of individuals as closely as possible
Matched groups
Within each matched set, participants are randomly assigned to the various conditions
Progressive effects
Reflect changes in participants' responses that result from their cumulative exposure
Carryover effect
Responses to a condition are influenced by the conditions preceding it
All possible orders design
Every possible sequence of the conditions is used
Randomly selected orders design
From the entire set of all possible orders, a subset is randomly selected
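A minimal Python sketch (assumed details) of this design: enumerate every possible order of the conditions, then randomly pick a subset to actually use.

```python
import itertools
import random

conditions = ["A", "B", "C", "D"]
all_orders = list(itertools.permutations(conditions))   # 4! = 24 possible orders
selected = random.sample(all_orders, k=6)               # randomly choose 6 of them to run
for order in selected:
    print(order)
```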
Block randomization design
Every participant is exposed to multiple blocks, with each block containing a new random order of the conditions
Reverse counterbalancing
Each participant receives a random order of all conditions and then the same conditions again in reverse order
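A minimal Python sketch (assumed details) of reverse counterbalancing: present the conditions in one random order, then present them again in the reverse of that order.

```python
import random

conditions = ["A", "B", "C"]
first_pass = random.sample(conditions, k=len(conditions))   # random order of all conditions
sequence = first_pass + first_pass[::-1]                    # same conditions again, reversed
print(sequence)                                             # e.g. ['B', 'A', 'C', 'C', 'A', 'B']
```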
Mixed factorial design
A factorial design that involves at least one between-subjects and one within-subjects variable
Main effect
Occurs when an independent variable has an overall effect on the dependent variable
Interaction
The way one independent variable influences the dependent variable differs depending on the level of another independent variable
Simple main effects
Represent the effects of one independent variable at a particular level of another independent variable
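A minimal numeric sketch (fabricated cell means, Python) of how main effects, an interaction, and simple main effects can be read from a 2 x 2 table of condition means.

```python
import numpy as np

# Rows = levels of IV1, columns = levels of IV2 (made-up means).
cell_means = np.array([[4.0, 6.0],
                       [4.0, 10.0]])

main_effect_iv1 = cell_means.mean(axis=1)   # marginal means for IV1: [5., 7.]
main_effect_iv2 = cell_means.mean(axis=0)   # marginal means for IV2: [4., 8.]

# Simple main effect of IV2 at each level of IV1; unequal values signal an interaction.
simple_effects_iv2 = cell_means[:, 1] - cell_means[:, 0]   # [2., 6.]

print(main_effect_iv1, main_effect_iv2, simple_effects_iv2)
```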
Person x situation factorial design
An experiment that incorporates at least one subject variable and one manipulated situational variable
Quasi experiment
Has some features of an experiment but lacks full experimental control
One-group posttest-only design
A treatment occurs and afterward the dependent variable is measured once
Simple interrupted time series
The dependent variable is repeatedly measured at intervals before and after a treatment intervention
Selection interaction
The interaction of selection with another threat to internal validity
Posttest-only design with nonequivalent groups
Participants in one condition are exposed to the treatment and a nonequivalent group is not
Needs assessment
Determines whether there is a need for a social program and the general steps required to meet that need
Program theory and design assessment
Evaluates the rationale for why a program has been or will be designed in a particular way
Process evaluation
Determines whether a program has been implemented as intended
Outcome evaluation
Assesses a program's effectiveness
Contamination
Knowledge, services, or experiences intended for one group are received by another
Efficiency assessment
Weighs a program's benefits against its costs
Program diffusion
Implementing and maintaining effective programs in other settings
Program evaluation
Involves the use of research methods to assess the need for, design, implementation, and effectiveness of a social intervention
ABAB design
Also called a withdrawal design: a baseline phase (A) and treatment phase (B) are followed by withdrawal of the treatment (A) and its reintroduction (B)
Multiple-baseline design across behaviors
The same treatment is applied to two or more distinct behaviors of the same individual, and the switch from baseline to treatment periods is staggered across behaviors
Changing criterion design
An initial baseline phase is followed by treatment phases in which the performance criterion is changed in steps; each treatment phase lasts until responding at that criterion becomes stable