Hypotheses, Ethics & Variables Flashcards
Theory
An organised system for explaining certain phenomena and how they are related (e.g. Darwin’s theory of evolution)
Hypothesis
A specific and falsifiable prediction regarding the relationship between or among two or more variables. A brief, tentative statement about what the researcher expects to find. Theories are more complex and comprehensive than hypotheses
Deductive method
The process of using a theory (general) to generate specific ideas that can be tested through research
A good hypothesis is
Logical
Testable
Refutable
Positive
Evaluate this hypothesis
- The colour red is seen differently by males and females
Not testable or refutable.
The question concerns an internal, subjective experience that cannot be observed or measured.
Evaluate this hypothesis
- A list of three-syllable words is more difficult to memorise than a list of one-syllable words
Yes, testable and refutable
A good opportunity for students to develop a brief version of a research proposal.
Evaluate this hypothesis
- The incidence of paranoia is higher among people who claim to have been abducted by aliens than in the general population
Yes, testable and refutable.
Although the topic of UFOs and aliens may seem outside the realm of science, the task of measuring paranoia is perfectly acceptable.
Testable
It must be possible to observe and measure all of the variables involved
Refutable
It must be possible to obtain research results that are contrary to the prediction. That is, it must be capable of being demonstrated as false
Evaluate this hypothesis
If the force of gravity doubled over the next 50,000 years, there would be a trend toward the evolution of larger animals and plants that could withstand the higher gravity
Not testable or refutable.
The question concerns a hypothetical situation that does not exist and cannot be created.
Research Variables
The IV and the DV
Independent variable (Cause)
The factor that is controlled and manipulated by the researcher. The variable whose effect is being studied.
Correlational designs - Predictor variable
Dependent variable (Effect)
The factor that may change in response to manipulations of the independent variable. In Psychology it is usually a behaviour or mental process.
Correlational designs - Outcome variable
Ethics in research
General Principles of the APA Code of Ethics
- Beneficence and Non-maleficence (maximise benefit and avoid harm)
- Fidelity and responsibility
- Integrity
- Justice
- Respect for People’s Rights and Dignity
Reasons for ethics guidelines
Scientists sometimes engage in practices that may be questioned on ethical grounds
Welfare of the individuals
Balance between protecting participant rights, and the greater good that can come from research. Can be asked – what is the cost of not doing some research? E.g. drug trials
4 Basic Goals of Ethical Research
- No Harm
- Informed Consent
- Awareness (and mitigation) of the power differentials (avoid abuses of power)
- Honesty and transparency describing the research (minimal deception and debriefing)
Variable
A variable refers to any attribute that can assume different values (e.g. in different people, or within the same person at different times).
There are two types of variables:
- Manipulated variables, controlled by the experimenter
- Measured variables, observed by the experimenter
Experimental studies make use of both types of variables
• In experiments, we manipulate (either directly or indirectly) the values or levels of one or more variables & measure the effect(s) on one or more other variables. e.g., does a warm and encouraging teaching style foster more learning (better exam marks) than a cold and arrogant teaching style?
Observational studies make use of only measured variables
• If we cannot manipulate the values of any of the variables of interest, then we resort to measuring all of the variables.
An independent variable is one that the researcher directly manipulates. If the researcher cannot directly manipulate the independent variable, then s/he conducts a
quasi-experimental study
• Quasi-independent variables
Quasi-independent variables are variables that the researcher indirectly manipulates. E.g., the researcher can indirectly manipulate sex by gathering an equal number of male and female participants to receive each dosage of the antidepressant.
If the researcher cannot manipulate (directly or indirectly) either variable, then s/he conducts an
observational (correlational) study
E.g., are depressed people more or less likely to be smokers than non-depressed people? The IV (which is observed and not manipulated) is depression
Conceptual variables
Abstract ideas that form the basis of research designs and that are measured (e.g. parenting styles, self-esteem).
Measured variables
Numbers that represent conceptual variables and that can be used in data analysis
Operational definition
A precise statement of how a conceptual variable is measured or manipulated. • A procedure for indirectly measuring and defining a variable that cannot be observed or measured directly.
Measurement
The assignment of numbers to objects or events according to specific rules
Scales of measurement
• The numbers we assign to the objects or events can have different qualities
Nominal
Ordinal
Interval
Ratio
Nominal Scale
A measurement scale consisting of categories which are differentiated only by qualitative names. Categorical variables such as sex and marital status.
Ordinal Scale
A scale in which objects or individuals are categorised and the categories form a rank order along a continuum. Ordinal variables are ranked variables such as 1st place and 2nd place.
Interval scale
A scale in which the units of measurement (intervals) between the numbers on the scale are all equal in size. A measured variable in which equal changes in the measured variable are known to correspond to equal changes in the conceptual variable being measured such as temperature where there is no natural zero point (i.e. 0 degrees does not mean no temperature)
Ratio scale
In addition to order and equal units of measurement, there is an absolute zero that indicates an absence of the variable being measured, e.g. speed, length or BAC.
Two primary types of measures in psychology research
Self-report measures
Behavioural measures
Self-report measures
Individuals are asked to respond to questions about themselves
Issue: Do people know the answer? Are they truthful?
Behavioural measures
Individuals’ actions are observed
Free-format self-report measures
Measured variables in which respondents are asked to freely list their thoughts or feelings as they come to mind.
Such as Rorschach tests and word association
Difficulties with free-format
Coding
Inter-rater reliability
Takes a long time and generates lots of data
Fixed-format self-report measures
Measured variables in which the respondent indicates his or her thoughts or feelings by answering a structured set of questions.
Involves answering from a list of items or scales such as a Likert scale (strongly agree, agree, etc.).
Likert Scale
A fixed–format self–report scale that consists of a series of items that indicate agreement or disagreement with the issue that is to be measured, each with a set of responses on which the respondents indicate their opinions.
Acquiescent responding (yea-saying bias)
A problem with Likert scales. A form of reactivity in which people tend to agree with whatever questions they are asked. Can be solved with reverse scoring.
Reverse scoring
On some items (e.g. item 1), "strongly agree" indicates high self-esteem (SE)
On other items (e.g. item 3, marked with an asterisk), "strongly agree" indicates low SE, so scoring is reversed
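The reverse-scoring rule above can be sketched in Python. The item names, the 5-point scale, and the data are made up for illustration; only the flip rule (high becomes low and vice versa) comes from the flashcard.

```python
# Reverse scoring for a 5-point Likert scale (1 = strongly disagree,
# 5 = strongly agree). Item names and responses are hypothetical.
responses = {"item1": 5, "item2": 4, "item3": 1}
reverse_keyed = {"item3"}  # items where "strongly agree" indicates LOW self-esteem

def score(item, value, scale_min=1, scale_max=5):
    """Return the scored value, flipping reverse-keyed items (1<->5, 2<->4)."""
    if item in reverse_keyed:
        return scale_max + scale_min - value
    return value

total = sum(score(item, v) for item, v in responses.items())
print(total)  # 5 + 4 + (5 + 1 - 1) = 14
```

After reversal, a high total consistently means high self-esteem, which is what defeats acquiescent responding: a yea-sayer no longer scores uniformly high.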
Self-Report Measure Advantages
- Easy to construct
- Easy to administer
- Quick (ask lots of Q’s)
- Flexible (e.g. many different types of Q’s can be asked)
- Useful (because lots of Q’s asked, should produce useful data)
Self-report measure Disadvantages
- Assumes participants are willing and able to answer Qs
- Does not always correspond with what would be observed
Reactivity
Dishonest answers. Participants may lie in an effort to make themselves look better, to produce the desired result, or to appear socially desirable or acceptable.
How to control for reactivity
Administer a social desirability scale to measure individuals' tendencies to lie or self-promote. Use MMPI validity scales or a defensiveness scale to determine which participants' responses should not be trusted.
Behavioural Measures
Measured variables designed to directly measure an individual’s actions. Can overcome some self-report disadvantages
Behavioural Measures can be based on
- Frequency (e.g., stuttering in speech)
- Duration (e.g., number of mins worked on a task – measure ‘interest’)
- Intensity (e.g., loudness of hand claps)
- Latency (e.g., days to start project – procrastination)
- Speed (e.g., time taken to run maze – learning)
Nonreactive behavioural measures
• Behavioural measures that are designed to avoid reactivity because the respondent is not aware that the measurement is occurring, does not realise what the measure is designed to assess, or cannot change his or her responses.
Physical Measures
Psychophysiological measures
Brain Imaging (EEG, MRI, PET, CAT)
Psychophysiological measures (Heart rate, blood pressure, respiration etc)
Psychophysiological measures
Measures designed to assess the physiological functioning of the nervous or endocrine system. Even if the participant knows what you're interested in measuring, they can't change their response
Choosing a measure
• Self-Report
- Pro: Efficient
- Con: Reactivity
• Behavioural
- Pro: Reduces reactivity (if participant doesn’t know)
- Con: Difficult to operationalise
Random Error
Chance fluctuations in measurement that influence scores on measured variables. Can be due to not understanding items, how well a participant is feeling, or experimenter error in scoring. Inherently unpredictable and always present, but it can be minimised. Random error can obscure results, but it does not bias them.
Systematic Error
The influence of other conceptual variables on a measured variable that are not part of the conceptual variable of interest. Biased errors that can occur due to participants answering in a certain way due to anxiety or optimism. These variables systematically increase or decrease scores on the measured variable
Reliability and Validity
Techniques for evaluating the relationship between measured and conceptual variables
Correlation
Reflects the degree to which variables are related. The most common measure of correlation is the Pearson Product Moment Correlation (called Pearson’s correlation for short). Represented by r
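Pearson's r can be computed directly from its definition (covariance divided by the product of the standard deviations). The sample data below are made up for illustration:

```python
# Minimal sketch of Pearson's product-moment correlation.
from math import sqrt

def pearson_r(x, y):
    """Pearson's r: covariance of x and y over the product of their spreads."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: r near +1 means the variables rise together
hours_studied = [1, 2, 3, 4, 5]
exam_marks = [52, 58, 61, 70, 74]
print(round(pearson_r(hours_studied, exam_marks), 2))  # ≈ 0.99
```

r ranges from −1 (perfect negative relationship) through 0 (no relationship) to +1 (perfect positive relationship).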
Reliability
The extent to which a measured variable is free from random error. The reliability of a measurement procedure is the stability or consistency of the measurement. If the same individuals are measured under the same conditions, a reliable measurement procedure will produce identical (or nearly identical) measurements
True score and random error
True score
• “True” ability, “true” level, or the “thing” that we are trying to estimate with our measured variable
Random error
• Chance fluctuations in measurement that influence scores on measured variables.
Actual (Observed) Score = True Score + Random Error
Formula for reliability:
Reliability = True Score / Actual Score
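The Observed = True + Random Error model can be illustrated with a small simulation. The true score, error spread, and number of measurements are all assumed values chosen for the sketch:

```python
# Simulation of classical test theory: each observation is the true
# score plus zero-mean random error (all numbers are illustrative).
import random

random.seed(42)
true_score = 100

def measure(true_score, error_sd=5):
    """One measurement: the true score plus random (unsystematic) error."""
    return true_score + random.gauss(0, error_sd)

observations = [measure(true_score) for _ in range(10_000)]
mean_obs = sum(observations) / len(observations)
# Because random error is unbiased, the mean of many repeated
# measurements converges on the true score.
print(round(mean_obs, 1))
```

This is why random error obscures results without biasing them: any single observed score is off, but the errors cancel out over repeated measurements.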
Test-Retest Reliability
The extent to which scores on a measurement scale correlate with scores when measured on a second occasion. If the measure is perfectly reliable (no random error) and the conceptual variable doesn't change, then r = 1.00, but there is always some error, so r is always less than 1. The closer r is to 1, the stronger the test-retest reliability
Test-Retest Reliability Limitations
Reactivity that occurs when the responses on the second administration are influenced by respondents having been given the same or similar measures before. (trying not to replicate, self-promotion, memory)
Alternate-Forms Reliability
A form of test–retest reliability in which two different but equivalent versions of the same measure are given at different times and the correlation between the scores on the two versions is assessed.
Split-Half reliability
A measure of reliability that involves correlating the respondents’ scores on one half of the items with their scores on the other half of the items.
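Split-half reliability can be sketched by summing each respondent's odd-numbered and even-numbered items and correlating the two half-scores. The 6-item test data below are made up; the odd/even split is one common way of forming the halves:

```python
# Split-half reliability sketch: correlate odd-item totals with
# even-item totals across respondents (hypothetical 6-item test).
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

# Each row = one respondent's scores on items 1-6
scores = [
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 5, 5, 4, 5, 5],
]
odd_half = [sum(row[0::2]) for row in scores]   # items 1, 3, 5
even_half = [sum(row[1::2]) for row in scores]  # items 2, 4, 6
split_half_r = pearson_r(odd_half, even_half)
print(round(split_half_r, 2))
```

A high correlation between the halves indicates that the items are measuring the same thing consistently.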
Inter-rater Reliability
The consistency of the ratings made by a group of judges. Can be assessed with α or as a percentage agreement. Kappa (κ) is a statistic used to measure inter-rater reliability
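Cohen's kappa corrects raw percentage agreement for the agreement two raters would reach by chance. The rater labels below are hypothetical:

```python
# Cohen's kappa for two raters assigning categorical codes
# (ratings are made-up example data).
def cohens_kappa(rater1, rater2):
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    # Observed agreement: proportion of items both raters coded the same
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: from each rater's marginal category proportions
    p_e = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

r1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
r2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.58
```

Here the raters agree on 80% of items, but since chance alone would produce 52% agreement, κ is a more modest 0.58. κ = 1 means perfect agreement; κ = 0 means agreement no better than chance.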
Validity
A measure of the ‘truthfulness’ of a measuring instrument. Measurement procedure must accurately capture the variable that it is supposed to measure (construct validity)
Construct validity
• The extent to which a measured variable actually measures the conceptual variable that it is designed to assess.
Criterion Validity
An assessment of validity calculated through the correlation of a self-report measure with a behavioural (criterion) variable. That is, the extent to which a measuring instrument accurately predicts behaviour or ability in a given area
Concurrent Validity – present performance (e.g., pass/fail driving test)
Predictive Validity – future performance (e.g., VCE score predict Uni)
Face validity
The extent to which a measured variable appears to be an adequate measure of the conceptual variable.
Content validity
The degree to which a measured variable appears to have adequately sampled from the potential domain of topics that might relate to the conceptual variable of interest. Emphasises the importance of defining constructs.
Relationship Between Reliability and Validity
The validity of a measure is not the same as its reliability. A measure can be reliable but not valid; however, a measure cannot be valid without being reliable, so a measure cannot be more valid than it is reliable. Reliability is necessary but not sufficient for validity.