Midterm 1 Flashcards
What is a hypothesis?
An assertion of one possible state of the phenomenon or relationship under investigation. In other words, a proposed explanation for a question you are asking.
When is a hypothesis falsifiable?
A scientific hypothesis is falsifiable when it is specific. A genuine test of a hypothesis is one that tries to refute it, not confirm it.
When is a hypothesis useless?
A hypothesis is essentially useless if it is consistent with every possible outcome. Rather, a hypothesis should be consistent with only a subset of possible empirical (observable) outcomes, and incompatible with others.
Why are testable hypotheses necessary?
Because science is ever-changing and self-correcting. The scientific status of a theory is based on its falsifiability, refutability, or testability.
What is the purpose of a hypothesis?
It helps us generate simple models of the physical world that allow us to predict phenomena, determine the causes of phenomena, explain phenomena, and control phenomena.
When is a hypothesis unfalsifiable?
- When no empirical evidence is obtainable
- When predictions are so vague that they can hardly fail
- When a hypothesis is upheld even though refuted by data, by introducing additional assumptions after the fact.
What is an operational definition?
A testable hypothesis must be operationally defined. An operational definition is a description of how a concept will be measured. Essentially, turning a concept into a quantity.
What is an example of an operational definition?
Happiness can be measured by how many times someone smiles in an hour, brain activity, or a self-report survey.
What is the purpose of an operational definition?
- They allow us to quantify and measure concepts.
- They ensure variables are measured consistently throughout the study.
- They allow us to communicate ideas to others.
What makes a good operational definition? VAAPORC
V - Validity (does your operational definition measure what it actually intends to measure?)
A - Absence of bias
A - Acceptance in the scientific community
P - Practicality (something easy to measure)
O - Objective
R - Reliability
C - Cost (it is cost-effective)
Reliability and bias refer to____?
Refers to the difference between the measure and the “true” value of that variable. This difference is referred to as systematic error.
What is bias?
Bias is the average error over many measurements.
What are the differences between hypotheses and predictions?
A hypothesis is framed as a statement, whereas a prediction is more related to the specific methodological details.
A hypothesis is often phrased in the present tense, whereas predictions are often in the future tense.
A hypothesis is derived from a broader theory, whereas a prediction is quite specific.
What are the two main ways to assess operational definitions?
Reliability and validity.
Details about reliability
- Operational definition has to be based on concrete, observable behaviours.
- It must facilitate consistency/precision across measurements.
As variation, random error, and noise decrease, ______ increases.
Reliability
Details about validity
- Must be based on relevant behaviour
- Facilitates the accuracy of measurements
As systematic error and bias decrease, ______ increases.
Validity
A measurement is ______?
the true score + measurement error
Measurement error is ______?
systematic error + random error
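As an illustrative sketch (the scenario and numbers are invented, not from the course), the two cards above can be simulated: each measurement is the true score plus a constant systematic error plus random noise, and averaging the errors over many measurements recovers the systematic component as bias.

```python
import random

random.seed(42)

TRUE_SCORE = 50.0       # the "true" value of the variable
SYSTEMATIC_ERROR = 2.0  # constant offset, e.g. a miscalibrated instrument

# measurement = true score + measurement error,
# where measurement error = systematic error + random error
measurements = [
    TRUE_SCORE + SYSTEMATIC_ERROR + random.gauss(0, 1.0)  # random error
    for _ in range(10_000)
]

# Bias is the average error over many measurements: the random error
# averages out toward zero, leaving roughly the systematic error.
bias = sum(m - TRUE_SCORE for m in measurements) / len(measurements)
print(round(bias, 1))
```

The random noise contributes to each individual measurement but not to the long-run average, which is why bias tracks only the systematic error.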
What factors contribute to measurement error?
- Precision of the operational definition (lack of detail, subjectivity, and specificity)
- Error as a result of the measurement device.
- Human error (level of training, expertise, and attention level)
The more specific the operational definition, the more ______.
Consistent the measurements.
What does the r value represent?
It represents the strength and direction of the correlation between two variables.
What are the r values?
r>0, positive correlation
r<0, negative correlation
r = 0, no correlation
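A minimal sketch of how r behaves, with made-up data; the pearson_r helper is written out here for illustration rather than taken from any library.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

hours_studied = [1, 2, 3, 4, 5]
exam_score = [52, 60, 68, 74, 81]  # rises with hours -> r > 0
errors_made = [9, 8, 6, 5, 3]      # falls with hours -> r < 0
unrelated = [4, 1, 3, 5, 2]        # no systematic pattern -> r near 0

r_pos = pearson_r(hours_studied, exam_score)
r_neg = pearson_r(hours_studied, errors_made)
r_zero = pearson_r(hours_studied, unrelated)
```

The sign of r follows the sign of the covariance: values that rise together give r > 0, values that move oppositely give r < 0, and no consistent pattern gives r near 0.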
What are the types of reliability measures?
Inter-rater reliability (consistency between different raters)
Test-retest reliability (giving the same test again at a later time and checking consistency)
Internal consistency reliability, which includes split-half reliability, Cronbach's alpha, and item-total correlation.
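Cronbach's alpha has a standard formula: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), where k is the number of items. A small sketch with hypothetical survey data (the item scores below are invented for illustration):

```python
def cronbach_alpha(items):
    """items: one list of scores per item, each covering all respondents."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        # Population variance; sample variance also works if used consistently.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per respondent, summed across items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 3-item survey answered by 5 respondents (scores 1-5)
item_scores = [
    [4, 5, 3, 5, 2],
    [4, 4, 3, 5, 1],
    [5, 5, 2, 4, 2],
]
alpha = cronbach_alpha(item_scores)  # ≈ 0.93, high internal consistency
```

Because the three items move up and down together across respondents, most of the total-score variance is shared rather than item-specific, and alpha comes out high.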
How come test re-test is sometimes difficult?
It can become biased: if someone takes the same test repeatedly, they may improve through practice, which inflates the apparent consistency.
What is construct validity?
How well a test or tool (hypothesis, operational definition, etc.) measures what it actually intends to measure.
What are the indicators of construct validity?
- Face validity
- Content Validity
- Predictive Validity
- Concurrent Validity
- Convergent Validity
- Discriminant Validity
What is face validity?
Does the test appear to measure what it intends to measure? As face validity is a subjective judgment, it’s often considered the weakest form of validity. However, it can be useful in the initial stages of developing a method.
What is content validity?
Content validity assesses whether a test is representative of all aspects of the construct. To produce valid results, the content of a test, survey or measurement method must cover all relevant parts of the subject it aims to measure. If some aspects are missing from the measurement (or if irrelevant aspects are included), the validity is threatened.
What is an example of content validity?
A mathematics teacher develops an end-of-semester algebra test for her class. The test should cover every form of algebra that was taught in the class. If some types of algebra are left out, then the results may not be an accurate indication of students’ understanding of the subject. Similarly, if she includes questions that are not related to algebra, the results are no longer a valid measure of algebra knowledge.
What is predictive validity?
This is the degree to which a test accurately predicts a criterion that will occur in the future. For example, a prediction may be made on the basis of a new intelligence test that high scorers at age 12 will be more likely to obtain university degrees several years later. If the prediction is borne out, then the test has predictive validity. “Can our measure predict something in the future?” Can this selection test predict performance on the job?
What is concurrent validity?
This is the degree to which a test corresponds to an external criterion that is known at the time and is already valid. If the new test is validated by a comparison with a currently existing criterion, we have concurrent validity.
- both tests measure the same construct
What is an example of concurrent validity?
For example, let’s say a group of nursing students take two final exams to assess their knowledge. One exam is a practical test and the second exam is a paper test. If the students who score well on the practical test also score well on the paper test, then concurrent validity has been demonstrated.
What is convergent validity?
Convergent validity is a supporting piece of evidence for construct validity. The underlying idea of convergent validity is that measures of related constructs should be highly correlated. For example, in order to test the convergent validity of a measure of self-esteem, a researcher may want to show that measures of similar constructs, such as self-worth, confidence, social skills, and self-appraisal, are also related to self-esteem.
- different methods of measuring the same construct, to see whether both are related
What is discriminant validity?
Discriminant validity tests whether concepts or measurements that are not supposed to be related are actually unrelated.
- the same method, measuring different constructs, should give scores that are NOT correlated.
Concurrent and predictive validity are based on the ______, whereas convergent and discriminant validity are based on ______.
- gold standard
- other measures
What is a variable?
An event, situation, behaviour, or characteristic; something that has a quality or quantity.
What is a quantitative variable?
A variable that measures a magnitude or quantity.
What are the types of quantitative variables?
- Interval - has equal intervals between values, but zero is not meaningful (0 does not mean “none”). Celsius or Fahrenheit is not a ratio variable because 0°C does not mean there is no temperature.
- Ratio - has a true, meaningful zero, such as elapsed time, where 0:00 means no time has passed. Weight, age, pulse rate, etc.
- Discrete variable - Variables that can only take on a finite number of values are called “discrete variables.” For example, you can only use whole numbers when describing your siblings. You can’t have HALF a sibling.
- Continuous variable - Variables that can take on an infinite number of possible values are called “continuous variables.” For example, height is continuous: it can take any value, such as 1.65 metres.
What is a categorical variable?
Variables that have different qualities (gender, colours, where you live etc).
What are the types of categorical variables?
Nominal - there is no obvious relationship between the levels
Ordinal - takes on an order (i.e., pain level on a scale of 1-5).
How can we distinguish between these variables?
You can usually take an average or use a subtraction test for a quantitative variable.
Quantitative variables can be discrete or continuous. We can use a midway test: take two values and average them. If that midpoint is a meaningful value, the variable is continuous; if not, it is discrete (e.g., 2.5 siblings is not meaningful, so number of siblings is discrete).
What is monotonic vs. non-monotonic?
Monotonic means the relationship always moves in a single direction (always increasing or always decreasing). Non-monotonic means the direction of the relationship changes at some point.
What are the key points of non-experimental research?
- No direct intervention
- Observational or correlational
- Both variables are measured
- You can record physiological responses, or observe behaviour.
- Examples include self-reports, or using existing records.
- Cannot determine causal relationships
What are the key points of experimental research?
- At least one variable is manipulated, the IV.
- One variable is measured, the DV.
- Can determine causal relationships
What is a spurious correlation?
Two variables that appear to be causally related when they are actually not; the apparent relationship arises by coincidence or through a third variable.
What is a confounding variable?
Variables that influence both the dependent and independent variable. The confound makes it hard to determine which variables are actually causing the effect.
What is the difference between a confounding variable and extraneous?
Extraneous variables are any factors present in the experiment but not being studied. Confounding variables are extraneous variables that are related to the independent variable and also affect the dependent variable.