Final Deck 3 Flashcards
Contrast descriptive/inferential statistics
- Descriptive statistics describe a single variable/distribution
- Inferential statistics make inferences about a population based on a sample (uses 2 variables and compares them to each other)
Mean, median and mode use which type of statistics? (descriptive/inferential)
Descriptive
Standard Deviation uses which type of statistics? (descriptive/inferential)
Descriptive
Correlation uses which type of statistics? (descriptive/inferential)
Inferential
T-Test uses which type of statistics? (descriptive/inferential)
Inferential
ANOVA uses which type of statistics? (descriptive/inferential)
Inferential
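A minimal sketch (not part of the deck) illustrating the classification above, assuming Python with scipy is available; the score lists are made up. Mean and SD only describe the sample itself, while the t-test draws an inference beyond it.

```python
import statistics
from scipy import stats

scores_a = [98, 105, 110, 92, 101, 99]
scores_b = [112, 118, 109, 121, 115, 117]

# Descriptive: summarize one set of scores
print(statistics.mean(scores_a))   # mean of the sample
print(statistics.stdev(scores_a))  # sample standard deviation

# Inferential: is the difference between the two groups likely real,
# or just sampling noise?
t, p = stats.ttest_ind(scores_a, scores_b)
print(t, p)
```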
How do you use the mean and standard deviation of standard (deviation IQ) scores to determine normal limits?
On standardized tests, the SD is typically 15. So if the mean score is 100, the range of normal limits is between 85 and 115 (1 SD below/above the mean)
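A minimal worked sketch (not part of the deck) of the normal-limits arithmetic, assuming the usual deviation-IQ scale (mean = 100, SD = 15):

```python
mean, sd = 100, 15

lower = mean - 1 * sd   # 1 SD below the mean -> 85
upper = mean + 1 * sd   # 1 SD above the mean -> 115

print(f"Normal limits: {lower} to {upper}")  # Normal limits: 85 to 115
```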
Define correlation
The way in which two variables are related to each other. Correlation considers strength and direction of the relationship
Strength (as part of correlation)
- The consistency of the pattern (how close the dots are to the line)
- *the closer the dots are to the line, the stronger the correlation.
- *the farther the dots are from the line, the weaker the correlation
- an r value close to +1 or -1 means a stronger correlation
- an r value around ±0.3 means a moderate correlation
- an r value close to 0 means a weak or non-existent correlation
Direction (as part of correlation)
- Is r positive or negative?
- Positive r: both variables increase together or decrease together
- Negative r: as one variable increases, the other decreases (inverse relationship)
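A minimal sketch (not part of the deck) showing strength and direction with scipy's Pearson correlation; the hours/score data are hypothetical:

```python
from scipy import stats

hours_studied = [1, 2, 3, 4, 5, 6]        # hypothetical variable 1
test_score    = [55, 60, 68, 71, 80, 84]  # hypothetical variable 2

r, p = stats.pearsonr(hours_studied, test_score)
print(f"r = {r:.2f}")  # close to +1 -> strong positive: both variables increase together
```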
What are four disadvantages of a correlation test?
- Does not identify confounding variables
- Does not explain why the variables are related
- Affected by range of scores/outliers
- Not effective for curvilinear relationships
Describe a t-test
- a t-test can compare only 2 groups of scores on another variable. Ex: pre-test, give intervention, post-test → you are comparing group of scores 1 vs. group of scores 2 to see the effect of the other variable (the intervention)
Independent Sample T-Test
- A type of t-test that compares results from two unrelated samples
- Two different samples of participants tested under different conditions → Ex: comparing control/treatment groups
Related Sample T-test
- One sample tested under two separate conditions
- Comparing related groups → Ex: comparing the same participants' pre-test and post-test scores before and after an intervention
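A minimal sketch (not part of the deck) of both t-test types with scipy; all score lists are made up:

```python
from scipy import stats

# Independent-samples: two different groups tested under different conditions
control   = [70, 72, 68, 75, 71]
treatment = [78, 80, 77, 83, 79]
t_ind, p_ind = stats.ttest_ind(control, treatment)

# Related-samples (paired): the same group tested twice (pre vs. post)
pre  = [62, 65, 70, 59, 68]
post = [70, 72, 75, 66, 74]
t_rel, p_rel = stats.ttest_rel(pre, post)

print(f"independent: t = {t_ind:.2f}, p = {p_ind:.3f}")
print(f"paired:      t = {t_rel:.2f}, p = {p_rel:.3f}")
```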
Describe ANOVA
- testing to see if there is a difference among more than 2 groups of scores.
- Ex: pre-test, intervention, post-test, 6 months go by, follow-up test → three sets of scores (pre, post, follow-up) to compare.
What is the difference between a t-test and an ANOVA?
A t-test compares two groups only; an ANOVA compares 3 or more.
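A minimal sketch (not part of the deck) of a one-way ANOVA on three made-up groups of scores. Note that a pre/post/follow-up design on the same participants would call for a repeated-measures ANOVA; scipy's f_oneway assumes independent groups:

```python
from scipy import stats

group_a = [70, 72, 68, 75, 71]
group_b = [78, 80, 77, 83, 79]
group_c = [74, 76, 73, 77, 75]

# ANOVA handles 3+ groups in one test, unlike a t-test's two
f, p = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f:.2f}, p = {p:.3f}")
```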
What are non-parametric tests?
- A non-parametric test can analyze data that is ordinal or categorical, or quantitative data that does not conform to the assumptions of normality, linearity, and homogeneity of variance
- (every parametric test (e.g., t-test/ANOVA) has a non-parametric equivalent)
Name 4 non-parametric tests
- Mann-Whitney U test
- Wilcoxon signed-ranks test
- Kruskal-Wallis ANOVA by ranks
- Friedman two-way ANOVA by ranks
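A minimal sketch (not part of the deck) mapping the four tests to their scipy functions; the data are made up and only meant to show the calls:

```python
from scipy import stats

g1 = [3, 5, 4, 6, 2]
g2 = [4, 9, 6, 11, 10]
g3 = [5, 7, 8, 6, 9]

stats.mannwhitneyu(g1, g2)           # Mann-Whitney U (independent-samples t-test equivalent)
stats.wilcoxon(g1, g2)               # Wilcoxon signed-ranks (related-samples t-test equivalent)
stats.kruskal(g1, g2, g3)            # Kruskal-Wallis ANOVA by ranks (one-way ANOVA equivalent)
stats.friedmanchisquare(g1, g2, g3)  # Friedman two-way ANOVA by ranks (repeated-measures ANOVA equivalent)
```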
Define standardization
- the process of specifying standards for how a method of measurement is administered and scored so it is used the same way across settings.
- Standardization limits variability and increases the accuracy of our test scores.
- administering the test the same way everywhere keeps conditions consistent and increases the accuracy of scores
Define norm-referenced
- any assessment where scores are interpreted in comparison to a normed group.
- Most tests are age-normed.
What is the difference between standardized and norm-referenced
- Standardization focuses on consistency in the administration/scoring of an assessment (consistency/comparability of scores)
- Norm-referenced means comparing the test scores to a normative group (e.g., neurotypical children)
- *a lot of tests are standardized and norm referenced
Define reliability
- Consistency or repeatability of results obtained using a specific method of measurement
Define inter-rater reliability
- consistency between observers who are scoring the same test.
- Ex: if two administrators give the same test to the same child, there should be similar scores.
- There can be some subjectivity in scoring even when both observers are watching the same thing.
- *high inter-rater reliability means the test was administered and scored consistently
Define test-retest reliability
- consistency over time.
- Ex: testing each participant twice and making sure that they get about the same score each time.
- *participants should get around the same score each time they take the test
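A minimal sketch (not part of the deck): both inter-rater and test-retest reliability are often summarized with a correlation between two sets of scores (in practice, coefficients such as the ICC or Cohen's kappa are also common). The rater scores below are made up:

```python
from scipy import stats

rater_1 = [85, 90, 78, 95, 88]  # hypothetical scores from observer 1
rater_2 = [87, 89, 80, 94, 90]  # hypothetical scores from observer 2 (same test, same children)

r, _ = stats.pearsonr(rater_1, rater_2)
print(f"inter-rater reliability (r) = {r:.2f}")  # close to 1 -> consistent scoring
```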