Module 1 Flashcards
What four methods does Neuman (2011) outline that people use to acquire knowledge?
1) Personal experience and common sense
- seeing is believing
- common sources of bias: overgeneralisation, the halo effect, false consensus, premature closure, and selective observation
2) Authorities and experts
3) Media and peers
4) Ideological beliefs and values
What is premature closure?
knowledge through personal experience and common sense
premature closure is when we think we already have the answer, so we stop seeking information, stop asking questions, and feel we have no need to listen.
What is the halo effect?
knowledge through personal experience and common sense
we give a "halo", or positive reputation, to things and people we respect, e.g. expecting a report to be excellent because it comes from Harvard University rather than an unknown university. It also refers to judging someone's entire personality based on a single trait.
what is overgeneralisation?
knowledge through personal experience and common sense
when we have some believable evidence for a few cases but then assume it applies to every situation
what is selective observation?
knowledge through personal experience and common sense
similar to confirmation bias: selective observation reinforces pre-existing ways of thinking. We focus on particular cases that fit the preconceived ideas we already hold.
What are the five basic norms of the scientific community according to Neuman? (DOUCH)
Universalism, Organised Scepticism, Disinterestedness, Communalism, Honesty
what is false consensus?
knowledge through personal experience and common sense
a tendency to project one's way of thinking onto other people: we assume that others think like we do and overestimate how much our views match theirs
what is pseudoscience?
a body of ideas or information that seems like science but is not produced with the systematic rigour or standards of science
what is junk science?
a label used to discredit scientific research even when it has been conducted properly; it is usually applied when an advocacy group opposes a piece of scientific research
what is universalism?
norms of the scientific community
all research should be judged equally on its merit, regardless of who did it
what is organised scepticism?
norms of the scientific community
the process of scrutinising studies and their methods and approaches, but not the person behind them (which would be a kind of anti-universalism)
what is disinterestedness?
norms of the scientific community
the idea that scientists remain free from bias, are impartial and neutral and open to new ideas
what is communalism?
norms of the scientific community
the idea that scientific knowledge and the outcomes of studies are available to the public and shared without censorship
what is honesty?
norms of the scientific community
honesty is the most important. all research and reporting must be done honestly and without cheating
most quantitative data techniques are…
data condensers - when data are condensed you see the big picture
most qualitative data techniques are…
data enhancers - when data are enhanced you can see the aspects and nuances
How might the scientific community be described as concentric circles?
There are few researchers in the middle of the concentric circles who do the groundbreaking work.
Practitioners, clinicians and technicians are more numerous and lie in the outer circles, moving back and forth between the centre and the outer edges to gather new information to use in practice
What is the key factor of quantitative research? What is the key factor of qualitative research?
The key factor of quantitative research is reliability, while the key factor of qualitative research is authenticity.
What is a non-experimental design
Non-experimental: does not involve manipulation of an independent or dependent variable
what is a correlational design?
Correlational: Examining variables through interpretation of their association (no causality).
Examine Direction, Strength and Statistical Significance (all interrelated)
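A minimal sketch of how direction, strength and statistical significance all fall out of a single correlation test, assuming Python with scipy (the variables and data are hypothetical; the cards themselves use SPSS):

```python
# Sketch: direction (sign of r), strength (size of r) and significance (p-value)
# from one Pearson correlation. Variable names and data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
hours_studied = rng.normal(10, 2, size=50)
exam_score = 50 + 2 * hours_studied + rng.normal(0, 5, size=50)

r, p = stats.pearsonr(hours_studied, exam_score)
print(f"direction: {'positive' if r > 0 else 'negative'}")
print(f"strength:  r = {r:.2f}")
print(f"significance: p = {p:.4f}")
```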
what is a quasi-experimental design?
Quasi-experimental: uses pre-existing or other non-randomly assigned groups or interventions, i.e. like a correlational study but with manipulation (no random assignment)
what is an experimental design?
Experimental: allows inference of causal relationships between variables. The experimenter actively manipulates the independent variable(s).
What is the difference between exploratory, descriptive and explanatory research?
Exploratory: primary purpose is to examine a little-understood area and develop preliminary ideas.
- Addresses "what" questions
- Formulates and focuses questions for future research
- Mostly uses qualitative data (surveys, interviews)
Descriptive: presents a picture of the specific details of a situation, social setting, or relationship.
- Focuses on "how" and "who" questions (e.g. how often, describing patterns)
- Uses most data-gathering techniques: surveys, field research, content analysis, and historical-comparative research
Explanatory/Experimental: explains why events occur and builds, elaborates, extends, or tests theory.
- Tests a (usually causal) hypothesis
Neuman (2011) reported 7 steps in the quantitative approach to research. What are they?
- select a topic
- focus the question
- design the study
- collect data
- analyse the data
- interpret the data
- inform others
When should parametric statistics be used? When should non-parametric stats be used?
Parametric tests are used only where a normal distribution can be assumed. The most widely used are the t-test (paired or unpaired), ANOVA (one-way non-repeated or repeated measures; two-way, three-way), linear regression and Pearson correlation.
Non-parametric tests are used when continuous data are not normally distributed or when dealing with discrete variables. The most widely used are the chi-squared test, Fisher's exact test, the Wilcoxon matched-pairs test, the Mann-Whitney U test, the Kruskal-Wallis test and Spearman rank correlation.
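A minimal sketch contrasting one parametric test with its non-parametric counterpart, assuming Python with scipy (the data are simulated; software choice is not part of the card):

```python
# Parametric (assumes normality): independent-samples t-test.
# Non-parametric counterpart (no normality assumption): Mann-Whitney U test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(100, 15, size=30)   # hypothetical scores
group_b = rng.normal(108, 15, size=30)

t_stat, t_p = stats.ttest_ind(group_a, group_b)      # parametric
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)   # non-parametric

print(f"t-test:       t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.4f}")
```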
What does the Katrina Simpson video suggest considering after you've developed a research question?
x
What does a null hypothesis predict? What does an alternative hypothesis predict? Which one do we test?
The null hypothesis predicts no relationship or difference between the variables; the alternative hypothesis predicts that a relationship or difference exists. Statistically we test the null hypothesis, looking for evidence strong enough to reject it in favour of the alternative.
What is a double-barrelled hypothesis? What are some of the problems it can create? How can it be made better?
A confusing and poorly designed hypothesis containing two IVs, so it is unclear whether one variable, the other, or both in combination produce the effect.
Use an interaction hypothesis instead.
What are crucial experiments?
An experiment designed to decide decisively between two rival hypotheses or theories about some matter, determining which one is superior.
How can you deal with Missing Data?
- List-wise deletion: delete the whole case (not a good option; it discards all of that case's other data)
- Mean substitution: replace the missing value with the mean of the item
- Common-point imputation: replace the missing value with the midpoint of the scale
  SPSS: Transform -> Recode into Same Variables -> Old and New Values -> select System- or User-Missing and enter the midpoint value
- Multiple imputation (best option): use regression to predict the missing response from the case's other data and group
  SPSS: Analyse -> Multiple Imputation -> Impute Missing Data Values. Options: add all IVs and DVs to the model, 1 imputation, output to a new data set. Constraints: set the minimum and maximum values of the scale, and set variables to "use as predictor" or "impute and use as predictor".
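For reference, a minimal sketch of the same ideas outside SPSS, assuming Python with pandas and scikit-learn (the items and 1-5 scale are hypothetical); scikit-learn's IterativeImputer stands in for the regression-based imputation step:

```python
# Sketch of mean substitution, common-point (midpoint) substitution and
# regression-based imputation. Data and scale are hypothetical.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer

df = pd.DataFrame({"item1": [3, 4, np.nan, 5, 2],
                   "item2": [2, np.nan, 3, 4, 3],
                   "item3": [5, 4, 4, np.nan, 1]})   # hypothetical 1-5 scale items

mean_filled = df.fillna(df.mean())    # mean substitution
midpoint_filled = df.fillna(3)        # common-point: midpoint of a 1-5 scale

imputer = IterativeImputer(min_value=1, max_value=5, random_state=0)  # regression-based
regression_filled = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
```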
What are the four possible reasons for outliers?
- Data Entry Errors (always screen for data entry errors before anything else)
- Not Specifying missing-value codes in SPSS (always screen missing values first)
- The outlier is not a member of the population you intended to sample
- The outlier is from the intended population but the distribution for the variable is more extreme than a normal distribution
When do we reject the null hypothesis?
If there is less than a 5% chance of a result as extreme as the sample result if the null hypothesis were true, then the null hypothesis is rejected. When this happens, the result is said to be statistically significant.
If the p-value is less than 0.05, we reject the null hypothesis that there’s no difference between the means and conclude that a significant difference does exist.
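A minimal sketch of this decision rule, assuming Python with scipy (the data are simulated; any software that reports a p-value works the same way):

```python
# Compare the p-value from a t-test against alpha = 0.05 to decide whether
# to reject the null hypothesis. Data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treatment = rng.normal(5.5, 1.0, size=40)
control = rng.normal(5.0, 1.0, size=40)

t_stat, p_value = stats.ttest_ind(treatment, control)
alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis (statistically significant)")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")
```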
What is the assumption of Normality?
what is skewness?
what is kurtosis?
Normality assumes that the random errors (residuals) are normally distributed throughout the data set
- particularly important when n < 200, before the central limit theorem sets in
- violations affect the Type I error rate (false positives)
skewness: the symmetry of the distribution
- positive skew = scores clustered at the low end (long tail to the right)
kurtosis: the peakedness of the distribution
- platykurtic = flat
- leptokurtic = tall and slender
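A minimal sketch of how these statistics look in practice, assuming Python with scipy (the data are simulated; note scipy reports excess kurtosis, where a normal distribution is 0):

```python
# Skewness and (excess) kurtosis of a deliberately skewed sample. Data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
scores = rng.exponential(scale=2.0, size=150)   # positively skewed sample

print(f"skewness: {stats.skew(scores):.2f}")      # > 0: clustered at the low end, long right tail
print(f"kurtosis: {stats.kurtosis(scores):.2f}")  # > 0 leptokurtic, < 0 platykurtic
```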
how can you test for normality?
It is recommended to use multiple methods of assessing normality.
Shapiro-Wilk test: H0 is that the distribution is normal. If the result is significant, the error distribution is not normal.
Split the groups: Data -> Select case -> if condition satisfied
Analyse -> Explore -> Group filter -> plots -> normality plots
Multivariate normality: no direct test in SPSS
If each DV is normal via Shapiro Wilk, generally overall is normal
Can run a multivariate normality test via Syntax if needed
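The same Shapiro-Wilk test outside SPSS, as a minimal sketch assuming Python with scipy (the residuals here are simulated):

```python
# Shapiro-Wilk test: H0 is that the data come from a normal distribution;
# a significant result (p < .05) indicates non-normality. Data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
residuals = rng.normal(0, 1, size=80)

w_stat, p_value = stats.shapiro(residuals)
print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {p_value:.3f}")
# Pair this with visual checks (histogram, Q-Q plot), since multiple methods are recommended.
```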
What is Homoscedasticity/homogeneity of variance?
homogeneity of variance is the equality of variances and is a common assumption across statistical analyses
Running a test without checking for equal variances can have a significant impact on your results and may even invalidate them completely. How much your results are affected depends on which test you use and how sensitive that test is to unequal variances. For example, while a fixed-factor ANOVA test with equal sample sizes is only affected a tiny amount, an ANOVA with unequal sample sizes might give you completely invalid results.
homoscedasticity means "having the same scatter".
The opposite is heteroscedasticity (“different scatter”), where points are at widely varying distances from the regression line.
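A minimal sketch of one common check for equal variances, Levene's test, assuming Python with scipy (the groups are simulated; the card itself does not name a specific test):

```python
# Levene's test for homogeneity of variance: a significant result (p < .05)
# suggests the equal-variance assumption is violated. Data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
group_a = rng.normal(50, 5, size=35)
group_b = rng.normal(50, 12, size=35)   # deliberately wider spread

stat, p_value = stats.levene(group_a, group_b)
print(f"Levene's test: W = {stat:.2f}, p = {p_value:.4f}")
```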
What are the assumptions of linear regression
- Linear relationship
- Multivariate normality
- No or little multicollinearity
- No auto-correlation
- Homoscedasticity
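A minimal sketch of quick checks for these assumptions, assuming Python with statsmodels and scipy (the data and variable names are hypothetical):

```python
# Fit an OLS regression and run common assumption checks:
# multicollinearity (VIF), autocorrelation (Durbin-Watson), residual normality (Shapiro-Wilk).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson
from scipy import stats

rng = np.random.default_rng(5)
df = pd.DataFrame({"x1": rng.normal(size=100), "x2": rng.normal(size=100)})
df["y"] = 1.5 * df["x1"] - 0.5 * df["x2"] + rng.normal(size=100)

X = sm.add_constant(df[["x1", "x2"]])
model = sm.OLS(df["y"], X).fit()

vif = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]  # VIF > 10 flags multicollinearity
dw = durbin_watson(model.resid)                 # values near 2 suggest no autocorrelation
w_stat, p_norm = stats.shapiro(model.resid)     # residual normality

print(f"VIFs: {vif}, Durbin-Watson: {dw:.2f}, Shapiro-Wilk p: {p_norm:.3f}")
# Linearity and homoscedasticity are usually checked visually with a residuals-vs-fitted plot.
```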
What are the 5 characteristics of causal hypotheses? (VCPLF - very cool people like food)
- they have at least 2 variables
- they express a causal or cause-effect relationship between the variables
- they can be expressed as a prediction or an expected future outcome
- they are logically linked to a research question and theory
- they are falsifiable; they are capable of being tested against empirical evidence
what is tautology?
an error in explanation in which the IV (causal factor) and the DV (result) are the same or restatements of one another, making an apparent causal relationship true by definition
what is teleology?
an error in explanation in which the causal relationship is empirically untestable, because the causal factor does not come earlier in time than the result, or because the causal factor is a vague, general force that cannot be empirically measured
what is ecological fallacy?
an error in explanation in which empirical data about associations found among large-scale units of analysis are greatly overgeneralised and treated as evidence for statements about relationships among smaller units
what is reductionism?
an error in explanation in which empirical data about associations found among small-scale units of analysis are overgeneralised and treated as evidence for statements about relationships among larger units
what is spuriousness?
an illusory causal relationship due to the effect of an unseen or hidden causal factor; the unseen factor has a causal impact on both the independent and dependent variables and produces the false impression that a relationship exists between them
Why do we never use the word “proof” in science? What are some better terms to use when explaining the meaning of empirical findings?
Proof does not exist in science; all scientific knowledge is tentative and provisional, and there is no such thing as a final answer.
Better wording when explaining empirical findings: the experimental evidence "supports", "suggests" or "is consistent with" a hypothesis.