test 1 key things Flashcards

1
Q

Core Characteristics of Science:

A
  1. Systematic Empiricism:
     - Systematic: rigorous application of the scientific method to collect
       valid and reliable data about our observations of the world, which keeps
       research open to criticism and peer review. A standardised scientific
       methodology reduces bias.
     - Empiricism: learning through the senses (experiences, including
       scientific tools) rather than from logic or authority.
  2. Addresses Empirical Questions:
     - Scientific research engages in hypothesis testing; questions must be
       falsifiable, about topics we can collect data on, ethical, not too broad
       or too narrow, and able to provide novel insights.
  3. Creates Public Knowledge:
     - Findings are made available to, communicated to, and open to critique by
       the public.
     - Other scientists review and replicate a study's findings to rule out
       alternative explanations, replicate the findings in other samples, and
       extend theory.

*The biggest difference between science and authority is science's acceptance
of criticism and challenging perspectives.
*It's not "what" you study but "how" you study it.
*Public trust in science has increased since the introduction of mathematical
modelling and the development of vaccines.

2
Q

How Do We Know What We Know?

Three types of knowledge:

A
  1. Experience:
     - Information obtained from our own life experiences.
     - It is unfalsifiable.
     - It is subjective; not everyone has the same experiences, can have the
       same experiences, or wants to have the experience.
     - Prone to memory biases (i.e., reconstruction of events).
     - Limited sampling.
  2. Intuition:
     - Unspoken understanding and assumptions about how the world works.
     - It is implicit, occurring outside of our conscious awareness, so it is
       not guided by our existing knowledge or open to criticism; prone to
       biases (e.g., myths).
     - It is unfalsifiable.
     - It is subjective.
  3. Authority:
     - Information obtained from an authority figure/source: teacher, textbook,
       parents, school, political leaders, spiritual leaders, etc.
     - Is often accepted without consideration of its validity.
     - The "truth" can be wielded to the authority figure's will and the biases
       they may hold.
     - Lacks falsifiability.
     - May be biased.
3
Q

theory, hypothesis, prediction

A
 • Theory:
   o A possible explanation of a broad range of phenomena.
 • Hypothesis:
   o A more specific explanation of how something might work.
   o About the construct of interest.
 • Prediction:
   o What will happen in my study if my hypothesis is right.
   o A specific prediction about the relationship between two or more
     variables.
   o Uses operationalized variables, which indirectly measure the construct of
     interest.

*This is the cycle of deductive reasoning used in experimental designs, which
builds on previous knowledge.

4
Q

Two Ways of Knowing:

*both valid approaches

A

 • Inductive Reasoning:
   o Building a theory/hypothesis from patterns we see in observational data.

 • Deductive Reasoning:
   o Testing our theory/hypothesis by checking whether the predicted patterns
     appear in observational data.

5
Q

Three Types of Scientific Claims:

A
  1. Describe:
    - Frequency claim, typically about one variable at a time.
    - Descriptions of how many, how much, how often, and what kinds; can be
      either quantitative or qualitative:
    o Qualitative:
      Coding, thematic analysis, ethnography, or diaries are common techniques
      used in naturalistic observational work.
    o Quantitative:
      Mean, median, mode, frequency, standard deviation, percentiles, and
      z-scores are quantitative tools used to describe the shape of the data
      (see the short example after this card).
  2. Predict:
    - Association claim about two or more variables.
    - A good way to validate your hypothesis that there is an association
      between the variables.
    - Variables are measured; they are not manipulated.
    - Individual/group differences with no manipulation, associations, or
      correlations can be used.
    - Correlation/Association ≠ causality.
  3. Explain:
    - Provides causal explanations of why or how things occur.
    - Manipulation of the independent variable (IV) to measure its effects on
      the dependent variable (DV).
    - This allows us to determine whether the IV causes changes in the DV, by
      controlling all other variables.
    - Claims about the direction of the relationship between variables.
Examples: how would we turn each type of claim into a hypothesis/prediction?
 o Frequency:
   - How many people are practicing mindfulness?
   - How many people are anxious?
 o Association:
   - Are people who practice mindfulness less anxious?
   - Are people who are mindful in their daily lives less anxious?
 o Causation:
   - Does the practice of mindfulness reduce anxiety?
   - How does mindfulness reduce anxiety?
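A minimal illustration of the quantitative descriptive tools listed under the
Describe claim. This assumes Python with numpy/scipy purely for demonstration
(the course itself works in Jamovi), and the anxiety scores are made up.

# Illustrative only: descriptive statistics for a (made-up) set of anxiety scores.
import numpy as np
from scipy import stats
from statistics import mode

anxiety = np.array([12, 15, 9, 22, 15, 18, 11, 15, 20, 14])

print("mean:", np.mean(anxiety))
print("median:", np.median(anxiety))
print("mode:", mode(anxiety.tolist()))
print("SD (sample, ddof=1):", np.std(anxiety, ddof=1))
print("25th/50th/75th percentiles:", np.percentile(anxiety, [25, 50, 75]))
print("z-scores:", stats.zscore(anxiety, ddof=1))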
6
Q

Use The Right Language:

A

Association:

  • Correlated with
  • Associated with
  • Related to
  • Relationships between
  • X predicts Y (predicts ≠ causes)
  • Group differences without a manipulation

Causal:

  • Causes/makes/affects
  • Increases/decreases
  • Heightens/reduces
  • Attenuates/potentiates
  • Eliminates (a bold claim)
  • Directional language
  • Action verbs
7
Q

three conditions for causality

A

Three conditions for inferring causation:
1. An association between the variables (e.g., mindfulness and anxiety);
   correlation is bidirectional.
2. Temporal precedence (the cause comes before the effect): the IV causes
   changes in the DV.
3. Control of alternative explanations (confounds): keep extraneous variables
   constant and manipulate the IV!

8
Q

Experiments:

All experiments have 2 components (defining features).

A

Experiments:
All experiments have 2 components (defining features).

  1. Manipulation
    - of one or more independent variables (the cause) to determine its effect
      on a dependent variable.
  2. Control
    - of alternative explanations (extraneous variables).

*This allows us to fulfil the three conditions for inferring causation!

9
Q

Variables

A

Variables:
1. Independent variables (IV)
- Manipulated by the experimenter.
- The cause.
2. Dependent variable (DV)
- Measured by the experimenter.
- The effect.
3. Subject variables
- Pre-existing factors (age, gender, ethnicity, personality, etc.).
- NOT MANIPULATED.
4. Extraneous variables
- All the other variables that contribute to the DV.
- Examples of things that contribute to anxiety and will always be present in
  a study: gender, SES, genetics, environment, social media, physical health,
  temperament, trauma/stressful life events, etc.
- Random assignment allows these factors to be equal across conditions; the
  only difference between groups should be the level of the independent
  variable they experience.

*If a variable is not manipulated, we cannot infer cause!

§ Subject Variables:
- We might be interested in whether different types of participants are
  affected by the IV differently.
- BUT you can't manipulate these variables, so they are not independent
  variables, and you cannot make causal claims about them.
 o Demographic variables (gender, age, culture or ethnicity, income…)
 o Traits (personality, abilities…)
 o Disorders (depression, anxiety, eating disorders…)
 o Life experiences (travel, major, lifestyle choices…)

§ Extraneous Variables:
- All other possible variables that could contribute to variability in the DV.
§ Confounds:
- Extraneous variables that vary systematically with the IV.
- Confounds create alternative explanations for findings.
10
Q

Recap

A

§ We get rid of confounds by designing better conditions. This allows us to
  infer causation and gives our study stronger internal validity.
§ We can satisfy the three conditions of causality through manipulation and
  control. Manipulation helps us determine the direction of the relationship
  (temporal precedence), and control allows us to rule out alternative
  explanations or confounds.
§ All studies have limitations, and these are always reported at the end of a
  scientific article.
§ No study is perfect. There are trade-offs in every decision researchers make
  when designing and implementing an experiment, and these place limitations
  on the study.
§ Just because a study is flawed doesn't mean its findings are useless.
§ We cannot answer all questions in one study. There are multiple ways to
  operationalize IVs and DVs, and other methodological choices, for answering
  the same question, and each provides a useful piece of the puzzle.
§ You cannot draw a conclusion from one study; we need different studies that
  adopt different perspectives, methodologies, and flaws, all pointing to the
  same conclusion, before we can be confident the effect is present
  (converging operations/consensus).
§ It is more important to be aware of your study's flaws than to claim it has
  none. Some studies' methodologies are so flawed that their findings are not
  valuable to the scientific community (e.g., the lab example, or
  non-scientific studies run by businesses).
§ Psychology is valuable in any area or industry of life, for example in the
  critical analysis of findings and of non-scientific studies (i.e., in
  education or a ministry of health).

11
Q

Random Assignment:

A

§ All participants have an equal likelihood of being assigned to each
  experimental condition.
§ If groups are large enough, this controls for differences in all extraneous
  variables (they become equally distributed across conditions).
§ Assignment must be truly random. Failures of random assignment occur when:
 o People choose their own conditions.
 o People are assigned to a condition based on some
   preference/trait/behaviour.
 o There are systematic differences between groups.
§ Any differences between groups other than the IV are confounds.
§ What is the difference between an extraneous variable and a confound?
 o An extraneous variable is any other variable that could contribute to
   variability in the DV.
 o A confound is an extraneous variable that differs on average across the
   levels of the independent variable[s] (e.g., intelligence, if there is not
   an equal mix of high- and low-IQ participants in each condition).
§ Example of participants being assigned to a condition based on a
  preference/trait/behaviour (the executive monkey study):
 o Are executive monkeys more likely to develop stress-induced ulcers relative
   to employee monkeys?
 o Monkeys were trained to press a lever to stop minor electrical shocks from
   occurring in their cage. The researchers took the first six monkeys to
   learn the association and made them the executives. The rest were the
   employees, who were at the mercy of the executive pressing the lever at the
   correct time to stop the shock. The researchers concluded that executive
   monkeys, with their additional responsibility, experienced more stress and
   developed ulcers relative to employee monkeys.
 o We now know that powerlessness is a bigger cause of ulcers than
   power/responsibility is.
 o This was not random assignment, because monkeys were assigned to a group
   based on a subject variable. There was no manipulation, and no equal
   probability of being assigned to each condition.
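A minimal sketch of what "truly random assignment" means in practice, assuming
Python; the participant IDs, group sizes, and condition names are hypothetical,
not from the lecture.

# Random assignment sketch: chance alone decides each participant's condition,
# so every participant has an equal likelihood of ending up in either group.
import random

participants = [f"P{i:02d}" for i in range(1, 61)]  # 60 hypothetical participants
random.shuffle(participants)                         # order determined by chance

groups = {
    "mindfulness": participants[:30],  # first 30 after shuffling
    "control": participants[30:],      # remaining 30
}

# With large enough groups, extraneous/subject variables (age, IQ, temperament,
# ...) should end up roughly equally distributed across the two conditions.
print({name: len(members) for name, members in groups.items()})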

12
Q

Confounds:

A

§ Extraneous variables that vary systematically with the IV.
§ Confounds create alternative explanations for findings.
§ For example, differences between the mindfulness and waitlist-control
  conditions:
 o Expectations of improvement.
 o Time commitment.
 o Relationship with the group leader.
 o Relationships with other group members.
 o Relaxation.
§ What would be a better control group than a waitlist control group?
 o Mindfulness training vs mindfulness training without all those other
   components.
 o Mindfulness treatment vs an alternative treatment.
§ Different control groups address different confounds. As researchers, we
  need to decide which ones are most important to control so that the study
  has high internal validity and we can be confident that changes in the DV
  are caused by changes in the IV.
§ Gina's example:
 o Use an "active control group", i.e., an experiment where there are two
   treatment conditions.
 o For example:
   - Mindfulness vs a home improvement class (with the same structure and
     social aspects, but no mindfulness training).
   - Mindfulness vs CBT (an alternative treatment).

13
Q

Factorials: why add another variable?

A

Why add another variable?
1. To save time: assess two or more potential causes at once (relative to
   running multiple studies).
2. To refine a theory (because "it depends…": to identify which situational
   factors the effect of the IV depends on).
3. To rule out confounds.
4. To increase external validity (extend to other populations, stimuli,
   situations…; good for quasi-experimental designs).

14
Q

Summary

A

Summary:

Experimental Designs
1. Post-test only
2. Pre-test/Post-test
3. Matched Pairs
4. Within-Subjects
5. Factorial Designs

Independent variables
• Manipulated
• One or more IVs
• Each with 2 or more levels
• Manipulated between- or within-subjects
• Usually categorical, but might reflect an underlying continuous variable

Dependent variables
• Measured
• Can be categorical or continuous
• Can include more than 1 DV in a study

15
Q

calculate t

A

1) Calculate the test statistic (t for t-tests)
• Test statistic = group difference divided by the standard error.
• This is a ratio: variability between groups explained by the IV / natural
  variability in the sample (sampling error, i.e., how far a sample mean tends
  to fall from the population mean, and so how confident we are that it is
  close to the true mean).
• Example: 28 / 128
• t = 0.218

Q: How often would I get a t this big or bigger if the null hypothesis is
true?
• This is answered by comparing t = 0.218 with the sampling distribution.

2) Compare it to the sampling distribution
• The sampling distribution tells us how likely it is to get a t-score this
  big (or bigger) if the null hypothesis is true, i.e., when sampling randomly
  from a single population.
• The sampling distribution shows how often I would get different values of a
  test statistic (e.g., t) by randomly sampling from a single population.
• The shape of the sampling distribution depends on how big my sample is (for
  an independent t-test, degrees of freedom = N - 2).
• The larger the N, the closer to 0 the differences between groups should be
  if the null hypothesis is true.
• The tails of the sampling distribution form the rejection region, where it
  is unlikely that the test statistic was produced by the null hypothesis
  being true (i.e., by sampling error rather than the IV).
• A t-score close to 0 is more likely to reflect the null (sampling error) and
  not the IV; bigger test statistics, out in the tails, are better!
• The shape of the t distribution depends on the number of people in your
  sample. The more people, the more likely t will be close to 0 under the null
  (i.e., the more it looks like a normal distribution).
• The shape changes based on df, which in turn is determined by N and the
  number of groups (e.g., 20 - 2 = 18).

• Reject the null if, assuming the null hypothesis is true, a test statistic
  this size or greater would occur less than 5% of the time.
• Bidirectional (two-tailed): the 5% rejection region is split across two
  tails, 2.5% each side; you need a bigger t to land in the smaller tail,
  i.e., more evidence to reject.
• Directional (one-tailed): the whole 5% rejection region sits in one tail, so
  a smaller t is needed (less evidence); divide the two-tailed p-value by two
  to account for the bigger rejection region.
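A short sketch of the calculation above, assuming Python with scipy (the
course works in Jamovi); the 28 and 128 are the lecture's worked numbers,
while the sample size of 20 is a hypothetical stand-in.

# t = group difference / standard error, then compare t to its sampling
# distribution with df = N - 2 (independent-samples t-test).
from scipy import stats

group_difference = 28     # difference between the group means (from the card)
standard_error = 128      # standard error of that difference (from the card)
t = group_difference / standard_error
print(round(t, 3))        # 0.219, i.e. the card's t = 0.218 up to rounding

N = 20                    # hypothetical total sample size (e.g. 10 per group)
df = N - 2

# How often would a t this big or bigger occur if the null hypothesis is true?
p_two_tailed = 2 * stats.t.sf(abs(t), df)   # both tails (2.5% each at alpha = .05)
p_one_tailed = p_two_tailed / 2             # directional hypothesis: one tail
print(p_two_tailed, p_one_tailed)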
16
Q

P-value vs Cohen’s D:

A

§ P-value:
 o How confident I am that the null hypothesis did not produce these data. If
   p is less than 0.05, there is less than a 5% chance of getting a t this
   large if the result were due to the null hypothesis (sampling error) rather
   than the IV.
 o P-values do not tell me how big the effect is! A smaller p-value doesn't
   mean a bigger effect; the p-value speaks to our confidence that the effect
   is due to the IV (i.e., is the effect statistically significant, not how
   big is the effect).
§ Cohen's d:
 o Effect size is given by Cohen's d = difference between means / pooled SD
   (not the SE, so it is not corrected for n).
 o How big is the effect of the IV on the DV?
 o The minus sign on d is arbitrary in Jamovi (it just reflects the order of
   the groups), so ignore it.
 o A directional hypothesis is a one-tailed hypothesis: divide the two-tailed
   p-value by two, e.g., .83 / 2 = .415.
 o P-value .415 (one-tailed); p-value .83 (two-tailed); neither is below .05,
   so the result is non-significant either way.
 o d = 0.098 is smaller than small (benchmarks: small .2, medium .5,
   large .8).
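A small worked sketch of Cohen's d as defined on this card (difference between
means divided by the pooled SD, not the SE), assuming Python with numpy; the
two groups of scores are hypothetical.

# Cohen's d = (mean1 - mean2) / pooled SD. Because the pooled SD (not the SE)
# is used, d is not shrunk by sample size the way a p-value is.
import numpy as np

group_a = np.array([14, 16, 15, 18, 17, 15, 16, 14, 15, 17])
group_b = np.array([15, 17, 14, 18, 16, 15, 17, 15, 16, 16])

mean_diff = group_a.mean() - group_b.mean()
sd_a, sd_b = group_a.std(ddof=1), group_b.std(ddof=1)
n_a, n_b = len(group_a), len(group_b)

pooled_sd = np.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
d = mean_diff / pooled_sd

print(abs(d))  # the sign only reflects which group was listed first, so ignore it
               # benchmarks: ~0.2 small, ~0.5 medium, ~0.8 large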

17
Q

A smaller error term because of a within-subjects design means the test statistic will be…

A

bigger, and therefore more likely to be statistically significant (this does not mean the effect is big); the p-value will be smaller.
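A small illustration of this point, assuming Python with scipy and made-up
before/after scores: the same numbers analysed with a paired (within-subjects)
t-test versus an independent-samples t-test.

# The paired test removes variability due to individual differences, so its
# error term is smaller and its t is bigger, even though the mean difference
# (and hence the effect size) is unchanged.
import numpy as np
from scipy import stats

before = np.array([12, 15, 9, 22, 18, 11, 20, 14, 16, 13])
after = before - np.array([2, 1, 3, 2, 1, 2, 3, 1, 2, 2])  # small, consistent drop

t_within, p_within = stats.ttest_rel(before, after)    # within-subjects
t_between, p_between = stats.ttest_ind(before, after)  # treated as two groups

print(t_within, p_within)    # large t, small p
print(t_between, p_between)  # small t, large p: individual differences
                             # inflate the error term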

18
Q

Not the last word!

A

Not the last word!
• A single study never answers the question (its findings are specific to the
  participants and variables used; to answer the question we need replication;
  each study is a single piece of the puzzle).
• No study is perfect. There are always limitations due to the trade-offs
  involved in designing a study that is ethical and practical.
• Scientific literature is cumulative and self-correcting (it builds a body of
  literature; it self-corrects through replication that supports or
  contradicts existing theory; one study doesn't throw out a theory, but as
  more and more studies accumulate with similar findings, revisions to the
  theory are made; science is always progressing; studies support theory but
  do not PROVE anything).
• You do not get paid to publish, review, or edit research articles. It is
  simply part of your job as a scientist to contribute to the body of
  scientific literature.

19
Q

Two Types of Articles

A

Two Types of Articles
(A) Peer-reviewed scientific literature:

  1. Empirical Research
    • Reports the results of a research project (or several, if it is a
      combined paper with 3-4 studies testing the same specific hypothesis).
    • The standard format is
      Abstract/Introduction/Method/Results/Discussion.
  2. Review Articles
    • Combine the results of several (many) research studies to summarise them
      and draw conclusions.
    • Narrative review: more like a story, where the author takes bits and
      pieces from different studies to make a point about a psychological
      phenomenon.
    • Systematic review: the author has done very careful library work to
      ensure they have mentioned EVERY study that has been published on the
      specific question; nothing is left out; a comprehensive review.
    • Meta-analysis: combines data from quantitative studies on the specific
      question and analyses them together to draw a comprehensive conclusion
      about the effect in question.
    *Still peer-reviewed.
    *These provide commentary on how different studies have addressed the same
    question with different samples, methods, variables, and analyses, and
    comment on any consistent patterns found.

Is it any good?
Peer-Reviewed Journal:

§ Is it in a "good" peer-reviewed journal?
• Use journal rankings to know which journals have high standards and rigorous
  peer review.
• Watch out for predatory journals, which exist to make money: they charge
  authors to publish studies in their journal.
§ Are conflicts of interest declared?
• Funding? Especially in medical, college, or intervention/treatment research
  funded by a company that wants to sell the treatment if it is successful.
• Business interests? Financial interests in the thing they are writing about.
• These should be declared at the beginning or the end of the article.
§ Read critically yourself.
• What are the study's strengths and weaknesses? How generalisable are the
  findings? How does this fit in with other studies on the same topic?
  Consistency across studies increases our confidence that the findings are
  reliable and valid.

(B) Secondary Sources:
Written by people who have read the primary literature and synthesized it to
make it readable in a short period of time.

  1. Textbooks:
    o A researcher with knowledge of the field reviews the literature on the
      topic, synthesizes it, and communicates it in an easier-to-digest
      format.
  2. Trade Books:
    o Books written by researchers about their own work/experiences over the
      years.

Is it any good?
Text/Trade Books:

§ Is the author also a researcher? Read the book's bio to see if the writer is
  a researcher or works at a university.
§ Is the author selling something? Do they have financial interests, an
  agenda, or a lifestyle they're peddling?
§ Who recommends the book? Is it a valid source such as your lecturer, a
  fellow researcher, or a librarian, or are there multiple recommendations
  such as reviews?
§ Does it cite sources? Are there citations in footnotes or at the end to
  support that the claims are based on scientific literature?
§ Is it out of date? Newer, up-to-date research is preferable because science
  is cumulative and self-correcting, so it changes over time.
§ Read critically!

  (C) Media/Journalism Sources:
    o Journalism, social media, news outlets, blogs, etc.
    o Some of it is good, but some of it is very bad (sensationalism,
      misinformation, selling something, etc.).
    o The media likes to exaggerate and draw conclusions that are not
      supported by science. This conflicts with scientists' tendency to draw
      tentative conclusions that are fully supported by science.

Is it any good?
Media/Journalism Sources:

§ Is it a reputable science news source?
§ Does it cite the source of the research (authors, journal)? The best news
  sources provide links or citations so you can read the original article.
§ Does it get an opinion from an independent scientist (an expert in the
  field)? This acts as a small form of peer review.
§ Does it confuse correlation and causation? Watch the language: do they
  confuse correlation with causation, and is the type of scientific claim
  valid and supported?
§ Does it extrapolate from animals to humans? Or across cultures, or from
  babies to adults, etc.?
§ Read critically!

20
Q

SD/SEM

A

• SD: the variation of the data from the mean (more clustered around the mean
  = smaller SD).
• SEM is used because we are doing analysis on a sample of the population, not
  the true population. The SEM estimates how close the sample mean is likely
  to be to the true population mean.
• The larger the sample, the more confident we are that the sample mean will
  be close to the true mean. There will be variability in the mean obtained
  with every sample; how spread out are these means?
• SEM = SD / √(sample size); dividing by the square root of N scales the
  sample SD into a standard error for the mean.
• The smaller the SEM, the more confident we are that we are close to the true
  mean.

• As we increase N, the SD estimate becomes more accurate and the sample mean
  gets closer to the true mean.
• As we increase N, the SEM becomes smaller.
• The mean we get will always vary from sample to sample; we want it as close
  to the true mean as possible (roughly 30 per condition, or fewer if it's a
  within-subjects design).
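A tiny worked sketch of the SEM formula on this card (SEM = SD / √N), assuming
Python; the SD of 12 and the sample sizes are made-up numbers.

# SEM shrinks as N grows: the bigger the sample, the more confident we are
# that the sample mean sits close to the true population mean.
import math

sd = 12.0                    # hypothetical sample standard deviation
for n in (10, 30, 120):
    sem = sd / math.sqrt(n)
    print(n, round(sem, 2))  # 10 -> 3.79, 30 -> 2.19, 120 -> 1.1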

21
Q

Sampling Error:

A

Sampling Error:
• In the population, people's RTs should be normally distributed (some people
  will be very fast, some very slow, and most will be in the middle).
• If I randomly select 30 of these people and put them in the angry group, and
  randomly select another 30 and put them in the happy group, it is very
  likely that the two groups won't have EXACTLY the same mean RT.
• So, there is a difference between my groups before I've even done anything
  to them, EVEN WITH RANDOM ASSIGNMENT. This variability of the observed
  difference compared to the true difference is called SAMPLING ERROR (i.e.,
  the natural variation of the mean that occurs when sampling from the
  population).
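A minimal simulation of this idea, assuming Python with numpy; the RT
distribution (mean 500 ms, SD 80 ms) and the group size of 30 are made-up
numbers.

# Draw 60 RTs from ONE population, randomly split them into two groups of 30,
# and the group means still differ a little before any manipulation has
# happened: that gap is sampling error, not an effect of the IV.
import numpy as np

rng = np.random.default_rng(1)
rts = rng.normal(loc=500, scale=80, size=60)  # one normally distributed population

rng.shuffle(rts)
angry_group, happy_group = rts[:30], rts[30:]

print(angry_group.mean(), happy_group.mean())  # close, but not EXACTLY equal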