Seminars Flashcards

1
Q

What is the difference between selective accessibility and scale distortion?

A

Selective accessibility - a change in anchor changes the entire mental representation of the target stimulus

Scale distortion - a change in anchor changes the scale used to describe the stimulus, but not how that stimulus is represented; important because it means that ostensibly objective scales can behave like subjective scales

2
Q

What are the three predictions from Frederick and Mochon’s scale distortion theory of anchoring?

A

Prediction 1 = when participants are given a scale value and asked to choose the stimulus that fits it, that value anchors their responses when they are asked to estimate further values

Prediction 2 = when participants provide a self-selected scale value for a stimulus, that value anchors their responses when they are asked to estimate further values

Prediction 3 = these scaling effects will only occur for values estimated on the same numerical scale, even when different scales pertain to the same category of information ie weight

3
Q

What did experiment 1 test, which prediction did it support and what were its key findings?
(Frederick and Mochon)

A

Prediction 1

Select the animal weighing closest to 1,000 lb from 15 choices ordered by weight; half first asked to estimate the weight of a wolf

Wolf group (small anchor) chose larger animals

Demonstrates the scaling effect

4
Q

What did experiment 2a test, which prediction did it support and what were its key findings?

A

Prediction 1

Select the food closest to 400 cal from 13 foods ordered by calories; half first asked to estimate the calories in an apple

Apple group (small anchor) picked more calorific items

Replication of experiment 1 = reliability + not dependent on scale type

5
Q

What did experiment 2b test, which prediction did it support and what were its key findings?

A

Prediction 1

Draw a line on a glass showing where 200 cal of Hershey’s syrup would reach; half first estimated the calories in a Hershey’s Kiss

Kiss condition (small calorie anchor) judged 200 cal to be a larger amount of syrup than controls did

Replication of 2a - reliability

6
Q

What did experiment 3a test, which prediction did it support and what were its key findings?

A

Prediction 2+3

Estimate the weight of a giraffe alone (lbs); or first estimate the weight of a raccoon, then the giraffe (lbs)

Raccoon anchor lowered subsequent giraffe estimates

No scale was shown first = self-anchored, closer to the classic anchoring paradigm

7
Q

What did experiment 3b test, which prediction did it support and what were its key findings?

A

Prediction 2+3

Estimate the weight of a giraffe (lbs); OR first estimate a blue whale in lbs, or a blue whale in tonnes

Blue whale in lbs affected giraffe estimates; blue whale in tonnes did not

Distortion is scale specific

8
Q

What did experiment 4 test, which prediction did it support and what were its key findings?

A

Prediction 2+3

Estimate 3 features of a giraffe: weight (lbs), height (ft), and weight relative to a grand piano (1 = piano much heavier, 7 = giraffe much heavier); half of participants first estimated a raccoon’s weight (lbs)

Giraffe judged lighter when participants had previously estimated the raccoon’s weight; no effects on the other characteristics

Distortion is scale specific; the unchanged judgements on the other metrics (giraffe height, comparative heaviness) show that the mental representation of the item itself remains the same

9
Q

What did experiment 5 test, which prediction did it support and what were its key findings?

A

Prediction 2+3

Estimate the calories in a strawberry or in a Domino’s pizza; then estimate, all concerning McDonald’s fries: a) calories, b) grams of fat, c) lbs lost if not eaten, d) number per serving, e) number of days a rat could survive on them

The anchor affected only the same-scale judgement (calories)

Replication of studies 3a/3b/4

10
Q

What do these results suggest about the selective accessibility theory?

A

One set of experiments does not nullify the theory; there is still a meaningful distinction between subjective and objective scales

Where subjective scales ie small and large are used, meaning can be interpreted contextually - either clearly ie a large mouse running up a small elephant’s trunk; or ambiguously ie lowering customer expectations about a product may increase satisfaction ratings, but it is unclear whether that reflects a change in the experience of the product or a mere scaling effect

Objective scales ie inches, pounds etc are classically thought to be immune to contextual effects, so any changes were attributed to changes in how the target stimulus is represented - this research challenges that view - anchoring effects = response language effects

Selective accessibility is still a compelling account for when respondents have a large pool of relevant knowledge to draw from

Judging non-scalable information may still be subject to anchoring ie viewing images of houses etc

11
Q

What are the practical implications for Frederick and Mochon’s findings?

A

Question the validity of all ‘objective’ numerical scales

Response scale effects on objective scales may have different behavioural consequences compared to subjective scales - objective scales have a meaningful zero point; context effects = judgement relativity of labels

Can explain a function of the mind in terms of language ie it’s language, not architecture

Useful to be aware of when making financial decisions

12
Q

What is the multiple comparisons problem in fMRI and why is it important?

A

An fMRI scan contains ~130,000 voxels; testing each one means at least one false positive is almost certain, so this needs accounting for (see the sketch below)
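
To make the arithmetic concrete, a minimal Python sketch (assuming, purely for illustration, that the ~130,000 voxels are independent tests - real voxels are spatially correlated, which is what methods like Gaussian Random Field Theory exploit):

```python
# Expected false positives under mass univariate testing:
# each voxel is treated as an independent test at an uncorrected threshold.
n_voxels = 130_000
alpha = 0.001  # uncorrected per-voxel threshold

expected_fp = n_voxels * alpha                 # ~130 voxels "active" by chance
p_at_least_one = 1 - (1 - alpha) ** n_voxels   # effectively 1.0

print(f"Expected false positives: {expected_fp:.0f}")
print(f"P(at least one false positive): {p_at_least_one:.4f}")
```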

13
Q

What methods of correcting for multiple comparisons are there?

A

Correction holds false positives to a standard rate

Family-wise error rate (FWER) ie Bonferroni correction = 5% chance of 1+ false positives in the whole data set (conservative - misses some true effects) OR Gaussian Random Field Theory

False discovery rate (FDR) = 5% of the significant results are expected to be false positives = less conservative than FWER, balances statistical power better (the two are contrasted in the sketch below)

Standard statistical thresholds ie p < 0.001 with low minimum cluster sizes ie k ≥ 8 are an ineffective control for multiple comparisons - sometimes too conservative, sometimes too liberal
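
A minimal numpy sketch contrasting the two corrections, with Benjamini-Hochberg FDR hand-rolled for illustration rather than taken from any particular fMRI package (the p-values are simulated):

```python
import numpy as np

def bonferroni(p_values, alpha=0.05):
    """FWER control: reject only if p < alpha / number of tests."""
    return p_values < alpha / len(p_values)

def fdr_bh(p_values, alpha=0.05):
    """Benjamini-Hochberg: reject the k smallest p-values, where k is the
    largest rank i satisfying p_(i) <= (i/m) * alpha."""
    m = len(p_values)
    order = np.argsort(p_values)
    below = p_values[order] <= (np.arange(1, m + 1) / m) * alpha
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

rng = np.random.default_rng(0)
# 10,000 null tests plus 100 genuine effects with very small p-values
p = np.concatenate([rng.uniform(size=10_000), rng.uniform(0, 1e-4, size=100)])
print("Bonferroni rejections:", bonferroni(p).sum())  # conservative, misses most
print("BH-FDR rejections:   ", fdr_bh(p).sum())       # recovers nearly all 100
```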

14
Q

What did the Salmon scanners find?

A

Used an uncorrected p value of <0.001 and found activation in several voxels in a dead salmon’s brain

When corrected for multiple comparisons (using FDR and FWER separately) - no activation

15
Q

What is the difference between multiple comparisons and a non-independence error?

A

Non-independence error = inflation of cluster-wise statistical estimates when the constituent voxels were selected using the same statistical measure ie the correlation value of a voxel cluster will be inflated if those voxels were originally selected because they showed a high correlation (see the sketch below)

Multiple comparisons = concerns the prevalence of false positives across the whole set of voxels tested at the first stage; applies not just to fMRI but to any data gathered over multiple tests
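
A toy simulation of the circularity: the voxel data below are pure noise, so any "effect" recovered from the selected voxels is guaranteed to be spurious (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 20, 5_000
behaviour = rng.standard_normal(n_subjects)
voxels = rng.standard_normal((n_subjects, n_voxels))  # pure noise

# Correlate every voxel with behaviour and select the "best" cluster.
r = np.array([np.corrcoef(voxels[:, v], behaviour)[0, 1]
              for v in range(n_voxels)])
top = np.argsort(r)[-10:]  # the 10 most correlated noise voxels

# Circular: re-test the correlation on the SAME data used for selection.
circular_r = np.corrcoef(voxels[:, top].mean(axis=1), behaviour)[0, 1]

# Independent: test the selected voxels on fresh data.
fresh = rng.standard_normal((n_subjects, n_voxels))
independent_r = np.corrcoef(fresh[:, top].mean(axis=1), behaviour)[0, 1]

print(f"circular r = {circular_r:.2f}")        # large, purely from selection
print(f"independent r = {independent_r:.2f}")  # near zero
```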

16
Q

What is the problem with low statistical power?

A

Reduced likelihood of finding a true positive

True positives that are found will have inflated effect sizes (see the sketch below)

Increased likelihood that a significant finding is a false positive

Absolute minimum recommendation for fMRI is 20 participants
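
The inflation point can be seen in a quick simulation (hypothetical numbers: a true effect of d = 0.3 with only 15 participants per group):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_d, n = 0.3, 15
observed_d = []
for _ in range(10_000):
    a = rng.standard_normal(n) + true_d  # group with a real, small effect
    b = rng.standard_normal(n)
    t, p = stats.ttest_ind(a, b)
    if p < 0.05 and t > 0:  # only "significant" studies get reported
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        observed_d.append((a.mean() - b.mean()) / pooled_sd)

print(f"true effect size: d = {true_d}")
print(f"mean d among significant results: {np.mean(observed_d):.2f}")
# Low power plus a significance filter makes reported effects look
# far larger than the true effect.
```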

17
Q

What solutions are there for low statistical power?

A

Caution in extrapolating from effect sizes in small samples - likely inflated

Sample size justified by a priori power analysis = choose the sample size needed for the study to be informative (usually larger than researchers can afford); see the sketch after this card’s answer

Learn from genetics - data sharing = key; meta-analysis = great

In necessarily smaller samples (ie patient groups) - collect more data per person and present at the individual level rather than the group level; more liberal statistical threshold (larger FDR?); restrict the search space to minimise noise; Bayesian methods - stabilise low-level estimates
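
A minimal a priori power calculation, assuming statsmodels is available; the expected effect size of d = 0.5 is a placeholder, not a recommendation:

```python
# A priori power analysis for an independent-samples design.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # expected Cohen's d (assumed here for illustration)
    alpha=0.05,       # significance level
    power=0.8,        # desired statistical power
)
print(f"~{n_per_group:.0f} participants per group")  # ~64 per group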

18
Q

What are some problems with flexibility and exploration in data analysis?

A

Research is now pushed to be hypothesis driven, with little allowance for exploration = nonsensical, as researchers then fail to develop decent hypotheses in the first place

Leads to HARKing = Hypothesising After Results are Known = hiding data-driven choices and overstating the actual evidence

The same applies to stats/processing packages - lots to choose from = no consistency

19
Q

How do you solve some problems with flexibility and exploration in data analysis?

A

Pre-registration of methods and analysis plans ie sample size, analysis tools, predicted outcomes, ROI/localiser strategies for analysis etc

Encourage peer validation of published exploratory results in order to confirm effects found downstream in the research process

20
Q

What are some problems with multiple comparisons?

A

People don’t correct for them… Mass univariate testing ie a separate hypothesis test applied to each voxel = inflates the false positive rate if not corrected for

People ‘shop around’ for statistical procedures that inflate their findings = p-hacking/selective reporting/inflation bias

21
Q

How do you address the problems with multiple comparisons?

A

Make sure people actually do it…

Report whole brain results (if available)

Justify any non-standard methods for correction

Share unthresholded statistics so they can be reanalysed/interpreted in meta-analyses

Non-parametric methods ie permutation tests = more accurate (see the sketch below)

Abandon univariate in favour of multivariate approaches = the whole brain as the measurement
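
A minimal label-shuffling permutation test for a two-sample comparison (fMRI packages apply the same idea at much larger scale; the data here are simulated):

```python
import numpy as np

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sample permutation test on the difference in means.
    No parametric assumptions: the null distribution is built by
    shuffling the group labels."""
    rng = np.random.default_rng(seed)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)  # two-sided p-value

rng = np.random.default_rng(3)
patients = rng.standard_normal(20) + 0.8  # simulated group difference
controls = rng.standard_normal(20)
print(f"p = {permutation_test(patients, controls):.4f}")
```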

22
Q

What are some problems with software?

A

Increasingly complex = increased likelihood of bugs

Some software is widely used/open source = a degree of standardisation; other code is written for specialised functions by non-professional developers = no quality checking

23
Q

How do we solve software errors?

A

Code reviews - check for errors that inflate results and minimise error rates

Don’t fall for the ‘not invented here’ fallacy ie assuming that if we didn’t make it, it’s no good, even when the purposes are the same - errors are more likely to be found in software with a large user base

Custom code should be submitted alongside published articles

24
Q

What are some problems with study reporting?

A

Some papers skimp on methods, including which multiple comparison correction techniques were used

Neuroimaging claims are often reported without statistical grounding

25
Q

How do we improve reporting?

A

Authors should follow accepted reporting standards, and journals should require them for publication

durrr…

26
Q

What are some problems with replication?

A

Focus is on the novelty of findings rather than replicability - studies that are replicated often don’t find the same results

27
Q

How can we improve replication?

A

Use data from consortia

The community needs to change its priorities - replication awards/grants given out

Findings with medical/political implications should be replicated before being published

28
Q

What are some general arguments to support neuroscience?

A

All of these processes have to be grounded somewhere - most likely the brain; even if we don’t have all the pieces, we still have something of significance

Nothing can be replicated exactly - superficial differences, even in apparent noise, might provide insight

The data we have stands independent of debates about the validity of the discipline