MRI lec 2 Flashcards

1
Q

what is subtraction logic?

A

An idea originally proposed by Donders (1868) to infer cognitive processes from reaction times.

Different tasks involve different cognitive processes. If you subtract the reaction time of one task from that of another, the difference estimates the duration of the process that is present in one task but not the other.
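A tiny worked example of the arithmetic (the reaction times are made-up illustration values, not Donders' data):

```python
# Hypothetical reaction times (ms) for two tasks -- illustration only.
rt_simple_detection = 300   # perceive stimulus + respond
rt_discrimination = 450     # perceive stimulus + discriminate + respond

# The tasks are assumed to differ by exactly one process (discrimination),
# so the RT difference estimates the duration of that single process.
discrimination_time = rt_discrimination - rt_simple_detection
print(f"Estimated duration of the discrimination stage: {discrimination_time} ms")  # 150 ms
```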

2
Q

What do we get with subtraction logic

A

An estimate of the duration of a cognitive process.

3
Q

How does fMRI apply a similar approach to subtraction logic (a behavioural thing)?

A
  • Add an experimental manipulation and compare activity with the manipulation to activity without it.
  • Conditions should differ only by the inclusion or exclusion of a single mental process, so any difference in activation (the BOLD response) can be linked to that single mental process.
4
Q

how can we control the mental operations carried out by n in the scanner?

A
  • manipulate the stimulus - best for automatic mental processes (e.g., stimulus colour)
  • manipulate the task - the instructions given to n for the same stimulus; best for controlled mental processes (a top-down element is involved)
  • do NOT manipulate both at the same time - change one and keep the other constant
5
Q

What confounds did Malach et al. (1995) take into account in their study?

A
  • familiarity of the object - whether participants can name it
  • eye movements - activation might just reflect eye movements
  • visual features - brightness, contrast, spatial frequency
  • attention - one condition might simply be more interesting, which is not related to the object itself
6
Q

How did Malach et al. (1995) take these confounds into account?

A

study 2 - familiarity

  • showed n objects that were unfamiliar sculptures
  • not nameable
  • no semantic meaning

study 3 - differences in low-level features

  • match the low-level features
  • how bright the object is
  • match the luminance values by pixel across stimuli
  • ensure the average is the same

study 4 - eye movements

  • fixation cross in middle
  • controlled for eye movements
7
Q

What area did Malach et al. (1995) discover?

A

They identified an area specific to object perception: the lateral occipital complex (LOC).

8
Q

1 issue of subtraction logic

A

The assumption that you can simply insert a single process into a task (pure insertion) is questionable.

  • Adding a manipulation - to the task or the stimulus - often also changes other processes.
  • This is called an interaction: the added process interacts with the others.
9
Q

What are some general confounds we see with fMRI especially with visual perception

A
  • differences in low-level features
  • attentional confounds
  • eye movement confounds
10
Q

how can you address eye movement confounds in fMRI?

A

Problem: one condition elicits more/different eye movements.

  • use a fixation cross, or measure eye movements with an fMRI-compatible eye tracker.
  • You can then check whether eye movements were similar across conditions,
  • or use the eye-tracking data as a confound predictor in your statistical model when you analyse the data.
11
Q

How can we address confounds from low-level features in fMRI?

A

simply scramble the pixels of the stimulus picture for the comparison condition (Kourtzi & Kanwisher, 2000)

12
Q

How can we address attention confounds in fMRI?

A

Problem: one condition is more difficult/interesting.

  • asking n to passively view stimuli is always problematic.
  • Instead, include a task, e.g. detect repetitions of stimuli.
  • This equates attentional demands across conditions.
13
Q

How can we address motor confounds in fMRI?

A

one condition requires more/different responses

  • keep motor commands comparable across conditions
14
Q

BIG confound: comparing process X to a rest condition

A

To showcase the problematic nature:

Example: reaching task vs. rest (rest = just a visual stimulus)

  • lots of different processes are involved in the task
  • compared to doing nothing
  • all of these are activated, but that doesn't mean they are all specific to the task
  • in papers you see claims like "this task activated the fronto-parietal-temporal network etc.", but this isn't meaningful
  • it is not meaningful to compare a task, even a simple one, to doing nothing - there is just too much going on
15
Q

Why are rest periods important in fMRI research?

A
  1. allows the hemodynamic response to return to baseline - useful for analysis, although not strictly necessary
  • BOLD responses summate linearly
  • so you could disentangle them anyway
  • but sometimes it is cleaner to have a period where nothing is on the screen, so the influx of fresh blood has happened and the signal returns to baseline
  2. nice for n to have a break
    * particularly good if you have a demanding task
  3. a rest condition helps you dissociate activation vs deactivation effects
    * default mode network
16
Q

What is the default mode network ?

A
  • brain network active when we do nothing
  • indicates the brain is never idle

DMN

  • it's a network that stretches along the midline of your brain
  • it really sits in the midline - above the corpus callosum (PPC, LP) - and stretches to the medial prefrontal cortex (Fox and Raichle, 2007)

DMN = deactivated when you do something - task < rest

Other areas = activated when you do a task - task > rest

17
Q

What's the big problem caused by the default mode network? Why is it vital that we include rest?

A

Imagine 2 conditions: A and B

  • condition A shows a stronger signal than condition B
  • we don't know whether this difference is because A is MORE activated than B
  • or because A is LESS deactivated than B

Including rest gives us an indication of baseline activity, and then we can disentangle these two possibilities.
18
Q

How can factorial designs handle the issue of confounds better than binary designs

A
  • study the comparison between A and B across different levels of another variable
  • an effect detected across multiple comparisons is very reassuring - the initial effect is unlikely to be driven by confounds
  • interaction effects are very helpful for developing theories
19
Q

Discuss a factorial design study and how it helped develop a theory.

A

Sugiura et al (2005)

  • 2 x 2 factorial design used
  • Compared places vs objects
  • Tested this comparison at two levels of familiarity:
  • familiar places (their own office) + familiar objects (their own bag)
  • unfamiliar places (a stranger's office) + unfamiliar objects (a stranger's bag)

Finding a consistent difference across both of these conditions makes the results quite reassuring.

20
Q

once you have defined conditions and set up your design, how do we present objects?

A

Different ways

Block design

  • Trials blocked into clusters of ONE TYPE of thing
  • e.g. block place pictures together, rest, then block face pictures together

event-related design

  • separate single trials or single events
  • place, rest, object, rest, object again, rest, place etc.
  • can be done rapidly – becomes “rapid ER design”
  • Mixed design - can put these together
  • blocks of random distribution of mixed stimuli

jittering

  • randomise the rest period in between events
  • i like that one
21
Q

Why should you use jittering?

A

There is a statistical reason for doing this:

  • the predictors for two conditions shouldn't be (anti-)correlated
  • correlated predictors are a problem for your statistical model - the GLM cannot cleanly attribute the BOLD signal to one condition or the other
  • it shouldn't be the case that condition A is always on exactly when condition B is off
  • the BOLD responses elicited by the two conditions should sometimes overlap and sometimes not

So it is advisable to vary (jitter) the duration of your rest periods; see the sketch below.
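A minimal simulation sketch of why jitter helps, using a simple gamma-shaped HRF and made-up timings (each event is modelled as a single impulse for simplicity): with a fixed rest period the two condition regressors end up strongly (anti-)correlated, and jittering the rest period reduces that correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
tr, n_scans = 1.0, 400                       # assumed TR of 1 s, 400 volumes
t = np.arange(0, 30, tr)
hrf = t**8.6 * np.exp(-t / 0.547)            # simple gamma-shaped HRF approximation
hrf /= hrf.sum()

def regressors(jitter):
    """Alternate A/B events; rest = 4 s fixed, or 2-6 s jittered."""
    a, b = np.zeros(n_scans), np.zeros(n_scans)
    onset, is_a = 0.0, True
    while onset < n_scans * tr - 30:
        (a if is_a else b)[int(onset / tr)] = 1
        rest = rng.uniform(2, 6) if jitter else 4.0
        onset += 2.0 + rest                  # 2 s event + rest period
        is_a = not is_a
    conv = lambda x: np.convolve(x, hrf)[:n_scans]
    return conv(a), conv(b)

for jitter in (False, True):
    a, b = regressors(jitter)
    r = np.corrcoef(a, b)[0, 1]
    print(f"jitter={jitter}: correlation between A and B regressors = {r:+.2f}")
```

Lower correlation between the regressors means the model can estimate each condition's contribution more reliably.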

22
Q

Design parameters

A
  • block design
  • event related design
23
Q

Describe 1 fMRI run: block design

A
  • have different blocks (e.g. of faces)
  • face block - 14s rest - house block - 14s rest
  • etc for about 10-20 blocks
  • rest - the fixation period between blocks is very important
24
Q

how many runs would we normally see in an fMRI experiment

A

5-10 runs

25
Q

Within the same run should you contain the two conditions you want to compare, OR should you have 1 run one condition (e.g., faces) and the other run the other condition (e.g., houses)

A

they should both be contained within the same run

the order of conditions per run should be randomised

26
Q

how long would a fMRI scanning session last on average

A

1 hour

27
Q

what other scan do we need as well as the fMRI scan from n

A

We need a structural scan of n too, to overlay the functional data on.

28
Q

why do we need both conditions per run?

A

There are always unspecific changes between runs

  • could be due to the scanner - changes in the magnetic field
  • changes in attention or fatigue in n
29
Q

event related designs

A
  • typical duration of a specific event: 100 ms - 2000 ms (2 s)
  • the time between events (inter-trial interval) is short - about 1-2 s
  • even though the BOLD responses overlap and build up, you can still disentangle them because they summate linearly
30
Q

what’s the smallest stimulus duration you can detect with fMRI?

A

Savoy et al., (1995)

  • Stimuli as short as 34ms can be detected with fMRI
  • but BOLD amplitude decreases with shorter stimulus durations
  • so it gets quite small when stimuli are that short
31
Q

which should we choose: event-related or block design?

A

well a good fMRI design should give us

Good detection power

  • finding voxels/regions
  • that change in response to experimental manipulation

Good estimation power

  • an accurate measure of the time course of an active voxel (time course of a process)
  • in response to an experimental manipulation
  • this is what we look at when constructing the event-related average plot (similar to ERP)
32
Q

Things to consider when running an fMRI study

A

TECHNICAL PARAMETERS - things related to setting up the scanning sequence:

  • How many slices
  • How much coverage of the brain you aim for - whole or parts - affects the duration of the scan
  • Duration of your scan
  • Slice thickness
  • Field of view, matrix size - x and y dimensions
  • TR (how often you send the 90-degree RF pulse), echo time, flip angle
  • Length of each run
  • Number of runs
  • Stimuli - duration and sequence of events within each run. Use block designs to detect "blobs" and event-related designs to estimate time courses
  • whether you need to counterbalance the orders within or between subjects
33
Q

what is the difference between a preprocessing stage and statistical analysis stage

A

pre-processing (much like in EEG)

  • aims to remove artefacts
  • e.g. head motion
  • about improving the quality of the data
  • also important to normalise the data spatially - bring brains into a common group space

statistical analysis

  • the second step, where you test your hypothesis/research question
34
Q

the issue of automated software?

A

With a few clicks you can produce some really nice images.

  • the problem is that you don't actually know what the software does
  • this can cause problems - you might be doing something systematically wrong
  • i.e. there is a systematic flaw/bias in the data
  • so you should really do some quality control and inspect the raw data first
  • you should understand what the software is doing - understand the different steps to ensure the pretty pictures reflect something meaningful
35
Q

what different artefacts might we come across in fMRI?

A

physical noise

  • you can blame the scanner/laws of physics for this one
  • it is noise intrinsic to the scanner, e.g. small differences in the static magnetic field
  • it can also come from external sources, e.g. hair clips - these produce massive artefacts in the data
  • these generally lead to distortions of the magnetic field
  • you can control for these artefacts by scanning a phantom - simply a water bottle. You put it into the scanner before the human subject, then look at the image of the phantom: any inhomogeneities you see tell you artefacts are present.

physiological noise

  • comes from n
  • the most critical source is head motion - other things - breathing/respiration, fatigue, heart rate - also affect fMRI data
  • these artefacts tend to show up at the rim of the head, or can actually displace the region of interest - you think you're looking at the amygdala but you're not, because there has been some head motion
  • it is difficult to tell apart activation that results from artefacts and activation that results from your experimental manipulation
  • artefacts often work against true effects - so you won't see the effects in your data anymore
36
Q

What can we do to contend with artefacts in fMRI - different pre-processing techniques

A

Every imaging software has tools implemented to combat noise:

  • Physical noise - you can do something called distortion correction
  • Slice scan-time correction - corrects for the fact that different slices are acquired at different time points
  • Remember that in fMRI each TR corresponds to one data point - but within a TR you still cycle through the head, so the slices are acquired at slightly different times even though you treat the whole thing as one data point. You need to correct for these small timing differences related to slice position
  • 3D motion correction - probably the most important pre-processing step - corrects for head movement
  • Temporal filtering - removes low-frequency drifts from the data that often result from physical (scanner) noise
  • You can also filter in the spatial domain with smoothing
  • Spatial normalisation - brains differ; to analyse data at the group level you need to normalise the data
37
Q

how many ways can we move our head?

A

The head can move in 6 different ways:

  • forwards and backwards (the body comes with it)
  • left to right
  • up and down

→ these are translation movements - moving the head from one fixed position to another

  • "Pitching" - rotating the head up and down
  • X rotation - rolling the head while facing front - imagine a ball rolling from one ear to the other
  • Yaw / y rotation - turning the head from left to right

→ these are rotation movements

38
Q

how do we correct for head motion in the scanner?

A

Algorithms are used to correct for this: 3D motion correction.

  • assumes a rigid body is being scanned - i.e. the head is stable and does not deform during the scan
  • a target volume is chosen - usually the first volume of the first run
  • the software then measures the displacement (in the 6 motion parameters) of each incoming volume and realigns it to the target volume
  • so it works backwards - it measures the displacement and undoes it when resampling the data; see the sketch below
  • despite all the fancy algorithms/software, often the most effective measure to control for motion is simply to ask n to keep still
  • external aids can also be used, e.g. head straps
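A minimal sketch of the rigid-body resampling idea (not a full motion-correction algorithm): given six motion parameters, build a rigid transform and resample a volume onto the target grid with scipy. In practice the software estimates the parameters by optimising an alignment cost; here they are just assumed illustration values.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.spatial.transform import Rotation

def rigid_resample(volume, translation_vox, rotation_deg):
    """Resample a 3D volume through a rigid-body transform defined by
    6 parameters: 3 translations (voxels) and 3 rotation angles (degrees).
    Motion correction applies the inverse of the estimated head displacement
    this way, realigning each volume to the target volume."""
    center = (np.array(volume.shape) - 1) / 2.0          # rotate about the volume centre
    R = Rotation.from_euler("xyz", rotation_deg, degrees=True).as_matrix()
    # affine_transform maps each output coordinate o to the input coordinate
    # R @ o + offset, so the offset encodes rotation-about-centre plus translation.
    offset = center - R @ center + np.asarray(translation_vox)
    return affine_transform(volume, R, offset=offset, order=1)

# Toy example: a 64 x 64 x 32 volume and an assumed displacement of
# 1 voxel along x plus a 2-degree rotation, estimated relative to the target.
vol = np.random.default_rng(1).random((64, 64, 32))
realigned = rigid_resample(vol, translation_vox=[1.0, 0.0, 0.0], rotation_deg=[2.0, 0.0, 0.0])
print(realigned.shape)   # (64, 64, 32) -- same grid as the target volume
```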
39
Q

what is 1 TR

A

the time between two consecutive 90-degree excitation pulses (the repetition time)

40
Q

why do we need pre-processing: Spatial normalisation

A

brains differ!

  • Differences in head size
  • Differences in the folds of the cortex

You need to bring these brains into a single common space if you want to derive any meaningful conclusions about a region, e.g. the amygdala.

41
Q

Approaches to spatially normalise brains: TAL

A

Talairach Transformation (Talairach and Tournoux 1988)

  • Atlas based on a single brain - that of an elderly alcoholic - controversial whether this was the best brain to use, but it is widely used
  • your n's brain is squeezed/squished to fit this standard brain
  • after this you can describe your structural/functional locations using a 3D coordinate system - x, y and z
  • this is only possible after you have transformed the data to a standard space
42
Q

Approaches to spatially normalise brains: MNI

A

Montreal Neurological Institute (MNI)

  • Template is based on the morphed average of hundreds of brains.
  • More logical than TAL
  • many different versions exist - e.g. for children

Advantage:

  • the average is based on lots of brains
  • the transformations use nonlinear warping → the algorithm behind it is more complex and leads to more reliable transformations; better alignment between subjects
45
Q

Two main statistical analysis approaches

A

One challenge with fMRI data is that you have data in 4 dimensions (x, y, z, but also time).

  1. Voxel based analysis
  2. Region of interest analysis
46
Q

Statistical analysis: Voxel based analysis

A
  • looks at whole brain
  • mainly used for detection - localising functions
  • identifying which areas are active during which tasks
  • relies often on simple block designs
  • typically uses a general linear model (regression approach)
47
Q

Statistical analysis: region of interest analysis

A
  • to come up with estimates of the time course of activation
  • typically based on event related designs
  • here we don't look so much at activation maps but at the % signal change across time
  • create similar plots as you would with ERP data
48
Q

Steps using a GLM to do statistical analysis of fMRI data

(step 1)

A
  1. based on the task protocol, define regressors for each condition (task predictors, e.g. faces)
  • at what point you present each stimulus
  • get the onset and offset time of each stimulus
  • then model an on/off boxcar function based on your prediction of how a specific voxel in a specific area would respond to the task
  • the brain is a physiological system - so you need to combine the on/off function with the hemodynamic response function (by convolution)
  • a theoretical function based on knowledge of the speed of the BOLD response

This reflects the theoretical assumption of how a certain voxel should behave (see image, and the sketch below).
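A minimal sketch of building one such predictor, assuming a TR of 2 s, made-up block onsets, and a simple single-gamma function standing in for the canonical hemodynamic response:

```python
import numpy as np

tr, n_scans = 2.0, 150                       # assumed scan parameters
t = np.arange(0, 30, tr)
hrf = t**8.6 * np.exp(-t / 0.547)            # simple single-gamma HRF stand-in
hrf /= hrf.sum()

# Boxcar: 1 while the condition (e.g. faces) is on screen, 0 otherwise.
# Onsets/durations in seconds are illustration values from the task protocol.
boxcar = np.zeros(n_scans)
for onset, duration in [(20, 16), (80, 16), (140, 16), (200, 16)]:
    boxcar[int(onset / tr): int((onset + duration) / tr)] = 1

# Convolve the on/off function with the HRF to get the predicted BOLD
# time course (the regressor entered into the GLM design matrix).
faces_regressor = np.convolve(boxcar, hrf)[:n_scans]
```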

49
Q

Steps using a GLM to do statistical analysis of fMRI data

(step 2)

A

So you have your theoretical model plot:

  • use this set of regressors to predict the actual BOLD signal time course of each voxel
  • if a voxel responds strongly, the predictor's amplitude is kept high; if it responds weakly, the height of the theoretical predictor function is scaled down
  • mathematically speaking, we adjust the regression coefficient (beta weight)
  • a large beta tells us a voxel responds strongly to a condition

Then do the data fitting - find the betas that fit your data best (see the sketch below).
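A minimal sketch of the fitting step: the design matrix holds the predictors (plus a constant column), and ordinary least squares finds the betas that scale each predictor to best fit a voxel's measured time course. The regressors and the voxel time course below are simulated stand-ins, not real data.

```python
import numpy as np

# Stand-ins for regressors built as in the previous sketch, plus a simulated
# voxel time course that genuinely depends on them (weights 2.0 and 0.5).
rng = np.random.default_rng(0)
n_scans = 150
faces_regressor = rng.random(n_scans)
houses_regressor = rng.random(n_scans)
voxel_timecourse = 2.0 * faces_regressor + 0.5 * houses_regressor + rng.normal(0, 0.3, n_scans)

# Design matrix: one column per condition plus a constant (baseline) column.
X = np.column_stack([faces_regressor, houses_regressor, np.ones(n_scans)])

# Ordinary least squares: the betas that minimise the squared residuals.
betas, _, _, _ = np.linalg.lstsq(X, voxel_timecourse, rcond=None)
print(betas)  # roughly [2.0, 0.5, baseline] -> a large beta = a strong response
```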

50
Q

Steps using a GLM to do statistical analysis of fMRI data

(step 3)

A

Now test whether the betas are statistically significant:

  • take all the betas obtained from the experimental condition, e.g. faces
  • and the betas obtained from the control condition
  • run a t test in each voxel
  • then colour-code the resulting statistical values (functional data) on the structural scan
51
Q

what is first and second level analysis in fMRI?

A
  • first-level analysis - run a GLM (per voxel) for each n separately
  • second-level analysis - one analysis per voxel for all subjects together → also called random-effects analysis; allows us to generalise from the sample to the population
52
Q

Describe how you can do a second-level analysis.

A

Lots of different single-subject analyses:

  • for each n, look at the regression coefficient of each voxel for the predictor variable, e.g. faces
  • each with its own theoretical response function

Combine these in your second-level analysis:

  • essentially you run a paired t-test (or, for a more complicated design, a repeated-measures ANOVA) on the n's betas
  • remember this is carried out for each voxel separately
  • so essentially it is one analysis per voxel for all subjects together

Then map the resulting statistical parameters, e.g. t (for a t test) or F (for an ANOVA), onto the structural data, colour-coding the height of the statistical values. The colour key only shows the significant voxels (above the significance level). A sketch follows below.
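A minimal sketch of the second-level step with simulated first-level output (one beta per subject per voxel for each condition); a paired t test per voxel gives the group-level t map. All sizes and values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 5000

# First-level output: one beta per subject per voxel for each condition.
betas_faces = rng.normal(0.2, 1.0, size=(n_subjects, n_voxels))
betas_houses = rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))

# Second-level (random-effects) analysis: one paired t test per voxel,
# treating subjects as the random factor so results generalise to the population.
t_map, p_map = stats.ttest_rel(betas_faces, betas_houses, axis=0)

# The t values of significant voxels would then be colour-coded on the
# structural scan (only voxels above the chosen significance level are shown).
print(f"max |t| across voxels: {np.abs(t_map).max():.2f}")
```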

53
Q

ROI analysis example?

A
  • aims to look at the time course of the BOLD signal within specific regions (you already need an idea of where things should happen)
  • first localise the area, then calculate the time course

Culham et al., 2003

Can we dissociate object recognition from object-based action? Does LOC only compute 2D images, or is it also involved in computing objects during grasping movements?

  • first they had to identify the region of interest - they presented a localiser with objects that were intact vs scrambled
  • using these conditions they were able to identify the LOC using a GLM contrast
  • THEN they did the ROI analysis - computed, within LOC, the time courses for their task of interest (grasping vs reaching)
  • by event-related averaging of the BOLD signals in LOC (a visual plot of the time course; see the sketch below)
  • they also extracted peak values from each n in each condition and then ran a paired t test to compare them
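A minimal sketch of the event-related averaging step, assuming you already have the mean BOLD time course of an ROI (e.g. LOC) and a list of event onsets; the TR, onsets and signal here are made-up illustration values.

```python
import numpy as np

tr = 2.0                                   # assumed repetition time (s)
rng = np.random.default_rng(0)
roi_timecourse = rng.normal(100, 1, 300)   # mean signal of the ROI voxels, 300 volumes
onsets_s = [20, 60, 100, 140, 180, 220]    # event onsets in seconds (illustrative)

window = int(16 / tr)                      # average 16 s of signal after each onset
epochs = []
for onset in onsets_s:
    start = int(onset / tr)
    baseline = roi_timecourse[start]       # signal at event onset
    # convert to % signal change relative to the event's own baseline
    epochs.append((roi_timecourse[start:start + window] - baseline) / baseline * 100)

event_related_average = np.mean(epochs, axis=0)   # the time-course plot (like an ERP)
peak = event_related_average.max()                # peak value, e.g. for a paired t test
print(event_related_average.round(2), peak)
```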
54
Q

Issues with fMRI research

A
  1. reverse inferences
  2. Multiple comparisons problem - Type 1 errors
  3. Increases in sample size – type 2 error
  4. Double dipping / voodoo correlations
  5. Flexibility of data analysis
55
Q

Issues with fMRI research: reverse inferences

A

Kosslyn (1999)

“Many of the studies didn't answer questions about the functioning of the brain. Rather, they can best be described as exploratory: people were asked to engage in a task while their brains were monitored, and then the activity was interpreted post hoc.”

highlights the problem of reverse inference.

Reverse inference = if one region lights up, you conclude a certain variable (behaviour, mental process) must be present.

Iacoboni et al., (2007)

  • showed pictures of politicians
  • found the amygdala was active
  • concluded n must have experienced voter anxiety
  • anxious because they thought these were terrible people

But the amygdala is also activated with anger, happiness or sexual excitement.

  • to link activation in a brain region with a certain behaviour/state
  • you need to ask n - assess behavioural responses, e.g. through physiological measures or self-report ("do you experience voter anxiety?")
  • then you can correlate the answers with brain activity
56
Q

What is forward inference?

A
  • if I present an image of a scary spider, the amygdala will light up
  • this gives a much more valid conclusion than a post hoc reverse inference
57
Q

Type 1 and type 2 error

A

type 1 error - false positive

  • the test says there is an effect
  • but there actually wasn't one
  • whenever you run multiple tests you risk getting false positives
  • a p value of .05 means: if a voxel were not truly active, there would be only a 5% chance of the test declaring it active

type 2 error - false negative

  • the test says there is no effect
  • but there actually was one
58
Q

Issues with fMRI: multiple comparisons - type 1 error

A

It is a big problem with fMRI:

  • you have as many statistical tests as you have voxels
  • if we have 32 slices with a matrix size of 64 x 64,
  • we have 131,072 voxels, and
  • thus we carry out a GLM and t test 131,072 times
  • with a p value of 5% you of course get a lot of false positives - some voxels will always come out "activated" by chance (see the simulation below)

The dead-salmon study (Bennett et al., 2009) highlights the issue with activation maps that are not corrected for multiple comparisons.

We could lower the p value to 1%, but this would be problematic because it increases the risk of a type 2 error - we risk missing a true effect.
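A minimal simulation of the problem, using the voxel count from the example above: even when there is no real effect in any voxel, thresholding uncorrected p values at .05 declares thousands of voxels "active" purely by chance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels = 32 * 64 * 64                 # 131,072 voxels, as in the example above
n_scans_per_condition = 20              # illustrative number of observations

# Pure noise: there is no true effect in any voxel.
cond_a = rng.normal(0, 1, size=(n_scans_per_condition, n_voxels))
cond_b = rng.normal(0, 1, size=(n_scans_per_condition, n_voxels))

_, p_vals = stats.ttest_rel(cond_a, cond_b, axis=0)
false_positives = (p_vals < 0.05).sum()
print(f"{false_positives} voxels 'active' by chance (expect about {int(0.05 * n_voxels)})")
```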

59
Q

Multiple comparisons problem: potential remedies

A
  • Bonferroni Correction
  • False discovery rate
  • Cluster correction
60
Q

Multiple comparisons problem: potential remedies

A

You could reduce the p value threshold to minimise the risk of type 1 errors. This is what the Bonferroni correction does.

  • Bonferroni correction (family-wise error, FWE, correction)
  • you divide the desired p value by the number of voxels/tests you have
  • e.g. you want a p value of 0.05 (the probability of a false positive is 5%)
  • then divide this by the number of scanned voxels - e.g. 131,072
  • this reduces the significance level by a lot
  • if a voxel still shows up as activated, you can be really certain that this is a true effect and not a false positive
  • general drawback: the threshold is very conservative (p < 0.0000003815)
  • one workaround for this problem is something called small-volume correction (see the sketch below)
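A quick sketch of the arithmetic for the two thresholds just described; the 20,000-voxel mask size in the small-volume case is an assumed illustration value.

```python
n_voxels = 32 * 64 * 64          # 131,072 tests, one per voxel
desired_alpha = 0.05

# Bonferroni / family-wise error correction: divide alpha by the number of tests.
bonferroni_threshold = desired_alpha / n_voxels
print(f"per-voxel threshold: p < {bonferroni_threshold:.10f}")   # ~0.0000003815

# Small-volume correction: same idea, but only count voxels inside a mask
# (e.g. cortex or a region of interest) -- here an assumed 20,000-voxel mask.
svc_threshold = desired_alpha / 20_000
print(f"small-volume threshold: p < {svc_threshold:.7f}")
```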
61
Q

What is small-volume correction and what problem does it work around?

A

The Bonferroni-corrected p value is tiny - very conservative.

  • small-volume correction:
  • do the same thing as the Bonferroni correction, but only for voxels within the brain, at the cortical surface, or in a region of interest
  • this reduces the number of voxels/tests and thus the severity of the Bonferroni correction
62
Q

Multiple comparisons problem: false discovery rate

A
  • uses q values instead of p values
  • q values control (set) the maximum proportion of activated voxels that are significant by chance at a given p value (see the sketch below)
  • again we end up with a subset of voxels declared active
  • advantage: less conservative than the Bonferroni correction - reduces the risk of type 2 errors
  • limitation: it is still fairly conservative, so you might want to try something else
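A minimal sketch of FDR control via the Benjamini-Hochberg procedure, written out by hand on a vector of voxel-wise p values rather than taken from any particular toolbox; the example p values are simulated.

```python
import numpy as np

def fdr_threshold(p_vals, q=0.05):
    """Benjamini-Hochberg: largest p value satisfying p_(k) <= (k/m) * q."""
    p_sorted = np.sort(p_vals)
    m = len(p_sorted)
    below = p_sorted <= (np.arange(1, m + 1) / m) * q
    return p_sorted[below].max() if below.any() else 0.0   # 0.0 -> nothing survives

# Example: mostly-null p values plus a handful of genuinely small ones.
rng = np.random.default_rng(0)
p_vals = np.concatenate([rng.uniform(0, 1, 10_000), rng.uniform(0, 1e-5, 50)])
thr = fdr_threshold(p_vals, q=0.05)
print(f"FDR threshold: {thr:.2e}; voxels surviving: {(p_vals <= thr).sum()}")
```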
63
Q

Multiple comparisons problem: cluster correction

A
  • The other methods assume each voxel is independent of the others - in reality, this isn't true!
  • in fMRI, neighbouring voxels are more likely to be significant together than non-adjacent voxels;
  • conversely, falsely activated voxels should be randomly dispersed
  • SO any clusters below a certain cluster size (k) are removed, as they are probably false positives (see the sketch below)
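A minimal sketch of cluster-size thresholding, assuming you already have a binary map of voxels that passed the voxel-wise threshold; scipy labels the connected clusters and clusters smaller than k are discarded. The map and k here are illustration values.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
# Binary map of supra-threshold voxels: sparse random noise plus one
# genuine 5 x 5 x 5 blob.
supra = rng.random((32, 64, 64)) < 0.001
supra[10:15, 20:25, 30:35] = True

labels, n_clusters = ndimage.label(supra)                # connected components
sizes = ndimage.sum(supra, labels, index=np.arange(1, n_clusters + 1))

k = 20                                                   # minimum cluster size
surviving = np.isin(labels, np.where(sizes >= k)[0] + 1)
print(f"{n_clusters} clusters before, {int(sizes.max())} voxels in the largest,"
      f" {surviving.sum()} voxels survive k >= {k}")
```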
64
Q

issues with fMRI research: Increases in sample size – type 2 error

A
  • Small sample sizes increase the risk of type 2 errors
  • you are more likely to miss a true effect
  • a small sample size also reduces the power of the study
  • scanning is really expensive, so you can understand why you can't always have a large sample size
  • you need a balance between a cost-effective scanning design and a well-powered study
  • over the years sample sizes have increased steadily (Poldrack et al., 2017) - studies are more able to detect smaller effects

65
Q

the larger the effect…

A

the easier it is to demonstrate, even with a small sample size.

Vice versa: the smaller the effect, the harder it is to demonstrate - you need a larger sample size.

66
Q

Issues with fMRI: double dipping/the problem of voodoo correlations. what did Vul et al (2009) discover?

A
  • Vul et al. (2009) identified a puzzle in the fMRI literature,
  • especially in social cognition/emotion/personality research:
  • they found very high correlations between behavioural measures and fMRI data!

But when they looked at the known reliability of behavioural/emotional/personality measures and of fMRI measures, they discovered something odd.

  • behavioural/emotional/personality measures usually have a re-test reliability of about .8
  • fMRI has an approximate reliability of about .7
  • using these numbers we can calculate the highest possible correlation between the two measures: even a perfect underlying relationship could only show up as roughly r = .74, given this measurement error
  • how, then, are fMRI studies reporting brain-behaviour correlations above .90 - studies that tend to be blasted all over the media for the very strong relationships found? (see the arithmetic check below)
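The upper bound comes from classical test theory: an observed correlation is attenuated by the measurement error of both variables, so the best you can do is roughly the square root of the product of the two reliabilities. A quick check with the numbers above:

```python
# Attenuation bound from classical test theory:
# an observed r cannot exceed sqrt(reliability_x * reliability_y).
reliability_behaviour = 0.8   # typical re-test reliability of personality/emotion measures
reliability_fmri = 0.7        # approximate reliability of fMRI measures

max_possible_r = (reliability_behaviour * reliability_fmri) ** 0.5
print(f"maximum attainable correlation: r = {max_possible_r:.2f}")
# ~0.75, in line with the ~.74 figure quoted above
```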
67
Q

Issues with fMRI: double dipping/the problem of voodoo correlations. How can we explain this?

A

Vul et al. (2009) pointed out a specific statistical problem:

  • "double dipping", also known as the "non-independence error"
  • it arises when statistical tests are computed for specific brain regions
  • and these tests are not independent from the statistical test used to identify those regions

Imagine we run a study on empathy:

  • Group A - score highly on an empathy questionnaire
  • Group B - score low
  • we then run a GLM and find that region X is more active in the high-empathy group. That's a cool finding in itself, but we want to relate it to the personality questionnaire.
  • so we use this identified brain area X as a region of interest and compute a correlation between the questionnaire scores and the BOLD signal in area X
  • this gives us a very high correlation of r = .9
  • the problem is that you pre-selected area X based on that empathy questionnaire already! The voxels you selected are the same voxels that originally showed this relationship, so of course the correlation between the region and the questionnaire scores will be really high (see the simulation below).
  • scholars have reported such correlations as something exciting, and people have bought it
  • Kriegeskorte et al. (2009) showed that 42% of fMRI papers in 2008 contained at least one example of the non-independence error
  • always check that the test you used to identify a region and your subsequent analysis are independent from each other
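A minimal simulation of the non-independence error with pure-noise data (all numbers made up): selecting the voxels that correlate best with the questionnaire and then correlating those same voxels with the same questionnaire produces an impressively large, completely meaningless r.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 10_000

empathy_scores = rng.normal(size=n_subjects)              # questionnaire scores
brain = rng.normal(size=(n_subjects, n_voxels))           # pure-noise "BOLD" data

# Correlate every voxel with the empathy scores.
z_scores = (empathy_scores - empathy_scores.mean()) / empathy_scores.std()
voxel_r = (z_scores @ ((brain - brain.mean(0)) / brain.std(0))) / n_subjects

# Double dipping: pick the voxels that correlated best, then report the
# correlation of their mean signal with the SAME scores.
best_voxels = np.argsort(voxel_r)[-50:]
roi_signal = brain[:, best_voxels].mean(axis=1)
r_double_dip = np.corrcoef(empathy_scores, roi_signal)[0, 1]
print(f"'voodoo' correlation in pure noise: r = {r_double_dip:.2f}")
```

The way out is to select the region with data that are independent of the data used for the final test, e.g. a separate localiser run, exactly as the card above says.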
68
Q

double dipping - how many papers do this? what should we do to contend this problem?

A
  • Kriegeskorte et al. (2009) showed that 42% of fMRI papers in 2008 contained at least one example of the non-independence error.
  • Always check that the test you used to identify a region and your subsequent analysis are independent from each other.
69
Q

issues with fMRI: Flexibility of data analysis

A

fMRI data analysis is very flexible. Researchers have a high degree of freedom in how they implement their analysis - which steps to select.

  • a screenshot from the BrainVoyager analysis software shows how you can choose different pre-processing steps
  • you can just tick some options, run the data, and look at the results
  • Carp et al. (2012) found 69,120 possible workflows for analysing fMRI data
  • this can lead to "p-hacking" or "method shopping", where you run the analysis over and over in different ways until you get a nice result

Importantly, this problem is not specific to fMRI. This issue is apparent for all complex methods e.g. genetic analysis