MEG Preprocessing Flashcards
Preprocessing of MEG data involves - (3)
- Inspecting MEG data
- Epoching
- Dealing with noise (e.g., noise reduction, noise removal [filtering, automatically/manually rejecting noisy trials], averaging)
Preprocessing and event-related analyses of MEG is similar to
EEG (apply similar steps)
What is MEG data measuring for each sensor?
magnetic field strength over time
An MEG system samples from lots of
sensors at lots of points in time
What is the typical sampling rate for MEG per second and millisecond - (2)
around 1000 Hz (1000 samples per second)
1 sample per millisecond
Why does MEG have such a high sampling rate?
MEG has amazing temporal resolution (it can tell when activity happened), so we sample densely in time (e.g., every millisecond) to take advantage of it
Our MEG system in York has around how many sensors?
248
The EEG in York has approximately how many electrodes?
64 or 128 electrodes
The raw data from MEG can be stored in a very large
Time x Sensor matrix
What is shown in the rows and columns?
The columns are the different MEG sensors and the rows are the time points in milliseconds
MEG’s Time x Sensor matrix in a 10-minute worth of data can have how many rows and columns with 248 sensors
600,000 rows and 248 columns
MEG’s Time x Sensor matrix , each entry in the matrix is
the magnetic field strength detected by a given sensor at that point in time, measured in femtotesla (10 to the power -15 tesla)
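The matrix dimensions above can be checked with a few lines of Python (the 1000 Hz rate, 10-minute duration, and 248 sensors are the figures from these cards; we only compute the shape rather than allocate the ~1 GB of data):

```python
# Figures from the cards: 1000 Hz sampling, 10 minutes of data, 248 sensors
fs = 1000                    # samples per second (1 sample per millisecond)
n_samples = 10 * 60 * fs     # 10 minutes -> 600,000 rows (time points)
n_sensors = 248              # columns (one per sensor)

# Each entry would hold field strength in femtotesla (fT)
data_shape = (n_samples, n_sensors)
print(data_shape)  # (600000, 248)
```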
In EEG, we can have an electrode x time matrix in which each entry is
the magnitude of the electrical potential (activity) in microvolts
MEG’s timecourse for all sensors can initially be
inspected
First step of MEG - inspecting MEG data across time
Diagram of MEG time course for all sensors which plots.. and what are y and x axis - (2)
plots the timecourse for all the sensors
y axis is the magnetic field strength in femtotesla (fT) and x axis is time in seconds (s)
First step of MEG- inspecting MEG data across time
What can you see in this example? - (2)
6 sensors are very noisy (noisy lines underneath - broken sensors which should be removed)
other sensors are more stable but show a gradual drift (cone), with some picking up stronger magnetic field strength than others –> some sort of artefact (e.g., changes in temperature/changes in magnetic noise)
Although inspecting MEG data across time is not usually informative, it can reveal
gross problems such as dead sensors or big artefacts
Inspecting MEG data such as one below is not enough to answer
our RQ
Usually in MEG studies, we would want to present stimuli at specific times and see how the brain responds
this is like what design in MRI?
event-related design
What does this diagram show? - (4)
- Participant’s EEG/MEG recordings are being taken
- Participant is looking at a dog on the stimulus PC
- The stimulus PC sends a trigger (i.e., ‘I showed an image of a puppy at this time’) to the EEG/MEG recording PC
- The EEG/MEG recording PC adds this to the data; the little 1s mark when the participant saw the puppy
- This way we know exactly when participants were shown something and can look at the brain activity that follows
Different conditions have triggers with different
numbers
Diagram of example of different conditions have triggers with different numbers - (2)
- Aside from recording when participant saw a puppy in MEG trace (e.g., 1)
- We can also record when participants saw a cat (another condition) in MEG trace which is 2
What is a trigger in MEG?
A trigger indicates stimulus onset and is stored as a number in the MEG data
Typically, the stimulus presentation PC sends the trigger signal to the
EEG/MEG recording PC
In epoching, we can extract an ‘epoch’ of data around the
stimulus time [onset] (around trigger)
In epoching, we want some time before… and after… (2)
time (e.g., 500ms) before stimulus presentation as baseline and enough time after to see effects (e.g., 1500ms)
Why do we epoch some data (e.g., 500 ms) before stimulus presentation? - (2)
- The effects we may see after stimulus presentation may be due to activity already going on in brain - lots of spontaneous activity
- Acts as baseline and expect change to happen after that 500 ms
Diagram of epoching shows… (3)
- Showed 100 puppies
- Each time we showed a puppy (marked by a 1), we can take the data for a second or so afterwards and epoch it
- This image does not include any baseline
In epoching we can extract the data before and after stimulus time (around trigger) in each.. - (2)
each trial in one condition
(e.g., epoch data for when pps seeing dog, epoch data when pps see cat)
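A minimal epoching sketch in NumPy (the sampling rate, trigger positions, and 500 ms/1500 ms windows are illustrative assumptions, matching the example values on these cards):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                                 # 1000 Hz sampling
data = rng.standard_normal((10_000, 4))   # 10 s of fake time x sensor data
triggers = [2000, 5000, 8000]             # sample indices where the stimulus appeared

pre = int(0.5 * fs)                       # 500 ms baseline before the trigger
post = int(1.5 * fs)                      # 1500 ms after, to see the effects

# One epoch per trial: trial x time x sensor
epochs = np.stack([data[t - pre:t + post] for t in triggers])
print(epochs.shape)  # (3, 2000, 4)
```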
In epoching, we can also have trial by trial
data
What does this trial of epoching of trial by trial data time course plot show? - (7)
- 0 timepoint which is the stimulus presentation
- baseline before 0
- 0.5 and 1.0s (after 0) after stimulus presentation
- Red is positive and blue is negative
- Have different sensors at different trials
- At 0.6s all sensors are consistently showing some change when showing some stimulus (e.g., puppy)
- By looking at individual trials start to see effects consistent across sensors or across different trials (e.g., look at next trial showing puppy to see if we also have something [effect] at 0.6s)
In epoching trial by trial data we can view data at one trial across
different sensors
In epoching trial by trial data, external noise such as… can often show up as …
subject movement and other artefacts (as well as real effects) often show up as correlated activity across sensors
In epoching trial by trial data the timings are now meaningful as 0
is stimulus presentation, see what changes
What is this diagram - (2)
This is a sensor montage which plots epoched trial by trial data in a different way (as compared to the timeplot)
- plots amplitude across different sensors in a montage of the head
It is very hard to distinguish EEG/MEG signal from
noise
The noisiness in EEG/MEG signals tend to be - (2)
much bigger than effects we are looking for (tiny magnetic strength/electrical potential effects)
problem for preprocessing EEG/MEG data
What does this diagram show? - (2)
brain’s magnetic fields we are looking for in an experiment are very small (bottom of the diagram)
above it we have sources of magnetic noise that can reduce our ability to see small experimental magnetic effects (e.g., earth’s magnetism, magnetic noise of the town)
What are many noise sources in MEG (and EEG)? - (3)
- electromagnetic interference from cars, mains power line, fans, MRI scanners etc..
- Earth’s magnetic field
- Participant movement, blinks and own heartbeats
Blinks of a participant show up on a
an electrooculogram (EOG) trace
To detect and remove the blink artefacts in EEG/MEG
we use
electrooculogram (EOG) sensors
What does EOG stand for?
Electrooculography
EOG sensors is the
additional electrodes placed on the face to monitor blinks and eye movements
Blinks and eye movements picked up by EOG may lead to
positive and negative voltage changes at some scalp electrodes
What does this diagram show? - (2)
Recordings of EOG are shown in the bottom two traces (circled)
Blinks cause massive electrical/magnetic changes in the EEG/MEG data
What does this diagram show in EEG? - (3)
- The eye movements and blinks have a specific topography across sensors
- The blink has a positive charge at the front of the head and negative at the back
- If we see this kind of activity, we may see this as a blink
Can also use ECG sensors with EEG/MEG which record
electrical activity of the heart including rate and rhythm
What does ECG stand for?
electrocardiogram
Can also use ECG with EEG/MEG to remove heartbeat effects but less likely to - (2)
be correlated with task/stimuli presentation
(good thing! = enables us to do comparisons properly without worrying about heartbeat correlated unless comparing older vs younger which have different heartbeat rates)
What are common artefacts in MEG? - (3)
- Blinking
- Eye movements
- Heartbeat
Diagram of blinking in sensor space vs source space in MEG
What is this diagram showing of 3 different artefacts in MEG and why is its topography in sensor space (eye movements) different to EEG? - (2)
We have a different topography of magnetic effects across MEG sensors when we blink, move eyes to the right, and for the heartbeat
This topography is different because whenever you have an electric current you get a magnetic field that is perpendicular to it
The problem of noise (artefacts) is that it causes us to
not ‘see’ neural signals on a single trial
We can deal with noise in the data by doing 3 things - (3)
- Noise reduction
- Noise removal (filtering, automatically/manually rejecting noisy trials)
- Averaging
What is the solutions with noise reduction when doing the experiment? - (3)
- Study design should minimise movement, including eye movement, and minimise movement differences between conditions (i.e., no more movement expected in one condition than another; get the participant comfortable in the scanner with a cushion and blanket)
- Experiment takes place in a magnetically shielded room (Faraday cage) –> reduces the effect of the earth’s magnetic field; no electrical machinery in the scanner room; de-metal the participants
- Automatic comparison to reference channels
Reference channels in MEG systems are coils positioned far away from
the participant’s head
The purpose of reference channels is that
real MEGs sensors can be compared to the reference channels
Diagram of what inside of MEG scan looks like and what it shows… - (3)
SQUIDs stay cold so they sit near the liquid helium reservoir
Pink is the signal coils picking up the magnetic signal and passing it to the SQUIDs
In between we have green reference sensors, which do not pick up real brain activity but are nearby to pick up ambient noise (e.g., traffic, earth’s magnetism)
The reference channels are intended to pick up
ambient noise and interference not coming from the participant
The reference signals are then subtracted from the
real signals from the rest of the MEG sensors = taking out a lot of external sources of noise
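The subtraction idea can be sketched in a toy NumPy example (illustrative: one sensor and one reference that by construction share the same ambient noise; a real system works over many channels):

```python
import numpy as np

rng = np.random.default_rng(1)
ambient = rng.standard_normal(1000)        # noise from far away (traffic, mains, etc.)
brain = 0.1 * rng.standard_normal(1000)    # small real signal near the head

sensor = brain + ambient                   # MEG sensor sees brain signal + ambient noise
reference = ambient                        # far-away reference coil sees only the noise

cleaned = sensor - reference               # subtraction leaves the brain signal
```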
How is the subtraction of reference signals from the real MEG sensor signals done? - (2)
done automatically (see ‘processed’ vs ‘raw’ folders)
OR
MaxFilter software for signal space separation has a similar aim, based on decomposing the data to tell which parts probably come from far from the head
Noise reduction from reference channels does not help in detecting and removing
noise from the participant - the reference channels are too far from the participant
After noise reduction, we will need to think about
noise removal
What are the steps of noise removal - (2)?
- Filtering
- Automated removal of artefacts or manual (or rule-based) removal of trials with artefacts
After noise reduction and noise removal then we can think about
averaging over multiple ‘epoched’ trials
For noise removal, ideally you want to do the same
preprocessing steps of noise removal for all participants
To ensure the preprocessing noise removal steps are exactly the same (consistency and reproducibility) for all participants,
we can
automate them using a script, which also makes your analysis pipeline reproducible (i.e. someone else could run it and get the same results)
There may be some sessions in which aggressive filtering of noise is necessary due to - (2)
idiosyncratic (unusual) conditions on the day
e.g., drilling noise in one day
Any difference in analysis of noise removal between participants/sessions should be
reported
With noise removal, always quality-check your data visually and calculate the signal-to-noise ratio before and after preprocessing (manual or automated artefact removal)
Find and remove/interpolate bad (e.g., broken) channels before statistical or automated artefact rejection; if we don’t then… - (2)
we will have to exclude many trials
and a bad channel may affect other sensors (during automatic removal its signal may project onto other sensors)
Brain activity often happens at specific
frequencies (e.g.. alpha)
Brain activity often happens at specific frequencies (e.g., alpha) which can be the same frequencies as
noise
Brain activity often happens at the same frequencies as noise, for example - (3)
the mains electricity supplied to UK sockets alternates at 50 Hz
So 50 times a second the current switches direction
We get an artefact in EEG/MEG at 50 Hz from electrical sockets, even in other rooms
What does filtering involve? - (3)
- Using Fourier analysis to calculate the amount of activity of different frequencies
- Plot frequency spectrum (also named Fourier spectrum, or sometimes amplitude/power spectrum)
- Remove specific frequencies or frequency ranges to clean up signal that think might be noise (e.g., removing 50 Hz mains hum)
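The Fourier step can be sketched with NumPy: a sine at 10 Hz stands in for brain activity and a larger 50 Hz sine for the mains hum (all values illustrative):

```python
import numpy as np

fs = 1000
t = np.arange(0, 2, 1 / fs)                          # 2 s of samples at 1000 Hz

# Fake recording: 10 Hz "brain" activity plus a stronger 50 Hz mains hum
x = np.sin(2 * np.pi * 10 * t) + 3 * np.sin(2 * np.pi * 50 * t)

spectrum = np.abs(np.fft.rfft(x))                    # amount of activity per frequency
freqs = np.fft.rfftfreq(len(x), 1 / fs)

peak_freq = freqs[np.argmax(spectrum)]
print(peak_freq)  # 50.0 -> the mains hum dominates the spectrum
```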
Filtering is a recommended step in EEG/MEG preprocessing as
good for removing mains noise, muscle artefact and sensor noise
Diagram of MEG frequency spectrum plot
what does y and x axis show and it is an output of.. - (3)
X axis is frequency from 0 -200 Hz
Y is strength of magnetic field in Log power (dB)
This is output of Fourier analysis
What does this MEG frequency spectrum graph show? - (2)
Very prominent peaks around 50, 100, 150Hz, all coming from the mains voltage
(e.g., electrical sockets) - assume its noise
But we are not really interested in these frequencies!
So we can use a filter to get rid of them and clean up our data
What are the 4 types of filters? - (4)
- low pass filter
- high pass filter
- bandpass filter
- notch filter
A low pass filter lets through
signals below a particular frequency (low-frequencies)
A low pass filter is used to remove
high-frequencies we are not interested in
A high pass filter lets through
signals above a particular frequency (lets high frequencies pass)
We use a high pass filter to remove low-frequencies such as
remove low-frequency drift in magnetic fields and DC component (cone shape)
A low pass filter can be used to remove
50Hz mains effect ‘hum’
A bandpass filter allows through signals between - (2)
two limits
used to combine a low and high pass filter
A bandpass filter only lets through a particular
frequency band - not the very highest or lowest frequencies
A bandpass filter gets rid of
some high and low frequencies
What is a notch filter? - (2)
Removes a very tight range of frequencies, e.g., around 50 Hz to remove mains hum
Unlike high, low and bandpass filters, it removes (rather than lets through) a specific frequency range
Example of notch filter
Want to get rid of anything between 49 and 51 Hz to remove mains hum
With different types of filters (e.g., high, low, bandpass, notch) you can
do multiple if useful
e.g., no point applying notch filter above the cutoff for a low pass (because signal will be 0 already)
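Combining filters can be sketched with SciPy (assuming `scipy` is available; the fake signal and cutoffs are illustrative): a 0.5–48 Hz bandpass followed by a notch at 50 Hz.

```python
import numpy as np
from scipy import signal

fs = 1000
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 2 * np.sin(2 * np.pi * 50 * t)  # signal + mains hum

# Bandpass 0.5-48 Hz: removes slow drift and high frequencies
b, a = signal.butter(4, [0.5, 48], btype="bandpass", fs=fs)
x_bp = signal.filtfilt(b, a, x)

# Tight notch around 50 Hz to kill any remaining mains hum
b_n, a_n = signal.iirnotch(50, Q=30, fs=fs)
x_clean = signal.filtfilt(b_n, a_n, x_bp)
```

After this, the 10 Hz component survives almost untouched while the 50 Hz component is strongly attenuated.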
Diagram of showing effect of applying bandpass filter (0.5 - 48 Hz) of frequency plot of magnetic strength x frequency Hz - (2)
Scale changed as anything above 48 Hz has been removed, along with the spikes
Brain has a 1/f property, so low frequencies are more relevant, as shown in both graphs (more power at low frequencies); there is less brain activity at higher frequencies
Diagram of showing effect of applying bandpass filter (0.5 - 48 Hz) of magnetic field strength (fT) across time
Removed all the problematic sensors (dodgy lines) and got rid of the gradual drift (cone), as slow changes of the magnetic field over seconds have been removed (e.g., the effect of temperature changes has gone)
What is another way to remove artefacts aside from filtering
automatic or manual removal of artefacts
SSP and blind source separation is what type of removal of artefacts
automated
If a type of artefact has a usual or known topography (i.e., known spatial layout in sensor space) then it can be identified and removed with
signal-space projection (SSP)
What does SSP require? - (2)
pre-defined topographies or spatial distributions (layouts) for each artefact in sensor space
have a sense of the type of artefact and where it is
For each type of artefact in SSP, we can - (3)
- work out how much of the data can be explained by this topography (i.e., what number to multiply it by)
- Remove the weighted version of each noise topography from the data
- What is left is considered the signal
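The three SSP steps can be sketched in NumPy with a toy blink topography (the topography, amplitudes and sensor count are all made up):

```python
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_times = 6, 1000

# Pre-defined blink topography: a unit-length spatial pattern across sensors
blink_topo = rng.standard_normal(n_sensors)
blink_topo /= np.linalg.norm(blink_topo)

# Fake data: small brain activity plus a big blink at samples 100-120
brain = 0.1 * rng.standard_normal((n_times, n_sensors))
blink_amp = np.zeros(n_times)
blink_amp[100:120] = 5.0
data = brain + np.outer(blink_amp, blink_topo)

# Step 1: how much of each time point is explained by the blink topography?
weights = data @ blink_topo

# Step 2: remove the weighted topography; step 3: what is left is the signal
cleaned = data - np.outer(weights, blink_topo)
```

After projection the cleaned data contains nothing along the blink topography, which is also why real signal with a similar spatial distribution gets reduced too.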
In SSP we can remove or reduce real signal with a
similar spatial distribution e.g., frontal
What does this diagram show - (3)
if blinking has this specific left-right topography in sensor space (left), we can ask how much of the data is explained by it at each time point (i.e., over time, does it look like we are blinking now?)
weight the spatial topography maps (in sensor space) of blinking, heartbeat and eye movement by multiplying each by some number, as a regression against the real signal, to see what the data is made up of
then take the artefacts out if we only want the signal
SSP automatic removal of artefacts works well for and okay for
well for eye movements & blinks and okay for heartbeat, as these have consistent topographies in sensor space
In automatic removal of artefacts if we are not confident we know the - (2)
topography of the artefacts, we can instead separate the data in a data-driven way and identify which parts are likely artefacts
- blind source separation
Examples of blind source separation methods - (3)
- Principal Component Analysis
- Singular Value Decomposition
- Independent Component Analysis (ICA)
What is the most popular blind source separation method?
Independent Component Analysis (ICA)
Blind source separation methods do not require
pre-defined topographies in sensor/source space
What is the difference between blind source separation and SSP?
In blind source separation, we separate the data in a data-driven way and identify which parts are likely artefacts based on mathematical assumptions, not on pre-defined spatial maps
What are the steps of blind-source separation? - (3)
- Decompose data e.g., separate into components
- Choose which are likely artefacts and remove these
- Reconstruct the data
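The decompose / remove / reconstruct steps can be sketched with SVD (one of the methods listed below; simpler than ICA but the same workflow). Toy data where a large rhythmic "heartbeat" source dominates, so the first component captures it:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 2000)

heartbeat = 3 * np.sign(np.sin(2 * np.pi * 1.2 * t))   # big rhythmic artefact source
brain = 0.5 * np.sin(2 * np.pi * 7 * t)                # smaller brain source

mixing = rng.standard_normal((2, 8))                   # mix 2 sources into 8 sensors
data = np.column_stack([heartbeat, brain]) @ mixing

# Step 1: decompose the data into components
U, s, Vt = np.linalg.svd(data, full_matrices=False)

# Step 2: here the dominant component captures the artefact, so zero it out
s_clean = s.copy()
s_clean[0] = 0.0

# Step 3: reconstruct the data without the removed component
reconstructed = U @ np.diag(s_clean) @ Vt
```

ICA replaces the SVD step with an unmixing that maximises statistical independence, which usually separates artefact from brain activity more cleanly than variance alone.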
The ICA separates sources
that are not dependent on each other i.e., are independent
Diagram of ICA in MEG - (3)
- Have MEG data (x) on the left
- Split that data into the time courses of 2 different sources on the basis of them being independent
- One source could be heartbeat, given its spatial topography in sensor space and its rhythmic time course, which is different to brain signal, so remove it
In ICA they get
an approximation of each source = a time course and a spatial distribution
ICA in MEG ‘unmixes’ the
signals assuming they are independent of each other and have separate sources
The ICA explains as much of the data as
possible
The ICA ‘unmixes’ the MEG signals by assuming they are independent of each other and separate sources so
they can recombine those signals that have not been removed (e.g., not heartbeat artefact)
In ICA, some components will mostly be.. and others will be..
mostly artefact and others MEG signal
Diagram of ICA
Benefits of automatically removing artefacts - SSP or blind source separation (2)
- Avoids having to reject data that are affected by an artefact (cleaning trials not removing trials - end up with same number of trials)
- Can quickly and automatically clean up data - computer does it for you
Can you do SSP and ICA together?
No , either one or the other
Disadvantages of automatically removing artefacts - SSP or blind source separation (2)
Can’t detect all artefacts (e.g., idiosyncratic [unusual] movements)
Not always accurate and can lose or distort signal
In automatic removing artefacts - SSP or blind source separation always check (2)
your results and excluded/included components
Manual/statistical artifact removal most of the time involves:
going through each epoch and checking it manually for obvious problems
The disadvantage of manual/statistical artifact removal where researchers go through each epoch and check for obvious problems
very time-consuming, but many researchers still do it this way
Another way of manual/statistical artifact removal is when
removing outliers e.g., exceeding 3SD from mean of other trials
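A minimal NumPy version of the 3 SD rule (the epoch sizes and the injected artefact are made up):

```python
import numpy as np

rng = np.random.default_rng(4)
epochs = rng.standard_normal((100, 2000, 4))   # trial x time x sensor
epochs[7] += 50.0                              # one trial ruined by a big artefact

# Peak absolute amplitude per trial, then z-score across trials
peaks = np.abs(epochs).max(axis=(1, 2))
z = (peaks - peaks.mean()) / peaks.std()

keep = z < 3                                   # reject trials more than 3 SD above the mean
clean_epochs = epochs[keep]
print(clean_epochs.shape[0])  # 99 -> only the artefact trial was rejected
```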
The reasons why other researchers don’t remove artefacts post-filtering at all via manual/statistical artefact removal, as it is - (3)
time consuming
could remove real effects
could be subjective
Consider collecting other measures with MEG for manual/statistical artefact removal, such as
ECG, EOG, EEG to help detect eye movement and cardiac effects (or the MEG setup may have head position indicators)
For manual/statistical artefact removal always quality check your data
before and after preprocessing steps
Manual/statistical artefact removal approach may depend on the participant group, e.g.,
more important when testing children, given their higher level of motion
Averaging across multiple ‘epoched’ trials - what do we have and what can we average? (2)
- preprocessed (epoched) data per trial
- can average the time course across sensors over multiple repetitions of the condition (e.g., of seeing a dog) and see if responses are consistent
Averaging the time course over multiple repetitions of the condition:
if responses are consistent across trials,
they are reinforced
Diagram of averaging many ‘epoch’ trials of a condition - N is number of trials
Averaging the time course over multiple repetitions of the condition:
if the noise is different across trials (e.g., eye blinks/heartbeats occur at random times), the noise is
decreased
Averaging the time course multiple repetitions of condition improves - (2)
signal to noise ratio as hoping responses are consistent (e.g., same peak every 100 ms after seeing puppy)
but can lose real effects if they are not consistent across trials
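The averaging logic can be sketched in NumPy: a consistent evoked peak survives while random noise shrinks roughly as 1/sqrt(N) (shapes and amplitudes illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_time = 100, 500

# Consistent evoked response: a peak around sample 100 on every trial
evoked = np.exp(-((np.arange(n_time) - 100) ** 2) / 200.0)

# Single trials: the same response buried in noise twice its size
trials = evoked + 2.0 * rng.standard_normal((n_trials, n_time))

average = trials.mean(axis=0)   # noise SD drops from ~2 to ~2/sqrt(100) = 0.2
```

The averaged peak now stands clearly above the residual noise; an effect that jittered in time across trials would instead be smeared out, which is the caveat above.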
Which of these statements is true description of MEG compared to fMRI?
A. Better temporal but worse spatial resolution and more frequency info
B. Better temporal but worse spatial resolution and less frequency info
C. Better spatial but worse temporal resolution and quieter
D. Better spatial but worse temporal resolution and louder
A
Which statements is TRUE?
A. We collect MEG data in source space and transform into sensor space
B. We collect MEG data in source or sensor space and transform it
C. We collect MEG data in sensor space and transform into source space
D. We collect MEG data in outer space and transmit back to Earth
C
MEG and EEG sources in the brain are represented by little battery-like currents called what?
A. Unipole
B. Bipoles
C. Dipoles
D. North poles
C
Which of these statements is TRUE?
A. SQUIDs are further from the brain as they need to be extremely cold
B. SQUIDs are further from the brain as they need water to swim
C. OPM are further away from the brain so participant can move freely
D. OPMs are closer to the brain resulting in worse signal
A
TRUE OR FALSE? - (3)
You can not predict orientation of magnetic effects from the electric current
MEG struggles with deep sources but EEG is fine
Activity must be synchronised over a few mms to be detected with EEG or MEG
- False - you can predict the orientation of the magnetic effect from the electric current as it is 90 degrees (perpendicular)
- False - both MEG and EEG struggle with deep sources as the sensors are on the surface
- True
Compared to EEG, MEG allows
monitoring of cortical activation sequences without severe distortion by the skull and other extracerebral tissues
MEG, as compared with fMRI, directly reflects neuronal phenomena, whereas fMRI reflects them
indirectly via the hemodynamic response
The main advantage of MEG is - (2)
excellent (sub)millisecond temporal resolution
insensitivity of the signals to the distorting effects of skull
Although most MEG applications
remain in basic brain research, the use of MEG is increasing in
clinical medicine (epilepsy etc.)
The tiny cerebral magnetic fields can be detected by
sensors using SQUIDs that convert magnetic flux into recordable electric voltage
SQUIDs in MEG is used in combination with
superconducting pickup coils that guide the neuromagnetic fields into the SQUID loop
The geometry of the pick up coil in MEG determines the
sensitivity pattern of a sensor
MEG recordings are typically performed in magnetically shielded rooms constructed out of
mu metal and aluminum
What are the main external artefacts and biological artefacts that contaminate MEG signal? - (2)
Main artefacts may arise from power lines, moving vehicles or magnetic stimulations and response devices
Biological artefacts: cardiac function, eye movement, blinks, muscular activity and artefacts related to participants’ articulation and movement
The novel signal space separation method (SSS) and its temporal extension (tSSS) are what type of method in MEG?
filtering
The tSSS can
suppress external interference
MEG/EEG can give a grasp of dissociation between .. that can not be done with fMRI
early unconscious and later conscious processing
Main focus of EEG was recording
spontaneous brain activity
The brain’s spontaneous MEG/EEG activity contains both
rhythmic and irregular components
The brain’s spontaneous MEG/EEG activity is usually below
30 Hz depending on participants’ vigilance, task, possible medications and disease
Frequency tagging in magnetoencephalography (MEG) refers to a technique used to .. and example - (3)
selectively label or tag neural responses to specific frequencies of visual or auditory stimuli
e.g., a checkerboard pattern that alternates between black and white at a specific flicker rate or frequency - for example, presented with a flicker rate of 10 Hz (cycles per second)
by focusing on the frequency of interest (e.g., 10 Hz), researchers can isolate and analyse the neural responses specifically related to the flicker rate of the checkerboard
‘Life Time’ is
the recovery time of the stimulus response (the higher a source is in the processing hierarchy, the longer the lifetime; quickest in primary sensory areas)
Brain maturation is reflected in … - (2)
frequency content and distribution of the spontaneous MEG/EEG brain rhythms (same regions for children and adults)
and in the time lags and shapes of evoked responses to external stimuli (timing of activations is delayed in children compared to adults, e.g., word perception)
Which of the following statements about artefacts in MEG is FALSE?
A. Artefacts are signals in the data which are not related to the process which we want to measure.
B. EOG and ECG signals can provide useful information to assist with the artefact removal process.
C. The aim of artefact rejection is to remove both physiological and non-physiological noise from the dataset.
D. The use of Independent Components Analysis for artefact removal requires that the user mark individual epochs of data as “bad”.
D
Which of these methods do NOT help reduce the amount of noise in MEG data?
A. Shielding the room
B. Avoiding metal near the scanner
C. Allowing the participant to move around
D. Comparing the sensor data to reference channels
C
In what ways are Magnetic Evoked Potentials (MEPs) different to other Event-Related Potentials?
A. They have inconsistent amplitudes and are labelled with an ‘M’
B. They have inconsistent directions and are labelled with an ‘M’
C. They have inconsistent timings and are labelled with an ‘E’
D. They have inconsistent frequencies and are labelled with an ‘E’
B
Which of these would be the most important step to change in your pipeline because you think the data contains responses that are induced but not evoked?
A. Average the time courses across multiple trials
B. Filter the data
C. Perform a frequency-based analysis
D. Remove ill-fitting epochs
C