PhD Interview 1 Flashcards
Why have you applied to do a PhD in York? What is it about working with X that appeals to you? - (2)
I am very keen to do a neuroscience PhD at the University of York. It is renowned as one of the top 10 universities in the UK and is well known for its world-leading research, housing cutting-edge neuroimaging facilities such as two 3 Tesla fMRI systems, and it would give me the opportunity to work with experts in the field of neuroscience.
I am already assisting on a project with Professor Andrews, and I very much enjoy working with him. I have found him to be very supportive and encouraging. His expertise in face recognition, using advanced methods like MVPA, aligns with my research interests.
The face recognition project I am working on with Professor Tim Andrews - (4)
Currently helping a PhD student on the project 'The importance of conceptual knowledge when becoming familiar with faces during naturalistic viewing', under the supervision of Professor Tim Andrews.
- Acted as a 'second coder' to check the reliability and consistency of participants' behavioural responses.
- Gained some hands-on experience in fMRI data analysis using FSL, for example extracting brain images with FSL's brain extraction tool (BET).
- The project I am currently assisting on shares some similarities with my PhD proposal; my proposal serves as an extension of it.
What skills and attributes do you have that make you suitable for a PhD in this area of research? - (3)
- As I mentioned, I am already working on a similar project, in which I have gained experience in the analysis of behavioural data (acting as a second coder) and some hands-on experience in fMRI data analysis using FSL.
- I have completed a module called 'Neuroimaging of Vision', taught by Professor Andrews, which covers the brain mechanisms of vision, including face perception. In addition, I have done a module on Principles of Cognitive Neuroscience, which covers how fMRI works and advanced data analysis methods like inter-subject correlation and MVPA. Both of these modules form the foundation of my preparation for the PhD.
- Over the years, I have also acquired programming skills in R and MATLAB, which I used in my undergraduate dissertation. I also have a working knowledge of Python, which I used during my GCSEs. These programming skills are needed for fMRI data analysis.
How did you use R/Matlab during your final dissertation project? - (3)
I used R to perform statistical tests such as Pearson correlations and hierarchical regressions, as well as to test assumptions, for example checking that my data were normally distributed using Kolmogorov-Smirnov tests.
I also practised conducting these tests in MATLAB, as my supervisor encouraged me to do so.
The dissertation experiment itself was also run in MATLAB, so I was responsible for running the code independently, and PhD students in the lab taught me how to deal with the code if it crashed.
Can you tell us (in simple terms) about a research project that you have been
involved in – what were you investigating and what did you find? - (5)
My final dissertation project at Newcastle University was called ‘Prediction of
Speech-In-Noise Performance Using Non-Speech Stimuli’ under the supervision of
Professor Tim Griffiths.
Speech-in-noise perception is basically how well people can hear speech in background noise. For example, in a pub, how well can we hear a friend while lots of people are talking in the background? (This is also called the cocktail party effect.) Now, the question is how to measure speech-in-noise performance. The typical test uses a sample of speech (such as a word or sentence) with noise added to it, and the task is to recognise the word or sentence.
The speech content is typically recorded in a specific language (e.g. English), and thus the tests cannot be used with participants who are not fluent in that language.
To address this limitation, a non-speech stimulus called Figure-Ground was developed. This stimulus consists of a pattern of coherent tones (the figure) embedded in a background of non-coherent tones. After hearing two figure-ground stimuli, participants were asked whether the two patterns were the same or different.
We then investigated whether performance on the Figure-Ground stimulus was correlated with performance on conventional speech-in-noise tests. We found that this was indeed the case (typical correlations of around 0.4 to 0.5).
More information on my dissertation project:
what's on my CV - (5)
Dissertation on 'Prediction of speech-in-noise performance using non-speech stimuli' under the supervision of Prof Tim Griffiths, where I gained experience in performing pure-tone audiometry and running psychophysical tests on participants independently.
More information on my dissertation project:
what were the inclusion criteria
Inclusion criteria were that participants were native English speakers, born in the UK, with normal hearing thresholds. Hearing thresholds were measured before we began the study using pure-tone audiometry: pure tones were presented to the ear via headphones and we measured the lowest intensity in decibels (dB) at which the tone was perceived 50% of the time - this measurement is the hearing threshold.
More information on my dissertation project:
what tasks they did - (5)
Participants did the figure-ground task, and we measured their speech-in-noise performance using both sentences-in-noise and words-in-noise tests, on a computer in a sound booth.
More information on my dissertation project:
what were the limitations - (5)
A limitation is that we only considered native English speakers with normal hearing, since the tests had speech content in English, so the findings may not generalise to other groups. The normal-hearing criterion also led to the exclusion of a fairly large number of participants (around 10).
More information on my dissertation project:
what future research could follow - (5)
Future work -> uncover the brain areas that are activated by our new figure-ground test and compare these with conventional speech-in-noise tests.
What issue would you like to address during your PhD? - (12)
- The main question that I am trying to address is how conceptual knowledge helps in face recognition and where in the brain this knowledge is represented.
- Some previous work has addressed this question, but it has a number of limitations.
- The main one is that the paradigms used lack ecological validity:
a. Static facial stimuli are paired with artificial conceptual information such as a name or occupation.
b. This does not happen in real life, where perceptual and conceptual information are not separated.
c. In real life, we see faces in many different contexts, and from this we accumulate conceptual knowledge.
- To overcome this limitation, task-free naturalistic paradigms have been developed, in which participants watch an engaging movie in an MRI scanner.
- The naturalistic viewing paradigm is good from an ecological perspective, but the problem is how to separate the conceptual from the perceptual.
- Recently collected behavioural data using a naturalistic paradigm have shown that conceptual information helps in the recognition of faces after a delay period.
- In this paradigm, there are two groups of participants: one group watches a movie in its original order and the other watches a scrambled version of the movie.
- The idea is that the Original group will be able to construct a narrative, whereas the Scrambled group will not.
- After watching the movie, participants perform a face recognition task to identify actors from the movie.
- Interestingly, immediately after watching the movie, the performance of the Original and Scrambled groups did not differ, indicating that both groups were using perceptual features for face recognition.
- However, after a delay of 4 weeks, the Original group performed better than the Scrambled group, indicating the use of conceptual knowledge in face recognition after a delay.
- In the current project, we will use MVPA and ISC to identify the brain network in which conceptual information is represented, and we will examine how the perceptual and conceptual networks interact using dynamic causal modelling.
What is inter-subject correlation? - (3)
Since stimulus presentation in naturalistic viewing paradigms is not tightly controlled, the challenge lies in the interpretation of the data.
Hasson et al. (2004) used inter-subject correlation (ISC) as a measure of how consistent the activation of a given brain area is across different participants when they watch the same movie.
- Mathematically, it works by computing the correlation coefficient between the time series of activation extracted from a given voxel in two participants.
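As an illustration, a minimal sketch of this voxel-wise ISC calculation in Python (using made-up arrays; in the real analysis the time series would be extracted from preprocessed fMRI data):

```python
import numpy as np

# Made-up time series (one value per TR) from the same voxel in two
# participants who watched the same movie.
voxel_sub1 = np.random.randn(200)
voxel_sub2 = np.random.randn(200)

# ISC for this voxel = Pearson correlation between the two time series.
isc = np.corrcoef(voxel_sub1, voxel_sub2)[0, 1]
print(f"ISC for this voxel: {isc:.3f}")
```

In practice this is repeated for every voxel and typically summarised across all participant pairs (or by correlating each participant with the average of the others), giving a whole-brain ISC map.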
Several studies have shown that: (i) consistent and reliable patterns of activation can be seen in many brain areas (e.g. visual and auditory cortex, and 'higher-level' areas such as the precuneus) across different participants while they watch the same movie; (ii) activations of some of these areas in response to 'artificial' stimuli were weak, indicating a preference for natural stimuli; (iii) ISC could also distinguish whether the movie was presented forward or backward in time; and (iv) ISC can distinguish between two groups, such as patients and controls.
Example of studies showing that ISC can distinguish whether the movie was presented forward or backward in time:
While the visual cortex showed no difference between forward and backward presentation, higher-order areas like the temporoparietal junction (TPJ) showed weaker ISC in response to the backward movie compared to the forward movie.
ISC can differentiate between two groups (between-subject), such as patients and controls - (2)
- Reduced neural similarity during movie viewing has been found in individuals with psychosis/schizophrenia and in autism.
- Neural similarity is reduced in autistic compared to neurotypical individuals, and the extent to which an individual's neural response differs from the neurotypical response is predictive of their social comprehension of the clip.
What is the difference between inter- and intra-subject correlation? - (2)
- Inter-subject = measures the consistency of neural responses across different participants in a study
- Intra-subject = measures the consistency of neural responses within the same individual across different conditions
Methodology of our study is similar to previous behavioural work - (3)
We will use the Life on Mars naturalistic viewing paradigm (see Noad and
Andrews, 2023).
However, participants will initially view either the Original or Scrambled version of the movie in the MRI scanner.
Participants from both groups will be scanned again after a delay of 4 weeks, and both groups will watch the same movie, which contains excerpts from previously unseen episodes of Life on Mars.
Aim of the proposal in more detail, if they ask
The aim of the proposal is to use advances in experimental paradigms and analysis methods to address whether the neural representations that are important for the learning and subsequent recognition of familiar faces are found in core (visual) or extended (conceptual) regions, and how the two brain networks interact when processing familiar faces (which, to our knowledge, no study has done).
What is the core (perceptual) network? - (3)
The theoretical model proposed by Haxby and colleagues divides face-processing brain areas into a core and an extended network.
The core network includes brain areas such as the FFA, STS and inferior occipital gyrus, which are thought to extract the visual features of faces used for subsequent recognition.
The STS extracts dynamic features of faces, such as emotional expression, whereas the FFA is involved in extracting visual features that are invariant (e.g. identity).
What is the extended (conceptual) network?
The extended network extracts 'higher-level' conceptual information, such as traits and biographical information associated with faces, and includes brain areas such as the anterior temporal lobe (ATL), cingulate and precuneus.
What is the function of the ATL?
It retrieves semantic memories related to people we know, such as recalling their name, occupation, past experiences, etc.
What is the function of the cingulate?
It is involved in recalling emotions associated with a person's face, such as feelings of trust or empathy.
What is the role of the precuneus?
It creates mental representations of situations and past experiences: it plays a role in retrieving person-specific details through mental imagery and simulation, reconstructing past encounters with specific people and the interactions you had with them.
What is conceptual knowledge? - (2)
Information such as semantic knowledge (e.g. how old the person is, their gender) and episodic memories associated with the person.
Importantly, this is completely separate from the visual information about the person.
What does MVPA stand for?
Multi-voxel pattern analysis
What is MVPA? - (3)
A machine-learning classifier learns to associate patterns of brain activity with specific labels.
The classifier is trained on a portion of the dataset, where it is told which patterns of brain activity correspond to which labels.
The classifier is then tested on a separate portion of the dataset, not used in training, to see whether it assigns the appropriate labels, and its accuracy is recorded.
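A minimal sketch of that train/test logic using scikit-learn; the data here are random placeholders, whereas in the real analysis each row would be a pattern of voxel activity and each label a face identity:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder data: 40 patterns x 500 voxels, with two identity labels.
rng = np.random.default_rng(0)
patterns = rng.standard_normal((40, 500))   # patterns of brain activity
labels = np.repeat([0, 1], 20)              # e.g. actor 1 vs actor 2

# A linear classifier is trained on part of the data and tested on the
# held-out part; cross-validation repeats this split and returns the
# accuracy for each fold (chance level here is 0.5).
clf = SVC(kernel="linear")
scores = cross_val_score(clf, patterns, labels, cv=5)
print("Mean classification accuracy:", scores.mean())
```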
MVPA vs traditional analysis - (2)
Traditional fMRI analyses focus on identifying brain regions that show increased activity during a task, but do not tell us how information is encoded within those activated regions.
MVPA looks at the pattern of activity across multiple voxels in specific brain regions.
How is MVPA used in the study? - (3)
- Using MVPA, we ask whether the pattern of responses in different regions of the brain can predict the identity of a person.
- To do this, we compare the pattern of responses when an identity is viewed in one part of the video with the pattern of responses in a different part of the video.
- The MVPA is used in a searchlight approach across the whole brain to identify brain regions that can correctly classify identity above chance level.
Idea of using MVPA and the searchlight approach - (6)
- Participants watch the movie.
- We identify 3 actors who appear repeatedly, in different contexts, throughout the movie.
- During the first part, we note when each actor appears and what their identity is.
- We train the classifier on the brain activity evoked when actor 1 appears: we draw a 6 mm sphere around a centre voxel, pick up all the voxels within that sphere (around 500 voxels), and feed their activity into the classifier, telling it that this pattern corresponds to actor 1.
- Still within the first segment, we take the activity of the same voxels when actor 2 appears and tell the classifier that this is what the pattern of activity looks like for actor 2.
- Then, in the test phase, we input new patterns of voxel activity into the classifier and see whether it correctly classifies them as actor 1 or actor 2.
What will be the classifier's input? - (5)
- We are going to estimate BOLD activity using a GLM (general linear model).
- This is because the BOLD response is slightly delayed relative to the stimulus.
- We take the timepoints at which the actor appears, convolve them with the HRF, and use the GLM to obtain beta values indicating the strength of the BOLD signal.
- We estimate a beta value from the GLM at every voxel.
- These beta values from the GLM are fed into the MVPA.
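A hedged sketch of how such beta values might be obtained with nilearn's GLM interface; the file name, TR and event timings below are hypothetical placeholders, not the project's actual values:

```python
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Hypothetical event timings: onsets (in seconds) at which each actor appears.
events = pd.DataFrame({
    "onset":      [10.0, 45.0, 80.0, 120.0],
    "duration":   [4.0, 4.0, 4.0, 4.0],
    "trial_type": ["actor1", "actor2", "actor1", "actor2"],
})

# The GLM builds regressors by convolving these event timings with the HRF.
glm = FirstLevelModel(t_r=2.0, hrf_model="glover")
glm = glm.fit("sub01_movie_bold.nii.gz", events=events)  # hypothetical file

# Beta (effect-size) maps for each condition; these are what feed into MVPA.
beta_actor1 = glm.compute_contrast("actor1", output_type="effect_size")
beta_actor2 = glm.compute_contrast("actor2", output_type="effect_size")
```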
What is the searchlight approach? - (6)
- Draw a small sphere at a given point in the brain and take the voxels lying within the sphere.
- Let's say there are 100 voxels within the sphere.
- Run the classifier on that region to find out whether the region can decode the labels.
- Record the classification performance for that sphere.
- Move the sphere 1 mm further, to another region.
- Keep shifting the sphere until it covers the whole brain.
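A hedged sketch of this procedure using nilearn's SearchLight class; the file names and labels are hypothetical, and the radius and estimator are illustrative choices rather than the project's actual settings:

```python
from nilearn.decoding import SearchLight
from nilearn.image import load_img, new_img_like

# Hypothetical inputs: a 4D image of beta maps (one volume per actor
# appearance) with matching identity labels, and a whole-brain mask.
beta_imgs = load_img("sub01_actor_betas_4d.nii.gz")  # hypothetical file
mask_img = load_img("sub01_brain_mask.nii.gz")       # hypothetical file
labels = ["actor1", "actor2"] * 20                   # hypothetical labels

# A sphere is centred on each voxel of the mask, a classifier is
# cross-validated on the voxels inside that sphere, and the accuracy is
# assigned to the centre voxel, producing a whole-brain accuracy map.
searchlight = SearchLight(mask_img=mask_img, radius=6.0, estimator="svc", n_jobs=1)
searchlight.fit(beta_imgs, labels)

# searchlight.scores_ is a 3D array of accuracies; wrap it as an image so it
# can be thresholded and visualised like any other statistical map.
accuracy_map = new_img_like(mask_img, searchlight.scores_)
```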
Hypothesis with MVPA - immediate - (2)
Since immediately after the movie both groups of participants predominantly use perceptual information for recognition, the hypothesis is that brain areas of the core network will have above-chance classification in both groups and that the performance between the two groups will not be significantly different.
In contrast, brain areas of the extended network will not show above-chance classification in either group.
Hypothesis with MVPA - delay - (2)
Given that after a delay there is a difference in the role of conceptual knowledge for recognition, the hypothesis is that brain areas of the extended network will have above-chance classification in the Original group and that the performance will be significantly higher compared to the Scrambled group.
Since conceptual knowledge can also activate the perceptual system, there will be above-chance classification in the core network for the Original group, and this will not be the case in the Scrambled group.
Using ISC, the neural responses to the movie shown after a delay will be compared between the Original and Scrambled groups, and the hypothesis would be - (2)
The hypothesis is that regions involved in familiar face recognition should show higher ISC in participants from the Original group compared to the Scrambled group.
Moreover, these regions will be evident in the extended network.
We are also using dynamic causal modelling to understand the causal interactions between the core and extended networks.
Two types of connectivity - (2)
- Functional connectivity
- Causal or effective connectivity
To check whether two areas are functionally connected, a simple correlation coefficient between their activity time series is computed.
Functional connectivity does not tell us the direction of influence.
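As a small illustration (with made-up region-of-interest time series), functional connectivity reduces to a correlation, which is symmetric and therefore direction-blind:

```python
import numpy as np

# Made-up mean time series from two regions of interest, A and B.
roi_a = np.random.randn(200)
roi_b = 0.5 * roi_a + np.random.randn(200)  # B partly tracks A

# Functional connectivity = correlation; corr(A, B) == corr(B, A), so this
# cannot tell us whether A drives B or B drives A.
fc = np.corrcoef(roi_a, roi_b)[0, 1]
print(f"Functional connectivity (r) = {fc:.2f}")
```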
Causal or effective connectivity tells us the direction of influence, i.e. whether region A drives B, B drives A, or both.
DCM is a causal/effective connectivity technique.
Granger causality analysis is another effective/causal connectivity method.
Why we are not using Granger causality in the project - (2)
Granger causality can be applied to any time-series data (that is, it is blind to where the time series comes from: it could be from economics, biology, etc.), much like a regression analysis.
DCM, in contrast, was specifically developed for brain data measured with fMRI.
How is DCM related to brain data? - (4)
DCM tries to work at the neural level: what you measure with fMRI is an indirect measure of neural activity, via blood flow.
You start with a model in which, for example, area A and area B both have neural activity and area A drives area B.
In DCM there is a mathematical equation that maps the neural activity onto a predicted BOLD signal.
You can then compare the predicted and actual BOLD signals: if they match well, the model is supported; if not, it is not the correct model.
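To make the last two points concrete, the bilinear neural state equation used in DCM for fMRI (Friston et al., 2003) is usually written as

\dot{z} = \Big( A + \sum_j u_j B^{(j)} \Big) z + C u

where z is the vector of neural states (e.g. activity in areas A and B), u are the experimental inputs, A encodes the fixed connections between areas (e.g. A driving B), B^{(j)} encodes how input j modulates those connections, and C encodes where the inputs enter the system. The neural states z are then passed through a haemodynamic (balloon) model to generate the predicted BOLD signal that is compared with the measured data.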
Hypothesis of DCM - (2)
Since conceptual knowledge can activate the perceptual system, the hypothesis is that the backward model will be the best model in the Original group for the data acquired after the delay period, when conceptual information is dominant for face recognition.
Since the backward flow of information would be absent in the Scrambled group, the best model for the Scrambled group would be the forward model.
What do you see yourself doing in 5 years? - (2)
In the next five years, I envision myself having completed a PhD and working in a research lab doing my own independent research.
My long-term plan is to make a career in research and teaching at a university. This PhD programme will be the foundation for that aspiration.
Do you have any questions?
- With the studentships at York, I understand that PhD students are required to teach some classes, and teaching is something I am really interested in doing. What are some of the classes that PhD students teach at the University of York?