Theory of methods Flashcards
Psychophysics
Psychophysics is the study of the relationship between physical stimuli and the sensations or perceptions they evoke. It is significant in psychology because it aims to quantify the psychological experience of stimuli and to uncover the underlying principles of sensory perception.
In psychophysics, one method that we can use to get a threshold measurement is the method of constant stimuli. Describe this method and explain what additional information can be obtained when using this method.
In the method of constant stimuli the researcher chooses several stimuli with varying intensities; these are fixed intensity levels. The stimuli are then presented to the participant in a pseudorandomized order, and the participant answers seen/not seen for each stimulus level. An example of the method of constant stimuli is letting participants listen to volume levels chosen in advance, let's say 1-5, played in random order, e.g. 3, 1, 4, 2, 5. The participant answers each time whether they can hear the tone or not. The information that can be obtained from this is a psychometric function, which is a mathematical model of the relationship between stimulus intensity and perceptual experience (a short fitting sketch follows the list below).
Additional information obtained from the method of constant stimuli:
Sensitivity Analysis: By using a range of stimuli, the method allows researchers to assess the sensitivity of the participant’s sensory system across different levels of stimulation. This can provide insights into the participant’s ability to discriminate between different intensities or qualities of stimuli.
Detection Thresholds: In addition to measuring discrimination thresholds (the ability to distinguish between different stimuli), the method of constant stimuli can also be used to measure detection thresholds (the ability to detect the presence of a stimulus at all). This is particularly useful for understanding the absolute limits of sensory perception.
Individual Differences: By analyzing the variability in participants’ responses, researchers can gain insights into individual differences in sensory perception. This information can be useful for understanding factors such as age-related changes in perception or differences between clinical populations and healthy controls.
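As a rough illustration, the seen/not-seen proportions can be fitted with a logistic psychometric function, and the 50% point of the fitted curve serves as a threshold estimate. In the sketch below, the intensity levels and proportions are made-up example values.

```python
# Sketch: fitting a logistic psychometric function to method-of-constant-stimuli data.
# Intensity levels and "seen" proportions are made-up example values.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, alpha, beta):
    """Logistic psychometric function: alpha = threshold (50% point), beta = slope."""
    return 1.0 / (1.0 + np.exp(-beta * (x - alpha)))

intensities = np.array([1, 2, 3, 4, 5], dtype=float)     # fixed stimulus levels
p_seen      = np.array([0.05, 0.20, 0.55, 0.85, 0.95])   # proportion of "seen" responses

params, _ = curve_fit(psychometric, intensities, p_seen, p0=[3.0, 1.0])
alpha, beta = params
print(f"Estimated threshold (50% 'seen'): {alpha:.2f}, slope: {beta:.2f}")
```

The fitted alpha is the intensity at which the participant reports "seen" half of the time, i.e. the threshold estimate; the slope describes how sharply detection improves with intensity.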
On a signal detection/Yes-No task, the proportion of Hits and False Alarms depends on two parameters, sensitivity and criterion. Describe what these two parameters represent in signal detection theory.
SDT is a framework for analysing how people make decisions under uncertainty, particularly in perceptual tasks. The goal is to discriminate a signal (meaningful information that we are interested in) from noise (not meaningful). This can result in four different outcomes, two correct and two wrong: a hit (detecting a signal that is present), a miss (failing to detect a present signal), a correct rejection (saying a signal isn't present when it isn't), and a false alarm (wrongly reporting a signal as present).
The criterion is the decision threshold for answering that the signal is present. The criterion can be influenced by bias and is subjectively set by the participant: if the perceptual evidence exceeds the criterion, you answer "signal". If the criterion is lowered, the number of hits goes up, but the risk of false alarms also goes up. Sensitivity is how well the signal can be distinguished from the noise; it depends both on the strength of the stimulus (which the researcher can manipulate) and on the observer's perceptual ability. Let's use Where's Waldo as an example. The participant's goal is to find Waldo (signal) in a picture with a lot of people in it (noise). The criterion for Waldo might be that he has glasses, a hat and a striped shirt. Let's say you can't find him, so you relax the criterion to only a striped shirt. This will increase your chances of finding him but also increase the risk of misidentifying another person with a striped shirt as him. Using the same example for sensitivity, the researcher could increase the size of Waldo (easier to detect) or make the picture black and white (harder to detect). If sensitivity goes up, hits increase while the risk of false alarms goes down.
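As a rough illustration, sensitivity (d') and criterion (c) can be computed from the hit and false-alarm rates with the inverse normal transform; the rates in the sketch below are made-up example values.

```python
# Sketch: computing sensitivity (d') and criterion (c) from hit and false-alarm rates.
# The rates are made-up example values.
from scipy.stats import norm

hit_rate = 0.80          # P("yes" | signal present)
fa_rate  = 0.20          # P("yes" | signal absent)

z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)

d_prime   = z_hit - z_fa            # sensitivity: separation of signal and noise distributions
criterion = -0.5 * (z_hit + z_fa)   # c > 0 = conservative bias, c < 0 = liberal bias

print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```

A d' of 0 means signal and noise are indistinguishable; c = 0 means an unbiased criterion, a positive c a conservative one and a negative c a liberal one.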
Who was Eadweard Muybridge and how was his work relevant for motion measurement? (1p)
Eadweard Muybridge was the first person to "film" a moving object. He did this by placing multiple cameras along a horse racing track and taking a rapid sequence of pictures of a galloping horse. He then put the pictures together, showing the motion of the running horse and proving that at one point in the gallop all four of the horse's hooves are off the ground. This work was revolutionary for motion measurement, and his photographic sequences are often seen as a forerunner of the motion picture.
Explain why “dead reckoning” can cause problems for researchers who use accelerometers to study behaviour and describe two principal strategies one can use to minimize ‘dead reckoning problems’ in accelerometer-based assessments
"Dead reckoning" in accelerometer-based behaviour studies refers to estimating position solely from accelerometer data: acceleration has to be integrated twice to obtain position, so small measurement errors and sensor drift accumulate into large position errors over time (as the short simulation after this answer illustrates). To minimize these problems:
Integration with Other Sensors: Combine accelerometer data with gyroscopes, magnetometers, or GPS to improve accuracy.
External Calibration and Validation: Periodically validate accelerometer data against external reference points or measurements to correct for drift and errors.
These strategies help researchers mitigate dead reckoning issues and enhance the reliability of their behavior tracking.
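A minimal simulation can show the drift: even a perfectly stationary sensor with a little zero-mean measurement noise produces a position estimate that wanders away from the true position, because the noise is integrated twice. The noise level and duration below are assumed example values.

```python
# Sketch: why dead reckoning drifts. The true acceleration is 0, but the noisy measurement,
# integrated twice, yields a position estimate that drifts over time.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                                   # 100 Hz sampling
t  = np.arange(0, 60, dt)                   # 60 seconds of recording
accel = rng.normal(0.0, 0.05, size=t.size)  # measured acceleration (true value is 0 m/s^2)

velocity = np.cumsum(accel) * dt            # first integration: acceleration -> velocity
position = np.cumsum(velocity) * dt         # second integration: velocity -> position

print(f"Position error after 60 s: {position[-1]:.2f} m (should be 0 m)")
```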
Name and describe two different ways of assessing the reliability of observation coding.
Reliability in observational methods
One way to assess the reliability of observation coding is to check interrater/interobserver reliability. You do this by letting two or more observers observe the same thing. The more the observers agree, for example by coding behaviours the same way, the higher the interobserver reliability. A way to increase the chances of agreement is to have clear definitions of the behaviours and clear criteria for coding.
Another way to test reliability is test-retest reliability. This can be assessed by letting the same observer code the same material (for example a recording) on more than one occasion and checking whether the behaviours are coded the same way both times.
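As a concrete illustration of interrater reliability, Cohen's kappa corrects the raw percentage of agreement for the agreement expected by chance; the behaviour codes in the sketch below are made-up example data.

```python
# Sketch: Cohen's kappa for two observers coding the same behaviours.
# The codes are made-up example data.
from sklearn.metrics import cohen_kappa_score

observer_a = ["play", "rest", "play", "aggression", "rest", "play", "rest", "play"]
observer_b = ["play", "rest", "play", "rest",        "rest", "play", "rest", "play"]

kappa = cohen_kappa_score(observer_a, observer_b)
print(f"Cohen's kappa = {kappa:.2f}")   # 1 = perfect agreement, 0 = chance-level agreement
```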
When making observations the researcher often changes the study participants' behaviour and makes the observation less valid. Name at least four strategies to reduce participant reactivity to the observer.
One way to minimize participant reactivity is to make the observation double-blind, meaning that both the participant and the observer are blind to the study's hypothesis; this reduces participant reactivity as well as expectancy effects from the observer. You can do this, for example, by bringing in an observer who is not familiar with the ongoing research. Another strategy is to wait before coding, for example not observing the first 10 minutes, so that participants habituate to being observed. You can also tell participants that you will actively observe for only 10 minutes during the hour, but not which 10 minutes, which likewise reduces reactivity. A fourth strategy is unobtrusive (concealed) observation, for example observing through a one-way mirror or from recorded material, so that participants are less aware of being watched.
You would like to know how the medial temporal lobe reacts when watching cat-gifs. You are going to use both fMRI and PET for this wonderful study. In the PET study you are going to use a tracer for the serotonin transporter called DASB. Both the PET and fMRI studies use a subtraction design. Using the simplest possible design for both PET and fMRI:
What is subtracted from what in the PET study? How many sessions will you run with each participant?
What is subtracted from what in the fMRI study? How many sessions will you run with each participant?
PET is a functional brain imaging technique with which you can study neurochemical processes, such as metabolism. A radioactive tracer is injected and, as it decays in the body, it emits gamma rays that give information about where the neurochemical process takes place. In this study I will use a tracer (DASB) that binds to the serotonin transporter. In a subtraction design I inject the tracer and take a baseline measurement without cat gifs, and then repeat the whole procedure while showing cat gifs. PET has poor temporal resolution and is not suited for measuring quick changes, so I would run two sessions, one baseline scan and one cat-gif scan. Since the tracer also exposes the participant to radiation, I would not run too many sessions. What is subtracted is the baseline DASB (serotonin transporter) binding in the medial temporal lobe from the binding measured while watching cat gifs. With this method you see the change that is caused only by the task and not by the baseline.
fMRI is also a functional brain imaging technique, but here you see activation in the brain based on the different magnetic properties of oxygenated and deoxygenated blood (the BOLD signal). Activated parts of the brain use more oxygen, which makes this method valuable for functional imaging. In a subtraction design you first measure the "baseline" activation of the medial temporal lobe when the participant is not watching cat gifs and then measure again during the cat-gif task. fMRI is an indirect measure of activation; its spatial resolution is good, but since the temporal resolution is somewhat limited (about 1 s per volume) you could miss activation by measuring only once, so with more resources I would repeat the measurement. The simplest design, however, is a single session in which you first measure the baseline and then show the cat gifs. What is subtracted is the baseline activation from the activation during the cat-gif task, which in fMRI means the level of oxygenated blood in the medial temporal lobe while watching the gifs minus the level at baseline. This gives information about activation caused only by the cat gifs.
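The subtraction itself is simple arithmetic, as the hedged sketch below shows: average the volumes acquired during the task, average the baseline volumes, and subtract voxel by voxel. The arrays are random stand-ins for preprocessed data, not real scans.

```python
# Sketch: the arithmetic of a subtraction design. The arrays are made-up stand-ins for
# preprocessed volumes: one set acquired at baseline, one while viewing cat gifs.
import numpy as np

rng = np.random.default_rng(1)
baseline_scans = rng.normal(100, 5, size=(20, 64, 64, 30))  # 20 baseline volumes
task_scans     = rng.normal(100, 5, size=(20, 64, 64, 30))  # 20 cat-gif volumes

# Mean task activation minus mean baseline activation, voxel by voxel:
difference_map = task_scans.mean(axis=0) - baseline_scans.mean(axis=0)
print(difference_map.shape)   # one difference value per voxel
```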
Provide a psychological construct (e.g., extraversion) and create an implicit test to measure it.
Implicit Test: Implicit Association Test (IAT) for Self-Esteem
Task: Participants categorize self- and other-related words/images using key presses.
Stimuli: Self-related and other-related words/images are presented randomly.
Category Pairings: Participants categorize stimuli, pairing self with positive/negative attributes.
Response Time: Faster response times for self-positive pairings indicate higher implicit self-esteem.
Analysis: The test calculates a D-score, indicating the strength of association between self-esteem and positive/negative attributes.
This test provides insights into individuals’ automatic evaluations of themselves.
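As a rough illustration of the scoring, a simplified D-score divides the mean latency difference between the two pairing blocks by the pooled standard deviation; the reaction times below are made-up example values, and the published scoring algorithm adds error penalties and latency trimming.

```python
# Sketch: a simplified IAT D-score. Reaction times (ms) are made-up example data.
import numpy as np

rt_self_positive = np.array([620, 580, 650, 600, 590], dtype=float)  # compatible block
rt_self_negative = np.array([750, 820, 780, 760, 800], dtype=float)  # incompatible block

pooled_sd = np.concatenate([rt_self_positive, rt_self_negative]).std(ddof=1)
d_score = (rt_self_negative.mean() - rt_self_positive.mean()) / pooled_sd

print(f"D-score = {d_score:.2f}")  # larger positive values = stronger self + positive association
```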
Describe how to examine the construct validity and the reliability of your implicit measure
Convergent Validity: Assess the degree to which the IAT correlates with other measures of self-esteem, such as explicit self-report questionnaires like the Rosenberg Self-Esteem Scale. A strong positive correlation would support the construct validity of the IAT.
Reliability:
Internal Consistency: Calculate the internal consistency of the IAT, for example by splitting the trials into two halves, computing a D-score for each half, and correlating the two (split-half reliability). Higher correlations indicate greater reliability.
Test-Retest Reliability: Administer the IAT to the same participants on two separate occasions and examine the correlation between their scores. A high correlation suggests that the measure is stable over time.
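Both checks boil down to correlations, as in the sketch below; the D-scores are made-up example values, and the split-half correlation is stepped up to full test length with the Spearman-Brown formula.

```python
# Sketch: reliability checks for the IAT. All D-scores are made-up example data.
import numpy as np

# Internal consistency: D-scores computed from two halves of the trials for each participant.
d_half1 = np.array([0.45, 0.60, 0.20, 0.75, 0.50, 0.35])
d_half2 = np.array([0.50, 0.55, 0.30, 0.70, 0.40, 0.30])
r_half = np.corrcoef(d_half1, d_half2)[0, 1]
split_half = 2 * r_half / (1 + r_half)          # Spearman-Brown correction

# Test-retest reliability: D-scores from two separate sessions.
d_time1 = np.array([0.45, 0.60, 0.20, 0.75, 0.50, 0.35])
d_time2 = np.array([0.40, 0.65, 0.25, 0.60, 0.55, 0.30])
r_retest = np.corrcoef(d_time1, d_time2)[0, 1]

print(f"Split-half reliability = {split_half:.2f}, test-retest r = {r_retest:.2f}")
```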
Magnitude matching
/adjustment task
ex. the participant holds a weight in one hand and reports when an adjustable weight in the other hand matches the first weight
ICC
Item characteristic curve
Reflexive Thematic Analysis
Mnemonic: FCGNR (Familiarize, Code, Generate themes, Name themes, Report)
Not a theory, but an approach to processing data in qualitative research
Familiarize
Code (categorically, descriptively or analytically)
Generate themes (related to the research question)
Name themes
Report
Signal Detection Theory
Framework used to analyze decision making in the presence of uncertainty, particularly in situations where a participant must distinguish between a signal and noise
Signal: meaningful info, what we’re trying to detect
Noise: non-meaningful info
Occam’s razor
If two models have the same goodness of fit,
choose the simpler one
Goodness of fit: how well the model explains the data
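As a rough operationalization of this idea, information criteria such as AIC penalize extra parameters, so two models with nearly equal fit are separated by their complexity; the log-likelihoods and parameter counts below are assumed example values.

```python
# Sketch: Occam's razor operationalized with AIC. Log-likelihoods and parameter counts
# are made-up example values for two models that fit the data about equally well.
def aic(log_likelihood, n_params):
    """Akaike information criterion: lower is better; complexity is penalized."""
    return 2 * n_params - 2 * log_likelihood

simple_model  = aic(log_likelihood=-120.0, n_params=2)
complex_model = aic(log_likelihood=-119.5, n_params=6)

# Near-equal fit, so the simpler model gets the lower (better) AIC.
print(f"AIC simple = {simple_model:.1f}, AIC complex = {complex_model:.1f}")
```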
Ordered response categories
A higher person location (sum score) on the latent variable should entail an increased probability of a higher response (category) for all items and vice versa. Sometimes referred to as 'monotonicity'.
We can check this by looking at the item characteristic curves (ICC)
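For a dichotomous item, the Rasch ICC is a logistic curve in the person location theta, so the probability of a higher response rises monotonically with theta; the item difficulty in the sketch below is an assumed example value.

```python
# Sketch: a dichotomous Rasch item characteristic curve (ICC). The probability of endorsing
# the item rises monotonically with person location theta. Item difficulty is a made-up value.
import numpy as np

def rasch_icc(theta, difficulty):
    """P(X = 1 | theta) under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - difficulty)))

theta = np.linspace(-4, 4, 9)        # person locations on the latent variable
p = rasch_icc(theta, difficulty=0.5)
for th, prob in zip(theta, p):
    print(f"theta = {th:+.1f} -> P(higher response) = {prob:.2f}")
```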
Which of the following items are included under “methods” when planning & reporting qualitative research?
Limitations
Interpretation
Context
Purpose/research question
Context
Which research approach is based on the epistemological viewpoint of pragmatism?
Mixed methods research
Quantitative research
Qualitative research
All of the above
Mixed methods research