BPK 304W Flashcards
What is the scientific method?
- define the problem
- develop a research question and hypothesis
- design the study and protocol - test the hypothesis
- compile the results - communicate the results
Making a good research question - key points
- specify the patients
- specify the intervention
- specify the control
- specify the outcomes
- WHO, WHAT, HOW, WHY
- Instead of phrasing it as a yes/no question, make it a statement that you will try to support
eg. "does it do this?" (wrong)
"this is what it does" (right)
Validity vs reliability
Validity: minimizing systematic measurement error, which improves the accuracy of the result
Reliability: minimizing random measurement error, which allows you to reproduce the results
Testing the hypothesis for an effect or a difference (what test)
T-test, ANOVA, ANCOVA
testing the hypothesis for relationships or associations (what test)
correlation testing, regression, and p-value for significance
What is a Normal Frequency Distribution and what are the values at the peak
the peak sits at the mean (0 SD from the mean)
- mean +/- 1 SD covers ~68% of values (34% on each side); +/- 2 SD covers ~95%
at the peak you find the:
Mean: the average value (use if normal dis)
Median: the absolute middle of the distribution (from least to greatest) (use if not normal dis)
Mode: the most often occurring value
Skewness
- what is it
- what does it represent
- what values are significant
Skewness is a measure of symmetry (or the lack of)
- perfectly symmetrical data produces a bell-curve shape
- negatively skewed = long LEFT tail (skewness less than -1)
- positively skewed = long RIGHT tail (skewness more than +1)
- if -1 < skewness < +1, the distribution is approximately normal
Kurtosis
- what is it
- what does it represent
- what values are significant
Kurtosis is a measure of peakedness
- high kurtosis (above 3) = more outliers and more tail data than a normal distribution
(distinct peak - leptokurtic)
- low kurtosis (below 3) = fewer outliers and less tail data than normal
(flat peak - platykurtic)
- normal distribution = kurtosis of ~3 (mesokurtic)
Standard Deviation
quantifies the amount of variation - how spread out the values are from the mean
Central Limit Theorem
the sampling distribution of the mean approaches a normal distribution as the sample size gets large, regardless of the population's distribution
Standard error of the mean (SEM)
an estimate of how far the sample mean is likely to be from the true population mean (the precision of the result)
- used to build your 95% confidence interval
- SEM = SD/sqrt(N) -> as N increases, the SEM shrinks and the sampling distribution becomes more normal
- this is the error OF the mean itself
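The SEM and a 95% confidence interval take one line each; the mean, SD, and N below are invented numbers for illustration:

```python
import math

# Sketch of SEM and a 95% CI (sample numbers are made up).
sample_mean, sd, n = 100.0, 10.0, 25

sem = sd / math.sqrt(n)                                   # SD / sqrt(N)
ci95 = (sample_mean - 1.96 * sem, sample_mean + 1.96 * sem)
# sem -> 2.0, so the 95% CI is roughly 96.1 to 103.9
```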
Z scores (Standardization)
how far a score is from the mean, measured in number of SDs (units) above or below the population mean
- this standardizes the data and ELIMINATES UNITS IN THE GRAPH – it’s just measured by STDs
- this does NOT change the distribution of the data
- done by making the reference Mean = 0 and the STD = 1 then seeing the scoring
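The internal-norm idea above can be sketched in a few lines of Python (the scores are invented for illustration):

```python
# Minimal sketch of internal-norm z-scoring; the scores are made up.
scores = [50.0, 60.0, 70.0, 80.0, 90.0]

mean = sum(scores) / len(scores)                           # reference mean
std = (sum((x - mean) ** 2 for x in scores) / len(scores)) ** 0.5

z_scores = [(x - mean) / std for x in scores]
# The z-scores now have mean 0 and SD 1; the shape of the
# distribution itself is unchanged, only the units are gone.
```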
internal norm of a Zscore
Standardizing based on changing the reference mean = 0 and STD = 1
- this is based on the mean and STD of the sample itself and *compares INTERNALLY - ind compared to other ind in that group
External Norm (Z-score)
Reference Mean and STD are changed and measured by NORMATIVE data’s mean and STD
- this compares the experiment group to the EXTERNAL normative data
Tscore
a rescaled z-score: the reference mean is set to 50 and the SD to 10 (T = 50 + 10z), so scores avoid negative values
percentile
the percentage of population that lies at or below that score
- Eg. 95%, 75% 50% etc
inferential stats
you make inferences about relationships between variables and can test hypotheses
Testing for statistical significance
Compare the test statistic to the critical value (alpha - usually 0.05 @ 95% confidence interval)
Type 1 error (Alpha Error)
You conclude that the results are significant when they are actually not
- you incorrectly reject a true null hypothesis
- the larger your alpha, the more likely you are to make a type 1 error
p-value
the probability of obtaining a result at least this extreme if the null hypothesis is true (i.e., purely by chance)
- if your p-value is small, the result is unlikely to be due to chance
- if your p-value is large, the result could easily be due to chance (not a real effect)
the alpha (significance level) is chosen as per the experiment but common is 0.05 (95% confidence)
type 2 error (Beta)
the probability of failing to reject the null hypothesis when it is actually false (a real effect is missed)
one-tailed test
you are sure that the result only moves in ONE direction (either above or below normal distribution)
Two-tailed test
The alpha tests both directions of the study (above the normal distribution and below it)
Independent T-test Definition
&what’s the tvalue for this
Comparing 2 independent means from 2 different groups
- a t-statistic is calculated from the difference of the means
*NOTE: the decision rule is the OPPOSITE of the p-value rule*
- the CALCULATED t-value has to be GREATER than the CRITICAL t-value (at alpha = 0.05) to reject the null
*typically used with fewer than 120 subjects
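A hand-rolled sketch of the equal-variance independent t-test (both groups are invented; in practice scipy.stats.ttest_ind computes the same statistic):

```python
import math

# Equal-variance independent t-test by hand; the two groups are made up.
g1 = [5.0, 6.0, 7.0, 8.0, 9.0]
g2 = [7.0, 8.0, 9.0, 10.0, 11.0]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)   # n - 1 denominator

n1, n2 = len(g1), len(g2)
pooled = ((n1 - 1) * sample_var(g1) + (n2 - 1) * sample_var(g2)) / (n1 + n2 - 2)
t = (mean(g1) - mean(g2)) / math.sqrt(pooled * (1 / n1 + 1 / n2))
df = n1 + n2 - 2
# |t| is compared to the critical t for df at alpha = 0.05;
# if it is larger than the critical value, reject the null.
```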
Assumptions of the Independent T-Test
- The dependent variable is continuous and ~normal
- independent variable is in two separate groups
- the groups are independent of each other (no relationship between them)
- the groups have the same variance (tested with Levene's)
Levene’s Test
checks whether several groups have the same variance
- tests the null hypothesis that the variances are equal - the samples can be compared with the standard t-test if they do
- p-value greater than 0.05 = the variances are not significantly different (assumption met)
standard error of the difference BETWEEN means
- the variability of the difference between two means
- it combines the variance of each mean
Paired T-Test
comparing two means from related (dependent) samples
- before and after exp
- repeated measurement
- the groups are related to each other
*typically used with fewer than 120 subjects
ANOVA
used to compare 2 or more means across factors
*typically used with 120+ subjects
Assumptions for Randomized (between-groups) ANOVA
- dependent is continuous and ~normal dist.
- the independent variable has 2 or more independent groups
What is a factor in an ANOVA test
a factor is an independent variable with a number of levels
- Between-subjects factor identifies same independent variable but different groups
eg. walking in both seniors and young ppl
- within subjects factor (or repeated measures) identifies different conditions (ind. vari) but the same 1 group
eg. 1 group doing the same exercise at 3 different weights
One-way ANOVA
1 factor is being tested - but it can have 2 or more levels
Two way ANOVA
2 or more grouping factors are present
- used to see the significance of the individual factors and the interaction between them
Types of ANOVA
- Between subjects factors => use Randomized Groups ANOVA
- Within subject factors => use Repeated measures ANOVA
both factors = mixed ANOVA
What is the F-Statistic and how does it inc?
F-stat is a ratio of between-group variability to within-group variability
F-stat inc if:
between group variability inc and
within group variability dec
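The F ratio can be computed by hand for a toy example (three invented groups; in practice software such as scipy.stats.f_oneway does this):

```python
# One-way ANOVA F statistic by hand; the three groups are made up.
groups = [[4.0, 5.0, 6.0], [7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]

k = len(groups)
n = sum(len(g) for g in groups)
grand_mean = sum(sum(g) for g in groups) / n

# between-group sum of squares: how far each group mean sits from the grand mean
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# within-group sum of squares: spread of scores around their own group mean
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

ms_between = ss_between / (k - 1)
ms_within = ss_within / (n - k)
F = ms_between / ms_within
# F grows when group means spread apart (between inc) or scores
# cluster tightly inside each group (within dec).
```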
Post-Hoc Test
A follow-up test that's done if the original ANOVA was significant
- used when comparing 3+ means
- done because the ANOVA alone doesn't tell you which of the 3+ means differ significantly from each other
- Scheffe's/Tukey's are good choices
Correlation
a relationship exists between 2 variables and they are related in some way
linear correlation coefficient (r)
the strength of the correlation between 2 variables
- closer to +1 or -1 = a perfect (pos or neg) linear relationship (0 = no correlation at all)
- the correlation depends on the RANGE => restricting the range lowers the correlation; when all points are considered, the correlation increases
- critical value based on sample size and significance
- df = n - 2
- stat sig doesn’t mean practical sig
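Computing r from scratch makes the definition concrete; the x/y data below are invented (scipy.stats.pearsonr gives the same r in practice):

```python
# Pearson r by hand; the paired data are made up for illustration.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 4.0, 5.0]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))       # co-variation
sx = sum((a - mx) ** 2 for a in x) ** 0.5
sy = sum((b - my) ** 2 for b in y) ** 0.5

r = cov / (sx * sy)      # between -1 and +1
r2 = r ** 2              # coefficient of determination: % of shared variance
```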
Pearson Correlation
strength of LINEAR association using a best fit line between the data points of the 2 variables
Requirements to run a Pearson correlation; what is an outlier, and how do outliers affect the coefficient?
- the pair of data is a random sample of independent quantitative data
- visual examination that the plot is in a straight-line pattern
- if the outliers are errors they have to be removed
- (outlier = more than 3 std away from mean)
- on-line outliers inc coeff. and off-line outliers dec coeff.
coefficient of determination
correlation^2 = coeff of determination
- this is the % of variance in one variable explained by the other (out of 100%)
Linear Regression
Y = mx+b
- predicting a variable using another variable to do it
- the one being used to predict = independent and the one trying to be explained = dependent
- linear regression tells you how much the dependent variable changes when you change the independent variable (plus the strength, direction, and statistical significance of the relation)
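The least-squares fit behind y = mx + b can be written out directly; the data here are invented and lie exactly on y = 2x + 1, so the fit recovers those coefficients:

```python
# Least-squares fit of y = m*x + b; made-up data on the line y = 2x + 1.
x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
m = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
b = my - m * mx

def predict(x_new):
    # the slope m is how much y changes per unit change in x
    return m * x_new + b
```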
whats the difference between correlation and linear regression?
correlation tells you the strength of the relationship between 2 variables, whereas
LR gives you an equation in which one variable (x) predicts the other (y)
standard error of the estimate
how precisely the regression line predicts the observed values (the distance between predicted and observed)
- expressed in units of Y
- about 2 out of 3 predictions (~68%) fall within +/- 1 SEE
slope of linear regression
y=mx+b
m= slope
- tells you how drastically ‘y’ changes for every unit x is changed
how do you test linear regression
cross-validation study (build the equation on one sample, then test it on another sample) - if the SEE is similar, it's valid
split-sample study (split the sample in half, build on one half, then test on the other) - if you get the same results, it's valid
standardized linear regression
making everything into z-scores to show the importance of each variable and how much it impacts the Y
non-parametric stats
don’t require the data to follow a particular distribution
- not normal or continuously distributed
-Eg. a survey or a likert scale
parametric stats
what is generally thought of when you think of stats
- normal distribution
- continuous
Eg. T-test, ANOVA, correlation, and Linear Reg
chi-squared
tests whether observed frequencies differ from a set of expected frequencies
2 way chi-squared
2 categorical variables are considered - test of independence between them
- the relationship is significant if the chi-squared value gives p < 0.05
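The chi-squared statistic is simple to compute by hand; the observed and expected counts below are invented, with the expected counts assuming an even split:

```python
# One-way chi-squared: do observed counts differ from expected counts?
observed = [25, 35]
expected = [30, 30]   # assumed even split across the two categories

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
# chi2 is then compared against the critical chi-squared value
# for df = (number of categories - 1) at alpha = 0.05.
```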
Binary Digits
has the base of 2 and codes digital information as 1’s and 0’s
(2^x)
decimal system
base of 10 and codes single digit numbers
accelerometer
a spring element is stretched along the x, y, z axes, and the displacement is proportional to acceleration
force plates
3 components (x, y, z) ranging 4000-20,000 N force
- sampling rate ~1000Hz
- the signal gets noisy because the heel of the shoe causes vibrations that are picked up in the GRF
motion capture
passive or active markers, cameras, and each have a 3d coordinate
- putting the markers on bony landmarks allows you to use the data and create a visualization on the computer after compiling it
treadmills
standard
- incline
- split belt
- force plates
- projectors (VR) - eg. retraining gait after stroke by using projections to change the visual input
dynamometer (biodex system)
assessing joint angles, force, and torque at different/specific joints
goniometer
angle changes (electrical goniometer allows to see changes over time)
ultrasound
soft tissue – high resolution
- you can track the insertion, origin, and muscle makeup to gauge muscle mechanics
- some ultrasounds are mobile and can be used to make videos of the muscles (shortening/lengthening) in real time
energetic testing
used to measure metabolic rate and energy expended
- O2 consumption, CO2,VO2 max?
what is analog to digital conversion
- picking up a signal (analog) and then doing signal processing so it gets converted into a binary code to be read as a digital signal
how to do analog to digital conversion
- record analog signal - transducer or electrodes
- condition the signal (first amplify it, then filter it, then integrate it)
- convert it using a conversion board and software (Arduino etc.)
- this can then be stored as binary code / a digital signal
what are the 3 functions of a A/D board
sampling, quantization, and encoding
Sampling of A/D board
the analog signal is sampled at a specific frequency
- Nyquist rule - the sampling freq has to be at least 2X the highest frequency of the signal to avoid aliasing
* aliasing = when the sampling freq is too low and the reconstructed wave appears at a false, lower frequency
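The Nyquist rule can be illustrated with the standard frequency-folding formula (a small sketch, not tied to any particular hardware):

```python
# Aliasing sketch: when sampling is too slow, a frequency "folds" down
# to a false lower frequency in the sampled data.
def aliased_freq(f_signal, f_sample):
    # frequency that actually appears after sampling (folding formula)
    f = f_signal % f_sample
    return min(f, f_sample - f)

# A 9 Hz sine sampled at 10 Hz shows up as 1 Hz (Nyquist demands >= 18 Hz);
# sampled at 20 Hz it appears correctly at 9 Hz.
too_slow = aliased_freq(9, 10)
fast_enough = aliased_freq(9, 20)
```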
what is quantization A/D board
numbers of levels of voltage is determined by the number of bits in the conversion
1 bit = 2^1 = 2
2 bits = 2^2 = 4
3 bits = 2^3 = 8 etc
encoding A/D board
assigning a digital code to certain bits so the digital sample can be read on the computer
resolution of an A/D board
the smallest amplitude that can be detected and make a change in the digital signal
- V/bit
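Quantization and resolution combine into a one-line calculation; the 10 V range and 12-bit depth below are assumed example values:

```python
# Resolution (V/bit) of an A/D board: assumed 10 V span, 12-bit converter.
v_range = 10.0            # total voltage span, e.g. -5 V to +5 V
bits = 12

levels = 2 ** bits                  # number of quantization levels
resolution = v_range / levels       # smallest detectable voltage step (V/bit)
```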
raw signals of EMG (what are they)
sum of all action potentials in the underlying muscle captured by the electrode
- no filtering or anything has been done - just collected data (very noisy)
signal amplification
small signal input is amplified to a larger signal output
A= (output/input)
- this allows us to see the data better and reveals weak signals
- larger signal-to-noise ratio
- better transmission
Offsetting Data
this is when you set a baseline for data to allow it to match up with the time vector
eg. data starts at 300ms - you offset it so that the baseline is 0 ms so that it is able to match the time vector
- amplitude offset - the signal starts at 3 volts –> you lower it to 0 so that you can begin the signal processing at 0 (baseline)
Serial Data
Collected over time (longitudinal)
- data points are dependent on each other and not smooth
- remeasuring the data points over time with a variable
Signal Noise:
the components that the transducer picks up that are not part of the measured signal
- this can be 60Hz of AC and other electrical noise (touching your laptop while on charge)
- motion artifact (the motion can get caught ~0-20Hz)
Smoothing (EMG)
applying a band-pass filter (a high-pass plus a low-pass) based on the 'typical' frequency range of the signal you are collecting
eg. raw EMG contains frequencies from about 10 Hz up to 500 Hz -> a high-pass filter removes everything below 10 Hz and a low-pass filter removes everything above 500 Hz, leaving a cleaner set of EMG data without unnecessary frequencies
difference between 21 point moving average and butterworth smoothing
- with a 21-point moving average (averaging 10 points before and 10 after) you can 'over-smooth' the data and lose the amplitude of the actual signal
- a Butterworth filter lets you set a threshold for the frequencies you want to cancel out, so it is a better way to filter: you can adjust the cutoff (Hz) without cutting necessary data
moving average (average all the values) vs weighted average
unweighted: you average every point in the window, treating them all as equal
weighted: more importance is put on the middle points of the window => this can give a more accurate value when you know the values at the very beginning and end of the window are less reliable
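A minimal sketch of both averages (the window size, data, and weights below are arbitrary choices for illustration):

```python
# Unweighted vs weighted moving average over a sliding window.
def moving_avg(data, w):
    # unweighted: every sample in the window counts equally
    half = w // 2
    return [sum(data[i - half:i + half + 1]) / w
            for i in range(half, len(data) - half)]

def weighted_avg(window, weights):
    # weighted: more importance on the middle sample(s)
    return sum(v * wt for v, wt in zip(window, weights)) / sum(weights)

smoothed = moving_avg([1, 2, 3, 4, 5, 6, 7], 3)
center_weighted = weighted_avg([1.0, 10.0, 1.0], [1, 2, 1])
```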
signal averaging
adding the signal up over repeated trials so that the random noise cancels out - this allows the true signal to show
Frequency and Time Domain
freq domain: amplitude on the y-axis and frequency on the x-axis
- frequency = cycles per second (Hz)
time domain: amplitude plotted against time (over the period of the signal)
slow twitch fiber frequency
fast twitch fiber frequency
(know approx)
slow twitch: ~70-125 Hz
fast twitch: ~125-250 Hz
fast fourier transform
allows you to know what frequencies are in your signal and in what proportions
- this also finds the base signal frequency
- magnitude and amplitude are the same
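A slow, from-scratch DFT shows the idea behind the FFT (an FFT library gives the same answer much faster); the 64 Hz sampling rate and 5 Hz tone below are invented:

```python
import math

# Discrete Fourier transform by hand (O(N^2), fine as a sketch):
# recovers which frequencies are present in a sampled signal.
def dft_magnitudes(signal):
    n = len(signal)
    mags = []
    for k in range(n // 2 + 1):          # bins up to the Nyquist frequency
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        mags.append(math.hypot(re, im) / n)
    return mags

fs = 64                                                   # sampling rate (Hz)
sig = [math.sin(2 * math.pi * 5 * t / fs) for t in range(fs)]  # pure 5 Hz tone
mags = dft_magnitudes(sig)
peak_bin = max(range(len(mags)), key=lambda k: mags[k])   # strongest frequency
```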
stationary frequency vs non-stationary frequency
stationary freq: a repeating cycle of waves that are at a set freq (2Hz, 10Hz, and 20Hz)
non-stationary freq: a repeating cycle that goes from 2-20Hz but continuously changing
freq analysis of a muscle unit (fast and slow)
gather the freq spectrum (and components of the signal)
- fourier transform
- determining the onset of muscle fatigue
- as fatigue increases, fast twitch fibers drop out and the median frequency shifts lower
- non-invasive way to see the motor units characteristics in an objective way
Ground Reaction Forces
forces measured by force plates that are in all axes (x, y, z) (vert, A=>P, M=>L)
- time resolution can be important depending on the type of measurement you're doing (eg. for stationary tasks the time resolution doesn't have to be high, vs sprinting, where you want high resolution to measure the data closely and precisely)
common gait analysis sampling rates
200-1000Hz
GRF data
vertical data - force in the vertical (y) direction (double hump)
GRF Processing
filtering - low-pass (2nd-4th order)
- cutoff around 5-20 Hz
- lets everything below the cutoff pass
band-pass filter:
- 2nd order
- lets frequencies between two cutoffs pass
GRF Analysis
looking at gait events
- heel strike 5%+ body weight force
- toe off less than 5% body weight force
peak force GRF
- rough estimate of heel strike to heel strike
freq domain analysis GRF
- power content across the freq
- freq bandwidth (difference between the max and min frequencies where power > 1/2 max power)
- median freq
- these allow for gait impairments to be detected (Parkinson’s, CP, MS etc.)
Crouch Gait (CP): unable to fully extend - you can see the GRF is diff from a ‘typical’ gait pattern in vertical and ant/post gait events
GRF Analyses
-symmetry between 2 sides (affected by limb disproportions, health issues, diseases)
- first peak:
- loading rate (force/sec) and amplitude
- implications of prevention of running injuries (type of running, footwear etc)
combine all of these with anthropometric and kinematic data => joint dynamics
ECG
small EXTRAcellular signals generated through the APs of cardiomyocytes
- 12 lead ECG
-mV vs time
- tells you the orientation of the heart
- chamber sizes
- irregularities of the heart
- arrhythmias/MI/heart failure etc
Cardiac Hz
the heart has a fundamental freq of ~1 Hz (1 cycle/sec)
- the recording also picks up unwanted noise:
- 5-50 Hz = muscle (EMG)
- < 0.5 Hz = respiration
- 60 Hz = electrical (AC mains)
ecg filter (Bandpass)
0.5 - 40Hz
ECG electrode
depolarization results in the voltage becoming more +pos in the cell
- depolarization moving toward the pos electrode = upward deflection
- depolarization moving away from pos electrode = downward deflection
12 lead ECG
2 electrodes = 1 lead
- 1 electrode is the positive
- 1 electrode is the negative
- they record the voltage difference and produce a view of the heart at a particular angle and plane
PQRST Wave
p = atrial depolar
QRS Complex = atrial repolar and ventricular depolar
T = ventricular repolar
Atrial fibrillation causes the p-wave to disappear (no full atrial depolarization) and the QRS to be irregular
Heart Rate Variability
Changes in time intervals between heartbeats
- this can change rapidly depending on the physiological factors
- the time between 2 R-waves (the R-R interval) is the basic measurement of HRV
HRV Time Domain
SDNN: standard deviation of NN intervals (NN = normal R-R intervals)
RMSSD: root mean square of the successive differences between NN intervals
NN50: the number of pairs of successive NN intervals that differ by more than 50 ms
recording lengths: 2 min - 24 hrs
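The three time-domain metrics can be computed directly from a short series of R-R intervals; the interval values (in ms) are made up:

```python
import math

# HRV time-domain metrics from an invented series of R-R intervals (ms).
rr = [800, 810, 790, 850, 820, 805, 795, 860]

diffs = [b - a for a, b in zip(rr, rr[1:])]   # successive differences

mean_rr = sum(rr) / len(rr)
sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr) / (len(rr) - 1))
rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
nn50 = sum(1 for d in diffs if abs(d) > 50)   # pairs differing by > 50 ms
```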
HRV Freq Domain
assigns bands of frequencies and determines how much of the NN-interval variability falls in each range
eg. 15 RR intervals at a 600ms time
Reasons why to collect EMG data
- timing of contractions
- response latencies (reflex and recover balance)
- clinical devices - myoelectric control / electric stimulation of myocytes
- est. force of contractions
- monitor fatigue
Power Spectral Density analysis (Fast Fourier transform)
provides info on how power (variance) distributes as a function of freq
HRV Tech wearable
PPG light sensors
- track the R-R interval in ms
- light-based detection has lower precision for the R-R interval - lower accuracy of HRV compared to ECG
Low freq/high freq ratio
symp and vagal balance
HRV Freq
High = 0.15 - 0.5Hz (parasymp)
Low = 0.04 - 0.15Hz (symp)
para = breathing
symp = Blood pressure
ECG electrode Compared to a PPG light ring
ECG, because it is stuck on your skin and measures depolarization directly (towards the positive electrode or away from the negative one), allows a more accurate collection of the R-R interval through the changing electrical activity of the heart's chambers, with ms resolution
- a PPG light, by contrast, has to penetrate the skin to detect blood flow in a vessel (the pulse) and infer the R-R interval from that, which is harder to do with ms accuracy
EMG
APs summed from the underlying muscle
- continuously changes voltage in the time domain
myoelectric control prostheses
- you record EMG from the muscles that produce a certain pattern of activation => this pattern can control the prosthesis and allow it to feel 'connected' to the individual
myoelectric controlled prosthesis target innervation
recognizes the nerve signals that controlled the limb from the brain - this EMG is collected so those nerves can control the prosthetic
- eg. you think of closing your hand, the nerves routed into the deltoid are activated, and that activity is programmed with the arm prosthetic to close the palm of the hand
functional electrical stimulation
picking up one muscle's activation and being programmed to then electrically stimulate another
eg. if the extensor muscles work but the one that extends a digit does not, you can program the FES to stimulate that digit extensor every time the other associated muscles contract
- can also work for foot drop (EMG on your gastroc looks for a contraction because within ~2 sec the Tib. Ant. needs to contract, and FES can make it do so)
EMG freq analysis
you can take the raw EMG signal and decompose it to see the shape and how fast the motor unit is firing
EMG characteristics
- amplitude: stochastic - random probability, cannot be predicted precisely
- freq: 0-500 Hz, usually concentrated between 50-150 Hz
EMG signal purity
1. Signal-to-noise ratio: the ratio of the energy in the signal to the energy in the noise
2. Distortion of the signal: the relative contribution of each frequency component in the EMG signal shouldn't be altered (noise at the same Hz as the EMG signal makes it differ from the true data)
Electrical Noise
Motion artifacts:
- interface between the detection surface of electrode and skin
- movement of the cable connecting the electrode to the amplifier
- 0-20Hz usually
Differential Amplification
when the output is the difference between 2 input voltages, and the amplification factor multiplies that difference - common noise on both inputs is cancelled out
ground electrode
placed on bone so that a reference is provided for the differential amplifier
- reduces ambient noise
EMG offset
changing the baseline to 0 before starting
- average the voltage over time and subtract that offset from the signal (to adjust for it)
rectification of EMG
taking the absolute value of the EMG, rather than positive and negative values, to show the absolute strength of the signal
sample avg of EMG
moving avg of full-wave rectified data
- computed over a window of ~100-200 ms
linear envelope emg
2-step process of rectification and low-pass filtering
- use as a control signal
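The two steps above can be sketched directly; a simple single-pole low-pass stands in for the usual Butterworth here, and the cutoff and sampling rate are assumed values:

```python
import math

# Linear envelope sketch: full-wave rectify, then low-pass filter.
# A single-pole low-pass is used in place of the usual Butterworth.
def linear_envelope(emg, cutoff_hz, fs_hz):
    rectified = [abs(x) for x in emg]          # step 1: full-wave rectification
    dt = 1.0 / fs_hz
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out = [rectified[0]]
    for x in rectified[1:]:                    # step 2: low-pass smoothing
        out.append(out[-1] + alpha * (x - out[-1]))
    return out

# A signal alternating between +1 and -1 rectifies to a constant 1,
# so its envelope is flat at 1.
envelope = linear_envelope([1.0, -1.0, 1.0, -1.0, 1.0], cutoff_hz=6.0, fs_hz=1000.0)
```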
short time fourier transform
analyzing a small window of signal
- narrow window = poor freq resolution
- wide window = poor time resolution
wavelet
limited duration that has an avg value of 0