Flashcards

1
Q
  1. What is the DV?
A

The proposed effect: the outcome variable, i.e. the measure that is not manipulated in the experiment

2
Q
  1. Null Hypothesis Significance Testing (NHST) computes the probability of
A

Computing a test statistic and the probability (the p-value) of obtaining a statistic at least that extreme by chance, assuming the null hypothesis is true

3
Q
  1. What are the misconceptions of NHST?
A

That a significant result means the effect is important; that a non-significant result means the null hypothesis is true; and that a significant result means the null hypothesis is false

4
Q
  1. Effect size can help with NHST issue, what is effect size?
A

Effect size is a quantitative measure of the magnitude of an experimental effect; the larger the effect size, the stronger the relationship between the 2 variables, and studies can be compared on the basis of effect size

5
Q
  1. How to calculate effect size using Cohen's d?
A

Mean 1 minus mean 2, divided by the standard deviation

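The card above can be sketched in code (Python, with made-up example scores; using the second group's SD as the denominator follows the card, though a pooled SD is also common):

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d: (mean1 - mean2) / SD (here the second, 'control' group's SD)."""
    return (mean(group1) - mean(group2)) / stdev(group2)

# Hypothetical scores: the group means differ by 1 point.
treatment = [5.0, 6.0, 7.0, 8.0]
control = [4.0, 5.0, 6.0, 7.0]
d = cohens_d(treatment, control)
print(d)  # about 0.77, a medium-to-large effect by Cohen's guidelines
```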
6
Q
  1. What distribution do you need for parametric tests?
A

Normal distribution which can be described by mean (central tendency) and SD (dispersion)

7
Q
  1. Is the mean a good measure of central tendency?
A

The mean can be a misleading measure of central tendency in skewed distributions, as it can be greatly influenced by extreme scores

8
Q
  1. Aside from the mean, the median and mode can be used as measures of central tendency, where: - (2)
A

The median is unaffected by extreme scores and can be used with ordinal, interval and ratio data

The mode can only be used with nominal data, is greatly subject to sample fluctuations, and many distributions have more than one mode

9
Q
  1. What happens in positively skewed distributions?
A

The mean is greater than the median, which is greater than the mode (the tail points to the right, i.e. right-skewed)

10
Q
  1. What happens in negatively skewed distributions?
A

The mode is greater than the median, which is greater than the mean (the tail points to the left, i.e. left-skewed)

11
Q
  1. What are tests of normality dependent on?
A

Sample size: with a massive sample size, normality tests can come out significant even when the data look normally distributed visually (so usually trust the visual plot)

12
Q
  1. Two ways a distribution can deviate from normal
A

Lack of symmetry (skew) and pointiness (kurtosis)

13
Q
  1. What does kurtosis tell you?
A

It tells you how much of our data lies in the tails of the histogram, and helps us identify when outliers may be present in the data.

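Both ideas can be sketched quickly (simple moment formulas on made-up data; stats packages report bias-corrected versions, so treat this as illustrative):

```python
from statistics import mean, pstdev

def skewness(xs):
    # Third standardized moment: 0 for symmetric data.
    m, s = mean(xs), pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

def excess_kurtosis(xs):
    # Fourth standardized moment minus 3: negative = platykurtic (flat),
    # positive = leptokurtic (heavy tails).
    m, s = mean(xs), pstdev(xs)
    return sum(((x - m) / s) ** 4 for x in xs) / len(xs) - 3

data = [1, 2, 3, 4, 5]          # symmetric made-up scores
print(skewness(data))           # 0.0
print(excess_kurtosis(data))    # -1.3 -> flatter than a normal curve
```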
14
Q
  1. What is difference between parametric and non-parametric tests? - 4
A

Parametric tests assume specific distributions, like the normal distribution, and require adherence to certain statistical assumptions, such as homogeneity of variances and independence of observations.

They tend to be more powerful when these assumptions are met, making them suitable for analyzing data that closely aligns with their requirements, such as interval or ratio data.

On the other hand, non-parametric tests make fewer distributional assumptions, making them robust and applicable to a wider range of data types, including ordinal or skewed data.

While non-parametric tests are generally less powerful than their parametric counterparts when assumptions are met, they provide reliable results in situations where assumptions are violated or when dealing with non-normally distributed data. These differences in assumptions and robustness make each type of test valuable in different research contexts

15
Q
  1. Non-parametric equivalent of correlation
A

Spearman's rho or Kendall's tau

16
Q
  1. If skewness values are between -1 and 1 then… but…
A

It's good… but if below -1 the data are negatively skewed, and if above 1, positively skewed

17
Q
  1. If skew and kurtosis are 0, this tells you your data has a
A

Normal distribution

18
Q
  1. If your kurtosis value is between -2 and 2 then… but…
A

All good, but… if less than -2 the distribution is platykurtic, and if above 2, leptokurtic

19
Q
  1. A correlational study does not rule out the presence of a third variable (tertium quid), which can be ruled out using
A

RCTs, which even out confounding variables between groups

20
Q
  1. What is variance?
A

Average squared deviation of each number from its mean (SD squared)

21
Q
  1. To get SD from variance you:
A

Take the square root of the variance

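The two cards above in code (population forms, dividing by N; the sample versions divide by N - 1):

```python
def variance(xs):
    # Average squared deviation of each score from the mean.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def sd(xs):
    # SD is just the square root of the variance.
    return variance(xs) ** 0.5

data = [2, 4, 4, 4, 5, 5, 7, 9]  # made-up scores
print(variance(data))  # 4.0
print(sd(data))        # 2.0
```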
22
Q
  1. What is the central limit theorem?
A

States that the sampling distribution of the mean approaches a normal distribution as the sample size increases; this is especially the case for sample sizes over 30

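A small simulation sketch of the theorem (made-up skewed population, n = 30 per sample):

```python
import random
from statistics import mean, stdev

random.seed(0)

# Exponential(1) population: heavily skewed, with mean 1 and SD 1.
sample_means = [mean(random.expovariate(1.0) for _ in range(30))
                for _ in range(2000)]

# The sample means pile up around the population mean, with SD close to
# the standard error 1 / sqrt(30) (about 0.18), despite the skewed population.
print(mean(sample_means))   # close to 1.0
print(stdev(sample_means))  # close to 1 / sqrt(30)
```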
23
Q
  1. What is a type 1 error?
A

False positive: thinking there is a significant effect when there isn't = alpha

24
Q
  1. What is a type II error?
A

False negative: much of the variance is unaccounted for by the model, so we think there is no significant effect when there actually is one = beta

25
Q
  1. Acceptable level of type I error is:
A

An alpha level of 0.05; the alpha level is the probability of making a type I error

26
Q
  1. Non-parametric equivalent of multi-way repeated ANOVA
A

Loglinear analysis

27
Q
  1. Non parametric equivalent of one-way repeated ANOVA
A

Friedman’s ANOVA

28
Q
  1. Non-parametric equivalent of one-way independent ANOVA
A

Kruskal-Wallis

29
Q
  1. We accept results as true (accept H1) when
A

There is a low probability of the test statistic taking that value by chance; e.g. p less than 0.05 means a low probability of obtaining results at least that extreme given that H0 is true

30
Q
  1. Acceptable level of Type II error is probability of
A

Beta level (often 0.2)

31
Q
  1. R = 0.1, d = 0.2 (small effect) means
A

The effect explains 1% of the total variance

32
Q
  1. R = 0.3, D = 0.5 (medium effect)
A

The effect accounts for 9% of the total variance

33
Q
  1. R = 0.5, d = 0.8 (large effect)
A

Effect accounts for 25% of total variance

34
Q
  1. Non-parametric equivalent of paired t-tests
A

Wilcoxon signed-rank test: compares two dependent groups of scores. Compute the differences between the scores in the two conditions, note the sign (positive or negative), rank the differences, and sum the ranks for positives and negatives separately

35
Q
  1. Non-parametric equivalent of independent t-tests
A

Mann-Whitney (or Wilcoxon rank-sum) test: compares two independent groups of scores. Rank the data irrespective of group and see whether one group has a higher ranking of scores than the other

36
Q
  1. What is log-linear? - (2)
A

An extension of chi-square, as it investigates relationships between more than 2 categorical variables

Assumes independence and expected counts greater than 5; the goodness-of-fit test in loglinear analysis tests the hypothesis that the frequencies predicted by the model differ significantly from the actual frequencies, so you want it to be non-significant

37
Q
  1. In chi-square: - (7)
A

One categorical DV and one categorical IV, with different participants at each predictor level

Compares observed frequencies from the data with the frequencies expected if there were no relationship between the two variables

Tests if there is a relationship between 2 categorical variables

Assumptions: data values come from a simple random sample of the population of interest, and each combination of levels of the 2 variables needs expected counts greater than 5; if this is not met, use Fisher's exact test

Expected count = column total * row total, divided by the grand total

To calculate the chi-square statistic: for each combination of the 2 variables, take the difference between the actual and expected values, square the difference, divide by the expected value for that combination, and add up all the values

df = (r - 1)(c - 1)
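The steps above can be sketched in code (made-up 2x2 counts):

```python
def chi_square(table):
    # Expected count for each cell = row total * column total / grand total.
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for r, row in zip(row_totals, table):
        for c, observed in zip(col_totals, row):
            expected = r * c / grand
            # (observed - expected)^2 / expected, summed over all cells.
            stat += (observed - expected) ** 2 / expected
    df = (len(row_totals) - 1) * (len(col_totals) - 1)
    return stat, df

observed = [[10, 20],   # hypothetical counts: group x outcome
            [20, 10]]
stat, df = chi_square(observed)
print(stat, df)  # about 6.67 with df = 1
```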

38
Q
  1. What is power? - (3)
A

The probability that a test will find an effect, assuming there is one in the population

It is 1 minus beta, beta being the probability of making a type II error

To increase the power of a study, you can increase the sample size, or allow a bigger probability of type I error (a larger alpha gives more power)

39
Q
  1. What is correlation?
A

Correlation measures the strength and direction of the linear relationship between 2 variables

40
Q
  1. What are the assumptions of correlation?
A

IV and DV continuous

41
Q
  1. What is covariance? - (5)
A

It tells us whether one variable covaries with the other: we are interested in whether, when one variable deviates from its mean, the other variable deviates from its mean in a similar way

It is calculated by summing (x - mean of x)(y - mean of y) over all cases and dividing by N - 1

A positive covariance says that as one variable deviates from its mean, the other deviates in the same direction; a negative covariance means they deviate in opposite directions

The problem with covariance is that it depends on the units of measurement, so we standardise it by dividing by the product of the SDs of both variables: sum of (x - mean of x)(y - mean of y) / ((N - 1) sx sy), which is the formula for the Pearson correlation

We can't say a predictor uniquely explains the outcome, and the two variables are interchangeable
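The formulas above as a sketch (made-up, perfectly linear data, so r comes out as 1):

```python
from statistics import mean, stdev

def covariance(xs, ys):
    # Sum of cross-deviation products, divided by N - 1.
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def pearson_r(xs, ys):
    # Standardise the covariance by the product of the two SDs,
    # which removes the dependence on units.
    return covariance(xs, ys) / (stdev(xs) * stdev(ys))

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]      # y = 2x exactly
print(covariance(x, y))   # 5.0
print(pearson_r(x, y))    # 1.0
```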

42
Q
  1. What are the assumptions of correlation? - (2)
A

Assumptions of correlation include linearity, homoscedasticity (constant variance), and bivariate normality;

correlation typically involves two continuous variables (IVs and DVs) and assumes a linear relationship between them.

43
Q
  1. Note on biserial and point-biserial - (2)
A

Point-biserial = used when the dichotomy is discrete, e.g. whether or not someone is pregnant

Biserial = used when the variable is a continuous dichotomy, like passing or failing an exam: some may just pass, some may fail, and some will excel

44
Q
  1. What is regression?
A

Regression analyses the relationship between a DV and one or more IVs and estimates the impact of the IV(s) on the DV; it differs from correlation in that it adds a constant b0

45
Q
  1. Assumptions of simple regression - 9
A

1 continuous DV and 1 continuous predictor

Predictors have non-zero variance

Independence = all values of the outcome come from different people

Linearity

Homoscedasticity = at each value of the predictor, the variance of the error term is constant
Both linearity and homoscedasticity are tested with a scatterplot of ZRESID against ZPRED

Independent errors = errors should be uncorrelated (Durbin-Watson test)

Normally distributed errors

Multiple regression only: no collinearity, where two predictors correlate strongly with each other; check via VIF and tolerance

46
Q
  1. The closer SSM is to SST, then?
A

The better the model accounts for the data, and the smaller SSR must be

47
Q
  1. What is SST?
A

The sum of squared differences between the observed data and the mean value of Y

48
Q
  1. What is SSM?
A

It is the sum of squares attributable to the regression model, which measures how much of the variation in the DV is explained by the IV

49
Q

What is SSR?

A

It is the sum of squared differences between the observed values of the DV and the values predicted by the regression model = it measures the unexplained variation in the DV not accounted for by the IV

50
Q
  1. What does R^2 represent? - 4
A

It is the coefficient of determination and tells you how well the predictors explain variance in the outcome
Gives the overall fit of the model
Adjusted R^2 indicates how well R^2 generalises to the population; it is generally smaller, as the statistic is adjusted for the number of IVs
R^2 is the variance explained by the model (SSM) relative to the total variance (SST)
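A sketch of these sums-of-squares relationships on made-up data: the predicted values below come from the least-squares line fitted to these points (2.2 + 0.6x for x = 1..5), so SST = SSM + SSR holds and R^2 = SSM/SST.

```python
from statistics import mean

observed = [2.0, 4.0, 5.0, 4.0, 5.0]   # made-up outcome scores
predicted = [2.8, 3.4, 4.0, 4.6, 5.2]  # least-squares fit 2.2 + 0.6x, x = 1..5

m = mean(observed)
sst = sum((y - m) ** 2 for y in observed)                           # total variation
ssm = sum((yhat - m) ** 2 for yhat in predicted)                    # explained by model
ssr = sum((y - yhat) ** 2 for y, yhat in zip(observed, predicted))  # residual

r_squared = ssm / sst
print(sst, ssm + ssr)  # 6.0 and 6.0: SST = SSM + SSR
print(r_squared)       # 0.6: the model explains 60% of the variance
```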

51
Q
  1. What is mixed effect ANOVA? - (3)
A

A mixture of between and within

Several IVs have been measured: some are measured with different participants whereas others use the same participants

A mixed design needs at least two categorical IVs and 1 continuous DV

52
Q
  1. What does the F-ratio do? - (2)
A

Used in regression and ANOVA to assess the overall significance of the IVs in the model; calculated by dividing MSM by MSR

If the model is good, it explains a large portion of variance (MSM) compared with what is left unexplained (MSR)

53
Q
  1. In multiple regression it has:
A

2 or more predictors that are continuous

54
Q
  1. What is hierarchical regression? - (2)
A

Here we are seeing whether one model explains significantly more variance than another

As a general rule, predictors are selected based on past work, and the experimenter decides the order in which predictors are entered into the model -> typically known predictors are added first and new predictors second

55
Q
  1. In VIF and tolerance for MR:
A

A VIF above 10, or a tolerance (the reciprocal of VIF) below 0.2, = an issue with multicollinearity

56
Q
  1. In Durbin-Watson :
    - (2)
A

The closer the value is to 2, the better the assumption is met

If the value is less than 1 or greater than 3, that raises alarm bells

57
Q
  1. What is a t-test?
A

A t-test is a statistical test used to compare the means of two groups and determine if there is a significant difference between them, taking into account the variability within each group

58
Q
  1. In two-way Independent ANOVA, break SSM to:
A

SSA (variance explained by the 1st IV), SSB (variance explained by the 2nd IV) and their interaction (SSA×B)

59
Q
  1. What is the difference between partial eta-squared and eta-squared?
A

Partial eta-squared differs from eta-squared in that it looks not at the proportion of total variance that a variable explains, but at the proportion of variance that a variable explains that is not explained by the other variables in the analysis

60
Q
  1. How is eta-squared calculated?
A

SSM/SST

61
Q
  1. What are the 3 types of t-tests?
A

One-sample t-test (compares the mean of sample data to a known value), paired t-test and independent t-test

63
Q
  1. What are the assumptions of t-tests? - (7)
A

One continuous (interval) DV and one categorical predictor with two levels

Same participants (paired t-test) or different participants (independent t-test) at each predictor level

Normal distribution, via a frequency histogram, P-P plot (straight line) and non-significant normality tests

No significant outliers

Homogeneity of variances via Levene's test (independent t-test)

Independence: no relationship between groups (independent t-test, since scores come from different people)

Related samples (paired t-test)

64
Q
  1. What is formula of calculating t statistic?
A

t = the observed difference between the sample means, minus the expected difference between the population means (if H0 is true, typically 0), divided by the estimated standard error of the difference between the 2 sample means
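A sketch of this formula for the independent-samples case (pooled-variance standard error, made-up scores; under H0 the expected difference is 0):

```python
from statistics import mean, variance

def independent_t(g1, g2):
    n1, n2 = len(g1), len(g2)
    # Pooled variance weights each group's sample variance by its df.
    pooled = ((n1 - 1) * variance(g1) + (n2 - 1) * variance(g2)) / (n1 + n2 - 2)
    se_diff = (pooled * (1 / n1 + 1 / n2)) ** 0.5
    # (difference in sample means - 0) / SE of the difference.
    return (mean(g1) - mean(g2)) / se_diff

group_a = [5.0, 6.0, 7.0, 8.0]
group_b = [3.0, 4.0, 5.0, 6.0]
print(independent_t(group_a, group_b))  # about 2.19
```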

65
Q
  1. Guidelines for interpreting partial eta-squared
A

0.01 = small effect
0.06 = medium effect
0.14 = large effect

66
Q
  1. In one-way ANOVA,
A

Between = SSM
Within = SSR

67
Q
  1. How to calculate paired t-tests? - (2)
A

t = the mean difference between the samples (D-bar), minus the difference we would expect to find between the population means (μD, typically 0 under H0), divided by the standard error of the differences (the standard deviation of the differences divided by the square root of N)

If D-bar is large relative to the standard error of the differences, then the difference we observed in the sample is not a chance result and was caused by the experimental manipulation
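The same steps as code (made-up before/after scores for 4 participants; μD is taken as 0 under H0):

```python
from statistics import mean, stdev

def paired_t(before, after):
    diffs = [a - b for a, b in zip(after, before)]
    se = stdev(diffs) / len(diffs) ** 0.5  # SD of differences / sqrt(N)
    return mean(diffs) / se                # (D-bar - 0) / SE of differences

before = [10.0, 12.0, 9.0, 11.0]
after = [12.0, 13.0, 11.0, 12.0]
print(paired_t(before, after))  # about 5.2: mean difference well beyond its SE
```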

68
Q
  1. How to calculate standard error?
A

SD / square root of N

69
Q
  1. How to calculate Cohen’s D?
A

Mean of group 1 (i.e., the control group) minus mean of group 2, divided by the SD of the control group (a pooled SD can be used, but the control group's SD is commonly used)

70
Q

Independent vs paired

A

With independent t-tests, different participants take part in the different conditions, so a pair of scores differs not just because of the experimental manipulation but because of other sources of variance, like IQ etc.; these individual differences are eliminated in paired t-tests

71
Q

What is ANOVA? - 5

A

ANOVA = Analysis of variance

It is used to test whether there are significant differences between the means of three or more groups

It provides an F-statistic and associated p-value to assess the significance of differences

But it does not say where the differences lie, since the F-ratio is an omnibus test = follow up with planned contrasts or post-hoc tests

It can be used to reduce familywise error (the error rate across statistical tests on the same data) and false positives, by not conducting t-tests to compare every pair of groups

72
Q
  1. What are the assumptions of one-way ANOVA (and related designs)? - 8
A

1 continuous DV, 1 categorical predictor variable (one-way ANOVA; 2 predictor variables in two-way independent and two-way repeated ANOVA) with more than 2 levels (ANCOVA)
Different participants at each predictor level (one-way ANOVA + ANCOVA), or the same participants (one-way repeated ANOVA, two-way repeated ANOVA)
No significant outliers (ANCOVA)
Normal distribution of the DV for each level of the categorical IV (ANCOVA)
Homogeneity of variance (independent designs; Levene's test not significant; one-way ANOVA + two-way independent ANOVA)
Independence of the covariate and treatment effect (with the covariate as DV and the IV as IV, the p-value should be non-significant), and homogeneity of regression slopes (the covariate has the same relationship with the outcome regardless of the level of the IV, checked via scatterplot and a non-significant interaction between covariate and IV; ANCOVA)
Sphericity (equality of variances of the differences between treatment levels, via a non-significant Mauchly's test, which is dependent on sample size; one- and two-way repeated ANOVA; if significant, use corrections to produce a valid F-ratio, such as Greenhouse-Geisser [conservative] or the Huynh-Feldt correction)
Independence of the covariate and homogeneity, as well as sphericity, in mixed ANOVA

73
Q

In repeated ANOVA,

A

SSB is the between-participant variance; the within-participant variance SSW splits into SSM and SSR

74
Q
  1. When should the Šidák correction be used?
A

When concerned about loss of power associated with Bonferroni corrected values

75
Q
  1. What happens in ANCOVA (analysis of covariance)? - (3)
A

Compares many means without increasing the chance of a type I error

Assumes different participants at each predictor level

Measures covariates and includes them in the analysis of variance, which reduces SSR (within-group variance) and eliminates confounds

76
Q

Bonferroni correction - (2)

A

Ensures the type I error rate stays at 0.05 and reduces type I errors (conservative), but it also loses statistical power (the probability of a type II error, a false negative, increases), so it increases the chance of missing a genuine difference

The alpha level divided by the number of comparisons
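The calculation is simply (sketch):

```python
def bonferroni_alpha(alpha, n_comparisons):
    # Each individual test uses a stricter threshold so the familywise
    # type I error rate stays at the original alpha.
    return alpha / n_comparisons

print(bonferroni_alpha(0.05, 5))  # 0.01: each of 5 tests uses p < 0.01
```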

77
Q

Slide 1 - Intro

A

My name is Gitanjali Sharma. I will be talking about my UG dissertation project called: Predicting Speech in Noise Ability from Non-speech stimuli.

78
Q

Slide 2 - 6 - SiN

A

SiN is the ability of how well individuals can hear speech in background noise.
For example, listening to a friend in a busy pub.
This ability varies with age and hearing loss.
The question is how to measure this ability? It is typically measured through SiN tests that use a sample of speech (like a word or sentence) with noise added to it.
The task is to recognize the word/sentence.
For example:

80
Q

Slide 3 - (3) - Limitations

A

The disadvantage of these tests is that the speech content is typically recorded in a specific language (typically English), which gives an undue performance advantage to fluent/native speakers of that language; these tests cannot be used with non-fluent/non-native speakers, or with children whose language skills are not fully developed.

To overcome this limitation, we can ask the question of whether non-speech auditory stimuli can be used to predict the performance on SiN tests.

In the project, two non-speech stimuli were tested: 1) SFG (stochastic figure-ground) and, 2) Harmonic roving SFG (HRSFG).

81
Q

Slide 4 - Stimul - (5)

A

The SFG stimuli consist of a pattern of coherent tones, which is perceived as a figure, against a background of non-coherent tones and sounds like this

The HRSFG differs from the SFG in two aspects:
the frequencies in the figure are harmonically related, and
the frequencies change with time
These two features make it more speech-like

82
Q

Slide 5 - Hypotheses and Method - (3)

A

This study was conducted in sound-proof booth with 54 participants in which they completed a hearing test (called pure-tone audiometry) as well as computer tasks such as a sentence in noise task, pattern discrimination task using HRSFG and figure discrimination task using SFG.
We expected that SFG and HRSFG performance would each correlate with SiN test performance
Additionally, that HRSFG would explain performance on SiN tests better than SFG

83
Q

Slide 6 - Non-speech stimuli tasks - (2)

A

In each trial of figure discrimination task, participants heard two examples of SFG stimuli and had to say whether one of the two had a gap in the figure portion

In each trial of the pattern discrimination task, participants heard two HRSFG stimuli and had to say whether they had the same or a different pattern

84
Q

Slide 7 - Results - (2)

A

Here are the results: the left panel shows the correlation between SFG and SiN thresholds, and the right panel the correlation between SiN and HRSFG thresholds; both correlations are significant.

Using hierarchical regression, it was observed that HRSFG increased the explained variance by 15% in SiN tests as compared to SFG.

85
Q

Slide 8 - (4)

A

The results show that non-speech auditory figure ground stimuli such as our SFG and HRSFG can predict SiN test performance

Furthermore, stimuli that capture aspects of speech such as harmonicity and time-varying frequency (the HRSFG) predict performance better than the static SFG stimuli

These findings support the hypothesis

Future research needs to explore whether there are shared brain regions for processing non-speech stimuli like the stochastic figure-ground (SFG) and harmonic SFG, and the speech stimuli used in SiN tests