Lecture 7 - ML Flashcards
1
Q
Give applications for Gaussian Mixture Models (GMMs) in speech applications
A
- Speaker recognition
- Emotion classification
- Voiced/unvoiced decisions
2
Q
Why is an HMM preferred over a GMM?
A
A GMM has no way to capture time. Time is an important aspect of speech, so Hidden Markov Models (HMMs) are used instead.
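As a rough illustration (not from the lecture), the sketch below fits both models to the same stand-in feature sequence, assuming scikit-learn and hmmlearn are available: the GMM scores each frame on its own, while the HMM additionally learns a transition matrix over hidden states, which is how it captures time.

```python
# Sketch only: contrasts a GMM (no temporal model) with an HMM (models time).
# Assumes scikit-learn and hmmlearn are installed; the data is a random
# stand-in for a sequence of acoustic feature frames (e.g. MFCCs).
import numpy as np
from sklearn.mixture import GaussianMixture
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 13))          # 500 frames, 13-dim features

# GMM: every frame is scored independently; frame order is irrelevant.
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
print(gmm.score(X))                     # average log-likelihood per frame

# HMM: hidden states emit Gaussians AND follow a learned transition matrix,
# so the order of the frames matters.
hmm = GaussianHMM(n_components=4, random_state=0).fit(X)
print(hmm.transmat_)                    # state-transition probabilities
```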
3
Q
A Gaussian (normal) distribution is common and easily analysed. To estimate its parameters, the sample mean and sample variance can be used. What is the disadvantage of this?
A
This is called the frequentist approach.
-> it does not work well for large and complex training data.
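A minimal sketch of that frequentist fit, assuming NumPy and randomly generated stand-in data: the sample mean and sample variance are the maximum-likelihood point estimates of a single Gaussian.

```python
# Sketch only: frequentist (maximum-likelihood) fit of a single Gaussian.
# The data below is a random stand-in for a set of feature values.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=1000)

mu_hat = x.mean()                # sample mean     -> estimate of the mean
var_hat = x.var()                # sample variance -> estimate of the variance
print(mu_hat, var_hat)

# The result is a single, very restrictive Gaussian with fixed point
# estimates; for large, complex training data a richer model such as a
# mixture (GMM) is used instead.
```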
4
Q
It can be assumed that the covariance matrix is diagonal. What does this mean?
A
This means that the features are assumed to be independent: all off-diagonal covariances are zero, so each feature only needs its own variance.
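A small sketch of the difference, assuming scikit-learn and stand-in data: with covariance_type="diag" the GMM stores only one variance per feature per component, i.e. the off-diagonal covariances are fixed to zero.

```python
# Sketch only: diagonal vs. full covariance in a GMM (assumes scikit-learn).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))    # 300 samples, 5 features (stand-in data)

diag = GaussianMixture(n_components=2, covariance_type="diag").fit(X)
full = GaussianMixture(n_components=2, covariance_type="full").fit(X)

# Diagonal: one variance per feature per component -> shape (2, 5).
# Features are treated as independent within each component.
print(diag.covariances_.shape)   # (2, 5)

# Full: an entire covariance matrix per component -> shape (2, 5, 5).
print(full.covariances_.shape)   # (2, 5, 5)
```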