Test 2 Flashcards
!P is true if
P is false in the current model (negation)
P and Q is true if
Both P and Q are true in the model (conjunction)
P or Q is true if
At least one of P and Q is true in the model (disjunction)
P -> Q is true unless
P is true and Q is false in the model (implication)
P <-> Q is true if
P and Q are both true or both false (biconditional)
Sentences have a truth value with respect to a model. This means that:
A model is an assignment of truth values to the variables in the sentence; the sentence is evaluated with those values slotted in. E.g., a sentence might have 3 variables, and in a particular model those values might be, say, "true, true, false".
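A minimal Python sketch of the idea (the sentence and model below are made up for illustration): a model is just a mapping from variables to truth values, and the sentence is evaluated against it.

```python
# A model assigns a truth value to each variable.
model = {"P": True, "Q": True, "R": False}

def sentence(m):
    # The sentence (P ^ Q) v !R, written with Python connectives.
    return (m["P"] and m["Q"]) or not m["R"]

print(sentence(model))  # True -- the sentence is true in this model
```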
A ^ B is logically equivalent to
B ^ A
A v B is logically equivalent to
B v A
((A ^ B) ^ C) is logically equivalent to
(A ^ (B ^ C))
((A v B) v C) is logically equivalent to
(A v (B v C))
!(!A) is logically equivalent to
A
A -> B is logically equivalent to
!B -> !A and also !A v B
A <-> B is logically equivalent to
((A -> B) ^ (B->A))
!(A ^ B) is logically equivalent to
!A v !B
!(A v B) is logically equivalent to:
!A ^ !B
(A ^ (B v C)) is logically equivalent to:
((A ^ B) v (A ^ C))
(A v (B ^ C)) is logically equivalent to:
((A v B) ^ (A v C))
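Each equivalence above holds in every model, so you can brute-force check it by enumerating all assignments. A small Python sketch verifying De Morgan's law and distribution:

```python
from itertools import product

# De Morgan: !(A ^ B) is equivalent to !A v !B, in all four models.
for a, b in product([True, False], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))

# Distribution: A ^ (B v C) is equivalent to (A ^ B) v (A ^ C),
# in all eight models.
for a, b, c in product([True, False], repeat=3):
    assert (a and (b or c)) == ((a and b) or (a and c))

print("equivalences hold in every model")
```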
A sentence is satisfiable if:
It is true in some models.
A sentence is valid if:
It is true in all models.
A sentence is unsatisfiable if:
It is true in no models.
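These three definitions suggest a brute-force test: evaluate the sentence in every model and see where it comes out true. A sketch (the helper `classify` is made up for illustration):

```python
from itertools import product

def classify(sentence, n_vars):
    # Evaluate the sentence in every model over n_vars variables.
    results = [sentence(*model)
               for model in product([True, False], repeat=n_vars)]
    if all(results):
        return "valid"          # true in all models
    if any(results):
        return "satisfiable"    # true in some model
    return "unsatisfiable"      # true in no model

print(classify(lambda p: p or not p, 1))   # valid
print(classify(lambda p, q: p and q, 2))   # satisfiable
print(classify(lambda p: p and not p, 1))  # unsatisfiable
```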
Horn Clause:
A clause where at most one literal is positive.
Ex: (!P v !Q v V) and (!P v !W) are Horn clauses.
Definite Clause:
A clause where exactly one literal is positive.
Ex: (!P v !Q v H v !D) is a definite clause
How do you rewrite a Horn Clause to an implication?
Ex: (!C v !B v A)
First group the negated terms:
(!C v !B) v A
Apply De Morgan's law to the negated group:
!(C ^ B) v A
Rewrite !X v Y as the implication X -> Y:
(C ^ B) -> A
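The same rewrite can be done mechanically. A minimal Python sketch, assuming literals are strings with a leading `!` for negation (a made-up representation, not a standard library):

```python
def horn_to_implication(clause):
    # clause: list of literal strings with at most one positive,
    # e.g. ["!C", "!B", "A"] for (!C v !B v A).
    negatives = [lit[1:] for lit in clause if lit.startswith("!")]
    positives = [lit for lit in clause if not lit.startswith("!")]
    body = " ^ ".join(negatives) if negatives else "True"
    head = positives[0] if positives else "False"  # no positive literal => head is False
    return f"({body}) -> {head}"

print(horn_to_implication(["!C", "!B", "A"]))  # (C ^ B) -> A
print(horn_to_implication(["!P", "!W"]))       # (P ^ W) -> False
```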
What is Bayes' Theorem?
P(b | a) = P(a | b) P(b) / P(a)
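As a sanity check, the theorem is a one-line computation. A sketch with made-up numbers:

```python
def bayes(p_a_given_b, p_b, p_a):
    # P(b | a) = P(a | b) P(b) / P(a)
    return p_a_given_b * p_b / p_a

# Made-up values: P(a | b) = 0.9, P(b) = 0.2, P(a) = 0.3
print(bayes(0.9, 0.2, 0.3))  # 0.6, i.e. P(b | a)
```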
What can P(a | b, c) and P(c | b) get us?
P(a | b) = Σ_c P(a | b, c) P(c | b) (conditioning: sum over all values of c)
P(a ^ b) = ?
P(a | b) P(b)
P(x, y) = ? when x and y are independent
P(x) P(y)
P(x, y) = ? when x and y are dependent on each other
P(x) P(y | x)
If you want the probability P(!d) but don't have it, what could you use to solve for it?
If you had them, you could compute P(!d | f)P(f) + P(!d | !f)P(!f) to get P(!d) (law of total probability).
I.e.: the probability of !d given that the aliens are friendly, times the probability they are friendly, plus the probability of !d given that the aliens are not friendly, times the probability they are not friendly.
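A quick numeric sketch of this total-probability computation; the values for the alien example are made up:

```python
# P(!d) = P(!d | f) P(f) + P(!d | !f) P(!f), with made-up numbers.
p_f = 0.7                # P(f): aliens are friendly
p_notd_given_f = 0.1     # P(!d | f)
p_notd_given_notf = 0.8  # P(!d | !f)

p_notd = p_notd_given_f * p_f + p_notd_given_notf * (1 - p_f)
print(p_notd)  # 0.1*0.7 + 0.8*0.3 = 0.31
```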
What does random forest do and why does it work?
Random forest trains many decision trees, each on a random (bootstrap) sample of the data, and combines their predictions by majority vote or averaging. This gives us a larger, more robust model: the more trees you have with different criteria, the better the random forest tends to perform, because the trees' individual errors average out, increasing prediction accuracy.
It primarily reduces overfitting and variance.
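A minimal scikit-learn sketch on a synthetic dataset, just to show the workflow (the dataset and parameter values are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 100 trees, each fit on a bootstrap sample of the rows, with a
# random subset of features considered at each split.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))  # held-out accuracy
```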
What is Underfitting & what are its causes?
Underfitting usually occurs when your model doesn't have enough training time or enough data. This results in it being unable to recognize general patterns, so it makes bad predictions.
It might also be due to a model that is too simple for the problem.
What is overfitting and how does it occur?
Overfitting tends to occur when your model is too closely tuned to the training data.
It occurs when your model has learned patterns that appear only in the training data.
Predictions on new data will be much less accurate, because the new data doesn't contain those training-only patterns.
If there is a large gap between validation and training performance (high training accuracy and low validation accuracy), you likely have overfitting.
Ways to resolve it are listed in the "reduce overfitting" card below.
Suppose you train a classifier and test it on a validation set. It gets 30% classification accuracy on the training set and 30% accuracy on the validation set.
What is your model suffering from?
How can you reasonably improve it?
It is likely suffering from underfitting. To improve the model, you could add new, more informative features or increase the model's capacity; simply collecting more training data is unlikely to fix an underfit model.
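One way to frame the diagnosis is to compare training and validation accuracy directly. A small sketch; the 0.6 "low accuracy" and 0.15 "gap" thresholds are made-up illustrative cutoffs, not standard values:

```python
def diagnose(train_acc, val_acc, low=0.6, gap=0.15):
    # Both scores low and close together => the model can't even fit
    # the training data: underfitting (high bias).
    if train_acc < low and val_acc < low:
        return "underfitting (high bias)"
    # Training score far above validation score => the model memorized
    # training-only patterns: overfitting (high variance).
    if train_acc - val_acc > gap:
        return "overfitting (high variance)"
    return "reasonable fit"

print(diagnose(0.30, 0.30))  # underfitting (high bias)
print(diagnose(0.98, 0.70))  # overfitting (high variance)
```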
What does very low error on the training data, but high error on the test data represent?
Overfitting (high variance)
What does high error on the training data and even higher error on the test data suggest?
Underfitting
If you have high bias, low variance, and low complexity, your model is
Underfitting.
If you have low bias, high variance, and high complexity, you have…
Overfitting
What are some ways to reduce overfitting?
-Get more training data
-Add better features (to better capture the structure)
-Remove some features (to reduce noise)
P(a | b) = ?
P(a | b) = P(a ^ b) / P(b)
P(a ^ b) = ?
P(a ^ b) = P(a | b) P(b)
What is modus ponens?
A -> B, A / B
From an implication and its antecedent, infer the consequent: given A -> B and the fact A, conclude B.
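Modus ponens is the rule that drives forward chaining over definite clauses (see the Horn/definite clause cards above). A toy Python sketch with a made-up knowledge base:

```python
# Rules are definite clauses written as (body, head): body -> head.
rules = [({"C", "B"}, "A"),  # (C ^ B) -> A
         ({"A"}, "D")]       # A -> D
facts = {"C", "B"}

# Repeatedly apply modus ponens until no new facts appear.
changed = True
while changed:
    changed = False
    for body, head in rules:
        if body <= facts and head not in facts:
            facts.add(head)  # body holds and body -> head, so infer head
            changed = True

print(facts)  # {'A', 'B', 'C', 'D'} (set order may vary)
```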