Chapter 3 Flashcards
What is the definition of information with regard to informative attributes?
Information is a quantity that reduces uncertainty about something.
The better the information, the more uncertainty is reduced.
Selecting informative attributes provides an intelligent method for selecting an informative subset of the data.
What is the tree induction method?
Tree induction is a tree-structured model that incorporates the idea of supervised segmentation in an elegant manner, repeatedly selecting informative attributes to split on.
What is a model?
A model is a simplified representation of reality created to serve a purpose.
In data science, a predictive model is a formula for estimating the unknown value of interest: the target.
In data science, prediction means to estimate an unknown value.
What is the difference between predictive and descriptive modeling?
Predictive modeling's main purpose is to estimate an unknown value of interest (e.g., will this customer churn?).
Descriptive modeling's main purpose is to gain insight into the underlying phenomenon (e.g., why do people churn? what do customers who churn typically look like?).
What is a supervised learning model?
Supervised learning is model creation where the model describes a relationship between a set of selected attributes and a predefined variable called the target variable.
The model estimates the value of the target variable as a function of the other attributes.
What is model induction?
It is the creation of a model from data. Induction is generalizing from specific cases to general rules.
The input data for the induction algorithm is called the training data.
What is a purity measure (entropy)?
A purity measure is a formula that evaluates how well each attribute splits a set of examples into segments with respect to the target variable. Entropy is a measure of disorder that can be applied to a set.
The most common splitting criterion is called information gain.
What is the entropy equation?
entropy = - p1 x log2(p1) - p2 x log2(p2) - …
where each pi is the proportion (relative frequency) of class i within the set.
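As a quick check of the equation, here is a minimal Python sketch (the function name `entropy` is my own choice; base-2 logarithms give entropy in bits):

```python
import math

def entropy(proportions):
    """entropy = -p1*log2(p1) - p2*log2(p2) - ... over class proportions."""
    return sum(-p * math.log2(p) for p in proportions if p > 0)

# A perfectly pure set has entropy 0; a 50/50 mix is maximally disordered.
print(entropy([1.0]))        # 0.0
print(entropy([0.5, 0.5]))   # 1.0
```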
What is information gain?
Information gain (IG) is used to measure how informative an attribute is with respect to the target variable: how much information it gives us about the value of the target variable. Information gain measures how much an attribute decreases entropy over the whole segmentation it creates.
What is the information gain equation?
IG(parent, children) = entropy(parent) - [ p(c1) x entropy(c1) + p(c2) x entropy(c2) + … ]
where p(ci) is the proportion of the parent's instances that fall into child segment ci.
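A minimal sketch of the IG formula in Python, working directly on lists of class labels (the function names are my own):

```python
import math

def entropy(labels):
    """Entropy of a set, computed from the class proportions in `labels`."""
    n = len(labels)
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

def information_gain(parent, children):
    """IG = entropy(parent) minus the proportion-weighted entropy of the children."""
    n = len(parent)
    return entropy(parent) - sum(len(c) / n * entropy(c) for c in children)

# A 50/50 parent split into two mostly-pure children gains about 0.28 bits.
parent = ["yes"] * 5 + ["no"] * 5
children = [["yes"] * 4 + ["no"], ["yes"] + ["no"] * 4]
print(information_gain(parent, children))
```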
How do we calculate purity for numeric values (regression)?
The natural measure of impurity for numeric values is variance.
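By analogy with information gain, a split on a numeric target can be scored by how much it reduces the weighted variance of the segments. A minimal sketch (the function names are my own):

```python
def variance(values):
    """Variance of numeric target values: the natural impurity for regression."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def variance_reduction(parent, children):
    """Regression analogue of information gain: drop in weighted variance."""
    n = len(parent)
    return variance(parent) - sum(len(c) / n * variance(c) for c in children)

# Splitting [1, 2, 9, 10] into its low and high halves removes most impurity.
print(variance_reduction([1, 2, 9, 10], [[1, 2], [9, 10]]))  # 16.0
```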
What is a classification (decision) tree?
A classification (decision) tree is made up of a root node at the top and interior nodes, where each node contains a test of an attribute. Each path terminates at a leaf; each leaf corresponds to a segment, and the attributes along the path give the characteristics of that segment. Each leaf contains a value for the target variable.
What is the goal of a decision tree?
To provide a supervised segmentation: to partition the instances, based on their attributes, into subgroups that have similar values for their target variable.
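A quick illustration of supervised segmentation by tree induction, using scikit-learn (assuming it is installed; the toy churn data and feature names here are invented for the example):

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented toy data: [age, income]; target 1 = churned, 0 = stayed.
X = [[25, 30], [30, 35], [45, 80], [50, 90], [23, 20], [48, 85]]
y = [1, 1, 0, 0, 1, 0]

# criterion="entropy" makes the tree select splits by information gain.
tree = DecisionTreeClassifier(criterion="entropy", max_depth=2, random_state=0)
tree.fit(X, y)
print(export_text(tree, feature_names=["age", "income"]))
```

`export_text` prints the tree's attribute tests and leaves, i.e., the segments and their characteristics.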
What is the frequency-based estimate of class membership probability?
The frequency-based estimate of the probability that a new instance is positive is n/(n+m),
where n is the number of positive instances and m the number of negative instances at the leaf.
What is the Laplace correction?
If a leaf has only a single instance (little evidence), the frequency-based estimate n/(n+m) says the probability that members of the segment belong to the class is 100%. To avoid overfitting, instead of computing the raw frequency we can use a smoothed version of the frequency-based estimate; its purpose is to moderate the influence of leaves with only a few instances.
Laplace: p = (n+1)/(n+m+2)
For a single positive instance, the Laplace correction reduces the estimate from 100% to (1+1)/(1+0+2) ≈ 0.67, reflecting the uncertainty.
The effect of the Laplace correction decreases as the number of instances grows:
e.g., with 20 positive instances: (20+1)/(20+0+2) ≈ 0.95
whereas with 2 positive instances: (2+1)/(2+0+2) = 0.75
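The two estimates side by side in Python (the function names are my own):

```python
def frequency_estimate(n, m):
    """Raw frequency-based estimate: n positive, m negative instances in a leaf."""
    return n / (n + m)

def laplace_estimate(n, m):
    """Laplace-corrected estimate: smooths small leaves toward 0.5."""
    return (n + 1) / (n + m + 2)

# One lone positive instance: 100% raw, but only ~67% after correction.
print(frequency_estimate(1, 0))   # 1.0
print(laplace_estimate(1, 0))     # 0.666...
# With more evidence, the correction barely matters.
print(laplace_estimate(20, 0))    # ~0.95
```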