AI Flashcards
Cons of Hill Climbing:
- No Guarantee to be COMPLETE or OPTIMAL
- Can get stuck on local maxima & plateaus
(CAN RUN FOREVER IF NOT PROPERLY FORMULATED)
Pros of Hill Climbing:
- Rapidly finds good solutions by improving on a bad initial state (GREEDY)
- Lower time & space complexity compared to search algorithms
- No requirement for problem-specific heuristics (UNINFORMED)
- Starts from a candidate solution instead of building one up step-by-step
(UNLIKELY BUT POSSIBLE: THE SOLUTION PICKED AT RANDOM IS ALREADY OPTIMAL)
Run the algorithm for a maximum number of iterations m so it cannot run forever.
Variants of hill climbing can mitigate getting stuck on local maxima/plateaus.
Variants of Hill Climbing:
- Stochastic Hill Climbing
- First-Choice Hill Climbing
- Random-Restart Hill Climbing
Stochastic Hill Climbing:
- Randomly selects a neighbour that involves an uphill move
- The probability of picking a specific move can depend on the steepness
- Converges more slowly than steepest ascent but can find better solutions
First-Choice Hill Climbing:
- Randomly generates a single SUCCESSOR neighbour solution & moves to it if it's better than the current solution
- If the move is not UPHILL, it keeps randomly generating successors until there is an uphill move
- If after a MAX number of tries, or after generating all neighbours, it hasn't found an UPHILL move, it gives up & assumes it is now at the optimal solution
- Time complexity is lower, as not all neighbours need to be generated before one is picked
(GOOD WHEN EACH SOLUTION HAS MANY NEIGHBOURS)
Random-Restart Hill Climbing:
- Runs a series of hill climbing searches on the same problem from randomly generated initial states
- Stops when a goal is found
- Can be parallelised/threaded easily, so it does not take much time on modern computers
- RARE to have to wait long for this to happen
What is the only Complete variant of Hill Climbing?
Random-Restart Hill Climbing
It finds a solution if one exists, as eventually a randomly generated initial state will be the OPTIMAL SOLUTION itself
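A minimal Python sketch of hill climbing with random restarts (for a maximisation problem), assuming user-supplied objective, random_state and neighbours functions — all hypothetical names:

```python
def hill_climb(objective, random_state, neighbours):
    """Steepest-ascent hill climbing from one random initial state."""
    current = random_state()
    while True:
        best = max(neighbours(current), key=objective, default=None)
        if best is None or objective(best) <= objective(current):
            return current            # stuck on a local maximum or plateau
        current = best

def random_restart(objective, random_state, neighbours, restarts=50):
    """Rerun hill climbing from fresh random initial states; keep the best."""
    return max((hill_climb(objective, random_state, neighbours)
                for _ in range(restarts)), key=objective)
```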
Pros of Simulated Annealing:
- Finds near-optimal solutions in reasonable time
(HIGH CHANCE of reaching the GLOBAL MAX, but only as good as the formulation of the optimisation problem)
- Avoids getting stuck in poor LOCAL MAXIMA & PLATEAUS by combining EXPLORATION & EXPLOITATION
(many candidate solutions are explored before EXPLOITATION takes over at the end)
Cons of Simulated Annealing:
- Not guaranteed to be COMPLETE or OPTIMAL
(SENSITIVE TO FORMULATION)
- NOT reliable: can't guarantee completeness
- Time & space complexity is problem- & representation-dependent
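A minimal simulated annealing sketch (maximisation), assuming hypothetical objective, random_state and random_neighbour helpers and a geometric cooling schedule:

```python
import math, random

def simulated_annealing(objective, random_state, random_neighbour,
                        t0=1.0, cooling=0.995, steps=10_000):
    """Always accept uphill moves; accept downhill moves with
    probability exp(delta / T), where temperature T cools over time
    (EXPLORATION early, EXPLOITATION late)."""
    current = random_state()
    t = t0
    for _ in range(steps):
        candidate = random_neighbour(current)
        delta = objective(candidate) - objective(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
        t *= cooling              # geometric cooling: t never reaches zero
    return current
```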
Formulating Optimisation Problems:
min / max f(x) => the objective function(s)
s.t. g_i(x) <= 0, i = 1,…,m
h_j(x) = 0, j = 1,…,n => feasibility constraints
==> x is the DESIGN VARIABLE (can be anything)
The SEARCH SPACE is the space of all possible x values
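A toy formulation in this standard form (my own illustrative example, not from the cards):

```latex
\min_{x \in \mathbb{R}} \; f(x) = (x - 3)^2
\quad \text{s.t.} \quad g_1(x) = 1 - x \le 0
```

Here x is the design variable, f the objective function and g_1 an explicit constraint (equivalent to x >= 1); the feasible solutions are all x >= 1, and the optimum is x = 3.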
What is the Search Space of a Problem?
The space of all possible x values (the CANDIDATE SOLUTIONS); the constraints then determine which of these are feasible
Define Explicit Constraint:
A constraint explicitly stated in the problem
CANNOT BE ASSUMED
Define Implicit Constraint:
Rules that must hold by the problem definition in order for solutions to be CONSIDERED FEASIBLE
e.g.: x, y > 0
Define A*:
A heuristic pathfinding algorithm & the most used example of Informed Search
What is the Evaluation Function?
f(n) = g(n) + h(n)
g(n) = cost to reach node n
h(n) = heuristic estimate from node n to the goal
It determines which node to expand next
Steps of A*:
BASIC
1) Expand the node in the frontier with the SMALLEST EVALUATION FUNCTION
2) If a node is in the list of visited nodes, DON'T ADD IT TO THE FRONTIER
3) Stop when a GOAL node is visited
(KEEP EXPANDING OTHERWISE)
What is the Final Cost of A*?
The sum of the g(n) step costs along the path
- DON'T INCLUDE HEURISTICS
Dijkstra's:
- UNINFORMED
- Goes through all the nodes of the graph
- No heuristics / ADDITIONAL INFO
- A basic attempt to find the shortest distance from the nodes of the graph to the destination
What is h(n)?
The Euclidean (straight-line) distance from each node to the destination
A heuristic measure of cost: if a node is physically closer to the destination, it is a good assumption that the cost to the destination will be lower
Pros & Cons of A*:
Pros:
- Doesn't go through all nodes
- Safer to assume we should expand the node with the lower straight-line distance (LEAVING OUT OTHER NODES)
Cons:
- Can lead to non-optimal paths (UNLIKELY)
Uninformed Search?
The set of strategies that have no additional information about the state space beyond what is given in the problem formulation
What does Uninformed Search do?
- Only generates successors
- Distinguishes a goal state from a non-goal state
What is Breadth-First Search?
- An Uninformed Search strategy
FRONTIER NODES ARE EXPANDED LAYER BY LAYER
Expands the shallowest unexpanded node in the frontier
The frontier behaves like a QUEUE (FIFO)
- Unless stated otherwise, never add children that are already in the frontier
Steps for BFS?
- The root node is expanded
- Then the successors of the root
- Repeatedly expand all the successors of each child, until the goal node is reached
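A minimal BFS sketch in Python, assuming hypothetical is_goal and successors callables; the goal test is applied as each child is generated:

```python
from collections import deque

def bfs(initial, is_goal, successors):
    """Breadth-first search with a FIFO frontier of paths."""
    if is_goal(initial):
        return [initial]
    frontier = deque([[initial]])        # queue of paths: shallowest first
    seen = {initial}
    while frontier:
        path = frontier.popleft()
        for child in successors(path[-1]):
            if child in seen:
                continue                 # never re-add frontier/visited states
            if is_goal(child):
                return path + [child]
            seen.add(child)
            frontier.append(path + [child])
    return None                          # no solution in a finite space
```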
What are the 3 Quantities used to measure the Performance?
- Branching Factor (b): max number of successors of any node
- Depth (d): depth of the shallowest goal node
- Max Length (m): max length of any path in the state space
4 quantities that Determine the Performance?
Completeness => guarantees finding a solution, if one exists
Optimality => capability of finding the optimal solution
Time Complexity => how long to find a solution
Space Complexity => memory space required
Performance of BFS
Complete:
Yes, if the goal node is at some finite depth d it will find the goal node; b must also be finite
Optimal:
Yes, if the path cost is a non-decreasing function of the depth of the node
(ALL ACTIONS HAVE SAME COST)
Time: O(b^d)
Assumes a uniform tree where each node has b successors
Space: O(b^d)
Stores all expanded nodes
The frontier is O(b^d) AND the explored set in memory is O(b^(d-1))
What does 'I Always Take Goal Path' stand for?
I = Initial State (where the agent starts its search)
A = Action Set (actions that can be executed in any state)
T = Transition Model (mapping between states and actions)
G = Goal Test (determines if a state is a goal state)
P = Path Cost Function (assigns a cost to each path)
What is the Solution?
Sequence of Actions from initial state to Goal
What is Cost of Solution?
Sum of the cost of actions from initial to goal
What is the Path?
Sequence of states connected by a sequence of actions
Does goal node appear in Order of nodes visited in BFS?
NO
Does goal node appear in Order of nodes visited in DFS?
YES
What is Depth-First Search?
- An Uninformed Search strategy
==> Expands the deepest unexpanded node in the frontier
A LIFO stack is used for expansion
The most recently generated node is expanded
- Usually just expands the left-most node
Steps for DFS?
- Expand the root first
- Then expand the first successor of the root node
(CAN BE CHOSEN RANDOMLY)
- Repeat expanding the deepest node until the goal is found; otherwise go back and try alternative paths
Performance of DFS?
Completeness:
Not complete if the search space is infinite or we don't avoid infinite loops.
Yes, only if the search space is finite
Optimality:
Not optimal, because it will expand the entire left subtree even if the goal is at the first level of another subtree
Time: O(b^m)
Depends on the max length m of any path in the search space
Space: O(b^m)
Stores all the nodes from each path from root to leaf
Variants of DFS:
- Less Memory Usage (LMU)
- Depth-Limited Search (DLS)
- DLS with LMU
b) What performance attribute stays the same for all of these variants?
Optimality is ALWAYS NO
What is LMU?
If you reach the leaf nodes of the left subtree and there is no GOAL node, you can remove the entire subtree from memory
Space complexity becomes O(bm)
- Stores a single path, with the siblings of each node on the path
- COMPLETE IF the search space is finite
What is DLS?
- Has a DEPTH LIMIT L
Once the limit is reached we go and find an alternative path
If L < d it may be incomplete, as the goal node may be at position L+1
If L > d it is not optimal
Time complexity reduces to O(b^L), when L < d
What is DLS with LMU?
Once we reach our DEPTH LIMIT L, if we have found NO goal node we remove the subtree from memory
Space complexity = O(bL)
COMPLETE if L >= d
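A minimal recursive depth-limited search sketch, with hypothetical is_goal and successors callables; the recursion stack holds only the current path, so memory stays modest:

```python
def dls(state, is_goal, successors, limit):
    """Depth-limited search: returns a path to a goal, or None when the
    goal is not found within depth `limit` (it may sit at depth L+1)."""
    if is_goal(state):
        return [state]
    if limit == 0:
        return None                      # cut off at the depth limit L
    for child in successors(state):
        result = dls(child, is_goal, successors, limit - 1)
        if result is not None:
            return [state] + result
    return None
```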
What are Informed Search Strategies?
They use problem-specific knowledge beyond the problem definition
Can find solutions more efficiently compared to uninformed search
General Approach for Informed?
Best-First Search:
- Determines which node to expand based on an evaluation function f(n)
f(n) acts as a cost estimate => lowest cost expanded first
What is Best-First Search?
- Determines which node to expand based on an evaluation function f(n)
Most versions include a heuristic h(n): the estimated cost of the cheapest path from the current node to the goal
At a goal node, h(n) = 0
Known as GREEDY, as it will always pick the cheapest-looking path
What is the f(n) for A*?
f(n) = g(n)+h(n)
g(n)=> cost to reach node n
h(n) => heuristic from n to goal
Steps for A*?
COMPLEX
- Expand the node in the frontier with the smallest f(n)
- Handle repeated states & loopy paths:
  - if a node has already been visited, don't add it
  - if the state of a given child is already in the frontier: keep whichever has the smaller g(n) & remove the node with the larger g(n)
- Stop when the goal is visited
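A minimal A* sketch, assuming successors(state) yields (child, step_cost) pairs and h is the heuristic; keeping the best-known g(n) per state implements the "replace the larger g(n)" step:

```python
import heapq
from itertools import count

def a_star(initial, is_goal, successors, h):
    """Expand the frontier node with the smallest f(n) = g(n) + h(n)."""
    tie = count()                        # tie-breaker so the heap never compares states
    frontier = [(h(initial), next(tie), 0, initial, [initial])]
    best_g = {initial: 0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g               # final cost: sum of step costs, no heuristics
        if g > best_g[state]:
            continue                     # stale entry: a smaller g(n) was found later
        for child, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2       # replaces the duplicate with larger g(n)
                heapq.heappush(frontier,
                               (g2 + h(child), next(tie), g2, child, path + [child]))
    return None, float("inf")
```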
Performance of A*?
A* is Complete & Optimal if h(n) is consistent
- A* is exponential in the length of the solution
CONSTANT STEP COSTS: O(b^(εd))
where h* is the actual cost from root to goal and ε = (h* - h)/h* is the RELATIVE ERROR of the heuristic
Space: O(b^d)
What is the Main Issue with A*?
- Keeps all generated nodes in memory
- Keeps ALL expanded nodes & ALL nodes in the frontier
- NOT suitable for LARGE-SCALE PROBLEMS
What does Consistent mean in terms of h(n)?
The estimate is no greater than the cost of reaching any neighbouring node n' plus the estimated distance from n' to the goal:
h(n) <= cost(n, n') + h(n')
where cost(n, n') is just the step cost from n to n'
What do design variables represent?
- Candidate solutions of a problem
- Variables belonging to pre-defined domains
- There may be one or more design variables in a given optimisation problem
- They can be the possible decisions that need to be made in the problem
What is the objective function?
- Takes design variables as input
- Outputs a NUMERICAL value that the problem aims to MINIMISE OR MAXIMISE
CAN have multiple objective functions in a formulation
Defines the cost or quality of a solution
What are Constraints in Formulating Optimisation Problems?
- Conditions that the design variables must satisfy for the solution to be FEASIBLE
- Usually depicted by functions that take the design variables as input and output a numeric value
- They specify the values that these functions are allowed to take for the solution to be feasible
- There may be zero or more constraints in a problem
DEFINE THE FEASIBILITY OF THE SOLUTION
Benefits of Local Search?
DOES NOT keep track of the paths or states that have been visited
NOT systematic, but the PROS are:
- Uses very little memory
- Finds reasonable solutions in large or infinite state spaces
What are Local Search Algorithms?
Optimisation Algorithms that operate by searching from initial state to neighbouring states
What is The Aim Of a MAXIMISING problem?
Reaching the HIGHEST PEAK/ Global Maximum
What is The Aim Of a MINIMISING problem?
Reaching the LOWEST TROUGH/ Global Minimum
Purpose of Hill Climbing?
To find & reach the Global Maximum
Purpose of Gradient Descent?
To find & reach the Global Minimum
Why is Hill Climbing Greedy?
Does not look beyond the immediate neighbours of the current state
What are the 3 Components of Hill Climbing we must design?
-Representation
-Initialisation Procedure
-Neighbourhood Operator
What is Representation?
How to store Design variables in the problem(s)
Should facilitate the Application of the Initialisation Procedure
What is Initialisation Procedure?
How to pick initial solution.
USUALLY RANDOM, Can Be Heuristic
What is Neighbourhood Operator?
How to generate Neighbourhood Solutions
(INCREMENT/STEP SIZE)
Performance of Hill Climbing:
Completeness:
No, depends on the problem formulation & the design of the algorithm (CAN GET STUCK ON LOCAL OPTIMA)
OPTIMALITY:
Not Optimal (CAN GET STUCK ON LOCAL OPTIMA)
Time: O(mnp)
m = MAX no. of iterations,
n = MAX no. of neighbours,
EACH takes O(p) to generate
Space: O(nq + r) ==> r is a constant, so ==> O(nq)
n = MAX no. of neighbours,
each variable takes O(q) space, and
r represents the space to generate the neighbours sequentially (NEGLIGIBLE COMPARED TO n & q)
What are the 3 Types of Machine Learning?
-Supervised Learning,
-Unsupervised Learning
-Reinforcement Learning
What is Supervised Learning?
- Most Prevalent Form
- Learning with a teacher
- Teacher: expected output, label, class, etc
- Solve 2 Types of problems: Classification & Regression
How can AI be used to Solve Machine Learning Problems?
- Automatically create models from data to perform certain tasks through machine learning
- Not guaranteed a perfect model, but will find a good model depending on the difficulty of the problem
- Good for problems where it is difficult to create good models manually
- Good for problems that don't require perfect answers
AI in Optimisation Problems:
- Solves them in a reasonable amount of time through optimisation techniques
- No guarantee of finding the optimal solution in a reasonable amount of time, but a good solution
- Good for problems where no specific technique exists that guarantees the optimal solution can be found
Think Humanly?
-Can machine think humanly?
- Can we consider machine as human?
- How to define think humanly?
==> With AI we need to define things with mathematical forms
Act Humanly?
- What can humans do?
- What if human’s action is wrong?
==> Doesn’t mean we shouldn’t copy the wrong actions
Act Rationally?
==> Rationality: Doing the right thing can be mathematically defined & General enough, linked to human behaviour
Think Rationally?
- Think Logically?
- Logical AI?
- Too Narrow?
Define AI using ‘Rational Agents’:
Rational Agents are computer programs that perceive their environments & take actions that maximize their chances of achieving best EXPECTED outcome
What is Machine Learning?
An agent is learning if it improves its performance after making observations about the world.
It is called Machine Learning when the agent is a computer
What is a Machine Learning Problem?
Problems that require a model to be built automatically from data, e.g. => to make classifications
What is Supervised Learning?
- Most popular form in Real World
- Learning with a Teacher
- Teacher: expected output, labels, classes, etc.
- Solve 2 types of Problems : Classification & Regression Problem
What is Classification Problem?
{Give example}
- Predict categorical class Labels
==> Spam Detection
What is Regression Problem?
{Give example}
- Prediction of a Real Value
==> Student Grades, Stock Price Prediction
Define Supervised Learning:
Agents Observe input-output pairs & learns a function that maps from input to output
Define Unsupervised Learning:
Agent Learns patterns in the input without any explicit feedback
What is Unsupervised Learning?
- Learning without a teacher
- Find Hidden Structures
Clustering ==> Group inputs based on similar properties
What is Reinforcement Learning?
- A COMBO of Supervised & Unsupervised
- Learning with (delayed) feedback / reward –> we don't have instant labels
- Learns a series of actions => sequential decision making
Agents learn from a series of reinforcements: rewards & punishments. The agent decides which of the actions prior to the reinforcement were most responsible for it and alters its actions towards more reward in the future.
Why is Overfitting a Problem?
- Fitting the training data too well is not helpful, as the data you want to classify or predict is not the same as the training data
- Learning every irrelevant detail in the training data is unhelpful
- Occurs when the model is more complex than required
When does Underfitting occur?
When the model is simpler than required
IT IS BAD BECAUSE it will not classify all points into the correct classes
What is a Parametric Model?
Parametric models are learning models that summarise data with a set of parameters.
e.g Logistic & Linear Regression
What is a NON-Parametric Model?
Non-parametric models are learning models that do not assume a fixed set of parameters
e.g. KNN
Limitations of K-means?
- Outliers can influence the clusters that are found & increase the WCSS
- Problems when clusters differ (e.g. in size or density)
Applications of K-means?
A key tool in image and signal compression
(ESPECIALLY VECTOR QUANTIZATION)
K-means Steps?
- Pick k random elements as our initial centroids
- Assign each element to its closest centroid
(EUCLIDEAN DISTANCE IF NOT SPECIFIED)
- Recalculate each centroid as the centre of its assigned elements & repeat until the centroids don't change and you get the minimum WCSS
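A minimal NumPy sketch of those steps, assuming points is an (n, d) array:

```python
import numpy as np

def k_means(points, k, iters=100, seed=0):
    """Returns (centroids, labels) after Lloyd's-style iterations."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assignment: each point goes to its closest centroid (Euclidean)
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update: each centroid becomes the mean of its assigned points
        new = np.array([points[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break                        # centroids stopped changing: WCSS converged
        centroids = new
    return centroids, labels
```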
Selecting K in K-means?
Use prior info, like the number of groups we want to cluster into
How to do the Elbow Method?
Run K-means with different K values
See where the WCSS has an inflection point
(KINK IN THE GRAPH BEFORE IT LEVELS OFF)
What is Elbow Method?
Finds the optimal K value
What is Agglomerative Clustering?
- Bottom-UP
- Each item starts as its own cluster
- Merge the 2 most similar clusters (SMALLEST INTER-CLUSTER DISSIMILARITY) in each step, until there is one cluster
What is Divisive Clustering?
- TOP-down
- All items start in ONE cluster
- Split a cluster into 2 new clusters with the largest inter-cluster dissimilarity, until each item has its own cluster
Disadvantage of Single Linkage & Complete Linkage?
- Sensitive to outliers: an outlier can heavily affect which clusters are merged, even though it is not representative of the whole cluster
Cons of SL?
- Causes a 'chaining effect' where clusters are combined via close intermediate examples
- Clusters may not be as compact as required
Cons of CL?
- Produces clusters with small diameter
- Clusters may end up crowded, as items can be very close to items in other clusters
Pro of Group Average?
Attempts to produce relatively compact clusters that are relatively close together
What is Constraint Handling?
Methods that ensure optimisation algorithms can effectively search the feasible regions.
Avoiding or penalising INFEASIBLE solutions by modifying the algorithm operators OR the objective function
What is Modifying the Algorithm Operator?
- Modifying how candidate solutions are generated so that ONLY feasible solutions are produced
- An algorithm operator is a function that generates a new candidate solution based on the current solution
- AVOIDS generating invalid solutions, where the constraints dictate whether a solution fits
Pros of Modifying the Algorithm Operator:
- Will not generate infeasible solutions, allowing the search to focus on optimal solutions
- Makes Hill Climbing & Simulated Annealing Complete
Cons of Modifying the Algorithm Operator:
- May be difficult to design; problem-dependent
- May restrict the search space too much, making it harder to find optimal solutions
(THE GLOBAL OPTIMUM MAY LIE BETWEEN FEASIBLE & INFEASIBLE SOLUTIONS)
What is Modifying the Objective Function?
- Incorporating the constraints into the objective function, often by adding a penalty term for constraint violations
THE OBJECTIVE FUNCTION IS INCREASED/DECREASED IF A CONSTRAINT IS VIOLATED
What is the Death Penalty Method?
A method of Modifying the Objective Function
MIN PROBLEM:
Let x be a solution
The objective function becomes f(x) + Q(x), where Q(x) is the penalty term
- if x is FEASIBLE, Q(x) = 0
- else Q(x) is a LARGE +VE constant C
(so that feasible solutions score lower than infeasible ones)
What's a Limitation/Problem with the Death Penalty Method?
C is always the same constant, so ALL INFEASIBLE SOLUTIONS get the same penalty
- Hard to find solutions in a region dominated by infeasible solutions
What is the Levels of Infeasibility Approach?
A method of Modifying the Objective Function
- Distinguishes between the objective values of infeasible solutions to help the algorithm find feasible ones
The objective function still gets a penalty added/subtracted
Give an Example of Q(x) for the Levels of Infeasibility Approach?
THE PROBLEM IS: MINIMISE f(x) + Q(x)
Q(x) = vg_1·Cg_1·g_1(x) + … + vg_m·Cg_m·g_m(x) + vh_1·Ch_1·h_1(x) + … + vh_n·Ch_n·h_n(x)
If g_i(x) / h_i(x) is VIOLATED, then vg_i / vh_i = 1, OTHERWISE vg_i / vh_i = 0
The more of the g_i & h_i that are violated, the higher the output of Q(x)
Only adds the penalties corresponding to VIOLATED constraints
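A minimal sketch of such a penalised objective for a MIN problem, using squared violations (see the squaring card below); the names gs and cs are mine, not the course's:

```python
def penalised(f, gs, cs):
    """Wrap f(x) as f(x) + Q(x), where gs are constraint functions g_i
    (satisfied when g_i(x) <= 0) and cs their importance constants C_i."""
    def f_plus_q(x):
        q = sum(c * g(x) ** 2            # squared violation widens the gaps
                for g, c in zip(gs, cs)
                if g(x) > 0)             # v_i = 1 only for violated constraints
        return f(x) + q
    return f_plus_q
```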
What are g_i & h_i in the Levels of Infeasibility Approach?
The constraints
What is C in the Levels of Infeasibility Approach?
A constant that represents how important a specific constraint is.
e.g. => if the term vg_1·Cg_1·g_1(x) is more important than vg_2·Cg_2·g_2(x), then Cg_1 and Cg_2 will take different values
(SCALING EACH CONSTRAINT BY HOW IMPORTANT IT IS)
Why is Levels of Infeasibility better than Death Penalty?
It solves the issue with the Death Penalty by giving different penalties to different infeasible solutions:
- Smaller for solutions that are close to satisfying the constraints
- Larger for solutions that are further from satisfying the constraints
What does Squaring the values of the Constraints in the Penalty term do?
- Makes the distinction between objective values even larger
- Distinguishes different infeasible solutions more effectively
- Removes -VEs
- Gives a bigger span of solution values
Pros of Levels of Infeasibility?
Easy to design
Cons of Levels of Infeasibility?
- The algorithm still has to search for feasible solutions, rather than the search space containing only feasible solutions
(INFEASIBLE SOLUTIONS ARE STILL GENERATED, BUT GET IGNORED DUE TO THEIR LARGE/SMALL OBJECTIVE VALUE WHEN WE MIN/MAX)
(THEY WILL JUST BE NEIGHBOURS, NOT SOLUTIONS WE MOVE TO)
How do Hill Climbing & Simulated Annealing become Complete?
If the strategy never enables infeasible solutions to be generated at all
Which of the CONSTRAINT HANDLING methods makes Hill Climbing & Simulated Annealing COMPLETE?
Modifying the Algorithm Operator, as it does not generate infeasible solutions
What are the Hyperparameters?
HIGH LEVEL free parameters
How is a Predictor Obtained?
By training the free parameters of the considered model using available data.
What is the Training set used for?
Estimate the free parameters
What is the Test set used for?
Evaluate the performance of the trained predictor before deploying it
Reserved for the evaluation of the predictor, so it can't be used during training: the model must learn nothing from it
Holdout Validation Steps?
1) Randomly Choose 30% of data to form a Validation set
2) Remaining data forms the training set
3) Training your model on the training set
4) Estimate the test performance on the validation set
5) Choose the model with the lowest validation error
6) Retrain the chosen model on the joined training & validation sets to obtain the predictor
7) Estimate future performance of the obtained predictor on TEST SET
8) Ready to deploy the Predictor
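A minimal NumPy sketch of steps 1-2 (the random 30% split); fitting and scoring the candidate models are assumed to happen elsewhere:

```python
import numpy as np

def holdout_split(X, y, frac=0.3, seed=0):
    """Randomly hold out `frac` of the annotated data as a validation set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))        # shuffle the example indices
    n_val = int(frac * len(X))
    val, train = idx[:n_val], idx[n_val:]
    return X[train], y[train], X[val], y[val]
```

Each candidate model is then trained on the training split and scored on the validation split; the winner is retrained on training + validation before the single test-set estimate.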
What is the Mean Square Error (L2)?
(f(x)-y)^2
What are the 3 Models?
- Linear Model
- Quadratic Model
- Line Model (OVERFITTING)
For Holdout Validation, how do we ESTIMATE THE TEST PERFORMANCE ON THE VALIDATION SET?
REGRESSION: compute the cost function (L2) on the examples of the validation set (INSTEAD OF THE TRAINING SET)
CLASSIFICATION: don't compute the cross-entropy cost on the validation set;
COMPUTE the 0-1 error metric instead
0-1 error metric =
(num of wrong predictions)/(num of predictions) =
1 - accuracy
Steps for k-Fold Cross-Validation?
1) Split the training set randomly into k (equal sized) disjoint sets
2) Use K-1 of those together for training
3) Use the remaining one for validation
4) Permute the k sets & repeat k times
(CHANGE WHICH PARTITION IS THE VALIDATION SET & REPEAT k TIMES)
EACH PARTITION WILL BE USED AS A VALIDATION SET ONCE
5) Average the Performances on the k validation sets
6) Choose the model with smallest AVG k fold cross validation error
7) Re-train with chosen model on joined training & validation to obtain the predictor
8)Estimate future performance of the obtained predictor on TEST SET
9) Ready to deploy the Predictor
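A minimal sketch of steps 1-5, assuming hypothetical fit(X, y) and error(model, X, y) callables:

```python
import numpy as np

def k_fold_error(fit, error, X, y, k=10, seed=0):
    """Average validation error over k disjoint folds."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)   # k disjoint index sets
    errs = []
    for i in range(k):
        val = folds[i]                                   # fold i is the validation set
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])                  # train on the other k-1 folds
        errs.append(error(model, X[val], y[val]))
    return np.mean(errs)
```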
Steps for Leave-one-out Validation
- Leave out a single example & train on all the rest of the annotated data
- For a total of N example, we repeat this N times leaving out a single example
- Take Avg. of the validation errors as measured on the left-out points
- Same as N-folds cross validation where N is the number of labelled points
(EVERY POINT WILL BE USED AS A VALIDATION POINT)
What do we have to do with the models for k-fold Cross-Validation?
For each partition, fit a separate line/graph & then compute the validation error using the points in that partition
- Then take the mean of these ERRORS for each of the models
Pros of Holdout Validation?
Computationally Cheapest
(IDEAL FOR LARGE SAMPLES)
Cons of Holdout Validation?
Not reliable if sample size is not large enough
Pros of 3-fold?
Slightly more reliable than holdout
Pros of 10-fold?
- Only wastes 10% of the annotated data
- Fairly reliable
Cons of 3-fold?
- Wastes 1/3-rd annotated data
- Computationally 3-times as expensive as holdout
Cons of 10-fold?
- Wastes annotated data
- Computationally 10-times as expensive as holdout
Pros of Leave-one-out?
Doesn’t waste data
(IDEAL FOR SMALL SAMPLES)
Cons of Leave-one-out?
Computationally most Expensive
What is the Storage Complexity of a Dendrogram?
O(n^2)
- Storing the distance matrix requires storing (n^2)/2 entries (don't need to store b-to-a if a-to-b is already there)
What is the Time Complexity of a Dendrogram?
O(n^3)
- n iterations
- In every iteration the n^2-size distance matrix has to be updated and searched
(GOING THROUGH ONCE IS n^2 & DOING SO n TIMES IS n^3)
- Complexity can be reduced to O(n^2 log(n)) by using different algorithms
Downside of a Dendrogram, regarding Complexity?
- Limits the size of the dataset that can be processed
- Anything more than n^2/n^3 is hard to work with
NOT IDEAL FOR LARGE DATASETS
Pros of KNN?
- VERSATILE: classification & regression & non-parametric
- Easy to implement & interpret
- Can approximate complex functions, so it has very good accuracy
- Instance-based, so it defers all calculations until prediction time
Cons of KNN?
- Performance decreases as dimensionality increases
- Sensitive to noise / INACCURATE DATA, especially when the value of k is small
- Must specify the distance function & pre-define the k value
- Needs ALL the training data, as it calculates the value of new points from the training data only
- Computationally expensive as the dataset grows
Pros of Linear Regression?
- Relatively low computational cost
- Don't have to worry about a K value
- Less space required: does not have to store the training set
Logistic Regression?
- Supervised Learning technique
- Takes inputs and fits them like Linear Regression
- Classifies the outputs into discrete, distinct categories using the sigmoid function
What is the sigmoid function?
1/(1 + e^-u)
where u is the line or function w0 + w1*x1
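A minimal sketch of the sigmoid and a threshold classifier built on it (the 0.5 threshold is the usual default, assumed here):

```python
import math

def sigmoid(u):
    """Logistic function: maps any real u into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-u))

def classify(x1, w0, w1, threshold=0.5):
    """Predict class 1 when sigmoid(w0 + w1*x1) crosses the threshold."""
    return 1 if sigmoid(w0 + w1 * x1) >= threshold else 0
```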
What is f00?
Pairs that have neither the same cluster nor the same class
What is f10?
Pairs that have the same cluster, but not the same class
What is f01?
Pairs that don't have the same cluster, but have the same class
What is f11?
Pairs that have the same cluster & the same class
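A minimal sketch that counts these four pair types from cluster and class label sequences; (f00 + f11) divided by the total number of pairs then gives a Rand-index-style agreement score:

```python
from itertools import combinations

def pair_counts(clusters, classes):
    """Return (f00, f10, f01, f11) as defined on the cards above."""
    f00 = f10 = f01 = f11 = 0
    for i, j in combinations(range(len(clusters)), 2):
        same_cluster = clusters[i] == clusters[j]
        same_class = classes[i] == classes[j]
        if same_cluster and same_class:
            f11 += 1
        elif same_cluster:
            f10 += 1                     # same cluster, different class
        elif same_class:
            f01 += 1                     # different cluster, same class
        else:
            f00 += 1
    return f00, f10, f01, f11
```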
BFS is not Optimal when?
The cost function is NOT non-decreasing
Explain the hypothesis function y = w_0 + w_1*x
- Creates a line in 2D space relating x values to predictions
- w_0 is the y-intercept/offset
- w_1 is the gradient
Explain the Cost Function:
L(x, y) = Σ (y_i - h(x_i))^2
Compares the difference between the actual labelled data & the model's predictions
- Squared to eliminate -VEs
- Done for every example in the training set
- SUMS the values to give the cost function value
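A minimal sketch of the hypothesis and its L2 cost over a training set:

```python
def h(x, w0, w1):
    """Linear hypothesis: predicted y for input x."""
    return w0 + w1 * x

def l2_cost(xs, ys, w0, w1):
    """Sum of squared errors between labels and predictions."""
    return sum((y - h(x, w0, w1)) ** 2 for x, y in zip(xs, ys))
```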
Explain the polynomial Hypothesis Function, e.g. y = w_0 + w_1*x + w_2*x^2
Takes in ONE x value
- Uses it raised to different powers to get a polynomial function
- THUS NON-LINEAR
What is an ideal Cluster Similarity matrix?
For n examples/data points, create an n x n matrix
Then compare whether each pair of points is in the same cluster, giving 1 if they are, otherwise 0
What is an ideal Class Similarity matrix?
For n examples/data points, create an n x n matrix
Then compare whether each pair of points is in the same class, giving 1 if they are, otherwise 0
How do we predict in KNN for Regression Problems?
Take the average y_i over all K neighbours
(THE VALUE YOU ARE PREDICTING FOR THE NEW DATA)
The average value is the y_i value for the new data
e.g.: we want to find the height of a new data point p:
- Find the closest k neighbours
- Find the average height of those neighbours
- That average is the height of p
How do we predict in KNN for Classification Problems?
Just look at the y_i values of all k neighbours; whichever is most prevalent is the value of our new data point's y_i
e.g.: we want to find the class of a new data point p:
- Find the closest k neighbours
- Look at the class of each neighbour
- Pick the class that shows up the most among the neighbours
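A minimal NumPy sketch covering both cases — averaging for regression, majority vote for classification — with Euclidean distance assumed, as on the cards:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k, regression=False):
    """Predict a value/class for point x from its k nearest neighbours."""
    dists = np.linalg.norm(X_train - x, axis=1)       # Euclidean distances
    nearest = y_train[np.argsort(dists)[:k]]          # y_i of the k closest points
    if regression:
        return nearest.mean()                         # average the neighbours' values
    return Counter(nearest.tolist()).most_common(1)[0][0]   # majority vote
```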
What is K-means++?
- First pick a random initial centroid
- Then pick the next centroid with probability proportional to its squared Euclidean distance from the nearest centroid chosen so far (favouring far-away points)
- Repeat until you have k centroids
- Then run K-means to find the clusters
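A minimal sketch of the k-means++ initialisation just described:

```python
import numpy as np

def kmeans_pp_init(points, k, seed=0):
    """Next centroid is sampled with probability proportional to squared
    distance from the nearest centroid chosen so far (favours far points)."""
    rng = np.random.default_rng(seed)
    centroids = [points[rng.integers(len(points))]]   # first centroid: uniform
    while len(centroids) < k:
        d2 = np.min([((points - c) ** 2).sum(axis=1) for c in centroids], axis=0)
        centroids.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    return np.array(centroids)
```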
How is K-means non-deterministic?
Different initialisations lead to different local optima
What if the optimisation problem in K-means is not convex?
The algorithm may not converge to the global minimum, but rather to a local minimum
2 things that make K-means converge:
- WCSS monotonically decreases
- It works with a finite number of partitions of the data