Cost function formula Flashcards
Q: What is the first key step in implementing linear regression?
A: Defining the cost function to measure how well the model is doing.
Q: What do the parameters w and b represent in the linear regression model?
A: w is the weight (slope), and b is the bias (intercept).
Q: How do different values of w and b affect the linear regression model’s function f(x)?
A: Changing w changes the slope of the line; changing b shifts the line up or down on the graph.
Q: What is the function f in linear regression commonly called?
A: The model.
Q: How do you write a linear regression function using w and b?
A: f(x)=wx+b.
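The model f(x)=wx+b can be sketched in a few lines of Python (the function name and sample values below are illustrative, not from the cards):

```python
def f(x, w, b):
    """Predict y-hat for input x, with weight w (slope) and bias b (intercept)."""
    return w * x + b

# Example: slope w = 2, intercept b = 1
print(f(3, 2, 1))  # 2*3 + 1 = 7
```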
Q: What do you call the difference between the predicted value y^ and the actual value y?
A: The error.
Q: How is the cost function J(w,b) defined in linear regression?
A: J(w,b) = (1/(2m)) Σᵢ₌₁ᵐ (ŷ^(i) − y^(i))², where ŷ^(i) = f(x^(i)) = wx^(i) + b and m is the number of training examples: the sum of the squared errors, divided by 2m.
Q: What is meant by “squared error” in the cost function?
A: It is the square of the difference between the predicted value y^ and the true value y.
Q: Why do we divide by 2m in the cost function formula?
A: Dividing by m averages the squared errors over the training set; the extra factor of 2 is a mathematical convenience that makes later derivative calculations cleaner, since the 2 produced by differentiating the square cancels it.
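The cost formula on these cards can be sketched directly in Python; the function and variable names here are illustrative:

```python
def compute_cost(x, y, w, b):
    """Squared-error cost J(w, b) = (1/(2m)) * sum of (y_hat - y)^2."""
    m = len(x)  # number of training examples
    total = 0.0
    for i in range(m):
        y_hat = w * x[i] + b          # model prediction f(x^(i)) = w*x^(i) + b
        total += (y_hat - y[i]) ** 2  # squared error for example i
    return total / (2 * m)            # divide by 2m

# Perfect fit: the data lies exactly on y = 2x + 1, so the cost is 0.
x_train = [1, 2, 3]
y_train = [3, 5, 7]
print(compute_cost(x_train, y_train, w=2, b=1))  # 0.0
```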
Q: What is the goal when minimizing the cost function J(w,b)?
A: To find values of w and b that make the cost function as small as possible, indicating a better fit to the data.
Q: How do you denote a specific training example in machine learning notation?
A: With a parenthesized superscript: (x^(i), y^(i)) denotes the i-th training example, for i = 1, …, m.
Q: What happens if J(w,b) is large?
A: It indicates that the model’s predictions are far from the actual values, reflecting poor performance.
Q: What does a small value of J(w,b) indicate?
A: That the model’s predictions are close to the actual values, indicating good performance.
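The two cards above can be illustrated by evaluating the squared-error cost at well-chosen and poorly chosen parameters (a minimal sketch; the parameter values and data are illustrative):

```python
def compute_cost(x, y, w, b):
    """Squared-error cost J(w, b) = (1/(2m)) * sum of (w*x + b - y)^2."""
    m = len(x)
    return sum((w * x[i] + b - y[i]) ** 2 for i in range(m)) / (2 * m)

x_train = [1, 2, 3]
y_train = [3, 5, 7]  # generated by y = 2x + 1

good = compute_cost(x_train, y_train, w=2, b=1)  # parameters that fit the data
bad = compute_cost(x_train, y_train, w=0, b=0)   # parameters that fit poorly

print(good, bad)  # the well-fitting parameters give a much smaller J
```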
Q: How does the cost function help in the optimization process of linear regression?
A: It provides a quantitative measure of the model’s error, guiding adjustments to w and b to improve predictions.
Q: What is the goal of the cost function in linear regression?
A: To measure how well the model’s predictions match the actual data and to help improve the model by adjusting the parameters.