First 100 ML/AI Terms Flashcards
(100 cards)
Artificial Intelligence (AI)
The simulation of human intelligence processes by machines, particularly computer systems.
Machine Learning (ML)
A subset of AI that involves the use of algorithms and statistical models to enable computers to learn from data and make decisions without being explicitly programmed.
Deep Learning
A subset of ML involving neural networks with many layers that learn complex patterns in data.
Neural Network
A model made of layers of interconnected artificial neurons, loosely inspired by the structure of the human brain, that processes information by passing signals through those layers.
Supervised Learning
A type of ML where the model is trained on labeled data (input-output pairs).
Unsupervised Learning
A type of ML that deals with unlabeled data, finding patterns or structures in input data without predefined labels.
Reinforcement Learning
An ML technique where an agent learns to make decisions by taking actions in an environment to maximize cumulative reward.
Classification
A supervised learning task where the model predicts a discrete label for input data.
Regression
A supervised learning task where the model predicts a continuous value for input data.
Clustering
An unsupervised learning method that groups data points into clusters based on their similarity.
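A minimal clustering sketch using k-means; scikit-learn, the toy 2-D points, and the choice of two clusters are illustrative assumptions, not part of the cards themselves.

```python
# K-means clustering: group unlabeled points into clusters by similarity.
# Assumes scikit-learn; the data and n_clusters=2 are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],   # one loose group of points
              [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]])  # another loose group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)           # cluster index assigned to each point
print(kmeans.cluster_centers_)  # learned cluster centroids
```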
Decision Tree
A model that splits data into branches based on feature values, used for classification and regression tasks.
Random Forest
An ensemble learning method that constructs multiple decision trees during training and combines them, outputting the majority-vote class for classification (or the average prediction for regression).
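To make the two tree-based cards above concrete, the sketch below fits a single decision tree and a random forest on the same toy labeled data; scikit-learn and the invented dataset are assumptions for illustration.

```python
# Decision tree vs. random forest (an ensemble of trees) on a toy classification task.
# Assumes scikit-learn; the data below is invented for illustration.
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1], [2, 2], [2, 3]]
y = [0, 0, 0, 1, 1, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

print(tree.predict([[1, 2]]))    # prediction from one tree
print(forest.predict([[1, 2]]))  # majority vote across many trees
```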
Support Vector Machine (SVM)
A supervised learning algorithm that finds the hyperplane that best separates the classes in the data, maximizing the margin between them.
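A minimal SVM sketch, again assuming scikit-learn; a linear kernel and toy data are chosen only to keep the separating hyperplane easy to picture.

```python
# Linear SVM: find the hyperplane that best separates two classes.
# Assumes scikit-learn; the data is illustrative.
from sklearn.svm import SVC

X = [[0, 0], [1, 1], [1, 0], [4, 4], [5, 5], [4, 5]]
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[2, 2], [5, 4]]))  # points on either side of the learned hyperplane
print(clf.support_vectors_)           # the training points that define the margin
```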
K-Nearest Neighbors (KNN)
A simple ML algorithm that classifies data points based on the majority class of their k nearest neighbors.
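A from-scratch sketch of the KNN idea above: label a query point by majority vote among its k nearest training points. The data and the choice k=3 are invented for illustration.

```python
# K-nearest neighbors: classify a query point by the majority class of its
# k closest training points (Euclidean distance).
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=3):
    distances = np.linalg.norm(X_train - x_query, axis=1)  # distance to every training point
    nearest = np.argsort(distances)[:k]                    # indices of the k closest points
    votes = Counter(y_train[nearest])                      # count labels among those neighbors
    return votes.most_common(1)[0][0]                      # majority class wins

X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [6, 5], [5, 6]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.5, 0.5])))  # expected: 0
print(knn_predict(X_train, y_train, np.array([5.5, 5.5])))  # expected: 1
```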
Gradient Descent
An optimization algorithm used to minimize the loss function in various ML models by iteratively adjusting parameters.
Learning Rate
A hyperparameter that controls the step size at each iteration of gradient descent.
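A tiny sketch tying the gradient descent and learning rate cards together: minimizing a one-dimensional quadratic loss by repeatedly stepping opposite the gradient. The loss, starting point, step size, and iteration count are all invented for illustration.

```python
# Gradient descent on the loss L(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
def gradient(w):
    return 2.0 * (w - 3.0)

w = 0.0               # initial parameter guess
learning_rate = 0.1   # hyperparameter: how large a step to take each iteration

for _ in range(50):
    w = w - learning_rate * gradient(w)  # step opposite the gradient to reduce the loss

print(round(w, 4))  # converges toward the minimizer w = 3
```

A learning rate that is too large can overshoot the minimum and diverge, while one that is too small makes convergence slow, which is why it is tuned as a hyperparameter.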
Overfitting
A modeling error that occurs when the model learns the noise in the training data rather than just the underlying pattern, so it performs well on training data but poorly on new data.
Underfitting
When a model is too simple to capture the underlying pattern in the data, resulting in poor performance on both training and new data.
Bias
The error due to overly simplistic assumptions in the learning algorithm, leading to underfitting.
Variance
The error due to the model’s sensitivity to small fluctuations in the training set, leading to overfitting.
Hyperparameter
A parameter whose value is set before the learning process begins and controls the model’s training process.
Cross-Validation
A technique for evaluating ML models by partitioning the data into subsets and training/testing the model multiple times.
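A short cross-validation sketch; scikit-learn, the iris dataset, logistic regression, and five folds are illustrative assumptions used only to show the rotation of train/test splits.

```python
# 5-fold cross-validation: split the data into 5 parts, train on 4 and test on the
# held-out part, rotating so every part is used for testing exactly once.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

scores = cross_val_score(model, X, y, cv=5)  # one accuracy score per fold
print(scores, scores.mean())
```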
Regularization
Techniques like L1 (Lasso) and L2 (Ridge) used to prevent overfitting by penalizing large coefficients.
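A sketch of L2 (Ridge) and L1 (Lasso) regularization on the same toy data; scikit-learn, the synthetic dataset, and the alpha values are assumptions for illustration.

```python
# Ridge (L2) and Lasso (L1) both add a penalty on large coefficients to the loss;
# alpha controls the penalty strength. The synthetic data is invented for illustration.
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=50)  # only the first feature matters

ridge = Ridge(alpha=1.0).fit(X, y)   # L2: shrinks all coefficients toward zero
lasso = Lasso(alpha=0.1).fit(X, y)   # L1: can drive irrelevant coefficients exactly to zero
print(ridge.coef_.round(2))
print(lasso.coef_.round(2))
```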
Loss Function
A function that measures the discrepancy between the predicted value and the actual value, guiding model training.
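As one concrete example of a loss function, the sketch below computes mean squared error, a common choice for regression; the example numbers are illustrative.

```python
# Mean squared error: the average squared gap between predictions and actual values.
import numpy as np

def mse(y_true, y_pred):
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

print(mse([3.0, 5.0, 2.5], [2.5, 5.0, 3.0]))  # ~0.1667
```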