AI Flashcards

1
Q
  • Artificial Intelligence (AI):
A
  • Artificial Intelligence (AI):

AI is a subfield of computer science that aims to create systems capable of performing tasks typically requiring human intelligence. These tasks include learning, reasoning, problem-solving, perception, and language understanding.

2
Q
  • Machine Learning (ML):
A
  • Machine Learning (ML):

Machine Learning, a crucial subset of AI, leverages algorithms and statistical models to enable machines to improve their performance over time through experience and training. It’s essentially about teaching computers to learn from data.

3
Q
  • Deep Learning:
A
  • Deep Learning:

Deep Learning is a type of machine learning that utilizes neural networks with many layers to analyze data and derive conclusions. It is especially adept at processing large amounts of unstructured data, such as images and text.

4
Q
  • Sam Altman:
A
  • Sam Altman:

Formerly the president of Y Combinator and now the CEO of OpenAI, Sam Altman is a key figure in the AI industry. He has been influential in advancing OpenAI’s mission of ensuring artificial general intelligence benefits all of humanity.

5
Q
  • Neural Networks:
A
  • Neural Networks:

These are computational models inspired by the human brain. They consist of interconnected nodes or “neurons” that process information and identify patterns in data. They are the backbone of deep learning.
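
As a concrete illustration, here is a minimal sketch of a small feed-forward network in PyTorch; the layer sizes are arbitrary and purely illustrative:

    import torch
    import torch.nn as nn

    # A tiny feed-forward network: 4 input features -> 16 hidden "neurons" -> 3 outputs.
    # Each Linear layer is a set of weighted connections; ReLU adds non-linearity.
    model = nn.Sequential(
        nn.Linear(4, 16),
        nn.ReLU(),
        nn.Linear(16, 3),
    )

    x = torch.randn(8, 4)     # a batch of 8 examples with 4 features each
    logits = model(x)         # forward pass: output has shape (8, 3)
    print(logits.shape)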

6
Q
  • Supervised Learning:
A
  • Supervised Learning:

In Supervised Learning, an AI model is trained using labeled data. It involves the model learning to map input data to the correct output using feedback from a ‘teacher’.
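
A minimal sketch of the idea using scikit-learn (assuming it is installed); the labeled Iris dataset plays the role of the 'teacher':

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)   # inputs X and their known labels y
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=1000)
    model.fit(X_tr, y_tr)               # learn the input-to-label mapping
    print(model.score(X_te, y_te))      # accuracy on unseen examples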

7
Q
  • Unsupervised Learning:
A
  • Unsupervised Learning:

Unsupervised Learning involves training an AI model using data that is neither classified nor labeled, enabling the model to identify patterns and structures within the data on its own.
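
A minimal scikit-learn sketch: k-means clustering groups the same data without ever seeing the labels.

    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris

    X, _ = load_iris(return_X_y=True)   # the labels are deliberately ignored
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_[:10])          # cluster assignments found from the data's structure alone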

8
Q
  • Reinforcement Learning:
A
  • Reinforcement Learning:

A type of machine learning in which an 'agent' learns to make decisions by taking actions in an environment to maximize a notion of cumulative reward.
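
The agent-environment-reward loop can be sketched with the Gymnasium library (assuming it is installed); the random policy below is a stand-in for a real learning algorithm:

    import gymnasium as gym

    env = gym.make("CartPole-v1")
    obs, info = env.reset(seed=0)
    total_reward = 0.0

    for _ in range(200):
        action = env.action_space.sample()   # a real agent would pick actions to maximize reward
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        if terminated or truncated:
            break

    print(total_reward)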

9
Q
  • Natural Language Processing (NLP):
A
  • Natural Language Processing (NLP):

NLP is a field of AI that focuses on the interaction between computers and humans through natural language. The ultimate objective is to read, decipher, understand, and make sense of human language in a valuable way.

10
Q
  • Computer Vision:
A
  • Computer Vision:

Computer Vision aims to mimic human vision by electronically perceiving and interpreting an image or a sequence of images. It is a key technology for fields like autonomous vehicles, medical imaging, and face recognition.

11
Q
  • Generative Adversarial Networks (GANs):
A
  • Generative Adversarial Networks (GANs):

GANs are a class of machine learning systems invented by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game, with one network (the generator) making data instances to fool the other network (the discriminator).
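
A compact, illustrative PyTorch sketch of one training step; the network sizes and the "real" data are placeholders, not the original GAN setup:

    import torch
    import torch.nn as nn

    # Generator maps 16-D noise to 2-D "data"; discriminator scores real vs. fake.
    G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
    D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    real = torch.randn(64, 2) + 3.0    # stand-in for a batch of real data
    fake = G(torch.randn(64, 16))      # the generator's attempt to mimic it

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the just-updated discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()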

12
Q
  • Bias in AI:
A
  • Bias in AI:

Bias in AI refers to situations where AI systems may systematically produce outcomes that are unfair or discriminatory, typically as a result of biases present in the training data or the design of the algorithms.

13
Q
  • Explainable AI (XAI):
A
  • Explainable AI (XAI):

XAI is an emerging field in machine learning that aims to address how black box decisions of AI systems are made. This field aims to make AI decision-making transparent and understandable to human users.

14
Q
  • Data Mining:
A
  • Data Mining:

Data Mining involves extracting valuable, yet non-obvious, information from large databases. It uses techniques from machine learning, statistics, and database systems to discover patterns in large data sets.

15
Q
  • Robotic Process Automation (RPA):
A
  • Robotic Process Automation (RPA):

RPA refers to the use of software bots to automate highly repetitive and routine tasks traditionally performed by human workers.

16
Q
  • AI Ethics:
A
  • AI Ethics:

AI Ethics is a branch of ethics dedicated to understanding and addressing the moral issues raised by the development and implementation of artificial intelligence technologies.

17
Q
  • Algorithm:
A
  • Algorithm:

An algorithm is a step-by-step procedure for solving a problem or accomplishing a task. In the context of AI and ML, algorithms are used to find patterns in data and make decisions.

18
Q
  • Artificial General Intelligence (AGI):
A
  • Artificial General Intelligence (AGI):

AGI refers to a type of artificial intelligence that has the ability to understand, learn, and apply its intelligence to any intellectual task that a human being can do.

19
Q
  • Convolutional Neural Networks (CNNs):
A
  • Convolutional Neural Networks (CNNs):

CNNs are a type of deep learning model primarily used for image processing. They have proven to be highly effective in areas such as face recognition and image and video recognition.

20
Q
  • Recurrent Neural Networks (RNNs):
A
  • Recurrent Neural Networks (RNNs):

RNNs are a type of deep learning model designed to recognize patterns in sequences of data, making them particularly effective for tasks such as language modeling and speech recognition.

21
Q
  • Transfer Learning:
A
  • Transfer Learning:

Transfer Learning is a machine learning method where a pre-trained model is adapted for a new, different data set. It’s a powerful technique when there’s a lack of labeled data for the task at hand.
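
A typical sketch with torchvision (assuming a recent version that supports the weights argument): reuse an ImageNet-pretrained ResNet and retrain only a new classification head; the 5-class task is hypothetical.

    import torch.nn as nn
    from torchvision import models

    # Load a ResNet-18 pretrained on ImageNet and freeze its weights.
    model = models.resnet18(weights="IMAGENET1K_V1")
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final layer with a new head for a hypothetical 5-class task;
    # only this layer will be trained on the new, smaller dataset.
    model.fc = nn.Linear(model.fc.in_features, 5)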

22
Q
  • Feature Extraction:
A
  • Feature Extraction:

This refers to the process of transforming raw data into a set of input features that can be handled by a machine learning algorithm. This process can dramatically improve the performance of ML models.
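
For example, raw text cannot be fed to most models directly; a TF-IDF transform turns it into numeric feature vectors (a minimal scikit-learn sketch with a made-up two-document corpus):

    from sklearn.feature_extraction.text import TfidfVectorizer

    corpus = [
        "machine learning extracts patterns from data",
        "deep learning uses neural networks",
    ]
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(corpus)   # sparse matrix of TF-IDF features
    print(X.shape, vectorizer.get_feature_names_out())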

23
Q
  • Overfitting and Underfitting:
A
  • Overfitting and Underfitting:

These refer to the common problems in machine learning where a model performs well on training data but poorly on unseen data (overfitting), or where a model performs poorly on both training and unseen data (underfitting).

24
Q
  • Hyperparameter Tuning:
A
  • Hyperparameter Tuning:

This refers to the process of choosing a set of optimal hyperparameters (settings that are fixed before training, such as learning rate or tree depth) for a learning algorithm to improve its performance.

25
Q
  • Precision and Recall:
A
  • Precision and Recall:

These are evaluation metrics used in machine learning. Precision measures the proportion of correctly identified positive observations out of the total predicted positives, while recall measures the proportion of actual positives that were identified correctly.
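
In terms of true positives (TP), false positives (FP), and false negatives (FN): precision = TP / (TP + FP) and recall = TP / (TP + FN). A minimal check with scikit-learn on made-up labels:

    from sklearn.metrics import precision_score, recall_score

    y_true = [1, 1, 1, 0, 0, 0, 1, 0]
    y_pred = [1, 1, 0, 0, 1, 0, 1, 0]

    # For the positive class: TP = 3, FP = 1, FN = 1.
    print(precision_score(y_true, y_pred))   # 3 / (3 + 1) = 0.75
    print(recall_score(y_true, y_pred))      # 3 / (3 + 1) = 0.75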

26
Q
  • Chatbot:
A
  • Chatbot:

A chatbot is an AI application designed to simulate conversation with human users, especially over the internet. Chatbots can communicate via text or audio and are used in a variety of customer service and information acquisition scenarios.

27
Q
  • Autonomous Vehicles:
A
  • Autonomous Vehicles:

These are vehicles capable of sensing their environment and moving safely with little or no human input. They combine a variety of techniques to perceive their surroundings, including radar, lidar, GPS, odometry, and computer vision.

28
Q
  • Quantum Computing:
A
  • Quantum Computing:

Quantum computing is a type of computation that uses quantum bits, or qubits, and leverages quantum mechanical phenomena such as superposition and entanglement. It has the potential to solve certain classes of complex problems much more quickly than classical computers.

29
Q
  • Swarm Intelligence:
A
  • Swarm Intelligence:

Swarm intelligence is the collective behavior of decentralized, self-organized systems. It can be used to coordinate multiple AI agents, such as drones or robots, to achieve a larger task.

30
Q
  • Knowledge Graph:
A
  • Knowledge Graph:

A knowledge graph is a structured graphical representation that connects facts about the world and is used for information retrieval in a semantic, meaningful manner.

31
Q
  • Edge AI:
A
  • Edge AI:

Edge AI refers to the process of running AI algorithms locally on a hardware device. The algorithms are run on the device’s data without requiring data communication with a server, allowing real-time processing and decision-making.

32
Q
  • Multimodal model:
A
  • Multimodal model:

An AI model that can process and understand multiple types of data inputs, such as images and text. These models can be more versatile in handling tasks that require understanding of different types of data.

33
Q
  • GPT-4:
A
  • GPT-4:

The fourth iteration of the Generative Pre-trained Transformer series developed by OpenAI. It's a multimodal model that accepts both text and image inputs and emits text outputs, and it exhibits human-level performance on various professional and academic benchmarks.

35
Q
  • ChatGPT:
A
  • ChatGPT:

A conversational AI model developed by OpenAI. The model is designed to generate human-like text responses given a series of input text. It’s part of the GPT series, with the latest version (as of June 2023) being powered by GPT-4.

36
Q
  • OpenAI Evals:
A
  • OpenAI Evals:

An open-source framework developed by OpenAI for the automated evaluation of AI model performance. It’s designed to help identify shortcomings in AI models to guide further improvements.

37
Q
  • Stable Diffusion:
A
  • Stable Diffusion:

A text-to-image model released by Stability AI. It's trained to generate images from text descriptions and uses a model architecture called a Latent Diffusion Model. The model is open-source and has been trained on a large dataset of image-text pairs.

38
Q
  • Latent Diffusion Models:
A
  • Latent Diffusion Models:

These models compress an image into a lower-dimensional latent space using an autoencoder, run the iterative noising and denoising (diffusion) process in that latent space, and then decode the result back into pixel space. This approach is used in the Stable Diffusion model to generate images from text descriptions.

39
Q
  • LAION-5B dataset:
A
  • LAION-5B dataset:

A large-scale open dataset of 5.85 billion image-text pairs released by LAION. Subsets of LAION-5B were used to train the Stable Diffusion model.

40
Q
  • DALL-E 2:
A
  • DALL-E 2:

The second generation of OpenAI's text-to-image generative model. It was trained on image-caption pairs drawn from publicly available and licensed sources, and it combines CLIP image-text embeddings with a diffusion-based decoder to generate images from text descriptions.

41
Q
  • Geoffrey Hinton:
A
  • Geoffrey Hinton:

Often referred to as the “godfather of deep learning,” Hinton’s work on neural networks and machine learning has been foundational to the field. He is a recipient of the Turing Award, along with Yoshua Bengio and Yann LeCun, for his work in deep learning.

42
Q
  • Yann LeCun:
A
  • Yann LeCun:

LeCun has made significant contributions to convolutional neural networks and other areas of machine learning, computer vision, mobile robotics, and computational neuroscience. He is the Silver Professor at the Courant Institute of Mathematical Sciences at New York University and Vice President and Chief AI Scientist at Meta (Facebook).

43
Q
  • Yoshua Bengio:
A
  • Yoshua Bengio:

Known for his work in artificial neural networks and deep learning, Bengio received the Turing Award along with Geoffrey Hinton and Yann LeCun in 2018. He is a professor at the Department of Computer Science and Operations Research at the Université de Montréal.

44
Q
  • Andrew Ng:
A
  • Andrew Ng:

Co-founder of Coursera and an Adjunct Professor of Computer Science at Stanford University, Ng’s work in AI focuses on deep learning. He was the founding lead of the Google Brain team and was Chief Scientist at Baidu.

45
Q
  • Demis Hassabis:
A
  • Demis Hassabis:

Hassabis is a co-founder and the CEO of DeepMind, which was acquired by Google in 2014. He led the team that developed AlphaGo, the first AI to defeat a world champion Go player.

46
Q
  • Fei-Fei Li:
A
  • Fei-Fei Li:

Li is an influential researcher in the field of computer vision and cognitive and computational neuroscience. She is a professor at Stanford University and was the director of the Stanford Artificial Intelligence Lab (SAIL).

47
Q
  • Ian Goodfellow:
A
  • Ian Goodfellow:

Goodfellow is the creator of generative adversarial networks (GANs), a popular AI model type used to generate realistic images. He has held research roles at Google Brain, OpenAI, Apple, and DeepMind.

48
Q
  • Daphne Koller:
A
  • Daphne Koller:

Koller’s main research interest is in developing and using machine learning and probabilistic methods to model and analyze complex domains. She co-founded Coursera and is now the CEO of insitro, a drug discovery startup.

49
Q
  • Judea Pearl:
A
  • Judea Pearl:

Pearl is a computer scientist and philosopher, known for his development of Bayesian networks and the formulation of causality in probabilistic modeling. He received the Turing Award in 2011.

50
Q
  • Stuart Russell:
A
  • Stuart Russell:

Russell is a computer scientist known for his contributions to artificial intelligence. He is a professor of Computer Science at the University of California, Berkeley, and co-author (with Peter Norvig) of the standard AI textbook “Artificial Intelligence: A Modern Approach”.

51
Q
  • TensorFlow:
A
  • TensorFlow:

TensorFlow is an open-source machine learning framework developed by Google Brain that provides a suite of tools for developing and training machine learning models.

52
Q
  • PyTorch:
A
  • PyTorch:

PyTorch is an open-source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing. It is primarily developed by Facebook’s AI Research lab.

53
Q
  • Ensemble Learning:
A
  • Ensemble Learning:

Ensemble Learning is a machine learning concept in which multiple models such as classifiers or experts are strategically generated and combined to solve a particular computational intelligence problem.
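
A minimal scikit-learn sketch: a soft-voting ensemble that averages the predictions of two different classifiers.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    ensemble = VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("rf", RandomForestClassifier(n_estimators=100))],
        voting="soft",                  # average the predicted class probabilities
    )
    ensemble.fit(X, y)
    print(ensemble.predict(X[:5]))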

54
Q
  • Grid Search:
A
  • Grid Search:

Grid Search is a hyperparameter tuning technique used to find the optimal hyperparameters for a model. It exhaustively tries every combination of the provided hyperparameter values to find the best model.
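
A minimal sketch with scikit-learn's GridSearchCV (assuming scikit-learn is installed); every hyperparameter combination is evaluated with cross-validation:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}   # 6 combinations

    search = GridSearchCV(SVC(), param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)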

55
Q
  • Random Search:
A
  • Random Search:

Random Search is another technique for hyperparameter tuning. Unlike Grid Search, Random Search selects random combinations of the hyperparameters to find the best solution. It’s often more time-efficient than Grid Search.

56
Q
  • AutoML (Automated Machine Learning):
A
  • AutoML (Automated Machine Learning):

AutoML refers to automating the end-to-end process of applying machine learning to real-world problems, including steps such as data preparation, feature engineering, model selection, and hyperparameter tuning.

57
Q
  • Data Wrangling:
A
  • Data Wrangling:

Also known as data munging, data wrangling is the process of cleaning, structuring and enriching raw data into a desired format for better decision making in less time.

58
Q
  • Feature Engineering:
A
  • Feature Engineering:

Feature engineering is the process of using domain knowledge to create features (characteristics, properties, attributes) that make machine learning algorithms work.

59
Q
  • Cross-Validation:
A
  • Cross-Validation:

Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. It partitions the data into subsets, holds out a subset for validation, and uses the remaining subsets for learning.
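
A minimal scikit-learn sketch of 5-fold cross-validation, which yields one score per held-out fold:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print(scores, scores.mean())   # one accuracy score per fold, plus their average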

60
Q
  • Federated Learning:
A
  • Federated Learning:

Federated Learning is a machine learning approach that trains an algorithm across multiple devices or servers holding local data samples, without exchanging them. It’s useful for developing AI models on data that’s privacy-sensitive or distributed across multiple devices.

61
Q
  • Sentiment Analysis:
A
  • Sentiment Analysis:

Sentiment Analysis, or opinion mining, is a subfield of NLP that uses machine learning and text analysis to identify and extract subjective information from source materials.

62
Q
  • Anomaly Detection:
A
  • Anomaly Detection:

Anomaly detection refers to the process of identifying data points, items, or events that deviate from an expected pattern or sequence of data. These identified elements are often referred to as outliers, anomalies, or exceptions.
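
A minimal scikit-learn sketch using an Isolation Forest on synthetic data; points that deviate strongly from the expected pattern are typically flagged with the label -1:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    normal = rng.normal(0, 1, size=(200, 2))         # the expected pattern
    outliers = np.array([[8.0, 8.0], [-9.0, 7.0]])   # clear deviations
    X = np.vstack([normal, outliers])

    labels = IsolationForest(contamination=0.01, random_state=0).fit_predict(X)
    print(labels[-2:])   # the injected outliers are typically labeled -1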

63
Q
  • Turing Test:
A
  • Turing Test:

The Turing Test, proposed by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

64
Q
  • Capsule Networks (CapsNets):
A
  • Capsule Networks (CapsNets):

CapsNets are a type of artificial neural network that attempt to address some limitations of Convolutional Neural Networks, including preserving hierarchical relationships in objects and dealing with viewpoint variance.

65
Q
  • Self-supervised Learning:
A
  • Self-supervised Learning:

Self-supervised Learning is a type of machine learning where the data provides the supervision. In other words, it’s a method where the labels for training are automatically generated from the input data.

66
Q
  • Meta-learning:
A
  • Meta-learning:

Also known as “learning to learn”, Meta-learning is the process by which a model not only learns a specific task, but also how to learn. It’s useful in scenarios where data is scarce, as it allows for rapid learning from few examples.

67
Q
  • Named Entity Recognition (NER):
A
  • Named Entity Recognition (NER):

NER is a subtask of information extraction that seeks to locate and classify named entities mentioned in unstructured text into predefined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc.
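
A minimal sketch with spaCy (assuming spaCy and its small English model en_core_web_sm are installed); the input sentence is made up:

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("OpenAI released GPT-4 in San Francisco in March 2023 for $20 a month.")

    for ent in doc.ents:
        print(ent.text, ent.label_)   # e.g. OpenAI -> ORG, San Francisco -> GPE, March 2023 -> DATE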

68
Q
  • Topic Modeling:
A
  • Topic Modeling:

Topic modeling is a type of statistical model for discovering the abstract topics that occur in a collection of documents.

69
Q
  • Prompt Engineering:
A
  • Prompt Engineering:

Prompt Engineering is the process of designing and refining the input prompt to a machine learning model, especially a language model, to achieve desired output. It involves strategically crafting the input text to guide the model’s response.

70
Q
  • Few-Shot Learning:
A
  • Few-Shot Learning:

Few-Shot Learning refers to the concept where a machine learning model produces reliable outputs given only a few examples. In the context of prompt engineering, it often involves providing a few examples in the prompt to guide the model’s output.
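
A hypothetical few-shot prompt for a sentiment task; the two labeled examples guide the model's answer for the third review:

    few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

    Review: "The battery lasts all day and the screen is gorgeous."
    Sentiment: Positive

    Review: "It stopped working after a week and support never replied."
    Sentiment: Negative

    Review: "Setup took five minutes and everything just worked."
    Sentiment:"""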

71
Q
  • Zero-Shot Learning:
A
  • Zero-Shot Learning:

Zero-Shot Learning refers to the scenario where the model is expected to handle tasks that it has not explicitly seen during training. In the context of prompt engineering, the prompt does not contain any example, but it clearly instructs the model about the desired task.
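
By contrast, a hypothetical zero-shot prompt states the task directly, with no examples at all:

    zero_shot_prompt = (
        "Classify the sentiment of the following review as Positive or Negative.\n\n"
        'Review: "Setup took five minutes and everything just worked."\n'
        "Sentiment:"
    )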

72
Q
  • One-Shot Learning:
A
  • One-Shot Learning:

One-Shot Learning is a concept where a machine learning model is able to perform a task after seeing just one example. In the context of prompt engineering, it often involves providing a single example in the prompt to guide the model’s output.

73
Q
  • Instruction Following:
A
  • Instruction Following:

Instruction Following refers to the task of making an AI system execute a given instruction. In the context of prompt engineering, the model is guided by the explicit instructions contained in the prompt.

74
Q
  • Multi-modal Prompts:
A
  • Multi-modal Prompts:

Multi-modal Prompts are input prompts that contain different types of data, such as text and image. They are especially useful for multi-modal models that can process and understand multiple types of data inputs.

75
Q
  • Prompt Length:
A
  • Prompt Length:

The length of a prompt can significantly influence the behavior of a language model. Short prompts may lead to more general responses, while long prompts may result in more specific, tailored responses.

76
Q
  • Prompt Ordering:
A
  • Prompt Ordering:

In the context of prompt engineering, Prompt Ordering refers to the strategic placement of elements within the prompt to affect the output. For instance, placing the most important information at the end of the prompt can often yield better results.