Introduction to AI Flashcards

1
Q

Artificial intelligence

General Types of AI

A

Artificial intelligence (AI) refers to computer systems capable of performing complex tasks that historically only a human could do, such as reasoning, making decisions, or solving problems.

General Types of AI

By Capability:
* Narrow AI: task-specific
* General AI: human-level intelligence
* Superintelligent AI: beyond human intelligence

By Functionality:
* Reactive machines: no memory
* Limited memory: learns from past experiences
* Theory of mind: understands emotions
* Self-aware AI: hypothetical consciousness

By Approach:
* Rule-based AI
* Machine learning-based AI
* Deep learning-based AI
* Natural language processing AI
2
Q

General Types of AI – Capability

A

General Types of AI – Capability

Narrow AI: task-specific

  • Definition: AI systems designed to perform a specific task or set of tasks.
  • Examples: Virtual assistants (Siri, Alexa), recommendation algorithms (Netflix, Amazon), facial recognition software, and chatbots.
  • Key Points: Narrow AI cannot perform tasks outside of its trained domain and lacks general intelligence.

General AI: human-level intelligence

  • Definition: An AI system with generalized intelligence, meaning it can perform any intellectual task that a human can do.
    Does not exist yet.
  • Examples: Would be comparable to human cognition across various domains
  • Key Points: General AI would be able to learn, reason, and solve problems across a wide range of fields without being pre-programmed

Superintelligent AI: beyond human intelligence

  • Definition: A theoretical form of AI that surpasses human intelligence in all respects, including creativity, problem-solving, and decision-making.
  • Examples: Does not exist yet; would be beyond human cognition across domains.
  • Key Points: Superintelligent AI would potentially redefine technology, but it raises ethical concerns about control and safety.
3
Q

General Types of AI – by Functionality

A

Reactive machines: no memory

  • Definition: The simplest form of AI; it reacts to specific inputs without the ability to form memories or use past experiences to influence future decisions.
  • Examples: IBM’s Deep Blue, the chess-playing AI.
  • Key Points: Reactive machines can handle specific tasks but do not learn or adapt over time.

Limited memory: learns from past experiences

  • Definition: Uses past experiences and historical data to make better decisions, but information is not permanently stored.
  • Examples: Self-driving cars that use data about traffic patterns or obstacles.
  • Key Points: Limited-memory AI systems learn from historical data but are still specialized and task-specific.

Theory of mind: understands emotions

  • Definition: More advanced type that, in theory, could understand emotions, intentions, and human mental states. Not yet fully developed.
  • Examples: AI that could engage in more human-like interactions by understanding user emotions is in progress.
  • Key Points: Would be better at social interactions and complex decision-making but remains largely theoretical.

Self-aware AI: hypothetical consciousness

  • Definition: Theoretical system that has consciousness, self-awareness, and emotions like humans.
  • Examples: Does not exist yet.
  • Key Points: Self-aware AI would raise philosophical, ethical, and existential questions about its role in society.
4
Q

General Types of AI – by Approach

A

Rule-based AI

  • Definition: Follows predefined rules and logic to make decisions. Requires explicit instructions for decision-making.
  • Examples: Early expert systems in medicine or finance that use if-then rules.
  • Key Points: Rule-based AI is highly interpretable but limited in its ability to adapt or learn from new data.

Machine learning-based AI

  • Definition: Learns from data and improves over time without being explicitly programmed.
    Identifies patterns and makes predictions or decisions.
  • Examples: Spam filters, recommendation systems, and AI in video games
  • Key Points: Allows AI to adapt to new situations, making it more flexible than rule-based systems.

Deep learning-based AI

  • Definition: A subset of machine learning that uses artificial neural networks to learn from vast amounts of data. Particularly effective for image recognition, speech processing, and natural language understanding.
  • Examples: Google’s AlphaGo, deep neural networks for image classification.
  • Key Points: Deep learning models require large datasets and computing power but can handle more complex tasks.

Natural language processing AI

  • Definition: Specialized AI focused on enabling machines to understand, interpret, and respond to human language.
  • Examples: Chatbots, translation tools, and virtual assistants.
  • Key Points: NLP involves tasks like sentiment analysis, language translation, and speech recognition.
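
To make the NLP approach concrete, below is a minimal, purely illustrative Python sketch (not from the course slides): a toy keyword-based sentiment check. The word lists, texts, and function are invented; real NLP systems rely on statistical or neural models rather than fixed word lists.

# Toy sentiment check: count positive vs. negative keywords.
POSITIVE = {"great", "good", "love", "excellent"}
NEGATIVE = {"bad", "poor", "hate", "terrible"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this excellent product"))     # positive
print(sentiment("this product is bad and terrible"))  # negative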

General Types of AI – Overviews

(See slides 25-28 in slide deck 03.)

5
Q

General Types of AI – Reinforcement Learning (I)

A

Advantages

  • Solves higher-order, complex problems with high accuracy.
  • Reason: the learning process closely resembles how humans learn from experience.
  • The rigorous training process takes time but helps to correct errors.
  • Can be combined with deep learning, termed deep reinforcement learning.
  • The model learns continuously, so an earlier mistake is unlikely to be repeated.
  • Various problem-solving models are possible.
  • Even without training data, it can learn through experience.
  • For many problems that seem complex to us, it provides suitable models to tackle them.

Disadvantages

  • Not the right choice for simple problems.
  • Consumes more processing power and memory than simpler approaches require.
  • Needs a lot of data: reinforcement learning models require extensive training data to develop accurate results.
  • Maintenance costs are very high; systems such as driverless vehicles or robots require extensive maintenance for both hardware and software.
  • Excessive training can lead to overloading of the states of the model.

System Architecture of AI

(See slides 32-39 in slide deck 03.)

6
Q

History and Evolution of AI

A

1950s - Early Foundations:
* 1950: Turing Test
Alan Turing proposes a test to determine if a machine can exhibit intelligent behavior indistinguishable from a human.
* 1956: Dartmouth Conference
Considered the birth of AI as a field of study. Researchers, including John McCarthy, coined the term “Artificial Intelligence.”
* 1957: Perceptron Model
Frank Rosenblatt develops the first neural network, laying the groundwork for machine learning.

1960s – AI in Reasoning and Problem-Solving
* 1961: UNIMATE
The first industrial robot, introduced at General Motors, showed early automation in manufacturing.
* 1966: ELIZA
Joseph Weizenbaum develops ELIZA, an early natural language processing program mimicking human conversation (early chatbot)
* 1969: Shakey the Robot
The first general-purpose robot, capable of perceiving its environment and performing simple tasks autonomously.

1970s – Expert Systems and Early AI Applications:
* 1972: PROLOG
Logic programming language for AI development, widely used in AI research, especially in natural language processing and problem-solving.
* 1979: Stanford Cart
One of the earliest self-driving vehicles, it successfully navigated obstacles autonomously.

1980s – AI in Commercial Use:
* 1980s: Expert Systems
AI systems like MYCIN (medical diagnosis) and XCON (configuring computer systems) dominated industries, using rule-based logic to solve complex problems.
* 1986: Backpropagation Algorithm
Geoffrey Hinton popularizes backpropagation, revolutionizing neural networks and laying the foundation for modern deep learning.

1990s – AI Achieves High-Profile Wins:
* 1997: IBM’s Deep Blue
Deep Blue defeats world chess champion Garry Kasparov.

2000s – AI Expands into New Domains:
* 2000: Kismet
A robot developed at MIT with emotional AI capabilities, Kismet could recognize and respond to human emotions.
* 2002: Roomba
The introduction of the Roomba robot vacuum showed how AI could be incorporated into consumer products.

2010s – Deep Learning Revolution:
* 2011: IBM Watson
Watson wins Jeopardy!, demonstrating AI’s ability to process and understand natural language at a high level.
* 2012: AlexNet
This deep learning model wins the ImageNet competition, marking a turning point in image recognition and computer vision.
* 2016: Google DeepMind’s AlphaGo
AlphaGo defeats the Go world champion, showcasing AI’s potential in complex, intuitive games.

Late 2010s and 2020s – AI and Generalization:
* 2019: GPT-2
OpenAI introduces a highly advanced natural language processing model, capable of generating human-like text.
* 2020: GPT-3
GPT-3, one of the most powerful natural language models to date, capable of understanding & generating human-like text with minimal input.
* 2023: ChatGPT (based on GPT-4)
A conversational AI model capable of assisting with a wide range of complex tasks, from writing to problem-solving.

7
Q

Key Approaches / Types of Algorithms in AI

Symbolic AI (1)

A
  • Symbolic AI (Rule-Based AI): Explicitly encodes knowledge in the form of rules.
  • Machine Learning: Data-driven models learning patterns and making predictions.
  • Deep Learning: Advanced neural networks for tasks like image and speech recognition.
  • Evolutionary Algorithms: Natural selection principles to evolve solutions.
  • Bayesian Inference: Probabilistic models that handle uncertainty.
  • Reinforcement Learning: Agents learn through trial and error by interacting with their environment.
  • Cognitive Computing: Mimics human thought processes & decision-making.
  • Hybrid Approaches: Combine multiple AI techniques to create more powerful systems.

Symbolic AI (1)

  • Definition: Based on explicitly encoding human knowledge as rules and logic; also known as “Good Old-Fashioned AI” (GOFAI).
  • Key Concepts:
  • Knowledge Representation: Information is encoded in symbolic form,
    such as facts, rules, and logical structures.
  • Logic and Inference: AI systems use logical reasoning to draw conclusions from the encoded knowledge.

Examples

  • Expert Systems: AI programs that apply a set of rules to data in order to make decisions, such as MYCIN (medical diagnosis) and DENDRAL (chemical analysis).
  • Decision Trees: A hierarchical structure where decisions are made based on logical rules.
  • Strengths: Easy to interpret and explain.
  • Weaknesses: Hard to scale to more complex, unstructured data; inflexible to new situations.
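
To illustrate the rule-and-inference idea, here is a minimal Python sketch; the triage-style rules are invented for illustration (they are not MYCIN’s actual rules), and a real expert system would use a far richer rule base and inference engine.

# Tiny rule-based "expert system": knowledge is explicit if-then rules,
# and inference returns the first conclusion whose conditions all hold.
RULES = [
    ({"fever", "cough"}, "suspected flu"),
    ({"sneezing", "itchy_eyes"}, "suspected allergy"),
]

def infer(facts):
    for conditions, conclusion in RULES:
        if conditions <= facts:        # every condition is present in the facts
            return conclusion
    return "no rule applies"

print(infer({"fever", "cough", "headache"}))  # suspected flu
print(infer({"sneezing"}))                    # no rule applies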
8
Q

Machine Learning (1)

A
  • Definition: Machine learning (ML) enables computers to learn from data and make decisions or predictions based on patterns. It
    is data-driven rather than relying on pre-programmed rules.

Key Concepts:

  • Supervised Learning: AI learns from labeled examples, where input-output pairs are known.
  • Unsupervised Learning: AI finds hidden patterns in data without any labeled outputs.
  • Reinforcement Learning: AI learns by interacting with its environment & receiving feedback (rewards or penalties).

Example Systems:

  • Spam Filters: Supervised learning algorithms used to classify emails as spam or not.
  • Customer Segmentation: Unsupervised learning used for clustering customers based on purchasing behavior.
  • Game-Playing Agents: Systems such as AlphaGo, which learn to play complex games through trial and error.

Strengths: Highly adaptable, handles large amounts of data, can generalize to new situations.

Weaknesses: Requires large datasets, may not always provide interpretable results (black box problem).
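
As a concrete illustration of supervised learning in the spam-filter setting, here is a minimal sketch; it assumes scikit-learn is installed, and the four example messages and their labels are invented.

# Learn a spam/ham classifier from labeled examples, then predict on new text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["win money now", "cheap pills offer", "meeting at noon", "lunch tomorrow?"]
labels = ["spam", "spam", "ham", "ham"]       # known input-output pairs

vectorizer = CountVectorizer()                # turn texts into word-count features
X = vectorizer.fit_transform(texts)
model = MultinomialNB().fit(X, labels)        # learn the mapping from data

print(model.predict(vectorizer.transform(["win a cheap offer now"])))  # likely ['spam']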

9
Q

Deep Learning

A

Definition: Subset of machine learning that uses neural networks with multiple layers to model complex, high-dimensional
patterns in data.

Key Concepts:

  • Neural Networks (NN): Layers of interconnected “neurons” that process input data and extract features.
    DL has many layers (deep neural networks).
  • Convolutional NN (CNNs): For image and video analysis, particularly in image recognition and classification.
  • Recurrent NN (RNNs): Specialized for sequential data, such as time-series data or natural language.

Example Systems:

  • Image Classification: CNNs used in applications like facial
    recognition or autonomous driving.
  • Natural Language Processing (NLP): RNNs and transformer-based models like GPT-3 for text generation and understanding.

Strengths: Excels at complex tasks like image and speech recognition, can process unstructured data.

Weaknesses: Requires significant computational power and large datasets, and models are often difficult to interpret.
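
A minimal sketch of a small convolutional network, assuming PyTorch is installed; the layer sizes and the 28x28 input are arbitrary illustrative choices, not an architecture from the course.

# A tiny CNN: convolution extracts local image features, pooling downsamples,
# and a final linear layer maps the features to 10 class scores.
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 13 * 13, 10),
)

fake_image = torch.randn(1, 1, 28, 28)   # one 28x28 grayscale image
print(model(fake_image).shape)           # torch.Size([1, 10])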

10
Q

Evolutionary Algorithms

A

Definition: Inspired by natural selection, these algorithms evolve solutions to optimization problems over time by generating and testing variations of possible solutions.

Key Concepts:
* Genetic Algorithms: Use principles of mutation, crossover (recombination), and selection to evolve solutions.

  • Fitness Function: Measures how well a solution solves the problem.

Example Systems:

  • Neural Network Optimization: Evolutionary algorithms are used to evolve
    neural network architectures or weights.
  • Optimization Problems: Applied in fields such as engineering design and finance for optimizing solutions.

Strengths: Useful for problems where the search space is vast and complex.

Weaknesses: Often computationally expensive and slow, especially for large-scale problems.
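
A minimal genetic-algorithm sketch on the classic "OneMax" toy problem (evolve bit strings toward all ones); the population size, mutation rate, and generation count are arbitrary illustrative choices.

# Genetic algorithm: selection of the fittest, crossover, and mutation.
import random

LENGTH, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.05

def fitness(individual):                 # fitness function: number of 1 bits
    return sum(individual)

def crossover(a, b):                     # single-point recombination
    point = random.randrange(1, LENGTH)
    return a[:point] + b[point:]

def mutate(individual):                  # occasionally flip a bit
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in individual]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]                   # select the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(max(fitness(ind) for ind in population))   # close to 20 after evolution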

11
Q

Bayesian Inference

A

Definition:

Uses probability theory to make inferences and predictions based on uncertainty. It relies on Bayes’ Theorem to update the probability of a hypothesis as more data becomes available.

Key Concepts:

  • Bayesian Networks: Graphical models representing probabilistic relationships between variables.
  • Hidden Markov Models (HMMs): Used to model systems where the underlying state is hidden, but observable outcomes are available.

Example Systems:

  • Spam Detection: Bayesian inference is commonly used to classify emails based on the probability of containing spam.
  • Speech Recognition: HMMs are used in systems where the sequential nature of spoken language needs to be modeled.

Strengths: Effective for handling uncertainty and making probabilistic predictions.

Weaknesses: Can become complex and computationally expensive for large-scale problems with many variables.
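
A minimal worked example of Bayes’ Theorem for the spam case; the prior and likelihood numbers below are invented purely for illustration.

# Update the probability that an email is spam after observing the word "offer".
p_spam = 0.4                # prior P(spam)
p_word_given_spam = 0.6     # likelihood P("offer" | spam)
p_word_given_ham = 0.05     # likelihood P("offer" | not spam)

# Total probability of the evidence, then Bayes' rule for the posterior.
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
p_spam_given_word = p_word_given_spam * p_spam / p_word

print(round(p_spam_given_word, 3))   # 0.889: seeing "offer" raises P(spam) from 0.4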

12
Q

Reinforcement Learning

A

Definition: An agent learns to make decisions by interacting with an environment & receiving feedback (rewards or penalties). Goal is to maximize cumulative rewards over time.

Key Concepts:
* Q-Learning: A value-based approach where the agent learns the value of different actions in specific states.
* Policy Gradient Methods: The agent directly learns a policy for action selection without using a value function.

Example Systems:
* Autonomous Robots: Robots learn to navigate and interact with their environments through trial and error.

  • Game AI: AlphaGo, a reinforcement learning agent that learned to play Go at a superhuman level.

Strengths: Ideal for problems where sequential decision-making is required, can learn from its own experience.

Weaknesses: Requires a lot of trial and error, making training slow and resource-intensive.
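
A minimal tabular Q-learning sketch on an invented 5-cell corridor environment (reward for reaching the rightmost cell); the hyperparameters are arbitrary illustrative choices, not recommended settings.

# The agent learns action values Q[state][action] by trial and error.
import random

N_STATES, ACTIONS = 5, [0, 1]            # action 0 = left, 1 = right
alpha, gamma, epsilon = 0.5, 0.9, 0.3    # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):                 # environment dynamics and reward
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for _ in range(500):                     # training episodes
    state = 0
    while state != N_STATES - 1:
        if random.random() < epsilon:
            action = random.choice(ACTIONS)                    # explore
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])   # exploit
        nxt, reward = step(state, action)
        # Q-update: nudge the estimate toward reward + discounted best future value
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)])
# typically [1, 1, 1, 1, 0]: move right everywhere; the last cell is terminal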

13
Q

Cognitive Computing

A

Definition:
This approach aims to mimic human thought processes and decision-making by simulating how the human brain
works. Cognitive computing systems are designed to understand, reason, and learn in a more human-like way.

Key Concepts:
* Natural Language Processing (NLP): Systems that can understand and generate
human language

  • Contextual Understanding: Cognitive systems take context into account when making decisions or solving problems.

Example Systems:
* IBM Watson: A cognitive computing system that understands and processes vast amounts of information, such as medical research and patient data.

Strengths: Capable of tackling complex decision-making tasks, particularly in domains like healthcare and finance.

Weaknesses: Requires significant computing resources, and fully mimicking human cognition is still an ongoing challenge.

14
Q

Hybrid Approaches

A

Definition:
These approaches combine multiple AI techniques to build more robust, efficient systems.
E.g., ML can be combined with rule-based AI, or NNs can be used alongside probabilistic reasoning models.

Key Concepts:
* Multi-Agent Systems: Systems that involve multiple AI agents working together or in competition.

  • Deep Reinforcement Learning: A combination of deep learning and reinforcement learning to create systems capable of handling complex environments (e.g., autonomous vehicles).

Example Systems:
* Robotics: Combining machine learning with rule-based logic to control autonomous robots in dynamic environments.

Strengths: Allows systems to leverage the best aspects of different approaches, improving performance and flexibility.

Weaknesses: Complex to design and implement due to the integration of multiple techniques.
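
A minimal sketch of the hybrid idea, combining a stand-in "learned" score with an explicit rule-based override; the scoring heuristic, sender list, and threshold are all invented for illustration.

# Hybrid classifier: an ML-style score plus a human-written rule that overrides it.
def learned_spam_score(text):
    """Stand-in for a trained model's probability output (toy heuristic)."""
    spam_words = {"win", "offer", "cheap"}
    words = text.lower().split()
    return sum(w in spam_words for w in words) / max(len(words), 1)

TRUSTED_SENDERS = {"boss@example.com"}    # explicit rule-based knowledge

def classify(sender, text):
    if sender in TRUSTED_SENDERS:         # the rule overrides the learned score
        return "ham"
    return "spam" if learned_spam_score(text) > 0.5 else "ham"

print(classify("stranger@example.com", "win a cheap offer now"))  # spam
print(classify("boss@example.com", "win a cheap offer now"))      # ham (rule wins)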

15
Q

Example Applications of AI

A

What you can do with AI
* China: Ping An – Car Insurance Claims
* China: AI in Farming
* China: Ping An Good Doctor

AIHH-Startups:
* Check for Pet
* Exazyme
* Yoona
* Streamboost

AIHH-Startups – Health Care:
* Nosc AI – Holistic Patient-Centered Practice Organization
* Fuse AI – Prostate Cancer Detection
* PDV – AI-powered Heart Check
* Casuu - Nursing Education

16
Q

Ethical Considerations of AI

A
  1. Bias and Fairness: Preventing discrimination in AI outcomes.
  2. Privacy & Surveillance: Protecting personal data and preventing misuse.
  3. Transparency & Accountability: Ensuring AI decisions are understandable and accountable.
  4. Job Displacement: Addressing the economic impacts of automation.
  5. Autonomous Weapons: Ensuring human control in life-and-death decisions.
  6. Misinformation: Preventing manipulation and misinformation through AI.
  7. Autonomy and Agency: Balancing AI efficiency with human control.
  8. Existential Risks: Managing the risks of superintelligent AI.
  9. Consent and Manipulation: Ensuring user consent and preventing exploitation.
  10. Equity in Access: Ensuring AI benefits are shared broadly.
17
Q

Ethical Considerations – Bias and Fairness

A
  • Concern: AI systems are trained on data that may contain historical biases or reflect societal inequalities.
  • Example: Facial recognition algorithms perform better on lighter-skinned individuals than on darker-skinned ones.
  • Impact: Biased AI can lead to unfair treatment, discrimination, and amplification of existing societal inequalities in areas like hiring, policing, and lending
  • Ethics: How can we ensure AI systems are fair, unbiased, and do not perpetuate existing societal discrimination?
18
Q

Ethical Considerations – Privacy & Surveillance

A
  • Concern: AI systems rely on vast amounts of personal data to function effectively, raising concerns about data privacy and the potential for surveillance.
  • Example: Social media platforms using AI for targeted advertising or governments employing AI for mass surveillance.
  • Impact: Individuals lose control over their personal data. AI-driven surveillance violates privacy and civil liberties.
  • Ethics: How can AI systems be designed to protect personal privacy and prevent misuse of personal data?
19
Q

Ethical Considerations – Transparency & Accountability

A
  • Concern: AI systems, particularly those involving DL, are “black boxes,” meaning it is difficult to understand how they arrive at decisions.
  • Example: In criminal justice systems, AI used for sentencing and parole decisions lacks transparency in how decisions are made – raises concerns about accountability.
  • Impact: If AI decisions cannot be explained or understood, it is difficult to hold individuals or organizations accountable for errors or harmful outcomes
  • Ethics: How can AI systems be made more transparent and interpretable, ensuring accountability?
20
Q

Ethical Considerations – Job Displacement

A
  • Concern: As AI automates tasks previously done by humans, there is concern about mass job displacement.
  • Example: Self-driving trucks potentially replacing truck drivers, or AI-based chatbots replacing customer service agents.
  • Impact: Widespread automation could lead to unemployment and economic inequality.
  • Ethics: How should society address the economic disruptions caused by AI while ensuring fair treatment of displaced workers?
21
Q

Ethical Considerations – Autonomous Weapons

A
  • Concern: The development of autonomous weapons systems raises the ethics of allowing machines to make life-and-death decisions in warfare.
  • Example: AI-powered drones or robots capable of identifying and engaging targets without human intervention
  • Impact: Deployment could lead to unintended casualties, loss of human control in warfare, and escalations in conflict.
  • Ethics: Should AI be allowed to control weapons, and how can we ensure human oversight in critical decisions?
22
Q

Ethical Considerations – Misinformation & Manipulation

A
  • Concern: AI systems are increasingly used to create deepfakes (realistic fake videos), spread misinformation, or manipulate public opinion on social media.
  • Example: Deepfake videos impersonating public figures, or AI algorithms amplifying false information during elections.
  • Impact: AI-generated misinformation can undermine trust in institutions, manipulate elections, and incite violence.
  • Ethics: How can AI systems be regulated to prevent the spread of misinformation and protect democratic processes?
23
Q

Ethical Considerations – Existential Risks & Superintelligence

A
  • Concern: Theorists worry about “superintelligent” systems surpassing human intelligence and becoming uncontrollable.
  • Example: Without safeguards, such a system could pose an existential threat by acting in ways that are misaligned with human interests.
  • Impact: The challenge is to ensure that advanced AI systems are aligned with human values and priorities.
  • Ethics: How can we ensure that the development of highly advanced AI systems remains under human control and serves humanity’s best interests?
24
Q

Ethical Considerations – Autonomy and Agency

A
  • Concern: AI systems in healthcare and autonomous vehicles make decisions affecting human lives without human intervention.
  • Example: AI medical diagnostic system making critical decisions about patient treatment without a doctor’s involvement
  • Impact: Raises concerns about the loss of human control and the erosion of personal autonomy in critical life decisions.
  • Ethics: How do we balance the efficiency of AI systems with the need for maintaining human control and judgment?
25
Q

Ethical Considerations – Consent and Manipulation

A
  • Concern: AI systems, especially in marketing and social media, may manipulate users’ behavior without their full understanding or consent.
  • Example: Recommendation algorithms keep users engaged but may manipulate emotions or lead to addictive behaviors.
  • Impact: Manipulation of users for commercial gain or other interests, often without explicit user consent.
  • Ethics: How should AI be designed to ensure users are aware of how their data is being used and not manipulated?
26
Q

Ethical Considerations – Equity in Access

A
  • Concern: Benefits of AI, such as in healthcare or education, may not be equally distributed, leading to a digital divide where some populations are left behind.
  • Example: AI healthcare tools may only be accessible to wealthier individuals or institutions, exacerbating inequality
  • Impact: Without equitable access, AI could widen existing societal divides rather than improve overall well-being
  • Ethics: How can we ensure that the benefits of AI are accessible to all, and not just to the wealthy or technologically advanced?
27
Q

Summary

A
  1. Fundamentals and Definitions of Artificial Intelligence (AI)
    • What is intelligence?
    • Intelligence is described as the ability to learn from experience, solve problems, and adapt to new situations. The slides present various psychological definitions of intelligence and how it is interpreted in different fields. Intelligence can serve as a basis for understanding artificial intelligence, since it shows how cognitive processes work in humans.
    • What is Artificial Intelligence (AI)?
    • Artificial intelligence is defined as the ability of computers to perform tasks that normally require human intelligence, such as reasoning, decision-making, and problem-solving. Because AI applications are diverse and variable, no universal definition exists. In many applications, AI is described as a system's ability to handle complex tasks on its own, often with a growing capacity for data processing and problem-solving.
  2. Types of AI
    • By capability:
    • Narrow AI: Specialized in specific tasks, e.g., voice assistants such as Siri and Alexa, recommendation algorithms, or facial recognition. Narrow AI can only operate within its specific application domain and has no general problem-solving ability.
    • General AI: A theoretical form of AI that operates at a human level and could carry out any intellectual task. A general AI would be able to learn, reason, and solve problems the way a human would. This form of AI does not yet exist.
    • Superintelligent AI: A hypothetical AI that surpasses human intelligence in all areas. It would exceed human abilities such as creativity, problem-solving, and decision-making and would raise substantial ethical and safety questions.
    • By functionality:
    • Reactive Machines: This AI has no memory and only reacts to specific inputs. Examples include early chess computers such as IBM's Deep Blue. Such machines can handle specific tasks but cannot learn or adapt.
    • Limited Memory: AI that can use past experiences and historical data to make better decisions. Example: self-driving cars that process traffic data. This form of AI learns from data but remains restricted to specific tasks.
    • Theory of Mind: In theory, this AI could recognize human emotions, intentions, and mental states and interact on that basis. This approach is not yet fully developed but is being researched to improve social interactions.
    • Self-Aware AI: A hypothetical form of AI that could have consciousness and emotions like a human. This kind of AI would raise philosophical, ethical, and existential questions about its role in society.
    • By approach:
    • Rule-based AI: Operates on the basis of predefined rules and logic (e.g., expert systems in medicine that apply "if-then" rules). Advantage: easy to interpret, but limited and inflexible.
    • Machine learning-based AI: Learns from data without being explicitly programmed and improves continuously. Examples include spam filters and recommendation services.
    • Deep learning-based AI: A specific area of machine learning that uses artificial neural networks and is particularly effective for complex tasks such as image and speech processing.
    • Natural Language Processing (NLP) AI: A specialized form of AI that enables machines to understand and respond to human language, e.g., translation tools and chatbots.
  3. Learning Methods in AI and Their Advantages and Disadvantages
    • Supervised learning:
    • Advantages: High precision, since the AI is trained on labeled data; well suited to problems where predictions are needed.
    • Disadvantages: Requires extensive and expensive manually labeled data; incorrect labels can cause problems.
    • Unsupervised learning:
    • Advantages: Finds hidden patterns in data without prior labeling, which makes it flexible.
    • Disadvantages: Hard to judge whether the results are correct; limited in its application to specific tasks.
    • Reinforcement learning:
    • Advantages: Particularly well suited to sequential decision-making and complex problems; continuous learning enables long-term adaptation.
    • Disadvantages: Requires a great deal of computing power and memory; the need for training data is high; often oversized for simple tasks.
  4. System Architecture of AI
    • The slides present various AI infrastructures, including architecture models from IBM and Accenture as well as cloud-based machine-learning architectures such as AWS. These systems contain modules for data processing, storage, and model development and form the technical backbone of AI applications.
  5. History and Evolution of AI
    • 1950s-1960s: Foundations laid by the Turing Test and the first neural networks (perceptron model).
    • 1970s-1980s: Development of the logic programming language PROLOG and of expert systems such as MYCIN, a medical diagnosis system.
    • 1990s-2000s: AI gains popularity through systems such as IBM's Deep Blue, which defeats the world chess champion, and through the introduction of the Roomba, an AI-controlled vacuum cleaner.
    • 2010s-2020s: Revolution through deep learning and milestones in computer vision and natural language processing, including AlphaGo and OpenAI's GPT models.
  6. Key AI Approaches and Algorithms
    • Symbolic AI: Uses explicit knowledge rules and logical structures; well suited to clearly structured problems.
    • Machine Learning: Data-driven and adaptive, but often a "black box" with little interpretability.
    • Deep Learning: Relies on deep neural networks to recognize patterns in high-dimensional data such as images and speech.
    • Evolutionary Algorithms: Inspired by natural selection to optimize solutions.
    • Bayesian Inference: Models that handle uncertainty and are used for probability estimates.
    • Reinforcement Learning: Suited to decisions in sequential processes, such as autonomous driving.
    • Cognitive Computing: Attempts to imitate human thought processes and is often used in decision-making in healthcare.
    • Hybrid approaches: Combinations of different techniques for versatile and robust systems.
  7. Example Applications of AI
    • Global applications: Examples include AI-supported insurance and health services (Ping An in China) as well as agricultural applications.
    • AIHH startups: Various AI startups focusing on specific areas such as pet health, enzyme research, fashion design, and healthcare services.
  8. Ethical Considerations of AI
    • Bias and Fairness: Ensuring that AI systems are fair and free of discrimination by training them on unbiased data.
    • Privacy & Surveillance: AI often requires large amounts of data, which can raise concerns about data protection and surveillance.
    • Transparency & Accountability: Decision processes in AI systems, especially deep neural networks, are hard to trace, leading to "black box" problems.
    • Job Displacement: AI-driven automation could put human jobs at risk.
    • Autonomous Weapons: The use of autonomous weapons systems raises ethical questions, since machines could decide over life and death.
    • Misinformation: AI can contribute to the spread of disinformation, e.g., through deepfakes and manipulative content on social media.
    • Autonomy and Agency: Particular challenges arise in healthcare and with autonomous vehicles, where decisions are made without human intervention.
    • Existential Risks: The risks of superintelligence require safety mechanisms to ensure control.