Lecture 1: Introduction to AI Flashcards
Hierarchy of artificial intelligence
AI > Machine learning > Deep learning
Engineering Approach vs Cognitive Approach
Engineering:
- Tries to find optimal solutions, no matter how they are obtained
- Act intelligently, think intelligently
Cognitive:
- Tries to understand the process
- Tries to reproduce human behavior (even if the result is wrong)
- Act like humans, think like humans
Weak vs Strong AI
Weak:
- Capabilities not intended to match or exceed the capabilities of humans
- Usually a small application with a single purpose
Strong:
- Matches or exceeds human intelligence
- Consciousness, self-awareness
- General-purpose application spanning several tasks
What is intelligence?
- Intellectual vs physical capabilities
- reflex vs planned/reasoned action
- awareness of existence
What is the Turing Test?
- “If a human interrogator cannot tell the computer and human apart, then the computer is intelligent”
What year was the Turing Test created?
1950
What are some of the capabilities required to pass the Turing Test?
- Natural Language Processing (NLP) to communicate
- Knowledge Representation to store knowledge
- Automated Reasoning to infer new knowledge
- Machine Learning
Arguments in favor of and against the Turing Test?
Pros:
- Objective notion of intelligence
- Avoids arguments about the computer’s consciousness
- Eliminates bias in favor of humans
Cons:
- Not reproducible
- Not constructive
- Machine intelligence is defined with respect to humans
- Anthropomorphic: it only tests whether the subject resembles a human being
Limitations of the Turing Test?
- Superficiality: focuses on deception, not intelligence
- Lack of scalability: not practical for evaluating the vast array of AI capabilities
- Human imitation vs. AI innovation: prioritizes mimicking human behavior over unique AI strengths
- Not comprehensive: fails to assess the wide range of abilities that modern AI possesses
- Advanced interactions: AI now handles nuanced and context-rich interactions, which the Turing Test doesn’t fully capture
- Ethical AI considerations: ensuring AI systems are fair, unbiased, and safe
- Automated testing: evaluates AI on a much larger scale and more efficiently than the Turing Test
What is being evaluated in modern chatbots?
Natural Language Understanding (NLU): Understanding and processing human language (e.g., sentiment analysis, question answering).
Creative Tasks: AI’s ability to generate creative content (e.g., writing poems, creating artwork).
Decision-Making Skills: AI’s effectiveness in scenarios requiring complex decision-making (e.g., strategic game playing, business forecasting).
How are chatbots being evaluated today?
Accuracy and Precision: Measure of correctness in AI’s outputs (e.g., percentage of correct answers in NLU tasks).
Response Time: Speed at which AI provides responses, important in real-time applications.
Robustness and Generalization: AI’s ability to handle unexpected inputs or scenarios.
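As a minimal illustration of the accuracy metric, the sketch below scores invented question-answering predictions against invented gold answers (all data here is made up for illustration):

```python
# Minimal sketch of the accuracy metric for an NLU task such as
# question answering; all predictions and gold answers are invented.
predictions = ["Paris", "1950", "Weizenbaum", "blue"]
gold_answers = ["Paris", "1950", "Turing", "blue"]

correct = sum(p == g for p, g in zip(predictions, gold_answers))
accuracy = correct / len(gold_answers)
print(f"Accuracy: {accuracy:.0%}")  # -> Accuracy: 75%
```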
What are some of the things we do with AI?
- Knowledge representation (including formal logic)
- Search, especially heuristic search (puzzles, games)
- Planning
- Reasoning under uncertainty, including probabilistic reasoning
- Learning
- Agent architectures
- Robotics and perception
- Natural language processing
When was the early work in neural networks, purely theoretical?
1943
Alan Turing describes the Turing Test
1950
Dartmouth workshop
- Get-together of the big guys
- The term AI is born
1956
When was the “rise of AI” era?
1956 to the 1970s
What are some notable things about the “rise of AI” era?
- The era of Good Old-Fashioned AI (GOFAI)
- Symbolic computation instead of numeric computation
- Development of AI-specific programming languages
What led to the first AI winter?
Inaccurate predictions about the future capabilities of machines.
- 1966: the ALPAC report kills work in machine translation (and NLP in general)
- People realized that scaling up from micro-worlds (toy worlds) to reality is not just a matter of faster machines and larger memories…
- Minsky & Papert’s book on the limits of perceptrons (they cannot learn just any function…) kills work in neural networks
- In 1971, the British government stops funding research in AI due to a lack of significant results
- It’s the first major AI winter…
What are Expert Systems?
1970s-1980s
- Knowledge-intensive, rule-based techniques
- Decision support systems
- Humans need to write the rules by hand
Context of the second AI winter?
Mid-80s to mid-90s
- End of expert systems: too tedious to write rules by hand and too expensive to maintain
When is the rise of machine learning?
1980-2010
- More powerful CPUs
- Usable implementations of neural networks
- Big Data
- Rules are now learned automatically!
Deep Learning
2010 - Today
“Deep neural networks”
- Generic networks are used for many applications, such as image recognition, self-driving cars, etc.
What is Eliza?
- Developed by Joseph Weizenbaum in the 1960s
- Simulation of a dialogue with a psychotherapist
- “Input sentences are analyzed on the basis of decomposition rules which are triggered by key words appearing in the input text. Responses are generated by reassembly rules associated with selected decomposition rules.”
What are the 4 steps of Eliza’s algorithm?
1. Identify keywords
2. Get decomposition patterns
3. Apply reassembly pattern
4. If there are no keywords in the input sentence, generate a canned response
Example 1:
- Keyword: “I am”
- Decomposition pattern: I am <whatever>
- Reassembly pattern: How long have you been <whatever>?
Example 2:
- Input sentence: “It seems that you hate me”
- Keywords: “you” and “me”
- Decomposition pattern: <whatever1> you <whatever2> me (matches “It seems that you hate me”)
- Reassembly pattern: What makes you think I hate you?
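A minimal end-to-end sketch of these two examples, with regular expressions standing in for the patterns (the regexes, the input “I am sad”, and the canned fallback are invented for illustration, not Weizenbaum’s original script):

```python
import re

# Minimal end-to-end sketch of the two examples above; the patterns and
# the canned fallback are invented stand-ins, not the original script.
def respond(sentence):
    sentence = sentence.rstrip(".")
    m = re.fullmatch(r"(?:.* )?i am (.*)", sentence, re.IGNORECASE)
    if m:
        return f"How long have you been {m.group(1)}?"
    m = re.fullmatch(r"(.*) you (.*) me", sentence, re.IGNORECASE)
    if m:
        return f"What makes you think I {m.group(2)} you?"
    return "Please go on."  # canned response when no keyword matches

print(respond("I am sad"))                   # -> How long have you been sad?
print(respond("It seems that you hate me"))  # -> What makes you think I hate you?
```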
Eliza - Steps of “Identify Keywords”
- Scan the input sentence for words in a dictionary of keywords
- Keywords are given a rank, and the highest-ranking keyword is considered first (see the sketch below)
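A minimal sketch of this step, assuming a hypothetical keyword dictionary (the keywords and ranks below are invented, not Weizenbaum’s original script):

```python
import re

# Hypothetical keyword dictionary: keyword -> rank (values invented;
# a higher rank means the keyword is considered first).
KEYWORDS = {"i am": 5, "you": 3, "me": 3}

def identify_keywords(sentence):
    """Return the keywords found in the sentence, highest rank first."""
    found = [(kw, rank) for kw, rank in KEYWORDS.items()
             if re.search(r"\b" + re.escape(kw) + r"\b", sentence, re.IGNORECASE)]
    return sorted(found, key=lambda pair: pair[1], reverse=True)

print(identify_keywords("It seems that you hate me"))
# -> [('you', 3), ('me', 3)]
```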
Eliza - Steps of “Get Decomposition Patterns”
- For each keyword, there is an associated list of decomposition patterns
- The first decomposition pattern that matches the input sentence is selected
- If no decomposition rule matches, then the next best-ranking keyword is selected (and go back to step 2.1), as sketched below
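A sketch of this step under the same assumptions, with regexes standing in for decomposition patterns (the patterns are invented to mirror the cards):

```python
import re

# Hypothetical decomposition patterns per keyword; the <whatever> slots
# become regex capture groups (patterns invented to mirror the examples).
DECOMPOSITION = {
    "i am": [r"(?:.* )?i am (.*)"],
    "you":  [r"(.*) you (.*) me"],
}

def get_decomposition(keyword, sentence):
    """Return the first matching (pattern, match), or None so the caller
    can fall back to the next best-ranking keyword (step 2.1)."""
    for pattern in DECOMPOSITION.get(keyword, []):
        match = re.fullmatch(pattern, sentence.rstrip("."), re.IGNORECASE)
        if match:
            return pattern, match
    return None

print(get_decomposition("you", "It seems that you hate me")[1].groups())
# -> ('It seems that', 'hate')
```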
Eliza - Steps of “Apply Reassembly Pattern”
- For each decomposition rule, there is a set of associated reassembly patterns to generate a response
- If a subsequent sentence selects the same decomposition pattern, the next reassembly pattern is used (so the output is not repetitive), as sketched below
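A sketch of the final step: cycling through hypothetical reassembly templates so a repeated decomposition does not yield a repeated response (the templates are invented to mirror the cards):

```python
import re
from itertools import cycle

# Hypothetical reassembly templates for the "I am <whatever>" rule;
# cycling avoids repetitive output when the same rule fires again.
REASSEMBLY = cycle([
    "How long have you been {0}?",
    "Why do you tell me you are {0}?",
])

def apply_reassembly(match):
    """Fill the next reassembly template with the captured groups."""
    return next(REASSEMBLY).format(*match.groups())

m = re.fullmatch(r"(?:.* )?i am (.*)", "I am sad", re.IGNORECASE)
print(apply_reassembly(m))  # -> How long have you been sad?
print(apply_reassembly(m))  # -> Why do you tell me you are sad?
```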
What are the 6 things we learned from Eliza?
- User experience matters
- The illusion of understanding
- Ethical concerns
- Simplicity can be effective
- Interdisciplinary insights
- Data security and privacy