Week 1 - lecture 1 Flashcards
Why is AI neither solely psychology nor solely computer science?
Understand that AI is not solely psychology or computer science but draws from both disciplines.
Recognize that AI emphasizes computation more than psychology and emphasizes perception, reasoning, and action more than computer science.
Why AI in psychology?
Psychology is a big inverse problem*
AI can use forward modelling*:
we design a simple system -> see how it behaves
= the connection between AI and computational psychology
1940s:
Warren McCulloch
Walter Pitts
2 views on artificial intelligence
Any computable function can be computed by a network of neurons
All logical operators* can be implemented by simple neural networks
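The claim that all logical operators can be implemented by simple neural networks can be sketched with a McCulloch-Pitts threshold unit (an illustrative sketch; the weights and thresholds below are chosen for the example, not taken from the lecture):

```python
# A McCulloch-Pitts threshold unit: the neuron fires (outputs 1) when the
# weighted sum of its binary inputs reaches the threshold, otherwise 0.
def mp_neuron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, one unit per operator suffices:
def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)

def NOT(a):
    return mp_neuron([a], [-1], threshold=0)
```

Since AND, OR, and NOT form a complete basis for propositional logic, any logical function can be built by wiring such units together.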
Warren McCulloch & Walter Pitts' three principles:
Basic physiology
Propositional logic
Turing’s theory of computation
1980:
John Searle
- View on Artificial Intelligence
A collection of cells can lead to thought/action/consciousness
Consciousness requires actual physical-chemical properties of human brains. This led to the idea that only brains can cause minds!
Chinese room argument
- Explain
The Chinese Room Argument is the idea that a computer, following a set of instructions for manipulating symbols, cannot truly understand or possess consciousness.
In the scenario, a person who doesn’t understand Chinese is in a room with a set of instructions for manipulating Chinese symbols. Despite producing coherent Chinese responses, the person inside the room does not comprehend the meaning, analogous to how a computer might process symbols without genuine understanding. The argument aims to question the sufficiency of symbol manipulation for genuine understanding or consciousness.
1950:
Alan Turing
Sentient or non-sentient AI?
Non-sentient AI
1950:
Alan Turing
- The imitation game
- When do we call a machine intelligent?
A machine is intelligent if we cannot distinguish it from a human in conversation
1956:
Dartmouth conference
Pioneers in the field of:
Computer science
Mathematics
Cognitive science
The pioneers got together for a month-long conference and coined the term artificial intelligence
50s & 60s:
- Development in AI (2)
AI developed to be able to:
Play chess
Prove formal theorems
1965:
Weizenbaum
- ELIZA machine; How does this machine work?
ELIZA looks for keywords in its input (father, mother, angry, happy, sad, etc.)
ELIZA uses a database of rules
ELIZA constructs new sentences, using the key words:
‘I hate my father’
‘Why do you hate your father?’
If there are no key words present in the input:
‘I see, please go on’
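The keyword-and-rule mechanism above can be sketched as follows (a minimal illustration; the patterns and templates are invented for the example, not Weizenbaum's original script):

```python
import re

# ELIZA-style responder: scan the input for a keyword pattern, fill its
# rewrite template with the captured words, or fall back to a stock reply.
RULES = [
    (re.compile(r"i hate my (\w+)", re.I), "Why do you hate your {0}?"),
    (re.compile(r"i am (\w+)", re.I), "How long have you been {0}?"),
]
FALLBACK = "I see, please go on"

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return FALLBACK
```

For example, `respond("I hate my father")` returns "Why do you hate your father?", while input with no keywords gets the fallback reply.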
1972:
Kenneth Colby
Modified Turing Test: PARRY
- How does this machine work?
PARRY simulated a patient with paranoid schizophrenia
PARRY often produced inconsistent or meaningless sentences, which paradoxically made it seem more realistic
60s-70s:
AI winter
Overconfidence in AI systems
Led to the AI winter*
Many unanswered questions: how do we deal with perception, robotics, learning, and pattern recognition?
Symbolic AI does not suffice
After the 70s:
Rumelhart
McClelland
- Revival of connectionism; under which three names is this approach known?
Connectionism was revived:
Parallel distributed processing
Connectionism
Artificial neural networks
1981:
McClelland
- What model did he build?
He built a model of human memory:
Human memory is content-addressable*
-> memory is not stored in neurons, but in the connections between neurons (excitatory and inhibitory connections)
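The idea that memories live in excitatory and inhibitory connections rather than in the neurons themselves can be sketched with a Hopfield-style network (an illustration of content-addressable memory in general, not McClelland's specific 1981 model):

```python
import numpy as np

def train(patterns):
    # Store +1/-1 patterns in a connection-weight matrix via Hebbian
    # learning: co-active units get excitatory (positive) weights,
    # anti-correlated units get inhibitory (negative) weights.
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w

def recall(w, cue, steps=10):
    # Content-addressable retrieval: start from a partial/corrupted cue
    # and let the units settle toward the stored pattern.
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1
    return state
```

Corrupting one element of a stored pattern and running `recall` recovers the original, showing that the content itself acts as the retrieval address.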
1997
- Development in AI
First time a computer beat a grandmaster at chess in a tournament (IBM's Deep Blue)
2005:
- Development in AI
Last time a human beat a top chess computer under tournament conditions
2016
- Development in AI
Google DeepMind's AlphaGo defeated the world's number one player in Go (an extremely difficult board game)
2020:
- Developments of AI (2)
Models capable of simulating general-purpose language understanding.
Large Language Models (LLMs)
Symbolic AI* (GOFAI)
- Definition
Symbolic AI (GOFAI) does not concern itself with neurophysiology
Human thinking is a kind of symbol manipulation:
IF (A>B) AND (B>C) THEN (A>C)
Intelligence is thought of as symbols and the relations between them
Conclusion:
intelligent behavior through manipulation of symbols.
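The transitivity rule above, IF (A>B) AND (B>C) THEN (A>C), can be sketched as explicit symbol manipulation (a toy forward-chaining inference loop, not a specific system from the lecture):

```python
def apply_transitivity(facts):
    # facts: set of (x, y) pairs meaning "x > y".
    # Repeatedly apply IF (a>b) AND (b>c) THEN (a>c) until nothing new
    # can be derived - pure manipulation of symbols, no numbers involved.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in list(derived):
            for b2, c in list(derived):
                if b == b2 and (a, c) not in derived:
                    derived.add((a, c))
                    changed = True
    return derived
```

From the facts A>B and B>C the loop derives A>C, exactly as the rule states, without the symbols meaning anything to the program.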
Symbolic AI* (GOFAI)
- Cons (3)
Why do we need it?
-> seems unnecessary for many behaviors
It is unclear how processes like pattern recognition would work in a purely symbolic way
Representations dealing with noisy input are needed
Connectionist AI
- Definition; biologically inspired
Connectionism is based on the structure of the human brain.
Neurons receive input through dendrites
Neurons send output through axon
It is highly connected (thousands of synapses on axons and dendrites)
-> Computation is massively parallel
How do these neural networks compute?
Principles of the formation of memories: (4)
Mental states as vectors* (N-dimensional vectors)
Neural Network Units*
Activation Values*
Memory and Connection Strength
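A minimal sketch of these principles (the weights and activation values below are invented for illustration): a mental state is an N-dimensional vector of activation values, and each unit computes its activation from the weighted input it receives over its connections.

```python
import numpy as np

def unit_activation(weights, state):
    # A unit sums the activations of connected units, weighted by the
    # connection strengths (positive = excitatory, negative = inhibitory),
    # and squashes the result into (0, 1) with a sigmoid.
    return 1.0 / (1.0 + np.exp(-np.dot(weights, state)))

state = np.array([0.2, 0.9, 0.1])     # activation values of three units
weights = np.array([0.5, -1.0, 2.0])  # connection strengths into one unit
```

Because every unit computes this in the same way at the same time, updating the whole state vector is a single matrix-vector product, which is what makes the computation massively parallel.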
Connectionist AI
- Conclusion of definition
Representations in the brain are distributed, and processing is massively parallel.
Connectionist AI
- Pro: Lesion tolerant
Lesioned or damaged networks can still process information
Connectionist AI
- Pro: Capable of generalization
ANNs are capable of learning, and are able to generalize rules to novel input.
Connectionist AI
- Other Pros (4)
It can solve complex, non-linear or chaotic classification problems
There is no a priori assumption about problem space or statistical distribution.
Artificial neural networks can compute any computable function.
Pattern recognition
Large Language Models
- Definition
Known as a form of generative AI.
Large Language Models
- Cons (4)
False information presented by the model as fact
Compression artifacts
These are also caused by ChatGPT's reward model:
Human trainers of ChatGPT preferred longer answers, which can be rewarded over factual content.