Artificial Intelligence and Deep Learning Flashcards
Definition
Systems that think like humans, act like humans, think rationally, act rationally
Thinking humanly
the cognitive model approach
- programs that think like humans → towards theory of mind
- psychological experiments and brain imaging → gain insight into the working processes inside the human brain
- necessarily based on experimental investigation
Acting humanly
the Turing Test
- operational test for intelligent behaviour, written communication
- major components of AI: knowledge representation, reasoning, language understanding & learning
- cannot be analysed mathematically
Thinking rationally
the “Laws of Thought” approach
- Aristotle: codified "right thinking" by providing patterns for argument structures
- supposed to govern the operation of the mind → field of logic
Acting rationally
the rational agent approach
- agent → agere = to do
- doing the right thing (rational behaviour) → does not necessarily involve thinking → e.g. reflexes
- rational agent → acts to achieve the best outcome or, under uncertainty, the best expected outcome
- more general than the "laws of thought" approach → correct inference is only one way of acting rationally
Important Historical events
- 1943: McCulloch & Pitts develop a Boolean circuit model of the brain → model of artificial neurons → any computable function could be computed by some network of connected neurons
- 1949: Hebb → Hebbian learning rule for updating connection strengths between neurons
- 1950: Turing → Turing test, machine learning, genetic algorithms and reinforcement learning
State of the Art
Robotic vehicles → driverless robotic car STANLEY
Speech recognition → book flight at United Airlines
Autonomous planning and scheduling
Game playing
Spam fighting
Logistics planning
Robotics → robotic vacuum cleaner
Machine translation
Newer: Apple Siri, Eugene Goostman, Amazon Alexa, Microsoft Tay, Google DeepMind AlphaGo
Subfields in A.I.:
- Natural language processing
- Knowledge representation
- Automated reasoning → using stored information to answer questions
- Machine learning → adapting to new circumstances
- Computer vision
- Robotics
Neural Network hypothesis
- mental activity = electrochemical activity in networks of neurons
- McCulloch & Pitts' neuron model
→ fires when the weighted sum of its inputs exceeds some threshold (see sketch below)
→ neural network = collection of such units, connected together by directed links
→ network properties determined by its topology & the properties of the individual neurons
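A minimal Python sketch of such a threshold unit; the weights and thresholds are illustrative choices, not values from the notes. Wiring a few units together already computes Boolean functions, in the spirit of the McCulloch & Pitts result above:

```python
# Sketch of a McCulloch-Pitts threshold unit (illustrative weights/thresholds).

def mcp_unit(inputs, weights, threshold):
    """Fires (returns 1) when the weighted sum of the inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Boolean gates as single threshold units (example parameter choices):
AND = lambda a, b: mcp_unit([a, b], weights=[1, 1], threshold=2)
OR  = lambda a, b: mcp_unit([a, b], weights=[1, 1], threshold=1)
NOT = lambda a:    mcp_unit([a],    weights=[-1],   threshold=0)

# Units can be wired into a network, e.g. XOR = (a OR b) AND NOT (a AND b):
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))
assert [XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```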
Neural Network structure
- link from neuron i to neuron j → propagates the activation a_i to a_j
- each link has a numeric weight w_i,j → determines strength & sign of the connection
- each neuron j first computes the weighted sum of its inputs & then applies an activation function g to this sum to derive its output: a_j = g(Σ_i w_i,j · a_i)
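A possible sketch of this computation in Python; the logistic sigmoid is just one common (assumed) choice for the activation function g, and the input/weight values are made up:

```python
import math

def g(x):
    """Activation function (here: logistic sigmoid, one common choice)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(a, w):
    """a: incoming activations a_i, w: corresponding weights w_i,j."""
    in_j = sum(w_ij * a_i for w_ij, a_i in zip(w, a))  # weighted sum of inputs
    return g(in_j)                                      # a_j = g(in_j)

print(neuron_output([1.0, 0.5, -0.3], [0.2, -0.4, 0.7]))
```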
Perceptron learning rule
w_i ← w_i + r · (y − a_j) · a_i, where r = learning rate, y = desired output, a_j = current output, a_i = current input
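A minimal sketch of this learning rule in Python, trained on the Boolean OR function; the data set, learning rate, epoch count and bias handling are illustrative assumptions:

```python
# Perceptron learning rule: w_i <- w_i + r * (y - a_j) * a_i

def output(w, x):
    """a_j: hard-threshold unit; w[0] acts as a bias weight."""
    return 1 if w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)) >= 0 else 0

def train(examples, r=0.1, epochs=20):
    w = [0.0, 0.0, 0.0]                 # bias + one weight per input
    for _ in range(epochs):
        for x, y in examples:           # y = desired output
            a_j = output(w, x)          # current output
            w[0] += r * (y - a_j)       # bias update (its input is fixed at 1)
            for i, a_i in enumerate(x, start=1):
                w[i] += r * (y - a_j) * a_i   # weight update per input a_i
    return w

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]   # Boolean OR
w = train(data)
print([output(w, x) for x, _ in data])  # expected: [0, 1, 1, 1]
```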
Neural Network architectures
- feed-forward networks → connections in one direction only → output is a function of the current input (sketch after this list)
- multi-layer networks → neurons arranged in layers → layers not directly connected to the network's inputs or outputs = hidden layers
- recurrent networks feed their outputs back into their own inputs
→ represent dynamical systems → might reach a stable state, exhibit oscillations or chaotic behaviour
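A toy forward pass through a feed-forward, multi-layer network as a sketch; the layer sizes, sigmoid activation and weight values are arbitrary illustrative choices, not a trained model:

```python
import math

def g(x):
    return 1.0 / (1.0 + math.exp(-x))          # sigmoid activation (assumed choice)

def layer(inputs, weights):
    """Each row of `weights` holds the incoming weights w_i,j of one neuron j."""
    return [g(sum(w_ij * a_i for w_ij, a_i in zip(row, inputs))) for row in weights]

W_hidden = [[0.5, -1.0], [1.5, 0.3]]           # 2 inputs -> 2 hidden neurons
W_output = [[1.0, -0.7]]                       # 2 hidden neurons -> 1 output

def forward(x):
    hidden = layer(x, W_hidden)                # hidden layer: neither input nor output
    return layer(hidden, W_output)             # output depends only on the current input

print(forward([0.2, 0.9]))
```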
Interpretability problem
We can describe how artificial networks compute their outputs, but we are still far from explaining their decisions in an understandable way
three main classes of approaches to explainable artificial networks
controlling the black box
- guarantee relationships between two variables → more transparent models that help to control the neural network
probing the black box
- perturb the inputs → find out what affects decision making → can reveal the cause of one decision, but not the overall logic
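A rough sketch of such input perturbation in Python; the black-box model, feature names and values are invented placeholders, and any trained network's prediction function could be substituted:

```python
import math

def black_box(features):
    """Placeholder scoring model standing in for an opaque trained network."""
    credit, income, age = features
    return 1.0 / (1.0 + math.exp(-(0.8 * credit + 0.5 * income - 0.3 * age)))

def probe(model, features, delta=0.1):
    """Per-input effect on the output when that input alone is perturbed."""
    baseline = model(features)
    effects = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] += delta
        effects.append(model(perturbed) - baseline)
    return effects   # reveals what drove this one decision, not the overall logic

# Features assumed to be normalised to roughly [0, 1]:
print(probe(black_box, [0.6, 0.4, 0.7]))
```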
embracing the darkness
- use neural networks to help understand other neural networks → e.g. combine a generator with a classifier
controlling the black box and exploiting the darkness
real-time explanation generation → a computational model learns to translate an agent's states and actions into natural language
- a neural network plays the game "Frogger"
- human subjects play the game & describe their tactics
- a 2nd NN translates from code to English
- the translation network is wired into the original game-playing network