Week 9 UAS Flashcards
Types of feedback in learning?
- Supervised learning: correct answers for each example
- Unsupervised learning: correct answers not given
- Reinforcement learning: occasional rewards
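A toy Python sketch (made-up data shapes and reward rule, not from the source) of what each feedback signal looks like to the learner:

```python
# Supervised: every example comes with the correct answer (label).
supervised_data = [([1.0, 2.0], "cat"), ([3.0, 4.0], "dog")]  # (features, label)

# Unsupervised: only the examples themselves, no correct answers.
unsupervised_data = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # features only

# Reinforcement: the agent acts and only occasionally receives a reward.
def environment_step(state, action):
    """Return (next_state, reward); the reward is sparse and occasional."""
    next_state = state + action
    reward = 1.0 if next_state >= 10 else 0.0  # illustrative goal condition
    return next_state, reward
```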
Simplest form of supervised learning?
Inductive learning: learning a function from examples
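A minimal sketch of inductive learning, assuming a linear hypothesis space and using numpy's polyfit; the hidden target function and the data are illustrative:

```python
import numpy as np

# The learner only sees (x, f(x)) examples; here the hidden target is f(x) = 2x + 1.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([1.0, 3.0, 5.0, 7.0, 9.0])

# Hypothesis space: straight lines. Fit the line that best matches the examples.
slope, intercept = np.polyfit(xs, ys, deg=1)

# The induced hypothesis h(x) ≈ 2x + 1 generalises to unseen inputs.
print(slope * 10.0 + intercept)  # ≈ 21.0
```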
Feed-forward vs recurrent?
– Feed-forward: outputs only connect to later layers
• Learning is easier
– Recurrent: outputs can connect to earlier layers or the same layer
• Internal state
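A minimal numpy sketch (toy sizes, random weights) contrasting the two connection patterns; the recurrent version carries a hidden state between steps:

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(3, 4))    # input -> hidden
W_out = rng.normal(size=(4, 2))   # hidden -> output
W_rec = rng.normal(size=(4, 4))   # hidden -> hidden (recurrent connections only)

def feed_forward(x):
    # Feed-forward: activations flow strictly toward later layers.
    h = np.tanh(x @ W_in)
    return h @ W_out

def recurrent_step(x, h_prev):
    # Recurrent: the hidden layer also receives its own previous activation,
    # so the network keeps an internal state across time steps.
    h = np.tanh(x @ W_in + h_prev @ W_rec)
    return h @ W_out, h

x = rng.normal(size=(3,))
y_ff = feed_forward(x)
y_rec, h = recurrent_step(x, np.zeros(4))
```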
Neural Networks Evaluation
• Advantages
– Handle noisy or erroneous inputs well
– Graceful degradation
– Can learn novel solutions
• Disadvantages
– “Neural networks are the second best way to do anything”
– Can’t understand how or why the learned network works
– Training examples must be representative of the real problem
– Need as many examples as possible
– Learning takes lots of processing
• Incremental, so learning during play might be possible
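A hedged sketch of the "incremental" point: one small online update per new example (a plain gradient step for a linear unit), rather than retraining on the whole example set. The function name and learning rate are illustrative:

```python
import numpy as np

def incremental_update(weights, x, target, lr=0.1):
    """One online learning step: nudge the weights after a single example.

    A gradient step for a linear unit with squared error; a game could call
    this each frame or turn as new examples arrive during play.
    """
    prediction = weights @ x
    error = target - prediction
    return weights + lr * error * x

weights = np.zeros(3)
weights = incremental_update(weights, np.array([1.0, 0.5, -0.2]), target=1.0)
```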
5 types of analysis in NLP?
- Morphological Analysis
e.g. in-to-na-tion, en-ter-tain-ment, en-ter-tain-er
- Syntax Analysis (grammar or structure)
e.g. I see the statue
- Semantic Analysis
= the meaning of a sentence
e.g. I saw the boy in the park with a telescope
- Discourse Integration
= close relation (cohesion) between sentences or between paragraphs
- Pragmatic Analysis
= the meaningfulness or the logic of a sentence
e.g. this world is made of green cheese
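A tiny, hypothetical sketch of the morphological level only: splitting a word into stem + suffix using a made-up suffix list (not a real morphological analyser):

```python
SUFFIXES = ["ment", "er", "tion"]  # illustrative suffix inventory

def morph_split(word):
    """Toy morphological analysis: separate a known suffix from the stem."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix):
            return word[: -len(suffix)], suffix
    return word, ""

print(morph_split("entertainment"))  # ('entertain', 'ment')
print(morph_split("entertainer"))    # ('entertain', 'er')
```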
2 parts of NLP?
- Natural language understanding
= understanding the meaning of human language input
- Natural language generation
= deciding what to say
What are prior probability and posterior probability?
Prior (unconditional) probability is the probability of an event before any other event is observed (independent of other evidence).
Posterior (conditional) probability is the probability of an event given that other events have been observed (it depends on that evidence).
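A short worked example (the numbers are invented) showing how a prior P(A) is updated into a posterior P(A|B) via Bayes' rule, P(A|B) = P(B|A)·P(A)/P(B):

```python
# Hypothetical disease-test example.
p_disease = 0.01                    # prior: P(disease), before any evidence
p_pos_given_disease = 0.95          # P(positive test | disease)
p_pos_given_healthy = 0.05          # P(positive test | no disease)

# Total probability of a positive test.
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Posterior: P(disease | positive test), after seeing the evidence.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # ≈ 0.161
```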