Chapter 20 Bayesian Belief Networks Flashcards
What is conditional dependence?
External
In probability theory, conditional dependence is a relationship between two or more events that are dependent when a third event occurs. For example, if A and B are two events that individually increase the probability of a third event C, and do not directly affect each other, then initially (when it has not been observed whether or not the event C occurs):
P(A|B)= P(A) and P(B|A)= P(B) [A and B are independent]
But suppose that now C is observed to occur. If event B occurs, then the probability of occurrence of the event A will decrease, because its positive relation to C becomes less necessary as an explanation for the occurrence of C (similarly, event A occurring will decrease the probability of occurrence of B). Hence, the two events A and B are now conditionally negatively dependent on each other, because the probability of occurrence of each is negatively dependent on whether the other occurs. We have:
P(A|C and B)< P(A|C)
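A small numeric sketch in Python (the probabilities below are hypothetical, chosen only to illustrate the "explaining away" effect described above) enumerates a joint distribution over A, B and C and compares P(A|C) with P(A|C and B):

from itertools import product

# Hypothetical numbers: A and B are independent causes that each make C likely.
p_a, p_b = 0.1, 0.1
p_c_given = {(0, 0): 0.01, (1, 0): 0.9, (0, 1): 0.9, (1, 1): 0.99}  # P(C=1 | A, B)

# Enumerate the full joint distribution P(A, B, C).
joint = {}
for a, b, c in product([0, 1], repeat=3):
    p = (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)
    p *= p_c_given[(a, b)] if c else 1 - p_c_given[(a, b)]
    joint[(a, b, c)] = p

def conditional(query, evidence):
    """P(query | evidence), where both arguments are predicates over an (a, b, c) tuple."""
    den = sum(p for e, p in joint.items() if evidence(e))
    num = sum(p for e, p in joint.items() if evidence(e) and query(e))
    return num / den

print(conditional(lambda e: e[0] == 1, lambda e: e[2] == 1))                # P(A | C), about 0.51
print(conditional(lambda e: e[0] == 1, lambda e: e[2] == 1 and e[1] == 1))  # P(A | C and B), about 0.11
# Observing B lowers the probability of A: B "explains away" the observed C.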
What is the challenge of designing and using probabilistic models?
P 199
Probabilistic models can be challenging to design and use. Most often, the problem is the lack of information about the domain required to fully specify the conditional dependence between random variables. Even when this information is available, calculating the full conditional probability for an event can be impractical.
What are two approaches to addressing the challenge of not knowing the conditional dependence between all the variables in probabilistic models?
P 199
- A common approach to addressing this challenge is to add some simplifying assumptions, such as assuming that all random variables in the model are conditionally independent. This is a drastic assumption, although it proves useful in practice, providing the basis for the Naive Bayes classification algorithm (see the factorization sketched after this list).
- An alternative approach is to develop a probabilistic model of a problem with some conditional independence assumptions.
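As a rough illustration of the first approach (the class variable y and input variables x1, ..., xn below are hypothetical, not from the source), assuming the inputs are conditionally independent given the class lets the joint term factorize, which is the basis of Naive Bayes:
P(y | x1, x2, ..., xn) is proportional to P(y) * P(x1 | y) * P(x2 | y) * ... * P(xn | y)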
Bayesian belief networks are one example
of a probabilistic model where some variables are ____.
P 199
conditionally independent
A Bayesian belief network is a type of ____ model.
P 199
probabilistic graphical
What’s a probabilistic graphical model (PGM)?
P 199
A probabilistic graphical model (PGM), or simply graphical model for short, is a way of representing a probabilistic model with a graph structure.
What are nodes and edges in a graphical probabilistic model?
P 199
Nodes/Vertices: Random variables in a graphical model.
Edges/Links: Relationships between random variables in a graphical model.
There are many different types of graphical models, although the two most commonly described are the ____ Model and the ____.
P 199
Hidden Markov, Bayesian Network
What’s the main difference between Hidden Markov and Bayesian Network graphs?
P 199
The Hidden Markov Model (HMM) is a graphical model where the edges of the graph are undirected, meaning the graph can contain cycles. Bayesian Networks are more restrictive, with directed edges that can only be navigated in one direction, which means that cycles are not possible.
A graph structure that does not allow cycles is referred to as a ____.
P 199
directed acyclic graph (DAG)
A Bayesian belief network describes the ____ for a set of
variables.
P 200
joint probability distribution
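In the same plain-text notation used elsewhere in these cards, this joint distribution factorizes over the network's directed acyclic graph as:
P(X1, X2, ..., Xn) = P(X1 | parents(X1)) * P(X2 | parents(X2)) * ... * P(Xn | parents(Xn))
where parents(Xi) denotes the variables with edges pointing into Xi.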
Bayesian probability is the study of
____ or belief in an outcome, compared to the frequentist approach where probabilities are based purely on ____.
P 200
subjective probabilities, the past occurrence of the event
What is conditional independence?
External
In conditional independence, two events (which may or may not be dependent) become independent given the occurrence of a third event.
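In the same notation as the earlier cards, conditional independence of A and B given a third event C can be written as:
P(A | B and C) = P(A | C), or equivalently P(A and B | C) = P(A | C) * P(B | C)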
Central to the Bayesian network is the notion of ____.
P 200
conditional independence
A graphical model (GM) is a way to represent a joint distribution by making CI (conditional independence) assumptions. In particular, the nodes in the graph represent random variables, and the (lack of) edges represent ____. A better name for these models would in fact be ____
P 200
CI assumptions, independence diagrams
Bayesian networks provide useful benefits as a probabilistic model. Name 3 of them.
P 200
- Visualization. The model provides a direct way to visualize the structure of the model and motivate the design of new models.
- Relationships. Provides insights into the presence and absence of the relationships between random variables.
- Computations. Provides a way to structure complex probability calculations.
Designing a Bayesian Network requires defining at least three things, Name them
P 201
Random Variables. What are the random variables in the problem?
Conditional Relationships. What are the conditional relationships between the variables?
Probability Distributions. What are the probability distributions for each variable?
In many cases, the architecture or topology of the graphical model can be specified by an expert, but the probability distributions must be estimated from data from the domain.
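A minimal sketch in plain Python (no library; the two-node network Rain -> GrassWet, its variable names and its probabilities are hypothetical, used only to make the three design elements concrete):

# 1. Random variables in the problem.
variables = ["Rain", "GrassWet"]

# 2. Conditional relationships: directed edges parent -> child, forming a DAG.
edges = [("Rain", "GrassWet")]

# 3. Probability distributions: one (conditional) probability table per variable.
#    Keys are tuples of parent values; values are P(variable is True | parents).
cpts = {
    "Rain": {(): 0.2},                          # Rain has no parents: P(Rain=True)
    "GrassWet": {(True,): 0.9, (False,): 0.1},  # P(GrassWet=True | Rain)
}

# The joint probability factorizes over the graph, e.g.
# P(Rain=True, GrassWet=True) = P(Rain=True) * P(GrassWet=True | Rain=True)
print(cpts["Rain"][()] * cpts["GrassWet"][(True,)])  # 0.2 * 0.9 = 0.18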
Both the probability distributions and the graph structure itself can be estimated from data, although it can be a challenging process. As such, it is common to use learning algorithms for this purpose. True/False
P 201
True
Once a Bayesian Network has been prepared for a domain, it can be used for reasoning, e.g. making decisions. Reasoning (inference) is performed by introducing evidence (data) that sets variables in known states, and subsequently computing probabilities of interest, conditioned on this evidence. What are some use cases for this model?
P 201
Practical examples of using Bayesian Networks include:
* Medicine (symptoms and diseases)
* Bioinformatics (traits and genes)
* Speech recognition (utterances and time)
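Reusing the hypothetical Rain -> GrassWet numbers from the sketch above, reasoning with evidence amounts to conditioning: introduce the observation that the grass is wet and compute the probability of interest by enumeration (Bayes' theorem):

# Hypothetical two-node network Rain -> GrassWet (numbers for illustration only).
p_rain = 0.2
p_wet_given_rain = {True: 0.9, False: 0.1}   # P(GrassWet=True | Rain)

# Evidence: the grass is observed to be wet. Query: P(Rain=True | GrassWet=True).
p_rain_and_wet = p_rain * p_wet_given_rain[True]                  # P(Rain=True, GrassWet=True)
p_wet = p_rain_and_wet + (1 - p_rain) * p_wet_given_rain[False]   # P(GrassWet=True)
print(p_rain_and_wet / p_wet)                                     # about 0.69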
Bayesian Networks can be developed and used for inference in Python. A popular library for this is called ____ and provides a range of tools for Bayesian modeling, including graphical models like Bayesian Networks.
P 202
PyMC3
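A hedged sketch of the same kind of two-node model in PyMC3 (assuming PyMC3 3.x is installed; the variables and probabilities are hypothetical, and exact sampling arguments may differ between versions). Evidence is supplied through the observed argument, and the posterior is approximated by sampling:

import pymc3 as pm

with pm.Model():
    # Hypothetical prior: P(Rain = 1) = 0.2
    rain = pm.Bernoulli("rain", p=0.2)
    # P(GrassWet = 1 | Rain) is 0.9 if it rained, else 0.1; the grass is observed wet.
    grass_wet = pm.Bernoulli("grass_wet", p=pm.math.switch(rain, 0.9, 0.1), observed=1)
    # Draw posterior samples; PyMC3 assigns a suitable discrete sampler automatically.
    trace = pm.sample(2000, return_inferencedata=False, progressbar=False)

print(trace["rain"].mean())  # approximately P(Rain=1 | GrassWet=1), close to 0.69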
What’s objective probability and subjective probability?
External
Objective probability is the probability that an event will occur, derived from an analysis in which each measure is based on a recorded observation or a long history of collected data. In contrast, subjective probability allows the observer to gain insight by referencing things they have learned and their own experience. Subjective probability represents a personal belief about the likelihood of an event. If someone claims that their favorite sports team has an 80% chance of winning the next game, they are expressing a subjective probability.