LESSON 2/3 - Mathematical background Flashcards
What is the relationship between a 1D vector and a 2D matrix in a matrix space?
In a 2D space, a hyperplane reduces to a 1D line: in general, a hyperplane in an n-dimensional space is an (n − 1)-dimensional subspace, so the hyperplane that separates points in the plane is itself a line.
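A minimal NumPy sketch of this idea: a hyperplane in R^n is the set of points x with w · x = b, and in R^2 that set is a 1D line. The weight vector and offset below are made-up numbers for illustration.

```python
import numpy as np

# A hyperplane in R^n is {x : w @ x = b}, with dimension n - 1.
# In R^2 it is a 1D line. Illustrative weights and offset:
w = np.array([2.0, -1.0])
b = 1.0

# Two points on the line 2*x1 - x2 = 1:
p1 = np.array([1.0, 1.0])
p2 = np.array([0.0, -1.0])

on_plane = bool(np.isclose(w @ p1, b) and np.isclose(w @ p2, b))
dim_of_hyperplane = len(w) - 1  # 1D object living in a 2D space
```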
Why is graph theory important in describing systems with interacting elements?
Graph theory provides a mathematical language for describing systems with interacting elements, such as neural networks in this course, where each element represents a neuron, and links between elements indicate connections and information exchange.
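One common way to encode such a graph is an adjacency matrix. Below is a hedged toy sketch (the three-neuron wiring is invented for illustration): entry A[i, j] = 1 means element i sends information to element j.

```python
import numpy as np

# A toy 3-neuron network as a directed graph. Entry A[i, j] = 1 means
# neuron i connects to (sends information to) neuron j.
A = np.array([
    [0, 1, 1],   # neuron 0 projects to neurons 1 and 2
    [0, 0, 1],   # neuron 1 projects to neuron 2
    [0, 0, 0],   # neuron 2 has no outgoing links
])

out_degree = A.sum(axis=1)  # outgoing connections per neuron
```

The same matrix view scales up: whole-brain connectomes in neuroscience are often analyzed as (much larger) adjacency matrices of exactly this form.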
How is graphical formalism used in neuroscience to describe the brain?
In neuroscience, the brain or specific areas are often described as graphs. This graphical representation helps depict connectivity at both the whole brain level and local circuit levels.
Why is probability theory powerful for describing stochastic systems like the brain?
Probability theory is powerful for stochastic systems because complex systems, like the brain, may not be fully observable. Additionally, approximations, errors in data, and intrinsic stochasticity make probabilistic approaches valuable.
What is the significance of conditional independence in graphical representation?
Conditional independence allows exclusion of variables that do not directly influence each other, simplifying graphical representations. In neural networks, this concept helps create more organized graphs by excluding irrelevant connections.
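A small numerical sketch of what conditional independence means, with made-up probability tables: if A and B are conditionally independent given C, the joint distribution factorizes as P(a, b, c) = P(a|c) P(b|c) P(c), and P(A, B | C) splits into a product.

```python
import numpy as np

# Invented conditional probability tables for two binary variables A, B
# that are conditionally independent given a binary variable C.
p_c = np.array([0.3, 0.7])            # P(C)
p_a_given_c = np.array([[0.9, 0.1],   # rows index c, columns index a
                        [0.2, 0.8]])
p_b_given_c = np.array([[0.6, 0.4],
                        [0.5, 0.5]])

# Build the joint from the factorization P(a,b,c) = P(c) P(a|c) P(b|c).
joint = np.einsum('c,ca,cb->abc', p_c, p_a_given_c, p_b_given_c)

# Check that, conditioned on C = 0, the joint of (A, B) is the product
# of the two conditionals -- i.e. no direct A-B link is needed.
p_ab_given_c0 = joint[:, :, 0] / joint[:, :, 0].sum()
factorized = np.outer(p_a_given_c[0], p_b_given_c[0])
ok = bool(np.allclose(p_ab_given_c0, factorized))
```

In a graphical model this is exactly why no edge is drawn between A and B: their dependence is entirely mediated by C.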
How are random variables like uniform distribution different from normal distribution?
Under a uniform distribution, every outcome in the range is equally likely, so no value is more probable than any other. A normal distribution is peaked around its mean, so outcomes near the mean are more probable, which allows predictions based on the mean value.
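This difference is easy to see by sampling, assuming NumPy's random generator; the ranges below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
uniform = rng.uniform(-3, 3, size=100_000)  # all values equally likely
normal = rng.normal(0, 1, size=100_000)     # peaked around the mean 0

# Fraction of samples falling close to the mean (|x| < 0.5):
near_mean_normal = np.mean(np.abs(normal) < 0.5)
near_mean_uniform = np.mean(np.abs(uniform) < 0.5)
```

The normal samples cluster near the mean far more often than the uniform samples do, which is what makes mean-based prediction informative.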
Explain the Bayesian perspective in probability theory.
The Bayesian perspective focuses on establishing a degree of belief that can change over time based on observations. It involves updating opinions and assigning probabilities to hypotheses given evidence.
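A minimal worked example of such an update via Bayes' rule, P(H|E) = P(E|H) P(H) / P(E); the prior and likelihoods are made-up numbers for a toy coin scenario.

```python
# Hypothesis H: "the coin is biased toward heads". Evidence E: one heads.
p_h = 0.5                # prior degree of belief in H
p_e_given_h = 0.9        # chance of heads if the coin is biased
p_e_given_not_h = 0.5    # chance of heads if the coin is fair

# Total probability of the evidence, then Bayes' rule:
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_e_given_h * p_h / p_e  # updated belief after seeing heads
```

Observing heads raises the belief in the bias hypothesis from 0.5 to roughly 0.64; repeating the update with each new observation is the Bayesian notion of changing one's degree of belief over time.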
Why is linear algebra relevant in neural networks?
Although neural networks are typically non-linear systems, linear algebra underlies their core operations: inputs and activations are vectors, layers apply matrix transformations, and projecting data into a higher-dimensional space can make classes separable by hyperplanes.
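A classic sketch of this separability idea, with an invented feature map: XOR is not linearly separable in 2D, but adding the product feature x1·x2 lifts the points into 3D, where a single hyperplane separates the classes.

```python
import numpy as np

# XOR inputs and labels: not separable by any line in the plane.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# Lift each point (x1, x2) to (x1, x2, x1*x2) in 3D.
lifted = np.column_stack([X, X[:, 0] * X[:, 1]])

# In the lifted space the hyperplane x1 + x2 - 2*x1*x2 = 0.5
# separates the two classes with a simple threshold.
w = np.array([1.0, 1.0, -2.0])
pred = (lifted @ w > 0.5).astype(int)
```

This is the same trick hidden layers perform implicitly: remap the data so that a linear (hyperplane) decision becomes possible.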
How does calculus relate to neural networks?
Calculus describes how functions change. A neural network is a composition of functions of its input variables, connections, and weights, and derivatives of that composition describe how small changes in the weights alter the network's output.
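A toy sketch of why derivatives matter, using an invented one-parameter loss: the derivative of L(w) = (w − 3)² tells us which direction reduces the loss, and repeatedly stepping against it drives w toward the minimum.

```python
# Toy loss with known minimum at w = 3, and its derivative.
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)  # dL/dw

w = 0.0      # arbitrary starting weight
lr = 0.1     # step size (illustrative choice)
for _ in range(100):
    w -= lr * grad(w)  # step against the derivative to reduce the loss
```

After a hundred such steps w sits essentially at the minimum; gradient-based weight updates in neural networks apply the same calculus in many dimensions.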
What defines an algorithm, and how is it different from a logarithm?
An algorithm is a procedure: a sequence of operations that produces a well-determined result in a finite amount of time. It should not be confused with a logarithm, which is a mathematical function (the inverse of exponentiation) whose value grows ever more slowly as its input increases.
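The contrast can be made concrete: Euclid's gcd below is an algorithm (a finite procedure with a well-determined result), while `math.log` is a logarithm, a function whose growth keeps slowing down.

```python
import math

# An algorithm: Euclid's gcd terminates in a bounded number of steps
# and always yields a well-determined result.
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

result = gcd(48, 18)

# A logarithm: just a function. The same multiplicative step (doubling
# vs. a small relative increase) adds less and less to its value.
growth_early = math.log(20) - math.log(10)
growth_late = math.log(1010) - math.log(1000)
```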
How does graph theory represent neural networks in artificial intelligence?
In artificial intelligence, neural networks are represented using graph theory, where each element corresponds to a neuron, and connections between elements signify information exchange and connectivity.
Why is probability theory valuable in cases where a system cannot be fully observed, such as the brain?
Probability theory is valuable in situations like studying the brain, where complete observation is often impossible due to the complexity of the system. It allows for making probabilistic guesses and handling uncertainties.
What is the role of conditional independence in simplifying graphical representations in neural networks?
Conditional independence helps simplify neural network graphs by excluding connections that do not directly influence each other, leading to more organized and interpretable representations.
How does the concept of attractors apply in the context of neural networks?
In neural networks, attractors are patterns or states to which the system tends to converge over time. They play a role in stability and memory recall in the network’s behavior.
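A minimal Hopfield-style sketch of an attractor, with an invented ±1 pattern: the stored pattern is a state the dynamics converge to, so a corrupted cue is pulled back to it — a toy model of memory recall.

```python
import numpy as np

# One stored pattern acts as an attractor of the network dynamics.
pattern = np.array([1, -1, 1, -1, 1])
W = np.outer(pattern, pattern).astype(float)  # Hebbian-style weights
np.fill_diagonal(W, 0.0)                      # no self-connections

# Start from the pattern with one flipped bit (a corrupted memory cue).
state = np.array([1, -1, -1, -1, 1])
for _ in range(5):
    state = np.sign(W @ state)  # threshold update toward the attractor
    state[state == 0] = 1       # break ties deterministically
```

The update pulls the corrupted state back onto the stored pattern, illustrating both stability (the pattern is a fixed point) and recall (nearby states flow into it).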
What does the term “Black Swan” refer to in a Bayesian perspective?
In a Bayesian perspective, a “Black Swan” event refers to an unexpected observation that can lead to a change in beliefs or probabilities. It emphasizes the adaptability of beliefs based on new evidence.