Week 2: Theory of Errors (18th-19th C) Flashcards
How was probability viewed in the 18th C?
- Objective Interpretation
A literal interpretation; probability describes the frequency of events
If P(head) = 0.5, then in 1000 coin tosses I should expect ~500 heads
Closely aligns with the modern interpretation of probability; over many realizations of an event, the outcome will converge on an expected result (see the simulation sketch after this card)
• Assumes infinitely many tosses, though, which is impractical
- Subjective Interpretation (dominant)
Probability describes degrees of confidence, rather than an expected result
More practical in that it does not rely on infinite coin tosses to be meaningful
Belief that probability lends itself as a tool (a gauge of confidence) for decision making
• Akin to Pascal’s viewpoint of statistical analysis as a tool
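A quick illustrative sketch (hypothetical, not from the lecture): simulating fair coin tosses in Python shows the observed frequency of heads converging toward 0.5 as the number of tosses grows, which is the objective interpretation's claim.

```python
# Simulate fair coin tosses and track the running frequency of heads.
# Illustrates the objective (frequentist) view: frequency -> 0.5 as N grows.
import random

random.seed(42)  # fixed seed so the run is reproducible
heads = 0
for n in range(1, 10_001):
    heads += random.random() < 0.5  # True (1) on heads, False (0) on tails
    if n in (10, 100, 1_000, 10_000):
        print(f"after {n:>6} tosses: frequency of heads = {heads / n:.4f}")
```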
What is determinism?
• Determinism is built on a few core beliefs:
All substances are made of particles
• Much as between stellar bodies, there are forces interacting between them
Governed by motion and initial conditions of the world
• By knowing the initial conditions and understanding the forces at play, you can have perfect knowledge of the world
System is complex but fully determined
• Humans are merely incapable of understanding its full extent
Responsible for and can explain all the natural phenomena in the world
Roots trace back to 17th C & Newton’s Principia.
Many champions, e.g. Laplace
How does determinism mesh with probability?
- The combination of ontological determinism with epistemological uncertainty emerged as the dominant opinion among scientists and theorists
- Would remain through 19th century and into the 20th century, until the rise of quantum mechanics
What was the problem with Astronomical data?
• Starting in the 15th and 16th centuries, overseas navigation became key to European exploration
o Precision became increasingly critical through the 17th and 18th centuries
• Problem was rooted in DATA SYNTHESIS
o Events happened just once – but there were multiple data sets from many observatories
Reconciling these data sets was an issue seen not only in astronomy, but also in optics and electrical science
How did data synthesis change? When?
Pre-19th century, data synthesis focused on finding the “best” data (demonstrated by Nevil Maskelyne)
o There’s only one correct result – we should seek it out and disregard the rest
Implies that all other observations are incorrect and their observers incompetent
Made data collection a matter of great moral/ethical responsibility: follow the best procedures and collect the "correct" data
How do you identify the best? The most internally consistent set? The best agreement with theory?
19th century brought a shift in data synthesis (e.g. Friedrich Bessel)
o Realized identifying the best was not feasible – sought to use all data
Synthesize all the sets somehow using probability theory and statistics
o Removed the moral implications – delta in observations was due to psychological/physiological differences
o This change in mindset gives rise to the Theory of Errors
How was the new data synthesis done?
• Techniques to synthesize data began in the 19th century
o Adrien-Marie Legendre, working with the positional data of comets, realized there were variations across the measurement sets
o 1805: Legendre publishes Method of Least Squares
Problem: a body has N measured quantities, governed by N+1 parameters. If we have M data sets, with M ≫ N+1, how do we combine them?
CRITICAL INSIGHT: Assume each measurement set has its own associated error.
Minimizing the total squared error through calculus yields N+1 equations which can be solved for the parameters. METHOD OF LEAST SQUARES
This became the dominant method through the 19th C (see the sketch below).
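A minimal modern sketch of the technique, assuming a made-up straight-line problem (Legendre's actual application was comet positions; the data here is hypothetical):

```python
# Least squares on an overdetermined system: M = 5 observations,
# 2 unknown parameters (slope a and intercept b of y = a*x + b).
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])  # each observation carries its own error

# Design matrix: one row per observation, one column per parameter.
A = np.column_stack([x, np.ones_like(x)])

# lstsq minimizes the total squared error ||A p - y||^2; setting its
# derivatives to zero by hand gives the same "normal equations"
# (A^T A) p = A^T y -- one equation per parameter.
p, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b = p
print(f"least-squares fit: y = {a:.3f}*x + {b:.3f}")
```

Note that no single "best" data set is chosen; every observation contributes to the solution, which is exactly the shift away from Maskelyne-style synthesis.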
Why was Least Squares accepted?
Legendre’s 1805 proof was grounded in calculus, but needed probability theory support. It came shortly after:
Gauss's maximum likelihood estimate showed that least squares was most likely to be correct if the error was Gaussian
Laplace's Central Limit Theorem showed that the average of many IID RVs approaches a Gaussian distribution
Carl Friedrich Gauss
German mathematician
Performed a MAX LIKELIHOOD ESTIMATE on the method of least squares
i.e. What values of parameters maximize probability to be correct?
Assumed the errors of the data sets were IID RVs.
Showed the Least Squares method had the highest probability of being correct, as long as the error followed a Gaussian distribution (see the sketch below)
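In modern notation (a compact sketch, not Gauss's original derivation; the model f and parameters θ are modern conveniences):

```latex
% Residuals e_i = y_i - f_i(\theta), assumed IID Gaussian with variance \sigma^2
L(\theta) = \prod_{i=1}^{M} \frac{1}{\sigma\sqrt{2\pi}}
            \exp\!\left(-\frac{(y_i - f_i(\theta))^2}{2\sigma^2}\right)
% Taking logs turns the product into a sum:
\log L(\theta) = \mathrm{const} - \frac{1}{2\sigma^2}\sum_{i=1}^{M} (y_i - f_i(\theta))^2
% Maximizing the likelihood is therefore equivalent to minimizing
% the total squared error -- i.e. the Method of Least Squares.
```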
Adrien-Marie Legendre
A French mathematician
o 1805: Legendre publishes Method of Least Squares
Pierre Simon Laplace
French mathematician - we previously saw his work on inverse/conditional probability.
The Central Limit Theorem:
o Suppose you have N independent and identically distributed random variables (IID RV’s)
As N increases, the distribution of their average S_N approaches a Gaussian distribution (see the statement below)
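In modern notation (a standard statement of the classical CLT, not Laplace's original formulation):

```latex
% X_1, \dots, X_N IID with mean \mu and finite variance \sigma^2
\bar{X}_N = \frac{1}{N}\sum_{i=1}^{N} X_i,
\qquad
\sqrt{N}\,\frac{\bar{X}_N - \mu}{\sigma} \;\xrightarrow{\,d\,}\; \mathcal{N}(0, 1)
\quad \text{as } N \to \infty
```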
Nevil Maskelyne
An English Astronomer.
1. In the late 18th century, he was reviewing the data at his Greenwich observatory
2. Noticed some assistants’ recorded times were consistently delayed when compared to accepted results
Assumed assistants were incapable and fired them
Friedrich Bessel
German astronomer/mathematician.
- Noticed the same phenomenon, where some observers' results consistently deviated
- Through the 1820’s, he performed experiments to analyze these errors
Asked assistants to perform contrived measurements, such as timing a known duration, to measure their individual errors
- Realized the errors were consistent for each observer
• Bessel developed the idea of personal equations to account for this
Personal Equations
Developed by Friedrich Bessel, early 19th C
- Attributed the deltas to psychological/physiological traits of the observers
In the same vein as "reaction time"
- The personal equation of an observer was inherent to the data they provided
All data was subsequently provided alongside the recorder's personal equation
- This only further validated Least Squares data synthesis, as opposed to trying to find a single "best" data set (see the sketch below)
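A minimal sketch of how a personal equation might be applied in practice, assuming it reduces to a constant time offset per observer (all names and numbers below are hypothetical):

```python
# Correct each observer's record by their personal equation (systematic
# delay), then combine all the data -- rather than discarding "bad" observers.
transit_times = {            # raw recorded transit times, in seconds
    "observer_A": 12.80,
    "observer_B": 13.05,
    "observer_C": 12.95,
}
personal_equation = {        # each observer's measured systematic delay
    "observer_A": -0.10,
    "observer_B": 0.15,
    "observer_C": 0.05,
}

corrected = {obs: t - personal_equation[obs] for obs, t in transit_times.items()}
estimate = sum(corrected.values()) / len(corrected)
print(f"corrected times: {corrected}")
print(f"combined estimate: {estimate:.3f} s")
```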
What were the implications of Personal Equations?
- Must actively reduce uncertainty in our measurements
Can train observers to mitigate delays
Can develop automatic measurement techniques (~19th century)
- We can study uncertainty
It is human nature to be different; this is something which can be studied and measured rather than treated as incompetence
This gives the beginnings of “psychophysics”
A precursor to psychology; it developed in the 2nd half of the 19th century