Question
Given the usefulness of Bayesian Inference in dealing with real-world uncertainty, why don't we see more of it in the MFE literature/coursework?
For example, the books on FEpress don't really mention it. I also don't see anything on the Baruch MFE curriculum website.
Background
To me, financial engineering theory involves laying down a core set of mathematical assumptions and then carrying the logic forward to a closed-form solution or a good numerical approximation. Practitioners breathe life into the equations by making markets based on theory blended with practice.
Practitioners sometimes run into trouble when the "real world" gets in the way of theory: the 10,000-year flood happens every decade or so. Models with a limited historical look-back get overly excited by "good" data, and practitioners lever up imperfect assumptions into disastrous consequences. The fall of LTCM is a great example of this.
Even before I learned about situations where assumptions hit a brick wall in markets (Oct 1987, Aug 2007, etc.), the frequentist view of the world I was presented with in undergraduate studies never quite worked for me. I eventually stumbled upon Bayesian Inference by way of Statistical Rethinking, and it totally changed my approach to modeling.
My Definition of Bayesian Inference
A meaningful subset of the following ideas:
1) Hypotheses are not models
2) Strict hypothesis falsification is not possible in most situations
3) Uncertainty should be aggregated and reported rather than ignored/averaged out
4) Your past beliefs and biases should be explicitly quantified in your priors
5) Models should be updated with new data using Bayes' Theorem (see the first sketch after this list)
6) Asymptotic analysis doesn't work well in the real world
7) Nothing is "random"; distributions capture our uncertainty under some assumptions
8) Parameters/models have distributions, data (usually) do not
9) Multilevel models should be used more often (a.k.a. mixed-effects models)
10) Regularization is extremely important
11) Every model should deliver a Posterior Predictive Distribution (given a small sample of data, where tuning via cross-validation is difficult)
12) A pair of competing models should be compared using the Bayes factor (see the second sketch after this list)
13) Multiple models should be compared using an information criterion
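To make items 5, 7, 8, and 11 concrete, here is a minimal sketch in Python of a conjugate Beta-Binomial update. All the numbers (the prior, the trial counts) are invented for illustration: a prior on a success probability p is updated with data via Bayes' Theorem, and parameter uncertainty is then carried into a posterior predictive distribution rather than averaged out.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Prior beliefs about a success probability p, encoded as Beta(a, b).
# These values are made up for this sketch.
a, b = 2.0, 2.0

# Hypothetical data: k successes in n trials.
n, k = 20, 14

# Conjugate update via Bayes' Theorem: the posterior is Beta(a + k, b + n - k).
posterior = stats.beta(a + k, b + n - k)
lo, hi = posterior.interval(0.95)
print(f"posterior mean = {posterior.mean():.3f}, 95% interval = ({lo:.3f}, {hi:.3f})")

# Posterior predictive for the next 10 trials: sample p from the posterior,
# then sample future data given each draw, so uncertainty about p itself
# flows into the prediction instead of being averaged away.
p_draws = posterior.rvs(size=10_000, random_state=rng)
future = rng.binomial(n=10, p=p_draws)
print("P(>= 8 successes in next 10 trials) =", (future >= 8).mean())
```

The point of the last two lines is item 11: the predictive distribution is wider than a plug-in binomial forecast because it reflects uncertainty about the parameter, not just sampling noise.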
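And a sketch of item 12, reusing the same toy data: two competing "models" that differ only in their priors can be compared by the ratio of their marginal likelihoods, i.e. the Bayes factor, which has a closed form for the Beta-Binomial. The two priors below are again invented for illustration.

```python
import numpy as np
from scipy.special import betaln, gammaln

def log_marginal_likelihood(k, n, a, b):
    """log p(k | n) under a Binomial likelihood with a Beta(a, b) prior on p."""
    log_binom_coeff = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    return log_binom_coeff + betaln(a + k, b + n - k) - betaln(a, b)

n, k = 20, 14  # same toy data as above

# Two competing "models" = two priors (both assumed for this sketch):
# M1 is skeptical (concentrated around p = 0.5), M2 expects a high success rate.
log_ml_m1 = log_marginal_likelihood(k, n, a=10.0, b=10.0)
log_ml_m2 = log_marginal_likelihood(k, n, a=8.0, b=2.0)

bf_21 = np.exp(log_ml_m2 - log_ml_m1)  # Bayes factor: evidence for M2 over M1
print(f"Bayes factor (M2 vs M1) = {bf_21:.2f}")
```

For more than two candidates (item 13), the same marginal likelihoods can feed model averaging, or one falls back on an information criterion when they are not tractable.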