It is common knowledge that human beings commit errors in judgment all the time. In areas of uncertainty, most of us go with our gut intuition, and that intuition often turns out to be wrong. Much of this stems from the fact that humans are poor statistical thinkers, and thus poor Bayesian thinkers. What is Bayesian thinking? Let us start with an illustrative example, the Monty Hall problem, famously depicted in the Kevin Spacey movie "21."
There are three doors, and behind each door is either a goat or a car; there are always two doors with goats and one door with a car. The player first chooses a door without opening it, and the game show host, whose interests are opposed to the player's, proceeds to open a different door. Since the host knows what is behind each door, he always opens a door with a goat. Now that the player is left with the initially chosen door and one other closed door, the host offers an opportunity to switch to that other unopened door.
Should the player switch? For an intuitive Bayesian, a purely statistical thinker, the answer should be easy. Unfortunately, human beings are not intuitive Bayesians. In fact, most people answer that it does not matter whether the player switches, since the probability of winning the car is supposedly 50% between the two remaining doors anyway.
They would be wrong. Now, before we examine the correct way to think about this problem, one might ask: so what? Why does it matter if humans are not intuitive Bayesians or, even more broadly, bad statistical thinkers? Simply put, Bayesian reasoning corrects some of the problems of bad statistical thinking.
Bad statistical thinking leads to bad judgments and decisions, which have a wide variety of consequences in everyday life as well as in arenas such as politics and science. Thus, everyone should become a better Bayesian thinker, because under uncertainty, accurate probabilistic judgments are useful and important. To give an accurate depiction of how Bayesian reasoning works, let us return to the Monty Hall problem and examine not only why switching doors matters, but why it is beneficial to switch.
When the host first opened the door with the goat, something happened: opening the door gave the player extra information, and thus changed the probabilities of the outcomes. By using this extra information, the player who switches doors no longer has a 50% chance of winning the car, but a roughly 67% (2/3) chance. Let us suppose that the player picks the door that contains the car. The host opens either the first goat door or the second (it does not matter which), and the player switches to the remaining goat door and loses.
Now, suppose the player picks the first goat door instead, which means the host is forced to open the second goat door. Since the only other door contains the car, the player switches and wins. Lastly, suppose the player picks the second goat door. The host is forced to open the first goat door, which, again, means the player will win the car after a switch. These are the only three possible scenarios, and so we see that the probability of winning is two out of three if the player switches.
Conversely, what if the player does not switch? In the first scenario the player wins the car, but in scenarios two and three the player obviously loses. Thus, not switching gives only a 33% (1/3) chance to win the car. The Monty Hall problem is a rather simple illustration of how Bayesian reasoning works, so in order to gain a more complete understanding, we must explore its principles.
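Before moving on, the three-scenario enumeration above can be checked by brute force. The following is a minimal sketch in Python (not part of the original argument; the trial count and door labels are arbitrary choices) that simulates the game many times and compares the two strategies:

    import random

    def play(switch, trials=100_000):
        # Simulate the Monty Hall game 'trials' times and return the win rate.
        wins = 0
        for _ in range(trials):
            doors = [0, 1, 2]
            car = random.choice(doors)    # door hiding the car
            pick = random.choice(doors)   # player's initial choice
            # The host opens a door that is neither the player's pick nor the car.
            opened = random.choice([d for d in doors if d != pick and d != car])
            if switch:
                # Switch to the single remaining closed door.
                pick = next(d for d in doors if d != pick and d != opened)
            wins += (pick == car)
        return wins / trials

    print("stay:  ", play(switch=False))   # roughly 0.33
    print("switch:", play(switch=True))    # roughly 0.67

With enough trials, the simulated win rates settle near 1/3 and 2/3, matching the argument above.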
In 1763, a paper by Reverend Thomas Bayes, "An Essay towards solving a Problem in the Doctrine of Chances," was published posthumously, and it brought about a paradigmatic shift in statistics: by using ever-increasing information and experience, one can gradually approach an understanding of the unknown (of course, his main motive was to prove the existence of God).
Fundamentally, Bayesian reasoning holds that probabilities should be corrected over time, and that all probabilities are merely estimates of how likely events are to occur. Through the further efforts of mathematicians like Laplace in perfecting the Bayesian framework, we now have a modern and complete theory of probability. First, there are what we call priors: the strength of our existing beliefs, or, put another way, our initial estimates of how likely something is before any new evidence arrives.
Then there is the evidence, the empirical aspect: the influx of new information. The Bayesian framework takes these two components and mathematically analyzes how the new evidence should revise the prior into a posterior, the updated belief. If we know nothing about an event, then all we can do is estimate a probability. However, if there is new information, then that probability must be corrected in light of it.
Over time, as experience grows through more information, these probability estimates will eventually fit "reality." In the Monty Hall case, the moment the host opened the goat door, that influx of new information immediately updated the player's prior. If the host does not open a door, the player merely has a 33% chance to win the car across the three doors, and switching makes no difference.
However, since the host removes a door, and specifically a door that contains a goat, this new evidence updates the original 33% prior on the other unopened door to a posterior of roughly 67% (2/3). One might notice that this method of thinking is strikingly similar to the scientific method, which is certainly true. To put it another way, Bayesian thinking is how to use known information or experience to judge or predict the unknown.
For example, let event A be "rain tomorrow" and event B be "cloudy tonight." If tonight is cloudy, what is the probability of rain tomorrow? Using Bayes' theorem directly, you only need to know three quantities: the probability of rain on any given day, the probability of a cloudy night, and the probability that the night before a rainy day was cloudy. Substitute them into the formula and you are done.
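As a minimal sketch of that substitution in Python, with three made-up illustrative probabilities (the essay does not supply real values):

    # Bayes' theorem:
    # P(rain tomorrow | cloudy tonight)
    #   = P(cloudy tonight | rain tomorrow) * P(rain tomorrow) / P(cloudy tonight)

    p_rain = 0.20               # assumed: probability of rain on any given day
    p_cloudy = 0.40             # assumed: probability of a cloudy night
    p_cloudy_given_rain = 0.80  # assumed: probability the night before a rainy day was cloudy

    p_rain_given_cloudy = p_cloudy_given_rain * p_rain / p_cloudy
    print(p_rain_given_cloudy)  # 0.4 with these made-up numbers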
The question is where these probabilities come from, and how we infer them from the information we have. In fact, most valuable problems are backward (inverse) problems. In the stock market, a few signals must be used to judge whether an opportunity is good or bad; in the hospital, the observed symptoms must be used to determine which disease is present; in scientific research, a handful of experimental data points must be used to construct a theory that explains them; and so on.
In general, mathematicians, physicists, and the like are largely concerned with backward problems; without that ability, one cannot predict or judge an outcome from a few signs or phenomena, and the work has little value. (Incidentally, people who cannot think in terms of inverse problems cannot compete in the financial or stock markets; the most advanced current research on speculative markets is built largely on backward stochastic processes and martingale theory.) Consider a concrete example. Suppose the incidence of a disease is 0.001, that is, 1 in 1,000 people has it.
There is a test that can detect the disease, and its sensitivity is 0.99, meaning that 99% of people who really have the disease will test positive. Its false positive rate is 5%, meaning that 5% of people who do not have the disease will also test positive. If a patient's test result is positive, what is the probability that they actually have the disease? The staggering result is about 0.019. In other words, even given a positive test, the probability of having the disease rises only from 0.1% to about 2%.
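The 0.019 figure follows directly from Bayes' theorem; here is a short Python check of the arithmetic:

    p_disease = 0.001           # prevalence: 1 in 1,000 people has the disease
    p_pos_given_disease = 0.99  # sensitivity
    p_pos_given_healthy = 0.05  # false positive rate

    # Total probability of testing positive (law of total probability).
    p_pos = (p_pos_given_disease * p_disease
             + p_pos_given_healthy * (1 - p_disease))

    # Bayes' theorem: probability of disease given a positive test.
    p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
    print(round(p_disease_given_pos, 3))    # 0.019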
This is the so-called false positive problem: a positive result alone is not enough to show that the patient is sick. Why is this? Why does a test with 99% accuracy earn less than 2% credibility? The answer lies in its false positive rate combined with how rare the disease is: healthy people vastly outnumber sick people, so even a 5% false positive rate produces far more false alarms than true positives. Here we see the power of Bayes' theorem: it allows us to deduce an unknown probability from known probabilities and the information at hand.

The human brain and quantification vs. heuristic thinking
The advantage of Bayesian analysis is that it does not require any objective estimate to begin with; a rough, even casual, guess at the prior is enough. This is key, because most events that occur in the real world have no objective probability. This is actually very similar to the scientific method: we knew nothing at the beginning, but we are willing to experiment and gradually work out the laws of nature. Bayesian reasoning operates in the same way, by continually updating the posterior probability in accordance with the accumulating experimental data.
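A minimal sketch of this guess-and-correct loop, assuming a coin with an unknown bias and a standard Beta-Binomial update (an illustration chosen here, not an example from the essay):

    import random

    # Prior: a casual guess expressed as Beta(1, 1), i.e. "no idea what the bias is."
    alpha, beta = 1.0, 1.0
    true_bias = 0.7     # unknown to the observer; used only to simulate flips

    for _ in range(1000):
        heads = random.random() < true_bias
        # Each observation corrects the belief: today's posterior is tomorrow's prior.
        alpha += heads
        beta += not heads

    print(alpha / (alpha + beta))   # posterior mean estimate; approaches roughly 0.7

The exact numbers do not matter; the point is that an arbitrary starting prior is steadily pulled toward reality as data accumulate.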
The biggest problem with Bayesian reasoning is that human brains cannot quantify information easily. A commonly raised example comes from Malcolm Gladwell's "Outliers": many people who are sufficiently trained in certain low-chaos environments make correct decisions and judgments without using the Bayesian framework at all. Firefighters, for example, do not perform a Bayesian calculation before deciding whether it is safe to pull a child out of a burning building.
They just act, because they have done it many times before and have a rough heuristic estimate of the safety of such an action. Similarly, chess players do not use Bayesian analysis to think many moves ahead; research has found that thousands of hours of practice, and familiarity with similar positions from the past, give them the ability to predict moves, assuming that the opposing player is also rational.
Conversely, highly chaotic environments, such as the political sphere, are where Bayesian reasoning thrives, due to the high degree of uncertainty. The other criticism comes from the frequentists. In general, the notion of probability taught in school can be called frequentism. If an experiment is performed repeatedly and independently many times, dividing the number of occurrences of an event by the number of trials yields a frequency.
For example, toss a coin 10,000 times; if it comes up heads 4,976 times, the frequency is 0.4976. If the experiment is repeated a great many more times, the frequency tends toward a fixed value, and that value is taken to be the probability. Proving this rigorously involves the law of large numbers (and, for the size of the fluctuations, the central limit theorem), which we will not go into here.
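A quick simulation of that frequentist picture, with made-up trial counts and a fair coin assumed:

    import random

    # Estimate the probability of heads as a frequency over increasing numbers of tosses.
    for trials in (100, 10_000, 1_000_000):
        heads = sum(random.random() < 0.5 for _ in range(trials))
        print(trials, heads / trials)   # the frequency settles near 0.5 as trials grow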