This screen takes prior probabilities for a set of alternative hypotheses, conditional probabilities for several possible outcomes, and information about which outcome(s) occurred. It produces revised probabilities for the original hypotheses. It can handle up to 5 mutually exclusive and exhaustive hypotheses and up to 5 mutually exclusive and exhaustive outcomes. Warning: The program always computes an answer, even if your entries violate the rules. See instructions below.
Bayes' Theorem provides a way to apply quantitative reasoning to what we normally think of as "the scientific method". When several alternative hypotheses are competing for our belief, we test them by deducing consequences of each one, then conducting experimental tests to observe whether or not those consequences actually occur. If a hypothesis predicts that something should occur, and that thing does occur, it strengthens our belief in the truth of the hypothesis. Conversely, an observation that contradicts the prediction would weaken (or destroy) our confidence in the hypothesis.
In many situations, the predictions involve probabilities: one hypothesis might predict that a certain outcome has a 30% chance of occurring, while a competing hypothesis might predict a 50% chance of the same outcome. In these situations, the occurrence or non-occurrence of the outcome would shift our relative degree of belief from one hypothesis toward another. Bayes' Theorem provides a way to calculate these "degree of belief" adjustments.
In Bayes' Theorem terminology, we first construct a set of mutually exclusive and all-inclusive hypotheses and spread our degree of belief among them by assigning a "prior probability" (a number between 0 and 1) to each hypothesis. If we have no prior basis for assigning probabilities, we could just spread our "belief probability" evenly among the hypotheses.
Then we construct a list of possible observable outcomes. This list should also be mutually exclusive and all inclusive. For each hypothesis we calculate the "conditional probability" of each possible outcome. This is just the probability of observing each outcome if that particular hypothesis is true. For each hypothesis, the sum of the conditional probabilities for all the outcomes must add up to 1.
We then note which outcome actually occurred. Using Bayes' formula, we can then compute revised, or posterior, probabilities for the hypotheses. This page implements the rather messy-looking formula. You need only identify the hypotheses and outcomes, assign prior probabilities to the hypotheses and conditional probabilities to the outcomes, and identify the outcome that actually occurred; the JavaScript program will do the rest.
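The calculation described above can be sketched in a few lines of JavaScript (a minimal sketch; the function and variable names are illustrative, not the page's actual code):

```javascript
// Compute posterior probabilities for each hypothesis given an observed outcome.
// priors: array of prior probabilities P(H_i), summing to 1.
// conditionals: conditionals[i][j] = P(outcome j | H_i); each row sums to 1.
// observed: index of the outcome that actually occurred.
function revisePriors(priors, conditionals, observed) {
  // Numerator for each hypothesis: P(H_i) * P(observed outcome | H_i)
  const joint = priors.map((p, i) => p * conditionals[i][observed]);
  // Denominator: total probability of the observed outcome
  const total = joint.reduce((a, b) => a + b, 0);
  // Posterior: each joint probability divided by the total
  return joint.map(j => j / total);
}
```

The denominator is the same for every hypothesis, so the revision simply rescales each prior by how well that hypothesis predicted the observed outcome.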
Suppose a woman is the daughter of a carrier of hemophilia, and therefore is known to have a 50/50 chance of being a carrier herself. If she subsequently has a normal child, how does this affect the likelihood that she is a carrier?
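Worked out in code, the example looks like this (assuming the child is a son: a carrier's son is unaffected with probability 1/2, while a non-carrier's son is unaffected with probability 1):

```javascript
// Hypotheses: H1 = she is a carrier, H2 = she is not a carrier.
const priors = [0.5, 0.5];         // 50/50 chance of being a carrier
const pNormalSon = [0.5, 1.0];     // P(normal son | each hypothesis)

// Bayes' Theorem: scale each prior by its likelihood, then renormalize.
const joint = priors.map((p, i) => p * pNormalSon[i]);
const total = joint.reduce((a, b) => a + b, 0);
const posteriors = joint.map(j => j / total);

console.log(posteriors);  // carrier probability drops from 1/2 to 1/3
```

The normal son is twice as likely under the non-carrier hypothesis, so the odds shift from 1:1 to 1:2, giving a revised carrier probability of 1/3.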
You can also click the Reset button to reset all cells in the table to their default values, or click the Revise Priors button to move the revised probabilities into the prior probability fields (this option is useful for analyzing sequential outcomes).
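The Revise Priors workflow amounts to feeding each posterior back in as the next prior. Continuing the hemophilia example (illustrative code, not the page's own), a second normal son would shift the probabilities again:

```javascript
// One Bayes update: scale priors by likelihoods, renormalize.
function update(priors, likelihoods) {
  const joint = priors.map((p, i) => p * likelihoods[i]);
  const total = joint.reduce((a, b) => a + b, 0);
  return joint.map(j => j / total);
}

let beliefs = [0.5, 0.5];               // [carrier, not carrier]
beliefs = update(beliefs, [0.5, 1.0]);  // first normal son  -> [1/3, 2/3]
beliefs = update(beliefs, [0.5, 1.0]);  // second normal son -> [1/5, 4/5]
```

Each successive normal son halves the carrier odds (1:1, then 1:2, then 1:4), which is exactly what repeated use of the Revise Priors button computes.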
Reference: E. Bright Wilson, Jr., An Introduction to Scientific Research, McGraw-Hill (reprinted by Dover).