This section shows how to interview an expert and construct a model of his or her judgments using Bayesian probability models. Start by selecting the event to predict and the expert to interview. The expert will specify the clues helpful in making the prediction. For each clue level, the expert assesses a likelihood ratio. The probability of the target event is predicted from these ratios using the odds form of Bayes' formula. The model, once created, is validated by comparing its predictions against the experts' judgments.
Statisticians can tell the future by looking at the past; they have developed several tools to forecast future events from historical trends. For example, future sales can be predicted from historical sales figures. Sometimes, analysts must forecast unique events that lack antecedents. Other times, the environment has changed so radically that previous trends are irrelevant. In these circumstances, traditional statistical tools are of little use, and an alternative method must be found. In this section, we provide a methodology for analyzing and forecasting events when historical data are not available. The approach is based on Bayesian subjective probability models.
To motivate this approach, suppose you have to predict demand for a special new type of HMO. HMOs are group health insurance packages sold through employers to employees. HMOs require employees to consult a primary care physician before visiting a specialist; the primary physician has financial incentives to reduce inappropriate use of services. Experience with HMOs shows they can cut costs by reducing unnecessary hospitalization. Suppose we want to know what will happen if we set up a new type of HMO in which primary care physicians have email contact with their patients. At first, predicting demand for the proposed new HMO seems relatively easy, because there is a great deal of national experience with HMOs. But the proposed HMO uses technology to set itself apart from the crowd: The member will initiate contact with the HMO through the computer, which will interview the member and send a summary to the primary care doctor, who will consult the patient's record and decide whether the patient should:
With the decision made, the computer will inform the patient of the primary physician's recommendation. If the doctor does not recommend a visit, the computer will automatically call a few days later to see if the symptoms have diminished. All care will be supervised by the patient's physician.
Clearly, this is not the kind of HMO with which we have much experience, but let's blend a few more uncertainties into the brew. Assume that the local insurance market has changed radically in recent years‑‑competition has increased, and businesses have organized powerful coalitions to control health care costs. At the federal level, national health insurance is again under discussion. With such radical changes on the horizon, data as little as two years old may be irrelevant. As if these constraints were not enough, we need to produce the forecast in a hurry. What can we do? How can we predict demand for an unprecedented product?
Step 1. Select Target Event
To use the calculus of probabilities in forecasting an event, we need to make sure that the events of interest are mutually exclusive (the events cannot occur simultaneously) and exhaustive (one event in the set must happen). Thus, in the proposed HMO, we might decide to predict the following exhaustive list of mutually exclusive events:
The event being forecast should be expressed in terms of the experts' daily experiences, using language they are familiar with. If we plan to tap the intuitions of benefit managers about the proposed HMO, we should realize they might have difficulty with our event types, which are described in terms of the entire employee population. If benefit managers are more comfortable thinking about individuals, we can calculate the four events from the probability that one employee will join. It makes no difference for the analysis how one defines the events of interest. It may make a big difference to the experts, however, so be sure to define the event of interest in terms familiar to them. Expertise is funny. If we ask experts about situations slightly outside their specific area or frame of reference, we often get erroneous responses. For example, some weather forecasters might predict rain more accurately than air pollution because they have more experience with rain. Therefore, it would be more reasonable to ask benefit managers about the probability of events, but focus on the individual, not the group:
Many analysts and decision makers, recognizing that real situations are complex and have tangled interrelationships, tend to work with a great deal of complexity. We prefer to forecast as few events as possible. In our example, we might try to predict the following events:
Again the events are mutually exclusive and exhaustive, but now they are more complex. The forecasts deal not only with applicants' decisions but also with the stability of those decisions. People may join when they are sick and withdraw when they are well. Turnover rates affect administration and utilization costs, so information about the stability of the risk pool is important. In spite of the utility of such a categorization, we think it is difficult to combine two predictions, and we prefer to design a separate model for each‑‑for reasons of simplicity and accuracy. As will become clear shortly, we can use simpler methods of forecasting with two events than with more.
The events must be chosen carefully because a failure to minimize their number may indicate you have not captured the essence of the uncertainty. One way of ensuring that the underlying uncertainty is being addressed is to examine the link between the forecast event and the actions the decision maker is contemplating. Unless these actions differ radically from one another, some of the events should be combined. A model of uncertainty needs no more than two events unless there is clear proof to the contrary. Even then, it is often best to build more than one model to forecast more than two events.
For our purposes, we are interested in predicting how many employees will join the HMO, because this is the key uncertainty investors need to judge the proposal. To predict the number who will join, we can calculate p(Joining), the probability that an individual employee will join. If the total number of employees is n, then the number who will join is n * p(Joining). Having made these assumptions, let's return to the question of assessing the probability of joining.
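The arithmetic of this step can be sketched in a few lines of Python; the probability of 0.20 and the workforce of 500 employees are invented numbers for illustration, not estimates from the text:

```python
def expected_joiners(p_joining, n_employees):
    """Expected enrollment: n * p(Joining)."""
    return n_employees * p_joining

# Hypothetical inputs: 500 employees, each with a 20 percent chance of joining.
print(expected_joiners(0.20, 500))  # -> 100.0
```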
Step 2. Divide & Conquer
We suggested that demand for the proposed HMO can be assessed by asking experts, "Out of 100 employees, how many will join?" The suggestion was somewhat rhetorical, and an expert might well answer, "Who knows? Some people will join the proposed HMO, others will not‑‑it all depends on many other factors." Clearly, if posed in these terms, the question is too general to have a reasonable answer. When the task is complex, meaning that many contradictory clues must be evaluated, experts' predictions can be way off the mark.
Errors in judgment may be reduced if we break complex judgments into several components, or clues. The expert can then specify how each clue affects the forecast, and we can judge individual situations based on the clues that are present. We no longer need to estimate the probability of the complex event directly; its probability can be derived from the clues that are present and the influence these clues have on the complex judgment. In analyzing opinions about future uncertainties, we often find that forecasts depend on a host of factors. In this fashion, the forecast is decomposed into predictions about a number of smaller events. In talking with experts, the first task is to understand whether they can make the desired forecast with confidence and without reservation. If they can, we rely on their forecast and save everybody's time. When they cannot, we disassemble the forecast into judgments about clues.
Let’s take the example of the online HMO and see how one might follow our proposed approach. Nothing is totally new, and the most radical health plan has components that resemble aspects of established plans. Though the proposed HMO is novel, experience offers clues to help us predict the reaction to it. The success of the HMO will depend on factors that have influenced demand for services in other circumstances. Experience shows that the plan's success depends on the composition of the potential enrollees. In other words, some people have characteristics that dispose them toward or against joining the HMO. As a first approximation, the plan might be more attractive to young employees who are familiar with computers, to older high‑level employees who want to save time, to employees comfortable with delayed communications on telephone answering machines, and to patients who want more control over their care. If most employees are members of these groups, we might reasonably project good demand.
If we have to make a prediction about an individual employee, one thing is for sure: each employee will have some characteristics that suggest he or she is likely to join the health plan and some that suggest the reverse. Seldom will we have a situation where all clues point to one conclusion. Naturally, in these circumstances the various characteristics must be weighed against one another before one can predict whether the employee will join the health plan. How can we do so? Bayes' theorem, a formally optimal model for revising existing opinion (sometimes called prior opinion) in the light of new evidence or clues, provides one way. The theorem states:
Posterior odds = Likelihood ratio * Prior odds
Using Bayes' theorem, if C1 through Cn denote the various clues, we can write the forecast regarding the HMO as:
The difference between p(Joining | C1, …, Cn) and p(Joining) is the knowledge of clues C1 through Cn. Bayes' theorem shows how our opinion about an employee's reaction to the plan is modified by our knowledge of his or her characteristics. Because Bayes' theorem prescribes how opinions should be revised to reflect new data, it is a tool for consistent and systematic processing of opinions.
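As a sketch of how the odds form of Bayes' theorem aggregates clues, the Python function below multiplies the prior odds by one likelihood ratio per clue and converts the result back to a probability. The multiplication step assumes the clues are conditionally independent, a condition examined in Step 5; the prior of 0.5 and the ratios 2.0 and 0.5 are invented for illustration.

```python
def posterior_probability(prior_p, likelihood_ratios):
    """Odds form of Bayes' theorem: posterior odds = prior odds
    times the likelihood ratio of each clue (clues assumed
    conditionally independent)."""
    odds = prior_p / (1.0 - prior_p)   # convert probability to odds
    for lr in likelihood_ratios:
        odds *= lr                     # fold in each clue
    return odds / (1.0 + odds)         # convert odds back to probability

# One clue supports joining (LR = 2.0), one opposes it (LR = 0.5);
# with a 50-50 prior the two clues cancel out exactly.
print(posterior_probability(0.5, [2.0, 0.5]))  # -> 0.5
```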
Step 3. Identify Clues
In the last section, we described how a forecast can be based on a set of clues. In this section, we describe how an analyst can work with an expert to specify the appropriate clues in a forecast. The identification of clues starts with the published literature. Even when we think our task is unique, it is always surprising how much has been published about related topics. To our surprise, there was a great deal of literature on predicting decisions to join an HMO, and even though these studies don't concern HMOs with our unique characteristics, reading them can help us think more carefully about clues.
It is our experience that one seldom finds exactly what is needed in the literature. Once the literature search is completed, we strongly advocate using experts to identify clues for a forecast. Even if there is extensive literature on a subject, we cannot expect it to single out the most important variables or to discern all important clues. In a few telephone interviews with experts, one can find the key variables, get suggestions on measuring each one, and identify two or three superior journal articles.
Experts should be chosen on the basis of accessibility and expertise. To forecast HMO enrollment, appropriate experts might be people with firsthand knowledge of the employees, such as benefit managers, actuaries in other insurance companies, and local planning agency personnel. It's useful to start talking with experts by asking broad questions designed to help the experts talk about themselves. A good opening query might be:
The expert might respond with an anecdote about irrational choices by employees, implying that a rational system cannot predict everyone's behavior. Equally, the expert might mention how difficult it is to forecast, or how many years he or she has spent studying these phenomena. The analyst should understand what is occurring here. In these early responses, the expert is expressing a sense of the importance and the value of his or her experience and input. It is vital to acknowledge this hidden message and allow ample time for the expert to describe historic situations.
After the expert has been primed by recalling these experiences, the analyst asks about characteristics that might suggest an employee's decision to join or not join the plan. An opening inquiry could be:
After a few queries of this type, ask more focused questions:
We refer to the second question as a positive prompt because it elicits factors that would increase the chances of joining. Negative prompts seek factors that decrease the probability. An example of a negative prompt is:
This distinction is important because research shows that positive and negative prompts yield different sets of factors. When Snyder and Swann (1978) asked subjects to identify clues for introversion and extroversion, they got differing responses. Though introversion and extroversion are opposite concepts and clues identifying one yield information about the other, the subjects identified two unrelated sets of clues. Thus, forecasting should start with clues that support the forecast, and then explore clues that oppose it. Then, responses can be combined so the model contains both sets.
It is important to get the opinions of several experts on what clues are important in the forecast. Each expert has access to a unique set of information; using more than one expert enables us to pool information and improve the accuracy of the recall of clues. Our experience suggests that at least three experts should be interviewed for about one hour each. After a preliminary list of factors is collected during these interviews, the experts should have a chance to revise the list, either by telephone, by mail, or in a meeting. If time and resources allow, we prefer the Integrative Group Process for identifying the clues.
Let us suppose that our experts identified the following clues for predicting an employee's decision to join:
Step 4. Describe Levels of Each Clue
A level of a clue measures the extent to which the clue is present. At the simplest, there are two levels, presence and absence; but sometimes there are more. Gender has two levels, male and female, but the age of employees may be described in terms of six discrete levels, each corresponding roughly to a decade: younger than 21, 21‑30, 31‑40, 41‑50, 51‑60, and older than 60. Occasionally we have continuous clues with many levels. For example, when any age between 1 and 65 is considered, we have at least 65 levels for age.
In principle, it is possible to accommodate both discrete and continuous variables in a Bayesian model. In practice, discrete clues are used more frequently for two reasons: (1) experts seem to have more difficulty estimating likelihood ratios associated with continuous clues, and (2) in the health and social service areas, most clues tend to be discrete and virtually all other types of clue can be transformed to discrete clues.
As with defining the forecast event, the primary rule for creating discrete levels is to minimize the number of categories. Rarely are more than five or six categories required, and frequently two or three suffice.
We prefer to identify levels for various clues by asking the experts to describe a level at which the clue will increase the probability of the forecast event. Thus, we may have the following conversation:
In all cases, each category or division should represent a different chance of joining the HMO. One way to check this would be to ask:
After much interaction with the experts, we might devise the following levels for each of the clues identified earlier:
In describing the levels of each clue, we also think through some measurement issues. How do we determine the value of time? We use income, hence hourly wage, as a surrogate, even though it would be more accurate to survey the group. This decision rests on the fact that income data are accessible while a survey would be slow and expensive. But such decisions may mask a major pitfall. If income is not a good surrogate for value of time, we have wrecked our effort by taking the easy way out. Remember the story about the man who lost his keys in the street but was searching for them in his house. Asked why he was looking there, he responded with a certain pinched logic: "The street is dark‑‑the light's better in the house." The lesson is that surrogate measures must be chosen carefully to preserve the value of the clue.
Step 5. Test for Independence
Conditional independence is an important criterion that can streamline a long list of clues (Schum 1965). Independence means that the presence of one clue does not change the value of any other clue. Conditional independence means that within a specific population, such as employees who join the HMO, the presence of one clue does not change the value of another. Conditional independence simplifies the forecasting task. The impact of a piece of information on the forecast, we noted earlier, is its likelihood ratio. Conditional independence allows us to write the likelihood ratio of several clues as the product of the likelihood ratios of each clue. Thus, if C1 through Cn are the clues in our forecast, the joint likelihood ratio of all the clues can be written as:
Assuming conditional independence, the impact of two clues is equal to the product of the impact of each clue. Conditional independence simplifies the number of estimates needed for measuring the joint impact of several pieces of information. Without this assumption, evaluating the joint impact of two pieces of information requires more than two estimates. With it, the likelihood ratio of each clue will suffice.
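A rough accounting, assuming binary clues, shows the savings: without conditional independence, every combination of clue levels needs its own joint likelihood ratio, whereas with it one ratio per clue suffices. The counts below apply only to this simplified binary case.

```python
def estimates_needed(n_clues, conditionally_independent):
    """Number of likelihood-ratio estimates for n binary clues:
    one per clue if conditionally independent, one per combination
    of clue levels (2**n) otherwise."""
    return n_clues if conditionally_independent else 2 ** n_clues

for n in (2, 5, 10):
    print(n, estimates_needed(n, True), estimates_needed(n, False))
# -> 2 2 4
# -> 5 5 32
# -> 10 10 1024
```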
Let us examine whether age and gender are conditionally independent in predicting the probability of joining the HMO. Mathematically, if the two clues are conditionally independent, then we should have:

p(Age | Joining, Gender) = p(Age | Joining)
This formula says that the impact of age on our forecast remains the same even when we know the gender of the person. Thus, the impact of age on the forecast does not depend on gender, and vice versa.
The chances for conditional dependence increase along with the number of clues, so clues are likely to be conditionally dependent if the model contains more than six or seven clues. When clues are conditionally dependent, either one clue must be dropped from the analysis or the dependent clues must be combined into a new cluster of clues. If age and computer literacy were conditionally dependent, then either could be dropped from the analysis. As an alternative, we could define a new cluster with these levels:
The new clue is constructed by combining the levels of age and computer literacy.
There are statistical procedures for estimating conditional dependence; however, the following behavioral procedure works quite well (Gustafson et al. 1973a):
Experts will have in mind different, sometimes wrong, notions of dependence, so the words conditional dependence should be avoided. Instead, we focus on whether one clue tells us a lot about the influence of another clue in specific populations. We find that experts are more likely to understand this line of questioning as opposed to directly asking them to verify conditional independence.
Analysts can also assess conditional independence through graphical methods. The decision maker is asked to list the causes and the consequences (signs, symptoms, or characteristics commonly found) of the condition being predicted. Only direct causes and consequences are listed, as indirect causes or consequences cannot be modeled through the odds form of the Bayes probability model. The target event is written in a node at the center of the page. All causes precede this node and are shown as arrows leading into the target node. All subsequent signs or symptoms are shown as nodes that follow the target event node (arrows leave the target node toward the consequences). Figure 1 shows three causes and three consequences for a target event.
Figure 1: Causes & Consequences of a Target Event
For example, in predicting who will join an HMO, the analyst might ask the decision maker to draw the causes of people joining the HMO and the signs that will distinguish people who have joined from those who have not. A node is created in the center and labeled with the name of the target event. The decision maker might see time pressures as one reason for joining an online HMO and frequent travel as another. The decision maker might also see that, as a consequence of joining, the HMO's membership will be predominantly male, computer literate, and young. These are shown in Figure 2:
Figure 2: Two Causes & Three Signs (Consequences) of Joining the HMO
To understand conditional dependencies implied by a graph, the following rules are applied:
If conditional dependence is found, the analyst has three choices. First, the analyst can ignore the dependencies among the clues. This would work well when multiple clues point to the same conclusion. But when this is not the case, ignoring the dependencies can lead to erroneous predictions.
Second, the analyst could help the decision maker revise the graph or use different causes or consequences so that the model has fewer dependencies. This is often done by defining the clues more precisely. For example, one could reduce the number of links among the nodes by providing a tighter definition of a consequence so that it occurs only through the target event. If at all possible, several related causes should be combined into a single cause to reduce conditional dependence among the clues used in predicting the target event.
Third, barring any revision of the graph, conditional dependence has the following implication for predicting the target event:
For example, the Bayes formula for predicting the odds of joining the HMO can be presented as follows:
Posterior odds of joining = Likelihood ratio (time pressure & travel frequency) * Likelihood ratio (age) * Likelihood ratio (gender) * Likelihood ratio (computer use) * Prior odds of joining
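As an illustration only‑‑the likelihood ratios below are invented, not elicited from experts‑‑the computation might run as follows, with the combined time-pressure-and-travel clue entering as a single ratio and the 1-to-1 prior odds suggested in Step 7:

```python
# Invented likelihood ratios for a hypothetical employee who is under
# time pressure and travels often (the combined clue), is under 30,
# male, and a regular computer user.
likelihood_ratios = {
    "time pressure & travel frequency": 3.0,  # combined dependent clues
    "age under 30": 2.0,
    "male": 1.5,
    "regular computer use": 2.0,
}
prior_odds = 1.0  # 1-to-1 prior odds, as assumed when no prevalence data exist

posterior_odds = prior_odds
for clue, ratio in likelihood_ratios.items():
    posterior_odds *= ratio

print(posterior_odds)                                   # -> 18.0
print(round(posterior_odds / (1 + posterior_odds), 2))  # -> 0.95
```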
Step 6. Estimate Likelihood Ratios
In previous steps, we defined the forecast event and organized a set of clues that could be used in the forecast. Since we intend to use the Bayes' formula to aggregate the effects of various clues, the impact of each clue should be measured as a likelihood ratio. This section explains how to estimate likelihood ratios, but other approaches are possible (Huber 1974).
To estimate likelihood ratios, experts should think of the prevalence of the clue in a specific population. The importance of this point is not always appreciated. A likelihood estimate is conditioned on the forecast event, not vice versa. Thus, the impact of being young (age less than 30) on the probability of joining the HMO is determined by finding the number of young employees among joiners. There is a crucial distinction between this probability and the probability of joining if one is young. The first statement is conditioned on joining the HMO, the second on being young. The definition of likelihood must be kept in mind: it is conditioned on the forecast event, not on the presence of the clue. The likelihood of individuals younger than 30 joining is p(Younger than 30 | Joining), while the probability of joining the HMO for a person younger than 30 is p(Joining | Younger than 30). The two concepts are very different.
A likelihood is estimated by asking questions about the prevalence of the clue in populations with and without the target event. For example, for joining the HMO, we ask:
The ratio of the answers to these two questions determines the likelihood ratio associated with being younger than 30. This ratio could also be estimated directly by asking the expert to estimate the odds of finding the clue in populations with and without the target event. For example, we could ask:
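The two prevalence answers convert to a likelihood ratio by simple division; the figures below (40 percent of joiners under 30 versus 20 percent of non-joiners) are invented for illustration:

```python
def likelihood_ratio(p_clue_given_joining, p_clue_given_not_joining):
    """Likelihood ratio of a clue: its prevalence among joiners
    divided by its prevalence among non-joiners."""
    return p_clue_given_joining / p_clue_given_not_joining

# Hypothetical answers: 40 of 100 joiners are under 30,
# versus 20 of 100 non-joiners.
print(likelihood_ratio(0.40, 0.20))  # -> 2.0
```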
We estimate the likelihood ratios by relying on experts' opinions, but the question naturally arises whether experts can accurately estimate probabilities. Before answering, we need to emphasize that accurate probability estimation does not mean being correct in every forecast. For example, if we forecast that an employee has a 60 percent chance of joining the proposed HMO but the employee does not join, was the forecast inaccurate? Not necessarily. The accuracy of probability forecasts cannot be assessed by the occurrence of a single event. A better way to check the accuracy of a probability is to check it against observed frequency counts. A 60 percent chance of joining is accurate if 60 of 100 employees join the proposed HMO. A single case reveals nothing about the accuracy of probability estimates.
Systematic bias may exist in subjective estimates of probabilities (Lichtenstein and Phillips 1977; Slovic, Fischhoff, and Lichtenstein 1977). Research shows that subjective probabilities for rare events are inordinately low, while those for common events are inordinately high. These results have led some psychologists to conclude that the cognitive limitations of the assessor inevitably flaw subjective probability estimates. For example, Hogarth (1975) concludes: "Man is a selective, sequential information processing system with limited capacity, he is ill‑suited for assessing probability distributions."
Alemi, Gustafson, and Johnson (1986) argue that accuracy of subjective estimates can be increased through three steps. First, experts should be allowed to use familiar terminology and decision aids. Distortion of probability estimates can be seen in a diverse group of experimental subjects, but not among all real experts. For example, meteorologists seem to be fine probability estimators (Winkler and Murphy 1973). Weather forecasters are special because they assess familiar phenomena and have access to a host of relevant and overlapping objective information and judgment aids (such as computers and satellite photos). The point is that experts can reliably estimate likelihood ratios if they are dealing with a familiar concept and have access to their usual tools. In this regard, Edwards writes:
If substantive experts are indeed allowed the time and the necessary tools (e.g., paper and pencil), they can accurately assess probabilities. Granted that an assessed probability is not precise to the third digit, it nevertheless is a systematic and coherent assessment of the individual's belief. (See Edwards's comments following Hogarth's 1975 article.)
A second way of improving experts' estimates is to train them in selected probability concepts (Lichtenstein and Fischhoff 1978). In particular, experts should learn the meaning of a likelihood ratio. Ratios larger than 1 support the occurrence of the forecast event; ratios less than 1 oppose it. A ratio of 1‑to‑2 cuts the odds of the forecast in half; a ratio of 2 doubles them.
The experts also should be taught the relationship between odds and probability. Odds of 2‑to‑1 mean a probability of 0.67; odds of 5‑to‑1 mean a probability of 0.83; odds of 10‑to‑1 mean a probability of about 0.91. The forecaster should walk the expert through the likelihood ratio for the first clue and discuss it in depth before proceeding. We have noticed that the first few probability estimates can take four or five minutes each, as many things are discussed and modified. Later estimates often take less than a minute.
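The odds-to-probability conversions the expert is taught can be checked with a one-line function (probability = odds / (1 + odds)):

```python
def odds_to_probability(odds):
    """Convert odds of k-to-1 into a probability: p = odds / (1 + odds)."""
    return odds / (1.0 + odds)

print(round(odds_to_probability(2), 2))   # -> 0.67
print(round(odds_to_probability(5), 2))   # -> 0.83
print(round(odds_to_probability(10), 2))  # -> 0.91
```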
A third step for improving experts' estimates of probabilities is to rely on more than one expert and on a process of estimation, discussion, and re-estimation. This method can reduce inaccuracies by as much as 33 percent compared to individual estimates (Gustafson et al. 1973b). Relying on a group of experts increases the chance of identifying major errors. In addition, the process of individual estimation, group discussion, and individual re-estimation reduces pressures for artificial consensus while promoting information exchange among the experts.
Step 7. Estimate Prior Odds
According to Bayes' formula, forecasts require two types of estimates: likelihood ratios associated with specific clues, and prior odds associated with the target event. Prior odds can be assessed by finding the prevalence of the event. In a situation without precedent, prior odds can be estimated by asking experts to imagine the future prevalence of the event. Thus, the odds for joining may be assessed by asking:
The response to this question provides the probability of joining, p(Joining), and this probability can be used to calculate the odds for joining:
Odds for joining = p(Joining) / [1 -p(Joining)]
When no reasonable prior estimate is available, we prefer instead to assume arbitrarily that the prior odds for joining are 1‑to‑1, and allow clues to alter posterior odds as we proceed.
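The conversion from the expert's probability estimate to prior odds follows the formula above; the 0.20 estimate below is an invented figure for illustration:

```python
def prior_odds(p_joining):
    """Prior odds for joining: p(Joining) / [1 - p(Joining)]."""
    return p_joining / (1.0 - p_joining)

# Hypothetical expert estimate: 20 of 100 employees will join.
print(prior_odds(0.20))  # -> 0.25
```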
Step 8. Develop Scenarios
Decision makers use scenarios to think about alternative futures. The purpose of forecasting with scenarios is to make the decision maker sensitive to possible futures, so that the decision maker can work to change them. Many predictions are self‑fulfilling prophecies‑‑a predicted event happens because we take steps to increase the chance that it will happen. In this circumstance, predictions are less important than choosing the ideal future and working to make it come about. Scenarios help the decision maker choose a future and make it occur.
Scenarios are written as coherent and internally consistent narrative scripts. The more believable they are, the better. Scenarios are constructed by selecting various combinations of clue levels, writing a script, and adding details to make the group of clues more credible. An optimistic scenario may be constructed by choosing only clue levels that support the occurrence of the forecast event; a pessimistic scenario combines clues that oppose the event's occurrence. Realistic scenarios, on the other hand, are constructed from a mix of clue levels. In the HMO example, scenarios could describe hypothetical employees who would join the organization. The customer most likely to join is constructed by assembling all the characteristics that support joining:
A 29‑year‑old male employee earns more than $60,000. He is busy and values his time; he is familiar with computers, using them both at work and at home. He is currently an HMO member, though not completely satisfied with it.
A pessimistic scenario describes the employees least likely to join:
More realistic scenarios combine other clue levels:
A large set of scenarios can be made by randomly choosing clue levels and then asking experts to throw out impossible combinations. To do this, first write each clue level on a card and make one pile for each clue. Each pile will contain all the levels of one clue. Randomly select a level from each pile, write it on a piece of paper, and return the card to the pile. Once all clues are represented on the piece of paper, have an expert check the scenario, and discard scenarios that are wildly improbable.
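The card-pile procedure can be simulated directly. Assuming the hypothetical clues and levels below (a real model would use the levels elicited in Step 4), each scenario is one random draw per pile, and an expert then vets the result for plausibility:

```python
import random

# Hypothetical clue levels; a real model would use the levels
# elicited from experts in Step 4.
clue_levels = {
    "age": ["under 30", "30 to 50", "over 50"],
    "gender": ["male", "female"],
    "computer use": ["regular", "occasional", "none"],
    "values time highly": ["yes", "no"],
}

def random_scenario(rng=random):
    """Draw one level from each 'pile' of clue-level cards."""
    return {clue: rng.choice(levels) for clue, levels in clue_levels.items()}

for _ in range(3):
    print(random_scenario())  # each draw is then screened by an expert
```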
If experts are evaluating many scenarios (perhaps 100 or more), arrange the scenario text so they can understand them easily and omit frivolous detail. If experts are reviewing a few scenarios (perhaps 20 or so), add detail and write narratives to enhance the scenarios' credibility.
Because scenarios examine multiple futures, they introduce an element of uncertainty and prepare decision makers for surprises. In the example, the examination of scenarios of possible customers helped the decision makers understand that large segments of the population may not consider the HMO desirable. This led to two changes. First, a committee was assigned to make the proposal more attractive to segments not currently attracted to it. This group went back to the drawing board to examine the unmet needs of people unlikely to join. Second, another committee examined how the proposed HMO could serve a small group of customers and still succeed.
Sometimes forecasting is complete after we have examined the scenarios, but if the decision makers want a numerical forecast, we must take two more steps.
Step 9: Validate the Model
Any subjective probability model is in the final analysis just a set of opinions, and as such should not be trusted until it passes rigorous evaluation. The evaluation of a subjective model requires answers to two related questions: (1) Does the model reflect the experts' views? and (2) Are the experts' views accurate?
To answer the first question, design about 30 to 100 scenarios, ask the expert to rate each, and compare these ratings to model predictions. If the two match closely, then the model simulates the expert's judgments. For example, we can generate 30 hypothetical employees and ask the expert to rate the probability that each will join the proposed HMO. To help the experts accomplish this, we would ask them to arrange the cases from more to less likely, to review pairs of adjacent employees to see if the rank order is reasonable, and to change the rank orders of the employees if needed. Once the cases have been arranged in order, the experts would be asked to rate the chances of joining on a scale of 0 to 100. Table 1 shows the resulting ratings. For each scenario, we would use the Bayes' model to forecast whether the employee will join the HMO. Table 1 also shows the resulting predictions.
Table 1: Two Experts' Ratings & Bayes Forecast on 30 Hypothetical Scenarios
Next we compare the Bayes prediction to the average of the experts' ranking. If the rank order correlation is higher than 0.70, we would conclude that the model simulates many aspects of the expert's intuitions. Figure 3 shows the relationship between model predictions and average experts' ratings.
Figure 3: Validating a Model by Testing If It Simulates Experts' Judgments
The straight line shows the expected relationship. Some differences between the model and the experts should be expected, as the experts will show many idiosyncrasies and inconsistencies not found in the model. But the model's predictions and the experts' intuitions should not sharply diverge. One way to examine this is through correlations. The model predictions and the average of the experts' ratings had a correlation of 0.79. If the correlation is lower than 0.5, then perhaps the expert's intuitions have not been effectively modeled, and the model must be revised: the likelihood ratios may be miscalibrated, or some important clues may have been omitted. In this case, the correlation is high enough that we can conclude the model has simulated the experts' judgments.
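A minimal sketch of this comparison, using only the Python standard library: the ratings below are invented for illustration (real values would come from Table 1), and the Spearman rank correlation is computed from hand-rolled rank and Pearson helpers:

```python
# Compare Bayes model predictions against average expert ratings by
# rank correlation. All numbers are illustrative, not data from Table 1.
def ranks(values):
    """Assign 1-based ranks, averaging over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    """Pearson product-moment correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

model_predictions = [0.66, 0.21, 0.85, 0.40, 0.10, 0.55, 0.73, 0.30]
expert_ratings    = [70,   25,   90,   35,   15,   50,   80,   40]

rho = spearman(model_predictions, expert_ratings)
print(f"rank correlation = {rho:.2f}")  # → rank correlation = 0.98
```

A value above roughly 0.70 would, as in the text, suggest the model simulates the expert's judgments; a value below 0.5 would send us back to revise the likelihood ratios or the clue list.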
The above procedure leaves unanswered the larger and perhaps more difficult question of the accuracy of the experts' intuitions. Experts' opinions can be validated if they can be compared to observed frequencies, but this is seldom possible (Howard 1980). In fact, if we had access to observed frequencies, we would probably not bother consulting experts to create subjective probability models. In the absence of objective data, what steps can we take to reassure ourselves regarding our experts?
One way to increase our confidence in expert opinions is to use several experts. If experts reach a consensus, then we feel comfortable with a model that predicts that consensus. Consensus means that experts, after discussing the problem, independently rate the hypothetical scenarios close to one another. One way of checking the degree of agreement among experts' ratings of the scenarios is to correlate the ratings of each pair of experts. Correlation values above 0.75 suggest excellent agreement; values between 0.50 and 0.75 suggest more moderate agreement. If the correlations are below 0.50, then the experts differed, and it is best to examine their differences and redefine the forecast. In the example above, the two experts had a correlation of 0.33, which suggests that they did not agree on the ratings of the scenarios. In this case, it would be reasonable to investigate the source of disagreement and arrive at a better consensus among the experts. Alternatively, if experts are unlikely to resolve their differences, then each can be modeled separately.
Some investigators feel that a model, even if it predicts the consensus of the best experts, is still not valid because they think that only objective data can really validate a model. According to this rationale, a model provides no reason to act unless it is backed by objective data. While we agree that no model can be fully validated until its results can be compared to real data, we nevertheless feel that in many circumstances expert opinions are sufficient grounds for action. In some circumstances (surgery is an example), we must take action on the basis of experts' opinions. If we are willing to trust our lives to expertise, we should be willing to accept expert opinion as a basis for business and policy action.
Step 10: Make a Forecast
To make a forecast, we begin by describing the characteristics of an employee. Given these characteristics, we use Bayes' formula and the likelihood ratios associated with each characteristic to calculate the probability that the employee will join the HMO. In our example, suppose we evaluate a 29‑year‑old man earning $60,000 who is computer literate but not an HMO member. Suppose the likelihood ratios associated with these characteristics are 1.2 for being young, 1.1 for being male, 1.0 for his income level, 3.0 for being computer literate, and 0.5 for not being a member of an HMO. Likelihood ratios greater than 1.0 increase the odds of joining, while ratios less than 1.0 reduce them. Assuming equal prior odds (an employee is as likely to join as not), this employee's posterior odds of joining are:
Odds of joining = 1 x 1.2 x 1.1 x 1.0 x 3.0 x 0.5 = 1.98
The probability of an event "A" can be recovered from its odds using the following formula:
p(A) = Odds (A) / [1 + Odds (A)]
In the above example, the probability of joining is:
Probability of joining = 1.98 / (1 + 1.98) = 0.66
The probability of joining can be used to estimate the number of employees likely to join the new HMO (in other words, demand for the proposed product). If we expect to have 50 of the above type of employee, we can expect 33 = (50 x 0.66) to join. If we do similar calculations for other types of employees, we can calculate the total demand for the proposed HMO.
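The arithmetic of Steps 7 through 10 can be sketched in a few lines of Python. The likelihood ratios and prior odds of 1 are taken from the worked example above, and the employee count of 50 is the assumption in the text:

```python
from math import prod

def posterior_odds(prior_odds, likelihood_ratios):
    """Odds form of Bayes' theorem: multiply prior odds by each ratio."""
    return prior_odds * prod(likelihood_ratios)

def probability(odds):
    """Convert odds to probability: p = odds / (1 + odds)."""
    return odds / (1 + odds)

# Ratios from the worked example: 1.2 (young), 1.1 (male),
# 3.0 (computer literate), 0.5 (not an HMO member); ratios of 1.0
# drop out of the product. Prior odds of 1 assume an employee is as
# likely to join as not.
odds = posterior_odds(1, [1.2, 1.1, 3.0, 0.5])
p = probability(odds)
expected_joiners = round(50 * p)

print(f"odds = {odds:.2f}, p = {p:.2f}, expected joiners = {expected_joiners}")
# → odds = 1.98, p = 0.66, expected joiners = 33
```

Repeating the calculation for each employee type and summing the expected joiners gives the total demand estimate for the proposed HMO.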
Analysis of demand for the proposed HMO showed that most employees would not join but that 12 percent of the employed population might join. Careful analysis allowed the planners to identify a small group of employees who could be expected to support the proposed HMO, showing that a niche was available for the innovative plan.
Forecasts of unique events are useful, but they are difficult because of the lack of data on which to calculate probabilities. Even when events are not unique, frequency counts are often unavailable, given time and budget constraints. However, the judgments of people with substantial expertise can serve as the basis of forecasts.
In tasks where many clues are needed for forecasting, experts may not function at their best, and as the number of clues increases, the task of forecasting becomes increasingly arduous. Bayes' theorem is a mathematical formula that can be used to aggregate the impact of various clues. This approach combines the strength of human expertise (estimating the relationship between the clue and the forecast) with the consistency of a statistical model. Validating these models poses a thorny problem because no objective standards are available. But once the model has passed scrutiny from several experts from different backgrounds, we feel sufficiently confident about the model to recommend action based on its forecasts.
Advanced learners like you often need different ways of understanding a topic. Reading is just one way of understanding; another is writing. When you write, you not only recall what you have read but may also need to make inferences about it. Please complete the following assessment:
Construct a probability model to forecast an important event at work (if you cannot find an appropriate project, construct a model to predict breast cancer risk for women 50 years and older; for background, read the attached paper). Select an expert who will help you construct the model. Make an appointment with the expert and construct the model. Prepare a narrated PowerPoint slide presentation that answers the following questions:
To assist you in reviewing the material in this lecture, please see the following resources:
Narrated lectures require use of Flash.
More & References
Additional readings (Log off after viewing each article. Library membership required)
Recently Asked Questions
In this section, you will find answers to questions asked by you or others.
Question: Do all scenarios need to be presented to the expert, or will a few scenarios suffice to obtain the model prediction? Answer: The more scenarios the better. However, under strong time constraints, some scenarios, particularly those similar to each other, may be skipped. In such a case it is important to present scenarios that are as diverse as possible. Asked on 6/14/2008 2:59:26 PM and answered on 6/16/2008 4:07:06 PM.
Question: When interviewing more than one expert to create scenarios, and one expert is more credible or experienced than the other, does one expert's opinion carry more weight? Answer: It could, but you need to let the experts come to a behavioral consensus; the expert whose opinion you weight more may convince the other expert to change his mind. The lecture on group utilities shows you how to do this. Asked on 3/3/2008 8:25:01 PM and answered on 3/3/2008 9:18:55 PM.
Question: I do not understand what you are asking in question 4; can you please ask in a different way? Answer: Scenario generation is a method of planning. It accomplishes all the tasks described in the reading except the creation of the Bayesian predictors. So, for example, all clues are identified, scenarios are generated, and these scenarios are reviewed by decision makers, but no overall prediction is made for each scenario. The purpose of this approach is to make management more sensitive to possible events that might occur. It is not the purpose of this approach to predict that any particular event will occur, but to make management sensitive to a range of events that might occur. The idea is that managers can change the future by actions they take today; thus, predicting the future is futile, but helping managers shape the future is best. The question is when is scenario planning a reasonable way to plan and when are numerical forecasts necessary. Asked on 3/2/2008 11:52:09 PM and answered on 3/3/2008 7:13:24 AM.
Question: In step 5, test for independence, you used the age and sex example to verify conditional independence. Why did you only compare the likelihood ratio of age with gender to the likelihood ratio of age only, and not to the likelihood ratio of gender as well? I expected to see this side in the equation as well: p(Gender|Joining)/p(Gender|Not Joining). Answer: Both comparisons can be made. The result will be the same. Asked on 2/20/2007 1:44:04 PM and answered on 2/23/2007 9:51:09 AM.
Question: Ideally, objective data should be used to compare with and/or against the data analyzed. However, based on the lecture, the analyzed data can be compared with and/or against the expert's opinion/judgment, because the expert is a reliable source. I'm just wondering, how reliable (in terms of numbers) is that? Or how often is this type of comparison used in the real world? Wouldn't companies or upper management want to see analyzed data compared with and/or against objective data in the research? Answer: When you use a multi-attribute model to predict objective data, you need 3-5 data points for each attribute. This often leads to a large data set that is not practical to construct in real non-research settings. Asking the expert to construct a model and then comparing the model to the objective data reduces the data needed radically. Asked on 2/19/2007 4:13:31 PM and answered on 2/23/2007 9:50:06 AM.
Question: I must admit to being often lost within the electronic structure of the course. For example, Section 4 "Modeling Uncertainty" includes a link to examine an example for how to calculate rare events "http://gunston.doit.gmu.edu/healthscience/730/ProbabilityRareEvent.asp The link provided takes one to a complete full length additional lecture. One that is NOT listed on the syllabus or in the course outline, yet the additional lecture is included in the pull down window for selecting topics here in the Get Answers to Your Questions. In a prior suggestion I mentioned a similar example wherein a 'suggested' link brought one to an additional lecture. It is easy to imagine some students never tapping into such 'suggested' lectures that are otherwise not transparent in the syllabus or course overview. More critical is the fact that one cannot independently judge how critical these 'hidden' links are to basic understanding for the course. Can you please advise whether or not the Probability of Rare Events lecture is required reading AND if so, why is it not explicit in the syllabus and outline? Thank you in advance for your attention to this broader issue related to allocation of time and preparation for course work. Answer: I agree with you and will think through this issue. I will attempt to combine it with the lecture. It is a required reading, and the material is used in completing the assignments at the end of the lecture. Asked on 2/13/2007 9:09:16 AM and answered on 2/23/2007 9:44:32 AM.
Question: does every conflict have a pareto optimal solution? Answer: Yes, it does (unless it has no solution). Asked on 3/30/2006 4:13:07 AM and answered on 4/2/2006 9:04:19 PM.
Question: In testing conditional independence by drawing causes and effects, how do we ensure that clues are independent of each other when the analysis is based on opinion? Is the model prediction the probability of odds? Are the odds of joining the posterior odds? Answer: If there are multiple causes for the target event, they are by definition conditionally dependent on each other. To solve this problem, you need to (1) look at combinations of causes or (2) describe the cause in other terms that are more like symptoms/effects. A cause is something that precedes the target event and leads to it in short steps. An effect or symptom is the difference between people/situations who have the target event and those who do not. So people who join an HMO may be younger than people who do not. Therefore, age is a symptom or clue for predicting the target event of joining HMOs, but it is not a cause of it. Whenever possible, show the factors as effects and not causes. Asked on 3/6/2006 3:29:30 AM and answered on 3/8/2006 10:46:06 AM.
Question: Where do I begin when trying to predict the probability of an event that HAS occurred, as opposed to one that will or will not? Please point me in the right direction. Answer: An event that has occurred has a probability of 1. It is certain that it has occurred. You can use that event to condition what else might occur. Asked on 2/27/2006 7:19:20 PM and answered on 3/1/2006 7:57:42 AM.
Question: Dr. Alemi, do we need to interview more than one expert for the biweekly project to model uncertainty? The lecture always talks about interviewing more than one expert in order to build a more accurate model. Answer: No, you do not need to do so. In practice you would, but not for the biweekly class projects. Asked on 2/26/2006 12:04:27 AM and answered on 2/26/2006 11:30:59 PM.
Question: When conducting an interview with the expert, am I supposed to ask him/her to identify clues for people without the condition, or just to identify clues for people with the condition? Answer: The prompt matters. You should do both; you will get two different sets of clues, and you should then use both sets in making the prediction. Asked on 2/21/2006 6:59:55 PM and answered on 2/22/2006 7:36:08 PM.
Question: This document is cited in the bibliography. Is it a book or journal? Is "14" the volume number? Beach, H. B. 1975. "Expert judgment about Uncertainty: Bayesian Decision Making in Realistic Settings." Organizational Behavior and Human Performance 14: 10‑59 Answer: Same answer Asked on 6/17/2005 3:02:02 PM and answered on 6/17/2005 10:11:16 PM.
Question: This document is cited in the bibliography. Is it a book or journal? Is "14" the volume number? Beach, H. B. 1975. "Expert judgment about Uncertainty: Bayesian Decision Making in Realistic Settings." Organizational Behavior and Human Performance 14: 10‑59 Answer: It is a journal, and 14 is the volume number. Asked on 6/17/2005 2:58:00 PM and answered on 6/17/2005 10:10:42 PM.
Question: About odds and probabilities (Step 7 in the lecture): If 33% of the employees will join the HMO, how do we express the odds? Odds for joining = p(joining) / [1 - p(joining)] = 0.33 / (1 - 0.33). I get "0" when I plug this into Excel. What's wrong here? Answer: You need to put parentheses around 1-0.33. Asked on 6/16/2005 7:47:12 AM and answered on 6/16/2005 2:52:29 PM.
Question: Just need to understand conditional independence. Answer: This topic is discussed in several different areas including under measuring uncertainty, modeling uncertainty and root cause analysis Asked on 3/14/2005 8:08:02 PM and answered on 3/15/2005 11:09:16 AM.
Question: Is there online class for this lecture? I don't find any narrated Power Point! If so, I need to read more and practice to completely understand Answer: Yes there is and I will post it by end of Monday, sorry for the delay. Asked on 3/6/2005 6:28:50 PM and answered on 3/6/2005 10:30:30 PM.
Question: Please explain conditional independence? Thank you. Answer: This is a very complex issue. You may want to read the section on measuring uncertainty. A is conditionally independent of B if knowing B does not change the probability of A. Similarly, two clues are said to be conditionally independent of each other if, in a population that has the condition, knowing one clue does not change the probability of the other. Asked on 3/5/2005 8:48:28 PM and answered on 3/6/2005 10:18:30 PM.
Question: In your opinion, which method of verifying conditional independence works best? Or does it depend on the circumstance? Answer: It really depends on the availability of data. If you have the data, I would rely on it and use partial correlations to identify large dependencies. If you do not have the data, in my opinion the fastest and easiest way to query the expert is to ask him to draw a causal model and to explore it for possible conditional dependencies. Asked on 2/24/2005 3:07:40 PM and answered on 2/26/2005 1:22:08 PM.
Question: Calculating likelihood ratios is a difficult topic. Answer: It may help you to think of it as prevalence of the clue among the cases with the condition. Review the section on measurement of uncertainty, which gives more details. Asked on 3/23/2004 1:03:08 AM and answered on 3/24/2004 5:28:45 PM.
Question: think I understand Answer: Glad you understand it Asked on 3/16/2004 8:44:39 PM and answered on 8/27/2004 12:50:32 PM.
You can suggest changes or review below suggestions made by others:
Copyright © 1996 Farrokh Alemi, Ph.D. Created on Tuesday, September 17, 1996; most recent revision 09/29/2008. This page is part of the course on Decision Analysis, lecture on Uncertainty. This page is based on a chapter with the same name published by Gustafson DH, Cats-Baril WL, Alemi F in the book Systems to Support Health Policy Analysis: Theory, Model and Uses, Health Administration Press: Ann Arbor, Michigan, 1992.