

CREATE Research Archive, Published Articles Papers, 1-1-1980. Structuring Decision Problems for Decision Analysis. Detlof von Winterfeldt, University of Southern California, winterfe@usc.edu. Follow this and additional works at: http://research.create.usc.edu/published_papers

Recommended Citation: von Winterfeldt, Detlof, "Structuring Decision Problems for Decision Analysis" (1980). Published Articles Papers. Paper 35. http://research.create.usc.edu/published_papers/35

This article is brought to you for free and open access by CREATE Research Archive. It has been accepted for inclusion in Published Articles Papers by an authorized administrator of CREATE Research Archive. For more information, please contact gribben@usc.edu.

Acta Psychologica 45 (1980) 71-93. North-Holland Publishing Company.

STRUCTURING DECISION PROBLEMS FOR DECISION ANALYSIS *

Detlof von WINTERFELDT **
University of Southern California, Los Angeles, CA 90007, USA

Structuring decision problems into a formally acceptable and manageable format is probably the most important step of decision analysis. Since presently no sound methodology for structuring exists, this step is still an art left to the intuition and craftsmanship of the individual analyst. After introducing a general concept of structuring, this paper reviews some recent advances in structuring research. These include taxonomies for problem identification and new tools such as influence diagrams and interpretative structural modeling. Two conclusions emerge from this review: structuring research is still limited to a few hierarchical concepts, and it tends to ignore the substantive problem aspects that delineate a problem in its real-world context. Consequently, structuring research has little to say about distinctions between typical problem classes such as regulation, siting, or budget allocation. As an alternative, the concept of "prototypical decision analytic structures" is introduced. Such structures are developed to meet the substantive characteristics of a specific problem (e.g., siting a specific Liquid Natural Gas plant) but are at the same time general enough to apply to similar problems (industrial facility siting). As an illustration, the development of a prototypical analytic structure for environmental standard setting is described. Finally, some typical problem classes are examined and some requirements for prototypical structures are discussed.

An introduction to problem structuring

Decision analysis can be divided into four steps: structuring the problem; formulating inference and preference models; eliciting probabilities and utilities; and exploring the numerical model results.

* This research was supported by a grant from the Department of Defense and was monitored by the Engineering Psychology Programs of the Office of Naval Research under contract N00014-79-C-0529. While writing this paper, the author discussed the problem of structuring extensively with Helmut Jungermann. The present version owes much to his thought. Please don't take footnote 3 too seriously; it is part of a footnote war between Ralph Keeney and me.

** Presently with the Social Science Research Institute, University of Southern California, University Park, Los Angeles, CA 90007, (213) 741-6955.
Practitioners of decision analysis generally agree that structuring is the most important and difficult step of the analysis. Yet, until recently, decision analytic research has all but ignored structuring, concentrating instead on questions of modeling and elicitation. As a result, structuring was, and to some extent still is, considered the 'art' part of decision analysis. This paper examines some attempts to turn this art into a science.

Trees are the most common decision analytic structures. Decision trees, for example, represent the sequential aspects of a decision problem (see Raiffa 1968; Brown et al. 1974). Other examples are goal trees for the representation of values (Keeney and Raiffa 1976) and event trees for the representation of inferential problem aspects (Kelly and Barclay 1973). In fact, trees so dominate decision analytic structures that structuring is often considered synonymous with building a tree. This paper, however, will adopt a more general notion of decision analytic structuring. According to this notion, structuring is an imaginative and creative process of translating an initially ill-defined problem into a set of well-defined elements, relations, and operations. The basic structuring activities are identifying or generating problem elements (e.g., events, values, actors, decision alternatives) and relating these elements by influence relations, inclusion relations, hierarchical ordering relations, etc. The structuring process seeks to formally represent the environmental (objective) parts of the decision problem and the decision makers' or experts' (subjective) views, opinions, and values. Graphs, maps, functional equations, matrices, trees, physical analogues, flow charts, and Venn diagrams are all possible problem representations. To be useful structures for decision analysis, such representations must facilitate the subsequent steps of modeling, elicitation, and numerical analysis.

Three phases can be distinguished in such a generalized structuring process. In the first phase the problem is identified. The elements which are generated in this phase are the substantive features of the problem: the decision maker(s); the generic classes of alternatives, objectives, and events; individuals or groups affected by the decision; and characteristics of the problem environment. This list is pruned by answering questions such as: what is the purpose of the analysis? For whom is the analysis to be performed? Which alternatives can the decision maker truly control? At this stage only very rough relations between problem elements are constructed. Examples include organizational relations among decision makers, influence relations between classes of actions and events, and rough groupings of objectives. Products of this problem identification step are usually not very formal and are seldom reported in the decision analytic literature. They may be in the form of diagrams, graphs, or ordered lists. Among the few documented examples are Hogarth et al. (1980) for the problem of city planning and Fischer and von Winterfeldt (1978) for the problem of setting environmental standards.

In the second structuring step, an overall analytic structure is developed. The elements generated in this step are possible analytic problem representations.
Besides tree structures, these may include more complex structures previously developed for similar problems, such as screening structures for siting decisions or signal detection structures for medical decision making. Paradigmatic structures of alternative modeling approaches (e.g., systems dynamics or linear programming) which could fit the problem should also be examined at this step [1]. A creative activity in this structuring phase is to relate and combine part structures, e.g., simulation structures with evaluation structures, or decision trees of different actors. From the candidate structures and their combinations an overall structure is selected which is judged most representative of the problem and manageable for further modeling and elicitation. Only a handful of analytic structures have been developed which are more complex than decision trees. Gardiner and Ford (in press) combined simulation and evaluation structures. Keeney (in press) developed a decision analytic structure for the whole process of siting energy facilities. Von Winterfeldt (1978) constructed a generic structure for regulatory decision making.

The third structuring phase coincides with the more traditional and limited notion of structuring. In this step the parts of the overall analytic structure are formalized in detail by refining the problem elements and relations identified in the first step. This includes a detailed construction of decision trees, event trees, and goal trees. Linkages between part structures are established, e.g., between simulation and evaluation structures. Decision makers and groups affected by possible decisions are specified, together with the events or actions linking them. Examples of this structuring step can be found in most decision analytic textbooks.

[1] Although such alternatives to decision analytic structures should be considered, I will ignore them in the remainder of this paper.

This three step structuring process of identifying the problem, developing an analytic structure, and formalizing its detailed content seldom evolves in strict sequence. Instead, the process is recursive, with repeated trials and errors. Often the analyst decides on a specific structure and later finds it either unmanageable for modeling or non-representative of the problem. The recognition that a structure needs refinement often follows the final step of decision analysis, when numerical computations and sensitivity analyses point to places that deserve more detailed analysis. Knowing about the recursive nature of the structuring process, it is good decision analysis practice to spend much effort on structuring and to keep an open mind about possible revisions.

The above characterization of the structuring process will be used as a format to review the structuring literature. First, the use of problem taxonomies for the step of problem identification is examined. Methods to select analytic approaches are then reviewed as possible aids for the second structuring step. Finally, some recent advances in formalizing part structures are discussed. Two conclusions emerged from this review and motivated the subsequent sections of this paper: (1) Although structuring research has much to say about analytic distinctions between decision problems and structures (e.g., whether a problem is multiattributed or not), it has little bearing on substantive problem distinctions (e.g., the difference between a typical regulation problem and a typical investment problem).
(2) Structuring research is still limited to a few, usually hierarchical, concepts and operations. Emphasis is put on simple, operational, and computerized structuring. Little effort is spent on creating more complex combinations of structures that represent real problem classes.

As an alternative, the concept of prototypical decision analytic structures is introduced. Such structures have more substance and complexity than the usual decision trees or goal trees. They are developed to meet the substantive characteristics of a specific problem, but are at the same time general enough to apply to similar problems. As an illustration, IIASA's [2] development of a prototypical decision analytic structure for environmental standard setting will be described. Finally, several typical classes of decision problems will be examined and some requirements for prototypical structures will be discussed.

[2] International Institute for Applied Systems Analysis, Laxenburg, Austria.

Taxonomies for problem identification

The taxonomies described in the following typically classify decision problems by analytic categories (e.g., whether a problem is multiattributed or not), and they attempt to slice the universe of problems into mutually exclusive and exhaustive sets. The purpose of such taxonomies is twofold: to facilitate the identification of an unknown element (e.g., a medical decision problem) with a class of problems (e.g., diagnostic problems); and to aid the process of matching classes in the problem taxonomy (e.g., diagnostic problems) with an analytic approach (e.g., signal detection structures). Thus, by their own aspiration, problem taxonomies should be useful for the early phases of structuring decision problems.

MacCrimmon and Taylor (1975) discuss on a rather general level the relationship between decision problems and solution strategies. Decision problems are classified according to whether they are ill-structured or well-structured, depending on the extent to which the decision maker feels familiar with the initial state of the problem, the terminal state, and the transformations required to reach a desired terminal state. Three main factors contribute to ill-structuredness: uncertainty, complexity, and conflict. For each category MacCrimmon and Taylor discuss a number of solution strategies. These strategies include, for example, reductions of the perception of uncertainty, modeling strategies, information acquisition and processing strategies, and methods for restructuring a problem. Taylor (1974) adds to this classification scheme four basic types of problems: resource specification problems, goal specification problems, creative problems, and well-structured problems (see fig. 1). Problem types are identified by the decision maker's familiarity with the three subparts of the problem. Taylor discusses what types of decision strategies are appropriate for each of these problem categories, for example, brainstorming for creative problems and operations research type solutions for well-structured problems.
Fig. 1. Types of problem structures (Taylor 1974): Types I-IV (resource specification, goal specification, creative, and well-structured problems), distinguished by whether the initial state, the terminal state, and the transformation are familiar, unfamiliar, or vary.

Howell and Burnett (1978) recently developed a taxonomy of tasks and types of events with the intention of assessing cognitive options for processing probabilistic information for each taxonomy element. Uncertain events are classified according to three dichotomies: frequentistic vs. not frequentistic; known data generator vs. unknown data generator; and process external vs. internal to the observer. Task characteristics are complexity, setting (e.g., real life vs. laboratory), span of events, and response mode characteristics. For each event/task combination Howell and Burnett discuss how different cognitive processes may be operating when making probability judgments. For example, in estimating frequentistic events with unknown data generators, availability heuristics may be operative.

Brown and Ulvila (1977) present the most comprehensive attempt yet to classify decision problems. Their taxonomy includes well over 100 possible characteristics. Decision problems are defined according to their substance and the decision process involved. Substantive taxonomic characteristics are mainly derived from the analytic properties of the situation, i.e., amount and type of uncertainty, amount and types of stakes, and types of alternatives. Only a few elements of this part of the taxonomy can be directly related to problem content, e.g., current vs. contingent decision, operating vs. information act. The taxonomic elements of the decision process refer mainly to the constraints of the decision maker, e.g., reaction time and available resources. The taxonomy by Brown and Ulvila incorporates most previous problem taxonomies which tried to define decision problems by categories derived from decision analysis. These include taxonomies by von Winterfeldt and Fischer (1975), Miller et al. (1976), and Vlek and Wagenaar (1979).

To be useful for problem identification, the above taxonomies should lead an analyst to a class of problems which has characteristics similar to the decision problem under investigation. Unfortunately, the existing problem taxonomies are ill-suited for this purpose, because they use mainly analytic categories to distinguish problems. Such categories are derivatives of decision analytic models and concepts rather than characteristics of real world problems. For example, the analytic categorization of problems into risky vs. riskless classes is based on the distinction between riskless and risky preference models. Analytic categories create more or less empty classes with little or no correspondence to real problems. For example, none of the above taxonomies allows distinguishing between a typical siting problem and a typical regulation problem in a meaningful way. It appears that substantive rather than analytic characteristics identify real problems. Substantive characteristics are generalized content features of the problems belonging to the respective class. For example, a substantive feature of regulation problems is the involvement of three generic decision makers: the regulator, the regulated, and the beneficiary of regulation.
To become useful for problem identification, taxonomies need to include such substantive problem characteristics.

Methods for selecting an overall analytic structure

Most taxonomies include some ideas or principles for matching problems with analytic structures or models. MacCrimmon and Taylor attempted to match their basic types of decision problems with cognitive solution strategies, Howell and Burnett speculated on which cognitive processes may be invoked by typical task/event classes in probability assessment, and von Winterfeldt and Fischer identified appropriate multiattribute utility models for each problem category. But in none of these papers are explicit matching principles or criteria for the goodness of a match given. Rather, matches are created on the basis of a priori reasoning about the appropriateness of a strategy, model, or cognitive process for a particular class of decision problems.

Brown and Ulvila (1977) attempted to make this selection process more explicit by creating an analytic taxonomy in correspondence with the problem taxonomy. The analytic taxonomy classifies the main options an analyst may have in structuring and modeling a decision problem. It includes factors such as user's options (amount to be expended on the analysis), input structure (type of uncertainty), and elicitation techniques (type of probability elicitation). These categories identify options both at a general level (optimization, simulation, and Bayesian inference models) and as special techniques (e.g., reference gambles or the Delphi technique). To match problems with analytic approaches, Brown and Ulvila created a third taxonomy, called the "performance measure taxonomy". This taxonomy evaluates analytic approaches on attributes like "time and cost measures", "quality of the option generation process", "quality of communication or implementation", etc. Different problem classes have different priority profiles on the performance measure categories. Similarly, different analytic approaches have different scoring profiles on the performance measures. The analytic approach chosen should perform well on the priority needs of a particular problem. Brown and Ulvila discuss the 'goodness of fit' of several analytic approaches to a number of decision situations in terms of these performance measures. For example, they argue that a contingency type analysis (an element of the analytic taxonomy) is appropriate for decision problems that occur repeatedly and require a fast response (elements of the decision situation taxonomy), because contingency type analysis allows fast calculations (an element of the performance measure taxonomy).

Several authors have developed logical selection schemes which can identify an appropriate analytic approach or model based on selected problem features. MacCrimmon (1973), for example, developed a sequential method for selecting an appropriate approach for multiattribute evaluation. The first question to be answered is whether the purpose of the analysis is normative or descriptive. Further questions include whether the type of problem has occurred frequently before, whether there are multiple decision makers with conflicting preferences, and whether alternatives are available or have to be designed. All questions are of the yes-no type and together create a flow chart for selecting among 19 possible approaches.
For example, if the purpose of the analysis is normative, if direct assessments of preferences (e.g., ratings) are valid and reliable, and if the type of problem has frequently occurred before, regression models or ANOVA type approaches would be appropriate.

Johnson and Huber (1977) and Kneppreth et al. (1977) discuss a three step procedure for selecting a multiattribute utility assessment approach. In the first step, the characteristics of the multiattribute problem are listed, including discreteness vs. continuity of dimensions, uncertainty vs. no uncertainty, and independence considerations. In the second step the evaluation situation is characterized on the basis of judgments about task complexity, amount of training required for assessment, face validity required, assessment time, accuracy, and flexibility. In the third and final step the profile describing the evaluation problem is compared with profiles characterizing five different generic assessment models or methods. The technique that best matches the situation profile is selected. For example, lottery assessment methods and models would be appropriate if the evaluation problem involves uncertainties, does not require high face validity, and allows for a good amount of training of the assessor.

Both the taxonomy-oriented and the sequential selection methods for matching problems and analysis suffer from certain drawbacks. As stated earlier, problem characteristics used in taxonomies typically neglect substantive aspects of the decision problem. Consequently, an analyst may choose an analytic approach based on a match with a spuriously defined problem class. For example, when facing a medical diagnosis problem, an analyst may find that some detailed substantive characteristics of the problem (e.g., the way doctors process information, the physical format of information, etc.) suggest a signal detection structure. Yet, as far as I can see, none of the above matching processes would directly lead to such a structure.

Advances in formalizing structures

Influence diagrams are a recent development in decision analytic structuring (see Miller et al. 1976). Influence diagrams draw a graphical picture of the way variables in a decision model influence each other, without superimposing any hierarchical structure. For example, a decision variable (price) may 'influence' a state variable (demand) and thus 'influence' a final state (successful introduction of a new product into the market). Influence diagrams have been conceived mainly as an initial pre-structuring tool to create a cognitive map of a decision maker's or expert's view of a decision problem. At the present stage influence diagrams are turned into hierarchical structures and analyzed with traditional tools, but research is now underway at SRI International on the use of influence diagrams directly in EV or EU computations.

Another generalization of the tree approach is Interpretative Structural Modeling (ISM), developed, for example, in Warfield (1974) and Sage (1977). In interpretative structural modeling, matrix and graph theory notions are used to formally represent a decision problem. First, all elements of the problem are listed and an element by element matrix is constructed.
The structure of the relationships between elements is then constructed by filling in the matrix with numerical judgments reflecting the strength of each relationship, or by simply making 0-1 judgments about the existence or non-existence of a relation. Computer programs can then be used to convert the matrix into a graph or a tree that represents the problem. Influence diagrams, value trees, decision trees, and inference trees can all be thought of as special cases of ISM. For example, in value tree construction the analyst may begin with a rather arbitrary collection of value relevant aspects, attributes, outcomes, targets, and objectives. Using alternative semantic labels for the relationships between these elements (e.g., 'similar', 'part of'), an element by element matrix can be filled. Finally, the analyst can explore whether a particular relational structure leads to a useful goal tree structure.

Besides these generalizations of traditional hierarchical structuring tools, several refinements of special structuring techniques have been suggested, particularly for evaluation problems. Keeney and Raiffa (1976) devoted a whole chapter to the problem of structuring a value tree. They suggest a strategy of constructing a value tree by beginning with general objectives and disaggregating by using a pure explication logic (i.e., what is meant by this general objective?). This approach has previously been advocated by Miller (1970) and others. Mannheim and Hall (1967) suggest in addition the possibility of disaggregating general objectives according to a means-ends logic (how can this general objective be achieved?). Other disaggregation logics (problem oriented, process oriented, etc.) could be analyzed in the ISM context. There are a number of papers that suggest more empirical or synthetic approaches to value tree construction. Of particular interest is a repertory grid technique described by Humphreys and Humphreys (1975) and Humphreys and Wisudha (1979). In this procedure similarity and dissimilarity judgments are used to span the value dimensions of alternatives.

Several computer aids have been developed recently to aid decision makers or experts in structuring decision problems. Some of these are discussed in Kelly (1978) and Humphreys (1980). These aids typically rely on empty structuring concepts (decision trees, value trees, inference trees, or influence diagrams), and they guide the decision maker or expert in the analytic formulation of his or her problem. Special aids are OPINT for moderately complex problems which can easily be formulated into a decision tree or matrix structure, the decision triangle aid for sequential decision problems with a focus on changing probabilities, and EVAL for multiattribute utility problems (Kelly 1978). In addition to these structuring and assessment aids, there are now computerized aids under development exploiting the idea of influence diagrams and fuzzy set theory.

Influence diagrams, ISM, and computer aids are indicative of a trend in structuring research and perhaps in decision analysis as a whole. This trend turns the fundamentally empty structures of decision trees, goal trees, and inference trees into more operational, computerized elicitation tools, without adding problem substance. There are clear advantages to such an approach: a wide range of applicability, flexibility, user involvement, speed, limited training, and feedback, to name only a few. It also reduces the demands on the decision analyst's time.
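The matrix-to-tree conversion that ISM performs can be illustrated with a small, self-contained sketch in Python. The element list, the 'part of' judgments, and the payload below are hypothetical illustrations in the spirit of Warfield's procedure (a reflexive 0-1 relation matrix, a transitive closure, and then a level partitioning that repeatedly peels off elements whose reachability set lies inside their antecedent set); none of it is code or data from the paper.

    # A minimal ISM sketch: 0-1 relation matrix -> transitive closure -> hierarchy levels.
    # Elements and judged relations are hypothetical placeholders.
    from itertools import product

    elements = ["overall value", "cost", "noise disturbance", "health effects",
                "investment cost", "operating cost"]

    # 0-1 judgments: (a, b) means "a is part of b"
    judged_pairs = {
        ("cost", "overall value"),
        ("noise disturbance", "overall value"),
        ("health effects", "overall value"),
        ("investment cost", "cost"),
        ("operating cost", "cost"),
    }

    n = len(elements)
    idx = {e: k for k, e in enumerate(elements)}
    m = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # reflexive matrix
    for a, b in judged_pairs:
        m[idx[a]][idx[b]] = 1

    # Transitive closure (Warshall): if a relates to b and b to c, then a relates to c.
    for k, i, j in product(range(n), repeat=3):
        if m[i][k] and m[k][j]:
            m[i][j] = 1

    # Level partitioning: an element joins the current level when its reachability
    # set equals the intersection of its reachability and antecedent sets.
    remaining = set(range(n))
    levels = []
    while remaining:
        level = []
        for i in remaining:
            reach = {j for j in remaining if m[i][j]}
            ante = {j for j in remaining if m[j][i]}
            if reach == (reach & ante):
                level.append(i)
        levels.append([elements[i] for i in level])
        remaining -= set(level)

    for depth, names in enumerate(levels):
        print(f"level {depth}: {names}")

Run on these hypothetical judgments, the sketch recovers a three-level goal tree (overall value; cost, disturbance, and health effects; investment and operating cost), which is exactly the kind of structure an analyst would then inspect for usefulness.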
There is, of course, the other extreme: the prestructured, precanned, problem specific version of decision analysis applicable to essentially identical situations. A military example is Decisions and Designs Inc.'s SURVAV model (Kelly 1978), which applies to routing decisions for ships to avoid detection by satellites. Such a structure and model can routinely be implemented with almost no additional training. In turn, it gives up generalizability.

Neither extreme is totally satisfactory. Empty general structures must consider each problem from scratch. Substantive specific structures have limited generalizability. The middleground of problem driven but still generalizable structures and models needs to be filled. Problem taxonomies may help here by identifying generic classes of problems. But as was discussed earlier, existing taxonomies are ill equipped for this task since they neglect substantive problem features. The question of filling in the middleground between 'too general' structures and 'too specific' structures thus becomes a question of searching for generalizable content features of problems that identify generic classes of decisions. These generic classes can then be modelled and structured by "prototypical decision analytic structures" which are specific enough to match the generalizable problem features and general enough to transfer easily to other problems of the same class. At the present stage of research this search process will necessarily be inductive, because too little is known about problem substance to develop a problem driven taxonomy and matching analytic structures. An inductive research strategy may attempt to crystallize the generalizable features of a specific application, or compare a number of similar applications (e.g., siting problems), or simply use a phenomenological approach to delineate problem classes in a specific application area (e.g., regulation). In the following two sections some possibilities for developing prototypical decision analytic structures will be discussed.

An example of developing a prototypical structure

The following example describes the structuring process in the development of a decision aiding system for environmental standard setting and regulation. The work was performed as part of IIASA's (see fn. 2) standard setting project (see von Winterfeldt et al. 1978), which had both descriptive and normative intentions (how do regulators presently set standards? how can analytic models help in the standard setting process?). Because of this wide approach of the standard setting project, the research group was not forced to produce workable models for specific decision problems quickly. Consequently, its members could afford, and were encouraged, to spend a substantial amount of time on structuring. Inputs into the structuring process were:

- retrospective case studies of specific standard setting processes of environmental protection agencies;
- previous models suggested for standard setting;
- field studies of two ongoing standard setting processes (oil pollution and noise standards).

In addition, the structuring process benefited much from continuing discussions with leading members of environmental agencies in the United Kingdom, Norway, Japan, and the United States. Although the structuring effort was geared towards decision analysis, substantial inputs were given by an environmental economist (D. Fischer), an environmental modeller (S. Ikeda), a game theorist (E. Hopfinger), and two physicists (W. Häfele and R. Avenhaus), all members of IIASA's standard setting research team.

The overall question was: how can standard setting problems best be formulated into a decision analytic format and model such that the model is specific enough to capture the main features of a particular standard setting problem and, at the same time, general enough to apply to a variety of such problems? In other words, what is a prototypical decision analytic structure for standard setting?

Since the regulator or regulatory agency was presumed to be the main client of such models, the initial structuring focussed on regulatory alternatives and objectives. In one attempt a wide but shallow alternative tree was conceived which included a variety of regulatory options ranging from emission standards and land use schemes to direct interventions. An example for noise pollution standards is presented in fig. 2. Coupled with an appropriate tree of regulatory objectives, a decision analysis could conceivably be performed by evaluating each alternative with a simple MAU procedure. A possible value tree is presented in fig. 3 for the same noise pollution problem.

Fig. 2. Regulatory alternatives for Shinkansen noise pollution (at-source rules, routing schemes, and land use schemes, together with implementation and measurement instruments).

Fig. 3. Regulatory objectives for noise pollution control (minimize disturbance of residential life, minimize health effects, minimize cost, maximize service, consistency of regulation, and political objectives).

This simple traditional structure was rejected since regulators seldom have to evaluate such a wide range of alternatives and because it does not capture the interaction between the regulators and the regulated. Also, regulators are much concerned about monitoring and implementation of standards, an aspect which a simple MAU structure does not address. The second structure was a narrow but deep decision tree, exemplified in fig. 4 for an oil pollution problem. In addition to the regulator's alternatives, this tree includes responses of the industry to standards, possible detection of standards violations, and subsequent sanctions. This structure was geared at fine tuning the regulators' definitions of the standard level (maximum emission, etc.) and monitoring and sanction schemes, and at assessing environmental impacts. The structure is specific in terms of the regulatory alternatives. But by considering industry responses as random events, and by leaving out responses of environmental groups, it fails to address a major concern of regulatory decision making.
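To make the "simple MAU procedure" mentioned for the first, wide but shallow structure concrete, the following sketch scores each regulatory alternative with a weighted additive multiattribute utility model. The alternatives, attributes, weights, and 0-100 single-attribute scores are hypothetical placeholders, not values from the IIASA study, and the additive form presumes the usual preferential independence conditions.

    # A minimal additive MAU sketch with hypothetical weights and scores.
    weights = {                      # importance weights over the value tree's bottom level
        "disturbance": 0.30,
        "health": 0.30,
        "regulation_cost": 0.20,
        "service": 0.20,
    }

    # single-attribute utilities of each regulatory alternative, scaled 0-100
    scores = {
        "emission standard": {"disturbance": 70, "health": 75, "regulation_cost": 50, "service": 80},
        "routing scheme":    {"disturbance": 60, "health": 60, "regulation_cost": 70, "service": 60},
        "land use scheme":   {"disturbance": 80, "health": 70, "regulation_cost": 30, "service": 90},
    }

    def additive_mau(utilities, weights):
        """Weighted additive aggregation: u(a) = sum over attributes of w_k * u_k(a)."""
        return sum(weights[k] * utilities[k] for k in weights)

    ranked = sorted(scores, key=lambda a: additive_mau(scores[a], weights), reverse=True)
    for alt in ranked:
        print(f"{alt:18s} overall utility = {additive_mau(scores[alt], weights):5.1f}")

The paper's objection applies exactly here: such a score ranks alternatives but says nothing about how the regulated industry will respond or how compliance will be monitored, which is one reason this first structure was rejected.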
The third structure was a three decision maker model, in which the regulator, the industry/developer, and the environmentalists/impactees are represented by separate decision analytic models (see von Winterfeldt 1978). A signal detection type model links the regulator's decision through possible detections of violations and sanction schemes to the industry model. An event tree of pollution generating events and effects links the developer's decisions to the impactee model (see fig. 5).

Fig. 4. Segment of a decision tree for setting oil pollution standards: definitions of oil emission standards, standard level, pollution equipment decisions by the oil industry, equipment performance, detection states, penalties, and final effects on the environment (pollution levels), the industry (cost), and the regulator (political). A standard is usually defined by the number of samples to be taken, how many samples form an average, and how many exemptions from a violation are allowed. For example, the EPA average definition is as follows: four samples are to be taken daily, and the average of the four samples may not exceed the standard level (e.g., 50 ppm) more than twice during any consecutive 30 day period.

Fig. 5. Schematic representation of the regulator-developer-impactee model. r: variable standard of the regulator; d(r): expected utility maximizing treatment decision of the developer; a[d(r)]: expected utility maximizing decision of the impactees.

The model can be run as follows: the regulator's alternatives are left variable. The developer's response is optimized in terms of minimizing expected investment, operation, and detection costs, or maximizing equivalent expected utilities. Finally, the impactees are assumed to maximize their expected utility conditional on the regulator's and the developer's decisions. At this point the model stops. The structure only provides for a Pareto optimality analysis of the three expected utilities accruing to the generic decision units.

This model allows some detailed analyses of the probability and value aspects of the standard setting problem, and it proved feasible in a pilot application to chronic oil discharge standards (see von Winterfeldt et al. 1978). Regulators who were presented with this model considered it meaningful, and it offered several insights into the standard setting problematique. Yet there was a feeling among analysts and regulators that the static character of the model and the lack of feedback loops required improvement.
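Before turning to the game theoretic extension, the nested logic of the regulator-developer-impactee structure can be sketched in a few lines. All numbers below (costs, emissions, detection probabilities, sanctions, and utilities) are invented placeholders, not results from the oil discharge pilot study; the point is only the order of computation: for each candidate standard the developer picks the expected-cost-minimizing treatment, the impactees' utility is evaluated conditional on both choices, and the analysis ends with a comparison of the three parties' expected utilities (e.g., a Pareto screen).

    # Illustrative sketch of the regulator-developer-impactee structure.
    # All parameters are hypothetical placeholders.
    STANDARDS = [30, 50, 100]          # candidate standard levels (ppm)
    TREATMENTS = {                     # developer options: (annual cost, emission ppm)
        "no equipment":      (0.0, 120),
        "gravity separator": (2.0, 60),
        "gas flotation":     (5.0, 25),
    }
    SANCTION = 15.0                    # expected penalty per detected violation
    P_DETECT = 0.4                     # probability that a violation is detected

    def developer_cost(standard, treatment):
        """Expected cost to the developer: equipment plus expected sanctions."""
        cost, emission = TREATMENTS[treatment]
        violating = emission > standard
        return cost + (P_DETECT * SANCTION if violating else 0.0)

    def impactee_utility(treatment):
        """Impactee (environmental) utility decreases with realized emissions."""
        _, emission = TREATMENTS[treatment]
        return -emission

    def regulator_utility(standard, treatment):
        """Regulator values low emissions and is penalized for standing violations."""
        _, emission = TREATMENTS[treatment]
        return -emission - (5.0 if emission > standard else 0.0)

    outcomes = []
    for s in STANDARDS:
        # the developer responds by minimizing expected cost given the standard
        d = min(TREATMENTS, key=lambda t: developer_cost(s, t))
        outcomes.append((s, d, -developer_cost(s, d), impactee_utility(d),
                         regulator_utility(s, d)))

    # Pareto screen over the (developer, impactee, regulator) expected utilities.
    def dominated(x, others):
        return any(all(o >= v for o, v in zip(other[2:], x[2:])) and other[2:] != x[2:]
                   for other in others)

    for out in outcomes:
        flag = "" if not dominated(out, outcomes) else "  (dominated)"
        print(f"standard {out[0]:3d} ppm -> developer chooses {out[1]:17s}; "
              f"utilities (dev, imp, reg) = {out[2]:.1f}, {out[3]:.1f}, {out[4]:.1f}{flag}")

In the actual study, of course, the developer's and impactees' sides were full expected utility models with event trees over pollution generating events rather than the toy payoffs used here.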
The final structure considered was a game theoretic extension of the three decision maker model. The structure of the game theoretic model is presented in fig. 6. In this model the standard setting process is explicitly assumed to be dynamic, and all feedbacks are considered. In addition, transitions from one stage to another are probabilistic. The model was applied in a seven stage version in a pilot study of noise standard setting for rapid trains (Hopfinger and von Winterfeldt 1979). The game theoretic model overcomes the criticisms of the static decision analytic model, but in turn it gives up the possibility of fine tuning and detailed modeling of trade-offs and probabilities. Considering such aspects in detail would have made the running of the model impossible. Therefore, relatively arbitrary (linear) utility functions and simple structures of transition probabilities have to be assumed.

Fig. 6. Game theoretic structure of the regulation problem.

Although the appropriateness of the different structures was not explicitly addressed in this study, two main criteria come to mind when judging structures: representativeness of the problem and manageability for further analysis. Each of these criteria can be further broken down. For example, representativeness includes judgments about the adequacy of the structural detail and coverage of important problem aspects. The overall conclusions of many discussions with regulators, analysts, and industry representatives, and the results of the pilot applications, led us to accept the third structure as a prototypical decision analytic structure for relatively routine emission standard setting problems. The model is presently considered for further applications in emission standard setting, and an extension to safety standards will be explored.

Towards a kit of prototypical decision analytical structures

Not every decision analysis can afford to be as broad and time consuming as the previous study. Decision analysis usually has a much more specific orientation towards producing a decision rather than developing a generic structure. Still, I think it would be helpful if analysts were to make an effort to address the question of generalizability when modeling a specific problem, and to extract those features of the problem and the model that are transferable. Such an inductive approach could be coupled with more research oriented efforts and with examinations of similarities among past applications. Such an approach may eventually fill the middleground between too specific and too general models and structures. But rather than filling this middleground with analytically specific but substantively empty structures and models, it would be filled with prototypical structures and models such as the above regulation model, more refined signal detection models, siting models, etc. In the following, four typical classes of decision problems (siting, contingency planning, budget allocation, and regulation) are examined and requirements for prototypical structures for these problems are discussed.

Facility siting clearly is a typical decision problem. Keeney and other decision analysts have investigated this problem in much detail and in a variety of contexts (see the examples in Keeney and Raiffa 1976). A typical aspect of such siting problems is sequential screening from candidate areas to possible sites, to a preferred set, to final site specific evaluations. Another aspect is the multiobjective nature, with emphasis on generic classes of objectives: investment and operating cost, economic benefits, environmental impacts, social impacts, and political considerations. Also, the process of organizing, collecting, and evaluating information is similar in many siting decisions. Thus, it should be possible to develop a prototypical structure for facility siting decisions simply by assembling the generalizable features of past applications [3].

[3] I believe that Keeney's forthcoming book on siting energy facilities is a major step in that direction. Of course, it could also be a step in the opposite direction, or in no direction at all (see also the first asterisked footnote at the beginning of the article).
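A prototypical siting structure of the kind suggested above would couple exactly these two recurring features: a screening pass that prunes candidate areas against exclusion criteria, followed by a multiobjective evaluation of the surviving sites. The sketch below is a hypothetical illustration of that two-stage logic; the candidate sites, constraints, weights, and scores are invented, and a real siting study would iterate both stages with far richer models.

    # Hypothetical two-stage siting sketch: screening, then multiobjective scoring.
    candidates = {
        "site A": {"seismic_risk": 0.2, "cost": 55, "env_impact": 40, "benefit": 70},
        "site B": {"seismic_risk": 0.7, "cost": 40, "env_impact": 30, "benefit": 80},
        "site C": {"seismic_risk": 0.1, "cost": 60, "env_impact": 20, "benefit": 65},
    }

    # Stage 1: screening against exclusion constraints (e.g., unacceptable risk or cost).
    def passes_screen(attrs):
        return attrs["seismic_risk"] <= 0.5 and attrs["cost"] <= 65

    shortlist = {name: a for name, a in candidates.items() if passes_screen(a)}

    # Stage 2: weighted multiobjective score over the preferred set
    # (cost and impact enter negatively, benefits positively).
    weights = {"cost": -0.4, "env_impact": -0.3, "benefit": 0.3}

    def site_score(attrs):
        return sum(w * attrs[k] for k, w in weights.items())

    for name in sorted(shortlist, key=lambda n: site_score(shortlist[n]), reverse=True):
        print(f"{name}: score = {site_score(shortlist[name]):.1f}")

A fuller prototypical structure would also standardize how information is organized, collected, and evaluated at each stage, as noted above.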
Contingency planning is another recurring and typical problem. Decisions and Designs Inc. addressed this problem in the military context, but it also applies to planning for actions in the case of disasters such as Liquid Natural Gas plant explosions or blowouts from oil platforms. Substantive aspects that are characteristic of contingency planning are: strong central control of executive organs; numerous decisions that have to be made simultaneously; major events that can drastically change the focus of the problem; no cost or low cost information that comes in rapidly; and organizational problems that may impede information flows and actions. Although, at first glance, decision trees seem to be a natural model for contingency planning, a prototypical decision model would require modifying a strictly sequential approach to accommodate these aspects. For example, the model should be flexible enough to allow for the 'unforeseeable' (a rapid capacity to change the model structure), it should have rapid information updating facilities without overstressing the value of information (since most information is free), and it should attend to fine tuning of simultaneous actions and information interlinkages.

Budget allocation to competing programs is another typical problem. In many such problems different programs attempt to pursue similar objectives, and program mix and balance have to be considered besides the direct benefits of single programs. Another characteristic of budgeting decisions is the continuous nature of the decision variable and the constraint of the total budget. MAU looks like a natural structure for budget allocation decisions since it can handle the program evaluation aspect (see Edwards et al. 1976). But neither the balance issue nor the constrained and continuous characteristics of the budget are appropriately addressed by MAU. A prototypical decision analytic structure would model an evaluation of the budget apportionment, or of the mix of programs funded at particular levels. Such a structure would perhaps exploit dependencies or independencies among programs, much like the independence assumptions for preferences.
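The point about evaluating the apportionment rather than single programs can be illustrated with a toy enumeration. The programs, benefit curves, budget, and the crude 'balance' bonus below are hypothetical placeholders; a real prototypical structure would elicit these value functions and exploit independence conditions among programs rather than brute-force enumeration.

    # Hypothetical sketch: scoring whole budget apportionments (mixes of funding levels).
    from itertools import product

    BUDGET = 10                                   # total budget units
    LEVELS = [0, 2, 4, 6, 8, 10]                  # feasible funding levels per program
    PROGRAMS = ["monitoring", "enforcement", "research"]

    def benefit(program, level):
        """Diminishing-returns benefit of funding a program at a given level."""
        rates = {"monitoring": 3.0, "enforcement": 2.5, "research": 2.0}
        return rates[program] * (level ** 0.5)

    def mix_value(levels):
        base = sum(benefit(p, x) for p, x in zip(PROGRAMS, levels))
        balance_bonus = 2.0 if all(x > 0 for x in levels) else 0.0   # crude "balance" term
        return base + balance_bonus

    feasible = [m for m in product(LEVELS, repeat=len(PROGRAMS)) if sum(m) <= BUDGET]
    best = max(feasible, key=mix_value)
    print("best apportionment:", dict(zip(PROGRAMS, best)),
          "value =", round(mix_value(best), 2))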
A decision analytic structure for regulation in general can build on the main features of the standard setting model. This list could be extended to include private investment decisions, product mix selection, resource development, diagnostic problems, etc. But the four examples hopefully re sufficient to demonstrate how prototypical decision analytic structuring can be approached in general. In my opinion, such an approach to structuring could be at least as useful for the implementation of decision analysis as computerization of decision models. Besides the technical advantages of trahsferability, prototypical decision analytic structures would serve to show that decision analysts are truly concerned about problems. Today decision analysis books have chapters such as ‘simple decisions under uncertainty’ and ‘multiattribute evaluation problems’. I am looking forward to chapters such as ‘siting industrial acilities’, ‘pollution control management’, an d ‘contingency planning’. References Brown, R. V. and J. W. Ulvila, 1977. Selecting analytic approaches for decision situations. (Revised edition. ) Vol. I: Overview of the methodology. Technical report no. TR77-7-25, Decisions and Designs, Inc. , McLean, VA. Brown, R. V. , A. S. Kahr and C. Peterson, 1974. Decision analysis for the manager. New York: Holt, Rinehart, and Winston. Edwards, W. , M. Guttentag and K. Snapper, 1976. A decision-theoretic approach to evaluation research. In: E. L. Streuning and M. Guttentag (eds. ), Handbook of evaluation research, I. London: Sage. Fischer, D. W. and D. von Winterfeldt, 1978. Setting standards for chronic oil discharges in the North Sea. Journal of Environmental Management 7, 177-199. Gardiner, P. C. and A. Ford, in press. A merger of simulation and evaluation for applied policy research in social systems. In: K. Snapper (ed. ), Practical evaluation: case studies in simplifying complex decision problems. Washington, DC: Information Resource Press. Hogarth, R. M. , C. Michaud and J. -L. Mery, 1980. Decision behavior in urban development: a methodological approach and substantive considerations. Acta Psychologica 45, 95-117. Hiipfmger, E. and R. Avenhaus, 1978. A game theoretic framework for . dynamic standard setting procedures. IIASA-RM-78. International Institute for Applied Systems Analysis, Laxenburg, Austria. 92 D. von Winterfeldt /Structuring decision problems Hopfinger, E. and D. von Winterfeldt, 1979. A dynamic model for setting railway noise standards. In: 0. Moeschlin and D. Pallaschke (eds. ), Game theory and related topics. Amsterdam: North-Holland. pp. 59-69. Howell, W. C. and S. A. Burnett, 1978. Uncertainty measurement: a cognitrve taxonomy. Organizational Behavior and Human Performance 22,45-68. Humphreys, P. C. , 1980. Decision aids: aiding decisions. In: L. Sjoberg, T. Tyszka and J. A. Wise (eds), Decision analyses and decision processes, 1. Lund: Doxa (in press). Humphreys, P. C. and A. R. Humphreys, 1975. An investigation of subjective preference orderings for multiattributed alternatives. In: D. Wendt and C. Vlek (eds. ), Utility, probability, and human decision making. Dordrecht, Holland: Reidel, pp. 119-133. Humphreys, P. C. and A. Wisudha, 1979. MAUD – an interactive computer program for the structuring, decomposition and recomposition of preferences between multiattributed alternatives. Technical report 79-2, Decision Analysis Unit, Brunel University, Uxbridge, England. Johnson, E. M. and G. P. Huber, 1977. The technology of utility assessment. 
IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-7, 5.
Keeney, R. L., in press. Siting of energy facilities. New York: Academic Press.
Keeney, R. L. and H. Raiffa, 1976. Decisions with multiple objectives: preferences and value tradeoffs. New York: Wiley.
Kelly, C. W., III, 1978. Decision aids: engineering science and clinical art. Technical report, Decisions and Designs, Inc., McLean, VA.
Kelly, C. and S. Barclay, 1973. A general Bayesian model for hierarchical inference. Organizational Behavior and Human Performance 10, 388-403.
Kneppreth, N. P., D. H. Hoessel, D. H. Gustafson and E. M. Johnson, 1977. A strategy for selecting a worth assessment technique. Technical paper 280, U.S. Army Research Institute for the Behavioral and Social Sciences, Arlington, VA.
MacCrimmon, K. R., 1973. An overview of multiple criteria decision making. In: J. L. Cochrane and M. Zeleny (eds.), Multiple criteria decision making. Columbia, SC: The University of South Carolina Press. pp. 18-44.
MacCrimmon, K. R. and R. N. Taylor, 1975. Problem solving and decision making. In: M. C. Dunnette (ed.), Handbook of industrial and organizational psychology. New York: Rand McNally.
Mannheim, M. L. and F. Hall, 1967. Abstract representation of goals: a method for making decisions in complex problems. In: Transportation, a service. Proceedings of the Sesquicentennial Forum, New York Academy of Sciences - American Society of Mechanical Engineers, New York.
Miller, J. R., 1970. Professional decision making: a procedure for evaluating complex alternatives. New York: Praeger.
Miller, A. C., M. W. Merkhofer, R. A. Howard, J. E. Matheson and T. R. Rice, 1976. Development of automated aids for decision analysis. Technical report, Stanford Research Institute, Menlo Park, CA.
Raiffa, H., 1968. Decision analysis. Reading, MA: Addison-Wesley.
Sage, A., 1977. Methodology for large scale systems. New York: McGraw-Hill.
Taylor, R. N., 1974. Nature of problem ill-structuredness: implications for problem formulation and solution. Decision Sciences 5, 632-643.
Vlek, C. and W. A. Wagenaar, 1979. Judgment and decision under uncertainty. In: J. A. Michon, E. G. Eijkman and L. F. W. DeKlerk (eds.), Handbook of psychonomics, II. Amsterdam: North-Holland. pp. 253-345.
Warfield, J., 1974. Structuring complex systems. Battelle Memorial Institute Monograph no. 4.
Winterfeldt, D. von, 1978. A decision aiding system for improving the environmental standard setting process. In: K. Chikocki and A. Straszak (eds.), Systems analysis applications to complex programs. Oxford: Pergamon Press. pp. 119-124.
Winterfeldt, D. von and D. W. Fischer, 1975. Multiattribute utility: models and scaling procedures. In: D. Wendt and C. Vlek (eds.), Utility, probability, and human decision making. Dordrecht, Holland: Reidel. pp. 47-86.
Winterfeldt, D. von, R. Avenhaus, W. Häfele and E. Hopfinger, 1978. Procedures for the establishment of standards. IIASA-AR-78-A, B, C. International Institute for Applied Systems Analysis, Laxenburg, Austria.
