The Broadening of Failure Rate Distributions in Risk Analysis: How Good Are the Experts?
In: Risk analysis: an international journal, Volume 5, Issue 2, pp. 89-91
ISSN: 1539-6924
23 results
In: FINANA-D-23-00335
SSRN
In: Risk analysis: an international journal, Volume 24, Issue 4, pp. 947-948
ISSN: 1539-6924
In: Risk analysis: an international journal, Volume 24, Issue 3, pp. 515-520
ISSN: 1539-6924
This article discusses the use of quantitative risk assessment (QRA) in decision making regarding the safety of complex technological systems. The insights gained by QRA are compared with those from traditional safety methods, and it is argued that the two approaches complement each other. Peer review is argued to be an essential part of the QRA process. The importance of risk-informed rather than risk-based decision making is emphasized. Engineering insights derived from QRAs are always used in combination with traditional safety requirements, and it is in this context that they should be reviewed and critiqued. Examples from applications in nuclear power, space systems, and a chemical agent incinerator demonstrate the practical benefits of QRA. Finally, several common criticisms raised against QRA are addressed.
In: Risk analysis: an international journal, Volume 19, Issue 1, pp. 23-32
ISSN: 1539-6924
As the use of digital computers for instrumentation and control of safety-critical systems has increased, there has been a growing debate over whether probabilistic risk assessment techniques can be applied to these systems. This debate has centered on whether software failures can be modeled probabilistically. This paper describes a "context-based" approach to software risk assessment that explicitly recognizes that the behavior of software is not probabilistic. The perceived uncertainty in its behavior stems from both the input to the software and the application and environment in which the software operates. Failures occur when the software encounters some context for which it was not properly designed, as opposed to the software simply failing "randomly." The paper elaborates on the concept of "error-forcing context" as it applies to software and illustrates a methodology that uses event trees, fault trees, and the Dynamic Flowgraph Methodology (DFM) to identify error-forcing contexts for software in the form of fault tree prime implicants.
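The prime-implicant idea can be illustrated on a toy coherent fault tree; the tree structure and event names below are hypothetical, and minimal cut sets stand in for the prime implicants a full DFM analysis would produce:

```python
from itertools import combinations

# Toy coherent fault tree (hypothetical): TOP = (a AND b) OR (a AND c) OR d.
# Its minimal cut sets play the role of the prime implicants that flag
# error-forcing contexts in the paper's methodology.
EVENTS = ["a", "b", "c", "d"]

def top(state):
    return (state["a"] and state["b"]) or (state["a"] and state["c"]) or state["d"]

def minimal_cut_sets():
    """Enumerate event subsets smallest-first; keep each subset whose
    occurrence triggers TOP and that contains no smaller cut set."""
    found = []
    for r in range(1, len(EVENTS) + 1):
        for combo in combinations(EVENTS, r):
            s = set(combo)
            if top({e: e in s for e in EVENTS}) and not any(m <= s for m in found):
                found.append(s)
    return sorted(tuple(sorted(m)) for m in found)

print(minimal_cut_sets())  # [('a', 'b'), ('a', 'c'), ('d',)]
```

For real software models the contexts are combinations of input and environment conditions rather than component failures, but the minimization step is the same.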
In: Risk analysis: an international journal, Volume 18, Issue 4, pp. 471-484
ISSN: 1539-6924
This paper investigates electrical overheating events aboard a habitable spacecraft. The wire insulation involved in these failures plays a major role in the entire event scenario, from threat development to detection and damage assessment. Ideally, if models of wire overheating events in microgravity existed, the various wire insulations under consideration could be compared quantitatively; however, such models do not exist. In this paper, a methodology is developed that can be used to select the wire insulation best suited for use in a habitable spacecraft. The results show that, based upon the Analytic Hierarchy Process, the simplifying assumptions made, the criteria selected, and the data used in the analysis, Tefzel is better than Teflon for use in a habitable spacecraft.
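The Analytic Hierarchy Process step can be sketched with a hypothetical pairwise-comparison matrix; the criteria and judgments below are invented for illustration, and the row geometric-mean method is used as a common approximation to the principal eigenvector:

```python
import math

# Hypothetical pairwise comparisons of three selection criteria on
# Saaty's 1-9 scale (the criteria names and numbers are illustrative,
# not the paper's actual elicitation).
A = [
    [1.0, 3.0, 5.0],     # flammability vs. others
    [1/3., 1.0, 2.0],    # toxicity of combustion products vs. others
    [1/5., 1/2., 1.0],   # durability vs. others
]

def ahp_weights(matrix):
    """Approximate the principal-eigenvector weights by the row
    geometric-mean method, then normalize so the weights sum to 1."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

w = ahp_weights(A)
print([round(x, 3) for x in w])
```

The resulting weights would then multiply each insulation's score on each criterion, and the alternative with the highest weighted total is preferred.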
In: Risk analysis: an international journal, Volume 13, Issue 6, pp. 625-636
ISSN: 1539-6924
This paper introduces conditional influence diagrams into risk management. A contaminated-site cleanup involving two stakeholders is used as a hypothetical case study. The treatment choices must satisfy several conflicting objectives, and any decision made by one stakeholder will affect the choices of the other. In building the influence diagram for each stakeholder, the logical relationships among all relevant factors are determined and the values of these factors are analyzed. The influence diagram for each stakeholder is conditional on the options available to the other. The influence diagrams are then used to evaluate the possible choices of each stakeholder given the decision options of the other. These results are analyzed using game theory methods to gain insights useful to risk management and to demonstrate how mutual trust and cooperation can lead to decisions benefiting both stakeholders.
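The game-theoretic step can be sketched as a search for pure-strategy Nash equilibria over hypothetical payoff tables; the influence-diagram evaluation that would actually produce such payoffs is not reproduced here, and all numbers are invented:

```python
# Hypothetical utilities (0-100) for two stakeholders, each choosing a
# cleanup posture; rows = stakeholder 1's choice, cols = stakeholder 2's.
# Numbers are illustrative, chosen so mutual cooperation dominates.
payoff_1 = [[80, 40],   # stakeholder 1 cooperates
            [55, 30]]   # stakeholder 1 acts unilaterally
payoff_2 = [[75, 50],
            [35, 25]]

def pure_nash(p1, p2):
    """Return (row, col) pairs where neither stakeholder gains by
    unilaterally switching strategies."""
    eq = []
    for i in range(2):
        for j in range(2):
            best_row = all(p1[i][j] >= p1[k][j] for k in range(2))
            best_col = all(p2[i][j] >= p2[i][k] for k in range(2))
            if best_row and best_col:
                eq.append((i, j))
    return eq

print(pure_nash(payoff_1, payoff_2))  # [(0, 0)] -> cooperate / cooperate
```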
In: Risk analysis: an international journal, Volume 6, Issue 4, pp. 447-461
ISSN: 1539-6924
A method is developed for estimating a probability distribution using estimates of its percentiles provided by experts. The analyst's judgment concerning the credibility of these expert opinions is quantified in the likelihood function of Bayes' Theorem. The model considers explicitly the random variability of each expert estimate, the dependencies among the estimates of each expert, the dependencies among experts, and potential systematic biases. The relation between the results of the formal methods of this paper and methods used in practice is explored. A series of sensitivity studies provides insights into the significance of the parameters of the model. The methodology is applied to the problem of estimation of seismic fragility curves (i.e., the conditional probability of equipment failure given a seismically induced stress).
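A much-simplified sketch of the Bayesian aggregation idea, assuming independent lognormal errors around each expert's point estimate and a uniform-in-log prior; the paper's treatment of dependence among estimates and among experts is omitted, and all numbers are illustrative:

```python
import math

# Each expert i reports a point estimate x[i] of an unknown failure rate
# theta; the analyst models ln x[i] ~ Normal(ln theta + bias[i], sigma[i]**2),
# where sigma expresses credibility and bias a suspected systematic shift.
experts = [
    {"x": 1e-4, "sigma": 0.5, "bias": 0.0},
    {"x": 3e-4, "sigma": 1.0, "bias": 0.0},    # less credible: larger sigma
    {"x": 5e-5, "sigma": 0.7, "bias": -0.3},   # suspected optimistic bias
]

def log_likelihood(theta):
    ll = 0.0
    for e in experts:
        z = (math.log(e["x"]) - math.log(theta) - e["bias"]) / e["sigma"]
        ll += -0.5 * z * z - math.log(e["sigma"])
    return ll

# Uniform-in-log prior: equal mass at each point of a log-spaced grid.
grid = [10 ** (-6 + 4 * k / 400) for k in range(401)]   # 1e-6 .. 1e-2
post = [math.exp(log_likelihood(t)) for t in grid]
norm = sum(post)
post = [p / norm for p in post]
mean = sum(t * p for t, p in zip(grid, post))
print(f"posterior mean rate ~ {mean:.2e}")
```

Giving an expert a larger sigma, or a nonzero bias, shifts how strongly that estimate pulls the posterior, which is the mechanism the sensitivity studies in the paper explore.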
In: Risk analysis: an international journal, Volume 6, Issue 1, pp. 43-59
ISSN: 1539-6924
A model is developed for the detection time of fires in nuclear power plants, which differentiates between competing modes of detection and between different initial fire severities. Our state-of-knowledge uncertainties in the values of the model parameters are assessed from industry experience using Bayesian methods. Because the available data are sparse, we propose means to interpret imprecise forms of evidence to develop quantitative information that can be used in a statistical analysis; the intent is to make maximal use of all available information. Sensitivity analyses are performed to indicate the importance of structural and distributional assumptions made in the study. The methods used to treat imprecise evidence can be applied to a wide variety of problems. The specific equations developed in this analysis are useful in general situations where the random quantity of interest is the minimum of a set of random variables (e.g., in "competing risks" models). The computational results indicate that the competing modes formulation can lead to distributions different from those obtained via analytically simpler models, which treat each mode independently of the others.
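The competing-modes formulation can be illustrated under the simplest distributional assumption, constant (exponential) detection rates, for which the minimum over modes is itself exponential with the summed rate and each mode "wins" with probability proportional to its rate; the mode names and rates below are invented:

```python
import random

random.seed(7)

# Hypothetical constant detection rates (per minute) for three competing
# modes; the earliest of the three times is the observed detection time.
rates = {"automatic": 0.50, "occupant": 0.30, "patrol": 0.05}
total = sum(rates.values())   # min of exponentials ~ Exponential(total)

n = 100_000
wins = {m: 0 for m in rates}
time_sum = 0.0
for _ in range(n):
    draws = {m: random.expovariate(lam) for m, lam in rates.items()}
    winner = min(draws, key=draws.get)   # mode that detects first
    wins[winner] += 1
    time_sum += draws[winner]

print("mean detection time:", round(time_sum / n, 3))   # ~ 1/total
print("P(automatic first):", round(wins["automatic"] / n, 3))   # ~ 0.50/0.85
```

With uncertain (Bayesian-distributed) rates, as in the paper, the modes are no longer independent unconditionally, which is why the competing-modes results differ from the simpler per-mode models.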
In: FINANA-D-23-02175
SSRN
In: Risk analysis: an international journal, Volume 25, Issue 2, pp. 361-376
ISSN: 1539-6924
The extreme importance of critical infrastructures to modern society is widely recognized. These infrastructures are complex and interdependent, and protecting them from terrorism presents an enormous challenge. Recognizing that society cannot afford the costs associated with absolute protection, it is necessary to identify and prioritize the vulnerabilities in these infrastructures. This article presents a methodology for the identification and prioritization of vulnerabilities in infrastructures. We model the infrastructures as interconnected digraphs and employ graph theory to identify the candidate vulnerable scenarios. These scenarios are screened for the susceptibility of their elements to a terrorist attack, and a prioritized list of vulnerabilities is produced. The prioritization methodology is based on multiattribute utility theory. The impact of losing infrastructure services is evaluated using a value tree that reflects the perceptions and values of the decision maker and the relevant stakeholders. These results, which are conditional on a specified threat, are provided to the decision maker for use in risk management. The methodology is illustrated through the presentation of a portion of the analysis conducted on the campus of the Massachusetts Institute of Technology.
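The value-tree prioritization can be sketched as an additive multiattribute score; the attributes, weights, scenario names, and impact scores below are all hypothetical:

```python
# Hypothetical value tree: attribute weights elicited from the decision
# maker and stakeholders (summing to 1), and 0-1 impact scores for each
# candidate vulnerable scenario surviving the screening step.
weights = {"casualties": 0.45, "economic": 0.25,
           "mission": 0.20, "public_confidence": 0.10}

scenarios = {
    "power_feeder": {"casualties": 0.2, "economic": 0.9, "mission": 0.8, "public_confidence": 0.5},
    "water_main":   {"casualties": 0.6, "economic": 0.4, "mission": 0.3, "public_confidence": 0.6},
    "network_core": {"casualties": 0.1, "economic": 0.7, "mission": 0.9, "public_confidence": 0.4},
}

def disutility(impacts):
    """Additive multiattribute score; higher = more important to protect."""
    return sum(weights[a] * impacts[a] for a in weights)

ranked = sorted(scenarios, key=lambda s: disutility(scenarios[s]), reverse=True)
for s in ranked:
    print(f"{s}: {disutility(scenarios[s]):.3f}")
```

The ranking is conditional on the specified threat, as in the paper: changing the threat changes the impact scores, not the value tree.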
In: Journal of risk research: the official journal of the Society for Risk Analysis Europe and the Society for Risk Analysis Japan, Volume 2, Issue 1, pp. 11-29
ISSN: 1466-4461
In: Risk analysis: an international journal, Volume 18, Issue 5, pp. 621-634
ISSN: 1539-6924
The National Research Council has recommended the use of an analytic/deliberative decision making process in environmental restoration decisions that involve multiple stakeholders. This work investigates the use of the results of risk assessment and multiattribute utility analysis (the "analysis") in guiding the deliberation. These results include the ranking of proposed remedial action alternatives according to each stakeholder's preferences, as well as the identification of the major reasons for these rankings. The stakeholder preferences are over a number of performance measures that include the traditional risk assessment metrics, e.g., individual worker risk, as well as programmatic, cultural, and cost‐related impacts. Based on these results, a number of proposals are prepared for consideration by the stakeholders during the deliberation. These proposals are the starting point for the formulation of actual recommendations by the group. In our case study, these recommendations included new remedial action alternatives that were created by the stakeholders after an extensive discussion of the detailed analytical results.
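The role of the analysis in guiding deliberation can be sketched by ranking the same alternatives under two stakeholders' differing weight sets; all names, weights, and scores below are invented, and the point is only that divergent rankings identify what the group must deliberate about:

```python
# Hypothetical remedial-action alternatives scored 0-1 (1 = best) on three
# performance measures, and two stakeholders' weight sets over them.
alternatives = {
    "cap_in_place": {"worker_risk": 0.9, "cost": 0.8, "cultural": 0.3},
    "excavate":     {"worker_risk": 0.4, "cost": 0.3, "cultural": 0.9},
    "no_action":    {"worker_risk": 1.0, "cost": 1.0, "cultural": 0.1},
}

stakeholders = {
    "regulator": {"worker_risk": 0.6, "cost": 0.3, "cultural": 0.1},
    "tribe":     {"worker_risk": 0.2, "cost": 0.1, "cultural": 0.7},
}

def rank(weights):
    """Order alternatives by each stakeholder's additive utility."""
    score = lambda a: sum(weights[m] * alternatives[a][m] for m in weights)
    return sorted(alternatives, key=score, reverse=True)

for name, w in stakeholders.items():
    print(name, rank(w))   # the two orderings disagree
```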
In: Risk analysis: an international journal, Volume 3, Issue 3, pp. 181-188
ISSN: 1539-6924
Risk acceptance criteria in the form of limit lines are investigated in the context of prospect theory. This theory departs from utility theory in several respects, an important one being the use of weights other than probabilities in the evaluation of the expected impact of uncertain outcomes. Hypothetical functions reflecting certain attitudes toward consequences and rare events are developed and combined to produce several limit lines.
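The key departure from expected utility can be illustrated with a probability weighting function; the functional form used below is the common Tversky-Kahneman (1992) one, and the gamma value and the lottery are assumptions of this sketch, not the paper's hypothetical functions:

```python
# Decision weights replace probabilities, overweighting rare events; this
# is what bends risk-acceptance limit lines relative to expected-value lines.
def weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

p, loss = 1e-4, 1000.0   # illustrative rare accident with a large consequence

expected = p * loss            # expected-value evaluation
weighted = weight(p) * loss    # prospect-theory evaluation

print(f"expected impact: {expected:.3f}")
print(f"weighted impact: {weighted:.3f}")
print(f"overweighting factor: {weighted / expected:.1f}x")
```

Because small probabilities are overweighted, a limit line derived from such weights penalizes rare, severe events more steeply than an iso-expected-loss line would.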