Psychology Wiki


Risk concerns the expected value of one or more results of one or more future events. Technically, the value of those results may be positive or negative. However, general usage tends to focus only on potential harm that may arise from a future event, which may accrue either from incurring a cost ("downside risk") or from failing to attain some benefit ("upside risk").

Historical background[]

The term risk may be traced back to classical Greek rizikon (Greek ριζα, riza),[citation needed] meaning root, later used in Latin for cliff. The term appears in Rhapsody M of Homer's Odyssey ("Sirens, Scylla, Charybdis and the bulls of Helios"): Odysseus saved himself from Charybdis at the cliffs of Scylla, where his ship was destroyed by heavy seas sent by Zeus as punishment for his crew's killing of the bulls of Helios (the god of the sun), by grasping the roots of a wild fig tree.

For the sociologist Niklas Luhmann the term 'risk' is a neologism that appeared with the transition from traditional to modern society.[1] "In the Middle Ages the term riscium was used in highly specific contexts, above all sea trade and its ensuing legal problems of loss and damage."[1] In the vernacular languages of the 16th century the words rischio and riezgo were used,[1] both derived from the Arabic word "رزق", "rizk", meaning 'to seek prosperity'. The term was introduced to continental Europe through interaction with Middle Eastern and North African Arab traders. In English the term risk appeared only in the 17th century, and "seems to be imported from continental Europe."[1] When the terminology of risk took hold, it replaced the older notion that thought "in terms of good and bad fortune."[1] Niklas Luhmann (1996) seeks to explain this transition: "Perhaps, this was simply a loss of plausibility of the old rhetorics of Fortuna as an allegorical figure of religious content and of prudentia as a (noble) virtue in the emerging commercial society."[2]

Scenario analysis matured during Cold War confrontations between major powers, notably the U.S. and the USSR. It became widespread in insurance circles in the 1970s, when major oil tanker disasters forced more comprehensive foresight.[citation needed] The scientific approach to risk entered finance in the 1980s as financial derivatives proliferated, and reached the general professions in the 1990s, when the power of personal computing allowed for widespread data collection and number crunching.

Governments now use risk analysis, for example, to set standards for environmental regulation, e.g. the "pathway analysis" practiced by the United States Environmental Protection Agency.

Definitions of risk[]

There are many definitions of risk that vary by specific application and situational context. The widely inconsistent and ambiguous use of the word is one of several current criticisms of the methods to manage risk.[3]

One set of definitions presents risks simply as future issues which can be avoided or mitigated, rather than present problems that must be immediately addressed. E.g. "Risk is the unwanted subset of a set of uncertain outcomes." (Cornelius Keating)

More formally (and quantitatively), risk is proportional both to the expected losses that an event may cause and to the probability of that event. E.g. "Risk is a combination of the likelihood of an occurrence of a hazardous event or exposure(s) and the severity of injury or ill health that can be caused by the event or exposure(s)" (OHSAS 18001:2007). Mathematically, risk is often simply defined as

Risk = (probability of the event occurring) × (expected loss if it occurs)

Or, more generally, summing over all harmful events that may occur,

Risk = Σi (probability of event i) × (expected loss from event i)

One of the first major uses of this concept was at the planning of the Delta Works in 1953, a flood protection program in the Netherlands, with the aid of the mathematician David van Dantzig.[4] The kind of risk analysis pioneered here has become common today in fields like nuclear power, aerospace and chemical industry.
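
As a minimal sketch, the "probability times loss" definition of risk can be computed directly; the event names and figures below are illustrative, not drawn from any real assessment.

```python
def expected_risk(events):
    """Risk as the sum over events of probability x expected loss.

    events: list of (probability, loss) pairs.
    """
    return sum(p * loss for p, loss in events)

# Single event: a 1-in-100 annual chance of a flood costing 5,000,000
# gives an expected annual loss of 0.01 * 5,000,000 = 50,000.
print(expected_risk([(0.01, 5_000_000)]))

# More generally, several possible events are summed:
events = [(0.01, 5_000_000),    # flood (illustrative)
          (0.001, 50_000_000)]  # dam failure (illustrative)
print(expected_risk(events))
```

The additive form assumes the listed events are distinct alternatives; a fuller analysis would also model dependencies between events.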

There are more sophisticated definitions, however. Measuring engineering risk is often difficult, especially in potentially dangerous industries such as nuclear energy. Often, the probability of a negative event is estimated from the frequency of past similar events or by event-tree methods, but probabilities for rare failures may be difficult to estimate if an event tree cannot be formulated. Methods of valuing the loss of human life vary with the purpose of the calculation; one approach measures what people are willing to pay to insure against death.[5] Other losses are measured in physical terms, such as a radiological release (e.g., GBq of radio-iodine).[citation needed] There are many formal methods used to assess or to "measure" risk, which is considered one of the critical indicators for human decision making.

Financial risk is often defined as the unexpected variability or volatility of returns and thus includes both potential worse-than-expected as well as better-than-expected returns. References to negative risk below should be read as applying to positive impacts or opportunity (e.g., for "loss" read "loss or gain") unless the context precludes.

In statistics, risk is often mapped to the probability of some event which is seen as undesirable. Usually, the probability of that event and some assessment of its expected harm must be combined into a believable scenario (an outcome), which combines the set of risk, regret and reward probabilities into an expected value for that outcome. (See also Expected utility.)

Thus, in statistical decision theory, the risk function of an estimator δ(x) for a parameter θ, calculated from some observables x, is defined as the expectation value of the loss function L:

R(θ, δ) = E[ L(θ, δ(x)) ] = ∫ L(θ, δ(x)) f(x | θ) dx

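As a hedged illustration of this risk function (the estimator, loss, and distribution are all chosen for the example), the risk of the sample mean under squared-error loss can be estimated by Monte Carlo; for n i.i.d. normal observations with standard deviation σ, theory gives R = σ²/n.

```python
import random

def risk_of_sample_mean(theta, sigma, n, trials=100_000, seed=1):
    """Monte Carlo estimate of R(theta, delta) = E[L(theta, delta(x))]
    for delta(x) = sample mean and L = squared-error loss."""
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        xs = [random.gauss(theta, sigma) for _ in range(n)]
        delta = sum(xs) / n            # the estimator delta(x)
        total += (delta - theta) ** 2  # squared-error loss L
    return total / trials

# For sigma = 2 and n = 10, theory gives sigma^2 / n = 0.4;
# the Monte Carlo estimate should be close to that value.
print(risk_of_sample_mean(theta=0.0, sigma=2.0, n=10))
```
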
In information security,[citation needed] a risk is described in terms of an asset, the threats to the asset, and the vulnerability that the threats can exploit to impact the asset. For example: our desktop computers (asset) can be compromised by malware (threat) entering the environment as an email attachment (vulnerability).

The risk is then assessed as a function of three variables:

  1. the probability that there is a threat
  2. the probability that there are any vulnerabilities
  3. the potential impact to the business.

The two probabilities are sometimes combined and are also known as likelihood. If any of these variables approaches zero, the overall risk approaches zero.
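
A minimal sketch of this three-variable assessment, assuming a simple multiplicative model (the model and all numbers are illustrative, not a standard formula):

```python
def infosec_risk(p_threat, p_vulnerability, impact):
    """Risk as the product of threat likelihood, vulnerability
    likelihood, and business impact (illustrative model)."""
    return p_threat * p_vulnerability * impact

# Malware arriving by email (threat) against unpatched desktops
# (vulnerability), with a cleanup/downtime impact of 80,000:
# 0.6 * 0.3 * 80,000 = 14,400.
print(infosec_risk(0.6, 0.3, 80_000))

# If any factor approaches zero, the overall risk approaches zero:
print(infosec_risk(0.6, 0.0, 80_000))  # 0.0
```

The first two factors are the "likelihood" the text mentions; combining them by multiplication assumes the threat and the vulnerability are independent.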

The management of actuarial risk is called risk management.

Risk versus uncertainty[]

In his seminal work Risk, Uncertainty, and Profit, Frank Knight (1921) established the distinction between risk and uncertainty.

... Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated. The term "risk," as loosely used in everyday speech and in economic discussion, really covers two things which, functionally at least, in their causal relations to the phenomena of economic organization, are categorically different. ... The essential fact is that "risk" means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomenon depending on which of the two is really present and operating. ... It will appear that a measurable uncertainty, or "risk" proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We ... accordingly restrict the term "uncertainty" to cases of the non-quantitive type.

A solution to this ambiguity is proposed in How to Measure Anything: Finding the Value of Intangibles in Business and The Failure of Risk Management: Why It's Broken and How to Fix It by Doug Hubbard:[6][7]

Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility. The "true" outcome/state/result/value is not known.
Measurement of uncertainty: A set of probabilities assigned to a set of possibilities. Example: "There is a 60% chance this market will double in five years"
Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
Measurement of risk: A set of possibilities each with quantified probabilities and quantified losses. Example: "There is a 40% chance the proposed oil well will be dry with a loss of $12 million in exploratory drilling costs".

In this sense, Hubbard uses the terms so that one may have uncertainty without risk but not risk without uncertainty. We can be uncertain about the winner of a contest, but unless we have some personal stake in it, we have no risk. If we bet money on the outcome of the contest, then we have a risk. In both cases there are more than one outcome. The measure of uncertainty refers only to the probabilities assigned to outcomes, while the measure of risk requires both probabilities for outcomes and losses quantified for outcomes.
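
Hubbard's distinction can be sketched numerically: a measurement of risk is a set of possibilities, each with a quantified probability and a quantified loss, while uncertainty without a stake carries zero loss. The figures come from the examples above.

```python
def expected_loss(possibilities):
    """possibilities: list of (probability, loss) pairs, loss >= 0."""
    return sum(p * loss for p, loss in possibilities)

# The dry-well example: a 40% chance of losing $12 million in
# exploratory drilling costs, i.e. 0.40 * 12,000,000 = 4,800,000.
dry_well = [(0.40, 12_000_000)]
print(expected_loss(dry_well))

# Uncertainty without risk: a contest we have no stake in still has
# multiple possible outcomes, but every outcome carries zero loss.
no_stake = [(0.60, 0), (0.40, 0)]
print(expected_loss(no_stake))  # 0.0
```
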

Risk as a vector quantity[]

Hubbard also argues that defining risk as the product of impact and probability presumes (probably incorrectly) that decision makers are risk neutral.[7] Only for a risk-neutral person is the "certain monetary equivalent" exactly equal to the probability of the loss times the amount of the loss. For example, a risk-neutral person would consider a 20% chance of winning $1 million exactly equal to $200,000 (or a 20% chance of losing $1 million to be exactly equal to losing $200,000). However, most decision makers are not actually risk neutral and would not consider these equivalent choices. This observation gave rise to Prospect theory and Cumulative prospect theory. Hubbard proposes instead that risk is a kind of "vector quantity" that does not collapse the probability and magnitude of a risk by presuming anything about the risk tolerance of the decision maker. Risks are simply described as a set or function of possible loss amounts, each associated with a specific probability. This array cannot be collapsed into a single value until the risk tolerance of the decision maker is quantified.
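
The $1 million example can be worked through directly. Expected value collapses the lottery as a risk-neutral person would; a risk-averse decision maker, modelled here (purely as an illustrative assumption) by the concave utility u(x) = √x, has a lower certain monetary equivalent.

```python
from math import sqrt

# The lottery as a "vector": (probability, payoff) pairs.
lottery = [(0.20, 1_000_000), (0.80, 0)]

# Risk-neutral collapse: probability times amount.
expected_value = sum(p * x for p, x in lottery)

# Risk-averse collapse under the assumed utility u(x) = sqrt(x):
# invert u to get the certain monetary equivalent.
expected_utility = sum(p * sqrt(x) for p, x in lottery)
certainty_equivalent = expected_utility ** 2

print(expected_value)        # 200000.0
print(certainty_equivalent)  # 40000.0, well below the expected value
```

The gap between the two numbers is exactly the information that is lost when the vector of (probability, amount) pairs is collapsed without first quantifying the decision maker's risk tolerance.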


Risk in psychology[]

Main article: Decision theory

Regret[]

In decision theory, regret (and anticipation of regret) can play a significant part in decision-making, distinct from risk aversion (preferring the status quo in case one becomes worse off).

Framing[]

Framing[8] is a fundamental problem with all forms of risk assessment. In particular, because of bounded rationality (our brains get overloaded, so we take mental shortcuts), the risk of extreme events is discounted because the probability is too low to evaluate intuitively. As an example, one of the leading causes of death is road accidents caused by drunk driving—partly because any given driver frames the problem by largely or totally ignoring the risk of a serious or fatal accident.

For instance, an extremely disturbing event (an attack by hijacking, or moral hazards) may be ignored in analysis despite the fact it has occurred and has a nonzero probability. Or, an event that everyone agrees is inevitable may be ruled out of analysis due to greed or an unwillingness to admit that it is believed to be inevitable. These human tendencies for error and wishful thinking often affect even the most rigorous applications of the scientific method and are a major concern of the philosophy of science.

All decision-making under uncertainty must consider cognitive bias, cultural bias, and notational bias: no group of people assessing risk is immune to "groupthink", the acceptance of obviously wrong answers simply because it is socially painful to disagree, especially where there are conflicts of interest. One effective way to address framing problems in risk assessment or measurement (although some argue that risk cannot be measured, only assessed) is to elicit others' fears and personal ideals explicitly, for the sake of completeness.

Neurobiology of Framing[]

Framing involves other information that affects the outcome of a risky decision. The right prefrontal cortex has been shown to take a more global perspective,[9] while greater left prefrontal activity relates to local or focal processing.[10]

Drawing on the theory of leaky modules,[11] McElroy and Seta proposed that they could predictably alter the framing effect by selectively manipulating regional prefrontal activity with finger tapping or monaural listening.[12] The result was as expected: rightward tapping or listening narrowed attention such that the frame was ignored. This is a practical way of manipulating regional cortical activation to affect risky decisions, especially because directed tapping or listening is easily done.

Fear as intuitive risk assessment[]

For the time being, people rely on their fear and hesitation to keep them out of the most profoundly unknown circumstances.

In The Gift of Fear, Gavin de Becker argues that "True fear is a gift. It is a survival signal that sounds only in the presence of danger. Yet unwarranted fear has assumed a power over us that it holds over no other creature on Earth. It need not be this way."

Risk could be said to be the way we collectively measure and share this "true fear"—a fusion of rational doubt, irrational fear, and a set of unquantified biases from our own experience.

The field of behavioral finance focuses on human risk-aversion, asymmetric regret, and other ways that human financial behavior varies from what analysts call "rational". Risk in that case is the degree of uncertainty associated with a return on an asset.

Recognizing and respecting the irrational influences on human decision making may do much to reduce disasters caused by naive risk assessments that pretend to rationality but in fact merely fuse many shared biases together.

Risk assessment and management[]

Main article: Risk assessment

Because planned actions are subject to large cost and benefit risks, proper risk assessment and risk management for such actions are crucial to making them successful.[13]

Since risk assessment and management are essential in security management, the two are tightly related. Security assessment methodologies like CRAMM contain risk assessment modules as an important part of the first steps of the methodology. Conversely, risk assessment methodologies like Mehari have evolved into security assessment methodologies. An ISO standard on risk management (principles and guidelines on implementation) is currently being drafted under code ISO 31000, with a target publication date of 30 May 2009.


References[]

  1. 1.0 1.1 1.2 1.3 1.4 Luhmann 1996:3
  2. Luhmann 1996:4
  3. Douglas Hubbard The Failure of Risk Management: Why It's Broken and How to Fix It, John Wiley & Sons, 2009
  4. Wired Magazine, Before the levees break, page 3
  5. Landsburg, Steven. "Is your life worth $10 million?", Everyday Economics, Slate, 2003-03-03. Retrieved on 2008-03-17.
  6. Douglas Hubbard, How to Measure Anything: Finding the Value of Intangibles in Business, John Wiley & Sons, 2007, p. 46
  7. 7.0 7.1 Douglas Hubbard, The Failure of Risk Management: Why It's Broken and How to Fix It, John Wiley & Sons, 2009
  8. Amos Tversky / Daniel Kahneman, 1981. "The Framing of Decisions and the Psychology of Choice."[verification needed]
  9. Schatz, J., Craft, S., Koby, M., & DeBaun, M. R. (2004). Asymmetries in visual-spatial processing following childhood stroke. Neuropsychology, 18, 340-352.
  10. Volberg, G., & Hubner, R. (2004). On the role of response conflicts and stimulus position for hemispheric differences in global/local processing: An ERP study. Neuropsychologia, 42, 1805-1813.
  11. Drake, R. A. (2004). Selective potentiation of proximal processes: Neurobiological mechanisms for spread of activation. Medical Science Monitor, 10, 231-234.
  12. McElroy, T., & Seta, J. J. (2004). On the other hand am I rational? Hemisphere activation and the framing effect. Brain and Cognition, 55, 572-580.
  13. Flyvbjerg 2006

Bibliography[]

Referred literature[]

  • Bent Flyvbjerg, 2006: From Nobel Prize to Project Management: Getting Risks Right. Project Management Journal, vol. 37, no. 3, August, pp. 5-15. Available at homepage of author
  • Niklas Luhmann, 1996: Modern Society Shocked by its Risks (= University of Hong Kong, Department of Sociology Occasional Papers 17), Hong Kong, available via HKU Scholars HUB

Books[]

  • Historian David A. Moss's book When All Else Fails explains the U.S. government's historical role as risk manager of last resort.
  • Peter L. Bernstein. Against the Gods ISBN 0-471-29563-9. Risk explained and its appreciation by man traced from earliest times through all the major figures of their ages in mathematical circles.
  • Porteous, Bruce T.; Pradip Tapadar (2005). Economic Capital and Financial Risk Management for Financial Services Firms and Conglomerates, Palgrave Macmillan.
  • Tom Kendrick (2003). Identifying and Managing Project Risk: Essential Tools for Failure-Proofing Your Project, AMACOM/American Management Association.
  • Lev Virine & Michael Trumper (2007). Project Decisions: The Art and Science, Management Concepts.
  • David Hillson (2007). Practical Project Risk Management: The Atom Methodology, Management Concepts.
  • Kim Heldman (2005). Project Manager's Spotlight on Risk Management, Jossey-Bass.
  • Dirk Proske (2008). Catalogue of risks - Natural, Technical, Social and Health Risks, Springer.
  • Gardner, Dan, Risk: The Science and Politics of Fear, Random House, Inc., 2008. ISBN 0771032994

Articles and papers[]

  • Clark, L., Manes, F., Antoun, N., Sahakian, B. J., & Robbins, T. W. (2003). "The contributions of lesion laterality and lesion volume to decision-making impairment following frontal lobe damage." Neuropsychologia, 41, 1474-1483.
  • Drake, R. A. (1985). "Decision making and risk taking: Neurological manipulation with a proposed consistency mediation." Contemporary Social Psychology, 11, 149-152.
  • Drake, R. A. (1985). "Lateral asymmetry of risky recommendations." Personality and Social Psychology Bulletin, 11, 409-417.
  • Hansson, Sven Ove. (2007). "Risk", The Stanford Encyclopedia of Philosophy (Summer 2007 Edition), Edward N. Zalta (ed.), forthcoming.
  • Holton, Glyn A. (2004). "Defining Risk", Financial Analysts Journal, 60 (6), 19–25. A paper exploring the foundations of risk. (PDF file)
  • Knight, F. H. (1921) Risk, Uncertainty and Profit, Chicago: Houghton Mifflin Company. (Cited at: [2], § I.I.26.)
  • Kruger, Daniel J., Wang, X.T., & Wilke, Andreas (2007) "Towards the development of an evolutionarily valid domain-specific risk-taking scale" Evolutionary Psychology (PDF file)
  • Miller, L. (1985). "Cognitive risk taking after frontal or temporal lobectomy I. The synthesis of fragmented visual information." Neuropsychologia, 23, 359-369.
  • Miller, L., & Milner, B. (1985). "Cognitive risk taking after frontal or temporal lobectomy II. The synthesis of phonemic and semantic information." Neuropsychologia, 23, 371-379.
  • Neill, M. Allen, J. Woodhead, N. Reid, S. Irwin, L. Sanderson, H. 2008 "A Positive Approach to Risk Requires Person Centred Thinking" London, CSIP Personalisation Network, Department of Health. Available from: http://networks.csip.org.uk/Personalisation/Topics/Browse/Risk/ [Accessed 21 July 2008]

This page uses Creative Commons Licensed content from Wikipedia (view authors).