Teams often label decisions as “data-driven”, but then evaluate a single number (a forecast, a KPI, or an expected value) as if uncertainty were a detail. Risk management standards define risk as the effect of uncertainty on objectives, which is a useful starting point for decision work. [1]
In practice, decision risk answers a simple question:
Where is uncertainty concentrated in this decision, and how exposed are we if we are wrong?
This article presents a framework you can operationalise in product, growth, pricing, and strategy decisions.
What I mean by Decision Risk
Decision risk is the probability-weighted impact of error caused by uncertainty in:
- the inputs (data),
- the assumptions (explicit and implicit),
- the range of plausible outcomes, and
- the model’s fragility (sensitivity to small changes).
The purpose is not to chase false precision. The purpose is to make uncertainty visible, decomposable, and actionable.
A four-component model you can operationalise
I decompose Decision Risk into four measurable components:
- Data Quality (DQ)
- Assumption Load (AL)
- Outcome Variability (OV)
- Sensitivity (SN)
These four components align well with two practical realities:
- Decision quality cannot be judged by outcomes alone, especially under uncertainty. [2]
- The weakest component often dominates the overall robustness of the decision.
Component 1: Data Quality (DQ)
If the data feeding a decision is weak, every downstream calculation becomes fragile.
A practical way to score DQ is to use established data quality dimensions such as accuracy, completeness, timeliness, validity, consistency, and uniqueness. [3] The UK Government’s Data Quality Framework is a useful reference point for how organisations assess and improve input data quality. [4]
Operational approach
- Pick 4 to 6 dimensions relevant to the decision.
- Score each dimension from 0 to 1.
- Take the mean.
Example (0 to 1):
- Accuracy 0.7
- Completeness 0.9
- Timeliness 0.6
- Validity 0.8
DQ = (0.7 + 0.9 + 0.6 + 0.8) / 4 = 0.75
Interpretation: DQ is decent, but timeliness is a structural weakness.
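To make this concrete, here is a minimal Python sketch of the DQ step, using the illustrative dimensions and scores above:

```python
# Sketch: Data Quality as the mean of 0-1 dimension scores.
# Dimension names and values are the illustrative ones from the example.
dq_scores = {
    "accuracy": 0.7,
    "completeness": 0.9,
    "timeliness": 0.6,
    "validity": 0.8,
}

DQ = sum(dq_scores.values()) / len(dq_scores)
print(f"DQ = {DQ:.2f}")  # DQ = 0.75

# Surface the weakest dimension so it can be targeted first.
weakest = min(dq_scores, key=dq_scores.get)
print(f"Weakest dimension: {weakest}")  # timeliness
```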
Component 2: Assumption Load (AL)
Most strategic failures are not data failures. They are assumption failures left untested.
Assumption Load is the weighted uncertainty sitting behind the assumptions the decision relies on. Decision quality practices emphasise making the frame, alternatives, information, and reasoning explicit. Assumptions are embedded in every one of these. [2]
Operational approach
- List assumptions A1…An (explicit and implicit).
- Score each assumption:
- weight wᵢ (importance to the decision, for example 1–5)
- uncertainty uᵢ (how uncertain the assumption is, for example 1–5, where 5 is least certain)
Compute:
- AL_raw = Σ(wᵢ × uᵢ)
- Normalise to 0–1 by dividing by a maximum plausible score for your context.
Example:
- A1: “CAC will remain within ±10%” (w=5, u=4)
- A2: “Conversion rate uplift will persist” (w=4, u=3)
- A3: “No competitor response in 8 weeks” (w=3, u=5)
AL_raw = 5×4 + 4×3 + 3×5 = 20 + 12 + 15 = 47
If your “high” band is 60, then AL = 47/60 = 0.78
Interpretation: uncertainty is concentrated in assumptions, not in the visible metrics.
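A minimal sketch of the same calculation, assuming the 1–5 scales and the normalisation cap of 60 from the example:

```python
# Sketch: Assumption Load as normalised weighted uncertainty.
# Weights and uncertainty scores use the 1-5 scales from the example;
# the normalisation cap (60) is a context-specific choice, not a constant.
assumptions = [
    # (label, weight w, uncertainty u)
    ("CAC will remain within ±10%", 5, 4),
    ("Conversion rate uplift will persist", 4, 3),
    ("No competitor response in 8 weeks", 3, 5),
]

al_raw = sum(w * u for _, w, u in assumptions)
AL = al_raw / 60  # divide by your "high" band to normalise to 0-1
print(f"AL_raw = {al_raw}, AL = {AL:.2f}")  # AL_raw = 47, AL = 0.78
```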
Component 3: Outcome Variability (OV)
Many teams optimise expected value while ignoring dispersion.
Scenario analysis exists to examine how outcomes shift under different states of the world, rather than anchoring on a single forecast. [5] Even a simple design with three to five scenarios can surface instability.
Operational approach
- Create scenarios:
- Base
- Optimistic
- Pessimistic
- 1–2 stress cases (for example, “CAC shock”, “supply constraint”, “regulatory delay”)
- Compute the outcome under each scenario.
- Compute dispersion (standard deviation, range, or interquartile range).
- Normalise to 0–1 using a scale that matches your decision.
Example:
Outcomes (profit uplift): £40k, £55k, £10k, £5k
The mean is £27.5k, but the standard deviation is roughly £21k, so dispersion is high relative to the mean.
Interpretation: the decision surface is unstable, even if the “average” looks acceptable.
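One way to score dispersion on a 0–1 scale is the coefficient of variation (standard deviation divided by the mean). The sketch below assumes that choice and caps the score at 1.0; both are illustrative, not part of a fixed recipe:

```python
# Sketch: Outcome Variability from scenario outcomes, scored via the
# coefficient of variation (SD / mean) and capped at 1.0. Both the
# measure and the cap are illustrative choices, not prescribed.
from statistics import mean, pstdev

outcomes = [40_000, 55_000, 10_000, 5_000]  # profit uplift per scenario, £

cv = pstdev(outcomes) / mean(outcomes)
OV = min(cv, 1.0)  # treat CV >= 1 as maximal variability
print(f"mean = £{mean(outcomes):,.0f}, CV = {cv:.2f}, OV = {OV:.2f}")
# mean = £27,500, CV = 0.76, OV = 0.76
```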
Component 4: Sensitivity (SN)
Sensitivity tells you how fragile the model is: do small input changes cause large output swings?
Sensitivity analysis aims to attribute output uncertainty to uncertainty in inputs, so you can prioritise what to learn or control. [6] For many business decisions, a local sensitivity (elasticity) check is enough to reveal fragility.
Operational approach (elasticity)
Elasticity ≈ | (%ΔOutcome) / (%ΔInput) |
Example:
- If CAC increases by 5% and profit uplift drops by 20%
Elasticity = |(-20%) / (5%)| = 4.0
Then map elasticity to a 0–1 score. A simple rule:
- Elasticity 0 to 0.5 → 0.0 to 0.25
- 0.5 to 1.5 → 0.25 to 0.75
- Above 1.5 → 0.75 to 1.0 (capped)
Interpretation: high sensitivity means forecast error is amplified, so execution risk is elevated.
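A sketch of the elasticity check and the banded mapping above. The linear interpolation within each band, including the divisor used in the capped top band, is an assumption added for illustration:

```python
# Sketch: local elasticity and a banded 0-1 mapping. The band edges
# mirror the rule of thumb above; the linear interpolation within
# bands (and the divisor in the capped top band) is an added assumption.
def elasticity(pct_delta_outcome: float, pct_delta_input: float) -> float:
    return abs(pct_delta_outcome / pct_delta_input)

def sn_score(e: float) -> float:
    if e <= 0.5:
        return 0.25 * (e / 0.5)                     # 0 to 0.5 -> 0.00-0.25
    if e <= 1.5:
        return 0.25 + 0.5 * (e - 0.5)               # 0.5 to 1.5 -> 0.25-0.75
    return min(0.75 + 0.25 * (e - 1.5) / 1.5, 1.0)  # above 1.5, capped at 1.0

e = elasticity(-20.0, 5.0)  # CAC +5% -> profit uplift -20%
SN = sn_score(e)
print(f"elasticity = {e:.1f}, SN = {SN:.2f}")  # elasticity = 4.0, SN = 1.00
```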
For more complex models, Monte Carlo methods help propagate input uncertainty through the model to produce an output distribution, rather than a single outcome. [7]
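For illustration, a minimal sketch of that idea, assuming a toy profit model and normally distributed inputs; the model, distributions, and parameters are placeholders, not recommendations:

```python
# Sketch: Monte Carlo propagation through a toy profit model.
# The model, distributions, and parameters are placeholders; real
# inputs should come from your own estimates.
import random

random.seed(42)  # reproducible illustration

def profit_uplift(cac: float, conversion_uplift: float) -> float:
    # Placeholder model: value of extra conversions minus acquisition cost.
    return 100_000 * conversion_uplift - 500 * cac

samples = sorted(
    profit_uplift(
        cac=random.gauss(50, 5),                   # CAC, £
        conversion_uplift=random.gauss(0.4, 0.1),  # uplift, proportion
    )
    for _ in range(10_000)
)

p5 = samples[int(0.05 * len(samples))]
p95 = samples[int(0.95 * len(samples))]
print(f"P5 = £{p5:,.0f}, P95 = £{p95:,.0f}")  # an outcome range, not a point
```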
A composite Decision Risk Score
Once each component is normalised to 0–1:
Decision Risk Score (DRS)
DRS = 100 × [ w1(1 − DQ) + w2(AL) + w3(OV) + w4(SN) ]
where w1 + w2 + w3 + w4 = 1
A reasonable starting point is equal weights, then adjust based on your context.
Interpretation bands (example)
- 0–33: Low risk (robust)
- 34–66: Medium risk (manage and stage)
- 67–100: High risk (reduce exposure or test first)
The real value is diagnostic: you see which component dominates and target it first.
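As a sketch, here is the composite score computed from the worked examples in this article, with equal weights as the starting point:

```python
# Sketch: composite Decision Risk Score with equal weights.
# Component values are the worked examples from this article.
def drs(dq: float, al: float, ov: float, sn: float,
        weights: tuple = (0.25, 0.25, 0.25, 0.25)) -> float:
    w1, w2, w3, w4 = weights
    return 100 * (w1 * (1 - dq) + w2 * al + w3 * ov + w4 * sn)

score = drs(dq=0.75, al=0.78, ov=0.76, sn=1.0)
print(f"DRS = {score:.0f}")  # DRS = 70 -> high-risk band
```

In this illustration, Sensitivity contributes the most, which is exactly the diagnostic signal the score is meant to surface.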
How to use this in a decision meeting
A 10-minute structure:
- Score DQ for the inputs used in the meeting. [3][4]
- List assumptions and compute AL. [2]
- Run scenarios and compute OV. [5]
- Stress test sensitivity for 1–3 key inputs and compute SN. [6]
- Decide the action:
- commit and monitor,
- stage and add guardrails,
- or run a test that reduces the dominant risk component first.
This shifts the conversation from “do we have enough data?” to “where is uncertainty concentrated and how do we reduce exposure?”
References
[1] ISO. (2018). The new ISO 31000 keeps risk management simple (risk defined as “the effect of uncertainty on objectives”).
[2] UT Austin Executive Education. Strategic decision making and decision quality (decisions should not be judged by outcome; focus on inputs such as framing and alternatives).
[3] UK Government. (2021). Meet the data quality dimensions (accuracy, completeness, timeliness, validity, consistency, uniqueness).
[4] UK Government. (2020). The Government Data Quality Framework (focus on assessing and improving input data quality).
[5] Damodaran, A. (NYU Stern). Scenario analysis, decision trees and simulations (scenario based valuation logic and interpretation).
[6] Saltelli, A. et al. (2008). Global Sensitivity Analysis: The Primer (purpose of sensitivity analysis and attribution of uncertainty).
[7] JCGM. (2008). Evaluation of measurement data, Supplement 1 to the GUM: Propagation of distributions using a Monte Carlo method (JCGM 101:2008).
