The RAFI Decision Intelligence Framework: An AI Strategist’s Perspective
Origins of a Decision Framework
I often reflect on the pivotal influences that shaped my approach to decision-making. As an AI strategist, my thinking has been molded by giants in the field of judgment and uncertainty – Daniel Kahneman, Annie Duke, and Philip Tetlock. Kahneman taught us how human intuition can mislead; he urged “try to design an approach to making a judgment or solving a problem, and don’t just go into it trusting your intuition”. In other words, good decisions require structure and method, not just gut feel. Annie Duke, a former poker pro, reinforced that insight by separating the quality of a decision from the luck of its outcome. She warned against “confusing outcomes with the quality of a decision”. A bad result doesn’t always mean it was a bad call – what matters is the process we followed given what we knew at the time. And from Philip Tetlock, who ran decades-long forecasting tournaments, I learned the power of tracking and feedback. Tetlock showed that people rarely improved their predictions until they started measuring them – after all, “how can anyone expect to become better at making predictions without the feedback data on the accuracy of past predictions?”. These thinkers all pointed to a gap in how organizations handle decisions: we rarely apply the same rigor to human decisions that we do to other aspects of business.
Inspired by their insights, I set out to develop the RAFI Decision Intelligence Framework, with “RAFI” standing for Risk-Adjusted Financial Impact. At its core, RAFI is about bringing a higher level of intelligence and intentionality to organizational decisions. It’s a framework born from Kahneman’s structured “decision hygiene” to reduce bias, Duke’s focus on process-over-outcome to encourage learning, and Tetlock’s emphasis on logging outcomes for continuous improvement. In essence, RAFI aims to log decisions, evaluate their risk-adjusted financial impact, and create feedback loops so that every decision becomes a learning opportunity. It formalizes what great decision-makers do intuitively – thinking in probabilities, considering second-order effects, and reviewing results over time – into a repeatable practice anyone can use.
The Paradox: AI vs. Human Decision Tracking
Working in the AI field, I’ve witnessed a striking paradox. Today, algorithms and AI systems are quietly making an ever-growing number of decisions across organizations. They screen job candidates, set medical priorities, determine loan eligibility, decide which products to stock, and even control pricing strategies. Businesses embrace these automated decisions because AI can crunch vast datasets to make faster, more consistent choices, free of human fatigue or bias. Every one of those AI-driven decisions is typically recorded – time-stamped, evaluated against outcomes, and tuned to improve performance. If a recommendation algorithm tweaks prices or a supply chain AI reroutes a delivery, the system logs it and learns. We trust these systems in part because we can measure their impact in real-time and adjust accordingly.
And yet, human decisions in the same organizations often remain largely untracked and unevaluated. Millions of crucial calls are made by people – project approvals, strategy pivots, hiring judgments, budget allocations – with outcomes that might unfold over months or years. But how often do we document the reasoning behind those choices or return later to assess if the decision paid off? Rarely. As one observer noted, companies tend to treat decision-making skill “as if it were fixed, like height… This is strange, given that we improve at nearly everything we practice with feedback. So why don’t we do this with decisions?”. In many firms, a manager with a string of lucky outcomes earns a reputation as a “good decision-maker,” while another who made a sound decision that turned out poorly might be judged harshly – all without a clear record of the context or rationale. The result is a kind of organizational amnesia. We hold post-mortems for projects, but seldom for the individual decisions that set those projects in motion.
This paradox convinced me that we need to bring decision intelligence to the human side of the house. We instrument our algorithms with dashboards and KPIs; why not our leadership decisions? RAFI proposes exactly that: a systematic way to capture each major decision – who made it, the data and assumptions behind it, the expected risk-adjusted financial outcome – and then track what actually happens. By logging decisions in a structured way, patterns emerge. We might discover that certain teams consistently underestimate timelines, or that a particular type of initiative (say, entering a new market) tends to have hidden second-order costs. This isn’t about blaming people for “bad calls,” but about illuminating our blind spots. With transparency and data, decision-making becomes a teachable, improvable skill rather than a black box. In fact, some pioneering companies have even experimented with radical transparency around decisions; at Bridgewater Associates, for example, Ray Dalio implemented a system of employee “baseball cards” logging each person’s strengths, weaknesses, and decision-making traits – an extreme step, but one born of the belief that in a business where “decision quality could mean millions gained or lost,” transparency had to be operational, not just cultural.
Thinking in First and Second Orders
A cornerstone of the RAFI framework is capturing the first- and second-order effects of decisions. Traditional decision-making tends to fixate on first-order outcomes – the immediate impact in terms of cost saved or revenue gained right now. But as any seasoned strategist knows, the initial result of a choice can be very different from its ripple effects over time. I draw here from the wisdom of investors like Howard Marks and Ray Dalio, who emphasize second-order thinking. Dalio cautions that “failing to consider second- and third-order consequences is the cause of a lot of painfully bad decisions”. In practice, this means that a decision which looks great in the short term could sow the seeds of long-term trouble (or vice versa).
When we log a decision in the RAFI framework, we explicitly ask: “And then what?” What will this decision do not just today, but next quarter, next year, or in five years? For example, cutting a department’s budget by 20% might have the first-order benefit of immediate cost reduction. But the second-order effects could include lower product quality, stressed staff, and lost innovation – which carry financial impacts in lost customers or slower long-term growth. Conversely, investing heavily in a new technology project is a short-term cost hit (negative first-order impact) with potential second-order benefits like efficiency gains or market leadership that pay off later. A robust decision process forces us to map these out: if X happens, then what are the likely follow-on outcomes?
By capturing these cascading effects, RAFI ensures that decision-makers think beyond the obvious. It’s a bit like playing chess and poker at once: like a chess player, we look several moves ahead; like a poker player, we do so knowing that hidden information and luck will shape the result. This approach was influenced by Annie Duke’s idea of framing decisions as bets on the future. We make a hypothesis (“I believe doing X will yield Y over the next two years”), and then we can track whether that bet pays off. It moves us away from the simplistic hindsight judgment of “good outcome = good decision” and towards a learning mindset: even a decision that didn’t achieve the hoped-for outcome can yield valuable insight into our assumptions and models. With RAFI, those insights aren’t lost. They’re logged and fed back into the system so that next time, our “bets” – our decisions – are wiser and more informed.
Importantly, looking at second-order effects also means quantifying risk and uncertainty in financial terms. The “RA” in RAFI – Risk-Adjusted – means we don’t just ask, “what is the expected outcome if things go well?” but also “what are the downsides or variability?” A decision that could generate $10M in revenue but has a high risk of failure should be evaluated differently than a decision that will reliably generate $5M. By adjusting for risk (much like investors do with risk-adjusted returns), the framework encourages a balanced view. Leaders start to think in ranges and probabilities, not certainties. Over time, this cultivates a culture that is more tolerant of calculated risks (because they’re surfaced and understood) and less tolerant of unknown risks (because not assessing second-order consequences is seen as negligent). In a way, we borrow from Tetlock’s superforecasting approach here: break big uncertainties into components, estimate outcomes with confidence levels, and update those estimates as reality unfolds. Decision-making becomes a continuously improving forecast, rather than a one-off gamble.
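To make the risk adjustment concrete, here is a minimal sketch in Python. The scenario probabilities, payoffs, and the variance penalty are illustrative assumptions – one simple way to operationalize “risk-adjusted,” not a prescribed RAFI formula.

```python
# A minimal sketch: compare a risky $10M opportunity with a reliable $5M one
# by penalizing expected value for outcome variance. All numbers and the
# risk_aversion weight are illustrative assumptions, not a RAFI prescription.

def expected_value(scenarios):
    """Probability-weighted average payoff across (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in scenarios)

def risk_adjusted_value(scenarios, risk_aversion=0.5):
    """Expected value minus a penalty proportional to the payoff spread."""
    ev = expected_value(scenarios)
    variance = sum(p * (payoff - ev) ** 2 for p, payoff in scenarios)
    return ev - risk_aversion * variance ** 0.5

risky = [(0.3, 10_000_000), (0.7, 0)]   # $10M if it works (30% chance), else $0
reliable = [(1.0, 5_000_000)]           # a dependable $5M

print(round(risk_adjusted_value(risky)))     # ~708,712 after the variance penalty
print(round(risk_adjusted_value(reliable)))  # 5,000,000 (no variance to penalize)
```

Even this toy version makes the trade-off explicit: the risky bet’s $10M headline shrinks once probability and variance are priced in, which is exactly the conversation the framework is meant to force.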
Logging Decisions and Learning Loops
How do we put these principles into practice? The answer lies in structured decision logging and feedback loops. Every significant decision should leave behind a “trace” – a record that can be reviewed and learned from. This idea of a decision journal isn’t entirely new; thought leaders have long suggested keeping one to combat our faulty memories and hindsight bias. In my own experience, writing down the reasoning behind a decision – the context, options considered, the expected outcome, and how I feel about it – has been transformative. It’s humbling to look back and see where I was right for the wrong reasons, or wrong despite sound reasoning. With RAFI, I envision bringing this practice to organizations at scale.
In a RAFI log (which could be as simple as a shared document or as sophisticated as a custom database), each entry might include: the date, the decision-maker(s), the objective (what are we trying to achieve), the options we weighed, key data or assumptions, the estimated financial impact (e.g. increase revenue by 5% next year, or avoid $500K in risk exposure), and notes on first- and second-order effects considered. Crucially, it would include what metrics will signal if the decision is successful over time – essentially defining upfront what winning or learning looks like. Writing this down imposes discipline. As Shane Parrish noted, “we often don’t know what we think until we write it down”. The act of logging forces clarity: vague thinking becomes clear, and sometimes we catch flawed logic before the decision is even made, just by articulating it.
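As an illustration of what such an entry could look like in structured form, here is a minimal sketch in Python. The field names and types are illustrative assumptions, not a prescribed schema; a spreadsheet or shared document capturing the same information would serve just as well.

```python
# A minimal sketch of a RAFI-style decision log entry. Field names and types
# are illustrative; any shared document or database schema that captures the
# same information would do.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionLogEntry:
    decided_on: date                 # when the decision was made
    decision_makers: list[str]       # who made or signed off on the call
    objective: str                   # what we are trying to achieve
    options_considered: list[str]    # the alternatives we weighed
    key_assumptions: list[str]       # data and beliefs the decision rests on
    expected_impact: str             # e.g. "increase revenue 5% next year"
    second_order_effects: list[str]  # ripple effects considered up front
    success_metrics: list[str]       # signals that will tell us if it worked
    review_dates: list[date] = field(default_factory=list)      # scheduled check-ins
    observed_outcomes: list[str] = field(default_factory=list)  # filled in over time
```

Defining the success metrics and review dates at the moment of the decision is what turns the entry from a diary into a testable prediction.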
Once a decision is logged and executed, feedback loops kick in. This is where transparency and continuous learning flourish. Rather than waiting a whole year to see if a strategy works, RAFI encourages periodic check-ins – say monthly or quarterly – to compare predicted outcomes with reality. Did that product launch generate the uptake we expected by Q2? Are costs trending as anticipated or are there surprises? By monitoring these signals, organizations can adjust course quickly. It’s similar to what modern agile teams do with product metrics, but applied to strategic decisions. In fact, companies are increasingly recognizing that “better decisions come from better systems, not just sharper instincts”. Google, for instance, moved from infrequent performance reviews to a practice of real-time feedback, reflecting the need for faster learning cycles in decision-making.
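A check-in of this kind can be almost mechanical. The sketch below compares the outcome predicted at decision time with what has materialized so far and flags drift; the 15% tolerance and the example numbers are illustrative assumptions, not part of the framework as defined here.

```python
# A minimal sketch of a periodic check-in: compare the predicted outcome with
# actuals to date and flag drift. The tolerance and example numbers are
# illustrative assumptions.

def check_in(predicted: float, actual: float, tolerance: float = 0.15) -> str:
    """Return a simple status based on relative drift from the prediction."""
    if predicted == 0:
        return "review: no meaningful baseline to compare against"
    drift = (actual - predicted) / abs(predicted)
    if drift <= -tolerance:
        return "behind plan: revisit the assumptions logged at decision time"
    if drift >= tolerance:
        return "ahead of plan: capture what worked better than expected"
    return "on track"

# Product launch expected 4,000 sign-ups by Q2; 2,900 have materialized so far.
print(check_in(predicted=4_000, actual=2_900))  # prints the "behind plan" status
```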
However, having data is not enough – it’s the culture around it that counts. We must create a safe environment where admitting a decision didn’t pan out is not seen as failure, but as invaluable information. This is where transparency is key. If everyone from the CEO to frontline employees logs their big decisions and shares outcomes, it demystifies success and normalizes learning from misses. Leaders can model this by openly discussing their own decision outcomes. Imagine a company all-hands where an executive says, “Last year I decided to expand into Market X expecting $2M in new revenue. We achieved only half that. Here’s what we learned and how we’ll do better next time.” Such candor builds trust and accelerates collective learning. It also encourages people to be more thoughtful upfront (knowing they’ll be reviewing their reasoning later).
The feedback loop closes when we feed the learnings back into the next decision. Over time, the RAFI log becomes a goldmine of organizational knowledge – a living playbook of what works and what doesn’t. Patterns can be analyzed. We might find, for example, that whenever we skipped getting an “outside view” (to use Duke’s term) on a big bet, we were more likely to be over-optimistic. Or we might quantify that decisions made under severe time pressure have a higher variance in outcome. These insights can then inform training, process changes, or AI decision-support tools. In essence, RAFI turns decision-making into an iterative data-driven process, much like continuous improvement in manufacturing or DevOps in software. We create a closed-loop system where decisions are hypothesized, tested, measured, and refined. Organizational decision quality, once invisible, becomes a tangible metric that can be improved. As one 2025 business study noted, structured feedback loops can “strengthen decision-making, reduce costly missteps, and support long-term certainty, even in unpredictable times”. In a volatile world, that ability to adapt and learn quickly is a formidable advantage.
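To show how simple that mining can be, here is a sketch that groups logged decisions by a tag such as time pressure and compares the variance of their outcomes. The tags and the actual-versus-expected outcome ratios are made-up illustrations, not real data.

```python
# A minimal sketch of mining the decision log for patterns, e.g. whether
# decisions made under time pressure show higher outcome variance.
from collections import defaultdict
from statistics import pvariance

def outcome_variance_by_tag(entries):
    """entries: iterable of (tag, actual_impact / expected_impact) pairs."""
    groups = defaultdict(list)
    for tag, ratio in entries:
        groups[tag].append(ratio)
    return {tag: pvariance(ratios) for tag, ratios in groups.items() if len(ratios) > 1}

log = [
    ("time_pressure", 0.4), ("time_pressure", 1.6), ("time_pressure", 0.7),
    ("normal_cadence", 0.9), ("normal_cadence", 1.1), ("normal_cadence", 1.0),
]
print(outcome_variance_by_tag(log))  # far higher variance for 'time_pressure'
```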
It’s worth noting that instituting this kind of transparency and logging does require cultural buy-in. There may be initial fear – will this become a blame game or add bureaucracy? It’s on leadership to frame RAFI not as a scorecard to punish, but as a tool to empower. I like to draw an analogy to flight recorders (black boxes) in aviation: they’re not there to blame the pilot, but to ensure the whole industry learns from each flight. Similarly, a decision log is there to make everyone better. In fact, when done right, it can be highly motivating – people see their ideas and decisions driving real impact (or provoking valuable discoveries) over time, which is far more engaging than just executing orders and never revisiting them.
And in today’s economic climate, this approach is not just nice-to-have; it’s increasingly essential. Capital efficiency has become a rallying cry in many boardrooms, especially in uncertain times. Leaders are asking every day: “Are we investing in the right priorities? Are we realizing the return we expected?” The RAFI framework provides the clarity to answer those questions. It bakes a mindset of efficiency into the DNA of decision-making by continuously checking decisions against expected ROI and risk. Companies that treat efficiency not as a one-off cost-cutting exercise but as an “always on” discipline tend to avoid desperate slash-and-burn tactics in downturns. Instead of reactive cuts, they have proactive intelligence on what’s working and what’s not. By logging decisions and their outcomes, organizations can trim waste and double down on winners more rationally. In short, RAFI helps allocate precious capital – whether financial or human talent – to where it will have the most risk-adjusted impact. That is the kind of agility and prudence that stakeholders (and investors) increasingly value.
A Future of Empowered Decision-Makers
Peering into the future, I see a workplace transformed by this ethos of decision intelligence. In this vision, AI is not a threat to human decision-makers, nor a replacement for them, but a powerful partner and enabler. Advanced AI systems will assist in analyzing the data for each big decision, simulating scenarios, and even suggesting potential second-order consequences we might overlook. They will serve up a “risk-adjusted financial impact” analysis at the click of a button – but the human will still be in the driver’s seat to judge qualitative factors, ethics, and alignment with the company’s vision. The collaboration is symbiotic: “it’s about humans and AI working together to unlock new possibilities… technology augments human capabilities rather than replacing them”. With mundane, data-heavy analysis handled by AI, humans can focus on creativity, strategy, and values – the uniquely human aspects of decisions.
In this future, every employee becomes a sort of investor in the company’s success, and their decisions are their portfolio. Imagine a world where a brilliant cost-saving idea or growth decision by an employee is logged and tracked – and as the years pass and that decision yields dividends for the company, the employee shares in that success. It could be through bonuses tied to long-term outcomes or even a system of internal “decision equity.” The key is that people would be empowered and rewarded for making decisions that create lasting value. This flips the traditional script where employees are often only rewarded for short-term performance or output, not for the insight or foresight of their choices. Under RAFI, someone who makes a decision that significantly reduces a future risk (say, averting a compliance issue that could have cost millions) gets recognition and tangible reward, even if the “absence of a problem” is hard to see on the surface. Such a system incentivizes long-term thinking at every level of the organization.
The transparency and accountability in decision logging also mean that unsung heroes get noticed. Often in large companies, smart decisions by lower-level employees get lost in the shuffle or attributed to someone higher up. But with a clear decision log and outcome tracking, contributions are visible. It creates a meritocracy of ideas: no matter your title, if you consistently make high-RAFI decisions – those that drive high returns relative to risk over time – you build a reputation (and record) as a valued decision-maker. I envision internal dashboards where, for instance, product teams can see the historical RAFI of their major product decisions, almost like a “decision batting average.” This isn’t to gamify for its own sake, but to celebrate learning and effective thinking. It could even foster healthy competition: teams challenging each other to think more rigorously and achieve better outcomes on their initiatives, with full awareness of long-term impacts.
Crucially, this is a future where making decisions becomes less scary and more engaging for everyone. When the process is logged and learning-focused, people are less afraid to take initiative. They know that if things don’t go as hoped, the system will catch it early (through feedback loops) and everyone will learn something – not that their career will be over. Paradoxically, by embracing the tracking and evaluation of decisions, we create an environment with more psychological safety to innovate. It’s similar to how pilots train on flight simulators: you practice, you crash in simulation, you learn, so that in the real world you perform better. RAFI makes each decision a bit more like a simulator event – something you approach consciously and from which you will get feedback, rather than a high-stakes shot in the dark.
For leaders and change-makers reading this, my message is a personal one: we stand at an inflection point. We have at our fingertips AI tools that can log and analyze decisions in ways unimaginable a generation ago. We also have mounting pressure to be efficient, thoughtful stewards of our organizations. The RAFI Decision Intelligence Framework is my attempt to marry those realities into a hopeful future – one where every decision is an opportunity to create value and to learn, where humans and AI co-create better outcomes. It’s a future where companies succeed not by chance or charisma, but by cultivating wisdom in how decisions are made. In such a future, the often-heard lament “we have always done it this way” gives way to a culture of curiosity: “What have we learned from the decisions we made, and how will we apply it going forward?” Each employee becomes not just a cog in a machine, but a decision-maker entrusted with transparent ownership of their choices and their impact.
In closing, I invite you to imagine your organization five or ten years from now, having embraced this framework. Think of the confidence you would have in your strategy if you could look at a “decision ledger” and see the trajectory of thinking and outcomes that led you here. Think of the engagement of a workforce that knows their ideas matter and will be heard, logged, and potentially rewarded. In a business world often obsessed with short-term results, this is a quietly radical vision: focus on making consistently good decisions and let the results compound over time. As Annie Duke wisely noted, “good results compound… and make possible future calibration and improvement”. My personal hope is that RAFI helps catalyze this shift – to a world where decision-making is not an opaque art but a transparent, principled discipline, powered by AI and human insight in harmony. In that world, we won’t just be working with AI, we will be working smarter with ourselves, turning every decision into a stepping stone for collective growth and long-term value creation.
Sources:
Kahneman, D. – Interview on structured decision-making (issues.org)
Duke, A. – Thinking in Bets, process vs. outcome (medium.com)
Tetlock, P. – Superforecasting and feedback (ngpcap.com)
Indatalabs – AI enables fast, consistent decisions (indatalabs.com)
Scientific American – Algorithms’ growing role in key decisions (scientificamerican.com)
Parrish, S. (Farnam Street) – Organizations rarely give feedback on decisions (fs.blog)
Dalio, R. – Second-order consequences in decisions (fs.blog)
Nicholson, D. – Feedback loops improve decision-making (certaintynews.com)
Jethmalani, M. – Capital efficiency as an “always on” mindset (linkedin.com)
Bridgewater case – Radical transparency via decision “baseball cards” (linkedin.com)
Tripathi, A. – Humans + AI as collaborative partners (digitate.com)