AI and a new frontier for decision intelligence
Why autonomous agents need more than data to make effective decisions.
TL;DR: A growing number of voices are converging on the same insight: the most important data in any organisation has never been captured. A system of record captures transactional data. It does not explain the decisions behind how, where or when that data is used ‘as is’, incorporated into an analysis, or over-ruled in favour of alternative narratives. The challenges of systematically capturing the informal and tacit knowledge that underpins real-world decision-making have been debated for decades. The new era of AI and autonomous agents has raised the stakes. But it also creates the possibility of succeeding where prior knowledge management systems failed: AI has the potential to discover, structure and maintain much of the organisational and situational context that decision-making depends upon.
The futile search for truth
In December 2025, Jamin Ball published ‘Long Live Systems of Record’, arguing that AI agents raise the bar for what a good system of record looks like1. Ball gave a great example: what is the ARR number to be shared in a presentation? The answer given by Sales can be quite different from the one given by Finance. Is it based on the date of the signed contract, the delivered work, the payment cleared? If an AI agent gets it wrong, in Ball’s words, “the rest of the workflow is now confidently automating the wrong thing.” Determining what is true is a knottier problem than many realise.
“The more we automate, the more important it becomes that someone has done the unglamorous work of deciding what the correct answer is and where it lives” - Jamin Ball, 2025
In the article, ‘AI’s trillion-dollar opportunity: Context graphs,’2 authors Jaya Gupta and Ashu Garg expand on Ball’s position, with the argument that there is a missing layer in the data architecture - decision traces. The exceptions, the overrides, the precedents and reasoning that live in conversations and people’s heads. Gupta and Garg posit that these traces can be captured in the form of a context graph, a map that connects the dots between what was decided, by whom, based on what, and under which circumstances, revealing how an organisation actually operates.
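Gupta and Garg do not prescribe a schema, but the idea can be illustrated with a minimal sketch: each decision trace is a record linking what was decided, by whom, based on what, and under which circumstances, with links to precedents. The field names, teams and figures below are entirely hypothetical, not part of any published specification.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionTrace:
    """One node in a hypothetical context graph (illustrative schema only)."""
    decision: str                                   # what was decided
    decided_by: str                                 # by whom
    based_on: list[str] = field(default_factory=list)            # inputs relied upon
    circumstances: dict[str, str] = field(default_factory=dict)  # conditions at the time
    precedents: list["DecisionTrace"] = field(default_factory=list)  # prior related decisions

# Ball's ARR question, captured as two traces with different bases (made-up numbers)
finance_view = DecisionTrace(
    decision="Report ARR as $12.4M",
    decided_by="Finance",
    based_on=["payments cleared in ledger"],
    circumstances={"quarter": "Q4", "audience": "board deck"},
)
sales_view = DecisionTrace(
    decision="Report ARR as $13.1M",
    decided_by="Sales",
    based_on=["signed contracts"],
    circumstances={"quarter": "Q4", "audience": "pipeline review"},
    precedents=[finance_view],
)
```

The value is not in either record alone but in the link between them: the graph makes visible that two answers to the same question coexist, and on what basis each was chosen.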
A recent article brought both concepts together: ‘What is the Difference Between a Semantic Layer and a Context Layer,’3 by Lulit Tesfaye. Tesfaye explained that the semantic layer provides shared meaning and defines the core, often financial, concepts everyone is using, whilst the context graph layer adds real-time data, behaviour, and operational signals that help explain what is happening around the data and how to act on it. Momentum is growing around the concept. Gartner has predicted that, by 2028, 50% of AI agent systems will leverage context graphs (as cited by Gupta, 2026)4 and, by 2030, “universal semantic layers will be treated as critical infrastructure, alongside data platforms and cybersecurity.”5
We have been here before
Professionals in the world of information, knowledge and records management could be forgiven for rolling their eyes a little. History has a habit of repeating. Twenty years ago, I reported on Gartner research, ‘The Knowledge Worker Investment Paradox.’6 Their findings were clear about the limitations of enterprise data platforms: employees get 50% to 75% of their relevant information directly from other people. Over 80% of digitised information sits on individual hard drives and in personal files. And most organisational knowledge is lost when people leave.
The knowledge management movement of the late 1990s and 2000s was built on the insight that the context graph discourse is now rediscovering: the most valuable knowledge in an organisation is not in the formal systems. It is in the reasoning, the judgement, the institutional memory carried by people. The proposed solutions were ontologies to structure knowledge and taxonomies to classify it.
They largely failed. Not because the insight was wrong but because the manual effort required to ensure these semantic layers remained up to date was unsustainable.
Back in 2008, I wrote about five different ways in which language creates ambiguity: the same word meaning different things, different words meaning the same thing, words that mean the same thing in theory but differ in practice, near-identical spellings with different meanings, and different vocabulary used by different people for the same concept.7 And those relationships need to be kept current as organisations and the markets they operate in change. Few organisations were willing to invest in the manual effort needed, and so the knowledge systems decayed.
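The maintenance burden becomes concrete if you imagine the simplest possible semantic layer: a term-resolution table mapping a word, as used by a given team, onto a shared concept. The terms and teams below are hypothetical; the point is that every entry is a manual curation decision that must be revisited whenever the organisation or its market changes.

```python
# Hypothetical term-resolution table: (term, team) -> canonical concept.
# Same word, different things: "account" is a customer to Sales, a login to IT.
# Different words, same thing: "client" and "customer" resolve to one concept.
VOCABULARY = {
    ("account", "sales"): "customer_org",
    ("account", "it"): "user_login",
    ("client", "legal"): "customer_org",
    ("customer", "support"): "customer_org",
}

def resolve(term: str, team: str) -> str:
    """Map a team's word onto the shared concept, or flag it as uncurated."""
    return VOCABULARY.get((term.lower(), team.lower()), "UNMAPPED")
```

Every "UNMAPPED" result is a gap a human curator once had to notice and fill by hand; at enterprise vocabulary scale, that is the effort that proved unsustainable.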
Can generative AI achieve what human curators could not? If algorithms can discover, structure, and maintain the rich and varied semantic layers of knowledge systems at scale, then semantic layers will finally become standard data infrastructure.
The knowledge we struggle to capture
AI may solve the challenge of maintaining semantics and connect the dots to form a context graph of decision traces. But it still only captures what can be made explicit. And the most important knowledge in any organisation often can't.
“We can know more than we can tell.” - Michael Polanyi, 1966
In 1966, philosopher Michael Polanyi articulated the deeper challenge with his often-quoted observation: "we can know more than we can tell."8 Much of what matters in skilled judgement and decision-making is tacit, in people’s heads. It is embedded in practice, experience, and a feel for the situation that the person holding such knowledge may not be able to fully articulate even if you ask them directly.
In 1998, psychologist Gary Klein provided examples of this challenge in his book Sources of Power9. The book was a landmark study in what became known as naturalistic decision-making: the study of how people actually make decisions in complex, high-pressure, real-world environments. Klein found that experienced firefighters entering a burning building don't weigh options and choose between them in the way classical decision theory assumes. They recognise patterns from prior experience and act on the first workable option. A commander who suddenly orders everyone out of a building moments before the floor collapses can't fully explain how they knew. The knowledge was tacit. The decision trace would show only that an evacuation was ordered.
This is the deeper reason knowledge management systems struggled. The most consequential knowledge resists being documented at all. Whilst AI-driven context graphs may solve the scale problem, how do we know when sufficient context is captured to support effective autonomous decision-making?
The dynamics of context
Tesfaye's distinction between semantic and contextual layers is useful for the data architecture needed to support better decision-making. Both are necessary. But the feasibility of incorporating and maintaining each is vastly different.
“Dynamic conditions (that is, a changing situation) are an important feature of naturalistic decision making. New information may be received, or old information invalidated, and the goals can become radically transformed.” - Gary Klein (1998)
The dynamic signals that influence real-world decision-making range from observable operational data (who approved what, which systems were consulted, what the pipeline status was) through to the tacit, intuitive judgement that Klein's firefighters relied on. They can be described as at least two distinct trails - organisational and situational. Organisational context can tell you what happened and how it happened: the process, the policy, the approver, the precedent chain. Situational context captures the conditions under which a decision happened. The challenge is determining how sensitive the decision was to some or all of those conditions.
For example, suppose an AI agent finds a decision trace showing a VP approved an extraordinary 20% discount last quarter. The organisational trail is clear. But was the sector experiencing high churn at the time? Had a competitor just launched? Was the VP under pressure to hit a retention target? Was it on the expectation of a larger deal in the future? Was it ‘mates rates’? Would the same conditions reoccur or hold true in the future or for a different client?
Decisions made by humans in complex environments contain a mix of stable patterns, situational responses, and irreducible variation. Without a framework for distinguishing between the three, a context graph accumulates a detailed record of the past but cannot provide systematic intelligence to support future actions.
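One crude way to probe that sensitivity, assuming decision traces have been captured with their situational conditions attached, is to compare outcomes when a condition was present versus absent. The function and data below are a hypothetical sketch, not a proposed method, and the gap it measures is correlation only, not causation.

```python
def condition_sensitivity(traces: list[dict], condition: str) -> float:
    """Difference in approval rate when a situational condition was present vs absent.

    Each trace is a dict like {"approved": bool, "conditions": {"high_churn": True}}.
    A large gap suggests the decision pattern is sensitive to that condition;
    near zero suggests a stable pattern that may be safe to automate.
    """
    with_c = [t["approved"] for t in traces if t["conditions"].get(condition)]
    without_c = [t["approved"] for t in traces if not t["conditions"].get(condition)]
    if not with_c or not without_c:
        return 0.0  # condition never varies in the data; nothing to compare
    rate = lambda xs: sum(xs) / len(xs)
    return rate(with_c) - rate(without_c)

# Toy discount-approval traces: approvals cluster where sector churn was high
traces = [
    {"approved": True,  "conditions": {"high_churn": True}},
    {"approved": True,  "conditions": {"high_churn": True}},
    {"approved": False, "conditions": {"high_churn": False}},
    {"approved": True,  "conditions": {"high_churn": False}},
    {"approved": False, "conditions": {"high_churn": False}},
]
```

On these toy traces the gap for "high_churn" is large, hinting at a situational response, while a condition that never appears in the data returns zero - which is itself the point: the graph can only reveal sensitivity to conditions someone thought to record.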
The promise of AI-generated context graphs
Capturing the dynamics of context in a reusable way has long been an unsolved challenge and limited the success of traditional knowledge management systems. People will default to asking someone if they suspect the knowledge base is stale. Deploying AI agents with increasing autonomy to make decisions that previously required human judgement raises the stakes considerably. But the same technology also brings capabilities that traditional knowledge management systems lacked.
AI can discover semantic structures rather than requiring humans to build them by hand. It can detect when terminology drifts across teams, when new concepts emerge that existing classifications don't cover, and when patterns in the decision graph have gone stale. And all performed at super-human speed and scale. These are not hypothetical scenarios. Aspects of them are already emerging in the automated metadata tagging, ontology learning, and drift detection capabilities that are becoming standard features of modern data platforms.
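As a toy illustration of the drift-detection idea (raw term counts standing in for the far richer signals a real platform would use), the sketch below flags terms whose share of usage shifted markedly between two periods, and terms that are entirely new. The threshold and inputs are arbitrary assumptions.

```python
from collections import Counter

def detect_drift(old_docs: list[str], new_docs: list[str], threshold: float = 0.05):
    """Flag terms whose relative frequency shifted by more than `threshold`,
    plus terms that appear only in the new period (candidate new concepts)."""
    old = Counter(w for d in old_docs for w in d.lower().split())
    new = Counter(w for d in new_docs for w in d.lower().split())
    old_total, new_total = sum(old.values()) or 1, sum(new.values()) or 1
    drifted, emerged = [], []
    for term in set(old) | set(new):
        delta = new[term] / new_total - old[term] / old_total
        if term not in old:
            emerged.append(term)       # brand-new vocabulary
        elif abs(delta) > threshold:
            drifted.append(term)       # usage share changed sharply
    return sorted(drifted), sorted(emerged)
```

Fed two corpora where "churn" falls out of use and "retention" appears, it would surface both - the kind of signal that tells a curator (or an agent) the semantic layer needs refreshing.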
Furthermore, AI can operate at resolutions impossible for humans, creating the potential to surface fine-grained decision traces and identify the situational factors that a class of decisions is sensitive to, even when individual experts cannot articulate what felt like gut instinct. In doing so, it could expand our knowledge of the naturalistic decision-making behaviours Klein observed through fieldwork.
A new frontier for decision intelligence
In positioning context graphs as the future for decision automation, Gupta and Garg suggest, “The question is whether the next trillion-dollar platforms are built by adding AI to existing data, or by capturing the decision traces that make data actionable.” I would argue both are needed, and neither is sufficient on its own.
AI is creating new possibilities across the board: to manage and analyse data at scale, to construct context graphs from decision traces that would otherwise remain scattered across conversations, and to maintain the semantic layers that make those graphs interpretable. The work here has already begun. Modern data platforms like Databricks have already put AI to work automating and optimising data management on their platform.10 (Disclaimer: I used to work for Databricks; other data platforms are also available.) Arguably, Palantir has led in making ontology management operationally viable at scale with AI11.
The challenge remains in knowing what to do with context once you have it, and recognising what context remains out of reach. It requires knowing which situational factors matter for which decisions. How sensitive an outcome was to conditions that may have already changed. Where the boundary lies between a pattern that can be automated and a judgement that should stay human. These are not technology problems. They are decision intelligence problems, ones that will require new frameworks and methods, drawing on fields like naturalistic decision-making that have studied real-world human judgement for decades.
Context graphs are a genuine and exciting advance. For the first time, organisational reasoning is becoming visible, structured, and queryable at speed and scale. But knowing when the context is sufficient for an AI agent to act, and when it is not, remains a harder problem. This is a new frontier for decision-making.
Related posts
Even with experience, AI will not understand (March 2026)
Why current AI is both brilliant and dumb (March 2026)
Is narrative all we need to achieve AGI? (October 2025)
References
All sources as accessed on or before 29 March 2026.
Ball, J. (2025). “Clouded Judgement 12.12.25 - Long Live Systems of Record.” Clouded Judgement, Substack. Source
Gupta, J. & Garg, A. (2025). “AI’s trillion-dollar opportunity: Context graphs.” Foundation Capital. Source
Tesfaye, L. (2026). "What is the Difference Between a Semantic Layer and a Context Layer?" Enterprise Knowledge. Source
Gartner (2026). "Gartner Announces Top Predictions for Data and Analytics in 2026." Gartner Newsroom, 11 March 2026. Source
Polanyi, M. (1966). The Tacit Dimension. University of Chicago Press.
Klein, G. (1998). Sources of Power: How People Make Decisions. MIT Press.