From Expertise to Agent Intelligence

In the late 1980s, a large American bank set out to build a system that could automate the work of its best credit analysts.

The analysts were good at their jobs – technically competent with genuine expertise. They could look at a loan application, a set of financial statements, an industry context, and form a judgment about credit quality that held up over time. The loans they approved defaulted less often, and their pricing was more accurate. The bank wanted to scale that judgment without scaling the headcount.

So, they brought in a team to build the system. The team spent months with the analysts. They documented the process, captured the rules and built a model that encoded everything the analysts had been able to articulate about how they made decisions.

The system worked. It processed applications faster than any human could. It was consistent in ways humans can’t be. It never had a bad day, never got tired, never let the last application colour its view of the next one.

On the cases that resembled the ones it was built from, it performed well. On those that didn’t, it failed quietly at the margins, in the accumulation of small misjudgements that took years to show up in the loss data.

What the team had built was not an expert system. It was a very sophisticated encoding of what the analysts could say about their work, not what they actually did. The gap between those two things is the subject of this article.

The Translation Chain

Moving human expertise into an agent is not a single act. It is a chain of translations, each one introducing the possibility of loss.

The chain has four links.

Tacit knowledge is where expertise actually lives — in the perceptual sensitivity, the contextual judgment, the feel for when rules apply and when they don’t that experts develop over years of practice.

This knowledge is held in the body and the mind in ways that are not fully accessible to conscious reflection. It cannot be directly moved anywhere. It has to be surfaced first.

Explicit knowledge is tacit knowledge made visible through elicitation. An expert’s judgment about a credit risk, drawn out through careful questioning and case analysis, translated into statements that can be written down and examined. This is the first translation, and it is where the most important losses occur. Everything that the elicitation process fails to surface stays tacit and stays out of the system.

Structured knowledge is explicit knowledge organised into a form that a system can navigate. Rules, hierarchies, decision trees, ontologies — the architecture that gives knowledge a shape a machine can work with. This is the second translation, and it introduces a different kind of loss.

Structure imposes boundaries. It decides what counts as a relevant category and what doesn’t. It captures the relationships the designer thought to encode and misses the ones they didn’t think of. Every structured representation is a simplification of the explicit knowledge it was built from.

Encoded context is the structured knowledge loaded into an agent. It’s the prompts, the retrieval systems, the reasoning frameworks that shape how the agent uses what it knows. This is the third translation.

Context determines not just what an agent knows but how it applies that knowledge.

The same structured knowledge encoded into different contextual architectures produces very different agent behaviours.

Each translation is a compression. Something is always lost. The discipline of knowledge engineering is largely the discipline of minimising those losses by being deliberate about what gets lost where, and building systems that compensate for the losses they cannot avoid.

What Gets Lost and Where

At the first translation — from tacit to explicit:

The losses here are the ones Polanyi identified. The perceptual triggers the expert notices without knowing they’re noticing. The negative knowledge, everything they rule out instantly without deliberation. The contextual judgment that tells them when the standard approach doesn’t apply. The feel for what matters in this situation as distinct from situations that look similar on the surface.

These losses are invisible by definition. You cannot see what the elicitation process failed to surface. The explicit knowledge you end up with feels complete, and the gaps become visible only when the system fails on cases that an expert would have handled differently.

Minimising these losses requires the full toolkit of elicitation methodology.

  • Protocol analysis to capture knowledge in action.
  • Critical incident technique to surface the edge cases where tacit knowledge is most visible.
  • Contrastive questioning to force the precision that description alone never produces.
  • Iteration — returning to the expert with specific failures and asking them to explain the gap.

At the second translation — from explicit to structured:

The losses here are architectural. Every representation scheme makes choices about what kinds of knowledge it can hold and what kinds it cannot.

Production rules (IF condition THEN action) are good at encoding procedural knowledge and clear causal relationships. They are poor at encoding the kind of holistic pattern recognition that characterises expert perception. An expert who looks at a loan application and forms an immediate gestalt impression of its quality is not running through a decision tree. Forcing that judgment into a rule structure loses the gestalt.

Ontologies and semantic networks are good at encoding relationships between concepts and the hierarchical structure of a domain. They are poor at encoding the dynamic, context-sensitive weightings that experts apply.

Decision trees are interpretable and easy to validate. They are poor at handling the interaction effects between variables that experienced analysts navigate intuitively.

The choice of representation is a substantive design decision. Pick the wrong one and the knowledge you encoded correctly at the first translation becomes misrepresented at the second. The system is not working with the expert’s knowledge anymore. It is working with an approximation shaped by the limits of the representation.

At the third translation — from structured to encoded context:

This is the translation that the current era of AI development has made newly important and newly complex.

In classical expert systems, the encoded context was the rule base itself. In modern agent architectures, the relationship between knowledge and reasoning is more layered.

The agent has a base model with its own capabilities and biases. It has retrieval systems that determine what knowledge it accesses and when. It has prompting structures that shape how it frames problems and weighs considerations. The structured knowledge is one input among several, and how it interacts with the others determines what the agent actually does.

This means that encoding knowledge correctly is necessary but not sufficient. The context has to be designed so that the agent actually retrieves the right things at the right moments, weights the expert’s judgment appropriately relative to its own base capabilities, applies the structured knowledge to the right kinds of cases and recognises when it is outside the boundaries of what the knowledge covers.
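To make that concrete, here is a minimal sketch of what deliberate context assembly might look like. The names and the keyword-overlap retrieval are illustrative assumptions, not any particular framework’s API; a real system would use proper retrieval.

    from dataclasses import dataclass

    @dataclass
    class KnowledgeChunk:
        text: str        # the structured knowledge itself
        scope: str       # the kinds of cases it was validated on
        standing: str    # "solid", "incomplete", or "escalate"

    def keyword_overlap(case: str, scope: str) -> bool:
        # Crude stand-in for real retrieval (embeddings, BM25, ...)
        return any(word in case.lower() for word in scope.lower().split())

    def build_context(case: str, chunks: list[KnowledgeChunk]) -> str:
        """Assemble a prompt that carries the knowledge and its boundaries,
        so the agent can tell when it is outside them."""
        lines = ["Apply the following expert guidance:"]
        for c in chunks:
            if keyword_overlap(case, c.scope):
                lines.append(f"- {c.text} (scope: {c.scope}; standing: {c.standing})")
        lines.append("If the case falls outside these scopes, say so rather than guess.")
        lines.append(f"Case: {case}")
        return "\n".join(lines)

    chunks = [KnowledgeChunk("DSCR below 1.2 needs senior review",
                             "cyclical industry lending", "solid")]
    print(build_context("cyclical retail borrower, DSCR 1.1", chunks))

The point of the sketch is the boundary signalling: knowledge is presented together with the scope it was validated on, so out-of-scope cases become visible to the agent rather than silently absorbed.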

This contextual design is the critical capability that separates expert agents from generic GPTs. Getting it wrong is the most common failure mode in current agent development: the knowledge is there, but the agent isn’t reasoning with it.

The Difference Between Knowledge and Reasoning

An agent that has knowledge can retrieve relevant information when prompted. It can produce accurate answers to questions within its domain. It can follow the explicit rules it has been given.

This is useful but it is not expertise.

An agent that reasons with knowledge does something harder. It applies what it knows to novel situations: cases that don’t match the training examples, problems that require combining knowledge from different parts of the domain, judgments that depend on understanding not just what the rules say but why they exist and when they stop applying.

The difference is between having a lot of knowledge and having the experience to know when to apply it and how to reason with it.

Most agent implementations are optimised for retrieval and rule-following, and treat reasoning as an emergent property that will appear if you load in enough knowledge. It will not.

Reasoning with knowledge requires that the knowledge be structured in a way that supports reasoning. This is harder, because the causal relationships need to be encoded so that the agent has access to the underlying principles, not just the derived rules.
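One way to picture the difference: a derived rule can carry its underlying principle and limits alongside its condition, so the agent has something to reason with when the condition was never written for the case at hand. A minimal sketch, with illustrative field values:

    from dataclasses import dataclass

    @dataclass
    class Rule:
        condition: str   # the derived rule
        action: str
        principle: str   # the causal reasoning behind the rule
        limits: str      # when it stops applying

    dscr_rule = Rule(
        condition="debt service coverage ratio below 1.2 in a cyclical industry",
        action="flag for senior review",
        principle="thin coverage leaves no buffer when cyclical revenue dips",
        limits="does not apply where revenue is contractually guaranteed",
    )

    # An agent given only condition and action can match cases.
    # An agent also given principle and limits can reason about
    # cases the condition was never written for.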

Main Approaches to Knowledge Representation

How you structure knowledge is one of the most consequential decisions in building an expert system.

Each approach has a different theory of what knowledge is and how reasoning works.

Production rules are the oldest and most widely used representation. Each rule encodes a condition and an action:

IF the debt service coverage ratio is below 1.2 AND the borrower is in a cyclical industry THEN flag for senior review.

Rules are interpretable: you can read them and understand what the system will do. They are also composable: you can create complex behaviours by combining many simple rules.

But such systems are also brittle. They handle the cases they were written for and fail on cases they weren’t.
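As a sketch, the rule above might be encoded like this, assuming a loan application represented as a plain dict with illustrative field names:

    def review_flags(application: dict) -> list[str]:
        """Apply production rules of the form IF condition THEN action."""
        flags = []
        # IF DSCR < 1.2 AND cyclical industry THEN flag for senior review
        if (application["debt_service_coverage"] < 1.2
                and application["industry_is_cyclical"]):
            flags.append("senior_review")
        return flags

    print(review_flags({"debt_service_coverage": 1.1,
                        "industry_is_cyclical": True}))
    # ['senior_review']

Readable and composable, and completely silent on any case the rule authors did not anticipate.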

Semantic networks and ontologies represent knowledge as a graph of concepts and relationships.

  • The concept LOAN is connected to BORROWER, COLLATERAL, INDUSTRY, RISK RATING.
  • The concept BORROWER is connected to FINANCIAL STATEMENTS, CREDIT HISTORY, MANAGEMENT QUALITY.

The network encodes the structure of what entities exist and how they are related. Ontologies extend this by formalising the relationships and making them machine-interpretable.

They are powerful for representing taxonomic knowledge and for enabling a system to reason about relationships between concepts. They are less effective at encoding the dynamic, procedural knowledge of how to actually do something.
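A toy version of the network above, as a labelled graph in plain Python; the relation names are illustrative:

    # Each edge is (subject, relation, object)
    NETWORK = [
        ("LOAN", "has", "BORROWER"),
        ("LOAN", "secured_by", "COLLATERAL"),
        ("LOAN", "operates_in", "INDUSTRY"),
        ("LOAN", "carries", "RISK_RATING"),
        ("BORROWER", "provides", "FINANCIAL_STATEMENTS"),
        ("BORROWER", "has", "CREDIT_HISTORY"),
        ("BORROWER", "assessed_on", "MANAGEMENT_QUALITY"),
    ]

    def related(concept: str) -> list[tuple[str, str]]:
        """Everything a concept connects to, with the relation label."""
        return [(rel, obj) for subj, rel, obj in NETWORK if subj == concept]

    print(related("BORROWER"))
    # [('provides', 'FINANCIAL_STATEMENTS'), ('has', 'CREDIT_HISTORY'),
    #  ('assessed_on', 'MANAGEMENT_QUALITY')]

The graph alone lets a system traverse what connects to what; an ontology adds formal semantics to the relation labels. Neither tells the system how to act.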

Decision trees represent knowledge as a sequence of branching decisions. At each node, a condition is tested and the path branches depending on the answer. The full tree encodes the logic of moving from a starting situation to a conclusion.

Decision trees are highly interpretable. They are also rigid. The structure of the tree determines what distinctions can be made, and changing the structure requires rebuilding from the root.
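A fragment of such a tree, written directly as nested conditions; the thresholds are illustrative:

    def credit_path(dscr: float, cyclical: bool) -> str:
        """Walk one branch of a credit decision tree."""
        if dscr >= 1.5:
            return "approve"
        if dscr >= 1.2:
            # middle band: industry context decides the branch
            return "senior_review" if cyclical else "approve"
        return "decline"

    print(credit_path(1.3, True))   # senior_review

Adding a distinction the tree cannot currently make, say collateral quality, means restructuring the branches themselves.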

Case-based reasoning takes a different approach entirely. Rather than encoding general rules, it stores specific situations with their contexts, the decisions that were made, and the outcomes that resulted.

When a new situation arises, the system retrieves the most similar past cases and reasons by analogy. This approach is particularly good at capturing the kind of contextual, experiential knowledge that rule systems miss. It performs well on situations that resemble past cases and poorly on genuinely novel ones.
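A minimal retrieve-and-compare sketch; the feature-matching similarity is a naive stand-in for whatever a production system would actually use:

    from dataclasses import dataclass

    @dataclass
    class Case:
        features: dict   # the situation as recorded
        decision: str    # what the analyst did
        outcome: str     # what happened

    def most_similar(query: dict, cases: list[Case]) -> Case:
        """Retrieve the past case that best matches the new situation."""
        def score(case: Case) -> int:
            return sum(1 for k, v in query.items() if case.features.get(k) == v)
        return max(cases, key=score)

    history = [
        Case({"industry": "retail", "dscr_band": "low"}, "decline", "loss avoided"),
        Case({"industry": "utility", "dscr_band": "low"}, "approve", "repaid"),
    ]
    print(most_similar({"industry": "retail", "dscr_band": "low"}, history).decision)
    # decline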

Each of these approaches captures something real about how expertise works. Each misses something.

The most sophisticated systems combine multiple representations: rules for the procedural knowledge, ontologies for the domain structure, case bases for the experiential knowledge that resists generalisation.

Building that combination well requires understanding the nature of the knowledge you are trying to encode.
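A compressed sketch of what such a combination might look like in dispatch form, assuming structures like those above; all names are illustrative:

    def assess(application: dict, rules, case_base) -> str:
        """Hybrid dispatch: rules carry the procedural knowledge,
        cases carry the experiential knowledge, and anything
        beyond both is escalated rather than guessed at."""
        for condition, action in rules:            # (predicate, action) pairs
            if condition(application):
                return action
        for features, decision in case_base:       # recorded (situation, decision)
            if features.items() <= application.items():
                return f"by analogy: {decision}"
        return "escalate: outside encoded knowledge"

    rules = [(lambda a: a["dscr"] < 1.0, "decline")]
    case_base = [({"industry": "retail"}, "senior_review")]
    print(assess({"dscr": 1.3, "industry": "retail"}, rules, case_base))
    # by analogy: senior_review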

Why Context Is the Differentiator

Two agents can have access to identical knowledge and produce very different outputs depending on how that knowledge is embedded in context.

Context determines what the agent attends to. An agent whose context foregrounds the explicit rules of a domain will apply those rules consistently. An agent whose context includes the principles behind the rules, the history of cases where the rules failed, and explicit guidance on how to recognise when a situation is outside the rules’ intended scope will reason more like an expert.

Context determines what the agent retrieves. In retrieval-augmented systems, the architecture of how knowledge is chunked, indexed, and retrieved shapes what the agent can access in any given moment. Knowledge that is not retrieved is knowledge the agent cannot use, regardless of whether it exists in the knowledge base.

Context determines how the agent weighs uncertainty.

Experts can tell the difference between a judgment they are confident in and one they are uncertain about, and they act differently in each case. An agent’s context needs to encode where the knowledge is solid, where it is incomplete, where the expert’s judgment would have been to escalate rather than decide.
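One lightweight way to carry that distinction into context is to tag each piece of knowledge with its epistemic standing and make escalation an explicit instruction rather than an afterthought. A sketch, with assumed tags:

    GUIDANCE = [
        # (statement, standing)
        ("DSCR below 1.2 in a cyclical industry needs senior review", "solid"),
        ("Outlook assumptions for emerging sectors", "incomplete"),
        ("Exposures above the internal concentration limit", "escalate"),
    ]

    def render_guidance() -> str:
        """Make the standing of each piece of knowledge explicit,
        so the agent is told where to decide and where to hand off."""
        lines = []
        for statement, standing in GUIDANCE:
            if standing == "escalate":
                lines.append(f"{statement}: do not decide; route to a human.")
            elif standing == "incomplete":
                lines.append(f"{statement}: treat as provisional and state uncertainty.")
            else:
                lines.append(f"{statement}: apply directly.")
        return "\n".join(lines)

    print(render_guidance())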

Getting context right is the final and often neglected step in the translation chain. Teams that invest heavily in knowledge acquisition and representation and then deploy carelessly into context are leaving most of the value on the table. The knowledge is there. The agent is not using it.

Where Most Implementations Break Down

The pattern of failure is consistent enough to be predictable.

The first failure is stopping elicitation too early. The explicit account the expert gives in the first session feels complete. It is not. The team builds on that skeleton and discovers the gaps when the system fails on real cases.

The second failure is choosing the wrong representation for the knowledge being encoded. Teams default to the representation they are most familiar with regardless of whether the knowledge they are encoding is rule-shaped. Pattern recognition forced into rules produces a system that is technically correct and perceptually wrong.

The third failure is neglecting the context layer. The knowledge is encoded correctly but deployed into a contextual architecture that retrieves the wrong things, weights the explicit rules too heavily against base model judgment, or fails to signal when the system is operating outside the boundaries of its knowledge. The agent performs confidently in cases where it should be uncertain.

The fourth failure is treating the translation chain as a one-time project rather than an ongoing process. Knowledge ages. Domains evolve. The credit analyst’s judgment from 2019 may not be the right judgment for 2024. Expert systems that are not maintained become expert systems that encode the expertise of the past and apply it to the present. The losses compound quietly until they become visible in outcomes.

To build better agents, we need to avoid these failures. The mindset shift is from treating agentic development as an IT project to building a discipline: plan for iteration, build validation into the process, and treat gaps as the primary source of information about where the knowledge encoding needs to go next.