Tag: Tacit Knowledge

  • Why Handovers Don’t Work.

    What happens when your star performer decides to quit?

    If you’re like most managers, you congratulate them, ask if there’s any way to keep them, and then get straight to planning their handover.

    Most companies I’ve worked at have elaborate systems for this exact situation, and they work very well — until they don’t. In my experience, no matter how elaborate, even the best ones work only about half the time. Pretty soon you run into the biggest weakness of any handover process: edge cases.

    An edge case is a problem that carries meaningful impact but happens infrequently. Easy to miss, but impossible to skip when it’s staring right at you.

    Now before you judge the documentation process or suggest using AI, let me tell you this: expert knowledge is hard to capture even with the best tools and processes.

    Here’s why.

    A philosopher named Hubert Dreyfus1 spent years studying how humans develop skill. What he found was unexpected. Experts don’t just accumulate knowledge; they reorganise it into heuristics, recognisable patterns, and perception.

    Simply put, expertise is as much learning through lived experience as it is acquiring deeper knowledge.

    Here’s how we know this.

    In the 1940s, a Dutch chess player and psychologist named Adriaan de Groot wanted to understand what separated great chess players from good ones. The obvious assumption was that grandmasters were simply smarter.

    And their intelligence gave them the edge to think further ahead, consider more moves, and process more combinations. That assumption, ironically, is still how many organisations treat expertise today: as innate ability rather than accumulated experience.

    So de Groot ran an experiment. He showed chess positions to players of different skill levels and asked them to think aloud as they analysed the board. What he found overturned that assumption.

    The grandmasters weren’t thinking more. In many cases they were thinking less. They considered fewer moves than intermediate players but the moves they considered were almost always the right ones.

    De Groot couldn’t fully explain the mechanism. That came later, when Herbert Simon and William Chase picked up the work in the 1970s. They discovered that grandmasters had memorised somewhere between 50,000 and 100,000 meaningful board patterns or chunks, as they called them, accumulated over years of play. When a grandmaster looks at a board they aren’t seeing 32 pieces. They’re recognising configurations, the way you recognise a face without consciously processing each individual feature.

    Dreyfus took this further and made it philosophical. A novice follows rules. For the grandmaster, decades of pattern recognition have dissolved into instinct.

    That’s the first clue to our handover problem.

    When you ask an expert to explain their thinking, what you get is a reconstruction built after the fact. The reasoning is plausible, often technically accurate, but it isn’t the actual cognitive process. It’s a story told to explain something that happened faster than language.

    This is what makes handovers structurally impossible to fix after the fact.

    Most handover documents contain reasonable questions about the job, the projects, the clients, the processes. Experts can answer them accurately. But the knowledge that handles edge cases isn’t organised as answers to questions, it’s organised as responses to situations.

    Like the grandmaster reading the position of pieces on a board, it only becomes accessible when the right situation activates it.

    Michael Polanyi captured this with a phrase that has stayed with me:

    ‘We know more than we can tell.’

    But even that undersells it. It’s not just that expert knowledge is hard to articulate.

    It’s that the very process of becoming expert reorganises knowledge into forms that are faster, more contextually sensitive, and more integrated than language allows.

    The next clue comes from Herbert Simon, the same Simon who helped explain the grandmaster’s chunking.

    Studying how people make decisions under complexity, in organisations, in economics, in everyday life, he found that nobody optimises. Not really…

    He wanted to understand how any mind, human or machine, navigates a world too complex to fully process. His answer was bounded rationality: the idea that minds don’t optimise, they satisfice.

    We search until we find something good enough, using the categories and heuristics available to us, and we stop. This is the only viable strategy for a finite mind operating in an infinitely complex world.
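    Satisficing can be sketched in a few lines. The options and the "good enough" threshold below are invented purely for illustration:

```python
import random

# Satisficing vs optimising, in miniature: stop at the first option that
# clears a "good enough" bar instead of scanning everything for the best.
def satisfice(options, good_enough=0.9):
    for checked, value in enumerate(options, start=1):
        if value >= good_enough:
            return checked, value
    return len(options), options[-1]  # settle for the last thing seen

random.seed(0)
options = [random.random() for _ in range(1_000)]

checked, value = satisfice(options)
print(f"satisficed after {checked} of {len(options)} options at {value:.2f}; "
      f"the true best was {max(options):.2f}")
```

    The optimiser pays for the whole search; the satisficer stops as soon as the result is acceptable, which is Simon’s point about finite minds.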

    The implication for expertise is precise: when you sit down to document your knowledge, you’ll find it easy to write down what’s prompted and what’s top of mind. Easy peasy.

    But you won’t be able to consciously think of every edge case unless you’re actively prompted. The knowledge exists in your experience, but it’s never stored as something a finite mind could retrieve on demand.

    So what do you do?

    The answer isn’t a better offboarding process. It’s a different relationship with expertise altogether.

    One that doesn’t wait for the resignation letter. Expert knowledge needs to be treated as something you harvest continuously, while the expert is still performing, while the knowledge is still alive and activated in real situations.

    The most underused tool for this is reflection, and managers can facilitate it in regular 1-2-1s.

    When your star performer has a great quarter, don’t just celebrate and move on. Help them contextualise it by asking them to walk you through the specifics: What did they choose to do? What did they choose not to do, and why? Where did they make a judgment call that isn’t in any playbook?

    Be mindful that this is not an interrogation; it is a way to contextualise knowledge that’s as valuable for the expert as it is for the organisation. Repeated over months and years, it builds something no exit interview ever could.

    One of my favourite experts on the subject is Peter Senge. He spent years studying why some organisations learn and others don’t. His conclusion was that the unit of capability in any organisation isn’t the individual — it’s the team.

    Top performers matter, but an organisation that depends on them is fragile by design. What makes organisations genuinely capable is the degree to which knowledge circulates, gets tested, gets refined.

    The handover problem is really a symptom of this: organisations that treat expertise as individually owned discover what they’ve lost only when the individual leaves.

    Most organisations have many people, each carrying expertise that is partially tacit, partially compiled, partially invisible even to themselves. The moment you try to build systems that run on shared expertise, this stops being an offboarding problem and becomes a competitive advantage.

    1Hubert Dreyfus: https://en.wikipedia.org/wiki/Hubert_Dreyfus

  • From Expertise to Agent Intelligence

    In the late 1980s, a large American bank set out to build a system that could automate the work of its best credit analysts.

    The analysts were good at their jobs – technically competent with genuine expertise. They could look at a loan application, a set of financial statements, an industry context, and form a judgment about credit quality that held up over time. The loans they approved defaulted less often, and their pricing was more accurate. The bank wanted to scale that judgment without scaling the headcount.

    So, they brought in a team to build the system. The team spent months with the analysts. They documented the process, captured the rules and built a model that encoded everything the analysts had been able to articulate about how they made decisions.

    The system worked. It processed applications faster than any human could. It was consistent in ways humans can’t be. It never had a bad day, never got tired, never let the last application colour its view of the next one.

    On the cases that looked like the training data it performed well. On those that didn’t, it failed quietly at the margins, in the accumulation of small misjudgements that took years to show up in the loss data.

    What the team had built was not an expert system. It was a very sophisticated encoding of what the analysts could say about their work, not what they actually did. The gap between those two things is the subject of this article.

    The Translation Chain

    Moving human expertise into an agent is not a single act. It is a chain of translations, each one introducing the possibility of loss.

    The chain has four links.

    Tacit knowledge is where expertise actually lives — in the perceptual sensitivity, the contextual judgment, the feel for when rules apply and when they don’t that experts develop over years of practice.

    This knowledge is held in the body and the mind in ways that are not fully accessible to conscious reflection. It cannot be directly moved anywhere. It has to be surfaced first.

    Explicit knowledge is tacit knowledge made visible through elicitation. An expert’s judgment about a credit risk, drawn out through careful questioning and case analysis, translated into statements that can be written down and examined. This is the first translation, and it is where the most important losses occur. Everything that the elicitation process fails to surface stays tacit and stays out of the system.

    Structured knowledge is explicit knowledge organised into a form that a system can navigate. Rules, hierarchies, decision trees, ontologies — the architecture that gives knowledge a shape a machine can work with. This is the second translation, and it introduces a different kind of loss.

    Structure imposes boundaries. It decides what counts as a relevant category and what doesn’t. It captures the relationships the designer thought to encode and misses the ones they didn’t think of. Every structured representation is a simplification of the explicit knowledge it was built from.

    Encoded context is the structured knowledge loaded into an agent. It’s the prompts, the retrieval systems, the reasoning frameworks that shape how the agent uses what it knows. This is the third translation.

    Context determines not just what an agent knows but how it applies that knowledge.

    The same structured knowledge encoded into different contextual architectures produces very different agent behaviors.

    Each translation is a compression. Something is always lost. The discipline of knowledge engineering is largely the discipline of minimising those losses by being deliberate about what gets lost where, and building systems that compensate for the losses they cannot avoid.

    What Gets Lost and Where

    At the first translation — from tacit to explicit:

    The losses here are the ones Polanyi identified. The perceptual triggers the expert notices without knowing they’re noticing. The negative knowledge, everything they rule out instantly without deliberation. The contextual judgment that tells them when the standard approach doesn’t apply. The feel for what matters in this situation as distinct from situations that look similar on the surface.

    These losses are invisible by definition. You cannot see what the elicitation process failed to surface. The explicit knowledge you end up with feels complete, and the gaps only become visible when the system fails on cases that an expert would have handled differently.

    Minimising these losses requires the full toolkit of elicitation methodology.

    • Protocol analysis to capture knowledge in action.
    • Critical incident technique to surface the edge cases where tacit knowledge is most visible.
    • Contrastive questioning to force the precision that description alone never produces.
    • Iteration — returning to the expert with specific failures and asking them to explain the gap.

    At the second translation — from explicit to structured:

    The losses here are architectural. Every representation scheme makes choices about what kinds of knowledge it can hold and what kinds it cannot.

    Production rules (IF condition THEN action) are good at encoding procedural knowledge and clear causal relationships. They are poor at encoding the kind of holistic pattern recognition that characterises expert perception. An expert who looks at a loan application and forms an immediate gestalt impression of its quality is not running through a decision tree. Forcing that judgment into a rule structure loses the gestalt.

    Ontologies and semantic networks are good at encoding relationships between concepts and the hierarchical structure of a domain. They are poor at encoding the dynamic, context-sensitive weightings that experts apply.

    Decision trees are interpretable and easy to validate. They are poor at handling the interaction effects between variables that experienced analysts navigate intuitively.

    The choice of representation is a substantive design decision. Pick the wrong one and the knowledge you encoded correctly at the first translation becomes misrepresented at the second. The system is not working with the expert’s knowledge anymore. It is working with an approximation shaped by the limits of the representation.

    At the third translation — from structured to encoded context:

    This is the translation that the current era of AI development has made newly important and newly complex.

    In classical expert systems, the encoded context was the rule base itself. In modern agent architectures, the relationship between knowledge and reasoning is more layered.

    The agent has a base model with its own capabilities and biases. It has retrieval systems that determine what knowledge it accesses and when. It has prompting structures that shape how it frames problems and weighs considerations. The structured knowledge is one input among several, and how it interacts with the others determines what the agent actually does.

    This means that encoding knowledge correctly is necessary but not sufficient. The context has to be designed so that the agent actually retrieves the right things at the right moments, weights the expert’s judgment appropriately relative to its own base capabilities, applies the structured knowledge to the right kinds of cases and recognises when it is outside the boundaries of what the knowledge covers.

    This is a critical capability that separates expert agents from generic GPTs. And getting this wrong is the most common failure mode in current agent development.

    Without it, the knowledge is there. The agent isn’t reasoning with it.

    The Difference Between Knowledge and Reasoning

    An agent that has knowledge can retrieve relevant information when prompted. It can produce accurate answers to questions within its domain. It can follow the explicit rules it has been given.

    This is useful but it is not expertise.

    An agent that reasons with knowledge does something harder. It applies what it knows to novel situations: cases that don’t match the training examples, problems that require combining knowledge from different parts of the domain, judgments that depend on understanding not just what the rules say but why they exist and when they stop applying.

    The difference is between having a lot of knowledge and having the experience to know when to apply it and how to reason with it.

    Most agent implementations are optimised for retrieval and rule-following, and treat reasoning as an emergent property that will appear if you load in enough knowledge. It does not.

    Reasoning with knowledge requires that the knowledge be structured in a way that supports reasoning. This is considerably harder, because the causal relationships need to be encoded so that the agent has access to the underlying principles, not just the derived rules.

    Main Approaches for Knowledge Representation

    How you structure knowledge is one of the most consequential decisions in building an expert system.

    Each approach has a different theory of what knowledge is and how reasoning works.

    Production rules are the oldest and most widely used representation. Each rule encodes a condition and an action:

    IF the debt service coverage ratio is below 1.2 AND the borrower is in a cyclical industry THEN flag for senior review.

    Rules are interpretable: you can read them and understand what the system will do. They are also composable: you can create complex behaviours by combining many simple rules.

    But such systems are also brittle. They handle the cases they were written for and fail on cases they weren’t.
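    The rule above can be sketched as a tiny forward-firing rule set. The field names and threshold come from the example rule; everything else is an illustrative assumption, not a real underwriting system:

```python
# Minimal production-rule sketch. Each rule checks a condition on the
# application and returns an action, or None if it doesn't fire.
def flag_cyclical_low_dscr(app):
    """IF DSCR < 1.2 AND the borrower is in a cyclical industry
    THEN flag for senior review."""
    if app["dscr"] < 1.2 and app["industry_cyclical"]:
        return "flag for senior review"
    return None

RULES = [flag_cyclical_low_dscr]

def evaluate(app):
    # Fire every matching rule and collect the resulting actions.
    return [action for rule in RULES if (action := rule(app)) is not None]

print(evaluate({"dscr": 1.1, "industry_cyclical": True}))
# A case no rule was written for simply produces no actions at all --
# the brittleness described above.
print(evaluate({"dscr": 1.1, "industry_cyclical": False}))
```

    The composability and the brittleness are both visible: adding behaviour means appending rules, and anything outside the rules falls through silently.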

    Semantic networks and ontologies represent knowledge as a graph of concepts and relationships.

    • The concept LOAN is connected to BORROWER, COLLATERAL, INDUSTRY, RISK RATING.
    • The concept BORROWER is connected to FINANCIAL STATEMENTS, CREDIT HISTORY, MANAGEMENT QUALITY.

    The network encodes the structure of what entities exist and how they are related. Ontologies extend this by formalising the relationships and making them machine-interpretable.

    They are powerful for representing taxonomic knowledge and for enabling a system to reason about relationships between concepts. They are less effective at encoding the dynamic, procedural knowledge of how to actually do something.
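    The loan network above can be sketched as a small labelled graph. The edge labels here are assumptions added for illustration:

```python
# Toy semantic network: concepts connected by labelled relationships.
NETWORK = {
    "LOAN": {"has": ["BORROWER", "COLLATERAL", "RISK RATING"],
             "in": ["INDUSTRY"]},
    "BORROWER": {"has": ["FINANCIAL STATEMENTS", "CREDIT HISTORY",
                         "MANAGEMENT QUALITY"]},
}

def reachable(concept):
    """Every concept connected to `concept`, directly or indirectly."""
    seen, stack = set(), [concept]
    while stack:
        node = stack.pop()
        for targets in NETWORK.get(node, {}).values():
            for target in targets:
                if target not in seen:
                    seen.add(target)
                    stack.append(target)
    return seen

# Taxonomic reasoning: everything a LOAN's assessment ultimately touches.
print(sorted(reachable("LOAN")))
```

    The graph answers relationship questions well; what it cannot express is the procedure for weighing those factors in a specific case.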

    Decision trees represent knowledge as a sequence of branching decisions. At each node, a condition is tested and the path branches depending on the answer. The full tree encodes the logic of moving from a starting situation to a conclusion.

    Decision trees are highly interpretable. They are also rigid. The structure of the tree determines what distinctions can be made, and changing the structure requires rebuilding from the root.
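    A decision tree over the same invented loan features might look like this; the thresholds are made up for illustration, not real policy:

```python
# Tiny decision tree: internal nodes test a condition, leaves are
# conclusions. Thresholds are invented for illustration.
TREE = {
    "test": lambda app: app["dscr"] >= 1.2,
    "yes": {"test": lambda app: app["ltv"] <= 0.8,
            "yes": "approve",
            "no": "senior review"},
    "no": "decline",
}

def decide(node, app):
    if isinstance(node, str):  # leaf: return the conclusion
        return node
    return decide(node["yes" if node["test"](app) else "no"], app)

print(decide(TREE, {"dscr": 1.3, "ltv": 0.7}))
# Adding a new distinction (say, industry) means restructuring the tree
# from the point where it matters downward -- the rigidity noted above.
```

    Every path is readable end to end, which is exactly why these structures validate easily and change painfully.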

    Case-based reasoning takes a different approach entirely. Rather than encoding general rules, it stores specific situations with their contexts, the decisions that were made, and the outcomes that resulted.

    When a new situation arises, the system retrieves the most similar past cases and reasons by analogy. This approach is particularly good at capturing the kind of contextual, experiential knowledge that rule systems miss. It performs well on situations that resemble past cases and poorly on genuinely novel ones.
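    Case-based reasoning can be sketched as nearest-neighbour retrieval over stored cases. The features, cases, and distance measure below are all invented for illustration:

```python
# Stored cases: (features, decision that was made).
CASES = [
    ({"dscr": 1.5, "ltv": 0.6}, "approve"),
    ({"dscr": 0.9, "ltv": 0.9}, "decline"),
    ({"dscr": 1.2, "ltv": 0.8}, "senior review"),
]

def distance(a, b):
    # Squared Euclidean distance over the shared numeric features.
    return sum((a[k] - b[k]) ** 2 for k in a)

def decide_by_analogy(new_case):
    # Retrieve the most similar past case and reuse its decision.
    _, decision = min(CASES, key=lambda case: distance(case[0], new_case))
    return decision

print(decide_by_analogy({"dscr": 1.45, "ltv": 0.65}))
# A genuinely novel case still gets mapped to its nearest neighbour,
# however distant -- which is where this approach degrades.
```

    No general rule is ever written down; the knowledge lives entirely in the case library and the similarity measure.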

    Each of these approaches captures something real about how expertise works. Each misses something.

    The most sophisticated systems combine multiple representations using rules for the procedural knowledge, ontologies for the domain structure, case bases for the experiential knowledge that resists generalisation.

    Building that combination well requires understanding the nature of the knowledge you are trying to encode.

    Why Context Is the Differentiator

    Two agents can have access to identical knowledge and produce very different outputs depending on how that knowledge is embedded in context.

    Context determines what the agent attends to. An agent whose context foregrounds the explicit rules of a domain will apply those rules consistently. An agent whose context includes the principles behind the rules, the history of cases where the rules failed, and explicit guidance on how to recognize when a situation is outside the rules’ intended scope will reason more like an expert.

    Context determines what the agent retrieves. In retrieval-augmented systems, the architecture of how knowledge is chunked, indexed, and retrieved shapes what the agent can access in any given moment. Knowledge that is not retrieved is knowledge the agent cannot use, regardless of whether it exists in the knowledge base.
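    A toy sketch of that retrieval point: only what the retriever surfaces reaches the agent. The chunks below are invented, and simple keyword overlap stands in for a real embedding-based similarity search:

```python
# An invented knowledge base, already split into chunks.
CHUNKS = [
    "Cyclical industries need a DSCR buffer above the standard 1.2 threshold.",
    "Collateral valuations older than twelve months must be refreshed.",
    "Escalate to senior review when management quality cannot be assessed.",
]

def tokens(text):
    return set(text.lower().replace(".", " ").split())

def retrieve(query, k=1):
    # Rank chunks by keyword overlap with the query; return the top k.
    q = tokens(query)
    ranked = sorted(CHUNKS, key=lambda chunk: -len(q & tokens(chunk)))
    return ranked[:k]

# Only the retrieved chunk enters the context window; for this query the
# other two effectively don't exist, however relevant they might be.
print(retrieve("cyclical industry dscr threshold"))
```

    Chunking, indexing, and ranking choices all change what `retrieve` returns, which is why identical knowledge bases can yield very different agents.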

    Context determines how the agent weighs uncertainty.

    Experts can tell the difference between a judgment they are confident in and one they are uncertain about, and they act differently in each case. An agent’s context needs to encode where the knowledge is solid, where it is incomplete, where the expert’s judgment would have been to escalate rather than decide.

    Getting context right is the final and often neglected step in the translation chain. Teams that invest heavily in knowledge acquisition and representation and then deploy carelessly into context are leaving most of the value on the table. The knowledge is there. The agent is not using it.

    Where Most Implementations Break Down

    The pattern of failure is consistent enough to be predictable.

    The first failure is stopping elicitation too early. The explicit account the expert gives in the first session feels complete. It is not. The team builds on the skeleton and discovers the gaps when the system fails on real cases.

    The second failure is choosing the wrong representation for the knowledge being encoded. Teams default to the representation they are most familiar with regardless of whether the knowledge they are encoding is rule-shaped. Pattern recognition forced into rules produces a system that is technically correct and perceptually wrong.

    The third failure is neglecting the context layer. The knowledge is encoded correctly but deployed into a contextual architecture that retrieves the wrong things, weights the explicit rules too heavily against base model judgment, or fails to signal when the system is operating outside the boundaries of its knowledge. The agent performs confidently in cases where it should be uncertain.

    The fourth failure is treating the translation chain as a one-time project rather than an ongoing process. Knowledge ages. Domains evolve. The credit analyst’s judgment from 2019 may not be the right judgment for 2024. Expert systems that are not maintained become expert systems that encode the expertise of the past and apply it to the present. The losses compound quietly until they become visible in outcomes.

    To build better agents we need to avoid these failures. The mindset shift is from treating agentic development as an IT project to building a discipline: plan for iteration, build validation into the process, and treat gaps as the primary source of information about where the knowledge encoding needs to go next.

  • Elicitation: How to learn from experts

    Imagine that you’re sitting across from a senior underwriter at a large insurance company. The underwriter has thirty years of experience. She can look at a commercial risk and within minutes form a view on whether it’s a good risk or a bad one, what the right price is, and what conditions to attach. Her loss ratio is consistently better than her peers. The company would not want to lose what she knows.

    You’re brought in to capture that knowledge and encode it into a system that can replicate her judgment at scale. You open your notebook and ask the obvious question:

    How do you decide whether a risk is good or bad?

    She thinks for a moment. Then she talks about financial strength, about management quality, about the physical condition of the assets, about claims history, about industry sector trends. She is articulate, thoughtful, and thorough. An hour later you have pages full of notes and a clear framework.

    You go back to your lab and build it into the system.

    The system is tested and performs reasonably well on straightforward cases. On the complex ones, where judgement is needed, it underperforms. It misses things she would have caught. It prices risks she would have declined. It declines risks she would have taken.

    You go back to her to figure out what the system got wrong. You ask her what she would have done differently.

    She looks at it for a moment. Then she says something that changes everything: ‘I would never have written that risk in the first place. Something about it just feels wrong.’

    You ask her what felt wrong but she cannot say.

    This is where naive elicitation ends. And where the discipline of knowledge elicitation begins.

    Why Asking Doesn’t Work

    The failure in that interview room is not a failure of effort. The problem is structural and it goes back to everything Michael Polanyi understood about the nature of expertise.

    When you ask an expert to explain what they know, you are asking them to do something that expertise is specifically designed not to do.

    Expertise is the compression of thousands of experiences into fast, automatic judgment.

    It works because it has moved below the level of conscious deliberation. The expert is not running through a checklist when they evaluate a risk. They are perceiving a situation and responding to it in just the same way a native speaker responds to a sentence without parsing its grammar.

    Asking them to articulate that process is like asking someone to explain how they ride a bicycle while they are riding it. The articulation interferes with the performance. And what comes out is a plausible, well-intentioned, genuinely believed account of how they think they decide, which is not the same thing as how they actually decide.

    This reconstruction has a specific shape. It tends to be more logical, more sequential, and more complete than the real process. It leaves out the perceptual triggers, things the expert notices that they don’t realize they’re noticing.

    It leaves out the negative knowledge: all the things they ruled out instantly without conscious deliberation.

    It leaves out the contextual judgment: the feel for when the standard approach doesn’t apply.

    What remains is the skeleton of expertise without the flesh. Building a system on that skeleton produces something that works on textbook cases and fails on the ones that matter most.

    The discipline of elicitation exists because the direct approach consistently fails. The lesson: You cannot get at tacit knowledge by asking for it directly. You have to come at it sideways.

    The Toolkit

    Over decades of practice, knowledge engineers developed a set of techniques for surfacing what experts cannot easily volunteer.

    Each one approaches the problem from a different angle. Each one is designed to bypass the reconstructed account and get closer to what the expert actually does.

    Thinking Aloud — Protocol Analysis

    The most direct way to get at tacit knowledge is not to ask about it after the fact but to capture it in real time.

    In protocol analysis, the expert is given a real problem to solve and asked to narrate their thinking as they go. They’re not asked to explain their reasoning, but simply to say whatever is in their mind as they work through it. The knowledge engineer sits alongside and records everything.

    What comes out is nothing like the clean account you get in an interview.

    It is messier, more fragmented, more associative. The expert notices things they don’t explain. They hesitate in places they can’t account for. They reject options for reasons that turn out to be revealing. The noise in the protocol is often more informative than the signal, because this is where the tacit knowledge leaks through.

    The technique was developed by Herbert Simon and Allen Newell in the 1950s and 1960s as a method for studying problem solving.

    They were interested in the cognitive processes underlying human reasoning and found that verbal protocols gave them access to those processes in a way that no other method could. Knowledge engineering borrowed the technique because it works for the same reason. It captures knowledge in action rather than knowledge in reflection.

    The limitation is that not all expertise is verbal. Some experts go quiet when they are doing their best work. The thinking that produces the best judgment is sometimes the thinking that produces no words at all.

    Laddering — Getting Below the Surface

    Laddering is a technique borrowed from psychology where it was originally developed to understand personal values and how they connect to behavior.

    In a knowledge elicitation context it works like this: the expert gives an account of why they made a decision, and the knowledge engineer asks why that matters. The expert gives another reason, and the knowledge engineer asks why that matters. The conversation moves down through layers of reasoning until it reaches a foundational belief or a principle that the expert holds but has rarely been asked to articulate.

    The value of laddering is that it surfaces the causal structure underneath explicit reasoning.

    Experts can usually tell you what they did. They can often tell you why in immediate terms. What they rarely surface unprompted is the deeper structure of beliefs and judgments that makes their reasoning work the way it does. Laddering pulls that structure into the open.

    The technique requires patience and a degree of persistence that can feel uncomfortable. Asking why repeatedly can seem like you are questioning the expert’s judgment rather than trying to understand it.

    Repertory Grid — Making Implicit Distinctions Explicit

    One of the most powerful things experts do is make distinctions that novices can’t see. The senior underwriter doesn’t just evaluate risks; she also categorises them in ways that carry implicit judgments about quality, reliability, and probability of loss.

    Those categories are often tacit. She uses them fluently without being able to name them.

    Repertory grid technique, developed by the psychologist George Kelly in the 1950s, is designed to surface exactly these implicit distinctions. The process works by presenting the expert with sets of three things (three risks, three clients, three cases) and asking them to identify how two of the three are similar to each other and different from the third.

    The expert names the dimension of difference. Then the knowledge engineer asks them to rate all the items in their domain on that dimension.

    What emerges, across many rounds of this exercise, is a map of the expert’s implicit categorisation system. The dimensions along which they actually organise their domain. The grid makes visible a structure of judgment that exists in the expert’s mind but has never been externalised.
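    The grid itself can be sketched as a small data structure. The constructs, items, and ratings below are entirely invented for illustration:

```python
# Constructs elicited from triads, each rating every item on a 1-5 scale
# between the construct's two poles (all values invented).
GRID = {
    "owner-managed vs corporate": {"Risk A": 1, "Risk B": 5,
                                   "Risk C": 1, "Risk D": 4},
    "stable sector vs cyclical":  {"Risk A": 2, "Risk B": 5,
                                   "Risk C": 1, "Risk D": 5},
}

def profile(item):
    # An item's position along every elicited construct.
    return [ratings[item] for ratings in GRID.values()]

# Items with similar profiles fall into the same implicit category,
# even if the expert has never named that category out loud.
print(profile("Risk B"), profile("Risk D"))  # similar profiles
print(profile("Risk A"), profile("Risk C"))  # a different cluster
```

    Comparing profiles across many elicited constructs is what turns scattered triad judgments into a visible map of the expert’s categorisation system.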

    The technique is particularly useful when the knowledge engineer suspects that the expert’s explicit account of how they decide doesn’t match how they actually decide. The grid bypasses the reconstruction and gets at the actual cognitive structure underneath.

    Critical Incident Technique

    Abstract questions produce abstract answers. Concrete questions produce concrete knowledge.

    The critical incident technique, developed by the psychologist John Flanagan in the 1950s, works by asking experts not to describe their general approach but to recall specific cases where they made a consequential decision, particularly ones where things went well or badly in ways that were not predictable from the standard approach.

    A critical incident interview sounds like this: Tell me about a time when you looked at a risk and knew immediately it was wrong but couldn’t have explained why at the time. What did you eventually figure out? What were you noticing that you didn’t know you were noticing?

    What the technique exploits is the difference between episodic memory and semantic memory.

    Asking experts to describe their general knowledge activates semantic memory which is where the reconstructed, idealised account lives.

    Asking them to recall a specific incident activates episodic memory which is closer to the actual experience, with all its texture and detail intact.

    The incidents that are most valuable are often the ones where the standard approach failed, where the expert’s judgment was later proven right for reasons they didn’t fully understand at the time, cases where they made a mistake and figured out why.

    These edge cases are where the tacit knowledge is most visible, because they are the cases where the expert had to work harder than usual to know what to do.

    Contrastive Questioning

    When you ask an expert to describe what they do, they give you an account of the general case. When you ask them to explain the difference between two specific situations they are forced into a level of precision that general description never reaches.

    Contrastive questioning works because comparison activates a different cognitive process than description. To explain a difference, the expert has to identify the specific features that drove the distinction. Those features are often things they noticed without realising they were noticing them.

    The technique is most powerful when the two cases being compared are superficially similar but produced different judgments.

    The underwriter who approved one manufacturing risk and declined another that looks almost identical on paper is sitting on a piece of tacit knowledge that a contrastive question can help surface.

    What Good Elicitation Looks Like

    All techniques are tools. What makes elicitation work is the way you use them and the relationship you build with the expert in order to use them well.

    Good elicitation looks like a genuine collaboration.

    Good elicitation is not information extraction. You need to work alongside an expert to surface something that they can’t see clearly alone.

    The expert has the knowledge. As a knowledge engineer you have the methods to make it visible. The work requires both.

    This means the knowledge engineer has to earn the expert’s trust. Experts are often sceptical of the process and not unreasonably. They have been asked before to explain themselves and found the process frustrating or reductive. They are protective of their judgment and wary of having it misrepresented.

    The knowledge engineer who comes in with a clipboard and a fixed agenda will get the reconstructed account. The one who comes with genuine curiosity and patience will get something closer to the truth.

    Good elicitation is also iterative. No single session surfaces everything. A knowledge engineer builds a model of what they think the expert knows, tests it against cases, finds where it fails, and goes back to the expert with specific questions about the gaps. The process cycles between drawing out knowledge and checking whether what has been drawn out is actually what the expert does.
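That elicit-test-refine loop can be sketched in a few lines. Everything here is a hypothetical illustration: a toy rule standing in for the current explicit model of the expert’s decisions, and a handful of made-up recorded cases with the expert’s actual outcomes.

```python
# A minimal sketch of the elicit-test-refine cycle. The rule and the
# cases are hypothetical illustrations, not a real underwriting model.

def model(case):
    # Current best explicit reconstruction of the expert's decision rule.
    return "decline" if case["hazard_score"] > 7 else "approve"

# Recorded cases with the expert's actual decisions.
cases = [
    {"id": 1, "hazard_score": 3, "expert_decision": "approve"},
    {"id": 2, "hazard_score": 9, "expert_decision": "decline"},
    {"id": 3, "hazard_score": 5, "expert_decision": "decline"},
]

# Cases where the model disagrees with the expert are the gaps: each one
# becomes a specific question for the next elicitation session.
gaps = [c for c in cases if model(c) != c["expert_decision"]]
for c in gaps:
    print(f"Case {c['id']}: model says {model(c)}, "
          f"expert said {c['expert_decision']}")
```

The point of the sketch is the shape of the loop, not the rule: each disagreement sends the knowledge engineer back to the expert with a concrete case rather than an abstract question, and the model is revised until the gaps stop appearing.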

    The Knowledge Engineer as Interviewer

    Knowledge engineers need enough domain knowledge to follow what the expert is saying.

    This is an essential ingredient of being an effective interviewer. It helps you know when an account is incomplete, recognise the gaps, and ask the follow-up question that opens the right door.

    Be mindful not to fill in the gaps yourself and import your own understanding where the expert’s actual knowledge should go.

    Beyond sufficient technical skills you need the ability to listen carefully, to sit with silence, to ask the question that hasn’t been asked yet.

    Above all you need patience for a process that moves slowly and produces results that are often ambiguous.

    In essence, you need a whole lot of intellectual humility to understand how the expert understands the domain. Your own intuitions about how things work are a liability as much as an asset.

    The best knowledge engineers are the ones who can hold their own understanding lightly enough to see what the expert is actually doing rather than what they expect the expert to be doing.

  • The Tacit Knowledge Problem

    In the 1970s, a team of researchers set out to build a computer system that could teach surgery.

    The idea was straightforward. Find the best surgeons in the world. Record everything they did and turn it into a training program that could transmit world-class surgical skill to the next generation of doctors.

    They found the surgeons. They recorded everything. And then they ran into a problem that nobody had anticipated.

    The best surgeons couldn’t explain what made them good.

    When they sat down and tried to articulate what they were doing and why, the accounts they gave were incomplete. They described the mechanics. But what they couldn’t describe was the accumulated judgment of ten thousand procedures compressed into instinct that had become invisible even to themselves.

    The researchers had set out to capture expertise. What they discovered instead was that expertise, at its highest levels, resists capture.

    That discovery has a name. It is one of the most important and most underappreciated ideas in the history of human knowledge. And understanding it is the only way to understand why building systems that perform at expert level is so much harder than it looks.

    The Philosopher Who Saw It First

    Michael Polanyi was not the kind of person you would expect to reshape the field of artificial intelligence. He was a Hungarian-born chemist who had fled Nazi Germany in the 1930s, eventually landing at the University of Manchester where he spent the second half of his career not doing chemistry but thinking about what scientific knowledge actually is, how it develops, and how it moves from one generation of scientists to the next.

    By the 1950s Polanyi had become increasingly troubled by a dominant assumption in Western philosophy of knowledge, the idea that genuine knowledge is knowledge that can be made fully explicit.

    That if you truly understand something you should be able to state it clearly, defend it logically, and transmit it to anyone willing to pay attention. Knowledge, in this view, is essentially propositional. It lives in sentences. It can be written down.

    Polanyi thought this was fundamentally wrong.

    And he spent the better part of two decades building the argument against it.

    His most concentrated statement of that argument came in 1966 in a slim book called The Tacit Dimension. The book opens with a single sentence that contains the entire problem: “We can know more than we can tell.”

    It sounds simple. It is not.

    What Polanyi Actually Meant

    Polanyi’s argument begins with perception, the most basic act of knowing something.

    Consider how you recognise a face. You can look at a photograph of someone you know and identify them instantly, across years, across changes in weight and hair and age.

    You are doing something genuinely sophisticated – processing a complex pattern and matching it against memory with remarkable reliability. But you wouldn’t be able to explain to someone exactly how you did it: which features you used, how you weighted them, what the decision rule was. Not because the knowledge isn’t there.

    Because the knowledge doesn’t exist in a form that can be stated.

    Polanyi called this tacit knowledge: knowledge that we hold and use reliably but cannot fully articulate. He distinguished it from explicit knowledge, i.e. knowledge that can be stated, written down, and transmitted through language and instruction.

    The distinction sounds straightforward but its implications are radical. Because Polanyi’s claim was not just that some knowledge happens to be tacit. His claim was that tacit knowledge is foundational.

    He believed that all explicit knowledge rests on a substrate of tacit knowledge that can never be fully surfaced. You cannot make everything explicit because the act of making something explicit always relies on tacit capacities that are doing the work underneath.

    He illustrated this with what he called the subsidiary-focal distinction. When you use a hammer, your attention is focused on the nail. The hammer itself (its weight, its balance, the feel of it in your hand) is present to you, but subsidiarily – you are not focusing on it. You are focusing through it. If you shift your attention to the hammer itself, you lose your grip on the task. The tacit knowledge that makes you competent with the tool only functions when it stays tacit.

    This is why expertise is so hard to teach and so hard to transfer.

    The expert is not withholding anything. They are focusing through their knowledge, not on it. Asking them to articulate it is like asking them to stare at the hammer instead of the nail. The act of articulation disrupts the very thing you are trying to capture.

    The Iceberg

    The most useful way to think about expertise is as an iceberg.

    Above the surface sits explicit knowledge, everything that can be stated, taught, written down, encoded in manuals and training programs and textbooks. This is the knowledge that moves easily. You can put it in a document and send it across the world. It survives the death of the person who held it. It can be transmitted to ten people as easily as to one.

    Below the surface sits tacit knowledge. It’s vastly larger, and entirely invisible from above. This is the knowledge that makes the difference between someone who knows the rules and someone who can actually perform.

    It includes:

    Perceptual knowledge: the ability to notice what matters. The experienced radiologist who sees something in a scan that a resident misses. The fund manager who reads a room full of executives and knows within minutes whether the business is actually healthy. They are perceiving things that are genuinely there, but their perception has been trained by years of experience into a sensitivity that cannot be directly transmitted.

    Procedural knowledge: knowing how, as distinct from knowing that. You can read every book ever written about riding a bicycle without being able to ride one. The knowledge of how to ride lives in the body, in the calibration of balance and response that only practice builds. Professional skills work the same way. The senior copywriter who reads a brief and knows immediately what angle to take is not applying a rule. They are drawing on something built from thousands of briefs processed over years.

    Contextual judgment: knowing when the rules apply and when they don’t. This is perhaps the most valuable and most elusive dimension of expertise. Textbooks describe how things work in general. Experts know how they work in this situation, with these constraints, given what happened last time. That situational sensitivity is almost impossible to encode because it is not a rule at all.

    Knowing what to ignore: perhaps the least discussed but most practically important. Experts are not just better at processing relevant information. They are better at filtering out irrelevant information. They have learned, through experience, what doesn’t matter. That negative knowledge is as hard to transfer as the positive kind.

    When Tacit Knowledge Is Lost

    The organisational implications of tacit knowledge loss are severe and largely invisible until it is too late.

    When an expert leaves an organisation what walks out the door is not just the explicit knowledge they held. That part, if the organisation was reasonably diligent, has probably been documented somewhere. What walks out the door is everything below the surface. The perceptual sensitivity built over decades. The contextual judgment that knew when the documented process didn’t apply. The feel for what mattered and what didn’t.

    This loss is structurally invisible because explicit knowledge is easy to see and tacit knowledge is not.

    Organisations inventory what they can measure. They document processes, capture decisions, build knowledge bases. And then they are surprised when the person who wrote the process document leaves and everything quietly starts going wrong – gradually, in the accumulation of small decisions that the documentation doesn’t cover and the new person doesn’t know how to make.

    NASA experienced this in one of its most documented forms. After the Apollo program ended in the early 1970s, the organisation went through waves of restructuring and downsizing.

    When NASA began planning a return to the moon decades later, it discovered that significant tacit knowledge about how to build certain components had simply ceased to exist within the organisation. The documentation was there, but the embodied, practiced, judgment-laden knowledge was not.

    This pattern repeats across industries. New graduates, however well trained, cannot replicate what the experienced staff did, because that staff could do it without being able to say why.

    Why This Problem Is Acute Now

    For most of the history of organisations, tacit knowledge loss was a serious but manageable problem. It was addressed, imperfectly, through apprenticeship and practice rather than instruction.

    The medieval guild system was essentially a tacit knowledge transfer mechanism. So is the residency system in medicine or the partnership track in professional services. You spend years watching someone who knows what they’re doing, and eventually some of what they know moves into you.

    Apprenticeship is slow and expensive. But it works, because tacit knowledge can be transferred through observation, practice under guidance, and through the accumulated experience of being in the room when an expert makes a hundred decisions and slowly developing a feel for why.

    The agentic era has broken this in a specific and important way.

    The promise of AI agents is that you can encode expert-level performance into a system and deploy it faster, cheaper, and more consistently than any human expert. The appeal is obvious. The problem is that the entire premise depends on being able to get the expertise into the system in the first place.

    The real question is how to transfer expertise that is mostly tacit.

    This means that most agent implementations are not actually encoding expertise. They are encoding the explicit layer – the documented processes, the stated rules, the guidebook version of how things work.

    Most organisations have deployed that explicit layer at scale and called it an expert system.

    What they have actually built is a very fast, very consistent, very scalable average.

    It performs well on the cases that the explicit rules cover. It fails, sometimes catastrophically, on the cases that require the judgment, the contextual sensitivity, the feel for when the rules don’t apply that lives below the surface of what any expert can easily say.

    The gap between a competent agent and an expert-level agent is almost entirely a tacit knowledge gap.

    It is not a technology or a model problem. It is the same problem the surgical researchers hit in the 1970s, the same problem Feigenbaum hit sitting with chemists in the 1960s, the same problem Polanyi was describing in 1966.

    We can know more than we can tell. And until you have a method for surfacing what can’t easily be told, you are building on the visible part of the iceberg and wondering why the system keeps running into things it didn’t see coming.

    What This Means in Practice

    Polanyi’s insight makes this problem legible. And this is where every solution begins.

    If tacit knowledge cannot be extracted through direct questioning, it can be approached through other means. Through observation rather than interview. Through cases rather than principles. Through contrast rather than description. Through the careful, patient work of watching experts perform and finding ways to surface the knowledge they are focusing through rather than on.

    That work has a name and a methodology.

    It is the discipline of elicitation and it is where the practical response to the tacit knowledge problem lives.

    But before elicitation can work, you have to understand what you are trying to elicit. You have to know that the knowledge you need is not sitting on the surface waiting to be asked for.

    You have to know that the iceberg is mostly underwater, and that the part you can see is not representative of the part you can’t.

    That understanding is what Polanyi gave us. And it is why, sixty years after he wrote it, his single sentence still contains everything you need to know about why this problem is hard.

    We can know more than we can tell.