Elicitation: How to learn from experts

Imagine that you’re sitting across from a senior underwriter at a large insurance company. The underwriter has thirty years of experience. She can look at a commercial risk and within minutes form a view on whether it’s a good risk or a bad one, what the right price is, and what conditions to attach. Her loss ratio is consistently better than her peers. The company would not want to lose what she knows.

You’re brought in to capture that knowledge. To encode it into a system that can replicate her judgment at scale. You open your notebook and ask the obvious question:

How do you decide whether a risk is good or bad?

She thinks for a moment. Then she talks about financial strength, about management quality, about the physical condition of the assets, about claims history, about industry sector trends. She is articulate, thoughtful, and thorough. An hour later you have pages full of notes and a clear framework.

You go back to your lab and build it into the system.

The system is tested and performs reasonably well on straightforward cases. On the complex ones, where judgment is needed, it underperforms. It misses things she would have caught. It prices risks she would have declined. It declines risks she would have taken.

You go back to her to figure out what the system got wrong. You ask her what she would have done differently.

She looks at it for a moment. Then she says something that changes everything: I would never have written that risk in the first place. Something about it just feels wrong.

You ask her what felt wrong but she cannot say.

This is where naive elicitation ends. And where the discipline of knowledge elicitation begins.

Why Asking Doesn’t Work

The failure in that interview room is not a failure of effort. The problem is structural and it goes back to everything Michael Polanyi understood about the nature of expertise.

When you ask an expert to explain what they know, you are asking them to do something that expertise is specifically designed not to do.

Expertise is the compression of thousands of experiences into fast, automatic judgment.

It works because it has moved below the level of conscious deliberation. The expert is not running through a checklist when they evaluate a risk. They are perceiving a situation and responding to it in just the same way a native speaker responds to a sentence without parsing its grammar.

Asking them to articulate that process is like asking someone to explain how they ride a bicycle while they are riding it. The articulation interferes with the performance. And what comes out is a plausible, well-intentioned, genuinely believed account of how they think they decide, which is not the same thing as how they actually decide.

This reconstruction has a specific shape. It tends to be more logical, more sequential, and more complete than the real process. It leaves out the perceptual triggers, things the expert notices that they don’t realize they’re noticing.

It leaves out the negative knowledge: all the things they ruled out instantly without conscious deliberation.

It leaves out the contextual judgment: the feel for when the standard approach doesn’t apply.

What remains is the skeleton of expertise without the flesh. Building a system on that skeleton produces something that works on textbook cases and fails on the ones that matter most.

The discipline of elicitation exists because the direct approach consistently fails. The lesson: You cannot get at tacit knowledge by asking for it directly. You have to come at it sideways.

The Toolkit

Over decades of practice, knowledge engineers developed a set of techniques for surfacing what experts cannot easily volunteer.

Each one approaches the problem from a different angle. Each one is designed to bypass the reconstructed account and get closer to what the expert actually does.

Thinking Aloud — Protocol Analysis

The most direct way to get at tacit knowledge is not to ask about it after the fact but to capture it in real time.

In protocol analysis, the expert is given a real problem to solve and asked to narrate their thinking as they go. They’re not asked to explain their reasoning, but simply to say whatever is in their mind as they work through it. The knowledge engineer sits alongside and records everything.

What comes out is nothing like the clean account you get in an interview.

It is messier, more fragmented, more associative. The expert notices things they don’t explain. They hesitate in places they can’t account for. They reject options for reasons that turn out to be revealing. The noise in the protocol is often more informative than the signal, because this is where the tacit knowledge leaks through.

The technique was developed by Herbert Simon and Allen Newell in the 1950s and 1960s as a method for studying problem solving.

They were interested in the cognitive processes underlying human reasoning and found that verbal protocols gave them access to those processes in a way that no other method could. Knowledge engineering borrowed the technique because it works for the same reason. It captures knowledge in action rather than knowledge in reflection.

The limitation is that not all expertise is verbal. Some experts go quiet when they are doing their best work. The thinking that produces the best judgment is sometimes the thinking that produces no words at all.

Laddering — Getting Below the Surface

Laddering is a technique borrowed from psychology, where it was originally developed to understand personal values and how they connect to behavior.

In a knowledge elicitation context it works like this: the expert gives an account of why they made a decision, and the knowledge engineer asks why that matters. The expert gives another reason, and the knowledge engineer asks why that matters. The conversation moves down through layers of reasoning until it reaches a foundational belief or a principle that the expert holds but has rarely been asked to articulate.

The value of laddering is that it surfaces the causal structure underneath explicit reasoning.

Experts can usually tell you what they did. They can often tell you why in immediate terms. What they rarely surface unprompted is the deeper structure of beliefs and judgments that makes their reasoning work the way it does. Laddering pulls that structure into the open.

The technique requires patience and a degree of persistence that can feel uncomfortable. Asking why repeatedly can seem like you are questioning the expert’s judgment rather than trying to understand it.

Repertory Grid — Making Implicit Distinctions Explicit

One of the most powerful things experts do is make distinctions that novices can’t see. The senior underwriter doesn’t just evaluate risks; she also categorises them in ways that carry implicit judgments about quality, reliability, and probability of loss.

Those categories are often tacit. She uses them fluently without being able to name them.

Repertory grid technique, developed by the psychologist George Kelly in the 1950s, is designed to surface exactly these implicit distinctions. The process works by presenting the expert with sets of three things (three risks, three clients, three cases) and asking them to identify how two of the three are similar to each other and different from the third.

The expert names the dimension of difference. Then the knowledge engineer asks them to rate all the items in their domain on that dimension.

What emerges, across many rounds of this exercise, is a map of the expert’s implicit categorisation system. The dimensions along which they actually organise their domain. The grid makes visible a structure of judgment that exists in the expert’s mind but has never been externalised.
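The shape of the finished grid can be sketched in a few lines of code. This is purely illustrative: the risk names, construct labels, and ratings below are invented for the example, not drawn from any real elicitation.

```python
# A minimal sketch of repertory-grid data, using hypothetical
# underwriting items and constructs for illustration.
from itertools import combinations

items = ["Risk A", "Risk B", "Risk C", "Risk D"]

# Each elicitation round presents a triad of items; the expert names
# the dimension (the "construct") on which two differ from the third.
def triads(items):
    return list(combinations(items, 3))

# Constructs surfaced across rounds, with the expert's 1-5 rating of
# every item on each dimension (invented numbers, purely illustrative).
grid = {
    "hands-on management <-> absentee owner":
        {"Risk A": 1, "Risk B": 4, "Risk C": 2, "Risk D": 5},
    "well-maintained plant <-> deferred maintenance":
        {"Risk A": 2, "Risk B": 5, "Risk C": 1, "Risk D": 4},
}

# The result is just an items-by-constructs ratings matrix: the
# externalised map of distinctions the expert was using tacitly.
for construct, ratings in grid.items():
    row = "  ".join(str(ratings[item]) for item in items)
    print(f"{construct}: {row}")
```

The point of the structure is that the constructs (the row labels) come from the expert, not the interviewer; the grid only records and organises the distinctions the triad comparisons force into the open.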

The technique is particularly useful when the knowledge engineer suspects that the expert’s explicit account of how they decide doesn’t match how they actually decide. The grid bypasses the reconstruction and gets at the actual cognitive structure underneath.

Critical Incident Technique

Abstract questions produce abstract answers. Concrete questions produce concrete knowledge.

The critical incident technique, developed by the psychologist John Flanagan in the 1950s, works by asking experts not to describe their general approach but to recall specific cases where they made a consequential decision, particularly ones where things went well or badly in ways that were not predictable from the standard approach.

A critical incident interview sounds like this: Tell me about a time when you looked at a risk and knew immediately it was wrong but couldn’t have explained why at the time. What did you eventually figure out? What were you noticing that you didn’t know you were noticing?

What the technique exploits is the difference between episodic memory and semantic memory.

Asking experts to describe their general knowledge activates semantic memory which is where the reconstructed, idealised account lives.

Asking them to recall a specific incident activates episodic memory which is closer to the actual experience, with all its texture and detail intact.

The most valuable incidents are often the ones where the standard approach failed, where the expert’s judgment was later proven right for reasons they didn’t fully understand at the time, or where they made a mistake and figured out why.

These edge cases are where the tacit knowledge is most visible, because they are the cases where the expert had to work harder than usual to know what to do.

Contrastive Questioning

When you ask an expert to describe what they do, they give you an account of the general case. When you ask them to explain the difference between two specific situations, they are forced into a level of precision that general description never reaches.

Contrastive questioning works because comparison activates a different cognitive process than description. To explain a difference, the expert has to identify the specific features that drove the distinction. Those features are often things they noticed without realising they were noticing them.

The technique is most powerful when the two cases being compared are superficially similar but produced different judgments.

The underwriter who approved one manufacturing risk and declined another that looks almost identical on paper is sitting on a piece of tacit knowledge that a contrastive question can help surface.

What Good Elicitation Looks Like

All techniques are tools. What makes elicitation work is the way you use them and the relationship you build with the expert in order to use them well.

Good elicitation looks like a genuine collaboration.

Good elicitation is not information extraction. You need to work alongside an expert to surface something that they can’t see clearly alone.

The expert has the knowledge. As a knowledge engineer you have the methods to make it visible. The work requires both.

This means the knowledge engineer has to earn the expert’s trust. Experts are often sceptical of the process and not unreasonably. They have been asked before to explain themselves and found the process frustrating or reductive. They are protective of their judgment and wary of having it misrepresented.

The knowledge engineer who comes in with a clipboard and a fixed agenda will get the reconstructed account. The one who comes with genuine curiosity and patience will get something closer to the truth.

Good elicitation is also iterative. No single session surfaces everything. Knowledge engineers build a model of what they think the expert knows, test it against cases, find where it fails, and go back to the expert with specific questions about the gaps. The process cycles between drawing out knowledge and checking whether what has been drawn out is actually what the expert does.

The Knowledge Engineer as Interviewer

Knowledge engineers need enough domain knowledge to follow what the expert is saying.

This is an essential ingredient of being an effective interviewer. It helps you know when an account is incomplete, recognise the gaps, and ask the follow-up question that opens the right door.

Be mindful not to fill in the gaps yourself and import your own understanding where the expert’s actual knowledge should go.

Beyond sufficient technical skills you need the ability to listen carefully, to sit with silence, to ask the question that hasn’t been asked yet.

Above all you need patience for a process that moves slowly and produces results that are often ambiguous.

In essence, you need a whole lot of intellectual humility to understand how the expert understands the domain. Your own intuitions about how things work are a liability as much as an asset.

The best knowledge engineers are the ones who can hold their own understanding lightly enough to see what the expert is actually doing rather than what they expect the expert to be doing.