The documentation model is the dominant approach to organizational knowledge management, and it is structurally broken. Not broken in the sense of being poorly executed. Broken in the sense that the incentives required to make it work do not exist in any organization that has other things to do, which is every organization.
Most knowledge management strategies fail for the same reason: they are built on the documentation model without examining whether the documentation model can actually work. The result is a predictable cycle. A team recognizes a knowledge problem. They adopt a tool. They run a documentation sprint. The wiki fills up briefly. Then it decays. The team stops trusting it. The knowledge problem persists, now accompanied by a graveyard of outdated pages and a quiet organizational consensus that “we’ve tried that and it doesn’t work.”
This post covers what the documentation model actually is, why it fails so consistently, and what the working alternative looks like.
What the Documentation Model Actually Assumes
The documentation model rests on three linked assumptions, each of which sounds reasonable in isolation but fails in practice.
The first assumption is that experts can and will translate their knowledge into written documentation. This asks people to do something genuinely difficult: convert tacit, contextual understanding (knowledge that lives in pattern recognition, in the memory of past incidents, in felt experience) into explicit prose that will be useful to an unknown future reader who lacks the context the writer takes for granted. Even skilled writers find this hard. For engineers and technical specialists whose thinking is primarily systems-based rather than prose-based, it is harder still.
The second assumption is that documentation, once created, will stay current. Organizations change continuously: processes update, tools migrate, teams reorganize, decisions get revisited. Documentation does not update itself. It decays the moment it is published. Staying current requires ongoing human effort that is invisible, unrewarded, and competes directly with the work that is urgent and visible. The documentation that was accurate when written is quietly wrong six months later, and nobody marks it as outdated.
The third assumption is that future readers will be able to find and trust what has been documented. This requires that content be organized around how people search rather than how writers think, that search returns accurate and current results rather than a mix of live and stale content, and that readers have enough confidence in the system to consult it rather than defaulting to asking a colleague. Each of these conditions breaks down at scale.
These three assumptions (that experts will document, that documentation will stay current, and that readers will find and trust what has been documented) must all hold simultaneously for the documentation model to work. In practice, all three fail, and they fail together. The documentation model is not a flawed implementation of a sound idea. It is a structurally unsound idea that has been repeatedly reimplemented in the hope that a better tool will compensate for a broken model.
Why the Documentation Model Fails Experts
The failure of the documentation model on the supply side is not a motivation problem. It is an incentive problem and a cognitive mismatch problem, and neither is the same thing as unwillingness.
Motivation is about willingness. The experienced engineers, ops leads, and senior contributors who hold the most valuable organizational knowledge are generally not unwilling to share it. They share it every day, in Slack, in responses to questions, in the explanations they give when a colleague gets stuck. The knowledge-sharing behaviour is present. What is absent is any mechanism that makes that sharing persistent.
The cognitive mismatch is about the gap between what documentation asks of experts and what experts can actually deliver. Documentation asks people to write from memory, at a remove from the moment the knowledge was last alive, for an audience they cannot see, covering scenarios they may not anticipate. It asks them to make explicit what feels obvious to them, which is precisely the context a future reader most needs and the expert is least equipped to include. Psychologists call this the curse of knowledge: once you understand something deeply, it becomes very difficult to remember what it was like not to know it. The workaround developed after three production incidents. The reason a particular config file exists. The vendor contact who actually picks up the phone. These are exactly the details that get left out of documentation written by someone who has internalized them.
The incentive mismatch runs in the other direction. Research on institutional knowledge loss finds that roughly 42% of role-specific expertise is held only by the person currently doing the job. Yet nothing in most performance systems rewards documenting that expertise. The feedback loop on documentation is so delayed and indirect that it barely functions as an incentive at all: the beneficiary is a future colleague the author may never meet, solving a problem the author cannot anticipate. Meanwhile, answering a direct Slack ping is faster, feels more useful, and produces immediate gratitude. The documentation that would have prevented the ping goes unwritten. This dynamic is examined in depth in the post on why experienced employees don’t document their insights.
The documentation model tries to overcome this incentive mismatch through mandate: add documentation to performance reviews, run quarterly knowledge base sprints, create a definition of done that includes wiki updates. These interventions address the behaviour without changing the incentive. Documentation written under mandate tends to be thorough in format and thin in actual useful content, because the person writing it is optimizing for completing the task rather than genuinely transferring knowledge. The mandate produces the appearance of documentation without the substance of it.
Why the Documentation Model Fails Readers
The failure on the demand side is equally structural, and it compounds the supply-side failure in ways that make the whole system collapse.
Even when documentation exists and is current, readers encounter two problems that the documentation model has no answer for.
The first is retrieval failure. Documentation is organized by the writer’s mental model of the subject. Questions arrive organized by the reader’s mental model of the problem. These two organizational logics almost never match. The person who needs to know how to handle a difficult client escalation does not search for “Customer Success, Volume 3, Section 4.” They describe their situation in plain language and find either nothing or a list of results that is technically related but practically unhelpful. McKinsey research on knowledge work finds that employees spend approximately 20% of their working week searching for information or tracking down the right colleague to ask. That is nearly a full day per week spent compensating for a retrieval system that does not work.
The second is trust failure. Two or three failed searches are enough to establish a learned pattern: the documentation cannot be relied upon. Once that pattern is established, a rational person develops a heuristic: skip the docs, ask a human. This heuristic is efficient. It also means that even accurate, current documentation stops getting consulted, because the reader has no way to distinguish it from the documentation that is outdated and wrong. The trust failure makes the entire system unreliable, not just the parts that are genuinely stale.
The result is a documentation system that everyone nominally maintains and nobody actually uses. This is the pattern described in the post on why nobody uses your documentation: the behaviour that looks like generational impatience or laziness is actually rational pattern recognition in response to a system that has repeatedly failed to deliver reliable answers.
Three Symptoms, One Broken Model
The documentation model produces three observable symptoms that organizations typically diagnose as separate problems and try to fix independently.
The first symptom is the wiki graveyard: a knowledge base that accumulates pages, loses currency, and eventually stops being trusted. The standard response is a maintenance sprint or a documentation ownership policy. Neither addresses why the documentation went stale in the first place, which is that the people responsible for keeping it current are also the people with the least time and incentive to do so. The graveyard is not a failure of discipline. It is the inevitable outcome of the documentation model applied to a system that requires ongoing maintenance from people who are already fully occupied.
The second symptom is the expert bottleneck: a small number of senior people who hold most of the organization’s functional knowledge, get interrupted constantly by questions only they can answer, and gradually become protective of their time in ways that make the knowledge gap worse. The standard response is to ask these people to document more, attend more knowledge transfer sessions, or be more accessible. This addresses the symptom at the cost of the people who can least afford to bear it. The incentive structure around knowledge hoarding is not a personality problem. It is a rational response to a system that rewards holding knowledge over sharing it.
The third symptom is the new hire tax: the weeks or months a new employee spends reconstructing context that the organization already holds but cannot surface. The standard response is a more comprehensive onboarding program, a buddy system, or a better-organized wiki. These help at the margins. They do not address the underlying problem, which is that the organization’s knowledge exists primarily in people’s heads and in Slack threads that nobody can find. The new hire who spends three weeks assembling a picture of how the billing system was built is not experiencing an onboarding failure. They are experiencing the documentation model’s failure to make existing knowledge accessible. This is the same structural gap that causes knowledge silos to form between teams: expertise circulates inside individual teams and never crosses the boundary to where it is needed.
These three symptoms share a root cause. The documentation model places knowledge preservation outside the flow of actual work, asks the wrong people to do it proactively, and provides no mechanism to keep the result current or trustworthy. The symptoms will persist for as long as the model does, regardless of which tool is used to implement it. This is the argument at the core of the post on why internal wikis become graveyards: the problem is not the wiki. The problem is the model the wiki is trying to execute.
The Capture Model: How Knowledge Preservation Actually Works
The capture model inverts the documentation model’s core assumption. Instead of asking experts to create documentation as a separate activity, it captures the knowledge they are already sharing in the course of their work.
The insight that makes this possible is simple: your most experienced employees are already sharing their knowledge. Every day. In Slack.
A senior engineer explains in a thread why a particular architecture decision was made, including the two production incidents that shaped it and the conditions under which a different approach would be appropriate. A customer success manager walks a colleague through how a difficult client situation was handled, covering the relationship dynamics and the escalation path that resolved it. An ops lead articulates why a specific step in a reconciliation process must happen before 3pm on Thursdays, the consequence of missing it, and the vendor history that made the rule necessary. This is exactly the tacit, contextual, incident-grounded knowledge that documentation mandates fail to produce. It is being created continuously, in response to real questions, with full context intact, from people who demonstrably know what they are talking about.
The problem is not that the knowledge is not being shared. The problem is that the sharing disappears. Slack is a river, not a library. Messages flow past and vanish into the archive. By the time someone else needs the same knowledge, the thread is unfindable and the expert has to explain it again, with no organizational benefit from either repetition. UC Irvine research on interruption costs finds it takes an average of 23 minutes to regain full focus after a single interruption. A senior expert fielding five or six knowledge-related pings on a typical day is not just losing the time those conversations take. They are losing hours of deep work on either side of each one.
The capture model addresses this by preserving the knowledge at the moment it is already being created, rather than asking anyone to reconstruct it later. A three-click capture of a valuable Slack thread turns a disposable exchange into a permanent institutional asset. The expert contributes nothing beyond what they were already doing. The colleague who captures it contributes three clicks. The organizational benefit is permanent and compounds with every subsequent capture. Concrete examples of institutional knowledge show exactly this pattern: knowledge that surfaces in Slack, disappears into the archive, and has to be reconstructed from scratch the next time someone needs it.
What Changes When You Switch Models
The documentation model and the capture model produce different organizational outcomes across every dimension that matters for knowledge management.
Contribution burden: documentation model loads it onto experts, capture model distributes it
The documentation model places the contribution burden on experts: the people who know the most are asked to do the most additional work, with the weakest incentives, at the greatest cost to their primary responsibilities. The capture model places the contribution burden on whoever happens to notice that a valuable exchange is taking place, which can be anyone: a teammate, a manager, a new hire who recognizes that the answer to their question is something others will need. The expert is not burdened at all.
Knowledge quality: retrospective documentation vs. knowledge captured in context
Documentation produced retrospectively is almost always missing the most valuable parts: the context that felt obvious to the writer, the incident that explained the exception, the reasoning that made the decision non-obvious. Knowledge captured at the moment of creation is grounded in a real question, produced by someone who is actively drawing on their understanding, and contains the full context because the context was alive when the capture happened.
Currency: documentation decays, captured knowledge is inherently dated and attributed
Documentation decays from the moment of publication. Captured knowledge does not require maintenance in the same way, because it is dated and attributed: a reader can see that an explanation was captured four months ago by a named person whose expertise in this area has been recognized by colleagues. If the situation has changed since then, that is visible information. The reader knows to verify rather than assuming the content is either reliable or unreliable.
Incentive alignment: documentation model offers nothing, capture model builds visible expertise
The documentation model offers experts no tangible return for their contribution. The capture model offers something different: a visible, searchable, attributed record of expertise that compounds over time. An expert whose explanations are captured, attributed, and recognized by colleagues across the organization is not giving their knowledge away. They are building something more durable than the leverage that comes from being the only person who knows: a track record of demonstrated expertise that persists even when they are not available to be interrupted. This is the incentive inversion explored in the post on what a working knowledge base looks like.
Retrieval: documentation organized by topic, captured knowledge organized by questions asked
Documentation is organized by topic, which means it is findable only by people who already know the right topic label. Captured knowledge is organized around questions, which means it surfaces under the terms the asker actually uses. The retrieval gap (the mismatch between how knowledge is stored and how knowledge is sought) narrows because the knowledge entered the system through a question rather than through a writer’s prior categorization.
Why the Capture Model Compounds Over Time
The documentation model produces a maintenance problem that grows with the size of the repository: more pages means more decay, more outdated content, more trust erosion, more cleanup required. The system becomes harder to maintain as it gets larger.
The capture model produces the opposite dynamic. Each captured exchange makes the next search more likely to succeed. Each attributed contribution makes the organizational map of expertise more complete. Each peer validation makes the system more trustworthy. The knowledge base becomes more valuable with every addition rather than harder to maintain: the content is inherently current, the attribution is inherent to the capture mechanism, and the quality signal comes from peer recognition rather than from editorial oversight.
The compounding effect is also why the capture model changes the organizational dynamic around expert knowledge. When a senior engineer’s explanations are captured and attributed, they stop being the only source of truth for their domain. The pings slow down. The deep work recovers. The expertise is visible across the organization without requiring the expert to be constantly available to demonstrate it. The person who was a bottleneck becomes a contributor to a system that works without them, which is the outcome the documentation model always promised and never delivered.
Pravodha is built to create exactly this infrastructure: not a better place to store documentation, but a system that captures institutional knowledge at the moment it is already being shared, attributes it to the people who created it, and makes it permanently searchable for everyone who comes after. The documentation model has had decades of tooling investment, and the result has been the same every time. The model is the problem. The capture model is what replaces it.