You’ve probably noticed it more with your newer hires.
They ask questions that are covered in the onboarding guide. They ping senior engineers for things that are written down somewhere. They seem to want everything explained directly rather than going to find it themselves. And somewhere in the back of your mind, a generational explanation has probably started to form: they want everything instant, they don’t read long documents, they’d rather ask than look.
It’s a reasonable hypothesis. It’s also wrong.
Because once you start paying attention, the pattern isn’t limited to new hires. The engineer who’s been on the team three years does it too. So does the ops lead who helped write the documentation in the first place. The same questions surface in Slack that were answered in the last all-hands. The same processes get explained in DMs that are described in the wiki. The behavior is universal, which means the explanation almost certainly isn’t generational.
What’s actually happening is simpler, more structural, and entirely fixable. But first you have to see it clearly.
The Generational Hypothesis Doesn’t Survive Scrutiny
The idea that younger workers won’t read documentation has become a familiar organizational complaint. And there is a real signal buried in it: nearly half of Gen Z workers say they’d rather ask ChatGPT than their manager for information they need at work. That statistic gets cited as evidence of shortened attention spans or a preference for passive consumption.
But look at what it actually describes. Workers are bypassing their organization’s documentation in favor of a tool that reliably answers questions in plain language, in seconds, in the terms the asker used. That’s not laziness. That’s a rational response to two systems: one that works and one that frequently doesn’t.
The generational framing also falls apart the moment you ask why older, more experienced employees exhibit the same behavior. They ping each other instead of checking the wiki. They ask in Slack instead of searching Confluence. They schedule a thirty-minute call to answer a question that should take thirty seconds. If this were a generational trait, it would track with age. It doesn’t. It tracks with how long someone has been using your documentation system and how many times it has failed them.
The behavior your team is showing you isn’t a personality type. It’s a learned response to a broken system. And the system has two specific failure modes worth understanding.
Failure Mode One: The Retrieval Problem
When someone encounters a question at work, they do try to find the answer. They search the wiki, scan Confluence, maybe look through recent Slack messages. What they find, more often than not, is one of three things: nothing that matches what they searched for, several results that are adjacent but don’t quite apply, or something that looks right but carries a “last updated: 14 months ago” timestamp that introduces just enough doubt to make them hesitate.
At that point, the rational move is to ask a colleague directly. Not because they’re lazy, but because the search didn’t produce a trustworthy answer, and asking a human is faster and more reliable than searching again with different keywords.
The retrieval failure is structural. Documentation is written by people who already know the answer, which means it’s organized around the writer’s mental model, not the reader’s question. The writer calls the process “Customer Resolution Workflow.” The reader searches for “what to do when a client is angry.” The knowledge exists in the system. The path between the question and the answer is broken.
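The mismatch above can be made concrete with a toy sketch. The document title and the search query are the hypothetical examples from the paragraph, and the naive matcher is a stand-in for keyword search, not any real wiki's engine:

```python
# Toy illustration of the retrieval mismatch: the writer's title and the
# reader's query share zero meaningful words, so keyword search finds nothing.

def keyword_match(query: str, title: str) -> bool:
    """Naive search: require at least one shared non-stopword."""
    stopwords = {"what", "to", "do", "when", "a", "is", "the"}
    q = {w.lower() for w in query.split()} - stopwords
    t = {w.lower() for w in title.split()} - stopwords
    return bool(q & t)

title = "Customer Resolution Workflow"
query = "what to do when a client is angry"

print(keyword_match(query, title))  # False: the doc exists but never surfaces

# A question-indexed system stores the asker's phrasing alongside the answer,
# so the same query resolves immediately:
index = {"what to do when a client is angry": "Customer Resolution Workflow"}
print(index.get(query))  # Customer Resolution Workflow
```

The point isn't the implementation; it's that no amount of diligence by the searcher fixes a vocabulary gap. The index has to be organized around the question, not the writer's label.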
This mismatch compounds as organizations grow. A ten-person team can get away with imprecise documentation because everyone roughly knows what everything is called and where everything lives. At a hundred people, that shared mental model no longer exists. McKinsey research on knowledge work finds that employees spend close to 20% of their working week searching for information or tracking down colleagues who have it. That’s nearly a full day every week, per person, spent compensating for a retrieval system that doesn’t work.
Failure Mode Two: The Trust Problem
Retrieval failure is what happens the first time someone tries to use documentation and can’t find what they need. Trust failure is what happens after that experience repeats.
Two or three failed searches are enough to establish a behavioral pattern. The person learns that the documentation system is unreliable: sometimes accurate, sometimes outdated, often silent on the thing you actually need. Faced with that track record, a rational person develops a heuristic: skip the docs, go straight to the human. This heuristic is efficient. It also means that the documentation that is current and accurate stops getting consulted, because the person has no way to distinguish it from the documentation that isn’t.
Staleness is the engine of this trust failure, and it runs automatically. Documentation starts decaying the moment it’s published, not because anyone is careless, but because organizations change continuously while documentation doesn’t. Every process update, tool migration, team reorganization, or product change makes some percentage of your wiki quietly wrong. Nobody updates the docs because that work is invisible, unrewarded, and competes with everything that is urgent and visible.
The people who could keep documentation current are precisely the people who are too busy to do it. Research on institutional knowledge finds that roughly 42% of role-specific expertise is known only by the person currently doing the job. That’s the person fielding six Slack pings before lunch. Asking them to also maintain the documentation that would eliminate those pings is asking them to solve the problem with the same bandwidth the problem is consuming.
So the docs go stale. The team learns the docs go stale. The team stops checking the docs. The pings increase. The expert has even less time to update the docs. The cycle tightens.
What the Behavior Is Actually Telling You
When your team keeps asking questions that are already answered somewhere, they are not telling you they’re incurious, impatient, or generationally deficient. They’re telling you something precise about your knowledge infrastructure: that looking doesn’t reliably work, that what’s there can’t always be trusted, and that asking a human is a more efficient path to a correct answer.
This is useful information, if you receive it as a systems signal rather than a cultural complaint.
The instinct most managers have when they diagnose this problem is to address the behavior: tell people to check the docs before asking, add “update documentation” to the definition of done, run a quarterly wiki cleanup sprint. These interventions share a common assumption: that the problem is discipline, and that more of it will fix things. It won’t, because discipline operates on behavior while the problem is structural. You can mandate that people check the docs. You cannot mandate that checking works.
This dynamic plays out most acutely for new employees, because they arrive with no prior knowledge of where anything lives or who wrote what. They encounter the retrieval problem before they’ve built the internal network that lets experienced employees route around it. What looks like onboarding friction is often the documentation problem in its most concentrated form: the people who need answers most are operating in the system least equipped to provide them.
Why “Keep It Updated” Isn’t an Answer
The standard prescription for broken documentation is maintenance: assign owners, enforce update cycles, make documentation hygiene part of performance reviews. Organizations that try this find that it works, briefly. A documentation sprint produces a burst of activity. The wiki looks better for a month or two. Then the urgent work reasserts itself, the updates slow down, and six months later the decay has resumed from a slightly higher baseline.
The maintenance model fails because it treats documentation as a product that needs to be kept current, when the underlying problem is that documentation is created in the wrong place at the wrong time. It’s written retrospectively, by people reconstructing what they know from memory, at a remove from the moment when the knowledge was actually alive and in use.
The knowledge that would actually help your team is being created every day. It surfaces in Slack when a senior engineer explains why a system was built a certain way. It appears in a thread when someone walks a colleague through a process they’ve done dozens of times. It lives in the response to a question that five other people on the team will eventually ask. That knowledge is current, specific, grounded in a real question, and written by someone who demonstrably knows the answer.
It also disappears within weeks, because Slack is a river, not a library. The answers your team needs are already being created in Slack. They’re just not being kept.
A Different Model: Capture Rather Than Create
The documentation model asks experts to do something proactive: set aside time, reconstruct their knowledge, write it down for a future audience, maintain it as things change. Every step in that chain competes with the work the expert is actually evaluated on, and the feedback loop is so delayed that it barely functions as an incentive.
A capture model inverts the sequence. Instead of asking experts to create documentation separately from their work, it captures the knowledge they’re already sharing in the course of their work. The Slack thread where an engineer explains a design decision. The channel discussion where an ops lead walks through a process. The response to the question that five people have already asked this quarter. None of this requires additional effort from the expert. It’s already happening. The only question is whether it disappears or gets kept.
When captured exchanges are attributed to the contributor and made searchable by topic, two things change simultaneously. The retrieval problem improves because the knowledge is findable under the terms the asker actually uses, organized around questions rather than topics. The trust problem improves because the knowledge is recent, specific, and carries the name of someone who can be verified as credible. A Slack explanation written last Tuesday by a senior engineer who knows the system is inherently more trustworthy than a Confluence page last updated before the last two reorgs.
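To make the capture model concrete, here is a minimal sketch of what a captured exchange might look like as a record. The field names, the example thread, and the search function are illustrative assumptions, not Pravodha's actual schema; the point is that attribution, recency, and the asker's original phrasing are all first-class data:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CapturedAnswer:
    question: str      # the asker's original phrasing, kept verbatim
    answer: str        # the expert's reply, captured from the thread
    author: str        # attribution: a name the reader can judge credible
    captured_on: date  # recency is visible, so trust is checkable
    endorsements: int = 0  # peer-validation signal

def search(entries: list[CapturedAnswer], query: str) -> list[CapturedAnswer]:
    """Match against the question text, i.e. the terms askers actually use."""
    q = {w.lower() for w in query.split()}
    return [e for e in entries
            if q & {w.lower() for w in e.question.split()}]

# Hypothetical captured thread:
kb = [CapturedAnswer(
    question="why does the billing job run twice on Mondays",
    answer="The Monday run backfills weekend invoices; the second pass is intentional.",
    author="sr-engineer@example.com",
    captured_on=date(2024, 6, 4),
)]

hits = search(kb, "billing job Mondays")
print(hits[0].author, hits[0].captured_on)
```

Because the record carries the question as asked, retrieval works in the asker's vocabulary; because it carries the author and the date, the reader can judge trust at a glance instead of gambling on a stale page.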
And when contributions are recognized through peer validation, the incentive structure shifts. The expert whose explanation gets bookmarked by colleagues isn’t just answering a ping. They’re building a visible, attributed record of their expertise that compounds over time. That’s a different proposition than updating a wiki that nobody will read.
Generation Is Irrelevant. The System Is Everything.
Any generation of workers placed in a documentation system where search is unreliable and content goes stale will develop the same heuristic: skip the docs and ask a human. That’s not a cultural trait. It’s pattern recognition. It’s what rational people do when they learn that a tool doesn’t reliably work.
The questions your team keeps asking aren’t a sign that they won’t engage with documentation. They’re a sign that your documentation system isn’t giving them a reason to. Fix the retrieval problem and the trust problem, and the behavior changes, across every generation on your team, because the behavior was always about the system.
The knowledge exists in your organization. The cost of it remaining inaccessible compounds every week in pings that interrupt your best people, decisions made without context, and new hires who spend months reconstructing what the team already knows.
Pravodha captures the knowledge your team is already creating in Slack and makes it searchable, attributed, and permanently available. Not a new documentation process. A different model entirely.