Contra NPC Memo re: Management post-LLMs
A One-Shot Critique of Sloppy Management Theory
The recent enthusiasm around large language models has encouraged a particular genre of argument: that LLMs mark a break in the history of management, altering the ontology of intent, subsuming interpretive discretion, and inaugurating a new era of “second-order management” devoted to governing machine interpretation. It is an elegant story. It is also, for the most part, overstated.
This essay takes the opposite position. It argues that LLMs change less about management than their strongest theorists claim, and that where they do matter, they matter in ways that look more like ordinary organizational change than like a recomposition of managerial being. They shift cost curves, alter communication practices, and open new fronts in surveillance and control. They do not fundamentally destabilize “intent,” nor do they transform management into a wholly new profession of “model governors.”
They are, in short, powerful tools that will be fought over and domesticated inside the same political, economic, and cultural structures that shaped every previous wave of office automation.
The argument proceeds in four moves. First, it reconstructs a continuity thesis: management has always operated in conditions of fractured intent, mediated representation, and partial automation; LLMs extend these conditions rather than overturn them. Second, it argues that the interpretive significance of LLMs is narrower than claimed, because much managerial interpretation is embodied and political rather than textual. Third, it reframes LLMs as instruments of routinization, monitoring, and labor reallocation, rather than as ontological agents. Fourth, it suggests that the most consequential management changes associated with LLMs are external and institutional—regulatory, legal, labor-market—rather than internal to the “ontology of intent.”
The conclusion is not that LLMs are trivial. It is that their impact on management looks less like the birth of a new managerial species and more like another round of familiar struggles over control, discretion, and responsibility.
1. Management Has Never Had a Stable Ontology of Intent
The subsumption story presumes a baseline: that organizations once possessed something like a stable “intent,” and that management’s job was to translate this intent into routines, protocols, and decisions. Deterministic software then encoded those routines, hollowing out procedural discretion and leaving management with interpretation and purpose. LLMs supposedly now invade that last bastion, destabilizing intent itself by generating infinite candidate purposes on demand.
The problem with this narrative is that it retrofits coherence into an environment that was never coherent.
Long before LLMs, organizations lived with:
Strategies that contradicted incentives.
Public value statements that diverged sharply from internal practices.
Coexisting, incompatible goals across functions and geographies.
Constantly shifting “priorities” driven by quarterly pressures, leadership changes, and external shocks.
In other words, intent was already plural, contingent, and contested. The CEO spoke one theory of the business to investors, another to regulators, another to employees; the CFO quietly enacted a fourth through budgeting. “Mission” functioned as a loose symbolic device, not as an ontological anchor.
On this view, LLMs do not destabilize intent; they mirror its existing instability. They make visible, and perhaps more efficient, a condition management theorists and organizational sociologists have documented for decades: the chronic misalignment between what organizations say they want, what their structures reward, and what their actors actually do.
The idea of a newly “fractured intent stack”—declared, encoded, emergent, inferred—only sounds novel if one imagines there was ever a single layer. In practice, organizational life has always involved:
Declared intent in speeches and memos.
Encoded intent in budgets, KPIs, and access rights.
Emergent intent in what actually gets done under pressure.
Inferred intent in how employees and outsiders read those patterns and decide what “must really matter” here.
LLMs add another representational layer—their own internal model of what is likely, salient, and rewarded—but do not create the underlying misalignment. At most, they increase the number of places where that misalignment has to be managed.
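One way to see the scale of that increase is simple combinatorics: if every representation of intent can diverge from every other, four layers give six pairwise interfaces to reconcile, and adding the model’s layer gives ten. A toy illustration, using the essay’s own layer names (the pairwise framing is this essay’s gloss, not a formal result):

```python
# Toy illustration: each pair of intent layers is a place where
# representations can diverge and must be reconciled.
from itertools import combinations

layers = ["declared", "encoded", "emergent", "inferred"]
print(len(list(combinations(layers, 2))))  # 6 pairwise interfaces

layers.append("model")  # the LLM's own representational layer
print(len(list(combinations(layers, 2))))  # 10 pairwise interfaces
```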
Seen from this angle, the “ontology of intent” was never stable enough to be disrupted by a new software class. Management has always been the art of navigating contradictory wants under institutional and political constraints. LLMs complicate that navigation, but they do not change its fundamental nature.
2. Interpretation in Management Is Not Mainly a Text-Generation Problem
The strongest LLM-centric theories place weight on the fact that managerial work is linguistic: managers draft memos, write policies, summarize data, narrate strategy, and so forth. From here it is tempting to infer that a system capable of producing fluent text in managerial genres is, ipso facto, invading the core of managerial interpretation.
That inference is too quick.
The fact that something ends in text does not mean the work is in the text. Much of what we call “interpretation” in management consists of:
Reading ambiguous political situations in the organization.
Assessing trustworthiness and capacity of particular people.
Navigating conflicting demands from powerful stakeholders.
Gauging the emotional temperature of a team after a crisis.
Anticipating regulator reactions not just to a proposal, but to who is making it and when.
These interpretive acts might eventually be summarized in a document. But the managerial work precedes the document and is not reducible to it. The memo is the record of the game, not the game itself.
LLMs are good at the record. They are far less adept at the game. They operate on textual shadows—policies, emails, prior documents—not on the lived dynamics of power, status, fear, loyalty, and opportunism that structure organizations.
A manager’s interpretive labor often looks like:
Choosing which piece of information not to write down because it would be politically explosive.
Saying less than they know in a public forum to avoid undermining a stakeholder.
Using a carefully timed silence, or a particular phrasing, to signal intent to one audience and ambiguity to another.
LLMs can help phrase these moves, but they cannot originate the judgment behind them. That judgment is grounded in situated experience, reputation, and embodied interactions, not in patterns of text.
In other words: LLMs automate the surface representation of interpretive work, not the work itself. They threaten copywriting, summarization, and generic analysis more than they threaten the fundamentally interpersonal and political craft of management. If one believes that management is mostly about documents, LLMs look revolutionary. If one believes that management is mostly about power and relationships, they look more like sophisticated dictation and search tools.
3. The Real Action Is Routinization, Surveillance, and Labor Reallocation
Where LLMs do materially affect management, the mechanisms look familiar: routinization of cognitive work, centralization of knowledge, and reallocation of discretion away from the periphery toward central actors.
Three examples are illustrative.
3.1 Routinization of Middle-Layer Cognitive Tasks
Early empirical work on generative AI in call centers and software development shows large gains for novices performing routine tasks, with more modest gains for experts and complex work. That is exactly what we’d expect from any technology that compresses access to prior patterns: it turns senior practitioners’ tacit templates into accessible scaffolds for juniors.
From a management perspective, this is not ontological transformation; it is Taylorism for white-collar cognition. Patterned work—standard emails, standard analyses, standard customer responses—becomes more codified and more centrally controllable. Discretion in these domains shrinks, not because models now “interpret intent,” but because managers have a new way to enforce homogenized responses.
This has consequences:
It strengthens the center: template-driven communication and analysis can be standardized across units.
It weakens idiosyncratic local practice: improvisation becomes more costly relative to “just use the copilot.”
It changes the skill mix: fewer people are needed to produce mid-quality text; more are needed to design workflows, prompts, and evaluation regimes.
These are important shifts. They are also perfectly legible within existing management theory about routinization and standardization. They do not require reimagining management as “curation of probabilistic meaning”; they can be understood as another wave of process formalization, now at the level of language.
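To make “process formalization at the level of language” concrete, here is a minimal, hypothetical sketch. Every name in it is invented, and call_llm stands in for whatever model API a firm actually uses; what matters is the control structure: the center owns the prompt template, and the employee’s discretion shrinks to filling in slots.

```python
# Hypothetical sketch: the prompt template is owned and versioned centrally;
# the person "writing" the reply supplies only the case facts.

HOUSE_PROMPT = """You are drafting a customer reply for the firm.
Follow the approved tone: apologetic, concise, no admission of fault.
Case facts: {facts}
Resolution offered: {resolution}"""

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call, so the sketch runs without services.
    return "DRAFT based on -> " + prompt.splitlines()[-1]

def draft_reply(facts: str, resolution: str) -> str:
    """Local discretion is reduced to two slots in a central template."""
    return call_llm(HOUSE_PROMPT.format(facts=facts, resolution=resolution))

print(draft_reply("late delivery of order 4411", "refund of shipping fees"))
```

Nothing in the sketch requires a model at all; the template alone does most of the standardizing, which is precisely the continuity claim.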
3.2 New Forms of Monitoring and Control
Because LLMs sit inside communication tools, they create new possibilities for management surveillance. They can:
Summarize Slack channels and infer sentiment.
Flag “risky” language in a draft before it leaves the organization (a sketch of such a gate follows this list).
Surface “best practice” phrase patterns that management prefers to see repeated.
Auto-generate feedback in the house tone, masking the actual variability of managerial voice.
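It is worth seeing how cheaply such a gate can be assembled. Below is a minimal, hypothetical sketch of a pre-send filter; the phrase list is invented, and a real deployment would swap the keyword check for a model-based classifier. The control structure, not the classifier, is the point: what counts as “risky,” and what happens to a flagged draft, is decided centrally, not by the sender.

```python
# Hypothetical sketch of a pre-send "risky language" gate. The blocklist
# and the hold-for-review rule belong to the center, not to the sender.

RISKY_PHRASES = {"off the record", "delete this thread", "not compliant"}

def review_draft(draft: str) -> tuple[bool, list[str]]:
    """Return (cleared, flagged_phrases) for an outbound draft."""
    lowered = draft.lower()
    flagged = [p for p in RISKY_PHRASES if p in lowered]
    return (not flagged, flagged)

cleared, flags = review_draft(
    "Keep this off the record: the new process is not compliant yet."
)
if not cleared:
    print("Draft held for review; flagged:", flags)
```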
This is not primarily about interpretation; it is about power. The same infrastructure that drafts performance reviews can be used to nudge how managers talk about performance. The same system that “helps” compose emails can standardize how dissent or concern may be expressed. The granularity of monitoring increases; the costs of rhetorical deviance rise.
The more the organization adopts LLM-mediated tools, the more its surface language can be tuned to leadership’s preferences. That tuning may be benign—consistent tone, fewer careless errors—or it may flatten expression and make it harder to detect early signals of dysfunction. Either way, the managerial concern is familiar: who controls the scripts?
“Model governance,” in this light, is just a specialized version of existing corporate governance: deciding which forms of communication are encouraged, which are discouraged, and how much oversight is tolerable. The language of “second-order management” risks obscuring that this is about control over discursive infrastructure, not a novel kind of epistemic stewardship.
3.3 Labor Reallocation and the Middle-Management Question
Historically, new office technologies—typewriters, spreadsheets, email—shifted who does what, without eliminating management. Some categories of work shrank; others grew. LLMs are likely to follow the same pattern.
They plausibly reduce the need for:
junior staff whose primary function is drafting and polishing text,
mid-level analysts who recombine known frameworks for standard problems,
some layers of “translation” between technical and non-technical stakeholders, where text synthesis is the main value-add.
They plausibly increase the need for:
people who integrate outputs across multiple domains,
people who navigate conflicting stakeholder demands and regulatory constraints,
people who can credibly own decisions in environments where automated tools were consulted.
These shifts matter for careers, organizational design, and power distributions. But they do not require a novel ontology of management. They can be described using old categories: substitution at some task boundaries, complementarity at others, bargaining over where discretion resides, and re-stratification of the managerial labor market.
4. LLMs as Rhetorical Instruments, Not Ontological Agents
One of the more serious risks in treating LLMs as a new ontological layer is that it forgets who deploys them and to what ends. Models do not autonomously destabilize or stabilize intent. People use models as rhetorical and organizational instruments.
Consider three familiar patterns.
4.1 LLMs as Cover
Leaders already use “the system,” “the model,” or “the policy” as a way to deflect responsibility: we had to make this decision; the numbers forced us; the algorithm recommended it. LLMs can join this repertoire.
“We followed the AI’s triage” or “we relied on our generative assistant to summarize risks” can become new ways to obscure who actually made a judgment. That is not the system destabilizing intent; that is management using the system to avoid owning intent.
This pattern has nothing to do with probabilistic ontology and everything to do with accountability incentives.
4.2 LLMs as Multipliers of Existing Ideology
Models trained on existing corporate, legal, and managerial language will tend to reproduce the ideology embedded in that language. They will normalize existing governance patterns, risk framings, and stakeholder hierarchies.
Ask such a model, “Write a mission statement for a high-growth tech company,” and it will not produce a radical critique of shareholder primacy; it will echo the dominant tropes. Use it to draft performance feedback, and it will lean toward conflict-minimizing, HR-safe phrasing. Use it to write AI policies, and it will mirror the standard menu of “fairness, safety, transparency, human oversight.”
Here, the model is less an independent source of “sampled intent” and more a compression engine for the status quo. Management can lean into that—using LLMs to efficiently reproduce dominant narratives—or choose to resist it. But the underlying fight is about which ideology gets encoded, not about a new class of machine “intent.”
4.3 LLMs as Objects of Institutional Negotiation
Where LLMs do create something structurally new is not inside the firm, but at the interface between firms and their institutional environments. Regulators, courts, and standards bodies will increasingly demand:
documentation of how models were used in specific decisions,
evidence of testing for bias or harm,
mechanisms for appeal and override.
This external pressure will shape management practice—new committees, new reporting lines, new risk functions—but again, these are extensions of existing compliance logics. The ontology of management remains: someone must be able to face the regulator, the court, or the public and say, “We are responsible.”
The psychic temptation to say “the model did it” does not change that. If anything, LLMs will reinforce the expectation that management can explain and justify its use of tools, in continuity with past expectations around any consequential technology.
5. What Actually Changes—and What Doesn’t
Once the ontological claims are stripped down, what remains?
It is reasonable to expect that, over time:
Many written artifacts in organizations will be partially or heavily machine-generated.
Standard analyses and narratives will be produced faster and by fewer people.
Communication will be more standardized and more easily monitored.
New roles will emerge around AI risk, tooling, and compliance.
Some parts of middle management will be thinned; others will be re-specialized.
These are nontrivial changes. They matter for power, inequality, job quality, and the lived texture of organizational life. But they can be described with the same conceptual kit that applied to spreadsheets and ERPs: automation of routinizable tasks, centralization of control, diffusion of surveillance, renegotiation of discretion.
What does not change, even under heavy LLM adoption, are the core managerial questions:
What are we actually trying to do?
Who gets to decide?
Who bears the downside risk when tools and people get it wrong?
How do we allocate scarce resources among competing claims?
How do we maintain legitimacy with those who can hurt us?
LLMs may help draft more coherent answers to these questions. They may be deployed to obscure them. They may change the mix of people in the room when they are asked. But they do not make the questions go away, nor do they fundamentally alter the fact that they are political and moral, not just epistemic.
If anything, the presence of probabilistic tools gives management a new excuse to pretend that such questions are technical. The real danger is not that intent dissolves into sampled text. The real danger is that leaders can more easily act as if their choices were compelled by “what the AI said,” using models as legitimating devices for decisions they wanted to make anyway.
Conclusion: Management After LLMs, Without Metaphysics
The case for continuity is not that LLMs are unimportant. It is that they are important in ways that do not require a new ontology of management.
They compress and routinize large classes of textual work.
They offer new levers for standardizing communication and extending managerial monitoring.
They shift the cost and skill structure of analysis and documentation.
They create new compliance and risk-management work at the boundary between firm and regulator.
All of this matters. None of it obviates the central, old problem: that organizations are sites of contestation over purposes under conditions of uncertainty and power asymmetry.
To the extent that LLMs destabilize anything, it is the rhetoric management can use to describe its own agency. The tools make it tempting to say: “The system interpreted the situation this way; we merely followed.” But that sentence is available only because someone chose to ask the system, chose how to configure it, and chose how much weight to give its output. Those choices remain irreducibly managerial.
A sober theory of “management after LLMs” would therefore drop the idea of a new ontological era and ask more concrete questions:
Where does discretion actually move, and for whom?
Which groups gain or lose bargaining power when textual work is automated?
How do models interact with existing incentive structures and surveillance capacities?
How will external institutions—courts, regulators, unions—reshape the permissible use of LLMs?
The answers will vary by sector, jurisdiction, and organizational form. But they will be recognizable. The organizational sciences have seen this pattern before: a new technology enters; its advocates claim it will transform management; in practice, it is domesticated, fought over, embedded, and eventually normalized.
LLMs will be no different. They will change the tools; they will thicken the paperwork; they will justify some layoffs and some new hires; they will provide fresh language for old struggles. What they will not do is relieve management of its basic burden: to decide, in public and under constraint, what the organization is willing to own.