Lost Between Speed and Scale
A preview of AI-speed production risks, a.k.a. Disorientation
Organizations will fail as AI accelerates knowledge production, because irreversible commitments, such as product deployments, will propagate faster than shared orientation can update and intervene.
When the rate of commitment exceeds the rate at which understanding can meaningfully constrain action, organizations accumulate disorientation risk. This risk remains latent during periods of momentum and surfaces abruptly as systemic failure. Durable advantage under AI-speed production depends on engineering orientation infrastructure—mechanisms that interrupt momentum and bind learning into future action—rather than relying on judgment, foresight, or post hoc correction.
Faster, Faster, Faster!
Consider August 1, 2012, at Knight Capital. At 9:30 a.m., immediately after market open, a new software release intended to support trading in the NYSE Retail Liquidity Program went live. The deployment activated dormant code on a subset of servers, which began routing orders based on obsolete logic. Within forty-five minutes, Knight’s systems sent millions of erroneous orders into the market, executing roughly four million trades across more than 150 stocks and accumulating billions of dollars in unwanted positions. The firm lost approximately $440 million in under an hour, an amount exceeding its quarterly earnings. Understandably, many readers will locate the problem in the existence of the error itself. Yet software systems fail routinely. Let us hypothesize instead that the root cause was the speed at which commitments propagated without appropriate intervention. Orders executed at machine pace. Exposure accumulated faster than organizational processes could diagnose and intervene. Knight Capital survived only through emergency financing arranged within days.
Two years earlier, on the afternoon of May 6, 2010, U.S. equity markets experienced what would later be called the Flash Crash. At approximately 2:32 p.m. Eastern Time, major indices began to fall sharply. Within minutes, the Dow Jones Industrial Average dropped nearly 1,000 points, erasing close to nine percent of its value before rebounding almost as quickly. Subsequent investigation by the SEC and CFTC traced the initial disturbance to a large automated sell order in E-mini S&P 500 futures, executed through an algorithm designed to trade aggressively based on volume rather than price or time. The algorithm performed as specified. It continued to sell as liquidity thinned, interacting with high-frequency trading systems that rapidly bought and sold contracts among themselves. Market depth collapsed. Prices detached from underlying value. Individual stocks briefly traded for pennies or surged to implausible highs. By the time human supervisors could meaningfully intervene, the event had largely run its course.
These disturbances did not originate in faulty code or malicious intent. They emerged from a compression of decision cycles. Orders became binding commitments to the market faster than participants could observe, interpret, and halt emergent behavior. Donald MacKenzie’s reconstruction in his book Trading at the Speed of Light: How Ultrafast Algorithms Are Transforming Financial Markets shows that many systems involved responded correctly to their local signals. Risk controls existed and human traders were present. The failure occurred in the interval between action and comprehension. Losses crystallized before shared understanding could form. Learning arrived after exposure had already been realized.
The underlying problems of step-change production technologies predate electronic markets.
About a hundred and fifty years ago, on September 3, 1878, the passenger steamer Princess Alice collided with the collier Bywell Castle on the River Thames near Woolwich. The collision occurred in a heavily trafficked channel and involved two steam-powered vessels operating on regular schedules. The impact broke the Princess Alice in two, killing an estimated 650 people in one of the deadliest peacetime maritime disasters in British history. Contemporary investigations did not attribute the cause to negligence or incompetence. Both crews followed prevailing navigation rules. The failure lay in closing speeds and traffic density that exceeded the interpretive capacity of visual signaling and customary right-of-way conventions at the time. Steam propulsion compressed reaction windows. Commitments to course and speed became irreversible before situational understanding could update.
This incident was not anomalous. The spread of steam navigation in the early 19th century increased the frequency of collisions and near-misses in busy waters, a pattern that encouraged the British legislature to codify rules of the road for steamers in the 1840s and extend these rules to all vessels by 1858—a formal response to empirical pressure from evolving traffic conditions. Steamships traveled on tighter schedules, entered ports under reduced visibility, and maintained speed in conditions that sailing vessels would previously have avoided. Charts and soundings lagged behind expanding traffic volumes and new routes.
Analysis of maritime safety in the twentieth century provides further quantitative grounding. A study by the International Association of Institutes of Navigation found that between 1956 and 1960 there were 60 collisions in the Dover Strait, while in the twenty years after the introduction of the traffic separation scheme (TSS) there were only 16. This reduction is attributed to routing measures that separated opposing streams of traffic and imposed systematic movement in congested waters. The ships did not slow down. Engines did not become weaker. Orientation infrastructure changed. Lanes, reporting requirements, and enforced separation reduced the distance vessels could travel under ambiguous situational awareness.
The same pattern appears in lighthouse construction and hydrographic surveying. E.G.R. Taylor, author of The Haven-Finding Art: A History of Navigation from Odysseus to Captain Cook, documents a rapid nineteenth-century expansion of Britain’s lighthouse and beacon network, overseen by Trinity House and the Admiralty, as steam navigation intensified coastal traffic and reduced tolerance for positional ambiguity, making fixed orientation infrastructure a prerequisite for safe high-speed movement. These installations functioned as externalized orientation systems. They did not improve captains’ judgment or reduce propulsion capability. They constrained where and how fast ships could safely move by making hazards legible earlier in the commitment sequence.
Across these cases, the operational logic remains consistent. Steam propulsion increased the rate at which vessels could commit to trajectories. Navigational practice updated more slowly than the engines improved. Pressure mounted to manage the growing risk that commitments would become irreversible before reliable positional updates arrived. The investments that followed focused on orientation infrastructure rather than appeals to caution or skill. The sea remained unchanged. The vessels retained their power. What shifted was the system’s capacity to keep movement proportional to understanding.
Measuring Disorientation
Long before organizations faced software deployments or algorithmic trading, navigators had a precise term for the condition that precedes accidents and other disasters: loss of orientation. This loss of orientation can be defined as the growing divergence between where a vessel was believed to be and where it actually was, combined with continued forward motion. During a period of disorientation, risk accumulated silently as distance traveled without a reliable fix increased. Mariners understood that a ship could operate smoothly, even confidently, while becoming progressively more exposed to hazards that might only become visible when avoidance was no longer possible.
This risk intensified with the introduction of steam propulsion. Steamships traveled faster, and they traveled farther between moments of reorientation. Celestial fixes remained intermittent, charts incomplete, and coastal soundings uneven. Each nautical mile sailed under uncertain position enlarged the envelope within which error could crystallize into grounding or collision. Importantly, nothing about this condition implied incompetence. Crews followed rules, instruments functioned, and vessels responded predictably to command. Disorientation risk arose from the imbalance between commitment—maintaining course and speed—and the slower rhythm of positional verification.
The institutional responses that followed reveal how the problem was understood. Lighthouses, buoys, traffic separation schemes, and standardized reporting reduced the distance a vessel could travel while ambiguously oriented. These systems shortened the interval between meaningful positional updates or constrained movement when uncertainty grew. Risk was managed by altering how quickly commitment could propagate under uncertainty.
Disorientation risk occurs in all organizational systems. It accumulates when irreversible commitments—capital allocations, trades, deployments, hires—are made at a rate that exceeds the organization’s capacity to update a shared understanding of position, risk, and constraint. And although modern organizations are saturated with dashboards, reports, and analytics, the issue persists because mechanisms that translate that information into timely limits on action are absent. When action accelerates, interpretation falls behind, producing understanding that explains past decisions rather than constraining present ones. Once the distance between understanding and action grows large enough, the constraints needed to manage the present are not known in time.
The same pattern appears across domains. In financial markets, disorientation risk can be observed as the volume of trades executed between the first sign of anomaly and effective containment. In venture-backed firms, it appears as the accumulation of long-term obligations—leases, headcount, fixed costs—before demand assumptions are disciplined by constraint. In each case, the system operates coherently until the accumulated exposure forces a correction. Failure arrives abruptly because the risk did not degrade performance gradually; it accumulated invisibly.
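To make this measurable in practice, consider a minimal sketch, in Python, of an exposure ledger that resets at each orientation fix. The names here (Commitment, DisorientationLedger, exposure_since_fix) are hypothetical, and the point is illustrative rather than prescriptive: disorientation risk can be tracked as a running total of exposure committed since the last fix, instead of being reconstructed after a failure.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class Commitment:
    """An irreversible action: a trade sent, a release deployed, a lease signed."""
    timestamp: datetime
    exposure: float           # rough cost of reversal, in whatever unit fits
    description: str = ""


@dataclass
class DisorientationLedger:
    """Tracks exposure accumulated since the last shared orientation fix."""
    last_fix: datetime
    commitments: List[Commitment] = field(default_factory=list)

    def record(self, commitment: Commitment) -> None:
        """Every irreversible action lands here at the moment it is made."""
        self.commitments.append(commitment)

    def reorient(self, when: datetime) -> None:
        """A 'fix': shared understanding catches up and the ledger resets."""
        self.last_fix = when
        self.commitments = [c for c in self.commitments if c.timestamp > when]

    @property
    def exposure_since_fix(self) -> float:
        """Proxy for disorientation risk: exposure committed while un-fixed."""
        return sum(c.exposure for c in self.commitments
                   if c.timestamp >= self.last_fix)
```

In the Knight Capital case, the quantity of interest is exactly this running total: the value of orders executed between the first sign of anomaly and effective containment.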
This pattern becomes easier to recognize when set alongside the more familiar concept of technical debt. Technical debt entered engineering discourse to describe the deferred costs embedded in software systems when expedient choices substitute for durable structure. It does not imply error or negligence. Well-functioning systems can carry substantial technical debt for long periods, delivering value and even outperforming competitors. The debt becomes legible only when change is required, when the accumulated shortcuts impose friction, fragility, or delay. Its defining feature is temporal displacement: costs incurred later in exchange for speed now.
Disorientation risk shares this temporal structure, but it accumulates along a different dimension. Technical debt accrues in artifacts. It is stored in codebases, schemas, interfaces, and dependencies. Disorientation risk accrues in commitments. It is stored in trajectories already taken and obligations already assumed. Where technical debt constrains how easily a system can change, disorientation risk constrains whether a system can meaningfully understand the consequences of continued movement before those consequences become binding.
The two often grow together. An organization may operate with minimal technical debt while carrying substantial disorientation risk, a dynamic that will become increasingly common as AI agents work through long-ignored backlogs. This is particularly likely when execution tempo increases faster than orienting practices can adapt. Conversely, systems with heavy technical debt may remain well oriented if commitment is slow, bounded, and repeatedly checked against external reality. The remedies differ. Technical debt is addressed through refactoring, redesign, and deferred maintenance. Disorientation risk is addressed by shortening the distance between action and reorientation, or by constraining how far action can proceed under uncertainty.
As with technical debt, perceived success masks the accumulation of disorientation risk. Yet the accumulation is not occurring in the representational layer of the system. It is occurring in the gap between futures already committed and what the organization can still revise without incurring disproportionate loss. When conditions change, the reckoning arrives not as gradual inefficiency, but as abrupt exposure.
Overcommitted to the Future
A commitment, in organizational terms, is any action that converts choice into obligation. Ronald Coase’s account of the firm helps clarify why this matters. Firms exist in part to replace price-mediated adjustment with administrative decision, yet each such decision carries a cost of reversal. Trades executed, inventory shipped, leases signed, policies enforced, and code deployed all transform contingent possibilities into binding trajectories. Once made, these commitments resist revision. They shape future options not through intent, but through the accumulated force of obligation.
As production speed increases, the distance over which commitments can propagate before their consequences are evaluated expands. Donald MacKenzie’s analysis of automated markets shows how this propagation amplifies exposure. In high-speed trading systems, actions that appear locally rational interact with other automated systems to produce outcomes that no single actor anticipates. Small discrepancies, when allowed to propagate at machine pace, scale into systemic events. The issue is not that errors become more frequent. It is that they travel farther before learning can intervene.
The Knight Capital incident makes this dynamic concrete. A software deployment error activated obsolete logic on a subset of servers. The code behaved consistently with its instructions. Within minutes, those instructions generated millions of orders across multiple venues, accumulating unwanted positions at a scale that overwhelmed the firm’s capacity to respond. Learning occurred, but only after exposure had already materialized. The losses compounded as the mistake propagated through an environment optimized for speed.
Manufacturing history exhibits the same structure. Before the development of lean production systems, defects introduced at a single workstation often propagated across entire production runs. Steven Spear’s analysis of the Toyota Production System shows how, in high-throughput environments, small process variations could convert into mass recalls if undetected. The fragility did not stem from incompetence on the line. It stemmed from the distance a product could travel through the system before the organization learned that something had gone wrong.
The venture capital environment of the ZIRP era, the years of zero-interest-rate policy, extended this logic across time rather than space. Edward Chancellor’s account of prolonged low interest rates shows how cheap capital reduced the cost of commitment. Firms expanded headcount, signed long-term leases, and entered new markets on the assumption that future growth would validate present obligations. These commitments propagated forward, embedding fixed costs and expectations long before demand assumptions were tested under constraint. When funding conditions shifted, the accumulated exposure became visible all at once.
Propagation, rather than error generation, is the central risk introduced by speed. Accelerated systems do not fail because they produce more mistakes. They fail because they allow mistakes, mismatches, or untested assumptions to travel farther before encountering resistance. Commitments accumulate faster than learning can constrain them. Exposure grows invisibly until reversal becomes costly or impossible.
Any theory of organizations operating under AI-speed production must account for this dynamic.
How Toyota Found Its Bearings
Toyota’s production system offers a rare counterexample: it treated interruption as a first-order design problem. In the decades following World War II, Toyota faced severe capital constraints and volatile demand. High throughput mattered, but so did the cost of getting things wrong. A defect that propagated across thousands of vehicles could threaten the firm’s survival. Taiichi Ohno’s response was not to slow production in the abstract, but to ensure that learning could intervene before commitment hardened into irreversible exposure.
One of the most cited, and most misunderstood, mechanisms in this system is the andon cord. Its origins predate automotive manufacturing. Sakichi Toyoda’s automatic loom, developed in the early twentieth century, was designed to stop itself when a thread broke. The loom did not wait for an inspector to discover defects downstream. It halted immediately, preserving material and drawing attention to the cause. Ohno carried this principle onto the factory floor. At Toyota’s plants, line workers were given the authority to pull a cord that stopped the entire production line if they detected an abnormality. This authority was real. When the cord was pulled, production stopped. Supervisors arrived. The problem was investigated on the spot.
The andon system collapsed the distance between detection and response. A defect could not quietly propagate through the system. Stopping the line was costly in the moment, but cheaper than allowing uncertainty to travel downstream. Steven Spear documents early instances at Toyota plants where lines were stopped repeatedly as workers learned to use this authority. Productivity initially suffered. Over time, stoppages declined, not because workers stopped pulling the cord, but because processes improved. Learning was bound into the system through repeated interruption.
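A rough software analogue, sketched here in Python with hypothetical names (ProductionLine, pull_andon), illustrates the two properties the andon system depended on: any worker can halt the flow without asking permission, and resuming requires a countermeasure that changes the standard. This is an illustration of the principle, not a description of Toyota’s actual tooling.

```python
class LineStopped(Exception):
    """Raised when the line is halted; downstream steps must not proceed."""


class ProductionLine:
    """A stop-the-line mechanism for any processing pipeline (illustrative)."""

    def __init__(self) -> None:
        self.halted_by = None      # (worker, reason) while the line is stopped
        self.standard_work = {}    # current defaults, revised after each stop

    def pull_andon(self, worker: str, reason: str) -> None:
        """Any worker may halt the entire line; no permission is required."""
        self.halted_by = (worker, reason)

    def process(self, item):
        """The work step itself; it refuses to run while the line is halted."""
        if self.halted_by is not None:
            worker, reason = self.halted_by
            raise LineStopped(f"line halted by {worker}: {reason}")
        return item                # placeholder for the real transformation

    def resolve(self, countermeasure: dict) -> None:
        """Resuming requires a countermeasure that updates the standard."""
        self.standard_work.update(countermeasure)
        self.halted_by = None
```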
Standardized work played a complementary role. At Toyota, the lessons from each stoppage were written into revised standard operating procedures, which then constrained future action. This ensured that learning altered defaults rather than remaining trapped in reports or lessons-learned documents. Spear recounts cases where minor deviations—tool placement, sequence changes, timing mismatches—were surfaced through line stoppages and resolved by updating standards. The result was not perfection, but a system in which today’s failure became tomorrow’s constraint.
What distinguishes Toyota’s approach is that interruption was engineered into the flow of work. It did not depend on managerial vigilance or post hoc review. The system assumed that problems would occur and focused on limiting how far they could propagate before being addressed. Momentum was never allowed to outrun understanding for long. High throughput continued, but only under conditions where learning could override speed.
This stands in contrast to the earlier cases. In automated financial markets, interruption arrived externally, through circuit breakers imposed by exchanges or regulators after repeated incidents. At Knight Capital, the halt came only after losses had already accumulated. In venture capital, interruption often arrives through capital withdrawal or market correction, long after expansion decisions have propagated through leases, headcount, and fixed costs. In these systems, the authority to stop is distant from the point of action, and learning tends to follow damage rather than prevent it.
For this reason, Toyota functions as a positive control case. It shows that high-speed, high-throughput systems need not accumulate disorientation risk by default. When interruption is treated as a design requirement rather than a cultural aspiration, momentum becomes manageable. Learning gains leverage over commitment. Orientation remains coupled to action, even as production scales.
Knowledge Manufacturing at the Speed of Light
AI alters organizations by transforming how knowledge itself is produced. Analyses, drafts, simulations, code, and policy language can now be generated at near-zero marginal cost and at a scale that spans the entire organization simultaneously. Many AI observers have described this shift as the industrialization of cognition: a move from bespoke intellectual work to continuous, automated production of decision-relevant artifacts. The result is a sharp increase in the volume and velocity of soft products that can plausibly demand action.
This change functions as a generalized engine upgrade. The organization seems to think faster, but above all it commits faster. The speed at which options can be surfaced, superficially evaluated, and enacted rises across multiple domains at once.
Kei Kreutler’s work on artificial memory helps clarify why this acceleration is destabilizing. AI systems expand what organizations can remember, retrieve, and recombine, effectively enlarging the action space. Past decisions, edge cases, alternative framings, and hypothetical scenarios remain continuously available. Yet this expansion does not come with an automatic increase in the capacity to orient within that space. Orientation must now account for more possibilities, more dependencies, and more simultaneous lines of action. The surface area of plausible commitment grows faster than the mechanisms that maintain coherence across it.
Orientation mechanisms remain stubbornly human-paced. Review cycles, legitimacy checks, accountability structures, and escalation paths depend on attention, trust, and interpretation. Karl Weick’s analysis of sensemaking remains instructive here. Interpretation stabilizes through interaction and shared context, neither of which scales linearly with artifact production. As AI accelerates action, understanding increasingly arrives late, explaining why decisions were made rather than constraining what should be done next. The gap between action and understanding widens, not because people are inattentive, but because the tempo of the system has changed.
This widening gap is already visible in operational traces. AI-augmented teams report rising levels of rework, rollbacks, and exception handling. Decisions are reversed not because they were irrational, but because they were made under assumptions that could not be collectively surfaced and tested in time. The system moves forward confidently until friction appears, at which point learning occurs abruptly. More ground is covered before the organization pauses to reassess where it is and whether its direction remains viable.
Framing AI as “intelligence” obscures this effect. Intelligence suggests better judgment, deeper insight, or improved foresight. What AI changes most immediately, however, is tempo: it removes the natural pauses that once forced organizations to reorient. Drafting delays, analytical bottlenecks, and coordination costs previously acted as informal checks on momentum. Their disappearance increases exposure. The organization travels farther between fixes.
The historical analogy to steam power remains apt. Steam did not remove the need for charts, soundings, or traffic rules. It increased the consequences of their absence. AI operates similarly. It does not eliminate the need for orientation; it raises the stakes of being without it. Where action becomes cheap and plentiful, navigation becomes the limiting factor.
The organizational challenge, then, is to ensure that speed remains survivable. As knowledge manufacturing approaches the speed of light, the decisive question becomes how far the organization is allowed to move before understanding can meaningfully catch up.
Knowledge Production Navigation Machines
In navigation, orientation links observation to course correction, deciding when a vessel can safely maintain speed and when it must slow, hold position, or change direction. John Boyd’s formulation of orientation within the OODA loop emphasized this asymmetry. Observation and action can accelerate indefinitely. Orientation cannot. Survival depends on whether orientation governs the loop rather than trailing it. High-speed systems fail when orientation is treated as implicit judgment instead of engineered infrastructure.
The historical cases make this concrete. Lighthouses did not make captains smarter. Traffic separation schemes did not improve seamanship. Circuit breakers did not increase traders’ insight. These mechanisms functioned as navigation machines. They structured movement in advance of understanding, limiting how far a system could travel while situational awareness remained ambiguous.
Organizations face the same requirement as they industrialize knowledge production. In fast-moving systems, shared interpretation must be stabilized ahead of time. It must be embedded in defaults, thresholds, permissions, and automatic pauses.
AI-driven knowledge production makes this externalization unavoidable. When analyses, recommendations, and drafts appear continuously, orientation must decide which outputs can immediately propagate into commitment and which require a further fix before they do. This distinction cannot be made ad hoc; production is too fast and operates at a scale beyond case-by-case human oversight. It must be mechanized. One emerging example appears in AI research workflows that separate exploratory generation from validated synthesis. Some deep research systems now include structured “thinking threads” that remain visible and inspectable before outputs are promoted into authoritative summaries. These panels preserve traceability and slow the transition from possibility to commitment, functioning as a kind of positional fix before action.
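A minimal sketch of such a promotion gate, in Python, might look like the following. The names (Draft, Promoted, promote) and the particular checks (verified sources, a minimum number of reviewers) are assumptions made for illustration; real systems would use richer validation, but the structure is the same: exploratory artifacts cannot reach commitment except through an explicit gate.

```python
from dataclasses import dataclass, field


@dataclass
class Draft:
    """An exploratory artifact: visible and inspectable, not yet authoritative."""
    content: str
    sources_checked: bool = False
    reviewed_by: set = field(default_factory=set)


@dataclass
class Promoted:
    """An authoritative artifact that downstream work is allowed to act on."""
    content: str
    provenance: Draft


def promote(draft: Draft, required_reviewers: int = 1) -> Promoted:
    """The only path from possibility to commitment runs through this gate."""
    if not draft.sources_checked:
        raise ValueError("cannot promote: sources not yet verified")
    if len(draft.reviewed_by) < required_reviewers:
        raise ValueError("cannot promote: insufficient independent review")
    return Promoted(content=draft.content, provenance=draft)
```

The gate is deliberately the only constructor of promoted artifacts, so downstream tooling can be configured to act on those and nothing else.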
Feed monitoring systems provide another example. In automated trading and content moderation alike, continuous feeds detect boundary conditions, such as volatility or violence, that trigger constraint. Circuit breakers halt trading. Rate limiters slow API calls. Content pipelines quarantine anomalous outputs. These systems operate on thresholds rather than understanding. They exist to reduce exposure while interpretation catches up. Their effectiveness lies in their placement within the flow of action, not in the sophistication of their analytics.
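The logic of these monitors is simple enough to sketch. The Python below is a generic rolling-window breaker, not any exchange’s actual mechanism; the class name, threshold, and window are illustrative. What matters is that it bounds activity on a threshold it does not need to understand, and that reopening is a deliberate human act.

```python
import time


class CircuitBreaker:
    """Halts a feed when activity crosses a threshold, whatever the cause.

    The breaker does not understand the anomaly; it only bounds how much can
    happen while interpretation catches up. Thresholds here are illustrative.
    """

    def __init__(self, max_events: int, window_seconds: float) -> None:
        self.max_events = max_events
        self.window = window_seconds
        self.events: list = []
        self.tripped = False

    def allow(self) -> bool:
        """Return True if the next event may proceed; trip the breaker if not."""
        now = time.monotonic()
        # Keep only events inside the rolling window.
        self.events = [t for t in self.events if now - t < self.window]
        if self.tripped:
            return False
        if len(self.events) >= self.max_events:
            self.tripped = True        # halt the feed; a human must reset it
            return False
        self.events.append(now)
        return True

    def reset(self) -> None:
        """Reopening the breaker is a deliberate, human decision."""
        self.tripped = False
        self.events.clear()
```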
Toyota’s production system illustrates the same logic in a physical domain. Andon cords, standardized work, and immediate escalation protocols embedded orientation directly into production. Learning altered defaults in real time. A worker’s pull of the cord was not a request for permission; it was an authorized interruption that reasserted orientation over momentum. The system assumed uncertainty and designed for interruption rather than perfection. Speed remained high because exposure was constrained.
Orienteering work, then, consists of continuously distinguishing between actions that are safe to commit immediately and actions that require further orientation. This distinction must be operational. It must alter tools, permissions, and defaults. Without this binding, organizations accumulate disorientation risk even as they appear responsive.
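One way to make that distinction operational is to encode it in defaults, as in the hypothetical policy table sketched below. The action names, cost ceilings, and the Disposition enum are all invented for illustration; the idea is simply that every class of action carries an explicit commit-or-orient default, and anything unknown or costly to reverse waits for orientation.

```python
from enum import Enum, auto


class Disposition(Enum):
    COMMIT = auto()   # safe to propagate into obligation immediately
    ORIENT = auto()   # hold until a fresh fix on position, risk, and constraint


# Illustrative defaults: each action class carries an explicit disposition and
# a reversal-cost ceiling above which it must wait for orientation anyway.
POLICY = {
    "draft_internal_memo":  (Disposition.COMMIT, float("inf")),
    "publish_experiment":   (Disposition.COMMIT, 10_000),
    "merge_to_production":  (Disposition.ORIENT, 0.0),
    "sign_vendor_contract": (Disposition.ORIENT, 0.0),
}


def disposition(action: str, reversal_cost: float) -> Disposition:
    """Unknown or costly-to-reverse actions default to waiting for a fix."""
    default, ceiling = POLICY.get(action, (Disposition.ORIENT, 0.0))
    if default is Disposition.COMMIT and reversal_cost <= ceiling:
        return Disposition.COMMIT
    return Disposition.ORIENT
```

The table itself becomes the artifact that learning updates, much as Toyota’s standards absorbed the lessons of each line stop.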
Equally important is the legitimacy of pause. Boyd and Weick both emphasize that orientation under pressure requires the authority to interrupt action without stigma. Systems that penalize slowing down guarantee that uncertainty will be ignored until it becomes unmanageable. Navigation history offers a clear lesson here. Ships did not become safer because captains were encouraged to be cautious. They became safer because routes, signals, and reporting requirements made slowing or diverting a normal response to ambiguity.
These demands recur wherever speed increases: in markets, factories, logistics networks, and capital allocation. Organizations that fail to build navigation machines for knowledge production drift until external reality enforces a correction. Organizations that succeed convert speed into advantage by ensuring that learning can interrupt momentum before exposure hardens into loss.
This work defines the next frontier of management under AI. As knowledge production approaches continuous motion, orientation must be engineered with the same seriousness once reserved for propulsion. Engines will continue to improve. The decisive question is whether organizations build the navigation machines required to survive their own speed.

