TL;DR Corporate leaders are restructuring their organisations around AI using a compression thesis: same work, fewer people, flatter hierarchy. The evidence increasingly points to expansion: more work, different work, new capabilities. Restructuring for compression when the actual phenomenon is expansion is a classification error with cascading consequences. Before redesigning your org chart, classify the change.

Your CEO just sent an all-hands email. The subject line contains the words "AI-first" and "restructuring." The deck references Block, Shopify, and Anthropic. There is a slide about flattening hierarchy, a slide about smaller teams, and a slide about everyone becoming a builder. The stock price ticked up 3% on the announcement.

I have seen this deck. Not yours specifically, but the archetype. It circulates through corporate innovation teams, transformation offices, and board strategy sessions with the reliability of a virus finding a host. The argument is seductive in its simplicity: AI makes individuals more capable, so organisations need fewer people, fewer layers, and fewer role distinctions. Flatten. Shrink. Blur.

The argument is also wrong. Not factually wrong. Categorically wrong. It answers the wrong question.

The three-move playbook and its evidence base

The restructuring narrative rests on three observable shifts in technology companies.

  • Hierarchy is flattening.
    Google cut 35% of its small-team managers in a single year. Block's Jack Dorsey, co-writing with Sequoia's Roelof Botha, declared hierarchy an obsolete information-routing protocol and proposed collapsing Block's structure to three roles. At Anthropic, every employee carries the same title regardless of seniority. Microsoft's Satya Nadella has stated that AI requires structural redesign.

  • Teams are shrinking.
    Block cut 40% of headcount while targeting $2 million gross profit per employee. Shopify's revenue per employee nearly tripled to $1.4 million after a 30% workforce reduction. AI-native startups like Cursor reached $6.7 million revenue per employee. Cognition's 15-person team generated $73 million in annual recurring revenue.

  • Roles are blurring.
    Anthropic's designers spend 80% of their time with both Figma and coding tools open, making state management changes that would have required an engineer a year ago. At Shopify, the fastest-growing groups adopting Cursor are not engineering teams but sales and support. Postman's CEO describes product managers building directly in the product, skipping the documentation-to-prototype-to-handoff chain entirely.

These are real observations. The companies are real. The data points check out. And an entire consulting industry is now packaging these observations into a universal prescription: flatten your hierarchy, shrink your teams, dissolve your role boundaries.

The prescription is where the error lives.

Compression is not what is happening

Strip away the CEO narratives and look at what the same companies report when they study their own behaviour. A different pattern emerges.

Anthropic conducted the most rigorous internal study available: 132 engineers and researchers surveyed, 53 in-depth interviews, granular usage data analysed. The headline finding that circulated was about productivity gains. The finding that should have circulated was this: 27% of AI-assisted work consisted of tasks that would not have been done at all without AI.

Not done faster. Not done by fewer people. Not done at all, previously.

Engineers used AI to scale projects beyond their original scope. Teams revived abandoned ideas that had been permanently deprioritised under human-only capacity constraints. Designers built interactive dashboards and experimental pipelines. The Legal team built accessibility tools. A further 8.6% of tasks were small quality-of-life improvements that had accumulated as technical debt because they never justified the time investment under the old economics.

GitHub's 2025 Octoverse data tells the same story at aggregate scale. Developers merged 43 million pull requests per month, up 23% year-over-year. Annual commits pushed past 1 billion, up 25%. These are not efficiency gains on a fixed workload. They are expansion. Product managers at AI-augmented companies now cover four to six times greater scope than their pre-AI predecessors, absorbing prototyping, prompt engineering, and quality assurance into their role.

Thomas Dohmke, GitHub's former CEO, captured the pattern precisely. No company, he observed, has finished its backlog thanks to AI. AI is creating more possibilities and more work.

This is the compression thesis colliding with reality. The compression thesis says: AI does the work, so you need fewer people to do the same work. What the evidence shows: AI expands what is possible, so the same people (or fewer) attempt work that was previously out of scope entirely.

The distinction matters because the organisational design implications are opposite. If AI compresses work, you optimise for efficiency: fewer layers, smaller teams, interchangeable generalists. If AI expands work, you optimise for judgment: who decides which new possibilities to pursue, how expanded scope is governed, and whether the expansion creates value or merely creates busyness.

A UC Berkeley and Yale ethnographic study documented the dark version of expansion. Freed time was immediately filled with more tasks. Workers used AI during breaks and off-hours. Burnout rates hit 62% among associates. One worker described the ratchet effect: delivering three AI-assisted pitch decks led to expectations of eight-, then ten-, then twelve-hour days. BCG coined the term "AI brain fry" for the mental fatigue of intense AI oversight.

Expansion without governance is exploitation. The question is not whether AI expands the frontier of work. The question is who controls the expansion, and whether the organisation has classified the change correctly before restructuring around it.

The classification problem, restated

I have written previously about the AI classification problem: that the dominant failure mode in corporate AI is taxonomic, not technological. Organisations misclassify AI initiatives by treating transformation as optimisation, then resource, govern, and evaluate the initiative using the wrong framework.
The reversibility test surfaces the gap: if you removed the AI tomorrow, would the organisation's processes revert to their previous state? If reverting is unthinkable because the organisation has reorganised around the capability, you have a transformation, regardless of what the business case said.

Organisational restructuring is the next misclassification.

When a CEO announces a flatter hierarchy and smaller teams in response to AI, that restructuring is itself an AI initiative. It has a classification. And in most cases, it has been misclassified.

The Block playbook (three roles, no permanent middle management, AI-driven world models replacing human context) is a transformation. It requires new authority structures, new coordination mechanisms, new incentive systems, and new career architectures. It is irreversible in any practical sense. But it is being adopted by companies that approve it as an optimisation: a leaner org chart, same strategy, reduced cost base. The restructuring is governed with optimisation timelines (6-12 months), optimisation success metrics (headcount reduction, revenue per employee), and optimisation stakeholder models (HR and finance, not product and engineering).

The misclassification cascades. When a transformation is governed as an optimisation, the timeline is wrong, the metrics are wrong, the political capital is insufficient, and the inevitable resistance is interpreted as failure rather than as information about the actual scope of the change.

Apply the reversibility test to your own restructuring. If you flattened the hierarchy tomorrow and it did not work, could you revert? If the answer involves re-hiring, re-training, rebuilding institutional knowledge, and reconstructing decision-making processes, you are not optimising. You are transforming. Govern accordingly or fail predictably.

What the sequencing reveals

The Critical Path Layers framework, which I developed for corporate innovation contexts, maps the dependency structure of organisational change. Each layer depends on the resolution of the layer beneath it. Working on later layers before resolving earlier ones does not just waste resources. It produces misleading signals.

Organisational restructuring around AI touches four layers, and most companies are starting at the wrong one.

  • Layer minus one: Problem Classification.
    Before restructuring, classify what AI represents for your specific organisation. Not for Block. Not for Anthropic. For you.
    Is AI changing how existing work gets done (optimisation), extending your organisation into adjacent capabilities (adjacency), or fundamentally altering how work is distributed, evaluated, and governed (transformation)? The answer determines everything downstream. If your organisation has not explicitly answered this question, every subsequent restructuring decision rests on an unexamined assumption.

  • Layer one: Internal Market Clarity.
    The corporate version of this layer asks: who is the internal customer for this change, and what job are they hiring it to do? Restructuring the org chart is not the same as changing how work gets done. Funding is not adoption. A flatter hierarchy on paper means nothing if the actual decision-making patterns have not changed. I have observed this pattern across multiple corporate programmes: the structure changes, the PowerPoint changes, and Monday morning stays exactly the same.

  • Layer two: Organisational Feasibility.
    This is the political layer. Who has veto power over the restructuring? Where are the pockets of resistance, and are they legitimate concerns or territorial protection? The Block model assumes resistance is overhead to be eliminated.
    In practice, resistance is diagnostic information. Kotter's guiding coalition principle applies: transformation without a coalition of sufficiently powerful sponsors fails, regardless of the quality of the design. The restructuring that looks clean on a strategy slide founders when it encounters the first VP who controls a critical budget line and was not consulted.

  • Layer four: Institutional Embedding.
    The real test of any restructuring is not whether it gets announced. It is whether, eighteen months later, the new way of working has become the default. Has the flat hierarchy become self-sustaining, or does it depend on the CEO's personal attention? Have smaller teams developed their own coordination mechanisms, or are they producing the expanded output that AI enables while quietly burning out? Have blurred roles created genuine capability growth, or have they created anxiety about career progression and accountability?

Most companies restructure with Layer-four ambition and Layer-minus-one preparation. They announce the destination without classifying the journey.

The cultural wall nobody mentions

Every company in the restructuring narrative is American, or Canadian-founded but Silicon-Valley-cultured. Every organisational pattern described assumes low power distance, high individualism, and cultural comfort with ambiguity. This is not a footnote. It is a structural limitation that invalidates the universal prescription.

Geert Hofstede's power distance index (PDI) measures how readily people in a culture accept unequal power distribution. Dorsey's argument that hierarchy is an obsolete information-routing protocol is intelligible in a culture where the PDI is 40 (United States). It is incoherent in a culture where the PDI is 80 (China), 93 (Russia), or 100 (Malaysia).

In high power-distance cultures, hierarchy is not coordination overhead. It is social infrastructure that confers meaning, status, and trust. Removing management layers does not free individuals to do their best work. It destabilises the relational fabric that makes collaboration possible in the first place. The senpai-kohai structure in a Japanese enterprise, the rigid promotion tracks of a Korean chaebol, the formal reporting lines of a French corporate: these are not bugs. They are operating systems. Replacing them requires a different kind of change than installing Cursor licences.

Role boundaries carry a parallel cultural function. In cultures with high uncertainty avoidance (Japan at 92, South Korea at 85, France at 86), clear role definitions are psychological safety mechanisms. Knowing your domain of responsibility is foundational to professional identity. "Everyone is a builder" is an invitation to anxiety, not empowerment.

HBR research from November 2025 found that AI adoption in hierarchical organisations is blocked not primarily by technology but by power dynamics. Junior employees with AI skills outperform veterans, threatening tenure-based advancement systems. Resource hoarding is measurable: programmers at one large IT firm were 16-18% less likely to recommend AI tool access to their own teammates. A Frontiers in AI study from January 2026 found that national cultural dimensions correlate directly with AI readiness, suggesting the organisational patterns described in the Silicon Valley playbook are culturally contingent rather than universally optimal.

The implication for corporate innovation leaders working internationally is blunt. Adopting the Block model in Tokyo, Seoul, or Paris without accounting for power distance, uncertainty avoidance, and the social functions of hierarchy is not bold leadership. It is a failure of classification at a cultural level.

What to do instead

The restructuring conversation needs reframing from prescription to diagnosis. The question is not "how do we flatten, shrink, and blur like Block?" The question is threefold.

  1. Classify the change before designing the structure.
    Apply the reversibility test. If AI were removed from your organisation tomorrow, what would need to change? If the answer is "we would switch back to the old tools," you have an optimisation. Restructuring is unnecessary. If the answer involves reconstructing roles, teams, and decision-making processes, you have a transformation. Restructure deliberately, with transformation-grade governance, timelines, and political capital.

  2. Measure expansion, not just compression.
    Revenue per employee is a compression metric. It tells you whether you are doing the same work with fewer people. It tells you nothing about whether your organisation is attempting work it could not attempt before. Track net-new initiatives: projects, tools, capabilities, and experiments that exist only because AI made them feasible. Track scope expansion per role. Track whether the expanded output creates value or creates burnout. The organisations that will benefit most from AI are not the ones that shrink fastest. They are the ones that expand most intelligently.

  3. Test cultural fit before importing the playbook.
    If your organisation operates across cultures with different power distance norms, different uncertainty avoidance profiles, and different social functions for hierarchy and role definition, the Silicon Valley restructuring model will not transfer intact.
    Map the cultural dimensions of your specific organisation. Identify where the model fits and where it will trigger an immune response. Design the adaptation before announcing the destination.

The restructuring is real. The question is whether it is classified correctly.

Get the classification wrong and you will optimise for compression while the real opportunity is expansion. You will flatten hierarchy in cultures where hierarchy is load-bearing. You will shrink teams that are attempting more than they ever have. You will blur roles without building the governance to manage expanded scope.

Henderson and Clark demonstrated in 1990 that established firms fail primarily from misclassifying the type of innovation they face. Thirty-five years later, the mechanism is identical. The technology is new. The error is old.

Classify before you restructure. The org chart can wait.

Alexandra Najdanovic is the founder of Aieutics, working with founders and corporate innovation teams on strategic transformation and AI-readiness. The AI Classification Problem and the Critical Path Layers framework referenced in this article are her proprietary diagnostic models.
