The Operating Model Is Dead

Introduction: The Question Nobody Asked

I've been reflecting on conversations about a recent roundtable in Paris that brought together VCs, Operating Partners, and key players from the French startup ecosystem. By all accounts, it was sophisticated. Impressive metrics were shared (some funds achieving 75% seed-to-Series A rates versus 29% market average), the discussion of ROI measurement was refreshingly concrete, and everyone acknowledged that 'capital alone isn't enough.'

But throughout, one question was apparently conspicuous by its absence:

If I had to start a VC or Operating Partner function from scratch today, knowing what I know, and accounting for the 3-year trajectory, what would I fundamentally do differently?

Instead, what emerged was incremental optimisation of 2019-era playbooks. 'We added a GPT wrapper to our knowledge base.' 'We're thinking about AI for matching.' This isn't transformation. It's decoration.

The conversation revealed something important about where the European innovation ecosystem stands: mature enough for sophisticated execution discussions, not yet ready for uncomfortable structural questions.

What follows is a provocation. A thought experiment. What would the European VC and operating model actually look like if we had the courage to start from zero?

What's Actually Changing (That Nobody Discussed)

From what I've heard, the discussion touched on 'faire mieux avec moins' (doing better with less), but treated it as an optimisation challenge rather than a structural shift. AI dependencies and geopolitical risk came up as footnotes. Defence and critical infrastructure, where serious European capital is actually flowing, apparently weren't discussed at all.

Here's what the honest conversation would have addressed.

The Capital Efficiency Reality

European startups compete against US companies with substantially more capital at equivalent stages. The data is worse than most people admit: between 2016 and 2024, the EU raised $133 billion in venture capital whilst the US raised $932 billion. That's not three times the capital. It's seven times.

Break it down by stage and it gets bleaker. In 2024, seed and early-stage investments in the EU were about 80% lower than in the US. Growth-stage financing was 84% lower. There are at least seven times more funds in the US than the EU for funding rounds above €50 million. And only 12 European VC funds managed to raise tickets above $1 billion between 2016 and 2024, compared to 157 in the United States.

The discussion of 'AI-native versus classic SaaS' productivity differences didn't address the uncomfortable reality: many European B2B SaaS companies may simply not survive the next three years because they can't afford the compute to remain competitive.

Consider what it costs to run AI at scale. ChatGPT reportedly cost $700,000 per day to operate early on. Google has noted that an AI-powered search query can be ten times more costly than a standard keyword search. The average organisation now spends $62,964 per month on AI infrastructure, a figure projected to reach $85,521 by the end of 2025. A 36% increase in one year. The proportion of organisations planning to invest over $100,000 per month in AI tools is set to more than double, jumping from 20% in 2024 to 45% in 2025.

For European startups, GPU costs typically consume 40-60% of technical budgets. When you're competing against a US company that's raised $50 million to your €5 million, and your infrastructure costs are growing 36% annually, the unit economics don't work. As I've explored in 'The AI-Native Metrics Revolution,' traditional software metrics were designed for a world of predictable recurring revenue, high switching costs, and clear unit economics. AI has broken all three assumptions. But the operating model conversation is still centred on helping startups optimise for metrics that are becoming meaningless.
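
A rough runway sketch makes the erosion visible. The 36% growth rate and the idea that infrastructure eats roughly half the budget come from the figures above; the €5 million raise and €250k monthly burn are illustrative assumptions, nothing more.

```python
# Illustrative runway model: how 36% annual infrastructure cost growth
# erodes a fixed raise. Only the 36% growth rate and the heavy GPU share
# come from the figures cited above; the starting numbers are hypothetical.

def months_of_runway(raise_eur, monthly_burn, infra_share, infra_annual_growth):
    """Count the months until the raise is spent, compounding the infra portion."""
    monthly_growth = (1 + infra_annual_growth) ** (1 / 12) - 1
    infra = monthly_burn * infra_share        # infrastructure slice of the burn
    other = monthly_burn * (1 - infra_share)  # salaries, rent, everything else
    cash, months = raise_eur, 0
    while cash > 0:
        cash -= infra + other
        infra *= 1 + monthly_growth           # only the infra slice compounds
        months += 1
    return months

# Hypothetical seed-stage startup: €5M raised, €250k monthly burn,
# half of it infrastructure, with infra costs growing 36% per year.
print(months_of_runway(5_000_000, 250_000, 0.5, 0.36))  # 18 months
# Same profile with flat infrastructure costs, for comparison.
print(months_of_runway(5_000_000, 250_000, 0.5, 0.0))   # 20 months
```

Two months of runway lost to cost growth alone, before you account for the competitor that raised ten times as much.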

The Infrastructure Dependency Problem

From what I've heard, 'dépendance aux fournisseurs' (dependence on suppliers) came up as an aside. This is arguably the strategic question for European tech sovereignty. The entire discussion apparently assumed continued access to US cloud infrastructure and AI models at current pricing. That assumption looks increasingly fragile.

Eighty per cent of Europe's digital infrastructure is imported. US hyperscalers (AWS, Microsoft Azure, Google Cloud) control 70% of the European cloud market. European cloud providers have actually lost ground: their market share was 29% in 2017 and is now 15%. We're moving in the wrong direction.

Europe has tried to respond. France announced a €109 billion AI investment plan. The EU is mobilising €20 billion through the 'InvestAI Facility' targeting five AI Gigafactories. The Gaia-X programme was supposed to build European cloud alternatives. Yet European providers keep losing market share.

In March 2025, nearly 100 European technology companies and lobbying groups signed an open letter to the European Commission calling for a sovereign infrastructure fund. The letter pointed to recent geopolitical events as evidence of 'the stark geopolitical reality Europe is now facing.' Signatories included Airbus, Dassault Systèmes, OVHcloud, SiPearl, Nextcloud, and the European Startup Network. They argued that tech dependencies 'compromise our sovereignty' and called for coordinated action.

The letter was right to be concerned. What happens to operating support when the underlying stack is subject to export controls, pricing arbitrage, or geopolitical shifts? This isn't a hypothetical concern for a three-year horizon. We now live in a world where a presidential executive order can directly suspend access to personal email accounts or Microsoft services.

Then there's the Chinese disruption. In January 2025, DeepSeek V3 demonstrated it had trained a GPT-4 equivalent model at 18 times lower training costs and 36 times lower inference costs than OpenAI, using only 2,000 GPUs. The compute cost advantage that US companies assumed was permanent just evaporated. Every European startup betting on US cloud infrastructure now faces pricing arbitrage from low-cost competitors, geopolitical leverage points, and competitive disadvantage against companies with sovereign infrastructure.
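
A back-of-the-envelope margin comparison shows what that arbitrage means in practice. The 36-times inference cost gap is the DeepSeek figure above; the revenue and cost-share numbers are illustrative assumptions.

```python
# Illustrative gross-margin comparison under a 36x inference cost gap
# (the DeepSeek figure cited above). Revenue and cost-share numbers are
# hypothetical assumptions for the sketch, not data from the article.

def gross_margin(revenue, inference_cost):
    return (revenue - inference_cost) / revenue

revenue = 1_000_000                  # annual revenue, illustrative
inference_eu = 400_000               # assume inference eats 40% of revenue on US cloud
inference_rival = inference_eu / 36  # rival with a DeepSeek-class cost structure

print(f"EU startup on US cloud: {gross_margin(revenue, inference_eu):.0%}")    # 60%
print(f"Rival at 1/36th cost:   {gross_margin(revenue, inference_rival):.0%}") # ~99%
# The rival can cut prices by 40% and still run fatter margins than the
# incumbent. That is what pricing arbitrage looks like from the inside.
```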

The Talent Arbitrage Is Closing

The discussion of 'AI engineers' recruitment challenges was accurate but incomplete. The real issue is that top European AI talent increasingly has three choices, not two: US companies (remote or relocated), European defence and security players with serious funding, or traditional tech companies that are becoming uncompetitive on both compensation and mission.

The European Union faces a tech talent gap of 3.9 million people by 2027, according to McKinsey. AI job postings have increased 78% year-over-year, but the talent pool has grown only 24%. In 2023, Europe lost a net 492 technical workers to the United States. Ireland reported that 81% of employers faced AI talent shortages in 2023, up from just 18% in 2018. Germany now sees average hiring delays of six months for AI-related technical positions.

But here's what's really happening: talent isn't disappearing. It's concentrating.

The number of top AI engineers in Europe's defence sector (based on published research) soared from 144 in 2014 to 1,700 in 2024. That's twelve times growth in a decade, with most of it coming in the last two years. Venture capital investment in European defence companies reached $626 million in 2024, up from $254 million in 2023 and just $62 million in 2022. Ten times growth in two years.

Defence sector attrition in the EU stands at 13%, more than four times the US rate of 3%. Yet professionals, especially engineers, are being drawn to the sector by more dynamic environments, faster growth opportunities, and compensation that's sometimes 20-50% higher than comparable roles in traditional tech. The European defence sector is expected to grow from one million direct jobs today to 1.46 million by 2030.

Take Helsing, the Berlin-based AI defence company. Founded in 2021, it raised a $223 million Series B in 2024, one of the largest European AI rounds that year. It's recruiting top AI talent from DeepMind, Meta, and other consumer tech giants by offering premium salaries and a 'mission-driven' narrative.

Or consider the story that Reuters reported in April 2025: Michael Rowley, a 20-year-old British university student, recently rejected offers from accounting firms and traditional AI companies to work for a company developing technology that allows sensors to better track troop movements. 'When I decided to go into defence I had quite a few options, and for me it was the opportunity to do meaningful work,' Rowley said. 'If I want to work for most tech companies I might write code for advertising, but to be able to contribute to the front lines and help protect democracy is an opportunity not many people get.'

In February 2025, Leonardo, the Italian aerospace, defence, and security company, stated: 'This is certainly one of the periods impacted by the most intense search for new hires in Leonardo's history, even more intense than in previous conflicts.'

This isn't a recruitment challenge that operating support can solve. It's a structural constraint that changes which categories of startups can actually win from Europe. The middle-market startup talent pool is hollowing out.

The Blank Slate Question

So what would you actually build differently if starting from zero in January 2026?

On Team Composition

You wouldn't hire 'Operating Partners with functional expertise in marketing, HR, product.' You'd hire three to four people with genuine technical depth in AI systems plus a small number of deeply specialised sector experts (energy transition, defence, regulated industries).

The generalist functional expertise that made operating teams valuable in 2018 ('here's how to structure your sales org') is increasingly commoditised by AI. Online courses on AI increased 267% in 2024, though completion rates remain low at 23%. Founders can generate marketing playbooks, pitch decks, and strategic frameworks using generative AI. The information bottleneck has shifted. It's no longer 'how do I learn this' but 'which of 47 conflicting approaches applies to my specific situation.'

The value is in what AI can't yet do: navigating regulatory complexity, deep industry relationships, and genuine technical judgement on defensibility. People who can do those things have become 'non-negotiable' in a market where distributions have hit decade lows and IPOs account for just 6% of exit value. But their compensation only makes sense if they're providing irreplaceable judgement.

On Portfolio Construction

You'd probably run a much more concentrated book. The 'spray and seed, hope for Series A' model assumes abundant follow-on capital and a functioning growth equity market. Neither is guaranteed in Europe for the next cycle.

Growth-stage funding in Europe peaked at €3.5 billion in 2021, collapsed to €275 million in 2023, and partially recovered to €818 million in 2024. The majority of growth capital for rounds above €50 million comes from non-EU investors, creating a risk of relocation and loss of economic security.

Yet there's an interesting counter-signal in the data. In Central and Eastern Europe, startups have proven exceptionally efficient in capital utilisation. Between 21% and 33% of unicorns in Poland, Croatia, and Romania were bootstrapped rather than venture-backed. Compare that to 95% venture-backing across broader Europe. When late-stage funding collapsed, CEE contracted by just 15% whilst Western Europe plunged by 35%.

The lesson isn't that all startups should bootstrap. It's that European startups can achieve scale with less capital when they build for profitability from day one, but only in specific business models. Deep tech requiring massive infrastructure investment doesn't work this way, as IQM's story illustrates.

IQM, a Finnish quantum computing company, raised a $320 million Series B in September 2024. It was the largest Series B in Nordic history, and the largest in the quantum space outside the US. The round demonstrated that category-leading innovation can still emerge from Europe. But it required a massive concentrated bet, not spray-and-pray seed rounds.

You'd want fewer bets with more conviction, and you'd want to be prepared to fund through to profitability without relying on the next investor showing up.

On What 'Support' Actually Means

The high-touch model (monthly calls, workshops, office hours, batch programming) is built for a world where founder time is the bottleneck and information is scarce. We're entering a world where founder attention is the bottleneck and information is abundant.

The valuable intervention isn't 'let me teach you about pricing strategy.' It's 'let me make three introductions that would take you 18 months to get on your own' and 'let me pattern-match this specific situation against the 40 similar ones I've seen and tell you which decision actually matters.'

In other words: less curriculum, more judgement. Less content, more access. Less 'we'll help you think through X,' more 'here's the answer, here's why, now move.'

I've heard about VCs building RAG systems over their content libraries, treating this as innovation in founder support. It's a perfect example of solving the wrong problem. The bottleneck isn't retrieving information from past workshops. The bottleneck is having the judgement to know which intervention matters for this founder at this moment.
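
For anyone who hasn't built one: a RAG system over a content library is, at its core, similarity search. The toy sketch below uses bag-of-words matching instead of a real embedding model or vector store, but the limitation is identical. It can surface the most similar past workshop; it cannot tell you whether that workshop is the intervention this founder needs.

```python
# Minimal sketch of retrieval over a content library, using a toy
# bag-of-words representation so it stays self-contained. Real systems
# swap in an embedding model and a vector store, but the behaviour is
# the same: return the most similar document, nothing more.
import math
from collections import Counter

library = {
    "Pricing strategy workshop": "how to set pricing tiers for B2B SaaS",
    "Hiring playbook":           "structuring your first sales org and hires",
    "Fundraising deck review":   "what Series A investors look for in metrics",
}

def vectorise(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query):
    q = vectorise(query)
    return max(library, key=lambda title: cosine(q, vectorise(library[title])))

# Surfaces the pricing workshop, but cannot say whether repricing is
# even the decision that matters for this founder right now.
print(retrieve("should we change our pricing tiers"))
```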

On Reddit and Hacker News, when founders discuss what they actually want from investors, three things come up consistently: specific introductions (to customers, investors, talent), judgement calls on specific decisions ('Should I fire my CTO?' 'Is this pivot stupid?'), and peer learning from other founders two steps ahead. Not workshops on Lean Startup principles.

NATO's Defence Innovation Accelerator for the North Atlantic (DIANA) provides a useful contrast. The programme focuses on specific technical challenges rather than generic entrepreneurship. It provides access to government procurement pipelines and connects startups to defence primes for partnerships. The value is entirely in network access and sector-specific judgement, not educational content.

On Business Model

The 2% management fee and 20% carry model funds large teams doing lots of activity. If the actual value-add is concentrated in a handful of high-judgement interventions plus network access, you might run a much leaner operation with different economics. Or explicitly split the 'capital allocation' function from the 'company building support' function and price them differently.

Consider the maths. A typical European VC fund of €100 million generates €2 million per year in management fees. But operating teams of 20-30 people cost €4-6 million per year all-in. Management fees don't cover operating costs for large platform teams. Funds make it work by raising larger vehicles (€250 million plus), but this creates pressure to deploy capital into more companies, diluting the quality of support.

A lean alternative: €75-100 million fund, 2% management fee, but a team of just eight to twelve people total (partners plus sector specialists). Focus on judgement and network rather than content and curriculum. Total costs: €2-2.5 million per year. The economics work without needing to scale to €250 million.
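
Here is that arithmetic laid out, using the ranges quoted above; the comparison itself is just a sketch.

```python
# Fund economics from the figures above: 2% management fee against
# all-in operating team cost. The ranges are the ones quoted in the text;
# the comparison is simple arithmetic.

def annual_gap(fund_size_m, fee_rate, team_cost_m):
    """Management fee income minus team cost, in € millions per year."""
    return fund_size_m * fee_rate - team_cost_m

# Platform model: €100M fund, 20-30 person team costing €4-6M all-in.
print(annual_gap(100, 0.02, 6), annual_gap(100, 0.02, 4))    # -4.0 -2.0

# Lean model: €75-100M fund, 8-12 person team costing €2-2.5M all-in.
print(annual_gap(75, 0.02, 2.5), annual_gap(100, 0.02, 2.0)) # -1.0 0.0
```

On these figures the platform model runs a €2-4 million annual shortfall on fees alone, which is precisely the pressure to raise €250 million plus. The lean model's gap is at worst about €1 million and closes at the top of the fund range.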

This would be uncomfortable. It would mean admitting that much operating activity is theatre. But the question is whether the activity drives returns. There's remarkably little rigorous ROI analysis of platform team services in the public domain. Anecdotal evidence suggests the most-used services are recruiting and investor introductions. The least-used are marketing workshops and curriculum programmes.

On Sector Focus

You wouldn't build a generalist fund. You'd pick one of maybe four or five domains where Europe can actually win (energy and climate, defence and security, regulated fintech, biotech and health with regulatory moats, possibly industrial automation) and go deep.

The 'we invest in great founders regardless of sector' positioning is a luxury of abundant capital markets. In a constrained environment, the operating support that matters is the support that only you can provide because you know the specific regulatory landscape, customer dynamics, and competitive context of a particular domain.

The capital flows reveal where Europe has structural advantages. Defence and security investments grew tenfold in two years. Climate tech receives 21% of European funding compared to 11% in the US, driven by regulatory frameworks that create market demand (EU taxonomy, carbon markets, GDPR-related data residency requirements).

Compare two French AI companies. Mistral AI, building general-purpose large language models, raised approximately $1.1 billion by late 2024. Despite producing quality models, its 2024 surge in popularity has been eclipsed by Chinese models (particularly Qwen from Alibaba, according to the State of AI Report 2025). Mistral's compute access remains dependent on Microsoft Azure credits. The sovereignty problem is embedded in every line of code.

Meanwhile, Helsing and other defence-focused AI companies are thriving precisely because they operate in a sector where European governments must buy European for strategic systems. The sector provides regulatory moats, government relationships, and security clearance requirements that act as structural advantages capital can't easily replicate.

Generalist funds in 2025 are competing to deploy capital in sectors where Europe has no advantage. Their portfolio companies compete against better-funded US companies. Their talent migrates to defence and strategic sectors. Their operating support is generic versus sector-specific. The data is clear: European defence VC investment grew ten times in two years. Climate tech gets twice the funding share versus the US. Meanwhile, generalist early-stage funding is essentially flat.

What This Means for the Ecosystem

These conversations about operating models aren't wrong. The data on improved Series A conversion rates is real. The matching problem at scale is genuine. The community-centric model (question to workshop to content to capitalisation) is the right architecture.

But the conversation feels comfortable. And comfort is a warning sign when the environment is changing this fast.

The French ecosystem is sophisticated about execution. What's missing is the willingness to question whether the execution is pointed in the right direction.

  • For incubators: the 'batch model' versus 'continuous intake' debate matters less than whether you're preparing founders for a capital environment that may not exist in 18 months. Are you helping them build for profitability, or are you still optimising for the next raise?

  • For VCs: the operating team headcount matters less than whether those people can provide judgement that AI can't replicate. Are you building a content library, or a network of people who can make decisions?

  • For founders: the most important filter isn't 'which accelerator has the best curriculum.' It's 'which investors understand that the playbook from 2021 doesn't apply anymore, and have the courage to operate differently.'

The Root Cause

This pattern (incremental optimisation of existing models rather than fundamental rethinking) isn't unique to operating partners. It's the same pattern I see in corporate innovation (documented in 'Beyond the AI Hype: Why Corporate Innovation Starts with Organisational Plumbing') and in startup metrics (explored in 'The AI-Native Metrics Revolution').

The root cause is institutional. The people running these programmes built their careers on the existing model. Their networks, their expertise, their compensation (operating partners at huge base salaries plus carry) are all tied to the current structure. Asking 'what would we build from scratch?' is threatening because the honest answer might be 'something that doesn't need us in our current form.'

Consider what change would actually require. Large platform teams would need to shrink from 20-30 people to eight to twelve, meaning redundancies. Fund sizes would need to decrease from €250 million to €75-100 million, reducing management fees. Portfolio sizes would need to concentrate from 40 companies to 12-15, changing the entire investment model. Partners who built expertise in generalist operating support would need to retrain as deep sector specialists.

The incentive structures prevent adaptation at every layer. Individual partners need to justify their roles and salaries. Organisations have built infrastructure (teams, processes, real estate) that represent sunk costs. The ecosystem rewards visible activity: events get media coverage, workshops demonstrate 'value-add' to LPs, large portfolios signal deal flow.

Limited partners evaluate funds based on established metrics. They want to see 'value-add services.' Workshops and events are visible. Judgement is invisible. It's easier to market activity than judgement.

But the market is forcing questions anyway. European cloud providers lost half their market share in eight years despite billions in investment. Growth-stage funding collapsed and has only partially recovered. Talent is migrating to defence and strategic sectors. Portfolio companies are struggling with follow-on rounds.

The historical parallel is the decline of large conglomerate firms in the 1980s and 1990s. It took decades for focused companies to displace diversified holding companies, despite clear performance advantages. Institutional inertia is powerful. But eventually, returns diverge and capital follows performance.

What we're likely to see is gradual bifurcation. Large generalist funds will continue with the existing model (institutional inertia is strong). New sector-focused funds will be built from scratch with lean models. Over five to ten years, returns will diverge, and capital will follow performance. The question is whether European startups can survive that transition period, or whether the winners will be companies that either relocated or never needed the European support ecosystem in the first place.

Conclusion: Beyond Optimisation

The founders who will matter in three years are the ones asking the version of this question for their own domains. Not 'how do I optimise my current GTM' but 'if I were starting this company today knowing what I know, what would I build differently?'

The ones who can't answer that question clearly are probably working on something that won't exist in its current form.

The same test applies to their investors and support systems.

These conversations about operating models are useful. But the conversation that's actually needed would be uncomfortable. It would require acknowledging that much of the current operating model (the workshops, the content libraries, the batch programming, the functional expertise) may be optimising for relevance in an environment that's disappearing.

The data is clear. The capital gap is seven times, not three times. European cloud market share is declining despite billions in sovereign infrastructure investment. Talent is concentrating in defence and strategic sectors at twelve times the rate of the previous decade. AI infrastructure costs are growing 36% annually whilst European startups have 80-84% less capital across all stages.

The winners in the next cycle won't be the ones who added RAG systems to their knowledge bases. They'll be the ones who asked what they'd build if they had nothing, and had the courage to move towards that answer.

That might mean building something that doesn't need us in our current form. But that's exactly the question founders are asking about their own businesses. The best ones understand that the company they'd start today isn't the company they're running. The gap between those two is where existential risk lives.

The same is true for the support ecosystem. The question isn't whether to add AI tooling to existing workflows. The question is whether the workflows themselves are pointed at the problems that will matter in 2028.

I've heard about RAG systems being built to make content libraries easier to access, but they don't make the content more valuable. The value is knowing which of 47 pieces of advice applies to this founder, at this stage, in this market. That requires judgement, not retrieval.

***

This article builds on previous analysis in 'The AI-Native Paradox,' 'The AI-Native Metrics Revolution,' and 'Beyond the AI Hype.' For frameworks on diagnosing whether your current model is fit for purpose, see the Problem Framework and Wardley Mapping resources in our Labs section.
