How to Build an Enterprise Pilot That Survives the Governance Process
Key takeaway: The pilot that converts into a contract is not designed to prove the technology works. It is designed to arm the internal champion with the evidence the organisation's governance process requires to say yes. Three criteria separate pilots that convert from pilots that demonstrate: representativeness, boundedness, and learnability.
Your pilot worked. The technology performed. Your champion was enthusiastic. Then, three months later, nothing happened.
If you've been through this, you already know something went wrong. But you probably diagnosed the wrong cause. You assumed the corporate partner was slow, or risk-averse, or "not ready for innovation." The more uncomfortable answer: your pilot was designed to prove the wrong thing.
I ask every founder the same question before they start a pilot: "Who makes the contract decision, and what evidence does their process require?" Most can't answer. They know who approved the pilot. They don't know who approves the purchase order. Those are different people with different criteria, and the gap between them is where pilots go to die.
In a companion piece, [Why Your POC Succeeded and Still Failed], I diagnosed why technically successful proofs of concept fail to convert. The Diagnostic Trap. The governance gaps. The xK Rule as a stop signal. That article explains what goes wrong. This one explains what to do instead.
A brief orientation for new readers: the Critical Path Layers framework sequences innovation work by dependency. Each layer gates the next. Pilot design sits at Layer 2 in both the startup and corporate editions of the framework. For the full structure, see [Critical Path Layers: A Dependency Map for Innovation]. Everything in this article stands without it.
Here's the core problem. Most founders design pilots that answer the question: "Does the technology work?" The pilot that converts answers a different question: "Can my champion build a business case that survives the governance process?"
Those are not the same question. They produce different pilot designs.
The three criteria
Three criteria separate pilots that convert from pilots that merely demonstrate. They come from the organisational feasibility layer of the Critical Path Layers framework, and they apply whether you're a startup selling into an enterprise or a corporate innovation team validating an internal initiative.
1. Representativeness
A startup I coached ran a pilot with a global FMCG company. Twelve weeks, strong results, clear value demonstrated. The innovation director was delighted. The pilot team had been hand-picked: digitally fluent, change-ready, already advocates for the category of solution being tested.
The procurement committee asked one question: "Will this work in the plants?" The innovation team had no answer. The plant managers hadn't been consulted. The pilot proved the product worked for people who were already looking for it. It proved nothing about the 80% of the organisation that would actually need to adopt.
This is the enthusiasm trap. The innovation team is the early adopter. They are predisposed to like new things. That is their job. A pilot that succeeds with the most forward-thinking team tells the organisation nothing about organisational viability. The question that matters is whether the early majority will adopt, because the early majority is where the volume contract lives.
The test is simple. Would the people who participated in your pilot be among the first 30% in the organisation to adopt something new? If yes, your pilot population is skewed.
A representative pilot runs with a team that is typical, not exceptional. Average enthusiasm. Average technical sophistication. Average willingness to change workflows. If the pilot works with that team, it will work with most of the organisation. If it only works with enthusiasts, you have not validated adoption. You have validated excitement.
This creates an uncomfortable trade-off. The enthusiastic team is easier to work with, faster to onboard, and more forgiving of rough edges. The representative team is harder. They have questions the enthusiasts never asked. They resist changes the enthusiasts embraced. They expose the real adoption cost, not the idealised version.
Run the harder pilot. It produces the evidence that actually converts.
2. Boundedness
A pilot without pre-defined success criteria is not a pilot. It is a demo with a calendar.
Boundedness means three things are agreed before the pilot starts: duration, success criteria, and a go/no-go framework.
Duration: for adjacency initiatives, eight to sixteen weeks. Long enough to generate meaningful data, short enough to force a decision. A pilot with no end date never concludes, because the corporate partner will always want to test "just one more thing." Every additional test case extends the timeline, dilutes the focus, and delays the decision the pilot was supposed to enable.
Success criteria must be defined with the corporate partner's decision-makers, not just the champion, before the pilot begins. The champion knows what impresses them. The decision-maker knows what the governance process requires. Those are different things.
The go/no-go framework is the forcing function. Define three explicit outcomes in advance: proceed to scale, iterate with modifications, stop. This must be agreed before the pilot starts because the conditions for saying "no" are clearer before emotional investment sets in. After twelve weeks of working together, nobody wants to say no. That is a relationship dynamic, not an evidence-based decision.
An unbounded pilot is a signal. It tells you the organisation hasn't committed to making a decision. And it enables the most common form of organisational resistance: passive waiting. Nobody says no. Nobody says yes. The pilot continues, unfunded and undecided, until everyone involved has moved on to other priorities. Boundedness is not project management. It is a forcing function against institutional inertia.
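For readers who think in structures, the boundedness agreement can be sketched as data plus a decision rule. This is an illustrative sketch only: the field names, thresholds, and the all/any decision logic are assumptions for the example, not part of the framework; in practice the go/iterate/stop conditions are whatever the decision-makers agreed before the pilot began.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PilotBounds:
    """Everything agreed before the pilot starts.
    Field names and thresholds are illustrative, not prescriptive."""
    start: date
    end: date            # fixed end date: no "just one more thing"
    success_criteria: dict  # metric name -> threshold, agreed with decision-makers

def decide(bounds: PilotBounds, results: dict) -> str:
    """Map pilot results to one of the three pre-agreed outcomes.
    The rule here (all criteria met -> proceed, some -> iterate,
    none -> stop) is one possible convention, chosen for the sketch."""
    met = [results.get(name, 0) >= threshold
           for name, threshold in bounds.success_criteria.items()]
    if all(met):
        return "proceed"   # proceed to scale
    if any(met):
        return "iterate"   # iterate with modifications
    return "stop"          # stop: the "no" was defined before emotional investment set in
```

The point of writing it down this way is that `decide` takes no input from the twelve weeks of relationship-building: the outcome is a function of the pre-agreed bounds and the measured results, nothing else.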
3. Learnability
A pilot that proves "the technology works" learns one thing. The least useful thing.
A learnable pilot is designed around questions, not just metrics. The metrics answer whether the product performed. The questions answer whether the organisation can adopt it.
What is the integration effort in engineering hours, assessed by the corporate partner's IT team, not yours? What does the internal champion need to present at the quarterly review? Which stakeholders shifted from sceptical to supportive during the pilot, and what convinced them? What operational changes did the pilot team make, and were those changes sustainable? What did the team stop doing because the product replaced it, and was anyone upset about that?
These are not pilot metrics. They are Layer 3 prerequisites being collected at Layer 2. The scaling questions, the adoption playbook, the cross-functional alignment: the learnable pilot starts gathering that intelligence while the technology is still being tested.
Most founders don't ask these questions because they feel premature. Why worry about scaling logistics before you know if the product works? Because the people making the procurement decision are already thinking about scaling logistics. They will ask. The champion needs answers. And those answers cannot be fabricated after the pilot ends. They emerge from the pilot itself, or they don't exist.
The difference shows up in what the pilot produces as output. A pilot designed around "does it work?" produces a binary result. A pilot designed around "under what conditions does adoption stick?" produces a narrative the champion can carry into a room you will never enter: "The pilot ran twelve weeks in the logistics function. Processing time dropped 34%. The team asked to keep it. Integration required 40 hours from IT. Two sceptical stakeholders became advocates after seeing the week-six data."
That is not a pilot report. That is a story with numbers, characters, and a resolution. The first format tells the champion whether to be excited. The second tells the champion what to say to the CFO.
Evidence packaging: arming your champion
Most founders never think about what happens after they leave the room. They should, because this is where most deals actually die.
You closed the first deal through personal credibility. You were in the room. You answered every question in real time. You adapted your pitch to the audience. You read the room and adjusted. The champion watched you do this and thought: "This person understands our problem."
The champion cannot do any of that.
The champion will present your pilot results to people you will never meet, in a meeting you will not attend, using slides you did not write, answering questions you did not anticipate. They will be one agenda item among six. They will have fifteen minutes. Maybe ten.
So what does the champion actually need? Not a pilot report. A procurement-ready business case. Those are different documents with different audiences and different burdens of proof.
The difference is whose language the document speaks:
Financial impact stated in the organisation's own metrics: their cost centres, their revenue lines, their KPIs.
Technical integration assessment from their IT team's perspective, because "easy to integrate" means nothing until their architect confirms it.
User feedback from pilot participants in their own words, because a quote from someone the decision-maker knows carries more weight than any metric you produce.
Comparison against alternatives the organisation has already considered: the "compared to what" test from the Internal Market Clarity framework.
And an implementation timeline with resource requirements from their side, because the decision is never just "should we buy this?" but "can we absorb this?"
Their metrics. Their IT team. Their people. Their alternatives. Their capacity.
Every "their" in that list is a place where founders instinctively write "our." That instinct is the problem.
The champion is not presenting your case. They are presenting their case for your product. The POC article's governance checklist asks whether the five roles (executive sponsor, business owner, dedicated team, expert access, budget holder) exist. Evidence packaging asks whether you've armed each one with what they need to say yes.
Design the pilot backward from the decision
Most founders design pilots forward. "What can we demonstrate? What features should we show? What use case is most impressive?" That is product thinking applied to a governance problem. It produces impressive demos and stalled procurement.
The pilot that converts is designed backward.
Start from the end. Who makes the procurement decision? Not who approved the pilot. Who approves the contract. These are often different people. The pilot sponsor may be an innovation director. The contract decision sits with a business unit leader, a CTO, or a procurement committee. If you don't know who signs the contract, you've designed a pilot for the wrong audience.
Work backward from that person. What evidence does the procurement decision require? Not what impresses them: what their process requires. Procurement committees have templates. Budget approvals have criteria. IT sign-offs have checklists. These are not mysterious. They are documented, usually on an intranet page nobody outside the organisation has seen. Ask your champion. Ask your champion's champion. Ask procurement directly, if you can get access. The evidence requirements exist before your pilot starts. Your job is to discover them.
Check the governance timeline. If the budget cycle closes in Q3, a pilot that concludes in Q4 is commercially dead regardless of results. Not because the results were bad but because the decision window closed while you were still collecting data. Pilot timelines must fit inside the organisation's decision timeline. The other way around is a fantasy.
Identify who can kill this. The veto audit, simplified: name the three people who could say no, and understand their specific concerns. If the CISO worries about data residency, the pilot must generate evidence on data residency. If the CFO worries about total cost of ownership, the pilot must produce a TCO model. If the business unit leader worries about operational disruption, the pilot must measure disruption explicitly. Design the pilot to address each veto holder's specific concern. Procedural resistance to innovation is often the organisation functioning correctly, applied to the wrong context. The pilot that converts is designed to pass through the procedure, not around it.
Four diagnostic questions before you design your next pilot:
Do you know who makes the procurement decision, not the pilot approval, and what evidence they require?
Are your pilot success criteria defined in the organisation's vocabulary, or yours?
Could your champion present the pilot results to their leadership without you in the room?
Is your pilot timeline aligned with the organisation's budget and planning cycles?
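If it helps to make the diagnostic mechanical, the four questions reduce to a simple checklist: any remaining "no" means you are designing a demonstration, not a pilot. The paraphrased question strings below are mine, shortened for the sketch.

```python
# Paraphrased from the four diagnostic questions above.
DIAGNOSTIC_QUESTIONS = [
    "Do you know who makes the procurement decision and what evidence they require?",
    "Are pilot success criteria defined in the organisation's vocabulary?",
    "Could the champion present the results to leadership without you in the room?",
    "Is the pilot timeline aligned with the budget and planning cycles?",
]

def open_gaps(answers: list[bool]) -> list[str]:
    """Return the questions still answered 'no'.
    An empty list means the pilot is designed backward from the decision."""
    return [q for q, yes in zip(DIAGNOSTIC_QUESTIONS, answers) if not yes]
```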
If you answered no to any of those, you are designing a demonstration. The technology will work. The deal will stall. And six months from now you will be sitting across from your co-founder, wondering what went wrong, having proved the wrong thing beautifully.
Alexandra Najdanovic is the founder of Aieutics, working with founders and leadership teams on strategic transformation and AI-readiness.
Further Reading
Philippe Meda, "The 3 Levels of Value-Driven Prototyping" — Meda's levels provide the value validation logic for staging what a pilot proves. The three criteria in this article extend that logic into organisational conversion: a pilot can validate value perfectly and still fail to convert if it doesn't produce evidence the governance process can absorb.
Steve Blank, The Startup Owner's Manual — The foundational text on validation sequencing. Blank established that you validate before you build. CPL extends this to: you validate for the organisation's decision process, not just your product's viability.
Alexandra Najdanovic, [Why Your POC Succeeded and Still Failed] — The diagnostic companion to this article. That piece explains why technically successful pilots fail to convert. This piece explains how to design one that doesn't.
Matthew Dixon and Brent Adamson, The Challenger Sale — Dixon and Adamson's "commercial insight" concept maps directly to evidence packaging: the champion needs to teach their organisation something it didn't know, framed in terms the organisation already uses to make decisions.
Vijay Govindarajan and Chris Trimble, The Other Side of Innovation — Govindarajan's Performance Engine concept explains why organisational resistance to pilots is the organisation functioning correctly, not malfunctioning. Designing the pilot to work with the governance process rather than against it is the practical application of that insight.