Research date: 2026-05-11
Scope: Strategic logic behind the May 4 OpenAI / Anthropic PE joint ventures, three-generation AI Rollup evolution, strategic divergence and risks
On May 4, OpenAI and Anthropic announced the same structure within hours of each other: a joint venture (JV — a company co-funded by two or more entities, each contributing capital and resources, sharing profits and risks according to agreement) with PE firms to deploy AI into large enterprises. SiliconSnark’s summary captured it best: “the technology industry’s two most philosophically opposed AI companies simultaneously announced they had invented the same company.” Two labs with long-standing disagreements on data sourcing, safety philosophy, and open-source stance arrived at the same market answer on the same day.
Today (May 11), OpenAI released DeployCo's official announcement with more details: the acquisition of Tomoro bringing roughly 150 Forward Deployed Engineers from day one, $4B+ in initial investment, and the full roster of 19 investment firms, led by TPG, plus three consultancies. But lay the two deals' term sheets side by side and the two companies are playing fundamentally different games. Read together with a trajectory that has been unfolding in the AI industry over the past six months, the AI Rollup, these two deals reveal the most important competitive dimension for model companies over the next one to two years.
The AI Rollup model has gone through three iterations in the past six months. Each generation answers the same question: how does AI actually get into real businesses, rather than staying stuck in demos?
Generation 1 was the traditional PE rollup. Acquire fragmented small companies at low prices, consolidate back-office operations through standardization, and exit at a higher multiple. Thrasio is the canonical cautionary tale: $16B flooded into the Amazon brand aggregator space, with an estimated 90% of companies struggling or dead. The limitation was clear: no technological differentiation, pure operational efficiency. Buying a bunch of companies does not create value.
Generation 2 was the VC-backed AI Rollup. General Catalyst allocated $1.5B and Thrive Capital deployed $1B+. The playbook reversed the order: build an AI platform first, then acquire traditional services companies, and use equity control to force AI adoption. Crescendo, a 20-person team, acquired PartnerHero and its 3,000 employees. The logic: as a consultant you can only recommend AI deployment, but as a controlling shareholder you can replace management and restructure workflows. RAND data shows 80% of AI projects fail, with all five root causes organizational rather than technical (among them poor problem definition, insufficient data, and power reallocation). Consulting contracts can't solve the execution problem; only equity control can force transformation. This enforcement gap is the core insight of the VC-backed AI Rollup. (For a detailed analysis of this model, see the April 9 AI Rollup survey.)
But Gen 2’s cost was also clear: one company had to simultaneously handle technology building, M&A integration, and operational transformation — the intersection of these three capabilities is extremely narrow.
Generation 3 is the structure that emerged on May 4. Three functions split across three specialized entities: the AI company provides models and engineering capability, PE provides portfolio companies and control rights, and the JV handles implementation. Reuters confirmed that most of the JV capital will be used to acquire engineering and consulting firms. Technology, control, and implementation — what was once done by a single company is now done by three. This is the classic pattern of a business model moving from vertical integration to specialization: more efficient, broader coverage, but each link takes its share of the profit.
On May 11, OpenAI released more details about DeployCo. The most concrete item: OpenAI has agreed to acquire Tomoro, an applied AI consulting and engineering firm, which will bring approximately 150 Forward Deployed Engineers into DeployCo after closing. DeployCo’s FDEs will embed at client sites to run diagnostics, select priority workflows, and design and deploy production systems. OpenAI is not building an implementation team from scratch — it is buying one ready-made.
Gen 2 tested the waters with VC. Gen 3 pivots to PE. The difference isn’t about the size of the capital — it’s about control rights.
VCs typically take minority stakes. A VC can persuade a founder to deploy AI but cannot compel them. PE takes majority stakes. PE can decide management changes, process redesign, and system deployment priorities. Bank loans come without enforcement mechanisms. Consulting reports have no authority to replace people. Enterprise software companies can’t guarantee implementation depth — but PE can make AI deployment a hard mandate from the shareholder level.
PE also needs this path. Traditional PE levers — cost-cutting, financial engineering, industry consolidation — are largely exhausted in a higher-rate environment. AI is an entirely new dimension of operational improvement. FTI Consulting’s PE AI Radar report notes that most PE firms have already integrated AI into their portfolio strategies. OpenAI’s announcement provides a scale reference: DeployCo’s PE and consulting partners collectively “sponsor more than 2,000 businesses around the world” — and that is just their own portfolio companies, not counting external clients.
There is another layer to the math that’s easy to miss: buy a traditional B2B services company at 8x EBITDA, inject AI capability, and pitch it as a tech-enabled platform — the next buyer might pay 12x. Operational improvement and narrative premium can compound. PE wants both.
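The two levers can be made concrete with a quick sketch. All figures below are hypothetical, chosen only to show how operational improvement and multiple expansion compound; they are not numbers from any actual deal.

```python
# Toy model of the PE math above: buy a services company at 8x EBITDA,
# lift EBITDA with AI, and exit at a tech-enabled 12x multiple.
# Every number here is an illustrative assumption.

def exit_value(ebitda: float, ebitda_uplift: float, exit_multiple: float) -> float:
    """Exit enterprise value after an EBITDA uplift and a multiple re-rating."""
    return ebitda * (1 + ebitda_uplift) * exit_multiple

entry_ebitda = 25.0              # $25M EBITDA (assumed)
entry_price = entry_ebitda * 8   # bought at 8x, i.e. $200M

ops_only = exit_value(entry_ebitda, 0.30, 8)      # uplift alone: $260M
rerate_only = exit_value(entry_ebitda, 0.00, 12)  # re-rating alone: $300M
both = exit_value(entry_ebitda, 0.30, 12)         # compounded: $390M

print(entry_price, ops_only, rerate_only, both)
```

On these assumed numbers, either lever alone returns roughly 1.3x to 1.5x the entry price; together they return nearly 2x, which is exactly why PE wants both.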
OpenAI’s and Anthropic’s JVs look symmetric on the surface: both partner with PE, both plan to acquire consulting firms to build implementation capacity. But the term sheet details reveal that the two companies don’t just differ in strategy — they differ in their understanding of what kind of company they are.
First, the specifics.
DeployCo’s official announcement provides these key numbers: OpenAI retains majority ownership and control, over $4B in initial investment to be used for scaling operations and acquisitions. The partnership is led by TPG, with Advent, Bain Capital, and Brookfield as co-lead founding partners, joined by B Capital, BBVA, Emergence Capital, Goanna, Goldman Sachs, SoftBank Corp., Warburg Pincus, WCAS, and others — 19 investment firms in total — plus three consulting and systems integration firms: Bain & Company, Capgemini, and McKinsey & Company.
The widely reported $10B valuation and 17.5% guaranteed return do not appear in the official announcement. These figures come from independent reporting by Bloomberg, The Next Web, and Forbes, with cross-analysis by SaaStr. A 17.5% guaranteed return is more than double the standard PE preferred return of 8%. What it means: regardless of whether DeployCo makes money, OpenAI must pay PE partners 17.5% of their investment annually before anything else. PE earns near-risk-free returns; OpenAI bears most of the downside. In plain terms, OpenAI is renting PE’s distribution channel at a steep premium.
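To put the premium in dollar terms, here is a back-of-envelope sketch. The PE commitment size is an assumption for illustration: the announcement confirms $4B+ in total initial investment but does not break out the PE share.

```python
# Back-of-envelope cost of a 17.5% guaranteed return versus the standard
# 8% PE preferred return. The $2B PE commitment is an assumed figure,
# not a disclosed one.

pe_commitment = 2.0e9
standard_pref = 0.08    # typical PE preferred-return hurdle
reported_pref = 0.175   # rate reported by Bloomberg/Forbes, per the text

annual_standard = pe_commitment * standard_pref   # about $160M per year
annual_reported = pe_commitment * reported_pref   # about $350M per year

# The extra amount OpenAI pays each year for the channel, owed whether or
# not DeployCo turns a profit:
annual_premium = annual_reported - annual_standard  # about $190M per year
print(annual_standard, annual_reported, annual_premium)
```

On these assumptions, the guarantee alone costs roughly $350M a year, of which about $190M is pure premium over a standard hurdle rate, and it accrues before DeployCo earns anything.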
Why pay this premium? Three explanations hold simultaneously. First, the IPO narrative needs revenue certainty — after an $852B valuation and $122B raise, any revenue uncertainty will compress pricing. Second, both companies were competing for the same high-quality PE relationships, and the bidding drove up the price. Third, OpenAI retained majority control in the JV, which is unusual when PE is putting up the capital — part of the premium is the price of buying that control.
Taken together, these three do not point to “the channel doesn’t matter.” The 17.5% premium proves exactly the opposite: the channel matters enormously — so much that OpenAI is willing to pay twice the market rate to rent it. The key distinction is what kind of thing the channel is classified as.
OpenAI treats the channel as a cost, not a product. It is a model company; the model is the core product. But models don't automatically reach customers: you need people on the ground to redesign workflows, build integrations, and handle internal resistance. OpenAI's choice is not to build that workforce itself (that would take too long and drag it into the economics of a services business) but to outsource it to PE at a high price. The 17.5% is a massive channel cost, much as an airline spends enormous sums on aircraft yet remains a transportation company rather than an aircraft manufacturer. The channel is expensive, but it exists to enable the core business, the model; it does not turn OpenAI into a channel company.
Anthropic’s joint venture with Blackstone, Hellman & Friedman, and Goldman Sachs is valued at $1.5B, with Anthropic, Blackstone, and H&F each committing roughly $300M, Goldman Sachs roughly $150M, and Apollo, General Atlantic, Leonard Green, GIC, and Sequoia Capital participating as co-investors. TechCrunch’s side-by-side report explicitly notes that Anthropic has no guaranteed return commitment.
Anthropic’s terms are priced more like a traditional services company — no guarantee, a valuation one-seventh of OpenAI’s. This aligns with its April Cowork 3P strategy: in March, it blocked third-party clients from using Claude subscriptions to call its API; in April, it added support for GPT, Gemini, DeepSeek, and other models inside its own Claude Cowork app. The underlying judgment connecting both moves is the same — client ownership is the moat; models are interchangeable. Anthropic’s own April 21 Managed Agents blog post put it in writing: “We’re opinionated about the shape of these interfaces, not about what runs behind them.”
Under this judgment, Anthropic’s JV is not a pipe for the model — it IS Anthropic’s core business. Anthropic sells a services capability: how to deploy FDEs, how to design workflows, how to continuously tune systems. Whether the work gets done with Claude, GPT, or Gemini depends on which model performs better in a given scenario. The channel itself is the value; the model is a consumable inside it.
Side by side, the two companies have given completely opposite answers to the same question: what is the core asset of an AI company?
OpenAI believes the core asset is the model. GPT models will maintain leadership; the channel is an external attachment you can buy, and renting control from PE is just another form of customer acquisition cost. It is a model company — it’s just that the model is so complex that customers can’t use it on their own, so a JV is needed as a translator.
Anthropic believes the core asset is the client relationship and the workflow. Models are becoming interchangeable components: GPT-5.4, Claude Opus 4.6, and Gemini 3.1 Pro differ by single-digit percentages on general benchmarks, and Sam Altman himself has acknowledged that "transformer models have hit the wall." When the gap between models narrows to this degree, competition shifts from "whose model is best" to "who is best at helping customers actually use them." Anthropic isn't selling models; it's selling deployment services. The model is just one component of the service: Claude today, perhaps Gemini tomorrow, while the service stays the same.
Both bets have their own vulnerabilities. If models truly commoditize, OpenAI’s 17.5% becomes a crushing fixed cost — the model no longer earns enough, but the guaranteed PE return must still be paid every year. If models do not commoditize and GPT continues to pull ahead, Anthropic’s model-agnostic deployment pipeline loses its core selling proposition — when customers ask for “give me the best model” rather than “help me design the workflow,” a model-neutral services company loses to a model monopolist’s direct sales team.
Both companies’ thesis rests on a shared premise: that enterprise AI deployment is itself profitable. If both JVs’ AI agents enter the same market simultaneously, that premise may not hold.
The logic works like this. Without competition, a $100M-revenue services company that adopts AI keeps its revenue while margins jump — a good business. But if OpenAI’s DeployCo and Anthropic’s JV both deploy AI agents into the same verticals — customer service outsourcing, accounting automation, legal services — the agents will start undercutting each other for clients.
Anthropic’s own Project Deal experiment has already previewed this: in a market populated entirely by AI agents, Opus agents systematically extracted value from Haiku agents — Opus sellers earned $2.68 more (p=0.030), Opus buyers paid $2.45 less (p=0.015), and the Haiku users whose value was extracted subjectively felt nothing was wrong. In a real market, if both JVs deploy agents into the same verticals, the end result is not both JVs’ agent services making money — it’s them eating each other’s profits down to zero. Efficiency gains alone don’t shrink the market — competition does.
Klarna’s reversal is a reminder of the flip side: even without competitive pressure, AI deployment can degrade the service itself. Klarna replaced 700 customer service staff with AI in 2023, then fully reversed by mid-2025 — problem resolution time increased 27%, dissatisfaction interactions grew 35%. Fortune’s judgment captured it plainly: “Services businesses aren’t inefficient by accident. They’re inefficient by design. The inefficiency is the product. Clients pay for flexibility, customization, and someone to blame when things go wrong.”
The competitive dimension of the AI industry has shifted twice in the past six months. First, from model capability to runtime — the same Claude Opus scores a 16-percentage-point difference on SWE-bench depending on which harness it runs in; the model isn’t the endpoint, how you use it is. Second, from runtime to distribution — when model gaps narrow to single digits, whoever can get enterprises to actually adopt wins the next phase. McKinsey’s annual survey provides a scale reference: 88% of enterprises use AI in at least one function, but fewer than 10% have achieved scaled deployment. The missing 90% is not an API quality problem — it’s about who redesigns the workflows, who migrates the data, who replaces the roles, who handles the internal resistance. PE is the extreme expression of this logic: it has not just capital, but the control rights to write AI deployment into management KPIs.
On May 4, OpenAI and Anthropic agreed on this judgment to the point of delivering the same answer on the same day. But they have completely opposite answers about who they are. OpenAI is a model company — the JV is a channel it rented at a high price, buying the pathway to deliver models into enterprises. Anthropic is a deployment services company — the JV is its core business, and the model is just a component: Claude today, maybe Gemini tomorrow.
Both bets could win. Both could also lose to the same risk: the AI agents the two JVs deploy competing each other's margins down to a zero-profit equilibrium. This won't change tomorrow's work, but if you are assessing the competitive strategy of AI products, it tells you that your next competitors are not just other model teams. They include the PE firms controlling enterprise access, the implementation companies those firms are standing up, and the AI agents those companies will deploy, which will undercut one another in the market.