Leaked documents show OpenAI paid Microsoft about $493.8 million in 2024 and $865.8 million in the first three quarters of 2025, while inference costs hit $8.65 billion in nine months of 2025, raising questions about profitability and AI infrastructure spending.

Leaked internal documents reported by TechCrunch expose the scale of payments between OpenAI and Microsoft and make one thing clear: operating large generative AI systems is extraordinarily expensive. The disclosures pushed phrases like "OpenAI Microsoft payments" and "OpenAI financials leaked" into headlines and into investor conversations about AI infrastructure spending.
The documents show OpenAI paid Microsoft roughly $493.8 million in 2024 and about $865.8 million in the first three quarters of 2025. Even more striking, inference costs were about $8.65 billion in the first nine months of 2025, compared with roughly $3.8 billion for all of 2024. Those figures are central to searches such as "How much does OpenAI pay Microsoft" and "OpenAI computing costs 2025".
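To put the year-over-year jump in context, a back-of-the-envelope annualization of the nine-month figure is straightforward. This is a rough sketch that assumes spend is spread evenly across months, which it almost certainly is not:

```python
# Annualizing the leaked inference-cost figures (values in billions of USD).
# Assumption: linear extrapolation of the nine-month 2025 figure to a full year.
inference_9mo_2025 = 8.65    # first nine months of 2025, per the leaked documents
inference_full_2024 = 3.80   # all of 2024, per the leaked documents

annualized_2025 = inference_9mo_2025 / 9 * 12   # naive full-year run rate
growth_multiple = annualized_2025 / inference_full_2024

print(f"Implied 2025 run rate: ${annualized_2025:.2f}B")   # ~$11.53B
print(f"Growth vs. 2024: {growth_multiple:.1f}x")          # ~3.0x
```

Even under that crude assumption, the implied 2025 run rate is roughly triple the 2024 total, which is the comparison driving the profitability questions.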
OpenAI and Microsoft have a layered commercial relationship that mixes capital investment, infrastructure provisioning, and revenue sharing. Microsoft provides cloud compute through Azure and integrates OpenAI models across products such as Bing and enterprise services. In return, OpenAI shares a portion of revenues and receives credits that offset some training expenses. This arrangement is often discussed with search terms like "Microsoft revenue share from OpenAI" and "OpenAI revenue versus expenses".
Main disclosures from the leaked documents include:

- Payments to Microsoft of roughly $493.8 million in 2024.
- Payments of about $865.8 million in the first three quarters of 2025.
- Inference costs of about $8.65 billion in the first nine months of 2025, up from roughly $3.8 billion in all of 2024.
Several takeaways for industry watchers, customers, and investors follow from those numbers.
Businesses planning to adopt generative AI should model not only software fees but also the potential for high variable costs as usage grows. Search queries such as "AI compute infrastructure expenses" and "OpenAI revenue versus expenses" reflect the questions procurement and finance teams will ask when evaluating projects.
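The kind of model a finance team might build separates fixed licensing fees from usage-driven inference spend. The sketch below uses hypothetical placeholder prices (the fee and per-request cost are illustrative, not actual vendor pricing) to show how variable costs come to dominate as usage grows:

```python
# Minimal AI cost model: fixed software fee plus variable inference spend.
# All prices are hypothetical placeholders for illustration only.

def monthly_ai_cost(fixed_software_fee: float,
                    requests_per_month: int,
                    cost_per_request: float) -> float:
    """Total monthly cost: fixed licensing plus usage-driven inference spend."""
    return fixed_software_fee + requests_per_month * cost_per_request

# As request volume grows, the variable term quickly dwarfs the fixed fee.
for requests in (10_000, 1_000_000, 100_000_000):
    total = monthly_ai_cost(fixed_software_fee=5_000,
                            requests_per_month=requests,
                            cost_per_request=0.002)
    print(f"{requests:>11,} requests/month -> ${total:,.2f}")
```

At low volume the fixed fee dominates; at high volume almost the entire bill is inference, which is why usage forecasts matter more than license negotiations in these evaluations.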
The leaked OpenAI and Microsoft documents make a blunt point: running state-of-the-art AI at scale is costly, and the financial arrangements that enable it are complex. For customers and competitors, expect AI pricing and feature roadmaps to be shaped as much by compute economics as by innovation. For investors and regulators, these numbers shift focus from growth alone to profitability mechanics and partner dependencies.
Observers should watch for provider responses through efficiency gains, new pricing models, or deeper cloud partnerships, because those choices will determine whether large-scale AI becomes a durable business or an expensive experiment. Common search queries in this area include "How much does OpenAI pay Microsoft" and "AI inference costs 2025", which can help readers find more detailed coverage and analysis.



