San Francisco, CA – September 25, 2025 — Databricks has committed $100 million to a multi-year partnership with OpenAI, marking a pivotal moment in the race to integrate advanced artificial intelligence into enterprise operations. The deal makes OpenAI’s flagship model GPT-5 and other frontier AI tools natively accessible to more than 20,000 Databricks customers, streamlining how companies deploy production-ready AI agents on their enterprise data.
The agreement brings together two of the world’s most valuable private tech companies. OpenAI, last valued at around $500 billion, joins forces with Databricks, which recently secured a $100 billion valuation and reports $4 billion in annualized revenue, half of it tied to AI products.
A First-of-its-Kind Enterprise Integration
OpenAI Chief Operating Officer Brad Lightcap described the partnership as the company’s first major integration with a business-focused data platform. “Our partnership with Databricks brings our most advanced models to where secure enterprise data already lives, making it easier for businesses to experiment, deploy, and scale AI agents with real impact,” he said.
Databricks’ platform, widely adopted for managing and analyzing corporate data, now embeds GPT-5 directly into its Agent Bricks product. This eliminates the need for additional setup, API key management, or external vendor approvals. Enterprise users can run large language models directly via SQL or API, cutting down deployment complexity while ensuring governance and compliance through Databricks’ Unity Catalog.
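In practice, that can look as simple as pointing an OpenAI-compatible client at a Databricks model serving endpoint. The sketch below is illustrative only: the workspace URL is a placeholder, the token is read from an environment variable, and the endpoint name `databricks-gpt-5` is a hypothetical stand-in rather than a published identifier.

```python
# Minimal sketch of calling a Databricks-hosted model through an
# OpenAI-compatible serving endpoint. Endpoint name and workspace URL
# are assumptions; substitute whatever your workspace actually exposes.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DATABRICKS_TOKEN"],                   # workspace personal access token
    base_url="https://<workspace-host>/serving-endpoints",    # your Databricks workspace URL
)

response = client.chat.completions.create(
    model="databricks-gpt-5",  # hypothetical endpoint name for illustration
    messages=[{"role": "user", "content": "Summarize last quarter's churn drivers."}],
)
print(response.choices[0].message.content)
```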
Enterprise-Ready AI at Scale
Databricks Agent Bricks enables companies to build AI applications that act on data, automate workflows, and generate reliable insights. Organizations in finance, healthcare, energy, and commerce are already using the system to detect fraud, optimize operations, accelerate application development, and advance medical research.
Ali Ghodsi, CEO of Databricks, said the move responds to surging enterprise demand for customized AI. “The key difference here is that any Databricks customer automatically now, just by clicking in the UI, can start using this product,” Ghodsi noted.
The deal guarantees OpenAI predictable revenue and gives Databricks customers dedicated high-capacity access to GPT-5. However, Databricks carries the risk of paying the full $100 million even if customer usage falls short, a sign of the company’s confidence in accelerating enterprise adoption.
The GEPA Breakthrough: Making AI 90x Cheaper
Beyond the headline partnership, Databricks also revealed a major research breakthrough in prompt optimization that could reshape enterprise AI economics. Its Generative Evolutionary Prompt Adaptation (GEPA) technique, developed with researchers from the University of California, Berkeley, improves how large language models are prompted.
Instead of fine-tuning model weights, GEPA iteratively rewrites prompts, enabling smaller, less costly models to perform at near state-of-the-art quality. Early tests show models optimized with GEPA can operate up to 90 times cheaper than premium counterparts, while outperforming them by 4 to 7 percentage points across tasks in finance, healthcare, and legal analysis.
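Databricks has not published GEPA’s internals, but the general shape of evolutionary prompt optimization can be sketched as a loop that scores candidate prompts on a small labeled set, keeps the strongest, and has a language model rewrite them into the next generation. In the sketch below, `score()` and `rewrite()` are hypothetical stand-ins for calls to the target model and a reflection model; this is a sketch of the general technique, not Databricks’ implementation.

```python
# Illustrative sketch of evolutionary prompt optimization: score candidate
# prompts, keep the best, and rewrite them to form the next generation.
import random

def score(prompt: str, dev_set: list[tuple[str, str]]) -> float:
    """Fraction of dev examples the cheaper target model answers correctly with this prompt."""
    # In a real system this would call the target model; stubbed here.
    return random.random()

def rewrite(prompt: str, feedback: str) -> str:
    """Ask a reflection LLM to rewrite the prompt based on observed failures."""
    # Stub: a real system would pass the prompt plus error traces to an LLM.
    return prompt + f"\n# revised after feedback: {feedback}"

def optimize(seed_prompt: str, dev_set, generations: int = 10, beam: int = 4) -> str:
    population = [seed_prompt]
    for _ in range(generations):
        ranked = sorted(population, key=lambda p: score(p, dev_set), reverse=True)
        parents = ranked[:beam]                                  # keep the strongest prompts
        children = [rewrite(p, "missed edge cases") for p in parents]
        population = parents + children                          # next generation
    return max(population, key=lambda p: score(p, dev_set))
```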
At scale, GEPA transforms cost structures. For 10 million model requests, optimization costs become negligible compared with serving costs, saving enterprises millions annually. According to Databricks CTO Hanlin Tang, “It’s not just about asking a question more efficiently. It’s about asking it in a fundamentally better way that raises quality while reducing cost.”
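A rough back-of-the-envelope calculation illustrates the claim. The dollar figures below are purely illustrative assumptions, not Databricks or OpenAI pricing: they simply show how a one-off optimization spend is dwarfed by per-request serving costs at ten-million-request volume.

```python
# Hypothetical cost comparison: all figures are illustrative assumptions.
requests = 10_000_000
cost_per_request_frontier = 0.50            # assumed cost of a multi-step agent call on a premium model
cost_per_request_optimized = 0.50 / 90      # roughly 90x cheaper after prompt optimization
one_time_optimization = 5_000               # assumed one-off prompt-optimization spend

frontier_total = requests * cost_per_request_frontier
optimized_total = one_time_optimization + requests * cost_per_request_optimized
print(f"frontier serving:  ${frontier_total:,.0f}")   # $5,000,000
print(f"optimized serving: ${optimized_total:,.0f}")  # ~$60,556
```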
Competitive Landscape
The partnership positions Databricks and OpenAI in direct competition with enterprise AI offerings from Snowflake, Oracle, Google, and Microsoft. Snowflake recently expanded its integration with Microsoft to offer OpenAI models, while Oracle is preparing to launch services supporting OpenAI, Google, and xAI models on its database platform.
What sets the Databricks-OpenAI collaboration apart is native integration and optimization. Instead of simply offering model access, Agent Bricks evaluates task-specific accuracy, tunes performance, and enforces governance controls before deployment.
Industry Reactions
Mastercard’s Chief AI and Data Officer Greg Ulrich welcomed the deal, noting that AI agents built on secure, trusted platforms are essential for scaling enterprise innovation. “It enables opportunity for research and targeted experimentation, bringing value to customers, enhancing employee productivity, in an environment that we trust,” Ulrich said.
Analysts highlight the dual significance of the announcement: while the $100 million commitment cements OpenAI’s enterprise momentum, Databricks’ GEPA breakthrough suggests enterprises can achieve frontier-level AI performance without frontier-level costs.
Key Highlights of the Partnership
| Feature | Impact for Enterprises |
|---|---|
| $100M Partnership | Guarantees GPT-5 access for Databricks customers |
| Native GPT-5 Integration | No external API keys or vendor setup required |
| Agent Bricks Platform | Build, evaluate, and scale AI agents with governance |
| Unity Catalog | Provides secure, end-to-end compliance controls |
| GEPA Optimization | Cuts model serving costs by up to 90x |
| Competitive Edge | Direct challenge to Snowflake, Oracle, and Google AI stacks |
Future Outlook
For OpenAI, the deal represents a steady revenue stream as it invests heavily in new data centers to support global demand. For Databricks, it is a calculated risk that could redefine the economics of AI in enterprise settings. By pairing GPT-5’s performance with GEPA-driven efficiency, the company aims to lower cost barriers and attract a broader range of organizations into the AI ecosystem.
The message to enterprises is clear: advanced AI is no longer limited by complexity or cost. As Databricks and OpenAI deepen their collaboration, the balance of power in enterprise AI may tilt toward platforms that offer both cutting-edge intelligence and affordable scalability.