OpenAI has launched Frontier, a new platform designed to help enterprises build, deploy, and manage AI agents that can do real work across the business.
The announcement comes as companies struggle to move AI agents beyond isolated pilots and into production environments where they can meaningfully impact customer experience and operational efficiency.
According to OpenAI’s own data, 75% of enterprise workers say AI helped them do tasks they couldn’t do before. The technology is clearly capable. The problem is getting it into the hands of teams who need it most.
In its official blog post, OpenAI claimed: “AI has let teams take on things they used to talk about but never execute. What’s slowing them down isn’t model intelligence, it’s how agents are built and run in their organizations.”
That’s the gap Frontier is trying to close.
HP, Intuit, Oracle, State Farm, Thermo Fisher, and Uber are among the first to adopt the platform. Existing customers like BBVA, Cisco, and T-Mobile have already piloted Frontier’s approach.
Joe Park, Executive Vice President and Chief Digital Information Officer at State Farm, explained the appeal:
“Partnering with OpenAI helps us give thousands of State Farm agents and employees better tools to serve our customers.
“By pairing OpenAI’s Frontier platform and deployment expertise with our people, we’re accelerating our AI capabilities and finding new ways to help millions plan ahead, protect what matters most, and recover faster when the unexpected happens.”
Building AI Coworkers, Not Just Chatbots
Frontier’s approach treats AI agents like new employees rather than standalone tools. That means giving them shared context, onboarding processes, hands-on learning with feedback, and clear permissions and boundaries.
The platform connects siloed data warehouses, CRM systems, ticketing tools, and internal applications to create what OpenAI calls a “semantic layer for the enterprise.”
This shared business context helps agents understand how information flows, where decisions happen, and what outcomes matter.
From there, agents can reason over data, complete complex tasks, work with files, run code, and use tools. As they operate, agents build memories that turn past interactions into useful context for future work.
Built-in evaluation and optimization features also help human managers and AI coworkers understand what’s working and what isn’t.
Each agent has its own identity, with explicit permissions and guardrails designed to make them usable in sensitive and regulated environments.
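OpenAI hasn’t published Frontier’s API, but the ingredients described above (an identity, an explicit allow-list of permissions, and a memory that turns past work into future context) can be sketched in a few lines. Everything below is purely illustrative; the class and permission names are invented for this example and do not come from Frontier.

```python
from dataclasses import dataclass, field

# Illustrative only: a toy model of an agent with an identity,
# explicit permissions, and a memory of past interactions.
# None of these names are Frontier's actual API.

@dataclass
class Agent:
    name: str                                       # the agent's identity
    permissions: set = field(default_factory=set)   # explicit allow-list
    memory: list = field(default_factory=list)      # past work kept as context

    def can(self, action: str) -> bool:
        # Guardrail: the agent may only perform actions it was granted.
        return action in self.permissions

    def run(self, action: str, detail: str) -> str:
        if not self.can(action):
            return f"{self.name}: '{action}' denied by policy"
        # Permitted work is recorded, becoming context for future tasks.
        self.memory.append((action, detail))
        return f"{self.name}: '{action}' completed ({detail})"

# A hypothetical claims-triage agent: allowed to read the CRM and
# update tickets, but not to issue refunds.
agent = Agent("claims-triage", permissions={"crm.read", "ticket.update"})
print(agent.run("crm.read", "fetch policy history"))
print(agent.run("payment.refund", "issue $120 refund"))
print(len(agent.memory))  # only the permitted action was recorded
```

The point of the sketch is the shape, not the code: identity, permissions, and memory live with the agent itself, which is what makes it auditable in the regulated environments the article mentions.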
Where the Impact Shows Up
OpenAI pointed to several early use cases that highlight Frontier’s potential in customer-facing and operational roles.
At a major manufacturer, agents reduced production optimization work from six weeks to one day.
Elsewhere, a global investment company deployed agents end-to-end across its sales process, freeing up over 90% more time for salespeople to spend with customers.
In one hardware troubleshooting example, millions of test failures previously required engineers to spend thousands of hours each year manually hunting down root causes.
Frontier-powered agents reduced root-cause identification from roughly four hours per failure to a few minutes by pulling together simulation logs, internal documents, workflows, and code to run end-to-end investigations.
In a customer experience context, that level of acceleration can directly impact satisfaction and retention.
The Ecosystem Play
Frontier is built on open standards, which means software teams can plug in and build agents that benefit from the same shared context.
OpenAI argues this solves a common failure mode for agent applications: lack of context.
When data is scattered across systems and permissions are complex, each integration becomes a one-off project.
Frontier aims to make it easier for applications to access the business context they need, with the right controls, so they can work inside real workflows from day one.
However, OpenAI was keen to emphasize that Frontier isn’t just software. The company is also pairing Forward Deployed Engineers with customer teams to work side by side and develop best practices for building and running agents in production.
Those engineers also give teams a direct connection to OpenAI Research.
As customers deploy agents, OpenAI learns not just how to improve systems around the model but also how the models themselves need to evolve to be more useful for specific work.
The Opportunity Gap
OpenAI framed Frontier as a response to what it calls the “AI opportunity gap,” the growing distance between what models can do and what teams can actually deploy.
Companies are already overwhelmed with disconnected systems and governance spread across clouds, data platforms, and applications.
AI has made that fragmentation more visible. Agents are getting deployed everywhere, but each one is isolated in what it can see and do.
At OpenAI alone, something new ships roughly every three days, and that pace is accelerating, with OpenAI warning:
“The gap between early leaders and everyone else is growing fast.”
For customer experience leaders, that gap translates directly into competitive risk.
Companies that can deploy agents effectively in customer-facing roles will have a significant advantage in speed, personalization, and operational efficiency.
What This Means for CX Teams
Frontier’s focus on shared context, identity, and permissions makes it particularly relevant for contact centers and customer service operations.
Agents that can access CRM data, ticketing systems, and internal knowledge bases while operating within clear boundaries could handle more complex customer interactions without requiring constant human intervention.
The platform’s memory and learning capabilities mean agents can improve over time, adapting to specific customer needs and business priorities. That’s a step beyond the static chatbots many companies have struggled to make useful.
The real test will be whether enterprises can move fast enough to take advantage of what Frontier offers.
OpenAI’s own data suggests the technology is ready; the question is whether the organizations deploying it are.