Databricks has unveiled new tools that it says help enterprises build scalable AI agents and deploy them in high-value, mission-critical applications while maintaining accuracy, governance, and ease of use.

"Many enterprises still struggle to deploy AI agents in high-value use cases due to concerns around accuracy, governance, and security. For these organisations, it's confidence, not just technology, that presents the biggest hurdle to extracting the full data intelligence benefits of Generative AI," said Craig Wiley, Senior Director of Product for AI at Databricks.
Full-scale production with AI agents
"The new tools address these challenges head-on, enabling businesses to move beyond pilots and into full-scale production with AI agents they can trust," added Wiley.
The new tools give customers a range of features, including centralised governance and monitoring, integration across open-source and proprietary AI models, and Mosaic AI Gateway support for custom LLM providers. They also simplify integration into existing app workflows through the Genie API and streamline human-in-the-loop review with the upgraded Agent Evaluation Review App. In addition, the tools offer provisionless batch inference, transforming how batch inference runs with Mosaic AI by driving it from a single SQL query. Databricks says this minimises the need to provision infrastructure while enabling seamless integration of unstructured data.
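As a rough illustration of what "batch inference from a single SQL query" looks like, the sketch below uses Databricks' `ai_query()` AI Function, which sends each row's prompt to a model serving endpoint. The endpoint name, table, and column names here are illustrative assumptions, not details from the announcement.

```sql
-- Hypothetical sketch: run batch inference over a table of support
-- tickets with the ai_query() AI Function. The serving endpoint name
-- and the support_tickets table/columns are assumed for illustration.
SELECT
  ticket_id,
  ai_query(
    'databricks-meta-llama-3-1-70b-instruct',  -- model serving endpoint (assumed)
    CONCAT('Summarise this support ticket: ', ticket_body)
  ) AS summary
FROM support_tickets;
```

Because the query runs like any other SQL statement, it can slot into an existing ETL pipeline without separate inference infrastructure to provision or manage, which is the cost and configuration saving the Altana quote below describes.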

"Batch AI with AI Functions is streamlining our AI workflows. It allows us to integrate large-scale AI inference with a simple SQL query—no infrastructure management is needed. This will directly integrate into our pipelines, cutting costs and reducing configuration burden. Since adopting it, we've seen a dramatic acceleration in our developer velocity when combining traditional ETL and data pipelining with AI inference workloads," said Ian Cadieu, CTO of Altana, a customer of Databricks.