Is open-source AI the path to smarter, scalable innovation—or a compliance headache waiting to happen?
At New York Tech Week, IBM and Hugging Face joined forces to tackle one of the most pressing questions facing AI developers today: Should businesses build on open models or trust the convenience of closed systems?
The conversation revealed more nuance than hype, showing just how divided the enterprise AI landscape still is.
What’s the News?
During a high-profile panel at IBM’s Manhattan headquarters, industry leaders from companies like IBM, Patronus AI, and Hugging Face unpacked the real-world implications of using open versus closed large language models (LLMs).
Hosted by the AI Alliance, a group backed by IBM and Meta to promote open-source AI collaboration, the panel sparked meaningful debate over customization, security, and long-term control.
Experts noted that the definition of “open-source AI” is still evolving. For some, it means releasing model weights; for others, it includes access to training data or source code. This ambiguity alone can create friction for enterprise teams trying to evaluate what’s truly open.
Anthony Annunziata, director of AI open innovation at IBM, highlighted the flexibility of open-weight models. He argued that companies gain greater control over model performance and cost when they can fine-tune smaller systems, as opposed to depending on fixed proprietary APIs.
“You can optimize the size of the model and the trade-offs it makes,” said Annunziata. “That’s simply not an option with a closed approach.”
Customization wasn’t the only theme. Rebecca Qian, CTO of Patronus AI, shared how one of their clients, Volkswagen, needed an AI model capable of responding in multiple languages with car-specific knowledge. Generic benchmarks weren’t helpful. So Patronus built a custom automotive benchmark to evaluate model fit.
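For readers curious what that kind of domain evaluation can look like in practice, here is a minimal, illustrative sketch using the Hugging Face transformers pipeline. The benchmark items, expected keywords, and model names are placeholders invented for this example, not the actual Patronus AI automotive benchmark.

```python
# Illustrative sketch: scoring candidate models on a small domain benchmark.
# All prompts, keywords, and model names are placeholders, not the Patronus benchmark.
from transformers import pipeline

benchmark = [
    {"prompt": "What does a flashing check-engine light usually indicate?",
     "expected_keyword": "misfire"},
    {"prompt": "Wie oft sollte das Motoroel gewechselt werden?",  # multilingual coverage
     "expected_keyword": "kilometer"},
]

def score_model(model_name: str) -> float:
    """Return the fraction of benchmark prompts whose answer mentions the expected keyword."""
    generator = pipeline("text-generation", model=model_name)
    hits = 0
    for item in benchmark:
        output = generator(item["prompt"], max_new_tokens=64)
        answer = output[0]["generated_text"].lower()
        if item["expected_keyword"] in answer:
            hits += 1
    return hits / len(benchmark)

for candidate in ["HuggingFaceTB/SmolLM2-360M-Instruct", "Qwen/Qwen2.5-0.5B-Instruct"]:
    print(candidate, score_model(candidate))
```

Real evaluation suites score far more than keyword overlap, but the structure is the same: domain-specific prompts, a scoring rule, and a comparison across candidate models.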
Qian noted that many businesses now opt for hybrid stacks. These combine the raw capability of closed models with open-source flexibility tailored to niche use cases.
She emphasized that small language models (SLMs), when fine-tuned on curated, domain-specific data, often outperform large general-purpose systems for targeted applications.
“Of course, the data has to be curated and the tasks well defined,” Qian said. “But when done right, they can outperform larger models.”
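As an illustration of the approach Qian describes, the sketch below fine-tunes a small open-weight language model on a handful of curated domain examples with the Hugging Face Trainer API. The model name and the toy automotive Q&A pairs are assumptions chosen for the example, not details from the panel.

```python
# Minimal sketch: fine-tuning a small open-weight LM on curated domain data.
# The model choice and toy dataset below are illustrative assumptions.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "HuggingFaceTB/SmolLM2-135M"  # hypothetical small open model
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Curated, domain-specific examples (placeholders; a real set would be far larger).
examples = [
    {"text": "Q: What does the coolant warning light mean?\n"
             "A: Engine temperature is too high; stop safely and let it cool."},
    {"text": "Q: How often should brake fluid be replaced?\n"
             "A: Typically every two years; check the vehicle manual."},
]
dataset = Dataset.from_list(examples)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-domain-finetune",
                           num_train_epochs=3,
                           per_device_train_batch_size=2,
                           learning_rate=2e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point is the workflow, not the hyperparameters: a small model, a tightly curated dataset, and a well-defined task are what make this route competitive with much larger general-purpose systems.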
On the flip side, closed models offer stronger guarantees around compliance and vendor responsibility—an increasingly crucial factor for companies in finance, healthcare, and other regulated sectors.
Yacine Jernite of Hugging Face acknowledged that open-source models require more internal oversight, but argued the tradeoff is worth it for mission-critical deployments.
“If your business depends on compliance,” said Jernite, “it’s often better to handle it yourself with open systems rather than rely blindly on a vendor’s claims.”
Why It Matters
This debate touches every corner of enterprise AI strategy. Open models can reduce vendor lock-in, allow for deeper customization, and create transparency around how algorithms function.
But they come with a cost: companies need in-house expertise to vet models, manage updates, and stay on top of regulatory requirements.
Meanwhile, closed systems from major providers like OpenAI or Anthropic often offer better out-of-the-box performance—but businesses may sacrifice control, flexibility, and visibility into how their AI works.
The rise of hybrid architectures reflects a maturing market: companies are no longer blindly chasing the biggest model, but instead choosing what fits their domain needs.
💡 Expert Insights
At the AI+ NY Summit 2024, Duolingo’s Head of AI, Klinton Bicknell, shared a cautionary view:
“Open source isn’t up to par when it comes to performance compared to other models… and one reason could be the cost,” said Bicknell (Axios).
Jerry Levine, General Counsel at ContractPodAi, warned of broader risks:
“The risks of security always have accompanied the advances in open technologies,” he noted (Axios).
Red Hat’s Steven Huels added:
“Open source guidance is lacking,” making it harder for enterprises to adopt confidently (Axios).
And Daniel Dobrygowski from the World Economic Forum emphasized equity:
“Open source has a role in helping other countries… that have typically been locked out of a lot of technological innovation” (Axios).
GazeOn’s Take
Expect hybrid models to dominate the next wave of enterprise AI deployments. The conversation is shifting from “Which model is best?” to “Which approach fits our use case, cost tolerance, and compliance needs?”
Open-source players like Hugging Face and Meta will likely keep pushing boundaries, while enterprise buyers become more strategic in how they balance performance, risk, and autonomy.
💬 Reader Question
Would you trust a fully open-source AI model to run your business-critical workflows? Or do you prefer the safety of closed APIs?