Which AI vendor should you choose? Here are the top 7 (OpenAI still leads)


Vendors are rolling out new generative AI tools every day in a market that has been likened to the Wild West. But because the technology is so new and fast-evolving, the landscape can be extremely confusing, with platform providers sometimes making speculative promises.

IT firm GAI Insights hopes to bring some clarity to enterprise decision-makers with its release of the first known buyer’s guide to large language models (LLMs) and gen AI. The firm reviewed more than two dozen vendors and identified seven emerging leaders, with OpenAI far ahead of the pack. It also predicts that proprietary, open-source and small models will all be in high demand in 2025 as the C-suite prioritizes AI spending.

“We’re seeing real migration from awareness to early experimentation to really driving systems into production,” Paul Baier, GAI Insights CEO and co-founder, told VentureBeat. “This is exploding, AI is transforming the entire enterprise IT stack.”

7 emerging leaders

GAI Insights — which aims to be the “Gartner of gen AI” — reviewed 29 vendors across common enterprise gen AI use cases such as customer service, sales support, marketing and supply chains. The firm found that OpenAI remains firmly in the lead, with 65% of market share.

The firm points out that the startup has partnerships with a multitude of content and chip vendors (including Broadcom, with whom it is developing chips). “Obviously they’re the first, they defined the category,” said Baier. However, he noted, the industry is “splintering into sub-categories.”

The six other vendors GAI Insights identified as emerging leaders (in alphabetical order): 

  • Amazon (Titan, Bedrock): Has a vendor-neutral approach and is a “one-stop shop” for deployment. It also offers custom AI infrastructure in the form of specialized AI chips such as Trainium and Inferentia. 
  • Anthropic (Sonnet, Haiku, Opus): Is a “formidable” competitor to OpenAI, with models boasting long context windows and performing well on coding tasks. The company also has a strong focus on AI safety and has released multiple tools for enterprise use this year, including Artifacts, Computer Use and contextual retrieval. 
  • Cohere (Command R): Offers enterprise-focused models and multilingual capabilities, as well as private cloud and on-premises deployments. Its Embed and Rerank models can improve search and retrieval with retrieval-augmented generation (RAG), which is important for enterprises looking to work with internal data (see the sketch after this list).
  • CustomGPT: Has a no-code offering, and its models feature high accuracy and low hallucination rates. It also has enterprise features such as single sign-on (SSO) and OAuth, and provides analytics and insights into how employees and customers are using its tools. 
  • Meta (Llama): Features “best-in-class” models ranging from small and specialized to frontier. Its Llama 3 series, which tops out at 405 billion parameters, rivals GPT-4o and Claude 3.5 Sonnet in complex tasks such as reasoning, math, multilingual processing and long-context comprehension. 
  • Microsoft (Azure, Phi-3): Takes a dual approach, leveraging existing tools from OpenAI while investing in proprietary platforms. The company is also reducing chip dependency by developing its own, including Maia 100 and Cobalt 100. 
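
For readers unfamiliar with the retrieve-then-rerank pattern referenced in the Cohere entry above, here is a minimal, hypothetical sketch of how an enterprise might wire it up, assuming the Cohere Python SDK's embed and rerank endpoints; the model names, documents and parameters are illustrative and may differ from what a given deployment actually uses.

```python
# Minimal retrieve-then-rerank RAG sketch (assumes the Cohere Python SDK;
# model names and parameters are illustrative and may differ).
import cohere
import numpy as np

co = cohere.Client("YOUR_API_KEY")  # placeholder key

documents = [
    "Q3 revenue grew 12% year over year, driven by enterprise subscriptions.",
    "The support SLA guarantees a first response within four business hours.",
    "Travel expenses above $500 require written manager approval.",
]

# 1. Embed the internal documents once (in practice, store these in a vector DB).
doc_emb = np.array(
    co.embed(texts=documents, model="embed-english-v3.0",
             input_type="search_document").embeddings
)

def retrieve_and_rerank(query: str, top_n: int = 2) -> list[str]:
    # 2. Embed the query and score documents by cosine similarity.
    q_emb = np.array(
        co.embed(texts=[query], model="embed-english-v3.0",
                 input_type="search_query").embeddings[0]
    )
    scores = doc_emb @ q_emb / (
        np.linalg.norm(doc_emb, axis=1) * np.linalg.norm(q_emb)
    )
    candidates = [documents[i] for i in np.argsort(scores)[::-1]]

    # 3. Rerank the candidates so the most relevant passages reach the LLM first.
    reranked = co.rerank(model="rerank-english-v3.0", query=query,
                         documents=candidates, top_n=top_n)
    return [candidates[r.index] for r in reranked.results]

print(retrieve_and_rerank("What is the policy on travel spending?"))
```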

Some other vendors GAI Insights assessed include SambaNova, IBM, Deepset, Glean, LangChain, LlamaIndex and Mistral AI.  

Vendors were rated on a variety of factors, including product and service innovation; clarity of offerings, benefits and features; track record in launching products and partnerships; defined target buyers; quality of technical teams and management experience; strategic relationships and quality of investors; money raised; and valuation. 

Meanwhile, Nvidia continues to dominate the hardware side, with 85% of market share. GAI Insights expects the company to keep offering products up and down the hardware and software stack, and to innovate and grow at a “blistering” pace in 2025. 

While the gen AI market is still in its early stages — just 5% of enterprises have applications in production — 2025 will see massive growth, with 33% of companies expected to push models into production, GAI Insights projects. Gen AI is the leading budget priority for CIOs and CTOs, amid a 240X drop in the cost of AI computation over the last 18 months. 

Interestingly, 90% of current deployments use proprietary LLMs rather than open-source models, a trend the firm calls “Own Your Own Intelligence.” This is driven by the need for greater data privacy, control and regulatory compliance. Top use cases for gen AI include customer support, coding, summarization, text generation and contract management. 

But ultimately, Baier noted, “there is an explosion in just about any use case right now.” 

He pointed out that an estimated 90% of data is unstructured, scattered across emails, PDFs, videos and other formats, and marveled that “gen AI allows us to talk to machines, it allows us to unlock the value of unstructured data. We could never do that cost-effectively before. Now we can. There’s a stunning IT revolution going on right now.”

2025 will also see more vertical-specific small language models (SLMs) emerge, and open-source models will be in demand as well (even as their definition remains contentious). Even smaller models such as Gemma (2B to 7B parameters), Phi-3 (3.8B to 7B parameters) and Llama 3.2 (1B and 3B) will deliver better performance. GAI Insights points out that small models are cost-effective and secure, and that key developments in byte-level tokenization, weight pruning and knowledge distillation are shrinking model size while improving performance. 
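
One of those techniques, knowledge distillation, trains a small “student” model to mimic the output distribution of a larger “teacher.” The toy sketch below illustrates the general idea only; the tiny linear layers and random tensors are stand-ins, not any vendor's actual training recipe.

```python
# Toy knowledge-distillation step: a small "student" learns to match the
# softened output distribution of a larger, frozen "teacher".
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab, batch = 100, 8
temperature, alpha = 2.0, 0.5  # illustrative hyperparameters

teacher = torch.nn.Linear(32, vocab)   # stand-in for a large frozen model
student = torch.nn.Linear(32, vocab)   # stand-in for a much smaller model
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(batch, 32)             # placeholder input features
labels = torch.randint(0, vocab, (batch,))

with torch.no_grad():
    teacher_logits = teacher(x)
student_logits = student(x)

# Hard-label loss keeps the student grounded in the task...
ce_loss = F.cross_entropy(student_logits, labels)
# ...while the KL term transfers the teacher's softened-logit "dark knowledge".
kd_loss = F.kl_div(
    F.log_softmax(student_logits / temperature, dim=-1),
    F.softmax(teacher_logits / temperature, dim=-1),
    reduction="batchmean",
) * temperature ** 2

loss = alpha * ce_loss + (1 - alpha) * kd_loss
loss.backward()
optimizer.step()
```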

Further, voice assistants are expected to be the “killer interface” in 2025, as they offer more personalized experiences, and on-device AI is expected to see a significant boost. “We see a real boom next year when smartphones start shipping with AI chips embedded in them,” said Baier. 

Will we truly see AI agents in 2025?

While AI agents are all the talk in the enterprise right now, it remains to be seen how viable they will be in the year ahead. There are many hurdles to overcome, Baier noted, such as unregulated spread, agentic AI making “unreliable or questionable” decisions, and agents operating on poor-quality data. 

AI agents have yet to be fully defined, he said, and those in production today are primarily confined to internal applications and small-scale deployments. “We see all the hype around AI agents, but it’s going to be years before they’re adopted widespread in companies,” said Baier. “They’re very promising, but not promising next year.”

Factors to consider when deploying gen AI

With the market so cluttered and tools so varied, Baier offered some critical advice for enterprises to get started. First, beware of vendor lock-in and accept the reality that the enterprise IT stack will continue to change dramatically over the next 15 years. 

Since AI initiatives should come from the top, Baier suggests that the C-suite have an in-depth review with the board to explore opportunities, threats and priorities. The CEO and VPs should also have hands-on experience (at least three hours to start). Before deploying, consider doing a no-risk chatbot pilot using public data to support hands-on learning, and experiment with on-device AI for field operations. 
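
As one way to picture such a pilot, the sketch below shows a hypothetical, minimal chatbot grounded in a snippet of public FAQ text, assuming the OpenAI Python SDK; the model name, prompt and data are placeholders rather than recommendations.

```python
# Minimal chatbot pilot over public data (assumes the OpenAI Python SDK;
# the model name, prompt and FAQ text are illustrative placeholders).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PUBLIC_FAQ = """Store hours: Mon-Fri 9am-6pm.
Returns accepted within 30 days with a receipt."""

def answer(question: str) -> str:
    # Constrain the bot to public data so the pilot carries no data-privacy risk.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model the pilot approves
        messages=[
            {"role": "system",
             "content": "Answer only from the provided public FAQ:\n" + PUBLIC_FAQ},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("Can I return an item after two weeks?"))
```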

Enterprises should also designate an executive to oversee integration, develop a center of excellence and coordinate projects, Baier advises. It is equally important to establish a gen AI use policy and training: publish the policy, conduct basic training, identify which tools are approved and spell out what information should not be entered.

Ultimately, “don’t ban ChatGPT; your employees are already using it,” GAI asserts. 


