Microsoft and OpenAI have had something of a symbiotic relationship, with the former giving billions in capital to the startup AI lab and in return gaining early access to cutting-edge models that are now baked into Microsoft’s suite of productivity software. The two companies have been heading in diverging directions, however, and Reuters reported today that Microsoft is looking to add models that aren’t built by OpenAI to its 365 Copilot product.
The reasoning, according to the report, is that Microsoft sees OpenAI’s cutting-edge GPT-4 model as too expensive and not fast enough to satisfy its enterprise customers. 365 Copilot is an AI-powered assistant built into Microsoft’s suite of productivity applications, including Word and PowerPoint. The tool is supposed to ingest all of a company’s data and handle myriad tasks: letting users quickly find information without hunting through disparate apps, generating a list of the company’s most profitable business units, or instantly summarizing meetings and emails.
It is supposed to do those things, but customers and insiders alike are still underwhelmed by 365 Copilot, which costs an extra $30 per month per user on a team. In a recent Business Insider story, Microsoft employees speaking anonymously called the tools “terrible” and “gimmicky,” saying they don’t work well 75% of the time. On the customer front, Business Insider cited a survey of 123 IT leaders published by the research and advisory firm Gartner, which found that only four said Copilot provided significant value to their companies. It should be noted that other stories have reported on companies finding value in large language models, such as for simplifying customer support.
Some customers who spoke to Business Insider specifically noted that 365 Copilot is too expensive.
OpenAI’s GPT-4, the kind of frontier, general-purpose model that powers ChatGPT, is trained on vast swaths of data and can be expensive and slow to run; that is why many models are also offered in “lite” versions that perform less intensive inference, or “thinking.” Microsoft has been training its own smaller, in-house models, like one called Phi-4, and sources told Reuters that the company is looking to “customize other open-weight models to make 365 Copilot faster and more efficient.”
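To make the open-weight distinction concrete, here is a minimal sketch, assuming the Hugging Face transformers library and the publicly released microsoft/phi-4 checkpoint, of how a smaller open-weight model can be downloaded and run on one’s own hardware, something that isn’t possible with a closed, API-only model like GPT-4. The prompt and parameters are purely illustrative and are not Microsoft’s actual Copilot setup.

```python
# Illustrative sketch only: running an open-weight model locally with
# Hugging Face transformers. Requires `pip install transformers accelerate`
# and enough GPU memory for the ~14B-parameter Phi-4 checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/phi-4",  # open-weight model published by Microsoft
    device_map="auto",        # place the model on a GPU if one is available
)

# A summarization-style prompt, the kind of task Copilot handles for meetings
# and emails; the text here is a stand-in, not real customer data.
prompt = "Summarize the key decisions from this meeting transcript:\n..."
result = generator(prompt, max_new_tokens=200)
print(result[0]["generated_text"])
```

Because the weights are local, a company can fine-tune or otherwise customize such a model for speed and cost, which is the kind of flexibility the Reuters report says Microsoft is after.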
In one sense, it is logical that Microsoft would want to reduce its reliance on OpenAI. If the company is right that AI will be the next generational change in computing, relying on an independent company for the core technology is not a great idea.
Microsoft has plowed billions of dollars into OpenAI and will receive 75% of the startup’s profits until Microsoft recoups its investment, and even then it will still hold a large stake. The company in effect gets to hedge its bets: build its own in-house models while keeping a lottery ticket in OpenAI in case the startup continues on its current skyward trajectory.
Though OpenAI is the front-runner today, some skeptics say we may not know the true winner of the AI race yet (should these technologies prove as revolutionary as we are told to believe). Numerous search engines came online in the ’90s, only to be quickly trounced when the latecomer Google showed up. Microsoft is likely wise to hedge.