Risk mitigation may be behind the adoption of OpenAI's language model. But Apple denies that was the reason
Apple has been very strategic in its unveiling of new generative AI features for iPhones, Macs, and iPads. Many of these features use the company’s own AI models, some running directly on the devices and others in the cloud.
This was no surprise, considering that the company has always used its own technologies in its products. What caught our attention, however, was its decision to partner with OpenAI to offer ChatGPT to its users.
Apple has been working on machine learning for years. In 2018, the company hired John Giannandrea, Google’s former head of research and AI, to lead its AI strategy. So why hasn’t it developed its own large language model (LLM) like its competitors?
It’s true that developing an advanced language model requires a lot of research talent, computing power, and time. But Apple had access to all of that.
The reason it hasn’t invested heavily in a chatbot powered by its own LLM may be the same reason that made Google hesitate in the first place: the risk that it could leak users’ private information or a company’s trade secrets, spread misinformation, defame someone, or release dangerous information (like plans for a bioweapon, for example).
When OpenAI launched ChatGPT in late 2022, big tech's concerns about these risks were set aside, as companies began to fear being left behind on a potentially transformative technology (and being penalized for it on the stock market). But not Apple.
The AI capabilities Apple showcased, such as photo corrections, emoji creation, and text summarization, cover a controlled set of low-risk functions. ChatGPT and similar chatbots are much more comprehensive.
They can be used for a wider range of tasks and are generally more unpredictable in their responses; OpenAI already faces lawsuits for defamation and copyright infringement over its model's outputs.
“I think that’s one of the reasons,” says Ben Bajarin, an analyst at Creative Strategies. “And it’s also why they [Apple] may not focus as much on that [ChatGPT feature] and focus more on features that seem more valuable and useful.”
Apple denies that risk mitigation was behind its decision to use a third-party chatbot. The company said the decision was more about allowing users to access an AI chatbot without having to leave the context they are working in within the operating system.
“For broader queries, outsourcing tasks to OpenAI, with disclosure of data transfers to third parties, will maintain the integrity of Apple’s trusted brand,” notes Jeremiah Owyang, an investor at Blitzscaling Ventures.
But it could also help it in other ways. “The strategic advantage is that it can analyze the queries sent to OpenAI, learn from them, and improve its own models,” he adds.