Apple AI researchers boast useful on-device model that ‘substantially outperforms’ GPT-4 - 9to5Mac

Apr 02, 2024 - 9to5mac.com
Apple's AI team has published a research paper detailing a new system that could enhance Siri's capabilities. The system, called ReALM (Reference Resolution As Language Modeling), allows Siri to consider on-screen entities, conversational entities, and background entities to provide more accurate and relevant responses. Apple reports that ReALM outperforms OpenAI's GPT-3.5, with its smallest model achieving performance comparable to GPT-4 and its larger models substantially outperforming it.
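The core idea the paper describes, serializing on-screen, conversational, and background entities into plain text so a language model can resolve which one the user is referring to, might look roughly like the sketch below. The entity fields, prompt wording, and helper names here are illustrative assumptions for clarity, not details taken from the paper.

```python
# Illustrative sketch (assumed structure, not from the ReALM paper): framing
# reference resolution as a language-modeling task by serializing candidate
# entities into a single text prompt for a model to reason over.
from dataclasses import dataclass


@dataclass
class Entity:
    entity_id: int
    kind: str          # "on-screen", "conversational", or "background"
    description: str   # textual rendering of the entity


def build_prompt(user_utterance: str, entities: list[Entity]) -> str:
    """Serialize the candidate entities and the user's request into text,
    so a language model can be asked which entity the request refers to."""
    lines = ["Candidate entities:"]
    for e in entities:
        lines.append(f"[{e.entity_id}] ({e.kind}) {e.description}")
    lines.append(f'User request: "{user_utterance}"')
    lines.append("Which entity ID does the request refer to?")
    return "\n".join(lines)


# Example usage: resolving "call the one at the bottom" against screen content.
entities = [
    Entity(1, "on-screen", "phone number 555-0100 near the top of the page"),
    Entity(2, "on-screen", "phone number 555-0199 at the bottom of the page"),
    Entity(3, "background", "song currently playing in the Music app"),
]
print(build_prompt("call the one at the bottom", entities))
```

Because everything the model needs is expressed as text, a relatively small on-device language model can handle the task, which is consistent with the paper's on-device framing.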

The paper concludes that ReALM outperforms previous approaches and performs as well as the state-of-the-art language model GPT-4, despite having far fewer parameters. It also outperforms GPT-4 on domain-specific user utterances, making ReALM a practical choice for a reference resolution system that can run on-device without compromising on performance. This development could significantly enhance Siri's usefulness and intelligence in the upcoming iOS 18 and beyond.

Key takeaways:

  • Apple's AI team has published a research paper describing a system that lets Siri do more than just recognize what's in an image, potentially outperforming GPT-4.
  • The system, called ReALM, takes into account what's on the user's screen and what tasks are active, potentially making Siri smarter and more useful.
  • Apple's model has shown large improvements over existing systems, with its smallest model achieving performance comparable to GPT-4 and its larger models substantially outperforming it.
  • The system can exist on-device without compromising on performance, which is a key factor for Apple. This development could be part of the next few years of platform development, starting with iOS 18 and WWDC 2024 on June 10.