
Apple researchers say their on-device model ReALM outperforms GPT-4 and can significantly improve Siri intelligence



According to an April 2 report on this site, Siri can currently attempt to describe the pictures in a message, but the results are unreliable. Apple has nevertheless kept exploring artificial intelligence: in a recent research paper, Apple's AI team describes a model, called ReALM, that could make Siri significantly smarter, and reports that it outperformed OpenAI's well-known language model GPT-4 in their tests.

The paper explains what makes ReALM special: it can understand both the content on the user's screen and the operations currently in progress. The entities it handles fall into three types (a minimal sketch modeling them follows the list):

  • On-screen entities: content currently displayed on the user's screen.
  • Conversational entities: content related to the dialogue. For example, if the user says "call mom", mom's contact information is a conversational entity.
  • Background entities: entities that may not relate directly to the user's current operation or to what is shown on screen, such as the music that is playing or an alarm about to go off.
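
To make the taxonomy concrete, here is a minimal, purely illustrative Python sketch of the three entity categories. All names here are hypothetical; Apple has not published ReALM's actual interfaces.

```python
# Illustrative sketch only: one way to model the three entity categories
# the paper describes. Names are hypothetical, not Apple's API.
from dataclasses import dataclass
from enum import Enum, auto


class EntityKind(Enum):
    ON_SCREEN = auto()       # content currently displayed on the user's screen
    CONVERSATIONAL = auto()  # content arising from the dialogue itself
    BACKGROUND = auto()      # e.g. the song now playing or a pending alarm


@dataclass
class Entity:
    kind: EntityKind
    label: str  # human-readable description, e.g. "Mom's contact card"


# Example: the candidate set a ReALM-style resolver would rank when the
# user says "call mom" while music plays in the background.
candidates = [
    Entity(EntityKind.CONVERSATIONAL, "Mom (contact)"),
    Entity(EntityKind.ON_SCREEN, "Phone number shown in the open message"),
    Entity(EntityKind.BACKGROUND, "Now-playing track"),
]
```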

If it works as described, ReALM would make Siri smarter and more useful. The researchers compared ReALM's performance against OpenAI's GPT-3.5 and GPT-4:

We tested the GPT-3.5 and GPT-4 models available from OpenAI, providing them with contextual information and asking them to predict a list of possible entities. GPT-3.5 accepts only text input, so our prompts were text-only. GPT-4 can also understand image information, so we supplied it with screenshots, which significantly improved its on-screen entity recognition.
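
A rough sketch of that comparison setup, using OpenAI's current chat-completions client: GPT-3.5 gets a text-only prompt, while GPT-4 additionally receives a screenshot. The model names, prompt wording, and file names below are assumptions for illustration; the paper does not publish its test harness.

```python
# Hypothetical reconstruction of the text-only vs. text+screenshot setup.
import base64

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Given this context, list the entities the user might be referring to."

# Text-only prompt for GPT-3.5, which cannot accept images.
text_only = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": f"{PROMPT}\n\nContext: ..."}],
)

# A vision-capable GPT-4 model also receives the screenshot, inlined as base64.
with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

with_screenshot = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": PROMPT},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
```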

So how does Apple’s ReALM perform?

"Our models show large improvements in recognizing different types of entities. Even our smallest model achieves a gain of more than 5% in on-screen entity recognition over the previous system. Compared with GPT-3.5 and GPT-4, our smallest model performs comparably to GPT-4, while our larger models significantly outperform it."

The paper concludes that ReALM can match GPT-4's performance despite having far fewer parameters, and that it performs even better on user instructions in specific domains, which makes ReALM a practical and efficient entity-recognition system that can run on-device.

For Apple, the key question appears to be how to deploy this technology on its devices without hurting performance. With the WWDC 2024 developer conference set for June 10, Apple is widely expected to show off more of its artificial intelligence work in new systems such as iOS 18.

