Quick Technology News, April 2 — according to media reports, Apple's research team claims in a recent paper to have proposed ReALM, a model that can run on-device and surpass GPT-4 in some respects. ReALM comes in parameter sizes of 80M, 250M, 1B, and 3B — small enough to run on phones, tablets, and similar devices.

ReALM focuses on reference resolution: enabling AI to identify the referential relationships between entities (such as names, places, and organizations) mentioned in text. The paper divides entities into three types:

- On-screen entities: content currently displayed on the user's screen.
- Conversational entities: content related to the dialogue. For example, if the user says "call mom", mom's contact information is a conversational entity.
- Background entities: entities that may not be directly related to the user's current action or the on-screen content, such as music that is playing or an alarm about to sound.

The paper states that although large language models have proven extremely capable across a variety of tasks, their potential has not been fully exploited for resolving references to non-conversational entities such as on-screen and background entities. ReALM is a new method whose performance the authors compared against GPT-3.5 and GPT-4: the smallest ReALM model performs on par with GPT-4, while the larger models significantly exceed it. This research is expected to improve the Siri assistant on Apple devices, helping Siri better understand and handle the context of user queries.
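To make the reference-resolution task concrete, the sketch below shows one plausible way to frame it as a text problem: candidate entities from the three categories are serialized into a prompt that a language model could then answer. This is an illustrative assumption, not Apple's actual implementation; the `Entity` class, `build_prompt` function, and sample entities are all hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class EntityType(Enum):
    ON_SCREEN = "on-screen"            # content currently shown on the user's screen
    CONVERSATIONAL = "conversational"  # entities from the dialogue, e.g. "mom" in "call mom"
    BACKGROUND = "background"          # e.g. music that is playing, an alarm about to sound

@dataclass
class Entity:
    name: str
    etype: EntityType

def build_prompt(query: str, entities: list[Entity]) -> str:
    """Serialize the candidate entities and the user query into a single
    text prompt, so a language model can pick the referent of the query."""
    lines = [f"{i}. [{e.etype.value}] {e.name}" for i, e in enumerate(entities, 1)]
    return (
        "Candidate entities:\n"
        + "\n".join(lines)
        + f"\nUser query: {query}"
        + "\nWhich entity does the query refer to?"
    )

# Hypothetical candidates drawn from all three entity categories.
entities = [
    Entity("Mom (contact)", EntityType.CONVERSATIONAL),
    Entity("Now playing: Jazz FM", EntityType.BACKGROUND),
    Entity("Pizza Palace, 555-0123", EntityType.ON_SCREEN),
]
print(build_prompt("call that restaurant", entities))
```

Here the model would be expected to resolve "that restaurant" to the on-screen entity "Pizza Palace"; phrasing the whole task as text is what lets a small language model handle screen and background context at all.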