Table of Contents
Phi-2 Key Highlights
Training details
Experimental Evaluation

Runs on a phone: Microsoft's 2.7-billion-parameter small model beats much larger models

Dec 14, 2023, 10:45 PM

At the Ignite conference last month, Microsoft CEO Satya Nadella announced that the small-scale Phi-2 model would be fully open-sourced, highlighting its strong performance in commonsense reasoning, language understanding, and logical reasoning.


Today, Microsoft released more details about the Phi-2 model, along with a new prompting technique, promptbase. With only 2.7 billion parameters, the model outperforms Llama2 7B, Llama2 13B, and Mistral 7B, and closes the gap with (or even surpasses) Llama2 70B on most commonsense reasoning, language understanding, mathematics, and coding tasks.

At the same time, the compact Phi-2 can run on devices such as laptops and mobile phones. Nadella said Microsoft is delighted to share its best-in-class small language model (SLM) and SOTA prompting techniques with developers.


In June this year, Microsoft published the paper "Textbooks Are All You Need," training phi-1, a 1.3B-parameter model, on just 7B tokens of "textbook-quality" data. Despite a dataset and model size orders of magnitude smaller than its competitors', phi-1 achieved a pass@1 rate of 50.6% on HumanEval and 55.5% accuracy on MBPP, proving that even "small data" of high quality can yield strong model performance.
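The pass@1 figure above is typically computed with the unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021). The article does not show the formula, so here is a minimal sketch of it in Python:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the HumanEval paper.

    n: samples generated per problem
    c: samples that pass the unit tests
    k: evaluation budget (k=1 for pass@1)
    """
    if n - c < k:
        return 1.0  # every size-k subset contains a passing sample
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 10 completions per problem, 3 of which pass the tests
print(pass_at_k(n=10, c=3, k=1))  # 0.3
```

Averaging this estimate over all 164 HumanEval problems gives the reported score.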

In September, Microsoft followed up with the "Textbooks Are All You Need II: phi-1.5 Technical Report," further investigating the potential of high-quality "small data." The report introduced phi-1.5, a 1.3-billion-parameter model suited to QA, coding, and other scenarios.

Now, the 2.7-billion-parameter Phi-2 once again packs excellent reasoning and language understanding into a small body, achieving SOTA performance among base language models under 13 billion parameters. Thanks to innovations in model scaling and training-data curation, Phi-2 matches or exceeds models 25 times its size on complex benchmarks.

Microsoft says Phi-2 will be an ideal model for researchers conducting interpretability research, safety improvements, or fine-tuning experiments across a variety of tasks. Phi-2 is available in the Azure AI Studio model catalog to facilitate language model development.

Phi-2 Key Highlights

Scaling language models to hundreds of billions of parameters has indeed unlocked many new capabilities and redefined the natural language processing landscape. But a question remains: can these capabilities also be achieved at a smaller scale through training-strategy choices such as data selection?

Microsoft's answer is the Phi series of models: small language models trained to achieve performance comparable to much larger ones. Phi-2 breaks the conventional scaling laws of language models in two respects.

First, the quality of training data plays a crucial role in model performance. Microsoft takes this insight to the extreme by focusing on "textbook-quality" data. The training data consists of a purpose-built synthetic dataset that teaches the model commonsense knowledge and reasoning across areas such as science, daily activities, and psychology. The corpus is further augmented with carefully selected web data, screened for educational value and content quality.
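Microsoft has not published this filtering pipeline, but screening web data for educational value is commonly done with a lightweight quality classifier. The sketch below is purely illustrative (the seed examples and threshold are assumptions, not Microsoft's actual setup):

```python
# Hypothetical educational-value filter for web documents.
# Seed data, features, and threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_texts = [
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "To solve a quadratic equation, first compute the discriminant b^2 - 4ac.",
    "CLICK HERE for the best deals!!! Limited time offer, buy now!",
    "lol idk what even happened last night tbh",
]
seed_labels = [1, 1, 0, 0]  # 1 = textbook-like, 0 = low educational value

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
classifier = LogisticRegression().fit(
    vectorizer.fit_transform(seed_texts), seed_labels
)

def keep(document: str, threshold: float = 0.6) -> bool:
    """Keep a document only if the classifier rates it educational."""
    score = classifier.predict_proba(vectorizer.transform([document]))[0, 1]
    return score >= threshold

web_corpus = ["The mitochondrion produces ATP through cellular respiration."]
filtered_corpus = [doc for doc in web_corpus if keep(doc)]
```

In a real pipeline the seed labels would come from human annotation or a strong LLM, and the classifier would run over billions of documents.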

Second, Microsoft uses innovative techniques to scale up: starting from the 1.3-billion-parameter Phi-1.5, knowledge is progressively embedded into the 2.7-billion-parameter Phi-2. This scaled knowledge transfer accelerates training convergence and significantly improves Phi-2's benchmark scores.
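Microsoft has not detailed the transfer mechanism. One common way to seed a larger transformer from a smaller one is to copy the smaller model's weights into the matching slices of the larger model's tensors; the PyTorch sketch below is a hypothetical illustration of that idea, not Phi-2's actual procedure:

```python
# Hypothetical sketch: grow a model by copying each small tensor into
# the corresponding slice of its larger counterpart, leaving the rest
# at its fresh initialization. Phi-2's real method is unpublished.
import torch
import torch.nn as nn

small = nn.Linear(1024, 1024)  # stand-in for a Phi-1.5-sized layer
large = nn.Linear(2048, 2048)  # stand-in for a Phi-2-sized layer

with torch.no_grad():
    large_params = dict(large.named_parameters())
    for name, src in small.named_parameters():
        dst = large_params[name]
        idx = tuple(slice(0, dim) for dim in src.shape)  # top-left block
        dst[idx].copy_(src)

# Pretraining then resumes on the larger model, which starts out
# already encoding what the smaller model learned, so it converges faster.
```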

Below is a comparison of Phi-2 and Phi-1.5. All tasks are evaluated 0-shot, except BBH (3-shot CoT) and MMLU (5-shot).

[Figure: Phi-2 vs. Phi-1.5 benchmark comparison]

Training details

Phi-2 is a Transformer-based model trained with a next-word-prediction objective. It was trained on a mix of synthetic and web datasets, using 96 A100 GPUs for 14 days.
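Phi-2's weights are published on Hugging Face as microsoft/phi-2, so trying the base model takes only a few lines with the transformers library (generation settings below are illustrative; the Instruct:/Output: prompt format follows the QA example on the model card):

```python
# Minimal sketch: load Phi-2 and sample a completion with Hugging Face
# transformers. Older transformers versions may also need
# trust_remote_code=True when loading.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype=torch.float16,  # 2.7B params fit on a single consumer GPU
    device_map="auto",
)

prompt = "Instruct: Explain why the sky appears blue.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```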

Phi-2 is a base model: it has undergone neither alignment via reinforcement learning from human feedback (RLHF) nor instruction fine-tuning. Despite this, Phi-2 shows less toxicity and bias than existing aligned open-source models, as shown in Figure 3 below.

[Figure 3: Toxicity and bias of Phi-2 compared with aligned open-source models]

Experimental Evaluation

First, the study experimentally compared Phi-2 with popular language models on academic benchmarks spanning several categories (an evaluation sketch follows the list):

  • Big Bench Hard (BBH) (3-shot with CoT)
  • Commonsense reasoning (PIQA, WinoGrande, ARC easy and challenge, SIQA)
  • Language understanding (HellaSwag, OpenBookQA, MMLU (5-shot), SQuADv2 (2-shot), BoolQ)
  • Mathematics (GSM8K (8-shot))
  • Coding (HumanEval, MBPP (3-shot))
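The article does not say which harness produced these numbers. As an illustration, EleutherAI's lm-evaluation-harness (pip install lm-eval) can run comparable evaluations roughly like this; exact task identifiers and API details vary across harness versions:

```python
# Illustrative sketch: evaluate Phi-2 on a few of the 0-shot tasks
# listed above with lm-evaluation-harness. Task names are assumptions
# that may differ between harness releases.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=microsoft/phi-2,dtype=float16",
    tasks=["hellaswag", "winogrande", "piqa", "boolq"],
    num_fewshot=0,  # the 0-shot suite; GSM8K above would use 8
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```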

With only 2.7 billion parameters, Phi-2 surpasses Mistral 7B and the Llama2 7B and 13B models on various aggregated benchmarks. Notably, it outperforms the 25x-larger Llama2-70B on multi-step reasoning tasks, i.e., coding and mathematics.

In addition, despite its smaller size, Phi-2 performs comparably to the recently released Gemini Nano 2.

Since many public benchmarks may have leaked into training data, the research team believes the best way to measure a language model is to test it on concrete use cases. The study therefore evaluated Phi-2 on several of Microsoft's internal proprietary datasets and tasks, again comparing it with Mistral and Llama 2: on average, Phi-2 outperformed Mistral-7B, and Mistral-7B in turn outperformed the Llama2 models (7B, 13B, 70B).

[Figures: benchmark comparisons of Phi-2 with the Mistral and Llama2 models]

The research team also tested Phi-2 extensively on prompts commonly used in the research community, and the model performed as expected. For example, on a prompt used to evaluate a model's ability to solve physics problems (recently used to evaluate the Gemini Ultra model), Phi-2 gave the following results:

[Figures: Phi-2's output on the physics-problem prompt]
