Why are small language models the next big thing in the AI world?
Translator | Bugatti
Reviewer | Chonglou
In the AI field, tech giants have been racing to build ever-larger language models, but a surprising new trend is emerging: small is the new big. As progress on large language models (LLMs) shows signs of stalling, researchers and developers are increasingly turning their attention to small language models (SLMs). These compact, efficient and adaptable AI models are challenging the "bigger is better" assumption and promise to change the way we approach AI development.
Are LLMs starting to stall?
Performance comparisons recently released by Vellum and HuggingFace show that the performance gap between LLMs is narrowing rapidly. The trend is especially clear on specific tasks such as multiple-choice questions, reasoning and math problems, where the differences between the leading models are minimal. In multiple-choice questions, for example, Claude 3 Opus, GPT-4 and Gemini Ultra all score above 83% accuracy, and on reasoning tasks Claude 3 Opus, GPT-4 and Gemini 1.5 Pro exceed 92%. Meanwhile, smaller models such as Mixtral 8x7B and Llama 2 70B are showing surprisingly strong results in certain areas, such as reasoning and multiple-choice questions, where they outperform some of their larger counterparts. This suggests that model size may not be the only factor determining performance; architecture, training data and fine-tuning techniques may play an important role as well.
Gary Marcus, former head of Uber AI and author of "Rebooting AI", a book on how to build trustworthy AI, said: "If you look at the dozen or so articles published recently, they are generally in the same ballpark as GPT-4." Marcus was interviewed by VentureBeat on Thursday. "Some of them are a little better than GPT-4, but there hasn't been a quantum leap. I think everyone would agree that GPT-4 was a big step ahead of GPT-3.5, but in more than a year since then there hasn't been any big leap."
As the performance gap continues to narrow and more models deliver competitive results, the question arises whether LLMs have really begun to plateau. If the trend continues, it could significantly affect how language models are developed and deployed in the future, shifting attention away from blindly scaling up model size and toward more efficient, specialized architectures.
Disadvantages of the LLM approach
While LLMs are undeniably powerful, they have clear drawbacks. First, training an LLM requires vast amounts of data and billions or even trillions of parameters. This makes the training process extremely resource-intensive, and the computing power and energy needed to train and run LLMs are staggering. The resulting costs are high, making it difficult for smaller organizations or individuals to take part in core LLM development. At an MIT event last year, OpenAI CEO Sam Altman said that training GPT-4 cost at least $100 million. The complexity of the tools and techniques required to work with LLMs also presents developers with a steep learning curve, further limiting accessibility. From model training to building and deployment, developers face long cycle times, which slows development and experimentation. A recent paper from the University of Cambridge found that deploying a single machine learning model can take a company 90 days or longer.
Another major problem with LLMs is their tendency to hallucinate: to generate output that sounds plausible but is not actually true. This stems from how LLMs are trained, predicting the next most likely word from patterns in the training data rather than genuinely understanding the information. As a result, LLMs can confidently make false statements, fabricate facts, or combine unrelated concepts in absurd ways. Detecting and mitigating these hallucinations remains a hard, open challenge in building reliable language models. Marcus warned: "If you use an LLM for a high-stakes problem, you don't want to insult your customer, get wrong medical information, or use it to drive a car. That's still a problem."
The scale and black-box nature of LLMs also makes them hard to interpret and debug, and interpretability and debuggability are crucial for building trust in a model's output. Bias in training data and algorithms can lead to unfair, inaccurate or even harmful outputs. And, as seen with Google's Gemini, techniques meant to make LLMs "safe" and reliable can also reduce their effectiveness. In addition, the centralized nature of LLMs raises concerns about power and control concentrating in the hands of a few large technology companies.
Enter the small language model (SLM)
SLMs are streamlined versions of LLMs, with fewer parameters and simpler designs.
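To make "fewer parameters" concrete, here is a rough back-of-envelope sketch of how a decoder-only transformer's parameter count scales with width and depth. The formula is a standard approximation (embeddings plus attention and feed-forward blocks); the example configurations are illustrative guesses, not the official architecture of any named model.

```python
def transformer_params(vocab: int, d_model: int, n_layers: int, ff_mult: int = 4) -> int:
    """Approximate parameter count for a decoder-only transformer.

    Counts the token embedding table, the attention projections
    (Q, K, V, output) and the feed-forward block; ignores biases
    and layer norms, which contribute comparatively little.
    """
    embed = vocab * d_model                  # token embedding table
    attn = 4 * d_model * d_model             # Q, K, V and output projections
    ffn = 2 * d_model * (ff_mult * d_model)  # up- and down-projection
    return embed + n_layers * (attn + ffn)

# Illustrative (made-up) configs: a small on-device model vs. a large one.
small = transformer_params(vocab=32_000, d_model=2048, n_layers=18)
large = transformer_params(vocab=32_000, d_model=8192, n_layers=80)
print(f"small ~{small / 1e9:.1f}B params, large ~{large / 1e9:.1f}B params")
```

Under these assumptions the small configuration lands near one billion parameters while the large one lands in the tens of billions, which is the scale gap the article is describing.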
They need less data and less training time — minutes or hours rather than the days an LLM can take. This makes SLMs more efficient and simpler to deploy on-premises or on smaller devices.
One of the main advantages of SLMs is their suitability for specific applications. Because their scope is narrower and they need less data, they are easier to fine-tune for a particular domain or task than a large general-purpose model. This customization lets companies build SLMs that are highly effective for their specific needs, such as sentiment analysis, named entity recognition, or domain-specific question answering. The specialized nature of an SLM can make it more performant and efficient in these target applications than a general-purpose model would be.
Another benefit of SLMs is their potential for better privacy and security. With a smaller codebase and simpler architecture, an SLM is easier to audit and less likely to harbor unexpected vulnerabilities. This makes them attractive for applications that handle sensitive data, where a breach could lead to serious consequences. In addition, the reduced computational requirements of SLMs make them better suited to running on local devices or on-premises servers rather than relying on cloud infrastructure. This local processing can further improve data security and reduce the risk of data exposure.
Compared with LLMs, SLMs are also less prone to undetected hallucinations within their specific domains. SLMs are typically trained on narrower, targeted datasets specific to their intended domain or application, which helps the model learn the patterns, vocabulary and information most relevant to its task. This reduces the likelihood of generating irrelevant, unexpected or inconsistent output. And with fewer parameters and a leaner architecture, SLMs are less prone to capturing and amplifying noise or errors in the training data.
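The local-deployment point can be made concrete with simple arithmetic: a model's weight footprint is roughly its parameter count times the bytes per parameter, and quantization shrinks the latter. A hedged sketch with illustrative sizes (real footprints also include activations and the KV cache, which this ignores):

```python
def weight_footprint_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate size of a model's weights alone, in gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

# Illustrative sizes: a 2B-parameter SLM vs. a 70B-parameter LLM,
# checked against a hypothetical device with 8 GB of RAM.
for name, n in [("2B SLM", 2e9), ("70B LLM", 70e9)]:
    for bits in (16, 8, 4):
        gb = weight_footprint_gb(n, bits)
        fits = "fits" if gb <= 8 else "does not fit"
        print(f"{name} @ {bits}-bit: {gb:5.1f} GB -> {fits} in 8 GB of device RAM")
```

Under these assumptions the 2B model fits comfortably on a consumer device even at 16-bit precision, while the 70B model does not fit even at 4-bit — which is why the small end of the spectrum is the one that runs locally.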
Clem Delangue, CEO of AI startup HuggingFace, has suggested that up to 99% of use cases could be addressed with SLMs, and predicted that 2024 will be the year of the SLM. HuggingFace, whose platform lets developers build, train and deploy machine learning models, announced a strategic partnership with Google earlier this year. The two companies subsequently integrated HuggingFace into Google's Vertex AI, allowing developers to quickly deploy thousands of models through Google's Vertex Model Garden.
Google's Gemma gains traction
After initially ceding its LLM advantage to OpenAI, Google is aggressively pursuing the SLM opportunity. Back in February, Google launched Gemma, a new family of small language models designed for efficiency and ease of use. Like other SLMs, Gemma models can run on a variety of everyday devices, such as smartphones, tablets or laptops, without special hardware or extensive optimization. Since Gemma's release, the trained models have been downloaded more than 400,000 times on HuggingFace in the last month, and several exciting projects have already appeared.
For example, Cerule is a powerful image-and-language model that combines Gemma 2B with Google's SigLIP and was trained on a large dataset of images and text. Cerule leverages efficient data-selection techniques to achieve high performance without requiring massive amounts of data or computation, which could make it a good fit for emerging edge-computing use cases. Another example is CodeGemma, a specialized version of Gemma focused on coding and mathematical reasoning. CodeGemma offers three different models for various coding-related activities, making advanced programming tools more accessible and efficient for developers.
The huge potential of small language models
As the AI community continues to explore the potential of small language models, the advantages of faster development cycles, greater efficiency and the ability to tailor models to specific needs become ever more apparent. By delivering cost-effective, targeted solutions, SLMs promise to democratize access to AI and drive innovation across industries. Deploying SLMs at the edge opens up new possibilities for real-time, personalized and secure applications in industries such as finance, entertainment, automotive systems, education, e-commerce and healthcare.
By processing data locally and reducing dependence on cloud infrastructure, edge computing combined with SLMs can shorten response times, strengthen data privacy and improve the user experience. This decentralized approach to AI promises to change how businesses and consumers interact with technology, creating more personalized and intuitive experiences in the real world. As LLMs face challenges around computing resources and run into performance ceilings, the rise of SLMs promises to keep the AI ecosystem advancing at an astonishing pace.
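The response-time claim can be sanity-checked with a back-of-envelope model: local autoregressive decoding is typically memory-bandwidth-bound, so tokens per second is roughly memory bandwidth divided by the bytes read per generated token (about one full pass over the weights), while a cloud call pays network round-trip latency before the first token arrives. All numbers below are illustrative assumptions, not measurements:

```python
def local_tokens_per_sec(n_params: float, bytes_per_param: float,
                         mem_bandwidth_gbps: float) -> float:
    """Rough decode speed: each generated token reads every weight once,
    so throughput is bounded by memory bandwidth / model size in bytes."""
    model_bytes = n_params * bytes_per_param
    return mem_bandwidth_gbps * 1e9 / model_bytes

# Illustrative assumptions: a 2B-parameter SLM quantized to 4 bits
# (0.5 bytes/param) on a device with ~50 GB/s of memory bandwidth.
tps = local_tokens_per_sec(n_params=2e9, bytes_per_param=0.5,
                           mem_bandwidth_gbps=50)

# A cloud call adds network round-trip latency before the first token;
# a local model does not. 100 ms RTT is an illustrative figure.
cloud_rtt_ms = 100
print(f"local decode: ~{tps:.0f} tokens/s; "
      f"cloud adds ~{cloud_rtt_ms} ms before the first token")
```

Under these assumptions the local SLM streams tens of tokens per second with zero network latency, which is the intuition behind the edge-computing argument above.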
Original title: Why small language models are the next big thing in AI, by James Thomason