Since ChatGPT became popular, AI applications built around it have emerged one after another, giving people a real sense of the power of artificial intelligence.
Recently, Facebook's parent company Meta released a large language model called LLaMA (Large Language Model Meta AI).
Like other large language models, LLaMA works by taking a sequence of words as input and predicting the next word, recursively generating text.
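The autoregressive loop described above can be sketched in a few lines. This is a toy illustration only: the "model" here is a hypothetical bigram lookup table standing in for a real neural network's forward pass, and all names (`TOY_BIGRAMS`, `predict_next`, `generate`) are invented for this example.

```python
# Toy sketch of autoregressive generation: feed in a token sequence,
# predict the next token, append it, and repeat.
# A real LLM would replace this lookup table with a neural network.
TOY_BIGRAMS = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def predict_next(tokens):
    """Stand-in for a model forward pass: map the last token to a next token."""
    return TOY_BIGRAMS.get(tokens[-1], "<eos>")

def generate(prompt_tokens, max_new_tokens=5):
    """Recursively extend the sequence one token at a time."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        nxt = predict_next(tokens)
        if nxt == "<eos>":  # stop when the model predicts end-of-sequence
            break
        tokens.append(nxt)
    return tokens
```

Calling `generate(["the"], max_new_tokens=4)` extends the prompt one token at a time, which is exactly the pattern a real model applies at a much larger scale.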
According to reports, Meta is developing LLaMA in multiple parameter sizes (7B, 13B, 33B, and 65B). LLaMA 65B and LLaMA 33B were trained on 1.4 trillion tokens, while even the smallest model, LLaMA 7B, was trained on 1 trillion tokens.
In addition, LLaMA was trained on text in 20 languages, including languages written in Latin and Cyrillic scripts, and requires far less computing power than previously launched large models.
The FAIR team stated that LLaMA has not yet been used in any Meta products.
Unlike DeepMind and OpenAI, Meta will make LLaMA's training code public. Meta also plans to prioritize access for AI researchers: to use LLaMA, you can submit an application and gain access after approval.