Table of Contents

  • Powerful tool usage capabilities
  • Multi-language generation capability
  • Longer context and lower price

With 35 billion parameters and open weights, the author of Transformer launched a new large model after starting his own business.

Mar 13, 2024 am 08:58 AM

Today, Cohere, the artificial intelligence startup co-founded by Aidan Gomez, one of the authors of the Transformer paper, released its own large model.

Cohere's newly released model, named "Command-R", has 35B parameters and is designed to handle large-scale production workloads. It falls into the "scalable" category, balancing high efficiency with high accuracy to help enterprise users move beyond proof of concept and into production.


Command-R is a generative model optimized for retrieval-augmented generation (RAG) and other long-context tasks. By calling external APIs and tools, it aims to improve the performance of RAG applications. It works alongside industry-leading embedding and reranking models to deliver strong performance and best-in-class integration for enterprise use cases.

Command-R is an autoregressive language model built on an optimized Transformer architecture. After pre-training, the model is aligned with human preferences through supervised fine-tuning (SFT) and preference training for better helpfulness and safety.

Specifically, Command-R has the following functional characteristics:

  • High accuracy in RAG and tool use
  • Low latency and high throughput
  • A longer 128k-token context window at a lower price
  • Strong capability across 10 major languages
  • Model weights available on Hugging Face for research and evaluation

Command-R is currently available on Cohere's managed API, with plans to launch on major cloud providers soon. This release is the first in a series of models designed to advance capabilities critical to large-scale enterprise adoption.

Cohere has also opened the model weights on Hugging Face.


Hugging Face address: https://huggingface.co/CohereForAI/c4ai-command-r-v01

High-performance retrieval-augmented generation (RAG)

Retrieval-augmented generation (RAG) has become a key pattern for deploying large language models. With RAG, companies can give models access to private knowledge that would otherwise be unavailable: searching private databases and using the relevant information to form responses, which significantly increases accuracy and usefulness. The key components of RAG are:

  • Retrieval: search a corpus for information relevant to the user's query.
  • Augmented generation: use the retrieved information to form better-informed responses.
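The two steps above can be sketched as a minimal toy pipeline. This is illustrative only: the bag-of-words "embedding" and the small corpus are stand-ins, not Cohere's actual Embed API or model behavior.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; a real system would call an
    # embedding model such as Cohere's Embed endpoint instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Step 1: retrieval -- rank documents by similarity to the query.
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, corpus):
    # Step 2: augmented generation -- ground the model in retrieved text
    # and ask for numbered citations, as Command-R's output provides.
    docs = retrieve(query, corpus)
    context = "\n".join(f"[{i}] {d}" for i, d in enumerate(docs, 1))
    return f"Answer using the documents below, citing [n].\n{context}\n\nQuestion: {query}"

corpus = [
    "Command-R has a 128k-token context window.",
    "The office cafeteria opens at 9 am.",
    "Command-R is optimized for RAG workloads.",
]
print(build_prompt("What is Command-R optimized for?", corpus))
```

In production the prompt would be sent to the generation model; here the sketch stops at prompt assembly to keep the two RAG stages visible.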

For retrieval, Cohere's Embed model improves contextual and semantic understanding when searching millions or even billions of documents, significantly increasing the usefulness and accuracy of the retrieval step. Meanwhile, Cohere's Rerank model helps further increase the value of the retrieved information, optimizing results for custom metrics such as relevance and personalization.
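The second-stage reranking idea can be sketched as follows. The scoring functions here are toy stand-ins (a real pipeline would call a cross-encoder reranker such as Cohere's Rerank); the point is only the shape of the stage: re-score a small candidate list with a richer scorer, optionally blending in a custom signal like personalization.

```python
import string

def tokens(text):
    # Lowercase, split, and strip surrounding punctuation.
    return {w.strip(string.punctuation) for w in text.lower().split()}

def overlap(query, doc):
    # Toy relevance score: fraction of query words present in the document.
    q, d = tokens(query), tokens(doc)
    return len(q & d) / len(q) if q else 0.0

def rerank(query, candidates, relevance_fn, personalization_fn=None, weight=0.3):
    # Re-score first-pass candidates; optionally blend a custom metric
    # (e.g. personalization) into the final ordering.
    def score(doc):
        s = relevance_fn(query, doc)
        if personalization_fn:
            s = (1 - weight) * s + weight * personalization_fn(doc)
        return s
    return sorted(candidates, key=score, reverse=True)

candidates = [
    "Pricing for the legacy Command model.",
    "Command-R pricing: $0.50 per million input tokens.",
]
top = rerank("Command-R pricing", candidates, overlap)
print(top[0])
```

Because reranking only sees a handful of candidates, it can afford a much more expensive scorer than the first-pass retriever.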

For augmented generation, once the most relevant information is identified, Command-R can summarize, analyze, and package it, helping employees work more efficiently or enabling new product experiences. Command-R is distinctive in that its output comes with clear citations, which reduce the risk of hallucination and surface more context from the source material.

Even without its own Embed and Rerank models, Command-R outperforms other models in the scalable generative model category. When they are used together, the lead widens significantly, enabling higher performance in more complex domains.

The left side of the figure below shows a head-to-head overall human preference evaluation of Command-R against Mixtral on a range of enterprise-relevant RAG applications, taking into account fluency, answer usefulness, and citations. The right side compares Command-R (with Embed + Rerank) and Command-R alone against Llama 2 70B (chat), Mixtral, GPT-3.5 Turbo, and other models on benchmarks such as Natural Questions, TriviaQA, and HotpotQA, where Cohere's models take the lead.


Powerful tool usage capabilities

A large language model should be a core reasoning engine that can automatically perform tasks and take real actions, not just a machine for extracting and generating text. Command-R achieves this by using tools (APIs), such as code interpreters and other user-defined tools, that let the model automate highly complex tasks.

The tool-use feature lets enterprise developers turn Command-R into an engine that drives task and workflow automation across internal infrastructure, such as databases and software tools, as well as external infrastructure, such as CRMs and search engines. This makes it possible to automate time-consuming manual tasks that span multiple systems and require complex reasoning and decision-making.
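The tool-use pattern described above can be sketched as a minimal loop: the model emits a structured tool call, the application executes it, and the result is fed back for the final answer. Everything here is a hypothetical stand-in, including the `fake_model` stub and the `crm_lookup`/`db_query` tools; this is not Cohere's actual API.

```python
import json

# Registry of user-defined tools the model may call (illustrative stand-ins
# for real infrastructure such as a CRM lookup or a database query).
TOOLS = {
    "crm_lookup": lambda customer: {"customer": customer, "tier": "enterprise"},
    "db_query":   lambda sql: [{"rows": 42}],
}

def fake_model(message):
    # Stand-in for the LLM: a real tool-use model (e.g. Command-R) would
    # itself choose a tool and arguments and return them in structured form.
    if "customer" in message.lower():
        return {"tool": "crm_lookup", "args": {"customer": "Acme"}}
    return {"tool": "db_query", "args": {"sql": "SELECT COUNT(*) FROM orders"}}

def run_with_tools(message):
    call = fake_model(message)                    # 1. model picks a tool + args
    result = TOOLS[call["tool"]](**call["args"])  # 2. application executes it
    # 3. in a real loop, the result goes back to the model to compose an answer
    return {"call": call, "result": result}

out = run_with_tools("What tier is customer Acme on?")
print(json.dumps(out))
```

Multi-step reasoning, as measured in the benchmark below, simply repeats this loop: each tool result can trigger another tool call before the final answer is composed.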

The figure below compares the multi-step reasoning capabilities of Command-R with Llama 2 70B (chat), Mixtral, and GPT-3.5 Turbo when using search tools, on the HotpotQA and Bamboogle datasets.


Multi-language generation capability

The Command-R model performs well in 10 major business languages: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, and Chinese.

Additionally, Cohere's Embed and Rerank models natively support more than 100 languages. This lets users draw answers from a wide range of data sources and receive clear, accurate responses in their own language.

The figure below compares Command-R with Llama 2 70B (chat), Mixtral, and GPT-3.5 Turbo on multilingual MMLU and FLORES.


Longer context and lower price

Command-R supports a longer context window of 128k tokens. The upgrade also lowers the price of Cohere's managed APIs and significantly improves the efficiency of Cohere's private cloud deployments. By combining a longer context window with cheaper pricing, Command-R unlocks RAG use cases where additional context can significantly improve performance.


Pricing is as follows: the Command model costs $1 per 1 million input tokens and $2 per 1 million output tokens, while Command-R costs $0.50 per 1 million input tokens and $1.50 per 1 million output tokens.
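At these rates, request cost scales linearly with token counts. A quick calculation using only the prices quoted above (the 100k/2k token counts are an assumed example workload, not figures from the announcement):

```python
PRICES_PER_M = {  # USD per 1 million tokens, from the pricing above
    "command":   {"input": 1.00, "output": 2.00},
    "command-r": {"input": 0.50, "output": 1.50},
}

def cost_usd(model, input_tokens, output_tokens):
    # Linear cost: tokens * per-million rate, for each direction.
    p = PRICES_PER_M[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A hypothetical long-context RAG call: 100k input tokens, 2k output tokens.
print(round(cost_usd("command-r", 100_000, 2_000), 4))  # 0.053
print(round(cost_usd("command", 100_000, 2_000), 4))    # 0.104
```

For input-heavy long-context workloads like RAG, the halved input price is what dominates the savings.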


Cohere also plans to release a short technical report soon with more model details.


Blog address: https://txt.cohere.com/command-r/

