


Large AI models are so expensive that only big companies and the super-rich can afford to play
The runaway success of ChatGPT has set off another wave of AI enthusiasm, but the industry broadly agrees that as AI enters the era of large models, only large companies and the super-rich can afford to compete, because large AI models are very expensive to build.
First, compute is expensive. Avi Goldfarb, a marketing professor at the University of Toronto, said: "If you want to start a company, build your own large language model, and run the computation yourself, the cost is too high. OpenAI is very expensive; it costs billions of dollars." Renting compute is much cheaper, of course, but enterprises still have to pay hefty fees to providers such as AWS.
Second, data is expensive. Training models requires massive amounts of data; sometimes that data is readily available, sometimes it is not. Datasets such as Common Crawl and LAION are free to use, so for this kind of data the cost mainly comes from cleaning and processing, which can range from a few hundred dollars to millions of dollars.
Debarghya Das, a founding engineer at Glean, said that based on rough back-of-the-envelope calculations from large language model papers, training a model like Facebook's LLaMA (not counting iterations or failed runs) costs about US$4 million, and a model like Google's PaLM about US$27 million.
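Estimates like these usually follow a standard rule of thumb: total training compute is roughly 6 × parameters × tokens FLOPs, which is then divided by the sustained throughput of a GPU and multiplied by an hourly rental price. The sketch below is purely illustrative; the assumed throughput and hourly rate are not figures from the article, but with LLaMA-scale inputs it lands in the same low-single-digit-millions ballpark.

```python
# Illustrative back-of-the-envelope estimate of LLM training cost.
# The sustained throughput and hourly rental price are assumptions.

def training_cost_usd(params: float, tokens: float,
                      gpu_tflops_effective: float = 150.0,  # assumed sustained TFLOP/s per GPU
                      usd_per_gpu_hour: float = 2.0) -> float:  # assumed cloud rental rate
    """Estimate training cost using the common ~6 * N * D FLOPs rule of thumb."""
    total_flops = 6.0 * params * tokens
    gpu_seconds = total_flops / (gpu_tflops_effective * 1e12)
    gpu_hours = gpu_seconds / 3600.0
    return gpu_hours * usd_per_gpu_hour

# Example: a 65B-parameter model trained on 1.4T tokens (LLaMA-scale numbers).
print(f"${training_cost_usd(65e9, 1.4e12):,.0f}")
```

Changing any single assumption (GPU generation, utilization, spot vs. on-demand pricing) easily moves the result by a factor of two or more, which is why published estimates vary so widely.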
Even with free data, the cost is not low. "When you download terabytes of data and want to filter it or use it in some special way, for example with a text-image model, researchers focus on certain subsets of the data (only then does the model get better), and the whole process is quite tricky," said Sasha Luccioni, a researcher at Hugging Face. It requires powerful computing resources and a large team of professionals.
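As a rough illustration of the kind of subset filtering described above, the sketch below streams a public corpus and keeps only records that pass a simple quality heuristic. It is a minimal sketch under assumptions: the dataset name and the filter rule are chosen for demonstration and are not the pipeline discussed in the article.

```python
# Minimal sketch: filter a large corpus without downloading it all at once.
# The dataset ("allenai/c4") and the quality heuristic are illustrative assumptions.
from datasets import load_dataset

# Stream the corpus so terabytes of text are never fully materialized on disk.
stream = load_dataset("allenai/c4", "en", split="train", streaming=True)

def looks_useful(example):
    """Toy quality heuristic: keep reasonably long, mostly-alphabetic documents."""
    text = example["text"]
    return len(text) > 500 and sum(c.isalpha() for c in text) / len(text) > 0.8

filtered = stream.filter(looks_useful)

# Inspect a handful of surviving records.
for i, example in enumerate(filtered):
    print(example["text"][:80].replace("\n", " "))
    if i >= 4:
        break
```

Even a toy filter like this has to touch every record, which is why real curation pipelines run on clusters and still take significant engineering time.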
Third, the cost of hiring professionals is also very high. Debarghya Das did not include labor costs in the estimate above. Sasha Luccioni pointed out: "Machine learning professionals are paid very well because companies have to compete with Google and other technology giants for talent, and a single expert can sometimes cost millions of dollars." In 2016, top researchers at OpenAI earned about US$1.9 million.
Moreover, the costs of training models and hiring professionals are not one-off but ongoing. If you are building a customer service chatbot, for example, you need to re-optimize it every week or every few weeks, and the model has to be stress-tested to ensure the answers it generates are correct. As Sasha Luccioni explains: "The most expensive cost comes from the ongoing work: having to continuously test the model, having to make sure the AI is doing what it is supposed to do."
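A minimal sketch of what such continuous testing might look like in practice is shown below. It assumes a hypothetical `ask_bot` function wrapping the deployed chatbot; the test cases and the keyword-based grading rule are illustrative assumptions, not anything described in the article.

```python
# Minimal sketch of a recurring regression check for a customer-service chatbot.
# `ask_bot` is a hypothetical wrapper around the deployed model; the cases and
# the keyword-based grading rule are illustrative assumptions.

REGRESSION_SUITE = [
    {"question": "How do I reset my password?", "must_mention": ["reset", "password"]},
    {"question": "What is your refund policy?", "must_mention": ["refund"]},
]

def ask_bot(question: str) -> str:
    """Placeholder for a call to the deployed chatbot API."""
    raise NotImplementedError("wire this up to the real service")

def run_regression_suite() -> None:
    failures = []
    for case in REGRESSION_SUITE:
        answer = ask_bot(case["question"]).lower()
        if not all(keyword in answer for keyword in case["must_mention"]):
            failures.append(case["question"])
    if failures:
        raise SystemExit(f"{len(failures)} regression(s): {failures}")
    print("All regression checks passed.")

if __name__ == "__main__":
    run_regression_suite()
```

Suites like this grow with every model update and customer complaint, which is one reason the ongoing testing work ends up being the dominant cost.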
Finally, day-to-day operating expenses are not cheap either. Once everything is ready and the model is opened to the public, it will receive thousands of queries every day, and it has to remain scalable and highly stable. Maintenance is likewise expensive and requires professionals to handle it.