When explaining SoMin's ad copy and banner generation capabilities, people often ask whether ChatGPT has replaced GPT-3, or whether SoMin is still running an outdated model. "We have not, and do not plan to," a SoMin spokesperson responded, even though OpenAI's chatbot ChatGPT is booming. This answer often surprises customers, so here is an explanation of why the spokesperson would say that.
GPT-2, GPT-3, ChatGPT and the recently launched GPT-4 all belong to the same category of artificial intelligence models: Transformers. Unlike previous generations of machine learning models, they are pre-trained on one general task, so they do not need to be retrained for each specific task to produce usable results. That generality explains their massive size (175 billion parameters in the case of GPT-3): a model needs to "remember the entire internet" to be flexible enough to switch between different pieces of data based on user input. The model can then generate results when the user enters a query containing a description of the task and a few examples, much like asking a librarian for books of interest. This approach is called "few-shot learning" and has recently become the standard way of providing input to modern Transformer models.
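To make the few-shot idea concrete, here is a minimal sketch of such a prompt sent through OpenAI's completion endpoint using the pre-1.0 `openai` Python library, which matches the GPT-3 era described here. The ad examples and the model name are illustrative assumptions, not SoMin's actual setup.

```python
import openai  # pip install openai (pre-1.0 API style)

openai.api_key = "YOUR_API_KEY"  # assumption: in practice this comes from env/config

# Few-shot prompt: a task description followed by a few examples,
# ending with the case the model should complete.
prompt = """Write a short ad headline for the product.

Product: running shoes
Headline: Run farther, feel lighter.

Product: cold brew coffee
Headline: Slow-steeped. Fast mornings.

Product: noise-cancelling headphones
Headline:"""

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 family completion model
    prompt=prompt,
    max_tokens=20,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```

The two worked examples are what makes this "few-shot": the model infers the task format from them without any retraining.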
But is it always necessary to know everything about the internet in order to complete the task at hand? Of course not. In many cases, as with ChatGPT, a large number (millions) of task-specific data samples is enough to let the model go through Reinforcement Learning from Human Feedback (RLHF), a collaborative training process between AI and humans that further trains the model to produce human-like conversations. As a result, ChatGPT not only excels as a chatbot; it also helps people write short-form content (such as poems or lyrics) and long-form content (such as essays); explains complex topics in simple terms or in depth when people need quick answers; supports brainstorming of new topics and ideas in the creative process; and assists sales departments with personalized communication, such as generating email responses.
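RLHF works by training a reward model on human preference rankings and then optimizing the language model against that reward (typically with PPO). The toy sketch below illustrates only the first half of that idea: fitting a tiny reward model from pairwise human preferences and using it to rank candidate replies. The features, data, and scorer are all made up for illustration; a real reward model is itself a large Transformer.

```python
import numpy as np

# Toy "reward model": a linear scorer over hand-crafted reply features.
def features(reply: str) -> np.ndarray:
    words = reply.split()
    return np.array([
        len(words),                       # length of the reply
        reply.count("?"),                 # engagement proxy
        sum(w.islower() for w in words),  # conversational-tone proxy
    ], dtype=float)

# Human preference data: (preferred reply, rejected reply) pairs.
pairs = [
    ("happy to help! what are you building?", "no."),
    ("sure, here is a step-by-step answer", "figure it out yourself"),
]

w = np.zeros(3)
lr = 0.1
for _ in range(200):  # gradient ascent on the pairwise logistic likelihood
    for good, bad in pairs:
        d = features(good) - features(bad)
        p = 1.0 / (1.0 + np.exp(-w @ d))  # P(human prefers "good")
        w += lr * (1.0 - p) * d

# Use the learned reward to rank fresh candidates; in real RLHF this signal
# would drive a PPO update of the language model rather than a simple ranking.
candidates = ["go away", "glad you asked! here is how it works"]
print(max(candidates, key=lambda r: w @ features(r)))
```

The key point the sketch preserves is that the training signal comes from human rankings of model outputs, not from more internet text.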
While it is technically possible for a large Transformer model to attempt all of these tasks, it is unlikely that ChatGPT, or even GPT-4, can do them all well. One reason is that ChatGPT and OpenAI's other Transformers have very limited knowledge of events occurring in the world: they are pre-trained models, and their data is not refreshed often because retraining a model of this size is computationally very expensive. This is probably the biggest shortcoming of all pre-trained models produced by OpenAI (and indeed by anyone else) to date. A second problem is specific to ChatGPT: unlike GPT-3, it was trained on a very focused conversational dataset, so it outperforms its predecessors only on conversational tasks, while being less advanced at other productivity tasks.
People now know that ChatGPT is just a smaller, more task-specific version of GPT-3. Does this mean more such models will emerge in the near future: MarGPT for marketing, AdGPT for digital advertising, MedGPT for answering medical questions?
This is possible, and here is why. When SoMin applied for access to the GPT-3 beta, despite filling out a lengthy application form describing in detail the software it planned to build, it was also asked to agree to provide feedback on how the model was used day to day and on the results obtained. OpenAI did this for a reason: the beta was largely a research project, and the company needed commercial insights into the best applications of the model, so it crowdsourced those insights in exchange for the chance to participate in this great artificial intelligence revolution. Chatbot applications turned out to be among the most popular, so ChatGPT came first. ChatGPT is not only smaller than GPT-3 (20 billion parameters versus 175 billion) but also faster, and more accurate at solving conversational tasks: a perfect business case for a low-cost, high-quality AI product.
So, for generative artificial intelligence, is bigger better? The answer is: it depends. When building a general learning model capable of completing many tasks, yes, bigger is better, as evidenced by GPT-3's advantage over GPT-2 and its other predecessors. But when the goal is to perform one specific task well, as with the chatbot behind ChatGPT, data focus and a proper training process matter much more than model and data size. That is why at SoMin, instead of using ChatGPT to generate copy and banners, specific digital-advertising data is used to guide GPT-3 toward better content for new ads it has not yet seen.
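For readers curious what "guiding GPT-3 with domain data" can look like in practice, here is a minimal sketch using OpenAI's pre-1.0 fine-tuning endpoints. The file name, its example contents, and the choice of base model are illustrative assumptions; SoMin's actual pipeline is not public.

```python
import openai  # pre-1.0 openai library

openai.api_key = "YOUR_API_KEY"

# ads.jsonl holds domain examples, one JSON object per line, e.g.:
# {"prompt": "Product: eco water bottle\nHeadline:",
#  "completion": " Hydration that loves the planet."}
upload = openai.File.create(file=open("ads.jsonl", "rb"), purpose="fine-tune")

# Start a fine-tune of a GPT-3 base model on the ad data.
job = openai.FineTune.create(training_file=upload.id, model="davinci")
print(job.id)  # poll this job; the resulting model is then usable via Completion.create
```

This is the "data focus over size" trade-off in code form: a modest, well-curated domain dataset steers a general model toward one task instead of retraining something enormous from scratch.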
So, one might ask, how will generative AI develop from here? Multimodality will be one of the inevitable advances people see in the upcoming GPT-4, as OpenAI CEO Sam Altman mentioned in a talk. In the same talk, Altman also debunked the rumor that the model has 100 trillion parameters. So, once again: bigger does not always mean better for this kind of artificial intelligence model.