Human-computer interactive dialogue driven by large models

PHPz
Release: 2023-04-11 19:27:47

Introduction: Dialogue technology is one of the core capabilities of digital human interaction. Starting from Baidu's PLATO-related R&D and applications, this talk discusses the impact of large models on dialogue systems and the opportunities they open up for digital humans. The title of this talk is: Human-computer interactive dialogue driven by large models.

Today's talk covers the following topics:

  • Overview of dialogue systems
  • Baidu PLATO and related technologies
  • Applications, challenges and prospects of large dialogue models

1. Overview of the dialogue system

1. Overview of the dialogue system

In daily life, we often encounter task-oriented dialogue systems, such as asking a mobile assistant to set an alarm or asking a smart speaker to play a song. The technology for this kind of vertical-domain dialogue is relatively mature, and the system design is usually modular, including modules for dialogue understanding, dialogue management, and natural language generation.

The general flow of a traditional task-oriented dialogue is as follows: the user inputs an utterance; the natural language understanding module parses out the intent and the slot-value pairs (the slots are predefined); the dialogue management module tracks the dialogue state across multiple turns and interacts with external databases to decide the system action; finally, the dialogue generation module produces the reply that is returned to the user.
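The modular flow above can be sketched in a few lines of Python. All rules, slot names and templates here are toy stand-ins invented for illustration, not Baidu's implementation:

```python
# Minimal sketch of a modular task-oriented dialogue pipeline:
# NLU -> dialogue management -> NLG (toy rules, illustrative only).

def nlu(utterance):
    """Parse intent and slot-value pairs with toy keyword rules."""
    if "alarm" in utterance:
        slots = {"time": "7am"} if "7am" in utterance else {}
        return {"intent": "set_alarm", "slots": slots}
    return {"intent": "unknown", "slots": {}}

class DialogueManager:
    """Track dialogue state across turns and decide the system action."""
    def __init__(self):
        self.state = {}

    def update(self, parse):
        self.state.update(parse["slots"])
        if parse["intent"] == "set_alarm" and "time" in self.state:
            return ("confirm_alarm", self.state["time"])
        if parse["intent"] == "set_alarm":
            return ("request", "time")
        return ("fallback", None)

def nlg(action):
    """Turn the chosen system action into a natural-language reply."""
    name, arg = action
    templates = {
        "confirm_alarm": f"OK, alarm set for {arg}.",
        "request": f"What {arg} should the alarm be set for?",
        "fallback": "Sorry, I didn't catch that.",
    }
    return templates[name]

dm = DialogueManager()
reply = nlg(dm.update(nlu("set an alarm for 7am")))
```

Each module can be developed and replaced independently, which is exactly what the end-to-end systems discussed next give up in exchange for flexibility.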

In recent years, much research has focused on open-domain dialogue technology, which allows chatting on any topic without restricting the domain. Representative works include Google's Meena, Meta's Blender, and Baidu's PLATO. Compared with traditional modular dialogue systems, these end-to-end dialogue systems directly generate a reply given the dialogue context.

2. End-to-end dialogue generation - new opportunities for dialogue systems

An end-to-end dialogue system can be built on RNNs, LSTMs, or Transformers. The network architecture mainly consists of two parts: an encoder and a decoder.

The encoder encodes the dialogue text into a vector, capturing the content of the conversation.

The decoder generates the reply conditioned on the dialogue vector and the previously generated hidden states. The training corpus is mainly human-human dialogue data; comments can be extracted from public social media forums (Weibo, Tieba, Twitter, etc.) as approximate dialogue data. The training objective is mainly to minimize the negative log-likelihood.
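As a concrete illustration of this training objective, the negative log-likelihood of a reply can be computed from the model's per-token probabilities (the probabilities below are invented for the example):

```python
import math

def negative_log_likelihood(token_probs):
    """NLL of a reply: sum of -log p(token | context, previous tokens).
    token_probs are the model's probabilities for each gold token."""
    return -sum(math.log(p) for p in token_probs)

# e.g. a 3-token reply whose gold tokens received probabilities 0.5, 0.25, 0.8
nll = negative_log_likelihood([0.5, 0.25, 0.8])
```

Minimizing this quantity over a large dialogue corpus pushes the decoder to assign high probability to human replies.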

3. Challenges facing open domain dialogue

Large models trained on massive corpora can already produce fairly coherent replies, but many problems remain.

The first problem is that replies tend to be empty and uninformative. The model's replies are brief and lack substantive content, which easily reduces the user's willingness to keep chatting.

Another problem is knowledge abuse: some of the detailed information returned by the model is wrong or fabricated.

2. Baidu PLATO

Baidu PLATO has done some technical exploration on the above two types of problems.

For the problem of empty content, a pre-trained dialogue generation technique based on discrete latent variables is proposed, achieving reasonable yet diverse open-domain replies. For the problem of knowledge abuse, a weakly supervised knowledge-grounded dialogue generation model is proposed, which alleviates knowledge abuse to a certain extent and improves dialogue richness and knowledge accuracy.

1. The "one-to-many" problem in open-domain dialogue

Why does the dialogue model produce empty, "safe" replies?

Essentially, open-domain dialogue is a one-to-many problem: for one dialogue context there are usually many reasonable replies, and people with different backgrounds and experiences, in different scenarios, may respond differently. Neural network training usually learns a one-to-one mapping, so the model ends up learning the average of these replies, for example "very good" and "hahaha", which are safe but uninformative.

2. PLATO-1 latent space dialogue generation model

PLATO-1 proposes to model the one-to-many relationship in dialogue with discrete latent variables.

This involves two tasks: mapping the dialogue context and reply to a discrete latent variable (the latent action), and learning to generate the reply conditioned on that latent variable. PLATO uses one network to jointly model both tasks: it first estimates the distribution over latent variables, samples a latent variable via Gumbel-Softmax, and then learns to generate the reply. In this way, diverse replies can be generated by sampling different latent variables.
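The Gumbel-Softmax sampling step can be sketched in pure Python. This is an illustrative sketch over made-up logits, not PLATO's actual implementation, which operates on network outputs inside the training graph:

```python
import math, random

def gumbel_softmax(logits, temperature=1.0, rng=random):
    """Sample a soft one-hot vector over K discrete latent values.
    Adding Gumbel noise to logits and taking a softmax approximates
    sampling from the categorical distribution while staying
    differentiable with respect to the logits."""
    gumbels = [-math.log(-math.log(rng.random())) for _ in logits]
    scores = [(l + g) / temperature for l, g in zip(logits, gumbels)]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

random.seed(0)
sample = gumbel_softmax([1.0, 0.5, -0.2], temperature=0.5)
```

Lower temperatures push the sample toward a near one-hot vector, i.e. closer to a hard choice of one latent action.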

The case study shows that selecting different latent variables produces different replies. These replies all fit the context and are of good quality: appropriate and informative.

3. PLATO-2: a general dialogue model based on curriculum learning

PLATO-2 expands on PLATO-1. In parameters, it reaches a scale of 1.6 billion; in pre-training corpus, there are 1.2 billion Chinese dialogue samples and 700 million English samples; in training method, it is based on curriculum learning. What is curriculum learning? Simply put, learn the easy things first, then the complex ones.

In addition, PLATO-2 continues to use the unified network design PrefixLM, learning dialogue understanding and reply generation at the same time. Training based on curriculum learning is efficient, and training on a unified network is cost-effective.

In the first stage, PLATO-2 trains simplified general reply generation; in the second stage, it trains diversified reply generation, with latent variables added at this stage. The second stage also introduces dialogue coherence estimation training. Compared with the common approach of ranking by generation probability, coherence estimation effectively improves the quality of reply selection.
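The effect of coherence-based reply selection can be sketched as follows. The candidate replies and coherence scores here are stand-ins for the output of a trained coherence-estimation head, invented for illustration:

```python
def rerank(candidates, coherence_scores):
    """Pick the candidate reply with the highest coherence score.
    In PLATO-2 the score comes from a learned coherence-estimation
    head; here the scores are supplied directly for illustration."""
    best = max(zip(candidates, coherence_scores), key=lambda x: x[1])
    return best[0]

# a generic reply may have high generation probability but low coherence
candidates = ["Haha.", "I love hiking too, which trail do you like?"]
reply = rerank(candidates, [0.31, 0.87])
```

Ranking by coherence rather than raw generation probability penalizes exactly the generic "safe replies" discussed earlier.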

Can PLATO-2 serve as a universal dialogue framework? The dialogue field is roughly divided into three categories: task-oriented dialogue, knowledge-grounded dialogue, and open-domain chitchat. Pre-training a separate model for each type of dialogue system is too expensive, and PLATO-2's curriculum learning mechanism can help it become a universal framework. Task-oriented dialogue is relatively focused, and the one-to-one mapping model from the first stage of curriculum learning fits that situation. Knowledge-grounded dialogue and chitchat are both one-to-many: in knowledge-grounded dialogue, different pieces of knowledge can be used to reply to the user, and in chitchat a reply can take different directions, so the second-stage model can be applied to knowledge-grounded dialogue and chitchat systems.

4. PLATO-2 in DSTC-9

To verify this capability, PLATO-2 participated in DSTC-9, an international competition in the dialogue field that comprehensively covers various dialogue tasks. With a single unified technical framework, PLATO-2 won 5 championships out of 6 tasks, a first in the history of DSTC.

5. PLATO-XL: the first tens-of-billions-parameter Chinese and English dialogue generation model

What happens if we continue to increase the parameter scale of the PLATO model? In September 2021, we released PLATO-XL, the world's first Chinese and English dialogue generation model at the tens-of-billions-parameter scale.

In both Chinese and English, PLATO-XL was compared with several common commercial products in terms of coherence, informativeness and engagingness, and its results are far ahead.

The WeChat official account "Baidu PLATO" is connected to the PLATO-XL model, and everyone is welcome to try it out.

The PLATO models range from a hundred million to a billion to tens of billions of parameters. In practice, conversations become noticeably smoother at the billion scale, and the model's logical ability improves significantly at the tens-of-billions scale.

6. The knowledge abuse problem

All large models suffer from knowledge abuse. How can it be solved? How do we humans deal with questions we don't understand? We might look them up in a search engine. Can this method of consulting external knowledge be used in the model?

Integrating external knowledge to assist reply generation is a promising direction for alleviating knowledge abuse. However, large-scale dialogue corpora contain only the dialogue context and the reply; there is no way to know which external knowledge a given sample corresponds to. That is, the label information for knowledge selection is missing.

7. PostKS: knowledge selection guided by the posterior

PostKS is one of the representative works in knowledge-grounded dialogue. It proposes knowledge selection guided by the posterior: during training, the prior knowledge distribution is pushed to approximate the posterior knowledge distribution.

In the inference stage, since no posterior information is available, the model must select knowledge using the prior and then generate the reply. This creates an inconsistency between training and inference: training is guided by the posterior, but inference can rely only on the prior.
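The training-time term that pulls the prior toward the posterior can be illustrated with a small KL-divergence computation. The two distributions below are toy numbers invented for the example, not PostKS's actual values:

```python
import math

def kl_divergence(posterior, prior):
    """KL(posterior || prior) between two knowledge-selection
    distributions over the same candidate knowledge pieces.
    In PostKS-style training, minimizing this term teaches the
    prior (context-only) to imitate the posterior (which also
    sees the gold reply)."""
    return sum(q * math.log(q / p) for q, p in zip(posterior, prior) if q > 0)

posterior = [0.7, 0.2, 0.1]    # sees the reply, so it is sharp
prior     = [0.4, 0.35, 0.25]  # context-only, so it is flatter
loss = kl_divergence(posterior, prior)
```

As the prior moves toward the posterior during training, this loss shrinks toward zero, narrowing the train/inference gap described above.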

8. PLATO-KAG: unsupervised knowledge-grounded dialogue with joint optimization

PLATO-KAG is an unsupervised model that jointly models knowledge selection and reply generation. The top-k pieces of knowledge are selected according to the prior and fed to the generation model for end-to-end joint training. If the knowledge is selected accurately, it helps generate the target reply and the generation probability is relatively high, so joint optimization encourages this selection and makes use of the given knowledge. If the knowledge is poorly selected, it does not help generate the target reply and the generation probability is relatively low, so joint optimization suppresses this selection and ignores the given knowledge. In this way, both knowledge selection and reply generation are optimized.
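The joint objective can be sketched as a marginal likelihood over the top-k selected knowledge pieces. All probabilities below are toy numbers for illustration:

```python
def marginal_reply_probability(selection_probs, generation_probs):
    """p(reply | context) = sum over top-k knowledge pieces of
    p(knowledge | context) * p(reply | context, knowledge).
    Maximizing this marginal raises the selection probability of
    knowledge that actually helps generate the target reply."""
    return sum(ps * pg for ps, pg in zip(selection_probs, generation_probs))

# knowledge piece 0 truly supports the target reply, piece 1 does not
selection_probs  = [0.6, 0.4]    # prior knowledge selection over top-k
generation_probs = [0.30, 0.02]  # reply likelihood given each piece
p_reply = marginal_reply_probability(selection_probs, generation_probs)
```

The gradient of this marginal flows into both factors at once, which is why selection and generation can be trained jointly without knowledge-selection labels.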

9. PLATO with comprehensive knowledge enhancement

Judging from the human experience of learning, we also memorize a great deal of knowledge in our heads. PLATO has tried comprehensive knowledge enhancement, applying external knowledge and internalizing knowledge at the same time. On the one hand, it uses external general unstructured knowledge and persona knowledge; on the other hand, it internalizes a large amount of question-answer knowledge into the model parameters through pre-training. After this comprehensive knowledge enhancement, the error rate of general dialogue knowledge dropped from 30% to 17%, persona consistency rose from 7.1% to 80%, and question-answering accuracy rose from 3.2% to 90%. The improvement is very significant.

The figure below compares the results after comprehensive knowledge enhancement.

It is worth noting that although the results improved significantly, the knowledge abuse problem has not been completely solved, only alleviated. Even if the model scale is expanded to hundreds of billions of parameters, knowledge abuse still exists.

Several points are still worth continued effort. The first is the triggering timing of external knowledge, that is, when to consult external knowledge and when to rely on internal knowledge, which affects the flow and engagement of the conversation. The second is the accuracy of knowledge selection, which involves retrieval technology: the Chinese knowledge corpus is built at the scale of billions of entries, and it is not easy to accurately retrieve the appropriate knowledge for a given conversation. The third is the reasonableness and faithfulness of knowledge utilization: sometimes the model cannot accurately understand the knowledge, or confuses and pieces together inaccurate replies.

3. Applications, challenges and prospects of large dialogue models

The above introduced some of PLATO's dialogue technologies: scaling up the model, adding discrete latent variables to improve reply diversity, and introducing external knowledge in an unsupervised way to alleviate knowledge abuse. So what are the practical applications in real products?

1. Deployed applications

PLATO provides open-domain chat capability in many scenarios, such as smart speakers, virtual humans, and community chat.

On the left is the digital human Du Xiaoxiao. Search for Du Xiaoxiao in the Baidu APP, or simply enter "Hello", to summon the digital human; chatting with it streamlines the search process and gives efficient access to answers and information. On the right is a virtual human in Baidu Input Method, who is both good-looking and good at chatting.

2. Challenges encountered in deployment

In deployment, the first challenge is inference performance; the figure lists the performance data of the 1.6-billion-parameter PLATO. Operator fusion reduced the number of operators by 98%, and model inference time dropped from 1.2 s on the original V100 to under 300 ms on an A10 card. Optimizing computation precision cut GPU memory usage by 40%. The inference card was switched from V100 to A10 to reduce cost, and architecture optimization and platform migration further reduced link overhead.

The second challenge is dialogue safety: harmful speech, political sensitivity, regional discrimination, privacy and many other aspects require great attention. PLATO deeply cleans the corpus and deletes unsafe samples, and after deployment uses a safety classification model to filter out unsafe candidate replies. At the same time, a keyword list is maintained and adversarial training is added to find and fix gaps and improve safety.

3. Outlook

In the past, open-domain chitchat was considered a very difficult problem. With the development of large models in recent years, significant progress has been made in the dialogue field. Current models can generate coherent, fluent, informative and cross-domain dialogue, but there is still great room for improvement in aspects such as emotion, persona, personality and reasoning.

The road is long and difficult, but if we keep going we will get there. I hope colleagues in the dialogue field can work together to reach the peak of human-computer dialogue.

4. References

5. Q&A session

Q: How is the effectiveness of the dialogue evaluated?

A: Currently there are no automatic metrics for dialogue that correlate well with human evaluation, so human evaluation remains the gold standard. During development you can iterate using perplexity; for the final comprehensive evaluation, you still need a large number of crowd workers to interact with the different systems and score them on several metrics. The metrics also evolve with the technology: for example, once fluency is no longer a problem, metrics such as safety and knowledge accuracy can be added to evaluate more advanced abilities.
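As a small illustration of the perplexity metric mentioned above (the per-token probabilities are invented for the example):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-likelihood per
    token. Lower perplexity means the model finds the human replies
    in the evaluation set less 'surprising'."""
    nll = -sum(math.log(p) for p in token_probs)
    return math.exp(nll / len(token_probs))

# a 3-token reply whose gold tokens received probabilities 0.5, 0.25, 0.8
ppl = perplexity([0.5, 0.25, 0.8])
```

Because perplexity is cheap and automatic it is convenient for development-time iteration, but as the answer notes, it does not replace human evaluation of the final system.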

source:51cto.com