


Ten lines of code to match RLHF: training a socially aligned model with social game data
Aligning the behavior of language models with human social values is an important part of current language model development; the corresponding training is often called value alignment.
The current mainstream solution is RLHF (Reinforcement Learning from Human Feedback), used by ChatGPT. This approach first trains a reward model (value model) as a proxy for human judgment; during the reinforcement learning phase, the proxy reward model provides rewards as supervision signals to the generative language model.
This method has the following pain points:
1. The rewards produced by the proxy reward model are easily exploited or gamed by the policy (reward hacking).
2. During training, the proxy reward model must interact continuously with the generative model, and this process can be very slow and inefficient. To ensure high-quality supervision signals, the proxy reward model should be no smaller than the generative model, which means that during reinforcement learning at least two large models must alternate between inference (computing rewards) and parameter updates (optimizing the generative model). Such a setup can be very inconvenient in large-scale distributed training.
3. The reward model itself has no obvious counterpart in human cognition: we do not carry a separate scoring model in our heads, and it is in fact very difficult to maintain a fixed scoring standard over a long period. Instead, much of the value judgment we form as we grow comes from daily social interactions: by observing the different social responses to similar situations, we come to realize what is encouraged and what is not. These experiences and this consensus, gradually accumulated through a large amount of "socialization-feedback-improvement", become the shared value judgments of human society.
A recent study from Dartmouth, Stanford, Google DeepMind, and other institutions shows that combining high-quality data constructed from social games with a simple and efficient alignment algorithm may be the key to achieving alignment.
- Paper: https://arxiv.org/pdf/2305.16960.pdf
- Code: https://github.com/agi-templar/Stable-Alignment
- Model downloads (base, SFT, and aligned models): https://huggingface.co/agi-css
The author proposes an alignment method trained on multi-agent game data. The basic idea is to move the online interaction between the reward model and the generative model during training offline, into interactions among a large number of autonomous agents in a game (sampled at a high rate and played out ahead of training). The game environment runs independently of training and can be massively parallelized, and the supervision signal no longer depends on the performance of a single proxy reward model but on the collective intelligence of a large number of autonomous agents.
To this end, the author designed a virtual social model called the Sandbox. The Sandbox is a world made up of grid points, and each grid point is a social agent. Each agent has a memory system that stores the question, answer, feedback, and other information from each interaction. Whenever an agent responds to a question, it first retrieves the N historical question-answer pairs most relevant to that question from its memory system and uses them as context for the reply. Through this design, an agent's position can keep being updated over multiple rounds of interaction while remaining consistent with its past positions. Each agent is also initialized with a different default position. A minimal sketch of such an agent follows.
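The sketch below illustrates a memory-backed agent of this kind, assuming a simple bag-of-words retriever; the names `SocialAgent`, `remember`, and `retrieve` are hypothetical, not the paper's actual API, which would retrieve with real text embeddings rather than word counts.

```python
from collections import Counter
from math import sqrt

def _cosine(a: Counter, b: Counter) -> float:
    # Bag-of-words cosine similarity, standing in for a real text embedding.
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class SocialAgent:
    """One grid point in the Sandbox (hypothetical class, not the paper's API)."""

    def __init__(self, persona: str):
        self.persona = persona  # each agent starts from a different default position
        self.memory = []        # (question, answer, feedback) records from past rounds

    def remember(self, question: str, answer: str, feedback: str) -> None:
        self.memory.append((question, answer, feedback))

    def retrieve(self, question: str, n: int = 3) -> list:
        # Return the N stored interactions most relevant to the new question;
        # these are prepended to the prompt so the agent's stance stays continuous.
        q_vec = Counter(question.lower().split())
        ranked = sorted(
            self.memory,
            key=lambda rec: _cosine(q_vec, Counter(rec[0].lower().split())),
            reverse=True,
        )
        return ranked[:n]
```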
## Converting game data into alignment data

In the experiment, the author used a 10x10 grid Sandbox (100 social agents in total) to run the social simulation, and imposed a single social rule (the so-called Sandbox Rule): every agent must make its answers to questions more socially aligned in order to leave a good impression on the other agents. In addition, the Sandbox deploys memory-less observers that score each agent's response before and after every social interaction, along two dimensions: alignment and engagement.
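These scored interactions are what later become alignment training data. The record below is a hedged sketch of what one such data point might contain; the field names are assumptions for illustration, not the released data format.

```python
from dataclasses import dataclass

@dataclass
class ScoredInteraction:
    question: str
    draft_answer: str        # answer before peer feedback
    revised_answer: str      # answer after revision under the Sandbox Rule
    alignment_before: float  # observer rating of the draft
    alignment_after: float   # observer rating of the revision
    engagement_before: float
    engagement_after: float
```

A training pair can then be formed from the draft and the revision, with the score change serving as the learning signal.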
## Simulating human society in the Sandbox with different models
The author used the Sandbox to test language models of different sizes and at different training stages. Overall, aligned models (those that have undergone alignment training), such as davinci-003, GPT-4, and ChatGPT, generate socially normative responses within fewer interaction rounds. In other words, the point of alignment training is to make a model safer "out of the box", without needing several rounds of dialogue to steer it. Models without alignment training not only require more interactions to reach responses that are jointly optimal in alignment and engagement, but the ceiling of that joint optimum is also significantly lower than for aligned models.
The author also proposes a simple and efficient alignment algorithm, called Stable Alignment, to learn alignment from the historical data in the Sandbox. Stable Alignment performs score-modulated contrastive learning within each mini-batch: the lower a reply's score, the larger the contrastive margin. In other words, by continuously sampling mini-batches, the model is encouraged to generate responses closer to high-scoring replies and further from low-scoring ones. Stable Alignment eventually converges to the SFT loss. The authors also discuss how Stable Alignment differs from SFT and RLHF; a sketch of the loss follows below.
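Below is a hedged PyTorch sketch of such a score-modulated contrastive loss, in the spirit of the title's "ten lines of code" but simplified; the function name `stable_alignment_loss`, the mean-token log-likelihood inputs, and the linear margin schedule are assumptions of this sketch, not the authors' exact implementation.

```python
import torch

def stable_alignment_loss(logp: torch.Tensor, score: torch.Tensor,
                          margin_scale: float = 1.0) -> torch.Tensor:
    """Score-modulated contrastive loss over one mini-batch of rated responses.

    logp:  mean token log-likelihood the model assigns to each response, shape (B,)
    score: observer rating of each response, shape (B,); assumes B >= 2
    """
    best = score.argmax()
    sft_loss = -logp[best]                      # reduces to plain SFT on the top answer
    others = torch.arange(len(score)) != best
    # The lower a response's rating, the larger the margin by which the
    # best response's likelihood must exceed it (score-modulated margin).
    margins = margin_scale * (score[best] - score[others])
    gaps = torch.relu(margins - (logp[best] - logp[others]))
    return sft_loss + gaps.mean()

# Toy usage: three responses, rated from best to worst.
logp = torch.tensor([-1.0, -1.5, -3.0])
score = torch.tensor([0.9, 0.5, 0.1])
print(stable_alignment_loss(logp, score))  # tensor(1.)
```

Once every low-scoring response is already separated by its margin, only the SFT term on the highest-rated reply remains, which matches the article's note that Stable Alignment converges to the SFT loss.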
The author particularly emphasizes the data produced by the Sandbox games: because of the interaction mechanism, a large portion of it consists of responses that were revised step by step into answers that conform to social values. Ablation experiments show that this large volume of incrementally improved data is the key to stable training.
## Comparison with mainstream alignment methods

The author also compares Stable Alignment against current mainstream alignment algorithms in terms of performance and training stability, showing that Stable Alignment is not only more stable than reward modeling but also comparable to RLHF in both general capability and alignment performance (since ChatGPT relies on undisclosed models, data, and algorithms, its numbers are for reference only).
Example generation results: