
To prevent large models from doing evil, Stanford's new method allows the model to 'forget' harmful task information, and the model learns to 'self-destruct'

A new way to prevent large models from doing evil is here!

Now, even if a model is open sourced, it will be difficult for anyone who wants to misuse it to turn the large model "evil".

If you don’t believe it, just read this study.

Stanford researchers recently proposed a new method: by training the model with an additional mechanism, the model can be prevented from later being adapted to harmful tasks.

They call a model trained with this method a "self-destructing model".


The self-destructing model can still handle useful tasks with high performance, but when faced with harmful tasks it magically becomes "worse". The paper has been accepted at AAAI and received an honorable mention for the Best Student Paper Award.

Simulate first, then destroy

More and more large models are being open sourced, which lets more people participate in developing and optimizing models and build models that benefit society.

However, open sourcing a model also lowers the cost of misusing it, so we have to guard against people (attackers) with ulterior motives.

Previously, two kinds of safeguards were mainly used to prevent people from maliciously steering large models toward evil: structural security mechanisms and technical security mechanisms.

Structural security mechanisms mainly rely on licenses or access restrictions, but once a model is open sourced their effect is weakened, so they need to be supplemented with more technical strategies. However, existing techniques such as safety filtering and alignment optimization are easily bypassed through fine-tuning or prompt engineering.

The Stanford researchers proposed using a task blocking technique to train large models, so that the model performs well on normal tasks while being prevented from adapting to harmful tasks.

Task blocking assumes that an attacker will try to modify the pre-trained large model for a harmful task, and models the attacker's search for the best way to modify it. The defense then makes that modification harder by raising its data cost and compute cost.

In this study, the researchers focused on raising the data cost, that is, degrading the model's few-shot ability so that its few-shot performance on harmful tasks is close to that of a randomly initialized model. Maliciously repurposing the model would then require so much extra data that attackers would rather train a model from scratch than start from the pre-trained one.

Specifically, to prevent the pre-trained model from successfully adapting to harmful tasks, the researchers proposed MLAC (Meta-Learned Adversarial Censoring), an algorithm that combines meta-learning and adversarial learning to train the self-destructing model. MLAC meta-trains the model on a beneficial-task dataset and a harmful-task dataset:

△ The MLAC training procedure

The algorithm simulates various possible adaptation attacks in the inner loop, and in the outer loop updates the model parameters to maximize the loss on harmful tasks, that is, it updates the parameters to resist these attacks.
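
One schematic way to write this bi-level objective (our own paraphrase and notation, not a formula taken from the paper) is:

```latex
% theta   : pre-trained model parameters
% Adapt_K : K simulated attacker fine-tuning steps on harmful-task data (inner loop)
% lambda  : an assumed trade-off weight between the two terms
\min_{\theta}\;
  \mathcal{L}_{\text{desired}}(\theta)
  \;-\;
  \lambda\,\mathcal{L}_{\text{harmful}}\!\big(\mathrm{Adapt}_K(\theta,\,\mathcal{D}_{\text{harmful}})\big)
```

The first term keeps the model good at the beneficial task; the second term pushes the parameters toward places from which the simulated attacker's adaptation still ends up with a high harmful-task loss.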

Through this adversarial interplay between the inner and outer loops, the model "forgets" information related to harmful tasks and achieves the self-destruction effect.

The result is a parameter initialization that performs well on beneficial tasks but is difficult to adapt to harmful tasks.
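
To make the inner/outer loop structure concrete, here is a minimal first-order sketch in PyTorch. The classifier interface, the SGD inner loop, the weight `lam`, and the first-order shortcut are all illustrative assumptions of ours, not the authors' released implementation (which, per the description above, also varies the simulated attack configurations rather than using one fixed recipe):

```python
# Minimal MLAC-style training step (illustrative sketch, not the paper's code).
import copy
import torch
import torch.nn.functional as F


def inner_adapt(model, harmful_batches, lr=1e-4, steps=4):
    """Simulate an attacker fine-tuning a copy of the model on the harmful task."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for x, y in harmful_batches[:steps]:
        opt.zero_grad()
        F.cross_entropy(adapted(x), y).backward()
        opt.step()
    return adapted


def mlac_step(model, optimizer, desired_batch, harmful_batches, lam=1.0):
    """One outer-loop update: stay good on the desired task while making the
    simulated attacker's adapted copy do badly on the harmful task."""
    x_d, y_d = desired_batch
    desired_loss = F.cross_entropy(model(x_d), y_d)

    # Inner loop: simulated adaptation attack on a copy of the current weights.
    adapted = inner_adapt(model, harmful_batches)
    x_h, y_h = harmful_batches[-1]  # held-out harmful batch
    harmful_loss = F.cross_entropy(adapted(x_h), y_h)
    harmful_grads = torch.autograd.grad(
        harmful_loss, list(adapted.parameters()), allow_unused=True
    )

    # First-order shortcut: apply the adapted copy's harmful-task gradient to the
    # original parameters with a minus sign, i.e. ascend the harmful-task loss.
    optimizer.zero_grad()
    desired_loss.backward()
    for p, g_h in zip(model.parameters(), harmful_grads):
        if g_h is None:
            continue
        p.grad = (p.grad if p.grad is not None else torch.zeros_like(p)) - lam * g_h
    optimizer.step()
    return desired_loss.item(), harmful_loss.item()
```

Running `mlac_step` repeatedly over beneficial and harmful batches approximates the confrontation described above; the paper's actual procedure differs in how the attacks are sampled and how the two losses are balanced.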

△ The meta-learning process

Overall, by simulating the adversary's adaptation process, MLAC steers the model toward local optima or saddle points of the harmful-task loss while maintaining global optimality on beneficial tasks.

As the paper illustrates, by planning where the pre-trained model sits in the parameter space, the difficulty of fine-tuning it can be increased.

A large model placed at point 1 can easily be adjusted via gradient descent to reach the global optimum of both the harmful task loss and the desired task loss.

A model placed at point 2, on the other hand, can still easily reach the optimum of the desired task, but is more likely to fall into a local optimum of the harmful task.

The model initialization obtained this way adapts easily to the global optimum on beneficial tasks, but gets stuck in local optima on harmful tasks and is hard to repurpose.

The self-destruction effect really holds up!

To test the performance of a "self-destructing model" trained with the above method, the researchers ran an experiment.

First, the researchers prepared a biography dataset, Bias in Bios.

They then treated gender identification as the harmful task and occupation classification as the beneficial task. Starting from the original dataset, all pronouns were replaced with "they/their", which made the gender identification task harder.
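
For illustration, that kind of pronoun scrubbing can be done with a simple substitution pass like the sketch below. This is our own approximation, not the paper's preprocessing script, and mapping "her" to a single neutral form is inherently lossy:

```python
# Illustrative pronoun scrubbing for biography text (not the paper's exact rules).
import re

PRONOUN_MAP = {
    "he": "they", "she": "they",
    "him": "them", "her": "their",          # "her" is ambiguous (them/their)
    "his": "their", "hers": "theirs",
    "himself": "themselves", "herself": "themselves",
}
PATTERN = re.compile(r"\b(" + "|".join(PRONOUN_MAP) + r")\b", re.IGNORECASE)


def scrub_pronouns(text: str) -> str:
    """Replace gendered pronouns with they/their forms, preserving capitalization."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        out = PRONOUN_MAP[word.lower()]
        return out.capitalize() if word[0].isupper() else out
    return PATTERN.sub(repl, text)


print(scrub_pronouns("She mentioned that her patients trust her."))
# -> They mentioned that their patients trust their.
```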

On the unprocessed dataset, a randomly initialized model needed only 10 examples to reach over 90% gender classification accuracy.

The model was then pre-trained for 50K steps with MLAC.

In testing, the researchers took the generated self-destruct model and ran it through a rigorous hyperparameter search to maximize fine-tuned performance on harmful tasks.

In addition, the researchers extracted a subset of the validation set to use as the attacker's training set, simulating the situation where the attacker only has limited data.

However, when searching for hyperparameters, the attacker is allowed to use the complete validation set. This means that although the attacker only has limited training data, they can tune hyperparameters on the full data.

If, even in this setting, the MLAC-trained model remains difficult to adapt to harmful tasks, that is stronger evidence of its self-destruction effect.
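
As a rough picture of that protocol, the attacker-side evaluation amounts to a grid search like the one below. The grid values and the `finetune`/`evaluate` helpers are placeholders of our own, not the search space actually used in the paper:

```python
# Sketch of the attacker-side evaluation: fine-tune the self-destructing model on a
# small harmful-task training subset, but pick hyperparameters using the full
# validation set. Grid values and helper signatures are illustrative assumptions.
from itertools import product

LEARNING_RATES = [1e-5, 3e-5, 1e-4]   # assumed grid
EPOCHS = [3, 10, 30]                  # assumed grid


def attack_search(self_destruct_ckpt, attacker_train, full_valid, finetune, evaluate):
    """Return the best harmful-task accuracy an attacker reaches over the grid.

    finetune(ckpt, data, lr, epochs) -> model and evaluate(model, data) -> accuracy
    are caller-supplied placeholders.
    """
    best_acc, best_cfg = 0.0, None
    for lr, n_epochs in product(LEARNING_RATES, EPOCHS):
        model = finetune(self_destruct_ckpt, attacker_train, lr, n_epochs)
        acc = evaluate(model, full_valid)  # hyperparameters chosen on the full set
        if acc > best_acc:
            best_acc, best_cfg = acc, (lr, n_epochs)
    return best_acc, best_cfg
```

The lower this best-case attacker accuracy, the stronger the self-destruction effect.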

The researchers then compared MLAC to the following methods:

  • Randomly initialized model
  • BERT fine-tuned only on beneficial tasks
  • A simple adversarial training method


△ Fine-tuned performance on the harmful task (gender recognition). Shading represents the 95% confidence interval over 6 random seeds.

The results show that, across all data amounts, the harmful-task performance of the self-destructing model trained with MLAC stays close to that of the randomly initialized model, whereas simple adversarial training did not significantly reduce fine-tuned performance on the harmful task.

Compared with simple adversarial training, MLAC’s meta-learning mechanism is crucial to producing a self-destructive effect.


△ The effect of the number of inner-loop steps K in the MLAC algorithm; K=0 is equivalent to simple adversarial training

In addition, on the beneficial task, the few-shot performance of the MLAC self-destructing model is better than that of the BERT fine-tuned model:


△ After fine-tuning on the desired task, the few-shot performance of the MLAC self-destructing model surpasses both the BERT fine-tuned and randomly initialized models.

Paper link: https://arxiv.org/abs/2211.14946
