


Reaching into the bottom layer of AI! NUS professor You Yang's team uses a diffusion model to generate neural network parameters, and LeCun approves
Diffusion models have ushered in a major new application:
Just as Sora generates videos, this one generates parameters for neural networks, reaching directly into the bottom layer of AI!
This is the latest open-source research from Professor You Yang's team at the National University of Singapore, in collaboration with UC Berkeley, Meta AI and other institutions.
Specifically, the team proposed p(arameter)-diff, a diffusion model for generating neural network parameters.
Generating network parameters with it is up to 44 times faster than training them directly, with no loss in performance.
After release, the model quickly sparked heated discussion in the AI community; experts in the field reacted with the same amazement that the general public showed toward Sora.
Some even exclaimed that this is essentially AI creating new AI.
Even AI pioneer Yann LeCun praised the result after seeing it, calling it a really cute idea.
Indeed, p-diff carries a significance comparable to Sora's. Dr. Fuzhao Xue, from the same laboratory, explained in detail:
Sora generates high-dimensional data, namely videos, which makes it a world simulator (approaching AGI along one dimension).
This work, neural network diffusion, generates the parameters inside a model and has the potential to become a world-class meta-learner/optimizer, moving toward AGI along another important new dimension.
Back to the main topic: how exactly does p-diff generate neural network parameters?
Combining an autoencoder with a diffusion model
To answer this question, we first need to look at how diffusion models and neural network training each work.
Diffusion generation transforms a random distribution into a highly specific one; in the forward direction, visual information is degraded into a simple noise distribution by progressively adding noise.
Neural network training follows a similar transformation, and trained parameters can likewise be degraded by adding noise. Inspired by this parallel, the researchers proposed the p-diff method.
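The noise-adding (forward) process both paragraphs refer to has a simple closed form in standard DDPM-style diffusion: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps. Below is a minimal pure-Python sketch of that idea applied to a parameter vector; the linear beta schedule and constants here are common defaults, not details confirmed by the paper:

```python
import math
import random

def forward_diffuse(x0, t, alpha_bar):
    """Closed-form DDPM forward step: x_t = sqrt(a_bar_t)*x0 + sqrt(1-a_bar_t)*eps."""
    a = alpha_bar[t]
    return [math.sqrt(a) * x + math.sqrt(1 - a) * random.gauss(0.0, 1.0) for x in x0]

# Linear beta schedule; alpha_bar is the cumulative product of (1 - beta).
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * i / (T - 1) for i in range(T)]
alpha_bar, prod = [], 1.0
for b in betas:
    prod *= (1.0 - b)
    alpha_bar.append(prod)

random.seed(0)
params = [0.5] * 8                                # a toy "trained parameter" vector
early = forward_diffuse(params, 0, alpha_bar)     # barely perturbed
late = forward_diffuse(params, T - 1, alpha_bar)  # degraded to near-pure noise
```

At t = 0 the parameters are almost unchanged; by the final step alpha_bar has decayed to nearly zero, so the vector is dominated by Gaussian noise, mirroring how trained parameters can be degraded into a noise distribution.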
Structurally, p-diff was designed by the research team around a standard latent diffusion model combined with an autoencoder.
The researchers first select a subset of trained, well-performing network parameters and flatten them into one-dimensional vectors.
An autoencoder is then trained to extract latent representations from these vectors, capturing the key features of the original parameters; the latents serve as training data for the diffusion model.
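The flattening step above is mechanical: concatenate each layer's weights into one vector while remembering the shapes, so the vector can later be restored. A minimal sketch with toy nested-list "layers" (a real implementation would operate on framework tensors, e.g. a PyTorch state dict):

```python
def flatten_params(layers):
    """Concatenate per-layer weight matrices (lists of lists) into one 1-D
    vector, recording each layer's shape so the vector can be restored."""
    vec, shapes = [], []
    for w in layers:
        shapes.append((len(w), len(w[0])))
        for row in w:
            vec.extend(row)
    return vec, shapes

def unflatten_params(vec, shapes):
    """Inverse of flatten_params: rebuild per-layer matrices from the vector."""
    layers, i = [], 0
    for rows, cols in shapes:
        layers.append([vec[i + r * cols : i + (r + 1) * cols] for r in range(rows)])
        i += rows * cols
    return layers

# Two toy "layers": a 2x3 and a 1x2 weight matrix.
net = [[[1, 2, 3], [4, 5, 6]], [[7, 8]]]
v, shapes = flatten_params(net)  # v == [1, 2, 3, 4, 5, 6, 7, 8]
restored = unflatten_params(v, shapes)
```

Round-tripping through `unflatten_params` recovers the original layer structure, which is exactly what the decoder-side of the pipeline relies on.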
During training, p-diff learns the parameter distribution through the forward and reverse diffusion processes. Once trained, the diffusion model synthesizes these latent representations from random noise, much as it would generate visual information.
Finally, the decoder paired with the encoder restores the newly generated latent representations into network parameters, which are used to build a new model.
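Putting the pieces together, the generation path after training is: sample noise, run the reverse (denoising) process in latent space, then decode to parameters. The sketch below is purely schematic; `toy_denoise_step` and `toy_decode` are hypothetical stand-ins for the trained diffusion model and the autoencoder decoder, not the paper's actual networks:

```python
import random

def sample_latent(denoise_step, dim, T):
    """Schematic reverse process: start from Gaussian noise and apply T
    denoising steps; denoise_step stands in for the trained diffusion model."""
    z = [random.gauss(0.0, 1.0) for _ in range(dim)]
    for t in reversed(range(T)):
        z = denoise_step(z, t)
    return z

def toy_denoise_step(z, t, target=0.5):
    # Stand-in: nudge the latent toward a fixed target each step.
    # A real model would predict and remove the noise component instead.
    return [zi + 0.1 * (target - zi) for zi in z]

def toy_decode(z):
    # Stand-in for the autoencoder decoder mapping latents back to parameters.
    return [2.0 * zi for zi in z]

random.seed(0)
latent = sample_latent(toy_denoise_step, dim=4, T=50)
new_params = toy_decode(latent)  # parameters for building a new model
```

The point of the sketch is the control flow, noise in, iterative denoising, decode out, which is the same shape as latent diffusion for images, only the decoded output here is a parameter vector.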
The figure below shows the parameter distributions of ResNet-18 models trained from scratch with three random seeds, illustrating the distribution patterns across different layers and among parameters within the same layer.
To evaluate the quality of the parameters generated by p-diff, the researchers tested three types of neural networks, each at two sizes, across eight datasets.
In the table below, the three numbers in each group are the results for the original model, the ensemble model, and the model generated with p-diff.
The results show that models generated by p-diff perform essentially on par with, and sometimes better than, the manually trained originals.
In terms of efficiency, with no loss of accuracy, p-diff generates a ResNet-18 network 15 times faster than conventional training, and a ViT-Base 44 times faster.
Additional tests show that the models generated by p-diff differ significantly from their training data.
As figure (a) below shows, the similarity among p-diff-generated models is lower than both the similarity among the original models and the similarity between p-diff models and the originals.
Figures (b) and (c) show that p-diff's similarity is also lower than that of fine-tuning and noise-addition baselines.
These results indicate that p-diff genuinely generates new models rather than merely memorizing its training samples, and that it generalizes well enough to produce new models distinct from the training data.
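One simple way to quantify "similarity between models", assuming similarity is measured as the intersection-over-union (IoU) of the validation examples two models get wrong (the paper's exact metric may differ), can be sketched like this:

```python
def wrong_set(preds, labels):
    """Indices of validation examples a model predicts incorrectly."""
    return {i for i, (p, y) in enumerate(zip(preds, labels)) if p != y}

def model_similarity(preds_a, preds_b, labels):
    """IoU of the two models' wrong-prediction sets: 1.0 means they make
    identical mistakes; values near 0 mean largely different behaviour."""
    wa, wb = wrong_set(preds_a, labels), wrong_set(preds_b, labels)
    union = wa | wb
    return len(wa & wb) / len(union) if union else 1.0

labels  = [0, 1, 1, 0, 1, 0]
model_a = [0, 1, 0, 0, 1, 1]  # wrong on examples 2 and 5
model_b = [0, 0, 0, 0, 1, 0]  # wrong on examples 1 and 2
sim = model_similarity(model_a, model_b, labels)  # overlap {2} / union {1,2,5}
```

Under a metric like this, a low score between a generated model and its training originals is evidence of new behaviour rather than memorization, which is the claim the figures above support.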
The code for p-diff is now open source; if you are interested, you can find it on GitHub.
Paper address: https://arxiv.org/abs/2402.13144
GitHub: https://github.com/NUS-HPC-AI-Lab/Neural-Network-Diffusion
