


Open source! CUHK, MIT, and Fudan propose the first RNA foundation model
Unlike the protein field, RNA research often lacks sufficient annotated data; for example, only slightly more than 1,000 RNAs have experimentally determined 3D structures. This greatly limits the development of machine learning methods for RNA structure and function prediction tasks.
To compensate for the lack of annotated data, this article presents a foundation model that provides rich structural and functional knowledge for a wide range of RNA studies: the RNA foundation model (RNA-FM). As the world's first RNA foundation model, trained in an unsupervised manner on 23 million unlabeled RNA sequences, RNA-FM mines the evolutionary and structural patterns contained in RNA sequences.
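Concretely, RNA-FM follows the masked-language-modeling recipe familiar from protein language models: a fraction of nucleotides is hidden and a Transformer encoder learns to recover them from the surrounding context. The sketch below is a minimal illustration of that training loop on a toy sequence; the vocabulary, 15% masking rate, and tiny model size are illustrative assumptions, not the authors' actual configuration.

```python
# A minimal sketch of masked-nucleotide pretraining (BERT-style).
# Not the authors' code: vocabulary, 15% masking rate, and model size are toy assumptions,
# and positional encodings are omitted for brevity.
import torch
import torch.nn as nn

VOCAB = {"<pad>": 0, "<mask>": 1, "A": 2, "C": 3, "G": 4, "U": 5}

def encode(seq):
    return torch.tensor([VOCAB[c] for c in seq])

class TinyRnaLM(nn.Module):
    def __init__(self, dim=64, layers=2, heads=4):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB), dim)
        enc_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.head = nn.Linear(dim, len(VOCAB))  # predicts the identity of each (masked) token

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

model = TinyRnaLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=-100)

tokens = encode("GGGAAACUUCGGUUUCCC").unsqueeze(0)  # toy RNA sequence, shape (1, L)

# Randomly hide ~15% of positions and train the model to recover them.
mask = torch.rand(tokens.shape) < 0.15
mask[0, 0] = True                      # ensure at least one masked position in this toy example
inputs = tokens.clone()
inputs[mask] = VOCAB["<mask>"]
targets = tokens.clone()
targets[~mask] = -100                  # loss is computed on masked positions only

loss = loss_fn(model(inputs).view(-1, len(VOCAB)), targets.view(-1))
loss.backward()
optimizer.step()
print(f"masked-LM loss: {loss.item():.3f}")
```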
Notably, RNA-FM only needs to be paired with a simple downstream model, or merely to supply embeddings, to achieve performance far exceeding the state of the art on many downstream tasks: for example, roughly a 20% improvement in secondary structure prediction and a 30% improvement in distance map prediction. Large-scale experiments show that the model generalizes well and can even be applied to COVID-19 and to regulatory fragments of mRNA.
- Preprint of the paper: https://arxiv.org/abs/2204.00300
- Code and model: https://github.com/ml4bio/RNA-FM
- Server: https://proj.cse.cuhk.edu.hk/rnafm
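The released repository exposes the pretrained weights through an ESM-style Python interface. The sketch below follows the usage pattern shown in the project README for extracting per-nucleotide embeddings; the loader name (`fm.pretrained.rna_fm_t12`), the layer index, and the 640-dimensional embedding size are taken from the README and should be checked against the current code.

```python
# Sketch of loading the released RNA-FM weights and extracting per-nucleotide embeddings,
# following the ESM-style usage pattern in the project README.
# Loader name, layer index, and embedding size should be verified against the current repo.
import torch
import fm  # installed from github.com/ml4bio/RNA-FM

model, alphabet = fm.pretrained.rna_fm_t12()     # 12-layer pretrained model
batch_converter = alphabet.get_batch_converter()
model.eval()                                     # disable dropout for deterministic embeddings

data = [
    ("example_ncRNA", "GGGAAACUUCGGUUUCCCGAAAGGGCUAAGCCCUUU"),
]
labels, seqs, tokens = batch_converter(data)

with torch.no_grad():
    out = model(tokens, repr_layers=[12])

# Per-nucleotide embeddings from the final layer (640-dim in the released model),
# ready to be fed to a lightweight downstream predictor.
embeddings = out["representations"][12]
print(embeddings.shape)
```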
In recent years, deep-learning-based computational methods have made breakthrough progress in the protein field, the most famous milestone being AlphaFold2, the end-to-end protein 3D structure prediction framework developed by Google DeepMind. However, proteins are only one class of biological molecules. Genetic material (DNA/RNA), as the source from which proteins are produced, carries more fundamental information than proteins and is therefore of great research value.
Generally speaking, proteins are the products of translating coding RNA, that is, mRNA: a given mRNA is translated into a fixed protein sequence. In fact, coding RNA accounts for only about 2% of all RNA sequences; the remaining 98% is non-coding RNA (ncRNA). Although ncRNAs are not directly "translated" into proteins, they fold into tertiary structures with specific functions and regulate the translation of mRNA and other biological processes. Analyzing the structure and function of ncRNA is therefore a more fundamental, and more complex, problem than protein analysis.
However, compared with the protein field, where computational methods are relatively mature, RNA structure and function prediction is still in its early stages, and methods developed for proteins are difficult to transfer directly to RNA. The main limitation is that annotated RNA data are usually hard to obtain: annotating even a small amount of data takes substantial experimental resources and time, while most computational methods require a large amount of annotated data for supervision to reach high performance. Although annotated data are scarce, the RNA field has accumulated a large amount of unannotated sequence data. The approach of this article is to use these unlabeled data to provide additional, useful information for various downstream tasks.
Based on this consideration, teams from CUHK, MIT, Fudan, and the Shanghai Artificial Intelligence Laboratory trained the RNA foundation model (RNA-FM) in an unsupervised manner on 23 million unlabeled, pure RNA sequences. Although no annotation information is provided during training, RNA-FM still mines the evolutionary and structural patterns contained in these sequences.
If RNA-FM can be applied effectively to downstream RNA structure and function prediction tasks, these computational methods should benefit from the knowledge distilled by RNA-FM and achieve better performance. The framework of upstream pre-training and downstream transfer for RNA-FM is shown in the figure below.
To confirm whether the pre-trained RNA-FM has learned "knowledge" from the large amount of unlabeled data, and what kind of "knowledge" it has learned, the article conducts a series of analyses on its embeddings.
First, a simple clustering comparison of various features was carried out with UMAP, which showed that the embeddings from the pre-trained RNA-FM form clusters that separate RNA species more clearly than other embeddings. This suggests that RNA-FM embeddings do contain structural or functional information that distinguishes RNA species.
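As a rough illustration of this analysis, the sketch below pools per-sequence embeddings, projects them with UMAP, and colors the points by RNA family; the arrays are random placeholders standing in for real RNA-FM embeddings and family labels.

```python
# Sketch: project pooled per-sequence embeddings with UMAP and color by RNA family.
# The arrays below are random placeholders standing in for real RNA-FM embeddings and labels.
import numpy as np
import umap                      # pip install umap-learn
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 640))    # placeholder: pooled RNA-FM sequence embeddings
families = rng.integers(0, 5, size=500)     # placeholder: RNA family / species labels

reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=0)
coords = reducer.fit_transform(embeddings)  # (500, 2) projection for visualization

plt.scatter(coords[:, 0], coords[:, 1], c=families, s=5, cmap="tab10")
plt.title("UMAP of RNA-FM embeddings (placeholder data)")
plt.show()
```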
Then, the article uses trajectory inference to predict the evolution of lncRNAs from different species based on RNA-FM embeddings. In the stream plot below, the predicted evolutionary pseudo-time between species is roughly consistent with the known species phylogeny, indicating that RNA-FM embeddings also capture part of the evolutionary information.
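The article does not spell out the trajectory-inference tooling; one common way to run such an analysis on embedding vectors is scanpy's diffusion pseudotime, sketched below on placeholder data as an assumption about the general recipe rather than the authors' exact pipeline.

```python
# Sketch: diffusion-pseudotime trajectory inference on embedding vectors with scanpy.
# Placeholder data only; this illustrates a generic recipe, not the authors' exact pipeline.
import numpy as np
import anndata as ad
import scanpy as sc

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 640))                         # placeholder: lncRNA RNA-FM embeddings
adata = ad.AnnData(X)
adata.obs["species"] = rng.choice(["human", "mouse", "zebrafish"], size=300)

sc.pp.neighbors(adata, use_rep="X", n_neighbors=15)     # kNN graph over the embeddings
sc.tl.diffmap(adata)
adata.uns["iroot"] = 0                                  # pick a root sequence for pseudotime
sc.tl.dpt(adata)

# Per-sequence pseudotime; its ordering across species is what gets compared
# with the known evolutionary relationships.
print(adata.obs[["species", "dpt_pseudotime"]].head())
```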
It is worth noting that RNA-FM was never directly exposed to these labels during training, whether the RNA species clusters or the evolutionary information of lncRNAs: it discovers patterns related to structure, function, and evolution from raw sequences in a completely self-supervised manner.
More experimental results
In addition to analyzing the embeddings of RNA-FM directly, the article also introduces RNA-FM into various downstream RNA structure prediction tasks, including secondary structure, contact map, distance map, and tertiary structure prediction, and obtains clear improvements.
In secondary structure prediction in particular, with RNA-FM as the backbone and only a simple ResNet as the downstream model, the method surpasses 12 other state-of-the-art methods on two public datasets, beating the best of them, UFold, by 3-5 percentage points in F1 score. In the head-to-head comparison with UFold, RNA-FM performs better on most RNA categories. Combining RNA-FM with E2Efold yields a further 5% improvement.
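The paper describes the downstream module only as a simple ResNet on top of RNA-FM embeddings. The sketch below shows one plausible shape for such a head: per-nucleotide embeddings are combined pairwise and convolved into an L x L base-pairing probability map. The outer-concatenation step, channel width, and block count are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of a ResNet-style head that maps per-nucleotide embeddings to an L x L
# base-pairing probability map. Pairwise outer-concatenation, channel width, and
# block count are illustrative assumptions.
import torch
import torch.nn as nn

class ResBlock2d(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.norm = nn.BatchNorm2d(ch)

    def forward(self, x):
        return x + self.conv2(torch.relu(self.norm(self.conv1(x))))

class PairHead(nn.Module):
    def __init__(self, embed_dim=640, ch=64, blocks=4):
        super().__init__()
        self.proj = nn.Conv2d(2 * embed_dim, ch, 1)
        self.blocks = nn.Sequential(*[ResBlock2d(ch) for _ in range(blocks)])
        self.out = nn.Conv2d(ch, 1, 1)

    def forward(self, emb):                          # emb: (B, L, D) RNA-FM embeddings
        B, L, D = emb.shape
        ei = emb.unsqueeze(2).expand(B, L, L, D)     # e_i broadcast over j
        ej = emb.unsqueeze(1).expand(B, L, L, D)     # e_j broadcast over i
        pair = torch.cat([ei, ej], dim=-1).permute(0, 3, 1, 2)  # (B, 2D, L, L)
        logits = self.out(self.blocks(self.proj(pair))).squeeze(1)
        return 0.5 * (logits + logits.transpose(1, 2))          # symmetric pairing map

head = PairHead()
emb = torch.randn(1, 50, 640)        # placeholder embeddings for a 50-nt RNA
pairing_probs = torch.sigmoid(head(emb))
print(pairing_probs.shape)           # torch.Size([1, 50, 50])
```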
The article also uses RNA-FM for a complete analysis of COVID-19 data, including accurately predicting key regulatory elements in the COVID-19 reference genome (29,870 nt) and using RNA-FM embeddings to roughly predict the evolutionary trends of major COVID-19 variants.
The article further introduces RNA-FM into downstream RNA function prediction tasks, such as predicting RNA-protein interactions from RNA-FM embeddings.
Experiments show that introducing RNA-FM embeddings improves model performance, in some cases even matching the results obtained when real secondary structure information is supplied as input.
Finally, the article attempts to use RNA-FM to predict protein expression from the 5'UTR of mRNA. Although mRNA is not ncRNA, its 5'UTR is an untranslated region with regulatory functions, which is consistent with the characteristics of ncRNA and does not appear in the training data.
As shown in the figure below, models that include RNA-FM embeddings consistently outperform those that do not. Although the improvement is relatively modest, it shows in part that RNA-FM also has some generalization ability on non-ncRNA data.
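For the 5'UTR experiment, a natural way to consume RNA-FM embeddings is to pool them over the UTR and regress a scalar expression readout. The sketch below shows such a head on placeholder data; the mean pooling and MLP regressor are assumptions, not the paper's exact setup.

```python
# Sketch: regress a scalar expression readout from pooled 5'UTR embeddings.
# Mean pooling and the small MLP are illustrative assumptions, trained here on random data.
import torch
import torch.nn as nn

class UTRRegressor(nn.Module):
    def __init__(self, embed_dim=640, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, emb):           # emb: (B, L, D) per-nucleotide embeddings of the 5'UTR
        pooled = emb.mean(dim=1)      # average over the UTR length
        return self.mlp(pooled).squeeze(-1)

model = UTRRegressor()
emb = torch.randn(8, 50, 640)         # placeholder embeddings for eight 50-nt UTRs
target = torch.randn(8)               # placeholder expression measurements
loss = nn.functional.mse_loss(model(emb), target)
loss.backward()
print(f"MSE on placeholder batch: {loss.item():.3f}")
```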
Conclusion
In summary, this article pre-trains the language model RNA-FM on unlabeled RNA sequence data and, through direct and indirect evaluation on a range of structural and functional tasks, comprehensively verifies that RNA-FM can effectively improve the performance of computational methods on downstream tasks.
The emergence of RNA-FM alleviates, to some extent, the shortage of labeled RNA data and gives other researchers a convenient interface to the knowledge contained in large quantities of unlabeled data. As a foundation model for the RNA field, it offers strong support for a wide range of research in this area.
About the authors
The article has two co-first authors: Chen Jiayang, a research assistant at the Chinese University of Hong Kong, and Hu Zhihang, a doctoral candidate at the Chinese University of Hong Kong.
The article has two corresponding authors. Sun Siqi is a young researcher at the Intelligent Complex Systems Laboratory of Fudan University and the Shanghai Artificial Intelligence Laboratory; homepage: https://intersun.github.io.
Li Yu is an Assistant Professor at the Chinese University of Hong Kong, a Visiting Assistant Professor in the James Collins Lab at MIT, a Research Scientist at the Broad Institute of MIT and Harvard, a Visiting Scholar at the Wyss Institute at Harvard University, and a member of the Forbes 30 Under 30 Asia list (Class of 2022, Healthcare & Science). Homepage: https://liyu95.com.