Author | Liu Shengchao
Editor | Kaixia
Starting in 2021, the combination of large language models and multi-modality has swept the machine learning research community.
With the development of large models and multi-modal applications, can we apply these techniques to drug discovery? And can natural language descriptions bring new perspectives to this challenging problem? The answer is yes, and we are optimistic about it.
Recently, a research team from Canada's Montreal Institute for Learning Algorithms (Mila), NVIDIA Research, the University of Illinois at Urbana-Champaign (UIUC), Princeton University and the California Institute of Technology (Caltech) jointly learned the chemical structures and text descriptions of molecules through a contrastive learning strategy, proposing the multi-modal molecular structure-text model MoleculeSTM.
The research is titled "Multi-modal molecule structure–text model for text-based retrieval and editing" and was published in "Nature Machine Intelligence" on December 18, 2023.
Paper link: https://www.nature.com/articles/s42256-023-00759-6
Among them, Dr. Liu Shengchao is the first author, and Professor Anima Anandkumar of NVIDIA Research is the corresponding author. Nie Weili, Wang Chengpeng, Lu Jiarui, Qiao Zhuoran, Liu Ling, Tang Jian and Xiao Chaowei are co-authors.
This project was conducted by Dr. Liu Shengchao after he joined NVIDIA Research in March 2022, under the guidance of Nie Weili, Tang Jian, Xiao Chaowei and Anima Anandkumar.
Dr. Liu Shengchao said: "Our motivation was to conduct preliminary exploration of LLM and drug discovery, and finally proposed MoleculeSTM."
Aligning structure and text for text-guided molecule editing
The core idea of MoleculeSTM is simple and direct: descriptions of a molecule fall into two categories, its internal chemical structure and its external functional description. We use a contrastive pre-training method to align and connect these two types of information, as illustrated in the figure below.
Illustration: MoleculeSTM flow chart.
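To make the contrastive alignment concrete, here is a minimal sketch of a symmetric InfoNCE-style loss between a batch of molecule embeddings and text embeddings, in the style of CLIP-like pre-training. This is an illustrative assumption of the general technique, not the paper's exact implementation; the function name and temperature value are hypothetical.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(mol_emb, text_emb, temperature=0.1):
    # Normalize both modalities so similarity is cosine-based.
    mol_emb = F.normalize(mol_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Pairwise similarity matrix; matched (molecule, text) pairs lie on the diagonal.
    logits = mol_emb @ text_emb.t() / temperature
    labels = torch.arange(len(mol_emb))
    # Symmetric InfoNCE: molecule-to-text and text-to-molecule directions.
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2
```

Minimizing this loss pulls each molecule's embedding toward its paired description and away from the other descriptions in the batch, which is what connects the two spaces.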
This alignment in MoleculeSTM has a very useful property: when a task is difficult to solve in the chemical space, we can transfer it to the natural language space, where the characteristics of natural language make it relatively easier to solve. Based on this, we designed a wide variety of downstream tasks to verify its effectiveness. Below we discuss several insights in detail.
Features of Natural Language and Large Language Models
In MoleculeSTM, we pose this problem for the first time, taking advantage of the open-vocabulary and compositional characteristics of natural language.
In our recent work ChatDrug (https://arxiv.org/abs/2305.18090), we explored the conversational properties of natural language and large language models; interested readers are welcome to take a look.
Task design derived from language features
Existing language-image tasks can be viewed as art-related tasks, such as generating pictures or text: their results are varied and open-ended. Scientific discovery, however, poses problems that usually have relatively clear-cut answers, such as generating small molecules with specific functions. This makes task design more challenging.
In MoleculeSTM (Appendix B), we proposed two guidelines:
From this we designed three broad categories of tasks:
We will focus on the second task in the following sections.
Qualitative results of molecule editing
This task takes as input both a molecule and a natural language description (such as an additional desired property), and aims to output a new molecule that matches the combined text description. This is text-guided lead optimization.
Specifically, we use an already-trained molecule generation model together with our pre-trained MoleculeSTM to learn an alignment between the two latent spaces, perform interpolation in the latent space, and then decode the result into the target molecule. The process diagram is as follows.
Illustration: the two-stage process of zero-shot text-guided molecule editing.
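The two-stage procedure above can be sketched as follows: an adapter maps the generator's latent into the joint structure-text space (stage 1 alignment), then the latent is nudged toward the text embedding while staying close to the starting molecule, and finally decoded (stage 2). All names here (`text_guided_edit`, `adapter`, `decoder`, the weight `lam`) are hypothetical illustrations of the general idea, not the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def text_guided_edit(z_mol, text_emb, adapter, decoder, lam=0.3, steps=100, lr=0.1):
    """Sketch: optimize a generative latent toward a text prompt.

    adapter : maps the generator's latent into the joint molecule-text space.
    decoder : maps an (edited) latent back to a molecule representation.
    lam     : trades off text similarity against staying near the input molecule.
    """
    z = z_mol.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    text = F.normalize(text_emb, dim=-1)
    for _ in range(steps):
        proj = F.normalize(adapter(z), dim=-1)
        sim_text = (proj * text).sum()      # pull toward the text description
        drift = (z - z_mol).pow(2).sum()    # stay near the original molecule
        loss = -sim_text + lam * drift
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder(z.detach())
```

The drift penalty is what makes this lead *optimization* rather than unconstrained generation: the output should remain structurally close to the input while gaining the described property.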
Here we show qualitative results for several groups of molecule editing tasks (details of the remaining downstream tasks can be found in the original paper). We mainly consider four types of molecule editing tasks:
Result display: zero-shot text-guided molecule editing.
More interesting is the last type of task: we found that MoleculeSTM can indeed perform lead optimization of ligands based on the text description of the target protein. (Note: the protein structure information is only used at evaluation time.)