
Tsinghua team proposes knowledge-guided graph Transformer pre-training framework: a method to improve molecular representation learning



Editor | Ziluo

Learning effective molecular feature representations is essential for molecular property prediction in drug discovery. Recently, the challenge of data scarcity has been addressed by pre-training graph neural networks (GNNs) with self-supervised learning techniques. However, current self-supervised learning methods face two main problems: the lack of well-defined self-supervised learning strategies and the limited capacity of GNNs.

Recently, a research team from Tsinghua University, Westlake University, and Zhijiang Laboratory proposed Knowledge-guided Pre-training of Graph Transformer (KPGT), a self-supervised learning framework that significantly enhances molecular representation learning to deliver improved, generalizable, and robust molecular property prediction. The KPGT framework integrates a graph Transformer designed specifically for molecular graphs with a knowledge-guided pre-training strategy to fully capture the structural and semantic knowledge of molecules.

Through extensive computational testing on 63 datasets, KPGT demonstrated superior performance in predicting molecular properties across various domains. Furthermore, the practical applicability of KPGT in drug discovery was verified by identifying potential inhibitors of two antitumor targets. Overall, KPGT provides a powerful and useful tool for advancing the AI-assisted drug discovery process.

The research was titled "A knowledge-guided pre-training framework for improving molecular representation learning" and was published in "Nature Communications" on November 21, 2023.


Determining molecular properties experimentally requires significant time and resources, and identifying molecules with desired properties is one of the most significant challenges in drug discovery. In recent years, artificial intelligence-based methods have played an increasingly important role in predicting molecular properties. One of the main challenges for such methods is how to effectively represent molecules.

In recent years, deep learning-based methods have emerged as a potentially useful tool for predicting molecular properties, mainly because of their excellent ability to automatically extract effective features from simple input data. Notably, various neural network architectures, including recurrent neural networks (RNNs), convolutional neural networks (CNNs), and graph neural networks (GNNs), are adept at modeling molecular data in various formats, ranging from the simplified molecular-input line-entry system (SMILES) to molecular images and molecular graphs. However, the limited availability of labeled molecules and the vastness of chemical space limit their predictive performance, especially when dealing with out-of-distribution data samples.

With the remarkable achievements of self-supervised learning in natural language processing and computer vision, these techniques have been applied to pre-train GNNs and improve molecular representation learning, leading to substantial progress on downstream molecular property prediction tasks.

The researchers hypothesize that introducing additional knowledge that quantitatively describes molecular characteristics into a self-supervised learning framework can effectively address these challenges. Molecules have many quantitative characteristics, such as molecular descriptors and fingerprints, that can be easily obtained with currently established computational tools. Integrating this additional knowledge can introduce rich molecular semantic information into self-supervised learning, thereby greatly enhancing the acquisition of semantically rich molecular representations.
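
As an illustration of how readily such quantitative knowledge can be computed, the sketch below derives molecular descriptors and a 512-bit RDKit topological fingerprint (RDKFP) for a single molecule. The choice of RDKit and the example molecule are assumptions for illustration; the article does not prescribe a particular toolkit at this point.

```python
# Minimal sketch: computing the kind of quantitative molecular "knowledge" the article
# refers to, using the open-source RDKit toolkit (assumed here for illustration).
from rdkit import Chem
from rdkit.Chem import Descriptors

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin, used only as an example input
mol = Chem.MolFromSmiles(smiles)

# Molecular descriptors: scalar properties computed from the molecular structure.
descriptors = {name: fn(mol) for name, fn in Descriptors.descList}

# RDKit topological fingerprint (RDKFP): a fixed-length bit vector encoding substructures.
fingerprint = Chem.RDKFingerprint(mol, fpSize=512)

print(len(descriptors), "descriptors, e.g. MolWt =", round(descriptors["MolWt"], 2))
print("fingerprint bits set:", fingerprint.GetNumOnBits(), "of", fingerprint.GetNumBits())
```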

Generally, existing self-supervised learning methods rely on GNNs as the core model. However, GNNs have limited model capacity, and they can have difficulty capturing long-range interactions between atoms. Transformer-based models, in contrast, have emerged as a game-changing alternative: characterized by large numbers of parameters and the ability to capture long-range interactions, they provide a promising approach to comprehensively modeling the structural characteristics of molecules.
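
To make the contrast concrete, the minimal sketch below compares one GNN-style message-passing step, which mixes information only between bonded atoms, with one self-attention step, in which every atom attends to every other atom in a single layer. This is a generic PyTorch illustration, not the paper's LiGhT architecture.

```python
# Contrast between local message passing and dense self-attention over a small molecule.
import torch

n_atoms, d = 6, 16
x = torch.randn(n_atoms, d)            # per-atom feature vectors
adj = torch.zeros(n_atoms, n_atoms)    # adjacency matrix of a simple chain of atoms
for i in range(n_atoms - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

# GNN-style message passing: each atom aggregates only its bonded neighbors,
# so information travels one bond per layer.
w_mp = torch.nn.Linear(d, d)
x_mp = torch.relu(w_mp(adj @ x))

# Transformer-style self-attention: the attention matrix is dense, so the first and
# last atoms can exchange information within a single layer.
wq, wk, wv = (torch.nn.Linear(d, d) for _ in range(3))
q, k, v = wq(x), wk(x), wv(x)
attn = torch.softmax(q @ k.T / d ** 0.5, dim=-1)   # (n_atoms, n_atoms), all atom pairs
x_attn = attn @ v

print("attention weight from atom 0 to atom 5:", attn[0, 5].item())
```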

Self-supervised learning framework KPGT

In this study, the researchers introduced a self-supervised learning framework called KPGT, which aims to enhance molecular representation learning for downstream molecular property prediction tasks. The KPGT framework consists of two main components: a backbone model called the Line Graph Transformer (LiGhT) and a knowledge-guided pre-training strategy. KPGT combines the high-capacity LiGhT model, specifically designed to accurately model molecular graph structures, with a knowledge-guided pre-training strategy that captures both the structural and semantic knowledge of molecules.

Using approximately 2 million molecules, the research team pre-trained LiGhT with the knowledge-guided pre-training strategy.
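
The following is a highly simplified sketch of what a knowledge-guided pre-training objective of this kind could look like: auxiliary heads trained to recover the molecular descriptors and fingerprint bits that serve as additional knowledge. The placeholder encoder output, head design, and loss weighting are illustrative assumptions, not the paper's actual LiGhT model or training recipe.

```python
# Sketch of a knowledge-guided auxiliary objective: predict precomputed molecular
# descriptors (regression) and RDKFP bits (binary) from a molecule-level embedding.
import torch
import torch.nn as nn

class KnowledgeGuidedHeads(nn.Module):
    def __init__(self, emb_dim=768, n_descriptors=200, n_fp_bits=512):
        super().__init__()
        self.descriptor_head = nn.Linear(emb_dim, n_descriptors)  # descriptor regression
        self.fingerprint_head = nn.Linear(emb_dim, n_fp_bits)     # fingerprint bit prediction

    def forward(self, mol_emb, descriptors, fingerprints):
        desc_loss = nn.functional.mse_loss(self.descriptor_head(mol_emb), descriptors)
        fp_loss = nn.functional.binary_cross_entropy_with_logits(
            self.fingerprint_head(mol_emb), fingerprints)
        return desc_loss + fp_loss  # equal weighting assumed for illustration

# Usage with a placeholder encoder output for a batch of 8 molecules.
heads = KnowledgeGuidedHeads()
mol_emb = torch.randn(8, 768)             # stand-in for the graph Transformer's output
descriptors = torch.randn(8, 200)         # precomputed, normalized descriptor values
fingerprints = torch.randint(0, 2, (8, 512)).float()
loss = heads(mol_emb, descriptors, fingerprints)
loss.backward()
```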


Illustration: Overview of KPGT. (Source: paper)

KPGT outperforms baseline methods in molecular property prediction, achieving significant improvements over several baselines across the 63 datasets.


Illustration: Comparative evaluation of KPGT and baseline methods in predicting molecular properties. (Source: paper)

In addition, the practical applicability of KPGT was demonstrated by successfully using it to identify potential inhibitors of two anti-tumor targets: hematopoietic progenitor kinase 1 (HPK1) and fibroblast growth factor receptor 1 (FGFR1).


Illustration: Identification of HPK1 inhibitors using KPGT. (Source: Paper)


Illustration: Identification of FGFR1 inhibitors using KPGT. (Source: paper)

Research Limitations

Despite the advantages of KPGT in effective molecular property prediction, there are still some limitations.

  • First, the integration of additional knowledge is the most distinctive feature of the proposed method. Beyond the 200 molecular descriptors and 512 RDKFPs used in KPGT, there is potential to incorporate various other types of additional knowledge.
  • Second, further research could integrate three-dimensional (3D) molecular conformations into the pre-training process, allowing the model to capture important 3D information about molecules and potentially enhancing its representation learning ability.
  • Third, while KPGT currently employs a backbone model with approximately 100 million parameters pre-trained on 2 million molecules, exploring larger-scale pre-training could yield more substantial benefits for molecular representation learning.

Overall, KPGT provides a powerful self-supervised learning framework for effective molecular representation learning, thereby advancing the field of artificial intelligence-assisted drug discovery.

Paper link: https://www.nature.com/articles/s41467-023-43214-1
