In recent years, graph neural networks (GNNs) have made rapid and remarkable progress. The field, also known as graph deep learning, graph representation learning, or geometric deep learning, is among the fastest-growing research topics in machine learning, and in deep learning in particular. This talk, titled "Basics, Frontiers and Applications of GNN", mainly introduces the comprehensive book "Basics, Frontiers and Applications of Graph Neural Networks", edited by the scholars Wu Lingfei, Cui Peng, Pei Jian and Zhao Liang.
Graphs are a universal language for describing and modeling complex systems. A graph itself is not complicated: it consists mainly of nodes and edges. We can use nodes to represent any objects we want to model, and edges to represent relationships or similarities between pairs of nodes. What we call graph neural networks or graph machine learning typically takes the structure of the graph, together with edge and node information, as the input of an algorithm and outputs the desired result. For example, when we enter a query into a search engine, the engine returns personalized search results based on the query, user information, and some contextual information; all of this information can be naturally organized as a graph.
Graph-structured data can be found everywhere: the Internet, social networks, and so on. In the currently very popular field of protein discovery, people use graphs to describe and model existing proteins, and generate new graphs to help discover new drugs. We can also use graphs for complex program analysis, as well as for high-level reasoning in computer vision.
Graph machine learning is not a new topic; the research direction has existed for the past 20 years, although it was relatively niche for most of that time. Since 2016, with the emergence of papers on modern graph neural networks, graph machine learning has become a popular research direction. It has been found that this new generation of methods can better learn from both the data itself and the relationships between data points, yielding better representations and ultimately better performance on important downstream tasks.
The earliest paper on graph neural networks appeared in 2009, before deep learning became popular. Papers on modern graph neural networks appeared in 2016 as improvements over the early models. The emergence of GCN then spurred rapid development, and since 2017 a large number of new algorithms have emerged. As these algorithms matured, industry began applying them to practical problems from around 2019, and many open-source tools were developed to make that work more efficient. Since 2021, several books on graph neural networks have been written, including of course "Basics, Frontiers and Applications of Graph Neural Networks".
The book "Basics, Frontiers and Applications of Graph Neural Networks" systematically introduces the core concepts and techniques of graph neural networks as well as cutting-edge research, and presents applications across different domains. Readers from both academia and industry can benefit from it.
Feature learning on graphs is very similar to deep learning: the goal is to design an effective task-specific or task-independent feature learning method that maps the nodes of the original graph into a low-dimensional space, obtaining embedding representations of the nodes that are then used for downstream tasks.
There are two types of representations that need to be learned in graph neural networks:
- A filter operation takes the graph's matrix and the node vector representations as input and iteratively learns to update the node representations. Common filter operations are spectral-based, spatial-based, attention-based, and recurrent-based.
- A pool operation takes the graph's matrix and the node vector representations as input and iteratively learns a coarsened graph with fewer nodes, together with its node representations, finally producing a graph-level vector that represents the entire graph. Common pool operations include flat graph pooling (e.g., max, average, min) and hierarchical graph pooling (e.g., DiffPool).
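A flat pooling operation can be sketched in a few lines of NumPy. The function name, the mode strings, and the tiny example matrix below are ours, for illustration only:

```python
import numpy as np

def flat_graph_pooling(H, mode="mean"):
    """Collapse node embeddings H (n_nodes x d) into one graph-level vector.

    Flat pooling ignores the graph structure and simply reduces over nodes;
    hierarchical methods such as DiffPool instead learn soft cluster
    assignments to coarsen the graph step by step.
    """
    if mode == "mean":
        return H.mean(axis=0)
    if mode == "max":
        return H.max(axis=0)
    if mode == "min":
        return H.min(axis=0)
    raise ValueError(f"unknown pooling mode: {mode}")

# Three nodes with 2-dimensional embeddings.
H = np.array([[1.0, 2.0],
              [3.0, 0.0],
              [5.0, 4.0]])
print(flat_graph_pooling(H, "mean"))  # [3. 2.]
print(flat_graph_pooling(H, "max"))   # [5. 4.]
```

Because flat pooling reduces over the node axis, the result has a fixed size regardless of how many nodes the graph contains, which is what makes graph-level prediction possible.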
In this way, each node defines its own computation graph. We can organize the computation graph into layers: the first layer holds the most primitive information, and information is aggregated and passed on layer by layer to learn vector representations of all nodes.
The figure above outlines the four main steps of learning a graph neural network model.
The figure above gives an example that uses averaging as the aggregation function: the vector representation of node v at layer k depends on the average of its neighbors' representations at layer k-1, together with its own representation at layer k-1.
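The averaging scheme just described can be sketched as a single layer in NumPy. The ReLU nonlinearity and the two weight matrices are illustrative stand-ins for learned parameters:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def mean_aggregation_layer(A, H, W_neigh, W_self):
    """One GNN layer with mean aggregation.

    For each node v:
        h_v <- relu( W_neigh @ mean(h_u for u in N(v)) + W_self @ h_v )
    A is the binary adjacency matrix; H holds the previous-layer embeddings.
    """
    deg = A.sum(axis=1, keepdims=True)          # neighbor counts
    neigh_mean = (A @ H) / np.maximum(deg, 1)   # average over neighbors
    return relu(neigh_mean @ W_neigh.T + H @ W_self.T)

# Path graph 0 - 1 - 2, identity weights for easy inspection.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
out = mean_aggregation_layer(A, H, np.eye(2), np.eye(2))
print(out)  # [[1. 1.] [1. 1.5] [1. 2.]]
```

Stacking k such layers lets each node see information from its k-hop neighborhood, which matches the layered computation graph described above.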
To summarize the above: the main idea of a graph neural network is to generate a target node's vector representation by aggregating information from its neighbor nodes. This design shares parameters in the encoder and also enables inductive learning.

GCN is one of the most classic algorithms. It operates directly on the graph and exploits its structural information, focusing on improving model speed, practicality, and stability. As shown in the figure above, GCN has gone through several iterations. The GCN paper was of epoch-making significance and laid the foundation for graph neural networks.
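The GCN propagation rule, H' = ReLU(D^(-1/2) (A + I) D^(-1/2) H W), can be sketched directly in NumPy. The weight matrix W below is a stand-in for trained parameters:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    Follows the renormalization trick from the GCN paper (self-loops plus
    symmetric degree normalization); W is untrained here.
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)

# Two connected nodes; with identity H and W, each node ends up
# averaging itself with its neighbor.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
out = gcn_layer(A, np.eye(2), np.eye(2))
print(out)  # [[0.5 0.5] [0.5 0.5]]
```

The symmetric normalization is what keeps repeated propagation numerically stable: without it, high-degree nodes would dominate the aggregated representations.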
The core idea of MPNN is to cast graph convolution as a message-passing process. It defines two functions: an aggregation function and an update function. The algorithm is simple and general, but not efficient.
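The aggregate-and-update structure of MPNN can be sketched in a few lines. The sum aggregator and the toy `message_fn`/`update_fn` passed in the example are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def mpnn_step(edges, h, message_fn, update_fn):
    """One round of message passing in the MPNN framework.

    Messages m_v = sum of message_fn(h_u, h_v) over incoming edges (u, v);
    new state h_v = update_fn(h_v, m_v).
    """
    m = {v: np.zeros_like(h[v]) for v in h}
    for u, v in edges:                              # aggregate along edges
        m[v] = m[v] + message_fn(h[u], h[v])
    return {v: update_fn(h[v], m[v]) for v in h}

h = {0: np.array([1.0]), 1: np.array([2.0]), 2: np.array([3.0])}
edges = [(0, 1), (2, 1), (1, 0)]
h_new = mpnn_step(edges, h,
                  message_fn=lambda hu, hv: hu,     # pass sender state
                  update_fn=lambda hv, mv: hv + mv) # accumulate into state
print(h_new[1])  # node 1 receives from 0 and 2: 2 + 1 + 3 = [6.]
```

GCN, GraphSAGE, and GAT can all be expressed as particular choices of these two functions, which is exactly why MPNN is general but, as a framework, not optimized for speed.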
GraphSAGE is an industrial-grade algorithm. It samples a fixed number of neighbor nodes and uses them to learn the vector representation of each node.
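A minimal sketch of GraphSAGE-style neighbor sampling follows. The fixed seed, the adjacency-list format, and the unweighted concat-with-mean embedding are simplifying assumptions; the real algorithm applies learned weight matrices after aggregation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neighbors(adj_list, node, num_samples):
    """GraphSAGE-style sampling: draw a fixed-size neighbor sample
    (with replacement if the neighborhood is smaller), so per-node cost
    is bounded regardless of degree."""
    neigh = adj_list[node]
    replace = len(neigh) < num_samples
    return rng.choice(neigh, size=num_samples, replace=replace)

def sage_mean_embed(adj_list, H, node, num_samples=2):
    """Toy embedding: concatenate a node's own features with the mean of
    its sampled neighbors' features (no learned weights in this sketch)."""
    sampled = sample_neighbors(adj_list, node, num_samples)
    neigh_mean = H[sampled].mean(axis=0)
    return np.concatenate([H[node], neigh_mean])

# Path graph 0 - 1 - 2 with one-hot node features.
adj = {0: [1], 1: [0, 2], 2: [1]}
H = np.eye(3)
emb = sage_mean_embed(adj, H, node=0)
print(emb)  # [1. 0. 0. 0. 1. 0.]
```

Bounding the number of sampled neighbors is what makes GraphSAGE practical on industrial graphs, where a few hub nodes can have millions of edges.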
GAT introduces the idea of attention; its core point is to dynamically learn edge weights during message passing.
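A sketch of how GAT-style attention weights can be computed for a single head. The split `a_src`/`a_dst` score vectors and the 0.2 LeakyReLU slope follow the common formulation, but the vectors passed in the example are stand-ins, not trained parameters:

```python
import numpy as np

def gat_attention(A, H, a_src, a_dst):
    """Compute simplified single-head GAT edge weights.

    score(u -> v) = LeakyReLU(a_src . h_u + a_dst . h_v); a softmax over
    each node's neighborhood (self-loop included) turns scores into weights.
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                       # attend to self as well
    s = H @ a_src                               # per-node source scores
    t = H @ a_dst                               # per-node target scores
    e = s[None, :] + t[:, None]                 # e[v, u] = score(u -> v)
    e = np.where(e > 0, e, 0.2 * e)             # LeakyReLU, slope 0.2
    e = np.where(A_hat > 0, e, -np.inf)         # mask non-edges
    e = e - e.max(axis=1, keepdims=True)        # numerically stable softmax
    w = np.exp(e)
    return w / w.sum(axis=1, keepdims=True)     # each row sums to 1

# Two connected nodes with symmetric features: attention splits evenly.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
w = gat_attention(A, np.eye(2), np.ones(2), np.ones(2))
print(w)  # [[0.5 0.5] [0.5 0.5]]
```

The learned rows of this matrix replace the fixed normalization of GCN, letting the model weight informative neighbors more heavily.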
In addition to the algorithms introduced above, there is also GGNN, whose distinguishing feature is that it can output multiple nodes; interested readers can consult the related papers.

In the book "Basics, Frontiers and Applications of Graph Neural Networks", Chapters 5, 6, 7 and 8 also cover how to evaluate graph neural networks, as well as their scalability, interpretability, and adversarial robustness. If you are interested, you can read the corresponding chapters.

## 3. The Frontier of Graph Neural Networks

1. Graph Structure Learning

Graph neural networks require graph-structured data, but the given graph structure is not necessarily optimal: it may contain considerable noise, and in many applications there may be no graph structure at all, only raw features. We therefore want to use the graph neural network itself to learn an optimal graph structure together with the node representations.
Experimental results demonstrate the advantages of this method.
The visualization of the learned graphs shows that they tend to cluster similar objects together, which gives them a degree of interpretability.
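A common starting point for graph structure learning is to induce a graph from raw features by similarity. The k-nearest-neighbor cosine construction below is one such illustrative choice; real methods refine the adjacency jointly with the GNN, which this sketch does not do:

```python
import numpy as np

def learn_graph_from_features(X, k=2):
    """Build a graph from raw features X (n_nodes x d) via cosine
    similarity: connect each node to its k most similar nodes, then
    symmetrize. Serves as an initial graph for structure learning."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T                               # cosine similarity
    np.fill_diagonal(S, -np.inf)                # no self-edges
    A = np.zeros_like(S)
    idx = np.argsort(-S, axis=1)[:, :k]         # top-k most similar
    for i, js in enumerate(idx):
        A[i, js] = 1.0
    return np.maximum(A, A.T)                   # symmetrize

# Two feature clusters: {0, 1} and {2, 3}. With k=1, edges form
# within clusters only.
X = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0],
              [0.1, 0.9]])
A_learned = learn_graph_from_features(X, k=1)
print(A_learned)
```

This illustrates the intuition behind the visualization above: a similarity-driven construction naturally groups similar objects, and structure-learning methods start from such a graph and denoise or refine it during training.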
2. Other Frontiers

The book "Basics, Frontiers and Applications of Graph Neural Networks" also introduces the following frontier research directions, which have important applications in many scenarios:
- We can use session information to construct a heterogeneous global graph, learn vector representations of users or items with a graph neural network, and use these representations for personalized recommendation.
- We can track how objects change dynamically over time and use graph neural networks to deepen video understanding.
- We can use graph neural networks to understand high-level information in natural language.
A1: Graph neural networks are a very important branch; the approach developing in parallel with them is the Transformer. Given the flexibility of graph neural networks, GNNs and Transformers can be combined to play to each other's strengths.
Q2: Can GNNs and causal learning be combined? How?

A2: The key element of causal learning is the causal graph, which combines naturally with GNNs. A difficulty of causal learning is that its datasets are small; we can use the capabilities of GNNs to learn causal graphs better.
Q3: What are the differences and connections between the interpretability of GNNs and that of traditional machine learning?

A3: There is a detailed discussion in the book "Basics, Frontiers and Applications of Graph Neural Networks".
Q4: How can we train and run inference with GNNs directly on a graph database, using the capabilities of graph computing?

A4: At present there is no mature practice on a unified graph computing platform; some startups and research teams are exploring this direction. It will be a very valuable and challenging research direction, and a more feasible approach is to divide the work by application domain.
The above is the detailed content of "The Foundation, Frontier and Application of GNN". For more information, please follow other related articles on the PHP Chinese website!