


How to generate 'good' graphs? Systematic review of deep generative models for graph generation
https://www.zhuanzhi.ai/paper/a904f0aa0762e65e1dd0b8b464df7168
A graph describes objects and their relationships, and such structures appear in a wide variety of real-life scenarios. Graph generation is one of the key problems in this field: it considers learning the distribution of a given set of graphs and generating more novel graphs. Because of their widespread applications, generative models of graphs have a rich history; however, traditional models are handcrafted and can only capture a few statistical properties of graphs.
Recent progress in deep generative models for graph generation is an important step towards improving the fidelity of generated graphs and paves the way for new types of applications. This paper provides an extensive overview of the literature on deep generative models for graph generation. First, the formal definition and preliminary knowledge of deep generative models for graph generation are given. Second, taxonomies of deep generative models for unconditional and conditional graph generation are proposed, and the existing works in each category are compared and analyzed. Following this, an overview of the evaluation metrics in this specific area is given. Finally, the applications of deep graph generation are summarized and five promising research directions are pointed out.
Introduction
Graphs are ubiquitous in the real world, representing objects and their relationships, such as social networks, citation networks, biological networks, transportation networks, etc. Graphs are also known to have complex structures that contain rich underlying value [1]. Significant efforts have been made in this area, resulting in a rich literature of methods for dealing with various graph problems.
These works can be divided into two categories: 1) predicting and analyzing patterns of a given graph; 2) learning the distribution of given graphs and generating more novel graphs. The first category covers many research areas, including node classification, graph classification, and link prediction, where a great deal of work has been done over the past few decades. The second category corresponds to graph generation problems, which are the focus of this article.
Graph generation refers to modeling and generating real-world graphs, with applications in several fields, such as understanding interaction dynamics in social networks [2], [3], [4], anomaly detection [5], protein structure modeling [6], [7], source code generation and translation [8], [9], and semantic parsing [10]. Because of this wide range of applications, generative models of graphs have a rich history, resulting in well-known models such as random graphs, small-world models, stochastic block models, and Bayesian network models, which generate graphs based on a priori structural assumptions [11]. These graph generative models [12], [13], [14] aim to model pre-selected graph families such as random graphs [15], small-world networks [16], and scale-free graphs [12]. However, due to their simplicity and hand-crafted nature, these random graph models typically have limited capacity to model complex dependencies and can only capture a few statistical properties of a graph.
These methods usually work well for the properties that their predefined principles were tailored to, but often fail on other properties. For example, a contact-network model can fit an influenza epidemic but not dynamic functional connectivity; yet in many areas the nature and generative principles of networks are largely unknown, such as the mechanisms of mental illness in brain networks, cyberattacks, and the spread of malware. As another example, Erdős–Rényi graphs lack the heavy-tailed degree distribution typical of many real-world networks. Furthermore, the reliance on a priori assumptions prevents these traditional techniques from being applied in larger-scale domains where prior knowledge of the graphs is often unavailable.
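To make this limitation concrete, the following is a minimal, illustrative sketch (not taken from the survey) using networkx: an Erdős–Rényi G(n, p) graph reproduces a target average degree, but its degree distribution stays concentrated around the mean rather than showing the heavy tail seen in many real networks.

```python
# Illustrative only: hand-crafted models such as Erdos-Renyi G(n, p) capture a few
# statistics (e.g., average degree ~ p * (n - 1)) but not heavy-tailed degree
# distributions typical of many real-world networks.
import networkx as nx

n, p = 1000, 0.01
G = nx.erdos_renyi_graph(n, p, seed=0)

degrees = [d for _, d in G.degree()]
print("average degree:", sum(degrees) / n)  # close to p * (n - 1) = 9.99
print("max degree:", max(degrees))          # concentrated near the mean, no heavy tail
```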
Given the limitations of traditional graph generation techniques, a key open challenge is to develop methods that can learn generative models directly from a set of observed graphs. This is an important step towards improving the fidelity of generated graphs, and it paves the way for new types of applications, such as the discovery of new drugs [17], [18] and protein structure modeling [19], [20], [21]. Recent advances in deep generative models, such as variational autoencoders (VAEs) [22] and generative adversarial networks (GANs) [23], have been adapted to graph generation, and many deep generative models have been formalized for this task. Graph generation is thus a promising area for deep generative models and is the focus of this review.
A wide range of work has been carried out on deep graph generation, ranging from one-shot graph generation to sequential graph generation processes and adopting various deep generative learning strategies. These methods aim to address one or several of the above challenges and come from different fields, including machine learning, bioinformatics, artificial intelligence, human health, and social network mining. However, methods developed in different research areas often use different vocabulary and approach the problem from different perspectives.
Furthermore, there is a lack of standard and comprehensive evaluation procedures to validate the developed deep generative models for graphs. To this end, this paper provides a systematic review of deep generative models for graph generation. The purpose is to help interdisciplinary researchers choose appropriate techniques to solve problems in their application fields and, more importantly, to help graph generation researchers understand the basic principles of graph generation and identify open research opportunities in the field of deep graph generation. To the best of our knowledge, this is the first comprehensive review of deep generative models for graph generation. Below, we summarize the main contributions of this review:
- We propose a taxonomy of deep generative models for graph generation, classified by problem setting and approach. The advantages, disadvantages, and relationships between the different subcategories are presented, and deep generative models for graph generation as well as the underlying basic deep generative models are described, analyzed, and compared in detail.
- We summarize and classify the results of existing evaluation procedures and metrics for deep generative models on benchmark datasets and corresponding graph generation tasks.
- We introduce existing application areas of deep graph generative models, and the potential benefits and opportunities they bring to these applications.
- We present several open issues and promising future research directions in the field of deep generative models for graph generation.
Unconditional Deep Generative Models for Graph Generation
The goal of unconditional deep graph generation is to learn the distribution p_model(G), via a deep generative model, from a set of observed real graphs sampled from the true distribution p(G). According to the style of the generation process, we can divide these methods into two main branches: (1) sequential generation, which generates nodes and edges in sequence; and (2) one-shot generation, which builds a probabilistic graph model from a matrix representation and generates all nodes and edges at once. Both styles have advantages and disadvantages. Sequential generation efficiently makes local decisions conditioned on what has been generated so far, but has difficulty maintaining long-term dependencies; as a result, some global properties of graphs (such as scale-free behavior) are difficult to capture. Furthermore, existing work on sequential generation is limited to predefined node orderings, leaving open the role of permutations. One-shot methods can generate and refine the entire graph (i.e., nodes and edges) together over multiple iterations, thereby modeling the global properties of the graph, but they are time-consuming because the relationships between all pairs of nodes must be modeled collectively; the complexity usually exceeds O(N²), making most of these methods difficult to scale to large graphs.
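As a rough illustration of the one-shot style, the sketch below (assuming PyTorch; the class name, dimensions, and architecture are hypothetical, not a specific method from the survey) decodes a latent vector into an N x N matrix of edge probabilities and samples the whole adjacency matrix at once, which also makes the O(N²) cost explicit.

```python
# Minimal sketch of a one-shot graph decoder (GraphVAE-flavored, illustrative only).
import torch
import torch.nn as nn

class OneShotGraphDecoder(nn.Module):
    def __init__(self, latent_dim: int, max_nodes: int, hidden_dim: int = 128):
        super().__init__()
        self.max_nodes = max_nodes
        # A single MLP emits all N*N edge logits simultaneously, which is why
        # one-shot methods cost at least O(N^2) in the number of nodes.
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, max_nodes * max_nodes),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        logits = self.mlp(z).view(-1, self.max_nodes, self.max_nodes)
        # Symmetrize for undirected graphs and return edge probabilities.
        logits = (logits + logits.transpose(1, 2)) / 2
        return torch.sigmoid(logits)

# Usage: sample a latent vector, then sample the entire adjacency matrix in one shot.
decoder = OneShotGraphDecoder(latent_dim=32, max_nodes=10)
z = torch.randn(1, 32)
edge_probs = decoder(z)                  # shape (1, 10, 10)
adjacency = torch.bernoulli(edge_probs)  # whole graph generated at once
```

A sequential method would instead emit this adjacency information edge by edge (or node by node), conditioning each step on the partial graph built so far.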
Conditional Deep Generative Models for Graph Generation
The goal of conditional deep graph generation is to learn the conditional distribution p_model(G|y) from a set of observed real graphs G and their corresponding auxiliary information (i.e., the condition y). The auxiliary information can be category labels, semantic context, graphs from other distribution spaces, etc. Compared with unconditional deep graph generation, conditional generation must not only handle the challenges of generating graphs themselves but also consider how to extract features from the given condition and integrate them into the generation of the graph.
Therefore, in order to systematically introduce existing conditional deep graph generation models, we mainly describe how these methods handle the condition. Since the condition can be any form of auxiliary information, it is divided into three types: graphs, sequences, and semantic context, as shown in the yellow part of the taxonomy tree in Figure 1.
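One common way to integrate a condition, sketched below under the same hypothetical setup as the earlier one-shot decoder (illustrative only, not a specific method from the survey), is to embed y and concatenate it with the latent code z before decoding, so that the generated edge probabilities depend on y.

```python
# Minimal sketch of condition injection for p_model(G | y) (illustrative only).
import torch
import torch.nn as nn

class ConditionalGraphDecoder(nn.Module):
    def __init__(self, latent_dim: int, cond_dim: int, max_nodes: int, hidden_dim: int = 128):
        super().__init__()
        self.max_nodes = max_nodes
        self.cond_embed = nn.Linear(cond_dim, hidden_dim)  # extract features from the condition y
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, max_nodes * max_nodes),
        )

    def forward(self, z: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Fuse the condition embedding with the latent code so generation depends on y.
        h = torch.cat([z, torch.relu(self.cond_embed(y))], dim=-1)
        logits = self.mlp(h).view(-1, self.max_nodes, self.max_nodes)
        return torch.sigmoid((logits + logits.transpose(1, 2)) / 2)

# Usage: here y is a one-hot category label, one of the condition types discussed above.
decoder = ConditionalGraphDecoder(latent_dim=32, cond_dim=4, max_nodes=10)
z = torch.randn(1, 32)
y = torch.tensor([[1.0, 0.0, 0.0, 0.0]])
edge_probs = decoder(z, y)  # edge probabilities conditioned on y
```

When the condition is itself a graph or a sequence, the linear embedding would typically be replaced by a graph encoder or a sequence encoder, respectively.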