


7B model surpasses GPT-4V! Hong Kong University of Science and Technology and others release the 'Graph Reasoning Question Answering' dataset GITQA: visual graphs can improve reasoning capabilities
Graph neural networks (GNNs) are good at leveraging the structural information of graphs for inference, but they often require domain-specific tuning to achieve optimal performance, which limits their ability to generalize across different tasks.
Large language models (LLMs) show stronger cross-task generalization in graph reasoning, but they often do not perform as well as dedicated graph neural network models on specific tasks.
Current research on graph reasoning, whether based on traditional graph neural networks or on large language models, often ignores the importance of visual information.
However, humans use visual features to complete graph tasks efficiently and accurately, such as judging whether a graph contains a cycle.
Therefore, it is of great significance to explore the role of visual form graph information in graph reasoning.
More specifically, can drawing a graph as an image give the model additional reasoning capabilities? Can these images (called visual graphs) enhance existing graph reasoning models based on other modalities?
To answer these questions, the research team from Hong Kong University of Science and Technology and Southern University of Science and Technology built GITQA, the first graph reasoning question-answering dataset containing visual graphs, and conducted extensive experiments on closed-source models such as GPT-4 Turbo and GPT-4V as well as open-source models such as Vicuna and LLaVA, confirming the role of visual graphs in graph reasoning and their mutual enhancement with the text modality.
Paper address: https://arxiv.org/abs/2402.02130
Project homepage: https://v-graph.github.io/
On the GITQA test benchmark, the multimodal models GITA-7B/13B, fine-tuned from LLaVA-7B/13B, demonstrate graph reasoning performance that surpasses GPT-4V.
GITQA Multimodal Graph Reasoning Question and Answer Dataset
The research team built the GITQA dataset and its corresponding test benchmark. GITQA contains more than 423K question-answer instances; each instance includes the corresponding graph structure, text, and visual information together with its question-answer pair.
The GITQA dataset comes in two versions: GITQA-Base and GITQA-Aug, where GITQA-Base contains visual graphs in only a single style.
GITQA-Aug is richer: it applies a variety of data augmentations to the visual graphs, including changing the layout, node shape, edge width, and node style, thereby providing more diverse visual representations.
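As a rough illustration of what such augmentations might look like, here is a minimal Python sketch that renders one graph under several visual styles using networkx and matplotlib; the specific layouts, shapes, and widths are assumptions made for illustration, not the exact settings used to build GITQA-Aug.

```python
# Sketch: render one graph under several visual augmentations.
# The layouts, shapes, and widths below are illustrative guesses,
# not the exact settings used to construct GITQA-Aug.
import networkx as nx
import matplotlib.pyplot as plt

G = nx.erdos_renyi_graph(n=8, p=0.3, seed=0)
base_pos = nx.spring_layout(G, seed=0)

augmentations = {
    "layout_circular":   dict(pos=nx.circular_layout(G)),
    "node_shape_square": dict(pos=base_pos, node_shape="s"),
    "edge_width_thick":  dict(pos=base_pos, width=3.0),
    "node_style_hollow": dict(pos=base_pos, node_color="white", edgecolors="black"),
}

for name, style in augmentations.items():
    plt.figure(figsize=(3, 3))
    nx.draw(G, with_labels=True, **style)  # draw the visual graph in this style
    plt.savefig(f"visual_graph_{name}.png", bbox_inches="tight")
    plt.close()
```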
As shown in Figure 1, the GITQA test benchmark contains eight representative graph reasoning tasks: Connectivity (determine whether two nodes in the graph are connected), Cycle (determine whether the graph contains a cycle), TS (find a topological ordering of the graph), SP (find the shortest path between two nodes), MaxFlow (compute the maximum flow between two nodes), BGM (compute the maximum matching of a bipartite graph), HP (find a Hamiltonian path in the graph), and GNN (simulate the message passing of a GNN).
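To make the structure-text-visual format concrete, below is a minimal Python sketch of what a single Cycle-task instance could look like; the field names and prompt wording are illustrative assumptions, not the actual GITQA schema.

```python
# Sketch: assemble one Cycle-task instance with structure, text, and
# visual information. Field names and wording are illustrative only.
import networkx as nx
import matplotlib.pyplot as plt

G = nx.gnm_random_graph(n=6, m=7, seed=42)

# Text modality: an edge-list description of the graph.
edges = ", ".join(f"({u}, {v})" for u, v in G.edges())
text_description = f"The graph has nodes 0-5 and undirected edges: {edges}."

# Visual modality: a rendered image of the same graph.
nx.draw(G, pos=nx.spring_layout(G, seed=42), with_labels=True)
plt.savefig("cycle_instance.png", bbox_inches="tight")
plt.close()

instance = {
    "task": "Cycle",
    "text": text_description,
    "image": "cycle_instance.png",
    "question": "Is there a cycle in this graph? Answer yes or no.",
    "answer": "no" if nx.is_forest(G) else "yes",
}
print(instance)
```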
The dataset for each task is divided into subsets of different difficulty levels according to the complexity of the graph structure (relevant statistics are shown in Table 1).
Experiments and results
Experiment 1: Comparison of graph reasoning capabilities of models based on different modal graph information
On the GITQA-Base dataset, the research team evaluated the performance of popular closed-source and open-source large language models (such as GPT-4 Turbo and Vicuna-7B/13B) and large multimodal language models (such as GPT-4V and LLaVA-7B/13B) under different graph input modalities: text only (T-only), vision only (V-only), and vision plus text (V+T), as shown in Figure 2.
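For clarity, a small sketch of how the three input settings might be assembled for evaluation follows; the prompt wording is an assumption and the paper's exact prompts may differ.

```python
# Sketch: build the model input for each of the three modality settings.
# Prompt wording is illustrative, not the exact GITQA prompt.
def build_input(text_desc: str, image_path: str, question: str, setting: str):
    """Return a (prompt, image) pair for T-only, V-only, or V+T evaluation."""
    if setting == "T-only":
        return f"{text_desc}\n{question}", None        # text description only
    if setting == "V-only":
        return question, image_path                    # image only
    if setting == "V+T":
        return f"{text_desc}\n{question}", image_path  # both modalities
    raise ValueError(f"unknown setting: {setting}")

prompt, image = build_input(
    "The graph has undirected edges: (0, 1), (1, 2), (2, 0).",
    "cycle_instance.png",
    "Is there a cycle in this graph? Answer yes or no.",
    "V+T",
)
```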
Specifically, the closed-source models GPT-4 Turbo and GPT-4V performed zero-shot inference, while the open-source models Vicuna and LLaVA were fine-tuned with the backbone parameters kept frozen, training only the projector and LoRA modules (in particular, the researchers named the LLaVA model after vision-text dual-modality fine-tuning GITA).
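For reference, a minimal sketch of this kind of fine-tuning with Hugging Face PEFT is shown below: the backbone stays frozen while the LoRA adapters and the multimodal projector are trained. The checkpoint name, target modules, and hyperparameters are assumptions and may differ from the authors' exact setup.

```python
# Sketch: freeze the LLaVA backbone, train only LoRA adapters plus the
# multimodal projector. Checkpoint, module names, and hyperparameters
# are assumptions, not the paper's exact configuration.
import torch
from transformers import LlavaForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf", torch_dtype=torch.float16
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # LLM attention
    modules_to_save=["multi_modal_projector"],  # projector trained in full
)

model = get_peft_model(model, lora_config)  # backbone weights stay frozen
model.print_trainable_parameters()
```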
Table 2 summarizes the test results for all eight graph reasoning tasks.
Visual modality vs. text modality
As can be seen from Table 2, on the Cycle and BGM tasks the visual modality performs better than the text modality, while on the other five tasks it is inferior to the text modality. This reveals that vision and text each have advantages in handling specific types of graph reasoning tasks.
Mutual enhancement of the visual and text modalities
For the closed-source models, GPT-4V (V+T) achieves a much higher average accuracy across the eight tasks than GPT-4 Turbo (T-only) and GPT-4V (V-only).
Similarly, for the open-source models (7B, 13B), the GITA models trained with bimodal data perform best on average. These observations verify that using visual and textual information simultaneously enhances a model's graph reasoning capability and achieves better performance than single-modality models.
More specifically, GITA-7B (V+T) outperforms LLaVA-7B (V-only) and Vicuna-7B (T-only) on almost all tasks. For the closed-source models, using both modalities achieved the highest accuracy on five of the eight tasks.
The fine-tuned LLaVA models can surpass GPT-4V
As shown in Table 2 and Figure 3, GITA-7B and GITA-13B, i.e. the LLaVA-7B/13B models after dual-modality fine-tuning, show a significant performance improvement of more than 13% over GPT-4V. This large improvement shows that the fine-tuned GITA models can effectively learn strong graph reasoning capabilities from the GITQA dataset.
Experiment 2: The impact of difficulty level on graph tasks
Table 3 further reports the models' test accuracy at different difficulty levels (the GNN task is omitted because it was too challenging for all models).
On the Cycle and BGM tasks, the visual modality alone outperformed the text modality at all difficulty levels and was comparable to using both modalities.
However, for other tasks, the performance of models using only the visual modality drops significantly when the difficulty increases from easy to medium or hard.
Similarly, on these tasks, models using only the text modality or the vision-plus-text modality also suffer a significant performance drop as the difficulty increases.
For the Connectivity task, GITA-7B (V+T) and GITA-13B (V+T) show comparable performance at all three difficulty levels.
However, this consistent pattern is not observed for GPT-4V (V+T), whose performance decreases as the difficulty level increases.
Experiment 3: Visual graph augmentation strategies and style preferences
The research team also explored the effectiveness of dedicated data augmentation strategies when fine-tuning the model.
Based on the different augmentation strategies, the researchers divided the GITQA-Aug dataset into four augmentation subsets: the layout-augmented set, the node-shape-augmented set, the edge-width-augmented set, and the node-style-augmented set.
The researchers separately fine-tuned the LLaVA-7B model on each of the four augmentation subsets, using only visual graph information; Table 4 compares the resulting inference performance with that before data augmentation.
It can be clearly seen that on the layout-augmented set the model's reasoning ability on the challenging tasks improves dramatically (SP up by 64.8%, HP up by 69.63%).
The other three data augmentation strategies actually lead to performance degradation.
Specifically, the model's average result on the layout-augmented set is more than 11% higher than on the GITQA-Base set, whereas the average results over the eight tasks on the other augmented sets are about 5% lower than on the base set.
These findings suggest that layout-based data augmentation provides a more effective visual perspective for graph reasoning. Furthermore, the researchers also tested visual graph reasoning performance for each individual style under each augmentation strategy. As shown in Table 5, the model exhibits no obvious style preference.