VisGraphVar: A Benchmark Generator for Assessing Variability in Graph Analysis Using Large Vision-Language Models

Camilo Chacón Sartori*, Christian Blum, Filippo Bistaffa
Artificial Intelligence Research Institute (IIIA-CSIC)
*Corresponding author: cchacon@iiia.csic.es

An overview of LVLM performance across the seven tasks (complete dataset).

Abstract

The rapid advancement of Large Vision-Language Models (LVLMs) has shown immense potential: these models are increasingly capable of tackling abstract visual tasks. Geometric structures, particularly graphs with their inherent flexibility and complexity, serve as an excellent benchmark for evaluating these models' predictive capabilities. While human observers can readily identify subtle visual details and perform accurate analyses, our investigation reveals that state-of-the-art LVLMs exhibit consistent limitations in specific visual graph scenarios, especially when confronted with stylistic variations. In response to these challenges, we introduce VisGraphVar (Visual Graph Variability), a customizable benchmark generator able to produce graph images for seven distinct task categories (detection, classification, segmentation, pattern recognition, link prediction, reasoning, and matching), designed to systematically evaluate the strengths and limitations of individual LVLMs. We use VisGraphVar to produce 990 graph images and evaluate six LVLMs under two distinct prompting strategies: zero-shot and chain-of-thought. The findings demonstrate that variations in the visual attributes of images (e.g., node labeling and layout) and the deliberate inclusion of visual imperfections, such as overlapping nodes, significantly affect model performance. This research emphasizes the importance of comprehensive evaluation across graph-related tasks, extending beyond reasoning alone. VisGraphVar offers valuable insights to guide the development of more reliable and robust systems capable of advanced visual graph analysis.

Tasks

An overview of the seven tasks covered by VisGraphVar (1-7). Each task poses a different challenge for LVLMs, enabling a detailed per-task performance comparison and evaluation.

Results

Average LVLM performance on the VisGraphVar dataset (ordered best to worst, left to right).

1. The Striking Case of Spectral Layout

2. Pixtral-12B and the Complex Task of Matching

3. The Impact of Node Labels on Model Performance

BibTeX

@misc{sartori2024visgraphvarbenchmarkgeneratorassessing,
  title={VisGraphVar: A Benchmark Generator for Assessing Variability in Graph Analysis Using Large Vision-Language Models},
  author={Camilo Chacón Sartori and Christian Blum and Filippo Bistaffa},
  year={2024},
  eprint={2411.14832},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2411.14832},
}