LLM4SVG: Empowering LLMs to Understand and Generate Complex Vector Graphics

Ximing Xing1
Juncheng Hu1
Guotao Liang1
Jing Zhang1
Dong Xu2
Qian Yu1
1Beihang University   2The University of Hong Kong

LLM4SVG teaser
Our LLM4SVG can understand and generate vector graphics from textual descriptions.
Our LLM4SVG is designed to:
(a) Understand the semantics of SVG (Scalable Vector Graphics) source code and directly extract the meanings conveyed by vector images;
(b) Generate corresponding structured SVG representations from textual prompts and decode them into SVG source code that accurately reflects the described content.
Panel (c) of the teaser shows example SVGs generated by our method; a toy input/output illustration of tasks (a) and (b) is sketched below.
TL;DR: Instruction-based fine-tuning empowers the LLM to understand and generate complex vector graphics.
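To make tasks (a) and (b) concrete, the toy example below pairs a minimal hand-written SVG with a matching description. The SVG snippet, the description, and the variable names are illustrative only and are not drawn from the LLM4SVG dataset.

# Toy illustration of the two tasks (example data, not from the LLM4SVG dataset).
svg_source = (
    '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 64 64">'
    '<circle cx="32" cy="20" r="14" fill="#e63946"/>'
    '<rect x="22" y="36" width="20" height="18" fill="#457b9d"/>'
    '</svg>'
)

# (a) Understanding: SVG source code in, natural-language description out.
understanding_sample = {
    "input": svg_source,
    "target": "A red circle above a small blue rectangle.",
}

# (b) Generation: textual prompt in, SVG source code out.
generation_sample = {
    "input": "A red circle above a small blue rectangle.",
    "target": svg_source,
}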

Abstract

The unprecedented advancements in Large Language Models (LLMs) have profoundly impacted natural language processing but have yet to fully embrace the realm of scalable vector graphics (SVG) generation. While LLMs encode partial knowledge of SVG data from web pages during training, recent findings suggest that semantically ambiguous and tokenized representations within LLMs may result in hallucinations in vector primitive predictions. Additionally, LLM training typically lacks modeling and understanding of the rendering sequence of vector paths, which can lead to occlusion between output vector primitives. In this paper, we present LLM4SVG, an initial yet substantial step toward bridging this gap by enabling LLMs to better understand and generate vector graphics. LLM4SVG facilitates a deeper understanding of SVG components through learnable semantic tokens, which precisely encode SVG elements and their corresponding properties so that the model produces semantically aligned SVG outputs. Using this series of learnable semantic tokens, we build a structured instruction-following dataset that supports the two primary tasks of comprehension and generation. Our method introduces a modular architecture to existing large language models, integrating semantic tags, vector instruction encoders, fine-tuned commands, and powerful LLMs to tightly combine geometric, appearance, and language information. To overcome the scarcity of SVG-text instruction data, we developed an automated data generation pipeline that collected a large-scale dataset of more than 250k SVGs and 580k SVG-text instructions, which enabled the two-stage training strategy popular in LLM development. By exploring various training strategies, we developed LLM4SVG, which moves significantly beyond optimization-based approaches and language-model-based baselines and achieves remarkable results in human evaluation tasks.
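As a rough sketch of how such learnable semantic tokens can be wired into an off-the-shelf LLM, the snippet below registers one special token per SVG element or attribute using the Hugging Face transformers API. The token spellings, the tag/attribute subset, and the gpt2 placeholder checkpoint are assumptions for illustration only, not the exact vocabulary or base model used by LLM4SVG.

# Minimal sketch: adding SVG semantic tokens to an LLM vocabulary (illustrative only).
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "gpt2"  # placeholder checkpoint; LLM4SVG builds on larger instruction-tuned LLMs

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# One learnable token per SVG element/attribute, so a tag such as <path d="..."/>
# maps to a few semantic tokens instead of many ambiguous sub-word pieces.
svg_semantic_tokens = [
    "[<svg>]", "[</svg>]", "[<path>]", "[<circle>]", "[<rect>]",
    "[d]", "[fill]", "[stroke]", "[cx]", "[cy]", "[r]",
]
tokenizer.add_special_tokens({"additional_special_tokens": svg_semantic_tokens})

# The newly added embedding rows are trained during instruction fine-tuning.
model.resize_token_embeddings(len(tokenizer))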


SVG Generation | Gallery


method

Methodology


Our LLM4SVG is capable of understanding and generating SVGs effectively. (1) During the training phase, we provide both the original SVG code $\mathbf{X}_{\mathrm{v}}$ and the corresponding instruction data $\mathbf{X}_{\mathrm{inst}}$ as input. For the understanding task, we use detailed descriptions $\mathbf{X}_{\mathrm{a}}$ generated by GPT-4 as the training labels. For the generation task, the SVG code portion is masked and serves as the target that the model needs to predict. (2) During the inference phase, for the understanding task, given the SVG source code, the model generates a description that aligns with the semantics expressed by the SVG; for the generation task, the model generates an SVG based on the input text prompt. During both training and inference, the rendered image $\mathbf{X}_{\mathrm{img}}$ of the SVG can be used as a conditional input to the model, guiding the content that the model understands or generates.
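A minimal sketch of how the generation-task labels described above can be built, assuming a standard causal-LM fine-tuning setup with a Hugging Face tokenizer; the function name and the -100 ignore index (PyTorch's cross-entropy convention) are illustrative and not taken from the LLM4SVG code. For the understanding task the roles are reversed, with the GPT-4 description $\mathbf{X}_{\mathrm{a}}$ serving as the target span instead of the SVG code.

IGNORE_INDEX = -100  # label value ignored by PyTorch's cross-entropy loss

def build_generation_example(tokenizer, instruction_text, svg_text):
    """Instruction tokens are masked out of the loss; SVG tokens are the prediction target."""
    prompt_ids = tokenizer(instruction_text, add_special_tokens=False)["input_ids"]
    svg_ids = tokenizer(svg_text, add_special_tokens=False)["input_ids"] + [tokenizer.eos_token_id]

    input_ids = prompt_ids + svg_ids
    labels = [IGNORE_INDEX] * len(prompt_ids) + svg_ids  # loss computed only on the SVG span
    return {"input_ids": input_ids, "labels": labels}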

method
An Overview of LLM4SVG

Comparison


method
Qualitative Comparison Between LLM4SVG and State-of-the-Art SVG Generation Methods, including optimization-based and LLM-based methods.
The figure above illustrates the visual quality of SVG generation methods, comparing both optimization-based and LLM-based approaches. Our method outperforms other LLM-based methods in terms of the completeness of the generated SVGs, the selection and placement of primitives, and the semantic richness conveyed by the vector graphics. Optimization-based methods use samples from a latent diffusion model as supervision during the SVG generation process; consequently, they typically employ a large number of overlapping and interwoven primitives to closely approximate these realistic samples. This often leads to excessive stroke redundancy, and the individual shapes of the primitives may appear irregular when viewed in isolation, making them less practical for real-life applications.

Citation

@article{xing2024llm4svg,
  title={Empowering LLMs to Understand and Generate Complex Vector Graphics},
  author={Xing, Ximing and Hu, Juncheng and Liang, Guotao and Zhang, Jing and Xu, Dong and Yu, Qian},
  journal={arXiv preprint arXiv:2412.11102},
  year={2024}
}