
GazeFusion: Saliency-Guided Image Generation

EasyChair Preprint no. 12845

11 pages · Date: March 31, 2024

Abstract

Diffusion models offer unprecedented image generation capabilities given just a text prompt. While emerging control mechanisms let users specify the desired spatial arrangement of the generated content, they can neither predict nor control where viewers will direct their attention, owing to the complexity of human vision. Recognizing the critical need for attention-controllable image generation in practical applications, we present a saliency-guided framework that incorporates data priors of human visual attention into the generation process. Given a desired viewer attention distribution, our control module conditions a diffusion model to generate images that draw viewers' attention toward the specified areas. To assess the efficacy of our approach, we conducted an eye-tracked user study and a large-scale model-based saliency analysis. The results show that both the cross-user eye gaze distributions and the saliency model predictions align with the desired attention distributions. Lastly, we outline several applications, including interactive design of saliency guidance, attention suppression in unwanted regions, and adaptive generation for varied display/viewing conditions.
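The abstract describes conditioning a diffusion model on a target viewer attention distribution. As a rough illustration only, the sketch below shows how saliency-conditioned generation could look with a ControlNet-style interface from the Hugging Face diffusers library; the checkpoint path, the choice of ControlNet, and the base model are assumptions made for this example and are not artifacts of the preprint itself.

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# Hypothetical checkpoint: a ControlNet trained to accept a grayscale saliency
# map as its conditioning image (not a model released with this preprint).
controlnet = ControlNetModel.from_pretrained(
    "path/to/saliency-controlnet", torch_dtype=torch.float16
)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Desired viewer attention distribution, e.g. a 512x512 grayscale heatmap where
# brighter pixels mark regions that should attract more gaze.
saliency_map = Image.open("target_saliency.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="a cozy reading nook with a window view",
    image=saliency_map,            # conditioning signal: where attention should go
    num_inference_steps=30,
    guidance_scale=7.5,
)
result.images[0].save("saliency_guided_output.png")

In such a setup, the saliency map plays the same role that edge or depth maps play in standard ControlNet pipelines: a spatial conditioning signal injected alongside the text prompt at every denoising step.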

Keyphrases: computer graphics, computer vision, human visual perception, image generation, machine learning, neural network

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:12845,
  author = {Yunxiang Zhang and Nan Wu and Connor Lin and Gordon Wetzstein and Qi Sun},
  title = {GazeFusion: Saliency-Guided Image Generation},
  howpublished = {EasyChair Preprint no. 12845},
  year = {EasyChair, 2024}}