Visual embedding: A model for visualization

Cagatay Demiralp, Carlos E. Scheidegger, Gordon L. Kindlmann, David H. Laidlaw, Jeffrey Heer

Research output: Contribution to journal › Article › peer-review

31 Scopus citations


The authors propose visual embedding as a model for automatically generating and evaluating visualizations. A visual embedding is a function from data points to a space of visual primitives that measurably preserves structures in the data (domain) within the mapped perceptual space (range). The authors demonstrate its use with three examples: coloring of neural tracts, scatterplots with icons, and evaluation of alternative diffusion tensor glyphs. They discuss several techniques for generating visual-embedding functions, including probabilistic graphical models for embedding in discrete visual spaces. They also describe two complementary approaches, crowdsourcing and visual product spaces, for building visual spaces with associated perceptual-distance measures. In addition, they recommend several research directions for further developing the visual-embedding model.
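The abstract defines a visual embedding as a structure-preserving map from data to visual primitives. As a minimal illustration of that idea (not the authors' code, and using an assumed linear grayscale mapping as the "visual space"), the sketch below embeds scalar data into gray intensities and checks how well pairwise data distances are preserved in the range:

```python
import itertools

def visual_embedding(data, lo=0.0, hi=1.0):
    """Linearly embed scalar data into a grayscale 'visual space' [lo, hi].

    A hypothetical stand-in for a perceptual space: here, perceptual
    distance is taken to be the absolute difference in gray level.
    """
    dmin, dmax = min(data), max(data)
    span = (dmax - dmin) or 1.0  # avoid division by zero for constant data
    return [lo + (hi - lo) * (d - dmin) / span for d in data]

def max_distortion(data, grays):
    """Worst-case mismatch between scaled data distances and gray distances."""
    scale = (max(grays) - min(grays)) / (max(data) - min(data))
    worst = 0.0
    for (a, ga), (b, gb) in itertools.combinations(zip(data, grays), 2):
        worst = max(worst, abs(abs(ga - gb) - scale * abs(a - b)))
    return worst

data = [2.0, 3.5, 7.0, 9.0]
grays = visual_embedding(data)
# A linear map preserves pairwise distance ratios, so distortion stays near 0.
```

Real perceptual spaces (e.g. for color or glyph shape) are not linear, which is why the paper pairs embeddings with empirically measured perceptual-distance functions rather than assuming a uniform range as this sketch does.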

Original language: English (US)
Article number: 6756754
Pages (from-to): 10-15
Number of pages: 6
Journal: IEEE Computer Graphics and Applications
Issue number: 1
State: Published - 2014


Keywords

  • computer graphics
  • crowdsourcing
  • perception
  • perceptual distance
  • probabilistic model
  • visual embedding
  • visual product
  • visual space
  • visualization

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design


