Abstract
The authors propose visual embedding as a model for automatically generating and evaluating visualizations. A visual embedding is a function from data points to a space of visual primitives that measurably preserves structures in the data (the domain) within the mapped perceptual space (the range). The authors demonstrate its use with three examples: coloring of neural tracts, scatterplots with icons, and evaluation of alternative diffusion tensor glyphs. They discuss several techniques for generating visual-embedding functions, including probabilistic graphical models for embedding in discrete visual spaces. They also describe two complementary approaches, crowdsourcing and visual product spaces, for building visual spaces with associated perceptual-distance measures. In addition, they recommend several research directions for further developing the visual-embedding model.
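To make the core idea concrete: a visual embedding seeks an assignment of data points to visual primitives such that perceptual distances between the chosen primitives mirror distances in the data. The following is a minimal illustrative sketch, not the paper's method; the brute-force search, the grayscale visual space, and the squared-difference distortion measure are all simplifying assumptions for a toy instance of embedding into a discrete visual space.

```python
import itertools

def embedding_distortion(data, visuals, assignment, d_data, d_visual):
    # Sum of squared differences between data-space distances and
    # perceptual-space distances under the given assignment.
    total = 0.0
    n = len(data)
    for i in range(n):
        for j in range(i + 1, n):
            dd = d_data(data[i], data[j])
            dv = d_visual(visuals[assignment[i]], visuals[assignment[j]])
            total += (dd - dv) ** 2
    return total

def best_embedding(data, visuals, d_data, d_visual):
    # Brute-force search over injective maps from data points to
    # visual primitives; feasible only for very small inputs.
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(range(len(visuals)), len(data)):
        cost = embedding_distortion(data, visuals, perm, d_data, d_visual)
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost

# Toy example: 1-D data embedded into grayscale values (0 = black, 1 = white),
# using absolute difference as both the data and the perceptual distance.
data = [0.0, 0.5, 1.0]
grays = [0.0, 0.25, 0.5, 0.75, 1.0]
dist = lambda a, b: abs(a - b)
assignment, cost = best_embedding(data, grays, dist, dist)
# assignment picks the grays 0.0, 0.5, 1.0, reproducing the data distances exactly.
```

In practice a perceptual-distance measure (e.g., one obtained via crowdsourcing, as the authors describe) would replace the simple absolute difference used here, and the combinatorial search would be replaced by the probabilistic or optimization techniques the paper discusses.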
| Field | Value |
|---|---|
| Original language | English (US) |
| Article number | 6756754 |
| Pages (from-to) | 10-15 |
| Number of pages | 6 |
| Journal | IEEE Computer Graphics and Applications |
| Volume | 34 |
| Issue number | 1 |
| DOIs | |
| State | Published - 2014 |
Keywords
- computer graphics
- crowdsourcing
- perception
- perceptual distance
- probabilistic model
- visual embedding
- visual product
- visual space
- visualization
ASJC Scopus subject areas
- Software
- Computer Graphics and Computer-Aided Design