Compressed videos constitute 70% of Internet traffic, and video upload growth rates far outpace improvements in compute and storage. Leveraging perceptual cues such as saliency, i.e., the regions where viewers focus their attention, can reduce compressed video size while maintaining perceptual quality, but doing so requires significant changes to video codecs and neglects how this perceptual information is stored and managed.
In this talk, we describe Vignette, a new compression technique
and storage manager for perception-based video compression.
Vignette complements off-the-shelf compression software
and hardware codec implementations. Vignette's compression technique uses a neural network to predict the saliency information used during transcoding, and its storage manager integrates this perceptual information into the video storage system to support a perceptual compression feedback loop.
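As a rough illustration of the kind of mapping such a pipeline performs, the sketch below downsamples a predicted saliency map to a tile grid and assigns each tile a quantization parameter, so that salient tiles are encoded at higher quality than non-salient ones. The tile grid size, QP range, and function names are illustrative assumptions, not Vignette's actual interface.

```python
# A minimal sketch, assuming a saliency map normalized to [0, 1] and a
# tile-based encoder that accepts per-tile quantization parameters (QPs).
import numpy as np


def saliency_to_tile_qp(saliency: np.ndarray,
                        tile_rows: int = 4,
                        tile_cols: int = 8,
                        qp_min: int = 22,
                        qp_max: int = 42) -> np.ndarray:
    """Map each tile's mean saliency to a QP: salient tiles get a low QP
    (high quality), non-salient tiles get a high QP (aggressive compression)."""
    h, w = saliency.shape
    qp = np.empty((tile_rows, tile_cols), dtype=int)
    for r in range(tile_rows):
        for c in range(tile_cols):
            tile = saliency[r * h // tile_rows:(r + 1) * h // tile_rows,
                            c * w // tile_cols:(c + 1) * w // tile_cols]
            # Linear mapping: saliency 1.0 -> qp_min, saliency 0.0 -> qp_max.
            qp[r, c] = round(qp_max - tile.mean() * (qp_max - qp_min))
    return qp


if __name__ == "__main__":
    # Stand-in for a saliency map predicted by a neural network
    # (a single peak of attention near the center of a 1080p frame).
    ys, xs = np.mgrid[0:1080, 0:1920]
    saliency_map = np.exp(-(((xs - 960) / 300) ** 2 + ((ys - 540) / 200) ** 2))
    print(saliency_to_tile_qp(saliency_map))
```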
Vignette’s saliency-based optimizations reduce storage by up
to 95% with minimal quality loss, and Vignette videos lead to
power savings of 50% on mobile phones during video playback.