Wireless Video Quality and Applications

29 Jul 2014

Video traffic currently comprises more than 50% of all wireless and mobile device data volume, and that share is expected to keep growing for years to come. Likewise, deployments of small wireless sensors in homes, factories, retail outlets, automobiles and nearly everywhere else are proliferating, with growth in this space expected to remain exponential for many years. Many devices in the wireless sensing layer of the Internet of Things (IoT) will be video-based, making it possible for a variety of users to view, make decisions about and control conditions in the home, the office, on the industrial floor and outdoors.

Video data will be consumed by both artificial intelligence computer vision algorithms and human viewers. In both cases, the quality of the video signal being analyzed or viewed is essential to the success of device deployment. Remote viewing of an environment by a human, in particular, requires that the video being captured, processed, analyzed, transmitted and displayed be perceptually optimized for human consumption and interpretation. Likewise, perceptual model-driven computer vision algorithms benefit from similar quality control of visual signals.

The WNCG Laboratory for Image and Video Engineering (LIVE), led by Prof. Al Bovik, is the leading academic laboratory devoted to the development of models and algorithms for the automatic prediction of image and video quality, including 3D video quality. LIVE has developed video quality models, including the award-winning SSIM and MOVIE models, both of which are used extensively by the global cable and satellite television industries to test equipment and cable infrastructure and to control and improve human viewers' Quality of Experience (QoE).
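
To make the full-reference idea concrete, the sketch below computes the basic single-scale SSIM formula from global image statistics using NumPy. This is a minimal illustration, not LIVE's released code: the published SSIM index evaluates the same statistic over local sliding windows and averages the resulting quality map, and the function name ssim_global and its default constants here are illustrative assumptions.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-scale SSIM computed from global image statistics.

    x, y: grayscale images as 2-D arrays on the same intensity scale.
    The published SSIM index computes this statistic over local sliding
    windows and averages the resulting map; using global moments keeps
    this sketch short while preserving the formula itself.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (k1 * data_range) ** 2  # stabilizes the luminance term
    c2 = (k2 * data_range) ** 2  # stabilizes the contrast/structure term

    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()

    num = (2.0 * mu_x * mu_y + c1) * (2.0 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.uniform(0.0, 255.0, size=(64, 64))
    noisy = np.clip(ref + rng.normal(0.0, 20.0, size=ref.shape), 0.0, 255.0)
    print(ssim_global(ref, ref))    # 1.0 for identical images
    print(ssim_global(ref, noisy))  # drops below 1.0 under distortion
```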

Paper 1: Image Quality Assessment: From Error Visibility to Structural Similarity

Paper 2: Mean Squared Error: Love It or Leave It? - A New Look at Signal Fidelity Measures

Paper 3: Motion-Tuned Spatio-Temporal Quality Assessment of Natural Videos

 

No-Reference Video Quality Prediction and Control

WNCG LIVE recently pioneered the development of No-Reference, or blind, video quality models suitable for monitoring and controlling video quality in the wireless realm. Full-reference models such as PSNR and SSIM are not useful for many applications since they require a pristine reference video signal. WNCG's current efforts deploy these blind models in an application-directed manner, making it possible to monitor video quality from a wide diversity of sensors operating under varying conditions and dedicated to diverse practical tasks. One application is the perception-driven control of rate adaptation for wireless video streams subject to time-varying channel conditions or compression protocols. WNCG has also been developing wireless-centric detection systems that detect faces more reliably and robustly than the best existing models over lossy and noisy networks and under imperfect acquisition conditions.
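
To illustrate the no-reference idea, the sketch below computes the mean-subtracted contrast-normalized (MSCN) coefficients that natural-scene-statistics (NSS) based blind models such as BRISQUE and NIQE build on, and summarizes them with a single kurtosis statistic. This is a minimal sketch under assumed, illustrative parameter choices (the window sigma, eps, and the kurtosis summary are ours); the models in the papers below instead fit parametric NSS models to these coefficients and learn a mapping from the fitted parameters to quality, with no reference signal required in either case.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7.0 / 6.0, eps=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients.

    Pristine natural images produce MSCN values with a characteristic
    near-Gaussian distribution; common distortions (blur, blocking,
    noise) perturb that distribution, which is the statistical
    regularity that NSS-based blind quality models exploit.
    """
    img = np.asarray(image, dtype=np.float64)
    mu = gaussian_filter(img, sigma)                                   # local mean
    sigma_map = np.sqrt(np.abs(gaussian_filter(img * img, sigma) - mu * mu))
    return (img - mu) / (sigma_map + eps)                              # local normalization

def nss_feature(image):
    """Crude no-reference indicator: excess kurtosis of the MSCN map.

    A full blind model would fit a generalized Gaussian to the MSCN
    coefficients (and to products of neighboring coefficients) and map
    those parameters to a quality score; this single statistic is only
    meant to show that no reference video is needed.
    """
    c = mscn_coefficients(image).ravel()
    c = (c - c.mean()) / (c.std() + 1e-12)
    return float((c ** 4).mean() - 3.0)  # 0 for an exactly Gaussian distribution
```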

The WNCG LIVE team is seeking industry partnerships to develop application-specific, quality-driven wireless and wireline video quality models appropriate for diverse tasks such as surveillance, monitoring, traffic control, recognition and environmental control in space, time and 3D. Many WNCG faculty participate in this video quality research, including Profs. Alan Bovik, Constantine Caramanis, Gustavo de Veciana, Joydeep Ghosh and Robert Heath.

Paper 1: Automatic Prediction of Perceptual Image and Video Quality

Paper 2: Blind Image Quality Assessment: From Natural Scene Statistics to Perceptual Quality

Paper 3: No-Reference Image Quality Assessment in the Spatial Domain

Paper 4: Blind Prediction of Natural Video Quality

Paper 5: A Model of the Time-Varying Subjective Quality of HTTP Video Streams with Rate Adaptations

Paper 6: Rate Adaptation and Admission Control for Video Transmission with Subjective Quality Constraints

Paper 7: Face Detection on Distorted Images Using Perceptual Quality-Aware Features

Paper 8: Statistical Modeling of 3D Natural Scenes with Application to Bayesian Stereopsis

Paper 9: Color and Depth Priors in Natural Images

Paper 10: No-Reference Quality Assessment of Natural Stereopairs

Paper 11: Study of Distortion Conspicuity on Stereoscopically Viewed 3D Images