Wireless Video Quality and Applications
Video traffic currently comprises more than 50% of all wireless and mobile device data volume, a share that is expected to keep rising for years to come. Likewise, deployments of small wireless sensors in homes, factories, retail outlets, automobiles and nearly everywhere else are proliferating, with growth in this space expected to remain exponential for many years. Many devices on the wireless, sensing edge of the Internet of Things (IoT) will be video-based, making it possible for a variety of users to view, make decisions about and control conditions in the home, the office, on the industrial floor and outdoors.
Video data will be consumed by both artificial intelligence computer vision algorithms and human viewers. In both cases, the quality of the video signal being analyzed or viewed is essential to the success of device deployment. Remote viewing of an environment by a human, in particular, requires that the video being captured, processed, analyzed, transmitted and displayed be perceptually optimized for human consumption and interpretation. Likewise, perceptual model-driven computer vision algorithms benefit from similar quality control of visual signals.
The WNCG Laboratory for Image and Video Engineering (LIVE), led by Prof. Al Bovik, is the leading academic laboratory devoted to the development of models and algorithms for the automatic prediction of image and video quality, including 3D video quality. LIVE developed video quality models, including the award-winning SSIM and MOVIE models, both of which are extensively used by the global cable and satellite television industries to test equipment and cable infrastructure to control and improve human viewers' Quality of Experience (QoE).
No-Reference Video Quality Prediction and Control
WNCG LIVE recently pioneered the development of No-Reference, or blind, video quality models suitable for monitoring and controlling video quality in the wireless realm. Full-reference models such as PSNR and SSIM are not useful for many applications, since they require a pristine reference video signal that is rarely available in the field. WNCG's current efforts deploy these models in an application-directed manner, making it possible to monitor video quality from a wide diversity of sensors operating under varying conditions and dedicated to diverse practical tasks. One application of these models is the perception-driven control of rate adaptation on wireless video streams subject to time-varying channel conditions or compression settings. WNCG has also been developing wireless-centric detection systems that detect faces more reliably and robustly than the best existing models over lossy, noisy networks and under imperfect acquisition conditions.
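To illustrate why full-reference metrics depend on a pristine reference, the sketch below computes PSNR between a reference frame and a distorted copy. This is a minimal, self-contained example using the standard PSNR definition and NumPy; the frame data is simulated and not drawn from any LIVE dataset, and this is not the lab's own implementation.

```python
import numpy as np

def psnr(reference, distorted, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a pristine reference
    frame and a distorted frame (equal-shape arrays)."""
    diff = reference.astype(np.float64) - distorted.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10.0 * np.log10(max_val ** 2 / mse)

# Simulated 8-bit grayscale frame and a noisy version of it
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
noise = rng.integers(-10, 11, size=frame.shape)
noisy = np.clip(frame.astype(np.int16) + noise, 0, 255).astype(np.uint8)

print(f"PSNR: {psnr(frame, noisy):.2f} dB")
```

Note that both arguments are required: without the clean `frame`, the metric cannot be evaluated at all, which is exactly the limitation that no-reference models are designed to remove.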
The WNCG LIVE team is seeking industry partnerships to develop application-specific, quality-driven wireless and wireline video quality models appropriate for diverse tasks such as surveillance, monitoring, traffic control, recognition and environmental control in space, time and 3D. Many WNCG faculty participate in this video quality research, including Profs. Alan Bovik, Constantine Caramanis, Gustavo de Veciana, Joydeep Ghosh and Robert Heath.