Perceptual Fog Density Assessment and Image Defogging
Prof. Alan Bovik and his student Lark Kwon Choi in the WNCG Laboratory for Image and Video Engineering (LIVE) have developed a no-reference perceptual fog density prediction model and a perceptual image defogging algorithm based on natural scene statistics (NSS) and fog-aware statistical features.
The proposed model, called the Fog Aware Density Evaluator (FADE), predicts the visibility of a foggy scene from a single image without reference to a corresponding fog-free image, without dependency on salient objects in the scene, without auxiliary geographical camera information, without estimating a depth-dependent transmission map, and without training on human-rated judgments. FADE relies only on measurable deviations from statistical regularities observed in natural foggy and fog-free images.
The fog-aware statistical features that define the perceptual fog density index derive from a space-domain NSS model and the observed characteristics of foggy images. FADE not only predicts the perceptual fog density of the entire image but also provides a local fog density index for each patch. Fog density predicted by FADE correlates well with human judgments of fog density collected in a subjective study on a large foggy image database.
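To make the idea of space-domain NSS features concrete, the sketch below computes mean-subtracted contrast-normalized (MSCN) coefficients, a standard space-domain NSS quantity, and a simple per-patch statistic on them. This is a hedged illustration only: the function names and the choice of per-patch variance as a feature are my own assumptions, not the authors' released FADE code or the exact feature set described in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7 / 6, eps=1e-3):
    """Mean-subtracted contrast-normalized (MSCN) coefficients.

    A standard space-domain NSS quantity: each pixel is divided by a
    local standard deviation estimate. (Illustrative sketch, not the
    authors' FADE implementation.)
    """
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                     # local mean
    var = gaussian_filter(image ** 2, sigma) - mu ** 2     # local variance
    sigma_map = np.sqrt(np.abs(var))                       # local std (abs guards round-off)
    return (image - mu) / (sigma_map + eps)

def patch_features(image, patch=8):
    """One hypothetical local fog-aware statistic: the variance of the
    MSCN coefficients within each non-overlapping patch."""
    coeffs = mscn(image)
    h, w = coeffs.shape
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            feats.append(coeffs[i:i + patch, j:j + patch].var())
    return np.array(feats)
```

A patch with little local structure (as in a densely fogged region) yields MSCN coefficients close to zero, so statistics like these can serve as local, referenceless indicators of fog density.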
As applications, FADE not only accurately assesses the performance of defogging algorithms designed to enhance the visibility of foggy images, but is also well suited to image defogging itself. A new FADE-based referenceless perceptual image defogging algorithm, dubbed the DEnsity of Fog Assessment-based DEfogger (DEFADE), achieves better results than state-of-the-art defogging methods on darker, denser foggy images as well as on standard foggy images.
Paper 1: Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging (under revision)
This work was supported by the Business for Cooperative R&D between Industry, Academy, and Research Institute program, funded by the Korea Small and Medium Business Administration in 2013 under Grant No. C0014365.