One of the most successful image fusion strategies is based on the Laplacian pyramid decomposition. In the context of multi-scale fusion, the Laplacian pyramid decomposition has recently been shown to be effective for several challenging image and video enhancement tasks.
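To make the decomposition concrete, here is a minimal sketch of Laplacian pyramid construction and reconstruction using OpenCV and NumPy. The function names and the number of levels are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of Laplacian pyramid decomposition and reconstruction.
# Function names and level count are illustrative, not the authors' code.
import cv2
import numpy as np

def laplacian_pyramid(image, levels=5):
    """Decompose an image into band-pass levels plus a low-frequency residual."""
    current = image.astype(np.float32)
    pyramid = []
    for _ in range(levels - 1):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)   # detail (band-pass) at this scale
        current = down
    pyramid.append(current)            # coarsest (low-pass) residual
    return pyramid

def reconstruct(pyramid):
    """Collapse the Laplacian pyramid back into the full-resolution image."""
    current = pyramid[-1]
    for band in reversed(pyramid[:-1]):
        current = cv2.pyrUp(current, dstsize=(band.shape[1], band.shape[0]))
        current = current + band
    return current
```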
Night-time dehazing by fusion.
We introduce an effective technique to enhance night-time hazy scenes. Our technique builds on a multi-scale fusion approach that uses several inputs derived from the original image. Inspired by the dark-channel prior, we estimate the night-time haze by computing the airlight component on image patches rather than on the entire image (Ancuti et al. 2016).
To deal with the problem of night-time hazy scenes, we propose a novel way to compute the airlight component while accounting for the non-uniform illumination present in night-time scenes. Unlike the well-known dark-channel strategy, which estimates a constant atmospheric light over the entire image, we compute this value locally, on patches of varying sizes. This strategy succeeds because, under night-time conditions, the lighting results from multiple artificial sources and is therefore intrinsically non-uniform. In practice, the local atmospheric light determines the color observed in hazy pixels, which correspond to the brightest pixels of the local dark-channel patches.
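The following sketch illustrates one plausible way to realize this patch-wise airlight estimation from a local dark channel. The patch size, helper name, and selection rule (brightest local dark-channel pixel) are assumptions for illustration; this is not the authors' exact procedure from (Ancuti et al. 2016).

```python
# Illustrative sketch of patch-wise airlight estimation from the local dark
# channel; patch size and selection rule are assumptions, not the exact method.
import cv2
import numpy as np

def local_airlight(image, patch_size=15):
    """Estimate a per-patch atmospheric light map from the local dark channel."""
    h, w, _ = image.shape
    # Dark channel: per-pixel minimum over color channels, followed by a
    # grayscale erosion (minimum filter) over a small neighborhood.
    dark = image.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch_size, patch_size))
    dark = cv2.erode(dark, kernel)

    airlight = np.zeros_like(image, dtype=np.float32)
    for y in range(0, h, patch_size):
        for x in range(0, w, patch_size):
            patch_dark = dark[y:y + patch_size, x:x + patch_size]
            patch_img = image[y:y + patch_size, x:x + patch_size]
            # The brightest pixel of the local dark channel defines the
            # atmospheric light color assigned to this patch.
            iy, ix = np.unravel_index(np.argmax(patch_dark), patch_dark.shape)
            airlight[y:y + patch_size, x:x + patch_size] = patch_img[iy, ix]
    return airlight
```

Using several patch sizes with this kind of estimator yields the differently adapted inputs that feed the fusion step described next.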
Selecting the patch size is non-trivial: while small patches are desirable to achieve fine spatial adaptation to the atmospheric light, they may also lead to poor light estimates and a reduced chance of capturing hazy pixels. For this reason, we deploy multiple patch sizes, each generating one input to the multi-scale fusion process. Our fusion approach proceeds in three main steps. First, based on our airlight estimation with different patch sizes, we derive the first two inputs of the fusion approach. To reduce the glowing effect and emphasize the finest details of the scene, the third input is defined as the Laplacian of the original image. In the second step, the important features of these derived inputs are filtered based on several quality weight maps (local contrast, saturation and saliency). Finally, the derived inputs and the normalized weight maps are blended in a multi-scale fashion, using a Laplacian pyramid decomposition of the inputs and a Gaussian pyramid of the normalized weights.
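A minimal sketch of this multi-scale blending step is given below, reusing the `laplacian_pyramid()` and `reconstruct()` helpers from the earlier sketch. The computation of the weight maps themselves (contrast, saturation, saliency) is omitted; the normalization and per-level mixing shown here are standard choices and are assumptions rather than the authors' exact implementation.

```python
# Sketch of multi-scale fusion: Laplacian pyramids of the derived inputs are
# weighted per level by Gaussian pyramids of the normalized weight maps.
# Reuses laplacian_pyramid()/reconstruct() from the earlier sketch.
import cv2
import numpy as np

def gaussian_pyramid(image, levels=5):
    pyramid = [image.astype(np.float32)]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid

def fuse(inputs, weights, levels=5):
    """Blend the derived inputs (H x W x 3) using their weight maps (H x W)."""
    # Normalize the weight maps so they sum to one at every pixel.
    total = np.sum(weights, axis=0) + 1e-12
    weights = [w / total for w in weights]

    fused_pyramid = None
    for img, w in zip(inputs, weights):
        lap = laplacian_pyramid(img, levels)
        gauss = gaussian_pyramid(w, levels)
        # Weight each Laplacian band by the matching Gaussian level of its map.
        contrib = [l * g[..., None] for l, g in zip(lap, gauss)]
        if fused_pyramid is None:
            fused_pyramid = contrib
        else:
            fused_pyramid = [f + c for f, c in zip(fused_pyramid, contrib)]
    return reconstruct(fused_pyramid)
```

Blending in the pyramid domain rather than per pixel avoids the halos and seams that a naive weighted average of the inputs would introduce.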
Awards
- Best paper award at CVPR 2017: Cosmin Ancuti (ICTEAM), Codruta Ancuti (University of Timisoara), Christophe De Vleeschouwer (ICTEAM), and Rafael Garcia (University of Girona) have been awarded the Best Paper Award at the IEEE workshop on ‘New Trends in Image Restoration and Enhancement’ (CVPR 2017) for their paper (Ancuti et al. 2017).