Filmy Cloud Removal on Satellite Imagery with Multispectral Conditional Generative Adversarial Nets

In this paper, we propose a method for cloud removal from visible light RGB satellite images by extending conditional Generative Adversarial Networks (cGANs) from RGB images to multispectral images. Satellite images have been widely utilized for various purposes, such as natural environment monitoring (pollution, forests, or rivers), transportation improvement, and prompt emergency response to disasters. However, the obscurity caused by clouds makes it unreliable to monitor the situation on the ground with a visible light camera. Images captured at longer wavelengths are introduced to reduce the effects of clouds. Synthetic Aperture Radar (SAR) is one such example, improving visibility even when clouds are present. On the other hand, spatial resolution decreases as the wavelength increases. Furthermore, images captured at long wavelengths differ considerably in appearance from those captured by visible light. Therefore, we propose a network that takes multispectral images as input, removes clouds, and generates visible light images. This is achieved by extending the input channels of cGANs to be compatible with multispectral images. The networks are trained to output images that are close to the ground truth, using images with clouds synthesized over the ground truth as inputs. In the available dataset, the proportion of images of forest or sea is very high, which would introduce bias into the training dataset if it were sampled uniformly from the original dataset. Thus, we utilize t-Distributed Stochastic Neighbor Embedding (t-SNE) to mitigate this bias in the training dataset. Finally, we confirm the feasibility of the proposed network on a dataset of four-band images, comprising three visible light bands and one near-infrared (NIR) band.
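
The abstract states that the cGAN input channels are extended to accept multispectral data. The sketch below illustrates one minimal way this widening can look in a pix2pix-style generator: the only change relative to an RGB cGAN is that the first convolution takes 4 input channels (RGB + NIR) while the output remains a 3-band visible light image. The layer sizes, depths, and the U-Net-style skip connection are assumptions for illustration, not the authors' exact architecture.

# Minimal sketch (assumed architecture): a cGAN generator whose first
# convolution is widened from 3 RGB channels to 4 multispectral channels.
import torch
import torch.nn as nn

class MultispectralGenerator(nn.Module):
    def __init__(self, in_channels: int = 4, out_channels: int = 3):
        super().__init__()
        # Encoder: the only change from an RGB cGAN is in_channels=4 (RGB + NIR).
        self.enc1 = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.enc2 = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Decoder: upsample back to a 3-band visible light image.
        self.dec1 = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )
        self.dec2 = nn.Sequential(
            nn.ConvTranspose2d(128, out_channels, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                # (B, 64, H/2, W/2)
        e2 = self.enc2(e1)               # (B, 128, H/4, W/4)
        d1 = self.dec1(e2)               # (B, 64, H/2, W/2)
        d1 = torch.cat([d1, e1], dim=1)  # skip connection -> 128 channels
        return self.dec2(d1)             # (B, 3, H, W) cloud-free RGB estimate

if __name__ == "__main__":
    # Usage: a batch of 4-band (RGB + NIR) patches with synthesized clouds as input.
    cloudy = torch.randn(2, 4, 256, 256)
    print(MultispectralGenerator()(cloudy).shape)  # torch.Size([2, 3, 256, 256])

In training, the generator receives the cloud-synthesized multispectral image and is optimized, together with an adversarial discriminator, to reproduce the cloud-free ground truth.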
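
The abstract only states that t-SNE is used to mitigate the over-representation of forest and sea images; it does not specify how the embedding is turned into a sampling strategy. The following is a hedged sketch of one plausible approach, where the t-SNE embedding of image features is clustered and an equal number of samples is drawn from each cluster; the clustering and per-cluster sampling steps are assumptions for illustration.

# Hedged sketch (assumed procedure): balance the training set by sampling
# uniformly across clusters of the t-SNE embedding of image features.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

def balanced_indices(features: np.ndarray, n_clusters: int = 10,
                     per_cluster: int = 100, seed: int = 0) -> np.ndarray:
    """Embed features with t-SNE, cluster the embedding, and draw an equal
    number of image indices from every cluster."""
    rng = np.random.default_rng(seed)
    emb = TSNE(n_components=2, random_state=seed).fit_transform(features)
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(emb)
    chosen = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        take = min(per_cluster, len(members))
        chosen.append(rng.choice(members, size=take, replace=False))
    return np.concatenate(chosen)

if __name__ == "__main__":
    # Placeholder features (e.g., flattened thumbnails or CNN features per patch).
    feats = np.random.rand(2000, 64).astype(np.float32)
    idx = balanced_indices(feats)
    print(idx.shape)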