Abstract: Visible-light videos mainly provide texture information, whereas infrared videos reveal hidden thermal information; fusing the two kinds of video yields a higher-quality viewing experience. However, because mobile device resources are limited, complicated video processing tasks are offloaded to a more powerful cloudlet with more abundant computing, storage, and battery resources. In this paper, an inter-frame redundancy detection algorithm based on the mean hash is proposed, and only the video frames remaining after redundancy removal are transmitted to the cloudlet for processing. A video fusion algorithm based on the guided filter and weighted two-dimensional principal component analysis (W2DPCA) is also proposed. Each pair of frames to be fused is first decomposed into a base layer and a detail layer using the guided filter. The base layers of the visible and infrared frames are then fused by an improved adaptive W2DPCA. Finally, the fused frame is obtained by combining the fused base layer with the preserved detail layers. Experimental results show that the redundancy detection method minimizes the amount of redundant data transmitted to the cloudlet and reduces the energy consumption of mobile devices. Compared with existing methods, the fused frames produced by the proposed algorithm share more mutual information and higher structural similarity with the original frames, and also achieve a higher standard deviation and peak signal-to-noise ratio, indicating a better overall fusion effect.
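The abstract does not specify the parameters of the mean-hash redundancy check, so the following is a minimal sketch of how such a check is commonly implemented: each frame is downsampled to a small grid (an 8×8 average hash is assumed here), thresholded against its own mean to form a bit vector, and a frame is treated as redundant when its hash lies within a hypothetical Hamming-distance threshold of the previously transmitted frame's hash. The hash size and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mean_hash(frame, size=8):
    """Average hash of a grayscale frame: block-average down to
    size x size, then threshold each cell against the grid mean.
    (size=8 is an assumed, typical choice, not from the paper.)"""
    h, w = frame.shape
    # crop so both dimensions divide evenly, then block-average
    small = frame[: h - h % size, : w - w % size].reshape(
        size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).astype(np.uint8).ravel()

def is_redundant(prev_frame, frame, max_hamming=5):
    """Skip transmission when the hash differs from the previous
    transmitted frame's hash in at most max_hamming bits
    (threshold is a hypothetical tuning parameter)."""
    return int(np.sum(mean_hash(prev_frame) != mean_hash(frame))) <= max_hamming
```

In an offloading pipeline, frames flagged as redundant would simply not be sent to the cloudlet, which is how the transmission and energy savings reported in the abstract would arise.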
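The base/detail decomposition step can be illustrated with a self-guided filter (guide image equal to the input), following the standard box-filter formulation of the guided filter; the smoothed output serves as the base layer and the residual as the detail layer. The radius `r` and regularization `eps` below are assumed illustrative values, and the paper's exact guided-filter configuration may differ.

```python
import numpy as np

def box(img, r):
    """Mean filter with window radius r, via 2-D cumulative sums
    over an edge-padded image."""
    pad = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/col for prefix sums
    n = 2 * r + 1
    return (c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]) / (n * n)

def guided_filter_decompose(frame, r=8, eps=0.01):
    """Self-guided filter: returns (base, detail) with detail = frame - base.
    r and eps are hypothetical defaults, not taken from the paper."""
    mean_i = box(frame, r)
    var_i = box(frame * frame, r) - mean_i ** 2
    a = var_i / (var_i + eps)                # linear-model slope per window
    b = mean_i * (1 - a)                     # linear-model intercept
    base = box(a, r) * frame + box(b, r)     # average coefficients, apply
    return base, frame - base
```

In the fusion pipeline described in the abstract, the visible and infrared base layers from this step would be fused by the adaptive W2DPCA, while the detail layers are preserved and recombined with the fused base layer.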