Fusion of Multispectral and Panchromatic Images for Enhanced Remote Sensing Resolution

Authors

  • Kasil Teyene, Electrical and Computer Engineering, Addis Ababa University, Addis Ababa, Ethiopia
  • M. L. Diu, School of Electrical and Electronic Engineering, Newcastle University, Singapore

DOI:

https://doi.org/10.17051/NJSIP/01.02.09

Keywords:

Remote Sensing, Pansharpening, Multispectral Image, Panchromatic Image, Image Fusion, Spatial Resolution Enhancement, Spectral Preservation, Wavelet Transform, Principal Component Analysis (PCA), Vision Transformer, Deep Learning, CNN-Based Fusion, Quality Metrics, SAM, ERGAS, Q-index

Abstract

The fusion of multispectral (MS) and panchromatic (PAN) data, commonly known as pansharpening, has become a ubiquitous operation in remote sensing research and development: it produces images that combine the rich spectral content of multispectral datasets with the fine spatial resolution of panchromatic data. This paper presents an extensive comparative study and practical evaluation of modern high-quality pansharpening methods, aimed at meeting the growing demand for high-quality remote sensing imagery while rigorously preserving spectral fidelity. The methods covered include classical Component Substitution (CS) and Multiresolution Analysis (MRA) approaches as well as state-of-the-art deep-learning-based techniques built on convolutional neural networks (CNNs) and Vision Transformers. The techniques are evaluated quantitatively with established metrics, including the Spectral Angle Mapper (SAM), Correlation Coefficient (CC), Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS), and Quality Index (Q-index), providing a multi-faceted analysis of both spectral and spatial performance. Experimental validation was conducted on high-resolution WorldView-3 satellite images spanning a wide variety of land-cover types and scene complexities. The results show that deep-learning-based fusion techniques, particularly those built on Transformer architectures, achieve significant improvements over conventional approaches, delivering enhanced spatial sharpness and superior spectral preservation across all assessment measures. These findings illustrate the capacity of Transformer models to learn complex cross-modality relationships and point to their potential in real-time, large-scale, application-specific remote sensing tasks such as land-cover classification, urban mapping, and precision agriculture. Beyond providing a critical benchmark of current and emerging fusion methods, this work offers practical guidance for future research on unsupervised fusion, model generalizability, and edge deployment for in-situ remote sensing applications.
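To make the Component Substitution family concrete, the following is a minimal NumPy sketch of classic PCA-based pansharpening, one of the CS baselines referenced above. It is an illustrative reconstruction of the standard technique, not the paper's implementation; the function and variable names are hypothetical, and the MS image is assumed to be pre-upsampled to the PAN grid.

```python
import numpy as np

def pca_pansharpen(ms_up, pan, eps=1e-12):
    """PCA-based component substitution (illustrative sketch).

    ms_up: (H, W, B) multispectral image, upsampled to the PAN grid.
    pan:   (H, W) panchromatic image.
    """
    h, w, b = ms_up.shape
    flat = ms_up.reshape(-1, b).astype(np.float64)
    mean = flat.mean(axis=0)
    centered = flat - mean

    # Principal components of the spectral covariance, sorted by variance.
    cov = centered.T @ centered / (centered.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]
    pcs = centered @ eigvecs  # (H*W, B) component scores

    # Match PAN's mean/std to the first PC, then substitute it: the first
    # PC carries most of the spatial structure, so replacing it injects
    # PAN detail while the remaining PCs retain spectral information.
    pc1 = pcs[:, 0]
    pan_flat = pan.reshape(-1).astype(np.float64)
    pcs[:, 0] = (pan_flat - pan_flat.mean()) * (pc1.std() / (pan_flat.std() + eps)) + pc1.mean()

    # Invert the PCA transform back to the spectral domain.
    fused = pcs @ eigvecs.T + mean
    return fused.reshape(h, w, b)
```

As with all CS methods, spectral distortion appears wherever PAN and the first principal component disagree, which is exactly the weakness the deep-learning methods in the study are designed to reduce.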
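The quality metrics named in the abstract have standard closed forms. Below is a minimal NumPy sketch of two of them, SAM and ERGAS, evaluated against a reference image at the fused resolution (Wald-protocol style). Array shapes, variable names, and the 4:1 PAN/MS resolution ratio are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def sam_degrees(reference, fused, eps=1e-12):
    """Mean Spectral Angle Mapper in degrees; 0 means identical spectra.

    reference, fused: float arrays of shape (H, W, B).
    """
    dot = np.sum(reference * fused, axis=-1)
    norms = np.linalg.norm(reference, axis=-1) * np.linalg.norm(fused, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()

def ergas(reference, fused, ratio=1/4, eps=1e-12):
    """Erreur Relative Globale Adimensionnelle de Synthese; lower is better.

    ratio: PAN pixel size / MS pixel size (1/4 for 4x pansharpening).
    """
    bands = reference.shape[-1]
    acc = 0.0
    for k in range(bands):
        rmse = np.sqrt(np.mean((reference[..., k] - fused[..., k]) ** 2))
        acc += (rmse / (np.mean(reference[..., k]) + eps)) ** 2
    return 100.0 * ratio * np.sqrt(acc / bands)
```

SAM isolates spectral distortion (it is invariant to per-pixel scaling), while ERGAS aggregates per-band relative RMSE, so reporting both, as the study does, separates spectral fidelity from overall radiometric error.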

Published

2025-04-19

Issue

Section

Articles