
ABSTRACT

Two requirements are central to remote sensing satellite image fusion: preservation of spectral information and enhancement of spatial resolution. Remote sensing satellites such as QuickBird and IKONOS provide only low-spatial-resolution multispectral (MS) images and high-spatial-resolution panchromatic (PAN) images. Because of the technical limitations of satellite sensors, it is very difficult to acquire high-resolution MS images directly, so image fusion is essential. This kind of image fusion is called pan-sharpening because an MS image is sharpened by injecting spatial details extracted from a PAN image. Common pan-sharpening algorithms are component substitution (CS) methods such as IHS and PCA, but they have drawbacks in producing a good fused image. Here, the recent technique of adaptive component-substitution-based satellite image fusion using partial replacement is applied. This method achieves higher fusion quality and is suited to different sensors. In this project, pan-sharpening is performed using component substitution with partial replacement, and the fused image produced by the proposed method is compared with the image produced by IHS pan-sharpening. The quality of the proposed method relative to IHS pan-sharpening is assessed using the Quality with No Reference (QNR) index.

I. IMAGE FUSION

Image fusion, also called pan-sharpening, is a technique used to integrate the geometric detail of a high-resolution panchromatic (PAN) image and the color information of a low-resolution multispectral (MS) image to produce a high-resolution MS image. This technique is particularly important in large-scale applications. Most Earth resource satellites, such as SPOT, IRS, Landsat 7, IKONOS, and QuickBird, provide both PAN images at a higher spatial resolution and MS images at a lower spatial resolution. An effective image fusion technique can greatly extend the application potential of such remotely sensed images in many remote sensing applications. Many existing fusion techniques, such as IHS, PCA, and wavelet-based methods, introduce color distortion into the fused image. Therefore, the recent adaptive component-substitution-based satellite image fusion method using partial replacement is applied here, as it maintains both high spatial and high spectral quality.

II. GENERAL CS FUSION METHOD AND ITS DRAWBACKS

In the following section, we report on analyses of the general CS-based fusion framework, which is widely used to combine MS and PAN images and to improve the spatial resolution of the fused image efficiently. In the remainder of this section, we clarify the cause of the spectral/spatial distortion of fused images due to the spectral response of the sensor in CS-based fusion.

A. CS-Based Image Fusion Approach

A classical CS fusion approach is accomplished by a forward transformation into a specific feature space and an inverse transformation. The CS fusion method can be formulated in the following steps (a minimal code sketch follows the list):

1. Resize the MS image to the size of the PAN image.
2. Transform the MS image into a specific feature space whose first component is similar to the low-spatial-resolution PAN image.
3. Substitute the first component with the histogram-matched PAN image.
4. Carry out the inverse transformation to create the new high-spatial-resolution MS image.
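A minimal sketch of these four steps, assuming an IHS-style intensity (simple band average) and NumPy/SciPy arrays. The array names, the zoom-based resampling, and the mean/std matching used in place of a full histogram match are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.ndimage import zoom

def cs_fusion_sketch(ms, pan):
    """ms: (h, w, bands) low-resolution MS image; pan: (H, W) PAN image.
    Assumes the PAN/MS resolution ratio is an integer so shapes line up."""
    # Step 1: resize the MS image to the PAN size (bicubic interpolation, order=3).
    ry, rx = pan.shape[0] / ms.shape[0], pan.shape[1] / ms.shape[1]
    ms_up = np.stack([zoom(ms[..., b], (ry, rx), order=3)
                      for b in range(ms.shape[-1])], axis=-1)

    # Step 2: forward transform; the first component is the intensity.
    intensity = ms_up.mean(axis=-1)

    # Step 3: substitute the intensity with the statistically matched PAN image.
    pan_matched = (pan - pan.mean()) / pan.std() * intensity.std() + intensity.mean()

    # Step 4: inverse transform, i.e. inject the detail back into every band.
    return ms_up + (pan_matched - intensity)[..., None]
```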

B. Spectral/Spatial Distortion Owing to the Relative Spectral Response of the Sensor


In a general fusion framework, a theoretical high-spatial-resolution MS image is formulated by the decomposition of its high- and low-frequency components

MS_n^h = High(MS_n^h) + Low(MS_n^h)

where High(MS_n^h) is the high-frequency information of the nth band of the ideal high-spatial-resolution MS image and Low(MS_n^h) is its low-frequency data. As is well known, the theoretical MS image MS_n^h cannot be obtained directly; it must be reconstructed from the existing low-spatial-resolution MS image and the high-spatial-resolution PAN image. Because the low-frequency part of the high-spatial-resolution MS image approximates a spatially degraded MS image, we can substitute the original MS image for the low-frequency data of the theoretical MS image. Simultaneously, we can hypothesize that the high-frequency data of the high-spatial-resolution MS image are intimately linked with the relationship between the PAN image and the corresponding MS image, provided the high-spatial-resolution MS image is highly correlated with the PAN image. For this hypothesis to hold, the PAN image must be highly correlated with, or have spectral/spatial characteristics similar to, each MS band, because the high frequency of the theoretical MS image is approximated from the PAN and MS images.

Ideally, each band of the satellite image should be covered by the spectral range of the PAN band so that each MS band has a spectral response highly correlated with the PAN. However, the wavelength range of the PAN band used in commercial satellites may not overlap every MS band. Consequently, color distortion of the fused image is an inevitable consequence of injecting high-frequency information, owing to the global dissimilarity or low correlation between the PAN image and each MS band in their relative spectral response functions. Each image band has its own unique bandwidth and statistical characteristics. When the spatial details of the PAN image are injected separately into the MS bands, excessive spectral information may be received because of the dissimilarity among the MS bands. If the modulation coefficient is set too low in order to avoid this problem, the spatial quality of the fused image declines. As a general result, the injection model of edge information allows the fused image to contain identical spatial information; however, the spectral property is relatively degraded because the PAN and first-component images are globally poorly correlated with the corresponding MS image.

To determine the optimal coefficient, researchers have suggested various methods and experimental results as bases for selecting and optimizing the corresponding parameters so that the fused image is as similar as possible to the original MS image. Most algorithms do not provide an optimal result because they do not reflect the characteristics of the scene, and some parameters are fixed for a specific satellite image. Moreover, the PAN and MS images may present local instability or dissimilarity, such as object occultation or contrast inversion. If these effects are not taken into account along with the correlation of the PAN and MS images, the fused result may suffer from artifact effects and global/local spectral dissimilarity with the original MS image. This is because spectral/spatial information that differs from the characteristics of the MS bands is injected into the fusion processing. Consequently, the global and local dissimilarities between the PAN image and the corresponding MS image must be considered in the CS fusion process.
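Returning to the decomposition at the start of this subsection, a minimal illustrative sketch of MS_n^h = High(MS_n^h) + Low(MS_n^h): the low-frequency term is approximated by the resampled original MS band and the high-frequency term by the PAN detail. The Gaussian filter standing in for the spatial degradation and all names are assumptions, not the paper's exact model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def approximate_band(ms_band_up, pan, sigma=2.0):
    """ms_band_up and pan are 2-D arrays on the same (PAN) grid."""
    low = ms_band_up                           # Low(MS_n^h): spatially degraded MS band
    high = pan - gaussian_filter(pan, sigma)   # High(MS_n^h): detail borrowed from PAN
    return low + high
```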


III. ADAPTIVE CS IMAGE FUSION USING PARTIAL REPLACEMENT


We propose a new adaptive CS image fusion method that uses partial replacement to remove spectral distortion and to preserve the original spatial characteristics, regardless of the type of satellite sensor. Thus, spatial details can be extracted from the PAN image without introducing spectral/spatial distortion in any MS band. Our method is organized into two parts. The first step is the construction of a high-/low-resolution component image by using partial replacement between the PAN and MS images; the next step is to assemble an adaptive CS fusion model that minimizes the global/local spectral dissimilarity between the PAN image and each MS band while preserving the spatial details of the original PAN image. The MS image used in the fusion framework is resampled to the size of the original PAN image by bicubic interpolation. A high-level sketch of this two-part pipeline is given below.
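The following skeleton illustrates the flow just described. The helper names (build_components, injection_gain), the NumPy/SciPy usage, and the integer resolution ratio are assumptions for illustration; the per-band details are filled in by the subsections that follow.

```python
import numpy as np
from scipy.ndimage import zoom

def adaptive_cs_fusion(ms, pan, build_components, injection_gain):
    ratio = pan.shape[0] / ms.shape[0]                    # assumes an integer PAN/MS ratio
    # Resample every MS band to the PAN grid with bicubic interpolation (order=3).
    ms_up = np.stack([zoom(ms[..., b], ratio, order=3)
                      for b in range(ms.shape[-1])], axis=-1)
    fused = np.empty_like(ms_up)
    for n in range(ms_up.shape[-1]):
        i_high, i_low = build_components(pan, ms_up[..., n])     # partial replacement step
        gain = injection_gain(ms_up[..., n], i_low)              # adaptive, per-band weight
        fused[..., n] = ms_up[..., n] + gain * (i_high - i_low)  # CS-style detail injection
    return fused
```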

A. Construction of Spatially Degraded PAN and Initial Intensity Image


The digital number (DN) values of an image depend on the spectral response function of the sensor. The spectral relationship between the PAN and MS images is not fixed, because the spectral characteristics change with every object, area, and circumstance. Therefore, fixed experimental parameters produce unstable results, since they may differ in each case; they are often obtained by averaging the MS bands or by considering only the relationship between the blue and green bands. Here, a linear regression algorithm is used to produce the optimal intensity image. The PAN and MS images are first resampled to the same size using bicubic interpolation, and the regression coefficients between the PAN image and the MS spectral bands are estimated from

PAN_L = β_0 + Σ_{n=1}^{N} β_n MS_{L,n}

where PAN_L is the spatially degraded low-spatial-resolution PAN image, β_n is the regression coefficient, N is the number of spectral bands, and MS_{L,n} is the nth MS band. In this linear regression model, the degraded PAN is used as the response variable instead of the original PAN in order to exploit the similarity between the low-spatial-resolution PAN and MS images. The β values are calculated directly by least-squares estimation. The initial intensity image is then produced as

I_L = β_0 + Σ_{n=1}^{N} β_n MS_{L,n}

where I_L is the initial intensity image and the β_n are the estimated regression coefficients. General CS fusion techniques use the low-spatial-resolution synthetic component image for extracting the spatial details of the original PAN image. Notwithstanding the strong correlation between the PAN and synthetic component images, each MS band retains a spectral characteristic different from those of the PAN and low-spatial-resolution component images. The PAN image must therefore comprise a separate property for each MS band to avoid over-injection. Hence, a new high-spatial-resolution component image is developed from the spatially degraded PAN and MS images by using partial replacement; it is constructed to meet the spectral characteristics of the individual MS bands.
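A minimal sketch of this regression step, assuming NumPy and ordinary least squares; the function and array names are illustrative.

```python
import numpy as np

def initial_intensity(ms_low, pan_degraded):
    """ms_low: (h, w, N) MS bands; pan_degraded: (h, w) PAN degraded to the MS grid."""
    h, w, n_bands = ms_low.shape
    # Design matrix [1, MS_1, ..., MS_N] for the regression PAN_L = b0 + sum_n b_n * MS_n.
    X = np.column_stack([np.ones(h * w), ms_low.reshape(-1, n_bands)])
    y = pan_degraded.reshape(-1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares coefficients b0..bN
    return (X @ beta).reshape(h, w), beta          # initial intensity image I_L and betas
```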

B. Construction of High-Resolution Component Image


The correlation coefficient between the spatially degraded PAN image and each MS band histogram-matched with I_L is estimated in order to generate the high-resolution component image. Thereafter, using each correlation coefficient, the new high-resolution component image is computed as

I_n^h = CC_n · PAN + (1 − CC_n) · MS_n^l    (9)

where CC_n is the correlation coefficient between the low-spatial-resolution synthetic component image and the nth MS band image, I_n^h is the high-spatial-resolution component image corresponding to the nth MS band, and MS_n^l is the nth MS band histogram-matched with the PAN image. Equation (9) therefore focuses on the construction of a synthetic component image and on optimizing the correlation between the PAN image and each MS band by carrying out a partial replacement, so as to obtain optimal high-frequency information through minimization of the global spectral/spatial difference in specific regions.
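A hedged sketch of (9), assuming NumPy; the correlation is estimated against the spatially degraded PAN, all inputs are assumed to be resampled to the PAN grid, and the names are illustrative.

```python
import numpy as np

def high_res_component(pan, pan_degraded, ms_band_matched):
    """pan, pan_degraded, and ms_band_matched are 2-D arrays on the same (PAN) grid."""
    # CC_n: correlation between the degraded PAN and the histogram-matched MS band.
    cc = np.corrcoef(pan_degraded.ravel(), ms_band_matched.ravel())[0, 1]
    # Partial replacement of (9): blend PAN and the matched MS band per band.
    return cc * pan + (1.0 - cc) * ms_band_matched
```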
