
International Journal of Computer Engineering and Technology (IJCET)
ISSN 0976-6367 (Print), ISSN 0976-6375 (Online)
Volume 4, Issue 3, May-June (2013), pp. 570-578
IAEME: www.iaeme.com/ijcet.asp
Journal Impact Factor (2013): 6.1302 (Calculated by GISI), www.jifactor.com

SUPER RESOLUTION IMAGING USING FREQUENCY WAVELETS AND THREE DIMENSIONAL VIEWS BY HOLOGRAPHIC TECHNIQUE
K. MATHEW Karpagam University, Coimbatore, Tamilnadu-641021, India S. SHIBU K.R.Gouri Amma College Of Engineering For Women, Thuravoor, Cherthala, Kerala

SYNOPSIS

Super resolution imaging is achieved from low resolution images. The approach has three steps: registration, interpolation, and reconstruction. In registration, the specimen under observation is photographed with a digital camera fitted with holographic equipment. Several such photographs are taken so that the relative displacement between any two images is a sub-pixel shift, and the images are superposed in a common coordinate system. The second step is interpolation by the frequency wavelet method, in which high frequency information is collected and the resolution is increased. The final stage is reconstruction, in which the super resolution image is restored by minimising the degradation due to aliasing and the blur due to noise. The final image therefore has large resolving power, excellent clarity, and a three dimensional view.

Key Words: Hologram, Bicubic interpolation, Contrast ratio, Frequency wavelets, Three dimensional views.

INTRODUCTION

The requisites of an ideal image are very high resolution, good clarity, and a three dimensional view. These qualities are essential for precise analysis, and such images have wide application in the military, the medical field, remote sensing, and consumer electronics. Resolution implies that different parts of a sample are seen separately; the resolving power of an optical device is its ability to show two nearby objects as separate. When a

point object is viewed through an optical device it appears, owing to diffraction, as a central bright spot surrounded by concentric subsidiary minima and maxima, so the image is blurred. When we observe two nearby objects, they may not be seen as separate objects but as a single object, and so they are not resolved. According to Rayleigh's criterion of resolution (1), two nearby point objects are resolved if the central spot of one image lies on or outside the first subsidiary minimum of the other. From this principle it follows that the resolving power of an optical device is proportional to a/λ, a being the aperture of the device and λ the wavelength of the light used. This diffraction limit of resolution was recognised by Abbe (2) in the 19th century: because of diffraction, a point source of light imaged through an optical device appears as a spot of finite size. The intensity profile of this spot defines the point spread function (PSF), and the full width at half maximum of the PSF in the lateral (x, y) direction and in the axial (z) direction is given approximately by

Δx, Δy ≈ λ / (2 NA),   Δz ≈ 2λn / NA²,

where λ is the wavelength, n the refractive index, and NA the numerical aperture of the objective lens. The resolving power is inversely proportional to this full width at half maximum; hence it can be increased either by increasing the aperture of the device or by decreasing the wavelength of the light used.

Imaging by optical devices is thus diffraction limited, but in the digital imaging process there are various methods for increasing resolution, and suitable algorithms exist to obtain super resolution from low resolution images. One such method is interpolation by the frequency wavelet method. The resolving power is increased by adding high frequency information of a specific image model and by removing the ambiguity in the image due to sub-pixel shift, blur due to defocus, and degradation due to aliasing. Besides three dimensional images of a specific resolution, we are interested in the clarity of the image. The image is blurred by diffraction effects and by various other factors such as defocus; the clarity of a particular part of the image is measured by the contrast (modulation) factor

M = (Imax − Imin) / (Imax + Imin),

where Imax is the maximum intensity and Imin the minimum intensity. The contrast factor increases with the frequency of the observed signal. Since the magnification is the same at a given frequency, we get distortion-free magnification and the image can be a true replica of the specimen. We therefore seek an algorithm for achieving three dimensional images of high resolution, good clarity, and distortion-free magnification.

Holography: record of phase variation

When we photograph an object by traditional means using the light field, we get a point-by-point record of the square of the amplitude. The light reflected from the specimen carries information about irradiance only; it does not describe the phase of the wave emanating from the object.
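As a quick numeric check of the diffraction-limited resolution approximations (lateral FWHM ≈ λ/2NA, axial FWHM ≈ 2λn/NA²), the following sketch evaluates both widths for a typical oil-immersion objective; the function name and the parameter values are ours, not from the paper:

```python
import math

def psf_fwhm(wavelength_nm: float, na: float, n: float):
    """Approximate lateral and axial FWHM of a diffraction-limited PSF.

    Lateral: ~ lambda / (2 * NA); axial: ~ 2 * lambda * n / NA**2.
    """
    lateral = wavelength_nm / (2.0 * na)
    axial = 2.0 * wavelength_nm * n / na**2
    return lateral, axial

# Green light (550 nm), oil-immersion objective (n = 1.52, NA = 1.4)
lat, ax = psf_fwhm(550.0, 1.4, 1.52)
print(f"lateral FWHM ~ {lat:.0f} nm, axial FWHM ~ {ax:.0f} nm")
```

Note how the axial width is several times larger than the lateral one, which is why depth resolution is the harder problem.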
If the amplitude and the phase of the emanated wave can both be reconstructed, the resulting light field forms an image that is perfectly three dimensional, exactly as if the object were before you. One such method is used in the phase contrast microscope; another is the holographic imaging technique devised by Dennis Gabor. He photographically recorded the interference pattern generated by the interaction of a monochromatic light beam scattered from an object with a coherent reference beam. The record of the resulting pattern is a hologram, and the image can be reconstructed by diffraction of the coherent beam by the hologram. This image is digitised and stored as a matrix of binary digits in computer memory; the pixels, besides recording the sequence of amplitudes, carry depth information, because the phase change of the beam diffracted from the hologram is proportional to the depth from which the light was scattered in the specimen.

When we photograph a point object, it is imaged as a smear of light described by a point spread function s(x, y). Under incoherent illumination these elementary flux density patterns overlap and add linearly to create the final image; an object is a collection of point sources, each imaged by a spread function. The object wave front is composed of Fourier component plane waves travelling in the directions associated with the spatial frequencies of the light field reflected or transmitted by the object. Each Fourier plane wave interferes with the reference wave, and since the scattered object wavelets arrive at an angle, the relative phase δ of the two waves varies from point to point across the film plane. If two such waves each have amplitude E0, the resulting field has amplitude E = 2 E0 cos(δ/2) sin(ωt + δ/2), and the irradiance distribution is

I = 2 c ε0 E0² cos²(δ/2) = c ε0 E0² (1 + cos δ),

ε0 being the permittivity of free space. The hologram therefore records a cosinusoidal irradiance distribution across the film plane. When a monochromatic reconstructing wave ER(x, y) = EOR cos(2πνt) is diffracted by this hologram, the transmitted amplitude is proportional to I(x, y) ER(x, y), and the final wave can be written, up to constant factors, as

EF(x, y) = EOR (EOB² + EOO²) cos(2πνt)
         + EOR EOB EOO cos(2πνt + φ(x, y))
         + EOR EOB EOO cos(2πνt − φ(x, y)),

where φ(x, y) is the phase from the object, EOR is the amplitude of the reconstructing wave, EOB the amplitude of the reference beam, and EOO the amplitude of the wave scattered from the object. The final wave consists of three parts:

(1) an amplitude modulated version of the reconstructing wave — the zeroth order, undeflected direct beam;
(2) the sum term, with amplitude proportional to the object wave EOO and a phase contribution arising from the tilt between the reference and reconstructing wave fronts at the plane of the hologram; it also contains the phase of the object, which is a measure of the depth at each position of the object;
(3) the difference term, which, except for a multiplicative constant, has precisely the form of EOO(x, y) with the actual phase of the object. This difference wave represents the scene or object exactly as it is.

This phase dependent image is digitised to obtain a low resolution image, so the pixels of these low resolution images carry depth information.

Registration of the low resolution images: determination of shift due to plane motion and rotation in the frequency domain

When a series of low resolution (magnified) images is taken in a short interval of time, there is a relative displacement between the images owing to small camera motion. The motion (3) can be described by three parameters: a horizontal shift a, a vertical shift b, and a rotation through an angle θ about the z axis. The relative displacement of an input image can be determined precisely to sub-pixel accuracy. Denoting the reference signal by f(x, y), the effect of the horizontal displacement a, vertical displacement b, and rotation angle θ gives the shifted image

f1(x, y) = f(x cos θ − y sin θ + a, y cos θ + x sin θ + b).

Expressing sin θ and cos θ to second order as θ and 1 − θ²/2, and expanding f in a Taylor series, f1(x, y) can be approximated as

f1(x, y) ≈ f(x, y) + (a − yθ − xθ²/2) ∂f/∂x + (b + xθ − yθ²/2) ∂f/∂y.

The error function E between f1 and f is the sum, over the overlapping region of f(x, y) and f1(x, y), of the squared difference between f1(x, y) and its expanded approximation:

E(a, b, θ) = Σ [ f(x, y) + (a − yθ − xθ²/2) ∂f/∂x + (b + xθ − yθ²/2) ∂f/∂y − f1(x, y) ]².

Since we require this mean square error to be minimum, we set

∂E/∂a = 0,   ∂E/∂b = 0,   ∂E/∂θ = 0.


The motion parameters can be computed by solving the above set of linear equations, so the relative displacements of the undersampled LR images can be estimated with sub-pixel accuracy, and the LR images can then be combined (4). Figure 1 shows three 4x4 pixel LR frames on an 8x8 HR grid. Each symbol (square, circle, triangle) marks the sampling points of one frame with respect to the HR grid. One arbitrary frame, marked by the circular symbols, is selected as the reference frame. The sampling grid of the triangular frame is a simple translation of the reference grid; the sampling grid of the square frame has translational, rotational, and magnification components.
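The gradient-based estimate of (a, b, θ) can be sketched as below. This is a first-order version of the method (the θ² terms are dropped) solved by least squares; the function names are ours, not the authors':

```python
import numpy as np

def estimate_motion(f, f1):
    """First-order least-squares estimate of (a, b, theta) between two frames.

    Linearizes f1(x, y) ~ f + a*fx + b*fy + theta*(x*fy - y*fx) and solves
    the resulting normal equations, as in gradient-based registration.
    """
    fy_grad, fx_grad = np.gradient(f)        # derivative along rows (y), then cols (x)
    h, w = f.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    # Design matrix: one column per motion parameter
    A = np.column_stack([
        fx_grad.ravel(),                      # horizontal shift a
        fy_grad.ravel(),                      # vertical shift b
        (x * fy_grad - y * fx_grad).ravel(),  # small rotation theta
    ])
    rhs = (f1 - f).ravel()
    params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return params                             # (a, b, theta)
```

Because the linearization is only valid for small displacements, in practice the estimate is refined by warping and re-estimating iteratively.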

Thus there is no regularity in the grid of LR sampling points used for super resolution: when the pixel values from all the frames are considered together, the data are irregularly sampled. Within each LR frame the data points do lie on a rectangular grid, but on the high resolution grid the combined sampling is irregular; this is called interlaced sampling.
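The interlaced-sampling geometry can be illustrated by mapping each LR frame's pixel centres onto the HR grid, assuming each frame differs by a shift, rotation, and scale; the function name and parameter choices here are our own illustrative assumptions:

```python
import numpy as np

def lr_samples_on_hr_grid(n_lr=4, scale=2, shift=(0.0, 0.0), angle=0.0):
    """Coordinates (in HR-grid units) of one LR frame's sampling points.

    Each LR pixel (i, j) maps to scale * R(angle) @ (j, i) + shift on the
    HR grid; with a sub-pixel shift the frames interlace irregularly.
    """
    j, i = np.meshgrid(np.arange(n_lr), np.arange(n_lr))
    c, s = np.cos(angle), np.sin(angle)
    x = scale * (c * j - s * i) + shift[0]
    y = scale * (s * j + c * i) + shift[1]
    return x, y

# Reference frame on the even HR lattice; a second frame shifted by one
# HR pixel interleaves between the reference samples.
x0, y0 = lr_samples_on_hr_grid(shift=(0.0, 0.0))
x1, y1 = lr_samples_on_hr_grid(shift=(1.0, 1.0))
```

Plotting (x0, y0) and (x1, y1) with different markers reproduces the kind of interlaced pattern described for figure 1.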


INTERPOLATION AND RECONSTRUCTION OF SUPER RESOLUTION IMAGES

Super resolution imaging from the low resolution images can be achieved by any one of the following methods.

(1) Polynomial based image interpolation (5, 6)

Image interpolation aims at estimating intermediate pixels between the known pixel values. To estimate an intermediate pixel at position x, the neighbouring pixels and their distances from x are incorporated in the estimation process. The interpolation function can be written as

f̂(x) = Σk ck u(x − xk),

where u is the interpolation kernel, x and xk represent the continuous and discrete spatial positions, and ck are the interpolation coefficients. If f is band limited to (−π, π), the ideal kernel is u(s) = sinc(s); but because this formula is impractical, owing to the slow rate of decay of the sinc kernel, approximations such as the bicubic (cubic convolution) kernel are preferred:

u(s) = (3/2)|s|³ − (5/2)|s|² + 1,           0 ≤ |s| < 1,
u(s) = −(1/2)|s|³ + (5/2)|s|² − 4|s| + 2,   1 ≤ |s| < 2,
u(s) = 0,                                   |s| ≥ 2.        (1)
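Kernel (1) is Keys' cubic convolution kernel with the standard parameter a = −1/2. It can be sketched as below, together with a hypothetical 1-D helper (our own, not the authors' code) that evaluates the interpolation sum over the four nearest samples:

```python
import numpy as np

def keys_kernel(s, a=-0.5):
    """Keys' cubic convolution kernel, i.e. equation (1) when a = -1/2."""
    s = np.abs(np.asarray(s, dtype=float))
    out = np.zeros_like(s)
    near = s < 1
    far = (s >= 1) & (s < 2)
    out[near] = (a + 2) * s[near]**3 - (a + 3) * s[near]**2 + 1
    out[far] = a * s[far]**3 - 5 * a * s[far]**2 + 8 * a * s[far] - 4 * a
    return out

def interp1d_cubic(samples, x):
    """Interpolate 1-D samples (unit spacing) at a fractional position x."""
    k = int(np.floor(x))
    total = 0.0
    for m in range(k - 1, k + 3):            # the four nearest sample points
        if 0 <= m < len(samples):
            total += samples[m] * keys_kernel([x - m])[0]
    return total
```

With a = −1/2 the kernel reproduces quadratics exactly, which is why it interpolates smooth images well.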

In bicubic interpolation the sample points are used to evaluate the interpolation coefficients, and the technique is applied row by row and then column by column.

(2) Regularised image interpolation (7, 8)

Image interpolation for a captured digital image is an inverse problem. In general, super resolution image reconstruction is ill posed because of the insufficient number of LR images and the ill conditioned blur operators. The imaging process can be expressed as

yk = Wk x,   k = 1, 2, ..., p,

where x denotes the HR image of L1N1 × L2N2 pixels, yk denotes the k-th LR image of N1 × N2 pixels, and the matrix Wk, of size (N1N2) × (L1N1L2N2), represents the degradation by blurring, motion, and subsampling. Based on this observation model, the aim of SR image reconstruction is to estimate the HR image x from the LR images yk, k = 1, 2, ..., p; once the registration parameters are known, the observation model is completely specified. The deterministic regularised SR approach solves this inverse problem using prior information about the solution to make the problem well posed. Using constrained least squares, we choose x to minimise the Lagrangian

Σk ||yk − Wk x||² + λ ||Cx||²,

where C is a high pass filter and λ is the Lagrangian multiplier, known as the regularisation parameter. The larger the value of λ, the smoother the solution, and there is a unique estimate x̂ that minimises the cost function. One of the most basic deterministic iterative techniques for solving this minimisation leads to the iteration

x̂(n+1) = x̂(n) + β [ Σk WkT (yk − Wk x̂(n)) − λ CTC x̂(n) ],

where β denotes the convergence parameter and WkT acts as an upsampling operator combined with a type of blur operator. This is the generalised interpolation approach.
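The iteration above can be sketched with a simple block-averaging operator standing in for Wk and a discrete Laplacian standing in for the high-pass filter C; these operator choices, and all function names, are our illustrative assumptions, not the paper's:

```python
import numpy as np

def downsample(x, L=2):
    """W: average LxL blocks (blur + subsample)."""
    h, w = x.shape
    return x.reshape(h // L, L, w // L, L).mean(axis=(1, 3))

def upsample_T(y, L=2):
    """W^T for block-averaging W: spread each value back over its block."""
    return np.kron(y, np.ones((L, L))) / (L * L)

def laplacian(x):
    """C: discrete high-pass (4-neighbour Laplacian, zero boundary)."""
    out = -4.0 * x
    out[1:, :] += x[:-1, :]
    out[:-1, :] += x[1:, :]
    out[:, 1:] += x[:, :-1]
    out[:, :-1] += x[:, 1:]
    return out

def regularized_sr(lr_frames, L=2, lam=0.01, beta=0.5, n_iter=200):
    """Iterative constrained least-squares SR:
    x <- x + beta * (sum_k W^T (y_k - W x) - lam * C^T C x)."""
    x = np.kron(lr_frames[0], np.ones((L, L)))   # initial HR guess
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for y in lr_frames:
            grad += upsample_T(y - downsample(x, L), L)
        grad -= lam * laplacian(laplacian(x))    # C^T C x (Laplacian is symmetric)
        x += beta * grad
    return x
```

Larger λ smooths the estimate more, exactly as the text describes; β must be small enough for the iteration to converge.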

Interpolation by the frequency wavelet technique

A super resolution image can be obtained from low resolution images by frequency wavelet methods (9). In the wavelet method, signals are decomposed into components at different scales or resolutions; the advantage is that signal trends at different scales can be isolated and studied. Global trends are examined at coarse scales using scaling functions, while local variations are better analysed at fine scales. A brief summary of orthonormal wavelet analysis of 1D and 2D signals is presented here; for a detailed study one may refer to the basic theory of wavelet bases presented by Mallat (10, 11). The high frequency coefficients in the wavelet expansion are estimated using the sample points of the LR images on the high resolution grid, and the HR image is then obtained by applying the inverse wavelet transform.

For a function f(t) ∈ L²(R), the projection fj(t) of f(t) onto the subspace Vj represents an approximation of the function at scale j, φ being the scaling function; the approximation becomes more accurate as j increases. The difference between successive approximations is a detail signal that spans a wavelet subspace Wj. We can therefore decompose the approximation space as VJ+1 = VJ ⊕ WJ and expand any f(t) ∈ L²(R) as

f(t) = Σk aJ,k φJ,k(t) + Σ(j ≥ J) Σk bj,k ψj,k(t),          (A)

where aJ,k = ⟨f, φJ,k⟩ and bj,k = ⟨f, ψj,k⟩. This wavelet decomposition extends to 2D images: Vj(2) = Vj(1) ⊗ Vj(1), with scaling function Φ(x, y) = φ(x)φ(y) and the three wavelets

Ψ¹(x, y) = φ(x)ψ(y),   Ψ²(x, y) = ψ(x)φ(y),   Ψ³(x, y) = ψ(x)ψ(y),

so that

f(x, y) = Σk,l aJ,k,l ΦJ,k,l(x, y) + Σ(j ≥ J) Σk,l Σ(i=1,2,3) b(i)j,k,l Ψ(i)j,k,l(x, y).          (B)

The expansion formulas (A) and (B) are used to estimate the wavelet coefficients, and with these estimates we interpolate the function values at the HR grid points. Consider first the case of non-uniformly sampled 1D signals. Suppose f(t) is a function for which we want M uniformly spaced values, say at t = 0, 1, ..., M − 1, but we are given non-uniformly sampled data points of f(t) at t = t0, t1, ..., tp−1, with 0 ≤ t0 < t1 < ... < tp−1 ≤ M − 1. We take the unit spacing grid to be the resolution level V0. By repeated application of the decomposition Vj+1 = Vj ⊕ Wj, we separate V0 into an approximation component and detail components, and f is expanded as in (A). Substituting the values of the sampled data gives a set of p linear equations in the unknown coefficients, and the desired values of f(t) at the HR grid points are then computed from the estimated coefficients.
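For the Haar basis, the linear system in the wavelet coefficients can be set up and solved as below. The scale choice (a single decomposition level) and the function names are our simplifying assumptions for illustration:

```python
import numpy as np

def haar_phi(t):
    """Haar scaling function: 1 on [0, 1), 0 elsewhere."""
    t = np.asarray(t, dtype=float)
    return ((t >= 0) & (t < 1)).astype(float)

def haar_psi(t):
    """Haar wavelet: +1 on [0, 1/2), -1 on [1/2, 1)."""
    return haar_phi(2 * np.asarray(t, dtype=float)) - haar_phi(2 * np.asarray(t, dtype=float) - 1)

def wavelet_interpolate(t_samples, f_samples, M, t_eval):
    """Estimate coefficients of f ~ sum_k a_k phi(t/2 - k) + b_k psi(t/2 - k)
    from p non-uniformly sampled values, then evaluate f at t_eval."""
    K = M // 2                                   # translates at one coarser scale
    def design(t):
        t = np.asarray(t, dtype=float)
        cols = [haar_phi(t / 2 - k) for k in range(K)]
        cols += [haar_psi(t / 2 - k) for k in range(K)]
        return np.column_stack(cols)
    A = design(t_samples)                        # one linear equation per sample
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(f_samples, float), rcond=None)
    return design(t_eval) @ coeffs
```

When the samples cover every basis-function support the system is well posed; otherwise the least-squares solve gives a minimum-norm estimate.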

For wavelet super resolution the data are sampled non-uniformly but in a recurring manner; this type of sampling is called interlaced sampling. A common feature of wavelet SR reconstruction is the assumption that the LR images to be enhanced are the corresponding low pass filtered subbands of a decimated wavelet transform (DWT) of the HR image. So in all wavelet SR reconstruction methods (12), one estimates the high pass subbands of the DWT of the HR image and constructs the HR image by the inverse wavelet transform. The simplest approach to HR image reconstruction is to set all elements of these high pass subbands to zero; this method is called wavelet domain zero padding. Alternatively, the wavelet coefficients can be estimated using a Laplacian pyramid. The pyramid data structure used for image decomposition and reconstruction based on the Laplacian pyramid is shown in the figure.
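Wavelet domain zero padding with a single orthonormal Haar level can be sketched as follows; this is our own minimal implementation for illustration, not the paper's code:

```python
import numpy as np

def inverse_haar2d(ll, lh, hl, hh):
    """One inverse step of an orthonormal 2-D Haar DWT (Hadamard/2 synthesis)."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out

def wzp_upscale(lr):
    """Wavelet-domain zero padding: treat the LR image as the LL subband,
    set the three high-pass subbands to zero, and invert the transform."""
    ll = np.asarray(lr, dtype=float)
    z = np.zeros_like(ll)
    return inverse_haar2d(ll, z, z, z)
```

With the detail subbands zeroed every 2x2 output block is constant, which is exactly why WZP alone gives a blurred HR image and why better coefficient estimates (e.g. from a pyramid) improve on it.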

As in figure (a), g0 is an image and g1 is the result of applying an appropriate low pass filter and downsampling to g0. The prediction error is L0 = g0 − upsample(g1); g1 is then low pass filtered and downsampled to give g2, and the second error is L1 = g1 − upsample(g2). Repeating this several times, we obtain Lk = gk − upsample(gk+1). In this implementation each level is smaller than its predecessor by a scale factor, owing to the reduced sample density. This sequence is called the Laplacian pyramid, and the low pass images g0, g1, ... form a Gaussian pyramid. The pyramid reconstruction is shown in figure (b): the original image can be recovered exactly from L0, L1, ..., Ln by upsampling the coarsest level once, adding Ln−1, and repeating this process until level 0 is reached and g0 is recovered. This is inverse Laplacian pyramid generation.

Our aim, however, is a resolution higher than g0. Suppose the predicted high resolution image is g−1; then g−1 = L−1 + upsample(g0), where the new detail level L−1 must be estimated. Regarding L0, L1, ..., Ln as a new image pyramid, we obtain from it a residual pyramid by the same method used to obtain L0, L1, ..., Ln, and L−1 is then estimated via the pyramid reconstruction process: upsample the coarsest residual level and add it to the next, upsample the new image and add it to the one above, and repeat until L−1 is obtained. By this pyramid structure we obtain higher resolution.
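The pyramid construction and its exact reconstruction can be sketched with a block-mean REDUCE and a nearest-neighbour EXPAND standing in for the paper's filters (our simplifying assumptions):

```python
import numpy as np

def blur_downsample(g):
    """REDUCE: 2x2 block-mean low-pass filter plus subsampling."""
    return g.reshape(g.shape[0] // 2, 2, g.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(g):
    """EXPAND: nearest-neighbour upsampling back to double size."""
    return np.kron(g, np.ones((2, 2)))

def laplacian_pyramid(g0, levels):
    """Build [L0, ..., L_{n-1}, g_n] with L_k = g_k - EXPAND(g_{k+1})."""
    pyramid, g = [], g0
    for _ in range(levels):
        g_next = blur_downsample(g)
        pyramid.append(g - upsample(g_next))   # prediction error L_k
        g = g_next
    pyramid.append(g)                          # coarsest Gaussian level g_n
    return pyramid

def reconstruct(pyramid):
    """Invert: repeatedly EXPAND and add the stored prediction errors."""
    g = pyramid[-1]
    for lap in reversed(pyramid[:-1]):
        g = upsample(g) + lap
    return g
```

Because each Lk stores the exact prediction error, the reconstruction is lossless; super resolution then amounts to estimating one further detail level L−1 rather than storing it.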


Experimental Result

In our technique of super resolution imaging, we use a digital camera with a holographic device, giving low resolution images whose pixels carry depth information. The relative displacement between the LR images is calculated precisely, to sub-pixel accuracy, and the pixels of the LR images are marked on the HR grid; this is interlaced sampling. The details of our technique are shown in the following block diagram.

The second step of our method is interpolation by frequency wavelets. At each level the resolution is improved by collecting high frequency information: the imaging function is expanded in terms of frequency wavelets, and the coefficients in the wavelet expansion are calculated from the irregular sample points on the high resolution grid. The resulting image has high resolution, but it is blurred owing to degradation caused by aliasing and defocus. This error is eliminated by the least squares error technique, and we obtain the output image.

Ia

Ib

Fig. Ia shows the image obtained using nearest neighbour interpolation; fig. Ib shows the super resolution image of high clarity obtained using the frequency wavelet technique.



IIa

IIb

Fig. IIb shows the final output image with depth information; fig. IIa shows the input image after 10 iterations. By our technique we produce super resolution images of high clarity in three dimensions.

REFERENCES
1. Eugene Hecht, Optics, Pearson Education, Inc., pp. 613-645.
2. Max Born and E. Wolf, Principles of Optics, Pergamon Press Ltd.
3. Michal Irani and Shmuel Peleg, "Improving resolution by image registration", CVGIP: Graphical Models and Image Processing, Vol. 53, No. 3, pp. 231-239, 1991.
4. Nhat Nguyen and Peyman Milanfar, "Wavelet superresolution", Circuits, Systems and Signal Processing, Vol. 19, No. 4, pp. 321-338, 2000.
5. Sky McKinley and Megan Levine, "Cubic spline interpolation", Math 45: Linear Algebra.
6. Robert G. Keys, "Cubic convolution interpolation for digital image processing", IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. ASSP-29, No. 6, pp. 1153-1160, December 1981.
7. Deepu Rajan and Subhasis Chaudhuri, "A generalized interpolation scheme for image scaling and super resolution", in Proc. Erlangen Workshop '99 on Vision, Modeling and Visualization, University of Erlangen, Germany, 1999, pp. 301-308.
8. Meinguet, "Multivariate interpolation at arbitrary points made simple", Journal of Applied Mathematics and Physics, Vol. 30, pp. 292-304, 1979.
9. K. Mathew and S. Shibu, "Wavelet based technique for super resolution image reconstruction", International Journal of Computer Applications, Vol. 33, No. 7, pp. 11-17, November 2011.
10. S. Mallat, "Multiresolution approximations and wavelet orthonormal bases", Transactions of the American Mathematical Society, Vol. 315, pp. 68-87.
11. S. Mallat, A Wavelet Tour of Signal Processing, Academic Press, San Diego, Calif.
12. H. C. Liu, Y. Feng and G. Y. Sun, "Wavelet domain image super-resolution reconstruction based on image pyramid and cycle spinning", Journal of Physics: Conference Series, Vol. 48, pp. 417-421, 2006 (International Symposium on Instrumentation Science and Technology).
13. Zhao, Hua Han and Sulong Peng, "Wavelet domain HMT-based image super resolution", in Proc. International Conference, pp. 953-956.
14. Benjamin Langmann, Klaus Hartmann and Otmar Loffeld, "Comparison of depth super-resolution methods for 2D/3D images", International Journal of Computer Information Systems and Industrial Management Applications, ISSN 2150-7988, Vol. 3, pp. 635-645, 2011.
15. Vinod Karar and Smarajit Ghosh, "Effect of varying contrast ratio and brightness non-uniformity over human attention and tunneling aspects in aviation", International Journal of Electronics and Communication Engineering & Technology (IJECET), Volume 3, Issue 2, 2012, pp. 400-412, ISSN Print: 0976-6464, ISSN Online: 0976-6472.

