
ABSTRACT

Steganography is the art and science of hiding secret data in plain sight within an innocent cover object so that the data can be transmitted securely over a network without being noticed. The word steganography is composed of the two Greek words steganos and graphia, which together mean "covered writing". The use of steganography dates back to ancient times, when it was employed by the Romans and the ancient Egyptians. Interest in modern digital steganography began with Simmons in 1983, when he presented the problem of two prisoners wishing to plan an escape while being watched by a warden who blocks any suspicious data communicated between them and passes only normal-looking messages. Any digital file, such as an image, video, audio, text, or IP packet, can be used to hide a secret message. Generally, the file used to hide the data is referred to as the cover object, and the term stego-object is used for the file containing the secret message. Among the digital file formats available today, image files are the most popular cover objects because they are easy to find, tolerate a higher degree of distortion than other types of files, and offer high hiding capacity due to the redundancy in the digital representation of image data.

There are a number of steganographic schemes that hide a secret message in an image file; these schemes can be classified according to the format of the cover image or the method of hiding. Two popular classes of hiding methods exist: spatial-domain embedding and transform-domain embedding. Steganography has gained importance in the past few years due to the increasing need for secrecy in an open environment like the Internet, where almost anyone can observe the communicated data. Steganography attempts to hide the very existence of the message and so make communication undetectable. Many techniques are used to secure information; cryptography aims to scramble the information sent and make it unreadable, while steganography is used to conceal the information so that no one can sense its existence. In many algorithms used to secure information, steganography and cryptography are used together. Steganography faces several technical challenges, most notably high hiding capacity and imperceptibility. In this paper, we try to optimize these two main requirements by proposing a novel technique for hiding data in digital images that combines an adaptive hiding capacity function, which hides the secret data in the integer wavelet coefficients of the cover image, with the optimum pixel adjustment (OPA) algorithm. The coefficients used are selected according to a pseudorandom function generator to increase the security of the hidden data. The OPA algorithm is applied after embedding the secret message to minimize the embedding error. The proposed system achieves high hiding rates with reasonable imperceptibility compared to other steganographic systems.

Existing Method: LSB Embedding in the Spatial Domain. The basic idea of LSB embedding is the direct replacement of the LSBs of noisy or unused bits of the cover image with the secret message bits.

To date, LSB is the most widely used technique for data hiding because it is simple to implement, offers high hiding capacity, and provides a very easy way to control the stego-image quality.

Drawbacks of the existing method: low hiding capacity, complex computations, and embedding error (overflow and underflow).

Proposed Method: The proposed system is an adaptive data hiding scheme in which randomly selected integer wavelet coefficients of the cover image are modified with the secret message bits. Each selected coefficient hides a different number of message bits according to the hiding capacity function. After data insertion, we apply the optimum pixel adjustment algorithm to reduce the error induced by the insertion.

TABLE OF CONTENTS

Abstract

CHAPTER 1
1.1 Introduction
1.2 Block Diagram
1.3 Fundamentals of Digital Image

CHAPTER 2
2.1 Introduction to Wavelets
2.2 Wavelet Transform
2.3 Lifting Scheme
2.4 Lifting Using Haar

CHAPTER 3
3.1 Introduction to LSB
3.2 Data Hiding by Simple LSB Substitution
3.3 Optimal Pixel Adjustment Process

CHAPTER 4
4.1 Introduction to MATLAB
4.2 Starting MATLAB

CHAPTER 5
5.1 Conclusion
5.2 References

CHAPTER 1

1.1 Introduction

Steganography is the art and science of hiding secret data in plain sight within an innocent cover object so that the data can be transmitted securely over a network without being noticed. The word steganography is composed of the two Greek words steganos and graphia, which together mean "covered writing". The use of steganography dates back to ancient times, when it was employed by the Romans and the ancient Egyptians. Interest in modern digital steganography began with Simmons in 1983, when he presented the problem of two prisoners wishing to plan an escape while being watched by a warden who blocks any suspicious data communicated between them and passes only normal-looking messages. Any digital file, such as an image, video, audio, text, or IP packet, can be used to hide a secret message. Generally, the file used to hide the data is referred to as the cover object, and the term stego-object is used for the file containing the secret message. Among the digital file formats available today, image files are the most popular cover objects because they are easy to find, tolerate a higher degree of distortion than other types of files, and offer high hiding capacity due to the redundancy in the digital representation of image data. There are a number of steganographic schemes that hide a secret message in an image file; these schemes can be classified according to the format of the cover image or the method of hiding. Two popular classes of hiding methods exist: spatial-domain embedding and transform-domain embedding.

The Least Significant Bit (LSB) substitution is an example of a spatial-domain technique. The basic idea of LSB embedding is the direct replacement of the LSBs of noisy or unused bits of the cover image with the secret message bits. LSB is still the most widely used technique for data hiding because it is simple to implement, offers high hiding capacity, and provides a very easy way to control the stego-image quality, but it has low robustness against modifications made to the stego-image, such as low-pass filtering and compression, and also low imperceptibility. Algorithms using LSB in grayscale images can be found in the literature. The other type of hiding method is the transform-domain technique, which appeared to overcome the robustness and imperceptibility problems of LSB substitution. Many transforms can be used for data hiding; the most widely used are the discrete cosine transform (DCT), which is used in the common image and video compression formats JPEG and MPEG, the discrete wavelet transform (DWT), and the discrete Fourier transform (DFT). Examples of data hiding using the DCT can be found in the literature. Most recent research is directed towards the use of the DWT, since it is used in the newer compression formats JPEG2000 and MPEG-4. In some of these schemes the secret message is embedded into the high-frequency coefficients of the wavelet transform while leaving the low-frequency sub-band coefficients unaltered, while in others an adaptive (varying) hiding capacity function is employed to determine how many bits of the secret message are to be embedded in each of the wavelet coefficients.

The advantage of transform-domain techniques over spatial-domain techniques is their high ability to tolerate noise and some signal processing operations, but on the other hand they are computationally complex and hence slower. In all proposed steganographic techniques, whether spatial or transform based, the key problem is how to increase the size of the secret message without causing noticeable distortions in the cover object. Some of these techniques try to achieve a high-capacity, low-distortion result by using adaptive methods that calculate the hiding capacity of the cover according to its local characteristics. However, transform-based steganographic techniques suffer from two disadvantages: low hiding capacity and complex computations. To overcome these disadvantages, this paper proposes an adaptive data hiding technique combined with the optimum pixel adjustment algorithm to hide data in the integer wavelet coefficients of the cover image, in order to maximize the hiding capacity as much as possible. We also use a pseudorandom generator function to select the embedding locations among the integer wavelet coefficients to increase the system security. The remainder of the paper is organized as follows. First, we give a brief introduction to the integer wavelet transform. Second, we describe the proposed steganographic system. Then we discuss the achieved results, and finally we conclude the paper and suggest future improvements to the system.


1.2 Block Diagram

1.3 FUNDAMENTALS OF DIGITAL IMAGE


A digital image is defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or grey level of the image at that point.

The field of digital image processing refers to processing digital images by means of a digital computer. The digital image is composed of a finite number of elements, each of which has a particular location and value. The elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used.

1.3.1 Image Compression

Digital image compression addresses the problem of reducing the amount of data required to represent a digital image. The underlying basis of the reduction process is the removal of redundant data. From the mathematical viewpoint, this amounts to transforming a 2-D pixel array into a statistically uncorrelated data set. Data redundancy is not an abstract concept but a mathematically quantifiable entity. If n1 and n2 denote the number of information-carrying units in two data sets that represent the same information, the relative data redundancy RD of the first data set (the one characterized by n1) can be defined as [2]

RD = 1 − 1/CR

where CR, called the compression ratio [2], is defined as

CR = n1/n2
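As a quick numeric illustration of these two quantities (the sizes below are hypothetical, not taken from this paper):

n1 = 786432;        % bytes needed by the original representation (hypothetical)
n2 = 98304;         % bytes needed by the compressed representation (hypothetical)
CR = n1 / n2        % compression ratio = 8
RD = 1 - 1/CR       % relative data redundancy = 0.875, i.e., 87.5% of the data is redundant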

In image compression, three basic data redundancies can be identified and exploited: coding redundancy, interpixel redundancy, and psychovisual redundancy. Image compression is achieved when one or more of these redundancies are reduced or eliminated.

Image compression is mainly used for image transmission and storage. Image transmission applications include broadcast television; remote sensing via satellite, aircraft, radar, or sonar; teleconferencing; computer communications; and facsimile transmission. Image storage is required most commonly for educational and business documents; medical images that arise in computed tomography (CT), magnetic resonance imaging (MRI), and digital radiology; motion pictures; satellite images; weather maps; geological surveys; and so on.

1.3.2 Image Compression Model

Figure 1.1.a) Block diagram of image compression: image → forward transform → encoder → compressed image.

Figure 1.1.b) Block diagram of image decompression: compressed image → decoder → inverse transform → reconstructed image.

1.3.3 Image Compression Types

There are two types of image compression techniques:
1. Lossy image compression
2. Lossless image compression


1. Lossy Image Compression:

Lossy compression provides higher levels of data reduction but results in a less-than-perfect reproduction of the original image. It provides a high compression ratio. Lossy image compression is useful in applications such as broadcast television, videoconferencing, and facsimile transmission, in which a certain amount of error is an acceptable trade-off for increased compression performance. Originally, PGF was designed to quickly and progressively decode lossy-compressed aerial images. A lossy compression mode was preferred because, in an application like a terrain explorer, texture data (e.g., aerial orthophotos) is usually mip-map filtered and therefore lossily mapped onto the terrain surface. In addition, decoding lossy-compressed images is usually faster than decoding lossless-compressed images.

In the next test series we evaluate the lossy compression efficiency of PGF. One of the strongest competitors in this area is certainly JPEG 2000. Since JPEG 2000 offers two different filters, we used the one with the better trade-off between compression efficiency and runtime; on our machine the 5/3 filter set has the better trade-off. JPEG 2000 shows in both cases remarkably good compression efficiency for very high compression ratios, but also very poor encoding and decoding speed.

The other competitor is JPEG. JPEG is one of the most popular image file formats. It is very fast and has reasonably good compression efficiency over a wide range of compression ratios. The drawbacks of JPEG are the missing lossless compression mode and the often missing progressive decoding. Fig. 4 depicts the average rate-distortion behavior for the images in the Kodak test set when fixed (i.e., nonprogressive) lossy compression is used. The PSNR of PGF is on average 3% smaller than that of JPEG 2000, but 3% better than that of JPEG.

These results are also qualitatively valid for our PGF test set, and they are characteristic of aerial orthophotos and natural images. Because of the design of PGF we already know that PGF does not reach the compression efficiency of JPEG 2000. However, we are interested in the trade-off between compression efficiency and runtime. To report this trade-off we show in Table 4 a comparison between JPEG 2000 and PGF, and in Fig. 5 (on page 8) we show, for the same test series as in Fig. 4, the corresponding average decoding times in relation to compression ratios.

Table 4 contains for seven different compression ratios (mean values over the compression ratios of the eight images of the Kodak test set) the corresponding average encoding and decoding times in relation to the average PSNR values. In case of PGF the encoding time is always slightly longer than the corresponding decoding time. The reason for that is that the actual encoding phase (cf. Subsection 2.4.2) takes slightly longer than the corresponding decoding phase.

For six of the seven ratios, the PSNR difference between JPEG 2000 and PGF is within 3% of the PSNR of JPEG 2000. Only in the first row is the difference larger (21%), but because a PSNR of 50 corresponds to almost perfect image quality, this large PSNR difference corresponds to an almost unnoticeable visual difference. The price paid in JPEG 2000 for the 3% more PSNR is very high. The creation of a PGF file is five to twenty times faster than the creation of a corresponding JPEG 2000 file, and the decoding of the created PGF is still five to ten times faster than the decoding of the JPEG 2000 file. This gain in speed is remarkable, especially in areas where time is more important than quality, for instance in real-time computation.

In Fig. 5 we see that the price we pay in PGF for the 3% more PSNR than JPEG is low: for small compression ratios (< 9) decoding in PGF takes two times longer than JPEG and for higher compression ratios (> 30) it takes only ten percent longer than JPEG. These test results are characteristic for both natural images and aerial ortho-photos. Again, in the third test series we only use the Lena image. We run our lossy coder with six different quantization parameters and measure the PSNR in relation to the resulting compression ratios. The results (ratio: PSNR) are:

2. Lossless Image Compression:

Lossless compression is the only acceptable form of data reduction in applications where any loss of information is intolerable, and it provides a lower compression ratio than lossy compression. Lossless image compression techniques are composed of two relatively independent operations: (1) devising an alternative representation of the image in which its interpixel redundancies are reduced, and (2) coding the representation to eliminate coding redundancies.

Lossless image compression is useful in applications such as medical imagery, business documents, and satellite images.

Table 2 summarizes the lossless compression efficiency and Table 3 the coding times of the PGF test set. For WinZip we provide only average runtime values because, in the absence of source code, we had to use an interactive testing procedure with runtimes measured by hand. All other values are measured in batch mode.

In Table 2 it can be seen that in almost all cases the best compression ratio is obtained by JPEG 2000, followed by PGF, JPEG-LS, and PNG. This result differs from the result in [SEA+00], where the best performance for a similar test set was reported for JPEG-LS. PGF performs between 0.5% (woman) and 21.3% (logo) worse than JPEG 2000; on average it is almost 15% worse. The two exceptions to the general trend are the compound and the logo images. Both images contain, for the most part, black text on a white background. For this type of image, JPEG-LS and in particular WinZip and PNG provide much larger compression ratios, and on average PNG performs the best, which is also reported in [SEA+00]. These results show that, as far as lossless compression is concerned, PGF performs reasonably well on natural and aerial images, while for specific types of images such as compound and logo images PGF is outperformed by far by PNG.


Table 3 shows the encoding (enc) and decoding (dec) times (measured in seconds) for the same algorithms and images as in Table 2. JPEG 2000 and PGF are both symmetric algorithms, while WinZip, JPEG-LS and in particular PNG are asymmetric with a clearly shorter decoding than encoding time. JPEG 2000, the slowest in encoding and decoding, takes more than four times longer than PGF. This speed gain is due to the simpler coding phase of PGF. JPEG-LS is slightly slower than PGF during encoding, but slightly faster in decoding images.

WinZip and PNG decode even faster than JPEG-LS, but their encoding times are also worse. PGF seems to be the best compromise between encoding and decoding times. Our PGF test set clearly shows that PGF in lossless mode is best suited for natural images and aerial orthophotos. PGF is the only algorithm that encodes the three-megabyte aerial orthophoto in less than a second without a real loss of compression efficiency. For this particular image, the efficiency loss is less than three percent compared to the best.

These results are underlined by our second test set, the Kodak test set.

Fig. 3 shows the averages of the compression ratios (ratio), encoding (enc) times, and decoding (dec) times over all eight images. In this test set JPEG 2000 shows the best compression efficiency, followed by PGF, JPEG-LS, PNG, and WinZip. On average, PGF is eight percent worse than JPEG 2000. The fact that JPEG 2000 has a better lossless compression ratio than PGF is not surprising, because JPEG 2000 is more quality driven than PGF. However, it is remarkable that PGF is clearly better than JPEG-LS (+21%) and PNG (+23%) for natural images. JPEG-LS also shows a symmetric encoding and decoding time behavior in the Kodak test set; its encoding and decoding times are almost equal to those of PGF. Only PNG and WinZip decode faster than PGF, but they also take longer than PGF to encode. If both compression efficiency and runtime are important, then PGF is clearly the best of the tested algorithms for lossless compression of natural images and aerial orthophotos. In the third test we run our lossless coder on the Lena image. The compression ratio is 1.68, and the encoding and decoding take 0.25 and 0.19 seconds, respectively.

1.3.4 Image Compression Standards

There are many methods available for lossy and lossless image compression, and their efficiency has been standardized by several organizations. The International Standardization Organization (ISO) and the Consultative Committee of the International Telephone and Telegraph (CCITT) have defined image compression standards for both binary and continuous-tone (monochrome and colour) images. Some of the image compression standards are:

1. JBIG1
2. JBIG2
3. JPEG-LS
4. DCT-based JPEG
5. Wavelet-based JPEG2000

Currently, JPEG 2000 [3] is widely used because the JPEG-2000 standard supports lossy and lossless compression of single-component (e.g., grayscale) and multicomponent (e.g., colour) imagery. In addition to this basic compression functionality, numerous other features are provided, including: 1) progressive recovery of an image by fidelity or resolution; 2) region-of-interest coding, whereby different parts of an image can be coded with differing fidelity; 3) random access to particular regions of an image without the need to decode the entire code stream; 4) a flexible file format with provisions for specifying opacity information and image sequences; and 5) good error resilience. Due to its excellent coding performance and many attractive features, JPEG 2000 has a very large potential application base. Some possible application areas include image archiving, the Internet, web browsing, document imaging, digital photography, medical imaging, remote sensing, and desktop publishing.

JPEG 2000 has two main advantages over other standards. First, it addresses a number of weaknesses in the existing JPEG standard. Second, it provides a number of new features not available in the JPEG standard.

The preceding points led to several key objectives for the new standard, namely that it should: 1) allow efficient lossy and lossless compression within a single unified coding framework; 2) provide superior image quality, both objectively and subjectively, at low bit rates; 3) support additional features such as region-of-interest coding and a more flexible file format; and 4) avoid excessive computational and memory complexity. Undoubtedly, much of the success of the original JPEG standard can be attributed to its royalty-free nature. Consequently, considerable effort has been made to ensure that a minimally compliant JPEG-2000 codec can be implemented free of royalties.

CHAPTER 2

2.1 INTRODUCTION TO WAVELETS:

Over the past several years, the wavelet transform has gained widespread acceptance in signal processing in general and in image compression research in particular. In applications such as still image compression, discrete wavelet transform (DWT) based schemes have outperformed other coding schemes such as those based on the DCT. Since there is no need to divide the input image into non-overlapping 2-D blocks and its basis functions have variable length, wavelet-coding schemes avoid blocking artifacts at higher compression ratios. Because of their inherent multiresolution nature, wavelet-coding schemes are especially suitable for applications where scalability and tolerable degradation are important. Recently the JPEG committee released its new image coding standard, JPEG-2000, which is based on the DWT.

Basically, we use the wavelet transform (WT) to analyze non-stationary signals, i.e., signals whose frequency response varies in time, since the Fourier transform (FT) is not suitable for such signals. To overcome this limitation of the FT, the short-time Fourier transform (STFT) was proposed. There is only a minor difference between the STFT and the FT: in the STFT, the signal is divided into small segments, where each segment (portion) of the signal can be assumed to be stationary. For this purpose, a window function w is chosen. The width of this window in time must be equal to the segment of the signal for which it can still be considered stationary. With the STFT one can obtain the time-frequency response of a signal simultaneously, which cannot be obtained with the FT. The short-time Fourier transform of a real continuous signal is defined as:

X(τ, f) = ∫ [x(t) · w*(t − τ)] e^(−j2πft) dt ------------ (2.1)

where w(t − τ) is the window function centred around time τ, so the window can be shifted along the signal by changing τ, and by varying f we obtain the frequency response of the different signal segments. The Heisenberg uncertainty principle explains the problem with the STFT. This principle states that one cannot know the exact time-frequency representation of a signal, i.e., one cannot know what spectral components exist at what instants of time. What one can know are the time intervals in which certain bands of frequencies exist; this is called the resolution problem. The problem has to do with the width of the window function that is used, known as the support of the window. If the window function is narrow, it is known as compactly supported. The narrower we make the window, the better the time resolution and the better the assumption of stationarity, but the poorer the frequency resolution:

Narrow window ===> good time resolution, poor frequency resolution
Wide window ===> good frequency resolution, poor time resolution

The wavelet transform (WT) was developed as an alternative to the STFT to overcome the resolution problem. Wavelet analysis is performed by multiplying the signal with the wavelet function, similar to the window function in the STFT, and computing the transform separately for different segments of the time-domain signal at different frequencies. This approach is called multiresolution analysis (MRA) [4], as it analyzes the signal at different frequencies, giving different resolutions. MRA is designed to give good time resolution and poor frequency resolution at high frequencies, and good frequency resolution and poor time resolution at low frequencies. This approach is especially suitable when the signal has high-frequency components for short durations and low-frequency components for long durations, e.g., images and video frames.

The wavelet transform involves projecting a signal onto a complete set of translated and dilated versions of a mother wavelet ψ(t). The strict definition of a mother wavelet will be dealt with later, so that the form of the wavelet transform can be examined first. For now, assume the loose requirement that ψ(t) has compact temporal and spectral support (limited by the uncertainty principle, of course), upon which a set of basis functions can be defined.

The basis set of wavelets is generated from the mother (basic) wavelet as:

ψ_a,b(t) = (1/√a) ψ((t − b)/a) ;  a, b ∈ R and a > 0 ------------ (2.2)

The variable a (the inverse of frequency) reflects the scale (width) of a particular basis function: a large value of a gives low frequencies and a small value gives high frequencies. The variable b specifies the translation of the wavelet along the time axis. The term 1/√a is used for normalization.

2.2. WAVELET TRANSFORM


Whether we like it or not, we are living in a world of signals. Nature talks to us with signals: light, sounds... People talk to each other with signals: music, TV, phones... The human body is equipped to survive in this world of signals with sensors such as eyes and ears, which are able to receive and process these signals. Consider, for instance, our ears: they can discriminate the volume and tone of a voice. Most of the information our ears extract from a signal is in its frequency content. Scientists have developed mathematical methods to imitate the processing performed by our body and extract the frequency information contained in a signal. These mathematical algorithms are called transforms, and the most popular among them is the Fourier transform.

A second way to analyze non-stationary signals is to first filter different frequency bands, cut these bands into slices in time, and then analyze them. The wavelet transform uses this approach. The wavelet transform, or wavelet analysis, is probably the most recent solution to overcome the shortcomings of the Fourier transform. In wavelet analysis, the use of a fully scalable modulated window solves the signal-cutting problem. The window is shifted along the signal, and for every position the spectrum is calculated. Then this process is repeated many times with a slightly shorter (or longer) window for every new cycle.

In the end the result is a collection of time-frequency representations of the signal, all with different resolutions. Because of this collection of representations, we can speak of a multiresolution analysis. In the case of wavelets, we normally do not speak about time-frequency representations but about time-scale representations.

Discrete Wavelet Transform

The discrete wavelet transform (DWT) was developed to apply the wavelet transform to the digital world. Filter banks are used to approximate the behavior of the continuous wavelet transform. The signal is decomposed with a high-pass filter and a low-pass filter. The coefficients of these filters are computed using mathematical analysis and made available to you. See Appendix B for more information about these computations.

where
LPd: low-pass decomposition filter
HPd: high-pass decomposition filter
LPr: low-pass reconstruction filter
HPr: high-pass reconstruction filter
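As a small illustration of such a one-level two-channel decomposition and reconstruction (a sketch using MATLAB's Wavelet Toolbox functions, not part of the original application note):

x = sin(2*pi*(0:127)/32);              % example 1-D signal
[cA, cD] = dwt(x, 'db4');               % low-pass branch (approximation) and high-pass branch (detail)
x_rec = idwt(cA, cD, 'db4', length(x)); % reconstruct from both branches
max(abs(x - x_rec))                     % error at numerical precision: perfect reconstruction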

The wavelet literature presents the filter coefficients to you in tables. An example is the Daubechies filters for wavelets. These filters depend on a parameter p called the vanishing moment.

The h[n] coefficients are used as the low-pass reconstruction filter (LPr). The coefficients for the filters HPd, LPd, and HPr are computed from the h[n] coefficients as follows (L is the length of the filter):

High-pass decomposition filter (HPd) coefficients: g[n] = (−1)^n h[L − n]

Low-pass decomposition filter (LPd) coefficients: h'[n] = h[L − n]

High-pass reconstruction filter (HPr) coefficients: g'[n] = g[L − n]
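These relations can be checked numerically in MATLAB, which follows the same quadrature-mirror construction for orthogonal wavelets (an illustrative sketch, not part of the original text; it assumes the Wavelet Toolbox conventions of wfilters and qmf):

[LPd, HPd, LPr, HPr] = wfilters('db2');  % the four filter sets for the Daubechies-2 wavelet
max(abs(HPr - qmf(LPr)))                 % ~0: high-pass reconstruction is the quadrature mirror of the low-pass
max(abs(HPd - fliplr(HPr)))              % ~0: decomposition filters are time-reversed reconstruction filters
max(abs(LPd - fliplr(LPr)))              % ~0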


The Daubechies filters for wavelets are provided in the C55x IMGLIB for 2 ≤ p ≤ 10. Since there are several sets of filters, we may ask ourselves what the advantages and disadvantages of using one set or another are.

First we need to understand that we will have perfect reconstruction no matter what the filter length is. However, longer filters provide smoother, smaller intermediate results. Thus, if intermediate processing is required, we are less likely to lose information due to necessary thresholding or saturation. However, longer filters obviously involve more processing.

Wavelets and Perfect Reconstruction Filter Banks

Filter banks decompose the signal into high- and low-frequency components. The low-frequency component usually contains most of the frequency content of the signal; this is called the approximation. The high-frequency component contains the details of the signal.

Wavelet decomposition can be implemented using a two-channel filter bank. Two-channel filter banks are discussed in this section briefly. The main idea is that perfect reconstruction filter banks implement series expansions of discrete-time signals.


The input and the reconstruction are identical; this is called perfect reconstruction. Two popular decomposition structures are pyramid and wavelet packet. The first one decomposes only the approximation (low-frequency component) part while the second one decomposes both the approximation and the detail (high-frequency component).


Figure 8. Wavelet Packet Decomposition

The C55x IMGLIB provides the following functions for one dimension pyramid and packet decomposition and reconstruction. Complete information about these functions can be found in the C55x IMGLIB.

1-D discrete wavelet transform:
void IMG_wave_decom_one_dim(short *in_data, short *wksp, int *wavename, int length, int level);

1-D inverse discrete wavelet transform:
void IMG_wave_recon_one_dim(short *in_data, short *wksp, int *wavename, int length, int level);

1-D discrete wavelet packet transform:
void IMG_wavep_decom_one_dim(short *in_data, short *wksp, int *wavename, int length, int level);

1-D inverse discrete wavelet packet transform:
void IMG_wavep_recon_one_dim(short *in_data, short *wksp, int *wavename, int length, int level);

Wavelets Image Processing

Wavelets have found a large variety of applications in the image processing field. The JPEG 2000 standard uses wavelets for image compression. Other image processing applications such as noise reduction, edge detection, and finger print analysis have also been investigated in the literature.

Wavelet Decomposition of Images


In wavelet decomposition of an image, the decomposition is done row by row and then column by column. For instance, here is the procedure for an N x M image: you filter each row and then down-sample to obtain two N x (M/2) images; then you filter each column of these and subsample the filter output to obtain four (N/2) x (M/2) images.

Of the four subimages obtained, as seen in Figure 12, the one obtained by low-pass filtering the rows and columns is referred to as the LL image. The one obtained by low-pass filtering the rows and high-pass filtering the columns is referred to as the LH image. The one obtained by high-pass filtering the rows and low-pass filtering the columns is called the HL image. The subimage obtained by high-pass filtering the rows and columns is referred to as the HH image. Each of the subimages obtained in this fashion can then be filtered and subsampled to obtain four more subimages. This process can be continued until the desired subband structure is obtained.
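The same one-level split can be reproduced in MATLAB (a short sketch, analogous to the IMGLIB routines above; the subband naming follows the document's own dwt2 example in Chapter 4):

a = imread('cameraman.tif');            % 256 x 256 grayscale test image
[LL, LH, HL, HH] = dwt2(a, 'haar');     % one decomposition level: four 128 x 128 subbands
subplot(2,2,1); imshow(LL, []); title('LL');
subplot(2,2,2); imshow(LH, []); title('LH');
subplot(2,2,3); imshow(HL, []); title('HL');
subplot(2,2,4); imshow(HH, []); title('HH');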

Three of the most popular ways to decompose an image are pyramid, SPACL, and wavelet packet, as shown in Figure 13.

In the pyramid decomposition structure, only the LL subimage is decomposed into four more subimages after each decomposition step. In the wavelet packet decomposition structure, each subimage (LL, LH, HL, HH) is decomposed after each step.

In the SPACL structure, after the first level of decomposition each subimage is decomposed into smaller subimages, and then only the LL subimage is decomposed further. Figure 14 shows a three-level decomposition image of the pyramid structure.

In the part I development stage, the JPEG 2000 standard supports the pyramid decomposition structure. In the future all three structures will be supported. For two dimensions, the C55x IMGLIB provides functions for pyramid and packet decomposition and reconstruction. Complete information about these functions can be found in the C55x IMGLIB.

2-D discrete wavelet transform:
void IMG_wave_decom_two_dim(short **image, short *wksp, int width, int height, int *wavename, int level);

2-D inverse discrete wavelet transform:
void IMG_wave_recon_two_dim(short **image, short *wksp, int width, int height, int *wavename, int level);

2-D discrete wavelet packet transform:
void IMG_wavep_decom_two_dim(short **image, short *wksp, int width, int height, int *wavename, int level);

2-D inverse discrete wavelet packet transform:
void IMG_wavep_recon_two_dim(short **image, short *wksp, int width, int height, int *wavename, int level);

Wavelet Applications

TI provides several one-dimension and two-dimension wavelet applications, which illustrate how to use the wavelet functions provided in the C55x IMGLIB.

2.2.1. 1-D Continuous Wavelet Transform

The 1-D continuous wavelet transform is given by:
Wf(a, b) = ∫ x(t) ψ*_a,b(t) dt ------------ (2.3)

The inverse 1-D wavelet transform is given by:

x(t) = (1/C) ∫_0^∞ ∫ Wf(a, b) ψ_a,b(t) db (da / a²) ------------ (2.4)

where

C = ∫_0^∞ ( |Ψ(ω)|² / ω ) dω < ∞

and Ψ(ω) is the Fourier transform of the mother wavelet ψ(t). C is required to be finite, which leads to one of the required properties of a mother wavelet: since C must be finite, Ψ(0) = 0 to avoid a singularity in the integral, and thus ψ(t) must have zero mean. This condition can be stated as ∫ ψ(t) dt = 0 and is known as the admissibility condition.

2.2.2. 1-D Discrete Wavelet Transform

The discrete wavelet transform (DWT) transforms a discrete-time signal into a discrete wavelet representation. The first step is to discretize the wavelet parameters, which reduces the previously continuous basis set of wavelets to a discrete and orthogonal/orthonormal set of basis wavelets:

ψ_m,n(t) = 2^(m/2) ψ(2^m t − n) ;  m, n ∈ Z -------- (2.5)

The 1-D DWT is given as the inner product of the signal x(t) being transformed with each of the discrete basis functions:

W_m,n = < x(t), ψ_m,n(t) > ;  m, n ∈ Z ------------ (2.6)

The 1-D inverse DWT is given as:

x(t) = Σ_m Σ_n W_m,n ψ_m,n(t) ;  m, n ∈ Z ------------- (2.7)

One Dimension Wavelet Applications



The 1D_Demo.c file presents applications of the one-dimension wavelet. A 128-point sine wave is used as input for all these applications as shown in Figure 15:

1-D Perfect Decomposition and Reconstruction Example

The third application shows a three-level pyramid decomposition and reconstruction of the input signal:

// Perfect Reconstruction of Pyramid, Level 3
//==================================================
for( i = 0; i < LENGTH; i++ )
    signal[i] = backup[i];
IMG_wave_decom_one_dim( signal, temp_wksp, db4, LENGTH, 3 );
IMG_wave_recon_one_dim( signal, temp_wksp, db4, LENGTH, 3 );
for( i = 0; i < LENGTH; i++ )
    noise[i] = signal[i] - backup[i];

The error signal shown in Figure 24 represents the difference between the original signal and the reconstructed signal. This error signal is not zero because of the dynamic range of the 16-bit fixed-point data.

2.2.3. 2-D wavelet transform The 1-D DWT can be extended to 2-D transform using separable wavelet filters. With separable filters, applying a 1-D transform to all the rows of the input and then repeating on all of the columns can compute the 2-D transform. When one-level 2-D DWT is applied to an image, four transform coefficient sets are created. As depicted in Figure 2.1(c), the four sets are LL, HL, LH, and HH, where the first letter corresponds to applying either a low pass or high pass filter to the rows, and the second letter refers to the filter applied to the columns.

Figure 2.1. Block diagram of the DWT: (a) original image, (b) output image after the first 1-D transform applied to the rows, (c) output image after the second 1-D transform applied to the columns.

Figure 2.2. DWT of the Lena image: (a) original image, (b) output image after the first 1-D transform applied to the columns, (c) output image after the second 1-D transform applied to the rows.

The Two-Dimensional DWT (2D-DWT) converts images from spatial domain to frequency domain. At each level of the wavelet decomposition, each column of an image is first transformed using a 1D vertical analysis filter-bank. The same filter-bank is then applied horizontally to each row of the filtered and subsampled data. One-level of wavelet decomposition produces four filtered and subsampled images, referred to as subbands. The upper and lower areas of Fig. 2.2(b), respectively, represent the low pass and high pass coefficients after vertical 1D-DWT and sub sampling. The result of the horizontal 1D-DWT and sub sampling to form a 2D-DWT output image is shown in Fig.2.2(c).

We can use multiple levels of wavelet transform to concentrate the data energy in the lowest-frequency subbands. Specifically, the LL subband in Fig. 2.1(c) can be transformed again to form LL2, HL2, LH2, and HH2 subbands, producing a two-level wavelet transform. An (R−1)-level wavelet decomposition is associated with R resolution levels numbered from 0 to (R−1), with 0 and (R−1) corresponding to the coarsest and finest resolutions, respectively. The straightforward convolution implementation of the 1D-DWT requires a large amount of memory and has a large computational complexity. An alternative implementation of the 1D-DWT, known as the lifting scheme, provides a significant reduction in memory and computational complexity. Lifting also allows in-place computation of the wavelet coefficients. Nevertheless, the lifting approach computes the same coefficients as the direct filter-bank convolution.
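A two-level decomposition of this kind can be sketched in MATLAB simply by applying the one-level transform again to the LL subband (illustrative only; it mirrors the description above):

a = imread('cameraman.tif');
[LL1, LH1, HL1, HH1] = dwt2(a, 'haar');      % first level: four subbands
[LL2, LH2, HL2, HH2] = dwt2(LL1, 'haar');    % second level: decompose only the LL subband
% LL2 now holds the coarsest approximation; the other subbands hold the details at two resolutions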

Two Dimension Wavelet Applications

The 2D_Demo.c file, provided in the C55x IMGLIB, presents applications of the two-dimension wavelet. A 128x128 image is used as input for all these applications. In the first application we are using the picture in Figure 25:

2-D Perfect Decomposition and Reconstruction Example

In this application, the image is decomposed and reconstructed at one level. You notice no difference between the original picture and the reconstructed picture, as shown in Figure 26.

Edge Detection Example

In the second application, 2-D edge detection is performed for the picture in Figure 27:

The HH part of the picture has a vertical line. This happens because row-by-row processing was performed first and then column-by-column processing.

2.3. LIFTING SCHEME

The wavelet transform of the image is implemented using the lifting scheme [5]. The lifting operation consists of three steps. First, the input signal x[n] is down-sampled into the even-position signal xe(n) and the odd-position signal xo(n); these values are then modified using alternating prediction and update steps:

xe(n) = x[2n]  and  xo(n) = x[2n+1] ------------- (2.8)

A prediction step consists of predicting each odd sample as a linear combination of the even samples and subtracting it from the odd sample to form the prediction error. An update step consists of updating the even samples by adding them to a linear combination of the prediction error to form the updated sequence. The prediction and update may be evaluated in several steps until the forward transform is completed. The block diagram of forward lifting and inverse lifting is shown in figure 2.3.

Figure 2.3. The Lifting Scheme: (a) forward transform (split, predict, update), (b) inverse transform (undo update, undo predict, merge).

The inverse transform is similar to the forward one. It is based on the three operations undo update, undo prediction, and merge. The simple lifting technique using the Haar wavelet is explained in the next section.

2.4. LIFTING USING HAAR

The lifting scheme is a useful way of looking at the discrete wavelet transform. It is easy to understand, since it performs all operations in the time domain rather than in the frequency domain, and it has other advantages as well. This section illustrates the lifting approach using the Haar transform [6].

The Haar transform is based on the calculation of averages (approximation coefficients) and differences (detail coefficients). Given two adjacent pixels a and b, the principle is to calculate the average s = (a + b)/2 and the difference d = b − a. If a and b are similar, s will be similar to both and d will be small, i.e., it will require few bits to represent. This transform is reversible, since

a = s − d/2  and  b = s + d/2,

and it can be written using matrix notation as

(s, d) = (a, b) A,    (a, b) = (s, d) A⁻¹,    where A = [ 1/2  −1 ; 1/2  1 ].

Consider a row S_n of 2^n pixel values S_n,l for 0 ≤ l < 2^n. There are 2^(n−1) pairs of pixels S_n,2l, S_n,2l+1, where 2l = 0, 2, 4, ..., 2^n − 2. Each pair is transformed into an average s_(n−1),l = (S_n,2l + S_n,2l+1)/2 and a difference d_(n−1),l = S_n,2l+1 − S_n,2l. The result is a set S_(n−1) of 2^(n−1) averages and a set d_(n−1) of 2^(n−1) differences.
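A minimal MATLAB sketch of one forward and inverse Haar lifting step on a 1-D signal, following the averaging form above (illustrative only; an integer-to-integer variant would round the update term):

x  = [10 12 14 13 9 8 7 11];     % even-length example signal
xe = x(1:2:end);                 % even-position samples x[2n]
xo = x(2:2:end);                 % odd-position samples x[2n+1]
d  = xo - xe;                    % predict step: detail d = b - a
s  = xe + d/2;                   % update step: average s = (a + b)/2
% inverse transform: undo update, undo predict, merge
xe_rec = s - d/2;
xo_rec = xe_rec + d;
x_rec  = zeros(size(x));
x_rec(1:2:end) = xe_rec;
x_rec(2:2:end) = xo_rec;
isequal(x, x_rec)                % returns 1: perfect reconstruction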

CHAPTER 3

3.1 Introduction to LSB:


Data hiding is a method of hiding secret messages in a cover medium so that an unintended observer will not be aware of the existence of the hidden messages. In this paper, 8-bit grayscale images are selected as the cover media; these images are called cover-images. Cover-images with the secret messages embedded in them are called stego-images. For data hiding methods, the image quality refers to the quality of the stego-images. In the literature, many data hiding techniques have been proposed. One of the common techniques is based on manipulating the least-significant-bit (LSB) planes by directly replacing the LSBs of the cover-image with the message bits. LSB methods typically achieve high capacity. Wang et al. proposed embedding secret messages in the moderately significant bits of the cover-image, and developed a genetic algorithm to find an optimal substitution matrix for the embedding of the secret messages. They also proposed using a local pixel adjustment process (LPAP) to improve the image quality of the stego-image. Unfortunately, since the local pixel adjustment process only considers the last three least significant bits and the fourth bit, but not all bits, it is clearly not optimal. The weakness of the local pixel adjustment process has been pointed out in the literature. As the local pixel adjustment process modifies the LSBs, the technique cannot be applied to data hiding schemes based on simple LSB substitution. Recently, Wang et al. further proposed a data hiding scheme based on optimal LSB substitution and a genetic algorithm. Using the proposed algorithm, the worst-case mean-square-error (WMSE) between the cover-image and the stego-image is shown to be 1/2 of that obtained by the simple LSB substitution method.

In this paper, a data hiding scheme by simple LSB substitution with an optimal pixel adjustment process (OPAP) is proposed. The basic concept of the OPAP is based on the technique proposed previously. The operations of the OPAP are generalized, and the WMSE between the cover-image and the stego-image is derived. It is shown that the WMSE obtained by the OPAP can be less than half of that obtained by the simple LSB substitution method. Experimental results demonstrate that enhanced image quality can be obtained with low extra computational complexity. The results obtained also show better performance than the optimal substitution method described above.

3.2 Data hiding by simple LSB substitution:


In this section, the general operations of data hiding by the simple LSB substitution method are described. Let C be the original 8-bit grayscale cover-image of Mc × Nc pixels, represented as

and let M be the n-bit secret message, represented as

Suppose that the n-bit secret message M is to be embedded into the k rightmost LSBs of the cover-image C. Firstly, the secret message M is rearranged to form a conceptual k-bit virtual image M′, represented as

where

The mapping between the n-bit secret message and the embedded message can be defined as follows:

Secondly, a subset of n′ pixels {x_l1, x_l2, ..., x_ln′} is chosen from the cover-image C in a predefined sequence. The embedding process is completed by replacing the k LSBs of x_li with m_i. Mathematically, the pixel value x_li of the chosen pixel for storing the k-bit message m_i is modified to form the stego-pixel x′_li as follows:

In the extraction process, given the stego-image S, the embedded messages can be readily extracted without referring to the original cover-image. Using the same sequence as in the embedding process, the set of pixels {x′_l1, x′_l2, ..., x′_ln′} storing the secret message bits is selected from the stego-image. The k LSBs of the selected pixels are extracted and lined up to reconstruct the secret message bits. Mathematically, the embedded message bits m_i can be recovered by
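A minimal MATLAB sketch of this simple k-bit LSB substitution (embedding and extraction) on a grayscale cover image; the variable names and the use of all pixels in raster order are illustrative choices, not taken from the paper:

cover = imread('cameraman.tif');                 % 8-bit grayscale cover image
k = 2;                                           % number of LSBs replaced per pixel
bits = randi([0 1], k, numel(cover));            % example secret bit stream, k bits per pixel
m = 2.^(k-1:-1:0) * bits;                        % k-bit message values m_i (0 .. 2^k - 1)
stego = cover;
stego(:) = bitor(bitand(cover(:), uint8(256 - 2^k)), uint8(m(:)));   % replace the k LSBs
% extraction: read back the k LSBs of each stego pixel
m_rec = bitand(stego(:), uint8(2^k - 1));
isequal(uint8(m(:)), m_rec)                      % returns 1: message recovered exactly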

Suppose that all the pixels in the cover-image are used for the embedding of the secret message by the simple LSB substitution method. Theoretically, in the worst case, the PSNR of the obtained stego-image can be computed by

Table 1 tabulates this worst-case PSNR for k = 1 to 5. It can be seen that the image quality of the stego-image is degraded drastically when k > 4.

3.3 Optimal pixel adjustment process:


In this section, an optimal pixel adjustment process (OPAP) is proposed to enhance the image quality of the stego-image obtained by the simple LSB substitution method. The basic concept of the OPAP is based on the technique proposed previously. Let p_i, p′_i and p″_i be the corresponding pixel values of the i-th pixel in the cover-image C, in the stego-image C′ obtained by the simple LSB substitution method, and in the refined stego-image obtained after the OPAP, respectively. Let δ_i = p′_i − p_i be the embedding error between p_i and p′_i. According to the embedding process of the simple LSB substitution method described above, p′_i is obtained by the direct replacement of the k least significant bits of p_i with the k message bits; therefore,

The value of δ_i can be further segmented into three intervals.

Based on these three intervals, the OPAP, which modifies p′_i to form the stego-pixel p″_i, can be described as follows:

Let δ′_i be the embedding error between p_i and p″_i; δ′_i can be computed as follows:

From the above five cases, it can be seen that the absolute value of δ′_i may exceed 2^(k−1) only in the boundary cases (the conditions in those cases are equivalent to p′_i < 2^k and p′_i > 256 − 2^k, respectively); for all other possible values of δ_i, δ′_i falls into the range |δ′_i| ≤ 2^(k−1). In general, for grayscale natural images with k < 4, the number of pixels with value smaller than 2^k or greater than 256 − 2^k is insignificant. As a result, it can be estimated that the absolute embedding error between a pixel in the cover image and the corresponding pixel in the stego-image obtained after the proposed OPAP is limited to 2^(k−1).
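A compact MATLAB sketch of the adjustment rule just described (the function name opap and the explicit 8-bit range checks are illustrative; cover and stego are uint8 images and k is the number of substituted LSBs):

function adj = opap(cover, stego, k)
% Refine a stego image produced by simple k-bit LSB substitution (sketch of the OPAP rule above)
delta = double(stego) - double(cover);                      % embedding error per pixel
adj   = stego;
dec = (delta >  2^(k-1)) & (double(stego) >= 2^k);          % error too positive: subtract 2^k
inc = (delta < -2^(k-1)) & (double(stego) <  256 - 2^k);    % error too negative: add 2^k
adj(dec) = adj(dec) - 2^k;                                  % the k LSBs (the message) stay unchanged
adj(inc) = adj(inc) + 2^k;
end                                                         % afterwards |adj - cover| <= 2^(k-1) for almost all pixels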

Let WMSE and WMSE* be the worst-case mean-square-error between the stego-image and the cover-image obtained by the simple LSB substitution method and by the proposed method with OPAP, respectively. WMSE* can be derived by

This derivation reveals that WMSE* is less than half of WMSE for k ≥ 2, and roughly a quarter of WMSE when k = 4. This result also shows that the WMSE* obtained by the OPAP is better than that obtained by the optimal substitution method, in which WMSE* is half of WMSE. Moreover, the optimal pixel adjustment process only requires checking the embedding error between the original cover-image and the stego-image obtained by the simple LSB substitution method to form the final stego-image. The extra computational cost is very small compared with Wang's method, which requires extensive computation for the genetic algorithm to find an optimal substitution matrix.
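As a quick numeric check of this bound (derived from the error limits above, not taken from the paper's tables): for k = 2 the worst-case error of simple LSB substitution is 2^k − 1 = 3, giving WMSE = 9 per pixel, while OPAP limits the error to 2^(k−1) = 2, giving WMSE* = 4, which is indeed less than half of 9.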

CHAPTER 4

4.1 Introduction to MATLAB:

The tutorials are independent of the rest of the document. The primary objective is to help you quickly learn the first steps. The emphasis here is "learning by doing"; therefore, the best way to learn is by trying it yourself. Working through the examples will give you a feel for the way that MATLAB operates. In this introduction we will describe how MATLAB handles simple numerical expressions and mathematical formulas.

The name MATLAB stands for MATrix LABoratory. MATLAB was written originally to provide easy access to matrix software developed by the LINPACK (linear system package) and EISPACK (eigensystem package) projects. MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and a programming environment. Furthermore, MATLAB is a modern programming language environment: it has sophisticated data structures, contains built-in editing and debugging tools, and supports object-oriented programming. These factors make MATLAB an excellent tool for teaching and research. MATLAB has many advantages compared to conventional computer languages (e.g., C, FORTRAN) for solving technical problems. MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. The software package has been commercially available since 1984 and is now considered a standard tool at most universities and industries worldwide. It has powerful built-in routines that enable a very wide variety of computations. It also has easy-to-use graphics commands that make the visualization of results immediately available. Specific applications are collected in packages referred to as toolboxes. There are toolboxes for signal processing, symbolic computation, control theory, simulation, optimization, and several other fields of applied science and engineering.

Basic features

As we mentioned earlier, the following tutorial lessons are designed to get you started quickly in MATLAB. The lessons are intended to make you familiar with the basics of MATLAB. We urge you to complete the exercises given at the end of each lesson.

4.2 Starting MATLAB

After logging into your account, you can enter MATLAB by double-clicking on the MATLAB shortcut icon (MATLAB 7.0.4) on your Windows desktop. When you start MATLAB, a special window called the MATLAB desktop appears. The desktop is a window that contains other windows. The major tools within or accessible from the desktop are:

The Command Window
The Command History
The Workspace
The Current Directory
The Help Browser
The Start button


Figure 1.1: The graphical interface to the MATLAB workspace



Arithmetic operators
plus      - Plus                               +
uplus     - Unary plus                         +
minus     - Minus                              -
uminus    - Unary minus                        -
mtimes    - Matrix multiply                    *
times     - Array multiply                     .*
mpower    - Matrix power                       ^
power     - Array power                        .^
mldivide  - Backslash or left matrix divide    \
mrdivide  - Slash or right matrix divide       /
ldivide   - Left array divide                  .\
rdivide   - Right array divide                 ./

Relational operators
eq  - Equal                     ==
ne  - Not equal                 ~=
lt  - Less than                 <
gt  - Greater than              >
le  - Less than or equal        <=
ge  - Greater than or equal     >=

Logical operators
&&   - Short-circuit logical AND
||   - Short-circuit logical OR
and  - Element-wise logical AND      &
or   - Element-wise logical OR       |
not  - Logical NOT                   ~
xor  - Logical EXCLUSIVE OR
any  - True if any element of vector is nonzero

Vectors
a = [1 2 3 4 5 6 9 8 7];
t = 0:2:20
t = 0  2  4  6  8  10  12  14  16  18  20
b = a + 2
b = 3  4  5  6  7  8  11  10  9
c = a + b
c = 4  6  8  10  12  14  20  18  16

Matrices
B = [1 2 3 4; 5 6 7 8; 9 10 11 12]
B =
     1     2     3     4
     5     6     7     8
     9    10    11    12
C = B'
C =
     1     5     9
     2     6    10
     3     7    11
     4     8    12

Plot

t = 0:0.25:7;
y = sin(t);
plot(t,y)

t = 0:0.25:7;
y = sin(t);
plot(t,y);
xlabel('x axis');
ylabel('y axis');
title('Heading');
grid on;
gtext('text');

IF LOOP
if a > 6
    disp('a is greater');
elseif a == 0
    disp('a is zero');
else
    disp('a is smaller');
end

FOR LOOP
for i = 1:5
    a = a + 1;
end
disp(a);

WHILE LOOP
while a < 10
    a = a + 1;
end
disp(a);

Function
function c = add(a,b)
c = a + b;
return

function c = mul(a,b)
c = a * b;
return

Main
a = 5;
b = 6;
c = add(a,b);
disp(c);
d = mul(a,b);
disp(d);

SWITCH
a = input('enter---->');
switch a
    case 1
        fprintf('one');
    case 2
        fprintf('two');
    case 3
        fprintf('three');
    case 4
        fprintf('four');
    otherwise
        fprintf('otherwise');
end

How to read an image
a = imread('cameraman.tif');
imshow(a);
pixval on;

How to read an audio file
a = wavread('test.wav');
wavplay(a, 44100);
plot(a);

How to read a video file
a = aviread('movie.avi');
movie(a);

Add two images
J = imread('cameraman.tif');
I = imread('rice.tif');
K = imadd(I, J);
imshow(K)

Subtract two images
I = imread('rice.tif');
Iq = imsubtract(I, 50);
subplot(1,2,1), imshow(I)
subplot(1,2,2), imshow(Iq)

Convert image to gray and binary
clc;
clear;
close all;
a = imread('flowers.tif');
subplot(2,2,1);
imshow(a);
subplot(2,2,2);
b = imresize(a, [256 256]);
imshow(b);
subplot(2,2,3);
c = rgb2gray(b);
imshow(c);
subplot(2,2,4);
d = im2bw(c);
imshow(d);

RGB components
a = imread('flowers.tif');
subplot(2,2,1);
imshow(a);
R = a; G = a; B = a;
R(:,:,2:3) = 0;
subplot(2,2,2);
imshow(R);
G(:,:,1) = 0;
G(:,:,3) = 0;
subplot(2,2,3);
imshow(G);
B(:,:,1) = 0;
B(:,:,2) = 0;
subplot(2,2,4);
imshow(B);

CONVERT MOVIE TO FRAMES
filename = 'movie1.avi';
file = aviinfo(filename);               % get information about the video file
frm_cnt = file.NumFrames;               % number of frames in the video file
str2 = '.bmp';
h = waitbar(0, 'Please wait...');
for i = 1:frm_cnt
    frm(i) = aviread(filename, i);      % read frame i from the video file
    frm_name = frame2im(frm(i));        % convert frame to image
    frm_name = rgb2gray(frm_name);      % convert to grayscale
    filename1 = strcat(num2str(i), str2);
    imwrite(frm_name, filename1);       % write image file
    waitbar(i/frm_cnt, h)
end
close(h)

CONVERT FRAMES TO MOVIES
frm_cnt = 5;
number_of_frames = frm_cnt;
filetype = '.bmp';
display_time_of_frame = 1;
mov = avifile('MOVIE.avi');
count = 0;
for i = 1:number_of_frames
    name1 = strcat(num2str(i), filetype);
    a = imread(name1);
    while count < display_time_of_frame
        count = count + 1;
        imshow(a);
        F = getframe(gca);
        mov = addframe(mov, F);
    end
    count = 0;
end
mov = close(mov);

How to read a text file
fid = fopen('message.txt', 'r');
ice1 = fread(fid);
s = char(ice1');
fclose(fid);
disp(s);
Ans: hello

How to write a text file
txt = [65 67 68 69];
fid = fopen('output.txt', 'wb');
fwrite(fid, char(txt), 'char');
fclose(fid);
Ans: ACDE

Store an image / audio file
a = imread('cameraman.tif');
imwrite(a, 'new.bmp');
a = wavread('test.wav');
wavwrite(a, 44100, 16, 'nTEST.WAV');

Save and load a variable
A = 5;
save A A;
load A;
B = 1;
C = A + B;
disp(C);

Wavelet transform
a = imread('cameraman.tif');
[LL, LH, HL, HH] = dwt2(a, 'haar');
Dec = [LL, LH; HL, HH];
imshow(Dec, []);

DCT transform
a = imread('cameraman.tif');
subplot(1,3,1); imshow(a, []);
b = dct2(a);
subplot(1,3,2); imshow(b, []); title('DCT');
c = idct2(b);
subplot(1,3,3); imshow(c, []); title('IDCT');

NOISE AND FILTER
I = imread('eight.tif');
J = imnoise(I, 'salt & pepper', 0.02);
K = medfilt2(J);
subplot(1,2,1); imshow(J)
subplot(1,2,2); imshow(K)

CHAPTER 5


5.1 CONCLUSION:
In this paper we proposed a novel data hiding scheme that hides data in the integer wavelet coefficients of an image. The system combines an adaptive data hiding technique and the optimum pixel adjustment algorithm to increase the hiding capacity of the system compared to other systems. The proposed system embeds the secret data in a random order using a secret key known only to the sender and receiver. It is an adaptive system that embeds a different number of bits in each wavelet coefficient according to a hiding capacity function, in order to maximize the hiding capacity without sacrificing the visual quality of the resulting stego-image. The proposed system also minimizes the difference between the original coefficient values and the modified values by using the optimum pixel adjustment algorithm. The proposed scheme offers three cases of hiding capacity, chosen according to the application required by the user; each case yields a different visual quality of the stego-image. Any data type can be used as the secret message, since our experiments were made on a binary stream of data, and there was no error in the recovered message (perfect recovery) at any hiding rate. From the experiments and the obtained results, the proposed system proved to achieve a high hiding capacity, up to 48% of the cover image size, with reasonable image quality and high security because of the random insertion of the secret message. On the other hand, the system suffers from low robustness against various attacks, such as histogram equalization and JPEG compression.

The proposed system can be further developed to increase its robustness by using some form of error correction code, which would increase the probability of retrieving the message after attacks, and by investigating methods to increase the visual quality of the stego-image (PSNR) at the obtained hiding capacity.

5.2 REFERENCES:

[1] G. J. Simmons, "The prisoners' problem and the subliminal channel," in Proceedings of Crypto '83, pp. 51-67, 1984.
[2] N. Wu and M. Hwang, "Data Hiding: Current Status and Key Issues," International Journal of Network Security, Vol. 4, No. 1, pp. 1-9, Jan. 2007.
[3] W. Chen, "A Comparative Study of Information Hiding Schemes Using Amplitude, Frequency and Phase Embedding," PhD Thesis, National Cheng Kung University, Tainan, Taiwan, May 2003.
[4] C. Chan and L. M. Cheng, "Hiding data in images by simple LSB substitution," Pattern Recognition, pp. 469-474, Mar. 2004.
[5] Chang, C. Chang, P. S. Huang, and T. Tu, "A Novel Image Steganographic Method Using Tri-way Pixel-Value Differencing," Journal of Multimedia, Vol. 3, No. 2, June 2008.
[6] H. H. Zayed, "A High-Hiding Capacity Technique for Hiding Data in Images Based on K-Bit LSB Substitution," The 30th International Conference on Artificial Intelligence Applications (ICAIA 2005), Cairo, Feb. 2005.
[7] A. Westfeld, "F5 - a steganographic algorithm: High capacity despite better steganalysis," 4th International Workshop on Information Hiding, pp. 289-302, April 25-27, 2001.

