DIGITAL WATERMARKING SYSTEM FOR STILL IMAGES

IMRAN KHAN

DEPARTMENT OF COMPUTER SCIENCES

COMSATS INSTITUTE OF INFORMATION TECHNOLOGY,


WAH CANTT PAKISTAN

SESSION 2001-2005

DIGITAL WATERMARKING SYSTEM FOR STILL IMAGES


Undertaken By: IMRAN KHAN
REG. NO. CIIT/F001-BCS-23/WAH

Supervised By: MRS. MUSSARAT ABDULLAH

A DISSERTATION SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF BACHELOR OF SCIENCE IN COMPUTER SCIENCE

DEPARTMENT OF COMPUTER SCIENCES

COMSATS INSTITUTE OF INFORMATION TECHNOLOGY,


WAH CANTT PAKISTAN

SESSION 2001-2005

In the name of Allah, the Most Merciful and Compassionate, the Most Gracious and Beneficent, whose help and guidance I solicit at every step and at every moment.

DEDICATION
To my loving Mom, Brother, and Sister, who have shown their love and support for me

ACKNOWLEDGEMENT
I owe my gratitude to Almighty Allah and the Holy Prophet Muhammad (S.A.W), who taught his followers to seek knowledge from the cradle to the grave. Supporting every worker is an understanding family, or nothing would ever get written or done! I am grateful to my family, my Mother, my Brother, and my Sister, for their support and understanding while I worked on this project. Their love, support, and encouragement made this project possible. First of all, my gratitude goes to Prof. Dr. D.A. Karras, who not only accepted our offer to implement the solution but helped us in every possible way to solve the problems we faced during this work. His constant encouragement and guidance at each step made it possible for us to complete this project. Our supervisors, Madam Mussarat and Mr. Faisal Shafique Butt, helped us in a way one could never expect. Their support throughout the project provided invaluable feedback, catching many errors, pointing out areas that needed improvement, and suggesting alternative solutions. Our sincere thanks to Mr. Shehzad Saleem for helping us understand and implement the famous Viterbi coding scheme. Spending four years here was not only a great experience; we have also found many friends who will be memorable for a lifetime, and their help throughout these four years covered not only our studies but also getting used to the new environment. We thank Murtaza, Amin, Shoaib, and Mansoor for their great support in providing us a place to live and work on the project along with them. Shahid (the cook) took care of our dining during our stay at the hostel.

Mudassar and Nassar were also there with us, and their entertaining habits made everybody feel at ease. Thanks to Ammad, Fayyaz, and Sajid for their support during all these years. THANK YOU all for such great support.

Imran Khan

PROJECT BRIEF

PROJECT NAME

DIGITAL WATERMARKING FOR STILL IMAGES

ORGANIZATION NAME

PEARLSoft

OBJECTIVE

DIGITAL RIGHTS MANAGEMENT

UNDERTAKEN BY

IMRAN KHAN

SUPERVISED BY

MRS. MUSSARAT ABDULLAH, ASSOCIATE PROFESSOR, COMPUTER SCIENCE, CIIT WAH CAMPUS

STARTED ON

1ST JULY 2005

COMPLETED ON

10TH OCT 2005

COMPUTER USED

INTEL PENTIUM 4 PROCESSOR

SOURCE LANGUAGE

JAVA 2SE VERSION 1.4

OPERATING SYSTEM

WINDOWS XP PROFESSIONAL

TOOLS USED

GEL

ABSTRACT
This project seeks to develop robust watermarking software based on research work carried out earlier. The group, while exploring various watermarking techniques and algorithms that have been proposed for developing a robust watermarking solution, implemented one such proposed solution. A robust watermark is resilient to the tampering/attacks that a multimedia object (image, video, or audio) may face, such as compression, image cropping, image flipping, and image rotation, to name a few. The group has implemented the solution "A Second Order Spread Spectrum Modulation Scheme for Wavelet Based Low Error Probability Digital Image Watermarking." This scheme was proposed by D.A. Karras, Chalkis Institute of Technology, Automation Dept., and Hellenic Open University, Rodu 2, Ano Iliupolis, Athens 16342, Greece. The software takes an image as input and applies the desired mark that one wishes to embed in the object according to the mentioned solution, so that the mark is not only invisible to the human visual system but also strong enough that the original embedded mark can be extracted, and remains recognizable, despite tampering/attacks. The group has worked under the guidance and supervision of Prof. Dr. D.A. Karras. The software works as a benchmarking tool for the proposed solution. The main purpose of this software is its use as a test bed for DWT-based watermarking. Later it will become part of a larger system: a DWT-based watermarking protocol.

TABLE OF CONTENTS
1 Introduction
  1.1 Introduction to Data Hiding
  1.2 The Weight of Background
    1.2.1 Close Relatives and Influences
    1.2.2 Great Expectations
  1.3 The Watermarks
  1.4 What Is Watermarking and Digital Watermarking
    1.4.1 Types of Watermarking
  1.5 The Different Faces of Data Hiding
    1.5.1 Requirements
    1.5.2 Characteristics of a Robust Watermark
    1.5.3 Terminology
  1.6 The Evolution of Watermarking Research

2 Applications of Digital Watermarking
  2.1 Usage-Specific Requirements
  2.2 Copyright Protection
    2.2.1 Misappropriation by Other Content Providers
    2.2.2 Illicit Use by End Users
  2.3 Annotation Watermarking
  2.4 Fingerprinting
  2.5 Automatic Playlist Generation for Copyright Verification
  2.6 Multimedia Authentication
  2.7 Watermarking for Copy Protection
  2.8 Fraud and Tamper Detection
  2.9 Others

3 Digital Watermarking Techniques for Still Images
  3.1 Classification and Application Requirements
  3.2 Photographic and Photorealistic Images
    3.2.1 Traditional Watermarking Methods
    3.2.2 Watermarking Methods Dealing with Geometric Distortion
    3.2.3 Content-Based Watermarking Methods
  3.3 Problem Formulation
  3.4 Generally Used Watermarking Techniques
  3.5 The Adopted Digital Watermarking Technique
  3.6 Reason for Selecting This Technique
  3.7 The Adopted Algorithm Description
  3.8 Key Steps to Algorithm Understanding
  3.9 Experimental Study and Discussion of Results

4 Software Design and Module Division
  4.1 Introduction
  4.2 Software Description
  4.3 Module Division

5 Software Modules
  5.1 Introduction
  5.2 Steps to Implement the Software Module Hash Function
    5.2.1 Select the Cover Image
    5.2.2 Calculating the Hash Function
    5.2.3 Applying Discrete Wavelet Transformation
    5.2.4 Key-Based Transformation
    5.2.5 Select Coefficients
  5.3 Introduction to the Watermark Information
  5.4 How to Get the Watermark Information
  5.5 Security of the Watermark Information
  5.6 Applying the DSSS on the Watermark Information Bits
  5.7 Applying the Parity Check
  5.8 Steps for Watermark Embedding
  5.9 Steps for Watermark Detection

6 Conclusions
  6.1 Summary of Contributions
  6.2 Future Research Lines

Proposed Solution: Simulated Observations and Results
  Observations
  Results

References

Chapter 1 Introduction

1.1 Introduction to Data Hiding

Data hiding, watermarking, and steganography are perhaps a curious association of words. And not only in English, because equally odd related terms are in use in other languages; for instance, Spanish, French, and Italian have used, for similar purposes, digital filigreeing, invisible stamping, numerical tattooing, electronic marking, and others. But these expressions do not refer at all to craftsmanship, postal services, or body art, let alone some more or less hermetical or occultist skills. Rather, this richness of terms mirrors the flurry of activity that has surrounded, in recent times, the main subject investigated in this project. The objectives of this line of research and its appearance are best explained by the events that have favored the irresistible arrival of the Information Age. It is a fact that since the 1980s and before, but especially during the 1990s, personal and mainstream media have gone digital. The convenience and reliability of digital formats for storage, transmission, and manipulation have sentenced analog data to a vanishing secondary role. Just as one example, the once overwhelmingly popular analog tapes and records have lately been confined to the underground of a few professionals and nostalgic freaks, who stubbornly distrust the advantages of the now ubiquitous digital compact discs. But, as computers and Internet connections have pervaded the developed world along with the globalization tide, the digital overturn has also been a silent Trojan horse that has allowed millions of people to get, duplicate, manipulate, and/or distribute original digital data [1]. The shock that this upheaval has caused to the traditional information flow model is far from over, and it promises to keep attracting the attention of both researchers and society for many years to come.

To begin with, the standard framework used to control revenues and property (the cornerstones of market-oriented economies) has been shaken at its foundations. Although the legal provisions in force would seem to remain roughly applicable in the Digital Age, new technical tools are needed for law enforcement in a world in which information moves and changes at a vertiginous speed. As a result, new shades have appeared over seemingly plain old concepts. One of the most outstanding among them is that of copy, one of the ever favorite pastimes of mankind. Copy has always been an unrelenting instrument of progress, including copy in the sense beloved by Borges, imperfect copy, as sometimes a happy mutation is preferable to a sterile perfect duplication. It is plausible to think that the globalization of trade was responsible for the widespread growth of the economic and intellectual concerns about copying that led to the Western concept of copyright around the 17th century. More recently, the rediscovery of the much older and altruistic practice of copyleft has also taken place. But, whatever the exploitation model, it is clear that nowadays the distinction between the original and the ersatz has become more difficult than ever: still not having completely squeezed out the photocopier, the attraction exerted, for instance, by CD/DVD-ROM burners, or by the intertextualization incarnated in the copy + paste paradigm, is already too mesmerizing for many. No matter how old copy is, it has never been so fast, so easy, and either as malleable or as exact as desired. In this context, we can argue that the copy protection issue in its different flavors (owner identification, proof of ownership, transaction tracking, or copy control) was the trigger and one of the main boosts behind the start-up of the modern data hiding discipline, and it seems that it was also its first seed [2]. Maybe inspired by the original paper watermarking techniques, several researchers thought around the first half of the 1990s that most issues related in one way or another to digital copy protection could be addressed if it were possible to imperceptibly conceal information inside multimedia digital signals in an efficient way [3]: a "subtle electronic water mark," as it read in probably the oldest appearance of the term in its modern sense [4], adhered to the desired content would do. As far as the name goes, digital watermarking would have been impossible if everyday data were waterproof to modifications. Nevertheless, to our luck or misfortune, the human brain and senses turn out not to be perfect devices and, as such, they cannot distinguish changes between slightly different versions of the same multimedia data (i.e., digital images, audio, or video): the door to data hiding was wide open.

1.2 The Weight of Background

In the words of C. Clark, "the answer to the machine is in the machine" [5]. Digital data hiding is indeed this kind of answer: it is undeniable that the new digital scenario has brought new headaches for old problems, but at the same time it has also provided the best environment for relieving them. Nevertheless, novel solutions propel fresh new ideas, which bring new problems in, motivating the recursive invocation of Clark's aphorism. This is likely what has happened with data hiding, as the core procedures for hiding information inside multimedia data, most of which were initially devised for copy protection (i.e., watermarking), were also found suitable for other applications whose main efficiency criteria, or requirements, were sometimes different from those needed to deter copyright circumvention. The most remarkable among them was secret writing, or steganography. In this case data hiding hit the right note, awakening and hatching by sympathy a centuries-old field, covert communications, that had been lurking inside esoteric parchments, military maneuvers, court intrigues, and impossible romances. Actually, steganography had already lived through a period of moderate academic flourishing via the classified study of subliminal channels during the Cold War [6]. But this time the reactivation of steganography for digital multimedia was more widespread, and it stimulated in turn the development of a number of techniques oriented to hiding information in non-multimedia assets such as text (in the best tradition of acrostics), circuits, programming code, and many others. By contrast with the neologism data hiding, whose acceptance is only recent, steganography is a medieval creation stemming from the Greek root steganos, i.e., covered. Surely due to this temporal precedence, it has sometimes been used to designate data hiding as a whole; steganography is, however, currently considered a special case of data hiding in which the most important design requirement is the secrecy of the communication act itself.

1.2.1 Close Relatives and Influences


Even though the embryo for the formal study of data hiding was, as we have shown, a genuine product of its times, the preexistence of steganography dragged the discipline into being quickly sponsored by a handful of romantic stories that rooted its lineage as far back as classical antiquity [7]. Clearly, the scientific significance of these historical information hiding episodes was almost always no more than anecdotal, as they were closer to conjuring tricks or to parlor games than to systematic knowledge. Just like cryptography before 1949: much more an art than a science. And this comparison is not unwarranted, as early contemporary data hiding began shamelessly treading in the footsteps of modern cryptography. To complicate matters, steganography as an ad hoc discipline had also been intertwined and confused through the ages with cryptography [8], as hiding was sometimes preferred to the more suspicious ciphering as a means to conceal information. Therefore, we may wonder if the term steganographia, extant at least since 1499, should not have been left behind when the field was reborn for scholarly formalization, just as alchemy evolved into chemistry.

But what is the actual parallelism between cryptography and data hiding, apart from trying to conceal information from third parties? In cryptography the encoder and the decoder use (and sometimes share) a set of keys necessary for encrypting and decrypting information, just as is usually done in data hiding for embedding and decoding. Besides, they share the figure of the attacker, i.e., the undesirable third party between the sender and the receiver of the information, who can be passive or active. A passive attacker might try to obtain the embedded information, just as a decipherer does in cryptology, but he or she could likewise be an active jammer attempting to impair communication, as in a warfare scenario. A cryptology-like attack could be active too, for example, trying to use a guess of the secret key to trick the receiver. Furthermore, closer to communications than to cryptography, the attacker can also be an inadvertent or unintentional one, representing usual processes such as compression and others. This double personality of the attacker, halfway between cryptology and communications, is important because it can be approximately mapped to the important global requirements of security and robustness, respectively. It introduces another fundamental ingredient too: communications. The sheer act of hiding data in the presence of attacks can be considered a particular case of communications. This alternative view of data hiding has grown since the mid-1990s to become at least as influential, if not more so, than cryptography, especially when robustness is at stake. Concepts from communications, such as modulations, channel coding, and synchronization, have landed in the discipline through this connection. The information-theoretic study of data hiding has naturally accompanied this landing, both to show where the limits of practical methods could be set and to establish design guidelines for them. Notice that if the application of existing communications techniques alone had sufficed to achieve many of the desired purposes of data hiding, this area of research would already be exhausted. Nonetheless, as we will see, the data hiding communications channel is a quite special one that often requires tailored solutions, because it frequently poses situations rarely present in usual communications scenarios. It is important to stress that encryption is actually complementary to data hiding and not a competitor. Certainly, the synergies from cryptology go further than the previously given hints. The basic cryptological Kerckhoffs assumption has been fully adopted in data hiding, and, as a consequence, security through obscurity, or through obfuscated algorithms, has been dismissed almost from the start by the research community (but industry will deserve some words about this issue later). In addition, for steganography-like applications, cryptanalysis has had its counterpart, by the name of steganalysis, since around 1998. Last, but not least, public-key cryptography has motivated some work in the same direction in data hiding.

1.2.2 Great Expectations


If it was to be the solution to problems as hard as copy protection or secret writing, it is not hard to imagine that the foundational ideas of data hiding must have been welcomed as a gift by the researchers who started to map and allegedly conquer the new terra incognita by the end of the second millennium. But other factors outside the scientific community conspired to generate even more expectations, resulting in what we could call a data hiding rush. At the beginning of modern data hiding, cryptography was at the peak of its popularity and respect, thanks to its paramount role in 20th-century history and to its brilliant projection into the future. Due to the aforementioned parallelisms, this elevated status of cryptography was somehow partially transmitted to the prevailing conception of data hiding. A logical conclusion was that such a halo of borrowed prestige would thrust the consumerist desires of a globalized society always eager for novelties. In this way, the newly arrived product was unceremoniously seized, leading to the premature arrival of the technology to the market.

Contrary to the more sensible introduction of cryptography into the mass market, the Kerckhoffs principle was clearly violated by the commercial versions of data hiding from the outset. The reason for this risky strategy was the immaturity and primitiveness of the technology at the moment, and the evident consequence of its introduction was to disappoint potential end users, and also to raise skepticism among standardization bodies seeking prospective technologies: namely, the DVD Copy Control Association declined in 2002 to include watermarking technologies in its standard. Similarly, neither the JPEG 2000 nor the MPEG-4 standards have specified any watermarking algorithm. At least the former has made provision for including them in future releases, and the latter has specified a general Intellectual Property (IP) management and protection system. Just a few crude, unsophisticated, opportunist products were enough to cast doubts on the utility of data hiding, and to make it a transient casualty of the over-expectations created. The obscure Digimarc watermarking plug-ins issued since 1996, or the challenge launched in 2000 by the Secure Digital Music Initiative (SDMI) to break several secret audio watermarking algorithms, were just some samples of this unwise industry policy. This series of events has probably contributed to some sporadic statements by meta-attackers of data hiding technologies, most of the time engaging in unscientific or byzantine controversies, but nonetheless potentially able to damage the field's image.

With the benefit of hindsight, the current state of affairs allows us to be more optimistic and realistic about the future. On the research side, the advances that have taken place during the last years are evident, as the discipline has moved from the initial turmoil of supposedly unbeatable inventions into a more steady and systematic advance. On the industry side, the prospective benefits of achieving reliable data hiding are still seen as a prized reward. Both the plethora of small and medium enterprises that have proliferated since the early times (most of which have failed to use watermarking as a springboard) and the big corporations have realized that research plays a crucial role in getting something more solid than just temporary revenues. An example of this changing attitude is the (somewhat timid) questioning of obscurity by the Video Watermarking Group (Digimarc, Hitachi, Macrovision, NEC, Philips, Pioneer, and Sony). It is sure that the lack of standards has played against the translation of the emerging technological effort into well-finished end products, but looking back it is logical that no standards could have been issued at the time. We may suppose that useful de facto standards will spring up as soon as most pending technological issues are satisfactorily solved. But, as far as copyright issues are concerned, industry wars (which have, for instance, undermined the SDMI, a consortium formed in 1998 by scores of relevant corporations to establish a standard for copyright-friendly music formats) and consumers' receptivity (which has sunk copy-protection-oriented MiniDiscs and DATs into oblivion) are key for their implementation. Last, the World Intellectual Property Organization (WIPO) has recognized that technological measures are just one prong out of three within a general copy protection scheme, together with laws that support protection technologies and cross-industry negotiations and licenses. As a first measure, the two renewed WIPO treaties of 1996, i.e., the Copyright Treaty (WCT) to protect authors and the Performances and Phonograms Treaty (WPPT) to protect producers, started to come into force in the spring of 2002. On the other hand, it is unlikely that the philosopher's stone of copy protection techniques will ever be found, and it is presumable that other technological solutions will converge with data hiding to this end. In spite of this discussion, data hiding technologies will have to be effective by themselves in areas like steganographic applications and others.

1.3 The Watermarks

A watermark is an identifying feature, like a company logo, which can be used to provide protection of some "cover" data. A watermark may be either visible (i.e., perceptible) or invisible (i.e., imperceptible), both of which offer specific advantages when it comes to protecting data. In fact, any piece of data may be used as a watermark. The most common watermarks include corporate or academic logos, number sequences, and watermarks consisting of black and white dots.

A watermark is a pattern of bits inserted into a digital image file (in our project's case, since we are considering digital image watermarking) that identifies the file's copyright information (author, rights, etc.). The name watermark is derived from the faintly visible marks imprinted on organizational stationery.

Unlike printed watermarks, which are intended to be at least somewhat visible, in our project the effort is made specifically to design completely invisible watermarks.

Satisfying all these requirements is no easy feat, but a number of researchers have proposed techniques for digital watermarking. All of them work by making the watermark appear as noise, that is, random data that exists in most digital files anyway. But our technique is different in this respect as well.

Watermarks may be used to prove ownership of data, and also as an attempt to enforce copyright restrictions. For example, a web-based crawler may identify the watermark in a "copied" file.

1.4 What is Watermarking and Digital Watermarking?

Watermarking is the process of embedding a watermark into some cover data for the purpose of identifying the owner or original source of the multimedia data, especially digital images in our project's case.

Watermarking is not a new technique. It is a descendant of a technique known as steganography, which has been in existence for at least a few hundred years. Steganography is a technique for concealed communication. In contrast to cryptography, where the content of a communicated message is secret, in steganography the very existence of the communicated message is a secret, and its presence is known only to the parties involved in the communication. Steganography is a technique in which a secret message is hidden within another, unrelated message and then communicated to the other party. Some of the techniques of steganography, such as the use of invisible ink, word spacing patterns in printed documents, and coding messages in music compositions, have been used by military intelligence since the times of the ancient Greek civilization. Watermarking can be considered a special technique of steganography in which one message is embedded in another and the two messages are related to each other in some way. The most common examples of watermarking are the specific patterns in currency notes, which are visible only when the note is held up to the light, and the logos in the background of printed text documents. These watermarking techniques prevent forgery and unauthorized replication of physical objects. Digital watermarking is similar to watermarking physical objects, except that the technique is applied to digital content instead of physical objects. In digital watermarking a low-energy signal is imperceptibly embedded in another signal. The low-energy signal is called the watermark, and it depicts some metadata, such as security or rights information, about the main signal. The main signal in which the watermark is embedded is referred to as the cover signal, since it covers the watermark. The cover signal is generally a still image, an audio clip, a video sequence, or a text document in digital format.

A digital watermarking system essentially consists of a watermark embedder and a watermark detector. The watermark embedder inserts a watermark into the cover signal, and the watermark detector detects the presence of the watermark signal [11]. Note that an entity called the watermark key is used during the process of embedding and detecting watermarks. The watermark key has a one-to-one correspondence with the watermark signal (i.e., a unique watermark key exists for every watermark signal). The watermark key is private and known only to authorized parties, and it ensures that only authorized parties can detect the watermark [12]. Further, note that the communication channel can be noisy and hostile (i.e., prone to security attacks), and hence digital watermarking techniques should be resilient to both noise and security attacks.
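To make this embedder/detector structure concrete, the following is a minimal sketch in Java, the project's source language; the type names and the key representation here are illustrative assumptions, not the project's actual API.

    // Minimal sketch of the embedder/detector architecture described above.
    // Interface names and the long-valued key are illustrative assumptions.
    interface WatermarkEmbedder {
        // Embeds the watermark into the cover image pixels under a secret key.
        int[][] embed(int[][] coverPixels, byte[] watermark, long watermarkKey);
    }

    interface WatermarkDetector {
        // Reports whether the watermark bound to the given key is present.
        // Only parties holding the key can perform a meaningful detection.
        boolean detect(int[][] testPixels, long watermarkKey);
    }

The one-to-one correspondence between key and watermark means that a detector invoked with the wrong key should respond no differently than it would for unwatermarked content.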

Digital watermarking is a technique for embedding a watermark into a digital image to protect the owner's copyright in the image. The embedded watermark in the resulting stego-image must be robust, because the stego-image may be rotated or scaled by illicit users. It is desirable that, after such rotation or scaling attacks, the watermark is not fully destroyed and can still be extracted to verify the copyright of the image. Many watermarking techniques for copyright protection have been proposed in recent years.

Digital watermarking technology makes use of the fact that the human eye has only a limited ability to observe differences. Minor modifications in the color values of an image are subconsciously corrected by the eye, so that the observer does not notice any difference.

1.4.1 Types of Watermarking


There are two well-known approaches to watermarking that are most commonly used.

Spatial domain watermarking: Spatial domain watermarking is easy to implement and requires no original image for watermark detection. However, it often fails under signal processing attacks such as filtering and compression [10]. Besides, the fidelity of the original image data can be severely degraded, since the watermark is applied directly to the pixel values [13].
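As an illustration of how directly spatial-domain methods touch the pixel values, consider a least-significant-bit (LSB) sketch in Java; this is a generic textbook example under our own assumptions, not the technique adopted in this project.

    // Generic LSB spatial-domain embedding sketch (illustrative only;
    // not the method used in this project). Each watermark bit replaces
    // the least significant bit of one pixel's blue channel.
    static int[] embedLsb(int[] argbPixels, boolean[] bits) {
        int[] out = argbPixels.clone();
        for (int i = 0; i < bits.length && i < out.length; i++) {
            out[i] = (out[i] & ~1) | (bits[i] ? 1 : 0); // blue-channel LSB
        }
        return out;
    }

Because lossy operations such as JPEG compression freely rewrite these low-order bits, such a mark rarely survives them, which is exactly the fragility noted above.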

Transform domain watermarking: The watermark is embedded in a transform domain (e.g., DCT, DFT, or wavelet) by modifying the coefficients of a global or block transform [14]. Frequency domain watermarking generally provides more protection under most signal processing attacks. But the existing frequency-domain watermarking algorithms require the original image for comparison in the watermark retrieval process, which is not practical for a huge image database. Furthermore, progressive transmission is one of the requirements for Internet distribution. The lack of the progressive transmission property in existing spatial- and frequency-domain watermarking algorithms limits their Internet applications.

In our software we have used the second approach, transform domain watermarking. In this approach, as its name suggests, we transform the pixel values representing the image into some other domain. This transformation of the pixel values can be done in a number of ways; transform techniques include the DCT (Discrete Cosine Transform), the DFT (Discrete Fourier Transform), and the FFT (Fast Fourier Transform). We have used the DWT (Discrete Wavelet Transform).

The basic advantage of using transform domain watermarking is that we have more control over the watermark insertion areas, i.e., the areas of interest for inserting the watermark, because in transform domain watermarking we have more than one coefficient for every pixel value, for each of the RGB (red, green, and blue) components of the pixel.
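To illustrate the transform step itself, the following is a sketch of a one-level, one-dimensional Haar DWT, the simplest wavelet; the project uses the DWT, but the particular filter and code below are our illustrative assumptions.

    // One-level 1-D Haar DWT of a row of pixel values (even length assumed).
    // Illustrative sketch; the wavelet filter actually used may differ.
    // Applying this to all rows and then all columns of an image yields
    // the familiar LL, LH, HL, and HH coefficient subbands.
    static double[] haarDwt1d(double[] row) {
        int half = row.length / 2;
        double[] out = new double[row.length];
        for (int i = 0; i < half; i++) {
            double a = row[2 * i], b = row[2 * i + 1];
            out[i]        = (a + b) / Math.sqrt(2.0); // low-pass (approximation)
            out[half + i] = (a - b) / Math.sqrt(2.0); // high-pass (detail)
        }
        return out;
    }

Watermark bits are then typically added to selected subband coefficients, trading imperceptibility (favored by high-frequency subbands) against robustness to compression (favored by low-frequency subbands).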

1.5 The Different Faces of Data Hiding

On the whole, the applications of a given technology largely determine the areas of active research inside it, and data hiding is not an exception to this rule. Applications usually specify or constrain different sets of requirements, thus posing concrete, workable problems.

1.5.1 Requirements
Among the most relevant concepts from which to draw the efficiency criteria of data hiding/watermarking algorithms are robustness and security, which have up to now established an overall division between the two lines of research roughly comprised by watermarking and steganography. This classification does not mean that both types of criteria cannot be met simultaneously [9], as they are actually two sides of the same problem, but this divide-and-conquer strategy has turned out to be appropriate for facing the open topics. Robustness and security are related to reliability questions posed from the point of view of an authorized and an unauthorized party, respectively. From the standpoint of the authorized agent, the aforementioned issues are usually associated with the correct retrieval of the hidden data in a variety of situations. From the other side, they are related to operations leading either to access to the private information (embedded data, keys used) or to tricking an authorized agent. The reliability of any of these operations can be (and must be) measured through different objective parameters, which in the more general case are given by probabilities that quantify the degree of success of a given operation. Additionally, all techniques oriented to embedding information inside multimedia data present the compulsory requirement of being power-limited, so as not to perceptually interfere too much with the multimedia signal (host signal) that accommodates the hidden information. This power limitation on the distortion of the host data amounts to data hiding only when it results in the imperceptibility of the hidden watermark. Perceptible embedding, such as that obtained with visible watermarks or glyphs, cannot be considered hidden, and it constitutes a different, but in many aspects related, branch of research. Notice that the practical dilemma of the shield and the sword somehow implied by data hiding algorithms and attacks goes on partly thanks to the imperfect knowledge we have of psychovisual and psychoacoustic effects. This uncertainty principle plays by all means in favor of data hiding, as a perfect knowledge of the perceptual mechanisms involved could be exploited to undermine robustness and security. Last, if perceptual analyses can already be delicate for some multimedia signals such as 3-D data, imperceptibility issues, and hence formal analyses, become much more complicated for non-multimedia host data. As has been noticed by many authors, there is often either a clash or an overlap between the requirements chosen from the three global classifications above. The same happens whenever there are requirements belonging to types other than these basic three, adding more restrictions to a given scenario. For instance, a relevant requirement can be capacity, i.e., the maximum possible amount of error-free hidden information, which can act as a figure of merit in situations subject to any combination of constraints chosen from the triplet of basic requirements.

1.5.2 Characteristics of a Robust Watermark


For a watermark to be robust, i.e., tolerant of and able to survive geometric attacks, it should have certain characteristics that are widely defined and accepted. The idea of using a reliable/permanent watermark to uniquely identify both the source (owner) of an image and an intended recipient has much importance in the electronic publishing and printing industries. Any watermark of this type must be visually imperceptible, secure, reliable, and resistant to attacks (geometric attacks and attacks during transmission). The main characteristics of a robust watermark are as follows.

Imperceptibility: The watermarked and original data sources should be perceptually identical. If a watermark is visible, it can easily be removed or overwritten. However, if the watermark is embedded only in the "invisible parts" of the image (for instance, the least significant bits of the image), lossy compression algorithms, such as JPEG, remove the watermark. Therefore the watermark should somehow be associated with the perceptually significant parts of the image.

Robustness: The embedded data should survive any signal processing operation the host signal goes through and preserve its fidelity. In other words, the watermark must be difficult (or rather impossible) to remove. Watermarks therefore have to be robust under unintentional distortions such as:

Digital-to-analog and analog-to-digital conversion; Filtering, compression, quantization, sampling; Contrast and color enhancement, dithering; Geometric distortions such as rotation, translation, scaling, cropping etc.

Capacity: Maximize the data embedding payload.

Security: The security lies in the key.

Complexity: The embedding and extraction system should have low complexity.

Availability: The availability of the original signal during the extraction process (blind vs. non-blind detection).

A good resistant/tolerant, or robust, watermark must have all of the above characteristics.

1.5.3 Terminology
The variety of application-specific techniques we have seen has led to the usual agreement that data hiding/watermarking is the umbrella term that most generally embraces the field delimited by all of them. To a lesser degree, information hiding has also been used to this end. Notwithstanding several seminal and somewhat unfortunate normalization attempts [15], the novelty of the discipline and its role as a melting pot of elements from different areas, such as information theory, communications, signal processing, and cryptography, has led to certain confusion in its nomenclature and, hence, to only a partial agreement on the terms used. This instability of terminology seems to have been an enduring constant of data hiding for the time being. Here, we will abide by the de facto terminology that has eventually come to be accepted by most of the relevant research groups, and that we have been, and will be, gradually introducing as needed.

1.6 The Evolution of Watermarking Research

In this section we describe the steps taken by research on digital watermarking for multimedia data up to its current position. We introduce the basic constituents of the problem and the difficulties that arose along the way. As this documentation was written after the implementation of the digital watermarking project, it contains a detailed description of all the problems that can be met in the process of developing such a software product.

Digital data hiding research has already gone through several epochs despite having only barely reached its first decade of life. The first epoch was characterized by an exponential mushrooming of watermarking algorithms, most of the time concocted using elements arbitrarily chosen from assorted sources. In general, more emphasis was put on the invention of these different techniques than on work oriented to theoretically studying the basic underlying mechanisms of data hiding and their robustness properties. The embryonic influence of paper watermarking still sometimes managed to slip into details such as the recurring idea of hiding an image (or a human-readable pattern) inside another, using the noisy retrieved image to assess performance with the naked eye.

Imperceptibility increasingly started to receive independent treatment, rather than being left at the mercy of the properties of the actual data hiding method devised. Perceptual analyses, which had been the concern of the active field of lossy compression some years before, influenced the appearance of data hiding methods for the different transform domains where these analyses were best known (e.g., the Discrete Cosine Transform [DCT]). By that time, fluctuations between blind (also called public, complete, or oblivious) and non-blind (also called private, incomplete, non-oblivious, or cover-escrow) methods were frequent; that is, there were methods advocating the now inconceivable necessity of the original unwatermarked data to retrieve the embedded information. This first phase waned as the recognition gained ground that the data hiding problem was a particular case of communications [16]. At this point we can talk of the beginning of a second stage, characterized by the awareness that a lot of well-known techniques could be applied to a greater or lesser degree to data hiding. One significant example is spread spectrum, which, as it had been designed for jamming scenarios, started to be used to the same end in data hiding. This phase also opened the way to an increasing formalization of the problem and, hence, to more rigorous performance analyses. Within this framework, for any arbitrary domain, the watermarking problem consisted in optimally encoding the information into a watermark, under perceptual restrictions dependent on the domain and the embedding mechanism. Next, the watermarked signal was formed by blending the watermark together with the host signal through the use of an embedding function that was typically additive or multiplicative. Last, the watermarked signal would undergo an attack stage before arriving at the decoder. The latter would try to recover the embedded information as reliably as possible, without using the original host signal in the blind case. To understand this in a more specific and detailed manner, we need to describe some key concepts that differentiate the fields of data hiding and watermarking, although these are two faces of the same coin.
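In symbols, a hedged sketch of this generic formulation (the notation is ours, not drawn from any one cited work): with host coefficients x_i, a key-dependent pseudorandom spreading sequence p_i with entries in {-1, +1}, a message bit b in {-1, +1}, and embedding strength \alpha, the two classical embedding functions and the blind correlation detector are

    % Generic spread-spectrum embedding, additive and multiplicative forms
    y_i = x_i + \alpha \, b \, p_i              \quad \text{(additive)}
    y_i = x_i \, (1 + \alpha \, b \, p_i)       \quad \text{(multiplicative)}
    % Blind detection correlates the received signal with the key sequence:
    \hat{b} = \operatorname{sign}\Big( \frac{1}{N} \sum_{i=1}^{N} y_i \, p_i \Big)

where N is the number of coefficients carrying the bit. At the detector the host signal itself acts as noise, which is precisely why channel-coding and synchronization ideas from communications entered the field.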

Chapter 2 Applications of Digital Watermarking

2.1 Usage-Specific Requirements

Digital watermarks are particularly attractive for signals constituting a continuous stream, such as audio or video signals. In case such signals are transmitted in analog form, recovery must be possible from the analog form, presumably at a minimum after the signal has been attenuated, distorted, and transformed in the process of transmission and reproduction. Particularly in the case of analog video signals, with their high bandwidth requirements, the recovery must then either be possible given only a very limited high-fidelity recording of the original signal, or from a significantly lower bandwidth recording at a later stage. The former requirement can be further refined into real-time recovery requirements; in this case the watermark must be recovered given a signal excerpt with a duration delimited by a fixed upper time bound, and given a fixed upper bound for the time permitted to recover the watermark after the signal excerpt has become available. For digitally transmitted signals, it must not be possible to detect (and therefore delete) the marking without an appropriately parameterized detector, from either the encoded or the baseband (decoded) signal, and the marking must be robust against digital-to-analog conversions. Since most multimedia signals transmitted digitally are encoded using a compression scheme and have only a fixed bandwidth available, an additional requirement levied on digital watermarks may be that the watermark not increase the bandwidth required for the marked signal beyond the bandwidth available for a given signal.

2.2 Copyright Protection

The protection of intellectual property through technical means was presumably one of the primary motivations for applying well-known steganographic techniques; some of the earliest publications in the field explicitly mention this particular application area [17]. Protective measures can be grouped into two broad categories. The first category encompasses protection against the misappropriation of creations by other content providers without the permission of, or compensation to, the rights owner, while the second category includes protection mechanisms against illicit use by end users. While both categories are commonly referred to as piracy, the issues involved and the requirements for protective measures may make a distinction beneficial, although in a number of application scenarios protection against both categories of misuse is called for.

In addition to the partial requirements derived for the protection scenarios given below, there exists an orthogonal requirement component in the quality of the signal to which digital watermarks have been applied; presumably the most common quality metric is the absence of perceptually significant artifacts introduced by the markings. Such a quality metric must be based on an accurate perceptual model; even though simpler metrics such as the peak signal-to-noise ratio (PSNR) can provide guidance in the application of algorithms and their parameters, effects such as masking in the audio domain and color sensitivity in the visual domain cannot be adequately covered. However, while human test subjects potentially provide the most elaborate evaluation metric, the cost of deriving statistically significant data that eliminates personal preferences and bias makes the approach impractical under most circumstances. Instead, computational models derived from such experiments can be employed to predict the effect of a modification to a signal on the perceived quality (e.g., a just noticeable difference). The accuracy and level of detail of the perceptual model used for evaluation are therefore critical; however, given the derivation mechanism for the models, the ability to state, for example, that a specific individual will or will not perceive an alteration of a specific signal after the application of a marking is limited to probability statements over populations. The notion of perceptual significance is not necessarily limited to the human audiovisual system; in some scenarios, markings may identify the ownership and distribution of multimedia data that are not intended for human perception but rather for machine analysis. In this case, potential artifacts introduced by the marking process must not affect the acquisition and registration process of the automated analysis or be interpretable as semantically significant. However, this scenario is considerably more benign than one targeted at the human perceptual system, as the model is well defined and can be verified. Additional nonperceptual quality metrics (subsidiary to the ones discussed above) may include the effect a marking has on the compressibility of the signal.

Any copyright protection system must ensure that the false positive rate is below a certain threshold, transgression of which would lead to increased expenses due to litigation, warranty claims, or other customer dissatisfaction. (Depending on the application subscenario, a false positive can mean that a digital watermark detector responds without a marking being present, or that an existing marking is read but the payload is not retrieved correctly, in such a way that another syntactically correct payload is detected.) In a consumer-oriented application area, this imposes a significant burden of proof on the digital watermarking algorithm; e.g., the digital versatile disk (DVD) Copy Control Association (CCA) requires a false positive rate below 10^-12 [18].
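To give a feel for how stringent such a bound is, consider a back-of-the-envelope model under our own assumptions (it is not taken from the DVD CCA specification): if the detector's correlation statistic over N samples is approximately zero-mean Gaussian with variance 1/N when no watermark is present, then for a detection threshold \tau

    P_{fp} \approx Q\!\left( \tau \sqrt{N} \right),
    \qquad Q(z) = \frac{1}{\sqrt{2\pi}} \int_{z}^{\infty} e^{-t^2/2} \, dt

and since Q(7) \approx 1.3 \times 10^{-12}, meeting a 10^-12 requirement demands roughly \tau \sqrt{N} \geq 7, i.e., either a high threshold (which hurts detection of genuine marks) or a large number of samples.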

2.2.1 Misappropriation by other content providers


Misappropriation by other content providers can, in turn, occur in several forms. In the simplest case, a creation is duplicated, redistributed, or resold in its entirety and in its original form. Here the legal framework provides, in its current form, protection even without the creation being marked with a copyright notice, though certain national jurisdictions may provide elevated protection status for creations affixed with a formal copyright notice. For most types of creations in digital representation, such as movies, books, or even software, the addition of such a notice does not impose any difficulty and can be embedded in textual form. Other media types, particularly audio material, cannot be annotated in such a simple way. In addition, visible markings may not be desirable because of a concomitant loss of value to the end user; such a perceptible marking can be distracting or esthetically displeasing, and would have to be placed in a semantically or esthetically significant location, since otherwise a simple cropping operation would remove it. Digital watermarking, or indeed a steganographic marking, can provide the requisite embedded information to assert copyright over a creation. In its most trivial form this amounts to embedding a source-encoded textual notice, but given the limited bandwidth afforded particularly by robust digital watermarks, more efficient encoding schemes are called for. Generally, this application scenario requires that information serving as evidence of the origin of a creation be provided as the payload signal. A more realistic application scenario must recognize that the creation may be subjected to a number of transformations, including format conversions. Even in the case of duplication of the entire creation without alteration, a conversion of the representation may remove copyright notices that are not bound to the actual carrier signal of the creation, again making digital watermarks and steganography attractive. Many transformations to which a creation may be exposed, either deliberately or unwittingly, are, however, not lossless. Such transformations may include the use of lossy compression to reduce storage and bandwidth requirements, digital-to-analog conversions, or complex operations, such as multiband compression in the case of audio data, that introduce a significant distortion compared to the original creation. It is primarily due to these considerations that the robustness requirement of digital watermarks for copyright protection was introduced.

Unauthorized duplication cannot only occur in situations with no preexisting relation between the rights owner and the secondary user; there are also situations where a user has licensed a creation for limited exploitation under certain circumscribed conditions. Any usage outside the licensed area must be unattractive to the secondary user, which can be accomplished by means of digital watermarking [19], in that the digital watermark contains evidence of the origin of the material. This immediately leads to an additional requirement for the application scenario: there must exist a surjective mapping between the set of rights owners O and the set of protected creations C (in most situations there will be additional requirements on the relation between the sets, but for the purposes of this discussion, surjectivity is sufficient to capture the general case). Establishing such a mapping using digital watermarks implies that the payload size for the ownership marking must be sufficiently large to accommodate the domain of the mapping. The requirement can, however, also be fulfilled by a marking that permits the explicit identification of each element of C, in effect providing an efficient annotation watermark. In this case, however, the payload size may be significantly larger than in the previously mentioned one. An additional payload type, identifying individual copies of a creation as associated with a specific transaction, can also provide a deterrent effect; this is another application scenario for digital watermarks. While, as noted above, the creation may be subject to quality-lowering transformations, the application scenario dictates that the secondary user cannot degrade the quality to a significant extent, as that would likely reduce the value perceived by the secondary user's customers to an extent that makes the duplication unattractive, although the quality degradation and variety of transformations inherent in a digital-to-analog conversion may be acceptable. This particular observation is rather problematic for protection mechanisms based solely in the digital domain. Robustness requirements, particularly with regard to lossy compression, are therefore modest if the end result after an attack is to be of value to a potential adversary. However, a high degree of robustness against de-synchronization attacks is desirable, as such an attack does not necessarily degrade the perceived quality (e.g., shifts of single scan lines in a motion picture).

A significant threat related to de-synchronization is inherent in the nature of digital media, in that the composition of new creations using parts of one or more existing ones can be performed with comparable ease and modest tools. Such compositions can take the form of collections of existing creations, in which case the precise technical form of the collection (such as the use of hyperlinked material instead of locating material at the same source) can determine the admissibility of collections without requiring compensation for the rights owner of the component creations, depending on the doctrine in a given jurisdiction. Another type of composition, which may even occur without a deliberate attempt at deception on the part of the secondary user, is the composition of fragments of existing creations, which may occur over multiple generations. If only one of the intermediate generations omits the required copyright notice, it becomes increasingly difficult to associate a fragment with its rights owner. While such composition may be legitimate in the case of very limited fragments under the fair use doctrine, the compensation of the rights owners cannot be avoided in general. In addition, a rights owner may object to having a creation used in a certain context and refuse to grant permission for such uses (e.g., the combination of unrelated video material designed to create a misleading impression). The difficulty with which such fragments could ordinarily be traced back to their respective origins not only endangers the rights of the creations' owners, it also imposes a severe burden on a conscientious secondary user attempting to identify and then locate each creation's owner to obtain permission. A digital watermark identifying the creator directly or indirectly is therefore beneficial not only to the rights owner but also to legitimate secondary users, as it can significantly reduce the effort required and may make certain kinds of compositions possible that could otherwise not have been considered. Besides introducing a requirement for robustness against trivial de-synchronization, such applications add further constraints on the capabilities required of watermarking techniques, in that the watermark must be recoverable after extensive cropping. Depending on the type of media, this may, for example, involve spatial cropping in the case of video or still images, or temporal cropping for audio and video material. The watermark payload must still be retrievable from the cropped elements, for example, by having the signal spread out with sufficient redundancy or by repeating the watermark over the course of the carrier signal, as sketched below. This implies that the bandwidth requirement for the watermark payload applies not to the entire signal but rather to the minimum fragment size for which recognition must be possible. In the case of audio data, the fragment size is between 3 and 5 seconds; for other media types, the minimum fragment size may depend on the semantics of the fragment. For example, in the case of a visible trademark or a trade secret stemming from a reproduced document, the same signal fragment extent is significant, whereas the recognition of the origin of a semantically irrelevant fragment of the same size may not be required.
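A minimal sketch of the repetition idea just mentioned, under our own assumptions (the block size and the block-level helper are illustrative, not the project's parameters):

    // Repeat the same payload in every fixed-size block so that any
    // cropped fragment covering at least one whole block still carries
    // a complete copy. Illustrative sketch only.
    static void embedTiled(int[][] pixels, boolean[] payload, long key) {
        final int BLOCK = 128; // assumed minimum recoverable fragment size
        for (int y = 0; y + BLOCK <= pixels.length; y += BLOCK) {
            for (int x = 0; x + BLOCK <= pixels[0].length; x += BLOCK) {
                embedBlock(pixels, x, y, BLOCK, payload, key);
            }
        }
    }

    // Block-level embedding stub (e.g., spread spectrum applied to the
    // block's transform-domain coefficients).
    static void embedBlock(int[][] pixels, int x, int y, int size,
                           boolean[] payload, long key) {
        // ...
    }

The price of this redundancy is capacity: the payload must now fit the bandwidth of a single block rather than that of the whole signal, which is exactly the per-fragment bandwidth constraint stated above.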

2.2.2 Illicit use by end users


The main distinction between the unauthorized reproduction and use of creations by content providers, discussed in the previous section, and unauthorized use by end users lies in the visibility of the perpetrator. Whereas a content provider ultimately must attempt to sell or otherwise profit from the creations and therefore must expose himself to the public (albeit possibly in a different jurisdiction), the same is not true for end users. End users may duplicate creations and distribute such material either within their personal environment or using file sharing services, some of which can provide a certain amount of anonymity. While copyright markings and digital watermarks can assist in identifying such material if and when it is located, identification of the source of the material cannot be accomplished using these markings. A deterrence effect may, however, be conjectured if an individual copy of a creation is tied to a specific transaction (which may implicitly be extended to an individual, based on the type of records maintained for a transaction). The payload for such a digital watermark may be the identity of the end user to which a creation is sold or otherwise licensed, or a unique identification of the transaction itself. This can lead to the identification of the original purchaser or licensee (or of the last authorized link in a distribution chain, in the case of what has occasionally been called superdistribution) if a copy or elements thereof are found in the possession of an unauthorized end user. The payload size required for this application scenario mirrors those discussed in the preceding section; for transaction watermarks, the uniqueness constraint must be balanced against the drawbacks of large payloads, at least to the extent that the probability of duplicate transaction identifiers is comparable to other types of false positives in the detection stage, since otherwise the evidentiary value could be called into question. Pragmatic issues also must be taken into consideration in determining the true deterrence effect of marked creations, since, unlike the case of commercial interests, in most situations there will exist a strong legal protection of individuals from searches of their property and invasion of their privacy without a viable justification. This is likely to limit the deterrent value, in that only copies that are found in the open (e.g., those traded openly by an end user) can be verified as containing watermarks. It is also, to the author's best knowledge, untested whether such prima facie evidence in the form of a digital watermark is sufficient to show that a transgression has taken place, and even so it may be the case that an end user thus identified can plausibly deny the deliberate dissemination. An argument for the efficacy of digital watermarks as a deterrent in this context is the use of automatic search engines that scan for protected creations. Such searches can occur either by transferring suspicious content to a central location and analyzing the data there, or by using so-called agents to carry out the analysis process remotely. In the case of the application scenario discussed in the preceding section, central analysis is already exceedingly difficult from a logistical standpoint, as the set of potential sources for redistribution is not properly bounded and may grow faster than the product of the bandwidth and processing power available to the rights owner. The approach becomes even less attractive when arbitrary end user systems are considered; these will generally not make creations available to external nodes and also may not be available at the time of checking for protected creations. Using agents to detect misuse could, under very benign circumstances, eliminate the logistical problem in the scenario considering misappropriation by other content providers (although making processing power available to rights owners through content providers would almost certainly require commensurate reimbursement), but would leave the problem of the ill-defined set unsolved. It is, however, unlikely that any end user would consent to rights owners executing arbitrary code and granting access to any data located on the end user's system, even though a comparable approach has been proposed in a similar context for a digital rights management system.

The robustness requirements for digital watermarks protecting against end users are considerably higher, since the quality aspect appears to be of lesser significance if creations are obtained for free or merely for the cost of transferring the data. This assertion is supported by the observed popularity of highly compressed representations of audio and particularly video data, the latter at a quality that is significantly below the level of the original. Given such low quality requirements, robustness must also be maintained against a number of deliberate attacks. This is particularly problematic, since there exist automatic tools for performing typically highly successful attacks against digital watermarking systems (e.g., those used in developing benchmarks or in the course of academic research) that can be used even by individuals with modest skills, and which introduce quality degradations comparable to, or in most cases significantly less than, those tolerated by end users in the case of compression.

2.3 Annotation watermarking

Digital watermarks possess a number of desirable properties that make their use outside of copyright protection desirable, some of which are analogous to the ones discussed in the preceding sections. Annotation watermarks provide information in a side channel that is coupled to the carrier signal without degrading the perceived quality of the carrier signal; this distinguishes them from markings that are either visible or not tied to the carrier signal (e.g., comment fields specific to a certain file format). They are therefore of interest in applications where the format of multimedia data cannot be guaranteed or is likely to change throughout a work flow; similarly, if digital-to-analog conversions are part of the expected transformations that multimedia data must resist, the properties of digital watermarks are desirable. Unlike perceptible markings, digital watermarks can also be distributed across an entire carrier signal (e.g., a video stream) such that the resulting signal can be cropped significantly and the digital watermark can still be recovered either in its entirety or to a significant extent. It should, however, be noted that in some applications, such as those involving images or video data, watermarks are appropriate only if the requirements for constant quality or for robustness of the marking against cropping apply. Otherwise, there exist alternative technologies (e.g., two-dimensional bar codes) that can provide considerable payload capacity with comparable or better robustness characteristics. The annotation application scenario can be considered notably distinct from steganographic techniques in that the presence of the watermark signal is public knowledge (a property that may, for example, also be true for copyright protection watermarks if they are employed as a deterrent) and a detector may also be available to the public. Unlike, for example, a copyright protection scenario, no immediate adversarial relation needs to be stipulated, since the marking itself is not directed against the interests of a particular individual or group. Given the robustness of most digital watermarks, particularly as applied to digital-to-analog conversions, one of the prime uses of annotation watermarks is the association of an analog representation with its digital original; this can, for example, occur via centralized database records, limiting the payload requirement to a single unique key for such a database. The ability to reference the original, given possibly only a cropped or otherwise partial copy of a document or other multimedia data, significantly eases record handling and can enable multimedia document management systems. An example of such an application scenario would be the use of X-ray photography for testing and documenting the structural integrity of materials and constructions. Here, excerpts from the original (possibly very large) analog photograph can be scanned and reproduced.

It is, however, important to retain information as to the precise circumstances under which the photograph was originally taken and, from there, a link to any related documents. Somewhat problematic in this particular application scenario (and even more so in the case of medical imaging, where at least in principle similar concerns exist) is the requirement for very high fidelity of the marked representation and the implicit requirement that no artifact introduced by the watermarking process may resemble a feature that could be misinterpreted by an analyst. Regardless of whether the payload for the digital watermark is the annotation itself or a key into a database containing the actual referenced records, safeguards such as error-detecting or error-correcting codes must be employed to protect the integrity of the annotation.
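
As a minimal sketch of such a safeguard, the following Java fragment appends a CRC-32 error-detecting code to an annotation payload before embedding and verifies it after extraction. The payload layout (annotation bytes followed by a 4-byte checksum) is purely illustrative and not part of any particular watermarking standard:

    import java.util.zip.CRC32;

    // Minimal sketch: protecting an annotation payload with a CRC-32
    // checksum before embedding, and verifying it after extraction.
    public class AnnotationIntegrity {

        // Append a CRC-32 checksum to the annotation bytes.
        static byte[] protect(byte[] annotation) {
            CRC32 crc = new CRC32();
            crc.update(annotation, 0, annotation.length);
            long sum = crc.getValue();
            byte[] out = new byte[annotation.length + 4];
            System.arraycopy(annotation, 0, out, 0, annotation.length);
            for (int i = 0; i < 4; i++) {
                out[annotation.length + i] = (byte) (sum >>> (24 - 8 * i));
            }
            return out;
        }

        // Verify an extracted payload; returns true if the checksum matches.
        static boolean verify(byte[] payload) {
            if (payload.length < 4) return false;
            int n = payload.length - 4;
            CRC32 crc = new CRC32();
            crc.update(payload, 0, n);
            long expected = 0;
            for (int i = 0; i < 4; i++) {
                expected = (expected << 8) | (payload[n + i] & 0xFF);
            }
            return crc.getValue() == expected;
        }

        public static void main(String[] args) {
            byte[] marked = protect("record-key-4711".getBytes());
            System.out.println("intact: " + verify(marked));   // true
            marked[2] ^= 0x01;                                  // simulate a bit error
            System.out.println("tampered: " + verify(marked)); // false
        }
    }

Note that CRC-32 only detects corruption; an error-correcting code such as Reed-Solomon would additionally allow a damaged annotation to be repaired.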

Conversely, some applications for annotation require that the integrity of the annotated signal be preserved. This can take the form of several possible sub-requirements; in a simple case, the duplication and transfer of an annotation watermark to another carrier signal without authorization must be prevented. A more elaborate requirement is that the semantic integrity of an annotation-marked signal must be preserved. This requirement can, to some extent, be satisfied by the use of fragile watermarks, although this implicitly contradicts several robustness requirements. This application scenario can be considered a specific sub-scenario of annotation watermarking.

2.4 Fingerprinting

The figurative term fingerprinting has acquired two completely disjoint interpretations in the field of content protection, only one of which applies to digital watermarking (a third interpretation involves the derivation of a unique characteristic of a creation with a significant, typically constant, bandwidth, such as a cryptographic hash function; as this derived information is disjoint from the creation or carrier signal, this type of fingerprinting is of no particular interest here by itself). In the watermark interpretation (which semantically predates the others, the general terminology having been introduced by Wagner even prior to the application to digital watermarks [20]), a digital watermark uniquely identifying the end user of a creation is embedded in the creation's carrier signal as the payload, corresponding to an application sub-scenario. This implies the same requirements for the payload size of the watermark and the very high requirements for robustness against deliberate attacks specific to copyright protection scenarios. The fingerprint watermarks can be embedded at the time of distribution to a specific customer; this requires a considerable computational overhead for the generation of watermarks as well as a distribution medium that permits the efficient creation of distinct copies of a creation. Alternatively, a playback device that contains a subsystem tied to a specific individual or customer can embed a fingerprint watermark immediately on playback. The latter approach does not require distinct copies and reduces the computational burden on the content provider. This scenario was used by the now-defunct DiVX pay-per-view digital video player scheme developed and owned by a partnership between Ziffren, Brittenham, Branca & Fischer, an entertainment law firm in Los Angeles, California, and Circuit City. However, fingerprint marking on playback implies that both the embedding mechanism and the requisite key material are present in the playback device (even the download of ephemeral key material does not alter this situation) and hence under the control of a potential adversary. Implicit in this observation is the need for a separate embedding key for each playback device, since otherwise any single compromised playback device would mean that the adversary can embed arbitrary fingerprints, eliminating the evidentiary value of the watermarks. However, this implicit requirement for key material imposes a severe computational burden in case a fingerprint needs to be identified, since for each suspect device fingerprint, a test for the existence of a watermark must be performed. The other interpretation refers to the extraction of semantically relevant or characteristic features from multimedia signals to identify the signal itself and does not involve digital watermarking at all.

2.5 Automatic play list generation for rights verification

Broadcasts or other types of public performances of creations must be accompanied by appropriate compensation for the rights owners. While there exist societies offering centralized record keeping and compensation of individual rights owners in most jurisdictions (e.g., ASCAP for music in the United States), it is nonetheless burdensome and error-prone work to create the requisite records of which creation was played, when it was played, and how many times. Inserting an annotation watermark into each individual creation permits the automatic monitoring of a broadcast stream or similar performance. This lowers the reporting burden on the performing entity; the monitoring can therefore be performed at the originating site, resulting in lowered robustness requirements due to a lower risk of distortions introduced by compression, transmission faults, and incomplete reconstruction. In such an application sub-scenario, the bandwidth (typically time) required for a given digital watermark is of secondary interest. However, a secondary benefit to royalty-collecting entities can be derived only if play list monitoring occurs after performance or broadcasting. In this case, the time required for the recovery of a digital watermark is critical to the overall efficiency of the monitoring scheme, since if each marking requires only a fraction of the duration of a creation, multiple signal sources can be monitored simultaneously by moving to a different source once a marking has been detected. Assuming that such a scan cycle does not last longer than the average length of a creation, and given a low unit value for individual royalty payments, this reduces the expenditures necessary for the monitoring equipment and bandwidth. Even more cursory monitoring would also be adequate for this application scenario, since typically only gross or systematic underreporting is of actual interest. Given that almost all creations require the payment of royalties for performance, the threat of deliberate removal on the part of the entity broadcasting or performing the creations would be limited, as the lack of annotation watermarks would be sufficiently abnormal to warrant a manual inspection of the material, presuming that the presence of annotation watermarks was mandated by the royalty-collecting entity. The automatic generation of play lists is also relevant to the reverse case, that is, to verify that a given creation (e.g., an advertisement) has been broadcast according to a previously established contract. As in the previous case, the robustness requirements are derived mainly from the need to withstand the standard processing chains encountered, so deliberate attacks on the watermark are of limited utility to the parties involved.

2.6 Multimedia authentication

The integrity and authenticity of multimedia signals, particularly those already in digital representation, are in jeopardy of malicious or otherwise semantically distorting manipulation from their creation or reception by a sensor onward. For application areas where the bitwise identity and authenticity of a digital document is either a priori desirable (e.g., in the case of documents in electronic representation) or otherwise feasible, cryptographic hash functions and digital signatures can provide both effective and efficient protection. However, for most applications involving multimedia signals, certain types of modifications such as compression, while resulting in a bitwise difference between signals, will still result in a signal that is considered authentic and for which integrity is not considered violated. Given an original or authentic digital multimedia signal S and a signal S′, which is a putatively transformed S, several problems can be formulated. The authenticity of S′ can be shown with or without knowledge of the original S. In the first case, this is trivially accomplished by comparing the signals under a proximity metric. In the second case, however, the authenticity must be determined based on a characteristic feature. Such features (assuming the requirement for similarity is maintained; otherwise, solutions such as Friedman's trustworthy digital camera, using a signed hash value transmitted out of band, would be sufficient) should be intrinsic to the signal, to avoid losing the integrity and authenticity information through legitimate processing while the signal itself is still valid. Digital watermarks provide an obvious solution to some of the requirements described above. However, the very robustness of regular digital watermarks against manipulation (especially against cropping and spatially or temporally localized alterations) makes such markings unsuitable. Instead, fragile watermarks are called for, particularly in the case of integrity protection applications. The drawbacks inherent in first generation fragile watermarks imply that a number of processing steps will result in the watermark being unrecoverable even though the integrity criteria could still be met for a given S′. The main problem in this case is the formulation of a similarity metric that determines the semantic equivalence (for a given application) of two signals S and S′, and subsequently the construction of a detector that signals the recovery of the fragile watermark if a given S′ exceeds a similarity threshold. Using watermarks for authenticity, and to some extent also for integrity, imposes the strong requirement that no unauthorized entity can embed a marking that purports to originate from another entity, and that the marking is linked to the carrier signal in such a way that a transfer of the marking from one carrier signal to another carrier signal resulting in detection of the authentication feature is not possible.
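
To make the notion of a proximity metric concrete, the following Java fragment sketches the use of the peak signal-to-noise ratio (PSNR) between S and a candidate S′ over 8-bit samples. PSNR is only a crude stand-in for the semantic similarity metrics discussed above, and the threshold value is purely illustrative:

    // Minimal sketch of a proximity metric between an original signal S and
    // a candidate S': the PSNR over 8-bit samples.
    public class ProximityCheck {

        static double psnr(int[] s, int[] sPrime) {
            double mse = 0.0;
            for (int i = 0; i < s.length; i++) {
                double d = s[i] - sPrime[i];
                mse += d * d;
            }
            mse /= s.length;
            if (mse == 0.0) return Double.POSITIVE_INFINITY; // identical signals
            // PSNR = 10 * log10(MAX^2 / MSE), with MAX = 255 for 8-bit samples
            return 10.0 * Math.log(255.0 * 255.0 / mse) / Math.log(10.0);
        }

        // Declare S' authentic if it is close enough to S under the metric.
        static boolean isAuthentic(int[] s, int[] sPrime, double thresholdDb) {
            return psnr(s, sPrime) >= thresholdDb;
        }
    }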

2.7 Watermarking for copy protection

Digital watermarks can be considered protection techniques only in that they provide a deterrence mechanism or evidence of breach of copyright or other contractual obligations after the fact. There exist, however, approaches for utilizing watermarks directly for copy protection in conjunction with specially equipped devices for playback and recording. Protection can occur at two separate stages, during recording and during playback. In each case, both the presence of a potentially specific marking and the absence thereof can be used to induce a desired behavior of the device controlling the operation. Requiring the presence of a watermark to permit playback of a creation bears some resemblance to the authentication scenario for digital media representations; this application scenario implicitly requires the personalization of the creation for a given individual or set of devices, since otherwise a successful duplication of the digital representation would also reproduce the watermark. Depending on the robustness requirements for the watermark, watermark recognition may even be possible from copies generated from analog sources (e.g., audio signals captured and redigitized from an analog output of a legitimate playback device). The requirements for payload capacity are modest in case users (or devices) are identified by the transaction watermark; for individual transaction records, the payload size is correspondingly higher. As copies must be individually marked in this application scenario, this imposes limitations on the distribution forms that can be justified economically. Requiring the absence of a watermark for playback could, for example, be part of an application scenario in which a playback device embeds a watermark (such as its identity) into the creation as it is played back; alternatively, this watermark could also be embedded in a recording process to distinguish first from subsequent generations of copies. This is the application scenario most similar to the copy bit approach found in digital audio tape (DAT) and audio CD systems, along with the familiar threats and vulnerabilities of that approach, albeit with an increase in difficulty if the embedding process is an integral part of decoding a creation for playback. Conversely, another application scenario consists in requiring the absence of a digital watermark for recording. While such a scheme is only of interest in cases where it can be guaranteed that any recording device honors this convention, the benefit compared to a simple copy bit mechanism is that removal of the marking, once it has occurred, requires more effort than would be the case for a marking that is not tied to the content itself. In addition, more elaborate schemes can use the fact that payload sizes larger than a single bit can encode (assuming safeguards against unauthorized manipulation of the payload or the marking itself) arbitrary instructions as to the admissibility of copying or playback operations; these can be changed dynamically, either upon duplication or, provided a writable representation exists, during the use of the representation, for example, to record the number of remaining playback operations for a given license. As noted above, such a mark-on-use scheme was used for the output of the DiVX devices for the playback of digital movies, although the digital watermarks were embedded only in the analog signal and not used for copy protection as such.

The methods used for this kind of application require, in some sense, that any manipulation preserving the multimedia data also preserves the attached information. The degree to which robustness is required depends on the exact application, as these manipulations can range from mild, unintentional ones, such as some forms of filtering, format conversions, or compression, to intentional ones. Even though robustness is sometimes not the only design target, the term watermarking is usually employed as a generic name in this area, particularly in applications with commercial significance. Copyrights are just like patents: worthless pieces of information if not enforced. Although a digital watermark cannot guarantee copyright protection by itself, it can be used to assist in the legal identification of the copyright owner, provided that the embedded data bears legal significance. Copy protection can also be addressed within access control scenarios, if the hidden data are used to indicate whether the protected host can be replicated or not. This application also requires the joint use of devices capable of reading and interpreting the embedded data to prevent unauthorized duplication. This sort of strategy was one of the objectives of the proposal to manufacture SDMI-compliant devices through licensing; clearly, the situation called for a standard. Nevertheless, access control methods have already suffered some notorious failures in non-watermarking environments, such as the breach of the DVD Content Scrambling System. A more promising scenario entails using digital watermarks to distinguish between different instances of the same digital content. This application is known as fingerprinting, and involves watermarking each copy with a unique identifier. Such unique labels can be used to track replication of content by revealing where the illegal copies originated. A case that has been proposed for this usage of data hiding is the real-time insertion of fingerprints in films during projection. As the fingerprint would be unique for each projector, movies acquired inside a theater with a camcorder could afterwards be traced back to locate their origin. The same technique might be used to spot leaks inside recording studios. Integrity control (also called tamper detection/proofing) has been tied to the use of the somewhat difficult to define fragile watermarks, which would get deleted whenever any sort of modification is made to the host data. In general, the aims are not only to ascertain whether the host signal was tampered with, but also, in the affirmative case, to detect where, and sometimes even to attempt restoration of the modified spots. Integrity leads to authentication through the use of a soft digital signature (obtained by means of soft hashing) that would constitute the watermark, but the possibility of removing it and forging an illegal new one imposes the use of public-key data hiding. Only then could the secret keys used (i.e., the data hiding key and the cryptographic signature key) act as a proof of ownership over the watermarked content. Unfortunately, none of these enticing features is available at the current state of research, but the importance that this type of application bears for legal documents, medical data, satellite images, and many other areas makes it a prominent challenge for data hiding. Last, the exceptional importance given to integrity in some particular circumstances has prompted the examination of reversible data hiding, that is to say, recovering the original host data as exactly as possible from its watermarked version.

2.8 Fraud and tamper detection

When multimedia content is used for legal purposes, medical applications, news reporting, or commercial transactions, it is important to ensure that the content originated from a specific source and that it has not been changed, manipulated, or falsified. This can be achieved by embedding a watermark in the data. Subsequently, when the content is checked, the watermark is extracted using a unique key associated with the source, and the integrity of the data is verified through the integrity of the extracted watermark. The watermark can also include information from the original image that can aid in undoing any modification and recovering the original. Clearly, a watermark used for authentication purposes should not affect the quality of an image and should be resistant to forgeries. Robustness is not critical, as removal of the watermark renders the content inauthentic and hence of no value.

2.9 Others

Although data hiding is still struggling to overcome all of the problems posed by robustness and security, the good news is that the technology is ready to be deployed in several areas where these concerns are secondary (but not completely negligible). We can find examples in applications such as labeling or captioning, in which we just want to conceal some metadata, related or not to the content, such as time and conditions for still cameras or audio recorders, additional channel information (e.g., teletext or subtitles), or medical data. The advantage of this procedure with respect to other methods already in use lies in the joint treatment of data and metadata, while at the same time avoiding mutual interference and a bandwidth increase. This kind of embedded information can also be used to monitor broadcast content in television and radio signals. For instance, advertising or polling companies can use systems that automatically detect the different marks on the broadcast signal for billing or statistical purposes. Another possibility is that the hidden data consist of signaling information relevant to the channel used for transmitting the content data; this approach would reduce the bandwidth needed or help in technological transitions such as the move from analog to digital radio broadcasting. Finally, a recent proposal suggests using watermarking to blindly evaluate the quality of service (QoS) of multimedia communication links. As the watermark undergoes the same alterations suffered by the host data, it can be used to track any degradation caused by the channel.

Chapter 3 Digital Watermarking Techniques For Still Images

A great deal of research effort has been spent on the development of watermarking algorithms for images, which are discussed in this chapter. Starting with a short summary of application scenarios of image watermarking techniques, the evolution of image watermarking techniques for photographic images is outlined. This is followed by a section dealing with the watermarking principles for binary and halftone images. A short summary concludes this chapter.

3.1 Classification and application requirements

Watermarking techniques are applied to images for various reasons. Each of these possible applications involves typical processing operations that a watermarking technique must survive. Content protection scenarios may include operations like color to gray-scale conversion, global or local affine transforms, and printing and scanning. Authentication watermarks must not be affected by legitimate operations, while illegitimate manipulations must destroy them. Metadata labeling scenarios may include media transforms. A typical example is the transmission of information in printed images. This information is revealed if the printed image is shown to a webcam whose data is processed with the watermark reader software, as presented by Digimarc [21]. While robustness is not a general requirement for data hiding techniques, undetectability is essential. A typical scenario for data hiding is the distribution of hidden information via (Usenet) newsgroups, bulletin boards, or simply by images on homepages. Steganalysis is a research area dealing with the detection of such hidden data.

3.2 Photographic and photorealistic images

In general, most of the watermarking methods described in this section can be applied to color as well as to gray-scale images if the embedding of the watermark is not directly dependent on the color information of the image. Therefore, embedding in the intensity values is proposed in numerous image watermarking publications. The first method proposes embedding the watermark information in the least significant bit (LSB). The advantage of the direct LSB method is its capacity. A color image with the typical size of 1,600 × 1,200 pixels in red, green, blue (RGB) representation is capable of storing more than 700 kB even if only the LSB of each sample is used as information carrier. If visible artifacts can be ignored, more bits can be used for hiding information. The main disadvantages of this simple approach are its lack of robustness against lossy compression, its visible artifacts especially in flat image regions, and statistical dependencies that can be detected. More sophisticated methods have been presented in the literature. LSB techniques are not limited to the spatial domain. They can also be applied to the image representations in transform domains. Embedding in a transform domain can be used to change the statistical behavior of the information carrier. Additional improvement can be achieved by carefully choosing the embedding scheme. The fragility of the LSB method, i.e. its lack of robustness, is not a general disadvantage. In certain application scenarios, for example, for image authentication, fragility is a desirable criterion.
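
A minimal sketch of direct LSB embedding in Java illustrates the principle: one message bit replaces the least significant bit of each pixel's blue channel in row-major order. The channel choice and scan order are illustrative assumptions rather than a fixed convention:

    import java.awt.image.BufferedImage;

    // Minimal sketch of direct LSB embedding: one message bit replaces the
    // LSB of the blue channel of each pixel, scanning in row-major order.
    public class LsbEmbedder {

        static void embed(BufferedImage image, byte[] message) {
            int bitIndex = 0;
            int totalBits = message.length * 8;
            for (int y = 0; y < image.getHeight() && bitIndex < totalBits; y++) {
                for (int x = 0; x < image.getWidth() && bitIndex < totalBits; x++) {
                    int bit = (message[bitIndex / 8] >> (7 - bitIndex % 8)) & 1;
                    int rgb = image.getRGB(x, y);
                    rgb = (rgb & 0xFFFFFFFE) | bit; // overwrite LSB of blue
                    image.setRGB(x, y, rgb);
                    bitIndex++;
                }
            }
        }

        static byte[] extract(BufferedImage image, int messageLength) {
            byte[] message = new byte[messageLength];
            int bitIndex = 0;
            int totalBits = messageLength * 8;
            for (int y = 0; y < image.getHeight() && bitIndex < totalBits; y++) {
                for (int x = 0; x < image.getWidth() && bitIndex < totalBits; x++) {
                    int bit = image.getRGB(x, y) & 1;
                    message[bitIndex / 8] |= (bit << (7 - bitIndex % 8));
                    bitIndex++;
                }
            }
            return message;
        }
    }

As discussed above, a single lossy compression pass or even mild filtering will destroy a mark embedded this way, which is acceptable only where fragility is desired.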

3.2.1 Traditional watermarking methods


The abstract model for watermarking is independent of the underlying media type and is shown in Figure 2.5, but embedding and detection, especially the resulting artifacts, are dependent on the data type. Traditional image watermarking techniques are based on spread-spectrum communication. However, the frequencies carrying additional information cannot be modified arbitrarily because of the concomitant image degradations. Modifications of low frequencies affect the mean intensity and result in noise of low spatial frequency. These effects are strongly visible.

On the other hand, modifications of high frequencies result in less visually distracting high-frequency noise. The application of suitable image filters, which are commonly available in image processing programs (e.g., for image enhancement and intrinsically in image compression algorithms), can remove data embedded in high frequencies. Medium frequencies can generally be considered suitable information carriers to fulfill the requirements of robustness as well as of low visible degradation. Models of human perception are applied for increased performance.

Several methods have been proposed by various researchers in the field of digital watermarking, but here we discuss the two most common and well-known methods for watermarking digital data. Both methods use m-sequences to derive a pseudo-noise (PN) code (the watermark). The first method compresses the image data from 8-bit gray scale to 7 bits by adaptive histogram modification; the LSB is then directly used to embed the watermark information. The second proposed method uses LSB addition: the watermark is added to the LSB plane. For retrieving the watermark, a correlation-based extraction scheme is used. Parallels between spread-spectrum communications and watermarking were first considered and discussed by Cox et al. Their algorithm uses a frequency domain transform to convert the input image into another domain. In the frequency domain, a sequence of values c_o = c_o[1], ..., c_o[n] is extracted from the image. This sequence is the information carrier of the watermark and is modified. The watermark is a sequence of real numbers w = w[1], ..., w[n]. Each value w[i] is chosen independently according to N(0, 1) (a Gaussian distribution with mean μ = 0 and variance σ² = 1).

Figure 3.1: The embedding scheme of the watermarking algorithm as proposed by Cox. In their original publication, Cox et al. suggested using the discrete cosine transform (DCT) domain, although other transform domains are applicable, too. The result of the insertion of the combined values into the original image is the watermarked image. Watermark retrieval is also based on correlation. As mentioned before, embedding a watermark by this method is not limited to the DCT domain. However, the DCT domain has been studied extensively because this is the transform used in Joint Photographic Experts Group (JPEG) compression, for which extensive studies on perceptibility have been performed. Further advantages of using the DCT domain include the fact that decomposition into frequency bands is efficient, the DCT is widely used in image and video compression schemes, and the DCT coefficients affected by compression are well known. A considerable number of image watermarking techniques share this architecture. Yet they differ chiefly in the signal design, the embedding, and the retrieval of the watermark content.
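
The embedding rule itself can be sketched in a few lines of Java, assuming the multiplicative rule c′[i] = c_o[i](1 + α·w[i]) commonly associated with the Cox scheme, applied to an already-extracted sequence of perceptually significant transform coefficients; the strength α = 0.1 and the seed-based watermark generation are illustrative choices:

    import java.util.Random;

    // Minimal sketch of Cox-style spread-spectrum embedding, assuming the
    // multiplicative rule c'[i] = c[i] * (1 + alpha * w[i]) applied to a
    // sequence of perceptually significant transform coefficients.
    public class CoxEmbedder {

        // Generate the watermark w[1..n], each value drawn from N(0, 1).
        static double[] makeWatermark(int n, long seed) {
            Random rng = new Random(seed);
            double[] w = new double[n];
            for (int i = 0; i < n; i++) {
                w[i] = rng.nextGaussian();
            }
            return w;
        }

        // Embed the watermark into the coefficient sequence in place;
        // alpha (e.g., 0.1) controls the embedding strength.
        static void embed(double[] coefficients, double[] w, double alpha) {
            for (int i = 0; i < w.length; i++) {
                coefficients[i] *= (1.0 + alpha * w[i]);
            }
        }
    }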

Amplitude modulation of the DFT coefficients is applied by many watermarking techniques. One advantage of the DFT is the resulting shift (translation) invariance of the magnitude spectrum. Another is the ease of accounting for human perception by weighting frequencies. The properties of the DFT have been studied extensively in the image processing literature. One of the results obtained there is the fact that the phase information is more important for the image content than the magnitude. In contrast to the previously described amplitude modulation for blind retrieval of the watermark, an optimal statistical detector is proposed by Ó Ruanaidh et al. Various methods for watermarking digital images in the wavelet domain have been proposed. Among other reasons, the development of new compression schemes led to new watermarking techniques. Barni et al. proposed a watermarking method based on the wavelet decomposition [22]. The wavelet decomposition splits the input image into high-pass and low-pass components with different orientations. A three-level decomposition of an image is shown in Figure 3.2. For watermarking, a discrete wavelet transform (DWT) is applied to the original image.

Figure 3.2: Three-level DWT decomposition of an image. The DWT has proved to be among the best transforms in terms of robustness, security, efficiency, and a number of related issues. That is the reason for its popularity in the field of digital watermarking of digital images, and a number of watermarking techniques based on the DWT have been developed.
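
For illustration, a single decomposition level of the simplest wavelet, the Haar wavelet, can be computed as follows in Java; practical schemes often use longer filters such as the Daubechies family, and even image dimensions are assumed:

    // Minimal sketch of a single-level two-dimensional Haar DWT: each level
    // splits the image into the LL (coarse), LH, HL, and HH sub-bands.
    public class HaarDwt {

        // Returns a same-sized array with LL in the top-left quadrant,
        // HL top-right, LH bottom-left, and HH bottom-right.
        static double[][] forward(double[][] image) {
            int h = image.length;
            int w = image[0].length;
            double[][] out = new double[h][w];
            for (int y = 0; y < h; y += 2) {
                for (int x = 0; x < w; x += 2) {
                    double a = image[y][x];
                    double b = image[y][x + 1];
                    double c = image[y + 1][x];
                    double d = image[y + 1][x + 1];
                    out[y / 2][x / 2]                 = (a + b + c + d) / 2; // LL
                    out[y / 2][w / 2 + x / 2]         = (a - b + c - d) / 2; // HL
                    out[h / 2 + y / 2][x / 2]         = (a + b - c - d) / 2; // LH
                    out[h / 2 + y / 2][w / 2 + x / 2] = (a - b - c + d) / 2; // HH
                }
            }
            return out;
        }
    }

Repeating forward() on the LL quadrant yields the three-level decomposition shown in Figure 3.2.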

3.2.2 Watermarking methods dealing with geometric distortion


The general problem with the previously described watermarking methods is the proper synchronization needed for retrieving the watermark. Applying geometrical transforms to, or warping, the watermarked images destroys this synchronization. Some watermarking algorithms, for example block dependent algorithms, require a proper alignment and therefore are not inherently robust against translation. Among these general transforms are global affine transforms. Yet local affine transforms and projective transforms also have to be considered. These geometrical transforms are considered de-synchronization attacks and are applied in image watermarking benchmarks like the historic StirMark benchmark for the evaluation of robustness. Furthermore, a combination of simple transforms influences many classical watermarking schemes drastically, although visible artifacts are hardly perceptible in the images. Besides exhaustive search for the embedded watermark, different strategies have been developed to address the problem of geometrical de-synchronization attacks. In general, additional information is embedded. This additional information can be the redundant watermark content or additional information to recover the original geometry of the watermarked image. Redundant embedding of the watermark content is proposed in different publications. For example, the watermark can be embedded periodically as proposed by Honsinger or Kutter. This periodic embedding results in characteristic peaks in the autocorrelation function (ACF). These peaks reflect the applied geometrical transforms. However, an attacker can also calculate the ACF with the aim of predicting the watermark. Further publications based on redundant embedding use cyclic properties of the watermark pattern or use redundant embedding in video sequences. A watermarking method based on the autocorrelation function, which especially addresses local nonlinear geometrical distortions, was proposed by Voloshynovskiy et al. Invariant transform domains are applicable for increasing robustness. Some transforms are inherently not affected by specific geometrical transforms. For example, replacing the DCT with an invariant transform like the log-polar mapping (LPM), which is also called the Fourier-Mellin transform, as described by Ó Ruanaidh and Pun, has some theoretical advantages. After applying the DFT, which is invariant to translation, the amplitude of the DFT at position (u, v) is projected into a new coordinate space (μ, θ) via the log-polar projection u = e^μ cos θ, v = e^μ sin θ. Further techniques along these lines are discussed in the following.

3.2.3 Content-based watermarking methods


While newer methods also have to face the previously described problem of geometrical distortions, they attempt to use semantic information in the image (the content of the image) for synchronization. Thus, they are classified as content-based watermarking algorithms. In Towards Second Generation Watermarking Schemes, Kutter et al. [23] outlined a scheme that is based on perceptually significant features. These features should be invariant to noise, covariant with geometrical transforms, and independent of cropping. For feature extraction, the image is decomposed using the Mexican-Hat wavelet as proposed by Manjunath et al. The detected features are used for an image segmentation using Voronoi diagrams (i.e., a partitioning of a given space). The resulting segments are used for embedding a watermark with an existing watermarking scheme. The detected feature is used as a reference origin for the watermarking process.

For the detection of the watermark, the same features have to be extracted and the image has to be segmented. The authors reported that the feature location may move by 1 or 2 pixels, which has to be compensated for by a limited search. Instead of creating a triangulation of the image data, Dittmann et al. proposed a scheme based on self-spanning patterns (SSP). These SSPs are also based on image feature points. The initial pattern, which is represented by a polygon, is spanned over four feature points. Information-carrying patterns are spanned around the previous pattern, resulting in a set of polygons with a given traverse direction. An estimation of image parameters, also based on the wavelet decomposition, is proposed by Alghoniemy and Tewfik. Previously proposed methods suggested using image moments, which are invariant against geometrical transforms [24]. However, their main disadvantage is the missing robustness against cropping, which is addressed by the method of Alghoniemy and Tewfik. The scaling parameter is estimated using the edges standard deviation ratio (ESDR), and the rotation angle is estimated using the average edges angles difference (AEAD). These estimations are based on the wavelet maxima, which are extracted from the low-frequency components of the wavelet decomposition. The ESDR and the AEAD show increased robustness against cropping. However, they are not completely robust against general affine transforms. Therefore, Alghoniemy and Tewfik propose to combine this method with exhaustive search strategies. Local watermarks are proposed by Tang and Hang. Their scheme uses the same feature extraction method as proposed by Kutter et al. These extracted features become the centers of non-overlapping image disks. The watermarks are embedded and extracted in each image disk. Before embedding and detection, the image disks are normalized using the normalization method proposed by Alghoniemy and Tewfik. The watermark is then embedded in the DFT domain. Segmentation- or region-based image watermarking algorithms are proposed by Nikolaidis and Pitas and by Celik et al. In contrast to methods in which a region of interest has to be selected manually, these methods use image segmentation to group the pixels of an image according to some statistics. The method proposed by Celik et al. is based on color clustering using a k-means clustering method. The cluster centers are identified, and a Delaunay triangulation of these cluster centers results in image regions that are watermarked. Thus, this watermarking technique is related to the triangulation method described above; however, different features are used for the triangulation. The method proposed by Nikolaidis and Pitas is based on the iterated conditional modes (ICM) algorithm for clustering. The resulting regions are merged, and the largest regions of the final result are used for embedding the watermark. Before watermarking, the regions are approximated by ellipsoids. The bounding rectangle of each ellipsoid is used for the embedding and detection of the watermark.

3.3 Problem Formulation

An appropriate framework is required to investigate the performance of different watermarking methods in a fair way. Our analytic treatment of the problem will be based on stochastic modeling, reflecting the fact that the different acting parties often do not have deterministic knowledge of several of the variables involved, such as the host signal, the embedded data, or the key. With actual signals, however, operations are deterministic for the embedder and the decoder. The technique for digital watermarking that will be followed is summarized in Figure 2.1. It is A Second Order Spread Spectrum Modulation Scheme for Wavelet Based Low Error Probability Digital Image Watermarking. The reason for selecting this scheme is that it is a low error probability scheme that gives accurate results and has proved to be robust against attacks.

As mentioned above, a number of techniques have been developed for the purpose of watermarking, but the problem with many watermarking schemes is that they fail to be tolerant against attacks like rotation, flipping, scaling, and compression. The scheme used by the group here for the implementation of the digital watermarking software is robust and tolerant against these types of attacks, and after implementation of the software it was verified that the software inserts and retains a watermark in the image even if such attacks are made on the image.

So the problem here is not to define a new technique for watermarking, but to implement a watermarking scheme that is free of bugs and proven to be robust.

3.4 Generally Used Digital Watermarking Techniques

Digital watermarking techniques take advantage of the limitations of human perception. They embed the watermark in the cover signal in such a way that the watermark does not alter the human perception of the cover signal and can be detected only using signal processing techniques. Digital watermarking techniques can be used successfully with digital content in various forms, like still images, video clips, audio tracks, and text documents. One of the simplest methods of inserting a digital watermark in a still image is called Least-Significant-Bit (LSB) watermarking. In this technique, the lower order bits of selected pixels in the image are used to store watermarks. Techniques like flipping the lower order bits, replacing the lower order bits of each pixel with higher order bits of a different image (e.g., a company logo), superimposing a watermark image over an area of the image to be watermarked, and adding some fixed intensity value are used to embed watermarks in the spatial domain. Watermarks can also be inserted in the frequency/transform domain by applying transforms like the Fast Fourier Transform (FFT) and then altering the values of selected transform coefficients to store the watermark in still images. In addition, color separation techniques, where the watermark appears only in one of the color bands, can be used for insertion of watermarks in still images. Still image watermarking techniques can be extended and used with video signals, since video can be considered a sequence of still images. However, digital video is generally stored and distributed in compressed format (e.g., MPEG-2, MPEG-4), in which the compression algorithms take advantage of temporal redundancy in the video, and hence still image watermarking techniques cannot be used effectively in such cases. So techniques that embed watermarks in the raw video signal in such a way that the watermarks are unaffected by compression, and techniques that can directly embed watermarks in compressed video, are considered effective and desirable. Further, one of the main constraints faced by video watermarking techniques is that they should not alter the bit-rate of the video. Discrete Wavelet Transform (DWT) based mechanisms, where the watermarks are embedded in the lowest frequency components in a controlled quantization process, have been proposed. Further, techniques which can embed watermarks in compressed bit streams by transforming the watermark signal using the Discrete Cosine Transform (DCT) and directly embedding it in the compressed video bit stream have been proposed. The main constraint in watermarking audio signals is that the watermark should be inaudible to the listener. However, this constraint poses a serious problem: since digital audio is generally compressed using audio compression algorithms that tend to remove or mask inaudible frequencies, watermarks which are inaudible are also easily lost. Several techniques, like bit stream watermarking and PCM watermarking, are used to watermark audio material. In bit stream watermarking, the watermark data is directly inserted into compressed audio files. In contrast, PCM watermarking embeds the watermark in uncompressed audio data by hiding a broadband data signal below the audio signal's masking threshold. Further, techniques like the insertion of low-amplitude echoes are used to embed watermarks in audio material. Techniques like word space coding, line space coding, and character coding are used for the insertion of watermarks in text documents. In word space coding, the spaces between words are subtly and imperceptibly altered to embed the watermark code. In line space coding, the spaces between lines are altered, and in character coding some of the characters are imperceptibly modified (i.e., made larger, serifs enhanced, etc.).

Watermarking techniques that use spread spectrum communication principles are used for the insertion of robust watermarks in audio and video signals. These techniques embed the watermark as a low-energy, pseudo-randomly generated white-noise sequence which can be detected using correlation. New classes of watermarking techniques, known as second generation watermarking (2GW) systems, which embed watermarks in perceptually significant portions of the signal by analyzing the semantic content of the signal, are being increasingly proposed and used.
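
The correlation detection step can be sketched as follows in Java: the detector regenerates the pseudo-noise sequence from the secret key and computes the normalized correlation against the received samples. The bipolar PN generation and the fixed detection threshold are illustrative assumptions; real systems derive the threshold from a target false-positive probability:

    import java.util.Random;

    // Minimal sketch of correlation-based detection of a pseudo-random
    // white-noise watermark regenerated from the secret key.
    public class CorrelationDetector {

        // Regenerate the bipolar (+1/-1) PN sequence from the key.
        static double[] pnSequence(int n, long key) {
            Random rng = new Random(key);
            double[] pn = new double[n];
            for (int i = 0; i < n; i++) {
                pn[i] = rng.nextBoolean() ? 1.0 : -1.0;
            }
            return pn;
        }

        // Normalized correlation between the received samples and the PN code.
        static double correlate(double[] received, double[] pn) {
            double dot = 0.0, normR = 0.0, normP = 0.0;
            for (int i = 0; i < pn.length; i++) {
                dot += received[i] * pn[i];
                normR += received[i] * received[i];
                normP += pn[i] * pn[i];
            }
            return dot / Math.sqrt(normR * normP);
        }

        static boolean detected(double[] received, long key, double threshold) {
            double[] pn = pnSequence(received.length, key);
            return correlate(received, pn) > threshold;
        }
    }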

There are many different ways to embed a watermark or hide data within other data. To better understand the process of data hiding, certain terms are used when describing the different processes. The data to be hidden will be referred to as the embedded data, the host data is the file in which the data is to be embedded, and the file in which the data has been hidden will be referred to as the cover data. Watermarking and other hidden data are usually hidden within text, image, or audio files. This can be done in many different ways. Regardless of the method used to hide the data, certain restrictions must be kept in mind in order to produce a useful watermark. Bender et al. established guidelines in order to ensure the efficiency and accuracy of the watermarks. These guidelines are as follows [25]. The host data should not be degraded and the embedded data should be minimally perceptible. The embedded data should be directly encoded into the media, so that the data remains intact across varying data file formats. The embedded data should be immune to modifications. Asymmetrical coding of the embedded data is desirable. Error correction coding should be used to ensure data integrity. The embedded data should be self-clocking or arbitrarily re-entrant, so that the embedded data can be recovered when only fragments of the cover data are available.

These are only some of the features that programmers keep in mind when deciding which methods to use in hiding their data. Depending on the type of host data they are using, different techniques may be considered. Although almost all of the following methods can be used for all three different types of data (text, image, and audio files), the ones listed below are the ones that will be covered in greater detail.

3.5 The Adopted Digital Watermarking Technique

As stated above, there are a number of watermarking techniques available. The scheme we have used in our software for digital watermarking is A Second Order Spread Spectrum Modulation Scheme for Wavelet Based Low Error Probability Digital Image Watermarking, proposed by D.A. Karras, Chalkis Institute of Technology, Automation Dept., and Hellenic Open University, Rodu 2, Ano Iliupolis, Athens 16342, Greece.

Figure 3.5: The General Watermarking Scheme

3.6 Reason for Selecting This Technique

The purpose of using this technique is that it is an integrated second order approach, using spread spectrum modulation techniques and the wavelet methodology to model low error probability digital image watermarking, which is very tolerant against geometric and compression attacks. The combination of FEC, modulation schemes, multiplexing, and multiple access techniques constitutes the stage of channel coding. The modulation scheme (direct sequence spread spectrum) is applied in this scheme on the wavelet domain. However, this modulation occurs after modulating the digital watermark signal with an analog one, which consists of the summation of a properly selected set of sinusoids [26]. This further modulation is imposed on the watermark in order to enhance its resistance in noisy signal/image transmission environments. The adopted approach is, therefore, a second order watermark modulation/embedding scheme. It is imposed after application of the Forward Error Correction (FEC-RS) methodology. The technique evaluates the merits of this second order embedding technique at different SNRs, for binary-sequence watermarks embedded in 2-D images, using the probability of error in watermark detection as a criterion. While modeling spread spectrum watermarking usually involves embedding a watermark in the spread spectrum of a signal/image, it is herein attempted to improve the robustness and fault tolerance of the transform domain watermarking approach by properly modulating the watermark, using a set of predefined sinusoids, so that even severe destruction of signal/image blocks will result in the graceful degradation of the extracted watermark quality.
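
The second order modulation idea can be sketched, with considerable simplification, as follows: each FEC-encoded watermark bit, mapped to +1/-1, is multiplied by an analog carrier formed as the summation of a small set of sinusoids, and the resulting samples are what the spread spectrum stage then embeds. The number of samples per bit and the carrier frequencies below are illustrative assumptions; the paper specifies its own properly selected set of sinusoids:

    // Simplified sketch of second order watermark modulation: each bit,
    // mapped to +1/-1, is multiplied by a carrier built as a sum of
    // sinusoids sampled over the bit period.
    public class SecondOrderModulator {

        static double[] modulate(int[] bits, double[] frequencies,
                                 int samplesPerBit) {
            double[] out = new double[bits.length * samplesPerBit];
            for (int i = 0; i < bits.length; i++) {
                double symbol = (bits[i] == 1) ? 1.0 : -1.0;
                for (int t = 0; t < samplesPerBit; t++) {
                    double carrier = 0.0;
                    for (int k = 0; k < frequencies.length; k++) {
                        carrier += Math.sin(2.0 * Math.PI * frequencies[k]
                                * t / samplesPerBit);
                    }
                    out[i * samplesPerBit + t] = symbol * carrier;
                }
            }
            return out;
        }
    }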

3.7 The Adopted Algorithm/Technique Description

In the preparation of this software we used the above mentioned scheme for digital watermarking, adopting the idea that watermarking needs to be adaptive in order to be robust. The herein proposed method places the watermark on the most perceptually significant components of an image. To this end, since the proposed methodology is based on transform domain principles, only those coefficients of the transform (the Discrete Wavelet Transform, DWT) that are most important for preserving the perceptual characteristics of the image, i.e. the highest energy coefficients, are involved in the watermark embedding process. The logic behind this premise is quite simple: a watermark that is non-intrusive is one which resembles the image it is designed to protect. By virtue of its similarity to the image, any operation that is intentionally performed to damage the watermark will also damage the image.

The general watermarking scheme adopted here in the software for applying the proposed integrated spread spectrum and watermark modulation approach is illustrated in the next figure.

3.8 Key Steps to Algorithm Understanding

In this section, we present the proposed algorithm used in the development of our software project, which might form the basis for more sophisticated transform domain algorithms. The proposed approach is based on the block mean technique as well as on the transform domain watermarking methodology involving the DWT. However, it differs from similar approaches in the way it handles watermark embedding/detection. More specifically, as will be analyzed next, it involves a second order spreading-type modulation of the watermark for attaining low error probability and fault tolerance in noisy transmission conditions.

With respect to the block-mean approach, which is one of the bases of the proposed watermark scheme, we should mention that a number of other watermarking schemes have been developed, but we find this approach a good choice for the implementation of our software project. Basically, in the block mean approach the mean of each block is calculated, and then the block mean may be incremented to encode a 1 or decremented to encode a 0 (or vice versa). This is termed bi-directional coding. Alternatively, the mean may be incremented to encode a 1 and left unchanged to encode a 0. This is termed unidirectional coding.

The block-mean approach suffers from the grave disadvantage that an adversary who is in possession of a number of independent copies of the image can compare the different copies and read most, if not all, of the encoded message. Several experiments show that the expected number of undetected bits decreases exponentially with the number of copies. In some techniques this particular weakness is treated by randomizing both the size of the blocks and the positions of the blocks inside the image. Despite its simplicity, the block-mean method of marking images has proven to be highly robust to lossy image compression, photocopying, color scanning, and dithering. The number of bits that may be encoded using the block-mean approach equals the number of blocks, and this in turn depends on the size of the image and the block size, as well as the width of the borders around blocks. Realistically, for a typical image of size 256 × 256 pixels, the number of bits that one can expect to encode is approximately one hundred. This number of bits may be adequate for some applications, even after taking into account the need for redundancy in the code for error detection and correction as well as code word authentication. However, as is well known, this capacity may be greatly increased by watermarking in the transform domain.
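
A minimal Java sketch of bi-directional block-mean coding, with illustrative block and step sizes, looks as follows; decoding here is non-blind, comparing each received block mean against that of the original image:

    // Minimal sketch of bi-directional block-mean coding: the mean intensity
    // of a block is raised by a small step to encode a 1 and lowered to
    // encode a 0. Block size and step size are illustrative choices.
    public class BlockMeanCoder {

        // pixels: gray-scale image, one int (0..255) per pixel.
        static void encodeBit(int[][] pixels, int blockX, int blockY,
                              int blockSize, int bit, int step) {
            int delta = (bit == 1) ? step : -step; // bi-directional coding
            for (int y = blockY; y < blockY + blockSize; y++) {
                for (int x = blockX; x < blockX + blockSize; x++) {
                    int v = pixels[y][x] + delta;
                    pixels[y][x] = Math.max(0, Math.min(255, v)); // clip to range
                }
            }
        }

        // Non-blind decoding: compare the received block mean against the
        // mean of the corresponding block in the original image.
        static int decodeBit(int[][] marked, int[][] original, int blockX,
                             int blockY, int blockSize) {
            long sumM = 0, sumO = 0;
            for (int y = blockY; y < blockY + blockSize; y++) {
                for (int x = blockX; x < blockX + blockSize; x++) {
                    sumM += marked[y][x];
                    sumO += original[y][x];
                }
            }
            return (sumM >= sumO) ? 1 : 0;
        }
    }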

Following the block mean step, a technique for determining the number of bits to be placed at given locations in the image should be described. Note that in this section the DWT will be applied exclusively. Other transforms, mainly the DCT, have already been applied in the literature (not within the selected/proposed integrated watermarking framework, but at least the DCT has been investigated as the main transform domain technique). Before proceeding with the suggested novel transform domain watermarking scheme, however, it is important to outline the principles of transform domain spread spectrum watermarking as used in our technique, so as to understand the differences between our approach and other similar schemes used for transform domain watermarking.

After having studied a number of watermarking schemes and JPEG compression, the following were extracted as the key steps for a good watermarking scheme. We applied them in the adopted scheme and followed them in the implementation of the digital watermarking software:

1. Divide the image into blocks.
2. Subtract the mean of the block from each pixel in the block.
3. Normalize the pixel values within each block so that they range between -127 and 127.
4. Compute the DWT of the image block.
5. Modulate selected coefficients of the transformation (e.g., using bi-directional coding), embedding the second order modulated watermark. The coefficients that are selected are those that are most relevant to the intelligibility of the image. This step is very important in the proposed methodology and is analyzed afterwards. Figure 2 depicts the main steps involved.
6. Compute the inverse transform (IDWT), de-normalize, add the mean to each pixel in the block, and replace the image block in the image.

Watermark detection is easily performed by carrying out Steps 1 to 4 above on the original image and the watermarked image in parallel and comparing the values of the coefficients.

One of the most important factors in embedding a bit stream in an image, apart from robustness, is determining the number of bits that can be placed into a given image block. In a highly textured image block, energy tends to be more evenly distributed amongst the different DWT coefficients. In a flat, featureless portion of the image, the energy is concentrated in the low frequency components of the spectrum. As stated earlier, the aim is to place more information bits where they are most robust to attack and least noticeable. This may be accomplished by using a simple threshold technique: a threshold is set, and the transform coefficients are selected or deselected on the basis of that threshold. The scheme can be briefly described by the following figure.
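
In code, the threshold selection can be sketched as follows; the threshold value is illustrative and would in practice be tuned, for example relative to the energy of the block:

    // Minimal sketch of threshold-based coefficient selection: DWT
    // coefficients whose magnitude exceeds the threshold are marked as
    // carriers for watermark bits; the rest are left untouched.
    public class CoefficientSelector {

        static boolean[] select(double[] dwtCoefficients, double threshold) {
            boolean[] selected = new boolean[dwtCoefficients.length];
            for (int i = 0; i < dwtCoefficients.length; i++) {
                selected[i] = Math.abs(dwtCoefficients[i]) > threshold;
            }
            return selected;
        }
    }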

High-GTC (transform coding gain) transforms like the DWT are best suited to watermarking applications, which is why we adopted this transform domain watermarking scheme. Since high-GTC transforms provide the most compact representation of the image, attacking the DWT coefficients in order to remove the watermark will most likely destroy the image itself. From the complete set of DWT coefficients we choose a medium-to-high energy subset for watermarking (the principle is similar to that of wavelet image compression), as described in the steps above. The remaining steps are illustrated in Figure 2, which shows the modulation of the selected transform domain DWT coefficients for embedding the watermark. The selected coefficients F(k1, k2) first undergo a key-based transform to obtain the coefficients C(k1, k2) used for embedding the signature. The signature sequence S to be embedded in C(k1, k2) may be obtained as a pseudorandom binary sequence from a random sequence generator (RSG) triggered by the key K (which in turn is derived by hashing the original image). The coefficients obtained after embedding then undergo the inverse key-based transform to give the modified DWT coefficients, which together with the unmodified coefficients are inverted to obtain the watermarked image, or stego-image.

To complete the description of the embedding process we should outline the second order modulation of the signature S to be embedded in C(k1, k2), i.e. the second order modulation of the watermark.

There are likewise a number of sub-steps involved in this part of the scheme. The watermark detection and extraction phases are not discussed in detail here, to keep the description of the algorithm manageable; they are nevertheless an essential part of the digital watermarking project and are treated at length in Chapter 5.

3.9 Experimental Study and Discussion of the Results

In order to evaluate any watermarking methodology it is necessary to assess its performance under various attacks. The main attacks mentioned in the literature are as follows.

Robustness attacks: intended to remove the watermark, e.g. JPEG compression, filtering, cropping, histogram equalization, additive noise.

Presentation attacks: intended to cause watermark detection to fail, e.g. geometric transformation, rotation, scaling, translation, change of aspect ratio, line/frame dropping, affine transformation.

Counterfeiting attacks: intended to render the original image useless, generate a fake original, or create a deadlock problem.

The experimental section presented here shows preliminary results on the performance of the proposed watermarking method. An extensive experimentation campaign, including all kinds of attacks and comparisons with other efficient watermarking methods, is very important and is currently being conducted by the authors. The results outlined here, however, are quite promising and indicative of the method's potential.

The following are some of the methods used to test whether a watermark can survive different changes to the image in which it is embedded.

Horizontal Flipping

Many images can be flipped horizontally without losing quality. Attacks like horizontal flipping generally remove the watermark information if it is not properly bound to the image data; few watermarks survive flipping, although resilience to flipping is easy to implement.

[Figure: the original image compared with the attacked images at right; compare them to see whether any change in quality can be spotted.]

Rotation & Cropping

A small rotation with cropping doesn't reduce image quality, but can make watermarks undetectable, as rotation misaligns the horizontal features of an image that are used to check for the presence of a watermark. The example at left has been rotated 3 degrees to the right and then had its edges cropped to make the sides straight again.

JPEG Compression/Re-compression

JPEG is a widely used compression algorithm for images, and any watermarking system should be resilient, to some degree, to compression or to a change of compression level, e.g. from 71% to 70% quality.

Scaling

Uniform scaling enlarges or shrinks an image by the same percentage in the horizontal and vertical directions. Non-uniform scaling, like the example at left, scales the image horizontally and vertically at different rates. Digital watermarking methods are often resilient only to uniform scaling.

Dithering

Dithering approximates colors not in the current palette by alternating two similar available colors from pixel to pixel. Done correctly, this method can completely obliterate a watermark; however, it can make an image appear patchy when over-dithered.

StirMark

StirMark is the industry-standard software used by researchers to automatically attempt to remove watermarks created by Digimarc, SysCoP, JK_PGS (the TALISMAN project / E.P.F.L. algorithm), Signum Technologies and EIKONAmark. StirMark attacks a given watermarked image using all the techniques mentioned in this report as well as more esoteric ones such as low-pass filtering, gamma correction and sharpening/unsharpening.

Chapter 4 The Software Design and Module Division

4.1 Introduction

This chapter describes the complete design and operation of the Digital Watermarking for Images software and the process the group followed to implement it. The module division is also defined in this chapter. Its purpose is to make the project design and implementation clear to the reader. The work done by the group members is divided into modules so that it can be presented separately.

After reading the preceding chapters the reader should know what digital watermarking is and where it can be used effectively [see chapters 1, 2], so the purpose of creating this software product should now be clear. The Digital Watermarking for Images software was developed by the group as a final year project. It serves the basic purpose for which digital watermarking is used, i.e. copyright protection, or in other words identification of the original owner. The software is capable of inserting (embedding), detecting and extracting a watermark in digital images, and it incorporates current techniques and trends in digital watermarking. The key feature that differentiates it from other tools is described in the following lines. As mentioned above, this is a research project in the comparatively young field of digital watermarking, and there are still doors in this field waiting to be opened.

4.2 Software Description

The Digital Watermarking for Still Images software is built on the technique discussed in section 3.5 (A Second Order Spread Spectrum Modulation Scheme for Wavelet Based Low Error Probability Digital Image Watermarking); see also figure 3.1.

As far as the project design is concerned, it is deliberately simple: since this is a partially scientific project, the design matters less than the actual operation, so most of the emphasis is on the working of the system. The design is, of course, made up of different modules, and combining them all at the end yields the complete project design. In preparing this software we used the above-mentioned scheme [3.5] for digital watermarking and adopted the idea that watermarking needs to be adaptive in order to be robust. The method places the watermark on the most perceptually significant components of an image. To this end, since the proposed methodology is based on transform domain principles, only the coefficients of the transform (the Discrete Wavelet Transform, DWT) that are most important to the preservation of the image's perceptual characteristics, i.e. the highest-energy coefficients, are involved in the watermark embedding process. The logic behind this premise is quite simple: a non-intrusive watermark is one that resembles the image it is designed to protect, and by virtue of this similarity any operation intentionally performed to damage the watermark will also damage the image.

The general watermarking scheme adopted here in the software for applying the proposed integrated spread spectrum and watermark modulation approach is illustrated in the next figure.

The figure, however, gives only a brief introduction to the scheme; the detailed description is provided in section 3.5 of chapter 3. For a complete understanding of the project the group assumes that the reader has read and understood section 3.5 and the introductory chapters.

As far as the software project is concerned, it is fully GUI-based. The programming approach is purely object-oriented, implemented in Java (JDK 1.4). The GUI was designed with the end user in mind: for software that inserts, detects or extracts a watermark it is not strictly necessary to perform each task through a GUI, and the software is already quite processor-intensive because it manipulates the pixel values of the image for all of its work, so the GUI exists purely for the user's convenience. Its advantage is that the end user gets easier access to the software and more options for inserting a watermark, with a watermark capacity that depends on the size of the image. The whole implementation can be divided into the following modules, which the group implemented to develop the Digital Watermarking for Images software.

4.3 Software Project Modules Division


The software modules clarify the work done by the individual group members. As the project was developed as a group, certain things were done collectively by both members, but to meet the project committee's requirements the group divided the project into the following modules; the working and implementation details of each module are presented in the separate reports of the individual members. In general, the project divides into the following modules.

1. Making place in the image for the watermark
2. The watermark computation/generation
3. Embedding the watermark
4. Detecting/extracting the watermark

Chapter 5 Software Modules

5.1 Introduction

A general look at digital watermarking shows that there is always some room in an image in which watermark information can be placed. How effectively that room can be found depends on the algorithm used for watermarking. In plain words, the problem is this: take an input image and make enough room in it to place the watermark information in the pixel values in such a way that there is no perceptual difference between the image before and after embedding. Making this room in the image is the subject of this module. There are always some areas of an image that are insignificant to the human eye; if minor changes are made in those areas, the human eye is unable to detect them. This is the main property exploited by watermarking algorithms. The module therefore detects and prepares the image areas that are perceptually insignificant, so that they can carry the watermark information. This involves a number of sub-steps; the main ones are shown in the following figure, and the rest are described in detail below.

[Figure 5.1 Module 1, basic sub-steps: cover image -> 2-D wavelets -> select coefficients -> key-based transformation -> DWT-based signaling; the wavelet phase and the unused magnitude wavelet coefficients are set aside for later reconstruction.]

5.2 Steps to Implement the Software Module

The detailed steps and sub-steps of Module 1 of Digital Watermarking for Images follow.

5.2.1 Select the Cover Image

The image selection process is implemented to give the user flexibility: the user can select any JPEG image stored on the hard disk of his system. Selection is fully GUI-based, making it convenient to locate any still JPEG image to be watermarked. An image of about 100x100 pixels is recommended, because as the image resolution increases, so does the processing time.

The software reads the image from the specified location (via the GUI), creates a label of the same size as the image, places the image on the label, and places the label in the middle of the frame. If the image is larger than the frame window, the software simply displays the portion of the image that fits the frame.
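A sketch of this step using the standard javax.imageio and Swing APIs available since JDK 1.4 (the file name and frame size below are placeholders, not the project's actual values):

    import java.awt.BorderLayout;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;
    import javax.swing.*;

    public class CoverImageViewer {
        public static void main(String[] args) throws Exception {
            // Read the user-selected cover image (path is a placeholder).
            BufferedImage img = ImageIO.read(new File("cover.jpg"));

            JFrame frame = new JFrame("Digital Watermarking - Cover Image");
            JLabel label = new JLabel(new ImageIcon(img)); // label takes the image size
            frame.getContentPane().add(label, BorderLayout.CENTER);
            frame.setSize(400, 400);        // a larger image is simply clipped
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        }
    }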

The image selection step matters because the user may select an image for any of the following purposes: watermark embedding, watermark detection, or watermark extraction. The appropriate image should therefore be selected with respect to the intended task.

5.2.2 Calculating the Hash Function

After one of the tasks has been selected, the software calculates a hash function of the given image for the purpose of key generation. This step comprises a number of sub-steps. The hash of the image is computed to obtain a hash key, which is used throughout the watermarking process; the key is calculated from two things, the password entered by the user and the hash of the image. The sub-steps are as follows.

Read Image Pixel Values (RGBA)

In this step the image's pixel values are read. The process is quite slow because four values are read for each pixel (red, green, blue and alpha), and all of them are stored in a four-plane array whose size equals the number of image pixels. Arrays of this size are slow to manipulate and reduce efficiency; some specialized techniques later in the processing compensate for this, but there is no practical alternative to arrays for storing the pixel values.
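A sketch of the reading step: BufferedImage.getRGB returns a packed 0xAARRGGBB integer per pixel, which is split into the four 8-bit channels (the plane-per-channel array layout is our choice for illustration):

    import java.awt.image.BufferedImage;

    class PixelReader {
        // Returns channels[4][h][w]: 0 = red, 1 = green, 2 = blue, 3 = alpha.
        static int[][][] readRGBA(BufferedImage img) {
            int w = img.getWidth(), h = img.getHeight();
            int[][][] ch = new int[4][h][w];
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    int argb = img.getRGB(x, y);       // packed 0xAARRGGBB
                    ch[0][y][x] = (argb >> 16) & 0xFF; // red
                    ch[1][y][x] = (argb >> 8)  & 0xFF; // green
                    ch[2][y][x] =  argb        & 0xFF; // blue
                    ch[3][y][x] =  argb >>> 24;        // alpha
                }
            }
            return ch;
        }
    }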

Convert Image from RGB to HSB

After reading and storing the pixel values, they are converted to another color model. The RGBA values are converted to the HSB (Hue, Saturation, Brightness) color model. HSB was chosen because the basic idea of changing the color model is to eliminate the fourth component (alpha, which encodes the transparency of the pixel): in HSB each color is represented by three components (24 bits) rather than 32 bits. The converted values are stored in a separate three-plane array, because both the HSB values and the RGBA values need to be manipulated later.
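The conversion itself is provided by the standard library; a sketch using java.awt.Color.RGBtoHSB, which fills a three-element float array with hue, saturation and brightness in [0, 1]:

    import java.awt.Color;

    class HsbConverter {
        // hsb[3][h][w]: plane 0 = hue, 1 = saturation, 2 = brightness.
        static float[][][] toHSB(int[][][] rgba, int w, int h) {
            float[][][] hsb = new float[3][h][w];
            float[] tmp = new float[3];
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    Color.RGBtoHSB(rgba[0][y][x], rgba[1][y][x], rgba[2][y][x], tmp);
                    hsb[0][y][x] = tmp[0];
                    hsb[1][y][x] = tmp[1];
                    hsb[2][y][x] = tmp[2];
                }
            }
            return hsb;
        }
    }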

Reading the User Password

After the color model conversion the software reads a password string from the user. This is the secret known only to the legal or original owner of the digital image, because on the basis of this password string the software generates the hash key that is used throughout the process, whether for embedding or for detecting the watermark. A password string of N characters is read from the user through a dialog box, using the GUI to make the software easy to handle.

Convert Each Character to an Integer

The ASCII (integer) value of each character of the entered password string is determined independently, and these values are stored for the next step.

Analyze the Entered String

Duplication is handled in this step: the string is checked for duplicate characters, and if a character occurs more than once, 256 is added for its first repetition, 512 for the second, 768 for the third, and so on. Duplicated characters in the password string would otherwise yield identical seeds in the next step and create a number of problems.
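A sketch of the character-to-seed mapping with the duplicate-handling rule just described (method and variable names are illustrative):

    class PasswordSeeds {
        // Maps each password character to its integer code, offsetting
        // repeats by 256, 512, 768, ... so duplicates yield distinct seeds.
        static int[] toSeeds(String password) {
            int[] seeds = new int[password.length()];
            for (int i = 0; i < password.length(); i++) {
                char c = password.charAt(i);
                int repeats = 0;
                for (int j = 0; j < i; j++)
                    if (password.charAt(j) == c) repeats++;
                seeds[i] = c + 256 * repeats;
            }
            return seeds;
        }
    }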

Use Each Integer to Generate Random Numbers

The software now generates random numbers, using each converted integer value as the seed of a random number generator producing numbers between 0 and 1, and places the generated numbers into an H x W (image height x image width) array. This step is repeated for each character of the password, so at the end there are N random arrays of size H x W filled with random numbers between 0 and 1.

Calculation and Subtraction of the Mean Value

With the N random arrays computed, the next step is to find the mean value of each array and subtract it from every element of that array, again giving N arrays of the same size H x W but with different values. The original random arrays generated in the previous step are no longer required, so their values are overwritten to save space and improve efficiency.

Multiply Each Array with the Image Values

After the modified random arrays have been generated, each element of a random array is multiplied with the B (brightness) component of the corresponding HSB value, taking the absolute value of the products and summing them at the same time. This is a computationally heavy step and deserves attention. It results in a single double-precision value, of which only the integer part is kept (the digits after the decimal point are ignored). Repeating this step for each of the N arrays gives N integer values, and it is from these integers that the key is actually calculated.

Selection of the Threshold for Key Generation

A threshold is now required for generating the key of zeros and ones. The N integers (one per password character) are counted: if their number is odd, the median of the N integers is the threshold; if it is even, the N integers are sorted and divided into two equal halves, and the average of the last number of the first half and the first number of the second half is the threshold.

Assign Zero or One

On the basis of the selected threshold, the key of zeros and ones is calculated by keeping the integers in their original order and assigning 0 to each integer below the threshold and 1 to each integer above it. This sequence of 1s and 0s is the hash key, and it is used throughout the embedding and extraction stages.
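Pulling the last few sub-steps together, here is a compact sketch of the key computation as we reconstruct it from the description above (method names are illustrative; the treatment of a value exactly equal to the threshold is not specified in the text, so ties are assigned 0 here):

    import java.util.Arrays;
    import java.util.Random;

    class HashKey {
        static int[] compute(int[] seeds, float[][] brightness, int h, int w) {
            int n = seeds.length;
            int[] values = new int[n];
            for (int s = 0; s < n; s++) {
                // One H x W random array per password character.
                double[][] r = new double[h][w];
                double mean = 0;
                Random rng = new Random(seeds[s]);
                for (int y = 0; y < h; y++)
                    for (int x = 0; x < w; x++) {
                        r[y][x] = rng.nextDouble();
                        mean += r[y][x];
                    }
                mean /= (double) h * w;
                // Mean-centre, multiply with the brightness plane,
                // and sum the absolute products.
                double sum = 0;
                for (int y = 0; y < h; y++)
                    for (int x = 0; x < w; x++)
                        sum += Math.abs((r[y][x] - mean) * brightness[y][x]);
                values[s] = (int) sum;                 // keep the integer part
            }
            // Threshold: median for odd n; for even n, the average of the
            // two central values of the sorted list.
            int[] sorted = (int[]) values.clone();
            Arrays.sort(sorted);
            double threshold = (n % 2 == 1)
                    ? sorted[n / 2]
                    : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
            int[] key = new int[n];
            for (int i = 0; i < n; i++) key[i] = values[i] > threshold ? 1 : 0;
            return key;
        }
    }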

5.2.3 Applying the Discrete Wavelet Transformation

Although the discretized continuous wavelet transform enables the computation of the continuous wavelet transform by computers, it is not a true discrete transform. As a matter of fact, the wavelet series is simply a sampled version of the CWT, and the information it provides is highly redundant as far as the reconstruction of the signal is concerned; this redundancy also demands a significant amount of computation time and resources. The discrete wavelet transform (DWT), on the other hand, provides sufficient information for both analysis and synthesis of the original signal, with a significant reduction in computation time. The theory behind wavelet multi-resolution analysis is discussed in detail earlier [3.2.1]; the short introduction here merely refreshes the reader's memory.

A function that produces a set of high-frequency differences, or wavelet coefficients, is called a wavelet function; wavelets and the DWT (Discrete Wavelet Transform) are discussed in detail in chapter 2. After the hash key has been determined, the next step is the wavelet transformation: the discrete wavelet transform of the image (RGBA) values must be calculated.

Many types of wavelet exist. We selected the Daubechies D4 wavelet transform: these wavelets are robust against geometrical attacks, are efficient, and have gained considerable popularity in the fields of digital watermarking and image compression. The basic functionality of the wavelet transform is to apply two types of filter.

1) Low-pass filter: gives the average of the even and odd (consecutive) pixels, which yields a smoother, low-pass signal.

2) High-pass filter: gives the frequency differences between the even and odd (consecutive) pixels.

Applying the wavelet transform yields a reduced number of coefficients of two kinds: scaling coefficients and transform coefficients. From here onwards only the transform coefficients are processed for embedding or detection; the scaling coefficients remain important, however, because they are used to reconstruct the image after all processing has been completed.
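For reference, one decomposition level of the D4 transform on a one-dimensional signal, in a standard textbook formulation (this is not lifted from the project source; the periodic wrap-around at the boundary is our simplification):

    class DaubechiesD4 {
        static final double S3 = Math.sqrt(3.0), Q = 4.0 * Math.sqrt(2.0);
        // Low-pass (scaling) taps, and the high-pass (wavelet) taps derived from them.
        static final double[] H = { (1 + S3) / Q, (3 + S3) / Q, (3 - S3) / Q, (1 - S3) / Q };
        static final double[] G = { H[3], -H[2], H[1], -H[0] };

        // One level: the first half of the output holds the scaling (low-pass)
        // coefficients, the second half the wavelet (high-pass) coefficients.
        // The input length n must be even.
        static double[] forwardStep(double[] a) {
            int n = a.length;
            double[] out = new double[n];
            for (int i = 0; i < n / 2; i++) {
                double low = 0, high = 0;
                for (int k = 0; k < 4; k++) {
                    double v = a[(2 * i + k) % n];   // periodic boundary
                    low  += H[k] * v;
                    high += G[k] * v;
                }
                out[i] = low;                         // smooth average signal
                out[n / 2 + i] = high;                // detail differences
            }
            return out;
        }
    }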

5.2.4 Select Coefficients

In this step a selection is made among the coefficients calculated above, introducing another threshold. After careful analysis of the coefficients obtained from various images, we concluded that the coefficient selection threshold should be ±20, i.e. only those coefficients lying above -20 and below +20 are selected; all other coefficients are left unchanged in their locations. The number of selected coefficients depends on the image size (W x H).
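As a sketch, marking which coefficients fall inside the ±20 band (the boolean-mask representation is our choice; the unselected coefficients are simply left in place, as stated above):

    class CoefficientSelector {
        static final double THRESHOLD = 20.0;  // chosen empirically, per the text

        // true = coefficient lies strictly between -THRESHOLD and +THRESHOLD
        // and is therefore selected for carrying watermark information.
        static boolean[] select(double[] coeffs) {
            boolean[] selected = new boolean[coeffs.length];
            for (int i = 0; i < coeffs.length; i++)
                selected[i] = coeffs[i] > -THRESHOLD && coeffs[i] < THRESHOLD;
            return selected;
        }
    }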

5.2.5 Key Based Transformation

Key-based transformation is in fact a further selection among the already selected coefficients (the output of the previous step). It yields a further reduced set of coefficients, chosen on the basis of the key value. First we determine how many coefficients are to be selected; these are the coefficients that will carry the watermark sequence, and their number again depends on the image size (H x W). The general formula for the number of key-based transform coefficients is:

N = user-selected watermark length after Viterbi encoding and DSSS + parity

This gives only the number of key-based transform coefficients, not the coefficients themselves, so the N coefficients must still be found among the transformed coefficients. We wrote a random function with whose help the N key-based coefficients are selected at random from the threshold-selected coefficients. The basic idea of key-based selection is to embed the watermark at known locations, so that the original owner can be identified by matching the key. At the end of this step we have the key-based selected coefficients that will carry the watermark information.
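A sketch of such a key-driven selection: folding the hash-key bits into a seed makes the draw reproducible, so the embedder and the detector pick the same N positions (the seed-folding rule here is illustrative, not the project's exact function; n must not exceed the number of candidates):

    import java.util.Random;

    class KeyBasedSelector {
        // Picks n distinct positions from the candidate indices,
        // reproducibly from the hash-key bits.
        static int[] pick(int[] keyBits, int[] candidates, int n) {
            long seed = 0;
            for (int i = 0; i < keyBits.length; i++)
                seed = seed * 2 + keyBits[i];
            Random rng = new Random(seed);

            int[] pool = (int[]) candidates.clone();
            int remaining = pool.length;
            int[] chosen = new int[n];
            for (int i = 0; i < n; i++) {          // partial Fisher-Yates draw
                int j = rng.nextInt(remaining);
                chosen[i] = pool[j];
                pool[j] = pool[--remaining];        // remove without re-scanning
            }
            return chosen;
        }
    }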

5.3 Introduction to the Watermark Information

From here onwards the theme changes: having obtained the coefficients that will carry the watermark information [Module 1], the watermark information itself is required. Before going further we must decide what the watermark is to be. What can a watermark be?

By definition, a watermark can be any of the following, provided it can be used efficiently for identifying the original owner:

Text.
Another image.
A combination of text and a logo.

Each of the above watermark types has disadvantages, and we are trying to develop a technique with fewer drawbacks than other watermarking techniques and software tools on the market, so we do not use any of them, especially in view of the size and color of the host (input) image. Since the basic purpose of a watermark is identification, the question is whether to use a separate image, signature or text as the watermark, or some other technique that is more robust against geometric attacks. What we adopt is a watermarking technique that is independent of the image's color and size, described in detail in the following lines.

In our technique for the Digital Watermarking for Images software we introduce a flexible, new approach to the selection of the watermark. Under this approach the watermark is not a separate object such as text, a logo or an image; it is a part of the input image itself, placed back into the image at locations known through the key. This has several advantages: the size and color of the input image become irrelevant to the selection of the watermark, because the watermark is taken from the host image and therefore matches its size and color, making conflicts rare, indeed practically impossible. Taking the watermark from the input/host image and putting it back into the image according to the key is the key feature of our software. Because the software places the watermark at locations known through the key, it can be extracted on the basis of the key or checked for identification. As we know, the basic idea of a watermark is to use something for identification, and it is not always necessary to extract the watermark; what is required of such software is to identify the original owner of the image, which our technique does easily.

Taking the watermark from the input image makes the watermarking scheme independent of watermark size and color. Consider a watermark image of size, say, 100 x 100 and an input image (into which the watermark is to be inserted) of size 50 x 50: embedding a watermark larger than the carrier image is clearly impossible. A more plausible situation is a 100 x 100 input image and a 50 x 50 watermark (logo), since ideally the watermark is smaller than the image that carries it, but even embedding a watermark half the size of the image is not feasible. Determining an accurate watermark-to-image ratio would require specialized formulas based on the image and watermark sizes, which might yield an incomplete watermark. We therefore propose to take the watermark from the image itself and place it back at known locations, not necessarily the locations it was taken from. Color raises a similar problem: if the watermark has multiple colors that differ strongly from the host image, it is very difficult to make the watermark imperceptible. Consider the following examples.

Figure 5.1 The size problem

Consider the example above: in the situation shown in the figure it is quite difficult for the software to insert a watermark image almost half the size of the host image, and even if the insertion succeeded, it would surely be impossible to keep the resulting watermarked image imperceptible. The same holds when the host image and the watermark differ in color, as the following example shows.

Figure 5.2 The color problem

Here the watermark color is totally different from the host image color. If the software forced a watermark of such a different color into the host image, the result would be plainly perceptible and the whole image would be ruined.

A watermark extracted from the image will certainly differ at the locations where it is inserted back, but the essential point is that it was extracted from the image and must resemble it, which is impossible in any other case. Nor do we need to worry about the size of the watermark, because the software is flexible and intelligent enough to determine a watermark size that can easily be inserted into an input image of any size. We now consider how this works; the general module steps are shown in the following figure.

[Figure: Module 2, basic sub-steps: information bits -> group bits -> Viterbi encoding -> DSSS -> parity generation -> inverse key-based transformation.]

5.4 How to Get the Watermark Information

The watermark information, as discussed above, is extracted from the host image. How many pixels or bits should be extracted? That depends on the number of key-based selected coefficients (see section 5.2): one third of that number is extracted. Why one third? Because the extracted watermark bits are not inserted into the host image as they are. First an error correction technique is applied so that the watermark information becomes secure and robust against attacks, and then the most distinctive feature of our technique, the DSSS, spreads the watermark information over the whole image so that it is difficult to remove from the watermarked image. All of these steps automatically increase the number of bits accompanying the extracted watermark bits, which is why one third of the key-based selected coefficients is recommended.

5.5 Security of the Watermark Information

Securing the extracted watermark bits before inserting them into the image is a crucial step, and a number of encoding schemes are available for this purpose. There are two main classes of error correction coding, forward error correcting schemes and backward error correcting schemes; in this software we used Viterbi (convolutional) coding for the error correction of the watermark information bits.

The purpose of forward error correction (FEC) is to improve the capacity of a channel by adding carefully designed redundant information to the data transmitted through it; the process of adding this redundancy is known as channel coding. Convolutional coding and block coding are the two major forms of channel coding. Convolutional codes operate on serial data, one or a few bits at a time, while block codes operate on relatively large message blocks (typically up to a couple of hundred bytes). There is a variety of useful convolutional and block codes, and a variety of algorithms for decoding the received coded sequences to recover the original data. Convolutional encoding with Viterbi decoding is an FEC technique particularly suited to bit error correction. Suppose a system transmits a 1 channel bit as a voltage of -1V and a 0 channel bit as +1V; this is called bipolar non-return-to-zero (bipolar NRZ) signaling, also called binary antipodal signaling (the signaling states are exact opposites of each other). The receiver comprises a comparator that decides the received channel bit is a 1 if its voltage is less than 0V and a 0 if its voltage is greater than or equal to 0V, sampling the comparator output in the middle of each data bit interval.

The Viterbi codes are described here to give the reader an idea of forward error correcting codes. With their help we make the watermark/information bits secure, sending two bits in place of every one watermark bit. Why send twice the required data? Because this makes the watermark more secure, more robust and harder for an attacker to recognize. The forward error correcting code is used solely to protect the watermark bits: if, for example, we receive the watermarked image and need to extract the watermark, the FEC enables the software to recover the original watermark/information bits.
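The report does not list the exact code parameters, so purely as an illustration, here is the textbook rate-1/2, constraint-length-3 convolutional encoder (generator polynomials 7 and 5 in octal) that Viterbi decoding is typically paired with; it emits the two channel bits per information bit mentioned above:

    class ConvolutionalEncoder {
        // Rate 1/2, K = 3; generators g0 = 111 (7 octal), g1 = 101 (5 octal).
        static int[] encode(int[] bits) {
            int[] out = new int[2 * bits.length];
            int s1 = 0, s2 = 0;                 // two-bit shift register
            for (int i = 0; i < bits.length; i++) {
                int b = bits[i];
                out[2 * i]     = b ^ s1 ^ s2;   // g0 taps: current bit, s1, s2
                out[2 * i + 1] = b ^ s2;        // g1 taps: current bit, s2
                s2 = s1;                         // shift the register
                s1 = b;
            }
            return out;
        }
    }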

5.6 Applying the DSSS on the Watermark Information Bits

Over the last eight or nine years a new commercial marketplace has been emerging around spread spectrum, the art of secure digital communications, now being exploited for commercial and industrial purposes; our entire motivation for using a spread spectrum technique is the security of the watermark. In the coming years hardly anyone will escape being involved in some way with spread spectrum communications. Commercial applications range from wireless LANs (computer-to-computer local area networks), to integrated bar-code scanner/palmtop computer/radio modem devices for warehousing, to digital dispatch, digital cellular telephone communications, and city/area/state or country-wide networks for passing faxes, computer data, e-mail and multimedia data.

Spread spectrum uses wideband, noise-like signals. Because they are noise-like, spread spectrum signals are hard to detect, hard to intercept or demodulate, and harder to jam (interfere with) than narrowband signals. These low probability of intercept (LPI) and anti-jam (AJ) features are why the military has used spread spectrum for so many years. Spread signals are intentionally made much wider in band than the information they carry, to make them more noise-like, using fast codes that run at many times the information bandwidth or data rate. These special spreading codes are called pseudo-random or pseudo-noise codes, 'pseudo' because they are not true Gaussian noise.

For our purposes, spread spectrum is used to spread the Viterbi-encoded data over the image in such a way that it becomes undetectable and secure. In our implementation the DSSS component takes one bit of watermark information after Viterbi encoding and generates three bits for it. The information is thus quite redundant, but this makes the watermark considerably more reliable and secure. In the code this is achieved with just two types of multiplication function, which produce three bits of output for each input bit of watermark information.
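A sketch of the spreading step at the stated rate of three chips per bit: each encoded bit, mapped to ±1, is multiplied by a key-seeded pseudo-noise chip sequence (the chip generation shown here is our assumption; the implementation's two multiplication functions are not reproduced):

    import java.util.Random;

    class DsssSpreader {
        // Spreads each input bit over 3 chips: output length = 3 * bits.length.
        static int[] spread(int[] bits, long key) {
            Random pn = new Random(key);              // pseudo-noise source
            int[] chips = new int[3 * bits.length];
            for (int i = 0; i < bits.length; i++) {
                int symbol = (bits[i] == 1) ? 1 : -1; // bipolar mapping
                for (int c = 0; c < 3; c++) {
                    int chip = pn.nextBoolean() ? 1 : -1;
                    chips[3 * i + c] = symbol * chip;
                }
            }
            return chips;
        }
    }

Despreading regenerates the same chip sequence from the key and correlates it against the received values, which is what makes the key essential for detection.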

5.7 Applying the Parity Check

After the DSSS step, one parity bit is added to the resulting watermark bit sequence, which by now has become quite long and redundant after the Viterbi encoding and the DSSS. Either type of parity, even or odd, can be used.

5.8 Steps to Implement the Software Module: WM Embedding

The approach used to embed the generated watermark sequence into the host image is CM/SNS [28]. The process, in the notation of [28]:

    s(k)   = watermark sequence bits
    p(k)   = periodic function, calculated as in [28]
    c(k)   = selected coefficients after the key-based transformation
    e(k)   = amount of distortion introduced in the original values
    rem    = remainder
    c^(k)  = watermarked coefficient
    sin, cos = the mathematical sine and cosine functions
    delta, beta = user-selected constants

    p(k) = (delta/4) * cos( c(k) / (2*PI*delta) )

    e(k) = s(k) - p(k)

    // Limit the magnitude of the distortion to beta/2.
    if (|e(k)| > beta/2)           e(k) = sin(e(k)) * (beta/2)

    // Flip the sign depending on the position within the delta cell.
    if (rem(c(k)/delta) > delta/2) e(k) = -e(k)

    // Apply the distortion away from zero.
    if (c(k) >= 0)  c^(k) = c(k) + e(k)
    else            c^(k) = c(k) - e(k)

All the above equations are used to embed the watermark sequence into the host image; the constants are pre-calculated. The resulting values then replace the original values of the image, and the distortion introduced is small enough that the host image is not visually disturbed. After the watermark sequence has been successfully embedded into the key-based selected coefficients, these values are used to reconstruct the host image as the watermarked image: the inverse DWT is performed and the RGBA values of the respective pixels are obtained and used to construct the image. One problem arises here: if the image is reconstructed using the JPEG algorithm, information is lost, and because of JPEG's lossy nature not only image data but also the embedded watermark would be lost. To avoid this we chose the PNG format for constructing the watermarked image. PNG is a lossless file format that keeps the image data intact during encoding. The resulting watermarked image is saved in PNG format at the user's selected location.
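The lossless save is a one-liner with the standard ImageIO API (class and method names here are illustrative):

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    class StegoWriter {
        // PNG is lossless, so the embedded watermark bits survive the save.
        static void save(BufferedImage watermarked, File target) throws IOException {
            ImageIO.write(watermarked, "png", target);
        }
    }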

5.9 Steps to Implement the Software Module: WM Detection

Detection mirrors the CM/SNS approach [28] used for embedding.

The detector function used is as follows, with:

    c(k)  = watermarked coefficient
    delta = user-selected constant
    s(k)  = initial detected sequence
    D     = number of coefficients examined

    q(k) = rem( |c(k)| / delta ),  k = 1...D

    if (q(k) >= delta/2)  s(k) = (3*delta)/4 - q(k)
    else                  s(k) = q(k) - delta/4

    // Map the real-valued s(k) back to a bipolar bit.
    if (|s(k)| > delta/8)  mark the bit as +1
    else                   mark the bit as -1

The process goes like this: when the detector function above is applied, the data obtained is not in binary form but consists of real values. Because the amount of distortion introduced at the embedding stage is limited to ±delta/4, the real values produced by the detector function are confined to the same range. After careful analysis of the data it was concluded that a minimum threshold should be maintained so that the real values can be mapped to the bipolar bit sequence, recovering the detected values in the same form they had at the embedding stage: the absolute value of each sequence value is compared against delta/8, and any value above this threshold is set to +1, otherwise to -1.

To be sure about the presence of the watermark in the image, the detected sequence must be compared against the original sequence. In our approach the contender has to provide a copy of the original image that was used for the embedding process. This copy is not the watermarked one, so it is processed in parallel with the watermarked image: once the results for the watermarked image are available, the user key is applied to the supposedly original image to generate the watermark sequence, which ideally is the same sequence produced at the embedding stage. The two watermark sequences are then compared to measure their similarity, and if the similarity satisfies the given condition or threshold, the image is declared watermarked and the user/owner is identified.
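The report does not fix the exact similarity measure, but a normalized correlation between the detected bipolar sequence and the regenerated one is the usual choice; a sketch, with the decision threshold left as a parameter:

    class WatermarkComparator {
        // Both sequences are bipolar (+1/-1). Returns a value in [-1, 1];
        // identical sequences give 1.0, unrelated ones values near 0.
        static double similarity(int[] detected, int[] regenerated) {
            int n = Math.min(detected.length, regenerated.length);
            double dot = 0;
            for (int i = 0; i < n; i++) dot += detected[i] * regenerated[i];
            return dot / n;
        }

        static boolean isWatermarked(int[] detected, int[] regenerated,
                                     double threshold) {
            return similarity(detected, regenerated) >= threshold;
        }
    }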

A question arises: what if two parties hold similar copies of the original image? It would then be hard to establish the real owner of the content. The answer is that the watermarking authority assumes there is only one original copy of the content, kept safe at all costs, like any other property of the organization, or like a personal ATM PIN or e-mail account password.

It should also be noted that the detector part and the embedding part of the project are kept separate from each other because they operate in different situations or under different authority. The embedding part can be used by the issuing personnel of an organization, who handle people's requests, apply the watermark to their content and dispatch it for further use. Likewise, the detector part can be used by any legal authority, within the organization, privately, or in the public sector, to verify the rightful owner of the content. The embedding and detector parts must, of course, agree on their keys and parameters.

Chapter 6 Conclusions

In this documentation we have presented an evaluation of the most important techniques currently used in digital watermarking, in light of the key disputes in the watermarking world: security, robustness, and the secure detection and extraction of the watermark from the watermarked image. The novelty of the approach used to implement the Digital Watermarking for Images software lies in the employment of the Discrete Wavelet Transform and the Direct Sequence Spread Spectrum techniques, together with error correcting codes (Viterbi encoding). These factors have a positive influence on performance under sufficiently realistic implementation assumptions. As this is a newly emerging field, much remains to be done in digital watermarking, and its greatest handicap is the scarcity of practical implementations of the research techniques, which we believe is the most important reason for the slow progress in the field. This software was therefore developed by the group as a research project; it may have limitations, but the group has attempted to build a system that is reliable and accurate for the purpose of watermarking. The lack of practical implementations leaves this field unexplored, and it promises remarkable things if explored and understood. With this in mind the group entered the field of watermarking and attempted to solve some of its problems. Several relevant conclusions can be drawn from this work:

In our implementation strategy we emphasize the key features of the technique, which are in fact its backbone; without these features it would be impossible to develop such software, and the results have shown their value. While implementing we had two approaches in mind, spatial domain watermarking and transform domain watermarking; after a thorough study we chose transform domain watermarking for its many advantages in the field of image processing.

Another interesting outcome of implementing the watermarking scheme proposed by Dimitrios A. Karras is how much we learnt about this field: doing something practically teaches more than theory alone, and the DSSS was the most interesting lesson of the whole implementation.

Further issues concerning the relevance of forward error correcting codes to watermark security were clarified in Chapter 4. We have verified that this kind of software is essential to advanced schemes for digital data identification, which is now achievable for still images and may in future be achievable for all types of multimedia data, for which far more sophisticated techniques and, of course, greater processing power will be needed.

Another important conclusion is that in practical watermarking approaches it is often necessary to trade optimality for imperceptibility of the watermark. This became clear in use: with this software the user need not worry about the size and color of the host image and the watermark, because the software handles the color and size problems. The software also lets the user adjust the watermark security level by increasing or decreasing the number of shift-keying registers in the DSSS step; varying the number of registers varies the security level, but increasing it also increases the amount of redundant data.

6.1 Summary of Contributions


1. The impact of the use of digital watermarking has been thoroughly investigated using the chosen technique (digital watermarking with second order spread spectrum, used to develop the Digital Watermarking for Images software). The systematic use of this criterion for comparison and design had previously been only occasional in digital watermarking techniques, because in this type of software the interface or GUI is not what matters; what matters is the purpose for which the software is made (watermarking images). This required theoretical analyses of different methods that were previously unavailable; among them, we have provided an analysis of the proposed technique with second order DSSS for better robustness and reliability of the watermark.

2. Watermarking methods actually based on spread-transform approaches with new ideas are rare, but the technique used to develop this software has shown that, by gathering some of the best properties from these two families of methods, it is possible to improve performance with a more sophisticated method featuring both DWT and DSSS. Better techniques can be formed in the same way, by looking for the faults in the present digital watermarking techniques; the overall intention was to develop a technique suitable for watermarking images that minimizes the problems and thus increases robustness.

6.2 Future Research Lines


In the study of digital watermarking we have identified different respects in which the proposed scheme proved good and tolerant against geometric attacks; in short, it looks like a sound approach to watermarking and to building watermarking software. Improvements can be made for higher security and robustness, for instance implementing the same technique with another modulation technique such as DS-CDMA, which would also be a good choice; but before improving the current scheme one should first understand it sufficiently. Technology never stops growing, and more advancements and improvements in digital watermarking lie ahead, so anyone interested in this field should come to it with an open mind and the ambition to make changes that advance the technology.

This software development work has considered only digital still images for watermarking; a good future advancement would be to do the same for animations, audio and video. That work will certainly require more complex techniques, equipped with the features that make them suitable for such dynamic media. Processing power is also crucial for real-time processing: even in this era of rapid technological growth, specialized hardware is needed for real-time processing, along with specialized operating systems to support it, which are available on the market but costly. Given the advance of digital communication we will certainly need such software in future: as the number of network users grows day by day, routine digital communication will bring more problems than we face today, and more sophisticated techniques will be needed to identify the original owners of digital data distributed over networks and the Internet. Perhaps through digital watermarking we will become able to stop piracy and copying.

Proposed Solution Simulated Observations and Results

Observations
S.#  Image Name      Image Size  Delta  Bits detected  Watermark Detected
1    City.jpg        101x100     20     11/16          Yes
2    Hill.jpg        100x75      24     7/8            Yes
3    Imran.jpg       100x75      32     10/16          Yes
4    Musharraf.jpg   60x100      40     16/32          No
5    Scene.jpg       100x75      16     6/16           No
6    Shoaib.jpg      120x75      28     23/32          Yes
7    Flowers.jpg     153x110     20     5/8            Yes
8    Sir.jpg         75x119      36     19/32          No
9    Skull.jpg       83x98       24     5/8            Yes
10   Wajahat.jpg     120x90      40     9/16           No
11   Sunset.jpg      152x114     28     11/16          Yes
12   Lilies.jpg      136x102     20     8/8            Yes
13   Winter.jpg      139x104     28     12/16          Yes
14   Nassar.jpg      158x162     28     12/16          Yes
15   Malikchild.jpg  89x99       16     5/8            Yes
16   Friend.jpg      124x154     24     7/8            Yes
17   Bus.jpg         160x106     20     12/16          Yes
18   Hiking.jpg      130x100     15     4/16           No
19   Lake.jpg        154x100     24     5/8            Yes
20   Mountain.jpg    156x101     12     5/8            Yes
21   Forest.jpg      159x103     20     10/16          Yes
22   Icecream.jpg    69x60       24     11/16          Yes
23   Fruits.jpg      138x100     32     17/32          No
24   Policecar.jpg   162x105     28     2/8            No
25   Winter.jpg      154x100     16     10/16          Yes
26   Fruitsbig.jpg   178x114     28     6/8            Yes
27   Shoaib.jpg      120x75      12     6/8            Yes
28   Boquet.jpg      100x125     24     6/8            Yes
29   More.jpg        113x150     32     8/16           No
30   Bubble.jpg      67x130      20     5/8            Yes

Results
Total images used: 30
Watermark recovered in: 22 images
Watermark not recovered in: 8 images
Watermark recovery success rate: 73.3%

Robust Digital Watermarking using the DSSS Embedding Process for modulating the selected highest energy coefficients of the DWT transform domain

Breakdown of Figure 2 of the proposed watermarking process. Every step is enclosed in a different colored box, and the corresponding details, as understood, are given in the same color as the box enclosing the step.

[Figure, Part I: cover image -> 2-D wavelets (wavelet phase retained) -> select coefficients (unused magnitude wavelet coefficients set aside) -> key-based transformation.]

1. The cover image is provided so that the watermark can be embedded. It acts as the input to the main program; there is no output associated with this step.

2. After the image has been acquired, the 2-D wavelet transform coefficients are calculated. The same wavelet phase will later be used to compute the inverse 2-D wavelet transform for the watermarking, i.e. the same process is used again to invert the 2-D wavelet step performed here. The whole set of DWT coefficients is collected and used as input to the next step.

3. The coefficients passed from the previous step are analyzed and a selection is made. The algorithm proposes selecting a medium-to-high energy subset of the obtained DWT coefficients. Two outputs are associated with this step: first, the selected (medium-to-high energy) coefficients, which are passed to the key-based transformation; second, the remaining (lower-energy) DWT coefficients, which are not selected and are used later in the 2-D inverse DWT together with the wavelet phase from step 2.

4. The selected coefficients F(k1, k2) undergo a key-based transformation (K) to obtain C(k1, k2) for the embedding process. The inputs to this step are a key (K) used for the transformation and the selected coefficients from step 3; the output is the coefficients C(k1, k2).

5. The embedding process: the watermark is embedded using the two constants and the actual image coefficients.

[Figure, Part II, detail of step 5: information bits -> group bits -> RS encoding -> DSSS -> parity generation -> inverse key-based transformation.]

6. The signature sequence to be embedded is obtained in this step as a pseudo-random binary sequence.

7. RS (Reed-Solomon) forward error correction is applied. The input is the sequence of bits; the output is the RS-encoded bits for error correction.

8. A spread spectrum technique, DSSS, is applied to spread the signature over the selected transform domain coefficients. The input is the RS-encoded bits; the output is the signature spread over the selected transform domain coefficients.

9. Parity bits are added.

10. The output of the parity generation is used in the subsequent steps.

[Figure, Part III: DWT-based signaling -> inverse key-based transformation (with key K) -> 2-D IDWT, combining the wavelet phase and the unused magnitude wavelet coefficients to produce the watermarked image I'.]

11. This step performs the inverse key-based transformation, which is used to produce the watermarked image I'. The two inputs involved are a key (K) and the DWT-based signaling from step 10.

12. This step has three inputs and one output. One input is the inversely transformed coefficients passed from step 11; the second is the wavelet phase from step 2; the third is the unused magnitude wavelet coefficients (low-energy coefficients). The only output is the watermarked image I'.

References

1. F. Balado Pumariño and F. Pérez-González, Digital Image Data Hiding Using Side Information, Escola Técnica Superior de Enxeñeiros de Telecomunicación, Universidade de Vigo, 2003.

2. I. J. Cox and M. L. Miller, "The first 50 years of electronic watermarking", EURASIP Journal on Applied Signal Processing, 2:126-132, February 2002.

3. K. Tanaka, Y. Nakamura, and K. Matsui, "Embedding secret information into a dithered multi-level image", in Proc. 1990 IEEE Military Communications Conference, pages 216-220, 1990.

4. A. Z. Tirkel, G. Rankin, R. G. van Schyndel, W. Ho, N. Mee, and C. Osborne, "Electronic water mark", in DICTA-93, pages 666-672, Sydney, Australia, December 1993.

5. C. Clark, "The answer to the machine is in the machine", in The Future of Copyright in a Digital Environment, Kluwer, 1996.

6. G. J. Simmons, "The history of subliminal channels", IEEE Journal on Selected Areas in Communications, May 1998.

7. D. Kahn, The Codebreakers: The Comprehensive History of Secret Communication from Ancient Times to the Internet, Scribner, December 1996.

8. A. Kerckhoffs, "La cryptographie militaire", Journal des Sciences Militaires, IX, January-February 1883.

9. P. Moulin and J. O'Sullivan, "Information-theoretic analysis of information hiding", IEEE Transactions on Information Theory, 49(3):563-593, March 2003.

10. N. Nikolaidis and I. Pitas, "Copyright protection of images using robust digital signatures", IEEE Int. Conf. on Acoustics, Speech and Signal Processing (1996), pp. 2168-2171.

11. J. J. K. Ó Ruanaidh, W. J. Dowling, and F. M. Boland, "Watermarking digital images for copyright protection", IEE Proceedings on Vision, Image and Signal Processing (Aug 1996), pp. 250-256.

12. L. M. Marvel, C. G. Boncelet, Jr., and C. T. Retter, "Spread spectrum image steganography", IEEE Transactions on Image Processing (Aug 1999), pp. 1075-1083.

13. M. Kutter, F. Jordan, and F. Bossen, "Digital signature of color images using amplitude modulation", Proc. SPIE, Storage and Retrieval for Image and Video Databases (Feb 1997), pp. 518-526.

14. A. G. Bors and I. Pitas, "Image watermarking using DCT domain constraints", IEEE Int. Conf. on Image Processing (Sep 1996), pp. 231-234.

15. B. Pfitzmann, "Information hiding terminology", in Int. Workshop on Information Hiding, volume 1174 of Lecture Notes in Computer Science, pages 347-350, Cambridge, UK, May 1996.

16. I. J. Cox, J. Kilian, F. T. Leighton, and T. Shamoon, "Secure spread spectrum watermarking for multimedia", IEEE Transactions on Image Processing, 6(12):1673-1687, December 1997.

17. I. J. Cox et al., "Secure Spread Spectrum Watermarking for Multimedia", Technical Report 95-10, NEC Research Institute, 1995.

18. DVD Copy Control Association, "Request for Expressions of Interest", technical report, DVD Copy Control Association, April 2001.

19. V. Roth, Sichere verteilte Indexierung und Suche von Digitalen Bildern (Secure Distributed Indexing and Retrieval of Digital Images), Ph.D. thesis, Darmstadt Technical University, Darmstadt, Germany, 2001.

20. N. R. Wagner, "Fingerprinting", Proceedings of the 1983 IEEE Symposium on Security and Privacy, Oakland, CA, April 1983, pp. 18-22.

21. B. Perry, B. MacIntosh, and D. Cushman, "Digimarc MediaBridge: The Birth of a Consumer Product, from Concept to Commercial Application", in E. J. Delp and P. W. Wong (eds.), Proceedings of Electronic Imaging 2002, Security and Watermarking of Multimedia Contents IV, San Jose, CA, January 2002.

22. M. Barni et al., "A DWT-Based Technique for Spatio-Frequency Masking of Digital Signatures", in P. W. Wong and E. J. Delp (eds.), Proceedings of Electronic Imaging '99, Security and Watermarking of Multimedia Contents, Vol. 3657, San Jose, CA, January 1999.

23. P.-C. Su, H.-J. Wang, and C.-C. J. Kuo, "Digital Watermarking in Regions of Interest", IS&T Image Processing/Image Quality/Image Capture Systems (PICS), Savannah, GA, April 1999.

24. D. G. Shen and H. S. H. Ip, "Generalized Affine Invariant Image Normalization for Rotational Symmetric and Non-Rotational Symmetric Shapes", IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 19, No. 5, May 1997, pp. 431-440.

25. M. Arnold, M. Schmucker, and S. D. Wolthusen, Techniques and Applications of Digital Watermarking and Content Protection, Artech House, 2003.

26. H.-J. M. Wang, P.-C. Su, and C.-C. J. Kuo, "Wavelet-Based Digital Image Watermarking", Optics Express, Vol. 3, No. 12, p. 491, December 1998.

27. J. J. Eggers and B. Girod, "Blind Watermarking Applied to Image Authentication", IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Salt Lake City, May 2001.

28. M. Ramkumar, Data Hiding in Multimedia: Theory and Applications, Ph.D. dissertation, Dept. of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ.
