

Techniques of Detecting Forgery in Identity Documents

Alsadig Bashir
Faculty of Computer Science and Information Technology
Sudan University of Science and Technology
Khartoum, Sudan
Email: alsadigthree@gmail.com

Yahia A. Fadlalla
Lead Consultant/Researcher, InfoSec Consulting, Hamilton, Ontario, Canada
Adjunct Professor, College of Computer Science and Information Technology, SUST
Email: Trusted-software@usa.net

Abstract—Identity document (ID) forgery occurs when documents issued by authorities are copied or altered by parties not authorized to create them, with the purpose of deceiving those who rely on the issuing bodies. Personal computers, scanners and color printers are now widespread because they are cheap and their output quality is very high. This combination of low-cost, high-quality technology has sharply raised the risk of identity document forgery, and many existing technologies have proved less effective in countering the threat of fake identity documents. New methods must therefore be developed to restrict that threat.

I. INTRODUCTION

An identity document (ID) is used to identify a person or to verify aspects of a person's personal identity. Some countries issue formal identity documents, while others may require identity verification using informal documents [1] [2]. ID documents have been used in Europe since 1919 [1].

Besides identifying a person, IDs are used to connect him or her to the database information held about that person. The photo on the ID connects the person to the document, while the connection between the ID and the information in the database rests on personal data such as name, age, birth date, address, ID number, card number, gender and citizenship. There are several types of IDs, such as passports, national identification cards, residence cards, birth certificates, death certificates, driving licenses and military identification [1] [2] [3].

ID forgery is a serious problem today, and it is growing alongside the advancement of technology: high-quality printers, scanners and computers are comparatively cheap, so one can easily produce a convincing false ID.

Photo IDs are highly significant when travelling, when proving one's identity and when accessing restricted areas or installations. Unfortunately, fake photo IDs are in widespread use and their market is not only active but on the rise, for example among illegal immigrants to Europe; daily we observe boat migrants who may reach Europe or drown. Fake photo IDs also help drug smugglers, human traffickers and terror attacks. With a fake photo ID, for example, someone who is not an airport official can gain access to a restricted area, and from there to a taxiing aircraft, and plant a bomb; that is exactly what happened to the Russian plane that crashed over the Sinai Peninsula half an hour after taking off.

About 500,000 people in the United States are victims of fake IDs every year, with financial losses of approximately $750 million [4].

In 2012, the American Government Accountability Office spotted a number of fake driving licenses in three different states because the licensing officials did not notice that the birth dates were fake. About 4,585 fake passports were also discovered at John F. Kennedy airport. After the September 11 attack on the World Trade Center, Canada spent millions of dollars to secure its border with the USA, citing fake passports as its top worry [5].

ID forgery falls into three related categories. The first is the use of a genuine document by someone who closely resembles the photo of the original owner; the second is the creation of a fake ID from scratch; the third is an original ID with an altered photo or altered personal information [1] [2].

II. LITERATURE REVIEW

1) Biometrics: Chunlin Yang [6] proposed a fingerprint system for verifying identity documents, consisting of a fingerprint reader, a database, the physical ID artifact and a fingerprint identification module. Biometric technologies such as fingerprint and face recognition can be an alternative to human checks. Yang applied fingerprint biometrics in two stages: first, the fingerprint templates are stored; second, in the verification stage, a device collects the fingerprints and matches them against the templates stored in the chip of the identity document. However, as he acknowledged, the approach faces some problems: a wet, dry or scratched finger can hinder verification, and the system also faces challenges such as processor speed and hardware cost.
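To make Yang's two-stage flow concrete, here is a minimal sketch, assuming fingerprint features have already been extracted into fixed-length vectors by some earlier step (the extraction itself, the 128-dimensional vectors and the similarity threshold are illustrative assumptions, not details given in [6]):

```python
import numpy as np

# Minimal sketch of a two-stage biometric check (enrollment, then verification).
# Feature extraction is assumed to happen elsewhere; templates are plain vectors.

SIMILARITY_THRESHOLD = 0.90  # assumed acceptance threshold


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def enroll(template_store: dict, doc_id: str, features: np.ndarray) -> None:
    """Stage 1: store the holder's fingerprint template (e.g. on the ID chip)."""
    template_store[doc_id] = features / np.linalg.norm(features)


def verify(template_store: dict, doc_id: str, live_features: np.ndarray) -> bool:
    """Stage 2: match a live capture against the stored template."""
    template = template_store.get(doc_id)
    if template is None:
        return False  # no template enrolled for this document
    return cosine_similarity(template, live_features) >= SIMILARITY_THRESHOLD


if __name__ == "__main__":
    store = {}
    enrolled = np.random.rand(128)                      # stand-in for an extracted fingerprint vector
    enroll(store, "ID-12345", enrolled)
    probe = enrolled + np.random.normal(0, 0.01, 128)   # slightly noisy re-capture of the same finger
    print("accepted:", verify(store, "ID-12345", probe))
```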
Ben Schouten and Bart Jacobs [7] discussed the role of biometrics, especially in detecting fake passports in the Netherlands, and the methods used, such as fingerprint, facial recognition, voice, signature recognition, iris, retinal scanning, hand geometry and keystroke dynamics. Biometrics are used for electronic verification, which makes them an efficient tool for the authentication process. Applications of biometrics are now widely deployed because governments are searching for highly accurate and secure verification of their citizens. A biometric system consists of a number of components: first, a sensor to capture the biometric; second, feature extraction to derive the stored representation; third, a matcher to compare the stored templates with the captured biometric; and fourth, a decision module that issues an approval or a rejection. Biometrics can be used for authentication in two modes: identification, where the document holder claims that the document is his, and verification, where the system captures the person's biometric and compares it with the samples already held in the system database. The need for biometrics increased after the 9/11 attacks, and accordingly the EU decided to add fingerprint biometrics to European passports; the Netherlands added both the facial image and fingerprints for about 15,000 passport holders. Schouten and Jacobs then discussed each component of the Dutch passport individually. Facial recognition methods can be divided into three groups: local feature-based methods, which extract the eyes, nose, mouth and their locations; appearance-based methods, which use the whole face for recognition; and combined methods, which try to improve recognition by combining the feature-based and appearance-based approaches. The second component of the Dutch passport is the fingerprint, which works together with the facial image. The fingerprint, the best-known biometric of all, confirms or identifies a person by comparing two fingerprints. However, face and fingerprint recognition in the Dutch approach have weaknesses. For face recognition, the facial appearance of both young and old people can change quickly, so the stored templates need to be updated frequently; the background can also make it difficult to locate the face, and a beard, a moustache or a change of hairstyle can all affect the performance of the system. On the other hand, fingerprint quality decreases with age, and dry, wet or scratched fingerprints affect the quality of the system; furthermore, presenting the left fingerprint in place of the right one, or vice versa, can defeat the system.

Fig. 1. A biometric system consists of four components: sensors, feature extraction, matcher and decision module.

2) Watermarking: Burt Perry et al. [4] proposed a system to detect fraudulent photocopied documents using digital watermarking, in which a number of bits embedded into a digital image, text, audio or video can authenticate the ownership of the material. Here, digital watermarking is used to detect forgery in identity documents through a unique personal code inserted inside the photo. This code is added to the identity document without any visible change or alteration to the photo. After the data are embedded in the photo, the ID document can be checked for counterfeiting by comparing the embedded data with other information on the identity document. Digital watermarking has some good characteristics: 1) no design changes, because the watermark is not visible; 2) it is machine readable by computers and other digital equipment; 3) it can carry data such as information about the owner or data related to the identity document, for example the ID number; and 4) it can last for many years without change. To ensure that an identity document is not counterfeit, the watermarked area must first be scanned and the watermark then compared with the personal information. However, watermarking has some weaknesses, such as the removal attack, which aims to remove the whole watermark; the cryptographic attack, which aims to crack the watermark in order to alter it; and the protocol attack, which targets the watermarking application itself.
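A minimal sketch of the embed-then-compare idea follows; least-significant-bit embedding of an ASCII code in a grayscale photo is used purely for illustration and is an assumption, since [4] does not specify the embedding scheme:

```python
import numpy as np

# Minimal sketch: hide a personal code in a photo and check it later against
# the ID number printed on the document. LSB embedding is illustrative only.


def embed_code(photo: np.ndarray, code: str) -> np.ndarray:
    """Hide an ASCII code in the least significant bits of a grayscale photo."""
    bits = np.unpackbits(np.frombuffer(code.encode("ascii"), dtype=np.uint8))
    flat = photo.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("photo too small to hold the code")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs with code bits
    return flat.reshape(photo.shape)


def extract_code(photo: np.ndarray, n_chars: int) -> str:
    """Read the hidden code back out of the photo's LSBs."""
    bits = photo.flatten()[: n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode("ascii")


def is_consistent(photo: np.ndarray, printed_id_number: str) -> bool:
    """Compare the embedded code with the ID number printed on the document."""
    return extract_code(photo, len(printed_id_number)) == printed_id_number


if __name__ == "__main__":
    photo = np.random.randint(0, 256, (200, 160), dtype=np.uint8)
    marked = embed_code(photo, "SDN1234567")
    print(is_consistent(marked, "SDN1234567"))   # True: document looks genuine
    print(is_consistent(marked, "SDN7654321"))   # False: printed data was altered
```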

Kharittha Thongkor and Thumrongrat Amornraksa [8] investigated a digital image watermarking system to authenticate the photo on the Thai national ID card. First, they embedded the watermark in the owner's photo and printed it on the ID card; then they captured the photo ID with a device that has a small camera; finally, they extracted the watermark from the photo ID to determine whether the recovered information is the same as the ID code on the card. However, if they added a further hiding technique, such as a fingerprint or another biometric, the system would be more effective at detecting forgery.

Subariah Ibrahim et al. [9] developed a method to protect the copyright of printed documents, such as certificates and ID cards, using digital watermarking technology. In the authentication process, the document must first be scanned before verification, which is known as offline authentication, and the watermark is then extracted. Offline authentication, however, may introduce noise into the document, known as print-and-scan (PS) distortion. This distortion affects both the pixel values and the geometric boundary of scanned documents: the pixel values are distorted by the blurring of adjacent pixels, while the geometric boundary is distorted by rotation, scaling and cropping. For offline authentication of printed documents to be correct, the PS distortion must be removed, so the authors proposed an algorithm that removes it in order to verify the authenticity of printed documents. However, as they acknowledged, the method still needs enhancement in how the rotation angle of a document is discovered, because if the angle does not represent the correct position of every pixel in the original watermarked document, the output may not be readable.

Fig. 2. Abstract of an ID document with its security features.
Fig. 3. An example of an ID document.

3) Combination Systems: Justin Picard et al. [10] proposed a system based on the integration of three data-hiding technologies: digital watermarking, 2D barcodes and a copy detection pattern (CDP). While watermarking and other hiding technologies are already applied to normal ID documents, they proposed an additional technology, biometrics, to strengthen ID security. The proposed system consists of four components; two of them are hiding technologies, and the others are the copy detection pattern and biometric prevention. First, the watermark (the information hidden in the image) is used to connect the document holder to his personal information (the information printed on the ID document). Second, the 2D barcode, which can store more data than a watermark, is used to store the personal information, the watermark and the biometrics, and to encrypt the data. Third, the copy detection pattern is a highly textured digital image inserted into an area of the ID document; it is very sensitive to any copying, because copying degrades its quality, so the CDP makes it easy to tell whether an ID document has been copied. Finally, biometrics are measurements of people's physical and behavioral characteristics. Biometrics can be divided into two main types: physical biometrics, such as fingerprints, DNA, face and hand geometry, and behavioral biometrics, such as handwriting, signature and gait; they are also used for identification and access control. However, this approach introduces more computational overhead.
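The sketch below illustrates only the copy-sensitivity idea behind the CDP, assuming the issuer can regenerate the reference pattern from a secret seed; the pattern size, noise model and threshold are illustrative assumptions rather than values from [10]:

```python
import numpy as np

# Minimal sketch of a copy detection pattern (CDP) check: a dense pseudo-random
# pattern is printed on the ID; a copy (print-scan-print) degrades it, which shows
# up as a lower correlation with the reference pattern.

CDP_SHAPE = (64, 64)
COPY_THRESHOLD = 0.80  # assumed decision threshold


def generate_cdp(seed: int) -> np.ndarray:
    """Reference pattern, reproducible from a secret seed kept by the issuer."""
    rng = np.random.default_rng(seed)
    return rng.random(CDP_SHAPE)


def correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalised correlation between the reference and the scanned pattern."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])


def looks_copied(reference: np.ndarray, scanned: np.ndarray) -> bool:
    return correlation(reference, scanned) < COPY_THRESHOLD


if __name__ == "__main__":
    reference = generate_cdp(seed=42)
    first_print = reference + np.random.normal(0, 0.05, CDP_SHAPE)   # one print-scan cycle
    photocopy = first_print + np.random.normal(0, 0.25, CDP_SHAPE)   # an extra copy adds damage
    print("original copied?", looks_copied(reference, first_print))  # expected: False
    print("photocopy copied?", looks_copied(reference, photocopy))   # expected: True
```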
4) Different Systems: Suman V. Patgar et al. [11] proposed a system to detect fraudulent photocopied documents. The work concentrates on detecting fabricated photocopies in which part of the content has been altered, either by removing the original content and writing above it or by cut-and-paste, using the bounding box, a MATLAB construct. The bounding box (BB) surrounds the character and symbol regions; once the components are enclosed, the system can detect forgery from the height, orientation and intensity values of the pixels. The main idea of the proposed methodology is that if the document has been forged, there will be some difference in the height, orientation or thickness of the suspected part of the document compared with the non-suspected region. However, as the authors acknowledged, this approach may take a long time during the comparison process.
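A minimal sketch of the bounding-box comparison follows, assuming word boxes have already been produced by a segmentation step; the height-deviation rule and tolerance are illustrative assumptions standing in for the full set of features used in [11]:

```python
import numpy as np

# Minimal sketch of the bounding-box idea: words whose box height deviates strongly
# from the document's typical value are flagged as suspected regions.

HEIGHT_TOLERANCE = 0.15  # assumed: allowed fraction of the median height


def suspected_boxes(boxes: list[tuple[int, int, int, int]]) -> list[int]:
    """Return indices of word boxes whose height is inconsistent with the rest.

    Each box is (x, y, width, height) in pixels.
    """
    heights = np.array([h for (_, _, _, h) in boxes], dtype=float)
    median_h = np.median(heights)
    deviation = np.abs(heights - median_h) / median_h
    return [i for i, d in enumerate(deviation) if d > HEIGHT_TOLERANCE]


if __name__ == "__main__":
    boxes = [
        (10, 100, 80, 32), (100, 100, 60, 31), (170, 100, 90, 33),
        (10, 150, 75, 32), (100, 150, 70, 41),   # last word was pasted in and is noticeably taller
    ]
    print("suspected word indices:", suspected_boxes(boxes))  # -> [4]
```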
Young-Bin Kwon and Jeong-Hoon Kim [12] proposed a method that extracts the characters printed on a passport and compares them with the MRZ (Machine Readable Zone): if the data on the passport do not match the MRZ code, the passport can easily be identified as fabricated. The researchers binarized the image and the other personal information on the passport and embedded these data in the MRZ; they then extracted the characters and the photo and matched them against the other information in the passport. The method is straightforward because the recognized characters are compared with the same passport's own information: if the same characters are not found on the same passport, the passport is forged. A passport consists of three components: the picture area, the MRZ area and the printed personal information above it. The MRZ is the most important element in passport recognition; it is located at the bottom of the passport, uses a single letter size, and has two lines of 44 characters each, recognizing 10 digits and 26 letters. Template matching finds the most similar template to compare with the MRZ; for example, during comparison the two letters "A" from the MRZ and from the upper personal area are compared to measure the difference in their shapes. After the passport information is extracted, cross-checking verification is performed on the region to detect a counterfeit passport: if the match is correct the system prints OK, otherwise it prints WARNING. However, if they developed a method for removing the hologram and extracting the background image from the characters, the recognition rate would increase.
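The MRZ lends itself to a simple consistency check. The sketch below recomputes the ICAO 9303 check digits (weights 7, 3, 1, with A-Z mapped to 10-35 and the filler '<' to 0) and compares selected MRZ fields with the values read from the printed zone; the OCR of both zones is assumed to have been done elsewhere, and the field layout used here is a simplification of [12]:

```python
# Minimal sketch of MRZ cross-checking: recompute the ICAO 9303 check digits and
# compare selected MRZ fields with the values read from the printed (visual) zone.

WEIGHTS = (7, 3, 1)


def char_value(c: str) -> int:
    """ICAO 9303 character values: digits keep their value, A-Z map to 10-35, '<' is 0."""
    if c.isdigit():
        return int(c)
    if c == "<":
        return 0
    return ord(c) - ord("A") + 10


def check_digit(field: str) -> int:
    return sum(char_value(c) * WEIGHTS[i % 3] for i, c in enumerate(field)) % 10


def mrz_consistent(mrz: dict, printed: dict) -> bool:
    """True if the check digits are valid and the MRZ fields match the printed data."""
    digits_ok = all(
        check_digit(mrz[field]) == int(mrz[field + "_check"])
        for field in ("document_number", "birth_date", "expiry_date")
    )
    fields_match = all(mrz[f] == printed[f] for f in ("document_number", "birth_date", "expiry_date"))
    return digits_ok and fields_match


if __name__ == "__main__":
    # Field values taken from the ICAO 9303 specimen passport.
    mrz = {"document_number": "L898902C3", "document_number_check": "6",
           "birth_date": "740812", "birth_date_check": "2",
           "expiry_date": "120415", "expiry_date_check": "9"}
    printed = {"document_number": "L898902C3", "birth_date": "740812", "expiry_date": "120415"}
    print(mrz_consistent(mrz, printed))    # True: consistent passport
    printed["birth_date"] = "750812"       # altered printed birth date
    print(mrz_consistent(mrz, printed))    # False: flagged as suspect
```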
5) Copy/Paste techniques: Romain Bertrand et al. [13] proposed a method for detecting counterfeited documents produced through the scan-edit-and-print (SEP) workflow, in which documents are converted into digital form and edited with image-processing software. The technique consists of two components. The first is the copy-and-paste detector, which detects forgery in which a group of characters is copied and pasted at different locations in the document. The second is the imitation detector, which detects forgery in which the counterfeiter adds or alters information by imitating the font properties of the document. The system uses intrinsic document features, such as character font properties, the shape of the characters, and character or word alignment, to detect forgery: the fabricated parts of a document show words with a different font type or character size, misalignment, or skew.

The detection of copied-and-pasted or imitated characters depends on comparing character shapes. The distance between two characters A and B falls into one of three cases: first, when the distance equals 0, there is no forgery; second, when the distance is short, a copy/paste forgery exists; third, when the distance is long, an imitation forgery is indicated. We suggest it would be better to add more forgery indicators, such as measuring the gap between two neighboring characters; the system would also improve if a technique to reduce print-and-scan noise were added.
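A minimal sketch of the three-way rule described above follows; the shape-distance function and the two thresholds are illustrative assumptions standing in for whatever comparison the system actually uses:

```python
import numpy as np

# Minimal sketch of the three-way rule: distance 0 -> no forgery, short distance
# -> copy/paste suspected, long distance -> imitation suspected.

COPY_PASTE_MAX = 0.05   # assumed: "short" distance, suspiciously similar glyphs
IMITATION_MIN = 0.40    # assumed: "long" distance, glyph from a different font


def shape_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute difference between two binarised character images of equal size."""
    return float(np.mean(np.abs(a.astype(float) - b.astype(float))))


def classify_pair(a: np.ndarray, b: np.ndarray) -> str:
    d = shape_distance(a, b)
    if d == 0.0:
        return "no forgery"
    if d <= COPY_PASTE_MAX:
        return "copy/paste suspected"
    if d >= IMITATION_MIN:
        return "imitation suspected"
    return "normal variation"


if __name__ == "__main__":
    glyph = (np.random.rand(32, 32) > 0.5).astype(np.uint8)
    pasted = glyph.copy()                       # duplicated glyph
    pasted[0, 0] ^= 1                           # one flipped pixel: nearly identical
    other_font = (np.random.rand(32, 32) > 0.5).astype(np.uint8)
    print(classify_pair(glyph, glyph))          # no forgery (distance is exactly 0)
    print(classify_pair(glyph, pasted))         # copy/paste suspected
    print(classify_pair(glyph, other_font))     # imitation suspected
```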
Svetlana Abramova and Rainer Böhme [14] presented a framework called Copy-Move Forgery Detection (CMFD) for detecting copy-move forgery in scanned text documents. Copy-move (CM) forgery copies a part of a text document and reinserts it at another location in the same document, with the aim of hiding undesirable content or duplicating a particular part, for example fabricating names, dates or values, or pasting text-free background over unwanted information. The CMFD framework consists of two parts: Optical Character Recognition (OCR), which focuses on detecting forgery in the fonts by measuring their weight, size, style and roughness, and Copy-Move (CM) detection, which focuses on detecting forgery in the background of the text document. A further improvement would be to add a conditional random field implementation to make the detection process more efficient.
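The sketch below shows the copy-move idea in its simplest form, matching exact duplicate blocks by hashing; a real CMFD system such as [14] uses features that tolerate print-and-scan noise, and the block size here is an illustrative assumption:

```python
import hashlib
import numpy as np

# Minimal sketch of copy-move detection by exact block matching: the binarised page
# is cut into fixed-size blocks, each block is hashed, and hashes that occur at more
# than one position reveal duplicated regions.

BLOCK = 16  # assumed block size in pixels


def duplicated_blocks(page: np.ndarray) -> list[tuple[tuple[int, int], tuple[int, int]]]:
    """Return pairs of top-left block coordinates whose contents are identical."""
    seen: dict[str, tuple[int, int]] = {}
    pairs = []
    h, w = page.shape
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            block = page[y:y + BLOCK, x:x + BLOCK]
            if block.sum() == 0:
                continue  # skip empty background blocks
            key = hashlib.sha1(block.tobytes()).hexdigest()
            if key in seen:
                pairs.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return pairs


if __name__ == "__main__":
    page = np.zeros((128, 128), dtype=np.uint8)
    page[16:32, 16:32] = np.random.randint(0, 2, (16, 16), dtype=np.uint8)  # some text block
    page[80:96, 64:80] = page[16:32, 16:32]                                 # copy-moved block
    print(duplicated_blocks(page))   # -> [((16, 16), (80, 64))]
```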
Romain Bertrand et al. [15] presented a method based on a Conditional Random Field (CRF), a tool that describes the correlation between the fonts, styles and sizes of characters. The system decides which font a character belongs to by comparing its font features against a known database; the character is then classified as genuine or fake. The system can detect three types of error when a word has been modified: first, the copy/paste error, a space between a pair of characters that differs from the spacing of the other pairs in the document; second, the imitation error, defined as words that appear in two different typestyles; third, combined copy/paste and imitation, where both of the above errors are found in one document. However, it would be better to add information about the neighboring words, which could improve font recognition and maximize font forgery detection.

Fig. 4. On the left is a copy-and-paste fraud; on the right is an imitation fraud.

Suman V. Patgar and T. Vasudev [16] presented a method to detect fabricated photocopied documents using geometric moments and gray-level co-occurrence matrix (GLCM) features. They focused on detecting fabrication in which some content has been manipulated by smearing, whether over the original content or by writing new content above it. The geometric moment, which captures the geometric distribution of the pixels in the image, is used in image processing for classification problems; the gray-level co-occurrence matrix is one of the best-known texture analysis methods and helps estimate second-order statistical image properties. After applying these methods, they found that if the intensity is consistently smooth with a strong edge contour, the photocopied text is not fabricated, while a rough intensity with a weak edge contour indicates fabricated photocopied text. Geometric moments are also used to enhance the document by making it free of noise, dirt and background art. If another technique were combined to remove the dirt, noise and background art, the fabrication-detection rate would increase.
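A minimal sketch of the GLCM part of this approach follows, using scikit-image to compute contrast and homogeneity for a region; the decision threshold is an illustrative assumption, and [16] additionally combines these features with geometric moments:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

# Minimal sketch of the texture features: a gray-level co-occurrence matrix (GLCM)
# is computed for a text region and summarised by contrast and homogeneity.

CONTRAST_LIMIT = 500.0  # assumed: rough, smeared regions show much higher contrast


def glcm_features(region: np.ndarray) -> tuple[float, float]:
    """Contrast and homogeneity of an 8-bit grayscale region."""
    glcm = graycomatrix(region, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return (float(graycoprops(glcm, "contrast")[0, 0]),
            float(graycoprops(glcm, "homogeneity")[0, 0]))


def looks_fabricated(region: np.ndarray) -> bool:
    contrast, homogeneity = glcm_features(region)
    return contrast > CONTRAST_LIMIT and homogeneity < 0.5


if __name__ == "__main__":
    smooth = np.full((64, 64), 128, dtype=np.uint8)              # clean, uniform text area
    noisy = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # smeared / manipulated area
    print("clean region fabricated?", looks_fabricated(smooth))  # False
    print("noisy region fabricated?", looks_fabricated(noisy))   # True (very likely)
```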
6) Printer type: Christoph H. Lampert et al. [17] presented a system that helps non-expert users distinguish original documents from documents produced on a personal computer by analyzing the printing technique used. A support vector machine (SVM) was trained to distinguish laser printouts from inkjet printouts. The approach detects forgery using a per-letter classification of the printing technique: every printer leaves a different visual characteristic, especially in the edge area of the letters, so the system can decide for each letter in the document what kind of printer produced it. Printed letters with sharper edges indicate that a laser printer was used, while less sharp edges indicate an inkjet printer. However, it would be better if they also added a printer fingerprint, such as a special number or small marks added by the laser printer, to be more effective in distinguishing laser from inkjet printing.
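A minimal sketch of the per-letter SVM classification follows, assuming edge features (e.g. gradient strength and an edge-roughness score) have already been computed per letter; the synthetic training data and feature choice are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC

# Minimal sketch of a per-letter printing-technique classifier: each letter is
# represented by two edge features and an SVM separates laser from inkjet prints.

rng = np.random.default_rng(0)

# Synthetic feature vectors: laser letters -> sharp edges (high gradient, low roughness),
# inkjet letters -> softer, rougher edges.
laser = np.column_stack([rng.normal(0.9, 0.05, 200), rng.normal(0.1, 0.05, 200)])
inkjet = np.column_stack([rng.normal(0.5, 0.05, 200), rng.normal(0.6, 0.05, 200)])
X = np.vstack([laser, inkjet])
y = np.array([0] * 200 + [1] * 200)  # 0 = laser, 1 = inkjet

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Classify each letter of a questioned document; many inkjet letters inside a
# supposedly laser-printed document would be a forgery indicator.
questioned_letters = np.array([[0.88, 0.12], [0.52, 0.58], [0.91, 0.09]])
print(clf.predict(questioned_letters))   # expected: [0 1 0]
```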
Amir Ahmed and Faisal Shafait [18] proposed an approach to detect non-uniform vertical scaling: when a printed document is scanned and reprinted, the contents of the reprinted document have slightly different vertical distances compared with the original. Most printed documents have a static part (e.g. the header and the footer) and a non-static part (the actual content of the document). The system aims to detect this distortion in fake documents, so the authors developed an approach that identifies the static parts of the documents, compares the two images and computes a matching score, which can be interpreted as the number of identical characters found in the same or in a different position. We believe authentication would be stronger if the non-static part were also included, through font recognition or printer-type identification.

Gaurav Gupta et al. [19] described a system to discover whether printed documents are counterfeit and to identify the source scanner and printer that produced them. They used the variance of intensity as a measure of dispersion: if the variation in the intensity variance is large, the document is fraudulent; otherwise it is original. They also used parameters called purity and plane: the purity of the colors within the RGB color ranges can be used to distinguish scanners, while the plane formed by the end points of the intensity line and one of the six hues classifies the type of printer used to produce the document. However, as they acknowledged, the project needs further enhancement to recognize more types of scanners and printers.
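A minimal sketch of the intensity-variance test follows; the block size and dispersion threshold are illustrative assumptions, and the purity/plane analysis of [19] is not reproduced here:

```python
import numpy as np

# Minimal sketch of the intensity-variance test: the page is cut into blocks, the
# variance of intensity is computed per block, and a large spread of those block
# variances is treated as an indicator of a fraudulent (composited) document.

BLOCK = 32
DISPERSION_LIMIT = 500.0  # assumed threshold on the spread of block variances


def block_variances(page: np.ndarray) -> np.ndarray:
    h, w = page.shape
    vals = []
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            vals.append(page[y:y + BLOCK, x:x + BLOCK].astype(float).var())
    return np.array(vals)


def looks_fraudulent(page: np.ndarray) -> bool:
    """Large variation between block variances suggests content from mixed sources."""
    return float(np.std(block_variances(page))) > DISPERSION_LIMIT


if __name__ == "__main__":
    uniform_doc = np.random.normal(200, 5, (256, 256)).clip(0, 255)
    tampered_doc = uniform_doc.copy()
    tampered_doc[96:160, 96:160] = np.random.normal(120, 60, (64, 64)).clip(0, 255)
    print(looks_fraudulent(uniform_doc))   # False: consistent intensity statistics
    print(looks_fraudulent(tampered_doc))  # True: pasted region changes the dispersion
```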
Johann Gebhardt et al. [20] developed an approach that uses the difference in edge roughness to distinguish laser-printed pages from inkjet-printed pages. The system applies unsupervised anomaly detection, which needs no prior training. Using a large dataset of inkjet- and laser-printed documents, they found that inkjet printers produce a higher degree of edge roughness than laser printers. Possible improvements include adding sharp-edge detection to increase the quality of the system, distinguishing the images within documents, and adding further techniques such as conditional random fields or support vector machines.

Sara Elkasrawi and Faisal Shafait [21] presented an approach to identify the source printer that relies on the noise produced by printers, regardless of the content and size of the document. The proposed system can differentiate inkjet printing, which places very small drops of ink on the paper with diameters from 50 to 60 microns, from laser printing, in which an initially positively charged drum discharges certain areas according to the content of the document to be printed. Authentication of a document's origin would improve if the content of the document were also used, for example sharper letter edges indicating a laser printer and rougher edges indicating an inkjet printer.

Abbas Cheddad et al. [22] presented a method for detecting forgery in scanned documents using an information-hiding technique, steganography, the science of embedding data in a digital medium in an invisible way. They proposed an efficient, highly secure method that is robust against different image-processing attacks, embedding data in the image of the document by substituting one bit out of every eight. Adding another technique, such as biometrics, would make it more effective.

Aziza Satkhozhina et al. [23] presented a system that uses a Conditional Random Field (CRF) to classify font type. The system aims to identify the typeface, weight, slope and size of a font without knowing the content of the text. CRFs are widely used in pattern recognition and machine learning. The input to the system is a computer-generated or scanned image containing text in a uniform font; the system ignores punctuation, binarizes the image, and then automatically predicts the font by comparing the typeface, slope, weight and size of the fonts. Detection would be more effective if another technique, such as a printer signature, were added to this approach.

Nitin Khanna et al. [24] presented a method for authenticating images produced by flatbed scanners and identifying forged regions within them. The method is based on the imaging sensor pattern noise, which can be used as a fingerprint for the scanner. The first step is to obtain the noise pattern of the image. The sensor noise consists of two components: a random component, the part of the noise that changes from image to image, and a fixed component, the part that remains constant from image to image. The fixed component can be considered a signature of the image sensor and used to identify the source scanner. The basic idea of the method is to break the image into small blocks and classify every block separately to detect its source scanner: if all blocks are attributed to one scanner, the image is authentic; otherwise it is forged. The same method can be used to decide whether a document is original or fake. Using a steganographic technique as well would strengthen the detection capability.
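The block-wise use of sensor noise can be sketched as follows, assuming a reference noise pattern for the scanner is available; the denoising filter, block size and correlation threshold are illustrative assumptions rather than the estimator used in [24]:

```python
import numpy as np
from scipy.ndimage import median_filter

# Minimal sketch of the block-wise sensor-noise test: the noise residual of a scan
# (image minus a denoised version) is compared block by block against the scanner's
# reference noise pattern; blocks that do not correlate with the reference are
# candidates for pasted-in (forged) regions.

BLOCK = 64
CORR_THRESHOLD = 0.05  # assumed minimum correlation for a genuine block


def noise_residual(image: np.ndarray) -> np.ndarray:
    return image.astype(float) - median_filter(image.astype(float), size=3)


def inconsistent_blocks(image: np.ndarray, reference_noise: np.ndarray) -> list[tuple[int, int]]:
    """Blocks whose residual does not correlate with the scanner's reference pattern."""
    residual = noise_residual(image)
    flagged = []
    h, w = image.shape
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            r = residual[y:y + BLOCK, x:x + BLOCK].ravel()
            ref = reference_noise[y:y + BLOCK, x:x + BLOCK].ravel()
            if np.corrcoef(r, ref)[0, 1] < CORR_THRESHOLD:
                flagged.append((y, x))
    return flagged


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    reference = rng.normal(0, 3, (256, 256))                  # scanner's fixed-pattern noise
    scan = 128 + reference + rng.normal(0, 1, (256, 256))     # genuine flat scan
    scan[64:128, 64:128] = 128 + rng.normal(0, 3, (64, 64))   # region pasted from another scan
    print(inconsistent_blocks(scan, reference))               # -> [(64, 64)]
```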
Joost van Beusekom et al. [25] proposed an algorithm that detects forgery in paper documents by measuring text-line alignment: the left-aligned lines (text lines that start at the left margin), the right-aligned lines (text lines that end at the right margin), the justified lines (which start at the left margin and end at the right one), and the centered lines (whose gaps to both margins are equal in size). In addition, the algorithm checks the skew, analyzing the text lines to distinguish normal from abnormal skew angles. It would be better if this approach were extended to more kinds of documents, such as documents that contain photos.
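A minimal sketch of one of these alignment checks (left alignment only) follows, assuming text-line bounding boxes come from a prior layout-analysis step; the pixel tolerance is an illustrative assumption:

```python
import numpy as np

# Minimal sketch of the alignment test: lines that claim to be left-aligned but
# start noticeably away from the common left margin are flagged. [25] additionally
# checks right, justified and centered alignment as well as line skew.

TOLERANCE = 6  # assumed tolerance in pixels


def suspicious_left_aligned_lines(line_boxes: list[tuple[int, int, int, int]]) -> list[int]:
    """Each box is (x, y, width, height); returns indices of lines off the left margin."""
    lefts = np.array([x for (x, _, _, _) in line_boxes], dtype=float)
    margin = np.median(lefts)
    return [i for i, x in enumerate(lefts) if abs(x - margin) > TOLERANCE]


if __name__ == "__main__":
    boxes = [(50, 100, 900, 30), (51, 140, 870, 30), (49, 180, 910, 30),
             (50, 220, 880, 30), (71, 260, 860, 30)]   # last line was re-typeset and shifted
    print(suspicious_left_aligned_lines(boxes))         # -> [4]
```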
Christian Schulze et al. [26] presented a system to detect the type of printer used to produce forged documents, using discrete cosine transform (DCT) features and machine-learning techniques to differentiate inkjet from laser printing. Based on their experiments, they observed two things: 1) the character edges produced by a laser printer are sharp, while an inkjet printer produces blurred character edges; and 2) inkjet-printed documents show a high degree of edge roughness and degradation, while laser-printed documents show little edge roughness. However, to make the detection process stronger, we think a printer signature should be added to these techniques.

Clariesse Mandridake et al. [27] developed an approach that detects forgery in documents by focusing on two areas: the background area and the photo-printing area. To detect forgery in the background, they use a special kind of ink during the printing process that appears on the printed document and helps to identify an original document; to detect forgery in the photo printing, they identify the printer used, where an inkjet printer indicates a forged document and a laser printer indicates an original one. Adding hiding techniques such as watermarking would upgrade the efficiency of detection.

Aravind K. Mikkilineni et al. [28] proposed an approach that uses image texture analysis to classify the printer type. They used two methods to identify the kind of printer. The first is passive: the printer is classified according to intrinsic characteristics of the printed document, such as the roughness and sharpness of the characters; they refer to this type of detection as intrinsic, and it requires a deeper understanding of the printing mechanism. The second method embeds an external signature in the printed pages, such as the printer serial number or the date of printing; this kind of signature is called the extrinsic signature. However, it would be better to extend the technique to cover multiple font sizes and font types.

Zohaib Khan et al. [29] investigated a method to detect whether parts of a document have been modified, altered or forged with different pens. They demonstrated hyperspectral imaging for ink-mismatch detection in a handwritten note and used a band-selection technique that selects informative bands from hyperspectral images for accurate ink-mismatch detection. There are two methods of distinguishing inks: destructive and non-destructive examination. A destructive test separates a mixture of inks into its basic components, which reveals the original sample from the added one. Their proposal focuses on the second type, non-destructive examination: a hyperspectral method that detects mismatched ink based on the fact that identical inks show similar spectral responses while different inks are spectrally different. If they explored a spectral method to determine whether two inks have been mixed together, the algorithm would gain the capability to detect any overwriting.
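A minimal sketch of the non-destructive idea follows: the spectra of ink pixels are clustered into two groups and a large separation between the cluster centres flags a second ink. The clustering choice, threshold and synthetic spectra are illustrative assumptions, not the band-selection method of [29]:

```python
import numpy as np
from sklearn.cluster import KMeans

# Minimal sketch of ink-mismatch detection: cluster the per-pixel spectra of the
# handwriting into two groups; well-separated cluster centres suggest two inks.

SEPARATION_THRESHOLD = 0.15  # assumed per-band separation between cluster centres


def ink_mismatch(spectra: np.ndarray) -> bool:
    """spectra: (n_pixels, n_bands) reflectance values of ink pixels only."""
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(spectra)
    c0, c1 = km.cluster_centers_
    return float(np.linalg.norm(c0 - c1) / np.sqrt(spectra.shape[1])) > SEPARATION_THRESHOLD


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bands = 33
    ink_a = rng.normal(0.30, 0.02, (400, bands))     # original handwriting
    ink_a2 = rng.normal(0.30, 0.02, (100, bands))    # more of the same ink
    ink_b = rng.normal(0.55, 0.02, (100, bands))     # later addition with another pen
    print(ink_mismatch(np.vstack([ink_a, ink_a2])))  # False: one ink only
    print(ink_mismatch(np.vstack([ink_a, ink_b])))   # True: two different inks
```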
III. SUMMARY

Detecting forged documents is a highly significant issue, because it is related to many of the most dangerous crimes, such as illegal immigration, financial fraud, human and drug smuggling, and terrorism. New technologies and wide network connectivity allow anyone with little money to obtain high-quality fake IDs easily from countries like China through various websites, which has opened the door wide to fake-ID markets [1] [2]. It has been observed that only 60 percent of false documents can be detected by machines, while 80 percent can be detected by human experts, which means that many faked documents have not been detected yet; improvements must therefore be made to try to discover those large numbers of faked documents [28]. According to the related works above, there are many techniques and methods for detecting forgery in ID documents. Some of them are highly effective and others less so, but overall more work is needed in this area to bring the detection process to a better state.

ACKNOWLEDGMENT

The authors would like to thank...

REFERENCES

[1] N. D. F. Unit (U.K. Home Office), "Guidance on examining identity documents," London, Britain, 2016.
[2] N. D. F. Unit, "Guidance on examining identity documents," London, Britain, 2015.
[3] C. F. Prevention, "ID documents report," London, Britain, 2014.
[4] B. Perry, S. Carr, and P. Patterson, "Digital watermarks as a security feature for identity documents," in Proc. SPIE Int. Soc. Opt. Eng., vol. 3973, 2000, pp. 80-87.
[5] The Independent, "'Fake identity' Brits warned that their lives are in danger," London, Britain, 2010.
[6] C. Yang, "Fingerprint biometrics for ID document verification," in Industrial Electronics and Applications (ICIEA), 2014 IEEE 9th Conference on. IEEE, 2014, pp. 1441-1445.
[7] B. Schouten and B. Jacobs, "Biometrics and their use in e-passports," Image and Vision Computing, vol. 27, no. 3, pp. 305-312, 2009.
[8] K. Thongkor and T. Amornraksa, "Digital image watermarking for photo authentication in Thai national ID card," in Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2012 9th International Conference on. IEEE, 2012, pp. 1-4.
[9] S. Ibrahim, M. Afrakhteh, and M. Salleh, "Adaptive watermarking for printed document authentication," in Computer Sciences and Convergence Information Technology (ICCIT), 2010 5th International Conference on. IEEE, 2010, pp. 611-614.
[10] J. Picard, C. Vielhauer, and N. Thorwirth, "Towards fraud-proof ID documents using multiple data hiding technologies and biometrics," in Electronic Imaging 2004. International Society for Optics and Photonics, 2004, pp. 416-427.
[11] S. V. Patgar, K. Rani, and T. Vasudev, "An unsupervised intelligent system to detect fabrication in photocopy document using variations in bounding box features," in Contemporary Computing and Informatics (IC3I), 2014 International Conference on. IEEE, 2014, pp. 670-675.
[12] Y.-B. Kwon and J.-H. Kim, "Recognition based verification for the machine readable travel documents," in International Workshop on Graphics Recognition (GREC 2007), Curitiba, Brazil. Citeseer, 2007.
[13] R. Bertrand, P. Gomez-Krämer, O. R. Terrades, P. Franco, and J.-M. Ogier, "A system based on intrinsic features for fraudulent document detection," in Document Analysis and Recognition (ICDAR), 2013 12th International Conference on. IEEE, 2013, pp. 106-110.
[14] S. Abramova et al., "Detecting copy-move forgeries in scanned text documents," Electronic Imaging, vol. 2016, no. 8, pp. 1-9, 2016.
[15] R. Bertrand, O. R. Terrades, P. Gomez-Krämer, P. Franco, and J.-M. Ogier, "A conditional random field model for font forgery detection," in Document Analysis and Recognition (ICDAR), 2015 13th International Conference on. IEEE, 2015, pp. 576-580.
[16] S. V. Patgar and T. Vasudev, "An unsupervised intelligent system to detect fabrication in photocopy document using geometric moments and gray level co-occurrence matrix," International Journal of Computer Applications, vol. 74, no. 12, 2013.
[17] C. H. Lampert, L. Mei, and T. M. Breuel, "Printing technique classification for document counterfeit detection," in Computational Intelligence and Security, 2006 International Conference on, vol. 1. IEEE, 2006, pp. 639-644.
[18] A. G. H. Ahmed and F. Shafait, "Forgery detection based on intrinsic document contents," in Document Analysis Systems (DAS), 2014 11th IAPR International Workshop on. IEEE, 2014, pp. 252-256.
[19] G. Gupta, S. K. Saha, S. Chakraborty, and C. Mazumdar, "Document frauds: Identification and linking fake document to scanners and printers," in Computing: Theory and Applications, 2007. ICCTA'07. International Conference on. IEEE, 2007, pp. 497-501.
[20] J. Gebhardt, M. Goldstein, F. Shafait, and A. Dengel, "Document authentication using printing technique features and unsupervised anomaly detection," in Document Analysis and Recognition (ICDAR), 2013 12th International Conference on. IEEE, 2013, pp. 479-483.
[21] S. Elkasrawi and F. Shafait, "Printer identification using supervised learning for document forgery detection," in Document Analysis Systems (DAS), 2014 11th IAPR International Workshop on. IEEE, 2014, pp. 146-150.
[22] A. Cheddad, J. Condell, K. Curran, and P. Mc Kevitt, "Combating digital document forgery using new secure information hiding algorithm," in Digital Information Management, 2008. ICDIM 2008. Third International Conference on. IEEE, 2008, pp. 922-924.
[23] A. Satkhozhina, I. Ahmadullin, and J. P. Allebach, "Optical font recognition using conditional random field," in Proceedings of the 2013 ACM Symposium on Document Engineering. ACM, 2013, pp. 119-122.
[24] N. Khanna, G. T. Chiu, J. P. Allebach, and E. J. Delp, "Scanner identification with extension to forgery detection," in Electronic Imaging 2008. International Society for Optics and Photonics, 2008, pp. 68190G-68190G.
[25] J. van Beusekom, F. Shafait, and T. M. Breuel, "Text-line examination for document forgery detection," International Journal on Document Analysis and Recognition (IJDAR), vol. 16, no. 2, pp. 189-207, 2013.
[26] C. Schulze, M. Schreyer, A. Stahl, and T. Breuel, "Using DCT features for printing technique and copy detection," in IFIP International Conference on Digital Forensics. Springer, 2009, pp. 95-106.
[27] C. Mandridake, A. Ouddan, M. Hoarau, and K. Win-Lime, "Towards fully automatic ID document frauds detection."
[28] R. M. Abed, "Scanned documents forgery detection based on source scanner identification," American Journal of Information Science and Computer Engineering, vol. 1, no. 3, pp. 113-116, 2015.
[29] Z. Khan, F. Shafait, and A. Mian, "Automatic ink mismatch detection for forensic document analysis," Pattern Recognition, vol. 48, no. 11, pp. 3615-3626, 2015.
