
#107, Ameer Estate, Besides ICICI Bank, SR Nagar, Hyderabad. 500 038.

Ph: 9949981281
PROJECT ABSTRACTS FOR YOUR
EASY REFERENCE

IEEE PAPERS
2009, 2008, 2007 and so on…

Sahasra Technology Solutions


#107, Ameer Estate, SR Nagar, Hyderabad. 500 038
Ph: 9949981281

Why SAHASRA – Promise for the Best


• Quality Learning
• Flexibility
• On Time Completion
• Complete Guidance

IMAGE SEGMENTATION USING ITERATIVE
WATERSHEDING PLUS RIDGE DETECTION

This paper presents a novel segmentation algorithm for metallographic
images, especially those with objects that lack regular boundaries and
homogeneous intensity. In metallographic quantification, the complex
microstructures make it hard for conventional approaches to achieve a
satisfactory partition.

We formulate the segmentation procedure as a new framework of
iterative watershed region growing constrained by the ridge information.
The seeds are selected by an effective double-threshold approach, and
the ridges are superimposed as the highest waterlines in the watershed
transform.

To tackle the over-segmentation problem, the blobs are merged
iteratively using the Bayes classification rule. Experimental results
show that the algorithm performs effective segmentation without much
parameter tuning.
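The double-threshold seed selection mentioned above can be sketched as follows. This is an illustrative hysteresis-style version, not the paper's exact method: pixels at or above a high threshold become seeds, and the seed regions then grow into neighbouring pixels that clear a lower threshold.

```python
from collections import deque

def double_threshold_seeds(img, t_low, t_high):
    """Return a boolean seed mask for a 2-D intensity grid (list of lists):
    pixels >= t_high are seeds, and pixels >= t_low are added when they are
    4-connected to an existing seed."""
    h, w = len(img), len(img[0])
    seeds = [[False] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if img[y][x] >= t_high:
                seeds[y][x] = True
                q.append((y, x))
    while q:  # grow seed regions into the low-threshold band
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not seeds[ny][nx] \
               and img[ny][nx] >= t_low:
                seeds[ny][nx] = True
                q.append((ny, nx))
    return seeds
```

The resulting mask would then serve as the marker image for the marker-controlled watershed step.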

HIERARCHICAL CONTOUR MATCHING FOR
DENTAL X-RAY RADIOGRAPHS
Received 6 October 2006; received in revised form 9 April 2007; accepted 18 May 2007

The goal of forensic dentistry is to identify individuals based on their dental
characteristics. In this paper we present a new algorithm for human
identification from dental X-ray images. The algorithm is based on matching
teeth contours using the hierarchical chamfer distance.

The algorithm applies hierarchical contour matching using a multi-resolution
representation of the teeth. Given a dental record, usually a postmortem (PM)
radiograph, the radiograph is first segmented and a multi-resolution
representation is created for each PM tooth. Each tooth is then matched
against the archived antemortem (AM) teeth in the database that have the same
tooth number, using the hierarchical algorithm starting from the lowest
resolution level.

At each resolution level, the AM teeth are sorted in ascending order of
matching distance; the 50% of AM teeth with the largest distances are
discarded, the remaining AM teeth are marked as possible candidates, and the
matching process proceeds to the next (higher) resolution level.

After matching all the teeth in the PM image, voting is used to obtain a list of
best matches for the PM query image based upon the matching results of the
individual teeth. Analysis of the time complexity of the proposed algorithm
proves that the hierarchical matching significantly reduces the search space
and consequently the retrieval time. Experimental results on a database of
187 AM images show that the algorithm is robust for identifying individuals
based on their dental radiographs.
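The underlying matching measure can be sketched as below. This is a brute-force illustration of the symmetric chamfer distance between two contours given as point lists; a real implementation would precompute a distance transform of one contour for speed, and the hierarchy would apply it at successive resolutions.

```python
import math

def chamfer_distance(contour_a, contour_b):
    """Symmetric chamfer distance between two contours (lists of (x, y)
    points): the mean distance from each point of one contour to the
    nearest point of the other, averaged over both directions."""
    def directed(src, dst):
        total = 0.0
        for (x, y) in src:
            total += min(math.hypot(x - u, y - v) for (u, v) in dst)
        return total / len(src)
    return 0.5 * (directed(contour_a, contour_b) + directed(contour_b, contour_a))
```

At each level, candidates would be ranked by this distance and the worst 50% pruned before moving to the next resolution.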

RECONSTRUCTION OF UNDERWATER IMAGE
BY BISPECTRUM

Reconstruction of an underwater object from a sequence of images
distorted by moving water waves is a challenging task. A new approach is
presented in this paper.

We make use of the bispectrum technique to analyze the raw image
sequences and recover the phase information of the true object. We test
our approach on both simulated and real-world data.

Results show that our algorithm is very promising. Such a technique has
wide applications in areas such as ocean study and submarine
observation.

Index Terms: bispectrum, water wave, image, reconstruction, distortion, refraction.
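A minimal 1-D bispectrum can be sketched as below. The key property that makes it useful for this kind of reconstruction is that it is invariant to (circular) translation of the signal, so averaging bispectra over a distorted sequence preserves the true object's phase information. This is only the basic definition, not the paper's full 2-D reconstruction pipeline.

```python
import numpy as np

def bispectrum(x):
    """Bispectrum B(k1, k2) = X(k1) X(k2) conj(X(k1 + k2)) of a 1-D signal,
    with frequency indices taken modulo N."""
    X = np.fft.fft(x)
    n = len(x)
    k = np.arange(n)
    return X[k][:, None] * X[k][None, :] * np.conj(X[(k[:, None] + k[None, :]) % n])
```

A circular shift of the input multiplies X(k) by a linear phase that cancels exactly in the triple product, which is what the test below checks.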

AUTOMATIC RECOGNITION OF EXUDATIVE
MACULOPATHY USING FUZZY C-MEANS
CLUSTERING AND NEURAL NETWORKS

Retinal exudates are typically manifested as spatially random
yellow/white patches of varying sizes and shapes.

They are a characteristic feature of retinal diseases such as diabetic
maculopathy. An automatic method for the detection of exudate regions is
introduced, comprising image colour normalisation, enhancement of the
contrast between objects and background, segmentation of the colour
retinal image into homogeneous regions using Fuzzy C-Means clustering,
and classification of the regions into exudate and non-exudate patches
using a neural network.

Experimental results indicate that we are able to achieve 92% sensitivity
and 82% specificity.
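The clustering step can be sketched as below. This covers only the Fuzzy C-Means iteration itself (with an illustrative fuzziness exponent m = 2 and fixed iteration count), not the full normalisation and neural-network pipeline: memberships and cluster centres are updated alternately until they settle.

```python
import numpy as np

def fuzzy_c_means(points, c, m=2.0, iters=50, seed=0):
    """Plain Fuzzy C-Means. points: (n, d) array of feature vectors;
    c: number of clusters; m: fuzziness exponent.
    Returns (centers, memberships) where memberships is (c, n)."""
    rng = np.random.default_rng(seed)
    n = len(points)
    u = rng.random((c, n))
    u /= u.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(iters):
        w = u ** m
        centers = (w @ points) / w.sum(axis=1, keepdims=True)
        # squared distance from every center to every point
        d2 = ((centers[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)          # avoid division by zero
        inv = d2 ** (-1.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)           # standard FCM membership update
    return centers, u
```

Each pixel would then be assigned to the cluster in which it has the highest membership before region-level classification.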

International Conference on Computer Systems and Technologies -
CompSysTech’07

AN IMPROVING MODEL WATERMARKING
WITH IRIS BIOMETRIC CODE

The paper justifies the need for new technology that provides reliable
security in a specific area of human activity (business).

The problem is equally important for other new technologies: the
Internet, multimedia authoring systems, video surveillance systems,
audio production, and so on.

Leading security systems build in this direction, combining modern
passwords, electronic documents, key-based access systems, and modern
cryptography. We discuss one problem with watermarks, using a biometric
code to improve the security of the watermarking algorithm.

Key words: Biometric Watermark, Iris code, Biometric system,
video surveillance system, watermark with discrete wavelet transformation.

CALL ADMISSION CONTROL OPTIMIZATION
IN WIMAX NETWORKS
Worldwide interoperability for microwave access (WiMAX) is a promising technology for
last-mile Internet access, particularly in areas where wired infrastructures are not
available. In a WiMAX network, call admission control (CAC) is deployed to effectively control
different traffic loads and prevent the network from being overloaded.

In this paper, we propose a framework of a 2-D CAC to accommodate various features of
WiMAX networks. Specifically, we decompose the 2-D uplink and downlink WiMAX CAC
problem into two independent 1-D CAC problems and formulate the 1-D CAC optimization, in
which the demands of service providers and subscribers are jointly taken into account. To
solve the optimization problem, we develop a utility- and fairness-constrained optimal
revenue policy, as well as its corresponding approximation algorithm.

There exist many regions in the world where wired infrastructures (e.g., T1, DSL, cable)
are difficult to deploy for geographical or economic reasons. To provide broadband
wireless access to these regions, many researchers advocate worldwide interoperability for
microwave access (WiMAX), an IEEE 802.16 standardized wireless technology based
on an orthogonal frequency-division multiplexing (OFDM) physical-layer architecture.
To support a variety of applications, IEEE 802.16 defines four types of service:
1) unsolicited grant service (UGS);
2) real-time polling service (rtPS);
3) non-real-time polling service (nrtPS); and
4) best effort (BE) service.

In a WiMAX network with heterogeneous traffic loads, it is essential to find a call admission
control (CAC) solution that can effectively allocate bandwidth resources to different
applications. This project proposes a WiMAX CAC framework that effectively meets all
operational requirements of WiMAX networks. In this CAC framework, we decompose the 2-D
uplink (UL) and downlink (DL) WiMAX CAC problem into two independent 1-D CAC problems.
We further formulate the 1-D CAC as an optimization problem under a certain objective
function, which should be chosen to maximize either the revenue of service providers or the
satisfaction of subscribers.

With respect to 1-D CAC optimization problems, most previous studies focused on only
two approaches:
1) the optimal revenue strategy (also known as the stochastic knapsack problem) and
2) the minimum weighted sum of blocking strategy.

In this project, we will show that these two strategies are, in fact, equivalent. Therefore, we
can mainly concentrate on the investigation of the optimal revenue strategy and view the
minimum weighted sum of blocking strategy as the basis for fast calculation algorithms.
Clearly, the optimal revenue policy considers only the profit of service providers. As an
effort to conduct a multi-objective study, in this paper we will also take into account the
requirements of WiMAX subscribers and develop a policy with a satisfactory tradeoff
between service providers and subscribers.

The Project includes the following:
1) The development of a CAC framework for WiMAX networks;
2) An investigation of various CAC optimization strategies; and
3) A series of constrained greedy revenue algorithms for fast calculation.
Through detailed performance evaluation, the study carried out in this paper will show that
the proposed CAC solution can meet the expectations of both service providers and
subscribers.

Modules:
CAC model for WiMAX networks
Calculate the UL and DL capacity
1-D CAC optimization strategies and develop their corresponding approximation algorithms

The following parameters are calculated using the greedy algorithm:
Utility Requirement
Fairness Requirement
Constrained Optimal Revenue Strategy

Simulation graphs:
Traffic arrival vs Revenue
Traffic arrival vs Utility
Blocking probability vs Traffic arrival
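The greedy revenue idea above can be sketched as follows. This is a hypothetical, simplified illustration (the field names and the revenue-per-slot ordering are illustrative choices, not the paper's exact constrained algorithm): calls are admitted in order of revenue per bandwidth slot until capacity runs out, which is the classic greedy heuristic for the stochastic-knapsack view of CAC.

```python
def greedy_admission(classes, capacity):
    """Greedy revenue admission sketch. Each service class is a dict with
    'name', 'bandwidth' (slots per call), 'revenue' (per call) and
    'demand' (offered calls). Returns (admitted counts, leftover slots)."""
    admitted = {}
    remaining = capacity
    # admit the most revenue per unit of bandwidth first
    for cls in sorted(classes, key=lambda c: c['revenue'] / c['bandwidth'],
                      reverse=True):
        n = min(cls['demand'], remaining // cls['bandwidth'])
        admitted[cls['name']] = n
        remaining -= n * cls['bandwidth']
    return admitted, remaining
```

The utility and fairness constraints of the paper would be enforced on top of this ordering, e.g. by reserving a minimum number of slots per class before the greedy pass.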

ROBUST DWT-SVD DOMAIN IMAGE
WATERMARKING
Watermarking (data hiding) is the process of embedding data into a
multimedia element such as image, audio or video. This embedded data
can later be extracted from, or detected in, the multimedia for security
purposes. A watermarking algorithm consists of the watermark structure,
an embedding algorithm, and an extraction, or a detection, algorithm.

Watermarks can be embedded in the pixel domain or a transform
domain. In multimedia applications, embedded watermarks should be
invisible, robust, and have a high capacity. Invisibility refers to the
degree of distortion introduced by the watermark and its effect on
viewers or listeners. Robustness is the resistance of an embedded
watermark against intentional attacks and normal A/V processes such
as noise, filtering (blurring, sharpening, etc.), resampling, scaling,
rotation, cropping, and lossy compression.

Capacity is the amount of data that can be represented by an embedded
watermark. The approaches used in watermarking still images include
least-significant-bit encoding, basic M-sequences, transform techniques,
and image-adaptive techniques. An important criterion for classifying
watermarking schemes is the type of information needed by the detector:
• Non-blind schemes: both the original image and the secret key(s) for
watermark embedding.
• Semi-blind schemes: the secret key(s) and the watermark bit sequence.
• Blind schemes: only the secret key(s).
Typical uses of watermarks include copyright protection (identification of
the origin of content, tracing illegally distributed copies) and disabling
unauthorized access to content. Requirements and characteristics for the
digital watermarks in these scenarios are, in general, different.
Identification of the origin of content requires the embedding of a single
watermark into the content at the source of distribution.

To trace illegal copies, a unique watermark is needed based on the location
or identity of the recipient in the multimedia network. In both of these
applications, non-blind schemes are appropriate, as watermark extraction or
detection needs to take place in a special laboratory environment only when
there is a dispute regarding the ownership of content.

For access control, the watermark should be checked in every
authorized consumer device used to receive the content, thus requiring
semi-blind or blind schemes. Note that the cost of a watermarking
system depends on the intended use and may vary considerably.
Two widely used image compression standards are JPEG and JPEG2000.
The former is based on the Discrete Cosine Transform (DCT), the
latter on the Discrete Wavelet Transform (DWT).

In recent years, many watermarking schemes have been developed
using these popular transforms. In all frequency-domain watermarking
schemes, there is a conflict between robustness and transparency. If the
watermark is embedded in the perceptually most significant components,
the scheme is robust to attacks but the watermark may be difficult to hide.

On the other hand, if the watermark is embedded in perceptually
insignificant components, it is easier to hide but the scheme may be
less resistant to attacks. In image watermarking, two distinct approaches
have been used to represent the watermark. In the first approach, the
watermark is represented as a sequence of randomly generated real
numbers having a normal distribution with zero mean and unit variance.

This type of watermark allows the detector to statistically check the
presence or absence of the embedded watermark. In the second
approach, a picture representing a company logo or other copyright
information is embedded in the cover image. The detector actually
reconstructs the watermark and computes its visual quality using an
appropriate measure.
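The SVD half of a DWT-SVD scheme can be sketched as follows. This is a minimal non-blind illustration only: the full method would first take a DWT and embed in a chosen subband, whereas here the watermark vector is simply added, scaled by a strength factor alpha (an illustrative parameter), to the singular values of a block.

```python
import numpy as np

def svd_embed(block, wm, alpha=0.01):
    """Embed watermark vector wm into the singular values of a square
    block. Returns (marked block, original singular values); the original
    singular values must be stored for non-blind extraction."""
    u, s, vt = np.linalg.svd(block)
    return u @ np.diag(s + alpha * wm) @ vt, s

def svd_extract(marked, s_orig, alpha=0.01):
    """Recover the watermark from the marked block using the stored
    original singular values (non-blind detection)."""
    s_marked = np.linalg.svd(marked, compute_uv=False)
    return (s_marked - s_orig) / alpha
```

With small alpha the modified singular values stay sorted and positive, so the SVD of the marked block recovers them, and hence the watermark, almost exactly.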

FILTERBANK-BASED
FINGERPRINT MATCHING
With identity fraud in our society reaching unprecedented proportions and with an
increasing emphasis on the emerging automatic personal identification applications,
biometrics-based verification, especially fingerprint-based identification, is receiving a
lot of attention. There are two major shortcomings of the traditional approaches to
fingerprint representation.

For a considerable fraction of the population, representations based on
explicit detection of complete ridge structures in the fingerprint are
difficult to extract automatically. The widely used minutiae-based
representation does not utilize a significant component of the rich
discriminatory information available in fingerprints. Local ridge
structures cannot be completely characterized by minutiae.

Further, minutiae-based matching has difficulty quickly matching two
fingerprint images containing different numbers of unregistered minutiae
points. The proposed filter-based algorithm uses a bank of Gabor filters
to capture both local and global details in a fingerprint as a compact
fixed-length FingerCode. Fingerprint matching is based on the Euclidean
distance between the two corresponding FingerCodes and hence is
extremely fast.

We are able to achieve a verification accuracy which is only marginally inferior to the
best results of minutiae-based algorithms published in the open literature. Our
system performs better than a state-of-the-art minutiae-based system when the
performance requirement of the application system does not demand a very low false
acceptance rate. Finally, we show that the matching performance can be improved by
combining the decisions of the matchers based on complementary (minutiae-based
and filter-based) fingerprint information.

Index Terms: Biometrics, FingerCode, fingerprints, flow pattern, Gabor filters,
matching, texture, verification.

EIGENFACES FOR RECOGNITION
Research on automatic face recognition in images has rapidly
developed into several inter-related lines, and this research has
both led to and been driven by a disparate and expanding set of
commercial applications.

The large number of research activities is evident in the
growing number of scientific communications published on
subjects related to face processing and recognition.
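The core eigenfaces computation can be sketched in a few lines. This is a minimal illustration, assuming each face image is flattened into a row vector: the eigenfaces are the principal components of the mean-centred training set, obtained here via SVD (equivalent to the Karhunen-Loeve expansion used in the paper).

```python
import numpy as np

def eigenfaces(faces, k):
    """faces: (n, d) matrix of flattened face images. Returns the mean
    face and the top-k eigenfaces (rows of vt, the principal axes)."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, components):
    """Represent a face by its k coefficients in eigenface space."""
    return components @ (face - mean)
```

Recognition then reduces to nearest-neighbour search among the stored coefficient vectors, and a face lying in the learned subspace reconstructs exactly from its coefficients.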

Index Terms: face, recognition, eigenfaces, eigenvalues, eigenvectors, Karhunen-Loeve algorithm.

SPEECH RECOGNITION SYSTEM
FOR ISOLATED WORDS
Speech recognition technology is used more and more for telephone
applications like travel booking and information, financial account
information, customer service call routing, and directory assistance.

Using constrained grammar recognition, such applications can achieve
remarkably high accuracy. Research and development in speech
recognition technology has continued to grow as the cost of
implementing such voice-activated systems has dropped and the
usefulness and efficacy of these systems has improved.

For example, recognition systems optimized for telephone applications
can often supply information about the confidence of a particular
recognition; if the confidence is low, the application can prompt
callers to confirm or repeat their request.

Furthermore, speech recognition has enabled the automation of certain
applications that are not automatable using push-button interactive
voice response (IVR) systems, like directory assistance and systems
that allow callers to "dial" by speaking names listed in an electronic
phone book.

Index Terms: speech, recognition, verification, sound, isolated, words.

IRIS RECOGNITION SYSTEM


The iris of each eye is unique. No two irises are alike in their mathematical
detail, even between identical twins and triplets or between one's own left and
right eyes. Unlike the retina, however, the iris is clearly visible from a distance,
allowing easy image acquisition without intrusion.

The iris remains stable throughout one's lifetime, barring rare disease or
trauma. The random patterns of the iris are the equivalent of a complex
"human barcode," created by a tangled meshwork of connective tissue and
other visible features. The iris recognition process begins with video-based
image acquisition that locates the eye and iris.

The boundaries of the pupil and iris are defined, eyelid occlusion and specular
reflection are discounted, and image quality is determined for processing.
The iris pattern is processed and encoded into a record (or "template"), which is
stored and used for recognition when a live iris is presented for
comparison. Half of the information in the record digitally describes the
features of the iris; the other half controls the comparison,
eliminating specular reflection, eyelid droop, eyelashes, etc.

A biometric system provides automatic identification of an individual based
on a unique feature or characteristic possessed by the individual. Iris
recognition is regarded as the most reliable and accurate biometric
identification system available. Most commercial iris recognition systems use
patented algorithms developed by Daugman, and these algorithms are able to
produce perfect recognition rates. However, published results have usually
been produced under favourable conditions, and there have been no
independent trials of the technology.

The iris recognition system consists of an automatic segmentation system
that is based on the Hough transform and is able to localise the circular iris
and pupil region, occluding eyelids and eyelashes, and reflections. The
extracted iris region was then normalised into a rectangular block with
constant dimensions to account for imaging inconsistencies.

Finally, the phase data from 1D Log-Gabor filters was extracted and
quantised to four levels to encode the unique pattern of the iris into a
bit-wise biometric template. The Hamming distance was employed for
classification of iris templates, and two templates were found to match if
a test of statistical independence was failed. The system performed with
perfect recognition on a set of 75 eye images; however, tests on another
set of 624 images resulted in false accept and false reject rates of
0.005% and 0.238%, respectively. Therefore, iris recognition is shown to
be a reliable and accurate biometric technology.
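The Hamming-distance comparison can be sketched as below. This is an illustrative version using bit lists with noise masks (the 0.32 threshold is an illustrative operating point, not the system's tuned value): only bits valid in both templates are compared, and a low fractional disagreement means the statistical-independence test fails, i.e. the irises match.

```python
def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes
    (lists of 0/1 bits), counting only bits valid in both masks
    (1 = usable bit, 0 = occluded by eyelids, lashes or reflections)."""
    disagree = valid = 0
    for a, b, ma, mb in zip(code_a, code_b, mask_a, mask_b):
        if ma and mb:
            valid += 1
            disagree += a ^ b
    return disagree / valid

def is_match(distance, threshold=0.32):
    """Templates match when the distance falls below the decision
    threshold; unrelated irises cluster near 0.5."""
    return distance < threshold
```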

Index Terms: iris, recognition, verification, Gabor, eye recognition, matching.

OPTICAL CHARACTER RECOGNITION

Optical character recognition (OCR) is the translation of optically
scanned bitmaps of printed or written text characters into character
codes, such as ASCII. This is an efficient way to turn hard-copy
materials into data files that can be edited and otherwise manipulated
on a computer.

This is the technology long used by libraries and government agencies to
make lengthy documents quickly available electronically. Advances in OCR
technology have spurred its increasing use by enterprises. For many
document-input tasks, OCR is the most cost-effective and speedy method
available.

And each year, the technology frees acres of storage space once given
over to file cabinets and boxes full of paper documents. Before OCR can
be used, the source material must be scanned using an optical scanner
(and sometimes a specialized circuit board in the PC) to read in the page
as a bitmap (a pattern of dots). Software to recognize the images is also
required.

EIGENEXPRESSIONS FOR FACIAL
EXPRESSION RECOGNITION

We propose an algorithm for facial expression recognition which can
classify the given image into one of the seven basic facial expression
categories (happiness, sadness, fear, surprise, anger, disgust, and
neutral).

PCA is used for dimensionality reduction of the input data while retaining
those characteristics of the data set that contribute most to its variance,
by keeping lower-order principal components and ignoring higher-order
ones. Such low-order components contain the "most important" aspects of
the data.

The extracted feature vectors in the reduced space are used to train a
supervised neural network classifier. This approach is extremely
powerful because it does not require the detection of any reference point
or node grid.

The proposed method is fast and can be used for real-time applications.

JPEG-BASED IMAGE COMPRESSION TECHNOLOGY

JPEG is a standardized image compression mechanism. It stands for
Joint Photographic Experts Group, the original name of the committee
that wrote the standard. JPEG is designed for compressing either
full-color or gray-scale images of natural, real-world scenes.

It works well on photographs, naturalistic artwork, and similar material;
not so well on lettering, simple cartoons, or line drawings. JPEG is a
lossy compression algorithm, meaning that the decompressed image
isn't quite the same as the one you started with.

JPEG is designed to exploit known limitations of the human eye, notably
the fact that small color changes are perceived less accurately than
small changes in brightness.

A useful property of JPEG is that the degree of lossiness can be varied
by adjusting compression parameters. This means that the image maker
can trade off file size against output image quality. The code we have
developed includes:
• Color space transformation between RGB and YCbCr
• Quantization
• Optimized encoding
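The transform-and-quantize core of the pipeline can be sketched as follows. This is a reference (unoptimized) 2-D DCT-II plus a uniform quantizer; real JPEG uses an 8x8 per-frequency quantization table rather than the single step size used here for simplicity.

```python
import math

def dct_2d(block):
    """Orthonormal 2-D DCT-II of an n x n block (list of lists), the
    transform JPEG applies to each image block before quantization."""
    n = len(block)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += block[x][y] * math.cos((2 * x + 1) * u * math.pi / (2 * n)) \
                                     * math.cos((2 * y + 1) * v * math.pi / (2 * n))
            out[u][v] = c(u) * c(v) * s
    return out

def quantize(coeffs, step):
    """Uniform quantization: larger step = coarser coefficients = more
    loss and smaller files."""
    return [[round(co / step) for co in row] for row in coeffs]

def dequantize(q, step):
    return [[co * step for co in row] for row in q]
```

Note that a constant block concentrates all its energy in the DC coefficient, and the quantization round trip perturbs each coefficient by at most half the step size, which is the lossiness/size trade-off described above.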

OFF-LINE SIGNATURE RECOGNITION


There exist a number of biometric methods today, e.g. signatures,
fingerprints, iris, etc. There is considerable interest in authentication
based on handwritten signature verification, as it is the cheapest way
to authenticate a person.

Fingerprint and iris verification require the installation of costly
equipment and hence cannot be used at day-to-day places like banks.
Because forensic experts cannot be employed at every place, there has
been considerable effort towards developing algorithms that can verify
and authenticate an individual's identity. Many times the signatures are
not even readable by human beings.

Therefore a signature is treated as an image carrying a certain pattern of
pixels that pertains to a specific individual. The signature verification
problem is therefore concerned with determining whether a particular
signature truly belongs to a person or not.

Signatures are a special case of handwriting in which special characters
and flourishes are viable. Signature verification is a difficult pattern
recognition problem because no two genuine signatures of a person
are precisely the same. Its difficulty also stems from the fact that skilled
forgeries follow the genuine pattern, unlike fingerprints or irises, where
samples from two different persons vary widely.

Ideally, interpersonal variations should be much larger than intrapersonal
variations. It is therefore very important to identify and extract those
features which minimize intrapersonal variation and maximize
interpersonal variation.

There are two approaches to signature verification, online and offline,
differentiated by the way data is acquired. In the offline case, the
signature is obtained on a piece of paper and later scanned; in the
online case, it is obtained on an electronic tablet with a pen. Obviously,
dynamic information such as speed and pressure is lost in the offline
case.

A DIGITAL IMAGE COPYRIGHT PROTECTION SCHEME
BASED ON VISUAL CRYPTOGRAPHY

A simple watermarking method for color images is proposed. The proposed method
is based on watermark embedding for the histograms of the HSV planes using visual
cryptography watermarking. The method has been proved to be robust for various
image processing operations such as filtering, compression, additive noise, and
various geometrical attacks such as rotation, scaling, cropping, flipping, and
shearing.

The watermark method is an excellent technique to protect the copyright ownership of
a digital image. The proposed watermark method is built on the concept of
visual cryptography. According to the proposed method, the watermark pattern does
not have to be embedded into the original image directly, which makes it harder to
detect or recover from the marked image in an illegal way.

It can be retrieved from the marked image without comparison with the
original image. A notary can also adjudge the ownership of a suspect
image offline by this method. The watermark pattern can be any significant
black/white image that typifies the owner.

Experimental results show that the watermark pattern in the marked image has good
transparency and robustness. With the proposed method, all pixels of the marked
image are identical to those of the original image.
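The visual-cryptography idea underlying the scheme can be sketched as follows. This is the textbook 2-out-of-2 construction (a simplified illustration with 2-subpixel patterns, not the paper's exact histogram-based variant): each secret pixel expands into two share patterns, and only stacking both shares reveals the watermark.

```python
import random

def make_shares(secret, seed=0):
    """2-out-of-2 visual cryptography. secret: list of pixels, 0 = white,
    1 = black. For a white pixel both shares get the same random pattern;
    for a black pixel they get complementary patterns, so overlaying
    turns it fully black while each share alone looks random."""
    rng = random.Random(seed)
    patterns = [(0, 1), (1, 0)]
    share1, share2 = [], []
    for pixel in secret:
        p = rng.choice(patterns)
        share1.append(p)
        share2.append(p if pixel == 0 else (1 - p[0], 1 - p[1]))
    return share1, share2

def stack(share1, share2):
    """Simulate overlaying the transparencies: a subpixel is black if it
    is black in either share (logical OR)."""
    return [(a[0] | b[0], a[1] | b[1]) for a, b in zip(share1, share2)]
```

Because the watermark lives in the shares rather than in the image pixels, the marked image can remain bit-identical to the original, as the abstract claims.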

Index Terms: Matlab, source, code, histogram, HSV, visual, cryptography, watermark,
hue, saturation, value.

HIGH DEFINITION IMAGE COMPRESSION
TECHNOLOGY

The transport of images across communication paths is an expensive
process. Image compression provides an option for reducing the
number of bits in transmission. This in turn helps increase the volume of
data transferred in a space of time, along with reducing the cost
required. It has become increasingly important to most computer
networks, as the volume of data traffic has begun to exceed their
capacity for transmission.

Traditional techniques that have already been identified for data
compression include predictive coding, transform coding, and vector
quantization. In brief, predictive coding refers to the decorrelation of
similar neighbouring pixels within an image to remove redundancy.
Following the removal of redundant data, a more compressed image or
signal may be transmitted.
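The predictive-coding idea above can be sketched with the simplest possible predictor, shown here on a 1-D sample stream for clarity: each sample is predicted as the previous one, and only the small, easily compressed residuals are transmitted.

```python
def delta_encode(samples):
    """Simplest predictive coder: predict each sample as the previous
    one and keep only the prediction residuals. Neighbouring samples
    are usually similar, so the residuals are small."""
    prev = 0
    residuals = []
    for s in samples:
        residuals.append(s - prev)
        prev = s
    return residuals

def delta_decode(residuals):
    """Invert the prediction: accumulate residuals to rebuild samples."""
    prev = 0
    out = []
    for r in residuals:
        prev += r
        out.append(prev)
    return out
```

The round trip is lossless; an entropy coder applied to the residuals would provide the actual bit savings.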

Transform-based compression techniques have also been commonly
employed. These techniques execute transformations on images to
produce a set of coefficients. A subset of coefficients is chosen that
allows good data representation (minimum distortion) while maintaining
an adequate amount of compression for transmission.

The results achieved with a transform-based technique are highly
dependent on the choice of transform used (cosine, wavelet,
Karhunen-Loeve, etc.). Finally, vector quantization techniques require the
development of an appropriate codebook to compress data.
Usage of codebooks does not guarantee convergence and hence does not
necessarily deliver infallible decoding accuracy. Also, the process may
be very slow for large codebooks, as it requires extensive
searches through the entire codebook. Following the review of some of
the traditional techniques for image compression, it is possible to
discuss some of the more recent techniques that may be employed for
data compression.

FAST DCT VIA MOMENTS
Discrete cosine transforms (DCTs) are widely used in speech coding and
image compression. They resemble the Karhunen-Loeve transform for
first-order Markov stationary random data and are classified into four groups.

Finding fast computational algorithms for DCTs has been a rather active
subject. These methods all try to reduce the number of multiplications.
For low-power implementations of DCTs on mobile devices, it is very
important that few or no floating-point multiplications are needed.

At the same time, parallel hardware methods have also been developed
for designing fast DCT processors. Among them, systolic array methods
have received more attention due to their easy VLSI implementation.

By using a modular mapping and truncating, DCTs are approximated by
linear sums of discrete moments computed quickly through additions only.

This enables us to use computational techniques developed for
computing moments to compute DCTs efficiently. We demonstrate this
by applying our earlier systolic solution to this problem. The method can
also be applied to multidimensional DCTs as well as their inverses.

Index Terms: DCT, discrete cosine transform, moments, moment, fast transform, systolic array, Matlab source code.

FAST AND ROBUST SPEECH RECOGNITION
BASED ON DYNAMIC TIME-WARPING

Searching for the best path that matches two time-series signals is a
main task in many applications. Dynamic Time-Warping (DTW) is one of
the prominent techniques to accomplish this task, especially in speech
recognition systems. DTW is a cost-minimisation matching technique, in
which a test signal is stretched or compressed according to a reference
template.

Although there are other advanced techniques in speech recognition,
such as hidden Markov modelling (HMM) and artificial neural network
(ANN) techniques, DTW is widely used in small-scale embedded speech
recognition systems such as those in cell phones.

The reason for this is the simplicity of the hardware implementation of
the DTW engine, which makes it suitable for many mobile devices.
Additionally, the training procedure in DTW is very simple and fast
compared with its HMM and ANN rivals.
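The core DTW recurrence can be sketched in a few lines. This plain O(n*m) version uses an absolute-difference local cost on scalar sequences for illustration; a real speech system would apply the same recurrence to distances between feature vectors (e.g. MFCC frames).

```python
def dtw_distance(test, reference):
    """Classic DTW by dynamic programming: cost of the best monotonic
    alignment between two sequences, allowing the test signal to be
    stretched or compressed against the reference template."""
    n, m = len(test), len(reference)
    inf = float('inf')
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(test[i - 1] - reference[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # compress reference
                                 d[i][j - 1],      # compress test
                                 d[i - 1][j - 1])  # match and advance both
    return d[n][m]
```

For isolated-word recognition, the spoken word is compared against the templates of every vocabulary word and the one with the smallest DTW cost wins.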

Index Terms: Matlab, speech recognition, speech verification, speech matching, Dynamic Time-Warping, DTW, feature extraction.

For More Project Abstracts

www.sahasratechnology.com
