
SEABIRDS

IEEE 2013 - 2014
SOFTWARE PROJECTS IN VARIOUS DOMAINS
| JAVA | J2ME | J2EE | DOTNET | MATLAB | NS2 |

SBGC, 24/83, O Block, MMDA Colony, Arumbakkam, CHENNAI - 600106
SBGC, 4th Floor, Surya Complex, Singarathope Bus Stop, Old Madurai Road, TRICHY - 620002

Web: www.ieeeproject.in
E-Mail: ieeeproject@hotmail.com

Trichy: Mobile: 09003012150, Phone: 0431-4013174
Chennai: Mobile: 09944361169

SBGC provides IEEE 2013 - 2014 projects for all final year students. We assist students with technical guidance in two categories.

Category 1: Students with new project ideas / new or old IEEE papers.
Category 2: Students selecting from our project list.

When you register for a project, we ensure that the project is implemented to your fullest satisfaction and that you have a thorough understanding of every aspect of the project.

SEABIRDS PROVIDES YOU THE LATEST IEEE 2012 PROJECTS / IEEE 2013 PROJECTS FOR STUDENTS OF THE FOLLOWING DEPARTMENTS:

B.E, B.TECH, M.TECH, M.E, DIPLOMA, MS, BSC, MSC, BCA, MCA, MBA, BBA, PHD

B.E (ECE, EEE, E&I, ICE, MECH, PROD, CSE, IT, THERMAL, AUTOMOBILE, MECHATRONICS, ROBOTICS)
B.TECH (ECE, MECHATRONICS, E&I, EEE, MECH, CSE, IT, ROBOTICS)
M.TECH (EMBEDDED SYSTEMS, COMMUNICATION SYSTEMS, POWER ELECTRONICS, COMPUTER SCIENCE, SOFTWARE ENGINEERING, APPLIED ELECTRONICS, VLSI DESIGN)
M.E (EMBEDDED SYSTEMS, COMMUNICATION SYSTEMS, POWER ELECTRONICS, COMPUTER SCIENCE, SOFTWARE ENGINEERING, APPLIED ELECTRONICS, VLSI DESIGN)
DIPLOMA (CE, EEE, E&I, ICE, MECH, PROD, CSE, IT)
MBA (HR, FINANCE, MANAGEMENT, OPERATION MANAGEMENT, SYSTEM MANAGEMENT, PROJECT MANAGEMENT, HOSPITAL MANAGEMENT, EDUCATION MANAGEMENT, MARKETING MANAGEMENT, TECHNOLOGY MANAGEMENT)

We also have training, project, and R&D divisions to serve students and make them job-oriented professionals.

PROJECT SUPPORT AND DELIVERABLES


Project Abstract
IEEE Paper
IEEE Reference Papers, Materials & Books in CD
PPT / Review Material
Project Report (All Diagrams & Screenshots)
Working Procedures
Algorithm Explanations
Project Installation in Laptops
Project Certificate

TECHNOLOGY: JAVA
DOMAIN: CLOUD COMPUTING
S. No. | IEEE TITLE | ABSTRACT | IEEE YEAR

1. Decentralized Access Control with Anonymous Authentication of Data Stored in Clouds (IEEE 2014)

We propose a new decentralized access control scheme for secure data storage in clouds that supports anonymous authentication. In the proposed scheme, the cloud verifies the authenticity of the series without knowing the user's identity before storing data. Our scheme also has the added feature of access control in which only valid users are able to decrypt the stored information. The scheme prevents replay attacks and supports creation, modification, and reading data stored in the cloud. We also address user revocation. Moreover, our authentication and access control scheme is decentralized and robust, unlike other access control schemes designed for clouds, which are centralized. The communication, computation, and storage overheads are comparable to centralized approaches.

2. Modeling of Distributed File Systems for Practical Performance Analysis (IEEE 2014)

Cloud computing has received significant attention recently. Delivering quality-guaranteed services in clouds is highly desired. Distributed file systems (DFSs) are the key component of any cloud-scale data processing middleware. Evaluating the performance of DFSs is accordingly very important. To avoid the cost of late life cycle performance fixes and architectural redesign, providing performance analysis before the deployment of DFSs is also particularly important. In this paper, we propose a systematic and practical performance analysis framework, driven by architecture and design models for defining the structure and behavior of typical master/slave DFSs. We put forward a configuration guideline for specifications of configuration alternatives of such DFSs, and a practical approach for both qualitative and quantitative performance analysis of DFSs with various configuration settings in a systematic way. What distinguishes our approach from others is that 1) most existing works rely on performance measurements under a variety of workloads/strategies, comparisons with other DFSs, or running application programs, but our approach is based on architecture and design level models and systematically derived performance models; 2) our approach is able to both qualitatively and quantitatively evaluate the performance of DFSs; and 3) our approach not only can evaluate the overall performance of a DFS but also its components and individual steps. We demonstrate the effectiveness of our approach by evaluating the Hadoop distributed file system (HDFS). A series of real-world experiments on EC2 (Amazon Elastic Compute Cloud), Tansuo, and Inspur clusters were conducted to qualitatively evaluate the effectiveness of our approach. We also performed a set of experiments on HDFS on EC2 to quantitatively analyze the performance and limitation of the metadata server of DFSs. Results show that our approach can achieve sufficient performance analysis. Similarly, the proposed approach could also be applied to evaluate other DFSs such as MooseFS, GFS, and zFS.
3. Balancing Performance, Accuracy, and Precision for Secure Cloud Transactions (IEEE 2014)

In distributed transactional database systems deployed over cloud servers, entities cooperate to form proofs of authorizations that are justified by collections of certified credentials. These proofs and credentials may be evaluated and collected over extended time periods under the risk of having the underlying authorization policies or the user credentials being in inconsistent states. It therefore becomes possible for policy-based authorization systems to make unsafe decisions that might threaten sensitive resources. In this paper, we highlight the criticality of the problem. We then define the notion of trusted transactions when dealing with proofs of authorization. Accordingly, we propose several increasingly stringent levels of policy consistency constraints, and present different enforcement approaches to guarantee the trustworthiness of transactions executing on cloud servers. We propose a Two-Phase Validation Commit protocol as a solution, which is a modified version of the basic Two-Phase Commit protocol. We finally analyze the different approaches presented using both analytical evaluation of the overheads and simulations to guide decision makers on which approach to use.
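The Two-Phase Validation Commit idea above can be illustrated compactly. The following Java sketch is not the paper's protocol; it only shows, under assumed interfaces (a `Participant` type, a `validateProofs` check, a single coordinator thread), how a classic two-phase commit vote could be extended with a proof-of-authorization validation step.

```java
// Minimal sketch of a two-phase "validation + commit" round (illustrative only).
// Each participant votes twice: once on data integrity, once on whether its proofs
// of authorization are still consistent with the current policy version.
import java.util.Arrays;
import java.util.List;

interface Participant {
    boolean prepareData();                       // classic 2PC prepare vote
    boolean validateProofs(long policyVersion);  // re-check proofs of authorization
    void commit();
    void abort();
}

public class TwoPhaseValidationCommit {
    public static boolean run(List<Participant> participants, long policyVersion) {
        // Phase 1: prepare and validate at every participant.
        for (Participant p : participants) {
            if (!p.prepareData() || !p.validateProofs(policyVersion)) {
                participants.forEach(Participant::abort);
                return false; // any stale proof or failed prepare aborts the transaction
            }
        }
        // Phase 2: all votes were "yes", so commit everywhere.
        participants.forEach(Participant::commit);
        return true;
    }

    public static void main(String[] args) {
        Participant ok = new Participant() {
            public boolean prepareData() { return true; }
            public boolean validateProofs(long v) { return true; }
            public void commit() { System.out.println("committed"); }
            public void abort() { System.out.println("aborted"); }
        };
        System.out.println(run(Arrays.asList(ok, ok), 1L));
    }
}
```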
4. A Scalable Two-Phase Top-Down Specialization Approach for Data Anonymization Using MapReduce on Cloud (IEEE 2014)

A large number of cloud services require users to share private data like electronic health records for data analysis or mining, bringing privacy concerns. Anonymizing data sets via generalization to satisfy certain privacy requirements such as k-anonymity is a widely used category of privacy-preserving techniques. At present, the scale of data in many cloud applications increases tremendously in accordance with the Big Data trend, thereby making it a challenge for commonly used software tools to capture, manage, and process such large-scale data within a tolerable elapsed time. As a result, it is a challenge for existing anonymization approaches to achieve privacy preservation on privacy-sensitive large-scale data sets due to their insufficient scalability. In this paper, we propose a scalable two-phase top-down specialization (TDS) approach to anonymize large-scale data sets using the MapReduce framework on cloud. In both phases of our approach, we deliberately design a group of innovative MapReduce jobs to concretely accomplish the specialization computation in a highly scalable way. Experimental evaluation results demonstrate that with our approach, the scalability and efficiency of TDS can be significantly improved over existing approaches.
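Since the approach is built on MapReduce, a small Hadoop skeleton may help picture where such jobs plug in. The sketch below is not one of the paper's TDS jobs; it merely counts records per generalized quasi-identifier group (the kind of statistic a k-anonymity check needs), with a hypothetical `generalize()` rule and an assumed `age,zip,disease` input format.

```java
// Minimal Hadoop MapReduce sketch (not the paper's TDS jobs): counts how many records
// fall into each generalized quasi-identifier group.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class GroupCount {

  public static class GroupMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);

    // Hypothetical generalization: keep only the age decade and the first 3 zip digits.
    private String generalize(String age, String zip) {
      return (Integer.parseInt(age) / 10) * 10 + "-" + zip.substring(0, 3) + "**";
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] f = value.toString().split(",");  // assumed line format: age,zip,disease
      context.write(new Text(generalize(f[0], f[1])), ONE);
    }
  }

  public static class CountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      context.write(key, new IntWritable(sum)); // groups with sum < k violate k-anonymity
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "qi-group-count");
    job.setJarByClass(GroupCount.class);
    job.setMapperClass(GroupMapper.class);
    job.setReducerClass(CountReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```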
5. Dynamic Optimization of Multiattribute Resource Allocation in Self-Organizing Clouds (IEEE 2013)

By leveraging virtual machine (VM) technology, which provides performance and fault isolation, cloud resources can be provisioned on demand in a fine-grained, multiplexed manner rather than in monolithic pieces. By integrating volunteer computing into cloud architectures, we envision a gigantic self-organizing cloud (SOC) being formed to reap the huge potential of untapped commodity computing power over the Internet. Toward this new architecture, where each participant may autonomously act as both resource consumer and provider, we propose a fully distributed, VM-multiplexing resource allocation scheme to manage decentralized resources. Our approach not only achieves maximized resource utilization using the proportional share model (PSM), but also delivers provably and adaptively optimal execution efficiency. We also design a novel multi-attribute range query protocol for locating qualified nodes. Contrary to existing solutions, which often generate bulky messages per request, our protocol produces only one lightweight query message per task on the Content Addressable Network (CAN). It works effectively to find for each task its qualified resources under a randomized policy that mitigates the contention among requesters. We show that the SOC with our optimized algorithms can improve system throughput by 15-60 percent compared with a P2P Grid model. Our solution also exhibits fairly high adaptability in a dynamic node-churning environment.
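The proportional share model (PSM) mentioned above has a simple core that can be sketched in a few lines of Java. This is only an illustration of proportional sharing, not the paper's VM-multiplexing scheme; the task weights and capacity value are made-up examples.

```java
// Proportional share sketch: a node's divisible capacity is split among competing
// tasks in proportion to their (hypothetical) weights, e.g. bids or demands.
import java.util.LinkedHashMap;
import java.util.Map;

public class ProportionalShare {
  public static Map<String, Double> allocate(double capacity, Map<String, Double> weights) {
    double total = weights.values().stream().mapToDouble(Double::doubleValue).sum();
    Map<String, Double> shares = new LinkedHashMap<>();
    for (Map.Entry<String, Double> e : weights.entrySet()) {
      shares.put(e.getKey(), capacity * e.getValue() / total); // share_i = C * w_i / sum(w)
    }
    return shares;
  }

  public static void main(String[] args) {
    Map<String, Double> weights = new LinkedHashMap<>();
    weights.put("taskA", 2.0);
    weights.put("taskB", 1.0);
    weights.put("taskC", 1.0);
    // A node with 8 CPU units gives taskA 4.0 and taskB/taskC 2.0 each.
    System.out.println(allocate(8.0, weights));
  }
}
```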
6. Scalable and Secure Sharing of Personal Health Records in Cloud Computing Using Attribute-Based Encryption (IEEE 2013)

Personal health record (PHR) is an emerging patient-centric model of health information exchange, which is often outsourced to be stored at a third party, such as cloud providers. However, there have been wide privacy concerns as personal health information could be exposed to those third-party servers and to unauthorized parties. To assure the patients' control over access to their own PHRs, it is a promising method to encrypt the PHRs before outsourcing. Yet, issues such as risks of privacy exposure, scalability in key management, flexible access, and efficient user revocation have remained the most important challenges toward achieving fine-grained, cryptographically enforced data access control. In this paper, we propose a novel patient-centric framework and a suite of mechanisms for data access control to PHRs stored in semitrusted servers. To achieve fine-grained and scalable data access control for PHRs, we leverage attribute-based encryption (ABE) techniques to encrypt each patient's PHR file. Different from previous works in secure data outsourcing, we focus on the multiple data owner scenario and divide the users in the PHR system into multiple security domains, which greatly reduces the key management complexity for owners and users. A high degree of patient privacy is guaranteed simultaneously by exploiting multiauthority ABE. Our scheme also enables dynamic modification of access policies or file attributes, and supports efficient on-demand user/attribute revocation and break-glass access under emergency scenarios. Extensive analytical and experimental results are presented which show the security, scalability, and efficiency of our proposed scheme.
7. On Data Staging Algorithms for Shared Data Accesses in Clouds (IEEE 2013)

In this paper, we study the strategies for efficiently achieving data staging and caching on a set of vantage sites in a cloud system with a minimum cost. Unlike the traditional research, we do not intend to identify the access patterns to facilitate the future requests. Instead, with such a kind of information presumably known in advance, our goal is to efficiently stage the shared data items to predetermined sites at advocated time instants to align with the patterns while minimizing the monetary costs for caching and transmitting the requested data items. To this end, we follow the cost and network models in [1] and extend the analysis to multiple data items, each with single or multiple copies. Our results show that under the homogeneous cost model, when the ratio of transmission cost to caching cost is low, a single copy of each data item can efficiently serve all the user requests. For the multicopy situation, we also consider the tradeoff between the transmission cost and caching cost by controlling the upper bounds of transmissions and copies. The upper bound can be given either on a per-item basis or on an all-item basis. We present efficient optimal solutions based on dynamic programming techniques for all these cases, provided that the upper bound is polynomially bounded by the number of service requests and the number of distinct data items. In addition to the homogeneous cost model, we also briefly discuss this problem under a heterogeneous cost model with some simple yet practical restrictions and present a 2-approximation algorithm for the general case. We validate our findings by implementing a data staging solver and conducting extensive simulation studies on the behaviors of the algorithms.

TECHNOLOGY: JAVA
DOMAIN: DATA MINING
S. No. | IEEE TITLE | ABSTRACT | IEEE YEAR

1. Facilitating Document Annotation Using Content and Querying Value (IEEE 2014)

A large number of organizations today generate and share textual descriptions of their products, services, and actions. Such collections of textual data contain a significant amount of structured information, which remains buried in the unstructured text. While information extraction algorithms facilitate the extraction of structured relations, they are often expensive and inaccurate, especially when operating on top of text that does not contain any instances of the targeted structured information. We present a novel alternative approach that facilitates the generation of the structured metadata by identifying documents that are likely to contain information of interest, where this information is going to be subsequently useful for querying the database. Our approach relies on the idea that humans are more likely to add the necessary metadata during creation time, if prompted by the interface; or that it is much easier for humans (and/or algorithms) to identify the metadata when such information actually exists in the document, instead of naively prompting users to fill in forms with information that is not available in the document. As a major contribution of this paper, we present algorithms that identify structured attributes that are likely to appear within the document, by jointly utilizing the content of the text and the query workload. Our experimental evaluation shows that our approach generates superior results compared to approaches that rely only on the textual content or only on the query workload to identify attributes of interest.
2. An Empirical Performance Evaluation of Relational Keyword Search Techniques (IEEE 2014)

Extending the keyword search paradigm to relational data has been an active area of research within the database and IR community during the past decade. Many approaches have been proposed, but despite numerous publications, there remains a severe lack of standardization for the evaluation of proposed search techniques. Lack of standardization has resulted in contradictory results from different evaluations, and the numerous discrepancies muddle what advantages are proffered by different approaches. In this paper, we present the most extensive empirical performance evaluation of relational keyword search techniques to appear to date in the literature. Our results indicate that many existing search techniques do not provide acceptable performance for realistic retrieval tasks. In particular, memory consumption precludes many search techniques from scaling beyond small data sets with tens of thousands of vertices. We also explore the relationship between execution time and factors varied in previous evaluations; our analysis indicates that most of these factors have relatively little impact on performance. In summary, our work confirms previous claims regarding the unacceptable performance of these search techniques and underscores the need for standardization in evaluations, standardization exemplified by the IR community.
3. Set Predicates in SQL: Enabling Set-Level Comparisons for Dynamically Formed Groups (IEEE 2014)

In data warehousing and OLAP applications, scalar-level predicates in SQL become increasingly inadequate to support a class of operations that require set-level comparison semantics, i.e., comparing a group of tuples with multiple values. Currently, complex SQL queries composed by scalar-level operations are often formed to obtain even very simple set-level semantics. Such queries are not only difficult to write but also challenging for a database engine to optimize, and thus can result in costly evaluation. This paper proposes to augment SQL with set predicates, to bring out otherwise obscured set-level semantics. We studied two approaches to processing set predicates: an aggregate function-based approach and a bitmap index-based approach. Moreover, we designed a histogram-based probabilistic method of set predicate selectivity estimation, for optimizing queries with multiple predicates. The experiments verified its accuracy and effectiveness in optimizing queries.
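To make the motivation concrete, the JDBC sketch below shows the kind of scalar-level workaround the abstract refers to: expressing the set-level question "which groups contain both values A and B" with GROUP BY and conditional aggregation in standard SQL. The in-memory H2 database (its driver must be on the classpath), the table, and the data are assumptions for illustration; the paper's proposed set-predicate syntax is an SQL extension and is not shown.

```java
// Scalar-level workaround for a set-level question, via plain JDBC and standard SQL.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SetContainmentQuery {
  public static void main(String[] args) throws Exception {
    String sql =
        "SELECT customer_id " +
        "FROM purchases " +
        "GROUP BY customer_id " +
        "HAVING SUM(CASE WHEN product = 'A' THEN 1 ELSE 0 END) > 0 " +
        "   AND SUM(CASE WHEN product = 'B' THEN 1 ELSE 0 END) > 0";
    try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
         Statement st = con.createStatement()) {
      // Tiny hypothetical data set, created only to make the flow concrete.
      st.execute("CREATE TABLE purchases(customer_id VARCHAR(10), product VARCHAR(10))");
      st.execute("INSERT INTO purchases VALUES ('c1','A'),('c1','B'),('c2','A')");
      ResultSet rs = st.executeQuery(sql);
      while (rs.next()) {
        System.out.println(rs.getString("customer_id")); // only c1 bought both A and B
      }
    }
  }
}
```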
4. Keyword Query Routing (IEEE 2014)

Keyword search is an intuitive paradigm for searching linked data sources on the web. We propose to route keywords only to relevant sources to reduce the high cost of processing keyword search queries over all sources. We propose a novel method for computing top-k routing plans based on their potentials to contain results for a given keyword query. We employ a keyword-element relationship summary that compactly represents relationships between keywords and the data elements mentioning them. A multilevel scoring mechanism is proposed for computing the relevance of routing plans based on scores at the level of keywords, data elements, element sets, and subgraphs that connect these elements. Experiments carried out using 150 publicly available sources on the web showed that valid plans (precision@1 of 0.92) that are highly relevant (mean reciprocal rank of 0.89) can be computed in 1 second on average on a single PC. Further, we show routing greatly helps to improve the performance of keyword search, without compromising its result quality.
5. Rough Hypercuboid Approach for Feature Selection in Approximation Spaces (IEEE 2014)

The selection of relevant and significant features is an important problem, particularly for data sets with a large number of features. In this regard, a new feature selection algorithm is presented based on a rough hypercuboid approach. It selects a set of features from a data set by maximizing the relevance, dependency, and significance of the selected features. By introducing the concept of the hypercuboid equivalence partition matrix, a novel representation of the degree of dependency of sample categories on features is proposed to measure the relevance, dependency, and significance of features in approximation spaces. The equivalence partition matrix also offers an efficient way to calculate many more quantitative measures of inexactness to describe the approximate classification. Several quantitative indices are introduced based on the rough hypercuboid approach for evaluating the performance of the proposed method. The superiority of the proposed method over other feature selection methods, in terms of classification accuracy and computational complexity, is established extensively on various real-life data sets of different sizes and dimensions.

6. Active Learning of Constraints for Semi-Supervised Clustering (IEEE 2014)

Semi-supervised clustering aims to improve clustering performance by considering user supervision in the form of pairwise constraints. In this paper, we study the active learning problem of selecting pairwise must-link and cannot-link constraints for semi-supervised clustering. We consider active learning in an iterative manner where in each iteration queries are selected based on the current clustering solution and the existing constraint set. We apply a general framework that builds on the concept of neighborhood, where neighborhoods contain labeled examples of different clusters according to the pairwise constraints. Our active learning method expands the neighborhoods by selecting informative points and querying their relationship with the neighborhoods. Under this framework, we build on the classic uncertainty-based principle and present a novel approach for computing the uncertainty associated with each data point. We further introduce a selection criterion that trades off the amount of uncertainty of each data point with the expected number of queries (the cost) required to resolve this uncertainty. This allows us to select queries that have the highest information rate. We evaluate the proposed method on the benchmark data sets and the results demonstrate consistent and substantial improvements over the current state of the art.

7. Supporting Privacy Protection in Personalized Web Search (IEEE 2014)

Personalized web search (PWS) has demonstrated its effectiveness in improving the quality of various search services on the Internet. However, evidence shows that users' reluctance to disclose their private information during search has become a major barrier for the wide proliferation of PWS. We study privacy protection in PWS applications that model user preferences as hierarchical user profiles. We propose a PWS framework called UPS that can adaptively generalize profiles by queries while respecting user-specified privacy requirements. Our runtime generalization aims at striking a balance between two predictive metrics that evaluate the utility of personalization and the privacy risk of exposing the generalized profile. We present two greedy algorithms, namely GreedyDP and GreedyIL, for runtime generalization. We also provide an online prediction mechanism for deciding whether personalizing a query is beneficial. Extensive experiments demonstrate the effectiveness of our framework. The experimental results also reveal that GreedyIL significantly outperforms GreedyDP in terms of efficiency.
8. Privacy-Preserving Enhanced Collaborative Tagging (IEEE 2014)

Collaborative tagging is one of the most popular services available online, and it allows end users to loosely classify either online or offline resources based on their feedback, expressed in the form of free-text labels (i.e., tags). Although tags may not be per se sensitive information, the wide use of collaborative tagging services increases the risk of cross referencing, thereby seriously compromising user privacy. In this paper, we make a first contribution toward the development of a privacy-preserving collaborative tagging service, by showing how a specific privacy-enhancing technology, namely tag suppression, can be used to protect end-user privacy. Moreover, we analyze how our approach can affect the effectiveness of a policy-based collaborative tagging system that supports enhanced web access functionalities, like content filtering and discovery, based on preferences specified by end users.
9. Event Characterization and Prediction Based on Temporal Patterns in Dynamic Data System (IEEE 2014)

The new method proposed in this paper applies a multivariate reconstructed phase space (MRPS) for identifying multivariate temporal patterns that are characteristic and predictive of anomalies or events in a dynamic data system. The new method extends the original univariate reconstructed phase space framework, which is based on a fuzzy unsupervised clustering method, by incorporating a new mechanism of data categorization based on the definition of events. In addition to modeling temporal dynamics in a multivariate phase space, a Bayesian approach is applied to model the first-order Markov behavior in the multidimensional data sequences. The method utilizes an exponential loss objective function to optimize a hybrid classifier which consists of a radial basis kernel function and a log-odds ratio component. We performed experimental evaluation on three data sets to demonstrate the feasibility and effectiveness of the proposed approach.
10. Discovering Emerging Topics in Social Streams via Link-Anomaly Detection (IEEE 2014)

Detection of emerging topics is now receiving renewed interest motivated by the rapid growth of social networks. Conventional term-frequency-based approaches may not be appropriate in this context, because the information exchanged in social network posts includes not only text but also images, URLs, and videos. We focus on the emergence of topics signaled by the social aspects of these networks. Specifically, we focus on mentions of users, that is, links between users that are generated dynamically (intentionally or unintentionally) through replies, mentions, and retweets. We propose a probability model of the mentioning behavior of a social network user, and propose to detect the emergence of a new topic from the anomalies measured through the model. Aggregating anomaly scores from hundreds of users, we show that we can detect emerging topics only based on the reply/mention relationships in social-network posts. We demonstrate our technique in several real data sets we gathered from Twitter. The experiments show that the proposed mention-anomaly-based approaches can detect new topics at least as early as text-anomaly-based approaches, and in some cases much earlier when the topic is poorly identified by the textual contents in posts.
11. A New Algorithm for Inferring User Search Goals with Feedback Sessions (IEEE 2013)

For a broad-topic and ambiguous query, different users may have different search goals when they submit it to a search engine. The inference and analysis of user search goals can be very useful in improving search engine relevance and user experience. In this paper, we propose a novel approach to infer user search goals by analyzing search engine query logs. First, we propose a framework to discover different user search goals for a query by clustering the proposed feedback sessions. Feedback sessions are constructed from user click-through logs and can efficiently reflect the information needs of users. Second, we propose a novel approach to generate pseudo-documents to better represent the feedback sessions for clustering. Finally, we propose a new criterion, Classified Average Precision (CAP), to evaluate the performance of inferring user search goals. Experimental results are presented using user click-through logs from a commercial search engine to validate the effectiveness of our proposed methods.
12. Facilitating Effective User Navigation through Website Structure Improvement (IEEE 2013)

Designing well-structured websites to facilitate effective user navigation has long been a challenge. A primary reason is that the web developers' understanding of how a website should be structured can be considerably different from that of the users. While various methods have been proposed to relink webpages to improve navigability using user navigation data, the completely reorganized new structure can be highly unpredictable, and the cost of disorienting users after the changes remains unanalyzed. This paper addresses how to improve a website without introducing substantial changes. Specifically, we propose a mathematical programming model to improve the user navigation on a website while minimizing alterations to its current structure. Results from extensive tests conducted on a publicly available real data set indicate that our model not only significantly improves the user navigation with very few changes, but also can be effectively solved. We have also tested the model on large synthetic data sets to demonstrate that it scales up very well. In addition, we define two evaluation metrics and use them to assess the performance of the improved website using the real data set. Evaluation results confirm that the user navigation on the improved structure is indeed greatly enhanced. More interestingly, we find that heavily disoriented users are more likely to benefit from the improved structure than the less disoriented users.
13. Building a Scalable Database-Driven Reverse Dictionary (IEEE 2013)

In this paper, we describe the design and implementation of a reverse dictionary. Unlike a traditional forward dictionary, which maps from words to their definitions, a reverse dictionary takes a user input phrase describing the desired concept, and returns a set of candidate words that satisfy the input phrase. This work has significant application not only for the general public, particularly those who work closely with words, but also in the general field of conceptual search. We present a set of algorithms and the results of a set of experiments showing the retrieval accuracy of our methods and the runtime response time performance of our implementation. Our experimental results show that our approach can provide significant improvements in performance scale without sacrificing the quality of the result. Our experiments comparing the quality of our approach to that of currently available reverse dictionaries show that our approach can provide significantly higher quality over either of the other currently available implementations.
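A reverse dictionary's core lookup can be pictured with a small inverted index from definition terms to headwords. The Java sketch below is only an illustration of that idea, with a toy dictionary and overlap-count ranking; it does not reproduce the paper's database-driven design or its algorithms.

```java
// Minimal reverse-dictionary sketch: index every word of each definition, then rank
// headwords by how many terms of the user's concept phrase their definitions share.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReverseDictionary {
  private final Map<String, List<String>> termToWords = new HashMap<>();

  public void add(String word, String definition) {
    for (String term : definition.toLowerCase().split("\\W+")) {
      termToWords.computeIfAbsent(term, k -> new ArrayList<>()).add(word);
    }
  }

  public List<String> lookup(String phrase) {
    Map<String, Integer> score = new HashMap<>();
    for (String term : phrase.toLowerCase().split("\\W+")) {
      for (String word : termToWords.getOrDefault(term, List.of())) {
        score.merge(word, 1, Integer::sum);
      }
    }
    List<String> result = new ArrayList<>(score.keySet());
    result.sort((a, b) -> score.get(b) - score.get(a)); // most overlapping terms first
    return result;
  }

  public static void main(String[] args) {
    ReverseDictionary rd = new ReverseDictionary();
    rd.add("telescope", "an instrument for viewing distant objects");
    rd.add("microscope", "an instrument for viewing very small objects");
    // "telescope" ranks first because its definition shares more terms with the phrase.
    System.out.println(rd.lookup("instrument for viewing distant objects"));
  }
}
```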

TECHNOLOGY: JAVA
DOMAIN: DEPENDABLE & SECURE COMPUTING
S. No. | IEEE TITLE | ABSTRACT | IEEE YEAR

1. Secure Two-Party Differentially Private Data Release for Vertically Partitioned Data (IEEE 2014)

Privacy-preserving data publishing addresses the problem of disclosing sensitive data when mining for useful information. Among the existing privacy models, ε-differential privacy provides one of the strongest privacy guarantees. In this paper, we address the problem of private data publishing, where different attributes for the same set of individuals are held by two parties. In particular, we present an algorithm for differentially private data release for vertically partitioned data between two parties in the semi-honest adversary model. To achieve this, we first present a two-party protocol for the exponential mechanism. This protocol can be used as a sub-protocol by any other algorithm that requires the exponential mechanism in a distributed setting. Furthermore, we propose a two-party algorithm that releases differentially private data in a secure way according to the definition of secure multiparty computation. Experimental results on real-life data suggest that the proposed algorithm can effectively preserve information for a data mining task.
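For readers unfamiliar with the exponential mechanism the protocol builds on, the following single-party Java sketch shows how it selects a candidate with probability proportional to exp(epsilon * utility / (2 * sensitivity)). The two-party, secure version that is the paper's actual contribution is not shown, and the candidate utilities used here are hypothetical.

```java
// Minimal single-party exponential mechanism sketch (illustrative only).
import java.util.Random;

public class ExponentialMechanism {
  public static int choose(double[] utilities, double epsilon, double sensitivity, Random rnd) {
    double max = Double.NEGATIVE_INFINITY;
    for (double u : utilities) max = Math.max(max, u);
    double[] weights = new double[utilities.length];
    double total = 0.0;
    for (int i = 0; i < utilities.length; i++) {
      // Subtract max before exponentiating for numerical stability.
      weights[i] = Math.exp(epsilon * (utilities[i] - max) / (2.0 * sensitivity));
      total += weights[i];
    }
    double r = rnd.nextDouble() * total;
    for (int i = 0; i < weights.length; i++) {
      r -= weights[i];
      if (r <= 0) return i;
    }
    return weights.length - 1;
  }

  public static void main(String[] args) {
    // Hypothetical candidate outputs scored by some utility function.
    double[] utilities = {3.0, 7.0, 5.0};
    int picked = choose(utilities, 0.5, 1.0, new Random());
    System.out.println("chosen candidate index = " + picked);
  }
}
```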

2. Bandwidth Distributed Denial of Service: Attacks and Defenses (IEEE 2014)

The Internet is vulnerable to bandwidth distributed denial-of-service (BW-DDoS) attacks, wherein many hosts send a huge number of packets to cause congestion and disrupt legitimate traffic. So far, BW-DDoS attacks have employed relatively crude, inefficient, brute-force mechanisms; future attacks might be significantly more effective and harmful. To meet the increasing threats, more advanced defenses are necessary.
3. k-Zero Day Safety: A Network Security Metric for Measuring the Risk of Unknown Vulnerabilities (IEEE 2014)

By enabling a direct comparison of different security solutions with respect to their relative effectiveness, a network security metric may provide quantifiable evidence to assist security practitioners in securing computer networks. However, research on security metrics has been hindered by difficulties in handling zero-day attacks exploiting unknown vulnerabilities. In fact, the security risk of unknown vulnerabilities has been considered as something un-measurable due to the less predictable nature of software flaws. This causes a major difficulty for security metrics, because a more secure configuration would be of little value if it were equally susceptible to zero-day attacks. In this paper, we propose a novel security metric, k-zero day safety, to address this issue. Instead of attempting to rank unknown vulnerabilities, our metric counts how many such vulnerabilities would be required for compromising network assets; a larger count implies more security because the likelihood of having more unknown vulnerabilities available, applicable, and exploitable all at the same time will be significantly lower. We formally define the metric, analyze the complexity of computing the metric, devise heuristic algorithms for intractable cases, and finally demonstrate through case studies that applying the metric to existing network security practices may generate actionable knowledge.


4. Security Games for Node Localization through Verifiable Multilateration (IEEE 2014)

Most applications of wireless sensor networks (WSNs) rely on data about the positions of sensor nodes, which are not necessarily known beforehand. Several localization approaches have been proposed, but most of them omit to consider that WSNs could be deployed in adversarial settings, where hostile nodes under the control of an attacker coexist with faithful ones. Verifiable multilateration (VM) was proposed to cope with this problem by leveraging a set of trusted landmark nodes that act as verifiers. Although VM is able to recognize reliable localization measures, it allows for regions of undecided positions that can amount to 40 percent of the monitored area. We studied the properties of VM as a non-cooperative two-player game where the first player employs a number of verifiers to do VM computations and the second player controls a malicious node. The verifiers aim at securely localizing malicious nodes, while malicious nodes strive to masquerade as unknown and to pretend false positions. Thanks to game theory, the potentialities of VM are analyzed with the aim of improving the defender's strategy. We found that the best placement for verifiers is an equilateral triangle with edge equal to the power range R, and that the maximum deception in the undecided region is approximately 0.27R. Moreover, we characterized, in terms of the probability of choosing an unknown node to examine further, the strategies of the players.
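The basic consistency test behind verifiable multilateration can be sketched geometrically: a claimed position is accepted only if it lies inside the triangle formed by three verifiers and matches each verifier's measured distance within a tolerance. The Java sketch below illustrates only this test, with assumed coordinates and tolerance; the paper's game-theoretic analysis is not reproduced.

```java
// Minimal verifiable-multilateration consistency check (illustrative only).
public class VerifiableMultilateration {

  static double dist(double[] a, double[] b) {
    return Math.hypot(a[0] - b[0], a[1] - b[1]);
  }

  static double cross(double[] p, double[] a, double[] b) {
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]);
  }

  // Sign test: point p lies inside (or on) triangle v0-v1-v2.
  static boolean insideTriangle(double[] p, double[] v0, double[] v1, double[] v2) {
    double d1 = cross(p, v0, v1), d2 = cross(p, v1, v2), d3 = cross(p, v2, v0);
    boolean hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
    boolean hasPos = d1 > 0 || d2 > 0 || d3 > 0;
    return !(hasNeg && hasPos);
  }

  static boolean accept(double[] claimed, double[][] verifiers, double[] measured, double tol) {
    if (!insideTriangle(claimed, verifiers[0], verifiers[1], verifiers[2])) return false;
    for (int i = 0; i < 3; i++) {
      // A claimed position inconsistent with a verifier's distance bound is rejected.
      if (Math.abs(dist(claimed, verifiers[i]) - measured[i]) > tol) return false;
    }
    return true;
  }

  public static void main(String[] args) {
    double[][] verifiers = {{0, 0}, {10, 0}, {5, 8.66}}; // roughly equilateral, edge ~ 10
    double[] claimed = {5, 3};
    double[] measured = {dist(claimed, verifiers[0]), dist(claimed, verifiers[1]), dist(claimed, verifiers[2])};
    System.out.println(accept(claimed, verifiers, measured, 0.5)); // true for a consistent claim
  }
}
```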
5. On Inference-Proof View Processing of XML Documents (IEEE 2013)

This work aims at treating the inference problem in XML documents that are assumed to represent potentially incomplete information. The inference problem consists in providing a control mechanism for enforcing inference-usability confinement of XML documents. More formally, an inference-proof view of an XML document is required to be both indistinguishable from the actual XML document to the clients under their inference capabilities, and to neither contain nor imply any confidential information. We present an algorithm for generating an inference-proof view by weakening the actual XML document, i.e., eliminating confidential information and other information that could be used to infer confidential information. In order to avoid inferences based on the schema of the XML documents, the DTD of the actual XML document is modified according to the weakening operations as well, such that the modified DTD conforms with the generated inference-proof view.
6. SORT: A Self-Organizing Trust Model for Peer-to-Peer Systems (IEEE 2013)

TECHNOLOGY: JAVA
DOMAIN: IMAGE PROCESSING
S. No. | IEEE TITLE | ABSTRACT | IEEE YEAR

1. Large Discriminative Structured Set Prediction Modeling With Max-Margin Markov Network for Lossless Image Coding (IEEE 2014)

Inherent statistical correlation for context-based prediction and structural interdependencies for local coherence are not fully exploited in existing lossless image coding schemes. This paper proposes a novel prediction model where the optimal correlated prediction for a set of pixels is obtained in the sense of the least code length. It not only exploits the spatial statistical correlations for the optimal prediction directly based on 2D contexts, but also formulates the data-driven structural interdependencies to make the prediction error coherent with the underlying probability distribution for coding. Under the joint constraints for local coherence, max-margin Markov networks are incorporated to combine support vector machines structurally to make max-margin estimation for a correlated region. Specifically, it aims to produce multiple predictions in the blocks with the model parameters learned in such a way that the distinction between the actual pixel and all possible estimations is maximized. It is proved that, with the growth of sample size, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. Incorporated into the lossless image coding framework, the proposed model outperforms most prediction schemes reported.
2. Multi-Illuminant Estimation With Conditional Random Fields (IEEE 2014)

Most existing color constancy algorithms assume uniform illumination. However, in real-world scenes, this is not often the case. Thus, we propose a novel framework for estimating the colors of multiple illuminants and their spatial distribution in the scene. We formulate this problem as an energy minimization task within a conditional random field over a set of local illuminant estimates. In order to quantitatively evaluate the proposed method, we created a novel data set of two-dominant-illuminant images comprised of laboratory, indoor, and outdoor scenes. Unlike prior work, our database includes accurate pixel-wise ground truth illuminant information. The performance of our method is evaluated on multiple data sets. Experimental results show that our framework clearly outperforms single-illuminant estimators as well as a recently proposed multi-illuminant estimation approach.
3. Saliency-Aware Video Compression (IEEE 2014)

In region-of-interest (ROI)-based video coding, ROI parts of the frame are encoded with higher quality than non-ROI parts. At low bit rates, such encoding may produce attention-grabbing coding artifacts, which may draw viewers' attention away from the ROI, thereby degrading visual quality. In this paper, we present a saliency-aware video compression method for ROI-based video coding. The proposed method aims at reducing salient coding artifacts in non-ROI parts of the frame in order to keep users' attention on the ROI. Further, the method allows saliency to increase in high quality parts of the frame, and allows saliency to reduce in non-ROI parts. Experimental results indicate that the proposed method is able to improve visual quality of encoded video relative to conventional rate distortion optimized video coding, as well as two state-of-the-art perceptual video coding methods.
4. Translation Invariant Directional Framelet Transform Combined With Gabor Filters for Image Denoising (IEEE 2014)

This paper is devoted to the study of a directional lifting transform for wavelet frames. A non-subsampled lifting structure is developed to maintain the translation invariance, as it is an important property in image denoising. Then, the directionality of the lifting-based tight frame is explicitly discussed, followed by a specific translation invariant directional framelet transform (TIDFT). The TIDFT has two framelets, ψ1 and ψ2, with vanishing moments of order two and one respectively, which are able to detect singularities in a given direction set. It provides an efficient and sparse representation for images containing rich textures along with properties of fast implementation and perfect reconstruction. In addition, an adaptive block-wise orientation estimation method based on Gabor filters is presented instead of the conventional minimization of residuals. Furthermore, the TIDFT is utilized to exploit the capability of image denoising, incorporating the MAP estimator for the multivariate exponential distribution. Consequently, the TIDFT is able to eliminate the noise effectively while preserving the textures simultaneously. Experimental results show that the TIDFT outperforms some other frame-based denoising methods, such as contourlet and shearlet, and is competitive with the state-of-the-art denoising approaches.
5. Vector-Valued Image Processing by Parallel Level Sets (IEEE 2014)

Vector-valued images such as RGB color images or multimodal medical images show a strong inter-channel correlation, which is not exploited by most image processing tools. We propose a new notion of treating vector-valued images which is based on the angle between the spatial gradients of their channels. Through minimizing a cost functional that penalizes large angles, images with parallel level sets can be obtained. After formally introducing this idea and the corresponding cost functionals, we discuss their Gâteaux derivatives that lead to a diffusion-like gradient descent scheme. We illustrate the properties of this cost functional by several examples in denoising and demosaicking of RGB color images. They show that parallel level sets are a suitable concept for color image enhancement. Demosaicking with parallel level sets gives visually perfect results for low noise levels. Furthermore, the proposed functional yields sharper images than the other approaches in comparison.
6. Circular Reranking for Visual Search (IEEE 2013)

Search reranking is regarded as a common way to boost retrieval precision. The problem nevertheless is not trivial, especially when there are multiple features or modalities to be considered for search, which often happens in image and video retrieval. This paper proposes a new reranking algorithm, named circular reranking, that reinforces the mutual exchange of information across multiple modalities for improving search performance, following the philosophy that a strong performing modality could learn from weaker ones, while a weak modality does benefit from interacting with stronger ones. Technically, circular reranking conducts multiple runs of random walks through exchanging the ranking scores among different features in a cyclic manner. Unlike the existing techniques, the reranking procedure encourages interaction among modalities to seek a consensus that is useful for reranking. In this paper, we study several properties of circular reranking, including how and which order of information propagation should be configured to fully exploit the potential of modalities for reranking. Encouraging results are reported for both image and video retrieval on the Microsoft Research Asia Multimedia image dataset and the TREC Video Retrieval Evaluation 2007-2008 datasets, respectively.
7. Efficient Method for Content Reconstruction With Self-Embedding (IEEE 2013)

This paper presents a new model of the content reconstruction problem in self-embedding systems, based on an erasure communication channel. We explain why such a model is a good fit for this problem, and how it can be practically implemented with the use of digital fountain codes. The proposed method is based on an alternative approach to spreading the reference information over the whole image, which has recently been shown to be of critical importance in the application at hand. Our paper presents a theoretical analysis of the inherent restoration trade-offs. We analytically derive formulas for the reconstruction success bounds, and validate them experimentally with Monte Carlo simulations and a reference image authentication system. We perform an exhaustive reconstruction quality assessment, where the presented reference scheme is compared to five state-of-the-art alternatives in a common evaluation scenario. Our paper leads to important insights on how self-embedding schemes should be constructed to achieve optimal performance. The reference authentication system designed according to the presented principles allows for high-quality reconstruction, regardless of the amount of the tampered content. The average reconstruction quality, measured on 10000 natural images, is 37 dB, and is achievable even when 50% of the image area becomes tampered.
8. Modeling IrisCode and Its Variants as Convex Polyhedral Cones and Its Security Implications (IEEE 2013)

IrisCode, developed by Daugman in 1993, is the most influential iris recognition algorithm. A thorough understanding of IrisCode is essential, because over 100 million persons have been enrolled by this algorithm and many biometric personal identification and template protection methods have been developed based on IrisCode. This paper indicates that a template produced by IrisCode or its variants is a convex polyhedral cone in a hyperspace. Its central ray, being a rough representation of the original biometric signal, can be computed by a simple algorithm, which can often be implemented in one Matlab command line. The central ray is an expected ray and also an optimal ray of an objective function on a group of distributions. This algorithm is derived from geometric properties of a convex polyhedral cone but does not rely on any prior knowledge (e.g., iris images). The experimental results show that biometric templates, including iris and palmprint templates, produced by different recognition methods can be matched through the central rays in their convex polyhedral cones and that templates protected by a method extended from IrisCode can be broken into. These experimental results indicate that, without a thorough security analysis, convex polyhedral cone templates cannot be assumed secure. Additionally, the simplicity of the algorithm implies that even junior hackers without knowledge of advanced image processing and biometric databases can still break into protected templates and reveal relationships among templates produced by different recognition methods.
9. Robust Document Image Binarization Technique for Degraded Document Images (IEEE 2013)

Segmentation of text from badly degraded document images is a very challenging task due to the high inter/intra-variation between the document background and the foreground text of different document images. In this paper, we propose a novel document image binarization technique that addresses these issues by using adaptive image contrast. The adaptive image contrast is a combination of the local image contrast and the local image gradient that is tolerant to text and background variation caused by different types of document degradations. In the proposed technique, an adaptive contrast map is first constructed for an input degraded document image. The contrast map is then binarized and combined with Canny's edge map to identify the text stroke edge pixels. The document text is further segmented by a local threshold that is estimated based on the intensities of detected text stroke edge pixels within a local window. The proposed method is simple, robust, and involves minimum parameter tuning. It has been tested on three public datasets that are used in the recent document image binarization contest (DIBCO) 2009 & 2011 and handwritten-DIBCO 2010 and achieves accuracies of 93.5%, 87.8%, and 92.03%, respectively, that are significantly higher than or close to that of the best-performing methods reported in the three contests. Experiments on the Bickley diary dataset that consists of several challenging bad quality document images also show the superior performance of our proposed method, compared with other techniques.
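The local contrast map at the heart of the method can be sketched directly. The Java snippet below computes, for each pixel, the windowed contrast (max - min) / (max + min + eps) over a grayscale array; the window radius is an assumption, and the later combination with the image gradient, the Canny edge map, and the local threshold are left out, so this is only the first step of the pipeline.

```java
// Minimal local-contrast map sketch for a grayscale image stored as int[row][col] in 0..255.
public class ContrastMap {
  public static double[][] localContrast(int[][] gray, int radius) {
    int h = gray.length, w = gray[0].length;
    double[][] contrast = new double[h][w];
    double eps = 1e-6; // avoids division by zero in flat regions
    for (int y = 0; y < h; y++) {
      for (int x = 0; x < w; x++) {
        int min = 255, max = 0;
        for (int dy = -radius; dy <= radius; dy++) {
          for (int dx = -radius; dx <= radius; dx++) {
            int yy = Math.min(h - 1, Math.max(0, y + dy)); // clamp at the image border
            int xx = Math.min(w - 1, Math.max(0, x + dx));
            min = Math.min(min, gray[yy][xx]);
            max = Math.max(max, gray[yy][xx]);
          }
        }
        contrast[y][x] = (max - min) / (max + min + eps); // high near text strokes
      }
    }
    return contrast;
  }
}
```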
10. Per-Colorant-Channel Color Barcodes for Mobile Applications: An Interference Cancellation Framework (IEEE 2013)

We propose a color barcode framework for mobile phone applications by exploiting the spectral diversity afforded by the cyan (C), magenta (M), and yellow (Y) print colorant channels commonly used for color printing and the complementary red (R), green (G), and blue (B) channels, respectively, used for capturing color images. Specifically, we exploit this spectral diversity to realize a three-fold increase in the data rate by encoding independent data in the C, M, and Y print colorant channels and decoding the data from the complementary R, G, and B channels captured via a mobile phone camera. To mitigate the effect of cross-channel interference among the print colorant and capture color channels, we develop an algorithm for interference cancellation based on a physically motivated mathematical model for the print and capture processes. To estimate the model parameters required for cross-channel interference cancellation, we propose two alternative methodologies: a pilot block approach that uses suitable selections of colors for the synchronization blocks, and an expectation maximization approach that estimates the parameters from regions encoding the data itself. We evaluate the performance of the proposed framework using specific implementations of the framework for two of the most commonly used barcodes in mobile applications, QR and Aztec codes. Experimental results show that the proposed framework successfully overcomes the impact of the color interference, providing a low bit error rate and a high decoding rate for each of the colorant channels when used with a corresponding error correction scheme.

TECHNOLOGY: JAVA
DOMAIN: MOBILE COMPUTING
S. No. | IEEE TITLE | ABSTRACT | IEEE YEAR

1. Cooperative Spectrum Sharing: A Contract-Based Approach (IEEE 2014)

Providing economic incentives to all parties involved is essential for the success of dynamic spectrum access. Cooperative spectrum sharing is one effective way to achieve this, where secondary users (SUs) relay traffic for primary users (PUs) in exchange for dedicated spectrum access time for the SUs' own communications. In this paper, we study cooperative spectrum sharing under incomplete information, where the SUs' wireless characteristics are private information and not known by a PU. We model the PU-SU interaction as a labor market using contract theory. In contract theory, the employer generally does not completely know employees' private information before the employment and needs to offer employees a contract under incomplete information. In our problem, the PU and SUs are, respectively, the employer and employees, and the contract consists of a set of items representing combinations of spectrum accessing time (i.e., reward) and relaying power (i.e., contribution). We study the optimal contract design for both weakly and strongly incomplete information scenarios. In the weakly incomplete information scenario, we show that the PU will optimally hire the most efficient SUs and the PU achieves the same maximum utility as in the complete information benchmark. In the strongly incomplete information scenario, however, the PU may conservatively hire less efficient SUs as well. We further propose a decompose-and-compare (DC) approximate algorithm that achieves a close-to-optimal contract. We further show that the PU's average utility loss due to the suboptimal DC algorithm and the strongly incomplete information is relatively small (less than 2 and 1.3 percent, respectively, in our numerical results with two SU types).
2. Energy-Aware Resource Allocation Strategies for LTE Uplink with Synchronous HARQ Constraints (IEEE 2014)

In this paper, we propose a framework for energy-efficient resource allocation in multiuser localized SC-FDMA with synchronous HARQ constraints. Resource allocation is formulated as a two-stage problem where resources are allocated in both time and frequency. The impact of retransmissions on the time-frequency problem segmentation is handled through the use of a novel block scheduling interval specifically designed for synchronous HARQ to ensure uplink users do not experience ARQ blocking. Using this framework, we formulate the optimal margin adaptive allocation problem, and based on its structure, we propose two suboptimal approaches to minimize the average power allocation required for resource allocation while attempting to reduce complexity. Results are presented for computational complexity and average power allocation relative to system complexity and data rate, and comparisons are made between the proposed optimal and suboptimal approaches.
3. Preserving Location Privacy in Geosocial Applications (IEEE 2014)

Using geosocial applications, such as FourSquare, millions of people interact with their surroundings through their friends and their recommendations. Without adequate privacy protection, however, these systems can be easily misused, for example, to track users or target them for home invasion. In this paper, we introduce LocX, a novel alternative that provides significantly improved location privacy without adding uncertainty into query results or relying on strong assumptions about server security. Our key insight is to apply secure user-specific, distance-preserving coordinate transformations to all location data shared with the server. The friends of a user share this user's secrets so they can apply the same transformation. This allows all location queries to be evaluated correctly by the server, but our privacy mechanisms guarantee that servers are unable to see or infer the actual location data from the transformed data or from the data access. We show that LocX provides privacy even against a powerful adversary model, and we use prototype measurements to show that it provides privacy with very little performance overhead, making it suitable for today's mobile devices.
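A distance-preserving coordinate transformation of the kind LocX relies on can be sketched with a secret rotation and translation. In the Java sketch below the transform is derived from a shared seed, which is an assumption made for illustration; the point is only that Euclidean distances between transformed points equal distances between the originals, so a server can still answer proximity queries on the transformed coordinates.

```java
// Minimal distance-preserving transform sketch: secret rotation + translation (an isometry).
import java.util.Random;

public class LocationTransform {
  private final double cos, sin, dx, dy;

  public LocationTransform(long sharedSecretSeed) {
    Random r = new Random(sharedSecretSeed);     // same seed => same transform for all friends
    double theta = r.nextDouble() * 2 * Math.PI; // secret rotation angle
    this.cos = Math.cos(theta);
    this.sin = Math.sin(theta);
    this.dx = r.nextDouble() * 1000;             // secret translation
    this.dy = r.nextDouble() * 1000;
  }

  public double[] transform(double x, double y) {
    return new double[] {x * cos - y * sin + dx, x * sin + y * cos + dy};
  }

  public static void main(String[] args) {
    LocationTransform t = new LocationTransform(42L);
    double[] a = t.transform(13.05, 80.25);
    double[] b = t.transform(13.06, 80.27);
    // Both prints show the same distance: the transform preserves pairwise distances.
    System.out.println(Math.hypot(a[0] - b[0], a[1] - b[1]));
    System.out.println(Math.hypot(13.05 - 13.06, 80.25 - 80.27));
  }
}
```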
4. Snapshot and Continuous Data Collection in Probabilistic Wireless Sensor Networks (IEEE 2014)

Data collection is a common operation of Wireless Sensor Networks (WSNs), of which the performance can be measured by its achievable network capacity. Most existing works studying the network capacity issue are based on the unpractical model called the deterministic network model. In this paper, a more reasonable model, the probabilistic network model, is considered. For snapshot data collection, we propose a novel Cell-based Path Scheduling (CPS) algorithm that achieves a capacity of Ω(1/(5ω ln n) · W) in the sense of the worst case and order-optimal capacity in the sense of expectation, where n is the number of sensor nodes, ω is a constant, and W is the data transmitting rate. For continuous data collection, we propose a Zone-based Pipeline Scheduling (ZPS) algorithm. ZPS significantly speeds up the continuous data collection process by forming a data transmission pipeline, and achieves a capacity gain of N times better than the optimal capacity of the snapshot data collection scenario in order in the sense of the worst case, where N is the number of snapshots in a continuous data collection task. The simulation results also validate that the proposed algorithms significantly improve network capacity compared with the existing works.
5. QoS-Oriented Distributed Routing Protocol for Hybrid Wireless Networks (IEEE 2014)

As wireless communication gains popularity, significant research has been devoted to supporting real-time transmission with stringent Quality of Service (QoS) requirements for wireless applications. At the same time, a wireless hybrid network that integrates a mobile wireless ad hoc network (MANET) and a wireless infrastructure network has been proven to be a better alternative for the next generation wireless networks. By directly adopting resource reservation-based QoS routing for MANETs, hybrid networks inherit invalid reservation and race condition problems in MANETs. How to guarantee the QoS in hybrid networks remains an open problem. In this paper, we propose a QoS-Oriented Distributed routing protocol (QOD) to enhance the QoS support capability of hybrid networks. Taking advantage of fewer transmission hops and anycast transmission features of the hybrid networks, QOD transforms the packet routing problem to a resource scheduling problem. QOD incorporates five algorithms: 1) a QoS-guaranteed neighbor selection algorithm to meet the transmission delay requirement, 2) a distributed packet scheduling algorithm to further reduce transmission delay, 3) a mobility-based segment resizing algorithm that adaptively adjusts segment size according to node mobility in order to reduce transmission time, 4) a traffic redundant elimination algorithm to increase the transmission throughput, and 5) a data redundancy elimination-based transmission algorithm to eliminate the redundant data to further improve the transmission QoS. Analytical and simulation results based on the random waypoint model and the real human mobility model show that QOD can provide high QoS performance in terms of overhead, transmission delay, mobility-resilience, and scalability.

6. Cooperative Caching for Efficient Data Access in Disruption Tolerant Networks (IEEE 2014)
Abstract: Disruption tolerant networks (DTNs) are characterized by low node density, unpredictable node mobility, and lack of global network information. Most of the current research efforts in DTNs focus on data forwarding, but only limited work has been done on providing efficient data access to mobile users. In this paper, we propose a novel approach to support cooperative caching in DTNs, which enables the sharing and coordination of cached data among multiple nodes and reduces data access delay. Our basic idea is to intentionally cache data at a set of network central locations (NCLs), which can be easily accessed by other nodes in the network. We propose an efficient scheme that ensures appropriate NCL selection based on a probabilistic selection metric and coordinates multiple caching nodes to optimize the tradeoff between data accessibility and caching overhead. Extensive trace-driven simulations show that our approach significantly improves data access performance compared to existing schemes.

7. Real-Time Misbehavior Detection in IEEE 802.11-Based Wireless Networks: An Analytical Approach (IEEE 2014)
Abstract: The distributed nature of the CSMA/CA-based wireless protocols, for example, the IEEE 802.11 distributed coordinated function (DCF), allows malicious nodes to deliberately manipulate their backoff parameters and, thus, unfairly gain a large share of the network throughput. In this paper, we first design a real-time backoff misbehavior detector, termed the fair share detector (FS detector), which exploits the nonparametric cumulative sum (CUSUM) test to quickly find a selfish malicious node without any a priori knowledge of the statistics of the selfish misbehavior. While most of the existing schemes for selfish misbehavior detection depend on heuristic parameter configuration and experimental performance evaluation, we develop a Markov chain-based analytical model to systematically study the performance of the FS detector in real-time backoff misbehavior detection. Based on the analytical model, we can quantitatively compute the system configuration parameters for guaranteed performance in terms of average false positive rate, average detection delay, and missed detection ratio under a detection delay constraint. We present thorough simulation results to confirm the accuracy of our theoretical analysis as well as demonstrate the performance of the developed FS detector.
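
As a rough illustration of the nonparametric CUSUM idea used above, the Java sketch below accumulates the excess of an observed per-node statistic over its expected fair value and raises an alarm when the accumulator crosses a threshold. The drift value, threshold, and the notion of "share" are assumptions for the example, not the paper's exact detector.

// Minimal sketch of a nonparametric CUSUM-style change detector.
public class CusumDetector {
    private final double drift;      // expected statistic under fair behavior
    private final double threshold;  // alarm threshold
    private double s = 0.0;          // CUSUM accumulator

    public CusumDetector(double drift, double threshold) {
        this.drift = drift;
        this.threshold = threshold;
    }

    // observation: e.g., the node's share of successful transmissions in a window
    public boolean update(double observation) {
        s = Math.max(0.0, s + (observation - drift));
        return s > threshold;        // true => flag possible backoff misbehavior
    }

    public static void main(String[] args) {
        CusumDetector d = new CusumDetector(0.25, 1.0);
        double[] shares = {0.24, 0.26, 0.25, 0.60, 0.62, 0.61, 0.63};
        for (double x : shares) {
            System.out.println(x + " -> alarm=" + d.update(x));
        }
    }
}
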

8. Neighbor Coverage-Based Probabilistic Rebroadcast for Reducing Routing Overhead in Mobile Ad Hoc Networks (IEEE 2013)
Due to the high mobility of nodes in mobile ad hoc networks (MANETs), there exist frequent link breakages which lead to frequent path failures and route discoveries. The overhead of a route discovery cannot be neglected. In a route discovery, broadcasting is a fundamental and effective data dissemination mechanism, where a mobile node blindly rebroadcasts the first received route request packets unless it has a route to the destination, and thus it causes the broadcast storm problem. In this paper, we propose a neighbor coverage-based probabilistic rebroadcast protocol for reducing routing overhead in MANETs. In order to effectively exploit the neighbor coverage knowledge, we propose a novel rebroadcast delay to determine the rebroadcast order, and then we can obtain a more accurate additional coverage ratio by sensing neighbor coverage knowledge. We also define a connectivity factor to provide the node density adaptation. By combining the additional coverage ratio and the connectivity factor, we set a reasonable rebroadcast probability. Our approach combines the advantages of the neighbor coverage knowledge and the probabilistic mechanism, which can significantly decrease the number of retransmissions so as to reduce the routing overhead, and can also improve the routing performance.
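
A simplified Java sketch of the decision described above follows: it estimates an additional coverage ratio from neighbor sets, derives a connectivity factor from local node density, and multiplies the two into a rebroadcast probability. The formulas, class names, and the critical-degree constant are illustrative assumptions, not the protocol's exact definitions.

import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of combining additional coverage ratio and connectivity
// factor into a rebroadcast probability.
public class RebroadcastDecision {
    // Fraction of this node's neighbors not already covered by the sender.
    static double additionalCoverageRatio(Set<String> myNeighbors, Set<String> senderNeighbors) {
        if (myNeighbors.isEmpty()) return 0.0;
        Set<String> uncovered = new HashSet<>(myNeighbors);
        uncovered.removeAll(senderNeighbors);
        return (double) uncovered.size() / myNeighbors.size();
    }

    // Connectivity factor: larger in sparse neighborhoods, capped at 1.
    static double connectivityFactor(int neighborCount, double criticalDegree) {
        if (neighborCount == 0) return 1.0;
        return Math.min(1.0, criticalDegree / neighborCount);
    }

    static double rebroadcastProbability(Set<String> mine, Set<String> senders, double criticalDegree) {
        double p = additionalCoverageRatio(mine, senders) * connectivityFactor(mine.size(), criticalDegree);
        return Math.min(1.0, p);
    }

    public static void main(String[] args) {
        Set<String> mine = Set.of("a", "b", "c", "d", "e");
        Set<String> sender = Set.of("a", "b", "x");
        System.out.println(rebroadcastProbability(mine, sender, 5.5));
    }
}
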

9. Relay Selection for Geographical Forwarding in Sleep-Wake Cycling Wireless Sensor Networks (IEEE 2013)
Our work is motivated by geographical forwarding of sporadic alarm packets to a base station in a wireless sensor network (WSN), where the nodes are sleep-wake cycling periodically and asynchronously. We seek to develop local forwarding algorithms that can be tuned so as to trade off the end-to-end delay against a total cost, such as the hop count or total energy. Our approach is to solve, at each forwarding node en route to the sink, the local forwarding problem of minimizing one-hop waiting delay subject to a lower bound constraint on a suitable reward offered by the next-hop relay; the constraint serves to tune the tradeoff. The reward metric used for the local problem is based on the end-to-end total cost objective (for instance, when the total cost is hop count, we choose to use the progress toward the sink made by a relay as the reward). The forwarding node, to begin with, is uncertain about the number of relays, their wake-up times, and the reward values, but knows the probability distributions of these quantities. At each relay wake-up instant, when a relay reveals its reward value, the forwarding node's problem is to forward the packet or to wait for further relays to wake up. In terms of the operations research literature, our work can be considered as a variant of the asset selling problem. We formulate our local forwarding problem as a partially observable Markov decision process (POMDP) and obtain inner and outer bounds for the optimal policy. Motivated by the computational complexity involved in the policies derived out of these bounds, we formulate an alternate simplified model, the optimal policy for which is a simple threshold rule. We provide simulation results to compare the performance of the inner and outer bound policies against the simple policy, and also against the optimal policy when the source knows the exact number of relays. Observing the good performance and the ease of implementation of the simple policy, we apply it to our motivating problem, i.e., local geographical routing of sporadic alarm packets in a large WSN. We compare the end-to-end performance (i.e., average total delay and average total cost) obtained by the simple policy, when used for local geographical forwarding, against that obtained by the globally optimal forwarding algorithm proposed by Kim et al.

10. Toward Privacy Preserving and Collusion Resistance in a Location Proof Updating System (IEEE 2013)
Today's location-sensitive services rely on a user's mobile device to determine the current location. This allows malicious users to access a restricted resource or provide bogus alibis by cheating on their locations. To address this issue, we propose A Privacy-Preserving LocAtion proof Updating System (APPLAUS) in which colocated Bluetooth-enabled mobile devices mutually generate location proofs and send updates to a location proof server. Periodically changed pseudonyms are used by the mobile devices to protect source location privacy from each other, and from the untrusted location proof server. We also develop a user-centric location privacy model in which individual users evaluate their location privacy levels and decide whether and when to accept the location proof requests. In order to defend against colluding attacks, we also present betweenness ranking-based and correlation clustering-based approaches for outlier detection. APPLAUS can be implemented with existing network infrastructure, and can be easily deployed in Bluetooth-enabled mobile devices with little computation or power cost. Extensive experimental results show that APPLAUS can effectively provide location proofs, significantly preserve the source location privacy, and effectively detect colluding attacks.

11. Distributed Cooperation and Diversity for Hybrid Wireless Networks (IEEE 2013)
In this paper, we propose a new Distributed Cooperation and Diversity Combining framework. Our focus is on heterogeneous networks with devices equipped with two types of radio frequency (RF) interfaces: a short-range high-rate interface (e.g., IEEE 802.11), and a long-range low-rate interface (e.g., cellular) communicating over urban Rayleigh fading channels. Within this framework, we propose and evaluate a set of distributed cooperation techniques operating at different hierarchical levels with resource constraints such as short-range RF bandwidth. We propose a Priority Maximum-Ratio Combining (PMRC) technique, and a Post Soft-Demodulation Combining (PSDC) technique. We show that the proposed techniques achieve significant improvements on Signal-to-Noise Ratio (SNR), Bit Error Rate (BER) and throughput through analysis, simulation, and experimentation on our software radio testbed. Our results also indicate that, under several communication scenarios, PMRC and PSDC can improve the throughput performance by over an order of magnitude.

12. Toward a Statistical Framework for Source Anonymity in Sensor Networks (IEEE 2013)
In certain applications, the locations of events reported by a sensor network need to remain anonymous. That is, unauthorized observers must be unable to detect the origin of such events by analyzing the network traffic. Known as the source anonymity problem, this problem has emerged as an important topic in the security of wireless sensor networks, with a variety of techniques based on different adversarial assumptions being proposed. In this work, we present a new framework for modeling, analyzing, and evaluating anonymity in sensor networks. The novelty of the proposed framework is twofold: first, it introduces the notion of interval indistinguishability and provides a quantitative measure to model anonymity in wireless sensor networks; second, it maps source anonymity to the statistical problem of binary hypothesis testing with nuisance parameters. We then analyze existing solutions for designing anonymous sensor networks using the proposed model. We show how mapping source anonymity to binary hypothesis testing with nuisance parameters leads to converting the problem of exposing private source information into searching for an appropriate data transformation that removes or minimizes the effect of the nuisance information. By doing so, we transform the problem from analyzing real-valued sample points to binary codes, which opens the door for coding theory to be incorporated into the study of anonymous sensor networks. Finally, we discuss how existing solutions can be modified to improve their anonymity.

13. Vampire Attacks: Draining Life from Wireless Ad Hoc Sensor Networks (IEEE 2013)
Ad hoc low-power wireless networks are an exciting research direction in sensing and pervasive computing. Prior security work in this area has focused primarily on denial of communication at the routing or medium access control levels. This paper explores resource depletion attacks at the routing protocol layer, which permanently disable networks by quickly draining nodes' battery power. These "Vampire" attacks are not specific to any particular protocol, but rather rely on the properties of many popular classes of routing protocols. We find that all examined protocols are susceptible to Vampire attacks, which are devastating, difficult to detect, and easy to carry out using as few as one malicious insider sending only protocol-compliant messages. In the worst case, a single Vampire can increase network-wide energy usage by a factor of O(N), where N is the number of network nodes. We discuss methods to mitigate these types of attacks, including a new proof-of-concept protocol that provably bounds the damage caused by Vampires during the packet forwarding phase.

TECHNOLOGY: JAVA
DOMAIN: NETWORKING
S. No. | IEEE TITLE | ABSTRACT | IEEE YEAR

1. Fast Regular Expression Matching Using Small TCAM (IEEE 2014)
Abstract: Regular expression (RE) matching is a core component of deep packet inspection in modern networking and security devices. In this paper, we propose the first hardware-based RE matching approach that uses ternary content addressable memory (TCAM), which is available as off-the-shelf chips and has been widely deployed in modern networking devices for tasks such as packet classification. We propose three novel techniques to reduce TCAM space and improve RE matching speed: transition sharing, table consolidation, and variable striding. We tested our techniques on eight real-world RE sets, and our results show that small TCAMs can be used to store large deterministic finite automata (DFAs) and achieve potentially high RE matching throughput. For space, we can store each of the corresponding eight DFAs with 25,000 states in a 0.59-Mb TCAM chip. Using a different TCAM encoding scheme that facilitates processing multiple characters per transition, we can achieve a potential RE matching throughput of 10 to 19 Gb/s for each of the eight DFAs using only a single 2.36-Mb TCAM chip.

2. Green Networking With Packet Processing Engines: Modeling and Optimization (IEEE 2014)
Abstract: With the aim of controlling power consumption in metro/transport and core networks, we consider energy-aware devices able to reduce their energy requirements by adapting their performance. In particular, we focus on state-of-the-art packet processing engines, which generally represent the most energy-consuming components of network devices, and which are often composed of a number of parallel pipelines to divide and conquer the incoming traffic load. Our goal is to control both the power configuration of pipelines and the way traffic flows are distributed among them. We propose an analytical model to accurately represent the impact of green network technologies (i.e., low power idle and adaptive rate) on network- and energy-aware performance indexes. The model has been validated with experimental results, performed by using energy-aware software routers loaded by real-world traffic traces. The achieved results demonstrate how the proposed model can effectively represent energy- and network-aware performance indexes. On this basis, we propose a constrained optimization policy, which seeks the best tradeoff between power consumption and packet latency times. The procedure aims at dynamically adapting the energy-aware device configuration to minimize energy consumption while coping with incoming traffic volumes and meeting network performance constraints. In order to deeply understand the impact of such a policy, a number of tests have been performed by using experimental data from software router architectures and real-world traffic traces.

3. On Sample-Path Optimal Dynamic Scheduling for Sum-Queue Minimization in Forests (IEEE 2014)
Abstract: We investigate the problem of minimizing the sum of the queue lengths of all the nodes in a wireless network with a forest topology. Each packet is destined to one of the roots (sinks) of the forest. We consider a time-slotted system and a primary (or one-hop) interference model. We characterize the existence of causal sample-path optimal scheduling policies for this network topology under this interference model. A causal sample-path optimal scheduling policy is one for which, at each time slot and for any sample-path traffic arrival pattern, the sum of the queue lengths of all the nodes in the network is minimum among all policies. We show that such policies exist in restricted forest structures, and that for any other forest structure, there exists a traffic arrival pattern for which no causal sample-path optimal policy can exist. Surprisingly, we show that many forest structures for which such policies exist can be scheduled by converting the structure into an equivalent linear network and scheduling the equivalent linear network according to the one-hop interference model. The nonexistence of such policies in many forest structures underscores the inherent limitation of using sample-path optimality as a performance metric and necessitates the need to study other (relatively) weaker metrics of delay performance.

4. PACK: Prediction-Based Cloud Bandwidth and Cost Reduction System (IEEE 2014)
Abstract: In this paper, we present PACK (Predictive ACKs), a novel end-to-end traffic redundancy elimination (TRE) system, designed for cloud computing customers. Cloud-based TRE needs to apply a judicious use of cloud resources so that the bandwidth cost reduction combined with the additional cost of TRE computation and storage would be optimized. PACK's main advantage is its capability of offloading the cloud-server TRE effort to end clients, thus minimizing the processing costs induced by the TRE algorithm. Unlike previous solutions, PACK does not require the server to continuously maintain clients' status. This makes PACK very suitable for pervasive computation environments that combine client mobility and server migration to maintain cloud elasticity. PACK is based on a novel TRE technique, which allows the client to use newly received chunks to identify previously received chunk chains, which in turn can be used as reliable predictors of future transmitted chunks. We present a fully functional PACK implementation, transparent to all TCP-based applications and network devices. Finally, we analyze PACK benefits for cloud users, using traffic traces from various sources.
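
The receiver-side intuition behind chunk-chain prediction can be illustrated with the small Java sketch below: remember which chunk signature followed which, and when a known signature reappears, predict the next chunk. The chunking, the SHA-1 signature, and all class and method names are simplifying assumptions and do not reflect PACK's actual wire format or algorithms.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of chunk-chain prediction on the receiver side.
public class ChunkChainPredictor {
    private final Map<String, String> nextChunkBySignature = new HashMap<>();
    private String previousSignature = null;

    private static String signature(byte[] chunk) throws Exception {
        byte[] d = MessageDigest.getInstance("SHA-1").digest(chunk);
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // Record a received chunk and return a prediction (if any) for the next one.
    public String observe(byte[] chunk) throws Exception {
        String sig = signature(chunk);
        String predictedNext = nextChunkBySignature.get(sig);
        if (previousSignature != null) {
            nextChunkBySignature.put(previousSignature, sig);
        }
        previousSignature = sig;
        return predictedNext; // the client could send this as a "predictive ACK"
    }

    public static void main(String[] args) throws Exception {
        ChunkChainPredictor p = new ChunkChainPredictor();
        byte[] a = "chunk-A".getBytes(StandardCharsets.UTF_8);
        byte[] b = "chunk-B".getBytes(StandardCharsets.UTF_8);
        p.observe(a); p.observe(b);       // learn that A is followed by B
        System.out.println(p.observe(a)); // repeating A predicts B's signature
    }
}
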

5. Secure Data Retrieval for Decentralized Disruption-Tolerant Military Networks (IEEE 2014)
Abstract: Mobile nodes in military environments such as a battlefield or a hostile region are likely to suffer from intermittent network connectivity and frequent partitions. Disruption-tolerant network (DTN) technologies are becoming successful solutions that allow wireless devices carried by soldiers to communicate with each other and access confidential information or commands reliably by exploiting external storage nodes. Some of the most challenging issues in this scenario are the enforcement of authorization policies and the policies update for secure data retrieval. Ciphertext-policy attribute-based encryption (CP-ABE) is a promising cryptographic solution to the access control issues. However, the problem of applying CP-ABE in decentralized DTNs introduces several security and privacy challenges with regard to the attribute revocation, key escrow, and coordination of attributes issued from different authorities. In this paper, we propose a secure data retrieval scheme using CP-ABE for decentralized DTNs where multiple key authorities manage their attributes independently. We demonstrate how to apply the proposed mechanism to securely and efficiently manage the confidential data distributed in the disruption-tolerant military network.

6. Distributed Control Law for Load Balancing in Content Delivery Networks (IEEE 2013)
In this paper, we face the challenging issue of defining and implementing an effective law for load balancing in Content Delivery Networks (CDNs). We base our proposal on a formal study of a CDN system, carried out through the exploitation of a fluid flow model characterization of the network of servers. Starting from such characterization, we derive and prove a lemma about the network queues equilibrium. This result is then leveraged in order to devise a novel distributed and time-continuous algorithm for load balancing, which is also reformulated in a time-discrete version. The discrete formulation of the proposed balancing law is eventually discussed in terms of its actual implementation in a real-world scenario. Finally, the overall approach is validated by means of simulations.

7. Achieving Efficient Flooding by Utilizing Link Correlation in Wireless Sensor Networks (IEEE 2013)
Although existing flooding protocols can provide efficient and reliable communication in wireless sensor networks on some level, further performance improvement has been hampered by the assumption of link independence, which requires costly acknowledgments (ACKs) from every receiver. In this paper, we present collective flooding (CF), which exploits link correlation to achieve flooding reliability using the concept of collective ACKs. CF requires only 1-hop information at each node, making the design highly distributed and scalable with low complexity. We evaluate CF extensively in real-world settings, using three different types of testbeds: a single-hop network with 20 MICAz nodes, a multihop network with 37 nodes, and a linear outdoor network with 48 nodes along a 326-m-long bridge. System evaluation and extensive simulation show that CF achieves the same reliability as state-of-the-art solutions while reducing the total number of packet transmissions and the dissemination delay by 30%-50% and 35%-50%, respectively.

8. Complexity Analysis and Algorithm Design for Advance Bandwidth Scheduling in Dedicated Networks (IEEE 2013)
An increasing number of high-performance networks provision dedicated channels through circuit switching or MPLS/GMPLS techniques to support large data transfer. The link bandwidths in such networks are typically shared by multiple users through advance reservation, resulting in varying bandwidth availability in future time. Developing efficient scheduling algorithms for advance bandwidth reservation has become a critical task to improve the utilization of network resources and meet the transport requirements of application users. We consider an exhaustive combination of different path and bandwidth constraints and formulate four types of advance bandwidth scheduling problems, with the same objective to minimize the data transfer end time for a given transfer request with a prespecified data size: fixed path with fixed bandwidth (FPFB); fixed path with variable bandwidth (FPVB); variable path with fixed bandwidth (VPFB); and variable path with variable bandwidth (VPVB). For VPFB and VPVB, we further consider two subcases where the path switching delay is negligible or nonnegligible. We propose an optimal algorithm for each of these scheduling problems except for FPVB and VPVB with nonnegligible path switching delay, which are proven to be NP-complete and nonapproximable, and are then tackled by heuristics. The performance superiority of these heuristics is verified by extensive experimental results in a large set of simulated networks in comparison to optimal and greedy strategies.

9. Efficient Algorithms for Neighbor Discovery in Wireless Networks (IEEE 2013)
Neighbor discovery is an important first step in the initialization of a wireless ad hoc network. In this paper, we design and analyze several algorithms for neighbor discovery in wireless networks. Starting with a single-hop wireless network of n nodes, we propose a Theta(n ln n) ALOHA-like neighbor discovery algorithm when nodes cannot detect collisions, and an order-optimal Theta(n) receiver feedback-based algorithm when nodes can detect collisions. Our algorithms neither require nodes to have a priori estimates of the number of neighbors nor synchronization between nodes. Our algorithms allow nodes to begin execution at different time instants and to terminate neighbor discovery upon discovering all their neighbors. We finally show that receiver feedback can be used to achieve a Theta(n) running time, even when nodes cannot detect collisions. We then analyze neighbor discovery in a general multihop setting. We establish an upper bound of O(Delta ln n) on the running time of the ALOHA-like algorithm, where Delta denotes the maximum node degree in the network and n the total number of nodes. We also establish a lower bound of Omega(Delta + ln n) on the running time of any randomized neighbor discovery algorithm. Our result thus implies that the ALOHA-like algorithm is at most a factor min(Delta, ln n) worse than optimal.
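
To give a feel for the ALOHA-like behavior described above, the Java sketch below simulates a single-hop clique in which every node transmits in each slot with probability 1/n and a slot is useful only when exactly one node transmits. It is a toy simulation under simplifying assumptions (perfect slots, no collisions detected), not the paper's algorithm or analysis.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Toy simulation of ALOHA-like neighbor discovery in a single-hop clique.
public class AlohaNeighborDiscovery {
    public static int slotsToDiscoverAll(int n, long seed) {
        Random rng = new Random(seed);
        boolean[] discovered = new boolean[n];
        int remaining = n, slots = 0;
        while (remaining > 0) {
            slots++;
            List<Integer> transmitters = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                if (rng.nextDouble() < 1.0 / n) transmitters.add(i);
            }
            // Exactly one transmitter => everyone else hears and discovers it.
            if (transmitters.size() == 1) {
                int node = transmitters.get(0);
                if (!discovered[node]) { discovered[node] = true; remaining--; }
            }
        }
        return slots;
    }

    public static void main(String[] args) {
        int n = 50;
        System.out.println("slots = " + slotsToDiscoverAll(n, 7L)
                + " (compare with n*ln(n) = " + Math.round(n * Math.log(n)) + ")");
    }
}
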

10. Semi-Random Backoff: Towards Resource Reservation for Channel Access in Wireless LANs (IEEE 2013)
This paper proposes a semi-random backoff (SRB) method that enables resource reservation in contention-based wireless LANs. The proposed SRB is fundamentally different from traditional random backoff methods because it provides an easy migration path from random backoffs to deterministic slot assignments. The central idea of the SRB is for the wireless station to set its backoff counter to a deterministic value upon a successful packet transmission. This deterministic value will allow the station to reuse the time slot in consecutive backoff cycles. When multiple stations with successful packet transmissions reuse their respective time slots, the collision probability is reduced, and the channel achieves the equivalence of resource reservation. In case of a failed packet transmission, a station will revert to the standard random backoff method and probe for a new available time slot. The proposed SRB method can be readily applied to both 802.11 DCF and 802.11e EDCA networks with minimum modification to the existing DCF/EDCA implementations. Theoretical analysis and simulation results validate the superior performance of the SRB for small-scale and heavily loaded wireless LANs. When combined with an adaptive mechanism and a persistent backoff process, SRB can also be effective for large-scale and lightly loaded wireless networks.
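
The core rule is easy to state in code: after a success, keep a fixed backoff value so the same slot is reused; after a failure, fall back to a standard random backoff. The Java sketch below illustrates only this rule; the reserved-slot value, contention window, and class names are assumptions, and the 802.11 state machine is not modeled.

import java.util.Random;

// Illustrative sketch of the semi-random backoff rule.
public class SemiRandomBackoff {
    private static final int RESERVED_SLOT = 8;   // deterministic value after success
    private static final int CW_MIN = 16;         // contention window for random fallback
    private final Random rng = new Random();
    private boolean lastTransmissionSucceeded = false;

    public int nextBackoff() {
        if (lastTransmissionSucceeded) {
            return RESERVED_SLOT;                 // reuse the same slot (reservation-like)
        }
        return rng.nextInt(CW_MIN);               // standard random backoff: probe a new slot
    }

    public void reportOutcome(boolean success) {
        lastTransmissionSucceeded = success;
    }

    public static void main(String[] args) {
        SemiRandomBackoff station = new SemiRandomBackoff();
        station.reportOutcome(true);
        System.out.println("after success: " + station.nextBackoff());  // always 8
        station.reportOutcome(false);
        System.out.println("after failure: " + station.nextBackoff());  // random in [0, 16)
    }
}
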

11. Utility Maximization Framework for Fair and Efficient Multicasting in Multicarrier Wireless Cellular Networks (IEEE 2013)
Multicast/broadcast is regarded as an efficient technique for wireless cellular networks to transmit a large volume of common data to multiple mobile users simultaneously. To guarantee the quality of service for each mobile user in such single-hop multicasting, the base-station transmitter usually adapts its data rate to the worst channel condition among all users in a multicast group. On one hand, increasing the number of users in a multicast group leads to a more efficient utilization of spectrum bandwidth, as users in the same group can be served together. On the other hand, too many users in a group may lead to an unacceptably low data rate at which the base station can transmit. Hence, a natural question that arises is how to efficiently and fairly transmit to a large number of users requiring the same message. This paper endeavors to answer this question by studying the problem of multicasting over multiple carriers in wireless orthogonal frequency division multiplexing (OFDM) cellular systems. Using a unified utility maximization framework, we investigate this problem in two typical scenarios: namely, when users experience roughly equal path losses and when they experience different path losses, respectively. Through theoretical analysis, we obtain optimal multicast schemes satisfying various throughput-fairness requirements in these two cases. In particular, we show that the conventional multicast scheme is optimal in the equal-path-loss case regardless of the utility function adopted. When users experience different path losses, the group multicast scheme, which divides the users almost equally into many multicast groups and multicasts to different groups of users over nonoverlapping subcarriers, is optimal.

TECHNOLOGY: JAVA
DOMAIN: PARALLEL & DISTRIBUTED SYSTEM
S. No. | IEEE TITLE | ABSTRACT | IEEE YEAR

1. Enabling Trustworthy Service Evaluation in Service-Oriented Mobile Social Networks (IEEE 2014)
In this paper, we propose a Trustworthy Service Evaluation (TSE) system to enable users to share service reviews in service-oriented mobile social networks (S-MSNs). Each service provider independently maintains a TSE for itself, which collects and stores users' reviews about its services without requiring any third trusted authority. The service reviews can then be made available to interested users in making wise service selection decisions. We identify three unique service review attacks, i.e., linkability, rejection, and modification attacks, and develop sophisticated security mechanisms for the TSE to deal with these attacks. Specifically, the basic TSE (bTSE) enables users to distributedly and cooperatively submit their reviews in an integrated chain form by using hierarchical and aggregate signature techniques. It restricts the service providers to reject, modify, or delete the reviews. Thus, the integrity and authenticity of reviews are improved. Further, we extend the bTSE to a Sybil-resisted TSE (SrTSE) to enable the detection of two typical sybil attacks. In the SrTSE, if a user generates multiple reviews toward a vendor in a predefined time slot with different pseudonyms, the real identity of that user will be revealed. Through security analysis and numerical results, we show that the bTSE and the SrTSE effectively resist the service review attacks, and the SrTSE additionally detects the sybil attacks in an efficient manner. Through performance evaluation, we show that the bTSE achieves better performance in terms of submission rate and delay than a service review system that does not adopt user cooperation.

2. Tag Encoding Scheme against Pollution Attack to Linear Network Coding (IEEE 2014)
Network coding allows intermediate nodes to encode data packets to improve network throughput and robustness. However, it increases the propagation speed of polluted data packets if a malicious node injects fake data packets into the network, which degrades the bandwidth efficiency greatly and leads to incorrect decoding at sinks. In this paper, insights on new mathematical relations in linear network coding are presented and a key predistribution-based tag encoding scheme, KEPTE, is proposed, which enables all intermediate nodes and sinks to detect the correctness of the received data packets. Furthermore, the security of KEPTE with regard to pollution attack and tag pollution attack is quantitatively analyzed. The performance of KEPTE is competitive in terms of: 1) low computational complexity; 2) the ability of all intermediate nodes and sinks to detect pollution attacks; 3) the ability of all intermediate nodes and sinks to detect tag pollution attacks; and 4) high fault-tolerance ability. To the best of our knowledge, existing key predistribution-based schemes aiming at pollution detection can achieve at most three of the points described above. Finally, discussions on the application of KEPTE to practical network coding are also presented.

3. Exploiting Service Similarity for Privacy in Location-Based Search Queries (IEEE 2014)
Location-based applications utilize the positioning capabilities of a mobile device to determine the current location of a user, and customize query results to include neighboring points of interest. However, location knowledge is often perceived as personal information. One of the immediate issues hindering the wide acceptance of location-based applications is the lack of appropriate methodologies that offer fine-grain privacy controls to a user without vastly affecting the usability of the service. While a number of privacy-preserving models and algorithms have taken shape in the past few years, there is an almost universal need to specify one's privacy requirement without understanding its implications on the service quality. In this paper, we propose a user-centric location-based service architecture where a user can observe the impact of location inaccuracy on the service accuracy before deciding the geo-coordinates to use in a query. We construct a local search application based on this architecture and demonstrate how meaningful information can be exchanged between the user and the service provider to allow the inference of contours depicting the change in query results across a geographic area. Results indicate the possibility of large default privacy regions (areas of no change in result set) in such applications.

4. Network Coding Aware Cooperative MAC Protocol for Wireless Ad Hoc Networks (IEEE 2014)
Cooperative communication, which utilizes neighboring nodes to relay the overheard information, has been employed as an effective technique to deal with channel fading and to improve network performance. Network coding, which combines several packets together for transmission, is very helpful to reduce redundancy in the network and to increase the overall throughput. Introducing network coding into the cooperative retransmission process enables the relay node to assist other nodes while serving its own traffic simultaneously. To leverage the benefits brought by both of them, an efficient Medium Access Control (MAC) protocol is needed. In this paper, we propose a novel network coding aware cooperative MAC protocol, namely NCAC-MAC, for wireless ad hoc networks. The design objective of NCAC-MAC is to increase the throughput and reduce the delay. Simulation results reveal that NCAC-MAC can improve the network performance under general circumstances compared with two benchmarks.

5. Probabilistic Misbehavior Detection Scheme toward Efficient Trust Establishment in Delay-Tolerant Networks (IEEE 2014)
Abstract: Malicious and selfish behaviors represent a serious threat against routing in delay/disruption tolerant networks (DTNs). Due to the unique network characteristics, designing a misbehavior detection scheme in DTNs is regarded as a great challenge. In this paper, we propose iTrust, a probabilistic misbehavior detection scheme, for secure DTN routing toward efficient trust establishment. The basic idea of iTrust is introducing a periodically available Trusted Authority (TA) to judge a node's behavior based on the collected routing evidence and probabilistic checking. We model iTrust as the inspection game and use game theoretical analysis to demonstrate that, by setting an appropriate investigation probability, the TA could ensure the security of DTN routing at a reduced cost. To further improve the efficiency of the proposed scheme, we correlate detection probability with a node's reputation, which allows a dynamic detection probability determined by the trust of the users. The extensive analysis and simulation results demonstrate the effectiveness and efficiency of the proposed scheme.

6. A System for Denial-of-Service Attack Detection Based on Multivariate Correlation Analysis (IEEE 2014)
Abstract: Interconnected systems, such as Web servers, database servers, cloud computing servers and so on, are now under threats from network attackers. As one of the most common and aggressive means, denial-of-service (DoS) attacks cause serious impact on these computing systems. In this paper, we present a DoS attack detection system that uses multivariate correlation analysis (MCA) for accurate network traffic characterization by extracting the geometrical correlations between network traffic features. Our MCA-based DoS attack detection system employs the principle of anomaly-based detection in attack recognition. This makes our solution capable of detecting known and unknown DoS attacks effectively by learning the patterns of legitimate network traffic only. Furthermore, a triangle-area-based technique is proposed to enhance and to speed up the process of MCA. The effectiveness of our proposed detection system is evaluated using the KDD Cup 99 data set, and the influences of both non-normalized data and normalized data on the performance of the proposed detection system are examined. The results show that our system outperforms two other previously developed state-of-the-art approaches in terms of detection accuracy.
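
The Java sketch below illustrates one way a triangle-area style correlation feature could be built and compared against a profile learned from legitimate traffic. The pairwise area formula, the distance measure, and the threshold are deliberate simplifications chosen for readability; they are assumptions, not the paper's exact statistics.

import java.util.Arrays;

// Illustrative sketch of a triangle-area style correlation map plus a simple
// anomaly test against a "normal" profile.
public class McaDosDetectorSketch {
    // Triangle area spanned by the projections of features i and j.
    static double[][] triangleAreaMap(double[] record) {
        int n = record.length;
        double[][] tam = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                tam[i][j] = Math.abs(record[i] * record[j]) / 2.0;
        return tam;
    }

    // Distance between a record's map and the profile built from legitimate traffic.
    static double distance(double[][] a, double[][] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a.length; j++)
                d += Math.abs(a[i][j] - b[i][j]);
        return d;
    }

    public static void main(String[] args) {
        double[] normalRecord = {0.2, 0.5, 0.1};
        double[] suspectRecord = {0.9, 0.9, 0.8};
        double[][] profile = triangleAreaMap(normalRecord);
        double threshold = 0.5; // assumed; in practice learned from normal traffic
        double score = distance(triangleAreaMap(suspectRecord), profile);
        System.out.println("score=" + score + " attack=" + (score > threshold));
        System.out.println(Arrays.deepToString(profile));
    }
}
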

7. ReDS: A Framework for Reputation-Enhanced DHTs (IEEE 2014)
Abstract: Distributed hash tables (DHTs), such as Chord and Kademlia, offer an efficient means to locate resources in peer-to-peer networks. Unfortunately, malicious nodes on a lookup path can easily subvert such queries. Several systems, including Halo (based on Chord) and Kad (based on Kademlia), mitigate such attacks by using redundant lookup queries. Much greater assurance can be provided; we present Reputation for Directory Services (ReDS), a framework for enhancing lookups in redundant DHTs by tracking how well other nodes service lookup requests. We describe how the ReDS technique can be applied to virtually any redundant DHT, including Halo and Kad. We also study the collaborative identification and removal of bad lookup paths in a way that does not rely on the sharing of reputation scores, and we show that such sharing is vulnerable to attacks that make it unsuitable for most applications of ReDS. Through extensive simulations, we demonstrate that ReDS improves lookup success rates for Halo and Kad by 80 percent or more over a wide range of conditions, even against strategic attackers attempting to game their reputation scores and in the presence of node churn.


8. A Secure Payment Scheme with Low Communication and Processing Overhead for Multihop Wireless Networks (IEEE 2013)
We propose RACE, a report-based payment scheme for multihop wireless networks to stimulate node cooperation, regulate packet transmission, and enforce fairness. The nodes submit lightweight payment reports (instead of receipts) to the accounting center (AC) and temporarily store undeniable security tokens called Evidences. The reports contain the alleged charges and rewards without security proofs, e.g., signatures. The AC can verify the payment by investigating the consistency of the reports, and clear the payment of the fair reports with almost no processing overhead or cryptographic operations. For cheating reports, the Evidences are requested to identify and evict the cheating nodes that submit incorrect reports. Instead of requesting the Evidences from all the nodes participating in the cheating reports, RACE can identify the cheating nodes by requesting few Evidences. Moreover, an Evidence aggregation technique is used to reduce the Evidences' storage area. Our analytical and simulation results demonstrate that RACE requires much less communication and processing overhead than the existing receipt-based schemes with acceptable payment clearance delay and storage area. This is essential for the effective implementation of a payment scheme because it uses micropayment and the overhead cost should be much less than the payment value. Moreover, RACE can secure the payment and precisely identify the cheating nodes without false accusations.

9. Cluster-Based Certificate Revocation with Vindication Capability for Mobile Ad Hoc Networks (IEEE 2013)
Mobile ad hoc networks (MANETs) have attracted much attention due to their mobility and ease of deployment. However, the wireless and dynamic natures render them more vulnerable to various types of security attacks than the wired networks. The major challenge is to guarantee secure network services. To meet this challenge, certificate revocation is an important integral component to secure network communications. In this paper, we focus on the issue of certificate revocation to isolate attackers from further participating in network activities. For quick and accurate certificate revocation, we propose the Cluster-based Certificate Revocation with Vindication Capability (CCRVC) scheme. In particular, to improve the reliability of the scheme, we recover the warned nodes to take part in the certificate revocation process; to enhance the accuracy, we propose a threshold-based mechanism to assess and vindicate warned nodes as legitimate nodes or not, before recovering them. The performance of our scheme is evaluated by both numerical and simulation analysis. Extensive results demonstrate that the proposed certificate revocation scheme is effective and efficient to guarantee secure communications in mobile ad hoc networks.

10. Fault Tolerance in Distributed Systems Using Fused Data Structures (IEEE 2013)
Replication is the prevalent solution to tolerate faults in large data structures hosted on distributed servers. To tolerate f crash faults (dead/unresponsive data structures) among n distinct data structures, replication requires f + 1 replicas of each data structure, resulting in nf additional backups. We present a solution, referred to as fusion, that uses a combination of erasure codes and selective replication to tolerate f crash faults using just f additional fused backups. We show that our solution achieves O(n) savings in space over replication. Further, we present a solution to tolerate f Byzantine faults (malicious data structures), that requires only nf + f backups as compared to the 2nf backups required by replication. We explore the theory of fused backups and provide a library of such backups for all the data structures in the Java Collection Framework. The theoretical and experimental evaluation confirms that the fused backups are space-efficient as compared to replication, while they cause very little overhead for normal operation. To illustrate the practical usefulness of fusion, we use fused backups for reliability in Amazon's highly available key-value store, Dynamo. While the current replication-based solution uses 300 backup structures, we present a solution that only requires 120 backup structures. This results in savings in space as well as other resources such as power.
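
To make the erasure-coded backup idea concrete for the simplest case (f = 1 crash fault), the Java sketch below keeps a single XOR-coded backup across n primary arrays instead of one replica per primary, and recovers a crashed primary from the backup plus the survivors. It is a deliberately simplified illustration under these assumptions, not the paper's fused-backup library.

import java.util.Arrays;

// Minimal sketch of a fused (XOR-coded) backup for one crash fault.
public class XorFusedBackup {
    private final int[][] primaries;   // n primary data structures (fixed-size arrays here)
    private final int[] fused;         // single fused backup: element-wise XOR of primaries

    public XorFusedBackup(int n, int size) {
        primaries = new int[n][size];
        fused = new int[size];
    }

    // Update primary p at index i, and keep the fused backup consistent.
    public void update(int p, int i, int newValue) {
        fused[i] ^= primaries[p][i] ^ newValue;   // remove old value, add new one
        primaries[p][i] = newValue;
    }

    // Recover a crashed primary from the fused backup and the surviving primaries.
    public int[] recover(int crashed) {
        int[] recovered = fused.clone();
        for (int p = 0; p < primaries.length; p++) {
            if (p == crashed) continue;
            for (int i = 0; i < recovered.length; i++) recovered[i] ^= primaries[p][i];
        }
        return recovered;
    }

    public static void main(String[] args) {
        XorFusedBackup fb = new XorFusedBackup(3, 4);
        fb.update(0, 1, 42);
        fb.update(2, 3, 7);
        System.out.println(Arrays.toString(fb.recover(0))); // [0, 42, 0, 0]
    }
}
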

11. Flexible Symmetrical Global-Snapshot Algorithms for Large-Scale Distributed Systems (IEEE 2013)
Most existing global-snapshot algorithms in distributed systems use control messages to coordinate the construction of a global snapshot among all processes. Since these algorithms typically assume the underlying logical overlay topology is fully connected, the number of control messages exchanged among the whole set of processes is proportional to the square of the number of processes, resulting in a higher possibility of network congestion. Hence, such algorithms are neither efficient nor scalable for a large-scale distributed system composed of a huge number of processes. Recently, some efforts have been presented to significantly reduce the number of control messages, but doing so incurs higher response time instead. In this paper, we propose an efficient global-snapshot algorithm able to let every process finish its local snapshot in a given number of rounds. Particularly, such an algorithm allows a tradeoff between the response time and the message complexity. Moreover, our global-snapshot algorithm is symmetrical in the sense that identical steps are executed by every process. This means that our algorithm is able to achieve better workload balance and less network congestion. Most importantly, based on our framework, we demonstrate that the minimum number of control messages required by a symmetrical global-snapshot algorithm is Omega(N log N), where N is the number of processes. Finally, we also assume non-FIFO channels.

12. High Performance Resource Allocation Strategies for Computational Economies (IEEE 2013)
Utility computing models have long been the focus of academic research, and with the recent success of commercial cloud providers, computation and storage are finally being realized as the fifth utility. Computational economies are often proposed as an efficient means of resource allocation; however, adoption has been limited due to a lack of performance and high overheads. In this paper, we address the performance limitations of existing economic allocation models by defining strategies to reduce the failure and reallocation rate, increase occupancy, and thereby increase the obtainable utilization of the system. The high-performance resource utilization strategies presented can be used by market participants without requiring dramatic changes to the allocation protocol. The strategies considered include overbooking, advanced reservation, just-in-time bidding, and using substitute providers for service delivery. The proposed strategies have been implemented in a distributed meta-scheduler and evaluated with respect to Grid and cloud deployments. Several diverse synthetic workloads have been used to quantify both the performance benefits and economic implications of these strategies.



13. Optimal Client-Server Assignment for Internet Distributed Systems (IEEE 2013)
We investigate an underlying mathematical model and algorithms for optimizing the performance of a class of distributed systems over the Internet. Such a system consists of a large number of clients who communicate with each other indirectly via a number of intermediate servers. Optimizing the overall performance of such a system can then be formulated as a client-server assignment problem whose aim is to assign the clients to the servers in such a way as to satisfy some prespecified requirements on the communication cost and load balancing. We show that 1) the total communication load and load balancing are two opposing metrics, and consequently, their tradeoff is inherent in this class of distributed systems; 2) in general, finding the optimal client-server assignment for some prespecified requirements on the total load and load balancing is NP-hard; and therefore 3) we propose a heuristic via relaxed convex optimization for finding the approximate solution. Our simulation results indicate that the proposed algorithm produces superior performance compared to other heuristics, including the popular Normalized Cuts algorithm.

14. Scheduling Sensor Data Collection with Dynamic Traffic Patterns (IEEE 2013)
The network traffic pattern of continuous sensor data collection often changes constantly over time due to the exploitation of temporal and spatial data correlations as well as the nature of condition-based monitoring applications. This paper develops a novel TDMA schedule that is capable of efficiently collecting sensor data for any network traffic pattern and is thus well suited to continuous data collection with dynamic traffic patterns. Following this schedule, the energy consumed by sensor nodes for any traffic pattern is very close to the minimum required by their workloads given in the traffic pattern. The schedule also allows the base station to conclude data collection as early as possible according to the traffic load, thereby reducing the latency of data collection. Experimental results using real-world data traces show that, compared with existing schedules that are targeted on a fixed traffic pattern, our proposed schedule significantly improves the energy efficiency and time efficiency of sensor data collection with dynamic traffic patterns.

TECHNOLOGY: JAVA
DOMAIN: SOFTWARE ENGINEERING
S. No. | IEEE TITLE | ABSTRACT | IEEE YEAR

1. Ant Colony Optimization for Software Project Scheduling and Staffing with an Event-Based Scheduler (IEEE 2013)
Research into developing effective computer-aided techniques for planning software projects is important and challenging for software engineering. Different from projects in other fields, software projects are people-intensive activities and their related resources are mainly human resources. Thus, an adequate model for software project planning has to deal with not only the problem of project task scheduling but also the problem of human resource allocation. But as both of these two problems are difficult, existing models either suffer from a very large search space or have to restrict the flexibility of human resource allocation to simplify the model. To develop a flexible and effective model for software project planning, this paper develops a novel approach with an event-based scheduler (EBS) and an ant colony optimization (ACO) algorithm. The proposed approach represents a plan by a task list and a planned employee allocation matrix. In this way, both the issues of task scheduling and employee allocation can be taken into account. In the EBS, the beginning time of the project, the time when resources are released from finished tasks, and the time when employees join or leave the project are regarded as events. The basic idea of the EBS is to adjust the allocation of employees at events and keep the allocation unchanged at non-events. With this strategy, the proposed method enables the modeling of resource conflict and task preemption and preserves the flexibility in human resource allocation. To solve the planning problem, an ACO algorithm is further designed. Experimental results on 83 instances demonstrate that the proposed method is very promising.
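
The core ACO loop (probabilistic selection biased by pheromone, followed by evaporation and reinforcement) can be sketched in a few lines of Java, as below. Only the generic ACO component is shown; the event-based scheduler and the project model from the paper are not reproduced, and all constants and the toy quality function are assumptions.

import java.util.Random;

// Minimal sketch of the generic ant colony optimization update loop.
public class AcoSketch {
    public static void main(String[] args) {
        int options = 4;                       // e.g., candidate tasks to place next in a task list
        double[] pheromone = new double[options];
        java.util.Arrays.fill(pheromone, 1.0);
        double evaporation = 0.1;
        Random rng = new Random(1);

        for (int iter = 0; iter < 100; iter++) {
            // Choose an option with probability proportional to its pheromone.
            double total = 0;
            for (double p : pheromone) total += p;
            double r = rng.nextDouble() * total, acc = 0;
            int chosen = 0;
            for (int i = 0; i < options; i++) {
                acc += pheromone[i];
                if (r <= acc) { chosen = i; break; }
            }
            // Assumed quality function: pretend option 2 yields the best schedules.
            double quality = (chosen == 2) ? 1.0 : 0.2;
            // Evaporate, then reinforce the chosen option according to its quality.
            for (int i = 0; i < options; i++) pheromone[i] *= (1 - evaporation);
            pheromone[chosen] += quality;
        }
        System.out.println(java.util.Arrays.toString(pheromone)); // option 2 dominates
    }
}
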

TECHNOLOGY: DOTNET
DOMAIN: CLOUD COMPUTING

S. No. | IEEE TITLE | ABSTRACT | IEEE YEAR

1. A Novel Economic Sharing Model in a Federation of Selfish Cloud Providers (IEEE 2014)
Abstract: This paper presents a novel economic model to regulate capacity sharing in a federation of hybrid cloud providers (CPs). The proposed work models the interactions among the CPs as a repeated game among selfish players that aim at maximizing their profit by selling their unused capacity in the spot market but are uncertain of future workload fluctuations. The proposed work first establishes that the uncertainty in future revenue can act as a participation incentive to sharing in the repeated game. We then demonstrate how an efficient sharing strategy can be obtained via solving a simple dynamic programming problem. The obtained strategy is a simple update rule that depends only on the current workloads and a single variable summarizing past interactions. In contrast to existing approaches, the model incorporates historical and expected future revenue as part of the virtual machine (VM) sharing decision. Moreover, these decisions are enforced neither by a centralized broker nor by predefined agreements. Rather, the proposed model employs a simple grim trigger strategy where a CP is threatened by the elimination of future VM hosting by other CPs. Simulation results demonstrate the performance of the proposed model in terms of the increased profit and the reduction in the variance in the spot market VM availability and prices.

2. UCONABC Resilient Authorization Evaluation for Cloud Computing (IEEE 2014)
The business-driven access control used in cloud computing is not well suited for tracking fine-grained user service consumption. UCONABC applies continuous authorization reevaluation, which requires usage accounting that enables fine-grained access control for cloud computing. However, it was not designed to work in distributed and dynamic authorization environments like those present in cloud computing. During a continuous (periodical) reevaluation, an authorization exception condition, that is, a disparity among usage accounting and authorization attributes, may occur. This proposal aims to provide resilience to the UCONABC continuous authorization reevaluation, by dealing with individual exception conditions while maintaining a suitable access control in the cloud environment. The experiments made with a proof-of-concept prototype show a set of measurements for an application scenario (e-commerce) and allow for the identification of exception conditions in the authorization reevaluation.

3. Distributed, Concurrent, and Independent Access to Encrypted Cloud Databases (IEEE 2014)
Abstract: Placing critical data in the hands of a cloud provider should come with the guarantee of security and availability for data at rest, in motion, and in use. Several alternatives exist for storage services, while data confidentiality solutions for the database as a service paradigm are still immature. We propose a novel architecture that integrates cloud database services with data confidentiality and the possibility of executing concurrent operations on encrypted data. This is the first solution supporting geographically distributed clients to connect directly to an encrypted cloud database, and to execute concurrent and independent operations including those modifying the database structure. The proposed architecture has the further advantage of eliminating intermediate proxies that limit the elasticity, availability, and scalability properties that are intrinsic in cloud-based solutions. The efficacy of the proposed architecture is evaluated through theoretical analyses and extensive experimental results based on a prototype implementation subject to the TPC-C standard benchmark for different numbers of clients and network latencies.

4. Key-Aggregate Cryptosystem for Scalable Data Sharing in Cloud Storage (IEEE 2014)
Abstract: Data sharing is an important functionality in cloud storage. In this paper, we show how to securely, efficiently, and flexibly share data with others in cloud storage. We describe new public-key cryptosystems that produce constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is possible. The novelty is that one can aggregate any set of secret keys and make them as compact as a single key, but encompassing the power of all the keys being aggregated. In other words, the secret key holder can release a constant-size aggregate key for flexible choices of a ciphertext set in cloud storage, but the other encrypted files outside the set remain confidential. This compact aggregate key can be conveniently sent to others or be stored in a smart card with very limited secure storage. We provide formal security analysis of our schemes in the standard model. We also describe other applications of our schemes. In particular, our schemes give the first public-key patient-controlled encryption for flexible hierarchy, which was yet to be known.

TECHNOLOGY: DOTNET
DOMAIN: DATA MINING

S. No. | IEEE TITLE | ABSTRACT | IEEE YEAR

1. A Group Incremental Approach to Feature Selection Applying Rough Set Technique (IEEE 2014)
Many real data sets increase dynamically in size. This phenomenon occurs in several fields including economics, population studies, and medical research. As an effective and efficient mechanism to deal with such data, the incremental technique has been proposed in the literature and attracted much attention, which stimulates the result in this paper. When a group of objects are added to a decision table, we first introduce incremental mechanisms for three representative information entropies and then develop a group incremental rough feature selection algorithm based on information entropy. When multiple objects are added to a decision table, the algorithm aims to find the new feature subset in a much shorter time. Experiments have been carried out on eight UCI data sets, and the experimental results show that the algorithm is effective and efficient.
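
The incremental flavor of the approach can be illustrated with the Java sketch below, which updates Shannon entropy from maintained class counts when a group of new objects arrives, instead of rescanning the whole decision table. This shows only the incremental-entropy idea under simplifying assumptions; the paper's three information entropies and its rough-set feature selection algorithm are not reproduced here.

import java.util.HashMap;
import java.util.Map;

// Minimal sketch of maintaining entropy incrementally from class counts.
public class IncrementalEntropy {
    private final Map<String, Integer> classCounts = new HashMap<>();
    private long total = 0;

    public void addObjects(Map<String, Integer> newCountsByClass) {
        // Only the affected class counts change; entropy is recomputed from counts,
        // not from the raw objects, so the update cost is O(#classes).
        newCountsByClass.forEach((c, k) -> {
            classCounts.merge(c, k, Integer::sum);
            total += k;
        });
    }

    public double entropy() {
        double h = 0.0;
        for (int count : classCounts.values()) {
            double p = (double) count / total;
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    public static void main(String[] args) {
        IncrementalEntropy e = new IncrementalEntropy();
        e.addObjects(Map.of("yes", 6, "no", 2));
        System.out.println(e.entropy());
        e.addObjects(Map.of("no", 4));            // a new group of objects arrives
        System.out.println(e.entropy());
    }
}
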
2. Consensus-Based Ranking of Multivalued Objects: A Generalized Borda Count Approach (IEEE 2014)

Abstract: In this paper, we tackle a novel problem of ranking multivalued objects, where an object has multiple instances in a multidimensional space, and the number of instances per object is not fixed. Given an ad hoc scoring function that assigns a score to a multidimensional instance, we want to rank a set of multivalued objects. Different from the existing models of ranking uncertain and probabilistic data, which model an object as a random variable and assume the instances of an object to be exclusive, we have to capture the coexistence of instances here. To tackle the problem, we advocate the semantics of favoring widely preferred objects instead of majority votes, which is widely used in many elections and competitions. Technically, we borrow the idea from Borda Count (BC), a well-recognized method in consensus-based voting systems. However, Borda Count cannot handle multivalued objects of inconsistent cardinality, and is costly for evaluating top-k queries on large multidimensional data sets. To address the challenges, we extend and generalize Borda Count to quantile-based Borda Count, and develop efficient computational methods with comprehensive cost analysis. We present case studies on real data sets to demonstrate the effectiveness of the generalized Borda Count ranking, and use synthetic and real data sets to verify the efficiency of our computational method.

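As a hedged illustration of the consensus idea only (not the paper's exact quantile-based Borda Count or its top-k algorithms), the sketch below summarizes each multivalued object by a chosen quantile of its instance scores and awards Borda points by pairwise comparison. The quantile, the toy objects, and the assumption that lower scores are better are all illustrative.

```python
def quantile(values, q):
    """Empirical q-quantile (0 <= q <= 1) of a list of instance scores."""
    vals = sorted(values)
    return vals[min(int(q * len(vals)), len(vals) - 1)]

def quantile_borda_rank(objects, q=0.5):
    """objects: name -> list of instance scores (lower is better here).
    An object earns one Borda point for every other object whose q-quantile score it beats."""
    summary = {name: quantile(scores, q) for name, scores in objects.items()}
    points = {name: sum(1 for other in summary if other != name and summary[name] < summary[other])
              for name in summary}
    return sorted(points.items(), key=lambda kv: -kv[1])

# Multivalued objects with different numbers of instances per object.
hotels = {
    "A": [3.0, 4.5, 2.8],
    "B": [5.1, 4.9],
    "C": [2.0, 6.0, 3.5, 4.0],
}
print(quantile_borda_rank(hotels, q=0.5))  # e.g. [('A', 2), ('C', 1), ('B', 0)]
```
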
3. Rough Sets, Kernel Set, and Spatiotemporal Outlier Detection (IEEE 2014)

Abstract: Nowadays, the high availability of data gathered from wireless sensor networks and telecommunication systems has drawn the attention of researchers to the problem of extracting knowledge from spatiotemporal data. Detecting outliers which are grossly different from or inconsistent with the remaining spatiotemporal data set is a major challenge in real-world knowledge discovery and data mining applications. In this paper, we deal with the outlier detection problem in spatiotemporal data and describe a rough set approach that finds the top outliers in an unlabeled spatiotemporal data set. The proposed method, called Rough Outlier Set Extraction (ROSE), relies on a rough set theoretic representation of the outlier set using the rough set approximations, i.e., lower and upper approximations. We have also introduced a new set, named Kernel Set, that is a subset of the original data set and is able to describe the original data set both in terms of data structure and of obtained results. Experimental results on real-world data sets demonstrate the superiority of ROSE, both in terms of some quantitative indices and outliers detected, over results obtained by various rough fuzzy clustering algorithms and by state-of-the-art outlier detection methods. It is also demonstrated that the Kernel Set is able to detect the same outlier set but with less computational time.

4. Discovering Temporal Change Patterns in the Presence of Taxonomies (IEEE 2013)

Frequent item set mining is a widely used exploratory technique that focuses on discovering recurrent correlations among data. The steadfast evolution of markets and business environments prompts the need for data mining algorithms to discover significant correlation changes in order to reactively suit product and service provision to customer needs. Change mining, in the context of frequent item sets, focuses on detecting and reporting significant changes in the set of mined item sets from one time period to another. The discovery of frequent generalized item sets, i.e., item sets that (1) frequently occur in the source data and (2) provide a high-level abstraction of the mined knowledge, raises new challenges in the analysis of item sets that become rare, and thus are no longer extracted, from a certain point onward. This paper proposes a novel kind of dynamic pattern, namely the History Generalized Pattern (HiGen), that represents the evolution of an item set in consecutive time periods by reporting information about its frequent generalizations characterized by minimal redundancy (i.e., minimum level of abstraction) in case it becomes infrequent in a certain time period. To address HiGen mining, the paper proposes HiGen Miner, an algorithm that avoids item set mining followed by postprocessing by exploiting a support-driven item set generalization approach. To focus the attention on the minimally redundant frequent generalizations and thus reduce the number of generated patterns, the discovery of a smart subset of HiGens, namely the Non-redundant HiGens, is addressed as well. Experiments performed on both real and synthetic datasets show the efficiency and the effectiveness of the proposed approach as well as its usefulness in a real application context.

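The simplified sketch below illustrates change mining over frequent item sets across two time periods; it is not the HiGen Miner and it ignores taxonomies and generalized item sets. Item sets are mined in each period with the same minimum support, and item sets that became infrequent or whose support shifted are reported. The transactions and thresholds are invented for illustration.

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Brute-force enumeration of frequent item sets; fine for tiny illustrative data."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for k in range(1, len(items) + 1):
            for combo in combinations(items, k):
                counts[combo] += 1
    n = len(transactions)
    return {iset: c / n for iset, c in counts.items() if c / n >= min_support}

def report_changes(period1, period2, min_support, min_shift=0.2):
    f1 = frequent_itemsets(period1, min_support)
    f2 = frequent_itemsets(period2, min_support)
    for iset in sorted(set(f1) | set(f2)):
        s1, s2 = f1.get(iset), f2.get(iset)
        if s1 is not None and s2 is None:
            print(f"{iset}: became infrequent (support {s1:.2f} -> below {min_support})")
        elif s1 is not None and s2 is not None and abs(s1 - s2) >= min_shift:
            print(f"{iset}: support changed {s1:.2f} -> {s2:.2f}")

period1 = [["milk", "bread"], ["milk", "bread", "eggs"], ["bread"], ["milk", "bread"]]
period2 = [["milk"], ["eggs"], ["milk", "eggs"], ["bread", "eggs"]]
report_changes(period1, period2, min_support=0.5)
```

In the paper, an item set that becomes rare is not simply dropped; HiGen Miner instead reports its minimally redundant frequent generalizations drawn from the taxonomy.
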
5. Information-Theoretic Outlier Detection for Large-Scale Categorical Data (IEEE 2013)

Outlier detection can usually be considered as a pre-processing step for locating, in a data set, those objects that do not conform to well-defined notions of expected behavior. It is very important in data mining for discovering novel or rare events, anomalies, vicious actions, exceptional phenomena, etc. We are investigating outlier detection for categorical data sets. This problem is especially challenging because of the difficulty of defining a meaningful similarity measure for categorical data. In this paper, we propose a formal definition of outliers and an optimization model of outlier detection, via a new concept of holoentropy that takes both entropy and total correlation into consideration. Based on this model, we define a function for the outlier factor of an object which is solely determined by the object itself and can be updated efficiently. We propose two practical 1-parameter outlier detection methods, named ITB-SS and ITB-SP, which require no user-defined parameters for deciding whether an object is an outlier. Users need only provide the number of outliers they want to detect. Experimental results show that ITB-SS and ITB-SP are more effective and efficient than mainstream methods and can be used to deal with both large and high-dimensional data sets where existing algorithms fail.

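As a hedged sketch of the intuition only (not the exact ITB-SS or ITB-SP procedures), the code below scores each categorical object by an entropy-weighted "surprise" of its attribute values, so that rare values in otherwise regular attributes contribute most, and returns the requested number of top-scoring objects. The sigmoid weighting and the toy data are simplifying assumptions.

```python
from collections import Counter
from math import exp, log2

def outlier_scores(records):
    """records: list of equal-length tuples of categorical values."""
    n, m = len(records), len(records[0])
    col_counts = [Counter(r[a] for r in records) for a in range(m)]
    entropies = [-sum((c / n) * log2(c / n) for c in counts.values()) for counts in col_counts]
    # Down-weight attributes that are already irregular (high entropy), so that an unusual
    # value in a normally regular attribute is the most surprising.
    weights = [2 * (1 - 1 / (1 + exp(-h))) for h in entropies]
    return [sum(w * -log2(col_counts[a][r[a]] / n) for a, w in enumerate(weights))
            for r in records]

def top_outliers(records, num_outliers):
    scores = outlier_scores(records)
    return sorted(range(len(records)), key=lambda i: -scores[i])[:num_outliers]

data = [
    ("red",  "small", "metal"),
    ("red",  "small", "metal"),
    ("red",  "small", "metal"),
    ("blue", "small", "metal"),
    ("red",  "large", "plastic"),  # deviates on two attributes
]
print(top_outliers(data, num_outliers=1))  # expected: [4]
```

As in the paper's methods, the only user-supplied parameter here is the number of outliers to report.
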
6. Robust Module-Based Data Management (IEEE 2013)

The current trend for building an ontology-based data management system (DMS) is to capitalize on efforts made to design a preexisting well-established DMS (a reference system). The method amounts to extracting from the reference DMS a piece of schema relevant to the new application needs (a module), possibly personalizing it with extra constraints w.r.t. the application under construction, and then managing a data set using the resulting schema. In this paper, we extend the existing definitions of modules and we introduce novel properties of robustness that provide means for checking easily that a robust module-based DMS evolves safely w.r.t. both the schema and the data of the reference DMS. We carry out our investigations in the setting of description logics, which underlie modern ontology languages like RDFS, OWL, and OWL2 from W3C. Notably, we focus on the DL-liteA dialect of the DL-lite family, which encompasses the foundations of the QL profile of OWL2 (i.e., DL-liteR): the W3C recommendation for efficiently managing large data sets.

7. Protecting Sensitive Labels in Social Network Data Anonymization (IEEE 2013)

Privacy is one of the major concerns when publishing or sharing social network data for social science research and business analysis. Recently, researchers have developed privacy models similar to k-anonymity to prevent node reidentification through structure information. However, even when these privacy models are enforced, an attacker may still be able to infer one's private information if a group of nodes largely share the same sensitive labels (i.e., attributes). In other words, the label-node relationship is not well protected by pure structure anonymization methods. Furthermore, existing approaches, which rely on edge editing or node clustering, may significantly alter key graph properties. In this paper, we define a k-degree-l-diversity anonymity model that considers the protection of structural information as well as sensitive labels of individuals. We further propose a novel anonymization methodology based on adding noise nodes. We develop a new algorithm by adding noise nodes into the original graph with the consideration of introducing the least distortion to graph properties. Most importantly, we provide a rigorous analysis of the theoretical bounds on the number of noise nodes added and their impacts on an important graph property. We conduct extensive experiments to evaluate the effectiveness of the proposed technique.

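To make the model concrete, here is a small hedged check of one common reading of the k-degree-l-diversity property (the paper's actual contribution, the noise-node anonymization algorithm, is not shown): every group of nodes sharing the same degree must contain at least k nodes and at least l distinct sensitive labels. The graph encoding and example labels are assumptions for illustration.

```python
from collections import defaultdict

def satisfies_k_degree_l_diversity(adjacency, labels, k, ell):
    """adjacency: node -> set of neighbor nodes; labels: node -> sensitive label; ell is l."""
    degree_groups = defaultdict(list)
    for node, neighbors in adjacency.items():
        degree_groups[len(neighbors)].append(node)
    for nodes in degree_groups.values():
        if len(nodes) < k:                        # a degree value shared by too few nodes is identifying
            return False
        if len({labels[v] for v in nodes}) < ell:  # the group would leak its sensitive label
            return False
    return True

adjacency = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5, 6}, 5: {4}, 6: {4}}
labels = {1: "HIV", 2: "flu", 3: "flu", 4: "cancer", 5: "flu", 6: "HIV"}
print(satisfies_k_degree_l_diversity(adjacency, labels, k=2, ell=2))  # True
```
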

TECHNOLOGY: DOTNET
DOMAIN: PARALLEL & DISTRIBUTED SYSTEM
S. No. | IEEE TITLE | ABSTRACT | IEEE YEAR

1. Behavioral Malware Detection in Delay Tolerant Networks (IEEE 2014)

Abstract: The delay-tolerant-network (DTN) model is becoming a viable communication alternative to the traditional infrastructural model for modern mobile consumer electronics equipped with short-range communication technologies such as Bluetooth, NFC, and Wi-Fi Direct. Proximity malware is a class of malware that exploits the opportunistic contacts and distributed nature of DTNs for propagation. Behavioral characterization of malware is an effective alternative to pattern matching in detecting malware, especially when dealing with polymorphic or obfuscated malware. In this paper, we first propose a general behavioral characterization of proximity malware based on a naive Bayesian model, which has been successfully applied in non-DTN settings such as filtering email spam and detecting botnets. We identify two unique challenges for extending Bayesian malware detection to DTNs (insufficient evidence versus evidence collection risk, and filtering false evidence sequentially and distributedly), and propose a simple yet effective method, look ahead, to address the challenges. Furthermore, we propose two extensions to look ahead, dogmatic filtering and adaptive look ahead, to address the challenge of malicious nodes sharing false evidence. Real mobile network traces are used to verify the effectiveness of the proposed methods.

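A minimal hedged sketch of the underlying evidence-accumulation idea follows (it is not the paper's look ahead, dogmatic filtering, or adaptive look ahead mechanisms): a node keeps a running log-likelihood ratio of a neighbor's observed behaviors under assumed "malicious" versus "benign" models and cuts the neighbor off once the ratio crosses a threshold. The behavior alphabet, the probabilities, and the threshold are invented for illustration.

```python
from math import log

# Assumed per-behavior probabilities under the two hypotheses (illustrative only).
P_MALICIOUS = {"suspicious": 0.6, "normal": 0.4}
P_BENIGN = {"suspicious": 0.1, "normal": 0.9}

def assess_neighbor(observed_behaviors, cutoff_llr=3.0):
    """Sequentially accumulate the log-likelihood ratio over observed behaviors.
    Returns ("cutoff", step) once the evidence crosses the threshold, else ("keep", llr)."""
    llr = 0.0
    for step, behavior in enumerate(observed_behaviors, start=1):
        llr += log(P_MALICIOUS[behavior] / P_BENIGN[behavior])
        if llr >= cutoff_llr:
            return ("cutoff", step)
    return ("keep", llr)

print(assess_neighbor(["normal", "suspicious", "suspicious", "suspicious"]))  # cut off at step 4
print(assess_neighbor(["normal"] * 10))                                       # kept
```

The DTN-specific difficulty the paper addresses is that such evidence arrives sporadically through opportunistic contacts and may be shared by other, possibly lying, nodes.
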
2. LocaWard: A Security and Privacy Aware Location-Based Rewarding System (IEEE 2014)

Abstract: The proliferation of mobile devices has driven mobile marketing to surge in the past few years. Emerging as a new type of mobile marketing, mobile location-based services (MLBSs) have attracted intense attention recently. Unfortunately, current MLBSs have a lot of limitations and raise many concerns, especially about system security and users' privacy. In this paper, we propose a new location-based rewarding system, called LocaWard, where mobile users can collect location-based tokens from token distributors and then redeem their gathered tokens at token collectors for beneficial rewards. Tokens act as virtual currency. The token distributors and collectors can be any commercial entities or merchants that wish to attract customers through such a promotion system, such as stores, restaurants, and car rental companies. We develop a security and privacy aware location-based rewarding protocol for the LocaWard system, and prove the completeness and soundness of the protocol. Moreover, we show that the system is resilient to various attacks and that mobile users' privacy is well protected in the meantime. We finally implement the system and conduct extensive experiments to validate the system's efficiency in terms of computation, communication, energy consumption, and storage costs.



3. Power Cost Reduction in Distributed Data Centers: A Two-Time-Scale Approach for Delay Tolerant Workloads (IEEE 2014)

Abstract: This paper considers a stochastic optimization approach for job scheduling and server management in large-scale, geographically distributed data centers. Randomly arriving jobs are routed to a choice of servers. The number of active servers depends on server activation decisions that are updated at a slow time scale, and the service rates of the servers are controlled by power scaling decisions that are made at a faster time scale. We develop a two-time-scale decision strategy that offers provable power cost and delay guarantees. The performance and robustness of the approach are illustrated through simulations.

4. Traffic Pattern-Based Content Leakage Detection for Trusted Content Delivery Networks (IEEE 2014)

Abstract: Due to the increasing popularity of multimedia streaming applications and services in recent years, the issue of trusted video delivery to prevent undesirable content leakage has become critical. While preserving user privacy, conventional systems have addressed this issue by proposing methods based on the observation of streamed traffic throughout the network. These conventional systems maintain high detection accuracy while coping with some of the traffic variation in the network (e.g., network delay and packet loss); however, their detection performance degrades substantially owing to the significant variation of video lengths. In this paper, we focus on overcoming this issue by proposing a novel content-leakage detection scheme that is robust to variation in video length. By comparing videos of different lengths, we determine a relation between the lengths of the videos being compared and the similarity between them. Thereby, we enhance the detection performance of the proposed scheme even in an environment subject to variation in video length. Through a testbed experiment, the effectiveness of our proposed scheme is evaluated in terms of variation of video length, delay variation, and packet loss.

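As a hedged, simplified illustration of traffic-pattern comparison (not the paper's length-robust detection scheme), the sketch below resamples two per-interval traffic series to a common length and flags a likely leaked copy when their Pearson correlation exceeds a threshold. The naive resampling step is one crude way of coping with different video lengths, and all numbers are invented.

```python
from math import sqrt

def resample(series, length):
    """Nearest-index resampling of a traffic series (e.g., bytes per interval) to a fixed length."""
    return [series[min(int(i * len(series) / length), len(series) - 1)] for i in range(length)]

def pearson(a, b):
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = sqrt(sum((x - mean_a) ** 2 for x in a))
    sd_b = sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (sd_a * sd_b) if sd_a and sd_b else 0.0

def looks_leaked(traffic_a, traffic_b, threshold=0.9):
    n = min(len(traffic_a), len(traffic_b))
    return pearson(resample(traffic_a, n), resample(traffic_b, n)) >= threshold

original  = [120, 300, 280, 90, 310, 305, 100, 95]  # bytes per interval of the licensed stream
suspected = [118, 295, 285, 92, 312, 300, 98]       # shorter capture of a suspected copy
print(looks_leaked(original, suspected))             # True
```
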
