
International Journal of Research in Science and Technology

Volume 2, Issue 3 (I): July - September 2015


Editor-in-Chief

Dr. Pranjal Sharma

Members of Editorial Advisory Board


Dr. Nurul Fadly Habidin
Faculty of Management and Economics,
Universiti Pendidikan Sultan Idris, Malaysia

Dr. Marwan Mustafa Shammot


Associate Professor,
King Saud University, Riyadh, Saudi Arabia

Prof. P. Madhu Sudana Rao


Professor of Banking and Finance,
Mekelle University, Mekelle, Ethiopia

Dr. Amer A. Taqa


Faculty, DBS Department,
College of Dentistry, Mosul University

Dr. Agbo J. Madaki
Lecturer,
Catholic University of Eastern Africa, Kenya

Dr. Sunita Dwivedi


Associate Professor, Symbiosis International
University, Noida

Dr. D. K. Pandey
Director,
Unique Institute of Management & Technology, Meerut

Dr. Sudhansu Ranjan Mohapatra
Director, Centre for Juridical Studies,
Dibrugarh University, Dibrugarh
Dr. Tazyn Rahman
Dean (Academics)
Jaipuria Institute, Ghaziabad

Dr. Neetu Singh


HOD, Department of Biotechnology,
Mewar Institute, Vasundhara, Ghaziabad

Dr. Teena Shivnani


HOD, Department of Commerce,
Manipal University, Jaipur

Dr. Anindita
Associate Professor,
Jaipuria School of Business, Ghaziabad

Dr. K. Ramani
Associate Professor,
K.S.Rangasamy College of Technology, Namakkal

Dr. S. Satyanarayana
Associate Professor,
KL University, Guntur

Dr. Subha Ganguly
Scientist (Food Microbiology),
University of Animal and Fishery Sciences, Kolkata

Dr. Gauri Dhingra
Assistant Professor,
JIMS, Vasant Kunj, New Delhi
Dr. V. Tulasi Das
Assistant Professor,
Acharya Nagarjuna University

Dr. R. Suresh
Assistant Professor,
Mahatma Gandhi University

Copyright © 2014 Indian Academicians and Researchers Association, Guwahati


All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, or stored in any retrieval system of any nature
without prior written permission. Application for permission for other use of copyright material including permission to reproduce
extracts in other published works shall be made to the publishers. Full acknowledgment of author, publishers and source must be given.
The views expressed in the articles are those of the contributors and not necessarily of the Editorial Board or the IARA. Although every
care has been taken to avoid errors or omissions, this publication is being published on the condition and understanding that information
given in this journal is merely for reference and must not be taken as having authority of or binding in any way on the authors, editors
and publishers, who do not owe any responsibility for any damage or loss to any person, for the result of any action taken on the basis of
this work. All disputes are subject to Guwahati jurisdiction only.

International Journal of Research in Science and Technology


Volume 2, Issue 3 ( I ): July - September 2015

CONTENTS
Research Papers
PORTFOLIOS OF CONTROL IN BUSINESS PROCESS OUTSOURCING
1 - 13
Dr. Rohtash Kumar Garg and Neha Solanki

INVENTORY MODELING FOR DETERIORATING ITEMS: FUNDAMENTAL VIEW
14 - 21
Hetal R. Patel

AN OVERVIEW OF DATA WAREHOUSING AND OLAP OPERATIONS
22 - 28
Chandrakant Dewangan, Dileshwar Dansena, Mili Patel and Pooja Khemka

FORMANT ESTIMATION FOR SPEECH RECOGNITION AND STRUCTURAL ANALYSIS OF ASSAMESE LANGUAGE IN ASSAM
29 - 37
Dr. Rashmi Dutta

CELLPHONE CLONING OVER (GSM & CDMA)
38 - 40
Mukesh Patel, Mili Patel and Pooja Khemka

CONVERSION OF METHANOL TO FORMALDEHYDE OVER Ag NANOROD CATALYST UNDER MICROWAVE IRRADIATION
41 - 43
Manish Srivastava, Aakanksha Mishra, Ashu Goyal, Anamika Srivastava and Preeti Tomer

DIGITAL SIGNATURE VERIFICATION WITH OTP GENERATION BASED ON HIDDEN MARKOV MODEL
44 - 47
A. Manoj, M. Ayyappan, T. Rajesh Kumar and Dr. S. Padmapriya

ENHANCED CERTIFICATE REVOCATION USING CLUSTERING SCHEME FOR MANETs
48 - 53
N. R. Vaishnavi, J. Omana, Sherinkaran Wisebell and I. M. Thaya

LOCATION PRIVACY IN UBIQUITOUS COMPUTING
54 - 58
Purnima Pradhan, Savita Singh, Mili Patel and Pooja Khemka

RESEARCH ISSUE IN: STUDY OF QUANTUM CRYPTOGRAPHY
59 - 62
Rupali Yadav, Neellima Manher, Mili Patel and Pooja Khemka

ARTIFICIAL INTELLIGENCE
63 - 67
Amanjot Kaur

STUDY OF E-VOTING SYSTEM WITH MULTI SECURITY USING BIOMETRIC
68 - 75
Shashikala Khandey, Kavita Patel, Mili Patel and Pooja Khemka

STUDY OF QUANTUM COMPUTING
76 - 80
Alka Chandan, Ankita Panda, Mili Patel and Pooja Khemka

CONSIDERING ANY RELATION BETWEEN ORGANIZATIONAL CULTURE & SUGGESTIONS SYSTEM
81 - 89
Kiamars Fadaei

CORPORATE GOVERNANCE AND FINANCIAL STABILITY IN SUDANESE BANKING SECTOR
90 - 98
Dina Ahmed Mohamed Ghandour

International Journal of Research in Science and Technology

ISSN 2394 - 9554

Volume 2, Issue 3 (I): July - September, 2015


PORTFOLIOS OF CONTROL IN BUSINESS PROCESS OUTSOURCING
Dr. Rohtash Kumar Garg and Neha Solanki
Assistant Professor, DIRD, New Delhi
ABSTRACT
The three trends that seem certain to dominate the world for some time to come are globalization,
technological advances and deregulation. Together they make geographical dispersion an area of low
concern in the planning of business strategy, as enterprises increasingly look to leverage the cost or
differentiation advantages available across the globe, forging partnerships to create a value chain with the aim
of accomplishing the most with the least. It is in this scenario that business process outsourcing (BPO) emerges
as the latest buzz in management thinking, as a global supply chain of information and expertise that stretches
from Mumbai to Manhattan is etched.
The greatest challenge in this management tool, however, is one of control. Managers are accustomed to
having direct control over the resources that deliver the results for which they're accountable. With BPO, these
controls are in the hands of the provider. How this aspect of governance is handled can mean the difference
between adequate results and high-performing outsourcing that delivers beyond expectations. It is this control
issue, its practice and its structuring, that is explored in this paper.
Keywords: business process outsourcing, control, management
INTRODUCTION
In 1989, when CIO Katherine Hudson outsourced the management of Kodak's entire data centre to DEC-IBM,
she introduced a whole new paradigm in business administration that would soon transcend borders, go global
and create havoc among the then existing ethics of business and exchange. The Kodak-IBM deal changed the
rules of the management game irreversibly. If a large Fortune 500 company such as this could do it to save
costs and protect its core competency, would others allow themselves to lag behind?
Close to two decades later, the world has moved on to outsourcing practically every business process,
including medical services, to third-party vendors. The global business process outsourcing sector is currently
estimated at over US$ 18 billion. Nasscom predicts the total exports of the Indian BPO sector to exceed
US$ 8.3 billion in FY2006-07, growing by 32 percent over the previous year. Over FY2001-2006, India's
share in global process sourcing is estimated to have grown from 39 percent to 45 percent. As a proportion of
national GDP, the revenue aggregate of the Indian technology sector has grown from 1.2 percent in FY1998 to
an estimated 5.4 percent in FY2007. Net value added by this sector to the economy is estimated at 3-3.5
percent for FY2007 (all data from the Nasscom Strategic Review 2007).
The increasing pervasiveness of outsourcing, the competitiveness and diversity of the market, and the
consequent growing interest among researchers on various outsourcing issues provide the impetus for this
study. In the last decade, a large number of studies have been conducted to address a variety of outsourcing
research issues. Researchers have focused on various outsourcing issues such as motivation (Buchowicz 1991,
Buck-Lew 1992), scope (Benko 1993, Gupta & Gupta 1992), performance (Arnett and Jones 1994, Loh and
Venkatraman 1995), contract (Fitzgerald and Willcocks 1994, Richmond and Seidmann 1992) and partnership
(Grover, Cheon and Teng 1996, Klepper 1995).
However, there has been only limited attention to the form of control systems that are suited to strategic
alliances, with calls to extend the domain of organizational control to cover inter-firm relationships (Otley,
1994; Hopwood, 1996; Speklé, 2001). Research that explicitly considers the design of control in outsourcing
relationships has, however, begun to appear (van der Meer-Kooistra and Vosselman, 2000; Mouritsen et al.,
2001; Chua and Mahama, 2002). A common feature of all these studies, though, has been their focus on the
client's perspective. In fact, as Levina and Ross note, while the client's sourcing decisions and the client-vendor
relationship have been examined in the outsourcing literature, the vendor's perspective has hardly been explored
(Levina & Ross, 2003). This near-total lack of vendor perspectives in outsourcing research has also been
highlighted by Dibbern et al. (2004).
This burgeoning impact of business process outsourcing, coupled with the paucity of theoretical understanding
of control from the vendor's perspective, has motivated this research. An attempt to understand critical
outsourcing control issues and to provide an integrative theoretical perspective that adds new insights into the
direction and focus of future outsourcing research has been the underlying rationale for this study; the intent being to
explore how control modes are implemented in business process outsourcing relationships.

Research provides evidence of controllers in inter-organizational settings structuring a portfolio of control
modes in order to manage the complexities and subtleties of a task that involves people with various knowledge
and skills across time (Boland 1979, Orlikowski 1991, Henderson and Lee 1992). We therefore expect that, even in
the business process outsourcing context, firms will use a portfolio of controls. Thus, our research objective is
to explore the mechanisms that constitute the portfolios of control in business process outsourcing relationships.
REVIEW OF LITERATURE
Research on business process outsourcing has for the most part been driven by the practitioner community.
Academics have largely demonstrated relatively less interest in researching this phenomenon, leading to a
"virtual absence of academic publications on the topic" (Rouse and Corbitt, 2004). Other researchers such as
Gewald et al. (2006) and Whitaker et al. (2006) have also noted this lacuna. Also, business process outsourcing
has often been treated as an extension of the concept of IT/IS outsourcing to IT-intensive business processes
(Hyder et al., 2002) or as a subset of IT/IS outsourcing (Sovie & Hansen, 2001; Michell & Fitzgerald, 1997).
Hence, for this paper, we have sourced information from the extensive body of work on IT/IS outsourcing
research of the past two decades. As Dibbern et al. (2004) note, research on business process
outsourcing "would benefit from standing on the shoulders of what has already been accomplished in the field
of IS outsourcing".
CONCEPTUALIZATION OF BUSINESS PROCESS OUTSOURCING
The transaction cost approach to the theory of the firm hypothesizes that firms are organizational innovations
born out of the costs involved in market transacting, created in order to reduce those costs. Coase (1937) argued
that, where the firm and the market are alternatives for organizing the same set of transactions, a firm will substitute
for market transactions as long as management costs are less than transaction costs. Thanks to the convergence in
corporate computing platforms and rapid advances in communications technology, it has become easy and
inexpensive to seamlessly link together geographically dispersed information systems, making market
transactions for executing several activities previously done within firm boundaries possible and preferable.
This concept of remotely executing tasks was the genesis of business process outsourcing, defined as "the
delegation of one or more IT-intensive business processes to an external provider that, in turn, owns,
administrates and manages the selected process/processes, based upon defined and measurable performance
metrics" (Gartner 2004).
The key defining characteristics brought out by this definition are:
- The transfer of management and execution of one or more complete business processes or entire business functions to an external service provider (vendor).
- The outsourced processes are essentially IT-intensive. Such highly transactional, technology-intensive work lends itself easily to business process automation.
- The vendor is part of the decision-making structure surrounding the outsourced business function.
- The client relinquishes control over the outsourced process in favor of monitoring through performance metrics tied to strategic business value.
In ascending order of value and level of expertise required, BPO can be classified as:
- Data entry and conversion, which includes medical transcription;
- Rule-set processing, in which a worker makes judgments based on rules set by the customer. He might decide, for example, whether, under an airline's rules, a passenger is allowed an upgrade;
- Problem-solving, in which the BPO provider has more discretion, for example, to decide if an insurance claim should be paid;
- Direct customer interaction, in which the BPO provider handles more elaborate transactions with the client's customers. Collecting delinquent payments from credit-card customers is an example;
- Expert knowledge services, which require specialists (with the help of a database). For example, a BPO provider may predict how credit-card users' behavior will change if their credit rating improves.

A business process outsourcing relationship typically progresses through the following four phases:

Figure 1
CONTROL PRACTICES: A SURVEY AND ANALYSIS OF LITERATURE
The issue of control has received considerable attention in the organizational literature. Anthony's (1965)
contribution in particular is often referred to and well known for its distinction between strategic planning,
management control and operational control. Child (1984), Eisenhardt (1985), Hofstede (1981) and Ouchi
(1979) determine the context of control in organizations and identify a number of characteristics that lend
themselves to defining a typology. On the other hand, Boland (1979) and Orlikowski (1991) reveal how control is
applied in the context of information management. Looking at the existing theories, three common dimensions
can be identified, according to Fischer (1993), that describe a useful typology for analyzing control: focus of
control (directed at whom or what), measures of control (degree of control), and process of control (means of
enforcing control).
Control in this study is viewed in a behavioral sense, that is, as the organization's attempt to increase the
probability that people will behave in ways that lead to the attainment of organizational goals (Flamholtz et al.,
1985), and thus includes a range of mechanisms to monitor and execute operations. As Tannenbaum (1968)
states, it is the function of control to bring about conformance to organizational requirements and achievement
of the ultimate goals of the organization. Taking this broad view allows for an examination of multiple
approaches to control, and avoids problems associated with narrower perspectives. For example, the
practitioner literature typically takes a cybernetic view of control, in which outputs are compared against
standards and corrective actions are taken to address deviations; the PMI Standards Committee (1996) offers
the following definition: "the process of comparing actual performance with planned performance, analyzing
variances, evaluating possible alternatives, and taking appropriate corrective action as needed" (p. 161, emphasis
in original). This cybernetic view assumes that outcomes are known, standards can be set, and corrective action
is possible (Markus and Pfeffer 1983, Merchant and Simons 1986, Jaworski 1988). However, desired outcomes,
standards, and corrective actions are not always obvious in the outsourcing environment, rendering these
assumptions problematic and suggesting the need for a broader interpretation of control (Kirsch 1997).
The behavioral view of control implies that the controller uses certain devices, or control mechanisms, to
promote desired behavior by the controllee (Kirsch 1997). These control mechanisms help implement control
modes, which may broadly be divided into formal controls, i.e., modes that rely on mechanisms that influence
the controllee's behavior through performance evaluation and rewards, and informal controls, i.e., modes that
utilize social or people strategies to reduce goal differences between controller and controllee (Eisenhardt 1985;
Kirsch 1996, 1997). Some researchers (e.g., Merchant 1988) view formal and informal controls not as a
dichotomy, but as opposite ends of a continuum. Both formal and informal controls are exercised through
mechanisms used to influence controllee behavior. Examples include target implementation dates (formal
control) and socialization (informal control) (Kirsch 1997).
Two types of formal controls have been commonly considered in the prior literature (e.g., Ouchi 1979, Eisenhardt
1985): behavior control and outcome control. In outcome control, the controller explicitly states desired
outcomes or goals and rewards the controllee for meeting those goals (Kirsch 1997). In behavior control, the
controller seeks to influence the process, or the means to goal achievement, by explicitly prescribing specific
rules and procedures, observing the controllee's behaviors, and rewarding the controllee based on the extent to
which it follows stated procedures (Jaworski & MacInnis 1989, Kirsch 1996).
Informal controls are also of two types: clan control and self-control. Clan controls operate by promulgating
common values, beliefs, and philosophy within a clan, which is defined as a group of individuals who are
dependent on one another and who share a set of common goals (Kirsch 1996). In self-control, the controllee
determines both the goals and the actions through which they should be achieved (Henderson and Lee 1992).
Several authors (e.g., Von Glinow 1983, Kirsch 1996) have noted, however, that the controller can also
encourage or enable the controllee to exercise self-control.
Studies on intra-organizational control practices further suggest that controllers often use the four modes
(behavior, outcome, clan, self) in combination, creating a portfolio of controls (Kim 1984, Jaworski 1988,
Jaworski et al. 1993, Kirsch 1997). Referring to information systems development projects, Kirsch (1997) noted
that "in no instance did a controller in this study eliminate formal controls... the controllers were not
comfortable relying solely on informal modes of control... they seemed to require a comfort zone of
formal modes" (Kirsch 1997, p. 235).
CONTROL IN THE OUTSOURCING CONTEXT
Business process outsourcing is not just a technical process of managing a business function but also a social
process involving stakeholders from multiple organizational units. This set of stakeholders possesses critical
and complementary skills and knowledge that will be called upon during the course of the relationship.
Successful outsourcing arrangements, therefore, require effective management of relationships among these
stakeholders to elicit their contributions and cooperation, while, at the same time, maintaining progress in
conformance with the business value propositions and proposed schedules and budgets. Exercising control is
one powerful approach that managers can use to ensure progress by fusing together the complementary roles
and capabilities of the outsource participants, motivating individuals to work in accordance with organizational
goals and objectives. Research suggests a natural link between how an outsourcing arrangement is structured
and managed, and the subsequent outcomes (Dibbern et al., 2004). As Clark et al. state, "the truly critical
success factors associated with successful outsourcing are those associated with vendor governance" (Clark et
al., 1998). The practitioner literature has also noted the critical role that control plays in effective outsourcing
management (Linder and Sawyer, 2003).
RESEARCH METHODOLOGY
This section presents the methodological elements and the research design of this study. The first section
discusses the unit of analysis. The second section describes the stages of our case study. We then address the
issue of research quality.
UNIT OF ANALYSIS
The focus of this study is the outsourcing relationship between a service receiver and a service provider. An
individual client-vendor-outsourced-function relationship, commonly referred to as a "queue", embedded within
the vendor organization is regarded as the unit of analysis. Most of the prior research in outsourcing has
collected data from either the customers or the vendors (Levina and Ross 2003). In this
study we adopt a more comprehensive and balanced approach by collecting data from both clients and vendors;
this research design significantly reduces the risks of common source bias (Zmud and Boynton 1991).
Our cases were business process outsourcing relationships with durations in excess of one year. Establishing
this criterion of duration ensured that the client and vendor had had sufficient opportunity to work with each
other and had developed a degree of maturity in the relationship, reflected in the establishment of the control
structures. A degree of outsourcing success was also inherent, as contract durations in business process
outsourcing relationships are typically one year; hence a relationship in excess of one year would have
witnessed at least one contract renewal or extension.
THE CASE STUDY APPROACH
The case study is a research strategy which focuses on understanding the dynamics present within single
settings (Eisenhardt 1989). This research method is preferred when "how" or "why" questions are being posed,
when the investigator has little control over events, and when the focus is on a contemporary phenomenon
within some real-life context (Yin 1984). The investigation of how clients and vendors exercise control to ensure that
business process outsourcing relationships progress in conformance with the business value propositions,
proposed schedules and budgets satisfied all of these criteria. The case study method is well established in
outsourcing research, especially with reference to governance issues (Kirsch 1997, Kern and Willcocks 2000).

The procedure we followed for our exploratory case study was as per the steps outlined by Eisenhardt (1989).
We started with a definition of the research question: "How do stakeholders exercise control to manage business process
outsourcing relationships?" Our attempt was to go into organizations with a well-defined focus to collect
specific kinds of data systematically (Mintzberg 1979).
SITE SELECTION
We then proceeded to select our cases based on the concept of theoretical sampling so that we could best
answer the question posed (Glaser and Strauss 1967). Since our goal was to understand how controls are
structured and enforced, we needed cases where a) the vendor provided extensive access to individuals at
multiple levels who could describe control practices and how they are enforced; b) the client acknowledged the
relationship as successful; c) the client was willing to share perceptions as to efficacy of the control structure;
and d) the contract had been active long enough to demonstrate a degree of stability of processes. The cases we
studied satisfied all of these criteria. Our sites presented a rare opportunity for broad access to successful
outsourcing engagements. These cases were revelatory (Yin 1984: 48) or exemplar in the sense that we had an
opportunity to study something previously not researched, but not unique.
To strengthen the generalizability of this study, to produce enough data to suggest additions to control theories,
and to provide empirical grounding, four case studies were conducted (Yin 1984, Eisenhardt 1989).
Balancing the principles of similarity and variation, four business process outsourcing relationships in three
vendor organizations were identified. All three vendor firms are established BPO firms with dominant positions
in this sector on a global basis. Two are the BPO arms of premier IT companies with annual revenues in excess
of US$ 2 billion, and the third is purely a BPO provider with annual revenues close to US$ 1 billion. The
relationships differed along several dimensions, including outsourced function category, client firm industry
and employee headcount. They were, however, similar in terms of organization, revenue size and lifetime.
DATA SOURCES
The first vendor organization (VO1) is a global leader in IT outsourcing, providing services to more than 620
global customers across 53 countries. It employs 68,000 people from 42 different nations. This organization's
full-service integrated portfolio offers industry-focused solutions and spans the entire spectrum of IT
services: application development and maintenance, business process outsourcing, technology infrastructure,
consulting, package implementation, product engineering services, systems integration and R&D.
The second vendor organization (VO2) provides consulting and IT services to clients globally, acting as a partner to
conceptualize and realize technology-driven business transformation initiatives. With over 80,000 employees
worldwide, the company uses a low-risk Global Delivery Model (GDM) to accelerate schedules with a high
degree of time and cost predictability.
The third vendor organization (VO3) manages business processes for companies around the world. The
company combines process expertise, information technology and analytical capabilities with operational
insight and experience in diverse industries to provide a wide range of services using its global delivery
network of more than 25 locations in nine countries in multiple geographic regions with a headcount of over
29,000.
One of the world's leading financial firms, with services including wealth management, global investment
banking and securities, asset management, mutual funds and estate planning, is our first client organization
(CO1). This US$ 50 billion (approx.) company has offices in some 50 nations and employs over 70,000 people.
The company services about 3.5 million individual and about 140,000 corporate clients, as well as 3,000
financial institutions worldwide. The second client organization (CO2) is one of the world's largest airlines by revenue
passenger miles and total passengers transported. The UK's leading internet service provider, with more than 2
million subscribers, including more than 1.4 million on broadband, is our third client organization (CO3).
The business processes outsourced included two instances of a financial analytics process, which involved
updating various accounting and financial statements for various companies on the client terminal;
adjusting information received so as to facilitate company comparison; publishing data for end-user
reception; and correcting errors in reports as and when required. A voice queue for customer acquisition and
retention and a non-voice queue for billing and technical help for a UK ISP comprised the other outsourced
function. The last relationship studied involved the outsourcing of order processing and related information
technology services by a US airline. The functions outsourced included processing inbound end-user pre-sale
and post-sale inquiries and orders, and providing and maintaining the hardware and software supporting the
information systems for these processes.

The four cases are summarized below in Table 1.

[Table 1: summary of the four cases. * Size depicted in terms of employee strength (Emp) and approx. revenues (Rev) in US$ bn.]
DATA COLLECTION
Data collected included both qualitative and quantitative types. This synergistic combination has the advantages
described by Mintzberg (1979): "For while systematic data create the foundation for our theories, it is the
anecdotal data that enable us to do the building... We uncover all kinds of relationships in our hard data, but it
is only through the use of this soft data that we are able to explain them." Data collection involved a variety of
techniques, including unstructured and semi-structured interviews; documentation; archival records; direct
observations; published sources; physical artifacts such as manuals, forms, and project archives; and follow-up
emails and telephone interviews. The rationale was that the triangulation made possible by multiple data
collection methods provides stronger substantiation of constructs (Yin 1984).
It was critical, during data collection, to identify and elicit information about particular incidents of control.
Consequently, a specific interviewing technique was designed. Recognizing that control is purposive (Green
and Welsh 1988), three common goals of outsourced relationships were identified: (1) maximizing the speed
with which things are done from the customer's perspective; (2) doing things accurately at the first attempt; and (3)
optimizing efficiency and the cost per unit incurred by the vendor (ref.: COPC-2000 CSP Standard, June
2005). A series of interview questions focused on the mechanisms controllers used in order to meet each of these
three goals.
Another general set of questions sought to uncover additional critical incidents and how they were handled. To
uncover such incidents, respondents were asked to recall events that caused problems for which the
organization had no ready solution, or events that challenged existing norms and solutions, or anything
interpersonal that was unusual or tension-provoking and required some kind of response (Schein 1987, p. 120).
The interviews of the team/group leaders, operations managers and service delivery leaders/queue heads
generally lasted between one and two hours. The interview questions, primarily open-ended, were informed by
the literature on control (Kirsch 1997, Rustagi 2004) and designed to elicit data about the theoretical constructs of
interest.
DATA ANALYSIS
Data analysis frequently overlapped with data collection. An important means of analysis used by this
researcher was field notes. Van Maanen (1988) described field notes as an ongoing stream-of-consciousness
commentary about what is happening in the research, involving both observation and analysis.
This overlap also allowed us to take advantage of flexible data collection: adjustments to the interview protocol
to probe emergent themes and improve the resultant theory were thus made. Detailed case study write-ups for each
relationship were then made from these field notes, with the overall objective of becoming intimately
familiar with each case as a stand-alone entity and accelerating cross-case comparison (Eisenhardt 1989).
REACHING CLOSURE
In the subsequent step, we tabulated the focus of, and typical mechanisms used in, the four control modes from
our background literature study on control, and then, by comparing these with the descriptions of the control
mechanisms observed in the business process outsourcing relationships, attempted to classify the latter into the four control
categories.
ADDRESSING QUALITY OF RESEARCH
During data collection, construct validity and reliability issues were addressed. Construct validity, which is
concerned with establishing that the correct measures are used, can be increased by using multiple sources of
data (Yin 1984). In this research, multiple sources of data were tapped to ensure that responses converged.
Inconsistencies were resolved by consulting control-related documentation or other individuals for clarification.
A typical example of documentation referred to would be a metrics checklist used by the process audit /
compliance teams in vendor organizations.
Reliability is concerned with consistency (Yin 1984): if the study were repeated, would the same results follow?
One of the primary means of establishing reliability is to build a case study database, in order to be as explicit as
possible about the way in which observations were made and recorded (Kirk and Miller 1986, Yin 1984).
Several techniques were used in this study to improve reliability. First, all interviews were transcribed, yielding
approximately two hundred pages of transcripts. All transcriptions and related documentation were filed by case
and by individual. Second, the sources of all documents and data were recorded, though an assurance of
confidentiality was given. Finally, an individual write-up of each relationship was prepared, in which the data were
sorted and organized by construct. The purpose of this last step was to reduce the raw data to a more
manageable and meaningful structure.
RESULTS
Control, in this research, refers to a range of mechanisms to monitor and execute operations so as to increase the
probability of conformance to organizational (both client and vendor) requirements in the outsourcing
relationship. A brief description of these mechanisms, as observed in the case studies of the four discrete
business process outsourcing relationships, is given here.
1. Process metrics
The term "metrics" in the BPO sector denotes measurements that quantify results with respect to the
outsourcing relationship; they are the key focus of outsourcing contracts, variously referred to as service level
agreements (SLAs) or statements of work (SOWs). Various categories of metrics were observed, with different
objectives and target populations. These included people metrics (attrition, absenteeism targets), support
function metrics (technology metrics such as network uptime) and process metrics.
As the first two have been categorized under different headings in this research, we deal here only with the
last. Process metrics are used to measure the service provider's performance and determine whether the service
provider is meeting its commitments; they can be differentiated into (i) volume metrics, (ii) quality metrics, (iii)
responsiveness metrics and (iv) efficiency metrics.
Volume of work is typically the key sizing determinant of an outsourcing relationship, specifying the exact
level of effort to be provided by the service provider within the scope of the relationship, for instance, the number
of transactions processed per period. Any effort expended outside this scope will usually be separately
charged to the client, or will require re-negotiation of the terms of the contract.
Quality metrics are the most diverse of all the metrics, covering a wide range of work products, deliverables
and requirements, and seeking to measure the conformance of those items to certain specifications or standards.
These metrics include (i) counts or percentages that measure the errors in major deliverables, (ii) standards
compliance, (iii) service availability and (iv) service satisfaction, especially end-user satisfaction assessment.
Responsiveness metrics measure the amount of time that it takes for an outsourcer to handle a client request and
include (i) the elapsed time from the original receipt of a request until the time when it is
completely resolved, (ii) the time taken to acknowledge a request, and the accessibility of status information, and (iii) the
size of the backlog, typically expressed as the number of requests in the queue or the number of hours needed to
process the queue.
Efficiency metrics measure the engagement's effectiveness at providing services at a reasonable cost. Examples
include (i) cost/effort efficiency and (ii) cost/employee efficiency.
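
As a purely illustrative sketch, not drawn from the vendors studied, the snippet below shows how a reporting tool might compute one metric of each of the four categories from a log of handled transactions; the record layout, field names and figures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    received_hr: float   # hours from period start when the request arrived
    resolved_hr: float   # hours from period start when it was fully resolved
    error_free: bool     # passed quality review at the first attempt
    cost: float          # fully loaded cost to the vendor (US$)

def process_metrics(log):
    """One illustrative metric from each category named in the text."""
    n = len(log)
    return {
        "volume": n,                                                 # volume metric
        "first_time_accuracy": sum(t.error_free for t in log) / n,  # quality metric
        "avg_turnaround_hr": sum(t.resolved_hr - t.received_hr
                                 for t in log) / n,                 # responsiveness metric
        "cost_per_transaction": sum(t.cost for t in log) / n,       # efficiency metric
    }

if __name__ == "__main__":
    log = [Transaction(0.0, 1.5, True, 2.10),
           Transaction(0.5, 3.0, False, 2.60),
           Transaction(1.0, 1.8, True, 1.95)]
    print(process_metrics(log))
```
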
2. Transaction monitoring
A structured approach is used to monitor all types of end-user transactions (e.g., calls, faxes, mail, web-based,
e-mail), wherein all information given and received is monitored. The approach includes:
- Details of the performance attributes to be monitored for both accuracy and quality.
- A monitoring frequency.
- A monitoring method (e.g., side-by-side or remote).
- Specific performance thresholds and a clear, objective scoring system.
- A plan for communicating the findings of all transactions monitored to the individuals monitored, including both negative and positive feedback.
Another use of this mechanism is identifying, through Pareto analysis, the reasons contributing to customer
satisfaction or dissatisfaction at the agent, team and process levels.
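
A minimal sketch of such a Pareto analysis, assuming the dissatisfaction reasons have already been tagged during transaction monitoring (the categories and counts below are invented), might look like this:

```python
from collections import Counter

def pareto_head(reasons, cutoff=0.8):
    """Return the reasons that together account for `cutoff` of all
    complaints, most frequent first, with their cumulative share."""
    counts = Counter(reasons)
    total = sum(counts.values())
    head, cum = [], 0.0
    for reason, n in counts.most_common():
        cum += n / total
        head.append((reason, round(cum, 2)))
        if cum >= cutoff:
            break
    return head

# Hypothetical complaint tags gathered at the agent/team/process level
complaints = (["long hold time"] * 45 + ["wrong resolution"] * 25 +
              ["rude tone"] * 15 + ["repeat contact"] * 10 + ["other"] * 5)
print(pareto_head(complaints))
# [('long hold time', 0.45), ('wrong resolution', 0.7), ('rude tone', 0.85)]
```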

3. Staff skill control
Recruitment guidelines
Vendors have clear, written definitions of the minimum skills and knowledge required for customer-related
jobs, provided by the client. Recruiting and hiring approaches are geared to identify and successfully recruit
individuals meeting these minimum requirements.
Training
For all staff, training is provided for all the minimum skills and knowledge defined, unless staff have been hired
with these minimum skills and knowledge. The approach to training and development is formally defined and
includes a listing of the specific skills and knowledge required for each minimum skill, the personnel authorized
to provide the training, and a desired or required outcome that can be verified. The document detailing the
training approach requires a sign-off by the client before the training schedule can be initiated.
Staff testing
The verification process for all staff in customer-related jobs includes:
- Objective performance thresholds that are linked to the minimum requirements (including all minimum skills and knowledge) of the position.
- Documentation (e.g., tests, scores, dates) that can be audited.
- Action plans for staff who fail to demonstrate the required skills and knowledge.
- Annual re-verification of skills and knowledge.
- Re-verification of skills and knowledge following changes in programs, procedures, systems, etc.

4. Staff performance management
This is done at two levels:
Annual performance appraisal
The vendor employee's performance appraisal includes the findings from skills and knowledge verification
and from transaction monitoring. Employee evaluations are structured to support the outsourcing relationship's
business performance targets.
Continuous monitoring
Vendor employees who fail transaction monitoring are:
(i) Individually (one-on-one) coached on all transactions that do not meet target.
(ii) Monitored more frequently in order to determine if their performance is statistically below target.
(iii) Advised on corrective actions using a structured approach for identifying and resolving the root
cause(s) of poor performance. This action plan necessarily provides for removing employees who
repeatedly commit fatal errors from handling end-user transactions.

5. Financial controls
One or more of these financial controls were observed in our cases, embedded in the relationship's pricing
structure:
- Incentives given to the vendor as a percentage of the invoice amount, based on volumes handled at pre-specified quality levels.
- A percentage of the savings or revenue improvement from achieving or exceeding targets given to the vendor, commonly referred to as "gain-share". A variation of this mechanism is the risk/reward pricing method, wherein the provider risks losing money if the agreed-on improvements are not achieved (a simple computation is sketched after this list).
- Tying the provider's revenue to the level of improvement in the performance of the outsourced function based on business metrics. Commonly referred to as "value pricing" or "business benefit pricing", this arrangement generally involves changing the customer's business processes.
- Defining the potential for future business, motivating the provider to maintain a keen performance edge.
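
As an illustration of the gain-share and risk/reward mechanisms described above (the function, its parameters and all figures are hypothetical, not drawn from the contracts studied):

```python
def gain_share_payment(base_fee, baseline_cost, actual_cost,
                       share=0.3, at_risk=0.1):
    """Vendor payment under a gain-share / risk-reward clause: the vendor
    keeps `share` of any savings against the baseline cost, and forfeits
    up to `at_risk` of the base fee when the target is missed."""
    savings = baseline_cost - actual_cost
    if savings >= 0:
        return base_fee + share * savings            # reward: share of savings
    penalty = min(-savings * share, at_risk * base_fee)
    return base_fee - penalty                        # risk: capped fee reduction

# Hypothetical quarter: US$1.0m base fee, US$0.2m savings achieved
print(gain_share_payment(1_000_000, 5_000_000, 4_800_000))   # 1060000.0
```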

6. Process work-flow documentation
A major concern among outsourcing, and especially offshoring, clients is the possibility of disruption of the
outsourcing workflow; hence the insistence that vendors have documented policies and procedures for
allocating transactions in the most likely scenarios, including:
a) Normal operations with forecasted transaction levels.
b) Abnormal conditions, which may arise for a number of reasons including:
i) Transaction volumes significantly above or below forecasted levels.
ii) Reduced availability, slowness or outage of the site, telecommunications, or systems.
iii) Staffing levels well above or below scheduled levels (e.g., due to bad weather).
This documentation has three major components: (i) a methodology for assessing business risk impact, (ii)
back-up and recovery strategies, and (iii) the key personnel and supplies required.
7. Task work-flow documentation
The objective of this control mechanism is to control variation in the performance of the individual tasks
comprising the outsourced function across all shifts and work teams. Diagrammatic flow-charts detailing
each step of the work to be carried out are made during process transition itself and for every change in working
practice during the outsourcing lifetime. Commonly referred to as SIPOC (supplier-input-process-output-customer)
charts, these are made for various levels, thus assuming a tree-like structure (an example of a typical
SIPOC tree is given in the appendices). These flow-charts may be made by either party but necessarily require
sign-offs from both, and very stringent procedures exist for making any changes to them.
8. Regular reporting
This mechanism refers to the vendor reporting performance information as required by clients. Commonly
referred to as daily, weekly, or monthly reports, the key areas covered by these reports are:
- tracking the timeliness of the implementation milestones;
- compliance accuracy regarding appropriate legal requirements;
- status of adherence to targets for process metrics.
A key observation here was the level of automation involved. Almost all the regular reports were automatically
generated and sent to the client's email id by the software tools in the information system.
9. Regular meetings
Both parties to the outsourcing relationship initiated frequent meetings or conference calls to discuss
performance status, issues and resolutions. Though meetings mostly followed a pre-decided structure and schedule,
impromptu meetings to resolve one-off escalations or incidents were not uncommon. During these meetings
significant feedback was provided to the vendor team regarding its performance.
10. Site visits
Both parties made arrangements for team members at managerial levels to travel to each other's sites, not
just for process training or transition but also to engender camaraderie and cooperation. This was considered a
necessity during the initial stage of the relationship and at least annually thereafter. As one interviewee observed, "I don't
think there's any better measure of the relationship than seeing the <org. name withheld> guy and his peer
sitting side by side at a meeting. You can tell from their body language that they are two members of the same
team." Besides the client outsourcing manager or vendor manager, who is often located at the outsourcing site,
other members of the client team responsible for, or users of, the outcome of the outsourced process also make
regular site visits. Often at these visits, client team members walk around the vendor site to informally gather
first-hand information about the tasks, activities, progress and issues in the outsourcing arrangement.
11. Business reviews
In all four cases this was a quarterly event. While reviews were done on a weekly or monthly basis at different
hierarchical levels of the client-vendor governance structure, the quarterly business review (QBR) was the high point of the relationship.
Reviewing performance against the required targets and plans, discussing anticipated program changes
and communicating business strategies were the objectives of this exercise, which saw top management
representation from both sides.
A description from one of the interviewees summed up this control mechanism: "At our quarterly business
review sessions, we spend an entire day just going through the numbers. In addition, part of the evaluation is
going through an analysis of a healthy relationship -- what's working and what's not working. We are very, very
candid with the executives as well as with the operations people at <org. name withheld> about what our
thoughts and views are, and we invite them to be just as candid back to us."

12. Process audits
All three vendor sites we studied had a separate entity (outside the operations set-up), variously labeled
"process excellence", "compliance" or "quality control" in their governance structures, which was aimed at
taking an unbiased external view of the operational capabilities. A major deliverable of this team was a
comprehensive annual compliance audit of each process metric's performance relative to each of
the requirements in the contract and related documents, as well as industry benchmarks. Another objective of
this process is the replication of best practices across the organization.
13. Process milestones
Each business process outsourcing relationship goes through certain stages viz. transition, pilot, stabilization,
ramp-up and continuous improvement, the last two being iterative. Each stage has an attached time frame,
adherence to which controls the progress of the relationship.
14. Staffing and scheduling
Aligning staff capacity with historical and forecasted transaction arrival patterns is a key service
deliverable of the vendor team. The mechanism used for this is a staffing plan that minimizes the variation between
arrival patterns and staff capacity, made using workforce management (WFM) software such as IEX and Aspect.
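
A toy version of such a staffing plan is sketched below. It uses a plain workload calculation with a target occupancy; commercial WFM tools such as IEX or Aspect employ considerably more sophisticated (e.g., Erlang-based) models, so this is only a sketch of the idea, with invented figures.

```python
import math

def agents_required(forecast_calls, aht_sec=300, interval_sec=1800,
                    occupancy=0.85):
    """Agents needed per interval: offered workload (arrivals x average
    handle time) divided by the productive time per agent per interval."""
    plan = []
    for calls in forecast_calls:
        workload = calls * aht_sec / interval_sec    # Erlangs of offered work
        plan.append(math.ceil(workload / occupancy))
    return plan

# Hypothetical morning ramp-up: calls forecast per half-hour interval
print(agents_required([40, 80, 150, 210]))   # [8, 16, 30, 42]
```
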
15. Shrinkage management
Shrinkage refers to the amount of scheduled time that is not realized because of absenteeism, sick/late time,
training, coaching, team meetings, and similar activities not included in the work schedule. Vendors measure this time by
staff category (e.g., by job type, organization level, etc.) at the entity level, and at the program level for staff in
customer-related jobs, to estimate the costs and the impact of each category on service, quality, and end-user satisfaction. The
tactical management has targets established for minimizing shrinkage based on an understanding of these
implications, other business requirements (e.g., internal transfers), and labor conditions. Attrition (employees
being fired or resigning) levels are also tracked and minimized to ensure consistency in the service quality offered
to end users.
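
For illustration, shrinkage is commonly expressed as the fraction of scheduled time that is lost; a minimal computation (the categories and figures are invented) follows:

```python
def shrinkage(scheduled_hr, lost_hr):
    """Shrinkage = unrealized scheduled time / total scheduled time."""
    return sum(lost_hr.values()) / scheduled_hr

# Hypothetical week for a 20-agent team scheduled for 800 hours in total
lost = {"absenteeism": 40, "sick/late": 16, "training": 24,
        "coaching": 8, "team meetings": 12}
print(f"{shrinkage(800, lost):.1%}")   # 12.5%
```
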
16. Ensuring data security
Protection of patented or end-user details and other sensitive and proprietary data is a key component of
outsourcing service quality. Hence data security is ensured by (i) having a documented security policy that
defines how access to sensitive and proprietary enterprise data is to be protected; (ii) implementing a number
of mechanisms, such as using proprietary lines and encryption to transmit data, making removable computer
drives, email capabilities or printers unavailable to non-managerial employees, disallowing visitor tours during
working hours, and having armed guards protect the work area; (iii) physically separating the work areas and
employees serving different clients; and (iv) conducting periodic checks to identify and prevent opportunities for
security breaches.
17. Free gifts
Vendor employees receive free client merchandise, and the workspace is decorated with colorful client
memorabilia. This mechanism is aimed at creating belonging through direct links to the client organization's
culture. As a senior manager from the vendor side mentioned, these freebies create a sense of "dual citizenship"
so that vendor employees remain deeply committed to their new team.
18. Special events
Clients regularly fund and/or organize special events to mark important milestones in the organization's or
relationship's growth, where the outsource vendor employees (including the ground-level workforce) are given a
chance to interact with and understand the employee culture of the client. For instance, a US-based ISP in one
of the sites of our study organized a five-day workshop for the top ten performers of each outsourced process in
both its captive and outsourced centers in Goa. Such events are aimed at fostering clan control.
19. Certifications
Both the vendors we studied had also acquired external quality assurance / performance management
certifications such as BS 7799, ISO 9001:2000, COPC and eSCM, and used their annual audits as a measure of
the efficacy of the control systems set up.


All of the above lead to the matrix shown in Table 2, which classifies the mechanisms identified according to
the control modes derived from our literature review:
Table 2

Outcome control
- Focus of control (identified from previous literature): outputs, both final and interim, of a process (Kirsch 1997).
- Typical mechanisms (identified from previous literature): performance targets (Snell 1992, Eisenhardt 1985); interim milestones for a particular activity (Henderson and Lee 1992); specific project goals (Kirsch 1996).
- Control mechanisms observed in case study: business reviews; financial controls; process metrics; process milestones.

Behavior control
- Focus of control: the process, or the means to goal achievement, and the controllee's behaviors (Jaworski and MacInnis 1989, Kirsch 1996).
- Typical mechanisms: specification of appropriate behaviors (e.g., a development methodology); evaluation of behavior (Kirsch 1997), such as direct observation (e.g., placing client personnel on vendor premises) and other information systems (e.g., weekly progress reports, periodic meetings, or conference calls) (Eisenhardt 1985).
- Control mechanisms observed in case study: ensuring data security; process work-flow documentation; regular meetings; regular reporting; site visits; staff skill control; staffing and scheduling; task work-flow documentation; transaction monitoring.

Clan control
- Focus of control: promulgating common values, beliefs, and philosophy within a group of individuals who share a set of common goals (Kirsch 1996).
- Typical mechanisms: carefully selecting and socializing members (Boland 1979, Ouchi 1979, Orlikowski 1991); shared experiences, rituals, and ceremonies (Kirsch 1996, Ouchi 1980).
- Control mechanisms observed in case study: business reviews; free gifts; regular meetings; site visits; special events; staff skill control.

Self control
- Focus of control: the controllee determines both the goals and the actions through which they should be achieved (Henderson and Lee 1992).
- Typical mechanisms: the controllee sets standards for its own behavior, sets its own project milestones and monitors progress against these milestones (Kirsch 1997).
- Control mechanisms observed in case study: certifications; process audits; shrinkage management; staff performance management; staff skill control; staffing and scheduling; transaction monitoring.

The above table throws up two significant conclusions. One is the multiplicity of mechanisms used to
implement the various control modes: each mode is effected through more than one mechanism. For instance,
business reviews, financial controls, process metrics and process milestones are all used to implement outcome
control. Conversely, the same mechanism may be used to implement more than one mode. Staff skill control defines
acceptable behaviors as prescribed by the client, restricts the entry of employees into the relationship and is an
internal control mechanism for the vendor. Hence, this mechanism forms part of behavior, clan and self-control
modes.
The other key finding is the multiplicity of controllers of the relationship. Not just clients and vendors are
involved in the structure, but also end-users (through user satisfaction measurement), external quality agencies
(through provision of certification) and industry bodies (through insistence on particular levels of data
protection, maintained to protect the image of the sector).
CONCLUSIONS
This study makes two substantive contributions to the outsourcing literature. The first is an articulation of
the mechanisms used by stakeholders to implement behavior, clan, outcome and self-control in business
process outsourcing relationships. This articulation is important because it speaks to the issue of how multiple
mechanisms constitute a portfolio of control modes. In addition, it aids future research by providing guidance
for operationalizing modes of control. The second contribution is an understanding of the critical roles played
by different stakeholders in controlling business process outsourcing relationships. The implication for practice is
that managers need to cultivate a broad view of control. Successful BPO clients need to go beyond
direct, constraining control mechanisms and incorporate indirect and particularly
enabling controls to give their provider the incentive to deliver high performance.
Avenues for future research include seeking clarity on how much control is sufficient and on what decides
the appropriateness of a particular mechanism. Secondly, business process outsourcing relationships face a
complex, dynamic environment, and the multiplicity of controllers and control modes contributes to that
complexity. Future work could examine how various stakeholders structure their control portfolios to manage
this complexity.
REFERENCES
Academic literature
1. Anthony, R. N. (1965). Planning and Control Systems: A Framework for Analysis, Harvard Business School Press, Boston.
2. Arnett, K. P. and Jones, M. C. (1994). "Firms that Choose Outsourcing: A Profile," Information & Management, 26 (4), pp. 179-188.
3. Benko, C. (1992). "If Information System Outsourcing is the Solution, What is the Problem?," Journal of Systems Management, November 1992, pp. 32-35.
4. Boland, R. J. (1979). "Control, Causality and Information System Requirements," Accounting, Organizations and Society, pp. 259-272.
5. Buchowicz, B. S. (1991). "A Process Model of Make vs. Buy Decision Making: The Case of Manufacturing Software," IEEE Transactions on Engineering Management, 38 (1), pp. 24-32.
6. Buck-Lew, M. (1992). "To Outsource or Not?," International Journal of Information Management, 12 (1), pp. 3-20.
7. Carmel, E. (1999). Global Software Teams, Prentice-Hall, Englewood Cliffs, NJ.
8. Child, J. (1987). "Information Technology, Organization, and the Response to Strategic Challenges," California Management Review, 30 (1), pp. 33-50.
9. Choudhury, V. and Sabherwal, R. (2003). "Portfolios of Control in Outsourced Software Development Projects," Information Systems Research, 14 (3), pp. 291-314.
10. Chua, W. F. (1988). "Interpretive Sociology and Management Accounting Research: A Critical Review," Accounting, Auditing & Accountability Journal, 1, pp. 59-79.
11. Clark, T. D., Jr., Zmud, R. W. and McCray, G. E. (1995). "The Outsourcing of Information Services: Transforming the Nature of Business in the Information Industry," Journal of Information Technology, 10, pp. 221-237.
12. Coase, R. H. (1937). "The Nature of the Firm," Economica, 4, November, pp. 386-405.
13. Fitzgerald, G. and Willcocks, L. (1994). "Contracts and Partnerships in the Outsourcing of IT," Proceedings of the 15th International Conference on Information Systems, Vancouver, Canada.
14. Flamholtz, E. G. (1983). "Accounting, Budgeting and Control Systems in their Organizational Context: Theoretical and Empirical Perspectives," Accounting, Organizations and Society, 8, pp. 153-169.
15. Gewald, H., Wüllenweber, K. and Weitzel, T. (2006). "The Influence of Perceived Risks on Banking Managers' Intention to Outsource Business Processes: A Study of the German Banking and Finance Industry," Journal of Electronic Commerce Research, 7 (2), pp. 78-96.
16. Glaser, B. and Strauss, A. (1967). The Discovery of Grounded Theory, Aldine, Chicago.
17. Green, S. G. and Welsh, M. A. (1988). "Cybernetics and Dependence: Reframing the Control Concept," Academy of Management Review, 13 (2), pp. 287-301.
18. Grover, V., Cheon, M. J. and Teng, J. T. C. (1996). "The Effect of Service Quality and Partnership on the Outsourcing of Information Systems Functions," Journal of Management Information Systems, 12 (4), pp. 89-116.
19. Gupta, U. G. and Gupta, A. (1992). "Outsourcing the IS Function: Is It Necessary for Your Organization?," Information Systems Management, Summer 1992, pp. 44-50.
20. Henderson, J. C. and Lee, S. (1992). "Managing I/S Design Teams: A Control Theories Perspective," Management Science, 38 (6), pp. 757-777.
21. Hofstede, G. (1981). "Management Control of Public and Not-for-Profit Activities," Accounting, Organizations and Society, 6 (3), pp. 193-211.
22. Hopwood, A. G. (1983). "On Trying to Study Accounting in the Contexts in which it Operates," Accounting, Organizations and Society, 8, pp. 287-305.
23. Hopwood, A. G. (1996). "Looking Across Rather Than Up and Down: On the Need to Explore the Lateral Processing of Information," Accounting, Organizations and Society, 21, pp. 589-590.
24. Kalakota, R. and Robinson, M. (2004). Offshore Outsourcing: Business Models, ROI and Best Practices, Mivar Press.
25. Kern, T., Lacity, M. and Willcocks, L. (2001). Application Service Provision, Prentice Hall, Englewood Cliffs.
26. Kern, T., Lacity, M. C. and Willcocks, L. (2002). Netsourcing: Renting Business Applications and Services Over a Network, Prentice-Hall, Upper Saddle River, NJ.
27. Kim, J. S. (1984). "Effect of Behavior plus Outcome Goal Setting and Feedback on Employee Satisfaction and Performance," Academy of Management Journal, 27, pp. 139-149.
28. Klepper, R. (1995). "The Management of Partnering Development in I/S Outsourcing," Journal of Information Technology, 10, pp. 249-258.
29. Klepper, R. and Hoffmann, N. (2000). "Assimilation of New Information Technology and Organizational Culture: A Case Study," Wirtschaftsinformatik, 42 (4), pp. 339-346.
30. Levina, N. and Ross, J. (2003). "From the Vendor's Perspective: Exploring the Value Proposition in IT Outsourcing," MIS Quarterly, 27 (3), pp. 331-364.
31. Loh, L. and Venkatraman, N. (1995). "An Empirical Study of Information Technology Outsourcing: Benefits, Risks, and Performance Implications," Proceedings of the 16th International Conference on Information Systems, Amsterdam, The Netherlands, pp. 277-288.



INVENTORY MODELING FOR DETERIORATING ITEMS: FUNDAMENTAL VIEW
Hetal R. Patel
Assistant Professor, Mathematics Department, U. V. Patel College of Engineering, Ganpat University, Gujarat
ABSTRACT
This article derives an inventory model for deteriorating items with known, continuous demand and no
shortages allowed. The optimal ordering quantity and cycle time are computed, and three numerical examples are
solved. Moreover, the derived model is subjected to sensitivity analysis, performed by changing each of the
parameters by -75%, -50%, -25%, +25%, +50% and +75%, taking one parameter at a time, to understand the
behavior of the model.
Keywords: EOQ, deteriorating, demand, cycle time, shortages
INTRODUCTION
General trends of globalization and fierce competition in today's business world force businesses, irrespective
of size, to strive for more effective methods of handling inventory. In fact, inventory modeling is one
of the most developed fields of operations research. Monks (1987) describes inventory as idle resources that possess
economic value. An inventory is a list of items held in stock (Waters, 2003, p. 4). Among the various product
characteristics, perishability is an important aspect of inventory control.
Perishable goods are broadly classified under (a) amelioration and (b) deterioration. Obsolescence is a loss of
value of a product due to the arrival of a new and better product (Goyal and Giri, 2001), while deterioration is decay,
spoilage, or loss of utility: deteriorating items have a finite shelf life and start to deteriorate once they are produced (Shah and Shukla,
2009). Decay, change or spoilage that prevents items from being used for their original purpose is
usually termed deterioration (Moon, Giri and Ko, 2005); food items, pharmaceuticals,
photographic film, chemicals and radioactive substances are typical deteriorating products.
In practice, stock-related decisions in inventory management for perishable goods are complex: they are
influenced by the deterioration rate, the allied costs, and the backlogging that results from deterioration (Abad, 2003).
Padmanabhan and Vrat (1995) considered an EOQ model for perishable items with stock-dependent demand.
Furthermore, Chang and Dye (1999) developed an inventory model in which the proportion of customers who
accept backlogging is the reciprocal of a linear function of the waiting time.
In the inventory literature, many studies consider deterioration, and several scholars have classified the
inventory research on deteriorating products into two major categories: (a)
decay models and (b) finite-lifetime models (Ghare and Schrader, 1963; Raafat, 1991; Weatherford et al., 1992;
Liu et al., 1999; Panda et al., 2009). Decay models consider products which deteriorate from the very
beginning, while finite-lifetime models consider products which start to deteriorate after a certain time.
In this paper, an inventory model is developed for deteriorating items. The model assumes
constant demand and zero lead time, so that replenishment arrives exactly when the available inventory
is exhausted. The model differs from the
basic inventory model on only one point: inventory is assumed to deteriorate at a constant rate. In addition,
the model assumes a situation wherein no demand goes unsatisfied; that is, shortages are not allowed.
The next section covers the assumptions and notations, followed by the mathematical formulation, the
numerical analysis, and finally the sensitivity analysis.
ASSUMPTIONS AND NOTATIONS
In the inventory modeling literature there is no standard set of notation, which creates significant
difficulties; hence this study uses notation chosen to make it easy to remember what the different symbols
represent.
Notations
D : the demand per year
s : the unit selling price
A : the fixed cost of placing and receiving an order
P : the cost of purchasing a unit
h : the cost of holding a unit in inventory for a year, expressed as a fraction of the purchase value
θ : the rate of deterioration
Q : the order quantity
T : the time between orders, i.e. the length of an order cycle (replenishment time)
I(t) : the inventory level at time t; I(0) is the maximum inventory level when Q is the order quantity, from which the average inventory level over the year follows
ASSUMPTIONS
The essence of the assumptions is to make the complexity of the inventory system amenable to mathematical
modeling. The assumptions are selected to give an accurate approximation of a real-life inventory system for the
product. The following assumptions are used in developing the model:
(1) The inventory system under consideration deals with a single item. This assumption ensures that a single
item is isolated from other items, thus preventing item interdependencies.
(2) The demand rate is known, constant and continuous.
(3) The time horizon is infinite and a typical cycle length T is considered for the planning schedule.
(4) A constant fraction θ of the on-hand inventory deteriorates per unit time; θ is the rate of deterioration.
(5) The lead time is zero.
(6) Replenishment is instantaneous.
(7) Deteriorated units are not repaired or replaced during a given cycle.
(8) The costs involved (such as holding cost, ordering cost, etc.) remain constant over time.
(9) Shortages are not allowed.
Motivated by the ongoing research on inventory models for deteriorating items, the purpose of this study is to
provide an optimal inventory policy for the EOQ model with constant demand and no shortages.
MATHEMATICAL FORMULATION
A typical one-time inventory cycle is developed from the following computations. Let I(t) be the inventory
level of the system at time t, 0 ≤ t ≤ T. The inventory level decreases owing to demand as well as
deterioration, so the change of the inventory level over the period [0, T] is represented by the differential
equation

    dI(t)/dt + θ I(t) = −D,  0 ≤ t ≤ T.    (1)

The solution of this first-order linear differential equation is

    I(t) = −D/θ + C e^{−θt},    (2)

where C is a constant of integration. The boundary condition I(T) = 0 (the stock is exhausted at the end of
the cycle) gives C = (D/θ) e^{θT}. Hence the inventory level at any instant of time t is

    I(t) = (D/θ) [ e^{θ(T−t)} − 1 ],  0 ≤ t ≤ T.    (3)

The initial condition I(0) = Q then links the order quantity and the cycle time:

    Q = (D/θ) [ e^{θT} − 1 ].    (4)

In order to calculate the economic order quantity, the total cost is computed first. For the EOQ model of
deteriorating products, the following costs per cycle are considered:
(1) Ordering cost: A.
(2) Purchase cost: P Q.
(3) Holding cost: hP ∫_0^T I(t) dt = (hPD/θ) [ (e^{θT} − 1)/θ − T ].

Hence the total cost of the inventory system per unit time is given by
Total cost = (Ordering cost + Purchase cost + Holding cost)/T; that is,

    TC(Q) = (1/T) { A + PQ + (hPD/θ) [ (e^{θT} − 1)/θ − T ] },    (5)

with T related to Q through (4). Differentiating this expression with respect to Q gives the stationarity
condition

    dTC/dQ = 0.    (6)

To check whether the cost is a minimum, the second-order derivative with respect to Q is taken; it is
positive,

    d²TC/dQ² > 0,    (7)

so the cost function is minimal at the stationary point. The optimal order quantity Q* is therefore determined
by solving (6), and the optimal cycle time T* follows from (4).
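The derivation can be checked symbolically. The following sketch (Python with SymPy; the symbol names are ours, not the paper's) verifies that (3) satisfies the differential equation (1) and the boundary condition, and recovers the relation (4):

import sympy as sp

# Symbolic check of equations (1)-(4); a sketch with our own symbol names.
t, T, D, theta = sp.symbols('t T D theta', positive=True)
I = (D / theta) * (sp.exp(theta * (T - t)) - 1)    # candidate solution (3)
print(sp.simplify(sp.diff(I, t) + theta * I + D))  # 0, so (3) satisfies (1)
print(sp.simplify(I.subs(t, T)))                   # 0, the boundary condition I(T) = 0
print(sp.expand(I.subs(t, 0)))                     # D*exp(T*theta)/theta - D/theta, i.e. Q of (4)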

NUMERICAL ANALYSIS
To cover different scenarios, three examples are solved to demonstrate the application of the approach. The
variable part is the deterioration rate θ: Example 1 is based on θ = 0.5, Example 2 on θ = 0.05, and
Example 3 on θ = 0.001.

Example 1: For the numerical experiment, θ = 0.5 is considered. Let the values of the other parameters of the
inventory model be: fixed cost per order A = $300; annual demand D = 700 units; unit cost of purchasing an
item P = $5; annual holding cost per dollar value h = 35% of purchase value. Under the given parameter
values and according to equations (6) and (4), the respective optimal values of Q* and T* are obtained:
Q* = 623.89 units and T* = 0.737 years.

Example 2: For the numerical experiment, θ = 0.05 is considered, with the other parameter values as in
Example 1. The optimal values obtained are:
Q* = 559.03 units and T* = 0.783 years.

Example 3: For the numerical experiment, θ = 0.001 is considered, with the other parameter values as in
Example 1. The optimal values obtained are:
Q* = 551.89 units and T* = 0.788 years.
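The optimization behind these examples can be sketched numerically. The fragment below (Python) assumes the cost structure reconstructed in equation (5) and searches for the cycle time minimizing the cost rate; since the paper's exact cost expression is only partially recoverable from the source, its output should be read as illustrative rather than as a reproduction of the figures above:

import math

# Minimal numerical sketch: minimize the cost rate of equation (5) over T
# and recover Q from equation (4). Parameter values follow the examples.
A, D, P, h = 300.0, 700.0, 5.0, 0.35

def cost_rate(T, theta):
    Q = (D / theta) * (math.exp(theta * T) - 1.0)                    # equation (4)
    holding = h * P * (D / theta) * ((math.exp(theta * T) - 1.0) / theta - T)
    return (A + P * Q + holding) / T                                 # equation (5)

for theta in (0.5, 0.05, 0.001):
    # brute-force search over candidate cycle times (a sketch, not production code)
    T_star = min((i / 10000.0 for i in range(1, 30000)),
                 key=lambda T: cost_rate(T, theta))
    Q_star = (D / theta) * (math.exp(theta * T_star) - 1.0)
    print(f"theta={theta}: Q* ~ {Q_star:.2f} units, T* ~ {T_star:.3f} years")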

SENSITIVITY ANALYSIS
The next step is to investigate the effect of changes in the parameter values A, D, P and θ on the model
decision variables, the economic order quantity (Q*) and the optimal cycle time (T*). (The holding cost h is
tied to the purchase price P, so it is not varied separately.) For each case, all other parameter values are
kept constant. The sensitivity analysis is performed by changing each of the parameters by -75%, -50%, -25%,
+25%, +50% and +75%, taking one parameter at a time. The results are presented in Table 5.1.
Table 5.1: Effects of changes in parameter values in the inventory model (θ = 0.5)

Parameter | PCPV (%) | Q*      | T*
A         | -75      | 376.32  | 0.476
A         | -50      | 483.46  | 0.593
A         | -25      | 560.91  | 0.674
A         | +25      | 677.92  | 0.789
A         | +50      | 726.00  | 0.840
A         | +75      | 769.46  | 0.876
D         | -75      | 264.10  | 1.120
D         | -50      | 404.67  | 0.912
D         | -25      | 520.94  | 0.806
D         | +25      | 718.04  | 0.687
D         | +50      | 805.59  | 0.649
D         | +75      | 888.30  | 0.618
P         | -75      | 1056.25 | 1.12
P         | -50      | 809.15  | 0.91
P         | -25      | 694.6   | 0.80
P         | +25      | 574.43  | 0.687
P         | +50      | 537.14  | 0.65
P         | +75      | 507.63  | 0.62
θ         | -75      | 569.81  | 0.77
θ         | -50      | 587.95  | 0.76
θ         | -25      | 605.96  | 0.75
θ         | +25      | 641.71  | 0.73
θ         | +50      | 659.36  | 0.71
θ         | +75      | 676.80  | 0.70

Based on the results of Table 5.1, the conclusions are briefly stated as follows:
(1) It is observed from Table 5.1 that an increase in ordering cost A increases the optimal quantity Q*; an
increase in ordering cost also increases the optimal cycle time T*. The results show that the quantity
function is very sensitive to the ordering cost when it is decreasing and less sensitive when it is increasing.
Simply put, a small negative change in A results in a large change in Q*, while a small positive change in A
results in a comparatively small change in Q*. The cycle time is likewise sensitive to changes when A
decreases and less sensitive when A increases.
(2) Similarly, as D increases, the economic order quantity Q* increases and the cycle time T* decreases. When
the demand is reduced, the change in Q* is very large, showing higher sensitivity; Q* is less sensitive when
the demand is increased. The cycle time T* is sensitive to changes when D decreases and less sensitive when
D increases.
(3) Considering the dependency of the holding cost per unit on the unit price of the item, the purchase price is
considered for the sensitivity analysis. It is observed that an increase in purchase price results in a decrease
in Q* and also a decrease in T*. A small negative change in P results in a large change in Q*, while a small
positive change in P results in a comparatively small change in Q*; so Q* is highly sensitive when P
decreases and less sensitive when P increases. A similar pattern is displayed by T*.
(4) The table also reflects that with an increase in the rate of deterioration (θ) there is an increase in Q* and a
decrease in T*. The proportional change in Q* and T* is no larger than the proportional change in θ; thus the
optimal quantity and cycle time functions are less sensitive to changes in the rate of deterioration (θ).
Table 5.2: Effects of changes in parameter values in the inventory model (θ = 0.05)

Parameter | PCPV (%) | Q*     | T*
A         | -75      | 350.46 | 0.49
A         | -50      | 442.51 | 0.62
A         | -25      | 507.31 | 0.71
A         | +25      | 602.75 | 0.84
A         | +50      | 641.13 | 0.89
A         | +75      | 675.46 | 0.94
D         | -75      | 223.50 | 1.24
D         | -50      | 353.35 | 0.98
D         | -25      | 462.00 | 0.86
D         | +25      | 648.00 | 0.73
D         | +50      | 731.19 | 0.68
D         | +75      | 809.96 | 0.65
P         | -75      | 894.00 | 1.24
P         | -50      | 706.64 | 0.98
P         | -25      | 616.25 | 0.86
P         | +25      | 518.50 | 0.73
P         | +50      | 487.60 | 0.68
P         | +75      | 462.88 | 0.65
θ         | -75      | 553.53 | 0.79
θ         | -50      | 555.43 | 0.785
θ         | -25      | 557.24 | 0.784
θ         | +25      | 560.87 | 0.781
θ         | +50      | 562.68 | 0.780
θ         | +75      | 564.49 | 0.779

Based on the results of Table 5.2, the conclusions are briefly stated as follows:
(1) It is observed from Table 5.2 that an increase in ordering cost A increases the optimal quantity Q*, and also
increases the optimal cycle time T*. The quantity function is very sensitive to the ordering cost when it is
decreasing and less sensitive when it is increasing: a small negative change in A results in a large change in
Q*, while Q* is less sensitive to a positive change in A. The pattern is the same for the cycle time function.
(2) Similarly, as D increases, the economic order quantity Q* increases and the cycle time T* decreases. When
the demand is reduced, the change in Q* is very large, showing higher sensitivity; Q* is less sensitive when
the demand is increased. The cycle time T* is sensitive to changes when D decreases and less sensitive when
D increases.
(3) It is observed that an increase in purchase price results in a decrease in Q* and also a decrease in T*. A
small negative change in P results in a large change in Q*, while a small positive change in P results in a
comparatively small change in Q*; so Q* is highly sensitive when P decreases and less sensitive when P
increases. A similar pattern is displayed by T*.
(4) The table also reflects that with an increase in the rate of deterioration (θ) there is only a very slight increase
in Q* and a slight decrease in T*. The proportional change in Q* and T* is much smaller than the
proportional change in θ, indicating an insensitive nature.
Table 5.3: Effects of changes in parameter values in the inventory model (θ = 0.001)

Parameter | PCPV (%) | Q*     | T*
A         | -75      | 347.68 | 0.50
A         | -50      | 438.03 | 0.63
A         | -25      | 501.45 | 0.72
A         | +25      | 594.5  | 0.85
A         | +50      | 631.82 | 0.90
A         | +75      | 665.15 | 0.95
D         | -75      | 219.1  | 1.25
D         | -50      | 347.71 | 0.99
D         | -25      | 455.58 | 0.867
D         | +25      | 640.44 | 0.73
D         | +50      | 723.21 | 0.688
D         | +75      | 801.4  | 0.65
P         | -75      | 876.22 | 1.25
P         | -50      | 695.37 | 0.99
P         | -25      | 607.45 | 0.867
P         | +25      | 512.4  | 0.73
P         | +50      | 482.14 | 0.688
P         | +75      | 457.98 | 0.65
θ         | -75      | 551.89 | 0.7883
θ         | -50      | 551.89 | 0.7882
θ         | -25      | 551.89 | 0.7881
θ         | +25      | 551.89 | 0.7880
θ         | +50      | 552.00 | 0.7880
θ         | +75      | 552.00 | 0.7880

Based on the results of Table 5.3, the conclusions are briefly stated as follows:
(1) Table 5.3 reveals that an increase in ordering cost A increases the optimal quantity Q*, and also increases the
optimal cycle time T*. The quantity function is very sensitive to the ordering cost when it is decreasing and
less sensitive when it is increasing: a small negative change in A results in a large change in Q*, while Q* is
less sensitive to a positive change in A. The pattern is the same for the cycle time function.
(2) Similarly, as D increases, the economic order quantity Q* increases and the cycle time T* decreases.
Whether the demand is increased or reduced, the change in Q* is very large, showing high sensitivity. The
cycle time T* is sensitive to changes when D decreases and less sensitive when D increases.
(3) It is observed that an increase in purchase price results in a decrease in Q* and also a decrease in T*. A
small negative change in P results in a large change in Q*, while a small positive change in P results in a
comparatively small change in Q*; so Q* is highly sensitive when P decreases and less sensitive when P
increases. A similar pattern is displayed by T*.
(4) The table also reflects that with an increase in the rate of deterioration (θ) there is a negligible increase in Q*
and a negligible decrease in T*, reflecting practically no effect of changes in θ. The proportional change in
Q* and T* is far smaller than the proportional change in θ, indicating an almost insensitive relationship.
CONCLUSION
The inventory model for deteriorating products with constant demand is studied here. It is meant to help the
decision maker plan the stocking of inventory. The sensitivity analysis shows that the model is sensitive to
changes in demand and ordering cost, and less sensitive to a very low rate of deterioration (e.g. θ = 0.001).
Utilization of this model offers useful insights into the optimal quantity, which is sensitive to the inventory
cost elements used in this study.
REFERENCES
1. Abad, P. L. (2003), Optimal pricing and lot-sizing under conditions of perishability, finite production and partial backordering and lost sale, European Journal of Operational Research, 144: 677-685.
2. Chang, H. J. and Dye, C. Y. (1999), An EOQ model for deteriorating items with time varying demand and partial backlogging, Journal of the Operational Research Society, 50: 1176-1182.
3. Ghare, P. M. and Schrader, G. P. (1963), A model for an exponentially decaying inventory, Journal of Industrial Engineering, 14(5).
4. Goyal, S. K. and Giri, B. C. (2001), Recent trends in modelling of deteriorating inventory, European Journal of Operational Research, 134(1): 1-16.
5. Liu, L. and Shi, D. (1999), An (s, S) model for inventory with exponential lifetimes and renewal demands, Naval Research Logistics, 46: 39-56.
6. Monks, J. G. (1987), Operations Management, 3rd Edn., McGraw-Hill Book Co., New York, pp. 236-334.
7. Moon, I., Giri, B. C. and Ko, B. (2005), Economic order quantity models for ameliorating/deteriorating items under inflation and time discounting, European Journal of Operational Research, 162: 773-785.
8. Padmanabhan, G. and Vrat, P. (1995), EOQ models for perishable items under stock dependent selling rate, European Journal of Operational Research, 86: 281-292.
9. Panda, S., Saha, S. and Basu, M. (2009), An EOQ model for perishable products with discounted selling price and stock dependent demand, CEJOR, 17: 31-53.
10. Raafat, F. (1991), Survey of literature on continuously deteriorating inventory models, Journal of the Operational Research Society, 42: 27-37.
11. Shah, N. H. and Shukla, K. T. (2009), Deteriorating inventory model for waiting time partial backlogging, Applied Mathematical Sciences, 3(9): 421-428.
12. Waters, D. (2003), Inventory Control and Management, John Wiley and Sons Ltd, Replika Press Pvt. Ltd., India.
13. Weatherford, L. R. and Bodily, S. E. (1992), A taxonomy and research overview of perishable-asset revenue management: yield management, overbooking, and pricing, Operations Research, 40: 831-844.



AN OVERVIEW OF DATA WAREHOUSING AND OLAP OPERATIONS
Chandrakant Dewangan1, Dileshwar Dansena2, Mili Patel3 and Pooja Khemka4
Student1,2 and Faculty3,4 Kirodimal Institute of Technology, Raigarh (C.G.)
ABSTRACT
Data-driven decision support systems such as data warehouses can serve the requirement of extracting
information from more than one subject area. Data warehouses standardize data across the organization so
as to provide a single view of information, and can supply the information required by decision makers.
Developing a data warehouse for an educational institute is a less-focused area, since educational institutes
are non-profit and service-oriented organizations. In the present-day scenario, where education has been
privatized and cut-throat competition prevails, institutes need to be better organized and to take better
decisions. Institutes' enrollments are increasing as a result of the growth in the number of branches and in
intake; nowadays, a reputed institute's enrollment counts in the thousands. In view of these factors, the
challenges for management are meeting the diverse needs of students and facing the increased complexity of
academic processes. The complexity of these challenges requires continual improvement in operational
strategies based on accurate, timely and consistent information. Building a data warehouse is expensive for
an educational institution, as it requires data warehouse tools for building the warehouse and data mining
tools for extracting information from it. The present study provides an option to build a data warehouse and
extract useful information using open-source data warehousing and data mining tools. In this paper we
explore the need for a data warehouse / business intelligence for an educational institute; the operational data
of an educational institution has been used for experimentation. The study may help decision makers of
educational institutes across the globe to take better decisions.
Keywords: Data warehouse architecture, Types of OLAP servers, OLAP operations, OLAP vs. OLTP
INTRODUCTION
According to W. H. Inmon, "A data warehouse is a subject-oriented, integrated, time-variant, and nonvolatile
collection of data in support of management's decision-making process."
Subject Oriented - The data warehouse is subject oriented because it provides information around a
subject rather than the organization's ongoing operations. These subjects can be products, customers, suppliers,
sales, revenue, etc. The data warehouse does not focus on ongoing operations; rather, it focuses on the modeling
and analysis of data for decision making.
Integrated - A data warehouse is constructed by integrating data from heterogeneous sources such as
relational databases, flat files, etc. This integration enhances the effective analysis of data.
Time-Variant - The data in a data warehouse is identified with a particular time period, and provides
information from a historical point of view.
Non-Volatile - Non-volatile means that previous data is not removed when new data is added. The data
warehouse is kept separate from the operational database, so frequent changes in the operational database are
not reflected in the data warehouse.
Metadata - Metadata is simply defined as data about data; data that is used to represent other data is
known as metadata. For example, the index of a book serves as metadata for the contents of the book. In other
words, metadata is the summarized data that leads us to the detailed data.
DATA WAREHOUSE ARCHITECTURE

Fig.1 Architecture of Warehouse




Architecture is the proper arrangement of the components. You build a data warehouse with software and
hardware components. To suit the requirements of your organization you arrange these building blocks in a
certain way for maximum benefit. We also want to review specific issues relating to each particular component.
1. SOURCE DATA COMPONENT
Source data coming into the data warehouse may be grouped into four broad categories, as discussed here.
(a) Production Data. This category of data comes from the various operational systems of the enterprise.
Based on the information requirements in the data warehouse, you choose segments of data from the different
operational systems. While dealing with this data, you come across many variations in the data formats; you
also notice that the data resides on different hardware platforms and is supported by different database systems
and operating systems. This is data from many vertical applications, and the pieces must be integrated into
useful data for storage in the data warehouse.
(b) Internal Data. In every organization, users keep their private spreadsheets, documents, customer profiles,
and sometimes even departmental databases. This is the internal data, parts of which could be useful in a data
warehouse. If your organization does business with customers on a one-to-one basis and the contribution of
each customer to the bottom line is significant, then detailed customer profiles with ample demographics are
important in a data warehouse. Although much of this data may be extracted from production systems, a lot of it
is held by individuals and departments in their private files. You cannot ignore the internal data held in private
files in your organization; it is a collective judgment call how much of the internal data should be included
in the data warehouse.
(c) Archived Data. Operational systems are primarily intended to run the current business. In every operational
system, you periodically take the old data and store it in archived files. The circumstances in your organization
dictate how often and which portions of the operational databases are archived for storage. Some data is
archived after a year. Sometimes data is left in the operational system databases for as long as five years. Many
different methods of archiving exist. There are staged archival methods. At the first stage, recent data is
archived to a separate archival database that may still be online.
(d) External Data. Most executives depend on data from external sources for a high percentage of the
information they use. They use statistics relating to their industry produced by external agencies. They use
market share data of competitors. They use standard values of financial indicators for their business to check on
their performance. For example, the data warehouse of a car rental company contains data on the current
production schedules of the leading automobile manufacturers. This external data in the data warehouse helps
the car rental company plan for their fleet management.
2. DATA STAGING COMPONENT
After you have extracted data from various operational systems and from external sources, you have to prepare
the data for storing in the data warehouse. These three major functions, extraction, transformation, and
preparation for loading, take place in a staging area. The data staging component consists of a workbench for
these functions: a place and an area with a set of functions to clean, change, combine, convert, deduplicate,
and prepare source data for storage and use in the data warehouse.

(a) Data Extraction. This function has to deal with numerous data sources. You have to employ the
appropriate technique for each data source. Source data may be from different source machines in
diverse data formats. Part of the source data may be in relational database systems. Some data may be
on other legacy network and hierarchical data models.
(b) Data Transformation. In every system implementation, data conversion is an important function. For
example, when you implement an operational system such as a magazine subscription application, you have to
initially populate your database with data from the prior system records. You may be converting over from a
manual system. Or, you may be moving from a file-oriented system to a modern system supported with
relational database tables.
(c) Data Loading. Two distinct groups of tasks form the data loading function. When you complete the design
and construction of the data warehouse and go live for the first time, you do the initial loading of the data into
the data warehouse storage; the initial load moves large volumes of data and uses up a substantial amount of
time. As the data warehouse starts functioning, you continue to extract the changes to the source data, transform
the data revisions, and feed the incremental data revisions on an ongoing basis. The figure in the data storage
section below illustrates the common types of data movements from the staging area to the data warehouse storage.
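This extract-transform-load flow can be pictured with a toy example. The following sketch (Python with sqlite3; the table and column names are hypothetical, not taken from any particular institution's systems) extracts raw enrollment rows, cleans and standardizes them, and loads them into a warehouse fact table:

import sqlite3

# Hypothetical source and warehouse databases, kept in memory for the sketch.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE enrollments (student_id TEXT, branch TEXT, fees TEXT)")
src.executemany("INSERT INTO enrollments VALUES (?, ?, ?)",
                [("S1", " cse ", "52000"), ("S2", "it", "48000"), (None, "ece", "50000")])

dw = sqlite3.connect(":memory:")
dw.execute("CREATE TABLE fact_enrollment (student_id TEXT, branch TEXT, fees REAL)")

# Extract: pull the raw rows out of the operational source.
rows = src.execute("SELECT student_id, branch, fees FROM enrollments").fetchall()
# Transform: drop incomplete rows, standardize branch codes, convert the fee type.
clean = [(sid, branch.strip().upper(), float(fees)) for sid, branch, fees in rows if sid]
# Load: feed the cleaned rows into the warehouse fact table.
dw.executemany("INSERT INTO fact_enrollment VALUES (?, ?, ?)", clean)
dw.commit()
print(dw.execute("SELECT * FROM fact_enrollment").fetchall())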


3. DATA STORAGE COMPONENT
The data storage for the data warehouse is a separate repository. The operational systems of your enterprise
support the day-to-day operations; these are online transaction processing applications, and their data
repositories typically contain only the current data. The data warehouse storage, by contrast, must not be in a
state of continual updating; for this reason, data warehouses are read-only data repositories. Generally, the
database in your data warehouse must be open: depending on your requirements, you are likely to use tools
from multiple vendors, so the data warehouse must be open to different tools. Most data warehouses employ
relational database management systems, and many also employ multidimensional database management
systems. Data extracted from the data warehouse storage is aggregated in many ways, and the summary data is
kept in proprietary products such as multidimensional databases (MDDBs), as shown in Fig. 2 below.

Fig.2 Data storage component


4. INFORMATION DELIVERY COMPONENT
Who are the users that need information from the data warehouse? The range is fairly comprehensive. The
novice user comes to the data warehouse with no training and, therefore, needs prefabricated reports and preset
queries. The casual user needs information once in a while, not regularly. This type of user also needs
prepackaged information.

Fig.3 How data is delivered




The users will enter their requests online and will receive the results online. You may set up delivery of
scheduled reports through e-mail, or you may make use of your organization's intranet for information
delivery. Recently, information delivery over the Internet has been gaining ground.
5. METADATA COMPONENT
Metadata in a data warehouse is similar to the data dictionary or the data catalog in a database management
system. In the data dictionary, you keep the information about the logical data structures, the information about
the files and addresses, the information about the indexes, and so on. The data dictionary contains data about
the data in the database. Similarly, the metadata component is the data about the data in the data warehouse.
This is a commonly used definition, but it needs elaboration: metadata in a data warehouse is similar to a data
dictionary, yet much more than a data dictionary.
6. MANAGEMENT AND CONTROL COMPONENT
This component of the data warehouse architecture sits on top of all the other components. The management
and control component coordinates the services and activities within the data warehouse. This component
controls the data transformation and the data transfer into the data warehouse storage. On the other hand, it
moderates the information delivery to the users. It works with the database management systems and enables
data to be properly stored in the repositories.
OLAP (ONLINE ANALYTICAL PROCESSING) SERVER
An Online Analytical Processing (OLAP) server is based on the multidimensional data model. It allows
managers and analysts to gain insight into information through fast, consistent, interactive access. In this
section we discuss the types of OLAP servers, the operations on OLAP, and the differences between OLAP,
statistical databases and OLTP.
TYPES OF OLAP SERVERS
We have four types of OLAP servers, listed below:
1. Relational OLAP (ROLAP)
2. Multidimensional OLAP (MOLAP)
3. Hybrid OLAP (HOLAP)
4. Specialized SQL servers
(1) Relational OLAP (ROLAP)
Relational OLAP servers are placed between the relational back-end server and client front-end tools. To store
and manage warehouse data, ROLAP uses a relational or extended-relational DBMS. ROLAP includes the
following: implementation of aggregation navigation logic; optimization for each DBMS back end; additional
tools and services.
(2) Multidimensional OLAP (MOLAP)
Multidimensional OLAP (MOLAP) uses array-based multidimensional storage engines for multidimensional
views of data. With multidimensional data stores, storage utilization may be low if the data set is sparse;
therefore many MOLAP servers use a two-level data storage representation to handle dense and sparse data
sets.
(3) Hybrid OLAP (HOLAP)
The hybrid OLAP technique is a combination of ROLAP and MOLAP: it offers both the higher scalability of
ROLAP and the faster computation of MOLAP. A HOLAP server allows storing large volumes of detailed
data, while the aggregations are stored separately in the MOLAP store.
(4) Specialized SQL servers
Specialized SQL servers provide advanced query language and query processing support for SQL queries over
star and snowflake schemas in a read-only environment.
OLAP OPERATIONS
As the OLAP server is based on the multidimensional view of data, we discuss the OLAP operations on
multidimensional data. The OLAP operations are listed below.
1. ROLL-UP
This operation performs aggregation on a data cube in either of the following ways: by climbing up a concept
hierarchy for a dimension, or by dimension reduction. Consider the following diagram showing the roll-up
operation.

Fig.4 Roll-up operation


The roll-up operation is performed by climbing up the concept hierarchy for the dimension location. Initially
the concept hierarchy was "street < city < province < country". On rolling up, the data is aggregated by
ascending the location hierarchy from the level of city to the level of country, so the data is grouped into
countries rather than cities. When the roll-up operation is performed, one or more dimensions of the data cube
are removed.
2. DRILL-DOWN
The drill-down operation is the reverse of roll-up. It is performed in either of the following ways: by stepping
down a concept hierarchy for a dimension, or by introducing a new dimension. Consider the following
diagram showing the drill-down operation:

Fig.5 Drill operation


The drill-down operation is performed by stepping down the concept hierarchy for the dimension time. Initially
the concept hierarchy was "day < month < quarter < year." On drilling down, the time dimension is descended
from the level of quarter to the level of month. When the drill-down operation is performed, one or more
dimensions are added to the data cube; it navigates from less detailed data to more detailed data.


3. SLICE
The slice operation performs a selection on one dimension of a given cube and gives us a new sub-cube.
Consider the following diagram showing the slice operation.

Fig.6 Slice operation


The slice operation is performed for the dimension time using the criterion time = "Q1". It forms a new
sub-cube by selecting a single value on that one dimension.
4. DICE
The dice operation performs a selection on two or more dimensions of a given cube and gives us a new
sub-cube. Consider the following diagram showing the dice operation:

Fig.7 Dice operation


The dice operation on the cube is based on the following selection criteria, which involve three dimensions:
(location = "Toronto" or "Vancouver")
(time = "Q1" or "Q2")
(item =" Mobile" or "Modem").
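These operations can be mimicked on a small relational dataset. The following sketch (Python with pandas; the sales figures and the "Delhi"/"Q3" rows are hypothetical, added only so the selections have something to exclude) expresses roll-up, slice and dice over a toy sales cube:

import pandas as pd

# Toy sales cube (hypothetical figures) for illustrating the OLAP operations.
cube = pd.DataFrame({
    "location": ["Toronto", "Toronto", "Vancouver", "Vancouver", "Delhi", "Delhi"],
    "time":     ["Q1", "Q2", "Q1", "Q2", "Q1", "Q3"],
    "item":     ["Mobile", "Modem", "Mobile", "Modem", "Mobile", "Modem"],
    "sales":    [605, 825, 1087, 978, 450, 300],
})

# Roll-up: aggregate the location dimension away (climbing city -> country
# would instead map each city to its country before grouping).
rollup = cube.groupby(["time", "item"], as_index=False)["sales"].sum()

# Drill-down is the reverse: regroup at a finer level or add a dimension back.

# Slice: fix a single value on one dimension (time = "Q1") to get a sub-cube.
slice_q1 = cube[cube["time"] == "Q1"]

# Dice: select on two or more dimensions at once, as in the criteria above.
dice = cube[cube["location"].isin(["Toronto", "Vancouver"])
            & cube["time"].isin(["Q1", "Q2"])
            & cube["item"].isin(["Mobile", "Modem"])]

print(rollup, slice_q1, dice, sep="\n\n")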


OLAP vs. OLTP
OLTP systems support the day-to-day operations of the enterprise: they hold current, detailed data and serve
many short read-and-write transactions issued by clerks and operational applications. OLAP systems serve
analysis and decision making: they hold historical, summarized, multidimensional data, answer complex and
mostly read-only queries issued by managers and analysts, and are measured by query response time rather
than by transaction throughput.
REFERENCES
1. Ponniah, Paulraj (2001). Data Warehousing Fundamentals: A Comprehensive Guide for IT Professionals, John Wiley & Sons, Inc. ISBNs: 0-471-41254-6 (Hardback); 0-471-22162-7 (Electronic).
2. tutorialspoint.com



FORMANT ESTIMATION FOR SPEECH RECOGNITION AND STRUCTURAL ANALYSIS OF
ASSAMESE LANGUAGE IN ASSAM
Dr. Rashmi Dutta
Associate Professor, Department of Physics, North Gauhati College, Guwahati 781031, Assam
1. INTRODUCTION
This paper deals with applications of theoretical physics to speech technology. It describes methods for
studying the characteristic features of the first, second and third formant frequencies of the vowels of
Assamese (a major link language of North-East India), for both male and female informants, and their ranges
of variation. The nonlinear features of the Assamese phonemes are studied through appropriate mathematical
models, and suitable algorithms are developed for the structural analysis of this language [1,3,4,7,10].
An efficient and compact representation of the time-varying characteristics of speech offers potential benefits
for speech recognition. Today, virtually all high-performance speech recognition systems are based on some
kind of mel-cepstral coefficients or filter-bank analysis [4]. There are, however, a few specific aspects that
make formant-based parameters attractive, as listed below.
 Formants are considered to be robust against channel distortion and noise.
 Formant parameters might provide a means to tackle the problem of a mismatch between training and
testing conditions.
 There is a close relation of formant parameters to model-based approaches to speech perception and
production.
A variety of approaches, such as formant tracking [2,7,8], articulatory models [1,4] and auditory models, have
been explored for the analysis and synthesis of speech, and thereby for developing strategies for
speaker-independent speech recognition, during the last few decades. The formant tracking method based on
Linear Predictive Coding (LPC) has received considerable attention, and studies on formant estimation for
speech recognition have been done by several workers [1,3,6,9]. The formant model used in this section to
determine the formant frequencies is the one proposed by Welling et al., which avoids the problems mentioned
above. Based on a digital resonator technique, the entire frequency range is divided into a fixed number of
segments, each segment representing one formant frequency.
2. FORMANT EQUATION
A predictor polynomial, defined as the Fourier transform of the corresponding second-order predictor, is given
by

    A_k(e^{jω}) = 1 − α_k e^{−jω} − β_k e^{−j2ω},    (1)

where α_k and β_k are the real-valued prediction coefficients. From (1), we get

    |A_k(e^{jω})|² = 1 + α_k² + β_k² − 2α_k(1 − β_k) cos ω − 2β_k cos(2ω).    (2)

The parameter β_k determines the bandwidth of the resonator A_k(e^{jω}), defined as the negative logarithm
of (−β_k). The formant frequency is the frequency at which |A_k(e^{jω})|² attains its minimum; completing
the square in (2) gives

    F_k = arccos[ −α_k(1 − β_k) / (4β_k) ].    (3)

Using (1), the corresponding prediction error can be written as

    E(ω_{k−1}, ω_k | α_k, β_k) = (1 + α_k² + β_k²) r_k(0) − 2α_k(1 − β_k) r_k(1) − 2β_k r_k(2),    (4)

where r_k(ν) are the autocorrelation coefficients of segment k for ν = 0, 1, 2. Since |cos F_k| ≤ 1 in (3), the
values of α_k and β_k are constrained by

    |α_k(1 − β_k)| ≤ 4|β_k|.    (5)

We denote the beginning point and the end point of segment k by ω_{k−1} and ω_k respectively. The
autocorrelation coefficients of segment k are obtained from the short-time power spectrum S(e^{jω}) as

    r_k(ν) = (1/π) ∫_{ω_{k−1}}^{ω_k} S(e^{jω}) cos(νω) dω,  ν = 0, 1, 2.    (6)

By minimizing the prediction error (4) with respect to α_k and β_k, we obtain the optimum prediction
coefficients

    α_k^{opt} = r_k(1) [ r_k(0) − r_k(2) ] / [ r_k(0)² − r_k(1)² ],
    β_k^{opt} = [ r_k(0) r_k(2) − r_k(1)² ] / [ r_k(0)² − r_k(1)² ].    (7)

The value of the minimum prediction error is then given [4] by

    E_min(ω_{k−1}, ω_k) = min_{α_k, β_k} E(ω_{k−1}, ω_k | α_k, β_k)
                        = r_k(0) − α_k^{opt} r_k(1) − β_k^{opt} r_k(2).    (8)
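As an illustration of how a single resonator is fitted, the following sketch (Python with NumPy; the variable names are ours, and the test spectrum is a hypothetical single peak) evaluates the autocorrelation integrals of (6) by a simple quadrature, the optimal coefficients of (7), and the formant frequency of (3):

import numpy as np

# Sketch of one resonator fit: given power-spectrum samples S over a segment
# (w in rad/sample), compute r_k(0..2) of (6), the optimal coefficients of (7),
# and the formant frequency of (3), converted to Hz.
def formant_of_segment(S, w, fs=8000.0):
    dw = w[1] - w[0]
    r = [np.sum(S * np.cos(v * w)) * dw / np.pi for v in (0, 1, 2)]  # eq. (6)
    den = r[0]**2 - r[1]**2
    alpha = r[1] * (r[0] - r[2]) / den                               # eq. (7)
    beta = (r[0] * r[2] - r[1]**2) / den
    F = np.arccos(-alpha * (1 - beta) / (4 * beta))                  # eq. (3)
    return F * fs / (2 * np.pi)                                      # rad/sample -> Hz

# Hypothetical toy segment: a single narrow spectral peak near 700 Hz.
w = np.linspace(0.2, 1.2, 400)
S = 1.0 / (1.0 + 200.0 * (w - 2 * np.pi * 700.0 / 8000.0)**2)
print(round(formant_of_segment(S, w)), "Hz")   # lands near 700 Hz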

3. FORMANT FREQUENCY ESTIMATION OF ASSAMESE VOWELS
The formant frequencies of the Assamese vowels, corresponding to the utterances of both male and female
informants, have been determined using equation (3). For the Assamese phonemes, the informants, both male
and female, were chosen mostly from within Assamese-dominated areas. The formant frequency characteristics
of the 8 (eight) Assamese vowels are depicted in Fig 1(a) for male informants and in Fig 1(b) for female
informants. For obtaining the formant frequencies, the sampled spectra are subjected to FFT according to the
following procedure:
(i) The utterance of each Assamese vowel is recorded using Goldwave version 4.26 and digitized at an 8000 Hz
sampling rate with 16-bit resolution.
(ii) Signal pre-emphasis is performed by calculating the first-order difference of the sampled speech signal.
(iii) Every 10 ms, a 30 ms Hamming window is applied to the overlapping speech segment and the short-time
power spectrum is computed by a 256-point Fast Fourier Transform (FFT).
(iv) The frequency range 0-4000 Hz is used for formant estimation, so that 128 samples in this range are used
for the approximation of the Fourier integral and for finding the optimum segmentation.
(v) Applying the segmentation of Section 2, the entire frequency range corresponding to a particular vowel is
divided into k segments.
(vi) For each segment k, a second-order resonator is defined, and the algorithm of Section 3.1 is used to obtain
the formant frequencies and their representation.
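Steps (ii)-(iv) of this procedure can be sketched as follows (Python with NumPy; the input is a hypothetical signal array, and the parameter values follow the procedure above):

import numpy as np

# Sketch of steps (ii)-(iv): pre-emphasis, 30 ms Hamming windows every 10 ms,
# and a 256-point FFT covering 0-4000 Hz at fs = 8000 Hz.
fs = 8000
x = np.random.randn(fs)                       # hypothetical 1 s speech signal
x = np.append(x[0], x[1:] - x[:-1])           # (ii) first-order difference
win, hop = int(0.030 * fs), int(0.010 * fs)   # 30 ms window, 10 ms step
hamming = np.hamming(win)
spectra = [np.abs(np.fft.rfft(x[i:i + win] * hamming, 256))**2
           for i in range(0, len(x) - win, hop)]   # (iii) short-time power spectra
print(len(spectra), len(spectra[0]))          # frames x 129 bins (DC plus 128 bins to 4000 Hz)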



3.1 ALGORITHM
The formant frequencies of the eight Assamese vowels and their graphical representation, for both male and
female informants, are obtained using the following algorithm (MATLAB):

Fs = 8000;                           % sampling frequency (Hz)
x = wavread('input_signal.wav');     % recorded vowel utterance
p = fft(x);                          % spectrum of the signal
y = abs(p);                          % magnitude spectrum
freq = 1:4000;                       % frequency range of interest (Hz)
plot(freq, 10*log10(y(freq)), 'b-'); % log-magnitude spectrum
hold on;                             % keep the spectrum while overlaying
[a, e] = lpc(x, 10);                 % 10th-order LPC coefficients
ak = a(2);                           % first prediction coefficient
bk = a(3);                           % second prediction coefficient
b = 1;
[h, w] = freqz(b, a, 100);           % LPC spectral envelope (all-pole filter)
w = w*Fs/(2*pi);                     % convert rad/sample to Hz
plot(w, 10*log10(abs(h)), 'k-');     % overlay the formant envelope

By adopting the above algorithm, we obtain the following representations:


Fig 1(a): Formant frequency characteristics of Assamese vowels corresponding to male informants
[Eight panels, one per vowel (/a/, /aa/, /e/, /ea/, /eai/, /o/, /ou/, /u/), each plotting log magnitude (dB) against frequency (0-4000 Hz).]
Fig 1(b): Formant frequency characteristics of Assamese vowels corresponding to female informants
The ranges of variation of the first three formant frequencies, F1, F2 and F3, of the eight Assamese vowels,
for male and female informants, are given in Table 1.
Table 1: Range of variation of the formant frequencies (kHz) of eight Assamese vowels (the figure in parentheses gives the extent of each range)

Vowel | F1 Male         | F1 Female       | F2 Male         | F2 Female       | F3 Male         | F3 Female
/a/   | 0.72-1.1 (0.28) | 0.61-1.2 (0.59) | 1.2-1.6 (0.4)   | 0.91-1.5 (0.59) | 2.0-2.68 (0.68) | 2.61-3.2 (0.59)
/aa/  | 0.7-1.1 (0.4)   | 0.18-0.9 (0.72) | 1.3-1.75 (0.45) | 0.9-1.79 (0.89) | 2.2-3.15 (0.95) | 1.0-1.7 (0.7)
/e/   | 0.17-1.1 (0.93) | 0.3-0.91 (0.61) | 2.1-2.62 (0.52) | 0.51-1.3 (0.79) | 2.18-3.3 (1.12) | 2.0-3.21 (1.21)
/u/   | 0.17-1.2 (1.03) | 0.3-1.42 (1.12) | 0.72-1.3 (0.58) | 0.6-1.5 (0.9)   | 2.2-2.8 (0.6)   | 0.9-1.76 (0.86)
/ea/  | 0.3-1.6 (1.3)   | 0.2-0.9 (0.7)   | 1.8-2.45 (0.65) | 0.45-1.6 (1.15) | 2.5-3.6 (1.1)   | 2.1-3.2 (1.1)
/eai/ | 0.4-1.21 (0.81) | 0.3-1.23 (0.93) | 0.7-1.5 (0.8)   | 0.45-1.8 (1.35) | 2.2-2.8 (0.6)   | 1.1-1.58 (0.48)
/o/   | 0.31-0.6 (0.29) | 0.52-0.9 (0.38) | 0.7-1.32 (0.62) | 1.1-1.7 (0.6)   | 1.7-2.41 (0.71) | 3.1-3.65 (0.55)
/ou/  | 0.3-0.6 (0.3)   | 0.5-1.0 (0.5)   | 0.67-1.4 (0.73) | 0.9-1.52 (0.62) | 1.78-2.4 (0.62) | 3.0-3.36 (0.36)

4. NON-LINEAR FORMANT FREQUENCY CHARACTERISTICS OF ASSAMESE PHONEMES
With the emergence of nonlinear dynamical analysis, many researchers have striven to find out whether
speech can be approximated by a nonlinear low-dimensional model [2,3,6,7,8]. These investigations have
been, on the whole, conclusive, and we attempt to show a full analysis using a range of invariant measures
that clearly show the low-dimensional nonlinear behaviour of individual vowel sounds. A generalized
nonlinear system can be described by a number of observable output states; for instance, the motion of a
pendulum can be described using its angular position and angular velocity. These can then be used to construct
a state-space description of the behaviour of the system. In general, time-series data provide a scalar
observation of a system's underlying dynamics, and it is necessary to obtain a reconstruction of the state-space
behaviour of the system from the scalar observation in some manner. Packard et al. showed how this might
be done numerically, and Tokuda et al. subsequently formalized a proof as to how this might be achieved.
From a dynamical-systems point of view, a continuous stream of speech data is wholly unsuitable, since the
dynamics themselves are undergoing continual change and must therefore be nonstationary. A better analysis
can be achieved by focusing on individual phonemes, the single unambiguous sounds that form the building
blocks of any language, allowing the individual dynamics of each phoneme to be investigated before
attempting to generalize to the complete system. This analysis takes sustained vowels from a variety of
different speakers and shows that, irrespective of speaker, the system is nonlinear and has a low dimension of
different order.
In voice science, vocal fold vibration is described as a nonlinear dynamic system [5]. Several nonlinearities
are involved in the process of vocal fold vibration [8], such as:
(i) Non-linear stress-strain characteristics of vocal fold tissue.
(ii) Strong restoring forces at collision of the folds.
(iii) Highly non-linear dependence of the airflow on the glottal area.

Speech is a nonlinear phenomenon. The information provided by the cepstral measures of vowel recognition
through LPC (linear predictive coding) analysis is not enough to ascertain the degree of nonlinearity present in
the pronunciation of vowels. Fitting a polynomial by the matrix method is a faster way to study the
nonlinearity of the vowels. In the present study, the analysis and synthesis of Assamese vowels are made by
studying their cepstral features and formant characteristics through LPC.
To study the degree of nonlinearity, the formant frequencies are fitted by a polynomial of degree p, as
described by equation (9):

    y = b₀ + b₁x + b₂x² + ... + b_p x^p,    (9)

where b₀, b₁, b₂, ..., b_p are coefficients to be determined by the matrix method. Writing (9) for the observed
points as Y = Xb, the coefficient vector is obtained from the normal equations as

    b = (XᵀX)⁻¹ XᵀY.    (10)

Following this matrix method, the values of b₀, b₁, b₂, b₃, b₄, etc. are estimated. The ranges of variation of
these coefficients for male and female informants, as obtained in the present investigation, are given in
Table 2 and Table 3 respectively. From these tables it is seen that the coefficient b₄ lies between -0.031 and
0.096, i.e. -0.031 < b₄ < 0.096 (for male), and between -0.029 and 0.038, i.e. -0.029 < b₄ < 0.038 (for female).
As b₄ is very small, the x⁴ term of the polynomial can be neglected. Thus, the equation representing the
formant frequency and amplitude is nonlinear, with the degree of nonlinearity being three.
Similarly, the degree of nonlinearity of the formant frequency and the cepstral coefficients can be obtained
using the same set of matrix equations.
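A minimal sketch of this matrix method (Python with NumPy; the sample points are hypothetical and chosen to follow a cubic trend) shows why the x⁴ term drops out:

import numpy as np

# Matrix method of equation (10): fit y = b0 + b1*x + ... + bp*x^p.
x = np.linspace(0.5, 4.0, 8)              # hypothetical sample points
y = 0.3 - 0.8*x + 0.5*x**2 - 0.05*x**3    # data following a cubic trend

p = 4                                      # fit one degree higher than needed
X = np.vander(x, p + 1, increasing=True)   # columns 1, x, x^2, ..., x^p
b = np.linalg.solve(X.T @ X, X.T @ y)      # b = (X^T X)^(-1) X^T Y, eq. (10)
print(b)                                   # b[4] is negligible, so the x^4
                                           # term can be dropped, as in the text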
Table 2: Range of variation of the coefficients b0, b1, b2, b3, b4 of the polynomial fitting for formant frequencies of Assamese vowels corresponding to male informants.

Vowel | b0               | b1                | b2               | b3               | b4
/a/   | 0.649<b0<2.328   | -3.819<b1<-0.825  | 0.460<b2<2.242   | -0.453<b3<-0.035 | -0.0062<b4<0.031
/aa/  | 1.10<b0<6.526    | -11.625<b1<1.392  | 0.694<b2<6.841   | -1.408<b3<-0.177 | 0.0002<b4<0.096
/e/   | -2.689<b0<-1.804 | 1.497<b1<4.350    | -1.902<b2<-0.274 | 0.029<b3<0.418   | -0.031<b4<-0.0004
/eai/ | -2.529<b0<-1.955 | 2.682<b1<4.074    | -1.759<b2<-0.658 | 0.061<b3<0.388   | -0.029<b4<0.0002
/ea/  | -1.542<b0<1.133  | -1.868<b1<2.711   | -1.080<b2<1.088  | -0.188<b3<0.247  | -0.019<b4<0.011
/o/   | 2.675<b0<2.566   | -4.866<b1<-0.294  | 0.236<b2<2.984   | -0.619<b3<-0.017 | -0.0002<b4<0.043
/u/   | 1.083<b0<2.075   | -3.592<b1<-1.745  | 0.245<b2<2.103   | -0.424<b3<-0.081 | -0.002<b4<0.029
/ou/  | -1.870<b0<2.233  | -3.801<b1<2.698   | -0.683<b2<2.236  | -0.452<b3<0.065  | -0.0004<b4<0.031

Table 3: Range of variation of the coefficients b0, b1, b2, b3, b4 of the polynomial fitting for formant frequencies of Assamese vowels corresponding to female informants.

Vowel | b0               | b1                | b2               | b3               | b4
/a/   | 0.345<b0<1.075   | -1.282<b1<-0.211  | 0.170<b2<0.649   | -0.057<b3<0.012  | -0.004<b4<-0.006
/aa/  | 0.416<b0<1.100   | -1.184<b1<-0.259  | 0.333<b2<1.035   | -0.244<b3<-0.059 | 0.004<b4<0.019
/e/   | -1.920<b0<-1.608 | 2.225<b1<2.689    | -0.665<b2<0.495  | 0.042<b3<0.056   | 0.0006<b4<0.0008
/eai/ | -4.116<b0<1.344  | -3.001<b1<2.621   | -0.598<b2<2.307  | -0.477<b3<0.040  | 0.002<b4<0.031
/ea/  | -4.117<b0<-2.705 | 4.889<b1<6.509    | -2.652<b2<-2.161 | 0.444<b3<0.498   | -0.031<b4<0.033
/o/   | -1.016<b0<0.483  | -0.801<b1<2.014   | -1.089<b2<0.519  | -0.046<b3<0.295  | -0.025<b4<-0.0004
/u/   | 1.583<b0<2.500   | -4.287<b1<-2.816  | 1.840<b2<2.549   | -0.526<b3<0.402  | 0.030<b4<0.038
/ou/  | 1.895<b0<2.000   | -3.377<b1<-3.193  | 1.936<b2<2.037   | -0.414<b3<0.392  | 0.026<b4<0.029

Fig. 1 (c) : Sequential position of Assamese Vowels as per pitch magnitude variation(Male)

Fig. 1 (d) : Sequential position of Assamese Vowels as per pitch magnitude variation(Male)

Fig. 1 (e) : Sequential position of Assamese Vowels as per pitch magnitude variation(Male)

Fig. 1 (f) : Sequential position of Assamese Vowels as per pitch magnitude variation(Female)

Fig.1 (g) : Sequential position of Assamese Vowels as per pitch magnitude variation(Female)

Fig. 1(h) : Sequential position of Assamese Vowels as per pitch magnitude variation(Female)


5. RESULTS AND CONCLUSION
In this paper we have presented the formant frequencies, the degree of nonlinearity and the sequential
positions of the vowels of Assamese, a major link language of North-East India. A comparative study is
highlighted below.
The graphical representations of the formant frequency spectra of the eight Assamese vowels are depicted in
Fig 1(a) and Fig 1(b), for male and female informants respectively.
The graphical representations of the first, second and third formant frequencies and their ranges of variation,
corresponding to the eight Assamese vowels, are shown in Fig 1(c), Fig 1(d) and Fig 1(e) for male speakers.
The figures also depict the sequential position of the eight Assamese vowels based on the magnitude of the
respective formant frequencies. It is evident from the graphs that the frequencies F2 and F3 do not convey any
meaningful information which could be used for either speech or speaker identification. However, the variation
of F1 with respect to the vowels is distinct and prominent in the case of male speakers; this feature of F1 may
be useful for speech and speaker identification of Assamese male informants.
The graphical representations of the first, second and third formant frequencies of Assamese female utterances
with respect to the eight vowels are given in Fig 1(f), Fig 1(g) and Fig 1(h). It is found that the information
displayed by the vowel spectra of female utterances for F2 and F3 is not distinguishable. However, F1 seems
to play an important role in placing the vowels according to their formant magnitude.
REFERENCES
1) Atal, B. S. and Hanauer, S. L. (1971), Speech Analysis and Synthesis by Linear Prediction of the Speech Wave, Journal of the Acoustical Society of America, 50(2), pp. 637-655.
2) Bartkova, K. and Jouvet, D. (1999), Selective Prosodic Post-Processing for Improving Recognition of French Telephone Numbers, in EuroSpeech 99, 6th European Conference on Speech Communication and Technology, Budapest, Hungary, 5-10 September 1999, Vol. 1, pp. 267-270.
3) Crowe, A. and Jack, M. A. (1987), Globally Optimizing Formant Tracker Using Generalized Centroids, Electronics Letters, Vol. 23, pp. 1019-1020.
4) McCandless, S. (1974), An Algorithm for Automatic Formant Extraction Using Linear Prediction Spectra, IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. ASSP-22, pp. 135-141.
5) Devi, Basanti (1984), Vowel Length in Assamese, in Acoustic Studies in Languages, edited by B. B. Rajapurohit, Central Institute of Indian Languages, Mysore.
6) Grierson, G. A. (2001), Linguistic Survey of India (New Edition).
7) Kopec, G. E. (1986), Formant Tracking Using Hidden Markov Models and Vector Quantization, IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. ASSP-34, pp. 709-729.
8) Litman, D. J. and Forbes-Riley, K. (2006), Recognizing Student Emotions and Attitudes on the Basis of Utterances in Spoken Tutoring Dialogues with Both Human and Computer Tutors, Speech Communication, 48(5), pp. 559-590.
9) Quatieri, T. F. (2004), Discrete-Time Speech Signal Processing: Principles and Practice, Pearson Education.
10) Talukdar, P. H., Bhattacharjee, U., Goswami, C. and Barman, J. (2005), A Robust Recognizer for Assamese and Bodo Vowels Using Artificial Neural Network, Proc. Int. Symp. Frontiers of Research on Speech and Music, 2005, pp. 148-152.

CELLPHONE CLONING OVER (GSM & CDMA)
Mukesh Patel1, Mili Patel2 and Pooja Khemka3
Student1 and Faculty2,3, Kirodimal Institute of Technology, Raigarh (C.G.)
ABSTRACT
There are nearly 1.1 billion telecom subscribers worldwide, not only because a cell phone makes it easy to communicate with people anywhere in the world, but also because modern technology provides many features and facilities in a cell phone, such as banking, shopping and social networking. That is why everyone wants their own individual cell phone. It is estimated that worldwide mobile phone fraud will soon reach $40 billion. Cell phone cloning means preparing another cell phone that has the same ESN (Electronic Serial Number) or CTN (Cellular Telephone Number) for CDMA (Code Division Multiple Access), or the same IMEI (International Mobile Station Equipment Identity) number for GSM (Global System for Mobile Communication), so that calls can be made and received from the clone while only the original subscriber pays for them.
INTRODUCTION
Cloning: cloning is the creation of an organism that is an exact genetic copy of another.
Cell phone cloning: cell phone cloning is the process of taking the programmed information that is stored in a legitimate mobile phone and illegally programming the identical information into another mobile phone.
Process of Cloning
Cloning involves modifying or replacing the EPROM in the phone with a new chip that allows an ESN (Electronic Serial Number) to be configured via software. The MIN (Mobile Identification Number) also has to be changed. When the ESN/MIN pair has been changed successfully, an effective clone of the original phone has been created. Cloning requires access to ESN/MIN pairs, which were discovered in several ways:
1. Sniffing the cellular network.
2. Trashing cellular companies or cellular resellers.
3. Hacking cellular companies or cellular resellers.
CLONING IN GSM CELL PHONES
GSM: GSM stands for Global System for Mobile Communication. It uses a Subscriber Identity Module (SIM) card. GSM is a standard set developed by the European Telecommunications Standards Institute (ETSI) to describe technologies for second-generation (2G) digital cellular networks.
• The important information is the IMSI, which is stored on the removable SIM card.
• The SIM card is inserted into a reader.
• The reader is connected to a computer and the card details are transferred.
• The details are written to an encrypted card; as a result, a cloned replica of the cell phone is ready.

CLONING IN CDMA CELL PHONES
CDMA: CDMA stands for Code Division Multiple Access, a method of transmitting simultaneous signals over a shared portion of the spectrum. It uses a Mobile Identification Number (MIN) that contains user account information.
• Cellular telephone thieves monitor the radio frequency spectrum.
• They steal a phone's ESN/MIN pair as it is being anonymously registered with a cell site.
• Subscriber information is also encrypted and transmitted digitally.
• A device called a DDI (Digital Data Interface) can be used to capture pairs.
• The stolen ESN and MIN are then fed into a new CDMA handset.

PATAGONIA SOFTWARE
Patagonia is software available in the market which is used to clone CDMA phones. Using this software, a cloner can take over control of a CDMA phone, i.e. clone it. A SIM can be cloned again and again, and the clones can be used in different places. Messages and calls sent by cloned phones can be tracked. However, if the accused also manages to clone the IMEI number of the handset, for which software is available, there is no way he can be traced.
DETECTION TECHNIQUES
Duplicate detection: the network sees the same phone in several places at the same time.
Velocity trap: the mobile seems to be moving at an impossible or most unlikely speed.
RF (Radio Frequency) fingerprinting: nominally identical radio equipment has a distinguishing fingerprint, so the network software stores and compares fingerprints for all the phones it sees.
Call counting: both the phone and the network keep track of calls made with the phone; should they differ by more than the usually allowed one call, service is denied.
PIN codes: prior to placing a call, the caller unlocks the phone by entering a PIN code and then calls as usual.
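As an illustration of the duplicate-detection and velocity-trap checks above, the sketch below flags a clone when the same ESN/MIN pair is sighted at two cell sites at a speed no genuine handset could reach. The 500 km/h limit and the sighting format are assumptions, not part of any real network's interface.

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two cell sites, in kilometres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * 6371 * math.asin(math.sqrt(a))

    def velocity_trap(sighting_a, sighting_b, max_kmh=500):
        """Flag a possible clone when one ESN/MIN pair moves at an impossible speed.
        A sighting is (latitude, longitude, unix_time); max_kmh is an assumed limit."""
        (lat1, lon1, t1), (lat2, lon2, t2) = sighting_a, sighting_b
        hours = abs(t2 - t1) / 3600
        if hours == 0:                 # same instant at two sites: duplicate detection
            return True
        return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh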
CELL PHONE CLONING SYMPTOMS
• Difficulty in placing outgoing calls.
• Difficulty in retrieving voice mail messages.
• Incoming callers constantly receiving a busy signal or a wrong number.
• Unusual calls appearing on your phone bill.

MEASURES TO BE TAKEN
These include:
• Blacklisting: blacklisting of stolen phones is another mechanism to prevent unauthorized use.
• PIN: user verification using Personal Identification Number (PIN) codes is one method of customer protection against cellular phone fraud.
• Encryption: encryption is regarded as the most effective way to prevent cellular fraud.
• Blocking: blocking is used by service providers to protect themselves from high-risk callers.

ADVANTAGES
1. If your phone has been lost, you can use your cloned cell phone.
2. If your phone is damaged, or if you forgot your phone at home or some other place, the cloned phone can be helpful.
DISADVANTAGES
1. It can be used by terrorists for criminal activities.
2. It can be used by the cloner for fraudulent calls.
3. It can be used for illegal money transfer.
CONCLUSION
Cell phone cloning is in its initial stages in some countries. Preventive steps should be taken by the network providers and the government. The enactment of legislation to prosecute crimes related to cellular phones is not yet viewed as a priority. The cloning of CDMA mobile phones was possible because there was no protection for the identity information.
REFERENCES
1. http://www.cdmasoftware.com/eng.html
2. http://infotech.indiatimes.com
3. http://www.hackinthebox.org/
4. http://blog.metasploit.com/2007/09/root-shell-in-my-pocket-and-maybe-yours.html

CONVERSION OF METHANOL TO FORMALDEHYDE OVER Ag NANOROD CATALYST UNDER
MICROWAVE IRRADIATION
Manish Srivastava, Aakanksha Mishra, Ashu Goyal, Anamika Srivastava and Preeti Tomer
Department of Chemistry, Banasthali Vidyapith, Banasthali, Rajasthan
ABSTRACT
Noble-metal nanorods comprise a novel class of nanostructures possessing solid interiors and porous walls. Here we report a novel strategy for the controlled synthesis of silver nanorods with a narrow size distribution (1.4 nm) through the nucleation of Ag seeds and growth with CTAB. Our results show that the well-defined rod structure of the silver is essential for the catalytic activity. In this work, the catalytic activities of Ag-based nanorod catalysts with various Ag concentrations for methanol oxidation using the new and efficient oxidant zinc dichromate trihydrate are shown. It was found that increasing the Ag concentration in the catalyst significantly enhanced the activity of the methanol oxidation reaction.
INTRODUCTION
Formaldehyde is one of the most important chemical products worldwide [1]. Nowadays formaldehyde is increasingly used for the production of urea, phenolic, acetal and melamine resins. Two important routes have been used to produce formaldehyde on an industrial scale. In the first, the oxidation of methanol is carried out over a ferric molybdate catalyst in an excess of air at temperatures close to 400 °C [2-4]. The second route uses a thin layer of electrolytic silver catalyst [5-11] with a feed of a methanol-air mixture (approximately 1:1 molar ratio) in the temperature range 580-650 °C; a typical formaldehyde yield of 90% has been observed. Nowadays, most industrial production is based on the silver catalyst.
The impact of metal nanostructures is continually increasing as we become more capable of producing them with well-controlled sizes and shapes for fine-tuning their properties and further developing emerging applications. It has been established that the optical and magnetic properties of a metal nanostructure depend strongly not only on the size of the structure but also on its shape [15-18]. The presence of various ions has been shown to influence the shape and size of metallic nanostructures produced via the polyol method.
Previous research by Xia et al. has shown that the presence of iron(II) or iron(III) ions in the polyol synthesis facilitates the growth of silver nanowires or cubes, depending on the concentration of the iron ions [19]. Kylee Korte has shown that the presence of copper(I) or copper(II) chloride in the polyol reduction of silver nitrate allows the production of silver nanorods and nanowires, which can be used in many areas, including electronics and catalysis [20].
Here, we report an efficient, novel and easy one-pot synthesis of silver nanorods prepared by polyol reduction with CTAB (cetyltrimethylammonium bromide), in order to enhance their catalytic activity towards alcohol oxidation.
EXPERIMENTAL
The polyol method involves the reduction of a metal salt precursor by a polyol, a compound containing multiple hydroxyl groups. The polyol used in this synthesis, ethylene glycol, served as both the reducing agent and the solvent. The synthesis procedure for the silver nanorods is divided into two parts:
a) Preparation of silver seed - The Ag seeds were prepared by chemical reduction of AgNO3 by NaBH4 in the presence of trisodium citrate, which acts as a capping agent and helps to stabilize the nanoparticles. For the preparation of the 4 nm silver seed, a 20 mL solution with a final concentration of 0.25 mM AgNO3 and 0.25 mM trisodium citrate in water was prepared. While stirring vigorously, 0.6 mL of 10 mM NaBH4 was added all at once. Stirring was stopped after 30 s. This seed was used 2 h after preparation but could not be used after 5 h, as a thin film of particles appeared at the water surface. According to transmission electron microscopy, seed diameters were 4 ± 2 nm.
b) Preparation of Ag rods - First, six sets of solutions were prepared, each containing 0.25 mL of 10 mM AgNO3, 0.50 mL of 100 mM ascorbic acid, and 10 mL of 80 mM CTAB. Next, a varied amount of the 4 nm seed solution (2 mL, 1 mL, 0.5 mL, 0.25 mL, 0.125 mL or 0.06 mL) was added. Finally, 0.10 mL of 1 M NaOH was added to each set. The NaOH must be added last to obtain the desired nanorods in decent yield. After adding the NaOH, the solution was gently shaken just enough to mix the NaOH with the rest of the solution. Within 1-10 min a color change occurred, varying from red to brown to green depending on the seed concentration. Each solution contained a mixture of rods and spheres, with the aspect ratio of the rods increasing with decreasing seed concentration.
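As a quick check of the recipe above, the amounts of substance follow from n = C × V; the short calculation below simply restates the quoted concentrations and volumes in moles.

    # Amounts implied by the seed recipe above (n = C * V)
    n_agno3   = 0.25e-3 * 20e-3   # 0.25 mM AgNO3 in 20 mL         -> 5.0e-6 mol
    n_citrate = 0.25e-3 * 20e-3   # trisodium citrate, same C, V   -> 5.0e-6 mol
    n_nabh4   = 10e-3 * 0.6e-3    # 0.6 mL of 10 mM NaBH4          -> 6.0e-6 mol
    print(n_nabh4 / n_agno3)      # ~1.2: slight molar excess of reductant over Ag+

The same arithmetic applied to the rod-growth step gives 2.5 µmol AgNO3, 50 µmol ascorbic acid and 800 µmol CTAB per batch, so both the surfactant and the reductant are in large excess over silver.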
CATALYTIC ACTIVITY
The oxidation reaction of methanol was conducted in a CEM Discover microwave synthesizer in a 7 mL microwave vessel, with a feed composition of 1 mmol of methanol in acetic acid and 0.5 g of oxidant with different concentrations of the nanocatalyst. The power and temperature of the synthesizer were maintained at 100 W and 85 °C, and the reaction time was set to 5 min. The compositions of the feed liquids and the product were analyzed by gas chromatography, and reaction progress was monitored by thin-layer chromatography on prepared silica gel plates. The catalyst showed excellent properties in the catalytic oxidation of methanol to formaldehyde.
RESULTS AND DISCUSSION
The produced rods were relatively uniform in shape and size. Scanning electron microscopy (SEM) shows that the produced rods had a pentagonal cross-section and were, on average, approximately 3-5 µm in length.
To elucidate the role of CTAB: the optical data are in accord with what others have observed for the transverse and longitudinal plasmon bands of metallic nanorods [21, 22]. In the absence of CTAB, spheroidal nanorods (aspect ratio < 2.5) were unstable and reverted to spheres (as judged by the disappearance of the long-wavelength absorption band) within 10 min. In the absence of seed, silver ion reduction by ascorbic acid in the presence of CTAB yielded only a few rods, which varied in aspect ratio. Thus, both CTAB and silver seed are necessary for high-yield formation of the Ag nanorods.
The electronic absorption spectra of the silver nanorod solutions show the conventional 400 nm peak observed for spherical silver nanoparticles and another peak at longer wavelengths, due to the longitudinal plasmon band of rod-shaped particles [23]. Decreasing the amount of seed in the nanorod preparation led to a further red shift of the longer-wavelength longitudinal plasmon band in the nanorod products, implying that the silver rods increased in average aspect ratio as the seed concentration decreased. Fig. 2 shows that the 50 nm rods produce an absorption peak at 382 nm and the 100 nm rods at 425 nm.

For different Ag seed concentrations, the surface area decreased with increasing Ag seed concentration, but there was no clear correlation between surface area, Ag seed concentration and catalytic performance. Table 1 lists, for the various Ag seed concentrations, the formaldehyde yield obtained, identifying the best-performing catalyst.
Table 1: Effect of Ag seed concentration on formaldehyde yield (%) at 120 °C

    Ag seed concentration in catalyst soln. (mL)    Formaldehyde yield (%)
    0.125                                           76
    0.25                                            79
    0.5                                             83
    1                                               88
    2                                               72
Among all the Ag seed variations in the nanorod catalysts studied, the catalyst prepared with 1 mL of seed solution gave the highest formaldehyde yield.
Table 2: Effect of the amount of Ag nanorod catalyst on formaldehyde yield (%) at 120 °C

    Ag nanorod catalyst soln. (mL)    Formaldehyde yield (%)
    1                                 82
    1.5                               84
    2                                 88
    2.5                               92
    3                                 76
CONCLUSION
Ag nanorods prepared by the seed-mediated growth approach proved to be an excellent catalyst for the Ag-based oxidation reaction. Under our reaction conditions, a 2.5 mL Ag nanorod catalyst concentration was the most promising catalyst for formaldehyde production, giving at 120 °C a formaldehyde yield of 92% with selectivity up to 99%. Moreover, the catalyst produced no measurable CO2 at reaction temperatures below 548 K.
REFERENCES
1. M. Qian, M.A. Liauw, G. Emig, Appl. Catal. A 238, 211 (2003).
2. M. Badlani, I.E. Wachs, Catal. Lett. 75, 137-149 (2001).
3. L.E. Briand, A.M. Hirt, I.E. Wachs, J. Catal. 202, 268 (2001).
4. W.L. Holstein, C.J. Machiels, J. Catal. 162, 118 (1996).
5. A.N. Pestryakov, Catal. Today 28, 239 (1996).
6. A. Nagy, G. Mestl, T. Rühle, G. Weinberg, R. Schlögl, J. Catal. 179, 548 (1998).
7. W.L. Dai, Q. Liu, Y. Cao, J.F. Deng, Appl. Catal. A 175, 83 (1998).
8. A. Nagy, G. Mestl, Appl. Catal. A 188, 337 (1999).
9. G.I.N. Waterhouse, G.A. Bowmaker, J.B. Metson, Appl. Catal. A 265, 85 (2004).
10. G.I.N. Waterhouse, G.A. Bowmaker, J.B. Metson, Appl. Catal. A 266, 257 (2004).
11. A.N. Pestryakov, N.E. Bogdanchikova, A. Knop-Gericke, Catal. Today 91-92, 49 (2004).
12. K. Kelly, E. Coronado, L. Zhao, G. Schatz, J. Phys. Chem. B 107 (2003) 668.
13. L. Dick, A. McFarland, C. Haynes, R. Van Duyne, J. Phys. Chem. B 106 (2002) 853.
14. P. Kamat, J. Phys. Chem. B 106 (2002) 7729.
15. M. Chen, J. Kim, J.P. Liu, H. Fan, S. Sun, J. Am. Chem. Soc. 128 (2006) 7132.
16. B. Wiley et al., Polyol Synthesis of Silver Nanostructures: Control of Product Morphology with Fe(II) or Fe(III) Species, Langmuir 21, 8077-8080 (2005).
17. Y. Xia et al., One-Dimensional Nanostructures: Synthesis, Characterization, and Applications, Advanced Materials 15, 353-389 (2003).
18. B.M.I. van der Zande, M.R. Bohmer, L.G.J. Fokkink, C. Schonenberger, Langmuir, 2000, 16, 451.
19. Y.Y. Yu, S.S. Chang, C.L. Lee, C.R.C. Wang, J. Phys. Chem. B, 1997, 101, 6661.
20. C.A. Foss, G.L. Hornyak, J.A. Stockert, C.R. Martin, J. Phys. Chem., 1994, 98, 2963.

DIGITAL SIGNATURE VERIFICATION WITH OTP GENERATION BASED ON HIDDEN MARKOV MODEL
A. Manoj1, M. Ayyappan2, T. Rajesh Kumar3 and Dr. S. Padmapriya4
Student1,2 and Faculty3,4, Prathyusha Institute of Technology and Management, Poonamalee, Thiruvallur

ABSTRACT
Nowadays user authentication is mostly done by providing a user name and password, and by a signature in the banking sector. But this system is easy to crack by trying out different passwords at random. To eliminate these threats, a secure way of authenticating the user is proposed in this paper. We collect the user's signature initially and train the system with the captured signatures using a Hidden Markov Model. This signature is then verified by the system whenever the user tries to log in. After the verification of the signature, an OTP is generated as a six-digit combination of characters and numerals: the first three digits are generated by the server itself and the remaining three digits are typed in by the user. The signature can be obtained using either a mouse or a digital pen, which reduces the cost of implementation.
Key words: Client, Server, Feature construction, Hidden Markov Model, Behavior capture, OTP verification.
INTRODUCTION
A handwritten signature is the most usual method by which a person declares that they accept and take responsibility for a signed document. This method is extensively used by contemporary society and has a solid legal basis, accepted by the international community as a personal authentication method. However, the handwritten signature has certain disadvantages which have hindered its widespread use as a biometric modality. The main challenge currently faced by researchers is that samples taken from the same individual vary widely in shape and over time. Besides, forged signatures produced by impostors exhibit small interclass variation, which makes identifying them as intrusive users more difficult. However, an interesting advantage is that the acquisition process can be readily performed by electronic devices such as pen tablets, touch screens or PDAs [4]. These devices offer not only the possibility of capturing the stroke of the signature (spatial information represented by the horizontal and vertical pen position), but also other measurable characteristics such as pen pressure or pen angle versus time. The main objective of the project is to provide an effective and secure way of authenticating users to applications such as banking and other online sectors and services. There are three main phases by which the user's signature is captured, stored and compared whenever the user logs in to the system. The application by which the signature is obtained is developed using Java. After the initial registration phase, the system is trained with the user's signature twenty times, after which an account number and a PIN number are generated. After this phase, when the user tries to log in to the system, he or she is asked to input the signature, which is verified against the signature stored during the earlier phase. If it matches, an OTP is generated and the user is authenticated to access the data.
CLIENT
Client is the first module; the user is the actual client who is going to use the system to access data. In this module we implement the client interface by which the client can interact with the application. To access the application, the client has to register their details with the application server, providing information such as name, address, date of birth and mobile number. This information is stored in the database of the application server, and the user is allowed to access the application only through the provided interface. This module is also the user registration phase, or enrolment phase, where the basic information of the user is registered in the database on the server. Fig 1 shows the information filled in by the user or client at the time of registration.

Fig 1: User Registration


SERVER
The server stores and monitors all the users' information in its database and verifies it when required. The server also has to establish the connection to communicate with the users, and it updates each user's activities in its database. The server authenticates each user before they access the application, so that unauthorized users are prevented from accessing it. The initial stage in this phase is to capture the user's signature for the first time. Fig 2 shows how the signature is captured using a Java interface. At first, the user is asked to sign twice, and these signatures are stored in the database as the primary signatures of the user.

Fig 2: Obtain the Signature


LEARNING PHASE
In this phase, we train the system to identify the user's signature using the following modules. Once the initial stage of capturing the user's signature is completed, the system must be trained well enough to reconstruct the signature itself during feature construction. This phase involves three stages: A. Behavior Capture, B. Feature Construction and C. Training / Classification. It is called the learning phase since the system is made to capture the signature, construct it and classify it based on the strokes and points in the user's signature.
A. Behavior Capture
The first stage in this phase serves to create a mouse-operation task, and to capture and interpret mouse/pen behavior data [2]. This is done by capturing the strokes and points in the signature and interpreting them into the system. Consider a signature by a user which has 3 strokes and 265 points: the system must be able to capture all the strokes and points accurately and store these data in the database without any deviation from the actual signature.
B. Feature Construction
The second module is used to extract holistic and procedural features to characterize mouse/pen behavior. The captured signature is stored in the database in this phase. As the strokes and points of the signature are captured, they are stored in the database exactly as in the signature that was captured initially. Feature construction is followed by the training phase, where the Hidden Markov Model, a signal processing technique, is used to help construct the signature and train the system.
C. Training / Classification
The third module, in the training phase, applies a neural network and HMM and then builds the user's profile using a one-class classifier. In this final stage, the system uses the HMM to construct an approximate set of signatures from the training set stored in the database. The user is asked to sign up to twenty times, which helps the system create a set of signatures. This set is then used to construct, via the HMM, a class covering all possible variations that may occur in future signatures based on the training set. This class is used in the verification phase, where the user's signature is compared with it to determine whether the signature matches the one stored in the database.
HIDDEN MARKOV MODEL
A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be represented as the simplest dynamic Bayesian network. In simpler Markov models (like a Markov chain), the state is directly visible to the observer, and therefore the state transition probabilities are the only parameters. In a hidden Markov model, the state is not directly visible, but output dependent on the state is visible. Each state has a probability distribution over the possible output tokens; therefore the sequence of tokens generated by an HMM gives some information about the sequence of states. Note that the adjective 'hidden' refers to the state sequence through which the model passes, not to the parameters of the model; the model is still referred to as a 'hidden' Markov model even if these parameters are known exactly. Hidden Markov models are especially known for their application in temporal pattern recognition such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, partial discharges and bioinformatics. A hidden Markov model can be considered a generalization of a mixture model in which the hidden variables (or latent variables), which control the mixture component selected for each observation, are related through a Markov process rather than being independent of each other. Recently, hidden Markov models have been generalized to pairwise Markov models and triplet Markov models, which allow consideration of more complex data structures and the modeling of non-stationary data.
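To make this concrete, here is a minimal numpy sketch of the scaled forward algorithm, the standard way to compute how well an observation sequence (for instance, quantized stroke features of a signature) fits a trained HMM. The parameters are illustrative; a real verifier would learn them, e.g. with Baum-Welch, from the twenty enrolment signatures.

    import numpy as np

    def forward_log_likelihood(pi, A, B, obs):
        """log P(obs | model) for a discrete HMM via the scaled forward pass.
        pi: (N,) initial state probabilities; A: (N, N) row-stochastic transitions;
        B: (N, M) emission probabilities; obs: sequence of symbol indices."""
        alpha = pi * B[:, obs[0]]
        log_p = np.log(alpha.sum())
        alpha = alpha / alpha.sum()          # rescale to avoid numerical underflow
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]    # predict with A, then weight by emission
            c = alpha.sum()
            log_p += np.log(c)               # the scaling factors multiply to P(obs)
            alpha /= c
        return log_p

A signature would then be accepted when its per-observation log-likelihood exceeds a threshold fitted on the genuine training set, which is exactly the one-class decision described below.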
VERIFICATION PHASE
In the verification phase, the server verifies the user when they log in to their account: the signature provided by the user at login is compared with the signature provided during the training phase. If the signature does not match, the server does not allow the user to access the account. This is the final phase, where the system is put to the test. The user is asked to provide the account number and the PIN number. If this information is correct, the user is asked to input the signature for verification. The signature is then compared with the training set; if it matches, the user is authenticated to access the data, and if it mismatches, access is denied.

Fig 3: If the signature is not matched.


ONE-CLASS LEARNING ALGORITHM
One-class classification tries to describe one class of objects and distinguish it from all other possible objects. The boundary between the two classes has to be estimated from data of the genuine class only. The task is to define a boundary around the target class that accepts as many of the target objects as possible while minimizing the chance of accepting outlier objects.
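A toy version of such a one-class decision rule is sketched below, under the assumption that genuine-signature scores (for instance, the HMM log-likelihoods above) are roughly unimodal; the margin k is an assumed parameter, not one specified by the paper.

    import numpy as np

    class OneClassThreshold:
        """Toy one-class classifier: a boundary fitted from the genuine class only.
        Anything scoring below mean - k*std of the genuine scores is an outlier."""
        def __init__(self, k=3.0):
            self.k = k
        def fit(self, genuine_scores):
            s = np.asarray(genuine_scores, dtype=float)
            self.lower = s.mean() - self.k * s.std()
            return self
        def predict(self, score):
            return score >= self.lower   # True -> accept as genuine

The design choice mirrors the text: no forgery samples are needed at training time, only the enrolled user's own signatures.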
OTP VERIFICATION
Once the user has provided their signature correctly, the server generates a session key using a secure random number generation algorithm and sends it to the user's email id. Once the user has received the session key in their email, they have to provide the first three digits of the session key, and the server verifies the remaining three digits. Once the session key is verified by the server, the user is allowed to access their account.

Fig 4: OTP validation.
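A minimal sketch of this split session key, using Python's secrets module in place of the unspecified secure random number generator; the three-plus-three split follows the abstract, and all names here are illustrative.

    import secrets

    def issue_session_key():
        # Six-digit session key from a CSPRNG; leading zeros are preserved
        return f"{secrets.randbelow(10**6):06d}"

    def verify_user_half(issued_key, typed_half):
        """The user types three digits of the emailed key; the server compares
        them against its record in constant time to resist timing attacks."""
        return secrets.compare_digest(issued_key[:3], typed_half)

    key = issue_session_key()          # e.g. '042917', emailed to the user
    print(verify_user_half(key, key[:3]))   # True when the typed half matches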


SECURE RANDOM NUMBER GENERATION ALGORITHM
Pseudo-Random Number Generators (PRNGs) are algorithms that can automatically create long runs of numbers with good random properties. One of the most common PRNGs is the linear congruential generator. An example of a simple pseudo-random number generator is the multiply-with-carry method invented by George Marsaglia. It is computationally fast and has good (albeit not cryptographically strong) randomness properties.
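For illustration, here is a Marsaglia-style multiply-with-carry generator with base 2^16 and multiplier 36969 (one of Marsaglia's published constants): the low 16 bits of the state hold x_n and the high bits hold the carry c_n. As the text notes, this is fast but not cryptographically strong, so it suits simulation rather than session keys.

    def mwc16(seed=123456789):
        """Multiply-with-carry: x_{n+1} = (36969*x_n + c_n) mod 2**16,
        with the new carry kept in the high bits of the state."""
        state = seed
        while True:
            state = 36969 * (state & 0xFFFF) + (state >> 16)
            yield state & 0xFFFF        # emit the 16-bit x part

    g = mwc16()
    print([next(g) for _ in range(5)])  # five 16-bit pseudo-random values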
OUTPUT
The final output of the system is depicted in the figures above. If the signature of the user matches, the OTP is generated and the user is authenticated to access data. Otherwise the system shows that the signature is not matched, and access is denied.
CONCLUSION
This digital signature verification system can be implemented wherever high security and low-cost implementation of user authentication are required. The banking sector is the main example used here to explain the problem and solution, but the system may well be used in various other sectors where authenticating the user is an important security feature.
REFERENCES
1. S. Abe, Support Vector Machines for Pattern Classification. New York: Springer, 2005.
2. A. E. Ahmed and I. Traore, A new biometric technology based on mouse dynamics, IEEE Trans. Dependable and Secure Computing, vol. 4, no. 3, pp. 165-179, Jul./Sep. 2007.
3. A. E. Ahmed and I. Traore, Anomaly intrusion detection based on biometrics, in Proc. IEEE Information Assurance Workshop, West Point, NY, 2005, pp. 452-453.
4. Y. Aksari and H. Artuner, Active authentication by mouse movements, in Proc. 24th Int. Symp. Computer and Information Sciences, Guzelyurt, 2009, pp. 571-574.
5. S. Bengio and J. Mariethoz, A statistical significance test for person authentication, in Proc. Speaker and Language Recognition Workshop, Toledo, Spain, 2004, pp. 237-244.
6. D. J. Berndt and J. Clifford, Using dynamic time warping to find patterns in time series, in Proc. AAAI-94 Workshop on Knowledge Discovery in Databases, Jul. 1994, pp. 359-370.

ENHANCED CERTIFICATE REVOCATION USING CLUSTERING SCHEME FOR MANETs
N. R. Vaishnavi1, J. Omana2, Sherinkaran Wisebell3 and I. M. Thaya4
Assistant Professor, Department of IT, Prathyusha Institute of Technology and Management, Thiruvallur
ABSTRACT
Mobile ad hoc networks (MANETs) have been used widely in recent years because of their mobility and ease of deployment. In comparison with wired networks, their dynamic and wireless nature makes them more liable to different types of security attacks. To provide secure network communication, certificate revocation is used as a fundamental component: in order to prevent further damage, attackers are isolated from network activities by revoking their certificates. In the existing certificate revocation scheme a centralized certificate authority is deployed, which results in communication overhead. To solve this issue, the certificate authority's responsibility is shared with the cluster heads. This decentralized distribution brings effective cluster communication among nodes and reduces the revocation time.
Index Terms: Mobile Ad hoc Networks (MANETs), certificate revocation, cluster, security.
INTRODUCTION
A decentralized category of wireless network which is built spontaneously as devices connect is called an ad hoc network. In wired networks, routers are components of a controllable, fixed infrastructure. This is not the case in ad hoc networks, where nodes must act as both routers and communication end points. A mobile ad hoc network (MANET) is an infrastructure-less network of mobile devices connected wirelessly. In recent years, MANETs have drawn much attention due to their dynamic topology, ease of deployment, self-organization and mobility. A mobile ad hoc network does not rely on fixed infrastructure: a mobile node or mobile device can move freely in the network; it is an internet-connected device whose location and point of attachment to the network may change frequently. Mobile nodes can be laptops, cell phones and personal digital assistants (PDAs).
In addition to the features mentioned above, a mobile ad hoc network utilizes multihop relaying, by which nodes coordinate and forward packets through one or more intermediates when there is no direct communication between the source and destination nodes. The nodes in this type of network act as both end users and routers, receiving and passing on packets for other nodes. One more feature of MANETs is the open network environment, where nodes can join and leave the network without any restrictions [1]. A mobile ad hoc network is more vulnerable than wired networks due to its dynamic topology, scalability and lack of centralized management. Threats can come either from external attackers or from compromised nodes inside the network [2]. Hence, each node has to be authenticated by providing a certificate after validating its identity.
In general, the certificate is provided by a centralized certificate authority to all the nodes joining the network. If, instead, the responsibility of the authority is decentralized, this may be better suited to mobile ad hoc networks. If the certificate is forged or any other misbehaviour is carried out, the node cannot communicate further. Importantly, the routing protocol used should be effective in determining successful routing paths and message delivery, because this is challenging where the topology fluctuates. Hierarchical routing is generally preferred to flat routing when scalability is taken into account. Here, hierarchical routing is made achievable by organizing nodes into clusters. Each cluster has a distinct representative (head) to perform inter-cluster communication.
The remainder of this paper is organized as follows: section 2 focuses on related works and describes the different approaches used to provide security; section 3 focuses on the problem definition; in section 4 the implementation details are described; and section 5 focuses on the performance analysis.

RELATED WORKS
A. UBIQUITOUS AND ROBUST ACCESS CONTROL
The procedure of URSA access control emphasizes multiple-node consensus and fully localized instantiation [3]. That is, multiple nodes cooperate to monitor the behaviour of a node one or two hops away and decide whether that node is misbehaving or well-behaving. Every node which joins a mobile ad hoc network protected by URSA must have a valid certificate.
A new node joining the network obtains a certificate from a combination of existing nodes after its authenticity has been verified. Alternatively, new nodes are given a trial certificate and are allowed to forward packets (such a node can forward other nodes' packets by acting as an intermediate, but is not allowed to deliver its own packets)
and are closely monitored during the trial period. A node is marked as condemned in two scenarios. First, when a node nb determines by direct monitoring that a neighbour node is misbehaving, nb puts that node into its revocation list (RL), marks it as condemned and simultaneously disseminates a signed accusation. Second, when nb receives an accusation against another node, it checks whether the accuser is a condemned node in its RL. If so, the accusation is malicious and is dropped; if not, nb updates the RL entry of the accused node by adding the accuser to the node's accuser list. The accused node is marked as condemned and removed from the network once the number of accusers reaches k.
B. LOCALIZED CERTIFICATE REVOCATION SCHEME
In this scheme [4], nodes joining the network are provided with a valid certificate by the existing nodes. These certificates are used for network authentication, and nodes can verify the validity of the certificates because they know the public keys of the peers which issued them. In this scheme, nodes have to monitor other nodes' behaviour and disseminate accusations against suspected nodes. For disseminating accusation information, the scheme uses a self-healing community approach.
For instance, any node in the transmission range of nodes A and C can transmit packets from A to C. If a selfish or malicious node is present within the self-healing community, that node will not forward the packets it is supposed to forward; at that point any other node in the community can provide the service. A self-healing community remains functional as long as there is at least one well-behaving node in the community.
In this certificate revocation scheme, each participating node has to compile and maintain data on the basis of which it broadcasts accusation information about nodes in the network. The collected data are used to assign a quantitative value to the trustworthiness of a node. Accusations from a node are weighted according to the reliability of that node: the weight of an accusation is greater when the reliability of the accuser is higher, and vice versa. The certificate of a node is revoked when the sum of the accusation weights against it exceeds a threshold.
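This weighted-accusation rule reduces to a one-line check; the reliability scores and threshold below are illustrative values, not those of the cited scheme.

    def certificate_revoked(accusers, reliability, threshold=1.0):
        """Revoke when the reliability-weighted sum of accusations crosses
        the threshold. 'accusers' lists accuser ids; 'reliability' maps each
        node id to a trust score in [0, 1]."""
        return sum(reliability.get(a, 0.0) for a in accusers) > threshold

    reliability = {'n1': 0.9, 'n2': 0.4, 'n3': 0.1}
    print(certificate_revoked(['n1', 'n2'], reliability))  # 1.3 > 1.0 -> True
    print(certificate_revoked(['n3'], reliability))        # 0.1 <= 1.0 -> False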
A new node broadcasts its certificate to all nodes in the network. Each other node verifies the validity of the certificate; if it is valid, that node unicasts its profile table (signed by it) to the sender of the certificate. A profile table contains information about the behaviour of the nodes in the MANET. Once profile tables with valid signatures have been received from its network peers, the new node compiles its own profile table, based initially on the information contained in the tables it received. A profile table can be a packet of varying length, depending on the number of accusations made against the nodes. The profile table contains the following fields: Owner's ID (certificate serial number of the node that created the profile table), Node count (number of nodes in the network), Peer i ID (certificate serial number of an accused node; if this field is zero it indicates the end of the profile table), Certificate status (a 1-bit flag; if the bit is set, the certificate is revoked), and Accusation information (certificate serial number of the accuser and the date the accusation was made). By comparing the behaviour of a node with the profile table, misbehaviour can be detected. The disadvantages of these schemes (URSA and the localized certificate revocation scheme) are slow attack response and high storage and communication overhead.
C. SUICIDE FOR THE COMMON GOOD STRATEGY
When a node A detects misbehaviour by another node (say node B), node A broadcasts a signed suicide note (containing the identities of both A and B) into the network. The other nodes in the network verify the signature and, if it is valid, revoke both A and B. Revocation is achieved by adding the identities of both A and B to a blacklist and deleting all keys related to these nodes [5].
Properties of the strategy are:
• Low communication overhead: there is no need to exchange messages, i.e. voting information, between each and every node in the network.
• Very fast removal: there are no delays while waiting for votes or for a threshold to be met.
• Single-node detection: only one honest node has to detect misbehaviour to initiate revocation.
• Fully decentralized: there is no need to consult a central entity.

Even though this strategy reduces revocation time and communication overhead, it does not differentiate falsely accused nodes from malicious attackers. As a result, accuracy is degraded.

D. CERTIFICATE REVOCATION TO COPE WITH FALSE ACCUSATION
The nodes are organised into clusters, and the Cluster Head (CH) is able to detect false accusations made by Cluster Members (CMs) within the cluster. Only nodes with high reliability are allowed to become cluster heads. When a new node joins the network, a node-joining algorithm is carried out. When an attack is detected, the accuser node is put into the Warning List (WL) and the accused node into the Black List (BL) after the CA has verified the validity of the accuser.
Two types of accusation are made:
First, any normal node which has detected an attack from a neighbouring node can accuse that node. The accusation is made by sending an attack detection packet to the CA; the CA takes action on the first attack detection packet it receives. This attack detection packet includes not only the attacker's ID but also the accuser's ID. After the validity of the accuser has been verified, the accuser node and the accused node are put into the WL and BL respectively. Second, a malicious node may make a false accusation against a normal node and send an attack detection packet to the CA. The validity check here is that the accuser should not be in the WL; the accuser and accused are then put into the WL and BL.
The nodes in the WL can communicate with other nodes in the network but cannot become CH and cannot make further accusations. The nodes in the BL cannot communicate with other nodes, because their certificates are revoked and they are isolated from the network. To handle a false accusation, the CH sends a certificate recovery packet to the CA: the false accusation made by the malicious node against the legitimate node is detected by the CH, which restores the falsely accused node to the network. The accuser (the CH) is put into the WL, and the victim which was falsely accused is moved from the BL to the WL [6][7].

PROBLEM DEFINITION
As discussed above, the advantages and disadvantages of the schemes have been compared. In the existing scheme a centralized certificate authority is used. The authority provides certificates for all nodes via the cluster heads of the respective clusters. When a packet has to be forwarded, validation has first to be done by the authority to verify the node's authenticity, and this happens every time a transmission takes place. This affects the effectiveness of cluster communication.
In this paper, the responsibility of the certificate authority is split and given to the cluster heads of the respective clusters, so validation is done by the cluster head for the members of that particular cluster alone. This results in effective cluster communication among the nodes. A distributed and scalable protocol is also used to enhance the effectiveness of the scheme.

IMPLEMENTATION
This section describes cluster formation, the certificate authority's function, and the warning list and black list.
4.1 CLUSTER FORMATION
In the network, the mobile nodes are organized into clusters. Each cluster has a Cluster Head (CH), and the remaining nodes of the cluster are the Cluster Members (CMs), which lie within the transmission range of that particular CH. The CH elected is the node whose energy is highest among the nodes in the same transmission area. The availability of links among nodes in the network is verified by the periodic broadcast of HELLO messages: if a node receives a new hello message, it knows that a new link is available; if no hello messages are received from a neighbouring node within a particular time period, the link is considered disconnected. In this scheme, the node which has the maximum energy at that particular time is elected as the cluster head.
The CH generates a CH hello packet at regular intervals to announce its presence to the neighbouring nodes. A node which lies in the transmission range of the CH accepts the hello packet if it wishes to participate in that cluster, and in response sends a CM hello packet to confirm the link establishment. The role of the CH is to interact with all the nodes of its cluster and also to perform inter-cluster communication. An appropriate CH can reduce energy utilization and extend the network lifetime. Exactly one CH per cluster must be chosen during the selection process, because multiple CHs within the same cluster will result in routing issues [9].
After a random amount of time, the energy level of the CH may decrease, so the CH election takes place again to elect the node with the highest energy. If two nodes have the same energy level, the node with the maximum number of neighbours is elected as CH, as in the sketch below. The new CH collects all the information related to the cluster from the old CH, and the new CH election is reported to the authority.
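The election rule, highest residual energy with neighbour count as the tie-breaker, can be sketched as follows; the field names and sample values are illustrative.

    def elect_cluster_head(nodes):
        """Pick the node with the highest residual energy; ties are broken
        by the larger neighbour count, as described above."""
        return max(nodes, key=lambda n: (n['energy'], n['neighbours']))['id']

    cluster = [{'id': 'A', 'energy': 4.2, 'neighbours': 5},
               {'id': 'B', 'energy': 4.2, 'neighbours': 7},
               {'id': 'C', 'energy': 3.1, 'neighbours': 9}]
    print(elect_cluster_head(cluster))   # 'B': same energy as A, more neighbours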
Fig. 1: Cluster-based architecture [8]


4.2 CERTIFICATE AUTHORITY'S FUNCTION
An authority is established in the cluster-based architecture. Its responsibility is to issue certificates to all the CHs in the network, and the CHs in turn share the responsibility of the certificate authority by issuing certificates to all the CMs of their particular cluster. Whenever a transmission happens, the key information of the CM is validated by the CH. The transmission proceeds only when the validation succeeds; otherwise an error message is reported, which happens when the key information of the certificate has been altered and therefore mismatches.
Figure 1 clearly depicts the flow in the network. Initially, nodes move to certain areas and form clusters. Each cluster then elects a cluster head. This information is reported to the authority, which in turn provides certificates to the heads. Any change in the cluster head selection after a random amount of time is likewise reported to the authority.
4.3 WARNING LIST AND BLACK LIST
When a node tries to flood a large number of route requests into the network, it is deemed malicious. Those route requests are discarded by the CH which lies in the node's transmission range, and in this case the node creating the collision is added to the Black List (BL). The other case is a node which drops packets due to the expiry of their TTL value; such nodes are added to the Warning List (WL).
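A toy version of this listing rule is sketched below; the flooding threshold is an assumed parameter, not one given in the paper.

    def classify_node(route_requests_per_s, drops_from_ttl_expiry, flood_limit=20):
        """Flooding route requests -> Black List (certificate revoked, isolated);
        merely dropping packets on TTL expiry -> Warning List (may not accuse
        or become CH); otherwise the node stays in good standing."""
        if route_requests_per_s > flood_limit:
            return 'BL'
        if drops_from_ttl_expiry > 0:
            return 'WL'
        return 'OK'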

PERFORMANCE ANALYSIS
The simulation results are shown as line graphs. First, the performance of AODV and CBRP is evaluated under a TCP traffic pattern. The graph (Fig. 2) compares the two protocols in terms of Packet Delivery Ratio (PDR) for different numbers of nodes.

Fig. 2: No. of nodes vs. PDR


Fig. 3: No. of nodes vs. end-to-end delay


In the existing scheme, the centralized authority employed results in communication overhead; that is, in comparison, more control packets than data packets are transmitted in the network. The results show that deploying the decentralized authority of the proposed scheme reduces this overhead.

Fig. 4: Overhead vs. no. of nodes


The residual energy of the CHs is depicted below: figure 5 shows the energy the CH has at each point in time.

Fig. 5: Time vs. residual energy at CHs
By changing the authority's responsibility from centralized to decentralized, the revocation time is reduced. Figure 6 shows the revocation time for the existing and proposed schemes: to stop the malicious activity exhibited by malicious nodes, the certificate has to be revoked as soon as possible.
Fig. 6: No. of malicious nodes vs. revocation time

CONCLUSION AND FUTURE WORK
From the above analysis, the packet delivery ratio is better for the cluster-based routing protocol, and the proposed scheme reduces the overhead compared to the existing scheme. The revocation time is also reduced by this decentralized scheme. In future, a profile table may be added for each node to describe the behaviour of that node.

REFERENCES
1. Wei Liu, Hiroki Nishiyama, Nirwan Ansari, Jie Yang and Nei Kato, Cluster-based certificate revocation with vindication capability for mobile ad hoc networks, IEEE Transactions on Parallel and Distributed Systems, Feb. 2013.
2. L. Zhou and Z. J. Haas, Securing ad hoc networks, IEEE Network, vol. 13, no. 6, pp. 24-30, 1999.
3. H. Luo, J. Kong, P. Zerfos, S. Lu and L. Zhang, URSA: ubiquitous and robust access control for mobile ad hoc networks, IEEE/ACM Transactions on Networking, vol. 12, pp. 1049-1063, 2004.
4. G. Arboit, C. Crepeau, C. R. Davis and M. Maheswaran, A localized certificate revocation scheme for mobile ad hoc networks, Ad Hoc Networks, vol. 6, pp. 17-31, 2008.
5. J. Clulow and T. Moore, Suicide for the common good: a new strategy for credential revocation in self-organizing systems, ACM SIGOPS Operating Systems Review, vol. 40, pp. 18-21, 2006.
6. K. Park, H. Nishiyama, N. Ansari and N. Kato, Certificate revocation to cope with false accusations in mobile ad hoc networks, in Proc. 2010 IEEE 71st Vehicular Technology Conference: VTC2010-Spring, 2010.
7. W. Liu, H. Nishiyama, N. Ansari and N. Kato, A study on certificate revocation in mobile ad hoc networks, in IEEE International Conference 2011, Kyoto, Japan, Jun. 2011.
8. www.secs.oakland.edu/~shu/research.htm, accessed on 10.3.14.
9. Khalid Hussain et al., Cluster head selection scheme for WSN and MANET: a survey, World Applied Sciences Journal 23(5): 611-620, 2013.
10. B. Kannhavong et al., A survey of routing attacks in MANET, IEEE Wireless Communications Magazine, pp. 85-91, 2007.

LOCATION PRIVACY IN UBIQUITOUS COMPUTING
Purnima Pradhan1, Savita Singh2, Mili Patel3 and Pooja Khemka4
Student1,2 and Faculty3,4, Kirodimal Institute of Technology, Raigarh (C.G.)
ABSTRACT
The field of ubiquitous computing envisages an era when the average consumer owns hundreds or thousands of mobile and embedded computing devices. These devices will perform actions based on the context of their users, and therefore ubiquitous systems will gather, collate and distribute much more personal information about individuals than computers do today. Location information is a particularly useful form of context in ubiquitous computing, yet its unconditional distribution can be very invasive. This paper takes a different approach and argues that many location-aware applications can function with anonymised location data and that, where this is possible, its use is preferable to that of access control.
INTRODUCTION
Researchers in the Computer Science Lab at Xerox's PARC (Palo Alto Research Center) first articulated the idea of ubiquitous computing in 1988. Ubiquitous computing is the method of enhancing computer use by making many computers available throughout the physical environment while making them effectively invisible to the user.
Ubiquitous computing, or ubicomp, is the term given to the third era of modern computing. The first era was defined by the mainframe computer: a single large time-shared computer owned by an organization and used by many people at the same time. Second came the era of the PC, a personal computer primarily owned and used by one person and dedicated to them. The third era, ubiquitous computing, representative of the present time, is characterized by the explosion of small networked portable computing products in the form of smart phones, personal digital assistants (PDAs) and embedded computers built into many of the devices we own, resulting in a world in which each person owns and uses many computers. Each era has resulted in progressively larger numbers of computers becoming integrated into everyday life. Figure 1 represents the eras of modern computing [1].

Fig. 1: Three Eras of Modern Computing

Ubiquitous computing is a concept in software engineering and computer science where computing is made to appear everywhere and anywhere. In contrast to desktop computing, ubiquitous computing can occur using any device, in any location, and in any format. A user interacts with the computer, which can exist in many different forms, including laptop computers, tablets and terminals in everyday objects such as a fridge or a pair of glasses. The underlying technologies that support ubiquitous computing include the internet, advanced middleware,
operating systems, mobile code, sensors, microprocessors, new I/O and interfaces, networks, mobile protocols, location and positioning, and new materials. The idea behind ubiquitous computing is to surround ourselves with computers and software that are carefully tuned to offer us unobtrusive assistance as we navigate through our work and personal lives. Contrast this with the world of computers as we know them now.
CONTEXT-AWARE COMPUTING
Context-aware computing was first discussed in 1994 by Schilit and Theimer as software that adapts according to its location of use, the collection of nearby people and objects, and changes to those objects over time. Context is any information that can be used to characterize the situation of an entity. Applications that use context, whether on a desktop or in a mobile or ubiquitous computing environment, are called context-aware [2].
Dey and Abowd (2000a) define context awareness more generally with the following statement: "A system is context-aware if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task."
The promise of context-awareness is that computers will be able to understand enough of a user's current situation to offer services, resources or information relevant to the particular context. The attributes of context for a particular situation vary widely, and may include the user's location, current role (mother, daughter, office manager, soccer coach, etc.), past activity, and affective state. Beyond the user, context may include the current date and time, and other objects and people in the environment. The application of context may include any combination of these elements [3].
The Active Badge system was the first context-aware system. In this application (Fig. 2.1), users wore Active Badges: infrared transmitters that broadcast a unique identity code. As users moved throughout their building, a database was dynamically updated with information about each user's current location, the nearest phone extension, and the likelihood of finding someone at that location. When a phone call was received for a particular user, the receptionist used the database to forward the call to the last known location of that user, rather than blindly forwarding it to the user's office, where he might not be located. This application, along with much of the early work in context-aware computing, was focused on location-aware computing or, as it is more commonly known today, location-based services [4].

Fig. 2.1: Rendition of the original Active Badge application showing the location and certainty of Active Badge wearers
LOCATION TECHNOLOGIES
Location technology is a combination of methods and techniques for determining the physical location of an object or a person in the real world. Location-aware applications use the location of the target to add value to the services they provide. The ability to determine a user's location enables a variety of ubicomp applications that provide services and functionality appropriate to the specific location and context. Today, people use location-aware applications in almost every domain of life, including entertainment, navigation, asset tracking, health care monitoring, and emergency response [5].

1. LOCATION REPRESENTATION
Location is a position in physical space, and it can be represented in absolute, relative or symbolic form.

Fig. 2.2: An example of latitude and longitude angles to a point on Earth


The most common means of specifying a precise absolute location is using the point's degrees of latitude and longitude on the surface of the Earth, as defined by the geographic coordinate system. If Earth were a perfect ellipsoid, the latitude would measure the angle between the point and the equatorial plane from the center of Earth. In reality, however, the latitude, or the geodetic latitude, measures the angle between the equator and a line that is normal to the reference ellipsoid, which approximates the shape of Earth. The longitude measures the angle along the Equator to the point. A line that passes near the Royal Observatory, Greenwich, England, is accepted as the zero-longitude point and is called the Prime Meridian. Lines of constant latitude are called parallels and lines of constant longitude are called meridians. Meridians, unlike parallels, are not parallel and all intersect at the North and South Poles. This form of representation is often used in outdoor location systems such as GPS. See Fig. 2.2 for an example.
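To make this representation concrete, here is a short Python sketch (our own illustration, not part of any cited system) that treats Earth as a sphere of radius 6371 km and computes the great-circle distance between two latitude/longitude points with the haversine formula:

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (latitude, longitude) points in degrees.

    Earth is approximated as a sphere of radius 6371 km; a reference
    ellipsoid (as used for geodetic latitude) would give slightly
    different results.
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

# Example: distance from the Royal Observatory, Greenwich (zero longitude)
# to the Eiffel Tower.
print(round(haversine_km(51.4769, 0.0005, 48.8584, 2.2945), 1), "km")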

2. INFRASTRUCTURE AND CLIENT-BASED LOCATION SYSTEMS


There are three classes of location systems: client-based, network-based, and network-assisted. In a client-based location system, a device computes its own location without relying on the network infrastructure. An example of a client-based location system is GPS, in which a device equipped with a GPS chip calculates its own location using signals received from at least four GPS satellites.
In a network-based location system, the network infrastructure calculates the position of a device. An example of a network-based location system is the Active Badge system (Want et al., 1992), in which a badge carried by the user emits infrared (IR) signals captured by IR receivers in the ceiling. The receivers, in turn, transmit signal data to a networked processor that computes the badge's location.


In a network-assisted location system, both the device and the infrastructure participate in computing the location of the device. An example of a network-assisted location system is Assisted GPS, in which a device calculates its own location from its GPS measurements together with additional information about the GPS constellation received over the cellular link from the cellular network infrastructure.
PRIVACY IN UBIQUITOUS COMPUTING
To build ubicomp systems that are privacy-aware or privacy-respecting, one obviously has to first define what exactly is meant by privacy. Privacy can be defined as "a key value which underpins human dignity and other key values such as freedom of association and freedom of speech" [6].

1. PRIVACY AWARENESS SYSTEM (pawS)


Figure 3 shows an example of pawS in operation. Upon entering a ubicomp environment with a number of available services, a privacy beacon (1) announces the data collections of each service and their policies using a wireless communications channel such as Bluetooth or IrDA. In order to save energy, the mobile privacy assistant (2) that the user is carrying delegates this information to the user's personal privacy proxy residing somewhere on the Internet (3), which contacts the corresponding service proxies at their advertised addresses (4) and inquires about their privacy policies. After comparing those privacy policies to the user's privacy preferences, the user proxy decides to decline usage of the tracking service, which results in disabling the location tracking service of the video camera (5).

Fig.3: Overview of Privacy Management System
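The heart of this flow is the comparison between the policies a service announces and the user's stored preferences. The following Python sketch is a minimal illustration of that matching step; the policy format and all field names are our own invention, not the actual pawS protocol:

# A minimal sketch of the policy-comparison step in a pawS-style system.
# The policy format and field names are illustrative only.
USER_PREFERENCES = {
    "location": {"allow": False},                 # refuse location tracking
    "audio":    {"allow": True, "retention_days": 1},
}

def acceptable(service_policy, preferences=USER_PREFERENCES):
    """Return True only if every data collection a service announces is
    covered by a matching user preference."""
    for item in service_policy["collects"]:
        pref = preferences.get(item["type"])
        if pref is None or not pref["allow"]:
            return False
        if item.get("retention_days", 0) > pref.get("retention_days", 0):
            return False
    return True

camera_policy = {"service": "video-camera",
                 "collects": [{"type": "location", "retention_days": 30}]}

if not acceptable(camera_policy):
    print("declining service:", camera_policy["service"])  # tracking disabled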

2. GENERAL PRINCIPLES
Four principles for use in a ubiquitous computing (ubicomp) environment:
a) Notice: Given a ubicomp environment where it is often difficult for data subjects to realize that data collection is actually taking place, we will not only need mechanisms to declare collection practices (i.e., privacy policies), but also efficient ways to communicate these to the user (i.e., policy announcement).
b) Choice and consent: In order to give users a true choice, we need to provide a selection mechanism (i.e., privacy agreements) so that users can indicate which services they prefer.
c) Proximity and locality: The system should support mechanisms to encode and use locality information for collected data, which can enforce access restrictions based on the location of the person wanting to use the data.
d) Access and recourse: Our system needs to provide a way for users to access their personal information in a simple way through standardized interfaces (i.e., data access). Users should be informed about the usage of their data once it is stored, similar to the call lists that are often part of monthly phone bills (i.e., usage logs).


REFERENCES
1. Ubiquitous Computing Fundamentals, edited by John Krumm. Taylor & Francis, http://www.taylorandfrancis.com.
2. Alastair R. Beresford, Location Privacy in Ubiquitous Computing, http://www.cl.cam.ac.uk/TechReports/.
3. R. Jason Weiss (Development Dimensions International) and J. Philip Craiger (University of Nebraska-Omaha), Ubiquitous Computing.
4. Gregory D. Abowd and Elizabeth D. Mynatt (Georgia Institute of Technology), Charting Past, Present, and Future Research in Ubiquitous Computing.
5. P. Bahl and V. Padmanabhan, RADAR: An In-Building RF-Based User Location and Tracking System. In Proceedings of IEEE INFOCOM, Los Alamitos.
6. Marc Langheinrich (Institute of Information Systems), A Privacy Awareness System for Ubiquitous Computing Environments.



RESEARCH ISSUE IN: STUDY OF QUANTUM CRYPTOGRAPHY
Rupali Yadav 1, Neellima Manher 2, Mili Patel 3 and Pooja Khemka 4
Student1, 2 and Faculty3,4, Kirodimal Institute of Technology, Raigarh (C.G.)
ABSTRACT
Quantum cryptography is an emerging technology in which two parties can secure network communications by applying the phenomena of quantum physics. The security of these transmissions is based on the inviolability of the laws of quantum mechanics. Quantum cryptography relies on two important elements of quantum mechanics: the Heisenberg uncertainty principle and the principle of photon polarization. The Heisenberg uncertainty principle states that it is not possible to measure the quantum state of any system without disturbing that system. The principle of photon polarization states that an eavesdropper cannot copy unknown quantum bits, i.e., unknown quantum states, due to the no-cloning theorem, first presented by Wootters and Zurek in 1982. This research paper concentrates on the theory of quantum cryptography and how this technology contributes to network security [1].
In this paper, we analyze the interest of using quantum cryptography in 802.11 wireless networks, and propose a scheme integrating quantum cryptography into the 802.11i security mechanisms for the distribution of the encryption keys. The use of an apparatus network to provide alternative line-of-sight paths is also discussed [2].
INTRODUCTION
Quantum cryptography is based on the laws of quantum physics. These laws ensure that nobody can measure the state of an arbitrarily polarized photon carrying information without introducing disturbances that will be detected by legitimate users. Since the first QKD protocol, BB84, was proposed in 1984, research on quantum cryptography has made significant advances. Experiments with different QKD systems have been realized in fiber networks and over free space [1-4]. Notably, a turnkey service that uses quantum cryptography to frequently generate fresh secret keys has been commercialized in Switzerland. Current experiments aim at providing QKD service outdoors over long distances, in satellite networks or between buildings in a city. In these works, the communication entities of the QKD protocol are mainly system devices, not final mobile users. For instance, the communication entities in satellite networks are ground stations and the satellite. Our motivation for integrating quantum cryptography in mobile wireless networks is quite different, as indicated in Table I.
TABLE I. COMPARISON OF MOBILE WIRELESS NETWORKS

| Mobile wireless network | User mobility level | Coverage area                 | Terminals          | Applications                                                              |
| GSM                     | High                | Outdoor (order of kilometers) | Cell phone         | Voice calls                                                               |
| 802.11                  | Low                 | Indoor (< 100 m)              | Laptop, PDA        | Internet, e-commerce                                                      |
| Bluetooth               | Low                 | Indoor (< 10 m)               | Peripheral devices | Replacement of wires connecting devices in close proximity of each other  |
As given in Table I, GSM, or cellular networks in general, is a wide-area network used essentially outdoors to provide mobile users with telephone service. As voice calling is the main application of GSM networks, the terminals are small cell phones allowing mobile users to move with a high level of mobility. The speed of mobile users in a GSM network can be walking speed or vehicle speed. With this level of mobility and the outdoor environment, the cellular network presents some disadvantages for the use of quantum cryptography. It is difficult to provide a line-of-sight path with a high user mobility level, and the outdoor environment is not ideal for free-space quantum cryptography, where the noise level can be high [2].
Since the 1910s, one-time pad (OTP) cryptosystems have been in use. The crypto-key length of the OTP is the same as the length of the plaintext. If the key is never reused, truly random, and kept secret, the OTP can be proven to be unbreakable. But the difficulty


of securely sharing the key has prevented it from becoming practical. Quantum cryptography, or quantum key distribution (QKD), is an innovative technology that allows a more practical implementation of the classic OTP. This is because quantum cryptography enables two distant parties (say Alice and Bob) to generate a secret key whose privacy is guaranteed by the use of quantum physics. The secret key, when used in an OTP cryptosystem, provides perfect security. [3]
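As a concrete illustration of the OTP itself, independent of how the key is distributed, the Python sketch below encrypts a message by XORing it with a random key of equal length; decryption is the same XOR. This is a toy demonstration, not a hardened implementation:

import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    """XOR data with a key of equal length (one-time pad); the same
    operation both encrypts and decrypts."""
    assert len(key) == len(data), "OTP key must match message length"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # truly random, used once, kept secret

ciphertext = otp_xor(message, key)
recovered = otp_xor(ciphertext, key)      # XOR with the same key decrypts
assert recovered == message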

Figure 1. Difference between Quantum & Classical Cryptography


WHAT ARE THE APPLICATIONS OF CRYPTOGRAPHY?
Cryptography is used in sending encrypted messages and in digital signatures. The HTTPS protocol uses cryptography extensively; it helps you send passwords, credit card information, etc. without a third party being able to see them.
It is used to scramble data so that only certain people who know how to decode it can read it.
You could apply this if you wanted to send someone a note without letting the people who transport the note read it.
It is used a lot today to send and receive messages securely [8].

HOW DOES QUANTUM CRYPTOGRAPHY WORK?
Quantum cryptography uses photons to transmit a key. Once the key is transmitted, coding and encoding using the normal secret-key method can take place. But how does a photon become a key? How do you attach information to a photon's spin? This is where binary code comes into play. Each type of photon spin represents one piece of information, usually a 1 or a 0, in binary code. This code uses strings of 1s and 0s to create a coherent message. For example, 11100100110 could correspond to h-e-l-l-o. So a binary code can be assigned to each photon; for example, a photon that has a vertical spin ( | ) can be assigned a 1. Alice can send her photons through randomly chosen filters and record the polarization of each photon. She will then know what photon polarizations Bob should receive. When Alice sends Bob her photons using an LED, she'll randomly polarize them through either the X or the + filter, so that each polarized photon has one of four possible states: vertical ( | ), horizontal ( - ), or one of the two diagonals ( / and \ ) [source: Vittorio]. As Bob receives these photons, he decides whether to measure each with either his + or X filter; he can't use both filters together. Keep in mind, Bob has no idea which filter to use for each photon; he's guessing for each one. After the entire transmission, Bob and Alice have a non-encrypted discussion about the transmission. [5]
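The filter-guessing procedure above is the sifting step of BB84. The Python sketch below is a simplified model of it (our own illustration, with no eavesdropper): Alice and Bob compare bases publicly, never bits, and keep only the positions where the bases happened to match:

import random

BASES = ("+", "x")  # rectilinear and diagonal filter choices

def bb84_sift(n_photons=16, seed=1):
    """Toy BB84 sketch: Alice sends random bits in random bases; Bob
    measures in random bases; only positions with matching bases are
    kept (the sifted key)."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [rng.choice(BASES) for _ in range(n_photons)]
    bob_bases   = [rng.choice(BASES) for _ in range(n_photons)]

    # If Bob's basis matches Alice's he reads her bit; otherwise the
    # outcome is random (quantum behaviour, modelled by a coin flip).
    bob_bits = [a if ab == bb else rng.randint(0, 1)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    # Public discussion: compare bases only, never the bits themselves.
    return [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]

print("sifted key:", bb84_sift())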

Figure 2. Working of Quantum Cryptography


QUANTUM CRYPTOGRAPHY
Quantum cryptography is only used to produce and distribute a key, not to transmit any message data. Quantum cryptography is different from traditional cryptographic systems in that it relies more on physics, rather than mathematics, as a key aspect of its security model. Quantum cryptography uses our current knowledge of physics to develop a cryptosystem that cannot be defeated; that is, one that is completely secure against being compromised without the knowledge of the sender or the receiver of the messages.
QUANTUM CRYPTOGRAPHY TECHNOLOGY
Experimental implementations of quantum cryptography have existed since 1990, and today quantum cryptography is performed over distances of 30-40 kilometers using optical fibers. Essentially, two technologies make quantum key distribution possible: the equipment for creating single photons and that for detecting them. The ideal source is a so-called photon gun that fires a single photon on demand. As yet, nobody has succeeded in building a practical photon gun, but several research efforts are under way. Some researchers are working on a light-emitting p-n junction that produces well-spaced single photons on demand. Others are working with a diamond-like material in which one carbon atom in the structure has been replaced with nitrogen. That substitution creates a vacancy similar to a hole in a p-type semiconductor, which emits single photons when excited by a laser. Many groups are also working on ways of making single ions emit single photons. None of these technologies, however, is mature enough to be used in current quantum cryptography experiments. As a result, physicists have to rely on other techniques that are by no means perfect from a security viewpoint. Most common is the practice of reducing the intensity of a pulsed laser beam to such a level that, on average, each pulse contains only a single photon. [1]
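To see why attenuated laser pulses are imperfect, note that the photon number in such a pulse follows a Poisson distribution. The short sketch below (our own illustration) shows that for a mean photon number of 0.1 most pulses are empty, and a few percent of the non-empty pulses carry more than one photon, which weakens security:

import math

def poisson(n, mu):
    """Probability that a weak laser pulse with mean photon number mu
    contains exactly n photons."""
    return math.exp(-mu) * mu**n / math.factorial(n)

mu = 0.1                        # an illustrative mean photon number
p0 = poisson(0, mu)             # empty pulse
p1 = poisson(1, mu)             # ideal single-photon pulse
p_multi = 1 - p0 - p1           # insecure multi-photon pulses

print(f"empty: {p0:.3f}, single: {p1:.3f}, multi: {p_multi:.4f}")
print(f"fraction of non-empty pulses with >1 photon: {p_multi / (1 - p0):.3%}")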

Figure 3. A Quantum Cryptographic communication system for securely transferring random key

QUANTUM CRYPTOGRAPHY: ADVANTAGES AND DISADVANTAGES


Quantum cryptography depends on physics and not mathematics to decode a message. It was first used by the
Swiss to ensure that the votes cast during the parliamentary elections would not be tampered with. The votes
were transmitted using a secure encryption encoded by a key generated using photons.
ADVANTAGES OF QUANTUM CRYPTOGRAPHY
Virtually un-hackable.
Simple to use.
Fewer resources needed to maintain it.
DISADVANTAGES OF QUANTUM CRYPTOGRAPHY
The signal is currently limited to 90 miles.
Could replace a lot of jobs [6].

CONCLUSION
For the first time in history, the security of cryptography no longer depends on the computing resources of the adversary, nor does it depend on mathematical progress. Quantum cryptography allows the exchange of encryption keys whose secrecy is future-proof and guaranteed by the laws of quantum physics. Its combination with conventional secret-key cryptographic algorithms allows raising the confidentiality of data transmissions to an unprecedented level. Quantum cryptography allows reaching unprecedented levels of security, guaranteed by quantum physics, for data transmissions over optical networks.
Recognizing this fact, the MIT Technology Review and Newsweek magazine identified quantum cryptography in 2003 as one of the "ten technologies that will change the world" [7].
REFERENCES
1. Rishi Dutt Sharma, Computer Science Department, Ambedkar Institute of Technology, G.G.S.I.P.U., New Delhi. rishi.abes@gmail.com
2. Thi Mai Trang Nguyen, Mohamed Ali Sfaxi and Solange Ghernaouti-Helie, University of Lausanne, HEC-INFORGE, CH-1015 Lausanne, Switzerland. Email: trnguyen@ieee.org, {mohamedali.sfaxi, sgh}@unil.ch
3. Mohamed Elboukhari (Dept. of Mathematics & Computer Science, FSO, University Mohamed Ist, Morocco), Mostafa Azizi (Dept. of Applied Engineering, ESTO, University Mohamed Ist, Oujda, Morocco) and Abdelmalek Azizi (Academy Hassan II of Sciences & Technology, Rabat, Morocco). elboukharimohamed@gmail.com, azizi.mos@gmail.com, abdelmalekazizi@yahoo.fr
4. Richard J. Hughes, D. M. Alde, P. Dyer, G. G. Luther, G. L. Morgan and M. Schauer, University of California, Physics Division, Los Alamos National Laboratory, Los Alamos, NM 87545.
5. http://www.whitec0de.com/quantum-cryptography/
6. http://masonstudent4444.blogspot.in/2011/10/quantum-cryptography-advantages-and.html
7. M. Indra Sena Reddy, K. Subba Reddy (Dept. of CSE, RGMCET, India), M. Purushotham Reddy (Dept. of CSE, VBIT, India), P. J. Bhat and Rajeev (ISRO Satellite Centre, Bangalore, India). mir555mittapalli@gmail.com
8. https://answers.yahoo.com/question/index?qid=20080317203915AASbfHb



ARTIFICIAL INTELLIGENCE
Amanjot Kaur
Assistant Professor, S.D.S.P.M. College for Women, Rayya, Punjab
ABSTRACT
Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent
computer programs. It is related to the similar task of using computers to understand human intelligence, but
AI does not have to confine itself to methods that are biologically observable. AI involves two basic ideas. First,
it involves studying the thought processes of human beings. Second, it deals with representing those processes
via machines such as robots, computers, etc. Natural language processing refers to artificial intelligence
methods of communicating with a computer in a natural language like English. The main objective of an NLP program is to understand input and initiate action.
Keywords: Artificial Neural Network, Neuro-Dynamics, Turing Test
1. INTRODUCTION
AI is the study of how to make computers do things which, at the moment, people do better. This is ephemeral, as it refers to the current state of computer science, and it excludes a major area: problems that cannot be solved well either by computers or by people at the moment.
Artificial Intelligence is concerned with the design of intelligence in an artificial device. There are two ideas in
the definition.
(i) Intelligence
(ii) Artificial device
Accordingly there are two possibilities:
(i) A system with intelligence is expected to behave as intelligently as a human
(ii) A system with intelligence is expected to behave in the best possible manner
Secondly, what type of behavior are we talking about?
(i) Are we looking at the thought process or reasoning ability of the system?
(ii) Or are we only interested in the final manifestations of the system in terms of its actions?
Given this scenario, different interpretations have been used by different researchers to define the scope and view of Artificial Intelligence.
1. One view is that artificial intelligence is about designing systems that are as intelligent as humans. This
view involves trying to understand human thought and an effort to build machines that emulate the
human thought process. This view is the cognitive science approach to AI.
2. The second approach is best embodied by the concept of the Turing Test. Turing held that, in the future, computers could be programmed to acquire abilities rivalling human intelligence. As part of his argument
Turing put forward the idea of an 'imitation game', in which a human being and a computer would be
interrogated under conditions where the interrogator would not know which was which, the
communication being entirely by textual messages. Turing argued that if the interrogator could not
distinguish them by questioning, then it would be unreasonable not to call the computer intelligent.
Turing's 'imitation game' is now usually called 'the Turing test' for intelligence.
1.1 THE TURING TEST
This test, proposed by Alan Turing (Turing, 1950), was designed to provide a satisfactory operational
definition of intelligence. Turing defined intelligent behavior as the ability to achieve human-level performance
in all cognitive tasks, sufficient to fool an interrogator. Roughly speaking, the test he proposed is that the
computer should be interrogated by a human via a teletype, and passes the test if the interrogator cannot tell if
there is a computer or a human at the other end. Programming a computer to pass the test provides plenty to
work on. The computer would need to possess the following capabilities:
Natural language processing to enable it to communicate successfully in English (or some other human language);
Knowledge representation to store information provided before or during the interrogation;
Automated reasoning to use the stored information to answer questions and to draw new conclusions;

Machine learning to adapt to new circumstances and to detect and extrapolate patterns.

Turing's test deliberately avoided direct physical interaction between the interrogator and the computer, because physical simulation of a person is unnecessary for intelligence. However, the so-called total Turing Test includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects "through the hatch." To pass the total Turing Test, the computer will need
Computer vision to perceive objects, and

Robotics to move them about.

Within AI, there has not been a big effort to try to pass the Turing test. The issue of acting like a human comes
up primarily when AI programs have to interact with people, as when an expert system explains how it came to
its diagnosis, or a natural language processing system has a dialogue with a user. These programs must behave
according to certain normal conventions of human interaction in order to make themselves understood. The
underlying representation and reasoning in such a system may or may not be based on a human model.

1.2 TYPICAL AI PROBLEMS


While studying the typical range of tasks that we might expect an intelligent entity to perform, we need to
consider both common-place tasks as well as expert tasks. Examples of common-place tasks include
(i) Recognizing people, objects.
(ii) Communicating (through natural language).
(iii) Navigating around obstacles on the streets.
These tasks are done matter-of-factly and routinely by people and some other animals. Expert tasks include:
Medical diagnosis.
Mathematical problem solving
Playing games like chess
These tasks cannot be done by all people, and can only be performed by skilled specialists.
Now, which of these tasks are easy and which ones are hard? Clearly tasks of the first type are easy for humans
to perform, and almost all are able to master them. The second range of tasks requires skill development and/or
intelligence and only some specialists can perform them well. However, when we look at what computer
systems have been able to achieve to date, we see that their achievements include performing sophisticated
tasks like medical diagnosis, performing symbolic integration, proving theorems and playing chess.
On the other hand it has proved to be very hard to make computer systems perform many routine tasks that all
humans and a lot of animals can do. Examples of such tasks include navigating our way without running into
things, catching prey and avoiding predators. Humans and animals are also capable of interpreting complex
sensory information. We are able to recognize objects and people from the visual image that we receive. We are
also able to perform complex social functions.

1.3 INTELLIGENT BEHAVIOUR


This discussion brings us back to the question of what constitutes intelligent behaviour. Some of these tasks and
applications are:
Perception involving image recognition and computer vision

Reasoning

Learning

Understanding language involving natural language processing, speech processing

Solving problems

Robotics

1.4 PRACTICAL IMPACT OF AI


AI components are embedded in numerous devices, e.g., in copy machines for the automatic correction of operation to improve copy quality. AI systems are in everyday use for identifying credit card fraud, for advising
doctors, for recognizing speech and in helping complex planning tasks. Then there are intelligent tutoring
systems that provide students with personalized attention.


Thus AI has increased understanding of the nature of intelligence and found many applications. It has helped in
the understanding of human reasoning, and of the nature of intelligence. It has also helped us understand the
complexity of modeling human reasoning.
1.5 APPROACHES TO AI
Strong AI: aims to build machines that can truly reason and solve problems. These machines should be self-aware, and their overall intellectual ability needs to be indistinguishable from that of a human being.
Excessive optimism in the 1950s and 1960s concerning strong AI has given way to an appreciation of the
extreme difficulty of the problem. Strong AI maintains that suitably programmed machines are capable of
cognitive mental states.

Weak AI: deals with the creation of some form of computer-based artificial intelligence that cannot truly
reason and solve problems, but can act as if it were intelligent. Weak AI holds that suitably programmed
machines can simulate human cognition.

Applied AI: aims to produce commercially viable "smart" systems such as, for example, a security system
that is able to recognize the faces of people who are permitted to enter a particular building. Applied AI has
already enjoyed considerable success.

Cognitive AI: computers are used to test theories about how the human mind works: for example, theories
about how we recognize faces and other objects, or about how we solve abstract problems.

1.6 LIMITS OF AI TODAY


Today's successful AI systems operate in well-defined domains and employ narrow, specialized knowledge. Common-sense knowledge is needed to function in complex, open-ended worlds. Such a system also needs to understand unconstrained natural language. However, these capabilities are not yet fully present in today's intelligent systems.
What can AI systems do?
Today's AI systems have been able to achieve limited success in some of these tasks.
In computer vision, the systems are capable of face recognition.
In robotics, we have been able to make vehicles that are mostly autonomous.
In natural language processing, we have systems that are capable of simple machine translation.
Today's expert systems can carry out medical diagnosis in a narrow domain.
Speech understanding systems are capable of recognizing several thousand words of continuous speech.
Planning and scheduling systems have been employed in scheduling experiments with the Hubble Space Telescope.
Learning systems are capable of text categorization into about 1,000 topics.
In games, AI systems can play at the grandmaster level in chess (world champion), checkers, etc.

What can AI systems NOT do yet?


Understand natural language robustly (e.g., read and understand articles in a newspaper)
Surf the web
Interpret an arbitrary visual scene
Learn a natural language
Construct plans in dynamic real-time domains
Exhibit true autonomy and intelligence
1.7 EXPERT SYSTEMS
Expert Systems attempt to capture the knowledge of a human expert and make it available through a computer
program. There have been many successful and economically valuable applications of expert systems.
Benefits
Reducing skill level needed to operate complex devices.
Diagnostic advice for device repair.
Interpretation of complex data.
"Cloning" of scarce expertise.
Capturing knowledge of expert who is about to retire.
Intelligent training.


2. ARTIFICIAL NEURAL NETWORK (ANN)
ANN is a computational structure designed to mimic biological neural networks. It consists of computational
units called neurons, which are connected by means of weighted interconnections. The weight of an
interconnection is a number that expresses the strength of the associated interconnection. The main
characteristic of ANNs is their ability to learn. The learning process is achieved by adjusting the weights of the
interconnections according to some applied learning algorithms. Therefore, the basic attributes of ANNs can be classified into architectural attributes and neuro-dynamic attributes. The architectural attributes define the network structure, i.e., the number and topology of neurons and their interconnectivity. The neuro-dynamic attributes define the functionality of the ANN. But despite the successful implementation of ANNs in solving various problems, considered from the perspective of artificial intelligence they lack one very important aspect of the human brain: after learning is finished, a neural network gives the same output for the same input without referencing the current context, unlike the human brain, which takes decisions according to the problem as well as the context (here, context means emotional state) in which the problem arose. The human brain takes decisions depending upon the current conditions (sensed by the five senses), experience, and state of mind (something related to emotions), while an ANN takes decisions on the basis of current conditions (as in the human brain) and training quality (somewhat similar to experience) only. It does not take emotions into consideration while taking a decision. Think about a human being without emotions. The following questions help us realize the importance of emotions in an intelligent agent: Is there any relation between intelligence and emotion? Is there any relation between a decision taken by a human and the emotional state of the human at that time? These are some of the questions that we address in this paper.
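As a minimal illustration of learning by adjusting interconnection weights, the sketch below (our own example, not taken from the paper) trains a single artificial neuron on the logical AND function using the classic perceptron update rule:

# Train one artificial neuron (weighted interconnections + threshold)
# on logical AND using the perceptron learning rule.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # interconnection weights
b = 0.0          # bias (threshold)
lr = 0.1         # learning rate

for epoch in range(20):
    for (x1, x2), target in samples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        # Learning = adjusting the weights of the interconnections
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

for (x1, x2), target in samples:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)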
3. INTELLIGENT AGENTS
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that
environment through actuators.
Rational Agent
A rational agent is one that does the right thing: conceptually speaking, every entry in the table for the agent function is filled out correctly. Obviously, doing the right thing is better than doing the wrong thing. The right action is the one that will cause the agent to be most successful.
Performance measures
A performance measure embodies the criterion for success of an agent's behavior. When an agent is plunked
down in an environment, it generates a sequence of actions according to the percepts it receives. This sequence
of actions causes the environment to go through a sequence of states. If the sequence is desirable, then the agent
has performed well.
Omniscience, learning, and autonomy
An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality. Doing actions in order to modify future percepts, sometimes called information gathering, is an important part of rationality. Our definition requires a rational agent not only to gather information, but also to learn as much as possible from what it perceives. To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy. A rational agent should be autonomous: it should learn what it can to compensate for partial or incorrect prior knowledge.
TASK ENVIRONMENTS
We must think about task environments, which are essentially the "problems" to which rational agents are the
"solutions."
SPECIFYING THE TASK ENVIRONMENT
The rationality of the simple vacuum-cleaner agent, needs specification of
the performance measure
the environment
the agent's actuators and sensors.
SINGLE AGENT VS. MULTIAGENT.
An agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment. As one might expect, the hardest case is partially observable, stochastic, sequential, dynamic, continuous, and multiagent.


Agent programs
The agent programs all have the same skeleton: they take the current percept as input from the sensors and return an action to the actuators. Notice the difference between the agent program, which takes the current percept as input, and the agent function, which takes the entire percept history. The agent program takes just the current percept as input because nothing more is available from the environment; if the agent's actions depend on the entire percept sequence, the agent will have to remember the percepts.
Simple Reflex Agent
The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the current
percept, ignoring the rest of the percept history.
Select action on the basis of only the current percept, e.g. the vacuum agent.
Large reduction in possible percept/action situations.
Implemented through condition-action rules, e.g. "if dirty then suck" (a minimal sketch follows below).
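A minimal sketch of such a simple reflex agent for the two-location vacuum world (the rule table and names are illustrative):

# A simple-reflex vacuum agent: the action depends only on the
# current percept (location, dirty?), never on percept history.
RULES = {
    True:  "Suck",          # condition-action rule: if dirty then suck
    False: None,            # clean -> fall through to movement rules
}

def reflex_vacuum_agent(percept):
    location, dirty = percept          # e.g. ("A", True)
    action = RULES[dirty]
    if action is None:
        action = "Right" if location == "A" else "Left"
    return action

print(reflex_vacuum_agent(("A", True)))   # Suck
print(reflex_vacuum_agent(("A", False)))  # Right
print(reflex_vacuum_agent(("B", False)))  # Left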

Model-based reflex agents


The most effective way to handle partial observability is for the agent to keep track of the part of the world it
can't see now. That is, the agent should maintain some sort of internal state that depends on the percept history
and thereby reflects at least some of the unobserved aspects of the current state.
Goal-based agents
Knowing about the current state of the environment is not always enough to decide what to do. For example, at
a road junction, the taxi can turn left, turn right, or go straight on. The correct decision depends on where the
taxi is trying to get to. In other words, as well as a current state description, the agent needs some sort of goal
information that describes situations that are desirable, for example, being at the passenger's destination. The
agent program can combine this with information about the results of possible actions (the same information as
was used to update internal state in the reflex agent) in order to choose actions that achieve the goal.
Utility-based agents
Goals alone are not really enough to generate high-quality behavior in most environments. For example, there
are many action sequences that will get the taxi to its destination (thereby achieving the goal) but some are
quicker, safer, more reliable, or cheaper than others. Goals just provide a crude binary distinction between
"happy" and "unhappy" states, whereas a more general performance measure should allow a comparison of
different world states according to exactly how happy they would make the agent if they could be achieved.
Because "happy" does not sound very scientific, the customary terminology is to say that if one world state is
preferred to another, then it has higher utility for the agent.
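A toy sketch of a utility-based choice (all numbers, weights, and names are made up for illustration): several routes reach the goal, and the agent picks the one whose outcome has the highest utility rather than merely any goal-satisfying one:

# A utility-based choice among routes that all reach the goal:
# the agent ranks outcomes by a utility function instead of the
# binary goal test.
routes = {
    "highway":   {"time_min": 20, "risk": 0.02},
    "back_road": {"time_min": 35, "risk": 0.005},
    "shortcut":  {"time_min": 15, "risk": 0.10},
}

def utility(outcome):
    # Happier when faster and safer; the weights are arbitrary.
    return -outcome["time_min"] - 200 * outcome["risk"]

best = max(routes, key=lambda r: utility(routes[r]))
print("chosen route:", best)   # trades speed against safety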
4. REFERENCES
1. Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, Third Edition. PHI.
2. Haugeland, J. (Ed.). (1985). Artificial Intelligence: The Very Idea. MIT Press.
3. Bellman, R. E. (1978). An Introduction to Artificial Intelligence: Can Computers Think? Boyd & Fraser
Publishing Company.
4. Charniak, E. and McDermott, D. (1985). Introduction to Artificial Intelligence. Addison-Wesley.
5. Winston, P. H. (1992). Artificial Intelligence (Third edition). Addison-Wesley.
6. Kurzweil, R. (1990). The Age of Intelligent Machines. MIT Press.



STUDY OF E-VOTING SYSTEM WITH MULTI SECURITY USING BIOMETRIC
Shashikala Khandey1, Kavita Patel2, Mili Patel3 and Pooja Khemka4
Student1, 2 and Faculty3,4, Kirodimal Institute of Technology, Raigarh (C.G.)
ABSTRACT
In this paper, an e-voting system is proposed with secure user authentication, providing biometric as well as password security for voter accounts. This paper presents an overview of the development and implementation of a biometric electronic voting system. The specific peculiarities of secure authentication to a system are various, and for a sensitive area like e-voting also challenging. In this paper we evaluate biometric systems in order to prove their capabilities for e-voting systems.
Keywords: Electronic voting, Requirements of e-voting, Architecture of a biometric electronic voting system, System implementation, Biometric systems, Fingerprint, Authentication.
INTRODUCTION
This contribution tries to look into e-voting, with regard to the necessary citizen authorization, from a different angle. Instead of concepts such as one-time passwords or smart cards, we look into the pros and cons of a biometric approach [1]. As modern communication and the internet today are almost universally accessible electronically, computer technology users bring an increasing need for electronic services and their security. The use of new technology in the voting process can naturally improve elections. This new technology refers to electronic voting systems, where election data is recorded, stored, and processed primarily as digital information.
REQUIREMENT OF E-VOTING
The requirements of the traditional voting process are also applicable to e-voting; some of them are mentioned below [3]:
1. Fairness: No person can learn the voting outcomes before the tally.
2. Eligibility: Only eligible voters are allowed to cast their vote.
3. Uniqueness: No voter is allowed to cast their vote more than once.
4. Privacy: No person can access the information about a voter's vote.
5. Accuracy: All the valid votes should be counted correctly.
6. Efficiency: The counting of votes can be performed within a minimum amount of time.
ARCHITECTURE OF THE PROPOSED SYSTEM
Biometrics is the science that tries to capture human biological features with an automated machine, either to identify or to authenticate. Biometric products eliminate the need for passwords and Personal Identification Numbers (PINs). Biometric systems work with an individual's features, such as fingerprint or proximity identification, and make it comfortable and fast to record those features. The analysis of human data using fingerprints, facial patterns, and eye retinas is termed biometrics. Initially, the applications of biometrics were focused only on high-end consumers like government, defense, and airport security. However, it has now become more commercial. Some of the commercial applications employed are network or personal computer login security, web page security, employee recognition, time and attendance systems, and voting solutions. The software will be developed with Microsoft Visual Basic 2010 at the front end and an SQL Server database at the back end. [2]

Biometric Registration of Voters: This involves the process of capturing an eligible voter's personal information (name, date of birth, home town, language, address, family, passport photograph, etc.), including fingerprints, using the fingerprint machine to scan the fingers; the data is stored in the voters database to be used for authentication or verification on election day. (An eligible voter is 18 years of age or above, of sound mind, and a national and resident of the country.) [2]
Candidate Registration: Each political party or independent candidate, after passing all the requirements of the electoral processes required for contesting the election, would then be registered in the software after going through the ballot-position balloting. Each candidate's personal information (name of candidate, party name and logo if applicable, portfolio or position vying for) would be captured and stored in the Candidates table in the database. [2]
Voter Verification or Authentication: A fingerprint recognition system operates either in verification mode or in
identification mode. The various stages in a fingerprint verification system are shown in Fig 2.0
The first stage is the data acquisition stage in which a fingerprint image is obtained from an individual by using
a sensor. The next stage is the pre-processing stage in which the input fingerprint is processed with some
standard image processing algorithms for noise removal and smoothening. The pre-processed fingerprint image
is then enhanced using specifically designed enhancement algorithms which exploit the periodic and directional
nature of the ridges. The enhanced image is then used to extract salient features in the feature extraction stage.
Finally, the extracted features are used for matching in the matching stage [2]
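These stages map naturally onto a processing pipeline. The skeleton below is our own illustration; each stage is a placeholder where a real system would call an image-processing or minutiae-extraction library:

# Skeleton of the fingerprint verification pipeline described above.
def preprocess(image):
    """Noise removal and smoothing with standard image processing."""
    return image

def enhance(image):
    """Enhancement exploiting the periodic, directional ridge structure."""
    return image

def extract_features(image):
    """Extract salient features (e.g. minutiae) from the enhanced image."""
    return {"minutiae": []}

def match(features, enrolled_features, threshold=0.8):
    """Compare extracted features with the enrolled template."""
    score = 1.0 if features == enrolled_features else 0.0  # toy comparison
    return score >= threshold

def verify(sensor_image, enrolled_features):
    image = enhance(preprocess(sensor_image))   # pre-processing, enhancement
    features = extract_features(image)          # feature extraction
    return match(features, enrolled_features)   # matching

enrolled = extract_features(enhance(preprocess("enrolled-image")))
print(verify("probe-image", enrolled))          # True for this toy comparison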

Electronic Voting: The term electronic voting, also known as e-voting, is inclusive of many systems and methods of voting. This includes booths equipped with electronic devices, software, peripherals, processing systems, equipment, tools and screens, networks, means of communication, etc. [2]
Vote Counting and Collation of Results: While voting is in progress, the software tallies each candidate's votes as and when an eligible voter selects the candidate by clicking or touching the passport-size photograph, the name of the candidate, or the logo if applicable. The percentage of votes cast for each candidate is calculated, and their respective positions are determined as soon as the polls close. [2]
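A minimal sketch of this tally-and-rank step (the vote data and names are our own illustration):

from collections import Counter

votes = ["Candidate A", "Candidate B", "Candidate A", "Candidate C", "Candidate A"]

tally = Counter(votes)                  # incremented as each vote is cast
total = sum(tally.values())

# Percentages and positions, determined as soon as the polls close.
for position, (candidate, count) in enumerate(tally.most_common(), start=1):
    print(f"{position}. {candidate}: {count} votes ({100 * count / total:.1f}%)")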
SYSTEM IMPLEMENTATION & DISCUSSION
This proposed framework has been successfully simulated on the Arduino 1.0.3 platform. The steps involved in the implementation of the proposed secure electronic voting system are highlighted in Figure 3 to Figure 12. [3]

Figure 10: Validation of Candidates Selection



Figure 11: Information of Winning Candidate

Figure 12: Voting Result Summary


METHODS OF ELECTRONIC VOTING
1. Election, party, candidate, region, street, polling clerk and village headman information is defined by the system administrator. [1]
2. Electors' information is recorded into the system, with their fingerprints, by the village headman. [1]
3. The system administrator starts the election on the day determined beforehand. [1]
4. The polling clerk starts the election on the box within his or her authorisation. [1]
5. The elector comes to the box announced beforehand and scans a fingerprint for voting. [1]
6. If the scanned fingerprint is not in the electors database, the elector cannot vote. [1]
7. If the scanned fingerprint is in the electors database, the elector's ID information is shown on the screen. [1]
8. If there is no problem with the ID check, the elector votes by pressing the vote button. [1]
9. If the elector has voted before in the election in question, the system warns about the situation. If the elector has not voted yet, the system brings up the vote screen (see the sketch after this list). [1]
10. The elector votes for any party by pressing the YES button. The elector is warned, as a final step to prevent mis-voting, by a message on the screen. If the elector wants to continue voting, the elector finishes voting by pressing the YES button. [1]
11. If the elector tries to vote a second time in the election in question, the system does not allow this. [1]
12. Then, the election is finished by the system administrator. [1]
13. Election results and statistical information can be provided just after the election is finished. [1]
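A toy sketch of the fingerprint and duplicate-vote checks in steps 5-11 above (the data structures and names are our own illustration):

electors_db = {"fp-7f3a": {"id": "E-1001", "name": "A. Elector"}}
has_voted = set()

def try_to_vote(fingerprint_code, party):
    elector = electors_db.get(fingerprint_code)
    if elector is None:
        return "rejected: fingerprint not in electors database"   # step 6
    if elector["id"] in has_voted:
        return "rejected: elector has already voted"               # steps 9/11
    has_voted.add(elector["id"])
    return f"vote recorded for {party} by {elector['id']}"         # step 10

print(try_to_vote("fp-7f3a", "Party X"))   # vote recorded
print(try_to_vote("fp-7f3a", "Party X"))   # rejected: already voted
print(try_to_vote("fp-0000", "Party Y"))   # rejected: unknown fingerprint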

Figure 2. User interface for system login

Figure 3. Election defining screen


Working principle of the system during the elector's voting procedure is as follows:
1. The elector's fingerprint is scanned using the fingerprint scanner.
2. The elector's fingerprint is searched for in the electors database.
3. If the elector's fingerprint is not in the electors database, the elector cannot vote. Otherwise, the elector's information on the screen is checked.
4. After checking, the elector presses the Vote button. The system then checks whether the elector has voted before in the election in question. If so, the system warns about the situation. Otherwise, the system brings up the vote screen.
5. The elector votes for any party.
6. If the elector is sure about the vote, voting is finished.
The biometric election system is designed as a web-based system. In this way, the votes of the electors are collected in a center through the internet, which does not cost much. The security of the information is ensured because the fingerprints are converted to binary code, encrypted, and recorded in the database when entering the system. When the system is run for the first time, the user's interface shown in Figure 2 appears on the screen. There are three different login methods for the system. The first one is the system administrator. The system administration is the part of the system where all the regulations are done and authorizations are determined. It consists of screens where election determination, party determination, region determination, street determination, elector determination, election affairs, the election screen, election results, and active users can be viewed.
After logging in, the election name and date are defined by the administrator, as shown in Figure 3. Parties to participate in the election are defined. Then, regions where the election will be held and the streets in those regions are defined in the system. In this way, attendance of only the related electors is ensured.


Electors registered before are searched for in the database [1] and necessary corrections are made. In addition, new electors are added. Fingerprints must be scanned while the electors are being registered; otherwise, registration of the electors cannot be done. Electors are registered using the elector defining screen, as shown in Figure 4. While defining electors, all of the information about the electors must be recorded. If the information is not recorded properly, registration of the electors cannot be done. Electors see the fingerprint defining screen in the recording phase, which comes after the recording of the information, as shown in Figure 5. The fingerprint defining result screen appears in front of the elector during the scanning and defining phase of the fingerprint. Electors are registered to the related region. In this way, an elector can vote only in the region where he is registered. The election is started for voting by the system administrator just after the elector defining procedure is finished. Only one election is started in the system at a time. Thus, errors in the system are prevented.


After starting the system, electors log in and vote. If the verified fingerprint belongs to an elector registered in the system, the information of the elector appears on the screen. The purpose of this step is identification control by the system administrator. The elector can vote by pressing the elector voting button just after authentication, clicking the YES button of the party that he or she chooses on the e-voting screen, as shown in Figure 6. After voting, a message appears on the screen and the voting procedure is completed for the elector. Election results of any region or regions at any time can be observed by the system administrator. These operations can be done in the election results part of the system administrator window, as shown in Figure 7.
CONCLUSION
The electronic voting system is emerging as a significant alternative to conventional systems in the delivery of reliable and trusted elections. In this paper, a framework for an electronic voting system based on fingerprint biometrics is proposed and implemented with the objective of eliminating bogus voting and vote repetition, and we also discuss the requirements of e-voting, such as fairness, eligibility, uniqueness, privacy, accuracy, and efficiency.
REFERENCES
1. Adem Alpaslan Altun and Metin Bilgin, "Web based secure e-voting system with fingerprint authentication", Scientific Research and Essays, Vol. 6(12), pp. 2494-2500, 18 June 2011. http://www.academicjournals.org/SRE
2. M. O. Yinyeh and K. A. Gbolagade, "Overview of Biometric Electronic Voting System in Ghana", IJARCSSE 3(7), July 2013, pp. 624-627. www.ijarcsse.com
3. Sanjay Kumar and Manpreet Singh, "Design a Secure Electronic Voting System Using Fingerprint Technique", IJCSI, Vol. 10, Issue 4, No. 1, July 2013. ISSN (Print): 1694-0814 | ISSN (Online): 1694-0784.
4. Alaguvel R., Gnanavel G. and Jagadhambal K., "Biometrics using Electronic Voting System with Embedded Security", IJARCET, Volume 2, Issue 3, March 2013.
5. Sonja.hof@ifs.uni-linz.ac.at



STUDY OF QUANTUM COMPUTING
Alka Chandan1, Ankita Panda2, Mili Patel3 and Pooja Khemka4
Student1, 2 and Faculty3,4, Kirodimal Institute of Technology, Raigarh (C.G.)
ABSTRACT
Quantum computing is an emerging technology. The clock frequency of current computer processor systems may reach about 40 GHz within the next 10 years; by then, one atom may represent one bit. For the last fifty years computers have grown faster, smaller, and more powerful, transforming and benefiting our society in ways too numerous to count. But like any exponential explosion of resources, this growth, known as Moore's law, must soon come to an end. Research has already begun on what comes after our current computing revolution. This research paper gives an overview of quantum computers: a description of their operation and the differences between quantum and silicon computers. No special scientific knowledge is necessary for the reader.
Keywords: Classical computers, Quantum computers, Quantum computer systems, Bits vs. qubits, History of quantum computers, Languages and quantum computation, Moore's law for quantum computers, Future benefits of quantum computers, Difference between classical and quantum, Limitations of quantum computing, The potential and power of quantum computing.
INTRODUCTION
The technology of quantum computers is also very different. For operation, a quantum computer uses quantum bits (qubits). A qubit has a quaternary nature. The laws of quantum mechanics are completely different from the laws of classical physics. A qubit can exist not only in the states corresponding to the logical values 0 or 1, as in the case of a classical bit, but also in a superposition state.
A qubit is a bit of information that can be both zero and one simultaneously (a superposition state). Thus, a computer working on qubits rather than standard bits can make calculations using both values simultaneously. A qubyte is made up of eight qubits and can have all values from zero to 255 simultaneously. Multi-qubyte systems have a power beyond anything possible with classical computers.

Figure 1. An overview of Quantum


Quantum computing is essentially harnessing and exploiting the amazing laws of quantum mechanics to process
information. A traditional computer uses long strings of bits, which encode either a zero or a one. A quantum
computer, on the other hand, uses quantum bits, or qubits. What's the difference? Well, a qubit is a quantum system that encodes the zero and the one into two distinguishable quantum states. But, because qubits behave quantum-mechanically, we can capitalize on the phenomena of "superposition" and "entanglement."


A quantum computer is a computation system that makes direct use of quantum-mechanical phenomena, such
as superposition and entanglement, to perform operations on data.
Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses qubits (quantum bits), which can be in superpositions of states. A theoretical model is the quantum Turing machine, also known as the universal quantum computer.
QUANTUM COMPUTER SYSTEMS

Superposition State

In classical computers, electrical signals such as voltages represent the 0 and 1 states as one-bit information. Two bits indicate four states 00, 01, 10, and 11, and n bits can represent 2^n states. In a quantum computer, a quantum bit, called a qubit, which is a two-state system, represents the one-bit information. For instance, instead of an electrical signal in classical computers, an electron can be used as a qubit: the spin-up and spin-down of an electron represent the two states, 0 and 1, respectively. A photon can also be used as a qubit, with the horizontal and vertical polarization of the photon representing the two states. Using qubits, quantum computers can perform arithmetic and logical operations as does a classical computer. The important difference, however, is that one qubit can also represent a superposition of the 0 and 1 states.
Bits vs. qubits
A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits. For example, to represent the state of an n-qubit system on a classical computer would require the storage of 2^n complex coefficients. Although this fact may seem to indicate that qubits can hold exponentially more information than their classical counterparts, care must be taken not to overlook the fact that the qubits are only in a probabilistic superposition of all of their states. This means that when the final state of the qubits is measured, they will only be found in one of the possible configurations they were in before measurement. Moreover, it is incorrect to think of the qubits as only being in one particular state before measurement, since the fact that they were in a superposition of states before the measurement directly affects the possible outcomes of the computation.
Qubits are made up of controlled particles and the means of control (e.g. devices that trap particles and switch them from one state to another).
For example, consider first a classical computer that operates on a three-bit register. The state of the computer at any time is a probability distribution over the eight three-bit strings 000, 001, 010, 011, 100, 101, 110, 111. If it is a deterministic computer, then it is in exactly one of these states with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states. We can describe this probabilistic state by eight nonnegative numbers A, B, C, D, E, F, G, H (where A = probability the computer is in state 000, B = probability the computer is in state 001, etc.). There is a restriction that these probabilities sum to 1.
The state of a three-qubit quantum computer is similarly described by an eight-dimensional vector
(a,b,c,d,e,f,g,h), called a ket. Here, however, the coefficients can have complex values, and it is the sum of the
squares of the coefficients' magnitudes that must equal 1. The coefficients themselves are the probability
amplitudes of the given states; their squared magnitudes give the probabilities of observing those states.
Moreover, because a complex number encodes not just a magnitude but also a direction in the complex plane,
the phase difference between any two coefficients (states) represents a meaningful parameter. This is a
fundamental difference between quantum computing and probabilistic classical computing.
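The following sketch (a classical simulation of ours, not from the paper) stores the eight coefficients (a,...,h) of a three-qubit ket, normalizes them so that the squared magnitudes sum to 1, and simulates a measurement that collapses the register to one basis state:

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Eight complex coefficients for the basis states 000, 001, ..., 111.
    ket = rng.normal(size=8) + 1j * rng.normal(size=8)
    ket /= np.linalg.norm(ket)  # enforce: sum of squared magnitudes = 1

    probs = np.abs(ket) ** 2    # probability of observing each bit string
    assert np.isclose(probs.sum(), 1.0)

    # Measurement: the register is found in exactly one configuration.
    outcome = rng.choice(8, p=probs)
    print(f"measured |{outcome:03b}> (probability was {probs[outcome]:.3f})")

    # The relative phase between two coefficients -- the extra parameter
    # a classical probability distribution cannot represent.
    print(np.angle(ket[1]) - np.angle(ket[0]))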
HISTORY OF QUANTUM COMPUTERS
In 1982, R. Feynman presented an interesting idea of how a quantum system could be used for computation.
He also explained how the effects of quantum physics could be simulated by such a quantum computer.
This idea pointed toward future research on quantum effects: every experiment
investigating the effects and laws of quantum physics is complicated and expensive, whereas a quantum computer
would be a system performing such experiments permanently. Later, in 1985, it was shown that a quantum computer
would be much more powerful than a classical one.
LANGUAGES AND QUANTUM COMPUTATION
The quantum computation model has passed through, and still exists in, many forms. Classical computation
models such as the Turing machine, the lambda calculus, and the circuit representation have all been extended to
encompass quantum information. Currently, the most efficient forms of representation of quantum algorithms
include the circuit model and the associated operator calculus. In this model, quantum bits are manipulated by
abstracted operators which have a mathematical construction independent of implementation.
From this model, multiple quantum computing languages have been developed which attempt to provide a
complete framework for simulating and verifying algorithms within the circuit-operator model. From
experience in working with quantum computing languages as well as from a theoretical standpoint, computer
scientists have constructed the following list of requirements which any useful quantum computing language
must satisfy.
Abstracted Quantum Model
Any language must provide high-level constructions to allow programmers to develop modular, intuitive, and
compact code. Thus, the language must have some automated mechanism for translation to a sequence of
low-level instructions for a quantum machine.
Hardware Independence
Any quantum computing language must not make reference to the hardware of the quantum machine, only to the
operators which any quantum machine should be capable of. The automated translation to low-level instructions
may take responsibility for this, independently of the programmer and their code.
Quantum Registers
There are four basic operations that a quantum register (qureg) must be able to perform for the user; a toy
simulation sketch follows the list.
1. Allocation and Deallocation - This will likely only involve the addressing and scoping of a qureg.
2. Initialization - In order to initialize a qureg, it must be measured and operated upon, two attributes which
clearly must be properties of any quantum language.
3. Concatenation - Operators must have the ability to access any subset of the quregs available to the
system. This could be accomplished in a number of ways. Physical movement of quregs could allow
passage through an operator together, which would be implemented as addressing manipulation in the
quantum language. Alternatively, operators could be constructed and understood to be used as state
operators, inherently applied to the entire quantum machine state with identity operations for unused
qubits. In this instance, the implementation would be manipulation of operator structure rather than
addressing schemas.
4. Measurement - A measurement of a quantum register will collapse the quantum state to an observable basis.
Although this will be governed by the hardware, a generic measure command should be available.
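To make these operations concrete, here is a toy state-vector simulation in Python/NumPy. It is a hypothetical sketch of ours (the class name, the lift helper and the overall design are assumptions, not a real quantum language): allocation and initialization happen in the constructor, concatenation is handled by padding operators with identities, and a generic measure command collapses the state.

    import numpy as np

    def lift(op, n, target):
        # Embed a single-qubit operator into an n-qubit "state operator" by
        # tensoring identities onto the unused qubits (the operator-structure
        # form of concatenation described in item 3).
        return np.kron(np.kron(np.eye(2 ** target), op),
                       np.eye(2 ** (n - target - 1)))

    class Qureg:
        def __init__(self, n):                     # allocation
            self.n = n
            self.state = np.zeros(2 ** n, dtype=complex)
            self.state[0] = 1.0                    # initialization to |00...0>

        def apply(self, op):                       # op: a 2^n x 2^n unitary
            self.state = op @ self.state

        def measure(self, rng=None):               # generic measure command
            if rng is None:
                rng = np.random.default_rng()
            probs = np.abs(self.state) ** 2
            outcome = rng.choice(len(probs), p=probs)
            self.state[:] = 0                      # collapse to the observed
            self.state[outcome] = 1.0              # basis state
            return outcome

For example, with H the 2x2 Hadamard matrix, q = Qureg(3); q.apply(lift(H, 3, 0)); q.measure() puts the first qubit into superposition and then reads the whole register out.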
Quantum Operators
Operators may have a number of possible implementations. Properties of operators within languages which
contribute to readable, efficient code are described below.
Low Level Operators
Common operators which must form a complete set (redundancy is fine) should be quickly accessible.
Operator Composition
From these low-level operators, higher-level operators can be constructed through composition. Thus, a user
may combine many operators into a single operator which may be used more than once.
Operator Inversion
Because all quantum operators must be unitary, they must be reversible. Inverting an operator should be easy
and accessible.
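As a sketch of these two properties (our own NumPy illustration, assuming the circuit-operator model above), composition is matrix multiplication and inversion is the conjugate transpose:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
    S = np.array([[1, 0], [0, 1j]])               # phase gate

    U = S @ H             # composition: "apply H, then S" as one operator
    U_dag = U.conj().T    # inversion: unitarity makes the inverse the dagger

    assert np.allclose(U_dag @ U, np.eye(2))      # reversibility check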
MOORE'S LAW FOR QUANTUM COMPUTERS
According to Moore's Law, the number of transistors on a microprocessor continues to double every 18
months. If this evolution continued, a classical computer in the year 2020 would run at a 40 GHz CPU
speed with 160 GB of RAM. If we use an analogue of Moore's law for quantum computers, the number of quantum
bits would double every 18 months. But adding just one qubit is already enough to double the size of the state
space a machine can work with, so the power of a quantum computer would grow by far more than mere doubling.
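A two-line check of the arithmetic behind this claim (illustrative only):

    # Each added qubit doubles the number of amplitudes being processed,
    # so qubit count growing as in Moore's law compounds doubly fast.
    for n in (1, 2, 10, 20, 40):
        print(n, "qubits ->", 2 ** n, "simultaneous amplitudes")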
THE MAJOR DIFFERENCE BETWEEN QUANTUM AND CLASSICAL COMPUTERS
The memory of a classical computer is a string of 0s and 1s, and it can perform calculations on only one set of
numbers at a time. The memory of a quantum computer is a quantum state that can be a superposition of
different numbers. A quantum computer can do an arbitrary reversible classical computation on all the numbers
simultaneously. Performing a computation on many different numbers at the same time and then interfering all
the results to get a single answer makes a quantum computer much more powerful than a classical one.
FUTURE BENEFITS OF QUANTUM COMPUTERS
1. Cryptography and Peter Shor's Algorithm
In 1994, Peter Shor (Bell Laboratories) discovered the first quantum algorithm that, in principle, can perform
efficient factorization. This became a flagship application that only a quantum computer could handle. Factoring
is one of the most important problems in cryptography: for instance, the security of RSA public-key
cryptography, used in electronic banking security systems, depends on the difficulty of factoring large numbers.
Because of such useful capabilities, scientists have put great effort into building a quantum computer. Breaking
current encryption, which would take on the order of centuries on existing computers, might take only a few
years on a quantum computer.
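As a toy illustration of why factoring protects RSA (our own sketch; real RSA moduli have hundreds of digits), classical trial division does work that grows exponentially in the number of digits, which is exactly the barrier Shor's algorithm removes:

    import math

    def trial_factor(n):
        # Brute-force factoring: about sqrt(n) divisions, i.e. exponential
        # in the digit-length of n.
        for d in range(2, math.isqrt(n) + 1):
            if n % d == 0:
                return d, n // d
        return None  # n is prime

    print(trial_factor(3233))  # (53, 61): a toy RSA-style modulus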
2. Artificial Intelligence
It has been mentioned that quantum computers will be much faster and will consequently perform a large
number of operations in a very short period of time. This increase in operating speed would help computers to
learn faster, even when using one of the simplest learning methods, the mistake-bound model of learning.
3. Other Benefits
High performance will allow the development of complex compression algorithms, voice and image
recognition, molecular simulations, true randomness, and quantum communication. Randomness is important in
simulations, and molecular simulations are important for developing applications in chemistry and biology.
With the help of quantum communication, both receiver and sender are alerted when an eavesdropper tries to
intercept the signal. Quantum bits also allow more information to be communicated per bit, and quantum
computers make communication more secure.
LIMITATIONS OF QUANTUM COMPUTING
Beals et al. proved that, for a broad class of problems, quantum computation cannot provide any speed-up, and
their methods were used by others to establish lower bounds for other types of problems. Ambainis found
another powerful method for establishing lower bounds. In 2002, Aaronson showed that quantum approaches
could not be used to efficiently solve collision problems. This result means there is no generic quantum attack
on cryptographic hash functions. Shor's algorithm breaks some cryptographic codes, and quantum
attacks on others may still be discovered, but Aaronson's result says that any such attack must use specific
properties of the hash function under consideration.
THE POTENTIAL AND POWER OF QUANTUM COMPUTING
A quantum computer with 500 qubits gives 2^500 superposition states. Each state would be classically equivalent
to a single list of 500 1's and 0's. Such a computer could operate on 2^500 states simultaneously. Eventually,
observing the system would cause it to collapse into a single quantum state corresponding to a single answer, a
single list of 500 1's and 0's, as dictated by the measurement axiom of quantum mechanics. This kind of
computer is equivalent to a classical computer with approximately 10^150 processors.
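The correspondence between 2^500 and roughly 10^150 can be checked directly (illustrative arithmetic only):

    import math
    print(500 * math.log10(2))  # ~150.5, so 2**500 is about 10**150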
CONCLUSION
It is important to note that practical quantum computing still lies far in the future, and the programming style
for a quantum computer will be quite different. Development of the quantum computer needs a great deal of
money, and even the best scientists cannot yet answer many questions about quantum physics. The quantum
computer is based on theoretical physics, and some experiments have already been made; building a practical
quantum computer may be just a matter of time. Quantum computers could easily solve problems that cannot be
handled by today's computers. This will be one of the biggest steps in science and will undoubtedly
revolutionize the practical computing world.
REFERENCES
1. Y. Kanamori, S.-M. Yoo, W.D. Pan, and F.T. Sheldon, "A Short Survey on Quantum Computers", International Journal of Computers and Applications, Vol. 28, No. 3, 2006.
2. C.P. Williams & S.H. Clearwater, Exploration in Quantum Computing (New York: Springer-Verlag, 1997).
3. P.W. Shor, "Algorithms for quantum computation: Discrete logarithms and factoring", Proc. 35th IEEE Annual Symp. on Foundations of Computer Science, Santa Fe, NM, November 1994, 124-134.
4. M. Oskin, F.T. Chong, & I. Chuang, "A practical architecture for reliable quantum computers", IEEE Computer, January 2002, 79-87.
5. Quantum Computers & Moore's Law. Retrieved on December 1st, 2002 from: http://www.qubyte.com
6. Daniel, G. (1999). Quantum Error-Correcting Codes. Retrieved on November 31st, 2002 from: http://qso.lanl.gov/~gottesma/QECC.html
7. Manay, K. (1998). Quantum computers could be a billion times faster than Pentium III. USA Today. Retrieved on December 1st, 2002 from: http://www.amd1.com/quantum_computers.html
8. Scott Aaronson and Dave Bacon, "Quantum Computing and the Ultimate Limits of Computation: The Case for a National Investment", Version 6: December 12, 2008.
9. Cris Cecka, "Review of Quantum Computing Research", Summer 2005.
10. M. Nielsen and I. Chuang, Quantum Computation and Quantum Information (Cambridge Univ. Press, Cambridge, 2000).
11. D'Hondt, Ellie and Prakash Panangaden, "Quantum weakest preconditions", Proc. QPL 2004, pp. 75-90.
12. Eleanor Rieffel, Quantum Computing.
13. http://en.wikipedia.org/wiki/Quantum_computer
14. https://uwaterloo.ca/institute-for-quantum-computing/quantum-computing-101

CONSIDERING ANY RELATION BETWEEN ORGANIZATIONAL CULTURE &
SUGGESTIONS SYSTEM
Kiamars Fadaei
Department of Management, Shahed University, Tehran, Iran
ABSTRACT
The present research intends to study the relation between organizational culture and the suggestions system. In
terms of purpose it is an applied study, and in terms of method it is an explanatory one. Two separate
questionnaires were applied for this purpose: the first relates to organizational culture, with 23 closed questions,
and the second to measuring cooperation through the suggestions system. After ensuring the validity and
reliability of the measuring tools, both questionnaires were distributed among members of a sample consisting
of 30 persons drawn from a statistical population comprising the headquarters personnel of the four districts of
the Education Department. According to the findings, there is a meaningful relation between organizational
culture parameters and the suggestions system. Finally, some proposals are offered for the betterment and
upgrading of all cultural dimensions in order to further promote the suggestions system in the mentioned
organization.
Keywords: Education, Organizational culture, Suggestions system, Shiraz, Iran.
INTRODUCTION
Organizational culture is the major factor in the formation and enrichment of the effective parameters of
different organizational systems. Many organizations adopt cooperative management in order to benefit from
the thoughts, ideas and innovations of their personnel, their most valuable capital, in the management of their
affairs. For this purpose, and to spread this culture, it is necessary to benefit from executive sub-systems as
well. One of the most efficient of these systems is the suggestions system, which creates a suitable mechanism
and space for the submission, receipt, consideration, evaluation and implementation of beneficiaries' proposals
in order to attain specified ideals. Over the last two decades most studies have explained different viewpoints
and dimensions of organizational culture (Martin, 1992; Countz, 2003; Shahin & Arash, 2003; Gupta, 2010).
Upon the approval of the Cooperation System in Parliament, New York was the first state to collect proposals
from serving and retired personnel (Suggestions Systems Bulletin, 2005). After World War II, the Japanese
noticed that system in American industries and promoted it gradually in their own industries. The suggestions
system in the U.S.A. mostly focused on the financial effects of a suggestion, while in Japan the emphasis was
on the cooperation aspects; financial motives were replaced with group intentions. When Japanese managers are
asked, "What is the major goal of implementing the suggestions system?", none of them points to finding a new
product, a new method for reducing costs, or higher profits. In fact they do not consider direct economic
consequences, but focus on personnel motivation and fostering responsible personnel as the major goal of the
suggestions system (Kluwer, 2001).
Organizational culture: All the people working in an organization have common beliefs, ideas, values and
orders which construct its cultural structure. Generally, the combination of the two terms "culture" and
"organization" produces a new idea that neither term carries by itself. Organizational culture affects all
organizational aspects, and today it is so important that most management specialists consider the main duty of
organizational leaders to be changing and reforming cultural values toward suitable ones. All organizational
systems, including the structure and the behavior, are under the effect of the culture governing the organization.
The following are different descriptions of culture. Culture is a collection of different aspects transferred from
one generation to another, separating different social groups. Hofstede stated that culture is "a group thinking
program by which the members of a group and/or social class may be distinguished from other group members"
(Hofstede, 2001).
Accordingly, it is possible to describe organizational culture as follows: organizational culture goes back to all
the learned values, beliefs and behaviors from the past, accompanied by the different experiences in the history
of the organization, which tend to be revealed in the major disciplines and behavior of its members.
STRUCTURE OF ORGANIZATIONAL CULTURE
Organizational culture means an exclusive pattern of all assumptions, values and common orders, including all
social activities, language, symbols and organizational functions. In order to gain a better knowledge of
organizational culture, it is better to know more about its structure, as follows:
Common assumptions: These include the following items: a) different thoughts and beliefs of persons about
themselves and others (attention to their own benefits versus public interests), b) different relations among
members (competition and/or cooperation), c) organizational relations with the environment (dominating the
environment, cooperating with it, and the like), and d) time orientation (future, present and past).
Common values: Value is a basic fixed belief about different matters with considerable and meaningful
importance for people.
Common social acceptance: It is a regular process according to which new members enter into the
organizational culture.
Common symbols: It means any obvious items which may be applied for showing a single common value
and/or a special meaning.
Common language: It means a common system including different voices, written signs and/or points for the
transfer of special meanings among members.
Common narrations: It means common stories, heroes and myths in an organizational culture.
Common functions: These are the superficial ceremonies and special official activities that create powerful
sentiments and frame the performance of different jobs as special events (Moghimi, 2001).
THE ORGANIZATIONAL CULTURE PATTERN IN PRESENT RESEARCH
There are different classifications of organizational culture. Quinn (1999) made a classification of
organizational culture which is among the most complete, and the present research has been carried out
according to it. Quinn and Cameron specify organizational culture as comprising all the major values,
assumptions and definitions in the different aspects of an organization, which may be revealed in four types of
organizational culture: hierarchical culture, rational (market) culture, ideological (adhocracy) culture, and
consensual culture. They introduced nine major organizational parameters across the different types of
organizational culture: the goal of the organization, the operation criteria of the organization, the authority
reference in the organization, the power resource, the manner of decision making, the leadership style, the
following-up and acceptance method, the evaluation criterion of members, and the motivation of personnel
(Quinn et al., 1999).
Every organization has a special culture that shapes its behaviors. In order to achieve the required
effectiveness, all organizations should bear a culture in compliance with their assignment, technology, volume
of functions and other similar variables. The following are the major organizational parameters within which
different organizational cultures take their particular form:
Hierarchical culture
It means a culture in which the priority of jobs in an organization is based upon grades, in a way that each
lower position is responsible to another position of upper class (Countz, 2003). Leaders make decisions
according to factual analysis, with strong interests in maintenance and security.
Rational culture (Market)
The fundamental assumptions and basic values of the market culture are based upon clear ideals and an
innovative strategy toward greater profits and efficiency. All market-based organizations are interested in
maintaining their competitive position and overcoming competitors through a fixed setting of goals. Work in
this culture is result-based, and leaders are highly insistent. The real factor combining all parts is a focus on
victory, long-term concepts, competitive functions, and reaching success by attaining the goals. The term
"success" in this culture means greater cooperation and penetration in the market (Quinn et al., 1999).
Ideological culture (Adhocracy)
In this culture the real ideals and goals belong to all personnel, with control exercised mainly through common
orders (Mir Sepasi, 2001). After the worldwide change from the industrial age to the information age, this
fourth form of culture took shape. This type of culture can respond to the chaotic environments and speedy
conditions of the 21st century. Its fundamental assumptions are what truly distinguish it from the previous
cultures: innovation, novelty and pioneering are the assumptions that enable organizations to produce new
services and products and thereby find more success. The adhocracy culture considers the major duty of
management to be the promotion of job creation and innovation and a focus on priorities. It benefits from
innovation for greater profits and interests. The real goal of the ideological (adhocracy) culture is to support the
function criterion.
Decision making in these organizations is based upon the leaders' attitude toward risk acceptance and further
innovation. The staff have no choice but to accept the decisions made, in order to uphold the organizational
values (Brown, 1995).
Consensual culture (Tribal)
In this culture strategies emerge through the exchange of ideas and mutual regulation among the members; it is
not a matter of following a single leadership center (Mir Sepasi, 2001). There is a friendly environment in the
consensual culture in which all members work with each other, like a wide family. Leaders and supervisors are
the fathers of the organization. Loyalty and common habits and beliefs bind the different people together, and
there is a high level of commitment among them. The organization focuses on the long-term benefits of
developing human resources and the cohesion of its people (Quinn et al., 1999).
Organizational goal
Goal means the final ideal for which different organizations are created (Tabibi & Seyed, 2005).
Performance measure
Determining performance criteria enables organizations to answer the question: "Why should we spend our
resources on these activities?" (Tabibi & Seyed, 2005).
Armstrong & Baron (2006) stated that all performance criteria should:
Be related to the strategic criteria and goals which are important for the organization and promote the
business.
Be related to the goals and outputs of the considered groups and people.
Provide documentation as the basis of measurement.
Be verifiable, providing information that confirms expectations.
Be precise, in compliance with the evaluation goal and the availability of data.
Provide a valid base for the required feedback and actions (Armstrong & Baron, 2006).
Power Resource & Authority reference in organization
Power has a wider meaning than authority. It refers to the ability of persons or groups to influence the beliefs
or actions of other persons or groups. Authority in an organization means making decisions that are effective on
others. In fact authority is a form of power, but one anchored in an organizational position (Countz, 2003).
Decision making
The major duty of managers is to make decisions, because they are obliged to decide what should be done, who
should do it, the time required, the place of performance, and even the manner of performing the job. Decision
making is an essential part of managers' activities and is in fact their very nature.
Leadership Style
It means the general form of a leader's functions as understood by the personnel. Leadership style means the
manner of using power and intervention by a leader. Most management specialists believe that the leadership
style of managers is shaped by their attitude toward their own role and their personnel.
If a manager considers the personnel as people under his supervision and orders, he will probably adopt a
conservative style. But if he considers the personnel as his own colleagues and assumes that he merely has more
responsibilities than others, then he may apply cooperative and liberal styles (Rezaeian, 2006).
ACCEPTANCE
It means the method by which the personnel follow a particular item and/or person; that is, the manner of
acceptance by the organization's personnel depending upon the traditional and/or non-traditional leaders of the
organization. A non-traditional leader is a person who makes his followers sacrifice their own benefits in favor
of the organization and has deep effects on them. Such people may change others through their personal power
and abilities and instill in them the value of the work. For instance, as a sign of support, a follower may say:
"I will walk on fire if my leader orders me!"
Assessment criterion of members
Assessment of performance may be limited to applying different techniques for producing operational
information (Greiling & Dorothea, 2006). Today's assessment philosophy focuses on the current performance
and future goals of personnel, and on the cooperation of personnel in mutually attaining goals with the help of
the supervisor. The major signs of the modern assessment philosophy are: giving direction to performance;
focusing on goals and ideals; and specifying the goals through mutual consultation between supervisor and
employee.
Personnel motivation
Motivation means encouraging personnel to do a job in order to attain the suitable ideals of the organization. It
may be constructive and/or destructive. People with suitable motivation show a high degree of self-control,
while those lacking motivation show much less. Overly formal relations may remove motivation from
personnel; in other words, the basic element of motivation is the informal relations among personnel (Efjeh &
Seyed, 2006).
SUGGESTIONS SYSTEM
It combines the two concepts of "system" and "proposal". "System" means a collection of different parts with
mutual effects on each other directed at a common goal (Moghimi, 2006). "Proposal" is an idea which may be
accepted and/or rejected. Moghimi (2006) explains the suggestions system as follows: a "suggestions system"
is an official, defined procedure created and controlled by top management in order to receive the voluntary
ideas of personnel for the betterment of organizational functions in different aspects. The system needs a
method for paying allowances for proposals accepted from the personnel. Generally, the suggestions system is
a written system for activating the mentality of people and applying their ideas for the further betterment of
organizational activities (Pizam, 1978; Ramezani & Jalal, 2005).
RESEARCH THEORIES
A theory here means a hypothetical and temporary reply to the research question: the assumptions and tentative
ideas of the researcher about probable answers to the considered problem, which are then followed up for
acceptance or rejection in practical research. The theories of this research are divided into one major theory and
several indirect theories, as follows:
Major theory
There is a meaningful relation between the parameters of organizational culture, from the viewpoint of Quinn,
and the suggestions system.
Indirect theories
There is a meaningful relation between goal determination of organization and suggestions system.
There is a meaningful relation between the function criterion and suggestions system.
There is a meaningful relation between the following up method and suggestions system.
There is a meaningful relation between power source and suggestions system.
There is a meaningful relation between decision making and suggestions system.
There is a meaningful relation between motivation and suggestions system.
There is a meaningful relation between the leadership style and suggestions system.
There is a meaningful relation between assessment criterion of members and suggestions system.
There is a meaningful relation between authority resource and suggestions system.
Statistical population of the research: The statistical population, in accordance with the view of most
researchers, is the set of all real or virtual members relevant to the findings of the research, sharing at least one
common attribute (Abedini, 2011).
The Education Department of Fars province is responsible for planning, supervision, handling and controlling
of 19 city education departments and 39 areas, one education department for nomadic tribes, one department of
exceptional education and 19 agencies. The Education Department of Fars province has 4 independent districts
which are active under its supervision. As a result, the statistical population of the present research includes all
personnel of the headquarters of the four districts of the Shiraz Education Department. The total number is 421
persons, distributed over district 1 (113 persons), district 2 (96 persons), district 3 (101 persons) and district 4
(111 persons).
SAMPLING METHOD
Sampling is the method by which different units are selected to represent the greater population (Khaki &
Gholamreza, 2007). There are various sampling methods, including the non-probability sampling method, the
simple random sampling method, the stratified systematic random sampling method, the cluster sampling
method, the quota sampling method and the combined sampling method (Sokaran & Ouma, 2007). We
employed the cluster sampling method, with the organization as the sampling unit: first, different organizations
(clusters) are selected by the simple random method, and then the required number of personnel is appointed
from among these organizations (Azar et al., 2006). In the next step, the researcher used the simple random
sampling method for selecting respondents in the four districts.
Specifying the sample volume: In order to specify the sample volume, the following formula can be applied
(Hosseini & Seyed, 2004):

n = [ N * z_{α/2}^2 * p * q ] / [ (N - 1) * d^2 + z_{α/2}^2 * p * q ]

where N is the population size, z_{α/2} is the critical value of the normal distribution, p and q = 1 - p are the
population proportions, and d is the allowed margin of error. According to this formula, the sample volume was
165 persons.
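As an illustrative check (our own sketch; the paper does not state its z, p, q and d values), the formula can be evaluated for N = 421. With z = 1.96, p = q = 0.5 and a margin of error d near 0.06 it yields approximately the reported sample size:

    import math

    def sample_size(N, z=1.96, p=0.5, d=0.06):
        # Finite-population sample-size formula given above.
        q = 1 - p
        n = (N * z**2 * p * q) / ((N - 1) * d**2 + z**2 * p * q)
        return math.ceil(n)

    print(sample_size(421))  # ~164, close to the 165 reported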
DATA COLLECTION TOOLS
a) Library information, including the study of domestic & foreign books and magazines, browsing databases
(internet), and benefitting from the experiences of other researchers in order to find the theoretical basics of the
research.
b) The questionnaire as the major tool of data collection.
We used two separate questionnaires in this research. The first one relates to organizational culture and was
built on the model presented by Quinn et al. (1999) as operationalized by Hofstede, in the form of 23 closed
questions. The second one relates to measuring cooperation through the suggestions system.
The results of testing the research theories are as follows. First indirect theory: there is a meaningful relation
between the goal determination of the organization and the suggestions system. Applying the relevant test at
the 95% confidence level gave Sig=0.001; since Sig<0.05, the null hypothesis is rejected and the opposite one
accepted, meaning a meaningful relation between goal determination and the suggestions system. The first
indirect theory is thus confirmed at the 95% confidence level.
Second indirect theory: there is a meaningful relation between the function criterion and the suggestions
system. The test at the 95% confidence level gave Sig=0.000; since Sig<0.05, the null hypothesis is rejected and
the opposite one accepted. The second indirect theory is thus confirmed at the 95% confidence level.
Third indirect theory: there is a meaningful relation between the following-up method and the suggestions
system. The test gave Sig=0.000; since Sig<0.05, the null hypothesis is rejected. The third indirect theory is
confirmed at the 95% confidence level.
Fourth indirect theory: there is a meaningful relation between the power source and the suggestions system.
The test gave Sig=0.000; since Sig<0.05, the null hypothesis is rejected. The fourth indirect theory is confirmed
at the 95% confidence level.
Fifth indirect theory: there is a meaningful relation between decision making and the suggestions system. The
test gave Sig=0.000; since Sig<0.05, the null hypothesis is rejected. The fifth indirect theory is confirmed at the
95% confidence level.
Sixth indirect theory: there is a meaningful relation between motivation and the suggestions system. The test
gave Sig=0.000; since Sig<0.05, the null hypothesis is rejected. The sixth indirect theory is confirmed at the
95% confidence level.
Seventh indirect theory: there is a meaningful relation between the leadership style and the suggestions system.
The test at the 95% confidence level gave Sig=0.000; since Sig<0.05, the null hypothesis is rejected. The
seventh indirect theory is confirmed at the 95% confidence level.
Eighth indirect theory: there is a meaningful relation between the assessment criterion of members and the
suggestions system. The test gave Sig=0.000; since Sig<0.05, the null hypothesis is rejected. The eighth indirect
theory is confirmed at the 95% confidence level.
Ninth indirect theory: there is a meaningful relation between the authority resource and the suggestions system.
The test gave Sig=0.000; since Sig<0.05, the null hypothesis is rejected. The ninth indirect theory is confirmed
at the 95% confidence level.
Major question of the research: what is the relation between organizational culture parameters and the
suggestions system? This question is addressed through the major theory of the research, with the following
results (Venkatesan et al., 2011).
Major theory: there is a meaningful relation between organizational culture parameters, from the viewpoint of
Quinn, and the suggestions system. The test at the 95% confidence level gave Sig=0.000; since Sig<0.05, the
null hypothesis is rejected and the opposite one accepted. The major theory is thus confirmed at the 95%
confidence level (Table 1).
Table 1. Results of the Pearson correlation test for the major theory

                               Organizational culture    Suggestions system
Organizational culture
  Correlation coefficient             1.000                     0.667
  Sig                                   -                       0.00
  N                                    165                      165
Suggestions system
  Correlation coefficient             0.667                     1.000
  Sig                                  0.00                       -
  N                                    165                      165
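A test of this kind can be reproduced with standard tools. The sketch below (with hypothetical score vectors, not the study's actual data) computes a Pearson correlation and its significance for two variables measured on 165 respondents:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    culture = rng.normal(3.0, 0.5, size=165)                 # hypothetical scores
    suggestions = 0.7 * culture + rng.normal(0, 0.4, size=165)

    r, p_value = stats.pearsonr(culture, suggestions)
    print(f"r = {r:.3f}, p = {p_value:.4f}")  # p < 0.05 -> reject the null hypothesis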

OBTAINED RESULTS FROM THE FRIEDMAN TEST
We studied nine organizational culture parameters in this research, which were ranked by the use of the
Friedman test. Findings show that, among the organizational culture variables in this research, leadership style
holds the first position and motivation the last position from the viewpoint of the personnel and respondents.
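The ranking itself can be reproduced with the Friedman test as implemented in SciPy. The sketch below (with hypothetical rating columns standing in for three of the nine parameters) is a minimal version:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    # Hypothetical ratings from 165 respondents for three parameters.
    leadership = rng.integers(3, 6, size=165)
    decision = rng.integers(2, 5, size=165)
    motivation = rng.integers(1, 4, size=165)

    stat, p = stats.friedmanchisquare(leadership, decision, motivation)
    print(f"chi2 = {stat:.2f}, p = {p:.4f}")  # low p -> the rankings differ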
PATHOLOGY OF THE SUGGESTIONS SYSTEM IN THE FOUR DISTRICTS OF EDUCATION
In this part we consider the average grades of the suggestions system in the four education districts and
compare them with the suitable situation. There are various tests for population means; depending on the
number of populations, the quantities of the variables and their relations, different tests apply. Since we intend
to consider the average grade of a single group, a one-sample t-test is appropriate for the pathology of the
suggestions system.
We considered average grades higher than 3 as the suitable situation and grades lower than or equal to 3 as a
non-suitable condition. The results show the central and dispersion indexes of the grades in the four districts as
given in Table 2; districts 2 and 3 have, respectively, the maximum and minimum rates.
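The district-level test in Table 3 is a one-sample t-test against the midpoint 3. A minimal SciPy sketch (with hypothetical grades standing in for one district's data) looks like this:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    grades = rng.normal(3.6, 0.6, size=38)  # hypothetical district-2-like data

    # H1: mean grade > 3 (suitable situation)
    t_stat, p = stats.ttest_1samp(grades, popmean=3, alternative='greater')
    print(f"t = {t_stat:.3f}, p = {p:.4f}")  # p < 0.05 -> suitable situation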

Table 2. Explanatory statistics of the suggestions system in the four districts

District   Qty   Average    Std. deviation   Std. error of mean
1          44    2.69221    0.5652           0.0852
2          38    3.6388     0.6374           0.1034
3          40    2.3159     0.5423           0.0857
4          43    3.5159     0.4234           0.0645

Theory test
H0: The suggestions system has a non-suitable situation in the districts.
H1: The suggestions system has a suitable situation in the districts (Table 3).

Table 3. One-sample t-test results (test value = 3)

District   t statistic   df   Sig     Average difference   95% CI low   95% CI high
1          -3.613        43   0.999   -0.3078              -0.4797      -0.1360
2           6.177        37   0.000    0.6387               0.4292       0.8483
3          -7.978        39   -       -0.6840              -0.8575      -0.5106
4           7.988        42   0.000    0.5158               0.3855       0.6462

We have Sig<0.05 in districts 2 & 4; therefore the H0 theory is rejected and the opposite theory accepted,
meaning the suggestions system has a suitable situation in these districts. We have Sig>0.05 in districts 1 & 3;
therefore H0 is accepted, and the suggestions system has a non-suitable situation in these districts.
CONCLUSIONS & PROPOSALS
Given the direct relation between organizational culture and the suggestions system, a tendency toward the
ideological (adhocracy) and consensual/cooperative cultures in the organization is expected to foster and enrich
proposals. Therefore, in order to move toward these cultures, it is recommended to:
1. Avoid direct supervision of personnel and give them freedom in their work.
2. Provide and establish different work groups and teams.
3. Set aside enough time for leading and training personnel individually rather than dealing with them only as
a group.
4. Pay special attention to all of them and assist in developing their abilities and skills.
5. Establish a suitable evaluation system to help the personnel find their real rights and situation.
6. In addition to attending to higher-level needs, consider the material needs of personnel.
7. Write job promotion programs (guidelines) and try to assign positions in accordance with priorities.

In addition to studying the relation between organizational culture and the suggestions system, the present
research carried out a pathology of the suggestions system in the different education districts. With regard to
the non-suitable situation of this variable in districts 1 & 3, we recommend that managers do the following:
a) In order to establish a suggestions system in an organization it is necessary to provide the required
prerequisites. One of the most important and effective factors is suitable culture making and a suitable
organizational culture, discussed above. Since a new plan is being introduced into the organization, written
programs should be considered for preparing the culture change.
b) Write the required instructions and rules for the operation of the suggestions system.
c) Provide suitable allowances matching each proposal; allowances should be offered as a package from which
personnel may choose according to their own preference.
d) Grant allowances through an official ceremony in the presence of other personnel.
e) The real philosophy of the suggestions system is to establish a cooperative management style in the
organization; it is therefore recommended to use the submitted proposals in organizational decision making.
f) Establish sessions in which the suggestions system is reviewed and the personnel are notified about the
project results.
REFERENCES
1. Abedin Samad (2011) Sociological analysis of economic and cultural capital role in tendency to social values among the students of Islamic Azad University, Khalkhal Branch. Indian J. Sci. Technol. 4, 1162-1167.
2. Armstrong Michele (2006) Function management. Translated by Saeid Safari & Amir Vahabian, Tehran, Iran.
3. Azar Adel and Momeni Mansour (2006) Statistics & its application in management. 2nd edition, Samt Publ., Tehran, Iran.
4. Brown Andrew (1995) Organizational culture. Pitman Publ., London, UK.
5. Counts Harold (2003) Principles of management. Translated by Mohammad Hadi Chamran, Tehran, Iran.
6. Efjeh Seyed Ali Akbar (2006) Philosophical principles & leadership theories & organizational behavior. Samt Publ., Tehran, Iran.
7. Greiling Dorothea (2006) Performance measurement, a remedy for increasing the efficiency of public services. Protestant Univ. Appl. Sci., Darmstadt, Germany.
8. Gupta BM (2010) Ranking and performance of Indian universities based on publication and citation data. Indian J. Sci. Technol. 3, 837-843.
9. Hofstede G (2001) Culture's consequences: Comparing values, behaviors, institutions and organizations across nations. Beverly Hills, CA, Sage.
10. Hosseini Seyed Yaghoub (2004) Nonparametric statistics: Research method & SPSS 10 statistical software. 1st edition, Tehran, Iran.
11. Khaki Gholamreza (2007) Research method with an attitude of thesis writing. Baztab Publ., Tehran, Iran.
12. Kluwer (2001) Decision making, social and creative dimensions. Acad. Publ., Springer.
13. Mir Sepasi Nasser (2001) A report of jobs circulation seminar: Job satisfaction & organizational effectiveness. Tadbir Monthly Magaz. 4, 232-239.
14. Moghimi Seyed Mohammad (2001) Organization & research attitude management. 2nd edition, Termeh Publ., Tehran, Iran.
15. Moghimi Seyed Mohammad (2006) Suggestions system in governmental organizations from theory up to practice. Publ. of Cult. Ser. Asso. of Iranians Abroad, Tehran, Iran.
16. Pizam Abraham (1978) Some correlates of innovation within industrial suggestion systems. Personnel Psychol. 27, 63-76.
17. Quinn Robert E and Cameron Kim S (1999) Diagnosing and changing organizational culture. NY, Addison-Wesley.
18. Ramezani Jalal (2005) Participation management with focusing on suggestions system. Payam Publ., Tehran, Iran.
19. Rezaeian Ali (2006) Basics of organization & management. Samt Publ., Tehran, Iran.
20. Shahin Arash (2003) Submission of a continuous betterment model for suggestions system. 4th Natl. Conf. on Suggestions Sys., Faculty of Managt., Tehran University, Tehran, Iran.
21. Sokaran Ouma (2007) Research methods in management. Higher Edu. Res. Institute in Managt. & Planning, Iran.
22. Suggestions System Message Bulletin (2005) Differentiates between suggestions system & ideas/criticisms fund. Refah Bank. 13, 321-328.
23. Tabibi Seyed Jamal (2005) Strategic programming. Ministry of Health & Therapeutic Publication, Tehran, Iran.
24. Venkatesan P, Dharuman C and Gunasekaran S (2011) A comparative study of principal component regression and partial least squares regression with application to FTIR diabetes data. Indian J. Sci. Technol. 4, 740-746.

CORPORATE GOVERNANCE AND FINANCIAL STABILITY IN SUDANESE BANKING SECTOR
Dina Ahmed Mohamed Ghandour
Lecturer, Department of Accounting And Finance, Faculty of Business Administration, University of
Medical Sciences and Technology, Khartoum, Sudan
ABSTRACT
Corporate governance has recently become one of the most widely and deeply discussed topics across the
globe. A weak and ineffective corporate governance mechanism in banks can affect the stability of the financial
system, leading to financial crisis, which in turn can have a negative impact on the economy as a whole. The
purpose of this paper is to investigate corporate governance practices in the Sudanese banking sector and their
link to financial system stability, economic growth and the recent financial crisis, and to review the different
rules and regulations that were set to avoid such a crisis in the future. The paper follows the
analytical/qualitative method, with secondary data as the main data collection tool. The study is based on
analysis of the collected data in order to achieve the objectives of the research.
Key words: Corporate governance, financial system stability, economic growth, Sudanese banking sector,
financial crisis.
INTRODUCTION
Probably no other set of firms has been as closely examined in the past few years as banks and financial
institutions. Since the beginning of the financial crisis in 2008, countless policies have been proposed, discussed
and enacted on nearly every aspect of banking and finance. The bulk of this attention almost certainly springs
from the crisis, which became a powerful reminder of the importance of the financial system and the call for a
new structural design for the system. The primary feature of the new structural design that has received
maximum emphasis so far at international forums is improved corporate governance, reinforced by prudential
regulation and supervision. Corporate governance is now a topic of considerable interest to a large and
expanding cross-section of the community. It is obviously of fundamental importance to company directors,
and to the Reserve Bank in its capacity as supervisor of the banking system.
Corporate governance is defined as the structures and processes by which companies are directed and
controlled. Good corporate governance helps companies operate more efficiently, improve access to capital,
mitigate risk and safeguard against mismanagement. It makes companies more accountable and transparent to
investors and gives them the tools to respond to stakeholder concerns. Corporate governance also contributes to
development. Increased access to capital encourages new investments, boosts economic growth, and provides
employment opportunities (International Finance Corporation, 2014).
However, corporate governance for banking organizations is arguably of greater importance than for other
companies, given the crucial financial intermediation role of banks in the economy, and is essential to achieving
and maintaining public trust and confidence in the banking system (the Basel Committee).
BANKS' CORPORATE GOVERNANCE VS. COMPANIES'
Corporate governance in banks differs from the standard (typical for other companies) due to several issues:
- Banks are subject to special regulations and supervision by state agencies (monitoring activities of the bank
are therefore mirrored); supervision of banks is also exercised by the purchasers of securities issued by banks
and by depositors ("market discipline", "private monitoring");
- The bankruptcy of a bank raises social costs that do not arise when other kinds of entities collapse; this
affects the behavior of other banks and of regulators;
- Regulations and safety-net measures substantially change the behavior of owners, managers and customers
of banks; rules can be counterproductive, leading to undesirable management behavior (taking increased risk)
which exposes the well-being of the bank's stakeholders (in particular the depositors and owners);
- Between the bank and its clients there are fiduciary relationships raising additional relationships and agency
costs;
- The principal-agent problem is more complex in banks, among other reasons due to the asymmetry of
information not only between owners and managers, but also between owners, borrowers, depositors, managers
and supervisors;
- The number of parties with a stake in an institution's activity complicates the governance of financial
institutions.

To sum up, depositors, shareholders and regulators are all concerned with the robustness of corporate
governance mechanisms. The added regulatory dimension makes the analysis of corporate governance of
opaque banking firms more complex than that of non-financial firms (Wilson, Casu, Girardone, Molyneux,
2010). The figure below summarizes the key players in the corporate governance framework for banks:

In the case of banks, therefore, corporate governance needs to be perceived as a requirement on the conduct of
the institution, forcing management to protect the best interests of all stakeholders and to ensure responsible
behavior and attitudes (Tirole, 2001). Accountability, fairness, transparency and independence are the four
objectives of corporate governance.
WHY BANKS' CORPORATE GOVERNANCE
In the banking sector, corporate governance is therefore the way the business and affairs of the bank are
governed by the management and the board, affecting how they (Drs. Alberto G. Romero, 2003):
- Set corporate objectives (including generating economic returns to owners);
- Run the day-to-day operations of the business;
- Consider the interests of recognized stakeholders;
- Align corporate activities and behaviors with the expectation that banks will operate in a safe and sound
manner, and in compliance with applicable laws and regulations; and
- Protect the interests of depositors.

HISTORICAL BACKGROUND OF SUDANESE FINANCIAL SECTOR: STRUCTURE, POLICY AND PERFORMANCE
Sudan's financial system today consists of the CBOS and 19 active commercial banks, of which six are
state-owned, ten are jointly owned and three are owned by foreign capital. In addition, there are four specialized
banks and two investment banks, plus an unspecified number of non-bank financial entities, mainly Islamic
insurance (Takaful) companies. Two state-owned banks, Omdurman Bank and Bank of Khartoum, dominate the
Sudanese banking system.
The Sudanese banks are very small by international standards, with a total amount of deposits in the entire
banking system of around $500 million since 1995. The average capital and total assets of a Sudanese bank are
$3.5 million and $24 million, respectively (Kireyev, 2001). The deposit structure of the Sudanese banks differs
from most Islamic banks: in Sudan, total deposits are dominated by demand deposits, with a share of over 70%,
whereas saving and investment deposits remain relatively small. Kireyev (2001) argues that this phenomenon is
a reflection of the cash nature of the Sudanese economy, where individuals prefer to have instant access to their
funds. It also reflects the failure of the banking sector to offer investment opportunities that suit potential
depositors. A deteriorating investment climate and creeping inflation led to highly negative profit rates on
deposits in the 1990s, encouraging savers to invest heavily in property and other real assets. Even banks used
to invest in the property sector until 1995, when the BOS prevented such practice.

Small bank size and weak bank performance in the 1990s, in particular, contributed to heavy government
intervention and regulations that shattered public confidence in banks in the early 1990s (Elhiraika, 1998). The
Central Bank usually imposed detailed requirements for lending, dividing the economy into priority sectors and
sub-sectors for which the banks were required to extend credit. Lending to agriculture was a priority, other
sectors were less of a priority and some were prohibited from bank financing. The Central Bank prescribed
different prices for credit depending on priority status and geographical allocation. Large loans had to be
approved by the Central Bank. Credits to public enterprises were extended directly by the BOS.
According to Kireyev (2001), prior to the reform program initiated in 1997, banking supervision was lax, no
unified accounting system existed, and the banks accumulated large portfolios of non-performing loans. By the
mid 1990s, the Sudanese financial system was characterized by its bulky, large and unmanageable regulatory
system of cumbersome guidelines for credit allocation, centralized lending by the central bank to public
enterprises, an absence of indirect monetary policy instruments, fixed and negative real rates of return, an
inadequate accounting system, detailed minimum and maximum limits of lending to individual sectors,
restrictions on financing trade in individual commodities, restrictions on inter-bank transactions, prior approval
for large loans and geographical allocation of credit. These constraints on banks were exercised at a time of
high inflation, which reached 133% in 1996.
In 1997, with the first IMF Staff-Monitored Program, the CBOS gradually dismantled restrictions and
liberalized the financial system. Thus, in 1998 the BOS initiated open market operations using indirect
Shariah-based instruments that included the central bank and government Musharaka certificates (CMCs and
GMCs, respectively). The CMCs are issued against the value of the government's and the CBOS's shares in
commercial banks; the GMCs are issued against the assessed value of the government's shares in a number of
selected companies. The CBOS took a number of steps to liberalize the financial sector and help to curtail
inflation (Kireyev, 2001).
A number of measures were introduced to improve bank supervision, increase compliance with capital
adequacy requirements, and reduce the high level of non-performing loans (from 18% of total loans in
1998-1999 to 12% in 2003). In addition, the CBOS revised the risk weights of assets for some Islamic modes of
finance, such as Salam and the purchase of goods by banks for commercial purposes, to better reflect the
specific risks facing banks.
This period also witnessed some restructuring of the financial sector through mergers and liquidations of
state-owned and private sector banks. For example, Unity Bank and the National Bank for Exports and Imports
merged into the Bank of Khartoum Group, while the Sudanese Industrial Bank merged with Elnelien Bank to
form Elnelien Bank for Industrial Development. Meanwhile, the Middle East Bank and the Internal and
International Trade Bank were liquidated. The Central Bank banned the establishment of new commercial
banks during this period. However, following the signing of the peace agreement, many new banks and other
financial institutions are expected to begin operations in both the South and the North.
PROBLEM STATEMENT
In the wake of recent corporate scandals and the restructuring attempts of countries throughout the world to
rebuild their economies after recent financial crises, much international attention has been given to
understanding the causes and dynamics of financial crises, and there is an increasing recognition around the
globe of the critical importance of good corporate governance mechanisms in safeguarding financial stability.
This paper sets out to answer the following questions:
- What is the mechanism of corporate governance in the Sudanese banking sector?
- Does corporate governance matter for financial system stability and economic growth?
- Is poor corporate governance the main reason for the recent financial crises?
- What are the different rules and regulations that were set by the banking sector (Sudanese banks) to avoid
such crises in the future?
SIGNIFICANCE OF THE STUDY
This study is conducted in order to:
• Spotlight the importance of corporate governance to the banking sector.
• Emphasize the role that corporate governance plays in the financial system and the wider economy, i.e. why it is important for economic growth and financial stability.
• Note the destabilizing impact of weak corporate governance structures on the soundness of the financial system.
• Highlight the key elements of sound corporate governance.
OBJECTIVES OF THE STUDY
The objectives of this study are to:
• Describe the mechanism of corporate governance in the Sudanese banking sector.
• Determine the role that corporate governance plays in economic growth and financial system stability.
• Examine the link between corporate governance and the financial crisis.
• Identify the different rules and regulations that were set by the banking sector to avoid such crises in the future.
HYPOTHESES
• All countries will have a stable financial system and continuous economic growth if a proper corporate governance framework is implemented throughout their banks.
• Good corporate governance practices linked with prudential regulation and supervision will enable banks to avoid financial crises in the future.
TARGET POPULATION
The target population of this study is financial institutions.
MATERIALS AND METHODS
This study is based on analytical and qualitative methods.
DATA COLLECTION
Two types of data sources can be integrated: secondary data and primary data. Secondary data is used to gain
initial insight into the research problem; it is required in the preliminary stages of research to determine what is
already known and what new data is required. Primary data is data that did not exist before; it is collected to
answer specific questions of interest to the researcher. In this study, information was obtained through the
collection of secondary data, drawn from the internet, textbooks, news sources and journals.
This study is based on discussion and analysis of the collected data, focusing on corporate governance
practices in the Sudanese banking sector and their link to financial system stability and economic growth, taking into
account whether poor governance was one of the main reasons for the recent financial crises.
DISCUSSION
This paper spotlights the concept of corporate governance in the Sudanese banking sector in terms of:
• Its practices in the banking sector and how it matters for financial system stability and economic growth.
• Its link to the recent financial crisis, in addition to the different rules and regulations that were set to avoid such a problem in the future.
CORPORATE GOVERNANCE PRACTICES IN SUDANESE BANKING SECTOR
Since banks are important players in the Sudanese financial system, a special focus on corporate governance in the
banking sector becomes critical, and the Central Bank of Sudan, as regulator, bears responsibility for the nature
of the corporate governance framework. Such a framework comprises the following mechanisms:
BOARD OF DIRECTORS
Irrespective of whether an enabling moral, social, legal and political environment is available, the board of
directors has a strong role in corporate governance. It cannot play this role effectively unless its members have a
reputation for moral integrity and are technically qualified. They must be adequately aware of the risks and
complexity involved in the banking business, and must try their best to adhere to professional standards. In
Islamic systems, they must have the additional qualifications of being aware of Shariah principles as well as the
Islamic teachings related to business and finance.
It is the responsibility of the board to perform a number of functions without interfering in the day-to-day
management of the bank. For this purpose it must meet regularly and retain full and effective control over the
affairs of the bank. It must hold regular discussions with senior management and internal audit, establish and
approve policies, and monitor progress toward corporate objectives. One of the board's crucial functions is to
clearly specify the strategic objectives and guiding principles of the bank, codes of conduct for senior
management, and standards of appropriate behavior for the staff.
SENIOR MANAGEMENT
While the expression "board of directors", as used above, refers to persons who are generally not only
shareholders themselves but also participate in the governance of the bank, senior management refers to the
CEO and other senior members of staff who perform management functions but are not necessarily
shareholders. Modern corporations are in general not managed by their owners (shareholders); instead,
professional managers are hired to run the business. They are fiduciaries; this creates the principal–agent
problem and leads to conflicts of interest. It is therefore necessary to impose restrictions on self-dealing, fraud,
excessive compensation in its various hidden forms, and other malpractices.
Management is responsible for the day-to-day functions of the bank. It is accountable to the board of
directors, which has the legal authority to appoint and remove the concerned officers. Just as it is necessary to
clearly specify the functions of the board and its members, it is also necessary to regulate the behavior of
managers to ensure that they do not act in a way that would hurt the interests of shareholders and depositors.
SHAREHOLDERS AND DEPOSITORS
In Islamic financial institutions, investment depositors participate in the profit and loss like shareholders. They
have, nevertheless, no voice in shareholders' meetings even though their deposits are generally far greater than
shareholders' capital. Guaranteeing these deposits would be in conflict with the spirit of Islamic finance. Even
though demand depositors are not exposed directly to the risk of the banking business, they are exposed indirectly,
for two main reasons. First, the deposit insurance system does not generally insure demand deposits beyond a
certain limit; second, the losses suffered by banks on their PLS (profit-and-loss-sharing) advances may be
substantial, and the capital and reserves plus investment deposits may not be sufficient to cover them. This is,
however, unlikely to happen except when demand deposits far exceed investment deposits, which is the case in
a number of Islamic banks, or when a substantial proportion of investment deposits has been withdrawn, which
is possible because banks generally allow the withdrawal of these deposits even before maturity. Therefore, all
precautions need to be taken to ensure the continued confidence of depositors in Islamic banks.
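The second part of the loss-absorption argument above can be made concrete with a small numeric sketch: demand depositors bear a loss only once capital, reserves and investment deposits are exhausted. All figures here are hypothetical and chosen purely for illustration.

```python
# Hypothetical loss-absorption waterfall for a bank's PLS losses.
# Demand depositors are hit only after the internal buffer is exhausted.

def demand_depositor_loss(pls_loss, capital, reserves, investment_deposits):
    """Loss that spills over onto demand depositors, if any."""
    buffer = capital + reserves + investment_deposits
    return max(0.0, pls_loss - buffer)

# A bank where demand deposits far exceed investment deposits: the thin
# buffer (50 + 20 + 80 = 150) cannot absorb a PLS loss of 180.
spillover = demand_depositor_loss(pls_loss=180.0, capital=50.0,
                                  reserves=20.0, investment_deposits=80.0)
print(spillover)  # 30.0 falls on demand depositors
```

This is why the indirect exposure of demand depositors grows when investment deposits are small relative to demand deposits, or when investment deposits are withdrawn before losses crystallize.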
CORPORATE GOVERNANCE, FINANCIAL SYSTEM STABILITY AND ECONOMIC GROWTH
Banks are crucial to a country's economy, since they are regarded as the intermediaries of the financial system,
placed right at its heart, and therefore their sound operation is a central determinant of the stability of the
financial system as a whole.
Hence, financial institutions must operate in a prudent, sound and transparent fashion in order to minimize the
possibility of failures and systemic risks. Significantly, financial institutions are not like other ordinary firms:
they have special attributes, which considerably intensify the conventional view of corporate governance
problems. The unique problems and risks that financial institutions pose are centered on their capital structure;
risks relating to credit, liquidity, exposure concentration, interest rates, exchange rates, settlement, and internal
operations; and other loyalty problems, like fraud and self-dealing practices. In this context, there are additional
financial risks associated with the unique importance and operation of banks, like contagion and financial
instability that any banking-oriented corporate governance framework should deal with. This reflects the fact
that the failure of one bank can rapidly affect another through inter-institutional exposures and confidence
effects. And any prolonged and significant disruption to the financial system can have potentially severe effects
on the wider economy.
The figure below depicts financial system stability and its three pillars:

[Figure: the pillars of financial system stability; the recoverable labels are "Regulatory Governance" and "Public Sector Governance"]
However, banks must always maintain a strong corporate governance framework in order to avoid any
instability in the financial system. This, in fact, was not the case when the world faced the financial crisis of
2008.
CORPORATE GOVERNANCE AND 2008 FINANCIAL CRISIS
Many researchers have tried to establish the link between corporate governance and financial crises, and they
note the following:
Cross-border capital flows have become one of the key engines of growth in many developing economies. They
allow a country to finance profitable investment projects, which would otherwise not be possible given the
scarcity of financial resources. When there is a sudden crisis of confidence, however, large reversals can occur
and, as evidenced by the East Asian Crisis of 1997/98, these reversals are usually accompanied by tremendous
economic pain.
Steve Hanke (2003) advances lack of transparency on the part of central banks as one of the reasons financial
crises can occur so suddenly. He chides central bankers for their lack of transparency as it relates to the
presentation of their financial statements. He contends that these statements, if made available on a timely basis
and in a transparent manner, could go a long way towards preventing financial crises, since this would allow one
to identify mark-to-market risk and market disequilibria and thus prevent sudden reversals. Central bank
financial statements become even more important, he argues, in pegged exchange rate regimes: as a crisis
approaches in these systems (reflected by continuous large outflows of foreign reserves), instead of tightening,
the central bank tries to compensate for the loss in reserves by expanding its net domestic assets to maintain
domestic liquidity. Eventually the reserves are depleted and one is faced with a more severe financial crisis than
would have occurred had the market been properly informed of the state of affairs in the country through the
central bank's balance sheets.
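Hanke's sterilization mechanism can be illustrated with a stylized simulation (assumed dynamics, not his actual model): under a peg, the central bank offsets each reserve outflow by expanding net domestic assets (NDA) so that headline domestic liquidity appears unchanged, until reserves run out.

```python
# Stylized pegged-regime dynamics: reserves drain while sterilization
# keeps headline domestic liquidity constant. All numbers are invented.

reserves, nda = 100.0, 50.0   # hypothetical starting balance sheet
outflow_per_period = 15.0     # steady foreign-reserve outflow

period = 0
while reserves > 0:
    period += 1
    loss = min(outflow_per_period, reserves)
    reserves -= loss
    nda += loss               # sterilization: reserves + NDA stays at 150
    print(f"t={period}: reserves={reserves:5.1f}  nda={nda:5.1f}  "
          f"liquidity={reserves + nda:5.1f}")

print(f"Reserves exhausted after {period} periods; the peg collapses.")
```

Because the published liquidity aggregate never moves, outside observers see no warning sign; only a transparent central bank balance sheet, showing reserves falling and NDA rising, would reveal the approaching crisis, which is precisely Hanke's point.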
In contrast, Avinash Persaud (2003) argues that while better corporate governance is a good thing, having it will
not necessarily lead to the disappearance of financial crises. His rationale is that it is never obvious whether a
bubble exists until the bubble bursts. He therefore recommends the adoption of simple rules, rather than
excessive reliance on judgment and discretion. Additionally, given the propensity of the governments of small
open economies to fix their currencies, Persaud recommends that short-term external borrowing should only be
allowed for companies that earn foreign exchange through the sale of their goods and services.
On the other hand, the banking sector in Sudan was generally less affected by the crisis, as its links with regional
and international financial sectors and institutions are either very limited or non-existent. This is attributed
mainly to the small size of its activities and the underdevelopment of the sector, and also to the adoption of the
Islamic Banking System, which does not use conventional Western lending and borrowing tools and practices
and does not use interest rates in its financial dealings at all. Moreover, Sudanese banks do not receive
international or regional bank lending and therefore are not expected to be influenced adversely by the crisis.
Adverse effects will be confined to a few foreign-owned banks, resident in Sudan and registered in UAE stock
markets, whose share prices have dropped.
However, the enormous decline in oil prices and export earnings has also resulted in a sharp drop in budget
revenues, exacerbated the foreign currency shortage, accentuated trade and current account deficits and
increased pressure on the Sudanese pound to depreciate. The cumulative effect of these factors will further slow
banking activity as a greater shortage of liquidity is felt in the economy.
As a result, the following initiatives were taken to enhance corporate governance in banks.
BASEL COMMITTEE INITIATIVES
Among the global guidelines, further initiatives were set by the Basel Committee on Banking Supervision. First
of all, sectoral "good practices" were indicated, taking into account the specificities of banks. In addition, in
2010 the Basel Committee updated its general rules intended to improve corporate governance in banks. The
current version of the document contains 14 rules in 6 areas (BCBS, 2010, October):
• Supervisory board practices
• Senior management
• Risk management and internal control
• Compensation policy
• Complex or opaque corporate structures
• Disclosure of information and transparency.
An extension of these documents is the guidelines for the internal audit function in banks (BCBS, 2011, December),
which formulate 20 rules relating to the following issues: supervisory expectations relevant to the internal audit
function, the relationship of the supervisory authority with the internal audit function, and the supervisory
assessment of the internal audit function.
In addition, the Central Bank of Sudan continually sets rules and regulations to be followed by banks. It has set
the following banking supervision policies in order to ensure that an effective corporate governance framework
is implemented throughout banks:
• Adhering to the standards of supervision issued by the Basel Committee and the Islamic Financial Services Board.
• Tightening banking supervision to achieve financial soundness of banks, raise their financial efficiency and address inadequacies and shortcomings, for the purpose of ensuring the rights of depositors and securing the best investment and utilization of resources.
• Persisting in reinforcing and developing the role of internal supervision in bank and non-bank financial institutions, through activating the role of the boards of directors and strengthening the internal control, audit and compliance officers' systems.
• Activating the adherence of banks and non-bank financial institutions to the regulations and directives of the Central Bank of Sudan.
• Enhancing and activating desk and prudential supervision mechanisms of the Central Bank.
• Activating adherence to the Shariah regulations in banking transactions and encouraging banks to use Islamic financing modes without concentration on the Murabaha mode.
• Activating the banking supervision units in the branches of the Central Bank of Sudan in the states.
• Persevering in applying transparency and financial disclosure standards to the banks and non-bank financial institutions that are subject to the supervision of the Central Bank.
• Continuing to raise the competencies of the employees of the banking system in coordination with the related entities.
• Activating the supervisory role over non-bank financial institutions and microfinance institutions.
• Activating relations with the supervisory bodies in the countries to which Sudan is tied by economic and financial relations.
• Continuing the strengthening and activation of the internal control and risk systems and directorates of the banks in light of acceptable practices and international standards.
• Continuing to activate the supervisory measures with respect to adherence to the requirements of combating money laundering and terrorism financing, in coordination with the related bodies.
• Continuing to tackle the problem of insolvent debts and striving to bring their percentage down to internationally acceptable levels (6%).
• Continuing the activation of the mechanisms for evaluating work systems in the banks.
• Developing the work of the Banks' Deposits Insurance Fund.
CONCLUSION/RECOMMENDATIONS
Corporate governance is very important for the banking sector in order to maintain financial system stability and
to enhance economic growth. This paper has introduced this issue by describing corporate governance practices
in the Sudanese banking sector and how they matter for financial system stability and economic growth. It has
also clarified the link between corporate governance and the recent financial crisis. Finally, in order to implement
good corporate governance practices and to avoid any similar crisis in the future, the following recommendations
are offered:
• The Central Bank of Sudan must continuously monitor the performance of banks to ensure their compliance with corporate governance regulations.
• Banks must recognize the importance of good corporate governance practices and their link to maintaining a sound financial system.
• Bank directors should have the obligation to monitor lending practices, to see that bank policies are enforced, and to ensure that lending practices remain within the institution's overall management ability.
• The corporate governance framework should ensure that timely and accurate disclosure is made on all material matters regarding the financial situation, performance, ownership, and governance of the bank.
• The responsibility of external and internal auditors should be strengthened, and they should be obliged to report any observed non-compliance to supervisors; auditors should be subject to mandatory rotation and should be banned from providing, to any one client, services beyond the audit of financial statements.
REFERENCES
Adam B. Elhiraika and Khalid Abu Ismail, Working Paper 0411, Financial sector policy and poverty reduction in Sudan, www.erf.org.eg/CMS/uploads/pdf/0411_final.pdf
Alan Bollard, 2003, Corporate governance in the financial sector, http://www.rbnz.govt.nz/research_and_publications/speeches/2003/0132484.html
Avinash Persaud and Steve Hanke, 2003, The Link Between Corporate Governance And Financial Crisis, www.centralbank.org.bb/Publications/CGC_Final.pdf
Benjamin A. I. Espiritu, 2005, The Effects of Corporate Governance Regulations on the Practices of Directors of Banks: A Philippine Experience, www.dlsu.edu.ph/research/centers/cberd/.../corporate_governance.pdf
E-Journal USA, Economic Perspectives, 2005, Promoting Growth Through Corporate Governance, http://usinfo.state.gov/journals/journals.htm
Gomathi Viswanathan, 2008, Corporate Governance In Indian Banks, http://EzineArticles.com/?expert=Dr._Gomathi_Viswanathan
International Finance Corporation, 2014, Corporate Governance, ifc.org/corporate governance
M. Umer Chapra and Habib Ahmed, 2002, Corporate Governance in Islamic Financial Institutions, www.irtipms.org/PubText/93.pdf
Medani M. Ahmed, 2010, Global Financial Crisis Discussion Series Paper 19: Sudan Phase 2, www.atdforum.org/IMG/pdf_ODI_Report.pdf
Monika Marcinkowska, 2012, Corporate Governance In Banks: Problems and Remedies, is.muni.cz/do/econ/.../fai/.../FAI_issue2012_02_Marcinkowska.pdf
Nasser Saidi, 2007, Corporate Governance in the Banking Sector: Issues and Challenges, www.difc.ae/corporate-governance-banking-sector-issues-and-challenges
Ross Levine, 2004, The Corporate Governance of Banks: A Concise Discussion of Concepts and Evidence, https://www.econbiz.de/.../the-corporate-governance-of-banks-a-concise-discussion-of-concepts-and-evidence.../10002419340
Sandy Mavrommati, 2006, Corporate Governance Challenges Affecting Financial Stability, www.chasecambria.com/site/journal/article.php?id=186
Stijn Claessens (Foreword by Sir Adrian Cadbury), 2003, Corporate Governance and Development, Global Corporate Governance Forum Focus 1, www.ifc.org/.../Focus_1_Corp_Governance_and_Development.pdf
The Central Bank of Barbados and The Barbados Institute of Banking and Finance at Sherbourne Conference Centre, Barbados, 2003, Corporate Governance in the Financial Sector, www.centralbank.org.bb/Publications/CGC_Final.pdf
MANUSCRIPT SUBMISSION
GUIDELINES FOR CONTRIBUTORS
1. Manuscripts should be submitted preferably through email and the research article / paper
should preferably not exceed 8–10 pages in all.
2. Book review must contain the name of the author and the book reviewed, the place of
publication and publisher, date of publication, number of pages and price.
3. Manuscripts should be typed in 12 font-size, Times New Roman, single spaced with 1"
margin on a standard A4 size paper. Manuscripts should be organized in the following
order: title, name(s) of author(s) and his/her (their) complete affiliation(s) including zip
code(s), Abstract (not exceeding 350 words), Introduction, Main body of paper,
Conclusion and References.
4. The title of the paper should be in capital letters, bold, size 16 and centered at the top of
the first page. The author(s) and affiliations(s) should be centered, bold, size 14 and
single-spaced, beginning from the second line below the title.
First Author Name1, Second Author Name2, Third Author Name3
1Author Designation, Department, Organization, City, email id
2Author Designation, Department, Organization, City, email id
3Author Designation, Department, Organization, City, email id
5. The abstract should summarize the context, content and conclusions of the paper in less
than 350 words, in 12 point italic Times New Roman. The abstract should have about five
key words in alphabetical order, separated by commas, in 12 point italic Times New Roman.
6. Figures and tables should be centered, separately numbered, and self-explanatory. Please note
that table titles must be above the table and sources of data should be mentioned below the
table. The authors should ensure that tables and figures are referred to from the main text.
EXAMPLES OF REFERENCES
All references must be arranged first alphabetically and may then be further sorted
chronologically.
Single author journal article:
Fox, S. (1984). Empowerment as a catalyst for change: an example for the food industry.
Supply Chain Management, 2(3), 29–33.
Bateson, C. D. (2006). Doing Business after the Fall: The Virtue of Moral Hypocrisy.
Journal of Business Ethics, 66: 321–335.
Multiple author journal article:
Khan, M. R., Islam, A. F. M. M., & Das, D. (1886). A Factor Analytic Study on the Validity
of a Union Commitment Scale. Journal of Applied Psychology, 12(1), 129-136.
Liu, W. B., Wongcha, A., & Peng, K. C. (2012). Adopting Super-Efficiency And Tobit Model
On Analyzing the Efficiency of Teachers' Colleges In Thailand. International Journal on
New Trends In Education and Their Implications, Vol. 3.3, 108–114.
Text Book:
Simchi-Levi, D., Kaminsky, P., & Simchi-Levi, E. (2007). Designing and Managing the
Supply Chain: Concepts, Strategies and Case Studies (3rd ed.). New York: McGraw-Hill.
S. Neelamegham, "Marketing in India, Cases and Readings", Vikas Publishing House Pvt. Ltd,
III Edition, 2000.
Edited book having one editor:
Raine, A. (Ed.). (2006). Crime and schizophrenia: Causes and cures. New York: Nova
Science.
Edited book having more than one editor:
Greenspan, E. L., & Rosenberg, M. (Eds.). (2009). Martin's annual criminal code: Student
edition 2010. Aurora, ON: Canada Law Book.
Chapter in edited book having one editor:
Bessley, M., & Wilson, P. (1984). Public policy and small firms in Britain. In Levicki, C.
(Ed.), Small Business Theory and Policy (pp. 111–126). London: Croom Helm.
Chapter in edited book having more than one editor:
Young, M. E., & Wasserman, E. A. (2005). Theories of learning. In K. Lamberts, & R. L.
Goldstone (Eds.), Handbook of cognition (pp. 161-182). Thousand Oaks, CA: Sage.
Electronic sources should include the URL of the website at which they may be found,
as shown:
Sillick, T. J., & Schutte, N. S. (2006). Emotional intelligence and self-esteem mediate between
perceived early parental love and adult happiness. E-Journal of Applied Psychology, 2(2), 38–48. Retrieved from http://ojs.lib.swin.edu.au/index.php/ejap
Unpublished dissertation/ paper:
Uddin, K. (2000). A Study of Corporate Governance in a Developing Country: A Case of
Bangladesh (Unpublished Dissertation). Lingnan University, Hong Kong.
Article in newspaper:
Yunus, M. (2005, March 23). Micro Credit and Poverty Alleviation in Bangladesh. The
Bangladesh Observer, p. 9.
Article in magazine:
Holloway, M. (2005, August 6). When extinct isn't. Scientific American, 293, 22-23.
Website of any institution:
Central Bank of India (2005). Income Recognition Norms - Definition of NPA. Retrieved
August 10, 2005, from http://www.centralbankofindia.co.in/home/index1.htm
7. The submission implies that the work has not been published earlier elsewhere and is not
under consideration to be published anywhere else if selected for publication in the journal
of Indian Academicians and Researchers Association.
8. Decision of the Editorial Board regarding selection/rejection of the articles will be final.
PUBLICATION FEE
The International Journal of Research in Science and Technology is an online open access
journal, which provides free, instant, worldwide and barrier-free access to the full-text of all
published manuscripts to all interested readers, in the best interests of the research community.
Open access allows the research community to view any manuscript without a subscription,
enabling far greater distribution of an author's work than the traditional subscription-based
publishing model. The review and publication fee of a manuscript is paid from an author's
research budget, or by their supporting institutions.
As costs are involved in every stage of the publication process, like manuscript handling from
submission to publication, peer-review, copy-editing, typesetting, tagging and indexing of
articles, electronic composition and production, hosting the final article on dedicated servers,
electronic archiving, server and website update and maintenance, and administrative overheads,
each author is asked to pay a certain fee as follows.
The publication fee for the online journal is Rs 1000. If the author wants a printed
copy of the journal, then an additional Rs 500/- has to be paid (which includes printing
charges of the journal, hard copy of publication certificate for all authors, packaging
and courier charges).
The publication fee for the online journal is $50. If the author wants a printed copy
of the journal, then an additional $50 has to be paid (which includes printing charges of
the journal, hard copy of publication certificate for all authors, packaging and courier
charges).
The principal author will get one complimentary copy of the journal (online / print) based on the fee
paid, along with a publication certificate (e-copy / hard copy) after publication of the paper.
The publication fee can be paid by direct deposit / online transfer in favour of IARA at HDFC Bank
(IFS code: HDFC0001474, SWIFT code / BIC Code: HDFCINBBCAL, HDFC Bank,
Nezone Plaza, Near Sohum Shoppe, Christian Basti, G. S. Road, Guwahati, Assam,
India), Current Account no. 50200006808986.
If an author does not have funds to pay the publication fee, he/she will have an opportunity to request
the Editor for a fee waiver through the Head of his/her Institution/Department/University with
the reasons, because IARA does not want fees to prevent the publication of worthy work.
However, fee waivers are granted on a case-to-case basis to authors who lack funds. To apply for
a waiver, author/s must make the request during the submission process. Any request received thereafter
will not be considered.
Indian Academicians and Researchers Association
1, Shanti Path, Opp. Darwin Campus II, Zoo Road Tiniali, Guwahati, Assam
email: info@iaraedu.com / submission@iaraedu.com