
A POST-QUANTUM CRYPTOGRAPHY BASED ACCESS

CONTROL SYSTEM FOR INFORMATION SECURITY.


PRESENTED BY
IBRAHIM OLUWADAMILARE RILIWAN
(CSC/12/0989)

SUBMITTED TO
DR. GABRIEL AROME

OF
THE DEPARTMENT OF COMPUTER SCIENCE, SCHOOL OF
SCIENCES,
FEDERAL UNIVERSITY OF TECHNOLOGY AKURE, ONDO
STATE NIGERIA.
IN PARTIAL FULFILLMENT FOR THE AWARD OF
BACHELOR OF TECHNOLOGY (B. TECH) IN COMPUTER
SCIENCE

April, 2016.

CHAPTER ONE
INTRODUCTION

1.1 General Overview


The history of information security begins with computer security. Alese (2000, 2004) defines
computer network security as the integration of Access Control, Authentication, Confidentiality,
Integrity, Availability and Non-repudiation to protect an individual, an institution or a nation.
The need for computer security arose during World War II when the first mainframes, developed
to aid computations for communication code breaking, were put to use. Multiple levels of
security were implemented to protect these mainframes and maintain the integrity of their data.
Access to sensitive military locations, for example, was controlled by means of badges, keys, and
the facial recognition of authorized personnel by security guards. The growing need to maintain
national security eventually led to more complex and more technologically sophisticated
computer security safeguards.
Information security is the protection of information and its critical elements. Confidentiality,
integrity and availability are the three main characteristics of information security.
Confidentiality ensures that information is accessed only by those who have privileges;
integrity is the state of being complete and uncorrupted; and availability enables users or
other systems to access information when needed. Information security is guaranteed through
data encryption (Alese et al., 2002). In the encryption process, an information system uses
cryptographic algorithms to transform the information into ciphertext form, which ensures the
integrity of the information during communication (Mohammad et al., 2014).
In the last three decades, public key cryptography has become an indispensable component of
our global communication digital infrastructure. These networks support a plethora of
applications that are important to our economy, our security, and our way of life, such as mobile
phones, internet commerce, social networks, and cloud computing. In such a connected world,
the ability of individuals, businesses and governments to communicate securely is of the utmost
importance.
Many of our most crucial communication protocols rely principally on three core cryptographic
functionalities: public key encryption, digital signatures, and key exchange. Currently, these
functionalities are primarily implemented using Diffie-Hellman key exchange, the RSA
cryptosystem, and elliptic curve cryptosystems. The security of these depends on the difficulty of
certain number theoretic problems such as Integer Factorization or the Discrete Log Problem
over various groups.
In 1994, Peter Shor of Bell Laboratories showed that quantum computers can efficiently solve
each of these problems, thereby rendering all public key cryptosystems based on such
assumptions impotent. A sufficiently powerful quantum computer would thus put many forms of
modern communication, from key exchange to encryption to digital authentication, in peril.
Thus the goal of post-quantum cryptography (also called quantum-resistant cryptography) is to
develop cryptographic frameworks that are secure against both quantum and classical computers,
and can interoperate with existing communications protocols and networks.

Access control deals with the elicitation, specification, maintenance and enforcement of
authorization policies in software-based systems (Sandhu and Samarati, 1994). In order to allow
the enforcement of authorization policies, the high-level control objectives specified for a system
need to be mapped to the structures provided by an access control model. An access control model
provides an abstract framework for the definition of authorization policy rules. It also defines
how essential access control elements (such as subjects, operations and objects) can be interrelated.
Attribute, Location and Time-Based Access Control (ABLTAC) is used by many enterprise
systems to protect their information resources from unauthorized access. ABLTAC policies are
defined in terms of permissions that are associated with attributes assigned to users. A permission
determines what operations a user with a specific attribute can perform on information resources.
Attributes define, classify, or annotate the datum to which they are assigned. The semantics of an
attribute indicate some purpose or characteristic and, when used within larger collections, enable
efficient identification and classification of like objects. These attributes are then used to
associate sets of permissions and tasks with the specified individuals. In an attribute-based
information security system, the flow of data from one person or department to another depends
on the attributes possessed by the set of people permitted to access it.

1.2 MOTIVATION
The rapid evolution of computers and microprocessor chips with higher computational power
has led to the creation of quantum computers, specially designed for effective information
processing and communication, which in recent years has heightened security concerns
(confidentiality, integrity, availability, authenticity, accountability, non-repudiation and
reliability) among experts in the industry. These issues are not far-fetched: classical
computer security systems are based on public key cryptographic schemes such as RSA, the
Digital Signature Algorithm (DSA) and Elliptic Curve Cryptography (ECC), whose security
depends on the difficulty of solving Discrete Logarithm Problems (DLP) and Integer
Factorization Problems (IFP) (Gabriel, Alese, et al., 2014). The advent of quantum computers
in large quantities will make it easy to solve these mathematically hard problems, thus
seriously damaging existing information security frameworks. This project therefore attempts
to develop a code-based Post-Quantum Cryptography access control framework to ensure data
integrity and information security.

1.3 PROJECT OBJECTIVES


The specific objectives of this project work are to:
i. Design a Post-Quantum Cryptography based Access Control framework for Information Security.
ii. Implement the framework in (i).
iii. Evaluate the performance of the proposed system using selected standard metrics.

1.4 PROJECT METHODOLOGY


In order to achieve the objectives of this project, the following set of activities will be carried
out:
i. LITERATURE REVIEW: An extensive review of related literature on the subject would be
carried out. This review would examine the aims and objectives, functions, methodology and
limitations of existing cryptography based access control systems developed earlier.
ii. SYSTEM DESIGN: This would clearly describe the design of the various system subunits and
their operations. Software design tools such as the Unified Modelling Language (UML) would be
used to design the system class models; other design tools, including Use Case Diagrams,
Sequence Diagrams, Activity and State Charts, would also be used during the design process to
explain the system operations and the interactions between the system components. The overall
architecture of the proposed system is a three-tiered architecture. The first layer is the
Application Layer, which serves as the primary interface between the system and the client.
Instructions from the Application Layer are authenticated in the middle tier, the Access Control
Layer, which contains an access control engine that uses a post-quantum cryptography algorithm
for information encryption and an access role map used to authenticate each unique user of the
system. The last layer is the Resource Layer, which is the information repository of the system.
iii. SYSTEM DEVELOPMENT AND IMPLEMENTATION: This follows the system design phase of the
project. During this phase, the component modules created during the design phase would be
developed and integrated into the entire system. The development of the modules would proceed
in phases:
a. USER INTERFACE / FRONT END DEVELOPMENT: This involves the development of a
user-friendly interface for easy interaction with the system by its users. The user interface
would be responsive, i.e. device independent, so that all user devices present the same
information. HTML5 and CSS3, together with JavaScript and some jQuery libraries, would be
employed to achieve this aim.
b. BACKEND DEVELOPMENT: This involves the development of the web services used in the
system. These include the Google Geolocation API, which would be used for location
authentication, and the access control / user creation module, which would be built around a
code-based post-quantum cryptography algorithm as the primary encryption algorithm to ensure
information integrity and user authentication. The code-based post-quantum algorithm that would
be used for the access control engine is the McEliece algorithm, owing to its efficiency for
public key encryption and decryption. When unique identities of system users are received from
the application layer, the system interface module of the access control engine decrypts them
using the McEliece decryption script in the evaluator module of the engine. The decrypted
entities are then mapped to their respective access roles using the mapping function in the
access control engine. Final entity authentication is done when the Predicate API extracts the
environment parameters of the user at that instance, such as the time of system access and the
geo-location of access, and compares them to the authorized environment parameters of the
user's access role. Full system access is granted if authentication is successful; otherwise
the user is locked out of the system.
iv. SYSTEM TESTING AND EVALUATION: After the development of the entire system, unit testing
of the individual modules that make up the system would be done to ensure the efficiency of the
system components, after which the entire system would be evaluated against other classical
cryptosystems using selected standard metrics.
v. SYSTEM DEPLOYMENT: The system will be deployed on a computer with at least a Core i3
microprocessor with a bus speed of at least 1.6 GHz, running the Windows 10 operating system,
and with a minimum of 2 GB of RAM.
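The authentication flow of the Access Control Layer described above can be sketched as follows. This is a minimal illustration only: the identity string, the access role map entries and the stand-in decrypt_identity function are hypothetical, and a real implementation would perform actual McEliece decoding with the private key rather than use the placeholder shown here.

```python
from datetime import time

# Hypothetical access role map: decrypted identity -> authorized environment.
ACCESS_ROLE_MAP = {
    "user-001": {"role": "records_officer",
                 "hours": (time(8, 0), time(18, 0)),
                 "location": "AKURE"},
}

def decrypt_identity(ciphertext: str) -> str:
    """Placeholder for the McEliece decryption step in the evaluator module;
    a real system would decode the error-laden codeword with the private
    (Goppa-code) key to recover the identity."""
    return ciphertext

def authenticate(ciphertext: str, access_time: time, location: str) -> bool:
    identity = decrypt_identity(ciphertext)
    entry = ACCESS_ROLE_MAP.get(identity)
    if entry is None:
        return False                      # unknown identity: lock out
    start, end = entry["hours"]
    if not (start <= access_time <= end):
        return False                      # outside authorized access time
    if location != entry["location"]:
        return False                      # outside authorized geo-location
    return True                           # full system access granted

print(authenticate("user-001", time(10, 30), "AKURE"))  # True
print(authenticate("user-001", time(22, 0), "AKURE"))   # False
```

The two environment checks mirror the Predicate API step: time of access and geo-location are compared against the parameters stored for the user's access role before access is granted.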
1.5 CONTRIBUTION TO KNOWLEDGE
After full implementation of this project, a post-quantum cryptography based access control
system would be available to provide solutions to information security concerns.

CHAPTER TWO
LITERATURE REVIEW

2.1 GENERAL OVERVIEW


In order to understand the concepts associated with security and records management systems, it
is imperative to examine and analyze published material from experts in the field. The purpose
of this review is to examine the literature on information security and on the creation and
archival processing of electronic records. The review is based on an exhaustive assessment of
the literature on information security, access control systems, electronic management and
electronic records, and it also contains an overview of the main concepts associated with the
creation of a secure electronic records management system from the perspective of published
experts.

2.2 INFORMATION SECURITY


In the computer industry, the term security -- or the phrase computer security -- refers to
techniques for ensuring that data stored in a computer cannot be read or compromised by any
individuals without authorization. Most computer security measures involve data encryption and
passwords. Data encryption is the translation of data into a form that is unintelligible without a
deciphering mechanism. A password is a secret word or phrase that gives a user access to a
particular program or system.
In his attempt to define computer network security, Alese (2000, 2004) defines computer security
as the integration of Access Control, Authentication, Confidentiality, Integrity, Availability and
Non-repudiation to protect an individual, an institution or a nation.
i. ACCESS CONTROL: Ensuring that users access only those resources and services that they are
entitled to access and that qualified users are not denied access to services that they
legitimately expect to receive.
ii. AUTHENTICATION: Ensuring that users are the persons they claim to be. Gabriel et al.
(2014, 2015) and Stallings (2005), in their attempts to define security, describe authentication
as a service that gives a system the capability to verify that a user is the very one he or she
claims to be. Some of the common means used to assure authentication include a user's username,
password, retinal image, physical location and identity card.
iii. CONFIDENTIALITY: Ensuring that information is not accessed by unauthorized persons.
Information that is sensitive or proprietary needs to be protected through more stringent
control mechanisms, which are:
a. Authentication
b. Authorization
These two mechanisms are used to ensure the confidentiality of information.


iv. INTEGRITY: Ensuring that information is not altered by unauthorized persons in a way that
is not detectable by authorized users. This service, through encryption and hashing algorithms,
ensures the integrity of information in a system.
v. AVAILABILITY: Ensuring that a system is operational and functional at a given moment,
usually provided through redundancy; loss of availability is often referred to as
"denial-of-service". It applies both to data and to services in an information system.
vi. NONREPUDIATION: Ensuring that the originators of messages cannot deny that they in fact
sent the messages. In practice, there is a possibility that the sender of a message may deny
ownership of the exchanged digital data that originated from him or her. This service, through
digital signature and encryption algorithms, ensures that digital data may not be repudiated by
providing proof of origin that is difficult to deny. A digital signature is a cryptographic
mechanism that is the electronic equivalent of a written signature, authenticating a piece of
data as to the identity of the sender.
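The integrity service described in (iv) can be illustrated with a short hashing sketch; the message contents below are invented for illustration:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 fingerprint of a message."""
    return hashlib.sha256(data).hexdigest()

message = b"Transfer approved by registrar"
fingerprint = digest(message)       # stored or transmitted with the message

# On receipt, recompute and compare; any alteration changes the digest.
tampered = b"Transfer approved by registrar!"
print(digest(message) == fingerprint)   # True: unaltered
print(digest(tampered) == fingerprint)  # False: alteration detected
```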
Over the years there has been dynamism in information security; the revolution has come both in
the modes by which attacks are committed and in the countermeasures. In the present age, a state
of security can be guaranteed if the following forms of protection mechanism are put in place:
i. DETERRENCE: Reduces the threat to information assets through fear. It can consist of
communication strategies designed to impress on potential attackers the likelihood of getting
caught.
ii. PREVENTION: The traditional core of computer security. It consists of implementing
safeguards such as access control and encryption tools. Absolute prevention is theoretical,
since there is a vanishing point beyond which additional preventative measures are no longer
cost-effective.
iii. DETECTION: Works best in conjunction with preventative measures. When prevention fails,
detection should kick in, preferably while there is still time to prevent damage. It includes
log-keeping and auditing activities.

2.3 ACCESS CONTROL

Access control is concerned with determining the allowed activities of legitimate users,
mediating every attempt by a user to access a resource in the system. A given information
technology (IT) infrastructure can implement access control systems in many places and at
different levels. Operating systems use access control to protect files and directories, and
database management systems (DBMS) apply access control to regulate access to tables and views.
Most commercially available application systems implement access control, often independently of
the operating systems and/or DBMSs on which they are installed. The objectives of an access
control system are often described in terms of protecting system resources against inappropriate
or undesired user access. From a management perspective, this objective could just as well be
described in terms of the optimal sharing of information resources; after all, the main
objective of an information system is to make information available to users and applications.
While a greater degree of sharing may seem to get in the way of resource protection, in reality
a well-managed and effective access control system actually facilitates sharing.
2.3.1 CONCEPT OF ACCESS CONTROL

i. OBJECT: An entity that contains or receives information. Access to an object potentially
implies access to the information it contains. Examples of objects are records, fields (in a
database record), blocks, pages, segments, files, directories, directory trees, processes and
programs, as well as processors, video displays, keyboards, clocks, printers, and network nodes.
ii. SUBJECT: An active entity, generally in the form of a person, process, or device, that
causes information to flow among objects.
iii. OPERATION: An active process invoked by a subject. For example, when an automatic teller
machine (ATM) user enters a card and the correct personal identification number (PIN), the
control program operating on the user's behalf is a process, but the subject can initiate more
than one operation: deposit, withdrawal, balance inquiry, etc.
iv. PERMISSION (USER PRIVILEGE): An authorization to perform some action on the system. In most
computer security literature, the term permission refers to some combination of object and
operation. A particular operation used on two different objects represents two distinct
permissions and, similarly, two different operations applied to a single object represent two
distinct permissions. For example, a bank teller may have permission to execute debit and credit
operations on customer records through transactions, while an accountant may execute debit and
credit operations on the general ledger, which consolidates the bank's accounting data.
v. ACCESS CONTROL LIST (ACL): A list associated with an object that specifies all the subjects
that can access the object, along with their rights to the object. Each entry in the list is a
pair (subject, set of rights). An ACL corresponds to a column of the access control matrix
(described next). ACLs are frequently implemented, directly or as an approximation, in modern
operating systems.
vi. ACCESS CONTROL MATRIX: A table in which each row represents a subject, each column
represents an object, and each entry is the set of access rights for that subject to that
object.
vii. SEPARATION OF DUTY (SOD): The principle that no user should be given enough privileges to
misuse the system. For example, the person authorizing a paycheck should not also be the one who
can prepare it. Separation of duties can be enforced either statically, by defining conflicting
roles (i.e., roles which cannot be executed by the same user), or dynamically, by enforcing the
control at access time.
viii. SAFETY: The property that the access control configuration (e.g., access control mechanism
or model) will not result in the leakage of permissions to an unauthorized principal. Thus, a
configuration is said to be safe if no permission can be leaked to an unauthorized or unintended
principal.
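The ACL and access control matrix concepts above can be made concrete with a small sketch, using the teller/accountant permissions from the earlier example; each inner mapping is one column of the matrix:

```python
# Each object's ACL is a mapping of subject -> set of rights,
# i.e. one column of the access control matrix.
ACLS = {
    "customer_records": {"teller": {"debit", "credit"},
                         "auditor": {"read"}},
    "general_ledger":   {"accountant": {"debit", "credit"}},
}

def is_permitted(subject: str, operation: str, obj: str) -> bool:
    """A permission is an (object, operation) combination granted to a subject."""
    return operation in ACLS.get(obj, {}).get(subject, set())

print(is_permitted("teller", "debit", "customer_records"))  # True
print(is_permitted("teller", "debit", "general_ledger"))    # False: distinct permission
```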

2.3.2 POLICIES, MODELS AND MECHANISMS

When planning an access control system, three abstractions of control should be considered:
access control policies, models, and mechanisms. Access control policies are high-level
requirements that specify how access is managed and who, under what circumstances, may access
what information. While access control policies can be application-specific and thus taken into
consideration by the application vendor, policies are just as likely to pertain to user actions
within the context of an organizational unit or across organizational boundaries. For instance,
policies may pertain to resource usage within or across organizational units or may be based on
need-to-know, competence, authority, obligation, or conflict-of-interest factors. Such policies
may span multiple computing platforms and applications. At a high level, access control policies
are enforced through a mechanism that translates a user's access request, often in terms of a
structure that a system provides. There are a wide variety of structures; for example, a simple
table lookup can be performed to grant or deny access. Although no well-accepted standard yet
exists for determining their policy support, some access control mechanisms are direct
implementations of formal access control policy concepts. Rather than attempting to evaluate
and analyze access control systems exclusively at the mechanism level, security models are
usually written to describe the security properties of an access control system.
A model is a formal presentation of the security policy enforced by the system and is useful for
proving theoretical limitations of a system. Access control models are of general interest to both
users and vendors. They bridge the rather wide gap in abstraction between policy and
mechanism. Access control mechanisms can be designed to adhere to the properties of the model.
Users see an access control model as an unambiguous and precise expression of requirements.
Vendors and system developers see access control models as design and implementation
requirements. On one extreme, an access control model may be rigid in its implementation of a
single policy. On the other extreme, a security model will allow for the expression and
enforcement of a wide variety of policies and policy classes.
2.3.3 TYPES OF ACCESS CONTROL POLICIES

There are several well-known access control policies, which can be categorized as discretionary
or non-discretionary. Typically, discretionary access control policies are associated with
identity-based access control, and non-discretionary access controls are associated with
rule-based controls (for example, mandatory security policy).

2.3.3.1 DISCRETIONARY ACCESS CONTROL (DAC)


DAC leaves a certain amount of access control to the discretion of the object's owner or anyone
else who is authorized to control the object's access. For example, it is generally used to limit
a user's access to a file; it is the owner of the file who controls other users' access to the
file. Only those users specified by the owner may have some combination of read, write, execute,
and other permissions to the file. DAC policy tends to be very flexible and is widely used in the
commercial and government sectors. However, DAC is known to be inherently weak for two reasons.
First, granting read access is transitive; for example, when Ann grants Bob read access to a
file, nothing stops Bob from copying the contents of Ann's file to an object that Bob controls.
Bob may now grant any other user access to the copy of Ann's file without Ann's knowledge.
Second, DAC policy is vulnerable to Trojan horse attacks. Because programs inherit the identity
of the invoking user, Bob may, for example, write a program for Ann that, on the surface,
performs some useful function while at the same time destroying the contents of Ann's files.
When the problem is investigated, the audit files would indicate that Ann destroyed her own
files. Thus, formally, the drawbacks of DAC are as follows:
i. Information can be copied from one object to another; therefore, there is no real assurance
on the flow of information in a system.
ii. No restrictions apply to the usage of information once the user has received it.
iii. The privileges for accessing objects are decided by the owner of the object, rather than
through a system-wide policy that reflects the organization's security requirements.
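The owner-mediated nature of DAC, and its transitivity weakness, can be sketched as follows (Ann and Bob as in the text; the class and method names are illustrative, not part of any real system):

```python
class DacObject:
    """A DAC-protected object: only the owner may edit its ACL."""
    def __init__(self, owner: str):
        self.owner = owner
        self.acl = {owner: {"read", "write"}}

    def grant(self, granter: str, user: str, rights: set):
        # Access is at the owner's discretion, not a system-wide policy.
        if granter != self.owner:
            raise PermissionError("only the owner may grant access")
        self.acl.setdefault(user, set()).update(rights)

    def can(self, user: str, right: str) -> bool:
        return right in self.acl.get(user, set())

f = DacObject("Ann")
f.grant("Ann", "Bob", {"read"})
print(f.can("Bob", "read"))   # True
print(f.can("Bob", "write"))  # False
# The weakness: nothing in this model stops Bob from copying what he can
# read into a new object that Bob owns and re-granting access himself.
```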
2.3.3.2 NON-DISCRETIONARY ACCESS CONTROL

In general, all access control policies other than DAC are grouped in the category of
Non-Discretionary Access Control (NDAC). As the name implies, policies in this category have
rules that are not established at the discretion of the user. Non-discretionary policies
establish controls that cannot be changed by users, but only through administrative action.
A SEPARATION OF DUTY (SOD) policy can be used to enforce constraints on the assignment of users
to roles or tasks. An example of such a static constraint is the requirement that two roles be
mutually exclusive; if one role requests expenditures and another approves them, the
organization may prohibit the same user from being assigned to both roles. So, membership in one
role may prevent the user from being a member of one or more other roles, depending on the SOD
rules, as in Workflow and Role-Based Access Control. Another example is a history-based SOD
policy that regulates, for example, whether the same subject (role) can access the same object
a certain number of times.
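A static SOD constraint of the kind described above (mutually exclusive roles) reduces to a subset check; the role names here are illustrative:

```python
# Pairs of roles the organization forbids the same user from holding.
CONFLICTING_ROLE_PAIRS = {
    frozenset({"expenditure_requester", "expenditure_approver"}),
}

def violates_sod(assigned_roles: set) -> bool:
    """True if the assignment contains any mutually exclusive role pair."""
    return any(pair <= assigned_roles for pair in CONFLICTING_ROLE_PAIRS)

print(violates_sod({"expenditure_requester", "clerk"}))                 # False
print(violates_sod({"expenditure_requester", "expenditure_approver"}))  # True
```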

2.3.3.3 ATTRIBUTE BASED ACCESS CONTROL (ABAC)


Attribute Based Access Control (ABAC) is an access control policy whereby access control
decisions are made based on a set of characteristics, or attributes, associated with the
requester, the environment, and/or the resource itself. Each attribute is a discrete, distinct
field that a policy decision point can compare against a set of values to determine whether to
allow or deny access. The attributes do not necessarily need to be related to each other, and in
fact the attributes that go into making a decision can come from disparate, unrelated sources.
They can be as diverse as the date an employee was hired, the projects on which the employee
works, the location where the employee is stationed, or some combination of the above. One
should also note that an employee's role in the organization can serve as one attribute that can
be (and often is) used in making an access control decision.
A typical ABAC scenario involves a requester who attempts to access a system either directly or
through an intermediary. The requester will have to directly or indirectly provide a set of
attributes that will be used to determine whether the access will be allowed. Once the requester
provides these attributes, they are checked against the permissible attributes and a decision will
be made depending on the rules for access. A key advantage to the ABAC model is that there is
no need for the requester to be known in advance to the system or resource to which access is
sought. As long as the attributes that the requester supplies meet the criteria for gaining entry,
access will be granted. Thus, ABAC is particularly useful for situations in which organizations or
resource owners want unanticipated users to be able to gain access as long as they have attributes
that meet certain criteria. This ability to determine access without the need for a predefined list of
individuals that are approved for access is critical in large enterprises where the people may join
or leave the organization arbitrarily.
For relatively simple implementations, large databases or other infrastructure are not necessary
and the application logic for allowing access based on attributes is all that is required. In more
complicated environments, however, the need for databases becomes critical, particularly if some
of the attributes that go into making a decision include organizational or personal information.
For example, if a person's role in the organization were used as one of the attributes that
determine access, a database and directory services infrastructure becomes indispensable.
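A toy policy decision point for the ABAC model described above might look like this; the attribute names and permissible values are invented for illustration:

```python
# Permissible attribute values for one protected resource.
POLICY = {
    "department": {"finance"},
    "project":    {"audit-2016"},
    "location":   {"HQ"},
}

def decide(request_attributes: dict) -> bool:
    """Allow access only if every policy attribute matches a permissible
    value. The requester need not be known to the system in advance;
    only the supplied attributes are checked."""
    return all(request_attributes.get(name) in allowed
               for name, allowed in POLICY.items())

print(decide({"department": "finance", "project": "audit-2016",
              "location": "HQ"}))    # True: attributes meet the criteria
print(decide({"department": "sales", "project": "audit-2016",
              "location": "HQ"}))    # False: denied
```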

2.4 ACCESS CONTROL SCHEMES USED IN INFORMATION SECURITY


Security has grown to be one of the major concerns of experts in the information and
communication technology ecosystem. Owing to the growth of network connectivity, the issue of
network security is becoming increasingly demanding as far as the size and implementation of new
information technologies are concerned (Anderson, 2001; Manchala, 2000). A good network security
scheme must address the issues of availability, confidentiality, integrity, accuracy, efficiency
and usability. This means that a good security measure placed on a records management system
must be able to work in real time. Several attempts have been made to provide security using a
software-agent-systems approach. In these systems, the main focus was on providing a solution
for specific security issues, such as authentication and authorization. Alowolodu (2009)
ascertained the essence of network security and used a Genetic Algorithm (GA) to differentiate
between a normal network connection and an attack. The GA, a programming technique that mimics
biological evolution as a problem-solving strategy, achieved a success rate of almost 95%. One
of the problems of the GA was finding a representation of the problem at hand, since there are
various ways by which a given problem can be represented or encoded. Balding (2008) also
developed a framework using multi-agent systems for Internet security. The proposed system
architecture of this approach is composed of three different agent types classified by their
functionalities. The first type is responsible for intrusion detection; the second type is
responsible for encryption and decryption of messages, while the third type can act as the
combination of the previous two types. Although this approach provides a useful security
system, it does not address some other important issues such as authentication, authorization,
digital signature, and verification security services. Lalana (2002) proposed an approach to
solve some of the security problems in multi-agent systems, which utilizes delegation-based
trust management. However, the main focus of that approach was on authentication and
authorization.


Ron Rivest, Adi Shamir and Leonard Adleman of the Massachusetts Institute of Technology
developed a public key cryptosystem called RSA in 1977. This algorithm uses two different but
mathematically linked keys, one public and one private. The public key can be shared with
everyone, whereas the private key must be kept secret. In RSA cryptography, both the public and
the private keys can encrypt a message; the opposite key from the one used to encrypt a message
is used to decrypt it. This attribute is one reason why RSA has become the most widely used
asymmetric algorithm: it provides a method of assuring the confidentiality, integrity,
authenticity and non-repudiability of electronic communications and data storage.
Many protocols like SSH, OpenPGP, S/MIME, and SSL/TLS rely on RSA for encryption and digital
signature functions. It is also used in software programs -- browsers are an obvious example --
which need to establish a secure connection over an insecure network like the Internet or
validate a digital signature. RSA signature verification is one of the most commonly
performed operations in IT. The security of RSA relies on the computational difficulty of
factoring large integers. As computing power increases and more efficient factoring algorithms
are discovered, the ability to factor larger and larger numbers also increases. Encryption strength
is directly tied to key size, and doubling key length delivers an exponential increase in strength,
although it does impair performance. RSA keys are typically 1024- or 2048-bits long, but experts
believe that 1024-bit keys could be broken in the near future, which is why government and
industry are moving to a minimum key length of 2048-bits. A team of researchers which included
Adi Shamir, a co-inventor of RSA, has successfully extracted a 4096-bit RSA key using
acoustic cryptanalysis. Amandeep et al. (2013) examined an efficient data-storage security
algorithm for a cloud environment using the RSA algorithm, investigating its effectiveness for
ensuring data integrity and information security in the cloud. Masthanamma et al. (2015) further
explored the effective use of the RSA encryption algorithm to secure data in a cloud computing
ecosystem. Another encryption algorithm worthy of mention is Elliptic Curve Cryptography,
which is gradually gaining ground in the information security ecosystem.
Neal Koblitz (Koblitz, 1987) and Victor Miller (Miller, 1985) in 1985 invented Elliptic Curve
Cryptosystems (ECC). These can be viewed as elliptic-curve analogues of the older discrete
logarithm (DL) cryptosystems, in which the subgroup is replaced by the group of points on an
elliptic curve over a finite field. In comparison with RSA algorithm, Elliptic Curve Cryptography
provides higher security with less key sizes and requires significantly less computational
resources which make it attractive for use in digital signature smart cards (Mailov, et al., 2015).
The mathematical basis for the security of elliptic curve cryptosystems is the computational
intractability of the elliptic curve discrete logarithm problem (ECDLP) (Certicom, 2009). ECC
generates keys through the properties of the elliptic curve equation; according to some
researchers, ECC can yield a level of security with a 164-bit key that other systems require a
1,024-bit key to achieve. Because ECC helps to establish equivalent security with lower
computing power and battery resource usage, it is becoming widely used for standalone and
mobile applications (Mills, 2009). ECC approach to security provides authentication,
authorization, non-repudiation, digital signature, and verification security services (Alese 2004,
Jayabhaskar, et al., 2012, Alowolodu, et al., 2013, Akinyede, et al., 2014, Greeshma, et al., 2014).
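The group structure underlying the ECDLP can be illustrated with a minimal sketch of elliptic-curve arithmetic over a small prime field. The curve y^2 = x^3 + 2x + 2 over F_17 with generator (5, 1) is a common textbook toy example and is an assumption here; real ECC uses fields of roughly 256 bits, where the brute-force search below is infeasible:

```python
# Toy elliptic-curve group law and scalar multiplication over F_17,
# showing why recovering the secret scalar (the ECDLP) is the hard
# problem ECC relies on. Tiny parameters; NOT secure.

P_MOD, A = 17, 2            # field modulus and curve coefficient a
O = None                    # point at infinity (group identity)

def add(p1, p2):
    # Chord-and-tangent addition on y^2 = x^3 + A*x + 2 over F_17.
    if p1 is O: return p2
    if p2 is O: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O            # p2 is the inverse of p1
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def mul(k, p):
    # Double-and-add scalar multiplication: computes k*p.
    acc = O
    while k:
        if k & 1:
            acc = add(acc, p)
        p, k = add(p, p), k >> 1
    return acc

G = (5, 1)                  # generator of a group of order 19
pub = mul(9, G)             # public key for secret scalar k = 9

# ECDLP: recovering k from pub means searching scalars. Trivial at
# this size, infeasible at real (~256-bit) sizes.
k = next(i for i in range(1, 19) if mul(i, G) == pub)
assert k == 9
```

The brute-force loop at the end is exactly the attacker's problem: with a 256-bit group it would take on the order of 2^128 group operations with the best known classical algorithms, which is why ECC achieves comparable security to RSA at much smaller key sizes.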
After seeing quantum computers destroy RSA, DSA, ECC and ECDSA, we can arguably
conclude that cryptography is dead; that there is no hope of scrambling information to make it
incomprehensible to, and unforgeable by, attackers; and that securely storing and communicating
information means using expensive physical shields to prevent attackers from seeing the
information, for example hiding USB sticks inside a locked briefcase chained to a trusted
courier's wrist.
A closer look reveals, however, that there is no justification for the leap from "quantum
computers destroy RSA and DSA and ECDSA" to "quantum computers destroy cryptography".
There are now many important classes of cryptographic systems beyond RSA and DSA and
ECDSA that can defeat the threat posed by quantum systems; they include:
i. Hash-based cryptography. The classic example is Merkle's hash-tree public-key
signature system (1979), building upon a one-message-signature idea of Lamport and
Diffie.
ii. Code-based cryptography. The classic example is McEliece's hidden-Goppa-code
public-key encryption system (1978).
iii. Lattice-based cryptography. The example that has perhaps attracted the most
interest is the Hoffstein-Pipher-Silverman NTRU public-key encryption system
(1998).
iv. Multivariate-quadratic-equations cryptography. One of many interesting examples
is Patarin's HFEv- public-key signature system (1996), generalizing a proposal by
Matsumoto and Imai.
v. Secret-key cryptography. The leading example is the Daemen-Rijmen Rijndael
cipher (1998), subsequently renamed AES, the Advanced Encryption Standard.

All of these systems are believed to resist both classical computers and quantum computers.
Nobody has figured out a way to apply Shor's algorithm, the quantum-computer
discrete-logarithm algorithm that breaks RSA, DSA and ECDSA, to any of these systems.
Another quantum algorithm, Grover's algorithm, does have some applications to these systems;
but Grover's algorithm is not as shockingly fast as Shor's algorithm, and cryptographers can
easily compensate for it by choosing somewhat larger key sizes.
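The hash-based approach in item (i) can be sketched with a Lamport-style one-time signature built from SHA-256, whose security rests only on the hash function rather than on factoring or discrete logarithms. The scheme shape is standard; the helper names and randomness source are assumptions of this sketch:

```python
# Minimal Lamport-style one-time signature over SHA-256. Each key
# pair may sign only ONE message; reuse leaks the secret key.
import hashlib
import secrets

def H(data):
    return hashlib.sha256(data).digest()

def keygen(bits=256):
    # Secret key: two random 32-byte preimages per message bit.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32))
          for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]   # public key: their hashes
    return sk, pk

def bits_of(msg):
    # The 256 bits of SHA-256(msg), most significant bit first.
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one preimage per bit of the message digest.
    return [sk[i][b] for i, b in enumerate(bits_of(msg))]

def verify(pk, msg, sig):
    # Each revealed preimage must hash to the matching public value.
    return all(H(s) == pk[i][b]
               for i, (s, b) in enumerate(zip(sig, bits_of(msg))))

sk, pk = keygen()
sig = sign(sk, b"post-quantum")
assert verify(pk, b"post-quantum", sig)
assert not verify(pk, b"tampered", sig)
```

Because forging a signature requires inverting SHA-256 on unrevealed positions, Shor's algorithm gives no advantage here, and Grover's quadratic speedup is absorbed simply by using a larger hash output.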
2.4.1 THE CODE-BASED PUBLIC-KEY ENCRYPTION SYSTEM
Assume that b is a power of 2. Write n = 4b lg b, d = lg n, and t = 0.5n/d.
For example, if b = 128, then n = 3584, d = 12, and t = 149.
The receiver's public key in this system is a dt × n matrix K with coefficients in F2. Messages
suitable for encryption are n-bit strings of weight t, i.e., n-bit strings having exactly t bits set to
1. To encrypt a message m, the sender simply multiplies K by m, producing a dt-bit ciphertext
Km.
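The parameter formulas and the encryption map Km can be sketched as follows. The random matrix K below is only a stand-in for a real hidden-Goppa-code public key, and the toy sizes used for the encryption part are illustrative assumptions (so only the sender's side is shown):

```python
# Parameter formulas for the code-based system, plus a toy version
# of the encryption map c = K*m over F2.
import math
import random

def params(b):
    # n = 4b lg b; d = lg n (rounded up); t = 0.5n/d (rounded down).
    n = 4 * b * int(math.log2(b))
    d = math.ceil(math.log2(n))
    t = (n // 2) // d
    return n, d, t

# Reproduces the worked example from the text: b = 128.
assert params(128) == (3584, 12, 149)

# Toy encryption with small stand-in sizes (not the real parameters).
random.seed(1)
dt, n, t = 8, 20, 3
K = [[random.randint(0, 1) for _ in range(n)] for _ in range(dt)]

m = [0] * n
for i in random.sample(range(n), t):   # weight-t message: t bits set
    m[i] = 1

# Ciphertext Km over F2: sum (mod 2) of the columns selected by m.
cipher = [sum(K[r][c] * m[c] for c in range(n)) % 2 for r in range(dt)]
assert len(cipher) == dt
```

Note that the ciphertext is only dt bits long, and that without the hidden code structure an attacker must search for which t of the n columns were combined, which is the syndrome-decoding problem described next.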
The basic problem for the attacker is to syndrome-decode K, i.e., to undo the multiplication by
K, knowing that the input had weight t. It is easy, by linear algebra, to work backwards from Km
to some n-bit vector v such that Kv = Km; however, there are a huge number of choices for v,
and finding a weight-t choice seems to be extremely difficult. The best known attacks on this
problem take time exponential in b for most matrices K. How, then, can the receiver solve the
same problem? The answer is that the receiver generates the public key K with a secret structure,
specifically a hidden Goppa code structure, that allows the receiver to decode in a reasonable
amount of time. It is conceivable that the attacker can detect the hidden Goppa code structure
in the public key, but no such attack is known.
Specifically, the receiver starts with distinct elements a1, a2, ..., an of the field F_(2^d) and a
secret monic degree-t irreducible polynomial g in F_(2^d)[x]. The main work for the receiver is
to syndrome-decode the dt × n matrix H whose entry in row i and column j is aj^i / g(aj), for
0 <= i <= t - 1, where each element of F_(2^d) is viewed as a column of d elements of F2 in a
standard basis of F_(2^d). This matrix H is a parity-check matrix for an irreducible binary
Goppa code, and can be syndrome-decoded by Patterson's algorithm or by faster algorithms.
The receiver's public key K is a scrambled version of H. Specifically, the receiver's secret key
also includes an invertible dt × dt matrix S and an n × n permutation matrix P. The public key
K is the product SHP. Given a ciphertext Km = SHPm, the receiver multiplies by S^-1 to
obtain HPm, decodes H to obtain Pm, and multiplies by P^-1 to obtain m.

2.5 INFORMATION SYSTEM

An Information System (IS) is any combination of information technology and people's
activities that supports operations, management and decision making. It is important to note the
inclusion of 'people's activities' in the above definition. People's activities are a vital part of an
information system, though some definitions fail to include them. There are numerous
definitions of the term information system, and they sometimes even conflict regarding its
components. "Even people who appear to interpret information system in the same way
apparently use different sets of concepts to explain it, and they apply different terminology"
(Falkenberg et al., 1998). According to Falkenberg et al. (1998), an information system can be
interpreted in at least three different ways: as a technical system, implemented with computer
and telecommunications technology; as a social system, such as an organization in connection
with its information needs; and as a conceptual system (i.e. an abstraction of either of the above).


From the above, it is clear that the term is used to refer not only to the Information Technology
(IT) that an organization uses, but also to the way in which people interact with this technology
in support of business processes. Information systems are implemented within an organization
for the purpose of improving the efficiency and effectiveness of that organization. Capabilities of
the information system and characteristics of the organization, its work systems, its people, and
its development and implementation methodologies together determine the extent to which the
purpose is achieved (Silver et al. 1994).
Information systems serve all the systems of a business, linking the different components in such
a way that they effectively work towards the same purpose. The role of most information
systems was simple in the early years until 1960. They were mainly used for electronic data
processing (EDP) purposes such as transactions processing, record-keeping and accounting. EDP
is often defined as the use of computers in recording, classifying, manipulating, and summarizing
data.

2.5.1 CLASSIFICATION OF INFORMATION SYSTEMS

Information systems can be classified into the following:


2.5.1.1 TRANSACTION PROCESSING SYSTEMS these process data resulting from
business transactions, update operational databases, and produce business documents. Examples:
sales and inventory processing and accounting systems. A transaction is an event that generates
or modifies data that are eventually stored in an information system. A transaction processing
system must handle each transaction consistently; an airline reservation system, for example,
must accept reservations from a range of travel agents, since accepting different transaction data
from different travel agents would be a problem.

2.5.1.2 MANAGEMENT INFORMATION SYSTEMS (MIS) provide information in the
form of prespecified reports and displays to support business decision making. Examples: sales
analysis, production performance and cost trend reporting systems. This new role focused on
developing business applications that provided managerial end users with predefined
management reports that would give managers the information they needed for decision-making
purposes. A successful MIS supports a business's long-range plans, with feedback loops that
allow for fine-tuning of every aspect of the enterprise, including recruitment and training
regimens. MIS not only indicate how things are going, but also why and where performance is
failing to meet the plan. These reports include near-real-time performance of cost centers and
projects with detail sufficient for individual accountability.
2.5.1.2.1 ADVANTAGES OF MANAGEMENT INFORMATION SYSTEMS
The following are some of the benefits that can be attained from different types of management
information systems (Pant and Hsu, 1995):
i. Companies are able to highlight their strengths and weaknesses due to the presence of
revenue reports, employee performance records, etc. The identification of these aspects
can help the company improve its business processes and operations.
ii. Giving an overall picture of the company and acting as a communication and planning
tool.
iii. The availability of customer data and feedback can help the company align its business
processes according to the needs of the customers. The effective management of
customer data can help companies perform direct marketing and promotion activities.
iv. Information is considered an important asset for a company in the modern competitive
world. Customer buying trends and behavior can be derived from the analysis of sales
and revenue reports from each operating region of the company.
By the 1970s, these pre-defined management reports were not sufficient to meet many of the
decision-making needs of management. In order to satisfy such needs, the concept of decision
support systems (DSS) was born. The new role for information systems was to provide
managerial end users with ad hoc and interactive support of their decision-making processes.

2.5.1.3 DECISION SUPPORT SYSTEMS (DSS) are computer-based information systems
that provide interactive ad hoc support for the decision-making processes of managers and other
business professionals in an organization. Examples: product pricing, profitability forecasting
and risk analysis systems. DSSs serve the management, operations, and planning levels of an
organization and help to make decisions, which may be rapidly changing and not easily specified
in advance (Wikipedia, The Free Encyclopaedia, May 23, 2012). A passive DSS is a system that
aids the process of decision making but cannot bring out explicit decision suggestions or
solutions. An active DSS can bring out such decision suggestions or solutions
(Haettenschweiler, 1999).
In the 1980s, the introduction of microcomputers into the workplace ushered in a new era, which
led to a profound effect on organizations. The rapid development of microcomputer processing
power (e.g. Intel's Pentium microprocessor), application software packages (e.g. Microsoft
Office), and telecommunication networks gave birth to the phenomenon of end user computing.
End users could now use their own computing resources to support their job requirements instead
of waiting for the indirect support of a centralized corporate information services department. It
became evident that most top executives did not directly use either the MIS reports or the
analytical modelling capabilities of DSS, so the concept of executive information systems (EIS)
was developed.
2.5.1.4 EXPERT SYSTEMS (ES) serve as consultants to users by providing expert advice in
limited subject areas. They are knowledge-based systems that provide expert advice and act as
expert consultants to users. Examples are credit application advisors, process monitors, and
diagnostic maintenance systems.

2.6 ATTRIBUTE BASED INFORMATION SYSTEM

An attribute is a named property of a class that describes a range of values that instances of the
property may hold. A class may have any number of attributes or no attribute at all. An attribute
represents some property of the thing you are modeling that is shared by all objects of that class.
For example, every wall has a height, width, and thickness; you might model your customers in
such a way that each has a name, address, phone number, and date of birth. An attribute is
therefore an abstraction of the kind of data or state an object of the class might encompass. At a
given moment, an object of a class will have specific values for every one of its class's attributes.
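The customer example above can be sketched as a class whose attributes hold a specific value for each object. The field names follow the example in the text; the sample values are illustrative assumptions:

```python
# A class models the shared attribute set; each instance carries its
# own values for those attributes.
from dataclasses import dataclass
from datetime import date

@dataclass
class Customer:
    name: str
    address: str
    phone: str
    date_of_birth: date

bob = Customer("Bob", "12 Main St", "0800-000-0000", date(1990, 4, 1))
alice = Customer("Alice", "3 High Rd", "0800-111-1111", date(1985, 9, 9))

# Same attribute set (same class), different values per object.
assert type(bob) is type(alice)
assert bob.name != alice.name
```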

Figure 1.0: Examples of Customer Attributes


Attribute-based information systems provide fine granularity, high flexibility, rich semantics and
other useful features like partial authentication and natural support for role-based access control.
They are also very generic systems and are backward compatible with other technologies like
role-based systems or identity-based systems. They can be used in a constrained fashion to
achieve this backward compatibility (Wang et al., 2004). Attribute-based systems have enormous
potential for providing data security in distributed environments. Peer-to-peer systems are an
example of one such beneficiary: individuals may publish documents that implicitly target those
users who are assigned the appropriate attributes. Moreover, such publishing can be completely
transparent to the peer-to-peer system. For example, a user Bob looking for employment in the
field of secure systems engineering could place a copy of his curriculum vitae in publicly
accessible web space, encrypted with the attributes "secure systems engineering" and "human
resources manager". Only potential employers satisfying these attributes would be able to
decrypt this information and contact Bob. The attributes are in a verifiable form when they are
asserted by some trusted entity on behalf of the user.
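The attribute-targeted publishing idea above can be sketched as a policy check: a document is labelled with required attributes, and only principals holding all of them may open it. A real attribute-based system enforces this cryptographically; this plain-Python sketch models only the policy logic, and the attribute strings follow Bob's example:

```python
# Attribute-based access as a policy check: access is granted only
# if the user's verified attribute set covers the document's policy.

def can_access(policy: set, user_attrs: set) -> bool:
    # The user must hold every attribute the policy requires.
    return policy <= user_attrs

cv_policy = {"secure systems engineering", "human resources manager"}

employer = {"human resources manager", "secure systems engineering",
            "finance"}
stranger = {"marketing"}

assert can_access(cv_policy, employer)       # holds both attributes
assert not can_access(cv_policy, stranger)   # holds neither
```

In an attribute-based encryption scheme the `can_access` decision is not a runtime check that could be bypassed: a user whose attributes fail the policy simply cannot derive the decryption key.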

2.7 REVIEW OF RELATED WORKS

Information systems are generally categorized either based on the method used for implementing
access control or based on the entity which enforces access control. Using the former technique,
they can be broadly categorized as Identity-based, role-based and attribute-based access control
systems. Using the latter technique, they are categorized as discretionary access control (DAC)
and mandatory access control (MAC) systems.
The following are early technologies used by early systems; their advantages and limitations are
discussed below.
2.7.1 DISCRETIONARY ACCESS CONTROL (DAC)

In discretionary access control (DAC), the owner of an object specifies the access policy, listing
who is allowed to access the resource and their corresponding access rights. In DAC the creator
of the object is the owner by default, and he can delegate his ownership rights to another
principal (Department of Defence, 1989). The DAC model can be implemented using ACLs or
capability certificates. In the capability-based model, the capability certificates are created by the
resource owner. Although this system provides great flexibility in defining access control
policies, it also makes it hard to verify the security policies of the overall system. This is
primarily because resource owners, rather than a central authority, specify security policies.
Another problem with DAC is that it is more prone to errors or misconfigurations in security
policies and hence more susceptible to exploits.
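The DAC model just described can be sketched as an ACL that only the object's owner may edit, including the delegation of ownership itself. The principal names and rights are illustrative assumptions:

```python
# ACL-style DAC: the owner sets policy at their own discretion and
# may delegate ownership to another principal.

class Resource:
    def __init__(self, owner):
        self.owner = owner
        self.acl = {owner: {"read", "write", "own"}}

    def grant(self, actor, principal, rights):
        # Only a current owner may change the access policy.
        if "own" not in self.acl.get(actor, set()):
            raise PermissionError(f"{actor} is not an owner")
        self.acl.setdefault(principal, set()).update(rights)

    def allowed(self, principal, right):
        return right in self.acl.get(principal, set())

doc = Resource(owner="alice")
doc.grant("alice", "bob", {"read"})
assert doc.allowed("bob", "read")
assert not doc.allowed("bob", "write")

doc.grant("alice", "carol", {"own"})    # delegation of ownership
doc.grant("carol", "dave", {"read"})    # carol can now grant rights
assert doc.allowed("dave", "read")
```

The weakness the text points out is visible in the sketch: every owner edits policy independently, so there is no single place to verify, or prevent misconfiguration of, the system-wide policy.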
2.7.2 MANDATORY ACCESS CONTROL (MAC)

In Mandatory access control (MAC) the access control policies are defined by the system
administrator. It is implemented as a multi-level access control system often containing highly
sensitive data. It has several hierarchical classification levels and each resource and principal in
the system is classified as a member of one of those levels. The principal's classification specifies
his access level whereas the resource's classification specifies the minimum level of access a
principal would require to access that resource. Examples of MAC are the Bell-LaPadula
confidentiality model (Bell and LaPadula, 1976) and the Biba integrity model (Biba, 1977).
MAC requires that certain functional components like the operating system and associated
utilities be `trusted' and placed outside the MAC model because they are required to access
resources at each access level. This makes it impossible to model a complete system using MAC
without assuming that certain components are completely trusted. In computer security, the
principle of least privilege requires that a principal should be able to access only resources which
are required for its legitimate purpose. Since the MAC model is based on a few distinct levels, it
does not provide fine grained control to satisfy this requirement completely. Separation of duty
(SoD) is another principle which requires that the same principals are not given the privilege to
execute transactions which are mutually exclusive from the security point of view, especially in
the context of avoiding fraud. SoD can either be static or dynamic. Static SoD can easily be
achieved by assigning principals privileges from only one group of mutually exclusive
transactions. In practice, such a system is very inefficient and a more common approach, called
dynamic SoD, is to assign principals privileges from multiple groups but restrict them to execute
transactions from only one group during system execution. Since MAC assigns fixed security
levels to principals, dynamic SoD cannot be achieved in MAC.
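The multi-level idea behind MAC can be sketched with the two Bell-LaPadula rules, "no read up" and "no write down". The level names are illustrative assumptions; only the ordering matters:

```python
# Bell-LaPadula style checks over a fixed hierarchy of levels.

LEVELS = {"unclassified": 0, "confidential": 1,
          "secret": 2, "top-secret": 3}

def may_read(subject_level, object_level):
    # Simple-security property: read only at or below your level
    # ("no read up").
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level, object_level):
    # *-property: write only at or above your level, so classified
    # data cannot leak downward ("no write down").
    return LEVELS[subject_level] <= LEVELS[object_level]

assert may_read("secret", "confidential")
assert not may_read("confidential", "secret")     # no read up
assert may_write("confidential", "secret")
assert not may_write("secret", "confidential")    # no write down
```

The sketch also makes the text's criticisms concrete: because every decision reduces to a comparison between a few fixed levels, there is no way to express fine-grained least-privilege rules or to change a principal's privileges dynamically for separation of duty.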

2.7.3 ROLE BASED ACCESS CONTROL (RBAC)

Role-based access control was developed as an authorization system to be used by organizations,
where employees were given access rights to the organization's resources. Employees in an
organization who perform similar duties need similar access rights to the same resources. It was
proposed that they should logically be part of a group and access rights should be given to
groups. To simplify access management, employees were designated as members of some
predefined `roles' and access permissions were granted to these `roles'. Using this system access
management becomes easier as it is just a matter of adding or removing an employee to a `role'.
This also alleviates the problem of modifying a large number of access control rules when an
employee joins or leaves the organization. The challenge with RBAC systems is that role
management is a huge task in a large system and is mostly implemented in a centralized manner.
Even if de-centralized administration is used, role management is an admin function and hence
relies on an admin to manage and administer roles. Another problem with RBAC systems is that
for each new composition of users, a new role must be defined. In a large system with large
combinations of principals, the RBAC model results in a problem called `role explosion' where
the number of roles increases exponentially (Elliott and Knight, 2010) and ultimately becomes
unmanageable.
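The RBAC indirection described above, where permissions attach to roles and users attach to roles, can be sketched as follows. The role and permission names are illustrative assumptions in a hospital setting:

```python
# RBAC: access checks go user -> role -> permission, so personnel
# changes are a single role assignment, not a policy rewrite.

role_permissions = {
    "doctor": {"read_record", "write_record"},
    "nurse": {"read_record"},
}
user_roles = {"ada": {"doctor"}, "ben": {"nurse"}}

def allowed(user, permission):
    # A user holds a permission if any of their roles grants it.
    return any(permission in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))

assert allowed("ada", "write_record")
assert not allowed("ben", "write_record")

# Promoting ben is one assignment change; no access rule is edited.
user_roles["ben"] = {"doctor"}
assert allowed("ben", "write_record")
```

The "role explosion" problem is also visible in this shape: every distinct combination of permissions that some group of users needs forces a new entry in `role_permissions`, and the number of such combinations can grow exponentially.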

2.7.3.1 CONTEXT-AWARE ROLE BASED ACCESS CONTROL USING USER
RELATIONSHIP (KANGSOO AND SEOOG, 2013)



Role-based access control is widely used in modern enterprise systems because it is adequate for
reflecting the functional hierarchy of various organizations in an access control model. However,
environmental changes, such as the increasing usage of mobile devices, pose several challenges.
This paper suggests a relationship-based access control model that considers the relationships
among users in an organization, together with surrounding user identification, as context
information. The authors assume a mobile office environment as the domain. In a mobile office,
employees who use mobile devices such as smartphones and tablet PCs relate with other
employees in cooperative work to perform the organization's tasks, but existing access control
models do not consider relationships among employees as context information in the mobile
office. This work cannot be implemented in a distributed environment or organization.


CHAPTER THREE
SYSTEM ANALYSIS AND DESIGN
3.1 INTRODUCTION

This chapter analyses the proposed system, expressing the detailed design of the proposed model
by using various software engineering design tools to present the system's key modules, the
interactions between the modules, as well as use case scenarios to determine the possible
working conditions of the system.
The design of the system's components is represented using the Unified Modeling Language
(UML).
The Unified Modeling Language (UML) is a standard language for specifying, visualizing,
constructing, and documenting the software system and its components. The UML focuses on the
conceptual and physical representation of the system. It captures the decisions and
understandings about systems that must be constructed. It is used to understand, design,
configure, maintain, and control information about the systems. Being the international standard
notation for object-oriented analysis and design, the UML is ideal for this purpose.

3.2 ANALYSIS OF AN EXISTING SYSTEM

Securing information dissemination between stakeholders in an information system is a
fundamental problem that arises in numerous applications. Such applications include multilevel
security, secure multicast, collaborative online communities, and distributed file systems. The
fundamental importance of the secure exchange of information has resulted in a wide range of
solutions. Traditional access control mechanisms can be categorized into three groups:
mandatory access control (MAC) (Denning, 1976), discretionary access control (DAC)
(Lampson,1971, Sandhu and Samarati 1994), and role-based access control (RBAC) (Ferraiolo et
al.,2001, Sandhu et al.,1996). In MAC, an administrative mechanism enforces centralized access
control on every object. Systems implementing DAC require the owner of an object to dictate
policy. Under RBAC, a users role in an organization inherently dictates their ability to access
and manipulate data. Each role in an RBAC system is associated with a set of permissions
required to carry out that role, cryptographic algorithms such as RSA, Advanced Encryption
Standard (AES), DES and Elliptic Curve Cryptography (ECC) are used in the implementation of
various access controls system above classical systems which accounts for their flaws with the
advent of Quantum computing.
3.3 PROBLEM OF THE EXISTING SYSTEM

The cryptographic schemes used in existing systems to ensure information security and access
control are highly effective at controlling access under a single administrative authority.
However, with the advent of quantum computing, which introduces high-performance computing
for information processing, these security schemes become inadequate for ensuring information
security and access restriction. This is because the cryptographic schemes of classical systems
rely on the difficulty of solving Discrete Logarithm Problems (DLP) and Integer Factorization
Problems (IFP), which are easy for quantum systems to solve compared to classical computers.

3.4 OVERVIEW OF THE PROPOSED SYSTEM

An attribute-based information security system offers substantial advantages for providing data
security in a distributed environment. Examples of such systems are peer-to-peer systems,
whereby individuals may publish documents that implicitly target those users who
are assigned the appropriate attributes. The attributes to be used define, classify, or annotate the
datum to which they are assigned. The semantics of an attribute indicate some purpose or
characteristic and, when used within larger collections, enable efficient identification and
classification of like objects. For example, individuals in enterprise systems are often segregated
into groups of common interest or duty based on a given set of attributes (Sandhu et al., 1996),
e.g., function, department, university level, rank/position etc. These attributes are then used to
associate sets of permissions and tasks to the specified individuals.
3.5 SYSTEM ARCHITECTURE

In Information systems, users try to access various resources they need from the system by
making appropriate request to the server. In attribute-based information systems, resources that
are available to each user are determined based on the authorization level indicated in the
attributes submitted by the user.
The system architecture of the proposed system, i.e. the attribute-, time- and location-based
access control system for a hospital management information system, is a three-layered system
architecture. The architecture is composed of the Application Layer (front end), the Access
Control Layer (web services + security) and the Resource Layer (back end). The Resource Layer
is implemented using the file system, as managed by the database manager. The Access Control
Layer is where the attributes provided by the system users are used in categorizing system users,
and the resources to be made available to them are determined based on their access roles.
Finally, the Application Layer covers the interfaces by which users relate with the system, i.e.
the layer at which users operate. At this layer, the user supplies his unique attribute information
upon request; this request is sent to the Access Control Layer, where the access level of the user
is determined, and the relevant resources with respect to the identified access level and user
query are released by the Resource Layer of the system.

Figure : System architecture of the Attribute, Time, Location Based Access Control
System for Hospital Management Information System.
The Access Control Layer is made up of two cascading system modules which are:
i. The Access Control Engine (Security System): The access control engine is a dedicated
service that performs rule-based access control for users' requests. It contains three
components: the Evaluator, the Interface and the Predicates API. The core of the engine
is the evaluator, which acts as a reasoning system to validate the rule set for allowing
access to the system. Another important component of the engine is the predicates API,
which implements all required user-specific predicates used in our system. It is also
responsible for collecting data, such as current server time, server load, appointment
information, user profile, etc., to instantiate the variables in each predicate. The third
component is the interface, which sends requests to and receives responses from the
Application Layer and exposes the validation interface as a Web service that can be
invoked by access control proxies residing in applications. The diagram below shows an
annotated architecture of the Access Control Engine.

Figure . Components of the Access Control Engine


The sequence of operation of the access control engine is as described below:
1. The Application Layer sends the authorization request to the access control engine.
2. Upon receiving the request, the interface component forwards it to the evaluator.
3. According to the "action" tag in the request, the corresponding rule set is activated. Each
rule within the rule set contains one or more predicates. The evaluator invokes each
predicate implementation via the predicates API.
4-5. The predicates API may collect necessary data from databases or environment
parameters to instantiate variables within each predicate.
6. The predicates API returns "True" or "False" to the evaluator after executing the
implementation of the predicate. This procedure may occur multiple times until the
predicates API has returned all predicate execution results, as shown in steps 7-10 of the
sequence diagram below.
11. After collecting all the results from the predicates API, the evaluator performs the
inference procedure to obtain the authorization decision and returns the decision to the
interface component.
12. The interface component returns this decision to the Application Layer, and the layer
enforces access control according to the decision.
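The evaluator/predicates-API interaction in the steps above can be sketched as follows. All predicate names, the action name, and the context fields are illustrative assumptions, not the system's actual rule set:

```python
# Rule-based authorization sketch: the request's "action" selects a
# rule set, each predicate is evaluated against collected context,
# and the decision is the conjunction of all predicates.

PREDICATES = {
    "is_staff": lambda ctx: ctx["role"] in {"doctor", "nurse"},
    "within_hours": lambda ctx: 8 <= ctx["server_hour"] < 18,
    "has_appointment": lambda ctx: ctx["appointment"],
}

# Each inner list is one rule; a rule holds only if all its
# predicates return True.
RULE_SETS = {
    "view_record": [["is_staff"],
                    ["within_hours", "has_appointment"]],
}

def authorize(request, context):
    rules = RULE_SETS[request["action"]]        # step 3: select rules
    return all(PREDICATES[p](context)           # steps 3-10: evaluate
               for rule in rules for p in rule) # step 11: infer

ctx = {"role": "doctor", "server_hour": 10, "appointment": True}
assert authorize({"action": "view_record"}, ctx)     # access granted

ctx["server_hour"] = 22                               # outside hours
assert not authorize({"action": "view_record"}, ctx)  # access denied
```

In the real engine the context dictionary would be populated by the predicates API from the database and environment (current server time, appointment information, user profile), and the final boolean would be returned through the interface component to the Application Layer for enforcement.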


Figure . Sequence Diagram for the Access Control Engine


ii. Web Services: These are functional modules that are responsible for executing users'
requests based on the access control engine's authentication of the respective user.
The Application Layer: This is an interface layer through which users can easily interact with
the system by passing in their respective requests, while it outputs the respective results of the
system query to the respective user.

The Resource Layer / Database: A database is a storage location where organized information
is stored. It is reliable because information can easily be managed, accessed, updated and
retrieved, and it can be referenced for future purposes. This database serves as the
information/resource repository for the system; it stores information about the agents that are
allowed access into the system, as well as the various data stored about each unique system user.
A MySQL database was used for the design of the database.

3.6 UML DIAGRAMS

Unified Modeling Language (UML) is a modeling language that is used to visualize, specify,
construct and document the artifacts or architecture of a software system/framework. It provides
a set of notations to create a visual model of the system. Like any other language, UML has its
own syntax and semantics. UML is, however, not a system design or development methodology,
but it can be used to document the object-oriented analysis and design results obtained using
some methodology.
UML can be used to construct different types of system design diagrams i.e. Class, Objects,
Activity, Use Case diagrams etc. to capture various perspective views of the system such as the
user, behavioral, structural, implementation, and environment views of a system. The different
UML diagrams provide different perspectives of the software system to be developed and
facilitate a comprehensive understanding of the system. Such models can be refined to get the
actual implementation of the system.
3.6.1 USE CASE DIAGRAM

The use case diagram shows the various functions of the stakeholders in the proposed system
and how the key stakeholders of the system interact with each other. The use case diagram below
shows the functional modules of the proposed system and the stakeholders' interaction with the
system.

Figure . Use Case diagram for the proposed system.


3.6.2 CLASS DIAGRAM

A class is a category or group of things that have similar attributes and common behavior. A
rectangle is the icon that represents a class; it is divided into three areas. The uppermost area
contains the name, the middle area contains the attributes, and the lowest area shows the
operations. Class diagrams provide the representation that developers work from, and they also
help on the analysis side.
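The three compartments of a class icon map directly onto a class in code. The sketch below uses a hypothetical `Agent` class — its name, attributes and operations are illustrative, not taken from the actual design:

```python
class Agent:
    """One class icon: the name compartment is the class name,
    attributes fill the middle, operations fill the bottom."""

    def __init__(self, name: str, role: str):
        # Attribute compartment
        self.name = name
        self.role = role

    # Operation compartment
    def is_authorized(self, required_role: str) -> bool:
        """Check whether this agent holds the required role."""
        return self.role == required_role


a = Agent("alice", "administrator")
print(a.is_authorized("administrator"))  # True
print(a.is_authorized("guest"))          # False
```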


3.6.3 SEQUENCE DIAGRAM

A sequence diagram is an interaction diagram that emphasizes the time ordering of messages; a
collaboration diagram is an interaction diagram that emphasizes the structural organization of the
objects that send and receive messages. Sequence diagrams and collaboration diagrams are
isomorphic, meaning that one can be transformed into the other.
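The time ordering that a sequence diagram captures can be traced directly in code. Below, a hypothetical login exchange records each message in the order it is sent; the object and message names are assumptions for illustration, not the system's actual interfaces:

```python
trace = []  # records messages in the order they occur


class Database:
    """Lifeline at the far right of the diagram."""

    def lookup(self, name):
        trace.append("AccessController -> Database: lookup")
        return {"alice": "administrator"}.get(name)


class AccessController:
    """Middle lifeline: receives the agent's request,
    then messages the database."""

    def __init__(self, db):
        self.db = db

    def authenticate(self, name):
        trace.append("Agent -> AccessController: authenticate")
        return self.db.lookup(name) is not None


controller = AccessController(Database())
granted = controller.authenticate("alice")
print(granted)  # True
print(trace)    # messages listed in time order, top to bottom
```

Reading `trace` top to bottom corresponds to reading the sequence diagram's vertical axis downward.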


Figure . Sequence diagram for the proposed system.

3.6.4 COLLABORATION DIAGRAM

A collaboration diagram, also called a communication diagram or interaction diagram, is an
illustration of the relationships and interactions among software objects. The concept is more
than a decade old, although it has been refined as modeling paradigms have evolved.


Figure . Collaboration diagram for the proposed system.

3.6.5 STATE ACTIVITY DIAGRAM

The state diagram shows the states of an object and represents activities as arrows connecting the
states. The activity diagram highlights the activities. Each activity is represented by a rounded
rectangle, narrower and more oval-shaped than the state icon. An arrow represents the transition
from one activity to the next. The activity diagram has a starting point represented by a filled-in
circle, and an end point represented by a bull's-eye.

Figure . State activity diagram for the proposed system.


