
PREDICTING LENGTH OF STAY, FUNCTIONAL OUTCOME, AND

AFTERCARE IN THE REHABILITATION OF STROKE PATIENTS.

Abstract:

Hospital length of stay (LOS) of patients is an important factor for planning
and managing the resource utilization of a hospital. There has been considerable
interest in controlling hospital cost and increasing service efficiency, particularly in
stroke and cardiac units where the resources are severely limited. This study
introduces an approach for early prediction of LOS of stroke patients arriving at
the Stroke Unit of King Fahad Bin Abdul-Aziz Hospital, Saudi Arabia. The
approach involves a feature selection step based on information gain followed by a
prediction model development step using different machine learning algorithms.
Prediction results were compared in order to identify the best performing
algorithm. Many experiments were performed with different settings. This paper
reports the performance results of the two most accurate models. The Bayesian
network model, with an accuracy of 81.28%, outperformed the C4.5 decision tree model
(accuracy 77.1%).
INTRODUCTION:
Stroke, also known as cerebrovascular accident (CVA), is a sudden and
devastating illness characterized by the rapid loss of brain function due to a
disruption of the blood supply to the brain. This disruption is caused either by
ischemia (blockage or lack of blood flow), which accounts for more than 80% of all
strokes [1], or by hemorrhage (loss of blood) [2]. Based on previous
studies, stroke is one of the leading causes of death worldwide, ranking after cardiac disease,
cancer, and chronic lower respiratory disease [3]. The length of stay (LOS) of a
stroke patient varies depending on various factors describing the patient’s medical
condition [4]. LOS is defined as the number of days a patient is required to stay in
a hospital or any healthcare facility for treatment [5], [6]. LOS is an important
factor for planning and managing the resource utilization of a hospital [7]. There
has been considerable interest in controlling hospital cost and increasing service
efficiency, particularly in stroke and cardiac units where the resources are severely
limited; thus hospitals try to reduce LOS as much as possible [8]. Moreover,
reducing the LOS purportedly yields large cost savings [7], [9]. A model to predict
the length of stay (LOS) of stroke patients at an early stage of patient management
can effectively help in managing hospital resources and increase efficiency of
patient care. Such a prediction model will enable early interventions to reduce
complications and shorten patient’s LOS and ensure a more efficient use of
hospital facilities. Most previous research uses statistical techniques to identify
factors determining the LOS and the prediction of LOS is made using regression
models [4], [6], [9], [10]. Recently, with the availability of large amounts of
patients’ data, data mining approaches to predict LOS are becoming increasingly
promising [5], [11]–[16]. In healthcare, data mining has been used for diagnosis
and prognosis of illness and for predicting the outcome of medical procedures
[16]–[23]. Data mining techniques such as classification, artificial neural networks,
clustering etc. are used to discover data patterns and relationships among a large
set of factors and to construct reliable prediction models based on the given input
data. To predict LOS of patients, Wrenn et al. [15] proposed an artificial neural
network based prediction model for an Emergency Department. Rowan et al. [13]
proposed to predict LOS of cardiac patients using artificial neural networks based
on preoperative and initial postoperative factors. Azari et al. [24] proposed a multi-
tiered data mining approach for predicting LOS. They used clustering technique to
create a training dataset to train several classification algorithms for prediction of
LOS. Classification algorithms were also used by Hachesu et al. [5], and Jiang et
al. [14] for predicting LOS. They demonstrated multiple classification
algorithms (decision tree, support vector machines, logistic regression) with varying
levels of accuracy. Most of the previous works emphasized the use of novel or
hybrid classification algorithms or complex ensemble models [12], [14]. However,
the performance of any prediction model also depends on the number and type of its
input variables [25]–[27]. Thus, the selection of appropriate input
variables or features is critical to the performance of any prediction model. In this
work, we propose to solve the LOS prediction problem using a novel approach
combining a feature selection step, which finds an optimal mix of input features, with a
prediction model development step using classification algorithms. Our aim is to
provide an early LOS estimate for new patients arriving at a stroke unit.
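
To make the feature selection step concrete, the following minimal sketch ranks the candidate input attributes by information gain using the Weka toolkit. Weka itself, the file name stroke.arff, and the choice of keeping ten attributes are illustrative assumptions; the paper does not state which implementation was used.

import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.InfoGainAttributeEval;
import weka.attributeSelection.Ranker;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class FeatureRanking {
    public static void main(String[] args) throws Exception {
        // Load the stroke dataset (hypothetical ARFF file); the last attribute is the LOS class.
        Instances data = DataSource.read("stroke.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Rank every attribute by its information gain with respect to the LOS class.
        AttributeSelection selector = new AttributeSelection();
        selector.setEvaluator(new InfoGainAttributeEval());
        Ranker ranker = new Ranker();
        ranker.setNumToSelect(10);      // keep the ten most informative features (illustrative choice)
        selector.setSearch(ranker);
        selector.SelectAttributes(data);

        // Keep only the selected features for the model development step.
        Instances reduced = selector.reduceDimensionality(data);
        System.out.println("Attributes kept: " + reduced.numAttributes());
    }
}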
EXISTING SYSTEM:

We categorized data values and derived new fields from existing data for the
following features: ejection fraction, diastolic blood pressure, systolic blood
pressure, smoking, triglyceride, low-density lipoprotein, high-density lipoprotein,
hemoglobin, serum cholesterol, and fasting blood sugar. These features were
converted into categorical attributes for analysis.
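
For illustration only, the categorization described above could be performed with an unsupervised discretization filter such as the one below. This is a Weka-based sketch; the actual tool, bin counts, and cut-off points used in the existing system are not specified and are assumed here.

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Discretize;

public class Categorize {
    public static void main(String[] args) throws Exception {
        // Load raw patient records (hypothetical file containing numeric lab values).
        Instances raw = DataSource.read("cardiac.arff");

        // Turn continuous attributes (blood pressure, cholesterol, ...) into bins.
        Discretize discretize = new Discretize();
        discretize.setBins(3);                 // e.g. low / normal / high (assumed bin count)
        discretize.setInputFormat(raw);
        Instances categorical = Filter.useFilter(raw, discretize);

        System.out.println(categorical.toSummaryString());
    }
}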

DISADVANTAGES:

 Converting these continuous measurements into categorical attributes discards
information, which can lead to poor prediction results.

PROPOSED SYSTEM:

In this work, we propose to solve the LOS prediction problem using a
novel approach combining a feature selection step, which finds an optimal mix of input
features, with a prediction model development step using classification algorithms.
Our aim is to provide an early LOS estimate for new patients arriving at a stroke
unit. Earlier work [13] had already proposed and implemented a software package
demonstrating that artificial neural networks can be used as an effective LOS
stratification instrument in postoperative cardiac patients.
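
A minimal sketch of the model development step is given below: it trains a Bayesian network and a J48 (C4.5) decision tree on the reduced feature set and compares their accuracy with 10-fold cross-validation. The Weka class names are real, but the dataset file name and the evaluation settings are assumptions made for illustration.

import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.BayesNet;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CompareModels {
    public static void main(String[] args) throws Exception {
        // Dataset already reduced to the selected features; the LOS category is the class.
        Instances data = DataSource.read("stroke_reduced.arff");
        data.setClassIndex(data.numAttributes() - 1);

        Classifier[] models = { new BayesNet(), new J48() };
        for (Classifier model : models) {
            Evaluation eval = new Evaluation(data);
            // 10-fold cross-validation with a fixed seed for repeatability.
            eval.crossValidateModel(model, data, 10, new Random(1));
            System.out.printf("%s accuracy: %.2f%%%n",
                    model.getClass().getSimpleName(), eval.pctCorrect());
        }
    }
}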

ADVANTAGES:

 They proposed using the HR model to predict the mean LOS of stroke
patients.
 The largest advantage of a registry is the ability to prospectively add
patients, while allowing investigators to go back and collect information
retrospectively if needed.
SYSTEM CONFIGURATION:

HARDWARE CONFIGURATION:

 Processor - Pentium IV
 Speed - 1.1 GHz
 RAM - 256 MB (min)
 Hard Disk - 20 GB
 Keyboard - Standard Windows Keyboard
 Mouse - Two or Three Button Mouse
 Monitor - SVGA

SOFTWARE CONFIGURATION:

 Operating System : Windows 95/98/2000/XP/7
 Application Server : Tomcat 6.0/7.x
 Front End : HTML, Java, JSP
 Scripts : JavaScript
ARCHITECTURE DIAGRAM:
LITERATURE SURVEY:

TITLE: The cost of cerebral ischaemia.

ABSTRACT:

Cerebral ischaemia is a major cause of disability and death globally and has a
profoundly negative impact on the individuals it affects, those that care for them
and society as a whole. The most common and familiar manifestation is stroke,
85% of which are ischaemic and which is the second leading cause of death and
most common cause of complex chronic disability worldwide. Stroke survivors
often suffer from long-term neurological disabilities significantly reducing their
ability to integrate effectively in society with all the financial and social
consequences that this implies. These difficulties cascade to their next of kin who
often become caregivers and are thus indirectly burdened. A more insidious
consequence of cerebral ischaemia is progressive cognitive impairment causing
dementia which although less abrupt is also associated with a significant long-term
disability. Globally, cerebrovascular diseases are responsible for 5.4 million deaths
every year. Approximately 3% of total healthcare expenditure is attributable to
cerebral ischaemia with cerebrovascular diseases costing EU healthcare systems 21
billion euro in 2003. The cost to the wider economy (including informal care and
lost productivity) is even greater with stroke costing the UK 7-8 billion pound in
2005 and the US $62.7 billion in 2007. Cerebrovascular disease cost the EU 34
billion euro in 2003. From 2005 to 2050 the anticipated cost of stroke to the US
economy is estimated at $2.2 trillion. Given the global scale of the problem and the
enormous associated costs it is clear that there is an urgent need for advances in the
prevention of cerebral ischaemia and its consequences. Such developments would
result in profound benefits for both individuals and their wider societies and
address one of the world's most pre-eminent public health issues.

YEAR OF PUBLICATION: 2008

AUTHOR NAME: Flynn RW, MacWalter RS, Doney AS.


TITLE: Mitochondria, Oxidative Metabolism And Cell Death In Stroke.

ABSTRACT:
Stroke most commonly results from occlusion of a major artery in the brain
and typically leads to the death of all cells within the affected tissue. Mitochondria
are centrally involved in the development of this tissue injury due to modifications
of their major role in supplying ATP and to changes in their properties that can
contribute to the development of apoptotic and necrotic cell death. In animal
models of stroke, the limited availability of glucose and oxygen directly impairs
oxidative metabolism in severely ischemic regions of the affected tissue and leads
to rapid changes in ATP and other energy-related metabolites. In the less-severely
ischemic "penumbral" tissue, more moderate alterations develop in these
metabolites, associated with near normal glucose use but impaired oxidative
metabolism. This tissue remains potentially salvageable for at least the first few
hours following stroke onset. Early restoration of blood flow can result in
substantial recovery of energy-related metabolites throughout the affected tissue.
However, glucose oxidation is markedly decreased due both to lower energy
requirements in the post-ischemic tissue and limitations on the mitochondrial
oxidation of pyruvate. A secondary deterioration of mitochondrial function
subsequently develops that may contribute to progression to cell loss.
Mitochondrial release of multiple apoptogenic proteins has been identified in
ischemic and post-ischemic brain, mostly in neurons. Pharmacological
interventions and genetic modifications in rodent models strongly implicate
caspase-dependent and caspase-independent apoptosis and the mitochondrial
permeability transition as important contributors to tissue damage, particularly
when induced by short periods of temporary focal ischemia.

YEAR OF PUBLICATION: 2010
AUTHOR NAME: Sims NR, Muyderman H.

TITLE: Deaths: Preliminary Data for 2008.

ABSTRACT:

This report presents preliminary U.S. data on deaths, death rates, life
expectancy, leading causes of death, and infant mortality for 2008 by selected
characteristics such as age, sex, race, and Hispanic origin. Methods-Data in this
report are based on death records comprising more than 99 percent of the
demographic and medical files for all deaths in the United States in 2008. The
records are weighted to independent control counts for 2008. For certain causes of
death such as unintentional injuries, homicides, suicides, drug-induced deaths, and
sudden infant death syndrome, preliminary and final data may differ because of the
truncated nature of the preliminary file. Comparisons are made with 2007 final
data. Results-The age-adjusted death rate decreased from 760.2 deaths per 100,000
population in 2007 to 758.7 deaths per 100,000 population in 2008. From 2007 to
2008, age-adjusted death rates decreased significantly for 6 of the 15 leading
causes of death: Diseases of heart, Malignant neoplasms, Cerebrovascular diseases,
Accidents (unintentional injuries), Diabetes mellitus, andAssault (homicide). From
2007 to 2008, age-adjusted death rates increased significantly for 6 of the 15
leading causes of death: Chronic lower respiratory diseases; Alzheimer's disease;
Influenza and pneumonia; Nephritis, nephrotic syndrome and nephrosis;
Intentional self-harm (suicide); and Essential hypertension and hypertensive renal
disease.

YEAR OF PUBLICATION: 2011

AUTHOR NAME: Miniño AM, Xu J, Kochanek KD.


TITLE: Predicting length of stay, functional outcome, and aftercare in the
rehabilitation of stroke patients.

ABSTRACT:

Research in recent years has revealed factors that are important predictors of
physical and functional rehabilitation: demographic variables, visual and
perceptual impairments, and psychological and cognitive factors. However, there is
a remaining uncertainty about prediction of outcome and a need to clinically apply
research findings. This study was designed to identify the relative importance of
medical, functional, demographic, and cognitive factors in predicting length of stay
in rehabilitation, functional outcome, and recommendations for post discharge
continuation of services.

YEAR OF PUBLICATION: 1993

AUTHOR NAME: Galski T, Bruno RL, Zorowitz R, Walker J.


TITLE: Use of data mining techniques to determine and predict length of stay of
cardiac patients.

ABSTRACT:

Predicting the length of stay (LOS) of patients in a hospital is important in
providing them with better services and higher satisfaction, as well as helping the
hospital management plan and manage hospital resources as meticulously as
possible. We propose applying data mining techniques to extract useful knowledge
and draw an accurate model to predict the LOS of heart patients.

YEAR OF PUBLICATION: 2013

AUTHOR NAME: Hachesu PR, Ahmadi M, Alizadeh S, Sadoughi F.


TITLE: Elderly patients in acute medical wards: factors predicting length of stay
in hospital.

ABSTRACT:

A prospective study of 419 patients aged 70 and over admitted to acute medical
wards was carried out by medical staff from a geriatric unit. Data, including
presenting problem, housing, social support, mental state, continence, and degree
of independence before and after admission, were recorded. Of the 419 patients,
143 remained in hospital after 14 days and 65 after 28 days. The major factors
associated with prolonged stay in hospital included advanced age, stroke,
confusion and falls as reasons for admission to hospital, incontinence, and loss of
independence for everyday activities. Social circumstances did not predict length
of stay. Although these factors are interrelated, the most important influence on
length of stay was the medical reason for admission. Early contact with the
geriatric medical unit in these patients may speed up the recovery or result in more
appropriate placement.

YEAR OF PUBLICATION: 1986

AUTHOR NAME: Maguire PA, Taylor IC, Stout RW.


TITLE: Predicting length of stay in an acute psychiatric hospital.

ABSTRACT:

Multivariate statistical methods were used to identify patient-related variables
that predicted length of stay in a single psychiatric facility. The study investigated
whether these variables remained stable over time and could be used to provide
individual physicians with data on length of stay adjusted for differences in clinical
caseloads and to detect trends in the physicians' practice patterns.

YEAR OF PUBLICATION: 1998

AUTHOR NAME: Huntley DA, Cho DW, Christman J, Csernansky JG.


TITLE: Data mining in healthcare and biomedicine: a survey of the literature.

ABSTRACT:

As a new concept that emerged in the middle of 1990's, data mining can help
researchers gain both novel and deep insights and can facilitate unprecedented
understanding of large biomedical datasets. Data mining can uncover new
biomedical and healthcare knowledge for clinical and administrative decision
making as well as generate scientific hypotheses from large experimental data,
clinical databases, and/or biomedical literature. This review first introduces data
mining in general (e.g., the background, definition, and process of data mining),
discusses the major differences between statistics and data mining and then speaks
to the uniqueness of data mining in the biomedical and healthcare fields. A brief
summarization of various data mining algorithms used for classification,
clustering, and association as well as their respective advantages and drawbacks is
also presented. Suggested guidelines on how to use data mining algorithms in each
area of classification, clustering, and association are offered along with three
examples of how data mining has been used in the healthcare industry. Given the
successful application of data mining by health related organizations that has
helped to predict health insurance fraud and under-diagnosed patients, and identify
and classify at-risk people in terms of health with the goal of reducing healthcare
cost, we introduce how data mining technologies (in each area of classification,
clustering, and association) have been used for a multitude of purposes, including
research in the biomedical and healthcare fields. A discussion of the technologies
available to enable the prediction of healthcare costs (including length of hospital
stay), disease diagnosis and prognosis, and the discovery of hidden biomedical and
healthcare patterns from related databases is offered along with a discussion of the
use of data mining to discover such relationships as those between health
conditions and a disease, relationships among diseases, and relationships among
drugs. The article concludes with a discussion of the problems that hamper the
clinical use of data mining by health professionals.

YEAR OF PUBLICATION: 2012

AUTHOR NAME: Yoo I, Alafaireet P, Marinov M, Pena-Hernandez K, Gopidi R, Chang JF, Hua L.
TITLE: Uniqueness of medical data mining

ABSTRACT: This article addresses the special features of data mining with
medical data. Researchers in other fields may not be aware of the particular
constraints and difficulties of the privacy-sensitive, heterogeneous, but voluminous
data of medicine. Ethical and legal aspects of medical data mining are discussed,
including data ownership, fear of lawsuits, expected benefits, and special
administrative issues. The mathematical understanding of estimation and
hypothesis formation in medical data may be fundamentally different than those
from other data collection activities. Medicine is primarily directed at patient-care
activity, and only secondarily as a research resource; almost the only justification
for collecting medical data is to benefit the individual patient. Finally, medical data
have a special status based upon their applicability to all people; their urgency
(including life-or-death); and a moral obligation to be used for beneficial purposes.

YEAR OF PUBLICATION: 2002

AUTHOR NAME: Krzysztof J. Cios, G. W. Moore.


CLASS DIAGRAM:

[Class diagram: the Patient class provides Register, Login, and Request operations; the Hospital class provides Accept, View History, and Bed Allocation operations.]


SEQUENCE DIAGRAM:

[Sequence diagram: the Patient registers and the Hospital accepts; the Patient logs in and the Hospital views the patient history; the Patient sends a request and the Hospital allocates a bed.]
ACTIVITY DIAGRAM:

[Activity diagram: Patient activities are Register, Login, and Request; Hospital activities are Accept, View History, and Bed Allocation.]
USE CASE DIAGRAM:

[Use case diagram: the Patient actor participates in the Register, Login, and Request use cases; the Hospital actor participates in the Accept, View History, and Bed Allocation use cases.]
ER DIAGRAM:

[ER diagram: the Patient entity (Register, Login, Request) is related to the Hospital entity (Accept, View History, Bed Allocation).]


DATA FLOW DIAGRAM:

[Data flow diagram: Start → Register → Login → Request → Accept → View History → Bed Allocation → Stop.]
MODULES:

 PATIENT MODULE
 HOSPITAL MODULE

PATIENT MODULE:

 LOGIN
 REGISTER
 REQUEST
 LOGOUT

LOGIN:

The login module is the first and most common module in all
applications. In the proposed system only registered users are allowed to log in;
unauthorized users are denied access. Registered users whose username and password
are correct are taken to the next page; otherwise they cannot log in.
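
The following is a minimal sketch of how such a login check could be wired up on the Tomcat/JSP stack listed in the software configuration. The JDBC URL, the patients table, and its columns are hypothetical; a production system would also hash passwords rather than compare them in plain text.

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LoginServlet extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String user = req.getParameter("username");
        String pass = req.getParameter("password");
        try (Connection con = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/hospital", "dbuser", "dbpass");   // placeholder URL
             PreparedStatement ps = con.prepareStatement(
                 "SELECT 1 FROM patients WHERE username = ? AND password = ?")) {
            ps.setString(1, user);
            ps.setString(2, pass);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    // Valid credentials: remember the user and move on to the next page.
                    req.getSession().setAttribute("user", user);
                    resp.sendRedirect("home.jsp");
                } else {
                    // Invalid credentials: send the user back to the login page.
                    resp.sendRedirect("login.jsp?error=1");
                }
            }
        } catch (Exception e) {
            throw new ServletException(e);
        }
    }
}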

REGISTER:

This module handles user registration; all new users have to register. Each user
is given a unique username and password. To access their account they have
to provide a valid username and password, so authentication and security are
provided for the account.
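
A sketch of the corresponding registration step is shown below; it rejects duplicate usernames before inserting the new account. The table name, column names, and connection details are again hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RegistrationDao {
    // Returns true if the account was created, false if the username is already taken.
    public boolean register(String username, String password) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/hospital", "dbuser", "dbpass")) {  // placeholder URL
            try (PreparedStatement check = con.prepareStatement(
                    "SELECT 1 FROM patients WHERE username = ?")) {
                check.setString(1, username);
                try (ResultSet rs = check.executeQuery()) {
                    if (rs.next()) {
                        return false;              // duplicate usernames are rejected
                    }
                }
            }
            try (PreparedStatement insert = con.prepareStatement(
                    "INSERT INTO patients (username, password) VALUES (?, ?)")) {
                insert.setString(1, username);
                insert.setString(2, password);     // a real system would store a hash instead
                return insert.executeUpdate() == 1;
            }
        }
    }
}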

REQUEST:

In this module the patient submits a request (for example, for admission or a bed)
to the hospital. After submission, the request remains pending until the hospital
responds.
LOGOUT:

Logging out means ending access to a computer system or a website. Logging out
informs the computer or website that the current user wishes to end the login
session. Log out is also known as log off, sign off, or sign out.

HOSPITAL MODULE:

 LOGIN
 ACCEPT THE REQUEST
 VIEW PATIENT HISTORY
 BED ALLOCATION FOR THE PATIENT
 LOGOUT

LOGIN:

The login module is the first and most common module in all
applications. In the proposed system only registered users are allowed to log in;
unauthorized users are denied access. Registered users whose username and password
are correct are taken to the next page; otherwise they cannot log in.

ACCEPT THE REQUEST:

In this module the hospital reviews a pending request submitted by a patient and
accepts it, after which the patient's history can be viewed and a bed can be allocated.

VIEW PATIENT HISTORY:

This module displays the patient's record, including the patient's primary and
secondary addresses, emergency contacts, referrals, and family details. The Social
section of a patient's clinical record keeps track of the patient's social habits and
lifestyle choices.
BED ALLOTMENT:

Bed management is the allocation and provision of beds, especially in a
hospital where beds in specialist wards are a scarce resource. The "bed" in
this context represents not simply a place for the patient to sleep, but the
services that go with being cared for by the medical facility: admission
processing, physician time, nursing care, necessary diagnostic work,
appropriate treatment, and so forth.
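
As an illustration, the sketch below finds the first free bed and assigns it to a patient inside a single transaction. The beds table, its columns, and the status values are assumptions made only for this example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BedAllocationDao {
    // Finds the first free bed in the unit and assigns it to the patient.
    // Returns the bed number, or -1 if no bed is currently available.
    public int allocateBed(int patientId) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/hospital", "dbuser", "dbpass")) {  // placeholder URL
            con.setAutoCommit(false);              // find-and-assign as one transaction
            try (PreparedStatement find = con.prepareStatement(
                     "SELECT bed_no FROM beds WHERE status = 'FREE' LIMIT 1");
                 ResultSet rs = find.executeQuery()) {
                if (!rs.next()) {
                    con.rollback();
                    return -1;                     // every bed is occupied
                }
                int bedNo = rs.getInt("bed_no");
                try (PreparedStatement assign = con.prepareStatement(
                        "UPDATE beds SET status = 'OCCUPIED', patient_id = ? WHERE bed_no = ?")) {
                    assign.setInt(1, patientId);
                    assign.setInt(2, bedNo);
                    assign.executeUpdate();
                }
                con.commit();
                return bedNo;
            }
        }
    }
}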

LOGOUT

Logging out means ending access to a computer system or a website.
Logging out informs the computer or website that the current user wishes to
end the login session. Log out is also known as log off, sign off, or sign out.
LANGUAGE SPECIFICATION:

Java Technology
Java technology is both a programming language and a platform.
The Java Programming Language
The Java programming language is a high-level language that can be
characterized by all of the following buzzwords:
 Simple
 Architecture neutral
 Object oriented
 Portable
 Distributed
 High performance
 Interpreted
 Multithreaded
 Robust
 Dynamic
 Secure

With most programming languages, you either compile or interpret a program so
that you can run it on your computer. The Java programming language is unusual
in that a program is both compiled and interpreted. With the compiler, first you
translate a program into an intermediate language called Java byte codes —the
platform-independent codes interpreted by the interpreter on the Java platform. The
interpreter parses and runs each Java byte code instruction on the computer.
Compilation happens just once; interpretation occurs each time the program is
executed. The following figure illustrates how this works.
You can think of Java bytecodes as the machine code instructions for the
Java Virtual Machine (Java VM). Every Java interpreter, whether it’s a
development tool or a Web browser that can run applets, is an implementation of
the Java VM. Java bytecodes help make “write once, run anywhere” possible. You
can compile your program into bytecodes on any platform that has a Java compiler.
The bytecodes can then be run on any implementation of the Java VM. That means
that as long as a computer has a Java VM, the same program written in the Java
programming language can run on Windows 2000, a Solaris workstation, or on an
iMac.
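
For example, the small program below is compiled once into bytecode with javac, and the resulting class file can then be run by any Java VM, whether on Windows, Solaris, or a Mac. The class name and message are of course only illustrative.

// HelloStroke.java
// Compile once:  javac HelloStroke.java   (produces platform-independent HelloStroke.class)
// Run anywhere:  java HelloStroke         (the local Java VM interprets the bytecode)
public class HelloStroke {
    public static void main(String[] args) {
        System.out.println("Stroke LOS prediction system starting...");
    }
}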
The Java Platform
A platform is the hardware or software environment in which a program runs.
We’ve already mentioned some of the most popular platforms like Windows 2000,
Linux, Solaris, and MacOS. Most platforms can be described as a combination of
the operating system and hardware. The Java platform differs from most other
platforms in that it’s a software-only platform that runs on top of other hardware-
based platforms. The Java platform has two components:
 The Java Virtual Machine (Java VM)
 The Java Application Programming Interface (Java API)
You’ve already been introduced to the Java VM. It’s the base for the Java platform
and is ported onto various hardware-based platforms.
The Java API is a large collection of ready-made software components that provide
many useful capabilities, such as graphical user interface (GUI) widgets. The Java
API is grouped into libraries of related classes and interfaces; these libraries are
known as packages. The next section, What Can Java Technology Do?, highlights
what functionality some of the packages in the Java API provide.
The following figure depicts a program that’s running on the Java platform. As the
figure shows, the Java API and the virtual machine insulate the program from the
hardware.

Native code is code that, once compiled, runs only on a specific
hardware platform. As a platform-independent environment, the Java platform can
be a bit slower than native code. However, smart compilers, well-tuned
interpreters, and just-in-time bytecode compilers can bring performance close to
that of native code without threatening portability.

What Can Java Technology Do?


The most common types of programs written in the Java programming language
are applets and applications. If you’ve surfed the Web, you’re probably already
familiar with applets. An applet is a program that adheres to certain conventions
that allow it to run within a Java-enabled browser.
However, the Java programming language is not just for writing cute, entertaining
applets for the Web. The general-purpose, high-level Java programming language
is also a powerful software platform. Using the generous API, you can write many
types of programs.

An application is a standalone program that runs directly on the Java platform. A
special kind of application known as a server serves and supports clients on a
network. Examples of servers are Web servers, proxy servers, mail servers, and
print servers. Another specialized program is a servlet. A servlet can almost be
thought of as an applet that runs on the server side. Java Servlets are a popular
choice for building interactive web applications, replacing the use of CGI scripts.
Servlets are similar to applets in that they are runtime extensions of applications.
Instead of working in browsers, though, servlets run within Java Web servers,
configuring or tailoring the server.

How does the API support all these kinds of programs? It does so with packages of
software components that provide a wide range of functionality. Every full
implementation of the Java platform gives you the following features:
 The essentials: Objects, strings, threads, numbers, input and output, data
structures, system properties, date and time, and so on.
 Applets: The set of conventions used by applets.
 Networking: URLs, TCP (Transmission Control Protocol), UDP (User Datagram
Protocol) sockets, and IP (Internet Protocol) addresses.
 Internationalization: Help for writing programs that can be localized for
users worldwide. Programs can automatically adapt to specific locales and
be displayed in the appropriate language.
 Security: Both low level and high level, including electronic signatures,
public and private key management, access control, and certificates.
 Software components: Known as JavaBeans, these can plug into existing
component architectures.
 Object serialization: Allows lightweight persistence and communication
via Remote Method Invocation (RMI).
 Java Database Connectivity (JDBC): Provides uniform access to a wide
range of relational databases.
The Java platform also has APIs for 2D and 3D graphics, accessibility, servers,
collaboration, telephony, speech, animation, and more. The following figure
depicts what is included in the Java 2 SDK.
How Will Java Technology Change My Life?
We can’t promise you fame, fortune, or even a job if you learn the Java
programming language. Still, it is likely to make your programs better and requires
less effort than other languages. We believe that Java technology will help you do
the following:
 Get started quickly: Although the Java programming language is a
powerful object-oriented language, it’s easy to learn, especially for
programmers already familiar with C or C++.
 Write less code: Comparisons of program metrics (class counts, method
counts, and so on) suggest that a program written in the Java programming
language can be four times smaller than the same program in C++.
 Write better code: The Java programming language encourages good
coding practices, and its garbage collection helps you avoid memory leaks.
Its object orientation, its JavaBeans component architecture, and its wide-
ranging, easily extendible API let you reuse other people’s tested code and
introduce fewer bugs.
 Develop programs more quickly: Development can take as little as half the
time needed to write the same program in C++, because you write fewer lines
of code and the language itself is simpler than C++.
 Avoid platform dependencies with 100% Pure Java: You can keep your
program portable by avoiding the use of libraries written in other languages.
The 100% Pure Java TM Product Certification Program has a repository of
historical process manuals, white papers, brochures, and similar materials
online.
 Write once, run anywhere: Because 100% Pure Java programs are
compiled into machine-independent byte codes, they run consistently on any
Java platform.
 Distribute software more easily: You can upgrade applets easily from a
central server. Applets take advantage of the feature of allowing new classes
to be loaded “on the fly,” without recompiling the entire program.
ODBC
Microsoft Open Database Connectivity (ODBC) is a standard programming
interface for application developers and database systems providers. Before ODBC
became a de facto standard for Windows programs to interface with database
systems, programmers had to use proprietary languages for each database they
wanted to connect to. Now, ODBC has made the choice of the database system
almost irrelevant from a coding perspective, which is as it should be. Application
developers have much more important things to worry about than the syntax that is
needed to port their program from one database to another when business needs
suddenly change.
Through the ODBC Administrator in Control Panel, you can specify the particular
database that is associated with a data source that an ODBC application program is
written to use. Think of an ODBC data source as a door with a name on it. Each
door will lead you to a particular database. For example, the data source named
Sales Figures might be a SQL Server database, whereas the Accounts Payable data
source could refer to an Access database. The physical database referred to by a
data source can reside anywhere on the LAN.

The ODBC system files are not installed on your system by Windows 95. Rather,
they are installed when you setup a separate database application, such as SQL
Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control
Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your
ODBC data sources through a stand-alone program called ODBCADM.EXE.
There is a 16-bit and a 32-bit version of this program and each maintains a separate
list of ODBC data Sources.
From a programming perspective, the beauty of ODBC is that the application can
be written to use the same set of function calls to interface with any data source,
regardless of the database vendor. The source code of the application doesn’t
change whether it talks to Oracle or SQL Server. We only mention these two as an
example. There are ODBC drivers available for several dozen popular database
systems. Even Excel spreadsheets and plain text files can be turned into data
sources. The operating system uses the Registry information written by ODBC
Administrator to determine which low-level ODBC drivers are needed to talk to
the data source (such as the interface to Oracle or SQL Server). The loading of the
ODBC drivers is transparent to the ODBC application program. In a client/server
environment, the ODBC API even handles many of the network issues for the
application programmer.
The advantages of this scheme are so numerous that you are probably thinking
there must be some catch. The only disadvantage of ODBC is that it isn’t as
efficient as talking directly to the native database interface. ODBC has had many
detractors make the charge that it is too slow. Microsoft has always claimed that
the critical factor in performance is the quality of the driver software that is used.
In our humble opinion, this is true. The availability of good ODBC drivers has
improved a great deal recently. And anyway, the criticism about performance is
somewhat analogous to those who said that compilers would never match the
speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives
you the opportunity to write cleaner programs, which means you finish sooner.
Meanwhile, computers get faster every year.

JDBC
In an effort to set an independent database standard API for Java, Sun
Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a
generic SQL database access mechanism that provides a consistent interface to a
variety of RDBMS. This consistent interface is achieved through the use of “plug-
in” database connectivity modules, or drivers. If a database vendor wishes to have
JDBC support, he or she must provide the driver for each platform that the
database and Java run on.
To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As
you discovered earlier in this chapter, ODBC has widespread support on a variety
of platforms. Basing JDBC on ODBC will allow vendors to bring JDBC drivers to
market much faster than developing a completely new connectivity solution.
JDBC was announced in March of 1996. It was released for a 90 day public review
that ended June 8, 1996. Because of user input, the final JDBC v1.0 specification
was released soon after.
The remainder of this section will cover enough information about JDBC for you
to know what it is about and how to use it effectively. This is by no means a
complete overview of JDBC. That would fill an entire book.
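
As a brief illustration of the API, the fragment below opens a connection, runs a parameterized query, and iterates over the result set. The driver URL, credentials, and table are placeholders chosen for this example rather than details of the actual system.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class JdbcExample {
    public static void main(String[] args) throws Exception {
        // The connection URL selects the driver; user and password are placeholders.
        try (Connection con = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/hospital", "dbuser", "dbpass");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT name, los FROM patient_stays WHERE los > ?")) {
            ps.setInt(1, 7);                       // stays longer than seven days
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name") + " : " + rs.getInt("los") + " days");
                }
            }
        }
    }
}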

JDBC Goals
Few software packages are designed without goals in mind. JDBC is one that,
because of its many goals, drove the development of the API. These goals, in
conjunction with early reviewer feedback, have finalized the JDBC class library
into a solid framework for building database applications in Java.
The goals that were set for JDBC are important. They will give you some insight
as to why certain classes and functionalities behave the way they do. The eight
design goals for JDBC are as follows:

1. SQL Level API


The designers felt that their main goal was to define a SQL interface for Java.
Although not the lowest database interface level possible, it is at a low enough
level for higher-level tools and APIs to be created. Conversely, it is at a high
enough level for application programmers to use it confidently. Attaining this goal
allows for future tool vendors to “generate” JDBC code and to hide many of
JDBC’s complexities from the end user.

2. SQL Conformance
SQL syntax varies as you move from database vendor to database vendor. In an
effort to support a wide variety of vendors, JDBC will allow any query statement
to be passed through it to the underlying database driver. This allows the
connectivity module to handle non-standard functionality in a manner that is
suitable for its users.
3. JDBC must be implemental on top of common database interfaces
The JDBC SQL API must “sit” on top of other common SQL level APIs.
This goal allows JDBC to use existing ODBC level drivers by the use of a
software interface. This interface would translate JDBC calls to ODBC and
vice versa.
4. Provide a Java interface that is consistent with the rest of the Java
system
Because of Java’s acceptance in the user community thus far, the designers feel
that they should not stray from the current design of the core Java system.
5. Keep it simple
This goal probably appears in all software design goal listings. JDBC is no
exception. Sun felt that the design of JDBC should be very simple, allowing for
only one method of completing a task per mechanism. Allowing duplicate
functionality only serves to confuse the users of the API.
6. Use strong, static typing wherever possible
Strong typing allows for more error checking to be done at compile time; also,
fewer errors appear at runtime.

7. Keep the common cases simple


Because more often than not, the usual SQL calls used by the programmer are
simple SELECT’s, INSERT’s, DELETE’s and UPDATE’s, these queries should
be simple to perform with JDBC. However, more complex SQL statements should
also be possible.
Finally, we decided to proceed with the implementation using Java networking,
and for dynamically updating the cache table we chose an MS Access database.

SYSTEM TESTING:

The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, subassemblies, assemblies, and/or a finished product. It is the
process of exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of tests; each test type addresses a specific testing requirement.

TYPES OF TESTS

Unit testing

Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly, and that program inputs produce valid outputs. All decision branches and
internal code flow should be validated. It is the testing of individual software units of the
application; it is done after the completion of an individual unit and before integration. This is
structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests
perform basic tests at component level and test a specific business process, application, and/or
system configuration. Unit tests ensure that each unique path of a business process performs
accurately to the documented specifications and contains clearly defined inputs and expected results.
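
As an illustration of such a component-level test, the sketch below uses JUnit to check a hypothetical helper that maps a predicted number of days to a LOS category; the helper and its category boundaries are assumptions made only for this example.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class LosCategoryTest {

    // Hypothetical helper under test: maps a number of days to a LOS category.
    static String categorize(int days) {
        if (days <= 7)  return "SHORT";
        if (days <= 14) return "MEDIUM";
        return "LONG";
    }

    @Test
    public void shortStayIsCategorizedAsShort() {
        assertEquals("SHORT", categorize(5));
    }

    @Test
    public void boundaryOfTwoWeeksIsMedium() {
        assertEquals("MEDIUM", categorize(14));
    }

    @Test
    public void longStayIsCategorizedAsLong() {
        assertEquals("LONG", categorize(30));
    }
}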
Integration testing

Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of components is
correct and consistent. Integration testing is specifically aimed at exposing the problems that
arise from the combination of components.

Functional test

Functional tests provide a systematic demonstration that functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key
functions, or special test cases. In addition, systematic coverage must be considered for testing,
including business process flows, data fields, predefined processes, and successive processes.
Before functional testing is complete, additional tests are identified and the effective value of
current tests is determined.
System Test
System testing ensures that the entire integrated software system meets requirements. It
tests a configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process descriptions
and flows, emphasizing pre-driven process links and integration points.

White Box Testing


White Box Testing is a form of testing in which the software tester has knowledge of the
inner workings, structure, and language of the software, or at least its purpose. It is
used to test areas that cannot be reached from a black box level.

Black Box Testing


Black Box Testing is testing the software without any knowledge of the inner workings,
structure, or language of the module being tested. Black box tests, like most other kinds of tests,
must be written from a definitive source document, such as a specification or requirements
document. In this form of testing the software under test is treated as a black box: you cannot
"see" into it. The test provides inputs and responds to outputs without considering how the
software works.

Unit Testing:

Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as
two distinct phases.

Test strategy and approach


Field testing will be performed manually and functional tests will be written in detail.

Test objectives

 All field entries must work properly.


 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.

Features to be tested

 Verify that the entries are of the correct format


 No duplicate entries should be allowed
 All links should take the user to the correct page.

Integration Testing

Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by interface
defects.

The task of the integration test is to check that components or software applications, e.g.
components in a software system or – one step up – software applications at the company level –
interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.

Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.
CONCLUSION:

This study introduces an approach for early prediction of LOS of stroke patients
arriving at a stroke unit. The approach involves a feature selection step based on
information gain and a prediction model development step using J48 decision tree
or Bayesian network. Prediction results were compared between the two models.
The performance of the Bayesian network based model was better (accuracy
81.28%) as opposed to the performance of the J48 based prediction model
(accuracy 77.1%). A partial representation of the J48 based model and the
Bayesian network based model are exhibited in Fig. 3 and Fig. 4 respectively. The
study faced some limitations in terms of data availability and the quality of the
data. More than 50% of the collected records were discarded due to
incompleteness. Nevertheless, the performance of the proposed prediction model is
quite promising. In future studies, the proposed approach will be tested on larger
datasets. Another important area for future research is to extend the proposed
approach to predict other attributes, such as the stroke level of the patients.
REFERENCES:

[1] R. W. V. Flynn, R. S. M. MacWalter, and A. S. F. Doney, "The cost of cerebral ischaemia," Neuropharmacology, vol. 55, no. 3, pp. 250–256, 2008.

[2] N. R. Sims and H. Muyderman, "Mitochondria, oxidative metabolism and cell death in stroke," Biochimica et Biophysica Acta (BBA) - Molecular Basis of Disease, vol. 1802, no. 1, pp. 80–91, 2010.

[3] A. M. Mimino, Deaths: Preliminary Data for 2008. DIANE Publishing, 2011.

[4] T. Galski, R. L. Bruno, R. Zorowitz, and J. Walker, "Predicting length of stay, functional outcome, and aftercare in the rehabilitation of stroke patients. The dominant role of higher-order cognition," Stroke, vol. 24, no. 12, pp. 1794–1800, 1993.

[5] P. R. Hachesu, M. Ahmadi, S. Alizadeh, and F. Sadoughi, "Use of data mining techniques to determine and predict length of stay of cardiac patients," Healthcare Informatics Research, vol. 19, no. 2, pp. 121–129, 2013.

[6] P. A. Maguire, I. C. Taylor, and R. W. Stout, "Elderly patients in acute medical wards: factors predicting length of stay in hospital," Br Med J (Clin Res Ed), vol. 292, no. 6530, pp. 1251–1253, 1986.

[7] A. Lim and P. Tongkumchum, "Methods for analyzing hospital length of stay with application to inpatients dying in Southern Thailand," Global Journal of Health Science, vol. 1, no. 1, p. 27, 2009.

[8] V. Gómez and J. E. Abásolo, "Using data mining to describe long hospital stays," Paradigma, vol. 3, no. 1, pp. 1–10, 2009.

[9] K.-C. Chang, M.-C. Tseng, H.-H. Weng, Y.-H. Lin, C.-W. Liou, and T.-Y. Tan, "Prediction of length of stay of first-ever ischemic stroke," Stroke, vol. 33, no. 11, pp. 2670–2674, 2002.

[10] D. A. Huntley, D. W. Cho, J. Christman, and J. G. Csernansky, "Predicting length of stay in an acute psychiatric hospital," Psychiatric Services, 1998.

[11] I. Nouaouri, A. Samet, and H. Allaoui, "Evidential data mining for length of stay (LOS) prediction problem," in 2015 IEEE International Conference on Automation Science and Engineering (CASE), 2015, pp. 1415–1420.

[12] L. Lella, A. di Giorgio, and A. F. Dragoni, "Length of Stay Prediction and Analysis through a Growing Neural Gas Model," 2015.

[13] M. Rowan, T. Ryan, F. Hegarty, and N. O'Hare, "The use of artificial neural networks to stratify the length of stay of cardiac patients based on preoperative and initial postoperative factors," Artificial Intelligence in Medicine, vol. 40, no. 3, pp. 211–221, 2007.

[14] X. Jiang, X. Qu, and L. B. Davis, "Using Data Mining to Analyze Patient Discharge Data for an Urban Hospital," in DMIN, 2010, pp. 139–144.

[15] J. Wrenn, I. Jones, K. Lanaghan, C. B. Congdon, and D. Aronsky, "Estimating patient's length of stay in the emergency department with an artificial neural network," in AMIA Annual Symposium Proceedings, 2005, vol. 2005, p. 1155.

[16] Á. Silva, P. Cortez, M. F. Santos, L. Gomes, and J. Neves, "Rating organ failure via adverse events using data mining in the intensive care unit," Artificial Intelligence in Medicine, vol. 43, no. 3, pp. 179–193, 2008.

[17] I. Yoo et al., "Data mining in healthcare and biomedicine: a survey of the literature," Journal of Medical Systems, vol. 36, no. 4, pp. 2431–2448, 2012.

[18] R. Katta and Y. Zhang, "Medical data mining," in AeroSense 2002, 2002, pp. 305–308.

[19] K. J. Cios and G. W. Moore, "Uniqueness of medical data mining," Artificial Intelligence in Medicine, vol. 26, no. 1, pp. 1–24, 2002.

[20] S. N. Ghazavi and T. W. Liao, "Medical data mining by fuzzy modeling with selected features," Artificial Intelligence in Medicine, vol. 43, no. 3, pp. 195–206, 2008.

[21] J. Li et al., "Mining risk patterns in medical data," in Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, 2005, pp. 770–775.

[22] M.-L. Antonie, O. R. Zaiane, and A. Coman, "Application of Data Mining Techniques for Medical Image Classification," MDM/KDD, vol. 2001, pp. 94–101, 2001.

[23] D. Delen, G. Walker, and A. Kadam, "Predicting breast cancer survivability: a comparison of three data mining methods," Artificial Intelligence in Medicine, vol. 34, no. 2, pp. 113–127, 2005.

[24] A. Azari, V. P. Janeja, and A. Mohseni, "Predicting hospital length of stay (PHLOS): A multi-tiered data mining approach," in 2012 IEEE 12th International Conference on Data Mining Workshops, 2012, pp. 17–24.

[25] I. Kononenko and S. J. Hong, "Attribute selection for modelling," Future Generation Computer Systems, vol. 13, no. 2, pp. 181–195, 1997.

[26] Y. Saeys, I. Inza, and P. Larrañaga, "A review of feature selection techniques in bioinformatics," Bioinformatics, vol. 23, no. 19, pp. 2507–2517, 2007.

[27] R. Duggal, S. Shukla, S. Chandra, B. Shukla, and S. K. Khatri, "Impact of selected pre-processing techniques on prediction of risk of early readmission for diabetic patients in India," International Journal of Diabetes in Developing Countries, pp. 1–8, 2016.
