Series Editors:
B. Blobel, O. Bodenreider, E. Borycki, M. Braunstein, C. Bühler, J.P. Christensen, R. Cooper,
R. Cornet, J. Dewen, O. Le Dour, P.C. Dykes, A. Famili, M. González-Sancho, E.J.S. Hovenga,
J.W. Jutai, Z. Kolitsi, C.U. Lehmann, J. Mantas, V. Maojo, A. Moen, J.F.M. Molenbroek,
G. de Moor, M.A. Musen, P.F. Niederer, C. Nøhr, A. Pedotti, O. Rienhoff, G. Riva, W. Rouse,
K. Saranto, M.J. Scherer, S. Schürer, E.R. Siegel, C. Safran, N. Sarkar, T. Solomonides, E. Tam,
J. Tenenbaum, B. Wiederhold and L.H.W. van der Woude
Volume 236
Recently published in this series
Vol. 235. R. Randell, R. Cornet, C. McCowan, N. Peek and P.J. Scott (Eds.), Informatics for
Health: Connected Citizen-Led Wellness and Population Health
Vol. 234. F. Lau, J. Bartle-Clar, G. Bliss, E. Borycki, K. Courtney and A. Kuo (Eds.), Building
Capacity for Health Informatics in the Future
Vol. 233. A.M. Kanstrup, A. Bygholm, P. Bertelsen and C. Nøhr (Eds.), Participatory Design &
Health Information Technology
Vol. 232. J. Murphy, W. Goossen and P. Weber (Eds.), Forecasting Informatics Competencies
for Nurses in the Future of Connected Health – Proceedings of the Nursing
Informatics Post Conference 2016
Vol. 231. A.J. Maeder, K. Ho, A. Marcelo and J. Warren (Eds.), The Promise of New
Technologies in an Age of New Health Challenges – Selected Papers from Global
Telehealth 2016
Vol. 230. J. Brender McNair, A Unifying Theory of Evolution Generated by Means of
Information Modelling
Vol. 229. H. Petrie, J. Darzentas, T. Walsh, D. Swallow, L. Sandoval, A. Lewis and C. Power
(Eds.), Universal Design 2016: Learning from the Past, Designing for the Future –
Proceedings of the 3rd International Conference on Universal Design (UD 2016),
York, United Kingdom, August 21–24, 2016
Edited by
Dieter Hayn
AIT Austrian Institute of Technology GmbH, Graz, Austria
and
Günter Schreier
AIT Austrian Institute of Technology GmbH, Graz, Austria
This book is published online with Open Access and distributed under the terms of the Creative
Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
Publisher
IOS Press BV
Nieuwe Hemweg 6B
1013 BG Amsterdam
Netherlands
fax: +31 20 687 0019
e-mail: order@iospress.nl
LEGAL NOTICE
The publisher is not responsible for the use which might be made of the following information.
Preface
Digital Insight – Information-Driven Health & Care
A hospital is generally regarded as a place where patients are cured of injuries and
diseases. In an increasing number of cases, however, clinical patient management
within the hospital alone is not enough for successful treatment; continuation of the
therapy beyond the hospital walls is indicated. This requires not only cooperative and
motivated patients, but also information exchange with out-patient healthcare providers
and professionals. Not only physicians but also nurses are concerned, since nurses
usually have more contact with the patients and their families. Today's ICT tools need
to actively involve all the players in healthcare in order to close the gaps in
multimodal patient care. Additionally, hospital data should be complemented with data
from the patients' homes, to support active and independent living.
Ineffective discharge management may jeopardize the successful conclusion of treat-
ment. Moreover, hospital stays can represent independent risk factors for additional
diseases or complications, e.g. when nosocomial infections or delirium are involved.
Many of these events could potentially be prevented, but prevention strategies often
need an integrated view and involve more than just one type of hospital personnel, i.e.
physicians, nurses, administration, technicians, etc. Within hospitals, a lot of data that
could help to detect early or even to prevent such events is already available. So far,
however, these data are often either not sufficiently considered, or the implications of
identified risk factors are not sufficiently communicated. "Information-driven Health &
Care" relates to both a) the identification of relevant factors within data pools from
health and care, and b) information exchange among all the involved stakeholders. ICT
can support both aspects and provide tools that help health professionals identify and
communicate relevant data. Such tools will play an important role in future healthcare
systems.
For the most common diseases in industrialised countries, such as cardiovascular
diseases or diabetes, information exchange networks usually need to cover only a
rather small geographic area. For less frequent or even rare conditions, however,
health informatics and eHealth are becoming more and more connected across Europe.
European Reference Networks for rare diseases are currently being implemented by the
EU member states. Tools such as virtual tumour boards for oncological treatments or,
generally speaking, virtual multidisciplinary meetings and conferences will play an
important role within these networks. The Reference Networks, once fully functional,
will not only ease access to specialised experts for patients with rare diseases, they
will also build a European ICT infrastructure that might influence healthcare processes
for decades to come. Therefore, it is essential that such infrastructures build on
well-established standards such as Integrating the Healthcare Enterprise (IHE), even if
standardization might initially require more implementation time.
The 2017 edition of the annual Health Informatics meets eHealth conference addresses
this increasingly international focus of eHealth and acknowledges the importance of
cross-border health ICT. "Digital Insight – Information-driven Health & Care" has been
chosen as the special topic for eHealth2017, since future ICT systems need to include
methods of machine learning and predictive analytics in order to provide actionable
information to health professionals and to support preventive healthcare concepts, as
described above. The present book, available as an open access eBook, was compiled to
give some digital insight into current research in health informatics and eHealth,
addressing future issues of health and care.
Reviewers
We would like to thank all reviewers for their significant contribution to the proceedings
of the eHealth2017:
Contents
Preface v
Dieter Hayn and Günter Schreier
Scientific Program Committee vii
Reviewers viii
1. Introduction
Access to and retrieval of relevant information for patient safety and quality
assessment is important in clinical contexts. The objective of critical incident
reporting systems (CIRS) is to enable users, e.g. healthcare professionals working for a
hospital, to report, in an anonymous manner, critical events that occurred in their
working environment. Incident reporting has been instituted in the healthcare systems
of many countries for some time now, e.g. in Switzerland in 1997 [1], but reporting
critical incidents is not obligatory in all healthcare systems. However, it has been
shown that these anecdotal reports bear important information on the limitations of
systems and processes [2]. On the one hand, critical situations or even systematic
errors can be identified by studying these reports, which is crucial for developing
countermeasures. On the other hand, once a measure to address certain problematic
situations has been realized in a hospital, a database of incident reports could be used
to check whether the measure was successful and the number of reported cases of a
certain problem category dropped as expected. The reports thus make it possible to
recognize trends relevant for further assessment.
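To make the trend-checking use case above concrete, here is a toy Python sketch; the report fields, categories and the intervention date are invented for illustration and do not reflect any real CIRS schema. It counts reported cases per problem category before and after an intervention date, so a drop in a category can be spotted:

```python
from collections import Counter
from datetime import date

def count_by_category(reports, cutoff):
    """Split reports at an intervention date and count how often each
    problem category was reported before and after it."""
    before, after = Counter(), Counter()
    for r in reports:
        target = before if r["date"] < cutoff else after
        target[r["category"]] += 1
    return before, after

# Invented example reports (date + manually assigned problem category)
reports = [
    {"date": date(2016, 3, 1), "category": "medication"},
    {"date": date(2016, 5, 10), "category": "medication"},
    {"date": date(2016, 9, 2), "category": "medication"},
    {"date": date(2016, 9, 5), "category": "fall"},
]
before, after = count_by_category(reports, cutoff=date(2016, 7, 1))
```

In practice such counts would be derived from the automatically retrieved report sets rather than from hand-labelled categories.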
In practice, reports are entered into a CIRS by filling in a digital reporting form. The
form contains multiple free-text fields, mainly asking for a description of the problem
or incident that has been recognized.
1 Corresponding Author: Kerstin Denecke, Bern University of Applied Sciences, Quellgasse 21, Biel, E-Mail: kerstin.denecke@bfh.ch.
2 K. Denecke / Concept-Based Retrieval from Critical Incident Reports
Experience from quality managers in hospitals shows that the free-text fields bear the
most important information for problem assessment. For
identifying problems from these reports in a hospital, the entire database of
free-textual event descriptions needs to be queried. So far, CIRS are not designed to
support retrieval of relevant reports matching a specific query, or only structured
fields can be queried. Thus, there is a substantial need for analyzing the free-textual
parts of the reports automatically, identifying trends and frequently occurring
incidents, very serious problems or even causes of critical incidents, as the use case
scenarios above show. Systematic access to big and complex data sets, especially in
medical information systems, is still challenging [3]. State-of-the-art retrieval
methods are based on techniques like keyword matching, document indexing, scoring or
term weighting [4], language models or inference networks [5]. In recent years,
research in that area has focused on improving search results by taking domain
knowledge into account [6] or by using semantic methods [7]. In such techniques, words
from both the query and the items in the corpus are mapped to concepts and relations in
a knowledge source (typically an ontology), and retrieval is then based on semantic
proximity in the background ontology.
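The concept-mapping idea can be illustrated with a minimal Python sketch. The term-to-concept lexicon and the concept identifiers below are invented stand-ins for a real knowledge source such as an ontology; only the matching principle (query and documents meet in concept space) is the point:

```python
# Invented mini-lexicon: surface terms mapped to concept identifiers.
LEXICON = {
    "heparin": "C:DRUG",
    "aspirin": "C:DRUG",
    "medication": "C:DRUG",
    "fall": "C:FALL",
}

def to_concepts(text):
    """Map the words of a text to the set of known concept identifiers."""
    return {LEXICON[w] for w in text.lower().split() if w in LEXICON}

def retrieve(query, docs):
    """Return indices of documents sharing at least one concept with the query."""
    q = to_concepts(query)
    return [i for i, d in enumerate(docs) if q & to_concepts(d)]

docs = ["heparin given twice", "patient fall at night", "no incident"]
hits = retrieve("wrong medication", docs)  # matches doc 0 via the shared concept C:DRUG
```

Note how "medication" retrieves the heparin report even though the query term never occurs in it, which is exactly the behaviour a pure keyword match cannot provide.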
Very limited work in natural language processing and information management has
considered medical incident reports. Akiyama et al. introduced a method to distinguish
incident reports using artificial intelligence technology [8]. In more detail,
characteristic words were extracted from incident reports, and co-occurrence networks
of the characteristic words were created. Fujita et al. performed a linguistic analysis
of incident reports in English [9]. They extracted characteristic words using natural
language processing and evaluated the degree of similarity between incident documents.
In our work, we aim at handling the special requirements of processing incident reports
from hospitals to support search and navigation within a collection of reports. This
use case differs significantly from many other search scenarios. As the size of the
document collection under scrutiny is relatively small compared to a pool of abstracts
of biomedical literature or an entire database of clinical documents comprising
thousands of texts, a retrieval approach tuned towards a high recall is required. The
objective of the retrieval is to get hints to potential problems in the workflow. The
user values all facts that address a given information need, and he or she would also
accept a certain amount of false-positive results. The expected results are incident
reports that match the query.
The main contributions of this paper are: 1) the introduction and implementation of a
concept-based retrieval method for incident reports in German, and 2) an evaluation and
comparison of concept-based retrieval with a standard retrieval method.
2.1. Requirements
2.2. Material
The basis of the analysis and retrieval experiment are 581 randomly selected incident
reports from the Inselspital Bern. They originate from different clinics of the hospital
and consist of at least a date and a free-textual event description. Most of them have a
title summarizing the critical event, consisting mainly of one to three keywords.
Sometimes, a potential measure for addressing the problem is suggested in a separate
data field. An example is shown in Table 1.
A corpus analysis [10] showed that the incident reports contain medical and non-medical
named entities. We follow the hypothesis that this peculiarity requires a retrieval
method that allows searching both for medical concepts and for keywords. We apply
Apache Lucene (https://lucene.apache.org/core/) to create word vectors for the
documents in the data collection. Lucene is an open source, full-featured text search
engine library written in Java. The standard settings are kept, meaning that the texts
are tokenized, normalized to lower case, stripped of stop words, and stemmed. In
addition, we extend the search vector with semantic concepts. More specifically, the
query and the reports are indexed by ID MACS®, which results in a list of concepts of
the Wingert nomenclature for each report and for the query. The terminology server ID
MACS® (medical semantic network) is a software provided by the German company ID
Information und Dokumentation im Gesundheitswesen (http://www.id-berlin.de) [3,11].
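The combined index can be sketched as follows. The tiny stop-word list and concept table are crude, invented placeholders for Lucene's analyzer chain and the ID MACS® indexing step; the sketch only illustrates how concept tokens extend the keyword vector so that OR-matching works over both:

```python
# Invented placeholders for the real analyzer and terminology server.
STOPWORDS = {"the", "a", "was", "der", "die", "das"}
CONCEPTS = {"verordnet": "CUI:PRESCRIPTION", "verordnung": "CUI:PRESCRIPTION"}

def analyze(text):
    """Normalize to lower case, drop stop words, and append concept tokens."""
    tokens = [t for t in text.lower().split() if t not in STOPWORDS]
    concepts = [CONCEPTS[t] for t in tokens if t in CONCEPTS]
    return set(tokens) | set(concepts)

def matches(query, report):
    """OR semantics: any shared keyword or concept token counts as a hit."""
    return bool(analyze(query) & analyze(report))

# Inflected forms map to the same concept, so the concept token bridges them:
hit = matches("Verordnung fehlt", "Medikament falsch verordnet")
```

Here "Verordnung" and "verordnet" share no stem in this naive tokenizer, yet the report is still found through the shared concept token, which is the motivation for extending the vector.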
When a document is analyzed, ID MACS® splits the text into its sentences. Each
sentence is then broken up by a chunking method. The resulting chunks contain noun,
verb or adverbial phrases. In each of these phrases, the clinical and additional
concepts are identified. If a potential word is found, it is mapped onto the respective
concept of the Wingert nomenclature. The latter is a German derivative of an early
version of SNOMED [12]. It is a polyaxial nomenclature that contains ten axes of
different categories of concepts. For example, the Topology axis contains topological
concepts, the Morphology axis contains morphological concepts and the Procedure axis
contains concepts referring to medical procedures. In addition, the G-axis contains
helpful concepts for certain adjectives and verbs as well as linguistic
meta-information (e.g. "negated phrase").
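As an illustration only, a polyaxial concept entry could be represented as below; the axis letters, codes and labels are invented for the example and are not actual Wingert nomenclature codes:

```python
from typing import NamedTuple

class Concept(NamedTuple):
    axis: str   # e.g. "T" topology, "M" morphology, "P" procedure, "G" general
    code: str   # invented code, standing in for a real nomenclature code
    label: str

concepts = [
    Concept("T", "T-0001", "lung"),
    Concept("M", "M-0002", "inflammation"),
    Concept("G", "G-0003", "negated phrase"),
]

def by_axis(concepts, axis):
    """Select all concepts belonging to one axis of the nomenclature."""
    return [c for c in concepts if c.axis == axis]
```

Grouping by axis in this way is what makes it possible to treat, say, topological and procedural concepts differently during retrieval.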
The surrounding words of an examined word are taken into consideration for
concept mapping. Thereby, ambiguities can be resolved and inconvenient wordings can
In the evaluation, we compare three retrieval methods: keyword-based retrieval
(lucene), concept-based retrieval (semantic) and their combination (combined). The
objective is to determine the precision, recall and F-score of the retrieval and to
identify the differences and limitations of concept-based retrieval (combined and
semantic) versus standard IR (lucene).
We consider five topics for which documents are retrieved (see Table 2): delivery,
drugs, prescription, control and signature. The topics and queries were formulated by
the prospective user. The queries used for retrieval are listed in Table 2; the
indicated search terms were concatenated by OR in our system. The gold standard was
created manually by a medical expert, who applied the query terms in the Excel search
field and marked all relevant matches.

Table 2: Queries for the evaluation and the number of target documents according to the manual annotation

ID  Topic         Query terms                  Translated query              No. of target
                                               (not used in the evaluation)  documents
1   Delivery      Ausgabe, Abgabe, abgegeben   delivery, deliver             14
2   Drugs         Medikamente, Medikation      drugs, medication             284
3   Prescription  Verordnung                   prescription, prescribe       176
4   Control       Kontrolle, kontrollieren     control                       80
5   Signature     Visum                        signature                     1

We determined F-measure, precision and recall for the newly introduced method (the
combination of keyword-based and concept-based retrieval), but also for each approach
on its own, to be able to assess the differences between the retrieval methods. Errors
and missing retrieval results were assessed manually. Since, for the retrieval task
under consideration, recall is more important than precision, we also calculate the
F2-measure using Equation (1):
F_β = (1 + β²) · (precision · recall) / (β² · precision + recall)    (1)
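The F-beta measure, with β = 2 weighting recall higher than precision as used here, can be computed directly; a minimal sketch:

```python
def f_beta(precision, recall, beta=2.0):
    """F-beta measure; beta > 1 weights recall higher than precision."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

With β = 1 this reduces to the ordinary harmonic mean of precision and recall; with β = 2 a method with high recall and moderate precision scores better than the reverse.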
3. Results
Table 3 shows the retrieval results for all queries and the three retrieval methods. It can
be seen that the precision is lower for the semantic search and the combined approach
than for the lucene retrieval method. Precision is on average 96% for the lucene
retrieval, 51% for the semantic retrieval and 61% for the combined approach. The
quality varies substantially for the semantic retrieval and the combined approach
depending on the query. The highest precision value of 94% is achieved with the
semantic retrieval and the combined approach for the query "Drugs". In contrast, the
average recall of the combined approach, at 96.5%, is significantly higher than for the
lucene and semantic approaches. This means that many irrelevant documents are retrieved
with the semantic and combined approaches but, in particular for the combined method,
the identified results mainly contain the relevant ones. The F2-measure for the lucene
retrieval and the combined approach is similar, at 0.78.
4. Discussion
In this work, a new retrieval method based on semantic term mapping was introduced. It
has been shown that combining standard keyword-based retrieval with the semantic search
results in highly satisfactory recall values. The lucene retrieval returns all texts
containing the search term, which can result in false positives since the context is
not considered at all. The keyword-based lucene approach fails when the search term or
a synonym is not explicitly mentioned in the text and semantic inferences would be
necessary. For example, a report that only contains the term "heparin" would not be
identified by a query like "medication". So far, the semantic relations included in ID
MACS® are not used, but they could help in making such inferences. The semantic
approach produces many false negatives due to the indexing process (i.e. the mapping of
natural language to concepts of the ontology). Sometimes the results are false
positives, but in some cases these are true positives that were not determined using
the Excel query. This holds true for query terms with synonyms. The semantic (and
combined) search abstracts from lexical variants and synonyms and is thus more powerful
than simple keyword matching. For this reason, inflected forms are mapped to the same
concept (e.g. verordnet, Verordnung etc. (prescribe, prescription) are mapped to the
concept referring to "Verordnung" (prescription)). Beyond the evaluation protocol, we
recognized that queries consisting of proper names can fail in the semantic search.
Such names cannot be indexed and will thus yield no retrieval results (e.g. ipdos). On
the other hand, the keyword-based approach can fail for proper names given their many
writing variations (ipdos, i-pdos, i-p Dos, i-dos…).
Precision rates for information retrieval tasks in the biomedical domain have been
assessed mainly on biomedical literature, achieving a precision between 70 and 90%
with a recall of around 70% [15]. Compared to this, the combined approach with
semantic and keyword-based search results in a better recall (96%) and a slightly
lower precision. The idea of semantic retrieval is not new, but it had not yet been
applied to medical incident reports in German. The underlying terminology of ID
MACS® with its semantic network provides a well-suited resource for realizing the
retrieval.
The approach was tuned towards a high recall. This results from the experience that a
quality manager would rather go through some false-positive documents than lose too
much relevant information. Compared to current retrieval practice, it is still a gain
in time to have some irrelevant texts in the result set. In future work, the system
should be tested with queries from multiple users. The queries used for the evaluation
were to a certain extent artificial, since they contain only keywords as used in the
existing retrieval method. Having a real retrieval system at hand could lead to more
complex or syntactically incorrect queries.
The evaluation does not reflect user satisfaction with the retrieval results. As a next
step, the retrieval system needs to be tested and used by quality managers and other
people accessing CIRS messages. It is obvious that the time for retrieving relevant
reports decreases with such retrieval methods. Therefore, we expect high user
satisfaction, since users get support in analyzing the reports. With respect to the
evaluation setting, it has to be mentioned that the dataset was relatively small and
comprised only incident reports from one hospital. A ranking of search results was not
considered, since it is irrelevant in the current search scenario.
To address the fact that specific queries cannot be easily formulated by the user,
faceted search or data set visualization methods such as tag clouds could help. A first
assessment showed that tag clouds provide useful terms for further retrieval. We will
consider this in our future work. Additionally, a query expansion using the semantic
network of ID MACS® will be considered. Query expansion is a representative technique
of information retrieval. It generates alternative search terms or expanded queries on
the lexical or semantic level to improve retrieval performance [16]. The retrieval
could also be improved by including SoundEx technology [17] to retrieve texts for query
terms that sound similar to words in the reports.
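For illustration, a simplified Soundex variant (omitting the h/w handling of the full algorithm) shows how phonetic codes can bridge writing variants such as those observed for proper names:

```python
def soundex(word):
    """Simplified Soundex: first letter plus up to three consonant digits.
    Non-letters (hyphens, spaces) are ignored, so writing variants collapse."""
    codes = {}
    for letters, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")]:
        for ch in letters:
            codes[ch] = digit
    letters = [c for c in word.lower() if c.isalpha()]
    if not letters:
        return "0000"
    digits = [codes.get(c, "") for c in letters]
    out, prev = [], digits[0]
    for d in digits[1:]:
        if d and d != prev:   # drop vowels and collapse adjacent duplicates
            out.append(d)
        prev = d
    return letters[0].upper() + "".join(out)[:3].ljust(3, "0")
```

The writing variants of the proper name mentioned above ("ipdos", "i-p Dos", …) all reduce to the same code, so a phonetic index would retrieve them with one query.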
5. Acknowledgement
We acknowledge Helmut Paula for providing and annotating the data. Further, thanks to
ID Berlin for giving access to ID MACS® within the context of this work.
References
[1] Staender S, Davies J, Helmreich B, Sexton B, Kaufmann M. The anaesthesia critical incident reporting
system: an experience based database. Intern J Med Inform 47, 1997, 87-90
[2] Pham JC, Girard T and Pronovost PJ. What to do With Healthcare Incident Reporting Systems. Journal
of Public Health Research, 2013, 2(3), 154-59
[3] Denecke K. Informationsextraktion aus medizinischen Texten. (Information extraction from medical
documents). PhD Thesis. Shaker Verlag, Aachen, 2008.
[4] Manning CD, Raghavan P, Schütze H et al. Introduction to information retrieval, Volume 1. Cambridge
university press, Cambridge, 2008
[5] Dakka W, & Ipeirotis PG . Automatic Extraction of Useful Facet Hierarchies from Text Databases. 2008
IEEE 24th International Conference on Data Engineering. doi:10.1109/icde.2008.4497455.
[6] Vit Novacek TG and Handschuh S. Coraal towards deep exploitation of textual resources in life sciences.
Lecture Notes in Computer Science. Berlin/Heidelberg, 2009, 5651/2009:206–215
[7] Gonzalo J, Li H, Moschitti A, and Xu J. Sigir 2014 workshop on semantic matching in information
retrieval. In Proceedings of the 37th International ACM SIGIR Conference on Research &
Development in Information Retrieval, SIGIR '14, New York, NY, USA, 2014. ACM, 1296-1296.
[8] Akiyama M, Yamamoto S, Fujita K, Sakata I, Kajikawa Y. Effective Learning and Knowledge Discovery
Using Processed Medical Incident Report. 2012 Proceedings of PICMET '12: Technology Management
for Emerging Technologies, Vancouver, BC, 2012, 2337-2346
[9] Fujita K, Akiyama A, Park K, Yamaguchi EN, Furukawa H. Linguistic Analysis of Large-Scale Medical
Incident Reports for Patient Safety. Studies in Health Technology and Informatics. Volume 180: Quality
of Life through Quality of Information, 2012, 250 –254
[10] Denecke K: Automatic Analysis of Critical Incident Reports: Requirements and Use Cases. Stud Health
Technol Inform. 2016;223:85-92.
[11] Kreuzthaler M, Bloice MD, Faulstich L, Simonic KM, Holzinger A. A Comparison of Different
Retrieval Strategies Working on Medical Free Texts. JUCS, 2011, 17(7): 1109-1133.
[12] Wingert F. Automated indexing based on SNOMED. Methods Inf Med, 1985, 24(1), 765-773.
[13] Denecke K, Bernauer J. Extracting Specific Medical Data Using Semantic Structures. AIM, 2007: 4594:
257-264
[14] Denecke K. Semantic Structuring of and Information Extraction from Medical Documents using the
UMLS. Methods of Information in Medicine, 2008, 5(47), 425-34
[15] Ananiadou S, McNaught J. Text Mining for Biology and biomedicine. Artech House, Inc., Norwood,
MA, USA, 2005
[16] Voorhees EM. Query expansion using lexical-semantic relations. In Proceedings of the 17th annual
international ACM SIGIR conference on Research and development in information retrieval (SIGIR
'94), W. Bruce Croft and C. J. van Rijsbergen (Eds.). Springer-Verlag New York, Inc., New York, NY,
USA, 1994, pp. 61-69
[17] Zobel J and Dart P. Phonetic string matching: lessons from information retrieval. In Proceedings
of the 19th annual international ACM SIGIR conference on Research and development in information
retrieval (SIGIR '96). ACM, New York, NY, USA, 1996, pp. 166-172
8 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-8
1. Introduction
Coding of medical terms from full-text data is a crucial prerequisite for medical data
management in clinical research as well as in patient care. Scientific publications, for
instance, are indexed by controlled keywords (e.g. the Medical Subject Headings used
by PubMed). Moreover, full-text entries of patient records are processed and further
analyzed by applying automatic entity recognition. In all these cases, medical terms or
phrases used in a given text are assigned codes from a controlled vocabulary
representing the biomedical concept behind these expressions. An obvious and
well-known problem is that biomedical texts often contain ambiguous abbreviations and
words, i.e. single expressions that denote different concepts (homonymy) and therefore
need to be assigned different codes depending on the context. A common example is the
term cold, which may be an adjective describing temperature, a noun referring to the
common cold, or the acronym for chronic obstructive lung disease (COLD).
Word sense disambiguation (WSD) aims at inferring the correct meaning of a given
term depending on the surrounding text [1]. WSD (re-)establishes a functional mapping
between terms (plus surrounding text) and concepts.
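The WSD task can be illustrated as a toy classification problem. The concept identifiers and cue words below are invented placeholders, and the simple overlap heuristic merely stands in for the learned model described later; the point is only the mapping from (term, context) to one of several candidate concepts:

```python
# Invented candidate concepts for the ambiguous term "cold", each with a
# hand-picked set of context cue words.
SENSES = {
    "cold": {
        "CUI:TEMPERATURE": {"weather", "water", "ice"},
        "CUI:COMMON_COLD": {"virus", "sneezing", "symptoms"},
        "CUI:COLD_DISEASE": {"pulmonary", "obstructive", "smoking"},
    }
}

def disambiguate(term, context):
    """Pick the candidate concept whose cue-word set overlaps the context most."""
    ctx = set(context.lower().split())
    candidates = SENSES[term]
    return max(candidates, key=lambda cui: len(candidates[cui] & ctx))
```

A trained disambiguator replaces the fixed cue-word sets with parameters learned from annotated contexts, but the functional mapping it establishes has exactly this shape.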
1 Corresponding Author: Sven Festag, Department of Medical Informatics, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany, E-Mail: sven.festag@rwth-aachen.de
S. Festag and C. Spreckelsen / WSD of Medical Terms via Recurrent Convolutional Neural Networks 9
This study aimed at evaluating the use of word embedding, hereinafter also referred to
as vector representation of words, and recurrent convolutional neural networks for WSD
in the biomedical domain. More precisely, we investigated the accuracy achieved by that
approach on the task of assigning the correct UMLS Metathesaurus concept (CUI) to an
ambiguous word given its context, i.e. the text surrounding the word. The evaluation was
based on a subset (strictly separated from the training data subset) of the MSH WSD data
mentioned above.
2. Methods
The proposed method adopts the vector representation of words introduced by Le and
Mikolov [12] and the recurrent convolutional neural networks (RCNN) for text
classification presented by Lai et al. [13]. Both methods factor the context of words into the
computation and, thus, keep the most important information needed for disambiguation
[14]. A separate text processing pipeline is established for each word to be disambiguated.
All these pipelines share the same preprocessing unit followed by an RCNN specialized
for a single word (compare Figure 1). First, the preprocessing unit transforms a given
input text containing the corresponding word into vector representations. Subsequently,
these vectors are processed by the RCNN to estimate the correct sense in this context.
Neural networks rely on numeric input vectors. Hence, a text must be preprocessed and
transformed into such vectors. The most common methods (bag-of-words or
bag-of-n-grams) lead to high-dimensional and sparse representations while comprising
only little semantic or contextual information (see the argumentation in [12]). In
contrast, Le and Mikolov introduced word representations of low dimensionality that
preserve some semantic information [12] by mapping semantically similar words to
vector representations which are close to each other in the vector space. The version
of this approach used for our study is described in the following:
Let M denote the number of paragraphs in the training corpus, while n_j labels the
number of words in the j-th paragraph. The i-th word in the j-th paragraph is denoted
by w_{i,j}, while d_j labels the whole j-th paragraph, and k represents (half) the size
of a context frame of preceding and following words. The training task is to maximize
(Equation 1)

\[ \sum_{j=1}^{M} \sum_{i=k}^{n_j - k} \log P(w_{i,j} \mid w_{i-k,j}, \ldots, w_{i+k,j}, d_j) \tag{1} \]

The conditional probabilities P are estimated by softmax regression. At first, for every
input w_{i-k,j}, …, w_{i+k,j}, d_j the following is computed (Equation 2):

\[ y = b + U \cdot h(w_{i-k,j}, \ldots, w_{i+k,j}, d_j; W, D) \tag{2} \]

For this purpose, the columns of W that correspond to w_{i-k,j}, …, w_{i+k,j} and the
column of D that corresponds to d_j are extracted and summed up. This is abbreviated by
the function h in Eq. (2). The sum is linearly transformed by the weight matrix U and
then translated by b. The resulting vector y has as many dimensions as there are
distinct words in the training paragraphs. Assume that y_{w_{i,j}} denotes the entry of
y corresponding to the word w_{i,j}. The required probability can then be estimated as
follows (Equation 3):

\[ P(w_{i,j} \mid w_{i-k,j}, \ldots, w_{i+k,j}, d_j) = \frac{\exp(y_{w_{i,j}})}{\sum_t \exp(y_t)} \tag{3} \]
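A toy, pure-Python sketch of the computation in Equations (2) and (3) may help: the function h sums the relevant columns of W and D, an affine map (U, b) and a softmax follow. The vocabulary, dimensions and random parameter values are stand-ins chosen only to make the sketch runnable:

```python
import math
import random

random.seed(0)
VOCAB = ["drug", "dose", "patient", "nurse"]  # toy vocabulary
DIM = 4                                       # toy embedding dimensionality

# Random stand-in parameters: word columns W, paragraph columns D, softmax U, b.
W = {w: [random.uniform(-1, 1) for _ in range(DIM)] for w in VOCAB}
D = {"par0": [random.uniform(-1, 1) for _ in range(DIM)]}
U = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in VOCAB]
b = [0.0] * len(VOCAB)

def h(context_words, par_id):
    """Sum the embedding columns of the context words and the paragraph."""
    vecs = [W[w] for w in context_words] + [D[par_id]]
    return [sum(v[i] for v in vecs) for i in range(DIM)]

def softmax_probs(context_words, par_id):
    """Eq. (2): y = b + U·h(...); Eq. (3): softmax over the vocabulary."""
    s = h(context_words, par_id)
    y = [b[r] + sum(U[r][i] * s[i] for i in range(DIM)) for r in range(len(VOCAB))]
    m = max(y)                              # subtract max for numeric stability
    exps = [math.exp(v - m) for v in y]
    z = sum(exps)
    return {w: e / z for w, e in zip(VOCAB, exps)}

probs = softmax_probs(["drug", "patient"], "par0")
```

During training, W, D, U and b would be adjusted to maximize the log-probability of the observed target words, which is exactly the objective in Equation (1).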
Disambiguation is performed by RCNNs specific to each pipeline. The RCNN proposed
by Lai et al. was developed for text classification in general [13]. In our case, we use
a slightly modified model for the disambiguation of words. In contrast to the original
model, word embedding is done as described above. An RCNN consists of a convolutional
layer with a recurrent structure, a single layer with perceptrons and a max-pooling
layer followed by a final softmax layer. The input of such a network is composed of an
arbitrary number of vector representations, each having fixed dimensionality. Thus, the
input texts passed to the pipeline can have an arbitrary number of words. Assume that
the overall input is a document D = w_1, …, w_n containing the word the RCNN is
specialized for. During preprocessing, D is transposed into a list of vectors
e(w_1), …, e(w_n), which is then transferred to the RCNN. In the initial convolutional
layer the following two multidimensional functions are evaluated for every w_i ∈ D,
where σ denotes the elementwise sigmoid function and W^{(l)}, W^{(sl)}, W^{(r)},
W^{(sr)} are matrices (Equations 4 and 5):

\[ c_l(w_i) = \sigma(W^{(l)} \cdot c_l(w_{i-1}) + W^{(sl)} \cdot e(w_{i-1})) \tag{4} \]

\[ c_r(w_i) = \sigma(W^{(r)} \cdot c_r(w_{i+1}) + W^{(sr)} \cdot e(w_{i+1})) \tag{5} \]

The function c_l returns a vector representing the context defined by all words left of
the current one. Analogously, c_r computes the right context. For the first input word
we define c_l(w_1) as a variable vector learned in addition to the other parameters.
The same holds true for c_r(w_n) corresponding to the last word. The whole text is
considered during the context computation irrespective of its length, which is a major
advantage of a recurrent convolutional layer over a conventional convolutional layer
(e.g. the approach used in [11]).
The next layer contains as many simple perceptrons as there are vector
representations. Its i-th perceptron computes (Equation 6)

\[ y_i^{(2)} = h(W^{(2)} \cdot x_i + b^{(2)}), \quad x_i = [c_l(w_i); e(w_i); c_r(w_i)] \tag{6} \]

where W^{(2)} and b^{(2)} are further parameters, x_i denotes the concatenation of
c_l(w_i), e(w_i) and c_r(w_i), and h is evaluated elementwise. The subsequent
max-pooling layer compresses its input y_1^{(2)}, …, y_n^{(2)}, consisting of
arbitrarily many vectors, to a vector y^{(3)} of fixed length by applying the max
function to each vector component. The input to the softmax layer is then given by
y^{(3)}. This is transformed into the overall output y^{(4)} by softmax regression,
which estimates the probability distribution of the ambiguous meanings conditioned on
the input document w_1, …, w_n, or rather, on its vector representations
e(w_1), …, e(w_n).
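The forward pass through the recurrent convolutional layer, perceptron layer and max pooling can be sketched in a few lines of Python. The matrices are random stand-ins and the dimensions are toy-sized, so this only illustrates the data flow of Equations (4) to (6), not the trained model (the bias term and the final softmax layer are omitted for brevity):

```python
import math
import random

random.seed(1)
E, C = 3, 2  # toy embedding and context dimensionality

def mat(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

def mv(M, v):
    """Matrix-vector product."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def sigmoid(v):
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

Wl, Wr = mat(C, C), mat(C, C)    # recurrent weights for left/right context
Wsl, Wsr = mat(C, E), mat(C, E)  # weights applied to neighbour embeddings
W2 = mat(4, C + E + C)           # perceptron layer weights

def rcnn_features(embeddings):
    n = len(embeddings)
    cl = [[0.0] * C for _ in range(n)]  # c_l(w_1) kept at a fixed start vector
    cr = [[0.0] * C for _ in range(n)]  # c_r(w_n) likewise
    for i in range(1, n):               # Eq. (4): left contexts, left to right
        s = [a + b for a, b in zip(mv(Wl, cl[i - 1]), mv(Wsl, embeddings[i - 1]))]
        cl[i] = sigmoid(s)
    for i in range(n - 2, -1, -1):      # Eq. (5): right contexts, right to left
        s = [a + b for a, b in zip(mv(Wr, cr[i + 1]), mv(Wsr, embeddings[i + 1]))]
        cr[i] = sigmoid(s)
    ys = [[math.tanh(x) for x in mv(W2, cl[i] + embeddings[i] + cr[i])]
          for i in range(n)]            # Eq. (6), bias omitted for brevity
    return [max(col) for col in zip(*ys)]  # element-wise max pooling -> y^(3)

feats = rcnn_features([[0.1, 0.2, 0.3], [0.0, -0.1, 0.4], [0.2, 0.0, 0.1]])
```

Note how the pooled feature vector has fixed length regardless of the number of input words, which is what allows the final softmax layer to operate on texts of arbitrary length.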
Experiment
All pipelines used for our studies are based on the same architecture. Documents
containing an ambiguous word serve as inputs. The output of a pipeline gives an
estimation of the word’s correct meaning. Figure 2 depicts the line of action pursued for
the experiment. The preprocessing unit is the same for all pipelines and was implemented
as described in section 2.1. The training corpus for the word embedding was iterated over
five times with a window size of 10 words, resulting in 300-dimensional word
vectors.
While preserving their basic structure (see 2.2), the trained RCNNs differ from
pipeline to pipeline. The number of columns of the softmax weight matrix is constant for
all of them, whereas the number of rows equals the number of meanings of the corresponding
word. In our case, the number of rows is fixed by the number of UMLS concepts linked to a
given ambiguous word. The main difference, however, lies in the values of the learned
parameters.
20 of these documents were used to train and evaluate 20 of the previously described
RCNNs. They were selected by taking the first 20 documents returned by the function
os.listdir() traversing the full MSH WSD directory. According to the documentation this
function returns all elements of a given directory in an arbitrary order. Thus, this subset
can be considered pseudo-random. For all 20 cases the list of corresponding MEDLINE
citations was divided into a training set containing about 75% of the data and a disjoint
test set consisting of the rest. In both subsets the numbers of all meanings were balanced
as they are in the original set [16]. Each RCNN was trained for 1000 iterations using a
learning rate of 0.01. Parameters were initialized randomly based on a truncated normal
distribution (mean: 0, stddev: 0.1, interval: [−0.2, 0.2]).
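The stated initialization can be sketched as follows; with stddev 0.1 the interval [−0.2, 0.2] corresponds to two standard deviations around the mean. The rejection-sampling implementation is our own illustration, not the authors' code:

```python
import numpy as np

def truncated_normal(shape, mean=0.0, stddev=0.1, low=-0.2, high=0.2, rng=None):
    """Sample parameters from a normal distribution truncated to [low, high]
    by rejection sampling (illustrative sketch of the described initializer)."""
    rng = np.random.default_rng() if rng is None else rng
    out = np.empty(shape)
    flat = out.reshape(-1)             # view into out; writes propagate
    filled = 0
    while filled < flat.size:
        draw = rng.normal(mean, stddev, flat.size - filled)
        keep = draw[(draw >= low) & (draw <= high)]   # reject out-of-range draws
        flat[filled:filled + keep.size] = keep
        filled += keep.size
    return out
```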
3. Results
This section presents key figures of the RCNN training process for the individual
pipelines, each identified by the disambiguated word. Accuracy with respect to the test
set was measured after every 10th training iteration. An overview of these
measurements is given in Figure 3.
Final accuracies after 1000 iterations are summarized in Table 1. 8 out of 20
pipelines (i.e. 40%) achieved an accuracy of 98% and above. A total of 14 out of 20
pipelines (i.e. 70%) achieved an accuracy of at least 90%. The average final accuracy of
all pipelines is about 91%.
Most of the pipelines showed good or excellent performance. A clear outlier is the
“Phosphorylase” network. Its poor performance may be explained by considering the two
possible meanings of the word. On the one hand, it refers to a class of enzymes and on
the other hand, it describes a special enzyme of this class. Thus, one concept is enclosed
by the other one (hyponymy), which makes disambiguation extremely hard.
A comparison to the Naïve Bayes approach described by Plaza et al. [6] is
problematic. The Bayesian method was tested against the full MSH WSD set, but the
outline of the experiment raises the suspicion that the training set overlapped the test set,
which, if correct, would mean that the measured accuracy cannot serve as an estimate of
the real accuracy of the proposed method. In order to avoid these complications, we
divided the MSH WSD data into disjoint training and test sets.
2 https://radimrehurek.com/gensim/models/doc2vec.html
3 https://www.tensorflow.org/
References
[1] A. Kilgarriff, "I Don't Believe in Word Senses", Computers and the Humanities, 31(2) (1997), 91–113.
[2] T. C. Rindflesch and A. R. Aronson, Ambiguity resolution while mapping free text to the UMLS
Metathesaurus, Proceedings of the Annual Symposium on Computer Application in Medical Care, 1994,
240–244.
[3] A. J. Jimeno-Yepes and A. R. Aronson, Improving an automatically extracted corpus for UMLS
Metathesaurus word sense disambiguation, Procesamiento del lenguaje natural, 45 (2010), 239–242.
[4] A. J. Yepes and A. R. Aronson, Query Expansion for UMLS Metathesaurus Disambiguation Based on
Automatic Corpus Extraction, The Ninth International Conference on Machine Learning and
Applications, 2010, 965–968.
[5] M. Stevenson, Y. Guo, and R. Gaizauskas, Acquiring sense tagged examples using relevance feedback,
Proceedings of the 22nd International Conference on Computational Linguistics, 1 (2008), 809-816.
[6] L. Plaza, A. J. Jimeno-Yepes, A. Diaz, and A. R. Aronson, Studying the correlation between different
word sense disambiguation methods and summarization effectiveness in biomedical texts, BMC
bioinformatics, 12 (2011), 355.
[7] A. Jimeno Yepes and A. R. Aronson, Knowledge-based and knowledge-lean methods combined in
unsupervised word sense disambiguation, Proceedings of the 2nd ACM SIGHIT International Health
Informatics Symposium, 2012, 733.
[8] A. J. Jimeno-Yepes, B. T. McInnes, and A. R. Aronson, Exploiting MeSH indexing in MEDLINE to
generate a data set for word sense disambiguation, BMC bioinformatics, 12 (2011), 223.
[9] D. Bahdanau, K. Cho, and Y. Bengio, Neural Machine Translation by Jointly Learning to Align and
Translate, Preprint: http://arxiv.org/pdf/1409.0473.
[10] D. Amodei et al., Deep Speech 2: End-to-End Speech Recognition in English and Mandarin,
Proceedings of The 33rd International Conference on Machine Learning, 2016, 173–182.
[11] Y. Kim, Convolutional Neural Networks for Sentence Classification, Preprint:
http://arxiv.org/pdf/1408.5882.
[12] Quoc V. Le and Tomas Mikolov, Distributed Representations of Sentences and Documents, Preprint:
https://arxiv.org/pdf/1405.4053v2.pdf.
[13] S. Lai, L. Xu, K. Liu, and J. Zhao, Recurrent Convolutional Neural Networks for Text Classification,
Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015, 2267–2273.
[14] M. Stevenson and Y. Wilks, Word-Sense Disambiguation, in: The Oxford Handbook of Computational
Linguistics, R. Mitkov (Ed.), Oxford University Press, Oxford, 2005, 249–265.
[15] National Library of Medicine (US), UMLS® Reference Manual.
https://www.ncbi.nlm.nih.gov/books/NBK9676/, 2009.
[16] A. J. Jimeno-Yepes and A. R. Aronson, Knowledge-based biomedical word sense disambiguation:
comparison of approaches, BMC Bioinformatics, 11(1) (2010), 1–12.
[17] N. Ide and J. Véronis, Introduction to the Special Issue on Word Sense Disambiguation: The State of
the Art, Computational Linguistics, 24(1) (1998), 2–40.
16 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-16
1. Introduction
More than half of all healthcare-associated infections (HAIs) – i.e., infections occurring
in a patient during treatment in a hospital or other healthcare facility [1] – are of bacterial
or fungal origin [2]. Based on pathogen identification and antibiotic susceptibility testing,
the microbiological laboratory helps to verify a suspected infection and provides crucial
information for selecting the appropriate infection treatment. This is especially important
in intensive care units (ICUs), where the patient’s health is greatly compromised and the
prevalence of multi-resistant organisms resistant to standard antibiotic therapies is higher
[3, 4].
International infection surveillance programs, which include European surveillance
definitions of HAIs in ICUs by the European Centre for Disease Prevention and Control
1
Corresponding Author: Jeroen S. de Bruin, Medexter Healthcare GmbH, Borschkegasse 7/5, 1090
Vienna, Austria, E-Mail: jb@medexter.com
J.S. de Bruin et al. / Arden Syntax MLM Building Blocks for Microbiological Concepts 17
(ECDC) in Stockholm [5] require microbiology results to verify most types of HAIs.
This information includes the type of sample material taken from the patient, the type of
tests performed on the sample, the type of microorganism detected, the pathogen's
abundance in the tested sample, and the time of detection.
The digitization of microbiological laboratory test results facilitates their rapid
communication and permits various modes of automatic processing, such as the
computerized detection of a bacterial outbreak or infection [6‒9]. An essential
prerequisite for computerized use of microbiological laboratory test results is their
storage and communication in a structured and standardized format.
Our goal was to create a standardized library for the analysis of microbiological
laboratory test results. The library is intended to provide interoperable building blocks
for the analysis of microbiology results in knowledge-based infection detection and
surveillance systems. Building blocks correspond with microbiological information
elements that are frequently required by HAI surveillance programs, such as the
aforementioned ECDC infection surveillance criteria. Preprocessing and encoding of raw
clinical data were implemented in Java. Subsequent data querying for the aforementioned
information elements was implemented in Arden Syntax, an international standard for
computerized knowledge representation and processing that supports the collection,
description, and processing of medical knowledge in a machine-executable format [10].
To test the library and assess how frequently the implemented queries were performed, we
focused on microbiology data from ICUs of the Vienna General Hospital (VGH).
2. Methods
Microbiological laboratory test results were acquired from the MOLIS laboratory
information system at the Department of Clinical Microbiology at VGH. Data are
structured in XML format and include unique patient and sample identifiers, sample
sender (department), demographic patient data (age, sex), the number of samples tested,
sample source, performed tests, and the pathogens that tested positive.
Using unique patient identifiers, the MOLIS database was cross-referenced with the
KisDB hospital information system database of VGH. SQL queries were used to obtain
information on a patient’s hospital stay, including the unit(s) the patient stayed in during
the present hospital stay, when the patient was admitted to the unit, and for how long.
Data acquired from both sources were combined and stored in the MiBiDB project
database, which is an Oracle relational database.
The library was implemented using Java and Arden Syntax. Data preprocessing and
preparation were implemented in Java, using a custom-made thesaurus of bacteria with
which microbiology results could be classified according to bacteria type (such as a
common skin contaminant, a uropathogen, etc.) and according to sample site
(blood culture, catheter culture). The structure of the thesaurus was based on the NCBI
taxonomy browser [11], and was initially filled with and coded according to the
microorganisms in the code list provided in [5]. The thesaurus was then extended based on
the analysis of about 450,000 MOLIS XML files, from which we extracted all
pathogens found. The extracted data included at least the pathogen family, genus, species,
and name as stated in the XML file. The pathogens were then classified manually by
microbiology experts.
When processing new microbiology results, matching algorithms analyze both the
coded input in the XML files and the pathogen name – which is provided by the
microbiology laboratory technician as a free text input. Through this process, reported
pathogens are matched to any of the bacteria present in the thesaurus and subsequently
assigned the class(es) of the matching bacterial strain.
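The matching step described above can be sketched as follows; the thesaurus content, function names, and the code index are illustrative assumptions, not the production matching algorithms:

```python
# Hypothetical, simplified thesaurus: canonical pathogen name -> set of classes.
THESAURUS = {
    "staphylococcus epidermidis": {"common skin contaminant"},
    "escherichia coli": {"uropathogen"},
}

def classify(pathogen_name, code=None, code_index=None):
    """Match a reported pathogen to thesaurus classes: try the coded input
    first, then fall back to the technician's free-text name (case-insensitive)."""
    code_index = code_index or {}          # optional map: lab code -> canonical name
    key = code_index.get(code) or pathogen_name.strip().lower()
    return THESAURUS.get(key, set())
```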
The building blocks containing rules for information elements were encoded in
Arden Syntax. With Arden Syntax, medical rules are coded in a syntax that resembles
natural language, which makes the code more easily comprehensible and verifiable by
healthcare professionals [12]. The medical rule sets are known as medical logic modules
(MLMs), and usually contain sufficient logic to make at least a single medical decision
[13].
For coding and processing Arden Syntax MLMs, we used the ArdenSuite framework
[14], which includes an integrated development environment (IDE) as well as an
ArdenSuite server. The IDE was employed for coding, compiling, and testing MLMs
before transferring them to the ArdenSuite server, where compiled MLMs are stored and
executed. For server access, the ArdenSuite server uses web-service protocols: either the
Simple Object Access Protocol (SOAP) or Representational State Transfer (REST). The
server/database connector employs Java Database Connectivity (JDBC); this additional
module may be used to connect the ArdenSuite server with any SQL-based external
database sources. For the present project we used the server/database connector to
connect with the MiBiDB project database. Figure 1 provides a graphic diagram of the
project architecture.
We used patient demographics (age, sex, length of stay) to describe the patient population.
For the identification of microbiological information needed for infection allocation, we
analyzed ECDC infection surveillance criteria for ICUs, and presented those information
elements (see Table 1). Elements are numbered and grouped according to their related
HAI syndrome(s). Based on this analysis and the analysis of available microbiology data,
information elements were implemented in the library. They are referenced by their
related HAI syndrome and – if more than one description exists – by their respective
Figure 1. The ArdenSuite framework for medical knowledge representation and rule-based inference. Image
adapted from [14]. Note: MLM, medical logic module.
sub-category number (such as PN3.3). These are presented in Table 2, along with the
frequency (both absolute and relative to the total number of positive results found) of
these library elements in the study data.
3. Results
Table 1 shows the results of the analysis for ECDC infection surveillance criteria in ICUs.
In all, 28 information elements related to microbiology results were found.
Table 1. 28 microbiological information elements for the ECDC ICU infection surveillance criteria.
Table 2. Results for the Arden Syntax library for the eleven identified information elements.
Bloodstream infections (BSI)
Element    Freq. (abs)    Freq. (rel)
UO         82             6.62%
Sec BSI    46             3.71%
BSI-A      5              0.40%

Pneumonia (PN)
Element    Freq. (abs)    Freq. (rel)
PN1.1      432            34.87%
PN3.2      23             1.86%
PN4.1      18             1.45%
PN4.2      1              0.08%
second step, preprocessing algorithms identify and classify bacteria according to the
thesaurus. Finally, based on these classifications and additional information such as the
pathogen’s abundance in the tested sample and the time of detection, the MLMs test for
occurrence of their respective information element.
4. Discussion
In the present report, we discuss the results of a knowledge-based program library for
the analysis of microbiology test results for infection control. Using Java for
preprocessing raw microbiology test results and Arden Syntax for knowledge-based
querying of preprocessed data, we constructed a collection of 28 building blocks that
process microbiological information required in the ECDC infection surveillance criteria
for ICUs. We then used data from ICUs of the VGH to test the library and quantify how
often these information elements appear in the data.
References
[1] World Health Organization, The Burden of Health Care-Associated Infection Worldwide, 2017.
Available at: http://www.who.int/gpsc/country_work/burden_hcai/en/, last access: 27.1.2017.
[2] European Centre for Disease Prevention and Control (ECDC), Point Prevalence Survey of Healthcare-
Associated Infections and Antimicrobial Use in European Acute Care Hospitals 2011–2012, 2013.
Available at: http://ecdc.europa.eu/en/publications/Publications/healthcare-associated-infections-
antimicrobial-use-PPS.pdf, last access: 16.11.2016.
[3] N. Brusselaers, D. Vogelaers, S. Blot, The rising problem of antimicrobial resistance in the intensive
care unit, Annals of Intensive Care 1 (2011), 47; doi:10.1186/2110-5820-1-47.
[4] G. Zilahi, A. Artigas, I. Martin-Loeches, What’s new in multidrug-resistant pathogens in the ICU?,
Annals of Intensive Care 6 (2016), 96; doi: 10.1186/s13613-016-0199-4.
[5] European Centre for Disease Prevention and Control (ECDC), European Surveillance of Healthcare-
Associated Infections in Intensive Care Units, HAI-Net ICU Protocol, Protocol Version v1.02, 2015.
Available at: http://ecdc.europa.eu/en/publications/Publications/healthcare-associated-infections-HAI-
ICU-protocol.pdf, last access: 16.1.2017.
[6] R.B. Dessau, P. Steenberg, Computerized surveillance in clinical microbiology with time series analysis,
Journal of Clinical Microbiology 31(4) (1993), 857–860.
[7] D.M. Hacek, R.L. Cordell, G.A. Noskin, L.R. Peterson, Computer-assisted surveillance for detecting
clonal outbreaks of nosocomial infection, Journal of Clinical Microbiology 42(3) (2004), 1170–1175.
[8] J.S. de Bruin, W. Seeling, C. Schuh, Data use and effectiveness in electronic surveillance of healthcare
associated infections in the 21st century: a systematic review, Journal of the American Medical
Informatics Association 21(5) (2014), 942–951.
[9] R. Freeman, L.S.P. Moore, L. García Álvarez, A. Charlett, A. Holmes, Advances in electronic
surveillance for healthcare-associated infections in the 21st Century: a systematic review, The Journal
of Hospital Infection 84(2) (2013), 106–119.
[10] Health Level Seven, Arden Syntax v2.10 (Health Level Seven Arden Syntax for Medical Logic Systems,
Version 2.10), 2015. Available at: http://www.hl7.org/implement/standards/product_brief.cfm?
product_id=372, last access: 30.1.2017.
[11] E.W. Sayers, T. Barrett, D.A. Benson, S.H. Bryant, K. Canese, V. Chetvernin, D.M. Church, M.
DiCuccio, R. Edgar, S. Federhen, M. Feolo, L.Y. Geer, W. Helmberg, Y. Kapustin, D. Landsman, D.J.
Lipman, T.L. Madden, D.R. Maglott, V. Miller, I. Mizrachi, J. Ostell, K.D. Pruitt, G.D. Schuler, E.
Sequeira, S.T. Sherry, M. Shumway, K. Sirotkin, A. Souvorov, G. Starchenko, T.A. Tatusova, L.
Wagner, E. Yaschenko, J. Ye, Database resources of the National Center for Biotechnology Information,
Nucleic Acids Research 37 (2009), D5–D15.
[12] M. Samwald, K. Fehre, J. de Bruin, K.-P. Adlassnig, The Arden Syntax standard for clinical decision
support: Experiences and directions, Journal of Biomedical Informatics 45(4) (2012), 711–718.
[13] G. Hripcsak, Writing Arden Syntax Medical Logic Modules, Computers in Biology and Medicine 24(5)
(1994), 331–363.
[14] K.-P. Adlassnig, K. Fehre, Service-oriented Fuzzy-Arden-Syntax-based clinical decision support, Indian
Journal of Medical Informatics 8(2) (2014), 75–79.
[15] D.H. Culver, T.C. Horan, R.P. Gaynes, W.J. Martone, W.R. Jarvis, T.G. Emori, S.N. Banerjee, J.R.
Edwards, J.S. Tolson, T.S. Henderson, J.M. Hughes, and the National Nosocomial Infections
Surveillance System, Atlanta, Georgia, Surgical wound infection rates by wound class, operative
procedure, and patient risk index, The American Journal of Medicine 91(3B) (1991), 152S–157S.
[16] R.W. Haley, D.H. Culver, J.W. White, W.M. Morgan, T.G. Emori, V.P. Munn, T.M. Hooton, The
efficacy of infection surveillance and control programs in preventing nosocomial infections in US
hospitals, American Journal of Epidemiology 121(2) (1985), 182–205.
[17] M. Klompas, D.S. Yokoe, Automated surveillance of health care-associated infections, Clinical
Infectious Diseases: An Official Publication of the Infectious Diseases Society of America 48(9) (2009),
1268–1275.
[18] BMG Österreich, ELGA GmbH, FH Technikum Wien, BRZ Österreich, Fachhochschule Dortmund,
Terminologie-Browser. Available at:
https://termpub.gesundheit.gv.at/TermBrowser/gui/main/main.zul, last access: 5.3.2017.
24 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-24
Abstract. Routine patient data in electronic patient records are only partly structured,
and an even smaller segment is coded, mainly for administrative purposes. Large
parts are only available as free text. Transforming this content into a structured and
semantically explicit form is a prerequisite for querying and information extraction.
The core of the system architecture presented in this paper is based on SAP HANA
in-memory database technology using the SAP Connected Health platform for data
integration as well as for clinical data warehousing. A natural language processing
pipeline analyses unstructured content and maps it to a standardized vocabulary
within a well-defined information model. The resulting semantically standardized
patient profiles are used for a broad range of clinical and research application
scenarios.
1. Introduction
The totality of electronic health records (EHRs) in health care institutions of all levels
constitutes a highly interesting “data treasure” for primary and secondary usage scenarios
[1]. Innovative re-use of this wealth of information about millions of patients, available
as structured and unstructured data from heterogeneous sources, requires a combined
effort capitalising on data semantics approaches, biomedical terminologies, natural
language processing, big data management and predictive content analytics.
Natural language processing (NLP) is a fundamental technology, which enables
access to relevant information within patient narratives, and it can be used to get
semantically enriched patient profiles [2]. This is especially important for secondary use
case scenarios and retrospective cohort building within a clinical environment. Very
often, attributes required for addressing a specific information need, or attributes that
characterise a patient cohort are present in EHRs as parts of daily routine documentation
within a certain document type produced in a specific unit by a certain type of health
professional. Using this raw data without further processing, even simple queries may
require time-consuming efforts, involving combined expertise in information extraction
and information management in the medical domain. Therefore, further processing is
1
Corresponding Author: Markus Kreuzthaler, CBmed GmbH - Center for Biomarker Research in
Medicine, Stiftingtalstrasse 5, 8010 Graz, Austria; Institute for Medical Informatics, Statistics and
Documentation, Medical University of Graz, Auenbruggerplatz 2/V, 8036 Graz, Austria, E-Mail:
markus.kreuzthaler@medunigraz.at
M. Kreuzthaler et al. / Semantic Technologies for Re-Use of Clinical Routine Data 25
necessary in order to ensure that the information needs are met, required attributes and
their sources are identified, a workflow is devised and, finally, the information need is
correctly formulated with the query syntax supported by the information retrieval tool in
use. This paper will demonstrate how such a retrieval scenario can benefit from
appropriate natural language processing methods together with a well-defined
information model.
2. Background
The translation is “An approximately 2 by 1 cm sized, oval-shaped opacity (in the chest
X-ray) over (- laying) the left lower lobe; pleural recess (on the right side of the lobe)
free of fluids, on the left side partly adhesive (after non-recent inflammation)”. Such
highly condensed text is understood by clinicians (at least by those of the same area), but
it poses challenges to computer-based morphological, syntactic and semantic processing.
Numerous language idiosyncrasies that are typical for clinical texts have to be considered
when tailoring an NLP pipeline and supporting lexical resources to clinical narratives:
acronyms, abbreviations, ambiguous terms, synonyms, derivations, single-word
compounds, uncorrected spelling, spelling variants, typing and punctuation errors, jargon
expressions, telegram style, non-standardized numeric expressions, and non-standard
variations of negations [1,4].
Clinical NLP is a relatively small sub-area, compared to e.g. bioNLP, which targets
scientific texts. A reason for this is the difficulty of accessing clinical data due to privacy
concerns. This results in a lack of shared data and gold standards. Nevertheless, scientific
challenges with special tracks have been established to foster clinical NLP research (i2b2
NLP research data sets, ShARe/CLEF eHealth, SemEval, TREC Medical Tracks,
MedNLPDoc). Research institutions that have considerable merits in clinical NLP are
the Mayo Clinic [5] (cTAKES), the Veterans Affairs network of hospitals [1,6] (the Leo
framework, a VINCI-developed NLP infrastructure based on UIMA [17], the Apache
Unstructured Information Management Architecture) and the Columbia-Presbyterian
Medical Center [7,8] (MedLEE). MetaMap [9] uses concepts of the UMLS
Metathesaurus for semantic annotations and is used as a core engine for SemRep [10], a
system that generates subject-predicate-object triples out of MetaMap annotations.
Usually applied in the biomedical domain, its applicability to clinical narratives has also
been investigated [11]. HITEx [12] is based on the NLP engine GATE [13]. The DKPro Core
collection [14] should also be mentioned in the context of NLP functionalities.
Most of these efforts are limited to English text; the International Workshop
on Health Text Mining and Information Analysis (Louhi) is worth mentioning
for its focus on European languages. Nevertheless, the situation for the German
language in this research area is not satisfying. The lack of openly available gold
standards makes comparison between competing approaches almost impossible. A notable
exception is the JULIE Lab in Germany, which made its clinical language models
and UIMA-based NLP framework openly accessible.
Pipeline components are mostly a combination of various methods exploiting rule-
based engines (e.g. Apache UIMA Ruta [18] or regular expressions), distributional
semantics, or unsupervised / supervised machine learning methods, with a special focus
on (i) not violating real time constraints and (ii) reaching a certain level of annotation
quality. Terminology and ontology management in the background, and its
mapping to narrative content, is a main building block of such a pipeline. Recently, deep
learning methods, especially bidirectional long short-term memory networks (Bi-LSTMs),
a type of recurrent neural network, have attracted attention for processing clinical
narratives, e.g. for the de-identification task [15,16].
In the use case presented here we extended the core NLP engine provided by the
company Averbis [25] with six rule-based extraction approaches based on regular
expressions. We built a custom medication terminology used by the parameter-tuned
concept matcher within the pipeline and also mapped to ICD-10 codes. The results of the
extraction engine are fed into the generic clinical data model of the SAP Connected
Health platform [19], which is described in the following section.
3. Methods
The data set for the first experiments on clinical information extraction, using the in-
memory SAP HANA technology as data sink, was a sample of in- and outpatient
discharge summaries from a dermatology department, filtered by ICD codes and
admission dates. Using an Extract Transform Load (ETL) process with Talend Open
Studio, 1,696 summaries were extracted. After manual de-identification this corpus
could then be used for various text mining experiments.
Table 1. Entity types to be extracted from the narratives together with examples of their representation.
Entity type Representation examples
Diagnosis (ICD-10 code) “malignant melanoma”, “C43.9”
Medication information “Norvasc 5 mg 1-0-1”, ”Atarax 25 mg 0-0-1”
Tumor staging (pTNM) “pT1a N3 M1c”, “pT3aN0M1c”, “pT-2b”
Breslow level “TD <0,5mm”, “TD unter 0,5 mm”, “Tumordicke 0.9 mm”
Clark level “Invasionslevel III”, “Level II”, “Clark-Level III”
Mitosis index “Mitosen < 1/mm²”
S100 biomarker “S100 0.058 (Normbereich)”
Ulceration “ulceriertes Ca in situ”, “exulzeriertes MM”
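A sketch of the regular-expression-based extraction can be given for a subset of the entity types above; the patterns are illustrative simplifications covering the representation examples in Table 1, not the six production rules:

```python
import re

# Illustrative patterns covering the Table 1 examples for pTNM,
# Breslow level (tumor thickness), and Clark level.
PTNM = re.compile(r"pT-?\d[a-c]?(?:\s*N\d[a-c]?)?(?:\s*M\d[a-c]?)?")
BRESLOW = re.compile(r"(?:TD|Tumordicke)\s*(?:<|unter)?\s*(\d+[.,]\d+)\s*mm", re.I)
CLARK = re.compile(r"(?:Clark-?Level|Invasionslevel|Level)\s*(IV|III|II|I|V)", re.I)

def extract(text):
    """Return the entity mentions found in one narrative snippet."""
    return {
        "pTNM": [m.group(0) for m in PTNM.finditer(text)],
        # normalize the German decimal comma to a dot
        "breslow_mm": [m.group(1).replace(",", ".") for m in BRESLOW.finditer(text)],
        "clark": [m.group(1) for m in CLARK.finditer(text)],
    }
```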
Figure 1. Logical clinical data model in the SAP Connected Health platform.
Based on the use case “malignant melanoma” formulated by the administrators of a large
biobank, we analysed the above data set in order to gather additional phenotype
information for defining retrospective patient cohorts. Table 1 shows the types of entities
to be extracted in a first stage of the project. Specific annotators were implemented for
the information extraction process and combined with the NLP core in use.
A core entity of the clinical data model within the SAP Connected Health platform is the
Patient and the corresponding Interactions with healthcare providers (see Figure 1). An
Interaction is an event, which may occur at a specific time or time interval. Examples
are diagnostic procedures, chemotherapy treatments, genomic analyses or hospital
check-ups. Each Interaction is uniquely identified by an InteractionID and classified by
an InteractionType. The latter can be coded according to standardized terminologies such
as SNOMED CT. Patients usually participate in several Interactions.
Figure 2. pTNM = T3N1M1c representation according to the SAP Clinical Data Model. For simplicity,
the version of the SNOMED coding system (Jan 2017 release) is not shown.
Figure 3. Customized filter cards for the secondary use of the structured and standardized clinical routine data
within SAP Medical Research Insights. The annotation results for the entities Clark Level and pTNM within
the corresponding clinical narrative of the NLP pipeline are depicted before data harmonization within the SAP
Connected Health platform.
4. Results
4.1. Integrating data within the SAP Connected Health platform - Modelling pTNM
Structured data extracted from the clinical narratives can be integrated into the Clinical
Data Warehouse (CDW) by using custom or standard plug-ins and adapters provided by
SAP or, alternatively, by using an ETL tool. At an early stage of the project, we imported
structured data stored as CSV files, the output format of the NLP pipeline applied to the
narratives. However, in a later stage, structured and unstructured data from a source
system will be directly imported into the SAP Connected Health platform by using an
ETL tool like Talend Open Studio. Such a tool will communicate directly with the NLP
engine via a RESTful service which will return a set of annotations using e.g. JSON, also
avoiding intermediate result files.
Once in the system, the data is mapped to the clinical data model of the SAP
Connected Health platform, described in Section 3.3. SAP Medical Research Insights
will then query this model to retrieve clinical data for the corresponding use cases.
Figure 2 shows how pTNM is represented according to the clinical data model. It
corresponds to an Interaction of the type “pTNM” described by three coded attributes
with SNOMED CT (i.e. InteractionDetails), each one describing the primary tumour (pT),
regional lymph nodes (pN) and distant metastasis (pM) according to the 7th edition
(2009) of the AJCC Cancer Staging Manual.
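The Interaction described above could be represented roughly as follows; the field names follow the text (InteractionID, InteractionType, InteractionDetails), while the SNOMED CT codes are placeholders rather than real concept identifiers:

```python
def ptnm_interaction(interaction_id, pt, pn, pm):
    """Sketch: one Interaction of type 'pTNM' with three coded attributes
    (pT, pN, pM). Layout is illustrative, not the exact SAP data model."""
    detail = lambda attr, value: {
        "attribute": attr,
        "value": value,
        "codeSystem": "SNOMED CT",
        "code": "<concept-id>",   # placeholder, not a real SNOMED CT code
    }
    return {
        "InteractionID": interaction_id,
        "InteractionType": "pTNM",
        "InteractionDetails": [detail("pT", pt), detail("pN", pn), detail("pM", pm)],
    }
```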
SAP Medical Research Insights allows data access from heterogeneous sources such as
clinical information systems, tumour registries, biobank systems, etc. It allows filtering
and grouping of patients according to different attributes based on the SAP Connected
Health clinical data model. Figure 3 depicts its main interface. On the left side, filter
cards can be used to formulate queries. Each filter card corresponds to an interaction type.
One of the filter cards is directly related to the pTNM classification, and its corresponding
values are now searchable within a defined, structured and standardized information
model.
In this case, we would like to retrieve all male patients with the diagnosis “C43.9”
(the ICD-10 code for “malignant melanoma of skin”) and metastasis. More advanced queries,
including disjunction and temporal information, are also supported.
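The filter-card query above can be illustrated with a small in-memory filter. The record layout is invented for the example; the real query runs against the SAP Connected Health clinical data model.

```python
# Toy patient records (layout is an assumption for illustration).
patients = [
    {"id": 1, "gender": "male",   "diagnoses": ["C43.9"], "pM": "pM1"},
    {"id": 2, "gender": "female", "diagnoses": ["C43.9"], "pM": "pM1"},
    {"id": 3, "gender": "male",   "diagnoses": ["C50.9"], "pM": "pM0"},
]

def query(records):
    """All male patients with diagnosis C43.9 and distant metastasis."""
    return [
        r["id"] for r in records
        if r["gender"] == "male"
        and "C43.9" in r["diagnoses"]
        and r["pM"] == "pM1"          # distant metastasis present
    ]

print(query(patients))  # [1]
```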
Acknowledgements
This work is part of the IICCAB project (Innovative Use of Information for Clinical Care
and Biomarker Research) within the K1 COMET Competence Center CBmed
(http://cbmed.at), funded by the Federal Ministry of Transport, Innovation and
Technology (BMVIT); the Federal Ministry of Science, Research and Economy
(BMWFW); Land Steiermark (Department 12, Business and Innovation); the Styrian
Business Promotion Agency (SFG); and the Vienna Business Agency. The COMET
program is executed by the FFG. We also thank KAGes (the Styrian hospital company) and
SAP SE for providing significant resources, manpower and data as a basis for research and
innovation, Averbis GmbH for providing the Information Discovery platform, Biobank
Graz for the use case descriptions and, finally, Werner Aberer, director of the Department
of Dermatology, Medical University of Graz, for the provision of sample data
(anonymised texts).
References
[1] Meystre, S. M., Savova, G. K., Kipper-Schuler, K. C., & Hurdle, J. F. (2008). Extracting information
from textual documents in the electronic health record: a review of recent research. Yearb Med Inform,
128-44.
[2] Friedman, C., & Elhadad, N. (2014). Natural language processing in health care and biomedicine. In
Biomedical Informatics (pp. 255-284). Springer London.
[3] Kreuzthaler, M., Daumke, P., & Schulz, S. (2015). Semantic retrieval and navigation in clinical
document collections. Stud Health Technol Inform, 212, 9-14.
[4] Patterson, O., Igo, S., & Hurdle, J. F. (2010, November). Automatic acquisition of sublanguage semantic
schema: towards the word sense disambiguation of clinical narratives. AMIA Annu Symp Proc. 2010,
612-616.
[5] Savova, G. K., Masanz, J. J., Ogren, P. V., Zheng, J., Sohn, S., Kipper-Schuler, K. C., & Chute, C. G.
(2010). Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture,
component evaluation and applications. J Am Med Inform Assoc. 2010, 507-513
[6] Patterson, O. V., Forbush, T. B., Saini, S. D., Moser, S. E., & Duvall, S. L. (2015). Classifying the
Indication for Colonoscopy Procedures. Stud Health Technol Inform 216, 614-618.
[7] Friedman, C., Johnson, S. B., Forman, B., & Starren, J. (1995). Architectural requirements for a
multipurpose natural language processor in the clinical environment. Proc Annu Symp Comput Appl
Med Care, 1995, 347-351.
[8] Friedman, C., Hripcsak, G., DuMouchel, W., Johnson, S. B., & Clayton, P. D. (1995). Natural language
processing in an operational clinical information system. Natural Language Engineering, 1(01), 83-108.
[9] Aronson, A. R. (2001). Effective mapping of biomedical text to the UMLS Metathesaurus: the MetaMap
program. Proc AMIA Symp. 2001, 17-21.
[10] Rindflesch, T. C., & Fiszman, M. (2003). The interaction of domain knowledge and linguistic structure
in natural language processing: interpreting hypernymic propositions in biomedical text. Journal of
Biomedical Informatics, 36(6), 462-477.
[11] Liu, Y., Bill, R., Fiszman, M., Rindflesch, T. C., Pedersen, T., Melton, G. B., & Pakhomov, S. V. (2012).
Using SemRep to label semantic relations extracted from clinical text. In AMIA Annu Symp Proc. 2012;
587-95
[12] Zeng, Q. T., Goryachev, S., Weiss, S., Sordo, M., Murphy, S. N., & Lazarus, R. (2006). Extracting
principal diagnosis, co-morbidity and smoking status for asthma research: evaluation of a natural
language processing system. BMC Medical Informatics and Decision Making, 6(1), 30.
[13] Cunningham, H., Wilks, Y., & Gaizauskas, R. J. (1996, August). GATE - A General Architecture for
Text Engineering. In Proceedings of the 16th conference on Computational linguistics-Volume 2 (pp.
1057-1060). Association for Computational Linguistics.
[14] Bär, D., Zesch, T., & Gurevych, I. (2013, August). DKPro Similarity: An Open Source Framework for
Text Similarity. In ACL (Conference System Demonstrations) (pp. 121-126).
[15] Shweta, A. E., Saha, S., & Bhattacharyya, P. (2016). Deep Learning Architecture for Patient Data De-
identification in Clinical Records. ClinicalNLP 2016, 32.
[16] Lee, J. Y., Dernoncourt, F., Uzuner, O., & Szolovits, P. (2016). Feature-augmented neural networks for
patient note de-identification. arXiv preprint arXiv:1610.09704.
[17] Savova, G., Kipper-Schuler, K., Buntrock, J., & Chute, C. (2008). UIMA-based clinical information
extraction system. Towards enhanced interoperability for large HLT systems: UIMA for NLP, 39.
[18] Kluegl, P., Toepfer, M., Beck, P. D., Fette, G., & Puppe, F. (2016). UIMA Ruta: Rapid development of
rule-based information extraction applications. Natural Language Engineering, 22(01), 1-40.
[19] SAP Connected Health platform, https://help.sap.com/platform_health, last access: 15.3.2017.
[20] HL7 Implementation Guide for CDA® Release 2: IHE Health Story Consolidation, Release 1.1 - US
Realm, http://www.hl7.org/implement/standards/product_brief.cfm?product_id=258 last access:
13.3.2017.
[21] HL7 FHIR. https://www.hl7.org/fhir/ last access: 13.3.2017.
[22] ISO/TC 215, Health informatics (2008). Electronic health record communication Part 2: Archetype
interchange specification, (ISO 13606-2:2008)
[23] T. Beale & S. Heard: Archetype Definitions and Principles version 1.0.2.
http://www.openehr.org/releases/1.0.2/architecture/am/archetype_principles.pdf last access: 13.3.2017.
[24] Kreuzthaler, M., Schulz, S., & Berghold, A. (2015). Secondary use of electronic health records for
building cohort studies through top-down information extraction. Journal of biomedical informatics, 53,
188-195.
[25] Averbis text analytics - Healthcare: https://averbis.com/en/industries/healthcare/ last access: 13.3.2017.
32 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-32
1. Introduction
these facts, the occurrence of delirium has been identified as one of the markers for the
quality of care and patient safety [6].
There are a number of studies on predicting delirium which evaluated different risk-
stratification instruments, such as Folstein Mini Mental State Examination (MMSE)
scores [7], the Clock Drawing Test (CDT) [6], the Confusion Assessment Method (CAM) [8],
the CAM for the Intensive Care Unit (CAM-ICU) [8], the Delirium Assessment Scale (DAS) [7], etc.
In a literature review, we identified several relevant papers concerning the
application of machine learning techniques to delirium prediction that have been
published in the last 10 years.
In a recent publication [8], Wassenaar et al. used the CAM-ICU to assess patients'
delirium status and developed a regression model. They achieved an Area Under the
Receiver Operating Characteristic curve (AUROC) of 0.70, with a sensitivity and specificity of
62% and 67%, respectively, for days 0–1 of the ICU stay. These measures increased to
an AUROC of 0.81 with a sensitivity of 78% and a specificity of 68% after six days of stay. The
study cohort consisted of 2,914 patients, of whom 1,962 were included in the development
dataset and 952 in the validation dataset.
In a similar study [9] with 397 patients who stayed at an internal medicine ward, a
model was developed based on a rule derived from the CAM. The model achieved an
AUROC of 0.85 with a sensitivity and specificity of 80% and 90%, respectively.
Several other studies on predicting delirium were published in the preceding years, and
a systematic review of risk-stratification models has been published by Newman et al.
[10], in which the authors considered different risk factors and derived rules for
predicting delirium [7, 9, 11, 12]. However, current models are not suitable for
hospitalized patients in general: the populations considered were small, and
accuracy was lower in non-obvious cases.
Objectives
It was the aim of our study to develop and validate a generalized predictive model,
irrespective of cohort group, based solely on available medical records. Therefore, a large
set of records from patients diagnosed with delirium was to be analyzed. Delirium
induced by alcohol and other psychoactive substances was excluded, i.e. we considered
only delirium with ICD-10 code F05. The idea was to develop a generalized predictive model to
identify patients who are susceptible to delirium during their hospitalization,
so that healthcare practitioners could be alerted early and monitor delirium progression.
We used routine data stored within the Hospital Information System (HIS) of the
Steiermärkische Krankenanstaltengesellschaft m. b. H (KAGes), a regional health care
provider in Styria, one of the nine provinces of Austria. Our dataset consisted of
retrospective data of hospitalized patients from gerontopsychiatry and internal medicine
34 D. Kramer et al. / Development and Validation of a Multivariable Prediction Model
departments. This dataset is a part of the HIS of KAGes, which consists of the
longitudinal health records from about 90% of the 1.2 million Styrian inhabitants –
covering hospital stays and outpatient visits from small standard hospitals to the
university hospital in Graz over a period of nearly 15 years.
Inclusion criteria:
• Patients who were diagnosed with ICD-10 code F05 (delirium not induced
by alcohol and other psychoactive substances)
• Time of first documentation of the ICD-10 Code F05 in the period from
01/01/2013 to 30/10/2016
• In-clinic stay at one of the KAGes hospitals
Exclusion criteria:
• For patients where the diagnosis of delirium occurred within the last
two days of a hospital stay, we assume that the delirium occurred
earlier and was recorded late. Therefore, we excluded such
hospitalizations altogether.
• All data from delirium patients without any patient record before the time
of diagnosis – for such patients, no data that could be used for prediction
(before the diagnosis of delirium) was available
According to the inclusion criteria, we identified approx. 3,000 delirium patients.
After applying the exclusion criteria, 2,221 delirium patients remained.
We used the complete record of each of the patients identified according to these
criteria for our study, i.e. all data from in-clinic stays and outpatient visits within one of
the KAGes hospitals. All the data recorded on the day of the diagnosis of delirium and
afterwards were excluded in order to simulate a prospective setting.
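The cohort rules above can be sketched as two small functions: one applying the exclusion criteria and one censoring data on or after the diagnosis day. The record layout is a simplified assumption for illustration, not the actual HIS schema.

```python
from datetime import date, timedelta

def eligible(stay):
    """Apply the exclusion criteria to one hospitalization record."""
    dx, discharge = stay["diagnosis_date"], stay["discharge_date"]
    # Excluded: delirium first coded within the last two days of the stay.
    coded_too_late = (discharge - dx) <= timedelta(days=2)
    # Excluded: no record at all before the time of diagnosis.
    has_prior_data = any(d < dx for d in stay["record_dates"])
    return (not coded_too_late) and has_prior_data

def censor(stay):
    """Keep only data recorded before the day of diagnosis
    (simulating a prospective setting)."""
    dx = stay["diagnosis_date"]
    return [d for d in stay["record_dates"] if d < dx]

stay = {
    "diagnosis_date": date(2015, 6, 10),
    "discharge_date": date(2015, 6, 20),
    "record_dates": [date(2015, 6, 1), date(2015, 6, 10), date(2015, 6, 12)],
}
assert eligible(stay)
assert censor(stay) == [date(2015, 6, 1)]
```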
The analysis of the learning models has been done on data obtained from the KAGes
HIS openMEDOCS, which is based on the IS-H/i.s.h.med information systems
implemented on SAP platforms. Due to the size of the data and the requirements for
analysis (for controlling as well as for medical, quality and efficiency issues), SAP
HANA was chosen as the data warehouse platform.
The queries resulted in a dataset with 8,561 patients and 858 features. Categorical
features, such as diagnoses or procedures, were expanded to one boolean feature per
possible value. Similarly, ICD-10 codes from main and sub-diagnoses were also included
as variables grouped by ICD-10 chapter.
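The feature expansion described above can be sketched in a framework-free way. For simplicity, the chapter grouping below is approximated by the first letter of the ICD-10 code; the real grouping follows the official chapter ranges.

```python
def one_hot(records, field):
    """Expand a multi-valued categorical field to one boolean per value."""
    values = sorted({v for r in records for v in r[field]})
    return [{f"{field}={v}": (v in r[field]) for v in values} for r in records]

def icd_chapter_flags(records):
    """One boolean per (approximated) ICD-10 chapter."""
    chapters = sorted({code[0] for r in records for code in r["icd10"]})
    return [{f"chapter_{c}": any(code.startswith(c) for code in r["icd10"])
             for c in chapters} for r in records]

patients = [
    {"icd10": ["F05", "I50.9"]},   # delirium, heart failure
    {"icd10": ["J18.9"]},          # pneumonia
]
print(one_hot(patients, "icd10")[0])
# {'icd10=F05': True, 'icd10=I50.9': True, 'icd10=J18.9': False}
print(icd_chapter_flags(patients)[1])
# {'chapter_F': False, 'chapter_I': False, 'chapter_J': True}
```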
R, a free software environment for statistical computing, and its Classification and
Regression Training (caret) package [13], together with associated packages, were
used for our modelling.
Different learning algorithms were implemented, including Random Forests (RF),
Linear Discriminant Analysis (LDA), Logistic Regression (LR), Support Vector
Machine (SVM), K−Nearest Neighbor (KNN), Elastic Net (ENET), and Neural Network
(NN).
The dataset was split into a training and a test set. The training set consisted
of 75% of the cases, and 10-fold cross-validation was used for the training of
the models.
Various standard statistical measures were used for validating the model performance,
including sensitivity, specificity, accuracy, Cohen’s Kappa, and AUROC.
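The validation measures listed above can be worked through on a toy confusion matrix. The counts below are invented for illustration and are not the study's results.

```python
def metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy and Cohen's kappa
    from binary confusion-matrix counts."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)           # true-positive rate
    spec = tn / (tn + fp)           # true-negative rate
    acc = (tp + tn) / n
    # Cohen's kappa: observed vs. chance agreement
    p_o = acc
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_o - p_e) / (1 - p_e)
    return {"sensitivity": sens, "specificity": spec,
            "accuracy": acc, "kappa": kappa}

m = metrics(tp=80, fp=30, fn=20, tn=70)
assert m["sensitivity"] == 0.8 and m["specificity"] == 0.7
assert m["accuracy"] == 0.75
assert round(m["kappa"], 2) == 0.5
```

AUROC, the remaining measure, additionally requires the predicted scores rather than hard class labels, so it is not derivable from the confusion matrix alone.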
3. Results
We have analyzed the relationship of each factor with delirium to understand their
influence on delirium prediction. The following figures show a selection of highly
influential factors.
Figure 1 illustrates that age and comorbidity are two major risk factors for
delirium. The median age of the delirium cohort was significantly higher than that
of the non-delirium cohort, i.e. our control group; the higher the age of the
patient, the more susceptible they are to developing delirium. Likewise, Figure 1 b) shows that
the higher the Charlson comorbidity index, the higher the probability of developing
delirium.
Figure 2 shows that the prevalence of delirium was higher in severely ill patients
suffering from other diseases, such as dementia, depression, heart failure, pneumonia or
respiratory insufficiency. Nearly 50% of the delirium patients in our cohort group had
heart failure, i.e. ~35% of the heart failure patients in our considered population
developed delirium.
Figure 1. Box plots of (a) age distribution and (b) Charlson comorbidity index distribution for patients in the
delirium and non-delirium cohorts.
Along with illness, we have also found other factors, such as
metabolic imbalance, physical disorders etc., through laboratory results and nursing
assessment. Examples are presented in Figure 3.
The classification models used, along with the performance criteria, are listed in Table 1.
All classification models showed similar behavior, with some variation, except for the
K-Nearest Neighbors algorithm, which was outperformed by the other models.
Figure 4 shows a graphical representation of the results for the Random Forest model.
4. Discussion
The present paper describes how we developed and validated a multivariable prediction
model for the occurrence of delirium in hospitalized gerontopsychiatry and internal
medicine patients.
Delirium is a very common condition in hospitalized patients. Our paper presents
an approach to the early detection of patients at risk for delirium based solely on data
that are already available at KAGes hospitals. To our knowledge, it is the first publication on
predicting delirium with such a large patient population and feature set.
Figure 2. Bar chart showing the percentage of patients with and without delirium with respect to each
individually diagnosed disease (heart failure, depression, dementia, pneumonia, respiratory insufficiency).
Figure 3. Bar charts showing the significance of metabolic imbalance and physical disorders related to delirium
(dehydration, electrolyte imbalance, withdrawal syndrome, hearing loss; left) and an example of laboratory
results, the C-reactive protein (CRP) level, a categorical factor where 0 is normal and 3 is very abnormal,
i.e. highly elevated (right).
Limitations
Although the model performs very well, there may still be some room for improvement,
in particular as far as the collection of the data is concerned.
• From our experience, we expect that the time and date of delirium occurrences
might not always be correct and that in some cases delirium might be neither
diagnosed nor recorded.
• The quality of the data used in this work was limited, since the data were not taken
from scientific studies featuring dedicated processes for data quality improvement
(e.g. source data verification, monitoring, etc.) but directly from the KAGes HIS.
Therefore, we expect that model accuracy could be improved further if additional
effort were applied to data management.
• Although our dataset is large in comparison with previous research, further data
with true positive results (i.e. more patients who developed delirium) would be
necessary. We randomly chose 7,000 patients for our control group, while only
2,221 delirium patients were included. Therefore, a more balanced dataset
might further improve our models.
• For implementing our algorithms in clinical routine, higher sensitivity would be
advantageous.
Figure 4. Receiver Operating Characteristic of the Random Forest model with the corresponding
performance criteria on the right-hand side.
Our models are still at a preliminary stage. Further work on data pre-processing,
data cleaning, model optimization, etc. will be necessary before our models can be
applied in routine care.
Feature importance needs to be determined to reduce the size of the feature set. Our
target is to increase sensitivity without compromising accuracy.
Our analyses revealed significantly different results when we applied different
learning algorithms. Further analyses are required to understand the significant model
parameters that influence model accuracy.
Acknowledgements
This work has been carried out with the K1 COMET Competence Center CBmed, which
is funded by the Federal Ministry of Transport, Innovation and Technology (BMVIT);
the Federal Ministry of Science, Research and Economy (BMWFW); Land Steiermark
(Department 12, Business and Innovation); the Styrian Business Promotion Agency
(SFG); and the Vienna Business Agency. The COMET program is executed by the FFG.
KAGes and SAP provided significant resources, manpower and data as basis for research
and innovation.
References
[1] T. G. Fong, S. R. Tulebaev, and S. K. Inouye, “Delirium in elderly adults: diagnosis, prevention and
treatment,” Nat Rev Neurol, vol. 5, no. 4, pp. 210-20, Apr, 2009.
[2] D. K. Kiely, E. R. Marcantonio, S. K. Inouye, M. L. Shaffer, M. A. Bergmann, F. M. Yang, M. A.
Fearing, and R. N. Jones, “Persistent delirium predicts greater mortality,” J Am Geriatr Soc, vol. 57, no.
1, pp. 55-61, Jan, 2009.
[3] D. M. Popeo, “Delirium in older adults,” Mt Sinai J Med, vol. 78, no. 4, pp. 571-82, 2011 Jul-Aug, 2011.
[4] S. Wass, P. J. Webster, and B. R. Nair, “Delirium in the elderly: a review,” Oman Med J, vol. 23, no. 3,
pp. 150-7, Jul, 2008.
[5] H. Zhang, Y. Lu, M. Liu, Z. Zou, L. Wang, F. Y. Xu, and X. Y. Shi, “Strategies for prevention of
postoperative delirium: a systematic review and meta-analysis of randomized trials,” Crit Care, vol. 17,
no. 2, pp. R47, Mar, 2013.
1. Introduction
Clinical terminology systems, classifications and coding systems have been developed
using independent, divergent or uncoordinated approaches. This is specifically true for
health interventions in the medical and surgical field, with, for example, UMLS [1],
LOINC [2], DICOM SDM [3], SNOMED CT [4], ACHI and ICD-10-AM [5], CCI [6],
ICD-9-CM [7], ICD-10-CM [8], OPCS4 [9], OPS [10] and CCAM [11].
The International Classification of Health Interventions (ICHI) has been developed since
2006 by the WHO-FIC (WHO Family of International Classifications) network up to an
alpha 2 version in 2016, with a specific section 1 on medical and surgical
interventions to create a common base across existing advanced coding systems for
1 Corresponding Author: Jean Marie Rodrigues, INSERM Limics Paris, rodrigues@univ-st-etienne.fr
J.M. Rodrigues et al. / How to Link SNOMED CT Procedure and WHO ICHI 41
interventions [12]. The ICHI model states that it follows the EN-ISO 1828 Categorial
Structure (CAST) [13, 14, 15]. The ICHI semantic structure, with 3 axes and 7-digit codes,
is a partial application of the EN-ISO 1828 Categorial Structure (CAST). The ICHI coding
system was tested with the Korean Classification of Health Interventions [16] and with a
UNU-CBG CASEMIX group use case [17], which showed that ICHI cannot reach the level
of granularity of ICD-9-CM Vol. 3 [7].
The SNOMED CT procedures hierarchy is the largest terminology resource
available in UMLS. We compare the ICHI semantic structure and the SNOMED CT
concept model for the procedures hierarchy with the ISO 1828 definition of a CAST for
surgical procedures. We test the similarities and differences between the ICHI CAST, the ISO
1828 standard CAST and the SNOMED CT concept model, and identify the modifications
the ICHI semantic structure needs in order to be aligned with the SNOMED CT concept model.
2. Methods
We present the ICHI semantic structure, the ISO 1828 CAST and the SNOMED CT concept
model for the procedures hierarchy and examine their differences.
2.1. ICHI
The main aims of ICHI are to allow international comparisons namely for the cost of
health interventions [18] and to provide a classification for countries that lack one. There
are two other chapters outside medical and surgical interventions which are not taken
into account here. ICHI currently contains around 5,800 items, in an alpha version 2016.
The target date for approval by the World Health Assembly is 2019.
The ICHI semantic structure is built around three axes, “Target”, “Action” and
“Means” [12], with the following definitions:
• “Target”: the entity on which the Action is carried out
• “Action”: a deed done by an actor to a target
• “Means”: the processes and methods by which the Action is carried out.
This is not fully conformant with the ISO 1828 CAST, since only the “Action” axis is
equivalent to the ISO 1828 CAST “Surgical deed” semantic category and there are no
semantic links, as explained in [13, 14, 15] and in the following subsection.
In [17], we selected three ICD-9-CM Volume 3 codes having an x-to-1 map (x ICD-9-CM
codes to 1 ICHI code) and compared their lexical content with ICHI and SNOMED CT
concept codes, and their semantic content with the ISO 1828 CAST, the ICHI semantic coding
structure and the SNOMED CT concept model:
• 51.41 Common duct exploration for removal of calculus
• 37.78 Insertion of temporary trans-venous pacemaker system
• 02.11 Simple suture of dura mater of the brain
3. Results
The map between the selected ICD-9-CM, ICHI and SNOMED CT codes is shown in
Table 1.
Figure 1 provides the SNOMED CT concept model diagram for “Incision and exploration of
common bile duct for removal of calculus”, available in the IHTSDO browser
[21].
Or, in SNOMED CT inferred-expression compositional grammar:
{363700003 |Direct morphology (attribute)| = 56381008 |Calculus (morphologic abnormality)|,
405814001 |Procedure site - Indirect (attribute)| = 79741001 |Common bile duct structure (body structure)|,
260686004 |Method (attribute)| = 129306000 |Surgical removal - action (qualifier value)| }
{405813007 |Procedure site - Direct (attribute)| = 79741001 |Common bile duct structure (body structure)|,
260686004 |Method (attribute)| = 129287005 |Incision - action (qualifier value)| }
{405813007 |Procedure site - Direct (attribute)| = 79741001 |Common bile duct structure (body structure)|,
260686004 |Method (attribute)| = 281615006 |Exploration - action (qualifier value)| }
Figure 1: SNOMED CT concept model for “Incision and exploration of common bile duct for removal of
calculus” diagram
The Figure 1 SNOMED CT concept model and its expression are equivalent to the
ISO 1828 expression:
“Surgicaldeed” “Removal”
“hasObject” “Calculus” (lesion)
“hasSite” “Common bile duct” (anatomical entity)
“hasSubsurgicaldeed” “Incision”
“hasObject” “Common bile duct” (anatomical entity)
“hasSubsurgicaldeed” “Exploration”
“hasObject” “Common bile duct” (anatomical entity)
Figure 2 gives an example of the “Direct device” attribute, which is equivalent to the
ISO 1828 “hasObject” semantic link associated with the “Interventional equipment”
semantic category.
Or, in SNOMED CT inferred-expression compositional grammar:
{405814001 |Procedure site - Indirect (attribute)| = 80891009 |Heart structure (body structure)|,
260686004 |Method (attribute)| = 257867005 |Insertion - action (qualifier value)|,
363699004 |Direct device (attribute)| = 360127006 |Intravenous cardiac pacemaker system (physical
object)| }
The Figure 2 SNOMED CT concept model and its expression are equivalent to the
ISO 1828 expression:
“Surgicaldeed” “Insertion”
“hasObject” “Intravenous cardiac pacemaker” (interventional equipment)
“hasSite” “Heart” (anatomical entity)
Figure 3 gives an example of the “Using device” attribute, which is equivalent to
the ISO 1828 “hasMeans” semantic link associated with the ISO 1828 “Interventional
equipment” semantic category.
{405813007 |Procedure site - Direct (attribute)| = 8935007 |Cerebral meninges structure (body structure)|,
260686004 |Method (attribute)| = 129357001 |Closure - action (qualifier value)|,
424226004 |Using device (attribute)| = 27065002 |Surgical suture, device (physical object)| }
The Figure 3 SNOMED CT concept model is equivalent to the ISO 1828 expression:
“Surgicaldeed” “Closure”
“hasObject” “Cerebral meninges” (anatomical entity)
“hasMeans” “Surgical suture device” (interventional equipment)
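The attribute-level equivalences illustrated by the three examples above can be summarized as a small lookup table. This is a simplified reading for illustration, not a normative mapping.

```python
# SNOMED CT concept-model attributes and their ISO 1828 CAST
# counterparts, as discussed for Figures 1-3 (simplified).
SNOMED_TO_ISO1828 = {
    "Method":                    "Surgical deed",
    "Procedure site - Direct":   "hasObject (anatomical entity)",
    "Procedure site - Indirect": "hasSite (anatomical entity)",
    "Direct morphology":         "hasObject (lesion)",
    "Direct device":             "hasObject (interventional equipment)",
    "Using device":              "hasMeans (interventional equipment)",
}

# E.g. the "Using device" attribute of Figure 3 corresponds to the
# ISO 1828 "hasMeans" semantic link.
assert SNOMED_TO_ISO1828["Using device"].startswith("hasMeans")
```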
4. Discussion
1 map. This is due to the absence of a “Surgical suture device” code as “Means”, as shown
by the SNOMED CT concept model, which is aligned with ISO 1828.
It appears that only the ICHI “Action” axis is aligned with the ISO 1828 “Surgical deed”
semantic category and the SNOMED CT “Method” attribute.
The ICHI “Target” axis is an umbrella name for all the ISO 1828 semantic categories
allowed to have a “hasObject” semantic link with the “Surgical deed” semantic category,
but in the ICHI “Target” axis there is no equivalent to the “lesion” or “interventional
equipment” semantic categories. They are present in the SNOMED CT concept model.
The ICHI “Means” axis is an umbrella name for “approaches” and “techniques”, mainly
for imaging and ionising or nuclear therapy, but not for the ISO 1828 “interventional
equipment”, “medical devices” or “substance” semantic categories. They are present
in the SNOMED CT concept model.
There is no ICHI equivalent to the ISO 1828 “hasSite” semantic link, which links the
“lesion” or “interventional equipment” semantic categories to the “anatomical entity”
semantic category, a mandatory semantic category for conformance with the
ISO 1828 standard. The SNOMED CT concept model attribute “Procedure site - Indirect”
provides the equivalent of the ISO 1828 “hasSite” semantic link.
There is also no ICHI equivalent of the ISO 1828 “hasSubsurgicaldeed” semantic link,
which makes it impossible to code two actions during the same intervention with ICHI,
for instance an approach to the main action, whereas this is possible with ISO 1828
and allowed by the SNOMED CT concept model.
5. Conclusion
We have shown that, while it is possible to link the SNOMED CT concept model for the
procedures hierarchy with the ISO 1828 CAST standard for surgical procedures, there is
no such link between the ICHI alpha version 2016 coding structure section 1, “Interventions
on Body Systems and Functions”, and the ISO 1828 standard CAST.
There is a need for a WHO international classification of health interventions for
international comparisons [21] and for countries that do not have such a system. For more
developed countries, the need is clear for ICHI section 2, “Interventions on activities and
participation”, and ICHI section 3, “Interventions to improve the environment and health-
related behaviour”. On the other hand, section 1, “Interventions on Body Systems and
Functions”, must be compliant with a minimum granularity not reached in the 2016 alpha
version.
We recommend that the following modifications be discussed by WHO:
1) The ICHI “Action” axis shall be allowed to be present several times.
2) The ICHI “Target” axis must be extended to “morphologic abnormalities” and
main categories of “drugs” and “medical devices”.
3) The ICHI “Target” axis, once extended, shall be replaced by two axes, “Direct
target” and “Indirect target”, with the meaning of “Direct” and “Indirect” in the SNOMED
CT attributes “Procedure site”, “Morphology”, “Device” and “Substance”.
Acknowledgements
References
[1] A.T. McCray et al, The representation of meaning in the UMLS, Methods Inf Med 34(12) (1995), 193-
201.
[2] Logical Observation Identifiers Names and Codes (LOINC), http://www.loinc.org/, last access: 1.3.2017.
[3] DICOM, http://www.xray.hmc.psu.edu/dicom/dicom_home.html, last access: 1.3.2017.
[4] SNOMED Clinical Terms, http://www.snomed.org/, last access: 1.3.2017.
[5] Australian Classification of Health Interventions (ACHI) National Centre for Classification. in Health,
http://sydney.edu.au/health-sciences/ncch/classification.shtml, last access: 1.3.2017.
[6] Canadian Classification of Health Interventions, https://www.cihi.ca/en/submit-data-and-view-
standards/codes-and-classifications/cci, last access: 1.3.2017.
[7] ICD-9-CM Diagnostic and Procedure Codes,
https://www.cms.gov/medicare/coding/ICD9providerdiagnosticcodes/codes.html, last access: 1.3.2017.
[8] ICD-10 and GEMs, https://www.cms.gov/Medicare/Coding/ICD10/2017-ICD-10-PCS-and-GEMs.html,
last access: 1.3.2017.
[9] OPCS4 Classification, https://digital.nhs.uk/article/290/Terminology-and-Classifications/OPCS4, last
access: 1.3.2017.
[10] OPS German procedures classification, https://www.dimdi.de/static/en/klassi/ops/, last access: 1.3.2017.
[11] CCAM, http://www.atih.sante.fr/ccam-descriptive-v45-00-au-format-nx, last access: 1.3.2017.
[12] International Classification of Health Interventions: Alpha2 Version 2016,
http://mitel.dimi.uniud.it/ichi/docs/, last access: 1.3.2017.
[13] EN ISO 1828:2012, https://www.iso.org/search/x/query/ISO1828, last access: 1.3.2017.
[14] J.M. Rodrigues et al, The CEN TC 251 and ISO TC 215 Categorial Structures. A Step towards increased
interoperability, Stud Health Technol Inform 136 (2008), 857-862.
[15] B. Trombert Paviot et al, Development of a New International Classification of Health Interventions
Based on an Ontology Framework, Stud Health Technol Inform 169 (2011), 749-753.
[16] B. Jung et al, The Revision of the Korean Classifications of Health Interventions Based on the Proposed
ICHI Semantic Model and Lessons Learned, Stud Health Technol Inform 169 (2011), 754-759.
[17] S.M. Aljunid et al, ICHI Categorial Structure: A WHO-FIC Tool for Semantic Interoperability of
Procedures Classifications, Stud Health Technol Inform 216 (2015), 1090.
[18] F. Song, Y.K. Loke, T. Walsh, A.M. Glenny, A.J. Eastwood, D.G. Altman, Methodological problems in
the use of indirect comparisons for evaluating healthcare interventions: survey of published systematic
reviews, BMJ 338 (2009), 1136-1147.
[19] SNOMED CT Starter Guide,
https://confluence.ihtsdotools.org/display/DOCSTART/SNOMED+CT+Starter+Guide, last access:
1.3.2017.
[20] SNOMED CT Editorial Guide, https://confluence.ihtsdotools.org/display/DOCEG/
SNOMED+CT+Editorial+Guide, last access: 1.3.2017.
[21] WHO Browser, http://who.int/classifications/icd11/browse/f/en, last access: 13.3.2017.
48 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-48
1. Introduction
The use of genetic variation for clinical decisions about patient care has gained crucial
relevance over the last few decades. It is now widely accepted that a high rate of
variability in patient response to medication is due to genetic variation. Some of these
variations influence drug metabolism, transport, and receptor response [1-3]. Some hospitals
have started prescribing both drugs and doses based on genetic test results to match the
medication with the genotype of the patient [4].
Especially in oncology, the enhanced knowledge of the molecular pathogenesis of
malignant diseases has led to an increasing number of available treatment options for
targeted cancer therapies. As a result, German university hospitals have started to
1 Corresponding Author: Marc Hinderer, Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 13, 91058 Erlangen, Germany. E-Mail: marc.hinderer@fau.de
M. Hinderer et al. / Supporting Molecular Tumor Boards in Molecular-Guided Decision-Making 49
2. Methods
3. Results
Molecular tumor boards complement the existing entity-related tumor boards of German
university hospitals. Four of the five hospitals either launched their molecular tumor
board in 2015 or 2016, or joined the molecular tumor board of the German National
Center for Tumor Diseases (NCT) Heidelberg in 2015 (Table 1). Members of a tumor
board may choose to request a molecular tumor board review of a particular case. The
molecular tumor boards focus on cancer patients with progression after standard
treatment, rare tumor entities, and resistance to molecular targeted therapies. The
entity-related tumor boards mainly focus on routine genetic testing, such as testing for
KRAS wildtype or BRCA1 mutations. The molecular tumor boards, by contrast, mainly focus
on exploratory genetic testing. One hospital has not yet implemented a separate molecular
tumor board but has integrated molecularly guided decision-making into the existing
entity-related tumor board.
We observed heterogeneity in the organization of genetic testing among the four
hospitals. Several medical disciplines at the five interviewed hospitals offer genetic
testing (Table 1). Both the sample materials used to perform a genetic test and the
purpose of testing varied among disciplines and among the university hospitals.
One hospital established a pipeline, together with its department of pathology, to
analyze and interpret the genetic results and to present them to the molecular tumor
board.
The diagnostic departments of the five university hospitals, which perform genetic
testing, may receive external orders for out-patients and internal orders for both in-
patients and patients from the out-patient clinic.
External physicians use a pre-defined paper-based letter of referral (German
template no. 10) to order genetic tests for out-patients who are insured by the German
statutory health insurance. In contrast, patients who are insured by a private insurance
company may be referred without such a letter of referral. In one hospital, an external
physician may alternatively refer a patient to the hospital's out-patient clinic, which,
in turn, will order a genetic test internally.
We observed heterogeneity among the five hospitals in the process of ordering an
internal genetic test (Table 1). Despite having electronic order entry established for the
“traditional” diagnostic procedures, four hospitals still use paper-based order forms to
order internal genetic tests. The design of these paper-based order forms differs among
the university hospitals. In three hospitals, an electronic order process for internal
physicians is currently being developed and scheduled to be implemented in the near
future. This electronic order process will typically be embedded into the review process
of the entity-related tumor boards. Nevertheless, one hospital has already developed and
implemented an online system to manage all entity-based tumor boards. The members
of the entity-based tumor boards can electronically place a genetic test order in the online
system, which will be automatically transferred into an order list of the respective
diagnostic department.
In four hospitals, physicians of the diagnostic departments mainly run genetic tests to
compare the patient's genes to a panel of known gene mutations. Panel sequencing is a
method to perform next-generation sequencing (NGS). One hospital mainly uses Sanger
sequencing for DNA sequencing, but uses panels as well. Each hospital performs
Table 2. Excerpt of an Excel spreadsheet containing annotated somatic gene variants, including the
chromosome number (Chr), start position (Start), end position (End), reference base (Ref), alternative
non-reference alleles called in at least one of the samples (Alt), region (e.g. exonic or intronic) that a
variant hits (Reg), gene name associated with a variant (Gene), exonic variant function (e.g. nonsynonymous
or synonymous; Exonic), variant allele frequency (Frequency), and whether the gene is a tumor suppressor
(isTS) or an oncogene (isO).

Chr    Start      End        Ref  Alt  Reg         Gene              Exonic                Frequency  isTS  isO
chr22  37966169   37966169   G    A    downstream  CDC42EP1, LGALS2                        36%        0     0
chr6   66012682   66012682   G    T    exonic      LOC441155         nonsynonymous SNV     40.43%     0     0
chr3   142180921  142180921  -    C    exonic      ATR               frameshift insertion  11.11%     0     0
chr19  42796840   42796840   C    T    exonic      CIC               nonsynonymous SNV     35.46%     1     0
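The annotated variants in Table 2 are plain tabular records. As a minimal sketch of how such rows could be handled programmatically (the class and field names below are our own, chosen to mirror the table columns, not part of the hospitals' tooling):

```python
from dataclasses import dataclass

@dataclass
class VariantAnnotation:
    """One annotated somatic variant, mirroring the columns of Table 2."""
    chrom: str
    start: int
    end: int
    ref: str               # reference base ("-" for insertions)
    alt: str               # alternative allele
    region: str            # e.g. "exonic", "downstream"
    gene: str
    exonic_function: str   # e.g. "nonsynonymous SNV"; may be empty
    frequency: float       # variant allele frequency as a fraction (0..1)
    is_tumor_suppressor: bool
    is_oncogene: bool

def parse_row(fields: list[str]) -> VariantAnnotation:
    """Build a VariantAnnotation from one spreadsheet row (list of cells)."""
    return VariantAnnotation(
        chrom=fields[0], start=int(fields[1]), end=int(fields[2]),
        ref=fields[3], alt=fields[4], region=fields[5], gene=fields[6],
        exonic_function=fields[7],
        frequency=float(fields[8].rstrip("%")) / 100,  # "35.46%" -> 0.3546
        is_tumor_suppressor=fields[9] == "1",
        is_oncogene=fields[10] == "1",
    )

# Last row of Table 2 as an example
row = ["chr19", "42796840", "42796840", "C", "T", "exonic", "CIC",
       "nonsynonymous SNV", "35.46%", "1", "0"]
v = parse_row(row)
```

Such a typed representation would be a first step away from free-form spreadsheets toward the machine-readable records discussed later in this paper.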
Similar to the order process, we observed heterogeneity among the five German
university hospitals in reporting the results of genetic tests. After analyzing and
interpreting the results of the genetic test, physicians of the diagnostic departments of all
five hospitals describe their interpretation of the raw data in a medical report. In all
hospitals, the final medical report consists only of a narrative textual description; no
structured results are generated.
The medical reports comprised the following sections:
2 URL: http://cancer.sanger.ac.uk/cosmic
3 URL: https://www.ncbi.nlm.nih.gov/clinvar/
4 URL: https://www.ncbi.nlm.nih.gov/projects/SNP/
• indication for genetic testing (e.g. “due to cancer progression and metastasis”)
• diagnosis (e.g. “ICD-10-GM-2017: C16.9”)
• tumor cell concentration (e.g. “25%”)
• examined genes (e.g. “FGFR2, MET, etc.”) and gene sequences including splice
sites, UTR, promoter etc.
• description of genetic examination (e.g. “The resulting DNA sequence was
compared to the reference sequence (hg19) using CLCbio Genomics
Workbench (Qiagen)…”)
• description of non-synonymous variants (mutations), including gene name, cDNA,
associated protein, allele frequency and corresponding pathogenicity
• CNV (e.g. “An amplification of the FLT3 locus was detected”)
• critical interpretation of the genetic mutations (e.g. “The CDH1 mutation is
clearly pathogenic”), including the OMIM number, whether the allele is dominant
or recessive, suggestions for further examinations, etc.
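Since these reports consist only of narrative text, one way to make them machine-readable would be a structured model of the sections just listed. The following is a sketch only; the class and field names are ours, not an existing hospital format:

```python
from dataclasses import dataclass, field

@dataclass
class VariantFinding:
    """One reported non-synonymous variant (field names are hypothetical)."""
    gene: str
    cdna: str              # cDNA-level change
    protein: str           # associated protein-level change
    allele_frequency: float
    pathogenicity: str     # free-text or coded pathogenicity assessment

@dataclass
class GeneticTestReport:
    """Structured counterpart to the narrative report sections above."""
    indication: str                    # e.g. "cancer progression and metastasis"
    diagnosis_code: str                # e.g. ICD-10-GM code "C16.9"
    tumor_cell_concentration: float    # fraction, e.g. 0.25 for "25%"
    examined_genes: list[str]
    method_description: str            # narrative description of the examination
    variants: list[VariantFinding] = field(default_factory=list)
    cnv_findings: list[str] = field(default_factory=list)
    interpretation: str = ""

report = GeneticTestReport(
    indication="cancer progression and metastasis",
    diagnosis_code="C16.9",
    tumor_cell_concentration=0.25,
    examined_genes=["FGFR2", "MET"],
    method_description="DNA sequence compared to reference sequence (hg19)",
)
```

A record like this could be stored as discrete data in the EHR instead of, or alongside, the PDF document.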
The signed medical reports are sent to the physicians who initially requested the
genetic test. This communication is typically paper-based if the report is sent to an
external physician. If an internal clinician ordered the genetic test, the medical report
is still communicated on paper in one hospital. In three hospitals, the report is
communicated electronically and stored as a PDF document in the EHR, whereas in one
hospital the medical report is communicated on paper as well as stored as a PDF document
in the EHR (Table 1).
All hospitals drafted PowerPoint slides to present the annotated and interpreted
results of the genetic tests to the molecular tumor boards. The presentations included
screenshots, narrative text and manually created tables.
All files, from raw sequencing data up to the annotated variants, were finally stored
in a file system within the diagnostic department. In contrast, the reports for the
molecular tumor board were stored in the EHR.
In two hospitals, research funds cover the costs of exploratory genetic testing for the
molecular tumor board.
4. Discussion
We were able to provide a first overview of the supporting procedures for molecular
tumor boards in Germany. This topic is a new and evolving medical field. Therefore, we
conducted semi-structured interviews with a framework of themes rather than a
structured interview with a rigorous set of questions.
The molecular tumor boards of all five hospitals aimed at treating patients for whom
a case-related cancer therapy according to guidelines has been ineffective. It might be
necessary to include such patients in the molecular tumor board earlier, instead of
waiting until a guideline-based therapy has proven ineffective and the cancer has
progressed avoidably. The interviewed experts have also recommended this. Another
similarity of the different hospitals was that they compared the patient's genes to a panel
of known gene mutations. However, the panel sizes varied among the five hospitals and
only one of them offered WES and WGS. Furthermore, all hospitals used similar online
services and open-access databases to analyze and interpret the results of a genetic test.
They also used the same data formats for reporting their results and organized the
annotated gene variants and mutations in a Microsoft Excel spreadsheet. No hospital had
a dedicated tool to support the interpretation of the annotated somatic gene variants and
mutations for the molecular tumor board. Instead, they all drafted PowerPoint slides,
which included screenshots, narrative text and manually created tables, as the
presentation for the molecular tumor board.
All five hospitals still used free-text documents rather than machine-readable
documents in most of their support procedures for molecular tumor boards. This is
similar to the situation in the United States. According to a previous survey by the
American Society of Clinical Oncology, approximately 50 percent of physicians receive
their genetic results in the form of a PDF document. Only 22 percent of the surveyed
physicians receive discrete data from their laboratory that can be stored in the
electronic health record (EHR) [7].
However, the five hospitals created different processes to incorporate genetic testing
into their clinical environment and to present genetic findings to the molecular tumor
board. One factor that varied among the five hospitals was, for instance, the kind of
genetic testing performed by a given medical discipline.
A limitation of our study is that we restricted the number of investigated sites to
five hospitals. Furthermore, we did not investigate all diagnostic departments in all
five hospitals, which would be necessary to gain a comprehensive overview. This sample
might not be representative of all German hospitals. Nevertheless, we were able to
demonstrate how heterogeneous the current support procedures for molecular tumor
boards are.
Further research needs to be conducted on this topic to provide a comprehensive
and structured overview of the current situation in German hospitals. This should include
an enhanced questionnaire and involve more hospitals.
We believe there are three approaches to support the process from genetic testing to
reporting within the molecular tumor boards. First, our results emphasize the need for
standardized workflows that perform automated variant calling and annotation for the
clinical interpretation of genetic variants. This could make the process mentioned above
faster and less error-prone. In addition, the test results could become more reproducible
and more comparable.
Second, there is a need for tools supporting the experts in creating their reports and
presentations, based on the annotated mutations and variants from the previous process.
These tools should employ natural language processing to present tumor-relevant studies
and highlight the parts that match the genetic results of the patient. They should be
able to extract the structured information that is often used for interpretation.
Furthermore, they should visualize relevant information for the physician, for instance
the 3D structure of proteins with mutated regions highlighted.
Third, it might be useful to implement a pharmacogenomic clinical decision support
system (CDSS), as illustrated for example by Melton et al. [8], Hicks et al. [9] and
Overby et al. [10]. Pharmacogenomic CDSS link a patient's genotype to biomedical
knowledge in order to assist physicians in assessing cancer status, making a diagnosis,
selecting an appropriate cancer therapy or making other molecularly guided decisions [11].
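At its core, such a pharmacogenomic CDSS maps genotype findings to therapy guidance via a rule base. The following toy sketch illustrates only the lookup idea; the rule contents are illustrative placeholders, not clinical guidance, and none of the names are from the cited systems:

```python
# Minimal sketch of a pharmacogenomic rule lookup: genotype finding -> guidance.
# The rules below are illustrative placeholders, not clinical recommendations.
RULES = {
    ("KRAS", "wildtype"): "anti-EGFR therapy may be considered",
    ("KRAS", "mutated"): "anti-EGFR therapy not recommended",
}

def advise(gene: str, status: str) -> str:
    """Return guidance for a genotype finding, or flag it for manual review."""
    return RULES.get((gene, status),
                     "no rule matched - refer to molecular tumor board")

guidance = advise("KRAS", "mutated")
```

A real CDSS would of course draw on curated knowledge bases and patient context rather than a static dictionary, but the genotype-to-knowledge mapping is the common principle.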
5. Acknowledgement
This study was conducted within the MIRACUM consortium which is funded by the
German Ministry for Education and Research (BMBF) under the Funding Number FKZ
01ZZ1606H. The present work was performed in fulfillment of the requirements for
obtaining the degree “Dr. rer. biol. hum.” from the Friedrich-Alexander-Universität
Erlangen-Nürnberg (MH).
6. References
[1] E.J. Stanek, C.L. Sanders, K.A.J. Taber, et al., Adoption of Pharmacogenomic Testing by US Physicians:
Results of a Nationwide Survey, Clin. Pharmacol. Ther. 91 (2012) 450–458. doi:10.1038/clpt.2011.306.
[2] R. Weinshilboum, Inheritance and drug response., N. Engl. J. Med. 348 (2003) 529–537.
doi:10.1056/NEJMra020021.
[3] The Royal Society, Personalised medicines: hopes and realities, R. Soc. (2005) 52.
[4] K.M. Romagnoli, R.D. Boyce, P.E. Empey, et al., Bringing clinical pharmacogenomics information to
pharmacists: A qualitative study of information needs and resource requirements, Int. J. Med. Inform.
86 (2016) 54–61. doi:10.1016/j.ijmedinf.2015.11.015.
[5] National Association of SHI-Accredited Physicians. Doctors’ fee scale - para. 19.4.4, (n.d.).
http://www.kbv.de/html/13259.php?srt=relevance&stp=fulltext&q=19.4.4&s=Suchen.
[6] German Association of Research-Based Pharmaceutical Companies. In Deutschland zugelassene
Arzneimittel für die personalisierte Medizin, (n.d.). https://www.vfa.de/de/arzneimittel-
forschung/datenbanken-zu-arzneimitteln/individualisierte-medizin.html.
[7] D. Raths, Health Informatics: The Ongoing Struggle to Get Actionable Genomic Data to the Point of
Care, (n.d.). http://www.healthcare-informatics.com/blogs/david-raths/ongoing-struggle-get-
actionable-genomic-data-point-care (accessed February 9, 2017).
[8] B.L. Melton, A.J. Zillich, J.J. Saleem, et al., Iterative Development and Evaluation of a
Pharmacogenomic-Guided Clinical Decision Support System for Warfarin Dosing, Appl. Clin. Inform.
7 (2016) 1088–1106. doi:10.4338/ACI-2016-05-RA-0081.
[9] J.K. Hicks, D. Stowe, M.A. Willner, et al., Implementation of Clinical Pharmacogenomics within a
Large Health System: From Electronic Health Record Decision Support to Consultation Services,
Pharmacotherapy. 36 (2016) 940–948.
[10] C.L. Overby, E.B. Devine, N. Abernethy, et al., Making Pharmacogenomic-based Prescribing Alerts
More Effective: A Scenario-based Pilot Study with Physicians, J. Biomed. Inform. 55 (2015) 249–259.
[11] P. Jia, L. Zhang, J. Chen, et al., The Effects of Clinical Decision Support Systems on Medication Safety:
An Overview, PLoS One. 11 (2016) e0167683. doi:10.1371/journal.pone.0167683.
Health Informatics Meets eHealth 55
doi:10.3233/978-1-61499-759-7-55
1. Introduction
1 Corresponding Author: Martin Staemmler, University of Applied Sciences, Zur Schwedenschanze 15, 18435 Stralsund, Germany. E-mail: martin.staemmler@fh-stralsund.de
56 M. Staemmler et al. / Ad hoc Participation in Professional Tele-Collaboration Platforms
and in particular privacy and data security. To illustrate the approach, the established
tele-collaboration system TKmed [5] will be used.
2. Methods
Starting from a short overview of the exemplar TKmed, the requirements and the
approach for the patient-centric use cases are presented.
TKmed is a nationwide tele-collaboration service currently used in more than 180
practices and hospitals, mainly in Germany and in some adjacent countries in cross-border
scenarios. It provides tele-applications like teleradiology and telecardiology and
includes consultation workflow support for about 2000 registered users.
Figure 1 shows the TKmed system architecture with a centralized infrastructure to
allow for a store & forward approach for DICOM and non-DICOM objects. The user
directory facilitates identity and access management for users, thereby including the
organizational assignment of a user (e.g. to a department, a clinic). Two-factor
authentication for secure remote access for doctors on duty is achieved by token services.
The external trusted services (ESZ) guarantee end-to-end encryption using keys
provided at runtime to the different front-ends. The front-ends vary in their level of
integration: while TK-Basis features a purely web-based viewer including access to local
DICOM sources and data objects, TK-Router and TK-Gateways are applications executed on
the user site to automatically route data objects to the centralized infrastructure or
the destination, respectively. In addition, TK-Gateways include a PACS for keeping data
objects prior to their further processing in the institutional data management and even
support mobile devices (TK-Gateway Professional). These levels may be seen as
representative of typical tele-application systems.
The TKmed Direkt front-ends serve for ad hoc participation; the concept is introduced
in the following sections.
The hospital, practice, or doctor, respectively, has to abide by medical confidentiality
when allowing access to the patient's personal data. When access is granted to external
persons (e.g. referrers, experts), this requires a signed patient consent. However, when
the patient receives access, consent is required at a much lower level, covering only the
transmission and its associated risks, or implicit consent may be assumed based on the
interaction with the patient, as described in the following paragraph.
The contact between the doctor or personnel and the patient serves to authenticate the
patient, and this has to be mapped to the requirement of two-factor authentication in
the IT domain [6]. The patient provides her/his email address to the doctor or the
personnel for obtaining the link as a first factor. For the second factor, demographic
information and a session key are used to produce a token as a printout (text or QR code)
for the patient. The provided link together with the possession of the printout fulfills
the two-factor authentication requirement. Both have to be used to access the information
provided by the health care institution. Internally, the procedure results in
establishing a temporary data container for that patient's data objects on the one hand,
and in sending a link to the patient's email account on the other.
It is worth noting that, within the tele-application, the patient only receives access
to her/his data container when invoking the link. Her/his rights are limited to viewing
and downloading the data or to directly selecting a destination for forwarding the data,
e.g. due to a planned visit to another doctor.
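The second factor described above, derived from demographic information and a session key, could be produced as sketched below. The text does not specify TKmed's actual derivation, so the HMAC construction and all names here are our assumptions:

```python
import hashlib
import hmac
import secrets

def issue_token(name: str, birth_date: str) -> tuple[str, bytes]:
    """Derive a short printable token from patient demographics and a fresh
    session key. The HMAC scheme is our assumption; TKmed's scheme may differ."""
    session_key = secrets.token_bytes(32)          # generated per session
    msg = f"{name}|{birth_date}".encode()
    token = hmac.new(session_key, msg, hashlib.sha256).hexdigest()[:12]
    return token, session_key                      # token goes on the printout

def verify_token(name: str, birth_date: str,
                 token: str, session_key: bytes) -> bool:
    """Server-side check when the patient follows the emailed link and
    enters the printed token (or scans the QR code)."""
    msg = f"{name}|{birth_date}".encode()
    expected = hmac.new(session_key, msg, hashlib.sha256).hexdigest()[:12]
    return hmac.compare_digest(expected, token)

# Hypothetical patient: the printout carries the token, the email carries the link
token, key = issue_token("Jane Doe", "1970-01-01")
```

The emailed link (first factor) plus the printed token (second factor) together grant access, matching the two-factor requirement described in the text.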
3. Results
This section presents the results of the implementation, the current usage, and the
business case to achieve long-term sustainability.
3.1. Implementation
The use cases were implemented using a Java application executed in the browser.
Figure 2 shows the approach for use case 1 (depicted as TKmed Direkt in Figure 1):
requesting a link by providing the name and email address and selecting the desired
destination. The list of destinations obtained includes only those that have agreed to
receive uploaded data. Usually they have generated a specific destination within their
organizational units for receiving uploaded data. The request results in an email with a
link for the upload (Figure 3).
Figure 4 shows the result of activating the link. The upload allows the user to select
data objects (DICOM CD, DICOM images, directories, files) and to add a short message
informing the selected receiver. With “Send now” the transfer is started and visualized
by a progress bar and the list of transferred data objects.
For use case 3, a slightly different approach has been implemented to simplify the
upload for medical professionals (shown as TKmed Direkt Professional in Figure 1).
Instead of repeatedly requesting links for uploads and performing the upload, medical
professionals may be invited by a recipient, e.g. a colleague or a hospital. In this
case, the recipient generates a link for uploads and provides it to the medical
professional, e.g. via email. The link usage may be constrained by the number of usages,
the accumulated data volume of successful transfers, or a validity period.
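The link constraints just described (number of usages, accumulated data volume, validity period) amount to a simple state check before each upload. A sketch, with field names of our own choosing:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UploadLink:
    """Constraints on a generated upload link, as described in the text:
    usage count, accumulated data volume, and validity period."""
    max_uses: int
    max_bytes: int
    valid_until: datetime
    uses: int = 0
    bytes_transferred: int = 0

    def may_upload(self, size: int, now: datetime) -> bool:
        """Check all three constraints before accepting a transfer."""
        return (self.uses < self.max_uses
                and self.bytes_transferred + size <= self.max_bytes
                and now <= self.valid_until)

    def record_upload(self, size: int) -> None:
        """Update the counters after a successful transfer."""
        self.uses += 1
        self.bytes_transferred += size

# Hypothetical link: 3 uses, 500 MB total, valid until 1 March 2017
link = UploadLink(max_uses=3, max_bytes=500_000_000,
                  valid_until=datetime(2017, 3, 1))
now = datetime(2017, 2, 14)
if link.may_upload(100_000_000, now):
    link.record_upload(100_000_000)
```

The recipient would configure these limits when generating the link; the server enforces them on every upload attempt.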
For use case 2, the person receives a link for accessing the provided data objects and
uses the obtained token information to authenticate. As a result, access to the data
objects is granted.
For all use cases it is worth noting that any data object is kept on the TKmed
infrastructure for up to 14 days only in order to avoid objections based on data retention.
The recipients, patients or medical professionals, are responsible for data object
management, either locally or using a cloud-based record according to the relevant
applicable data protection regulations.
Since, from a patient's view, access is granted for upload and download without
providing long-term storage of patient data, no costs are acceptable for the patient.
This expectation is in line with the experience of providers who offered personal health
records for patients at reasonable cost and subsequently withdrew their offer from the
market due to low user numbers and a lack of revenue. A comparable reasoning applies to
ad hoc usage by medical professionals. On the other hand, a practice or hospital
providing such a service and already being involved in tele-applications improves its
standing and market position, which justifies paying for the ad hoc transfer service.
Payment schemes may be based on licensing this service, on the number of accounts
established or in use, or on the amount of data transferred. In the current
implementation, each organization registered with TKmed that wants to provide this ad
hoc transfer service needs to obtain a license.
The presented functionality has been released and is routinely used by about 35 of the
currently 180 hospitals and practices, predominantly the larger organizations. The usage
covers all three identified use cases and, in some organizations, extends to patients
from abroad. A demand to facilitate the inclusion of the described functionality in the
web presence of organizations and to allow for branding has been raised and will be
taken up in subsequent versions.
4. Discussion
provide sufficient information on patient demographics and context data, but with the
caveat of limited ease of use, reducing user acceptance and usability.
5. References
[1] IAEA, Worldwide Implementation of Digital Imaging in Radiology, IAEA Human Health Series, No
28, 2015
[2] Bashshur RL, Krupinski EA, Weinstein RS, Dunn MR, Bashshur N, The Empirical Foundations of
Telepathology: Evidence of Feasibility and Intermediate Effects, Telemedicine and e-Health 23 (3): 1-
37, 2017
[3] European Society of Radiology, ESR white paper on teleradiology: an update from the teleradiology
subgroup, Insights Imaging 5(1): 1–8, 2014
[4] European Society of Radiology, ESR teleradiology survey: results, Insights Imaging, 5: 463-479, 2016
[5] Staemmler M, Walz M, Weisser G, Engelmann U, Weininger R, Ernstberger A, Sturm J, Establishing
End-to-End Security in a Nationwide Network for Telecooperation, in Mantas J et al. MIE 2012, IOS
Press, Amsterdam, pp.512-516, 2012
[6] German Federal Office for Information Security (BSI), Measure M4.133,
https://www.bsi.bund.de/DE/Themen/ITGrundschutz/ITGrundschutzKataloge/Inhalt/_content/m/m04/
m04133.html?nn=6610622 (last access: 14.2.2017)
[7] Walker J, Meltsner M, Delbanco T, US experience with doctors and patients sharing clinical notes, BMJ,
pp 350-352, 2015
[8] Van Ooijen PMA, Roosjen R, de Blecourt MJ, van Dam R, Broekema A, Oudkerk M. Evaluation of the
Use of CD-ROM Upload into the PACS or Institutional Web Server. Journal of Digital Imaging; 19
(Suppl 1):72-77, 2006
[9] Hamdi O, Chalouf MA, Ouattara D, Krief F, eHealth: Survey on research projects, comparative study
of telemonitoring architectures and main issues, Journal of Network and Computer Applications,
46:100–112, 2014
[10] de Greiff A, Zu- und Einweiserbindung mittels Uploadportal, http://dicomtreffen.unimedizin-
mainz.de/assets/images/PDF/2016/greiff.pdf, last access: 14.2.2017
[11] Ammenwerth E, Schnell-Inderst P, Hoerbst A, Patient Empowerment by Electronic Health Records:
First Results of a Systematic Review on the Benefit of Patient Portals, in Stoicu-Tivadar L, et al. e-
health Across Borders without Boundaries. IOS Press Vol 165 pp. 63-67. 2011
[12] Czwoydzinski J, Eßeling R, Meier N, Heindel W, Lenzen H, xPIPE – Reception of DICOM Data from
any Sender via the Internet, Fortschr Röntgenstr 187(05): 380-384, 2015
[13] Health Data Space, https://www.telepaxx.com/category/health-cloud, last access: 14.2.2017
[14] Winblad I, Hämäläinen P, Reponen J, What is found positive in healthcare information and
communication technology implementation? - The results of a nationwide survey in Finland. Telemed J
E Health 17(2):118-23, 2011
[15] Sinha P, Sunder G, Bendale P, Mantri M, Dande A, Electronic Health Record, Wiley, IEEE Press, 2013
[16] DICOM Standard, medical.nema.org/DICOM (2016)
[17] IHE DI Profile, RAD_TF_Vol3, www.ihe.net, 2016
[18] HL7 Messaging, Medical Record / Information Management, www.hl7.org, 2016
[19] IHE IT Infrastructure, Volumes 1-3, www.ihe.net, 2016
Health Informatics Meets eHealth 63
doi:10.3233/978-1-61499-759-7-63
1. Introduction
1 Corresponding Author: Karl Holzer, CGM Clinical Austria GmbH, Pachergasse 2, 4400 Steyr, Austria. E-Mail: karl.holzer@cgm.com
64 O. Krauss et al. / Challenges and Approaches to Make MDTMs Interoperable – The KIMBo Project
through the use of healthcare IT standards, mainly HL7 Fast Healthcare Interoperability
Resources (HL7 FHIR).
2. Methods
The activities in KIMBo concerning MDTMs are based on the requirements of hospitals.
These requirements were identified by conducting an extensive systematic literature
review to evaluate the state of the art concerning MDTMs [3]. Furthermore, the analysis
was extended by conducting semi-structured expert interviews [4]. Target partners were
coordinators of MDTMs, specifically coordinators of tumor boards in different healthcare
institutions across Austria, to gather requirements and identify differences between the
requirements elicited from the literature and real-world applications.
The next step was an analysis of the different processes identified from the literature
and the expert interviews to obtain a consistent and refined description of the necessary
steps in typical MDTM settings. The results of the analysis were carried into discussions
and committee work with the HL7 community to further refine the definitions for handling
workflows in the draft standard HL7 FHIR STU3 [5]. To validate findings and assumptions
with regard to applicability, explorative prototyping methods from software engineering
were used.
Different healthcare standards and communication profiles were analyzed. Leaving
aside standards concerning the exchange of healthcare data in general (such as the HL7
standards), the IHE Cross Enterprise Tumor Board Workflow Definition (XTB-WD) Profile [9]
is specific to tumor board meetings. It is based on the IHE Cross-Enterprise Document
Workflow Profile (XDW) [12], which defines Content Creator, Consumer and Updater actors
communicating with each other, and how these communications are documented. XTB-WD
defines generalized, linear steps to conduct a tumor board. It was not selected for
implementation because it primarily deals with documenting what happened in the workflow,
as opposed to enabling its definition and automation, which was the goal of the KIMBo
project.
3. Results
The results of the literature review have previously been published in [3]. A total of
837 articles were reviewed, of which 25 were then thoroughly analyzed. The publication
identifies the participating parties in an MDTM (oncologists, pathologists, radiologists,
surgeons, radiotherapists, etc.), the information required to conduct the MDTM (imaging
results, patient summary, histological findings, etc.), the workflows of MDTMs in
thirteen different hospital settings, and the technical and organizational problems and
solutions therein [3].
The review shows that there is an overarching workflow which all hospitals follow
when conducting an MDTM. However, each hospital conducts the meetings with some
differences, primarily concerning the medical issue addressed. Differences were also
found to be caused by local law [6], culture, and the policies set by the hospital
administration or MDTM participants [7]. From a technical perspective, a lack of
interoperability as well as a potential for automating parts of the process were
identified. These areas are what the KIMBo project focuses on addressing [3].
During the analysis phase of the project, semi-structured interviews with different
institutions were conducted. The requirement for the selection of interview partners was
that a defined process or implementation for conducting MDTMs must be in operation at the
potential interview partner's institution, and that the interview partner must be
directly involved in planning or coordinating the boards in operation at the
corresponding institution.
The interviewed organizations included the Comprehensive Cancer Center
Graz/University Hospital Graz, Hospital of Elisabethinen Linz, Vienna Hospital
Association (KAV) and the Hospital “Krankenhaus der Barmherzigen Schwestern” Linz.
The interviews were conducted following an interview guide/question catalogue in order
to deliver comparable results and to identify differences in the MDTM-setup of the
corresponding interview partner's institution. Questions were derived partly from the
outcomes of the already conducted literature review [3], both to validate its findings
and to obtain more detailed information on specific topics not yet treated in the
identified literature (e.g. “How important is it for your organization to involve
external specialists ad hoc in specific boards?”), in order to enable a proper
architecture and system design phase.
The interview recordings were gathered and common requirements as well as major
differences were identified for each of the interview questions.
The identified processes for handling MDTMs were very similar from a high-level
perspective, but differed in detail. Examples include the handling of patients to be
discussed by the board who are not yet available in the institution's documentation
systems, patient data management, and the invitation of MDTM participants in preparation
for the actual conduct of the MDTM.
In the implemented setups, the accessibility of the discussed patient's documentation
in the board solution in use showed considerable differences, ranging from selective
access and assignment of specific documents to the board participants, to complete
access to the patient's data within the corresponding organization's boundaries.
At three institutions it was additionally possible to attend cancer boards and to evaluate
the interview results against the real-world implementation of the designed processes.
This comparison showed that the designed processes were largely followed at all
institutions, but it also showed the need for flexibility, e.g. for urgent cases that had to
be discussed although they had initially been planned for future MDTMs.
In order to cope with interoperability issues, the HL7 FHIR draft standard was selected.
Since HL7 FHIR is still under development, some of the essential FHIR resources were
not finished, or did not exist at all, when KIMBo was started. It was therefore necessary
for the project team to be actively involved in the standardization work driven by various
HL7 work groups. This comprises participation in the weekly calls of the FHIR work
group "Workflow" as well as regular attendance at the HL7 Working Group Meetings
(WGMs), beginning in spring 2016.
Besides furthering the development of necessary FHIR resources, another purpose of the
active involvement in the FHIR work groups is to become aware of potential changes to the
66 O. Krauss et al. / Challenges and Approaches to Make MDTMs Interoperable – The KIMBo Project
used resources as early as possible. During this phase, the work group developed
workflow-related patterns for requests and events, together with the use of the
corresponding FHIR resources; these patterns were later applied by other HL7 work
groups to all relevant resources. Furthermore, a dedicated workflow resource, "Task",
was created.
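The request/event pattern and the Task resource mentioned above can be illustrated with a small sketch. This is not KIMBo code: the field names follow the FHIR Task resource, while the patient identifier, date, display text, and the helper function are invented for illustration.

```python
# Illustrative sketch (not from the project): a minimal HL7 FHIR "Task"
# resource as it might be used to request that a patient case be
# discussed in an upcoming MDTM. Identifiers and texts are invented.
mdtm_task = {
    "resourceType": "Task",
    "status": "requested",   # request/event pattern: not yet accepted
    "intent": "order",
    "description": "Discuss patient case in the next colorectal tumor board",
    "for": {"reference": "Patient/example-123"},  # hypothetical patient id
    "authoredOn": "2017-01-27",
}

def is_actionable(task):
    """Check the minimal fields a board scheduler would need before queuing."""
    required = ("resourceType", "status", "for")
    return all(k in task for k in required) and task["resourceType"] == "Task"

print(is_actionable(mdtm_task))  # True
```

A receiving system would then move the Task through the status values defined by the workflow pattern (e.g. from "requested" to "accepted" to "completed") as the case passes through the board.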
To reach the desired outcome and to cover the identified requirements, the approach
considered two different perspectives: (1) the architecture of the necessary (software)
components and (2) the definition of the process to be executed within the MDTM.
Figure 1. Architecture overview of the KIMBo project, showing participating Hospital Information Systems
(HIS) and Radiology Information Systems (RIS) / Picture Archiving and Communication Systems (PACS).
A web UI attached to the KWB allows authorized participants access during the MDTM.
Thus, intermittent participants, such as the patient or the family physician, who will
likely take part in only a single meeting, can participate without needing to install
additional software.
When the MDTM is started, the cases are reviewed sequentially. In clear-cut patient
cases, a pre-defined treatment plan fitting the patient's medical situation is usually
selected as the recommendation for further treatment. If further discussion is required,
additional information can be requested for review, or a specific treatment plan can be
designed for the patient. Finally, the results of the MDTM are verified by the participants
and made available.
3.5. Prototyping
At the time of writing, most prototyping had addressed the integration of the necessary
backend components defined in the architecture blueprint, and how the necessary
business logic can be executed on top of them (especially the utilization of the HL7 FHIR
resources), in order to verify that the identified requirements and assumptions are
technically feasible.
Upcoming prototypes will also include user interfaces, in order to see how the backend
components integrate with user actions and workflows and whether they satisfy the
requirements from a usability perspective.
The need for multidisciplinary team meetings will gain traction in the coming years
because of current political challenges (e.g. the primary health care settings outlined in
[11]) as well as organizational ones (e.g. interdisciplinary models of care, collaborative
treatment of patients, and discussion of their paths through the healthcare landscape). To
facilitate the transition to, and execution of, these models, information technology as
well as a common standard (e.g. HL7 FHIR resources) for the interoperability of the
participants' IT systems needs to be established.
As found in the literature review [3] as well as in the expert interviews, a key issue with
existing software solutions is their limited interoperability with the other IT systems in
place, together with a lack of (organizational) interoperability when external specialists
need to be involved. Utilizing modern standards (still in development) such as HL7 FHIR
can help exploit the potential of MDTMs across different areas of healthcare and support
the collaborative efforts of interdisciplinary care.
The next steps within KIMBo will be further prototyping and integration of the necessary
components, as well as evaluations with relevant stakeholders to validate the coverage of
the identified requirements. Expected results of this evaluation are new requirements which
Acknowledgment
The research project KIMBo has received funding from the Austrian research agency
FFG under the General Programme.
The authors also want to thank the interview partners and organizations mentioned in
Section 3.2 for taking the time and for the valuable insights gained.
References
[1] N.S. El Saghir, N.L. Keating, R.W. Carlson, K.E. Khoury, L. Fallowfield, Tumor boards: optimizing
the structure and improving efficiency of multidisciplinary management of patients with cancer
worldwide, American Society of Clinical Oncology, 2014
[2] Li, J., Robertson, T., Hansen, S., Mansfield, T., Kjeldskov, J.: Multidisciplinary Medical Team
Meetings: A Field Study of Collaboration in Health Care. OZCHI 2008 Proceedings, 2008
[3] O. Krauss, M. Angermaier, E. Helm, Multidisciplinary Team Meetings – A Literature Based Process
Analysis, in: Lecture Notes in: Information Technology in Bio- and Medical Informatics: 7th
International Conference, Springer International Publishing, 2016, pp. 115-129
[4] R. Edwards, J. Holland, What forms can qualitative interviews take? in: What is qualitative
interviewing?, Bloomsbury Publishing, 2013, pp. 29–42
[5] HL7.org, FHIR STU3 Candidate (v1.9.0-10905), http://build.fhir.org/index.html, last access:
27.01.2017
[6] Mitglieder des Onkologie-Beirates, Krebsrahmenprogramm Österreich, Bundesministerium für
Gesundheit, Wien, 2014
[7] Jazieh, A. R., Tumor Boards: Beyond the Patient Care Conference. Journal of Cancer Education 26,
405-408, 2011
[8] V. Stiehl, Process Driven Applications with BPMN, Springer International Publishing, 2016
[9] IHE PCC Technical Committee, Cross Enterprise Tumor Board Workflow Definition (XTB-WD) Trial
Implementation, IHE International, 2014
[10] HL7.org, FHIR Workflow Description, http://build.fhir.org/workflow.html, last access: 27.01.2017
[11] Bundesministerium für Gesundheit und Frauen, Zielsteuerung-Gesundheit ab 2017,
http://www.bmgf.gv.at/home/Gesundheit/Gesundheitsreform/Zielsteuerung_Gesundheit_ab_2017, last
access: 30.01.2017
[12] IHE ITI Technical Committee, Technical Framework Volume 1, Revision 13.0, IHE International, 2016
70 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-70
1. Introduction
Translational research requires the integration of heterogeneous data (both clinical and
omics data) into a unified view for analysis [1]. Such a view can serve researchers both
for the generation and validation of hypotheses and for cohort selection and biomarker
discovery. It may also enable the reuse of valuable existing data, thereby reducing costs
and increasing research effectiveness [2]. Following Bauer et al. [3], integration means
for us that "1) the different data types of interest are accessible via a single platform, 2)
that data are cross-referenced (i.e. different types of patient-specific data such as
molecular and clinical data can be linked) and 3) that data formats and platform
infrastructures facilitate querying".
1 Corresponding Author: Jan Christoph, Chair of Medical Informatics, Friedrich-Alexander-Universität
Erlangen-Nürnberg, Wetterkreuz 13, 91058 Erlangen, Germany. E-Mail: jan.christoph@fau.de
J. Christoph et al. / Two Years of tranSMART in a University Hospital 71
Tools like cBioPortal [4], iDASH [5], and tranSMART [6] provide such an integrated
view. They enable the efficient exploration of data by presenting both clinical and omics
data in an integrated way, and they provide ready-made functions for data exploration,
cohort selection, and the generation and validation of hypotheses.
The tranSMART platform has its roots in the i2b2 phenotype framework [7]. An
active open-source community organized by the tranSMART Foundation has been
developing it since 2013. In its database, tranSMART combines clinical data in an entity-
attribute-value store like i2b2 [7] with separate tables for omics data. On top, an
application server provides the user with an intuitive web front-end.
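The entity-attribute-value idea can be sketched in a few lines. This is an illustration of the general EAV pattern, not tranSMART's actual schema; all names and values are invented.

```python
# Sketch of an entity-attribute-value (EAV) store, i2b2-style: each
# clinical fact is a (patient, attribute, value) triple, so adding a
# new attribute requires no schema change.
facts = [
    ("patient-1", "diagnosis", "colorectal cancer"),
    ("patient-1", "age", 64),
    ("patient-2", "diagnosis", "melanoma"),
]

def values_for(attribute, store):
    """Collect all (entity, value) pairs recorded for one attribute."""
    return [(e, v) for (e, a, v) in store if a == attribute]

print(values_for("diagnosis", facts))
# [('patient-1', 'colorectal cancer'), ('patient-2', 'melanoma')]
```

The trade-off is that analyses usually need the data pivoted back into a patients-by-attributes table, which is what the platform's query layer does.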
There are already reviews comparing these platforms [8,9], and publications exist which
mention the use of tranSMART for clinical studies, e.g. [10]. Furthermore, there are
papers which describe tranSMART in detail from a technical point of view: they cover
the platform and the architecture of the system [6,11], its role in the context of a study
infrastructure [2,3,12], and its combination with other platforms (like Galaxy, Minerva
or Genedata Analyst) [13,14] or technologies (like NoSQL or HL7) [15,16].
2. Methods
In June 2014, we ranked all institutes (>100 units) of our university hospital with regard
to their clinical omics activities, based on the annual research report and a literature
search. The leaders of the ten top-ranked research groups were asked for an interview,
which was accepted in nine cases. Each interview was semi-structured and consisted of
two parts: the first part comprised questions on running and planned projects, the
methods used, the resulting or expected data, and finally the perceived obstacles to more
efficient research. In the second part, tranSMART was demonstrated with public data as
an example of an integration platform, to determine whether this kind of software is
considered useful in principle. Three suitable partners (having both need and data, and
covering a broad range of use cases) were selected to establish a prototype of an
integration platform for translational research. A fourth use case arose shortly afterwards
in the form of a large research project with several external partners who needed a
research database for the integration of both clinical and omics data [19].
For the final choice of the platform, the review of Canuel et al. [9] was a very useful
starting point; tranSMART was selected as a platform which fulfills our requirements,
which had arisen, among other sources, from the interviews described above.
At the beginning, tranSMART 1.1 was installed; later, we migrated to every new version
immediately after its release, up to the latest version 16.2. Our installations are based on
Ubuntu Linux 14.04 and use PostgreSQL 9.3 as the database, running in virtual machines
(VMs) on an ESX cluster. Three tranSMART VMs serve development, quality assurance,
and production within our internal hospital network. The fourth VM is publicly available
and used for demonstration and education purposes. All VMs have 8 GB RAM, 4 CPUs,
and between 50 GB and 500 GB of hard disk.
In all four use cases of translational research, we first modeled the data using Microsoft
Excel files in an iterative process between physicians and computer scientists. Finally,
the extract-transform-load (ETL) tool Talend Open Studio Data Integration transformed
the raw data into a form ready for upload to tranSMART.
Since there are several tools for finally uploading clinical data and the different types of
omics data into the tranSMART database, we generated a systematic overview and
practically tested all available tools. In the end, tMDataLoader [20] was chosen because
it supports the most data types, as shown in Table 1.
Being open source, tranSMART offers many possibilities for extensions and
applications. We therefore announced topics for bachelor and master theses for students
of computer science (educational use case 1). Furthermore, we integrated tranSMART
into lectures and used it for practical exercises (educational use case 2). Training material
in the form of slides and screen videos has been prepared, and an exercise with 18 tasks
has been designed to give an overview of the handling of tranSMART and its most
Over the last two years, we have collected all tranSMART-related feedback received
from researchers and students. For the first three use cases of translational research, we
conducted a semi-structured interview with the researchers six weeks after their use of
tranSMART. Furthermore, we evaluated the access logs provided by the admin interface
of tranSMART as well as the Tomcat logs, using an Elasticsearch, Logstash, and Kibana
(ELK) stack [21].
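As a rough illustration of what such a log pipeline does, the sketch below parses one invented access-log line into fields that could then be aggregated; the actual Tomcat log format and the grok patterns used by Logstash may differ.

```python
import re

# Simplified sketch of the parsing step of an ELK pipeline: turn a
# (fictional) Tomcat access-log line into named fields for aggregation.
LINE = ('127.0.0.1 - jdoe [27/Jan/2017:10:15:00 +0100] '
        '"GET /transmart/datasetExplorer HTTP/1.1" 200 5123')

PATTERN = re.compile(
    r'(?P<client>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]+" (?P<status>\d+) (?P<size>\d+)'
)

def parse(line):
    """Return a dict of log fields, or None if the line does not match."""
    m = PATTERN.match(line)
    return m.groupdict() if m else None

rec = parse(LINE)
print(rec["user"], rec["path"], rec["status"])
# jdoe /transmart/datasetExplorer 200
```

Once parsed this way, fields such as user, path, and timestamp can be indexed and visualized, which is essentially what Elasticsearch and Kibana provide on top of Logstash.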
3. Results
In the following, the main use cases are described which have been supported by
tranSMART at our university hospital.
Research use case 1: Supporting the analysis of an ongoing prospective study of
colorectal cancer of the Division of Molecular and Experimental Surgery.
The final dataset of more than 600 patients was derived from a CSV export of an
electronic data capture system which provided over 500 clinical items in six electronic
case report forms (eCRFs) per patient (baseline and up to five follow-ups). Furthermore,
for each patient we integrated two Excel files with about 50 gene expressions (generated
by an RT-qPCR analysis) and two protein quantifications, respectively. About 15
additional items had to be calculated within the ETL process, since they were either not
explicitly available in the raw data (like survival time or disease-free survival) or their
original categorization had to be transformed to be suitable for analyses (e.g. tumor
locations in only three categories instead of the twelve defined in the eCRF). To support
querying and data exploration in tranSMART before version 16.2, gene expression data
were modeled both as clinical and as high-dimensional data. This dataset was
complemented by clinical data (13 items) and gene expression data (120 genes) of 177
patients from a comparable public study (GEO GSE17536 [22]) to support the generation
of hypotheses. Every six months, the whole ETL process is repeated to add new data
gained from further follow-ups and further omics analyses. The whole iterative process
of data modeling and establishing the periodic ETL took us about 500-750 hours. In
return, the resulting tranSMART project has been intensively analyzed by four
researchers of the department, with altogether about 1,000 logins during the last two
years and an average usage time of about one hour per login.
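The derived ETL items mentioned above (survival time, regrouped tumor locations) can be sketched as follows; the field names and the concrete category mapping are invented for illustration and are not taken from the study.

```python
from datetime import date

# Hedged sketch of derived items computed during ETL. The paper mentions
# survival time and a coarser tumor-location coding; everything concrete
# below (dates, location names, the 3-category mapping) is invented.

def survival_days(diagnosis, last_contact):
    """Survival time is not in the raw export, so it is derived."""
    return (last_contact - diagnosis).days

# Collapse a fine-grained eCRF coding into three coarser categories.
LOCATION_MAP = {
    "ascending colon": "right colon",
    "transverse colon": "right colon",
    "descending colon": "left colon",
    "sigmoid colon": "left colon",
    "rectum": "rectum",
}

print(survival_days(date(2014, 6, 1), date(2016, 6, 1)))  # 731
print(LOCATION_MAP["sigmoid colon"])                      # left colon
```

Computing such attributes up front matters because, as discussed later, values cannot be corrected or amended once they are inside tranSMART.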
Research use case 2: Supporting the final analysis of a completed retrospective
study of cancer vaccination of the Department of Dermatology.
The raw data of 62 patients, each with 37 clinical items for up to 42 visits, was provided
as an Excel file. The data contained no omics data and had to be imported only once.
Since tranSMART still lacks a feasible "time series" concept, each parameter had to be
summarized over all visits of a patient into one single value. For numerical variables, we
used the minimum, the maximum, or the average value. Categorical variables were
discussed with a physician to determine the mapping rule for representing the different
categorical values over all visits in a final value. Modeling and importing the data took
us about 40 hours. The resulting tranSMART project was analyzed by two users for
about three months, with altogether about 50 logins and an average usage time of about
1.5 hours per login.
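The per-patient summarization described above can be sketched as follows; the concrete aggregation choices and the categorical mapping rule are assumptions made for this example.

```python
from statistics import mean

# Sketch of collapsing visit-level data into one value per parameter,
# since tranSMART lacks a time-series concept. The rules below are
# illustrative, not those agreed on with the physicians in the study.

def summarize_numeric(values, rule="mean"):
    """Reduce a numeric series over all visits to a single value."""
    return {"min": min, "max": max, "mean": mean}[rule](values)

def summarize_categorical(values):
    # Example mapping rule: "ever abnormal" wins over "always normal".
    return "abnormal" if "abnormal" in values else "normal"

visits = [3.1, 2.7, 4.0]                 # e.g. a lab value over 3 visits
print(summarize_numeric(visits, "max"))  # 4.0
print(summarize_categorical(["normal", "abnormal", "normal"]))  # abnormal
```

The key design point is that each rule is chosen per parameter, which is why a physician had to be involved in defining the mappings.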
Research use case 3: Supporting the analysis of a completed retrospective study
on gastrointestinal stroma tumors of the department of Internal Medicine.
The rather small dataset of four clinical items (no omics data) of 32 patients was available
in the form of an Excel file. It could be imported straightforwardly within half an hour.
The resulting tranSMART project was analyzed in the context of the doctoral thesis of a
medical student, who performed survival analyses for one week, with altogether six
logins and an average usage time of about 30 minutes per login.
While the data was successfully imported, the resulting tranSMART project has
unfortunately not been used as a source of information. It turned out that directly using
the relationally structured source files (Excel, CSV) seemed more straightforward to the
programmers. No added value was perceived in using the entity-attribute-value scheme
or the RESTful API provided by tranSMART. Researchers who wanted to develop new
analysis methods based on these data stated that they would rather have preferred a
common data model such as OMOP/OHDSI [23].
Across these use cases, the five analysis methods of tranSMART used most often via the
web interface were, in descending order, summary statistics, ANOVA, survival analysis,
the Fisher test, and correlation analysis. The feedback of researchers who used the
analysis methods of tranSMART via the web front-end was very positive (e.g.
"tranSMART opens a magnitude of new opportunities for our future research!", "Also
yesterday, I worked with great delight with this program.", "It is a dream.", "Greetings
from the dermatology casino: I'm very pleased with my new toy!").
The overall perception of tranSMART among researchers as well as students was very
positive. It is considered a useful tool for translational research that is able to integrate
clinical and omics data and provides many methods for analyzing such data.
However, having omics data to analyze does not automatically guarantee success if
researchers already have toolchains in place which are not compatible with tranSMART
(use case 4). On the other hand, tranSMART can be used successfully even without
omics data (use cases 2 and 3). The size of the study (number of data elements) seems to
be less important (use case 3), as long as it justifies the data modeling and import effort,
which can range from 30 minutes from scratch up to several months.
TranSMART is especially suited for cohort identification, data exploration, and the
generation and validation of hypotheses. The user can choose predefined functions such
as correlation analysis, ANOVA, or survival analysis from a catalog and parametrize
them according to his or her needs. Although it is possible to build one's own methods
as SmartR workflows, attempts to perform basic research in the field of computational
molecular biology with tranSMART will likely not succeed. The reason for this
limitation is that the platform, despite its RESTful API, is not designed to integrate
program code such as R or Python to solve a chain of small individual problems with
intermediate results.
Most medically focused researchers who were interested in using tranSMART had
average computer skills but were not experienced with programming languages like R
and had no more than basic knowledge of statistics programs like SPSS. Those who were
more experienced considered tranSMART to be more useful for education than for their
own research.
The easy-to-use interface is also a pitfall: it makes it essential to have some statistical
background and to be aware that tranSMART does not check whether even basic
preconditions for statistical tests are fulfilled (e.g. a sufficient sample size and a normal
distribution for the t-test). Although these could in theory be checked automatically, the
user still has to be aware of confounders etc. and take them into consideration. This
aspect can be, and has been, conveyed in a playful and interactive manner, and it has in
fact been highly appreciated by the students.
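As an illustration of the kind of precondition check that tranSMART omits, the sketch below screens a sample before a t-test would be run; it is not a tranSMART feature, and the sample-size threshold and skewness cut-off are arbitrary choices for this example.

```python
from statistics import mean, stdev

# Crude precondition screen before a t-test: enough observations and
# roughly symmetric data. A real check would use a proper normality
# test (e.g. Shapiro-Wilk); this is a simplified stdlib-only heuristic.

def skewness(xs):
    """Adjusted Fisher-Pearson sample skewness."""
    m, s, n = mean(xs), stdev(xs), len(xs)
    return sum(((x - m) / s) ** 3 for x in xs) * n / ((n - 1) * (n - 2))

def t_test_preconditions_ok(sample, min_n=30, max_abs_skew=1.0):
    """Screen a sample: sufficient size and roughly symmetric shape."""
    return len(sample) >= min_n and abs(skewness(sample)) <= max_abs_skew

print(t_test_preconditions_ok(list(range(30))))  # True: n=30, symmetric
print(t_test_preconditions_ok([1, 2, 3]))        # False: too few values
```

A wrapper like this could refuse to run, or at least warn, instead of silently producing a p-value on unsuitable data, which is exactly the awareness the students had to be taught.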
Data modeling is a crucial step prior to the import of data, because it is neither possible
to correct values within tranSMART nor to amend attributes. For example, if the
difference between the survival time and the disease-free survival time is required for an
analysis, this difference has to be calculated and imported as an additional attribute
during the ETL process. Pseudonymization is also not supported: the patient identifier is
visible via the grid view feature in exactly the same form in which it was imported.
The clinical data model requires exactly one row per patient, with the same superset of
variables as columns for all patients. This makes it challenging to represent 1:n
relationships in the data, such as multiple diagnoses of a patient or time series of lab
values. Fortunately, in our use cases we were able to work around this limitation with
aggregation and mapping rules, but whether a satisfying solution can be found depends
on the individual use case. A major limitation of the data model is the lack of
relationships between attributes, such as between primary and secondary diagnosis or
between dependent attributes describing a finding: according to the tranSMART
Foundation, the next release, 17.1, expected in summer 2017, is to remove this limitation
by becoming more compatible with i2b2, which uses the concept of modifiers for this
purpose.
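The one-row-per-patient workaround for 1:n data can be sketched as follows; the column names and diagnosis codes are invented for illustration.

```python
# Sketch of flattening 1:n data (here, multiple diagnoses per patient)
# into a single row per patient with the same superset of columns for
# everyone, as the tranSMART clinical data model requires.
patients = {
    "p1": {"age": 64, "diagnoses": ["C18.7", "C78.7"]},
    "p2": {"age": 58, "diagnoses": ["C43.5"]},
}

# The column superset is driven by the patient with the most diagnoses.
max_dx = max(len(p["diagnoses"]) for p in patients.values())
columns = ["age"] + [f"diagnosis_{i + 1}" for i in range(max_dx)]

rows = {}
for pid, data in patients.items():
    row = {"age": data["age"]}
    for i in range(max_dx):          # same columns for every patient
        dx = data["diagnoses"]
        row[f"diagnosis_{i + 1}"] = dx[i] if i < len(dx) else None
    rows[pid] = row

print(columns)     # ['age', 'diagnosis_1', 'diagnosis_2']
print(rows["p2"])  # {'age': 58, 'diagnosis_1': 'C43.5', 'diagnosis_2': None}
```

The downside of this flattening is visible immediately: the relationship between the diagnosis columns (e.g. which one is primary) is lost, which is the limitation the planned modifier concept is meant to address.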
The tool tMDataLoader fulfilled all our current needs for importing the data after the
modeling process. However, with regard to clinical data, it shows the same lack of
incremental data updates and the same limitation to flat CSV files (no XML or ODM) as
all other publicly available tools.
Technical Aspects
The tranSMART application has run sufficiently stably on our servers, except for some
sporadic overutilization of the CPU which rendered the web interface unusable for
researchers.
The documentation provided by the community is more extensive than, e.g., that of i2b2,
but it lacks details, especially for plugins like SmartR.
Although it should be possible to create and modify plugins, we found it quite arduous:
developing our own SmartR workflow for survival analysis took several months.
Moreover, tranSMART does not yet support distributed computing, so we performed our
GWAS calculation externally with SparkR on a Hadoop cluster [26].
After importing the data into tranSMART, it is highly recommended to validate them; a
biometrician, for example, might use the R interface to access the project and exactly the
same data which the physician previously analyzed via the web front-end.
Even though the reproducibility of analysis results is important for researchers, it cannot
be guaranteed across tranSMART versions. For example, major changes occurred in the
results of the survival analysis between versions 1.2.3 and 1.2.4 due to minor changes in
the R scripts. Although only one character had been changed in the source code², the
resulting negation yields completely different analysis results. Unfortunately, there was
no notification or documentation about this change in the programmers' change log or
elsewhere. Even if the documentation of tranSMART were complete, checks and
validation of the platform would be hampered by the dependencies on numerous external
libraries such as R packages. An updated R package, for example, caused missing values
in a survival-related table after the tranSMART upgrade from version 1.2.5 to 16.1³.
As a consequence, it is necessary to validate the imported data as well as the software
configuration including all dependencies. Snapshots of the virtual machines are used to
secure setups; but as this procedure is quite storage-intensive, we are considering a more
lightweight technology such as Docker in the future. Additionally, it would be beneficial
if tranSMART supported reproducibility through version management, similar to other
tools such as Galaxy.
5. Conclusion
² status <- currentDataSubset[[censor.field]] was changed to status <- !currentDataSubset[[censor.field]]
³ Version 16.1 is the direct successor of 1.2.5 due to a new naming convention.
our use cases, the data could be modeled and imported, although the platform was
ultimately a success in only three cases.
Data modeling was the most important task in providing tranSMART as a service; it
requires in-depth knowledge of the domain, which is probably more crucial than the
choice of the platform. While tranSMART provides functions for data analysis in an
intuitive, graphical way, it still requires basic statistical knowledge, as it does not exempt
the user from testing the prerequisites of an analysis. Using tranSMART in education,
as part of lectures and exercises for medical students and computer scientists, helps to
sensitize future users.
TranSMART has been appreciated at our university hospital by translational
researchers (“tranSMART opens a magnitude of new opportunities for our future
research!”) and will be fostered further as a productive service for interested groups.
Acknowledgments
The research has been supported by the Smart Data Program of the German Federal
Ministry for Economic Affairs and Energy (1MT14001B). The present work was
performed in fulfillment of the requirements for obtaining the degree "Dr. rer. biol. hum."
from the Friedrich-Alexander-Universität Erlangen-Nürnberg (JC). We thank Mrs. B.
Schuler-Thurner (Department of Dermatology, Friedrich-Alexander-Universität
Erlangen-Nürnberg) for her cooperation and participation in adopting tranSMART at her
department in research use case 2. Finally, we thank Stephanie Newe for her linguistic
proof-reading.
6. References
[1] Halevy, A., Rajaraman, A., and Ordille, J. 2006. Data integration: the teenage years. In Proceedings of
the 32nd international conference on Very large data bases, 9–16.
[2] Jonnagaddala, J., Croucher, J. L., Jue, T. R., Meagher, N. S., Caruso, L., Ward, R., and Hawkins, N. J.
2016. Integration and Analysis of Heterogeneous Colorectal Cancer Data for Translational Research.
Studies in health technology and informatics 225, 387–391.
[3] Bauer, C. R., Knecht, C., Fretter, C., Baum, B., Jendrossek, S., Ruhlemann, M., Heinsen, F.-A., Umbach,
N., Grimbacher, B., Franke, A., Lieb, W., Krawczak, M., Hutt, M.-T., and Sax, U. 2016.
Interdisciplinary approach towards a systems medicine toolbox using the example of inflammatory
diseases. Briefings in bioinformatics.
[4] Gao, J., Aksoy, B. A., Dogrusoz, U., Dresdner, G., Gross, B., Sumer, S. O., Sun, Y., Jacobsen, A., Sinha,
R., Larsson, E., Cerami, E., Sander, C., and Schultz, N. 2013. Integrative analysis of complex cancer
genomics and clinical profiles using the cBioPortal. Science signaling 6, 269, pl1.
[5] Ohno-Machado, L., Bafna, V., Boxwala, A. A., Chapman, B. E., Chapman, W. W., Chaudhuri, K., Day,
M. E., Farcas, C., Heintzman, N. D., Jiang, X., Kim, H., Kim, J., Matheny, M. E., Resnic, F. S., and
Vinterbo, S. A. 2012. iDASH: integrating data for analysis, anonymization, and sharing. Journal of the
American Medical Informatics Association : JAMIA 19, 2, 196–201.
[6] Athey, B. D., Braxenthaler, M., Haas, M., and Guo, Y. 2013. tranSMART: An Open Source and
Community-Driven Informatics and Data Sharing Platform for Clinical and Translational Research.
AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational
Science 2013, 6–8.
[7] Kohane, I. S., Churchill, S. E., and Murphy, S. N. 2012. A translational engine at the national scale:
informatics for integrating biology and the bedside. Journal of the American Medical Informatics
Association : JAMIA 19, 2, 181–185.
[8] Dunn, W., JR, Burgun, A., Krebs, M.-O., and Rance, B. 2016. Exploring and visualizing
multidimensional data in translational research platforms. Briefings in bioinformatics.
J. Christoph et al. / Two Years of tranSMART in a University Hospital 79
[9] Canuel, V., Rance, B., Avillach, P., Degoulet, P., and Burgun, A. 2015. Translational research platforms
integrating clinical and omics data: a review of publicly available solutions. Briefings in bioinformatics
16, 2, 280–290.
[10] Haas, M., Stephenson, D., Romero, K., Gordon, M. F., Zach, N., and Geerts, H. 2016. Big data to smart
data in Alzheimer's disease: Real-world examples of advanced modeling and simulation. Alzheimer's &
dementia : the journal of the Alzheimer's Association 12, 9, 1022–1030.
[11] Scheufele, E., Aronzon, D., Coopersmith, R., McDuffie, M. T., Kapoor, M., Uhrich, C. A., Avitabile, J.
E., Liu, J., Housman, D., and Palchuk, M. B. 2014. tranSMART: An Open Source Knowledge
Management and High Content Data Analytics Platform. AMIA Summits on Translational Science
Proceedings 2014, 96–101.
[12] Rance, B., Canuel, V., Countouris, H., Laurent-Puig, P., and Burgun, A. 2016. Integrating
Heterogeneous Biomedical Data for Cancer Research: the CARPEM infrastructure. Applied clinical
informatics 7, 2, 260–274.
[13] Satagopam, V., Gu, W., Eifes, S., Gawron, P., Ostaszewski, M., Gebel, S., Barbosa-Silva, A., Balling,
80 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-80
1. Introduction
1 Corresponding Author: Jozef Aerts, University of Applied Sciences FH Joanneum, Eggenberger Allee 11, 8020 Graz, Austria, E-Mail: Jozef.Aerts@FH-Joanneum.at
J. Aerts / The Use of RESTful Web Services in Medical Informatics and Clinical Research 81
functionality. In the review process for new drug applications and treatments at the regulatory authorities FDA, PMDA and EMA, web services are as yet unknown, although such services could help bring new treatments to patients earlier and considerably improve the quality of the review process [6].
2. Methods
2.1. Inventory of available web services

As we are developing RESTful web services for use in clinical research, we first made an inventory of web services (based on SOAP as well as on REST) that are publicly available for use in medical informatics and in clinical research. We first performed several searches in PubMed and then extended the search to a classic internet search, as most of these services are described on web pages. However, as RESTful web services are usually easier to implement in software [7], and many of them support the JSON notation, the data format preferred by many mobile application developers [8], we dropped SOAP and concentrated further on RESTful web services in medical informatics and clinical research. The results of our inventory are described in the "Results" section. The scope of our research excludes bioinformatics, for which a large number of web services is available [9].
2.2. Development of new RESTful web services for use in clinical research
As we found that there are almost no public RESTful web services available for use in clinical research, we developed a number of them in the following areas:
• Lookup services for LOINC (Logical Observation Identifiers Names and Codes) codes
• UCUM (Unified Code for Units of Measure) unit conversion services
• Services for using CDISC controlled terminology
• Services for use with SDTM (Study Data Tabulation Model) electronic submissions to the FDA and PMDA
• Experimental services for supporting modern "open rule" validations for electronic submissions to the FDA and PMDA
These RESTful web services were developed in the Java language, using the
"Jersey" toolkit [10] and deployed as web applications on a Java application server.
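To illustrate what such a lookup service looks like, the sketch below implements a minimal RESTful endpoint using only the JDK's built-in HTTP server (the actual services were built with the Jersey toolkit and deployed on an application server; the /loinc path and the JSON response layout here are assumptions for illustration, not the real API):

```java
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal sketch of a RESTful lookup endpoint. A GET request such as
// GET /loinc/8310-5 returns a small JSON object for the requested code.
public class RestSketch {

    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/loinc/", exchange -> {
            // The requested code is the last path segment, e.g. /loinc/8310-5.
            String path = exchange.getRequestURI().getPath();
            String code = path.substring(path.lastIndexOf('/') + 1);
            byte[] body = ("{\"code\":\"" + code + "\"}")
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

In a real service the handler would query the underlying database instead of echoing the code; the point of the sketch is merely how little machinery a RESTful endpoint requires on the server side.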
A number of CDISC volunteers have developed the "Smart Dataset-XML Viewer" [11], which allows users to inspect and work with electronic submissions to the FDA and PMDA that use the "Dataset-XML" format [12]. It has been named "smart" as its features go far
beyond what the FDA and PMDA have available for inspecting and validating electronic
submissions in SAS-XPT format. This software is written in Java and is freely available
as "open source" [11]. Several of the web services that we developed on the server are
being used ("consumed") by the "Smart Dataset-XML Viewer" (as "client"), as will be
explained in the "Results" section.
3. Results
The most interesting publicly available web services (and especially RESTful web
services) for use in medical informatics that we found originate from:
• The National Library of Medicine (NLM), with its "MedLinePlus Connect" set of web services [13], allows client applications to request information on diagnosis (problem) codes, medications and lab tests. These web services are available as RESTful web services, returning either XML, JSON or JSONP.
• Another set of web services (REST and SOAP) is offered by "RxNav" (also part of NLM), concentrating on drug information [14]. It uses the RxNorm terminology, which is specific to the US. It also provides a RESTful web service for drug interactions [15], based on the RxNorm identification number, and a LOINC mapping service [16], which allows finding related tests.
• HIPAASpace provides a number of web services, with the choice between SOAP and REST, the latter allowing responses in (compressed or non-compressed) JSON or XML [17]. Most of these web services are either in the area of provider insurance, or lookup systems for codes used in health informatics (ICD-x, LOINC, and others). The emphasis is clearly on insurance and reimbursement use cases.
• UMLS offers a number of very interesting RESTful web services, which allow finding relationships between terms in different coding systems [18]. These services are still in beta, but are very promising. For example, they allow client applications to discover relationships between lab tests and diseases. Although the web service requires authentication tokens, we found these relatively easy to implement in client applications.
• OpenFDA offers a number of RESTful web services [19] returning information in JSON format for queries on national drug codes (NDC), structured product labels (SPL), unique ingredient identifiers (UNII), and branded drugs (RxNorm).
• The US National Cancer Institute (NCI) offers a range of RESTful web services for querying the SEER (Surveillance, Epidemiology, and End Results) databases [20]. Responses are formatted as JSON. They require an API key, which can be obtained after registration (free of charge).
• NCI also offers other RESTful web services for use in medical research, such as for querying the Genomic Data Commons (GDC) database [21] and for obtaining information on registered clinical trials [22].
• Several RESTful web services are available for looking up ICD-10 codes [23,24].
• ClinicalTrials.gov, the US registry for clinical trials, offers web services which can be regarded as RESTful [25]. They do, however, return zip files containing sets of XML files. These files can, of course, also be consumed by client applications.
• CDISC recently published an API for its SHARE (Shared Health And Research Electronic) library [26], containing all the CDISC standards. The approach can
It was striking that we did not find any web services in the area of medical informatics from organizations in Europe. For example, DIMDI (Deutsches Institut für Medizinische Dokumentation und Information) does not yet make any public web services available [28]. The only RESTful web services we found from a European organization that may somehow be useful in medical informatics are the web services from Europe PMC [29] for finding articles in the area of life science. Likewise, the EMA (European Medicines Agency) has no equivalent to openFDA [19], and does not allow automated searches for clinical trials in EudraCT, the European clinical trials register [30]; it only allows searching for clinical trials through a browser interface. In the DACH region, we observe that even for very simple things, like looking up a medication ID ("Pharmazentralnummer" in Germany and Austria, "Pharmacode" in Switzerland), no web services are available. Only information about approved medications in Switzerland can be obtained in an automated way, using a SOAP-based web service [31] queried by the GTIN article number of the medication. Nor do any web services seem to be available for working with ICD-10 GM (Germany) or ICD-10 BMG (Austria) codes. Of course, the diversity of languages used in the EU and the fragmentation of information over the different member countries do not make it easy to develop web services in the area of healthcare informatics that can be consumed by modern applications. The lack of even the simplest web services in this area may, however, also have to do with "protectionism on information", i.e. organizations claiming "ownership" of information even when the development or generation of that information was paid for with tax money.
We recently developed a number of new RESTful web services for use in medical informatics and especially in clinical research [32]. These web services fall into four categories:
• RESTful web services for working with LOINC codes
• RESTful web services for working with CDISC controlled terminology and submission data standards variables (CDISC-SDTM)
• A RESTful web service for unit conversions using the UCUM (Unified Code for Units of Measure) [33] system
• A very experimental web service for retrieving machine-readable FDA rules concerning electronic submissions, implemented as XQuery scripts ("Open Rules for CDISC" initiative)
The RESTful web service for working with LOINC codes implements the latest version of the LOINC database (December 2016) [34]. It allows looking up the LOINC name (containing the 5-6 components) and the LOINC long name (lab code description), and, when applicable, the example UCUM units for the given LOINC code. It can be seen as complementary to the RESTful web service of MedlinePlus Connect [13], which
essentially returns a description for the code and a hyperlink to a web page containing information about the laboratory test, which is more suitable as information for patients. Both of these web services have been implemented in the "Smart Dataset-XML Viewer" [11], a review software for use in regulatory submissions to the FDA and PMDA, as will be explained further on in the "Discussion" section. We are currently extending our LOINC web services to allow chaining, e.g. for looking up whether a specific test is a member of a panel, and then providing all other tests of that panel.
We also developed a large number of RESTful web services for use with the CDISC
set of standards [35] and especially the submission standard SDTM and the CDISC
controlled terminology as published by NCI [36]. These web services have also been
implemented in the "Smart Dataset-XML Viewer" [11], but are now also already used
by other applications in the pharma industry. These applications typically try to answer
concrete questions, like "what is the variable label and data type of SDTM variable XYZ
in version ABC of the standard?", or "is CDISC codelist XYZ version ABC extensible
or not?". The underlying base of these web services is a set of databases, containing all
SDTM information since v.1.2 and all CDISC controlled terminology since version
2014-03-28. The approach here differs from the one used by the CDISC SHARE API, as will be discussed in the "Discussion" section.
Furthermore, we developed a RESTful web service for automating unit conversion
using the UCUM notation [33]. The service is not based on conversion tables, but on the property that any UCUM unit can be broken down into a combination of base units. Doing this for both the source and the target unit allows calculating the conversion factor between the two. The algorithm for this goes beyond the scope of this paper and will be described in a subsequent publication. The service allows client applications to automate unit conversions, not only in medicine and clinical research, but also in other areas.
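The base-unit decomposition idea can be illustrated with a minimal sketch, using a tiny hard-coded unit table; the class, method and table are illustrative assumptions only — the real service parses arbitrary UCUM expressions rather than looking units up in a fixed map:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of unit conversion via decomposition into base units: each unit is
// reduced to a factor relative to its base unit (here: the gram for mass),
// and the conversion factor is the ratio of the two factors.
public class UnitConversionSketch {

    // unit symbol -> { factor to the base unit, base dimension }
    private static final Map<String, Object[]> UNITS = new HashMap<>();
    static {
        UNITS.put("g",  new Object[]{1.0,  "mass"});
        UNITS.put("mg", new Object[]{1e-3, "mass"});
        UNITS.put("kg", new Object[]{1e3,  "mass"});
    }

    // Conversion factor from source to target unit; both must share the
    // same base dimension, otherwise they are incommensurable.
    public static double factor(String source, String target) {
        Object[] s = UNITS.get(source);
        Object[] t = UNITS.get(target);
        if (!s[1].equals(t[1])) {
            throw new IllegalArgumentException("incommensurable units");
        }
        return (double) s[0] / (double) t[0];
    }
}
```

For example, factor("mg", "kg") yields 1e-6, i.e. a value in mg is multiplied by 1e-6 to obtain kg; requesting a conversion between units of different dimensions fails rather than producing a meaningless number.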
Finally, we developed an experimental RESTful web service to automatically retrieve the latest updates of the sets of FDA and PMDA rules for electronic submissions [37]. These rules have been implemented before in software by a vendor [38], but in such a way that the rule implementation is hidden, so that users cannot see how exactly each rule was implemented in the software. The "Open Rules for CDISC" initiative of a number of CDISC volunteers aims at making these rule implementations transparent by providing them as machine-readable as well as human-readable scripts. Therefore, they have been rewritten in XQuery, the W3C standard for querying XML [39]. The web service allows client applications to always retrieve the newest version of each of the rules, as the latter are still in development and being optimized for speed and efficacy.
4. Discussion
Figure 1. Use of the LOINC web services in the "Smart Dataset-XML Viewer".
As an example, the software used by the FDA and PMDA to validate electronic
submissions contains a file with NDF-RT (National Drug File Reference Terminology)
codes. This list is updated monthly, but the software itself is only updated every 6 months
to 1 year. The consequence is that end-users have started complaining that valid, but
newer NDF-RT codes are rejected by the software, which may lead to an (automated)
rejection of a new drug application by the FDA.
Combined usage of different web services may open completely new perspectives in medical informatics. For example, the "Smart Dataset-XML Viewer" [11] combines different RESTful web services (from different sources) to help validate electronic submissions to the FDA and PMDA. One of the use cases overcomes the difficulty that reviewers at the FDA and PMDA do not know what each LOINC code means. The viewer triggers our LOINC lookup web service when the user hovers the mouse over a LOINC code, and uses the response to display the basic information (LOINC names and preferred units) as a tooltip (Figure 1). A right mouse click then triggers another RESTful web service from "MedLinePlus Connect" [13], which returns the address of a static website containing more extensive information about the test. The application then opens a new browser window with the address provided by the "MedlinePlus Connect" web service.
The UMLS RESTful web services, although still in beta, already allow generating "networks of information". Some of our early experiments with these services allowed us to create mini-networks in which lab test codes are related to panel codes, and these in turn to diseases.
Our own RESTful web services can be regarded as "bottom up" web services: they
try to answer single questions, like "provide the mandatory variables in SDTM domain
XYZ in version ABC of the SDTM standard", or "is term XYZ a valid term in codelist
MNO version ABC of the CDISC controlled terminology?", meaning that the responses
are always small, very granular pieces of information, which can be used immediately
without much parsing effort. They complement other "top down" web services which
typically contain relatively large amounts of information (e.g. extensive lists), such as
the CDISC SHARE API [5] which is more meant to populate metadata repositories at
pharmaceutical companies, which then can of course be queried by … RESTful (intranet)
web services within the company itself.
Web services, as they are easy to implement and prevent "reinvention of the wheel", can contribute considerably to patient empowerment. Using public web services, software providers could easily build systems that help patients govern their own health, such as helping them to interpret their lab values, find information about different medications for their specific disease, or find clinical trials they can enroll in. Even in Europe, such applications are already beginning to emerge [40]. Such patient empowerment applications are, however, seriously hindered in Europe by the lack of suitable web services, even for the simplest things, like looking up (and comparing) information for different medications by their medication ID.
5. Conclusions
One of the advantages of RESTful web services is that they are easy to implement, both on the server side and on the client side. When combining different web services (from different organizations) on the client side, there is of course the difficulty that one needs to deal with different APIs, different authentication mechanisms and different response formats. Even then, developing client applications that combine different web services is relatively easy and straightforward.
Most web services in the area of medical informatics are provided by US-based organizations like the National Library of Medicine or the National Cancer Institute (both part of the National Institutes of Health, NIH). In Europe, almost no RESTful web services in this area exist, probably due to fragmentation of efforts and the difficulty of having to deal with different countries and languages. So if Europe wants to make a serious effort in patient empowerment, it needs (among other actions) to start creating an infrastructure in which information about diseases, medications, lab tests and much more is accessible through RESTful web services. This requires not only a change in mentality among the "information owners", but also a European equivalent of the US National Library of Medicine.
6. References
1. Introduction
Electronic health information systems enable the electronic collection of medical data and support its high-quality analysis. This is expected to have great potential to improve patient care and medical research. Individuals with very specific characteristics could be identified, which is mandatory for personalized medicine as well as for epidemiological and clinical studies, but general big data applications would also be possible. Certainly, such methods must respect patients' privacy; furthermore, their usefulness must be evaluated very carefully. Due to the heterogeneity of data structures, the necessary integration of data from different stakeholders poses a crucial problem.
Up to now, the exchange and reuse of medical data models has not been common practice in research, even though it is demanded by a growing number of institutions [1,2]. Data models define a structured way to capture medical information. Their
complexity can vary from a simple form that contains the outcome of a routine
examination to a study questionnaire with several hundred items. Electronic health
information systems, such as electronic health record systems (EHRs) and electronic
1 Corresponding Author: Stefan Hegselmann, Institute of Medical Informatics, Albert-Schweitzer-Campus 1, 48149 Münster, Germany, E-Mail: stefanhegselmann@uni-muenster.de.
S. Hegselmann et al. / Automatic Conversion of Metadata from the Study of Health 89
data capture systems (EDCs), use data models to manage and process medical data.
Currently, however, most models are not publicly available, so medical institutions are
forced to come up with their own definitions [3]. This process is not only costly [4], but
also results in a vast number of different data models. Even for a vital signs protocol, it is very unlikely that two forms will be identical regarding item names, data types, code lists and measurement units. An electronic system requires the definition of all models to be able to process the inputs correctly. In the age of Big Data, where information shall be used across several institutions, different complex data models at the involved institutions render this approach infeasible.
Missing annotations with unique semantic identifiers constitute another key problem for interoperability. In order to combine information obtained with different medical data models, it must be possible to interpret their elements. Often only an item's label and data type can be utilized, which might result in ambiguities. For example, consider an item called body temperature. It is impossible to determine the procedure by which the temperature was measured (sublingual, axillary or rectal), nor is there any information on whether the value is stored in degrees Celsius or Fahrenheit. The Unified Medical Language System (UMLS), which integrates medical terminologies and classifications and links medical concepts to codes, can remedy this shortcoming [5]. By mapping concept unique identifiers (CUIs), which determine the exact meaning, to each element in the data model, the medical content becomes machine-readable and there is no need for textual information to clarify its content. Nevertheless, only a fraction of medical data models has been semantically annotated so far, and intercoder reliability, i.e. the agreement on uniform codes for the same items, is still problematic [6].
A proposed solution to these two problems is to share and reuse medical data models in an open access portal [3]. This could not only save the cost of designing new models for various applications, but also support the development of general principles and guidelines for the way medical data is captured [7]. Furthermore, previously mapped semantic identifiers from existing models can be reused, saving the laborious work of reannotating medical concepts by human experts and strengthening the use of uniform codes. Medical data models can be represented in the Operational Data Model (ODM) format [8], a widely adopted XML standard maintained by the Clinical Data Interchange Standards Consortium (CDISC) to exchange and archive metadata and data from clinical trials [9].
The objective of this work is to support the interoperability of medical data models from the Study of Health in Pomerania (SHIP) by means of an automated converter into the ODM representation. Specifically, this enables the integration of data models from SHIP into the Portal of Medical Data Models2 (MDM) [10]. This research infrastructure was created to foster sharing of medical data models and offers the possibility to generate, edit and comment on models, as well as to download them for reuse in different data formats [3]. SHIP is a major epidemiological study in Germany. It was initiated in 1997 to obtain scientifically valid data regarding factors contributing to the shorter life expectancy in eastern Germany after the reunification. In contrast to other population-based studies, "it does not specifically address one selected disease; it rather attempts to describe health-related conditions with the widest focus possible" [10].
2 www.medical-data-models.org
A mapping between the SHIP database and the ODM format will be formalized and
analyzed regarding structural maintenance of the original semantics and potential loss of
information.
Figure 1: Simplified ODM example (left) and the corresponding form generated by the MDM portal (right).
2. Methods
The current version 1.3.2 of the ODM format served as the basis for the conversion. An example of the standard's capability to store metadata of medical forms is given in Figure 1. A simplified excerpt of an ODM document on the left-hand side is illustrated with the corresponding form generated by the MDM portal on the right. Indentation indicates a parent-child relationship, and outgoing edges to the left indicate the existence of a parent element in the XML tree. The form is represented by an element of type ItemGroupDef. It contains the form name and references to the contained items via ItemRefs. These point to ItemDef elements, which are identified by the unique object identifiers (OIDs) bodytemp and pulse. The first contains a question attribute that yields a labeled text field; the second references a CodeList with OID pulseyn. This represents a selection of values, which are 0 and 1 with the labels No and Yes.
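Along the lines of Figure 1, such a metadata excerpt might look roughly as follows; this is a hand-written sketch of the elements named above, and all names and attribute values other than the OIDs bodytemp, pulse and pulseyn are illustrative assumptions:

```xml
<ItemGroupDef OID="vitalsigns" Name="Vital Signs" Repeating="No">
  <ItemRef ItemOID="bodytemp" Mandatory="No"/>
  <ItemRef ItemOID="pulse" Mandatory="No"/>
</ItemGroupDef>
<ItemDef OID="bodytemp" Name="Body temperature" DataType="float">
  <Question>
    <TranslatedText xml:lang="en">Body temperature</TranslatedText>
  </Question>
</ItemDef>
<ItemDef OID="pulse" Name="Pulse present" DataType="integer">
  <CodeListRef CodeListOID="pulseyn"/>
</ItemDef>
<CodeList OID="pulseyn" Name="Yes/No" DataType="integer">
  <CodeListItem CodedValue="0">
    <Decode><TranslatedText xml:lang="en">No</TranslatedText></Decode>
  </CodeListItem>
  <CodeListItem CodedValue="1">
    <Decode><TranslatedText xml:lang="en">Yes</TranslatedText></Decode>
  </CodeListItem>
</CodeList>
```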
A complete XML tree of an ODM document is presented on the right-hand side of Figure 2 to illustrate the mapping. The root element ODM usually contains the children Study and ClinicalData, which define the structural information of a study and the corresponding data. In this work, only the former is necessary, since we need to represent the metadata of study questionnaires. GlobalVariables contains general information about the study, such as the name and a more thorough description. The specific structure and procedures of the study are stored in MetaDataVersion. A hierarchy of the elements Protocol, StudyEventDef, FormDef, ItemGroupDef, ItemDef and CodeList is used, in which children are defined via references. The example in Figure 1 illustrates this for ItemGroupDef, ItemDef and CodeList. This approach allows the simple reuse of elements. Note that the references are omitted in Figure 2 for simplicity. A Protocol defines the specific events that take place in the study, represented by StudyEventDefs. These elements contain forms (FormDef), which are built up from items (ItemDef), responsible for the data input. The items are grouped in ItemGroupDefs. If a selection of values is offered for the input, an ItemDef element can refer to a CodeList, which contains CodeListItems that encode the possible options.
Figure 2: Suggested mapping from SHIP data dictionary to the ODM format. Direct (solid arrows) and
preprocessed mappings (dashed arrows) are distinguished.
The Study of Health in Pomerania uses a relational database to store metadata, which is referred to as the SHIP data dictionary. Figure 2 shows the most important database tables and their relationships on the left-hand side. All table names are written in capital letters to distinguish them from the XML elements of ODM. Indentation means that tables are related by a foreign key relationship; we refer to the respective tables as parent and child. Furthermore, tables with the same name are indeed identical and are merely repeated to simplify the illustration.
To formalize a mapping to ODM, the basic structure of a medical form in the SHIP data dictionary was analyzed. Depending on the role, relations and attributes of an element, an equivalent was identified in the ODM standard. The general approach was to convert all available information. This includes visible elements such as text labels, but also definitions of data types and logical constraints. While some attributes could be adopted directly, in some cases a preprocessing step was necessary to transform values. Whenever no suitable mapping was possible, at least a textual representation was included in a description or similar attribute, so that the information is still available in the ODM document. Unique identifiers of the SHIP data dictionary were converted into ODM OIDs, which yields a one-to-one mapping between items in the two data formats. This simplifies the exchange of semantic annotations.
To evaluate the mapping, three converted forms were examined in depth. They were uploaded to the MDM portal3 and analyzed regarding the correctness of preprocessed values and the preservation of semantics.
The converter was written in the Java programming language and compiled with OpenJDK 8. The SHIP data dictionary is stored in a PostgreSQL database and was accessed with the Java Object Oriented Querying (jOOQ) library [11], version 3.9.1, which generated Java classes for the existing database tables, allowing convenient and type-safe access. To create the corresponding ODM files, the Java Architecture for XML Binding (JAXB) was applied. It generates classes according to an XML schema that defines the structure of a valid ODM document. With this approach, the whole document could be defined with Java objects, from which the final XML was generated automatically. Furthermore, JAXB validated the resulting XML against the ODM schema, guaranteeing syntactic correctness. The preprocessing methods were implemented with case distinctions, string manipulations and regular expressions for pattern matching.
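As a rough illustration of such a preprocessing step, the following sketch pulls a measurement unit out of an item label with a regular expression; the assumed label format ("… [unit]") and all names are illustrative and do not reflect the converter's actual rules:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UnitExtractionSketch {
    // Matches a trailing bracketed unit such as "[kg]" at the end of a label.
    private static final Pattern UNIT = Pattern.compile("\\[([^\\]]+)\\]$");

    // Returns the unit part of a label like "Body weight [kg]", or null
    // if the label carries no bracketed unit.
    public static String extractUnit(String label) {
        Matcher m = UNIT.matcher(label.trim());
        return m.find() ? m.group(1) : null;
    }
}
```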
3. Results
3 www.medical-data-models.org/forms/{18087,18098,18097}
Figure 3: Excerpt of an exported questionnaire as generated in the MDM portal (left) and the detailed
view of the items (right).
was transformed to a textual representation of the disjunctive normal form with unique
item ids as variable names. The dependency mechanism in the SHIP data dictionary
can be converted to an element of the type ConditionDef. It also contains an instance of
FormalExpression that encodes the circumstances under which an element of the form
should be disabled.
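A converted dependency could then be sketched roughly as follows; the OIDs, item ids and the expression context are invented for illustration and are not taken from the actual SHIP export:

```xml
<ConditionDef OID="cd.item42.disabled" Name="Disable item42">
  <Description>
    <TranslatedText xml:lang="en">Disabled if either condition of the disjunction holds.</TranslatedText>
  </Description>
  <FormalExpression Context="SHIP">(item_17 == 1) || (item_23 == 0)</FormalExpression>
</ConditionDef>
```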
Figure 3 shows an excerpt of an exported questionnaire as it is accessible in the
MDM portal. On the left the generated form is presented; the detailed item view is
given on the right. The Question elements contain a TranslatedText with different
translations. The English ones are shown here. Item names are defined from the
English version of the question. Measurement units and data types were derived with
preprocessing methods from SHIP data dictionary entries. Furthermore, semantic
annotation of items with UMLS codes was added.
4. Discussion
In this work, we present a mapping from the metadata of the SHIP data dictionary to
the ODM format and describe the implementation of an automatic converter. The
structure of the data dictionary was successfully converted to the structures of ODM
and all relevant information was included in the resulting forms. The suggested
mapping was implemented and three sample questionnaires were evaluated in-depth,
which demonstrates the feasibility of this conversion. As a result, hundreds of data entry forms with more than 15,000 items can be converted into a standardized format, which enables the integration of data models from SHIP into the Portal of Medical Data Models. The portal offers more than 15 different export formats, including import formats for various EDC systems such as REDCap, or FHIR. This enables the download and integration of the metadata used in SHIP into various systems.
Due to the clinical relevance, population representativeness and high quality of the
data collected in SHIP [10], more than 600 papers have been published in the last 15
years. The data collected covers a broad spectrum of medical conditions, ranging from
endocrine-metabolic to cardiovascular, neurological, gastroenterological and dental
disorders [10], enabling the analysis of a vast field of diseases. This explains the interest of several institutions, within Germany and abroad, in adopting the structure and metadata of SHIP as part of the SHIP-International concept. In addition to
Pomerania, a first SHIP sister study was implemented in Brazil in 2014. A second sister study, in Bialystok, Poland, is currently in its piloting phase. Further sites in
Brazil are under consideration. In Greifswald, selected examinations conducted within
the interdisciplinary GANI_MED [12] project on Individualized Medicine have been
aligned with SHIP.
The implementation at other sites was laborious and time-consuming due to the
proprietary format of the SHIP data dictionary. By implementing a conversion tool
from the SHIP database structure to ODM, the metadata of SHIP can be provided in
a standardized format for clinical data and their metadata, facilitating the
implementation of selected SHIP examinations at different study sites. Comparability
with the data collected in SHIP is supported by adapting the unique item identifiers
in the exported metadata.
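The item-level mapping can be sketched as follows; this is a minimal illustration in Python, assuming a hypothetical dictionary-entry layout and hypothetical type names, and emitting only a small subset of ODM (OID, name, data type and question text), with unknown proprietary types falling back to a more general type:

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping from proprietary data types to ODM data types;
# types without an exact ODM counterpart fall back to the more general "text".
TYPE_MAP = {"int": "integer", "dec": "float", "chr": "text"}

def item_to_odm(entry):
    """Convert one dictionary entry (hypothetical field names) into a
    minimal CDISC ODM ItemDef element."""
    item = ET.Element("ItemDef", {
        "OID": entry["id"],          # unique item identifier, kept for comparability
        "Name": entry["name"],
        "DataType": TYPE_MAP.get(entry["type"], "text"),
    })
    question = ET.SubElement(item, "Question")
    text = ET.SubElement(question, "TranslatedText", {"xml:lang": "en"})
    text.text = entry["label"]
    return item

entry = {"id": "SHIP.BP_SYS", "name": "Systolic blood pressure",
         "type": "int", "label": "Systolic blood pressure [mmHg]"}
print(ET.tostring(item_to_odm(entry), encoding="unicode"))
```

A real converter would additionally emit the surrounding Study, MetaDataVersion and FormDef structure required by the ODM schema.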
There are certain limitations to the conversion: different data types are used in the
SHIP data dictionary and ODM, so a more general type might be applied after the
conversion. Furthermore, measurement units are extracted from the labels with a
S. Hegselmann et al. / Automatic Conversion of Metadata from the Study of Health 95
References
[1] M. Dugas, K. H. Jöckel, T. Friede, O. Gefeller, M. Kieser, and M. Marschollek, et al., “Memorandum
“Open Metadata”,” Methods of information in medicine, vol. 54, pp. 376–378, 2015.
[2] Guidance to Encourage the Use of CDEs, https://www.nlm.nih.gov/cde/policyinformation.html,
accessed: 11/02/2017.
[3] M. Dugas, P. Neuhaus, A. Meidt, J. Doods, M. Storck, and P. Bruland, et al., “Portal of medical data
models: information infrastructure for medical research and healthcare,” Database, vol. 2016, bav121,
2016.
[4] A. Beresniak, A. Schmidt, J. Proeve, E. Bolanos, N. Patel, and N. Ammour, et al., “Cost-benefit
assessment of using electronic health records data for clinical research versus current practices:
Contribution of the Electronic Health Records for Clinical Research (EHR4CR) European Project,”
Contemporary clinical trials, vol. 46, pp. 85–91, 2016.
[5] U.S. National Library of Medicine, Unified Medical Language System,
https://www.nlm.nih.gov/research/umls, accessed: 11/02/2017.
[6] M. Dugas, A. Meidt, P. Neuhaus, M. Storck, and J. Varghese, “ODMedit: uniform semantic annotation
for data integration in medicine based on a public metadata repository,” BMC Medical Research
Methodology, vol. 16, p. 65, 2016.
[7] J. Varghese, C. Holz, P. Neuhaus, M. Bernardi, A. Boehm, and A. Ganser, et al., “Key Data Elements
in Myeloid Leukemia,” 2016.
[8] Clinical Data Interchange Standards Consortium, Operational Data Model,
https://www.cdisc.org/standards/foundational/odm, accessed: 11/02/2017.
[9] V. Huser, C. Sastry, M. Breymaier, A. Idriss, and J. J. Cimino, “Standardizing data exchange for
clinical research protocols and case report forms: An assessment of the suitability of the Clinical Data
Interchange Standards Consortium (CDISC) Operational Data Model (ODM),” Journal of Biomedical
Informatics, vol. 57, pp. 88–99, 2015.
[10] H. Völzke, D. Alte, C. O. Schmidt, D. Radke, R. Lorbeer, and N. Friedrich, et al., “Cohort Profile: The
Study of Health in Pomerania,” International Journal of Epidemiology, vol. 40, p. 294, 2010.
[11] Java Object Oriented Querying, https://www.jooq.org, accessed: 11/02/2017.
[12] H. J. Grabe, H. Assel, T. Bahls, M. Dorr, K. Endlich, and N. Endlich, et al., “Cohort profile:
Greifswald approach to individualized medicine (GANI_MED),” Journal of translational medicine, vol.
12, p. 144, 2014.
[13] P. Bruland, B. Breil, F. Fritz, and M. Dugas, “Interoperability in clinical research. From metadata
registries to semantically annotated CDISC ODM,” Studies in Health Technology and Informatics, vol.
180, pp. 564–568, 2012.
[14] M. Dugas, “ODM2CDA and CDA2ODM. Tools to convert documentation forms between EDC and
EHR systems,” BMC medical informatics and decision making, vol. 15, p. 40, 2015.
Health Informatics Meets eHealth 97
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-97
Abstract. New virtual reality based medical applications provide a better
understanding of healthcare related subjects for both medical students and
physicians. The work presented in this paper builds on gamification as a concept
and uses VR as a new modality to study the human skeleton. The team proposes a
mobile Android platform application based on the Unity 5.4 editor and the Google
VR SDK. The results confirmed that the approach provides a more intuitive user
experience during the learning process, concluding that the gamification of classical
medical software provides an increased level of interactivity for medical students
during the study of the human skeleton.
1. Introduction
Gamification represents a confirmed but relatively new approach that integrates classic
game design elements and mechanisms into everyday activities with the direct result of
an increased engagement level of users. A large number of new applications are present
in education, various business sectors and medicine to motivate the participants to reach
their pre-established goals either by using AR (augmented reality) or even VR (virtual
reality) oriented games. While AR applications are based on enhancing real-world
scenes with new, virtual data, usually seen through specially designed glasses
(e.g. Google Glass [1], Magic Leap [2], Microsoft HoloLens [3]), VR games offer a
completely immersive experience because the user sees only the 3D scene
projected inside a head-mounted display (e.g. Oculus Rift [4], Avegant Glyph [5]).
The VR-based applications may help both patients and healthcare professionals to
better understand or develop new treatments for various medical conditions. The VR
rehabilitation of post-stroke adult patients [6] has been proven to bring numerous benefits
over conventional therapy by providing additional real-time feedback and allowing the
adjustment of difficulty levels during therapy.
Other studies [7, 8] show that VR may be used in pain management in order to
increase pain tolerance levels or to distract patients from feeling pain when they are
exposed to an immersive gaming experience. This approach might help patients in cases
where opioid analgesics are not enough to alleviate pain and any additional,
nonpharmacologic analgesic might help them during medical interventions.
1 Corresponding Author: Stelian NICOLA, Polytechnic University of Timisoara, Department of
Automation and Applied Informatics, Timisoara, Romania, E-Mail: stelian.nicola@aut.upt.ro
98 S. Nicola et al. / VR Medical Gamification for Training and Education
Medical students might also benefit from VR oriented environments where they can
plan and practice complex operations without exposing patients to complications that
might come up during real surgical interventions [9].
The current article presents an application intended to help medical students better
understand the complexity of human anatomy by exploring it in an immersive
environment, which leads to an increased level of interactivity in anatomy lectures.
This paper describes the process and results of developing a mobile application that
helps medical students manipulate and learn the bones of a virtual human skeleton
displayed in a virtual 3D scene. The application is designed as a game, avoiding
boredom and letting students learn the bones of the human skeleton by interacting
with the 3D objects. Previously, the team created a desktop version of the application,
called Skedu [10], based on gesture interaction using a Leap Motion controller. One
motivation for building a mobile application is the widespread use of smartphones
among the application's target audience. Another important motivation is to use the
mobile application to investigate which values of the objects' coordinates provide the
best interaction. A further goal is to add new functionalities and to integrate VR
headsets as an alternative type of interaction to the Skedu [10] application, both in
terms of 3D object visualization and of available functionalities.
The novelty of this application compared with the previous one is that it is designed
for mobile devices and eliminates the need for a Leap Motion controller, which has
been replaced by a VR headset for interaction.
Virtual reality refers to artificial environments created on a computer that simulate
reality so convincingly that the user gets an impression of physical presence close to
reality, in both real and imaginary places [11]. Virtual reality allows the user to
interact with 3D objects both on a PC and on a mobile phone. Currently, the most
popular VR systems are the Oculus Rift, HTC Vive, Microsoft HoloLens and the
Sony PlayStation VR console [12].
The most widespread VR headsets use a mobile phone mounted inside. They are
equipped with two lenses through which the user sees two separate views from the
mobile phone at the same time, providing a sensation of 3D depth.
The application uses the Unity 5.4 editor, in which applications, games or
experiments can be built with code written in C# or JavaScript. Applications
produced with this editor may run on different platforms such as PC, Mac and Linux,
iOS, Android, Windows Store, Xbox One and Samsung TV, in either 2D or 3D.
2.1. SQLite
To store information about the bones of the human skeleton, we used SQLite, a C
library that implements an embedded SQL database engine; it can be integrated into
different systems and requires zero configuration [13].
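As an illustration only (the table layout and contents below are hypothetical, not taken from the application), such an embedded bone database can be sketched with Python's built-in sqlite3 module:

```python
import sqlite3

# In-memory database for illustration; the app would ship a file-based DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bones (name TEXT PRIMARY KEY, description TEXT)")
conn.executemany(
    "INSERT INTO bones VALUES (?, ?)",
    [("skull", "Bony structure that supports the face and protects the brain"),
     ("femur", "Longest bone in the human body, located in the thigh")],
)

# Lookup of the kind performed when the user selects a bone in the 3D scene.
row = conn.execute("SELECT description FROM bones WHERE name = ?",
                   ("femur",)).fetchone()
print(row[0])  # description shown in the Text object
```

In the application itself the equivalent lookup is performed from a C# script when a bone is selected.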
Other Game Object menu objects of the Unity editor are: a Point light object for
setting the lighting in the application, a Text object used to display information from
the database when a bone of the skeleton is selected, and the buttons Reset,
Start / Stop Rotate and Move, each button representing a functionality of our
application. Figure 1 presents an overview of the previously described items and the
interaction buttons.
As previously mentioned in the description of the Game Objects of the Unity editor,
the menu buttons each represent a functionality of the app, allowing the user to
interact with the 3D human skeleton. This interaction is possible by pressing the
buttons using the Gvr Reticle object mentioned above. The user may press a menu
control button by placing the point of interaction on the desired button for 2 seconds.
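The 2-second dwell selection can be sketched as a small per-frame timer; this is a hypothetical sketch in Python rather than the app's C#, with class and method names invented for illustration:

```python
DWELL_TIME = 2.0  # seconds of continuous gaze needed to trigger a press

class DwellButton:
    """Fires a press once the gaze reticle has rested on the button
    for DWELL_TIME seconds (illustrative sketch, not app code)."""
    def __init__(self):
        self.gaze_time = 0.0
        self.pressed = False

    def update(self, gazed_at, dt):
        # Called once per frame with the frame duration dt.
        if gazed_at:
            self.gaze_time += dt
            if self.gaze_time >= DWELL_TIME and not self.pressed:
                self.pressed = True   # trigger the button's functionality once
        else:
            self.gaze_time = 0.0      # looking away resets the timer
            self.pressed = False

btn = DwellButton()
for _ in range(25):           # 25 frames of 0.1 s = 2.5 s of continuous gaze
    btn.update(True, 0.1)
print(btn.pressed)  # True
```

Resetting the timer when the gaze leaves the button prevents accidental presses from brief glances.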
The main features of the application are:
• Turning the spin of the skeleton on and off. These two functionalities are
available from the Start / Stop Rotate menu button. At the first press of the
button, the 3D skeleton begins to rotate from right to left by an angle of 45
degrees each second, so the skeleton makes a complete rotation in 8 seconds.
At the second press of the same button the skeleton stops spinning and
remains in its current position. This functionality is implemented with a C#
script on the corresponding button: Start / Stop Rotate.
• Movement in 3D space. This functionality allows the user to navigate through
the application. The user can move in the virtual 3D space and view the
skeleton from different positions, as if a real skeleton stood in front of him or
her. This functionality is related to the Move button: after pressing it with the
Gvr Reticle object, the user can navigate through the application. The user
may stop the movement either by pressing Reset or by focusing the yellow
point downwards, so that the position of the Gvr Reticle object on the y axis
(the height) in the virtual 3D space is less than or equal to -5. The
functionality is defined on the Gvr MainCamera object and uses a C# script
that moves the Gvr MainCamera forward. The script has been added to the
Move menu button user control.
• Selected bone information. When the user selects a bone of the 3D skeleton
with the yellow point (the Gvr Reticle object), information about the selected
bone is retrieved from the SQLite database on the mobile phone and
displayed. Additionally, for a better view, the selected bone changes its
original color to green. The code linking the database, selecting the right
information and changing the color of the selected bone is a C# script added
to the Gvr Reticle object.
• Resetting the current position in the virtual 3D space. The Reset button resets
the position of the skeleton and the current user location in the 3D virtual
space, setting everything back to its default status.
These functionalities are controlled through a VR headset, which lets the user steer
the application running on the mobile phone by moving his or her head, thus moving
the Gvr Reticle object for interaction with the 3D model.
3. Results
To prepare the application for a mobile phone running Android, after development
we switched the build platform from PC, Mac & Linux to Android and built the
“.apk” file (Build option). In addition to the “.apk” file, the editor required two
external tools, the Android SDK and JDK 1.8.
Figure 2 shows the main functionality of the application, mainly the presentation of
information to the user. The student may learn the bones of the human skeleton by
placing the Gvr Reticle (yellow point) on the 3D objects (bones) that make up the
entire 3D skeleton. In Figure 2 the user places the yellow point (the Gvr Reticle
object) on the head of the human skeleton and automatically receives information
about it at the top of the display.
For functionalities such as the spinning of the skeleton and the movement of the
user through the virtual 3D space, we use Equations 1-3:
Spinning equation:
R = mi + k; (1)
where:
mi - the measure of the angle of rotation at moment i (i = 1 -> n; n = 8);
k - constant angle of 45°;
R - measure of the resulting angle of rotation
Table 1 presents the values of a rotation depending on the measured angles of
rotation and the rotation moments. Note that a complete rotation of the skeleton
requires 8 positions.
Movement equation:
Pfc(xf, yf, zf) = Pic(xi, yi, zi) * s; (xf > xi, yf = yi, zf > zi) (3)
where:
Pfc(xf, yf, zf) - the final position of the camera at coordinates x, y, z;
Pic(xi, yi, zi) - the initial position of the camera at coordinates x, y, z;
s - constant vector speed
Defining these three equations, spinning (1), spinning off (2) and movement (3), has
an important role in the functionality of the application and also in the study of how
the positions (x, y, z) of the objects one interacts with in the 3D virtual space depend
on time and on the 3D positions of other objects. It is important to specify that all
these coordinates of the virtual objects are read from the virtual space and are very
important for defining new functionalities in our application. We aim to find the
gestures, and their mapping to coordinates, that provide the best interaction.
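Equations (1) and (3) can be checked numerically; a minimal sketch assuming one update per moment, with the 45° step from Eq. (1) and an illustrative speed factor s (the actual value of s is not given in the text):

```python
# Eq. (1): R = mi + k, with k a constant step of 45 degrees.
K = 45

def rotation_angles(n=8, k=K):
    """Cumulative rotation angle after each moment i = 1..n."""
    angles, angle = [], 0
    for _ in range(n):
        angle = (angle + k) % 360   # mi is the angle reached so far
        angles.append(angle)
    return angles

print(rotation_angles())  # [45, 90, 135, 180, 225, 270, 315, 0]: full turn in 8 moments

# Eq. (3): the camera's x and z are scaled by a constant speed factor s while
# y stays fixed, satisfying xf > xi, yf = yi, zf > zi for positive x, z and s > 1.
def move(pos, s=1.5):               # s = 1.5 is an illustrative value
    x, y, z = pos
    return (x * s, y, z * s)

print(move((2.0, 1.0, 4.0)))  # (3.0, 1.0, 6.0)
```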
4. Discussion
The application is being tested on a mobile phone with the Android 4.2 platform.
For now, from a technical point of view, we concluded that in terms of memory the
application needs an Android system of at least version 5.1. The VR headset together
with the mobile phone allows the user to modify the distance between the headset
and the mobile phone lenses and to modify the distance between the two lenses.
These adjustments are made mechanically by each user with the corresponding
buttons, depending on the user's eyesight. The application is scheduled to enter user
testing with first-year medical students from the University of Medicine and
Pharmacy Timisoara. The testing will follow a well-defined plan assessing the sense
of presence associated with virtual environments [15].
The application described in this paper enables users, medical students or anyone
interested, to learn the bones of the human skeleton in an interactive way and in a
realistic 3D environment. Installing the application on mobile phones is easy, the
same as for any other regular application. Also, the VR headsets offered by various
companies are not very expensive, and the application's functionality does not
depend on the type or brand of VR headset used. The application thus offers an
easy-to-use and relatively cheap modern training solution.
Using VR headsets provides an alternative way to create applications for the 3D
visualization of virtual objects. Gamification, the concept the application is based on,
offers users the possibility to learn and control the bones of the human skeleton in a
realistic mode. The study of the coordinates of the positions of virtual 3D objects
over time, and relative to other coordinates in the virtual 3D space, opens new
opportunities to create new functionalities.
The next version of the application will be based on augmented reality (placement
of virtual objects in the real world, viewed using VR glasses): the human skeleton
will not appear in a virtual world but in the real world, so users will be able to
explore the skeleton much better.
In the future we plan to integrate a LEAP Motion controller for multimodal
interaction.
References
1. Introduction
Efficient, interested, and committed human resources play an important role in the
promotion of public health [1]. In this regard, continuous education programs are of great
importance due to the dynamic nature of medical sciences. Therefore, educational needs
assessment is the first step towards developing educational programs for employees and
is actually the first prerequisite to improve and guarantee the effectiveness of education
[2, 3]. In fact, educational activities should not be implemented without paying attention
to existing conditions and staff needs [4, 5]. In addition, professional education updates
1 Corresponding author, Department of Medical Records and Health Information Technology, School of
Paramedical Sciences, Mashhad University of Medical Sciences, Azadi Square, Pardis Daneshgah, Mashhad,
IR Iran, Tel: +98-5138846728, E-mail: sarbazm@mums.ac.ir
K. Kimiafar et al. / Assessing the Educational Needs of Health Information Management Staff 105
healthcare providers' knowledge and enables them to function better and,
consequently, to meet health care standards, improve the health of the population,
motivate staff, and guarantee service quality by acquiring new knowledge, skills,
and attitudes.
Studies on HIM employees confirm the importance of determining their educational
requirements [6]. HIM professionals have a combination of different skills.
Healthcare systems are changing and the need for HIM skills is rising rapidly [7].
These needs change over time [8]. For example, in recent years, the growth of ICT,
information systems, electronic health records, and new methods of reimbursement
has been changing HIM roles and their educational needs [9].
In Iran, HIM employees have various roles in the healthcare system, such as
providing health statistics, clinical coding, applying health information technologies,
and protecting health information privacy [10]. Previous Iranian research has
indicated certain deficiencies. For example, some studies show poor quality of
coding and clinical documentation [11, 12]. In addition, some researchers reported
poor knowledge of HIM staff regarding privacy and confidentiality [13].
Recently, the Ministry of Health has implemented a health reform program in which
the application of electronic systems such as electronic health records and hospital
information systems is emphasized. Other studies have indicated the necessity of
restructuring the HIM departments in Iran [10]. Some authors have also emphasized
the creation of new roles and positions in HIM departments to promote health
information practices and health information technologies in hospitals [14, 15].
Although most HIM employees have graduated from formal university educational
programs, follow-up courses emphasize staff re-training [10]. Additionally, the move
to electronic environments (such as eHealth) makes re-training necessary, and an
educational needs assessment is an important first step. Therefore, this study aimed
to assess the educational needs of the HIM staff in hospitals of the Mashhad
University of Medical Sciences in Iran.
2. Methods
A descriptive analytical study was conducted between March and September 2015 in the
eight teaching hospitals affiliated with the Mashhad University of Medical Sciences
(northeast Iran). The Mashhad University of Medical Sciences is one of Iran’s biggest
universities, with 7,000 students in different fields of medicine and several teaching
hospitals. It is responsible for the health care of 2,782,976 people.
The criteria for the selection of participants were their willingness to participate and
their availability. The questionnaire was offered to all 60 HIM staff members, of
whom 41 completed it (response rate: 68.3%).
Data were collected with a questionnaire. The authors developed the preliminary
questions based on the current educational curriculum in Iran and the practices of
HIM departments. The questions were evaluated by HIM professors for content
validity, and vague questions were accordingly reviewed and corrected. The
reliability of the questionnaire was assessed with Cronbach's alpha (0.98). The
questionnaire contained two parts: a) demographic characteristics, and b) subject
and sub-subject educational themes.
Responses were collected on a five-point Likert scale. The data were analyzed using
SPSS version 16.0. The questions were first scored (1 = strongly disagree to 5 =
strongly agree). Then the mean and frequency of the responses were calculated. The
normality of the data was tested and the group means were compared.
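The Cronbach's alpha used above can be computed directly from its definition, α = k/(k-1) · (1 - Σ item variances / variance of total scores); a minimal sketch on made-up Likert responses (the study's real item data are not available):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
def cronbach_alpha(rows):
    """rows: list of respondents, each a list of item scores (1-5 Likert)."""
    k = len(rows[0])                      # number of items

    def var(xs):                          # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([r[j] for r in rows]) for j in range(k)]
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Made-up responses of four respondents to three items.
data = [[5, 4, 5], [4, 4, 4], [2, 3, 2], [1, 2, 1]]
print(round(cronbach_alpha(data), 2))  # 0.96
```

A value near 1, as reported here (0.98), indicates high internal consistency of the questionnaire items.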
3. Results
The mean age of the participants was 35.6 years. Most participants were female
(92.7%). The educational levels of the participants were as follows: 26.8% had an
associate's degree, 65.9% a bachelor's degree, and 7.3% a master's degree. Working
experience was 12.5 years on average.
The most important educational needs were related to medical terminology,
occupational safety, legal aspects, the newest rules and regulations, and ministry
guidelines. The extent of educational needs in any topic did not differ on the basis of
age, gender, educational level or field of study. The need to learn about classification
systems and coding had a significant relationship with work experience (P = 0.045),
and those with a work experience of 6–10 years had lesser needs. The educational
need for statistics was also significantly associated with the number of years since
graduation (P = 0.005), as those with 5–10 years' post-graduation experience had
lesser needs than others. Learning statistical programs, such as SPSS and Excel, was
found to be the greatest need. Regarding classification and coding, the maximum
need related to the ICD-10 and ICD-9-CM rules. Topics related to occupational
safety, such as ergonomics and standards, were the greatest educational need of the
staff in this segment (Table 1).
Regarding legal aspects, knowledge about the principles of obtaining informed
consent, access to information, and confidentiality were among the greatest needs. The
EHR standards were also among the essential and important subjects about which greater
understanding was needed.
Education in various aspects of insurance was not identified as an important need.
However, among the cases pertaining to this area, issues related to the role of the HIM
in insurance and insurance organizations were considered more important. In the field of
national legislation, the maximum need was felt for a better understanding of the laws
concerning Iranian electronic health records, and the health reform program. In the field
of storage, topics related to the elimination and retention of information and, in the
field of medical terminology, information about diseases, surgery, and anatomy were
the most important educational needs. In the field of technology, greater knowledge
of issues such as the HIS and the sharing of resources was the most important need,
while the least felt need of the staff related to programming. As for the “other”
issues, data quality was the most important.
4. Discussion
Various studies have emphasized developing new roles for HIM staff and their
educational needs [16-18]. Our results showed that the greatest educational needs
were
related to medical terminology, occupational safety, legal aspects, the new rules and
regulations, and ministry rules, while the minimum requirements were related to
insurance and topics such as disease registry, data ownership, and data quality. Another
study in this field showed that educational needs were dissimilar in different HIM
departments; it indicated the need to promote and develop technical skills in archive,
coding and statistics units, and human skills in hospital admission units [19].
Consistent with our findings, a study in the USA showed that 96 percent of HIM
students believed that the HIM educational program prepared them for their first job,
while only 70 percent of HIM staff agreed with this statement, and 56 percent of
HIM staff believed that they needed additional training when they entered their
position [6].
According to the American Health Information Management Association (AHIMA),
HIM staff need to prepare for different roles and should be more educated in
different domains [18]. Another study emphasized three skill sets: managerial skills
(change management, finance and planning), data skills (databases, data
management, data standards and data analysis) and healthcare knowledge (ethics,
regulations, the healthcare system, patient-centered health care) [16].
The educational need for classification systems and coding was significantly related
to work experience, and those with greater experience had a lesser need for further
training. Education relating to statistics also had a significant association with the
number of years spent in the profession since graduation. Regarding classification, the
greatest need was for learning the ICD-10 rules. The poor quality of coding in
Iranian hospitals suggests a need for educating coders in this area [12]. Owing to the
importance of data coding for quality management activities, planning, research
activities, and payments, an educational program was needed to address the
qualitative elements of coding. Education in coding and classification is also
emphasized by AHIMA in its 2014 HIM curriculum [18].
Topics related to occupational safety, such as ergonomics and related standards,
were found to be the greatest educational need of the staff. The HIM staff spend a
lot of time working with computers, doing work that involves very little movement
and is monotonous, routine, and repetitive. Such work can lead to musculoskeletal
disorders. Computers are an inseparable part of the HIM workplace, and published
reports indicate a high risk of musculoskeletal disorders among computer users
compared to those in other professions [20]. AHIMA has also emphasized educating
HIM staff on such topics [18]. Another educational need considered in this study
concerned access to information and privacy. Recent studies have emphasized the
need for continuous education of health staff in ethical issues and the confidentiality
of information [18, 21], which is consistent with our findings.
In the area of technology, education in the HIS and the sharing of information
resources was the most important need of the employees. In the “other” category,
clear concepts regarding data quality were found to be more essential than other
aspects. Studies in other developing countries, such as Nigeria, also showed that due
to rapid changes in ICT, educational programs for the empowerment of HIM
personnel must be changed and updated [22]. AHIMA has also considered skills
related to data analytics, health information technology, health informatics and data
quality, as well as clinical documentation improvement, essential for HIM staff [18].
In conclusion, the results of the current study showed that, despite the development
of health information systems, the move from traditional HIM practices to health
information technology, and the fact that the educational programs and curricula of
Iranian universities have progressed towards computerized information systems, the
employees of HIM departments still need to be educated in computerized
information management. Therefore, those who plan educational programs for
health information professionals must have a comprehensive view of the needs of
the staff based on the new requirements of health systems.
References
[1] Behnampour N, Heshmati H, Rahimi S. A survey on Paramedical and health students' satisfaction with
their discipline and some of the related factors. Iranian Journal of Medical Education. 2012;12(8):616-
8. [Persian]
[2] Hojat M. Need Assessment of Nursing Personnel of Jahrom University of Medical Sciences Using
Delphi Technique in 2008. Iranian Journal of Medical Education. 2011;10(4):464-73. [Persian]
[3] VanNieuwenborg L, Goossens M, De Lepeleire J, Schoenmakers B. Continuing medical education for
general practitioners: a practice format. Postgraduate Medical Journal. 2016; 92(1086):217-22.
[4] Kjaer NK, Vedsted M, Høpner J. A new comprehensive model for continuous professional development.
The European Journal of General Practice. 2017;23(1):20-26.
[5] Engström M, Löfmark A, Vae KJ, Mårtensson G. Nursing students' perceptions of using the Clinical
Education Assessment tool AssCE and their overall perceptions of the clinical learning environment -
A cross-sectional correlational study. Nurse Education Today. 2017; 51:63-67.
[6] Bates M, Black C, Blair F, Davis L, Ingram S. Perceptions of health information management
educational and practice experiences. Perspectives in Health Information Management. 2014;11.
Available from: https://www.ncbi.nlm.nih.gov/pubmed/25214821
[7] American Health Information Management Association; AHIMA assembly on education transitions
health information management college programs into electronic age. Health & Medicine Week 2007
Aug 06:438.
[8] Cleveland AD. Miles to go before we sleep: education, technology, and the changing paradigms in health
information. Journal of the Medical Library Association. 2011;99(1):61-9.
[9] Abdelhak M, Grostick S, Hanken MA. Health information: management of a strategic resource: Elsevier
Health Sciences; 2014.
[10] Hosseini A, Sheikhtaheri A. A new model for the organizational structure of medical record departments
in hospitals in Iran. Perspectives in Health Information Management/ American Health Information
Management Association. 2006; 3 (4). Available from:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2047300/
[11] Farzandipour M, Sheikhtaheri A. Evaluation of Factors Influencing Accuracy of Principal Procedure
Coding Based on ICD-9-CM: An Iranian Study. Perspectives in Health Information Management/
American Health Information Management Association. 2009; 6(5). Available from:
www.ncbi.nlm.nih.gov/pmc/articles/PMC2682663/
[12] Farzandipour M, Sheikhtaheri A, Sadoughi F. Effective factors on accuracy of principal diagnosis
coding based on International Classification of Diseases, the 10th revision (ICD-10). International
Journal of Information Management. 2010;30 (1): 78-84.
[13] Sheikhtaheri A, Kimiafar K, Barati A. Knowledge of physicians, nurses and medical record personnel
about legal aspects of medical records in teaching hospitals affiliated to Kashan university of medical
sciences. Health Information Management. 2010;7(2).
[14] Moghadasi H, Sheikhtaheri A. CEO is a vision of the future role and position of CIO in healthcare
organizations. Journal of Medical Systems. 2010; 34 (6): 1121-1128.
[15] Sadoughi F, Davaridolatabadi N, Maleki MR, et al. A Comparative study on organizational positions
of health management and information technology department of hospitals and proposing a model for
Iran. Bimonthly Journal of Hormozgan University of Medical Sciences. 2015; 19 (2):93-99.
[16] Cortelyou-Ward K, Noblin A, Kahlon S. Competencies for global health informatics education:
leveraging the US experience. Educational Perspectives in Health Informatics Information Management.
2013. Available from: http://eduperspectives.ahima.org/competencies-for-global-health-informatics-
education-leveraging-the-us-experience/
[17] Ryan J, Patena K, Judd W, Niederpruem M. Validating Competence: a new credential for clinical
documentation improvement practitioners. Perspectives in Health Information Management. 2013;10.
Available from:
https://www.ncbi.nlm.nih.gov/pubmed/?term=Validating+Competence%3A+a+new+credential+for+cl
inical+documentation+improvement+practitioners.
[18] AHIMA Foundation. Academic competencies. 2014. Available from:
http://www.ahimafoundation.org/education/curricula.aspx.
[19] Jahanbakhsh M, Setayesh H. Educational needs of medical records practitioners in Isfahan teaching
hospitals. Iranian Journal of Medical Education. 2011;10(5):962-71. [Persian]
[20] Solhi M, Khalili Z, Zakerian S, Eshraghian M. Prevalence of symptom of musculoskeletal disorders and
predictors of proper posture among computer users based on stages of change model in computer users
in central Headquarter, Tehran University of Medical Sciences. Iran Occupational Health.
2014;11(5):43-52. [Persian]
110 K. Kimiafar et al. / Assessing the Educational Needs of Health Information Management Staff
Health Informatics Meets eHealth 111
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-111
1. Introduction
In 2012 the government of the Republic of Albania made a strategic decision to adopt
industry standards for the emerging domain of eHealth in the country. With the support
of the Austrian government, the Ministry of Health of Albania (MOH) launched a
nationwide Electronic Health Record implementation project at the end of 2014, which
was successfully completed in 2016.
This article provides a case study of the nationwide Electronic Health Record
implementation project and outlines the lessons learned, focusing on the area of
application of global healthcare information exchange (HIE) standards in Albania. The
main objectives of this paper are: EHR implementation results assessment, identification
and analysis of the key project implementation components, shortcomings, and factors
that contributed to the successful implementation.
The term EHR is ambiguous and requires an explicit definition in the scope of this paper. The EU commission has defined an Electronic Health Record as a comprehensive medical record, or similar documentation, of the past and present physical and mental state of health of an individual in electronic form, which provides ubiquitous availability of these data for medical treatment and other closely related purposes [1]. According to the World Health Organization, an EHR shall contain all personal health information belonging to an individual [2]. This information shall be entered and accessed by healthcare providers.
1 Corresponding Author: Olegas Niaksu, AME International GmbH, Vienna, Austria, niaksu@acm.org
112 O. Niaksu et al. / Implementation of Nationwide Electronic Health Record in Albania
Notably, in this paper we omit the WHO imperative that data shall be entered and accessed primarily by healthcare professionals. One of the strategic goals of the Albanian government is to enable patients to access their personal clinical data stored in the EHR system. Another valuable source for integrated medical care is the utilization and connection of data collected by medical devices for home use. Finally, the inclusion of Personal Health Records (PHR) and the possibility of merging these records with the EHR are intended for the future.
According to a HIMSS study [3], there are four major architectural models for a national EHR system. In a fully federated architecture model, patient data resides in a local facility; there is no central clinical data repository (CDR), the patient data always remains in the source systems of the facilities, and the EHR system front-end pulls data from the systems of health care providers. In a federated architecture model, patient data also resides in a local facility, but is consolidated in local CDRs; the EHR system pulls from a local CDR for updates to a central CDR as needed. In a service-oriented model, patient data is sent to the EHR system by a message at the end of a care event; local systems are message-enabled, and the CDR holds care events within a patient record. Finally, an integrated Electronic Patient Record (centralized architecture) model represents a single integrated system. There are also hybrid models: for instance, France has an architecture that combines models 2 and 3, while Sweden has an architecture that combines models 1 and 2.
The selection of the EHR architecture model is of fundamental importance. First, it shall meet the major non-functional requirements for the information system. Second, it shall be robust enough not only to support the growth of the EHR system but also to accommodate the natural evolution of a country’s e-Health ecosystem. Third, the shortcomings and limitations of the selected model shall be accepted and justified for the specific use case. The selected architecture model of the Albanian EHR system is presented in Section 2.2.
Initial ICT setting and eHealth landscape in Albania before project start
The Health Information System existing in Albania before the project was fragmented [4], of poor quality and in most cases not relevant for clinical and administrative decision-making. The lack of national standards and of agreed data exchange protocols and terminology for Health Information Systems resulted in severe shortcomings and limitations of some ICT projects implemented at that time.
The ICT infrastructure in the Albanian healthcare system was underdeveloped; only islands of local area networks of varying quality and standards were in place. In smaller Healthcare Provider Organizations (HPO), no network infrastructure was deployed. Larger HPOs had a few computers in the administration department and the statistical back office. In the largest hospitals, computers were partially utilized in admissions.
To date, only a few information systems had been implemented in public HPOs. The cost accounting application “Kosto Spitali” had been introduced in 2011 [5] and has been used to collect statistical and financial data. However, the
2. Methods
This study employs a single explanatory/exploratory case study design. All authors have
in depth knowledge and first-hand experience in the subject of the study and hence are
able to analyze and evaluate the technological, governance and change management
impact of the EHR implementation. The assessment of the system uptake, user
satisfaction and subsequent conclusions have been drawn relying on the extracted
statistical data of the system usage and the performed survey of the end-users.
Prior to the implementation of the nationwide EHR solution, the Albanian government, with the support of the Austrian government, conducted a feasibility study. The study investigated the as-is situation of ICT penetration in the Albanian public healthcare sector and proposed, based on Austrian and EU experiences, a prioritized approach for the informatization of healthcare in Albania.
The study showed that the majority of public hospitals lacked network infrastructure: out of 40 public hospitals, only a handful had LAN networks and deployed medical information systems. The fundamental outcome of the study was an outlined prioritization of the necessary ICT activities in the healthcare sector of Albania. Two subsequent major steps were proposed: the implementation of a nationwide EHR system and the subsequent implementation of hospital information systems. Considering the "greenfield" ICT status in Albanian public HPOs, the study suggested establishing an industry-standard EHR backbone, which would provide the main building blocks for the eHealth infrastructure, as well as well-defined data exchange protocols for transport and semantic interoperability. To address the internet connectivity issues in remote areas, the study recommended a federated architecture for the nationwide EHR, which would enable local facilities’ systems to operate in offline mode. Addressing the limited number of existing information systems and the limited ICT infrastructure, it was suggested that the EHR solution should include a front-end solution for healthcare providers as well as the ICT hardware required to partially computerize public HPOs.
The MOH decided to implement the Austrian EHR COTS2 solution, which is based on industry standards and is conformant with the IHE profiles recommended by the EU commission [6]. This strategic decision aimed to ensure interoperable healthcare information exchange, predictable project results and the future possibility of connectivity and data exchange with other EU countries through the epSOS/EXPAND projects [7, 8].
2 COTS – commercially available off-the-shelf software
Based on the outcomes of the feasibility study, a hybrid architecture model was proposed. By 2015, internet connectivity had considerably improved, although limited bandwidth and low quality of service prevailed in remote areas. Considering this development, a hybrid model combining the federated architecture with a service-oriented architecture was adopted.
The selected architectural model has the following building blocks and features:
1. A local master patient index, XDS3 register and repository are deployed at each healthcare facility;
2. A national master patient index, a national XDS register and a copy of all repositories are deployed in a central node;
3. Asynchronous communication between local nodes and the central node enables operation under unstable or absent connectivity;
4. Duplication of the MPI and clinical data repositories provides an extra layer of data loss protection.
The logical scheme of the EHR solution has two major layers: a central node (data center) and local facility nodes. The central node includes the global master patient index (eMPI), copies of the XDS repositories and registers, the HPO organizations and personnel catalogue, components for auditing, access control and patient policy handling, interface adapters and the reporting module. Each local node comprises the front-end portal, a local MPI, a local XDS registry and repository, an HPO organizations and personnel catalogue, components for auditing and access control, and an asynchronous communication component. The following IHE profiles have been utilized in the system: ATNA, BPPC, CT, XCA, XDR, XDS, XUA, PDQv3, PIXv3, RFD, XPID, XDW, MS, EDPN, TN, and IC.
The presented architecture was designed, implemented and deployed at the national
e-government data center (central node) and 79 HPOs in the country (local nodes).
In the early phase of the project, it was decided to rely on the HL7 CDA4 level 3 document architecture. However, at the time when the project started, Albania had no
3 XDS – Cross-Enterprise Document Sharing: a defined set of standards and implementation rules for cataloging and sharing patient records across health institutions.
4 HL7 CDA – Clinical Document Architecture
In this section, we address the critical EHR solution building blocks, related issues and
relevant implementation decisions.
It was decided to structure and provide an identity to all Albanian provider organizations, their hierarchical sub-organizations, their IT systems and their personnel using HL7 OID schemes.
The Ministry of Health officially registered the root HL7 OID for Albanian eHealth and adapted the structure of the Austrian OID tree to the local requirements.
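To illustrate how such an OID scheme identifies every organization, system and person, a minimal sketch follows. The OID values below are hypothetical placeholders; the actual Albanian root OID and tree assignments are managed by the Ministry of Health and are not reproduced here:

```python
# Sketch of a hierarchical HL7 OID registry (hypothetical OID values).
# "2.16.8.1" stands in for the national eHealth root OID.
OID_TREE = {
    "2.16.8.1":        "Albania eHealth (root)",
    "2.16.8.1.1":      "Healthcare provider organizations",
    "2.16.8.1.1.42":   "Regional Hospital (example HPO)",
    "2.16.8.1.1.42.3": "Radiology department",
    "2.16.8.1.2":      "IT systems",
    "2.16.8.1.3":      "Healthcare personnel",
}

def resolve(oid):
    """Return the names of all registered ancestors of an OID, root first.
    Walking the dotted prefix chain makes the hierarchy explicit."""
    parts = oid.split(".")
    prefixes = [".".join(parts[:i]) for i in range(1, len(parts) + 1)]
    return [OID_TREE[p] for p in prefixes if p in OID_TREE]

print(resolve("2.16.8.1.1.42.3"))
```

Because each node appends a new arc to its parent's OID, every identifier is globally unique and its position in the national tree can be recovered from the identifier alone.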
3. Results
The most noticeable outcomes after completion of the system rollout are as follows:
• Integration with the e-government platform through the e-government Electronic Service Bus, allowing high-quality patient demographic data to be queried and received.
• Improved patient flow control due to a better patient demographic data collection.
• Implemented front-end portal and BI component, allowing the minimization or
complete elimination of paper journals for the registration of patients in the facilities.
• Creation of standardized electronic discharge summaries, emergency physician notes
and out-patient visit summaries.
• Provision of direct access through the patient portal and of indirect access through an
API to clinical data stored in clinical document repositories.
• Exchange of clinical documents and health information between 79 health care facilities, with the possibility to connect an unlimited number of further public HPOs, e.g. GP offices.
• The Albanian Health Data Dictionary [11] was appended with the following coding systems and classifiers:
o LOINC: CDA DocumentCode, CDA SectionsCode, CDA EntryCode;
o HL7: PersonalRelationshipRoleType, RoleClass, TelecommunicationAddressUse, TimingEvent, URLScheme, ObservationInterpretation, ActSite;
o EDQM: DoseForm, Package;
o ISO: ISO 639-1 (LanguageCode), ISO 3166-1 (CountryCode);
o UCUM: Units;
o Albanian classifiers: Allergy, AdverseReaction, ProblemCode, TargetSiteCode, Severity, SocialHistory, StatusCode.
4. Discussion
Given the complexity of eHealth projects and high diversity of the healthcare sector
environments, an objective assessment of eHealth initiatives has proven to be a
challenging task. Nevertheless, we try to evaluate this EHR implementation project
through the dimensions of the project goals achievement and the assessment of the main
project deliverables.
We have identified various shortcomings and indispensable compromises, which
have the potential to influence the decision-making process in other e-Health projects
and may support the completion of e-Health environments in other countries.
Assessment of achieved Results and Recommendations for other nationwide EHR Solutions
Limitations
The authors of this paper have been directly involved in the implementation of the nationwide EHR project in Albania. Although our intention has been to remain neutral and constructive, bias in the assessment of the results and in the overall evaluation cannot be excluded.
Moreover, the evaluation timeframe after completion of the project is still too short to draw a solid conclusion on the final results. Therefore, further studies are required to examine the effects and impacts of a nationwide EHR on the healthcare landscape in Albania.
Several important activities are already in the implementation phase and others are in the planning stage. The integration with the Albanian citizen portal is currently undergoing testing and awaiting final acceptance. The e-Health section of the portal will allow Albanian citizens to access their medical documents stored in the EHR system. It will provide an auditing report and will allow the submission of consent for data exchange between health care institutions.
It is planned that all implemented national e-Health systems will be connected to the open and standards-based EHR backbone. This will affect the recently implemented and deployed adult check-up system and the e-prescription solution.
Another important direction to be addressed is the further informatization of public hospitals. The Ministry of Health has started preparatory work for the implementation of hospital information systems in all regional hospitals in the country. All new IT systems for clinical information management procured by the MOH will be connected to the EHR backbone.
References
1. Introduction
response that are caused by pharmacogenetic variants [6]. The effects of these gene-drug-drug interactions (GDDI) are often subsumed under the term “phenoconversion” to account for a mismatch between the “phenotype” predicted merely on the basis of a patient’s PGx results and the real phenotype, which may be influenced by many more factors. If not sufficiently addressed by CDS algorithms, prescription-drug-mediated phenoconversion has the potential to undermine the power of PGx CDS interventions in finding the right dosage for the individual patient. While these important limitations of PGx-guided prescribing are commonly acknowledged, the actual extent of prescription-drug-mediated phenoconversion remains unclear. In this study, we aimed to assess the frequency of potential GDDIs in the Austrian population by screening claims data from four federal provinces for concomitant use of PGx drugs (i.e. drugs that can be dosed based on PGx guidelines) and drugs that are known to act as inhibitors or inducers of the affected enzyme or transporter.
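The screening step described above can be sketched as an interval-overlap check over prescriptions. This is a simplified illustration only, not the authors' actual pipeline; the interaction table, the example prescriptions and the overlap criterion are assumptions made for the sketch:

```python
from datetime import date, timedelta

# Hypothetical interaction table:
# PGx drug -> (gene, {inhibitor/inducer: inhibition level})
INTERACTIONS = {
    "tramadol": ("CYP2D6", {"paroxetine": "strong", "citalopram": "unclassified"}),
    "simvastatin": ("SLCO1B1", {"clarithromycin": "moderate"}),
}

def overlaps(a_start, a_days, b_start, b_days):
    """True if two inferred intake intervals intersect."""
    a_end = a_start + timedelta(days=a_days)
    b_end = b_start + timedelta(days=b_days)
    return a_start < b_end and b_start < a_end

def concomitant_cases(prescriptions):
    """prescriptions: list of (patient_id, drug, start_date, inferred_days).
    Returns (patient_id, pgx_drug, modulator, gene, level) per overlap."""
    cases = []
    for pid, pgx, s1, d1 in prescriptions:
        if pgx not in INTERACTIONS:
            continue  # not a PGx drug in our table
        gene, modulators = INTERACTIONS[pgx]
        for pid2, drug, s2, d2 in prescriptions:
            if pid2 == pid and drug in modulators and overlaps(s1, d1, s2, d2):
                cases.append((pid, pgx, drug, gene, modulators[drug]))
    return cases

rx = [
    (1, "tramadol", date(2006, 3, 1), 30),
    (1, "paroxetine", date(2006, 3, 15), 30),    # overlaps tramadol intake
    (2, "simvastatin", date(2006, 5, 1), 90),
    (2, "clarithromycin", date(2007, 1, 1), 7),  # no temporal overlap
]
print(concomitant_cases(rx))
# -> [(1, 'tramadol', 'paroxetine', 'CYP2D6', 'strong')]
```

In the real analysis the intake durations are not observed directly but inferred from claims data (e.g. DDDs and package size), which is a limitation the authors discuss below.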
2. Methods
Table 1: Overview of the number of PGx drugs, inhibitors and inducers included in the analysis, broken down
by gene.
Gene PGx drug Inhibitor Inducer Total
CYP3A5 1 64 30 95
CYP2D6 27 60 2 89
CYP2C9 8 37 16 61
CYP2C19 18 19 12 49
SLCO1B1 1 12 0 13
UGT1A1 1 5 2 8
TPMT 3 5 0 8
Table 2: Number of concomitant prescriptions of PGx drugs with inducers or inhibitors within the two-year
observation period from 2006 to 2007, broken down by gene and level of inhibition or induction.
Gene Strong Moderate Weak Unclassified Total
CYP2D6 16,620 34,963 37,356 53,737 142,676
CYP2C19 N/A 58,782 N/A 2,157 60,939
CYP2C9 317 8,117 5,934 5,537 19,905
SLCO1B1 N/A 7,377 N/A N/A 7,377
TPMT N/A N/A N/A 752 752
CYP3A5 61 663 17 8 749
UGT1A1 N/A N/A N/A 0 0
Total 16,998 109,902 43,307 62,191 232,398
3. Results
Our study population consisted of 1,587,829 Austrian insurance holders from four
federal provinces and health insurance funds (“Gebietskrankenkasse Niederösterreich”,
“Gebietskrankenkasse Kärnten”, “Gebietskrankenkasse Salzburg” and
“Betriebskrankenkasse Neusiedler”). In total, 393,476,104 prescriptions issued in the
years 2006 and 2007 were screened for overlapping prescriptions of PGx drugs and
inhibitors or inducers of the respective enzyme or transporter. 928,309 patients (58.8%
of the study population) received at least one PGx drug in the observed time frame.
For 1,124 out of the 4,440 included interaction pairs, our analysis revealed at least one
case of a concomitant prescription of a PGx drug with an inhibitor or inducer in the
observed time frame. In sum, our analysis detected a total of 232,398 such cases,
implying that, on average, every fourth patient who was treated with a PGx drug was
concomitantly prescribed an inhibitor or inducer of the respective enzyme or transporter.
More than half of those cases could be attributed to concomitant prescriptions of
PGx drugs with moderate (47.3%) or strong (7.3%) inhibitors or inducers. Concomitant
prescriptions of PGx drugs with weak or unclassified inhibitors or inducers accounted
for 18.6% and 26.8% of all cases, respectively.
As can be seen from Table 2, interaction pairs associated with the CYP2D6 gene
made up more than half of all detected cases, followed by CYP2C19 which accounted
for more than a quarter of the overall number of cases.
Table 3 shows the ten most prescribed interaction pairs together with the affected gene
and the level of inhibition or induction that is potentially caused by the agent.
Antidepressants with moderate or weak inhibiting properties (i.e. citalopram,
escitalopram, fluoxetine) were strongly represented among the most prescribed pairs.
Similarly, the ten most prescribed pairs of PGx drugs with strong inhibitors or
inducers almost exclusively consisted of combinations of the antidepressants paroxetine
and fluoxetine with other antidepressants or pain medication (see Table 4).
K. Blagec et al. / The Importance of Gene-Drug-Drug-Interactions in PGx Decision Support 125
Table 3. Top 10 most prescribed pairs of PGx drugs and inducers or inhibitors
Inhibitor / Inducer  PGx drug  Degree of inhibition / induction  Gene  Number of concomitant prescriptions
Clarithromycin Simvastatin Moderate inhibition SLCO1B1 5,483
Citalopram Tramadol Unclassified inhibition CYP2D6 5,111
Dexamethasone Tramadol Unclassified induction CYP2D6 4,423
Carvedilol Tramadol Moderate inhibition CYP2D6 4,339
Omeprazole Citalopram Moderate inhibition CYP2C19 4,253
Escitalopram Mirtazapine Weak inhibition CYP2D6 3,783
Escitalopram Tramadol Weak inhibition CYP2D6 3,647
Fluoxetine Pantoprazole Moderate inhibition CYP2C19 3,639
Metoclopramide Tramadol Unclassified inhibition CYP2D6 3,459
Citalopram Mirtazapine Unclassified inhibition CYP2D6 3,101
An overview of the ten overall most co-prescribed drugs with inhibiting and/or
inducing properties across all interaction pairs can be found in Table 5.
4. Discussion
While it is commonly acknowledged that the intake of multiple prescription drugs has
the potential to weaken the significance of pharmacogenomic testing in predicting drug
response, the actual extent of potential prescription-drug mediated phenoconversion
remains unclear. This study aimed to address this gap by determining the frequency of
concomitant prescription of drugs that can be subject to PGx-based dosing, and drugs
that have the potential to modulate the activity of precisely those enzymes and
transporters whose function PGx test results try to predict.
Our results indicate that, on average, every fourth person with a prescription of a
PGx drug is concomitantly treated with an inhibitor or inducer of the respective enzyme
or transporter, which—if not adequately addressed by decision support algorithms—has
the potential to dilute the significance of PGx test results in finding the right dosage for
the individual patient.
While both the development of clinical decision support solutions (CDS) for PGx
testing and the optimization of CDS for drug-drug-interactions (DDI) have been major
research focuses in the past years, far too little attention has been paid to the intersection
between those two fields, i.e. gene-drug-drug interactions [12-16]. In light of the
progressing efforts to integrate PGx into clinical routine, it will be essential to ensure that important GDDI are adequately represented in clinical decision support solutions, to call the healthcare provider’s attention to their significance for PGx-based prescribing.
Table 4. Top 10 most prescribed pairs of PGx drugs and strong inducers or inhibitors
Inhibitor / Inducer  PGx drug  Degree of inhibition / induction  Gene  Number of concomitant prescriptions
Paroxetine Tramadol Strong inhibition CYP2D6 1,808
Paroxetine Mirtazapine Strong inhibition CYP2D6 1,563
Fluoxetine Tramadol Strong inhibition CYP2D6 1,528
Fluoxetine Mirtazapine Strong inhibition CYP2D6 1,153
Paroxetine Metoprolol Strong inhibition CYP2D6 1,121
Paroxetine Carvedilol Strong inhibition CYP2D6 897
Fluoxetine Metoprolol Strong inhibition CYP2D6 840
Paroxetine Amitriptyline Strong inhibition CYP2D6 752
Paroxetine Risperidone Strong inhibition CYP2D6 693
Fluoxetine Carvedilol Strong inhibition CYP2D6 671
Fluoxetine Amitriptyline Strong inhibition CYP2D6 648
Table 5. Top 10 most co-prescribed drugs with inhibiting and/or inducing properties
Drug  Number of concomitant prescriptions
Citalopram 8,212
Escitalopram 7,430
Clarithromycin 5,483
Dexamethasone 4,423
Carvedilol 4,339
Omeprazole 4,253
Fluoxetine 3,639
Metoclopramide 3,459
Amiodarone 3,081
Sertraline 3,033
A limitation of our study lies in the fact that the durations of medication intake captured by GAP-DRG are inferred from measures such as DDDs and package size, which do not necessarily reflect the real treatment duration. However, given that a sizeable fraction of the drugs considered in this analysis are primarily used for the long-term treatment of chronic conditions, an overlap in the real intake duration can be assumed in the majority of cases.
Furthermore, the results we present here are based on a retrospective analysis of
claims data from four Austrian federal provinces of the years 2006 and 2007. Prescribing
practices may vary between different regions and countries, and change over time, which
makes our findings less generalizable to other healthcare settings or later prescription
periods. More recent claims data from the years 2008 to 2011 would have been available
via the GAP-DRG2 database. However, using this dataset for our analysis would have
yielded even more regionally restricted results since only the federal province of Lower
Austria is covered.
This study emphasizes the importance of addressing GDDIs in PGx CDS systems
by showing that co-prescriptions of PGx drugs with inhibitor and inducer drugs are not
uncommon. For our analysis, we used a simple categorization scheme to grade the degree
of inhibition or induction a drug potentially causes. While such schemes could, in a first
step, be helpful in compiling a list of high priority GDDIs for use by PGx CDS systems
to alert medical professionals to the risk of potential interactions, predicting the actual
effects on a patient’s drug response phenotype in the presence of pharmacogenetic
variants will heavily rely on the availability of more detailed pharmacometric models
which take into account the involved PGx variants, drug substances and
inhibition/induction mechanisms.
Acknowledgment
We want to thank the Main Association of Austrian Social Security Institutions and
especially Gottfried Endel for granting us access to their database.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 668353 and from the Austrian Science Fund (FWF) [P 25608-N15].
References
[1] J.L. Mega, T. Simon, J.-P. Collet, J.L. Anderson, E.M. Antman, K. Bliden, C.P. Cannon, N. Danchin,
B. Giusti, P. Gurbel, B.D. Horne, J.-S. Hulot, A. Kastrati, G. Montalescot, F.-J. Neumann, L. Shen, D.
Sibbing, P.G. Steg, D. Trenk, S.D. Wiviott, M.S. Sabatine, Reduced-function CYP2C19 genotype and
risk of adverse clinical outcomes among patients treated with clopidogrel predominantly for PCI: a meta-
analysis, JAMA. 304 (2010) 1821–1830. doi:10.1001/jama.2010.1543.
[2] SEARCH Collaborative Group, E. Link, S. Parish, J. Armitage, L. Bowman, S. Heath, F. Matsuda, I.
Gut, M. Lathrop, R. Collins, SLCO1B1 variants and statin-induced myopathy--a genomewide study, N.
Engl. J. Med. 359 (2008) 789–799. doi:10.1056/NEJMoa0801936.
[3] M. Pirmohamed, G. Burnside, N. Eriksson, A.L. Jorgensen, C.H. Toh, T. Nicholson, P. Kesteven, C.
Christersson, B. Wahlström, C. Stafberg, J.E. Zhang, J.B. Leathart, H. Kohnke, A.H. Maitland-van der
Zee, P.R. Williamson, A.K. Daly, P. Avery, F. Kamali, M. Wadelius, A Randomized Trial of Genotype-
Guided Dosing of Warfarin, New England Journal of Medicine. 369 (2013) 2294–2303.
doi:10.1056/NEJMoa1311386.
[4] R.R. Shah, D.R. Shah, Personalized medicine: is it a pharmacogenetic mirage?, British Journal of
Clinical Pharmacology. 74 (2012) 698–721. doi:10.1111/j.1365-2125.2012.04328.x.
[5] R.R. Shah, R.L. Smith, Addressing phenoconversion: the Achilles’ heel of personalized medicine:
Impact of phenoconversion, British Journal of Clinical Pharmacology. 79 (2015) 222–240.
doi:10.1111/bcp.12441.
[6] M. Klieber, H. Oberacher, S. Hofstaetter, B. Beer, M. Neururer, A. Amann, H. Alber, A. Modak,
CYP2C19 Phenoconversion by Routinely Prescribed Proton Pump Inhibitors Omeprazole and
Esomeprazole: Clinical Implications for Personalized Medicine, J. Pharmacol. Exp. Ther. 354 (2015)
426–430. doi:10.1124/jpet.115.225680.
[7] J.J. Swen, M. Nijenhuis, A. de Boer, L. Grandia, A.H. Maitland-van der Zee, H. Mulder, G.A.P.J.M.
Rongen, R.H.N. van Schaik, T. Schalekamp, D.J. Touw, J. van der Weide, B. Wilffert, V.H.M. Deneer,
H.-J. Guchelaar, Pharmacogenetics: From Bench to Byte— An Update of Guidelines, Clinical
Pharmacology & Therapeutics. 89 (2011) 662–673. doi:10.1038/clpt.2011.34.
[8] K.E. Caudle, T.E. Klein, J.M. Hoffman, D.J. Muller, M. Whirl-Carrillo, L. Gong, E.M. McDonagh, K.
Sangkuhl, C.F. Thorn, M. Schwab, J.A.G. Agundez, R.R. Freimuth, V. Huser, M.T.M. Lee, O.F.
Iwuchukwu, K.R. Crews, S.A. Scott, M. Wadelius, J.J. Swen, R.F. Tyndale, C.M. Stein, D. Roden, M.V.
128 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-128
1. Introduction
The prevalence of multimorbidity will rise with the predicted global demographic change
and the associated aging of the population. Multimorbid patients are treated with
complex polypharmacy, which is commonly defined as the long-term intake of more than
five drugs [1]. Polypharmacy increases the risk of drug-related problems (DRP),
including inappropriate prescribing (IP), medication errors, and adverse drug reactions and
events (ADR, ADE), which in turn are frequent causes of hospitalization and death [2].
In Germany, several studies [3,4] reported a hospitalization rate of 3-7% due to ADE,
which led to more than 16,000 drug-related deaths [5] and more than 400 million Euros
in additional health costs [6] annually. Additionally, 340,000 to 720,000 patients suffer
lasting harm as a consequence of ADE [5]. Up to 50% of all ADE-related
hospitalizations are judged to be preventable by avoiding prescribing errors [7]. The
causes of prescribing errors are multifaceted and complex, including
prescribers' lack of medication-related information and insufficient knowledge of geriatric
pharmacotherapy. Furthermore, pharmacotherapy is frequently not perceived as a
1
Corresponding Author: Alban Shoshi, Bioinformatics/Medical Informatics Department, Bielefeld
University, Universitätsstraße 25, D-33615 Bielefeld, E-Mail: alban.shoshi@uni-bielefeld.de.
A. Shoshi et al. / KALIS – An eHealth System for Biomedical Risk Analysis of Drugs 129
risk process; thus, potential drug-drug interactions (DDI) and drug-disease
interactions, which are associated with a significantly increased risk of ADE, remain
undetected. Moreover, most therapeutic guidelines address the prescribing of each
drug individually and do not consider multiple diseases when discussing the applicability of
their recommendations. There is therefore growing evidence that polypharmacy
resulting from the recommendations of different therapeutic guidelines can cause
more harm than benefit in multimorbid patients.
A variety of commercial and scientific efforts have produced numerous drug-related
databases and systems. Databases such as ifap index®KLINIK [8],
DRUG-REAX® [9], Lexi-Interact® [10] and Drug Interaction Facts® [11] have been shown
to contribute substantially to minimizing prescribing errors, but have also revealed a
large discrepancy in the number and clinical relevance of detected DDI [12]. Accordingly,
at least two drug-related databases are needed to meet the high medical-pharmacological
requirements of a reliable drug analysis for patient treatment. Moreover,
optimizing polypharmacy without additional biomolecular information remains
complicated. A comprehensive, patient-specific analysis needs to include
biomolecular data to gain a better understanding of the underlying mechanisms of
drug action and the potential impact on the disease. Especially for dose
adjustment of psychotropic drugs, information on pharmacogenetic interactions is
essential for health professionals trying to improve the individual response and thus
reduce side effects.
For this purpose, we have developed an eHealth system named KALIS for
biomedical risk analysis of drugs. The underlying data warehouse named KALIS-DWH
integrates drug-related pharmacological, biomolecular, and patient-related databases.
This data warehouse enables KALIS to provide task-specific modules for analysis and
visualization of drug risks.
2. Methods
2
SAX is the Simple API for XML and was the first widely adopted API for XML in Java.
performed to extract the datasets, transform the data for the respective MySQL database
and load it efficiently into KALIS-DWH. Furthermore, relevant metadata for user
management and data analysis is stored in a separate database.
The pharmacological databases of KALIS-DWH were fused with the biomolecular
databases such as DrugBank [22], SIDER [23] or KEGG [24] via the interface to
DAWIS-M.D. This biomolecular data can be used for knowledge discovery of the
underlying mechanism of drug action or the potential impact on the disease. The
integration of the biomolecular data sources into DAWIS-M.D. was performed by
implementing SAX-Parser in Java and using the software kit BioDWH [25]. National
and international standards were used for coding, mapping and assignment of medical
information such as drugs, therapeutic indications, diseases and side effects. These
standards include medical-pharmacological classifications and terminologies such as
ATC [26], ICD-10 [27] or MedDRA [28]. In this way, the homogeneous data warehouses
KALIS-DWH and DAWIS-M.D. provide pharmacological and biomolecular
information for efficient and goal-oriented risk analysis of drugs. The standardized codes
support the accuracy of data input and processing as well as simple data exchange and
uniform communication between KALIS and the end user.
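This integration step can be illustrated with a minimal sketch: a SAX handler stream-parses a large XML export and emits only the fields needed for the warehouse load. The element names `drug` and `name` below are hypothetical stand-ins, not the actual schema of DrugBank or KALIS-DWH.

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

/** Minimal SAX-based extraction step; element names are hypothetical stand-ins. */
public class DrugSaxExtractor {

    /** Collects the text content of every <name> element nested inside a <drug> element. */
    public static List<String> extractDrugNames(String xml) {
        List<String> names = new ArrayList<>();
        DefaultHandler handler = new DefaultHandler() {
            private boolean inDrug = false;
            private boolean inName = false;
            private final StringBuilder text = new StringBuilder();

            @Override
            public void startElement(String uri, String local, String qName, Attributes atts) {
                if (qName.equals("drug")) inDrug = true;
                if (inDrug && qName.equals("name")) { inName = true; text.setLength(0); }
            }

            @Override
            public void characters(char[] ch, int start, int length) {
                if (inName) text.append(ch, start, length);
            }

            @Override
            public void endElement(String uri, String local, String qName) {
                if (qName.equals("name")) { inName = false; names.add(text.toString()); }
                if (qName.equals("drug")) inDrug = false;
            }
        };
        try {
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(new InputSource(new StringReader(xml)), handler);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return names;
    }

    public static void main(String[] args) {
        String xml = "<drugs><drug><name>Simvastatin</name></drug>"
                   + "<drug><name>Amiodarone</name></drug></drugs>";
        System.out.println(extractDrugNames(xml)); // [Simvastatin, Amiodarone]
    }
}
```

SAX is preferred over DOM here because the source exports are large: the handler keeps only the current element state in memory rather than the whole document tree.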
CYP-P450
The most important pharmacokinetic interactions occur at the level of biotransformation.
The family of Cytochrome P450 Enzymes (CYP) plays a crucial role in the
biotransformation of many substances. Inter-individual variability in the
biotransformation of drugs, caused by enzyme induction or inhibition and by genetic
polymorphisms, is also a significant issue in drug therapy. Based on these facts, a
new database CYP-P450 was designed, which contains information on interactions
between substances and CYP enzymes in the liver and kidney. Currently, the database
contains 680 substrates, 30 enzymes, 2,661 interactions and 738 references. This data is
primarily based on the results of the literature research of Dippl [20].
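The kind of lookup such a database enables can be sketched with a toy model; the drug-enzyme roles below are well-known textbook examples chosen for illustration, not entries taken from the CYP-P450 database.

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

/** Toy model of enzyme-mediated interaction checking; the drug-enzyme roles are
 *  well-known textbook examples, not entries from the CYP-P450 database. */
public class CypCheck {
    enum Role { SUBSTRATE, INHIBITOR, INDUCER }

    // drug -> (enzyme -> role)
    static final Map<String, Map<String, Role>> PROFILE = Map.of(
        "simvastatin", Map.of("CYP3A4", Role.SUBSTRATE),
        "clarithromycin", Map.of("CYP3A4", Role.INHIBITOR),
        "rifampicin", Map.of("CYP3A4", Role.INDUCER));

    /** Enzymes over which drug b perturbs (inhibits or induces) the metabolism of drug a. */
    public static Set<String> sharedEnzymeRisks(String a, String b) {
        Set<String> risks = new TreeSet<>();
        Map<String, Role> pa = PROFILE.getOrDefault(a, Map.of());
        Map<String, Role> pb = PROFILE.getOrDefault(b, Map.of());
        for (Map.Entry<String, Role> e : pa.entrySet()) {
            Role rb = pb.get(e.getKey());
            if (e.getValue() == Role.SUBSTRATE && (rb == Role.INHIBITOR || rb == Role.INDUCER))
                risks.add(e.getKey());
        }
        return risks;
    }

    public static void main(String[] args) {
        // clarithromycin inhibits CYP3A4, the enzyme that metabolizes simvastatin
        System.out.println(sharedEnzymeRisks("simvastatin", "clarithromycin")); // [CYP3A4]
    }
}
```

The real database additionally stores references and organ context (liver, kidney), but the core query pattern — flagging a substrate co-prescribed with an inhibitor or inducer of the same enzyme — is the one shown here.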
PRISCUS-Liste
Due to age-related changes in pharmacokinetics and pharmacodynamics as well as increasing
multimorbidity, numerous drugs are considered potentially inappropriate for elderly
patients (> 65 years). For this purpose, the Priscus List was created as part of the joint
project “PRISCUS” [21], which was funded by the German Federal Ministry of
Education and Research (BMBF). The Priscus List includes 83 drugs from 18 drug classes,
adapted to the German drug market; for these drugs, the risk of side effects
or age-related complications outweighs the medical benefit. Therefore, a new database
PRISCUS-Liste was designed, which contains all the information on these 83 potentially
inappropriate drugs (e.g. reason for inclusion, therapeutic alternatives).
The server-side application logic is responsible for the algorithmic analysis, processing and
visualization of the data from KALIS-DWH. The focus is on the web-based availability of
drug-related information associated with patient-specific risk factors or therapeutic
targets. Accordingly, a wide range of task-specific modules for decision support in drug
therapy is offered by KALIS. The major modules are described in the following:
1) Pharmacological risk-check
This module enables users to check the patient-related data for drug-drug
interactions, contra-indications, side effects, drug allergies, double prescriptions
and drug-food interactions. Using two recognized databases, ABDAMED and
ROTE LISTE, the quality of the interaction data is significantly increased,
resulting in a higher detection rate and clinical relevance of the interactions.
For further support and improvement of medication process, alternative
therapies are also determined and proposed to the user.
2) Inadequate medications
This module allows users to check the prescription of elderly patients (> 65
years) for potentially inadequate drugs.
3) Pharmacogenetic drug interactions
Many drugs inhibit or induce the activity of CYP enzymes, which is important for
health professionals trying to determine an appropriate dosage of those drugs. This
module predicts these interactions between substrates and the family of cytochrome
P450 enzymes (CYP).
4) Adverse drug events
Incident reports of ADE can provide new impulses for identifying potential
triggers of side effects that are not yet included in the medicinal product's
professional information. This module supports the user in searching for ADE
reports on a drug, taking patient characteristics (age, gender, etc.) into account.
5) Patient-specific medication assistant in hypertension
This module assists in computing patient-specific diagnostic scores
(PIDS) for hypertension using evidence-based therapeutic guidelines [29] and
individual patient data (age, blood pressure, gender, etc.). The predicted PIDS
defines the suitability of a drug for the treatment of hypertension.
6) Diagnoses-based drugs-check
With this module, the medications corresponding to the entered diagnoses can
be checked for adverse drug-drug interactions. Through the preventive
identification of interactions and corresponding measures, prescribing errors can be
avoided.
7) Molecular risk-check
Besides the pharmacological risk analysis of drugs (1-5), this module gives
insight into the underlying biomedical networks where drugs interact at the
biomolecular level. It includes various search forms for the analysis of drug-
drug interactions, drug-molecule interactions and side effects using additional
biomolecular data via an interface to the partner system GraphSAW [30].
These comprehensive modules can help pinpoint drug risks at the
pharmacological and biomolecular levels.
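The pairwise check behind module 1 can be sketched as follows; the interaction table here is a hypothetical stand-in for the licensed ABDAMED and ROTE LISTE data.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

/** Sketch of the pairwise DDI check in module 1; the interaction table is a
 *  hypothetical stand-in for ABDAMED/ROTE LISTE data. */
public class RiskCheck {
    /** Unordered drug pair encoded as a canonical lookup key. */
    static String pairKey(String a, String b) {
        return a.compareTo(b) <= 0 ? a + "|" + b : b + "|" + a;
    }

    /** Returns every known interacting pair within the patient's medication list. */
    public static List<String> findInteractions(List<String> medication, Set<String> ddiTable) {
        List<String> hits = new ArrayList<>();
        for (int i = 0; i < medication.size(); i++)
            for (int j = i + 1; j < medication.size(); j++) {
                String key = pairKey(medication.get(i), medication.get(j));
                if (ddiTable.contains(key)) hits.add(key);
            }
        return hits;
    }

    public static void main(String[] args) {
        Set<String> ddi = Set.of(pairKey("warfarin", "amiodarone"));
        System.out.println(findInteractions(
            List.of("warfarin", "metformin", "amiodarone"), ddi)); // [amiodarone|warfarin]
    }
}
```

The production modules naturally attach severity grades and therapy alternatives to each hit; the quadratic pairwise scan over the medication list is nonetheless the essential shape of the check.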
3. Results
4. Discussion
The consistency of the drug-drug interactions and drug side effects among the integrated
pharmacological and biomolecular databases was analyzed in a former study [30, 31].
This comparative assessment showed important discrepancies in comprehensiveness and
accuracy of DDI and side effects among the databases ABDAMED, DrugBank and
SIDER. Beyond that, data quality increased by merging the knowledge of these
databases: the combination increases the information density of DDI (> 30%) and side
effects (> 60%), resulting in a higher detection rate and clinical relevance in a risk
analysis. In addition, the study indicated that at least one in a hundred side effects
represents a drug-induced disease. However, given errors and inconsistencies in their
content, biomolecular databases are intended for scientific research purposes only.
Moreover, the extent of side effects differs between patients due to several factors such
as inter-individual genetic diversity, diet or environment. Some of these aspects, such as
a patient's functional level, values and preferences, can be considered appropriately by
implementing an interface between KALIS and a computerized physician order entry
(CPOE) system. A CPOE system uses established standards, e.g. Health Level 7 (HL7),
to exchange the patient-related data and to address clinical needs in support of the
prescribing process.
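The reported gain in information density from merging databases amounts to a simple set union over interaction pairs; a minimal sketch with invented pair identifiers:

```java
import java.util.HashSet;
import java.util.Set;

/** Illustrates why fusing two DDI sources raises the detection rate: the union can
 *  contain substantially more pairs than either source alone. Pair names are invented. */
public class MergeGain {
    /** Relative gain of the merged set over the larger single source. */
    public static double gain(Set<String> a, Set<String> b) {
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return (double) union.size() / Math.max(a.size(), b.size()) - 1.0;
    }

    public static void main(String[] args) {
        Set<String> dbA = Set.of("p1", "p2", "p3", "p4");
        Set<String> dbB = Set.of("p3", "p4", "p5", "p6");
        System.out.printf("relative gain: %.0f%%%n", gain(dbA, dbB) * 100); // relative gain: 50%
    }
}
```

In practice the harder part is entity matching — recognizing that two sources describe the same drug pair via ATC codes or names — which the standardized coding described in the Methods section makes possible.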
6. References
[1] T. Jörgensen, S. Johansson, A. Kennerfalk, M.A. Wallander, K. Svärdsudd, Prescription drug use,
diagnoses, and healthcare utilization among the elderly, Ann Pharmacothe 35(9) (2001), 1004-1009.
[2] A.H. Lavan, P.F. Gallagher, D. O’Mahony, Methods to reduce prescribing errors in elderly patients with
multimorbidity, Clin Interv Aging 11 (2016), 857-866.
[3] H. Dormann, M. Criegee-Rieck, A. Neubert, T. Egger, A. Geise, S. Krebs, T. Schneider, M. Levy, E.
Hahn, K. Brune, Lack of awareness of community-acquired adverse drug reactions upon hospital
admission: dimensions and consequences of a dilemma, Drug Saf 26(5) (2003), 353-362.
[4] S. Rottenkolber, S. Schmiedl, M. Rottenkolber, K. Farker, K. Saljé, S. Mueller, M. Hippius, P.A.
Thuermann, J. Hasford, Adverse drug reactions in Germany: direct costs of internal medicine
hospitalizations, Pharmacoepidemiol Drug Saf 20(6) (2011), 626-634.
[5] G. Glaeske, F. Hoffmann, Der „Wettbewerb“ der Leitlinien bei älteren Menschen – Multimorbidität
und Polypharmazie als Problem, NeuroGer 6(3) (2009), 115-122.
[6] S. Schneeweiss, J. Hasford, M. Göttler, A. Hoffmann, A.K. Riethling, J. Avorn, Admissions caused by
adverse drug events to internal medicine and emergency departments in hospitals: a longitudinal
population-based study, Eur J Clin Pharmacol 58(4) (2002), 285-291.
[7] M. Pirmohamed, S. James, S. Maekin, C. Green, A.K. Scott, T.J. Walley, K. Farrar, B.K. Park, A.M.
Breckenridge, Adverse drug reactions as cause of admission to hospital: prospective analysis of 18 820
patients, BMJ 329(7456) (2004), 15-19.
[8] ifap index®KLINIK, http://www.ifap.de, last access: 30.03.2016.
[9] DRUG-REAX®, http://www.truvenhealth.com, last access: 30.01.2017.
[10] Lexi-Interact®, http://www.wolterskluwercdi.com, last access: 30.01.2017.
[11] Drug Interaction Facts®, http://www.clineguide.com, last access: 30.01.2017.
1. Introduction
The European eHealth Action Plan 2012-2020 [1] explicitly states the importance and
benefits of eHealth services and the requirements for a wider adoption of interoperability
standards. The EU funding framework Connecting Europe Facility (CEF)
addresses eHealth as one of its sectors with a Digital Service Infrastructure (DSI) [2]. The
increasing mobility of EU citizens is a challenge: for a critical number of citizens, the
place of work and the main residence are situated in different countries. In the case of
injury or disease, this raises challenges for the distribution of and access to medical
information and services. Ideally, interoperable IT systems foster the interconnectivity of
healthcare IT systems such as Electronic Health Record (EHR) systems, not only on a
large scale, but also, for example, on a bilateral level between
1
Corresponding Author: Philipp Urbauer, UAS Technikum Wien, Hoechstaedtplatz 6,
E-Mail: philipp.urbauer@technikum-wien.at.
P. Urbauer et al. / Propose of Standards Based IT Architecture 137
hospital associations. This fact is once more reinforced by the national health
record projects in Europe, like the Austrian ELGA.
Nevertheless, apart from such prestigious projects, telemonitoring projects are
emerging in diverse forms. Many different telemonitoring approaches exist in a
dynamic field with fast-evolving technologies that potentially benefit different
stakeholders. As Maeng et al. [3] showed in a study from 2008-2012, readmissions were
reduced by 38 to 44 percent, thus reducing overall costs. These numbers show the high
potential of telemonitoring to improve efficiency and save costs. Further improvements
can be expected from the integration of state-of-the-art interoperability IT standards
such as the architectural building blocks of the Continua Health Alliance (CHA) [4].
The funded project “INNOVATE” aims to investigate interoperability standards and
to design and implement “development kits”, i.e. interoperable and modular
IT framework components for the integration and exchange of data from eHealth,
mHealth and open data applications and data sources, based on interoperability
standards. The project builds on past work on the investigation, design, implementation
and testing of interoperability standards for telemonitoring [5,6]. These approaches were
concerned with a personal health device setup in telemonitoring and EHR systems,
measuring data and transmitting it via interoperability standards.
In this work, the focus is extended to third-party data sources such as open data and big
data platforms, for the use case of integrating pollen allergy forecast data.
Allergies are the 6th leading cause of chronic illness in the U.S., with an annual cost of
18 billion dollars, and there is an important need to support affected patients [7].
Therefore, the proposed standards-based IT architecture was applied in a prototypic
setup for the integration of pollen forecast data provided by the Medical University of
Vienna. Personal Health Device (PHD) data from a telemonitoring setup was correlated
with this forecast data, which may raise the quality of life of affected persons in the
future.
2. Methods
In a first step, an internet-based literature search according to [8] was performed to
survey the state of the art in the application of IT standards in the field of telemonitoring.
The search was conducted using the databases ScienceDirect [9], IEEE Xplore [10],
PubMed [11] as well as Google Scholar [12]. The following keywords and their
combinations were applied as parameters:
• Telemonitoring
• Continua Health Alliance
• IHE (Integrating the Healthcare Enterprise)
• HL7 (Health Level 7)
• FHIR (Fast Healthcare Interoperability Resources)
• IEEE 11073
• Bluetooth Low Energy
• Standardization
The selection process considered the recency and significance of the papers as
well as the recency and extent of practical application of the referenced standards.
After this process, an expert review was conducted to investigate the selected papers
in detail. As a result, the standards were selected and the standards-based IT architecture
was proposed. Subsequently, the first prototypes were implemented as a proof of
concept. The hardware used in this setup was:
• Nonin Onyx Vantage 9590 Finger Pulse Oximeter (Continua Certified, IEEE
11073 standards family based)
• A&D Medical Blood Pressure Monitor UA-651ble (Continua Certified &
Bluetooth Low Energy)
• Android 6.0 (Marshmallow) on a OnePlus 3 Smartphone
• Asus Zen Watch 2 (Smart Watch Bluetooth v4.1 BLE)
• Open source HAPI FHIR for the interfaces and the server [13,14]
The requirements for the integration of the pulse oximeter and the blood pressure
device are derived from the CHA design guidelines [15] and CHA interface guidelines
[16]. To meet the interoperability requirements derived from the use case, feasibility
was tested by performing interoperability tests with the prototypes. On this basis, a
possible correlation between changes in the patient's vital parameters (SpO2, pulse and
blood pressure values) measured with the telemonitoring devices and the real-time
pollen allergy data recorded within the same timeframe can be examined and reported.
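A minimal sketch of this analysis step, assuming Pearson correlation over day-aligned series; the values below are synthetic, since the study itself only defines that such a correlation should be examined.

```java
/** Sketch of the intended analysis: correlating a vital-sign series with same-day
 *  pollen loads. All data values are synthetic examples. */
public class VitalPollenCorrelation {
    /** Pearson correlation coefficient of two equally long series. */
    public static double pearson(double[] x, double[] y) {
        int n = x.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;
        double sxy = 0, sxx = 0, syy = 0;
        for (int i = 0; i < n; i++) {
            sxy += (x[i] - mx) * (y[i] - my);
            sxx += (x[i] - mx) * (x[i] - mx);
            syy += (y[i] - my) * (y[i] - my);
        }
        return sxy / Math.sqrt(sxx * syy);
    }

    public static void main(String[] args) {
        double[] pollenLoad = {0, 1, 2, 3, 4};      // daily pollen forecast level (synthetic)
        double[] pulse      = {62, 64, 67, 69, 73}; // measured pulse values (synthetic)
        System.out.printf(java.util.Locale.ROOT, "r = %.2f%n",
                pearson(pollenLoad, pulse)); // r = 0.99
    }
}
```

A real evaluation would of course need per-patient baselines and far more data points before any clinical interpretation; the sketch only shows the mechanical correlation step.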
3. Results
Figure 1. The proposed architectural approach for the overall system, divided into three main components
based on interoperability standards. The telemonitoring component is concerned with the integration of
personal health devices via an Android application. The server infrastructure component shows the core
components for data storage and visualization. The connector component provides the necessary standards-based
interfaces and mapping algorithms to integrate different data sources (pollen forecast data in this proof
of concept).

The literature research focused on identifying and studying architectures and data
exchange approaches from eHealth systems as well as from other domains, like Smart City
related systems or cloud approaches, independent of the health sector. This approach was
selected to include a wider perspective of technologies, since the integration of
further 3rd-party open data sources is a clear aim of the current project. Among the
identified architectures, common approaches could be recognized, one example being the
registry/repository model, similar to the definition of the “Integrating the Healthcare
Enterprise” (IHE) XDS profile within the IT Infrastructure Technical Framework [17].
Based on these methods, the authors already proposed architectural approaches for the
integration of open data platforms [18]. In the next step, these approaches were extended
with lightweight technologies, i.e. RESTful instead of SOAP-based web services; such an
approach was shown for a Smart City in [19]. Consequently, the decision was made to use
a RESTful, i.e. FHIR-based, web service approach, as the integration and application of
IT standards is a clear requirement for this work.
The specifications shown in Figure 1 were derived from the stated literature
research, i.a. [19-21], the expert review and previous work [18], and include the Continua
Health Alliance, the IEEE 11073 (X73) standards family, Bluetooth Low Energy and HL7
FHIR (Fast Healthcare Interoperability Resources). Figure 1 shows the resulting system,
providing data aggregation, data analysis and data exchange capabilities; the latter is
completely based on the previously stated interoperability standards. Basically, the
system was divided into three components, described in the following.
Telemonitoring Component
This component focuses on the telemonitoring part of the system. Two devices were used
in the setup. The first was the Onyx Vantage 9590 Finger Pulse Oximeter from the
company Nonin. This device, which is a Continua Certified Device, continuously
measures oxygen saturation as well as pulse and transmits the measured data via the
IEEE 11073-20601 Optimized Exchange Protocol using the specified Medical Device
Encoding Rules (MDER). In this case the transport channel of the protocol is Bluetooth
by using the Health Device Profile (HDP) and Secure Simple Pairing (SSP). The device
acts as an X73-Agent, which is the generic term for a Personal Health Device (PHD) in
Continua’s terminology. The acquired data is then sent to the Android application
running on the OnePlus 3 smartphone, both acting as the X73-Manager in Continua terms.
The second PHD used in the setup is the A&D Medical Blood Pressure Monitor UA-
651ble (CHA Certified), using Bluetooth Low Energy (BLE) technology as described in
the CHA interface design guidelines [16]. It transmits systolic, diastolic and mean
arterial pressure as well as pulse. The smartphone and smart watch (Asus Zen Watch
2) acted as Continua Managers. They include an X73 & BLE interoperability connector
(not shown in the system architecture), which is capable of reading and extracting the
measured data. This was performed using the X73 specifications via Bluetooth HDP for
the pulse oximeter and the BLE specifications for the blood pressure monitor. Therefore,
the prototype system is able to support three different Manager devices: smartphone,
tablet as well as smartwatch.
Server Infrastructure Component
This component focuses on the server side of the system. Data from the PHDs is
transformed and transmitted by the Android application, via the interoperability
connector described in the connector component, to an instance of the open source FHIR
server FHIRbase [13]. The server stores and provides the data via its FHIR APIs to the
visualization application. The server is therefore the core component for long-term
persistence and also stores additional data from other interconnected data sources, as
described in the connector component. The visualization application finally requests the
collected data (SpO2, pulse, systolic, diastolic and mean arterial pressure) for analysis.
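To illustrate the payload shape, a vital-sign measurement can be expressed as a FHIR Observation resource (FHIR supports both XML, as in Figure 2, and JSON). The hand-rolled builder below is only a sketch; the actual system uses HAPI FHIR [13,14] to construct and transmit resources. The LOINC code 59408-5 denotes oxygen saturation measured by pulse oximetry.

```java
import java.util.Locale;

/** Hand-rolled sketch of a FHIR Observation payload for an SpO2 measurement;
 *  the production system builds resources with HAPI FHIR instead. */
public class ObservationJson {
    public static String spo2Observation(String patientId, double percent, String effective) {
        // Locale.ROOT keeps the decimal point independent of the system locale.
        return String.format(Locale.ROOT,
            "{\"resourceType\":\"Observation\",\"status\":\"final\","
          + "\"code\":{\"coding\":[{\"system\":\"http://loinc.org\",\"code\":\"59408-5\","
          + "\"display\":\"Oxygen saturation\"}]},"
          + "\"subject\":{\"reference\":\"Patient/%s\"},"
          + "\"effectiveDateTime\":\"%s\","
          + "\"valueQuantity\":{\"value\":%.1f,\"unit\":\"%%\"}}",
            patientId, effective, percent);
    }

    public static void main(String[] args) {
        System.out.println(spo2Observation("example", 97.0, "2017-01-30T10:00:00Z"));
    }
}
```

In the deployed system, the interoperability connector maps the X73/BLE device readings into such resources before posting them to the FHIR server's REST API.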
4. Discussion
Figure 2 shows an example XML snippet of the proposed FHIR resource setup, based on the Observation
resource with an extension to transport pollen allergy forecast data (the daily pollen loads) to an
HL7 FHIR based server for proper storage. The example shows the structure filled with test data.
Acknowledgment
References
1. Introduction
One of the most important factors for a successful rehabilitation process is an increase
in health-related quality of life. Besides that, it is also important to improve the
possibilities for the individual to take part in social, cultural and work-related
activities. In order to reach this goal, it is important to also consider the recovery process
after discharge. Recovery continues in the home environment and should therefore
be supported under these conditions as well [1]. Complex rehabilitation processes
often cannot be limited to the sessions conducted in a specialized facility, due to socio-
economic reasons and a continuously increasing number of patients. It is important to
maintain the motivation and compliance of patients throughout the whole
rehabilitation program, especially after discharge. Home-based rehabilitation programs
such as tele-rehabilitation increase patients' motivation and therefore support reintegration
into everyday life. Furthermore, both long-term and short-term studies conducted in various
fields of rehabilitation confirm the positive effects of tele-rehabilitation [2], [3].
1
Corresponding Author: Richard Paštěka, University of Applied Sciences Technikum Wien, Höchstädtpl.
6, 1200 Wien , E-Mail: pasteka@technikum-wien.at
R. Paštěka et al. / Multidisciplinary and Telemedicine Focused System Database 145
Taking the need for home-based rehabilitation processes into account and combining
this with the ongoing demographic change, it can be deduced that the demand for
rehabilitation will increase, leading to an increasing financial burden in the near
future [4]. Tele-rehabilitation conducted in the home environment can help reduce
financial costs and help re-include individuals into everyday life [5].
The parallel worldwide trend of an increasing number of mobile devices currently
supports a mobile approach in healthcare. Mobile devices and mobile applications on
various platforms will potentially change healthcare, its system, its quality and finally its
costs [6].
The funded project REHABitation combines these novel developments and applies
them to home-based rehabilitation processes. The goal of the project is to develop new
concepts and approaches for home-based rehabilitation, supported by the combined
use of mobile devices and novel, easy-to-use sensors. More specifically, state-of-the-art
and off-the-shelf technologies like wearable sensors, virtual reality, the Wii Balance
Board or the Microsoft Kinect sensor are utilized. Target applications include adapted or
established assessments and exercises for rehabilitative purposes. Mobile devices in
this context include smartphones and tablets as well as mobile activity trackers, personal
health devices and other wellness-related devices. Integrating modern technologies
and equipment into rehabilitation processes may increase diagnostic value, progress
monitoring capabilities and rehabilitation efficiency.
Currently, the exercises for rehabilitation are explained to the patient during the stay
at the healthcare institution and are in most cases documented by a paper-based explanatory
description. At this stage the patient has no way to inform himself or herself
about the exercises other than asking the therapist. The therapists, on the other hand, usually
have a typical set of standardized and established exercises and assessment
methods, which they apply to individual patients. The REHABitation database has been
developed to give therapists an overview of possible combinations of exercises and
assessment methods with available sensors and devices.

The objective of this work is to combine the different points of view of the target
groups – therapist and patient – into one database, which shall provide an up-to-date
resource in the rehabilitative context. It shall be usable as a collaboration platform,
also for the inclusion of device manufacturers, and shall not be limited to the medical
field of rehabilitation. Currently, available databases for rehabilitative
exercises are mostly membership-based. They provide information and guidance on
specific exercises (e.g. PhysioTools [7] or HEP2go [8]) and several assessments, but do
not offer an extensive combinational view of available systems and exercises. Those
platforms which list a selection of available rehabilitative exercises and assessments and
allow free access do not link the listed tasks to possibly usable devices and tools, or do
so only partially.
2. Methods
Over the course of its first two years, the REHABitation research project brought to life
several new rehabilitation concepts and exercises as well as novel uses of existing
equipment. However, the availability of the acquired knowledge was restricted to a small
circle of specialists and associated partners. New solutions were explored in order to
further extend the accessibility and expandability of the results. The following
requirements were set for the development of the database:
The Data Access Layer (DAL) provides access to the database. The entries (exercises,
assessments and equipment) are stored on the server and processed using the PHP
programming language. Users interact with the DAL indirectly by means of the
Presentation Layer (PL). Apache is used as the web server platform, and MySQL RDBMS
(5.5.52) serves as the database server for data manipulation. The content of the
information sent to the server from the front end is validated: the backend performs the
requested action and returns a status message to the frontend. The user is subsequently
informed whether and how the entered information needs to be modified, or that the
query was successful.
Rehabilitation exercises and assessments are the centrepiece of the database. The
distinction between these two categories expands the possible applications of
the database to multiple medical fields. On the one hand, the category of exercises covers
the need to include rehabilitation exercises, which in general aim to improve, maintain
or restore physical strength. On the other hand, the category of assessments allows the
inclusion of various tests that objectively inform the physical therapist about the patient's
status and progress. Further relevant information is given about the characteristics of the
assessments or exercises (name, duration, targeted medical field, description, restrictions,
etc.), connected equipment (mobility, connectivity, manufacturer, type, status, etc.),
diagnoses (ICD) and systems (composition of a system).
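A minimal sketch of this entry model and the diagnosis entry point; class and field names follow the characteristics listed above but are assumptions, not the actual REHABitation schema (which is implemented in PHP/MySQL).

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal sketch of the entry model; field and class names follow the listed
 *  characteristics but are assumptions, not the actual REHABitation schema. */
public class RehabDb {
    static class Equipment {
        final String name, connectivity;
        Equipment(String name, String connectivity) {
            this.name = name; this.connectivity = connectivity;
        }
    }

    static class Exercise {
        final String name, medicalField, icdCode;
        final List<Equipment> equipment; // devices usable for this exercise
        Exercise(String name, String medicalField, String icdCode, List<Equipment> equipment) {
            this.name = name; this.medicalField = medicalField;
            this.icdCode = icdCode; this.equipment = equipment;
        }
    }

    /** Entry point "diagnosis": all exercises linked to a given ICD code. */
    public static List<Exercise> byIcd(List<Exercise> entries, String icd) {
        List<Exercise> hits = new ArrayList<>();
        for (Exercise e : entries)
            if (e.icdCode.equals(icd)) hits.add(e);
        return hits;
    }

    public static void main(String[] args) {
        Equipment kinect = new Equipment("Microsoft Kinect", "USB");
        List<Exercise> db = List.of(
            new Exercise("Balance training", "Neurology", "I63", List.of(kinect)),
            new Exercise("Gait assessment", "Trauma", "S82", List.of()));
        System.out.println(byIcd(db, "I63").get(0).name); // Balance training
    }
}
```

Linking exercises to both equipment and a standardized ICD code is what enables the multiple search entry points (exercise, equipment, system, diagnosis) described in the Results section.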
The Presentation Layer is a web-based application which comes directly
into contact with the user. It is based on Bootstrap (3.3.7) as an
HTML, CSS and JS development framework. Bootstrap is compatible with the latest
versions of the Google Chrome, Firefox, Internet Explorer, Opera and Safari browsers,
providing the desired accessibility regardless of the user's web browser preferences.
Furthermore, responsive web design is supported, meaning web pages adjust dynamically
to the device used (desktop, tablet, mobile phone). In order to present
information in a clear and concise way, the table plug-in for jQuery called DataTables
(1.10.10) is used. Sorting, paging and filtering abilities are added to plain HTML tables.
Thanks to this solution, these operations do not request back-end resources, which
increases the performance of the whole database.

Figure 1. Overview of the Presentation Layer. The user represents a patient, physical therapist or technician
looking for information.
The web-based application allows the user to view the information in two ways. A compressed overview is accessible on the main page, showing the name of the exercise, the exercise description, related diagnoses, the area of interest and the associated equipment. The second form provides more detailed information and can be accessed by choosing the desired sub-database, which shows additional and supplementary descriptions. Users can search through the exercises, equipment, system or diagnosis sub-databases, as depicted in Figure 1. These are the possible entry points for an individual search. Each entry point is clearly identified using standard (e.g. WiFi) or standardized labeling (e.g. ICD code). The user can choose and proceed with the search from the point of view that suits their needs best. Each exercise or assessment focuses on the improvement of a specific area. In order to show the underlying connections, internal links are given to the equipment and systems that could be utilized during rehabilitation sessions. Information about the equipment or system includes the supported communication standards and potential certifications by recognized certification institutions. A diagnosis is unambiguously identified by an ICD code and is associated with an exercise or assessment.
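The entry-point idea can be illustrated with a small in-memory sketch; the records, keys and helper names are hypothetical, not taken from the actual system.

```python
# Simplified in-memory sketch of searching via different entry points.
# All records and keys are illustrative assumptions.
exercises = [
    {"name": "Assisted walking", "icd": "I63.9", "equipment": ["Treadmill"]},
    {"name": "Grip training",    "icd": "S62.6", "equipment": ["Hand dynamometer"]},
]

def search_by_icd(code):
    """Diagnosis entry point: ICD code -> linked exercises."""
    return [e["name"] for e in exercises if e["icd"] == code]

def search_by_equipment(label):
    """Equipment entry point: device label -> linked exercises."""
    return [e["name"] for e in exercises if label in e["equipment"]]

print(search_by_icd("I63.9"))           # → ['Assisted walking']
print(search_by_equipment("Treadmill")) # → ['Assisted walking']
```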
Three patient use cases were created to allow qualitative testing of the practical usability of the REHABitation database. The use cases represent three major areas of interest in rehabilitation: neurological defects, trauma-focused rehabilitation and preventive measures. One group of testers, representing medical professionals, had the task of searching for available exercises, assessments and the connected tools for the therapy of the given patient. A second testing group, representing the patient as user, had the task of searching for the tasks defined by the therapists and the equipment needed for that purpose.
3. Results
Figure 2. REHABitation web-based application with the exercise result matrix filtered according to the stroke patient use case. In the example, entries are first roughly sorted by entering "Stroke" into the general search field. To get specific results, additional filtering is performed by typing in keywords according to the use case specifications.
searched, and new combinations between equipment and exercises can be set and included into the database for future use and application in various medical fields. Besides the use of the database as a knowledge collection and overview for therapists, it also allows the patient, or user in general, to search for the given tasks, exercises and assessment methods, and to get an overview of which equipment or system could be used in which specific context. This point of view has been introduced into the database as an easy-to-use entry point for patients.
The functionality of the database has been tested using three practical use cases. Available exercises can be looked up in the database by inserting the previously defined use case information into the respective search fields of the web-based application. The exercise result matrix for the stroke patient use case, together with the web page dynamically adjusted for smaller screens, is shown in Figure 2. Additional information about an exercise can be accessed by clicking on a filtered entry, whereupon the user is automatically redirected to the exercise sub-database.
Unregistered users can view the information contained in the REHABitation database via the web-based application. However, an unregistered user cannot enter data into the database due to the specific nature of the data and the impact it may have. The integrity and validity of the entered data is therefore ensured by a three-role authentication hierarchy. After registration, the user is assigned one of the following roles: Editor, Moderator or Administrator. The Editor can create a new database entry or modify an existing one. However, changes initiated by the Editor have to be validated and confirmed by the Moderator; until then, they are not included in the database. In addition to the creation and modification of entries, the Moderator can also delete non-complying records. The rights of both above-mentioned roles can be restricted to certain sub-databases, allowing narrow specialization and a more relevant review of the entries. The highest level of control is represented by the role of the Administrator, who assigns rights and roles to registered users according to their credentials.
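The moderation workflow can be sketched as follows; class and method names are hypothetical, and we additionally assume here that entries created by a Moderator count as validated immediately:

```python
# Sketch of the Editor/Moderator moderation workflow; names are illustrative.
class SubDatabase:
    def __init__(self):
        self.entries = []    # validated entries visible in the database
        self.pending = []    # Editor changes awaiting Moderator confirmation

    def propose(self, role, entry):
        """Editors and Moderators may create entries; Editors' entries stay pending."""
        if role not in ("Editor", "Moderator"):
            raise PermissionError("insufficient rights")
        if role == "Editor":
            self.pending.append(entry)   # not yet included in the database
        else:
            self.entries.append(entry)   # assumption: Moderator entries are validated

    def confirm(self, role, entry):
        """Only the Moderator validates and confirms pending changes."""
        if role != "Moderator":
            raise PermissionError("only the Moderator validates changes")
        self.pending.remove(entry)
        self.entries.append(entry)

db = SubDatabase()
db.propose("Editor", "new treadmill exercise")
assert db.entries == []                  # invisible until confirmed
db.confirm("Moderator", "new treadmill exercise")
```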
The option of adding a new database entry becomes available once sufficient rights have been assigned to a registered user. An example of the data entry form for the Equipment sub-database, offering nine entry elements, can be seen in Figure 3. The user is required to fill in the respective text fields or choose an option from the presented option fields. A multiple-select option menu serves to establish internal connections with other database sub-parts. In the presented example, the user can link the entered equipment to existing exercises, add supported communication standards and include the equipment as a part of a system.
Each sub-database has a specific and unique data entry form, although all forms follow the same logical arrangement. The exercise entry form offers 14 data entry elements, the largest number so far. Detailed information about what should be entered into the corresponding field can be obtained by hovering over the field label.
Additionally, every entry contains information about the name of the author and the date and time of its creation, making it possible to identify the origin of the entered data. When an entry is edited, information about the editor and the time of editing is also stored.
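A minimal sketch of this provenance record might look as follows (the field and function names are our assumptions):

```python
from datetime import datetime, timezone

# Illustrative provenance metadata stored with every entry (names assumed).
def new_entry(name, author):
    return {"name": name, "author": author,
            "created_at": datetime.now(timezone.utc), "edits": []}

def edit_entry(entry, editor):
    """Each edit additionally records who edited the entry and when."""
    entry["edits"].append({"editor": editor,
                           "edited_at": datetime.now(timezone.utc)})

e = new_entry("Balance board", author="j.doe")
edit_entry(e, editor="a.smith")
```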
Figure 3. Creation of a new equipment entry by an authenticated user. Text can be inserted into the first three text fields. Mobility, State and Type are select option fields. The last three select option fields create internal links to the other parts of the database and support multi-select.
4. Discussion
The presented database shows the possibilities of connecting novel and available devices and tools for rehabilitative home exercising. It allows the user either to search for available combinations or to set new ones. By such means, new treatment opportunities can be established and stored for future purposes. The included data representation is, however, limited by the fact that most rehabilitation centres have their own specific sets of exercises, which are not included.
Tests using real-life-inspired patient use cases have shown that the database can be utilized as a look-up tool for different types of users. As the individual focus may vary, for example between a therapist and a patient, the structure of the REHABitation database offers the possibility to display exactly the information the individual user is looking for with minimal time and effort. However, future tests with specific search tasks and applicability criteria, covering the different user points of view, should be performed to optimise the data representation of the presented database. The database will now be demonstrated and offered to potential new user groups as mentioned above. The intention is to form long-standing partnerships and to maintain the content further in a collaborative effort.
The future development of the database will include two major steps. On the one hand, the database will be used as an interactive collaboration platform that allows device manufacturers as well as medical professionals to publish their available devices and exercises and to develop new combinations. This will also include a clustering methodology that allows the entered devices to be labelled by domain and existing certifications (e.g. Continua certified, IHE compliant). This active integration of the different user groups during the project's running time shall prevent the database from becoming obsolete once the project terminates. On the other hand, the database has been structured in such a way that a greater variety of medical fields can be included. By these means the database will include exercises and assessments for other medical domains, supporting the transition from a project-related knowledge database to a publicly available and market-oriented information service.
152 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-152
1. Introduction
Rehabilitation is key for a variety of acute and chronic diseases. It can eliminate the consequences of a disease or at least mitigate them, so that participation in socio-economic life and/or working capacity is restored. With intensive treatment in the hospital, the patient can achieve great progress in a short time.
After discharge, patients are challenged to continue the prescribed measures (e.g. exercise programs, nutrition plans) on their own. Patients are indeed often discharged from inpatient rehabilitation in a highly motivated state, but their compliance decreases quickly. Reasons for this include the change back to the home environment and the lack of feedback. As the patient is no longer under the permanent supervision of medical experts, the success of self-care depends to a large extent on the patient's adherence to the rehabilitation plan and on exercise quality. Studies show that readmissions within 30 days can be reduced by 50% when patients are engaged [2].
Knowing the quality of a patient's rehabilitation activities helps doctors to detect and address readmission risks before they materialize. With the emerging trend of Quantified Self, patients are now able to track their rehabilitation activities as well as
1 Corresponding Author: Andreas Hamper, FAU Erlangen-Nürnberg, Lange Gasse 20, 90403 Nürnberg, E-Mail: andreas.hamper@fau.de
A. Hamper et al. / Rehabilitation Risk Management 153
fitness activities and habits, including sleeping, daily routine etc., with consumer-grade technology. These data can be used to calculate health risks and to provide alerts to medical experts and to the patient. In this way, patients receive more targeted monitoring and management after discharge and thereby enjoy a higher quality of care, better outcomes and higher satisfaction. At the same time, the costs of unplanned readmissions to the healthcare system are minimized (see Figure 1).
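As a toy illustration of how such tracked data could be turned into alerts, consider the adherence rule below; the threshold, the seven-day window and all names are invented for this example, not taken from the paper.

```python
# Toy rule-based alert on self-tracked rehabilitation data.
# The 50% adherence threshold and 7-day window are invented for illustration.
def check_risk(days_since_discharge, exercises_done, exercises_planned):
    """Flag low exercise adherence, a known driver of readmission risk."""
    adherence = exercises_done / exercises_planned
    if days_since_discharge >= 7 and adherence < 0.5:
        return "ALERT: low adherence, notify medical expert"
    return "OK"

print(check_risk(10, exercises_done=3, exercises_planned=10))
```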
For many decades, healthcare spending and its influencing factors have been studied intensively, revealing a wide range of social and economic factors. A large proportion of health expenditure is reflected in the cost of hospitals, with the inpatient hospital sector accounting for 30% of the increase in total health expenditure, followed by the pharmaceutical area in second place. Factors influencing hospital costs include internal costs related to personnel (e.g. pay raises and a lack of trained professionals), hospital structures and financing, medical advances (e.g. ICT and medication), and external influences such as inflation and demographic changes. The demographic trend in particular is fundamentally changing the social structures of most western countries: a growing number of older people, an ever longer life expectancy and a low birth rate force the current health care system to face major challenges. At the same time, all healthcare sectors are in high demand for qualified personnel. The shortages are evident for doctors, hospitals and especially health care providers in elderly care [3].
Considering hospital financing, the introduction of diagnosis-related groups (DRG) forced hospital management to operate more effectively and in a more performance-oriented manner, resulting in financial problems and cost pressure for many hospitals. In consequence, more dynamic and process-oriented structures evolved to optimize patient care. These challenges make it increasingly difficult to ensure good nursing care for people with age-related or chronic diseases. Practice shows that disease management and rehabilitation in individual care are less than optimal. Reasons for this are incomplete or delayed information sharing between stakeholders, heterogeneous sources of disease-related data and the lack of individually created care concepts [4]. This results in the need for simple and automatic collection and storage of rehabilitation and monitoring data in order to optimize the information available to all involved professional actors and family members.
The success of rehabilitation, e.g. the recovery of motor functions after surgery, depends heavily on compliance and the proper performance of rehabilitation exercises by the patient [28]. If the recommended exercises are not performed correctly or regularly, this can lead to severe restrictions on mobility and hinder the patient's long-term healing process. To improve the mobility of a patient after surgery, rehabilitation should be started as early as possible, already in the hospital. Initial motion exercises start under the instruction of a physiotherapist to promote muscle growth and mobility early on. After hospitalization, patients are transferred to rehabilitation facilities or to homecare for follow-up treatment. When the patient is sent home, the continuous performance of rehabilitation exercises is key, as the patient is independently responsible for further stabilizing the affected muscles and joints. Current solutions for home rehabilitation include recommendations from a wide repertoire of physiotherapy, occupational therapy and balneological rehabilitation therapies to increase mobility and continually strengthen the muscles. Through lack of discipline or lack of support, patients often do not carry out the recommended practice sessions efficiently or regularly, which can lead to a loss of rehabilitation success and consequently to costly aftercare and unplanned readmission to the hospital [5]. Thus, for rehabilitation after acute conditions such as joint replacements, movement tracking (e.g. by counting steps) is a first, easy way to monitor patient behaviour in the home rehabilitation process. More advanced methods involve, for example, the use of 3D sensors for movement analysis during exercises. For patients with chronic diseases, which account for the biggest part of hospital readmissions, the monitoring of simple values can strongly support the pre-emptive detection of patient deterioration. Most prominently, renal failure, septicemia, diabetes, psychotic disorders, airway disease and cardiac disease often result in readmissions to the hospital [27]. The collection of data to support monitoring these diseases at home can range from simple devices, such as digital scales (e.g. to track fluid fluctuations in renal disease), through more advanced, non-invasive sensors for blood sugar measurement (e.g. via contact lenses), to more complicated or invasive measures, e.g. testing for inflammatory markers.
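For the digital-scale example, such simple-value monitoring could be sketched like this; the 2 kg over 3 days threshold is purely illustrative, not a clinical recommendation from the paper.

```python
# Detect rapid weight gain from daily scale readings (a fluid-retention cue).
# The 2 kg over 3 days threshold is an illustrative assumption only.
def rapid_gain(daily_weights_kg, window=3, threshold_kg=2.0):
    for i in range(window - 1, len(daily_weights_kg)):
        if daily_weights_kg[i] - daily_weights_kg[i - window + 1] >= threshold_kg:
            return True
    return False

readings = [71.0, 71.2, 71.1, 72.4, 73.3]   # kg, one value per day
print(rapid_gain(readings))                  # → True
```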
data security), and the upper layer provides value-added services using the collected
information [1]. With common communication standards such as ZigBee, Ant+,
Bluetooth, RFID, GPRS, UMTS and LTE, the different levels can communicate with
each other [16].
For the monitoring of vital parameters, current approaches already use wearable technologies to gather data and activity information from patients. The market for mobile fitness applications has seen tremendous development in the past years. In particular, the number of smartphone applications in the "health & fitness" categories of the mobile markets has grown constantly. Platform developers recognized this trend and the demand to support mobile fitness applications [3]. Therefore, both Apple and Google started dedicated platforms to build health applications upon: Apple HealthKit [17] and Google Fit [18]. These platforms enable developers to implement feature-rich applications by accessing multiple sensors integrated into electronic devices, allowing them to measure activity data as well as relevant body information such as heart rate and body fat level.
Recently, companies like Apple and Samsung presented a new category of mobile, sensor-enabled devices: smartwatches. A major focus lies on their capability to act as fitness companions during the day. In contrast to fitness applications from adidas etc., smartwatches are designed to assist during activities of everyday life, like walking and cycling, rather than focusing only on explicit workout sessions. This process of consistently tracking movement data as well as health-related information through wearable sensors is referred to as "quantified self" or "quantified self-tracking" [19]. In combination with smartwatches, the healthcare sector in particular offers enormous potential for innovative use cases [20]. Tim Cook unveiled one of Apple's long-rumored wearable smart devices in September 2014: the Apple Watch [21]. A basic feature that comes with the watch is the Activity App. Apple highlights the watch's special ability to "track a wider variety of activities because it's able to collect more types of data" [21] by combining gathered body movement data, the heart rate and the covered distance into one application. Since then, multiple rival products have entered the market, such as smart bracelets and jewelry, which aim at the comprehensive collection and visualization of personal information using features similar to those presented in the exemplary case of Apple below.
Figure 3. Combination of Quantified Self and Smart Home Data for Rehabilitation
To enable more comfort in everyday life, multi-modal sensors within the home environment of the patient can be combined with wearable devices to detect different activities in a less invasive way. The correct detection of data for different activities is the foundation for the integration of technologies that unobtrusively support humans in rehabilitation and daily living in smart home environments [6]. To support long-term health and wellbeing, physiological and activity data are combined to gain further insights into the patient's status and behavior [6]. This approach is also applied to support rehabilitation patients by analyzing their activities, monitoring how and when they perform their exercises, tracking the advancement of their healing progress and detecting potential risks, such as falls. Sensor data is collected within the patient's home (Smart Home) or directly from the patient (Quantified Self), processed by software and subsequently classified by matching the received data with known patterns to detect activities ("activity labeling"). It is of particular interest to find correlations between the behavior and the health status of the patient. By combining data from both the Smart Home and the Quantified Self environment, a comprehensive overview of the patient is obtained. Figure 3 gives an example of sensor combinations for the rehabilitation setting.
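A minimal sketch of such pattern matching is nearest-template classification; the feature choice and template values below are invented for illustration.

```python
# Nearest-template "activity labeling": match a sensor feature vector
# against known activity patterns. All templates are illustrative assumptions.
PATTERNS = {
    "resting": (0.1, 0.0),     # (mean acceleration, step rate) features
    "walking": (1.2, 1.8),
    "exercising": (2.5, 0.9),
}

def label_activity(features):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Assign the label of the closest known pattern.
    return min(PATTERNS, key=lambda name: dist(features, PATTERNS[name]))

print(label_activity((1.0, 1.6)))   # → walking
```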
In this paper we presented the relevance and risks of outpatient rehabilitation and
outlined potential benefits of utilizing consumer technology from the field of Smart
Homes and Quantified Self in this context. To understand the benefits of these
technologies for healthcare, we want to highlight the contributions to theory and
practice.
For practice, using Quantified Self and Smart Home data helps to improve healthcare value. This key aspect of healthcare delivery is supported by enabling a higher quality of care through enriched data collection. Across different levels of complexity, commercially available sensors and devices can already be used to analyse patient behaviour and detect deterioration during outpatient rehabilitation. An earlier identification of issues can thus lead to fewer unnecessary readmissions and to timely interventions. In addition, by utilizing existing consumer sensor technologies, no new hardware or software has to be developed, allowing for a better allocation of resources.
For theory, we address the underlying factors that impact risk management. Generally, risk management in healthcare looks at managing one type of risk at a time (e.g. diabetes, falls, etc.). We suggest an all-in-one approach that integrates the management of various risks simultaneously, as some measures can often indicate multiple problems. The variety of consumer products and healthcare sensors brings challenges for the integration of such a heterogeneous technology landscape. A modular sensor framework can work as an abstraction layer that separates the technical sensor specifications from the output data required by the healthcare service. A rudimentary framework as proposed by Hamper [25] can serve as a basis: data are collected by Sensor Data Providers and categorized into context categories such as Activity and Identity. The data can then be used by healthcare services regardless of the underlying sensor technology (see Figure 4).
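The abstraction-layer idea behind such a framework can be sketched as follows; the class names, categories and stubbed readings are our assumptions, not Hamper's actual design.

```python
from abc import ABC, abstractmethod

# Sketch of a modular sensor framework: providers hide device specifics
# and publish data under context categories (here Activity and Identity).
class SensorDataProvider(ABC):
    category: str

    @abstractmethod
    def read(self) -> dict: ...

class SmartwatchSteps(SensorDataProvider):
    category = "Activity"
    def read(self):
        return {"steps": 4200}          # stubbed device-specific reading

class OccupancySensor(SensorDataProvider):
    category = "Identity"
    def read(self):
        return {"room": "living room"}  # stubbed smart-home reading

def collect(providers):
    """Healthcare services consume categorized data, not raw sensor APIs."""
    out = {}
    for p in providers:
        out.setdefault(p.category, {}).update(p.read())
    return out

data = collect([SmartwatchSteps(), OccupancySensor()])
```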
5. Discussion
In future research, this framework can be extended with more complex sensor solutions and measures relevant for detecting multiple risks in outpatient rehabilitation. The following parameters, as proposed by [26], give suggestions on relevant measures that can accurately identify numerous risks for certain diseases:
• Ultrasound scan: to determine a user's degree of vascular calcification.
• Metabolic diagnosis (HbA1c value): to identify the current metabolic status.
• Fat profile: various fat values provide additional information about cardiac risk.
• Uric acid level: informs about the risk of coronary heart disease.
• Resting heart rate & blood pressure: indicators for several diseases, such as thyroid disorders or cardiac arrhythmias.
• Bio-impedance analysis: determines a person's percentage of water, fat and muscle.
Based on the determined values, the rehabilitation and interventions can be tailored to the user's health risk profile to further improve outcomes and reduce the costly readmissions resulting from insufficient outpatient rehabilitation. In this regard, data security and data interoperability are important issues that have to be considered in future research.
References
[1] M. Bick, T.-F. Kummer, and W. Rössig, Ambient intelligence in medical environments and devices:
Qualitative Studie zu Nutzenpotentialen ambienter Technologien in Krankenhäusern. Berlin: European
School of Management, 2008.
[2] Nilmini Wickramasinghe, Delivering value-based patient centred care with the Point of care system.
[3] Bundesagentur für Arbeit, Der Arbeitsmarkt in Deutschland –Altenpflege. [Online] Available:
http://statistik.arbeitsagentur.de/Statischer-Content/Arbeitsmarktberichte/Branchen-Berufe/generische-
Publikationen/Altenpflege-2014.pdf. Accessed on: Jul. 20 2016.
[4] BMFSFJ, Möglichkeiten und Grenzen selbständiger Lebensführung in stationären Einrichtungen
(MuG IV): Demenz, Angehörige und Freiwillige, Versorgungssituation sowie Beispielen für „Good
Practice“. [Online]. Accessed on: Jul. 20 2016.
[5] HealthQuest. [Online] Available: http://www.healthquest.org/. Accessed on: Jul. 20 2016.
[6] C. Nugent, A. Coronato, and J. Bravo, Ambient assisted living and active aging: 5th International
Work-Conference, IWAAL 2013, Carrillo, Costa Rica, December 2-6, 2013, Proceedings. Berlin:
Springer, 2013.
[7] C. Tunca, H. Alemdar, H. Ertan, O. D. Incel, and C. Ersoy, “Multimodal wireless sensor network-
based ambient assisted living in real homes with multiple residents,” (eng), Sensors (Basel,
Switzerland), vol. 14, no. 6, pp. 9692–9719, 2014.
[8] E. Aarts and R. Wichert, “Ambient intelligence,” in Technology Guide: Springer Berlin Heidelberg,
2009, pp. 244–249.
[9] Fraunhofer, Mobile EFA-Reha-App. [Online] Available: http://www.efa.fraunhofer.de/de/efa-
anwendungen/mobile-efa-reha-app.html. Accessed on: Jul. 20 2016.
[10] 3D4Medical, Rehabilitation for Lower Limbs. [Online] Available:
http://applications.3d4medical.com/rehabilitation_lowerlimbs. Accessed on: Jul. 20 2016.
[11] AmbiGate, e-Reha. [Online] Available: http://www.ambigate.com/e-reha/. Accessed on: Jul. 20 2016.
[12] P. Georgieff, Ambient Assisted Living: Marktpotenziale IT-unterstützter Pflege für ein
selbstbestimmtes Altern. Stuttgart: MFG-Stiftung Baden-Württemberg, 2008.
[13] L. C. de Silva, C. Morikawa, and I. M. Petra, “State of the art of smart homes,” Advanced issues in
Artificial Intelligence and Pattern Recognition for Intelligent Surveillance System in Smart Home
Environment, vol. 25, no. 7, pp. 1313–1321, 2012.
[14] F. J. Fernandez-Luque, F. L. Martínez, G. Domènech, J. Zapata, and R. Ruiz, “Ambient assisted living
system with capacitive occupancy sensor,” Expert Systems, vol. 31, no. 4, pp. 378–388, 2014.
[15] J. Bizer et al, Technikfolgenabschätzung Ubiquitäres Computing und Informationelle
Selbstbestimmung. [Online] Available: https://www.datenschutzzentrum.de/taucis/ita_taucis.pdf.
[16] M. Mulvenna et al, Visualization of data for ambient assisted living services: Institute of Electrical and
Electronics Engineers (IEEE), 2011.
[17] Apple, Apple HealthKit - iOS8. [Online] Available: http://www.apple.com/ios/health/. Accessed on:
Jul. 20 2016.
[18] Google, Google Fit - Platform Overview. [Online] Available:
https://developers.google.com/fit/overview. Accessed on: Jul. 20 2016.
[19] M. Swan, “Sensor mania! the internet of things, wearable computing, objective metrics, and the
quantified self 2.0,” Journal of Sensor and Actuator Networks, vol. 1, no. 3, pp. 217–253, 2012.
[20] C. Zagel, A. Hamper and F. Bodendorf, “SmartHealth for Senior Self-Monitoring: Nutzenpotenziale
von Smartwatches für die Überwachung des Gesundheitszustands von Senioren,” 2014.
[21] B. X. Chen, "The iPhone 6 Goes Big, as Apple Aims Small With a Smartwatch," The New York Times, 2014.
[22] A. Rütten, K. Abu-Omar, W. Adlwarth and R. Meierjürgen, “Sedentary lifestyles. Classification of
different target groups for the promotion of health-enhancing physical activities,” Gesundheitswesen,
vol. 69, no. 7, pp. 393–400, 2007.
[23] Apple, Apple Watch - Features. [Online] Available: https://www.apple.com/watch/features. Accessed
on: Jul. 20 2016.
[24] J. O. Prochaska and W. F. Velicer, The transtheoretical model of health behavior change. [S.l.]: [s.n.],
1997.
[25] A. Hamper, "A Context Aware Mobile Application for Physical Activity Promotion," 2015 48th Hawaii International Conference on System Sciences (HICSS), pp. 3197–3206, 2015.
[26] T. Horbach, Expert Interview on a Service Portfolio for Health Services.
[27] A. L. Hines, M. L. Barrett, H. J. Jiang, and C. A. Steiner, Conditions With the Largest Number of Adult Hospital Readmissions by Payer, 2011, Agency for Healthcare Research and Quality.
[28] E. Z. Oddone, H. B. Bosworth, and M. Weinberger (Eds.), Patient Treatment Adherence: Concepts, Interventions, and Measurement, Lawrence Erlbaum Associates, Mahwah, NJ, 2006.
Health Informatics Meets eHealth 161
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-161
1. Introduction
The number of people with chronic heart failure (chronic HF) currently living in Austria is estimated at 300,000 [1]. In 2015, about 27,000 HF patients were hospitalized, 25,000 of whom were 65 years old or older. The average duration of a hospital stay amounts to 8.4 days [2]. The average annual costs of HF treatment have reached around 320 million euros. An optimization of the treatment methods by increasing their effectiveness, together with a reduction of costs, is therefore of paramount importance.
The application of telemedicine services as an additional component to standard methods of HF treatment should pave the way forward. A timely and active response to a worsening of the patients' health condition should prevent possible inpatient treatment and reduce subsequent costs.
1 Corresponding Author: Ahmed S. Hameed, Faculty of Business and Economics, Mendel University, Zemedelská 1, 613 00 Brno, E-Mail: a.hameed@secdatacom.net
162 A.S. Hameed et al. / Identification of Cost Indicators with Significant Economic Impact
Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart of the
systematic review
2. Methods
reduced list costing [6] to identify the relevant cost indicators involved in the treatment
settings that directly influenced the total cost of treatment.
For each research article, the individual cost positions were graded and subsequently summarized under the appropriate cost category (direct, indirect). This approach was chosen in order to save the time and costs that a full health care cost evaluation within the scope of a clinical study would require.
We defined the following criteria for the qualitative analysis of the cost indicators:
1. The indicators are established from the perspective of the health care providers directly involved in the treatment process
2. The indicators investigated stand for specific services that were provided
3. The top five cost indicators proportionately account for the greatest share (≥ 95%) of the total costs of treatment
4. The weighting is based on the absolute and cumulative frequency distribution of the established cost indicators across the research articles
A plausibility check of the observed cost indicators followed in the second stage of the review, based on the cost data derived from the collected research articles. We examined the relations between the observed cost indicators and the total costs of the treatments by ranking the individual cost indicators. The direct comparison of the therapies applied should inform us about the actual allocation of costs.
The observed cost indicators and the cost data were gathered from the research articles we selected based on the qualitative analysis of the different therapy settings. Among the individual cost positions, we primarily counted direct costs, which according to the studies formed integral parts of the corresponding therapy.
3. Results
The papers differed in population size (n = 90–1023) and study duration
(3–360 months). Furthermore, they covered a wide geographical territory from
Asia through Europe to North America.
The cost indicators taken from the selected articles varied according to the therapy-
settings (standard or telemedicine) and the aim of the conducted economic evaluations.
The analysis showed that, from the perspective of the health care provider, the
direct treatment costs rank in the following sequence:
1. Costs of hospitalization: these include the costs of hospitalization, nursing staff,
blood products, medical equipment, anesthesia, medical examinations, diagnostic
tests and the costs of the hospital ward. The analysis showed us that all 12 articles
pointed to this indicator as essential during the evaluation of total costs. The
weighting of this indicator amounts to a relative frequency of 26.09%. This
percentage is the highest value among the identified indicators and is thus ranked first.
2. Costs of medical services: these include the costs of diagnosis, general practitioners,
medical specialists, druggists, home calls, ambulatory care services, therapists,
primary care, ambulatory treatment and the costs of the medical care. We recognized
these costs in 11 articles out of the selected 12. The weighting of this indicator
amounts to a relative frequency of 23.91%.
3. Costs of medication: these include the costs of drugs that were prescribed to the
chronic HF patients during their treatment. This indicator was recognized as
economically relevant in 10 articles out of the selected 12. The resulting indicator-
emphasis is 21.74%.
The analysis of the therapy expenses by Neumann et al. [10] showed a relative
extra expenditure resulting from the intervention costs. A substantial reduction of the
hospitalization costs through the use of telemedicine could not be reported. Within the
scope of this study, the intervention costs led on the one hand to a reduction in the mortality
rate and on the other to a better quality of life for chronic HF patients; no decrease could
be achieved in the rate or costs of hospitalization.
In the study of Maru et al. [11], the hospitalization costs represented the largest
proportion of the total treatment costs. The costs of the medical services for CBI
increased by 3% in comparison to the costs of HBI. This escalation is based on shifting
intervention costs caused by an increased use of medical services. Economically speaking,
a therapy using HBI is 27% more cost-efficient than a therapy using CBI.
We identified two major cost blocks in the study of Moertl et al. [12]: the costs of
hospitalization and the costs of medical services. The increase in medical
services and medication costs for BMC therapy can be ascribed to the intensive patient
management program targeting therapy optimization. Economically speaking, a therapy
with BMC is 52% more cost-efficient than the standard therapy.
In Table 1 we list the statistical values of distribution based on the cost
comparison for both therapy settings. The comparison of hospitalization costs
(Figure 3, top left) shows a negative skew (x̃ > x̄) for both therapy settings, i.e. the
distribution is asymmetric with a cost concentration in the higher percentage shares. A significant
positive relationship between both settings (r = 0.8379, p < 0.05) was noted. We found no
cost outliers when analyzing the hospitalization cost data.
The median value of costs for telemedicine was smaller than for standard therapy.
This suggests that the hospitalization costs for telemedicine were significantly lower than
for standard therapy. However, a higher IQR value was measured for telemedicine
than for standard therapy. This is due to the wide discrepancy of costs
caused by the various approaches to measuring hospitalization within the framework of
the selected studies.
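The distribution measures used throughout this section (median, mean, IQR, and the median-versus-mean skew check) can be sketched as follows. The input shares are invented for illustration, not the study's data:

```python
import statistics

def describe_shares(shares):
    """Summarize percentage cost shares: median, mean, IQR, skew direction.

    Skew direction is judged by comparing median and mean, as in the text:
    median > mean -> negative skew, median < mean -> positive skew.
    """
    med = statistics.median(shares)
    mean = statistics.fmean(shares)
    # Quartiles over the sample itself ("inclusive" treats data as population).
    q1, _, q3 = statistics.quantiles(shares, n=4, method="inclusive")
    iqr = q3 - q1
    skew = "negative" if med > mean else "positive" if med < mean else "none"
    return {"median": med, "mean": mean, "iqr": iqr, "skew": skew}

# Illustrative hospitalization cost shares (in % of total costs):
# median (71) > mean (69.5), i.e. a negative skew as described above.
summary = describe_shares([80, 75, 72, 70, 65, 55])
```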
Figure 3, top middle, shows an asymmetric positive skew (x̃ < x̄) of the costs of
medical services. We detected no cost outliers when analyzing the data. The median for
standard therapy was slightly higher than the one for telemedicine therapy. The costs
were ordinarily fairly close to each other in the lower percentage shares. Telemedicine
had, however, a higher IQR in comparison with the IQR of standard therapy, which is to
be ascribed to a higher spread of costs. A marginally nonsignificant relationship between
both settings (r = 0.7348, p = 0.09) was noted.
Intervention costs are mainly apparent for telemedicine. Figure 3, right, displays
these cost components. They showed a positive skew (x̃ < x̄), which points to a
concentration of costs in the lower percentage share. We found no cost outliers when
analyzing the data. The intervention costs represented the second largest proportion of
the total costs for telemedicine. The IQR amounted to 18% and thus outlined a wider spread
of costs.
The analysis of medication costs (Figure 3, bottom left) revealed minor variation
between the arithmetic mean and the median of the individual therapies, resulting
in a positive skew (x̃ < x̄) of the costs. The medication costs for telemedicine are
marked by a low IQR against a higher IQR for standard therapy, which is to be ascribed
to the spread of the measured data. We detected no cost outliers. A significant positive
relationship between both settings (r = 0.9806, p < 0.05) was noted.
The rehabilitation and emergency costs (Figure 3, bottom middle) demonstrated a
positive skew (x̃ < x̄) with low median values for telemedicine and standard
therapy within the framework of the total costs. A significant positive relationship
between both settings (r = 0.9459, p < 0.05) was noted. Both IQR values for telemedicine
and standard therapy fall into the lower percentage share of the range of spread. The
range of the 4th quartile of standard therapy was striking, however, which is
explained by the low number of available data.
The Spearman's rank correlation between the intervention and hospitalization costs
is presented in Figure 3, bottom right. We identified a strong negative correlation
(ρ = -0.8286, R² = 0.6865, n = 6) between these cost indicators. This enabled us to
establish a connection between the intervention and hospitalization costs: the data
analysis showed that high intervention costs correlate with low hospitalization costs.
The calculated correlation coefficient, however, provides no information about the
cause-and-effect relationship (i.e. type of telemedicine therapy and length of treatment)
between the examined characteristics.
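Spearman's rank correlation, as applied here to the intervention and hospitalization cost shares, can be sketched without external libraries: rank both samples, then take the Pearson correlation of the ranks. The sample data are invented to illustrate the inverse relationship described above:

```python
def rankdata(xs):
    """1-based ranks, with ties assigned their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend the tie group
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented cost shares (%): higher intervention shares go with lower
# hospitalization shares, giving a strongly negative rho.
intervention = [5, 10, 13, 15, 18, 20]
hospitalization = [75, 70, 60, 58, 50, 45]
rho = spearman(intervention, hospitalization)
```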
4. Discussion
In this review, we identified the top five cost indicators for capturing the therapy costs
of chronic HF patients. Cost structure and cost allocation were highly dependent on the
characteristics, aims and employed therapy methods of the various research studies.
Medical interventions were identified as a key factor in the reduction of total costs; the
resulting cost shifts (or reductions) benefited telemedicine therapies.
The establishment of the five cost indicators, together with the classification and
analysis of cost positions based on them, covered a wide spectrum (≥ 95%) of the
incurred therapy costs per therapy (based on the median values). The distribution of the
individual cost positions, their relation to the total costs and the significance of the
results were highly dependent on the data availability and, in particular, on the level of
detail of the articles.
The order of cost indicators based on the determined median values differed only
marginally from the order of costs based on the frequency distribution. The analysis
revealed that the majority of therapy costs was assigned to hospitalization costs in each
type of therapy. The share of intervention costs constituted the second largest cost block
for telemedicine, followed by the costs of medical services. We could not identify any
order changes in standard therapy.
The selected articles followed various approaches in analyzing the therapies of
chronic HF patients using telemedicine. A classification of costs based on a
standardized cost structure as well as an evaluation of eHealth services following the
roadmap of Köberlein-Neu et al. [19] were only partly available. The analysis covered
Table 1. Statistical values of distribution, comparing standard therapy (Std) to telemedical therapy (Tele).
All numbers are presented in [%].

Statist. value | Costs of hospitalization | Costs of medical services | Costs of intervention | Costs of medication | Costs of rehab & emergency services
               | Std      Tele            | Std      Tele             | Std    Tele           | Std     Tele        | Std      Tele
Median (x̃)    | 72.48    57.11           | 12.89    12.71            | 0      13.54          | 10.47   11.14       | 1.34     1.11
Mean (x̄)      | 70.84    56.54           | 14.33    14.46            | 0      15.43          | 10.60   11.15       | 4.23     2.41
IQR            | 22.67    32.22           | 18.04    24.13            | 0      17.92          | 12.76   7.94        | 8.43     5.48
P-value        | 0.0373                   | 0.0961                    | 0                     | 0.0006              | 0.0043
Correlation    | 0.8379                   | 0.7349                    | 0                     | 0.9806              | 0.9459
a limited number of papers with concrete cost data, from which we took the cost
indicators. The impact of IT processes on the costs was not the focus of the analyzed
studies. These restrictions may have influenced the identified results.
Our review has also shown that outcomes differed across research groups. Some of the
findings regarding the efficiency of therapies and the related costs were partly
contradictory. Contrary to the results of Heinen-Kammerer et al. [7] and Ho et al. [8],
Henderson et al. [9] ranked the application of telemedical interventions as not
cost-efficient: the relative intervention cost component keeps the hospitalization costs
for chronic HF patients stable at a lower level, but, when summed, the costs exceed the
costs of hospitalization for standard therapy.
References
Nationalen Präventionskongresses Dresden 1. und 2. Dezember 2005. Springer Berlin Heidelberg; 2006.
pp 531–549.
[8] Ho YL, Yu JY, Lin YH, Chen YH, Huang CC, Hsu TP, Chuang PY, Hung CS, Chen MF., Assessment
of the Cost-Effectiveness and Clinical Outcomes of a Fourth-Generation Synchronous Telehealth
Program for the Management of Chronic Cardiovascular Disease. JMIR 16(6) (2014), e145.
[9] Henderson C, Knapp M, Fernández JL, Beecham J, Hirani SP, Cartwright M, Rixon L, Beynon M,
Rogers A, Bower P, Doll H, Fitzpatrick R, Steventon A, Bardsley M, Hendy J, Newman SP, Cost
Effectiveness of Telehealth for Patients with Long Term Conditions (Whole Systems Demonstrator
Telehealth Questionnaire Study): Nested Economic Evaluation in a Pragmatic, Cluster Randomised
Controlled Trial. BMJ 2013, 346:f1035.
[10] Neumann A, Mostardt S, Biermann J, Gelbrich G, Goehler A, Geisler BP., Siebert U, Störk S, Ertl G,
Angerrmann CE., Wasem J, Cost-Effectiveness and Cost-Utility of a Structured Collaborative Disease
Management in the Interdisciplinary Network for Heart Failure (INH) Study. Clin Res Cardiol 104(4)
(2015), 304–309.
[11] Maru S, Byrnes J, Carrington MJ, Chan YK, Thompson DR, Stewart S, Scuffhama PA, Cost-
effectiveness of home versus clinic-based management of chronic heart failure: Extended Follow-up of
a Pragmatic, Multicenter Randomized Trial Cohort — The WHICH? study (Which Heart Failure
Intervention Is Most Cost-Effective & Consumer Friendly in Reducing Hospital Care). International
Journal of Cardiology 201 (2015), 368–375.
[12] Moertl D, Steiner S, Coyle D, Berger R, Cost-Utility Analysis of NT-proBNP-guided Multidisciplinary
Care in Chronic Heart Failure. International Journal of Technology Assessment in Health Care 29(1)
(2013), 3-11.
[13] Cook C, Cole G, Asaria P, Jabbour R, Francis DP., The Annual Global Economic Burden of Heart
Failure. International Journal of Cardiology 171(3) (2014), 368–376.
[14] Banka G, Heidenreich PA., Fonarow GC., Incremental Cost-Effectiveness of Guideline-Directed
Medical Therapies for Heart Failure. Journal of the American College of Cardiology 61(13) (2013),
1440-1446.
[15] Rohde LE, Bertoldi EG, Goldraich L, Polanczyk CA. Cost-effectiveness of heart failure therapies.
Nature Reviews Cardiology 10(6) (2013), 338–354.
[16] Flodgren G, Rachas A, Farmer AJ, Inzitari M, Shepperd S, Interactive telemedicine: effects on
professional practice and health care outcomes. Cochrane Database of Systematic Reviews 2015, Issue
9. Art. No.: CD002098
[17] Pandor A, Thokala P, Gomersall T, Baalbaki H, Stevens JW, Wang J, Wong R, Brennan A, Fitzgerald
P, Home telemonitoring or structured telephone support programmes after recent discharge in patients
with heart failure: systematic review and economic evaluation. HTA 17(32) (2013)
[18] Farré N, Vela E, Clèries M, Bustins M, Cainzos-Achirica M, Enjuanes C, Moliner P, Ruiz S, Verdú-
Rotellar JM, Comín-Colet J, Medical resource use and expenditure in patients with chronic heart failure:
a population-based analysis of 88 195 patients. European Journal of Heart Failure 18(9) (2016), 1132-
1140.
[19] Köberlein-Neu J, Müller-Mielitz S, Roadmap zur Entwicklung eines Evaluationskonzeptes, in: E-
Health-Ökonomie, Springer Fachmedien, Wiesbaden, 2017. pp. 881-892.
Health Informatics Meets eHealth 169
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-169
1. Introduction
more than they are expected to, especially when help is provided. In some cases, they
successfully complete university [6] and become as independent as possible. The idea of
POSEIDON is to support people with Down syndrome with the help of modern
information technology. Nine partners2 from the United Kingdom, Norway and Germany
were involved in this project. It was funded by the European Commission from 2013
to 2016.
In the POSEIDON project, different applications were developed for people with Down
syndrome and their carers to support them in managing their daily activities as
independently as possible. POSEIDON provides support in the areas of time
management, mobility and money handling. The developmental process followed a
user-centered approach and involved Primary Users (people with Down syndrome),
Secondary Users (e.g. caregivers, parents) and Tertiary Users (e.g. teachers). The users
tested the apps at several stages of the developmental process. Their feedback was
fundamental for the design and functions and raised awareness of the need for
greater personalization of all apps.
The POSEIDON app supports daily planning, traveling, shopping and personal
video clips (see Figure 1). The main menu gives the user with Down syndrome the
following options:
• Routes – Start navigation by using planned routes
• Preferences – Turn on/off position tracking and choose colour themes
• Calendar – View planned events and add new events
• Videos – View videos that are uploaded by the carer
• Training – Access the Money Handling Training app
• Shopping – Access the Money Handling Assistance app
To support competencies in time management, an easy-to-use calendar
was developed. Carers are able to add, change and delete appointments with the help of
an online platform (Web for carers). Appointments can be personalized with videos,
instructions, voice recordings and symbols or signs according to the needs and abilities
of their protégés. Carers can also add pictures of items people have to take with them
when going to school, to work or to appointments (e.g. an umbrella). People with Down
syndrome receive notifications on their smartphone when they have appointments. These
notifications come up half an hour before the appointment starts. A time bar highlighted
with different colours indicates how much time is left. If they want to and are able to,
people with Down syndrome can add, change and delete appointments as well. The
Primary Users also receive weather notifications based on the weather forecast. These
messages are connected with recommendations on what to wear according to the
outside temperature and can be personalized by the Secondary Users.
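The reminder logic described above can be sketched as follows. The 30-minute lead time comes from the text; the colour thresholds of the time bar are invented for illustration, since the actual POSEIDON colour scheme is user-configurable:

```python
from datetime import datetime, timedelta

# Reminders appear half an hour before the appointment, as described above.
LEAD = timedelta(minutes=30)

def notification_time(appointment_start):
    """When the reminder should appear on the user's smartphone."""
    return appointment_start - LEAD

def time_bar_colour(now, appointment_start):
    """Map the remaining time to a traffic-light colour for the time bar.

    The thresholds (5 and 15 minutes) are illustrative assumptions.
    """
    remaining = appointment_start - now
    if remaining <= timedelta(minutes=5):
        return "red"
    if remaining <= timedelta(minutes=15):
        return "yellow"
    return "green"

# Example: an appointment at 09:00 triggers a reminder at 08:30.
start = datetime(2016, 6, 1, 9, 0)
```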
An additional feature is the possibility to add routes to appointments. This leads to
the next area which is supported by the POSEIDON project: Mobility. People with Down
syndrome often have problems getting from A to B. Carers can create routes with the help
of a Route Creator app. These routes can be rehearsed with the help of the Home
2 Karde AS, Middlesex University, Fraunhofer IGD, BIS – Berlin Institute for Social Research, Funka Nu,
Tellu AS, Norwegian Network for Down Syndrome (NNDS), Down's Syndrome Association – UK (DSA),
Association Down Syndrome – Germany
A. Engler and E. Schulze / POSEIDON 171
Navigation system on the PC and they can be used on smartphones to navigate outside.
Carers can add pictures of certain places to this route to make the navigation easier and
they can customise the steps of a journey by adding text and speech. To strengthen their
competencies regarding money handling, a money handling game for smartphones was
developed. With the help of this app we want to increase their knowledge about the value
of money and how to pay for certain products. The game is about choosing the correct
amount of money for the priced product being displayed on the screen. Carers can make
shopping lists on the Web for carers by adding product images and prices. The shopping
lists are automatically transferred to the Money Handling Training App in which the user
can practice before going to the shop. Moreover, a Shopping App (on smartphone) was
developed which supports the users on the spot – when they are out shopping. The app
gives an overview of which products to buy (the created shopping list), the price for each
product and the total price including what type of coins and notes to pay with. On the
web the carer can not only create personal calendar events and shopping lists and add videos
to the app, but can also track the user's position (if the person with Down syndrome
has switched on this function in the app) and mark important places on the map. Since
the abilities and needs of people with Down syndrome vary to a high extent, all
applications and functions can be personalized. The POSEIDON apps are available in
the Google Play Store (only for Android phones) in three languages (English, German
and Norwegian). Figure 2 gives an overview of POSEIDON's framework and all
functions.
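The core check of the money handling game and the shopping-list total described above can be sketched as follows. Function names, the cent-based representation and the sample prices are illustrative assumptions, not taken from the POSEIDON implementation:

```python
def payment_correct(coins, price_cents):
    """Check whether the selected coins/notes exactly match the price.

    `coins` is a list of denominations in cents; the game asks the user
    to pick the correct amount for the product price shown on screen.
    """
    return sum(coins) == price_cents

def shopping_total(shopping_list):
    """Total price of a carer-created shopping list (item -> price in cents)."""
    return sum(shopping_list.values())

# Illustrative shopping list created by a carer on the Web for carers.
basket = {"milk": 119, "bread": 250}
```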
3. Methods
User-centeredness was paramount for the success of the project. This means that from the
beginning on, all stakeholders, i.e. Primary Users (people with Down syndrome),
Secondary Users (parents and carers) and Tertiary Users (e.g. teachers or supervisors),
were included in all stages of the developmental process. The first step of the project was
a requirement analysis in the form of an online survey. The aim was to assess the needs and
requirements as well as the usage of technology of people with Down syndrome. The
survey was conducted in six countries (the UK, Norway, Germany, Italy, Slovenia and
Portugal) and started in December. The questionnaire3 addressed everybody caring
for people with Down syndrome. All in all, 583 questionnaires were filled in. Additionally,
face-to-face interviews with people with Down syndrome were conducted at
the beginning of the project. The results led to a set of personas and scenarios that
describe the life situation of seven fictional people with Down syndrome. The personas
are important for the technical requirements, which ensure that the POSEIDON solutions
fit the target group.
Moreover, three workshops, which covered topics like the use of technology in
general and assistive technologies in particular, the living situation and daily activities
of people with Down syndrome, took place with guests from Croatia, Italy, Luxembourg,
Portugal, Romania, Slovenia, Switzerland, Ukraine, Scotland and France. These guests
were parents or carers from people with Down syndrome or members of national Down
Syndrome Associations.
Based on the results of the workshops, the interviews and the requirement analysis
which indicated that people with Down syndrome mainly need support to handle time,
appointments, money and support for mobility, different technical applications were
developed. These applications were evaluated in two field tests and one extended field
test in the form of a one-day event. The field tests were each conducted with three
families per country (the United Kingdom, Norway and Germany) in 2015 and 2016.
The Down’s syndrome associations of each involved country recruited the families that
took part in the field tests.
For each family, the field tests lasted four weeks. Members of the POSEIDON
project regularly visited the families to rehearse tasks and to consider the learning process.
3 Available at www.bis-berlin.de/POSEIDON/Quest/RequirementAnalysis.pdf
For data collection, different qualitative and quantitative methods were used. The
carers/parents had to fill in questionnaires for every function at the beginning and at the
end of the pilot, and interviews were conducted twice with the person with Down
syndrome and the parents. Moreover, the participants were observed during the visits
while using the POSEIDON devices and had to fill in User Protocols. Due to the low
sample size, the results of the field tests were analysed in the form of case studies, which are based
on qualitative data. Therein, the Primary Users are presented in their living context with
their daily routines, abilities, challenges, goals and their general use of technology. The
Primary and Secondary Users’ experiences with POSEIDON are described in detail as
well as the outcome of using POSEIDON applications.
4. Results
Results indicated that the majority of people with Down syndrome use modern
information technologies: tablet PCs 71.2 percent, smartphones 60.8 percent. Tablet PCs
seem to be the easiest to use (only 42.6 percent need help using them), while 65.6
percent of people with Down syndrome need to be assisted when using a smartphone.
Modern assistive technology is regarded as helpful to overcome challenges in daily life
(57.7 percent), although some of the carers do not seem to be well informed about the
opportunities these technologies offer. Social integration varies highly between work/school
and leisure time. 44.8 percent are well integrated at school or work but only 23.3 percent
in leisure time. Most important seem to be supporting communication, socializing and
school/work/learning; more than 50 percent of participants consider these aspects as very
important. These results go in line with the fact that people with Down syndrome need
to be accompanied by relatives when going to leisure activities (73.8 percent). Making
traveling for them safer and more flexible could result in building and maintaining
friendships. More than 75 percent of the carers indicate that checking that the person
they care for has reached a destination as well as locating the person would be very
helpful features for them.
Concerning the usability and design, especially the motivating and fun aspects are
stressed (67.1 percent and 61.7 percent considered them as very important). Very
important is also the adaptability to individual needs (62.5 percent), the avoidance of a
need for fast reactions (58.1 percent), and the aspect that the device should be robust
(57.8 percent). Other aspects, such as large buttons (22.7 percent), a display with strong
contrasts (22.2 percent), a flexible change between icons/symbols and text (29.3 percent),
are not frequently regarded as “very important”.
The general impression from the first pilot study, which was conducted in summer 2015,
was that all participants seemed to like the idea behind the POSEIDON applications and
the POSEIDON vision itself. During the first pilot phase, a lot of problems had to be
overcome, but the participants were aware of the limitations of the system. However,
they mentioned a lot of advantages: They liked to learn how to handle money on a new
device with the help of a gamification approach. They also considered the calendar app
as helpful for a better structure of their daily life. Most of them can imagine that the
Home Navigation System can help to learn new safe routes to home, and they very much
liked the idea of having a navigation app using routes which can be adapted to their needs.
However, there was still room for improvements. Especially usability and user
experience, safety and personalization aspects had to be considered for further
developmental activities.
In both pilots, the applications sometimes did not work properly or bugs occurred
while testing. This was very frustrating for the Primary Users but also for their carers.
Most people with Down syndrome have a lower frustration tolerance. This increases the
tendency to give up when problems cannot be solved immediately and the tendency to
become bored or annoyed. Carers often had to encourage their protégés to use the
applications.
The second pilot study started in Summer 2016. The POSEIDON functions had been
revised based on the experiences of the first pilot, the extended pilot and the workshops
and some new functions were integrated. New functions in pilot 2 were the shopping
assistance and the integrated video function in the POSEIDON app, a new Route Creator
app and a social network.
The system worked more reliably than during the first pilot. One of the most
important results of pilot 2 was that all participants see the potential of POSEIDON to
increase the independence and autonomy of people with Down syndrome, the potential
for a better organization of daily tasks and for a higher mobility.
The Primary Users were able to master their individual challenges and to reach most
of the goals they had mentioned beforehand, even though a longer period of usage might be
necessary for some goals. With the help of the calendar app, they achieved their goal of
remembering appointments and bringing all necessary things to school or work; through
the money handling and the shopping app, they were able to achieve a better
understanding of the value of money. The navigation app made traveling more secure
for the Primary Users and was reassuring for the carers, who could easily inform
themselves about the whereabouts of their protégés. By combining different
POSEIDON apps, the users could organize and conduct different daily tasks. They could,
for instance, do a shopping tour completely on their own: creating a shopping list,
organizing the money for the planned purchase, planning and training the route to the
shop, using the navigation app to go there, doing the shopping, paying on their own and
going back home again with the help of the navigation app.
Most participants were also very open-minded and excited to learn new things. Their
ability to master POSEIDON made them proud and they had fun testing POSEIDON for
four weeks and were pleased to be an important part of the project.
Nevertheless, many problems occurred in the pilot and there was still much room for
improvement. Participants mentioned many ideas for improvement and
recommendations for further development. These recommendations were used to make
some last improvements until the project ended in December 2016.
5. Discussion
POSEIDON contributes to the field of smart environments where little has been done
before for people with Down syndrome. Not much is really known about their interaction
with technology.
Results of the project indicate that information technologies are of great importance
for people with Down syndrome and that the developed POSEIDON applications can be
a great support to increase their independence and inclusion into society. All participants
see the potential of POSEIDON to help people with Down syndrome to be more
independent and autonomous in their daily lives. However, the impact is very individual
for each person with Down syndrome. According to their individual challenges and
abilities, they consider different functions as most helpful and supportive. For some
participants, the navigation training and the navigation app was the most helpful function
in order to achieve a better orientation and to be able to go out on their own. Participants
who were going to move away from home into supported living considered POSEIDON
as a great help to handle this situation. For other participants, the Money Handling
Training and the shopping app were most helpful, because they had no understanding of
the value of money. All participants could master the technology, and this had a positive
impact on their self-confidence. They had fun testing POSEIDON and were proud to
be a part of the project. Most participants would also like to use POSEIDON in the future,
at least specific functions of POSEIDON.
The study design limited the time schedule of the field tests. Therefore, we
could not measure the long-term impact on independence and inclusion in society.
Long-term field tests would be very helpful to measure the psycho-social impact on the
lives of people with Down syndrome in terms of independence, autonomy and inclusion.
Longer field tests could also measure whether POSEIDON increases the chance of
employment, which was not possible to measure in four-week field tests.
It is planned to continue the development of the applications (e.g. availability for
Apple phones, not only Android; translation into more languages) in order to bring
POSEIDON onto the market. Not only people with Down syndrome, but also people
with other cognitive disabilities would strongly benefit from using POSEIDON to
increase their quality of life.
References
[1] F. Hickey, E. Hickey, K.L. Summar, "Medical update for children with Down syndrome for the
pediatrician and family practitioner.". Advances in pediatrics 59 (1): pp.137–57, 2012.
[2] L. Abbeduto, M. Pavetto, E. Kesin, M.D. Weissman, S. Karadottir, A. O’Brien, S. Cawthon, “The
linguistic and cognitive profile of Down syndrome: Evidence from a comparison with fragile X
syndrome. Down Syndrome Research and Practice.” 7(1), pp. 9-15, 2001.
[3] S. Buckley, G. Bird, “Speech and language development in individuals with Down syndrome (5-11
years): An overview Tech. rep.: Down Syndrome Educational Trust” Portsmouth, UK, 2001.
[4] L. Kumin, “Early Communication Skills in Children with Down Syndrome: A Guide for Parents and
Professionals.” Bethesda, Maryland: Woodbine House, 2003.
[5] J. Lazar, L. Kumin, J.H. Feng, „Understanding the Computer Skills of Adult Expert Users with Down
Syndrome: An Exploratory Study.” In: The Proceedings of the 13th International ACM SIGACCESS
Conference on Computers and Accessibility). New York, NY, USA: ACM, pp. 51–58, 2011.
[6] Down Syndrom Regensburg, What about the intelligence of children with Down's syndrome? (Was ist
mit der Intelligenz bei Kindern mit Down-Syndrom?) published on: http://www.down-syndrom-
regensburg.org/das-down-syndrom/was-ist-mit-der-intelligenz-bei-kindern-mit-down-syndrom/, 2016.
176 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-176
Abstract. Standard toilets often do not meet the needs of a significant number of
older persons and persons with disabilities. The EU-funded iToilet project aims at
the design and development of a new type of ICT-enhanced modular toilet system which
shall be able to support the autonomy, dignity and safety of older persons living at home.
Methodologically the project started with gathering user requirements by means of
questionnaires, interviews and focus group discussion involving a total of 74
persons, thereof 41 subjects with movement disorders (primary users), 21 caregivers
(secondary users) and 12 healthcare managers (tertiary users). Most important
wishes were bilateral removable handrails, height and tilt adjustment, emergency
detection, simplicity. In parallel to the ongoing technical development participatory
design activities have been carried out at user test sites in order to continuously
involve users into the design process and to allow quick feedback with regards to
early prototype parts. The project currently is working on the finalization of the first
prototype ready to enter the lab trial stage in spring 2017. The experiences will be
used for redesigning a prototype 2 which is planned to be tested in real life settings
early 2018.
1. Introduction
The toilet most commonly used in Western society is in the form of a fixed-height
"seat" [1]. Considering the diversity of people and their different needs and preferences
in personal hygiene, this standard form of the toilet has a considerable number of
deficiencies. These deficiencies can present serious obstacles to older persons and persons
with reduced mobility [2, 3].
1 Corresponding Author: Paul Panek, HCI Group, Institute for Design and Assessment of Technology,
TU Wien, Favoritenstrasse 11/187-2, A 1040 Vienna, Austria, E-Mail: panek@fortec.tuwien.ac.at
P. Panek et al. / On the Prototyping of an ICT-Enhanced Toilet System for Assisting Older Persons 177
In the project "iToilet" [4, 5, 6], partners are developing an extended toilet that is
based on the existing sanitary products "Lift-WC" and "mobile toilet chair" [7] and is
enhanced by ICT-based components that can support older persons in an active and
independent life at home as well as in institutions. It provides functions such as setting
the optimal seat height, dynamic support during standing up and sitting down, automatic
recognition of user preferences, voice control, and safety features (emergency detection,
fall detection, etc.). The target groups are older people living alone, as well as their
(formal and informal) caregivers, institutions, and customers.
Surprisingly few research projects in Europe have hitherto dealt with technical
improvements to the toilet for older persons or disabled users. Nevertheless, there is a
large body of research on the forces that occur during standing up and sitting down and
on how, from a therapeutic point of view, physical training should be designed so that
these movements are performed most efficiently.
The "Friendly Restroom" project, coordinated by TU Delft (2002-2005) and partly
funded by the EU in the 5th Framework Programme, was a user-centered research project
focusing on the design of adaptable toilets and toilet rooms [2]. A basic product of a
height-adjustable and tiltable toilet came on the market in 2006 (LiftWC [7]).
The research project "The Future Bathroom" (2008-2011) of the University of
Sheffield conducted studies on the user-centered design in the bathroom for people living
with age-related disabilities. The project was very concerned with the integration of old
people into the design process [3].
In contrast, the "iToilette" project of RWTH Aachen University (2009-2011)
focused on the automated measurement of vital parameters [8]. The EU project I-Support,
launched in 2015, develops a robotic shower to enable independent living for a longer
period of time and thus improve the quality of life of frail people [9].
Products on the market in North America and Europe focus on simple toilet
implementations, additional handles, as well as mechanical (and partly motorized) lifting
aids (for example LiftWC) as well as add-ons for intimate hygiene (shower WC) and seat
hygiene. In Japan there is also a remarkably strong spread of additional toilet facilities
such as anal cleaning, bidet, seat heater and odor removal.
The iToilet project [4, 5, 6], coordinated by the Vienna University of Technology and
launched in 2016, focuses on older people who live independently at home. The wishes
and needs that arise when using a toilet at home are to be met as far as possible with
ICT-based support modules, thereby empowering the elderly to lead a more independent
and dignified life. In addition to the main area of work, the home, the system is also
intended to bring benefits in institutions, not only to the older persons themselves but
also to their caregivers, by reducing the burden of personal assistance at the toilet.
As stated above, the needs and wishes of older (or physically impaired) people play
a central role in the use of a toilet. The iToilet project, however, also supports the needs
of caregivers who assist users in the toilet room, and takes into account the perspective
of nursing and care facilities (mobile and institutional care) and financing/funding
organisations, e.g. insurance and social systems. All stakeholders are actively involved,
not only on a point-by-point basis but as continuously as possible, and not just during
the trial phase but during the entire project runtime.
The iToilet project aims at developing field-test-ready solutions for the known
problems, guided by certain hypotheses: for example, we expect that adjustability of the
toilet height for different "actions" should make toilet use much easier for (most of) our
target users. Additionally, we assume that this adjustability will support users'
independence and maintain safety during unattended use. We also planned for testing in
institutions because they offer a broader sample, a safer environment and better
availability of experts from the areas of nursing, therapy and medicine. Thus, one of the
first tasks in the project was to check whether these hypotheses hold true.
During the initial phase of the project it was important to assess user requirements and
verify these assumptions in order to gain a clearer view of expectations and needs
regarding toilet use (from entering to leaving the toilet room). The eventually derived
user requirements also included a ranking of the most needed functionalities. This created
a solid basis for specifying the iToilet system and helped to decide which components
of the to-be-developed intelligent toilet are crucial and which are optional.
The iToilet project faces challenges due to its special setting. For example, it is
foreseen that the prototypes are tested not at home but in two institutions, which makes
it easier to carry out the user tests. On the other hand, it must always be kept in mind
that an innovative product for the home is the primary project goal.
With regard to the users involved in the project, there are (a) older people within a
clinical rehabilitation setting (a rehabilitation clinic in Budapest) and (b) mostly younger
but physically limited users with multiple sclerosis (an MS day care centre in Vienna).
The evaluation in everyday life must be carried out while the test partners' own
day-to-day operational tasks are running.
A basic methodological difficulty arises from the taboo-related context of toileting,
which can have a significant impact on the participatory design activities. However, the
authors already have positive experiences from successful previous projects [2, 10, 11],
in which it was shown that, with intensive preparation, well-accepted possibilities for
user involvement and participation in the development and design process can be created
despite the taboo area. A good, detailed and continuous information strategy, the creation
of trust between researchers and users, and carrying out the initial toilet tests in the
laboratory fully dressed (with clothes on) have proved particularly helpful.
Due to the taboo area and the vulnerable target audience, the topic of ethics is
particularly important [12, 13]. The safety aspect is also important, especially since
dynamic physical support (for example during sitting down and getting up) shall be
provided, whereby all conceivable safety risks have to be eliminated or minimized
within the framework of a safety analysis.
In the iToilet project, the involvement of users right from the beginning plays a
special role. These actors come from the non-scientific field and have a very important
role in the project structure, as they act as experts for their own life experiences at home
and contribute their everyday knowledge to the knowledge base of the project. The test
sites in Hungary and Vienna serve as a framework (a) for the integration of non-scientific
participants, the future users of the iToilet system, and (b) as an opportunity for
representatives of the other disciplines to enter a continuous dialogue across technical
boundaries.
4. Results
This section presents some preliminary results from the ongoing project. It also outlines
selected technical components which will be part of the first prototype of the iToilet
system, which is currently in the final phase of development.
are arriving). This difference has to be covered in the technical implementation, e.g. the
alarm chain.
The requirements were ranked and grouped into high and medium priority according
to the answers in the questionnaires and the frequency of mentioning in the interviews
(Table 1; more details can be found in [14, 15]).
Secondary and tertiary user groups looked at toilet use scenarios from a much wider
perspective; nevertheless, they came to the same conclusions about user requirements as
the primary users themselves.
In conclusion, the differences between the user groups were outweighed by their
common needs, as can be seen in the priority rankings of the user requirements (cf.
Table 1). The consortium aims at developing iToilet prototypes able to cover all
top-priority items from the user requirements and 50% of the medium-priority items.
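The frequency-based grouping described above can be sketched as follows. The requirement names, the example interview data and the priority threshold are illustrative assumptions for this sketch, not the project's actual data or procedure:

```python
from collections import Counter

# Hypothetical interview data: each inner list holds the requirements
# one participant mentioned. Names and counts are illustrative only.
mentions = [
    ["handrails", "height_adjust", "emergency_detection"],
    ["handrails", "height_adjust", "simplicity"],
    ["height_adjust", "tilt_adjust", "handrails"],
    ["emergency_detection", "simplicity"],
]

def rank_requirements(mentions, high_threshold=3):
    """Count how often each requirement was mentioned and split the
    result into high- and medium-priority groups."""
    counts = Counter(req for participant in mentions for req in participant)
    high = {r: n for r, n in counts.items() if n >= high_threshold}
    medium = {r: n for r, n in counts.items() if n < high_threshold}
    return high, medium

high, medium = rank_requirements(mentions)
print(sorted(high))  # ['handrails', 'height_adjust']
```

In practice the questionnaire answers would feed into the same counting step, and the threshold separating "high" from "medium" would be chosen from the observed distribution rather than fixed in advance.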
Despite the taboo-related topic of toileting and the corresponding routines of intimacy
and personal hygiene, the involvement of older and vulnerable users in the design and
development activities could be established successfully. Initial topics for the
participatory design (PD) activities were: toilet paper dispenser, speech control, various
mechanical buttons and remote controls, and grip bars. First results [16, 17] are
promising and will directly influence the technical development. In the upcoming months
the participatory design activities will be continued by involving more users and by
providing additional and/or improved hands-on material.
For the system architecture a modular approach has been chosen which allows for a high
level of flexibility during research and also during upcoming commercialization, as
different combinations of the iToilet components can be selected according to the
individual setting, preferences and wishes. The following hardware modules [18] are
foreseen for the upcoming prototype system:
• A motorized height- and tilt-adjustable toilet "chair" forms the mechanical base
(see Figure 1). Two separate motors can change the height and tilt of the seat.
Sensors are integrated to measure the actual position of the toilet and the static
or dynamic load (e.g. caused by a person sitting on or standing up from the toilet).
• A control unit runs the inference software unit, the dialogue manager and the
network coordination of the different (partly optional) modules.
• Sensors in the environment measure activities (e.g. person presence).
• A 3D sensor aims at recognizing falls (manufacturer: CogVis GmbH).
• A speech recognition unit with a far-field microphone (which does not need to
be worn by the user) allows control via speech as an alternative to the buttons.
• Buttons (tactile commands) for controlling the toilet are available on a remote
control connected via cable or integrated in the grip bars.
• An RFID reader at the entrance identifies users, allowing individual user
preferences (e.g. height, tilt, language) to be recalled automatically.
• An interface to care documentation systems is useful for storing preferences,
for visualization of usage data and for connecting mobile devices.
• Output is given by synthetic speech, sound or devices like smartphones.
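The coordination role of the control unit among these (partly optional) modules can be sketched as a minimal publish/subscribe hub. All class names, topic names and handlers below are illustrative assumptions, not the project's actual software:

```python
# Minimal sketch of the modular coordination idea: a control unit
# forwards events to whichever optional modules have registered for
# them, so modules can be added or omitted per installation.

class ControlUnit:
    def __init__(self):
        self.handlers = {}  # topic -> list of subscribed callbacks

    def register(self, topic, handler):
        """A module subscribes to an event topic."""
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        """Deliver an event to every module subscribed to the topic."""
        for handler in self.handlers.get(topic, []):
            handler(payload)

# Example: hypothetical modules reacting to a person entering/leaving.
log = []
control = ControlUnit()
control.register("user/entered", lambda uid: log.append(f"load profile {uid}"))
control.register("user/entered", lambda uid: log.append("activate microphone"))
control.register("user/left", lambda uid: log.append("deactivate microphone"))

control.publish("user/entered", "user42")
control.publish("user/left", "user42")
```

The design choice this illustrates is that no module needs to know which other modules are installed; a given installation simply registers the components it contains.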
Figure 1. Left: Base toilet seat module of Prototype 1. Height and tilt can be changed by two independent
motors. Middle: The toilet seat can be moved over an existing toilet bowl. Right: Base toilet seat module of
Prototype 1 (side view) in maximum height position (left: no tilt (horizontal seat), right: tilted seat).
The base chair (Figure 1) has to be put over a toilet bowl (seat has to be removed from
bowl) and connected to mains power.
In parallel to the traditional remote control (pressing buttons), users can give voice
commands to control the iToilet system. This is especially useful when transferring to
or from the seat of the chair device using both arms.
The voice control unit (in English, German and Hungarian) has been developed in
two versions: one based on existing Software Development Kits (SDKs) for Automatic
Speech Recognition (ASR), the other on an innovative Speaker-Independent Large
Vocabulary Continuous Speech Recognition (LVCSR) engine [19], which has been
customised to encompass the specificities of the iToilet domain. The vocabulary of the
speech recognition system has been limited to a set of commands defined by the PD
results of the user partners. Extensive acoustic training has been done by native speakers.
The system offers powerful personalisation capabilities (it can easily be trained to
individual voices). In addition, it can be trained to recognise users with mild speech
impairments and disturbances. Currently, the height and tilt of the chair device can be
adjusted, and the flush and bidet can be activated/stopped, via the following basic speech
commands in the three project languages: "sit down", "stand up", "higher", "lower",
"flush", "bidet on", "bidet off", "forward", "backward", "stop". Additionally, the "help"
command initiates an emergency call to pre-stored phone numbers.
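The mapping from recognised commands to toilet actions can be sketched as a simple dispatch table. The action strings and the `handle` helper below are hypothetical placeholders, not the project's implementation:

```python
# Sketch: dispatch the basic speech commands listed above to actions.
# Action descriptions are illustrative; unknown utterances are ignored
# rather than guessed, since safety-critical commands must be unambiguous.

actions = []

COMMANDS = {
    "sit down":  lambda: actions.append("lower seat to sitting height"),
    "stand up":  lambda: actions.append("raise and tilt seat for standing up"),
    "higher":    lambda: actions.append("raise seat"),
    "lower":     lambda: actions.append("lower seat"),
    "flush":     lambda: actions.append("activate flush"),
    "bidet on":  lambda: actions.append("start bidet"),
    "bidet off": lambda: actions.append("stop bidet"),
    "forward":   lambda: actions.append("tilt seat forward"),
    "backward":  lambda: actions.append("tilt seat backward"),
    "stop":      lambda: actions.append("stop all movement"),
    "help":      lambda: actions.append("call pre-stored emergency numbers"),
}

def handle(utterance):
    """Dispatch a recognised utterance; return the action taken, or None."""
    handler = COMMANDS.get(utterance.strip().lower())
    if handler:
        handler()
        return actions[-1]
    return None
```

A real system would attach this kind of table to the ASR engine's recognition callback, with each project language mapped onto the same set of actions.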
The toilet rooms where the iToilet system is going to be installed are quiet enough
for the speech recognition system to work reliably (verified during PD). In addition, to
avoid interference, an omnidirectional microphone with background noise suppression is
used, and the microphone is activated only when a person enters the toilet room (people
entering are recognised by the User ID Module and the relevant speaker profile is loaded
into the system) and deactivated when the person leaves the toilet room. This allows the
system to achieve high accuracy in the recognition of commands.
The voice control unit activates speech synthesis when the user enters the toilet room
(to give instructions on how to use the system) and as a reaction to the "help" command.
The voice control unit and its microphone are mounted opposite the chair device at
a distance of 1 to 1.5 meters. The voice control unit is connected to the other iToilet units
via a private local wireless network and communicates with them using the MQTT
protocol.
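The MQTT communication between the units could look roughly like this. The topic scheme and payload fields below are illustrative assumptions, not the project's actual protocol; a real deployment would use an MQTT client library (e.g. paho-mqtt) and a broker on the private local network, so only the message construction is sketched here:

```python
import json

# Sketch of an MQTT-style message the voice control unit might publish
# to another iToilet unit. Topic names ("itoilet/<unit>/<command>") and
# payload fields are hypothetical.

def make_message(unit, command, **params):
    """Build a (topic, JSON payload) pair for a command to one unit."""
    topic = f"itoilet/{unit}/{command}"
    payload = json.dumps({"command": command, **params})
    return topic, payload

# e.g. the voice control unit reacting to the "higher" command:
topic, payload = make_message("seat", "set_height", millimeters=520)
print(topic)  # itoilet/seat/set_height
```

Keeping payloads as small JSON documents per topic keeps the optional modules loosely coupled: a unit only subscribes to the topics it implements.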
In the iToilet project, ICT-based support measures are being developed to improve
existing toilet modules and to support the independent and safe life of older people at
home. To reach these goals the project uses existing toilet base modules and develops
ICT-based add-on modules. The architecture foresees a highly modular toilet system,
which is individually adjustable and can be installed over an existing WC bowl as
typically found in private homes or institutions. In parallel to the technical development,
participatory design (PD) activities are carried out. A key concern is to ensure the
relevance of the solution to the users, that is, to enable the users to experience the desired
assistive toilet system as useful and usable.
In spring 2017, the first complete prototypes of the height- and tilt-adjustable seat
module will be available for lab testing. An improved prototype of the iToilet system,
incorporating the results from the lab trials, is planned to be available for real-life
evaluation in 2018.
6. Acknowledgements
7. References
Abstract. Since 2012, six AAL pilot regions have been launched in Austria. The
main goal of these pilot regions is to evaluate the impact of AAL technologies in
daily use, considering the entire value chain. Additionally, go-to-market strategies
for assistive technologies based on the involvement of all relevant stakeholders are
developed. This paper gives an overview of the specific objectives, approaches and
status of all Austrian AAL pilot regions. Taking into account the experiences of the
different pilot regions, specific challenges in establishing, implementing and
sustaining pilot region projects are discussed and lessons learned are presented. The
results show that careful planning of all project phases, taking into account the
available resources, is crucial for the successful implementation of an AAL pilot
region. In particular, this applies to all activities related to the active involvement
of end users.
1. Introduction
AAL (Active and Assisted Living) products and services aim at promoting older people's
independence and social participation, improving their personal safety and well-being,
as well as supporting healthy lifestyles. In recent years, a wide range of technologies
and services has been developed that have the potential to contribute to an
1 Corresponding Author: Markus Garschall, Center for Technology Experience, Austrian Institute of
Technology GmbH, Giefinggasse 2, 1210 Wien, E-mail: markus.garschall@ait.ac.at.
N. Ates et al. / Assistive Solutions in Practice: Experiences from AAL Pilot Regions in Austria 185
increased quality of life for older people. Beyond their ability to support everyday
activities and their acceptance within the target group, the development of holistic
service concepts and sustainable business models is critical for successfully introducing
AAL solutions to the market.
Since 2012, six AAL pilot regions have been launched, co-funded by the Austrian
Ministry for Transport, Innovation and Technology (BMVIT). Within these pilot regions,
AAL products and services are put into practice and tested over a longer period of time
than is typically possible in other projects. The overarching goals are: evaluating the
impact of AAL technologies in daily use along the entire value chain; and developing
go-to-market strategies involving representatives from businesses and the housing sector
as well as stakeholders from research, health care, insurances and the public sector.
A number of initiatives and projects aim at setting up living labs for assistive
technology on a European level [1]. Within the European Innovation Partnership on
Active and Healthy Ageing (EIP on AHA), 74 regional and local organizations were
awarded the status of a "reference site" in 2016 [2]. The overarching goal of this initiative
is to accelerate the scaling-up of innovative approaches and practices by fostering active
knowledge exchange. In Germany, a pilot region for assistive technology will be
established as part of the SmartSenior project [3]. The gAALaxy project [4], which
started in May 2016, aims at developing innovative, holistic and market-driven AAL
bundles with affordable, retrofittable, easy-to-use and maintainable smart home solutions.
During the evaluation phase, up to 180 private test households connected to diverse
service and care providers in Austria, Italy and Belgium will be involved. Within the
ACTIVAGE project [5], a multi-centric large-scale pilot will be set up in nine
deployment sites across Europe. Its goal is to provide evidence on the benefits of
deploying an interoperable IoT-enabled active and healthy ageing platform.
So far, knowledge exchange between the Austrian AAL pilot regions primarily took
place on a bilateral basis or was initiated by the Austrian Research Promotion Agency
(FFG). For example, guidelines for the implementation of pilot studies [6] and the
involvement of different stakeholders in the development of business models for AAL
solutions [7] as well as a taxonomy for the classification of AAL solutions [8] were
developed.
With this paper, we aim to foster this exchange between AAL pilot regions and
discuss specific challenges in establishing, implementing and sustaining projects across
the different contexts and experiences of all AAL pilot regions funded thus far.
We begin by providing an overview of all AAL pilot regions, their principal aims,
approaches and current status. Table 1 summarizes the projects and their basic
information.
1.1.1. moduLAAr
The research project moduLAAr (a modular and scalable AAL system as a lifestyle
element, from silver-agers up to assisted living) [9] was Austria's first co-funded test
region project. It had a strong demonstration character and aimed to measure the impact
of AAL technology on the quality of life of older adults and to enhance the public
perception of AAL in general, but also among policy makers and different stakeholders.
In the project, 50 flats in the rural region of Burgenland were equipped with AAL
technologies from the domains of safety, social inclusion, health and comfort; most of
these flats were connected to a care facility and received on-demand care services from
the Samariterbund. Some flats had been built recently, 11 flats received care services
without a nearby care facility, and 14 flats were private properties. The system provided
consisted of a tablet computer, a mini PC for continuous activity tracking in the living
environment, an NFC-enabled blood pressure monitor, a weighing scale and a mobile
phone, as well as a mobile emergency call system with GPS and domotic sensors.
Additionally, a cloud server was provided for formal and informal care persons (e.g. to
share photos). The LeichterWohnen app for Android tablets formed the central user
interface and, from the end users' view, the core component of the overall system.
The average age of the participants was 71 years. The participants had a low affinity
for technology, and 76% lived alone. To measure the impact on quality of life, a number
of quantitative and qualitative instruments, mainly standardized questionnaires, were
used, partly adapted to the special needs of the target group. The results clearly showed
a positive effect of AAL technology on quality of life in an age group where maintaining
the same quality of life can already be seen as a success.
The project was closed at the end of December 2015. In its final stages, a multi-stage
exploitation strategy was developed to enable a low-cost entry into the use of AAL
technology for end users by promoting lifestyle and health aspects first. The modular
architecture of the system allows an easy extension as user needs change later on. Steps
towards commercial exploitation, in cooperation with partners and based on the
outcomes of moduLAAr and other projects, are ongoing.
1.1.2. West-AAL
The aim of West-AAL (the second Austrian test region funded within the benefit
programme) [10] is to identify and analyze existing AAL solutions as well as ICT-based
smart home systems and smart services, in order to allow a requirements-based
evaluation of the fit and potential benefits of single solutions, as well as bundles of
solutions, for older people and (in)formal caregivers with respect to their distinct
surroundings. The consortium of West-AAL therefore consists of six different user
organizations, which ensure a variety of environments and access to up to 74 households.
For example, urban as well as rural regions are covered; older adults living in their own
property and others living in a rented apartment are involved; and households with and
without direct access to professional care services provided by public or private
organizations are part of the project. In addition to the preliminary evaluation of the
identified solutions, West-AAL
1.1.3. ZentrAAL
Most technological solutions for supporting older people focus on comfort and
consequently on functionalities that support people in their everyday life. As beneficial
as these technologies can be, by taking over simple activities they can also cause losses
of functional skills and increase care needs. Therefore, the AAL pilot region Salzburg
(ZentrAAL) [11] focuses on the development of technology-enabled services to prolong
independent living (and to reduce or delay care demand) by aiming to maintain or even
improve the functional abilities and current health status of older adults living in
sheltered housing schemes.
Consequently, these people will be trained to use these technologies and get to know
the benefits of AAL usage. In order to ensure end-user acceptance of the developed
services, lead users are involved in all relevant project phases, starting with the
requirements analysis [15]. The developed system MeinZentrAAL provides
1.1.4. RegionAAL
Only a few AAL technologies have been successfully introduced to the market so far.
Within the Styrian pilot region RegionAAL [12], the question of why many AAL
solutions have not been successful so far is addressed. One reason might be that research
and development did not sufficiently consider the specific needs and requirements of this
particular user group.
Therefore, RegionAAL takes as its starting point (based on an evidence analysis)
ICT-based interventions that are acceptable for this target group and have been proven
beneficial; appropriate technologies are sought for those interventions, further developed,
and integrated in a way that makes them easily accessible and usable for the involved
users. The developments of RegionAAL cover functionalities in the areas of safety,
communication and interaction with care organizations. A scientific evaluation of the
effectiveness of the technology intervention will be performed.
The development of the service modules is nearly finished; the 12-month test phase
is about to start in 100 households (as of January 2017). The design of the accompanying
scientific evaluation also foresees a control group with about the same number of
households.
1.1.5. WAALTeR
The Viennese AAL pilot region WAALTeR [13] is one of two pilot regions funded in
the latest cycle of the benefit programme. It is also the first in Vienna, promising to yield
new insights into AAL in large-scale urban environments. The consortium is a diverse
mix of care service providers, research institutions and companies developing technology.
As lead, the Vienna smart city agency TINA ensures that results are embedded in the
overall smart city strategy and associated policy making. In line with the funding scheme,
the overall goal of the project is to support the independent and self-determined life of
the elderly within their own homes.
WAALTeR aims to equip 83 households with AAL technologies, with a focus on
integrating existing products and prototypes rather than developing novel components.
The WAALTeR system covers three service areas: social integration (e.g., through
neighborhood networks), safety (e.g., fall detection) and health (e.g., tele-monitoring).
Mobility is defined as a cross-sectoral theme that is reflected in all service areas.
The project is committed to engaging the target user group throughout the design of
the system and its accompanying services. To this end, the project has started to recruit
participants for design workshops in which requirements, needs and ideas for the
technology are assessed and co-designed. An 18-month evaluation study, additionally
involving 35 control households, is planned to start by the end of the year. An
experimental study design is applied to measure the impact of the technology
intervention on quality of life, physical activity, loneliness, frailty and self-esteem, as
well as to evaluate technology acceptance and user experience.
1.1.6. Smart VitAALity
Smart VitAALity [14] is the second of the two pilot regions funded in the latest cycle of
the benefit programme. It will implement a large-scale (n = 100+100) and long-term
(15 months) evaluation of an integrated AAL system in a smart city "Health, Inclusion
and Assisted Living" setting within older people's households in Carinthia.
The Smart VitAALity system offers future users utility-based, expandable, modular,
intuitive and user-friendly services that are well integrated into existing everyday
processes. The approach is based on the principles of modular function packaging, deep
service integration and adaptation to the continuum of needs.
The functional clusters (health, wellbeing and social inclusion) are geared towards
supporting the long-term sustainment of quality of life (sQoL) and its dimensions
(well-being, health, social inclusion). This should allow older people to live
independently and happily for longer in their own private homes.
The evaluation is based on a multi-domain strategy (usability, acceptance, sQoL,
socio-economic models), of which a core component will be a quasi-randomized
controlled trial (n=100 intervention group, n=100 control group). The results will be the
basis for the development of a sustainability concept.
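The assignment step of such a trial can be sketched as follows. The household IDs, the seeding and the helper function are illustrative assumptions; the paper does not describe the actual assignment procedure:

```python
import random

# Sketch: assign 200 recruited households to intervention (n=100) and
# control (n=100) groups. IDs are synthetic; a fixed seed makes the
# assignment reproducible for documentation purposes.

def assign_groups(household_ids, n_intervention, seed=1):
    rng = random.Random(seed)
    shuffled = list(household_ids)
    rng.shuffle(shuffled)
    return shuffled[:n_intervention], shuffled[n_intervention:]

households = [f"HH{i:03d}" for i in range(200)]
intervention, control = assign_groups(households, 100)
print(len(intervention), len(control))  # 100 100
```

A quasi-randomized design, as named in the text, would typically constrain this shuffle (e.g. by site or housing scheme) rather than randomize fully; the sketch shows only the basic partitioning.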
The implementation of an AAL pilot region involves several stages, from the initial
planning to the development of sustainable concepts and business models for sustaining
the pilots beyond the end of the projects. The phases requirements analysis, technology
integration and evaluation correspond to the phases of the UCD approach [17]; the
phases planning and outreach activities were added because of their importance for the
successful implementation of AAL pilot regions. Figure 1 provides an overview.
Within the planning phase, a consortium of research, industry, end-user and
additional partner organizations is set up in order to ensure access to all resources (e.g.
SW and HW components), expertise and competences (e.g. user research, business
modelling) necessary for the implementation of the pilot region. A work plan is set up
defining the roles and responsibilities of all partners, together with a concept for the
AAL system to be deployed and a draft methodology for the evaluation. For the Austrian
AAL pilot regions this phase also included the development and submission of a project
proposal within the benefit programme.
The requirements analysis phase covers all activities related to collecting ideas,
needs and wishes for the design of the AAL solution to be deployed, co-design activities,
as well as activities aiming to raise awareness for the project within the target group (in
preparation for recruiting households for the pilot study). Within the technology
integration phase, the individual AAL service components are adjusted and integrated
into one or multiple service packages according to the user requirements.
The core phase of every pilot region is the implementation of the evaluation study
in older users' households. This phase comprises the ethical clearance, the recruitment
of participants, the definition of an evaluation methodology, the deployment of the
systems to the users' households, the running of the study, and the data analysis.
According to pre-defined evaluation parameters, the impact of the technology
intervention on individual (e.g. quality of life), system-related (e.g. user experience) and
socio-economic factors (e.g. cost efficiency) is analyzed.
Outreach activities are performed throughout all phases of a pilot region and cover
the dissemination of (scientific) project results as well as the development of sustainable
service concepts and business models (scaling up activities). Within all pilot regions,
showcase flats are set up as a channel to present the project developments to interested
parties. The involvement of stakeholders such as public bodies and (social) insurances is
crucial in this phase.
2. Method
This paper is based upon a reflection on past activities within the Austrian AAL pilot
regions. Consequently, the persons mainly responsible within the coordinating
organizations and additional key project team members of all six Austrian AAL pilot
regions were involved in the analysis process. Section 3 collates the results of all pilot
regions, structured along the project phases, in order to meaningfully contrast relevant
experiences. While not all pilot regions are at the same stage, results were discussed
according to the progress made (as described in section 1.2). The overall discussion and
the lessons learned described in section 4 look across all projects and their results and
seek to draw out common insights.
3. Results
The following provides a synopsis of approaches, experiences and lessons learned across
the Austrian AAL pilot regions in relation to the different phases of establishing,
implementing and sustaining a pilot region.
The experiences from the Austrian AAL pilot regions show that bringing together the
right partners for the implementation of a pilot region is challenging, even when
municipalities or regional authorities are involved. When setting up the project
consortium, it is important to involve partners that are jointly able to cover all the
necessary roles along the project lifecycle. This is crucial not only for the implementation
of the pilot region, but also for scaling-up activities. The involvement of key persons with
long experience in the AAL domain can help to lead the planning activities and to
drive the implementation of the project.
One important finding of the first Austrian pilot regions is that the effort involved in
recruiting participants for the pilot study is easily underestimated. It is therefore
necessary to engage end-user organizations and allocate sufficient resources to this task.
Thus, recently launched pilot regions such as WAALTeR aim to create
awareness among potential participants of the pilot study from the very beginning of
the project.
The commitment of end-user and care organizations to actively contribute to the
project has also proven important for aligning the technology with the
specific needs and real demands of the users. Thorough planning of which technologies
are best suited to address the needs of the defined target groups is critical. In
addition, a clearly defined concept for a multi-domain evaluation (taking into account
individual, system-related and socio-economic factors) is necessary to ensure that the
project outcomes will provide evidence on the impact of the technology intervention on
the individual and socio-economic level.
The experience from all Austrian AAL pilot regions has shown that the requirements
analysis plays an important role in designing the AAL services and service packages to
be evaluated. Various resources can be leveraged to adjust the technology according to
the needs of the targeted user groups: the experience gained in previous R&D projects,
the expert know-how of the involved end-user organizations as well as an active user
involvement and co-design activities performed within the pilot region project. While
in the first pilot region, moduLAAr, the requirements were defined primarily based
on previous experience and expert know-how, later pilot regions increasingly applied
participatory approaches.
In ZentrAAL, for example, a comprehensive participatory approach has proven
successful. User requirements were gathered by involving lead users (older people living
in sheltered housing schemes as well as employees of a social care organization), who
contributed their intimate knowledge of needs in daily life. Based on the lead-user
input, personas (fictional characters representing different user types) and scenarios
(possible interactions of personas with the technical system) were developed and
subsequently revised with the lead users. Similar approaches were followed in
West-AAL and RegionAAL and will be followed in WAALTeR and Smart VitAALity.
In RegionAAL, experience has shown that both the involvement of professionals
who have been working in the field for many years and the active contribution of end
users through workshops are necessary to gain a comprehensive view of how the provided
services have to be designed in order to meet the everyday needs of older users. In
addition to the analysis of functional requirements, further insights can be gained
from an early involvement of the defined user groups. One of the first activities
performed in WAALTeR was a workshop to co-design the information material for the
project, with a view to understanding why older people should be interested in engaging
with the project. In West-AAL, one major outcome of the requirements analysis was that
the dimension of maintenance & support plays a major role not only in testing and
piloting but also in sustaining and scaling up the AAL intervention in the post-project phase.
There are several challenges related to the technology integration phase of AAL pilot
regions. In general, integrating single components into an AAL service package and
adapting these technologies to the needs of older users is a time-consuming
task. The experiences from moduLAAr have shown that the integration of
technologies that did not exist at the time of planning the project can also lead to
additional efforts. For example, tablet devices were outside the scope of the project as a
frontend for the end users, but they became available on the market during the integration
phase and therefore had to be considered.
Another source of unforeseen efforts within the technology integration phase is
needs that were not considered in the planning phase but emerged during the
requirements gathering. Within RegionAAL, this led to the integration of a fall
detector and alarming functions on the smart watch device used. In general, only a fully
integrated system that is aligned with the user requirements has a chance of being
accepted by the users within the evaluation study.
AAL pilot regions are usually bound to a maximum duration of three years, which
raises additional challenges for development. West-AAL, for example, developed a
priority list of integration activities to ensure that key services were allocated sufficient
resources in a timely fashion. Due to the limited resources of partners and the limited
interoperability of components, the consortium prioritised the integration of safety- and
security-relevant components, also taking existing solutions (e.g. social alarm) into
consideration. In addition, a process-oriented integration approach was chosen, which
takes into account not just technical processes but also non-technical business processes.
Cost considerations are essential for the creation of sustainable business models.
Another strategy to address limited resources and stability needs is the integration
of (market-)proven solutions, which has proven beneficial in ZentrAAL. Commercially
available solutions (e.g. the iLogs MOCCA tablet application [18]) have been used, as
well as community-based open source components (e.g. the home automation platform
FHEM [19]).
Experiences from the pilot regions moduLAAr, West-AAL and ZentrAAL (pilots
already finished or still running as of January 2017) have shown that challenges exist
relating to the recruitment of participants, the definition of an evaluation methodology,
the deployment of the AAL services and service packages to the users’ households, and
the actual implementation of the study.
Even though considerable resources were allocated for performing the evaluation,
the effort required for recruiting participants was underestimated in moduLAAr.
An unexpectedly low number of people living in the facilities of the involved end-user
organization agreed to participate in the study. Experiences from ZentrAAL show that at
least three months have to be planned for the recruitment of the trial participants.
An important factor for a successful implementation of the pilot study is the
development of a suitable evaluation methodology. While early pilot regions followed a
pre/post design, more recent pilot regions chose (quasi-)experimental designs (involving
a control group), which implies higher efforts for recruiting the study participants.
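Such a (quasi-)experimental design is often analysed as a difference-in-differences of pre/post outcome scores. The sketch below is purely illustrative: the function name and the score values are invented and are not taken from any of the pilot regions.

```python
# Illustrative difference-in-differences for a pre/post design with a
# control group; the scores are hypothetical quality-of-life measurements.
def mean(xs):
    return sum(xs) / len(xs)

def diff_in_diff(pre_int, post_int, pre_ctl, post_ctl):
    """Change in the intervention group minus change in the control group."""
    return (mean(post_int) - mean(pre_int)) - (mean(post_ctl) - mean(pre_ctl))

effect = diff_in_diff(
    pre_int=[60, 62, 58], post_int=[68, 70, 66],   # intervention households
    pre_ctl=[61, 59, 60], post_ctl=[62, 60, 61],   # control households
)
print(effect)  # 7.0
```

Under this simple model, a positive effect means the intervention group improved more than the control group between the pre and post measurements, which is what the control group is there to establish.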
In moduLAAr, a mix of standardized quantitative and qualitative instruments was
applied in a pre/post study design [20]. The standardized setup allowed comparable
results to be achieved across all study participants. The evaluation design was also
approved by external experts and submitted to the responsible ethics committee as well
as the Austrian data protection commission. Thus, sufficient resources have to be
allocated for the development of the evaluation methodology.
Within all Austrian pilot regions, various dissemination channels are used to raise
awareness for the project activities and for the topic of AAL in general. In addition to
press releases and articles placed in newspapers, magazines and relevant journals, press
conferences are also held. An important outreach activity within all pilot regions is to
establish one or more showcase flats that serve as a stage for press conferences and
press visits and to demonstrate AAL solutions to a broader audience (e.g. 260 visitors to
the ZentrAAL showcase flat in 2016). Public outreach activities have also proven
beneficial for raising interest in the project among potential study participants.
Dedicated activities to foster the exchange of experiences among Austrian and
international pilot regions (for example a public event organized by ZentrAAL or the
“Tour d'Autriche” of the RegionAAL project team) have proven beneficial. Currently,
an intensified exchange among the Austrian pilot regions as well as a joint (online)
presentation of the pilot regions is planned under the umbrella of the Austrian innovation
platform AAL Austria [21].
In addition to dissemination activities, follow-up activities are also planned within
the pilot region projects. For example, within West-AAL an AAL competence center
will be set up at the University of Innsbruck (Department of Strategic Management,
Marketing and Tourism) to combine AAL-relevant research and the piloting of innovative
AAL solutions. Users and solution providers will receive access to research results and
have the chance to test new solutions and prototypes.
4. Discussion
The experiences of the Austrian AAL pilot regions show that careful planning of all
project phases is crucial for the successful implementation of a pilot region. Major
challenges are limited time, personnel and financial resources, as well as the active
involvement of representatives of all relevant user groups and other stakeholders in all
phases of the project.
Therefore, before starting a pilot region project, it is necessary to ensure that the
project consortium comprises all resources and competences necessary to cover all tasks
along the project lifecycle. The involvement of end-user organizations is crucial, on the
one hand to gain access to their expert know-how on the target group, but even more
importantly to ensure that representatives of all target groups can be actively involved in
all phases of the project. Within the requirements phase, a participatory approach
involving older users and other relevant target groups in the analysis of functional and
non-functional requirements has proven successful. This also helps to prioritize services
according to the actual needs of the users, and thus to focus on the important system
features in the technology integration phase. The integration of (market-)proven
solutions is another strategy to address limited resources while ensuring
the necessary stability of the AAL services and service packages within the pilot study.
Furthermore, the actual implementation of the pilot study is accompanied by
resource-intensive preparation tasks such as the recruitment of study participants, the
development and ethical clearance of the evaluation methodology, the deployment of the
system to the users’ households, as well as the ongoing support of the users and the
maintenance of the installations. All these challenges can only be addressed by
appropriate planning and the allocation of sufficient resources. Careful planning of the
evaluation methodology is necessary to ensure that the project outcomes will provide
evidence on the impact of the technology intervention on the individual and socio-economic
level.
Public outreach activities such as the establishment of showcase flats not only raise
awareness for the project and for the topic of AAL in general, but also address
potential partners for follow-up activities.
In addition to the challenges discussed in this paper, seamless communication
between all involved stakeholders is also necessary for the successful implementation of
AAL pilot regions. For ethical reasons, it is necessary to keep in mind that expectations
are raised when asking people to participate; the possibility of failure must therefore be
communicated clearly.
Additional challenges arise from legal frameworks relevant to the implementation
of pilot regions. A new EU regulation [22] will be in place for all EU member
states in 2018. The collection and processing of data has to be clearly defined, and data
may only be collected for a specified purpose. The use of data for further research is
prohibited if it is not stated in the informed consent. It is therefore necessary to ask for
permission to use the collected data for further studies. A strong bond with the
participants, even after the end of a project, will be necessary to use resources efficiently.
This paper and a dedicated workshop within the eHealth Summit 2017 [23] are
intended only as a starting point for establishing a platform for the exchange of
experiences between the Austrian and international AAL pilot regions, as well as for
discussing similarities and commonalities of pilot projects in the health sector.
Acknowledgements
The pilot regions moduLAAr (grant no. 835863), West-AAL (grant no. 840714),
ZentrAAL (grant no. 846246), RegionAAL (grant no. 850810), WAALTeR (grant no.
6855514) and Smart VitAALity (grant no. 858380) are co-financed by funds of the
benefit programme from the Austrian Federal Ministry for Transport, Innovation and
Technology (BMVIT).
References
[1] Siegel, C., & Dorner, T.E. (2017). Information technologies for active and assisted living – Influences
to the quality of life of an ageing society. International Journal of Medical Informatics.
[2] Website European Innovation Partnership on Active and Healthy Ageing (EIP on AHA) Reference Sites,
https://ec.europa.eu/eip/ageing/reference-sites_en, last accessed on 20 March 2017
[3] Website SmartSenior, http://www.smart-senior.de, last accessed on 20 March 2017
[4] Website gAALaxy, http://gaalaxy.eu, last accessed on 20 March 2017
[5] Website ACTIVAGE, http://www.activageproject.eu, last accessed on 20 March 2017
[6] IntegrAAL – AAL in der Praxis. Ein Leitfaden zu Fragen der Implementierung und Effizienzsteigerung.
Available online:
http://www.wpu.at/integraal/index_htm_files/IntegrAAL-%20Abschlussbericht%202014-12-30.pdf
[7] Selhofer, H., Wieden-Bischof, D., & Hornung-Prähauser, V. (2016). Geschäftsmodelle für AAL-
Lösungen entwickeln: durch systematische Einbeziehung der Anspruchsgruppen (Vol. 2). BoD–Books
on Demand.
[8] TAALXONOMY – Entwicklung einer praktikablen Taxonomie zur effektiven Klassifizierung von
AAL-Produkten und –Dienstleistungen (Guidebook), Available online: http://www.taalxonomy.eu/wp-
content/uploads/Downloads/benefit%20846232-TAALXONOMY-D4.3-Guidebook.pdf
[9] Website moduLAAr, http://www.modulaar.at, last accessed on 20 March 2017
[10] Website West-AAL, http://www.west-aal.at, last accessed on 20 March 2017
[11] Website ZentrAAL – Salzburger Testregion für AAL-Technologien, http://www.zentraal.at, last
accessed on 20 March 2017
[12] Website RegionAAL, http://www.regionaal.at, last accessed on 20 March 2017
[13] Website WAALTeR – Wiener AAL-TestRegion, http://waalter.wien, last accessed on 20 March 2017
[14] Website Smart VitAALity, http://www.smart-vitaality.at, last accessed on 20 March 2017
[15] Schneider, C. & Trukeschitz, B. (2015). "Let users have their say" – Experiences on user involvement
from the AAL Pilot Region Salzburg. 6th International Carers Conference, Gothenburg, Sweden,
04.09–06.09.
[16] Trukeschitz, B., Schneider, C., Krainer, D., Oberzaucher, J., Ring-Dimitriou, S., Eisenberg, S., &
Schneider, U. (2015). Geplantes Evaluierungsdesign von „meinZentrAAL“. ZentrAAL-Forschungsbericht,
Wien (unpublished).
[17] ISO 9241-210:2010. Ergonomics of human-system interaction – Part 210: Human-centred design for
interactive systems. International Organization for Standardization (ISO), Switzerland (2010).
[18] Website iLogs MOCCA eHealth platform, http://www.ilogs.com/en/12mocca, last accessed on 20
March 2017
[19] Website FHEM home automation platform, http://fhem.de, last accessed on 20 March 2017
[20] Siegel, C., Prazak-Aram, B., Kropf, J., Kundi, M., & Dorner, T. (2014). Evaluation of a modular scalable
system for silver-ager located in assisted living homes in Austria–study protocol of the ModuLAAr
ambient assisted living project. BMC public health, 14(1), 736.
[21] Website AAL Austria, http://www.aal.at, last accessed on 20 March 2017
[22] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the
protection of natural persons with regard to the processing of personal data and on the free movement
of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Available online:
http://eur-lex.europa.eu/legal-
content/EN/TXT/?uri=uriserv:OJ.L_.2016.119.01.0001.01.ENG&toc=OJ:L:2016:119:TOC
[23] Website eHealth Summit 2017, http://www.ehealthsummit.at, last accessed on 20 March 2017
196 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-196
1. Introduction
Adverse drug events and patient harm are often caused by errors in the medication process
such as overdosing, drug-drug interactions, or contraindications [1]. Information
systems and clinical decision support systems provide means to avoid such medication
errors through automatic checks and the availability of all relevant information [2,3].
Even though technical means are in principle available, a remaining challenge in many
countries is the transfer of information on prescribed and dispensed medications among
several healthcare providers [4]. Consider the example of our fictional patient, Elisabeth
Brönnimann, a 78-year-old woman who gets prescriptions from her general practitioner,
her orthopaedic specialist, and her pulmonologist. From time to time, she is
1 Contributed equally.
2 Corresponding Author: Kerstin Denecke, Berner Fachhochschule, Quellgasse 21, 2502 Biel,
Switzerland, kerstin.denecke@bfh.ch
M. Tschanz et al. / eMedication Meets eHealth with the eMMA 197
going to the pharmacy directly and buying additional drugs. When she goes to the
hospital for a hip implant, she receives yet another medication. All these prescriptions
and dispensings are documented in different information systems, leading to fragmented
health information [5,6]. To avoid this fragmentation, many European countries try to
establish eHealth strategies aiming at providing an accurate, current medication list for
each patient [7]. With the various eHealth strategies adopted in Germany, Austria and
Switzerland in recent years, digitalization of the health care system is under way. In
Switzerland, a Federal Law on the Electronic Patient Dossier [8] entered into force at
the beginning of 2017; by 2020, all hospitals are expected to have implemented the
Electronic Patient Dossier (EPD). The Swiss eHealth strategy, adopted by the Federal
Council in 2007, aims at making all data relevant for a treatment accessible to the health
care professionals. An interdisciplinary group of professional associations (IPAG)
developed the technical content for a national exchange format for eMedication [7]. An
interim solution is to generate a paper-based medication list which can be digitized via
a QR code and is then updated by the doctor and the pharmacist when necessary [9].
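Such a QR-encoded medication list could, for instance, carry a compact structured payload that an app parses back into a readable plan. The sketch below is a toy illustration: the field names and sample entries are invented and do not reproduce the actual Swiss eMediplan exchange format.

```python
import json

# Hypothetical QR payload: the field names and entries are invented for
# illustration and do not reproduce the actual eMediplan format.
qr_payload = json.dumps({
    "patient": {"name": "Elisabeth Broennimann"},
    "medications": [
        {"name": "Ibuprofen", "dose": "400 mg", "schedule": "1-0-1-0"},
        {"name": "Amlodipine", "dose": "5 mg", "schedule": "1-0-0-0"},
    ],
})

def parse_mediplan(payload: str) -> list[str]:
    """Turn a scanned QR payload into human-readable medication lines."""
    plan = json.loads(payload)
    return [f"{m['name']} {m['dose']} ({m['schedule']})" for m in plan["medications"]]

for line in parse_mediplan(qr_payload):
    print(line)
```

The same parsing step would apply whether the payload comes from a scanned paper plan or from the eHealth platform, which is what makes the QR code a workable interim bridge.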
In particular with regard to a digital implementation including a connection to an eHealth
platform, there are still numerous open questions. In this paper, we examine the question:
how can a digital medication plan be linked to an eHealth platform so that all health care
providers can benefit from digitization, even if not all the prerequisites for eHealth are
currently met? We developed a concept and a prototype of a mobile app that address this
question.
7. For this reason, the pain of the rheumatic ailments returns and the patient has to
return to the hospital.
The main reason for this and similar problems is the inconsistent information flow among
the physicians and the healthcare team. In addition, there is no direct communication
between the different actors in the healthcare system. Instead, the patient has to carry and
provide the relevant information, which is an additional burden for the patient and leads
to information loss, as in the scenario described above.
Figure 1. Current situation in drug prescription: information on drug prescription and dispensing is not
integrated, leading to side effects for the patient.
3. Methods
Our concept and prototype development was realized in three steps: 1) requirements
analysis, 2) concept development and 3) prototype implementation.
This work is embedded in the "Hospital of the Future Live" project, which involves 16
companies and 6 hospitals in order to develop IT solutions for future optimized health
care processes, taking eHealth technologies into account [10]. Accordingly, we applied
a multi-stakeholder principle: requirements were collected from the different actors via
e-mail and interviews and integrated into a coherent concept. More specifically, we asked
physicians for a description of the current situation and for ideas on possible
improvements. In addition, relevant literature was assessed to collect requirements.
In the following, we describe the infrastructure in Switzerland and the available
technology that has been selected for realising our concept.
Within the Swiss eHealth strategy, standards and processes are specified, among others,
for eMedication. More specifically, according to the interprofessional working group
eMedication in Switzerland (IPAG), eDocuments with medication information will
be exchanged between various healthcare providers in the future [8]. In 2016, IPAG
specified a first draft of an exchange format for eMedication [11] consisting of four main
elements: 1) eCurrentMedication, 2) ePrescription, 3) eDispense and 4)
Conversational user interfaces provide text- and language-based interactions between
user and system [13]. The idea originates from the widespread use of text messaging
systems such as WhatsApp or Telegram among many age groups. The aim is thus to
exploit language-based chatbots to reduce the complexity of user interfaces and, in this
way, to simulate communication similar to human conversations. Chatbots have already
been used for supporting patients in diabetes control and management [12]. However,
this type of interface design is relatively new, and experience in the health care domain
is therefore still limited.
4. Results
From the interviews, we collected the requirements and objectives for our concept
and system. The system should:
• reduce double prescriptions and medication misuse,
• reduce contraindications and medication errors,
• make relevant, current information on the medication available at any time for
health professionals and the patient,
• provide health professionals with access to compliance data regarding drug
consumption.
In the following, we introduce the developed concept and describe the prototype
implementation of the mobile application eMMA.
4.1. Concept
Our concept integrates the EPD with the eMediplan and the mobile app eMMA to provide
a solution for all relevant stakeholders, including the patient, and to make the medication
information available to all involved persons. eMMA retrieves the current medication
from the eHealth platform and can generate an eMediplan, which in turn can be imported
by the health professional through scanning of the QR code [9].
Figure 2. Future situation with eMMA and eMediplan: relevant information is stored in the eHealth platform
and can be accessed on demand.
The patient always has
his current medication plan available through this app. Furthermore, the app can
exchange all data with an eHealth platform. In addition to the digitization of the
medication plan, the app also provides essential functions for the safe use of medications;
among others, an active substance register and an interaction check are integrated. The
patient is additionally reminded of the medication intake and can give feedback on his
health status. The app has an intuitive user interface and, additionally, a conversational
user interface [13] for interaction with the system.
Figure 2 shows the scenario when implementing our concept: the eHealth platform
is at the centre of the care process. Whenever medication information is required or new
information is added, the current data is retrieved from the platform or the data on the
platform is updated. In this way, double prescriptions and possible drug interactions are
recognized in time and can be avoided.
Further, the patient can communicate with the application. For example, because of
dizziness, the patient decides not to take his blood-pressure-lowering medication. He uses
the eMMA app with the conversational user interface and mentions that he did not take
the medication because of these symptoms. Such compliance data provided in the
conversation with the app is stored in the eHealth platform, and the physicians can
access it on demand. Furthermore, the patient receives reminders from the eMMA
app to take his medicine. With the access rights management, the patient can decide who
is allowed to access his stored data. In addition, the patient can check his current
medication for interactions with food and can, if necessary, adapt his lifestyle.
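The access rights management mentioned above can be pictured as a patient-maintained whitelist of health professionals. This is a minimal sketch with invented identifiers; it does not reflect the actual EPD authorisation model.

```python
# Minimal sketch of patient-controlled access rights; the identifiers are
# invented and do not reflect the actual EPD authorisation model.
class AccessControl:
    def __init__(self):
        self.granted = set()   # professionals the patient has whitelisted

    def grant(self, professional_id: str):
        self.granted.add(professional_id)

    def revoke(self, professional_id: str):
        self.granted.discard(professional_id)

    def may_read(self, professional_id: str) -> bool:
        return professional_id in self.granted

acl = AccessControl()
acl.grant("gp-meier")                 # patient allows her GP
print(acl.may_read("gp-meier"))       # True
print(acl.may_read("pharmacy-01"))    # False: not yet granted
```

The key design point is that grants and revocations originate from the patient, so the default for any professional not explicitly whitelisted is "no access".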
The functionalities of the eMMA app have been described in the previous section.
We now focus on the data flow (see Figure 3). After a successful login, the app
checks for updated medication information in the electronic patient record (1). If an
updated eMedication exists on the platform, the corresponding data is downloaded,
transmitted as HL7 CDA CH MTPS [14] and stored in the database of the application
using the same structure as the eMediplan. If the patient has a more recent version
of his medication on a paper-based eMediplan, he can scan the QR code and in this way
import the data from the paper. (2) The app collects and (3) stores locally the active agents
for each drug on the medication plan, using an agent database such as the compendium
(https://compendium.ch/). (4) The application checks for interactions for each active
substance by querying a knowledge base and (5) again stores the information locally.
Then, (6) the patient communicates with the app via a conversational user interface [13].
Through this interface, the patient can ask the app specific questions about his current
medication, for example on interactions or on the dosage. (7) To interpret the user input
and produce realistic answers, the application is connected to a semantic server. (8) As
an additional feature, the patient can take a picture of his medication; the application then
queries the identification database (9) to retrieve the name of the drug. This makes it easy
to add drugs that are not yet listed on the eMediplan [9]. (10) The interactions with the
system, in particular the conversations regarding drug consumption, are stored in the
eHealth platform. Later, the doctor can easily access this data and monitor whether the
patient takes his drugs regularly or whether he needs additional guidance.
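The numbered data flow can be sketched as a simple synchronisation routine. All data sources below are in-memory placeholders; the real platform, agent database and interaction knowledge base interfaces are not specified here.

```python
# Sketch of the eMMA data flow (steps 1-5); every data source below is an
# in-memory placeholder invented for illustration.
PLATFORM = {"version": 2, "medications": [{"name": "Ibuprofen"}]}    # (1) eHealth platform
AGENT_DB = {"Ibuprofen": ["ibuprofen"]}                              # (2) agent lookup
KNOWLEDGE_BASE = {"ibuprofen": ["may irritate the stomach lining"]}  # (4) interaction facts

local = {"version": 1, "medications": [], "agents": {}, "interactions": {}}

def sync_medication():
    # (1) check the platform for a newer medication list and download it
    if PLATFORM["version"] > local["version"]:
        local["version"] = PLATFORM["version"]
        local["medications"] = list(PLATFORM["medications"])
    for drug in local["medications"]:
        # (2)/(3) collect and store locally the active agents for each drug
        agents = AGENT_DB.get(drug["name"], [])
        local["agents"][drug["name"]] = agents
        # (4)/(5) query interactions per active substance and cache them
        for agent in agents:
            local["interactions"][agent] = KNOWLEDGE_BASE.get(agent, [])

sync_medication()
print(local["version"])  # 2
```

Caching the agents and interactions locally is what lets the conversational interface (steps 6 and 7) answer medication questions even between synchronisations with the platform.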
The drug interaction check does not aim at providing a patient with all medication
interactions for his current medication. This would overwhelm the patient, who often
lacks the background knowledge for a correct interpretation and judgement of the actual
risks. Instead, a subset of interactions is extracted, namely information on food
interactions of the current medication, which is then provided to the patient. Checking
for drug-drug interactions remains in the hands of the prescribing health professional.
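Restricting the check to food interactions then amounts to a simple filter over the interaction records. The records below are invented for illustration; a real knowledge base would be far richer and curated by professionals.

```python
# Invented interaction records for illustration only; a real knowledge
# base would be curated by health professionals.
INTERACTIONS = [
    {"agent": "simvastatin", "partner": "grapefruit juice", "type": "food"},
    {"agent": "simvastatin", "partner": "amiodarone", "type": "drug"},
    {"agent": "warfarin", "partner": "leafy greens (vitamin K)", "type": "food"},
]

def food_interactions(current_agents):
    """Return only food interactions for the patient's current agents;
    drug-drug interactions stay with the prescribing professional."""
    return [i for i in INTERACTIONS
            if i["type"] == "food" and i["agent"] in current_agents]

hits = food_interactions({"simvastatin"})
print([h["partner"] for h in hits])  # ['grapefruit juice']
```

The drug-drug record for the same agent is deliberately filtered out, mirroring the paper's split of responsibility between patient and prescriber.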
Figure 4 shows an example interaction of the patient with the conversational UI. At
the bottom of the screen, the different answer options are shown. This gives the
patient a fast and easy way to communicate with the app without the need to enter
words or even sentences.
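Such predefined answer options can be modelled as quick replies attached to each prompt, so no free-text entry is required. The prompt and options below are invented and do not reproduce the actual eMMA dialogues.

```python
# Minimal quick-reply dialogue turn; the prompt and options are invented
# and do not reproduce the actual eMMA dialogues.
def ask(prompt: str, options: list[str], choice: int) -> str:
    """Present a prompt with predefined answer options and return the
    option the patient tapped (no free-text entry required)."""
    if not 0 <= choice < len(options):
        raise ValueError("choice out of range")
    return options[choice]

answer = ask(
    "Did you take your blood pressure medication this morning?",
    ["Yes", "No", "Not sure"],
    choice=1,  # the patient taps the second option
)
print(answer)  # No
```

Because every turn constrains the patient to a small set of taps, the answers arrive already structured, which simplifies storing them as compliance data.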
Figure 3. Concept of the mobile application: data exchange with external services.
5. Discussion
Our concept is based on the assumption that the EPD will be implemented in Switzerland
in the near future. Changes in the health care system due to the introduction of the EPD
in all hospitals will only become visible in a few years [8]. As long as the EPD is not yet
realized, the eMediplan will be used as the main data source for the app. How far the
distribution of the eMediplan could be supported by our solution still needs to be
considered. In any case, outpatient physicians have to be motivated to use the EPD
by demonstrating its benefits. The eMediplan in combination with our concept could be
a first motivating step. To the best of our knowledge, there is currently no mobile
application available that enables a process as described in this work. It provides a means
to make the current medication data of a patient available.
Even though the law introducing the EPD [8] currently does not make it
obligatory to document a patient’s self-medication, it is clear that such information can
be important to avoid contraindications. By providing additional functionalities such as
reminders or checks for food interactions, an additional benefit for the patient is created.
Through the conversational UI, a patient can be motivated to take his medication
regularly, relevant information can be provided, and compliance data can be collected
[13]. This gives him more responsibility and leads to patient empowerment.
Preliminary tests with the prototype showed that a CUI could provide a good means for
interaction in this context. However, a comprehensive usability study still needs to verify
this. The responsibility for the complete drug interaction check will remain in the hands
of the physicians or pharmacists, because the patient does not have the competence to
fully understand and judge all possible interactions. Further functionalities of the
application have to be evaluated and integrated into the final version. Also, legal aspects
need to be taken into account: the stored data about the compliance of the patient must
not be used to harm him. For example, the cost of a rehospitalisation cannot be shifted to
the patient merely because he did not take his medication regularly.
Another important feature of our app will be improved medication adherence achieved
through electronic reminders. Recent studies have shown ambivalent results with respect
to an improvement of adherence through electronic reminders: while some studies
demonstrated an improvement of medication safety with reminders [15], other studies
could not verify an improvement in adherence [16]. Therefore, we decided to design our
app as more than a reminder. Through a conversation with the application, a bond of
trust will be created. To enable this, all interactions will be designed in a way that the
patient feels safe about the information he receives from the application. Only in this way
can we ensure that he is willing to take all the medications the app recommends and that
the adherence information he supplies is correct. Future studies with our app will show
whether this concept is realistic and accepted by patients.
References
[1] M.L. Lampert, S. Kraehenbuehl, B.L. Hug: Drug-related problems: evaluation of a classification system
in the daily practice of a Swiss University Hospital. Pharm World Sci. 2008 Dec;30(6):768-76
[2] Committee on Patient Safety and Health Information Technology; Institute of Medicine.: Health IT and
Patient Safety: Building Safer Systems for Better Care, Washington (DC): National Academies Press
(US); 2011 Nov.
[3] E. Ammenwerth et al.: Memorandum on the use of information technology to improve medication safety.
Methods Inf Med. 2014;53(5):336-43
[4] A. Geissbuhler: Lessons learned implementing a regional health information exchange in Geneva as a
pilot for the Swiss national eHealth strategy. Int J Med Inform. 2013 May;82(5):e118-24
[5] L.M. Hellström, A. Bondesson, P. Höglund, T. Eriksson: Errors in medication history at hospital
admission: prevalence and predicting factors. BMC Clin Pharmacol. 2012 Apr 3;12:9
[6] H. M. Seidling, J. Kaltschmidt, E. Ammenwerth, W. E. Haefeli, Medication safety through e-health
technology: can we close the gaps? Br J Clin Pharmacol. 2013 Sep;76 Suppl 1:i-iv. doi:
10.1111/bcp.12217
[7] A.W. Gall, A.F. Ay, R. Sojer, S. Spahni, E. Ammenwerth: The national e-medication approaches in
Germany, Switzerland and Austria: A structured comparison. Int J Med Inform. 2016 Sep; 93:14-25
[8] eHealthSuisse: Electronic Patient Dossier. http://www.e-health-
suisse.ch/umsetzung/00135/00218/00256/index.html (last access: 23.01.2017)
[9] HCI Solution AG. Konzeptskizze „eMediplan“. Thurgau: HCI Solution AG, 2014
[10] T. Bürkle, K. Denecke, M. Lehmann, J. Holm: Spital der Zukunft Live: Transformation in das
Gesundheitswesen 2.0. FOCUS tcbe.ch, ICT Cluster Bern, 2016; 29, p. 10-12,
https://www.ti.bfh.ch/fileadmin/data/medizininformatik/medien/I4MI/Spital_der_Zukunft_Live_Trans
formation_in_das_Gesundheitswesen_2.0.pdf
[11] eHealthSuisse: eMedication report. http://www.e-health-
suisse.ch/umsetzung/00252/index.html?lang=en (last access: 23.02.2017)
[12] A.S. Lokman, J.M. Zain. Designing a Chatbot for diabetic patients. In: International Conference on
Software Engineering & Computer Systems (ICSECS'09), 19-21 October 2009 , Swiss Garden Kuantan
Pahang
[13] V.H. Joyce Chai. A Conversational Interface for Online Shopping. New York City USA: IBM t. J.
Watson Research Center, 2001
[14] C.S. Stéphane Spahni. Format d’échange Plan de traitement médicamenteux partagé V0.63. Bern:
eHealth Suisse, 2016.
[15] M. Vervloet, A.J. Linn, J.C. van Weert et al.: The effectiveness of interventions using electronic
reminders to improve adherence to chronic medication: a systematic review of the literature. J Am Med
Inform Assoc. 2012 Sep-Oct;19(5):696-704
[16] A.O. Talisuna, A. Oburu, S. Githinji et al.: Efficacy of text-message reminders on paediatric malaria
treatment adherence and their post-treatment return to health facilities in Kenya: a randomized controlled
trial. Malar J. 2017 Jan 25;16(1):46
204 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-204
Abstract. Background: Data from the health care domain is often reused to create
and parameterize simulation models, for example to support life science businesses in
the evaluation of new products. Data quality assessments play an important part: they
help model users interpret simulation results by showing deficiencies in the raw data
used in model building, and they offer model builders a comparison of data quality
amongst the used data assets. Objectives: Assess data quality in raw data
prior to creating simulation models and prepare results for model users. Methods:
Using a literature review and the documentation of previously created models, we
searched for data quality criteria. For eligible criteria we formulated questions and
viable answers to be used in a questionnaire to assess data quality of a data asset.
Results: We developed a web tool to evaluate data assets using a generic data model.
Percentage results are visualized using a radar chart. Conclusion: Data quality
assessment with questionnaires offers model builders a framework to critically
analyse raw data and to detect deficiencies early in the modelling process. The
summarized results can help model users to better interpret simulation results.
1. Introduction
Corresponding Author: Christopher Wendl, Center for Medical Statistics, Informatics and Intelligent
Systems, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria, E-Mail:
christopher.wendl@meduniwien.ac.at.
C. Wendl et al. / A Web-Based Tool to Evaluate Data Quality of Reused Health Data Assets 205
Wand and Wang developed an ontological base to assess data quality [5]. They listed
26 data quality dimensions and categorized them into internal and external views. In [6]
these dimensions are further categorized hierarchically into intrinsic, contextual,
representational and accessibility data quality dimensions. In [7] data assets are evaluated
with respect to their reusability by consulting data quality experts and comparing these
assets to an asset with very high quality. Results are presented using radar charts.
We developed a web-based questionnaire that lets model builders evaluate data quality
criteria without the need to consult any data quality experts and allows model users to
get a quick overview of the underlying data used in the modelling process. This work
presents the preliminary results of an ongoing bachelor thesis.
2. Methods
In the first step, criteria to categorize data assets and data quality were collected.
Precedence was given to criteria with a special focus on the medical domain. Besides a
literature review, we searched for Austrian data assets suitable for the model creation and
parameterization process by looking at previous models and simulations of the Austrian
health care landscape. The identified criteria were textually evaluated with respect to their
eligibility for reused health data. For each eligible criterion, a question with permitted
answers was formulated.
A generic schema based on the Entity-Attribute-Value (EAV) design [8] was used
to persist the criteria categories, the criteria with their corresponding questions, the
permitted answers, as well as the results of the data asset evaluations, in a MariaDB [9]
database. Figure 1 shows the four tables of the EAV design. The EAV design allows
adding new criteria and categories without changes to the database schema.
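The four-table EAV layout can be sketched as follows. This is a minimal illustration, not the tool's actual schema: sqlite3 stands in for MariaDB, and all table and column names are assumptions derived from the entity/attribute/value roles described in Figure 1.

```python
import sqlite3

# Minimal EAV sketch: category, criterion (attribute), evaluation
# (entity), evaluation_answer (value). Names are illustrative only.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE category (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE criterion (                  -- the EAV attribute
    id INTEGER PRIMARY KEY,
    category_id INTEGER REFERENCES category(id),
    name TEXT, question TEXT);
CREATE TABLE evaluation (                 -- the EAV entity (one data asset)
    id INTEGER PRIMARY KEY, asset_name TEXT);
CREATE TABLE evaluation_answer (          -- the EAV value
    id INTEGER PRIMARY KEY,
    evaluation_id INTEGER REFERENCES evaluation(id),
    criterion_id INTEGER REFERENCES criterion(id),
    answer_percent REAL, answer_text TEXT);
""")
# A new criterion is just a row, not a schema change:
con.execute("INSERT INTO category (name) VALUES ('Intrinsic data quality')")
con.execute("INSERT INTO criterion (category_id, name, question) "
            "VALUES (1, 'Accuracy', 'Are the data values free of error?')")
```

The benefit of the EAV design shows in the last two statements: extending the questionnaire never touches the DDL.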
For the web tool, the PHP framework Laravel [10] was used. The model-view-
controller (MVC) design pattern was applied, and separate forms to enter new categories
and corresponding criteria with questions and answers were implemented. The evaluation
of a data asset was performed using a questionnaire displaying all the questions from the
criteria table. The result of a data quality assessment was visualized using the JavaScript
framework D3 [11] and its radar chart plugin.
Figure 1. Entity–relationship diagram showing the EAV design used in our evaluation tool. A criterion (i.e.
the attribute) belongs to one category. In an evaluation (i.e. the entity described) many evaluation answers
(i.e. the values) are assessed. Each evaluation answer corresponds to one criterion.
3. Results
The 15 quality criteria from Wang and Strong [6] were selected as the basis for the
quality criteria. The main reasons for selecting the criteria of Wang and Strong were the
existing interpretation for the medical domain published in [4], the existing definitions
for the criteria, and the many citations in the literature. We found some very similar
criteria [12, 13], but due to missing definitions, particularly in a clinical context, Wang
and Strong’s criteria were selected. The 15 criteria were reduced to 10 criteria by
combining and omitting as follows. The reputation criterion was omitted since reputation
is defined by the source of the data [6], which does not influence the quality of a data
asset, especially in a clinical context. Furthermore, by considering the definitions of the
criteria and looking for similarities, we combined relevance and value of the data (the
definitions are very similar, and data with higher value would be more relevant), amount
of data and completeness (in a clinical context a complete data asset needs to have the
proper amount of data), interpretability and ease of understanding (those definitions are
practically the same), and representational consistency and concise representation
(consistent representation in a data asset means that the data is well formatted, so those
two criteria definitions are very similar). The hierarchical categories (i.e. intrinsic data
quality, contextual data quality, representational data quality, accessibility data quality)
were adopted directly.
By analysing previous simulations we found that in most cases the raw data was
collected for other purposes (e.g. routine care, reimbursement purposes, legal purposes,
health surveys, studies, registries, etc.) and was reused in the model creation and
parameterization process. We found that the reason why information is documented
influences what is documented and how, and has to be considered during the model
creation and interpretation phase (e.g. when using claims data, diagnoses not relevant for
reimbursement are not documented). Further, the different data assets are available either
in aggregated form (e.g. literature, health surveys, etc.), as stratified aggregations (e.g.
Statistics Austria offering population information stratified by age group, place of
residence, year, etc.), as individual records per activity (e.g. reimbursement data with one
record per hospital stay), or as data assets with direct (e.g. electronic health records) or
indirect (e.g. pseudonymous data) person identifiers. We added additional criteria to
assess the initial purpose the reused data was collected for and the granularity of the raw
data. However, those two criteria do not reflect the quality of a data asset and only offer
additional information to model builders.
To help model builders understand the purpose of a criterion, one or more questions
were formulated for each criterion, and a set of permitted answers was attached to each
question. For each answer we defined either a percentage corresponding to the degree to
which good or bad data quality is reflected, or a textual value. We used categories to
group criteria and reduce the complexity of the result visualisation. As shown in Table 1,
the categories are directly related to those in [6]. The contextual data quality is a measure
that describes the quality of the asset in its entirety; a low contextual data quality could
be due to an out-of-date data set or many missing values. The intrinsic data quality
category lets model users know how good the specific entries of a data asset are; a low
intrinsic data quality could indicate many typos in a dataset. The representational data
quality shows consistency in the representation of the data (e.g. entries following a
standard) and indicates to the model user how much pre-processing and cleansing of the
raw data was applied or necessary. The last two categories are the granularity and the
source of the data; they tell the model user in what granularity the data was available to
the model builder and indicate whether fine-grained parameters could be implemented.
The question on granularity has answers ranging from coarse-grained aggregated data
assets to fine-grained individual-level data assets. Since the source of data is only for
informational purposes, there will only be a text field where users can enter a source.

Table 1. Overview of data quality criteria selected for the questionnaire with corresponding category and
number of questions per criterion

Category                         Criterion                       Number of questions
Intrinsic data quality           Accuracy                        3
                                 Believability                   2
                                 Objectivity                     2
Contextual data quality          Relevance                       3
                                 Timeliness                      3
                                 Completeness                    3
Representational data quality    Interpretability                2
                                 Representational consistency    2
Quality of accessibility         Accessibility                   2
                                 Access security                 1
Other relevant criteria          Data source                     1
                                 Granularity                     1
The definitions in [4] were used to guide us in the formulation of our questions. For
example accuracy is defined as “the extent to which data are correct, reliable, and free of
error, or in a clinical context, data values should represent the true state of a patient within
the limitations of the measurement methods” [4]. Using this information, the example
question in Figure 2 was formulated.
During the visualization we distinguished between percentage and textual answers.
For percentage answers, the arithmetic mean of all answers from the criteria in one
category was calculated and displayed as a spoke in the radar chart. For each category a
spoke was displayed (see Figure 3 showing 5 spokes). Textual answers were displayed
below the radar chart on the result page. Each evaluation results in a distinct URI and
can be made available together with the model. This allows model builders to evaluate
data assets by answering the questionnaire and to quickly get an overview of the quality
of evaluated assets by considering the radar chart. It also allows them to compare the
quality of evaluated data assets amongst each other.
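The spoke computation described above (arithmetic mean of the percentage answers per category; textual answers listed separately) can be sketched as follows. The answer data are invented for illustration; the tool's internal representation is not specified in the text.

```python
from statistics import mean

# Sketch of the result aggregation: percentage answers are averaged per
# category to give one radar-chart spoke each; textual answers are kept
# aside for display below the chart. Sample answers are invented.
answers = [
    ("Intrinsic data quality", 80.0), ("Intrinsic data quality", 60.0),
    ("Contextual data quality", 40.0),
    ("Other relevant criteria", "claims data"),   # textual answer
]

def spokes(answers):
    numeric, textual = {}, []
    for category, value in answers:
        if isinstance(value, (int, float)):
            numeric.setdefault(category, []).append(value)
        else:
            textual.append((category, value))
    # one spoke value (mean percentage) per category
    return {c: mean(v) for c, v in numeric.items()}, textual

radar, below_chart = spokes(answers)
# radar == {'Intrinsic data quality': 70.0, 'Contextual data quality': 40.0}
```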
4. Discussion
Figure 3. Sample radar chart showing the questionnaire result for a specific data asset.
In the interoperable asset registry for asset discovery and assessment presented in
[7], quality metrics were categorized into eight domains (development process, maturity
level, trustworthiness, support and skills, sustainability, semantic interoperability, cost
and effort, maintenance) and were evaluated by 20 experts. Similar to their approach, we
use a radar chart to visualize the results of a questionnaire. Our assessment focuses on
data quality criteria of the underlying raw data used in the model creation process, in
contrast to their criteria, which focus on assessing the suitability for reuse.
The Health Data Navigator [14] is a tool to assess the performance and evaluate the
quality of data sources used for the comparative evaluation of health systems. Data
sources are split into aggregated data or individual-level data and range from health
status data to efficiency, cost and expenditure data. Quality criteria are split into entry
errors, breaks, and consistency of terminology, and are documented in textual form. The
other criteria (i.e. coverage, linkage, access, strengths and weaknesses) are also
documented as free text. We used the quality criteria from Wang and Strong [6] in our
assessment since they cover more than these three aspects.
The EMIF catalogue [15] was developed as part of an IMI European project and
allows researchers to find databases which fulfil their particular research study
requirements. Using 12 categories and 208 questions, general information (contact
information to access the data sources, database populations), database characteristics
(start date, etc.), linkage data set descriptions, examples of covered data elements,
publications and comments, amongst others, can be assessed. The EMIF catalogue
offers a very detailed description of a data asset and can be a valuable source when
selecting raw data for creating and parameterizing simulation models.
Acknowledgement
References
[1] Markiewicz K, van Til JA, IJzerman MJ. Medical devices early assessment methods: systematic
literature review. International journal of technology assessment in health care. 2014;30(2):137-46.
[2] dwh GmbH. imProve - Managing the Health Product Development. Available from:
http://www.dwh.at/de/expertise/projekte/improve/ (accessed March 2017).
[3] Safran C, Bloomrosen M, Hammond WE, Labkoff S, Markel-Fox S, Tang PC, et al. Toward a National
Framework for the Secondary Use of Health Data: An American Medical Informatics Association White
Paper. Journal of the American Medical Informatics Association. 2007;14(1):1-9.
[4] Kahn MG, Raebel MA, Glanz JM, Riedlinger K, Steiner JF. A pragmatic framework for single-site and
multisite data quality assessment in electronic health record-based clinical research. Medical care.
2012;50.
[5] Wand Y, Wang RY. Anchoring data quality dimensions in ontological foundations. Commun ACM.
1996;39(11):86-95.
[6] Wang RY, Strong DM. Beyond accuracy: what data quality means to data consumers. J Manage Inf Syst.
1996;12(4):5-33.
[7] Moreno-Conde A, Thienpont G, Lamote I, Coorevits P, Parra C, Kalra D. European Interoperability
Assets Register and Quality Framework Implementation. Studies in health technology and informatics.
2016;228:690-4.
[8] Friedman C, Hripcsak G, Johnson SB, Cimino JJ, Clayton PD, editors. A generalized relational schema
for an integrated clinical patient database. Proceedings of the Annual Symposium on Computer
Application in Medical Care; 1990: American Medical Informatics Association.
Abstract. The huge amount of data stored in healthcare databases allows a wide
range of possibilities for data analysis. In this article, we present a novel multilevel
analysis methodology to generate and analyze sequential healthcare treatment
events. The event sequences can be generated automatically from the source data
on different abstraction levels, and so they describe the treatment of patients at
different levels of detail. To present the applicability of the proposed methodology,
we also introduce a short case study, in which some results arising from the analysis
of a group of patients suffering from colorectal cancer are presented.
1. Background
Sequential pattern mining is a widespread data mining technique, which aims to unfold
statistically relevant repeating sets of data from sequential databases [1]. Since in the
healthcare sector every action, like treatments, diagnoses, or prescriptions of
medications, occurs from time to time, the available healthcare databases contain a
plethora of event sequences. This gives a fundamental basis for various research works
in the field of sequential pattern mining in healthcare to find potentially new medical
knowledge or even to predict the future occurrence of certain events. Accordingly, in
medical research reports, we can find numerous application examples of sequential
pattern mining. For example, sequential pattern mining has been applied in hepatitis type
detection [2], in the examination of chemotherapy and targeted biologics sequences in
metastatic colorectal cancer [3], in the analysis of influencing factors of glioblastoma
survival [4], and in the prediction of the next prescribed medication [5]. The discovered
sequences can be compared with the care processes of medical guidelines [6] or with the
practice of other hospitals [7].
All these research works aim to discover sequentially repeated data, but the applied
methods vary significantly. Some of them analyze similar events [8], while others are
based on the analysis of different types of events (e.g. symptoms, test results, treatments)
[9,10,11]. The main questions of these methods are: (1) what kinds of events are presented
in the event sequences, (2) how is the duration of the considered events represented, and
(3) how are parallel events (e.g. treatments) handled.
Corresponding Author: Ágnes Vathy-Fogarassy, Department of Computer Science and Systems
Technology, University of Pannonia, 2. Egyetem Str., 8200 Veszprém, Hungary, E-Mail: vathy@dcs.uni-
pannon.hu.
212 K. Tóth et al. / Frequent Treatment Sequence Mining from Medical Databases
Event sequences are typically extracted from medical databases by writing complex
database queries. A different way to unfold and analyze treatment event sequences is to
use process mining techniques. Process mining algorithms (e.g. alpha algorithm or
HeuristicMiner) aim to create flow diagrams of processes automatically from log files or
databases. As medical treatments can be considered as the events of a process, these
algorithms may also provide effective assistance to discover sequences from medical
databases. However, if these algorithms are applied, special attention should be paid to
their shortcomings and limitations (e.g. the problem of short loops).
The goal of our research is to create a hierarchical data analysis method to generate
and discover health care sequences and sequential patterns from medical databases. In
our work, only healthcare treatments are considered as events and event sequences are
generated from relational databases using database queries. The main contribution of this
research work is that the event sequences can be represented at different levels of detail.
This is achieved by a hierarchical code system for treatments and a discipline of
aggregations, which reflect professional medical practice. As a result, healthcare events
can be analyzed on a very detailed low level, where the diversity is extreme, or they can
also be summarized at higher levels, resulting in a considerable reduction in the diversity of
patient pathways. The highest level of the abstraction emphasizes only the main
characteristics of the medical treatments and it can be generated from the lowest level
automatically. The proposed method facilitates the comparison of the specificity of
treatments of healthcare institutes, and the exploration of frequent or rare treatment
sequences for given diseases.
The remaining part of this work is organized as follows. Section 2 introduces the
suggested data analysis methodology from theoretical aspects and Section 3 presents a
short case study of a possible application. Finally, Section 4 concludes the paper.
2. Methods
The first step of the generation of health care sequences is to determine the cohort of
patients to be examined. After the selection of the population, we can generate a detailed
event sequence for every patient separately, including all the relevant events of the
considered health care. Then, the event sequences can be automatically transformed into
a higher, less detailed level. Higher levels contain less specific information about the
treatments, thereby highlighting their main characteristics. The first two levels of the
event sequences can be beneficial in clinical practice, while the second two levels are
useful in the statistical analysis of health care patterns.
In the next sections, we introduce these different abstraction levels through a real-
life example, which represents the treatment sequence of a patient with colorectal cancer.
In Table 1, the detailed event sequence of patient p1 can be seen. The events are
separated by a separation character (||), and each of them is built up from two parts: the
timestamp and the code of the treatment, separated by a colon.
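The sequence format just described (events separated by "||", each event "timestamp:code") can be parsed with a few lines. The function name is an assumption; the sample string mirrors the zeroth-day events of patient p1 from the example below.

```python
# Sketch of parsing the event-sequence string format: events separated
# by "||", each event "timestamp:code".
def parse_sequence(seq):
    events = []
    for item in seq.split("||"):
        if not item:          # skip the empty piece after a trailing "||"
            continue
        day, code = item.split(":", 1)
        events.append((int(day), code))
    return events

parse_sequence("0:29000||0:14500||0:16410||16:54551||")
# → [(0, '29000'), (0, '14500'), (0, '16410'), (16, '54551')]
```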
On the zeroth day, the patient had three kinds of treatments: a laboratory diagnostic
procedure (ICPM code 29000: histology) and two clinical diagnostic procedures (14500:
Biopsy Intestini Crassi per Colonoscopies and 16410: Colonoscopy). These events are
connected to the first detection of the colorectal cancer of the patient. 16 days after the
detection, the patient underwent an operation, which was depicted in the code system by
three surgical ICPM codes (54551: Haemicolectomia dextra, 54688: Adhaesiolysis
interintestinalis and 55431: Resectio omenti maioris). The histological samples from this
operation were evaluated on the 24th day; this activity appears in the coding history of
the patient with the code 29000 (histology) and the code 29050 (immunohistochemistry).
From the 78th until the 185th day, the patient underwent eight chemotherapies. Finally,
the event sequence ends with the death of the patient on the 329th day. As death does not
have an ICPM code, we extended the list of ICPM codes with X0000 to represent the
death of patients.
sequence. The result of this process for patient p1 can be seen in Table 2. On the zeroth
day of patient p1, we can see a histology, represented by the code P000, and only one
diagnostic procedure, coded as DC00. We have to mention that in this case the original
event sequence contained two clinical diagnostic procedures recorded with different
ICPM codes, but both were mapped into DC00. The three surgical procedures on the 16th
day are mapped into two groups: colon surgery (MLB0) and abdominal surgery (MLH0).
The next two histologies in the original sequence get the same code, thus they appear as
one event in the new sequence (P000). Afterwards, the chemotherapies appear with the
code K000, and finally, the code of death is displayed.
With the help of the typified code system, the variability of the event sequences
decreases. For example, in our research work, we selected 1436 different ICPM codes
for the examination of colorectal cancer, and after the code mapping procedure, these
codes resulted in 11 different 4-digit codes. Of course, the reduction of the number of
codes decreases the variability of the event sequences as well.
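The mapping step can be sketched as a lookup table plus same-day deduplication. The table entries are taken from the worked example of patient p1; the assignment of codes 54688 and 55431 to the abdominal-surgery group MLH0 is an assumption consistent with that example, and the function names are invented.

```python
# Sketch of the code mapping: detailed ICPM codes collapse into 4-digit
# typified codes, and same-day events with the same typified code appear
# only once. Mapping entries are illustrative, taken from patient p1.
ICPM_TO_TYPIFIED = {
    "29000": "P000", "29050": "P000",   # histology / immunohistochemistry
    "14500": "DC00", "16410": "DC00",   # clinical diagnostic procedures
    "54551": "MLB0",                    # colon surgery
    "54688": "MLH0", "55431": "MLH0",   # abdominal surgery (assumed group)
    "X0000": "X000",                    # death
}

def typify(events):
    seen, out = set(), []
    for day, code in events:
        mapped = (day, ICPM_TO_TYPIFIED[code])
        if mapped not in seen:          # duplicates on one day collapse
            seen.add(mapped)
            out.append(mapped)
    return out

typify([(0, "29000"), (0, "14500"), (0, "16410")])
# → [(0, 'P000'), (0, 'DC00')]
```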
The typified event sequences give a proper basis for executing the aggregation
operations, which we present in the next section.
Additionally, we can enhance the representation of the aggregated events with a time
interval, which shows the number of days between the first and the last aggregated
event (Table 4).
Table 5. Example for the aggregation of similar events with the same timestamp
Patient Event sequence
p1 0:P000||0:DC00||16:MLB0||24:P000||78:K000:107||329:X000||
Patients can undergo more than one similar event on one day during medical care.
Regularly, these treatments are connected to the same event, thus it is enough if only the
most relevant treatment appears in the event sequence. To define which treatment should
be the one that appears in the sequence, hierarchy rules have to be defined. If there is
more than one event with the same timestamp in the typified event sequence of a patient,
only the event with the highest priority will appear in the aggregated event sequence.
For example, if we define a rule for colon surgeries and abdominal surgeries such that
colon surgery (MLB0) has a higher priority than abdominal surgery (MLH0), then in the
case of patient p1 only the colon surgery will appear in the sequence, as can be seen in
Table 5.
With the help of these two aggregation methods, the number of different event
sequences is effectively reduced without the loss of valuable information.
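The same-timestamp rule can be sketched as follows. Note that the hierarchy applies only among similar events (here, surgeries): unrelated same-day events such as a histology and a diagnostic procedure are both kept, matching the p1 example in Table 5. The priority list and function name are assumptions.

```python
# Sketch: among similar same-day events (surgeries), only the one with
# the highest priority survives; unrelated same-day events are kept.
SURGERY_PRIORITY = ["MLB0", "MLH0"]   # colon surgery outranks abdominal

def aggregate_same_day(events):
    out = []
    for day, code in events:
        if code in SURGERY_PRIORITY:
            for i, (d, c) in enumerate(out):
                if d == day and c in SURGERY_PRIORITY:
                    # keep whichever surgery ranks higher on this day
                    if SURGERY_PRIORITY.index(code) < SURGERY_PRIORITY.index(c):
                        out[i] = (day, code)
                    break
            else:
                out.append((day, code))
        else:
            out.append((day, code))
    return out

aggregate_same_day([(0, "P000"), (0, "DC00"), (16, "MLH0"), (16, "MLB0")])
# → [(0, 'P000'), (0, 'DC00'), (16, 'MLB0')]
```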
3. Results
To demonstrate the applicability of the previously introduced method, we analyzed the
healthcare data of oncology patients in Hungary. As the financial healthcare database in
Hungary includes data from all healthcare providers, our study was performed on this
database. This database contains information about the diagnoses and the medical
treatments of patients related to the service providers. The diagnoses are identified by
International Classification of Diseases (ICD) codes, and diagnostic procedures and
treatments are recorded with the ICPM code system applied in Hungary.
The cohort of patients was selected as follows. Those cancer patients were selected into
the study who had been diagnosed with and treated for primary colorectal carcinoma
between 1st January 2009 and 31st December 2014. To identify colorectal carcinoma, we
used the ICD blocks C18, C19 and C20. These blocks were separated into two
subgroups: rectum carcinoma (C20H0) and colon carcinoma (C18-C19). Additionally,
the inclusion criteria for the study specified that the colorectal cancer had to be verified
by histology. The date of verification was set as the origin of the examination (0th day).
Patients with recurrent tumors were excluded from the analysis. According to these
criteria, 28817 patients (16575 men and 12242 women) with an average age of
70 ± 11 years were enrolled in the analysis.
For the analysis, we developed a user-friendly application in the Java programming
language. This application provides a wide range of graphical possibilities to generate
and analyze the treatment sequences and patterns on different abstraction levels. With
the help of this application, the event sequences for each enrolled patient were generated
on all four levels of detail automatically. Consecutive radio- and chemotherapies were
aggregated with the method described in Section 2.1.3, and for events with the same
timestamp the following hierarchy was defined (Equation 1):
(1)
In most cases, the event sequences were examined from the origin until a maximum
of 365 days. In our analyses, an event sequence was closed in the case of a 180-day-long
event-free period. We considered this 180-day-long period as the stabilization of the
condition of the patient, and handled the events after this period as if they were not
involved in the original treatment sequence but happened because of a change in the
condition of the patient.
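The two closing rules (365-day maximum, cut at a 180-day event-free gap) can be sketched as follows. Measuring the gap from the previous kept event is an interpretation of the text, and the function name is invented.

```python
# Sketch of the sequence-closing rules: keep events up to day 365 and
# cut at the first gap of at least 180 event-free days; later events
# count as a new episode. Gap measured from the previous kept event.
def close_sequence(events, max_day=365, gap=180):
    kept, prev_day = [], 0
    for day, code in sorted(events):
        if day > max_day or day - prev_day >= gap:
            break
        kept.append((day, code))
        prev_day = day
    return kept

close_sequence([(0, "P000"), (16, "MLB0"), (300, "K000")])
# → [(0, 'P000'), (16, 'MLB0')]   (the 284-day gap closes the sequence)
```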
As the financial healthcare database contains information about the healthcare
providers, like the code of the institute, the postal code, and the medical provider code,
we were able to perform some geographical and institutional analyses as well. The
patients and their event sequences were assigned to institutes based on the place of the
record of the pathological diagnosis, and we considered these institutes as the primary
healthcare facilities of the patients.
We analyzed the distribution of the healthcare patterns by grouping them by
institute and pattern. For example, Table 7 shows the distribution of the ten most
frequent patterns for the top ten supplier institutes in the case of colon carcinoma. Rows
represent the institutes and columns the treatment patterns, truncated to the first three
treatment events. As can be seen, colon surgery (B) and chemotherapy after colon
surgery (BK) were the most frequently applied therapies in all institutes, but the
frequencies of these patterns differ.
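A per-institute pattern distribution of this kind can be computed with a simple counting sketch. The records below are invented toy data, not values from the study:

```python
from collections import Counter

# Hypothetical (institute, truncated pattern) records, one per patient
records = [
    (1, "B"), (1, "B"), (1, "BK"), (1, "K"),
    (2, "BK"), (2, "BK"), (2, "B"), (2, "K"),
]

def pattern_distribution(records):
    """Percentage share of each treatment pattern within each institute."""
    by_institute = {}
    for institute, pattern in records:
        by_institute.setdefault(institute, Counter())[pattern] += 1
    return {
        inst: {p: 100 * n / sum(c.values()) for p, n in c.items()}
        for inst, c in by_institute.items()
    }

dist = pattern_distribution(records)
print(dist[1]["B"])  # → 50.0 (2 of 4 patients at institute 1)
```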
To visualize the frequencies, we generated a box-plot diagram of the patterns
(Figure 2). As can be seen, the frequency of occurrence of the event sequences deviates
considerably between institutes. For example, the rate of event sequences containing
only chemotherapies varies between 2.53% and 65.06%. During our analysis we
established that, in general, the number of patients treated in an institute correlates
only very weakly with the frequency of the different event sequences. We suspect
that the frequency of event sequences ending with
K. Tóth et al. / Frequent Treatment Sequence Mining from Medical Databases 217
Table 7. The distribution of health care patterns of patients diagnosed with colon carcinoma (in percent)

Institute       B     BK      K     H     E    BH    EB   EBK    BB   BKE   Sum
1           34.01  42.13  10.41  2.79  1.90  2.92  1.78  1.65  1.65  0.76   100
2           34.83  37.31   5.11  2.94  3.87  3.25  3.72  5.11  2.48  1.39   100
3           22.48  34.87  28.14  5.13  1.77  1.95  1.06  0.53  0.88  3.19   100
4           25.94  39.53  20.21  2.86  1.61  1.61  2.86  3.76  1.07  0.54   100
5           35.52  33.33   8.38  4.55  6.19  2.37  4.19  3.28  0.91  1.28   100
6           36.82  25.98  13.83  2.62  6.92  3.36  5.05  2.24  2.43  0.75   100
7           21.36  28.95  29.16  3.08  4.52  1.23  3.90  4.72  0.62  2.46   100
8           30.90  40.13   9.44  1.50  5.15  2.36  4.08  3.43  1.50  1.50   100
9           46.08  30.18   5.07  4.15  2.53  3.69  2.07  1.84  2.07  2.30   100
10          33.73  27.83   8.73  4.25  6.37  1.65  6.84  6.37  2.59  1.65   100
only chemotherapy (K) is inversely correlated with the size of the institute, while the
number of colon surgeries followed by chemotherapy (BK) increases slightly with
institute size. In contrast, the examination of the event sequences of patients diagnosed
with rectum carcinoma shows a clearer correlation. On the one hand, event sequences
containing only a colon surgery (B) were dominant, but this dominance declines as the
volume of the institute increases. On the other hand, the rate of all event sequences
containing radiotherapy (RBK, RB and R) increases with the volume of the institutes.
4. Conclusion
In this paper, we presented a novel hierarchical treatment sequence analysis method. The
proposed method automatically generates treatment sequences from healthcare treatment
events on four abstraction levels. These abstraction levels hide the details
of the medical treatment processes. The first level contains all specific information about
the treatments of patients. The second level classifies the healthcare events into
categories, and the third level performs two different aggregations on the classified events.
Finally, the fourth abstraction level hides all specific information and emphasizes only
the main characteristics of the treatment process.
Acknowledgement
This publication has been supported by the Hungarian Government through the project
VKSZ 12-1-2013-0012 – Competitiveness Grant: Development of the Analytic
Healthcare Quality User Information (AHQUI) framework.
5. References
[1] N.R. Mabroukeh, C.I. Ezeife, A taxonomy of sequential pattern mining algorithms, ACM Computing
Surveys (CSUR) 43(1):3 (2010).
[2] S. Aseervatham, A. Osmani, Mining short sequential patterns for hepatitis type detection, Springer
(2005), 6.
[3] R.C. Parikh, X.L. Du, R.O. Morgan, D.R. Lairson, Patterns of Treatment Sequences in Chemotherapy
and Targeted Biologics for Metastatic Colorectal Cancer: Findings from a Large Community-Based
Cohort of Elderly Patients, Drugs - Real World Outcomes 3(1) (2016), 69–82.
[4] K. Malhotra, D. H. Chau, J. Sun, C. Hadjipanayis, S. B. Navathe, Constraint Based Temporal Event
Sequence Mining for Glioblastoma Survival Prediction, Journal of Biomedical Informatics 61 (2016),
267-275.
[5] A. P. Wright, A. T. Wright, A. B. McCoy, D. F. Sittig, The use of sequential pattern mining to predict
next prescribed medications, Journal of Biomedical Informatics 53 (2015), 73-80.
[6] F. Caron, J. Vanthienen, K. Vanhaecht, E. Van Limbergen, J. Deweerdt, B. Baesens, Monitoring care
processes in the gynecologic oncology department, Computers in Biology and Medicine 44 (2014), 88-
96.
[7] R. Mans, H. Schonenberg, G. Leonardi, S. Panzarasa, A. Cavallini, S. Quaglini, W. van der Aalst,
Process mining techniques: an application to stroke care, Studies in Health Technology and Informatics
136 (2008), 573-578.
[8] K. Wongsuphasawat, D.H. Gotz, Outflow: Visualizing Patient Flow by Symptoms and Outcome, IEEE
VisWeek Workshop on Visual Analytics in Healthcare 3 (2011), 25-28.
[9] E. Roorda, Exploring Patient Data - Getting insight into treatment processes with data mining techniques,
Vrije Universiteit Amsterdam, 2009
[10] C. Plaisant, R. Mushlin, A. Snyder, J. Li, D. Heller, B. Shneiderman, LifeLines: Using Visualization to
Enhance Navigation and Analysis of Patient Records, Proc AMIA Symposium (1998), 76-80.
[11] K. Wongsuphasawat, J. A. Guerra-Gomez, C. Plaisant, T. D. Wang, B. Shneiderman, LifeFlow:
Visualizing an Overview of Event Sequences, CHI '11 Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems (2011), 1747-1756.
Health Informatics Meets eHealth 219
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-219
1. Introduction
In a heart failure disease management network, regular monitoring of the vital signs of
numerous patients may take a lot of time. Thus, it is essential to give physicians as
much technical support as possible, to make it easier for them to focus on those patients
who might need attention or therapy adjustment [1]. Therefore, algorithms have been
developed to detect signs of critical changes in the health status of patients. An increase
in body weight can be an indicator of a worsening health status or an
increased risk of rehospitalisation, due to oedema in the patient's legs or lungs prior to
hospitalisation [2-5].
Clinical alarm systems have to cope with the occurrence of false positive alarms, i.e.
they alert health professionals in situations without a real need for action. Deploying
algorithms with too low a reliability in the real patient care process is a burden for the
1 Corresponding Author: Alphons EGGERTH, Institute of Neural Engineering, Graz University of
Technology, AIT Austrian Institute of Technology, Reininghausstraße 13/1, 8020 Graz, Austria, E-Mail:
alphons.eggerth@hotmail.com
220 A. Eggerth et al. / Comparison of Body Weight Trend Algorithms
caretakers, because they have to evaluate every alarm, as they cannot tell beforehand
whether it is true or not. Thus, nuisance alarms waste time that could be invested in other
important tasks. Furthermore, many nuisance alarms lead to a decline in the attention
given to the alarms, so-called “alarm fatigue” [6].
Automatic event detection is used in telemedicine-based heart failure disease
management programs, based on the recommendation of the European Society of Cardiology
(ESC) guidelines to alert healthcare professionals and to increase the diuretic dosage if
patients experience a weight increase of 2 kg or more in 3 days [7]. Weight-gain-related
events can be calculated by the so-called Rule-of-Thumb (RoT) algorithm, a
simple algorithm to predict critical weight rises [8]. In a recently published paper [9],
Gyllensten et al. compared the performance of different weight-based algorithms with
body impedance measurements and showed that non-invasive transthoracic
bio-impedance measurements combined with a trend algorithm improved the
detection of heart failure decompensation.
The aim of the present paper was to compare the performance of different
algorithms for the prediction of hospitalisations or diuretic dose increases from daily body
weight measurements in a collaborative heart failure disease management program in
Austria and, based on these comparisons, to determine optimal processing parameters
for these algorithms.
2. Methods
The present paper is based on data from an existing disease management network:
since 2012, the heart failure disease management network "HerzMobil-Tirol" has been
active in the Austrian region of Tyrol. Patients hospitalised for heart failure are
asked whether they want to join the program. Participants get a blood pressure measuring
device and a weight scale that communicate with a smartphone via Near Field
Communication (NFC). Via a special app, running on the smartphone, the following vital
signs are recorded and transmitted to a backend server: blood pressure, heart rate, body
weight, well-being and drug compliance. After teaching patients how to use this
telemonitoring equipment, they are discharged and start transmitting their vital signs
from their homes on a daily basis [2-4].
So far, a total of 136 patients have participated in the program. In the first year of
HerzMobil-Tirol, the patients were monitored for 12 months; this was reduced to 6
months in the second year of the program and finally resulted in a comprehensive
3-month telemonitoring program. Some patients dropped out within the first two weeks,
and their data were therefore not used for the present analysis. The final patient cohort
consisted of 106 patients, 32 women and 74 men, with an age of 71 ± 12 years (mean ±
standard deviation).
In order to rate the performance of different algorithms for event prediction, two types
of events have been defined:
• Heart failure related hospitalisations
• Increases of the prescribed diuretic dose
We compared the performance of the following two algorithms for the prediction of heart
failure events from body weight signals: the Rule-of-Thumb (RoT) algorithm and the
Moving Average Convergence Divergence (MACD) algorithm.
The RoT estimated an output index (OI) by subtracting a previous value from the
actual value (see Eq. 1, taken from [9]):

OI(t) = w(t) − w(t − d)    (1)

The obtained output index was compared to a threshold. In the actual HerzMobil-Tirol
system, a threshold of 2 kg with a time window of d = 2 days was implemented [8].
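As a sketch (the variable and function names are assumptions for illustration, not the production implementation), the RoT rule can be written as:

```python
def rot_output_index(weights, t, d=2):
    """OI(t) = w(t) - w(t - d); None while fewer than d days of
    history exist."""
    if t - d < 0:
        return None
    return weights[t] - weights[t - d]

def rot_alarm(weights, t, d=2, threshold=2.0):
    """Flag a weight-gain event when the output index exceeds the threshold."""
    oi = rot_output_index(weights, t, d)
    return oi is not None and oi > threshold

weights = [80.0, 80.3, 80.5, 82.6]    # kg, days 0..3
print(rot_alarm(weights, 3))          # → True (82.6 - 80.3 = 2.3 kg > 2 kg)
```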
The output index resulting from the MACD algorithm was defined as the difference
of two sums, calculated over all previous values with exponentially decaying weights
(see Eq. 2, taken from [9]):

OI(t) = α_s Σ_{x=0}^{∞} (1 − α_s)^x w(t − x) − α_l Σ_{x=0}^{∞} (1 − α_l)^x w(t − x)    (2)

with α_s = 2 / (N_s + 1) and α_l = 2 / (N_l + 1)
The pre-processing was done with the open-source analytics platform KNIME [10]:
the PostgreSQL [11] database dumps of the HerzMobil-Tirol telemonitoring
software were transformed into a CSV file, and outliers and never-beginners were removed
from the data set. To present the data, a visualisation tool was created with JavaScript
and the Highcharts library [12]. Further analyses were performed with Python 3.5 [13]
scripts in Eclipse Neon [14].
The weight curve was created using the morning weight value of each day (i.e.
subsequent measurements on that day were ignored). Missing weight values were
replaced by the previous available weight. As shown in Equation 2, the MACD
algorithm formally considers an infinite number of preceding values, with exponentially
decaying weighting factors for values in the past. In practice, the calculations were limited
to 100 terms and were initialised with the very first available body weight on day 0.
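A minimal sketch of this pre-processing, assuming a dict mapping day numbers to morning weights; the function names and toy measurements are illustrative, not the authors' KNIME/Python pipeline:

```python
def daily_weights(measurements, n_days):
    """Morning weights for days 0..n_days-1, with missing days
    replaced by the previous available value."""
    series, last = [], None
    for day in range(n_days):
        last = measurements.get(day, last)
        series.append(last)
    return series

def macd_output_index(weights, t, n_s=1, n_l=6, max_terms=100):
    """Eq. 2 with the infinite sums truncated to 100 terms; days before
    day 0 are initialised with the first available weight."""
    a_s, a_l = 2 / (n_s + 1), 2 / (n_l + 1)
    short = long_ = 0.0
    for x in range(max_terms):
        w = weights[max(t - x, 0)]
        short += a_s * (1 - a_s) ** x * w
        long_ += a_l * (1 - a_l) ** x * w
    return short - long_

w = daily_weights({0: 80.0, 3: 82.0}, 6)   # days 1-2 and 4-5 imputed
print(round(macd_output_index(w, 5), 2))   # → 0.73
```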
2.4. Evaluation
To be able to compare our results with related research, evaluation criteria similar to
those used in [9] were applied. However, instead of two-week time intervals, we applied
one-week time intervals for evaluation, because of the weekly checks that are routinely
performed in HerzMobil-Tirol.
For each patient, the whole observation timespan was divided into weeks: starting
from the last measurement, 7-day time windows prior to this measurement were applied
until an event was reached. If fewer than 7 days were left, the time window was not counted
as a valid week and was thus not considered for evaluation. This procedure was
subsequently continued for the remaining record. Weeks containing more than two
missing values (< 70% of the expected measurement points) were excluded, too.
The following classification was made:
• True Positive (TP): in a valid week preceding an event, at least one output index
value exceeded the defined threshold.
• False Positive (FP): in any other valid week, at least one output index value
exceeded the defined threshold.
• False Negative (FN): in a valid week preceding an event, no output index value
exceeded the defined threshold.
• True Negative (TN): in any other valid week, no output index value exceeded
the defined threshold.
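A hedged sketch of this week classification, assuming the output index is given as a list of daily values (None for missing days) and simplifying "preceding an event" to "an event falls within the 7 days after the window":

```python
def classify_weeks(oi, event_days, threshold):
    """Count TP/FP/FN/TN weeks; weeks are built backwards from the
    last day, and weeks with more than 2 missing values are skipped."""
    counts = {"TP": 0, "FP": 0, "FN": 0, "TN": 0}
    end = len(oi)
    while end - 7 >= 0:
        week = oi[end - 7:end]
        if sum(v is None for v in week) <= 2:        # valid week
            alarm = any(v is not None and v > threshold for v in week)
            event = any(d in event_days for d in range(end, end + 7))
            counts[("TP" if event else "FP") if alarm
                   else ("FN" if event else "TN")] += 1
        end -= 7
    return counts

def youden_index(c):
    """Sensitivity + specificity - 1 (Eqs. 3-5)."""
    sens = c["TP"] / (c["TP"] + c["FN"])
    spec = c["TN"] / (c["TN"] + c["FP"])
    return sens + spec - 1

# Toy series: alarm spike on day 12, event on day 14
counts = classify_weeks([0.5] * 12 + [2.0, 0.5], {14}, threshold=1.0)
print(counts)  # → {'TP': 1, 'FP': 0, 'FN': 0, 'TN': 1}
```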
To estimate the predictive power of the algorithms, sensitivity (see Eq. 3), specificity
(see Eq. 4), area under the curve (AUC) and Youden index (see Eq. 5) were calculated.
sensitivity = TP / (TP + FN)    (3)

specificity = TN / (TN + FP)    (4)

Youden index = sensitivity + specificity − 1    (5)
As can be seen in Eq. 1 and Eq. 2, the MACD algorithm supports two processing
parameters that can be optimised. To find the best set of parameters, combinations of
these parameters were evaluated.
For RoT, the day shift d was varied from 1 to 20 days in steps of 1 day, and the
threshold for event detection was varied in the range 0.1 – 5 kg in steps of 0.1 kg. For
MACD, the parameters were optimised in the ranges Ns = 1 – 10 days (steps of 1 day)
for the short time window and Nl = 4 – 50 days (steps of 2 days) for the long time window,
whereby the long window had to be longer than the short window. The
threshold for event detection was varied from 0.1 to 3 kg in steps of 0.1 kg.
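The RoT parameter scan described above can be sketched as a plain grid search; `score` stands in for the week-based evaluation, and the lambda below is a toy scoring surface, not study data:

```python
def grid_search_rot(score, shifts=range(1, 21), thresholds=None):
    """Evaluate score(d, threshold) for every combination and return
    the best (d, threshold) pair together with its score."""
    if thresholds is None:
        thresholds = [round(0.1 * k, 1) for k in range(1, 51)]  # 0.1..5.0 kg
    best, best_score = None, float("-inf")
    for d in shifts:
        for th in thresholds:
            s = score(d, th)
            if s > best_score:
                best, best_score = (d, th), s
    return best, best_score

# Toy scoring surface peaking at d = 5 days and threshold = 1.0 kg
best, _ = grid_search_rot(lambda d, th: -abs(d - 5) - abs(th - 1.0))
print(best)  # → (5, 1.0)
```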
3. Results
Calculating the maximum area under the curve (AUC) for every parameter set,
considering all thresholds with a specificity over 95%, and choosing the threshold with
the biggest Youden index and a specificity over 0.90 led to the parameters Ns = 4 days
and Nl = 12 days with a threshold of 0.8 kg. For these parameters, the output index was
a smoothed weight curve that reacted rather slowly to quick changes in the weight of a
patient. Therefore, slow changes over long time periods could be detected well. This
filter had the characteristics of a low pass and thus eliminated sharp steps and outliers
quite efficiently. On the other hand, with these settings, there was quite a time delay from
the onset of a weight change to the detection of an event (see Figure 1).
Calculating the total AUC (considering all thresholds) for the MACD algorithm
revealed an optimum for Ns = 1 day and Nl = 6 days with a threshold of 0.4 kg (cut-off
chosen by maximum Youden index). For these parameters, the output index was quite
similar to the weight curve with only little filtering. Stepwise weight changes could well
be observed in the output index. For periods where the weight was rather stable, the
output index settled around 0. The time delay from weight change to event generation
was short. Thus, with this parameter set, quick changes in the weight curve could be
detected, while sensitivity to slow trends was low. This filter had the characteristics of a
high pass (see Figure 2).
Comparing the ROC curves (see Table 1) revealed that the parameter set with the low
pass characteristics for the MACD algorithm had the higher maximum Youden index for
specificity > 0.90 (0.19), at a threshold of 0.8 kg, but a lower prediction accuracy for
other thresholds. The total AUC was 0.57, i.e., considering all thresholds, the
prediction quality was similar to a random decision. The set with the high pass
characteristics had the lower maximum Youden index for specificity > 0.90 (0.17), but
the total AUC was higher (0.65); therefore, this parameter set was more stable
regarding the selection of another threshold (see Figure 3, left). For the RoT algorithm,
calculating the maximum AUC considering thresholds with a specificity over
95% yielded a maximum for a day shift of 8 days, with a total AUC of 0.57 and a
maximum Youden index for specificity > 0.90 of 0.17 at a threshold of 2.9 kg. Maximising
the total AUC for the RoT algorithm led to 1.0 kg in 5 days (cut-off chosen by maximum
Youden index). The total AUC for 5 days was 0.68, with a maximum Youden index for
specificity > 0.90 of 0.16 at a threshold of 2.8 kg (see Figure 3, right).
Table 1. Optimised parameter sets for the MACD and RoT algorithms, together with sensitivity, specificity and
Youden index for the given threshold, and the total area under the curve (AUC) calculated over all thresholds
of the given parameter set. (All values rounded to 2 decimal places.)

Algorithm                                     Sensitivity  Specificity  Youden Index  Total AUC
MACD (Ns = 1, Nl = 6, threshold = 0.4 kg)        0.87         0.41         0.28         0.65
MACD (Ns = 4, Nl = 12, threshold = 0.8 kg)       0.24         0.95         0.19         0.57
RoT (shift = 5 days, threshold = 1 kg)           0.76         0.57         0.33         0.68
RoT (shift = 8 days, threshold = 2.9 kg)         0.24         0.93         0.17         0.57
RoT (shift = 2 days, threshold = 2 kg)           0.19         0.91         0.10         0.64
RoT (shift = 3 days, threshold = 2 kg)           0.22         0.89         0.11         0.61
Figure 1. Fluctuating weight curve (top) and output index as obtained from the Moving Average Convergence
Divergence (MACD) algorithm (bottom) for the two optimised parameter sets: short time window Ns = 4, long
time window Nl = 12 with threshold 0.8 kg (blue), and short time window Ns = 1, long time window Nl = 6 with
threshold 0.4 kg (black)
The currently used RoT algorithm (2 kg in 2 days) had a Youden index of 0.10; at
the same time, this was almost the maximum Youden index for a specificity > 0.90 at a
day shift of 2 days. The total AUC for 2 days was 0.64.
Furthermore, a stratified 8-fold cross-validation was performed (maximum AUC for
thresholds with specificity > 0.95; threshold chosen by the biggest Youden index among
thresholds with specificity > 0.90) to get an impression of the variance of the estimated
parameters.
For MACD, the cross-validated parameters (mean ± standard deviation) were 3.88 ± 0.83
for the short time window, 12.50 ± 2.33 for the long time window and 0.76 ± 0.16 for the
threshold. The sensitivity was 0.26 ± 0.20 and the specificity 0.93 ± 0.03.
For RoT, the cross-validated parameters were 9.12 ± 2.70 for the day shift and 2.92 ± 0.15
for the threshold. The sensitivity was 0.18 ± 0.15 and the specificity 0.94 ± 0.04.
Figure 2. Rapidly rising weight curve (top) and output index as obtained from the Moving Average
Convergence Divergence (MACD) algorithm (bottom) for the two optimised parameter sets: short time
window Ns = 4, long time window Nl = 12 with threshold 0.8 kg (blue), and short time window Ns = 1, long
time window Nl = 6 with threshold 0.4 kg (black)
Figure 3. ROC curves (True Positive Rate plotted vs. False Positive Rate). Left: MACD, short time window
Ns = 1, long time window Nl = 6 (AUC = 0.65). Right: RoT, day shift = 5 (AUC = 0.68)
4. Discussion
We have evaluated different algorithms and processing parameters for the prediction of
heart failure related hospitalisations and diuretic dose increases. The results were in
good agreement with the results obtained for hospitalisations only in a comparable study
published in 2016 [9]. These results indicate that diuretic dosage increases may indeed
have prevented some hospitalisations that might otherwise have occurred.
Evaluation of event generation algorithms based on a gold standard may seem to be
a straightforward task. However, there are plenty of degrees of freedom that can
significantly influence the evaluation result. The decision to ignore events in the first 7
days after discharge was taken since this is the interval in which the physicians were
asked to view the patient data in HerzMobil-Tirol. We limited the specificity of thresholds
to > 95% when calculating the AUC in order to get no more than 5% false alarms.
This seems to be a useful limitation, since too many false alarms are expected to be
ignored by the physicians, but, as in all such settings, it comes at the cost of sensitivity.
Another aspect was the sparsity of events. Within our dataset, for all patients, there
were data from 1,980 monitoring weeks. However, due to invalid weeks with missing
values and/or hospitalisations, 520 weeks had to be excluded, leading to 1,460 valid
weeks. The remaining heart failure related hospitalisations and dose increases of
diuretics resulted in a total of 54 events. For the training of a classification algorithm,
more data and especially more events would be valuable.
In this analysis, hospitalisations and increases in diuretic prescriptions were taken as the
events to be detected by the algorithms. Although only heart failure related
hospitalisations were considered, there was no guarantee that each of them was preceded
by fluid retention. Thus, there were events with no foregoing weight gain, which
therefore could not be detected by looking at the weight data alone. Also, the
assumption that an increase in the prescription of a diuretic is automatically related to an
oedema is a simplification, as there are other reasons for such medication adaptations too.
This could be a reason for the relatively low AUC and Youden index values.
Parameter sets from different optimisation strategies could be used for different
purposes: the low pass parameter set could help a physician get an impression of how a
patient's weight is evolving. This could reveal a slow accumulation of fluid and
therefore serve as an alarm for oedema. The high pass parameter set, on the other hand,
could simultaneously be used for the quick detection of jumps. This fast detection might be
necessary if the body weight increases a lot within a short time period. A combination of both
parameter sets might be useful in a real-world scenario.
In the present analysis, we considered only body weight data. For comprehensive
patient surveillance, additional parameters should be included, which would most likely
improve the prediction of the worsening of a patient's health status.
The comparison of the two algorithms on real-world monitoring data showed similar
results regarding the total and limited AUC. An improvement in sensitivity might be
possible by including additional health data (other vital signs and diagnostic parameters),
because body weight variations are obviously not the only cause of HF related
hospitalisations or diuretic dose increases.
5. References
[1] A. Desai, L. Stevenson, Connecting the circle from home to heart-failure disease management. N Engl J
Med. 363 (2010), 2364–2367
[2] S. Chaudhry, Y. Wang, J. Concato, T. Gill, H. Krumholz, Patterns of weight change preceding
hospitalization for heart failure. Circulation. 116 (2007), 1549–1554
[3] A. Von der Heidt, E. Ammenwerth, K. Bauer, B. Fetz, T. Fluckinger, A. Gassner, W. Grander, W. Gritsch,
I. Haffner, G. Henle-Tarliz, S. Hoschek, S. Huter, P. Kastner, S. Krestan, P. Kufner, R. Modre-Osprian,
J. Noebl, M. Radi, C. Raffeiner, S. Welte, A. Wiseman, G. Poelzl, HerzMobil-Tirol network: rationale
for and design of a collaborative heart failure disease management program in Austria, Wiener klinische
Wochenschrift. Nov (2014).
[4] R. Modre-Osprian, G. Poelzl, A. Von der Heidt, P. Kastner, Closed-loop healthcare monitoring in a
collaborative heart failure network, Stud Health Technol Inform. 198 (2015), 17-24
[5] R. Modre-Osprian, K. Gruber, K. Kreiner, G. Schreier, G. Poelzl, P. Kastner, Textual Analysis of
Collaboration Notes of the Telemedical Heart Failure Network HerzMobil-Tirol, Stud Health Technol
Inform., 212 (2015), 57-64.
[6] M. Vukovic, M. Drobics, K. Kreiner, D. Hayn, G. Schreier, Alarm Management In Patient Health Status
Monitoring, Proceedings of the eHealth2012, (2012)
[7] P. Ponikowski, A.Voors, S. Anker et al., 2016 ESC Guidelines for the diagnosis and treatment of acute
and chronic heart failure, Eur Heart J 37 (2016) 2129-2200
[8] M. Kropf, R. Modre-Osprian, K. Gruber, F. Fruhwald, G. Schreier, Evaluation of a Clinical Decision
Support Rule-set for Medication Adjustments in mHealth-based Heart Failure Management, Stud Health
Technol Inform. 212 (2015), 81-7.
[9] I. Gyllensten, A. Bonomi, K. Goode, H. Reiter, J. Habetha, O. Amft, J. Cleland, Early Indication of
Decompensated Heart Failure in Patients on Home-Telemonitoring: A Comparison of Prediction
Algorithms Based on Daily Weight and Noninvasive Transthoracic Bio-impedance, JMIR Med Inform.
4(1) (2016), e3.
[10] KNIME.com AG, KNIME, https://www.knime.org/, last access: 31.01.2017.
[11] The PostgreSQL Global Development Group, PostgreSQL, https://www.postgresql.org/, last access:
31.01.2017.
[12] Highsoft AS, Highcharts, http://www.highcharts.com/, last access: 31.01.2017.
[13] Python Software Foundation, Python, https://www.python.org/, last access: 31.01.2017.
[14] The Eclipse Foundation, Eclipse, http://www.eclipse.org/, last access: 31.01.2017.
Health Informatics Meets eHealth 227
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-227
1. Introduction
Almost all people know what to do in case of a medical emergency: call the
ambulance. Dialing “144”, the Austrian emergency number, sets a workflow in
motion that is characterized by high dynamics, decisions under time and resource
pressure, and high demands on social competence. Ambulance service providers assure
competent and, moreover, complete prehospital care nationwide. In all clinical settings,
documentation is an integral part of the daily business. Electronic documentation has
already emerged as the standard way of satisfying documentation needs. Existing
workarounds, such as writing on pieces of paper or on gloves and transferring the notes
afterwards to the electronic patient record, are needed less and less and are being replaced
by the use of mobile devices. These devices offer a large number of useful features that
ease daily clinical work, such as dictation or character and speech
recognition. Furthermore, complete documentation is encouraged, without losing
important information due to accidentally discarded paper notes or gloves.
However, while the digital age has arrived in the hospitals, it has not yet reached
out-of-hospital settings such as prehospital care. Only a few emergency service providers
take advantage of electronic documentation systems and, moreover, at the time of this
research no organization was known that supports ambulance services with
1 Corresponding Author: Katharina Rohrer, Vienna University of Technology, Institute for Design &
Assessment of Technology, Human Computer Interaction, Vienna, Austria. Email:
rohrer.katharina@gmail.com
228 K. Rohrer / Electronic Health Records in Prehospital Care
additional information about the patient, such as the medical history or current medication.
This data is stored in Electronic Health Records, which provide an overview of the patient's
health. The Austrian Electronic Health Record, “ELGA” (short for “Elektronische
Gesundheitsakte”), is available for all Austrian citizens and therefore represents an
interesting information source that has not yet been considered for out-of-hospital settings
such as prehospital care.
This work analyzes the special requirements for electronic documentation and
support systems in prehospital care, which represents a very dynamic setting of high
importance for clinical treatment. Based on the need for a complete documentation
system in prehospital care, the research goal is to point out which data is required at the
emergency scene, especially in cases when such important information cannot be
collected on site. Furthermore, it should be identified whether the data stored in ELGA covers
the identified requirements and to what extent it is therefore applicable to prehospital care.
Based on the collected data, a data model is created that captures the information needed
at the emergency scene. With this result, a descriptive conclusion about the applicability
of ELGA in prehospital care can be reached.
2. Methods
This work was done as exploratory research using mixed methods and a systematic
review of the findings. The work is built upon three phases, which are described in
detail in the next sections.
2.1. Research
The main focus of phase 1, Research, is on state-of-the-art literature and on ELGA, the
Austrian Electronic Health Record. This establishes the theoretical basis of the research
and enables a descriptive conclusion from the results.
2.2. Data Collection

The main goal of the data collection was to achieve a deep understanding of prehospital
care and the daily work of the staff. Therefore, two approaches were chosen to
satisfy the information needs: observations and expert interviews. The observations are
considered very useful for gaining special insights and enriching the literature research
with practical experience, whereas the expert interviews provide detailed information
about the daily work and give the staff the opportunity to tell their stories in a comfortable
setting.
The aim of the observations was to gain insights into how ambulance staff work at the
emergency scene and how they handle their documentation duty. Furthermore, informal
talks with the staff were held to support the overall understanding of decisions and
actions at the emergency scene. It was possible to collect different opinions on ELGA as
well as on the integration scenario and its workability. The collected data was processed
after the observations and written down as notes from memory. This ensured that no
sensitive information, such as patient data or provider data, was collected or used in this
work at any time. The data was organized as a basis for phase 3, the analysis, together
with the data collected from the expert interviews.
The interviews were conducted with 7 experts of different educational levels.
The experts were assigned to two groups:
• Group 1: Paramedics. The experts in this group are educated as “Notfallsanitäter”
with at least two additional competences. The experts are located in Vienna and
Lower Austria. In this group, 4 persons were interviewed.
• Group 2: Physicians. In this group, 3 physicians of the association “Medizinercorps”
in Graz, Styria, were interviewed.
The interviews were conducted face-to-face to obtain qualitative insights. They
were designed as semi-structured, with an interview guideline, to gather the
required data in a broad sense while giving the experts space for storytelling. Moreover, the
interviews were audio recorded, supplemented by notes taken during the session, and
afterwards transcribed to enable an analysis.
2.3. Analysis
This phase is split into two steps. In the first step, the data gathered in phase 2
(Data Collection) is structured and analyzed with a thematic analysis approach as
suggested by Braun and Clarke [1]. The aim of the analysis is to discover whether the
decision about treatment and diagnosis changes with additional information about the
patient and, moreover, what data should be provided to enable this. Furthermore, it
should give insights into the special needs and requirements regarding the use of
electronic health records in prehospital care.
The second step tries to put the content findings from qualitative research into an
easy understandable structure. It represents an overview of the needed data during an
ambulance call and illustrates the whole process from the call in the dispatch center until
the possible transport to a hospital. Therefore an Entity-Relationship-Model [2] and a
Flow Chart was developed.
The transcribed interviews and notes were split into small phrases and
relevant statements to enable a structured review. An exemplary statement from the
transcribed interview with an expert (E2), translated from German, is shown below:
E2 [15:03]: “[…] And I believe that information about pre-existing conditions
given too early creates a false picture. […] That is also the reason why the
patient history and the medication tend to come at the end.”
Based on the information content of the statements, codes were assigned to
each of them. For the exemplary statement, the code “Relevanz der
Patientengeschichte” (relevance of the patient history) was assigned. The codes were
chosen separately for each data set, e.g. interview or observation, in an attempt to gain a
broader view of the data and discover codes, and later themes, that are not obvious.
After each statement had been assigned to a code, the codes were grouped according to
their meaning and relevance. In this step, themes such as “Prepare for the emergency”
or “Get a diagnosis” were formulated. Afterwards, the themes were reviewed and, where
possible, assigned to more general themes that collect the subthemes already found.
Figure 1 gives a rough overview of the identified themes, including sub-themes, which
represent a thematic grouping of the assigned codes.
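As an illustration, the coding-and-grouping step can be sketched in a few lines of Python; the statement IDs, codes and code-to-theme mapping below are invented examples, not the study's actual codebook:

```python
from collections import defaultdict

# Hypothetical coded statements as (statement id, assigned code) pairs.
coded_statements = [
    ("E2-15:03", "relevance of patient history"),
    ("E1-04:10", "check scene safety"),
    ("E3-22:41", "derive diseases from medication"),
    ("E4-09:55", "alert text interpretation"),
]

# Illustrative mapping of codes to the more general themes.
code_to_theme = {
    "relevance of patient history": "Get a diagnosis",
    "derive diseases from medication": "Get a diagnosis",
    "check scene safety": "Prepare for the emergency",
    "alert text interpretation": "Prepare for the emergency",
}

# Group statements by theme, mirroring the manual grouping step.
themes = defaultdict(list)
for statement_id, code in coded_statements:
    themes[code_to_theme[code]].append(statement_id)
```

After the loop, `themes` holds each theme with the statements whose codes fall under it, corresponding to the thematic grouping shown in Figure 1.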
In order to model the process, a flow chart was generated. Flow charts are commonly
used to illustrate algorithms and processes in medical settings because they
simplify an often very complex process into an understandable series of steps [3]. In
this step, the large amount of content findings was structured. Based on these
findings, the flow chart was generated, focusing on the structure of the process on
scene. It turned out that the process is homogeneous, mainly because of the associated
guidelines.
Based on the content findings from the qualitative research, a data model was created
that captures the requirements and needs concerning data at the emergency scene. An
Entity-Relationship Model [2] was chosen to provide easy and understandable access to
this information for ambulance personnel, who are mostly laypersons with regard to data
modeling. Furthermore, since the development of a database is not intended in this work,
an abstract model of the meaning of the collected data satisfies the information
requirements.
The ER model should give an overview of the data needed during an emergency
call and represents a basis on which the further development of an electronic health
record for prehospital care can build. Besides the data gathered during an emergency
call, the work on scene is also of great interest. It turned out that there are many
overlapping data objects, which have already been organized in the data model in a way
that may support a future integration of an electronic health record into a prehospital
setting.
3. Results
Two main workflows were found that can be analyzed separately. The first
begins with receiving the alert text and ends with the arrival at the scene. This process is
very dynamic and not supported by any guidelines or algorithms. The preparation on
the way to the emergency depends on the severity of the incident and the traveling
time. Furthermore, the experience of the team working together and their educational
level influence the organization of, e.g., resource and equipment management.
This process does not contribute to the patient’s outcome and is therefore of secondary
importance for the research in this work.
The second workflow starts with the arrival at the scene and ends with the transport of
the patient to a hospital or health care facility. Prehospital care is a challenging
environment, characterized by decision-making under uncertainty and high pressure in
terms of time and resources. Urgent emergencies such as trauma, stroke or heart attack
have a critical time frame: the first hour after the emergency has happened is the
so-called ”golden hour”. Within this time the patient has to receive appropriate treatment;
otherwise the probability of dying increases rapidly. Decisions on diagnosis and treatment
have to be made in a very short time [4]. It is obvious that anything that can be structured,
e.g. guidelines, supports the work of the medical personnel.
Based on the findings, it was possible to construct a flow chart, shown in Figure 2, that
illustrates the process after arrival at the emergency scene.
As soon as the ambulance staff arrives at the scene, the environment is analyzed and the
decision about additional forces, such as police or fire fighters, is made. The personal
safety of the ambulance personnel is essential, so they have to wait until
the scene is safe. Once the scene is safe, they get a first impression of the patient and start
examining him/her according to the guidelines. The goal is to decide very quickly
whether the patient is critical and whether additional forces, e.g. an emergency physician,
are needed. To this end, well-established examination guidelines are followed and
immediately needed actions are taken.
Besides the physical examination, the ambulance staff tries to find out what exactly
has happened in order to reconstruct the incident. Several possibilities exist: the patient
himself/herself or a third party (e.g. relatives, friends or bystanders) may be able to
give appropriate information, or no information can be collected, e.g. if the patient is
unconscious and there are no witnesses. If it is possible to gather additional
information, the patient’s medication is of great interest. In this case the personnel
asks the patient or a relative about known long-term or current medication. As a next step,
the staff tries to find out known diseases. If the patient does not know his/her diseases
exactly but has a list of current medication, the medical personnel can derive the
diseases from the medication. For this reason, the medication represents essential
information for the physicians. If time remains, further surveys are conducted.
If the patient has not already provided available medical letters, such as discharge
letters from a recent hospital stay, the ambulance staff asks for them as a next step.
Attending relatives may also be instructed to search for them, which keeps them
occupied while the examination continues.
Based on the information collected during examination and anamnesis, the staff
arrives at a suspected diagnosis and bases the treatment on it. Furthermore, the decision
about hospitalization and the need for a specialized department is made. If it is not
possible to acquire information, the suspected diagnosis, the treatment and the transport
are based on the symptoms, which are treated as an initial occurrence of the problem. In
fact, it turned out that the process is quite straightforward and can be modeled uniformly.
The groups differed only in the use of additional information, which depends on
their educational level.
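The decision flow described above can be approximated in a short sketch; the function name, dictionary keys and return values are illustrative assumptions, not the notation of the actual flow chart:

```python
# Hedged sketch of the on-scene workflow: scene safety check, guideline-based
# examination, anamnesis, and choice of diagnosis basis.
def on_scene_workflow(scene_safe, patient):
    if not scene_safe:
        return "wait for police/fire fighters until the scene is safe"
    # First impression: decide quickly whether the patient is critical and
    # whether additional forces (e.g. an emergency physician) are needed.
    request_emergency_physician = bool(patient.get("critical"))
    # Anamnesis: the patient or third parties may provide information,
    # or none may be available (e.g. unconscious patient, no witnesses).
    if patient.get("responsive") or patient.get("witnesses"):
        info = {
            "medication": patient.get("medication"),
            "known_diseases": patient.get("known_diseases"),
        }
        # Diseases can often be derived from the medication list.
        if info["known_diseases"] is None and info["medication"]:
            info["known_diseases"] = "derived from medication"
    else:
        info = None  # treat as an initial occurrence of the symptoms
    diagnosis_basis = "anamnesis and examination" if info else "symptoms only"
    return {
        "emergency_physician": request_emergency_physician,
        "diagnosis_basis": diagnosis_basis,
        "collected_info": info,
    }
```

The sketch mirrors the homogeneity observed in the study: the steps are fixed, and only the information available at each branch varies.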
The flow chart of the process at the emergency scene enabled an overall understanding
of an emergency and of the information needed at the scene. Based on the findings and the
flow chart, the first part of the research question (which data are collected and needed at
the scene) can be answered. It was possible to further process this data and develop a data
model, which was compared and mapped to the data fields available in ELGA. For this
purpose, documents already stored in ELGA were analyzed with the help of
implementation guidelines and mapped to the findings from the qualitative research [5].
The data model represents the information required at the scene resulting from the
findings of the qualitative research, mapped to the data available in ELGA. The main
findings from the qualitative research are organized as follows:
• Emergency data
environmental factors, patient history, vital signs measured at scene, case
data, emergency protocol, resulting diagnosis, treatment and transport
• Patient data
personal data, insurance data
• Medication
• Medical history
known diseases, allergies, risk factors, legal regulations (e.g. DNR orders)
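A minimal sketch of these four data groups as Python dataclasses; the field names are assumptions paraphrased from the findings, not ELGA field names or the actual ER model entities:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EmergencyData:
    environmental_factors: Optional[str] = None
    patient_history: Optional[str] = None
    vital_signs: dict = field(default_factory=dict)  # measured at scene
    case_data: Optional[str] = None
    emergency_protocol: Optional[str] = None
    diagnosis: Optional[str] = None
    treatment: Optional[str] = None
    transport: Optional[str] = None

@dataclass
class PatientData:
    personal_data: dict = field(default_factory=dict)
    insurance_data: dict = field(default_factory=dict)

@dataclass
class MedicalHistory:
    known_diseases: List[str] = field(default_factory=list)
    allergies: List[str] = field(default_factory=list)
    risk_factors: List[str] = field(default_factory=list)
    legal_regulations: List[str] = field(default_factory=list)  # e.g. DNR orders

@dataclass
class EmergencyRecord:
    emergency: EmergencyData
    patient: PatientData
    medication: List[str]
    history: MedicalHistory
```

The grouping follows the four bullets above; medication is kept as a plain list because it is the single most decision-relevant item on scene.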
It turned out that there are two existing documents in ELGA which potentially
contain valuable data, i.e. the medical discharge letter and the e-Medication. It was
possible to map relevant data, such as insurance data and medical data of the patient, to
data fields in these two documents. The medical history of a patient in particular has a
direct impact on the decisions about diagnosis, treatment and transport.
Known diseases, current medication, risk factors, allergies and legal regulations
(e.g. “Do-Not-Resuscitate” orders) represent the most relevant data for decision-making
on scene and are, furthermore, available as data fields in ELGA documents. Figure 3
shows an extract of the resulting data model, illustrating the mapping of data fields to
the medical history of the patient.
Besides providing evidence data for decision-making, it can be shown that ELGA is
suitable to support several use cases during an emergency, such as preventing the loss of
important information after handing the patient over to a hospital or health care facility.
The emergency protocol could be integrated as a separate document in ELGA and
support the clinical handover, which is usually a verbal, unstructured action. This could
close the existing information gap and improve the patient’s outcome.
4. Discussion
Available information in ELGA can serve several use cases, e.g. by providing
medical information about the patient to the ambulance staff and ensuring evidence-based
decision-making. Moreover, the emergency protocol can be stored in ELGA to support
the clinical handover and avoid the loss of information during this verbal action. In this
case all performed actions, administered medications and anomalies documented during
the emergency or transport are transparent to the hospital in advance and, more
importantly, after the ambulance staff has left. Sarcevic et al. showed that if information
during the clinical handover is not documented properly, it is lost and not available at a
later point of the treatment where it might make a difference in the clinical therapy [6, 7].
Furthermore, Wood et al. pointed out that the clinical handover has a direct impact
on the patient’s outcome and needs to contain all relevant information [8]. Issues in
transferring the emergency protocol to the hospital would also be solved with this
approach, because the clinical staff can access it at any time with the already available
and established infrastructure and systems.
Documentation in prehospital care is directly connected to the patient’s outcome.
The more information is documented at the emergency scene and the more complete the
handover to the clinical personnel, the lower the in-hospital mortality rate for
patients [9]. Complete documentation improves the patient’s outcome and
furthermore has the potential to save costs by avoiding unnecessary examinations and
providing the optimal treatment to the patient.
ELGA is a rather young concept which could support the development of new standards
in prehospital care. It could help establish an efficient way of documentation that
might increase the quality of emergency services by reducing the time spent at the
emergency scene, providing evidence data to the ambulance staff, giving them the
ability to focus on the patient by automatically capturing important medical data during
the emergency and transport, and, moreover, bringing transparency to a very dynamic process.
References
[1] Virginia Braun and Victoria Clarke. Using thematic analysis in psychology. Qualitative research in
psychology, 3(2):77-101, 2006.
[2] Alfons Kemper and André Eickler. Datenbanksysteme. Oldenbourg Wissenschaftsverband, 2013.
[3] Georg Kovacs and Pat Croskerry. Clinical decision making: an emergency medicine perspective.
Academic Emergency Medicine, 6(9):947-952, 1999.
[4] F. Von Kaufmann and K-G. Kanz. Die Rolle der Leitstelle im Prozess der präklinischen Versorgung.
Notfall+ Rettungsmedizin, 15(4):289–299, 2012.
[5] ELGA GmbH. Technische Implementierungsleitfäden. https://www.elga.gv.at/technischer-
hintergrund/technische-elga-leitfaeden/index.html, 2016. Online; accessed 27-May-2016.
[6] Aleksandra Sarcevic, Zhang Zhang and Diana S. Kusunoki. Decision making tasks in time-critical
medical settings. In Proceedings of the 17th ACM international conference on Supporting group work,
pages 90-102. ACM, 2012.
[7] Aleksandra Sarcevic and Randall S. Burd. Information handover in time-critical work. In Proceedings
of the ACM 2009 international conference on Supporting group work, pages 301-310. ACM, 2009.
[8] Kate Wood, Robert Crouch, Emma Rowland and Catherine Pope. Clinical handovers between
prehospital and hospital staff: literature review. Emergency Medicine Journal, 32(7): 577-581, 2015.
[9] Dann J Laudermilch, Melissa A Schiff, Avery B Nathens, and Matthew R Rosengart. Lack of emergency
medical services documentation is associated with poor patient outcomes: a validation of audit filters
for prehospital trauma care. Journal of the American College of Surgeons, 210(2):220–227, 2010.
Health Informatics Meets eHealth 235
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-235
Abstract. For the treatment of terminal heart failure, therapy with left-ventricular
assist devices has already been established. In the systems used today, the
pump speed does not adjust during physical activity, so that cardiac output and
exercise capacity remain markedly limited. The aim of this study is to develop an
automatic pump speed control based on filling pressure values in order to improve
exercise capacity and quality of life in these patients. Different approaches are
planned and will be tested in an in vitro patient simulator. The algorithms aim to match
the pump speed to the increased venous return. In addition, preservation of aortic
valve function should be taken into account.
Keywords: heart failure, left-ventricular assist device, closed loop system, exercise
capacity, quality of life
1. Introduction
Heart failure (HF) has become a major burden in the Western world, increasingly
affecting millions of people [1]. In Germany, it was the fourth most common cause of
death in 2014 [2]. HF is defined as a serious condition in which the heart is not capable
of delivering a sufficient amount of blood to the body. In consequence, the tissue
metabolism cannot be adequately supplied with oxygen and nutrients at rest, or
especially under exercise conditions [3].
For the treatment of HF, therapy with left-ventricular assist devices (LVAD) has
already been established [4]. Nowadays, implantation of an LVAD is mainly performed
as a destination therapy and patients are dependent on their device until the end of life
1 Corresponding Author: Thomas Schmidt, Schüchtermann-Klinik Bad Rothenfelde, Department for Cardiac
Rehabilitation, Ulmenallee 5 - 11, 49214 Bad Rothenfelde, Germany, E-Mail: tschmidt@schuechtermann-klinik.de
236 T. Schmidt et al. / Adaptive Pump Speed Algorithms to Improve Exercise Capacity
[5]. In order to enable these patients to participate in social life with adequate quality of
life, achievement of sufficient exercise capacity is crucial [6].
During physical activity, cardiac output (CO) increases 4-6-fold in healthy people
to meet the raised metabolic requirements [7]. In the pulsatile LVAD systems used in the
past, CO could also be adapted by increasing the pump frequency. However, the modern
systems used today are designed as small continuous-flow pumps which operate at a
fixed rotational speed. This means that the speed remains unchanged even under exercise
conditions. Consequently, during physical activity the muscles cannot be supplied with
sufficient oxygen, and ultimately exercise capacity remains significantly restricted [8].
It is the aim of this study to develop a closed loop system which adjusts LVAD pump
speed (and pump flow) automatically in order to improve exercise capacity in these
patients.
Although an automatic pump speed control has been the subject of research for many
years, it has so far not been implemented in currently used devices [9]. This is
probably because many approaches rely on the measurement of filling pressures.
In the past, there were only a few ways of measuring these values. In recent
years, however, there has been much progress, and some promising implantable sensors
have been successfully evaluated (e.g. CardioMEMS, Chronicle IHM or HeartPOD) [10, 11].
Further sensors are under development and will become available in the future, so that an
automatic pump speed control seems feasible.
In developing and implementing such a system, various considerations and
challenges must be taken into account:
open due to physiologic pressure differences. A regular opening of the aortic valve
should be attempted to avoid aortic valve insufficiency or fusion [14].
In general, there are two objectives to be achieved by the algorithms. Our main goal is
to optimize blood flow through the LVAD pump depending on venous return. A
secondary goal is the preservation of aortic valve function, to avoid aortic valve
insufficiency. However, these two objectives compete to some extent, because an
open aortic valve strategy restricts exercise capacity and hemodynamics [15].
Therefore, in our approach, support of aortic valve opening should only be performed
under resting conditions. This means that the algorithms will have two main control
modes, pump flow optimization and aortic valve opening, which will not run in
parallel because they pursue mutually exclusive goals.
For the implementation of such an adaptive pump speed control, the integration of different
pressure and flow sensors is highly desirable. The sensors should deliver precise
values at a temporal resolution sufficient to assess the changes over a cardiac cycle.
Figure 1: During physical activity, increased venous return cannot be managed completely by the LVAD due
to the fixed pump speed. Blood accumulates and causes an increase in filling pressures.
Figure 2: Coarse structure of the mock circulation system. Different pressure sensors are integrated, sending
values continuously to the control unit. Three control modes are planned.
One of the most important components in the overall system is a reliable
technique for suction detection. Suction detection could be performed, for example, by
analyzing pressure values or by discriminant analysis of pump parameters [16]. The same
applies to the detection of aortic valve opening, since Granegger et al. [17] have
developed a method that allows continuous monitoring of aortic valve function via pump
parameters. Detection of activity could be performed by analyzing venous return and
total CO.
The overall system must combine all these functionalities. Consequently, it has to
select the appropriate control mode depending on the measured values. In the case of
sensor faults, the system should detect the malfunction and switch automatically to an
emergency mode. The emergency mode ensures the basic function of the LVAD (as in
current devices) without additional risk for the patient.
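A possible supervisory mode selection could look like the following sketch; the mode names, inputs and priority ordering are assumptions for illustration and do not reflect the final system design:

```python
# Hedged sketch: choose a control mode from boolean detector outputs.
# Priority: sensor faults first, then suction, then activity, then rest.
def select_control_mode(sensors_ok, suction_detected, activity_detected):
    if not sensors_ok:
        return "EMERGENCY"         # fixed-speed fallback, as in current devices
    if suction_detected:
        return "SUCTION_RECOVERY"  # safety action: reduce pump speed
    if activity_detected:
        return "OPTIMIZE_FLOW"     # match pump flow to increased venous return
    return "AORTIC_VALVE_OPENING"  # resting condition: allow valve opening
```

The point of the sketch is the strict ordering: safety-related conditions always override the two planned optimization modes.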
The literature shows [9] that, for implementing such an LVAD control system, both
a conventional approach such as PID control and a fuzzy logic control seem
feasible.
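As a sketch of the conventional option, a textbook PID controller could map the deviation of the measured filling pressure from a target value to a pump speed correction; the class name, gains, units and setpoint below are illustrative assumptions, not tuned values:

```python
# Minimal PID sketch: filling pressure above the setpoint suggests the pump
# is not keeping up with venous return, so the speed correction is positive.
class PumpSpeedPID:
    def __init__(self, kp, ki, kd, setpoint_mmHg):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint_mmHg
        self.integral = 0.0
        self.prev_error = None

    def update(self, filling_pressure_mmHg, dt):
        error = filling_pressure_mmHg - self.setpoint
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # Returned value is interpreted as a pump speed correction (e.g. rpm).
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PumpSpeedPID(kp=50.0, ki=5.0, kd=1.0, setpoint_mmHg=10.0)
delta_rpm = pid.update(filling_pressure_mmHg=14.0, dt=0.01)  # pressure too high
```

A fuzzy logic controller would replace the fixed gains with rule-based mappings, which can be easier to tune around physiological constraints such as suction limits.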
2.3. Evaluation
3. Discussion
Due to demographic changes and improved medical care, the number of patients with
terminal HF will continue to increase, so that LVAD therapy will gain even
greater importance. In this context, it will be exciting and important to see which
improvements can be achieved by the further development of mechanical rotary pump
systems. Besides the reduction of common complications and the development of
wireless energy transmission, pump speed adaptation will come to the fore.
Recent studies show that pump speed adjustments during exercise can be performed
manually without adverse events [18]. By changing the rotational speed,
a significant increase in CO and peak oxygen consumption can be achieved [19]. In
addition, the aerobic exercise duration can be prolonged [18] and the anaerobic threshold
postponed [19]. This is particularly relevant and promising because
submaximal exercise capacity is related to the ability to cope with activities of daily
living.
The speed adjustments in the abovementioned studies were all carried out manually
by a physician in experimental settings. With the aid of new sensors, an adaptive
pump speed control based on pressure values could also become feasible. In the future, it
will be important that the sensors used can transmit values continuously and not only at
discrete points in time. In addition, the transmitted values have to be reliable and must
not exhibit any kind of drift, since regular external calibration of implantable sensors
would be difficult to handle. By solving these challenges, the exercise capacity of LVAD
patients could be increased, accompanied by an improvement in quality of life and social
participation.
Acknowledgment
This project is funded by the German Federal Ministry of Education and Research
(BMBF) within the framework of the ITEA 3 Project Medolution (14003).
References
[1] Ponikowski, P.; Anker, S.; AlHabib, K., et al.: Heart failure. Preventing disease and death worldwide.
In: ESC Heart Failure, (2014), 1, S. 4–25.
[2] Statistisches Bundesamt: Gesundheit. Todesursachen in Deutschland 2014, Fachserie 12 Reihe 4, (2016)
URL:
https://www.destatis.de/DE/Publikationen/Thematisch/Gesundheit/Todesursachen/Todesursachen2120
400147004.pdf?__blob=publicationFile. Last Access: 19.01.2017.
[3] Ponikowski, P.; Voors, A.; Anker, S., et al.: 2016 ESC Guidelines for the diagnosis and treatment of
acute and chronic heart failure: The Task Force for the diagnosis and treatment of acute and chronic
heart failure of the European Society of Cardiology (ESC). Developed with the special contribution of
the Heart Failure Association (HFA) of the ESC. In: Eur J Heart Fail, (2016), 8, S. 891–975.
[4] Krabatsch, T.; Potapov, E.; Soltani, S., et al.: Ventrikuläre Langzeitunterstützung mit implantierbaren
kontinuierlichen Flusspumpen. Auf dem Weg zum Goldstandard in der Therapie der terminalen
Herzinsuffizienz. In: Herz, (2015), 2, S. 231–39.
[5] Kirklin, J.; Naftel, D.; Pagani, F., et al.: Long-term mechanical circulatory support (destination therapy):
on track to compete with heart transplantation? In: J Thorac Cardiovasc Surg, (2012), 3, S. 584-603;
discussion 597-8.
[6] Leibner, E.; Cysyk, J.; Eleuteri, K., et al.: Changes in the functional status measures of heart failure
patients with mechanical assist devices. In: ASAIO J, (2013), 2, S. 117–22.
[7] Joyner, M.; Casey, D.: Regulation of increased blood flow (hyperemia) to muscles during exercise: a
hierarchy of competing physiological needs. In: Physiol Rev, (2015), 2, S. 549–601.
[8] Jung, M.; Gustafsson, F.: Exercise in heart failure patients supported with a left ventricular assist device.
In: J Heart Lung Transplant, (2015), 4, S. 489–96.
[9] AlOmari, A.-H.; Savkin, A.; Stevens, M., et al.: Developments in control systems for rotary left
ventricular assist devices for heart failure patients: a review. In: Physiol Meas, (2013), 1, S. R1-27.
[10] Merchant, F.; Dec, G.; Singh, J.: Implantable sensors for heart failure. In: Circulation, (2010), 6, S. 657–
67.
[11] Abraham, W.; Adamson, P.; Bourge, R., et al.: Wireless pulmonary artery haemodynamic monitoring
in chronic heart failure. A randomised controlled trial. In: Lancet, (2011), 9766, S. 658–66.
[12] Reiss, N.; Schmidt, T.; Workowski, A., et al.: Physical capacity in LVAD patients: hemodynamic
principles, diagnostic tools and training control. In: Int J Artif Organs, (2016), 9, S. 451–59.
[13] Tolpen, S.; Janmaat, J.; Reider, C., et al.: Programmed Speed Reduction Enables Aortic Valve Opening
and Increased Pulsatility in the LVAD-Assisted Heart. In: ASAIO J, (2015), 5, S. 540–47.
[14] Imamura, T.; Kinugawa, K.; Nitta, D.; Hatano, M.; Ono, M.: Opening of Aortic Valve During Exercise
Is Key to Preventing Development of Aortic Insufficiency During Ventricular Assist Device Treatment.
In: ASAIO J, (2015), 5, S. 514–19.
[15] Camboni, D.; Lange, T.; Ganslmeier, P., et al.: Left ventricular support adjustment to aortic valve
opening with analysis of exercise capacity. In: Journal of cardiothoracic surgery, (2014), S. 93.
[16] Ferreira, A.; Boston, J.; Antaki, J.: A rule-based controller based on suction detection for rotary blood
pumps. In: Conference proceedings : … Annual International Conference of the IEEE Engineering in
Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference,
(2007), S. 3978–81.
[17] Granegger, M.; Schima, H.; Zimpfer, D.; Moscato, F.: Assessment of aortic valve opening during rotary
blood pump support using pump signals. In: Artificial organs, (2014), 4, S. 290–97.
[18] Jung, M.; Houston, B.; Russell, S.; Gustafsson, F.: Pump speed modulations and sub-maximal exercise
tolerance in left ventricular assist device recipients: A double-blind, randomized trial. In: J Heart Lung
Transplant, (2016).
[19] Vignati, C.; Apostolo, A.; Cattadori, G., et al.: LVAD pump speed increase is associated with increased
peak exercise cardiac output and VO2, postponed anaerobic threshold and improved ventilatory
efficiency. In: Int J Cardiol, (2016).
Health Informatics Meets eHealth 241
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-241
Abstract. According to the new state educational standard, students who have chosen
medical cybernetics as their major must develop knowledge engineering
competency. Previously, in the course “Clinical Cybernetics”, students practicing
project-based learning designed automated workstations for medical
personnel using client-server technology. The purpose of this article is to give insight
into the project of a new educational module, “Knowledge engineering”. Students
will acquire expert knowledge by holding interviews and conducting surveys, and
will then formalize it. After that, students will represent declarative expert
knowledge in a network model and analyze the knowledge graph. Expert decision-making
methods will be implemented in software on the basis of a production model of
knowledge. Project implementation will result not only in the development of
analytical competencies among students, but also in the creation of a practically useful
expert system, based on the students’ models, to support medical decisions. This
module is currently being tested in the educational process.
1. Introduction
The specialty "Medical cybernetics" has existed in higher education in the Russian
Federation for more than 30 years. It provides comprehensive information technology
and biomedical training with a basic level of clinical skills. One of the career paths for
graduates is the role of a system analyst in the elaboration of various medical
information systems. This requires the development of particular competencies in
students during their training at university.
During the period from 2010 to 2016, there was a transition from the second to the
third generation of educational standards in the Russian higher medical education
system. According to the second-generation standard, in the subject “Clinical
cybernetics” expert knowledge is used to build a model of the diagnostic and treatment
process in health care facilities, as required for the development of automated workstations
for medical personnel [1]. A new discipline, "Medical Information Systems", which
forms this competency, was included in the third-generation standard [2]. The
latest Federal state educational standard for this specialty was introduced in 2016 [3]. It
provides for competencies in heuristic approaches and in the application and creation of
expert systems, whose implementation is impossible without knowledge engineering. In
1
Corresponding author: Sergey Karas, Siberian State Medical University. 634050, Moskovsky trakt, 2,
Tomsk, Russia. karas@ssmu.ru
242 S. Karas and A. Konev / Knowledge Engineering as a Component of the Curriculum
2. Methods
3. Results
Figure 1. Program “Lynx”: the screen form for arrangement of knowledge nodes.
Figure 2. Program “Lynx”: the screen form for establishing connections between nodes.
Figure 3. Program “Promo”: the screen form for expert rules formation.
4. Discussion
Making diagnostic and treatment decisions is one of the main responsibilities of a doctor.
Unfortunately, doctors have to work with incomplete and at times unreliable information
about the patient, in addition to sometimes inefficient diagnostic tests, which can
cause the misinterpretation of test results. Many factors influence a doctor’s
decision, and formalized decision-making algorithms may prove ineffective
under these conditions.
Expert knowledge, refined over years of real work, allows doctors to make
decisions effectively. The heuristic approach is of particular importance in medicine,
especially in its poorly formalized fields. The acquisition, formalization and subsequent
study of expert knowledge are needed to implement this approach. This procedure
precedes the creation of expert systems, which are in demand in practical healthcare, and
requires knowledge engineering competencies that are not normally developed in medical
students. These competencies are included in the educational standard of the Medical
cybernetics specialty, which allows graduates to work as analysts in the development of
knowledge-based systems.
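As an illustration of a production model of knowledge, a toy forward-chaining interpreter over IF-THEN rules might look as follows; the rules and facts are invented examples, not content of the Lynx or Promo software:

```python
# Each rule is (set of condition facts, concluded fact). Invented examples.
rules = [
    ({"chest pain", "dyspnea"}, "suspect acute coronary syndrome"),
    ({"suspect acute coronary syndrome"}, "record ECG"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions are satisfied until no rule adds
    a new fact, returning the full set of derived facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # fire the rule
                changed = True
    return facts

derived = forward_chain({"chest pain", "dyspnea"}, rules)
```

Students formalizing interview material into such condition-conclusion pairs is the core exercise that a production-model expert system builds on.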
The introduction of new state educational standards naturally leads to changes in the
content of a discipline’s curriculum. Since 2016, the development of automated
workstations for health professionals has been included in the program of the
"Medical Information Systems" discipline. Instead, a new "Knowledge engineering"
module, which is described in the present article and is being tested this year at Siberian
State Medical University, has been included in the program of "Clinical Cybernetics".
Within this module, we plan not only to develop analytical competencies among
students, but also to create expert systems to support the decisions of young doctors in
the project domain.
The "Knowledge engineering" module can also be included in the curricula of other
biomedical and humanities specialties and in the programs of other disciplines. We are
ready to elaborate joint curricula and programs with international universities. The
knowledge engineering software (Lynx and Promo) developed by the authors of this
article can be used as methodological support for teaching.
In recent years, the role of the system analyst has been in demand in the application
development market. The introduction of the "Knowledge engineering" module in
teaching develops analytical competencies that can be applied in any domain.
The methodology and technologies of knowledge engineering are intrinsically
international, which is why we offer projects aimed at developing and implementing
joint educational programs to interested universities.
248 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-248
1. Introduction
During the course of a pregnancy, women are likely to visit several physicians and
medical institutions for continuous monitoring of growth and development of the fetus.
Besides periodic examinations at the gynecologist, which normally take place at intervals of four weeks, women may undergo further examinations such as ultrasound scans and prenatal diagnostics. Whenever a medical examination takes place, data about the woman and her unborn child are recorded. The maternity record summarizes the relevant vital data of the unborn child resulting from these examinations. It is issued by the gynecologist or midwife and handed out to the patient once a pregnancy has been diagnosed. The patient is advised to carry it with her at all times, so that it can be updated continuously. This simplifies the communication between physicians and
facilitates the exchange of information. Further, in case of an emergency the relevant
data is immediately available.
We generally distinguish between paper-based and electronic maternity records. Although easier to handle, the paper-based version has some disadvantages, such as poor
readability and unavailability in emergency situations. In contrast to Germany, where the paper-based maternity record has been officially distributed to pregnant women since 1968, and Austria (since 1974) [1], Switzerland does not have an official maternity record. There are voluntary electronic records available that are in use by several physicians and midwives, but the collected information varies, as there are no guidelines and no fixed data sets. This makes it hard to share the data with other physicians.
1 Contributed equally
2 Corresponding Author: Stephan Nüssli, Berner Fachhochschule, Quellgasse 21, 2502 Biel, E-Mail: stephan.nuessli@bfh.ch
M. Murbach et al. / A First Standardized Swiss Electronic Maternity Record 249
To benefit from the advantages of a digital maternity record, in this work we develop a concept for exporting and storing relevant data from the primary care information system in a standardized manner. For this purpose, we propose a standardized dataset. We develop a prototype that allows pregnant women and their physicians to access the data in the form of an electronic maternity record through a mobile and a web application.
2. Methods
In order to specify the relevant dataset, we compared existing approaches for maternity records available in Switzerland, Germany (e.g. [2]) and Austria [3]. The initial dataset was adapted and revised by two gynecologists as well as by an experienced midwife. By involving physicians and midwives, we aimed at defining a maternity record for cross-disciplinary use. For developing the concept for data export, we reviewed data exchange formats available in Switzerland for Primary Care Information Systems. We identified SMEEX (Swiss Medical Data Exchange) as a possible data exchange standard. SMEEX is a standardized data language that enables the conversion of data from one system to another [4] and allows a cross-institutional exchange of information. A smeex-file can be generated by more than 40% of all Primary Care Information Systems in Switzerland. Apart from the founders, Vitodata AG and TMR AG, other software producers such as HCI Solutions AG, Logival Informatique SA and Praxinova AG support the standard. The advantage of SMEEX is the structure of its data. Exported smeex-files can be transmitted to other institutions and imported into their information systems. The file can then be further processed, keeping the structure of the imported information. A smeex-file is actually a zip-file containing XML files and binary file formats such as PDF [5]. Using this dataset and exchange format, we developed our concept for the electronic maternity record and implemented a prototype of a mobile and a web application.
By means of a questionnaire, we studied the acceptance of the developed prototypes among women. 27 women filled in the questionnaire. The main objective was to find out which additional functionalities are desired and how much women would be willing to pay for an electronic maternity record app.
3. Results
Our concept for recording and handling the digital maternity record aims at reducing the effort of data transmission and access for the people involved, and at limiting the error rate due to missing information in the clinical decision-making process. Figure 1 shows several use cases for the maternity record as well as the overall concept, explained in the following.
1. The recorded data is exported and saved as a .smeex-file at the end of an
examination. SMEEX data is directly exported from the Primary Care
Information System.
2. The smeex-file is uploaded to the website. This is done manually by the
gynecologist, but could be automated in a future extension.
3. The uploaded file, which consists of XML- and binary files, is copied to a file
system that triggers the follow-up processing.
4. The data is stored in a central database and can be accessed by the web interface
of the app as described in steps 5 to 8.
5. The website displays the information from the record in a structured and
graphical manner. The data is grouped into categories to increase the readability
and clarity of data presentation. The gynecologist can access the information
and discuss the results with the patient. As usual, the maternity record can be
printed for the patient.
6. A mobile application allows the patient to access her maternity record.
7. With the help of the eHealth-Connector, the data can be transformed into the
standardized format CDA-CH and stored in an electronic patient record
repository. From there, it is accessible to other physicians and midwives.
8. The data can be stored anonymized in a database and be used for research
purposes.
In this work, we focused on the extraction of the dataset from the smeex-file and the storage of the information in a database. To this end, the processing module unzips the uploaded smeex-file and stores the components in a new folder. Each smeex-file consists of an index.xml and a data.xml file as well as binary data, if any. In the case of a pregnancy, the binary data are mainly ultrasound images and forms. The data.xml file contains all data needed for the maternity record. The values are then stored in the database. Implementing steps 7 and 8 was not part of this work and remains open for future work.
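The unzip-and-extract step of the processing module can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the element name `weekOfPregnancy` is an invented placeholder, since the real data.xml schema is defined in the SMEEX documentation [5].

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.zip.*;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

/**
 * Minimal sketch of the processing module: a smeex-file is a zip
 * archive containing index.xml, data.xml and optional binary files.
 * We unzip it into a new folder and read values from data.xml; in the
 * real system these values are then stored in the database.
 */
public class SmeexProcessor {

    /** Unzips every entry of the smeex-file into targetDir. */
    public static void unzip(Path smeexFile, Path targetDir) throws IOException {
        Path root = targetDir.normalize();
        try (ZipInputStream in = new ZipInputStream(Files.newInputStream(smeexFile))) {
            for (ZipEntry e; (e = in.getNextEntry()) != null; ) {
                Path out = root.resolve(e.getName()).normalize();
                if (!out.startsWith(root)) continue;      // guard against zip-slip
                if (e.isDirectory()) { Files.createDirectories(out); continue; }
                Files.createDirectories(out.getParent());
                Files.copy(in, out, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }

    /** Parses data.xml and returns the text content of the first matching element. */
    public static String readValue(Path dataXml, String element) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(dataXml.toFile());
        return doc.getElementsByTagName(element).item(0).getTextContent();
    }
}
```

In the prototype described above, the extracted values would subsequently be written to the MySQL database; the binary entries (ultrasound images, forms) stay in the unzipped folder.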
Two prototypes have been developed to demonstrate the use cases of the maternity
record for pregnant women and access for the attending physicians. The website can be
used by physicians to add new data to the maternity record by uploading a new smeex-
file. After the upload, the data is extracted and stored in a database and can be verified
by the uploader. The website has been implemented using PHP and MySQL to query the
database. The visualisation of the information is structured by the categories mentioned in chapter 3.1. This aims at increasing clarity and making it easier for the physician to find the required information. Through interviews with a midwife, we learned that only little information is required for a first assessment. We therefore decided to provide an overview featuring the most important data, such as the week of pregnancy, risks during pregnancy, and the number of pregnancies and births.
The pregnant woman can access her maternity record with a mobile application. The information is structured by the same categories as on the website. The only difference is the possibility to add personal notes, which are stored only locally on her smartphone. This allows a woman to collect questions concerning her maternity record or pregnancy, or to add personal observations. During the next consultation, she can discuss her notes with her physician or midwife. In this way, an active involvement of the woman in the treatment process is ensured.
On average, the 27 participants would pay 4.37 CHF for the digital maternity record app. The lowest amount was 0 CHF (3/27); the highest amounts were 10 CHF (3/27) and 20 CHF (1/27). Three participants enquired about the availability of the app. As additional functionalities, the participants stated that they would like to be able to access
4. Discussion
The high interest in our work, the positive feedback from the survey and the findings from the existing literature [6-8] indicate the importance of an electronic maternity record for pregnant women. The increasing number of births and patients' mobility between different locations will result in a highly distributed data set and lead to an increased need for data exchange. An electronic maternity record can improve the exchange and availability of this data.
A clearly structured presentation of all relevant data in a mobile application is an advantage for the patients. Attending physicians and midwives can benefit from reduced effort and a lower likelihood of errors, because the data is automatically added to the maternity record. Since not all women can or wish to use an electronic maternity record, a data export in the form of a physical copy should also be enabled.
The developed concept is based on the assumption that the entered data is available in a structured format provided by the primary care information system. Currently, the content and types of documentation still differ considerably. Another problem is the use of free text fields, which makes it hard to extract specific information. To simplify further usage, it is necessary to ensure that all entered data is structured. The maternity record can be added to the electronic patient record as shown in figure 1. To achieve this, the document must be transformed into the format CDA-CH, for example with the eHealth-Connector.
References
[1] Gemeinsamer Bundesausschuss. Richtlinien des Bundesausschusses der Ärzte und Krankenkassen über die ärztliche Betreuung während der Schwangerschaft und nach der Entbindung (Guidelines for the supervision of pregnant women). Neufassung vom 28. November 1967. Dtsch Ärztebl 1968; 12: 669–672.
[2] Gemeinsamer Bundesausschuss. Mutterpass. [Online]; 2015 [accessed: 04.01.2017]: https://www.g-ba.de/informationen/richtlinien/anlage/38/.
[3] Bundesministerium für Gesundheit Österreich. Mutter-Kind-Pass. [Online]; 2015 [accessed: 13.03.2017]: http://www.bmg.gv.at/home/Schwerpunkte/Gesundheitsfoerderung_Praevention/Eltern_und_Kind/Mutter_Kind_Pass.
[4] P. Amherd. SMEEX, der Standard, damit eHealth durchgehend klappt. clinicum; 2015, 6: p. 52-54.
[5] R. Mettler, V. Ehrensperger. Technische Dokumentation zur Anwendung von SMEEX. 2010, available at https://www.smeex.ch/fileadmin/content/documents/Technische_Dokumentation_smeex_Beta.pdf.
[6] HC. Brown, HJ. Smith. Giving women their own case notes to carry during pregnancy. Cochrane Database Syst Rev. 2004;(2).
[7] H. Phipps. Carrying their own medical records: the perspective of pregnant women. Aust N Z J Obstet Gynaecol. 2001 Nov;41(4):398-401.
[8] M. Forster, K. Dennison, J. Callen, et al. Maternity patientsʼ access to their electronic medical records: use and perspectives of a patient portal. Health Information Management Journal; 2015, 44(1):4-11.
doi:10.3233/978-1-61499-759-7-254
Over the past few years, the use of information and communication technology in health has increased dramatically. Advanced technologies encourage the modernization of our health system [2]. Researchers are often working independently on similar projects [4]. Promising projects are not pursued after the pilot phase or do not get rolled out because of a lack of available mentors. Information about successful and funded projects should be published in a structured way and as transparently as possible [1].
The development and introduction of platforms that enable interdisciplinary
exchange on current developments and projects in the area of eHealth have been
stimulated by national and international authorities and are part of eHealth strategies.
At the end of the year 2014 the regional health fund (Gesundheitsfonds) of the
federal province of Styria commissioned the FH Joanneum GmbH, Institute for eHealth
to develop an interactive platform in order to consolidate and profile eHealth activities.
The aim of this project was to develop a repository of eHealth projects that will make
the wealth of eHealth projects visible and enable mutual learning through the sharing of
experiences and good practice. The desired potential of the work is to identify the key
health and non-health sector stakeholders and to establish a governance mechanism to
provide improved awareness and coordination of eHealth activities. With this
1 Corresponding Author: Mag. Dr. Karin Messer-Misak, FH Joanneum Gesellschaft mbH, Eggenberger Allee 11, 8010 Graz, E-Mail: karin.messer-misak@fh-joanneum.at
K. Messer-Misak and C. Reiter / eHealth Networking Information Systems 255
2. Methods
As the database should be of interest both to citizens and to a broad group of stakeholders in the health care sector, it was necessary to define appropriate framework conditions.
The implementation took place in 5 project phases:
• Preliminary study: Collection and analysis of the functional and non-functional requirements of the database; identification of the stakeholders, their analysis and evaluation.
• Concept development: Definition of the quality standards, definition of the test and evaluation plans, clarification of the prerequisites for ongoing operation after completion of the prototype, and definition of the workflow.
• Technical implementation: Development of the Web 2.0 portal, quality assurance, usability tests on the horizontal prototype with selected stakeholders, approval for pilot operation.
• Pilot phase: Collection and consolidation of relevant eHealth projects in Styria, assignment to the multi-dimensional project categories; rollout and training for future users.
• Evaluation phase: Review of the ongoing project activities at intervals based on the evaluation plan; preparation of an evaluation report.
The content and search criteria as well as their categories were determined in close coordination and cooperation with stakeholders from the specialist areas. Overall, the main target was to raise awareness of the opportunities that result from cooperative efforts between the scientific community, the providers of health services and the developers of health systems.
Technically, we used JavaServer Faces (JSF) for the implementation of the frontend of the web application. This framework is based on the Java Servlet and JavaServer Pages technology and offers a wide range of user interface and navigation components out of the box. Additionally, we included the CSS framework BootsFaces in order to provide a modern and user-friendly style for the views.
When running Java-based web applications, a servlet container such as Apache Tomcat is essential. We decided to use the lightweight Apache TomEE web profile, which offers a fully fledged, certified Java EE implementation. The advantages of Apache TomEE over other open-source Java EE implementations, such as JBoss or GlassFish, are:
• Package size – the entire web profile comprises 25 MB
• Very little memory usage
• Very agile – when running in embedded mode, the web profile can go through a start/deploy/test/undeploy/stop cycle in less than 3 seconds
The setup for the backend is based on a MySQL database and the Java Persistence
API (JPA). JPA provides an interface to easily map data objects and store them in the
database.
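The object-relational mapping described above can be illustrated with a minimal entity sketch. The class, table and column names below are our own illustrative assumptions (the platform's actual schema is not published), and running it requires a JPA provider with a configured persistence unit, such as the one bundled with TomEE.

```java
import javax.persistence.*;

/**
 * Illustrative JPA entity: the annotations map the data object to a
 * MySQL table, so simple CRUD operations need no hand-written SQL.
 * All names here are hypothetical, not the platform's real schema.
 */
@Entity
@Table(name = "ehealth_project") // hypothetical table name
public class EHealthProject {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // auto-increment key
    private Long id;

    @Column(nullable = false)
    private String title;

    @Column(name = "project_category") // one of the multi-dimensional categories
    private String category;

    // getters and setters omitted for brevity
}
```

Within a transaction, `entityManager.persist(project)` is then enough to store a new project; the provider generates the corresponding INSERT statement for the mapped table.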
3. Results
4. Discussion
References
[1] V. Andelfinger, T. Hänisch (Eds.), eHealth. Wie Smartphones, Apps und Wearables die Gesundheitsversorgung verändern werden, Springer Fachmedien, Wiesbaden, 2016, pp. 25-30.
[2] S. Brupbacher, Ein Schlüssel für den notwendigen Strukturwandel, http://fmc.ch/uploads/tx_news/CM_6_2007.pdf, 2007, pp. 6-8.
[3] Landes-Zielsteuerungskommission, eHealth-Strategie Steiermark 2014+. Anlage zu TOP 8, 2014.
[4] WHO, National eHealth Strategy Toolkit, http://apps.who.int/iris/bitstream/10665/75211/1/9789241548465_eng.pdf?ua=1, last access: 01.12.2016.
[5] WHO, National eHealth Strategy Toolkit. Overview, http://www.who.int/ehealth/publications/overview.pdf, last access: 06.02.2017.
[6] WHO, Connecting for Health. Global Vision, Local Insight. Report for the World Summit on the Information Society, http://www.who.int/ehealth/publications/WSISReport_Connecting_for_Health.pdf?ua=1, last access: 5.12.2016.
doi:10.3233/978-1-61499-759-7-259
Abstract. Poor data quality prevents the analysis of data for business-critical decisions. It also has a negative impact on business processes. Nevertheless, the maturity level of data quality and master data management is still insufficient in many organizations today. This article discusses the corresponding maturity of companies and a management cycle integrating data quality and master data management, based on a case dealing with benchmarking in hospitals. In conclusion, if data quality and master data are not properly managed, structured data should not be acquired in the first place, due to the added expense and complexity.
1. Introduction
Hospital costs are increasing steadily, making further efforts to control costs both
inevitable and essential. Simultaneously, there is a growing need for even better medical
quality for the treatment of patients. LeiVMed, a non-profit programme provided by the
degree course Process Management in Health Care (in German: Prozessmanagement
Gesundheit, PMG) at the University of Applied Sciences Upper Austria’s Steyr Campus,
has been pursuing this mission for about eight years. "LeiVMed" stands for
“Leistungsvergleich Medizin” (in English: benchmarking in health care) and is used to
prepare administrative and medical data to provide information about treatment specific
surgical patient outcomes, variable treatment costs, processes of care, and data for
guiding local quality improvement efforts as well as cost efficiency programs. PMG
developed LeiVMed over the course of several practical and research projects. Besides
providing support for hospitals it serves scientific purposes while also creating synergies
with teaching.
LeiVMed currently analyzes the treatment processes of total hip endoprosthesis, prostatectomy, hernia, strumectomy, cholecystectomy, colon operation and rectum operation for three Austrian hospital operators. The data is acquired from hospital databases as well as extracted from unstructured medical documents by trained data nurses. So far, LeiVMed has analyzed about 15,000 medical cases of the treatment process types mentioned above, plus appendectomy (appendectomy cases are no longer analyzed,
because, being largely acute cases, they are inadequate to interpret). “Medical cases” in LeiVMed are cases from the perspective of doctors and managers, not the billing cases handled by hospital applications. LeiVMed’s medical cases thus mostly involve at least one inpatient and one outpatient billing case, including the corresponding medical procedures. Besides this, they comprise anonymized personal data and medical risk factors of the respective patients, medical complications and laboratory values.
1 Corresponding Author: Dr. Klaus Arthofer, FH OÖ Studienbetriebs GmbH, Wehrgrabengasse 1-3, 4400 Steyr/Austria, E-Mail: Klaus.Arthofer@fh-steyr.at
260 K. Arthofer and D. Girardi / Data Quality- and Master Data Management – A Hospital Case
Regarding cost indicators, LeiVMed especially addresses the standardization of processes: it shows the deviation of the medical treatment processes mentioned above in medical departments from the corresponding evidence-based medical guidelines. These cost and process indicators are complemented by medical outcome indicators represented by medical complication ratios. Medical complications are of central relevance for LeiVMed’s hospital customers and have to be manually extracted from unstructured medical documents by study nurses. Data quality and master data management in LeiVMed focus on internal procedure data (internal from the perspective of a hospital), because the entry of internal procedure data in particular is generally still inadequately organized in hospitals. Although LeiVMed is limited to specific (but frequent) treatment processes and focuses on internal procedures of especially the radiology and laboratory departments, it takes about 1000 internal procedure concepts into account. LeiVMed uses SNOMED CT (Systematized Nomenclature of Medicine – Clinical Terms) as a standard for its procedure definitions and organizes the matching of the proprietary procedure definitions of its hospital customers with the SNOMED definitions, as well as its remaining concept definitions, based on an ontology.
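The core of such a matching step can be sketched as a simple lookup from proprietary procedure codes to SNOMED CT concept identifiers. All codes below are invented placeholders; LeiVMed's real matching is organized via an ontology and covers around 1000 concepts, which a plain map cannot fully capture.

```java
import java.util.*;

/**
 * Sketch of matching proprietary hospital procedure codes to SNOMED CT
 * concept identifiers. Codes here are placeholders for illustration.
 */
public class ProcedureMatcher {

    // proprietary code (per hospital operator) -> SNOMED CT concept id
    private final Map<String, String> mapping = new HashMap<>();

    public void addMapping(String proprietaryCode, String snomedConceptId) {
        mapping.put(proprietaryCode, snomedConceptId);
    }

    /** Returns the SNOMED CT concept id, or empty if the code is unmapped. */
    public Optional<String> match(String proprietaryCode) {
        return Optional.ofNullable(mapping.get(proprietaryCode));
    }

    /** Unmapped codes point at gaps that master data management must close. */
    public List<String> unmapped(Collection<String> proprietaryCodes) {
        List<String> result = new ArrayList<>();
        for (String code : proprietaryCodes)
            if (!mapping.containsKey(code)) result.add(code);
        return result;
    }
}
```

Reporting the unmapped codes back to the hospital operator is exactly the kind of master data maintenance task discussed in the following sections.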
LeiVMed’s major success factors are fairness and accuracy. Fairness is addressed with analytical models, specifically risk adjustment, which ensures that the indicators reflect the special characteristics of the patient sample analyzed. Accuracy is primarily based on data quality. Data quality and master data management in LeiVMed as well as in hospitals will be discussed in this article. Data quality and master data management is not only a success factor for LeiVMed; it is of particular interest for all organizations using database applications. This is because its importance is well known (garbage in, garbage out), yet it is more of an organizational issue and perhaps for this very reason remains highly problematic.
2. Methods
This article discusses data quality and master data management in the case of LeiVMed. Both authors of this article have been working on LeiVMed since its inception, have published related articles, especially on ontology-based data analysis, and have worked on several similar projects dealing with sectors other than healthcare. Therefore, and due to both the general relevance and the general validity of the topic, this article addresses data quality and master data management for all sectors and shows the corresponding implications via LeiVMed.
This article starts with an integrated definition of data quality and master data management, followed by a literature review on data quality and master data management in companies along with their maturity over the last decade. The literature review used scientific literature databases such as SpringerLink, IEEE Xplore and SciVerse ScienceDirect, along with queries in Google Scholar. A survey taking the form of semi-structured, oral interviews with data management experts at six large Austrian hospital operators supplements the literature review with regard to hospitals.
This discussion of the maturity of companies regarding data quality and master data management results in the approach for data quality and master data management used by LeiVMed.
Dippold et al. concluded in 2005 that corporate data management is generally about twenty years behind the current state of research. Process organization and, to a certain extent, human resources are in particular need of improvement with regard to the management of data quality and master data. Important data is better managed, while unimportant data tends to be neglected. As mentioned above, industries differ in terms of their maturity in managing data quality and master data [2]. Otto cites a study by BARC from 2008 stating that two thirds of companies view their master data management as immature or not fully mature, and concludes that the majority of companies are in the midst of organizing or reorganizing master data management [12].
In a survey based on semi-structured interviews with data management specialists at six large Austrian hospital operators in 2012, consensus was found regarding the importance of data quality and master data management (even in areas irrelevant to billing). However, no expert claimed to regularly examine the quality of
depending on the master data type (e.g. procedure master data by controlling department),
with this generally being agreed within departments at an individual site and not
throughout the holding. No hospital operator allocated such responsibilities to specific
persons. The maintenance of master data in the applications in general was primarily the
work of the IT department, particularly if doing so required advanced computer skills.
No hospital operator offered courses on master data or data entry rules. The relevant
users simply received an e-mail. On the whole, both the data quality and master data
management structures and processes were implicit rather than explicit. As the focus of the experts' comments thus fell on IT, data management appears to both IT staff and staff in other departments as more of a technical issue [14]. Therefore, only a limited level of maturity in managing data quality and master data can generally be attributed to these hospital operators. Probably because the losses made by Austrian public hospitals are compensated through operational loss coverage, data relating to internal procedures, surgical staff functions etc., however cost-relevant it may be, is regarded as less important. Thus
these hospitals themselves also focus on important data (from their perspective) when it
is a question of data quality and master data management.
In the literature, data management in general is viewed at the strategic,
organizational and systemic levels [3], [12], [16], [4]. The general focus tends to be on
the systemic level – especially if the literature does not explicitly focus on data
management. CobiT (Control Objectives for Information and Related Technology), the
international IT governance framework, also examines data management in its "deliver
and support" domain at the systemic level and not in the "plan and organize" domain.
This is despite the fact that CobiT aims to focus not on how IT requirements are implemented, but on which IT requirements need to be implemented [7]. Thus data management generally seems to be seen as more of a technical domain, which limits its potential at both the strategic and the organizational level.
A recent study published in January 2016 on the state of enterprise data quality from
451 Research commissioned by Blazent, a US-based company focusing on data
management, confirms this hypothesis. They identified a “disconnect between
responsibility and accountability for data quality”. IT departments are mainly responsible
for data quality management and managerial teams are ultimately (meaning “not
explicitly”) held accountable for it. But these parties are poorly aligned with each other,
which of course has negative impacts on data quality. The authors of the study believe that this finding explains the contradiction between the study's other two key findings: on the one hand, respondents generally have doubts about the effectiveness of the data quality management initiatives in their companies; on the other hand, respondents believe that a substantial portion of business value can be lost due to poor data quality. The authors therefore conclude that there is a “laissez-faire attitude toward the quality of data and the DQM practices in their organizations” and consider it “to be exacerbated due to the anticipated growth of data and plans for future projects that drive data creation and therefore need for quality management” [34]. A study by Omikron in 2013, surveying 200 business managers from the German-speaking area in Europe, came to very similar conclusions: neither business departments nor IT departments promote data quality, although both know that “business nowadays is based on data and not on computers”. They explain this finding mainly with the insufficient cooperation between business departments concerning data quality [18].
In summary, some progress in data quality management has been achieved over the last decade. Unfortunately, this progress seems too weak to keep pace with the anticipated relevance of data. Regarding the evolution of the market for master data management software [5], the complex, ever-increasing and constantly changing regulatory environment [15] and, last but not least, the evolution of data management assessment tools (e.g. CMMI DMM) [1], the evolution of master data management may be comparable with that of data quality management.
K. Arthofer and D. Girardi / Data Quality- and Master Data Management – A Hospital Case 263
In their survey from 2011, Haug and Arlbjørn found that the most significant barriers to master data quality are:
• Poor delegation of responsibilities for maintaining master data (the most important barrier)
• The absence of master data monitoring routines
• The lack of employee skills [6]
Thus the task of managing data quality and master data needs to be assigned as specifically as possible to a person or group of persons. The same person or group needs to perform regular and successively expanded tests of data quality and master data maintenance, so that employees see the data quality problems they are causing at regular intervals. Employees can then learn how to improve and are thus motivated to consistently generate data of sufficient quality.
LeiVMed supports its hospital customers in defining master data, particularly internal procedure concepts, in training staff to manage the relevant master data, and in the semantic testing of the transaction data (in cases of pneumonia, for example, thorax X-rays need to be recorded). As LeiVMed thus takes over a large portion of the quality and master data management of the relevant data, the hospital customers have outsourced aspects of managing data quality and master data in addition to medical controlling activities. This substantially reduces the problem of assigning roles as well as the problems posed by the lack of master data monitoring routines and employee skills. This in turn enables the hospital customers to adapt their own resources while already taking advantage of their medical benchmarking activities.
The data quality and master data management process is presented (simplified) as a cycle in Figure 1, which also indicates the potential for continuous improvement of transaction and master data quality. LeiVMed calculates benchmarks on a quarterly basis. It is sensible to institutionalize testing of the data before loading it into a data warehouse. The data quality check is not merely used for immediate data correction prior to data analysis but also needs to be integrated into the organization to successively improve data quality. Thus data quality in the online transaction systems is improved, fostering their support for the execution of business processes. With inferior master data, often only a fraction of the functions of applications introduced with considerable effort and cost can be utilized [17]. Over time, the effort needed to correct data errors will decrease and the quality of the assessed data will improve, making data analysis more informative.
Both syntactic and semantic checks need to be performed on the extracted data. Adequate plausibility tests need to be defined, along with the use of software tools featuring rules engines, fuzzy search methods etc., which is complex and time-consuming. The outsourcing of data assessment and testing also raises data confidentiality issues, which can be dealt with by means of encryption and pseudonymization as well as non-disclosure agreements.
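Such a combination of syntactic and semantic plausibility tests can be sketched as a small rules engine. The record fields, rule set and thresholds below are illustrative assumptions for the sketch, not the actual checks used by LeiVMed:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    kind: str                      # "syntactic" or "semantic"
    check: Callable[[dict], bool]  # True = record passes

# Illustrative rules; field names and the pneumonia rule are assumptions.
RULES = [
    Rule("case_id is numeric", "syntactic",
         lambda r: str(r.get("case_id", "")).isdigit()),
    Rule("length of stay is non-negative", "syntactic",
         lambda r: r.get("los_days", -1) >= 0),
    Rule("pneumonia cases have a thorax X-ray", "semantic",
         lambda r: r.get("diagnosis") != "pneumonia"
                   or "thorax_xray" in r.get("procedures", [])),
]

def check_record(record: dict) -> list[str]:
    """Return the names of all rules the record violates."""
    return [rule.name for rule in RULES if not rule.check(record)]

violations = check_record(
    {"case_id": "4711", "los_days": 5,
     "diagnosis": "pneumonia", "procedures": []})
# -> ["pneumonia cases have a thorax X-ray"]
```

A real deployment would add fuzzy matching and a far larger rule base, but the pattern of declarative rules evaluated against each extracted record stays the same.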
After checking the data, LeiVMed notifies the hospital about the data quality. If systematic errors are found in the data check (e.g. medical consilia of a certain department are generally not recorded), they are often caused by poor master data definitions or data entry rules, or by application users failing to properly apply such definitions and rules. This situation constitutes the link between the management of data quality and the management of master data in Figure 1. Data quality management thus extends into master data management for systematic errors, whereby LeiVMed edits or explicitly sets master data definitions and/or data entry rules. This is in keeping with the initial definition of master data and data entry rules following a specific request from the hospital.
This approach, combining the outsourcing of data quality aspects (from the perspective of the hospitals) with master data management, has led to the following benefits at LeiVMed's hospital customers:
• Surmounting the most significant barriers to master data quality (which also apply to the quality of transaction data):
  – Explicitly defined persons in charge of core activities in data quality and master data management
  – Provision of master data monitoring routines
  – Provision of data quality and master data skills
• Capability to analyze data on a scientific level
• Potential for continuous improvement of the maturity of the hospital's data quality and master data management, and support in the development of an appropriate attitude
This approach can only be initiated with great engagement on the part of both LeiVMed and the hospital. The data quality and master data management activities have to be coordinated between LeiVMed and the hospital, and some resistance to organizational change in the hospital has to be overcome. Finally, the potential for improvement can only be realized with sustained, adequate leadership in the hospital.
6. Discussion
Management of data quality and master data is crucial for adequate data analysis and business decision-making as well as for the smooth running of business processes. Given the lack of maturity in this regard, the management of data quality and master data offers huge potential, not just in hospitals but for organizations in general.
On top of the methodical and technical hurdles, the assignment of responsibility for data quality and master data management to one or more persons presents a special challenge. At the same time, the strategic and organizational perspectives (see Chapter 4) need to be taken into account. Data management is generally seen as an IT issue, but the IT department often does not really understand the precise semantics of a considerable amount of the data. This creates a vacuum in terms of responsibility, as the operative departments view data management as an IT issue while IT cannot work properly without input from those departments. In addition, attention needs to be paid to the associations between data quality management activities and master data management activities as well as to their triggers.
Data quality management and master data management are important to the users of all business applications, not merely to data analysis, decision-making and other applications reliant on adequate data. Doctors in hospitals have long complained about the excessive effort involved in entering data (e.g. encoding of procedures). Adequate management of data quality and master data would not only provide more meaningful analyses but probably also demonstrate that a good deal of data is entered to no real purpose. The importance of data quality and master data management must not be ignored. If data cannot be properly managed, or if there is no will to do so, the data should not be acquired, structured or even encoded in the first place, due to the added expense (recording effort, cost of IT solutions) and complexity of doing so.
References
[1] CMMI Institute, 5 Steps to Jump-start Your Data Management Program, available on: http://cmmiinstitute.com/sites/default/files/resource_asset/Data%20Management%20Maturity%20DMM%20ebook.pdf
[2] Dippold R, Meier A, Schnider W, Schwinn K, Unternehmensweites Datenmanagement. Von der
Datenbankadministration bis zum Informationsmanagement, Vieweg&Sohn, 2005, pp 175-182
[3] Dippold R, Meier A, Schnider W, Schwinn K, Unternehmensweites Datenmanagement. Von der
Datenbankadministration bis zum Informationsmanagement, Vieweg&Sohn, 2005, p 12
[4] Gansor T, Scheuch R, Ziller C, Master data management. Strategie, Organisation, Architektur, dpunkt,
2012, p 68
[5] Goetz M, The Forrester Wave™: Master Data Management, Q1 2016. The 12 MDM Providers That
Matter Most And How They Stack Up, available on:
http://www.forrester.com/pimages/rws/reprints/document/119980/oid/1-ZFXE6A
[6] Haug A, Arlbjørn J, Barriers to master data quality, Journal of Enterprise Information Management, vol
24, 2011, pp 288-303.
[7] KPMG Austria, CobiT 4.0. Deutsche Ausgabe, 2005, pp 157 – 159, available on: http://www.isaca.at
[8] Lehmann C, Roy K, Winter B, The State of Enterprise Data Quality: 2016. Perception, Reality and the Future of DQM, 451 Research, 2016
[9] Messerschmidt M, J. Stüben, S. Gehrls, Lean Master Data Management - agieren statt reagieren,
Information Management und Consulting, vol 25, 2010, p 75.
[10] Messerschmidt M, Stüben J, Verborgene Schätze - Eine internationale Studie zum Master-Data-Management, PwC, 2011, pp 76-77.
[11] Otto B, Hüner K, Österle H, Toward a functional reference model for master data quality management,
Inf Syst E-Bus Manage (2012) 10, p 399
1 Corresponding Author: Prof. Dr. med. Nils Reiss, Schüchtermann-Klinik, Bad Rothenfelde, Department for Clinical Research, Ulmenallee 5-11, 49214 Bad Rothenfelde, Germany, E-Mail: nreiss@schuechtermann-klinik.de
268 N. Reiss et al. / Telemonitoring and Medical Care of Heart Failure Patients
1. Introduction
Heart failure, with more than 23 million people affected worldwide and 2 million newly diagnosed per year, is a disease of epidemic character [1]. Progress has been made in pharmacological and device therapy; nevertheless, at the final stage a significant number of patients will need some kind of mechanical circulatory support. Heart transplantation is an established surgical approach for the treatment of end-stage heart failure and has been shown to improve exercise capacity, quality of life, and survival [2–4]. Unfortunately, there is an evident imbalance between the supply of donor hearts and the demand of patients with end-stage heart failure [5]. These circumstances have led to an increase in the use of left ventricular assist devices (LVADs), both as a bridge to heart transplantation and as so-called “destination therapy” (DT) [5]. Over time, LVADs underwent miniaturization and showed increased reliability; furthermore, both patient selection and perioperative management were improved [6].
However, a significant number of device-related complications such as thrombosis,
bleeding, right heart failure, infection and arrhythmia continue to hinder further
advancements in the outcome of LVAD patients [7].
Early after device implantation, frequent ambulatory visits ensure appropriate monitoring of LVAD patients and allow the medical team to continually assess the patient’s competency with device management. With positive development, the intervals between ambulatory visits become longer in the interest of quality of life. Between the ambulatory visits, the LVAD patient is without any kind of supervision or monitoring. In order to fill this gap, continuous telemonitoring using specific available sensors may be very helpful in this special patient group.
Resources to manage and act on the transmitted information from these patients are
vital to the success of telemonitoring. A significant challenge posed by continuous
telemonitoring of LVAD patients is the burden of information overload.
Therefore, Medolution’s main technical challenges are to deal with the enormous
amounts of heterogeneous data and data sources, to integrate and combine data, and to
extract relevant information from them. Medolution must do this while ensuring the safety and reliability of the devices in the patient’s environment that produce and consume this data, as well as ensuring security and privacy. Medolution addresses these challenges by implementing big healthcare data processing and analysis in the cloud, leading to: 1. early and pro-active decision support for patients and healthcare professionals in the form of timely, meaningful alerts and notifications, 2. the ability to generate healthcare predictions based on continuous trend analysis, and 3. the ability to share healthcare data dependably between devices and persons.
2. Methods
controller. The controller transmits the data to a central registry via the Internet. On a secured internet platform, the physician has access to the detailed information.
2. Blood pressures (pulmonary artery pressure, mean arterial pressure): Pulmonary artery pressure will be measured by the CardioMEMS system, which consists of an implantable sensor placed in the pulmonary artery. This sensor is about the size of a small paper clip and does not require batteries or wires. Paired with an external patient electronic device, the system allows patients to transmit pulmonary artery pressure data from their homes to their health care providers. In general, the CardioMEMS system is able to measure arterial pressures when placed in an artery.
3. Pacemaker (CRT/ICD) related parameters (e.g. heart rhythm, thoracic impedance): Modern cardiac resynchronization therapy defibrillators (CRT-D) are equipped with reliable diagnostics able to provide a series of alerts, including lung fluid accumulation (thoracic impedance), occurrence of atrial fibrillation, or technical issues. Early diagnosis and intervention may play a crucial role in minimizing major cardiovascular events and reducing hospitalization of LVAD patients.
4. Coagulation values (INR) and medication: Home monitoring of anticoagulation is an appealing option for patients. There are initial efforts to send the INR values determined by the patient via a special module to a telemedicine center.
5. Further Smartphone transferred parameters and findings (pictures of driveline exit
site, activity): Device-related infections continue to be a prevalent complication in
LVAD patients and contribute significantly to the financial burden of this therapy
due to an increased need for hospitalizations and surgical interventions. In
teledermatology, telecommunication technologies are used to exchange medical
information (concerning skin conditions) over a distance. In Medolution, it is planned that the patient will take pictures of the driveline exit site during wound dressing changes using a smartphone and send them to the clinician. The daily activity of the LVAD patient should be monitored via an activity tracker.
Figure 1. LVAD patient with external equipment (controller, accumulators) and telemonitoring environment.
3. Results
3.1. Questionnaire
76.8 % of our patients want to use telemonitoring, although only 30.3 % had heard about telemedicine before. Our patients expect a higher quality of life with LVAD telemonitoring (62.98 vs. 68.85; 0 = poor, 100 = high; p < 0.001). The desire for telemedicine correlates with the expectation of increased safety of their therapy (p < 0.001). Patients who had heard about telemedicine before desire future LVAD telemonitoring more often than patients who had their first contact with telemedicine within our survey (88.0 % vs. 73.9 %, p < 0.05). In summary, LVAD patients desire telemedicine as improved support for their outpatient care, expecting a higher quality of life and increased therapy safety. Confidence in the technology, including data safety, is high. Based on these results, we started the project by building a sophisticated architecture of data processing and analysis components.
3.2. Architecture

The architecture in Figure 2 depicts the component structure of the analytical (cloud)
components, which are fed by the LVAD gateway and other data sources. The data ingestion phase is handled by Apache Kafka [8] as message broker, which stores the patient sensor records in the NoSQL database Cassandra [9] after the data has been authenticated. In the Medolution scenario, the medical IoT devices of a patient need to be registered in order to validate the data. The architecture diagram shows the adapters combining the components. The architecture loosely follows the lambda architecture pattern [10]: it consists of a stream-processing branch based on Storm [11] and a batch-oriented branch based on an R environment. The medical expert gets access to the R functionality through a user-friendly web interface, which is provided by the medical data scientist and offers a use-case-specific interface for analyzing the LVAD and related data.
The components of this architecture are being deployed in an OpenStack [12] cloud
in a redundant storage configuration. The scalability of the services in the cloud is
another building block to increase the dependability and fault-tolerance of the solution
for the medical domain of LVAD-patients.
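Stripped of the actual Kafka/Storm/Cassandra stack, the lambda-style split described above can be illustrated with a minimal in-memory sketch: each ingested record is persisted to a master dataset (the batch side) and simultaneously checked by a stream branch for immediate alerting. The record layout, the fixed alert threshold and the batch statistic are assumptions for illustration only:

```python
from statistics import mean

store = []  # stands in for the Cassandra master dataset

def speed_layer(record: dict) -> bool:
    """Stream branch: flag a record immediately if pump power exceeds a fixed threshold."""
    return record["power_mw"] > 7000          # assumed alert threshold

def batch_layer() -> float:
    """Batch branch: recompute the patient's average pump power over all stored records."""
    return mean(r["power_mw"] for r in store)

def ingest(record: dict) -> bool:
    """Message-broker stand-in: persist the record, then run the stream branch on it."""
    store.append(record)
    return speed_layer(record)

alerts = [ingest({"power_mw": p}) for p in (6400, 6500, 7300)]
# alerts -> [False, False, True]
```

The point of the pattern is that the stream branch gives timely but simple alerts, while the batch branch can later recompute arbitrary statistics over the complete history.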
3.3. Security
In accordance with EU and national laws and regulations, Medolution makes provisions for components devoted to security and to the preservation of user privacy and rights. Special attention is paid to the protection, control and traceability of the user-generated data.
At the beginning of the project, a graphical user interface (GUI) mockup was developed. The user (patient, physician, LVAD coordinator) has to authenticate via a fingerprint sensor before using the application. Figure 3 (left) shows the start screen, which displays the most important and most critical values. For each parameter, a colored circle indicates whether the value lies in the required range (traffic light metaphor), and another symbol shows whether the sensor is currently connected to the smartphone. At the bottom of the view there is an emergency button, which allows the patient to call the doctor directly.
The physician uses a corresponding app with more functionality. In the admin mode, the physician can monitor all important and relevant parameters (see Figure 3, right). Clicking on one of the values displays a more detailed time course.
For each value, the physician sets upper and lower critical set points; an alarm is triggered whenever a value exceeds the upper or falls below the lower threshold. Furthermore, with admin rights the physician can change LVAD properties and set values for different parameters. The physician can also view further information such as patient data, patient contacts and logs.

Figure 3. Telemonitoring application: start-screen of the patient´s GUI (left); admin mode of physician´s GUI (right).

Figure 4. Change of power consumption (mW) in case of developing pump thrombosis; unprocessed time course (top); values of an uneventful course subtracted from values of a developing thrombosis case (bottom); earlier detection of the developing thrombosis is possible.
Within the project we identified the most frequent adverse events we want to prevent: thrombosis, bleeding, right heart failure, infection and arrhythmia. Although LVAD implantation is considered a safe and effective procedure, adverse events during support are still observed and can lead to poor outcomes and many returns to the hospital [13-15].
Pump thrombosis is one central complication which is influenced by other
complications [16]. Therefore, we have selected the use case of pump thrombosis for
initial realization of Medolution's aims. Adequate treatment often requires a pump exchange; according to the literature, the costs per pump replacement amount to about 85,000 € [17]. Early stages of pump thrombosis can be detected by the physician via visible changes in pump parameters (e.g. energy consumption).
In the Medolution project we aim, as described above, to record pump parameters continuously and automatically via a smartphone. An algorithm on the smartphone should calculate values for a normal day for each patient based on the history of pump parameters. The algorithm detects deviations from the normal values and alerts the physician via the telemonitoring system. By means of such algorithms, early stages of pump thrombosis can be detected (see Figure 4). The algorithm should recognize beginning pump thrombosis significantly earlier than the physician or the patient. Owing to earlier intervention, the number of thrombosis-induced pump replacements can be reduced. In consequence, this should also be associated with a lower risk for the patient and a significant cost reduction for the health care system.
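A minimal sketch of such a deviation check, assuming the patient-specific "normal day" is summarized by the mean and standard deviation of past power readings (the project's actual algorithm is not specified here, and the readings below are invented):

```python
from statistics import mean, stdev

def deviation_alert(history: list[float], current: float, k: float = 3.0) -> bool:
    """Alert when the current pump power deviates more than k standard
    deviations from the patient's own baseline (an assumed rule, not the
    project's actual algorithm)."""
    baseline, spread = mean(history), stdev(history)
    return abs(current - baseline) > k * spread

normal_days = [6400, 6450, 6380, 6420, 6440, 6410]  # mW on uneventful days (assumed)
deviation_alert(normal_days, 6430)   # unremarkable reading -> False
deviation_alert(normal_days, 6900)   # rising power consumption -> True
```

Because the baseline is derived from each patient's own history, a slow upward drift in power consumption stands out long before it would cross any population-wide threshold, which is exactly the effect shown in Figure 4.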
4. Discussion
Acknowledgement
References
[1] Mazurek, J.; Jessup, M.: Understanding Heart Failure. In: Heart Fail Clin, (2017), 1, S. 1–19.
[2] Conway, A.; Schadewaldt, V.; Clark, R., et al.: The psychological experiences of adult heart transplant
recipients: a systematic review and meta-summary of qualitative findings. In: Heart Lung, (2013), 6, S.
449–55.
[3] Kittleson, M.: Changing Role of Heart Transplantation. In: Heart Fail Clin, (2016), 3, S. 411–21.
[4] Nytroen, K.; Gullestad, L.: Exercise after heart transplantation: An overview. In: World J Transplant,
(2013), 4, S. 78–90.
[5] Toyoda, Y.; Guy, T.; Kashem, A.: Present status and future perspectives of heart transplantation. In: Circ
J, (2013), 5, S. 1097–110.
[6] Schumer, E.; Black, M.; Monreal, G.; Slaughter, M.: Left ventricular assist devices: current
controversies and future directions. In: Eur Heart J, (2016), 46, S. 3434–39.
[7] Birati, E.; Rame, J.: Left ventricular assist device management and complications. In: Crit Care Clin,
(2014), 3, S. 607–27.
[8] “Apache Kafka.” [Online]. Available: http://kafka.apache.org/. [Accessed: 21-Jul-2016].
[9] “Apache Cassandra.” [Online]. Available: http://cassandra.apache.org/. [Accessed: 26-Oct-2016].
[10] D. Jebaraj, “Lambda Architecture: Design Simpler, Resilient, Maintainable and Scalable Big Data
Solutions,” InfoQ, 12-Mar-2014. [Online]. Available: https://www.infoq.com/articles/lambda-
architecture-scalable-big-data-solutions. [Accessed: 23-Nov-2016].
[11] “Apache Storm.” [Online]. Available: http://storm.apache.org/. [Accessed: 24-May-2016].
[12] OpenStack, “OpenStack Summit Portland 2013,” OpenStack Summit Portland 2013. [Online].
Available: https://www.openstack.org/summit/portland-2013/.
[13] Smedira, N.; Hoercher, K.; Lima, B., et al.: Unplanned hospital readmissions after HeartMate II
implantation: frequency, risk factors, and impact on resource use and survival. In: JACC Heart Fail,
(2013), 1, S. 31–39.
[14] Hasin, T.; Marmor, Y.; Kremers, W., et al.: Readmissions after implantation of axial flow left ventricular
assist device. In: J Am Coll Cardiol, (2013), 2, S. 153–63.
[15] Forest, S.; Bello, R.; Friedmann, P., et al.: Readmissions after ventricular assist device: etiologies,
patterns, and days out of hospital. In: Ann Thorac Surg, (2013), 4, S. 1276–81.
[16] Nguyen, A.; Uriel, N.; Adatya, S.: New Challenges in the Treatment of Patients With Left Ventricular
Support: LVAD Thrombosis. In: Curr Heart Fail Rep, (2016), 6, S. 302–09.
[17] Baras Shreibati, J.; Goldhaber-Fiebert, J.; Banerjee, D.; Owens, D.; Hlatky, M.: Cost-Effectiveness of
Left Ventricular Assist Devices in Ambulatory Patients With Advanced Heart Failure. In: JACC Heart
Fail, (2016), [Epub ahead of print].
Health Informatics Meets eHealth 275
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-275
1. Introduction
Due to the impact of demographic change, the development of eHealth and AAL services is increasingly the focus of scientific interest [1]. In the past, R&D activities in new service development were dominated by a push perspective combined with a mainly technological view [2]. Despite numerous scientifically successful research projects in the eHealth/AAL field, only a minority of these projects have been developed into market-ready solutions and business models [3]. Nowadays, the inadequate customer orientation of the designed hybrid eHealth/AAL services is seen as a principal reason for this situation [4]. To overcome the gap between research interests and market-oriented development, current eHealth/AAL projects are shifting the focus from a mainly push-driven approach to a push- and pull-driven approach [5] with
emphasis on the customer. In this context, the importance of service quality, customer satisfaction and customer delight is rising. To ensure the adequate and necessary compliance during the service development process, the use of service excellence models (SEM) has become established across industries [6]. Prominent examples are the EFQM model [7], the Kano model [8], the Johnston service excellence model [9] and the Baldrige Criteria for Performance Excellence (BCPE) [10].

1 Corresponding Author: Johannes Kriegel, Faculty of Health and Social Sciences, University of Applied Sciences Upper Austria, Linz, Austria. Email: johannes.kriegel@fh-linz.at

276 J. Kriegel et al. / New Service Excellence Model for eHealth and AAL Solutions
So far, the development of eHealth/AAL services has usually not led to the market establishment of these services. It is therefore necessary to place service quality and customer benefits at the focus of the New Service Development (NSD) process, and the special requirements need to be considered along the development process. As part of the EU-funded research project DALIA (Assistant for Daily Life Activities at Home) and the research project PenAAL (Key Performance Measurement Index), funded by the State of Upper Austria, a framework to support the development of customer-oriented and market-ready eHealth/AAL services was developed and tested. Using literature research and expert interviews, the relevant dimensions of an eHealth/AAL business model were defined, described, interpreted and weighted according to their importance along the service development process. Firstly, the different dimensions of an eHealth/AAL business model were classified. Secondly, weightings of the dimensions of an eHealth/AAL business model were developed and tested, differing for the various phases of the service development process.
The development process of eHealth/AAL services is increasingly supported by the application of the NSD concept [11]. In the process, different stages (for example, design, prototyping, and market establishment) are accompanied by iterative process models [12]. The various relevant dimensions (for example, product/service, costs and financing, key partners) are examined and processed separately along the NSD process, and their importance for the success of the addressed service is weighted differently. Apart from the technological aspects of eHealth/AAL, service quality and service excellence are becoming more important [13]. Service quality describes the degree of agreement between the expectation and the actual provision of the service, whereby a distinction can be drawn between objective and subjective service quality [14]. Service excellence aims at the enthusiasm of the customers and at surpassing customer expectations [15]. The market-focused and customer-oriented development of eHealth/AAL services thus provides the link to a service excellence model. Service excellence models are used for the conceptual and strategic orientation and implementation of customer orientation as part of service delivery [16]. This raises the question: how must service excellence models be designed and structurally weighted to support the continuous development of customer-oriented and market-ready eHealth/AAL services?
2. Methods
stage can be quantified by an assigned weight factor [21]. The weighting was carried out in the context of a priority analysis, using a cost-utility analysis by means of pair comparison. Pair comparison is a method for evaluating alternatives when making decisions: each criterion is compared with every other criterion and scored, and the sum of the points per row (criterion) finally gives the value of that criterion relative to the whole. The respective weightings have to be further adjusted and specified through additional practical applications within R&D projects.
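The pair comparison described above can be sketched as follows; the 2/1/0 scoring scheme and the example criteria (drawn from the dimensions named earlier) are assumptions for illustration, not the study's actual scores:

```python
def pairwise_weights(names, points):
    """Derive weight factors from a pair comparison: points[(a, b)] holds the
    score criterion a receives against b (2 = more important, 1 = equally
    important, 0 = less important -- an assumed scoring scheme)."""
    row_sums = {n: sum(points[(n, m)] for m in names if m != n) for n in names}
    total = sum(row_sums.values())
    return {n: row_sums[n] / total for n in names}

# Illustrative criteria; the comparisons below favor product/service.
names = ["product/service", "costs and financing", "key partners"]
points = {("product/service", "costs and financing"): 2,
          ("costs and financing", "product/service"): 0,
          ("product/service", "key partners"): 2,
          ("key partners", "product/service"): 0,
          ("costs and financing", "key partners"): 1,
          ("key partners", "costs and financing"): 1}
weights = pairwise_weights(names, points)
# weights["product/service"] -> 4/6, the dominant criterion
```

Summing each row and normalizing by the grand total is exactly the "points per row relative to the whole" step of the method; the resulting weights always sum to one.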
To enable benchmarking with the developed SEM, a scoring method was developed and tested. Questionnaires for each dimension form the basis for assigning performance grades to the SEM excellence criteria. The goal was to provide an easy-to-use tool alongside the SEM in order to analyze and document the successful application of the model. Table 1 illustrates the score calculation systematically:
3. Results
Based on the analysis of previous eHealth/AAL R&D projects, shifting priorities can be identified across the different phases of the individual NSD process.
Example: During the design phase [22], the focus is on the development of creative ideas with regard to the product or the hybrid service [23]. Therefore the weight factor is 20 percent and the maximum score with the analysis tool (see Section 2) is 300 for the core dimension product/service. In contrast, the complementary dimensions have lower weight factors. Table 3 gives an overview of the self-assessment of an eHealth/AAL R&D project in the design phase by one project expert.
Table 3: Weighted self-assessment of an eHealth/AAL R&D Project in the design phase by one expert
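The weighted score calculation implied by the example (a 20 percent weight factor yielding a 300-point maximum) can be sketched as follows; the overall maximum of 1500 points and the percentage performance grade are assumptions for the sketch, not values stated by the model:

```python
MAX_TOTAL = 1500  # assumed overall maximum, so a 20 percent weight caps at 300 points

def dimension_score(weight_pct: float, grade_pct: float) -> float:
    """Weighted score of one dimension: the performance grade applied to the
    dimension's share of the assumed overall maximum."""
    return MAX_TOTAL * (weight_pct / 100) * (grade_pct / 100)

dimension_score(20, 100)  # design phase, product/service fully achieved -> 300.0
dimension_score(20, 55)   # partial achievement of the same dimension
```

Under this reading, shifting the phase-specific weight factors simply redistributes the shares of the fixed overall maximum among the core, complementary and meta dimensions.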
In the development phase, the focus shifts towards the complementary dimensions, while all the other dimensions are also taken into account and remain of major importance [24]. Therefore, with the exception of the meta dimension, the weight factors will be at a similar level. In the market-entry phase, the focus shifts further towards the complementary dimensions; the reason for this shift is the increased importance of customer demand and the associated pull-driven perspective [25]. In relation to our example (see Table 1), this would mean a weight factor between 12 and 13 percent for each complementary dimension.
4. Discussion
The eHealth/AAL New Service Excellence Model aims to ensure continuous customer orientation during the development process of eHealth/AAL services. Through the focus on the customer, improved service quality and enhanced customer delight should be achieved [13]. The development and application of a specific eHealth/AAL Service Excellence Model makes it possible to push forward, in a targeted and conceptual manner, the development of customer-oriented and market-ready eHealth/AAL services. It will therefore be necessary to integrate the New Service Excellence Model into the service development process in a practical and application-oriented way for future market-ready eHealth/AAL applications. Moreover, it is important to operationalize the various dimensions of the New Service Excellence Model by means of an appropriate questionnaire and evaluation grid in order to obtain a meaningful analysis result.
With the eHealth/AAL SEM and the related systematic assessments of activities, it
is possible to monitor goal achievement and to support a systematic development process
from the idea to the market establishment. The inadequate development of marketable
and customer-oriented eHealth/AAL business models should be the main focus [26] of
the model validation. Supported by the detailed analysis of the dimensions of the
eHealth/AAL business model canvas, the early reduction of entry barriers (for example,
available distribution channels) and uncertainties (for example, potential customer
segments’ willingness to pay) can be achieved [27]. The eHealth/AAL SEM provides
the option of comprehensive benchmarks with best practice solutions during the new
service development process. This increases the likelihood of sustainable market success
[28].
Based on the required specification and validation of the eHealth/AAL New Service
Excellence Model, it will be necessary to apply the model to other areas of health care.
Possible areas of application are, for example, the medical device field [29]. It is also
necessary to extend the model and the dimensions to appropriate methods of performance
measurement [30]. In the sense of total quality management (TQM), the eHealth/AAL
SEM can also be used as part of an external evaluation (e.g. through R&D funding [31])
and certification (e.g., as part of supplier management [32]).
280 J. Kriegel et al. / New Service Excellence Model for eHealth and AAL Solutions
References
[1] R.C. Leventhal, Aging consumers and their effects on the marketplace, Journal of Consumer Marketing
14 (1997), 276-281
[2] A. Brem, K.I. Voigt, Integration of market pull and technology push in the corporate front end and
innovation management - Insights from the German software industry, Technovation 29 (2009), 351-
367
[3] C. Jaschinski, S.B. Allouch, An Extended View on Benefits and Barriers of Ambient Assisted Living
Solutions, International journal on advances in life sciences 7 (2015), 40-53
[4] R. Luzsa, S. Schmitt-Rüth, F. Danzinger, Alterssensible Marktforschung - Ältere Kunden verstehen -
Methoden, Ansätze & Lösungen. Fraunhofer, Stuttgart, 2014
[5] B. Edvardsson, B. Enquist, R. Johnston, Cocreating Customer Value Through Hyperreality in the
Prepurchase Service Experience, Journal of Service Research 8 (2008), 149-161
[6] J.C. Crotts, D.R. Dickson, R.C. Ford, Aligning organizational processes with mission: the case of service
excellence, Academy of Management Executive 19 (2005), 54-68
[7] J. Moeller, The EFQM Excellence Model. German experiences with the EFQM approach in health care,
International Journal for Quality in Health Care 13 (2001), 45-49
[8] N. Kano, Attractive quality and must-be quality, Journal of the Japanese Society for Quality Control 14
(1984), 39-48
[9] R. Johnston, Towards a better understanding of service excellence, Managing Service Quality 14 (2004),
129-133
[10] M.W. Ford, J.R. Evans, Conceptual Foundations of Strategic Planning in the Malcolm Baldrige Criteria
for Performance Excellence, Quality Management Journal 7 (2000), 8-26
[11] J. Kriegel, S. Schmitt-Rüth, B. Güntert, P. Mallory, New Service Development in German and Austrian
Health Care – Bringing eHealth Services into the market, International Journal of Healthcare
Management 6 (2013), 77-86
[12] H.J. Bullinger, A.W. Scheer, Service engineering - Entwicklung und Gestaltung innovativer
Dienstleistungen, Springer, Berlin, 2006
[13] M. Gouthier, A. Giese, C. Bartl, Service excellence models: a critical discussion and comparison,
Managing Service Quality: An International Journal 22 (2012), 447-464
[14] A. Parasuraman, V.A. Zeithaml, L.L. Berry, A Conceptual Model of Service Quality and Its Implications
for Future Research, Journal of Marketing 49 (1985), 41-50
[15] M. Gouthier, Kundenbegeisterung durch Service Excellence: Erläuterungen zur DIN SPEC 77224 und
Best-Practices, Beuth, Berlin, 2012
[16] M. Asif, M. Gouthier, What service excellence can learn from business excellence models, Total Quality
Management & Business Excellence 25 (2014), 511-531
[17] M. Gersch, J. Liesenfeld, AAL- und EHealth-Geschäftsmodelle, Springer, Wiesbaden, 2012
[18] K. Auinger, T. Ortner, R. Kränzl-Nagl, J. Kriegel, Empirisches Evaluationsraster für AAL
Geschäftsmodelle, 8. AAL Kongress, Frankfurt, Deutschland, 2015, 324-329
[19] A. Osterwalder, Y. Pigneur, Business Model Generation. Campus, Frankfurt/M., 2011
[20] D. Kindstroem, Towards a service-based business model – Key aspects for future competitive advantage,
European Management Journal 28 (2010), 479-490
[21] J. Kriegel, K. Auinger, Service Development Loom – From the Idea to a Marketable Business Model,
eHealth2015, Wien, Österreich, 2015, 125-133
[22] M. Jaekel, A. Wallin, M. Isomursu, Guiding Networked Innovation Projects Towards Commercial
Success - a Case Study of an EU Innovation Programme with Implications for Targeted Open Innovation,
Journal of the Knowledge Economy 6 (2015), 625-639
[23] D. Kelly, C. Storey, New service development: initiation strategies, International Journal of Service
Industry Management 11 (2000), 45-63
[24] I. Alam, C. Perry, A customer orientated new service development process, Journal of Service Marketing
16 (2002), 515-534
[25] G. Di Stefano, A. Gambardella, G. Verona, Technology push and demand pull perspectives in innovation
studies: Current findings and future research directions, Research Policy 41 (2012), 1283-1295
[26] K. Spitalewsky, J. Rochon, M. Ganzinger, P. Knaup, Potential and Requirements of IT for Ambient
Assisted Living Technologies, Methods Inf Med 52 (2013), 231-238
[27] G. Van Den Broek, F. Cavallo, C. Wehrmann, AALIANCE Ambient Assisted Living Roadmap,
Amsterdam, IOS, 2010
[28] S. Massa, S. Testa, Innovation or imitation? Benchmarking: a knowledge-management process to
innovate services, Benchmarking: An International Journal 11 (2004), 610-620
Abstract. Primary health care (PHC) is currently being improved in all developed
industrialized countries. The aim is to make healthcare more patient-centered and close to the
patient's place of residence. In addition to the organizational and interdisciplinary
reorientation, the use of digital media is increasingly being emphasized. Through
literature research and an online survey among Austrian doctors and general
practitioners, the current and future challenges for the use of digital media in
networked and regional primary health care were identified and prioritized. It
becomes clear that basic functions like documentation, communication and
coordination in the individual medical practice are at the forefront. In the future it
will be necessary to support regional and interprofessional networking through
digital media.
1. Introduction
Primary care in Austria is characterized by the provision of health care services by family
doctors and/or general practitioners [1]. They are usually self-employed (practicing
physicians) and are the first professional point of contact for people
with medical problems. A family doctor cares for approximately 1,500 to 2,000 people,
although not all have direct contact with him or her. The service spectrum includes
general medical services provided to children and adults. In addition, primary care is
characterized by a special trust relationship and by a longer-term care period between
the patient and their family doctor [2].
Primary care in Austria is to be provided and professionalized by a new,
multiprofessional and interdisciplinary primary health care (PHC) concept [3, 4]. The
aim here is to organize the previously isolated health professionals within the scope of a
PHC center or a virtual PHC network to co-ordinate care in a patient-centered, disease-
related and interdisciplinary approach. The establishment and design of such a PHC
network requires overarching and sustainable communication, cooperation and
coordination of all players [5].
1 Corresponding Author: Johannes Kriegel, Faculty of Health and Social Sciences, University of Applied
Sciences Upper Austria, Linz, Austria. Email: johannes.kriegel@fh-linz.at
J. Kriegel et al. / Digital Media for Primary Health Care in Austria 283
To improve health care security and quality of care in primary care, it is important to
overcome the division of labor and fragmented health care through stronger, cross-border
and patient-centered delivery of services [6]. In addition to the patient's perspective (e.g.,
wishes and possibilities), patient-centered care also encompasses the process perspective
(e.g., interruption-free and barrier-free design of the care process). A key aspect of
patient-centered care is cross-sectoral and cross-departmental primary care [7]. More and
more digital media are used for documentation, knowledge management, communication,
coordination and patient-specific case management [8]. The goal must be to inform the
health professionals addressed about the possibilities and added value of the IT
deployment. It is also necessary to implement an adapted, parallel IT process flow. This
raises the question: How can the use of digital media support primary care in Austria that
is close to the patient’s residence and patient-oriented?
2. Methods
The identification of the digital media used in primary care, as well as the associated
challenges, was achieved through a semi-structured literature research. For this purpose,
relevant national and international databases (e.g., Thieme Connect, Science Direct
College Edition, SpringerLink, Emerald Collections, PubMed, Cochrane Library) were
searched by using targeted keywords or keyword combinations (e.g., primary health care,
home care, digital media, doctor-patient care, interaction, etc.). The identified articles,
studies and reports were reviewed and their contents interpreted in the context of the
research question. Furthermore, the results of the literature research informed the
development and design of the survey instrument (online questionnaire) used below.
The online survey of the physicians’ perspective focused on the current situation and the
future application possibilities of patient control in primary care. The
questionnaire focused on the use of digital media in primary care. For this purpose, a
standardized online questionnaire, based on the results of literature research as well as
an experts’ workshop, was prepared. On the basis of a pre-test (n=4), the number of
application options and challenges to be selected, as well as the formulation of the
questions, were adapted. The online survey was conducted from October 3, 2016 to
October 15, 2016 using the Unipark survey tool [9]. In total, 1317 members of the
Upper Austrian Society of General and Family Medicine (OBGAM) were invited to
participate by e-mail. The return was n = 97, with 48 answering the questionnaire on
digital media, yielding a return rate of 3.6%.
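The reported return rates can be reproduced directly from the figures given above (1317 invitations, 97 returns, 48 completed digital-media questionnaires):

```python
# Reproduce the response rates reported above for the OBGAM online survey.
invited = 1317          # members invited by e-mail
returned = 97           # questionnaires returned overall
answered_digital = 48   # answered the digital-media questionnaire

overall_rate = returned / invited * 100
digital_rate = answered_digital / invited * 100

print(f"overall return rate: {overall_rate:.1f}%")        # ~7.4%
print(f"digital-media return rate: {digital_rate:.1f}%")  # 3.6%, as reported
```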
3. Results
The use of digital media in the primary care of patients is confronted with a multitude of
challenges. These range from legal regulations to access to health care services, as well
as the lack of pilot and coordination functions within primary care. The various
challenges can be classified systematically into the external dimensions of
environment/society, healthcare system and technological developments, and the
internal dimensions of organization/results, information/communication and the
health professionals of primary care (see Figure 1).
use [47]. Other distinctive uses are e-mail communication and the health insurance card
as well as the fax machine. In addition, connectivity with different digital networks and
media has so far only been realized at an early stage. Physicians’ assessments regarding
future requirements indicate that the physicians see a corresponding demand and
development potential in the direction of digital networking and interaction, in particular
with other health professionals. Intensified digital interaction with patients is not
prioritized or predicted by the physicians.
4. Discussion
Primary care and thus also the provision of family doctor services are increasingly being
assigned a more important role within the Austrian healthcare system [62]. This has its
origin not only in economic reasons, but also is based on the assumption that patient-
centered care should be ensured by patient-centered and comprehensive case
management [63]. The current challenges in primary health care, which is shaped by the
division of labor and by fragmentation, mainly concern interagency communication,
coordination, cooperation and documentation, including knowledge management. This
results in the need to close or bridge the resulting gaps and barriers in an individual
patient’s journey. The focus here is on the reduction of information asymmetries, the
acceleration of patient pathways
and the assurance of care quality. Furthermore, primary care should be available close to
a patient’s place of residence. It is necessary to provide individualized health services by
the best qualified health professionals. The respective care physician takes a central
position, supported by the other internal (office team) and external (health professionals
in the region) players and above all by the patient herself or himself [64]. Digital media
are particularly important in the coordination of an individual patient’s journey as well
as the cooperation and communication between the respective players, which is relevant
for the patient’s overall journey. It is therefore important to give intensive consideration
to the respective user requirements in the design of future ICT systems [47].
References
[1] S. Gress, C.A. Baan, M. Calnan et al., Co-ordination and management of chronic conditions in Europe:
the role of primary care--position paper of the European Forum for Primary Care, Qual Prim Care, 17
(2009), 75-86
[2] J. Kriegel, E. Rebhandl, N. Reckwitz, W. Hockl, Stellschrauben in der Hausarztversorgung –
Identifizierung strategischer Erfolgsfaktoren für eine verbesserte hausärztliche Versorgung in
Oberösterreich, Gesundheitswesen, 74 (2016), 835-842
[3] C.M. Auer, Das Team rund um den Hausarzt – Konzept zur multiprofessionellen und interdisziplinären
Primärversorgung in Österreich. BfG, Wien, 2014
[4] Bundesgesetz zur partnerschaftlichen Zielsteuerung-Gesundheit (Gesundheits-Zielsteuerungsgesetz –
G-ZG). StF, BGBl. I, Nr. 81/2013
[5] E. Rebhandl, M. Maier, Primary Health Care (PHC) – ein Konzept zur Optimierung der extramuralen
Gesundheitsversorgung. AM PULS, Wien, 2012
[6] G.L. Jackson, B.J. Powers, R. Chatterjee, Improving patient care. The patient centered medical home.
A Systematic Review, Ann Intern Med, 158 (2013), 169-178
[7] A. Xyrichis, K. Lowton, What fosters or prevents interprofessional teamworking in primary and
community care? A literature review, Int J Nurs Stud, 45 (2008), 140-153
[8] G. Demiris, L.B. Afrin, S. Speedie et al., Patient-centered Applications: Use of Information Technology
to Promote Disease Management and Wellness. A White Paper by the AMIA Knowledge in Motion
Working Group, J Am Med Inform Assoc, 15 (2008), 8-13
[9] QuestBack, Enterprise Feedback Suite EFS survey. QuestBack, Köln-Hürth, 2013
[10] M. Mars, R.E. Scott, Global E-Health Policy: A Work In Progress, Health Aff 29 (2010), 237-243
[11] Y. Xue, Z. Ye, C. Brewer, J. Spetz, Impact of state nurse practitioner scope-of-practice regulation on
health care delivery: Systematic review, Nursing Outlook 64 (2016) 71-85
[12] P. Knaup, E. Ammenwerth, C. Dujat, et al., Assessing the Prognoses on Health Care in the Information
Society 2013 - Thirteen Years After, J Med Syst 38 (2014), 73. doi:10.1007/s10916-014-0073-6
[13] T. Matzner, Why privacy is not enough privacy in the context of “ubiquitous computing” and “big data”,
Journal of Information, Communication and Ethics in Society, 12 (2014), 93-106
[14] M.W. Kroneman, H. Maarse, J. van der Zee, Direct access in primary care and patient satisfaction: A
European study, Health Policy, 76 (2006), 72-79
[15] C. Sinnott, S. McHugh, J. Browne, et al., GPs’ perspectives on the management of patients with
multimorbidity: systematic review and synthesis of qualitative research, BMJ Open (2013), 3:e003610.
doi: 10.1136/bmjopen-2013-003610
[16] A. Dahlhaus, N. Vanneman, C. Guethlin, et al., German general practitioners’ views on their
involvement and role in cancer care: a qualitative study, Family Practice, 31 (2014), 209-214
[17] E. Nolte, C. Knai, M. Hofmarcher et al., Overcoming fragmentation in health care: chronic care in
Austria, Germany and the Netherlands, Health Econ Policy Law, 7 (2012), 125-146
[18] C. Schoen, R. Osborn, M.M. Doty et al., A Survey Of Primary Care Physicians In Eleven Countries,
2009: Perspectives On Care, Costs, And Experiences, Health Aff, 28 (2009), w1171-w1183
[19] P. Rattay, H. Butschalowsky, A. Rommel et al., Utilisation of outpatient and inpatient health services in
Germany – Results of the German Health Interview and Examination Survey for Adults (DEGS1),
Bundesgesundheitsblatt - Gesundheitsforschung - Gesundheitsschutz 56 (2013), 832-844
[20] J. Kriegel, E. Rebhandl, W. Hockl, AM. Stöbich, Primary Health Care in Österreich – Tu Felix Austria
nube – Konzept der Vernetzung in der primären Gesundheitsversorgung von Oberösterreich, Wien Med
Wochenschr, 166 (2016). doi:10.1007/s10354-016-0531-5
[21] T. Freund, C. Everett, P. Griffiths et al., Skill mix, roles and remuneration in the primary care workforce:
Who are the healthcare professionals in the primary care teams across the world?, International Journal
of Nursing Studies, 52 (2015), 727-743
[22] A. Loh, D. Simon, C.E. Wills et al., The effects of a shared decision-making intervention in primary
care of depression: A cluster-randomized controlled trial, Patient Education and Counseling, 67 (2007),
324-332
[23] Y. Lehmann, M. Ewers, Pathways of Invasive Ventilated Patients Released into Intensive Home Care:
The Perspective of Home Care Providers, Gesundheitswesen, 74 (2016). doi:10.1055/s-0042-116224
[24] N. Peek, C. Combi, R. Marin, R. Bellazzi, Thirty years of artificial intelligence in medicine (AIME)
conferences: A review of research themes, Artificial Intelligence in Medicine, 65 (2015), 61-73
[25] S.A. Moorhead, D.E. Hazlett, L. Harrison et al., A New Dimension of Health Care: Systematic Review
of the Uses, Benefits, and Limitations of Social Media for Health Communication, J Med Internet Res,
15 (2013), e85. doi:10.2196/jmir.1933
[26] M.M. Davis, M. Freeman, J. Kaye et al., A Systematic Review of Clinician and Staff Views on the
Acceptability of Incorporating Remote Monitoring Technology into Primary Care, Telemedicine and e-
Health, 20 (2014), 428-438
[27] A.H. Krist, J.W. Beasley, J.C. Crosson et al., Electronic health record functionality needed to better
support primary care, J Am Med Inform Assoc, 21 (2014), 764-771
[28] M.P. Silver, Patient Perspectives on Online Health Information and Communication With Doctors: A
Qualitative Study of Patients 50 Years Old and Over, Journal of medical Internet research, 17 (2015),
e19. doi:10.2196/jmir.3588
[29] W.L. Liao, F.J. Tsai, Personalized medicine: A paradigm shift in healthcare, BioMedicine, 3 (2013), 66-
72
[30] K.Y. Lin, Physicians' perceptions of autonomy across practice types: Is autonomy in solo practice a
myth?, Social Science & Medicine, 100 (2014), 21-29
[31] A. Edwards, M. Rhydderch, Y. Engels et al., Assessing organisational development in European primary
care using a group-based method: A feasibility study of the Maturity Matrix, International Journal of
Health Care Quality Assurance, 23 (2010), 8-21
[32] J. Siegrist, R. Shackelton, C. Link et al., Work stress of primary care physicians in the US, UK and
German health care systems, Social Science & Medicine, 71 (2010), 298-304
[33] P.P. Groenewegen, P. Dourgnon, S. Greß et al., Strengthening weak primary care systems: Steps towards
stronger primary care in selected Western and Eastern European countries, Health Policy, 113 (2013)
170-179
[34] C. Löffler, J. Höck, A. Hornung et al., What Makes Happy Doctors? Job Satisfaction of General
Practitioners in Mecklenburg-Western Pomerania – a Representative Cross-sectional Study,
Gesundheitswesen, 77 (2015), 927-931
[35] L. Rogan, R. Boaden, Understanding performance management in primary care, International Journal
of Health Care Quality Assurance, 30 (2017), 4-15
[36] M. Härter, H. Müller, J. Dirmaier et al., Patient participation and shared decision making in Germany –
history, agents and current transfer to practice, Z. Evid. Fortbild. Qual. Gesundh. wesen, 105 (2011),
263-270
[37] E. Bernabeo, E.S. Holmboe, Patients, Providers, And Systems Need To Acquire A Specific Set Of
Competencies To Achieve Truly Patient-Centered Care, Health Aff February, 32 (2013), 250-258
[38] R. Tandjung, T. Rosemann, N. Badertscher, Gaps in continuity of care at the interface between primary
care and specialized care: general practitioners’ experiences and expectations, International Journal of
General Medicine, 4 (2011), 773-778
[39] C. May, G. Allison, A. Chapple et al., Framing the doctor-patient relationship in chronic illness: a
comparative study of general practitioners' accounts, Sociol Health Illn, 26 (2004), 135-158
[40] T. Green, T. Martins, W. Hamilton et al., Exploring GPs’ experiences of using diagnostic tools
for cancer: a qualitative study in primary care, Family Practice, 32 (2015), 101-105
[41] K. Goetz, B. Musselmann, J. Szecsenyi, S. Joos, The influence of workload and health behavior on job
satisfaction of general practitioners, Fam Med, 45 (2013), 95-101
[42] R. Milstein, C.R. Blankart, The Health Care Strengthening Act: The next level of integrated care in
Germany, Health Policy, 120 (2016), 445-451
[43] R. Osborn, D. Moulds, E.C. Schneider et al., Primary Care Physicians In Ten Countries Report
Challenges Caring For Patients With Complex Health Needs, Health Aff, 34 (2015), 2104-2112
[44] R.A. Lawrence, J.K. McLoone, C.E. Wakefield et al. Primary Care Physicians’ Perspectives of Their
Role in Cancer Care: A Systematic Review, Journal of General Internal Medicine, 31 (2016), 1222-1236
[45] B. Sklar, Digital Communications: Fundamentals and Applications. Prentice Hall, New Jersey, 2001
[46] J.M. Holroyd-Leduc, D. Lorenzetti, S.E. Straus et al. The impact of the electronic medical record on
structure, process, and outcomes within primary care: a systematic review of the evidence, J Am Med
Inform Assoc, 18 (2011), 732-737
[47] D.A. Ludwick, J. Doucette, Adopting electronic medical records in primary care: Lessons learned from
health information systems implementation experience in seven countries, International Journal of
Medical Informatics, 78 (2009), 22-31
[48] G. Schmiemann, W. Schneider-Rathert, A. Gierschmann, M. Kersting, Practice Information Systems in
Family Medicine – Between Compulsory and Voluntary Exercise, Z Allg Med, 88 (2012).
doi:10.3238/zfa.2012.00127–00132
[49] B. Chaudhry, J. Wang, S. Wu et al., Systematic Review: Impact of Health Information Technology on
Quality, Efficiency, and Costs of Medical Care, Ann Intern Med, 144 (2006), 742-752
[50] S. Kripalani, F. LeFevre, C.O. Phillips et al., Deficits in Communication and Information Transfer
Between Hospital-Based and Primary Care Physicians, JAMA, 297 (2007), 831-841
[51] T. Bodenheimer, E.H. Wagner, K. Grumbach, Improving primary care for patients with chronic illness,
JAMA, 288 (2002), 1775-1779
[52] B.H. Crotty, Y. Tamrat, A. Mostaghimi et al., Patient-To-Physician Messaging: Volume Nearly Tripled
As More Patients Joined System, But Per Capita Rate Plateaued, Health Aff, 33 (2014), 1817-1822
[53] S.J. Wang, B. Middleton, L.A. Prosser et al., A cost-benefit analysis of electronic medical records in
primary care, American Journal of Medicine, 114 (2003), 397-403
[54] M.A. Clarke, J.L. Belden, R.J. Koopman et al., Information needs and information-seeking behaviour
analysis of primary care physicians and nurses: a literature review, Health Info Libr J. 30 (2013), 178-
190
[55] J.M. Paterson, R.L. Allega, Improving communication between hospital and community physicians.
Feasibility study of a handwritten, faxed hospital discharge summary. Discharge Summary Study Group,
Can Fam Physician, 45 (1999), 2893–2899
[56] D.M. Hynes, G. Stevenson, C. Nahmias, Towards filmless and distance radiology, Lancet, 350 (1997),
657-660
[57] S.M. Strayer, M.W. Semler, M.L. Kington, K.O. Tanabe, Patient Attitudes Toward Physician Use of
Tablet Computers in the Exam Room, Fam Med, 42 (2010), 643-647
[58] N. Ernstmann, O. Ommen, M. Neumann et al., Primary Care Physician's Attitude Towards the
GERMAN e-Health Card Project—Determinants and Implications, J Med Syst, 33 (2009), 181.
doi:10.1007/s10916-008-9178-0
[59] J. Walker, E. Pan, D. Johnston, J. Adler-Milstein, D.W. Bates, B. Middleton, The value of health care
information exchange and interoperability, Health Aff, 19 (2005). doi:10.1377/hlthaff.w5.10
[60] B. O'Neill, Towards an improved understanding of modern health information ecology, Social Science
& Medicine, 173 (2017), 108-109
[61] K.M. Myers, D. Lieberman, Telemental Health: Responding to Mandates for Reform in Primary
Healthcare, Telemedicine and e-Health, 19 (2013), 438-443
[62] T. Czypionka, S. Ulinski, Primärversorgung. IHS, Wien, 2014
[63] J. Gensichen, M. von Korff, M. Peitz et al., Case management for depression by health care assistants
in small primary care practices: a cluster randomized trial, Ann Intern Med, 15 (2009), 369-378
[64] F.C. Cunningham, G. Ranmuthugala, J. Plumb et al., Health professional networks as a vector for
improving healthcare quality and safety: a systematic review, BMJ Qual Saf, 21 (2012), 239-249
[65] K. Hoffmann, A. George, T.E. Dorner et al., Primary health care teams put to the test a cross-sectional
study from Austria within the QUALICOPC project, BMC Family Practice, (2015), 16:168.
doi:10.1186/s12875-015-0384-9
[66] R.Z. Pinto, M.L. Ferreira, V.C. Oliveira et al., Patient-centred communication is associated with positive
therapeutic alliance: a systematic review, Journal of Physiotherapy, 58 (2012), 77-87
[67] R. Grol, J. Grimshaw, From best evidence to best practice: effective implementation of change in
patients' care, Lancet, 362 (2003), 1225-1230
290 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-290
1. Introduction
1 Corresponding Author: Leila Erfannia, Health Information Management Department, School of Health
Management and Information Sciences, Iran University of Medical Sciences, Number 6, Rashid Yasemi St,
Vali-e Asr Ave, Tehran, Iran. E-mail: Leila.erfannia@gmail.com
F. Sadoughi and L. Erfannia / Health Information System in a Cloud Computing Context 291
technology projects due to high consumption and operation costs. On the other hand,
maintaining, supporting, and updating IT projects is confronted with fundraising
challenges. Additionally, the need for specialized personnel to develop, manage, and
maintain IT projects is another challenge. In order to overcome these eHealth problems,
most organizations are moving towards new models based on advanced technologies [1].
Cloud computing, as one of the new advancements in the world of technology, has
had a great impact on various spheres of knowledge. Cloud computing is a new IT-based
service-delivery model that can host different sources of information by creating an
integrated platform. Such integrated information platforms provide suitable opportunities
for data analysis and model discovery in the health sciences [2].
High implementation, maintenance, and manpower costs complicate the successful
realization of IT-based health projects. Cloud storage, through its pay-per-service
model, considerably decreases implementation costs and enhances financial savings.
Additionally, the increasing volume of medical data on the one hand, and restrictions on
the storage and maintenance of this data on the other, have necessitated high-volume
data storage repositories [3]. Health service providers intend to provide telemedicine
services via cloud computing. They focus on electronic records as necessary tools for
implementing cloud computing. Fusing electronic health record technology and cloud
computing will enhance the quality of medical services [4].
The advantages of cloud computing facilitate the implementation of electronic
records; thus, stakeholders are more inclined to cooperate and invest in this area [5]. On
the other hand, since interoperability between systems is one of the greatest problems in
transferring electronic data, it seems that standardizing health information before
migration to the cloud, and integrating the related data on this platform, can provide a
suitable solution for overcoming this barrier [6].
Applying health information systems, the most prominent sub-category of eHealth,
involves several challenges. Hardware limitations in storing and maintaining health data,
data access slowed down by the increasing volume of stored data, security, privacy and
data backup issues, and the sharing of traditionally stored health information are some
of the main challenges [7]. It seems that cloud computing is able to overcome
barriers that health information systems are dealing with. Several studies have focused
on cloud computing in the realm of health. However, different forms of health
information systems, such as electronic health records, have not been analyzed. Given
the problems and complications surrounding health information systems, and
considering the potential of cloud computing to resolve these problems, it seems that
reviewing the related literature can provide helpful information on the extent of use and
success of health information systems and cloud storage. Thus, the main objectives of
the present study include:
• Investigating the advantages and limitations of implementing health
information systems on cloud platforms.
• Identifying solutions for the successful implementation of health information
systems on cloud platforms.
2. Methods
The present systematic review was conducted in 2016. Following are the main stages of
the study: search strategy, design of the extraction form, inclusion and exclusion
criteria, and quality assessment of the articles. The search strategy contributed
considerably to finding good answers to the study questions. First, the necessary
keywords were extracted based on the objectives of the study. Then, related papers were
searched for using the found keywords, AND/OR conjunctions, and search strings.
Cloud computing, health information system, advantages, benefits, challenges, issue,
and solution were the keywords that made up the search strings, which were run in
Scopus, Science Direct, Web of Science, IEEE, PubMed and Google Scholar for the
period 2000 to 2016. The search process is shown
in Figure 1.
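The construction of search strings from keywords and AND/OR conjunctions can be sketched as follows. The exact combinations used in the study are not given, so the grouping below (topic synonyms ORed together, groups ANDed) is a hypothetical example built from the listed keywords:

```python
# Sketch of building boolean search strings from keyword groups, as in the
# search strategy described above. The concrete combination is hypothetical.

def build_query(topic_terms, aspect_terms):
    """Join synonyms with OR and combine the two groups with AND."""
    topic = " OR ".join(f'"{t}"' for t in topic_terms)
    aspect = " OR ".join(f'"{t}"' for t in aspect_terms)
    return f"({topic}) AND ({aspect})"

query = build_query(
    ["cloud computing"],
    ["health information system", "advantages", "challenges", "solution"],
)
print(query)
```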
Inclusion criteria: all papers (journal and conference articles) in English with
available full texts that focus on cloud-based health information systems and have the
potential to provide sufficient answers to the research questions qualified for the study.
Exclusion criteria: irrelevant papers (which do not cover health information systems
or do not describe the advantages and limitations of health information system
implementation on a cloud platform), short studies, letters to the editor, and papers that
provide only the authors’ opinion without scientific methods to test the evidence, or that
do not report a specific outcome about health information system implementation on a
cloud platform, were removed.
Quality assessment was carried out by the research team members simultaneously
and separately. If the assessors disagreed about the quality of a certain paper, a third
party would issue the final verdict. After studying the relevant papers, the related data
were entered into a data collection form designed as an Excel sheet. Then, all extracted
materials were compared and their connections, similarities, and differences were
determined. Finally, the found materials were fused to provide a suitable answer to the
research questions, and justified conclusions were presented.
3. Results
Electronic health records, personal health records and electronic medical records were the subjects of the papers. Ten articles discussed the advantages and limitations of cloud-based health information systems.
4. Discussion
According to the study, the main advantages of cloud-based health information systems can be categorized into two groups: economic benefits and information management benefits. According to several of the analyzed studies, the main impetus for organizations to adopt cloud-based systems is economic: organizations do not need costly infrastructure, pay only for the services they receive, save money, and can invest it in other areas and sections. The need to hire full-time personnel is removed and maintenance costs are considerably reduced. The cost of electricity consumption falls significantly, and the reduced carbon footprint helps preserve the environment [4, 13, 15].
In terms of information management benefits, the following can be considered. Data and information become highly accessible, so that patients and other stakeholders can use the databases anywhere and anytime, provided they have internet access. The transparency of the data increases, and higher quality is a justified expectation. Clouds allow the storage of huge volumes of data and remove most of the problems of managing such data. Easy implementation is an important advantage of clouds: users can use the services without needing particular hardware or software, and the required software is easily updated thanks to the centralized cloud management. The scalability of clouds accommodates changes in user expectations, which results in the sustained development of health information systems. The agility of clouds makes health information systems very mobile, because it enables users to request a required service without any preparatory actions [4, 9, 15, 23].
Based on the results of this research, using a set of general guidelines and standards in developing information systems is the main step towards interoperability. Using standard data models and standard data formats in health information systems are two main solutions. Some studies have recommended the use of standards such as HL7 CDA or HL7-based XML as the main solution for establishing interoperability between systems [6, 8, 22].
Considering the high sensitivity of health data, migration to the cloud is complicated unless there is complete trust between customer and provider. It is therefore recommended that organizations conduct thorough research before selecting a provider and then choose the best option. Studying the track record of candidate providers and assessing their qualifications is essential before starting a cloud-based project [9]. Based on the results, despite the limitations of cloud computing, organizations and health service providers have given positive feedback about this technology. Some mechanisms and solutions were discussed, and this appears to be a suitable area for future scientific research. The limitations and solutions discussed in this systematic study help healthcare managers and decision makers take better and more efficient advantage of this technology and plan better for the adoption of cloud-based health information systems.
Acknowledgment
The present study was financially supported by Iran University of Medical Sciences under grant No. 26879.
References
[1] R. Buyya, C.S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, Cloud computing and emerging IT
platforms: Vision, hype, and reality for delivering computing as the 5th utility, Future Generation
Computer Systems 25 (2009), 599-616.
[2] E. AbuKhousa, N. Mohamed, and J. Al-Jaroodi, e-Health cloud: opportunities and challenges, Future
Internet 4 (2012), 621-645.
[3] P.K. Bollineni and K. Neupane, Implications for adopting cloud computing in e-Health, Master Thesis
No. MCS 2011:17, School of Computing, Blekinge Institute of Technology, Sweden, 2011.
[4] G. Fernández-Cardeñosa, I. de la Torre-Díez, M. López-Coronado, and J.J. Rodrigues, Analysis of
cloud-based solutions on EHRs systems in different scenarios, Journal of medical systems 36 (2012),
3777-3782.
[5] E. Achampong, Readiness of Electronic Health Records for the Cloud Network, J Health Med Informat
5 (2014), e127.
[6] A. Bahga and V.K. Madisetti, A cloud-based approach for interoperable electronic health records
(EHRs), IEEE Journal of Biomedical and Health Informatics 17 (2013), 894-906.
[7] F. Gao, S. Thiebes, and A. Sunyaev, Exploring Cloudy Collaboration in Healthcare: An Evaluation
Framework of Cloud Computing Services for Hospitals.
[8] O.-S. Lupşe, M.M. Vida, and L. Stoicu-Tivadar, Cloud computing and interoperability in healthcare
information systems, in: The First International Conference on Intelligent Systems and Applications,
2012, pp. 81-85.
[9] J.J. Rodrigues, I. de la Torre, G. Fernández, and M. López-Coronado, Analysis of the security and
privacy requirements of cloud-based electronic health records systems, Journal of medical Internet
research 15 (2013), e186.
[10] R. Wu, G.-J. Ahn, and H. Hu, Secure sharing of electronic health records in clouds, in: Collaborative
Computing: Networking, Applications and Worksharing (CollaborateCom), 2012 8th International
Conference on, IEEE, 2012, pp. 711-718.
[11] M. Preethi and R. Balakrishnan, Cloud enabled patient-centric EHR management system, in: Advanced
Communication Control and Computing Technologies (ICACCCT), 2014 International Conference on,
IEEE, 2014, pp. 1678-1680.
[12] A. Alabdulatif, I. Khalil, and V. Mai, Protection of electronic health records (EHRs) in cloud, in:
Engineering in Medicine and Biology Society (EMBC), 2013 35th Annual International Conference of
the IEEE, IEEE, 2013, pp. 4191-4194.
[13] E.J. Schweitzer, Reconciliation of the cloud computing model with US federal electronic health record
regulations, Journal of the American Medical Informatics Association 19 (2012), 161-165.
[14] Y.-Y. Chen, J.-C. Lu, and J.-K. Jan, A secure EHR system based on hybrid clouds, Journal of medical
systems 36 (2012), 3375-3384.
[15] G. Fernández, I. de la Torre-Díez, and J.J. Rodrigues, Analysis of the cloud computing paradigm on
Mobile Health Records Systems, in: Innovative Mobile and Internet Services in Ubiquitous Computing
(IMIS), 2012 Sixth International Conference on, IEEE, 2012, pp. 927-932.
[16] O. Boyinbode and G. Toriola, CloudeMR: A Cloud Based Electronic Medical Record System,
International Journal of Hybrid Information Technology 8 (2015), 201-212.
[17] B. Pardamean and R.R. Rumanda, Integrated model of cloud-based E-medical record for health care
organizations, in: 10th WSEAS international conference on e-activities, 2011, pp. 157-162.
[18] A.S. Radwan, A.A. Abdel-Hamid, and Y. Hanafy, Cloud-based service for secure electronic medical
record exchange, in: 2012 22nd International Conference on Computer Theory and Applications,
ICCTA 2012, 2012, pp. 94-103.
[19] J. Haskew, G. Rø, K. Saito, K. Turner, G. Odhiambo, A. Wamae, S. Sharif, and T. Sugishita,
Implementation of a cloud-based electronic medical record for maternal and child health in rural Kenya,
International journal of medical informatics 84 (2015), 349-354.
[20] D. Sobhy, Y. El-Sonbaty, and M.A. Elnasr, MedCloud: healthcare cloud computing system, in: Internet
Technology And Secured Transactions, 2012 International Conference for, IEEE, 2012, pp. 161-166.
[21] P. Van Gorp and M. Comuzzi, Lifelong personal health data and application software via virtual
machines in the cloud, Biomedical and Health Informatics, IEEE Journal of 18 (2014), 36-45.
[22] G. Hsieh and R.J. Chen, Design for a secure interoperable cloud-based Personal Health Record service,
in: Cloud Computing Technology and Science (CloudCom), 2012 IEEE 4th International Conference
on, 2012, pp. 472-479.
[23] E. Ekonomou, L. Fan, W. Buchanan, and C. Thuemmler, An integrated cloud-based healthcare
infrastructure, in: Cloud Computing Technology and Science (CloudCom), 2011 IEEE Third
International Conference on, IEEE, 2011, pp. 532-536.
[24] T.-S. Chen, C.-H. Liu, T.-L. Chen, C.-S. Chen, J.-G. Bau, and T.-C. Lin, Secure dynamic access control
scheme of PHR in cloud computing, Journal of medical systems 36 (2012), 4005-4020.
[25] H. Wang, W. He, and F.-K. Wang, Enterprise cloud service architectures, Information Technology and
Management 13 (2012), 445-454.
[26] C.H. Wu, J.J. Hwang, and Z.Y. Zhuang, A Trusted and Efficient Cloud Computing Service with Personal
Health Record, in: 2013 International Conference on Information Science and Applications (ICISA),
2013, pp. 1-5.
[27] K. Haufe, S. Dzombeta, and K. Brandis, Proposal for a Security Management in Cloud Computing for
Health Care, Scientific World Journal (2014).
298 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-298
1. Introduction
Ageing makes elderly patients prone to developing chronic diseases, including Congestive Heart Failure (CHF), Chronic Obstructive Pulmonary Disease (COPD), Diabetes Mellitus Type II (DMT2) and Hypertension (HTN). Over 50% of elderly
patients suffer from more than one of these conditions and develop multi-morbidity [1].
The combination of several diseases may decrease their reserve and resistance to
stressors and enhances the risk of adverse events. This phenomenon has been defined as
the “frailty syndrome” and is associated with higher rates of hospitalization, disability, falls and death [2], as well as with increased healthcare utilization, reduced quality of life and
cognitive capacity. Accordingly, geriatricians have emphasized the absolute necessity
for the identification, assessment and treatment of pre-frail and frail patients. The
continuous monitoring of their status and the prevention of sudden adverse events and
their consequences are expected to lead to more sustainable and efficient healthcare
systems [3].
The multidisciplinary and multidimensional Comprehensive Geriatric Assessment
(CGA) has been established as an appropriate instrument for frailty evaluation and
1 Corresponding Author: Andreas Ziegl, AIT Austrian Institute of Technology, Reininghausstraße 13/1, 8020 Graz, E-Mail: a.ziegl@student.tugraz.at
A. Ziegl et al. / TUG Device for Unsupervised Functional Assessment of Elderly Patients 299
2. Methods
First, we carried out a literature review to survey the available physical performance tests, and discussed the outcomes of this review in informal meetings.
Additionally, geriatricians from the clinical partners contributed to the identification of
the desired functional assessment, according to some pre-established requirements; the
desired test should be simple, low-cost and easily repeatable. Moreover, it should include
clear tasks that are familiar to patients in their daily lives, so that human assistance and
guidance would not be necessary. Finally, it should be a proven predictor of functional
impairment.
We identified the TUG test [8] as the most appropriate tool for the assessment of the
functional status of patients with chronic conditions [9-14] as it meets all the pre-
established requirements. The test standardizes most of the ‘basic mobility manoeuvres’
of elderly subjects while it is quick and simple and includes everyday tasks like sitting
down and standing up [8]. Its main outcome (time) is a continuous and objective
parameter. These features are perfect for the assessment of small changes in functional
status and patient evolution in the long-term after an intervention [15].
Once the desired test was defined, we discussed the minimum requirements of the TUG device, which should reach at least the timing accuracy of a person trained in the performance of the TUG test. Only then could the test be performed without a healthcare professional.
We devised ten different scenarios, including the correct performance of TUG, some fail
cases detected during the development of the prototype and the most recurrent fail cases
monitored by our clinical partners. Figure 1 shows the different paths of the scenarios.
While green is the correct path, the red, purple and yellow paths are incorrect. The
different scenarios are described in Table 1.
3. Results
Prototype
To overcome reliability problems, our design aimed at gaining complete control over the architecture of the test setting and over the distance to the patient during the whole TUG process. The patient has to walk towards a physical barrier (a wall) 3.4 meters from the chair (so that they effectively walk 3 meters), touch it with both hands, come back and sit down. This setting prevents them from going further than 3 meters. The
continuous distance signal between the chair and the patient, together with pre-
established distance cut-off points, allows the monitoring of the critical phases of the test
and provides acoustic signals in response, guiding patients along the way. Furthermore,
this distance signal is stored enabling further analysis, verification of each TUG test and
To obtain a statement on whether the performed TUG test was done according to the
rules, we developed an algorithm which divides the area between the edge of the chair
and the wall into defined sub-areas that have to be passed through in the correct sequence (Figure 3). The areas are defined as follows:
• Area 1 from 1.1 m to 1.6 m,
• Area 2 from 1.7 m to 2.3 m,
• Area 3 from 2.4 m to 3.0 m and
• Area 4 from 3.2 m to 3.5 m.
The sequence for a correct test has to be 1-2-3-4-3-2-1. To avoid incorrect scenarios, there must be no distance value exceeding Area 4 (the Maximum Line). The gaps between the areas tolerate small vacillations of the user. In the case of scenarios 6 and 7 in Table 1, Area 4 can never be reached. If scenario 9 occurs, the values exceed Area 4, and the test is therefore detected as incorrect. Scenarios 8 and 10 are easily detected by a wrong order of the detected areas.
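The area-sequence check described above can be sketched in a few lines (the boundaries are those listed above; the function names and the synthetic distance signal are illustrative, not the authors' implementation):

```python
# Area boundaries in metres; the gaps between areas tolerate small
# back-and-forth movements of the user without breaking the sequence.
AREAS = {1: (1.1, 1.6), 2: (1.7, 2.3), 3: (2.4, 3.0), 4: (3.2, 3.5)}
MAX_LINE = 3.5                     # no sample may exceed Area 4
CORRECT_SEQUENCE = [1, 2, 3, 4, 3, 2, 1]

def area_of(distance):
    """Map one distance sample to an area number, or None if in a gap."""
    for area, (lo, hi) in AREAS.items():
        if lo <= distance <= hi:
            return area
    return None

def is_correct_tug(distances):
    """Classify a distance signal as a correctly performed TUG test."""
    if any(d > MAX_LINE for d in distances):
        return False               # value beyond the Maximum Line
    sequence = []
    for d in distances:
        a = area_of(d)
        if a is not None and (not sequence or sequence[-1] != a):
            sequence.append(a)     # record each area transition once
    return sequence == CORRECT_SEQUENCE

# A synthetic walk out to the wall and back:
signal = [0.5, 1.2, 1.8, 2.5, 3.3, 3.4, 2.6, 2.0, 1.3, 0.4]
print(is_correct_tug(signal))      # True for this synthetic signal
```

A signal that never reaches Area 4 (scenarios 6 and 7) or that exceeds the Maximum Line (scenario 9) would be rejected by the same check.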
Five healthy volunteers from our team participated in a lab study performing the TUG
test in a controlled setting. Each of them went through every scenario in Table 1.
Consequently, a total of 50 distance signals were collected and stored. The results from
correctly performed tests ranged from 4 to 12 seconds. Out of the 50 signals, 100%
(50/50) were successfully classified by the algorithm either as correctly performed or
failed trials. Figure 3 shows a correct signal of a TUG test from the lab study. Incorrect
as well as correct TUG signals of different subjects show close similarities. Feedback
from the subjects indicated that the device is easy to use and that the acoustic signals are
easy to hear.
4. Discussion
Our initial results clearly indicate that the development of a device for autonomous and
unsupervised functional assessment is possible. We have designed a tool that allows the
assisted temporary establishment of the TUG setting, the guidance over the test to the
patient and an algorithm that successfully verifies the correct performance. Other
researchers have suggested different approaches to monitor the TUG test based on
accelerometers, video records, or ambient sensors [36]. Indeed, some companies have
developed commercial products such as smartphone apps (iTUG) and gadgets (Actibelt).
However, these tools support neither controlling the TUG set-up nor giving complete guidance along the TUG process. Moreover, they do not allow verifying whether the test has been completed correctly. Although they provide complementary information on gait
pattern, forces and balance, our approach may offer advantages in terms of data reliability
and test simplicity in an unsupervised environment.
Using an ultrasonic sensor has both advantages and disadvantages. Thanks to the fixed wide beam angle, the user is not forced to walk exactly along a straight line to the wall. However, there must be enough free space to avoid detecting other objects. The ultrasonic echo of a solid obstacle is much stronger than that of a human body. This can lead to problems when the user stands next to the wall. Consequently, a suitable gain setting had to be found so that the human between the chair and the wall is always detected.
The algorithm was built on the information from 10 simple and well-defined scenarios, which, however, may not represent all the potential situations occurring
during real TUG tests. Hence, in the future we will test the device first in a supervised
clinical environment and later in the community, either at nursing homes or patients’
dwellings. Furthermore, additional information can be extracted from the data beyond
time and correctness. Potentially, it may be possible to detect the time needed to stand
up, turn around and sit down. In terms of the communication capabilities, a Bluetooth
connection will be established in future versions in order to extract the whole series of
distance measurements of each TUG test and make it available for extended analysis.
References
[1] Marengoni, A., Angleman, S., Melis, R., Mangialasche, F., Karp, A., Garmen, A., ... & Fratiglioni, L.
(2011). Aging with multimorbidity: a systematic review of the literature. Ageing research reviews, 10(4),
430-439.
[2] Clegg, A., Young, J., Iliffe, S., Rikkert, M. O., & Rockwood, K. (2013). Frailty in elderly people. The
Lancet, 381(9868), 752-762.
[3] McNallan, S. M., Singh, M., Chamberlain, A. M., Kane, R. L., Dunlay, S. M., Redfield, M. M., ... &
Roger, V. L. (2013). Frailty and healthcare utilization among patients with heart failure in the
community. JACC: Heart Failure, 1(2), 135-141.
[4] Kenis, C., Heeren, P., Decoster, L., Van Puyvelde, K., Conings, G., Cornelis, F., ... & Van Rijswijk, R.
(2016). A Belgian survey on geriatric assessment in oncology focusing on large-scale implementation
and related barriers and facilitators. The journal of nutrition, health & aging, 20(1), 60-70.
[5] Huang, F., Chang, P., Hou, I. C., Tu, M. H., & Lan, C. F. (2015). Use of a Mobile Device by Nursing
Home Residents for Long-term Care Comprehensive Geriatric Self-assessment: A Feasibility Study.
Computers Informatics Nursing, 33(1), 28-36.
[6] Chumbler, N. R., Mann, W. C., Wu, S., Schmid, A., & Kobb, R. (2004). The association of home-
telehealth use and care coordination with improvement of functional and cognitive functioning in frail
elderly men. Telemedicine Journal & E-Health, 10(2), 129-137.
[7] Von der Heidt, Andreas, et al. "HerzMobil Tirol network: rationale for and design of a collaborative
heart failure disease management program in Austria." Wiener klinische Wochenschrift 126.21-22
(2014): 734-741.
[8] Podsiadlo, D., & Richardson, S. (1991). The timed “Up & Go”: a test of basic functional mobility for
frail elderly persons. Journal of the American geriatrics Society, 39(2), 142-148.
[9] Al Haddad, M. A., John, M., Hussain, S., & Bolton, C. E. (2016). Role of the Timed Up and Go Test in
Patients With Chronic Obstructive Pulmonary Disease. Journal of cardiopulmonary rehabilitation and
prevention, 36(1), 49-55.
[10] Palmerini, L., Mellone, S., Avanzolini, G., Valzania, F., & Chiari, L. (2013). Quantification of motor
impairment in Parkinson's disease using an instrumented timed up and go test. Neural Systems and
Rehabilitation Engineering, IEEE Transactions on, 21(4), 664-673.
[11] Hwang, R., Morris, N. R., Mandrusiak, A., Mudge, A., Suna, J., Adsett, J., & Russell, T. (2015). Timed
Up and Go Test: A Reliable and Valid Test in Patients With Chronic Heart Failure. Journal of cardiac
failure.
[12] Benavent-Caballer, V., Sendín-Magdalena, A., Lisón, J. F., Rosado-Calatayud, P., Amer-Cuenca, J. J.,
Salvador-Coloma, P., & Segura-Ortí, E. (2015). Physical factors underlying the Timed “Up and Go” test
in older adults. Geriatric Nursing.
[13] Nocera, J. R., Stegemöller, E. L., Malaty, I. A., Okun, M. S., Marsiske, M., Hass, C. J., & National
Parkinson Foundation Quality Improvement Initiative Investigators. (2013). Using the Timed Up & Go
test in a clinical setting to predict falling in Parkinson's disease. Archives of physical medicine and
rehabilitation, 94(7), 1300-1305.
[14] De Vries, N. M., Staal, J. B., Van Ravensberg, C. D., Hobbelen, J. S. M., Rikkert, M. O., & Nijhuis-Van
der Sanden, M. W. G. (2011).
[15] Larburu, Nekane, et al. "An ontology for telemedicine systems resiliency to technological context
variations in pervasive healthcare." IEEE journal of translational engineering in health and medicine 3
(2015): 1-10.
Health Informatics Meets eHealth 305
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-305
1. Introduction
Diabetes Mellitus (DM) is a chronic disease that requires regular medical appointments,
during which treatment recommendations and changes in lifestyle are given by the
physician. However, it is in fact mostly the patient who controls the course of the disease.
Educators and physicians are responsible for only 5% of diabetes care; the other 95% consists of the patients' own responsibilities and choices about their eating habits, physical activity, stress control and monitoring of the disease [1]. These choices are important factors for their
health and well-being [2]. The extent to which patients follow these treatment
recommendations and guidelines is defined as adherence [3].
1 Corresponding Author: Daniel Vilsecker, Center for Medical Statistics, Informatics and Intelligent Systems, Institute of Medical Information Management, Medical University of Vienna, Austria, E-Mail: daniel.vilsecker@gmail.com.
306 D. Vilsecker et al. / Disease Monitoring Related Adherence and Its Association with Mortality
2. Methods
As our data source we used de-identified claims data of persons covered by the regional
health insurance carrier of Lower Austria (Niederösterreichische Gebietskrankenkasse,
NÖGKK) that were provided by the Main Association of Austrian Social Security
Institutions. The analysis of de-identified data is conformant with the Austrian data
protection law. The database holds data of about 70% of the population of Lower Austria
from the period between January 1, 2008 and December 31, 2011. The average
population of Lower Austria during this period was 1,605,885. About 30% of the
population was covered by other insurers and is therefore not covered in our database.
The following types of data were used in our study: (i) outpatient health services
data from practitioners coded by a local coding system of the Main Association of
Austrian Social Security Institutions, (ii) diagnoses documented during hospitalizations
and coded by the International Statistical Classification of Diseases and Related Health
Problems (ICD-10) [7], (iii) medication dispensing data coded by the Anatomical
Therapeutic Chemical (ATC) classification [8], and (iv) patient demographics (date of
birth, gender, date of death).
Subjects older than 18 years receiving DM-specific medication (ATC codes according
to Chini et al. [9]) between January 1, 2008 and December 31, 2011 were selected for
our study. We applied a retrospective cohort study design. The first prescription date of
a DM medication started a one-year “harvesting phase” during which adherence was
calculated. The “time at risk phase” marked the period between the end of the harvesting
phase and the end of available data of a patient during which mortality was analyzed.
The end of available data was December 31, 2011. Figure 1 depicts what these phases could look like for an example subject.
In patients with DM, glycated hemoglobin (also referred to as HbA1c) needs to be measured every three months to determine the average plasma glucose concentration [10]. Subjects were considered adherent if they had HbA1c tests performed regularly at three-month intervals (90 days) during the harvesting phase.
As our adherence metric, we calculated the fraction of the harvesting phase (i.e. one year) during which a subject was adherent. After an HbA1c test within the harvesting phase, we assumed the subject to be in the adherent state for 90 days. We then checked whether the subject had the next lab test within these 90 days. If so, the subject remained in the adherent state and the corresponding “deadline” was reset to 90 days from the date of the current lab test. If not, he/she switched to the non-adherent state after 90 days until the date of the next lab test. This procedure was reiterated until the end of the harvesting phase was reached, resulting in the number of days the subject was adherent during one year. To determine whether a subject started the harvesting phase in the adherent or non-adherent state, we checked whether an HbA1c test had been performed within the three months before the start of the harvesting period and, in the positive case, initialized the deadline with the remaining days of the 90-day period.
Figure 2 shows an example of the calculation. The subject started the harvesting phase in the adherent state with 59 days (90 - 31) remaining until the deadline for the next lab test. The distances between the tests were calculated; if a distance was greater than three months, the value was capped at 90 days. In the final step, the adherent days before the start of the harvesting period were subtracted from the sum of all adherent days. In this example the subject was adherent for 285 days (78%) and non-adherent for 80 days (22%).
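The day-counting procedure can be sketched as a simple day-by-day simulation (the dates, function name and subject are hypothetical illustrations, not the authors' implementation or the subject of Figure 2):

```python
from datetime import date, timedelta

# One HbA1c test keeps a subject in the adherent state for 90 days.
COVERAGE = timedelta(days=90)

def compute_adherent_days(start, test_dates, phase_days=365):
    """Count the days of the harvesting phase on which the subject is
    covered by an HbA1c test performed at most 90 days earlier
    (including tests performed shortly before the phase started)."""
    tests = sorted(test_dates)
    adherent = 0
    for offset in range(phase_days):
        day = start + timedelta(days=offset)
        if any(t <= day < t + COVERAGE for t in tests):
            adherent += 1
    return adherent

# Hypothetical subject: one test 31 days before the phase starts
# (59 covered days carry over), then two tests during the phase.
start = date(2009, 1, 1)
tests = [date(2008, 12, 1), date(2009, 3, 1), date(2009, 5, 15)]
print(compute_adherent_days(start, tests))   # 224 adherent days
```

A day-level simulation is slower than interval arithmetic but makes the state transitions described above easy to verify.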
The cumulative incidence of death was estimated in the total study population and
separately for different levels of adherence using the product-limit method. Mortality was compared between the different levels of adherence using the log-rank test. As software tools, we used Matlab for the descriptive statistics and R for the analysis of the association between adherence and mortality.
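The product-limit (Kaplan-Meier) estimate of the cumulative incidence of death can be illustrated with a minimal sketch on synthetic data (the authors used Matlab and R; this hand-rolled version is for illustration only):

```python
def cumulative_incidence(times, events):
    """Product-limit (Kaplan-Meier) estimate of the cumulative
    incidence of death, 1 - S(t), at each distinct death time.
    times  -- follow-up time of each subject
    events -- 1 if the subject died at that time, 0 if censored"""
    data = list(zip(times, events))
    survival = 1.0
    curve = []
    for t in sorted({tt for tt, e in data if e == 1}):
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        at_risk = sum(1 for tt, _ in data if tt >= t)
        survival *= 1 - deaths / at_risk    # product-limit update
        curve.append((t, round(1 - survival, 4)))
    return curve

# Five synthetic subjects: deaths at t = 1 and t = 3 (years),
# the other three censored at t = 2, 4 and 5.
print(cumulative_incidence([1, 2, 3, 4, 5], [1, 0, 1, 0, 0]))
# → [(1, 0.2), (3, 0.4667)]
```

Censored subjects leave the risk set without contributing a death, which is exactly what distinguishes the product-limit estimate from a naive event fraction.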
3. Results
The median follow up time was 2.6 years. About 10% of patients (N=5,232) died during
the follow up time. The cumulative incidence of mortality after one and two years was
4.2% and 8.7%, respectively. Figure 4 shows the cumulative incidence of mortality for
patients with at most 12 adherent days (N=18,656), between 13 and 163 adherent days
(N=18,645) and more than 163 adherent days (N=18,572), respectively. Patients with
low adherence had a significantly higher risk of mortality than patients with high
adherence (p<0.001).
Figure 4. Cumulative incidence of mortality for patients with at most 12 adherent days (N=18,656), between
13 and 163 adherent days (N=18,645) and more than 163 adherent days (N=18,572), respectively.
4. Discussion
The identification of DM patients via their medication data has been shown to yield
satisfying results [9] and avoids problems resulting from inaccurate coding of diagnoses
[11].
The large size of our study population represents a strength of our study. As it covers
about 70% of Lower Austria’s diabetes patients2, our study’s results are representative
for this province. They are also in line with the results of earlier work [4-5].
However, our study is also limited in several respects. First of all, the underlying
study data were originally collected for billing purposes and their suitability for being
reused for research purposes may be questioned in general [12].
One particular data quality problem we had to cope with was that we did not have accurate data on when a subject dropped out of the insurance coverage of the NÖGKK. Recognizing that a patient had left the NÖGKK was, however, important for us, as we lacked data on HbA1c tests after such an event3. We therefore tried to identify the end of insurance coverage heuristically by looking for implausibly long periods in which not a single health service was documented for a patient. Even though DM patients may be assumed to receive health services rather frequently, the heuristic may in some cases have led to an inadvertent exclusion of patients from the study population.
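Such a gap heuristic might look as follows (the 180-day threshold and all names are invented for illustration; the paper does not state its actual cut-off):

```python
from datetime import date, timedelta

MAX_PLAUSIBLE_GAP = timedelta(days=180)   # illustrative threshold only

def end_of_coverage(service_dates, data_end):
    """Heuristically truncate follow-up at the last documented health
    service before an implausibly long service-free period."""
    dates = sorted(service_dates)
    for prev, nxt in zip(dates, dates[1:]):
        if nxt - prev > MAX_PLAUSIBLE_GAP:
            return prev        # assume the subject left coverage here
    if data_end - dates[-1] > MAX_PLAUSIBLE_GAP:
        return dates[-1]       # long silent tail before the data end
    return data_end

services = [date(2010, 1, 5), date(2010, 3, 2), date(2010, 4, 20)]
print(end_of_coverage(services, date(2011, 12, 31)))   # 2010-04-20
```

The threshold trades sensitivity (catching true drop-outs) against the risk of excluding genuinely low-utilization patients noted above.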
Further, the high number of patients who did not have a single HbA1c test during their harvesting phase is surprising. This may partially be explained by the fact that our database lacks data from outpatient clinics. According to the Austrian diabetes report4, type II diabetes patients, who represent about 90% of all diabetes patients, are mostly treated by practitioners. Outpatient diabetes clinics are primarily visited only by the smaller group of type I patients. Even though HbA1c tests will usually be performed by practitioners, due to easier accessibility and shorter waiting times compared to an outpatient clinic, we might still have missed some HbA1c tests due to this shortcoming of the database.
2 Diabetes patients receiving a non-pharmaceutical therapy are not included.
3 Dates of death were always available as they originated from a different source.
4 See http://www.oedg.at/pdf/diabetesbericht_2013.pdf
HbA1c tests performed during a hospital stay are not visible in the database, as they are financed as part of an all-inclusive reimbursement of the whole stay. To compensate for this lack of data, we assumed that an HbA1c test was performed during each hospital stay with a diabetes-specific diagnosis. We assumed these “implicit” tests to be performed as a standard procedure on the day of admission.
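This imputation step amounts to merging the admission dates of diabetes-coded hospital stays into the set of observed lab test dates (a sketch with hypothetical names and dates):

```python
from datetime import date

def with_implicit_tests(lab_tests, dm_admissions):
    """Merge assumed admission-day HbA1c tests from diabetes-coded
    hospital stays into the observed lab test dates."""
    return sorted(set(lab_tests) | set(dm_admissions))

lab_tests = [date(2009, 2, 1), date(2009, 8, 3)]
dm_admissions = [date(2009, 5, 10)]
print(with_implicit_tests(lab_tests, dm_admissions))
```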
For the calculation of the disease monitoring related adherence, only the HbA1c test was used. This is just one aspect of calculating this type of adherence.
At this stage of the analysis, we can only provide the preliminary results of a
univariate analysis. To rule out the effects of potential confounding variables, we plan to perform an additional multivariable analysis, in which we will adjust for age, gender, and comorbidities (estimated from diagnoses and received medication). The results will be presented at the conference.
5. Conclusion
References
[1] M. M. Funnell and R. M. Anderson, "The Problem with Compliance in Diabetes," MSJAMA, vol. 284, no. 13, pp. 1704-1709, 2000.
[2] R. M. Anderson and M. M. Funnell, "Compliance and Adherence are Dysfunctional Concepts in
Diabetes Care," The Diabetes Educator, vol. 26, no. 4, pp. 597-604, July/August 2000.
[3] J. A. Cramer, A. Roy, A. Burrell, C. J. Fairchild, M. J. Fuldeore, D. A. Ollendorf and P. K. Wong,
"Medication Compliance and Persistence: Terminology and Definitions," Value in Health, vol. 11, no.
1, pp. 44-47, 2008.
[4] M. R. DiMatteo, P. J. Giordani, H. S. Lepper and T. W. Croghan, "Patient Adherence and Medical
Treatment Outcomes: A Meta Analysis," Medical Care, vol. 40, no. 9, pp. 794-811, 2002.
[5] H. Fukuda and M. Mizobe, "Impact of nonadherence on complication risks and healthcare costs in
patients newly-diagnosed with diabetes," Diabetes Research and Clinical Practice, vol. 123, no. January
2017, pp. 55-62, January 2017.
[6] D. E. Morisky, A. Ang, M. Krousel-Wood and H. J. Ward, "Predictive Validity of a Medication
Adherence Measure in an Outpatient Setting," JCH - The Journal of Clinical Hypertension, vol. 10, no.
5, pp. 348-354, 2 May 2008.
[7] World Health Organization, ICD-10 International Statistical Classification of Diseases and Related
Health Problems, 10 ed., vol. 2, Geneva: World Health Organization, 2010.
[8] World Health Organization Collaborating Centre for Drug Statistics Methodology, "Guidelines for ATC
classification and DDD assignment 2016," Oslo, 2016.
[9] F. Chini, P. Pezzotti, L. Orzella, P. Borgia and G. Guasticchi, "Can we use the pharmacy data to estimate
the prevalence of chronic conditions? a comparison of multible data sources," BMC Public Health , vol.
11, no. 688, 2011.
[10] K. Miedema, "Standardization of HbA1c and optimal range of Monitoring," Scandinavian Journal of
Clinical and Laboratory Investigation, vol. 65, no. Sup240, pp. 61-72, 2005.
[11] K. O'Malley, K. Cook, M. Price, K. Wildes, J. Hurdle and C. Ashton, "Measuring diagnoses: ICD code
accuracy," Health Serv Res, vol. 40 (5 Pt 2), pp. 1620 - 39, 2005.
[12] H. Nathan and T. M. Pawlik, "Limitations of Claims and Registry Data in Surgical Oncology Research,"
Annals of Surgical Oncology, vol. 15, no. 2, pp. 415-423, 2008.
Health Informatics Meets eHealth 311
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-311
Comparison of Control Group Generating Methods
Szabolcs SZEKÉRa, György FOGARASSYb and Ágnes VATHY-FOGARASSYa,1
a Department of Computer Science and Systems Technology, University of Pannonia, Hungary
b State Hospital for Cardiology, Balatonfüred, Hungary
1. Introduction
In observational medical studies, the goal of the analysis is to identify and evaluate
causes of diseases and adverse medical events, or to analyze the effect of specific risk
factors on the outcome to be analyzed. The interpretation of results is generally based
on a comparative analysis between two independent groups of patients. Ideally, these
groups are very similar to each other, but they differ in a certain characteristic that is
the focus of the study. For example, if we want to evaluate the effect of smoking on lung
cancer, then we have to compare the results of smoking patients (case group) with the
results of non-smoking individuals (control group).
In cohort studies, subjects are selected by their exposure status and they are followed
over a period of time until the outcome of analysis occurs. Because these studies have a
temporal framework to assess the causality of the influencing factors, they have the
potential to provide the strongest scientific evidence [1]. Cohort studies can be classified
as prospective and retrospective studies. Although the methodology of prospective and
retrospective cohort studies is fundamentally the same, the study design in these cases is
very different because of the different implementation methods [2].
In the case of prospective studies, two groups of individuals (the exposed case group and
the unexposed control group) are selected on the basis of factors that are to be examined
for possible effects on the outcome. Prospective studies are performed from the present
into the future, and accordingly data are collected and recorded during the whole
follow-up period.
1 Corresponding Author: Ágnes Vathy-Fogarassy, Department of Computer Science and Systems Technology, University of Pannonia, 2. Egyetem Str., 8200 Veszprém, Hungary, E-Mail: vathy@dcs.uni-pannon.hu
312 S. Szekér et al. / Comparison of Control Group Generating Methods
In retrospective studies, participating individuals can be selected not only on the basis
of their exposure status, but they can also be classified as either having some outcome
(case group) or lacking it (control group). In retrospective studies, participants are
selected in the present, but the examined events took place in the past. Accordingly,
data about influencing factors and relevant features were also measured and recorded in
the past.
Both study types have advantages and disadvantages. For example, due to the long
follow-up period, prospective cohort studies generally require many years. In contrast,
this is substantially reduced in retrospective studies, given that data collection
happened in the past and the required study time only includes the time of the analysis.
However, while in a well-designed prospective study the selection of the participating
individuals into case and control groups can be designed and performed in a
predetermined manner, in retrospective studies this cannot be done in such a way.
Generally, the selection of the case group can be carried out based on the study aims,
but the determination of the control group is difficult and raises many questions [3].
Wacholder and his coworkers formulated three principles for the selection of the control
group in case-control studies [4]. In this work, comparability is defined as follows: (1)
all comparisons must be made within the study base, (2) the comparison of the effects of
the levels of exposure on the outcome must not be distorted by the effects of other
factors, and (3) any errors in the measurement of exposure must be non-differential
between cases and controls (comparable accuracy). Ensuring these principles is not a
simple task. If control group selection is not appropriate, the third principle of
comparability is violated. In retrospective studies, the investigator has limited control
over data collection and the maximum size of the analyzable population is predetermined,
so the control group selection principles may be violated and significant biases may
affect the selection of controls.
In the literature, many different methods have been proposed for selecting control
groups for case-control studies. In the simplest case, the individuals of the control
group are selected by stratified sampling (SS) [5]. In these cases, strata are defined
based on the predictive variables. This methodology is adequate if the set of possible
candidates is large enough and the distribution in each stratum is appropriate.
Otherwise, the selection of individuals for the control group cannot be performed
properly.
Using balancing scores such as the propensity score (PS) offers an alternative solution
for selecting proper individuals into the control group. The propensity score is the
probability of treatment assignment conditional on the observed baseline characteristics.
There are numerous ways of utilizing the PS in control group selection, such as matching,
stratification, inverse probability weighting with the PS as weight, or covariate
adjustment [6]. The most popular may be propensity score matching (PSM), which addresses
the abovementioned problem by matching treated and untreated subjects who share a
similar value of the PS. The weakness of PSM comes from its attempt to approximate a
completely randomized experiment. This property makes PSM blind to often large
imbalances that can be eliminated by approximating full blocking with other matching
methods. Moreover, for adequately balanced data, PSM approximates random matching, which
increases imbalance even relative to the original data [7].
In this paper, we present two novel nearest neighbor based control selection methods to
solve the problem of control group selection. In contrast to propensity score matching,
the suggested methods do not consider the influencing effect of the observed covariates.
Our aim was to develop control group selection methods that ensure the same
distributions of all measured variables in the control group as in the case group.
Stratified sampling likewise considers all measured variables, not only the influencing
ones. Thus, the efficiency of the proposed methods was compared to the well-known
stratification-based method and to the basic nearest neighbor based selection method.
Our tests show that the suggested methods may outperform the classical methods.
The structure of the paper is the following. Section 2 introduces the well-known and
newly developed methods. In Section 3 the results of the comparative tests are presented.
Finally, Section 4 concludes the paper.
2. Methods
The retrospective approach entails various constraining factors, given that data
collection took place in the past. The number of variables available for the analysis is
limited, and therefore it is difficult to take into account the effect of possible
confounding variables. This makes control selection a nontrivial problem. Before
presenting some solutions for this problem, let us introduce the following notation.
Given a population P characterized by the variables f_1, f_2, ..., f_n (n ∈ ℕ) (for
example, age, gender, etc.). P denotes the group of individuals of the retrospective
study that are investigated. Denote by P_A the case group, which is a subset of P
(P_A ⊂ P). Our goal is to determine a control group P_B (P_B ⊂ P, P_A ∩ P_B = ∅) in
which the distributions of f_1, f_2, ..., f_n are the same as in the case group P_A.
P_A ∩ P_B = ∅ means that the individuals of the control group must be different from the
individuals of the case group. The elements of the control group are selected from the
set P_C = P \ P_A, which we call the candidate subpopulation.
Traditional methods of control group selection often utilize simple random sampling or
stratified sampling. As random sampling does not consider the similarity of the case
group and the control group with respect to the investigated variables, we do not deal
with this method in detail.
A more sophisticated method is stratified sampling. Stratified sampling divides the
members of the population into homogeneous subgroups (strata) before sampling, reducing
the sampling error. Every element in the population must be assigned to one and only one
stratum based on its values of the variables f_1, f_2, ..., f_n. The elements of the
control group are selected from these strata based on the frequencies of the individuals
with these values in the case group. The main problem of stratified sampling lies within
the strata. On the one hand, if the number of variables and their recorded values are
numerous, we have to generate an exponentially large number of strata. On the other hand,
if the size of a stratum is inadequate (that is, the stratum does not contain enough
individuals from the candidate subpopulation), we cannot select enough individuals from
that stratum into the control group. Consequently, if the constraints on the size of the
control group are unsatisfied, the result will be biased.
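As an illustration, the stratified selection described above can be sketched in a few lines. This is a minimal sketch under assumed data structures (individuals as dicts, strata keyed by the tuple of variable values), not the authors' implementation; it also exposes the undersized-stratum failure mode discussed above.

```python
import random
from collections import Counter

def stratified_control_selection(cases, candidates, variables, seed=0):
    """Select a control group whose strata frequencies match the case group.

    cases, candidates: lists of dicts mapping variable name -> value.
    variables: the variable names f_1..f_n that define the strata.
    Returns fewer controls than cases when a stratum is undersized.
    """
    rng = random.Random(seed)
    key = lambda ind: tuple(ind[v] for v in variables)
    # Required number of controls per stratum = case-group frequency.
    needed = Counter(key(ind) for ind in cases)
    # Group the candidate subpopulation into the same strata.
    pools = {}
    for ind in candidates:
        pools.setdefault(key(ind), []).append(ind)
    controls = []
    for stratum, n in needed.items():
        pool = pools.get(stratum, [])
        # If the stratum is too small, we can only take what is there,
        # so the size constraint on the control group may be violated.
        controls.extend(rng.sample(pool, min(n, len(pool))))
    return controls
```

With three (age 60, male) cases but only two such candidates, the returned control group is smaller than the case group, exactly the biased outcome described in the text.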
Another way to approach the problem is nearest neighbor based control group formation.
The k nearest neighbor based method may offer a better solution for the problem. Let us
consider each element of the population as a data point in the n-dimensional space,
where each relevant variable (f_1, f_2, ..., f_n) represents a unique dimension. In this
case, the problem of control group selection is translated into a distance-minimization
problem. To find proper individuals for the control group, we have to select those
individuals from the candidate subpopulation that lie close to the individuals of the
case group. Namely, if individuals are close to each other in the n-dimensional space,
they are similar to each other as well. The concept of closeness can be defined in
different ways. In our research, the distance between individuals was calculated as the
weighted distance of the different types of variables [6]. Naturally, this method does
not guarantee that matched individuals will coincide on all features, but the degree of
similarity can be expressed as a function of distance.
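The weighted mixed-type distance is not spelled out here, so the following is only one plausible form, a hypothetical sketch (the weights and the 0/1 categorical mismatch are our assumptions, not the definition from [6]): numeric variables contribute weighted absolute differences, categorical variables a weighted mismatch indicator.

```python
def weighted_distance(x, y, weights):
    """One possible weighted distance for mixed-type records (assumption).

    x, y: tuples of feature values; weights: one weight per feature.
    Numeric features contribute |a - b| * w, categorical ones (a != b) * w.
    """
    total = 0.0
    for a, b, w in zip(x, y, weights):
        if isinstance(a, (int, float)) and isinstance(b, (int, float)):
            total += w * abs(a - b)   # numeric: scaled absolute difference
        else:
            total += w * (a != b)     # categorical: scaled 0/1 mismatch
    return total
```

For example, with weights (0.1, 1.0), two individuals differing by 10 years of age and in gender are at distance 0.1 * 10 + 1.0 * 1 = 2.0.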
In the simplest case, to select the individuals into the control group we have to find
the closest element from the candidate set for each individual in the case group. More
formally, an adequate control group can be achieved by calculating the distance between
each x_i = (x_1, x_2, ..., x_n) ∈ P_A (i = 1, 2, ..., N_A) and
y_j = (y_1, y_2, ..., y_n) ∈ P_C (j = 1, 2, ..., N_C), and selecting for each x_i the
y_j for which the distance d(x_i, y_j) is minimal. The notation N_A represents the
number of individuals in the case group and N_C stands for the number of candidate
individuals. The main steps of this basic nearest neighbor based control group selection
algorithm can be summarized as follows:
2.3. Algorithm 1: Nearest Neighbor based Control Group Selection method (NNCS)
Step 1: Calculate the distance for each pair of individuals from the case group and the
candidate group.
Step 2: Select the nearest neighbor from the candidate subpopulation for each individual in
the case group into the control group.
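The two steps above can be sketched minimally as follows, with a plain Euclidean distance standing in for the paper's weighted mixed-type distance (an assumption for the sake of a self-contained example):

```python
import math

def euclidean(x, y):
    # Plain Euclidean distance; the paper uses a weighted mixed-type distance.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def nncs(case_group, candidates, dist=euclidean):
    """Algorithm 1 (NNCS) sketch: pick, for each case, its nearest candidate.

    case_group, candidates: lists of n-dimensional tuples.
    Note: the same candidate may be returned for several cases.
    """
    control = []
    for x in case_group:
        nearest = min(candidates, key=lambda y: dist(x, y))
        control.append(nearest)
    return control
```

Running this on two nearby cases and one nearby candidate returns that candidate twice, which is exactly the uniqueness problem the text discusses next.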
However, selecting individuals in this way violates the uniqueness of the elements of
the matched control group. An individual from the control group can belong to more than
one patient in the case group, which violates the aforementioned size constraint. For
this reason, we have developed two extended nearest neighbor based algorithms, which
ensure the uniqueness of the elements in the control group.
The Extended Nearest Neighbor based Control Group Selection Method (ENNCS) ensures the
uniqueness of the control group by eliminating conflicts. A conflict occurs when an
individual in the candidate subpopulation (y_j ∈ P_C) is selected as the nearest
neighbor of more than one individual in the case group (x_i1, x_i2, ... ∈ P_A). In such
a scenario, the extended version of the nearest neighbor based algorithm assigns y_j to
the element x_ik in the case group for which d(x_ik, y_j) is minimal. To find a proper
pair for the unmatched individuals in the case group, the next nearest neighbor is
selected. This iterative process is repeated until a unique element of the candidate set
has been selected for each individual in the case group. The algorithm can be summarized
as follows:
2.4. Algorithm 2: Extended Nearest Neighbor based Control Group Selection method
(ENNCS)
Step 1: Calculate the distance for each pair of individuals from the case group and the
candidate group.
Step 2: Select the nearest neighbor from the candidate subpopulation for each individual in
the case group into the control group and delete it from the candidate group. Mark
all individuals in the case group as matched.
Step 3: If an element from the candidate subpopulation was selected as the nearest neighbor
for more than one person in the case group, then assign this candidate element as the
matched pair to the closest individual in the case group. Mark all other individuals
in the case group, for which this candidate element was the closest pair, as unmatched.
Step 4: Delete the matched elements from the case group, and repeat from Step 2.
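The conflict-resolution loop of ENNCS can be sketched as follows; again a plain Euclidean distance stands in for the paper's weighted distance, and the index bookkeeping is our own (a sketch, not the authors' implementation):

```python
import math

def euclidean(x, y):
    # Plain Euclidean distance; the paper uses a weighted mixed-type distance.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def enncs(case_group, candidates, dist=euclidean):
    """Algorithm 2 (ENNCS) sketch: unique nearest-neighbor matching.

    Rounds of nearest-neighbor proposals; a candidate claimed by several
    cases goes to the closest one, and the losers retry in the next round.
    Requires len(candidates) >= len(case_group).
    """
    unmatched = list(range(len(case_group)))   # indices into case_group
    pool = list(range(len(candidates)))        # indices into candidates
    matches = {}                               # case index -> candidate index
    while unmatched:
        claims = {}
        for i in unmatched:
            # Each unmatched case proposes its nearest remaining candidate.
            j = min(pool, key=lambda j: dist(case_group[i], candidates[j]))
            claims.setdefault(j, []).append(i)
        unmatched = []
        for j, claimants in claims.items():
            # Conflict resolution: the closest case wins the candidate.
            winner = min(claimants,
                         key=lambda i: dist(case_group[i], candidates[j]))
            matches[winner] = j
            pool.remove(j)
            unmatched.extend(i for i in claimants if i != winner)
    return [candidates[matches[i]] for i in range(len(case_group))]
```

Unlike the basic NNCS sketch, two nearby cases competing for the same candidate are resolved: the closer case keeps it and the other receives its next nearest neighbor, so the control group stays unique.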
Here x_i is an individual from the case group with y_j as its nearest neighbor, and
nn^(2)(x_i) denotes the second nearest neighbor of x_i in the candidate group. If an
individual y_j from the candidate group is the nearest neighbor of more than one person
in the case group, then the NNCSE algorithm matches this individual to the x_i for which
Err(x_i, y_j) is minimal. In short, a selected candidate is assigned to the individual
in the case group whose next closest neighbor is farther away. By doing so, the overall
error becomes minimal.
2.5. Algorithm 3: Nearest Neighbor based Control Group Selection method with Error
Minimization (NNCSE)
Step 1: Calculate the distance for each pair of individuals from the case group and the
candidate group.
Step 2: Select the nearest neighbor from the candidate subpopulation for each individual in
the case group into the control group and delete it from the candidate group. Mark
all individuals in the case group as matched.
Step 3: If an element y_j from the candidate subpopulation was selected as the nearest
neighbor of more than one person in the case group, then assign this candidate
element as the matched pair to the individual x_i in the case group for which
Err(x_i, y_j) is minimal. Mark all other individuals in the case group, for which
this candidate element was the closest pair, as unmatched.
Step 4: Delete the matched elements from the case group, and repeat from Step 2.
Figure 1. Runtime results of the different control group selection methods (size of case
group: 1000 people; desired size of control group: 1000 people)
3. Results
3.1. Runtime
Runtime tests were performed to evaluate the time required to generate the control group.
As a reference, we used the stratified sampling (SS) method. In the case of SS, the
required times for the scenarios, ordered by the size of the population, were as
follows: 76.86 ms, 140.48 ms, 267.61 ms, 648.75 ms, and 1280.78 ms. We can see that the
runtime is a linear function of the size of the population.

Figure 2. Selection error and the size of the control group in the case of SS with
increasing population size (size of case group: 1000 people; desired size of control
group: 1000 people)

Figure 3. Selection error for SS, ENNCS and NNCSE with increasing population size from
5000 to 50000 (P5000, P10000, P20000, P50000)
Of the nearest neighbor based methods, NNCS was the fastest algorithm. It is roughly
17% faster than SS, 38% faster than ENNCS and 45% faster than NNCSE. ENNCS and NNCSE,
by ensuring that the size of the selected control group is adequate, are the slowest
methods. However, it is important to notice that, while being the slowest ones, their
runtimes are still under 2 seconds even in the worst case. Runtime results can be seen
in Figure 1.
3.2. Precision
The precision of the methods was evaluated based on a selection error (SE) that measures
the bias of the resulting control group and takes into account the number of individuals
in each stratum. SE was computed as follows:

SE(P_A, P_B) = ( Σ_i Σ_j | |f_ij^(P_A)| − |f_ij^(P_B)| | − |N_A − N_B| ) / (2 N_A)   (2)

where |f_ij^(P_A)| yields the number of individuals characterized by the j-th value of
the i-th feature in the case group, and analogously |f_ij^(P_B)| in the control group.
If the distribution of the generated control group is the same as the distribution of
the case group, then the selection error is 0.
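The selection error can be computed directly from per-feature value counts; a minimal sketch, assuming each individual is represented as a tuple of feature values:

```python
from collections import Counter

def selection_error(case_group, control_group):
    """Selection error (Eq. 2): summed per-feature value-count differences,
    corrected for the size difference of the groups, normalized by 2*N_A.

    Both groups are lists of equal-length tuples of feature values.
    Returns 0 when the two value distributions coincide.
    """
    n_a, n_b = len(case_group), len(control_group)
    n_features = len(case_group[0])
    total = 0
    for i in range(n_features):
        # |f_ij^(P_A)| and |f_ij^(P_B)|: counts of the j-th value
        # of feature i in the case and control groups.
        counts_a = Counter(ind[i] for ind in case_group)
        counts_b = Counter(ind[i] for ind in control_group)
        for value in set(counts_a) | set(counts_b):
            total += abs(counts_a[value] - counts_b[value])
    return (total - abs(n_a - n_b)) / (2 * n_a)
```

For two equally sized groups that agree on one feature but disagree on one individual's value of a second feature, the count differences sum to 2 and the error is 2 / (2 * 2) = 0.5; identical groups yield 0.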
As mentioned before, the precision of stratified sampling is related to the size of the
population. Figure 2 shows that, by increasing the size of the population, the selection
error decreases, as expected. If the size of a stratum is inadequate, we cannot select
enough individuals from that stratum. For example, if the population contained 5000
individuals, the SS algorithm was only able to select 962 people into the control group
from that population. Therefore, the requirement formulated for the size of the control
group could not be met.
The basic NNCS method is not appropriate for generating control groups, as its selection
errors may be very high. For example, this algorithm selected only 526 people into the
control group from the population containing 5000 people, and this value did not exceed
600 even in the best case, with a selection error ranging between 0.223 and 0.311.
In the case of the ENNCS and NNCSE algorithms, the selection errors are lower than the
selection error of the SS method, even for smaller populations (see Figure 3). As these
algorithms always guarantee the required size of the control group, the selection error
arises only from some bias. With a population of 5000 individuals, the selection error
was around 0.002, which is around the value achieved by SS with a population of 200000
people. This value further decreases with increasing population size, reaching a
selection error of only 0.0004 for a population of 50000. These results confirm our
initial expectation that control group generation with the improved versions of nearest
neighbor based selection can be made substantially more effective.
4. Conclusion
In this article, we have proposed two nearest neighbor based control group generating
methods. These methods place patients in an n-dimensional space according to the n
characterizing variables. The individuals are selected into the control group from a
candidate subpopulation as a function of their distance from the patients in the case
group. While the proposed Extended Nearest Neighbor based Control Group Selection Method
(ENNCS) takes into account only the first neighbors of the individuals, the Nearest
Neighbor based Control Group Selection Method with Error Minimization algorithm (NNCSE)
looks further and optimizes the selection error locally. The efficiency of the proposed
ENNCS and NNCSE algorithms was compared to classical stratified sampling and to the
basic nearest neighbor selection method. The results show that the ENNCS and NNCSE
algorithms offer a reasonable alternative to stratified sampling, and that the basic
version of the nearest neighbor selection method is not appropriate for control group
generation. Although stratified sampling is a very fast algorithm, it does not guarantee
the predefined size of the control group. In contrast, the ENNCS and NNCSE algorithms
can achieve this at almost the same speed, but with better precision even for a smaller
population. Our future goals include the evaluation of ENNCS and NNCSE by comparing them
to a PSM based implementation.
Acknowledgement
This publication has been supported by the Hungarian Government through the project
VKSZ 12-1-2013-0012 - Competitiveness Grant: Development of the Analytic
Healthcare Quality User Information (AHQUI) framework.
References
[1] Everitt BS, Palmer CR., Encyclopaedic Companion to Medical Statistics, Hodder Arnold, London, 2005.
[2] Song, Jae W., Kevin C. Chung, Observational Studies: Cohort and Case-Control Studies, Plastic and
reconstructive surgery 126(6) (2010), 2234–2242.
[3] Koepsell, Thomas D., and Noel S. Weiss, Epidemiologic methods: studying the occurrence of illness,
Oxford University Press (UK), 2014.
[4] Sholom Wacholder, Joseph K. McLaughlin, Debra T. Silverman, and Jack S. Mandel, Selection of
Controls in Case-Control Studies, American Journal of Epidemiology 135(9) (1992), 1019-1028.
[5] Jewell NP., Least squares regression with data arising from stratified samples of the dependent variable,
Biometrika 72 (1985), 11-21.
[6] Peter C. Austin, An Introduction to Propensity Score Methods for Reducing the Effects of Confounding
in Observational Studies, Multivariate Behav Res 46(3) (2011), 399-424.
[7] Gary King, Richard Nielsen, Why Propensity Scores Should Not Be Used for Matching. 1st ed.
Available at: j.mp/psnot (Accessed: 2 January 2017)
Health Informatics Meets eHealth 319
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-319
Abstract. Background: With the increasing number of HIV-positive people, the use
of information and communication technologies (ICT) can play an important role in
controlling the spread of AIDS. The web and mobile devices are new technologies that
young people take advantage of. Objectives: In this study, a review investigating web
and mobile based HIV prevention and intervention programs was carried out. Methods: A
scoping review was conducted, including PubMed, Science Direct, Web of Science and
ProQuest, to find relevant sources published from 2009 to 2016. To identify published,
original research that reported web and mobile-based HIV prevention and intervention
programs, an organized search was conducted with the following search keywords in
combination: HIV, AIDS, m-Health, Mobile phone, Cell phone, Smartphone, Mobile health,
internet, and web. Results: Using the employed strategies, 173 references were
retrieved. The retrieved articles were compared based on their titles and abstracts, and
101 duplicated references were excluded. By going through the full text of the remaining
72 papers, 35 articles were found to be related to the questions of this paper and were
finally included. Conclusion: The advantages of web and mobile-based interventions
include the possibility of consistency in the delivery of an intervention, potentially
low cost, and the ability to spread the intervention to an extensive community. Online
programs such as chat room-based education programs, web-based therapeutic education
systems, and online information seeking can be used for HIV/AIDS prevention. For
mobile-based HIV/AIDS prevention and intervention, programs including health system
focused applications, population health focused applications, and health messaging can
be used.
1. Introduction
In recent decades, the prevalence of AIDS has been expanding, and consequently its
control has become a challenge for human progress [1]. With the growth of AIDS
outbreaks, one of the approaches found helpful to improve the quality of health care and
to increase the effectiveness and efficiency of health care service organizations is the
use of Information Technology (IT) [2,3], which can play an important role in
controlling the spread of the disease. It should also be noted that prevention,
treatment, care and the protection of HIV-positive persons' rights are key factors in
AIDS intervention programs [4,5]. Mobile health and smartphones are important
achievements of IT that can be used by people with long-term conditions such as AIDS to
carry out remote health monitoring and self-management strategies [6-8].
1 Corresponding author: Esmaeil MEHRAEEN, Tehran University of Medical Sciences, Qhods Square, Post Code: 7134845794, Tehran, Iran. E-mail: es.mehraeen@gmail.com.
320 S. Niakan et al. / Web and Mobile Based HIV Prevention and Intervention Programs
AIDS treatment includes steps that must be performed and followed by the patients
themselves, and it must be carried out continuously for a long time [9,10]. When a
patient does not take his/her drugs in a timely manner, resistance of the HIV virus
against the medication becomes common; as a result, the drugs have to be administered at
higher and higher doses, which leads to increased costs and side effects. On the other
hand, another important point about AIDS patients is the on-time taking of prescribed
medications, and mobile-based prevention and intervention programs can offer timely
reminders for medication, exercise or food regimens [11-13].
With the rise of smart phone technology, there are new opportunities to consider the
role of social media in HIV care and prevention. Smart phones have successfully been
used to support aspects of HIV care and treatment. For example, mobile phone text
message interventions significantly improve adherence to antiretroviral therapy [14-16].
In addition to the primary uses of mobile technology such as talking and text messaging
(SMS), a growing proportion of users are now using mobile devices capable of accessing
the web and the Internet [17,18]. Furthermore, with increased access to mobile technologies,
there are more opportunities for the application of mobile health (m-health) strategies in
the delivery of HIV prevention and intervention programs [19,20]. Prevention and
intervention programs are most effective when they include easily usable, individually
tailored, multifunctional adherence tools familiar to the patient, which do not alert others
[21,22]. Once developed, web and mobile-based interventions may be relatively
inexpensive to implement and deliver via the internet [23,24]. In this paper, we undertook
a scoping review on new and emerging research on HIV related Web and mobile–based
prevention and interventions.
2. Methods
The aim of this study is to review web and mobile-based HIV prevention and intervention
programs. In this paper, we conduct a scoping review of prevention and intervention
programs for HIV-positive people that use web or mobile technology. The research
questions were: 1. What are the web-based HIV/AIDS prevention programs? 2. What are the
web-based HIV/AIDS intervention programs? 3. What are the mobile-based HIV/AIDS
prevention programs? 4. What are the mobile-based HIV/AIDS intervention programs?
In this phase, we searched PubMed, Science Direct, Web of Science and ProQuest to find
relevant sources published from 2009 to 2016. To identify published, original research
that reported web and mobile-based HIV prevention and intervention programs, an
organized search was conducted with the following search keywords in combination: HIV,
AIDS, m-Health, Mobile phone, Cell phone, Smartphone, Mobile health, internet, and web.
In this study, we considered the following criteria to select the relevant sources.
Keywords: only full-text papers with the keywords in the title or abstract were
selected. Date of publication: the studies published between 2009 and 2016 were
reviewed. Language: only studies published in English were searched. Type of study:
research and review articles were reviewed, and we excluded resources such as reports,
editorial letters, newspapers, and other types of sources.
3. Results
In this study, using the strategies described above, 173 references were retrieved. The
retrieved articles were compared based on their titles and abstracts, and 101 duplicated
references were excluded. By going through the full text of the remaining 72 papers, 35
articles were found to be related to the questions of this paper and were finally
included (Table 1).
In this paper, we review HIV/AIDS prevention and intervention programs that use web and
mobile technologies (Tables 2, 3). Table 4 shows the final articles that were used for
the analysis and investigation of the research questions.
4. Discussion
Internet communities, news and discussion groups, Chat room-based programs, e-mail
lists, websites, online programs and electronic reports are examples of Web-Based
applications that offer unique opportunities for electronic communication [25,26].
Table 3. Utility of web and mobile for HIV prevention and intervention
              Web-based                                     Mobile-based
Prevention    Knowledge promotion                           Education
Intervention  Stigma and social relationship management     Self-management, physical activity tracking, reminding
Table 4. Summary of final studied articles and their relevance to the research questions (RQ)
Results related to RQ
ID First Author Year Reference
1 2 3 4
1 Luenen SV 2016 40
2 Henry BL 2016 58
3 Jongbloed K 2016 51
4 Marsch LA 2015 29
5 Billings DW 2015 39
6 Montoya JA 2015 56
7 Mushamiri L 2015 48
8 Tufts KA 2015 53
9 Cote J 2015 38
10 Blackstock OJ 2015 42
11 Blackstock OJ 2015 43
12 L'Engle KL 2015 61
13 Villegas N 2015 31
14 Rose CD 2015 41
15 Forrest JI 2015 16
16 Kessler SF 2015 44
17 Jacobs RJ 2014 35
18 Villegas N 2014 30
19 Montoya JL 2014 62
20 Odeny TA 2014 14
21 Ybarra ML 2014 64
22 Danielson CK 2013 23
23 Miller CWT 2013 55
24 Catalani C 2013 46
25 Marhefka SL 2013 37
26 Belzer ME 2013 63
27 Kasatpibal N 2012 32
28 Ybarra M 2012 26
29 Muessig KE 2012 60
30 Rosser BRS 2011 27
31 Cornelius JB 2011 49
32 Hong Y 2011 34
33 Chang LW 2011 59
34 Rhodes S 2010 28
35 Bull S 2009 25
delivery, and noted that this technology can be used by healthcare centers to educate
consumers and promote their knowledge [49]. Much of the existing evidence on mobile-based
HIV/AIDS prevention programs has focused on improving HIV knowledge by providing HIV
prevention messages [50]. Jongbloed et al., however, have investigated the potential of a
two-way supportive text-message program to reach young drug-using Indigenous people and
reduce their vulnerability to HIV infection [51].
The exploration of mobile-based HIV/AIDS intervention programs in healthcare
organizations may be a practical and low-cost method for improving the self-management
abilities of people with chronic diseases [52-54]. Miller et al. reported that the vast
324 S. Niakan et al. / Web and Mobile Based HIV Prevention and Intervention Programs
majority of patients in an urban HIV clinic own mobile phones and would use them for
interventions to enhance adherence to HIV medication [55]. Montoya et al., in a randomized
controlled trial, concluded that a Short Message Service (SMS) intervention can increase
physical activity and subsequently improve neurocognitive functioning in HIV-positive
people [56]. Short Message Service/Multimedia Message Service (SMS/MMS) interventions can
be used to monitor and encourage physical activity in persons with HIV, and the results
indicate that an SMS/MMS physical activity intervention improves functioning in
HIV-positive persons. Chang et al., in qualitative analyses, found that use of the
text-message feature of mobile phones can improve patient activity and facilitate
healthcare delivery by peer health workers in AIDS care [57-59]. Because evidence shows
that people who seek sexual partners online are at increased risk for HIV exposure and
transmission through their risk behaviors, mobile-based technologies provide HIV education
programs for HIV intervention, such as communication through text messaging and the
Internet (online chatting, social networking sites) [60]. Furthermore, mobile-based
interventions provide a common technique to address several of the key barriers to good
medication adherence and to promote antiretroviral therapy (ART) adherence by providing
reminders for care and a direct connection to health providers [61-64].
5. Conclusion
The Web and Internet are rich sources of useful health information for individuals at
different knowledge levels. Web-based prevention programs, such as chat-room-based
education programs and the web-based Therapeutic Education System (TES), can be useful
for raising awareness, promoting knowledge, and preventing HIV infection. Programs such
as web-based e-health education interventions, online group-based interventions, and the
HIT System can be effective in improving medication adherence in HIV-positive people.
Compared to the web, mobile phones have advantages such as physical activity tracking,
synchronization with social networks, and reminders. The combined use of mobile and
web-based technologies is therefore the best solution for the prevention of and
intervention against AIDS. Mobile-based applications such as SMS/MMS, text-message
reminders, health messaging, and online chatting can be useful for finding healthy sex
partners and health information, and they provide a common technique to address several
of the key barriers to good medication adherence and self-management and to promote
antiretroviral therapy.
References
[1] Dalal N, Greenhalgh D, Mao X. A stochastic model of AIDS and condom use. Journal of Mathematical
Analysis and Applications. 2007;325(1):36-53.
[2] Mehraeen E, Ayatollahi H, Ahmadi M. Health Information Security in Hospitals: the Application of
Security Safeguards. Acta Informatica Medica. 2016;24(1):47-50.
[3] Mehraeen E, Ghazisaeedi M, Farzi J, S M. Security Challenges in Healthcare Cloud Computing: A
Systematic Review. Global Journal of Health Science. 2016;9(3):1-10.
[4] Igodan CE, Akinyokun OC, Olabode O. Online Fuzzy-Logistic Knowledge Warehousing and Mining
Model for the Diagnosis and Therapy of HIV/AIDS. International Journal of Computational Science
and Information Technology. 2013;1(3):27-40.
[5] Alpay L, van der Boog P, Dumaij A. An empowerment-based approach to developing innovative e-
health tools for self-management. Health informatics journal. 2011;17(4):247-55.
[6] Arie S. Can mobile phones transform healthcare in low and middle income countries? BMJ.
2015;350(13):19-32.
[7] Dexheimer JW, Borycki EM. Use of mobile devices in the emergency department: A scoping review.
Health informatics journal. 2015;21(4):306-15.
[8] Pandher PS, Bhullar KK. Smartphone applications for seizure management. Health informatics journal.
2016;22(2):209-20.
[9] Mbuagbaw L, Mursleen S, Lytvyn L, Smieja M, Dolovich L, Thabane L. Mobile phone text messaging
interventions for HIV and other chronic diseases: an overview of systematic reviews and framework for
evidence transfer. BMC health services research. 2015;15:33.
[10] Darlow S, Wen KY. Development testing of mobile health interventions for cancer patient self-
management: A review. Health informatics journal. 2016;22(3):633-50.
[11] Ramanathan N, Swendeman D, Comulada WS, Estrin D, Rotheram-Borus MJ. Identifying preferences
for mobile health applications for self-monitoring and self-management: focus group findings from
HIV-positive persons and young mothers. International journal of medical informatics. 2013;82(4):e38-
46.
[12] Sims MH, Fagnano M, Halterman JS, Halterman MW. Provider impressions of the use of a mobile
crowdsourcing app in medical practice. Health informatics journal. 2016;22(2):221-31.
[13] Muntaner A, Vidal-Conti J, Palou P. Increasing physical activity through mobile device interventions:
A systematic review. Health informatics journal. 2016;22(3):451-69.
[14] Odeny TA, Newman M, Bukusi EA, McClelland RS, Cohen CR, Camlin CS. Developing content for a
mHealth intervention to promote postpartum retention in prevention of mother-to-child HIV
transmission programs and early infant diagnosis of HIV: a qualitative study. PloS one.
2014;9(9):e106383.
[15] Xia Q, Westenhouse JL, Schultz AF, Nonoyama A, Elms W, Wu N, et al. Matching AIDS and
tuberculosis registry data to identify AIDS/tuberculosis comorbidity cases in California. Health
informatics journal. 2011;17(1):41-50.
[16] Forrest JI, Wiens M, Kanters S, Nsanzimana S, Lester RT, Mills EJ. Mobile health applications for HIV
prevention and care in Africa. Current opinion in HIV and AIDS. 2015;10(6):464-71.
[17] Househ M. The role of short messaging service in supporting the delivery of healthcare: An umbrella
systematic review. Health informatics journal. 2016;22(2):140-50.
[18] Speirs KE, Grutzmacher SK, Munger AL, Messina LA. Recruitment and retention in an SMS-based
health education program: Lessons learned from Text2BHealthy. Health informatics journal.
2016;22(3):651-8.
[19] Labrique AB, Vasudevan L, Kochi E, Fabricant R, Mehl G. mHealth innovations as health system
strengthening tools: 12 common applications and a visual framework. Global health, science and
practice. 2013;1(2):160-71.
[20] Pop-Eleches C, Thirumurthy H, Habyarimana JP, Zivin JG, Goldstein MP, de Walque D, et al. Mobile
phone technologies improve adherence to antiretroviral treatment in a resource-limited setting: a
randomized controlled trial of text message reminders. AIDS (London, England). 2011;25(6):825-34.
[21] Page TF, Horvath KJ, Danilenko GP, Williams M. A cost analysis of an Internet-based medication
adherence intervention for people living with HIV. Journal of acquired immune deficiency syndromes
(1999). 2012;60(1):1-4.
[22] Kirk GD, Himelhoch SS, Westergaard RP, Beckwith CG. Using Mobile Health Technology to Improve
HIV Care for Persons Living with HIV and Substance Abuse. AIDS research and treatment.
2013;13(11):17-26.
[23] Danielson CK, McCauley JL, Jones A, Borkman AO, Miller S, Ruggiero KJ. Feasibility of Delivering
Evidence-Based HIV/STI Prevention Programming to A Community Sample of African-American Teen
Girls via the Internet. AIDS education and prevention : official publication of the International Society
for AIDS Education. 2013;25(5):394-404.
[24] Ybarra ML, Bull SS. Current trends in Internet- and cell phone-based HIV prevention and intervention
programs. Current HIV/AIDS reports. 2007;4(4):201-7.
[25] Bull S, Pratte K, Whitesell N, Rietmeijer C, McFarlane M. Effects of an Internet-based intervention for
HIV prevention: the Youthnet trials. AIDS and behavior. 2009;13(3):474-87.
[26] Ybarra ML, Biringi R, Prescott T, Bull SS. Usability and navigability of an HIV/AIDS internet
intervention for adolescents in a resource-limited setting. Computers, informatics, nursing : CIN.
2012;30(11):587-95.
[27] Rosser BR, Wilkerson JM, Smolenski DJ, Oakes JM, Konstan J, Horvath KJ, et al. The future of
Internet-based HIV prevention: a report on key findings from the Men's INTernet (MINTS-I, II) Sex
Studies. AIDS and behavior. 2011;15(1):91-106.
[28] Rhodes SD, Hergenrather KC, Duncan J, Vissman AT, Miller C, Wilkin AM, et al. A Pilot Intervention
Utilizing Internet Chat Rooms to Prevent HIV Risk Behaviors Among Men Who Have Sex with Men.
Public Health Reports. 2010;125(1):29-37.
[29] Marsch LA, Guarino H, Grabinski MJ, Syckes C, Dillingham ET, Xie H, et al. Comparative
Effectiveness of Web-Based vs. Educator-Delivered HIV Prevention for Adolescent Substance Users:
A Randomized, Controlled Trial. Journal of substance abuse treatment. 2015;59(11):30-7.
[30] Villegas N, Santisteban D, Cianelli R, Ferrer L, Ambrosia T, Peragallo N, et al. The development,
feasibility and acceptability of an Internet-based STI-HIV prevention intervention for young Chilean
women. International nursing review. 2014;61(1):55-63.
[31] Villegas N, Santisteban D, Cianelli R, Ferrer L, Ambrosia T, Peragallo N, et al. Pilot testing an internet-
based STI and HIV prevention intervention with Chilean women. Journal of nursing scholarship : an
official publication of Sigma Theta Tau International Honor Society of Nursing / Sigma Theta Tau.
2015;47(2):106-16.
[32] Kasatpibal N, Viseskul N, Srikantha W, Fongkaew W, Surapagdee N, Grimes RM. Developing a web
site for human immunodeficiency virus prevention in a middle income country: a pilot study from
Thailand. Cyberpsychology, behavior and social networking. 2012;15(10):560-3.
[33] Fasula AM, Fogel CI, Gelaude D, Carry M, Gaiter J, Parker S. Project power: Adapting an evidence-
based HIV/STI prevention intervention for incarcerated women. AIDS Educ Prev. 2013;25(3):203-15.
[34] Hong Y, Li X, Fang X, Lin X, Zhang C. Internet use among female sex workers in China: implications
for HIV/STI prevention. AIDS and behavior. 2011;15(2):273-82.
[35] Jacobs RJ, Caballero J, Ownby RL, Kane MN. Development of a culturally appropriate computer-delivered
tailored Internet-based health literacy intervention for Spanish-dominant Hispanics living with HIV.
BMC Medical Informatics and Decision Making. 2014;14(103):1-10.
[36] Robinson C, Graham J. Perceived Internet health literacy of HIV-positive people through the provision
of a computer and Internet health education intervention. Health information and libraries journal.
2010;27(4):295-303.
[37] Marhefka SL, Iziduh S, Fuhrmann HJ, Lopez B, Glueckauf R, Lynn V, et al. Internet-based video-group
delivery of Healthy Relationships--a "prevention with positives" intervention: report on a single group
pilot test among women living with HIV. AIDS care. 2013;25(7):904-9.
[38] Côté J, Cossette S, Ramirez-Garcia P, De Pokomandy A, Worthington C, Gagnon M-P, et al. Evaluation
of a Web-based tailored intervention (TAVIE en santé) to support people living with HIV in the adoption
of health promoting behaviours: an online randomized controlled trial protocol. BMC Public Health.
2015;15(3):10-24.
[39] Billings DW, Leaf SL, Spencer J, Crenshaw T, Brockington S, Dalal RS. A Randomized Trial to
Evaluate the Efficacy of a Web-Based HIV Behavioral Intervention for High-Risk African American
Women. AIDS and behavior. 2015;19(7):1263-74.
[40] van Luenen S, Kraaij V, Spinhoven P, Garnefski N. An Internet-based self-help intervention for people
with HIV and depressive symptoms: study protocol for a randomized controlled trial. Trials.
2016;17(3):172-185.
[41] Dawson Rose C, Cuca YP, Kamitani E, Eng S, Zepf R, Draughon J, et al. Using Interactive Web-Based
Screening, Brief Intervention and Referral to Treatment in an Urban, Safety-Net HIV Clinic. AIDS and
behavior. 2015;19(2):186-93.
[42] Blackstock OJ, Haughton LJ, Garner RY, Horvath KJ, Norwood C, Cunningham CO. General and
health-related Internet use among an urban, community-based sample of HIV-positive women:
implications for intervention development. AIDS care. 2015;27(4):536-44.
[43] Blackstock OJ, Shah PA, Haughton LJ, Horvath KJ, Cunningham CO. HIV-infected Women's
Perspectives on the Use of the Internet for Social Support: A Potential Role for Online Group-based
Interventions. The Journal of the Association of Nurses in AIDS Care: JANAC. 2015;26(4):411-9.
[44] Finocchario-Kessler S, Odera I, Okoth V, Bawcom C, Gautney B, Khamadi S, et al. Lessons learned
from implementing the HIV infant tracking system (HITSystem): A web-based intervention to improve
early infant diagnosis in Kenya. Healthcare. 2015;3(4):190-5.
[45] Speizer IS, Gomez AM, Stewart J, Voss P. Community-level HIV risk behaviors and HIV prevalence
among women and men in Zimbabwe. AIDS Educ Prev. 2011;23(5):437-47.
[46] Catalani C, Philbrick W, Fraser H, Mechael P, Israelski DM. mHealth for HIV Treatment & Prevention:
A Systematic Review of the Literature. The open AIDS journal. 2013;7:17-41.
[47] de Tolly K, Skinner D, Nembaware V, Benjamin P. Investigation into the use of short message services
to expand uptake of human immunodeficiency virus testing, and whether content and dosage have
impact. Telemedicine journal and e-health : the official journal of the American Telemedicine
Association. 2012;18(1):18-23.
[48] Mushamiri I, Luo C, Iiams-Hauser C, Ben Amor Y. Evaluation of the impact of a mobile health system
on adherence to antenatal and postnatal care and prevention of mother-to-child transmission of HIV
programs in Kenya. BMC Public Health. 2015;15(9):102-114.
[49] Cornelius JB, Cato M, Lawrence JS, Boyer CB, Lightfoot M. Development and pretesting multimedia
HIV-prevention text messages for mobile cell phone delivery. The Journal of the Association of Nurses
in AIDS Care : JANAC. 2011;22(5):407-13.
[50] Forsyth AD, Valdiserri RO. A State-Level Analysis of Social and Structural Factors and HIV
Outcomes Among Men Who Have Sex With Men in the United States. AIDS Educ Prev.
2015;27(6):493-504.
[51] Jongbloed K, Friedman AJ, Pearce ME, Van Der Kop ML, Thomas V, Demerais L, et al. The Cedar
Project WelTel mHealth intervention for HIV prevention in young Indigenous people who use illicit
drugs: study protocol for a randomized controlled trial. Trials. 2016;17(1):128-138.
[52] Buhi ER, Trudnak RE, Martinasek MP, Oberne AB, Fuhrmann HJ, RJ M. Mobile phone-based
behavioural interventions for health: A systematic review. Health Education Journal. 2013;72(5):564-
83.
[53] Tufts KA, Johnson KF, Shepherd JG, Lee JY, Bait Ajzoon MS, Mahan LB, et al. Novel interventions
for HIV self-management in African American women: a systematic review of mHealth interventions.
The Journal of the Association of Nurses in AIDS Care : JANAC. 2015;26(2):139-50.
[54] Himelhoch S, Mohr D, Maxfield J, Clayton S, Weber E, Medoff D, et al. Feasibility of telephone-based
cognitive behavioral therapy targeting major depression among urban dwelling African-American
people with co-occurring HIV. Psychology, health & medicine. 2011;16(2):156-65.
[55] Miller CW, Himelhoch S. Acceptability of Mobile Phone Technology for Medication Adherence
Interventions among HIV-Positive Patients at an Urban Clinic. AIDS research and treatment.
2013:670525.
[56] Montoya JL, Wing D, Knight A, Moore DJ, Henry BL. Development of an mHealth Intervention
(iSTEP) to Promote Physical Activity among People Living with HIV. Journal of the International
Association of Providers of AIDS Care. 2015;14(6):471-5.
[57] Stephens J, Allen J. Mobile phone interventions to increase physical activity and reduce weight: a
systematic review. The Journal of cardiovascular nursing. 2013;28(4):320-9.
[58] Henry BL, Moore DJ. Preliminary Findings Describing Participant Experience With iSTEP, an mHealth
Intervention to Increase Physical Activity and Improve Neurocognitive Function in People Living With
HIV. The Journal of the Association of Nurses in AIDS Care : JANAC. 2016;27(4):495-511.
[59] Chang LW, Kagaayi J, Arem H, Nakigozi G, Ssempijja V, Serwadda D, et al. Impact of a mHealth
intervention for peer health workers on AIDS care in rural Uganda: a mixed methods evaluation of a
cluster-randomized trial. AIDS and behavior. 2011;15(8):1776-84.
[60] Muessig KE, Pike EC, Fowler B, LeGrand S, Parsons JT, Bull SS, et al. Putting prevention in their
pockets: developing mobile phone-based HIV interventions for black men who have sex with men.
AIDS patient care and STDs. 2013;27(4):211-22.
[61] L'Engle KL, Green K, Succop SM, Laar A, Wambugu S. Scaled-Up Mobile Phone Intervention for HIV
Care and Treatment: Protocol for a Facility Randomized Controlled Trial. JMIR research protocols.
2015;4(1):e11.
[62] Montoya JL, Georges S, Poquette A, Depp CA, Atkinson JH, Moore DJ. Refining a personalized
mHealth intervention to promote medication adherence among HIV+ methamphetamine users. AIDS
care. 2014;26(12):1477-81.
[63] Belzer ME, Naar-King S, Olson J, Sarr M, Thornton S, Kahana SY, et al. The use of cell phone support
for non-adherent HIV-infected youth and young adults: an initial randomized and controlled intervention
trial. AIDS and behavior. 2014;18(4):686-96.
[64] Ybarra ML, Bull SS, Prescott TL, Birungi R. Acceptability and feasibility of CyberSenga: an Internet-
based HIV-prevention program for adolescents in Mbarara, Uganda. AIDS care. 2014;26(4):441-7.
328 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-328
1. Introduction
Due to the ever-increasing amount of data generated by healthcare systems each day,
physicians are increasingly overloaded with information. Rule-based decision support
systems can be highly effective in areas where the domain is well understood. However,
as systems grow over time, manual rules are hard to maintain. Moreover, when new data
sources are introduced, finding patterns for new rules is time-consuming. Predictive
models based on machine learning algorithms can help to quickly identify patterns in the
data. Random forests (RFs) in particular seem to be a promising approach, as they provide
effective methods for estimating missing data (as commonly present in healthcare), for
balancing error in imbalanced data, and for estimating the importance of individual
features [1,2]. However, interpretation of suggestions from RFs is difficult.
The present paper proposes a method to present decisions from RFs to physicians in
a way that lets them understand which attributes of the data have significantly influenced
an individual decision. The paper first describes how decision trees and RFs can be
implemented and how the importance of features within an RF is currently quantified.
1 Corresponding Author: Dieter Hayn, AIT Austrian Institute of Technology, Reininghausstr. 13, 8010
Graz, Austria, E-Mail: dieter.hayn@ait.ac.at.
D. Hayn et al. / Plausibility of Individual Decisions from Random Forests 329
Figure 1 – Example of a simple decision tree. During training (left), starting from root node N0, optimal
parameters and thresholds for separating the training set in two separate sets are identified. During prediction
(right), a specific path is followed within the tree, leading to a specific output value Y, as illustrated in bold.
Decision trees are a simple, fast and intuitive supervised machine learning technique [3].
During the learning phase, decision trees initially identify the feature F0 which best splits
the learning dataset by applying one single threshold Th0, based on a split criterion (e.g.
information gain, Gini impurity, etc.). In node N0, the learning data are then split into
two subsets according to Th0, and for each subset (i.e. in nodes N1 and N2) the procedure
is repeated. This process is continued until a specific termination criterion is met. When
prospectively applying a trained decision tree on a new data vector (X0, X1, ..., Xn),
starting from node N0, F0 of the input data is compared to Th0 in order to decide which
node to check next and finally a prediction for Y based on the current input vector is
achieved. See [4] for details on decision trees. A simple decision tree is shown in Figure 1.
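The split-and-predict procedure described above can be sketched in a few lines of Python. This is an illustrative toy implementation, not the authors' code: the dictionary node structure and the sum-of-squared-errors split criterion are our assumptions.

```python
# Toy regression tree mirroring the description above: in each node the
# (feature F, threshold Th) pair minimising the summed squared error of the
# two resulting subsets is chosen; recursion stops at a depth limit or a pure node.

def build_tree(X, y, depth=0, max_depth=2):
    if depth == max_depth or len(set(y)) == 1:      # termination criterion
        return {"leaf": sum(y) / len(y)}            # leaf: mean outcome Y
    best = None
    for f in range(len(X[0])):                      # candidate features
        for th in sorted({row[f] for row in X}):    # candidate thresholds
            left = [i for i, row in enumerate(X) if row[f] <= th]
            right = [i for i in range(len(X)) if i not in left]
            if not left or not right:
                continue
            def sse(idx):                           # split criterion: squared error
                m = sum(y[i] for i in idx) / len(idx)
                return sum((y[i] - m) ** 2 for i in idx)
            score = sse(left) + sse(right)
            if best is None or score < best[0]:
                best = (score, f, th, left, right)
    if best is None:                                # no valid split possible
        return {"leaf": sum(y) / len(y)}
    _, f, th, left, right = best
    return {"feature": f, "threshold": th,
            "left": build_tree([X[i] for i in left], [y[i] for i in left],
                               depth + 1, max_depth),
            "right": build_tree([X[i] for i in right], [y[i] for i in right],
                                depth + 1, max_depth)}

def predict(tree, x):
    # Follow one specific path from the root to a leaf, as in Figure 1 (right).
    while "leaf" not in tree:
        tree = tree["left"] if x[tree["feature"]] <= tree["threshold"] else tree["right"]
    return tree["leaf"]

X = [[1], [2], [10], [11]]        # toy learning set with a single feature
y = [0.0, 0.0, 1.0, 1.0]
tree = build_tree(X, y)
print(predict(tree, [1.5]))       # → 0.0
print(predict(tree, [10.5]))      # → 1.0
```

For classification, criteria such as information gain or Gini impurity would take the place of the squared error used here.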
RFs consist of several decision trees (i.e. a “forest”), each of which is trained on a) a
subset of observations and b) a subset of features from the learning set. When
prospectively predicting a specific outcome from a vector of input data (X0, X1, ..., Xn),
the input data are applied to each single tree in the forest, leading to one output value YTi
per tree. The final prediction Y is derived by combining all YTi (e.g. by calculating the
mean, the weighted average or the median of all YTi). An illustration of an RF is shown
in Figure 2. See [2,5] for more details concerning RFs.
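The forest scheme, trained on a) a subset of observations and b) a subset of features per tree and combined by the mean, can be sketched as follows. This is illustrative only: each "tree" is reduced to a one-split stump, and the bootstrap sampling, tree count, and SSE criterion are our assumptions about a typical setup.

```python
import random

def train_stump(X, y, feats):
    # One-split "tree": pick the (feature, threshold) pair with minimal SSE.
    best = None
    for f in feats:
        for th in sorted({row[f] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[f] <= th]
            right = [y[i] for i, row in enumerate(X) if row[f] > th]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((v - ml) ** 2 for v in left)
                   + sum((v - mr) ** 2 for v in right))
            if best is None or sse < best[0]:
                best = (sse, f, th, ml, mr)
    if best is None:                        # degenerate sample: constant leaf
        m = sum(y) / len(y)
        return lambda x: m
    _, f, th, ml, mr = best
    return lambda x: ml if x[f] <= th else mr

def train_forest(X, y, n_trees=25, n_feats=1, seed=0):
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]  # a) observation subset (bootstrap)
        feats = rng.sample(range(len(X[0])), n_feats)         # b) feature subset
        trees.append(train_stump([X[i] for i in idx], [y[i] for i in idx], feats))
    return trees

def predict_forest(trees, x):
    # Final prediction Y: mean of the per-tree outputs Y_Ti.
    return sum(t(x) for t in trees) / len(trees)

X = [[1, 1], [2, 2], [10, 10], [11, 11]]   # toy learning set, two features
y = [0.0, 0.0, 1.0, 1.0]
forest = train_forest(X, y)
p_low = predict_forest(forest, [0, 0])     # near 0 for a "low" input
p_high = predict_forest(forest, [12, 12])  # near 1 for a "high" input
```

Other combination rules mentioned above (weighted average, median) would replace the plain mean in `predict_forest`.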
In recent years, various research groups have investigated the application of machine learning
in healthcare. Different types of models, such as linear regression models, support vector
machines, neural networks, decision trees, or RFs, have been applied to many different
healthcare related questions, including prediction of readmissions (e.g. [6-9]), adverse
drug reactions (e.g. [10]), blood transfusion needs [11], etc. RFs turned out to be a good
choice in many scenarios, since RFs combine several properties which are important in
Figure 2 – Example of prediction of Y based on input vector (X0, X1, ..., Xn) in a simple random forest. For each
tree Ti, one prediction YTi is calculated and results are statistically combined to output parameter YForest.
healthcare scenarios. However, physicians may be skeptical when faced with such a
prediction (e.g. “Re-admission risk = 22 %”), without any further reasoning.
Most machine learning algorithms provide a method to estimate the importance of each
feature for the final model. One possibility to calculate the importance of a specific
feature is to train the model with all data and to test the model a) with the full test dataset
and b) with a test dataset in which all values of that feature have been replaced by random values (Figure 3).
Feature importance is calculated based on a parameter which quantifies the model
precision, such as accuracy, root mean square error, or area under the receiver operating
characteristic curve. By comparing a) the precision as achieved with the full dataset
(Precision(ref)) with b) the precision as achieved with random values for feature Fi
(Precision(no Fi)), the feature’s importance FIFi can be estimated (Equation 1).
FIFi = Precision(ref) − Precision(no Fi)   [unit of precision parameter]   (1)
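The estimate in Equation 1 can be sketched as follows. This is our illustration: the toy model, the data, and the choice of accuracy as the precision parameter are assumptions.

```python
import random

def accuracy(model, X, y):
    # Precision parameter used here: classification accuracy.
    return sum(model(row) == t for row, t in zip(X, y)) / len(y)

def feature_importance(model, X, y, fi, rng):
    precision_ref = accuracy(model, X, y)                 # a) full test dataset
    pool = [row[fi] for row in X]
    # b) same dataset, but feature fi replaced by random values:
    X_rand = [row[:fi] + [rng.choice(pool)] + row[fi + 1:] for row in X]
    return precision_ref - accuracy(model, X_rand, y)     # Equation (1)

# Hypothetical trained model: predicts 1 iff feature 0 exceeds 5, ignores feature 1.
model = lambda row: int(row[0] > 5)
X = [[1, 9], [2, 8], [10, 7], [11, 6]]
y = [0, 0, 1, 1]
print(feature_importance(model, X, y, 1, random.Random(0)))  # → 0.0 (feature 1 is ignored)
fi0 = feature_importance(model, X, y, 0, random.Random(0))   # non-negative: randomising the used feature can only hurt
```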
Figure 3 – Illustration of the estimation of feature importance for feature F0 for a decision tree by comparing
the model precision when using all features (Precision (ref), left) with the model precision as achieved when
all data of feature F0 are removed from the input data (Precision (no F0), right)
Current implementation guidelines for clinical decision support systems suggest that, in
addition to the suggested decision, the features and rules which led to the respective
suggestion be provided to the physician [12]. We assume that similar requirements apply
for machine learning approaches when deployed in real-world scenarios.
Although decision trees are built automatically from learning data, they have the
advantage that interpretation of their suggestions is similar to the interpretation of
rule-based systems, since they in fact reflect a set of rules (see Figure 1). More complex
machine learning algorithms, however, do not provide similar properties. In the case of
RFs, the individual decision is derived from a set of decision trees, each of which could
per se easily be interpreted. However, the whole forest's decision is currently hard to
explain to a physician, especially in complex models with 1,000 trees or more, each with
e.g. 10 node levels. Feature importance (see above) always concerns the whole model,
and it does not explain why a certain decision was derived for a specific input dataset.
In a recent paper, Ribeiro et al. [13] presented a tool for conveying such information,
based on estimators described by Baehrens et al. [14]. They calculate a local gradient by
analyzing the effect of slight changes of the input vector on the outcome. In this paper,
we present a different approach which is solely based on the individual input vector.
Additionally, we propose a way of presenting the most important features to physicians.
2. Methods
A decision derived from a decision tree can be explained by visualizing the tree. In order
to quantify the influence of a specific feature on an individual decision, the following
procedure, based on an approach described in [15], was applied:
In the first step, for each node Ni, the mean value Mi of all outputs Yj underneath the
respective node was derived. In the case of terminating nodes ("leaves"), Mi corresponded
to the respective output value Yi (see Equation 2).
Mi = Yi, if Ni is a leaf
Mi = (1 / NChild leaves(i)) · Σ j ∈ Child leaves(i) Yj, otherwise
     [same unit as outcome parameter Y]   (2)
Thereafter, the difference Diffi between the mean values of the two child nodes of Ni was
calculated according to Equation 3.

Diffi = MChild1(i) − MChild2(i)   [same unit as outcome parameter Y]   (3)
These calculations were done once after training and the results were stored together
with the decision tree model. An illustration of the resulting extended decision tree is
shown in Figure 4.
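The once-after-training annotation step can be sketched as follows. This is our illustration: the node structure, the field names, and the sign convention Diff = M(right child) − M(left child) are assumptions, and the example numbers are invented.

```python
# Annotate each node of a trained tree with M_i (mean of all leaf outputs
# underneath it, Equation 2) and Diff_i (difference of its children's means).
# Node structure and the sign convention Diff = M(right) - M(left) are assumed.

def extend(node):
    if "leaf" in node:
        node["M"] = node["leaf"]              # leaf: M_i equals its output Y_i
        return [node["leaf"]]
    leaves = extend(node["left"]) + extend(node["right"])
    node["M"] = sum(leaves) / len(leaves)     # mean over all underlying leaves
    node["Diff"] = node["right"]["M"] - node["left"]["M"]
    return leaves

# Simplified re-admission example in the spirit of Figure 4 (invented numbers):
tree = {"feature": "sex", "threshold": 0.5,
        "left": {"leaf": 0.10},               # one branch ends in a leaf
        "right": {"feature": "age", "threshold": 65,
                  "left": {"leaf": 0.20}, "right": {"leaf": 0.40}}}
extend(tree)
print(tree["M"])              # mean of the three leaf outputs (about 0.233)
print(tree["right"]["Diff"])  # difference of the age node's children: 0.40 - 0.20
```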
Figure 4 – Extended decision tree including – for each node – the mean values Mi of all underlying output
parameters Yi and the difference Diffi between the mean values of the node’s 2 children. General concept (left)
and simplified example for prediction of re-admission risk from sex and age for a 60 y old male patient (right).
Whenever a prediction was made, the feature influence FID of each available feature
was calculated. Initially, all values of FIDFi,Tree were set to 0. Thereafter, for each node
passed while deriving the predicted outcome, the FID of the respective node's feature was
increased or decreased by the Diffi of that node, depending on whether the input data
were greater or smaller than the node's threshold value (Equation 4).
The approach was then applied to RFs: for each feature, FIDF,Tree was derived for each
single tree of the forest (see Equation 4), and the results from the various trees in the
forest were combined according to Equation 5.
FIDF,Forest = (1 / NTrees) · Σ i ∈ Trees FIDF,Tree i   [same unit as outcome parameter Y]   (5)
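The per-decision accumulation and the forest-level averaging can be sketched as follows. Since Equation 4 is not shown, the sign convention (add Diffi when the right branch is taken, subtract it when the left branch is taken, with Diffi = M(right) − M(left) precomputed) is an assumption of our illustration, as are the node structure and the toy numbers.

```python
def fid_tree(tree, x):
    # FID_Fi,Tree starts at 0 for every feature; walking the decision path,
    # each visited node's Diff is added when the right branch is taken and
    # subtracted when the left branch is taken (assumed sign convention).
    fid = {}
    node = tree
    while "leaf" not in node:
        f, d = node["feature"], node["Diff"]
        if x[f] > node["threshold"]:
            fid[f] = fid.get(f, 0.0) + d
            node = node["right"]
        else:
            fid[f] = fid.get(f, 0.0) - d
            node = node["left"]
    return fid

def fid_forest(trees, x, features):
    # Equation (5): mean of the per-tree influences; same unit as outcome Y.
    return {f: sum(fid_tree(t, x).get(f, 0.0) for t in trees) / len(trees)
            for f in features}

# Hand-annotated toy tree: Diff = M(right child) - M(left child), precomputed.
tree = {"feature": 0, "threshold": 5, "Diff": 0.3,
        "left": {"leaf": 0.1},
        "right": {"feature": 1, "threshold": 50, "Diff": 0.2,
                  "left": {"leaf": 0.3}, "right": {"leaf": 0.5}}}
influences = fid_forest([tree, tree], [10, 60], features=[0, 1])
print(influences)   # feature 0 pushed this prediction up by 0.3, feature 1 by 0.2
```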
FIDF,Forest had the same unit as the model’s outcome parameter Y. For visualization
purposes, the measures FIDF,Forest and FIF were transformed to [%] by scaling them to
the sum of the absolute values of the respective measure over all features (Equation 6).
MeasureF,Percent = MeasureF / Σ i=1..NFeatures abs(MeasureFi)   [%]   (6)
Finally, in order to quantify which features had a higher impact on the current decision
than they usually have in the model, we calculated the relative feature influence on a
specific decision, FIDF,Forest,rel (similar to an odds ratio). To this end, we compared the
influence of a feature F on a specific decision, FIDF,Forest, with the overall feature
importance FIF of the respective feature, as described in Equation 7.
FIDF,Forest,rel = FIDF,Forest,Percent / FIF,Percent   [1]   (7)
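The two scalings in Equations 6 and 7 can be sketched as follows (our illustration, with invented numbers):

```python
def to_percent(measure):
    # Equation (6): scale a measure to the sum of absolute values over all features.
    total = sum(abs(v) for v in measure.values())
    return {f: 100.0 * v / total for f, v in measure.items()}

def relative_influence(fid_pct, fi_pct):
    # Equation (7): per-decision influence relative to global importance.
    return {f: fid_pct[f] / fi_pct[f] for f in fid_pct}

# Invented numbers: "age" influences this decision exactly as strongly as it
# does the model overall (ratio 1); "sex" acts against its usual direction.
fid_pct = to_percent({"age": 0.3, "sex": -0.1})
fi_pct = to_percent({"age": 0.6, "sex": 0.2})
rel = relative_influence(fid_pct, fi_pct)
print(rel)
```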
Figure 5 – Example for visualizing modelling results of a random forest (symbolic data)
2.3. Validation
3. Results
Based on Equations 1-7, calculation of the absolute and relative FID was feasible with
little computational effort. We suggest also presenting FIDF,Forest,Percent and FIDF,Forest,rel
to the physician whenever a predicted risk is visualized. An example of visualizing our
results is shown in Figure 5.
We validated our approach by calculating FI, FIDF,Forest,Percent and FIDF,Forest,rel for
all observations of all features of a test dataset consisting of 127,264 observations and
112 features, which we used for prediction of hospital readmissions at the time of hospital
discharge. Table 1 summarizes the differences between the three parameters.
Table 1. List of the 10 most important features based on feature importance FI, as well as on the mean value
for all observations of the feature influence for individual observations FIDF,Forest,Percent and of the relative
influence for individual observations FIDF,Forest,rel. Features are named FEATURE 1 to FEATURE 112 based
on their rank according to global feature importance FI.
Rank FI FIDF,Forest,Percent FIDF,Forest,rel
1 FEATURE 1 FEATURE 1 FEATURE 74
2 FEATURE 2 FEATURE 9 FEATURE 77
3 FEATURE 3 FEATURE 6 FEATURE 60
4 FEATURE 4 FEATURE 18 FEATURE 97
5 FEATURE 5 FEATURE 3 FEATURE 92
6 FEATURE 6 FEATURE 4 FEATURE 101
7 FEATURE 7 FEATURE 20 FEATURE 95
8 FEATURE 8 FEATURE 14 FEATURE 94
9 FEATURE 9 FEATURE 5 FEATURE 85
10 FEATURE 10 FEATURE 34 FEATURE 73
4. Discussion
We have shown that the influence of a specific feature on an individual decision proposed
by an RF can be quantified by summing up feature influences along a decision path.
Feature importance FIF reflects the role of each single feature within the whole
model. Therefore, FIF is an important measure which can be evaluated during the training
phase of a specific model. Once the model is trained, FIF for each feature does not
change. Our paper, however, presents a way to quantify the influence of a specific feature
on an individual prediction. Feature influence FIDF is derived for each individual set of
input data, whenever a prediction is made with an existing, trained model.
As shown in Table 1, the ranking of important features based on FIF and
FIDF,Forest,Percent is broadly similar: 6 of the 10 most important features based on
FIDF,Forest,Percent are among the ten most important features based on FIF, and none of
those ten features ranks lower than 34th based on FIF. However, the most important
features according to FIDF,Forest,rel lie between the 60th and the 101st most important
features based on FIF. This effect was of course expected due to the inverse relation
between FIDF,Forest,rel and FI; however, we did not observe a strictly inverse correlation.
Therefore, we expect that FIDF,Forest,Percent and FIDF,Forest,rel are both relevant for
physicians. FIDF,Forest,Percent represents the influence of a specific feature on the predicted
outcome, while FIDF,Forest,rel also compares the current patient / observation with others.
4.1. Limitations
Our approach is applicable to Boolean and ordinal output data and to decision-tree-based
algorithms only. For application to multi-class classification problems or other types of
machine learning algorithms, adaptations of our algorithms would be required.
4.2. Outlook
We have not yet evaluated the actual benefit of our approach in a real-world scenario.
Today's predictive modelling solutions in healthcare are primarily applied in research
scenarios. Therefore, our approach cannot yet be tested as a tool in an existing application.
Furthermore, predictive modelling will require intuitive tools which help physicians
interpret the suggestions given by the model. Therefore, future work will address the
evaluation of predictive models AND of tools for model decision explanation, since
neither of the two alone is able to reap the value of such approaches in healthcare.
In some applications, feature discretization (e.g. transformation of a continuous
feature Age to categories 1-18y, 19-20y, 21-50y, 51-70y, >70y) can help to improve the
performance of RFs. In such cases, we calculated FID separately for each category.
However, it might be helpful to (additionally) provide information on the FID
of the original overall feature Age, e.g. by summing up the FID values of all
subcategories, which has not been implemented yet.
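Aggregating the per-category influences back to the original feature could be as simple as the following sketch; the feature names and FID values are purely illustrative:

```python
# Hypothetical per-category feature influences after discretizing Age;
# subcategories are assumed to be named "<feature>_<category>"
fid = {"Age_1-18y": 0.00, "Age_19-20y": 0.01, "Age_21-50y": -0.02,
       "Age_51-70y": 0.12, "Age_>70y": 0.03, "SysBP": 0.25}

def overall_fid(fid, feature):
    """Sum the FID values of all subcategories of the original feature."""
    return sum(v for k, v in fid.items() if k.startswith(feature + "_"))

assert abs(overall_fid(fid, "Age") - 0.14) < 1e-9
```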
Up to now, all leaves of all trees within the RF have been weighted equally by our
algorithms. However, leaves containing many observations in the learning dataset might
better be weighted more highly than leaves that are rarely reached. Additionally, results
from different trees might be weighted during averaging according to Equation 5, e.g.
by giving higher weights to trees with higher precision.
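A precision-weighted variant of the averaging in Equation 5 could look like this sketch; the per-tree influence and precision values are purely illustrative:

```python
# Influence of one feature on one prediction, per tree (hypothetical values)
tree_influence = [0.30, 0.10, 0.22]
# Hypothetical per-tree weights, e.g. out-of-bag precision of each tree
tree_precision = [0.90, 0.60, 0.80]

# A weighted average replaces the plain mean over trees
weighted_influence = (sum(i * w for i, w in zip(tree_influence, tree_precision))
                      / sum(tree_precision))
```

Trees with higher precision then dominate the averaged feature influence, while poorly performing trees contribute less.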
D. Hayn et al. / Plausibility of Individual Decisions from Random Forests 335
As described in Section 4.1, our approach is currently only applicable to RFs in
selected scenarios. Combining our approach with the generally applicable methods
published in [13, 14], which are based on local gradients, might further generalize our solution.
5. Conclusion
Acknowledgements
The research leading to these results has received funding from Austrian Research
Promotion Agency under the project HIS-PREMO, grant agreement number 853264.
References
[1] M. Khalilia, S. Chakraborty, and M. Popescu, "Predicting disease risks from highly imbalanced data
using random forest," BMC Medical Informatics and Decision Making, vol. 11, Art. no. 51, Jul 2011.
[2] L. Breiman, "Random Forests," Machine Learning, vol. 45, no. 1, pp. 5-32, 2001.
[3] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference,
and Prediction., 2nd Edition ed. Springer, 2009.
[4] J. R. Quinlan, "Induction of Decision Trees," Machine Learning, vol. 1, no. 1, pp. 81-106, 1986.
[5] L. Breiman and A. Cutler. (2014). Random forests. Available:
http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm
[6] Y. Xie et al., "Predicting Number of Hospitalization Days Based on Health Insurance Claims Data using
Bagged Regression Trees," 2014 36th Annual International Conference of the IEEE Engineering in
Medicine and Biology Society (EMBC), pp. 2706-2709, 2014.
[7] Y. Xie et al., "Analyzing health insurance claims on different timescales to predict days in hospital," (in
ENG), J Biomed Inform, Jan 2016.
[8] Y. Xie, S. Neubauer, G. Schreier, S. Redmond, and N. Lovell, "Impact of Hierarchies of Clinical Codes
on Predicting Future Days in Hospital," in 37th Annual International Conference of the IEEE
Engineering in Medicine and Biology Society, Milan, 2015, pp. 6852-6855: IEEE.
[9] Y. Xie et al., "Predicting Days in Hospital Using Health Insurance Claims," (in eng), IEEE J Biomed
Health Inform, vol. 19, no. 4, pp. 1224-33, Jul 2015.
[10] M. Liu et al., "Large-scale prediction of adverse drug reactions using chemical, biological, and
phenotypic properties of drugs," (in English), Journal of the American Medical Informatics Association,
Article vol. 19, no. E1, pp. E28-E35, Jun 2012.
[11] D. Hayn et al., "Data Driven Methods for Predicting Blood Transfusion Needs in Elective Surgery," (in
ENG), Stud Health Technol Inform, vol. 223, pp. 9-16, 2016.
[12] D. W. Bates et al., "Ten commandments for effective clinical decision support: Making the practice of
evidence-based medicine a reality," (in English), Journal of the American Medical Informatics
Association, Article vol. 10, no. 6, pp. 523-530, Nov-Dec 2003.
[13] M. T. Ribeiro, S. Singh, and C. Guestrin, "“Why Should I Trust You?” - Explaining the Predictions of
Any Classifier," presented at the KDD 2016, San Francisco, CA, USA, 2016. Available:
https://arxiv.org/pdf/1602.04938v3.pdf
[14] D. Baehrens, T. Schroeter, S. Harmeling, M. Kawanabe, K. Hansen, and K. R. Muller, "How to Explain
Individual Classification Decisions," (in English), Journal of Machine Learning Research, Article vol.
11, pp. 1803-1831, Jun 2010.
[15] A. Saabas. (2014). Interpreting random forests. Available: http://blog.datadive.net/interpreting-random-
forests/
336 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-336
1. Introduction
Digital communication interfaces are a core requirement for the electronic exchange and
sharing of data between software applications. In the context of health telematics, this can
be observed in the introduction and implementation of electronic health record
systems (EHRs), which enable health professionals to share clinical information
independent of time and place. These electronic health record systems are intended to
be used mainly within a professional domain, with requirements and challenges
characteristic of this domain. Nevertheless, due to the necessary integration of software
solutions provided by multiple software vendors, the specification of the communication
interfaces is crucial. For the electronic health record system in Austria, these
specifications are based on profiles defined by IHE [1] and are adapted and provided by
ELGA GmbH [2]. Besides the Austrian electronic health record system currently being
developed, efforts are being made by the Austrian Federal Ministry of Health and Women’s
Affairs to design an architecture for telemonitoring applications [3]. For telemonitoring
1 Corresponding Author: Matthias Frohner, Department of Biomedical, Health and Sports Engineering,
University of Applied Sciences Technikum Wien, Höchstädtplatz 6, 1200 Wien, Austria, E-Mail:
matthias.frohner@technikum-wien.at.
M. Frohner et al. / Bluetooth Low Energy Peripheral Android Health App 337
Figure 1: Personal health device simulator application simulating personal health care devices for
interoperability testing and educational purposes
additionally considers the option that personal health devices send one
measurement data set after the measurement cycle or send multiple (historic)
measurement sets when data have been stored in the device’s internal storage. In
addition, further device capabilities, like support of a real-time clock or the capability to
manage multiple users, should be stated in the configuration file.
This configuration provides the possibility to adjust and define software test cases
without the need to adapt the code of the application. With this test case configuration
feature in place, the software test portfolio can be extended over time, i.e.
configurations for new personal health devices can be added to the configuration file,
resulting in new test cases that can be executed using the graphical user interface of the
application.
One requirement for the smartphone on which the personal health device simulator
application is executed is that the phone supports acting as a Bluetooth
Low Energy “data source”. The Bluetooth SIG uses the term Bluetooth Low Energy
Peripheral for such devices. Android introduced this feature with Android 5.0 [8];
however, there is strong evidence that the phone’s firmware must support Bluetooth Low
Energy Peripheral mode as well and that the latter is not available on a broader number
of Android phones available at the moment [9].
2. Methods
First, a suitable Android smartphone with enabled Bluetooth Low Energy Peripheral
mode is selected. To validate whether a smartphone supports this feature, specific
applications from Google’s Play Store can be used (e.g. [10, 11]). Android smartphones
and tablets at hand, running Android 5.0 or higher, have been tested for this support:
- HTC Google Nexus 9, 16 GB WiFi, Android 7.1.1 (Build-Number NMF26F)
- OnePlus 3, A3003, Android 7.0 (Build-Number ONEPLUS A3003_16_170114)
- Honor 8, FRD-L09, Android 6.0 (Build-Number FRD-L09C432B131)
Of the three tested devices, only the Honor 8, running Android 6.0, does not
support Bluetooth Low Energy Peripheral mode.
For the structure of the test configuration file, an XML structure is defined that can
hold all the relevant information for different test cases. Test cases shall enable a user to
send multiple simulated measurements with different timestamps for simulated blood
pressure monitoring devices, weighing scales, and blood glucose meters. In order to
identify the data needed for the configuration file, the corresponding Bluetooth Low
Energy service specifications from the Bluetooth SIG are studied [12]. In addition,
requirements for Bluetooth Low Energy profiles set by the Continua Design Guidelines
are considered.
A first prototype is implemented in Android Studio (API 24, Android 7.0), and the
capability of the developed application to simulate personal health devices is tested
against an Android application based on former developments [13].
3. Results
Based on the investigation of the supported content defined in the Bluetooth Generic
Attribute Profile (GATT) services
- Device Information,
- Battery Service,
- Weight Scale,
- Blood Pressure, as well as
- Glucose
and the corresponding characteristics, an XML configuration structure has been designed.
Basic device information - stated in the Device Information service and common across
different device classes - is reflected by a set of common XML attributes. Informational
content that is only available for a specific device class is mapped to a “key-value”-like
representation in the XML (see Figure 2).
Moreover, the availability of certain informational objects, like support of BMI for
weight scales, is signalled by the Bluetooth SIG by setting a flag in the characteristic’s
features list. The XML structure does not use this explicit flagging but provides it
implicitly when a corresponding key-value pair exists. The Android simulator
application then sets the flag in the corresponding feature list.
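A possible shape for such a configuration, and how the simulator could read it, is sketched below; all tag and attribute names are illustrative assumptions, not the actual schema of the application:

```python
import xml.etree.ElementTree as ET

# Hypothetical test configuration: one device with implicit feature flags
# (key-value pairs) and two test cases holding simulated measurements.
CONFIG = """
<Simulator>
  <Device class="BloodPressure" manufacturer="ACME" model="BP-1">
    <KeyValue key="BodyMovementDetection" value="true"/>
    <TestCase name="single-measurement">
      <Measurement systolic="120" diastolic="80" unit="mmHg"
                   timestamp="2017-01-30T08:15:00"/>
    </TestCase>
    <TestCase name="stored-measurements">
      <Measurement systolic="135" diastolic="85" unit="mmHg"
                   timestamp="2017-01-29T08:10:00"/>
      <Measurement systolic="128" diastolic="82" unit="mmHg"
                   timestamp="2017-01-30T08:05:00"/>
    </TestCase>
  </Device>
</Simulator>
"""

root = ET.fromstring(CONFIG)
device = root.find("Device")
# key-value pairs implicitly set the corresponding feature flags
features = {kv.get("key"): kv.get("value") for kv in device.findall("KeyValue")}
# one button per TestCase tag would be created in the user interface
test_cases = {tc.get("name"): len(tc.findall("Measurement"))
              for tc in device.findall("TestCase")}
```

New test cases are then added purely by editing the configuration, without touching application code.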
After Bluetooth pairing has finished, the simulator application loads the
configuration file and, based on the number of defined test cases (TestCase XML tag),
populates the user interface with buttons for test case execution. The read
characteristics (e.g. device information) that can be derived from the configuration file
are then accessible for external reading, and subsequently indication characteristics
(e.g. blood pressure measurements) become available.
4. Discussion
schematron file containing the content and validating the used configuration file. In
general, a trade-off between increasing the complexity of the configuration file and the
amount of logic that is hard-coded within the application has to be considered. When
the code is kept very generic, more knowledge is needed to set up a configuration
file that conforms to the requirements stated by the Bluetooth SIG, i.e. required key-values
for GATT services and dependencies between different attributes need to be stated in the
configuration file itself. Nevertheless, the test engineer should be able to extend the
simulator app with new test cases by adding additional test case specifications to the
configuration file. However, extending the simulator app with new personal health
devices to be simulated is not possible in the current set-up, because the logic needed
for each device is currently coded in the application itself. In the future, this logic
might be transferred to the configuration file, but this will require sufficient knowledge
on the part of the test engineers maintaining the configuration.
The application is intended to be used in lectures in the upcoming study semesters,
where students are expected to implement software projects demonstrating the communication
flow from the personal health device to an electronic health record system. Based
on these experiences, further improvements and features might be implemented. Feedback
and experiences gathered during the lectures are expected to influence the decision
whether to increase the complexity of the configuration file, allowing the definition of
new health devices to be simulated, or to stick to a simple configuration file in which
additional test cases can be defined rather easily. The latest version of the personal health
device simulator application will be available for public download on the project
homepage.
Software testing tools like the presented personal health device simulator application
are essential artefacts for reducing development time and improving the quality of software
products in terms of interoperability, and might play a role in software certification
processes in the future.
Acknowledgment
References
[1] IHE International Inc., “Integrating the Healthcare Enterprise.” [Online]. Available: http://www.ihe.net/.
[Accessed: 30-Jan-2017].
[2] ELGA GmbH, “ELGA - die elektronische Gesundheitsakte.” [Online]. Available:
http://www.elga.gv.at/. [Accessed: 30-Jan-2017].
[3] S. Sauermann and I. Weik, “eHealth Applications in Austria: Telemonitoring,” in 7. Nationaler
Fachkongress Telemedizin.
[4] Personal Connected Health Alliance, “Personal Connected Health Alliance.” [Online]. Available:
http://www.pchalliance.org/. [Accessed: 30-Jan-2017].
[5] Personal Connected Health Alliance, “H.811 Personal Health Devices Interface Design Guidelines,
Version 2016 (August 4, 2016),” 2016.
[6] UAS Technikum Wien, “Innovate.”
1. Introduction
Today, the need for quick access to information to support key decisions made by health
professionals cannot be denied. It is therefore necessary to understand the real
information needs of health professionals and ways of meeting these needs [1]. The Internet,
as a tool to enhance the delivery of health care services, has attracted considerable
attention and is employed by both health professionals and patients [2], and it is an
important source of medical information [3, 4]. Advances in information
technology have led to the establishment of electronic information sources like Medline,
Embase, Cochrane, etc. It is expected that the use of online sources will improve clinical
decisions [5]. This technology helps with continuing education and provides general
information on health education, prevention, prognosis and treatment of diseases, which
is helpful for decision-making for different patients [6]. Data retrieval technology on the
Internet improves on-demand access to up-to-date medical knowledge compared
with other sources of information like discussion with colleagues and printed
1 Corresponding author: Khalil Kimiafar, Department of Medical Records and Health Information
Technology, School of Paramedical Sciences, Mashhad University of Medical Sciences, Azadi Square,
PardisDaneshgah, Mashhad, IR Iran, Tel: +98-5138846728, E-mail: kimiafarkh@mums.ac.ir
344 M. Sarbaz et al. / Physicians’ Use of Online Clinical Evidence
textbooks or journals [7, 8]. Previous studies have shown that physicians' use of the
Internet is increasing and that it has become strategically important for them [9-11].
On the other hand, web-based information is heterogeneous in terms of quality and is
not always appropriate for direct use in practice [12]. This study aims to evaluate the
online information-seeking behavior of physicians at Mashhad University of Medical
Sciences (Northeast Iran) with respect to the use of online clinical evidence.
2. Methods
3. Results
In this study, 252 physicians (126 residents (response rate: 79%) and 126 specialists
(response rate: 81%)) completed the questionnaire. Most physicians (65.5% of specialists
vs. 61% of residents) were male. The average age of specialists was 45 years (SD = 9.5),
compared to 32 years (SD = 5.3) for residents. In the present study, the majority of
physicians (63.6% of specialists and 58.5% of residents) had no Internet access in their
consultation and visiting rooms, while the majority of physicians had good or very good
skills in using the Internet to search for information, and the majority of them (54.6% of
specialists vs. 44.4% of residents) used the Internet daily to seek medical information.
In response to the question "Why do you use the Internet?", the majority of specialists
answered that they used the Internet to search for information about their patients'
problems (66.9%) and to answer questions raised by their students (58.8%). The majority
of residents (48.3%) stated that they used the Internet to search for information on
their loved ones' health problems. The majority of specialists (67.7%) referred to "easy
access to information" and "access to much information from various sources" as their
most important reasons for using the Internet to search for health-related information.
The majority of residents (63%) stated that their main reason for using the Internet to
search for health-related information was to access a lot of information from various
sources. With regard to search experience on the Internet, 39% of specialists and 32.5%
of residents chose "completely agree", and 60.2% of specialists and 64.2% of residents
chose "agree", in response to the statement "I am satisfied with the information".
Among electronic resources, the majority of physicians (77.8% of specialists and 80% of
residents) used Google, and 67.5% of specialists and 52.2% of residents always used
Medline or PubMed to search for information. EBSCO and Embase were the least used
sources among the physicians (Table 1).
The main reasons for physicians' dissatisfaction with the Internet as a source of health
information were the lack of access to medical sources owing to non-targeted filtering
(61% of specialists and 74.6% of residents) and very slow Internet connections (75.6%
of specialists and 83.1% of residents) (Table 2).
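The group comparisons in Table 2 can be reproduced with a plain 2x2 chi-square test (df = 1, no continuity correction); the sketch below recomputes the p-value for the first row (lack of Internet access):

```python
from math import erfc, sqrt

def chi2_2x2_p(a, b, c, d):
    """P-value of a 2x2 chi-square test without continuity correction.
    For df = 1 the survival function is erfc(sqrt(chi2 / 2))."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return erfc(sqrt(chi2 / 2))

# Lack of Internet access: specialists 18 yes / 106 no, residents 32 yes / 86 no
p = chi2_2x2_p(18, 106, 32, 86)
assert round(p, 3) == 0.016   # matches the p-value reported in Table 2
```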
4. Discussion
In the present study, the majority of physicians utilized the Internet for medical
information every day. The most important reasons for using the Internet included easy
access to information. Another reason stated by the physicians for using the Internet was
access to plenty of information from different sources. In this regard, the results of the
study carried out by Kitchin and Applegate confirm the results of the present study. They
revealed that radiology residents introduced the Internet as their most important and most
utilized source of information among other sources, including books, journals and
internet [15]. In another study, Bernard et al. stated that the most important reason why
physicians use the Internet is to search for information for patient care, which is in line
with our results (Specialists' opinions) [14].

Table 2. Physicians' reasons for dissatisfaction with the Internet as a source of health information
                                                                   Specialists N (%)      Residents N (%)       p-value
                                                                   Yes        No          Yes        No
Lack of Internet access                                            18 (14.5)  106 (85.5)  32 (27.1)  86 (72.9)   0.016
Very slow connection of the Internet                               93 (75.6)  30 (24.4)   98 (83.1)  20 (16.9)   0.154
Difficulty of searching the Internet                               6 (4.9)    117 (95.1)  10 (8.5)   108 (91.5)  0.262
Lack of time to search the Internet                                30 (24.4)  93 (75.6)   27 (22.9)  91 (77.1)   0.783
High costs of the Internet                                         11 (8.9)   112 (91.1)  24 (20.3)  94 (79.7)   0.012
Technical difficulties                                             39 (31.7)  84 (68.3)   47 (39.8)  71 (60.2)   0.188
Lack of specific information on the Internet                       7 (5.7)    116 (94.3)  15 (12.7)  103 (87.3)  0.059
High volume of non-relevant information                            26 (21.1)  97 (78.9)   44 (37.3)  74 (62.7)   0.006
Unreliability of the information                                   14 (11.4)  109 (88.6)  14 (11.9)  104 (88.1)  0.907
Lack of access to medical sources owing to non-targeted filtering  75 (61)    48 (39)     88 (74.6)  30 (25.4)   0.024
Language barrier                                                   10 (8.1)   113 (91.9)  28 (23.7)  90 (76.3)   0.001
Lack of Internet searching skills                                  29 (23.6)  94 (76.4)   20 (16.9)  98 (83.1)   0.201

Unlike the results of the present study, Allen et al. revealed that physicians are not
sure about their ability to determine the quality of
information obtained from the Internet. They assessed data quality based on insight and
understanding, and mainly matched the information against their own knowledge. The
method of data presentation is considered very important. Physicians seek information
that is directly related to their clinical practice. They feel that they are not skilled in
formulating a query to conduct a search and are aware of their weakness in determining
the quality of the information found [16]. Nevertheless, in the present study, physicians
stated that they had appropriate Internet searching skills. Given that most physicians
conduct their searches in the Google search engine, this claim seems to be attributable
to a lack of awareness of ideal methods of specialized searching on the Internet. The results
of a web log analysis performed in 2007 on a metasearch engine covering 150 health
sources and a variety of guidelines revealed that the majority of queries were made
using a search phrase alone, without Boolean operators [17]. The most
important problems stated in the present study concerning the search for health information on
the Internet include the lack of access to some medical sources owing to non-targeted
filtering and the very slow speed of the Internet. In previous literature, Internet search
barriers included the time spent finding information, difficulty in formulating key questions
and finding optimal search strategies, lack of appropriate information sources, lack of
confidence in finding all relevant information, high volume of information, language
barriers, lack of training or skills on the Internet, lack of familiarity or experience, software
problems, concerns about data security, no Internet access, and costs [14, 18, 19]. In the
present study, most physicians stated that they obtained valid information from their
online searches. Nevertheless, evaluating and identifying the validity of the resulting
information is a difficult and specialized task, which requires information literacy along
with specialized literacy in the relevant profession. Previous studies have shown that
educational interventions to improve information management have a positive
impact on physicians' information-seeking behavior [9]. It is recommended that medical
informatics training programs be incorporated where physicians are required to
search for health information on the Internet.
References
[1] Revere D, Turner AM, Madhavan A, Rambo N, Bugni PF, Kimball A, et al. Understanding the
information needs of public health practitioners: A literature review to inform design of an interactive
digital knowledge management system. Journal of Biomedical Informatics. 2007;40(4):410-21.
[2] Baker L, Wagner TH, Singer S, Bundorf MK. Use of the Internet and e-mail for health care information:
results from a national survey. The Journal of the American Medical Association. 2003 May
14;289(18):2400-6. PubMed PMID: 12746364. Epub 2003/05/15. Eng.
[3] Sarbaz M, Kimiafar K, Sheikhtaheri A, Taherzadeh Z, Eslami S. Nurses' Information Seeking Behavior
for Clinical Practice: A Case Study in a Developing Country. Studies in health technology and
informatics. 2016;225:23-7. PubMed PMID: 27332155. Epub 2016/06/23. Eng.
[4] Griffiths JM, King DW. US information retrieval system evolution and evaluation (1945-1975). IEEE
Annals of the History of Computing. 2002;24(3):35-55.
[5] Hersh WR, Hickam DH. How well do physicians use electronic information retrieval systems? A
framework for investigation and systematic review. The Journal of the American Medical Association.
1998 Oct 21;280(15):1347-52. PubMed PMID: 9794316. Epub 1998/10/30. Eng.
[6] Kagolovsky Y, Moehr JR. Terminological problems in information retrieval. Journal of medical systems.
2003 Oct;27(5):399-408. PubMed PMID: 14584617. Epub 2003/10/31. Eng.
[7] Schwartz K, Northrup J, Israel N, Crowell K, Lauder N, Neale AV. Use of on-line evidence-based
resources at the point of care. Family medicine. 2003 Apr;35(4):251-6. PubMed PMID: 12729308. Epub
2003/05/06. Eng.
[8] Pluye P, Grad RM, Dunikowski LG, Stephenson R. Impact of clinical information-retrieval technology
on physicians: a literature review of quantitative, qualitative and mixed methods studies. International
journal of medical informatics. 2005 Sep;74(9):745-68. PubMed PMID: 15996515. Epub 2005/07/06.
Eng.
[9] Schuers M, Griffon N, Kerdelhue G, Foubert Q, Mercier A, Darmoni SJ. Behavior and attitudes of
residents and general practitioners in searching for health information: From intention to practice.
International journal of medical informatics. 2016 May;89:9-14. PubMed PMID: 26980354. Epub
2016/03/17. Eng.
[10] Sarbaz M, Naderi HR, Aelami MH, Eslami S. Medical Information Sources Used by Specialists and
Residents in Mashhad, Iran. Iran Red Crescent Med J. 2016 Mar;18(3):e22483. Epub 2016-03-06.
[11] Masters K. For what purpose and reasons do doctors use the Internet: a systematic review. International
journal of medical informatics. 2008 Jan;77(1):4-16. PubMed PMID: 17137833. Epub 2006/12/02. Eng.
[12] Westberg EE, Miller RA. The basis for using the Internet to support the information needs of primary
care. Journal of the American Medical Informatics Association. 1999 Jan-Feb;6(1):6-25. PubMed
PMID: 9925225. Pubmed Central PMCID: PMC61341. Epub 1999/01/30. Eng.
[13] Callen JL, Buyankhishig B, McIntosh JH. Clinical information sources used by hospital doctors in
Mongolia. International journal of medical informatics. 2008 Apr;77(4):249-55. PubMed PMID:
17646126. Epub 2007/07/25. Eng.
[14] Bernard E, Arnould M, Saint-Lary O, Duhot D, Hebbrecht G. Internet use for information seeking in
clinical practice: a cross-sectional survey among French general practitioners. International journal of
medical informatics. 2012 Jul;81(7):493-9. PubMed PMID: 22425281. Epub 2012/03/20. Eng.
[15] Kitchin DR, Applegate KE. Learning radiology a survey investigating radiology resident use of
textbooks, journals, and the internet. Academic radiology. 2007 Sep;14(9):1113-20. PubMed PMID:
17707320. Epub 2007/08/21. Eng.
[16] Allen J, Gay B, Crebolder H, Heyrman J, Svab I, Ram P. The European definitions of the key features
of the discipline of general practice: the role of the GP and core competencies. The British journal of
general practice : the journal of the Royal College of General Practitioners. 2002 Jun;52(479):526-7.
PubMed PMID: 12051237. Pubmed Central PMCID: PMC1314348. Epub 2002/06/08. Eng.
[17] Meats E, Brassey J, Heneghan C, Glasziou P. Using the Turning Research Into Practice (TRIP) database:
how do clinicians really search? Journal of the Medical Library Association. 2007 Apr;95(2):156-63.
PubMed PMID: 17443248. Pubmed Central PMCID: PMC1852632. Epub 2007/04/20. Eng.
[18] Krueger RA, Casey MA. Focus groups: A practical guide for applied research: Sage publications; 2014.
[19] Coumou HC, Meijman FJ. How do primary care physicians seek answers to clinical questions? A
literature review. Journal of the Medical Library Association. 2006 Jan;94(1):55-60. PubMed PMID:
16404470. Pubmed Central PMCID: PMC1324772. Epub 2006/01/13. Eng.
348 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-348
Abstract. Background: The older population of Europe is increasing and there has
been a corresponding increase in long term care costs. This project sought to
promote active ageing by delivering tasks via a tablet computer to participants aged
65-80 with mild cognitive impairment. Objectives: An age-appropriate gamified
environment was developed and adherence to this solution was assessed through an
intervention. Methods: The gamified environment was developed through focus
groups. Mixed methods were used in the intervention with the time spent engaging
with applications recorded supplemented by participant interviews to gauge
adherence. There were two groups of participants: one living in a retirement village
and the other living separately across a city. Results: The retirement village
participants engaged in more than three times the number of game sessions
compared to the other group, possibly because of different social arrangements
between the groups. Conclusion: A gamified environment can help older people
engage in computer-based applications. However, social community factors
influence adherence in a longer term intervention.
1. Introduction
2. Methods
Three focus groups were conducted as part of an iterative user-centered design process
in the development of serious games and gamification to encourage better nutrition,
increased physical activity, social interaction and cognitive function. Participants were
recruited with normal or mild cognitive impairment as assessed by the mini mental state
examination (MMSE) [26] or the Montreal Cognitive Assessment (MoCA) [27]. The
first focus group consisted of nine people (5 male; mean age=77.0, SD=7.47; mean
MMSE=29.3, SD=1.00). The purpose of this group was to elicit information on the
experience of participants about games and gaming for health together with their
motivations. The second focus group consisted of five people (1 male; mean age=74.6,
SD=5.46; mean MoCA=22.8, SD=1.64). Themes defined in the first round of focus
groups were presented to participants with suggestions for the gamification of these
themes. The third focus group consisted of four people (1 male; mean age=78.5,
SD=1.91; mean MoCA=22.0, SD=2.45). This group further refined the themes from the
second focus group and obtained participant views on gamification together with the
display of game progress. The groups were facilitated by two psychologists and each
session lasted between 50 and 70 minutes. The recordings were transcribed and analyzed
thematically.
of videos of age-appropriate exercises and stretches that participants could view and copy.
The area to promote social interaction gave participants challenges to interact with other
people, e.g. to go for a walk and visit a friend, and also to encourage others enrolled in the
intervention by giving “well done” comments. The area to promote healthy eating asked
participants to keep a food diary; participants were given electronic feedback on possible
future healthy meal choices.
The timeline of the intervention phase consisted of participants receiving
personalized face-to-face training on how to use the tablet computers, the apps and the
gamified environment over a 17 day period. Following this training phase there was an
intervention period of 47 days where participants were asked to use the tablet computers
in their homes independently. Participants were encouraged to engage with the
DOREMI application five days a week. Technical support was available for the
participants. For those participants living in the retirement village the support was
available face-to-face. For the other participants the support was available via telephone.
The tablets were connected via WiFi to a central server and the amount of time
participants spent using the DOREMI apps to promote cognition and exercise was
recorded. After the intervention, eight participants were interviewed about their
experiences of using the tablets. The interviews were recorded, transcribed and analyzed
thematically.
3. Results
In the first focus group, some participants reported that they played cognitive games to
keep their brains active, but they did not play other classes of games for health benefits.
A theme that emerged was the importance of social interaction, as participants felt that
loneliness and isolation were risk factors with age. Participants were positive about
gamification concepts such as rewards for completing stages within games.
The second focus group introduced participants to a prototype gamified environment
of walking a dog along a path, a theme derived from feedback from the first focus group.
Participants said they liked to travel and build collections to remind them of where they
had been. They said that keeping motivated was an important factor in persevering with
a task.
352 M. Scase et al. / Development of and Adherence to a Computer-Based Gamified Environment
Figure 1. The gamified environment illustrating the path the dog will follow through the city. At certain
points on the path, landmarks will be reached where users collect postcards illustrating those landmarks.
The third focus group combined feedback from the first two to present a gamified
environment to participants. A scenario was presented where game progress was
represented as walking a dog through a European city that the user was visiting. At points
along the path, landmarks were reached; at these landmarks a virtual postcard would be
collected and inserted into an album.
The final gamified environment incorporated suggestions from the three focus
groups (see Figure 1). Progress was visualized by a dog walking a path through a city
with the collection of postcards along the way to build an album. As participants
completed the path for one European city they would then progress to another city and
collect postcards of well-known landmarks there. Participants could also view a
graphical representation of their progress in the four separate game areas of the
application.
Following the intervention phase, 435 distinct sessions on the tablet computers were
recorded across all participants, with the number of sessions per participant varying
considerably, from 2 to 59 sessions of engagement with the cognitive games and/or
exercise area. An independent-samples t-test was conducted to compare the number of
sessions between the two groups, as there were two independent groups whose means
were to be compared. There was a
significant difference in the mean number of sessions for retirement village participants
(mean=29.1, SD=14.8) and those living separately (mean=8.8, SD=7.5), Welch-adjusted
t(14.3)=4.1, p=0.001. These results suggest that community living tends to promote
adherence to the tablet-delivered intervention. The session length whilst participants
interacted on the tablet computers ranged from 32 to 13611 seconds. An independent-
samples t-test was conducted to compare the participant mean session length between
the two groups of participants. There was no significant difference in the mean
participant session length between the retirement village participants (mean=1707,
SD=1040) and those living separately (mean=1747, SD=1436), t(22)=-0.07, p=0.94.
The results suggest that there was no difference between the groups in how long they
spent on the tablet computers in each session.
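The non-integer degrees of freedom of the first test indicate a Welch (unequal-variance) t-test, which can be reproduced from the summary statistics alone. A minimal sketch in Python, assuming group sizes of 11 (retirement village) and 13 (living separately) — sizes that are not stated in this excerpt but are consistent with the reported t(14.3) and t(22):

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's independent-samples t-test from summary statistics.
    Returns (t statistic, approximate degrees of freedom)."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Number of sessions: retirement village vs. living separately
# (group sizes 11 and 13 are assumed for illustration only)
t, df = welch_t(29.1, 14.8, 11, 8.8, 7.5, 13)
print(round(t, 1), round(df, 1))
```

Feeding the session-length statistics (means 1707 and 1747, SDs 1040 and 1436) into the same helper yields a t close to the reported -0.07, consistent with the non-significant difference between the groups.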
Thematic analysis of the interviews after the intervention revealed a number of
points. Participants spoke positively about their experience of taking part in the project.
Some spoke about how it made them feel better (physically) and others spoke about the
benefits of learning new things “You’re never too old to learn, that’s very true, and you
know you got to be open to being forward thinking...because some things, you know, this
digital age... and all this technology that's coming across, I mean it’s fantastic...”
(participant E02). Participants said they had enjoyed the project and how it fitted into
their current lifestyle “Well, there wasn’t a lot really. I know it was... most of it was easy,
you know... so it didn’t take any of my time...really anymore than it should.” (participant
E01). Some participants enjoyed the social aspects of the project, particularly those
living in the retirement village. They liked meeting and interacting with others on the
project “Well the strengths were you all get round a little group …and that that you know
you can really cope with each other, which is nice, you know.” (participant E02). There
were some negative aspects of the project. Participants spoke of technical issues which
could be frustrating. However, the participants living in the retirement village stated that
these technical issues were mitigated by the ease of availability of help to resolve the
issues “…we had DC [technical support person] readily available” (participant E06).
Some participants found the project intrusive in terms of the tasks they were asked to
perform. However, for most participants the positive aspects of the project mitigated
what they were asked to do “Very interesting, sometimes stressful, sometimes frustrating,
ummm sometimes I felt it was a bit intrusive ummm... challenging... ummm but of course,
I think it’s been very interesting...err....and I think one of the things that’s really carried
it for me of course has simply been the people involved” (participant E06).
4. Discussion
Iterative user-centered design has been an effective technique for developing games and
a gamified environment. This approach of gathering information and preferences from
an older population can result in a gamification model that the target participants feel
comfortable with, can engage with and will motivate them to persevere during an
autonomous intervention. From testing participants with tablet computers we found that
older people with no previous experience very quickly learned how to use these devices.
However, given age-related declines in visual acuity [29] and manual dexterity [30], the
usability of interface designs needs to be considered carefully. Successful gamification should use
the three principles of: providing meaning to participants; enabling mastery to maintain
Flow; and ensuring autonomy so users can participate freely [31].
Results from the intervention suggested that the gamified environment had worked
well to motivate the older people to engage with the applications. A mean session time
of approximately 28 minutes demonstrated that the participants were using the applications.
Some participants informally reported that the environment was highly engaging and
they could play for hours (perhaps accounting for some session times being over 3 hours).
There was no significant difference between the mean session time of the two groups
possibly suggesting that gamification promoted engagement equally well in the two
groups and was independent of the participant location. Computerised training has been
used with older adults in several other studies, but there has been little emphasis on
age-appropriate gamification as a motivating factor [32].
There was a highly significant difference in the number of sessions between the two
groups. Participants in the retirement village engaged in more than three times as many
sessions as those participants living spread across a city. A number of factors
emerged that could help explain this very large difference. The participants in the
retirement village got to know each other as the intervention progressed (they were not
previously acquainted). A bonding and sense of community developed between these
participants that helped encourage them to engage. The role of social interaction in
enhancing wellbeing in older adults has previously been reported [33]. Furthermore,
technical support was available face-to-face and participants felt they were not on their
own and help was easily available. It is important to acknowledge that all participants
had mild cognitive impairment which could have caused mild memory losses. As such,
those taking part in their own homes without a social community of fellow participants
might have forgotten to use the tablet computers regularly.
These results are consistent with previous findings that older people in general have less
confidence when using technology [34]. Training older people to use technology helps
improve their self-confidence [35]. Furthermore, mobile applications have been
demonstrated to help enhance the health of older people and maintain social
relationships [36-37]. However, as our study suggests, the face-to-face social element of
older people engaging with technology is crucial in helping participants return and
re-start a gamified session [33].
Acknowledgments
This study is part of the EU Framework 7 DOREMI project, Grant Agreement No. 611650,
http://www.doremi-fp7.eu/
References
[1] S. Deterding, D. Dixon, R. Khaled, L. Nacke, From Game Design Elements to Gamefulness: Defining
Gamification. Proceedings of the 15th International Academic MindTrek Conference: Envisioning
Future Media Environments (2011) 9-15.
[2] B. Burke, How Gamification Motivates People to Do Extraordinary Things, Bibliomotion Inc, Brookline,
2014.
[3] DOREMI - Decrease of cOgnitive decline, malnutRition and sedEntariness by elderly empowerment in
lifestyle Management and social Inclusion. EU Seventh Framework Programme. Grant agreement no:
611650 (http://www.doremi-fp7.eu/)
[4] European Commission, Commission Communication: the Demographic Future of Europe – From
Challenge to Opportunity. European Commission, Brussels, 2006.
[5] World Health Organization, Active Ageing: A Policy Framework, 2002.
[6] R. Beech, D. Roberts, Research Briefing 28: Assistive Technology and Older People. Social Care
Institute for Excellence, 2008
[7] G. Zichermann, C. Cunningham, Gamification by Design: Implementing Game Mechanics in Web and
Mobile Apps. O’Reilly Media Inc, Sebastopol 2011.
[8] D.R. Michael, S.L. Chen, Serious Games: Games that Educate, Train, and Inform. Muska &
Lipman/Premier-Trade, 2005.
M. Scase et al. / Development of and Adherence to a Computer-Based Gamified Environment 355
[9] Lumos Labs Inc: Lumosity (Version 2.0) [Mobile application software]. Retrieved from
https://www.apple.com/uk/itunes/, 2014.
[10] MyFitnessPal LLC: My Fitness Pal [Mobile application software]. Retrieved from
https://www.apple.com/uk/itunes/, 2014.
[11] Nike Inc: Nike+ running (4.5.5 ed) [Mobile application software]. Retrieved from
https://www.apple.com/uk/itunes/, 2014.
[12] W. Ijsselsteijn, H.H. Nap, Y. de Kort, K. Poels, Digital Game Design for Elderly Users. Proceedings
of the 2007 Conference on Future Play (2007) 17-22.
[13] B.A. Jones, G.J. Madden, H.J. Wengreen, S.S. Aguilar, E.A. Desjardins, Gamification of Dietary
Decision-Making in an Elementary-School Cafeteria. PloS One, 9 (2014) e93872.
[14] B.F. Skinner, The Behavior of Organisms: An Experimental Analysis. Appleton-Century-Crofts, East
Norwalk, 1938.
[15] Foursquare Labs: Foursquare (Version 7.0.11) [Mobile application software]. Retrieved from
https://www.apple.com/uk/itunes/, 2014.
[16] Jozic Productions: 30 day ab challenge free (2.1st ed.) [Mobile application software]. Retrieved from
https://www.apple.com/uk/itunes/, 2014.
[17] Joggle Research: Joggle Brain Training (Version 2.4) [Mobile application software]. Retrieved from
https://www.apple.com/uk/itunes/, 2014.
[18] S. Deterding, Gamification: Designing for Motivation. Interactions, 19 (2012) 14-17.
[19] M. Wu, Unlocking the Hidden Motivational Power of Gamification. Presented at the Gamification
Summit 2011, New York (2011).
[20] E.L Deci, R.M. Ryan, Intrinsic Motivation and Self-Determination in Human Behavior. Plenum Press,
New York, 1985.
[21] M. Csikszentmihalyi, Flow: The Psychology of Optimal Experience. Harper & Row, New York, 1990.
[22] A. Bandura, Self-efficacy: Toward a unifying theory of behavioral change, Psychological Review, 84
(1977) 191-215.
[23] M. Conner, P. Norman, Predicting Health Behaviour, McGraw-Hill Education, Berkshire, 2005.
[24] B. Lange, C. Chang, E. Suma, B. Newman, A.S. Rizzo, M. Bolas, Development and Evaluation of Low
Cost Game-Based Balance Rehabilitation Tool Using the Microsoft Kinect Sensor, Engineering in
Medicine and Biology Society, EMBC, 2011 Annual International Conference of the IEEE (2011) 1831-
1834.
[25] R. Profitt, B. Lange, User Centred Design and Development of a Game for Exercise in Older Adults.
International Conference on Technology, Knowledge and Society (2012).
[26] M.F. Folstein, S.E. Folstein, P.R. McHugh, Mini-mental State: A Practical Method for Grading the
Cognitive State of Patients for the Clinician, Journal of Psychiatric Research, 12 (1975) 189-198.
[27] Z.S. Nasreddine, N.A. Phillips, V. Bédirian, S. Charbonneau, V. Whitehead, I. Collin, J.L.C. Cummings,
H. Chertkow, The Montreal Cognitive Assessment, MoCA: A Brief Screening Tool For Mild Cognitive
Impairment. Journal of the American Geriatric Society, 53, (2005) 695–699.
[28] V. Braun, V. Clarke, Using Thematic Analysis in Psychology, Qualitative Research in Psychology, 3
(2006) 77-101.
[29] N.S. Gittings, J.L. Fozard, Age Related Changes in Visual Acuity, Experimental Gerontology, 21 (1986)
423-433.
[30] J. Desrosiers, R. Hébert, G. Bravoa, A. Rochettea, Age-Related Changes in Upper Extremity
Performance of Elderly People: a Longitudinal Study, Experimental Gerontology 34 (1999) 393-405.
[31] S. Deterding, Meaningful play: Getting Gamification Right. Google Tech Talk (2011).
[32] A.M. Kueider, J.M. Parisi, A.L. Gross, G.W. Rebok, Computerized Cognitive Training with Older
Adults: A Systematic Review. PLoS ONE, 7, (2012) 1-13.
[33] I.K. Far, M. Ferron, F. Ibarra, M. Baez, S. Tranquillini, F. Casati, N. Doppio, The interplay of physical
and social wellbeing in older adults: investigating the relationship between physical training and social
interactions with virtual social environments. PeerJ Computer Science 1:e30;DOI 10.7717/peerj-
cs.30.(2015).
[34] K. Arning, M. Ziefle, Understanding age differences in PDA acceptance and performance. Computers
in Human Behavior, 23 (2007) 2904-2927.
[35] Q. Ma, A.H.S. Chan, K. Chen, Personal and other factors affecting acceptance of smartphone technology
by older Chinese adults. Applied Ergonomics, 54, (2016) 62-71.
[36] I. Plaza, L. Martín, S. Martin, C. Medrano, Mobile applications in an aging society: Status and trends.
The Journal of Systems and Software, 84, (2011) 1977-1988.
[37] C-J. Chiu, Y-H. Hu, D-C. Lin, F-Y. Chang, C-S. Chang, C-F. Lai, The attitudes, impact, and learning
needs of older adults using apps on touchscreen mobile devices: Results from a pilot study. Computers
in Human Behavior, 63, (2016) 189-197.
356 Health Informatics Meets eHealth
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-356
1. Introduction
Biosignals play an important role in many medical fields, for example cardiology, sleep
studies, orthopaedics, (tele-) rehabilitation, sports and fitness.
Many types of sources generate biosignal raw data:
• motion data like kinematic and kinetic measures, force and acceleration
• electrical biosignals like ECG, EMG, EEG, EOG, ENG
• body area sensors and networks, as used in the “quantified self”
• respiratory signals like airflow, pressure and temperature
Today no gold standard is available that enables consistent acquisition, storage,
presentation, analysis and sharing of biosignal data, while additionally conserving the
1
Corresponding Author: Stefan SAUERMANN, Department of Biomedical, Health and Sports
Engineering, University of Applied Sciences Technikum Wien, Höchstädtplatz 6, 1200 Vienna, Austria,
E-Mail: stefan.sauermann@technikum-wien.at
S. Sauermann et al. / Biosignals, Standards and FHIR – The Way to Go? 357
metadata that is needed for further automated processing of the signals (semantic
interoperability). A large number of standards for storing biosignals are available, for
example SCP-ECG as standardised in the IEEE 11073 series [1], the European Data
Format (EDF) [2] or DICOM waveforms [3]. Within IHE the profiles defined in the
Patient Care Device (PCD) technical framework [4] enable sharing vital parameters from
medical devices, for example within a hospital. The HL7 Personal Healthcare Monitoring
Report [5] enables sharing data with healthcare providers in a summary report. The
Personal Connected Health Alliance (PCHA) refers to these standards and profiles in
their Continua implementation guidelines [6].
A review of biosignal file formats is available in [7], describing and considering the
following requirements:
• single file format
• multiple sampling rates and scaling factors
• multiple binary data types (int16, int32, float, etc.), dynamic range
• events, annotations and markers, standardised terms
• support of demographic information
• support for quality control (i.e. who did the recording, when, where, and with which equipment)
• support for automated overflow (out-of-range) detection
• physical units, standardised
• random data access and streaming
• electrode positions
Based on this, in the year 2014 the requirements for storing biosignals in a
standardised format were again raised by the authors at a meeting of the interoperability
forum of the SDOs in Austria (Interoperabilitätsforum der Österreichischen SDOs). It
was found that the status described in [7] had not changed and that existing standards
still did not cover all the requirements.
Work therefore started in the “Medical Informatics” committee of the Austrian
Standards Institute (ONK 238) to develop an Austrian standard based on the already
available General Data Format (GDF) for Biomedical Signals [8], with substantial input
from the authors. The standard [9] was published in the year 2015. The results and
software tools from the BioSig project [10] were valuable for studying the practical
feasibility of this effort.
During the development of the standard, the authors, together with other experts in
standardization from ONK 238 and DICOM, explored the feasibility of archiving
standardised biosignal files into DICOM archives. It was found that additional
standardisation efforts are needed, concerning the biosignal file format as well as further
improvement of other standards, e.g. HL7 CDA and DICOM. Additionally, since 2015,
Fast Healthcare Interoperability Resources (FHIR) [11] has emerged within HL7, with the
goal of supporting data transfer between software systems in healthcare using
state-of-the-art IT protocols and technologies. FHIR has raised substantial interest within
the international standards community. For example, cooperations have started between
HL7, IHE, IEEE and PCHA to enable sharing medical device data on mobile platforms.
The authors discussed biosignal file formats again in 2016, together with other experts in the
Interoperabilitätsforum. It was found that a new review of the landscape of standards is
necessary, considering FHIR and other recent developments, before starting further work
on standards.
2. Methods
Within the INNOVATE project the following use cases were defined together with
biomedical experts from medical fields such as cardiology, sleep studies, orthopaedics,
(tele-)rehabilitation, sports and fitness:
• Basic use cases
  o Standardised storing of biosignal files
  o Archiving of biosignal files
• Advanced use cases
  o Management of multi-channel biosignal files from complex acquisition protocols
  o Analysis and presentation of raw and processed biosignal data, e.g. filtering, classification
Advanced use cases may occur in clinical settings, for example when a measurement is
performed on a patient in the hospital. They may alternatively be situated in research,
when a large number of existing biosignal files are analysed using different sets of
analysis methods in order to find the optimal setup for a given requirement. Similar use
cases are described in more detail in [14]. In order to implement these use cases, IT
systems are needed that address the requirements listed above.
The authors, together with other experts, discussed and developed a draft
standards-based IT architecture in a series of meetings at standards development
organisations (SDOs):
• Interoperabilitätsforum der Österreichischen SDOs, Vienna, Austria, 17.1.2017
• HL7 Work Group Meeting, San Antonio, USA, 16-20.1.2017:
  o HL7 Health Care Devices Work Group, including experts from the IHE Patient Care
    Devices Technical Work Group and from the IEEE 11073 Standards Work Group
  o HL7 Imaging Integration Work Group
At these meetings, expert opinions on the following issues were collected and
considered:
• Is there existing work that covers the requirements listed above (exchange protocols,
  nomenclatures, …)?
• Is there a working group that is currently developing standards or profiles for the
  requirements?
• Is there cooperation between SDOs to assure a consistent set of interoperability
  standards and profiles?
• Which existing working group seems best suited to define the specifications needed
  to address the requirements?
• When can results be expected, considering the workload and available resources?
The draft architecture was developed, incorporating the feedback from the
discussions.
3. Results
The experts contacted at the workgroup meetings responded that no existing standard
or profile covers all the requirements of the biosignal use cases described in this work.
There are, however, existing standards from HL7, the IEEE 11073 series and DICOM,
as well as profiles from IHE, that may be developed further. Many experts especially
recommended the nomenclature and coding rules for medical device data, defined in
the IEEE 11073 series of standards, as a valuable contribution.
Currently, working groups within IEEE 11073, HL7, IHE and PCHA are working in a
concerted effort towards standards for exchanging vital parameters (e.g. blood pressure
and weight scale readings). These working groups cooperate, and most experts expect a
consistent set of standards and profiles. Some work on biosignals has already started,
also within DICOM. However, no final results are expected in the year 2017. Many
experts suggested joining these existing efforts, with results not to be expected before
the year 2018. Many provided feedback on the initial architecture that was presented to
them.
Figure 1 shows the first steps of the simple case, where a device is able to store a
biosignal in GDF format. Tools, e.g. from the BioSig project [10], then export the
structured header from the GDF file in JSON format.
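To illustrate what such a JSON export might look like, the following sketch builds a header-like structure and serializes it. The field names are hypothetical, chosen to illustrate the requirements listed above (per-channel sampling rates, scaling factors, data types); they do not reproduce the actual GDF header layout or BioSig's export format.

```python
import json

# Illustrative sketch only: these field names are hypothetical and do not
# reproduce the real GDF v2 header or the BioSig JSON export.
header = {
    "version": "GDF 2.51",
    "patient": {"id": "ANON-0001", "sex": "F", "birthdate": None},
    "recording": {"start": "2017-01-17T10:30:00", "device": "ExampleAmp"},
    "channels": [
        {"label": "ECG Lead II", "unit": "mV",
         "sampling_rate_hz": 500, "datatype": "int16",
         "scale": {"gain": 0.0049, "offset": 0.0}},
        {"label": "Respiration", "unit": "Ohm",
         "sampling_rate_hz": 25, "datatype": "int16",
         "scale": {"gain": 0.1, "offset": 0.0}},
    ],
}

# The JSON form could accompany the binary GDF file when both are
# handed to a FHIR resource for storage and registration.
print(json.dumps(header, indent=2))
```

Note that the two channels carry different sampling rates and scaling factors, one of the key requirements that many existing formats do not cover.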
Figure 2 shows the case where a device stores the biosignal in a file of another format,
and the further processing steps. The BioSig project [10] provides converters for more
than 100 file formats that can generate a GDF file. A FHIR resource, called Type A
within this work, may then be defined that consumes the biosignal data as a GDF file
together with the header information in JSON format, and stores it on a FHIR server.
Alternatively, a FHIR resource of Type B may be defined that consumes the same GDF
file and JSON header as Type A, but stores the biosignal file in a DICOM Picture
Archiving and Communication System (PACS) as a DICOM information object. The
DICOM object generated by a Type B resource may include the GDF file as it was
provided.
Figure 1: Data from a biosignal measurement is stored in GDF format. The header information may then be
exported in JSON format, e.g. using tools from the BioSig project [10].
A third FHIR resource of Type C may be defined that consumes the same GDF
file and JSON header as Types A and B. However, it converts the incoming GDF file into
a DICOM waveform and stores it in the PACS. The expected advantage of this approach
is that existing DICOM viewers may display the biosignal on the screen.
FHIR resources of Types A, B and C may also read the archived files back and
provide them as a GDF file.
In the discussion with the standardisation experts, it was suggested that further types
of FHIR resources be defined to provide the additional functions that the advanced use
cases require:
• Acquisition of biosignals
• Analysis of biosignals (filtering, integration, annotation, FFT, …)
• Presentation of raw and analysed biosignals and of the derived parameters
• Control of multiple FHIR resources, orchestrating the above within a multi-step protocol
Figure 2: Proposal for a standardized data transfer architecture using GDF, FHIR and DICOM.
Many experts agreed that it may well be feasible to implement these use cases by
defining and implementing additional FHIR resources. It was, however, noted that the
available teams will be engaged with existing plans in the near future. Many experts
reported that work on biosignal standards is likely to start later in the year 2017, and
that standards for the basic use cases may be expected over the year 2018. The experts
also reported that work on the advanced use cases will only start once the basic use
cases have provided first tangible results and implementations.
4. Discussion
In this work a draft standards-based architecture for basic biosignal use cases was
developed and discussed with standards experts in Austria and internationally. It was
found that existing standards and profiles do not cover the requirements from the
biosignal use cases. Working groups within IEEE 11073, HL7, DICOM, IHE and PCHA
were identified that have great potential to successfully cover the requirements of the
basic biosignal use cases described in this work.
DICOM experts suggested converting biosignal data into DICOM waveforms, as
described in the results (see Figure 2). This will enable users to display the biosignals
on the screen with existing DICOM viewer tools. On the other hand, a DICOM waveform
only supports a single sampling rate for all channels, so channels with lower native rates
must be resampled to the common rate. The storage size of DICOM waveforms may
therefore be substantially larger, e.g. if channels with different sampling rates are
stored in the same file.
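The size penalty can be estimated with a back-of-envelope calculation. The channel configuration and recording duration below are assumed purely for illustration and do not come from this work:

```python
# Back-of-envelope estimate of the storage penalty when channels with
# different native sampling rates must share one common rate (as when
# storing all channels at a single sampling rate in a DICOM waveform).
# Channel setup and duration are illustrative assumptions.
BYTES_PER_SAMPLE = 2          # int16 samples
DURATION_S = 8 * 3600         # e.g. an 8-hour sleep study

channels_hz = {"EEG": 256, "ECG": 256, "airflow": 32, "SpO2": 1}

# Storage at each channel's native rate
native = sum(rate * DURATION_S * BYTES_PER_SAMPLE
             for rate in channels_hz.values())

# Storage when every channel is upsampled to the highest rate
common_rate = max(channels_hz.values())
single_rate = common_rate * DURATION_S * BYTES_PER_SAMPLE * len(channels_hz)

print(f"native rates: {native / 1e6:.1f} MB, "
      f"single rate: {single_rate / 1e6:.1f} MB, "
      f"overhead: {single_rate / native:.2f}x")
```

Even in this modest four-channel example the single-rate file is nearly twice the size; with many low-rate auxiliary channels alongside a high-rate channel, the inflation grows accordingly.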
The discussions with experts revealed many open issues. For example, there is strong
evidence that binary biosignal file formats (e.g. GDF) are to be preferred over
(semi-)structured file formats like XML: structured formats introduce substantial
overhead and severely increase file size, which reduces efficiency and speed when
storing and loading data to and from files. Although the authors conclude that binary
formats are essential for biosignal raw data, this issue remains open for discussion in the
context of different use cases: header information that is included in the biosignal file
may, for example, be provided both in structured and in binary form to enable
registration of biosignals, e.g. for fast and efficient search.
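The overhead of structured encodings is easy to demonstrate. The sketch below compares the size of the same int16 samples stored as packed binary versus a naive per-sample XML encoding; the XML layout is invented for illustration and is not any standard biosignal schema:

```python
import random
import struct
import xml.etree.ElementTree as ET

# 10,000 random int16 samples standing in for raw biosignal data
random.seed(0)
samples = [random.randint(-32768, 32767) for _ in range(10_000)]

# Binary: 2 bytes per sample, little-endian int16 (as in GDF-like formats)
binary = struct.pack(f"<{len(samples)}h", *samples)

# Naive XML: one element per sample (illustrative worst case)
root = ET.Element("signal")
for s in samples:
    ET.SubElement(root, "sample").text = str(s)
xml_bytes = ET.tostring(root)

print(f"binary: {len(binary)} bytes, XML: {len(xml_bytes)} bytes, "
      f"ratio: {len(xml_bytes) / len(binary):.0f}x")
```

A real XML schema would batch samples rather than wrapping each one, but even compact text encodings typically remain several times larger than packed binary, which is the efficiency argument made above.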
Further work will now engage with the existing standardisation efforts identified in
this work. A first effort will address the basic use cases. In later phases, additional FHIR
resources may then extend the architecture for advanced use cases like clinical protocols
and experimental work.
Acknowledgment
References
[1] ISO 11073-91064 (2009): Health informatics - Standard communication protocol: Computer-assisted
electrocardiography.
[2] Bob Kemp: European Data Format (EDF). http://www.edfplus.info/, last access 7.2.2017
[3] Digital Imaging and Communication (DICOM): Waveform Module.
http://dicom.nema.org/medical/Dicom/2015b/output/chtml/part03/sect_C.10.9.html, last access
7.2.2017
[4] Integrating the Healthcare Enterprise (IHE): Patient Care Device Technical Framework, Revision 6.0,
November 09, 2016, Volume 1.
http://www.ihe.net/uploadedFiles/Documents/PCD/IHE_PCD_TF_Vol1.pdf, last access 7.2.2017
[5] HL7 CDA® R2 Implementation Guide: Personal Healthcare Monitoring Report, Release 1, 3.1.2017.
http://www.hl7.org/documentcenter/private/standards/cda/HL7_CDAR2_PHMRPTS_R1_2017JAN.zip,
last access 7.2.2017
[6] Personal Connected Health Alliance (PCHA): Continua Design Guidelines, Version 2016.
https://cw.continuaalliance.org/document/dl/14699, last access 7.2.2017.
[7] Alois Schlögl: An overview on data formats for biomedical signals. In: Image Processing, Biosignal
Processing, Modelling and Simulation, Biomechanics (2009), S. 1557 – 1560, World Congress on
Medical Physics and Biomedical Engineering; 2009.
[8] Alois Schlögl: GDF - A general dataformat for biosignals, v10, specification available online
(https://arxiv.org/abs/cs/0608052), last access 7.2.2017.
[9] ÖNORM K 2204 (15.11.2015): General data format for biomedical signals. Austrian Standards Institute,
Committee 238 Medical informatics.
[10] Alois Schlögl: The BioSig Project. http://biosig.sourceforge.net/, last access 7.2.2017
[11] Health Level 7 (HL7): Fast Healthcare Interoperability Resources. https://www.hl7.org/fhir, last access
7.2.2017.
[12] P. Urbauer, M. Frohner, M. Forjan, B. Pohn, S. Sauermann, and A. Mense, “A Closer Look on Standards
Based Personal Health Device Communication: A Résumé over Four Years Implementing
Telemonitoring Solutions,” Eur. J. Biomed. Informatics, vol. 8, no. 3, pp. 65–70, 2012.
[13] P. Urbauer, S. Sauermann, M. Frohner, M. Forjan, B. Pohn, and A. Mense, “Applicability of
IHE/Continua components for PHR systems: Learning from experiences,” Comput. Biol. Med., vol. 59,
pp. 186–193, Apr. 2015.
[14] S. Sauermann, M. Bijak, M. Reichel, C. Schmutterer, D. Rafolt, W. Mayr, H. Lanmüller.
BioSignalizer—A Class Library for Acquisition, Analysis, Display, and Management of Biosignals in
Clinical and Experimental Research. In Adlassnig, K.-P. (ed.) Intelligent Systems in Patient Care.
Proceedings of the EUNITE Workshop, Österreichische Computer Gesellschaft, 5. Oct. 2001, Vienna,
Austria, p.143.
Health Informatics Meets eHealth 363
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-363
Abstract. The purpose of our investigation was to develop a novel, state-of-the-art
digital skin imaging method capable of remote monitoring and objective
assessment of Radiation-Induced Dermatitis (RID). To this end, radiation therapy
related side effects were assessed by medical experts according to the Common
Terminology Criteria for Adverse Events (CTCAE) grade of severity in 20 female
breast cancer patients in a clinical trial over a treatment time frame of 25-28
radiation cycles, 50.0 – 50.4 Gy each. Furthermore, the intensity of the developing
skin erythema was documented using conventional spectrophotometry plus digital
skin imaging. From this we could derive the Standardized Erythema Value (SEV), a
novel objective parameter which, in contrast to the single parameters L* and a*,
delivers a long dynamic measurement range for analyzing RID from bright to very
dark skin tones. Methodological superiority of the SEV over spectrophotometer
measurements could be demonstrated in terms of a higher sensitivity and by enabling
signal intensity mapping in the analyzed skin images. The patent derived from this
work enables novel objective dermatologic eHealth applications in a broad range of
medical and industrial uses, likewise opening the window to augmented dermatology.
This first-of-its-kind system has since been further developed into the medical device
product Scarletred®Vision, which is available on the market for primary usage in
clinical trials and in medical routine.
1. Introduction
The purpose of the investigation was to develop a novel, state of the art digital skin
imaging method capable of remote monitoring and objective assessment of radiation
induced dermatitis (RID) for usage in clinical trials and medical routine. Radiation
damage of the skin can occur as a result of cancer treatment and represents one of the
most frequent side effects of radiotherapy, leading to acute inflammation in 95% of
patients, of whom 87% develop moderate to severe RID during or at the end
of the treatment [1]. The pathophysiological cause of the reaction is linked to the massive
production of reactive oxygen species (ROS) induced during irradiation treatment
1 Corresponding Author: Dr. med. Richard PARTL, Address: Medical University of Graz - Department
of Therapeutic Radiology and Oncology, 8036 Graz, Austria, E-Mail: richard.partl@medunigraz.at.
364 R. Partl et al. / 128 SHADES OF RED: Objective Remote Assessment of Radiation Dermatitis
and in the course of radiochemotherapy [2, 3]. The frequency as well as the severity of
the reaction depends on aspects associated with the therapy (radiation quality, dose per
fraction, cumulative dose, fraction scheme, size of the treatment area, concomitant
therapy, previous radiation, localization of irradiated area) and on aspects associated with
the patient (skin type, sensitivity to radiation, concomitant diseases) ranging from
moderate erythema to deep ulcerations. Pruritus, erythema, skin distension, epitheliolysis,
and pain affect not only the quality of life but also pose a risk of an infection of open
wounds. Consequently, this may lead to treatment interruptions or discontinuations of
the irradiation therapy and longer deferment of the subsequently planned system therapy.
Although there is still no evidence that prophylactic treatments, beyond keeping the
irradiated area clean and dry, are effective in reducing the incidence or severity of RID
[4], most physicians advocate the topical use of aloe vera gel, trolamine, or Aquaphor (a
petrolatum-based ointment) to minimize discomfort [5, 6]. However, promising novel
therapeutic concepts for the treatment of inflammatory conditions, based on the
neutralization of increased ROS levels, are in advanced clinical research
(ClinicalTrials.gov identifier NCT01513278). Treatment associated toxicities can impair
quality of life and adversely affect outcome. In order to cope with these dermal toxicities,
early and sensitive detection, precise documentation and objective classification are of
great importance.
The current state of the art for classifying these skin reactions is based on visual
inspection of morphologic alterations only. The most common system is the Common
Terminology Criteria for Adverse Events (CTCAE v. 4.03), developed by the Radiation
Therapy Oncology Group (RTOG) and the National Cancer Institute (NCI), dividing skin
reactions into five distinct grades according to the degree of severity [7]. Grade 1
changes include faint erythema or dry desquamation, which may be accompanied by
pruritus, skin distension, hair loss, and pigment alteration. These changes normally occur
a couple of days or up to a couple of weeks after the beginning of treatment. Grade 2
RID changes include moderate to brisk erythema or a patchy moist desquamation, mostly
confined to skin folds and creases, and a moderate edema. These changes are often
painful and bear an increased risk of infection [8]. In grade 3 RID the area of moist
desquamation spreads to areas outside of the skin folds. Hemorrhage from minor trauma
and abrasion are often present. Grade 4 RID is a life-threatening condition characterized
by skin necrosis and ulceration of full thickness dermis. There is a particularly high risk
of spontaneous bleeding. These changes are very painful and are characterized by poor
healing. Skin grafts may be needed. Grade 5 RID leads to the death of the patient. The
main shortcoming of this and other clinical classification systems is the subjective
assessment, and thus the description, of the observed skin condition, whose perception
can vary greatly among assessors and may even differ for one assessor over the course
of one day. A further drawback of this method is the classification into only five grades.
Thus, minor differences in the skin condition, as needed for clinical evaluation and
comparison of the effectiveness of topical medication, cannot be sufficiently indicated,
especially when study groups are small. To reduce inter- and intra-observer variability,
and for early, sensitive and quantitative assessment of the degree of RID, objective tools
are urgently needed.
Method establishment was carried out in the course of a Phase I drug trial
(NCT01513278) in which 20 female patients with histologically confirmed early-stage
breast cancer were included to evaluate the safety and efficacy of a novel biological
medical product in the treatment of RID. In parallel, we conducted 2862 single point
spectrophotometric measurements, each consisting of three specific color coordinates
(L*a*b*) within the CIELAB color space [9]. On this basis we derived the
SEV, a novel objective parameter which is capable of measuring erythema based skin
alterations while providing a long dynamic range from bright to very dark
skin tones (Fitzpatrick type I to VI). In contrast to spectrophotometric methods
measuring L*, a* or b* only, the SEV is based on the algorithm (L*max - L*) x a*. It can
be used independently of the skin type, and takes into account the basic or even a
changing skin color of a subject. The observed change in the analyzed erythema signal
was significantly higher (p<0.0001) with our algorithm when compared to the single
parametric measurement of a*, proving the higher sensitivity of our method. We
consequently applied the SEV method to the acquired digital skin images, whereby,
to our surprise, erythema waves reflecting the applied irradiation treatment setup could
be unveiled for the first time. We therefore assume that our novel skin imaging
and analysis method is superior to spectrophotometry and capable of measuring
both the efficacy and the side effects of investigational novel skin drugs and
treatment methods for RID and other related inflammatory skin diseases.
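The core idea can be illustrated with a minimal numeric sketch (the values are invented for illustration and L*max is assumed to be 100, the upper bound of the spectrophotometric L* axis): two measurements with identical a* but different lightness are indistinguishable by a* alone, while the SEV separates them.

```python
# Illustrative sketch of the SEV formula (L*max - L*) x a*.
# Assumption: L*max = 100, the upper bound of the CIELAB lightness
# axis for spectrophotometric readings; all values are made up.

L_MAX = 100.0

def sev(l_star: float, a_star: float) -> float:
    """Standardized Erythema Value for one L*a*b* measurement."""
    return (L_MAX - l_star) * a_star

# Two patches with the same redness coordinate a* = 20:
light_patch = sev(l_star=70.0, a_star=20.0)  # lightly pigmented skin
dark_patch = sev(l_star=40.0, a_star=20.0)   # more pigmented skin

print(light_patch)  # 600.0
print(dark_patch)   # 1200.0 -> a* alone could not tell these apart
```

The example shows why folding L* into the signal resolves the duplicate-a* ambiguity discussed later in Section 4.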
2. Methods
In the period between February and June 2012, 20 female patients with histologically
confirmed early-stage breast cancer, who underwent prior breast-conserving surgery,
were included in a prospective clinical study (ClinicalTrials.gov identifier
NCT01513278). Further eligibility requirements were: age 18 years or older, Karnofsky
Performance Status (KPS) ≥ 80% and bra cup size ≤ D. Patients were excluded if they
had bilateral or inflammatory breast cancer, lymphangiosis carcinomatosa, medically
significant dermatologic conditions affecting the irradiated area, if the use of other agents
with the aim of preventing and/or treating RID was planned, if they took concomitant
medications which might exacerbate radiation damage to the skin, or if they had a history
of previous breast radiation therapy.
All patients received whole-breast irradiation by using the Clinac® iX system linear
accelerator (Varian Medical Systems Inc.) and applying standard opposed medial and
lateral tangent fields to a total dose of 50.0 – 50.4 Gy in 25 – 28 fractions (5 x 1.8 – 2
Gy/week). Dosimetric measurement was done via thermoluminescent dosimeters (TLD) to
validate the locally applied dose at the skin surface. According to the study protocol, RID
was assessed daily, based on the EORTC/RTOG-CTCAE v4.03 classification system
beginning at baseline before the start of radiotherapy.
Spectrophotometric skin measurement was carried out at screening and daily from
fraction 6 to fraction 25/28 always prior to the application of the study medication. The
Spectrophotometer CM-700d (Konica Minolta, Tokyo, Japan) was used with a
measurement diaphragm Ø 8mm in MAV and auto-calibration mode. Data of six single
spot measurements was collected per investigation time point and patient in determined
areas of the medial/lateral breast region. In total, 2862 measurements were carried out,
each comprising the complete L*a*b* (CIELAB) color space, a globally used industry
standard approved by the Commission Internationale de l'Éclairage (CIE) [9]. It
describes all the colors visible to the human eye and was created to serve as a device-
independent model to be used as a reference. The three coordinates of the CIELAB
reflect the values of a measured color with respect to the lightness (L*, ranging from
black to white), its position on the red/green axis (a*, negative values indicate green
while positive values indicate red) and its position on the yellow/blue axis (b*, negative
values indicate blue and positive values indicate yellow). For objective analysis we
separated the parameters L*, a* and b* for each individual patient (#01 to #20) and mean
values were calculated per fraction (FR01 to FR28) and region.
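The per-fraction averaging step can be sketched as follows (the record layout and field values are hypothetical stand-ins for the study data, which is not reproduced here):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical measurement records: (patient, fraction, region, L*, a*, b*).
measurements = [
    ("#01", "FR06", "medial", 61.2, 14.1, 15.3),
    ("#01", "FR06", "medial", 60.8, 14.5, 15.1),
    ("#01", "FR06", "lateral", 62.0, 13.2, 14.8),
]

# Group the single-spot readings per (patient, fraction, region)
# and average each CIELAB coordinate separately.
groups = defaultdict(list)
for patient, fraction, region, l, a, b in measurements:
    groups[(patient, fraction, region)].append((l, a, b))

means = {
    key: tuple(round(mean(axis), 2) for axis in zip(*vals))
    for key, vals in groups.items()
}

print(means[("#01", "FR06", "medial")])  # (61.0, 14.3, 15.2)
```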
To calculate the SEV, we convert the image from the RGB into the CIELAB color space
in two steps, starting with the linear transformation of the RGB to CIEXYZ [10] (Eq. 1):
$$\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = M_{RGB}^{-1} \cdot \begin{pmatrix} R \\ G \\ B \end{pmatrix} \qquad (1)$$

In a second step, CIEXYZ is converted to CIELAB:

$$L^{*} = 116\, f_{Y} - 16, \qquad a^{*} = 500\,(f_{X} - f_{Y}), \qquad b^{*} = 200\,(f_{Y} - f_{Z}) \qquad (2)$$

yielding the color value $(L^{*}, a^{*}, b^{*})$, with

$$f_{X} = f\!\left(\frac{X}{X_{n}}\right), \qquad f_{Y} = f\!\left(\frac{Y}{Y_{n}}\right), \qquad f_{Z} = f\!\left(\frac{Z}{Z_{n}}\right)$$

and

$$f(t) = \begin{cases} t^{1/3} & \text{if } t > \left(\frac{6}{29}\right)^{3} = \frac{216}{24389} \\[4pt] \frac{841}{108}\, t + \frac{4}{29} & \text{otherwise} \end{cases}$$

where $(X_{n}, Y_{n}, Z_{n})$ is the reference white point of a specified illuminant. We use the D65 (indirect daylight) illuminant with $(X_{n}, Y_{n}, Z_{n}) = (0.95047, 1.0, 1.08883)$ as a reference.
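A possible implementation of this two-step conversion is sketched below. The transformation matrix is the standard sRGB/D65 matrix and the sRGB gamma linearization is assumed; the paper's exact matrix follows [10] and may differ.

```python
# Sketch of the two-step RGB -> CIEXYZ -> CIELAB conversion.
# Assumptions: 8-bit sRGB input, the standard sRGB -> XYZ matrix
# (linear-light RGB, D65), and the paper's D65 white point.

D65 = (0.95047, 1.0, 1.08883)

M = (  # row-major sRGB -> XYZ matrix
    (0.4124564, 0.3575761, 0.1804375),
    (0.2126729, 0.7151522, 0.0721750),
    (0.0193339, 0.1191920, 0.9503041),
)

def _linearize(c: float) -> float:
    """Undo the sRGB gamma encoding for one 0-255 channel value."""
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def _f(t: float) -> float:
    """CIELAB companding function f(t) with its linear toe."""
    return t ** (1.0 / 3.0) if t > (6 / 29) ** 3 else (841 / 108) * t + 4 / 29

def rgb_to_lab(r: int, g: int, b: int) -> tuple:
    rl, gl, bl = _linearize(r), _linearize(g), _linearize(b)
    x, y, z = (m0 * rl + m1 * gl + m2 * bl for m0, m1, m2 in M)
    fx, fy, fz = (_f(v / n) for v, n in zip((x, y, z), D65))
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

l, a, b_ = rgb_to_lab(255, 0, 0)  # pure red
print(round(l), round(a), round(b_))  # roughly 53 80 67
```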
A redness gradient was generated in silico, from the darkest red color to the brightest red
color, using a color depth of 8 bit per color channel R'G'B'. This was achieved by
increasing the value of the primary R' in the R'G'B' (RGB) color space from 0 to 255
(8 bit) and then keeping it constant. On the increasing side of R', the primaries G' and B'
were kept constant at the value 0, whereas on the constant side of R', both G' and B' start
from 0 and increase to 255 in steps of 1. The resulting values were converted into the
CIELAB color space. Corresponding erythema values were calculated by the formula
((255 – L*) × a*), divided by 255 to obtain comparable values in the range from 0 to 255.
To obtain only the red contingent of the CIELAB color space, the signal cut-off for a*
was defined at 128. For objective quantitative assessment and comparison between
single images, the obtained erythema signals were normalized to the range 0-1 and
optionally also used for signal intensity mapping in a next step [12].
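These steps can be sketched as follows. Two assumptions are flagged explicitly: the a* cut-off at 128 is read here as discarding the green half of an 8-bit a* axis (where 128 encodes the neutral point), and the per-color CIELAB values, which the paper obtains by conversion, are replaced by hypothetical stand-in numbers to keep the sketch self-contained.

```python
# Sketch of the in-silico redness gradient and the 8-bit SEV formula.
# Interpretation assumption: a* below 128 (green side of the 8-bit
# axis) carries no erythema signal; the (L*, a*) pairs are stand-ins.

# Step 1: build the R'G'B' ramp, 8 bit per channel.
ramp = [(r, 0, 0) for r in range(256)]        # R' rises, G' = B' = 0
ramp += [(255, v, v) for v in range(1, 256)]  # R' constant, G', B' rise

def sev_8bit(l_star: float, a_star: float) -> float:
    """8-bit erythema value ((255 - L*) * a*) / 255 with the a* cut-off."""
    if a_star < 128.0:  # green side of the 8-bit a* axis: no signal
        return 0.0
    return (255.0 - l_star) * a_star / 255.0

# Step 2: hypothetical 8-bit (L*, a*) pairs standing in for the converted ramp.
lab_pairs = [(40.0, 200.0), (90.0, 160.0), (180.0, 140.0), (120.0, 100.0)]
raw = [sev_8bit(l, a) for l, a in lab_pairs]

# Step 3: normalize to [0, 1] for comparison between single images.
lo, hi = min(raw), max(raw)
normalized = [(v - lo) / (hi - lo) for v in raw]

print(len(ramp))           # 511 gradient colors
print(round(max(raw), 1))  # 168.6
```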
The calculated SEV signal was extracted from the original image as a grey value image.
Subsequently, the high dynamic SEV grey scale images were pseudo-colored to optimize
for human perception. Each single intensity value was mapped to a color according to
our developed SEV color map, which is designed and tailored to the underlying signal.
It defines a color gradient over the minimum to maximum signal range, enabling the
visualization of more details and potential signal saturation [12].
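The pseudo-coloring step can be sketched as a lookup from the normalized signal into a color gradient. The gradient stops below are placeholders chosen for illustration only; the actual SEV color map is the one described in the cited patent [12].

```python
# Illustrative pseudo-color mapping: a normalized SEV value in [0, 1]
# is linearly interpolated between gradient stops. The real SEV color
# map [12] is tailored to the signal; these stops are placeholders.

STOPS = [  # (position, (R, G, B))
    (0.0, (0, 0, 64)),   # low signal: dark blue
    (0.5, (0, 200, 0)),  # mid signal: green
    (1.0, (255, 0, 0)),  # saturated signal: red
]

def pseudo_color(value: float) -> tuple:
    value = min(max(value, 0.0), 1.0)  # clamp to the normalized range
    for (p0, c0), (p1, c1) in zip(STOPS, STOPS[1:]):
        if value <= p1:
            t = (value - p0) / (p1 - p0)
            return tuple(round(a + t * (b - a)) for a, b in zip(c0, c1))
    return STOPS[-1][1]

print(pseudo_color(0.0))  # (0, 0, 64)
print(pseudo_color(0.5))  # (0, 200, 0)
print(pseudo_color(1.0))  # (255, 0, 0)
```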
Data was summarized with respect to demographic and baseline characteristics, safety
observations and measurements, and efficacy observations and measurements. Summary
statistics include the mean, N, standard deviation, median, quartiles, minimum, and
maximum values for continuous variables, and frequencies and percentages for
categorical variables. Demographics and baseline data was summarized overall by using
standard summary statistics. For all variables assessed, conducted statistical tests (paired
t-testing, unpaired t-testing) and estimated p-values are descriptive.
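The summary statistics listed above can be computed with standard library tools, as in the following sketch (the input values are hypothetical, not the trial data):

```python
from statistics import mean, median, stdev, quantiles

# Sketch of the summary statistics for one continuous variable.
# The values below are hypothetical and do not reproduce trial data.
ages = [40, 45, 52, 58, 58, 61, 63, 67, 70, 72]

summary = {
    "n": len(ages),
    "mean": mean(ages),
    "sd": round(stdev(ages), 1),
    "median": median(ages),
    "quartiles": quantiles(ages, n=4),
    "min": min(ages),
    "max": max(ages),
}
print(summary["median"])  # 59.5
```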
3. Results
Patient demographics: All patients were Caucasian females with a median age of
58 years (range, 40-72). In 10 patients the carcinoma was located in the right breast, in
10 in the left breast. The most common skin type according to the Fitzpatrick scale was
skin type 3 (65%), followed by skin type 2 in 20% of the population. One patient had
diabetes mellitus type 2 and another one had a contact dermatitis to band-aids. One
patient discontinued the participation in the study, thus the analyzed population included
19 patients (n = 19). Overall, the incidence of RID grade 2 was low. Only 4 out of 20
patients (20%) developed grade 2 RID, whereas 14 developed grade 1 RID (70%). Only
one patient (#18) did not develop signs of radiation skin damage. One patient withdrew
from the study after the first week of cancer irradiation treatment (Table 1).
Figure 1A. Signal trend analysis of spectrophotometry data and paired t-testing of baseline versus irradiation
fraction 25 for L* (p < 0.0002, ***), a* (p < 0.0001, ****) and b* (p = 0.4357); n = 19, m = slope
Figure 2A. Signal trend analysis of spectrophotometry data and unpaired t-testing of the SEV given by (L*max
- L*) x a* with developed RID grading; m = slope
Figure 2B. Superior sensitivity of the SEV for measuring changes in the intensity of erythema when compared
to a* (p<0.0001); shown mean and SD of spectrophotometry
Figure 3. In silico image analysis of a created erythema gradient; Calculated SEV, L* and a* values are based
on 8 bit per channel according to the CIE L*a*b* color space
Figure 4. Objective image analysis of the developed skin erythema over study time and unpaired t-testing of
the SEV* with developed RID grading; n=19 patients, m = slope
4. Discussion
Despite the fact that numerous mobile digital devices can nowadays already support
clinicians through efficient local data collection, signal tracking and objective remote
analysis, a convenient and efficient solution for the objective measurement of visual skin
changes is still pending. Thus, the aim of this paper was to identify visual objective
parameters to create a novel method enabling efficient, automated and remote assessment
of erythema related skin diseases such as radiation induced dermatitis (RID) in clinical practice
Figure 5. Cycling erythema waves: Objective image analysis of the developed skin erythema (SEV) in
individual patients over treatment time; w = week
Figure 6. Augmented Dermatology by pseudo-color signal intensity mapping of the SEV; exemplary images
of patient #11 at baseline, FR 11, FR 24 and study end (final)
and trials. According to historical data, breast size, smoking history, body mass index
(BMI) and comorbidities such as diabetes, rheumatoid arthritis and hypertension are
suspected to correlate with an increased risk of developing RID in the course of cancer
irradiation therapy [3]. However, the currently published data are conflicting and reliable
prognostic parameters are still missing. According to a recent single blind randomized
phase III trial in breast cancer patients, no association between dermatitis and BMI or
breast size was observed [4], which is in line with our own observations (Table 1).
As a basis for our study, time resolved spectrophotometric analysis was conducted
to quantify the intensity of erythema in combination with clinical assessment of observed
skin toxicities in 20 irradiated female breast cancer patients. We observed RID grade 0-
2 varying from mild erythema to moist desquamation confined to the breast fold. The
radiation field size (cm²) and the Fitzpatrick scale (1-2 vs. 3-4) did not deliver a
statistically significant correlation with RID grade 2 in our trial. According to previous
studies, spectrophotometric measurements of the a* and L* values are still suggested as
objective parameters useful for analyzing RID [13-15]. In contrast, we could show that
assessment of a* alone fails due to the possible generation of duplicate a* values, which
impedes its usage for objective quantitative erythema assessment. As shown by in silico
analysis, the observed effect is caused by a significant decrease of the measured a* signal
when the L* is simultaneously lowered (Figure 3). In a biological context, a decrease in
the L* value can simply be caused by an increase in skin pigmentation, which is typically
observed in the development of RID. This means the higher a*, the more intense the skin
reddening, and the lower L*, the more pigmented the patient's skin becomes upon
irradiation treatment (Figure 1 A/B). Because of this, measured duplicate a* values can
originate from either a light or a dark red skin region, which thus cannot be quantitatively
distinguished by current spectrophotometric methods. Consistent with this result, the a*
value alone provides a linear scaling of the measured signal only for a very narrow
application range, when the hemoglobin or melanin content in the skin is low [14, 15].
Consequently, in a clinical study a higher hemoglobin or melanin content can lead the
investigator to an over- or underestimation of the visually perceived skin reddening and
hence the connected skin toxicities.
Our novel method however overcomes the pitfall of spectrophotometric skin
analysis by introducing the Standardized Erythema Value; SEV (Figure 2 A/B).
Importantly and in strict contrast to methods which apply the L*, a* or b* value only,
the SEV enables objective quantification of skin erythema (Figure 4). It can be used in a
broad range of skin types, and considers the basic or even a changing skin color of a
subject over the investigation time. Thereby it is the first objective erythema parameter
that provides a linear scale from bright skin (Fitzpatrick skin type I) to very dark skin
tones (Fitzpatrick skin type VI) [12]. This can be essential not only in the context of RID but
also in other skin diseases where visual skin color changes are evident.
To our surprise, application of the SEV method to consecutively taken patient
images also increased the quality of erythema assessment to such a degree that, for the
first time, cycling erythema waves became visible at the single patient level. The
pronounced erythema waves are well synchronized with the weekly irradiation treatment,
which includes two days of treatment pause over the weekend (Figure 5). This
significantly higher resolution of the image analysis results from the high data density of
a pixel by pixel based quantification of the SEV signal, which, owing to the high number
of available measurement replicates, is higher than in the spectrophotometric analysis by
a magnitude of 1x10^6. Thereby the superiority of the SEV for usage in image analysis
over spectrophotometry can be demonstrated, especially when visual differences in the
erythema are low or the signal appears inhomogeneous and the selection of representative
skin areas is difficult for the individual observer. More detailed follow-up investigations
at the single patient level also lead us to assume that a higher SEV at baseline might be
associated with an increased risk of developing RID grade 2 or higher. Whether the SEV
can prospectively also serve as a novel risk parameter for developing severe forms of
RID still needs to be proven in a bigger patient cohort, which upon success could enable
early therapeutic intervention and/or fine tuning of the currently used standard irradiation
treatment setup.
Furthermore, by combining the method with the potential of signal intensity mapping,
we could open a window to Augmented Dermatology (Figure 6). This powerful tool is
assumed to be capable of prospectively supporting the objective assessment of skin
erythema and time dependent visual changes of skin inflammation or pigmentation, not
only in RID but also in any other inflammatory skin reaction where objective assessment
of the erythema intensity and scoring of the erythema over time is of relevance, such as
in contact dermatitis, rosacea, acne, psoriasis, erysipelas, chronic wounds, systemic
lupus, Kawasaki disease and drug allergy, to provide only some representative examples.
Due to the broad medical application scope of our technology, we have patented the
method and developed it further in the form of the CE marked medical device product
Scarletred®Vision. This first-of-its-kind medical device platform is fully mobile and
capable of running on commercial smartphones. It also integrates a novel digital gold
standard in the form of a skin patch, which serves as an internal reference and makes it
possible to standardize and automate the process of image documentation and
measurement by computer assisted analysis of the taken skin images with respect to
varying local light conditions and the distance of the imaged object. The dermatologic
eHealth technology is in use in a broad range of medical and industrial applications and
supplied online via www.scarletredvision.com.
The study was carried out in accordance with the Declaration of Helsinki, all applicable
laws and regulations of Austria, where the study was conducted, and in compliance with
the current Good Clinical Practice guideline (CPMP/ICH/135/95). It was approved by
the appropriate IEC and by the Austrian competent authority AGES (Österreichische
Agentur für Gesundheit und Ernährungssicherheit).
References
[1] McQuestion M; Evidence-based skin care management in radiation therapy; Semin Oncol Nurs. 2006;
22(3):163-73
[2] Tanaka E, Yamazaki H, Yoshida K, Takenaka T, Masuda N, Kotsuma T, Yoshioka Y, Inoue T.;
Objective and longitudinal assessment of dermatitis after postoperative accelerated partial breast
irradiation using high-dose-rate interstitial brachytherapy in patients with breast cancer treated with
breast conserving therapy: reduction of moisture deterioration by APBI; Int J Radiat Oncol Biol Phys.
2011 Nov 15; 81(4):1098-104.
[3] Kasapović J, Pejić S, Stojiljković V, Todorović A, Radošević-Jelić L, Saičić ZS, Pajović SB;
Antioxidant status and lipid peroxidation in the blood of breast cancer patients of different ages after
chemotherapy with 5-fluorouracil, doxorubicin and cyclophosphamide, Clinical Biochemistry 43
(2010) 1287–1293
[4] Bolderston A, Lloyd NS, Wong RK, Holden L, Robb-Blenderman L; Supportive Care Guidelines Group
of Cancer Care Ontario Program in Evidence-based Care; The prevention and management of acute skin
reactions related to radiation therapy: a systematic review and practice guideline; Support Care Cancer
(2006) 14: 802–817 DOI 10.1007/s00520-006-0063-4
[5] Fisher J, Scott C, Stevens R, et al. Randomized phase III study comparing best supportive care to biafine
as a prophylactic agent for radiation-induced skin toxicity for women undergoing breast irradiation:
Radiation Therapy Oncology Group (RTOG) 97-13. Int J Radiat Oncol Biol Phys 2000; 48:1307-1310
[6] Elliott EA, Wright JR, Swann RS, et al. Phase III Trial of an emulsion containing trolamine for the
prevention of radiation dermatitis in patients with advanced squamous cell carcinoma
of the head and neck: Results of Radiation Therapy Oncology Group Trial 99-13. J Clin Oncol 2006;
24:2092-2097
[7] Cox JD, Stetz JA, Pajak TF; Toxicity criteria of the Radiation Therapy Oncology Group (RTOG) and the European
Organisation for Research and Treatment of Cancer (EORTC), Int J Radiat Oncol Biol Phys 1995;
31:1341-1346
[8] Hymes SR, Strom EA and Fife C: Radiation dermatitis: clinical presentation, pathophysiology, and
treatment; J Am Acad Dermatol. 2006 Jan; 54(1): 28-46
[9] Commission Internationale de l’Eclairage; Colorimetry, 3rd Edition. Tech. rep. 2004
[10] Burger W. and Burge M.J.; Digital Image Processing: An algorithmic introduction using java texts in
computer science; Springer London, 2016
[11] Graphic technology – Spectral measurement and colorimetric computation for graphic arts images.
Standard. Geneva, CH: International Organization for Standardization, ISO, 2009
[12] Schnidar H and Neubauer A; Methods for assessing Erythema; European Patent 2976013A1, Jan.2017
[13] Yoshida K, Yamazaki H, Takenaka T, Tanaka E, Kotsuma T, Fujita Y, Masuda N, Kuriyama K, Yoshida
M, Nishimura T; Objective Assessment of Dermatitis Following Post-operative Radiotherapy in Patients
with Breast Cancer Treated with Breast-conserving Treatment; Strahlenther Onkol 2010 No 11
[14] Takiwaki H; Measurement of skin color: practical application and theoretical considerations; The
Journal of Medical Investigation; JMI, 1 February 1998 (1998-02-01), page 121
[15] Chardon A et al. Skin color typology and sun tanning pathways; International Journal of Cosmetics
Science, vol. 13, no. 4, 1 August 1991 (1991-08-01), pages 191-208
Health Informatics Meets eHealth 375
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-375
1. Introduction
1 Christian Knell, Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg,
Wetterkreuz 13, 91058 Erlangen, Germany. E-Mail: christian.knell@fau.de
376 C. Knell et al. / Developing Interactive Plug-ins for tranSMART Using the SmartR Framework
In discussions with researchers from the Division of Molecular and Experimental Surgery,
the Radiation Clinic and the Women's Clinic of the University Hospital Erlangen, a list
of frequently used methods was generated. The methods were prioritized with regard to
perceived limitations. Based on this list, we decided to implement the survival analysis,
because it had not yet been realized by an existing SmartR workflow, its complexity
seemed adequate for a first prototype, and we had specific requirements from two of our
research partners.
2. Methods
Between 01/2016 and 06/2016, first insights into the SmartR architecture were gathered
during a master's thesis [7]. Beginning in 10/2016, a prototypical workflow was
implemented, which was brought to a productive state in 01/2017.
Before starting the implementation, a list of features of the old survival analysis was
created:
- support of only one subset; selecting two subsets results in an implicit merging
into a bigger subset
- support of categorical, numerical and high-dimensional data
- a Kaplan-Meier plot visualising the results with an included legend
The list of requirements from our research partners ranged from smaller adjustments,
like the possibility to change the position of the plot legend, to completely new features,
like a so-called risk table listing all the patients at risk at dedicated time points.
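The statistical core of such a workflow, the Kaplan-Meier estimator together with a number-at-risk table, can be sketched in a few lines. This is a generic textbook implementation for illustration, not the SmartR R script itself.

```python
# Generic Kaplan-Meier product-limit estimator with a risk table.
# Input: (time, event) pairs, event=True for death, False for censoring.
# A plain textbook sketch, not the plug-in's actual R code.

def kaplan_meier(observations):
    data = sorted(observations)
    n_at_risk = len(data)
    survival, curve = 1.0, [(0.0, 1.0)]
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = leaving = 0
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]   # True counts as 1
            leaving += 1
            i += 1
        if deaths:
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= leaving       # deaths and censorings leave the risk set
    return curve

def risk_table(observations, time_points):
    """Number of patients still at risk at each requested time point."""
    return [sum(1 for t, _ in observations if t >= tp) for tp in time_points]

obs = [(2, True), (3, False), (5, True), (5, True), (8, False), (11, True)]
print(kaplan_meier(obs))            # survival drops at t = 2, 5 and 11
print(risk_table(obs, [0, 4, 10]))  # [6, 4, 1]
```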
To start the implementation, the first step was to set up a clean tranSMART instance
running the stable and widely used version 1.2.4. After installing tranSMART, the
development environment was set up with the stable version 1.2 of SmartR, which is
available on GitHub [8]. In close correspondence with the main developer of SmartR,
first a simple view and visualisation were realised, and step by step new functionality
was included. Colour-blind-friendly colours [9] were used to reach high discriminability
between the curves of the Kaplan-Meier plot.
3. Results
First a short description of the SmartR architecture will be given. Then our prototype,
which realises the survival analysis visualised as a Kaplan-Meier plot, is presented.
SmartR is a plug-in for tranSMART for realising dynamic and interactive visualisations
and analyses (called workflows) that is built upon the tranSMART-CoreAPI. It is written
in the programming language Groovy using the web application framework Grails and
accesses the data stored in the tranSMART data warehouse through the CoreAPI. The
process of accessing the data is encapsulated in a service called rServeService, which
stores the results in an R session, so that the database has to be accessed only once and
the data can be used in multiple analyses with different parameters, unless there are
changes to the defined concepts and subsets. This reusability accelerates the processing
speed of the analysis by approximately 230% in comparison to the normal
Rmodules [10].
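The caching idea behind rServeService can be illustrated in miniature. The following is a conceptual Python sketch with invented names (`CachingFetcher`, `fake_db_query`); the actual service keeps the fetched data in a server-side R session rather than an in-process dictionary.

```python
# Conceptual sketch of the rServeService caching idea: the expensive
# database fetch runs once per (concepts, subsets) key; subsequent
# analyses with different parameters reuse the cached dataset.
# All names here are illustrative, not tranSMART/SmartR API.

class CachingFetcher:
    def __init__(self, fetch_fn):
        self._fetch_fn = fetch_fn
        self._cache = {}
        self.db_hits = 0

    def fetch(self, concepts, subsets):
        key = (tuple(sorted(concepts)), tuple(sorted(subsets)))
        if key not in self._cache:  # only hit the database once per key
            self.db_hits += 1
            self._cache[key] = self._fetch_fn(concepts, subsets)
        return self._cache[key]

def fake_db_query(concepts, subsets):  # stands in for the warehouse query
    return {"concepts": sorted(concepts), "subsets": sorted(subsets)}

fetcher = CachingFetcher(fake_db_query)
fetcher.fetch(["age", "survival"], ["subset1"])  # first analysis: query runs
fetcher.fetch(["age", "survival"], ["subset1"])  # rerun: served from cache
print(fetcher.db_hits)  # 1
```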
This process takes place in the so-called backend, which is invisible to the workflow
user, who only sees the frontend with different input fields, switches and checkboxes for
defining the parameters of the analysis and running it.
A single workflow is usually split into three steps: fetching data, pre-processing data
(optional) and running the analysis. For defining the user interface, SmartR defines
multiple AngularJS directives [11]. These directives enrich the HTML language, can be
used like normal HTML tags and are rendered as elements for the user interface. The
behaviour of these directives can be controlled by pre-defined attributes.
An example of these directives is the so-called conceptBox [12], which is used to
collect the concepts to be used in a query by drag & drop. This directive offers the
attributes type, min and max for setting the accepted data types (high-dimensional,
continuous or discrete low-dimensional) and the minimum and maximum number of
concepts for a specific conceptBox. Hence, it is possible to validate the selected concepts
dependent on the defined attribute values.
The separation into the different steps can be visually achieved by using the directive
tabContainer. Each tabContainer produces a container which can be switched to by
pressing the corresponding tab. By using the attribute tab-name you can define the title
displayed in the frontend.
Other important directives are the fetch, preprocess and run buttons. They connect the
front- and backend and trigger the underlying SmartR concepts for fetching or
preprocessing the data or running the desired statistical R script.
In the fetch tab, the user assembles the set of concepts by dragging them into the
conceptBoxes. Then, fetch takes the set of concepts and runs the query on the database,
yielding the dataset. Finally, run uses the attribute store-results-in to store the results
of the analysis in an AngularJS variable, which is automatically displayed in the
corresponding D3 visualisation. The visualisation itself is defined as an AngularJS
directive and included in the user interface.
Figure 2 shows a typical user interface with three tabContainers (Fetch Data,
Preprocess and Run Analysis) and, on the fetch tab, two conceptBoxes and one fetch
button. It also shows the inherent validation: the left conceptBox requires at least
one categoric variable, which is visualised by a red border and an error message.
378 C. Knell et al. / Developing Interactive Plug-ins for tranSMART Using the SmartR Framework
Figure 3 gives an overall view of the architecture and the underlying technologies.
So far, only the view, consisting of the visualisation and the user interface,
has been described. The remaining parts are the controller and the model.
The controller initialises the workflow and defines the default values of the
AngularJS variables used in the user interface. The model consists of one or
more R scripts, which contain all the statistical calculations. SmartR requires a function
main in each R script and calls this function automatically. All the data loaded from the
database can be accessed in the scripts via the variable loaded_variables. As both the
model and the controller are highly dependent on the analysis, only limited general
statements are possible; for further insights, a closer look at the already implemented
components is recommended.
Finally, SmartR follows the principle of convention over configuration and enforces a
naming convention and directory structure. Table 1 gives an overview of the locations
where SmartR looks for the front- and backend files of an example workflow called wrkflw.
To enable the new workflow, the location and filename of the visualisation and controller
have to be added to the file SmartRResources.groovy inside the folder grails-app/conf/.
On startup, SmartR scans the folder web-app/HeimScripts/ and generates the list of
available workflows. This list is made available to the user as a dropdown menu in the
web frontend by the SmartRController [13].
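The startup scan can be pictured as follows. This is a minimal Python sketch assuming one subfolder per workflow; the real implementation is the Groovy SmartRController:

```python
from pathlib import Path

def available_workflows(heim_scripts):
    """List workflow names by scanning the HeimScripts folder:
    each subfolder is treated as one available workflow."""
    return sorted(p.name for p in Path(heim_scripts).iterdir() if p.is_dir())
```

The resulting list would then be rendered as the dropdown menu in the web frontend.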
For the frontend, our prototype was separated into the two steps of fetching data and
running the analysis, represented by two tabContainers. In the first step, the user can
define three parameters (time, category and censoring variable) using three conceptBoxes.
In the next step, the user can decide between various options and settings, like width and
height of the plot, the position of the legend or whether the newly implemented risk table
should be displayed or not.
In the backend, the R library survival [14] is used to calculate the statistics
depending on the selected concepts, parameters and settings. Compared to the old
survival analysis, a new feature is the possibility to use the built-in subset concept of
tranSMART, either to compare two subsets with the selected categories in one analysis
or to merge the two subsets into one larger subset and reproduce the behaviour of the
old survival analysis.
The same feature has been implemented for merging all the selected categories and
thus comparing only the two subsets, independently of the categories. The last new
feature is an option to define the interpretation of the censoring variable, i.e. whether a
patient with or without this censoring variable is used for the calculation of the survival
probability. By combining the two merging options and the interpretation feature, a
researcher can perform up to eight different analyses with the selected subsets and
categories.
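The core statistic behind the workflow, the Kaplan-Meier product-limit estimate, can be sketched in a few lines. This is a plain-Python illustration only; the workflow itself relies on the R survival package:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate.
    times: observed follow-up times; events: 1 = event, 0 = censored.
    Returns (time, survival probability) pairs at each event time."""
    data = sorted(zip(times, events))
    at_risk, surv, curve = len(data), 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        ties = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= 1 - deaths / at_risk   # product-limit step
            curve.append((t, surv))
        at_risk -= ties                    # events and censorings leave
        i += ties
    return curve

# Four patients, one censored at t=2:
print(kaplan_meier([1, 2, 2, 3], [1, 0, 1, 1]))
```

Censored patients reduce the risk set without contributing an event, which is exactly why the interpretation of the censoring variable changes the resulting curve.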
Figure 5: The new Kaplan-Meier plot with the added grid in the background, the legend, which can be used to
(de)activate single lines of the plot, and the new risk table under the plot. Merging Subsets is deactivated.
The visualisation with D3 is based on [15] and can be implemented in parallel to the
user interface and the R scripts, as it is nearly completely independent of them. We added
a grid in the background for better visibility, a clickable legend, which enables the user
to deactivate and thereby hide single curves of the Kaplan-Meier plot, and a tooltip
that appears when hovering with the mouse over a single line.
The workflow is now available in version 0.8, is included in the official
repository hosted on GitHub [8] and is freely available. Figures 4 and 5 show the old
and the new Kaplan-Meier plot generated with the same subsets and concepts.
4. Discussion
The built-in tranSMART plug-in Rmodules provides only static analyses and
visualisations of data. This limitation is overcome by the dynamic framework SmartR,
which is built with modern web technologies: AngularJS and D3 for the frontend as
well as Groovy and R for the backend. SmartR currently implements four of the 18
analyses provided by Rmodules, plus two completely new analyses. In comparison,
SmartR workflows are about twice as fast as the currently implemented analyses.
Our workflow covers most of the functions provided by the original analysis in
Rmodules. In its current state, the workflow supports neither high-dimensional nor
numerical data for defining the categories, due to the lack of binning functionality
in SmartR.
All the stated requirements have been implemented. In addition, our workflow offers
new features, such as the possibility to show a risk table, which is standard in
modern survival analyses and mandatory for many publications [16]. Using SmartR's
image capture functionality, the graph can be exported as an image. The usability for
publications is further improved by using colour-blind-friendly and printer-friendly
colours.
The user can build up to eight groups of patients and execute the survival analysis on
each group. Furthermore, the user can merge these groups by subset, by category or
both, and define the interpretation of the censoring variable, each combination
resulting in a different analysis.
The new mouse-over tooltip appearing over a single curve provides additional
information and helps the user interpret the Kaplan-Meier plot. Other features, such as
the newly implemented interactive legend and the possibility to select the height and
width of the plot and the position of the legend, allow a polished presentation of the
Kaplan-Meier plot.
Insights into the SmartR architecture were gained, and the underlying interactions
between the different parts were identified and described. Based on this knowledge, it
was possible to build a workflow prototype of a survival analysis. All of the requirements
have been fulfilled. Due to existing SmartR limitations, some functionality of the old
survival analysis, such as support for high-dimensional and numerical data, could not be
implemented.
The SmartR development team is already working on new SmartR workflows, and
new features (such as binning support) are planned. With the completion of these features,
it will be possible to remove the limitations of our workflow. The recently released
tranSMART version 16.2 already includes SmartR. Self-built workflows can be
added to the official SmartR version and will therefore be part of all future tranSMART
installations, thus becoming available to a wide spectrum of researchers.
Acknowledgements
We thank Sascha Herzinger for his help during the development process. In
addition, we thank Prof. Dr. Michael Stürzl for providing feedback on the existing
analysis methods.
The research has been supported by the Smart Data Program of the German Federal
Ministry for Economic Affairs and Energy (1MT14001B). The present work was
performed in fulfillment of the requirements for obtaining the degree “Dr. rer. biol. hum.”
from the Friedrich-Alexander-Universität Erlangen-Nürnberg (CK).
References
[1] William Dunn, Jr, Anita Burgun, Marie-Odile Krebs, Bastien Rance, Exploring and visualizing
multidimensional data in translational research platforms, Brief Bioinform 2016 bbw080, doi:
10.1093/bib/bbw080
[2] Vincent Canuel, Bastien Rance, Paul Avillach, Patrice Degoulet, Anita Burgun, Translational research
platforms integrating clinical and omics data: a review of publicly available solutions, Brief Bioinform
2015, 16 (2): 280-290, doi: 10.1093/bib/bbu006
[3] Venables W.N., Smith D.M. and the R Core Team, An Introduction to R, https://cran.r-project.org/
doc/manuals/r-release/R-intro.pdf, last access: 13.02.2017
[4] Sijin He, May Yong, Paul M. Matthews, Yike Guo, tranSMART-XNAT connector: image selection based
on clinical phenotypes and genetic profiles, in Bioinformatics 2016 btw714, doi:
10.1093/bioinformatics/btw714
[5] Axel Schumacher, Tamas Rujan, Jens Hoefkens, A collaborative approach to develop a multi-omics
data analytics platform for translational research, in Applied & Translational Genomics, Volume 3, Issue
4, 1 December 2014, Pages 105-108, ISSN 2212-0661, doi: 10.1016/j.atg.2014.09.010
[6] Satagopam Venkata, Gu Wei, Eifes Serge, Gawron Piotr, Ostaszewski Marek, Gebel Stephan, Barbosa-
Silva Adriano, Balling Rudi, and Schneider Reinhard, Integration and Visualization of Translational
Medicine Data for Better Understanding of Human Diseases, in Big Data, June 2016, 4(2): 97-108, doi:
10.1089/big.2015.0057
[7] Knell Christian: Verwendung und Erweiterung der tranSMART-Plattform für den Omics-Bereich;
Friedrich-Alexander-University Erlangen-Nuremberg; non-public
[8] N.N., SmartR, https://github.com/transmart/SmartR, last access: 14.02.2017
[9] N.N., ColorBrewer: Color Advice for Maps, http://colorbrewer2.org, last access: 12.02.2017
[10] Herzinger Sascha, SmartR S Herzinger, https://www.youtube.com/watch?v=jM_ttuzjmO8, last access:
13.02.2017
[11] N.N., transmart/SmartR, https://github.com/transmart/SmartR/tree/master/web-app/js/smartR/_angular/
directives, last access: 11.02.2017
[12] N.N., transmart/SmartR – conceptBox.js, https://github.com/transmart/SmartR/blob/master/web-app/js/
smartR/_angular/directives/conceptBox.js, last access: 11.02.2017
[13] N.N., transmart/SmartR – SmartRController.groovy, https://github.com/transmart/SmartR/blob/master/
grails-app/controllers/smartR/plugin/SmartRController.groovy, last access: 11.02.2017
[14] M. Therneau Terry, Package ‘survival’, https://cran.r-project.org/web/packages/survival/survival.pdf,
last access: 10.02.2017
[15] N.N., brad-stuff – 2, https://github.com/smashingboxes/brad-
stuff/blob/master/dataviz/d3/kaplan/2.html, last access: 10.02.2017
[16] Guyot Patricia, Ades AE, JNM Ouwens Mario, J Welton Nicky, Enhanced secondary analysis of
survival data: reconstructing the data from published Kaplan-Meier survival curves, in BMC Medical
Research Methodology 2012, doi: 10.1186/1471-2288-12-9
Health Informatics Meets eHealth 383
D. Hayn and G. Schreier (Eds.)
© 2017 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/978-1-61499-759-7-383
Abstract. This paper suggests using the Microsoft Kinect to detect the onset of
scoliosis in high school students caused by incorrect sitting positions. The
measurement is performed by assessing the overall posture in the orthostatic position
using the Microsoft Kinect. During the measuring process, several key points of the
human body, such as the hips and shoulders, are tracked to form the postural data. The
test was performed on 30 high school students who spend 6 to 7 hours per day at
school benches. The postural data were statistically processed with IBM Watson
Analytics. The statistical analysis indicates that a prolonged sitting position at such a
young age negatively affects the spine and facilitates the appearance of harmful
postures such as scoliosis and lordosis.
1. Introduction
The seated position is less strenuous than the orthostatic position for several reasons:
the area of support is larger, being formed by the thighs and the posterior surfaces of
the lower limbs together with the feet resting on the ground; the centre of gravity of
the body is closer to the supporting surface, or base of support, and is shifted toward
the posterior part of the body; the energy requirement is lower; cardiovascular activity
is easier; and the muscular effort needed to maintain the stability and balance of the
body is smaller.
If prolonged work is performed in a sitting position, disorders of the muscles and
tendons, neck and shoulder complaints and stress may occur, side effects that may even
lead to mental or psychiatric disorders [1]. Extended use of the computer sometimes
also causes vision disorders.
It is recommended that people working in an office environment maintain a neutral
body posture, as depicted in Figure 1. In a sitting position, the neutral posture is with
the shoulders relaxed and the back vertical and well supported by the backrest. The
forearms need to be parallel to the floor and the elbows close to the body, as shown
in Figure 1, left [2].
1 Corresponding Author: Norbert Gal-Nadasan, Department of Automation and Applied Informatics,
Politehnica University of Timisoara, Address: Bulevardul Vasile Pârvan, Nr. 2, Timisoara, Romania, E-Mail:
norbert.gal@upt.ro.
384 N. Gal-Nadasan et al. / Measuring the Negative Impact of Long Sitting Hours
Figure 1. Left: recommended body work position; Right: incorrect body work position
Working in an office is often seen as low-risk, but there are actually a number of risks
to which workers in an office environment are exposed:
- Postural problems: due to sedentary work, static posture and prolonged work in a
forced position caused by an improper arrangement of the workstation;
- The duration, intensity and design of office work: working for long periods of time
on a computer keyboard, with frequent and repetitive movements of the hand/wrist,
high levels of concentration and information overload;
- Psychological factors (the workers' subjective perception of the work organisation):
the perception that the work is demanding, often under time pressure, with low
self-control over working hours and inadequate support from managers and colleagues;
- Environment: work at inappropriate temperatures, inadequate lighting, noise,
restricted access and obstructions. For example, office floor design can create
difficulties in terms of communication and concentration for office workers.
The basic rules for creating an efficient working and studying environment for high
school students are presented in Figure 2 [3].
2. Methods
In Romania, high school students often spend 6 to 7 hours a day in poorly designed
benches in the classrooms. They only have a 10-minute break between 50-minute
lessons. After school, the students generally spend another 2 to 4 hours preparing for
the next day.
Figure 2. Maximum horizontal working area and maximum vertical zone for special work conditions.
Left: sedentary position; Right: orthostatic position
Figure 3. Left: Kinect skeleton data representation; Right: posture measuring tool
Under these conditions, which facilitate the prolonged relaxation of several muscles
of the upper body, the students risk developing harmful body postures. The most
common postural malformations are kyphotic and scoliotic body postures. With regular
screening, these harmful postures can be detected at an early stage and corrected using
medical rehabilitation exercises.
The proposed screening method uses a non-invasive, non-irradiating and markerless
human body tracking method based on the Microsoft Kinect 3D sensor. This sensor has
already proven its usability in the medical rehabilitation domain with several medical
applications [4]. It is capable of detecting the human body posture in the orthostatic
position and in a seated position [5].
The sensor uses a structured-light system based on an IR (infrared) grid projected
by an IR laser diode. Using an IR camera, the system detects the grid and creates a depth
map of the surrounding space. The system is capable of separating the human body from
the rest of the objects. The detected body is represented as a "matchstick" skeleton as in
Figure 3, left.
The markerless tracking system can track 20 joints of the human body and assigns
to each joint a three-dimensional value representing the joint position in Cartesian space.
The X coordinate represents the horizontal space, the Y coordinate the vertical space
and the Z coordinate the depth space.
To get a relevant view of the patient's posture, several correlations between the
tracked joints must be analysed. The most important correlations are given by the angles
between the joints of interest. These angles are calculated using the two-vector method
on a two-dimensional plane projection.
The method involves three joints on a two-dimensional plane: the first joint, called
the middle joint, represents the point where the angle is measured, and the angle is
measured between the two other adjacent joints. The formulas are presented below
(Eq. 1-3):
$A \cdot B = A_x B_x + A_y B_y$ (1)
$|A| = \sqrt{A_x A_x + A_y A_y}$ (2)
$\theta = \cos^{-1}\left(\frac{A \cdot B}{|A| \, |B|}\right)$ (3)
where A and B are the adjacent joints of the joint of interest and Ax, Ay, Bx and By
are the Cartesian coordinates of the points A and B. |B| is calculated analogously to |A|
using equation (2). Theta is the searched angle.
Using these angles at key points and their depth data, an image of the posture can be
created.
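Equations (1)-(3) can be turned into a short routine. This is a Python sketch; the joint coordinates in the example are made up:

```python
import math

def joint_angle(middle, a, b):
    """Angle (in degrees) at the middle joint between the two adjacent
    joints a and b, via the two-vector method on a 2-D projection."""
    ax, ay = a[0] - middle[0], a[1] - middle[1]   # vector middle -> a
    bx, by = b[0] - middle[0], b[1] - middle[1]   # vector middle -> b
    dot = ax * bx + ay * by                       # Eq. (1)
    norm_a = math.sqrt(ax * ax + ay * ay)         # Eq. (2), for A
    norm_b = math.sqrt(bx * bx + by * by)         # Eq. (2), for B
    return math.degrees(math.acos(dot / (norm_a * norm_b)))  # Eq. (3)

# E.g. the elbow angle from shoulder, elbow and wrist positions.
print(round(joint_angle((0.3, 1.0), (0.3, 1.3), (0.6, 1.0)), 1))  # 90.0
```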
- The patient must not wear high heels, due to their negative effect on the overall
measurements.
- The patient must not wear loose clothing; it is recommended to wear a sports top
and shorts.
The saved data from the CSV file are imported into an Excel-compatible file and then
analysed using IBM Watson Analytics [6]. This tool allows visualising any kind of
correlation between the measured values. The most important correlation from our point
of view was between the height difference of the shoulders and the age of the patients.
This correlation was chosen because there is a wide distribution of heights among high
school students of the same age.
3. Results
The system was tested with 30 high school students (18 male, 12 female) with an
average age of 16 years. In the first stage, the students were measured with the Kinect
device, and afterwards they were examined by a medical rehabilitation specialist.
The raw data from the evaluations were analysed using IBM Watson Analytics,
which was chosen because it can visualise any kind of correlation between the measured
values.
One of the measured key metrics was the difference between the heights of the
shoulders in correlation with the patients' ages. A second reason to choose this value was
that the high school students spend 6 to 7 hours a day at a poorly designed desk in an
incorrect sitting position, as in Figure 1, right. From the height difference of the
shoulders, we can infer whether there is a problem with the general posture of the
patient. Figure 5 shows the distribution of the shoulder height differences.
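The metric itself is a simple difference of the Y (vertical) coordinates of the two shoulder joints. A sketch, where the joint names follow the Kinect SDK naming convention and the coordinates are made-up sample values:

```python
def shoulder_height_difference(joints):
    """Signed difference (metres) between left and right shoulder
    heights, from joints given as {name: (x, y, z)} in Kinect
    Cartesian space (Y = vertical)."""
    return joints["ShoulderLeft"][1] - joints["ShoulderRight"][1]

# Made-up sample frame: left shoulder 3 cm higher than the right.
frame = {"ShoulderLeft": (-0.18, 1.42, 2.10),
         "ShoulderRight": (0.17, 1.39, 2.10)}
print(round(shoulder_height_difference(frame), 3))  # 0.03
```

A value far from zero in either direction flags an asymmetric posture worth clinical follow-up.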
It can be observed that most of the students have a higher shoulder height difference
than we would expect. This can be explained by the long periods of time spent in an
incorrect sitting position. The two positive spikes, as well as the last negative spike,
are confirmed scoliosis cases.
After the Kinect evaluation, each patient was examined by a medical doctor to
confirm or rule out the scoliosis diagnosis. This was a blinded evaluation, based on
which the confusion matrix in Table 1 was created.
The confusion matrix confirms that the Kinect device can be used in the high school
education system to monitor changes of the spine in order to detect early signs of
scoliosis and kyphosis.
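From such a confusion matrix, the usual screening quality measures follow directly. A Python sketch; the counts below are made up, since Table 1 is not reproduced here:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and accuracy of the Kinect screening
    against the blinded clinical diagnosis (2x2 confusion matrix)."""
    sensitivity = tp / (tp + fn)          # scoliosis cases flagged
    specificity = tn / (tn + fp)          # healthy students cleared
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a cohort of 30 students:
sens, spec, acc = screening_metrics(tp=3, fp=4, fn=0, tn=23)
```

For a screening tool, high sensitivity matters most: a false positive only triggers a clinical examination, whereas a false negative misses a case.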
4. Discussion
The study revealed that the high school students' musculoskeletal apparatus is affected
by prolonged sitting positions. The results confirm that prolonged sitting at a young age
has a negative impact on the spine. If not prevented or treated, the harmful postures
induced by long sitting hours can affect the students' spines, and the resulting spinal
disorders will become visible after the age of 30.
The Microsoft Kinect is easy to deploy at any high school or elementary school and
can be an effective preventive tool. Because it uses non-irradiating infrared light, it can
be used in any physical education class.
Although the confusion matrix showed that the Kinect is not the most accurate
screening device, the Kinect-based method has advantages, as it could potentially be
deployed as a cost-effective addition to conventional methods in a high school setting.
Active medical supervision is necessary for patients with existing problems in order
to prevent further complications.
References
Subject Index
AAL 176, 184 data mining 1, 211
access to information 88 data quality 204, 259
ad hoc participation 55 data transmission 248
ADL 176 database 144, 254
adverse drug reaction 128 delirium 32
aging 348 diabetes mellitus 305
AIDS 319 digital media 282
ambient assisted living 275 disease management 267
ambulances 227 documentation 227
anatomy 97 Down syndrome 169
Arden Syntax 16 drug interactions 128
assistive technology 184 drug safety 121
augmented dermatology 363 drug side effects 128
Austria 282 economic evaluation 161
automation 63 education 104, 241
autonomy 169 eHealth 128, 196, 254, 275,
biomedical research 70, 375 348, 363
biosignal 356 electronic health record(s) 24, 111,
bone and bones 97 227, 290
BPMN 63 electronic medical record 32
business model 275 electronic patient record 248
cardiac edema 219 electronic prescription 196
claims data 121 emergency medical services 227
classification 40 exercise capacity 235
clinical decision-making 48 family doctor 282
clinical decision support 16, 219 feature importance 328
clinical practice patterns 211 frail elderly 298
clinical research 80 general practitioner 282
closed loop system 235 geriatric assessments 298
cloud computing 290 gynecology 248
colorectal tumors 211 health care common procedure
compliance 305 coding system 40
computer-assisted drug therapy 128 health information 111
computer simulation 204 health information exchange 55
computer software applications 97 health information management 104
control groups 311 health information system(s) 88, 290
conversational UI 196 health monitoring 298
cost comparison 161 heart failure 161, 219, 235, 267
cost effectiveness 161 HIV 319
critical incidents reporting 1 HL7 FHIR 63
data analytics 152 hospital 259
data governance 259 hospitalized patients 32
data management 259 hygiene 176
Author Index
Acker, T. 48 Garschall, M. 184
Adlassnig, K.-P. 16 Geroldinger, A. 305
Aerts, J. 80 Gessner, S. 88
Aljunid, S. 40 Gezgin, D. 204
Arthofer, K. 259 Girardi, D. 259
Ates, N. 184 Gozali, E. 319
Auinger, K. 275 Grabenweger, J. 111
Aumayr, G. 184 Haller, F. 48
Banaye Yazdipour, A. 343 Hameed, A.S. 161
Bauer, M. 363 Hamper, A. 152
Blacky, A. 16 Hancox, J. 348
Blagec, K. 121 Hanke, S. 348
Bodendorf, F. 152 Hayn, D. v, 32, 219, 328
Boerries, M. 48 Hegselmann, S. 88
Christoph, J. 48, 70, 375 Hein, A. 235, 267
David, V. 356 Henke, J. 88
de Bruin, J.S. 16 Hinderer, M. 48
Denecke, K. 1, 196, 248 Hofestädt, R. 128
Deniz, E. 235, 267 Hoffmann, J.-D. 235, 267
Dorner, T.L. 196 Holzer, K. 63
Drobics, M. 184 Hoseini, M. 104
Duftschmid, G. 204, 305 Jagsch, C. 32
Dugas, M. 88 Jonko, B. 363
Ebner, H. 328 Kaiser, P. 24
Egelkraut, R. 63, 356 Karas, S. 241
Eggerth, A. 219 Kastner, P. 219, 298
Eigner, I. 152 Kim, S. 40
Endel, G. 305 Kimiafar, K. 104, 343
Engelmann, U. 55 Kmenta, M. 136
Engler, A. 169 Knell, C. 70, 375
Erfannia, L. 290 Kodra, P. 111
Falgenhauer, M. 298 Kofler, M. 184
Fazekas, G. 176 Koller, W. 16
Feldmann, C. 235, 267 Konev, A. 241
Festag, S. 8 Kósa, I. 211
Fogarassy, G. 311 Krainer, D. 184
Forjan, M. 144 Kramer, D. 32
Förster, K.M. 184 Kränzl-Nagl, R. 275
Franz, B. 63 Krauss, O. 63
Frauenberger, C. 184 Kreiner, K. 328, 348
Frohner, M. 136, 336, 356 Kreuzthaler, M. 24
Gal-Nadasan, E.G. 383 Kriegel, J. 275, 282
Gal-Nadasan, N. 383 Kropf, J. 184, 348