
HONOR 3374

Joe Passman
April 23, 2014

PRIVACY IMPLICATIONS OF SECURITY BEST PRACTICES IN THE
DESIGN OF WIRELESS MEDICAL APPLICATIONS OR DEVICES

INTRODUCTION
The computer that guided the first Apollo astronauts to the moon had 64 kB of random access memory[1], an amount of memory so tiny that the scientists in charge of designing the system had to simplify calculations to make the computer function. Compared with the $400,000 ENIAC computer that filled a 1,500-square-foot room[2], even the original Apollo computer is representative of the concurrent decrease in physical size and increase in computing power of processors that has occurred since the 1950s. An exemplary device known as the Raspberry Pi illustrates this trend. The Raspberry Pi is a $35 computer that has 8,192 times as much memory as the original Apollo computer, in addition to hosting an on-board operating system and wireless capability.
This technological trend of increasing computational power while decreasing the physical size of the device, known as Moore's Law[3], has helped decrease the cost of wireless medical applications and devices. Recently, there has been a strong push to incorporate these new, powerful devices into point-of-care and remote healthcare to reduce the cost of healthcare. In an age where the cost of medical care has come under intensifying scrutiny, it is foreseeable that the interest in incorporating these devices into healthcare will become even more vested.

[1] (Har-Even, 2009) It is interesting to note that hand-held calculators in the 1980s had more processing power than the computer on the original Apollo craft. This is, again, representative of the rapid increase in the computational power of computers as well as the decrease in the physical size of these systems.
[2] (Levy, 2013) The ENIAC computer was commissioned by the U.S. government as a "mathematical robot" designed to "free scientific thought from the drudgery of lengthy calculating work." It filled forty 9-foot cabinets in a 1,500-square-foot room at the University of Pennsylvania.
[3] (Moore, 1965) Moore's Law is a concept derived from a 1965 publication by Gordon Moore, who later co-founded Intel and served as its chairman. The general concept is that the number of transistors (synonymous with processing power) on a computer chip will double every two years while the cost of the chip remains the same.
Comics creator Stan Lee famously wrote in the original Spider-Man story, "With great power there must also come great responsibility!" The incorporation of high-power computers, embedded systems, and other wireless applications into the medical industry has not come without scares. For example, former U.S. Vice President Dick Cheney revealed that his doctor ordered that his heart device have its wireless capability disabled for fear that the device might be hacked in an attempt to assassinate him[4]. The security threat inherent in connecting medical devices/applications to the internet of things is very real, especially with websites like Shodan (www.shodanhq.com) that easily expose password-less internet-connected devices.

I. THE NEED FOR DESIGNERS OF INTERNET-CONNECTED MEDICAL
DEVICES/APPLICATIONS TO UNDERSTAND THE PRIVACY IMPLICATIONS OF
CURRENT SECURITY BEST PRACTICES
Balancing proper security protocols against individual privacy rights is an age-old dilemma in privacy law. Clearly, it is integral to maintain intensive security measures to prevent improper collection and usage of medical data. Unfortunately, the security measures used to keep medical data safe may begin to suffer from a phenomenon known as "function creep," leading to infringement of patient privacy in highly sensitive data. For example, DNA collected for a cancer-related diagnostic may be stored past its application-specific usage and subsequently utilized to make a paternity determination without patient consent. This is clearly a privacy violation. This paper intends to provide guidance to medical device and application designers regarding the privacy implications of the security best practices provided by the Food and Drug Administration (FDA), Federal Communications Commission (FCC), and other entities.

[4] (Peterson, 2013) Former U.S. Vice President Dick Cheney had an internet-connected implanted defibrillator. Information technology (IT) professionals have long warned of medical device software risks, including risks associated with the way the Food and Drug Administration regulates new and/or updated software. Cheney stated that the advice of his physician was credible in light of consultations with IT professionals.
Wireless medical applications and devices are becoming increasingly prevalent due to the reduced cost, reduced size, and increased power of internet-connected devices. These devices fit neatly into the new arena of telemedicine, a type of remote healthcare thought to reduce the cost of healthcare. In an exemplary recent development, a Google Glass application and coupled server platform has performed remote immunochromatographic rapid diagnostic tests for HIV and prostate cancer with 99.6% accuracy[5]. Once the price of Google Glass comes down (according to Moore's Law, it will), these tests could eliminate diagnostic visits. Additionally, the advent of Health Insurance Portability and Accountability Act (HIPAA)-compliant remote-conferencing software[6] could eliminate the need to visit a doctor's office for interpretation of the test results. All but the most technologically and equipment-intensive procedures may be conducted remotely very soon.
It seems that with every security protocol implemented, a resulting privacy implication is raised. Personally identifiable health care information is considered highly sensitive. The sensitive nature of health care data heightens the patient's expectation of privacy, and this heightened expectation has been codified in acts such as HIPAA. From the sequencing of our DNA to the results of a recent breast cancer exam, every bit of information collected in a health care setting is intrinsically linked to our person.

[5] (Berger, 2014) The Google Glass application was coupled with additional physical hardware that scattered light across the surface of the eye. The Google Glass camera picked up the scattered light signals and relayed this information to a centralized server. The server processed the signals transmitted from the Google Glass and determined, concurrently, whether the patient had HIV and/or prostate cancer.
[6] (Aggarwal, 2006) This patent describes a system to identify, monitor, and track patients remotely. It marks one of the first systems to provide a method of secure, remote identification to ensure HIPAA-compliant communication between a patient and a health care provider.

The FCC released a statement in September 2013 that proposes to remove burdens on wireless infrastructure[7]. In addition, the FCC has established radiofrequency bands (608-614, 1395-1400, and 1427-1432 MHz) for wireless medical telemetry services and bands (401-406, 413-419, 426-432, 438-444, and 451-457 MHz) for wireless medical device radiocommunication services[8]. In light of this, the Food and Drug Administration (FDA) released recommendations on the design of safe and effective radiofrequency wireless medical devices and applications[9]. The FDA document highlights the concept of privacy by design by providing guidelines and concepts for designers of internet-connected medical devices/applications. Privacy by design suggests that designers should consider the privacy of the sensitive data generated/collected by the device in the early stages of product development. The designers may then be able to implement a realistic threat model[10] to prevent potential security and/or privacy threats.
This paper will touch on the most discussed security protocols provided in the FDA document and elsewhere for the safe and effective use of wireless medical devices and applications. The paper will first outline three security "best practices" that a medical application/device designer may choose to implement or consider when developing a new application/device, and it will then address the privacy issues surrounding these best practices. The security best practices are user verification and control of access to

[7] (FCC, 2013) This document outlines timeframes for establishing new wireless towers and relay stations. It has the goal of reducing the cost and delay of setting up new wireless infrastructure.
[8] (FCC, 2014) This outlines the implanted devices and the frequency ranges that these devices can use to transmit signals wirelessly.
[9] (FDA (CDRH), 2013) This document suggests that radiofrequency wireless medical devices should be connected to well-characterized, secure wireless networks that avoid time-delayed or interference-prone transmissions, with comprehensive instructions for use.
[10] (Burleson, Clark, Ransford, & Fu, 2012) This paper discusses design criteria for developing wireless medical devices. It provides a detailed section on developing threat models, which are designed to predict and model potential security threats. This is done to assign appropriate precautions preemptively.

user information, transmission of patient healthcare information over wireless networks, and
retention of patient healthcare information (both centrally and locally on the device).
This document will add to the literature by analyzing the privacy implications of each security decision. Moreover, the concept of privacy by design will be applied in the context of each security best practice to understand how privacy considerations may be more fully incorporated into the design and development cycle of the device. The focus of the industry has largely been placed on security best practices, but it is worthwhile to consider the privacy-related implications of these practices. Doing so provides a comprehensive picture of how each security decision affects patient privacy in the realm of sensitive healthcare information.

II. THE PRIVACY IMPLICATIONS OF THESE SECURITY BEST PRACTICES
A. Security Best Practice 1: Implement a method for device, user, and administrator verification.
Security frameworks can be essential to reducing the risk of exposure of patient and vital information. One of the most effective ways to enforce privacy through security is by assigning a role-based access control mechanism[11]. An access control matrix (e.g., Table 1) strictly pre-defines user and administrator roles and is used to restrict or control data access. This is essentially a method of pre-authentication and top-down control of patient privacy.





[11] (Raman, 2007) Role-based access control mechanisms can be framed through an access-control matrix. This matrix specifies who can read and/or modify patient information. These roles are hard-coded into the software and can only be changed by the top-level administrator.

Table 1: An exemplary access-control matrix. This can be used to control who has access to patient healthcare information being collected or stored on a wireless, internet-connected medical device. The privacy benefit of this security protocol is that it preemptively and strictly defines user access and can only be changed by the top-level administrator. The privacy risk occurs when the role of the top-level administrator can be easily changed or impersonated, allowing for role changes to occur.

Identification/Role | Demographic Information  | Insurance Information     | Procedural Information  | Healthcare Information | Consent Information
Patient 01          | Read, Write              | Read                      | Read                    | Read                   | Read, Write
Device 01           | No Access or Data Storage| No Access or Data Storage | Receive Input and Store | Collect and Store      | No Access or Data Storage
Doctor 01           | Read                     | -                         | Read, Write             | -                      | Read
Nurse 01            | -                        | -                         | -                       | Read, Write            | -
Insurance Company   | -                        | Read, Write               | Read                    | No Access              | -

From a privacy standpoint, access-control matrices only work when the role of top-level administrator is strictly controlled. If the identity of the top-level administrator is compromised or impersonated, the roles in the matrix can be changed. The definition of user identity and user roles is therefore critical. The matrix answers the question "who owns that data?" from a policy perspective by defining who has the authority to edit and/or delete data. From an administrative standpoint, the rights of deletion and editing are somewhat indicative of data ownership.
This well-defined user hierarchy is a time-honored approach to protecting the privacy of patient data because it controls who can access, read, edit, and delete data with hard-coded roles. However, it is important to verify the user before the actions of a role are allowed to be performed. Strong authentication of users, devices, and administrators may help prevent impersonation breaches and unwarranted access to information.
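The access-control check itself can be made concrete with a short sketch. The roles, data categories, and permission names below are illustrative placeholders modeled loosely on Table 1, not taken from any particular device:

```python
# Minimal role-based access control check modeled on an access-control
# matrix. All role and category names here are illustrative placeholders.
MATRIX = {
    ("Patient 01", "demographic"): {"read", "write"},
    ("Patient 01", "healthcare"): {"read"},
    ("Nurse 01", "healthcare"): {"read", "write"},
    ("Insurance Company", "healthcare"): set(),  # explicitly no access
}

def is_allowed(role: str, category: str, action: str) -> bool:
    # Default-deny: any (role, category) pair missing from the matrix
    # is treated as "no access," mirroring strict pre-definition of roles.
    return action in MATRIX.get((role, category), set())

assert is_allowed("Nurse 01", "healthcare", "write")
assert not is_allowed("Insurance Company", "healthcare", "read")
```

The default-deny lookup reflects the matrix's top-down character: only a process acting as the top-level administrator should ever be able to mutate MATRIX.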

The integrity of a user hierarchy is essential to maintaining the privacy of patient healthcare information. In addition to ensuring patient privacy, verification of roles (including verifying the device) is important because it ensures that a measurement or signal belongs to the patient it claims to come from[12]. It is consequently worth discussing authentication procedures for the roles defined in the access-control matrix for practical, security, and privacy reasons.
There is principally one current method to validate data coming from a remote wireless device/application. In this method, a device identifier is assigned to a remote sensing device/application. Usually, this device is assigned to a patient, and the patient is loosely assigned the same ID as the device. In some cases, the device allows multiple users to program an ID number into the device. This is common, for instance, in environments such as nursing homes, where multiple users may use the wireless device to send blood pressure measurements to a central hospital.
The current method has several security- and privacy-related problems. First, it does not intrinsically couple the device and user IDs. If the hospital does not maintain a highly structured and cross-linked device/patient ID registry, data can be assigned to the wrong patient. Essentially, the access-control matrices are then assigned to the wrong users. This can expose data to a whole new set of people who were never intended to see it.
Imagine, for example, a situation where a patient's electrocardiogram (ECG) information is being collected via a wireless ECG sensor in an attempt to assess the risk of a heart attack. The ECG device ID is registered to Hospital A. Hospital B, however, also maintains a registry of device and patient IDs. The patient goes to Hospital B for prenatal care and has been assigned a

[12] (Petkovic, 2009) This paper suggests that verifying both the device and the user is essential to ensuring that the collected data is coming from the correct patient and that it is verifiable. This concept originates from a term known as data integrity. Data integrity is critical for practical purposes, such as ensuring that routine healthcare is based on correct information. It also helps protect patient privacy by ensuring that patient data is not misconstrued or taken out of context.

device to monitor the composition of her amniotic fluid. The patient programs the user ID "88PregnantHeartLady88" into both devices. As the patient is driving near Hospital B while doing errands, the ECG device sends out a random alert signal to the nearest hospital, alerting medical professionals of a potential heart attack. The user ID "88PregnantHeartLady88" is immediately recognized, and the response team activates the GPS-tracking system on her amniotic fluid monitoring device to find the patient. The patient is rushed to Hospital B, and an entirely new health care team finds out she is prone to heart attack. The heart team at Hospital B alerts her prenatal care doctor at Hospital B of her propensity for heart attack, and the prenatal care doctor attempts to pressure the patient into a midterm abortion to preserve her health.
This scenario demonstrates the security and privacy issues associated with the current authentication and verification scheme in remote healthcare and wireless monitoring. First, it is problematic that the patient is allowed to manually input a user ID into a registered device. In the above scenario, the patient was able to place a custom user ID into a device, and that ID was maintained in databases at two hospitals because she used the same ID for two different databases. This could create confusion as well as allow unauthorized parties to access the roles defined in each device's access-control matrix. User IDs and device IDs should be intrinsically coupled and assigned by the hospital upon device delivery to mitigate the confusion and privacy concerns associated with patient-assigned user IDs. If the device is designed to monitor multiple users, such as in a nursing home setting, the responsible body in charge of all monitored patients should provide a verified list of patient IDs to the hospital database. Third-party assignment of user IDs to devices helps mitigate the potential for misappropriation of data to the wrong patient as well as keeping data native to the appropriate access-control schemes.
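One way to couple user and device IDs intrinsically can be sketched as follows. The scheme below is illustrative, not one mandated by the FDA or FCC guidance: it assumes the hospital installs a provisioning secret at device delivery and derives each registered user ID with a keyed hash, so a self-chosen ID like the one in the scenario above could never be registered.

```python
import hashlib
import hmac

# Assumption: this secret is installed by the hospital at device delivery.
HOSPITAL_KEY = b"example-provisioning-secret"

def bind_patient_id(device_id: str, patient_record_no: str) -> str:
    # The registered user ID is a keyed hash over both identifiers,
    # so it cannot be chosen by the patient or reused across devices.
    msg = f"{device_id}|{patient_record_no}".encode()
    return hmac.new(HOSPITAL_KEY, msg, hashlib.sha256).hexdigest()[:16]

def verify(device_id: str, patient_record_no: str, presented_id: str) -> bool:
    # Recompute the binding and compare in constant time.
    expected = bind_patient_id(device_id, patient_record_no)
    return hmac.compare_digest(expected, presented_id)
```

A transmission tagged with an ID derived from a different device (or a patient-invented ID) then fails verification at the hospital server, keeping the data native to the correct access-control scheme.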

Coupling user and device IDs at the outset of device delivery does not solve the problem of malicious users. It is well known that malicious users may impersonate a user to gain access to sensitive health care information or to fictitiously generate information pertaining to the patient. The development of several biometric methods, such as automated ECG authentication[13], helps ensure that a user is who the user claims to be. These biometric data, however, are sensitive information in and of themselves.
These biometric data can be intercepted more easily because they are generated at the point of access to the device, and they cannot be encrypted until after they are stored on the device[14]. This leaves the device open to being tapped for biometric information, which could then be sold to interested parties. These parties could be insurance companies that build user profiles of wireless medical device/application users to assess the risk of covering such devices. Thus, even biometric data used for secure access and verification of the patient are prone to privacy-related issues.
Verification and authentication of users, healthcare professionals, administrators, and devices in access-control matrices provides practical, security, and privacy benefits to patients. Practically, the healthcare professional receiving patient data from a wireless, remote device or application needs to verify that (a) the data is coming from the pre-determined patient, (b) the data has been taken under conditions that align with device specifications, and (c) the data has not been modified before its receipt. In terms of security, authentication procedures help maintain patient safety by ensuring that medical decisions are based on quality, verifiable data. In terms of

[13] (Fahim & Khalil, 2008) This paper describes a system that looks for markers, time delays, and other quantitative metrics to learn a patient's individual ECG signal. This feature-detection algorithm has intrinsic associated error because of the minimal feature extraction size.
[14] (Varshney, 2007) This paper describes various comprehensive wireless monitoring schemes. In one section of the paper, the author writes on how abuse of authentication protocols can lead to loss of data confidentiality. I applied this and extended it to biometric authentication, a scenario that could lead to loss of personally identifiable healthcare information through external skimming (think ATM skimming schemes) before the information is encrypted locally on the device.

privacy, authentication procedures in access-control matrices ensure that sensitive health care
information is shared on a need-to-know basis. Tight verification and authentication in access-control schemes rise to meet the heightened reasonable expectation of privacy in healthcare information.

B. Security Best Practice 2: Transmit data only over noisy channels.
Securing data in a mobile environment filled with internet-connected devices is an increasingly difficult challenge. Although data breaches at centralized servers are more costly in terms of the bulk volume of compromised data, each bit of data coming from dispersed devices represents personal, sensitive healthcare information. Securing these data is much more difficult: each device has a different password, there are many different devices, and so on. The privacy and security principles of stored healthcare information, on both the local device and the central server, will be discussed in the next section. One area, perhaps even more critical to the security of healthcare information, is the protection of collected healthcare data during transmission to a centralized server (e.g., a server at a hospital).
It has been suggested that eavesdropping on transmissions of healthcare data from a wireless medical device/application to a central server is an efficient way to illicitly access healthcare information[15]. In a typical eavesdropping scheme, a malicious person "listens" to the healthcare data transmission by snooping on data packets as they leave the network access port on the host device. The IP address of nearly every device is publicly visible, and this is often all a malicious person needs to pick up these packets of data. A malicious person

[15] (Sriram et al., 2009) This paper discusses problems in data quality assurance. One of the issues it discusses is that data may be invalidated as it crosses multiple networks. These crossing points are known as network access ports. Network access port adaptors can be configured to capture client (device)/server conversations. This is a security concern for unencrypted data.

need not be technologically savvy to sniff for data. There are several open-source software programs, such as SmartSniff[16], that can be easily installed on an internet-connected device to sniff data packet transmissions and send them to the malicious user. In addition, these software programs can monitor communications on any remote device, such as a wireless medical device/application. This is especially concerning because there are now IP address crawlers like Shodan (www.shodanhq.com) that search the web for the IP addresses of web-connected devices. If these data packets are left unencrypted, the malicious person can intercept the transmitted data and process it without needing a password (or even a user name) to access the data on the device.
The data may become compromised during transmission even if the data is encrypted. The recent Heartbleed secure sockets layer (SSL) bug is exemplary of this vulnerability[17]. Heartbleed essentially asks the attacked server, over and over, whether it is still there. Each of these "heartbeat" requests claims to carry a payload of a certain size, but the attacker sends less data than it claims. The attacked device undergoes multiple exchanges with the attacker, replying with as many bytes as the attacker claimed to send and thereby dumping bits of its memory for nothing in return. In any one of these exchanges, the attacked device may leak something useful to the attacker. For example, it may leak the key file to the encrypted healthcare data being sent in packets over the network. When the malicious user finishes sniffing all the packets, he may use the "bled" key file to decrypt the data and expose sensitive healthcare information. Consequently, it is not enough to simply encrypt the transmitted data.
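A toy model of the over-read (illustrative Python, not OpenSSL's actual code; the memory contents are invented for the example) shows why trusting the attacker-claimed payload length leaks adjacent memory:

```python
# Toy model of the Heartbleed over-read; not OpenSSL code.
# Process memory holds the heartbeat payload adjacent to secrets.
process_memory = b"ping" + b"|TLS-session-key|patient-record-cache|"

def heartbeat_reply(claimed_length: int) -> bytes:
    # Vulnerable behavior: echo back `claimed_length` bytes starting at
    # the payload, instead of checking the actual payload size (4 bytes).
    return process_memory[:claimed_length]

assert heartbeat_reply(4) == b"ping"       # honest request
leak = heartbeat_reply(40)                 # dishonest request
assert b"TLS-session-key" in leak          # adjacent memory leaked
```

The fix in the real bug was simply to bound the reply by the actual received payload length, which is why length validation belongs in any transmission protocol a device designer adopts.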

[16] (SmartSniff, 2004) SmartSniff is an old software program that can be downloaded and installed on a device to monitor internet-based conversations with a remote device. The remote device, for example, could be a medical device. It is now very easy to identify remote devices with IP address search engines like Shodan.
[17] (Limer, 2014) This article describes the Heartbleed bug that was publicized in popular media in March 2014. Heartbleed essentially tricks the attacked device into slowly sending the contents of its memory to the attacker. The attacker may or may not receive useful information in a timely manner.

Encrypting transmitted data does not ensure that data sniffed during transmission to a central server will be protected. It is integral to ensure the security of transmitted data to prevent unwarranted parties from accessing sensitive healthcare information. It has been suggested that transmitting data over noisy channels can prevent sniffed data from being readable by the malicious party[18]. Essentially, a noisy transmission channel is like trying to eavesdrop on a stranger's conversation in a chaotic environment (e.g., a noisy restaurant): a listener may be able to hear snippets of the conversation, but, overall, the intercepted conversation is too convoluted to derive any true meaning from it.
Sending encrypted data over a noisy channel could thus provide an extra layer of security for sensitive healthcare information. A noisy channel is essentially a physical disruption of the data that causes the data to become unreadable during its transmission (Fig. 1)[19]. Usually, a secret

[18] (Adesina, Agbele, Februarie, Abidoye, & Nyongesa, 2011) This paper stresses the need to protect healthcare information during transmission from a wireless medical device to a central server (e.g., at a hospital).
[19] (Threepak, Mitatha, & Yupapin, 2010) This paper describes a process to scramble data during transmission, a procedure known as sending data over a noisy channel. This paper also gives a nice schematic description of the process.

Figure 1: Data, like the image of the woman in the above schematic, can be modulated to become indiscernible. This indiscernible data can then be sent from point to point. The data can be unscrambled at the reception point if the receiving server has the same secret key used to scramble the data. This schematic is taken from Threepak et al.[19]

disruption key is passed to a modulator, which scrambles the data. The scrambled data can then be sent from the internet-connected medical device/application to a central server. The data can be unscrambled only if the central server has the same secret disruption key.
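The shared-key scrambling idea can be sketched in software. This is a simplified stand-in for the physical modulation scheme Threepak et al. describe, not a reproduction of it: the secret disruption key seeds a pseudorandom stream that is XORed with the data at both ends, so the same operation scrambles and unscrambles.

```python
import hashlib

def keystream(secret_key: bytes, n: int) -> bytes:
    # Expand the shared disruption key into n pseudorandom bytes
    # by hashing the key with a running counter.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(secret_key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def scramble(data: bytes, secret_key: bytes) -> bytes:
    # XOR with the keystream; applying the same operation again
    # with the same key recovers the original data.
    ks = keystream(secret_key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

reading = b"ECG:72bpm"
sent = scramble(reading, b"shared-disruption-key")
assert sent != reading                                       # unreadable in transit
assert scramble(sent, b"shared-disruption-key") == reading   # server recovers it
```

Note that a real deployment should layer a vetted authenticated cipher (e.g., AES-GCM) on top rather than rely on an ad hoc stream like this sketch.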
Sending data over a noisy channel provides several security benefits. It can work well within an access-control matrix scheme: only verified users could be granted permissions to the secret key, thereby giving only certain users the ability to modify or even read the protected healthcare information. The real benefit of the secret key and noisy transmission is that only verified users are aware of the existence of the key. The key can be physically housed and attached to the user profile on both the wireless medical device/application and the central server before the device is deployed. This secret key could have access controls associated with it even before the device is delivered to the hospital or patient. Preemptively controlling access to the secret key used to unscramble data removes the need to place ultimate control in the top-level administrator, a practice which has been implicated in multiple breaches of private patient information in the past[20].

C. Security Best Practice 3: Data mining should be limited to specific users and types of analysis and should include anonymization protocols.
Data collected by disparate wireless medical devices/applications will soon become centralized and stored in standardized formats with the push for electronic medical records. Data mining has the capability to combine, integrate, and formulate new information

[20] (Meingast, Roosta, & Sastry, 2006) This paper stresses that the integrated, distributed systems used in wireless medical devices/applications may require multiple administration points. Since administrators usually have ultimate control, known as root access, over the roles of other users in an access-control matrix, multiple administration points leave more room for data breaches.

coming from disparate wireless medical devices/applications. This can be beneficial as well as harmful.
Combined analysis of data from distinct wireless devices monitoring physiological data over extended periods of time (made possible by wireless, remote-sensing medical devices) may be able to provide information difficult to obtain from acute human studies or long-term animal studies. On the other hand, data mining can be used to build a health profile of a particular patient. This information could be sold (e.g., to insurance companies) if privacy and security safeguards are not put in place or maintained. Proper access controls, limits on the types of analysis, and anonymization protocols should be used to ensure that patient privacy is maintained.
Data mining protocols should be designed to minimize harm while maximizing the benefit of integrating long-term, varied human medical data. This can primarily be done by limiting the type of user that can analyze the data as well as limiting the types of analysis that can be performed on specific types of data. These limitations could stop potential profiling and discrimination based on healthcare information.
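Such limits can be expressed as an explicit policy consulted before any joint analysis runs. The sketch below is illustrative: the dataset names, roles, and rules are hypothetical, chosen to match the HIV-monitoring example discussed next.

```python
# Hypothetical policy gate for combined analysis of device data streams.
# Pairs of data sets that may never be analyzed together, and roles
# permitted to run combined analyses at all, are hard-coded up front.
FORBIDDEN_JOINS = {frozenset({"sexual-activity-monitor", "sti-monitor"})}
JOIN_ALLOWED_ROLES = {"epidemiologist"}

def may_combine(role: str, datasets: set) -> bool:
    # Only approved analyst roles may combine data sets at all.
    if role not in JOIN_ALLOWED_ROLES:
        return False
    # Reject any analysis whose inputs include a forbidden pairing.
    return not any(pair <= datasets for pair in FORBIDDEN_JOINS)

assert not may_combine("insurer", {"sti-monitor", "demographics"})
assert may_combine("epidemiologist", {"sti-monitor", "demographics"})
```

Because the policy is checked per analysis rather than per record, it limits profiling even by users who legitimately hold read access to each data set individually.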
As an example, one possible beneficial use of data mining could be in analyzing the long-term risk factors of human immunodeficiency virus (HIV) contraction. It is currently unknown whether oral sex poses a risk of HIV transmission or whether oral sex is merely associated with other, riskier sexual activities such as anal sex[21]. It is possible to imagine an implanted medical device that assesses the type and frequency of sex a person is having. The person may also be equipped with a device that tests for sexually transmitted infections (STIs). A data analyst may be able to combine the information coming from both devices to define the transmission rate of HIV via

[21] (CDC, 2014) and (Buchbinder et al., 2013) Most experts, including the Centers for Disease Control and Prevention (CDC) and the University of California San Francisco (UCSF), agree that there is a biological possibility of HIV transmission through oral sex. Both entities agree that it is difficult to assess the exact risk of HIV transmission through oral sex because oral sex rarely occurs as an isolated sexual act and surveys of sexual activity are fraught with inaccurate reporting.

oral sex. This example could potentially lead to the profiling of patients. Perhaps the patient would be denied anti-retroviral drugs or insurance coverage if the patient persistently engaged in high-risk sexual activity.
The potential risk of data mining (e.g., patient profiling) in the preceding example could be mitigated by controlling who performs the analysis and what types of analysis could be performed. Perhaps the data from the sexual monitor and the STI monitor could not be analyzed together, or maybe the data sets could only be combined for analysis after a certain period of time. In addition, it is possible that anonymizing the data could protect the personally sensitive information of the individual patient, but what about the healthcare information of groups? Gay men were stigmatized for group-based stereotypes derived from the high prevalence of HIV/AIDS in the gay community. Proper foresight into the types of analysis performed and who analyzes the data coming from disparate medical devices is a good start to protecting patient privacy. However, much more ethical insight will be derived, and should be applied, as data mining from internet-connected medical devices and applications becomes more prevalent.

CONCLUSIONS
Wireless medical devices are becoming increasingly prevalent due to the reduced size and increased power of processors. Wireless medical devices pose both security and privacy issues, and security and privacy are interrelated and often oppose each other. The issue of how security impacts the privacy of patient healthcare information will become increasingly important as more data is collected from disparate medical devices and stored in easily accessible electronic medical records. These data are powerful enough to enable new advances in healthcare but also pose the risk of inducing patient profiling and discrimination.

This paper set out to characterize how three well-established security best
practices, as outlined by the FDA, FCC, and other sources, impact the privacy of patient
healthcare information. Access-control matrices can place hard-coded limits on whether a
certain user may edit, modify, or delete patient data, but they can be compromised by a malicious,
impersonating user. Strong authentication procedures and sending data over noisy wireless
channels can provide extra layers of privacy in wireless medical devices and applications. Finally,
integration of data from disparate wireless medical devices can be beneficial to healthcare but
also harmful to patient and group privacy. The best way to ensure patient privacy is to design
for it at every stage of product development and to incorporate new changes into wireless medical
devices and applications as more becomes known about the impact of data mining and other
technological developments on patient privacy in electronic healthcare.
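The access-control matrix summarized above can be sketched as a simple lookup structure. The roles, resources, and permissions here are invented for illustration: each (role, resource) cell lists the operations that role may perform, and any request outside its cell is denied by default.

```python
# Hypothetical access-control matrix: rows are user roles, columns are
# resources, and each cell holds the operations that role may perform.
ACCESS_MATRIX = {
    ("physician", "patient_record"): {"read", "edit", "delete"},
    ("nurse", "patient_record"): {"read"},
    ("patient", "patient_record"): {"read"},
    ("admin", "audit_log"): {"read"},
}

def is_allowed(role, resource, operation):
    """Deny by default: permit only operations listed in the matrix cell."""
    return operation in ACCESS_MATRIX.get((role, resource), set())

print(is_allowed("nurse", "patient_record", "edit"))      # False: not in cell
print(is_allowed("physician", "patient_record", "edit"))  # True
```

Note the limitation the paper identifies: the check trusts whatever role the caller asserts, so an attacker who successfully impersonates a physician passes it, which is why strong authentication must sit in front of the matrix as a separate layer.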

REFERENCES
Adesina, A. O., Agbele, K. K., Februarie, R., Abidoye, A. P., & Nyongesa, H. O. (2011).
Ensuring the security and privacy of information in mobile health-care communication
systems. South African Journal of Science, 107, 27-33.
Aggarwal, A. (2006). System and Method for Patient Identification, Monitoring, Tracking, and
Rescue: Google Patents.
Berger, M. (2014). A Google Glass app for instant medical diagnostics. Retrieved March 14,
2014.
Buchbinder, S., Hecht, F., Klausner, J., Osmond, D., Shafer, K., & Vittinghoff, E. (2013). Risk
of HIV Infection Through Receptive Oral Sex. Retrieved April 20, 2014.
Burleson, W., Clark, S., Ransford, B., & Fu, K. (2012). Design Challenges for Secure
Implantable Medical Devices. Paper presented at the DAC 2012, San Francisco, CA.
CDC. (2014). Oral Sex and HIV Risk.
Fahim, S., & Khalil, I. (2008). An automated patient authentication system for remote
telecardiology. Paper presented at the 2008 International Conference on Intelligent
Sensors, Sensor Networks and Information Processing (ISSNIP 2008).
FCC. (2013). Acceleration of Broadband Deployment: Expanding the Reach and Reducing the
Cost of Broadband Deployment by Improving Policies Regarding Public Rights of Way
and Wireless Facilities Siting.
FCC. (2014). Medical Device Radiocommunications Service (MedRadio).
FDA(CDRH). (2013). Radio Frequency Wireless Technology in Medical Devices: Guidance for
Industry and Food and Drug Administration Staff.
Har-Even, B. (2009). Man on the Moon: Technology then and now. Retrieved April 15, 2014.
Levy, S. (2013). The Brief History of the ENIAC Computer. Smithsonian Magazine.
Limer, E. (2014). How Heartbleed Works: The Code Behind the Internet's Security Nightmare.
Gizmodo.
Meingast, M., Roosta, T., & Sastry, S. (2006). Security and Privacy Issues with Health Care
Information Technology. Paper presented at the 28th Annual International Conference of
the IEEE Engineering in Medicine and Biology Society (EMBS '06).
Moore, G. (1965). Cramming more components onto integrated circuits. Electronics, 38(8).
Peterson, A. (2013). Yes, terrorists could have hacked Dick Cheney's heart. The Washington
Post.
Petkovic, M. (2009). Remote patient monitoring: Information reliability challenges. Paper
presented at the 9th International Conference on Telecommunication in Modern Satellite,
Cable, and Broadcasting Services (TELSIKS '09).
Raman, A. (2007). Enforcing Privacy through Security in Remote Patient Monitoring
Ecosystems. Paper presented at the 6th International Special Topic Conference on
Information Technology Applications in Biomedicine (ITAB 2007).
SmartSniff. (2004). SmartSniff v2.08 - Capture TCP/IP packets on your network adapter.
Retrieved April 14, 2014
Sriram, J., Shin, M., Kotz, D., Rajan, A., Sastry, M., & Yarvis, M. (2009). Challenges in Data
Quality Assurance in Pervasive Health Monitoring Systems. In D. Gawrock, H. Reimer,
A.-R. Sadeghi & C. Vishik (Eds.), Future of Trust in Computing (pp. 129-142):
Vieweg+Teubner.
Threepak, T., Mitatha, S., & Yupapin, P. (2010). A Novel Data Transmission Security via a
Noisy Channel Using a Microring Resonator System. Paper presented at the Progress In
Electromagnetics Research Symposium Xi'an, China.
Varshney, U. (2007). Pervasive healthcare and wireless health monitoring. Mob. Netw. Appl.,
12(2-3), 113-127. doi: 10.1007/s11036-007-0017-1
