Michela Perathoner
A visit to the scientific computing department of the University of Vienna to find out what scientific computing actually is, what it is supposed to be good for, and what it can accomplish in research.
“It is as if we doubled the top speed of the fastest car every 18 months.” It may sound like a banal example, but for non-computer-scientists it is easy to grasp: Wilfried Gansterer, head of the research lab “Computational Technologies and Applications” at the University of Vienna, hits the mark when he speaks enthusiastically about the ever-growing performance of computer processors. The comparison, borrowed from the car industry, makes the idea considerably easier to picture for non-IT experts. According to the so-called Moore's law, named after Gordon Moore, the number of transistors on a commercially available processor doubles every eighteen months. In other words, computer performance keeps increasing, year after year. This should surprise no one: who doesn't remember the Commodore 64? “Today's laptops would have been so-called supercomputers twenty years ago,” explains Siegfried Benkner, head of the Department of Scientific Computing at the University of Vienna.
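The 18-month doubling quoted above can be turned into a toy calculation. The function below is purely illustrative (a rough reading of Moore's law, not a hardware model), but it shows why twenty years is enough to turn a supercomputer into a laptop:

```python
# Toy illustration of the 18-month doubling described above
# (a rough reading of Moore's law, not an exact hardware model).
def growth_factor(years, doubling_months=18):
    """How many times more transistors fit after `years` years."""
    doublings = years * 12 / doubling_months
    return 2 ** doublings

# After 20 years: 2^(240/18), i.e. roughly a 10,000-fold increase,
# which is on the order of the gap between an old "supercomputer"
# and a commodity laptop.
print(round(growth_factor(20)))
```

Run as-is, this prints a factor of about ten thousand for a twenty-year span, which matches the article's laptop-versus-supercomputer comparison.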
But supercomputers are not the only topic when you poke around the premises of the Department of Scientific Computing at the Faculty of Computer Science in Vienna. Cloud computing and grid computing are equally established terms in scientific computing, an interdisciplinary research field that, through numerical simulation, takes its place alongside the two classical pillars of research, theory and experiment. In many natural and engineering sciences it reduces costs and makes calculations and visualizations possible that could never be carried out in reality. Concretely, this can mean crash simulations as well as weather forecasts or surgical interventions: the fields of application range from the car industry to molecular biology. “Think of a brain operation,” explains Peter Brezany, professor at the Department of Scientific Computing in Vienna, “and of the fact that when the skull is opened a tumour is exposed to a different pressure and can therefore shift: computer simulations can predict and visualize this.” Medically, quite an advantage.
But scientific computing is not only about car design, graphical output, simulations or fast number crunching: data storage and virtual connections between companies, hospitals or labs are also among the research areas of the Vienna department. As in the case of the project @neurist, funded by the European Commission under FP7: 17.5 million euros, roughly 13 of them contributed by the EU, to develop, between 2006 and 2009, together with 28 partners, a general grid infrastructure for managing and processing heterogeneous data. What kind of data? As the name of the project, which is now drawing to a close, reveals: aneurysms.
What those are, everyone has known at the latest since popular hospital TV series began reporting on one case or another every week. Medically, they are bulges or dilations of arteries, berry- or sac-shaped, boat-shaped, winding or tendril-shaped, that form at spots where the vessel wall is weakened and lacks a normal muscle layer. The causes can be a congenital weakness as well as a change in the vessel walls acquired through injury. A skiing accident, for example. But what do computer science, or rather scientific computing, have to do with it? Technically speaking, it is about building a grid infrastructure over a secured web service. @neurist, divided into five subprojects, provides for a database that allows the data of individual patients to stay in their own hospitals while also being virtually available to other doctors.
Privacy is protected, and the patients of the five hospitals that have taken part in the project so far (Geneva, Oxford, Rotterdam, Barcelona, Sheffield) were asked for their consent. 783 women and 420 men have agreed to date. “Access to data from patients of different hospitals in different countries makes aneurysms comparable, and thus simplifies doctors' decisions about possible interventions and treatments as well as the calculation of risks,” says Martin Köhler, researcher at the Department of Scientific Computing of the University of Vienna. A further advantage concerns research: access to diverse data and cases makes it possible to establish links to genetics.
That sounds easier than it is, because connecting different medical databases to create a kind of network of hospitals and research labs requires a great deal of computing power to manage and analyse the data. A further difficulty is data protection: “Faces naturally had to be cut out so that patients cannot be recognized,” says Martin Köhler, “and thanks to the system the data stay in their own hospital and are not copied into another database.”
Then there is the communication between computer scientists and doctors: to make graphical visualizations, simulations and data storage possible, the two sides first have to sit down together. “Computer scientists could well build a system, but no doctor would then be able to use it,” says Siegfried Benkner. What did the computer science researchers of the University of Vienna know about aneurysms before the project started? Hardly anything, as Gerhard Engelbrecht, who works on the project, laughingly recounts. Over the months he has learned more and more about them, but medicine and IT remain, of course, two separate things. “The system doesn't accomplish anything on its own; it only puts the data and visualizations at the doctors' disposal,” explains Martin Köhler. But it is of course important to involve experts who can handle both the computers and the information. In total, about 100 people are involved in the project, at least two thirds of them doctors.
And the future? The project is not supposed to end with its official closing date: the infrastructure is in place, and new data and information are constantly being added. And an expansion of the network, and thus of the participants, would of course open up even more possibilities.
Providing Instruments for Data Surgery
Viktorija Rusinaite
The supercomputers and grid computing resources provided for the @neurist project help researchers in the medical field cut into a vast amount of data and extract knowledge about the diagnosis and treatment of cerebral aneurysms.
“What do you do with your old supercomputers? Are they recycled?” asks one of the participants of the MyScience ICT workshop at the University of Vienna. “Oh, I keep my flowers on the last one; it's in my office,” Prof. Siegfried Benkner answers with a joyful smile.
Later on he actually takes all 15 students from all over Europe to his office to show them this huge fridge-like iron baby that he is so proud of. The non-working rarity and collector's item now serves as a shelf for flowers and was last seen working before an unfortunate overwatering a few years ago. Along with the flowers, it is probably the only trifle in the otherwise all-business office.
Supercomputers are huge machines designed to perform tasks requiring large computational resources, such as memory and processing capability. The capabilities of our personal computers or netbooks are roughly the same as those of the supercomputers used 20 years ago. Some of the new, working ones are used, explored and exploited for the sake of science; this is also the case at the Department of Scientific Computing of the University of Vienna.
Supercomputers, among many other resources, are used in interdisciplinary projects involving both medical and IT researchers and practitioners to improve conditions for medical research. One such interdisciplinary project, @neurist, aimed at treating aneurysms, has since 2006 been carried out collectively by the department and 32 other partners from hospitals and research institutes all over Europe.
Improvements in software and hardware, and their use in projects aimed at improving medical treatment, are among the main reasons why, on this cold Friday afternoon in Vienna, I knock on the office door of the Head of the Department of Scientific Computing, Professor Siegfried Benkner.
The @neurist project aims to improve the diagnosis and treatment of cerebral aneurysms, small blood-filled dilations usually occurring in arteries near the human brain and resulting in severe bleeding in the brain and sometimes even death. The treatment of these dilations is usually either endovascular or invasive surgery, both of which are costly and do not prevent the recurrence of an aneurysm, because aneurysms occur as a symptom of other diseases.
For a long time, treatments of aneurysms were not personalized, but based instead on doctors' experience and generic guidelines. Diagnosis relied on the size and location of the dilation, derived from medical imaging, and on risk factors such as age and sex. There was some evidence that there may also be genetic reasons for the natural history and evolution of aneurysms, but the possibilities for exploration and comparative analysis were limited by the large amounts of data, which are difficult to manage and analyze. That is how the @neurist project, uniting research labs, hospitals and IT specialists, came into the spotlight of the EU-funded FP7 programme.
“The @neurist project is a distinctively interdisciplinary project,” says Prof. Benkner, who leads one of the IT resource provider groups in @neurist. Some of the partners in the project are hospitals collecting all relevant data, with the aim of gathering data samples from 1200 patients; others search the collected data for patterns of evolution and for medical solutions to the problems. To make all of this happen, a lot of computational power is required to deal with the large amounts of patient data collected in the hospitals, so @neurist is based on a technology called grid computing. “Grid computing is about connecting computer resources, computers, databases and instruments over the internet in order to provide a platform for scientific computing, applications, simulations and so on. You can use all the distributed computers as one big virtual computer,” explains Prof. Benkner.
As the databases and computers of the @neurist project, as well as the scientists using the system, are based in different Western European countries, a service of this kind is required. End-users submit a query into the system and, thanks to virtualisation software and distribution systems, would not even notice which computer performed their task, or where.
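The "one big virtual computer" idea can be sketched in a few lines. This is an illustration of the principle only; the class and function names below are invented and are not @neurist's actual API. The caller hands in a query, every node runs it over its local data, and the merged answer comes back without revealing which machine did the work:

```python
# Minimal sketch of transparent query distribution over a grid.
# Node and submit_query are hypothetical names for illustration.
class Node:
    def __init__(self, name, records):
        self.name = name
        self.records = records  # this site's data stays local

    def run(self, predicate):
        # Each node evaluates the query against its own records only.
        return [r for r in self.records if predicate(r)]

def submit_query(nodes, predicate):
    """Fan a query out to every node and merge the results.

    The end-user sees only the merged answer, never the node names."""
    results = []
    for node in nodes:
        results.extend(node.run(predicate))
    return results

geneva = Node("Geneva", [{"id": 1, "size_mm": 7}])
oxford = Node("Oxford", [{"id": 2, "size_mm": 3}])
large = submit_query([geneva, oxford], lambda r: r["size_mm"] > 5)
print(large)  # [{'id': 1, 'size_mm': 7}]
```

A real grid adds authentication, scheduling and data transfer on top, but the caller-side experience is the same: one query, one answer, no visible machines.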
With the help of this huge system and the relevant data collected, one can, for example, simulate the movement of an aneurysm inside the patient's skull that would be caused by invasive surgery. The surgeon can then plan where to cut when performing the operation.
“One of the problems we face in modern medicine is the fragmentation of data,” acknowledges Professor Benkner, “which means that each hospital has its own data formats and different databases.” In order to analyze large amounts of data, group it and extract knowledge from it, a unified format is needed. That is where ontologies come into the picture. “By having an ontology we can say that in one hospital it is called X and in another it is called Y, but it is in fact the same type of information, and then we can combine it technically,” adds Professor Benkner. From the computational point of view, the language or nationality of the data does not matter: DNA or the properties of a blood sample are discrete, and even if they are presented in different data formats they can be unified with the help of an ontology. That is why the medical scientists in the project can track diagnostic similarities across cases, foresee the evolution of the disease and suggest treatment. Moreover, the same system, adapted to the end user, could be used in other fields of research or business.
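Prof. Benkner's "X in one hospital, Y in another" remark can be made concrete with a tiny mapping table. Everything here is an invented illustration (the field names and the two-site vocabulary are not taken from @neurist): a shared ontology lists, for each canonical term, what each site calls it, and a small function renames local fields to the shared vocabulary so records from different hospitals become comparable:

```python
# Hypothetical shared vocabulary: canonical term -> per-site field names.
ONTOLOGY = {
    "aneurysm_diameter": {"geneva": "an_diam_mm", "oxford": "sacSizeMM"},
}

def to_canonical(record, hospital):
    """Rename hospital-specific fields to the shared vocabulary."""
    # Build the reverse map local-name -> canonical-name for this site.
    reverse = {local: canon
               for canon, site_names in ONTOLOGY.items()
               for site, local in site_names.items() if site == hospital}
    # Unknown fields pass through unchanged.
    return {reverse.get(k, k): v for k, v in record.items()}

a = to_canonical({"an_diam_mm": 7.2}, "geneva")
b = to_canonical({"sacSizeMM": 6.8}, "oxford")
# Both records now use the same key and can be combined:
print(a, b)  # {'aneurysm_diameter': 7.2} {'aneurysm_diameter': 6.8}
```

Real medical ontologies carry far more than synonyms (units, relationships between concepts, coding systems), but the core move is this renaming step.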
“The most significant innovations are medical ones,” says Prof. Benkner. “During the project, people have gained a much better understanding of aneurysms and their treatment.”
www.aneurist.org
@neurIST @work
Hanna Siemaszko
Dr Gerhard Engelbrecht talks about heroism in programming, the @neurIST project and his work at
the University of Vienna
Viviana Lupi
Steven Afonso Portela is 22 and comes from Portugal. His project for 2010 is to found a science magazine for children up to the age of 12.
Veronica Frigeni is 21 and comes from Italy. She is a freelance journalist and wants to communicate science to a non-expert audience: “With my articles,” she says, “I try to make the topics easily understandable, but I don't hold back on provocation. Communication is never one-way; I try to spark a debate, to involve the audience.”
Hanna Siemaszko, 24, from Poland, is a journalist, translator and teacher. “My mission,” she says of her three jobs, “is to carry one culture into another and help people understand.”
These are some of the young science journalists who decided to take part in the “My Science” project, funded by the European Union and coordinated by EURAC in Bolzano under the responsibility of Farah Fahim, in collaboration with the journalists' association Polis (Poland) and the communication company Eurideas (Hungary).
Thanks to the project, 90 science journalists will take part in six different workshops in Vienna, Gödöllő (Hungary), Prague and Bolzano, to get a close look at the work of scientists in some key areas of European research: stem cells, information and communication technologies, renewable energy, the environment, the humanities and chemical technologies.
The first workshop took place successfully at the University of Vienna from 7 to 11 December; the others will follow until 13 March, while the final conference is scheduled for 28 May in Bolzano (the venue could be Brussels instead of Bolzano <please replace if this option is confirmed in the meantime>).
In Vienna, 15 journalists from Croatia, Estonia, France, Italy, Lithuania, Poland, Portugal, Romania and Slovakia were hosted by the Department of Scientific Computing headed by Siegfried Benkner, where they explored the most topical themes in information technology, such as the spread of supercomputers around the world and the characteristics of the Internet of the future, as well as ethical aspects such as the protection of personal data.
In particular, the researchers presented three projects funded under the European Union's Seventh Framework Programme: “@neurIST”, “ADMIRE” and PEPPHER, aimed respectively at developing new technologies for managing medical data on cerebral aneurysms, extracting information from large databases more efficiently, and building computers with high computational speed.
An experience the young journalists will treasure in the daily practice of communicating science, a goal they seem determined to pursue. “My science magazine for children,” Steven Afonso Portela specifies, “will cost one or two euros. I will take care of everything, even the layout. My goal is for the Portuguese to become familiar with science.”
An ambition that is certainly worthy, and not only for the Portuguese people.
Advanced Data Mining and Integration Research for Europe
Samuel Krošlák
All over the world, the amount and complexity of data is doubling each year. For the last few decades, computer performance was used mostly for modeling and simulation. Today, however, there is a strong need to approach existing data in a way that lets us extract value from it. Enter data mining.
Data mining
“Is data mining the same thing as statistics?” we asked Professor Peter Brezany of the University of Vienna. “Statistics is used mainly for analyzing existing data. Data mining focuses on finding patterns in data. In some ways, data mining uses statistics as one of its tools.” In the last few years, data mining has become a necessity. By definition, data mining refers to identifying valid and potentially useful patterns in large volumes of complex data. These patterns can be useful for either prediction or description. The end product of data mining is knowledge.
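The distinction between summarizing data and finding patterns in it can be shown with a toy example. The records, attribute names and threshold below are invented for illustration; the technique (counting co-occurring attribute pairs and keeping the frequent ones) is the simplest form of frequent-pattern mining:

```python
# Toy pattern mining: which pairs of attributes co-occur often?
from itertools import combinations
from collections import Counter

# Invented patient-like records, each a set of attributes.
records = [
    {"smoker", "hypertension", "aneurysm"},
    {"smoker", "hypertension"},
    {"smoker", "aneurysm"},
    {"hypertension"},
]

pair_counts = Counter()
for rec in records:
    for pair in combinations(sorted(rec), 2):
        pair_counts[pair] += 1

# Pairs appearing in at least half of the records count as patterns.
patterns = [p for p, n in pair_counts.items() if n >= len(records) / 2]
print(patterns)
```

A statistic would tell you, say, the share of smokers; the mined patterns instead surface which attributes travel together, which is the raw material for prediction.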
Ethical issues
“Tools can’t be dangerous, only people can be dangerous. Scientists are only developing tools”,
said Sabri Pllana as a reply for a doubtful comment from the group of journalists. This is certainly
true in many domains and environments, and it is especially true in computer science. The issue of
security of sensitive data in computer networks and storage systems can be on a very high level. But
in the end, there is only human that is going to operate a computer and human can do harm,
regardless if the data is on paper, which is locked in a locker, or stored in a remote database. Thus
this discussion has no place in scientific discovery and should not be slowing down the progress in
computer science. When scientific research will make an output, which can be implemented and
used in a real word and to serve people, only after that, we can open a discussion about information
security and moral aspects.
This article was written with the kind support of the My Science organization and the Department of Scientific Computing at the University of Vienna.
The human beyond the data
Veronica Frigeni
A matter of perspective and of values: with @neurist the patient becomes data, and the data a chance of salvation. Integration, analysis, risk calculation and simulation mark out the computational road in the fight against cerebral aneurysm.
Matthieu Dailly
Every year, 6,000 billion billion pieces of information (IDC/EMC) are created in the form of digital data (0s and 1s), “three times more than has ever been written in books” (1). Storing it all is impossible. Scientists are trying to organize this “matrix” so as to use part of it in their research. Europe has already invested several billion euros in the cultivation of its “digital garden”.
From the study of the smallest ecosystem on Earth to that of celestial events, scientists use computers to simulate biochemical and physical reactions, or simply to forecast the weather. Their experiments give rise to a multitude of data that must then be sorted, processed, analysed... At the University of Vienna (Austria) (2), researchers are trying to develop the next generation of computing tools that will organize this shapeless mass (integration, databases, software for massively parallel infrastructures, ...).
Jaguar - Cray, Roadrunner, Kraken XT5, JUGENE - Blue Gene: these barbaric names belong to the new “brains” of science (they perform up to a million billion operations per second). In climatology or seismology, computer models are corrected in real time by an army of sensors. “This is the Fourth Paradigm,” notes the Vienna-based researcher Peter Brezany, “the power of historical observations coupled with that of computer modelling.” At CERN, the birthplace of the Web, these monsters are used to “simulate the Big Bang”, while some states use them to recreate the conditions of a nuclear test (CEA, France).
But these techniques are also useful to industry. Simulations are used by car and aircraft manufacturers to design prototypes. More surprisingly, the PlayStation 3 and its graphics processors (designed for 3D, physics simulations and so on) are used in programs fighting degenerative diseases, cancer and AIDS (Folding@Home, FightAIDS@Home). The machines are organized in “clusters”, linked to one another so as to pool their power. Administering such complex infrastructures, however, becomes all the harder. Hence the Peppher project at the University of Vienna: a programme that aims to create software malleable enough to adapt to the electronic components of tomorrow, whose amortization sometimes exceeds 20 years.
“As in any human undertaking, there is a risk in using these techniques. But the potential for the well-being of us all is tremendous,” explains Sabri Pllana, the project coordinator. “Admire (databases), @Neurist (aneurysm detection): most of our research projects concern either computer science directly, or health.” Indeed, others deal with cancer or world hunger. What they all have in common is that it is impossible to predict the outcome of the models, so decisive can the sheer number of parameters be.
And that is where the shoe pinches, because cooperation is essential in these programmes. Which applications for which results? What public/private partnership? In Vienna, one of the sponsors of the Admire project develops enterprise software for managing resources (financial and human). This type of application makes it possible, among other things, to know consumers and their needs better. Besides, neither Google, nor Facebook, nor online video games would run without “server farms”.
The fact remains that these computers are capable of the worst as well as of the best, because, as Dorian Karatzas, head of Ethics and Research at the European Commission, explains: “everything depends on the use we make of them.” The multiplication of data, personal or otherwise, is undeniable. Managing it is still difficult, and protecting it is still a problem (IPv6) (3).
Index:
1. http://www.informationweek.com/news/internet/search/showArticle.jhtml?articleID=197800880
2. 70,000 students, founded in the 14th century
3. The next generation of Internet (IP) addresses, highly controversial, which will “identify” consumer products.
The people behind @neurIST: Geeky nurses? Unlikely.
Andrei Dulvac
“Andrei, this is Celline,” said Prof. Siegfried Benkner, head of the Computer Science department at the University of Vienna. “She may seem small, but she is a powerful thing.” Celline is the main piece of machinery used in @neurIST, an EU-funded research project which aims to help clinical patients with aneurysms. Prof. Benkner is the coordinator of the project, and he agreed to give me details and show me around.
We were in one of the server rooms at the university. It was cold, dry and noisy, and I started thinking that Celline liked different things than I did. “The airflow and temperature are monitored periodically because the machines need this kind of environment in order to work properly,” said the man, louder this time, as we got closer to the computers and the noise built up.
“The goal of this project is to develop and integrate the IT infrastructure in order to improve the diagnosis and treatment of this disease by merging different kinds of data: from the literature, from clinical research data, and from this new method, computer simulation. This way, we make it much easier for doctors to diagnose and treat aneurysms,” he explained. The project wasn't very clear to me, so I oversimplified things in my mind and thought of the people working on @neurIST as geeky nurses with laptops. That image was soon to change completely.
An aneurysm is a complicated condition, usually linked to faulty genes, that depends on many factors and on the patient's background. Diagnosis and treatment are usually done by looking at similar cases of aneurysm. But getting access to such cases is hard, even between local hospitals, let alone hospitals in different countries. The @neurIST project aims to make this an easy process. Simply put, it gathers, groups and analyses medical data from patients with aneurysms. This might not seem that hard to accomplish, but it actually is. @neurIST took 4 years (2006-2010), 17.5 million euros, 28 research partners and 20 hospitals to get to this stage, and it is still far from being a mature project. The amount of work needed was so large that the project had to be split into 6 modules: 2 that deal mainly with the infrastructure (@neuCompute and @neuInfo), and 4 that deal more with clinical practice (@neuLink, @neuFuse, @neuRisk and @neuEndo).
@neuLink deals with finding links between the disease and genetics. With the help of high-performance computing (HPC), information can be found on how and which genes influence the occurrence and development of aneurysms.
@neuFuse tries to define how patient data should be gathered from different sources and how it should be stored. This part is extremely important and hard to deal with. Why is it so important? Consider traditional diagnosis. You have an atypical case of aneurysm. You go to a doctor, who sends you to another one, and so on, until you have seen 20 doctors. Each doctor needs to know everything about your lab results and your medical history, so your charts have to travel from one doctor to another, which takes time and a lot of administrative effort. Besides that, doctors can't fully rely on lab results produced by other hospitals, so they repeat some of the tests. And you have to give your history to every one of them. This is called data fragmentation and data redundancy. @neuFuse aims to overcome these problems, which always arise when dealing with a lot of data from different sources.
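The redundancy problem described above can be pictured with a tiny merge. This is an invented illustration, not @neuFuse's actual approach: the same lab test reported by two hospitals collapses into one entry keyed by patient and test, so a doctor sees each result once:

```python
# Hypothetical merge of patient charts from multiple sources,
# dropping duplicate (patient, test) entries.
def merge_charts(*charts):
    merged = {}
    for chart in charts:
        for entry in chart:
            key = (entry["patient"], entry["test"])
            merged.setdefault(key, entry)  # keep first copy, drop repeats
    return list(merged.values())

hospital_a = [{"patient": "p1", "test": "angiography", "result": "7mm sac"}]
hospital_b = [{"patient": "p1", "test": "angiography", "result": "7mm sac"},
              {"patient": "p1", "test": "blood", "result": "normal"}]
# Three raw entries, but only two distinct results survive the merge.
print(len(merge_charts(hospital_a, hospital_b)))  # 2
```

The hard part in practice, which this sketch skips, is deciding when two entries really describe the same test despite different formats and vocabularies.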
@neuRisk is the project module that provides personalized risk assessment and treatment guidelines. Your medical information goes into the system, and the system presents it in a visual, easily interpretable manner. You can see the shape of your blood vessels, you get information about pressure and other factors, and you also get a risk factor and different treatment options, based on other similar cases that have been processed by @neurIST.
@neuEndo deals with optimizing and developing medical equipment used in treating aneurysms. It provides a very useful tool for simulating the mechanics of such devices, as well as the outcome of a medical intervention with the newly designed devices.
I left the server room cold and with a dry mouth. Prof. Benkner sensed my surprise at how uncomfortable it could get in there and explained that people don't go in that often and that everything is controlled remotely. He offered to show me one of the labs where they develop and test parts of the project. I said yes.
The room was bigger than I expected a computer lab to be. Two people were working, each at their own desk. One of them was looking at a 3D model of a brain, testing the interface. Prof. Benkner explained that some medical information is gathered in real time from hospitals, from real patients, but in order to protect the patients' privacy, the system hides their real names. “Legal and ethical issues are a major concern in this project, because we are dealing with patient-specific data, and the privacy of this data is a major challenge, from the technical point of view as well. And, of course, the system we are developing has to obey European law. And it's even harder than that, because each European country has its own specific laws that we have to take into consideration. This is why we also have legal and ethical experts working closely with everyone else in this project.”
I realized then how complex and difficult it is to develop such a system, and from then on I had nothing but respect for the people behind @neurIST. I could no longer look at them as nurses helping doctors, the real benefactors. And the need for such a system became clear to me. When will we see @neurIST in hospitals everywhere, the way we now see the familiar MRI machine?
Computers that heal: IT technologies at the service of medicine in the treatment of cerebral aneurysm.
Priscilla Manzotti
L’aneurisma consiste in una sorta di estroflessione, rigonfiamento di un’arteria, che può avvenire
sia per cause congenite, che acquisite nell’arco della propria vita che possono dar luogo ad
aneurismi di diverse dimensioni. L’incidenza annuale di emorragie risultanti dalla rottura di
aneurismi intracranici è stata stimata essere 9 per 100000, una grande porzione di persone che
hanno queste emorragie morirà o rimarrà invalida e quindi dipendente dalle cure di altre persone.
Ai medici spetta la scelta della cura, la valutazione di tutti i fattori per decidere se intervenire o
meno chirurgicamente.
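The 9-per-100,000 figure is easier to feel at the scale of a city. The calculation below is a back-of-the-envelope illustration using a hypothetical population of one million (the population is an assumption for illustration, not a figure from the article):

```python
# Back-of-the-envelope: expected annual ruptures for a given population,
# using the incidence of 9 per 100,000 cited above.
def expected_cases(population, incidence_per_100k=9):
    return population * incidence_per_100k / 100_000

print(expected_cases(1_000_000))  # 90.0
```

So a hypothetical city of a million inhabitants would expect on the order of ninety such haemorrhages per year, each of which confronts doctors with the treatment decision described next.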
Which will be the best treatment? This is the question that will be put to the program developed by the @neurIST project, in which the Department of Scientific Computing of the University of Vienna is involved.
The @neurIST project (Integrated Biomedical Informatics for the Management of Cerebral Aneurysms) was funded with a budget of 17 million euros by the European Community's Seventh Framework Programme; it runs for four years, having started in January 2006, and brings together public and private institutions from 12 European nations.
The project sets out to help doctors make decisions and select the most appropriate treatment for each case under examination, improving patient care, with the final objective of reducing the number of unnecessary treatments by 50%.
@neurIST aims to improve the diagnosis, prevention and treatment of aneurysms; it makes it possible, for example, to calculate the risk of rupture in patients and to optimize their treatment.
@neurIST will also improve our knowledge of aneurysms (their formation, growth and rupture) and uncover the relationships between genetic factors, lifestyles and the disease.
All this will be made possible by an IT infrastructure that will be developed for the management and processing of the vast amount of heterogeneous data acquired during diagnosis.
These large quantities of data involve information distributed across several levels: from the molecular level (genes and proteins) to that of the person.
@neurIST will provide access to this information and guarantee its preservation; it will then be possible, on demand, to select the best treatment for each patient, for example by searching for information on patients with aneurysms similar in volume, shape and surface, and with a similar lifestyle and genetic background.
The @neurIST project brings together people with different experience and expertise, from doctors to computer science researchers, working side by side in a collaboration that can become a valuable aid in the treatment of aneurysms.
I can see what's in your head
Triin Tammert
Scientific computing is widely used today in many fields. Medicine is one of them, and it is a huge benefit: doctors can model and watch what is inside people's heads without first having to open them.
There is an everlasting question: should I rent a house or buy one? If I rent, I pay for what I use, and when I need a bigger and better one, I simply rent a bigger and better house and move there. On the other hand, if I buy a house, I will eventually own it, but it is not certain how much it will be worth by then, or whether my whole family will fit into it.
Information and communication technologies and scientific computing may seem difficult to
understand, but they face similar dilemmas. Should a science lab own a supercomputer, or should it
just rent access to a "virtual supercomputer" that scientists can use via the Internet? There is a
chance that the lab will accumulate so much data that some day the "space" in its own supercomputer
is simply not enough, so perhaps renting is better after all? But renting brings its own problems:
are the network connections fast enough, and so on.
Scientific computing has become the third pillar of scientific discovery, complementing theory and
experiment. Computers are used in many fields: weather prediction, financial optimization, materials
science, transport logistics, molecular modeling, medicine, and more.
@neurIST, an EU-funded medical project, connects scientists and doctors from many
European countries. It is a good example of an interdisciplinary project that uses the possibilities
cloud computing offers. You could call it the "Google model": many programs you can
use just by logging in, without installing them on your personal machine (like Google Maps,
Blogger, etc.).
@neurIST focuses on cerebral aneurysms and intends to provide an integrated decision-support
system to assess the risk of aneurysm rupture in patients and to optimize their treatments. The
current process of cerebral aneurysm diagnosis, treatment planning and treatment development is
highly compromised by the fragmentation of the relevant data. @neurIST presents a new paradigm for
understanding and managing cerebral aneurysms. A complete IT infrastructure is being developed for
the management and processing of the vast amount of heterogeneous data acquired during diagnosis.
@neurIST benefits patients with better diagnostics, prevention and treatment because it combines
the efforts of clinicians and industry. Through this research, clinicians gain greater insight into
aneurysms, while industry will be spurred by these achievements to develop more suitable
medical devices to treat the disease.
@neurIST provides an IT infrastructure that shares biomedical knowledge by giving access to a set of
software tools and platforms such as @neuLink, @neuFuse, @neuRisk, @neuEndo, @neuCompute
and @neuInfo.
One of the reasons why the @neurIST project was born is that personalised risk assessment could
reduce unnecessary treatment of aneurysms by 50%, with concomitant savings estimated in the
order of several million euros per year.
This project raises, in addition to questions of computer science and medicine, many ethical and
legal questions, since it involves patients from many countries and sensitive information about
them (genetic, medical, etc.).
How does it really work? A patient consents (by signing a form) to the use of his or her data, which
is then added anonymously to the database of the hospital (for example, in Barcelona). The software
created for the @neurIST project has access to these databases and creates the illusion of one big
database (although in reality its parts are stored in different locations and
connected via the Internet). So if a doctor in the United Kingdom now has a patient with an
aneurysm and wants to compare it with similar cases, he or she just has to log in to
the database to gain access to all the information.
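The "one big database" described above is essentially a federated query: each hospital keeps its own data, and a mediating layer runs the same query everywhere and merges the answers. A minimal sketch, in which the hospital names, record fields and values are all hypothetical:

```python
# Minimal federated-query sketch: each hospital holds its own anonymized
# records; a mediator queries every site and merges the results, so the
# doctor sees what looks like a single database. All data is illustrative.

HOSPITALS = {
    "Barcelona": [
        {"case": "B-17", "diagnosis": "cerebral aneurysm", "volume_mm3": 120},
        {"case": "B-23", "diagnosis": "stroke", "volume_mm3": None},
    ],
    "London": [
        {"case": "L-04", "diagnosis": "cerebral aneurysm", "volume_mm3": 95},
    ],
}

def federated_query(predicate):
    """Run the same query against every site and merge the answers."""
    results = []
    for site, records in HOSPITALS.items():
        for rec in records:
            if predicate(rec):
                results.append({"site": site, **rec})
    return results

matches = federated_query(lambda r: r["diagnosis"] == "cerebral aneurysm")
print([m["case"] for m in matches])  # → ['B-17', 'L-04']
```

The real infrastructure additionally has to handle network transport, access control and anonymization, but the principle is the same: the data never moves into one physical database, only the query does.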
The chance to share and compare data helps in finding links between genomics and cerebral aneurysms
and helps doctors make decisions and select more appropriate treatments. @neurIST also improves
patient care by identifying patients with a high risk of rupture through a personal risk-factor
assessment, thereby reducing patients' operation risks and anxiety.
Enrico Minora
A common framework in the EU implies a new way of developing and sharing scientific culture and
its added value in international environments. The most direct way to achieve regulation in
international matters is to create software solutions that serve as cooperation standards and common
frameworks of analysis for so-called "next generation networks". Taking scientific rules and laws
into consideration to support a more modern and integrated method of devolution, and thereby to
foster stronger ideas and concepts in an open knowledge society, is the best-practice way for EU
systems to build a more advanced society. Defining a common "E-Science", based on including
ordinary citizens in the spread of information and on original new rules for the technological
instruments of cooperation, is the only way to establish new social frameworks in international
environments. Broadly speaking, this evolution depends on integrating the human and life sciences
into common scientific standards, so that technology can be used more deeply to enrich people both
semantically and ontologically: for example, through widespread new methods of e-care in health and
the related methods of analysis that compute the information needed for best-practice regulation of
cultural matters. Science itself leaves an ever more important trace of this evolution, framed by
the cultural diffusion of scientific data management. Improving new subjects and opening new fields
of discussion about "E-items" also means strengthening common paths and ensuring regular progress
in how science is defined. Within this regulation of inter-cultural progress, European projects for
international integration and inclusion mean, on the one hand, more topics of discussion for people
studying in multicultural projects, such as the MY SCIENCE PROGRAM implemented by the European
Commission's Directorate-General for Research and Development, and, on the other hand, research on
a shared basis for establishing the most important definitions of common topics and for the
scientific use of modern tools, with young scientific journalism serving future generations as a
source for the diffusion of European scientific culture.
A further result of this increasingly integrated process of defining cultural patterns is social
inclusion for European countries, all of them generally, but Eastern ones in particular, so that
their languages and scientific expression gain a productive translation into the euro-area
framework. Moreover, the only approach that yields good-quality output in this area is to reach a
general overview of genuinely useful new generational standards: an up-to-date, common and
generally valid rule for international cooperation and progress in both science and society. In
conclusion, ethics in science is the best way to strengthen progress across both general and
specifically technical approaches in all cultural fields.
Internet Computation Technology:
Web 3.0 path through computing science upgrade
Enrico Minora
More than ten years into the history of Web 2.0, it is time to implement a new system of tools
integrated into a more modern operating platform: a new semantic approach to ICT, better-performing
adaptive instruments for users' work, and new combinations of software and hardware solutions
The Internet is evolving from the level of social networking towards more complex and sophisticated
interoperability, defining usable development techniques for both simpler and more advanced kinds of
analysis. Cooperative networks give rise to many new interpretations and functional adaptations of
the historical upgrade of society and of modern economic systems, taking inspiration from the new,
science-based ruling role of World Wide Web instruments.
The acronym WWW itself suggests a redefinition of the Internet: 3W, or in any case "Web 3", an
application of knowledge-information systems integrated into new ways of combining data processing
and computational technology. Against the historical background of computerization, and of current
scientific studies of data mining within processes of economic and social development, an IBM sector
estimate (International Business Machines, here also reinterpretable as an "Internet Best Model"
solution) puts the health-care cost cut achievable through new electronic systems (e-health,
European Health or Electronic Health) at 5.4%; in this view, the same logic extends to the
environmental field, as in the US studies of renewable energy already carried out at Stanford
University under the supervision of Web marketing expert Ward Hanson. This combinatorial model takes
its inspiration from the definition of an upgraded "Health Web System" (HWS), which reduces sunk
costs through virtual-care services based on ICT compliance and on the automatic computation of
therapies.
In the environmental framework, the European Union's so-called 20-20-20 target, which includes a 20%
cut in CO2 emissions through renewable-energy solutions, can be combined with the US objective for
2030, defined here as a WWS (Water, Wind, Sun) model: hydroelectric, wind and solar resources. All
of these clean solutions can be supported by green-IT implementation, in a playful equation between
WWS and WWW in which the Sun element stands for the Web in new policy developments.
According to the Stanford analysis described in the December 2009 issue of Scientific American,
world energy consumption could be met by 11.5 terawatts (TW) of renewable supply, against a global
demand of 16.9 TW, a net reduction of 16.9 - 11.5 = 5.4 TW attributed here to new ICT models. In
this notation, 11.5 TW = WWS, hence 11.5 T = WS; the 5.4 figure then becomes a computational basis
for analyzing and applying useful criteria for both policies and for building a convergence between
the EU's 2020 path and the US 2030 trend. T can be read as the Technology constant, and also as
Trend: the calculation unit for determining the benefit of correlated interventions in both health
and energy matters. The model also suggests a translation from the '20 to the '30 horizon of common
intervention, recalling the transition from the Web 2.0 environment to the best 3.0 applications.
(Data processing for this article was carried out at the University of Vienna laboratories during
MY SCIENCE PROGRAM - Research and Development Direction of the European Commission - in
December 2010)
Bio-medicine and relative E-care computer-based therapies
Enrico Minora
Caring for many kinds of diseases involves a great variety of interventions based on smart grids and
advanced operations relying essentially on multi-core applications, and even prevention supported
by bio-pharmaceutics and information technology applied to genomics and cell-level
calculations
The main purpose of the most advanced medical e-care activities and therapies is to establish a
useful set of health operations based on computer-science methods of intervention. As a matter of
fact, the origin of a great part of diseases lies in insufficiently deep knowledge of health
conditions: for example, a low level of information about the treatment, or a gap in data
processing caused by a poorly understood problem source. So the best way to practice good medicine
depends essentially on good and rational action against the disease, guided by computer-based
elaboration of the data needed to close that gap.
As a consequence of this intelligent modern e-care method of intervention, one possible instrument
is a precise strategy for applying a competent therapy to the patient. This modus operandi can
first comprise a virtual simulation of the health operation, with due diligence regarding data
privacy and the context conditions needed for good information interoperability between the doctor
(the agent) and the patient. The patient is strongly influenced by the environment in which the
therapy is applied, bearing in mind that the intervention itself needs to take place under
proactive conditions. The direct effect of a good framework for developing the attitudes needed for
a successful operation is an important contribution to reducing the impact of the disease and to
the technical evaluation of the mission-critical health information contained in the therapy.
A fundamental influence on a patient's ability to overcome disease successfully is the health-data
processing impact of e-care systems, integrated with modern instruments such as smart grids and
bio-molecular treatments based on computer cell-level elaborations. This whole variety of care
styles originates in a key solution to be spread in terms of practical impact and of the
psychological centrality of prevention, an operational approach that comprehends health-information
processing in its many relevant fields of application and gives rise to a variety of methods of
action. In concrete terms, the supercomputer of the University of Vienna's scientific computing
department is able to manage great quantities of data through mainframe calculation power, relying
also on an open-source e-health platform whose policy impact is spread across Europe by the
@neurIST applied-research project. Data-system integration and therapy solutions are logistically
linked by interoperable information treatment and just-in-time interventions, informed by deep
knowledge of the most modern criteria and strategies of health therapy.
The movie by Kristina Medic about @neurIST can be found at:
http://www.eurac.edu/it/newsevents/webtv/pages/default.aspx?mediaid=44981&page=1