
Can you identify some positive/negative effects of technology in the article?

Warnings of a Dark Side to A.I. in Health Care
Last year, the Food and Drug Administration approved a device that can
capture an image of your retina and automatically detect signs of diabetic
blindness.
This new breed of artificial intelligence technology is rapidly spreading across
the medical field, as scientists develop systems that can identify signs of illness
and disease in a wide variety of images, from X-rays of the lungs to C.A.T.
scans of the brain. These systems promise to help doctors evaluate patients more
efficiently, and less expensively, than in the past.
Similar forms of artificial intelligence are likely to move beyond hospitals into
the computer systems used by health care regulators, billing companies and
insurance providers. Just as A.I. will help doctors check your eyes, lungs and
other organs, it will help insurance providers determine reimbursement
payments and policy fees.
Ideally, such systems would improve the efficiency of the health care system.
But they may carry unintended consequences, a group of researchers
at Harvard and M.I.T. warns.
In a paper published on Thursday in the journal Science, the
researchers raise the prospect of “adversarial attacks” —
manipulations that can change the behavior of A.I. systems using tiny
pieces of digital data. By changing a few pixels on a lung scan, for
instance, someone could fool an A.I. system into seeing an illness that
is not really there, or not seeing one that is.
Software developers and regulators must consider such scenarios, as
they build and evaluate A.I. technologies in the years to come, the
authors argue. The concern is less that hackers might cause patients to
be misdiagnosed, although that potential exists. More likely is that
doctors, hospitals and other organizations could manipulate the A.I.
in billing or insurance software in an effort to maximize the money
coming their way.
In 2016, a team at Carnegie Mellon used patterns printed on eyeglass
frames to fool face-recognition systems into thinking the wearers were
celebrities. When the researchers wore the frames, the systems
mistook them for famous people, including Milla Jovovich and John
Malkovich.
A group of Chinese researchers pulled a similar trick by projecting
infrared light from the underside of a hat brim onto the face of
whoever wore the hat. The light was invisible to the wearer, but it
could trick a face-recognition system into thinking the wearer was,
say, the musician Moby, who is Caucasian, rather than an Asian
scientist.
Researchers have also warned that adversarial attacks could fool
self-driving cars into seeing things that are not there. By making small
changes to street signs, they have duped cars into detecting a yield
sign instead of a stop sign.
Late last year, a team at N.Y.U.’s Tandon School of Engineering
created virtual fingerprints capable of fooling fingerprint readers 22
percent of the time. In other words, 22 percent of all phones or PCs
that used such readers potentially could be unlocked.
The implications are profound, given the increasing prevalence of
biometric security devices and other A.I. systems. India has
implemented the world’s largest fingerprint-based identity system, to
distribute government stipends and services. Banks are introducing
face-recognition access to A.T.M.s. Companies such as Waymo, which
is owned by the same parent company as Google, are testing
self-driving cars on public roads.
(The New York Times)

1. Do you think technology can become addictive?


2. Why are young people more comfortable with technology than many older
people?
3. Should schools teach their students how to use technology?
4. Can technology cause social problems?
5. Which two industries might have been the most affected?
