NOTE: This article was first published in Spanish and can be found here.
We often hear news of new technologies that will perform (or already perform) tasks previously carried out by humans more efficiently, while at the same time raising new debates and risks. Many of the jobs that exist today could disappear within decades: as artificial intelligence surpasses humans at more and more tasks, it will replace them in more and more jobs.
According to the Infojobs report published this May, entitled “State of
the labor market in Spain”, 76% of the active population does not believe
that automation and new technologies will endanger their jobs.
Whether the changes are deep or shallow, and whether they unfold over a
shorter or longer period, they will end up happening. This will have a major
impact not only on job losses but also on how companies manage the issue,
along with other problems that may come with the use of new technologies.
All of this will affect companies' transparency, the trust and credibility
they convey to their stakeholders, and therefore their sustainability.
So-called Industry 4.0 has already arrived, yet companies seem largely
unconcerned about how they should adapt their corporate social
responsibility to face the challenges and debates that it puts on the table.
Machine learning algorithms are far from simple. After an initial
programming phase, the algorithm is given cases that it must solve. Humans
evaluate and validate the results, and the feedback on whether each answer
was right or wrong is fed back into the algorithm, which learns and adjusts
itself to find solutions that, by the end of training, can be even more
accurate than human ones. The price is that the algorithm has become
something that cannot be scrutinized: we cannot know the exact reasons that
led it to a given solution or result.
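The feedback loop described above can be sketched with a toy example. The following is a minimal, hypothetical illustration, not any real system: a simple perceptron that learns the logical AND function from labelled cases by repeated correction. Even in this tiny case, the learned weights are just numbers; they do not explain, in human terms, why the model decides as it does.

```python
def train_perceptron(cases, epochs=20):
    """Learn weights from labelled cases by repeated correction.

    Each case is (features, label). The model proposes an answer, the
    label acts as the human verdict, and the error is fed back in to
    adjust the model -- the feedback loop described in the text.
    """
    n = len(cases[0][0])
    weights = [0] * n
    bias = 0
    for _ in range(epochs):
        for features, label in cases:
            # The model's current answer for this case.
            total = sum(w * x for w, x in zip(weights, features)) + bias
            guess = 1 if total > 0 else 0
            # Feedback: reintroduce the error to reprogram the model.
            error = label - guess
            weights = [w + error * x for w, x in zip(weights, features)]
            bias += error
    return weights, bias

# Toy labelled cases: the logical AND of two inputs.
cases = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(cases)

# After training, every answer is correct -- but the raw numbers in
# `weights` do not, by themselves, tell a human *why* each decision
# comes out the way it does.
for features, label in cases:
    guess = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
    assert guess == label
```

A perceptron this small can still be inspected by hand; the point of the sketch is only the shape of the loop. In a deep network the same loop runs over millions of weights, which is where scrutiny becomes impossible.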
It is more or less like teaching a person: at the end of the learning
process we get correct results, but we cannot open up the brain to see how
a particular result was reached. The difference is that we can ask a person
why and how they arrived at a decision, and they can tell us; the machine
cannot. It will simply be a black box that spits out results.
Among the problems that can arise from depending on decisions made by
black boxes are those that Enrique Dans discusses on his blog.
Explaining those decisions could be impossible, even for systems that seem
relatively simple on the surface, such as applications and websites that
use deep learning to serve ads or recommend songs. The computers that run
those services have been programmed, and have learned, in ways we cannot
understand. Even the engineers who build these applications cannot fully
explain their behavior.
Beyond the public sector and its own idiosyncrasies, will companies be able
to account for these new technologies? How will they redefine their
corporate social responsibility to deal with new paradigms like the one
taking shape with the massive entry of artificial intelligence into our
lives? How will they manage the introduction of robots and the consequent
displacement of their human workforce?
These are many questions with, apparently, few answers, at least public
ones, and they go beyond the debate over whether companies should pay taxes
on their robots or whether everyone who does not work should receive a
guaranteed basic income.
The near future promises to be as exciting as it is disturbing, and the
fact that it is not far off should serve as a wake-up call to official
bodies and organizations to regulate, as best they can, the transition to
this new economic model.
Companies, too, should get to work on changing their CSR policies and the
way they relate to their stakeholders, because the areas involved may raise
specific issues that, until now, nobody was considering.
Given the opacity of artificial intelligence and the large number of people
who will lose their jobs to it, it seems to me that companies have an
arduous task ahead if they want to be truly socially responsible.
Finally, I recommend that readers take a look at the nine main ethical
questions posed by artificial intelligence and do a mental exercise on how
organizations should tackle the issues that affect them.