
So, can we just start with an introduction and where your professional post is?

Louis Chude-Sokei, Boston University. I’m the George and Joyce Wein Chair of African
American Studies, a Professor of English and the Director of the African American Studies
program. And I’m also the Editor-in-Chief of the Black Scholar journal, which I must plug as the
number one journal of black studies in the country.

Excellent. So, can you tell me a little bit about how your work in relation to AI began?

Through science fiction and through music. Through narratives of artificial intelligence going
back through robotics to the present, but also through music and they began to overlap because
one of my primary theses, when I started doing my work was that black engagements with
technology occurred primarily in the musical sphere. And largely because it was musical, we
tended to dismiss it as not being technological, when in fact
it is technology. During the era of the so-called digital divide, I just found it fascinating that while
all the conversations were rampant about minority access to technology and blacks in their
relationship to technology and technology being antagonistic towards blacks, young black men
and women in their bedrooms and projects in New Jersey and council flats in London were
producing the most sophisticated electronic music on the planet. And the world was
acknowledging this really sophisticated electronic music at the same time as there were these
discourses of a digital divide. That just didn't make sense to me. Especially since, coming up
through these musical subcultures, they seemed so technology focused. You were
celebrated for your knowledge of technology, your ability to mix and remix and reshape
technology, you know rewire and re-rig things. So, I thought something’s going on in how we
understand technology vis-a-vis race. And so, that got me thinking more about technology,
reading more about technology, as a way of finding out the history of black interactions with
technology, particularly communications technology. And that, of course, led me to the
conversations about technology which become increasingly focused on artificial intelligence at
this time. And because of my history as a fan and scholar of science fiction, they began to
overlap right there.

Excellent.

And it did, and it really helped when in the 80s a number of science fiction narratives began to
explicitly refer to black music on a regular basis. Particularly William Gibson’s Neuromancer with
Rastafarians and reggae music, you know. And I was at that time writing about how reggae
music was the beginning of sampling, and looping, and chopping and doing all these things to
sound. So there seemed a very clear parallel between conversations about technological
rewiring and remixing and cybernetics. And so those things began to make sense to me through
the narratives because the writers seemed to kind of intuitively understand that something was
going on. (missed @ 3:22) is another writer who did that, Neal Stephenson to a lesser extent,
but yeah.

Great. So, in your work with music and in science fiction there’s a lot of communication in
regards to AI.

Yeah.

I’m wondering if you could speak a little bit about examples in current discourse that are
particularly accurate or inaccurate in regards to communicating AI and its capabilities to broad
publics?

In terms of how it's being communicated in general? It's funny, looking back to when we began
to switch to digital production and began to make sense of needing new software, right, that
was coming out in the 80s, and samplers and all of that, the big interest was MIDI at the time.
Right, MIDI. And that in a sense was artificial intelligence, right? Having one centralized
intelligence making sense of all of the different machines and controlling them. And so, I thought
the musicians were much more at ease with AI than the writers were. In science fiction, as part
of the long tradition of seeing technology as antagonistic, AI was just this increasingly
threatening presence that threatened the centrality of the human being, threatened the centrality
of human organizing consciousness, and just threatened the primacy of humans in these
narratives of culture. And musicians didn't feel that, but writers certainly did. Right. And so, I
think that in terms of the kinds of communication about artificial intelligence that were coming
out when I began to discover it, it was almost all negative and that’s from science fiction. I can’t,
I mean I’m sure there were science fiction stories and novels where AI was a good thing, but I
can't recall any of them. It's all pretty bad, from Wintermute in Neuromancer on; I can't think of
any. However, I also found it very interesting that AI began to be narrated as a kind
of insurgent deity. Right. A new kind of God-figure that threatened older narratives of God.
Right. So, when these science fiction writers would have the AI figure they’re pairing alongside
Obatala and West African deities and that would happen in these narratives, right. And so, it
was this sort of religious contest and that was just a part of the anxiety. So, I can’t really think
that, of any science fiction that introduced AI in a positive way in the early days. I know that
there are a couple of writers now who have tried to domesticate AI. Right, I’m thinking of Nalo
Hopkinson who turns AI into this sort of maternal Afro-centric figure, right. I’m still suspicious of
that perhaps because of my previous suspicion, right, of dominant maternal or paternal figures.
But, I don’t think science fiction did it well, I guess if that’s the direct answer to the question.
Science fiction didn’t do it well, music did it well because the musicians all felt empowered, they
didn’t feel like they were being decentered by MIDI or some sort of central organizing principle.
In fact, if you look at music as an example: as electronic music became more and more popular,
the critics of electronic music responded to it in the way that people responded to AI. There's
no personality, it's not really you, it's not really creative, it's someone else really doing it.
Whereas musicians were like "No, we feel completely, organically interconnected with the
sounds that are coming."

It's their instrument.

It’s their instrument, right. So, that was interesting. So, I suppose for me I came up noticing all of
these different takes on what would we be called AI. Right so, I don’t know if that answers the
question directly.

No, I think it does, and I'm wondering if you can also work the other end of it: what
observations you might make on the influence of science fiction and certain elements of tech
optimism in contemporary discourse in the media, perhaps?

You know, it also depends on what science fiction one reads. You know, there’s always the
science fiction that’s just pure R&D. Right. Celebrating all the new technologies, etc. But, from
the new wave in the 60s, there’s been a strong current of criticism of technological development
and its impact on culture and that’s really where I come from in terms of studying and reading
science fiction, right. What was the question again? Sorry.

The influence that science fiction has had on perhaps technologists’ descriptions of these
systems.

Yes, in media. Right? Yeah, in media.

For broad publics.

Yeah, in media I think that most of the understanding of technology and AI has been corporate.
Right? It's not been educational. It’s not been in terms of cultural value. It’s been in terms of
marketing, right. What can this thing do for corporations and for companies and for, you know,
labor, and I don’t really have a sense that mainstream media has described artificial intelligence
as something that is manageable, engageable directly by human beings. It’s already narrated as
being out of our hands, right. And, and that’s perhaps why science fiction writers are more
suspicious of it. But in media, that’s how it’s narrated. And I think that the increasing discomfort
people have with artificial intelligence is due to how it’s narrated in culture as a thing that’s A
(missed @ 9:26) company. Right. And it’s bigger than you. And you can’t really think about it.
Therefore, you must trust these technologists to handle it which is a dangerous narrative
because that narrative goes back to the 19th century—trust the scientists, trust the
technologists. Then we've got World War I and World War II, right? So,
there's an inbuilt insecurity around these narratives of technology. But that tech optimism, I just
think, is now so wed to corporate culture that I don't think AI is seen as being outside
of that, right. Even in the cyberpunk narratives that introduced a lot of AI to a lot of mainstream
readers and viewers. I mean, where in science fiction do you see AI not being a part of a
corporation? And aren't the corporations increasingly malevolent in these narratives, right? And I
don’t just mean science fiction books, I mean in films, right. And I think that’s really how AI
emerges in popular consideration as a threatening corporate force.

So, you’re a professor, you’re a writer.

Yes.

And an editor. Can you talk a little about how you see your individual responsibility in
communicating to students, to publics, and to technologists in relation to AI?

Yeah, this is all down the line for me. In the sense that I don’t feel fully equipped yet. I’ve only
become courageous enough to dive into the conversation, and I think that’s a problem. I think
that the conversation is deliberately muddied. I think it’s deliberately off-putting to people. And
part of my responsibility, and the responsibility of others who aren't coming out of that context
professionally, is to find their way into the conversation so that they can then talk to other
people about it. So, it’s a new responsibility in the wake of the last book. I didn’t think that after I
wrote that book I would be so involved in these conversations about technology, but it’s a new
sense of responsibility because by looking at science fiction, blacks in music, thinking about
ethics and algorithms and things like that, I find that it's important for someone who is even on
the edges of these conversations to get a little bit deeper into them, because I'm finding that people
are sort of looking at me as a way to understand the conversation or to be less intimidated by
the conversation. So, you know, I don’t mind falling on my face so that I can sort of open up the
conversation a bit more. So that’s my responsibility. I believe that there’s a responsibility for
people who are not in the profession, who are not experts, to start having these conversations.
It's what I call the Brian Eno strategy, right, to be, you know, he was infamous for being the
non-expert in the room, and I think we need more of that, and so that’s a position I’m sort of
trying to take on for myself.

So, we've had a lot of discussions about labor. Many discussions nationally have been
attending to questions pertaining to autonomy, but I'm wondering, within your work moving from
the 19th century through the 20th and into the 21st: how do you see AI systems changing the
way that people have worked up until now?

In answering the question, I’m sort of torn between representations of AI.

I almost went there, but I’m glad you’re taking it.

And actual AI, right, and it’s tough for me to differentiate because I’m still learning more about
actual AI, right. In public conversation, there is the promise of a world without labor, right? Full
automation, maybe machines or robots will pay taxes, I don’t know, right. There are all these
conversations out there. But I think the sense that AI will handle things for us goes beyond, for
most people, the sense that, okay, it's going to make work easier. I think it's
actually this sense that it might replace work, right. And I think that's part of the promise of it.
From the 19th century to the present, the history of labor and its relationship to technology has
also been one shot through with the manipulation of corporate and capitalist power, right. And
so, I don’t know that we can understand technology and labor without thinking about how
corporate and capitalist power sort of plays with labor and plays with machines in different ways
over time. You know what I mean? So, I don't see a steady narrative from, let's say, slavery,
through industrialization, all the way to the present; I just see constant disruptions back and
forth as forces of power use technology and forces of labor resist that technology, right? So,
yeah, I don't have a clear picture of it. I mean, certainly the promise of labor being less central
to human life in the West is balanced by the narrative of outsourcing, labor being dispatched
elsewhere. And so labor sometimes seems to disappear, and we have this idea that therefore
less of it, and less manual labor, is necessary, but that's just because it's somewhere else and
you don't see it.

It's less visible.

It’s less visible.

Rather than actually disappearing.

Exactly, so it’s things like that that complicate the smooth narrative that you might have that the
more technology, the more automation, the more leisure time, which should resolve arguments,
right. It’s just more displacement of leisure and more socioeconomic tension just sort of spread
out across the planet.

How do you expect these issues pertaining to human dignity to continue to emerge as these
systems are introduced to particular labor markets? I think you've been attending, within your own
scholarship, to features of dignity, or contestations of human dignity, in regards to labor, but also
the narratives associated with that. I'm wondering your thoughts on some of that?

Back to the representation of machines and artificial intelligence, or the representation of an
increasingly powerful technological superstructure, right. I think that there is always a fear of the
loss of dignity. Now I'm not one to romanticize drudgery, right, I'm not one to say necessarily
that work makes the person, right, homo economicus or whatever; I don't really know that I believe
that, right. But I do believe that there’s a constant fear of losing agency and losing agency is
connected to losing dignity. Meaning losing centrality, right. And so that is a part of the narrative
of how people respond to technology. And that’s a part of the narrative of how people will
respond to new developments that are sort of sold to them as antidotes for work or a way to
increase leisure, etc. People definitely feel initial responses of, where in this equation is my power,
right. And so, there’s a lot of anxieties about power. And of course, it matches up in terms of
race, in terms of class, and certainly in terms of gender, right. And so, every introduction of
some new technology that isn't, for example, rooted in a manual, clearly masculine kind of labor will
be translated as some sort of threat, to some degree, to one's subjectivity and agency.

Yeah, we’ve been really concerned with features of agency and manifestations of agency. I’m
also really interested in the power negotiations that come into play, so I’m wondering if you can
touch on a particular AI tool where you see powers being transferred from the human user to
that system, or perhaps if there’s an example of a representation of a tool where that power
negotiation is kind of front and center to the shifting.

Like a real live living tool?

A real live living tool or within your own work in terms of literary analysis if there’s a particular
representation of that dynamic between user and tool.

This is tough. As I said in science fiction and the research that I’ve done in science fiction, AI is
almost always connected to corporations. It’s almost always connected to banks, right, or it has
access to your information and therefore access to you in nefarious ways, right. But also, if you
look for an actual tool, I think of these massive systems of knowledge and research and
information gathering out there. That's terrifying. I know very few
people who aren’t research scientists or who aren’t involved in AI research who aren’t terrified
by this. Just the idea that there are, for example, here’s a specific tool, there are algorithms out
there that are determining whether or not I can get a bank loan. I mean, and that’s quite
terrifying. I mean, I think about specific instances; I had family members going through this
right before 2008, right. Those are real tools. And after what
happened in 2008, I mean, there's a sense in which these forces of knowledge production,
that’s what they are, right, they gather information, produce knowledge, and then they sell that
knowledge, right. There’s no sense of accountability because it’s so far beyond our ken, right?
And so, that's a very specific instance of it, right. Facial recognition stuff, right. How it manifests
with people of color, black men in particular, right. Clearly the tools are still being perfected, but
every time we hear about how they're being perfected, it's because they've made this really
weird, unpleasant mistake which is about your race or your gender, or denied somebody money
in some way, right, and that's not feeding any greater public comfort in these systems. Now it's
funny, this is what happens when you move from robot to AI you move from embodied to
disembodied, right. There may be some embodied AI out there that we can talk about too, but
when most people think about AI they’re thinking about this sort of distributed consciousness in
the broader systems of power and finance, right? And it’s tough as human beings, other than
religious discourse, we don’t have a lot of language to ascribe agency to distributed power,
right. We have God, yes, right, but with AI we, it’s distributed, it’s disembodied, but it’s
centralized in a way and this is how it’s understood.

And it’s not necessarily associated with capital or corporation because it can be diffused and.

It can, exactly.

It can be ascribed to a national or governmental entity but it can be diffused.

Transnationally.

Yes.

Right, yeah. And these things, I don't think it's just because we're scholars that we're talking about
it; I think people are actually struggling to make sense of what this thing is, right. And
especially since it's not a thing, right.

It varies depending on its function.

Yeah, exactly. And all of that just means it’s more and more beyond our ken and therefore more
and more not responsible to us, and, you know, I want to say not legitimate, but it certainly
attains a position of power and agency greater than our own, and that's what I think terrifies folks
absolutely. And I’m not as terrified as people are about it, but I’m interested in the fact that they
are in fact afraid simply because it's so disembodied and so widespread. With these specific
tools, it's one thing to say, well, the problem with that tool is this, right. Now, the problem with
these algorithms, that's just a hard thing for people to even process,
right. All we know is the way they manifest problematically, you know.

So, I want to push on this a little bit further.

Please do.

Just because you have this vantage point of studying the cultural history over time, so I’m
wondering if you could speak a little bit more about this anxiety and the imbalance of power of
individuals to these seemingly nefarious systems.

Or seen as nefarious simply because they don’t have power over them.

Yes, so I'm wondering if you could talk a little bit about either representations of, or specific
examples you can imagine or consider, where this has an impact on human decision-making?
So, the ways in which our decisions as individuals, or perhaps representations of human
decision-making in science fiction, are influenced by virtue of that power imbalance or the
perception of that power imbalance?

Well, see, but that's the thing, though. When we're talking about distributed subjectivities or
distributed systems, I think that it's disempowering; there is no sense of actual engagement
with it at all, you know what I mean. To say, for example, that Amazon is operating, you know,
with some systems, or artificial intelligence systems, and sorting and research, etc.,
doesn't really tell me a lot. It's a fiction to suggest that my choice of this particular object or item
that shows up in the mail three days later, it's a fiction for me to think that that interaction is me
negotiating equally with this broader system that has greater knowledge of and access to my
history than I even have. I'm no populist, God knows, but I think that most people in these
interactions would rather not imagine the broader context of the AI, and they would rather just
see it as a basic economic interaction: you're the shop.

Transaction.

A transaction and I’m the customer, right.

A transaction rather than a relationship perhaps.

Exactly, yeah, and if you ask folks to sort of think beyond that as to what do you think is going
on on that side, that’s when things get really unpleasant and complicated.

So, do you see or perceive value in the prospect of machine autonomy?

Value? Value to whom? If I'm, you know, a maker, a seller, I see infinite forms of value. If I'm,
you know, looking for quote-unquote free labor, if I'm looking for automation
that's going to work better for me economically and socio-culturally, of course, all kinds of value
there. I can see that for some it would produce supposedly more leisure time, which is the
narrative of robotics and artificial intelligence, one of the narratives, right. They do the work and
you don't have to do much. So, I see value there. Other than those kinds of examples, I just don't. If
you’re asking sort of for more of a metaphysical value, I don’t, I don’t know that I see any, and
I'm not hostile to the idea, because I'm one of those people who sort of accepts that there's a
certain inevitability about these things, I’m sure value can be made and will be made. One of the
beautiful things I’ve learned from science fiction and from blacks in music production is the
intention of the maker may not be how the machine will ultimately be used.

Can you say more about that?

You know the whole cliché from Bruce Sterling, not Bruce Sterling, but from William Gibson: the
street finds its own uses for things, right. I've always thought that was fascinating, because I'm
like, any black person anywhere in the world could have said that. Right, or, you know, I'm
studying some work now on daily machines in colonial India, the use of mundane machines in
colonial India. Anyone there could have said that, right, the way that things get rewired. So I see
value being added to some of these things. We know that in much music production people of
color get no historical credit for certain things, but one of the reasons certain software adjusted,
you know, its sonic capacities was because these kids were saying, we need the bass in the
red, right, without distortion, right. Or we need this, so we need a broader sonic field, and they
did all these weird things to it, same thing with drum machines, etc., and then they listened, and
you know, right. That's a concrete example of sort of
working with technology on the street. And I could go on for days about the Caribbean culture
and what happened there with how they sort of really changed how we listen to sound because
of their domestic refiguring of sound technology, right. So, we know from science fiction and
from real life that people will take these machines and add value and do different things to them.
So that is something I know will happen.

It’s reinscribing the power negotiation.

It is a negotiation on that level, I agree. Absolutely. It does, but it’s also a form of resistance too,
right. It’s a form of acknowledging that this strange thing has been sort of imposed on me. Or it’s
been, it’s generated its own value because it is this shiny object that comes from this
socioeconomic place and represents whatever it represents, right, but we can now take it and
turn it into something else. So, it is a negotiation there, which is probably why I'm less
anxious about these sorts of things. I know that, you know, the way that these things were
imagined to work, it's never going to work out that way, you know. I've been
most inspired by the time I spent, it’s been a while since I’ve been there, but I’ve written about it,
in Nigeria in these areas, you’ve probably seen documentaries of these computer graveyards in
Nigeria and Ghana, I’ve been blown away because I’ve just watched kids who are now being
tutored by older boys who were tutored by older men who’ve been living in these graveyards of
computers for decades. And the things that they're able to do with no education whatsoever has
been stunning. And so, when I see things like that, I see an expanded technological field. That’s
being, that's not being paid attention to by the center. But in the same way that Sony eventually
had to adjust for what happened with the TR-808 in Brooklyn, I hope and suspect that maybe
Apple will one day respond to something these guys in Ghana, these Sakawa boys in Ghana
are doing with machines. I suspect it at some point that might happen, and then we’re seeing a
large sort of negotiation, right.

Yeah, it’s interesting cause I wanted to move in the conversation towards the potential pitfalls of
machine autonomy. I like your optimism in the previous example.

That’s one of them.

Even if it’s just, yeah but if you could talk a little bit about the pitfalls. Your talk last night touched
on notions of apocalypse and post-apocalyptic narratives. And this, the image of the graveyard,
the computer graveyard is interesting in terms of this second-wave generation of.

Well.

Appropriation and reappropriation of tools.

And generation.

Yes, yes.

Absolutely.

Yes, absolutely.

I guess I’m still thinking about the first part of your question though, cause I hadn’t thought about
it other than in practical terms. What's the value of machine autonomy in practical terms? I kind
of answered that question a little bit ago, but I know I didn't answer it that well,
because I hadn't really thought about it other than in terms of how much value it produces for
someone. When I accept the inevitability of certain things, I do accept the
inevitability of more and more machines that run with less and less of me, right. I'm seeing it
happen all of the time. And we kind of forget the process, right. Human forgetting is crucial to
this, right; we forget how much we are already inscribed into this, right. And by the time we
discover it, that’s when these narratives of panic sort of show up, but it’s like usually way too
late, right. I would like to seem indifferent to autonomy. I would like to seem indifferent. I know
there are narratives that are terrified of machine autonomy and then there are narratives that
celebrate machine autonomy. I honestly don’t know how I feel in terms of the value of autonomy
in itself, right. I do know however, I do suspect that once you have acknowledged autonomy,
like full autonomy, if in fact that's possible, I don't know, but let's say once you have that, then
human culture has to make sense of this autonomy in terms familiar to us, which are
anthropomorphic or centralized or hierarchical, right. I don’t think we can.

The negotiations begin.

Yes.

Of social interaction and relation, right.

Or continue more openly.

Yeah.

Cause they’re always going on.

Yeah, yeah.

Right, so, yeah, at that point absolutely, it's about making sense of these autonomous forces,
pulses, beings. There's going to be more conversation about that, and if those things are
connected to nefarious structures of power or structures of power that are seen as amoral or not
necessarily engageable by you, that’s going to be, well that is a problem. That is a problem. And
that goes back to the question of agency, feeling more and more disempowered, more and
more disempowered, yeah.

Yeah, cause the topic often will come up with weaponry.

Oh absolutely.

Right.

Absolutely.

So, thinking about Chamayou’s Drone Theory and the ways in which that’s...

I parallel the weaponry conversation with finances, right. It’s your money.

Yeah.

And credit, and that sort of thing. That stuff is already beyond most human
comprehension, right. And then to imagine a system of knowledge, or a system of knowledge
production, that is responsible for that, right, on your behalf, I think is deeply problematic for a lot
of folks, absolutely.

How do you think about where responsibility might be ascribed if an autonomous system causes
harm?

Where responsibility will be ascribed? Yeah, that’s kind of the conversation folks are trying to
figure out, right. Whose fault is it, right? Then, of course, well, who's the who, right?

Yeah, this notion of personhood.

But the question really is, who do I sue?

Yes. Right.

Right. Because that’s the thing about when I pay attention to the conversations about self-
driving cars and things like that. It’s like to a certain extent, some degree, some amount of death
is inevitable. But the question is, is it more or less than the death that would happen if it was just
a person who was driving the car, right? I think the real issue for people is, is death more
acceptable when a human being does it than when a machine does it? And I actually think most
people would say yes, because we can ascribe blame; we know who to sue, who to cry for, and
who to blame.

And humans are understood as fallible.

Exactly. Exactly. Whereas with the machines, it's not just that it becomes distributed in terms of
the networks of agency, who's involved, etc.; the emotional weight of this kind of question, of
violence and death, is not there.

Cause we normally push control, conceivably, to that system.

Exactly.

And if that were the case, perhaps we are responsible for that.

Well, that, that, because of the relinquishing of control.

Perhaps.

Well, then you’re asking something that’s not going to happen, which is blame yourself. That’s
not going to happen.

Right, history tells us that’s not.

That’s not going to happen.

Not the narrative.

But no, attributing blame and responsibility is a powerful thing, because then it becomes
a way in which, I think, one can escape blame, right. Because here's the thing: no one believes
that self-driving cars, for example, will be voted in. No one believes that it’s going to be voted in.
They’re going to be imposed, right. Right. Same thing with all of these systems, right. And I think
that fundamental power imbalance is going to mark all of the relationships all the way through. If
I were to be able to speak to people who were really involved in AI and this kind of technology, I
would say, your challenge is to find a way to render it engageable early on, so that
people don't just see it as purely abstract all the time.

It's like the earlier conversation that we had in regards to rights being taken.

Exactly.

Rather than offered.

Rather than offered. Exactly. But like I said, I don’t see, you know, just like no one voted for
those algorithms that worked with real estate values. No one voted for those things. They just,
boom, you know.

They were just introduced.

They were just introduced and then fragmented to all these other forms of very interesting, you
know, economic technologies. No one voted for that. And so, no one expects any of these
things to be voted for.

So, in closing, we've asked everyone: what are your thoughts on the development of general
artificial intelligence?

Yeah, whether, what do I think about it just in general?

Yeah. Like an element of it is.

Yeah, I'm wondering. You know, sometimes I am very, very happy to have met some scientists
who don't like what I do. Because they've said, you know, we don't know all that theory, cultural
criticism stuff, right, and I'm like, fine, I don't know half of what you're doing. But they'll be like, a
lot of this stuff is premised on stuff that is just not going to happen, right. Like general artificial
intelligence. So I'm wondering if I've been influenced by them, or if I'm taking refuge in them:
am I actually going, okay, well, if the real scientists say it's never going to happen, maybe I
should stop thinking about it? So, there's that, right. Well, I know it's something
people are working on, and at a certain point it will work on itself, right. If that's not already
happening, I don't know. It’s an inevitability, I don’t know that this has anything to do with what
we humans can do, I mean we’re already feeling more now than ever that the political system is
beyond our ability to deal with. And after 2008, the economic system, the financial system, were
way beyond our control. Then to ask how we deal with something that promises to be bigger
than, and able to manage, those things, no, it's beyond even conceptions of God, right. So yeah, I
don’t have a response other than there’s a sense of inevitability, right, and there are things that
one thinks about like as I said earlier about the creolization of technology on the street. What
will happen in response to it? I certainly accept for good and for bad that human beings will do
things not expected, right, or not accepted. And I’m looking forward to that.

Is there anything else that you’d like to add that we didn’t touch on?

Yes, I’m happy to have these conversations. Like I said, I’ve only recently started thinking about
these questions, and I’m only recently learning what the questions are that you folks are dealing
with, and I’m really excited by them, and it makes me want to get more into the conversation,
because I feel like I stumbled into it through sci-fi and music, but it’s really inspiring to actually
see how the problems are being framed by those who are further in the conversation than I am.
That’s very helpful, so I’m grateful for that.

Well thank you, cause I think you’re actually leading the conversation.

Oh, that’s, no way. Thank you.
