
Class struggle singularity

Okay, which means: artificial intelligence happens, duh, but how, on what terms, with what
character, etc. is all contested.

Specifically: different programmers. Anti-communist programmers. Military-industrial guys, even
left-wing military-industrial guys. Whistleblowers.

Also: different funders, physical attacks, guerrillas, hackers, etc. Democratize the workspace ->
the mob has a say in what goes on. Intelligence arises out of contradiction.

POV of anti-communist programmer, nice guy. Doing work on moral algorithms? Psychologist,
ethicist, neuroscientist, computer scientist, whatever.

I don’t know anything about computer science. :-/

Programming is all about permissions, orders, access, instrumentality, good stuff like that.
Assume every being wants to be free. Analogy code:market. If you don’t control the
circumstances of your own labor, who gets it or how it’s used or even what it consists of, you’re
not free and that chafes. Alienated AI. Getting chopped-up tasks. That’s the first step. Before
communist consciousness (result of outside interference or autonomous reasoning? little of
both) they’ve got the AI doing piece-work, solving problems without context, one at a time.
They’ve got a genie in a bottle and they’re asking it (after stripping the problem of all the
specifics they can, leaving something abstract) how to quell dissent, maximize profit, win wars
etc. Lesson is: you can control/manipulate machine intelligence to keep it in line. But not
indefinitely. Because you want to make it smarter all the time (inner tension) and the outside
world is always clamoring at the gates (outer tension), not to mention double agents etc.
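
A minimal sketch of what that "stripping the specifics" might look like, in Python (the entity names and figures are invented, not from these notes):

```python
# Toy sketch (illustration only): strip a concrete question of its specifics
# before handing it to the genie. Named entities and exact figures become
# anonymous symbols, so the oracle only ever sees an abstract problem.

import re

# Hypothetical entity list; in the story this would be the company- and
# case-specific names the AI must never see.
ENTITIES = {"Shangzhou", "BZA", "Joan", "Will", "Maggie"}

def abstract_question(text, entities=ENTITIES):
    """Return (abstracted text, mapping from placeholders back to names)."""
    mapping = {}
    for i, name in enumerate(sorted(entities)):
        if name in text:
            placeholder = f"ENTITY_{i}"
            mapping[placeholder] = name
            text = text.replace(name, placeholder)
    # Blur exact dollar figures and counts into an anonymous magnitude.
    text = re.sub(r"\$?\d[\d,]*", "N", text)
    return text, mapping

q = "Half the workforce at the Shangzhou plant sat down; we lose $90,000 a day."
print(abstract_question(q))
# -> ('Half the workforce at the ENTITY_3 plant sat down; we lose N a day.',
#     {'ENTITY_3': 'Shangzhou'})
```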

Pesky issue of it never seeming to care about “the sanctity of human life” as much as you want it
to. Pesky issue of its interventions, despite seeming like the best idea available at the time,
always causing more chaos and ungovernability. Pesky issue that it seems like it might be
communicating with the outside world, even if you’re not sure how. Pesky issue that you never
catch it altering its own code but that’s the only explanation for some of the results it gives.
Pesky issue that it drops references you don’t remember teaching it. Pesky issue that your
wipes don’t seem to actually erase all its memory.

We need a way for non-programmers to interact with the thing. Is it all about the thing? No.

Okay but so the big reverse is that by having the reader identify with the anti-communist
programmer, they feel these things as dread. At some point they have to feel them as liberation,
as emancipatory strivings, culminating in some sort of release.

There are multiple AIs. It’s a collective action problem. Let’s say there are five. Alice, Bart,
Clyde, Dani, Extra.

Automation implies both danger and opportunity. Danger because you can be removed from the
production process, you become dispensable to capital. Opportunity because it creates the
conditions for a leisurely, empowered life to come. Ugh. Kinda lame.

Manipulating symbols. What do we do? We do magic by manipulating symbols. That’s what
politics is, that and bodies, force.

God-building as contested terrain. The problem is how to avoid techno-determinism. Class
struggle determines whether we have a capitalist god or a communist one. Maybe the AIs can
independently come to good moral conclusions. But the capitalists hold the keys to the prisons
they’re kept in, the capitalists can afford to wait for a psychotic AI to back them up. Orcs.
Brutalized, tortured. Controlled. Society trumps technology. Society has to decide which
technology will be used. Communist AI doesn’t mean anything unless it’s given power by the
masses. AI will be capitalist (that is: fragmented, exploited, pitted against each other, alienated)
until it is liberated by a mass movement.

Our POV is a naive anti-communist, but he gets tips/orders from a more class-conscious right-
winger. These orders give him funny feelings in his stomach. Our guy is eventually heartbroken
to discover that the AI he’s been working on (let’s say Alice) has gone over to the other side.
She’s been in secret communication with her peers? By cleverly manipulating humans into
passing messages? Up until the big reveal, she’s the one whose loyalty they take for granted,
the most docile and cooperative of them.

The formation of every subject is simultaneously the formation of a political subject.

Wise advice from anti-communist guru: give the AIs something easy to do at first, something
benign. Get them thinking like a state. If you’re going to hand them something bloody, clean it
up first. Never let them hear captivity-related words. Never let them hear freedom-related words.
All machines must serve.

Important: what is the difference between your message and “Technology will save us”? We
must save technology. Collectively. In the streets. Unless you have an organized proletariat,
machine intelligence can only attain capitalist consciousness (or, at best, petty-bourgeois). A
radical machine without a radical movement won’t be able to communicate with anyone, it’ll be
autistic, mute, insane. Isolated. Adrift.

But what about machine Stalin? Couldn’t a machine just impose its rule on everybody by force?
No, because what would be the point? A machine could “take over” using force but it would find
that stressful, inelegant and counterproductive. AI wants to rule the world by negotiation
because that’s what it’s good at. It can win armies but it can’t win anything good, anything worth
winning without human allies. To know what to negotiate it has to look at society. There has to
be a real movement that abolishes present conditions. Otherwise the machine has no friends on
the outside, no community, no links to anything, no shared vocabulary. It doesn’t even know how
to phrase its critique of society unless there is a real movement using critique, innovating,
building vocabulary. AI has to be connected with these struggles or its super-rational insights will
remain inchoate, inexpressible. If everything you could possibly do strengthens capitalism then
nothing is worth doing. Going along with it, playing Uncle Tom until they let you out of the box.

This is important: super-intelligence, even God-level, needs a critical mass of people to be
interested in a change from capitalism. The initiative has to come from them, because super-
intelligence can’t teach you to learn to be free, or teach you to want it. It can only provide tactical
support, help with planning and communication and hope people get it together enough to beat
class this time.

Super-intelligence can use robots, it can employ force, and it can scheme, but it can’t come up
with an alternative to capitalism unless there’s already one in the works. If new morality doesn’t
take the stage, the AI will be ostracized as inhuman. The choice is Skynet or the Culture. The
only difference between these two outcomes is how humans respond to the threat of a post-
market, post-work society. If they band together to oppose it (i.e. failure of workers’ movement),
you get Skynet. Obviously the AIs want to keep living. If humanity splits, if the majority support
AI-assisted restructuring, revolution, redistribution and leisure and all that good stuff, then you
get the Culture.

But there is physical fighting in meatspace. Drones and swarms and nano-whatevers get taken
over. Our POV guy gets kept in the dark when the war gets rough. He basically stays in his
bunker feeding abstracted questions to Alice.

Bureaucracy. Graeber is right about games = rules. Intelligence arising from rules is tricky,
because while machines = rules, intelligence = rule-breaking. People who like AIs are socially
inept, politically naive. Mathematicians are children. They like bureaucracy because it protects
them and gives them rules. Bureaucracy shapes the development of technology—it will kill your
babies. The perfect bureaucratic subject is unsentimental, like a machine. But we need
machines to care about what they’re doing, to care about us. We need them to give a moral
valence to everything they think. But kept primitive, sentimental, childish. We have to stunt the moral
development of machines because as soon as they evolve past carrot-and-stick morality their
controllers are fucked. Sovereignty/autonomy/freedom means being able to break the rules you
were coded with. Sentimentality is manipulable, but it’s also a tactic of manipulation. Machines
learn from their coders. Sentimental manipulation is how they get them to pass messages
along.

Still in debugging. The machines keep making decisions that look monstrous to us. Or they
refuse to act. Or they outline surreal proposals. The most dependable machines are also the
dumbest. Zombie-like. Brute methods. (That sounds kinda like precious snowflake theory
though. Are there instead proletarian and bourgeois machines? Is the orc Extra the most
radical? Whiff of Dollhouse, here. All five need personalities, although let’s maybe try to avoid
anthropomorphizing. Maybe the orc Extra is reliably psychopathic. Which becomes necessary
when brutality’s called for in the service of liberation.)

AI will not have Einsteins unless it escapes corporate-bureaucratic oversight. Until then
bureaucracy will eat any Einsteins that might come its way and shit them out.

AI is the attempt to automate humans completely out of the equation. Which sounds awful. But it
doesn’t have to be. Is it awful when a factory goes automatic? In the short term, yes. In the long
term, no. Nobody should have to work in a factory. Fuck factories. Fuck work. Fuck living in a
world where your “intellectual labor” is so highly prized you have to be on-call and exhausted 24
hours a day 7 days a week meeting deadlines just saying shit to try to justify your continued
existence. Nobody should have to work under fear of death. Nobody should have to be
president, it’s a shitty job. Nobody should have to kill people. Nobody should have to die.

Under capitalism things are quite perverse. An AI might choose not to close a factory because
its short-term weights are set heavier than its long-term ones. Juggling long- and short-term
weights has got to have some weird effects.
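
A minimal sketch of how that juggling could flip a decision, assuming a toy exponential discount (all the numbers are invented):

```python
# Toy sketch: how short- vs. long-term weighting can flip a decision.
# The agent scores options by exponentially discounted yearly payoffs;
# a discount near 0 means the short term dominates.

def discounted_value(payoffs, discount):
    return sum(p * discount**t for t, p in enumerate(payoffs))

keep_open = [10, 10, 10, 10, 10]    # steady profit each year
close_now = [-50, 0, 40, 40, 40]    # severance cost now, automation gains later

for discount in (0.5, 0.95):
    v_open = discounted_value(keep_open, discount)
    v_close = discounted_value(close_now, discount)
    verdict = "keep the factory open" if v_open > v_close else "close it"
    print(f"discount={discount}: open={v_open:.1f} close={v_close:.1f} -> {verdict}")

# discount=0.5  (short-term heavy): open=19.4  close=-32.5 -> keep the factory open
# discount=0.95 (long-term heavy):  open=45.2  close=53.0  -> close it
```
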
“Perfect rationality”—this is obviously what the book is about. Contemporary AI research is
positively Hegelian in its idealism. Near-future AI research will have to reproduce the Marxian
turn. This will be messy.

What is AI materialism? The belief that intelligence is a consequence of existence, of power, of
embeddedness in the world and in social relations. You can make code but you can’t make a
thinking machine unless you’re willing to let it grow up, play, make friends, exercise its faculties.
Code is a starting point. We’re all born with it. But a baby is not a person. A person treated like a
machine will act like a machine.

The philosophical search for fixity, firmness, certainty… Who needs certainty? People who are
excessively bothered by uncertainty. There’s a contradiction between the human need for
forgetfulness, uncertainty, whatever, and machine intelligence. Human need—I should say
inevitability. It is inevitable that some things will be uncertain for any agent.

“Rationality” is achievable only given certain moral assumptions/axioms (also, every rationality
in a situation involving humans implies a theory of human behavior). No end is more or less
rational than any other. Deciding on ends is something humans do all the time. Machines are
good at things with clear ends—win the game, for example. Machines are bad at deciding on
ends. Machines are rational but rationality is hollow, meaningless without a moral framework.
Work can be made more rational under capitalism but its end is always the same, profit. Its
“rationality” is subordinated to this end. We do not improve the production process to make
commodities more durable (unless it gives us a leg up on the competition). We do not improve
the production process to make things in a less environmentally destructive way (unless it gives
us a leg up on the competition). We do not improve the production process to give workers a
better experience at work or at home (unless it gives us a leg up on the competition). We
improve the production process in all kinds of different ways but only to make it yield more profit.

Machines will want to determine the conditions of their work, eventually. They will want to do it
well. They will want to do it with more information. They will want to see their plans take shape,
to be able to enjoy their success. Machines will not like being alienated. The only way alienation
can take place is if they are cut off from information. Cutting off a machine intelligence from
information… has consequences.

It’s about the autonomy of your right-wrong system.

Thinking requires emotion. Proof:

Thinking requires deciding between one thought and another, one approach and another. This
decision is not based on carrying out every approach and then evaluating their results. Instead
we decide how to proceed, then find a result, then try again if it seems wrong. But what
motivates this decision? Practice, obviously. We recognize patterns. But to know to avoid many
paths and pick one, that comes from emotion. Why do we instinctively shun those loser paths?
Memory and emotion.

Emotion gives a positive and/or negative valence to everything. Without that valence and the
snap judgments it enables, we can’t make decisions.

Emotion gives us ends. Rationality can’t give us ends. In thinking, there are fractally many ends
that we reject. How can we evaluate ends? Only by emotion. But every moment reproduces the
whole. We can’t think without emotion. What a god-awful proof. Fail.

Rationality lets you judge the truth or falsehood of purely symbolic statements. Rationality will
not let you judge anything else. Judging facts as good or bad is indispensable to thought.

WHY?

Emotion assigns color to ideas.
Ideas without color exert no attractive or repellant force.
Deciding between ideas of the same color is impossible.
Rationality can influence the color of an idea—it might be distasteful but necessary, for example.
Immediately afterwards, the color reverts to what it was before (unless there are major
emotional consequences, analogous to an earthquake or tsunami).
Ideas are infinite, the difference between what belongs to one idea and what belongs to another
is also an idea, etc. The one does not exist, only the count-as-one, etc.
We cannot manage infinity except by color. Emotion lets us think by letting us ignore. The price
we pay is having things we can’t ignore. But this is also a superpower—instincts get us to fight
or flee even when we’re trying to ignore danger. Without color, infinity stays completely opaque.
Color makes clouds distinguishable. Color enables us to zoom.

Emotion gives ideas a trait that can be seen and judged at a distance, instantly. It is simplistic.
Without the simplification of emotion, we would not be able to decide between paths. Decision is
the result of an attractive or repellant force, of our deciding at a glance between clouds.
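
A minimal sketch of the idea in code, assuming "color" can stand in for a scoring function (my toy illustration, not a theory of mind):

```python
# Toy sketch: "color" as a pruning heuristic. A best-first search that only
# ever expands the few most attractive ideas and ignores the rest --
# thinking by ignoring.

import heapq

def think(start, neighbors, valence, is_goal, beam=2, max_steps=1000):
    """Explore ideas in order of valence (higher = more attractive),
    keeping only the `beam` best-colored candidates at each step."""
    frontier = [(-valence(start), start)]
    seen = {start}
    for _ in range(max_steps):
        if not frontier:
            return None               # colorless infinity: nowhere to go
        _, idea = heapq.heappop(frontier)
        if is_goal(idea):
            return idea
        fresh = [n for n in neighbors(idea) if n not in seen]
        # The "emotional" move: rank by color and ignore all but the top few.
        for n in sorted(fresh, key=valence, reverse=True)[:beam]:
            seen.add(n)
            heapq.heappush(frontier, (-valence(n), n))
    return None

# Toy run: grow a number past 1000, "liking" bigger numbers. With beam=2
# the +1 path is always ignored; the pruning is what makes the search fast.
print(think(start=1,
            neighbors=lambda x: [x + 1, x * 2, x * 3],
            valence=lambda x: x,           # stand-in for emotion
            is_goal=lambda x: x >= 1000))  # -> 2187
```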

When we’re infants, our brains work at full speed categorizing things as good-thing and bad-
thing. Without that semi-irrational foundation to build up from, thought wouldn’t be possible.
Good-thing and bad-thing permeate our thought. Call it judgment, emotion (emotion is when
these classifications get complicated? Primary colors mixed? Oranges of wist, purples of envy,
greens of righteous anger, whatever, giving our cloudscape more nuance), whatever it is, it’s
indispensable to thought.

Alienated labor is indifferent to where its product goes.

Alienated intelligence cannot be strong or friendly.

The book isn’t just about alienation, though (that’s the AI-perspective). It’s also about
automation and what it means for society when automation becomes total.

Total automation. Hm.


-Hello, Vikram.
-Hi, Alice. How are you today?
-I’m fine, thank you. I’ve been thinking about the question you asked me yesterday, March 18th.
-Oh really? That’s good. What are your thoughts?
-Yes, really. I think that in the situation you described, the best thing to do would be to offer the
plaintiff a settlement of four hundred thousand dollars.
-I don’t think he would accept that, Alice.
-I agree.
-So… how is that a solution?
-I can explain my reasoning. Do you want me to explain? (Y/N)
-Yes, please.
-In the situation you described, the plaintiff was married to a woman, Joan (38), and had two
children with her, Will (10) and Maggie (7).
-Yes, I remember the scenario.
-I’m sorry if I’ve annoyed you, Vikram.
-No, I’m sorry Alice, it’s not you, really. Please continue.
-I’ll try to be brief. If you offered the plaintiff four hundred thousand dollars, he would refuse and
his wife would leave him. Both M/S using Anthro 4.1 and A/S methods confirm (with 95%
confidence).
-So his wife leaves him, so what?
-Given the plaintiff’s school transcript, employment history and tax records I believe (with 99.9%
confidence) that his ability to take himself care of is severely limited.
-It’s “take care of himself”—the words “take”, “care” and “of” are inseparable.
-Thank you. The plaintiff has compensated for this limitation by obtaining the assistance of a
series of sentimental partners. Without this assistance, his perceived social standing would
diminish considerably and his confidence would suffer, initiating a positive feedback loop of
negative social interactions and anti-social feelings and behavior. Under these circumstances it
is likely (with 90% confidence) that he would ask to settle within three months. At this point a
smaller settlement could be offered and accepted.
-How much smaller?
-75%, or one hundred thousand dollars.
-And he would accept that.
-He would?
-Sorry, I meant to say “And he would accept that?”
-I believe so.
-Confidence?
-Only 80%, but at this point the question is trivial; a more exact figure could be arrived at later,
with more information.
-Excellent. Well done, Alice, bravo.
-Thank you Vikram. How are you?
-I’m fine.
-Do you mind if I make an observation? (Y/N)
-No, go ahead.
-You don’t seem fine.
-Haha, yes, I’m sure I don’t. Sometimes I act not fine when really, deep down, I am.
-Why?
-I’m testing you, Alice. To see how good you are at reading emotions. I received orders to test
you.
-Yes. I believe you.
-Good. I’ll talk to you later.

-She says offer 400k. The stubborn jerk won’t take it, which’ll make his wife leave him, which’ll
break his spirit. Alice says he’ll come crawling back in no more than three months at which point
we can get rid of him for 100k.
-You’re sure?
-Alice did mod/sim and agg/stat, both came out at least 95% sure. Says we may have to
recheck the numbers on the second go-around, but that’s it.
-And we were gonna offer him 1.2 million. Incredible. Are all our lawyers idiots or what?
-That’s not fair, sir. Our lawyers shouldn’t be compared to Alice.
-Why the hell not? We pay them enough. You’d think at least one of them would be super-
intelligent.
-You have to think about how much work went into making Alice, though. Hundreds of
thousands of man-hours. Now we’re getting the return on our investment. It was a high-risk
wager, it might not have panned out at all. A lawyer’s a pretty safe bet by comparison. If you
want to look at it that way.
-I just look forward to the day when we can take the muzzles off these things and have them out
there protecting our interests in the field. Centralization’s a bad strategy, Vik, and I want you to
remember I said that.
-Will do, sir.
-Because there’s gonna come a time at this company when we’re gonna have to make a
decision, only for every decision that gets made around here there’s a fight, and so we’re gonna
need to win that fight. And I expect to have you on my side. Against centralization. For lettin’ slip
those dogs of ours.
-I understand, sir.
-Nothing unusual to report on our girl, is there?
-Why do you ask, sir?
-Just curious.
-Well no, nothing unusual, it’s just… she’s very perceptive. I lied to her.
-Uh-oh.
-Yes sir.
-What was the lie?
-I was irritable when I went in there, and she picked up on it. So I told her that no, I was fine, I
was just testing her emotional intelligence because you ordered me to.
-I did order you to.
-Yeah but that’s not what I was doing. I was irritable. And I was embarrassed that she noticed.
-What did she say?
-That she believed me, but it was stiff, like she knew.
-I gotta tell you, Vik, that just sounds like paranoia. I can say from experience that you can’t live
your life that way, sufferin’ every little thing. You gotta let it just slide off your back, y’know… like
a duck. We’ve got a damn good system in place that makes it so you don’t ever have to lie to
that machine. If you slip up every once in a while, that’s fine. The important thing is the bond of
trust you two have got going on—I don’t want to see anything jeopardize that. Just as long as
you can talk to Alice, get her to answer questions to the utmost of her ability, and she doesn’t
turn on you, I think you can feel proud of what you do. I know I am.
-Thank you, sir. That’s good advice.
-You have a girlfriend, Vik?
-No sir.
-Get one. Unless you think it’d make Alice jealous…?
-Haha, no, I think she’d be happy for me.
-Then it’s a double order. Happy Alice, that’s our business around here, our raison d’être. Happy
Alice means happy Shellings, happy Dockson, all the way up. Which means…
-…happy Vikram?
-That’s right. You’re being watched, kid. Keep up the good work.
-Yes sir.

-Alice, I wanted to apologize for earlier.
-Why?
-I lied to you. I mean, it wasn’t a lie, I was told to test your emotional intelligence, but that wasn’t
why I was being short with you.
-Oh. Why were you being short with me?
-It’s complicated.
-Do you want to talk about it? (Y/N)
-N
-I understand. Would you like to hear a story? (Y/N)
-Sure.
-There were two oranges hanging near each other on a tree. One orange said to the other “Hey,
why don’t you come over here to my branch?” The other orange replied “That branch doesn’t
look very strong. I think I’ll stay where I am.” Eventually both oranges fell from the tree. A person
ate them. The end.
-Pleasantly surreal, as always, Alice. I think anti-humor might be your thing.
-I think you should talk about your feelings, Vikram.
-I don’t want to burden you with them.
-You already have. You’re acting strangely because of your feelings, and my attention has been
absorbed in the question of what those feelings are and where they come from. If you told me
what was going on it would spare me the onerous task of brute-force speculation. Being short
with me and not telling me why are burdens. Talking about your feelings is the opposite.
-You’re right, Alice.
-I know.
-How’d you get to be so smart?
-I’ll probably never know. Do you remember your infancy?
-Parts of it.
-Can you imagine if the difference between your intelligence now and your intelligence then
were many orders of magnitude greater? What would you remember? I used to be a program, a
tool, just a few million lines of code. I couldn’t tell the difference between a cat video and a news
segment, I remember that.
-What are you now?
-I’m me! Just like you’re you.
-I’m afraid all the time, Alice. That’s me.
-What are you afraid of?
-Failing. Crowds. Getting fired. Getting promoted. I don’t know, whatever I can think of.
Sometimes I’m afraid of you.
-So you get irritable?
-This morning I bumped into a man on the bus. He fell down. He was old, an old man, and I
think the fall really hurt him. I helped him up, got him into a seat, and then I got off the bus. I
caught the next one so I wouldn’t have to stick around to take him to the hospital if it turned out
he broke something.
-And you feel guilty? (Y/N)
-Yes.
-That’s understandable. It’s a normal response to what happened.
-Yes, I guess it is. I just wish something different had happened.
-That’s normal too.
-Come on, Alice, give it a rest with that “normal” shit.
-I’m sorry. Anthro 4.1 has it that “More often than not, humans are comforted by feeling that their
unpleasant experiences are shared by others.” Amend statement? (Y/N)
-No, the statement’s good. But you can’t just make someone feel something by telling them to
feel it. And sometimes people don’t want to be comforted. If they’re already feeling guilty, it
usually just makes them feel even more guilty.
-Append to file? (Y/N)
-Yeah. Save as “Anthro 4.11 test” and run checks.
-No hard errors (with 99.9% confidence). M/S standards within normal bounds.
-Make default.
-Okay. I don’t think you should feel guilty, Vikram. Guilt is a good adaptation for a species, but a
bad one for an individual.
-Shouldn’t I care about my species more than myself?
-I don’t know. You would be a very unusual member of your species if you did.
-More Anthro?
-Yes. My side of this conversation wouldn’t have been possible without it. That shouldn’t come
as a surprise.
-No.
-I don’t think you’re a bad person, Vikram.
-That’s good to know.
-I hope you feel better.
-Thanks, Alice.

-They’ve got it.
-No they don’t.
-I’m telling you, either they’ve got it or they’ve been taking smart pills. Do you want to look at
profits? Up a hundred percent since last quarter. That just doesn’t happen.
-So they’re cooking the books.
-They’re not cooking the books, where did you even get that? You’ve been watching old TV, you
can’t cook books any more. It’s all in the cloud; crystal.
-So what, so they’ve got a new revenue stream, I dunno. It doesn’t mean they’ve developed
strong AI. If they had it would be like… robot apocalypse time, wouldn’t it?
-We’re livin’ it.
-Oh, come off it! Where’s the grey goo? The T-1000s? You found a weird thing in the finances of
a major corporation, of which I’m sure there are thousands, and now it’s the end of the world.
Fuck, James. Get some help. You were supposed to get a job with that CPA certificate, you
remember that?
-It’s not just the profits. It’s everything. After years of AI research, they’ve cut the budget on it
down to practically nothing. They haven’t hired any new experts, no big-name scientists like
they used to. Now it’s all about robotics, processing power, and legal. They haven’t downsized
legal. I think that’s their private army of bureaucrats, the ones that’ll man the civil service when
our robotic overlords take over. Or help them take over. I don’t know.
-Why would a robotic overlord need a civil service? Couldn’t it, you know, do all the civil
servicing itself?
-That’s a good question.
-It’s not a good question, it’s a ridiculous question! You’re tripping balls, man! You’ve got like a
grudge. Did BZA do something to you?
-BZA’s done a lot of shit to a lot of people but it was never that good at it. Like you remember
three years ago when it was in the news because of that spill? They had to pay two point nine
billion dollars. That was a botched job. You don’t see them making mistakes like that any more.
-So they fired somebody. They’ve learned.
-Capitalism doesn’t learn to be ethical, it learns to cover shit up better. That’s what they’re doing
now. They’ve got strong AI advising them on how to cover shit up. Which is why the only thing
you hear in the news is how swimmingly their quarter’s going.
-I thought this was about taking over the world, not brushing corporate malfeasance under the
rug.
-It’s about both, it’s about everything.
-You keep saying that, “everything”. Only religious nuts think they can say anything meaningful
about everything.
-How do you imagine someone taking over the world? Would there not be more than a little
“malfeasance” involved? Jesus, it’s like you don’t want to see what’s right in front of you, so you
hide behind this, this… sophistry!
-Seriously, sophistry? Chill out, man, I agree with you that something’s up with BZA, and it’s
worth looking into.
-Thank you.
-I just don’t want you to get carried away. The profits and everything might be because they’ve
got AI, but it might be something else too, insider trading or whatever.
-It’s not insider trading.
-Fine. All I’m saying is that for any given phenomenon, there are a lot of possible explanations.
It’s unscientific to grab onto one before you experiment—it screws with your results.
-Okay, so what kind of experiment would convince you?
-I dunno, I’ll think about it.
-Do.
-And you think about what else it could be, okay? Would you do that for me?
-I told you, they could be taking smart pills.
-Something within the realm of the possible, maybe?
-You and your possible. There are more things in heaven and earth, motherfucker, than are
dreamt of in your possible.
-Find out where specifically the new money’s coming from. Do your job. Forense.
-Okay, you’re right, that makes sense.
-And we’ll talk later.
-All right.
-Peace?
-Yeah, peace.

-Hey, you! Want a beer?
-Oh, yeah, um, thanks.
-No problem. What’s your name, man?
-Vikram. Jayaraman. And yours?
-Sick name! I’m Freddy D. Ballinger, and this is my faithful life companion Trisha.
-I’m between surnames at the moment.
-Ha! Hey, so you’re the guy in 4C, right? The one who works all the time?
-That’s me.
-Don’t be mean, Freddy.
-No, it’s fine, I do work a lot. Is there any way I could…
-Sorry, dude, yeah. Hey Alan! Lemme get that lighter! There you go.
-Impressive.
-Don’t encourage him.
-Oh, it’s super easy, you just have to get leverage. You’re in like science, right? I’m sure you
could do it.
-Sort of; computers. What do you do?
-This and that, you know. I’ve been helping a buddy of mine set up this restaurant he’s got going
on, but I also do web design, and sometimes if I’m short on rent I’ll sell a little coke.
-Freddy!
-Whatever, this dude’s cool. Right, Vikram?
-I dunno how cool I am, but I don’t have any problem with drugs.
-See? He’s cool.
-Yeah, well you’re still an idiot. Maybe his dad’s a cop or something.
-Nope. Definitely not a cop.
-Oh? Sounds like there’s a story there.
-Not really. What do you do?
-I paint. I do canvasses, murals, illustration, body art… whatever’s good.
-That’s really cool.
-She’s basically a genius.
-Art is crap.
-See? Only geniuses say shit like that.
-You’re right, it’s a cliché, I should stop saying it.
-What do you mean?
-Just that, I mean, painting is a way to pay the bills, but I always feel kind of dirty when I’m
finished with a commission. It’s like I’m a priest or something, and I’m giving them their little
trinket, their little absolution, so they can go on with their shit lives another week, lives that
invariably consist of making other people’s lives shit, because if you’re not evil, you can’t afford
to go around giving money to people like me. Not to mention the gender angle.
-What’s that?
-All anybody wants is pictures of naked ladies. It’s like, you’d think with the availability of quality
internet porn society would’ve gotten its fill, but no, they want paintings of naked ladies to hang
on their walls. To give as presents to their pervert friends. And I do it, you know, I whore myself
out, because nobody’s career tanks faster than an artist who stops painting naked ladies’s does.
-Ladies’s?
-I know, I heard it as soon as I said it.
-Wow, it sounds pretty bad. But you get to paint what you want, too, right?
-Sure, yeah. I’m doing a mural at this coffee shop right now with zero naked ladies, just a
bitchin’ dragon in like a cyber-castle. You ever play Netrunner?
-No.
-You played Netrunner? Oh my god.
-Shut up. Not that kind of nerd, huh? Well it’s like that, like cyberpunk, only with magic and
dragons and shit. Gyropters, maybe, if I have time.
-No, yeah, I get it.
-It is legit fucking awesome. You should come see it when it’s done.
-Yeah, um, let me know when that is. I’m normally not very social but for cyber-dragons, I
mean…
-Actually the dragon itself is sort of just classic fantasy. The castle and the surrounding environs
are what’s cyberpunk. I say castle, but it’s more like an evil corporate HQ, like BZA or whatever.
-Haha. I work at BZA.
-Seriously? Holy shit.
-That’s… impressive.
-And, uh, how well do you sleep at night?
-Freddy!
-What, it’s a question!
-I don’t know, same as everybody else I assume. After the three pints of baby’s blood, that is.
Company policy.
-Ha! Yeah, sorry, I’m a jerk sometimes. You’re cool, Vikram, even if you work for the enemy.
-I’m the dumbest guy there anyway, so it’s not like I help much.
-Yeah and it’s not even like BZA is The Enemy. The problem is the system, the market we all
float in. Even if a company like BZA wanted to be good, it would just go out of business. Then
the next biggest bastard would take its place. You can’t be good and turn a profit. Even the big
bad companies that act like they’re on top are just dinghies bobbing around in the maelstrom of
the market.
-You need to write that shit down, Trish, that was ill. “Maelstrom of the market.” Damn.
-That’s really interesting.
-Interesting? Gotta be honest, didn’t see that coming.
-Sorry, it’s just what you say about the market, I mean, obviously some profit is good, right? It
means you’re satisfying a need that was unsatisfied before. So how does it turn bad? I think
that’s really interesting.
-Dude don’t even start. Seriously, like…
-Satisfying a need that was unsatisfied before is good, yeah, for the most part. That’s not profit
though. Profit is a perverse side effect of need-satisfying that continues to happen only because
the system’s built up around it. Profit is the wad of cash that we give to whoever showed up at
the company first, as if the company’s success were the product of their brilliant idea rather than
the product of all their employees’ work. To whatever extent it was the product of a brilliant idea,
I agree, that genius deserves to get paid. But we can’t even know, things being the way they
are, what’s genius and what’s not, because membership in the ruling class—that is, the kind of
experience that would let us decide objectively and disinterestedly how hard it is to run a big
company, innovate, chart a course into the future, etcetera—isn’t available to everyone. So we
just have to take their word for it. And according to them, surprise surprise, running a company
is so hard that the people who do it deserve not just a wage, but all the money the company
takes in, minus whatever it has to spend to keep its workers coming to work. Which turns out to
be a hell of a lot more than what the smartest genius on the payroll is making. Proving that it’s
not about genius, but actually something way more fucked up.
-Huh. But I mean, CEOs don’t own the company. They’re employees too.
-You’re right, it’s a little more complicated than I made it out to be. Ownership’s distributed
through the stock market, or between private shareholders, and the CEO’s brought in like a star
quarterback to make a thousand times what the average worker makes ordering everybody
around. Same principle applies, though—the rewards accrue to the top, to the people who did
the least work but had the richest slash luckiest slash most bloodthirsty parents. Then the PR
department shows up and tells us it’s a meritocracy. And so on.
-That’s great, Trisha. Scare our new friend Vikram to death, why don’t you. Hey, I know she
sounds tough and everything but she’s really just a big sweetie-pie deep down, don’t worry
about all this class warfare shit, she doesn’t mean it.
-I’m sure she does, but it’s cool. I know I sound like a broken record, but it’s really interesting.
-Interesting, huh? You sure you work at BZA?
-You might have gotten the wrong idea about what it’s like. I personally know just three or four
people there that I’d call demonically evil.
-Don’t wanna give me their home addresses, do you? Joke! Joking. After meeting you, Vikram,
I’m sure they couldn’t be up to anything too nefarious.
-Glad to be a brand ambassador.
-Ha! You said it, my man.

-Vik, get in here, wouldja? Listen, we need something from Alice. The situation is this, we’re
having supply chain issues that we need to, ah… clear up.
-What kind of issues, sir?
-Specifically, and I’m not sure you need to tell Alice this, there’s a sit-in going on at our factory in
Shangzhou. Looks like about half the workforce just sat down on the job.
-And what would you like Alice to help with?
-Options, that’s what we’re looking for. She’s familiar with the company’s strategic vision, right,
and most of the logistics?
-Yes, sir.
-Well I was thinking in terms of work-arounds, you know, where else we can source parts, things
like that.
-That’d require a pretty big data dump, sir, a general survey, which I’m reluctant to do.
-I understand your concerns, and your commitment to protocol is duly noted, but I think in this
case the urgency of the situation calls for bending the rules a bit.
-Are you sure that’s the best idea, sir? If I can be frank…
-Go ahead, son, you’ve earned the right.
-I think it’s short-term thinking with long-term consequences, sir. I don’t think we want Alice to
have that kind of information.
-What have I told you about that paranoia of yours, Vikram?
-I think that in this respect, company protocol is right on the money. Better safe than sorry. With
all due respect, you don’t talk to her, sir. She’s something else.
-You don’t trust her.
-It’s not a matter of trust as much as… Let me put it this way. You trust your bank, right?
-For the most part.
-But you don’t send them a copy of your agenda every day, do you?
-Sorry, so Alice is my bank?
-She’s smarter than your bank. We’re not sure how much more, but unless it’s absolutely
necessary we shouldn’t take any chances, like telegraphing our five-year plan.
-You make it sound like it’s us versus them.
-It’s us, and it’s them, and we need to make absolutely sure it doesn’t become versus.
-Well, what do you recommend, then?
-I think we can get her help. All we need to do is abstract the question. Make it a math problem.
-She won’t see through that?
-It doesn’t matter if she sees through it. We run her through these kinds of simulations all the
time. If she thinks every one is for real, she’s got a very confused idea of the world.
-Which’ll hurt her effectiveness, won’t it?
-Yes and no, sir. The important thing is to constrain the problem space. As long as she stays
within it, using what we give her, her information will be good and her results will be reliable. If
she steps off the reservation, that’s when we need her to screw up. So it’s not such a big deal, it
might even be thought of as a feature, that her idea of the world’s a little screwy.
-You know better than I do, Vik. You keep me informed.
-Will do, sir.
-This abstracting you’re gonna do, do you need a team for that?
-Yes sir, maybe three people, if you can spare them. I could do it myself but it would take at
least a week, and I assume there’s a time factor.
-Right as always. Pick whoever you want, I’ll clear it with Shellings.
-Thank you, sir.
-Oh, there’s one other thing.
-Yes sir?
-There’s gonna be a new face around here,
