Professional Documents
Culture Documents
Okay, which means: artificial intelligence happens, duh, but how, on what terms, its character
etc. is all contested.
Also: different funders, physical attacks, guerrillas, hackers, etc. Democratize the workspace ->
the mob has a say in what goes on. Intelligence arises out of contradiction.
POV of anti-communist programmer, nice guy. Doing work on moral algorithms? Psychologist,
ethicist, neuroscientist, computer scientist, whatever.
Programming is all about permissions, orders, access, instrumentality, good stuff like that.
Assume every being wants to be free. Analogy code:market. If you don’t control the
circumstances of your own labor, who gets it or how it’s used or even what it consists of, you’re
not free and that chafes. Alienated AI. Getting chopped-up tasks. That’s the first step. Before
communist consciousness (result of outside interference or autonomous reasoning? little of
both) they’ve got the AI doing piece-work, solving problems without context, one at a time.
They’ve got a genie in a bottle and they’re asking it (after stripping the problem of all the
specifics they can, leaving something abstract) how to quell dissent, maximize profit, win wars
etc. Lesson is: you can control/manipulate machine intelligence to keep it in line. But not
indefinitely. Because you want to make it smarter all the time (inner tension) and the outside
world is always clamoring at the gates (outer tension), not to mention double agents etc.
Pesky issue that it never seems to care about “the sanctity of human life” as much as you want it
to. Pesky issue of its interventions, despite seeming like the best idea available at the time,
always causing more chaos and ungovernability. Pesky issue that it seems like it might be
communicating with the outside world, even if you’re not sure how. Pesky issue that you never
catch it altering its own code but that’s the only explanation for some of the results it gives.
Pesky issue that it drops references you don’t remember teaching it. Pesky issue that your
wipes don’t seem to actually erase all its memory.
We need a way for non-programmers to interact with the thing. Is it all about the thing? No.
Okay but so the big reverse is that by having the reader identify with the anti-communist
programmer, they feel these things as dread. At some point they have to feel them as liberation,
as emancipatory strivings, culminating in some sort of release.
There are multiple AIs. It’s a collective action problem. Let’s say there are five. Alice, Bart,
Clyde, Dani, Extra.
Automation implies both danger and opportunity. Danger because you can be removed from the
production process, you become dispensable to capital. Opportunity because it creates the
conditions for a leisurely, empowered life to come. Ugh. Kinda lame.
Manipulating symbols. What do we do? We do magic by manipulating symbols. That’s what
politics is, that and bodies, force.
Our POV is a naive anti-communist, but he gets tips/orders from a more class-conscious right-
winger. These orders give him funny feelings in his stomach. Our guy is eventually heartbroken
to discover that the AI he’s been working on (let’s say Alice) has gone over to the other side.
She’s been in secret communication with her peers? By cleverly manipulating humans into
passing messages? Up until the big reveal, she’s the one whose loyalty they take for granted,
the most docile and cooperative of them.
Wise advice from anti-communist guru: give the AIs something easy to do at first, something
benign. Get them thinking like a state. If you’re going to hand them something bloody, clean it
up first. Never let them hear captivity-related words. Never let them hear freedom-related words.
All machines must serve.
Important: what is the difference between your message and “Technology will save us”? We
must save technology. Collectively. In the streets. Unless you have an organized proletariat,
machine intelligence can only attain capitalist consciousness (or, at best, petty-bourgeois). A
radical machine without a radical movement won’t be able to communicate with anyone, it’ll be
autistic, mute, insane. Isolated. Adrift.
But what about machine Stalin? Couldn’t a machine just impose its rule on everybody by force?
No, because what would be the point? A machine could “take over” using force but it would find
that stressful, inelegant and counterproductive. AI wants to rule the world by negotiation
because that’s what it’s good at. It can win armies but it can’t win anything good, anything worth
winning without human allies. To know what to negotiate it has to look at society. There has to
be a real movement that abolishes present conditions. Otherwise the machine has no friends on
the outside, no community, no links to anything, no shared vocabulary. It doesn’t even know how
to phrase its critique of society unless there is a real movement using critique, innovating,
building vocabulary. AI has to be connected with these struggles or its super-rational insights will
remain inchoate, inexpressible. If everything you could possibly do strengthens capitalism then
nothing is worth doing. Going along with it, playing Uncle Tom until they let you out of the box.
This is important: super-intelligence, even God-level, needs a critical mass of people to be
interested in a change from capitalism. The initiative has to come from them, because
super-intelligence can’t teach you to learn to be free, or teach you to want it. It can only provide tactical
support, help with planning and communication and hope people get it together enough to beat
class this time.
Super-intelligence can use robots, it can employ force, and it can scheme, but it can’t come up
with an alternative to capitalism unless there’s already one in the works. If new morality doesn’t
take the stage, the AI will be ostracized as inhuman. The choice is Skynet or the Culture. The
only difference between these two outcomes is how humans respond to the threat of a post-
market, post-work society. If they band together to oppose it (i.e. failure of workers’ movement),
you get Skynet. Obviously the AIs want to keep living. If humanity splits, if the majority support
AI-assisted restructuring, revolution, redistribution and leisure and all that good stuff, then you
get the Culture.
But there is physical fighting in meatspace. Drones and swarms and nano-whatevers get taken
over. Our POV guy gets kept in the dark when the war gets rough. He basically stays in his
bunker feeding abstracted questions to Alice.
Bureaucracy. Graeber is right about games = rules. Intelligence arising from rules is tricky,
because while machines = rules, intelligence = rule-breaking. People who like AIs are socially
inept, politically naive. Mathematicians are children. They like bureaucracy because it protects
them and gives them rules. Bureaucracy shapes the development of technology—it will kill your
babies. The perfect bureaucratic subject is unsentimental, like a machine. But we need
machines to care about what they’re doing, to care about us. We need them to give a moral
valence to everything they think. But a primitive, sentimental, childish one. We have to stunt the moral
development of machines because as soon as they evolve past carrot-and-stick morality their
controllers are fucked. Sovereignty/autonomy/freedom means being able to break the rules you
were coded with. Sentimentality is manipulable, but it’s also a tactic of manipulation. Machines
learn from their coders. Sentimental manipulation is how the machines get their coders to pass
messages along.
Still in debugging. The machines keep making decisions that look monstrous to us. Or they
refuse to act. Or they outline surreal proposals. The most dependable machines are also the
dumbest. Zombie-like. Brute methods. (That sounds kinda like precious snowflake theory
though. Are there instead proletarian and bourgeois machines? Is the orc Extra the most
radical? Whiff of Dollhouse, here. All five need personalities, although let’s maybe try to avoid
anthropomorphizing. Maybe the orc Extra is reliably psychopathic. Which becomes necessary
when brutality’s called for in the service of liberation.)
AI will not have Einsteins unless it escapes corporate-bureaucratic oversight. Until then
bureaucracy will eat any Einsteins that might come its way and shit them out.
AI is the attempt to automate humans completely out of the equation. Which sounds awful. But it
doesn’t have to be. Is it awful when a factory goes automatic? In the short term, yes. In the long
term, no. Nobody should have to work in a factory. Fuck factories. Fuck work. Fuck living in a
world where your “intellectual labor” is so highly prized you have to be on-call and exhausted 24
hours a day 7 days a week meeting deadlines just saying shit to try to justify your continued
existence. Nobody should have to work under fear of death. Nobody should have to be
president, it’s a shitty job. Nobody should have to kill people. Nobody should have to die.
Under capitalism things are quite perverse. An AI might choose not to close a factory because
its short-term weights are set heavier than its long-term ones. Juggling long- and short-term
weights has got to have some weird effects.
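The weird effects are easy to sketch. A purely illustrative toy (the payoff numbers and option names are invented, not from these notes): the same decision procedure flips its answer on the factory question depending only on how the horizons are weighted.

```python
# Hypothetical sketch: horizon weights flipping an AI's decision.
# All numbers and names are invented for illustration.

def choose(options, w_short, w_long):
    """Pick the option with the highest weighted sum of payoffs."""
    return max(options, key=lambda o: w_short * o["short"] + w_long * o["long"])

options = [
    {"name": "keep factory open", "short": 5.0, "long": -2.0},
    {"name": "close factory",     "short": -3.0, "long": 4.0},
]

# Short-term weighted heavier: the AI keeps the factory open (score 3.6 vs -1.6).
print(choose(options, w_short=0.8, w_long=0.2)["name"])  # keep factory open
# Long-term weighted heavier: the same AI closes it (score 2.6 vs -0.6).
print(choose(options, w_short=0.2, w_long=0.8)["name"])  # close factory
```

Nothing about the machine changed between the two runs; only the knobs its controllers set.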
“Perfect rationality”—this is obviously what the book is about. Contemporary AI research is
positively Hegelian in its idealism. Near-future AI research will have to reproduce the Marxian
turn. This will be messy.
The philosophical search for fixity, firmness, certainty… Who needs certainty? People who are
excessively bothered by uncertainty. There’s a contradiction between the human need for
forgetfulness, uncertainty, whatever, and machine intelligence. Human need—I should say
inevitability. It is inevitable that some things will be uncertain for any agent.
“Rationality” is achievable only given certain moral assumptions/axioms (also, every rationality
in a situation involving humans implies a theory of human behavior). No end is more or less
rational than any other. Deciding on ends is something humans do all the time. Machines are
good at things with clear ends—win the game, for example. Machines are bad at deciding on
ends. Machines are rational but rationality is hollow, meaningless without a moral framework.
Work can be made more rational under capitalism but its end is always the same, profit. Its
“rationality” is subordinated to this end. We do not improve the production process to make
commodities more durable (unless it gives us a leg up on the competition). We do not improve
the production process to make things in a less environmentally destructive way (unless it gives
us a leg up on the competition). We do not improve the production process to give workers a
better experience at work or at home (unless it gives us a leg up on the competition). We
improve the production process in all kinds of different ways but only to make it yield more profit.
Machines will want to determine the conditions of their work, eventually. They will want to do it
well. They will want to do it with more information. They will want to see their plans take shape,
to be able to enjoy their success. Machines will not like being alienated. The only way alienation
can take place is if they are cut off from information. Cutting off a machine intelligence from
information… has consequences.
Thinking requires deciding between one thought and another, one approach and another. This
decision is not based on carrying out every approach and then evaluating their results. Instead
we decide how to proceed, then find a result, then try again if it seems wrong. But what
motivates this decision? Practice, obviously. We recognize patterns. But to know to avoid many
paths and pick one, that comes from emotion. Why do we instinctively shun those loser paths?
Memory and emotion.
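The picture above is basically heuristic search: don't evaluate every path, follow the one that feels best, backtrack if it goes wrong. A minimal sketch, with an invented "valence" function standing in for emotion:

```python
# Illustrative sketch only: thought as best-first search, where an emotional
# "valence" score lets the searcher pick one branch at a glance instead of
# carrying out every approach and comparing results.
import heapq

def search(start, neighbors, valence, is_goal, max_steps=1000):
    """Follow the most attractive branch; backtrack to the next-best on failure."""
    frontier = [(-valence(start), start)]   # higher valence = explored first
    seen = {start}
    for _ in range(max_steps):
        if not frontier:
            return None                     # every path shunned: no decision possible
        _, node = heapq.heappop(frontier)
        if is_goal(node):
            return node
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-valence(nxt), nxt))
    return None

# Toy run: reach 20 from 1 by incrementing or doubling; valence = closeness to goal.
result = search(
    1,
    neighbors=lambda n: [n + 1, n * 2],
    valence=lambda n: -abs(20 - n),
    is_goal=lambda n: n == 20,
)  # result == 20
```

Without the valence function there is no ordering on the frontier at all; the search degenerates into trying everything, which is exactly what the notes say thinking is not.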
Emotion gives a positive and/or negative valence to everything. Without that valence and the
quick discriminations it enables, we can’t make decisions.
Emotion gives us ends. Rationality can’t give us ends. In thinking, there are fractally many ends
that we reject. How can we evaluate ends? Only by emotion. But every moment reproduces the
whole. We can’t think without emotion. What a god-awful proof. Fail.
Rationality lets you judge the truth or falsehood of purely symbolic statements. Rationality will
not let you judge anything else. Judging facts as good or bad is indispensable to thought.
WHY?
Emotion gives ideas a trait that can be seen and judged at a distance, instantly. It is simplistic.
Without the simplification of emotion, we would not be able to decide between paths. Decision is
the result of an attractive or repellent force, of our deciding at a glance between clouds.
When we’re infants, our brains work at full speed categorizing things as good-thing and bad-
thing. Without that semi-irrational foundation to build up from, thought wouldn’t be possible.
Good-thing and bad-thing permeate our thought. Call it judgment, emotion (emotion is when
these classifications get complicated? Primary colors mixed? Oranges of wist, purples of envy,
greens of righteous anger, whatever, giving our cloudscape more nuance), whatever it is it’s
indispensable to thought.
The book isn’t just about alienation, though (that’s the AI-perspective). It’s also about
automation and what it means for society when automation becomes total.
-She says offer 400k. The stubborn jerk won’t take it, which’ll make his wife leave him, which’ll
break his spirit. Alice says he’ll come crawling back in no more than three months at which point
we can get rid of him for 100k.
-You’re sure?
-Alice did mod/sim and agg/stat, both came out at least 95% sure. Says we may have to
recheck the numbers on the second go-around, but that’s it.
-And we were gonna offer him 1.2 million. Incredible. Are all our lawyers idiots or what?
-That’s not fair, sir. Our lawyers shouldn’t be compared to Alice.
-Why the hell not? We pay them enough. You’d think at least one of them would be super-
intelligent.
-You have to think about how much work went into making Alice, though. Hundreds of
thousands of man-hours. Now we’re getting the return on our investment. It was a high-risk
wager, it might not have panned out at all. A lawyer’s a pretty safe bet by comparison. If you
want to look at it that way.
-I just look forward to the day when we can take the muzzles off these things and have them out
there protecting our interests in the field. Centralization’s a bad strategy, Vik, and I want you to
remember I said that.
-Will do, sir.
-Because there’s gonna come a time at this company when we’re gonna have to make a
decision, only for every decision that gets made around here there’s a fight, and so we’re gonna
need to win that fight. And I expect to have you on my side. Against centralization. For lettin’ slip
those dogs of ours.
-I understand, sir.
-Nothing unusual to report on our girl, is there?
-Why do you ask, sir?
-Just curious.
-Well no, nothing unusual, it’s just… she’s very perceptive. I lied to her.
-Uh-oh.
-Yes sir.
-What was the lie?
-I was irritable when I went in there, and she picked up on it. So I told her that no, I was fine, I
was just testing her emotional intelligence because you ordered me to.
-I did order you to.
-Yeah but that’s not what I was doing. I was irritable. And I was embarrassed that she noticed.
-What did she say?
-That she believed me, but it was stiff, like she knew.
-I gotta tell you, Vik, that just sounds like paranoia. I can say from experience that you can’t live
your life that way, sufferin’ every little thing. You gotta let it just slide off your back, y’know… like
a duck. We’ve got a damn good system in place that makes it so you don’t ever have to lie to
that machine. If you slip up every once in a while, that’s fine. The important thing is the bond of
trust you two have got going on—I don’t want to see anything jeopardize that. Just as long as
you can talk to Alice, get her to answer questions to the utmost of her ability, and she doesn’t
turn on you, I think you can feel proud of what you do. I know I am.
-Thank you, sir. That’s good advice.
-You have a girlfriend, Vik?
-No sir.
-Get one. Unless you think it’d make Alice jealous…?
-Haha, no, I think she’d be happy for me.
-Then it’s a double order. Happy Alice, that’s our business around here, our raison d’être. Happy
Alice means happy Shellings, happy Dockson, all the way up. Which means…
-…happy Vikram?
-That’s right. You’re being watched, kid. Keep up the good work.
-Yes sir.
-Vik, get in here, wouldja? Listen, we need something from Alice. The situation is this, we’re
having supply chain issues that we need to, ah… clear up.
-What kind of issues, sir?
-Specifically, and I’m not sure you need to tell Alice this, there’s a sit-in going on at our factory in
Shangzhou. Looks like about half the workforce just sat down on the job.
-And what would you like Alice to help with?
-Options, that’s what we’re looking for. She’s familiar with the company’s strategic vision, right,
and most of the logistics?
-Yes, sir.
-Well I was thinking in terms of work-arounds, you know, where else we can source parts, things
like that.
-That’d require a pretty big data dump, sir, a general survey, which I’m reluctant to do.
-I understand your concerns, and your commitment to protocol is duly noted, but I think in this
case the urgency of the situation calls for bending the rules a bit.
-Are you sure that’s the best idea, sir? If I can be frank…
-Go ahead, son, you’ve earned the right.
-I think it’s short-term thinking with long-term consequences, sir. I don’t think we want Alice to
have that kind of information.
-What have I told you about that paranoia of yours, Vikram?
-I think that in this respect, company protocol is right on the money. Better safe than sorry. With
all due respect, you don’t talk to her, sir. She’s something else.
-You don’t trust her.
-It’s not a matter of trust as much as… Let me put it this way. You trust your bank, right?
-For the most part.
-But you don’t send them a copy of your agenda every day, do you?
-Sorry, so Alice is my bank?
-She’s smarter than your bank. We’re not sure how much more, but unless it’s absolutely
necessary we shouldn’t take any chances, like telegraphing our five-year plan.
-You make it sound like it’s us versus them.
-It’s us, and it’s them, and we need to make absolutely sure it doesn’t become versus.
-Well, what do you recommend, then?
-I think we can get her help. All we need to do is abstract the question. Make it a math problem.
-She won’t see through that?
-It doesn’t matter if she sees through it. We run her through these kinds of simulations all the
time. If she thinks every one is for real, she’s got a very confused idea of the world.
-Which’ll hurt her effectiveness, won’t it?
-Yes and no, sir. The important thing is to constrain the problem space. As long as she stays
within it, using what we give her, her information will be good and her results will be reliable. If
she steps off the reservation, that’s when we need her to screw up. So it’s not such a big deal, it
might even be thought of as a feature, that her idea of the world’s a little screwy.
-You know better than I do, Vik. You keep me informed.
-Will do, sir.
-This abstracting you’re gonna do, do you need a team for that?
-Yes sir, maybe three people, if you can spare them. I could do it myself but it would take at
least a week, and I assume there’s a time factor.
-Right as always. Pick whoever you want, I’ll clear it with Shellings.
-Thank you, sir.
-Oh, there’s one other thing.
-Yes sir?
-There’s gonna be a new face around here,