
How to Fix Facebook—Before It Fixes Us

An early investor explains why the social media platform’s business model is such
a threat—and what to do about it.

by Roger McNamee
January/February/March 2018
https://washingtonmonthly.com/magazine/january-february-march-2018/how-to-fix-facebook-before-it-fixes-us/


In early 2006, I got a call from Chris Kelly, then the chief privacy officer at Facebook, asking if I would be willing to meet with his boss, Mark Zuckerberg. I had
been a technology investor for more than two decades, but the meeting was unlike any I
had ever had. Mark was only twenty-two. He was facing a difficult decision, Chris said,
and wanted advice from an experienced person with no stake in the outcome.

When we met, I began by letting Mark know the perspective I was coming from. Soon,
I predicted, he would get a billion-dollar offer to buy Facebook from either Microsoft or
Yahoo, and everyone, from the company’s board to the executive staff to Mark’s
parents, would advise him to take it. I told Mark that he should turn down any
acquisition offer. He had an opportunity to create a uniquely great company if he
remained true to his vision. At two years old, Facebook was still years away from its
first dollar of profit. It was still mostly limited to students and lacked most of the
features we take for granted today. But I was convinced that Mark had created a game-
changing platform that would eventually be bigger than Google was at the time.
Facebook wasn’t the first social network, but it was the first to combine true identity
with scalable technology. I told Mark the market was much bigger than just young
people; the real value would come when busy adults, parents and grandparents, joined
the network and used it to keep in touch with people they didn’t get to see often.

My little speech only took a few minutes. What ensued was the most painful silence of
my professional career. It felt like an hour. Finally, Mark revealed why he had asked to
meet with me: Yahoo had made that billion-dollar offer, and everyone was telling him to
take it.

It only took a few minutes to help him figure out how to get out of the deal. So began a
three-year mentoring relationship. In 2007, Mark offered me a choice between investing
or joining the board of Facebook. As a professional investor, I chose the former. We
spoke often about a range of issues, culminating in my suggestion that he hire Sheryl
Sandberg as chief operating officer, and then my help in recruiting her. (Sheryl had
introduced me to Bono in 2000; a few years later, he and I formed Elevation Partners, a
private equity firm.) My role as a mentor ended prior to the Facebook IPO, when board
members like Marc Andreessen and Peter Thiel took on that role.

In my thirty-five-year career in technology investing, I have never made a bigger
contribution to a company’s success than I made at Facebook. It was my proudest
accomplishment. I admired Mark and Sheryl enormously. Not surprisingly, Facebook
became my favorite app. I checked it constantly, and I became an expert in using the
platform by marketing my rock band, Moonalice, through a Facebook page. As the
administrator of that page, I learned to maximize the organic reach of my posts and use
small amounts of advertising dollars to extend and target that reach. It required an
ability to adapt, because Facebook kept changing the rules. By successfully adapting to
each change, we made our page among the highest-engagement fan pages on the
platform.

My familiarity with building organic engagement put me in a position to notice that something strange was going on in February 2016. The Democratic primary was getting
under way in New Hampshire, and I started to notice a flood of viciously misogynistic
anti-Clinton memes originating from Facebook groups supporting Bernie Sanders. I
knew how to build engagement organically on Facebook. This was not organic. It
appeared to be well organized, with an advertising budget. But surely the Sanders
campaign wasn’t stupid enough to be pushing the memes themselves. I didn’t know
what was going on, but I worried that Facebook was being used in ways that the
founders did not intend.

A month later I noticed an unrelated but equally disturbing news item. A consulting firm
was revealed to be scraping data about people interested in the Black Lives Matter
protest movement and selling it to police departments. Only after that news came out
did Facebook announce that it would cut off the company’s access to the information.
That got my attention. Here was a bad actor violating Facebook’s terms of service,
doing a lot of harm, and then being slapped on the wrist. Facebook wasn’t paying
attention until after the damage was done. I made a note to myself to learn more.

Meanwhile, the flood of anti-Clinton memes continued all spring. I still didn’t
understand what was driving it, except that the memes were viral to a degree that didn’t
seem to be organic. And, as it turned out, something equally strange was happening
across the Atlantic.

When citizens of the United Kingdom voted to leave the European Union in June 2016, most observers were stunned. The polls had predicted a victory for the “Remain”
campaign. And common sense made it hard to believe that Britons would do something
so obviously contrary to their self-interest. But neither common sense nor the polling
data fully accounted for a crucial factor: the new power of social platforms to amplify
negative messages.

Facebook, Google, and other social media platforms make their money from
advertising. As with all ad-supported businesses, that means advertisers are the true
customers, while audience members are the product. Until the past decade, media
platforms were locked into a one-size-fits-all broadcast model. Success with advertisers
depended on producing content that would appeal to the largest possible audience.
Compelling content was essential, because audiences could choose from a variety of
distribution mediums, none of which could expect to hold any individual consumer’s
attention for more than a few hours. TVs weren’t mobile. Computers were mobile, but
awkward. Newspapers and books were mobile and not awkward, but relatively cerebral.
Movie theaters were fun, but inconvenient.

When their business was limited to personal computers, the internet platforms were at a
disadvantage. Their proprietary content couldn’t compete with traditional media, and
their delivery medium, the PC, was generally only usable at a desk. Their one advantage
—a wealth of personal data—was not enough to overcome the disadvantage in content.
As a result, web platforms had to underprice their advertising.

Smartphones changed the advertising game completely. It took only a few years for
billions of people to have an all-purpose content delivery system easily accessible
sixteen hours or more a day. This turned media into a battle to hold users’ attention as
long as possible. And it left Facebook and Google with a prohibitive advantage over
traditional media: with their vast reservoirs of real-time data on two billion individuals,
they could personalize the content seen by every user. That made it much easier to
monopolize user attention on smartphones and made the platforms uniquely attractive to
advertisers. Why pay a newspaper in the hopes of catching the attention of a certain
portion of its audience, when you can pay Facebook to reach exactly those people and
no one else?

Whenever you log into Facebook, there are millions of posts the platform could show
you. The key to its business model is the use of algorithms, driven by individual user
data, to show you stuff you’re more likely to react to. Wikipedia defines an algorithm as
“a set of rules that precisely defines a sequence of operations.” Algorithms appear value
neutral, but the platforms’ algorithms are actually designed with a specific value in
mind: maximum share of attention, which optimizes profits. They do this by sucking up
and analyzing your data, using it to predict what will cause you to react most strongly,
and then giving you more of that.
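
To make the mechanics concrete, here is a deliberately simplified sketch, in Python, of what an attention-maximizing ranker does. The function names, weights, and the "outrage factor" are my own hypothetical constructs; the real systems are machine-learned models over thousands of signals, but the objective is the same: sort by predicted reaction.

```python
# Toy illustration of attention-maximizing feed ranking.
# All names and weights are hypothetical; real systems use
# machine-learned models over thousands of signals.

def predicted_engagement(user_profile, post):
    """Estimate how strongly this user will react to this post."""
    score = 0.0
    for topic, affinity in user_profile.items():
        if topic in post["topics"]:
            # Content matching the user's past reactions scores higher,
            # which is what creates the filter bubble.
            score += affinity
    # Emotionally charged content reliably earns extra engagement.
    score *= 1.0 + post["outrage_factor"]
    return score

def rank_feed(user_profile, candidate_posts):
    """Order the candidate posts by predicted reaction, strongest first."""
    return sorted(candidate_posts,
                  key=lambda p: predicted_engagement(user_profile, p),
                  reverse=True)

user = {"politics": 0.9, "music": 0.4}
posts = [
    {"topics": {"politics"}, "outrage_factor": 0.8},  # angry meme
    {"topics": {"music"}, "outrage_factor": 0.0},     # concert photo
]
print(rank_feed(user, posts))  # the angry political meme ranks first
```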

Algorithms that maximize attention give an advantage to negative messages. People tend to react more to inputs that land low on the brainstem. Fear and anger produce a lot
more engagement and sharing than joy. The result is that the algorithms favor
sensational content over substance. Of course, this has always been true for media;
hence the old news adage “If it bleeds, it leads.” But for mass media, this was
constrained by one-size-fits-all content and by the limitations of delivery platforms. Not
so for internet platforms on smartphones. They have created billions of individual
channels, each of which can be pushed further into negativity and extremism without
the risk of alienating other audience members. To the contrary: the platforms help
people self-segregate into like-minded filter bubbles, reducing the risk of exposure to
challenging ideas.

It took Brexit for me to begin to see the danger of this dynamic. I’m no expert on British
politics, but it seemed likely that Facebook might have had a big impact on the vote
because one side’s message was perfect for the algorithms and the other’s wasn’t. The
“Leave” campaign made an absurd promise—there would be savings from leaving the
European Union that would fund a big improvement in the National Health Service—
while also exploiting xenophobia by casting Brexit as the best way to protect English
culture and jobs from immigrants. It was too-good-to-be-true nonsense mixed with
fearmongering.

Meanwhile, the Remain campaign was making an appeal to reason. Leave’s crude,
emotional message would have been turbocharged by sharing far more than Remain’s. I
did not see it at the time, but the users most likely to respond to Leave’s messages were
probably less wealthy and therefore cheaper for the advertiser to target: the price of
Facebook (and Google) ads is determined by auction, and the cost of targeting more
upscale consumers gets bid up higher by actual businesses trying to sell them things. As
a consequence, Facebook was a much cheaper and more effective platform for Leave in
terms of cost per user reached. And filter bubbles would ensure that people on the Leave
side would rarely have their questionable beliefs challenged. Facebook’s model may
have had the power to reshape an entire continent.
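
The pricing dynamic falls out of basic auction mechanics. A toy second-price auction illustrates it; the numbers below are invented, and real ad auctions also weight bids by relevance and predicted engagement, so treat this strictly as a sketch.

```python
# Toy second-price auction: the winner pays roughly the runner-up's bid.
# Numbers are invented to illustrate the dynamic, not real ad prices.

def second_price(bids):
    """Return (winner, price paid) for a sealed-bid auction."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Upscale audience: many commercial bidders drive the price up.
upscale = {"luxury_retailer": 9.50, "airline": 8.75, "campaign": 2.00}
# Less affluent audience: fewer commercial bidders, so a political
# advertiser wins the same attention cheaply.
downscale = {"campaign": 2.00, "discount_store": 0.60}

print(second_price(upscale))    # ('luxury_retailer', 8.75)
print(second_price(downscale))  # ('campaign', 0.60)
```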

But there was one major element to the story that I was still missing.

Shortly after the Brexit vote, I reached out to journalists to validate my concerns about Facebook. At this point, all I had was a suspicion of two things: bad actors were
exploiting an unguarded platform; and Facebook’s algorithms may have had a decisive
impact on Brexit by favoring negative messages. My Rolodex was a bit dusty, so I
emailed my friends Kara Swisher and Walt Mossberg at Recode, the leading tech
industry news blog. Unfortunately, they didn’t reply. I tried again in August, and nothing
happened.

Meanwhile, the press revealed that the Russians were behind the server hack at the
Democratic National Committee and that Trump’s campaign manager had ties to
Russian oligarchs close to Vladimir Putin. This would turn out to be the missing piece
of my story. As the summer went on, I began noticing more and more examples of
troubling things happening on Facebook that might have been prevented had the
company accepted responsibility for the actions of third parties—such as financial
institutions using Facebook tools to discriminate based on race and religion. In late
September, Walt Mossberg finally responded to my email and suggested I write an op-
ed describing my concerns. I focused entirely on nonpolitical examples of harm, such as
discrimination in housing advertisements, suggesting that Facebook had an obligation to
ensure that its platform not be abused. Like most people, I assumed that Clinton would
win the election, and I didn’t want my concerns to be dismissed as inconsequential if
she did.

My wife recommended that I send what I wrote to Mark Zuckerberg and Sheryl
Sandberg before publishing in Recode. Mark and Sheryl were my friends, and my goal
was to make them aware of the problems so they could fix them. I certainly wasn’t
trying to take down a company in which I still hold equity. I sent them the op-ed on
October 30. They each responded the next day. The gist of their messages was the same:
We appreciate you reaching out; we think you’re misinterpreting the news; we’re doing
great things that you can’t see. Then they connected me to Dan Rose, a longtime
Facebook executive with whom I had an excellent relationship. Dan is a great listener
and a patient man, but he was unwilling to accept that there might be a systemic issue.
Instead, he asserted that Facebook was not a media company, and therefore was not
responsible for the actions of third parties.

In the hope that Facebook would respond to my goodwill with a serious effort to solve
the problems, I told Dan that I would not publish the op-ed. Then came the U.S.
election. The next day, I lost it. I told Dan there was a flaw in Facebook’s business
model. The platform was being exploited by a range of bad actors, including supporters
of extremism, yet management claimed the company was not responsible. Facebook’s
users, I warned, might not always agree. The brand was at risk of becoming toxic. Over
the course of many conversations, I urged Dan to protect the platform and its users.

The last conversation we had was in early February 2017. By then there was increasing
evidence that the Russians had used a variety of methods to interfere in our election. I
formed a simple hypothesis: the Russians likely orchestrated some of the manipulation
on Facebook that I had observed back in 2016. That’s when I started looking for allies.

On April 11, I cohosted a technology-oriented show on Bloomberg TV. One of the guests was Tristan Harris, formerly the design ethicist at Google. Tristan had just
appeared on 60 Minutes to discuss the public health threat from social networks like
Facebook. An expert in persuasive technology, he described the techniques that tech
platforms use to create addiction and the ways they exploit that addiction to increase
profits. He called it “brain hacking.”

The most important tool used by Facebook and Google to hold user attention is filter
bubbles. The use of algorithms to give consumers “what they want” leads to an
unending stream of posts that confirm each user’s existing beliefs. On Facebook, it’s
your news feed, while on Google it’s your individually customized search results. The
result is that everyone sees a different version of the internet tailored to create the
illusion that everyone else agrees with them. Continuous reinforcement of existing
beliefs tends to entrench those beliefs more deeply, while also making them more
extreme and resistant to contrary facts. Facebook takes the concept one step further with
its “groups” feature, which encourages like-minded users to congregate around shared
interests or beliefs. While this ostensibly provides a benefit to users, the larger benefit
goes to advertisers, who can target audiences even more effectively.

After talking to Tristan, I realized that the problems I had been seeing couldn’t be
solved simply by, say, Facebook hiring staff to monitor the content on the site. The
problems were inherent in the attention-based, algorithm-driven business model. And
what I suspected was Russia’s meddling in 2016 was only a prelude to what we’d see in
2018 and beyond. The level of political discourse, already in the gutter, was going to get
even worse.

I asked Tristan if he needed a wingman. We agreed to work together to try to trigger a national conversation about the role of internet platform monopolies in our society,
economy, and politics. We recognized that our effort would likely be quixotic, but the
fact that Tristan had been on 60 Minutes gave us hope.

Our journey began with a trip to New York City in May, where we spoke with journalists and had a meeting at the ACLU. Tristan found an ally in Arianna Huffington,
who introduced him to people like Bill Maher, who invited Tristan to be on his show. A
friend introduced me over email to a congressional staffer who offered to arrange a
meeting with his boss, a key member of one of the intelligence committees. We were
just starting, but we had already found an audience for Tristan’s message.

In July, we went to Washington, D.C., where we met with two members of Congress.
They were interested in Tristan’s public health argument as it applied to two issues:
Russia’s election meddling, and the giant platforms’ growing monopoly power. That
was an eye-opener. If election manipulation and monopoly were what Congress cared
about, we would help them understand how internet platforms related to those issues.
My past experience as a congressional aide, my long career in investing, and my
personal role at Facebook gave me credibility in those meetings, complementing
Tristan’s domain expertise.

With respect to the election meddling, we shared a few hypotheses based on our
knowledge of how Facebook works. We started with a question: Why was Congress
focused exclusively on collusion between Russia and the Trump campaign in 2016? The
Russian interference, we reasoned, probably began long before the presidential election
campaign itself. We hypothesized that those early efforts likely involved amplifying
polarizing issues, such as immigration, white supremacy, gun rights, and secession. (We
already knew that the California secession site had been hosted in Russia.) We
suggested that Trump had been nominated because he alone among Republicans based
his campaign on the kinds of themes the Russians chose for their interference.

We theorized that the Russians had identified a set of users susceptible to their message,
used Facebook’s advertising tools to identify users with similar profiles, and used ads to
persuade those people to join groups dedicated to controversial issues. Facebook’s
algorithms would have favored Trump’s crude message and the anti-Clinton conspiracy
theories that thrilled his supporters, with the likely consequence that Trump and his
backers paid less than Clinton for Facebook advertising per person reached. The ads
were less important, though, than what came next: once users were in groups, the
Russians could have used fake American troll accounts and computerized “bots” to
share incendiary messages and organize events. Trolls and bots impersonating
Americans would have created the illusion of greater support for radical ideas than
actually existed. Real users “like” posts shared by trolls and bots and share them on
their own news feeds, so that small investments in advertising and memes posted to
Facebook groups would reach tens of millions of people. A similar strategy prevailed on
other platforms, including Twitter. Both techniques, bots and trolls, take time and
money to develop—but the payoff would have been huge.
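
Back-of-the-envelope arithmetic shows why such a campaign is cheap. The sketch below uses entirely hypothetical inputs, not figures from any investigation; the point is only that a few rounds of sharing turn a small paid seed into mass reach.

```python
# Hypothetical back-of-the-envelope model of organic amplification:
# a small paid seed audience multiplied by a few waves of sharing.
# All inputs are invented for illustration.

seed_users = 100_000        # users reached by paid ads into groups
share_rate = 0.05           # fraction who share each incendiary post
friends_per_share = 200     # average friends who see a shared post
rounds = 3                  # waves of re-sharing

reach = seed_users
total = seed_users
for _ in range(rounds):
    reach = reach * share_rate * friends_per_share
    total += reach

print(f"{total:,.0f}")  # ~111 million impressions from a small seed
```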

Our final hypothesis was that 2016 was just the beginning. Without immediate and
aggressive action from Washington, bad actors of all kinds would be able to use
Facebook and other platforms to manipulate the American electorate in future elections.

These were just hypotheses, but the people we met in Washington heard us out. Thanks
to the hard work of journalists and investigators, virtually all of these hypotheses would
be confirmed over the ensuing six weeks. Almost every day brought new revelations of
how Facebook, Twitter, Google, and other platforms had been manipulated by the
Russians.

We now know, for instance, that the Russians indeed exploited topics like Black Lives
Matter and white nativism to promote fear and distrust, and that this had the benefit of
laying the groundwork for the most divisive presidential candidate in history, Donald
Trump. The Russians appear to have invested heavily in weakening the candidacy of
Hillary Clinton during the Democratic primary by promoting emotionally charged
content to supporters of Bernie Sanders and Jill Stein, as well as to likely Clinton
supporters who might be discouraged from voting. Once the nominations were set, the
Russians continued to undermine Clinton with social media targeted at likely
Democratic voters. We also have evidence now that Russia used its social media tactics
to manipulate the Brexit vote. A team of researchers reported in November, for instance,
that more than 150,000 Russian-language Twitter accounts posted pro-Leave messages
in the run-up to the referendum.

The week before our return visit to Washington in mid-September, we woke up to some
surprising news. The group that had been helping us in Washington, the Open Markets
team at the think tank New America, had been advocating forcefully for anti-monopoly
regulation of internet platforms, including Google. It turns out that Eric Schmidt, an
executive at Alphabet, Google’s parent company, is a major New America donor. The
think tank cut Open Markets loose. The story line basically read, “Anti-monopoly group
fired by liberal think tank due to pressure from monopolist.” (New America disputes
this interpretation, maintaining that the group was let go because of a lack of collegiality
on the part of its leader, Barry Lynn, who writes often for this magazine.) Getting fired
was the best possible evidence of the need for their work, and funders immediately put
the team back in business as the Open Markets Institute. Tristan and I joined their
advisory board.

Our second trip to Capitol Hill was surreal. This time, we had three jam-packed days of
meetings. Everyone we met was already focused on our issues and looking for guidance
about how to proceed. We brought with us a new member of the team, Renee DiResta,
an expert in how conspiracy theories spread on the internet. Renee described how bad
actors plant a rumor on sites like 4chan and Reddit, leverage the disenchanted people on
those sites to create buzz, build phony news sites with “press” versions of the rumor,
push the story onto Twitter to attract the real media, then blow up the story for the
masses on Facebook. It was a sophisticated hacker technique, but not expensive. We
hypothesized that the Russians were able to manipulate tens of millions of American
voters for a sum less than it would take to buy an F-35 fighter jet.

In Washington, we learned we could help policymakers and their staff members understand the inner workings of Facebook, Google, and Twitter. They needed to get up to speed quickly, and our team was happy to help.

Tristan and I had begun in April with very low expectations. By the end of September, a
conversation on the dangers of internet platform monopolies was in full swing. We were
only a small part of what made the conversation happen, but it felt good.

Facebook and Google are the most powerful companies in the global economy. Part of their appeal to shareholders is that their gigantic advertising businesses operate with
almost no human intervention. Algorithms can be beautiful in mathematical terms, but
they are only as good as the people who create them. In the case of Facebook and
Google, the algorithms have flaws that are increasingly obvious and dangerous.

Thanks to the U.S. government’s laissez-faire approach to regulation, the internet platforms were able to pursue business strategies that would not have been allowed in
prior decades. No one stopped them from using free products to centralize the internet
and then replace its core functions. No one stopped them from siphoning off the profits
of content creators. No one stopped them from gathering data on every aspect of every
user’s internet life. No one stopped them from amassing market share not seen since the
days of Standard Oil. No one stopped them from running massive social and
psychological experiments on their users. No one demanded that they police their
platforms. It has been a sweet deal.

Facebook and Google are now so large that traditional tools of regulation may no longer
be effective. The European Union challenged Google’s shopping price comparison
engine on antitrust grounds, citing unfair use of Google’s search and AdWords data. The
harm was clear: most of Google’s European competitors in the category suffered
crippling losses. The most successful survivor lost 80 percent of its market share in one
year. The EU won a record $2.7 billion judgment—which Google is appealing. Google
investors shrugged at the judgment, and, as far as I can tell, the company has not altered
its behavior. The largest antitrust fine in EU history bounced off Google like a spitball
off a battleship.

It reads like the plot of a sci-fi novel: a technology celebrated for bringing people together is exploited by a hostile power to drive people apart, undermine democracy, and create misery. This is precisely what happened in the United States during the 2016
election. We had constructed a modern Maginot Line—half the world’s defense
spending and cyber-hardened financial centers, all built to ward off attacks from abroad
—never imagining that an enemy could infect the minds of our citizens through
inventions of our own making, at minimal cost. Not only was the attack an
overwhelming success, but it was also a persistent one, as the political party that
benefited refuses to acknowledge reality. The attacks continue every day, posing an
existential threat to our democratic processes and independence.

We still don’t know the exact degree of collusion between the Russians and the Trump
campaign. But the debate over collusion, while important, risks missing what should be
an obvious point: Facebook, Google, Twitter, and other platforms were manipulated by
the Russians to shift outcomes in Brexit and the U.S. presidential election, and unless
major changes are made, they will be manipulated again. Next time, there is no telling
who the manipulators will be.

Awareness of the role of Facebook, Google, and others in Russia’s interference in the
2016 election has increased dramatically in recent months, thanks in large part to
congressional hearings on October 31 and November 1. This has led to calls for
regulation, starting with the introduction of the Honest Ads Act, sponsored by Senators
Mark Warner, Amy Klobuchar, and John McCain, which attempts to extend current
regulation of political ads on networks to online platforms. Facebook and Google
responded by reiterating their opposition to government regulation, insisting that it
would kill innovation and hurt the country’s global competitiveness, and that self-
regulation would produce better results.

But we’ve seen where self-regulation leads, and it isn’t pretty. Unfortunately, there is no
regulatory silver bullet. The scope of the problem requires a multi-pronged approach.

First, we must address the resistance to facts created by filter bubbles. Polls suggest that
about a third of Americans believe that Russian interference is fake news, despite
unanimous agreement to the contrary by the country’s intelligence agencies. Helping
those people accept the truth is a priority. I recommend that Facebook, Google, Twitter,
and others be required to contact each person touched by Russian content with a
personal message that says, “You, and we, were manipulated by the Russians. This
really happened, and here is the evidence.” The message would include every Russian
message the user received.

This idea, which originated with my colleague Tristan Harris, is based on experience
with cults. When you want to deprogram a cult member, it is really important that the
call to action come from another member of the cult, ideally the leader. The platforms
will claim this is too onerous. Facebook has indicated that up to 126 million Americans
were touched by the Russian manipulation on its core platform and another twenty
million on Instagram, which it owns. Together those numbers exceed the 137 million
Americans who voted in 2016. What Facebook has offered is a portal buried within its
Help Center where curious users will be able to find out if they were touched by
Russian manipulation through a handful of Facebook groups created by a single troll
farm. This falls far short of what is necessary to prevent manipulation in 2018 and
beyond. There’s no doubt that the platforms have the technological capacity to reach out
to every affected person. No matter the cost, platform companies must absorb it as the
price for their carelessness in allowing the manipulation.

Second, the chief executive officers of Facebook, Google, Twitter, and others—not just
their lawyers—must testify before congressional committees in open session. As
Senator John Kennedy, a Louisiana Republican, demonstrated in the October 31 Senate
Judiciary hearing, the general counsel of Facebook in particular did not provide
satisfactory answers. This is important not just for the public, but also for another
crucial constituency: the employees who keep the tech giants running. While many of
the folks who run Silicon Valley are extreme libertarians, the people who work there
tend to be idealists. They want to believe what they’re doing is good. Forcing tech
CEOs like Mark Zuckerberg to justify the unjustifiable, in public—without the shield of
spokespeople or PR spin—would go a long way to puncturing their carefully preserved
cults of personality in the eyes of their employees.

These two remedies would only be a first step, of course. We also need regulatory fixes. Here are a few ideas.

First, it’s essential to ban digital bots that impersonate humans. They distort the “public
square” in a way that was never possible in history, no matter how many anonymous
leaflets you printed. At a minimum, the law could require explicit labeling of all bots,
the ability for users to block them, and liability on the part of platform vendors for the
harm bots cause.

Second, the platforms should not be allowed to make any acquisitions until they have
addressed the damage caused to date, taken steps to prevent harm in the future, and
demonstrated that such acquisitions will not result in diminished competition. An
underappreciated aspect of the platforms’ growth is their pattern of gobbling up smaller
firms—in Facebook’s case, that includes Instagram and WhatsApp; in Google’s, it
includes YouTube, Google Maps, AdSense, and many others—and using them to extend
their monopoly power.

This is important, because the internet has lost something very valuable. The early
internet was designed to be decentralized. It treated all content and all content owners
equally. That equality had value in society, as it kept the playing field level and
encouraged new entrants. But decentralization had a cost: no one had an incentive to
make internet tools easy to use. Frustrated by those tools, users embraced easy-to-use
alternatives from Facebook and Google. This allowed the platforms to centralize the
internet, inserting themselves between users and content, effectively imposing a tax on
both sides. This is a great business model for Facebook and Google—and convenient in
the short term for customers—but we are drowning in evidence that there are costs that
society may not be able to afford.

Third, the platforms must be transparent about who is behind political and issues-based
communication. The Honest Ads Act is a good start, but does not go far enough for two
reasons: advertising was a relatively small part of the Russian manipulation; and issues-
based advertising played a much larger role than candidate-oriented ads. Transparency
with respect to those who sponsor political advertising of all kinds is a step toward
rebuilding trust in our political institutions.

Fourth, the platforms must be more transparent about their algorithms. Users deserve to
know why they see what they see in their news feeds and search results. If Facebook
and Google had to be up-front about the reason you’re seeing conspiracy theories—
namely, that it’s good for business—they would be far less likely to stick to that tactic.
Allowing third parties to audit the algorithms would go even further toward maintaining
transparency. Facebook and Google make millions of editorial choices every hour and
must accept responsibility for the consequences of those choices. Consumers should
also be able to see what attributes are causing advertisers to target them.

Fifth, the platforms should be required to have a more equitable contractual relationship
with users. Facebook, Google, and others have asserted unprecedented rights with
respect to end-user license agreements (EULAs), the contracts that specify the
relationship between platform and user. When you load a new operating system or PC
application, you’re confronted with a contract—the EULA—and the requirement that
you accept its terms before completing installation. If you don’t want to upgrade, you
can continue to use the old version for some time, often years. Not so with internet
platforms like Facebook or Google. There, your use of the product comes with implicit
acceptance of the latest EULA, which can change at any time. If there are terms you
choose not to accept, your only alternative is to abandon use of the product. For
Facebook, where users have contributed 100 percent of the content, this non-option is
particularly problematic.

All software platforms should be required to offer a legitimate opt-out, one that enables
users to stick with the prior version if they do not like the new EULA. “Forking”
platforms between old and new versions would have several benefits: increased
consumer choice, greater transparency on the EULA, and more care in the rollout of
new functionality, among others. It would limit the risk that platforms would run
massive social experiments on millions—or billions—of users without appropriate prior
notification. Maintaining more than one version of their services would be expensive for
Facebook, Google, and the rest, but in software that has always been one of the costs of
success. Why should this generation get a pass?

Sixth, we need a limit on the commercial exploitation of consumer data by internet
platforms. Customers understand that their “free” use of platforms like Facebook and
Google gives the platforms license to exploit personal data. The problem is that
platforms are using that data in ways consumers do not understand, and might not
accept if they did. For example, Google bought a huge trove of credit card data earlier
this year. Facebook uses image-recognition software and third-party tags to identify
users in contexts without their involvement and where they might prefer to be
anonymous. Not only do the platforms use your data on their own sites, but they also
lease it to third parties to use all over the internet. And they will use that data forever,
unless someone tells them to stop.

There should be a statute of limitations on the use of consumer data by a platform and
its customers. Perhaps that limit should be ninety days, perhaps a year. But at some
point, users must have the right to renegotiate the terms of how their data is used.
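
Enforcing such a limit is technically simple. A minimal sketch, assuming a hypothetical ninety-day window and record format:

```python
# Minimal sketch of a statute of limitations on user data.
# The ninety-day window and record format are hypothetical.

from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)

def usable_records(records, now=None):
    """Keep only data points still inside the retention window."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] <= RETENTION]

records = [
    {"signal": "clicked_ad",
     "collected_at": datetime(2017, 1, 5, tzinfo=timezone.utc)},
    {"signal": "joined_group",
     "collected_at": datetime(2017, 11, 20, tzinfo=timezone.utc)},
]
fresh = usable_records(records, now=datetime(2017, 12, 1, tzinfo=timezone.utc))
print([r["signal"] for r in fresh])  # only the recent signal survives
```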

Seventh, consumers, not the platforms, should own their own data. In the case of
Facebook, this includes posts, friends, and events—in short, the entire social graph.
Users created this data, so they should have the right to export it to other social
networks. Given inertia and the convenience of Facebook, I wouldn’t expect this reform
to trigger a mass flight of users. Instead, the likely outcome would be an explosion of
innovation and entrepreneurship. Facebook is so powerful that most new entrants would
avoid head-on competition in favor of creating sustainable differentiation. Start-ups and
established players would build new products that incorporate people’s existing social
graphs, forcing Facebook to compete again. It would be analogous to the regulation of
the AT&T monopoly’s long-distance business, which led to lower prices and better
service for consumers.
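
Such portability has a familiar engineering shape: a machine-readable export that competing networks could import. A minimal sketch, with an entirely hypothetical format; a real standard would need agreed-upon schemas far beyond this.

```python
# Minimal sketch of a user-owned social graph export.
# The format is hypothetical; a real standard would need
# agreed-upon schemas so competitors could import it.

import json

def export_social_graph(user_id, friends, posts, events):
    """Bundle the user's own data for transfer to another network."""
    return json.dumps({
        "format_version": "0.1",
        "user": user_id,
        "friends": friends,
        "posts": posts,
        "events": events,
    }, indent=2)

archive = export_social_graph(
    user_id="roger",
    friends=["kara", "walt"],
    posts=[{"date": "2016-02-09", "text": "On tour with Moonalice"}],
    events=[{"name": "Concert", "date": "2016-03-01"}],
)
print(archive)  # portable JSON a competing network could import
```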

Eighth, and finally, we should consider that the time has come to revive the country’s
traditional approach to monopoly. Since the Reagan era, antitrust law has operated
under the principle that monopoly is not a problem so long as it doesn’t result in higher
prices for consumers. Under that framework, Facebook and Google have been allowed
to dominate several industries—not just search and social media but also email, video,
photos, and digital ad sales, among others—increasing their monopolies by buying
potential rivals like YouTube and Instagram. While superficially appealing, this
approach ignores costs that don’t show up in a price tag. Addiction to Facebook,
YouTube, and other platforms has a cost. Election manipulation has a cost. Reduced
innovation and shrinkage of the entrepreneurial economy have a cost. All of these costs
are evident today. We can quantify them well enough to appreciate that the costs to
consumers of concentration on the internet are unacceptably high.

Increasing awareness of the threat posed by platform monopolies creates an opportunity to reframe the discussion about concentration of market power. Limiting the power of
Facebook and Google not only won’t harm America, it will almost certainly unleash
levels of creativity and innovation that have not been seen in the technology industry
since the early days of, well, Facebook and Google.

Before you dismiss regulation as impossible in the current economic environment, consider this. Eight months ago, when Tristan Harris and I joined forces, hardly anyone
was talking about the issues I described above. Now lots of people are talking, including
policymakers. Given all the other issues facing the country, it’s hard to be optimistic that
we will solve the problems on the internet, but that’s no excuse for inaction. There’s far
too much at stake.
