Running head: CONTENT FILTERING

Social Media:
The Challenge of Content Filtering
Crystal S. Stephenson
University of South Florida

Abstract

Following the recent massacre in Christchurch, New Zealand and the gunman’s ability to live-

stream his assault on Facebook, many were shocked and outraged that such a tragedy could make

it past the technological tools, filters and moderators designed to prevent it. What went wrong?

Or, as this paper explores, what is wrong with the current system for content moderation on

social media platforms? Upon extensive review of the academic literature and media reporting,

the research indicates that artificial intelligence (AI), machine learning

algorithms, and human content moderators have collectively improved efforts to thwart or

otherwise detect spam and pornography but have proven ineffective at identifying hate speech

and violent content. The algorithms are not yet advanced enough and AI technologies not

optimally developed to successfully monitor over two billion registered users. While some

proposals are discussed, including a time delay for publishing livestreamed content, the

researcher concludes that no solution is imminent. But the literature also shows that social media

conglomerates are working extensively in the interim to resolve the problem of content

moderation before another tragedy is broadcast across their platforms.


“Social Media: The Challenge of Content Filtering”


The recent tragedy in New Zealand not only sparked outrage, shock and grief across the

globe but also brought to the forefront of international attention an ongoing problem for social

media platforms. The gunman live-streamed the shootings on Facebook in real time without

hurdle or censure, and while Facebook deleted the footage within 17 minutes of posting, the

damage had been done. The livestream instantly went viral across platforms from YouTube to

Reddit and filtered back to Facebook all over again. The challenge of content filtering is not a

new one, as evidenced by dangerous YouTube challenges and Facebook’s highly publicized failure to

circumvent Russian interference in U.S. elections. But the recent tragedy in Christchurch,

broadcast for the world to see, highlighted a major flaw in social media networks’ ability to filter

content instantaneously or moderate it effectively for want of adequate resources and methods

that can meet the demand. Current methods of filtering primarily include machine learning,

artificial intelligence, and human moderators, but they are neither technologically advanced

enough to differentiate a real mass shooting from the advanced graphics of a PS4 game nor free

of human error. This paper will focus on the challenge as outlined, beginning with a review of

some recent examples of dangerous or controversial breach of content guidelines, followed by a

thorough exploration of the current methods relied upon by social media platforms to filter

content, including how they work and why they fail, and conclude with some proposals that may

help to mitigate the issue until technologies are developed or improved. As more instances like the

Christchurch massacre slip through the cracks, social media companies are scrambling to fix the

problem and develop a viable and effective solution to this otherwise pressing challenge.
Facebook’s challenge to moderate live-streaming content has never been more glaringly obvious and consequential than during the recent tragedy in Christchurch, New

Zealand. On March 15, 2019, a gunman opened fire on two mosques in Christchurch during
Friday Prayer resulting in 50 casualties. What made this mass shooting all the more horrific is

that the gunman broadcast the terrorist attack on Facebook Live in real time for the entire world

to see. While the graphic video was deleted by Facebook after 17 minutes live on the platform,

this “episode highlights the fraught difficulties in moderating live content, where an innocuous

seeming video can quickly turn violent with little or no warning” (Cox, 2019). The gunman used

an app called LIVE4, “which streams footage from a GoPro camera to Facebook Live”, but

LIVE4 has “no technical ability to block any streams while they are happening,” says the CEO,

Alex Zhukov. Once the stream was removed from Facebook, copies of the video were

subsequently reposted across the Internet, from Reddit and Twitter to YouTube and back to

Facebook again. According to Billy Perrigo of Time, “The episode underscored social media

companies’ Sisyphean struggle to police violent content on their platforms” like a “game of

whack-a-mole” (2019).
The mass shooting in New Zealand is not the first time that violent content has made it

past Facebook moderators. In 2017, Steve Stephens broadcast the random “cold-blooded killing

of an elderly stranger on the streets of Cleveland on Facebook Live in what he dubbed an ‘Easter

day slaughter’” (Yahoo7 News, 2017). Most troubling is that it took “Facebook two hours to

take down the Steve Stephens content,” which by then had been viewed over 150,000 times. In

2015, “a gunman shot and killed two television journalists while they were on air at a shopping

mall in Moneta, Virginia” (Alba, 2015), and shortly thereafter, “he posted first-person videos of

the shooting on Facebook and Twitter, which spread quickly through two of the world’s most

popular media platforms.” Not only did the video play automatically for unsuspecting users to

witness, but once again, the eight minutes it was up on Facebook gave viewers enough time to

copy and repost across other social media platforms. The YouTube copy, for example, was

viewed more than 2,000 times before being deleted, and the copy reposted to Facebook remained
up for five hours with over 39,000 views before it was taken down. For their part, Twitter

reportedly does not moderate content but rather relies on creators to mark their uploads as

sensitive material per their policy, and the tweet is not immediately removed unless or until

Twitter users complain or flag it as inappropriate. In 2016, the shooting deaths of five police

officers in Dallas, Texas, were streamed across social media platforms in real time. And the

evening prior, “Diamond Reynolds livestreamed the last moments of her boyfriend Philando

Castile’s life after police in Falcon Heights, Minnesota shot him during a traffic stop”

(Lapowsky, 2016). Four years after the live-streamed shooting of two television journalists, and

as evidenced by the recent live-streamed massacre in New Zealand, Facebook and others have

proven ineffective, making little to no progress in moderating their live-stream content.


YouTube has also faced difficulty and backlash over the years for its inability or reluctance to moderate controversial content, namely as a platform for a variety of dangerous

and controversial challenges to go viral among young and impressionable viewers. Internet

“challenges” are “a cultural phenomenon defined as ‘Internet users recording themselves taking a

challenge and then distributing the resulting video through social media sites, often inspiring or

daring others to repeat the challenge’” (Quinn, 2018). And with the “immortalization of the

Internet, these challenges tend to resurface with each season and varied cohorts of teens” seeking

acceptance among peers and the thrill of the risk. The first challenge dates back to 2001 when

the “Cinnamon Challenge” spread virally on pre-streaming video. The most widely recognized

challenge of late is the “Tide Pod Challenge,” which has resulted in multiple hospitalizations for

severe burns to the mouth, damage to the respiratory tract or esophagus, vomiting, diarrhea, and

seizures. The Tide Pod challenge became a social media phenomenon in 2016, when teenagers

began recording themselves eating a Tide Pod and posting it to YouTube. “Enough teenagers

have engaged in the Tide Pod Challenge that it’s warranted public health scrutiny” (Mukherjee,
2018), prompting Procter & Gamble to release a statement condemning the act. “According

to the Association of Poison Control Centers, 25% of the 220 teens who were exposed to Tide

Pods last year consumed them intentionally, and half of the 37 cases in 2018 were intentional”

(Mukherjee, 2018).
Another dangerous challenge that spread across YouTube is the “Fire Challenge,” which

involves “a combustible liquid being placed on the skin and people lighting themselves on fire”

(Quinn, 2018), resulting in serious burns, scarring, disfigurement, and at least two reported

deaths. But the final straw for YouTube was the “Bird Box Challenge” inspired by the Netflix

movie, which entailed performing mundane activities like walking across the street or driving a

car while blindfolded. In the days following a car crash by a teenager attempting the “Bird Box”

challenge, YouTube updated their policies and guidelines to explicitly ban dangerous pranks and

challenges, “including activities that cause ‘severe emotional distress’ for kids or make any target

think they’re in ‘serious physical danger’” (Fingas, 2019). However, there is still a “two-month

grace period where YouTube won’t apply a strike against channels that violate the policy,” though it

reserves the right to “remove any offending videos posted before or during that period.” In other

words, videos may continue to be uploaded for viewing, albeit temporarily, and may not be

deleted until they are reported by users or the creator has been given an opportunity to remove them first. The company has issued a statement to media explaining that they are “mainly concerned

with videos that show children either hurt or in dangerous situations” (Fingas, 2019), and if the

video in question “doesn’t go too far but might not be suitable for kids, YouTube will apply age

restrictions.”
While YouTube provides for age restrictions, their safety features and filter controls for

YouTube Kids have been featured in the news for their failure to catch dangerous content. In 2018,

a Florida mother discovered clips of suicide instructions spliced in the middle of one of the
cartoon videos her son was watching on YouTube Kids. The footage revealed a man in

sunglasses instructing children on how they can slit their wrists. The mother immediately “put

out a call to action to different groups to report the video to get it removed from the site” (Criss,

2019), but it took YouTube Kids a week to pull it down. Less than a year later, she found the

video again on YouTube, and “once again, after the video was flagged by her and others, it took a

couple of days for YouTube to pull it” from their platform. Furthermore, following the incident,

she explored YouTube Kids more in depth and discovered other disturbing content, including

sexual exploitation, abuse, human trafficking, gun and domestic violence, and one video,

“inspired by the popular ‘Minecraft’ video game, even depicted a school shooting” (Criss,

2019). She eventually went directly to news media outlets to share her experience and plead with

YouTube to do a better job of screening and monitoring the videos intended for YouTube Kids.
Propaganda and harmful rhetoric have also proven challenging for social media companies to successfully moderate and circumvent. The spread of hate speech, debunked science,

Holocaust deniers, and most recently, the evidence of Russian interference in the 2016 U.S.

presidential election, have all plagued Facebook in recent years. While “sockpuppets” are

nothing new to social media platforms, their seemingly meteoric rise as of late has raised concern

for Facebook and others as serious consequences have resulted. Sockpuppets are fake

accounts or “any online identity created and used for purposes of deception” (Parsons, 2018, p.

355), the “nefarious purposes” of which range from cyberbullying and the spread of harmful

rhetoric to foreign election-meddling and fraud. “People are highly dependent on online social

networks” (Gupta & Kaushal, 2017, p. 1) for communication, entertainment and daily news, but

cyber criminals have also discovered a convenient platform “for carrying out a number of

malicious activities.” Fake accounts created en masse can be classified into categories of duplicate, user-misclassified, or undesirable accounts, but their detection is generally “done on the basis of
the user activities and their interaction with other users on Facebook through the analysis of user

feed data” and machine learning algorithms, and heavily dependent on user-reporting and flags.

Facebook has touted the development of an “immune system to address challenges owing to fake

accounts by building classifiers” (p. 2), but numerous experts have surmised “that it may not be

solving the problem” as sockpuppets “have been constantly evolving over the years to evade

detection.”
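To illustrate the kind of feed-data analysis and machine-learning classification that Gupta and Kaushal (2017) describe, the following sketch trains a toy classifier on hypothetical account-activity features. The feature names, distributions, and data are invented for demonstration and do not represent Facebook’s actual detection system.

```python
# Illustrative only: a toy sockpuppet classifier over hypothetical
# account-activity features, in the spirit of the feed-data analysis the
# literature describes. All features and data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(seed=42)

def synth_accounts(n, fake):
    """Hypothetical per-account features: posts/day, friend requests/day,
    fraction of posts containing links, account age in days."""
    if fake:
        return np.column_stack([
            rng.normal(40, 10, n),      # fake accounts post heavily
            rng.normal(30, 8, n),       # and send many friend requests
            rng.uniform(0.7, 1.0, n),   # mostly link spam
            rng.uniform(1, 60, n),      # and tend to be newly created
        ])
    return np.column_stack([
        rng.normal(3, 2, n),
        rng.normal(1, 1, n),
        rng.uniform(0.0, 0.4, n),
        rng.uniform(30, 3000, n),
    ])

X = np.vstack([synth_accounts(500, fake=True), synth_accounts(500, fake=False)])
y = np.array([1] * 500 + [0] * 500)     # 1 = sockpuppet, 0 = legitimate

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=["legitimate", "sockpuppet"]))
```

On synthetic data like this the classifier separates the two groups easily; the literature’s point is that real sockpuppets deliberately mimic legitimate activity, which is precisely what erodes this kind of separation.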
Among the most highly publicized of these nefarious activities perpetrated over social

media networks is Russia’s “‘interference operation’ that made use of Facebook, Twitter, and

Instagram” (Osnos, 2018) during the 2016 U.S. presidential election. To date, Special Counsel

Robert Mueller has charged thirteen Russian operatives with engaging in propaganda efforts to

disrupt the election. “The Internet Research Agency, a firm in St. Petersburg working for the

Kremlin, drew hundreds of thousands of users to Facebook groups optimized to stoke outrage,”

as well as published approximately eighty thousand posts reaching millions of Americans,

organized offline rallies, and purchased “Facebook ads intended to hurt Hillary Clinton’s

standing among Democratic voters.” Indeed, the Internet Research Agency achieved quite the

impact, as “Facebook estimates that the content reached as many as a hundred and fifty million

users” (Osnos, 2018), leading to the admission by President Donald Trump’s digital-content

director that, “Without Facebook we wouldn’t have won.” While Facebook has been working

extensively to identify and remove fake accounts, including a sweep of “thirty-two accounts

running disinformation campaigns that were traced to Russia” and “six hundred and fifty

accounts, groups, and pages with links to Russia and Iran” (Osnos, 2018), these removals are “a

sign either of progress or of the growing scale of the problem.” This also highlights one of

Facebook’s biggest hurdles to content moderation: its approach remains reactive, removing harmful content and fake accounts after the fact rather than perfecting proactive methods to circumvent them in the first place.
Content moderation, according to Myers West, is “the governance mechanisms that

structure participation in a community to facilitate cooperation and prevent abuse” (2018, p.

4368), which often “relies on the work of outsourced laborers, freelance workers who minute by

minute scroll through the worst of the Internet’s garbage and make assessments manually as to

whether it upholds the community guidelines.” At present, Facebook relies on two primary

methods of checking the content uploaded to their platforms, namely “content recognition

technology, which uses artificial intelligence to compare newly-uploaded footage to known illicit

material” (Perrigo, 2019), and the supplemental efforts of thousands of human content

moderators. As a subset of artificial intelligence (AI), machine learning utilizes algorithms and

statistical models that rely on patterns and inference to filter accordingly. But considering

the many aforementioned examples, it is fair to surmise that their moderation tools and efforts

have been unsuccessful. Community users still “remain critical to the functioning of content

moderation” (Myers West, 2018, p. 4368), as companies rely heavily “on users flagging content

they deem objectionable in order to identify what needs to be removed.” In other words, much of

the objectionable content slips through the filters currently in place to prevent it and users must

notify the platform in order to delete it. Once again, content moderation has proven reactive

rather than proactive. Facebook reportedly “believes highly-nuanced content moderation can

resolve” many of the problems featured here, “but it’s an unfathomably complex logistical

problem that has no obvious solution, that fundamentally threatens Facebook’s business, and that

has largely shifted the role of free speech arbitration from governments to a private platform”

(Koebler & Cox, 2018). While Mark Zuckerberg “believes that Facebook’s problems can be

solved”, the “experts do not.”
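As a rough illustration of the division of labor just described, the sketch below shows how automated content recognition, user flags, and a human review queue might fit together. The class names, scores, and thresholds are hypothetical simplifications, not Facebook’s actual pipeline.

```python
# Hypothetical moderation triage: clear matches are removed automatically,
# ambiguous items and heavily flagged posts go to a human review queue,
# everything else stays up unless users flag it later. Values are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    post_id: int
    text: str
    flags: int = 0          # number of user reports so far

@dataclass
class ModerationQueue:
    pending_review: List[Post] = field(default_factory=list)

    def triage(self, post: Post, match_score: float) -> str:
        """match_score: similarity to known illicit material, 0.0-1.0
        (a stand-in for a content-recognition system)."""
        if match_score > 0.9:
            return f"post {post.post_id}: removed automatically"
        if match_score > 0.5 or post.flags >= 3:
            self.pending_review.append(post)        # a human moderator decides
            return f"post {post.post_id}: sent to human review"
        return f"post {post.post_id}: left up (users may still flag it)"

queue = ModerationQueue()
print(queue.triage(Post(1, "re-upload of known violent footage"), match_score=0.97))
print(queue.triage(Post(2, "borderline content, repeatedly reported", flags=4), match_score=0.2))
print(queue.triage(Post(3, "ordinary post"), match_score=0.05))
```

Even in this toy version the reactive character of the system is visible: the third post is only ever revisited if users flag it.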


Machine learning is an “area of research that has produced most of the recent

breakthroughs in AI” (de Saint Laurent, 2018, p. 735) and has proven to “have tremendous potential”
in application. “Machine learning was born at the meeting point between statistics” and computer

science, and “refers to any statistical method where the parameters of a given model are ‘learnt’

from a dataset through an iterative process, usually to predict an output” (p. 737). The weakness

of machine learning as a method of content moderation, however, lies in the many unknown values and variables it must estimate in order to effectively predict and moderate a given output. For instance, machine learning is so

complex that “the exact parameters that will minimize the error function cannot be calculated

directly, but only estimated through a gradient descent,” meaning the process is challenged by

the “sheer volume of parameters and hyperparameters that need to be tuned in by researchers to

appropriately fit or select a model, and that cannot be learnt during the training of the model” (de

Saint Laurent, 2018, p. 737). Facebook, for one, has lauded their AI tools, “many of which are

trained with data from its human moderation team” (Koebler & Cox, 2018), but experts say that

the “algorithms are not advanced enough yet to reliably” (Perrigo, 2019) detect and remove

violent footage the first time it is uploaded. For instance, an algorithm can easily confuse

“footage of a first-person-shooter video game with real-life violent footage” as it currently

operates.
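The iterative “learning” de Saint Laurent describes can be made concrete with a minimal example: logistic-regression parameters are estimated from a toy dataset by gradient descent on an error function, while the learning rate and number of steps are hyperparameters the researcher must choose by hand rather than learn from the data. The dataset and values below are invented for demonstration.

```python
# Minimal gradient-descent illustration: parameters are learnt iteratively by
# descending the gradient of a cross-entropy error function; the learning rate
# and step count are hyperparameters set by the researcher, not learnt.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                          # toy feature matrix
true_w = np.array([2.0, -1.0, 0.5])
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)   # noisy toy labels

w = np.zeros(3)          # model parameters, learnt from the data
learning_rate = 0.1      # hyperparameter, chosen by the researcher
n_steps = 500            # hyperparameter, chosen by the researcher

for _ in range(n_steps):
    p = 1 / (1 + np.exp(-(X @ w)))       # model predictions (sigmoid)
    gradient = X.T @ (p - y) / len(y)    # gradient of the error function
    w -= learning_rate * gradient        # step toward lower error

print("estimated parameters:", np.round(w, 2))
```

A content classifier works the same way in principle, only with millions of parameters and far messier inputs, which is where the tuning burden described above becomes prohibitive.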
At its core, AI is defined as “a cross-disciplinary approach to understanding, modeling,

and replicating intelligence and cognitive processes by invoking various computational,

mathematical, logical, mechanical, and even biological principles and devices” (de Saint

Laurent, 2018, p. 736). In other words, AI technologies are intended to “develop machines able

to carry out tasks that would otherwise require human intelligence.” Facebook and other social

media companies rely heavily on AI technology for detection and removal of objectionable

content and that which violates their community standards and guidelines. However, while AI

has proven successful at “identifying porn, spam, and fake accounts” (Koebler & Cox, 2018), the

technology has not been “great at identifying hate speech” due to the nuance of human language
and proven ineffective at filtering violent content. For example, Facebook’s AI technology has

detected “just 38 percent of the hate speech-related posts it ultimately removes, and at the

moment it doesn’t have enough training data for the AI to be very effective outside of English

and Portuguese” (Koebler & Cox, 2018). To be certain, learning how “to successfully moderate

user-generated content” of two billion registered users “is one of the most labor-intensive and

mind-bogglingly complex logistical problems Facebook has ever tried to solve.” Facebook’s

content moderation practices are hindered by “failures of policy, failures of messaging, and

failures to predict the darkest impulses of human nature” (Koebler & Cox, 2018), and further

impacted by “technological shortcomings” and honest mistakes. The “fundamental reason for

content moderation – its root reason for existing – goes quite simply to the issue of brand

protection and liability mitigation for the platform,” and those “gatekeeping mechanisms the

platforms use to control the nature of the user-generated content” have yet to be perfected, as

evidenced by the tens of thousands of moderation errors every day (Koebler & Cox, 2018). Due

to Facebook’s size and diversity, “it’s nearly impossible to govern every possible interaction on

the site.”
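A toy example helps show why the nuance of human language is the sticking point. The sketch below applies a naive keyword filter, far cruder than the machine-learning classifiers platforms actually use, to three hypothetical posts: it flags counter-speech that merely quotes an insult and misses coded abuse that avoids the keyword entirely.

```python
# Toy illustration of why surface-level filtering struggles with nuance:
# the same word appears in condemnation and in abuse, while coded insults
# contain no blocked word at all. The blocklist and posts are invented.
BLOCKLIST = {"scum"}        # hypothetical blocked term

def naive_filter(text: str) -> bool:
    """Return True if the post would be flagged."""
    return any(word in text.lower() for word in BLOCKLIST)

posts = [
    "Calling refugees 'scum' is disgraceful and should be reported.",   # counter-speech: flagged anyway
    "Those people are scum and do not belong here.",                    # abuse: flagged, correctly
    "Those 'newcomers' should go back where they came from.",           # coded abuse: missed entirely
]
for p in posts:
    print(naive_filter(p), "-", p)
```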
While social media networks “augment their AI technology with thousands of human

moderators who manually check videos and other content” every day, these companies “often

fail to recognize violent content before it spreads virally” (Perrigo, 2019). With roughly 7,500

Facebook moderators reviewing more than 10 million posts per week to monitor 2.2 billion

registered account holders, it is neither an enviable job nor an infallible strategy, but rather, a

critical necessity while AI technologies continue to improve. In 2017, Zuckerberg expanded

Facebook’s “community operations” team by adding to their existing 4,500 moderators,

“responsible for reviewing every piece of content reported for violating the company’s

community standards” (Newton, 2019), but by the end of 2018, “in response to criticism of the
prevalence of violent and exploitative content on the social media network,” Facebook had grown to nearly 15,000 content moderators in an effort to mitigate the bad press. A few short months later, in 2019,

the massacre in Christchurch, New Zealand unfolded for all the world to watch in horror on

Facebook Live.
Human moderators range from full-time to contractual employees, who work “around the

clock, evaluating posts in more than 50 languages, at more than 20 sites around the world”

(Newton, 2019) at an average pay rate of under $30,000 annually. “The job is psychologically

grueling” (Perrigo, 2019), with “workers exposed to the most grotesque footage imaginable on a

daily basis for low pay and with minimal mental health support.” Some social media companies

have vowed to take better care of their moderators following numerous reports of horrible

conditions, but the necessity of their job remains, and human error is inevitable. “Collectively,”

content moderators have “described a workplace that is perpetually teetering on the brink of

chaos” (Newton, 2019), in an “environment where workers cope by telling dark jokes about

committing suicide, then smoke weed during breaks to numb their emotions.” Employees are

always at risk of being “fired for making just a few errors a week,” and “those who remain live

in fear of the former colleagues who return seeking vengeance.” To be sure, the environment and

experiences of human moderators do not bode well for successful and effective content moderation, as fear of the next panic attack looms heavy.
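A back-of-the-envelope calculation with the figures cited above, roughly 7,500 moderators and more than 10 million reported posts per week, suggests the scale of the individual workload. The workweek and shift length assumed below are illustrative, not reported figures.

```python
# Rough arithmetic on the moderation workload implied by the figures above.
# The five-day week and eight-hour shift are assumptions for illustration.
posts_per_week = 10_000_000
moderators = 7_500

per_moderator_week = posts_per_week / moderators
per_moderator_day = per_moderator_week / 5              # assumed five-day week
per_minute = per_moderator_day / (8 * 60)               # assumed eight-hour shift

print(f"~{per_moderator_week:,.0f} posts per moderator per week")   # ~1,333
print(f"~{per_moderator_day:,.0f} posts per moderator per day")     # ~267
print(f"~{per_minute:.1f} decisions per minute, sustained")         # roughly one every two minutes
```

At that pace, each decision gets under two minutes of attention, which helps explain both the error rates and the toll on moderators described above.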


In summation, the algorithms used for machine learning are not yet advanced enough to

detect hate speech and violent content, AI technologies have failed to identify all objectionable

content for removal, and human content moderators are stressed and fallible, but livestreaming is

an entirely new beast to conquer. “What’s especially challenging about the Christchurch video is

that the attack wasn’t recorded and uploaded later, but livestreamed in real-time as it unfolded”

(Perrigo, 2019), and current methods in practice, from AI to human moderators, can’t possibly
“detect a violent scene as it is being live-streamed” and take it down while it’s happening. While

murder and violence were broadcast on Facebook and other social media platforms long

before the introduction of live-streaming, as evidenced here, there is “no perfect technology to

take down a video without a reference database” (Perrigo, 2019). It is virtually impossible to

“prevent a newly-recorded violent video from being uploaded for the very first time” as current

content-recognition technology works today. Content-recognition technologies use

“fingerprinting” models, meaning any company “looking to prevent a video being uploaded at all

must first upload the copy of that video to a database, allowing for new uploads to be compared

against that footage” (Perrigo, 2019). And even with a reference point, “users can manipulate

their versions of the footage to circumvent upload filters” by “altering the image or audio

quality.” While the “fingerprinting” technology is predicted to improve and “more variants of an

offending piece of footage can be detected” with time, “the imperfection of the current system in

part explains why copies” of objectionable videos continue to appear on platforms long after the

initial livestream. Social media companies have yet to develop the most “effective AI to suppress

this kind of content on a proactive basis” (Lapowsky, 2019), but many urge companies to “take a

blunt force approach and ban every clip” of a video by incorporating the same “fingerprinting

technology used to remove child pornography.” However, until the technology improves,

“mainstream social networks still rely on a system that’s catch-as-catch-can, because humans are

still better at making the nuanced judgements of whether or not potentially offensive content

should stay up” (Alba, 2015), which only cements the necessity of human content moderators

and reliance on community surveillance by flagging objectionable content.
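The reference-database or “fingerprinting” approach described above can be sketched in a few lines. The example below uses exact frame hashes purely for illustration; production systems rely on perceptual hashes that tolerate re-encoding, but even this toy version shows why a slightly altered copy can evade matching.

```python
# Minimal sketch of fingerprint matching against a reference database.
# Frames here are byte strings standing in for decoded video frames, and the
# exact SHA-256 hashes are deliberately brittle to show the evasion problem.
import hashlib
from typing import List, Set

reference_db: Set[str] = set()

def fingerprint(frames: List[bytes]) -> Set[str]:
    """Hash each frame of a video."""
    return {hashlib.sha256(f).hexdigest() for f in frames}

def register_known_video(frames: List[bytes]) -> None:
    """A copy of the offending video must be added to the database first."""
    reference_db.update(fingerprint(frames))

def matches_known_video(frames: List[bytes], threshold: float = 0.5) -> bool:
    """Block an upload if enough of its frames match the reference database."""
    fp = fingerprint(frames)
    overlap = len(fp & reference_db) / max(len(fp), 1)
    return overlap >= threshold

original = [b"frame-1", b"frame-2", b"frame-3", b"frame-4"]
register_known_video(original)

print(matches_known_video(original))              # True: an exact re-upload is caught
altered = [f + b"-recompressed" for f in original]
print(matches_known_video(altered))               # False: a small alteration evades exact hashing
```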


While no viable long-term solutions have been developed yet, some proposals have been

made to mitigate the issue. In 2018, Facebook reportedly began designing computer chips that

would be “more energy-efficient at analyzing and filtering live video content”
(Bloomberg, 2018). In theory, if “someone uses Facebook Live to film their own suicide or

murder,” these computer chips would make it possible “to take down that kind of content as it

happens.” Some have also proposed a time-delay feature for the publication of livestreamed

recordings. By implementing “a significant delay between a live broadcast and its availability as

a shareable recording” (Ward, 2019), it “could help reduce the degree to which graphic

recordings are spread” by giving it time to “be run through content filters to pick up anything

suspicious.” From a behavior standpoint, this delay “places time between the act (live streaming)

and a potentially reinforcing consequence (greater ability to share the content and receive

attention).”
But if the statistics are any indication, all may not be as bleak as it would appear on the

surface, as social media companies have collectively improved their efforts to moderate and

remove harmful content over the past few years. For instance, Facebook reports the successful

detection of “nearly 100 percent of spam” (Koebler & Cox, 2018), including “99.5 percent of

terrorist-related removals, 98.5 percent of fake accounts, 96 percent of adult nudity and sexual

activity, and 86 percent of graphic violence-related removals,” all successfully “detected by AI,

not users.” In fact, based on “Facebook’s metrics, for every instance of the company mistakenly

deleting (or leaving up) something it shouldn’t, there are more than a hundred damaging posts

that were properly moderated that are invisible to the user.” While live-streaming remains a

pressing challenge, “there’s simply no perfect solution” at this time, “save for eliminating user-

generated content altogether – which would likely mean shutting down Facebook” (Koebler &

Cox, 2018). The moderation of video is simply “harder than moderating text (which can be easily

searched) or photos (which can be banned once and then prevented from being uploaded again),”

and doing so while it is happening in real time is even more difficult. Unfortunately for

Facebook, their innovative concept “to connect as many people as possible and figure out the
specifics later continue to have ramifications” (Koebler & Cox, 2018), so they continue to learn

and adapt as they go. In the end, those “inside Facebook’s everything machine will never be able

to predict the ‘everything’ that their fellow humans will put inside it,” and if their “mission

remains to connect ‘everyone,’ then Facebook will never solve its content moderation problem”

entirely.
One of the most pressing challenges for social media networks today is their ability to

effectively moderate and censor content deemed violent or otherwise objectionable, and the

problem is exacerbated by the introduction of live-streaming video. What the massacre in New

Zealand brought to the forefront, and other examples highlighted here further prove, is that

current methods of machine learning, artificial intelligence, and human moderators are not

effective and advanced enough to prevent graphic content from going viral across social media

platforms for unsuspecting users to witness. While AI technology has successfully managed to

circumvent spam and sexually explicit content, machine learning algorithms have been less

effective in identifying hate speech, and the combination of all three moderation methods has been unsuccessful in censoring live-streamed videos. Current content-recognition technology is

still improving, particularly in a reactive sense, as data and “fingerprinting” models are added

to the reference database for detection later. As Facebook, YouTube, and other large social media

platforms continue to develop technological solutions to the problem, thousands of human

moderators are employed to supplement efforts, but they remain fallible and prone to human

error. A brief look at the numbers reveals that Facebook is getting better at their content

moderation strategies, suggesting that perhaps the stated problem is not as damaging as the

media outlets report. That said, the live-streaming feature remains a pertinent concern with no

foreseeable solution, but it is all but certain that social media conglomerates are scrambling

behind closed doors to resolve the issue before the next tragedy is broadcast on their platform.

References
Alba, D. (2015, Aug 26). Should Facebook Block Offensive Videos Before They Post? Wired. Retrieved from https://www.wired.com/2015/08/facebook-block-offensive-videos-post/

Bloomberg. (2018, May 25). Facebook Is Designing Chips to Help Filter Live Videos. Fortune.com, 1. Retrieved from http://search.ebscohost.com.ezproxy.lib.usf.edu/login.aspx?direct=true&db=buh&AN=129794316&site=eds-live

Cox, J. (2019, Mar 15). Documents Show How Facebook Moderates Terrorism on Livestreams. Motherboard. Retrieved from https://motherboard.vice.com/en_us/article/eve7w7/documents-show-how-facebook-moderates-terrorism-on-livestreams

Criss, D. (2019, Feb 25). A Mom Found Videos on YouTube Kids That Gave Children Instructions for Suicide. CNN Business. Retrieved from https://www.cnn.com/2019/02/25/tech/youtube-suicide-videos-trnd/index.html

de Saint Laurent, C. (2018). In Defense of Machine Learning: Debunking the Myths of Artificial Intelligence. Europe’s Journal of Psychology, 14(4), 734-747. Retrieved from http://search.ebscohost.com.ezproxy.lib.usf.edu/login.aspx?direct=true&db=edb&AN=134979044&site=eds-live

Fingas, J. (2019, Jan 15). YouTube Bans Dangerous Challenges and Pranks (Updated). Engadget. Retrieved from https://www.engadget.com/2019/01/15/youtube-bans-dangerous-challenges-and-pranks/

Gupta, A., & Kaushal, R. (2017). Towards Detecting Fake User Accounts in Facebook. 2017 ISEA Asia Security and Privacy (ISEASP), 1-6. Retrieved from https://doi-org.ezproxy.lib.usf.edu/10.1109/ISEASP.2017.7976996

Koebler, J., & Cox, J. (2018, Aug 23). The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People. Motherboard. Retrieved from https://motherboard.vice.com/en_us/article/xwk9zd/how-facebook-content-moderation-works

Lapowsky, I. (2016, July 8). We’ll Never Be Able to Look Away Again. Dallas and Minnesota Prove It. Wired. Retrieved from https://www.wired.com/2016/07/well-never-able-look-away-dallas-minnesota-prove/

Lapowsky, I. (2019, Mar 15). Why Tech Didn’t Stop the New Zealand Attack from Going Viral. Wired. Retrieved from https://www.wired.com/story/new-zealand-shooting-video-social-media/

Mukherjee, S. (2018, Jan 18). YouTube Is Taking Down Videos of the ‘Tide Pod Challenge’ After Teens Keep Poisoning Themselves. Fortune.com. Retrieved from http://search.ebscohost.com.ezproxy.lib.usf.edu/login.aspx?direct=true&db=buh&AN=127434536&site=eds-live

Myers West, S. (2018). Censored, Suspended, Shadowbanned: User Interpretations of Content Moderation on Social Media Platforms. New Media & Society, 20(11), 4366-4383. Retrieved from http://ezproxy.lib.usf.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=psyh&AN=2018-54870-022&site=eds-live

Newton, C. (2019, Feb 25). The Trauma Floor: The Secret Lives of Facebook Moderators in America. The Verge. Retrieved from https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona

Osnos, E. (2018, Sep 17). Ghost in the Machine. New Yorker, 94(28), 32-47. Retrieved from http://search.ebscohost.com.ezproxy.lib.usf.edu/login.aspx?direct=true&db=aph&AN=131648108&site=eds-live

Parsons, J. (2018). Computer Concepts 2018. Boston, MA: Cengage Learning, Inc.

Perrigo, B. (2019, Mar 15). ‘A Game of Whack-a-Mole.’ Why Facebook and Others Are Struggling to Delete Footage of the New Zealand Shooting. Time. Retrieved from http://time.com/5552367/new-zealand-shooting-video-facebook-youtube-twitter/

Quinn, M. (2018). Internet “Challenges” and Teenagers: A Guide for Primary Care Providers: Discussions Involving Internet Challenges with Children and Adolescents Should Be as Sensitive as Those About Sexuality and Drug or Alcohol Use. Clinical Advisor, 21(7), 30. Retrieved from http://search.ebscohost.com.ezproxy.lib.usf.edu/login.aspx?direct=true&db=edsgao&AN=edsgcl.550301581&site=eds-live

Ward, T. A. (2019, Mar 17). Behavioral Products of the New Zealand Massacre: Prevention Strategies. bsci21. Retrieved from https://bsci21.org/behavioral-products-of-the-new-zealand-massacre-prevention-strategies/

Yahoo7 News. (2017, Apr 17). Manhunt for Killer Responsible for Facebook Live ‘Easter Day Slaughter’ of Elderly Man. Yahoo. Retrieved from https://au.news.yahoo.com/facebook-live-murder-manhunt-after-elderly-cleveland-man-shot-in-the-head-35059157.html
