
Marrott 1

Joshua Marrott

Allison Fernley

English 1010

February 27th, 2019

Transparency within our Algorithms

Discussions surrounding artificial intelligence constantly contend with whether or not this technology is actually a good thing. One key question keeps coming up in these discussions: "Should we disclose the algorithms in order to give justifications for their decisions?" The problem this question addresses is called the "black box," and the idea of revealing this information or making it more understandable is called transparency. One reason this is such a hot topic is that the people who work on these algorithms don't want to reveal how they are constructed, on the grounds that the algorithms are their intellectual property, or that of the companies they work for. Another argument is the belief that adding more transparency to these algorithms shifts responsibility toward the regulators and away from what the computer programs' algorithms were designed to do in the first place. In the article "We Need Transparency in Algorithms, but Too Much Can Backfire," Kartik Hosanagar and Vivian Jair make a case for why providing more information about algorithms can be a damaging decision in artificial intelligence.

Kartik Hosanagar is an author who writes specifically about the fundamental issues surrounding machine learning and the future of artificial intelligence. He is also a professor of technology and digital business at the Wharton School in Pennsylvania. Vivian Jair is the research assistant who aided Hosanagar with his book A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control. Her main contribution to the book centered on algorithms and machine learning's influence on society. This adds to the writers' credibility because they have been doing extensive research on the subject to ensure they provide good material for their book. Additionally, the article was published in Harvard Business Review, a well-rounded outlet that provides accurate material and interesting argumentative pieces.

In their article, Hosanagar and Jair discuss a couple of studies performed around a computer program giving students grades on papers they had written. This use of studies adds to the ethos of the paper. The two studies were done to determine what level of transparency is too far, and referencing the Stanford researchers who performed them adds more ethos to the article. I find the use of these studies especially compelling because they were based on paper scores; sometimes when taking a class, we all feel we should get a better grade than what is given. This reference to a credible study can be huge for young readers. The first study was conducted by Stanford professor Clifford Nass in 2013. Troubled students came to him claiming that the grading of their papers was unfair because one class was getting higher marks than the other. He then created a computer program to boost the scores of papers graded by his TA, who had been known to grade papers poorly on average. The second study was by Rene Kizilcec, a Stanford PhD student who worked under Nass. Kizilcec's study centered on openness and on giving examples of why a student was graded the way they were. The study was conducted through a peer-review grading platform: when someone submitted a paper, they would get two scores, the first from a peer and the second from a computer algorithm. After the students received their scores, they would rate the experience. Hosanagar and Jair argue that this study was an eye-opening experience in the realm of transparency. Nass's study used a non-transparent method, while Kizilcec's was fully transparent. But Kizilcec's study also found that full transparency was actually not rated as highly as middle or low levels of transparency.

What Hosanagar and Jair have in ethos they match in logos. Further in the article they give a few examples of why transparency, in the grand scheme of things, is a "fool's errand." They discuss the arguments about intellectual property and the shift of responsibility to the regulators of business, but what I found most compelling was the "gaming" argument. They cite the studies discussed earlier in the article to show how an algorithm can be exploited: if too much information is released, people can find loopholes in the algorithm. In terms of logos, this is a really persuasive argument for strict rules about what information can be given out to a buyer of an algorithm. If a company that bought the algorithm suffers a security breach or something of the sort, that puts the program at greater risk of being exposed. Hosanagar and Jair also explain that an issue in transparency with modern A.I. is actually the source code. Artificial intelligence is all about a program learning to operate without a human present, so naturally it must learn on the job. They argue that since the source code is generally only a few hundred lines, the rest of the program's behavior comes from its training. This is another example of where the authors' logos is very compelling, because we can't be more transparent if we are still trying to understand how a machine comes to its decision.

I found this argument extremely interesting because I recently began researching modern-day artificial intelligence. In my reading, the idea of the black box was very intriguing. The article I read previously was all about how more transparency was the key to understanding A.I. better. I think the ethos and logos of the authors really make the article persuasive to the reader, as they provide the studies mentioned earlier alongside their take on what can and can't be revealed about A.I. The authors also make a logical argument for why more transparency may not be the best thing by citing the studies at Stanford.

Work Cited

Hosanagar, Kartik, and Vivian Jair. "We Need Transparency in Algorithms, but Too Much Can Backfire." Harvard Business Review, 23 July 2018, updated 25 July 2018, https://hbr.org/2018/07/we-need-transparency-in-algorithms-but-too-much-can-backfire. Accessed 27 Feb. 2019.
