
Running head: ARTICLE CRITIQUE

Article Critique
Meghan Arias
George Mason University

Fask, Englander, and Wang (2014) set out to determine whether students are more likely to
cheat when taking online examinations than when taking exams in traditional classroom
environments. They also compared the two testing environments to determine whether
one confers an advantage in exam grades. I chose this article for two main
reasons. First, cheating is a serious problem that educators have faced for decades, one that has
no easy solution. The more we learn about the causes and circumstances that encourage
academic dishonesty, the better educators will be able to combat it. Second, I am particularly
interested in the strengths and limitations of educational technology. The potential for increased
cheating in online courses would certainly be a significant limitation weighing against the potential
benefits of online education. Since it is difficult enough for educators to stop cheating in
traditional classrooms where the instructor is physically present with the students, I believe it is
especially important to determine what differences there may be in cheating behavior in online
classrooms. It seems unlikely that educators will ever be able to eliminate academic dishonesty,
but we need to be aware of how the classroom environment, face-to-face or online, can influence
students' likelihood of cheating.
Analysis
The article begins with a review of the literature surrounding cheating, particularly in
relation to online exams. The authors raised several issues that justify their work (Fask et al.,
2014). First, one study showed that more than 80% of online tests are not proctored. Also of
concern are several findings estimating that the level of cheating in college has increased
anywhere from 39% to 82% since the 1960s. Regardless of the exact size of the increase, it does
appear clear that academic dishonesty is on the rise, which calls into question the level of trust
shown by over 80% of instructors who give students exams online without a proctor. With these
findings in mind, the authors noted how important it is to examine cheating in online testing
environments. They also criticized previous studies, which generally have reported no
statistically significant differences in cheating between online and traditional settings, for failing
to take into account the fact that "the online environment is physically and psychologically
different from the in-class environment" (Fask et al., 2014, p. 104). This critique leads directly
to the secondary question of their research, which examined what effect testing environments
had on test grades.

[Excellent introduction of the study: concise and scholarly in tone]

The main research question of the article explored whether students are more likely to
cheat on online exams compared to exams in traditional, proctored classroom environments.
Fask et al. (2014) argued that while a true experimental design is always preferable, it would be
impossible to implement such a design for this study. According to the authors, it would not be
possible to create the four groups necessary for the 2x2 factorial design: an in-class, proctored
group; an in-class, un-proctored group; an online, proctored group; and an online, un-proctored
group. The authors argued that requiring students in an online class to take an exam in a
proctored environment goes against everything online courses are designed to do.
I find the authors' objections to be somewhat overstated. Many online programs require
students to take exams in central locations and several methods utilize technology, such as
browser locks and video cameras, to control cheating in online classes. While the use of some of
these tools, such as requiring students to take an exam with a video camera on to monitor them
as they take the test, may raise ethical questions of their own, they are still potential options that
are currently in practice. Fask et al.'s (2014) argument that leaving a group of students alone in class to take an un-proctored exam would "invite scandal" (p. 105) is somewhat more believable.
It would be difficult for researchers to have students take an exam that affects their grade without
someone supervising, and removing the grade would likely reduce the motivation to cheat.
However, education researchers often face the difficulty of being unable to perform research with
a true experimental design due to budgetary, situational, or ethical restrictions. While their
altered design was not without flaws, the researchers developed a reasonable compromise to obtain
the best results possible within their limitations.
With these limitations in mind, Fask et al. (2014) set out to create a strong, though not
truly experimental, research design. The authors examined 44 students in two statistics
courses. Both groups were treated the same throughout the course, receiving the same lectures
and homework assignments from the same instructor, with the exception of the exam format.
One group took a practice and final exam online and the other took them in class. Though the
students were not randomly assigned to these classes, the authors assumed the classes were
approximately equivalent and compared several characteristics between the two groups to
confirm this assumption, finding few significant differences between the two classes (Fask et al.,
2014). While not ideal, this is a common practice in education research, as researchers often must work
with intact classes, and comparing the two groups in this manner is appropriate when random
assignment is not possible.
In order to separate the effect of the environment from the likelihood to cheat, both
classes completed a practice test three days before the final examination, in the same format in which
they would take the final exam (Fask et al., 2014). The researchers clearly informed the students that
the practice test was required, but would not affect their final grade. The authors believed this
would reduce the incentive to cheat on the practice test. This allowed Fask et al. (2014) to
examine the difference in cheating between the two conditions while accounting for the confounding
variable of testing environment.
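To make the logic of this design concrete, consider an illustration with invented numbers rather than figures from the study: suppose the online class averaged 72 on the low-stakes practice test and 84 on the final, while the in-class group averaged 76 on the practice test and 79 on the final. Because the practice-test gap should reflect only the testing environment, the portion of the final-exam gap plausibly attributable to cheating can be approximated by comparing how much each group improved:

Estimated cheating effect = (online final mean - online practice mean) - (in-class final mean - in-class practice mean)
                          = (84 - 72) - (79 - 76) = 9 points (hypothetical values).

Fask et al. (2014) conducted their own statistical analysis; these numbers serve only to illustrate the reasoning behind the design.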
The researchers found that students completing the online practice test had lower scores
than those taking the in-class exam (Fask et al., 2014). They suggested that students in this
environment might be at a disadvantage due to potential technical difficulties and the lack of
access to an instructor who could provide clarification for ambiguous questions. The final exam
scores, on the other hand, were significantly higher for the online group. The researchers
identified the cause of this difference to "likely [be] the result of cheating in the online class"
(Fask et al., 2014, p. 109). Despite the authors' confidence, there could be other plausible reasons
for the variation in scores. For example, the practice test results could have been due to
inexperience with online testing and the higher final score a result of having gained some
familiarity during the practice exam. However, the authors gave little further consideration to this
assumption. Attributing the difference between the practice and final test scores to cheating was
a central piece of the researchers' argument, so their conclusion would have been more
convincing had they spent more time addressing and rebutting alternative explanations.
Conclusion
Despite the limitations of this study, it does fill a significant need in the literature on the
subject of academic dishonesty, particularly in online courses. As Hoekema (2010) noted, "the
prevalence of cheating on assignments and tests is notoriously difficult to measure accurately,
but anyone who asserts that it is absent from her campus is engaging in self-deception" (Chapter
13, "Ethics on Campus: Two Modern Fallacies"). Fask et al. (2014) pointed out that most of the
previous research on this subject was done by asking students to complete anonymous surveys
about their cheating habits in online courses. The authors argued that there is little incentive for
students to be honest on these reports. This argument is supported by the large variation in the
results of previous studies mentioned earlier. Additionally, the students' own moral and ethical
predilections may influence their responses on such a self-report instrument. The "sensible knave"
(Pritchard, 2006) may lie to preserve his or her image as an ethical being. These students would
likely answer questions about the acceptability of cheating strongly in the negative, regardless of
whether they have engaged in such behavior, to maintain the image they wish to project to
others.
I also have some hesitation about the way the research was conducted. The courses Fask
et al. (2014) examined were statistics courses. The students all completed the mid-term exam in
class and were not informed of their final exam format until the final week of the course. It
seems these were traditional face-to-face courses, but the authors did allude to "classroom/online
experiences" (Fask et al., 2014, p. 105), so it can be assumed that all students had at least some
exposure to online assignments. However, I believe this raises an ethical dilemma. If the
students registered for a traditional, campus-based course and completed a traditional in-class
mid-term with no indication that an online exam was a possibility, giving them a final exam in
this format is dishonest. [excellent observation]
While the term "digital native" is often used for today's college students, there is debate
as to whether this term is accurate (Jones, Ramanau, Cross, & Healing, 2010). Therefore, it
cannot be assumed that students will feel comfortable with an online test. The authors did
address potential statistics anxiety; however, they did not indicate any attempt to identify
students who were uncomfortable using technology, nor did they seem to consider the level of
anxiety students might feel if required to take an exam in an unfamiliar format. As a student
with some statistics anxiety and a high level of comfort with technology, I believe I would have a
difficult time taking a math exam online. With this high potential for discomfort, students could
explain away their cheating for social reasons rather than offering a character-based explanation (Pritchard,
2006). It is not something in the student's character that caused him or her to cheat, but the
unfairness of the situation, thus shifting the blame for any wrongdoing outside of the self.
This area certainly deserves further attention in the research. Online classes will only
continue to proliferate and, as discussed earlier, the prevalence of cheating seems
to be on the rise. One study does not substantiate a theory, so researchers could conduct
similar studies in an attempt to replicate the results found by Fask et al. (2014). I would also
suggest comparisons between two courses with the same content where students self-select
whether they are in the campus-based or online format. While the learning experiences would be
somewhat different and there may be systematic differences in the groups due to the self-selection, I believe this would still prove beneficial, as only students comfortable with an online
format would be required to complete the exam in this way. Does it matter if students who
would not normally choose to complete an online course cheat on an online exam?
I believe it is more vital to ask questions that revolve around what educators can do to
discourage cheating in online environments. Are students less likely to cheat in an online class
where they are required to have a video camera on while taking the test? Students in traditional
formats continue to find ways to cheat on exams even with the instructor sitting in the room with
them, so nothing we can do in online courses will entirely end the practice. However, armed
with knowledge, educators teaching in online environments can work to set up their courses in
ways that foster honesty in their students.
References
Fask, A., Englander, F., & Wang, Z. (2014). Do online exams facilitate cheating? An experiment
designed to separate possible cheating from the effect of the online test-taking environment.
Journal of Academic Ethics, 12(2), 101–112. doi:10.1007/s10805-014-9207-1
Gentile, M. C. (2010). Giving voice to values: How to speak your mind when you know what's right.
New Haven, CT: Yale University Press.
Hoekema, D. A. (2010). Is there an ethicist in the house? How can we tell? In E. Kiss & J. P. Euben
(Eds.), Debating moral education: Rethinking the role of the modern university (Kindle version).
Durham, NC: Duke University Press.
Jones, C., Ramanau, R., Cross, S., & Healing, G. (2010). Net generation or Digital Natives: Is there a
distinct new generation entering university? Computers & Education, 54(3), 722–732.
doi:10.1016/j.compedu.2009.09.022
Pritchard, M. S. (2006). Professional integrity: Thinking ethically. Lawrence, KS: University Press of
Kansas.
Meghan: Excellent choice for this assignment, and you introduced it well. A clear strength of your paper
was your consistent critical analysis throughout, combined with your balanced view of the
associated strengths and limitations of this study. You ended with a thoughtful set of future
research questions and areas of possible refinement to the existing study. Your paper would have
been strengthened with a more robust integration of key concepts and constructs from the
literature to support your claims and/or to demonstrate your command of those concepts (-1 pt.).
Prof. Lucas
