
Running head: ANNOTATED BIBLIOGRAPHY

Annotated Bibliography on Problems of Practice Topics


Kristy Rykard
University of South Carolina

kristyrykard@gmail.com
October 10, 2016
EDET 780, Section J61


Aagaard, J. (2015). Drawn to distraction: A qualitative study of off-task use of educational technology. Computers & Education, 87, 90–97. Retrieved from http://dx.doi.org/10.1016/j.compedu.2015.03.010

Cyber-slacking

Summary: This article offers a postphenomenologically informed qualitative study of students' off-task use of
technology during class. Building on interviews with students in a Danish business college about their off-task
technology use, findings suggest that off-task activity is not always a conscious choice. Because of deeply
sedimented bodily habits, students often experience habitual distraction in the form of pre-reflective attraction
towards certain frequently visited websites (e.g., Facebook). Laptops are experienced as endowed with an attractive
allure that pulls you in (p. 90). It was determined that students were drawn to distraction because of the difficulty
of the lesson or the lesson structure. If the lesson is too hard, students become mentally exhausted and resort to
distracting behaviors as a way to relax for a moment. If the lesson is too easy, they get bored and end up off-task.
The structure of the lesson becomes relevant in situations where there is a break in activity, when students are
waiting with nothing to do, or if there is a prolonged activity such as a lecture, which becomes monotonous. In
these situations, students are often drawn to distraction.
Method and limitations: The study was conducted over a period of six months at a business college in Denmark
with a bring-your-own-device policy. Aagaard sat in on and observed the classes of six teachers during this time
period, watching how students interacted with technology. At the end of the six months, he interviewed fourteen
students who volunteered. The students were young men and women between the ages of sixteen and twenty years
old. He chose only volunteers to avoid his own bias in selecting students that he had previously noticed
participating in off-task behaviors. The limitations, as described by the researcher, included the fact that the courses
he observed were compulsory; because students did not have a choice about being there, their distraction could have
been simply lack of motivation, rather than directly related to the access to technology. He would also like to
conduct further studies to determine if this same type of behavior occurs in situations outside of school and in
voluntary activities. In addition, the number of courses observed and students interviewed was insufficient. To
have more reliable data, a larger pool is necessary.
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking because it may explain why students are distracted by technology.
Perhaps it is a distraction that would occur regardless of the digital tools available. From this study, it could be
inferred that if students did not have technology in front of them during class, they would be drawn to distraction
in some other form. Also, knowing a possible cause of the distraction can help direct any future interventions to
avoid cyber-slacking in the classroom.
Other articles I need to locate and review:
Risko, E., Buchanan, D., Medimorec, S., & Kingstone, A. (2013). Everyday attention: mind wandering and
computer use during lectures. Computers & Education, 68.


Aiyegbayo, O. (2015). How and why academics do and do not use iPads for academic teaching? British Journal of Educational Technology, 46(6), 1324–1332. Retrieved from http://doi.org/10.1111/bjet.12202

Digital Lessons to Increase Rigor

Summary: The author conducted a funded research study which evaluated how academics used their iPads for
academic practices at a mid-sized UK University. Academic practices for the purpose of this research study were
categorised as (1) teaching, (2) research and (3) administration. This paper focuses only on the teaching key
findings of the funded study (p.1324-1325). The SAMR model was used to determine at what level of rigor the
teachers employed their technology in the classroom. The SAMR model (Puentedura, 2006, as cited in Aiyegbayo,
2015) provides a framework to understand how educators progress in their use of technology for teaching and
learning purposes. The SAMR model has four key levels: Substitution, Augmentation, Modification and
Redefinition... The Substitution and Augmentation levels are categorised as the enhancement levels while the
Modification and Redefinition levels are categorised as the transformation levels (p. 1325). The results show that
54% of the respondents to the survey use their iPads for teaching. The other 46% who do not use their iPads for
teaching gave three main reasons in the survey and subsequent interviews: 1) they prefer to use other devices such as laptops, 2) they don't know how to use the iPad for teaching, and 3) their students don't have iPads, and teaching with one may leave those students at a disadvantage. In situations where academics did use their iPads for teaching, it was determined that the majority of the lessons fell on the substitution or augmentation levels of the SAMR model. There was no evidence of transformational use of iPads for teaching because none of the teaching tasks was either significantly redesigned or newly created as a result of using the iPad (p. 1330).
Method and limitations: The study was conducted at a mid-sized UK University, which houses seven academic
schools. All faculty in two of the schools and senior faculty in the other five schools were issued iPads to encourage
digital literacy among their staff members. The data were gathered using both quantitative and qualitative methods.
Eighty-four academics completed a survey while 22 semi-structured interviews were conducted (p. 1324). The
interviewees were volunteers from the same pool of academics who completed the survey. The survey respondents
comprised 40 male teachers, 42 female teachers, and 2 who did not indicate their gender. The average age of the participants was 47 years old. Three of the respondents' students were also issued iPads; the other teachers' students were not, but they could bring their own devices. Although the author did not list any
limitations, there are still some that are evident. For example, only the use of iPads is evaluated, which does not
encompass other digital devices teachers may incorporate into their teaching. Also, there is a weak connection
between the use of the iPads and the purpose of determining the SAMR levels in this study. This could be more
fully examined.
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic about the levels at which teachers incorporate technology into their lessons. It sheds light on the fact that teachers either don't teach using digital methods or do so only at the most basic levels of substitution and augmentation because of a lack of training and pedagogical support. This could inform possible interventions I could attempt, such as providing professional development for teachers in the use of their school-issued devices.
Other articles I need to locate and review:
Cavanaugh, C., Hargis, J., Kamali, T., & Soto, M. (2013). Substitution to augmentation: Faculty adoption of iPad mobile learning in higher education. Interactive Technology and Smart Education, 10(4), 270–284.
Puentedura, R. (2006). Transformation, technology, and education. Retrieved November 10, 2013, from http://hippasus.com/resources/tte/puentedura_tte.pdf


Andert, D., & Alexakis, G. (2015). Virtual teaming and digital learning strategies: Preparing students for a global workplace. MERLOT Journal of Online Learning and Teaching, 11(1), 122–148.

Digital Lessons to Increase Rigor

Summary: The paper examines the changing role of technology in teaching group principles, concepts, and
theories to students competing in the global/virtual realities of teaming and group projects. The contribution presents
a comprehensive online/on-ground (i.e., hybrid or blended) course design that accelerates team and group theory
beyond the traditional live team application. The paper describes how students taking a [Teams and Group
Processes] class fully explored the variety of ever-expanding computer mediated communication platforms. They
used action-learning labs to test and apply the use of a variety of face-to-face, computer mediated communication,
and a blend of the two learning platforms to complete group assignments and discover the application of group
theory as it relates to group project planning, group development, and conflict resolution (p. 122). The purpose of
creating and testing this course was to inspire serendipitous learning. There were various quizzes, tests, and other
grades given in the course to encourage student participation, but the main focus of the study was on the
spontaneous learning that occurred through student-led inquiry and discovery in an online platform. It was
determined that simply offering synchronous and asynchronous communication would provide limited effect if not
for the opportunities to exploit vicarious learning and educational serendipity through inquiry. The authors found that virtual platforms expand the potential for serendipitous learning moments and that reverting to in-person performance of group tasks severely reduced group exploration, serendipity, and chance learning (p. 131).
Method and limitations: The study was conducted in an online Teams and Group Processes course. As students
entered the course, they were divided into six teams. The course has sixteen modules. Throughout the modules,
students read and discussed articles about virtual teaming, group interaction, and team leadership. The final course
project enlists student project groups to design an online intervention and present a wholly-virtual intervention... that
would gain strong communication, build trust and cohesion, and promote effectiveness among virtual team
members... [and] create an action plan for future improvement in communication among the geographically dispersed team (p. 128). Students used various online tools for collaboration, including Google Drive, ooVoo, Skype, GroupMe, and Second Life. These were not assigned by the teachers but were sourced and used at the students' discretion. Requiring students to find their own methods of interaction furthered the goal of serendipitous learning. Summative data were gathered to assess the T&GP course's serendipity. Students completed a self-perception form offering their top five major learnings from the course... and the top five things they will do differently as a result of the course learnings (p. 130). Some limitations were discussed by the authors. These
include the difficulty in measuring spontaneous, situational, and social learning. Because of this limitation, the
results are mainly anecdotal, rather than concrete numbers and data. In addition, the number of participants and
specific context of the study were not included in the paper. As these are important factors that affect the validity of
the results and their implications in my own situational context, I deem this a flaw in the presentation of this study.
How this relates to my topic and other things I've read? or What does this mean for my research?
This article relates to my topic of increasing the rigor of digital lessons because it shows that when students are
challenged to use technology to design their own education, they experience more meaningful and lasting learning.
This could inform the implementation of more rigorous digital lessons in my own study, as I could replicate this inquiry method with my students, giving them more control over their learning and fostering more serendipitous learning in my classroom.
Other articles I need to locate and review:
Kohn, A. (2012). Schooling beyond measure. Retrieved from http://alfiekohn.org/teaching/edweek/sbm.htm
Rosenshine, B. (2012). Principles of instruction: Research-based strategies that all teachers should know. American
Educator, 36(1), 12-19.


Cyber-slacking
Bowman, L. L., Levine, L. E., Waite, B. M., & Gendron, M. (2010). Can students really
multitask? An experimental study of instant messaging while reading. Computers & Education, 54(4), 927–931. Retrieved from http://doi.org/10.1016/j.compedu.2009.09.024
Summary: Students often multitask with electronic media while doing schoolwork. [This study] examined the
effects of one form of media often used in such multitasking, instant messaging (IM). [The researchers] predicted
that students who engaged in IMing while reading a typical academic psychology passage online would take longer
to read the passage and would perform more poorly on a test of comprehension of the passage. Participants were
randomly assigned to one of three conditions (IM before reading, IM during reading, or no IM) (p. 927). The
results revealed that those who IMed during reading took significantly longer [22%–59%] to read the passage
compared to those who IMed before reading and those who did not IM, even after the time taken to IM was
subtracted. Those who IMed before reading spent the least amount of time reading. No differences in test
performance (number correct) were observed for those who IMed before, those who IMed during, and those
who did not IM (p. 930).

Method and limitations: Eighty-nine college students (46 men and 43 women) aged 17–46 years... participated. The majority of students were in their first (46%) or second year of college (33%), White/European (74%), and
attended school full time (91%). Thirty-one percent lived in on-campus housing and 50% lived at home with parents.
Student academic majors were well distributed and came from all the schools in the University. Students were
enrolled in general psychology classes and received course credit for their participation (p. 928). Students were
given a 3,828-word passage on personality disorders to read using a computer program that could send them
simulated instant messages. Some students received IMs before reading the passage, some during, and some
received no IMs at all. After the reading/IMing experience, subjects took a 25-question multiple-choice test about
the reading. The overall mean performance was 53% correct. Students were given a questionnaire after the
experiment that asked about instruction clarity (99% agreed they were clear), whether the IMing experience was
realistic (71% said yes), and whether the reading passage was similar to what they would read in one of their
Psychology courses (82% agreed it was). One limitation indicated by the authors was that students were not given a time limit on reading the passage; a time limit would have more closely simulated in-class assignments that must be completed within a specific time frame.
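
As a concrete illustration of the comparison described above, the sketch below is my own (not from Bowman et al.); the group sizes, reading times, and IM times are invented, and it assumes Python with NumPy and SciPy available. It shows how reading times adjusted for IM time could be compared across the three conditions with a one-way ANOVA:

    # Illustrative sketch only -- the numbers are invented, not Bowman et al.'s data.
    # Mirrors the described analysis: subtract IM time from total reading time,
    # then compare the three conditions (IM before, IM during, no IM).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical per-student reading times (minutes) and time spent IMing.
    total_time = {
        "im_before": rng.normal(22, 4, 30),
        "im_during": rng.normal(32, 5, 30),
        "no_im":     rng.normal(25, 4, 29),
    }
    im_time = {
        "im_before": np.zeros(30),          # IMing happened before reading, not during
        "im_during": rng.normal(5, 1, 30),  # time spent IMing while reading
        "no_im":     np.zeros(29),
    }

    # Adjusted reading time = total time minus time spent IMing, as the study describes.
    adjusted = {k: total_time[k] - im_time[k] for k in total_time}

    # One-way ANOVA across the three conditions on the adjusted reading times.
    f_stat, p_value = stats.f_oneway(adjusted["im_before"],
                                     adjusted["im_during"],
                                     adjusted["no_im"])
    print(f"Adjusted reading time ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
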
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking because texting during class is the number one cyber-slacking activity my
colleagues and I have observed. Students believe that they have the ability to multi-task, send and receive messages,
and carry on full text conversations during all aspects of my class time. In this study, students had the opportunity to re-read after engaging in the IMing, but my students can't rewind a lecture or group discussion to pick up what they missed. This study could be a good basis for establishing why cyber-slacking is a problem that needs to be addressed.
Other articles I need to locate and review:
Cutrell, E., Czerwinski, M., & Horvitz, E. (2001). Notification, disruption, and memory: Effects of messaging
interruptions on memory and performance. In Proceedings of Interact 2001, Tokyo, Japan. Retrieved from
http://research.microsoft.com/~horvitz/disruptmemory.htm
Fox, A. B., Rosen, J., & Crawford, M. (2009). Distractions, distractions: Does instant messaging affect college students' performance on a concurrent reading comprehension task? CyberPsychology & Behavior, 12, 51–53.
Levine, L. E., Waite, B. M., & Bowman, L. L. (2007). Electronic media use and distractibility for academic reading in college youth. CyberPsychology & Behavior, 10(4), 560–566.
Pashler, H. E. (1993). Doing two things at the same time. American Scientist, 81, 48–55.
Pashler, H. (1994). Dual-task interference in simple tasks: Data and theory. Psychological Bulletin, 116, 220–244.


Çakıroğlu, Ü. (2014). Enriching project-based learning environments with virtual manipulatives: A comparative study. Eurasian Journal of Educational Research, (55), 201–221. Retrieved from http://dx.doi.org/10.14689/ejer.2014.55.12

Digital Lessons to Increase Rigor

Summary: The purpose of this study is to investigate the effect of a [Project Based Learning] environment
enriched with [virtual manipulatives] by comparing it with a traditional PBL environment. The comparison is
focused on academic achievements in Quadratic Equations and Polynomials subjects and attitudes towards
mathematics courses... The statistical analysis indicates that [experimental group] students significantly outperformed [control group] students with respect to [achievement test] results. The change in attitudes towards
mathematics courses was not statistically significant among the two groups. The results of the study provided some
empirical evidence about the positive effects of VMs that are used to enrich PBL environments. Although changes
in attitudes have not been seen, positive academic achievements have been revealed in two subjects. Based on the
study, it is concluded that the combination of VMs and PBL may be an effective way to enhance students' understanding of mathematics subjects and to improve their academic achievements (p. 201-202).
Method and limitations: The study included two groups of high school students. One experimental group (EG;
N = 30: 14 male, 16 female) and one comparison group (CG; N = 30: 15 male, 15 female) were used in the study.
Both of the students in the EG and the CG received a mathematics course from the same teacher in 9th grade. They
have only a little introductory knowledge about quadratic equations and polynomials. Thus, their backgrounds about
the subjects can be considered similar (p. 204). In addition, students were given a pretest on the subjects at hand,
and no significant difference was found in the prior knowledge of the two groups. For the first four weeks of the
course, both the EG and CG received the same traditional instruction. After four weeks, the teacher began a PBL
assignment about Polynomials and Quadratic Equations. Both groups received six weeks to complete their projects,
and both groups experienced all the stages in the PBL process. Both groups were given limited instruction during
the PBL assignment, as they were expected to create their own learning through exploration and inquiry. At the end
of the six weeks, students gave presentations about what they had learned and took a post-test on their skills in the
subjects. During the project, the EG was introduced to an online repository of fifteen VMs developed by
mathematicians and teachers, where they could complete interactive activities, enter data and parameters, and form
quadratic equations. The EG was encouraged to use the VMs in their study and to help create the solution to their
PBL problems. In the CG, students were not aware of the VM repository. They were told to complete projects, such as completing traditional homework, by performing research on the Internet, by referring to teachers' notes, and by reading textbooks. They found various examples and used them to develop interpretations about the solutions for the problems in the projects. In addition, they used wiki, forums, and some web sites specialized for school mathematics (p. 206). The researcher points out one limitation: the teacher may have felt the EG needed more support because they had never used VMs before. Therefore, she may have inadvertently given this group more attention, which could have influenced their stronger achievement gains.
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic about increasing rigor through the use of digital tools. The study shows that when a
transformative digital tool, the VMs, was used, the EG outperformed the CG. This study could be used to show that
more rigorous technological interventions are needed in the classroom to improve student achievement.
Other articles I need to locate and review:
Guthrie, C. (2010). Towards greater learner control: Web supported project-based learning. Journal of Information Systems Education, 21(1), 121-130.
Liu, Y., Lou, S., Shih, R., Meng, H., & Lee, C. (2010). A case study of online project-based learning: The beer king project. International Journal of Technology in Teaching and Learning, 6(1), 43-57.
Salajan, F., Perschbacher, S., Cash, M., Talwar, R., El-Badrawy, W., & Mount, G. J. (2009). Learning with web-based interactive objects: An investigation into student perceptions of effectiveness. Computers & Education, 53, 632–643.


Cheong, P. H., Shuter, R., & Suwinyattichaiporn, T. (2016). Managing student digital
distractions and hyperconnectivity: Communication strategies and challenges for
professorial authority. Communication Education, 65(3), 272–289.
http://doi.org/10.1080/03634523.2016.1159317

Cyber-slacking

Summary: This paper investigates the communication practices that constitute professorial authority to manage
college students digital distractions in classrooms (p. 272). The researchers interviewed 65 college professors to
discover the methods they use to avoid off-task digital behaviors in their students and the constraints associated with
the uses of such techniques. Results of the interviews showed that most instructors utilize similar approaches to
manage cyber-slacking in their classrooms. The most popular strategy enacted, mentioned by 49 of [the]
interviewees, was the communication of rules to control digital distractions as codified in course syllabi and
protocols (p. 277). The second most popular classroom management practice included related acts of strategic
redirection to help channel students' attention to their digital distractions back to instructors' teachings in class (p.
278). These include actions such as asking students to pay attention or facilitating an interactive activity to boost
participation. The third major classroom management strategy was the enforcement of communicative sanctions,
which included public humiliation, personal reprimands, and disabling wireless access (p. 279). A fourth way of
managing digital distractions was to deflect and ignore it[,] making their students accountable for their
diversionary practices and consequences (p. 280). The results of the interviews also showed that instructors
experienced several difficulties in their classroom management with regards to technology. They felt that their
classrooms were sometimes too large to effectively manage the use of technology, they were often unable to detect
cyber-slacking, they weren't sure that students were really being negatively affected by the off-task behavior, and they didn't have time to micro-manage what students were doing on their personal devices. Teachers also felt
torn because of the pressure to limit-engage (p. 282). They know the benefit of incorporating technology, but
they also feel the need to limit its uses in the classroom. The professors struggled with these conflicting necessities.
Method and limitations: Participants in this study were 65 full-time faculty members... in two American universities (a public research university in the Southwest, and a private university in the Midwest). Instructors interviewed were from 11 disciplines, with 55% of them in senior/full professor positions. Thirty-seven instructors interviewed were female, 28 were male. On average, they have taught 17 years in college (p. 275). The semi-structured interviews covered three sections: (a) instructors' perceptions of classroom digital distractions and
implications of technology for their work and student learning (e.g., Have you encountered any issues within the
classroom concerning inappropriate or distracting Internet and technology use among students? and How do you
manage these problems and issues?), (b) assessment of their classroom management and challenges to their
authority (e.g., To what extent do you think your classroom management strategies are successful or not successful?
and Do you think instructors tend to be aware or unaware of how students are actually using their laptops and smart
phones for non-class purposes?), and (c) their digital media use (e.g., What do you typically do online?) (p. 276).
There are several limitations to this study. First, the results are skewed toward the views of senior faculty in these universities. There needs to be more diversity among the faculty interviewed. Also, the results were anecdotal, self-reflective, and based on the lived experiences and professional opinions and perceptions of the professors who were interviewed, and the participants reported limited confidence in determining the success of their strategies (p. 283). In-class observations by a third party may be more reliable.
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking by confirming that this is a definite problem in classrooms where students have access to technology. In addition, it illustrates some of the common methods for dealing with cyber-slacking and the constraints associated with those techniques, which can help inform the interventions I may use to avoid cyber-slacking in my classroom as part of my own study.
Other articles I need to locate and review:
Kuznekoff, J. H., Munz, S., & Titsworth, S. (2015). Mobile phones in the classroom: Examining the effects of texting, Twitter, and message content on student learning. Communication Education, 64, 344–365.
Kuznekoff, J. H., & Titsworth, S. (2013). The impact of mobile phone usage on student learning. Communication Education, 62(3), 233–252.
Mihailidis, P. (2014). A tethered generation: Exploring the role of mobile phones in the daily life of young persons. Mobile Media and Communication, 2, 58–72.


Davidson, R. (2015). Wiki use that increases communication and collaboration motivation: A reflection several semesters later. Journal of Learning Design, 8(3), 92–105.

Digital Lessons to Increase Rigor

Summary: A wiki, which is an easily accessible and editable website, is [a] platform that provides the
opportunity for students to work on group projects without the barriers that arise from traditional group work. Whilst
wiki use is becoming more common, its use in education is patchy and pedagogical reasoning and evaluation of such
use is under-explored. This paper addresses the gap in pedagogy and evaluation in the context of accounting studies.
A traditional assessment task of writing an essay that involved a research and knowledge component was redesigned
to enable groups to communicate and collaborate at a distance using a wiki. Through participant observation and
student reflections of the group project, a wiki was found to be an effective platform to communicate and collaborate
on a group project and enabled different barriers to be broken down. Wikis provide ubiquitous access to group work,
organisation and version control, levels the playing field for dominant and shy students, and provides transparency
for non-performers and high achievers (p. 94).
Method and limitations: The study took place in a financial accounting course at The University of Adelaide,
Australia. There were 117 students enrolled in the course at the time of implementation. The majority of the
students were international, and English was their second language. In the past, a major essay was a component of
the assessment items. This was an individual assignment that included an element of research, analysis and writing
up of a given topic focused on one component of the course. Students would often lack motivation to do well as
they found the task uninteresting and difficult and so only wanted to pass, or at least get some marks, towards the
final grade (p. 98). For the purposes of this study, the instructor redesigned this assessment to incorporate
collaboration, enhance engagement, and foster more meaningful learning. Random groups were assigned for the
task, and students were to imagine that they were new graduates working in a private sector firm. A conversation
with their manager demonstrated a lack of knowledge of the Australian accounting standard setting environment.
The manager later asked the graduate for further clarification. The graduate, along with the other new graduates (the
students group members) decide to gather and provide the manager with the information required by using a wiki
(p. 98). Students were given free reign to be creative and present the information however they chose. When the
wikis were completed students were also required to complete a reflective piece. This reflective piece required
students to comment on the assignment and include their views about working in groups, working on the wiki, the
learning process, and reflect on what they learnt and what they could have done better. This data along with
instructor observations form the basis of determining the effectiveness of the assignment (p. 99). One limitation
noted by the author is that it is based solely on reflective and observational data. Future implementations of this
assessment could be evaluated using a more rigorous means. Empirical data could be captured in the form of a
survey and evaluated using statistical analysis (p. 102).
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of using digital tools to increase rigor in the classroom. In this experiment, the traditional
task of writing an essay to show understanding was transformed by the incorporation of technology. This
redefinition led to higher-level thinking, more meaningful learning, collaboration, and greater student engagement.
The findings are consistent with other things I've read in that they show that digital lessons are effective for increasing the rigor of the classroom and redefining the learning taking place.
Other articles I need to locate and review:
Bonk, C. J. (2009). The world is open: How web technology is revolutionizing education. San Francisco, CA:
Jossey-Bass.
Chapman, K. J., Meuter, M. L., Toy, D., & Wright, L. K. (2010). Are student groups dysfunctional? Perspectives from both sides of the classroom. Journal of Marketing Education, 32(1), 39-49.
Davidson, R. (2012). Wiki use that increases communication and collaboration motivation. Journal of Learning
Design, 5(2), 38-49. doi:10.5204/jld.v5i2.110


Domalewska, D. (2014). Technology-supported classroom for collaborative learning: Blogging in the foreign language classroom. International Journal of Education and Development Using Information and Communication Technology, 10(4), 21–30.

Digital Lessons to Increase Rigor

Summary: The study sets out to investigate how active and creative the students are when they are engaged in
a blogging activity [and] focuses on the phenomenon of blogging as a technologically enhanced support to
develop interaction and interrelatedness among learners in a foreign language course, particularly by the means of
providing and receiving feedback on students work (p. 24). Students were required to use the foreign language to
blog about their learning and respond through comments to each other's blogs. While most of the students met the requirements of the course, interaction and interest in the activity were low. Timestamps on the blog entries show that many were created shortly before they were due, indicating that students only interacted on the blog because it was a requirement to pass the course and that they had no interest in the activity. In addition, their comments to each other were either non-existent or short. Less than half of the posts received responses in the form of comments (p. 25). The findings of the study revealed interaction between bloggers was limited; thus, the study
indicates the restricted use of blog as a tool promoting collaboration in the foreign language classroom. Furthermore,
the results identified some of the problems related to technology-based learning and teaching (p. 21). The
students treated blogging as another teaching activity rather than a forum for exchange of opinions and information.
Hence, the idea of creating a community of active members with shared interests and goals has failed (p. 26).
Method and limitations: The twelve participants in the study were Thai students, ages 18-25, enrolled in a twelve-week English language foundations course for the purpose of preparing to study abroad. They were all high school graduates, and three of them held bachelor's degrees. Students were required as part of the course to post at least six blog entries about the contents of the course, their learning, and their personal reflections. They were also asked to comment on each other's blog entries. Students were given five weeks to complete the assignment. A total of sixty-two blog entries and thirty comments were analyzed in the study. The following research questions guided the study: (1) how are the students involved in using blogs for creating the learning community? (2) what is the nature of the students' comments? (p. 24). The author mentioned some of the limitations in this study. Since the blogging activity was in English, which was the second language for the students, the students' uneasiness with the language and fear of criticism from their peers may have been a factor in their limited engagement. In addition, the teacher
acted as the moderator of the course blog, and it may have been more effective for the students to moderate.
Furthermore, the use of blogging seems to be less effective than face-to-face interaction for learning a language, and
so the subject matter itself was a barrier to the success of the project.
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of using digital tools to increase rigor in the classroom. Even though this experiment failed
for a language course, the study gives insight into future uses of the technique. This could inform my
implementations of similar digital collaboration tools during my study to improve rigor and higher-level thinking in
the classroom. Blogs provide an authentic outlet for writing, and authenticity is a significant factor in providing
challenging tasks. Therefore, this study could help me discover ways to improve the use of this digital tool for my
students.
Other articles I need to locate and review:
Bowers, C. (2011). Let them eat data: How computers affect education, cultural diversity, and the prospects of
ecological sustainability. University of Georgia Press.
Lenhart, A. (2010). Teens, cell phones and texting. Pew Research Center Publications. Retrieved from
http://pewresearch.org/pubs/1572/teens-cell-phones-text-messages
Lundstrom, K., & Baker, W. (2009). To give is better than to receive: The benefits of peer review to the reviewer's own writing. Journal of Second Language Writing, 18(1), 30-43.
Subrahmanyam, K. & Greenfield, P. (2008). Online communication and adolescent relationships. The Future of
Children, 18(1), 119-146.



Duncan, D. K., Hoekstra, A. R., & Wilcox, B. R. (2012). Digital devices, distraction, and
student performance: Does in-class cell phone use reduce learning? Astronomy Education Review, 11(1), 1–4. Retrieved from http://doi.org/10.3847/AER2012011

Cyber-slacking

Summary: Combining observation [and] survey... this research assesses the effects of technology use on student attitudes and learning. Data were gathered in [five] introductory science courses at a major university. Results show a significant negative correlation between in-class phone use and final grades, with use of cell phones corresponding to a drop of 0.36 ± 0.08 on a 4-point scale where 4.0 = A. These findings are consistent with research (Ophir, Nass, and
Wagner 2009, [as cited in Duncan, Hoekstra, & Wilcox, 2012]) suggesting students cannot multitask nearly as
effectively as they think they can (p. 1). Students self-reported an average of three cell phone uses per class period;
however, observation revealed an actual rate of 6.85 incidents of use per user, per class period. A higher incidence of cell phone use correlated with lower grades in the course.
Method and limitations: Research was conducted... at a large state university in the western U.S. ... The study focused on in-class use of cellular phones in five [introductory astronomy] courses... Utilizing a mixed-methods approach consisting of in-class observations [and] survey responses, [the researchers] examined the effects of digital devices on student performance. Survey questions targeted demographics, student attitudes, and self-reported levels of technology use (p. 1-2). Three hundred ninety-two students reported their frequency of in-class cell
phone use, and these data were then correlated with their final course grades (p. 2). In addition, the researchers
conducted thirty-two observations of the same students to compare actual use to reported use. The limitations to this
study are as follows: Students self-reported that they used their phones an average of three times per class period,
but observational data indicates the frequency is much higher. It seems that students tended to underreport their cell
phone use during class. In addition, while the comparison between cell phone use and grades does show a negative
correlation, it does not prove causality. Also, this research is limited by a single institutional context; further
investigation is needed to determine whether these findings apply to learning behavior in other disciplines and
across educational contexts. [Lastly,] digital devices may be more likely to distract students in large, nonmajor
courses such as the ones studied here (p. 3).
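
To make the reported relationship concrete for my own analysis, the sketch below is my own illustration (the data are simulated, not Duncan, Hoekstra, and Wilcox's) and assumes Python with NumPy and SciPy available. It shows how self-reported phone use could be correlated with final grades and how a per-use grade change could be estimated:

    # Illustrative sketch only -- the numbers are invented, not the study's data.
    # Mirrors the described analysis: correlate self-reported in-class phone use
    # with final course grade on a 4-point scale.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_students = 392  # matches the reported sample size; the responses are simulated

    # Hypothetical self-reported uses per class period and final grades (4.0 = A).
    phone_use = rng.poisson(3, n_students).astype(float)
    grades = np.clip(3.2 - 0.1 * phone_use + rng.normal(0, 0.5, n_students), 0, 4)

    # Pearson correlation between phone use and final grade.
    r, p = stats.pearsonr(phone_use, grades)

    # Simple linear regression: the slope estimates the grade change per reported use.
    slope, intercept, rvalue, pvalue, stderr = stats.linregress(phone_use, grades)

    print(f"r = {r:.2f} (p = {p:.3g}); grade change per use = {slope:.2f} +/- {stderr:.2f}")
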
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking because it shows a negative correlation between the off-task use of cell
phones during class and grades. Because cell phone use is a significant distraction in high school classrooms, this is
especially relevant to my study. This research is consistent with other things I have read, showing a negative effect
of technological distractions on learning. The article is a good foundation for my own study, illustrating the national context of the problem and its relevance to my practice.
Other articles I need to locate and review:
Ophir, E., Nass, C., and Wagner, A. (2009) Cognitive control in media multi-taskers. Proceedings of the National
Academy of Sciences, USA, 106, 15583.


Cyber-slacking
Gupta, N., & Irwin, J. D. (2016). In-class distractions: The role of Facebook and the
primary learning task. Computers in Human Behavior, 55, 1165–1178. Retrieved from
http://doi.org/10.1016/j.chb.2014.10.022

Summary: The current study sought to examine how [Facebook] FB distractions (goal-irrelevant FB intrusions)
are selected by attentional mechanisms in an educational setting, and how its purposeful selection (goal-relevant FB
intrusions) affects memory and comprehension on the primary learning task, for lectures of high (HI) as well as low
interest (LI) (p. 1174). The participants in the study listened to two lectures, during which some of the participants
were presented with simulated FB newsfeed popups. After the lectures, participants took a test on the lecture
content and the FB newsfeed content. In addition, survey data were used to determine the participants' use of FB while studying, their multitasking tendencies, and the validity of the experiment. Only 6 participants, on average, reported to never have FB open while studying, working and listening to live and online lectures... 79% of participants report[ed] that they receive more than six notifications to their mobile device or computer while working or studying each day... University students tend to self-initiate task switching with FB more frequently than other forms of communication... Participants also reported that they would be more likely to immediately attend to a FB chat message, FB photo comment, FB wall-post and an FB image wall-post, than immediately attend to an email or instant message (p. 1171-1172). The results show that students are more susceptible to FB distractions during LI
learning tasks. In addition, the results show that goal-relevant FB interruptions lead to lower comprehension scores,
but only in the HI lecture. Overall, comprehension of the lectures decreased as attention to FB interruptions
increased.
Method and limitations: One hundred and fifty students from Macquarie University, 35 males and 115 females,
aged from 17 to 28 (M = 19.51; SD = 2.246), participated in this study. Participants were required to be FB users
(p. 1168-1169). Participants used headphones to listen to two lectures, one HI about love and one LI about
qualitative research. Participants were randomly allocated to one of six conditions: (A) No FB intrusions (controls), HI lecture. (B) No FB intrusions (controls), LI lecture. (C) Goal-relevant FB intrusions, HI lecture. (D) Goal-relevant FB intrusions, LI lecture. (E) Goal-irrelevant FB intrusions, HI lecture. (F) Goal-irrelevant FB intrusions, LI lecture (p. 1170). While they listened, participants in the FB intrusions groups were presented with simulated FB newsfeed popups. Other participants received no FB interruptions. The posts were intended to be typical of the kind that participants may view on their FB newsfeed [, including images, status updates, and photo comments]... The images were intended to be funny, to make FB as appealing as it would be to students using their own accounts... Listening comprehension and memory of the two lectures were tested using 15 multiple-choice questions for each lecture... memory of the FB material was tested using 15 forced choice (true–false) questions
(p. 1169). Participants in the study completed a survey about their demographics, electronic communication usage,
dual-tasking during study, individual preferences for multi-tasking, familiarity with topics presented in the lecture,
enjoyment of the topics in the lectures, level of difficulty in maintaining concentration during the lectures, and
whether they felt the FB experience was realistic. The authors indicate some limitations to the study. First,
participants were alert and informed of the study's purpose, and they were instructed on what they had to do by the
experimenter. This might have caused the participants to summon more attentional resources, than might be the case
in a lecture theatre or at home while listening to online lectures. [Also,] in this study, the majority (90%) of the
participants were aged between 17 and 22 years old, and 76% of the sample was female, which limits the
applicability of the results to a wider age demographic and to males, respectively (p. 1176).
How this relates to my topic and other things I've read? or What does this mean for my research?
This is relevant to my topic of cyber-slacking. Facebook and other social media often distract high school students;
therefore, this study that negatively connects Facebook distractions to learning helps give credence to this topic as a
problem of practice. In addition, the fact that it also analyzes the participants' attention to certain types of Facebook interruptions could help me determine possible interventions for helping students avoid such distractions.
Other articles I need to locate and review:
Adler, R. F., & Benbunan-Fich, R. (2013). Self-interruptions in discretionary multi-tasking. Computers in Human Behavior, 29(4), 1441–1449. Retrieved from http://dx.doi.org/10.1016/j.chb.2013.01.040
Silvia, P., McCord, D., & Gendolla, G. (2010). Self-focused attention, performance expectancies, and the intensity of effort: Do people try harder for harder goals? Motivation & Emotion, 34(4), 363–370. Retrieved from http://dx.doi.org/10.1007/s11031-010-9192-7


Digital Differentiated Instruction

Haelermans, C., Ghysels, J., & Prince, F. (2015). Increasing performance by differentiated teaching? Experimental evidence of the student benefits of digital differentiation. British Journal of Educational Technology, 46(6), 1161–1174. http://doi.org/10.1111/bjet.12209

Summary: This paper explores the effect of digital differentiation on student performance using a randomized
experiment. The experiment is conducted in a second year biology class among 115 prevocational students in the
Netherlands. Differentiation allowed students in the treatment group to work at three different levels (p.1161). At
the end of a twelve-week differentiation period, during which students worked on levels based on pre-test scores,
students post-test scores show that those who received the differentiation experienced a more significant
improvement from pre to post-test than the students who received no differentiation. The results show that there is
a significant effect of digital differentiation on the post-test score. This effect is robust to adding covariates such as
students ability, grade repetition, age, gender, class and average neighborhood income. There are no differential
effects when dividing students in three groups, by ability. The results imply that differentiation in large classrooms
is possible and beneficial for all students, once done digitally (p. 1161).
Method and limitations: The study was conducted on 115 second-year biology students at an average-sized secondary school in the Netherlands. The students' ages ranged from 12 to 15; fifty-seven of the students were female, and fifty-eight were male. Fifty-seven students were randomly assigned to the treatment group; fifty-eight were randomly assigned to the control group. The experiment lasted for twelve weeks, during which both groups of students were engaged in three four-week topics of study. The first topic was metabolism and respiration, the second
topic was blood circulation, and the third topic was your health (p. 1165). Each unit had a pre and post-test. All
classes were computerized, and all tests and lessons were delivered digitally. The pretest in each unit was used to
determine the track for the treatment group but had no effect for the control group, other than to earn bonus points.
There were three different tracks: the practical prevocational track, the theoretical prevocational track and the
higher general track (p. 1166). After the students' tracks were established, the students in the treatment group
completed lessons depending on the track they were assigned. The lessons at the practical prevocational track level
were written in more simple language, using fewer words and less complicated sentences. Furthermore, the pace of
the exercises would be a bit lower, which means these students studied the minimum amount of topics. The content
of the basic topics did not differ between the groups. The theoretical prevocational track had some extra topics
compared with the practical track and more difficult explanation and more challenging exercises. This was also the
case for the higher general track, where more topics were discussed (p. 1166). The control group students always
followed the theoretical prevocational track, which was the middle level. At the end of the twelve-week study, the results of all the post-tests were combined for analysis. One major limitation to this study is that testing
was restricted to the least academic track of the Dutch secondary education system. Whether this type of
differentiation also would work in the more heterogeneous classes of countries that do not track students early in
secondary education is not clear, although the literature does not give counter-indications on this matter (p.
1172).
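
As a concrete illustration of the kind of analysis reported (a treatment effect that remains significant after adding covariates), the sketch below is my own; the data are simulated, not Haelermans, Ghysels, and Prince's, and it assumes Python with pandas and statsmodels available. It estimates the effect of digital differentiation on post-test scores with an ordinary least squares regression:

    # Illustrative sketch only -- simulated data, not the study's.
    # Mirrors the described analysis: regress post-test scores on a treatment
    # indicator while controlling for covariates such as pre-test score, age, and gender.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 115  # matches the reported number of students; the values below are invented

    df = pd.DataFrame({
        "treatment": rng.permutation([1] * 57 + [0] * 58),  # 57 treated, 58 control
        "pre_score": rng.normal(6.0, 1.5, n),
        "age": rng.integers(12, 16, n),
        "female": rng.integers(0, 2, n),
    })
    # Simulated post-test: baseline knowledge plus a hypothetical treatment effect.
    df["post_score"] = (0.8 * df["pre_score"] + 0.5 * df["treatment"]
                        + rng.normal(0, 1.0, n))

    # OLS of post-test score on treatment with covariates added.
    model = smf.ols("post_score ~ treatment + pre_score + age + female", data=df).fit()
    print(model.params["treatment"], model.bse["treatment"])  # effect estimate and its SE
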
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of using digital tools to differentiate instruction. It illustrates one system that could be used
to implement differentiated instruction in the classroom: pretest, placement in groups based on pretest scores,
differentiated lessons for each ability group, and post-tests to determine growth. This article can inform my
implementation of a system for digital differentiated instruction in my own study.
Other articles I need to locate and review:
Burns, M. K., Kanive, R., & DeGrande, M. (2012). Effect of a computer-delivered math fact intervention as a supplemental intervention for math in third and fourth grades. Remedial and Special Education, 33(3), 184–191.
Pilli, O., & Aksu, M. (2013). The effects of computer-assisted instruction on the achievement, attitudes and retention of fourth grade mathematics students in North Cyprus. Computers and Education, 61, 62–71.
Reis, S. M., McCoach, D. B., Little, C. A., Muller, L. M., & Kaniskan, R. B. (2010). The effects of differentiated instruction and enrichment pedagogy on reading achievement in five elementary schools. American Educational Research Journal, 48(2), 462–501.
Smith, G. E., & Throne, S. (2009). Differentiating instruction with technology in middle school classrooms. International Society for Technology in Education.


Harper, B., & Milman, N. B. (2016). One-to-one technology in K–12 classrooms: A review of the literature from 2004 through 2014. Journal of Research on Technology in Education, 48(2), 129–142. http://doi.org/10.1080/15391523.2016.1146564

Digital Lessons to Increase Rigor, Digital Differentiated Instruction, & Cyber-slacking

Summary: This literature review examined empirical research conducted between 2004 and 2014 regarding 1:1
technologies in K–12 educational settings (p. 129). The researchers examined the critical characteristics of each
of the 46 selected studies [and] three major themes related to student learning emerged from analysis: (a) student
achievement, (b) changes to the classroom environment, and (c) student motivation and engagement. In addition[,]
researchers examined the ways in which students used 1:1 technology in classrooms and the important challenges
facing technology integration in classrooms (p. 131). The findings in the research show that most researchers agree
that 1:1 technology has positive effects on classroom environment, such as improved learning experiences, more
effective individual and small group teacher/student interaction, and better differentiated and collaborative learning
opportunities. In the area of student engagement, they found mixed results. Some research claimed that student
engagement increased, but those studies only evaluated engagement levels at the beginning of the 1:1
implementation. Other studies showed that student engagement decreased as the novelty of the devices wore off, at
which point more inappropriate uses of technology were observed. It was also discovered that across several studies,
the most common student uses of technology were web browsing and word processing, which the research noted
was not the intended use for the 1:1 devices. The results also showed several impediments to 1:1 implementation in
the research, including stakeholder buy-in, teachers' perceptions of and trepidation about using new technology, steep
learning curves and lack of training, and insufficient technical support.
Method and limitations: The authors focused only on published, peer-reviewed, empirical studies of 1:1 technology in K–12 school settings published between 2004 and 2014... [They] decided to undertake multiple,
overlapping key-word searches to identify all relevant articles (p. 130). They selected forty-six articles from their
search. After identifying the set of articles to include in the review, [they] read each article to identify its critical
features and entered these data into a database. These included: Research questions addressed in the study; Types of
studies (quantitative, qualitative, mixed methods); Methodology; Content area; Number of participants; Grade level;
Data sources; Type of device studied; Key findings. [They] used the constant comparative method (Glaser, 1965;
Glaser & Strauss, 1967[, as cited by Harper & Milman, 2016]) to analyze and code the themes that emerged as
[they] examined the selected articles' research questions and findings (p. 130). The authors stated several limitations: First, conducting research using online databases is not foolproof... searches are only as good as (a) access to the specific databases, (b) the keywords indexed within these databases, (c) the keywords utilized by individuals conducting searches of these databases, and (d) the manuscripts indexed within the various databases. Second, conference papers and proceedings are rich in empirical research, but acquiring them is challenging and inconsistent. Third, conceptual, theoretical pieces and implementation accounts contributed to [their] understanding
of the issue; however, they lack the necessary peer-reviewed evidence to substantiate their use. Moreover, because
the peer review process for books and dissertations is not typically as rigorous as that required for peer-reviewed
research journals, they were not included. The variability in classroom contexts, research foci, and devices used also
is a limitation. It is challenging to compare studies with such diverse goals, objectives, purposes, and research
designs, just to name a few of the differences of the studies examined in this review of the literature (p. 139).
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to all three of my topics: digital lessons to increase rigor, digital differentiated instruction, and cyber-slacking. This study brings all of these topics together to discuss the trends in research in the new millennium. It
includes findings about how technology can raise student achievement, how it enhances differentiation opportunities,
and how it also can lead to inappropriate uses. Knowing that these topics are consistent across a larger body of
research is helpful in determining the national context of these issues and to also see their interconnectedness.
Other articles I need to locate and review:
Donovan, L., Green, T., & Hartley, K. (2010). An examination of one-to-one computing in the middle school: Does increased access bring about increased student engagement? Journal of Educational Computing Research, 42(4), 423–441.
Hur, J. W., & Oh, J. (2012). Learning, engagement, and technology: Middle school students' three-year experience in pervasive technology environments in South Korea. Journal of Educational Computing Research, 46(2), 295–312.


Hilton, J. T. (2016). A case study of the application of SAMR and TPACK for reflection on
technology integration into two social studies classrooms. The Social Studies, 107(2),
68–73. http://doi.org/10.1080/00377996.2015.1124376

Digital Lessons to Increase Rigor

Summary: This article provides a practical reflection on the two most popular lenses currently discussed in
education technology literature: the lenses of SAMR (Substitution, Augmentation, Modification, and Redefinition)
and TPACK (Technological Pedagogical Content Knowledge). This case study follows a yearlong integration of
iPad carts into two eighth-grade social studies classrooms. During the course of the year, two veteran teachers
systematically reflected on their technology use through each technology lens (p. 68). What becomes clear
through this analysis is that in a practical setting, SAMR is more useful in reflecting on the capacity to use a
particular technology to accomplish an instructional objective, rather than designing instruction to reach a particular
level of SAMR… When prompted to reflect on [the] observation…that SAMR does not function in a hierarchal
fashion, both social studies teachers felt that the SAMR diagram, as currently visualized, mischaracterized the
practical application of SAMR in the classroom… Connecting their thoughts to a more common notion in education,
the teachers suggested that SAMR functioned similarly to Bloom's taxonomy, in that teachers would strive to reach
higher levels but would not neglect the lower levels in the process" (p. 71). "Though both social studies teachers
indicated that they consistently tried for a balanced blending of technology, pedagogy, and content (TPACK),
technical difficulties often drew them out of TPACK and into technical knowledge (TK)… suggest[ing] a disconnect
between the TPACK theory and the realities of technology integration… [Also,] because TPACK suggests a constant
effort to incorporate technology, these social studies teachers felt that the theory was not fully reflective of their
actual classrooms. A more nuanced view that they arrived at was that TPACK was most useful when considering
how to incorporate technology into learning activities that are already strong pedagogically and content wise… As
to which was the preferred model, both teachers agreed that SAMR was the easier model to apply as a reflective lens.
They were able to learn more from thinking about their technology integration from a SAMR perspective, and they
were able to use SAMR to generate ideas about ways to modify future instruction to better make use of the available
technology. Though both teachers saw merit in the TPACK model, they felt that [it] was overly complex and that
functioning in the center of TPACK was an ideal to strive for that was often challenging due to practical technology
concerns" (p. 72).
Method and limitations: The study took place in an urban school district in southwestern PA. The school
districts population was 33% minority and 47% lower socioeconomic status. During the 2014-2015 school year,
two veteran social studies teachers, who had been teaching for eighteen and fourteen years respectively,
implemented the use of two iPad carts with thirty iPads each into their 8th grade social studies classes. "Information
was first collected throughout the year in the form of two technology journals into which both teachers noted their
efforts to integrate the iPads into their lessons and their reflections on the successes and struggles this entailed…
Following the year of instruction, both teachers and the researcher met in two separate two-hour-long sessions,
following a semistructured interview format, during which the teachers were first given materials relating to each
technology model and then asked various predetermined questions and situational emergent questions to reflect on
their technology use" (Robson 2002[, as cited in Hilton, 2016]) (p. 70). There are three limitations to this study: 1)
only two teachers participated, and a larger pool with more information would be more reliable, 2) the teachers
reflected back over their technology integration after the fact, and it would be more effective if they had studied the
two models before the start of the year and attempted to use them in their teaching for the study, and 3) a survey
with coded responses might provide more reliable data.
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of increasing rigor in the classroom through the use of technology. Both models
used in the study, SAMR and TPACK, are methods to evaluate the level at which teachers are incorporating
technology, which has a connection with the level of rigor in the lesson. These teachers' perceptions and
evaluations of these two models can inform my choice of which model to use in my own instruction and
implementation of technology in my classroom for the purposes of my own research.
Other articles I need to locate and review:
Archambault, L. M., & Barnett, J. H. (2010). Revisiting technological pedagogical content knowledge:
Exploring the TPACK framework. Computers & Education, 55(4), 1656–1662.
Hur, J. W., Shannon, D., & Wolf, S. (2016). An Investigation of relationships between
internal and external factors affecting technology integration in classrooms. Journal
of Digital Learning in Teacher Education, 32(3), 105–114. Retrieved from
http://doi.org/10.1080/21532974.2016.1169959
Digital Lessons to Increase Rigor
Summary: This study "examined the significance and relationships of five factors influencing technology
integration. A proposed model was tested using structural equation modeling with self-reported data from 223 K–12
school teachers in the United States" (p. 105). Through a survey created by the authors, it was determined that "the
teachers' explanations of specific technology integration lessons indicated that although an increasing effort had
been made for teachers to utilize technology into day-to-day practices, technology was often used for teacher-directed activity (e.g., interactive whiteboards, document cameras), low-level fact checking (e.g., interactive
response systems), classroom management (e.g., learning management systems), or low-technology-skill activities
(e.g., writing a research paper, creating a PowerPoint presentation). … A minority of teachers reported using
technology for 21st-century skill-building activities such as collaboratively creating a digital video or digital poster
for sharing with people outside of school communities" (p. 109–110). The authors concluded that the factors that
determined the teachers' use of technology (or lack thereof) were the value teachers place on technology (directly affected by
their confidence in using technology, professional development, and their principals' support of technology
integration) and the school's budget to provide appropriate technology.
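To make the structural equation modeling approach concrete for my own planning, below is a minimal sketch in Python of a simplified, two-equation approximation of the kind of path model described above. It is not the authors' model or data; the scale abbreviations follow the article, but the paths tested, the coefficients, and the simulated scores are my own assumptions.

# Minimal sketch (not the authors' SEM): approximate the reported path structure
# with two ordinary least squares regressions. Scale names follow the article;
# the simulated scores and the exact paths tested here are assumptions only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 223  # same sample size as the survey
df = pd.DataFrame({
    "PD": rng.normal(3.5, 0.8, n),   # professional development
    "PS": rng.normal(3.7, 0.9, n),   # principals' support
    "PSE": rng.normal(3.9, 0.7, n),  # perceived self-efficacy
    "AB": rng.normal(3.2, 1.0, n),   # appropriate budget
})
# Hypothetical structural relations: perceived benefit depends on PD, PS, PSE;
# technology use depends on perceived benefit and budget.
df["PB"] = 0.3 * df.PD + 0.3 * df.PS + 0.2 * df.PSE + rng.normal(0, 0.5, n)
df["TU"] = 0.5 * df.PB + 0.3 * df.AB + rng.normal(0, 0.5, n)

path1 = smf.ols("PB ~ PD + PS + PSE", data=df).fit()  # value placed on technology
path2 = smf.ols("TU ~ PB + AB", data=df).fit()        # reported technology use
print(path1.params, path2.params, sep="\n")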
Method and limitations: "The study was based on the teacher survey data from the Enhancing Education Through
Technology (EETT) program in the state of Alabama during the 2011–2012 school year. … The survey was
designed to examine the impacts of the EETT program by investigating six primary measurement scales. These
scales included Professional Development (PD), Principals' Support (PS), Appropriate Budget (AB), Perceived
Benefit (PB), Perceived Self-Efficacy (PSE), and Technology Use (TU)… The authors developed the survey based
on previous studies" (p. 108). "As part of program evaluation, [they] conducted numerous phone interviews with
technology coordinators and face-to-face interviews with participant teachers. The survey also included open-ended
questions asking participants to describe their technology integration lesson plans and the impacts of technology use"
(p. 109). Two hundred twenty-three teachers completed the survey. "The average total teaching experiences were
12.7 years with a range from 0.5 to 38 years. … Almost half of the participants (43%) reported themselves to be
advanced technology users, followed by 28% intermediate and 26% experts. Only four participants reported that
they were either a novice or a beginner. … About 74% participants owned a smart phone, and 91% participants used an
Internet-connected laptop at home. In terms of ethnicity, 90.6% were Caucasian, followed by 7.2% African
American[, which is representative of the overall makeup of the Alabama teaching population]. Approximately 78%
were female teachers and 56.1% of participants had a master's degree… About 43% were Language Arts teachers,
17% were Math, 10% were Science, and 11% were Social Studies teachers. There were 10 Special Education
teachers and eight Business/Technology teachers as well" (p. 109). There were a few limitations discussed by the
authors. First, "the findings were based on self-reported data of an online survey, and social desirability bias could
be a concern for the survey. Participants knew that they were asked to fill out the survey as part of a grant evaluation,
and this may have caused participants to provide socially desirable responses… Another limitation is [the]
participant selection. Participants were teachers who worked in mostly low-income school districts in the
southeastern United States; generalization to other contexts should be done with caution as well" (p. 113).
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of providing rigorous lessons using digital tools. According to this article, teachers who do
not value technology, do not feel comfortable with it, lack appropriate professional development, lack support
from their principals, or work in a school without a budget adequate to provide appropriate technology do not
incorporate technology at the levels they should (if at all). This research could shed light on the reasons behind the
lack of rigorous digital lessons and provide insight into the types of interventions needed to improve in this area.
Other articles I need to locate and review:
Madden, M., Lenhart, A., Duggan, M., Cortesi, S., & Gasser, U. (2013). Teens and technology 2013. Pew Research
Center Internet & American Life Project. Retrieved from http://www.pewinternet.org/2013/03/13/teens-and-technology-2013/
Nikolopoulou, K., & Gialamas, V. (2015). Barriers to the integration of computers in early childhood settings:
Teachers' perceptions. Education and Information Technologies, 20(2), 285–301.
Cyber-slacking
Jackson, L. D. (2013). Is mobile technology in the classroom a helpful tool or a
distraction?: A report of university students' attitudes, usage practices, and
suggestions for policies. The International Journal of Technology, Knowledge and Society, 8(5), 129–140.
Summary: This research "describes university students' perceptions concerning mobile devices in the classroom.
Based on a survey conducted at the California Polytechnic State University, "the following topics are reported and
discussed: 1) How do students use technology in class, including habits and web sites visited? 2) Do students
experience self-reported distraction from mobile devices, laptops, and tablets? 3) What issues concerning technology
and test security are students aware of? 4) What are students' perceptions of the benefits of technology? and, (5)
What classroom policy suggestions do students offer?" (p. 129). Some of the major findings of this study include:
"(a) mobile phones are viewed as mostly a distraction by a majority of this sample (76%) whereas laptops are viewed
as mostly a helpful learning tool by 90% of the sample, (b) students would like faculty to consider their perspectives
when formulating policies concerning mobile technology, (c) most students think professors should articulate clear
policies, (d) roughly half of students are aware of ways in which mobile technology is used to cheat, and, (e) a
majority of students feel annoyed by complete bans of technology in the classroom (p. 137). Also, results show
that 39.1% of mobile phone use and 50% of tablet use during class is for nonacademic use only. The top
nonacademic websites visited included Facebook and Twitter. Lastly, 70% of students believe off-task use of
technology in the classroom is distracting to the person using it, and 31% believe it is also distracting to others.
Method and limitations: "One hundred and two undergraduate students enrolled in four sections of
Communication Studies courses at the California Polytechnic State University participated in this research. The
sample was heterogeneous in terms of majors because all students are required to take introductory Communication
Studies classes. Every college in the university was represented. There were 61 females and 41 males, ranging in age
from 17–22, with an average age of 18.5 years… Students were asked to participate by filling out a four-page
questionnaire anonymously… The questionnaire contained demographic information used to describe the sample,
and questions about (a) technology use, (b) websites visited during class, (c) perceptions and practices concerning
mobile devices (Likert scale items), (d) awareness of cheating on tests, (e) perceptions of effective and ineffective
class policies, and (f) recommendations to educators (p. 131). Some limitations of this study are that the data is all
self-reported, personal perceptions. No observations were completed to determine the accuracy of the reports. In
addition, while this study represents the perceptions of the students in one university, it may not represent the
feelings of students in other contexts.
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking by giving insight into student perceptions of technology in general, the use
of it for off-task, nonacademic purposes in the classroom, and policies set forth by instructors. Awareness of these
perceptions may be important in determining the extent to which this is a problem of practice, as well as to build
national context. In addition, it will be helpful to know student attitudes toward teacher-imposed regulations before
suggesting interventions after my study.
Other articles I need to locate and review:
Foster, A. L. (2008). Law professors rule laptops out of order in class. Chronicle of Higher Education, 54(40), A1–A18.
Levine, L. E. (2002). Using technology to enhance the classroom environment. T.H.E. Journal, 26(6), 16–18.
Tindell, D. R., & Bohlander, R. W. (2012). The use and abuse of cell phones and text messaging in the classroom: A
survey of college students. College Teaching, 60, 1–9.
Cyber-slacking
Junco, R. (2012). In-class multitasking and academic performance. Computers in Human
Behavior, 28(6), 2236–2243. Retrieved from http://doi.org/10.1016/j.chb.2012.06.031
Summary: This study "examines the frequency with which students multitask during class and examines the
relationship between multitasking and academic performance as measured by actual overall semester grade point
average (GPA)" (p. 2236). The author conducted an online survey of 1,774 college students to determine this
relationship. They were asked questions about the frequency with which they multitask during class and this was
compared with their GPA. The results showed the following rates of multitasking during class: "(1) High
frequency–texting was the only ICT that falls into this category because 69% of students reported texting during
class; (2) Moderate frequency–using Facebook, emailing, and searching for content not related to class fall into this
category as 21–28% of students used these during class; and (3) Low frequency–IM and talking on the phone were
the two ICTs in this category because students rarely used them with only 4–10% reporting doing so" (p. 2241).
Results show that "using Facebook and texting during class were negatively predictive of overall semester GPA.
Even though emailing and searching were moderate-frequency activities like using Facebook, they were not
predictive of overall GPA… As might also be expected, low-frequency activities were not related to semester
GPA… studies shows that multitasking with certain technologies–specifically, using Facebook and text
messaging–while trying to learn relates to poorer long-term academic outcomes" (p. 2241).
Method and limitations: "There was a total of 1,774 participants, all US residents who were students at a four-year
public university in the Northeastern United States. Sixty-four percent were female… The age of participants
ranged from 17–56, though 88% were between 18 and 22 years old… Thirty percent of students in the sample were
first year students, 24% were sophomores, 21% were juniors and 25% were seniors. Highest educational level
attained by either parent was as follows: 28% had a high school degree or less, 25% completed some college, 34%
were college graduates and 13% had a graduate degree" (p. 2239). The sample was 91% Caucasian, 5% African
American, 2% Latino, 1% Asian American, and 2% other. "The gender, race, and ethnic breakdown of the
sample was similar to that of the overall university population, excepting an overrepresentation of women in this
sample" (p. 2239). They responded via a survey on SurveyMonkey.com. "Frequency of multitasking during class
was evaluated using the question 'How often do you do the following activities during class?' with prompts for
Facebook, IM, email, talking on the phone, texting, and searching for information online that is not related to the
class" (p. 2238). They responded to this question on a Likert scale ranging from "Never" to "Very Frequently (close
to 100% of the time)." In addition, "students gave researchers permission to obtain their actual high school grade
point averages (HSGPAs), submitted to the university during the admissions process… [and] to access their [current]
academic records to obtain their overall semester grade point averages (GPAs)… GPAs were measured on a 4.0
scale ranging from 0 to 4.0" (p. 2238–2239). Several limitations to the study are as follows: "The major limitation
of this study is that it is cross-sectional and correlational and therefore it is impossible to determine the causal
mechanisms between ICT use during class and overall semester GPA" (p. 2241). "A related limitation is that, while
this sample was representative of the overall university population on which it is based, it may not be representative
of all institutions in the United States… [Also,] the fact that participants were recruited via email and that the survey
was administered online is a further limitation… The students who responded to the survey [may] happen to be
regular users of email [who] may be more active users of technology and may multitask more than other students. A
final limitation was [that] all of the multitasking variables were assessed via self-report. This raises the issue of
whether students can accurately estimate their frequency of multitasking" (p. 2242).
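Because Junco relates Likert-coded frequency items to an objective semester GPA, a small sketch helps me see how that comparison might be computed. This is not Junco's analysis; the response labels, the 0–4 coding, and the simulated data below are my own assumptions.

# Minimal sketch (not the study's analysis): code Likert responses numerically
# and relate in-class Facebook frequency to GPA with a rank-order correlation.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical coding for "How often do you do the following activities during class?"
likert = {"Never": 0, "Rarely": 1, "Sometimes": 2,
          "Somewhat frequently": 3, "Very frequently": 4}

rng = np.random.default_rng(1)
responses = rng.choice(list(likert), size=500)            # simulated survey answers
facebook_freq = np.array([likert[r] for r in responses])  # ordinal 0-4 codes
gpa = np.clip(3.2 - 0.08 * facebook_freq                  # weak negative trend
              + rng.normal(0, 0.4, 500), 0, 4.0)          # GPA on a 0-4.0 scale

rho, p = spearmanr(facebook_freq, gpa)  # Spearman suits the ordinal predictor
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")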
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking, since it draws a correlation between off-task behaviors during class and
students' overall GPA. This study helps establish that the issue of cyber-slacking is a problem of practice and shows
that interventions may need to be created and implemented in order to address the lower GPAs associated with this
phenomenon.
Other articles I need to locate and review:
Junco, R., & Cotten, S. R. (2012). No A 4 U: The relationship between multitasking and academic performance.
Computers & Education. Retrieved from http://dx.doi.org/10.1016/j.compedu.2011.12.023.
Wood, E., Zivcakova, L., Gentile, P., Archer, K., De Pasquale, D., & Nosko, A. (2012). Examining the impact of
off-task multi-tasking with technology on real-time classroom learning. Computers & Education, 58(1),
365–374.
Junco, R., & Cotten, S. R. (2012). No A 4 U: The relationship between multitasking and
academic performance. Computers & Education, 59(2), 505–514. Retrieved from
http://doi.org/10.1016/j.compedu.2011.12.023
Cyber-slacking
Summary: "The purpose of this study was to examine how college students multitask with ICTs [while completing
schoolwork outside of class] and to determine the impacts of this multitasking on their college grade point average
(GPA)" (p. 505). The author conducted an online survey of 1,774 college students that asked questions about the
frequency with which they multitask while studying outside of class, and this was compared with their GPA. "When
we examine the reported frequency of multitasking it appears that texting, Facebook, and email multitasking are
done most often; 51% of respondents reported texting, 33% reported using Facebook, and 21% reported emailing
while doing schoolwork somewhat or very frequently… 16% of students reported searching for information online
that is not part of schoolwork while doing schoolwork somewhat or very frequently. Instant messaging was the least
often used in multitasking, with 67% of respondents reporting that they never do this… While doing schoolwork
outside of class, students reported spending an average of 60 min per day on Facebook, 43 min per day searching,
and 22 min per day on email. Lastly, students reported sending an average of 71 texts per day while doing
schoolwork… The results show that the effects of multitasking on college GPA vary depending upon the
specific types of ICT use being examined. For instance, multitasking while using Facebook and texting were
associated with lower overall college GPA. Emailing, searching, talking on the phone, and instant messaging
multitasking measures were not associated with college GPA" (p. 510). It was concluded that "engaging in
Facebook use or texting while trying to complete schoolwork taxes the student's limited capacity for cognitive
processing and precludes deeper learning" (p. 511).
Method and limitations: "There was a total of 1,774 participants, all US residents who were students at a four-year
public university in the Northeastern United States. Sixty-four percent were female… The age of participants
ranged from 17–56, though 88% were between 18 and 22 years old… Thirty percent of students in the sample were
first year students, 24% were sophomores, 21% were juniors and 25% were seniors. Highest educational level
attained by either parent was as follows: 28% had a high school degree or less, 25% completed some college, 34%
were college graduates and 13% had a graduate degree" (p. 509). The sample was 91% Caucasian, 5% African
American, 2% Latino, 1% Asian American, and 2% other. "The gender, race, and ethnic breakdown of the
sample was similar to that of the overall university population, excepting a slight overrepresentation of women in
this sample" (p. 509). They responded via a survey on SurveyMonkey.com. "Frequency of multitasking was
evaluated by asking students 'How often do you do schoolwork at the same time that you are doing the following
activities?' with prompts for searching for information online that is not part of schoolwork, Facebook, email, IM,
talking, and texting on their cell phones… Students gave researchers permission to obtain their actual high school
grade point averages (HSGPAs), which were submitted to the university during the admissions process, [and]
their [current] academic records to obtain their overall grade point averages (GPAs). Overall GPAs were measured
on a 4.0 scale ranging from 0 for 'F' to 4.0 for 'A'" (p. 508). "The major limitation of this study is that it is cross-sectional and correlational and therefore it is impossible to determine the causal mechanisms between ICT use and
overall GPA" (p. 512). "A related limitation is that, while this sample was representative of the overall university
population on which it is based, it may not be representative of all institutions in the United States… [Finally,] all
of the multitasking variables were assessed via self-report. Investigators conducting further research on this topic
should keep in mind that asking students to estimate 'average time' and 'time spent yesterday' yield subtle
differences… further research will attempt to make assessments of actual time spent on each ICT as well as actual
time spent multitasking, either through observations or other logging methods" (p. 512).
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking because it discusses the rate at which students participate in off-task
behaviors while studying or completing schoolwork outside of class. Although this is not a variable that I could
control in my own research study, it is something that I would need to take into account when comparing rates of
cyber-slacking with students' overall GPAs. This could help provide evidence that this issue is a definite problem
and help inform possible interventions.
Other articles I need to locate and review:
Rideout, V. J., Foehr, U. G., & Roberts, D. F. (2010). Generation M2: Media in the lives of 8–18 year olds. Menlo
Park, CA: Kaiser Family Foundation. Retrieved from http://www.kff.org/entmedia/upload/8010.pdf.
Cyber-slacking
Karpinski, A. C., Kirschner, P. A., Ozer, I., Mellott, J. A., & Ochwo, P. (2013). An
exploration of social networking site use, multitasking, and academic performance
among United States and European university students. Computers in Human Behavior, 29(3), 1182–1192.
Retrieved from http://doi.org/10.1016/j.chb.2012.10.011
Summary: This study "investigates multitasking's impact on the relationship between [Social Networking Sites]
SNS use and Grade Point Average (GPA) in United States (US; n = 451) and European (n = 406)" (p. 1182). The
authors conducted an online survey to garner their results. It was discovered that "the majorities of US and
European students indicated that they do use SNSs or have them on in the background while studying (n = 277
[61.8%] and n = 278 [69.5%], respectively)" (p. 1185). Results showed that "the negative relationship between
SNS use and GPA was moderated by multitasking only in the US sample. Thus, the above negative relationship
does not change regardless of whether European students are multitasking or not. The US sample regression results
showed that SNS use in minutes/day was negatively predictive of overall GPA. Pertaining to the focus of the study,
the interaction between multitasking and SNS use was also a significant predictor. Unsurprisingly, this indicated that
for those who do not multitask with SNS(s) while studying, GPA was higher for those who spend fewer minutes on
SNS(s)/ day than those who spend more on SNS(s)/day. Additionally, for those who multitask with SNS(s) while
studying, GPA was higher for those who spend fewer minutes on SNS(s)/day compared to those who spend more on
SNS(s)/day" (p. 1189). Results highlight a negative relationship between SNS use and GPA for all students, and
also the negative effects of multitasking. "SNS use is something done concurrently with studying or other academic
activities, and the negative relationship may be an indication of a deleterious effect of trying to carry out these two
thought/information processing processes at the same time. In other words, it is not the use of SNSs which are
deleterious, but the disruptive use of them" (p. 1190).
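The moderation finding rests on an interaction term between SNS minutes per day and a multitasking indicator. The sketch below shows, with simulated data and hypothetical variable names (not the authors' model or data), how such an interaction is typically tested in an ordinary regression.

# Minimal sketch (not the authors' analysis): test whether multitasking moderates
# the SNS-use/GPA relationship by including an interaction term in OLS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 451  # roughly the size of the US subsample
df = pd.DataFrame({
    "sns_minutes": rng.gamma(2.0, 30.0, n),  # minutes of SNS use per day
    "multitask": rng.integers(0, 2, n),      # 1 = uses SNSs while studying
})
df["gpa"] = np.clip(3.4 - 0.004 * df.sns_minutes
                    - 0.002 * df.sns_minutes * df.multitask
                    + rng.normal(0, 0.35, n), 0, 4.0)

# 'sns_minutes * multitask' expands to both main effects plus their interaction;
# a significant interaction coefficient is what "moderation" means here.
model = smf.ols("gpa ~ sns_minutes * multitask", data=df).fit()
print(model.summary().tables[1])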
Method and limitations: "Data were collected from 590 undergraduate and 285 graduate students across the United
States and Europe (N = 875). … The sample consisted of 216 (29.7%) male participants, and 615 female participants
(70.3%). The majority of participants identified themselves as Caucasian (n= 775; 88.6%), with the next largest
group identified as Black/African-American (n= 53; 6.1%). Other ethnicities represented included Asian (2.4%),
Hispanic/Latino (1.1%), Multi-Racial (1.0%), Middle Eastern (0.5%), Indian (0.2%), and American Indian/Alaskan
Native (0.1%)" (p. 1183). The authors used an online survey to gather their data. In this survey, "Section 1
asked respondents to provide demographic and other general information (e.g., age, rank in school, major,
multitasking perceptions). Section 2 invited students to provide academic information (e.g., GPA, hours spent
studying, extracurricular involvement). Section 3 asked about computer and Internet use (e.g., hours spent on the
Internet, computer familiarity). The fourth section was specific to SNS use (e.g., types of SNS used, minutes of SNS
use, multitasking and SNS use). Finally, the fifth section asked for participants SNS use reflections in a series of
open-ended questions" (p. 1184). "One of the main limitations of the current study is that due to the correlational
design, causation cannot be determined in examining the hypothesized relationship between SNS use and
GPA… Another limitation is that the sample, while expansive and covering a wide range of countries, cannot be
considered to be representative for all countries in Europe and all states in the US… [Finally], the voluntary response
sample also is a limitation in that there is no way to corroborate self-reported information" (p. 1191).
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking because it shows a negative correlation between digital multitasking while
studying and GPA. This research helps to prove that this issue is a definite problem of practice. In addition,
because it highlights some of the most common types of digital multitasking, it could help inform possible
recommendations for interventions to avoid this behavior.
Other articles I need to locate and review:
Ellis, Y., Daniels, B., & Jauregui, A. (2010). The effect of multitasking on the grade performance of business
students. Research in Higher Education Journal, 8, 1–10.
Kraushaar, J. M., & Novak, D. C. (2010). Examining the affects of student multitasking with laptops during lecture.
Journal of Information Systems Education, 21, 241–252.
Kraushaar, J. M., & Novak, D. C. (2010). Examining the affects of student multitasking
with laptops during the lecture. Journal of Information Systems Education, 21(2),
241–251.
Cyber-slacking
Summary: "This paper examines undergraduate student use of laptop computers during a lecture-style class that
includes substantial problem-solving activities and graphic-based content. The study includes both a 'self-reported
use' component collected from student surveys as well as a 'monitored use' component collected via activity
monitoring spyware installed on student laptops. We categorize multitasking activities into 'productive' (course-related) versus 'distractive' (non course-related) tasks. Quantifiable measures of software multitasking behavior are
introduced to measure the frequency of student multitasking, the duration of student multitasking, and the extent to
which students engage in distractive versus productive tasks. We find that students engage in substantial
multitasking behavior with their laptops and have non course-related software applications open and active about
42% of the time. There is a statistically significant inverse relationship between the ratio of distractive versus
productive multitasking behavior during lectures and academic performance. We also observe that students understate
the frequency of email and instant messaging (IM) use in the classroom when self-reporting on their laptop
usage" (p. 241). "The average student engages in frequent multitasking during class, generating more than 65 new
active windows per lecture with 62% of those windows being classified as 'distractive.' There is, however, limited
and mixed support for the hypothesis that a higher frequency of multitasking is correlated with lower academic
performance levels… IM is the only multitasking subcategory with [Software Multitasking] rates that are negatively
correlated with quiz average, project, and final exam grades" (p. 249).
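The core measure here is a ratio of distractive to productive window activity taken from monitoring logs. Below is a rough sketch of how such a ratio could be computed and related to grades; the column names, category labels, and data are my own illustrative assumptions, not the authors' logs or procedure.

# Minimal sketch (not the authors' procedure): count window activations per
# student by category, form a distractive/productive ratio, and correlate it
# with a course grade.
import pandas as pd
from scipy.stats import pearsonr

log = pd.DataFrame({
    "student": ["s1", "s1", "s1", "s2", "s2", "s3", "s3", "s3", "s3"],
    "category": ["productive", "distractive", "distractive",
                 "productive", "productive",
                 "distractive", "productive", "distractive", "distractive"],
})
grades = pd.Series({"s1": 78.0, "s2": 91.0, "s3": 70.0})

counts = log.groupby(["student", "category"]).size().unstack(fill_value=0)
ratio = counts["distractive"] / counts["productive"].clip(lower=1)

r, p = pearsonr(ratio.reindex(grades.index), grades)
print(ratio.to_dict(), f"r = {r:.2f}, p = {p:.3f}")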
Method and limitations: "The study participants are 97 undergraduate students from three different sections of a
junior-level, required course in management information systems (MIS) taught during the fall 2006 semester at The
University of Vermont (UVM). At the time the study was conducted, The School of Business Administration (SBA)
at UVM had a laptop computer requirement and all students were required to bring a laptop to each MIS class… The
course was taught in a traditional lecture style… Academic performance data were collected for each study
participant using the university's student record keeping system[, including] cumulative grade point average
(GPA), the scholastic aptitude test (SAT) mathematics and verbal scores, and a UVM admission score… Information
on student perceptions of the SBA's laptop requirement and how they used their laptops in class was collected via
survey. The survey consisted of 27 questions divided into 5 sections" (p. 243). "The only survey question germane
to this study, was a multiple response question that asked students whether they used their laptops for Email, instant
messaging, note taking, surfing the Web, or playing games during the 'test bed' class lectures. Using survey response
information we were able to compare the self-reported and spyware recorded use for the email and IM
categories… Students were given the opportunity to participate in the 'monitored use' component of the study on a
volunteer basis… Students wishing to participate in the study installed the Activity Monitor spyware package from
SoftActivity" (p. 244). When the Activity Monitor was running, it monitored the types of applications being used,
the dates and times they were used, the duration of the activity, which window was active when multiple windows
were open at one time, the keystrokes made by the student, and the URLs visited. Students turned the Activity
Monitor on at the beginning of class and off at the end. Students' usage during class was later analyzed to
determine if each activity in which the student participated during class was productive (related to the class) or
distractive (off-task). One limitation of the study was the small sample size. Also, the course in the study
required the use of laptops, and the results may be different in other courses where laptop use is not a necessity. In
addition, it is possible that the laptop use monitored by the Activity Monitor software is not authentic, as students
knew they were being watched and may have altered their normal behavior. Lastly, while this study does show
limited negative correlation between cyber-slacking and academic performance, it does not indicate causality.
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking because it discusses the frequency with which students participate in off-task digital behaviors and the correlation of this activity to lower academic performance. The study provides a basis
of proof that the phenomenon is a problem in the classroom, and provides information about the most detrimental
off-task behaviors. This is consistent with most other things I have read in that it shows a negative effect of cyber-slacking.
Other articles I need to locate and review:
Young, J. R. (2006), The Fight for Classroom Attention: Professor vs. Laptop. Chronicle of Higher Education,
52(39), A27-A29.
Kuznekoff, J., & Titsworth, S. (2013). The impact of mobile phone usage on student
learning. Communication Education, 62(3), 233–252. Retrieved from
http://doi.org/10.1080/03634523.2013.767917
Cyber-slacking
Summary: "This study examined the impact of mobile phone usage, during class lecture, on student learning.
Participants in three different study groups (control, low-distraction, and high-distraction) watched a video lecture,
took notes on that lecture, and took two learning assessments after watching the lecture. Students who were not
using their mobile phones wrote down 62% more information in their notes, took more detailed notes, were able to
recall more detailed information from the lecture, and scored a full letter grade and a half higher on a multiple
choice test than those students who were actively using their mobile phones" (p. 233). "The control group's average
grade [on the multiple-choice test] was 66%, while the average grade for the high-distraction [group] was 52%. In
practical terms, the difference in grade between those students that were actively texting/posting (high-distraction)
and those that were not (control group) was over one full letter grade, or roughly 13 percentage points" (p. 247). On
the free-recall test, "students in the control group scored 36% higher than the group with low rates of texting/posting
and 51% higher than the group with high rates of texting/posting" (p. 247). In their note taking during the lecture,
"students in the control group recorded 33% of the details. In comparison, students in the low-distraction group
recorded only 27% and in the high-distraction group only 20%" (p. 247). "Compared to those students who do not
text/ post, when students engage in these behaviors they will potentially record 38% fewer details in their notes,
score 51% lower on free-recall tests, and 20% lower on multiple-choice tests" (p. 248).
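Since the study compares three groups on the same multiple-choice test, a one-way ANOVA is the kind of comparison involved. The sketch below is illustrative only; the simulated scores merely echo the reported group means and sample sizes and are not the authors' data or analysis.

# Minimal sketch (not the authors' analysis): compare multiple-choice scores
# across the control, low-distraction, and high-distraction groups.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)
control = np.clip(rng.normal(66, 12, 19), 0, 100)  # n = 19, mean ~66%
low     = np.clip(rng.normal(60, 12, 14), 0, 100)  # n = 14
high    = np.clip(rng.normal(52, 12, 14), 0, 100)  # n = 14, mean ~52%

f_stat, p_value = f_oneway(control, low, high)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")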
Method and limitations: "Participants in the study were students enrolled in one of several communication courses
at a large Midwestern university… Students in the research pool were randomly assigned to one of several research
projects being conducted within the department, of which this project was one… [There was a] total of 47
participants, 19 in the control group, 14 in the low-distraction group, and 14 in the high-distraction group. The age
of the participants ranged from 18 to 22, with the average age being 18. The majority of participants (55.3%) were
first-year students, 38.3% were sophomores, and 6.4% were juniors. The mean self-reported GPA of the participants
was 3.33 (SD=0.380)… Students were shown a [12 minute] video lecture [about communication theories] and
instructed to take notes…using [the] paper provided…as they normally would in a typical class. Students were
informed that at the end of the lecture they would be given a 3-min review period, and after this review they
would take several learning assessments" (p. 238–240). "During the video lecture, participants randomly assigned to
the distraction groups received and responded to simulated text messages via a website they had logged into prior to
the start of the experiment. … In the low-distraction condition, participants were automatically given a new simulated
text/post approximately every 60 seconds… The high-distraction group…automatically received a simulated
text/post approximately every 30 seconds" (p. 241). After the review period, participants completed two tests. "The
free recall test provided students with the…main headers from the lecture…without corresponding details… Students
were given five minutes to fill in all details that they could remember from the lecture… The multiple-choice test
consisted of 16 questions covering material from the lecture. Students were given five minutes to complete the
multiple-choice test" (p. 242). After the experiment, the students' notes and their responses on the two tests were
analyzed. Some of the limitations of this study are the small sample size and lack of a pretest to determine students'
prior knowledge of the lecture content. In addition, the researchers point out that the video lecture was not from a
real course and may have included too much information for twelve minutes, which could have contributed to some
of the poor scores. Finally, the simulated text messages may not have been authentic and representative of the true
text messaging interaction in which students partake.
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking, of which one category is texting during class, and this study explores the
effects of students' participation in this activity. Since this type of digital off-task behavior is prevalent in the high
school classroom, this study gives a foundation as to why this phenomenon is a problem of practice. This research
is consistent with the majority of other things I have read, indicating a negative effect of texting in class.
Other articles I need to locate and review:
Burns, S., & Lohenry, K. (2010). Cellular phone use in class: Implications for teaching and learning a pilot study.
College Student Journal, 44, 805-810.
Wei, F.-Y. F., Wang, Y. K., & Klausner, M. (2012). Rethinking college students' self-regulation and sustained
attention: Does text messaging during class influence cognitive learning? Communication Education, 61,
185-204. doi:10.1080/03634523.2012.672755
Cyber-slacking
Lepp, A., Barkley, J., & Karpinski, A. (2014). The relationship between cell phone use,
academic performance, anxiety, and satisfaction with life in college students.
Computers in Human Behavior, 31(1), 343–350. Retrieved from http://doi.org/10.1016/j.chb.2013.10.049
Summary: The researchers in this study "investigated the relationships between total cell phone use (N = 496) and
texting (N = 490) on Satisfaction with Life (SWL) in a large sample of college students" (p. 343). Students took a
survey regarding these aspects, and their responses were analyzed to determine this relationship. In addition, the
researchers compared their findings from the survey with the participants' overall GPAs. "Cell phone use/texting
was negatively related to GPA and positively related to anxiety; in turn, GPA was positively related to SWL while
anxiety was negatively related to SWL" (p. 343). "For the population studied, high frequency cell phone users
tended to have lower GPA, higher anxiety, and lower Satisfaction with Life relative to their peers who used the cell
phone less often" (p. 348).
Method and limitations: "Five hundred, thirty-six undergraduate students (n = 370 females) participated in the
study… Participants were undergraduate college students from a large, Midwestern US public university. A key
variable in this study was academic performance which the researchers objectively assessed using participants'
actual, cumulative college Grade Point Average (GPA)...Participants were recruited during class time from courses
which typically attract students from a diversity of undergraduate majors" (p. 345). Participants took a survey that
was constructed of seven separate sections; however, for this study, only four sections were used: "(1) demographic
information, (2) the Satisfaction with Life Scale (SWLS; Diener et al., 1985[, as cited in Lepp, Barkley, and
Karpinski, 2014]), (3) the Beck Anxiety Inventory (BAI; Beck, Epstein, Brown, & Steer, 1988[, as cited in Lepp,
Barkley, and Karpinski, 2014]), and (4) questions about cell phone and texting use (Lepp et al., 2013[, as cited in
Lepp, Barkley, and Karpinski, 2014]). In addition, academic performance was measured using each participants
actual, cumulative GPA accessed through official university records (p. 345). The authors indicated several
limitations: First, the sample consisted of only college students enrolled at a single, large, public university in the
Midwestern United States[, and their] ability to generalize these results to other populations is limitedIn addition,
the relationships identified in this study should be investigated in younger students including high school and junior
high school as [cell phone use] is increasingly common among these populations. Finally, the relationship between
[cell phone use], anxiety and SWL should be studied in non-student populations and among diverse ethnicities and
socioeconomic groups" (p. 349).
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking. The research shows that attachment to cell phones could be leading
students to lower GPAs, more anxiety, and lower satisfaction with life in general. Since cell phone use is a major
contributor to cyber-slacking in high school students, this research is relevant to that issue. The results are
consistent with other studies I have read, further confirming the negative link between cell phone use and academic
achievement.
Other articles I need to locate and review:
Hong, F. Y., Chiu, S. I., & Hong, D. H. (2012). A model of the relationship between psychological characteristics,
mobile phone addiction and use of mobile phones by Taiwanese university female students. Computers in
Human Behavior, 28, 2152–2159.
Jackson, L. A., von Eye, A., Witt, E. A., Zhao, Y., & Fitzgerald, H. E. (2011). A longitudinal study of the effects of
Internet use and videogame playing on academic performance and the roles of gender, race and income in
these relationships. Computers in Human Behavior, 27, 228–239.
Palfrey, J., & Gasser, U. (2008). Born digital: Understanding the first generation of digital natives. New York
(NY): Basic Books.
McRae, P. (2016, February 23). Why are students growing tired, anxious and
distracted? The ATA News, p. 4. Alberta.
Digital Lessons to Increase Rigor, Digital Differentiated Instruction, & Cyber-slacking
Summary: "Researchers from the Alberta Teachers' Association, the University of Alberta, Boston Children's
Hospital and Harvard Medical School are working on a collaborative initiative, called Growing Up Digital (GUD)
Alberta, to better understand the scope of physical, mental and social consequences of digital technologies in areas
such as exercise, homework, identity formation, distraction, cognition, learning, nutrition, and sleep quality and
quantity" (p. 4). To help determine this, they conducted a survey of 2,200 teachers and principals from across
Alberta about the impact of digital technologies on children's and youths' health, development and
learning… "Overall, teachers report that digital technologies certainly enhance their teaching and learning activities,
with inquiry-based learning (71 percent) being the area of greatest perceived enrichment… The most common
instructional uses of digital technologies on a weekly basis are to provide access to a variety of learning resources
(79 percent), to enable communication with parents (79 percent), and to differentiate resources and materials to
support students who have a variety of learning needs (69 percent)" (p. 4). However, negative changes in children's
health were reported. "Of particular note is the…increase in the number of students who demonstrate…emotional
challenges (90 percent), social challenges (86 per cent), [the need for] behaviour support (85 percent) and cognitive
challenges (77 percent)… [Also,] the following three conditions were reported to have increased: anxiety
disorders (85 percent), attention deficit disorder and attention deficit hyperactive disorder (75 percent), and mood
disorders such as depression (73 percent)… In terms of media use, 43 percent of teachers 'frequently' and 33
percent 'very frequently' observe students multitasking with digital technologies. Of particular note is that a
majority (67 percent) of teachers…believe that digital technologies are a growing distraction in the learning
environment. Those who believe students are negatively distracted by technology state the degree as 'very many'
(48 percent) and 'almost all' (11 percent) students… Generally, teachers and principals perceive that Alberta students'
readiness to learn has been in steady decline. There is a strong sense among a majority of teaching
professionals…that over the past three to five years students across all grades are increasingly having a more
difficult time focusing on educational tasks (76 percent), are coming to school tired (66 percent) and are less able to
bounce back from adversity (that is, lacking resilience) (62 percent)" (p. 4).
Method and limitations: "In December 2015, a stratified random sample of 3,600 teachers and principals from
across Alberta were invited to participate in a GUD survey. This request attracted more than 2,200 participants and
generated a sample that is highly representative of Alberta's teaching population. The participants corresponded
closely to the profession's teacher and principal demographics, including age, gender, K–12 grade-level distribution,
assignment, teaching experience and geographic representation from all corners of the province (rural, small urban,
suburban and large urban)" (p. 4). The survey asked teachers and principals to reflect on the value of educational
technology, student health and wellbeing, and student distraction and technology. One limitation to this study is that
it is based on teacher and principal perceptions only and does not include any quantitative data to support these
perceptions. In addition, these are the perceptions of teachers and principals in Alberta, and they may not be
generalizable to educators in other contexts.
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topics of digital differentiation, cyber-slacking, and enhancing rigor using technology. Educator
perceptions of each of these topics are discussed in the research. However, the study relates mostly to cyber-slacking. It shows a trend toward health issues such as attention deficit disorder, which could be a factor in cyber-slacking behavior. The perceptions of educators and how they see these problems in their own experience are
important in laying the foundation for my research on any of these topics.
Other articles I need to locate and review:
GUD studies related to technology distraction
Ravizza, S. M., Hambrick, D. Z., & Fenn, K. M. (2014). Non-academic internet use in the
classroom is negatively related to classroom learning regardless of intellectual
ability. Computers & Education, 78, 109–114. Retrieved from
http://doi.org/10.1016/j.compedu.2014.05.007
Cyber-slacking
Summary: "The primary goal of the present study was to assess the relationship between natural portable device
use in the classroom and performance in a real academic setting" (p. 109). "A further goal of this study was to
investigate whether the relationship between technology use and classroom performance was influenced by
intellectual ability… [The researchers] examined the relationship between classroom performance and use of
portable technology and accounted for overall intellectual ability and academic achievement using ACT composite
scores, which correlate very highly with independent measures of general intelligence (Koenig, Frey, & Detterman,
2008[, as cited in Ravizza, Hambrick, & Fenn, 2014]). The specific question [they] addressed was whether portable
technology use in the classroom would add significantly to the prediction of class grades, above and beyond
intellectual ability. Further, the magnitude of the relationship was assessed for those with high and low intellectual
ability" (p. 110). Students in an introductory psychology class were surveyed about the frequency and duration of
their use of various portable devices in the classroom. "Internet use negatively predicted exam scores and added to
the prediction of classroom learning, above a measure of intellectual ability" (p. 109). "Students reported spending
the most time texting in class followed, in descending order, by using the internet, accessing Facebook, and
checking email… All four types of technology use were negatively correlated with final exam score, however, only
non-class internet use showed a significant relationship… Thus, lower test scores were related to higher ratings of
using the internet for non-classroom purposes" (p. 111).
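The key claim is incremental prediction: internet use adds to the prediction of exam scores above and beyond ACT scores. Below is a minimal sketch of how such a comparison of nested regression models could be run; the variable names and simulated data are my own assumptions, not the authors' analysis.

# Minimal sketch (not the authors' analysis): does non-academic internet use
# improve prediction of exam scores beyond ACT alone?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 170
df = pd.DataFrame({"act": rng.normal(25, 4, n),
                   "internet_use": rng.gamma(2.0, 10.0, n)})  # minutes per class
df["exam"] = np.clip(40 + 1.5 * df.act - 0.3 * df.internet_use
                     + rng.normal(0, 8, n), 0, 100)

base = smf.ols("exam ~ act", data=df).fit()
full = smf.ols("exam ~ act + internet_use", data=df).fit()

f_val, p_val, _ = full.compare_f_test(base)  # tests the R-squared increase
print(f"R2 {base.rsquared:.3f} -> {full.rsquared:.3f}, F = {f_val:.2f}, p = {p_val:.4f}")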
Method and limitations: "One hundred seventy students enrolled in an Introductory Psychology class in Fall 2012
participate[d] in this experiment… The majority of the students who participated were freshman (62%) or
sophomores (24%) with only a small percentage of juniors (8%) and seniors (6%). This distribution was quite
similar to the distribution of the entire class… With participants' permission, [researchers] obtained ACT scores from
the university registrar" (p. 110). "A 9-question survey [was used] to assess the frequency and duration of texting,
accessing Facebook, checking email, and non-class related internet use during lectures...A final question assessed
the degree to which students thought that internet and phone use affected their learning of class material" (p. 110).
"The survey was administered during class time using Microsoft PowerPoint on three occasions in the
semester… Students were told to answer the questions according to their typical classroom use throughout the
semester and not restricted to that particular day. Moreover, students were asked to respond about usage concerning
non-classroom based activities rather than portable device use for class-related purposes. Students responded to the
questions using iClicker response devices. Participants registered their clickers for class because they were also used
to assess attendance and class participation. This allowed us to link their responses on the survey with their test
scores" (p. 111). There are some limitations to this study: "First, students are likely to be under-reporting their use
of portable devices (Kraushaar & Novak, 2010[, as cited in Ravizza, Hambrick, & Fenn, 2014]). Although [the
researchers] assured students that their survey answers would not be examined until grades had been submitted to
the registrar, it is possible that some students may have worried about the confidentiality of their responses. Second,
[they] focused on portable device use during the encoding of information presented in the classroom, but students
also use portable devices while studying this information as well beyond the classroom" (p. 112), and this distractive
use of technology should be studied further.
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking because it shows a negative correlation between the off-task use of
technology and exam scores. This is similar to other things I've read, although it does indicate Internet use as being
the most detrimental, while previous studies have shown that Facebook and instant messaging/text messaging are
more injurious to performance. The study is helpful in laying the foundation for my own research on how cyber-slacking affects the performance of my students.
Other articles I need to locate and review:
Aguilar-Roca, N. M., Williams, A. E., & O'Dowd, D. K. (2012). The impact of laptop-free zones on student
performance and attitudes in large lectures. Computers & Education, 59(4), 1300-1308.
Rosen, L. D., Lim, A. F., Carrier, L. M., & Cheever, N. A. (2011). An empirical examination of the educational
impact of text message-induced task switching in the classroom: educational implications and strategies to
enhance learning. Psicología Educativa, 17, 163-177.
Risko, E., Buchanan, D., Medimorec, S., & Kingstone, A. (2013). Everyday attention:
Mind wandering and computer use during lectures. Computers & Education, 68,
275–283. Retrieved from http://doi.org/10.1016/j.compedu.2013.05.001
Cyber-slacking
Summary: "In the present investigation [the researchers] explore the impact of engaging in computer mediated
non-lecture related activities (e.g., email, surfing the web) during a lecture on attention to, and retention of, lecture
material. [They] test a number of predictions derived from existing research on dual tasking. Results demonstrate a
significant cost of engaging in computer mediated non-lecture related activities to both attention and retention of
lecture material, a reduction in the frequency of mind wandering during the lecture, and evidence for difficulty
coordinating attention in lectures with distractions present" (p. 275). "The present investigation has provided a
number of insights into attention during lectures. Participants in the computer condition, who responded to a series
of emails in addition to listening to the lecture, paid less attention to the lecture (determined in situ with thought
probes) and performed less well on the post-test than individuals in the control condition" (p. 280).
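Because the design is a two-condition comparison of post-test scores, an independent-samples t-test captures the basic contrast. The sketch below is illustrative only; the scores are simulated and the analysis is not taken from the article.

# Minimal sketch (not the authors' analysis): compare post-lecture test scores
# for the control condition versus the computer (email-task) condition.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)
control  = rng.binomial(10, 0.62, 32) / 10 * 100  # % correct on a 10-item test
computer = rng.binomial(10, 0.48, 32) / 10 * 100

t_stat, p_value = ttest_ind(control, computer)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")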
Method and limitations: "Sixty-four students from Arizona State University participated for either class credit or
payment of $15… Participants were 18–63 years of age with a mean age of 24" (p. 277). "Participants were
randomly assigned to the control or experimental group" (p. 278). Participants "watched a pre-recorded
lecture…titled 'Introduction to Ancient Greek History: Lecture 2' (Kagan, 2007[, as cited in Risko, Buchanan,
Medimorec & Kingstone, 2013]) and consisted of only the lecturer lecturing (i.e., no slides). … The first 60 min of
the lecture were used. … The lectures were embedded with probes at 2:02, 5:16, 25:31, 35:36, 41:01 and 56:16 min.
The probe interrupted the lecture for 30s (after which the lecture resumed) and consisted of a black screen with
white font. The probe for the control condition contained one question with three options: 'What were you thinking
about? A. The lecture B. The time C. Something else.' The probe for the computer condition contained the same
options but option B was replaced with 'The Computer'… Participants were provided with a sheet of paper and told
to write their response to the probes… In the experimental condition, participants were emailed ten tasks to complete.
These included watching two separate YouTube videos and reporting what they observed, searching the internet for
various items and their current price, posting a status on a Facebook page, as well as checking the weather
forecast… A second computer (i.e., separate from the one presenting the lecture)…was used by the participants in
the computer condition. The computer was there but off in the control condition… At the conclusion of the lecture,
participants were given a ten-question multiple-choice test. Each question asked about facts presented during the
lecture and had 4 separate options, one of which was correct (p. 278). Results were determined based on the
percentage of time the participants were attending to the lecture (p. 279). One limitation of the research is that it
was carried out through a simulated lecture, and the findings may be different when conducted in an authentic
classroom setting with real life technological distractions.
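To make the measures concrete for myself, here is a minimal sketch in a few lines of Python (my own illustration, not code or data from the article) of how responses to the six thought probes and the ten-item post-test might be tallied for one participant; the labels and sample responses are hypothetical.

# Probe times follow the method description above; everything else is made up for illustration.
PROBE_MINUTES = ["2:02", "5:16", "25:31", "35:36", "41:01", "56:16"]

def attention_rate(probe_responses):
    """Share of probes at which the participant reported attending to the lecture."""
    return sum(r == "lecture" for r in probe_responses) / len(probe_responses)

def posttest_percent(correct_answers, total_questions=10):
    """Post-test score as a percentage of the ten multiple-choice questions."""
    return 100 * correct_answers / total_questions

# Hypothetical participant in the computer (email) condition:
responses = ["lecture", "computer", "lecture", "something else", "computer", "lecture"]
print(attention_rate(responses))   # 0.5 -> attended at half of the probes
print(posttest_percent(6))         # 60.0 on the post-test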
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking. It shows that when students are interrupted by digital distractions during class, their ability to attend to and retain information from that class is impaired. This helps demonstrate why cyber-slacking is an issue in education that needs to be addressed.
Other articles I need to locate and review:
Eastwood, J., Frischen, A., Fenske, M., & Smilek, D. (2012). The unengaged mind: Defining boredom in terms of
attention. Perspectives on Psychological Science, 7, 482-495.
Malkovsky, E., Merrifield, C., Goldberg, Y., & Danckert, J. (2012). Exploring the relationship between boredom
and sustained attention. Experimental Brain Research, 221, 59-67.
Szpunar, K., Khan, N., & Schacter, D. (2013). Interpolated memory tests reduce mind wandering and improve
learning of online lectures. Proceedings of the National Academy of Sciences, USA, 110, 6313-6317.

Rosen, L. D., Carrier, L. M., & Cheever, N. A. (2013). Facebook and texting made me
do it: Media-induced task-switching while studying. Computers in Human Behavior, 29(3),
948-958. Retrieved from http://doi.org/10.1016/j.chb.2012.12.001

Cyber-slacking

Summary: The current study observed 263 middle school, high school and university students studying for 15
min in their homes. Observers noted technologies present and computer windows open in the learning environment
prior to studying plus a minute-by-minute assessment of on-task behavior, off-task technology use and open
computer windows during studying. A questionnaire assessed study strategies, task-switching preference,
technology attitudes, media usage, monthly texting and phone calling, social networking use and grade point
average (GPA). Participants averaged less than six minutes on task prior to switching most often due to
technological distractions including social media, texting and preference for task-switching. Having a positive
attitude toward technology did not affect being on-task during studying. However, those who preferred to task-switch had more distracting technologies available and were more likely to be off-task than others. Also, those who accessed Facebook had lower GPAs than those who avoided it (p. 948). Although participants who accessed Facebook one or more times during the study period had lower grade point averages[,]... the negative impact of task-switching preference on academic performance... was not validated (p. 955).
Method and limitations: Participants (N = 279) were recruited and observed by students in an upper-division general education course from the local Southern California area... Overall, the participants (N = 263) included middle school students (N = 31; Mean age = 12.10), high school students (N = 124; Mean age = 16.27), lower-division university students (N = 49; Mean age = 22.39), and upper-division university students (N = 59; Mean age = 23.80). Participants included 117 males and 146 females who represented the local Southern California area's ethnic background: Asian/Asian-American/Pacific Islander (N = 22; 8.3%), Black/African-American (N = 26; 9.9%), Caucasian (N = 62; 23.6%), and Hispanic/Latino/Spanish Descent (N = 115; 43.7%) (p. 951). Trained observers used a Studying Observation Form developed for this study. The form included pre-observation data concerning the study location, technologies present in the learning environment at the beginning of the observation period, and windows open on a computer at the beginning of the observation period... In addition, a minute-by-minute checklist included observations of the use of the following: (1) e-mail, (2) Facebook/MySpace, (3) IM/Chat,
(4) texting, (5) talking on the telephone, (6) television on, (7) music on, (8) music ear buds in ear, (9) reading a book,
(10) reading on a website, (11) writing on paper, (12) writing on the computer, (13) eating or drinking, and (14)
stretching/walking around as well as an indication of the main activity at that minute. Finally, at each minute
observers noted the number of windows open on the computer (p. 951). After the observation, observers conducted
an interview about the participant's study strategies, preference for task-switching, technology attitudes, daily media usage, cell phone usage, social networking usage, GPA, and why and how they task-switch during studying. Limitations to this study include that no attempt was made to validate [the] observations using a second observer for reliability assessment. Further, the observers were known to the participant and were situated in the study area and although they were instructed to sit behind the participant they were not unobtrusive. This could have influenced the studying behavior... Second, the study was clearly biased in that the participants were not selected randomly and, in fact, were known to the observers... [Third], the study was limited by the selection of grade point average as a measure of academic performance... [Fourth], the study was limited by allowing participants to study any material without regard to the type and/or difficulty. [Fifth], the study is limited in its correlational research design, which does not imply causation... [Finally,] it was assumed that all [electronic] communications were unrelated to the
material being studied. It is possible, however, that students were using these tools to communicate with fellow
students about the material and not for social purposes (p. 957).
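As a note to myself on how this kind of minute-by-minute observation data could be handled, here is a rough Python sketch (my own illustration, not the authors' instrument or analysis); the activity labels and the sample record are hypothetical.

def minutes_before_first_switch(minute_log, on_task_label="studying"):
    """Count consecutive on-task minutes from the start of the 15-minute observation window."""
    count = 0
    for activity in minute_log:
        if activity != on_task_label:
            break
        count += 1
    return count

# Hypothetical record of the 'main activity' noted at each of the 15 minutes:
log = ["studying"] * 5 + ["Facebook", "texting"] + ["studying"] * 8
print(minutes_before_first_switch(log))   # 5 -> roughly the "under six minutes on task" pattern reported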
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking by providing an alternate view to consider. The researchers did not validate a negative impact of task-switching preference on academic performance, although participants who accessed Facebook while studying did have lower GPAs. It will be necessary to determine why this result differs from other things I've read. If it is because of the limitations of the study, I can use that knowledge to avoid such limitations in my own research and make it more valid.
Other articles I need to locate and review:
Rose, E. (2010). Continuous partial attention: Reconsidering the role of online learning in the age of interruption.
Educational Technology Magazine: The Magazine for Managers of Change in Education, 50(4), 41-46.


Sana, F., Weston, T., & Cepeda, N. J. (2013). Laptop multitasking hinders classroom
learning for both users and nearby peers. Computers & Education, 62, 24-31.
Retrieved from http://doi.org/10.1016/j.compedu.2012.10.003

Cyber-slacking

Summary: This study examined the effects of in-class laptop use on student learning in a simulated classroom (p.
24). A total of seventy-eight participants watched a PowerPoint presentation and took a test to assess their learning.
Some of the participants multitasked on their laptops by completing predetermined online tasks during the lecture.
Others were instructed not to multitask at all or to keep their laptops in their bags during the lecture but were
possibly in view of someone who was multitasking. The researchers found that participants who multitasked on a
laptop during a lecture scored lower on a test compared to those who did not multitask, and participants who were in
direct view of a multitasking peer scored lower on a test compared to those who were not. The results demonstrate
that multitasking on a laptop poses a significant distraction to both users and fellow students and can be detrimental
to comprehension of lecture content (p. 24).
Method and limitations: Two experiments were conducted during this study. In Experiment 1, forty
undergraduate students from a large comprehensive university in a large Canadian city participated in the study (25
females; M age = 18.9 years, SD = 2.0). All participants were enrolled in an Introductory Psychology course and
received course credit for participating in the experiment. Participants represented a variety of undergraduate
disciplines (i.e., not only psychology). They were recruited using an online portal designed for psychology research,
which explained that the study involved listening to a class lecture and filling out a few questionnaires. Only
students who could bring a personal laptop to the experiment were invited to participate... The final data analysis included two experimental conditions: multitasking (n = 20) and no multitasking (n = 20) (p. 25). During the
experiment, students watched a 45 minute PowerPoint presentation on introductory meteorology and were instructed
to use their laptops to take notes as usual during a lecture. During the presentation, participants in the multitasking
group were asked to complete twelve predetermined online tasks similar to typical student browsing during class,
such as a search to determine what time a specific show comes on TV that night. After the presentation, participants
took a timed forty-question, multiple-choice test that evaluated their basic retention of the information in the
presentation and their ability to apply the learned concepts to solve a problem. Multitasking participants also
completed a questionnaire about their perceptions of how multitasking affected their own learning or that of others
around them. In Experiment 2, thirty-eight undergraduate students from the same university participated in the
study (26 females; M age = 20.3 years, SD = 4.2). None had participated in Experiment 1. Recruitment procedures
and participant incentives were the same as in Experiment 1... [Experiment 2] included two experimental conditions: in view of multitasking peers (n = 19) and not in view of multitasking peers (n = 19)... Thirty-six
undergraduate students were recruited to be confederates (p. 27). In this experiment, the confederates were
instructed to use their laptops to flip between browsing the Internet (e.g., email, Facebook) and pretending to take
notes on the lecture content as the lecture was presented. In fact, they were told they were not required to pay
attention to the lecture... [The study participants were asked] to keep their laptops stored and to use the paper and
pencil provided by the experimenters to take written notes on the lecture content, just as they might normally do in
class (p. 28). The same PowerPoint was presented and the participants took the same test from Experiment 1. The
questionnaire after the test asked questions about the extent to which they were distracted by the confederates'
multitasking and whether they felt that this distraction hindered their learning. The most noteworthy [limitation]
was that 43% of participants self-reported that they did not adhere to their assigned instructions ... For example, a
participant assigned to the Facebook multitasking condition may have multitasked on Facebook and on MSN (i.e.,
two forms of multitasking when they were instructed only to use one form), or chosen not to multitask on Facebook
at all (p. 25).
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking by showing that not only does this phenomenon negatively affect the student who is participating in the off-task behavior, but it is also detrimental to the student's classmates. This research broadens the problem by showing that its negative effects are further-reaching than previously assumed.
This could help form the basis of why this issue is in need of a resolution.
Other articles I need to locate and review:
Gasser, U., & Palfrey, J. (2009). Mastering multitasking. Educational Leadership, 66, 14-19.
Styles, E. A. (2006). The psychology of attention (2nd ed.). England: Psychology Press.


Scalise, K. (2016). Student collaboration and school educational technology: Technology
integration practices in the classroom. Information Managers Journal of Educational
Technology, 11(4), 53-63.

Digital Lessons to Increase Rigor

Summary: This case study illustrates how student-to-student collaboration works in an unstructured environment
and ways to evaluate the collaboration. Nine teams of students across four countries were given an assignment to
complete the Arctic Trek scenario, an online collaborative module created for this study. Completing the module
required collaboration among the group members. They were given very little instruction but did have access to a chat box for communication. The results showed that all teams were able to complete level one collaboration: access the
collaborative space, begin to identify team members, and attempt to pose at least some initial questions and answers
to the team. Two teams were able to extend to [level two] of collaboration that involved establishing at least
some partial roles or turn taking in the collaborations, and employing the ability to share not only their initial
questions and answers, but some of the evidence and evaluation that the team collected or completed (p. 58). Three
of the teams achieved level three because their role planning efforts were more thorough and showed evidence of
being carried out as agreed upon including making adjustments during the task at times. The[se] teams
independently developed ways to systematically identify contributions from different team members, and engaged in
evidence reconciliation (p. 58).
Method and limitations: An online collaborative module was created for this study called the Arctic Trek
scenario and is based on the Go North/Polar Husky information website (www.polarhusky.com), a project originally
of the University of Minnesota. The Go North website is an online adventure learning project based around arctic
environmental expeditions... A digital collaborative notebook was employed in different trials of the Arctic Trek task in four countries. Nine team notebooks are examined in this case study... The notebook allows groups to construct a collaboration online... No face-to-face collaboration opportunities are made available in the Arctic Trek
scenario, so the notebook and its associated tools form the full record of the collaborative work product (p. 56-57).
The researchers wanted to know what do the nine teams of students do when faced with opportunities to employ
collective intelligence? Student teams were provided with a link in the Arctic Trek activity and a secret code to
login into their shared document online with their assigned team members. They then had a mostly blank
collaborative work space with some associated tools such as a chat box to use for communication...The notebook
was intentionally left as an unstructured, simple device through which students could collaborate and share work
across their team. The unstructured nature of the approach meant that each team could employ the tool in the manner
that they thought best and student work across the team could be evaluated for attributes capturing how effectively
the team itself decided to employ their opportunity to collaborate. (p. 57). After the completion of the scenario, the
collaboration in the notebooks was evaluated. Twelve attributes were identified that [included three] levels: [Level
one included] accessing the digital tool being used for collaboration, making attempts to identify team members,
posing initial questions, and sharing simple answers. [Level two showed] some types of role allocation, planning
strategies and shared thinking for the collaborative process[Level three required] effective evidence sharing,
systematic execution, flexible adjustment and analysis during the activities, and attempting to come to a shared
understanding on tasks across the team (p. 57). One limitation identified by the author is the small sample size of a case study... Larger samples and additional methodologies are needed to confirm and more broadly generalize
interpretations (p. 58). In addition, the article does not give enough contextual information about the case study,
such as the demographics of the students in the teams, the duration of the study, or the specifics of where and how
the students completed the project. This is a definite flaw in the presentation of the study.
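Because the article does not spell out a scoring algorithm, the following is only a rough Python sketch of how a team notebook might be coded against leveled attributes like those described above; the attribute names and the rule that a level requires all of its attributes to be observed are my own assumptions for illustration.

# Hypothetical leveled attributes, loosely modeled on the three levels described above.
LEVELS = {
    1: {"accessed tool", "identified members", "posed questions", "shared answers"},
    2: {"allocated roles", "planned strategy", "shared thinking"},
    3: {"shared evidence", "systematic execution", "flexible adjustment", "shared understanding"},
}

def collaboration_level(observed_attributes):
    """Highest level whose attributes were all observed in the team notebook."""
    level = 0
    for lvl in sorted(LEVELS):
        if LEVELS[lvl] <= observed_attributes:   # subset check: every attribute at this level was observed
            level = lvl
        else:
            break
    return level

team = {"accessed tool", "identified members", "posed questions", "shared answers",
        "allocated roles", "planned strategy", "shared thinking"}
print(collaboration_level(team))   # 2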
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of providing more rigorous digital lessons. In this study, students set their own standards
and procedures for collaboration, which requires higher-level thinking. This is the type of collaboration students
will encounter in real world situations as they continue their education after high school or enter the work force. It is
essential that they build these skills now. In this study only some of the students had previously acquired the skills
necessary to successfully complete this task, which shows the necessity for the integration of such rigorous tasks in
the classroom.
Other articles I need to locate and review:
Wilson, M., & Scalise, K. (2012). Assessment of learning in digital social networks; Assessment of technology-based collaboration skills: Transforming assessment for the 21st century. Paper presented at the American
Educational Research Association, Vancouver, Canada.


Strother, D. (2013). Understanding the lived experiences of secondary teachers instructing
in one-to-one computing classrooms (Doctoral dissertation). Retrieved from
ProQuest database. (3603035).

Digital Lessons to Increase Rigor & Cyber-slacking

Summary: In her study, Strother interviewed teachers to determine their perceptions of how education has changed
with the introduction of one-to-one devices in schools in the Midwest. Her dissertation research explored teachers' lived experiences (Moustakas, 1994, [as cited in Strother, 2013]) in teaching in a one-to-one computing environment. Analyses of the data revealed six definitive key themes from the teachers' perspectives of teaching in a one-to-one
environment. Those six themes were as follows: 1. Comfort Level with Technological Knowledge. 2. Importance of
the Internet. 3. Student Social Skills and Communication. 4. Student Behavior and Classroom Management. 5.
Student Accountability and Work Completion. 6. Assessment Practices. Teacher participants revealed that they
lacked the knowledge and experience to use technology effectively to enhance the learning process of students.
Teachers expressed appreciation for accessibility to information, student accountability, and customization of
assessments and grading practices. Classroom management and student communication and social skills were
deemed to be impacted negatively by some teacher participants (p. iii). The final conclusion was that one-to-one
computing devices significantly changed almost every aspect of education from the perspectives of the participants.
Method and limitations: Eight teachers from rural Midwest schools with one-to-one device distribution were
interviewed, two females and six males. Their teaching experience ranged from two years to over twenty years.
After the interview, participants also completed an online survey. In the interview and online survey, they were
asked questions about their perceptions of technology on their teaching, how their students construct knowledge,
collaboration opportunities, students' social skills, classroom climate, their teaching philosophy, how they assess
learning, and the advantages vs. disadvantages of being in a one-to-one school. Some limitations to the study include
the small number of participant teachers and the rural nature of the schools. A larger pool of teachers from more
diverse teaching areas is necessary to fully show the overall perspectives of the field. In addition, the types of devices the schools incorporate could shape teachers' perceptions. Therefore, it would be prudent to
include participants from schools with various types of devices.
How this relates to my topic and other things I've read? or What does this mean for my research?
This research relates to two of my topics, the use of digital lessons to increase rigor and cyber-slacking. The study shows the feelings of teachers toward technology integration, how they feel they have been prepared through professional development, and how technology has impacted their teaching and the learning taking place in their classrooms. These emotions and perceptions related to digital learning can affect whether a teacher effectively uses the one-to-one devices in her classroom, and how rigorous her lessons are if she does. Research like this will be important in my study of this use (or lack of use). Also, the teachers in this study indicated that classroom management has been negatively affected by the introduction of one-to-one devices, which connects to my topic of cyber-slacking. The results of this study could help substantiate or explain the problem of cyber-slacking.
Other articles I need to locate and review:
Behrens, J. T., Mislevy, R. J., DiCerbo, K. E., & Levy, R. (2010). An evidence centered design for learning and
assessment in the digital world (CRESST Report 778). Retrieved from University of California, The
National Center for Research on Evaluation, Standards, and Student Testing website:
http://www.cse.ucla.edu/products/reports/R778.pdf
Drayton, B., Falk, J., Stroud, R., Hobbs, K., & Hammerman, J. (2010). After installation: Ubiquitous computing and
high school science in three experienced, high-technology schools. The Journal of Technology, Learning,
and Assessment, 9(3), 1-57.
Efaw, J., Hampton, S., Martinez, S., & Smith, S. (2004). Miracle or menace? Teaching and learning with laptop
computers in the classroom. Educause Quarterly, 3, 10-18.
Hassel, B. C., & Hassel, E. A. (2011). Teachers in the age of digital instruction. In C. Finn & D. Fairchild (Eds.),
Education reform for the digital era (pp. 11-35). Washington, D.C.: Thomas Fordham Institute.
Holcomb, L. B. (2009). Results and lessons learned from 1:1 laptop initiatives: A collective review. TechTrends,
53(6), 49-55.
Lei, J., & Zhao, Y. (2008). One-to-one computing: What does it bring to schools? Journal of Educational Computing
Research, 39(2), 97-122.


Taneja, A., Fiore, V., & Fischer, B. (2015). Cyber-slacking in the classroom: Potential for
digital distraction in the new age. Computers & Education, 82, 141-151. Retrieved
from http://dx.doi.org/10.1016/j.compedu.2014.11.009

Cyber-slacking

Summary: This study draws upon the augmented version of the theory of planned behavior, social learning theory
and the pedagogical literature to investigate the factors influencing students' attitudes and intentions to use
technology during class for non-class related purposes (p. 141). Two hundred sixty-five undergraduate students
completed a questionnaire about their attitudes and intentions regarding technology use for off-task behavior during
class. The results demonstrate that students' attitudes are influenced by student consumerism, escapism, lack of
attention, cyber-slacking anxiety, and distraction by others' cyber-slacking behavior. Further, lack of attention is
shaped by intrinsic and extrinsic motivation, class engagement, and apathy towards course material (p. 141). It is
reasonable to infer that when students have positive feelings towards the use of the Internet and other technologies
in the classroom for non-class related purposes, these feelings reinforce their intentions to cyber-slack in the
classroom. Also, the students who perceive that they have the ability and can easily cyber-slack in classroom are
more willing to use the Internet and other technologies during scheduled class time for non-class related purposes
(p. 148). It seems that the importance of subjective and descriptive norms indicates that peers' and friends'
expectations and behaviors influence students' intention to cyber-slack in the classroom. This finding implies that
when students find their friends and peers are involved in cyber-slacking, they are themselves more willing to do
cyber-slacking... [However,] the results reveal that perceived distraction by other students' cyber-slacking behaviors exerts a negative effect on attitudes towards cyber-slacking in the classroom... [In addition,] the significance of the relationship between course apathy and lack of attention shows that if students are not able to get along with the material or are not interested in the topics, they tend to lose their focus and attention. Also, [the authors] found that class engagement exerts a negative effect on students' lack of attention in the classroom... [Finally,] the results
revealed that intrinsic and extrinsic motivations exert a negative effect on students' lack of attention in the classroom
(p. 149).
Method and limitations: The participants in this study were undergraduate students in a northeastern public
college in the United States. Participation in the survey was voluntary. The final data included 156 (58.4%) males
and 109 (40.82%) females. 125 (46.8%) respondents were between the ages of 18 and 20, while 98 (34.1%) were
between 21 and 23 years of age. There were 63 freshmen, 39 sophomores, 79 juniors, and 83 seniors (p. 145). The
researchers used a questionnaire that measured students' self-perception of their attitudes and intentions to use
technology for non-class related purposes in fourteen categories: apathy towards course material, attitude,
engagement, consumerism, cyber-slacking anxiety, descriptive norms, distraction by others, escapism, extrinsic
motivation, intention, intrinsic motivation, lack of attention, perceived behavioral control, and subjective norms.
The researchers adapted existing validated items from prior studies where possible and made minor modifications
to fit the context of [their] study... The items were measured using a seven-point Likert-type scale (p. 146). As with other research, this study has some limitations that must be considered when interpreting the results. As [the researchers] have used student samples from a northeastern college in the United States, the results may not be generalizable to a broader student population... Second, [the] study design uses cross-sectional data rather than longitudinal data. Third, [they] have measured the intention towards cyber-slacking behavior and relied on self-reports instead of observing actual use, which was not practically possible because of privacy and ethical issues.
Students may underreport their cyber-slacking behavior if they are anxious about the confidentiality of their
responses (p. 150).
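As a reminder of how seven-point Likert-type items are typically combined, here is a minimal Python sketch (my own illustration, not the authors' analysis) of averaging one respondent's ratings into a single construct score, as might be done for a measure such as lack of attention; the item values are hypothetical.

def construct_score(item_ratings):
    """Mean of a respondent's 1-7 ratings on the items measuring one construct."""
    if not all(1 <= r <= 7 for r in item_ratings):
        raise ValueError("Ratings must be on the seven-point scale (1-7).")
    return sum(item_ratings) / len(item_ratings)

print(construct_score([5, 6, 4]))   # 5.0 for a hypothetical three-item construct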
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking because it indicates the factors that influence students to participate in off-task behaviors. These factors are important to consider when I make recommendations in my own research for how educators can avoid such behaviors in their classrooms.
Other articles I need to locate and review:
Gerow, J. E., Galluch, P., & Thatcher, J. B. (2010). To slack or not to slack: Internet usage in the classroom. Journal
of Information Technology Theory and Application (JITTA), 11(3), 5-24.
Ozler, D. E., & Polat, G. (2012). Cyberloafing phenomenon in organizations: Determinants and impacts.
International Journal of eBusiness and eGovernment Studies, 4(2), 1-15.
Ravizza, S. M., Hambrick, D. Z., & Fenn, K. M. (2014). Non-academic internet use in the classroom is negatively
related to classroom learning regardless of intellectual ability. Computers & Education, 78, 109-114.


Wood, E., Zivcakova, L., Gentile, P., Archer, K., De Pasquale, D., & Nosko, A. (2012).
Examining the impact of off-task multi-tasking with technology on real-time
classroom learning. Computers & Education, 58(1), 365-374. Retrieved from
http://doi.org/10.1016/j.compedu.2011.08.029

Cyber-slacking

Summary: The purpose of the present study was to examine the impact of multi-tasking with digital technologies
while attempting to learn from real-time classroom lectures in a university setting. Four digitally-based multi-tasking
activities (texting using a cell-phone, emailing, MSN messaging and Facebook) were compared to 3 control groups
(paper-and-pencil note-taking, word-processing note-taking and a natural use of technology condition) over three
consecutive lectures. Comparisons indicated that participants in the Facebook and MSN conditions performed more
poorly than those in the paper-and-pencil use control... These analyses indicated that participants who did not use
any technologies in the lectures outperformed students who used some form of technology. Consistent with the
cognitive bottleneck theory of attention (Welford, 1967[, as cited in Wood, Zivcakova, Gentile, Archer, De Pasquale,
& Nosko, 2012]) and contrary to popular beliefs, attempting to attend to lectures and engage digital technologies for
off-task activities can have a detrimental impact on learning. (p. 365). In summary, overall, not all multi-tasking
conditions yielded poorer performance than the traditional paper-and-pencil condition as predicted. However, it
appears that Facebook and MSN were more likely to serve as distractions that impact negatively on learning when
used during lectures. (p. 369).
Method and limitations: All 145 participants (116 females and 29 males) were randomly assigned to one of seven conditions (with n = 21 in Facebook, Texting, Natural Technology Use, Word Processing only and paper-and-pencil conditions and n = 20 in the MSN and email conditions). Approximately equal proportions of males (M age = 20.67, SD = 2.33) and females (M age = 19.56, SD = 1.19) were represented within each condition. Participants were recruited from 2nd year research methods and statistics courses. The participants selected either 1.5 course credits or $15 as compensation... The study was comprised of three sessions. In all three sessions students were given a 20-min lecture presentation on research methods, followed by a 15-item quiz and a fidelity measure... [The] lectures were actual course material presented during class instructional time... Participants received instructions for their randomly assigned condition... Participants in the four multi-tasking conditions were required to use one of
4 social networking tools; texting via cell-phones, Email, MSN or they used Facebook. Participants in the MSN,
texting and email conditions exchanged messages with research assistants (p. 368). The research assistants
presented scripted questions in a pre-selected order... (e.g., book a follow-up review appointment for the lecture,
followed by open ended questions involving school issues, such as current courses and exams, followed by other
current events exchanges such as Halloween) Participants in the Facebook condition completed a prepared
information scavenger hunt: an instruction sheet asking them to visit the Facebook profiles of several people to
find specific pieces of information in those profiles... the natural use of technology group was allowed to use any technology they wished throughout the experimental session... Following each lecture, students completed one 15-item multiple-choice test. All questions pertained to material presented in the lecture for that session. Consistent
with course expectations, the multiple-choice questions reflected factual, application and synthesis level demands.
Participants were aware of the upcoming multiple-choice test and that the material being presented would be on their
final exam. Therefore, participants would have a natural incentive to attend to the material being presented (p. 368-369). The limitations of the study are as follows: Participants indicated in the fidelity measure that they did not
always follow the instructions for their condition, completing more or less than the prescribed number of
multitasking activities. Also, in this study, participants were really dual-tasking rather than multitasking, so further
research is needed to clarify the results of multitasking.
How this relates to my topic and other things I've read? or What does this mean for my research?
This relates to my topic of cyber-slacking because it illustrates the effects of students' digital off-task behavior in a real classroom setting. This evidence helps demonstrate that this issue is a problem in the field and builds the national
context for the problem. Also, having this study as a resource for the types of activities that cause the most
significant problem gives insight into possible interventions I could recommend in my own study.
Other articles I need to locate and review:
Levy, H., & Pashler, H. (2001). Is dual-task slowing instruction dependent? Journal of Experimental Psychology:
Human Perception and Performance, 27(4), 862-869.
Murphy, E., & Manzanares, M. (2008). Instant messaging in a context of virtual schooling: Balancing the
affordances and challenges. Educational Media International, 45(1), 47-58.
