
Research Article

The Negative Effect of Smartphone Use on Academic Performance May Be Overestimated: Evidence From a 2-Year Panel Study

Psychological Science, 2020, 1–12
© The Author(s) 2020
DOI: 10.1177/0956797620956613
https://doi.org/10.1177/0956797620956613
www.psychologicalscience.org/PS

Andreas Bjerre-Nielsen1,2, Asger Andersen1, Kelton Minor1, and David Dreyer Lassen1,2,3

1Copenhagen Center for Social Data Science, University of Copenhagen; 2Department of Economics, University of Copenhagen; and 3Center for Economic Behavior and Inequality, University of Copenhagen

Corresponding Author: Andreas Bjerre-Nielsen, University of Copenhagen, Copenhagen Center for Social Data Science. E-mail: andreas.bjerre-nielsen@econ.ku.dk

Abstract
In this study, we monitored 470 university students’ smartphone usage continuously over 2 years to assess the relationship
between in-class smartphone use and academic performance. We used a novel data set in which smartphone use and
grades were recorded across multiple courses, allowing us to examine this relationship at the student level and the
student-in-course level. In accordance with the existing literature, our results showed that students’ in-class smartphone
use was negatively associated with their grades, even when we controlled for a broad range of observed student
characteristics. However, the magnitude of the association decreased substantially in a fixed-effects model, which
leveraged the panel structure of the data to control for all stable student and course characteristics, including those not
observed by researchers. This suggests that the size of the effect of smartphone usage on academic performance has
been overestimated in studies that controlled for only observed student characteristics.

Keywords
academic performance, attention, mobile devices, multitasking, in-class concentration, productivity, distraction, open
materials

Received 9/19/19; Revision accepted 5/27/20

Smartphones have become pervasive in learning environments. A recent survey study found that 96% of U.S. college students own a smartphone (Brooks & Pomerantz, 2017), and additional evidence suggests that in-class mobile-device use has become common (Felisoni & Godoi, 2018; Ravizza, Uitvlugt, & Fenn, 2017). These devices enable students to engage in potentially learning-enhancing activities, such as taking notes, participating in online quizzes, or looking up pertinent facts on the Internet. However, the very same devices also connect students to an enticing menu of nonacademic stimuli that may distract from the learning processes taking place in their immediate learning environments (Stokols, 2018). Aside from diverting attention, device use may also inhibit other cognitive pathways involved in learning, including memory and reward processing (Chen & Yan, 2016; Uncapher & Wagner, 2018; Wilmer, Sherman, & Chein, 2017). However, recent research also emphasizes that the results of studies on the effects of digital-device use might depend heavily on the usage context from which the data are gathered and the subsequent methods of analysis chosen by the researchers (Adelantado-Renau et al., 2019; Orben, Dienlin, & Przybylski, 2019; Orben & Przybylski, 2019; Whitlock & Masur, 2019).

In a nascent body of experimental and observational research, psychologists have started to investigate the impact of portable-device use on student learning within the classroom setting using both self-reported and directly recorded measures. Experimental studies based on brief interventions have established that in-class multitasking with electronic devices can limit short-term knowledge retention (Mendoza, Pody, Lee, Kim, & McDonough, 2018; Risko, Buchanan, Medimorec, & Kingstone, 2010; Wood et al., 2012). Because of the short testing periods and possible presence of experimental-demand characteristics, these experiments do not necessarily generalize to the real-world learning environments and longer evaluation cycles found in educational institutions. Observational studies, although unable to demonstrate causality, overall have found similar negative links between portable-device use and academic performance (Fried, 2008; Grace-Martin & Gay, 2001; Kim et al., 2019; Kirschner & Karpinski, 2010; Kraushaar & Novak, 2010; Lepp, Barkley, & Karpinski, 2014; Ravizza et al., 2017; Uzun & Kilis, 2019).

Most prior observational studies have relied heavily on self-reported or self-initiated measures of device use. However, self-reports of digital-device use have been found to be inaccurate, exhibiting little relation to true usage (Andrews, Ellis, Shaw, & Piwek, 2015; Kim et al., 2019; Kraushaar & Novak, 2010; Orben & Przybylski, 2019). Only a small subset of the observational studies measured usage directly by digitally tracking students' device behaviors and did so for only a few weeks, a single course, or over a single term (Felisoni & Godoi, 2018; Grace-Martin & Gay, 2001; Kraushaar & Novak, 2010; Ravizza et al., 2017). Participants in the study by Ravizza et al. (2017) were required to manually activate an online Web-activity logger at the start of every class. This approach overcomes problems of self-reporting but may inadvertently prime students by periodically drawing their attention to their device use at the start of every class measurement period. Self-activated tracking may also bias participation in the study through selective logging and errors of omission. By comparison, the recent advent of smartphone-activity tracking apps provides a less invasive approach to collecting device interactions in the background (Felisoni & Godoi, 2018; Kim et al., 2019).

Existing studies have focused on the relationship between academic performance and device use during a single course or collected average measurements across several courses (Chen & Yan, 2016). Because these cross-sectional studies included only a single row of measurements for each participating student, they could not control for unobserved variables that might jointly determine device use and academic performance. Some studies directly measure and control for specific student traits that could confound the estimated effects—including intelligence, motivation, and interest (Ravizza, Hambrick, & Fenn, 2014; Ravizza et al., 2017)—but other student characteristics may act as confounding factors. For instance, students with low levels of self-control may use their phone more often in class, study less intensively outside of class, and perform worse on academic outcomes, with phone use a symptom of limited self-control rather than a cause of lower grades (Wilmer & Chein, 2016). Likewise, grades and smartphone use may jointly reflect contextual factors at the course level, such as topic, instructor, room, and class characteristics, which may limit the external validity of prior research reliant on single-course observations.

In the present study, we used a mobile app to unobtrusively log students' complete attendance and smartphone use across multiple courses over a period spanning 2 academic years. This enabled us to examine the link between students' average smartphone use and average grade as well as the relationship between students' course-specific use and performance. Additionally, we combined these data with a wide range of individual background controls, including administrative data on high school academic performance, socioeconomic background, and surveyed personality measures. We employed a novel measure of in-class smartphone use to investigate the primary hypothesis that higher nonacademic device use during class is associated with worse student performance. We expected to find a moderate to large negative relationship between students' smartphone use and grades both when estimating with a cross-sectional model, consistent with the existing literature, and when estimating with a fixed-effects model that leveraged the dynamic nature of our panel data to control for unobserved student and course characteristics.

Method

Participants

We collected data from September 2013 to September 2015 as part of the Copenhagen Networks Study (Stopczynski et al., 2014). Our study used data from 470 students at the Technical University of Denmark. Students volunteered to receive a smartphone that continuously recorded various behavioral measures over a 2-year period, including location, social interactions among participants, and whether the phone screen was turned on. The study was approved by the Danish Data Supervision Authority in 2013 and involved dynamic informed consent. After consenting to participate in the study, students had access to download their own logged data and could withdraw from the study and have their data deleted at any time (see Stopczynski et al., 2014). We planned to terminate data collection after 2 years because of the anticipated costs of maintaining data collection, replacing broken phones, and attrition over time.

Table 1.  Descriptive Statistics for the 470 Students in the Sample

Variable   Minimum   First quartile   Mdn   Third quartile   Maximum   M   SD
Age (years) 19.0 20.0 21.0 21.5 28.8 21.0 1.6
Danish high school grade point average 3.8 7.3 8.6 9.6 11.7 8.4 1.8
Parents’ maximum years of education 9.8 14.2 15.0 17.0 20.0 15.3 2.4
Parents’ mean income (per 10,000 Danish krone) 12.5 37.4 46.6 58.5 183.0 50.0 22.8
Big Five Inventory  
 Agreeableness 2.2 3.5 3.9 4.1 4.8 3.8 0.5
 Conscientiousness 2.0 3.0 3.4 3.9 4.8 3.4 0.6
 Extraversion 1.7 3.0 3.4 3.9 4.8 3.4 0.7
 Neuroticism 1.1 1.9 2.4 2.8 4.1 2.4 0.6
 Openness 2.1 3.2 3.6 3.9 4.8 3.5 0.5
Locus of control 2.0 6.0 8.0 9.0 12.2 7.8 2.2
Body mass index 16.5 20.7 22.4 24.0 34.8 22.7 3.0
Male (1 if male, 0 if female) .78  
Smoker (1 if smoker, 0 if nonsmoker) .25  

Note: Data were collected in September 2013. For data-privacy reasons, each quantile was calculated as the average of the five
observations around the actual quantile. For further details about the variables, see the Student Background Variables section.
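
The note to Table 1 (and to Table 2 below) describes a simple disclosure-avoidance step: each reported quantile is the average of the five observations around the actual quantile. A minimal sketch of one way to implement that rule follows; the function name and the rank-centred window are illustrative assumptions, not the authors' released code (which is in R).

```python
import numpy as np

def privacy_smoothed_quantile(values, q):
    """Report a quantile as the mean of the five observations centred on its rank,
    mirroring the disclosure rule described in the notes to Tables 1 and 2 (illustrative)."""
    values = np.sort(np.asarray(values, dtype=float))
    i = int(round(q * (len(values) - 1)))              # rank position of the requested quantile
    lo, hi = max(i - 2, 0), min(i + 3, len(values))    # five neighbouring observations
    return values[lo:hi].mean()

# Example: a "median" that never exposes a single student's exact value.
# privacy_smoothed_quantile(ages, 0.5)
```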

The size of our sample far exceeds those of earlier observational studies using smartphones in terms of both individuals followed and data points per individual (Felisoni & Godoi, 2018; Kim et al., 2019). Table 1 displays descriptive statistics for the 470 students meeting the inclusion criteria, outlined below, at the start of the study in September 2013.

Students in the Copenhagen Networks Study were excluded from the analysis if they met any of the following criteria: They had no grade data (76 students), attended fewer than 10 hr of course classes (60 students), were missing background variables (157 students), were enrolled in only one graded course (44 students), or were enrolled in courses with no other participants of our study (three students). After applying these criteria to the initial sample of 810 students, we excluded 340 students. Additional details of the data-filtering process are described in the Supplemental Material available online, in which we also confirm that our results are robust to including the students with minimal class attendance and the students with missing background variables in the analysis.

Measures

Smartphone use. For each student, we computed a measure of phone usage at the 15-min level by aggregating the time during which the screen of the student's phone was turned on during each 15-min time bin of the experiment. We also measured each student's attendance in scheduled classes by merging administrative data from the Technical University of Denmark about the location of the student's scheduled classes with mobile-device geolocation data (for details about how we inferred attendance, see Kassarnig, Bjerre-Nielsen, Mones, Lehmann, & Lassen, 2017). We combined these two measures to compute in-class smartphone use. This variable measured—for each course in which a student was enrolled—the percentage of attended class time that the student spent with his or her phone screen turned on (for details about how we addressed breaks between classes, see the Supplemental Material). As an example, let s_ij denote the value of in-class smartphone use for student i in course j: If s_ij is equal to 5, it means that student i's phone screen was turned on 5% of the time that he or she attended classes in course j. In our cross-sectional analysis, we employed each student's average in-class smartphone use across all attended courses as our measure of smartphone use. We also computed out-of-class smartphone use, which measured the percentage of time between 6 a.m. and 1 a.m. that a student's mobile-device screen was turned on and the student was not attending class. Table 2 displays descriptive statistics for the smartphone-use measures.

When activated by an event, such as receiving a text message or pressing a button, smartphones have a default period of time before the screen turns off. Our measure of smartphone use could therefore include time intervals in which the student's attention was not directed toward the phone, but the screen was still turned on. Our data suggest that the most prevalent default turn-off times are 10 s and 30 s (for details, see the Supplemental Material). Because almost all of the smartphone use in our sample was in sessions longer than 35 s, and intermittent periods of screen activation can still captivate attention, we assumed that our measure of smartphone use exclusively recorded time when the student was attending to his or her phone.
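
To make the definition of in-class smartphone use concrete, the sketch below computes s_ij from two hypothetical inputs: 15-min screen-on bins and attended class blocks inferred from schedules and geolocation. All column names (student_id, course_id, bin_start, screen_on_seconds, class_start, class_end) are assumptions for illustration, not the study's actual data schema, and the handling of breaks between classes described in the Supplemental Material is omitted.

```python
import pandas as pd

def in_class_smartphone_use(screen_bins: pd.DataFrame,
                            attended_classes: pd.DataFrame) -> pd.DataFrame:
    """Return s_ij: the % of attended class time in course j that student i's screen was on.

    screen_bins: one row per student and 15-min bin, aligned on the quarter hour,
        with columns student_id, bin_start, screen_on_seconds.
    attended_classes: one row per attended class block,
        with columns student_id, course_id, class_start, class_end.
    """
    # Expand each attended class block into its 15-min bins so it can be joined with screen data.
    rows = []
    for c in attended_classes.itertuples(index=False):
        for bin_start in pd.date_range(c.class_start, c.class_end, freq="15min", inclusive="left"):
            rows.append((c.student_id, c.course_id, bin_start))
    class_bins = pd.DataFrame(rows, columns=["student_id", "course_id", "bin_start"])

    merged = class_bins.merge(screen_bins, on=["student_id", "bin_start"], how="left")
    merged["screen_on_seconds"] = merged["screen_on_seconds"].fillna(0)

    out = (merged.groupby(["student_id", "course_id"])
                 .agg(screen_on=("screen_on_seconds", "sum"),
                      bins=("bin_start", "count"))
                 .reset_index())
    out["s_ij"] = 100 * out["screen_on"] / (out["bins"] * 15 * 60)  # % of attended class time
    return out[["student_id", "course_id", "s_ij"]]
```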

Table 2.  Descriptive Statistics for the Variables Collected During the Study

Variable   Minimum   First quartile   Mdn   Third quartile   Maximum   M   SD
In-class smartphone use 0.1 3.9 6.6 11.0 41.1 8.2 6.1
Average in-class smartphone use 1.0 4.9 7.4 11.1 28.7 8.6 5.3
Out-of-class smartphone use 1.0 5.8 7.7 10.7 20.8 8.3 3.8
Course grade −3 4 7 10 12 7.3 3.8
Grade point average −1.6 5.1 7.2 9.2 12.0 6.9 2.9

Note: In-class smartphone use, course grade, and out-of-class smartphone use were measured on the student-
in-course level (3,385 observations). Average in-class smartphone use and grade point average were measured
on the student level (470 observations). For data-privacy reasons, each quantile was calculated as the average of
the five observations around the actual quantile.

We did not observe what kind of phone activity the students engaged in. Thus, our measure of smartphone use did not divide device activity into academic and nonacademic use. Although this is a limitation of the tracking app used in this study, prior research indicates that the majority of in-class device use typically consists of nonacademic activities, such as texting and social media (Appel, Marker, & Gnambs, 2020; Gehlen-Baum & Weinberger, 2014; Ravizza et al., 2017). Furthermore, in contrast to the online-activity measures used in previous studies, our measures captured all smartphone screen exposure, including both online and off-line (e.g., app based and system based) mobile-device interactions, all of which can occupy attention.

Academic performance. We obtained the students' course grades and study programs from university administrative records. Grades were given on the Danish grading scale and consisted of seven possible values: −3, 0, 2, 4, 7, 10, and 12. In our cross-sectional analysis, we used each student's average grade across all courses, whereas the panel analysis employed the course-specific grades. Table 2 displays descriptive statistics for the course grades and grade point averages (GPAs).

Student background variables. We obtained various student characteristics for our sample by merging it with two sources. First, we obtained surveyed personality measures (the Big Five Inventory and locus of control) and health status (body mass index and whether the student smoked cigarettes) from the Copenhagen Networks Study itself. The Big Five Inventory and locus of control were measured with the standard survey items administered at the beginning of the study period (Goldberg, 1993; Rotter, 1966). The distribution of personality traits in the Copenhagen Networks Study was previously shown to be unbiased with regard to the general population of Western Europe (Stopczynski et al., 2014). Second, these data were anonymized and merged with national registries on secure servers with limited access at Statistics Denmark. Here, we obtained student age, gender, and high school GPAs as well as mean annual income and maximum number of years of education of students' parents. Parental mean annual income was measured in units of 10,000 Danish krone. As a proxy for intelligence, high school GPAs were computed from grades on the same scale as the university grades (Roth et al., 2015).

Structure of the data

The filtered data had 3,385 observations, each containing the grade, in-class smartphone use, and student background variables of a specific student in a specific course. Each observed student participated in multiple courses, and each observed course had multiple students who were part of our experiment. However, each student did not participate in all observed courses, and each course did not contain all observed students as participants. Thus, our data were structured as an unbalanced panel (Wooldridge, 2010). The median number of recorded courses per student was seven, and the median number of observed students per course was six. The data contained 470 unique students and 401 unique courses.

We used our panel data to construct cross-sectional data by averaging each student's grades and in-class smartphone use across all the student's courses. The resulting data set had 470 observations, each containing the average grade, average in-class smartphone use, and background variables of a specific student. We refer to models estimated on the panel data set as panel models and to the models estimated on the cross-sectional data set as cross-sectional models. An advantage of the panel models relative to cross-sectional models is that they allow for estimation of within-student effects instead of only between-student effects (Orben et al., 2019).
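
As a sketch of the construction just described, the student-in-course panel (3,385 rows) can be collapsed to the cross-sectional data set (470 rows) by averaging grades and in-class use within student and attaching the time-invariant background controls. Column names are illustrative assumptions, not the study's schema.

```python
import pandas as pd

def build_cross_section(panel: pd.DataFrame, background: pd.DataFrame) -> pd.DataFrame:
    """Collapse the student-in-course panel to one row per student (hypothetical columns:
    student_id, course_id, grade, s_ij in `panel`; student_id plus controls in `background`)."""
    per_student = (panel.groupby("student_id")
                        .agg(gpa=("grade", "mean"),
                             avg_inclass_use=("s_ij", "mean"))
                        .reset_index())
    # Attach the time-invariant background variables (one row per student).
    return per_student.merge(background, on="student_id", how="left")
```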

Table 3. Panel-Model Specifications

Model   Specification                                   Control variables
1a      g_ij = α + b·s_ij + ε_ij                        None
1b      g_ij = α + b·s_ij + γᵀx_i + ε_ij                Observed student background variables
1c      g_ij = α + b·s_ij + γᵀx_i + η_j + ε_ij          Observed student background variables and course fixed effects
1d      g_ij = α + b·s_ij + μ_i + ε_ij                  Student fixed effects
1e      g_ij = α + b·s_ij + μ_i + η_j + ε_ij            Student and course fixed effects

Note: For all model specifications, we used two-way clustering (Cameron & Miller, 2015) to cluster standard errors at both the student and course levels. See the Models section for details.
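
As a rough illustration of how Model 1e (and, by dropping the fixed-effects terms and adding the observed controls, the simpler specifications) could be estimated with two-way clustered standard errors, the sketch below uses the Python linearmodels package. The authors' released code is in R, so this is only an analogous implementation under assumed column names, not the study's actual code.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

def fit_model_1e(panel: pd.DataFrame):
    """Model 1e: course grade on in-class smartphone use with student and course fixed
    effects and standard errors clustered by student and by course (two-way).
    Assumes columns student_id, course_id, grade, s_ij; an illustrative sketch only."""
    data = panel.set_index(["student_id", "course_id"])  # entity = student, second level = course
    model = PanelOLS.from_formula("grade ~ s_ij + EntityEffects + TimeEffects", data=data)
    return model.fit(cov_type="clustered", cluster_entity=True, cluster_time=True)

# Omitting EntityEffects/TimeEffects and adding the background controls to the formula
# gives pooled specifications in the spirit of Models 1a-1c.
```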

Sample-size determination

Our data were collected as part of the longitudinal Copenhagen Networks Study (Stopczynski et al., 2014), which was conducted prior to the formulation of our research question. Thus, as in other recent research employing digital traces from large behavioral data sets (Rafaeli, Ashtar, & Altman, 2019), our sample size was not predetermined by our research question or expected effect size. However, compared with the sample size and observation period in previous similar observational studies (see the introduction), those in the present study were respectively larger and longer. Moreover, we did select our models and construct our variables prior to seeing the results of the models.

Models

Models using panel data. Table 3 displays the specifications of the models that we estimated on our panel data. The grade and in-class smartphone use of student i in course j are denoted by g_ij and s_ij, respectively. All the models estimated linear associations between course grades and smartphone use, but each model controlled for a different set of additional explanatory variables. We considered Model 1e the most accurate estimator of the linear effect of in-class smartphone use on grades because the student- and course-specific intercepts μ_i and η_j controlled for all student and course characteristics that remained fixed during the study. Examples of such student characteristics are gender, age at baseline, intelligence, stable personality traits, impulsivity, persistent anxiety, and the socioeconomic conditions of the student's upbringing. Examples of such course characteristics include quality of the teacher, difficulty of the course material, class size, and room layouts. We therefore considered Model 1e to be our main model, and we estimated the other models only to enhance our understanding of it. In this section, we therefore first present Model 1e, and then we explain why each of the other models is interesting as a point of comparison.

Model 1e is an example of a type of panel model called a fixed-effects model (McNeish & Kelley, 2019; Wooldridge, 2010), and the student- and course-specific intercepts μ_i and η_j, respectively, are called the student and course fixed effects. Estimation is possible with a standard linear regression, where μ_i and η_j are estimated by encoding each student and course as a dummy variable. The main advantage of a fixed-effects model is consistent estimation of model parameters under weaker assumptions about the data, whereas mixed-effects models, traditionally more prevalent in psychological research for clustered data, rely on stronger statistical assumptions in order to be consistent (McNeish & Kelley, 2019; Wooldridge, 2010). The key additional assumption of mixed-effects models is the exogeneity assumption, which requires that the student- and course-specific intercepts be random and thus uncorrelated with in-class smartphone use (McNeish & Kelley, 2019; Wooldridge, 2010). This assumption is more justifiable in experimental studies, in which the variable of interest can be random by design. However, in observational studies such as ours, this strong assumption was likely to be violated (McNeish & Kelley, 2019) because unobserved individual and course characteristics were almost certainly correlated with in-class smartphone use. For example, low teacher quality, which was unobserved, could have affected both smartphone use and grades. Indeed, we view the strength of our panel-data approach as residing in the fact that it could control for such confounding student and course characteristics even when they were unobserved. In addition to these theoretical considerations, our choice of a fixed-effects model over a mixed-effects model was also supported by a Hausman specification test, χ²(1, N = 3,385) = 8.48, p = .004, which is the standard statistical test used to assess the consistency of a null model with random effects against the corresponding fixed-effects model (Wooldridge, 2010).
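
For a single coefficient of interest, the Hausman comparison referred to above is typically computed in the following textbook form (shown here only as a sketch; the article does not spell out which variant of the statistic was used):

```latex
H \;=\; \frac{\bigl(\hat{b}_{\mathrm{FE}} - \hat{b}_{\mathrm{RE}}\bigr)^{2}}
             {\widehat{\operatorname{Var}}\bigl(\hat{b}_{\mathrm{FE}}\bigr)
              - \widehat{\operatorname{Var}}\bigl(\hat{b}_{\mathrm{RE}}\bigr)}
\;\overset{H_{0}}{\sim}\; \chi^{2}(1)
```

Under the null hypothesis, the random-effects (exogeneity) assumption holds, so both estimators are consistent and the random-effects estimator is efficient; a large value of H, as reported here (8.48, p = .004), favors the fixed-effects specification.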

Table 4.  Correlation Coefficients Between Measures of In-Class Multitasking and Measures of Academic Performance From
Previous Observational Studies and the Present Study

Study   Design   Sample size   Device   Outcome   r
Fried (2008) Survey 137 Laptop Exam score −.17
Karpinski, Kirschner, Ozer, Mellott, and Ochwo (2013)  
 Europe Survey 406 Laptop & phone GPA −.27
  United States Survey 451 Laptop & phone GPA −.60
Lepp, Barkley, and Karpinski (2014) Survey 496 Phone GPA −.20
Ravizza, Uitvlugt, and Fenn (2017) Field study 61 Laptop Exam score −.25
Felisoni and Godoi (2018) Field study 43 Phone Exam score −.31
Present study Field study 470 Phone GPA −.32

Note: GPA = grade point average.

To make our results comparable with those of the existing observational studies, we estimated Model 1b, which is a pooled model. This model was simply a linear regression of course grade on observed student characteristics, similar to the cross-sectional model below but with an observation per student-in-course combination (thus, multiple observations per student). Therefore, Model 1b ignored the panel structure of the data and thus had neither mixed nor fixed effects. The vector x_i in the model specification (see Table 3) contains the values of the observed background variables (see Note 1) for student i. Similar to the models in existing observational studies, Model 1b controlled only for observed student characteristics when assessing the association between device use and academic performance. Consequently, its estimate of b is inconsistent if grades and in-class smartphone use are confounded by variables omitted by the researcher (Wooldridge, 2010). Substantial differences between the estimates of b in Models 1b and 1e thus suggest that there are important confounding factors that were not captured by our background variables.

We estimated Models 1a, 1c, and 1d to investigate how the estimated coefficient of in-class smartphone use responded to the addition of different controls. Model 1a provided a baseline without any controls. Model 1c controlled for observed student characteristics and course fixed effects but not student fixed effects. Model 1d controlled for student but not course fixed effects.

To account for the fact that two or more data points belong to the same individual or course (i.e., the panel structure), we adjusted the standard errors. This is analogous to the procedure for random effects, in which standard errors are computed such that they reflect sampling clusters, which in our case were students and courses. We followed Cameron and Miller (2015) and used two-way clustering to cluster standard errors at both the student and course levels. This clustering of standard errors accounts for possible correlation of error terms within individuals and courses (Cameron & Miller, 2015).

Cross-sectional model. To make our research comparable with the existing literature, we also estimated a linear regression model with the cross-sectional data (Model 2), which had the following specification:

g_i = α + b·s_i + γᵀx_i + ε_i,

where g_i and s_i are, respectively, the average grade and in-class smartphone use of student i across all of the courses that they attended during the experiment. The vector x_i contains the values of the background variables for student i. Consistent with models in the existing literature, the cross-sectional model had only a single observation for each student in our sample, and the effect of in-class smartphone use on academic performance was estimated, controlling for a broad set of student-level background variables.
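
A sketch of Model 2 as a student-level OLS regression is given below. The control set follows the background variables listed in the Method section, but the column names are assumptions, and the paper does not state which standard errors were used for this model, so the heteroskedasticity-robust choice here is an assumption as well.

```python
import statsmodels.formula.api as smf

def fit_model_2(cross_section):
    """Model 2: g_i regressed on average in-class use s_i and background controls x_i.
    Hypothetical column names; an illustrative sketch, not the authors' code."""
    formula = ("gpa ~ avg_inclass_use + hs_gpa + age + male + parents_income"
               " + parents_education + bmi + smoker + agreeableness + conscientiousness"
               " + extraversion + neuroticism + openness + locus_of_control")
    return smf.ols(formula, data=cross_section).fit(cov_type="HC1")  # robust SEs (assumed)
```
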
Results

We first report correlation measures, proceed with the output of the panel models (Models 1a–1e in Table 3), and finish by comparing these with the output of our cross-sectional model (Model 2).

Correlations

To make our study more comparable with prior investigations, we compared our estimated correlation coefficient between students' average smartphone use across courses and GPAs with estimates from earlier observational studies (see Table 4). Our estimated correlation (r = −.32) is similar in magnitude to most previously found correlations. Its 95% confidence interval (CI), [−.40, −.24], contains correlations from four of the six earlier studies. The CI was estimated from 10,000 bootstrap samples.
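
The percentile-bootstrap interval described above can be reproduced along the following lines (a sketch; variable names are placeholders, and the resampling unit is the student).

```python
import numpy as np

def bootstrap_corr_ci(x, y, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a Pearson correlation, resampling students with replacement."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)            # resample student indices
        boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# The paper reports a 95% CI of [-.40, -.24] for the correlation between
# average in-class smartphone use and GPA.
```
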
[Figure 1 is not reproduced here. Its three columns correspond to grade point average, in-class smartphone use, and out-of-class smartphone use; the top row plots correlations with the student background variables (high school grade point average, age, parents' mean income, parents' maximum years of education, body mass index, agreeableness, extraversion, neuroticism, openness, conscientiousness, and locus of control), and the bottom row plots differences in group means by gender and smoking status.]

Fig. 1. Correlations and between-group differences for grade point average, average in-class smartphone use, and average out-of-class smartphone use. The top row shows correlations between students' background variables and each of the three key variables. The bottom row shows mean differences between gender (men – women) and between smokers (smokers – nonsmokers), separately for each of the three key variables. Error bars show 95% confidence intervals. We calculated the confidence intervals by bootstrapping the data 10,000 times, determining the values on each of the bootstrapped samples, and taking the 5th and 95th percentiles as the end points of the intervals.

Figure 1 shows the pairwise correlations between students' background variables and their GPA, average in-class smartphone use across all attended courses, and out-of-class smartphone use. The left-hand plot shows that students' GPA was more strongly correlated with in-class smartphone use than with any of the student background variables, except for high school GPA. This shows that there is a substantial negative association between smartphone use and GPA, although it might not be a causal relationship. The center plot shows that in-class smartphone use was almost as strongly correlated with high school GPA as with university GPA. One explanation for this is that students with high smartphone use in university courses may also have been heavy smartphone users in high school, which could conceivably have contributed to a lower high school GPA. Another explanation, however, is that both correlations are caused by underlying student characteristics that confound smartphone behavior and academic performance in both high school and university learning environments. Further, the figure shows that a number of the background variables were positively correlated with in-class smartphone use but negatively associated with GPA and vice versa. This suggests that at least part of the observed correlation between smartphone use and GPA is explained by confounding factors measured by the subset of background variables that we observed. Finally, the right-hand plot shows that in-class smartphone use and out-of-class smartphone use were highly correlated (r = .66). However, compared with in-class use, out-of-class use was a weaker predictor of GPA.

Models using panel data

Table 5. Estimated Effect of In-Class Smartphone Use on Student Course Grades

Model   b̂        95% CI for b̂         β̂        Adjusted R²
1a      −0.132   [−0.165, −0.099]    −0.213   .04
1b      −0.081   [−0.110, −0.051]    −0.129   .19
1c      −0.072   [−0.099, −0.045]    −0.116   .38
1d      −0.046   [−0.077, −0.016]    −0.074   .43
1e      −0.028   [−0.057, 0.001]     −0.045   .59

Note: The specifications of the models can be found in Table 3. The b̂ and β̂ columns display estimated coefficients for analyses in which both grades and in-class smartphone use were, respectively, unstandardized and standardized. For specifications of clustered standard errors, see the Models section. CI = confidence interval.

Table 5 reports the results of our models based on panel data. Model 1e's estimate of b is around one third of the estimate found by Model 1b, and Model 1e's CI for b does not include the estimate found by Model 1b. As explained in the Method section, this suggests that the estimate found by Model 1b was severely biased because of confounders that were not fully captured by the student background variables. The majority of the bias is explained by the student fixed effects—this is seen in the substantial difference when the model controlled for all stable student traits (including latent characteristics) compared with only the measured student background variables (compare either the results of Model 1d with Model 1b or the results of Model 1e with Model 1c). The fixed course control accounted for the remaining drop in coefficient size (compare either the results of Model 1e with Model 1d or the results of Model 1c with Model 1b). This suggests that latent confounders primarily consisted of student traits, followed by contextual course characteristics. In the Discussion section, we consider why students' disposition of self-control (Ridder, Lensvelt-Mulders, Finkenauer, Stok, & Baumeister, 2012) is one candidate for such an unobserved confounding student trait, among others.

Finally, we note that—like the rest of our panel models—Model 1e's estimate of b is negative and its CI contains mostly negative values. This suggests that higher in-class smartphone use is associated with lower course grades, even when models control for all fixed student and course traits. However, the estimated association is quite small; the standardized coefficient shows that increasing smartphone use by 1 standard deviation (6.1%) is associated with only a 0.045 decrease in standardized grades. The CI includes zero, but only barely. Thus, if our results were to be interpreted as a failure to reject the null hypothesis that there is no effect of in-class smartphone use, this conclusion would be very sensitive to the choice of confidence band (see Note 2).

Cross-sectional model

As expected, the estimate of b found by the cross-sectional model (Model 2) was close to the estimate found by the panel model that controlled for only observed background variables (Model 1b in Table 5), and the two models had largely overlapping CIs for b. Thus, the cross-sectional model estimated the regression coefficient of average in-class smartphone use as −0.099 (95% CI = [−0.142, −0.056]). Estimating the cross-sectional model with standardized grades and standardized average in-class smartphone use yielded a regression coefficient of −0.181 for smartphone use. Thus, a 1-standard-deviation increase (5.3%) in students' average in-class smartphone use predicts that the student's GPA is 0.181 standard deviations lower under the cross-sectional model. In summary, the cross-sectional model reproduces the findings of existing cross-sectional studies, showing a negative and moderate association between students' in-class device use and academic performance, even when controlling for a broad range of student background variables. However, this association should not be interpreted as a substantive causal relation because the results of our panel models suggest that such estimates are likely biased away from zero because of unobserved confounders. A full regression table for the cross-sectional model can be found in the Supplemental Material.
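
The standardized coefficients reported above are consistent with rescaling the unstandardized estimates by the standard deviations in Table 2 (6.1 and 3.8 for in-class use and course grade at the student-in-course level; 5.3 and 2.9 for average in-class use and GPA at the student level). A quick check, assuming this is how the standardization was carried out:

```latex
\hat{\beta} = \hat{b}\,\frac{\mathrm{SD}(s)}{\mathrm{SD}(g)}:
\qquad -0.028 \times \frac{6.1}{3.8} \approx -0.045,
\qquad -0.099 \times \frac{5.3}{2.9} \approx -0.181 .
```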

Discussion

Unlike previous research, the present study examined the relationship between directly recorded in-class mobile-device use and academic performance across multiple courses and years for a large cohort of university students. When we estimated a cross-sectional model (Model 2) on a data set with one observation per individual—in accordance with the existing literature—we confirmed the hypothesis that higher in-class smartphone use, averaged across courses, was associated with a substantially lower average grade, even when controlling for a broad range of observed background variables. As expected, a similar association between students' course grades and smartphone use was found on a data set with multiple observations per individual in a model with the same set of measured control variables (Model 1b). However, when we leveraged the panel data to control for all fixed student and course characteristics (Model 1e), we found that the magnitude of the estimated association decreased substantially, and although it remained mostly negative, its CI included zero. As discussed in the Results section, this difference suggests that the association between grades and in-class smartphone use was confounded both by student characteristics that were not captured by our student background variables and by contextual effects from course characteristics.

Students' disposition of self-control (Ridder et al., 2012) is one possible candidate for a confounding student trait that we did not directly observe in our data. Recent studies found that—after controlling for Big Five Inventory personality domains—self-control capacity explained behavioral differences in delaying immediate device use (Berger, Wyss, & Knoch, 2018; Wilmer & Chein, 2016), and a recent review found that self-control positively predicts academic performance (Duckworth, Taxer, Eskreis-Winkler, Galla, & Gross, 2019). Thus, high levels of in-class smartphone use might primarily be a signal of low self-control and not itself a substantial cause of lower academic performance. Notably, our extensive set of measured background control variables (Model 1b) did include behavioral and cognitive correlates of self-control (body mass index, smoking status, locus of control, and personality domains). However, adding fixed effects to control for all stable student and course characteristics (Model 1e) yielded both a substantial additional increase in variance explained and a drop in the estimated coefficient of smartphone use, suggesting two candidate possibilities. First, these observed controls may function as poor proxies for the full set of latent cognitive and neurobehavioral characteristics that compose the construct of self-control. Second, other unmeasured or as-of-yet unidentified student traits may further confound the association between in-class smartphone use and performance. Given the magnitude of additional variance explained and the decline in the coefficient of smartphone use by the inclusion of student and course fixed effects, it is very possible that one or more unobserved individual and contextual traits jointly influence smartphone use and academic performance. In this regard, our results also suggest that research on device use and learning should not neglect the role of contextual course-level factors that may influence learning and device use (Stokols, 2018). Researchers who control for—or directly investigate—the influence of both course-specific traits (teaching style, teacher engagement, course topic, student density, student-teacher visual access) and place-based characteristics (lighting, noise levels, classroom size, seating arrangements) can contribute practical insights about the context of in-class smartphone use. Thus, in the future, researchers investigating device use and performance should try to explore these psychological and contextual frontiers. To this end, panel models can provide a pragmatic tool for comparing models that control for all unobserved stable individual or contextual characteristics with models without fixed effects that control for only the limited subset of measured confounds. Contrasting the variance explained and the magnitude of the drop in coefficient size between models with and without fixed effects can help to assess omitted-variable bias. Our results strongly suggest that our uncertainty about not-well-understood psychological and environmental mechanisms that may jointly influence smartphone use and academic performance should be considerable.

Our findings seem at odds with the randomized controlled trials that found that nonacademic device use causes demonstrably worse performance (Risko et al., 2010; Wood et al., 2012). Apart from the aforementioned concerns of external validity, one possible explanation for this discrepancy might be that students' in-class smartphone use has little to no variation across courses, making it impossible to disentangle its influence from confounding factors (e.g., self-control) in a model that included student-specific intercepts. We address this concern in the Multicollinearity in Fixed Effects Models section in the Supplemental Material.

Another explanation is that our measurement of mobile-device use did not include adjacent laptop use. If students switch nonacademic device use from smartphone to laptop in some courses but not others, the student-level variation of in-class smartphone use could contain noise, and such noise could potentially account for finding little or no correlation between course-specific in-class smartphone use and grades. However, another advantage of using course fixed effects is that they remove such noise at the aggregate level, including systematic switching between smartphones and laptops caused by teaching style, teacher quality, availability of the Internet, and lack of battery in afternoon classes. What remains is students' idiosyncratic variation across courses, which is likely to be less of a problem.

Still, one limitation of this study is that we observed in-class device use only for smartphones. As already discussed in the Measures section, another caveat is that we did not observe the specific mobile phone apps and on-screen activities that constituted overall device use. An additional limitation is that our sample consisted of Danish university students who self-selected to participate. Some selection was internal to the student body: We can see that nonparticipants had a slightly lower GPA (Kassarnig et al., 2018), so our results speak for the subpopulation tracked.

In conclusion, results from our panel-data analysis of 2 years of student smartphone activity challenge the interpretation of existing empirical results on digital-device use and academic outcomes. Our results suggest that there are individual and course traits that confound the relationship between in-class device use and academic performance and that these are difficult to control for with salient background covariates. Critically, our results indicate that controlling for all fixed individual and course factors reduces the estimated negative effect of in-class smartphone use on academic performance by almost two thirds (see Table 5). Researchers and educators should therefore exercise caution when estimating correlations between in-class device use and academic performance from cross-sectional data, even with a rich set of controls. In particular, they should avoid making causal claims from this type of observational data. Instead, individuals seeking to establish causality in real-world learning settings should pursue observational or quasiexperimental studies with research designs that are more suited for robust causal inference than cross-sectional studies. These studies should ideally unobtrusively observe all of students' electronic-device activity across courses and over several terms.

Transparency

Action Editor: Bill von Hippel
Editor: D. Stephen Lindsay

Author Contributions
A. Bjerre-Nielsen developed the study concept. A. Bjerre-Nielsen and D. D. Lassen designed the study, with contributions from K. Minor and A. Andersen. A. Andersen preprocessed the data under the guidance of A. Bjerre-Nielsen. A. Andersen and A. Bjerre-Nielsen analyzed the data, and all the authors interpreted the results. A. Andersen, A. Bjerre-Nielsen, and K. Minor drafted the manuscript. All the authors provided critical revisions and approved the final manuscript for submission.

Declaration of Conflicting Interests
The author(s) declared that there were no conflicts of interest with respect to the authorship or the publication of this article.

Funding
This research was supported by the University of Copenhagen 2016 initiative (Social Fabric), the Copenhagen Center for Social Data Science, the Center for Economic Behavior and Inequality (financed by a grant from the Danish National Research Foundation), and DISTRACT Advanced Grant Project No. 834540 from the European Research Council.

Open Practices
R code for this study has been made publicly available via the Open Science Framework and can be accessed at https://osf.io/b83av. Because of rules for data processing at Statistics Denmark, we are not allowed to share the data set. However, access to the data can be requested via e-mail to the corresponding author. The design and analysis plans for this study were not preregistered. The complete Open Practices Disclosure for this article can be found at http://journals.sagepub.com/doi/suppl/10.1177/0956797620956613. This article has received the badge for Open Materials. More information about the Open Practices badges can be found at http://www.psychologicalscience.org/publications/badges.

ORCID iDs
Andreas Bjerre-Nielsen https://orcid.org/0000-0003-3057-5975
Kelton Minor https://orcid.org/0000-0001-5150-0775

Acknowledgments
We thank Robert Böhm for helpful suggestions. We also thank participants at the Center for Economic Behavior and Inequality for helpful discussion and comments.

Supplemental Material
Additional supporting information can be found at http://journals.sagepub.com/doi/suppl/10.1177/0956797620956613

Notes
1. The background variables were age, gender, high school GPA, parents' mean income and maximum years of education, measures of six personality traits, body mass index, and whether or not the student smokes (see details in the Student Background Variables section).
2. The standard error of Model 1e's estimate of b is 0.013. Thus, zero would be included in a 95.5% CI but not in a 94.5% CI.

References

Adelantado-Renau, M., Moliner-Urdiales, D., Cavero-Redondo, I., Beltran-Valls, M. R., Martínez-Vizcaíno, V., & Álvarez-Bueno, C. (2019). Association between screen media use and academic performance among children and adolescents: A systematic review and meta-analysis. JAMA Pediatrics, 173, 1058–1067.
Andrews, S., Ellis, D. A., Shaw, H., & Piwek, L. (2015). Beyond self-report: Tools to compare estimated and real-world smartphone use. PLOS ONE, 10(10), Article e0139004. doi:10.1371/journal.pone.0139004
Appel, M., Marker, C., & Gnambs, T. (2020). Are social media ruining our lives? A review of meta-analytic evidence. Review of General Psychology, 24, 60–74.
Berger, S., Wyss, A. M., & Knoch, D. (2018). Low self-control capacity is associated with immediate responses to smartphone signals. Computers in Human Behavior, 86, 45–51.
Brooks, D. C., & Pomerantz, J. (2017). ECAR study of undergraduate students and information technology, 2017. Retrieved from https://library.educause.edu/-/media/files/library/2017/10/studentitstudy2017.pdf
Cameron, A. C., & Miller, D. L. (2015). A practitioner's guide to cluster-robust inference. Journal of Human Resources, 50, 317–372.
Chen, Q., & Yan, Z. (2016). Does multitasking with mobile phones affect learning? A review. Computers in Human Behavior, 54, 34–42.
Duckworth, A. L., Taxer, J. L., Eskreis-Winkler, L., Galla, B. M., & Gross, J. J. (2019). Self-control and academic achievement. Annual Review of Psychology, 70, 373–399.
Felisoni, D. D., & Godoi, A. S. (2018). Cell phone usage and academic performance: An experiment. Computers & Education, 117, 175–187.
Fried, C. (2008). In-class laptop use and its effects on student learning. Computers & Education, 50, 906–914.
Gehlen-Baum, V., & Weinberger, A. (2014). Teaching, learning and media use in today's lectures. Computers in Human Behavior, 37, 171–182.
Goldberg, L. R. (1993). The structure of phenotypic personality traits. The American Psychologist, 48, 26–34.
Grace-Martin, M., & Gay, G. (2001). Web browsing, mobile computing and academic performance. Educational Technology & Society, 4, 95–107.
Karpinski, A. C., Kirschner, P. A., Ozer, I., Mellott, J. A., & Ochwo, P. (2013). An exploration of social networking site use, multitasking, and academic performance among United States and European university students. Computers in Human Behavior, 29, 1182–1192.
Kassarnig, V., Bjerre-Nielsen, A., Mones, E., Lehmann, S., & Lassen, D. D. (2017). Class attendance, peer similarity, and academic performance in a large field study. PLOS ONE, 12(11), Article e0187078. doi:10.1371/journal.pone.0187078
Kassarnig, V., Mones, E., Bjerre-Nielsen, A., Sapiezynski, P., Lassen, D. D., & Lehmann, S. (2018). Academic performance and behavioral patterns. EPJ Data Science, 7, Article 10. doi:10.1140/epjds/s13688-018-0138-8
Kim, I., Kim, R., Kim, H., Kim, D., Han, K., Lee, P. H., . . . Lee, U. (2019). Understanding smartphone usage in college classrooms: A long-term measurement study. Computers & Education, 141, Article 103611. doi:10.1016/j.compedu.2019.103611
Kirschner, P., & Karpinski, A. (2010). Facebook and academic performance. Computers in Human Behavior, 26, 1237–1245.
Kraushaar, J. M., & Novak, D. C. (2010). Examining the affects of student multitasking with laptops during the lecture. Journal of Information Systems Education, 21, 241–251.
Lepp, A., Barkley, J. E., & Karpinski, A. C. (2014). The relationship between cell phone use, academic performance, anxiety, and satisfaction with life in college students. Computers in Human Behavior, 31, 343–350.
McNeish, D., & Kelley, K. (2019). Fixed effects models versus mixed effects models for clustered data: Reviewing the approaches, disentangling the differences, and making recommendations. Psychological Methods, 24, 20–35.
Mendoza, J. S., Pody, B. C., Lee, S., Kim, M., & McDonough, I. M. (2018). The effect of cellphones on attention and learning: The influences of time, distraction, and nomophobia. Computers in Human Behavior, 86, 52–60.
Orben, A., Dienlin, T., & Przybylski, A. K. (2019). Social media's enduring effect on adolescent life satisfaction. Proceedings of the National Academy of Sciences, USA, 116, 10226–10228.
Orben, A., & Przybylski, A. K. (2019). Screens, teens, and psychological well-being: Evidence from three time-use-diary studies. Psychological Science, 30, 682–696.
Rafaeli, A., Ashtar, S., & Altman, D. (2019). Digital traces: New data, resources, and tools for psychological-science research. Current Directions in Psychological Science, 28, 560–566.
Ravizza, S. M., Hambrick, D. Z., & Fenn, K. M. (2014). Non-academic Internet use in the classroom is negatively related to classroom learning regardless of intellectual ability. Computers & Education, 78, 109–114.
Ravizza, S. M., Uitvlugt, M., & Fenn, K. (2017). Logged in and zoned out: How laptop Internet use relates to classroom learning. Psychological Science, 28, 171–180.
Ridder, D., Lensvelt-Mulders, G., Finkenauer, C., Stok, M., & Baumeister, R. (2012). Taking stock of self-control: A meta-analysis of how trait self-control relates to a wide range of behaviors. Personality and Social Psychology Review, 16, 76–99. doi:10.1177/1088868311418749
Risko, E., Buchanan, D., Medimorec, S., & Kingstone, A. (2010). Everyday attention: Mind wandering and computer use during lectures. Computers in Human Behavior, 68, 275–283.
Roth, B., Becker, N., Romeyke, S., Schäfer, S., Domnick, F., & Spinath, F. M. (2015). Intelligence and school grades: A meta-analysis. Intelligence, 53, 118–137.
Rotter, J. B. (1966). Generalized expectancies for internal versus external control of reinforcement. Psychological Monographs: General and Applied, 80(1), 1–28.
Stokols, D. (2018). Social ecology in the digital age: Solving complex problems in a globalized world. San Diego, CA: Academic Press.
Stopczynski, A., Sekara, V., Sapiezynski, P., Cuttone, A., Madsen, M. M., Larsen, J. E., & Lehmann, S. (2014). Measuring large-scale social networks with high resolution. PLOS ONE, 9(4), Article e95978. doi:10.1371/journal.pone.0095978
Uncapher, M. R., & Wagner, A. D. (2018). Minds and brains of media multitaskers: Current findings and future directions. Proceedings of the National Academy of Sciences, USA, 115, 9889–9896.
Uzun, A. M., & Kilis, S. (2019). Does persistent involvement in media and technology lead to lower academic performance? Evaluating media and technology use in relation to multitasking, self-regulation and academic performance. Computers in Human Behavior, 90, 196–203.
Whitlock, J., & Masur, P. K. (2019). Disentangling the association of screen time with developmental outcomes and well-being: Problems, challenges, and opportunities. JAMA Pediatrics, 173, 1021–1022.
Wilmer, H. H., & Chein, J. M. (2016). Mobile technology habits: Patterns of association among device usage, intertemporal preference, impulse control, and reward sensitivity. Psychonomic Bulletin & Review, 23, 1607–1614.
Wilmer, H. H., Sherman, L. E., & Chein, J. M. (2017). Smartphones and cognition: A review of research exploring the links between mobile technology habits and cognitive functioning. Frontiers in Psychology, 8, Article 605. doi:10.3389/fpsyg.2017.00605
Wood, E., Zivcakova, L., Gentile, P., Archer, K., De Pasquale, D., & Nosko, A. (2012). Examining the impact of off-task multi-tasking with technology on real-time classroom learning. Computers & Education, 58, 365–374.
Wooldridge, J. M. (2010). Econometric analysis of cross section and panel data (2nd ed.). Cambridge, MA: MIT Press.
