A comparison between paper-based and online learning in higher education


Lisa Emerson and Bruce MacKay
Lisa Emerson is an Associate Professor in the School of English and Media Studies at Massey University. Bruce MacKay is a Research Associate in the Institute of Natural Resources, Massey University. Address for correspondence: Dr Lisa Emerson, Massey University, English and Media Studies, Private Bag 11 222, Palmerston North, 4410, New Zealand. Email: L.Emerson@massey.ac.nz
Abstract
To date, researchers have had difficulty establishing reliable conclusions in studies comparing traditional forms of learning (eg paper-based or classroom-based) with online learning in relation to student learning outcomes; no consistent results have emerged, and many studies have not controlled for factors other than lesson mode. This paper compares the effects of presenting two versions of lessons on punctuation that differed only in their mode of presentation. Fifty-nine students completed a pre-lesson questionnaire and, after the lessons, completed another questionnaire plus the NASA-TLX, which tests subjective cognitive workload stress. The results showed that students who sat the lessons on paper performed 24% better than those who sat the lessons online. Reasons for this difference in learning outcomes are considered, but no clear reason is apparent in the data from this study. The study sounds a note of caution in terms of the move by tertiary institutions to online and/or blended learning, and suggests further studies are required which assess learning outcomes in different modes of learning.
Introduction
The movement in higher education to replace or supplement traditional pedagogical methods (eg
paper-based or face-to-face learning) with online learning has seen considerable acceleration in
the last few years, especially in relation to distance learning. While there is a body of research
which compares web-based courses with traditional classroom-based courses (see, for example,
Gal-Ezer & Lupo, 2002; Hughes, McLeod, Brown, Maeda & Choi, 2007; Meyer, 2003; Olson &
Wisher, 2002; Toth, Fougler & Amrein-Beardsley, 2008), no consensus has yet emerged on the
impact of change of mode on student learning. Because one of the conventional methods of
teaching distance and on-campus students is through study guides and other forms of paper-
based instruction, the relationship between mode of learning and learning outcomes invites
further exploration.
The advantages of online learning are clearly established: for example, web-based learning can be used to meet the needs of non-traditional students, leading to more open access to higher education (Mottarella, Fritzsche & Parrish, 2004), and allows more flexibility for learners (Allen & Seaman, 2006). Furthermore, in contrast to some other forms of learning, such as paper-based learning, online learning is seen as providing more interactivity, in relation to peers, tutors and the course material itself (Li, 2007), and this interactivity is seen as having impacts on student learning and motivation:
British Journal of Educational Technology Vol 42 No 5 2011, 727-735
doi:10.1111/j.1467-8535.2010.01081.x
© 2010 The Authors. British Journal of Educational Technology © 2010 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA.
Interaction transports the student to a new cognitive environment which motivates and activates the student ... [it] promotes active engagement of students in the learning process and leads to improved academic achievement. (Katz & Yablon, 2002, p. 70)
Yet the pace of change from traditional to online learning has not been founded on firm evidence that online learning does, in fact, lead to better, or even equivalent, student outcomes or experiences (Toth et al, 2008), and recent work has sounded a note of caution. Njenga and Fourie (2008), for example, challenge the haste with which online learning is being promoted and adopted, commenting that elearning in higher education 'is being created, propagated and channeled ... without giving educators the time and opportunity to explore the dangers and rewards of elearning on teaching and learning' (p. 1). Their concerns are that the claims of elearning (for example, that it saves time and resources, and enhances student learning) are untested and that there are few voices expressing scepticism or researchers empirically testing these claims.
Certainly, the empirical studies which have tested the impact of different modes of learning
on student learning suggest the need to pause and examine more carefully the impact of mode of
learning on learning outcomes before tertiary institutions fully commit to a change of learning
mode. The results of studies to date have been conflicting. Rivera and Rice (2002), for example, find no differences in outcomes for students enrolled in an online class, traditional face-to-face class, or web-enhanced class. By contrast, research by Hughes et al (2007) and Maki, Maki, Patterson and Whittaker (2000) showed that web-based students outperformed students enrolled in a face-to-face class. Mottarella, Fritzsche and Parrish (2004), Wang and Newlin (2000), and Waschull (2001), however, found that students enrolled in a web-based or web-enhanced class achieved lower grades than those enrolled in a traditional face-to-face classroom, even when the GPAs for the three groups were comparable. Mottarella, Fritzsche and Parrish (2004) suggest that their findings may be a result of how learning is measured across these groups (ie by declarative knowledge) and suggest that other forms of assessment may yield different results. Newlin and Wang (2002) similarly call for more rigorous research to investigate web-based students' outcomes.
It should be noted, however, that the studies discussed above relate to whole courses, which may be affected by multiple factors. For example, comparisons of whole courses conducted online or face-to-face cannot ensure that the content of both courses is identical: differences in emphasis or detail or explanation are inevitable. Similarly, the charisma, or otherwise, of a face-to-face teacher may impact on learning outcomes and hence on a comparative study. Another factor which may impact on the efficacy of online teaching is how, and to what extent, the instructor engages with the class online. In such uncontrolled studies, mode of learning may be only one of several differences within the courses being compared, and these multiple factors may explain the conflicting results of such studies.
By contrast, this study focuses on a single set of lessons which have been carefully designed to be
identical in structure and content, and which differ only in their mode of delivery. This study is a
comparison of the learning experiences and learning outcomes of students randomly allocated to
a set of lessons on paper and online. In particular, it focuses on these questions:
• Did students studying online have more positive or negative learning experiences than those who studied the set of lessons on paper? Included in this is a measure of the workload stress experienced by the two different groups.
• Which of these groups achieved better mastery of the material, and how did these results correlate with students' prior attitudes and expertise, experience of the set of lessons, and workload stress?
Method
An interactive online set of lessons on apostrophe usage, Interactive grammar!, was developed using Chou's (2003) model of interactivity (for more detail, see Emerson & MacKay, 2006). The programme covered three topics relating to apostrophe usage: contractions, simple possession and exceptional possession. Students were able to study a set of small lessons on each topic, and then engage with a formative test to assess their proficiency before choosing either to study another lesson on the same topic, or to move on to the next topic. In the formative tests, when student answers were incorrect, the student was provided with prompting questions plus the correct answer for each question. At the completion of the set of lessons, students were given a summative test of 25 randomly selected questions which covered all three topics; the results of this test were used as an indicator of mastery of the topic.
The online lessons were replicated as a paper-based study guide. The subject matter of the lessons, ie punctuation, meant that the material could easily be adapted to a paper-based version because the material is almost entirely text based. The content and the structure of the study guide were identical to the online programme. Students were able to check their answers to all tests (except the final summative test) with an answer booklet which provided the same information as the online test answers. At the end of the paper-based lesson, students sat the same summative test as their online counterparts to assess their mastery of the material.
The trial we undertook involved 59 participants (85% female, 63% aged 18-26, 39 on-campus and 20 off-campus students), who were randomly allocated one of two tools, either the web-based tool Interactive grammar! or the paper-based study guide, to complete the full programme on apostrophe usage.
Procedure
Prior to starting the programme on apostrophes, participants completed a pre-test questionnaire which posed questions about their present attitudes to, and experience of, learning how to use apostrophes. Quantitative answers were assessed through a 1-to-7 Likert-type response scale; more qualitative data were collected through a series of open questions. In particular, we wanted to assess students' levels of confidence and experience in using these skills before they undertook the lessons.
Students were then randomly allocated to one form of the study programme (either web or paper
based). When they had completed the set of lessons, they were given a summative test to assess
their mastery of the topic.
They then filled in two post-test questionnaires. The first sought feedback on the lesson through a series of questions using either a 1-to-7 Likert-type response scale or short-answer qualitative responses. In particular, we wanted to find out whether their confidence in using the skills had increased and to identify aspects of the lessons that they liked or disliked. The second post-test questionnaire focused on levels of cognitive workload stress.
Assessment of subjective cognitive workload stress
Workload stress was assessed using a paper-based form of the NASA-Task Load Index (the NASA-TLX), a measure of subjective cognitive workload stress. Noyes, Garland and Robbins (2004) define cognitive workload stress as 'the interaction between the demands of a task that an individual experiences and his or her ability to cope with these demands. It arises due to a combination of the task demands and the resources that a particular individual has available' (p. 111). The NASA-TLX assesses subjective workload as a function of a series of demands: mental, physical, temporal, performance, effort and frustration (Luximon & Goonetilleke, 2001; Rubio, Diaz, Martin & Puente, 2004), which it presents on a series of indices. Various approaches have
been devised to measure objective workload, but subjective methods are still preferred, due to their ease of use, small cost and established efficacy. A number of instruments are available, but we employed the NASA-TLX because the literature generally suggests it has more sensitivity than other measures such as SWAT (Subjective Workload Assessment Technique; see, for example, Rubio et al, 2004, or Charlton & O'Brien, 2002).
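The study does not report its exact scoring procedure or rating scale, but the two standard ways of combining the six NASA-TLX subscales can be sketched as follows; all names and numbers below are hypothetical, invented for illustration:

```python
# Sketch of NASA-TLX scoring (illustrative only; the study does not report
# its scoring details or rating scale, so all values here are hypothetical).
SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def raw_tlx(ratings):
    """Raw TLX (RTLX) variant: unweighted mean of the six subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

def weighted_tlx(ratings, weights):
    """Classic NASA-TLX: each subscale is weighted by the number of times it
    was chosen in the 15 pairwise importance comparisons (weights sum to 15)."""
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

# Hypothetical participant
ratings = {"mental": 11, "physical": 2, "temporal": 7,
           "performance": 8, "effort": 10, "frustration": 9}
weights = {"mental": 4, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 2}

print(raw_tlx(ratings))               # ≈ 7.83
print(weighted_tlx(ratings, weights))
```

The weighted form emphasises the demands a participant judged most important, which is why the classic instrument and the raw variant can rank the same task differently.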
What we specifically wanted to know was whether the web-based lesson placed a higher cognitive workload on participants compared with a more familiar paper-based lesson. Researchers such as Bunderson, Inouye and Olsen (1989) and Clariana and Wallace (2002), who have investigated test mode effect, ie the concept that 'identical paper-based and computer-based tests will not obtain the same result' (p. 593), suggest that there is sufficient empirical data to establish that students respond differently to material on the web compared with material on paper, but both the actual causes and outcomes of those differences remain unclear. Our conjecture was that cognitive workload may be a factor in that difference. This conjecture is supported by the work of Noyes et al (2004), who used the NASA-TLX to examine test mode effect and suggested that computer-based tests 'required more effort than the paper-based ... test' (p. 112). This finding was also supported by our earlier study (Emerson & MacKay, 2006).
Responses on the Likert-type scales and the NASA-TLX workload index were analysed by the Kruskal-Wallis test using the non-parametric procedure NPAR1WAY of SAS (SAS Institute, 2001). Performance in the summative test was analysed using the analysis of variance procedure of SAS, while the relationship between summative test scores and the NASA-TLX workload scores was examined using the linear regression procedure REG. As preliminary analyses revealed no significant differences between genders within groups, or between modes or study locations, for any of the variables measured, the data were pooled over gender and location for the analysis.
Results
In general, students allocated to both lesson modes were positive about their current understanding of grammar and punctuation prior to undertaking the programme on apostrophe usage (Table 1). The groups showed no significant differences in response to punctuation usage, in terms of having been taught these skills, being confident in their use of punctuation in general, and their attitudes towards the importance of good punctuation. They displayed similar levels of confidence in terms of grammar, punctuation and apostrophe usage.
The only significant difference between the two groups was that students in the paper-based mode agreed more strongly than their web-mode counterparts that they had mastered punctuation skills at high school (Table 1).
Students in both groups were very positive about their experience and perceived understanding of the lesson material (Table 2). The lessons themselves were regarded positively in terms of their structure, design and approach, and students in both lesson modes agreed that they had a better understanding of apostrophe usage having completed the lesson.
However, analysis of the students' scores on the summative test revealed that students in the paper mode performed about 24% better than their web-mode colleagues (23.2/25 vs. 16.9/25; Table 3).
Mean NASA-TLX scores for students in the web mode were similar to those in the paper mode
(Table 3) as was the range of NASA-TLX scores for both lesson modes (Figure 1).
While there were no differences in mean workload stress between the two lesson modes, the relationship between mastery result and workload stress differed. For the paper lesson mode, increasing workload stress had negligible impact on performance until a NASA-TLX score
of about 12, after which there was a rapid decline in performance (Figure 1). In contrast, there was a steady and significant decline in the performance of students who had done the web-based lesson with increasing workload stress across the entire range of stress scores encountered (test score = 19.76 - 0.37 × [NASA-TLX score]; R² = 0.25, F[1, 26] = 8.84, p < 0.006).
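To make the fitted relationship concrete, the reported web-mode regression predicts roughly a 0.37-mark drop in summative score per unit of NASA-TLX score. Evaluating it at a few points across the observed 0-16 workload range (a sketch using only the coefficients reported above; the evaluation points are arbitrary):

```python
# Predicted summative score for the web-mode group from the reported fit:
#   test score = 19.76 - 0.37 * (NASA-TLX score)
def predicted_score(nasa_tlx):
    return 19.76 - 0.37 * nasa_tlx

for t in (0, 8, 16):                        # low, mid and high workload
    print(t, round(predicted_score(t), 2))  # 19.76, 16.8, 13.84
```

So across the observed workload range the fit predicts a spread of about six marks out of 25 for the web-mode group, consistent with the steady decline described above.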
Discussion
The results of the pre-test suggest that the two groups were sufficiently similar to allow robust comparison. Only one measure showed a significant difference: the extent to which the students
Table 1: Mean scores for the pre-test questionnaire (Likert-type scale from 1 = strongly disagree to 7 = strongly agree)

Question                                                          Paper   Web      χ²      p
I was taught the conventions of grammar and punctuation
  at primary school                                               5.2     5.0 ns   0.005   0.944
I feel I mastered these skills at primary school                  4.5     4.1 ns   0.538   0.463
I was taught the conventions of grammar and punctuation
  at high school                                                  4.7     5.0 ns   0.633   0.426
I feel I mastered these skills during my time at high school      5.2     4.4 *    3.917   0.048
I feel very confident about my ability to use correct grammar     5.3     4.9 ns   2.479   0.115
I feel very confident about my ability to use the correct
  conventions of punctuation                                      5.3     5.0 ns   0.694   0.405
I understand how to use apostrophes and feel that I employ
  them correctly most of the time                                 5.5     5.4 ns   0.125   0.724
I think the ability to use correct grammar and punctuation
  is important                                                    6.6     6.2 ns   0.055   0.814

ns: not significant. *: significant (p < 0.05).
Table 2: Mean scores for the post-test questionnaire (Likert-type scale from 1 = strongly disagree to 7 = strongly agree)

Question                                                          Paper   Web      χ²      p
I feel that I understand the rules of apostrophe usage for
  contraction having completed this lesson                        6.3     6.3 ns   0.058   0.809
I feel that I understand the rules of apostrophe usage for
  possession at the completion of this lesson                     6.2     6.0 ns   1.099   0.294
I found the lessons clear and easy to understand                  6.3     6.3 ns   0.355   0.551
I could see where I was having problems understanding and
  applying the material                                           5.9     5.7 ns   0.956   0.328
It was easy to find my way around the lessons                     6.1     6.5 ns   1.592   0.207
I thought the lessons were fun                                    5.3     5.6 ns   0.514   0.462
The structure of the lessons (lesson > try it out > test)
  worked well in terms of aiding my learning                      6.4     6.3 ns   0.416   0.519
I would recommend these lessons to anyone else who was
  having trouble with apostrophes                                 6.4     6.4 ns   0.055   0.814

ns: not significant.
had mastered these skills at high school, perhaps suggesting that the paper-based group had longer-standing confidence in the use of punctuation. However, present levels of confidence were similar between the two groups (Table 1).
The most outstanding result of this study is that students' learning outcomes (as measured by the summative test; Table 3) for the paper-based lessons are significantly higher than those for the online lessons. The reasons for this outcome need to be examined.
First, this finding cannot be attributed to students' prior learning or confidence, or to the time taken to complete the lesson. Regardless of prior learning, confidence levels or time on task, students studying the paper-based lesson achieved higher mastery scores than those studying the lessons online (Table 3).
Second, we considered whether some aspect of the mode of delivery affected the differences in learning outcome. We tested a number of hypotheses but were unable to come to a clear conclusion. Our first hypothesis was that one mode of delivery had higher levels of workload
Table 3: Analysis of work stress index, mastery test scores and lesson duration

Variable                                Paper   Web      Statistical criterion        p
Work stress (NASA-TLX) score            7.79    7.87 ns  0.004 (χ², Kruskal-Wallis)   0.950
Summative test score (/25)              23.2    16.9     1.46 (LSD 0.05, 52)          0.0001
Time taken to complete lesson (min)     32.2    27.4 ns  6.46 (LSD 0.05, 52)          0.139

ns: not significant. LSD: least significant difference.
[Figure 1: Relationship between work stress index (NASA-Task Load Index score, 0-16) and summative test score (/25), plotted separately for on-campus and off-campus students in the paper-based and web-based modes]
stress, which caused the difference in learning outcomes. However, this is not supported by the data (Table 3). Certainly, this study suggests that increases in workload stress correlated with poorer learning outcomes, but this applied to both learning modes. Noyes et al (2004) and Emerson and MacKay (2006) both suggest that online material may require more effort than paper-based material. However, the findings on workload stress in this study do not support this hypothesis: the range of workload stress was similar for both groups (Figure 1).
Next, we hypothesised that students sitting the online version of the lesson might engage with the material differently from those sitting the lesson on paper: that online tests may be perceived by students in the same light as, for example, social networking quizzes (which are common in forums such as Facebook), which might mean that students would not reflect to the same level, or for the same length of time, as those engaged with the paper-based lessons (because paper-based quizzes are more likely to be associated with formal educationally based tests). In other words, did the online students treat the lesson more like an online game or social quiz? Such speculation is supported by the arguments of researchers such as Mehlenbacher, Miller, Covington and Larsen (2000), who caution that 'in our desire to promote active learning, we may be guilty of promoting more interactive learning environments, environments that give immediate responses to students but that do not necessarily facilitate reflection or a careful consideration of all the materials and tasks' (p. 177).
If this were the explanation for the significant differences in learning outcomes, if instant feedback and interactivity cause the student to engage less deeply, then we would need, as teachers, to consider how to counteract this tendency. However, support for this hypothesis from this study is not strong. The quantitative results show no significant difference between the two groups in terms of how much they agreed with the statement that the lessons were 'fun' (Table 2), and there was no difference in terms of how long students took over the lessons (Table 3). However, the qualitative feedback was arguably more expressive and enthusiastic from the online version, and, interestingly, some of the students who sat the paper-based lessons recommended that the lesson be placed online:
'This format of multichoice questions would be better on a computer. Answers could be marked automatically.'

'Using different mediums (such as multimedia or web-based programs) may help people struggling with the subject.'
In this study, however, the notion that interactivity militated against deep learning was a weak hypothesis and would need further investigation.
One other possible explanation for the difference in learning outcomes between the two groups may lie with the way we assess student learning in relation to online learning. Mottarella et al comment:

Perhaps neither standardized tests nor grades capture the strengths that may be present in web-based pedagogy. It is therefore possible that course grades and standardized achievement scores will need to be supplemented with other measures in order to best capture what web instruction has to offer in terms of student learning outcomes ... web instruction may facilitate more active learning and deeper and critical thinking applied to course material and result in improved ability to apply course content to novel situations ... (Mottarella, Fritzsche & Parrish, 2004, p. 54)
Because we used standardised testing in this study, it is not possible to test this hypothesis here. However, it suggests that further research, using a range of assessment methods, may be necessary.
A further issue, suggesting a quite different line of enquiry, is the possibility that the interactivity within the online version of the set of lessons (which was limited to formative quizzes) did not take full advantage of the range of interactivity presented by online learning environments. It may be that a wider range of interactive features would have strengthened the online version.
Linked to this is the notion that the subject matter of the lesson used in this study, ie punctuation, which lends itself to text-based instruction, may have advantaged paper-based instruction over web-based instruction, because such a topic cannot make use of the more interactive or more sophisticated instruction methods available in web-based instruction, eg the use of video and web-based social interaction. A comparative study of a lesson based on a different subject, eg the study of physiology, which could incorporate more complex web-based instruction methods that could not be replicated in paper-based form (eg multi-layered diagrams and/or videos of dissection), or a multi-disciplinary subject that required the kinds of social interaction possible online, might yield quite different results.
However, in such instances, an exact comparison of mode, using identical subject matter and structure, could not be achieved, which raises the question of whether an empirical comparative study such as this, which may not be able to employ the distinctive advantages of both modes of instruction, is the best way to evaluate the strengths of online versus paper-based instruction.
Conclusions
These latter concerns raise multiple questions: how can we effectively measure the impact of mode of learning on learning outcome? Is it possible, or desirable, to separate lesson mode from the content and structure of instruction? And what is the impact of the subject of the lesson on the desirability of mode of instruction?
Clearly, on a number of levels, the results of this study should lead us to further investigation of the impact of learning mode on learning outcomes, using a range of methods and subjects. Does online learning achieve better or equal results in relation to student learning outcomes in comparison with more traditional modes of learning, and does this differ across subjects? How can we effectively answer this question? And how can we harness the strengths of both online learning, with its increased opportunities for interactivity, and the more traditional forms of learning?
Another area for future research concerns the ways in which students engage with online learning. In particular, we need to consider and extend Mehlenbacher, Miller, Covington and Larsen's (2000) discussion of whether instant interactivity works against deep learning, or whether a web environment, which may be associated with online games and other social modes of activity, means students engage less intensively with learning material. If this is impacting on how students learn online, then we need to find ways to counteract this tendency. We may also need to consider the possibility that different forms of interactivity may impact on learning outcomes differently, and develop empirical approaches to testing this hypothesis. Furthermore, it would be useful to conduct further study into Mottarella, Fritzsche and Parrish's (2004) contention that other forms of assessment may be needed to determine the strengths of online learning.
In the meantime, we sound a note of caution concerning the rapid move of institutions of higher education towards online instruction. Until we have a clearer picture of the impact of learning mode on student learning outcomes, we need to consider how best to develop teaching and learning strategies that combine the strengths and opportunities presented by online learning with the strengths of more traditional modes of learning. And central to this exploration must be the enhancement of our students' learning outcomes.
References
Allen, I. & Seaman, J. (2006). Growing by degrees: online education in the United States, 2005. The Sloan Consortium.
Bunderson, C. V., Inouye, D. K. & Olsen, J. B. (1989). The four generations of computerized educational measurement. In R. L. Linn (Ed.), Educational measurement (3rd ed.) (pp. 367-407). New York, NY: American Council on Education/Macmillan.
Charlton, S. & O'Brien, T. (2002). Human factors testing and evaluation. Mahwah, NJ: Lawrence Erlbaum Associates.
Chou, C. (2003). Interactivity and interactive functions in web-based learning systems: a technical framework for designers. British Journal of Educational Technology, 34, 265-279.
Clariana, R. & Wallace, P. (2002). Paper-based versus computer-based assessment: key factors associated with the test mode effect. British Journal of Educational Technology, 33, 5, 593-602.
Emerson, L. & MacKay, B. R. (2006). Subjective cognitive workload, interactivity, and feedback in a web-based writing program. Journal of University Teaching and Learning Practice, 3, 1, 1-14.
Gal-Ezer, J. & Lupo, D. (2002). Integrating internet tools into traditional CS distance education: students' attitudes. Computers and Education, 38, 319-329.
Hughes, J. E., McLeod, S., Brown, R., Maeda, Y. & Choi, J. (2007). Academic achievement and perceptions of the learning environment in virtual and traditional secondary mathematics classrooms. The American Journal of Distance Education, 21, 4, 199-214.
Katz, Y. J. & Yablon, Y. B. (2002). Who is afraid of university internet courses? Educational Media International, 39, 1, 69-73.
Li, M.-H. (2007). Lessons learned from web-enhanced teaching in landscape architecture studios. International Journal on E-Learning, 6, 2, 20052008.
Luximon, A. & Goonetilleke, R. S. (2001). Simplified subjective workload assessment technique. Ergonomics, 44, 3, 229-243.
Maki, R. H., Maki, W. S., Patterson, M. & Whittaker, P. D. (2000). Evaluation of a web-based introductory psychology course: I. Learning and satisfaction in online vs. lecture courses. Behavior Research Methods, Instruments, & Computers, 32, 230-239.
Mehlenbacher, B., Miller, C. R., Covington, J. S. & Larsen, J. S. (2000). Active and interactive learning online: a comparison of web-based and conventional writing classes. IEEE Transactions on Professional Communication, 43, 166-184.
Meyer, K. (2003). The web's impact on student learning. T.H.E. Journal Online. Retrieved 13 August 2009 from http://www.thejournal.com/magazine/vault/A4401.cfm
Mottarella, K., Fritzsche, B. & Parrish, T. (2004). Who learns more? Achievement scores following web-based versus classroom instruction in psychology courses. Psychology Teaching and Learning, 4, 1, 51-54.
Newlin, M. H. & Wang, A. Y. (2002). Integrating technology and pedagogy: web instruction and seven principles of undergraduate education. Teaching of Psychology, 29, 325-330.
Njenga, J. K. & Fourie, L. C. H. (2008). The myths about elearning in higher education. British Journal of Educational Technology, 39, 7, 1-14.
Noyes, J., Garland, K. & Robbins, L. (2004). Paper-based versus computer-based assessment: is workload another test mode effect? British Journal of Educational Technology, 35, 1, 111-113.
Olson, T. M. & Wisher, R. A. (2002). The effectiveness of web-based instruction: an initial inquiry. International Review of Research in Open and Distance Learning, 3. Retrieved September 10, 2009, from http://www.irrodl.org/content/v3.2/olsen.html
Rivera, J. & Rice, M. (2002). A comparison of student outcomes and satisfaction between traditional and web based course offerings. Online Journal of Distance Learning Administration, 3, 1-10.
Rubio, S., Diaz, E., Martin, J. & Puente, J. M. (2004). Evaluation of subjective mental workload: a comparison of SWAT, NASA-TLX, and workload profile methods. Applied Psychology: An International Review, 53, 1, 61-67.
SAS Institute (2001). SAS/STAT software: changes and enhancements, release 8.2. Cary, NC: SAS Institute Inc.
Toth, M., Fougler, T. S. & Amrein-Beardsley, A. (2008). Post implementation insights about a hybrid degree program. TechTrends, 52, 3, 76-79.
Wang, A. Y. & Newlin, M. H. (2000). Characteristics of students who enroll and succeed in psychology web-based classes. Journal of Educational Psychology, 28, 143-146.
Waschull, S. B. (2001). The online delivery of psychology courses: attrition, performance and evaluation. Teaching of Psychology, 28, 143-146.
