
Bloom’s taxonomy based approach to learn basic programming

Anabela Gomes
ISEC – Engineering Institute of Coimbra – Polytechnic Institute of Coimbra
CISUC – Center for Informatics and Systems of the University of Coimbra
Portugal
anabela@isec.pt

António José Mendes


CISUC – Center for Informatics and Systems of the University of Coimbra
Portugal
toze@dei.uc.pt

Abstract: Results in programming courses are often very disappointing. There are a variety of
reasons that can lead to this situation, such as study methods, students' abilities and attitudes,
the nature of programming and also some psychological aspects. However, the approach used to
present problems to students can also have an influence. We accept that not all students can
become brilliant programmers. We also agree that, to be successful in a programming course,
students should be able to build programs of some complexity from scratch. But reaching this
stage takes time, and teachers should be aware of what students know and what they don't
understand at each learning stage. For many students learning should be progressive, starting with
simpler questions (not complete programs) and moving gradually towards more difficult ones. The
support of a learning taxonomy, such as Bloom's taxonomy, can be very useful for this purpose. In
this paper we analyze the impact of such a strategy in the context of a course that involves
learning assembly programming.

Introduction
The high failure rate in programming courses suggests that learning computer programming is a difficult process for
many students. The literature includes many descriptions of failure cases all over the world, independently of the
programming languages or paradigms used. Several authors have discussed different reasons for such problems (Gray
et al., 2007), (Sloane & Linn, 1988), (Soloway & Spohrer, 1989), (Jenkins, 2002), (Lahtinen et al., 2005).
We think there are several reasons that cause difficulties when learning to program. Teaching methods, study
methods, students' abilities and attitudes, the nature of programming and also some psychological effects can be
mentioned (Gomes & Mendes, 2006). In previous research we investigated the impact that some personal
characteristics may have on performance when learning to program. On one hand, previous programming
experience showed a strong impact on performance in learning to program, even when a different language or
paradigm was used. This is not surprising, as we believe general programming abilities are far more important than
language-specific details. On the other hand, learning styles did not seem to have a clear impact on learning to
program, as we could not find a clear correlation between learning styles and performance in an introductory
programming course (Gray et al., 2007), (Gomes et al., 2006). In the same study we analyzed students' strategies
when solving programming problems and found some connection with their preferred learning styles. That is,
many students use different strategies according to their learning styles, but this was not reflected in significant
differences in their performance, at least when measured by their final grades.
We believe that to change the current situation in introductory programming courses it is necessary to improve several
areas, such as student motivation, attitudes, study methodologies and skills. However, in this paper we will focus on
teaching strategies. This will be done in the context of an Informatics Technology course, where students are
expected to acquire a basic understanding of modern computer architecture and to learn and use assembly
programming as a way to exercise and consolidate that knowledge.
A literature review shows that several different approaches and tools have been proposed to help students and teachers
reach their objectives. We can find hands-on program instrumentation and simulation projects (Hsu, 2000), (Vollmar
& Sanderson, 2005). These were considered good tools to teach computer architecture to students who may have a
limited background in hardware design. Tools like Atom (Srivastava & Eustace, 1994), Shade (Cmelik & Keppel,
1994) and edu.LMC (Pedrosa et al., 2006) can be mentioned. The use of simpler hypothetical processors
allows more control over the complexity that students have to deal with, adapting it to their current knowledge level
(Claver et al., 2004). Other authors proposed a new assembly language designed for pedagogical purposes (Silverman
& Martin, 2008). Several other authors have proposed new approaches and discussed their relevance in the context
of computer science programmes (Buckner, 2006), (Agarwal & Agarwal, 2004), (Hunter, 2005), (Powers, 2004).
Programming teachers often ask students to develop complete programs from the early stages of the course.
Normally, simple programs are requested at first, and problem complexity grows as the course develops.
We believe that this strategy may be inadequate for many students, especially those who cannot create programming
solutions even for simple problems. For these students the complexity is too high right from the
beginning, leading to frustration, lack of motivation and failure. We agree that the different activities and problems
should follow an increasing level of difficulty, but the entry level should be easy enough that all students can
succeed in the early learning activities. A taxonomy of educational objectives, such as Bloom's, can be a good
reference to understand this question. In fact, as pointed out in (Lister, 2000), “This traditional approach jumps to
the fifth and sixth levels of Bloom's Taxonomy of Educational Objectives, when these last two levels depend upon
competence in the first four levels”. We think that before students write complete programs they should master
easier tasks related to the first and intermediate levels of Bloom's Taxonomy of Educational Objectives. Also, as
mentioned in (Lister, 2000), “students should first be taught to read programs before they write programs. As
children, in our early school years, the ability to read and understand our native language outstripped our capacity to
write fragments of that language”.

Taxonomies of learning
Taxonomies of educational objectives are used worldwide to describe learning outcomes and assessment results,
reflecting a student's learning stage. They usually divide educational objectives into three domains: cognitive,
affective and psychomotor. As described in (Fuller et al., 2007), some, such as Bloom's taxonomy, treat each of
these as a one-dimensional continuum (Bloom et al., 1956), while others, like the revised Bloom's taxonomy,
describe the cognitive domain using a matrix (Anderson et al., 2001). Yet others, like the SOLO taxonomy, use a set
of categories that describe a mixture of quantitative and qualitative differences between student performances (Biggs
& Collis, 1982). There are also taxonomies that use all three domains equally. However, existing research on the use
of learning taxonomies in computer science focuses on the cognitive domain. A very well known taxonomy is the
original Bloom's taxonomy (Bloom et al., 1956). It is a classification of the different objectives and skills that
educators set for students. Skills in the cognitive domain are divided into six levels. From the lowest-order processes
to the highest, they are: Knowledge, Comprehension, Application, Analysis, Synthesis and Evaluation. Higher levels
build on lower ones and are considered more complex and closer to complete mastery of the subject matter.
Bloom's taxonomy has been revised (Anderson et al., 2001). The authors changed the nouns listed in Bloom's
model into verbs, to correspond with the way learning objectives are typically described (see table 1).

Categories      Cognitive Processes
1. Remember     Recognizing, Recalling
2. Understand   Interpreting, Exemplifying, Classifying, Summarizing, Inferring, Comparing, Explaining
3. Apply        Executing, Implementing
4. Analyse      Differentiating, Organizing, Attributing
5. Evaluate     Checking, Critiquing
6. Create       Generating, Planning, Producing
Table 1: Revised Bloom's Taxonomy

The SOLO (Structure of the Observed Learning Outcome) taxonomy (Biggs & Collis, 1982) makes no reference to
the cognitive characteristics of the learner's performance or to the affective dimension. It focuses on the content of the
learner's response to what is being assessed, aiming to identify the nature of that content and the structural
relationships within it. The content could be designed to assess knowledge, cognitive skills, or underlying
values, and the taxonomy can be used to establish the relationships expected between these different types of content.
Fuller and colleagues discuss existing taxonomies, stressing the strengths and weaknesses of each one.
They also propose a new taxonomy and discuss how it can be used in application-oriented courses such as
programming (Fuller et al., 2007).
However, we agree with Raymond Lister when he says that, “specific to the teaching of elementary programming: the
lower two levels should emphasise the skill of reading and comprehending code, the intermediate two levels should
emphasise the writing of small fragments of code, but within a well defined context, and the upper two levels should
emphasise the writing of complete non-trivial programs” and that “students should first be taught to read programs
before they write programs” (Lister, 2000). Our work was based on this same idea.

The study
This study took place in the Department of Computer Science of the Superior Institute of Engineering - Polytechnic
Institute of Coimbra, in the 4th trimester of the academic year 2007/2008. It involved students enrolled in the
Informatics Engineering degree programme. The experiment was based on the Informatics Technology (IT) course,
which includes learning basic assembly programming. IT is placed in the 1st year, 4th trimester of the programme. We
chose this course mostly because we had full freedom to arrange its activities in order to conduct our experiment. This
was a big advantage, because in most cases it is difficult to convince other teachers to change their habits and materials.
It is also not so common to find programming learning research based on assembly, which was another reason for our
choice. Introductory programming courses often treat the computer as a “black box”, and students have to learn
to program without a clear knowledge of what is inside it. Some of them act as if there were a “magical” hidden mind
inside the computer, which would allow it to infer what is meant but left unsaid in the program, instead of blindly
performing what is written (Du Boulay et al., 1999). So, the idea of Informatics Technology is to “open” the
computer, transforming the “black box” into a “glass box” and allowing students to know how their programs are really
executed.

Course context

As mentioned before, our research took place in the context of an Informatics Technology course. The full
Informatics Engineering degree was reformulated this year, as a consequence of the application of the Bologna
process. One of the bigger changes was the move from a semester organization to a trimester organization. With
this new structure, the number of different courses each student follows simultaneously is half of what it was before,
with each course having double the time load over half the period. The increase in contact hours allowed greater
proximity, integration and student support during the course. It also gave more room for the development of
critical, self-critical and discussion skills. However, as the course lasts only a trimester, the time
for students to develop problem solving skills and, in general, programming skills is much shorter. This is an
important difficulty, as we believe students need time to develop those skills, and a more intensive approach may not
be the most appropriate for most students.
Course reformulation under the Bologna process necessarily involved reflection about teaching
methodologies. As teachers, we had to rethink materials and look for strategies that could work better in this new
context. In the case of Informatics Technology we took special care with the interconnection of knowledge coming
from several previous courses (IT is in the 4th trimester), trying to contribute to a broader base formation, which is
one of the Bologna premises. Consequently, the activities we used were not focused only on the development of
assembly programs as an objective in itself. Student attention was also directed to basic questions related to lower
level concepts. The activities were concerned mainly with instruction behaviour: how instructions are executed
internally and what they mean, not only in a computational sense, but also in comparison with the constructs of other
programming languages (students had some knowledge of procedural programming from previous courses).
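As a purely hypothetical illustration (the paper does not identify the processor or assembly language used in the
course, so a MIPS-style notation is assumed here), the following sketch shows a simple counting loop alongside the
procedural construct it corresponds to, the kind of mapping between language levels that the activities asked
students to reason about:

    # Procedural view (pseudocode), shown as comments:
    #   i = 0; sum = 0;
    #   while (i < 10) { sum = sum + i; i = i + 1; }
              .text
              .globl main
    main:     li    $t0, 0            # i   = 0
              li    $t1, 0            # sum = 0
    while:    slti  $t2, $t0, 10      # $t2 = 1 if i < 10, otherwise 0
              beqz  $t2, endwhile     # condition false: leave the loop
              add   $t1, $t1, $t0     # sum = sum + i
              addiu $t0, $t0, 1       # i = i + 1
              j     while             # jump back to the loop test
    endwhile: li    $v0, 10           # exit (SPIM/MARS system call)
              syscall

Discussing a fragment like this lets students see how a single high-level construct (a while loop) dissolves into a
test, a conditional branch and an unconditional jump, which is precisely the comparison the activities aimed at.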
The trimester approach implies that students have many contact hours each week. This makes it even more
important to use a diversified strategy that includes different types of activities and keeps the time dedicated to
formal classes low enough to avoid students getting tired and bored. Thus, we included diversified activities after
each traditional teaching period. We also decided not to use a large programming project to be developed
essentially outside class (common in many programming courses). Instead, we proposed small tasks to be done in
class under teacher supervision and support. It is our conviction that a semester organization allows more time for
out-of-class activities, which are less convenient in the trimester approach.
However, we consider that this new approach had many positive results, especially in attendance levels and students'
participation in the activities, which clearly increased when compared with previous years. The classroom activities
also led us to try a different assessment approach, described in the next section.
Even though there were positive results concerning attendance and participation, student performance still fell short of
expectations. Possibly the course's shorter time frame and the lack of a larger project had a negative impact on the
students' experience, impairing their ability to deal with more complex tasks.

Study

Considering the pedagogical strategy followed during the course, we decided to structure the final exam according to
Bloom's taxonomy. That is, instead of asking essentially for complete programs (as was usual before), we structured the
exam in three groups. The first group included increasingly difficult questions (according to Bloom's taxonomy) and
was structured in different levels. The first level included four questions that asked students to identify the place
where variables were declared, to indicate the space occupied by some variables, and to say whether a code fragment
included a loop. We can say that this set of questions tested the Knowledge level of Bloom's taxonomy. The second
level included questions that tested whether students were able not only to identify some instructions, but also to
understand their role in a given program. The third level had questions that tested whether students were able to analyse
and understand a complete program. These last two levels essentially tested code reading and understanding skills and
can be classified at the Comprehension level of Bloom's taxonomy.
In the second group of questions students were asked to write a small code fragment within a well-defined context
(to complete a given program). In the last group, students had to implement a complete program from scratch.
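To make this structure concrete, the hypothetical MIPS-style fragment below (the actual exam code and instruction
set are not reproduced in the paper) shows the kind of program the reading questions referred to, with comments
indicating which level each type of question addressed:

            .data
    vec:    .word  3, 7, 2, 9       # level 1: where is vec declared? how much space does it occupy?
    n:      .word  4                # number of elements
    sum:    .word  0                # result
            .text
            .globl main
    main:   la    $t0, vec          # level 2: what is the role of this instruction?
            lw    $t1, n            # load the element count
            li    $t2, 0            # running sum
    loop:   beqz  $t1, done         # level 1: does the fragment contain a loop?
            lw    $t3, 0($t0)       # load the current element
            add   $t2, $t2, $t3     # accumulate
            addiu $t0, $t0, 4       # advance to the next word
            addiu $t1, $t1, -1      # decrement the counter
            j     loop
    done:   sw    $t2, sum          # level 3: what does the whole program compute?
            li    $v0, 10           # exit (SPIM/MARS system call)
            syscall

A group 2 question would then ask for a small fragment in a well-defined context (for example, completing a
program of this size with a missing piece), while a group 3 question would ask for a complete program written from
scratch.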
A total of 145 students (132 male and 13 female) sat the exam, of whom 67 were freshmen (63 male and 4 female).
For the purpose of our study we considered only the freshmen, as we wanted to include only students
who were in the same circumstances when the course started. To analyze student performance, we considered
satisfactory any answer graded above 50%.
In the first level of the first group (four questions) most students had satisfactory results. The percentage of students
with satisfactory answers to these questions is presented in table 2.

Question Results
1a 95.3%
1b 75.0%
1c 68.9%
1d 47.3%
Table 2: Satisfactory results in the first level

Although these questions were all very easy, we can notice a decrease in the percentage of satisfactory answers that
can be associated with a slight increase in the difficulty of the questions. The lower result in the fourth question is
probably connected with the fact that it involved the concept of repetition, which is usually a more difficult topic for
many students.
Results in the second level of questions can be seen in table 3.

Question Results
1e 77.0%
1f 75.7%
1g 64.9%
1h 61.5%
Table 3: Satisfactory results in the second level
In our opinion these questions required a higher level of understanding. The results were slightly lower than in the
first level, but the differences were not large. A much larger difference is found in the third level, as can be
seen in table 4.
Question Results
1i 33.1%
1j 33.8%
Table 4: Satisfactory results in the third level

It is important to stress the much lower percentage of satisfactory marks in the third level, revealing the students'
difficulty in understanding complete programs compared with understanding the role of single instructions within a
given program.
Not surprisingly, results in groups 2 and 3 were much lower, as can be seen in table 5.

Group Results
2 15.73%
3 9.46%
Table 5: Satisfactory results in groups 2 and 3

These low values stress the big gap that exists between understanding a portion of code, building small pieces of
code and constructing a larger program from scratch. Teachers will be familiar with students who can follow the
lectures in a programming course and who can dissect and understand programs, but who are totally unable to write
their own program. They have not mastered all the processes; they can code, but they cannot produce an algorithm
(Jenkins, 2002).

Discussion

Although our experiment was based on the final exam results, we believe it can contribute to a useful discussion
about the type of activities to use in similar courses and their impact on student motivation. As mentioned before, the
activities we proposed to the students during classes were organized according to Bloom's taxonomy. This is quite
different from an approach where most activities ask students to develop complete programs that solve some
proposed problem. Many students are not able to create those programs, which instils a feeling of inability that often
leads to lack of motivation and dropout (programming courses typically have high dropout rates compared
with other courses). The approach we used, including activities of lower difficulty, gave weaker
students a sense that they were able to do some activities correctly, which had a positive impact on the course.
The positive impact we felt in the course was measured first of all by the higher attendance rate in the
classroom in comparison with previous years. The feedback received from students during classes also
indicated higher interest, demonstrated by the level of participation. Many students commented that they were more
motivated by this approach. They felt able to understand some programming and to answer some questions
correctly, which gave them a sense of confidence. Even though the progress was slow-paced, students felt
more confident and able to learn (some) programming. Students also commented that they became aware that
learning to program is a process consisting of different stages, and not a single stage that is impossible to reach.
They understood that it is a process that takes time and requires a lot of effort and intensive study methods in order
to obtain positive results.
The higher level of confidence and motivation was also shown by the larger number of students who sat
the exam. This number was higher not only when compared with previous years, but also when compared with other
courses they had in the same trimester. The students knew in advance that the final exam would have a group of
questions that would not require them to create complete programs. In no way did we tell them that answering those
types of questions correctly would be enough to pass (in fact, we said the contrary). Just the same, they felt they had
a better chance of passing. In the end, the results were not very different from those of previous years. Of course,
group 2 and 3 questions had a higher weight in the final grade, and many students failed despite having good marks
in the group 1 questions.
We consider that students must be able to develop complete programs, and not only be able to read and interpret
individual code fragments. We believe that exams and other assessment methods should first of all test this
competence. However, we think that the proposed approach could be used in classes, in intermediate formative tests
and in some parts of exams.
In our course we initially intended to give intermediate exams following this approach. However, the short
time available for courses in the trimester-based structure prevented us from doing so. We think that
this structure is not beneficial for several courses, particularly those that involve learning to program. Students need
more time to fully develop their programming abilities and to take advantage of the increased motivation
and participation levels we noticed as a consequence of the approach we followed.
Finally, we consider that the approach we followed has clear pedagogical advantages for weaker students, who can
become more confident and motivated. For teachers, it is easier to identify each student's doubts and misconceptions
and to help them overcome those difficulties. Students' specific difficulties are harder to identify when
they are asked to write complete programs, due to the multiple difficulties associated with that task.

Conclusions
It is well known that learning to program is difficult for many students. In the past we have studied different
contributing factors. In this paper we analysed the possible influence of organizing learning activities according to
an increasing difficulty level following Bloom's taxonomy. The final exam also had a group of questions organized
following the same approach.
Although the final results were similar to those obtained in previous years, we noticed that this methodology produced
an increase in motivation and confidence, especially among weaker students. We believe that weaker students would
need more time to develop the skills necessary to be successful in the course. In fact, the final
objective remains: students should be able to develop programs that solve specific problems. Our future plan is
to apply the proposed methodology in a semester-long course, to see whether, with more time, students are able to
reach higher ability levels. Although we defend the traditional method of assessing students, we believe that teachers
can contribute to an improvement in students' learning by using a taxonomy as a reference for guiding the type of
activities used in the classroom. With this, teachers will be able to better diagnose students' difficulties and guide
them with suitable activities.

Acknowledgements
The authors would like to thank all students who participated in the experiment, and especially the management
council of the Coimbra Superior Institute of Engineering and professors Cristiana Areias and Cristina Chuva.

References
Agarwal, K. & Agarwal, A. (2004) Do we need a separate assembly language programming course? Journal of
Computing Sciences Colleges, 19 (4), 246-251.
Anderson, L.W., Krathwohl, D.R., Airasian, P.W., Cruikshank, K.A., Mayer, R.E., Pintrich, P.R., Raths, J. &
Wittrock, M.C. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of
educational objectives. Addison Wesley Longman, New York.
Biggs, J.B. & Collis, K.F. (1982). Evaluating the quality of learning: The SOLO taxonomy (Structure of the
Observed Learning Outcome), Academic Press, New York.
Bloom, B.S., Engelhart, M.D., Furst, E.J., Hill, W.H. & Krathwohl, D.R. (1956). Taxonomy of Educational
Objectives: Handbook 1 Cognitive Domain. Longmans, Green and Co Ltd, London.
Buckner, K. (2006). A non-traditional approach to an assembly language course. Journal of Computing Sciences in
Colleges, 22 (1), 179-186.
Claver, J. M., Castillo, M. I. and Mayo, R. (2004). Improving Instruction Set Architecture Learning Results.
WCAE'04, Workshop on Computer Architecture Education. Munich, Germany. ACM Press, New York, 60-66.
Cmelik, B. & Keppel, D. (1994). Shade: a fast instruction-set simulator for execution profiling. ACM SIGMETRICS
Conference on Measurement and Modeling of Computer Systems. Nashville, Tennessee, United States. ACM
Press, New York, 128-137.
Du Boulay, B., O'Shea, T. and Monk, J. (1999). The black box inside the glass box. International Journal of Man-
Machine Studies, 51(2), 256-277.
Fuller, U., Johnson, C.G., Ahoniemi, T., Cukierman, D., Hernán-Losada, I., Jackova, J., Lahtinen, E., Lewis, T.,
Thompson, D. M., Riedesel, C. & Thompson, E. (2007). Developing a computer science-specific learning
taxonomy. ACM SIGCSE Bulletin, 39 (4), 152-170.
Gomes, A., Carmo, L., Bigotte, E. & Mendes, A. J. (2006). Mathematics and programming problem solving. 3rd E-
Learning Conference – Computer Science Education, (CD-ROM), Coimbra, Portugal.
Gomes, A. & Mendes, A. J. (2007). Learning to program - difficulties and solutions. International Conference on
Engineering Education – ICEE’07 (CD-ROM), Coimbra, Portugal.
Gomes, A. & Mendes, A.J. (2008). A study on student’s characteristics and programming learning. ED-MEDIA’08 -
World Conference on Educational Multimedia, Hypermedia & Telecommunications. Wien, Austria. Chesapeake,
VA: AACE, 2895-2904.
Gray, W.D., Goldberg, N.C. & Byrnes, S.A. (2007). Novices and programming: Merely a difficult subject (why?) or
a means to mastering metacognitive skills? [Review of the book Studying the Novice Programmer]. Journal of
Educational Research on Computers, 9 (1), 131-140.
Hsu, W.T. (2000). Experiences integrating research tools and projects into computer architecture courses. Workshop
on Computer architecture education. Vancouver BC. ACM Press, New York.
Hunter, S. B. (2005). Teaching assembly language without using (as much) assembly language. Journal of
Computing Sciences in Colleges, 20 (5), 68-78.
Jenkins, T. (2002). On the difficulty of learning to program. 3rd Annual LTSN_ICS Conference. Loughborough
University, United Kingdom. The Higher Education Academy, 53-58.
Lahtinen, E., Mutka, K.A. & Jarvinen, H.M. (2005). A Study of the difficulties of novice programmers. 10th Annual
SIGCSE Conference on Innovation and Technology in Computer Science Education. Monte da Caparica,
Portugal. ACM Press, New York, 14-18.
Lister, R. (2000). On Blooming First Year Programming, and its Blooming Assessment. Australasian Conference on
Computing Education. Melbourne, Australia. ACM Press, New York, 158-162.
Pedrosa, I., Mendes, A.J. & Rela, M. (2006). edu.LMC and other LMC simulation approaches: contributions to
Computer Architecture Education using LMC Paradigm. Education for the 21st Century — Impact of ICT and
Digital Resources. Springer Boston, Boston, 393-397.
Powers, K. D. (2004). Teaching computer architecture in introductory computing: why? and how? in 6th Conference
on Australasian Computing Education. Dunedin, New Zealand. Australian Computer Society, Darlinghurst,
Australia, 255-260.
Silverman, R. & Martin, M. J. (2008). Design of a Pedagogical Assembly Language and Classroom Experiences.
Journal of Computing Sciences in Colleges, 23 (4), 208-214.
Sloane, K.D. & Linn, M.C. (1988). Instructional Conditions in Pascal Programming Classes. In R. E. Mayer (Ed.),
Teaching and learning computer programming, (pp.137-152). Lawrence Erlbaum Associates, Hillsdale, New
Jersey.
Soloway, E. & Spohrer, J. (1989). Studying the Novice Programmer. Lawrence Erlbaum Associates, Hillsdale, New
Jersey.
Srivastava, A. & Eustace, A. (1994). ATOM: a system for building customized program analysis tools.
Programming Language Design and Implementation. Orlando, Florida, United States. 196-205.
Vollmar, K. & Sanderson, P. (2005). A MIPS assembly language simulator designed for education. Journal of
Computing Sciences in Colleges, 21 (1), 95-101.
