


Publisher: Routledge

Comparative Education
Publication details, including instructions for authors and
subscription information:
http://www.tandfonline.com/loi/cced20

Knowledge and numbers in education

Harvey Goldstein (a) & Gemma Moss (b)
(a) University of Bristol
(b) Institute of Education, University of London
Published online: 20 Jun 2014.


To cite this article: Harvey Goldstein & Gemma Moss (2014) Knowledge and numbers in education,
Comparative Education, 50:3, 259-265, DOI: 10.1080/14681366.2014.926138

To link to this article: http://dx.doi.org/10.1080/14681366.2014.926138

Comparative Education, 2014
Vol. 50, No. 3, 259–265, http://dx.doi.org/10.1080/14681366.2014.926138

EDITORIAL
Knowledge and numbers in education

This special issue takes as its core theme the relationship between knowledge and
numbers in education, with a particular emphasis on the diverse forms of knowledge
that emerge from the collection and use of numerical data within education, and the
knowledge communities they help create, whose members understand, analyse and
respond to the data in different ways. Numerical data encode more or less information depending
upon how they are formed, with which context of use in mind, according to whose
interpretative rules. Statistics as a knowledge field sets out very precise conditions
for the treatment of the numerical data it collects and then analyses. Once the data
are assembled and stabilised as objects of (empirical) investigation, statisticians
would expect to reflect on the interplay between the objective structure of mathematics
that allows logical inferences to proceed from clearly stated assumptions and the nature
of those assumptions encoded in the data. How much one might be entitled to conclude
from such inferences is assessed in this light. The robustness of the data, the aptness of
the method and the strength of the claims made will all be rigorously tested in debate
within the field. But such data do not remain within the statistical community. On the
contrary they travel out into public, policy and educational domains that appropriate
them for other purposes and test them in different ways.
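To make this interplay concrete, the short sketch below is our own illustration, using simulated data rather than material from any of the papers in this issue. It shows how the same set of test scores supports different statements of uncertainty depending on what is assumed about how the data were generated: treating pupils as independent observations, or recognising that they are clustered within schools.

```python
# Illustrative sketch only (simulated data): the same scores yield different
# uncertainty estimates under different assumptions about their structure.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 50 schools x 30 pupils with a genuine school-level effect,
# so pupils within a school are not independent of one another.
n_schools, n_pupils = 50, 30
school_effects = rng.normal(0, 5, size=n_schools)              # between-school spread
scores = 50 + school_effects[:, None] + rng.normal(0, 10, size=(n_schools, n_pupils))

overall_mean = scores.mean()

# Assumption A: every pupil is an independent observation.
se_independent = scores.std(ddof=1) / np.sqrt(scores.size)

# Assumption B: schools are the effective units, so base the standard error
# on the variation between school means (a simple cluster-aware estimate).
school_means = scores.mean(axis=1)
se_clustered = school_means.std(ddof=1) / np.sqrt(n_schools)

print(f"mean score               : {overall_mean:.2f}")
print(f"SE assuming independence : {se_independent:.2f}")
print(f"SE respecting clustering : {se_clustered:.2f}")  # noticeably larger
```

The mathematics in both calculations is impeccable; what one is entitled to conclude differs because the assumptions encoded in the data differ.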
This special issue sets out to explore this dynamic at work by reviewing the for-
mation, interpretation and use of statistical data in a range of different settings where
judgements about the quality of literacy and education are formed. The papers have
been developed from the standpoint of different disciplinary traditions which consider
statistical data through the lens of their field’s particular interests, methods and analytic
concerns, shaped by their longer institutional and discursive histories (Manzon 2009).
How this broader epistemological landscape itself shifts is a matter that Cowen (2014)
explores in some depth, taking as his point of departure the ways in which comparative
traditions have variously dealt with the issue of ‘outcomes’ and ‘transferability’ from
one education system to another at different points in time.
Historical studies in the social sciences have tracked the interrelationship between
the emergence of statistical data and the formation of the state (Hacking 1990; Porter
1995; Desrosieres 2002; Vincent 2014); they have identified how statistical accounts
of the social world shape and are shaped by the development of different statistical tech-
niques and their uptake in different social settings (Wooldridge 1994; Stigler 2002). But
statistical understandings do not stand still. They are social phenomena that change
their form and purpose over time as they are harnessed in different contexts of use
(Lawn 2013; Moss 2014). The propositions they encode about the social world are
not immutable but vary in line with broader discursive formations. Where once they
played an important role in informing the state about its citizens, statistical data now
also represent the state’s activities to the citizen, in terms that the citizen is invited to
judge (Byrne 2002, 46–49; Novoa and Yariv-Mashal 2003; Landahl and Lundahl
2013). In part this happens through what has been described as ‘governing by
numbers’, the processing of the data in a series of very public acts (Grek 2009; Ozga
et al. 2011; Davis et al. 2012).
Much of the attention to numerical data in contemporary education has focused on
their rapid uptake as policy tools and the re-ordering between different communities of
practice in education that has followed from their use (Pereyra et al. 2009; Lingard and
Sellar 2013). The rise to prominence of numerical data in education policy has been
striking. They now act as the starting point for very visible and public debates on the
quality of national education systems and their worth (Waldow, Takayama, and
Sung 2014); as tools for governing education through oversight of the flows of data
the system produces (Ozga et al. 2011); and as the means of auditing ever more
closely the complex social processes of teaching and learning that the data attempt to
encode (Grek et al. 2009; Meyer and Benavot 2013). When embedded in strong
accountability frames coupled with institutional practices of inspection and oversight,
such as those now in place in England, the data make it possible to sustain a high level
of external control over what teachers do, the choices they make about the curricula
they follow and the pedagogy they enact (Ball, Maguire, and Braun 2012; Bradbury
2014).
Comparative studies have drawn attention to how international large-scale assess-
ment data have become a key means of levering policy change both within and
across different national education systems (Raffe 2013; Crossley 2014). Whilst the
data provide a common framework for a discussion based on standardised terms, such framing
ignores the lack of standardisation in the processes that generate the numbers. In policy-
makers’ hands, the data act as running records of achievements in the past measured
against the potential for improvements in the future. They are used to sanction action
and hasten change, opening up opportunities for those in the business of prescribing
policy solutions at system level to influence the direction policy takes next (Barber
and Mourshed 2007; Ball and Exley 2010; Kamens 2013; OECD 2013; Waldow,
Takayama, and Sung 2014). Belief in the capacity of such data to improve system per-
formance, change pupil outcomes or transform system efficiency underpins the advice
the OECD gives national governments about how to reform what they do. This is
reflected in the discourse that governments themselves employ as they bring their
system data into line with OECD advice (Luke 2010; OECD 2011). From within
this discourse the primary focus is on collecting the data efficiently: caveats about
their aptness for purpose are treated as ‘technical’ issues that can in principle be
solved while the practical business of running an education system continues.
As the data circulate new epistemic communities form (Davis Cross 2013). Com-
parative studies have begun to track the interactions that happen around the data
between policymakers and policy brokers, producing new forms of practical and
applied knowledge, or ‘know-how’, that circulate at speed. Such interactions happen
at some distance from and outside of the older checks and balances on knowledge for-
mation that originated within the institutional structures of the universities and other
professional communities that were once committed to thinking about pedagogy in
slow time (Ozga 2014; Schriewer 2014). New ways of making and disseminating
knowledge intertwine with the data, re-ordering the space and time in which education
policy gets made (Lawn and Lingard 2002; Moss 2009).
The policy discourse is about transparency, about what the numbers can explain and
predict. The data are presented as speaking for themselves to the public body at large.
Yet numbers are harder to read than this discourse claims. They are abstractions, sub-
suming a range of potential differences into a small number of indicators to be used in
models that set parameters to how the data can be understood (Desrosieres 2002; Gold-
stein 2004). Analytic operations lead to summaries of the data that obscure some
aspects to reveal others, governed by the trade-offs and choices the analyst makes
along the way (Byrne 2002; Desrosieres 2010). These decision points and processes
may not be immediately obvious to the non-specialist, and the tools the expert commu-
nity deploys to construct the data receive little scrutiny in public discourse (Goldstein
and Leckie 2008; Lauder et al. 2010). Those who appropriate the data to meet an
immediate policy need often pay scant heed to what the indicators encode and conver-
sely what they in practice ignore (Goldstein 2004; Merry 2011). In pursuit of their own
goals, policymakers may not pause to consider the limits set by the models in use or
reflect on what has been made thinkable or unthinkable through the choice of statistical
design (Ballestero 2012). In policy circles this often matters far less than the strength of
confirmation the data appear to offer for a given point of view (see Levačić 2014). This
is knowledge exchange as risk management: the statistical data strengthen confidence
in policy choices politicians have already made (see Martens and Niemann 2010;
Waldow, Takayama, and Sung 2014).
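A deliberately simplified illustration of how the analyst's choices re-order what the numbers appear to say: the sketch below ranks the same five hypothetical schools in two ways, by raw mean outcome and by a crude adjustment for intake. The figures are invented, and the single-line adjustment stands in for, rather than reproduces, the multilevel methods used in the school-effectiveness literature.

```python
# Hypothetical figures only: how the choice of summary re-orders a 'league table'.
import numpy as np

schools = ["A", "B", "C", "D", "E"]
mean_intake = np.array([62.0, 55.0, 48.0, 45.0, 40.0])   # prior attainment of pupils
mean_outcome = np.array([71.0, 66.0, 63.0, 62.0, 57.0])  # later test scores

# Choice 1: summarise each school by its raw outcome.
rank_raw = [schools[i] for i in np.argsort(-mean_outcome)]

# Choice 2: fit one straight line (outcome ~ intake) across schools and
# summarise each school by its residual, i.e. 'progress' relative to intake.
slope, intercept = np.polyfit(mean_intake, mean_outcome, 1)
residuals = mean_outcome - (slope * mean_intake + intercept)
rank_adjusted = [schools[i] for i in np.argsort(-residuals)]

print("ranked by raw mean       :", rank_raw)       # rewards favourable intakes
print("ranked by intake-adjusted:", rank_adjusted)  # a different ordering
```

Neither ordering is simply 'the data speaking for themselves'; each reveals one aspect of the schools' performance while obscuring another.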
Desrosieres (2010) draws a useful distinction between investigating statistics as
tools of proof and statistics as tools for governance. The first refers to the procedures
through which the data are formed, the second to the uses to which they are put as gov-
ernments act. To redress the comparative neglect of tools of proof as the object of socio-
logical enquiry, Desrosieres outlines what he describes as the upstream stages of
quantification, the making of numbers from social phenomena, which then compose
the variables that drive the models in use. He distinguishes three different stages in
the construction of statistical data:

1) that of quantification properly speaking, i.e. the making of numbers, 2) that of the uses
of numbers as variables, and finally, 3) the prospective inscription of variables in more
complex constructions, models. (114)

The first level, the translation of social phenomena into indicators and measures, has
attracted critical commentary in international development circles and from education-
ists, with questions asked about whether the indicators deployed are fit for purpose and
how else they might be composed (Alexander 2008; Davis, Fisher, and Kingsbury
2012; Winthrop and Anderson Simons 2013; Unterhalter 2014). Further transform-
ations take place as the numbers that vary from person to person, the variables, enter
statistical models which will explore their association. These are all important elements
in statistical design. They play a part in what Desrosieres (2010) describes as a chain of
quantification in which construction of the data creates greater levels of abstraction that
feed back into social understanding and make the data meaningful. This is itself a social
act. From this perspective statistical analysis cannot occupy the position of the neutral
informer, standing outside of the current terms of debate and ‘objectively’ describing
the data it finds (Byrne 2002; Goldstein 2004). But how to bring the data’s construction
and the particular sets of interests that underpin the choices made back into public view
remains an open question.
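The sketch below walks through Desrosieres' three stages in miniature, using invented registry-style records loosely reminiscent of the sources Vincent discusses. The coding rule at stage one, the detachment of the resulting variable at stage two and the model at stage three are all assumptions made for illustration, not findings from any paper in this issue.

```python
# A toy chain of quantification (invented data throughout):
# (1) make numbers from social phenomena, (2) treat them as variables,
# (3) inscribe the variables in a model.
import numpy as np

# Raw social traces: whether each party to a marriage signed the register
# or made a mark, by district.
records = {
    "North": ["signed", "mark", "signed", "signed", "mark", "signed"],
    "South": ["mark", "mark", "signed", "mark", "mark", "signed"],
    "East":  ["signed", "signed", "signed", "mark", "signed", "signed"],
}

# Stage 1: quantification proper -- the making of numbers. The coding rule
# 'signed = literate' is itself a contestable interpretive choice.
literacy_rate = {d: np.mean([1 if r == "signed" else 0 for r in rows])
                 for d, rows in records.items()}

# Stage 2: the numbers become a variable, detached from the registers.
districts = list(literacy_rate)
literacy = np.array([literacy_rate[d] for d in districts])

# Stage 3: the variable enters a model alongside others (here an invented
# schooling-provision figure); the model's form frames what can be said.
schooling = np.array([0.55, 0.30, 0.70])   # hypothetical places per child
slope, intercept = np.polyfit(schooling, literacy, 1)

print("district literacy rates:", {d: round(v, 2) for d, v in literacy_rate.items()})
print(f"fitted relation: literacy = {intercept:.2f} + {slope:.2f} * schooling (approx.)")
```

Each stage adds a further layer of abstraction, and each embeds choices that rarely travel with the headline number.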
Against this background, and through the data they select, the papers in this special
issue provide different accounts of the place of numerical data in public life, their
diverse roles within different epistemic traditions, their cooption into policy, and
their making, remaking and (re)interpretation as part of professional practice. In
general, the papers shift attention from how statistics as an academic discipline
organises its own affairs (an issue we return to in the Epilogue) to how outsiders and
non-specialists encounter and make sense of the data, including in this latter category
the policymakers and administrators who use the data to act in the world.
Vincent’s paper presents a historical study of the Registrar-General’s reports pub-
lished during the nineteenth century, in which the registers of births, deaths and mar-
riages came to be seen as a source of data for counting literacy and illiteracy in the
population at large. Vincent focuses on the contemporary arguments over the credibility
of the data as a source for measuring illiteracy, and how interpretation of the data
changed over time. The paper considers the continuities between the conundrums
facing administrators in the past and the dilemmas facing those trying to calculate
the prevalence of literacy at a national level now. There is a persisting debate about
how far it is possible to isolate the individual’s level of literacy from the community
of which they are part.

Cowen provides a theoretical overview from within the field of comparative edu-
cation that raises key questions about comparative education’s contemporary function
and role. Core organising principles that have variously focused discussion – outcomes
from education, transferability and methods – are tracked back over time. From this per-
spective the prominence of the Programme for International Student Assessment
(PISA) data in public discourse becomes emblematic of wider shifts in the epistemo-
logical landscape from which comparative education does not stand apart. In this
way, the paper relocates discussion to the interaction between the epistemological
structuring of the field, the terms of debate it musters and the wider social contexts
against which it operates.
Waldow, Takayama and Sung’s paper reflects on the construction of reference
societies that large-scale surveys such as PISA and the rankings they generate help
create. Working within a comparative tradition that focuses on the phenomenon of
policy borrowing as a locally responsive practice that re-shapes what it borrows for
local ends, the paper compares newspaper coverage of Asian countries’ PISA
success in Germany, Australia and South Korea between 2001 and 2012. This raises
questions about how far ‘externalisation’ of education problems and solutions helps
solidify the terms of the educational ‘crisis’ that each country’s policy discourse
already revolves around.
Bradbury’s paper explores the role attainment data currently play in the early years
curriculum in England, drawing on ethnographic case studies in two early years set-
tings. She reflects on how teachers manage an assessment process that tries to track
many different aspects of children’s classroom behaviour and audit the learning oppor-
tunities they participate in, drawing attention to some of the paradoxical responses this
generates from teachers. What this means for the reliability and validity of the data that
are used to hold early years teachers to account and how such a cumbersome and
unwieldy system of assessment evolved are strong themes in the discussion.
Levačić’s paper takes as its starting point the World Bank’s adoption of per-student
funding as a key policy tool designed to create more efficient use of resources in
education and considers its uptake as a means of achieving this end in transition
countries that were until recently part of the Soviet bloc. Drawing on the economics
of education and the policy implementation literature, and writing from the analyst’s
perspective, the paper considers the range of permutations in the calculations designed
to balance efficiency with equity, alongside the political dilemmas that ensue in the
wider social context as the funding formulae are introduced. The consequences of
moving from older patterns of resourcing to the new have different implications for
equity and access in different settings that the formulae cannot always predict or necess-
arily solve.
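By way of illustration only, a per-student allocation typically combines a base amount per pupil with weights and supplements intended to protect equity; the efficiency/equity tension Levačić analyses is visible in how such parameters are set. The figures and weights below are invented and stand in for no particular country's formula, nor for the World Bank's.

```python
# A generic sketch of a per-student funding formula (all figures invented).
BASE_PER_PUPIL = 1_000           # currency units per enrolled pupil
SMALL_SCHOOL_THRESHOLD = 200     # below this enrolment, add a fixed supplement
SMALL_SCHOOL_SUPPLEMENT = 50_000
NEEDS_WEIGHT = 0.4               # extra 40% per pupil with identified needs

def allocate(enrolment: int, pupils_with_needs: int) -> float:
    """Return a school's allocation under the toy formula."""
    allocation = BASE_PER_PUPIL * enrolment
    allocation += BASE_PER_PUPIL * NEEDS_WEIGHT * pupils_with_needs
    if enrolment < SMALL_SCHOOL_THRESHOLD:
        allocation += SMALL_SCHOOL_SUPPLEMENT
    return allocation

# Raising the small-school supplement protects access in sparse areas but
# weakens the incentive to consolidate provision; lowering it does the reverse.
print(allocate(enrolment=150, pupils_with_needs=20))   # small rural school
print(allocate(enrolment=900, pupils_with_needs=90))   # large urban school
```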
Moss’s paper uses a range of interdisciplinary tools to consider how literacy attain-
ment data were formed in two contrasting periods in the English educational system, the
1860s and the 1950s, and the diverse purposes they were expected to serve as admin-
istrative tools. The account sets the data in their social and material contexts of use,
exploring the form the data took and how their presumed sufficiency for the task
was shaped by contemporary explanations for what governed differences in outcomes
from education. In each case the technologies of calculation differed, as did their impact
on educational practice. The paper describes the uptake and the eventual demise of the
particular combination of data and discourse which for a time held sway.
The strength of this collection is its reflection on the diversity of uses to which
numbers can be put. This is now partly driven by the intervention of global organis-
ations such as OECD and UNESCO which are using numbers to shape the discourse
within and between countries. It is, however, important to understand that despite
the current significance of these moves, there is a broader perspective on the role of
numbers in education that should not be lost and to which we will return in the epilogue.

Harvey Goldstein
University of Bristol

Gemma Moss
Institute of Education, University of London

References
Alexander, R. 2008. Education for All, the Quality Imperative and the Problem of Pedagogy.
CREATE Research Monograph no. 20. Brighton: University of Sussex.
Ball, S., and S. Exley. 2010. “Making Policy with ‘Good Ideas’: Policy Networks and the
‘Intellectuals’ of New Labour.” Journal of Education Policy 25 (2): 151–169.
Ball, S. J., M. Maguire, and A. Braun. 2012. How Schools do Policy: Policy Enactment in the
Secondary School. London: Routledge.
Ballestero, Andrea. 2012. “Transparency Short-Circuited: Laughter and Numbers in Costa
Rican Water Politics.” PoLAR: Political and Legal Anthropology Review 35 (2): 223–241.
Barber, M., and M. Mourshed. 2007. How the World’s Best-performing School Systems Come
Out on Top. London: McKinsey.
Bradbury, A. 2014. “Early Childhood Assessment: Observation, Teacher ‘Knowledge’ and the
Production of Attainment Data in Early Years Settings.” Comparative Education 50 (3):
322–339.
Byrne, David. 2002. Interpreting Quantitative Data. London: Sage.
Crossley, M. 2014. “Global League Tables, Big Data and the International Transfer of
Educational Research Modalities.” Comparative Education 50 (1): 15–26.
Cowen, R. 2014. “Ways of Knowing, Outcomes, and ‘Comparative Education’: Be Careful
What You Pray For.” Comparative Education 50 (3): 282–301.
Davis, K., A. Fisher, and B. Kingsbury, eds. 2012. Governance by Indicators: Global Power
Through Classification and Rankings. Oxford: Oxford University Press.
Davis Cross, M. K. 2013. “Rethinking Epistemic Communities Twenty Years Later.” Review of
International Studies 39 (1): 137–160.
Desrosieres, A. 2002. The Politics of Large Numbers. A History of Statistical Reasoning.
Cambridge, MA: Harvard University Press.
Desrosieres, A. 2010. “A Politics of Knowledge-Tools: The Case of Statistics.” In Between
Enlightenment and Disaster: Dimensions of the Political Use of Knowledge, edited by L.
Sangolt, 111–129. Brussels: Peter Lang.
Goldstein, H. 2004. “Education for All: The Globalization of Learning Targets.” Comparative
Education 40 (1): 7–14.
Goldstein, H., and G. Leckie. 2008. “School League Tables: What Can They Really Tell Us?”
Significance 5 (2): 67–69.
Grek, S. 2009. “Governing by Numbers: The PISA ‘Effect’ in Europe.” Journal of Education
Policy 24 (1): 23–37.
Grek, S., M. Lawn, B. Lingard, J. Ozga, R. Rinne, C. Segerholm, and H. Simola. 2009.
“National Policy Brokering and the Construction of the European Education Space in
England, Sweden, Finland and Scotland.” Comparative Education 45 (1): 5–21.
Hacking, I. 1990. The Taming of Chance. Cambridge: Cambridge University Press.
Kamens, D. H. 2013. “Globalization and the Emergence of an Audit Culture: PISA and the
Search for ‘Best Practices’ and Magic Bullets.” Chapter 5. In PISA, Power and Policy:
The Emergence of Global Educational Governance, edited by H. D. Meyer and
A. Benavot, 117–140. Oxford: Symposium Books.
Landahl, J., and C. Lundahl. 2013. “(Mis)Trust in Numbers: Shape Shifting and Directions in
the Modern History of Data in Swedish Educational Reform.” In The Rise of Data in
Education Systems: Collection, Visualization and Use, edited by M. Lawn, 57–78.
Didcot: Symposium Books.
Lauder, H., D. Kounali, T. Robinson, and H. Goldstein. 2010. “Pupil Composition and
Accountability: An Analysis in English Primary Schools.” International Journal of
Educational Research 49 (2–3): 49–68.
Lawn, M. ed. 2013. The Rise of Data in Education Systems: Collection, Visualization and Use.
Didcot: Symposium Books.
Lawn, M., and B. Lingard. 2002. “Constructing a European Policy Space in Educational
Governance: The Role of Transnational Policy Actors.” European Educational Research
Journal 1 (2): 290–307.
Levačić, R. 2014. “Using Quantitative Data in World Bank per Student Funding Reform
Projects: Data, Designs and Dilemmas in Transition Countries.” Comparative Education
50 (3): 340–356.
Lingard, B., and S. Sellar. 2013. “‘Catalyst Data’: Perverse Systemic Effects of Audit and
Accountability in Australian Schooling.” Journal of Education Policy 28 (5): 634–656.
Luke, A. 2010. “Will the Australian Curriculum Up the Intellectual Ante in Primary
Classrooms?” Curriculum Perspectives 30 (3): 59–64.
Manzon, M. 2009. “The Necessary and the Contingent: On the Nature of Academic Fields and
of Comparative Education.” Comparative Education Bulletin 9: 5–22.
Martens, K., and D. Niemann. 2010. Governance by Comparison – How Ratings & Rankings
Impact National Policy-making in Education. TranState Working Papers 139. University
of Bremen.
Merry, S. E. 2011. “Measuring the World: Indicators, Human Rights, and Global Governance.”
Current Anthropology 52 (S3): S83–S95.
Meyer, H. D., and A. Benavot, eds. 2013. PISA, Power, and Policy: The Emergence of Global
Educational Governance. Oxford: Symposium Books.
Moss, G. 2009. “The Politics of Literacy in the Context of Large-Scale Education Reform.”
Research Papers in Education 24 (2): 155–174.
Moss, G. 2014. “Putting Literacy Attainment Data in Context: Examining the Past in Search of
the Present.” Comparative Education 50 (3): 357–373.
Novoa, A., and T. Yariv-Mashal. 2003. “Comparative Research in Education: A Mode of
Governance or a Historical Journey?” Comparative Education 39 (4): 423–439.
OECD. 2011. Lessons from PISA for the United States, Strong Performers and Successful
Reformers in Education. Paris: OECD.
OECD. 2013. Education at a Glance 2013: OECD Indicators. Paris: OECD.
Ozga, J. 2014. “Knowledge, Inspection and the Work of Governing.” Sisyphus 2 (1): 16–38.
Ozga, J., P. Dahler-Larsen, C. Segerholm, and H. Simola, eds. 2011. Fabricating Quality in
Education: Data and Governance in Europe. London: Routledge.
Pereyra, M. A., G. González, A. Luzón, and M. Torres. 2009. PISA under Examination:
Changing Knowledge, Changing Tests and Changing Schools. Granada: CESE.
Porter, T. M. 1995. Trust in Numbers. The Pursuit of Objectivity in Science and Public Life.
Princeton, NJ: Princeton University Press.
Raffe, D. 2013. “What Is the Evidence for the Impact of National Qualifications Frameworks?”
Comparative Education 49 (2): 143–162.
Schriewer, J. 2014. “Neither Orthodoxy Nor Randomness: Differing Logics of Conducting
Comparative and International Studies in Education.” Comparative Education 50 (1): 84–101.
Stigler, S. M. 2002. Statistics on the Table: The History of Statistical Concepts and Methods.
Cambridge, MA: Harvard University Press.
Unterhalter, E. 2014. “Measuring Education for the Millennium Development Goals:
Reflections on Targets, Indicators, and a Post-2015 Framework.” Journal of Human
Development and Capabilities: A Multi-Disciplinary Journal for People-Centered
Development. doi:10.1080/19452829.2014.880673.
Vincent, D. 2014. “The Invention of Counting: The Statistical Measurement of Literacy in
Nineteenth-Century England.” Comparative Education 50 (3): 266–281.
Waldow, F., K. Takayama, and Y. K. Sung. 2014. “Rethinking the Pattern of External Policy
Referencing: Media Discourses Over the ‘Asian Tigers’ PISA Success in Australia,
Germany, and South Korea.” Comparative Education 50 (3): 302–321.
Winthrop, R., and K. Anderson Simons. 2013. “Can International Large-Scale Assessments
Inform a Global Learning Goal? Insights from the Learning Metrics Task Force.”
Research in Comparative and International Education 8 (3): 279–298.
Wooldridge, A. 1994. Measuring the Mind: Education and Psychology in England, c.1860–
c.1990. Cambridge: Cambridge University Press.
