
1. Introduction

Some forty years ago, Donald T. Campbell succinctly outlined the goal of an experimental society in which rational, empirical evidence would be a crucial basis for societal decision-making:

The United States and other modern nations, Campbell proclaimed, should be ready for an experimental approach to social reform, an approach in which we try out new programs designed to cure specific social problems. (Campbell, 1969)

Forty years later, the most popular and powerful politician on the planet (President Barack Obama) chose to express the following point in what may have been the most listened-to political address of all time:

The question we ask today is not whether our government is too big or too small, but whether it works, whether it helps families find jobs at a decent wage, care they can afford, a retirement that is dignified.

Where the answer is yes, we intend to move forward. Where the answer is no, programs will end. (Inaugural address).

In reviewing the current status and future prospects for the evaluation of Grants and Contributions, I thought it might be helpful to situate the problem in a broader historical context. The first quote is by no means the beginning of program evaluation, but it condenses some of the initial promise of the field. The idea of rational management based on empirical data can be traced back at least to the encyclopaedists and the notion of political arithmetic. In fact, statistics initially referred to measures of state, and there is a rich tradition of what might be called technocratic approaches to societal management.

In the sixties, the current movement found its roots in the search for a great society in the United States and, a little later, the just society in Canada. Social science was to be harnessed to the pursuit of not just a better standard of living but a better quality of life. Along with social indicators, the fledgling evaluation research movement was to substitute hard empirical evidence of what works and what doesn't for the vagaries of less informed political choices. I quote Donald Campbell because he outlined arguably the most sophisticated model of how evaluation would assess the counterfactual hypothesis of what the world would have looked like if the program hadn't existed. The search for falsification was in keeping with his preference for experimentalism and his epistemological debt to Karl Popper. Sometimes we would use non-experimental designs, but only while adjusting for and recognizing the threats to internal and external validity these entailed.

In the full flush of what Daniel Bell called the emergence of a knowledge-based post-industrial society, this was an extremely alluring idea. No longer would we be slaves to the irrational caprices of impressions, intuitions, and vested interests. Hard evidence and rationalism would supplant these clearly inferior methods of deciding.

So here we are, roughly four decades later, and we should be encouraged that no less a figure than President Obama sought to remind the world of the importance of this idea in the short time he had to make what will probably be his most listened-to political statement. But a cynic might wonder why it was necessary to make this plea some four decades later if it was such an obvious idea in the middle part of the last century. Shouldn't evaluation research have already supplanted the more capricious and irrational forms of decision-making it was intended to replace?

Clearly the answer is no, and while we should not abandon this alluring idea, we should be clear-eyed about the fact that after forty years it is hard to make a clear case that we have made much real progress toward Campbell's original idea of an experimental society. So are we packing our bags for a trip we are never going to take? What are the main obstacles preventing causal evidence of the incremental impacts of programs and policies from enjoying the influence we all believe it should? Should we be looking at methodological barriers? Or are the answers perhaps found more in the diverse and often conflicting interests underlying the modern state?

2.0 The recent Canadian context

I use this broad framing to address the question of how we are doing with the more specific evaluation of Grants and Contributions.

In Canada, evaluation research was institutionalized under the rubric of program evaluation. The initial Treasury Board guidelines clearly laid out the notion of causal impacts, and program evaluation took on a unique role distinct from audit, review, and other forms of management consulting. The policy and its operationalization within the federal government have evolved through time. A view shared by several authorities with whom I have discussed this topic is that the function reached its pinnacle, in terms of quality and influence, at the erstwhile HRDC.

At that time, HRDC's program evaluation branch was doing large-scale effectiveness evaluations which were producing important and often counterintuitive findings about which labour market innovations worked and which did not. During the major process of Social Policy Review, these findings were a significant component of the materials used to brief the minister and his committee.

In fact, we developed blended models of evaluation results and polling results which were integrated into a segmented model of recommendations and advice on social policy renewal (Lifelong Learning and the World of Work: A Segmented Perspective, 2005). These briefing materials became a prominent feature of the debate about social policy and EI reform.

On a number of occasions, I have had the opportunity to brief the most senior levels of elected officials and public servants in Ottawa (including the Deputy Minister, the Prime Minister, and the Cabinet). This particular exercise is the only time I recall having the opportunity to speak to the most senior bureaucratic and political audiences about evaluation results. I think it was the accessible and timely blend of results and opinion data that enhanced this reception.

Unfortunately, despite the plethora of examples where more superficial polling results have had enormous influence at the most senior levels of decision-making, it is almost unheard of to imagine the more rigorous results of evaluation research exerting such influence. The very image of a minister or Prime Minister, confronted with a political crisis, summoning his favoured evaluator to tell him what really works is so far-fetched it is almost humorous. Yet we would think nothing of the same scenario with the favoured pollster offering advice. What is wrong with this picture?

In the balance of this talk, I want to turn to the question of why utilization has been dramatically less than the initial proponents dreamed of, and why it still falls far short of the familiar goal that President Obama cited in his inaugural address. Are the barriers methodological? With better methods, could we improve the validity of our conclusions and then see greater utilization? Are the barriers more practical? The often obfuscated language of program evaluation renders results inaccessible, and their timeliness is suspect: by the time answers are available, the questions have faded or shifted to other ground. Or do key barriers lie in the contradictory interests and incentives underlying the bureaucratic and political realms of the modern state? We will consider all three areas in the rest of this speech and conclude that there is room for improvement in each. The main reason, however, that some cynics might claim that evaluation research has spent forty-odd years packing its bags for a trip it never seems to take lies in the realm of contradictory interests and incentives.

3.0 Grants and Contributions: special areas of concern

Let's turn more specifically to the area of Grants and Contributions, HRSDC, and the uneasy mixture of methodological and organizational issues underlying the evaluation of Grants and Contributions.

Grants and Contributions are the truly flexible and responsive portion of government. Unlike legislated entitlement programs, they allow government to present a more agile face to citizens. For instance, the current panoply of infrastructure investments being delivered under the aegis of the Economic Action Plan (EAP) is the federal government's key tool for attempting to jump-start a moribund economy.

The newer Treasury Board thinking on risk management reflects some of Campbell's original concept of societal experimentation. If governments are to improve their responsiveness and effectiveness, then embodying some of what Schumpeter called creative destruction is a good idea. Better still, allowing hard evidence of what did not work to guide the destructive portion of the cycle is the exact point that President Obama was referring to in his inaugural address ("Where the answer is no, programs will end.").

Grants and Contributions are, by their more ad hoc and protean nature, best suited to implementing a more agile, experimental government. Yet the implementation of this vision is fraught with difficulties, and the record to date is inauspicious. Why is this the case?

First of all, there are magnified methodological problems in the case of Grants and Contributions. The program logic is often fuzzier, and the accountability regimes are more diffuse and less clearly developed. The diverse scale and nature of different projects/investments within the overall program often make it very difficult to classify, compare, and integrate conclusions.

Secondly, Grants and Contributions often produce a more direct mixture of the political and bureaucratic realms of government. Some have speculated that disputes about who gets to cut which ribbons in which ridings have severely retarded the implementation of the EAP. We don't need to speculate about the potentially incendiary nature of this mixture, however. Any casual review of the recent history of this department will vividly illustrate some of the key points we are making. As we review this, it is important to bear in mind the diverse and often contradictory vested interests of the political world, the program managers, and the evaluator.

The dismantling and reconstruction of HRDC in its new incarnation, HRSDC, illustrates some of the very points I want to make about the relative influence of rational empirical knowledge in the world of decision-making.
