

June & July 2012, No. 173, $6.00



A Publication of the Hoover Institution
Stanford University

June & July 2012, No. 173

3   RESHAPING GLOBAL HEALTH
    Time for a structural and philosophical shift
    Mark Dybul, Peter Piot, & Julio Frenk

19  RATIONING BY ANY OTHER NAME
    Reasons for resisting the push to limit medical care
    Amitai Etzioni

29  FERTILITY DECLINE IN THE MUSLIM WORLD
    A demographic sea change goes largely unnoticed
    Nicholas Eberstadt & Apoorva Shah

45  THE MANY FACES OF ISLAMIST POLITICKING
    Learning to govern after the Arab Spring
    Camille Pecastaing

57  THE RESILIENCE OF ARAB MONARCHY
    How hereditary rulers should respond to popular pressure
    Ludger Kühnhardt

69  FORTY YEARS OF ORIGINALISM
    The development and future of a judicial philosophy
    Joel Alicea

81  READING INTO THE CONSTITUTION
    Peter Berkowitz on Living Originalism by Jack M. Balkin

88  SHABBY SOVIET REALITY
    Marshall Poe on Red Plenty by Francis Spufford

94  BUSINESS ETHICS, SHARPENED
    Kurt R. Leube on Business Ethics and the Austrian Tradition in Economics by Hardy Bouillon

98  FALLING FROM GRACE
    Paul Kengor on The Man in the Middle: An Inside Account of Faith and Politics in the George W. Bush Era by Timothy S. Goeglein

104 BEING BISMARCK
    Henrik Bering on Bismarck: A Life by Jonathan Steinberg


Policy Review
June & July 2012, No. 173

Editor Tod Lindberg

Research Fellow, Hoover Institution

Consulting Editor Mary Eberstadt

Research Fellow, Hoover Institution

Managing Editor Liam Julian

Research Fellow, Hoover Institution

Office Manager Sharon Ragland

Policy Review (ISSN 0146-5945) is published bimonthly by the Hoover Institution, Stanford University. For more information, write: The Hoover Institution, Stanford University, Stanford, CA 94305-6010. Periodicals postage paid at Washington, DC, and additional mailing offices. POSTMASTER: Send address changes to Policy Review, Subscription Fulfillment, P.O. Box 37005, Chicago, IL 60637-0005. The opinions expressed in Policy Review are those of the authors and do not necessarily reflect the views of the Hoover Institution, Stanford University, or their supporters.

Subscription information: For new orders, call or write the subscriptions department at Policy Review, Subscription Fulfillment, P.O. Box 37005, Chicago, IL 60637. Order by phone Monday through Friday, 8 a.m. to 5 p.m. Central Time, by calling (773) 753-3347, or toll-free in the U.S. and Canada by calling (877) 705-1878. For questions about existing orders please call 1-800-935-2882. Single back issues may be purchased at the cover price of $6 by calling 1-800-935-2882. Subscription rates: $36 per year. Add $10 per year for foreign delivery. Copyright 2012 by the Board of Trustees of the Leland Stanford Junior University.

Editorial and business offices: Policy Review, 21 Dupont Circle NW, Suite 310, Washington, DC 20036. Telephone: 202-466-3121.

Reshaping Global Health

By Mark Dybul, Peter Piot, & Julio Frenk

Movement along the arc of development has been propelled by new worldviews and the creation of institutions to respond to them. In the 19th and 20th centuries, development efforts evolved from colonial expansion to missionary zeal, the aftermath of two world wars, the Cold War, economic self-interest, and postcolonial guilt. Numerous private and public organizations were created to respond to shifting demands, including multilateral and bilateral organizations wholly or partially dedicated to global health.

Mark Dybul is a distinguished scholar and co-director of the Global Health Law Program and the inaugural global health fellow at the George W. Bush Institute. Peter Piot directs the London School of Hygiene and Tropical Medicine. Julio Frenk is the dean of the Harvard School of Public Health. The authors acknowledge and are enormously grateful for the significant contributions of Gordon Brown and are deeply thankful to Eugenia Pyntikova for her expert editing and research.
June & July 2012 3 Policy Review


The opening ten years of the 21st century arguably were the decade of global health. Resources increased significantly and many millions of lives were saved and improved. The rapid expansion in global health was part of a broader conceptual movement that created core principles for the use of resources in a new era in development. The first expression of new thinking was the historic Monterrey Consensus, which was later refined by the Paris Declaration and the Accra Accord. The foundational principle outlined in those agreements is a move from paternalism to shared responsibility and mutual accountability. Key to shared responsibility are leadership and strategic direction for the use of resources by the country in which they are deployed (country ownership). Achieving country ownership requires good governance, a results-based approach, and the engagement of all sectors of society.

Several large global health institutions were born out of the heady days of the opening of this century; they were intended to reflect and be responsive to the demands of a new generation in development. Governments in emerging economies such as Mexico, Thailand, China, and Brazil have developed innovative models and invested significant resources in the health of their people. Although governments in many middle-income countries provide a great share of health resources, many of the gains in low-income countries and aspects of gains in middle-income countries have been financed and supported by newly created disease-specific programs including the Global Fund to Fight AIDS, Tuberculosis, and Malaria (the Global Fund); the U.S. President's Emergency Plan for AIDS Relief (PEPFAR) and Malaria Initiative (PMI); and the Global Alliance for Vaccines and Immunizations (GAVI).
In addition, the Bill and Melinda Gates Foundation and other philanthropists became major investors in global health, and numerous public-private partnerships and product development partnerships were created. The large funding organizations have supported many country-owned programs that have saved and lifted up millions of lives while being the driving force in shifting the benchmark of success in global health and development from the amount of money committed to results achieved. Furthermore, health became part of the world's top agendas, including at the G8, the UN Security Council, CARICOM, and the African Union. However, the focus on specific diseases has imposed and exposed fault lines in delivering services in places where many suffer from multiple health issues at the same time or at varying points in their lives. Although studies have shown that HIV interventions have reduced overall mortality and that malaria and immunization programs have reduced childhood mortality in the near term, it seems highly likely that more lives will be durably saved if a
person afflicted by different health problems has access to services for all of them. Although there are limited supportive data, we believe it is likely that an integrated approach focused on the health of a person and community is more cost-effective than a silo approach focused on a specific disease or health threat. Yet, existing global health institutions were designed for specific diseases and have not effectively shifted to embrace a broader vision. The resources currently available could have significantly greater impact with a more rational global health strategy and institutional structure focused on stewardship of available resources to achieve public goods: what is commonly called global health architecture. Put more directly, today and every day, people will die and lives will not be improved because of the way global health is governed and implemented. Therefore, there is an urgent moral imperative that we act now. But there is also a complementary aspect of realpolitik to reorient global health architecture to the public good: Economic and political realities make financing of inefficient programs and institutions unsustainable. Support for a radical change in the current global health architecture is therefore in the interest of every disease- or issue-specific advocate.

In 1944, a historic meeting occurred at a hotel in Bretton Woods, New Hampshire, and created the International Monetary Fund and the International Bank for Reconstruction and Development (the World Bank). The new institutions were established to rationalize global economic policy and secure organizational order amidst chaotic and splintered systems to lift the world from the devastation of World War II.
As we approach the 35th anniversary of the Alma Ata Declaration calling for universal access to essential health services and the final years of the Millennium Development Goals (MDGs), there is no realistic chance of achieving many of the global health targets despite progress in many countries. Following a decade of unprecedented expansion, global health is at a significant crossroads, with the World Health Organization (WHO) facing a major budgetary shortfall and many multilateral and bilateral programs bracing for limited growth or significant cuts. As we approach the post-MDG era, now is the time for a new framework to establish an accelerated trajectory to achieve a healthy world. The conceptual foundations of a new era in global health and development have already been established to guide a new international strategy. We propose, below, some principles of implementation to translate those ideals into lives saved and lifted up. It is also time for a Bretton Woods-style international agreement to rationalize the institutional structure of global health led by the G20, with the active engagement and leadership of the emerging economies and other middle- and low-income countries. We acknowledge that our principles are mainly applicable to the structures created for low-income countries, but in certain cases they have broader relevance. In addition, the approach outlined could be a model for other areas of development.

Focus on the health of persons

Significant advances in disease-specific programs have saved and improved many millions of lives and have definitively shattered the paternalistic and pernicious myth that low- and middle-income countries are not capable of designing and implementing national programs to tackle complex health issues, including chronic disease management. Disease-specific programs have also revealed important problems, including disparities and inadequacies in health systems, the crucial role of sectors beyond the health sector for improving health, and the inefficient and duplicative use of resources in the financing and implementation of global health. The renewed discussion of the importance of health systems is possible because of disease-specific programs, not despite them. In a paradoxical way, the rapid expansion and success of disease-specific programs has made clear the need to move beyond them.

At the base of the health care pyramid, in villages and communities around the world, it makes little sense to define health by reference to a specific disease. A mosquito is equally content to make a meal of an HIV-positive or an HIV-negative person. A child whose mother's life is first saved by antiretroviral therapy or sleeping under a bed net but is then lost while she gives birth is no more likely to survive or go to school than the child whose mother dies from HIV or malaria; and, of course, the woman and her community are no better off. Without immunizations, other health care, and nutrition in the first months of life, a newborn is increasingly exposed to infections that can lead to stymied physical and emotional growth or death. As the child grows without clean water, diarrheal diseases pose the greatest threat to survival. As children reach adolescence and adulthood, they and their offspring enter into the same cycles of health risk that demonstrate the need for integrated health systems.
In fact, because high levels of multiple diseases and health threats in the same locale create a forum of competing risks, interventions to save one's life from a single disease or health threat could increase the probability that the person will die from another disease or threat that is prevalent in the same environment. For example, when a maternal death is averted, that same woman is now exposed to the risk of developing cervical or breast cancer. Such realities should never be a pretext for inaction, and disease-specific programs have been important in the evolution of global health. But as we look to the future, focus on the health of a person requires ever-expanding integration.

Silos in health not only make little sense to local providers, they make little sense to policymakers. Although significant investments in disease-specific programs can have positive ripple effects on other areas of health, such benefits are often haphazard and unintentional. Each program has its indicators, processes, records, communications, logistics, and supply chain systems that may overwhelm an already weak health system. Inefficiencies waste significant financial resources and also strain limited management and human resource capacity. Although it must be proven, there is likely a multiplier effect to integrated health, achieving better outcomes for several diseases for less money and providing a more sustainable approach to global health. It is also important to recognize that the goal of health interventions is healthy people and populations. While health systems are an essential means to achieve that primary goal, creating health systems is a secondary objective, not a primary goal. The ecosystem of interrelated and competing risks of disease and death, accompanied by the potential multiplier effect of more rational service delivery, has prompted heads of state, first ladies, ministers of finance and health, and local care providers to weave together integrated programs from funding dedicated to disease-specific initiatives and to call on the global health community to support their efforts to focus on the health of a person rather than a particular issue. If we take seriously the foundational principle of country ownership for a new era in global health and development, a significant shift must begin to an integrated approach to health that is centered on the survival and health of individuals, families, and communities rather than on the eradication of specific diseases. In a similar way, the post-MDG strategy could focus on the overall development needs of a person rather than one aspect such as health.

Public health drives resource allocation

To understand the need for integrated health, it is important to begin with the health of a person. In fact, all good public health begins with the health of individuals. However, it is equally important to view the promotion of healthy individuals in a context of maximizing health outcomes for families, communities, and nations. Unfortunately, even in the most advanced economies, funding for health is limited. Policymakers must consider how to save and improve the largest number of lives and to have the greatest impact on their society with the resources that are available. The latter point is often lost in discussions of global health and is an important aspect of understanding public health as a public good. Prevention efforts provide some of the best returns on investment in personal and public health. The number of persons made healthy is important, but who gets sick and dies also matters. Any death is tragic, but health threats that cause premature death and disproportionately affect the productive and reproductive segments of a population can have a greater impact on society. The potential for a new health threat to rapidly endanger a large
proportion of a population requires an equally rapid response. As in economics, where consumer confidence can affect the health of an economy, intangibles such as panic around a perceived threat, belief in the quality of services in a health facility, in the benefits and side effects of vaccines, or in the gravitas of persons delivering health prevention messages can impact health. A refocus on health as a public good and the implications for resource allocations requires a new discussion and consensus on the ideal and the achievable.

Human rights, shared responsibility

Human rights have been the bedrock of advances in global health since at least the 19th-century public health movements and the formation of the World Health Organization. It is essential to maintain the centrality of human rights in any discussion of access to health care. However, it is equally important to provide policy space that respects the ideal of universal access while setting achievable goals and pragmatic approaches to create global and local institutions, both public and private, to provide it. Too often a devotion to human rights serves as the backdrop to making commitments that cannot be met without concrete plans, financial commitments, or institutions to ensure they are achieved. Too often, high-income countries or the G8 set targets or create initiatives without the engagement of the implementing countries. While this might have been acceptable and even necessary to secure resources a decade ago, today's issues require a new framework for shared responsibility and mutual accountability. That framework includes the following key elements:

Partnership, not abdication. In the near term, significant resources are needed from high-income countries. But there is much that must be done nationally and locally to establish the policies, organizations, programs, systems, and support needed for effective implementation. There is also a need for good governance to prevent corruption and ensure optimal use of the resources that are available. Unfortunately, implementation of shared responsibility often leads to bad outcomes. Countries are frequently left to design their own programs and proposals without sufficient input from funders about what is required, or without the technical support the countries want and need. Things fall apart during the review or grant negotiation process when requirements and demands surface, effectively eliminating country ownership while creating tension on all sides. Shared responsibility does not mean abdicated responsibility.
It means working together. Partnership is also undermined when countries are exposed to international partners who plan without countries' involvement, in some ways the opposite problem.

Transition planning for financial responsibility. It is essential that all countries contribute financial resources to the health of their own people.
The very low levels of national financing for public health in certain countries, including emerging economies, are not acceptable. And nothing is more likely to halt interest in global health than recent data that some governments have treated increased international resources for health as an opportunity to redirect their own funding to other areas. It is also essential that clear parameters and processes be established to transition health care support from international sources to local sources. Bilateral and multilateral financiers have attempted such transition planning without much success. Effective transitions will require different time horizons for each country, but developing a global agreement on the framework must begin now and be enforced. A new structure for global health must create transition parameters with teeth. It is difficult to conceive of a more effective way to create shared responsibility and mutual accountability that would transform health care.

Principal financiers of integrated national health strategies

To move from disease-specific programs to support for integrated public health within the tangled web of multilateral and bilateral financing and technical institutions for low- and certain middle-income countries is virtually impossible. Such institutions were created at specific times to meet specific demands or fill gaps and were structured for those purposes. In many cases they have been operating for decades and their structures and cultures have become entrenched. True integration requires these institutions to give up a lot of turf, and many of the internal and interorganizational incentives have evolved to defend control of processes and resources. Fundamentally, integration is not in the genes of existing bilateral and multilateral institutions. As an example, within the UN system alone, and in spite of the existence of UNAIDS, eleven organizations are engaged in HIV/AIDS; the Global Fund provides 24 percent of external funding and the U.S. government 45 percent. Other bilateral institutions are in the game, as are large contributors like the Bill and Melinda Gates Foundation. Multiply that across all areas of health and one has a sense of the enormity of the challenge of coherent governance to support integrated health services.

A good start: increased coordination. Focusing on the coordination of existing organizations makes sense. They were created for a purpose and nearly all have done much good. There is much merit in utilizing the vast experience and expertise they possess. There has been much admirable thought and experimentation with forms of coordination across and within areas of global health to achieve some degree of integration. Most efforts to date have focused on process. The Three Ones initiative started by African
countries with the U.S., the UK, and UNAIDS aimed to improve coherence in the HIV/AIDS field by committing funders to work under one national action framework, one national coordinating body, and one national monitoring and evaluation system. As part of the One UN initiative, UNAIDS led a process under the Global Task Team to identify the optimal role for each of the eleven UN organizations that are engaged in HIV/AIDS. As another example, the H4+ Group comprising members from UNFPA, UNICEF, the World Health Organization, and the World Bank was established to work jointly on maternal and child health. The Health 8 (H8) brings together the heads of seven multilateral organizations as well as the Bill and Melinda Gates Foundation to harmonize policies and activities. Recently the World Bank, GAVI, and the Global Fund have agreed to coordinate and collaborate on health systems strengthening, although details, including what health systems strengthening means on the ground, require elucidation.

Bilateral partners are also in the game. PEPFAR and PMI in the Bush administration and the Global Health Initiative in the Obama administration are designed to have one strategic approach and integrated programming across multiple agencies and departments of the U.S. government and to partner with other bilateral and multilateral institutions. Recently, the George W. Bush Institute and Obama administration joined with Susan G. Komen for the Cure, UNAIDS, and private corporations and foundations to create the Pink Ribbon Red Ribbon Initiative to use the investments in chronic care for HIV/AIDS. The investment in HIV treatment was the first time in the history of global health that a chronic disease was addressed, creating a foundation to combat cervical and breast cancer and a model for other chronic diseases.
Perhaps the most promising effort to coordinate was the International Health Partnership (IHP+) because it focused on national plans for health with cost estimates. One of the principal problems was a lack of commitment to finance the plans. Each of the coordination efforts can point to successes and improvements. But thus far, such coordination efforts have fallen short of expectations. In addition, the proliferation of global health institutions over half a century has been mirrored by a decade of proliferating efforts to coordinate. While well-intentioned, these efforts have imposed significant transaction costs on already stretched local governments and partners that devote significant time and resources to planning big ideas which often lead to no real change.

The next step: principal financiers. Efforts to coordinate global health institutions have revealed the fundamental problem: There is no mechanism to finance integrated national health strategies. Funds are shared across myriad bilateral and multilateral institutions for specific issues or diseases. Even if other barriers were overcome through coordination, that fatal flaw would remain. Principal financiers for integrated national health strategies could provide the evolutionary jump to a 21st-century approach. In essence, principal financiers would serve as the mechanism to fund integrated national health strategies in an accountable and transparent way, with results measured and reported for specific health outcomes but delivered in a coherent fashion. The highest-level indicators would be crude and disability-adjusted life expectancy, as well as death rates for adults and children, but there would also be outcomes, outputs, and process markers for each health intervention. Principal financiers would promote country ownership by being responsive to local demand for integrated health services and harmonized funding and by providing a tool against internal barriers to achieving more rational approaches to health care delivery. Bureaucracy is bureaucracy no matter where it is found. Multiple funding streams within and across the health-related MDGs create a perverse incentive for low- and middle-income countries to follow the bad example of high-income countries and build silos. Principal financiers could produce an innovative benefit by encouraging national processes to integrate. A funding source with incentives for integration could lead to more rational governance in low- and middle-income countries than in high-income countries and serve as models for the latter to follow. That would be true development partnership and would add to the rapidly growing list of lessons the developed world can learn from the developing world.
However, there are already good examples of integration in low- and, in particular, middle-income countries that have not yet been adopted by others. A Bretton Woods-type meeting for global health could consider the best approach to financing integrated health. Options include creating something new, or transforming current institutions to meet the demands of the 21st century. Candidates in the latter category include the World Bank and the Global Fund, each with strengths and weaknesses. Recent changes in leadership at both institutions, and reforms that began with the former executive director of the Global Fund, could provide an opportunity for the Bank and Fund to develop their own, or collaborative, institutional change to maximize financing for integrated health delivery. In addition, a concerted effort to explore options for principal financiers could create some healthy competition and spur innovative proposals to recreate existing institutions to maximize the return on investment in global health. Moving beyond the pooled funding debate. Pooled funding schemes, including basket funds or sector-wide assistance programs, have dominated
global health discussions for more than a decade and were among the most contentious issues in the negotiations around the founding documents of a new era in development. Global principal financiers focused on integrated health systems, with transparency and accountability that supports country-owned national health strategies spanning the public and private sectors, would provide an opportunity to transcend the debate. The greatest hurdle for supporting pooled funding by certain international partners has been a concern about following the money in countries. The creation of a clear, results-based system with routine and standardized reporting against outcome and process indicators to initiate and maintain funding, as discussed here, should provide the detailed information that some have found lacking in existing, country-level pooled mechanisms. Indeed, it is precisely this approach that has allowed funders concerned about country-level pooled mechanisms to support globally pooled funds such as the Global Fund and the World Bank's International Development Association program. A principal financier builds on such approaches while rationalizing and streamlining the global architecture to maximize results and minimize duplication and waste.

Non-health-sector actors must engage

The founding documents of a new era in development established shared responsibility and mutual accountability beyond the scope of governments. Governments as a whole, not just ministries of health, are ultimately responsible for the health of their people and must set national policy, lead planning processes, set normative and regulatory frameworks, provide oversight of ethical standards, and provide stewardship of the social response to health challenges. Achieving the health of individuals and communities and nations requires the engagement of different relevant departments in government, but also requires nongovernmental actors and, increasingly, actors beyond the health sector. Civil society has played a key role in advocating for increased resources for health and in ensuring accountability and transparency. Faith-based organizations are responsible for 30 to 70 percent of health infrastructure in Africa. Community, traditional faith, and private-sector leaders have played an instrumental role in promoting behavior changes that link individuals and families to health services. The key role of the private sector is finally being recognized as an important element of global health and development. The private sector could play a particularly useful role in rationalizing the structure of global health. Bringing the looming pandemic of noncommunicable diseases under control will require a major engagement of various sectors in government, the food and beverage industry, urban planning, etc. Many prevention activities occur outside of conventional health infrastructure and professions. For example, local civil society, including community, faith, and tribal leaders, can have a disproportionate impact on
prevention interventions, stigma, and health service uptake in both a positive and negative direction. An unfortunate example of the latter is the reemergence of polio in Nigeria and seven neighboring nations after faith leaders promulgated the belief that the vaccine was an attempt to control population. Discussions of health systems are often limited to care, treatment, and clinical prevention activities. However, when considering the public good, it is essential to cast a wider net. An effective approach to integrated health, one that begins with the health of a person and moves on to public health, requires the full integration of effective approaches to prevention, care, and treatment using both conventional and unconventional vehicles.

Accountability and transparency

The phrase "accountability and transparency" is used so frequently in global health and development that it often loses its meaning. In a time of scarce resources, it is essential that accountability and transparency be clearly defined and be the foundation of all activity. The following parameters are a bare minimum and should serve as the basis for a new structure for global health governance. Implied in the parameters is mutual accountability, a principle highlighted in the Monterrey Consensus ten years ago. All partners (funders, implementers, those accessing services, technical support providers, policymakers, and advocates) must be accountable to each other. Accountability and transparency are not for some actors and organizations only: They must apply to all who are engaged, from the global to the individual level. Resources meet commitments. The rhetoric of commitments to health rarely matches reality. Overall, during 1998–2004, the G8 complied on average with only 45 percent of the commitments made during the annual meeting of leaders. As of 2009, Canada and the United States had achieved 107 and 111 percent of their commitments to Africa, respectively. Others in the group were as low as six percent. This shortcoming is not unique to international partners. African heads of state from 54 countries committed in 2001 to provide fifteen percent of their budgets for health; as of 2010, only six had reportedly met the target. Aggressive but achievable goals. The era of advocates demanding unachievable new commitments, and organizations and leaders acquiescing with the full knowledge that they are unreachable, must end. It is the job of advocates to push the limits; it is the job of policymakers to make commitments that will be met. Repeatedly setting unachievable targets and failing to meet them shatters a sense of accountability and perpetuates commitments that no one intends to keep. But that does not mean big ideas should not be pursued.
The yearly increases in resources for global health during the past decade were largely
June & July 2012 13

Dybul, Piot, & Frenk

driven by the confidence created by setting, and then meeting, annual targets towards multiyear goals. Results-based financing. At this distance, it is difficult to remember that PEPFAR was heavily criticized for setting numeric targets for prevention, treatment, and care. Global health was supposed to be too complicated for something as pedestrian as specific goals. Despite criticism, funding was evaluated yearly and shifted based on progress towards targets. That results-based approach was essential to securing increased resources. One concern about moving from financing specific diseases to financing integrated health services is a loss or dilution of results-based financing. However, the purpose of an integrated approach to health delivery is to improve an array of health outcomes and reduce morbidity and mortality. National health strategies that promote integrated service delivery through an expansion of primary health care providers and facilities and links to the community still have indicators for each component piece. Monitoring health progress must include an understanding of change in the major causes of disease, disability, and death. And no health system can know if it is succeeding, or modify interventions to improve outcomes, unless progress on specific diseases is addressed. This is the essence of what has been called the "diagonal" strategy, a synthesis of pure vertical and horizontal approaches. In fact, one significant advantage of a principal financier would be reconciliation of the many different indicators currently required by multilateral and bilateral development partners, which are often not used to promote better service delivery. Indicators could actually promote integration and results-based financing.
For example, a high-level indicator on the number of pregnant women receiving antiretroviral therapy in certain countries where HIV is a major cause of maternal death could drive a reduction in the perinatal transmission of HIV, increased HIV treatment, and an improvement in maternal mortality. The purpose of integrated delivery is to save and lift up more lives for the same investment. Cost per outcome is an important indicator that also must be measured, as is financial protection against the risk of catastrophic expenditures. Results-based reporting. For the most part, a country with a track record of good stewardship of resources and high performance is burdened by the same reporting requirements as a country with a history of corruption and poor results. That makes little sense. It unnecessarily absorbs significant human and financial resources from both the international funder and the country implementing programs. Of course, all programs need to collect and report top-line results. But it should be possible to develop performance strata with a gradation of reporting requirements. In fact, reduced reporting requirements could be a powerful incentive for strong performance. 21st-century technology. Effective accountability and transparency require 21st-century tools to collect, synthesize, analyze, manage, and store programmatic and financial data, as well as information on human resources, logistics, and other aspects of management. The systems used for global health were largely built in the 20th century. There are efforts to update and retrofit them. But resources have not been invested at the level that is required. There is one advantage: In many low- and middle-income countries, there are few, if any, data collection systems in place. To some degree, then, there is an opportunity for a technological leap that could be transformational. Clear standards for success, and an evaluation process. For transparency and accountability to be meaningful, it is important to define success and to establish processes to regularly evaluate and modify programs. Bureaucracy and inertia will prevent change in any endeavor without clearly established parameters and systems to evolve as the facts on the ground dictate.

Technical support: A conflict of interest

Bilateral programs, as well as some UN agencies and the World Bank, provide both technical support to develop and design programs and direct program funding, often through calls for grant applications that are informed by technical experts working for, or affiliated with, the funding institution. There is an unintentional but inherent conflict of interest in setting standards and providing technical support and procurement mechanisms while also funding programs. That conflict of interest undermines country ownership. Technical experts often have divergent views; there is nearly always more than one way to implement standards and many options for implementing partners and procurement agents. If organizations and institutions provide both technical support and financing for implementation, there is an unavoidable tendency for the program dollars to follow the path laid out by the technical advisors. When both the technical support and the financing are provided by an external source, the options from which national programs can choose may be significantly constrained, thereby limiting country ownership. If national leaders were responsible, with technical support they value, for designing the strategies and operational plans to be financed externally as a supplement to their own financial, human, and other resources, country ownership would increase exponentially. A division of labor among global health organizations in which a principal financier provides the vast majority of resources for program implementation, while other multilateral organizations (including much of the UN system) and bilateral organizations provide technical support, would both increase country ownership and offer a more rational strategy for delivering integrated health services. Rather than a division of labor by specific disease categories, the plurality of global health actors should primarily distribute responsibilities according to functions.

The WHO's central role

It has often been said that if the World Health Organization did not exist it would have to be created. But it likely would not be created with its current structure and function. The WHO is essential to set global standards and to perform key surveillance and monitoring functions, as well as evaluation for accountability. Over the past decades, driven in part by the demands of funders, the WHO has ventured into extensive implementation and other areas beyond its original mission and core competencies. A symptom of what is wrong with the current institutional architecture of global health is the paradox that, at a time of financial expansion around disease-specific programs, there is severe underfunding of the knowledge-related global public goods that are essential for improving health outcomes. The current financial crisis at the WHO provides an opportunity to redirect its structure and function to focus on its core strengths and the value it adds to global health. No other institution can provide global standards, surveillance, and accountability. Many other organizations can and do provide excellent technical support and program implementation, though. Unfortunately, austerity measures seem to be cutting across the board rather than protecting core competencies. A reconsideration of the governance structures of the WHO, including the autonomy of the regional offices and changes that would allow for greater engagement of nonhealth stakeholders to maximize its key convening authority, could strengthen the institution. The WHO's fiscal challenges could thus be an opportunity for a more rational approach to global health governance, one that ensures the WHO's preeminence among global health institutions, whose roles should also be reviewed.

Competition and innovation

As the arc of development and global health progresses, a certain degree of healthy pluralism and competition will be essential to ensure openness to evolution. A division of labor that keeps principal financiers and multilateral and bilateral partners in the game not only promotes integration of health services today but also helps ensure innovation tomorrow. If a principal financier operated inefficiently or began to violate the principles of development, others would be standing by to step in, and financing responsibility could shift back to bilateral or multilateral partners. Healthy competition among agencies was an important factor in the growth and success of PEPFAR and the Global Fund, and has served as a driver for efforts on multilateral coordination. It will be an important factor in ensuring innovation and efficiency in a new division of labor. Of course, the
principle of competition and innovation is relevant for many aspects of global health architecture beyond financiers and providers of technical support.

Innovative financing for sustainability

The term "sustainability" is as prolific as "transparency" and "accountability" and is, perhaps, even less well defined. It is clear that the traditional mechanism for development and global health (high-income countries using their tax base to finance services in low- and middle-income countries) is insufficient to provide integrated health services for all who need them. There has been much emphasis on innovative financing, including an important recent high-level UN task force. Thus far, much of the effort has focused on repackaging old mechanisms, such as taxes (for example, on airfares or financial transactions). While there is merit in that approach, it has its limitations, both in resources raised and in contributing to the proliferation of financiers. For example, UNITAID was originally intended to raise resources for the Global Fund but then developed its own institutional priorities. Mechanisms to guarantee financing for technologies (for example, novel vaccines or drugs for diseases that occur only in low-income countries and therefore have no competitive market) showed promise to stimulate innovation. However, they too relied on the traditional tax base of high-income countries and did not ensure resources to deliver the new tools. In addition, because of the way the U.S. government manages budget cycles, it has been difficult for the largest global health funder to participate. Until recently, European governments could account for the guarantees off budget, or off the balance sheet, until the bill came due. That flexibility was eliminated with new accounting requirements for the European Union following the financial crisis. With few exceptions, mechanisms for innovative finance have been developed and driven by high-income countries. To achieve sustainable financing, the direct and deep engagement of emerging economies and other middle- and low-income countries is essential.
Truly innovative financing mechanisms, such as health bonds that would require back-end commitments by large institutions and risk-taking investors on the front end, can and should require the commitment of resources by the countries that would benefit. Several countries have begun to experiment, with some notable successes, with public and private insurance, including national health insurance, and other vehicles to ensure integrated health services. Certain middle-income countries have made significant progress. But because such approaches cross the bounds of restrictions on out-year financing and programmatic silos, it has been difficult to develop steady resource flows for low-income countries. There should be a significant effort to evaluate avenues to link macrofinance programs that support economic growth and trade with global health.

Although out-of-pocket expenditures are significant, international contributions for health can rival national budgets and be a significant source of foreign exchange and cash flow into many low- and even some middle-income countries. Health is already linked to macrofinance unintentionally. It is time that it be linked intentionally. The unfulfilled promise of innovative finance could be the clearest demonstration of the need for a Bretton Woods-type agreement, led by the G20 countries, to restructure global health with principal financiers and with more flexible mechanisms for the 21st century.

As delegates convened at a hotel in Bretton Woods, New Hampshire, in 1944, it was clear that the existing global finance governance mechanisms were too divided and chaotic to cope with the world economic situation. As we emerge from a decade of rapid expansion in global health that began with the conceptual foundations for a new era in development, and as we approach the post-MDG era, now is the time for a Bretton Woods-styled consensus to create a new architecture for the governance of global health. It is as clear today as it was in 1944 that existing structures were created for a different time and that a 21st-century approach to global health requires a radical restructuring of 20th-century institutions to support coherent, country-owned national health strategies that engage all sectors in design and implementation; that begin with the health of people to design integrated systems for the public good in an accountable and transparent way; that balance human rights with pragmatism and shared responsibility; and that are underpinned by innovative approaches to finance, ultimately leading to an orderly transition of funding towards national mechanisms driven by economic growth. The investments being made are not being maximized. Bringing coherence and direction to the institutional structure of global health could radically improve investment outcomes and propel global health from a 20th- to a 21st-century approach. Governments, civil society organizations, and the private sector all have a key role to play in designing a new global health architecture and sustainable financing. A critical first step is to rationalize the tangled web of global health through principal financiers separated from technical support organizations and with a leading stewardship role for the WHO.
This radical vision can be achieved only with the leadership of an expanded G20 that includes more low-income countries, and with the active participation of other emerging economic powers and middle- and low-income countries. A bold restructuring of the global health architecture could establish models and lessons learned for other areas of development. A focus on the health of a person could provide insights for a post-MDG era that focuses on creating the opportunities needed for every human being to realize his or her full potential. That is an audacious vision, but the recent history of global health and a long history of great human achievements teach us that what seems impossible can be done. The only question that remains is: Will it be done?

Rationing by Any Other Name

By Amitai Etzioni

Amitai Etzioni is director of the Institute for Communitarian Policy Studies at George Washington University. He is indebted to Courtney Kennedy for research assistance on this article.

Newspapers and magazines do not usually regurgitate ideas that have been bandied about for decades, especially when they are replayed one more time by the same leading author. Hence, it is telling that the New Republic republished in mid-2011 the brief by Daniel Callahan (this time co-authored with Sherwin Nuland). The authors call for a ceasefire in America's "war against death," arguing that those who "surrender gracefully to death may die earlier than [is now common], but they will die better deaths." They urge the medical profession, and ultimately the American people, to undergo a cultural shift they argue is necessary to prevent the otherwise inevitable financial failure of our health care system. This shift will replace a medical "culture of cure" with a "culture of care." They note that rationing and

limit-setting will be necessary to bring about this change. Callahan and Nuland point to evidence that little progress has been made in our quest for cures for chronic diseases (like Alzheimer's), or will likely be made in our efforts to significantly extend our life expectancy. Given the marginal benefit and high cost of medical advancements, they argue that we need to invest much more of our limited funds in preventive, affordable care, rather than in strenuous efforts to wring a few more years out of life. Focusing on care for the elderly, the authors call on us to abandon the traditional open-ended model (which assumes medical advances will continue unabated) in favor of more realistic priorities: namely, reducing early death and improving the quality of life for everyone. They further advocate age-based prioritization, giving the highest priority to children and the lowest to those over 80. Callahan sometimes comes across as though he advocates providing only palliative care to those who, as summarized by Beth Baker in her 2009 interview with Callahan, have lived "a reasonably full life" of, say, 70 to 80 years, offering them high-quality long-term care, home care, rehabilitation, and income support, but not extraordinary and expensive medical procedures. That is, we should ration health care for our elders, granting them mainly ameliorative care rather than vainly seeking to cure the unyielding chronic illnesses that plague them. In other texts, his argument is more hedged. However, he tends to hold that quality of life is more important than length of life, especially given that the last years of our lives are miserable, as our minds wander and we are beleaguered by incurable diseases. Otherwise, our futile battle against death may doom most of us . . . [to an end] . . .
with our declining bodies falling apart as they always have but devilishly and expensively stretching out the suffering and decay. They hence determine that the cutoff point, the age at which we should put our elderly "on ice," is 80. As we shall see shortly, whether one reads Callahan's statements as stark or as more nuanced, his argument faces the same basic challenges. Daniel Callahan is the co-founder of a premier bioethics research institution, the Hastings Center. It has played a major role in the development of bioethics in the United States, and indeed the world. (Callahan's co-author, Sherwin Nuland, was a practicing surgeon for 30 years and has authored several books on life, death, and medicine.) However, this essay (as well as previous writings by Callahan on the same subject) is neither a work of scholarship nor of policy analysis but of political advocacy. It employs emotive terms, rhetorical devices, and vague formulas to advance a cause. Thus, the New Republic article recommends that seniors be granted a "primary care period," which at first blush sounds much less troubling than to argue that elder Americans will be provided only palliative care. However, on second thought, one recalls that primary care is the gate to secondary and tertiary care (such as surgeries, kidney dialysis, hip replacements, and chemotherapy). Hence, if this gate is shut, primary care becomes largely ameliorative care! As part of their advocacy, the authors frame numbers to alarm us. For instance, they state that the cost of Alzheimer's is expected to reach $189 billion in 2015 and will rise to a trillion dollars in 2050. This assumes no improvement in treatments, even of the kind L-Dopa provided for Parkinson's and antiviral medications for HIV (two illnesses that were not cured, but the lives of those affected were made much better, longer, and more productive), let alone a partial cure (which we did find for several cancers). It disregards that, by 2050, the economy is going to be much larger as well and, hence, a nonpropagandist way to deal with such figures is to present them as a percentage of GDP and not as absolute numbers. In the same vein, the authors keep setting up straw men and then slaying them. Thus, they argue that Americans seek to conquer disease and live forever, citing one source who declares, "We do not appear to be moving to a world where we die without experiencing disease, functioning loss, and disability." No evidence is presented that this is what Americans expect and, moreover, even if such evidence exists, such daydreams do not provide a moral foundation for ruling on whether we should stop seeking to extend life and curb the ravages of disease. Callahan and Nuland's exhibit number one is infectious disease. They argue that forty years ago, "it was commonly assumed that infectious disease had all but been conquered." This is false, Callahan and Nuland say, pointing out "the advent . . . of HIV" as well as "a dangerous increase in antibiotic-resistant microbes."
Ergo, we should note that infectious disease will never be eliminated but only, at best, become less prevalent. This lessening of prevalence is dismissed as if it were an unworthy goal. The authors obviously deal only with the U.S., because a mountain of good has recently been achieved overseas in exactly this department. In the U.S., the main reason relatively little has been achieved in fighting infectious diseases is that many of them were largely licked by an earlier generation. Spurning efforts to eliminate diseases simply because new ones creep up is like resigning yourself to living in squalor because your home is only going to get messy again. Oddly, the example the authors cite as a sign that we ought to surrender to the inexorable, the spread of HIV, is a research field in which great achievements were made in the past decade. Callahan and Nuland express exasperation with the endless issuing of promissory notes by medical researchers that have not been paid. With
regard to chronic diseases, a major counterexample can be found in the very significant improvements in the treatment of diabetes in recent years. In short, even if it is true that the pace of progress in medical care is slowing, it is by no means nearly as unproductive as the authors maintain. And the value of the achievable should not be dismissed because it does not meet some elusive dream; it should be appreciated because of the good that it is delivering.

End of life or age-based rationing?

One of the major findings of the research on health care costs is that Americans use up more medical resources in the last year of their life than in any previous year. For instance, findings from the 1992–96 Medicare Beneficiary Survey indicated that "mean annual medical expenditures . . . for persons aged 65 and older were $37,581 during the last year of life versus $7,365 for nonterminal years." These and other such statistics seem at first blush to provide strong support for the Callahan thesis. However, at a second glance, one notes that many of these statistics apply to all last years of life, whether that of a premature baby too small to make it, a young person with advanced cancer or AIDS, or a select senior citizen. The relevant criterion is not age but rather the likelihood that a person can be cured, or at least restored to a meaningful life, able to love and serve, and whether his or her end is near. To put it differently, Callahan makes it sound as though, as soon as the body turns 80, there is an abrupt change in our medical condition. The opposite is true: Our bodies gradually change, both before and after that age, and at a different pace for different people. Much of what Callahan is talking about holds for the last months, maybe year, of life, not for those who simply had 80 candles on their birthday cake. And as the average lifespan has been extended by eight to nine years since 1960, many of these years for many of those older than 80 were far from miserable. If ration we must, we should limit care for all those who have a terminal illness and a medically determined short time to live, whatever their age. Callahan claims that Americans (i.e., all of us) seek immortality, are unwilling to face death, and believe in the ability to extend life forever and a day. Actually, the caring professions have developed, and society has embraced, a way to proceed which is not based on age.
Namely, once a person is determined, typically by a physician, to have no more than six months to live, he or she is referred to a hospice (whatever his or her age), and there receives the Callahan treatment: ameliorative rather than therapeutic care. Moreover, the fact that millions of Americans write living wills and many ask to sign do-not-revive orders shows that Americans can and do deal with end-of-life issues. Closely related is the question of what constitutes a worthy or productive
life. Callahan and Nuland draw on a vague concept of being able to manage society. Once we go down this path, many others will draw, as they already often do, on the stream of earnings. This is a very treacherous basis on which to allocate health care resources. It does not respect assets other than moneymaking, such as the ability to give love (for which grandparents are quite well-suited, I can attest, as someone who has passed 80 and has thirteen grandchildren) or to be creative and serve the community through volunteerism. And it suggests that homemakers and those with serious handicaps are less worthy human beings than the moneymakers. Indeed, if earnings were the basis for rationing health care, the best and most exhaustive care would go to movie stars, heads of hedge funds, those who sell large amounts of fraudulent mortgages, and drug dealers. We best continue to respect all life and not allow the government to determine what makes a good life and when it is no longer a worthy one. And we best ensure that enough hospices are available, and of good quality, for those who choose to move there once they find out that their life is nearing a close, whatever their chronological age.

Slippery slopes
Revisiting these ideas now ought to be subject to particularly close scrutiny, because they are republished in the context of alarm about rising deficits and mushrooming health care costs. Hence, this kind of writing is likely to be used as ammunition by those who want to reduce public support for health care in general and for the elderly in particular. One ought to note that the fact that we are squeezed for funds can be used to justify rationing health care, and to insist that now we simply must limit those above a certain age to receiving mainly or only ameliorative rather than therapeutic care. It logically follows, like a hangover after a night of boozing, that the cut-off age should be lowered if our economic condition further deteriorates. Once we set such a limit and accept the cultural shift Callahan and Nuland call for, one that treats age-based rationing as morally justifiable, it is simple to show that, on average, those who are, say, 77 to 80 produce less and have greater care costs than those who are younger. But then we can likewise say that about those who are somewhere between 72 and 77, between 65 and 72, and so on. And if other countries are to follow such a model, they will surely have to set a lower age. Fifty-five for El Salvador, and 43 for Afghanistan? The possibility that using age-based rationing to ratchet down care will lead to troubling outcomes is far from mere speculation. For example, reports have indicated that in Britain, people older than 50 have been discouraged from seeking kidney dialysis treatment.
Moreover, it is essential to note that the concept of quality of life is a particularly slippery one. Once we cease to respect life per se and cherish only life of good quality, we truly open the door to defining which lives deserve to be saved and which do not, especially in hard economic times. One may say that we are very short of resources, and hence must resort to rationing. However, this should be considered only if there are no other places to reduce health care costs places where cost-cutting can be much more readily justified. And as I will show shortly, there is a surprisingly long list.

No assured reallocation
Another major weakness of Callahan's thesis is that it is based either on a complete misunderstanding of the way the American polity works or on an unwillingness to face what it would take to introduce a regime of the sort Callahan advocates. Callahan first off treats health care as if it were a hermetically sealed, discrete political and economic system. In this Never Never Land, if fewer funds are allotted to elderly care, ipso facto, more will be available for child care and for younger people in general. This assumption ignores that elder care is largely publicly financed, while younger care is not. Hence, if tomorrow the government collects, say, $100 billion less in tax revenue from Americans to pay for Medicare, there is no reason to assume that these dollars will be employed for preventive care, youth care, or any other form of health care. And even if the funds remain within the public sector, it does not follow that the reduced Medicare outlays will not flow to some other expenditure, from ethanol subsidies to paying for the bombing of Libya, or to food stamps or raises for civil servants or God knows what else. Some of these are worthy goals, but one should ask not only if they outrank helping the elderly to make even relatively small but high-cost health gains, but also what mechanism could be developed to ensure that whatever is cut from senior care will end up where it is supposed to land. One of Callahan's great merits is that, as a fellow communitarian, he recognizes that we have obligations to the common good and not just rights and entitlements. (The issue was raised recently when neither President Bush nor President Obama called on Americans to make any sacrifices in the wake of 9/11 and the wars in which the U.S. is engaged.) However, before I would call on anyone to give up any beneficial medical interventions they seek, I would ask them, if save we must, to smoke less, drive less, and give up on status goods, among many other things. And in contrast to those who see our seniors as privileged and our youth as deprived, I see most seniors as having made lifelong contributions to society, while the youth's turn has yet to come.
And in contrast to those who see our seniors as privileged and our youth as deprived, I see most seniors as having made lifelong contributions to society, while the youths' turn has yet to come. Even if the cuts have to be made within the health care system, there are other ways to proceed.

Rationing by Any Other Name

Other ways
To argue against age-based rationing and the naïveté of reallocation is not to suggest that the cost of Medicare, or, more precisely, of health care, should not be reduced. However, there are other ways that a normative analysis suggests should be considered long before one turns to reductions in therapeutic care for seniors and, more generally, to cutbacks in medical research and investment in new technologies. If we must make cuts in Medicare, we ought first to make far more strides in reducing harmful activities. There are an estimated 44,000 to 98,000 preventable deaths due to medical error each year, according to the 1999 U.S. Institute of Medicine report To Err Is Human. While that report has been highly regarded and frequently cited over the past decade, a 2009 Centers for Disease Control and Prevention study found that 99,000 patients succumb to hospital-acquired infections annually. Experts hold that nearly all of those deaths are preventable. Study after study shows that even relatively small changes can reap major benefits. These include measures such as getting health personnel to cut their fingernails shorter, wash their hands even more often, use typed rather than handwritten drug prescriptions, and use electric shavers rather than razors (in preparation for surgery), as well as getting doctors to pay more mind to comments by nurses. The results are detailed in Safe Patients, Smart Hospitals, a book co-authored by Peter Pronovost and Eric Vohr, which advocates integrating strictly followed checklists into health care procedures, as well as abandoning the hierarchical structure of hospitals that often leaves nurses hesitant to challenge doctors when they make mistakes. The book then shows the great reductions in medical errors that follow the introduction of checklists.
Atul Gawande, a Harvard Medical School surgical professor, similarly argues for systematic checklists, offering numerous examples of their success not only in the medical field but also in fields like aviation, a comparison that John Nance makes extensive use of in his book Why Hospitals Should Fly. According to the Office of Management and Budget, aligning Medicare's drug payment policies with Medicaid's would save $135 billion over ten years. Next, we should cut reimbursements for those interventions for which there are no demonstrated benefits. Twenty percent of all medical expenditures were estimated to pay for medical care that is inappropriate and unnecessary, according to a 1990 study by the rand Corporation. Consistent with these findings, Henry Aaron, a leading expert at Brookings, noted that both 2008 presidential candidates "put forward proposals for curtailing waste in the U.S. health care system . . . based on estimates that various medical procedures are used inappropriately as much as one third of the time in the United States." Among them are testing patients who have
advanced, life-threatening illnesses for other diseases for which treatment could not be provided in the time they have left on this earth, and screening for colon cancer, which, according to many experts, is inadvisable for the elderly, as it can result in complications that outweigh the potential benefits. Again, it is unlikely that waste will be completely eradicated, but surely significant strides could be made. And this particular opportunity for cost reduction and improved efficiency has been recognized by Obama's Affordable Care Act, which establishes an advisory board specifically tasked with identifying and recommending policies to eliminate such waste in the Medicare program. Equally important is to reduce administrative costs. The United States spends at least twice as much on administrative costs for health care as many other countries. For instance, a 2003 comparative study found that U.S. administrative costs amounted to $30 out of every $100 spent on health care, compared to $17 in Canada. There are many reasons we cannot match Canada's ways, but if we cut only part of the difference in administrative overhead, we would save tens of billions each year. One way this may be achieved is by using capitation, rather than reviewing every intervention. Another option is to follow the uk Tory government's example and pursue what is called the Big Society program. It allots a pool of funds to the physicians serving a given area and lets them make the allocation decisions within nationally established guidelines. A study by the Commonwealth Fund found that if U.S. administrative costs could be reduced merely to the average level for countries with mixed public-private insurance systems, $55 billion per year would be saved. Some experts snicker when people argue that one can achieve major savings by reducing fraud and abuse.
60 Minutes, though, has documented how the Medicare fraud industry in South Florida is now larger than the cocaine industry, due to the relative ease of swindling Medicare. There is less risk of exposure and less risk of punishment if caught. Crooks buy patient lists and bill the government for expensive items, ranging from scooters to prostheses, to the tune of some $60 billion a year. Because Medicare is required by law to pay all bills within 30 days and has a small accounting staff, it often cannot vet claims before the checks must be issued. By the time Medicare authorities find out a storefront's bills are phony, the crooks have closed their operation and opened one next door under a different name. It does not seem too difficult to imagine that Medicare could be given more time and more resources to reduce such fraud. In a series of articles on health care costs published in the New York Times in late 2011, Ezekiel Emanuel, M.D., Ph.D., suggests a number of ways to cut costs in the health care industry. He proposes implementing electronic health records and streamlining the billing process, which could save $32 billion per year, with twenty percent of that savings going to the government. He also suggests that $80 billion could be saved each year if chronically ill patients can be encouraged to use "high touch" medicine, a system of
coordinated, high-quality care in which patients' conditions are frequently monitored to prevent the crises that lead them to emergency rooms. In short, one can readily demonstrate that before one denies beneficial health care to people of any age, even if the benefits are limited, there are other major areas in which to reduce outlays and put our health system on sound economic footing. It is morally repugnant to deny people beneficial health care in order to save money before one engages in much stronger efforts to reduce harmful and useless interventions and to curb fraud, abuse, and costly paperwork. All this can be accomplished without giving up the war against death, a war that we know cannot be won, but that nevertheless should be fought, if only to wrestle out of death's arms as many worthy years for as many people as possible. Moreover, I have no trouble envisioning an America in which, thanks to improved health care, including changes in lifestyles and in the environment, the average American lives to be 100 years old and works until he is 80. The average work week for all Americans would be reduced to, say, 25 hours, so that there would be work for all. Average incomes would be lower, and hence people would buy fewer goods but spend more time in social and transcendental activities that are low in cost, such as hanging out with family, reading, taking walks, meditating, observing sunsets, and praying.

The foundations of moral judgments

After I published a brief along the preceding lines about ways to reduce health care costs and thus Medicare outlays, Callahan posted a comment on the Hastings Bioethics Forum blog under the title "The Political Use of Moral Language." He raised issues whose importance extends well beyond the future of health care and the ethical ways to rein in its costs, as important as these are in their own right. Callahan wrote:
Amitai Etzioni, a prominent social scientist and leader of a communitarian movement, published an article in February arguing that it would be immoral to cut Medicare or Social Security benefits unless we first eliminate a range of pathologies in our health care system. "If we must make cuts," he wrote, "we ought first to cut those budget items that in effect pay for harmful activities and then those without discernible social benefits." He had in mind such long-time villains as excessive administrative overhead, waste and fraud, direct-to-consumer advertising, unnecessary treatments, and medical error. He was right to identify those failings, all of which reflect a bad health care system. And as a fellow communitarian, I welcome his support for a solid and equitable social safety net. But are those on the other side of the aisle immoral? At what point does a political issue or position pass from simply being unfair, wrong-headed, or dangerous in some way or other, to being immoral? . . . Ad hominem arguments combined with slippery slope predictions have become the accepted rhetorical style of conservative opponents of communitarian, social justice convictions. Nothing is added, and much that is harmful is introduced into the public debate by the word "immoral." My own observation is that neither liberals (a.k.a. progressives) nor conservatives have a monopoly on morality. That our communitarian crowd favors a strong social safety net is a tribute to our wise (even if politically controversial) judgment about the common good, not a sign of superior morality.

Callahan's comments show that even a bioethics giant, and a fellow communitarian, can make a mistake, and not a trivial one. Sadly, he is not alone in adopting a culturally relativistic definition of what is moral. And hence, of course, when there is no consensus, there are no moral standards, and we are told there is nothing on which to base our moral judgments. As I see it, there is a limited set of universal moral truths: human rights, for instance. Life and health over death and illness in all but exceptional circumstances, for example. These truths are, as the founding fathers put it so well, self-evident. (As a deontologist would put it, these are moral causes that speak to us directly.) In the subject at hand, I need no community to approve a standard that will inform me, if one has a choice between saving money by cutting reimbursement for beneficial procedures, say kidney dialysis, or cutting the funds that pay doctors who run two cts on the same patient on the same day, or that are blown because insurers refuse to use the same claim form, which is the moral direction to go. It may not be politically practical, but there is no question what is right. (And the fact that one may find some very limited conditions under which the suggested statement will not hold just shows that some philosophers are sharp, not that we lack foundations for moral judgments.) Callahan correctly points out that I use normative arguments for a political purpose. All political acts and decisions have a moral dimension, and if we do not judge, it will not stop others from laying moral claims; it will just mute our side. Moreover, is this bad? I am trying to shame, and lose votes for, those who pass immoral laws that provide obscene profits to health insurers and exorbitant salaries for their executives while cutting funds for health care for poor children, and many more such policies. I stand content to be judged accordingly.
Callahan truly crosses a line when he jumps from my position that some people make immoral choices to the claim that they must be bad people (his non sequitur), and therefore accuses me of ad hominem attacks. As I see it, there are some bad people: those who have no moral conscience, the psychopaths. Most people struggle between their debased and nobler sides, and I am out to give whatever support I can to their better angels.

Fertility Decline in the Muslim World

By Nicholas Eberstadt & Apoorva Shah


There remains a widely perceived notion, still commonly held within intellectual, academic, and policy circles in the West and elsewhere, that Muslim societies are especially resistant to embarking upon the path of demographic and familial change that has transformed population profiles in Europe, North America, and other "more developed areas" (un terminology). But such notions speak to a bygone era; they are utterly uninformed by the important new demographic realities that reflect today's life patterns within the Arab world, and the greater Islamic world as well. Throughout the Ummah, or worldwide Muslim community, fertility levels are falling dramatically for countries and subnational populations, and

Nicholas Eberstadt holds the Henry Wendt chair in political economy at the American Enterprise Institute, where Apoorva Shah served as research fellow. They would like to offer thanks to Kelly Matush of AEI for her assistance in preparing this paper, and also to Heesu Kim, Mark Seraydarian, Daksha Shakya, Mauro De Lorenzo, and Philip I. Levy for their help and suggestions.


traditional marriage patterns and living arrangements are undergoing tremendous change. While these trends have not gone entirely unnoticed, no more than a handful of pioneering scholars and observers have as yet drawn attention to them and their potential significance.1 In this essay we will detail the dimensions of these changes in fertility patterns within the Muslim world, examine some of their correlates and possible determinants, and speculate about some of their implications.

The global Muslim population

There is some inescapable imprecision to any estimates of the size and distribution of the Ummah, an uncertainty that turns in part on questions about the current size of some Muslim-majority areas (e.g., Afghanistan, where, as one U.S. official country study puts it, "no comprehensive census based upon systematically sound methods has ever been taken"), and in part on the intrinsic difficulty of determining the depth of a nominal believer's religious faith, but more centrally on the crucial fact that many government statistical authorities do not collect information on the religious profession of their national populations. For example: While the United States maintains one of the world's most extensive and developed national statistical systems, the American government expressly forbids the U.S. Census Bureau from surveying the American public about religious affiliation; the same is true in much of the eu, in the Russian Federation, and in other parts of the more developed regions with otherwise advanced data-gathering capabilities. Nevertheless, on the basis of local population census returns that do cover religion, demographic and health survey (dhs) reports where religious preference is included, and other allied data-sources, it is possible to piece together a reasonably accurate impression of the current size and distribution of the world's Muslim population. Two separate efforts to estimate the size and spread of the Ummah result in reasonably consistent pictures of the current worldwide Muslim demographic profile. The first, prepared by Todd M. Johnson of Gordon-Conwell Theological Seminary under the aegis of the World Christian Database, comes up with an estimate of 1.42 billion Muslims worldwide for the year 2005; by that reckoning, Muslims would account for about 22 percent of total world population. The second, prepared by a team of researchers for the Pew Forum on Religion and Public Life, placed the total global Muslim
1. Two works in particular may be saluted in this regard: Youssef Courbage and Emmanuel Todd's A Convergence of Civilizations: The Transformation of Muslim Societies Around The World (Columbia University Press, 2011), and David P. Goldman's How Civilizations Die (And Why Islam Is Dying Too) (Regnery, 2011). The former is a translation of a 2007 study by two noted French demographers; the latter, a wide-ranging and provocative exposition by an American public intellectual. Neither work has to date received the readership it deserves.




population circa 2009, a few years later, at roughly 1.57 billion, which would have been approximately 23 percent of the estimated world population at the time. Although upwards of one fifth of the world's population today is thereby estimated to be Muslim, a much smaller share of the population of the more developed regions adheres to Islam: perhaps just over three percent of that grouping (that is to say, around 40 million out of its total of 1.2 billion people). Thus the proportion of the world's Muslims living in the less developed regions is not only overwhelming, but disproportionate: Well over a fourth of the population of the less developed regions, something close to 26 or 27 percent, would be Muslim, to go by these numbers. Most of the world's Muslim population inhabits a tropical and semitropical expanse that stretches across Africa and Asia from the Atlantic shores of Mauritania and Morocco to the Pacific archipelagos of Indonesia and the Philippines. The great preponderance of the world's Muslims live in Muslim-majority countries: 73 percent according to the World Christian Database, nearly 80 percent according to the Pew Forum study (which lists 49 countries and territories in Asia, Africa, and Europe that it identifies as Muslim-majority). Another tenth of the Ummah (roughly 160 million people as of 2009) lives within India, where Muslims are a religious minority. In all, eight countries today account for over 60 percent of the world's Muslim population: Indonesia, Pakistan, India, Bangladesh, Egypt, Nigeria, Iran, and Turkey. Note that only one of these eight is an Arab society in the Middle East.

Fertility decline in Muslim-majority countries

Since the overwhelming majority of today's Muslims live in Muslim-majority countries, and since those same countries are typically overwhelmingly Muslim (by the Pew study's estimate, 43 of those 49 countries and places are over two-thirds Muslim, 40 of them over 90 percent Muslim), we can use national-level data on fertility for Muslim-majority countries as a fairly serviceable proxy for examining changes in fertility patterns for the Muslim world community. For our purposes, the advantage here is that a number of authoritative institutions, most importantly the United Nations Population Division (unpd) and the United States Census Bureau (uscb), regularly estimate and project population trends for all the countries in the world. The unpd provides estimates and projections for period total fertility rates (births per woman per lifetime) for over 190 countries and territories across the planet for both the late 1970s and the 2005 to 2010 period. Using


these data, we can appraise the magnitude of fertility declines in 48 of the world's 49 identified Muslim-majority countries and territories.2 One way of considering the changes in fertility in these countries is to plot a 45-degree line across a chart and to compare fertility levels from three decades ago on one axis against recent fertility levels on the other axis. A country whose fertility level remains unchanged over time will remain exactly on this plotted line. If the fertility levels of the earlier time are plotted on the x-axis and the more current fertility levels on the y-axis, any country whose fertility level rises over time will be above the plotted line, whereas a country experiencing fertility decline will be located below the plotted line; the distance of these data points from the plotted line indicates the magnitude of a country's absolute drop in fertility over these decades. The results from this exposition of data are displayed in Figure 1. As may be seen, according to unpd estimates and projections, all 48 Muslim-majority countries and territories witnessed fertility decline over the three decades under consideration. To be sure: For some high-fertility or extremely high-fertility venues in sub-Saharan Africa, where tfrs (total fertility rates) in the six to eight range prevailed in the late 1970s, declines are believed to have been marginal (think of Sierra Leone, Mali, Somalia, and Niger). In other places, where a fertility transition had already brought tfrs down to around three by the late 1970s, subsequent absolute declines also appear to have been somewhat limited (think of Kazakhstan). In most of the rest of the Muslim-majority countries and territories, however, significant or dramatic reductions in fertility have been registered, and in many of these places the drops in question have been truly extraordinary.
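The 45-degree-line comparison described above reduces to a simple rule, which can be sketched in a few lines of code. The tfr values in the example are illustrative round numbers chosen for the sketch, not the unpd estimates themselves.

```python
# Sketch of the 45-degree-line comparison: each country's fertility is
# compared at two points in time. A point below the line (y = x) marks
# decline, a point above it marks a rise, and the vertical distance from
# the line is the absolute change in births per woman.
# TFR values below are illustrative, not unpd data.

def classify(tfr_early, tfr_late):
    """Position of a country's data point relative to the 45-degree line."""
    if tfr_late < tfr_early:
        return "decline"
    if tfr_late > tfr_early:
        return "rise"
    return "unchanged"

def absolute_change(tfr_early, tfr_late):
    """Signed change in births per woman (distance below/above the line)."""
    return tfr_late - tfr_early

if __name__ == "__main__":
    illustrative = {
        # country: (TFR 1975-80, TFR 2005-10), rounded illustrative values
        "Oman": (8.1, 2.5),
        "Kazakhstan": (3.1, 2.5),
        "Niger": (7.9, 7.6),
    }
    for country, (early, late) in illustrative.items():
        print(f"{country}: {classify(early, late)}, "
              f"{absolute_change(early, late):+.1f} births per woman")
```

Every country falling below the line, as all 48 do in Figure 1, registers a negative absolute change; the marginal decliners (Niger in the sketch) sit just below the line, the dramatic ones (Oman) far below it.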
With respect to absolute changes in tfrs, the population-weighted average for the grouping as a whole amounted to a drop of an estimated 2.6 births per woman between 1975–1980 and 2005–2010, a markedly larger absolute decline than estimated for either the world as a whole (-1.3) or the less developed regions as a whole (-2.2) during those same years. Fully eighteen of these Muslim-majority places saw tfrs fall by three or more over those 30 years, with nine of them falling by four births per woman or more. In Oman, tfrs plummeted by an astonishing 5.6 births per woman during those 30 years: an average estimated pace of nearly 1.9 births per woman every decade. As for relative or proportional fertility declines, here again the record is striking. The estimated population-weighted average for the Muslim-majority areas as a whole was -41 percent over these three decades: by any historical benchmark, an exceptionally rapid tempo of sustained fertility decline. In aggregate, the proportional decline in fertility for Muslim-majority areas was again greater than for the world as a whole over that same period (-33 percent) or for the less-developed regions as a whole (-34 percent). Fully 22 Muslim-majority countries and territories were estimated to have undergone fertility declines of 50 percent or more during those three decades, ten of them by 60 percent or more. For both Iran and the Maldives, the declines in total fertility rates over those 30 years were estimated to exceed 70 percent.

2. The unpd does not offer estimates for Kosovo, and while the uscb does calculate current demographic trends for that country, its estimates do not extend back to the 1970s. Note that the unpd calculates period tfrs rather than cohort tfrs, that is to say, snapshot or synthetic estimates of fertility, as if a woman completed her childbearing on the schedules for women of all childbearing ages at that time, rather than actual completed childbearing patterns for women from given birth years or cohorts. While there can be important differences between period and cohort estimates of tfr, this matter will not detain us here.

figure 1. Total fertility rates in the Muslim world, 1975–80 vs. 2005–10
[Scatter chart: total fertility rates for 1975–80 on the x-axis against total fertility rates for 2005–10 on the y-axis, both axes running from 1 to 8, with a 45-degree reference line.]
Source: Population Division of the Department of Economic and Social Affairs of the United Nations Secretariat, World Population Prospects: The 2010 Revision, available at unpd/wpp/unpp/panel_population.htm (accessed November 16, 2011)

Given the differences in timing for the onset of sustained fertility declines in different settings around the world, it is possible for the above statistics to present a biased picture. It is possible to imagine, for example, that dramatic fertility declines might have taken place in other regions at earlier dates, with fertility declines tapering off during these years when the declines in the Muslim-majority areas were so manifestly dynamic: If that were the case, it would be possible to exaggerate the robustness of these Islamic fertility declines in comparison to other parts of the world. Yet while this is a theoretical possibility, empirical results do not corroborate such a contingency. Table 1 makes the point. Based on unpd's estimates and projections of fertility patterns for all countries and territories for the entirety of the postwar era (1950 to 2010), it isolates the top ten fertility declines, as measured by both absolute and proportional change in tfrs, registered over any 20-year period. This approach eliminates any timing bias from our selection of 1975 to 1980 and 2005 to 2010 as the period for which to analyze fertility declines.

table 1. The ten biggest declines in total fertility rates (births per woman) in the postwar era: most rapid 20-year total fertility rate decline in absolute terms
major area, region, country or area    time period                absolute decline
Oman                                   1985–1990 to 2005–2010     -5.33
Maldives                               1985–1990 to 2005–2010     -4.91
Kuwait                                 1970–1975 to 1990–1995     -4.70
Iran (Islamic Republic of)             1980–1985 to 2000–2005     -4.57
Singapore                              1955–1960 to 1975–1980     -4.50
Algeria                                1975–1980 to 1995–2000     -4.29
Mongolia                               1970–1975 to 1990–1995     -4.20
Libyan Arab Jamahiriya                 1980–1985 to 2000–2005     -4.18
Viet Nam                               1970–1975 to 1990–1995     -3.92
Mauritius                              1960–1965 to 1980–1985     -3.89

Source: Population Division of the Department of Economic and Social Affairs of the United Nations Secretariat, World Population Prospects: The 2010 Revision, available at unpd/wpp/unpp/panel_population.htm (accessed November 16, 2011)
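The two rankings in Table 1, absolute and proportional, are simple arithmetic on a country's starting and ending tfr, and the group figures quoted in the text are population-weighted averages of the per-country changes. A minimal sketch (the start and end values are hypothetical, chosen only to echo the magnitudes reported above, since the table reports the declines themselves rather than the underlying levels):

```python
# Absolute vs. proportional TFR decline over a window, plus the
# population-weighted average used for group figures in the text.
# All numeric inputs below are hypothetical, for illustration only.

def absolute_decline(tfr_start, tfr_end):
    """Change in births per woman (negative = decline)."""
    return tfr_end - tfr_start

def proportional_decline(tfr_start, tfr_end):
    """Change as a fraction of the starting fertility level."""
    return (tfr_end - tfr_start) / tfr_start

def population_weighted_average(changes, populations):
    """Average of per-country changes, weighted by population size."""
    total = sum(populations)
    return sum(c * p for c, p in zip(changes, populations)) / total

# A hypothetical country falling from 7.5 to 2.2 births per woman shows an
# absolute decline of 5.3 births and a proportional decline of about 71
# percent, magnitudes comparable to the Table 1 leaders.
```

The two measures rank countries differently: a drop from seven births to three is large in absolute terms, while a drop from three births to one is larger proportionally, which is why the absolute and proportional top-ten lists discussed in the text do not coincide.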

As may be seen in Table 1, six of the ten largest absolute declines in fertility for a two-decade period yet recorded in the postwar era (and by extension, we may suppose, ever to take place under orderly conditions in human history) have occurred in Muslim-majority countries. The four very largest of these absolute declines, furthermore, all happened in Muslim-majority countries, each of these entailing a decline of over 4.5 births per woman in just 20 years. (The world record-breaker here, Oman, is estimated to have seen its tfr fall by over 5.3 births per woman over just the last two decades: a drop of over 2.6 births per woman per decade.) Notably, four of the ten greatest fertility declines ever recorded in a 20-year period took place in the Arab world (Algeria, Libya, Kuwait, and Oman); adding in Iran, we see that five of these top ten unfolded in the greater Middle East. No other region of the world, not highly dynamic Southeast Asia, or even rapidly modernizing East Asia, comes close to this showing. When ranking the top ten historical fertility declines during any 20-year period by country in terms of proportional rather than absolute drops in tfrs, only four of the top ten fertility drops to date have occurred in Muslim-majority countries, and only two of the top four were Muslim-majority areas (Iran and the Maldives islands). What may be especially noteworthy here, nonetheless, is that places like Kuwait, Oman, and Iran all effected fertility declines of over two-thirds in just 20 years, and that this pace of change exceeded the tempo of fertility decline in almost all of the Pacific Rim societies; the bric economies; and the other non-Muslim emerging market economies.

Some comparisons with the United States

Given the extraordinary (indeed, as we have just seen, often historically unprecedented) fertility declines that a number of Muslim-majority populations have sustained over the past generation, it is now the case that a substantial share of the Ummah is accounted for by countries and territories with childbearing patterns comparable to those of contemporary, affluent, Western, non-Muslim populations. The low fertility levels for the Muslim-majority societies in question, it should be noted, have generally been achieved on substantially lower levels of income, education, urbanization, modern contraception utilization, and the like than those that characterize the more developed regions with which their fertility levels currently correspond. We can highlight this point by comparing fertility in today's Muslim-majority populations with that of the United States. America, of course, is not a typical oecd country in terms of its fertility level (quite the contrary: there is an unsettled argument among demographers today as to whether the U.S. exhibits "demographic exceptionalism"), but as the leading developed society, comparisons with the United States can place Muslim-majority fertility patterns in a sort of developmentalist perspective. When contraposing unpd estimates or projections of fertility for diverse Muslim-majority countries and territories for the 2005 to 2010 period against those of the U.S. states and the District of Columbia for the year 2007, tfrs in a great many Muslim-majority populations look quite American these days. To go by unpd figures, for example, Algeria, Bangladesh, and Morocco all have fertility levels corresponding to the state of Texas, while Indonesia's is almost identical to Arkansas's. Turkey and Azerbaijan, for their part, are on par with Louisiana, while the tfr in Tunisia looks like that in Illinois. Lebanon's fertility level is lower than New York State's.
As for Iran, its fertility level today is comparable with those of the New England states, the region in America with the lowest fertility. No state in the contemporary U.S., however, has a fertility level as low as Albania's. All in all, according to these unpd figures, 21 Muslim-majority populations would seem to have fertility levels these days that would be unexceptional for states in the U.S. (with the possible exception of Albania, whose fertility level might arguably look too low to be truly American). As of 2009, these 21 countries and territories encompassed a total estimated population of almost 750 million persons: which is to say, very nearly half of the


total population of the Ummah. These numbers, remember, exclude hundreds of millions of Muslims in countries where Islam is not the predominant religion. Taking this into account, it could well be that a majority of the world's Muslims already live in countries where their fertility levels would look entirely unexceptional in an American mirror. To be sure, just as fertility varies among the 50 American states, so it differs by region in many predominantly Muslim societies. But such geographic differences further emphasize the extent to which fertility levels for a great portion of the Ummah have come to correspond with levels taken for granted nowadays in more-developed, non-Islamic Western societies. Let us take the example of Turkey. For the period 2000/2003, according to a Turkish dhs, the country's overall tfr was 2.23. That average, however, was strongly influenced by the distinctively high fertility levels of eastern Turkey (a largely Kurdish region), where a tfr of 3.65 was recorded. In much of Turkey, tfrs of 1.9 or less prevailed. Istanbul's tfr, for instance, was less than 1.9, which is to say, it would have been equivalent to the corresponding level for France in those same years. Placed in an American perspective, eastern Turkey's fertility levels are off the scale, but for Turkey as a whole, fertility levels were comparable to Hawaii, and even for comparatively fecund south Turkey, fertility levels were just about the same as in Nebraska. For their part, if north Turkey, west Turkey, central Turkey, and Istanbul had been part of the U.S., they would have qualified as low-fertility states. Only six of America's 50 states, for example, had lower fertility than Istanbul around that time. Consider next the case of Iran.
As we have seen, over the past generation Iran has registered one of the most rapid and pronounced fertility declines ever recorded. By 2000, according to Iran's DHS of that same year, the TFR for the country as a whole had dropped to 2.0, below the notional replacement level of 2.1. But there were also great regional variations within Iran, with some areas (such as the largely Baluchi provinces of Sistan and Baluchistan in the east and the largely Kurdish West Azerbaijan province in the west) well above replacement, and much of the rest of the country far below replacement. Note in particular that Tehran and Isfahan reported fertility levels lower than any state in the U.S. With a TFR of 1.4, indeed, Tehran's fertility level in 2000 would have been below the average for the EU-27 for 2002 (TFR 1.45), well below 2000 fertility in such places as Portugal (1.54) and Sweden (1.54), and only slightly higher than such famously low-fertility European countries as Italy (1.26) and Germany (1.38).

Admittedly, our use of the U.S. as a comparator for fertility levels in Muslim-majority areas perforce excludes the tremendous swath of the present-day Ummah where fertility levels are (at least for now) higher than in present-day America. The point of our selection, however, is to emphasize just how very much of the Ummah can be included in such a comparison nowadays. This is a very new development: 30 years ago, barely any Muslim-majority country or territory would have registered fertility levels low enough to permit approximate comparison to corresponding fertility levels in any U.S. state. As of 1977, period TFRs for Utah, always America's most fertile state, were just under 3.6, while according to UNPD estimates the very lowest TFRs in the late 1970s for any Muslim-majority populations would have been for Kazakhstan (3.1) and Azerbaijan (3.6). Thus in just 30 years, the total population of Muslim-majority areas whose fertility levels could be reflected in a contemporaneous American mirror has risen from under 20 million to nearly three quarters of a billion. By any benchmark, this qualifies as a remarkable change.

Furthermore, indications suggest that the change has progressed still further since the 2005 period. Whereas the UNPD offers only five-year-span estimates and projections for fertility levels, the USCB provides annual figures. According to these numbers, the total fertility rate for Saudi Arabia in 2011 would be 2.31, a lower level than recorded recently for such U.S. states as South Dakota and Idaho. At projected TFRs of 2.96 and 2.97, respectively, Libya's and Egypt's fertility levels for 2011 would be roughly on par with fertility for America's large domestic Hispanic population (a TFR of 2.91 as of 2008). Even places like Pakistan (USCB projected TFR for 2011: 3.17) and the West Bank of Palestine (3.05) would, in this assessment, appear to be rapidly approaching the day when their fertility levels could be comparable to levels displayed by geographic regions or broad national ethnic groups within the United States today.
Put another way: Unbeknownst to informed circles in the international community, and very often even to those in the countries in question, fertility levels for Muslim-majority populations around the world are coming to look more and more American.

Socioeconomic trends
How is the extraordinary demographic transformation described here to be accounted for? Typically, demographers and other social scientists in our era attempt to explain fertility changes in terms of the socioeconomic trends that drive (or at least accompany) them. We can presume to examine some of the correspondences between socioeconomic trends and fertility change through analysis at the national level for Muslim-majority states, given the wealth of national-level socioeconomic statistics that have been collected by government statistical authorities, the United Nations, the World Bank, and other agencies and institutions. We know, of course, that the 48 Muslim-majority countries and territories for which the UNPD provides demographic estimates encompass a rich diversity of national histories, cultures, languages, and specific traditions. But if we analyze this collectivity as a single group, in other words as if there were something distinctive about Muslim-majority countries per se, we can conduct a preliminary inventory of readily apparent broad socioeconomic associations with fertility change for this, the lion's share of the population of the contemporary Ummah. Social science research strongly suggests that contemporary Muslim societies are distinctive from non-Muslim societies in a number of other behavioral respects: Does this distinctiveness also obtain in patterns of fertility and fertility change?

A century of social science research has detailed the historical and international associations between fertility decline and socioeconomic modernization (as represented by increasing income levels, educational attainment, urbanization, public health conditions, and the like). Those associations, not surprisingly, are immediately evident in simple cross-country correlations between national fertility levels and these respective socioeconomic variables, using data for the less-developed regions circa 2005. For the less-developed regions as a whole, fertility levels tend to decline across countries with greater urbanization, per capita income, female literacy, utilization of modern contraceptive methods, and infant survival prospects, with associations between those fertility changes and those different socioeconomic variables lowest for urbanization and highest for infant mortality (simple r-squares from 0.33 to 0.75). For female literacy, modern contraceptive use, per capita income, and infant mortality, the simple coefficients of determination (r-squares) with fertility levels for countries in the less-developed regions all exceed 60 percent; put another way, each of these factors by itself tracks with 60 percent or more of the difference in fertility levels between countries around the year 2005.
Clearly, those are very robust associations, considering all the particularities and unique characteristics that necessarily distinguish any country from all others. But just as clearly, these broad associations between fertility change and material measures of modernization or socioeconomic development are not the whole story here. Nearly two decades ago, a path-breaking study by Lant Pritchett, "Desired Fertility and the Impact of Population Policies," made the case that desired fertility levels (as expressed by women of childbearing age in DHS surveys) were the single best predictor for actual fertility levels in the less-developed regions. Sure enough, as Figure 2 demonstrates, DHS surveys conducted since that study reveal a 90 percent association between wanted fertility and actual fertility levels in the 41 less-developed countries for which such recent data were available. This finding still flies in the face of much received opinion in population policy circles. In particular, it seems to challenge the notion that family planning programs, by encouraging the prevalence of modern contraceptive use, may make an important independent contribution to reducing fertility levels in developing countries, especially by reducing what is called excess fertility or unwanted fertility. It has often been difficult to test that proposition
figure 2. Total fertility rates, 2000-05, vs. wanted total fertility rates, c. 2000

[Scatterplot of 41 less-developed countries: horizontal axis, wanted total fertility rate (most recent year); vertical axis, total fertility rate, 2000-05 (scale 0 to 8). Fitted line: y = 0.95x + 1.04, R² = 0.89.]

Source: Wanted TFR: Macro International Inc., MEASURE DHS STATcompiler (accessed March 30, 2009); TFR: Population Division of the Department of Economic and Social Affairs of the United Nations Secretariat, World Population Prospects: The 2008 Revision (accessed June 9, 2009)

in a methodologically sound and rigorous manner, as the aforementioned Pritchett study observed; and as Pritchett himself argued, methodologically sound investigations seemed to suggest that the demographic impact of family planning programs tended to be marginal. Preliminary analysis of more recent DHS surveys would seem to corroborate Pritchett's findings. In reviewing the correspondence in recent DHS surveys between excess fertility (defined here as the difference between actual fertility levels and reported levels of desired fertility) and the prevalence of modern contraceptive use, we find no observable correspondence whatsoever between these two factors. Socioeconomic forces, to be sure, may well affect the desired family sizes that women of childbearing ages report in these DHS surveys; in fact they surely do. But the critical determinant of actual fertility levels, in Muslim and non-Muslim societies alike, would at the end of the day appear to be attitudinal and volitional, rather than material and mechanistic.

How do the various factors mentioned thus far interact in influencing fertility levels in Muslim-majority countries? We may get a sense of this complex interplay from the hints offered by an initial multivariate analysis of international fertility differences reported in recent DHS surveys. The first regression equation in Table 2 attempts to predict fertility levels in 41 Muslim and non-Muslim less-developed countries on the basis of per capita income, literacy rates, prevalence of modern contraceptive use, and
desired fertility. Taken together, changes in these four variables can be associated with over 90 percent of the differences in fertility levels in this sample of countries. For this sample of countries, however, only two of these variables emerge as meaningful (statistically significant): desired fertility and per capita income. Interestingly enough, the literacy and contraceptive use variables in this regression were not only statistically insignificant, but each came out with calculated coefficient values not appreciably different from zero.

table 2. Determinants of total fertility rates: What the regression equations suggest

Dependent variable: total fertility rate

Explanatory variables                                 Equation 1        Equation 2
Wanted total fertility rate (most recent year)        .718** (6.01)     .733** (6.78)
Ln GDP per capita, 2005 (1990 Geary-Khamis intl $)    -.460* (-2.68)    -.300 (-1.74)
Contraceptive use (%, married women 15-49)            -.003 (-0.33)     -.000 (-0.02)
Literacy rate (female 15+, most recent year)          -.002 (-0.50)     -.007 (-1.64)
Muslim-majority country dummy variable                                  -.426* (-2.47)
R² (unadjusted)                                       .912              .923
Number of observations                                41                41

* = significant; ** = significant at 1%

Sources: Angus Maddison, Per Capita GDP PPP (in 1990 Geary-Khamis dollars), Historical Statistics for the World Economy: 1-2008 AD, table 3; MEASURE DHS STATcompiler (accessed March 30, 2009)

Note: t-scores in parentheses.
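For readers curious about the mechanics behind estimates of this kind, the specification of the second equation can be sketched in a few lines of Python. The data below are synthetic and purely illustrative (the authors' 41-country DHS sample is not reproduced here); the variable names and the "true" coefficients baked into the simulation are stand-ins chosen to echo the qualitative finding that desired fertility dominates.

```python
import numpy as np

# Illustrative sketch of the cross-country OLS specification in table 2.
# All data here are synthetic stand-ins, NOT the authors' 41-country DHS sample.
rng = np.random.default_rng(0)
n = 41  # matches the table's number of observations

wanted_tfr = rng.uniform(1.5, 6.0, n)                   # desired fertility (DHS-style)
ln_gdp     = rng.uniform(6.5, 9.5, n)                   # log per capita income
contracep  = rng.uniform(10.0, 75.0, n)                 # % modern contraceptive use
literacy   = rng.uniform(30.0, 99.0, n)                 # % female literacy, 15+
muslim     = (rng.uniform(size=n) < 0.5).astype(float)  # Muslim-majority dummy

# Synthetic "truth" echoing the article's finding: wanted fertility dominates,
# while contraception and literacy contribute nothing once it is controlled for.
tfr = 0.7 * wanted_tfr - 0.3 * ln_gdp - 0.4 * muslim + 3.5 + rng.normal(0, 0.25, n)

# Equation 2: intercept + four socioeconomic variables + the dummy
X = np.column_stack([np.ones(n), wanted_tfr, ln_gdp, contracep, literacy, muslim])
beta = np.linalg.lstsq(X, tfr, rcond=None)[0]

# Unadjusted R-squared and classical t-scores (coefficient / standard error)
resid = tfr - X @ beta
r2 = 1.0 - (resid @ resid) / ((tfr - tfr.mean()) @ (tfr - tfr.mean()))
sigma2 = (resid @ resid) / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
tscores = beta / se

names = ["intercept", "wanted TFR", "ln GDP pc", "contraception", "literacy", "Muslim dummy"]
for name, b, t in zip(names, beta, tscores):
    print(f"{name:14s} coef = {b: .3f}   t = {t: .2f}")
print(f"R2 (unadjusted) = {r2:.3f}")
```

On data like these, the wanted-TFR coefficient comes out large and significant while the contraception and literacy terms hover near zero, the qualitative pattern the table reports; with the actual country data the numbers would of course differ.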

The second equation adds an additional factor to the regression for predicting fertility levels: a dummy variable for Muslim-majority population. Introducing this variable changes the results in an intriguing way: Now per capita income loses its statistical significance (if barely), so that only desired fertility retains its statistical significance out of the original four independent variables from the first equation. But the dummy variable for Muslim-majority population in this second equation is statistically significant. And perhaps surprisingly, the value of this variable is negative. This is to suggest that, at any given level of per capita income, literacy, and contraceptive use, Muslim-majority societies today can be expected to have fewer children than their counterparts in non-Muslim societies nowadays!

Why should this be so? Developmentalist theories, with their emphasis on the primacy of material and structural transformations, cannot offer much insight into this mystery. Nor, it would seem, can what might be called the contraceptivist theories favored by those who see family planning policies as a major instrumental factor in eliciting fertility decline in less-developed regions. Although Muslim-majority countries, as we have seen, apparently tend to have substantially lower fertility levels nowadays than non-Muslim comparators when holding income, literacy, contraceptive use, and desired fertility constant, Muslim-majority countries also tend to have significantly lower levels of modern contraception use than non-Muslim countries at the same income levels. Holding income constant, modern contraception usage was approximately fourteen percentage points lower in Muslim-majority than in non-Muslim-majority societies in the 1980s, and remained eleven percentage points lower 20 years later. Despite such characteristically more limited use of modern contraception, the pervasive, dramatic, and in some cases historically unprecedented declines in fertility highlighted earlier in this chapter took place nonetheless.

Much more research is warranted to glean a greater understanding of the social, economic, and other factors involved in the ongoing transformation of fertility levels and family patterns within the Ummah today. What we would simply wish to emphasize at this point is the critical role human agency appears to have played in this transformation.
Developmentalist perspectives cannot explain the great changes underway in many of these countries and territories; in fact, various metrics of socioeconomic modernization serve as much poorer predictors of fertility change for Muslim-majority populations than for non-Muslim populations. Not to put too fine a point on it: Proponents of developmentalism are confronted by the awkward fact that fertility decline over the past generation has been more rapid in the Arab states than virtually anywhere else on earth, even as well-informed observers lament the exceptionally poor development record of the Arab countries over that very period. By the same token, contraceptive prevalence has only limited statistical power in explaining fertility differentials for Muslim-majority populations, and can do nothing to explain the highly inconvenient fact that use of modern contraceptives remains much lower among Muslim-majority populations than among non-Muslim societies of similar income level, despite the tremendous fertility declines recorded in the former over the past generation.
Put another way: Materialist theories would appear to come up short when pressed to account for the dimensions of fertility change registered in large parts of the Ummah over the past generation. An approach that focuses on parental attitudes and desires, their role in affecting behavior that results in achieved family size, and the manner in which attitudes about desired family size can change with or without marked socioeconomic change may prove more fruitful here.

Implications of the decline

We have made the empirical case here that a sea change in fertility levels, and, by extension, in attendant patterns of family formation, is now underway in the Islamic world, even if this sea change remains curiously unrecognized and undiscussed, even in the societies it is so rapidly transforming. Why this should be the case is an important question, but one that will not detain us here. Instead, we shall conclude by touching on a few of the more obvious implications of these big demographic changes for the years ahead.

Downward revision of population projections. In its 2000 revision of World Population Prospects, the UNPD's medium-variant projection envisioned a 2050 population for Yemen of 102 million people; in its 2010 revision, the 2050 medium-variant projection for Yemen is 62 million (USCB projections for Yemen for 2050 as of this writing are even lower: under 48 million). Unanticipated but extremely rapid fertility declines would likewise militate for downward revisions in the trajectory of future demographic growth in other Muslim-majority areas.

Coming declines in working-age (15-64) population. If the current prospect for Muslim-majority countries and territories entails coping with the challenges of finding employment for continuing and even increasing increments of working-age manpower, in the foreseeable future an increasing number of Muslim-majority countries may face the prospect of coping with manpower declines. If current USCB projections prove accurate, Lebanon's fifteen-to-64 cohort would peak in 2023 and would shrink more or less indefinitely thereafter. On the trajectories traced out by current USCB projections, another thirteen Muslim-majority countries would also see their conventionally defined working-age populations peak, and begin to decline, before 2050.³ Over the past generation, we should remember, demographic authorities for the most part underestimated the pace and scale of fertility decline in Muslim regions, sometimes very seriously. If underestimation is still the characteristic error in fertility projections for these populations, manpower declines would commence earlier than envisioned for the countries in question, and additional countries and territories might experience workforce decline before 2050.

3. These countries are Algeria, Azerbaijan, Indonesia, Iran, Kazakhstan, the Maldives, Morocco, Qatar, Tunisia, Turkey, Turkmenistan, the United Arab Emirates, and Uzbekistan.

A wave of youthquakes. With rapidly declining fertility rates, the arithmetic of population composition makes for inescapable youthquakes: temporary, but sometimes very substantial, increases in the fraction of young people (say, those aged fifteen to 24, or 20 to 29) as a proportion of total population. Depending on the social, economic, and political context, such youthquakes can facilitate rapid economic development or can instead exacerbate social and political strains. Tunisia passed through such a youthquake some time ago, and Iran is experiencing the tail end of one today; Yemen and Palestine, among other Muslim-majority societies, have yet to deal with theirs.

Rapid population aging on relatively low income levels. The lower a country or territory's fertility, the more powerful the demographic pressure for population aging over the subsequent generation. With extremely rapid fertility decline and the descent into sub-replacement fertility, a number of Muslim-majority populations are already set on course for very rapid population aging. Over a dozen Muslim-majority populations, under current USCB projections, would have higher fractions of their national populations over the age of 65 by 2040 than the U.S. today. Today these same places enjoy only a fraction of U.S. per capita income levels; even with optimistic assumptions about economic growth, it is hard to envision how they might attain contemporary OECD income levels, much less contemporary OECD educational profiles or knowledge-generation capabilities, by the time they reach contemporary OECD aging profiles.
How these societies will meet the needs of their graying populations on relatively low income levels may prove to be one of the more surprising and unanticipated challenges of the fertility revolution now underway in the Ummah.

The remarkable fertility decline now unfolding throughout the Muslim world is one of the most important demographic developments in our era. Yet it has been hiding in plain sight; that is to say, it has somehow gone unrecognized and overlooked by all but a handful of observers, even by specialists in the realm of population studies. Needless to say, such an oversight is more than passing strange, and we do not propose to account for it here. Preconceptions about the nature of Muslim society and Muslim family values may or may not help explain why these dramatic developments have come as such a surprise to so many otherwise well-informed students of the international scene. By the same token, the essentially frozen nature of
politics in so many Muslim-majority countries over the past generation (at least until the Arab Spring) may or may not have encouraged in many quarters the unwarranted presumption that the rhythms of life beneath these seemingly unchanging Muslim-world autocracies were unchanging as well. Whatever the case may be, the great and still ongoing declines in fertility that are sweeping through the Muslim world most assuredly qualify as a revolution: a quiet revolution, to be sure, but a revolution in which hundreds of millions of adults are already participating, and one which stands to transform the future.



The Many Faces of Islamist Politicking

By Camille Pecastaing

Much has been made of the recent success of Islamist parties in national elections that followed the Arab Spring. While some praise Islamism as the first genuine expression of popular sovereignty in a long time, others warn of an Islamic winter. They read Islamist candidates as a fifth column for a fundamentalist theocracy, or at best for an illiberal democracy where individual liberties suffer under the overbearing presence of religion in the public sphere. Both readings are wrong because history is not yet written, and the Islamists know no better than anyone where their recent success might take them. And while their ascent appears almost universal (they are now in government in many Arab countries), their accession to the highest levels of power has been contextual. Starting from common origins, they followed different routes to get there. More than anything, what they showed over the decades in the wilderness was pragmatism and adaptability to challenging environments, characteristics that they will have to
Camille Pecastaing is the author of Jihad in the Arabian Sea and a contributor to the Hoover Institution's working group on Islamism and the International Order.

draw upon to move from the conquest to the exercise of power in the post-Arab Spring era. The constraints imposed on Islamists were not just the authoritarian regimes that suppressed them, and the downfall of the autocrats will not hand over a quiescent society to the supporters of sharia. The temptation of doctrinal social engineering may exist, but the actual impact Islamists will have will be limited by the kind of society they inhabit, the resources they have to work from, and the choices forced upon them. The challenges are numerous. They have to maintain decent diplomatic relations with a West deeply suspicious of fundamentalist agendas. They have to prevent a smoldering culture war between minorities and liberals on one hand, and the socially ultraconservative Salafis on the other, from tearing apart the social fabric. And they have to address daunting socioeconomic issues inherited from decades of inadequate developmental models, compounded by a youth bulge that cannot find productive employment.

Many observers have missed the leftist, populist core of the Arab Spring. Worriers dread Khomeini and bin Laden when they should be looking for Nasser or Hugo Chavez. Protesters demanded democracy: elections not intended as an end but as a vehicle for transparency and accountability, for the eradication of corruption and nepotism. There are demands for jobs and higher wages and income redistribution, and feminist demands starting with wage equality for women. There are also calls for a new economic order, for a rupture with a capitalism seen as Western and exploitative, as an economic system uniquely designed to benefit the elites. In the confusion of the new era, the Islamic green blurs with the socialist red, as the religion carries the calls for justice and equity that were once a staple of the left.
This affords Islamists a wide scope of populist claims that rake in the votes. But conceptual contradictions will be hard to reconcile over the long term after Islamists get to rule. Mainstream Islamists convey the economically liberal instincts of a socially conservative mercantile class. They are anchored in the global economy, which really means they want to preserve foreign economic aid and especially trade relations with the European Union. Equally daunting are the looming culture wars between the Salafis and the leftists, feminists, and religious minorities. The Salafis are reactionary social vigilantes whose ranks spawned a militant minority known to the world as jihadis. The Salafis have often winked at the violence of the jihadis, as both share revolutionary aspirations, but unlike their brethren engaged in armed struggle, Salafis generally eschew the political for the ethical. They preach and outbid each other in virtuous signaling, intruding in the public space but not trying to take it over. With the Arab Spring, the public space became
more fluid, especially with regard to the highly symbolic place of women in it. In Saudi Arabia, some have demanded to be able to drive; in Egypt, others have claimed the right to expose the female body, all of which is anathema to profoundly conservative segments of society. And so the Salafis came to the barricades. This is not good news for the group of Islamist activists who rose to power on the wave of the Arab Spring. Those men have been deeply scarred by oppression, which has meant for some torture and decades in prison, a history of suffering that garnered much sympathy and votes in the recent elections. As they now stand on the wreckage of autocracy, they seem genuinely wary of the state's unfettered authority. But whatever their personal tolerance for individual liberties, the need for internal order will force them to referee between their right and their left. They might even have to impose on both, racking up adversaries in the process.

Political Islam is nothing new. Its demise has often been proclaimed, but Islamism lives on because it is adaptable and adaptive. A century of activism shows no pattern in the relation to power. If some have taken violent shortcuts to seize it, most have been patient social organizers, learning through experimentation and hardship the merits of strategic compromise. They had advantages. Political Islam was a natural way for people to transition from the disappearing structures of an agrarian society under the last Sultans-Caliphs, who relied on clerics for local administration, to the crowded anonymity of a modern urban environment. Islamism gratified followers with familiar narratives of hope and salvation, anchoring a society swept away by an exploding demography and epochal change in a tradition that imagines itself virtuous.
It was the same with the puritanical breakout of the West's 19th century, a world torn asunder by the industrial revolution and all the drinking and gambling that wages afforded the laborious classes. The Christian bourgeoisie then sought refuge in the snobbish morality of the Victorian era and in racist nationalism. An insecure present leads the mind to romanticize the past, and to overshoot its normative paradigms. This is the dark side of the Islamic revival, when the missionaries turned into vigilantes, when the Salafis formed militias. Like the Mormons in the time of Brigham Young, the Salafis have sought to create a society purified from the corruption of the world. Minorities, and often the wealthy, were crisply outlined as the corrupt other against which the community imbued its distinctive identity with collective pride. And there was security in numbers. The mass movement grew from its own gravitational expansion. Islamists excelled at creating a collective dynamic, and in a stressed society that was not well protected by the rule of law, the collective had a tendency to impose on the individual. The Islamist surge was a child of its time: Born at the turn of the 20th
century, it really got going in the 1930s, the heyday of communism and fascism. Islamism imbibed the reactionary zeal of Western nationalism along with its antithesis: the revolutionary passion of Western socialism. It saw itself as a transnational conservative revolution aimed at recreating a global Ummah (Muslim community) ruled by sharia. From its Western peers, Islamism also learned that it could not rely on ideological appeal only. The quest for power demanded structures and institutions that are material rather than spiritual affairs. There are costs, and people to be fed, demands to be accommodated. Long-term development requires efficient and accountable management, responsiveness to the needs of members, and ultimately solvency. Early on, Islamists left the field of ideology and moved into institutionalization.

Tithing, racketeering, and rents

In theory, ideological movements live off contributions from their followers: dues in the secular sphere, tithes in the religious one. The very motor of proselytizing is a redistributive Ponzi scheme: The resources of existing members are pooled to lure in new members. This has been the work of Dawa, of religious social work: Muslims have connected to the new Islam through faith-based institutions providing free health care, counseling, and schooling. Like any such scheme, ideologies spread until numbers grow and enthusiasm wanes. Success will attract newcomers less and less interested in contributing and more and more interested in the entitlements that come with membership: a job, or a stipend, or a free ride to redemption. The tide turns when receipts equal expenditures, and an ideological movement in that situation has reached its limits if it is to rely on tithing only. Islamist movements have all faced those constraints, and the only way to overcome them is to acquire power. Power allows shifting from voluntary to coerced tithing, which would be more aptly called racketeering.

One contemporary example is the al-Shabab movement in Somalia. Originally the youth movement of local Islamic courts, al-Shabab's militiamen rose to prominence fighting the Ethiopian troops who invaded Somalia in late 2006. Militarily successful, they have since imposed through violence a fundamentalist utopia, racketeering a vulnerable population. But the vulnerability of the Somalis was also that of the al-Shabab, and when famine hit the region in the summer of 2011, and people started dying or migrated in droves across the border to refugee camps and the foreign aid they could find there, the al-Shabab fragmented. Their dwindling resource base could not sustain their existing level of organization, and all the Islamist fervor on the ground, never that high to begin with, could not make up for it.
The most durable form of racketeering is that executed by a sovereign eager to see its resource base thrive, if only to have more to tax. In a
few cases Islamist movements have either taken over a state, or lived in such close association with one that their resources became those of the state. State-building in Arabia has been from the start, in the mid-18th century, a material affair carried out on the shoulders of Wahhabism, a fundamentalist narrative that gave meaning and justification to the worldly project of Saudi rulers. It is at times difficult to distinguish where Wahhabism ends and Saudi realpolitik begins. This longstanding association, often betrayed or remodeled, still stands in the 21st century. The mutaween, the religious police, are paid to enforce appropriate public behavior. The sentences of antiquated laws (an embarrassment for Saudi diplomacy) have to be executed as a matter of national sovereignty. Mosques, madrassas, and religious universities are built by the state, clerics and educators are employed by the state, while their counterparts across the Muslim world are at the receiving end of Saudi generosity. The ideological fortunes of Wahhabism went hand in hand with the material fortunes of the al-Sauds.

The Arabian business model has always paid close attention to revenues: from withholding tribute owed to undeserving overlords to looting, from foreign aid to oil sales. By the 1950s, royalties from oil companies filled the treasury of what would become a fundamentalist rentier state, giving permanence to the Saudi enterprise, even sparing it from further taxation. It could be argued that oil is the single most important factor explaining the rise of Islamism since the 1970s. Muslim migrants who flowed to the oil-rich countries of the Gulf were exposed to the local forms of the faith (as were some Christian converts), and returned home a more fundamentalist lot. Petrodollars financed conservative congregations throughout the Muslim world and in the diaspora.
Whether through governmental or nongovernmental channels, petrodollars paid for the Afghan Jihad against the Soviet Union, which formed the future cadres of al-Qaeda.

The Iran connection

If Saudi Arabia has been an essential if partly unwitting financial backer of radical Islam, Iranian sponsorship has been more purposeful. The radical fringe of the Iranian clergy that imposed itself in the wake of the 1979 revolution has been a poor administrator of the national economy, and 30-some years later popular rancor abounds because of poor standards of living and the self-inflicted wounds of an international pariah status. Nonetheless, there are enough revenues from energy exports for the Iranian theocracy to get by, buying off a segment of the electorate and putting thousands of Basiji street thugs who protect the regime on the state payroll. From the early days of the revolution, the brutality of the Islamic Republic has had less to do with religious doctrine than with a tenuous grip on power, what with the liberal opposition, the war with Iraq, and the persistent economic failings. The state of emergency imposed in the 1980s after
June & July 2012 49

Camille Pecastaing
Saddam Hussein's aggression became addictive for the regime, and when that war ended Tehran artificially maintained the pressure with a game of cat and mouse with the United States. Iran's nuclear program and the occasional seizure of weapons shipments from Iran to neighboring militias opposed to American designs are reminders that, for all its fiduciary shortcomings, this is a middle-income country with enough of a surplus to meddle in regional affairs. Then again, Tehran's real contribution to the treasury of movements like the Lebanese Hezbollah and the Palestinian Hamas is anybody's guess, and it is the treasury that matters. Hezbollah took control of the Lebanese government in the summer of 2011, and Hamas has alone run Gaza since 2007. It is well known that they did not get there on the basis of ideological seduction only, but by spending big money on their constituents, which got them elected. Both provide an array of social services like schools, clinics, counseling, employment in security forces to populations grossly neglected by official authorities and exposed to the devastation of wars those two movements are paradoxically accused of having provoked. If it is easy to understand the material appeal of Hamas and Hezbollah, it is more arduous to follow the money trail that made their success possible. There is a degree of tithing, of semi-extortion from the local business class, and foreign remittances. In a September 2010 interview with Thanassis Cambanis, Mahmoud Komati, the deputy chief of Hezbollah's politburo, admitted that while his movement hoped to develop independent revenue streams, it had not reached 50 percent self-sufficiency yet. Iranian aid is openly acknowledged by Hezbollah. It is a cash operation. Given the tight blockade of the Gaza Strip, Tehran's support of Hamas is more complicated.
Syria, an Iranian ally, harbors Hamas-in-exile, but Damascus is far from Gaza. Official foreign aid to Gaza has been restricted since Hamas took over, specifically to prevent the blacklisted Islamist movement from diverting those funds. Nevertheless, the IMF has reported a double-digit growth rate for Hamas-run Gaza, twice that of the West Bank, a high growth figure which has to do with postwar reconstruction. Hamas's financial resilience is puzzling given the stunted and introverted nature of the economy of Palestine, where a first-rate business consists of bottling sodas for the local market. The main exports of Palestine are not goods but emotions, and foreign aid and remittances are almost the exclusive sources of financial inflows. The money the vastly unemployed population of Gaza receives from abroad has fed traffic in the tunnels under the border with Egypt, allowing Hamas to skim off and redistribute revenues through its social works. Sara Roy, who closely studied Islamic charities in Gaza, is discreet about sources of funding: She points to donations from the community.
Lately, Hamas's Islamic Foundation has even started investing in for-profit business ventures ranging from amusement parks to bakeries. Hamas's inevitable divorce from a Syrian regime busy crushing Islamist insurgents (Sunni insurgents who share a pedigree with Hamas) is likely to strain whatever relationship exists between Tehran and Gaza, and will demonstrate the degree of Hamas's financial autonomy. The idea that big spenders like Hamas and Hezbollah could ever be financially self-sufficient defies the imagination, but the pattern has been observed with other nonstate actors in conflict zones, like Kurdistan and Somalia. A strong demography amidst political chaos creates masses of refugees, who send remittances back home. Those remittances, and other foreign aid, pay for smuggled goods. That makes for a runt local economy, but one large enough to be taxed by an armed group. At the very least, money buys the weapons that protect and, at the same time, intimidate the people. In the most advanced nations, the ratio of public expenditures to GDP is in the range of 40 to 50 percent. There are no reliable figures for areas where militias have carved out a territory, but the most forward-looking and solvent militias seem to have enough going around to pay for basic services that earn the loyalty of the population, creating over time a symbiotic relationship similar to that between nation and state. That even gets them the votes to legitimize their authority if they bother with the finesse of electioneering.

The electoral route

Islamist movements of Pakistan have tried the electoral route, but their rise came about through an evolving symbiosis with a weakening military state. Islamist nongovernmental organizations that issued from the Deobandi and Ahle-Hadith movements were a palliative for the shortcomings in educational and social services, in the same way that the militias coming from those movements were a palliative for the shortcomings of the military. In a page taken from Iran's textbook, Pakistani security forces nurtured organizations such as Lashkar-e-Taiba, the Haqqani Network, and the Afghan Taliban to pursue strategic goals. Thanks to diversified revenue streams, the creatures escaped in part the control of their creator. While the Afghan Taliban would never have emerged around 1994 from Deobandi madrassas without funding and equipment from the Pakistani government, a diversified resource base has protected them from subsequent policy reversals from Islamabad. When they ruled Afghanistan until 2001, the Taliban could tax the busy truck trade between Central Asia and Karachi. Some were then, and still are, involved in lucrative opium trafficking, which binds them to local growers and allegedly builds shadowy bridges with Pakistani state officials. Like the Khmer Rouge in its time, the Taliban are the worst kind of financially solvent ideologues. The abuse they have visited on the Afghan population is directly related to their financial autonomy, all the more impactful given that Afghanistan is so poor and their opponents, a ragtag assemblage of warlords, are themselves so brutal and corrupt.
Running for office has been a constant ambition for many Islamist movements, for that was the natural route to power and money. Political parties were spun off from social works whenever the regimes allowed it, which was an infrequent occurrence. In Turkey, a strong military core, garbed since the 1950s in a constrained democracy, kept communists and Islamists at bay. But Islamists patiently built a political machine, fed by the contributions of rural migrants and emigrants disenfranchised by a fragmented party system dominated by fickle and corrupt elites. Following a financial collapse that shook the establishment, the Islamist AK Party was elected to power in 2002, and has since defanged the military and crushed the secular opposition. With a public debt at 50 percent of GDP, and a structural budget deficit, the Islamists in power have been anything but frugal. But they have sustained a strong rate of growth, reaping the benefits of genuine liberalizing reforms started in the 1980s. Hardball players in a merciless democratic game, they have used the courts to harass critics and competitors, all the while using public spending and patronage to satisfy their constituents. It is hard to see a transition of power there unless a large scandal or an economic bust brings them down to earth.
In Egypt, the Muslim Brotherhood has been severely repressed since the mid-1940s, and particularly so under the regime of Abdel Nasser in the 1950s. An arrangement of sorts was reached after the military defeat of June 1967, and the Brotherhood was allowed to operate in the social sphere as long as it did not engage in politics.
If members ran in parliamentary elections, it was as unaffiliated candidates, and they never competed for all the seats. But Egyptian society was allowed to become more religious, the regime sometimes winking at the reactionary judgments of sharia courts. The Brotherhood was useful for the military regime. First, its social activities palliated the lack of state welfare, whether in rural Upper Egypt or in the slums of Cairo's overgrowth. Second, it lived on the same turf as the traditional left, eating away at the communists. Third, the ubiquitous social-minded Islamists of the Brotherhood were lumped together with the jihadis who had challenged the state militarily in the 1990s and presented to the West as a pretext to maintain military rule. Few in the West believed that the Brotherhood was al-Qaeda, but few dared to test that hypothesis either. This relationship of convenience between the officers and the Islamists carried on until the Arab Spring forced the army to jettison Mubarak and to revisit the terms of their arrangement with the Brotherhood. In the immature political landscape of post-Arab Spring
Egypt, the Brotherhood stands out as a powerful faith-based machine, well-funded and capable of efficient grassroots work. Its financial firepower comes from contributions of the domestic middle class and from the community of Egyptian expatriates. It counts wealthy individuals in its top ranks, like Khairat el-Shater, a businessman and the movement's number two. When the rolling elections began in November, the Brotherhood's newly founded Freedom and Justice Party exceeded all expectations, as did the al-Nour party, a rare instance of Salafis competing electorally. As this success translates into seats in the government, with ministries will come budgets, contracts, and more money for social works, helping grease the wheels of reelection. If Turkey is an example, there are many ways to play that game. If the Islamists wanted a share of the perks of government, they were reluctant to take full responsibility for Egypt. It is only after being faced with their own success, or rather with the vacuum that is the opposition, that the Brotherhood resigned itself to field a candidate for the presidency. They first ran al-Shater, quickly disqualified because of a recent stint as a political prisoner, and then more comfortably rallied behind Abdelmonem Abolfotoh, a physician and moderate Islamist who had taken a safe distance from the Brotherhood.
In Yemen, the Islamist al-Islah has been the main opposition party since the 1990 reunification of North and South. It is an odd alliance of religious fundamentalists and tribal interests; the leader of al-Islah is always the sheikh of the largest tribal confederation. Fundamentalism has been doing well in Yemen; its imams are said to be well-funded by regional charities, and they can tap into a vast impoverished and illiterate population.
Fundamentalism has allowed the Saudis to keep in check the elusive Yemeni president, Ali Abdullah Saleh, who himself let them be because proselytizing was a way to bind the formerly socialist south to his realm. But relations soured, and for the last two presidential elections, al-Islah allied with their former socialist nemesis to oppose the bid of Saleh and his kin to remain indefinitely in power. The Arab Spring sealed the divorce, and from March to November the leaders of al-Islah fought it out with forces loyal to Saleh in the streets of the capital, Sanaa. When a Saudi-brokered compromise was finally implemented at the end of the year, the Islamists of al-Islah were brought into the new government.

Up from repression

Other regimes never tolerated Islamists, bringing to bear against them the full force of authoritarianism. In Baathist Syria, the city of Hama was shelled by artillery in 1982 following years of unrest from the local Muslim Brotherhood. In Tunisia, leaders of Ennahda, an Islamist party that professed nonviolence, were sentenced to death and spent years in prison. For two decades, under President Ben Ali, the police monitored
mosque attendance, and young men deemed too observant could be arrested and detained for extended periods. In Qaddafi's Libya, almost 1,300 prisoners, mostly Islamists, were reportedly massacred following a 1996 riot in Tripoli's Abu Salim prison. Islamists who escaped the repression of those regimes were forced into exile, some in London, where they reconciled their ideology with liberalism, others in Afghanistan, where they connected with the mujahedeen milieu from which al-Qaeda emerged. This scorched-earth legacy makes the Islamist resurgence in Syria, Libya, and Tunisia the great surprise of the Arab Spring. In Libya, the Islamist current remains ill-defined. Qaddafi was killed by a mob chanting Allah Akbar (the same hymn that accompanied the hanging of Saddam Hussein), but such sentiments do not reveal a focused political agenda. Abdelhakim Belhadj, a former mujahedeen arrested in Bangkok in 2004 and rendered to Qaddafi's jails, famously led a rebel unit in the conquest of Tripoli. But Belhadj went out of his way to explain that his days as a jihadi were over. In Syria, the protest movement issued from the Arab Spring, and brutally repressed by Assad's security forces, has struggled to escape a Sunni sectarian character, despite insistent appeals to minorities. In the first few months, the Syrian Muslim Brotherhood morphed from a skeleton of exiles to a front for the rebellion within the newly formed Syrian National Council, a shadow government in exile. But the longer the civil war drags on in Syria, the more diverse the breeds of Islamists who find there a terrain to thrive, including old-fashioned jihadis. The most striking rebirth took place in Tunisia, where Islamist Ennahda inherited a revolution in which it did not participate.
The aftermaths of revolutions are volatile periods, with fragile regimes hijacked by radical, authoritarian factions: the French Jacobins, the Russian Bolsheviks, the Iranian Komitehs. In Tunisia, the soul-searching, post-revolutionary phase saw the formation of an abundance of new parties. During those months, the historical leader of Ennahda, Rashid al-Ghannushi, returned from London and mobilized local supporters and donors. Rumors of funding from conservative Arabia were belied by the strict monitoring of the electoral agency. The Islamists were funded by local businessmen, like Nejib Gharbi, a wholesaler and spokesman of the movement. Their natural constituency was the hinterland, bypassed by the two decades of economic growth, and its unemployed or underpaid children amassed in the suburbs of the rich coastal cities. Their long suffering at the hands of the despised regime also appealed to the coastal middle class, which they courted by claiming as their own the more liberal, progressive culture familiar to the Tunisian bourgeoisie. The verdict of the October elections was unequivocal: The government was theirs.

With power came the responsibility to quickly formulate a policy to respond to the economic predicament. In their first months in power, Ennahda walked a tightrope between indulging popular demands and preserving macroeconomic accountability. Their plan to boost growth in the short term through a phase of government spending before achieving self-sustaining growth at some point in the future seems lifted right out of a Keynesian-inspired IMF policy paper. There is no doctrine here; they are not driven by Quranic injunction but by the need to placate and reassure. Everyone gets something: public sector jobs for the poor, investment opportunities for the rich, and a plan of action for the IMF. The instinct of the Islamists is to let people make money and use charity as a means of redistribution, but policies that create growth and jobs rarely deliver greater equality. In the end, the most religious aspect of their economic orthodoxy is its wishful nature: the inherent prayer for everything to go well, because there are so many ways in which things could go wrong. Free elections were not enough to placate the street, and the Islamists have struggled to restore order, having no choice but to betray the spirit of the revolution and send forces to beat up protesters.
Countries in the region had been doing relatively well in the decade that preceded the Arab Spring, thanks in part to a controlled liberalization of their economies that brought foreign investments to selected sectors, deals often directed toward those blessed with political patronage. The liberal opening allowed cronies of the regimes to open local sweatshops and beach resorts that thrived on low wages, as well as to run commodity monopolies that skimmed off the surplus of a suffering middle class.
Add to that the oil boom and real estate speculation, and you have good growth numbers but not the kind of developmental model needed by countries experiencing a youth bulge, with half of the population under 30, and said to need to create 50 million jobs within a decade. This model of unequal growth, with upward social mobility biased toward the politically connected, was forcefully rejected in the streets of Tunis and Cairo. Before it became Islamist, the Arab world once flirted with socialism, and the paradox is that the legacy of great social reformers like Nasser and Bourguiba was picked up by the conservative monarchies and the Islamists. It is the conservative regimes imbued with Islamic legitimacy, either as descendants of the Prophet Muhammad (the dynasties of Morocco and Jordan) or as custodians of the Holy Mosques (the Saudis), that for all those years kept their eyes on social and economic indicators and invested in their people. In Morocco, the Arab Spring was answered with a constitutional reform followed by a general election, which brought the Islamists of the Party of Justice and Development to form the government. Monarchal
rule hardly skipped a beat through all of it. In Saudi Arabia, the welfare tap was turned up. The revolution passed them by. Monarchy is no panacea: Bahrain's was saved by foreign military intervention, and Jordan's horizons remain clouded. But the Arab juntas, with their tired revolutionary (Algeria, Libya) and military (Egypt, Syria, Yemen) rhetoric, faltered. The abundance of natural resources was not even a factor: Libya had a greater endowment per capita than Saudi Arabia, but the lazy regimes that relied on rents of all kinds to maintain a debilitating status quo were wiped out one after another.

The Arab community

The contagious nature of the Arab Spring reveals a degree of community across the region, based on Islam and the Arabic language, as well as on a shared imaginary and economic predicament. The pan-Arab Ummah was not as dead as once thought, but commonality is not uniformity. The Islamist political parties are the children of their nations, and when in government they will have to contend with their own social conservatism, which will not be to everyone's liking in a pluralist society. An Islamist winter will not easily smother the calls for individual liberty and dignity that were so spontaneously and stridently expressed in the Arab Spring. And Islamists face a greater challenge than a controversial social agenda. The Muslim world is poised to grasp the benefits of its demographic dividend: a vast population of working age. If the Islamists really want power, they should understand that they are only as strong as the society over which they preside. It is time to put their reputation for hard work and probity to work in the pursuit of economic growth.



The Resilience of Arab Monarchy

By Ludger Kühnhardt

Revolutions are not mechanical processes of social engineering. They unfold as an intrinsically unpredictable flow of events. Structurally, revolutions will go through phases, often through contradictory periods. Hardly any revolution will evolve without turbulence and phases of consolidation. And revolutions do not happen without moments of stagnation, surprising advancement, and unexpected transformation. The beginning of the Arab Spring in 2011 has not been of a different nature. It started as a fundamental surprise to most, took different turns in different countries, and was far from over by the end of 2011. Transatlantic partners are fully aware of the stark differences among Arab countries. They realize the genuine nature of each nation's struggle for democracy. Yet, they are inclined to take the Western experience with democracy as the key benchmark for judging current progress in the Arab world. The constitutional promise of the U.S. or the success of the peaceful revolutions in Eastern and Central Europe in 1989 and 1990 are inspiring, but one must be cautious in applying them to the Arab Spring. Preconditions have to be taken into account. Besides, the
Ludger Kühnhardt is director of the Center for European Integration Studies (ZEI) at Bonn University.

history of Europe's 19th and 20th centuries also suggests room for failure in the process of moving toward rule of law and participatory democracy. Some cynics have already suggested that the Arab Spring could be followed by an Arab autumn or even winter. Even if one discards such visions as inappropriate self-fulfilling prophecy, certain European experiences should probably not be forgotten: In the 1830s, Germany experienced its own spring toward pluralism and democracy, called Vormärz. That German spring movement (Sturm und Drang) was essentially a cultural uprising without the follow-up of transformational political change. In 1848, across Europe, revolutionary upheavals promoted the hope for an early parliamentary constitutionalism across the continent. In most places, this hope was soon to be replaced by variants of a restrictive consolidation of the ancient regimes. In 1989, the experience of Romania deviated strongly from most of the peaceful revolutions across Europe. Ousting and even killing the former dictator was a camouflage for the old regime to prevail for almost another decade. While the rest of Central and South Eastern Europe struggled with regime change and renewal, Romania prolonged regime atrophy and resistance to renewal.
For the time being, the Arab Spring has evolved into the prelude of a revolutionary transformation that will go on, most likely for many more years to come. The Prague Spring of 1968, in the former Czechoslovakia, comes to mind: It was welcomed with euphoria in the West and in secrecy by many citizens under communist rule in the east of Europe. Yet, it turned out to be just the beginning of a transformative period in the communist world. It took another two decades before a substantial change of the political order in most communist states came about.
The Prague Spring was the spring of a generation, not the spring of a year. No matter what direction the Arab Spring may take in the years ahead, two trends are startling: First, the Arab Spring has initiated a wide range of different reactions and trends around the Arab world. The homogeneous Arab world is a myth. Likewise, the notion that Arab societies are permanently stagnant and immobile is a myth. The quest for dignity, voice, and inclusion under rule of law, and a true structure of social pluralism, has been the signature of peaceful protest all over the Arab world. The reactions of incumbent regimes have demonstrated a variety of strategies but also different levels of strength, legitimacy, and criminal energy. Second, and more surprising, is the relative resilience of the Arab monarchies to the Arab Spring: Morocco and Jordan, Saudi Arabia and Oman, Kuwait and the United Arab Emirates, and Qatar and Bahrain have been reasonably unaffected and remain stable (in spite of the temporary clashes in Bahrain and their oppression with the help of Saudi Arabia's army). While the quest for dignity, voice,
and inclusion has posed a challenge to all regimes in the Arab world, Arab monarchies emerged relatively undisturbed from the first wave of popular unrest and protest. This contrasts with the protest against personal rule in most Arab republics: the flight of a corrupt president whose security apparatus was no longer predictable (Tunisia); the arrest of a deposed president who seemed to be in fullest command of his nation's security apparatus but could not maintain support of his army (Egypt); the eventual deposition of a ruler who was torn between security factions and split traditional loyalties (Yemen); the criminal attack on its own people by the security forces loyal to a beleaguered president (Syria); the oppression of all potential unrest by an old regime still clutching to absolute power (Algeria); and the military defeat of a dictator after he had launched a war against his own people (Libya) were variations of a complex theme across Arab republics in 2011. Lebanon has been a special case for years, with its own transformational revolution (the Cedar Revolution) going on since 2005. Iraq and Sudan have also been of a unique character due to their specific domestic and geopolitical position in the past decade. How can one explain the almost paradoxical phenomenon that hereditary monarchies, at least for the time being, seem to be less affected by the protest against personal rule and patrimonial authoritarianism that has resonated across the Arab world? One initial observation is undeniable: Saudi Arabia is particularly interested in supporting Arab monarchies. In fact, Saudi Arabia may even be interested in preventing too-far-reaching democratization in Arab republics.
But the vested interests of the Saudi family and its financial leverage alone do not explain why Arab monarchies tend to be more resilient to the current wave of protest heard all over the Arab world. One has to go beyond the obvious and look for structural explanations. Most evident, and well beyond the Arab world, is the fact that power based on traditional legitimacy continues to play a stabilizing role in the transformation of societies and their political systems. Usually, republican, authoritarian, personal rule built on a political ideology (e.g., independence, socialism, nationalism, development) can only be maintained through a security apparatus and the pressure this apparatus can exert on a rising popular demand for change. In contrast, traditional hereditary rule seems to be able to maintain power with more respect, possibly even with acquired legitimacy, and with less need for the exercise of violence against citizens. The most interesting question stemming from this observation is: Do we know what it may take for monarchies to be successful over time? It is not enough to simply recall the religious roots of Arab monarchical legitimacy, especially the case in Saudi Arabia and in Morocco. No matter their religious or moral-based authority, the historical record of monarchies confronted with the pressure for change is mixed. Reference to traditional religious sources of legitimacy has not, in the past, been enough for some monarchies to survive the winds of change with which their societies were confronted. While going beyond this perspective, several insights into the nature of hereditary systems have stood the test of societal change. They are pertinent and may be a useful mirror to keep in mind as the future path of hereditary rule in the Arab world unfolds.

The historical fate of hereditary rule

The historical record of hereditary rule when confronted with the challenges of social, political, or economic transformation, or even revolution, has not been very impressive. From the 17th century (Great Britain) to the 19th century (France, Spain, Portugal, Brazil, Mexico) and to the 20th century (Germany, Russia, Austria-Hungary, Yugoslavia, Ethiopia, China, Greece, Cambodia, Persia, Nepal, Egypt, Libya, Iraq), more monarchies were toppled than rebuilt whenever their societies were fundamentally transformed. The current European hereditary monarchies (United Kingdom, Denmark, Norway, Sweden, the Netherlands, Belgium, Spain, Luxembourg, Monaco, Liechtenstein) as well as non-European monarchies (Japan, Malaysia, Thailand, Brunei, Bhutan, Cambodia, Tonga, Lesotho, Swaziland, plus the Arab monarchies) are rather the exception to the rule: The global trend seems to favor republican political order as the answer to socioeconomic and political modernization (and many of these monarchies are such only figuratively). However, restorations in Great Britain (in the 17th century) and in Spain (in the 20th century) as well as the transformation of imperial rule in Japan after 1945 indicate the potential for the revival of hereditary rule in times of great upheaval. The panorama of an ongoing survival of almost two dozen monarchies and systems of hereditary rule should not obscure the more than 2,000-year-old electoral monarchy of the Catholic Church. After all, the Pope is also head of state of the Holy See. The main lessons to be drawn from the survival or revival of hereditary rule elsewhere could be of inspirational insight for the future of contemporary Arab hereditary rulers. These lessons include the need to prevent or terminate warfare with, or the threat of violence toward, any neighbor. Consolidated monarchies across the world have recognized the legitimacy of borders and the sovereign rights of their neighbors.
This, in turn, has helped consolidated monarchies stay out of international conflicts over territory or power. For Arab monarchies, this global experience would imply that for the sake of their own interest they would be well-advised to search for peace with Israel: to recognize Israel and to facilitate a two-state solution that would allow Israel to live in security and an independent Palestinian state to
live in decency, without any border dispute between either of the two states and between them and the Arab monarchies. Among the fundamental lessons to be learned from the struggle of monarchic survival elsewhere would be the need to turn the monarchy from a rule of fear into a symbol of respect and national unity. Consolidated monarchies have been able to disconnect the court from the national security apparatus and to project the monarch as the benevolent symbol of national unity, sometimes coupled with a certain religious authority. For Arab monarchies, this global experience would imply full parliamentary control over security forces and the military; lustration processes aimed at bringing to justice past crimes of the security apparatus without deconstructing that apparatus as such; and strict rule of law over all security forces and military authorities, without sidelining them from the future processes of society and politics. A third essential suggestion would imply the need to separate authority from power. Consolidated monarchies have decoupled their traditional authority from the daily business of politics and the structure of national power. They have accepted an independent government and parliamentary rule as the main source of national political power. Consolidated monarchies have surrendered their power to constitutional rule and thus maintained their symbolic and traditional authority.
For Arab monarchies, this global experience would imply empowering parliamentary governance through a prime ministerial system with full accountability to the respective parliamentary majority; terminating the appointment of prime ministers or members of parliaments, including the upper houses; and initiating a process of rewriting the national constitution aimed at properly organizing a new national consensus framed by a constitution-based parliamentary monarchy.

A fourth and certainly not final lesson to be learned would be the need to disassociate personal wealth from the wealth of the country. In consolidated monarchies, the personal budget of the monarch and the court has been disconnected from the sources of wealth of the country. The budget of today's monarchs may still be less accountable than other elements of public spending, but the allocation of the court's budget in consolidated monarchies is no longer based on the ruler's arbitrary access to public goods. For Arab monarchies, this global experience would imply separating state funds from the funds available for the monarch and his entourage, installing parliamentary control over the allocation of resources for the hereditary sovereign, and establishing a solid system of accountability for auditing these resources.

It is obvious that the specific historical, political, sociological, and economic context in which the democratization and parliamentarization of, say, the monarchies of Norway, Sweden, Denmark, the Netherlands, Belgium, and Spain took place cannot repeat itself in today's Arab world. Arab transformation will continue down its own contingent path, which will likely be one that doesn't follow Western expectations and aspirations.

Key challenges for Arab Spring success

The path for those countries that have been able to successfully transform from personal rule to parliamentary monarchy has always been long and often arduous. In most cases, it went through similar stages, worth recalling as the Arab Spring unfolds.

Originally, personal rule was based on control of territory and people. Gradually, intermediary elites were installed by the ruler or emerged against the initial will of the ruler. In a long process, they advanced the notion of legal rule over personal rule (e.g., the Magna Carta). Arab hereditary monarchs would be well-advised to respond to the quest for freedom and justice from within their citizenry with sustained support of independent legal structures.

The growing diversification of economic activities, especially the emergence of capital-based production and division of labor, generated functional elites (bankers, owners of trading houses and production) who demanded political inclusion and participation. Arab hereditary monarchs ought to support the establishment of independent representation of functional elites (including business associations and trade unions), recognizing them as a genuine sphere of open and legitimate political discourse, with the objective of full participation in the public policy dialogue.

The calls for political inclusion of a new bourgeoisie led to an advanced rule of law and opened the way for democratic participation, which in turn stabilized the sociopolitical system. Arab hereditary monarchs should do their utmost to help their societies move beyond the prevailing oligarchic structures of a rent-seeking mindset. It is here that the experience of Turkey's economic development may be a template for the transformation necessary in the Arab world, beyond the Arab monarchies.

Time and again, parliamentary rule came under pressure from aspirations of personal rule in the name of contingent social, cultural, and intellectual ideas and ideologies.
However, no republican dictator was ever able to exercise the natural features of traditional rule over such a long time that he could translate his rule into legitimate hereditary succession. Today, North Korea's ruling family and the ruling family of Assad in Syria, and in a limited way the regimes of Kabila in Congo and of Ali Bongo Ondimba in Gabon, are exceptions to this rule. Yet these contemporary hereditary dictatorships have been unable to generate legitimacy for their specific versions of authoritarian or pseudo-democratic hereditary succession. A democratic exception to this phenomenon is provided by Singapore: the third prime minister, Lee Hsien Loong, is the son of the first prime minister, Lee Kuan Yew. Arab hereditary
monarchs would be well-advised to remove any family member from public offices.

Most personal and patrimonial rulers in postcolonial societies resorted to similar mechanisms to maintain their position: patronage, clientelism, theft, corruption, crime, and violence. When republican dictators lack the features of traditional authority, they try to resort to charismatic rule, violence, and coercion, none of which can generate the necessary features required for transition toward legitimate hereditary succession. Arab hereditary monarchs should match political openness and transparency with personal modesty and decency in spending behavior.

For now, the strongest source of authority of contemporary monarchies in the Arab world (and elsewhere) is the traditional legitimacy attributed to their rule. Besides learning from other consolidated monarchies, the current Arab hereditary rulers might think about addressing the key structural challenges that are vital for a peaceful and sustainable transformation in their societies.

Among the insights would be understanding the importance of consolidating open spaces in which a pluralistic civil society can thrive. Rulers should relate these open spaces to the political arena, and include open political spaces in the national dialogue on constitutional reform.

Another set of necessary steps to stabilize a monarchy under pressure would be rehabilitating the authority of the public sphere by promoting multiparty systems. All too often, failed monarchic systems tend to rely on the support of a state party that pretends to deal with social concerns but really prevents social preferences from being fully expressed. Only a full-fledged pluralistic, multiparty system can aggregate social interests and advance these interests onto the political agenda and into the decision-making process.
Election thresholds of three to five percent may guarantee that these multiparty systems help consolidate the new constitutional consensus.

Reform-oriented monarchs need to promote strong legal sector reforms, including all levels of the judiciary and the penitentiary system, and initiate public education programs that raise awareness of the primacy of rule of law over any system of personal patronage, coercion, or arbitrariness.

Most important among all the necessary reforms, and yet all too often neglected, is the need to promote private investment, both domestic and international, with the prime aim of providing sustainable employment opportunities for the young generation. Reforming a hereditary political system is impossible unless the economic order underlying this system, and usually intrinsically linked to it, changes, too. Usually this requires breaking up the monopolistic and oligarchic structures that almost intrinsically connect hereditary political rule with accumulated economic privileges and powers.
Such feudal ligatures cannot generate private initiative that goes beyond sustaining the ruling elites. In the end, in a hereditary political system as much as in any republican democracy, only a stable middle class based on education and vocational training can guarantee long-term stability. This also happens to be the case in practically all Arab societies today.

Transatlantic partners

The Arab Spring has opened a new chapter in the political history of the Arab world. The outcome is far from predictable. It may vary from country to country and it may drag on with different speeds and intensity for years, if not decades. It began thanks to the courage of nonviolent people who wanted to revitalize their societies on the basis of dignity, freedom, and justice. In a geopolitical context, the historic opportunity the Arab Spring represents will, at least, lead to two fundamental reconfigurations.

On the one hand, the traditional prejudice according to which Africa is divided between North Africa and Sub-Saharan Africa will end. The issue of overcoming personal rule and introducing constitutional change aimed at enabling law-based pluralistic democracy is as pertinent in most of Sub-Saharan Africa as it is in the Arab world. In both regions the issue reflects the deficits of postcolonial politics. Hence, the Arab Spring has been watched with great intensity in Sub-Saharan Africa: with enthusiasm among young people and with worry among some of the petrified postcolonial elites (in Uganda, for example). In the years to come, one might predict, the Arab Spring will repeat itself in several sub-Saharan societies. There, it will most likely bring about the same mixed picture of success, stagnation, and failure we see in the Arab world. Thus, it will support the trend (and the need) for a differentiated perception of Africa. Instead of continuously and erroneously imagining Africa as one, the long-term constitutional effect of the Arab Spring will help distinguish between an emerging Africa of successful political transformation beyond the postcolonial era, and a stagnating Africa that remains trapped in postcolonial structures of personal rule and patrimonialism.

On the other hand, transatlantic partners will have to redefine their strategies toward the Arab world.
Neither policies of fear and stereotypes based on distorted notions of identity nor attitudes of benevolent paternalism will help to redefine American and European relations with the Arab societies and their emerging new political structures. Transatlantic partners need to engage the Arab world, and eventually Africa, too, in a comprehensive agenda of transformation. As for the transatlantic partners, the United States and the European Union, it will be necessary to move beyond the traditional security paradigm. For a long time, Arab monarchies were considered Western security partners based on geopolitical considerations, with little consideration for domestic issues. In the future, the Arab monarchies can be stable security partners of the West if their legitimate domestic stability provides the ground for predictable international behavior.

The necessary transformation processes will accompany Arab hereditary rulers for many years to come. Transatlantic partners ought to engage Arab monarchies in multifold processes of transformation aimed at advancing modernized monarchies that eventually accept the frame of parliamentary constitutionalism. The notion of parliamentary monarchy may be new to Arab hereditary systems. It is, however, not impossible to achieve, as other monarchies around the world have proven. In fact, it may well be the only realistic option if Arab monarchies are to prevail over time.

Currently, transatlantic partners pursue independent strategies of cooperation with the Arab world. In spite of a strong normative overlap, their strategies also represent different interests and approaches. The U.S. and the EU were taken by surprise when the Arab Spring started. For the U.S., the main initial issue seemed to be the impact of the Arab Spring on the future of Israel. (Both the U.S. and Israel will have to learn that nothing will change for the better in the Middle East without a serious return to negotiations over a two-state solution that includes security for Israel and viable statehood for the Palestinians.) For the EU, the initial approach to the Arab Spring was technocratic, as enshrined in the EU's Neighborhood Policy toward its eastern and southern neighbors. The EU will have to realize that providing more program and project support for democratic transformation is no strategy for responding to a comprehensive uprising.
The first wave of democratic elections in Arab reform countries has posed a new set of problems for the West: How to deal with Islamic political parties that gained legitimacy at the ballot box? Western analysts and media differentiate between radical parties and more moderate, reformist parties. The Islamic parties that surfaced in the aftermath of regime change in the Arab world did indeed begin to take different routes. One set of parties found inspiration in the Turkish Justice and Development Party and declared loyalty to constitution-based rule of law, the legitimacy of the secular state, and the desire to contribute to a renewal of public morale based on Islamic norms. Another set of parties promulgated the reconciliation of democratic constitutionalism and sharia. Finally, a third set of parties is undecided on the recognition of constitutional liberties and political pluralism. In most Arab transformational countries, the Western world may have to learn to deal with moderate Islamic governments. This is as challenging for some in the West as the idea of continuing to cooperate with Arab monarchies that
underwent only gradual transformation. As for the future of Arab monarchies, the West might end up with both hereditary rulers and Islamic governments. Formulating a reasonable and positive strategy to cope with such situations is urgent.

In the end, from a Western point of view, the current opening of the Arab political space should be seen as a golden opportunity. The United States and the European Union would be well-advised to define a joint strategy for future engagement with the Arab world. The strategy's formative ideas should be transformation and legitimacy, its long-term objectives stability and partnership, and its driving instruments geared toward promoting civil society and the private sector.

In the global past, some monarchies went through stages of transformation that stretched over centuries. The hereditary rulers in the Arab world may not have so much time. What was truly new about the events of 2011 is the spirit of the Arab Spring: the self-empowerment of Arab societies, bringing back dignity and hope to frustrated and marginalized societies, enabling millions of citizens to act as proud, self-confident, and open partners of their neighbors. This might only be the first step in a long, vexed journey.

Currently, the main focus among transatlantic partners is on the future of Arab republics, which are torn between the most extreme possible scenarios. Some may think that Arab monarchies will be the last to reform and hence can be neglected. There are good reasons to argue for the opposite. Unreformed Arab monarchies could undermine any progress currently made in Arab republics. But reformed, transformed, and consolidated Arab monarchies could become reliable agents for change and legitimacy in a renewed Arab world. In this context, even the future of regional groupings in the Arab world has become an open issue.
When Saudi Arabia invited Morocco and Jordan to join the Gulf Cooperation Council, the other GCC partners were not amused. No matter how realistic this perspective may prove in the end, it certainly indicates that Saudi Arabia is prepared to strengthen Arab monarchies at the expense of cohesion and deepened integration in the Gulf region. While Morocco and Jordan are engaged in a new phase of advanced internal reforms, the Saudi offer was understood rather as a means to curb and curtail reforms that may challenge the existing structure of power in Morocco and Jordan. A realistic assessment cannot exclude new periods of stagnation and resistance to change in certain Arab monarchies. Skeptics point to the financial support from Saudi Arabia for the more traditional, if not extremist, part of the Islamic movement in Egypt and say it's an indication of reactionary potential, both in revolutionary Arab republics and in nonreformist
monarchies. Optimists laud the support for the Libyan rebels offered by Qatar and the relative openness in Qatar itself, symbolized by the headquarters of Al Jazeera TV there. It is difficult to foresee which side will eventually prevail. The intrinsic links between Arab monarchies and Arab republics are manifold. Yet the West ought to understand that the instability of autocratic Arab republics does not necessarily imply that Arab monarchies are the most resistant to change in the Arab world and that eventually they will disappear.

The next steps in the ongoing Arab revolution cannot be predicted. Though hopes can fail, it would be premature to downplay the potential of hereditary Arab monarchies to transform themselves into parliamentary monarchies. The advantage of the Gulf monarchies is their strong economic basis. Arab oil and gas resources are not unlimited, however. Moreover, the social structure of the Gulf monarchies (with large, poorly treated migrant labor populations) is not without implications for the future of Arab societies. Moving from rent-seeking structures to complex functional economies in which Arab youth will find its legitimate and happy place in life will not be easier in Arab monarchies than in Arab republics. Political surprises, especially in the process of succession, cannot be ruled out. But the Arab Spring, at last, has opened the windows of change in the Arab world. This is a promising new beginning that allows the West a new look at Arab societies and the implications of their development. The future of Arab monarchies is one of the important aspects in the new mapping of the Arab world.



Forty Years of Originalism

By Joel Alicea

Joel Alicea is a student at Harvard Law School.

In the immediate aftermath of his 2010 election as the newest senator from Utah, Mike Lee spoke before a crowd of enthusiastic practitioners, scholars, and students at the Federalist Society National Lawyers Convention. Senator Lee focused on the role of Congress in constitutional interpretation, and he ended his remarks with the following pledge: "I will not vote for a single piece of legislation that I can't reconcile with the text and the original understanding of the U.S. Constitution." The senator's statement rejected the idea that the Supreme Court is the only relevant constitutional interpreter in the federal system and struck at the heart of the "living Constitution," the notion that the original meaning of the Constitution is not binding on today's government officials. By requiring adherence to the original meaning of the constitutional text, Senator Lee sided with originalism. The late scholar Gary Leedes once complained that while originalists ask the federal judiciary to be originalist, they permit the electorally accountable officials substantial leeway: "The Congress can interpret the tenth amendment and the necessary and proper clause virtually as it pleases." Senator Lee's speech represents a forceful reply to Leedes's challenge: Congress must be originalist, too.

The senator's pledge highlights a remarkable fact about American constitutionalism today: Only a generation removed from the constitutional revisions of the Warren and Burger Courts, originalism has not only established itself as a respectable interpretive theory in the federal judiciary, but it has also been taken up by some members of Congress. Even a major-party presidential candidate, Newt Gingrich, has pledged that as president he would interpret the Constitution using originalism. Such a state of affairs was unthinkable decades ago when, as Judge Robert Bork characterized the conventional wisdom of the era, lawyers came to expect that "the nature of the Constitution [would] change, often quite dramatically, as the personnel of the Supreme Court change[d]." But it was precisely because of an article by then-Professor Bork that so much has changed and that Senator Lee's pledge was possible.

Bork's 1971 article in the Indiana Law Journal, "Neutral Principles and Some First Amendment Problems," is widely recognized as having launched modern originalist theory. While Professor Noah Feldman has underlined the role Justice Hugo Black played in the development of modern originalism, it was not until Bork's article in 1971 that the modern originalist movement took flight. Thus, having just passed the 40th anniversary of that landmark essay, it is appropriate that we survey how modern originalism began, how it has changed, and what challenges lie ahead.

The birth of modern originalism

Although Justice Antonin Scalia is fond of saying that originalism was once orthodoxy within the judiciary, Johnathan O'Neill, professor of history, helpfully reminds us that traditional textual originalism and contemporary originalism should not be ahistorically equated. As O'Neill tells the story, what we might think of as originalism in the 18th and 19th centuries was heavily influenced by the traditional notions of statutory interpretation articulated by William Blackstone in his Commentaries on the Laws of England. Modern originalism, by contrast, focuses more on historical sources to determine the meaning of the constitutional text, such as the records of the state ratification debates. In this sense, modern originalism is much more a historian's art than a Blackstonian lawyer's, and it makes sense to think of modern originalism as distinct from the 19th-century brand that preceded it. The distinguishing features of modern originalism could only be vaguely perceived in Bork's Indiana Law Journal article, which, as Bork said at the time, did not offer a complete theory of constitutional interpretation but rather set out to attack a few points that "may [have been] regarded as salient" in order to clear the way for such a theory.
Bork's article began with and focused on a fundamental question: When is authority legitimate? The Warren Court, he wrote, had posed the issue in acute form because the issue of authority arises when any court either exercises or declines to exercise the power to invalidate any act of another branch of government. When the Court reviews the constitutionality of legislation, it confronts what Bork later called the "Madisonian Dilemma." The dilemma arises from two competing principles of the American constitutional system: majority rule and minority rights. The system assumes that the will of the majority will be decisive in most instances. But there are some areas, Bork noted, where society consents to be ruled undemocratically . . . by certain enduring principles believed to be stated in, and placed beyond the reach of majorities by, the Constitution. He claimed that the Supreme Court was entrusted with the power to define both majority and minority freedom through the interpretation of the Constitution. But this responsibility, he continued, imposes severe requirements upon the Court, because it follows that the Court's power is legitimate only if it has . . . a valid theory, derived from the Constitution, of the respective spheres of majority and minority freedom. If the Court does not have such a theory and instead decides cases based on its own value choices, it necessarily abets the tyranny either of the majority or of the minority. This conclusion owed much to the work of Bork's Yale colleague and friend, Alexander Bickel, who famously defined the "counter-majoritarian difficulty" inherent in judicial review. The problem facing the Supreme Court, then, is to find a theory based on neutral principles derived from the Constitution.
Unfortunately, as Bork observed, "it would not be amiss to point out that the principles required of the Warren Court's decisions never did put in an appearance." According to Bork, judicial review by the Warren Court was founded on a flawed premise: that courts must make fundamental value choices in order to protect our constitutional rights and liberties. But if the Constitution already makes those fundamental value choices, then for the Court to do so is profoundly illegitimate. The Constitution might make the choice by granting permission ("The Congress shall have Power to lay and collect Taxes, Duties, Imposts, and Excises") or by withholding it ("Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof"). When the Court imposes its own value choices rather than implementing those embodied in the Constitution, it privileges its own views over those of the democracy and, in so doing, as Bork wrote, claims for the Supreme Court "an institutionalized role as perpetrator of limited coups d'état."

However, the key insight of Bork's article was not that constitutional interpretation must be guided by neutral principles. The conceptual breakthrough of his essay was his tripartite notion of neutral derivation, definition, and application. According to Bork, the problem with past attempts to fashion neutral principles was that the focus had been entirely on neutrally applying the principles, essentially prizing consistency across cases. Bork argued that these efforts did not go far enough. Using Griswold v. Connecticut, the 1965 Warren Court decision often thought to have found a general constitutional right to privacy, Bork attempted to show that it is equally important to define principles with sufficient precision that they become capable of neutral application. But if the Court defines the principle however it pleases, this is just as illegitimate as applying a principle in inconsistent ways, since the definition will necessarily embody the justices' value choices rather than the people's. How a principle is defined, then, must also be accomplished in a neutral fashion.

While this commentary on Griswold would on its own probably have been enough to get Bork into hot water in his 1987 confirmation hearings, he went further and questioned the very idea of a constitutional right to privacy. Having already argued that the right to privacy embodied in Griswold was neither neutrally defined nor capable of neutral application, Bork stated that the derivation of a privacy right was "utterly specious." Bork saw no general right to privacy in the Constitution: "When the Constitution has not spoken, the Court will be able to find no scale, other than its own value preferences, upon which to weigh . . . respective claims." But how are principles to be neutrally derived? Bork offered two possible methods.
First, principles can be derived by taking from the Constitution "rather specific values that text or history show the framers actually to have intended and which are capable of being translated into principled rules." Second, some principles are derivative rights, i.e., rights that will lead [a citizen] to defend them in court and thereby preserve the governmental process from legislative or executive deformation. By this latter category, Bork seems to have had in mind rights that are instrumentally valuable to a society's functioning or to the protection of fundamental rights, but he never explicitly defined what he meant. For our purposes, it is Bork's first method of neutral derivation that is most important: the original intent of the Framers.

Bork was right when he said that "Neutral Principles and Some First Amendment Problems" did not set forth a comprehensive theory of constitutional interpretation. Its purpose was to identify the shortcomings of the Warren Court's jurisprudence in an effort to show the need for a new interpretive framework, one based on neutral principles. All of this work was necessary to clear the way for modern originalism, and Bork offered the original intentions of the Framers as the interpretive theory that could meet the requirements of neutral derivation and definition and solve the Madisonian Dilemma. But because the article was an opening salvo, it left fundamental questions of theory and methodology unanswered. Bork never defined what he meant by original intent, never answered who the Founders were, and never said why the Founders were the relevant group from which to derive the Constitution's meaning. Why should not the ratifiers of the Constitution be the relevant referent? These questions and more would be answered by subsequent originalists, and to some extent by Bork himself, in the decades that followed. But Bork had opened the way for such discussions to take place. He had cleared the underbrush of tangled constitutional theory that had spread wildly for decades, and constitutional theorists could now look afresh at the foundational questions of constitutional theory. It did not take long for others to begin answering those questions.

Originalism in transition
The story of how originalism developed over the next 40 years is well known among scholars. Professor Richard Kay has called it the "standard story" about the originalist approach to constitutional interpretation. Johnathan O'Neill's Originalism in American Law and Politics is an ambitious attempt at a comprehensive retelling, but it is Professor Keith Whittington's essay "The New Originalism" that has been most important in recent years.

A few years after Bork published "Neutral Principles and Some First Amendment Problems," Justice William Rehnquist delivered a lecture called "The Notion of a Living Constitution." The lecture reinforced Bork's critique of the Warren Court as illegitimate. Rehnquist's argument, like Bork's, was grounded in the idea of popular sovereignty: The people are the ultimate source of authority; they have parceled out the authority that originally resided entirely with them by adopting the original Constitution and by later amending it. For the Court to modernize the Constitution rather than stick to its limitations was therefore a usurpation of the popular sovereign. Bork's clarion call had been answered by a member of the Supreme Court.

However, it was Professor Raoul Berger who, with the possible exception of Justice Black, made the first foray into the actual practice of modern originalism. His consequential 1977 book Government by Judiciary made the explosive claim that the Supreme Court's 1954 decision in Brown v. Board of Education was irreconcilable with the original intent of the Fourteenth Amendment. Berger, like Bork and Rehnquist, grounded his theory in popular sovereignty and used sentences descended from "Neutral Principles," such as Berger's claim that "the Constitution represents fundamental choices that have been made by the people, and the task of the Courts is to effectuate them . . . When the judiciary substitutes its own value choices for those of the people it subverts the Constitution by usurpation of power." In the process of developing his historical analysis, Berger ventured a few answers to the questions raised by Bork's 1971 article. For instance, Berger defined original intent as shorthand for "the meaning attached by the Framers to the words they employed in the Constitution and its Amendments."

All of this went a long way toward showing originalism's staying power. It was becoming clear that originalism was no speed bump on the way to a second era of living constitutionalism. As scholars began to grapple with this reality, the intellectual assault on originalism began in earnest. The first body blow came from Professor Paul Brest, who in 1980 pointed out what became known as the "summing problem." Brest argued that there may be instances where a framer had a determinate intent but other adopters had no intent or an indeterminate intent. If different Framers had different intentions, how could there be a single, homogeneous original intent? And even if such an intent could be found, how could scholars know the contours of it? This latter question related to levels of generality. Assuming for the sake of argument that the Founders only intended to protect political speech with the Free Speech Clause (the position adopted by Bork in "Neutral Principles"), how broadly did the Founders define political speech? Originalists were starting to confront difficult theoretical and conceptual dilemmas.

The next serious challenge came from Professor H. Jefferson Powell, who argued that the original intent of the Framers was that the Constitution should not be interpreted using original intent.
If originalism was based on how the Framers thought about the Constitution, then perhaps the correct originalist interpretation of the Constitution was to use nonoriginalist methods for deciding cases. This argument was actually anticipated by Brest in his 1980 article, but it was Powell who did the historical legwork to make it plausible, and it was Powell who subsequently engaged in a years-long scholarly battle with Berger over the accuracy of the former's research.

It was around this time, in the mid-1980s, that Edwin Meese, the attorney general, began publicly advocating originalism in a series of lectures and articles. Combined with the Reagan administration's laser-like focus on confirming suitable judges to the federal bench, Meese's speeches showed that originalism had catapulted itself from the pages of law review articles to become the default interpretive theory of the Republican Party. One of the judges confirmed during this time was Scalia, who stepped in to play a major role in the scholarly debates of the 1980s. Brest's and Powell's criticisms presupposed that originalism was based on the original intentions
of the Framers, i.e., how the Framers believed the Constitution would be interpreted in the future. If this premise was removed, Brest's and Powell's criticisms carried much less force. Scalia proposed a theoretical shift: Change the label from the Doctrine of Original Intent to the Doctrine of Original Meaning. Scalia argued that originalism should not focus on how the Framers thought the Constitution would be interpreted. After all, those expectations were not part of the text that was ratified, and determining those intentions was speculative. What governed was the original meaning of the text, the meaning that the public at large attached to the words of a constitutional provision when it was ratified. A scholar or judge should use all available sources to understand how the words of a provision were defined at a particular moment in history, not to understand how the Framers expected it would be defined by future interpreters.

This shift from original intent to original meaning was followed by three other significant changes in originalist theory. First, a sprawling debate over the nature of interpretation took place from the late 1970s through the 1990s and produced theories that could justify originalism in terms of what it means to interpret. Professors Richard Kay and Larry Alexander were the most notable originalists who argued that to accurately interpret a document was to adhere to the original intentions of the author. This was an important evolution in originalist theory because it opened the door to explaining originalism without reference to popular sovereignty. Second, originalists were forced to acknowledge that there are times when the original meaning of a constitutional provision is unknowable or has hazy contours. This led scholars to begin distinguishing between constitutional interpretation and constitutional construction. As Whittington, who was probably the most important early exponent of this distinction, sees it, interpretation is the activity of discovering the meaning of the constitutional text and applying it directly to the facts at hand, whereas construction refers to the process of applying the Constitution in circumstances where the text does not have a readily discernible or applicable original meaning. Constructions are essentially political in nature; they are attempts to fill in the gaps between known constitutional meanings and are thus owed less deference than interpretations. The "constitutional constructions" label was initially intended to be descriptive of how constitutional law operates, but it cleared a path for some theorists to attempt to build normative arguments legitimizing departures from original meaning. Finally, theorists began deemphasizing judicial restraint as a central tenet of originalism. Since Bork's article, originalists had insisted on the virtue of judges who deferred to the political branches and excoriated judges who
were comfortable wielding power. But originalists began to argue that there was nothing inherently virtuous about a restrained judge; what mattered was what the Constitution commanded. If the judge thought the original meaning of the Constitution required him or her to invalidate a piece of legislation, the judge should not hesitate to do so. Professors Whittington and Randy Barnett were two of the theorists at the forefront of this movement, though each approached constitutionalism from a very different standpoint. Whittington has hypothesized that the move away from judicial restraint resulted from the rise of conservative judges in the federal judiciary and the desire to exercise that power in a conservative direction. Whatever the reason, the shift happened, and the stage was set for the theoretical turmoil of the first decade of the 21st century.

The future of originalism

At a 2006 lecture to a group of constitutional theorists, Professor James Fleming opened his remarks by posing a startling question: "Are we all originalists now?" He was quick to provide the answer: "It would not mean much to claim that this display shows that we are all originalists now. Indeed, we are witnessing the Balkinization of originalism." Fleming's question highlighted the central problem facing originalism as it passes year 40: its internal incoherence. After decades of theoretical mutation, originalism has arrived at the point where there are numerous varieties of originalism, and the only thing they agree upon is their rejection of living constitutionalism. As the scholars Thomas Colby and Peter Smith have argued, originalism "is no longer a single, coherent, unified theory of constitutional interpretation, but rather a smorgasbord of distinct constitutional theories that share little in common except a misleading reliance on a single label." All of this led Fleming to conclude his lecture by flipping the question around on his audience and asking whether, in fact, all of those in the audience were now living constitutionalists.

How did we arrive at the point where the originalism label has no core meaning? Consider where our story left off and compare it with where it began. Bork's article set out to answer the question "when is authority legitimate?" Both his question and his answer were rooted in a theory of popular sovereignty, yet by the close of the 20th century theories of originalism had sprung up that did not depend on the popular sovereignty narrative. Bork's essay argued vigorously for a humble judiciary that acted only when it had a clear mandate to do so, but by the end of the century, some of originalism's foremost scholars had rejected judicial restraint. And while some originalists stuck to the original intentions of the Founders, others moved on to various shades of original meaning.
The internal consistency of modern originalism had begun to unravel.

This loosening of the theoretical bounds of originalism created a vacuum into which stepped Barnett and others. These theorists have advanced theories under the banner of originalism that bear little resemblance to Bork's 1971 article, or indeed to any of the other originalist theories advocated since 1971. Barnett's libertarian vision of the Constitution begins with a categorical rejection of popular sovereignty as a means of legitimating the Constitution. He argues that laws are owed obedience only to the extent that they are (1) necessary to protect the rights of others and (2) proper insofar as they do not violate the preexisting rights of the persons on whom they are imposed. Barnett contends that the Constitution as originally understood enshrined lawmaking procedures that are likely to produce laws that meet these two criteria. But if constitutional interpreters can essentially revise those procedures via living constitutionalism, then the citizenry has no confidence that the criteria will be met under the new procedures, and the legitimacy of the Constitution is thrown into doubt. For Barnett, originalism is a means to achieving the preservation of sound lawmaking procedures.

Central to Barnett's theory is the idea of unenumerated rights embodied in the Ninth Amendment. To protect these rights, Barnett creates a "presumption of liberty," by which he means that laws will no longer be presumed constitutional. Rather, the burden would fall to the government to prove that any given law meets the two criteria Barnett identifies as necessary for legitimate government.

Consider the ocean that exists between Barnett's theory and Bork's. First, Barnett rejects popular sovereignty where Bork made it central to his theory. Second, Barnett prizes unenumerated rights and designs his theory to protect them. Bork, who once referred to the Ninth Amendment as an "inkblot," was very skeptical of having courts enforce unenumerated rights. Finally, Barnett's presumption of liberty could not be further removed from the judicial restraint of Bork's "Neutral Principles." Whereas Bork placed a heavy burden on judges to show that a law was clearly unconstitutional before striking it down, Barnett would presume a law is unconstitutional unless it could be proven otherwise. To call both men originalists makes one wonder what originalism could possibly mean.

Others share responsibility for bringing about this crisis within originalism, Professor Jack Balkin being the most notable among them for his use of the interpretation/construction distinction to argue that originalism and living constitutionalism are two sides of the same coin. But we need not explore the full extent of modern originalism's present indeterminacy. Suffice it to say that as modern originalism enters its fifth decade of existence, its coherence is now severely compromised, and only a reassessment of the
boundaries and core commitments of originalism can restore its solid foundation. The work of solving this indeterminacy crisis will likely play a major role in originalism's future.

But just as modern originalism faces a great theoretical threat, it is presented with a real conceptual opportunity to which Senator Lee's lecture calls attention. Since Bork, originalism has focused almost exclusively on the role of the judiciary in constitutional interpretation. Whittington has accurately explained that originalism was "a reactive theory motivated by substantive disagreement with the recent and then-current actions of the Warren and Burger Courts; originalism was largely developed as a mode of criticism of those actions." Indeed, a real flaw of Bork's article was its uncritical acceptance of judicial supremacy, the idea that the Supreme Court is the final arbiter of the Constitution's meaning. As Senator Lee pointed out in his remarks, Congress has a responsibility, as a branch of government coequal with the judiciary, to evaluate the constitutionality of legislation before passing it. His pledge to abide by originalism when conducting this evaluation speaks to fertile theoretical ground that has yet to be tilled: the application of originalism to congressional and presidential constitutional interpretation.

Professors Neal Katyal and Michael Ramsey have made tentative forays into this field. "Originalism and the Legislature" and "An Originalist Congress," both written by this author, take the topic on directly and argue that the internal logic of various schools of originalism would require Congress to be originalist when it interprets the Constitution. But a good deal is left to be done in this area, and the interest in originalism among members of the political branches ought to inspire scholars to apply originalism extrajudicially.

Bork's originalism, for instance, can provide no good reason why Congress should not also be originalist. Why is it any more acceptable for Congress to favor majority or minority tyranny by substituting its value choices for those in the Constitution? Why should Congress not have to derive neutral principles when it evaluates the constitutionality of legislation? Bork's response may be that the Supreme Court can correct for unconstitutional acts of Congress, but there is a whole range of procedural reasons why the Court might not get a chance to review the constitutionality of a particular statute. It might also be said that the legislature, as the people's representatives, has a right to make such value choices, but this position constitutes methodological suicide for an originalist. If the will of today's electorate can trump the original meaning of the Constitution, then whatever Congress does is constitutionally legitimate, and it would be a usurpation of authority for the Court to invalidate statutes by appealing to the
Constitution's original meaning. Bork's originalism necessarily entails an originalist Congress.

The primary drivers of the effort to consider whether and how originalism applies outside of the judiciary have been public officials, a rarity in the development of constitutional theory. The reemergence of a vigorous brand of constitutional conservatism has been widely commented on, from National Review to the New York Times. The Tea Party movement no doubt has had much to do with this, and the unprecedented assertions of federal power during the Obama administration have likewise contributed to a constitutional backlash. Senator Lee's speech to the Federalist Society is by far the most articulate version of legislative originalism provided by a public official, but the principles he espoused are shared by a growing faction within the conservative movement. There is reason to think that the rise of lawmakers who take constitutional interpretation seriously will continue, and originalism will have to confront the theoretical implications of this development in American constitutionalism.

Originalist scholars have yet to grapple with the emerging realities of legislative originalism and the crisis of indeterminacy within originalism, but the coming decade will no doubt provide ample opportunities to do so. As originalists look forward to marking half a century of modern originalism, building a more comprehensive theory that accounts for the political branches and reestablishes originalism's core principles would be a worthy project. For those seeking to take up this task, Bork's "Neutral Principles and Some First Amendment Problems" is not a bad place to start.



Books

Reading into the Constitution

By Peter Berkowitz
Jack M. Balkin. Living Originalism. Harvard University Press. 474 pages. $35.00.

How far do the president's powers reach in wartime? May the government restrict political expenditures by corporations and unions? Is it within Congress's authority to compel all people, on pain of paying a substantial fine, to purchase health insurance? It is a remarkable fact about liberal democracy in America that left and right agree that to answer such hard questions we must consult the 224-year-old document that brought this country into being, and abide by what it requires, prohibits, and permits. And it

Peter Berkowitz is the Tad and Dianne Taube Senior Fellow at the Hoover Institution, Stanford University, where he chairs the task force on national security and law. His writings are posted at

is a prominent feature of our polarized politics that the quest to determine the Constitution's meaning concerning the great issues of the day excites vehement disagreement between left and right.

The two sides bring to the search for the Constitution's meaning competing theories. Progressives tend to view the Constitution as a kind of living organism that grows and develops, and should be adjusted or altered by courts in response to unfolding circumstances. Conservatives generally believe that however much circumstances may have altered, courts are bound by the Constitution's original meaning.

Both the doctrine of the living constitution and the doctrine of originalism derive support from common sense and from sober observation of liberal democracy in America. On the one hand, as living constitutionalists emphasize, times change, norms evolve, and some of the Constitution's clauses (the First Amendment prohibition on laws abridging freedom of speech, the Eighth Amendment prohibition on cruel and unusual punishment, the Fourteenth Amendment promise that no state shall deny any person life, liberty, or property without due process of law and no state shall deny any person the equal protection of the laws) seem to invite courts to apply open-ended terms such as "cruel and unusual," "speech," "due process," and "equal protection" in light of the best available understandings. On the other hand, as originalists stress, the original meaning, or range of meanings, of constitutional provisions serves as the starting point and anchor for analysis that properly regards the Constitution as the Constitution proclaims itself to be: as the supreme
law of the land; any congressional statute, presidential action, or state law that conflicts with the Constitution's original meaning should be rejected as unconstitutional; and by showing fidelity to the original meaning of the Constitution, which is the most authoritative statement of the people's will and reason, judges maintain courts' democratic legitimacy.

Contrary to much conventional wisdom in the legal academy, which sees the living constitution and originalism as diametrically opposed schools of constitutional theory, Jack M. Balkin, Knight professor of constitutional law and the First Amendment at the Yale Law School, sides with common sense and sober observation. The choice between living constitutionalism and the doctrine of original meaning, he argues, is a false one: "Properly understood, these two views of the Constitution are compatible rather than opposed." To vindicate this conciliatory claim, Balkin offers a constitutional theory, "framework originalism," which views the Constitution as an initial framework for governance that sets politics in motion, and that Americans must fill out over time through constitutional construction. Like originalists, Balkin insists on the need for fidelity to the original meaning of the Constitution, and in particular to the rules, standards, and principles stated by the Constitution's text. But he believes that fidelity to original meaning also requires, in the spirit of living constitutionalism, the development of constitutional constructions that best apply the constitutional text and its associated principles in current circumstances.

Balkin is certainly not the first progressive legal scholar to attempt to root his constitutional theory in the Constitution. In Freedom's Law: The Moral Reading of the American Constitution (1997), New York University law professor Ronald Dworkin sought to show how fidelity to the Constitution required judges to respect grand principles (due process, fairness, and justice) so as to ensure that the laws of the land treat persons with equal concern and respect. And in Active Liberty (2005), Supreme Court Justice Stephen Breyer argued that fidelity to the Constitution obliged judges to give priority to reaching decisions that promote citizens' participation in political life. In both books, however, the authors' professed concern for the meaning of the Constitution seemed a transparent guise for reading into the Constitution their progressive moral and political priorities.

In contrast, Balkin takes history seriously and adroitly integrates law and politics. He has read massively (political debates, speeches, cases, contemporary legal scholarship) and synthesized prodigiously. His intricate elaboration of framework or living originalism, which at its most ambitious amounts to an account of what American constitutional government is and ought to be, provides much to admire and much to ponder, and sets a new standard in grand constitutional theory. Balkin insists that original meaning is binding, while contending that fidelity to it requires interpreters to distinguish between instances in which the Constitution identifies a bright-line rule (Article II, Section 1 requires that the president be at least 35 years of age) and instances, such as the Due Process and Equal Protection Clauses,
Policy Review

in which the Constitution proclaims general principles whose original meaning stays the same but whose application can be expected to change. He explains how culture, social movements, political mobilization, and electoral politics inevitably and properly interact with judicial interpretation, while insisting on the obligation to preserve the all-important distinction between politics and law. And apart from a single lapse in the final three pages, in which he adduces George W. Bush administration officials connected to the policy of enhanced interrogation as instances of constitutional evil, he writes about conservative legal theory and conservative politics critically, without that anger or disdain that disfigures much academic legal theory.

Yet in the end Balkin cannot resist the temptation to which Professor Dworkin and Justice Breyer thoroughly succumbed. The sometimes ingenious arguments that Balkin develops to show how originalism, correctly understood, requires, in controversial issue after controversial issue, decisions that advance progressive goals and prohibit outcomes favored by conservatives violate the sensible strictures that he sets forth for determining and respecting original meaning. They also further underscore that constriction of the liberal imagination that permits academic progressives not merely to view progressive goals as, all things considered, good public policy, but to equate them with reason, morality, and the law.

To define framework originalism, Balkin distinguishes it from the approach of originalism's most famous proponent, Supreme Court Justice Antonin Scalia, and raises a number of serious objections that conservatives will benefit from confronting squarely. Balkin agrees with Scalia that judges are bound by the original meaning of the Constitution. He agrees that the Constitution's original meaning consists in how those who were voting for its ratification or the ratification of its amendments would have understood it. He agrees that original meaning must be determined by standard forms of inquiry into constitutional text, structure, and history. And he agrees that judges are bound by original meaning and barred from substituting for it their moral and political judgments.

Famously, however, Scalia maintains that even when the Constitution inscribes a standard or principle (say, "cruel and unusual" in the Eighth Amendment) it actually constitutionalizes the original interpretation of that standard or principle: for example, the meaning of "cruel and unusual" in 1791, when the Eighth Amendment was ratified. In contrast, Balkin argues that the constitutionalization of standards and principles obliges judges to apply the reasonable understanding of those standards and principles in light of contemporary challenges and contemporary norms and understandings. This, he stresses, does not give judges a license to do as they please. Before they can be applied, the meaning of standards and principles must be ascertained through exacting historical study, and must be consistent with the Constitution's underlying structural principles and goals.

Scalia's fundamental error, according to Balkin, is to confuse fidelity to the Constitution's original meaning with fidelity to the original expected application. Of course, when the Constitution articulates a determinate
rule, there is no gap: the original meaning of the requirement that the president be at least thirty-five years of age coincides with its expected application, in the founding era as well as today. But when the Constitution articulates an indeterminate standard, such as "unreasonable searches and seizures" in the Fourth Amendment, or a general principle, such as the Fourteenth Amendment's "equal protection of the laws," it is reasonable to suppose (and Balkin marshals considerable historical evidence indicating that so it was understood at the founding and in the years after the Civil War, when the Fourteenth Amendment was debated and ratified) that the Constitution instructs judges to determine the import and scope of the standards and principles in light of developing norms and changing social and political circumstances. After all, if the Constitution's exclusive purpose had been to bind the American people to determinate rules, it would have been possible to specify in the Fourth Amendment the kinds of searches and seizures that were forbidden, or to limit the reach of the Fourteenth Amendment's promise of equal protection of the laws so as to, say, exclude women by stating that no state shall deny persons equal protection of the law "on account of race, color, or previous condition of servitude," as does the Fifteenth Amendment in securing the right to vote for the newly freed blacks.

Scalia's equation of original meaning with original expected application, according to Balkin, gives rise to many difficulties, particularly in regard to precedent. Originalists are forced to treat many precedents, which they regard as errors (in particular those providing constitutional legitimation of

the regulatory and welfare state forged by the New Deal), as, in Scalia's words, "pragmatic exceptions" to the imperatives of originalism that must be preserved for the sake of political stability. This reflects, Balkin argues, an admission, contrary to the tenets of Scalia's originalism, that legitimacy can come from public acceptance of the Supreme Court's decision, or from considerations of prudence or economic cost. In addition, contends Balkin, in deciding which precedents to leave undisturbed and which to limit or overturn, judges committed to originalism reinstate the kinds of policy decisions, and vest in judges the kind of discretion, that the theory was meant to preclude. Finally, originalism misrepresents the American political tradition, Balkin asserts, because it treats as regrettable errors in regulation, welfare provision, and civil rights protection the assumption of responsibilities and the performance of tasks by government that have come to be seen by significant majorities as genuine achievements of American constitutionalism and sources of pride.

The most significant limitation of Scalia's originalism, in Balkin's eyes, is that it cannot account for how political and social movements and postenactment history shape our constitutional tradition. In fact, argues Balkin, the Constitution is always both at rest and in motion. While the original meaning of the Constitution does not change, except by the amendment procedures laid out in Article V, the application of the text, since it includes standards and principles, does. Indeed, because conditions are always changing, new problems are always arising, and new forms of social conflict and
grievance are always emerging, the process of argument and persuasion about how to apply the Constitution's text and principles is never ending.

Balkin recognizes, at least formally, that the drama of constitutional democracy that he depicts is a two-way street: Views about salient norms and the proper reach and application of constitutional principles may be governed as much by a conservative aspiration for a restoration of the Constitution's lost meaning (concerning, say, the right to bear arms) as by a progressive aspiration for redemption of the Constitution's grand promises (as, for example, in the Preamble's aspiration to form "a more perfect Union"). But in the course of his book, Balkin makes clear that the progressive idea of redemption, or the claim that our Constitution is always a work in progress, imperfect and compromised but directed toward its eventual improvement, is the worthy aspiration. He proclaims that his theory of constitutional interpretation is also a theory of redemptive constitutionalism (italics in the original). He does not proclaim that his theory of constitutional interpretation is also a theory of constitutional restoration. In other words, on Balkin's reading, the Constitution is fundamentally a progressive document whose original meaning gives priority to redemption and allows but does not respect restoration. Yet recovering what is inevitably lost and the wisdom embodied in tradition are also vital imperatives for all legal systems. To the extent that Balkin's theory, or at least his application of his theory, recognizes restoration, it is only to recover the Constitution's lost progressive meaning and show that it authorizes progressive policies. This governing moral and political intention distorts his understanding of the Constitution's original meaning and hinders his quest for fidelity.

Consider Balkin's analysis of the modern regulatory and administrative state and the Commerce Clause, to which he devotes his longest chapter and which he asserts serves as a good test for the plausibility of any theory of constitutional interpretation. He argues that the New Deal is consistent with the Constitution's original meaning, its text, and its underlying principles. And that Article I, Section 8, Clause 3 of the Constitution, which provides that Congress shall have the power "to regulate commerce . . . among the several states," played a pivotal and, in Balkin's eyes, entirely legitimate role in justifying New Deal expansion of government power. To be sure, it was necessary for the Supreme Court to apply new constructions or interpretations of the unchanging powers that the Commerce Clause conferred on Congress to legitimate the new functions that the political branches fashioned to deal with the emerging challenges of regulation and administration the United States faced as a rising industrial world power. But properly understood, argues Balkin, the Commerce Clause is an extremely broad and flexible grant of power to Congress whose extreme breadth and flexibility are built into its original meaning but cannot be defined or exhausted by its original application.

Attempting to turn the tables on conservative originalists, Balkin charges that they have tended to view the commerce power through modern eyes and therefore have misread it. Whereas conservatives have identified

the original meaning of commerce narrowly with the trade of commodities, or somewhat less narrowly with economic activity in general, Balkin argues that those are constructs created by 19th-century courts. In contrast, in the 18th century, commerce meant intercourse, and it had strongly social overtones (italics in the original). In Balkin's "interaction theory" of commerce, the Commerce Clause gives Congress the authority to regulate all sorts of interactions among the states, whether they are used for commercial or noncommercial purposes, provided they present a federal problem. In addition to failing to grasp the breadth of the 18th-century concept of commerce, conservative originalists have failed to appreciate the flexibility inscribed in the original meaning of federal and enumerated powers, a phrase Balkin believes is more accurate than "limited and enumerated powers." According to Balkin, in the founding era the Constitution was understood to give Congress power to legislate in all cases where states are separately incompetent or where the interests of the nation might be undermined by unilateral or conflicting state action. Federal power kicks in where state power cannot solve the problems that arise between states: Properly understood, the commerce power authorizes Congress to regulate problems or activities that produce spillover effects between states or generate collective action problems that concern more than one state. A target of regulation only lies beyond Congress's power under the Commerce Clause, asserts Balkin, if Congress cannot reasonably conclude that an activity presents a federal problem.

If Balkin were correct that the original meaning of commerce included social interactions generally, and that an enumerated power is a power Congress needs to deal with a problem that states cannot be expected to solve on their own or collectively, then he is also correct that the continually expanding modern activist state is unproblematic and that the individual mandate of the Affordable Care Act falls easily within Congress's authority under the Commerce Clause. But Balkin's account of the original meaning of the Commerce Clause is not correct. First, while he correctly notes that one of the meanings of commerce in the 18th century (as today's dictionaries will confirm remains true of the word) was much wider than trade or economic activity, including social relations generally, he asserts without adequate investigation that commerce's widest meaning was incorporated into the Constitution and was consistent with the publicly understood meaning of the Constitution's use of the term. It wasn't. Oddly, given Balkin's dedication to recovering the Constitution's original meaning, his book's extensive index does not contain a single entry for The Federalist, the collection of 85 newspaper articles authored by Alexander Hamilton, John Jay, and James Madison, which became the authoritative exposition of the Constitution and the deservedly most famous contribution to the ratifying debates. And Balkin's 100-plus pages of footnotes contain one passing reference to Federalist 10, in which he distorts Madison's view.1 In The Federalist, the term commerce generally does not refer to social relations of all sorts.
Policy Review

Rather, and contrary to Balkin's contention that such distinctions were introduced into American constitutional discourse by 19th-century courts, The Federalist regularly contrasts commerce to agriculture and manufacturing and routinely uses the term to refer to the activities of trade, finance, and exchange through markets. Second, Balkin bases his extremely flexible understanding of the meaning of federal and enumerated powers on a misreading of a statement by one of the key founders, James Wilson, in the Pennsylvania ratifying convention in 1787. Balkin quotes Wilson:
Whatever object of government is confined, in its operation and effects, within the bounds of a particular state, should be considered as belonging to the government of that state; whatever object of government extends, in operation or effects, beyond the bounds of a particular state, should be considered as belonging to the government of the United States.

Balkin takes this to mean that Congress may legislate in all cases where states are separately or collectively incompetent. But he does not pay enough attention to Wilson's phrase "object of government," which does not refer to any
1. Balkin claims Madison argues that the values of a national political majority . . . may often be more moderate and better protect the right of minorities than those of a smaller, more homogeneous political community. In fact, in Madison's view, moderation comes not from the values of the national majority (which Madison does not discuss), but from the Constitution's sound institutional design: Representation enlarges and refines the will of the people, and the size and diffusion of the population either prevents dangerous passions from arising in the majority or prevents majorities from carrying out schemes of oppression.

object at which government may aim but rather government's legitimate objects or purposes. We can be confident that Wilson assumed a distinction between legitimate and illegitimate objects of government action for reasons connected to the third major flaw in Balkin's interpretation of the original meaning of the Commerce Clause, which is his demotion or suppression of the supreme underlying structural principle of the Constitution and of its highest goal: The principle is that of limited government, and the goal is the securing of individual liberty. Federalist proponents and Anti-Federalist opponents of the Constitution were united in the conviction that the central task of government was to secure individual liberty, and that the central challenge was to forge a national government that would be powerful enough to secure liberty but robustly limited so that it would not wield its power to crush liberty. Accordingly, in Federalist 45 Madison reaffirmed the notion that pervades The Federalist, that "The powers delegated by the proposed Constitution are few and defined. Those which are to remain in the state governments are numerous and indefinite." In that context, Madison observes that the power to regulate commerce is a new power beyond those contained in the Articles of Confederation, but one which "few oppose and from which no apprehensions are entertained." Balkin's reading of the Commerce Clause, which turns the Constitution into a charter of unenumerated and virtually limitless powers, makes nonsense of Madison's assurances that the Commerce Clause is consistent with the Constitution's assignment of few and defined powers

June & July 2012

to the federal government. It also relegates the Constitution's supreme underlying structural principle of limited government to irrelevance. And it turns the goal of securing individual liberty, which both sides in the debate over the ratification of the Constitution agreed was paramount, into an afterthought. To determine whether the Constitution gives Congress authority to compel people to buy health insurance or pay a substantial fine, and to answer other hard questions, such as how far the president's powers reach in wartime and whether the government can restrict the political expenditures of corporations and unions, it is certainly not sufficient to recognize that ours is first and foremost a constitution of limited powers whose primary purpose is to secure the rights shared equally by all. But it is indispensably necessary.

Shabby Soviet Reality

Francis Spufford. Red Plenty. Graywolf Press. 448 pages. $16.00.

By Marshall Poe

Marshall Poe teaches history at the University of Iowa and is the host of New Books in History.

Francis Spufford's Red Plenty is a strange and wondrous thing. It's a novel, but it's also a history. It tells a made-up story, but it has many nonfiction passages. Its characters are invented, but they are based on real people. It's about the USSR, but its author is not a Soviet expert, nor does he know Russian. Most remarkably, it focuses on one minor episode in Soviet history, but it explains the nature and demise of the USSR better than any book, fiction or nonfiction, I have ever read. Whether you are interested in the history of the Soviet Union or not, I am certain you will enjoy this marvelous book. It reminded me of Orwell at his best. But if you are interested in Soviet history, as I am, the book will have special significance. For many decades now, professors, pundits, and politicians have debated a single important question about the USSR: Could the Soviet project to build communism have succeeded, or was it doomed to failure from the start? In Red Plenty, Spufford offers a brilliant answer. In order to understand that answer, however, you must also understand the long and often heated intellectual struggle in which it is situated. Red Plenty has a backstory and, since I was there to witness it, allow me to tell you about it.

I first went to the Soviet Union, ironically enough, in 1984. I had studied Russian, and I had taken a few classes and read a few books about the USSR. Being a disaffected youth, I was intrigued by the idea of socialism. It seemed to have great promise, and it gave me something to say about my surroundings other than, "Man, this sucks." I paid especially close attention to what those in the New Left said about the USSR, about the way the American and Soviet systems were converging. Thus I flew to Moscow believing that the USSR, though regrettably not politically free, was culturally sophisticated, economically prosperous, and thoroughly progressive. I half expected to find a country a bit like the United States, though without Republicans. I found no such place. Housing in the USSR was a problem. We lived three to a room in a dilapidated, cockroach-infested panel building. Though it had a large custodial staff, it was always filthy. We had water, but most of it was cold. We had heat, but the only way to regulate it was by opening and closing the window. We had light, but if a bulb went out it was not likely to be replaced anytime soon. Food in the USSR was a problem. There were no grocery stores as such, but rather dirty, poorly stocked, and unimaginatively named shops: "Bread," "Meat," "Produce." There were almost no restaurants, and fast-food joints were nothing at all like McDonald's. There were what we would call farmers markets. They had a lot of food, but almost no one in Moscow could afford it. Finished goods in the USSR were a problem. There were, as far as I knew, only two department stores in Moscow. Neither even remotely approached J.C. Penney in quality or quantity of items on sale. There were sundry stores, but they seemed to sell nothing but pencils, pens, and paper, all of an inferior grade. Transportation in the USSR was a problem. The Moscow Metro was wonderful, but if it didn't go where you wanted you faced a series of bad options: slog through the icky mud (sidewalks were rare), take a smelly bus (often late and uncomfortably packed), or hire a junky cab (Soviet cars have a well-deserved reputation). Believe it or not, I had an excellent time during my stay,

largely thanks to my many warm Russian friends. That's the general picture I recall. But what I remember even more vividly are anecdotes, telling episodes in which the true nature of Soviet life revealed itself. The time I asked for a menu at a famous Georgian restaurant and was told there was no point. The time I witnessed a grown man cry because his Soviet jeans had fallen apart. The time I went to a state motor pool to illicitly buy gas. The time I saw drivers stop their cars during a shower to put their precious windshield wipers on. The time I saw a friend bribe a traffic cop. The time I found large bone fragments in my soup. The time I exchanged money at a large multiple of the official rate with a black-marketer. The time I went to a foreign currency store to buy food for a sick kid. The time I had my glasses fixed by a moonlighting toy-maker. The time I saw old women sweeping the streets with homemade brooms. The time I bought beer by the bucket out of a tanker truck. The time I watched a dog's carcass decompose over the course of months right on the street. The time I took the train from St. Petersburg to Helsinki and had the feeling someone had turned on the lights.

Upon my return to the U.S., I entered graduate school at Berkeley to study Russian history. The professor who guided my studies of the Soviet Union was a famously good scholar and advisor. I really liked him, as did everyone. As it happened, he was very sympathetic to the New Left and its understanding of Soviet history. According to this view, capitalism was bad and dying and

socialism was good and inevitable. The Bolsheviks were brought to power by a popular revolution led by the working class with the kind assistance of the retrograde peasantry. Once in power, Lenin and company began to build something like socialism, though it wasn't true socialism. After Lenin's death, however, Stalin seized power and betrayed the lofty goals of the revolution. What he built was definitely not true socialism. When Stalin expired, Khrushchev returned the party to the Leninist course and resumed socialist construction. He was ousted, demonstrating that the USSR was still politically immature, but nonetheless socialist construction continued under new leadership. The contemporary Bolsheviks weren't there yet, but the numbers looked good. According to Soviet statistics, the average standard of living in the USSR was approaching that of many Western countries, and without alienated labor. Thus was E.P. Thompson, intellectual star of the New Left and author of the then-ubiquitous Making of the English Working Class, able to write: "Soviet society, which is in important respects more classless although it is certainly less free, is more advanced [than Western societies]." Those Western societies, Thompson said unkindly, were dominated by "that old bitch gone in the teeth, consumer capitalism." My advisor in matters Soviet agreed with much of this, as did many on the Berkeley faculty. As my studies progressed, I discovered that there were scholars, though not many of them, who didn't believe any of this. Robert Conquest and Richard Pipes, for example, offered a very different interpretation of Soviet history. They began from the premise

that, on the evidence itself, capitalism was good and communism was bad. The former, they noted, had made people prosperous and free, while the latter, whatever its supposed liberating potential, made them poor and bound. As for the Bolsheviks, they had mounted a successful coup and claimed popular support in order to win popular support. Once in power they did indeed begin to build communism. But there was no good Lenin, bad Stalin divide. Stalin finished what Lenin had started. So when Stalin's successors proclaimed that the period of socialist construction was over and the period of real existing socialism had begun, we should believe them. The Bolsheviks were not building communism, as the New Left claimed; they were living in it. The Soviet Union was communism in the flesh. The Soviet statistics, of course, were a sham: The USSR was shabby, poor, and corrupt, as anyone who had been there could tell you. But Conquest and Pipes went further. They said that the writing was on the wall for the Bolsheviks, and it had been for a long time. The Communist Party might have had the trust of the people once, but it was losing that trust because it could not deliver on its promises. Oppression and lies might delay the end, but eventually the citizens of the Soviet Union would say "enough." It was inevitable. So here were two answers to the Soviet question circa 1987: The New Left said the party could and would build socialism, while those we might call the realists said it already had built socialism and the results doomed it. I confess I didn't know what to think, but my experiences in the Soviet Union, rather than anything I read,

made me tend toward the realist position. The Soviet Union I had seen was, in fact, shabby, poor, and corrupt. Moreover, I saw no sign of anything resembling faith in the system. Most of the people I knew had passed from bitter cynicism to hopeless acceptance. In any event, the Soviet question received a decisive answer in 1991, when the USSR collapsed practically overnight. By luck, I was there to watch the Soviet flag come down from the Kremlin for the last time.

Which brings me back to Red Plenty. There was a time when English writers were all socialists of one stripe or another, with some obvious and notable exceptions. No more. Spufford has no sympathy whatsoever for the New Left's now-defunct understanding of the Soviet Union. In Red Plenty, Spufford essentially explores how Soviet communism undermined itself by raising expectations, systematically subverting the programs designed to meet them, and then raising them again. He does so by focusing on a peculiar and little-known episode: the cybernetics initiative of the 1950s and early 1960s. It was one in a long string of Soviet attempts to make communism deliver prosperity. These included, in chronological order: nationalizing everything (didn't work), eliminating money (didn't work), stealing from the peasantry (didn't work), enslaving class enemies in work camps (didn't work), collectivizing the peasantry (sort of worked, but of course not for the peasantry), five-year planning (worked, but only for heavy industry), and hauling German factories back to Russia (didn't work). After World War II, the

party had run out of fresh ideas to bring the wealth to the masses. So it simply told people that they were prosperous (didn't work). Spufford picks up the story with Khrushchev. He had had enough of the lying: Stalin was a bastard, the people were poor, and something new was needed to get things moving. But what? The imaginative general secretary had many (too many, it turned out) answers, but the one most germane for Red Plenty is science. Truth be told, he was not exactly the first to claim that science would fill the socialist horn of plenty. Stalin put great stock in science as well, or rather a pseudoscience called Marxism. He was, however, very suspicious of all the real sciences because they were bourgeois. To be of any use in the USSR, Stalin said, the real sciences had to be made Marxist. Stalin actually wrote quite a bit about how to make a real science into a Marxist science. He was, after all, a universal genius. Naturally it was all nonsense, as the infamous Lysenko affair and his Lamarckian plants demonstrated. But Khrushchev took a different view. In 1957 he had seen with his own eyes how Soviet scientists used purloined bourgeois know-how (German, as it happened) to overtake and surpass the West by launching Sputnik. What a coup! This was just the sort of magic Nikita Sergeyevich was looking for, he wanted more of it, and he cared not one whit what class interest it reflected. The particular brand of scientific magic then on offer, at least in the realm of socialist planning, was called cybernetics. During the Second World War, the generals gave German, American, and British scientists the task

of designing machines that could accomplish computationally intensive tasks, such as targeting and firing weapons faster and more accurately than their human operators could. The machines they built were, essentially, the first really practical robots: They gathered information, interpreted it, and adjusted to the result. In other words, they regulated themselves, and to great effect. Norbert Wiener, a mathematician, was one of the scientists who designed these self-regulating systems. He also had a philosophy degree, so he quickly saw their broader significance. He correctly noted that they were found everywhere in nature. Every atom, every molecule, every cell, every organ, every living thing, every environment, the Earth, the solar system, the galaxy, and even the universe itself was a self-regulating system. This being so, he concluded, the study of self-regulating systems constituted a kind of super-science, a discipline that ordered all the ordinary sciences. He called this discipline cybernetics. For a generation of scientists, cybernetics represented the most promising avenue to the theory of everything. For Khrushchev and his scientific advisors, it represented the most efficient road to having everything. Cybernetics would, finally, bring prosperity to the Soviet Union. It would do what Gosplan could not: make the Soviet economy into a vast, efficient, self-regulating machine that brought supply into perfect harmony with demand for everyone. The party poured resources into cybernetics. It also spent a lot of money on propaganda about cybernetics, just so the people would know that their leaders were on the case with a fresh and promising idea.

Red Plenty is a fictionalized account of the cybernetics program. Spufford tells the story through a series of interwoven vignettes, each inhabited by sharply drawn characters roughly based on real people. We meet starry-eyed mathematicians who are sure equations are the key to making socialism work. We meet academic careerists who care more about the way the political wind is blowing than the way knowledge is progressing. We meet members of the intelligentsia who are carefully testing the waters of thaw-era freedom. We meet party hacks who know nothing will change and are glad of it. We meet collective farmers who don't have a proverbial pot to piss in. We meet journalists who struggle to come to grips with the reality of Soviet oppression. We meet corrupt institute directors who want only to feather their nests. We meet factory managers who are desperate to fulfill the plan, or at least to appear to do so. We meet black-marketers who help them get what they need to fulfill said plan. We meet the gangsters who protect the black-marketers. And we even get to meet Khrushchev himself, both as he vows to overtake and surpass the West on his way to America and as he muses on why it all went wrong in forced retirement. There are certain scenes in the book that are both remarkably telling and memorable: the city boy's discovery of squalid poverty in a village right outside Moscow; the apparatchik explaining to the naïve scientist that money means nothing in the Soviet economy; the black-marketer being shaken down by both the gangsters and the police. I could go on and on, but I don't want to spoil it for you.

Of course the cybernetics initiative came to nothing, as Spufford explains in vivid colors. And here we come to the underlying message of Red Plenty: The Soviet system, real existing socialism, ensured that prosperity, let alone freedom, would never come to the USSR. The minute the Bolsheviks decided central planning would completely supplant markets, the die was cast. Very quickly, certainly by the end of the NEP period in the 1920s, the Soviet Union reached a profoundly sub-optimal equilibrium, as the economists might say. Naturally the party saw that things weren't going to plan. As Khrushchev said, communism was supposed to make people rich, not keep them in poverty. The party thus launched repeated reform efforts aimed precisely at bringing wealth to the masses. All of them, however, were in vain because they were aimed at perfecting a fundamentally flawed instrument: the centrally planned economy. The cybernetics initiative is a perfect (if minor) example. Khrushchev thought it would allow planners to rationally allot resources across the entire economy so as to maximize efficiency and, thereby, raise standards of living. But, as Spufford's characters learn, it could never work. Neither cybernetics nor any other science could determine what everyone wanted and give it to them; there was just too much to know and no way to know it. Besides, a host of interests in the system didn't want to see this nut cracked and were deathly afraid of attempts to crack it. Millions of good Soviet citizens lived on the system's inefficiency and corruption. What would they do without it? It's not fair to say that the party mounted this and other reform efforts just to look busy. It

is, however, fair to say that the party recognized that they went a good way to convince the masses that prosperity was coming, even though everyone in the party knew that it wasn't. We shouldn't think, however, that starry-eyed programs like the cybernetics initiative did nothing to alter the Soviet system. As the cyberneticians themselves would tell us, no self-correcting, self-sustaining system is completely closed. The human body, an excellent example of such a system, relies on many inputs that it cannot produce: water, food, air, etc. When these resources are unavailable, the human body breaks down (Lenin's body being the only known exception). The Soviet system depended on many such inputs, but the most important of them was legitimacy. As we've noted, the Bolsheviks promised that if everyone would do as they were told, prosperity would follow. They very publicly mounted reform efforts to make good on this promise. Again and again they failed. They managed to remain in power by means of coercion, deception, and more promises. But all the while they were depleting their stock of legitimacy. Soviet citizens knew what socialism was supposed to be, and they knew what it actually was. Every time they had no hot water for a month, were unable to find sausage, stood in long lines for shoddy shoes, and waited in vain for a wretched city bus, they witnessed the yawning gap between the party's aspirations and its accomplishments. This, they ironically said, was "Soviet reality." Confidence eroded. People grew cynical. Since the party was neither willing to scale back expectations nor to adopt the market

reforms that might lead to their partial fulfillment, it had no way to regain the trust of the people it supposedly served. By the 1980s, when I first went to the USSR, the Communist Party had nothing left. No one believed that the promise of 1917 could be made good, not even party members themselves. An essential input was gone, and the system collapsed, as it had been destined to do from the very beginning.

Business Ethics, Sharpened

Hardy Bouillon. Business Ethics and the Austrian Tradition in Economics. Routledge. 192 pages. $60.00.

By Kurt R. Leube

Kurt R. Leube is professor of economics emeritus and research fellow at the Hoover Institution, Stanford University. He also serves as academic director of the European Center of Austrian Economics, based in the Principality of Liechtenstein.

It is somewhat unfortunate that the English title of this book does not immediately attract the interest of the large readership it definitely deserves. Hardy Bouillon's Business Ethics and the Austrian Tradition in Economics is a hugely rewarding read and arguably ranks among the most important contributions to the field of business ethics in recent years. It is demanding, and a long overdue and serious challenge to a subject that not only suffers from the slippery vagueness of its customary terminology but is also still firmly in the grip of self-appointed ethicists and moralists. Apparently a deliberate ambiguity and the prevailing zeitgeist pay off politically as well as in academic circles. The book was originally published in German as volume IX of the ECAEF book series Studien zur Wirtschafts- und Gesellschaftsordnung and is now available in this splendid translation. Bouillon is one of Germany's leading social philosophers and currently professor of philosophy at the University of Trier (an old, midsized town in the triangle of France, Luxembourg, and Germany). He also serves as director for institutional research at SMC University, Vienna, and as academic deputy director for the New Direction Foundation, a new think tank based in Brussels. His numerous books and essays have been translated into several languages, including Chinese. This book is not politically correct and thus will almost certainly provoke some heated debates. In the course of four chapters, with short concluding remarks in the fifth, Bouillon offers here quite a few original insights and explanations to the understanding of business ethics and its decisive central part, namely the definition of a morally just economic action. After all, can such vaguely defined slogans as "corporate social responsibility," the ubiquitous word "sustainability," or the equally insuppressible phrase "social justice" really provide for a productive discussion in business ethics?

Unlike the majority of authors in the field, Bouillon takes a firm stand and confronts the current semantic and intellectual confusion by contesting the most decisive part of business ethics: justice in moral economic actions. He leaves no doubt that business ethics as an academic discipline presents itself mostly as politically biased and only on rare occasions as a science grounded in logic following clear definitions. Thus, through detailed examinations of the basic assumptions of the body of current business ethics, right from the start Bouillon unrelentingly points out the various shortcomings within the subject, which are due to the sloppy and unconvincing language used by most contemporary ethicists. The philosophically untrained reader will be grateful not only for Bouillon's gentle and clear step-by-step introduction to the world of precise philosophic thinking but also for his reminder of the implication compliance rule, which is increasingly ignored. According to this principle, a logical conclusion may never have an implication that is not already implied in the subject. Or as he puts it: A logical conclusion may not smuggle in new information and claim validity at the same time. This is especially important for his discussion of the distinction between the empirical and normative aspects of business ethics. In order to follow his arguments it is also important to understand his newly introduced term "methodological individualist ethics," which he describes as an ethics that corresponds with one of the main methodological pillars of the Austrian School of economics, namely its methodological individualism (a term coined in 1908 by Joseph A. Schumpeter). Unlike established business ethics, which asserts that there is a moral connection within enterprises, or nations, or any

other entity, Bouillon argues that only human beings and their deeds can be categorized as moral. This assumption is important, as firms or other organizations are not actors within Bouillon's methodological individualist ethics. It follows that "social responsibility" or "value to society," among other popular and widely used phrases, can only be interpreted as a composite of many individual actions. The precise operational definitions of his terms and concepts, which he offers along with some elucidations of commonly used words and phrases such as "economic goods," "property," or "markets," pave the way for a very useful, step-by-step guide taking readers from a definition of economic action to the definition of moral economic action. In order to lead us to the proper understanding of what he labels moral economic actions, Bouillon provides an interesting and so far ignored correlation to the eminent four prerequisites of an economic good laid down by Carl Menger, the founder of the Austrian School of economics. A thing can become an economic good only, Menger wrote, if the following prerequisites are simultaneously present:

1) A human need.
2) Such properties as render the thing capable of being brought into a causal connection with the satisfaction of this need.
3) Human knowledge of this causal connection.
4) Command of the thing sufficient to direct it to the satisfaction of the need.

Only when all four of these prerequisites are present simultaneously can a thing become a good.

Bouillon, in good Austrian tradition,

argues that we always utilize economic goods according to our subjective preference rankings. It is for this reason that he concludes that in order to be identified as an economic action, such an action must be performed under the condition of scarcity and must also be subjectively useful. This statement is important for the understanding of his innovative definition of a moral economic action, because "inasmuch as the characterization of moral action . . . is accepted, we can conclude that economic action must be compatible with this characterization for being entitled to use the label moral economic action" (his italics). In other words, the decisive part of the academic field of business ethics is the moral economic action, or at least, as Bouillon puts it, moral economic action should form it. In the third chapter Bouillon helps us grasp and follow his arguments for a workable definition of moral economic justice. This chapter is arguably the most important. Especially with today's "confusion of language in political thought" (as Friedrich von Hayek wrote), in the face of the ongoing controversies, and with the muddle surrounding the ambiguous concept of social justice, right from the onset Bouillon makes it clear that there are no more or less just actions, as there are more or less courageous, generous, and moderate acts. He who suggests a norm that is not entirely compatible with justice willy-nilly recommends injustice, a cause most people try to avoid, for instance by way of redefining justice. At the end of the day, should we try to adapt the concept of justice to existing social norms (which we will never fully comprehend)? Or should we rather use these mostly intuitively felt social norms to test whether they can stand up against what Bouillon calls formal justice?

By and large, in following Hayek's theory of social evolution, Bouillon argues that any social order grows up because individuals unintentionally act within general rules of universal application. The emergence of a social order thus is only possible because individuals act in a certain predictable way with respect to one another. Those groups which have the most effective sets of personal rules of conduct will survive and expand more easily than others. However, the overall effect of observing these rules cannot be known in advance, just as the winners of a game cannot be deduced from looking at or analyzing the rules. In other words, men did not simply design a set of social norms or rules and impose them upon the environment. Social norms may evolve over time from voluntary conventions, from contracts, or even loose agreements between people. These social standards are a sort of structure of interrelated parts that display some predictability and regularity due to the rules that govern their behavior. Our mind is itself a system that undergoes permanent changes as a result of our efforts to adapt to new situations. It is a sort of process of continuous and simultaneous classification, and constant reclassification, on many levels of impulses proceeding in it at any moment. This arrangement is applied in the first instance to all sensory perception but in principle to all the kinds of mental entities, such as emotions, concepts, images, drives, etc., that we find to occur in the mental universe. It seems that the whole order of sensory qualities, all the differences in the
Policy Review

effects of their occurrence, could be exhaustively accounted for by a complete explanation of all their effects in different combinations and circumstances. We can go along with Bouillons argument concerning economic moral actions and label them as just so long as they do not interfere with the liberty of others (he thoroughly describes liberty in his second chapter). However, the real question about the politically popular concept of social justice still remains to be investigated. Is his understanding of formal or commutative justice (always demanding a restitution of injustices and only indicating what one person is due from another under generally accepted conditions) weakened or seriously damaged by the introduction of that omnipresent phrase social justice? Over the past 150 years, it seems the concept of social justice has successfully replaced the clear meaning of distributive justice and has inspired generations of social policymakers. Although the model of distributive justice can easily be traced back to Aristotle, according to Hayek the synonymous use of distributive justice and social justice was introduced by John Stuart Mill; the usage has since avoided any serious discussion. It is worthwhile to recall Mills own language of 150 years ago:
Society should treat all equally well who have deserved equally well of it, that is, who have deserved equally well absolutely. This is the highest abstract standard of social and distributive justice; towards which all institutions, and the efforts of all virtuous citizens, should be made in the utmost degree to converge.

Very importantly, however, Bouillon points out that social justice differs from the Aristotelian use of distributive justice, insofar as it seems to intentionally discard the decisive aspects of individual success and accomplishment. One of the most important reasons why the term is so popular, according to Bouillon, is that it appeals to those deeply rooted natural instincts that were appropriate in small tribal societies and equally small organizations. But over the span of hundreds of thousands of years, we have gradually developed into a modern mass society that is radically different from its forebears, and we function now on the principles of equal treatment and free cooperation.

And still, most arguments in favor of social justice assume that there is a certain set quantity of goods or services, like a cake, that can be sliced and then distributed according to abstract moral principles such as need or merit, rather than according to the principles by which the goods or services were produced in the first place. In markets, however, there is no such distinction. Income is distributed according to the anticipated marginal productivity of factors. A person's income in a market economy, therefore, is a function of the value of his or her services to others. It is thus inappropriate to assume that there is any merit in a moral sense in his or her actions. The notion of social justice becomes meaningless in a society of free people because only a mixture of an individual's skills and luck determines the outcome. In other words, the term "social justice" is mostly used to imply that a particular distribution of wealth or income between various members of a
society is fairer or more just. The never-ending trust in the concept of social justice seems to have emerged, among other erroneous beliefs, from a misconception of society. However, we must clearly distinguish between two types of society: one which is the result of the spontaneous interaction of a multitude of people with different purposes and goals, like markets; and another which is the result of a deliberate design, determined by a shared purpose and goal, such as a club, a corporation, or an organization. To repeat, the notion of social justice is meaningless in a society of free people in which only a mixture of skills and luck determines the outcome.

The final chapter is a highly instructive summary in which Bouillon not only reiterates the countless logical contradictions of diverse concepts and the consequences of these inconsistencies for contemporary intellectual misconceptions in business ethics. He also applies the definitions and insights provided in the previous chapters to politically sensitive and fiercely contested topics such as organ trade and abortion, insider trading and data protection, or the politically charged cases of bribery and corruption. In light of current debates about market failures, dazzling bonuses for bankers or CEOs, special taxation for the rich, or the famous but vacuous phrase "fair share," the topic of business ethics is discussed almost everywhere. These debates are regrettably argued either with sloppy terminology or without any deeper understanding of the complex subject. Hardy Bouillon's slim book is an important and illuminating contribution, a must-read in business ethics.

Falling from Grace

Timothy S. Goeglein. The Man in the Middle: An Inside Account of Faith and Politics in the George W. Bush Era. B&H Books. 272 Pages. $19.99.

By Paul Kengor

The Man in the Middle: An Inside Account of Faith and Politics in the George W. Bush Era is not your typical memoir of the Bush years, nor of any White House. To be sure, readers looking for noteworthy details and insights into the Bush presidency from both a policy and historical perspective will get just that. They will not be disappointed. But they should first brace themselves for a gripping personal story of one man's fall and redemption. And whether you are a person of faith or not, or the coldest policy wonk, you are certain to be drawn in from the opening pages of Tim Goeglein's autobiographical account. The story begins vividly with Goeglein, for seven years George W. Bush's deputy director of the White House Office of Public Liaison, dropping to his knees, trembling in his office

Paul Kengor is professor of political science at Grove City College. His books include The Crusader: Ronald Reagan and the Fall of Communism and Dupes: How America's Adversaries Have Manipulated Progressives for a Century.

in the building next to the White House. He had just gotten an e-mail from an old journalist friend. "I opened the e-mail, read it once, felt the blood drain from my head, got down on my knees next to my desk, and was overcome with a fear and trepidation as never before," writes Goeglein. "My only prayer, which I repeated again and again, was God help me. God help me." Goeglein knew immediately it would be the worst day of his life, with a trial he had never faced, a cross perhaps too heavy to bear.

The journalist asked Goeglein if he had plagiarized part of a recent column he had written for his hometown newspaper. Was it true? Goeglein, who strived to live his life as a man of character and integrity, literally writing and lecturing on virtue, answered honestly. It was indeed true. And it was worse than she knew, as she and the whole world would soon learn. Goeglein writes: "When I sent that email reply, acknowledging what I had done, in all my guilt and shame, I knew events of that day would move rapidly toward my resignation from the White House and service to a president I loved and respected." He continues: "Every one of the principles I held and espoused, every one of the values I was raised by, truth in all things, character above intellect, unquestioned integrity before God and man alike, every mentor who ever invested part of his or her life in me, I had violated and violated completely. My hypocrisy was transparent, and I was guilty as charged. What a prideful fool I was, and it was all my fault without excuse or exception." Pride cometh before the fall, and now Tim Goeglein fell hard.

Immediately, his resignation followed, as did the scandalous front pages, the ridicule, the embarrassment, the political damage to the Bush White House, the ordeal of facing friends and family. Having been taken by pride and the temptation of the dark side, Goeglein allowed himself one solace: He vowed to never again darken the doorstep of the West Wing.

Ah, but he would have no choice: A few days later, after a weekend from hell, Goeglein got a call from George W. Bush's chief of staff. The president wanted to talk to him. Goeglein steadied himself for a well-deserved trip to the woodshed. Here would be a fitting final punishment, a deserving dénouement to this sordid episode, a justly lasting, ugly memory to soil his seven years of otherwise solid service. George Bush would surely tell him that he had been a grave disappointment.

Tim Goeglein walked alone toward the open Oval Office door. He heard the president's voice: "Timmy, is that you? Please come in." It was just Goeglein and Bush, no one else. Goeglein quickly tried to say the first words, "Mr. President, I owe you a . . . ." Bush stopped him: "Tim, I want you to know I forgive you." "But Mr. President," interjected Goeglein, "I owe you . . . ." "Tim," Bush interrupted, "I have known mercy and grace in my own life, and I am offering it to you now. You are forgiven." Goeglein persisted: "But Mr. President, you should have taken me by the lapels and tossed me into Pennsylvania Avenue. I embarrassed you and the team; I am so sorry."

George W. Bush wasn't much interested. He finished: "Tim, you are forgiven, and mercy is real. Now we can talk about this, or we can spend some time together talking about the last seven years." The president escorted Goeglein over to the two couches in the Oval Office. "No, sit here," he told Goeglein, gesturing to the chair of honor in front of the fireplace, reserved for distinguished guests. They talked about their families. Here, of course, was an incredibly busy man, in the final stretch of his presidency, overwhelmed with so many domestic and international issues, absorbing so many arrows and indignities, and, yet, he gave this staffer his heart and time.

The meeting came to an end. As the president escorted Tim Goeglein to the door, Goeglein was certain he would never see him again, a bittersweet moment, and Goeglein choked up. But then George Bush offered yet something more: "Tim," he said, "I would like you to bring Jenny and your two sons here to the Oval Office so I can tell them what a great husband and father you are." Goeglein was stunned. Two days later, he heard from the White House scheduler. That meeting, too, took place and was likewise unforgettable.

To learn more, one must read Goeglein's book. And for such reasons alone, the book is worth the time and price. In fact, if for nothing more than a glimpse into this thoroughly and honestly human portrait of George W. Bush, this book is worth it. No matter what the left's demonization, George W. Bush was a decent man, and this book is a testimony to that.

As I researched George W. Bush for my own book on him a few years ago, I found a speech he gave back in Texas in December 1999. He talked then about how one behaves when no one else is watching. Character is what happens when no one else is watching. Here, in this moment with Tim Goeglein, no one else was watching, and character is what happened. We know it only through Goeglein's words.

But we can also gain much more on George W. Bush from Goeglein's memoir. There is plenty of material to both feed the inner wonk and add to the historical record on the Bush years. Goeglein's perspective contains four core elements of keen interest.

First, there are the policy aspects of the book, which tend to gravitate around areas that Goeglein specialized in via his outreach to faith-based groups. These range from the president's crucial early decision on embryonic stem-cell research, which came shortly before September 11, to matters like gay marriage, the family, and Supreme Court picks, among others. Second, Goeglein offers an accounting of the two elections that narrowly got Bush into the White House, including a chapter on the Florida recount and the values voters critical to Bush's electoral success. Third, this book contains something of a manifesto on conservatism, badly needed as conservatives seek to support a Republican presidential candidate in 2012. Goeglein's understanding of conservatism is a thread that runs naturally from chapter to chapter. He effortlessly synthesizes the thinking of great conservative minds from Russell Kirk to Edmund Burke, from Newman to Chesterton, from Whittaker Chambers to Richard
Weaver to Evelyn Waugh, from Tocqueville to T. S. Eliot. His synthesis is grounded in the American Founders, their principles, and a careful appreciation of the interrelationship between faith and freedom. Goeglein also readably integrates the intriguing personal relationships he forged with conservative founders like William F. Buckley Jr. and Russell Kirk toward the end of their lives. Really, Goeglein's insights into conservatism provide a compelling cohesion to his narrative, one that teaches the reader a philosophy for politics and life. His own story and experiences (and struggles) are a vindication of the ordered liberty that Kirk advised for all Americans and their country. Finally, Goeglein gives valuable insights into George W. Bush's war decisions, from Afghanistan to Iraq.

When you think about it, it is astonishing that this former governor of Texas, who had hoped to shape a presidency around a largely domestic agenda rooted in what he called compassionate conservatism, found himself thrust into foreign policy amid some of the darkest days in the history of the republic. That happened, of course, on September 11, 2001, as he was reading a book to students in a Florida public school. September 11, Bush often said, "changed everything."

Goeglein presents this episode much in the same way that it presented itself in the life of George W. Bush. Following a chapter on faith and compassion, Goeglein abruptly talks about driving to work in Washington the morning of September 11 under the clearest of blue skies, the most beautiful day of the year. He, too, like the president, began his day routinely. But then lightning struck. Goeglein likewise tried to rapidly assemble and piece together what was happening. Like his president, he was thunderstruck.

Given his faith-based role, Goeglein was the man who organized the inspiring National Cathedral service shortly after the attacks, which included Billy Graham, the Clintons, the Gores, and all of official Washington. He secured the participation of a frail Graham, but had to figure out how to get him to town. He worked with the frazzled, under-the-gun FAA to literally clear the skies for the elderly church statesman to fly in. The combination of Graham's sermon and Bush's speech, wrote Goeglein in a Russell Kirk-like thought, had seemed to put "Jerusalem and Athens . . . in perfect equipoise" that morning.

Not quite so balanced, however, was the world that now unraveled outside the walls. Goeglein writes of the immediate sense among the president's staff of how much things had changed in the ten months since his inaugural. It was truly a new world, instantly transformed from relative peace and prosperity in January to the near-constant stories of horror now streaming in to the White House. The president's daily schedule was now driven almost exclusively by 9/11, writes Goeglein, even as domestic concerns continued on one track. Over the next two years, and especially in an intense period from late 2001 to early 2003, the White House grappled with interpreting the new world and its challenges and, most acutely, how to respond and make its case. The clash of civilizations was intensifying, terms like jihad were now part of
everyday vernacular, and the president's team wondered how much of its response would be, or would appear to be, "jaw-jaw" (meaning diplomacy) or "war-war." Of course, war-war meant going first into Afghanistan and then Iraq. If 9/11 had changed George W. Bush's presidency, the decision to go to war in Iraq changed him physically. Goeglein recalls the president's Oval Office announcement on March 19, 2003: As he declared the United States would invade Iraq, everyone who worked for him saw instantly the gravity of his tone and mien physically transform his appearance; both his face and his body language visibly changed. Bush, notes Goeglein, was a reluctant war president. Nonetheless, this was George W. Bush's hour for choosing.

Goeglein does not brush aside the downturns, from the absence of WMD stockpiles to the U.S. death toll prior to the surge, but he wants history to appreciate Bush's leadership qualities in the face of not only very difficult decisions but extraordinarily harsh and unrelenting criticism. He saw in Bush the traits of a great leader: resoluteness and steadfastness instead of indecision; independence of analysis and thought; prudence instead of reaction; and what the great Irish statesman Edmund Burke called "moral imagination," which is the ability to see your way forward informed by an ethical view not tarnished by political considerations.

We can certainly disagree with Bush's individual choices within the war and even the overall decision to go to war, but when it comes to Bush's singular focus and leadership, it is difficult to disagree with anything Goeglein says. Bush was extraordinarily decisive and resolute and, yes, apolitical. As to the latter, the worst assessment of Bush's thinking during the entire war period remains Ted Kennedy's utterly absurd statement in 2003, when the Massachusetts senator claimed that Bush pursued war in Iraq for political purposes: "This was made up in Texas," growled Kennedy, "announced . . . to the Republican leadership that war was going to take place and was going to be good politically."

In fact, at the moment Bush decided to pursue a highly risky path to war, he was still surfing an unprecedented wave of popularity. Political scientists speak of the "rally-round-the-flag" phenomenon, a boost that presidents receive during national tragedy. Typically, this lift lasts a few weeks. Yet Bush's post-September 11 jump lasted over a year, the longest rally-round-the-flag peak ever recorded. And he gave it up to go to war. He sacrificed political fortune for what he thought was right. He never regained that post-September 11 upsurge. Moreover, if body bags piled up, he could easily have been a one-term president. Kennedy's statement was not only mean and unfair but irrational and just plain dumb, and sadly all too common among the president's wild detractors.

For his part, George W. Bush did not respond to this kind of nonsensical, outrageous criticism, even when many wanted him to. That, too, was part of Bush's unique leadership. On that, Goeglein explains: "Even when his opponents were at their bitterest, he refused to lower himself to their level."

The most constant refrain I heard from the president's supporters is that he needed to hit back, to get in the proverbial sandbox and throw some verbal punches. Harry Reid said "this war is lost." It was a criticism utterly mild compared to the stunning demagoguery that Goeglein and the White House staff gauged daily and even hourly: Bush was a fascist, an American Hitler; the prison at Gitmo was like a Soviet gulag; "Bush lied, kids died." On and on and on.

The nature of Goeglein's job, which was to reach out to the president's core conservative and faith-based constituency and share with them the president's war narrative, was such that he dealt with war criticisms incessantly. He knew what the president's critics and supporters alike were thinking. He knew the supporters wanted the faith-based president to, in essence, quit turning the other cheek, to the point where Bush often seemed a doormat for his detractors. The president's advocates were tired of watching him curl up into the fetal position as his tormentors kicked away. They wanted him to kick back, or at least respond effectively. Bush did not do that. I still wish he would have. Nonetheless, even then, as Goeglein observes, Bush was not given to despair or desolation, even in the most desperate, dispiriting days of the occupation of Iraq. Why was that? Goeglein answers that question as well:
I came to see, because he talked about it frequently, that the president believed this war was between good and evil, between liberty on one hand and tyranny on the other. He often framed his arguments in terms of justice, to the consternation of his critics. . . . The president believed in good and evil, in right and wrong, and was comfortable putting things in that perspective. There was no values-free foreign or domestic policy in the Bush White House, and he saw the war in those terms. The enemies of freedom were evil, and the defenders of liberty were good.

Bush withstood the smears on his personal reputation and character. As Goeglein notes, Bush remained comfortable in his own skin and did not need nor seek validation from others. Goeglein is right: Among the political class, that lack of a need for validation "alone makes [Bush] unique." So much for the critics. But how will history remember the president's performance? Goeglein is confident that Bush will be confirmed in his major decisions by history, even though "I heard him say on multiple occasions he did not give much thought to his legacy." Will that confirmation occur? Time will tell. And if it does, Bush, as he himself has indeed said, may not live to see the results. The legacy may be one he never witnesses in this lifetime.

Tim Goeglein's memoir is about much more than George W. Bush and 9/11 and wars in the Middle East. It is somewhat of a disservice here to focus mostly on those elements, although such will be the chief historical contributions of this book. But they are far from the only contributions. A major bonus to this memoir is Goeglein's informed understanding of the conservative movement and what it means to be a conservative.

Beyond that still, and at a deeper level, this is a stirring tale of a considerable fall from grace. That fall began its path of redemption through the human grace of a gracious president that Tim Goeglein both served and, in a bad lapse of judgment, also disserved. The human grace of Bush, rooted in faith, tapped into a divine grace that began the process of healing for Goeglein. Bush himself knew that grace from his own personal weaknesses. He knew how and where to access it. Tim Goeglein's White House account has a redeeming element not found in the vast majority of conventional White House memoirs. We learn much about the autobiographical subject but also, almost uniquely, learn something even better about the presidential subject at the heart of the story.

Being Bismarck

Jonathan Steinberg. Bismarck: A Life. Oxford University Press. 592 Pages. $34.95.

By Henrik Bering

Otto von Bismarck may have had his share of shortcomings, but a lack of ambition wasn't one of them. Among those who had a chance to observe the Prussian statesman up close was Benjamin Disraeli, who as leader of the Tory opposition first met Bismarck at the Russian ambassador's residence in London in the summer of 1862. On this occasion, Bismarck, on the verge of assuming power, spelled out his plans for Prussian greatness under his leadership:

I shall soon be compelled to undertake the conduct of the Prussian government. My first care will be to reorganize the army, with or without the help of the Landtag [the legislature] . . . As soon as the army shall have been brought into such a condition as to inspire respect, I shall seize the first best pretext to declare war against Austria, dissolve the German Diet, subdue the minor states, and give national unity to Germany under Prussian leadership. I have come here to say this to the Queen's ministers.
For clarity of intent, this is hard to beat. Afterwards Disraeli warned the Austrian envoy: "Take care of that man. He means what he says." Bismarck's indiscretion was also legendary, and he was perfectly willing to gripe publicly about his complicated relationship with his master, Emperor William I. Sixteen years later Disraeli, now British prime minister, attended a private dinner at the Bismarck residence during the Congress of Berlin and afterwards recorded his impressions in his report to Queen Victoria:
I sat at the right hand of P. Bismarck, and never caring to eat in public, I could listen to his Rabelaisian monologues: endless revelations of things he ought not mention. He impressed me never to trust princes or courtiers; that his illness was not as people supposed brought on by the French war, but by the horrible conduct of his sovereign, etc. etc.

Henrik Bering is a writer and critic.

In the report, Disraeli also noted the discrepancy between Bismarck's bulk and his well-bred voice: "The contrast between his voice, which is sweet and gentle, and his ogre-like form, is striking." Disraeli, the only contemporary whose intellect matched the Prussian's, is one of many voices presented in Jonathan Steinberg's splendid biography Bismarck: A Life. As the above passages testify, one of the most fascinating aspects of Bismarck's character is what Steinberg calls his "brutal, disarming honesty," which comes mixed with the wiles of a confidence man. Bismarck transformed his world more completely than anybody during the 19th century with the exception of Napoleon, writes Steinberg, and he did this purely through the strength of his personality, which cowed everyone around him. He never had sovereign power, but he had a kind of "sovereign self." The aim of his book is to demonstrate how this dominance played out in practice, to present him as he was seen through the eyes of others and as he emerges from his own, often very lively, letters.

As the author notes, for decades after World War II conservative German historians tended to portray Bismarck as embodying the essence of a visionary and responsible statesmanship, as opposed to Hitler's rashness. At the end of the 20th century a truer, more sinister version starts appearing, stressing the irrational and violent aspects of Bismarck's character. His contemporaries saw something of the Evil One in him. Steinberg quotes the British ambassador Odo Russell, who wrote that "the demonic is stronger in him than in any man I know."

If not actually born unpleasant, Bismarck soon got the hang of it: His cold, calculating nature comes through already in his youth, which Steinberg recreates in exquisite detail. One of his closest friends when studying at the University of Göttingen was a young Bostonian, John Lothrop Motley, who later became a diplomat and in whose novel Morton's Hope Bismarck appears under the name of von Rabenmarck:

His dress was in the extreme of the then Göttingen fashion. He wore a chaotic coat without collar or buttons, and as destitute of color as of shape. Enormously wide trousers and boots with iron heels and portentous spurs . . . A faint attempt at moustachios, of an indefinite color, completed the equipment of his face, and a huge saber strapped around his waist, that of his habiliment.

A few days later, they meet again in the street, where von Rabenmarck in short order challenges three students to duels on Mensurschläger, the special sword used for manly scarification, and forces a fourth to jump over his stick. But once they have returned to his rooms, a different side is revealed: "There," said von Rabenmarck, entering the room and unbuckling his belt, and throwing the pistols and Schläger on the floor. "I can leave my buffoonery for a while and be reasonable." The conversation then turns to how he got admitted to the dueling society, the hardest of the student societies to gain admittance to: "I suppose you made friends of the president and the senior as you call him, and other magnates of the club," said I. To which von Rabenmarck responds:
No, I insulted them all publicly, and in the grossest possible manner . . . and after I had cut off the senior's nose, sliced off the con-senior's upper lip, moustachios and all, besides bestowing less severe marks of affection on the others, the whole club in admiration of my prowess and desiring to secure the services of so valorous a combatant voted me in by acclamation . . . I intend to lead my companions here, as I intend to lead them in after-life. You see I am a very rational sort of person now. You would hardly take me for the crazy mountebank you met in the street half an hour ago.

Clearly, there was method to Bismarck's madness. Later in life, his bullying manner reasserted itself during negotiations with the Austrians, where we find him playing cards with an Austrian diplomat and bent on scaring the poor man with the aggressiveness of his play. Bismarck's wild ways earned him the sobriquet of "the mad Junker." Two friends once visited him at Kneiphof, one of the family estates in Pomerania. After a night of heavy drinking, they all agreed to rise early in the morning. His guests had second thoughts and blocked their bedroom door with a chest of drawers. The next morning at 6:30 Bismarck knocked on their doors. No response. He then called from the courtyard. Still no response. Whereupon he fired two pistol shots through the window. A handkerchief on a stick immediately appeared in the window as a sign of surrender.

But Bismarck was not content with a life as the local Junker landowner, an existence the dullness of which he had once mockingly described in a letter, imagining himself as ending up "a well-fed Landwehr militia officer with a moustache, who curses and swears a justifiable hatred of Frenchmen and Jews until the earth trembles, and beats his dogs and his servants in the most brutal fashion, even if he is tyrannized by his wife." Bismarck's frustrated ambition at this time Steinberg compares to a massive engine with a steam boiler at high pressure and the wheels locked by cast-iron brakes.

Prussia's landed aristocracy formed a tightly knit elite where everybody knew each other. The problem facing Bismarck's class was how to retain their power and privileges at a time of profound change, as an emerging modern state with a growing middle and working class threatened their way of life. "They hated the free market, free peasants, free movement of capital and labor, free thought, Jews, stock markets, banks, cities and a free press," writes Steinberg. In 1848, William I's brother and predecessor, King Frederick William IV, had in their view humiliated himself by caving in to public pressure and granting a slightly less oppressive constitution, though the king still retained control of the army and the civil service. Conservatives were appalled, as one does not negotiate with revolution. Their bible was Burke's Reflections on the Revolution in France, providing them with the arguments for Junker rule from above.

When taking a seat in the Diet the year before, Bismarck's approach was similar to the one he had practiced in the dueling societies: He linked up with the Pietists, the most reactionary Protestant element of the Junker class. Not that Christian notions of charity or forgiveness meant much to him. As Steinberg notes, his beliefs could be discarded as easily as his extravagant outfit on that day in Göttingen with Motley. But the Pietists provided a useful platform from which to launch his political career. His parliamentary debut in May 1847 Steinberg sees as typical of his subsequent speeches in the Landtag and in the Reichstag, displaying "complete contempt for the members of these bodies, dramatic gestures, violent ideas, couched in sparkling prose, but delivered in easy conversational tones."

The royal family were well aware of his brilliance, but also wary of it. During the turbulence of 1848, King Frederick William IV had made a note: "Bismarck: To be used only when the bayonets rule without limit." Instead, the King appointed Bismarck ambassador to the German Confederation, consisting of 39 states, whose General Assembly, representing the sovereigns, met in Frankfurt, and where Austria at this point was calling the shots. And when William took over as prince regent in 1857 after his brother's stroke, he packed Bismarck off as
June & July 2012 107

envoy to the court of St. Petersburg, where he served for four years. Interestingly, when setting off for Russia, he was only reluctantly granted permission by the military authorities to wear a major's epaulettes. The man who was later to appear wearing a fearsome Pickelhaube had never served in the regular army, only in the reserves, and the army agreed only after he argued that nobody would take him seriously without a proper military rank.

But Bismarck dreamt of bigger things. In his famous 1857 letter to Leopold von Gerlach, his mentor among the Pietists, he outlines his political thinking. What has become known as realpolitik is a policy devoid of principle or moral scruple, based solely on a practical pursuit of the national interest. Prussia should be free to use any means available and make alliances or deals with whoever serves her interests of the moment. He even contemplates a temporary alliance with the archenemy, France, to scare the German princes into submission. The idea is to retain maximum flexibility. As he put it in a follow-up letter to a shocked von Gerlach, "One cannot play chess if sixteen of the 64 squares are forbidden from the beginning."

In every great career, there is usually a person who provides the final, vital assistance. In Bismarck's case it was the minister of war, Albrecht von Roon, who paved the way for his elevation to combined prime minister and foreign minister after parliament had spent the summer of 1862 deadlocked. Two incidents almost derailed Bismarck's career before it got started. On going to meet King William for his appointment, he learned that the king was strongly considering abdicating, as the Landtag had rejected his army reform; Bismarck managed to talk him out of it. The other incident was Bismarck's first speech as prime minister, his famous "blood and iron" speech, delivered in the budget committee of the Landtag, in which he declared that Prussia's borders were untenable and would have to be settled "not by speeches and majority decisions . . . but by blood and iron." The speech caused an uproar among liberals, who saw it as the first step towards a royal military dictatorship. It could have been his last speech, as the queen was keen on getting him fired for having acted irresponsibly. Resolutely, Bismarck stopped the king's train on its way to Berlin and persuaded his majesty to let him stay on. As Steinberg notes, if William had abdicated in 1862 or if Bismarck had been fired after his blood and iron speech, his career would have been over there and then, and German history would have been very different.

But the blood and iron speech merely stated in a public forum what Bismarck had said in smaller settings to Disraeli and others: His goal was to use foreign war to forge internal cohesion and unleash the forces of German nationalism, which would undermine the local rulers in the other German states. First, he picked a fight with the Danes over the duchies of Schleswig-Holstein, an issue about the origins of which Lord Palmerston famously quipped that only three people ever understood it: "One is dead, the other has gone mad, and I have forgotten," but which Steinberg manages to explain in lucid fashion. This gambit was risky because by treaty the British, Russians, and French had a right to intervene, but their attention was elsewhere. Then, as outlined to Disraeli, he turned a disagreement with Austria over the occupation of the duchies into a cause for war, resulting in the Austro-Prussian War of 1866: Thanks to Prussia's superior ability to concentrate its forces, the Austrians were beaten in six weeks and their hegemony ended. But Bismarck was careful not to make the peace too severe, as he needed Austria as a subservient ally later on. Thus there were no annexations and no victory parade, though William had wanted one. Afterwards Bismarck marveled at how "brilliant military victories make the best basis for diplomatic arts. Everything went as if oiled."

Bismarck's final move was to engineer a conflict with the French while making them look like the aggressors. In this he was greatly aided by the folly of Louis Napoleon, without which, Steinberg notes, Bismarck could never have united Germany. In 1851 Louis Napoleon had staged a bloodless coup and was now emperor, with heady visions of emulating his famous uncle. Napoleon might bear a famous name, but he had inherited none of his uncle's military genius, and his army was a meal-ticket affair, stuck in its old routines. In the war, Napoleon suffered the humiliation of being taken prisoner at Sedan, but unexpectedly, the French decided to fight on in a guerrilla-type campaign. Bismarck again needed a quick end to the war so as to foreclose the possibility of other great-power intervention. Ruthlessly, he suggested bombarding Paris. Crown Prince Frederick, as commander of the Third Army, argued that killing civilians was dishonorable, but Bismarck prevailed, and thus saddled the Germans with a reputation for Schrecklichkeit (beastliness). The bombardment of Paris and the annexation of Alsace-Lorraine, which Bismarck actually warned against, guaranteed that the French would be itching for a rematch.

Having secured the agreement of the southern German states to unify, Bismarck reached the high point of his career when William was proclaimed emperor in the Hall of Mirrors at Versailles on January 18, 1871. As Friedrich von Holstein, a foreign ministry official, recalls, Bismarck was furious with Pastor Rogge, who as the theme for his sermon on the festive occasion had chosen "Come hither, ye princes, and be chastised," which does indeed strike one as slightly ironic.

Despite all his successes, Bismarck's hold on power was precarious. Lacking a solid parliamentary power base of his own (support for his Reich party consistently stayed below ten percent, only once reaching 13.6 percent), he was totally dependent on William I, his rather decent but unforceful master. "Bismarck's was a public of one," writes Steinberg. But William could dismiss him at any point, and Bismarck knew that the empress was against him, as were Crown Prince Frederick and his wife, Princess Victoria, both confirmed liberals. This meant that he had to win all his battles with the emperor, bending him to his will, and to prove his indispensability by constantly engineering crises. This was exhausting for both. Said William: "It is hard to be Kaiser under Bismarck."

With Bismarck's political genius went some serious character flaws. As Steinberg notes, in medieval terms he displayed the two deadly sins of gluttony and wrath. His appetite and his drinking capacity were legendary: A menu from January 1878 features oysters and caviar, followed by venison soup, followed by trout, followed by morel mushrooms and smoked breast of goose, followed by wild boar in Cumberland sauce, saddle of venison, apple fritters, cheese and bread, marzipan, chocolate, and apples. "Here we eat until the walls burst," as his personal assistant Christoph Tiedemann noted. Gorging is not uncommon among people who work under severe stress.

As concerns his wrath, Steinberg details Bismarck's countless bouts of anger and hypochondria, which increased the more successful he became. Mostly, these episodes were triggered by domestic politics, which was messier than foreign policy. As Steinberg notes, not even Bismarck could run a modern state, and he would allow nobody to share the job with him. He bitterly complained to his brother over those who kept knocking at his door with questions and bills, such "that I could bite the table." He constantly threatened to resign, and for long periods withdrew to his estate, where he would sit sulking in the Prussian mist with his calf-sized dogs. Naturally, he was the boss from hell: He was incapable of admitting mistakes, blaming them instead on his underlings, whom he took pleasure in humiliating. Yet, notes Steinberg, his staff worshipped him. Tiedemann provides the reason why a sensible man like himself would put up with such histrionics: "There is something great to live one's life in and through so great a man, and be absorbed by his thoughts, plans, and decisions, in a certain sense to disappear into his personality."

At certain points, Bismarck was close to the edge: During the siege of Paris, a colonel wrote, "Bismarck begins really to be ready for the madhouse." In the Reichstag, we find him raving about his stenographers sabotaging him. And the book details how his doctor had to swaddle him in warm blankets and hold his hand. By putting him on a diet, the doctor saved his life. Under it all lay a deep pessimism, the roots of which Steinberg ascribes to the feeling that his class had no future.

Domestically, he practiced the same kind of tactics that he did in foreign policy. According to the 1871 constitution, there was universal male suffrage, but the voters lacked essential democratic rights and were without influence on the crucial areas of power, which remained under Junker control: "The new Germany retained all the worst features of Prussian semi-absolutism and placed them in the hands of Otto von Bismarck," writes Steinberg. To preserve royal power, Bismarck busied himself with playing off different groups against each other: town versus country, liberals versus conservatives, free-marketers versus protectionists. His first target after unification was the Catholics, who, after the addition of the southern states, he thought would become too influential, and against whom he launched the so-called Kulturkampf, or war of culture, imprisoning or exiling bishops and priests until he realized he needed them against the socialists, whereupon he called off the persecution.

At this time, Germany also experienced a wave of anti-Semitism, which Steinberg sees as representing a revulsion of a deeply conservative society against liberalism, with Richard Wagner as the first prophet of modern anti-Semitism. As Steinberg notes, while Bismarck didn't create anti-Semitism and was himself able to make certain exceptions when he found people useful, he shared all the standard prejudices of his class and did nothing to distance himself from them.

Tensions in the new Reich were exacerbated by economic woes. After years of boom, helped by French war reparations, came the crash of 1873, which was followed by a long depression. Seeking common ground between the Junkers and the large artisan class in a shared hatred of free markets, capitalism, and free mobility, Bismarck responded by introducing a number of protectionist measures. Bismarck's main concern was the threat represented by the urban workers. Preferring to act preemptively by making changes from the top, rather than face increased pressure from the bottom, he introduced a state system of social security involving accident, sickness, old age, and disability insurance, in the belief that this would stem the calls for change. Still, notes Steinberg, the Social Democrats kept gaining seats, thanks to universal suffrage, formerly his best weapon but now increasingly impossible to control. In fact, says Steinberg, had he known how easily the princes would have given in, he would never have introduced universal suffrage in the first place. His other mistake had been to assume that the masses backed the monarchy, as they had Napoleon III in France, overlooking the fact that while France still remained very much an agricultural nation, that no longer applied to Prussia. His reaction to an attempt on the kaiser's life in 1878, recorded by Tiedemann, reveals his mindset: He immediately saw the possibilities inherent in the situation: accuse the liberals of being unpatriotic, dissolve the Reichstag, and get a new constitution to replace that of 1871, one that did away with that pesky universal suffrage. Which was exactly what he had planned to do in 1890, had events not overtaken him.

In 1888, the so-called Year of the Three Kaisers, the Reich had three emperors within 100 days: William I died at the age of almost 91 and was followed by his son Frederick III, who had represented England and liberalism but who was terminally ill with cancer. His death paved the way for the 29-year-old Kaiser William II, an immature and inexperienced youth who liked strutting around in fancy uniforms and was addressed as "darling" by his homosexual coterie. The new kaiser, whom Bismarck had referred to contemptuously as "that silly boy," did not want to stand in his prime minister's shadow, so he fired him in March 1890: He wanted to rule on his own, he wanted Germany to have a place in the sun, and he was completely out of his depth.

The stage was set for the cataclysm of 1914-18. According to Steinberg, Bismarck carries part of the blame for this catastrophe by having prevented Germany from entering on a more liberal course: He bequeathed to his successors an unstable structure of rule, writes Steinberg, preserving a grossly unbalanced parliamentary system "[which] continued to give the small class of Junker landlords a permanent veto on progress." The constitution, in which a strong chancellor bullies a weak king, was tailor-made to suit his own needs. The author cites British Foreign Secretary Sir Edward Grey's comparison of Germany to a huge rudderless battleship, and adds: "Bismarck arranged it that way; only he could steer it." Ultimately, it all boiled down to preserving his own hold on power. "The means were Olympian, the ends tawdry and pathetic," Steinberg concludes. And he sees a direct Junker link from Bismarck to Adolf Hitler in Hindenburg, the old field marshal and hero of the battle of Tannenberg who, as president of the Weimar Republic, handed Hitler Bismarck's old job. The Junkers thought they could manage Hitler.

Thus Steinberg's view of Bismarck appears harsher in tone than the one found in Christopher Clark's history of Prussia, Iron Kingdom, and in places one might have wished for a little more emphasis on the instances where Bismarck showed moderation: As noted above, he warned against annexing Alsace-Lorraine, for instance, but the military insisted. But like Clark, Steinberg notes that there was nothing inevitable about Germany's descent into darkness. "The great shame was that William I lived so long," almost reaching the age of 91, he writes. The crown prince was a man of liberal ideas, which could have allowed Germany to follow the parliamentary path of other European nations.

As for the concept of realpolitik, in the American context it is usually associated with Henry Kissinger, who was seen as a politician in the Bismarckian mold, accused by the right of being excessively pragmatist and accommodationist and by the left of being morally unscrupulous, propping up a variety of dictators. Significantly, Zbigniew Brzezinski, Jimmy Carter's national security adviser, entitled his memoirs Power and Principle to set himself apart from Kissinger. In Brzezinski's view, a balance needed to be struck between pragmatism and idealism, between a traditional concern with the national interest and a commitment to furthering American values of freedom and democracy. Unfortunately, Brzezinski's boss Jimmy Carter got the balance all wrong: There was a surfeit of principle and precious little power to back it up. Too morally fastidious, Carter failed to support the shah and instead got Khomeini. And whatever the goals, the crucial ingredient remains a big stick. As Bismarck rightly pointed out, diplomacy becomes so much easier when backed by force.

