
The Energy Future

David Pratt

June 2011

Contents
Civilization and energy
Energy and power
Fossil fuels
CO2 – friend or foe?
Renewable energy
Nuclear power [updated 1/14]
New energy [updated 3/12]
Sources

Civilization and energy

Fossil fuels have an energy density far higher than traditional, renewable energy sources, and
their large-scale use has resulted in the total energy consumption of human societies rising to
unprecedented levels. Vaclav Smil writes:

Traditional societies drew their food, feed, heat, and mechanical power from
sources that were almost immediate transformations of solar radiation (flowing water
and wind) or that harnessed it in the form of biomass and metabolic conversions
that took just a few months (crops harvested for food and fuel), a few years (draft
animals, human muscles, shrubs, young trees), or a few decades (mature trees) to
grow before becoming usable. In contrast, fossil fuels were formed through slow but
profound changes of accumulated biomass under pressure; except for young peat,
they range in age from 10⁶ to 10⁸ [1 to 100 million] years. A useful analogy is to see
traditional societies as relying on instantaneous or minimally delayed and constantly
replenished solar income. By contrast, the modern civilization is withdrawing
accumulated solar capital at rates that will exhaust it in a tiny fraction of the time
needed to create it. (2010a, 710-11)

Pre-agricultural societies consumed around 10 billion joules (10 gigajoules, GJ) of energy per
capita per year, roughly divided between food and phytomass (vegetation) for open fires. By the
late 19th century the figure had risen to about 100 GJ per capita in industrial England, nearly all
of it coming from coal. A century later, the major economies of the European Union, as well as
Japan, averaged around 170 GJ per capita, with coal, oil, and natural gas all contributing a
significant share. By 2005, the average annual consumption in the US had risen to more than
330 GJ per capita, with 39% coming from oil, 27% from natural gas, 23% from coal, and virtually
all the rest from hydro and nuclear power.

There are enormous inequalities in wealth distribution and energy use both between different
countries and within them. In 2000, the affluent countries, containing just 20% of the global
population, consumed about 70% of the commercial total primary energy supply (TPES).

The United States, with less than 5% of the world population, consumed about 27%
of the world’s commercial TPES in 2000, and G7 countries (the United States,
Japan, Germany, France, the UK, Italy, and Canada), whose population adds up to
just about 10% of the world’s total, claimed about 45%. (Smil, 2010a, 715)

The annual consumption of commercial energy in the poorest countries of sub-Saharan Africa
(Chad, Niger) is less than 0.5 GJ per capita. A third of all countries have an average energy
use of less than 10 GJ per capita.

With less than a sixth of all humanity enjoying the benefits of the high-energy
civilization, a third of it is now engaged in a frantic race to join that minority, and
more than half of the world’s population has yet to begin this ascent. The potential
need for more energy is thus enormous. (716)

High levels of affluence and consumerism do not automatically mean higher levels of individual
happiness and satisfaction with life. Smil says that pushing beyond 110 GJ per capita has not
brought many fundamental quality-of-life gains, while pushing beyond 200 GJ per capita has
largely been counterproductive. He writes: ‘the US falls behind Europe and Japan in a number
of important quality-of-life indicators, including much higher rates of obesity and homicide,
relatively even higher rates of incarceration, lower levels of scientific literacy and numeracy, and
less leisure time’ (2010a, 725). In high-energy societies, ‘a large part of TPES goes into short-
lived disposable junk and into dubious pleasures and thrills promoted by mindless advertising’
(Smil, 2008a, 387).

There is no possibility of an energy consumption of over 150 GJ per capita, currently enjoyed
by one-sixth of humanity, being extended to the rest of the world during the next few
generations. There are voices in the privileged West that oppose any significant industrial
development in the poorer nations on the grounds that it would be unsustainable and critically
damage the environment. Those who preach that message should perhaps set a good example
by switching off all their electrical appliances and gadgetry and withdrawing entirely from our
modern technological society.

About 1.2 billion people still live on less than $1 per day, and almost 3 billion on less than $2
per day. It is a fact of life that all nations want to pursue the path of economic growth and that
increasing the amount of electricity they generate would help raise many people out of poverty.
In the West, every material social advance in the 20th century depended on the proliferation of
inexpensive and reliable electricity (see rossmckitrick.weebly.com).

In sub-Saharan Africa the infant mortality rate is commonly over 100 (or even 150) deaths per
1000 live births. Infant mortality rates of less than 30 typically correspond to per capita energy
use of at least 30-40 GJ per year. Infant mortality rates of less than 20 are found only in
countries consuming at least 60 GJ per capita, and rates of less than 10 are found only in
countries using more than about 110 GJ. Female life expectancy of over 70 years typically
corresponds to per capita energy use of at least 45-50 GJ per year, while a female life
expectancy of over 75 requires about 60 GJ, and of over 80 about 110 GJ (Smil, 2008a, 346).
Electricity consumption and the Human Development Index (e-reports-ext.llnl.gov).
HDI takes account of life expectancy, literacy, education, and per capita GDP.

Energy poverty means insufficient access to affordable, reliable and safe energy services to
support economic and human development. 1.4 billion people lack access to electricity, and 2.7
billion rely on traditional biomass – such as wood, crop residues, and dung – for cooking and
heating. Household air pollution from the use of biomass in inefficient stoves causes over 1.45
million premature deaths per year (IEA, 2010, 7, 13). Greater access to liquid and gaseous
fuels and electricity would reduce poverty and improve human health. Even doubling the poor
world’s average per capita energy consumption to about 40 GJ per year would be sufficient to
guarantee a decent standard of living and quality of life.

This article explores the pros and cons of various conventional and alternative sources of
energy, and outlines likely near-term developments.

Energy and power

Basics
A force changes an object’s state of rest or motion. Force = mass x
acceleration. A force of 1 newton imparts an acceleration of 1 m/s² to
a mass of 1 kg. 1 N = 1 kg m/s²

Energy is the ability to do work. Energy = force x distance. 1 joule is
the energy expended (or work done) in applying a force of one newton
through a distance of one metre. 1 J = 1 N m

Power is the rate of using energy (or doing work). Power = energy /
time. A power of 1 watt is equal to an energy flow of 1 joule per
second. 1 W = 1 J/s

The amount of energy generated or consumed is measured in joules; power is the rate at which
energy is generated or consumed and is measured in watts. There are however many other
units of energy and power. For instance, 1 horsepower equals 746 watts. Energy consumption
is often expressed in kilowatt-hours (kWh; 1 kW = 1000 W). For instance, using a computer and
LCD monitor with a total power rating of 110 W for one hour consumes 0.11 kWh of electricity.
The total energy consumption of an average European is 125 kWh per day, while the average
American consumes about 250 kWh per day (MacKay, 2009, 104).
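
As a rough illustration of how these units fit together, here is a minimal sketch in Python (the 110 W appliance figure and the daily averages are from the text above; the script itself is only an illustration):

    # Minimal sketch: converting a power rating (watts) and a usage time (hours)
    # into energy in kilowatt-hours.

    def kwh(power_watts, hours):
        """Energy in kWh = power in kW x time in hours."""
        return (power_watts / 1000.0) * hours

    print(kwh(110, 1))    # 0.11 kWh -- the computer-plus-monitor example above
    print(kwh(110, 24))   # ~2.6 kWh if left running all day

    # For comparison, the daily totals quoted above (125 and 250 kWh per day)
    # correspond to a continuous draw of about 5.2 kW and 10.4 kW per person.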

Two important measures for comparing different sources of energy are energy density and
power density. Energy density refers to the amount of energy contained in a given unit volume,
area, or mass. Power density often refers to the amount of power that can be generated per
unit land or water area.

The energy density of wood is, at best, 17 million joules (megajoules) per kilogram (MJ/kg), for
good-quality bituminous coal it is 22-25 MJ/kg, and for refined oil products it is around 42
MJ/kg. That is why coal is preferred over wood, and oil over coal. Vaclav Smil (2010b, 18)
writes:

the more concentrated sources of energy give you many great advantages in terms
of their extraction, portability, transportation and storage costs, and conversion
options. If you want to pack the minimum volume of food for a mountain hike you
take a granola bar (17 kJ/g) not carrots (1.7 kJ/g). And if you want to fly across the
Atlantic you will not power gas turbines with hydrogen: the gas has a gravimetric
density greater than any other fuel (143 MJ/kg) but its volumetric density is a mere
0.01 MJ/L [megajoules per litre] while that of jet fuel (kerosene) is 33 MJ/L, 3,300
times higher.
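
The gravimetric/volumetric distinction in this quotation is easy to reproduce: volumetric density is simply gravimetric density multiplied by the fuel’s mass density. A minimal sketch (the mass densities are approximate reference values, not figures from the article):

    # Volumetric energy density (MJ/L) = gravimetric density (MJ/kg) x mass density (kg/L).
    # Mass densities below are rough, assumed values for illustration.

    fuels = {
        # name: (MJ per kg, kg per litre)
        "hydrogen gas (1 atm)": (143.0, 0.00009),
        "jet fuel (kerosene)":  (43.0,  0.80),
        "bituminous coal":      (24.0,  1.3),
        "air-dry wood":         (17.0,  0.6),
    }

    for name, (mj_per_kg, kg_per_l) in fuels.items():
        print(f"{name:22s} {mj_per_kg:6.1f} MJ/kg  {mj_per_kg * kg_per_l:7.3f} MJ/L")

    # Hydrogen comes out near 0.01 MJ/L and kerosene near 33 MJ/L --
    # the roughly 3,300-fold volumetric gap Smil refers to.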

Energy exists in different forms: chemical, thermal (heat), nuclear, electrical, mechanical,
radiant, etc. In any conversion of one form of energy into another, some energy is lost, and the
efficiency is therefore always less than 100%. Over time, technological advances have made
higher efficiencies possible.

Traditional hearths and fireplaces had efficiencies below 5%. Wood stoves were
usually less than 20% efficient. Coal stoves doubled that rate, and fuel-oil furnaces
brought it to nearly 50%. Efficiencies of natural-gas furnaces were initially below
60%, but by the 1990s there was a large selection of furnaces rated at about 95%.
(Smil, 2010a, 713)
Thomas Edison’s Jumbo dynamo (scienceservice.si.edu). Edison opened Pearl
Street Station – the first central power plant in the US – in Manhattan, New York, in
1882. His generators converted less than 2.5% of the heat energy in coal into
electricity. Some modern coal-fired power plants can convert nearly half of the coal’s
heat energy into electric power, and the electricity produced is 105 times cheaper
than that produced by Edison (Bryce, 2008, 54, 68).

An examination of the power density (expressed as energy flux per unit of horizontal surface) of
different energy sources starkly reveals the limitations of renewable energy sources compared
to fossil fuels. The estimated values for each source can vary by an order of magnitude or more
depending on the precise details and conditions of the facilities in question, and what is
included in the calculation, but the general message is clear.

Power source          Power density (W/m²)
Nuclear               up to 4000
Natural gas           200 - 2000
Coal                  100 - 1000
Solar photovoltaic    4 - 10
Wind                  0.5 - 1.5
Biomass               0.5 - 0.6

(Smil, 2010b, 2008a)

Focusing on the situation in the US, Robert Bryce (2008, 84, 86, 93) gives the following figures:

Power source            Power density (W/m²)
S Texas nuclear plant   56
Natural gas well        53
Marginal gas well       28
Marginal oil well       5.5 to 27
Solar photovoltaic      6.7
Wind turbines           1.2
Biomass power plant     0.4
Corn ethanol            0.05

What these figures mean is that renewable energy facilities would have to be spread over areas
ten to ten thousand times larger than today’s fossil fuel energy facilities to produce the same
amount of power. Although this is not an impossible feat, it poses many regulatory, technical
and logistical challenges, and it would take several decades to put such a system in place –
even if there were no local opposition. For as Robert Bryce (2008, 92) says:

Energy sources with high power densities have the least deleterious effect on open
space. They allow us to enjoy mountains, plains, and deserts without having views
obstructed or disturbed by spinning wind turbines, sprawling solar arrays, towering
transmission lines, or miles of monocultured crops. ...
Energy projects with small footprints are not only green, they reduce the potential
for NIMBY [not-in-my-backyard] objections.
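
The land-area implication of these power densities follows from simple arithmetic: the area needed scales inversely with power density (area = average power / power density). A minimal sketch using representative densities from the tables above:

    # Land area required to deliver 1 GW of average power at various power densities.

    TARGET_W = 1e9  # one gigawatt of average output

    densities_w_per_m2 = {
        "nuclear":            1000,
        "natural gas":         500,
        "coal":                300,
        "solar photovoltaic":    7,
        "wind":                  1.2,
        "biomass":               0.5,
    }

    for source, d in densities_w_per_m2.items():
        area_km2 = TARGET_W / d / 1e6      # m^2 -> km^2
        print(f"{source:20s} {area_km2:10.1f} km^2")

    # Nuclear needs ~1 km^2, wind ~830 km^2, biomass ~2000 km^2 --
    # part of the ten-to-ten-thousand-fold spread discussed above.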

Fossil fuels

Photosynthesis is the process whereby plants, algae and certain species of bacteria convert
carbon dioxide (from the atmosphere) and water (from the ground), with the help of sunlight,
into sugars (carbohydrates) and oxygen (released as waste). The photosynthetic conversion of
solar radiation into stored biomass energy has a very low efficiency – just 2% for the most
efficient plants in Europe (MacKay, 2009, 43). Although biomass itself has a low energy density,
fossil fuels such as coal, oil and natural gas are highly concentrated stores of photosynthetic
energy.

Due to the enormous amount of geologic energy invested in their formation, fossil
fuel deposits are an extraordinarily concentrated source of high-quality energy,
commonly extracted with power densities of 10² or 10³ W/m² of coal or hydrocarbon
fields. This means that very small land areas are needed to supply enormous
energy flows. In contrast, biomass energy production has densities well below 1
W/m² ... (Cleveland, 2011)

According to the prevailing biogenic theory, fossil fuels formed from the fossilized remains of
dead plants and animals by exposure to heat and pressure in the earth’s crust over millions of
years. Fossil fuels are considered nonrenewable resources because reserves are being
depleted much faster than new ones are being made. Based on proven reserves at the end of
2009 and production in that year, it is estimated that coal will last another 119 years, oil 46
years, and natural gas 64 years (BP, 2010). For over a century people have been forecasting
the imminent exhaustion of commercially extractable fossil fuel reserves, but all such
predictions have come to nothing because new reserves are constantly being discovered and
new ways are being found to access previously inaccessible resources. Nevertheless, coal and
hydrocarbons are finite resources and fossil-fuelled civilization cannot last for ever, so it makes
sense to look for alternatives. The weakest reason for doing so is the supposed urgent need to
reduce greenhouse gas emissions in order to prevent ‘catastrophic man-made global warming’
(see next section).

A very significant development is the ongoing discovery of very ancient and continental rocks in
the world oceans. These finds, along with various other lines of geological and geophysical
evidence, indicate that large areas of the present ocean floors were once continents, and
contradict the plate-tectonic theory that the ocean crust is nowhere older than 200 million years
and has an entirely different (basaltic) composition compared with continental crust (Sunken
continents; Vasiliev & Yano, 2007). Dong Choi (2007) writes:
The new picture – that continental ‘oceanic’ crust (or sunken continents) underlies
the Mesozoic-Cenozoic basins and basalts – is a great gift for the oil industry. They
now have positive scientific grounds for exploring deep-sea sedimentary basins.
Currently, hydrocarbons are produced in 1,800 m of water off Brazil and exploration
is progressing in much deeper waters worldwide ... In the coming 10 to 15 years,
basins with 3,000 to 4,000 m of water will become the most active area for
exploration and exploitation.

The controversial abiogenic theory also deserves a mention. It states that some petroleum may
originate from carbon-bearing fluids that migrate upward from the mantle, rather than from
ancient biomass, and that there is far more petroleum and natural gas on earth than commonly
thought. Its proponents cite the presence of methane on Saturn’s moon Titan and in the
atmospheres of Jupiter, Saturn, Uranus and Neptune as evidence of the formation of
hydrocarbons without biology. The theory has some laboratory data to support it (kth.se;
carnegiescience.edu; portal.acs.org; wnd.com).

In 2009 fossil fuels accounted for about 87.7% of the world’s primary energy consumption:
petroleum 34.8%, coal 29.3%, natural gas 23.7%. Hydroelectric accounted for 6.3%, nuclear
5.4%, and other sources (solar, tidal, geothermal, wind, wood, waste) less than 1%. World
energy consumption grew at an average of about 2.8% per year from 1999 to 2008 but fell by
1.1% in 2009 as a result of the global economic recession (BP, 2010). The reason for the
continued dominance of coal and hydrocarbons is that they can provide reliable power from
relatively small areas, at affordable prices and in the enormous quantities required.

Every source of energy production takes a toll on the environment and the aim should be to
minimize it. The combustion of fossil fuels releases air pollutants, such as nitrogen oxides
(NOx), sulphur dioxide, volatile organic compounds, and heavy metals. It also releases carbon
monoxide, which is highly toxic, and carbon dioxide, which is nontoxic but which in recent times
has been demonized as an evil, ‘polluting’ gas. In addition, fossil fuel burning generates
sulphuric, carbonic, and nitric acids, which fall to earth as acid rain. And it releases radioactive
materials, notably uranium and thorium.

One proposed way of reducing CO2 emissions is carbon capture and storage (or sequestration)
(CCS). The idea is to capture CO2 from, say, fossil fuel plants and store it so that it doesn’t
enter the atmosphere, e.g. in deep geological formations or deep ocean masses. CCS is
receiving billions in funding in the US and Europe, but it’s unlikely to work because the volumes
of CO2 are too large and the technical problems and costs are too big (Bryce, 2008, 162-5). If
10% of global annual CO2 emissions – 3 billion tons – were compressed to about 1000 pounds
per square inch, it would have the same volume as all the oil produced around the world in a
year. The equivalent of 41 supertankers of oil would have to be disposed of every day. Even if
enough suitable locations were found, the cost of handling the CO2 would be enormous. To
power the CO2 capture process, power plants would have to produce up to 28% more
electricity.
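
A rough order-of-magnitude check on that volume comparison, assuming CO2 is compressed to a liquid-like density of about 700 kg/m³ and taking a very large crude carrier to hold roughly 300,000 m³ (both assumed illustrative values, not figures from the article):

    # Rough sanity check of the CCS volume comparison; all inputs are approximate.

    co2_mass_kg     = 3e12      # 3 billion tons ~ 10% of annual emissions (from the text)
    co2_density     = 700.0     # kg/m^3, assumed for compressed CO2
    tanker_capacity = 3.0e5     # m^3, assumed for one very large crude carrier

    co2_volume_m3 = co2_mass_kg / co2_density
    print(f"compressed CO2 volume: {co2_volume_m3:.2e} m^3 per year")            # ~4e9 m^3
    print(f"tanker-loads per day:  {co2_volume_m3 / tanker_capacity / 365:.0f}")  # ~40

    # Annual world oil output (~30 billion barrels) is also of order 5e9 m^3,
    # consistent with the comparison in the text.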

Coal

Coal-fired plants emit mercury, lead, chromium and arsenic, which are very damaging if
ingested in sufficient quantities. Exposure to mercury, a neurotoxin, has been linked to higher
risks of autism, impaired cognition, and neurodegenerative disorders (e.g. Alzheimer’s
disease). In the US, coal-fired power plants emit an estimated 41-48 tons of mercury per year,
but this accounts for less than 0.5% of all the mercury in the air we breathe. For comparison:
US forest fires emit at least 44 tons per year, cremation of human remains discharges 26 tons,
Chinese power plants eject 400 tons, and volcanoes, subsea vents, geysers and other sources
spew out another 9000-10,000 tons per year (wmbriggs.com). The long-term effects of air
pollution from large-scale coal combustion are highly uncertain: estimates of the number of
premature deaths caused by emissions from a 1 gigawatt coal-fired power plant range from
0.07 to 400,000 (Smil, 2008a, 350).

The dense smog that coal burning once caused in western cities is now plaguing industrializing
countries, such as China, where 16 of the world’s 20 most polluted cities are located. It seems
that a country only begins to seriously tackle air pollution once it reaches a certain level of
prosperity. Many coal plants now ‘scrub’ the smoke coming out of their stacks to remove
sulphur and fly ash; the millions of tonnes of fly ash and sulphate-rich scrubber sludge used to
be landfilled, but nowadays a large proportion is put to various uses in agriculture and industry.

Coal mining techniques such as strip mining and mountaintop removal are cheaper than
underground mining but result in huge swaths of blighted landscape. More than 1 million acres
of Appalachian mountains and forest have been levelled in the US since the mid-1990s, with
the connivance of Congress (Bryce, 296).

Coal-fired power station at West Burton, Nottinghamshire, England
(en.wikipedia.org). Situated on a 1.7 km² site, it has an installed capacity of 2000
MW and provides electricity for around 2 million people. The grey/white stuff coming
out of the huge cooling towers is not smoke, and certainly not CO2 (which is
colourless), but steam/water vapour condensing in the air. The two chimneys in the
centre of the picture are emitting smoke and CO2.

Integrated gasification combined cycle (IGCC) technology turns coal into gas (syngas) and
removes impurities, resulting in lower emissions of sulphur dioxide, particulates (fine particles,
such as soot), and mercury. Excess heat from the primary combustion and generation passes
to a steam cycle, resulting in improved efficiency compared to conventional pulverized coal.
The main problem facing IGCC is its extremely high capital cost.

Despite all its negative characteristics, coal continues to be used on a vast scale for a simple
reason: cost. In the developing nations in particular, coal-fired power plants are often the most
affordable option for power generation, especially in countries with large coal reserves, like
China, India, and Indonesia. On an average day, the world consumes about 66.3 million barrels
of oil equivalent in the form of coal. Between 2007 and 2008, global coal use increased by
about 800 million barrels of oil equivalent; that increase alone is about 25 times greater than the
energy produced by all the solar panels and wind turbines in the US in 2008 (Bryce, 59-60).

Oil

Oil is commonly regarded as a dirty fuel that is polluting the air and water and destroying the
planet. Extracting, transporting, processing and burning oil can certainly have many harmful
effects on humans and the environment – through oil spills, air pollution, and accidents at
refineries, pipelines and drilling rigs, etc. It is, however, superior to coal in nearly every respect
– it has a higher energy density and power density, burns more cleanly, is easier to transport,
and its uses are virtually limitless (for instance, hydrocarbons are essential feedstocks for
plastics and industrial chemicals). For all its problems, oil provides unprecedented mobility,
comfort and convenience. It supplies the fuel for the two prime movers in the modern
industrialized world: the diesel engine and jet turbine, which came into widespread use in the
1950s and 60s. Global commerce depends on global transportation, and the latter depends
almost exclusively on oil. Oil’s share of the primary energy market has declined from 48% in
1973 to 35% in 2008, but the world will continue using it for a long time to come (Bryce, 207).

Jet engine: General Electric’s GE90-115B turbofan aircraft engine,
the most powerful gas turbine engine in the world.

Gas

Over the past few years the estimated recoverable natural gas resources worldwide have risen
sharply, partly due to a surge in new natural gas liquefaction capacity and to improved
technologies that can extract vast quantities of gas from shale deposits. New gas reserves are
being found even faster than new oil reserves. In 2009 the International Energy Agency (IEA,
2009, 49) estimated recoverable global gas resources at about 850 trillion cubic metres –
enough for 280 years at the current global rate of consumption (Bryce, 8). European countries
see the shale gas revolution as an opportunity to reduce their dependence on Russian gas.

Natural gas (methane) is cleaner than oil and coal. During combustion, it releases no
particulates, nor does it release significant quantities of serious pollutants such as sulphur
dioxide or nitrogen oxides. It emits about half as much CO2 as coal, which means that it is less
‘green’ in this respect, since CO2 is plant food and higher concentrations demonstrably green
the earth (see next section). Producing gas from coal beds, tight sands, and shale deposits
does, however, require large numbers of wells to be drilled fairly close together. And like the oil
industry, the gas industry has caused cases of groundwater contamination.

The best single-cycle gas turbines – which discharge their hot gas – can convert about 42% of
their fuel to electricity, whereas combined-cycle gas turbines use the turbines’ hot exhaust
gases to generate steam for a steam turbine, enabling them to convert as much as 60% –
making them the most efficient electricity generators. Thanks to their compactness, mobile gas
turbines generate electricity with power densities higher than 15 kW/m² and large (>100 MW)
stationary set-ups can easily deliver 4-5 kW/m² (Smil, 2010b, 8-10).
Pratt & Whitney’s 60 MW SwiftPac gas turbine with a footprint of 700 m². (masterresource.org)
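
The jump from about 42% to about 60% comes from chaining two heat engines: the steam cycle recovers work from the gas turbine’s exhaust. A minimal sketch of that arithmetic (the ~31% steam-cycle efficiency is an assumed illustrative value, not a figure from the article):

    # Combined-cycle efficiency from chaining a gas turbine with a steam cycle.
    # Overall = gas-turbine efficiency + (remaining exhaust heat) x steam-cycle efficiency.

    eta_gas_turbine = 0.42   # best single-cycle gas turbine (from the text)
    eta_steam_cycle = 0.31   # assumed efficiency of the bottoming steam cycle

    eta_combined = eta_gas_turbine + (1 - eta_gas_turbine) * eta_steam_cycle
    print(f"combined-cycle efficiency: ~{eta_combined:.0%}")   # ~60%, as quoted above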

The reliance on biomass in Asia, Latin America and Africa is a major cause of deforestation,
desertification and erosion. About 37% of the world’s population relies on fuels such as straw,
wood, dung or coal to cook their meals. These low-quality fuels, combined with inadequate
ventilation, often result in the living area being filled with noxious pollutants, including soot
particles, carbon monoxide, benzene, formaldehyde, and even dioxin. As mentioned earlier,
1.45 million people worldwide are dying premature deaths every year due to indoor air pollution.
The best solution to this problem is cleaner-burning, high-energy-density liquefied petroleum
gas (LPG), such as propane and butane, and also kerosene (paraffin).

Electric cars

To reduce hydrocarbon use, several governments are introducing incentives to promote the
sale of electric vehicles, which do not emit CO2 or tailpipe pollutants, though the electricity they
consume may of course have been produced from fossil fuels. At present, all-electric cars are
still hampered by limited range, slow recharge rates, and lack of recharging stations. Although
their refuelling costs are low, the high cost of their battery packs makes them significantly more
expensive than conventional internal combustion engine vehicles and hybrid electric vehicles
(which combine an internal combustion engine with electric propulsion). Ongoing
advancements in battery technology will make electric vehicles more viable. And further
improvements might come from using ultracapacitors for storing electricity.

Gasoline holds 80 times as many watt-hours per kilogram as a lithium-ion battery, and ethanol
holds more than 50 times as many (Bryce, 190). However, internal combustion engines are
fairly inefficient at converting on-board fuel energy to propulsion as most of the energy is
wasted as heat; they use only 15% of the fuel energy content to move the vehicle or to power
accessories, while diesel engines can reach an on-board efficiency of 20%. Electric vehicles
have an on-board efficiency of around 80%; they do not consume energy while at rest or
coasting, and regenerative braking can capture as much as one-fifth of the energy normally lost
during braking (en.wikipedia.org).
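
Putting those two figures together shows what matters in practice: useful energy delivered to the wheels per kilogram of on-board storage, i.e. specific energy multiplied by conversion efficiency. A minimal sketch (the absolute battery figure of ~150 Wh/kg is an assumed representative value; the 80x ratio and the 15%/80% efficiencies are from the text):

    # Gasoline vs lithium-ion battery: energy actually delivered to the wheels per kg.

    battery_wh_per_kg  = 150.0                     # assumed representative value
    gasoline_wh_per_kg = 80 * battery_wh_per_kg    # ~12,000 Wh/kg (~43 MJ/kg)

    useful_gasoline = gasoline_wh_per_kg * 0.15    # internal combustion efficiency
    useful_battery  = battery_wh_per_kg  * 0.80    # electric drivetrain efficiency

    print(f"useful energy, gasoline: {useful_gasoline:.0f} Wh/kg")   # ~1800
    print(f"useful energy, battery:  {useful_battery:.0f} Wh/kg")    # ~120
    print(f"effective ratio: ~{useful_gasoline / useful_battery:.0f}x")

    # The 80x raw gap shrinks to roughly 15x once conversion efficiency is included.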

A hybrid Toyota Prius (silver) has crashed into the back of an all-electric Tesla
Roadster (red), pushing it under a Volkswagen Touareg SUV which had been in
front of it, and which in this particular tussle came out on top. No major injuries were
reported. (wired.com)

Another proposal is hydrogen cars. David MacKay (129-31) describes hydrogen as a ‘hyped up
bandwagon’. Hydrogen, he says, ‘is not a miraculous source of energy; it’s just an energy
carrier, like a rechargeable battery’. With today’s technology, converting energy to and from
hydrogen can only be done inefficiently, and hydrogen is a less convenient energy storage
medium than most liquid fuels because of its bulk. BMW’s Hydrogen 7 car requires 220% more
energy than an average European car. The all-electric Tesla Roadster is 10 times more energy-
efficient than the Hydrogen 7.

Diesel and gasoline vehicles are not overly reliant on elements such as neodymium and
lanthanum, whereas these are critical ingredients in making hybrid and electric vehicles (Bryce,
199). China has a virtual monopoly on the world’s supply of neodymium and other lanthanides
(or ‘rare earths’), which have special magnetic properties. As well as being used in batteries,
these elements are essential commodities in solar panels, wind turbines, and computers. About
90% of the world’s lithium, an essential element in high-capacity batteries, comes from just
three countries: Argentina, Chile and China.
The US imports its oil and oil products from 90 different countries, whereas it is dependent on
only a tiny handful for its lanthanides and lithium – something those calling for ‘energy
independence’ tend to ignore. As Bryce says, ‘the United States, like every other country, will
continue to depend on the global marketplace to obtain the commodities it needs’ (137). He
points out that the US already produces about 74% of the primary energy it consumes – ‘a fact
seldom mentioned by the many neoconservatives and energy posers who have been sounding
the alarm about the evils of foreign energy’ (78).

CO2 – friend or foe?

There is a lot of talk nowadays about ‘combating climate change’ – an absurd expression which
makes it sound like humans can stop the climate from changing. To achieve this they would
need to control the sun, the earth’s orbit, the earth’s interior, its oceans and their currents, the
biosphere, and key processes taking place in the atmosphere. It’s remarkable that ‘climate
change’ has come to be virtually synonymous with ‘man-made climate change’ – which in turn
is usually understood to mean climate change caused by greenhouse gas emissions from fossil
fuel combustion, though some researchers believe that land-use changes (urbanization,
deforestation, etc.) and pollutants such as black carbon (soot), mainly emitted by developing
nations, have a greater impact on climate than greenhouse gas emissions.

The mainstream view, as articulated by the UN’s climate panel (the IPCC), is that ‘most’ of the
warming over the past 50 years is ‘very likely’ the result of anthropogenic greenhouse gas
emissions. And that unless drastic measures are taken to slash emissions and switch to
renewable sources of energy, the result will be dangerous, runaway warming. As already noted,
there are good reasons to gradually reduce our dependence on carbon-based fuels, but the
claim that this is necessary to save the world from catastrophic global warming is based on
shoddy science and hot air (see Climate change controversies; Climategate).

The earth has generally warmed since the depths of the Little Ice Age three or four hundred
years ago, but in fits and starts, and most of the warming has been in nighttime, winter
temperatures in the northern hemisphere. During the Medieval Warm Period (c. 950-1300) it
was warmer than today, as it was in Roman times and during the Holocene Climate Optimum
(3500-6000 years ago). During the last major ice age (Pleistocene), each of the last four
interglacials, going back nearly half a million years, was several degrees warmer than today.
Reconstructed extra-tropical (30-90°N) mean decadal temperature variations
relative to the 1961-1990 mean, showing the Roman Warm Period (RWP), Dark
Ages Cold Period (DACP), Medieval Warm Period (MWP), Little Ice Age (LIA) and
Current Warm Period (CWP). (Idsos, 2011)
The last 450,000 years. (Idsos, 2011)

The average global temperature is officially said to have increased by about 0.8°C over the
course of the 20th century. It rose until about 1947, cooled slightly until about 1977, peaked in
1998, and has been essentially flat since 2001, despite rapidly rising CO2 levels. The IPCC
claims that only the warming of the past 50 years is mainly attributable to man-made
greenhouse gases – even though nothing happening today is in any way unprecedented or
outside the range of natural climate variability (wattsupwiththat.com; wattsupwiththat.com); the
IPCC attributes all earlier warming and cooling periods to natural factors – solar, orbital,
oceanic, tectonic, etc.

To support its claim that very recent warming is due to man, the IPCC cites the results of
climate computer models programmed by scientists who believe that greenhouse gases are a
major driver of climate change. Other variables are adjusted until the model outputs
approximately match the temperature record of the last hundred years or so. Then the major
role assigned to CO2 is removed from the models while all the other settings are left
unchanged. Not surprisingly, the models are now incapable of matching the temperature
record. To cite this as proof that CO2 drives the climate is sheer sophistry.

It goes without saying that if the role assigned to CO2 is reduced while the role of natural factors
is increased, the models can still be tuned to match the past. But the fact that models can be
adjusted in different ways to reproduce past temperatures says nothing at all about whether any
of the models is an accurate representation of how the climate really works. In fact the models
are well known to have serious shortcomings, particularly in their handling of the hydrological
cycle (water vapour, clouds, precipitation), and they have consistently overestimated the rate of
warming.
NASA’s Goddard Institute for Space Studies (GISS) admitted in 2007 that the current
uncertainties in the climate impact of total solar irradiance (TSI) and aerosols (tiny solid
particles or liquid droplets suspended in a gas) ‘are so large that they preclude meaningful
climate model evaluation by comparison with observed global temperature change’
(indarticles.com). This is a rare admission, but it was made when seeking funding for a new
remote-sensing satellite – the Glory satellite, which was launched on 4 March 2011 but failed to
reach orbit. Using long-term projections of future climate made by flawed, unvalidated models
as the justification for draconian, trillion-dollar emission-cutting measures seems rather
irrational, especially when such models can’t even reliably predict the local weather more than
a few days in advance.

Dry air is composed of 78% nitrogen, 21% oxygen, 0.9% argon, plus various trace gases, such
as carbon dioxide. At present, the atmospheric concentration of CO2 is just under 400 parts per
million (ppm), i.e. just under four hundredths of one per cent (0.04%). Man-made warming
proponents emphasize that according to the ice-core record, this level is higher than any seen
in the past 650,000 years. Analysis of air bubbles trapped in Antarctic ice cores tends to
indicate that the atmospheric CO2 concentration ranged from about 180 to 300 ppm in previous
interglacials – even though many of them were several degrees warmer than today. What
receives far less emphasis are the uncertainties surrounding the ice-core record.

First, the presence in ice of liquid water alters the original composition of the air in gas
inclusions; this can deplete CO2 by 30 to 50%, mostly in the upper layers of the ice sheets.
There have also been clear instances of data selection and manipulation by man-made
warming proponents (Jaworowski, 2009; Schmitt, 2010). Second, studies of leaf stomata (pores
through which plants take in CO2) often show higher and more variable atmospheric CO2 levels
than the ice cores. They suggest that pre-industrial CO2 levels were commonly in the 360 to
390 ppm range. Third, an analysis by Ernst-Georg Beck and others of about 100,000 direct
measurements of CO2 in the atmosphere made from 1812 to 1961 shows that atmospheric CO2
levels have varied very widely, with peaks of around 360 ppm in the 1820s and 380 ppm in the
1940s, and are closely correlated with sea surface temperatures; the vast majority of these
measurements are rejected by mainstream climatologists (biomind.de; Climate change
controversies).
Average atmospheric CO2 concentrations measured in the 19th and 20th centuries.
The values used in ‘consensus’ CO2 reconstructions are circled; the other
measurements are rejected. (Jaworowski, 2009)

Mankind puts 6 to 8 billion tonnes of carbon (GtC) into the atmosphere every year. The oceans
and biosphere emit 190 to 235 GtC annually. The annual increase in atmospheric CO2 is
around 3 or 4 billion tonnes – all of which is attributed to man, on the false assumption that the
rest of nature is in equilibrium (i.e. the annual amounts of CO2 emitted and absorbed by the
oceans and biosphere allegedly balance). There is strong evidence that the residence time of
CO2 in the atmosphere is 5 to 15 years – and not up to several hundred years as the IPCC
assumes. This undermines the claim that anthropogenic CO2 emissions are responsible for the
entire increase of CO2 in the atmosphere (Glassman, 2010a, b; Middleton, 2010; Jaworowski,
2009). Significantly, ice-core data show a close match between temperature and CO2 during the
last ice age, but temperatures rose several hundred years before increases in atmospheric
CO2. This is because rising temperatures cause the oceans in particular to release more of the
CO2 dissolved in them.

The impact of man-made CO2 on climate has been grossly exaggerated; earth’s climate is
mainly driven by natural forces. Greenhouse gases, which also include methane, nitrous oxide,
ozone, chlorofluorocarbons, and water vapour (the latter being by far the most potent), are
often likened to a ‘blanket’ around the earth, because they absorb certain frequencies of
infrared radiation reflected or emitted from the earth’s surface, thereby delaying the loss of heat
to space. So, other things being equal, more greenhouse gases in the atmosphere will cause
temperatures to rise; the general scientific opinion is that a doubling of atmospheric CO2 will
produce just over 1ºC of warming. But in a complex system like the climate, there are all sorts
of feedbacks, which either amplify warming (positive feedbacks) or mitigate it (negative
feedbacks).
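
The ‘just over 1ºC’ no-feedback figure can be reproduced with the standard textbook approximation for CO2 forcing and the Planck-only temperature response; the two constants below are conventional estimates offered as context, not values taken from the article:

    # Conventional no-feedback warming estimate for a doubling of CO2.
    # Forcing: dF ~ 5.35 * ln(C/C0) W/m^2; response: dT ~ lambda * dF.

    import math

    forcing = 5.35 * math.log(2)   # ~3.7 W/m^2 for a CO2 doubling
    planck_sensitivity = 0.3       # K per (W/m^2), no feedbacks assumed

    print(f"forcing: {forcing:.1f} W/m^2")
    print(f"warming: {forcing * planck_sensitivity:.1f} C")   # ~1.1 C

    # The 1.5-6 C model range discussed below arises from multiplying this
    # baseline by assumed net-positive feedbacks.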

As temperature increases, more ocean water evaporates into the atmosphere. Climate models
treat water vapour as a positive feedback that amplifies CO2-induced warming, leading to a
high ‘climate sensitivity’ of 1.5 to 6ºC according to different models, with the IPCC’s ‘best
estimate’ being 3ºC (meaning that temperature would rise 3º if the atmospheric concentration of
CO2 doubled). However, water vapour also condenses to form clouds, which are the most
important factor affecting how much of the sun’s radiation reaches the earth’s surface. Cloud
cover is a highly dynamic factor, whereas climate models treat it as a constant. Some
researchers have highlighted the fact that rising temperatures result in more low-level clouds,
which have a cooling effect (negative feedback), resulting in a climate sensitivity of around
0.5ºC (Spencer, 2009; Glassman, 2009). The relative stability of the earth’s temperature over
geologic time (see figure below) indicates that the climate is dominated by negative, stabilizing
feedbacks; the earth is a self-regulating organism, with alternating cycles of warming and
cooling.

All the greenhouse models predict a ‘hotspot’ about 10 km up in the troposphere above the
tropics, due to the alleged positive water-vapour feedback. But over the interval 1979 to 2009,
model-projected temperature trends are two to four times larger than observed trends in both
the lower and mid-troposphere (McKitrick et al., 2010). Instead of adjusting their models, man-
made warming believers have responded by trying to adjust the measurements (Van Andel,
2011). The models assume that relative humidity remains constant under the influence of global
warming at all heights in the troposphere, but the past 50 years have seen a marked decline in
the humidity of the upper troposphere, pointing to a negative climate feedback (Jaworowski,
2009).

Left: Hotspot predicted by climate models. Right: Observations show no hotspot.
(scienceandpublicpolicy.org)

According to the theory pioneered by Henrik Svensmark, when solar activity – and the solar
magnetic field – weakens, more galactic cosmic rays (GCR) can penetrate the earth’s
atmosphere, resulting in more cloud condensation nuclei, greater cloud cover and lower
temperatures; the IPCC ignores this possible indirect influence of the sun on earth’s climate
(Glassman, 2010a; Van Andel, 2011). Early results from two separate experiments support the
GCR hypothesis (science.au.dk; wattsupwiththat.com). A 1% decrease in cloud cover could
raise global temperatures by 0.5ºC. Changes in the influx of cosmic rays show a better
correlation with 20th-century temperature trends than does CO2.
The heat content of the atmosphere is a thousand times smaller than the heat content of the
oceans. This means that a drop in ocean temperature of 1/1000ºC would raise the air
temperature by 1ºC. Over half the solar energy absorbed by the earth is absorbed in the
tropics, and there is a good correlation between sea surface temperatures in the tropical Pacific
and average global temperature. One of the ocean cycles that have a major impact on global
temperatures is the El Niño/Southern Oscillation (ENSO); El Niño means warmer temperatures,
and La Niña cooler temperatures. It’s interesting to note that during the cooling from 1947 to
1977 there were 7 El Niños and 14 La Niñas, during the warming from 1978 to 1998 there were
10 El Niños and 3 La Niñas, and since 1999 there have been 3 El Niños and 6 La Niñas
(weatherbell.com).

Red: sea surface anomalies in the tropical Pacific (20ºN-20ºS).
Blue: global temperature anomalies. (Van Andel, 2011)

Given the disinformation spread by warmists, it is not surprising that a recent survey revealed
stunning levels of public ignorance about carbon and CO2 (joannenova.com.au). Far from being
a ‘pollutant’, CO2 is a colourless, odourless, tasteless, benign gas that is a vital ingredient in
photosynthesis and plant growth, and essential to life on earth. That is why farmers artificially
increase the CO2 concentration in glasshouses to around three times the current atmospheric
level, often by piping in CO2 from nearby power plants. At times in the geologic past, the
atmospheric CO2 concentration has been over ten times higher than today, even during major
glaciations.
Global temperature and atmospheric CO2 over geologic time. (geocraft.com)

Rising CO2 levels are supposed to be producing a series of dire environmental consequences,
including dangerous global warming, catastrophic sea level rise, dangerous ocean acidification,
reduced agricultural output, the destruction of many natural ecosystems, and a dramatic
increase in extreme weather phenomena, such as droughts, floods and hurricanes. Craig &
Sherwood Idso (2011) present extensive evidence showing that ‘real-world observations fail to
confirm essentially all of the alarming predictions’, and stress that rising atmospheric CO2
concentrations ‘have actually been good for the planet, as they have significantly enhanced
plant productivity ..., leading to a significant “greening of the earth” ’.

Doubling the air’s CO2 concentration, for example, causes the productivity of herbaceous plants
to rise by 30 to 50% and the productivity of woody plants to rise by 50 to 75%. In addition,
atmospheric CO2 enrichment typically increases the efficiency of plant nutrient use and water
use. Without the ongoing rise in the air’s CO2 content, it will barely be possible to meet
humanity’s expanding food needs as the century progresses. As the Idsos (112) say: ‘In light of
the above, it is remarkable that many people actually characterize the ongoing rise in the air’s
CO2 content as the greatest threat ever to be faced by the biosphere, or that the U.S.
Environmental Protection Agency has actually classified CO2 as a dangerous air pollutant.’
Atmospheric CO2 needs to be above 150 ppm to avoid harming green plants, and would only
become harmful to humans at levels over 5000 ppm (Happer, 2011).

Just as in the Middle Ages Catholics believed they could avoid punishment for their sins by
buying indulgences from the Church, so emissions of CO2 are nowadays regarded as a sin, for
which we can buy a sort of environmental indulgence in the form of carbon credits.

Carbon emissions trading (also known as ‘cap-and-trade’) is deeply flawed, as even some
environmental groups recognize. The global carbon market was worth US$ 144 billion in 2009,
but most of this money circulates among banks, brokers speculating on price changes, and
companies hedging their risks, and little is available for funding actual emission reductions (Kill
et al., 2010, 105-6). In the EU Emissions Trading System (ETS) the initial allocation of free
permits enabled some of Europe’s largest emitters of greenhouse gases to reap huge windfall
profits; the 10 companies benefiting most will gain €3.2 billion in 2008-2012. During this period,
European power companies will gain windfall profits of between €23 and €71 billion because
they are passing on nonexistent costs for permit purchases to consumers. In 2008 and 2009,
ETS fraud resulted in VAT revenue losses of €5 billion (ibid., 21-2, 26, 39).

The Kyoto Protocol was signed in 1997 to reduce greenhouse gas emissions. By the time it
expires in 2012, only 6 of the 182 signatories are likely to have achieved their targets. The cost
of the Kyoto measures is estimated at $300 billion per year across all countries, and its
proponents have admitted that even if the targets were met, the reduction in global temperature
would be only 0.13°C by 2100 – and that’s assuming that the IPCC’s inflated value of climate
sensitivity is correct.

There are now calls for new agreements to cut emissions by as much as 80% – an utterly
unrealistic goal. By 2030 CO2 emissions from non-OECD countries will be nearly double those
of the OECD countries (the 34 most economically developed countries). Leaders of the
developing world have no intention of drastically reducing their use of coal and hydrocarbons,
as this would seriously depress living standards. An 80% reduction in US emissions would
mean that the US would emit about 1.2 billion tons of CO2 a year – the same level as in 1910.
This corresponds to per capita emissions of 2.7 tons of CO2 per year – a level to be found in
Cuba, North Korea and Syria, and lower than that in modern China (Bryce, 2008, 156). There
are currently no affordable, viable technologies that will allow countries to reduce their carbon
output by such an amount without seriously damaging their economies.

The relentless efforts by climate alarmists to exaggerate the negative effects of a warmer
climate are a source of endless entertainment. For instance, in 2005 the United Nations
Environment Programme predicted on its website that by 2010 there would be 50 million
climate refugees, who would be forced to flee rising sea levels, severer hurricanes and growing
food shortages. However, census figures for the islands that would supposedly be worst hit
show population increasing as normal. So UNEP quietly removed the idiotic prediction from its
website, though it forgot to delete the accompanying map (wattsupwiththat.com). Instead of
apologizing, the ‘experts’ responsible for the original forecast simply adjusted the date, and are
now predicting 50 million climate refugees by 2020. And true to form, the credulous mass
media have been uncritically hyping the latest claim. There are of course a certain number of
environmental refugees, such as people in northern Europe who emigrate south in search of
warmer climates!

Renewable energy

It is fashionable nowadays to promote renewable energy sources such as wind, solar, and
biofuels as the answer to all energy and environmental problems. But the low power density of
these sources means that they require vast areas of land (or water). Local residents and
environmentalists often oppose ‘green’ energy sources: e.g. people living near proposed wind
parks tend to oppose them because they disfigure the landscape and kill birds; conservationists
have opposed hydropower dams because they disrupt river ecosystems, kill spawning fish
populations, and release large amounts of methane from decaying vegetation along riverbeds;
a lawsuit filed against two proposed geothermal plants in California stated that they would
introduce highly toxic acids into geothermal wells and turn the lands into ‘an ugly, noisy, stinking
wasteland’; and the construction of a solar power plant in California has been held up due to
concerns about the welfare of a lizard (realclearscience.com; Bell, 2011).

Fossil fuels and nuclear energy have power densities 10 to 10,000 times greater than those of
renewable energy resources. David MacKay (2009, 112, 167, 367) gives the following figures:

Power source             Power density (W/m²)
Nuclear                  1000
Solar PV panels          5 to 20
Hydroelectric            11
Onshore wind             2
Offshore wind            3
Tidal stream             6
Tidal pools              3
Biomass                  0.5
Corn for bioethanol      0.002 to 0.05
Rainwater (highlands)    0.24
Geothermal               0.017

Power densities of fossil fuel extraction compared to power densities of renewable
energy conversions (courtesy of Vaclav Smil, 2008a, 312). Thermal power plants
include nuclear, coal, fuel oil, and gas. Phytomass is plant biomass. Hydro appears
twice: upper-course hydrogeneration has a higher power density than lower-course.

In July 2008 Al Gore called on the US to produce 100% of its electricity from renewable energy
and clean, carbon-free sources within 10 years. Vaclav Smil (2008b) comments:

To think that the United States can install in 10 years wind and solar generating
capacity equivalent to that of thermal power plants that took nearly 60 years to
construct is delusional. ...
It took 45 years for the US to raise its crude oil use to 20 percent of the total
energy supply; natural gas needed 65 years to do the same. As for electricity
generation, coal produced 66 percent of the total in 1950 and still 49 percent in 2007
...

In 2009 less than 8% of US energy consumption came from renewables: biomass 3.88%, hydro
2.68%, wind 0.70%, geothermal 0.37%, solar 0.11% (llnl.gov). In January 2011 President
Obama called for the US to generate 80% of its energy from ‘clean’ sources by 2035. This is
more realistic because in ‘clean energy’ he includes nuclear energy, ‘clean coal’ (i.e. coal plants
that use low-emission technology or carbon capture and storage), and natural gas, in addition
to traditional renewables.

In 2007 the European Union decided that by 2020 member states should achieve a set of
climate and energy targets known as the ‘20-20-20’ targets: 20% of energy consumption should
come from renewables; greenhouse gas emissions should be at least 20% below 1990 levels;
and primary energy use should be reduced by 20% by improving efficiency. These targets were
not based on a realistic analysis of what is feasible but on the fact that ‘20-20-20 by 2020’ has a
‘cute’ ring to it. A leaked UK government report says that achieving 20% renewables by 2020
would cost up to £22 billion (which it labelled ‘unreasonable’), and that a more realistic target
would be 9% renewables (guardian.co.uk).

The International Energy Agency (IEA) expects some $5.5 trillion to be spent on renewable
energy projects between now and 2030, by which time renewables could be providing 10% of
the world’s primary energy needs (Bryce, 286).

Before the industrial revolution everyone lived on renewables, but lifestyles and population
densities were very different then. An average person consumed about 20 kWh of energy per
day. Each person used 4 kg of wood per day, which required 1 hectare (10,000 m²) of forest per
person. The area of land per person in Europe in the 1700s was 52,000 m², falling to 17,500 m²
in regions with the highest population density. Today the area of Britain per person is only 4000
m², so even if the country were completely reforested, a traditional lifestyle would no longer be
possible (MacKay, 108).
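
That 20 kWh per day figure is consistent with the wood energy density quoted earlier in this article; a quick check:

    # Consistency check: 4 kg of wood per day at ~17 MJ/kg (best case, from the text).

    MJ_PER_KWH = 3.6
    wood_kg_per_day = 4
    wood_mj_per_kg = 17

    kwh_per_day = wood_kg_per_day * wood_mj_per_kg / MJ_PER_KWH
    print(f"{kwh_per_day:.0f} kWh per day")   # ~19 kWh/day, i.e. roughly 20 kWh/day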

The average energy consumption in the UK is 125 kWh per day per person, excluding imports
and solar energy acquired through food production. David MacKay (ch. 18) calculates that if we
make ‘pretty extreme assumptions’ (i.e. favourable to green energy) and ‘throw all economic,
social, and environmental constraints to the wind’, renewable energy sources could
theoretically produce 180 kWh per day per person. However, this would require covering an
area the size of Wales with wind turbines, an area half the size of Wales with solar panels, and
75% of the UK (i.e. all its agricultural land) with energy crops, and also building wave farms
along 500 km of coastline.

MacKay says that if we make realistic assumptions, and take into account likely public and
environmental objections, renewables could not produce more than 18 kWh per day per
person. He believes that energy consumption could eventually be almost halved through
conservation measures (home insulation, replacement of fossil fuel heaters with electric heat
pumps, and electrification of private and public transport), but this would still far exceed the
power provided by renewables. He argues that the UK’s own renewables would have to be
supplemented by ‘clean coal’, nuclear power, and/or other countries’ renewables (especially
solar power in deserts) (ch. 19). He believes that the same applies to Europe as a whole (ch.
30).
MacKay (2009, 109): ‘I fear that the maximum Britain would ever
get from renewables is in the ballpark of 18 kWh/d per person.’
Solar PV = solar photovoltaics (i.e. turning sunlight into electricity)
Solar HW = solar hot water (i.e. solar heating).

Wind

If we consider only the flux of the wind’s kinetic energy moving through the area swept by wind-
turbine blades, the power density is commonly above 400 W/m² in the windiest regions. But
because wind turbines have to be spaced 5 to 10 rotor diameters apart to minimize wake
interference, the power density expressed as electricity generated per square metre of the area
occupied by a large wind farm is a small fraction of that figure. We also have to take into
account that a wind turbine’s rated capacity (the power generated in optimal wind conditions)
has to be multiplied by the capacity factor (or load factor), i.e. the ratio of the electricity actually
generated over a year to what would be generated if the turbines ran constantly at rated power.
This figure is commonly put at 30% for the UK, 22% for the Netherlands, and 19% for Germany
(MacKay, 267). This reduces year-round average power densities for large-scale wind
generation to no more than 2 W/m².

If 10% of the US electricity generated in 2009 (45 GW) were to be produced by large wind
farms, they would have to cover at least 22,500 km², roughly the size of New Hampshire (Smil,
2010b). If we covered the windiest 10% of the UK with wind turbines (delivering 2 W/m²), we
would generate 20 kWh per day per person – or half the power used by driving an average
fossil fuel car 50 km per day (MacKay, 33).
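
Both of these figures follow directly from the ~2 W/m² density; a sketch of the arithmetic (the UK land area of ~244,000 km² and population of ~60 million are approximate reference values, not from the article):

    # Land area and per-person output at ~2 W/m^2 of average wind power density.

    density = 2.0                       # W/m^2, year-round average (from the text)

    # US example: 45 GW of average output
    us_target_w = 45e9
    print(f"US area needed: {us_target_w / density / 1e6:,.0f} km^2")   # ~22,500 km^2

    # UK example: windiest 10% of the country covered with turbines
    uk_area_m2    = 244_000e6           # ~244,000 km^2, approximate
    uk_population = 60e6                # approximate
    avg_power_w   = 0.10 * uk_area_m2 * density
    print(f"UK: ~{avg_power_w / uk_population * 24 / 1000:.0f} kWh per day per person")  # ~20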

In 2009, 90% of the UK’s electricity needs was supplied by coal, gas and nuclear power, and
3% by wind (decc.gov.uk), but the aim is for wind power to provide nearly a third. The UK has
some 3500 wind turbines, but they generate no more electricity than a single, medium-sized conventional
power station. The government wants to spend £100 billion on building 10,000 more turbines
over the next decade, plus another £40 billion on connecting them to the grid (Booker, 2011).

Wind farm at Ingbirchworth, West Yorkshire.

A recent study (Stuart Young Consulting, 2011) found that from November 2008 to December
2010 the average output of UK wind farms metered by the national grid was only 24% of rated
capacity. During that period, wind generation was below 20% of capacity more than half the
time, and below 10% of capacity over one third of the time. At each of the four highest peak
demands of 2010, wind output was only 4.7%, 5.5%, 2.6% and 2.5% of capacity. The report
refuted the claim that the generation gap during prolonged low-wind periods can be filled by
pumped storage hydroelectricity; this involves using wind-generated power to raise water to a
higher elevation when electricity demand is low and then releasing the water to flow through
hydroelectric turbines when demand increases. The report concluded:

It is clear from this analysis that wind cannot be relied upon to provide any
significant level of generation at any defined time in the future. There is an urgent
need to re-evaluate the implications of reliance on wind for any significant proportion
of our energy requirement.
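
The scale of the pumped-storage option dismissed in that report can be gauged from the basic potential-energy relation E = ρ·g·h·V, which gives the volume of water that must be cycled between reservoirs; the 500 m head and the 1-GW-for-a-day target below are assumed illustrative values, not figures from the report:

    # How much water a pumped-storage scheme must move: E = rho * g * h * V.

    rho, g = 1000.0, 9.81          # kg/m^3, m/s^2
    head_m = 500.0                 # assumed height difference between reservoirs

    kwh_per_m3 = rho * g * head_m / 3.6e6      # ~1.4 kWh per cubic metre of water
    target_kwh = 1e6 * 24                      # 1 GW sustained for 24 hours = 24 GWh

    print(f"storage: {kwh_per_m3:.2f} kWh per m^3 of water")
    print(f"water needed: {target_kwh / kwh_per_m3 / 1e6:.0f} million m^3")   # ~18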
Plans to erect some 800 giant wind turbines, up to 415 ft high, in the unspoilt hills of
mid-Wales are running into stiff opposition. The total cost of the project, including
100 miles of steel pylons to carry the electricity to the national grid, will be £2 billion.
The turbines will produce an average of around 300 MW. By contrast, the new gas-
fired power station at Langage near Plymouth cost £400 million, produces 895 MW,
and covers just a few acres. (blogs.telegraph.co.uk; telegraph.co.uk; centrica.com)

The Thanet wind farm 12 km off the coast of Kent is the world’s biggest offshore wind farm, with
100 turbines rising some 90 m above the sea over an area of 35 km² (vattenfall.co.uk). The
rated capacity is 300 MW, but when the load factor (about 26%) is taken into account, it will
supply enough electricity for about 131,000 homes – a figure which drops to zero when the
wind isn’t blowing, and also when the wind is too strong, as the turbines then have to be
switched off. It was built by the Swedish energy company Vattenfall at a cost of £780 million.
On top of the £40 million in electricity sales, Vattenfall will receive at least £60 million a year in
renewable obligation certificates (ROCs). This is equivalent to a public subsidy of £1.2 billion
over the turbines’ 20-year service life – enough to build a 1 GW nuclear power station, which
can deliver 13 times more power than this wind farm (gl-w.blogspot.com).
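
A rough check of the Thanet figures (the 26% load factor, 131,000 homes, £60 million a year and 20-year lifetime are those quoted above; the per-home consumption that falls out is only an implied figure):

# Thanet offshore wind farm, using the figures quoted above.
rated_mw = 300.0
load_factor = 0.26
average_output_mw = rated_mw * load_factor                    # ~78 MW on average
homes = 131_000
kwh_per_home_per_year = average_output_mw * 1000 * 8760 / homes
print(round(average_output_mw), "MW average,",
      round(kwh_per_home_per_year), "kWh per home per year")  # ~78 MW, ~5200 kWh

roc_subsidy_total = 60e6 * 20                                 # £60m/yr over 20 years
print(roc_subsidy_total / 1e9, "billion pounds of subsidy")   # 1.2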

A serious problem with offshore wind farms is the corrosive effects of sea water. At a big Danish
wind farm, Horns Reef, all 80 turbines had to be dismantled and repaired after only 18 months’
exposure to the sea air. The turbines at the Kentish Flats wind farm (also off the coast of Kent)
are having similar problems with their gearboxes, with one third having to be replaced during
the first 18 months (MacKay, 61).

In May 2011 a judge ordered the dismantling of the Serra del Tallat wind farm in
Spain because it did not have the proper planning permission (lavanguardia.com).
This shows that wind turbines really do create jobs: first to put them up, then to pull
them down.

The variability of the wind means that wind power (like solar power) is not ‘dispatchable’ –
meaning that you can’t necessarily start installations up when you most need them. Wind
turbines therefore have to be backed up by gas-fired plants or, in less wealthy nations such as
China, coal-fired plants, thereby making wind power more expensive than conventional power
generation. So adding wind (or solar) power to the grid does not replace an equivalent amount
of fossil-fuel generating capacity. A survey of US utilities found that adding wind power allows the installed capacity of thermal power stations to be reduced by only 3 to 40% of the rated wind capacity, with many of the estimates falling in the 20 to 30% range (Cleveland, 2011).
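
In concrete terms (a small illustration; the 1000 MW of rated wind capacity is a hypothetical example, the 3-40% range is the survey figure quoted above):

# How much thermal capacity a given amount of wind capacity displaces, using the
# 3-40% capacity-credit range from the utility survey quoted above.
rated_wind_mw = 1000.0    # hypothetical example value
for credit in (0.03, 0.20, 0.30, 0.40):
    print(f"{credit:.0%} credit: {rated_wind_mw * credit:.0f} MW of thermal capacity displaced")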

By the beginning of 2007, wind power accounted for about 13.4% of all the electricity generated
in Denmark, thanks to massive subsidies. But this has not resulted in energy independence, or
made a difference to the country’s CO2 emissions, coal consumption or oil use. When its
turbines produce more electricity than can be used, the Danes sell it to their neighbours, often
at subsidized, below-market rates. And when they don’t produce enough, large quantities of
hydropower are imported from Norway and Sweden, whose hydrogeneration is 30 times
Denmark’s wind production. In 2006, Denmark’s electricity rates were the highest in the world,
amounting to some $0.32 per kilowatt-hour – about 25% higher than in the Netherlands, which
had the next-highest rates, at $0.25 per kilowatt-hour (Bryce, ch. 10).

Germany and Denmark have discovered that if wind represents more than 4% of their grid-
generating capacity, the risks from unreliability and system damage from surges become
unacceptably high. Germany is now constructing several coal-fired plants for new capacity, and
reassessing the viability of its wind energy programme (Smith, 2011).

A large wind farm reduces annual CO2 emissions by considerably less than the annual
emissions of a single jumbo jet flying daily between Britain and America. Moreover, the
construction of wind turbines generates enormous CO2 emissions as a result of the mining and
smelting of the metals used, the carbon-intensive cement needed for their huge concrete
foundations, and the building of miles of road often needed to move them to the site (Booker,
2011). A typical megawatt of reliable wind power capacity requires about 32 times as much
concrete and 139 times as much steel as a typical gas-fired power plant (Bryce, 90). Moreover,
nearly all the wind turbines now being produced depend on a rare-earth element called
neodymium, whose supply is controlled by China. A direct-drive permanent-magnet generator
for a top-capacity wind turbine uses about 2 tonnes of neodymium-based permanent magnet
material.

The 8-km-wide, 30-m-deep lake of toxic waste at Baotou, China. Seven million
tonnes of waste a year are discharged into the foul-smelling lake by the rare-earth
processing plants in the background, with a devastating impact on local residents’
health. The region has over 90% of the world’s reserves of rare-earth metals,
notably neodymium, which is used to make magnets for wind turbines and hybrid
cars. (thegwpf.org)

Cattle can graze and crops can be grown beneath wind turbines, but humans cannot live close to them because the low-frequency noise produced by the massive blades disturbs sleep patterns and
can cause headaches, dizziness and other health problems. Wind turbines also cause other
hazards. On the basis of available data (which are not comprehensive), there was an average
of 103 accidents per year in the wind industry from 2005 to 2010, including 73 fatalities
(Caithness Windfarm Information Forum, 2011). Most incidents were due to blade failure, in
which whole blades or pieces of blade are thrown up to 1300 metres. Hence the proposal for a
buffer zone of at least 2 km between turbines and residential areas. Fire is the second most
common incident; because of the turbine height, the fire brigade can do little but stand and
watch. Some incidents were due to ice being thrown from the blades up to 140 m away.

The worldwide mortality rate for wind power is about 0.15 deaths per trillion watt-hours (TWh).
This is roughly the same as the figure for the mining, processing and burning of coal to
generate electricity according to some researchers, or half that figure according to others,
though this doesn’t include increases in mortality from the air pollution resulting from burning
coal (wind-works.org).

Another objection raised against wind power is bird kill, but this needs to be put in perspective.
The American Bird Conservancy estimates that every year between 100,000 and 440,000 birds
are killed by wind turbines in the US. But it also estimates that every year between 10 and 154
million birds are killed by power lines, between 10.7 and 380 million by traffic, and between 100
and 1000 million by glass (abcbirds.org). In Denmark an estimated 30,000 birds per year are
killed by wind turbines, and about a million by traffic. In Britain 55 million birds per year are
killed by cats (MacKay, 63).

Solar

Solar energy is the only essentially unlimited renewable resource. It can be harnessed and
used in different ways:
- Trees, plants and vegetation absorb solar energy through photosynthesis and store it in
chemical form. This energy is consumed directly when these materials are burned as fuel, or
eaten by humans and animals, or it may be turned into biofuels, chemicals, or building
materials.
- By means of solar thermal collectors (e.g. on roofs), sunlight can be used for direct heating of
buildings or water.
- Photovoltaics (PV) converts solar radiation directly into electricity by means of solar panels
composed of cells containing a photovoltaic material (e.g. silicon). The concentration of sunlight
onto photovoltaic surfaces is known as concentrated photovoltaics (CPV).
- Concentrated solar power (CSP) uses lenses or mirrors to concentrate a large area of sunlight
onto a small area; the concentrated light is then converted into heat which drives a heat engine
(usually a steam turbine) connected to an electrical power generator.

Covering the south-facing roof of homes with photovoltaics may provide enough electricity to
cover a large share of average electricity consumption, but roofs are not big enough to make a huge dent in our total energy consumption (MacKay, 40). When the sun goes behind clouds,
photovoltaic production falls roughly 10-fold. Moreover, this method is less effective for two- or
three-storey homes and high-rise buildings.

Solar cells have a range of efficiencies, but the power densities of all types of solar power
generation are well below those of conventional energy sources:
While the best research cells have efficiencies surpassing 30% (for multijunction
concentrators) and about 15% for crystalline silicon and thin films, actual field
efficiencies of PV cells that have been recently deployed in the largest commercial
parks are around 10%, with the ranges of 6-7% for amorphous silicon and less than
4% for thin films. A realistic assumption of 10% efficiency yields 17 W/m2 as the first
estimate of average global PV generation power density, with densities reaching
barely 10 W/m2 in cloudy Atlantic Europe and 20-25 W/m2 in subtropical deserts.
(Smil, 2010b, 12)

So although the largest solar PV parks generate electricity with power densities roughly 5 to 15
times higher than for wood-fired plants, this is at best 1/10 and at worst 1/100 of the power
densities of coal-fired electricity generation. If only 10% of all electricity generated in the US in
2009 (45 GW) were to be produced by large PV plants, the area required (even with an
average power density of 8 W/m2) would be about 5600 km2.
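
Again, the area figure is straightforward to verify using the 45 GW target and the 8 W/m2 density quoted above:

# Land area for large PV parks to supply 10% of US 2009 electricity (45 GW average).
average_power_w = 45e9
pv_power_density = 8.0                     # W/m2, the assumed average quoted above
area_km2 = average_power_w / pv_power_density / 1e6
print(round(area_km2), "km2")              # 5625 km2, i.e. 'about 5600 km2'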

No dramatic near-term improvements are expected either in the conversion efficiency of PV cells deployed on MW scale in large commercial solar parks or in
the average capacity factors. But even if the efficiencies rose by as much as 50%
within a decade this would elevate average power densities of optimally located
commercial solar PV parks to no more than 15 W/m2.
Concentrating solar power (CSP) projects use tracking parabolic mirrors in order
to reflect and concentrate solar radiation on a central receiver placed in a high
tower. Still, power densities of CSP are not all that different from PV generation. ...
[O]ptimally located CSP plants will operate with power densities of 35-55 W/m2 of
their large heliostat [mirror] fields and with rates no higher than 10 W/m2 of their
entire site area. (Smil, 2010b, 13-14)

Environmental groups have criticized solar parks for taking up too much desert land, thereby
displacing certain animal and reptile species. The use of photovoltaic collectors is also
challenged because they contain highly toxic heavy metals, explosive gases and carcinogenic
solvents that present end-of-life disposal hazards (Bell, 2011).

SunEdison’s 6.2 MW Alamosa Solar PV Farm, Colorado. (xcelenergy.com)

Even covering 5% of the UK with 10%-efficient solar panels would yield only 50 kWh per day
per person (MacKay, 41) – or two-fifths of average power consumption. Assuming that
concentrating solar power in deserts delivers an average power per unit land area of about 15
W/m2, a total desert area of 1 million square kilometres would have to be covered with solar
cells to provide the world’s total power consumption of 15,000 GW. To supply everyone in
Europe and North Africa with an average European’s power consumption would require a
desert area of 360,000 square kilometres, equal to the area of Germany, one and a half times the size of the UK, or 16 times the size of Wales (MacKay, 178).
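
The desert-area figure follows the same arithmetic, using the 15 W/m2 CSP density and the 15,000 GW world consumption figure quoted above:

# Desert area needed to supply world power consumption from CSP at 15 W/m2.
world_power_w = 15_000e9        # 15,000 GW, as quoted
csp_density_w_per_m2 = 15.0
area_km2 = world_power_w / csp_density_w_per_m2 / 1e6
print(area_km2, "km2")          # 1,000,000 km2 of desert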

Solar and wind power reduce location flexibility because the facilities have to be located in the
sunniest or windiest regions, often requiring the addition of hundreds or thousands of kilometres
of power lines to transmit electricity to distant towns and cities. As with wind power, the
variability of solar power presents a problem. Electricity providers must be able to ramp power
generation up and down to meet the changing demand within a relatively narrow voltage range;
too little power causes a brownout, while too much produces a damaging surge. The only
alternative to having reserve gas or coal power generation capacity is to invest in energy
storage capability, such as pumped hydroelectric power, compressed air, flywheels or flow
batteries – which will add significantly to the already high cost. Solar power still faces the
challenge of developing a cheap photovoltaic cell.

As Robert Smith (2011, 21) says: ‘Solar has the biggest future potential, but it will also take the
longest time to develop into a major segment of national electric power generation. Solar power
has so far offered the most promise and delivered the least.’

Bioenergy

Biomass energy, or bioenergy, is energy from plants and plant-derived materials. Wood is still
the largest biomass energy resource in use today, but other sources include food crops, grassy
and woody plants, residues from agriculture or forestry, oil-rich algae, the organic component of
municipal and industrial wastes, and the fumes (methane gas) from landfills. As well as being
converted into electricity, biomass can be converted into liquid fuels, or biofuels, for
transportation purposes. The two most common biofuels are ethanol (made from corn and
sugarcane) and biodiesel (made from vegetable oil, animal fat, or recycled cooking grease).

Using biomass as a fuel releases CO2 and air pollutants such as carbon monoxide, nitrogen
oxides, volatile organic compounds, and particulates, in some cases at levels above those from
conventional fuel sources such as coal or natural gas (en.wikipedia.org). The main problem
with biomass is its low energy density and power density. The best power density of energy
crops in Europe is about 0.5 W/m2. If we were to cover 75% of the UK with bioenergy crops (i.e.
the entire area currently devoted to agriculture), and if we use a very conservative figure of 33%
for overall losses along the processing chain, we would still generate only 24 kWh of electricity
per day per person (MacKay, 43-4).
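
MacKay’s 24 kWh/day figure can be reconstructed as follows (a sketch; the ~4000 m2 of UK land per person is an assumption obtained by dividing the UK’s area by its population, and is not stated explicitly above):

# Per-person energy-crop estimate behind the 24 kWh/day figure.
land_per_person_m2 = 4000.0         # ~UK area / population (assumption)
planted_fraction = 0.75             # 75% of the country planted with energy crops
crop_power_density = 0.5            # W/m2, best European energy crops
processing_losses = 0.33            # losses along the processing chain

delivered_power_w = (land_per_person_m2 * planted_fraction
                     * crop_power_density * (1 - processing_losses))
kwh_per_day = delivered_power_w * 24 / 1000
print(round(kwh_per_day), "kWh per day per person")   # ~24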

The annual global consumption of gasoline and diesel fuel in land and marine transport, and
kerosene in flying, is about 75 exajoules (million trillion joules). Even if the most productive
biomass alternative (Brazilian ethanol from sugarcane at 0.45 W/m2) could be replicated
throughout the tropics, the total land required for producing transportation ethanol would be
about 550 million hectares, just over one-third of the world’s cultivated land or nearly all the
agricultural land in the tropics (Smil, 2008a, 360-1).
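
A rough check of this land requirement, using only the 75 EJ and 0.45 W/m2 figures quoted above, lands in the same ballpark as the ~550 million hectares:

# Land needed to replace global transport fuel (75 EJ/yr) with sugarcane ethanol.
annual_fuel_j = 75e18
average_power_w = annual_fuel_j / (365 * 24 * 3600)    # ~2.4 TW
ethanol_density = 0.45                                 # W/m2, Brazilian sugarcane
area_ha = average_power_w / ethanol_density / 1e4      # m2 -> hectares
print(round(area_ha / 1e6), "million hectares")        # ~530 million ha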

The power density of bioethanol made from corn is only 0.22 W/m2. This means that about 390
million hectares (just over twice the entire cultivated area of the US) would be required to
satisfy the US demand for liquid transportation fuel. Moreover, if all the machinery required for
ethanol production were fuelled with ethanol, and the combustion of crop residues was used to
generate heat for distilling purposes, the power density would drop to, at best, 0.07 W/m2. The
US would then need to plant 1.2 billion hectares (over six times its entire arable area) with corn
for ethanol fermentation (Smil, 2008a, 361).

In the US, the lobbying and campaign contributions of large agri-business interests have led to
politicians passing legislation that has given 5.5 to 7.3 billion dollars a year in tax subsidies to
the booming corn ethanol business. It is now more profitable for farmers to grow huge acreages
of corn to make vehicle fuel rather than to feed people. Congress has introduced mandates
forcing automakers to manufacture ‘flex-fuel’ cars and forcing motorists to buy ethanol-blended
gasoline. Some 30% of the US maize (corn) harvest was expected to be used for ethanol by
2010, but this still accounts for less than 8% of US gasoline consumption (World Bank, 2008).

Ethanol can be used as a fuel for vehicles in its pure form, but it is usually added to gasoline to
increase octane and reduce emissions. Tests show that E85 (85% ethanol) reduces fuel
economy by 28% compared with regular gasoline (10% ethanol) (caranddriver.com). Greater
use of ethanol was supposed to reduce US dependence on oil. However, ethanol only replaces
one of the many products that refiners extract from crude oil. On average a barrel of crude oil
(42 gallons) yields about 20 gallons of gasoline (petrol). Other products include butane, jet fuel,
diesel fuel, fuel oil, and asphalt. Gasoline demand, both in the US and globally, is essentially
flat, while demand for ‘middle distillates’ – mainly diesel fuel and jet fuel, i.e. the liquids that
propel the vast majority of our commercial transportation machinery – is growing rapidly. And
ethanol cannot replace these fuels (Bryce, 185-6).

A World Bank policy research working paper concluded that up to 70-75% of the rise in food
prices from 2002 to 2008 was due to ‘large increases in biofuels production in the U.S. and EU’
and ‘the related consequences of low grain stocks, large land use shifts, speculative activity
and export bans’ (Mitchell, 2008, 17). An OECD (2008) report gave the following assessment:
The impact of current biofuel policies on world crop prices, largely through increased
demand for cereals and vegetable oils, is significant but should not be
overestimated. Current biofuel support measures alone are estimated to increase
average wheat prices by about 5 percent, maize by around 7 percent and vegetable
oil by about 19 percent over the next 10 years.

By raising the price of food worldwide, increased production of biofuels condemns more people
to chronic hunger and absolute poverty (defined as income less than $1.25 per day). Indur
Goklany (2011) estimates conservatively that this would lead to at least 192,000 excess deaths
per year, plus disease resulting in the loss of 6.7 million disability-adjusted life-years (DALYs)
per year. These exceed the estimated annual toll of 141,000 deaths and 5.4 million lost DALYs
that the World Health Organization attributes to global warming. But whereas death and
disease from poverty are a fact, attributing death and disease to global warming is highly
speculative.

In 2007 Jean Ziegler, the UN’s right-to-food rapporteur, said that the transformation of wheat
and maize crops into biofuel was having an ‘absolutely catastrophic’ effect on the world’s poor,
and called it a ‘crime against humanity’ (independent.co.uk). UN Secretary-General Ban Ki-
moon rejected his call for a five-year moratorium on biofuel production. The EU’s environment
commissioner acknowledged that the EU had underestimated the problems caused by biofuels,
but insisted that suspending the target fixed for biofuels was out of the question (dw-world.de).

The rush to boost biofuel production has led to huge tracts of tropical forests in Malaysia and
Indonesia being cleared to create palm plantations for biodiesel production – destroying unique
plant and animal species and eroding fragile tropical topsoil (Manning & Garbon, 2009, 39).

Cellulosic ethanol is made from non-food crops such as switchgrass and giant miscanthus, or
from wood chips and sawdust. But like other biomass energy projects, it is plagued by low
power density. To meet 10% of its oil needs from cellulosic ethanol, the US would need to plant
42.1 million acres in switchgrass – an area equal to about 10% of its cropland. Another problem
is that there is no infrastructure available to plant, harvest and transport the switchgrass or
other biomass source to the refineries. Moreover, as with corn ethanol, the amount of energy
gained by producing cellulosic ethanol is negligible (Bryce, 182-4).

The use of fossil fuels in iron and steel production will be difficult to replace. A return to charcoal
would be the only practical choice. Using tropical eucalyptus and the best Brazilian smelting
practices, half of Brazil’s total forested area in 2000 would have to be devoted to growing wood
for the world’s metallurgical charcoal – an unrealistic proposition. As far as the production of
nitrogenous fertilizer is concerned, natural gas is generally used as a source of hydrogen and
as a fuel (oil and coal are more cumbersome), and no large-scale nonfossil alternative to this
technique is commercially available at present (Smil, 2008a, 361).

MacKay (285) explains that algae for making biofuel are grown in water heavily enriched with
CO2, sometimes originating from power stations or other industrial facilities. Ponds in sunny
parts of the US fed with CO2 concentrated to 10% have a power density of 4 W/m2. Without the
concentrated CO2, productivity of algae drops 100-fold. However, MacKay cannot bring himself
to admit that CO2 enrichment of the atmosphere similarly boosts bioproductivity, preferring
instead to endlessly repeat his mantra of ‘carbon pollution’ – a cunningly chosen term since
some forms of carbon are pollutants. Perhaps he fears that pointing out the simple fact that
CO2 is plant food might damage sales of his book.

Nuclear power
All existing nuclear power plants involve nuclear fission, rather than nuclear fusion. In fission
reactions heavy nuclei release energy when they split into medium-sized nuclei, while in fusion
reactions light nuclei release energy when they fuse into medium-sized nuclei (as is said to
happen in stars). The energy released from nuclear fission reactions is some 10 million times
larger than that from chemical reactions. About 2000 tons of uranium-235 can release as much
energy as burning 4.2 billion tons – or 1 cubic mile – of oil (Bryce, 2008, 212).
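
An order-of-magnitude check of this comparison (the energy contents assumed here, roughly 82 TJ per kg of U-235 fissioned and 42 GJ per tonne of oil, are standard round figures rather than values from the source):

# Comparing 2000 tonnes of U-235 with 4.2 billion tonnes (~1 cubic mile) of oil.
u235_energy_j = 2000 * 1000 * 82e12      # ~1.6e20 J from fission
oil_energy_j = 4.2e9 * 42e9              # ~1.8e20 J from combustion
print(f"{u235_energy_j:.1e} J vs {oil_energy_j:.1e} J")   # same order of magnitude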

Like fossil fuel power stations, nuclear power plants are thermal plants: the heat energy
released from the nuclear fuel turns water into steam which spins a turbine which drives an
electric generator. There are currently 440 nuclear reactors in operation worldwide, with a total
capacity of 375 GW, and they provide 13.8% of the world’s electricity. A further 61 reactors are
under construction. France obtains 75.2% of its electricity from nuclear power, the US 20.2%,
and the UK 17.9% (world-nuclear.org). Nuclear propulsion is used in several ships and
submarines, and some space probes are powered by radioisotope thermoelectric generators.

There are many different reactor designs, using various fuels and coolants. Most reactors
currently use uranium-235 (a uranium isotope whose nucleus contains a total of 235 protons and neutrons, and which makes up just 0.7% of all natural uranium) and discard the remaining U-238. Known uranium resources are
expected to last for over 80 years; further exploration will undoubtedly uncover more reserves,
and if the cost of extracting uranium from seawater falls, there will be no danger of scarcity.
Fast breeder reactors use U-238 (99.3% of all natural uranium) as well as U-235. They convert
U-238 to fissionable plutonium-239, and obtain about 60 times as much energy from the
uranium (MacKay, 162-3). Experimental fast breeder reactors are operating in half a dozen
countries.

1300 MW nuclear power plant at Cattenom, France. (en.wikipedia.org)

New, safer and more powerful reactor designs are coming to the market. One of them uses
thorium instead of uranium; thorium is four times more abundant than uranium and easier to
mine, and there is enough to power reactors for thousands of years. Whereas standard
reactors use only about 1% of natural uranium, thorium can be completely burned up, and does
not produce any plutonium (which could be used for making a nuclear bomb). Far less waste is
produced than with traditional nuclear reactors, and it’s only dangerous for 300 years. It is
physically impossible for the plant to melt down, because if the power goes out the system
naturally cools off (embeddedlab.csuohio.edu).

Building a large nuclear plant costs billions of dollars, but the long-term operating costs are
lower than those of coal and natural gas plants, because nuclear fuel costs a fraction of coal
and gas. The per-kilowatt construction costs of nuclear power plants are similar to those of
constructing offshore wind projects. But while nuclear plants usually have a capacity factor of
90%, offshore turbines only produce power about a third of the time. Solar power is even more
expensive than offshore wind. A new 2700 MW nuclear plant at the South Texas Project costs
$13 billion, but to build a solar plant with the same capacity rating would cost about $16.2
billion. And in practice the solar facility would produce at least one-third less energy than the
nuclear reactor (Bryce, 262-4).

According to the International Energy Agency, new nuclear power plants that begin operations
between 2015 and 2020 will be able to produce electricity for about $72 per megawatt-hour,
whereas onshore wind costs will be about $94 per megawatt-hour. Nuclear will be among the
cheapest options, even when compared to coal-fired power plants that use high-efficiency or
ultra-supercritical combustion (Bryce, 259).

When operating, nuclear plants emit no CO2, but huge amounts of concrete and steel are
required in their construction. The IPCC estimates that the total life-cycle greenhouse gas
emissions (including construction, fuel processing and decommissioning) per unit of electricity
produced from nuclear power are less than 40 g CO2-equivalent per kWh, similar to those for
renewable energy sources (ipcc.ch).

Environmental groups tend to oppose nuclear power on the grounds that it is too expensive and
dangerous. Greenpeace International says that nuclear power is ‘an unacceptable risk to the
environment and to humanity’ and calls for all nuclear power plants to be closed down
(greenpeace.org). However, a number of high-profile environmentalists disagree (Bryce, 257-8).
For instance, James Lovelock, who pioneered the Gaia theory that the earth is a self-regulating
organism, believes that nuclear power is the only viable option for large-scale reductions in CO2
emissions, and says that nuclear energy has proved to be the safest of all energy sources.

Nuclear power has also been endorsed by ecologist Patrick Moore, a cofounder of
Greenpeace, who says: ‘Nuclear energy is the only non-greenhouse-gas-emitting power source
that can effectively replace fossil fuels while satisfying the world’s increasing demand for
energy’ (Bell, 2011). Both Lovelock and Moore are associated with Environmentalists For
Nuclear Energy (EFN). The nuclear industry, too, is trying to capitalize on current irrational fears
about ‘CO2 pollution’ by highlighting that nuclear power plants emit no CO2.

The volume of solid waste produced by nuclear reactors is relatively small, but a small portion
is highly radioactive. In the UK, the ash from 10 coal-fired power stations would have a mass of
4 million tons per year (about 40 litres per person per year), while nuclear waste from Britain’s
10 nuclear power stations has a volume of just 0.84 litres per person per year. Only 25 millilitres
of this is highly radioactive. Over a lifetime the total amount of high-level waste would cover just
one tenth of a square kilometre to a depth of 1 metre. By contrast, municipal waste in the UK
amounts to 517 kg per year per person, and hazardous waste 83 kg per year per person
(MacKay, 69-70, 367). In countries with nuclear power, radioactive wastes make up less than
1% of total industrial toxic wastes, much of which remains hazardous indefinitely (world-
nuclear.org).

High-level nuclear waste is first stored in cooling ponds at the reactor site for 40 to 50 years, by
which time the level of radioactivity has dropped 1000-fold. In some European countries this
waste is then reprocessed, with the uranium and plutonium being separated off for reuse. This
allows about 97% of the spent fuel to be recycled, leaving only 3% as high-level waste. The
plutonium is sub-weapons grade but can be used in fresh mixed oxide (MOX) fuel for nuclear
reactors. If the spent fuel is reprocessed, the separated waste is vitrified and sealed inside
stainless steel canisters. The final disposal of vitrified wastes, or of used fuel assemblies that
have not been reprocessed, requires their long-term isolation from the environment, usually in
stable geological formations some 500 metres deep. After being buried for about 1000 years,
the radioactivity will have dropped to a level similar to that of naturally-occurring uranium ore,
though in a more concentrated form (world-nuclear.org).

Storage pond for spent fuel at the Sellafield reprocessing plant in the UK.

Major commercial reprocessing plants are operating in France and the UK, and Japan plans to
bring one on line in 2012. Reprocessing is not allowed in the US, due to nuclear proliferation
concerns. It is worth noting that France, Israel, North Korea and Pakistan all developed nuclear
weapons before they developed nuclear power. Of these four, only France is now producing
significant amounts of electricity with fission.

At present, waste is mainly stored at individual reactor sites, though centralized underground
repositories that are well guarded and managed would be preferable. In the US the construction
of a permanent underground storage at Yucca Mountain in Nevada has effectively been
cancelled.

As for (hot) nuclear fusion, although tens of billions of dollars have been thrown at it, there are
no prospects of it becoming a viable source of power generation anytime soon, mainly due to
the enormous temperatures (up to 300 million degrees Celsius) and pressures required.
Tokamak reactors are considered the most likely means of achieving practical nuclear fusion
energy. Their basic design is a torus (or donut) within which intense magnetic fields confine
very hot plasma. In the 1970s fusion power was said to be 30 years away. Today, it is still said
to be 30 years away.

Inside a tokamak, with and without a plasma. (EFDA-JET)

Safety

The worst nuclear accident to date was the disaster at the Chernobyl nuclear power plant in the
Soviet Union in April 1986. During a test in which important control systems had been switched
off (in violation of safety regulations), a sudden power surge caused a steam explosion that
ruptured the reactor vessel and led to the destruction of the reactor core and severe damage to
the reactor building, which had no containment structure. The resulting plume of highly
radioactive smoke drifted over large parts of the western Soviet Union and Europe. The United
Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) puts the total
deaths from radiation at about 66 as of 2008 (unscear.org). As for the number of people who
could eventually die of radiation exposure as a result of Chernobyl, a 2005 report gave an
estimate of 4000 (who.int).

In 1979 one of the two units at the nuclear plant on Three Mile Island in the US suffered a
partial meltdown. The accident resulted in no deaths or injuries to plant workers or members of
nearby communities, but the bad publicity held back the development of nuclear power in the
US for decades.

Fears about nuclear accidents have been stoked more recently by events at the Fukushima
Daiichi nuclear power plant in Japan. The massive magnitude 9 earthquake and the ensuing
14-metre tsunami on 11 March 2011 led to a series of equipment failures and releases of
radioactive materials. Although the three reactors in operation shut down automatically after the
earthquake, the tsunami knocked out the emergency generators – something that could have
been avoided if the backup system had been better designed. This resulted in partial core
meltdown. The plant’s six boiling-water reactors are second-generation technology, nearly 40
years old. Many countries are now reevaluating their nuclear energy programmes. Under
pressure from public opinion, the German government has decided to close all its 17 nuclear
plants by 2022; since these plants produce 26.1% of its electricity, this will be a major
challenge.

Every energy industry has its share of accidents. In the fossil fuel industry there are drilling-rig
disasters, oil spills from shipping accidents, helicopters lost at sea, pipeline fires, refinery
explosions, coal mine accidents, and so on. According to EU figures, coal, lignite and oil have
the highest death rates, followed by peat and biomass power, with death rates above 1 per
gigawatt-year (GWy). Nuclear and wind come out best, with death rates below 0.2 per GWy
(MacKay, 168).

In the US, nuclear power has caused no known deaths, whereas candles kill 126 people a year,
and alcohol 100,000. In Britain nuclear power has generated 200 GWy of electricity and the
nuclear industry has had 1 fatality – an impressively low death rate compared with the fossil
fuel industry. For comparison, 3000 people per year die on Britain’s roads. Worldwide, the
death rate from nuclear power is estimated at 2.4 deaths per GWy, and the mortality rate is
expected to fall in the future. In the mid-1990s the mortality rate associated with wind power
was 3.5 deaths per GWy, but the figure had dropped to 1.3 deaths per GWy by 2000 (MacKay,
168, 175). In 2004 an estimated 1.2 million people were killed and 50 million more were injured
in motor vehicle collisions worldwide.

From 12 to 28 March 2011, while headlines were dominated by events at Fukushima, a total of
47 coal miners were killed in four accidents in China, 47 in two accidents in Pakistan, and 1 in
an accident in the USA. Most of the accidents involved gas explosions (en.wordpress.com). In
China the death rate in coal mines, per ton of coal delivered, is 50 times that of most nations;
there were 2600 fatalities in Chinese mines in 2009 alone.

Rescue workers retrieve the body of one of the miners killed in a methane gas
explosion in a coal mine in Baluchistan province, Pakistan, on 20 March 2011.
(coalmountain.wordpress.com)

Radiation and health

Heavy, unstable atoms undergoing radioactive decay emit three types of ionizing radiation:
alpha particles (helium nuclei), which cannot penetrate the skin and can be stopped by a sheet
of paper, but are dangerous in the lung; beta particles (electrons), which can penetrate into the
body but can be stopped by a sheet of aluminium foil; and gamma radiation (very high-
frequency electromagnetic radiation), which can go right through the body and requires several
centimetres of lead or concrete, or a metre or so of water, to stop it.

The radiation absorbed by any material is measured in grays (Gy): 1 Gy = 1 J/kg. The radiation
absorbed by humans – known as the effective dose – is usually expressed in sieverts (Sv) or
millisieverts (mSv); it is calculated by multiplying the absorbed dose (in grays) by a factor that
depends on the type of radiation and the type of tissue absorbing the radiation. One gray of
alpha radiation, for example, will have a greater effect than one gray of beta radiation on a
particular type of tissue, but one sievert of both produces the same biological effect. (In the US:
1 rad = 0.01 Gy; 1 rem = 0.01 Sv, or 10 mSv.)
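
As a minimal illustration of the dose arithmetic just described, the sketch below assumes the standard ICRP radiation weighting factors (1 for beta and gamma, 20 for alpha); it ignores the tissue-specific weighting that a full effective-dose calculation would also include:

# Sketch of the gray-to-sievert conversion described above. The weighting factors
# are the standard ICRP radiation weighting factors (an assumption); tissue-specific
# weighting is ignored for simplicity.
RADIATION_WEIGHTING = {"alpha": 20.0, "beta": 1.0, "gamma": 1.0}

def equivalent_dose_sv(absorbed_dose_gy, radiation_type):
    """Convert an absorbed dose in grays to an equivalent dose in sieverts."""
    return absorbed_dose_gy * RADIATION_WEIGHTING[radiation_type]

# 1 mGy of alpha radiation gives 20 mSv; 1 mGy of beta or gamma gives 1 mSv,
# which is why one gray of alpha has a greater effect than one gray of beta.
print(equivalent_dose_sv(0.001, "alpha"))   # 0.02 Sv
print(equivalent_dose_sv(0.001, "beta"))    # 0.001 Sv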

The following table shows the symptoms corresponding to different doses of ionizing radiation
received within one day (niehs.nih.gov).

Dose                          | Symptoms                                                      | Outcome
0 - 0.25 Sv (0 - 250 mSv)     | None                                                          | –
0.25 - 1 Sv (250 - 1000 mSv)  | Some people feel nausea and loss of appetite                  | Bone marrow, lymph nodes and spleen damaged
1 - 3 Sv (1000 - 3000 mSv)    | Mild to severe nausea, loss of appetite, infection            | Same as above, but more severe; recovery probable, but not assured
3 - 6 Sv (3000 - 6000 mSv)    | Severe nausea, loss of appetite; haemorrhaging, infection, diarrhoea, peeling of skin, sterility | Death if untreated
6 - 10 Sv (6000 - 10,000 mSv) | Above symptoms plus central nervous system impairment         | Death expected
Above 10 Sv                   | Incapacitation                                                | Death

The natural background radiation exposure from sun, rocks and building materials in the US is
3.6 mSv/year on average. The following table gives the typical dose for various additional
exposures (niehs.nih.gov; new.ans.org).

Dose Activity
2.4 mSv/yr Working in the nuclear industry
0.01 mSv/yr Exposure to public from the nuclear industry
1.5 mSv/yr Airline crew flying 1200 miles a week
9 mSv/yr Airline crew flying to Tokyo (1 trip per week)
0.10 mSv Chest x-ray
7.0 mSv Chest CT scan
0.015 mSv/yr Exposure to public from accident at Three Mile Island
0.015 mSv/yr Exposure to TV viewers watching an average of 10 hours per week

Large, acute doses of radiation are harmful. But there is a widespread myth that any radiation
exposure, however small, carries a health risk. This is known as the linear no-threshold (LNT)
model. It implies that if a certain level of radiation exposure produces 1 cancer in a population
of 100 people, then one-tenth of that amount of radiation will produce 1 cancer in a population
of 1000. This is like saying that if 25 cups of water forced down the throat will generally cause a
person to die of drowning, then drinking 1 cup of water would produce a 1 in 25 chance of
drowning. This flawed model is behind claims that nearly a million people have already died as
a result of the Chernobyl disaster (en.wikipedia.org). It assumes that every particle or quantum
of ionizing radiation is likely to damage a cell’s DNA, producing mutations which lead to cancer.
As there are about 1 billion radioactive decays every day in the average adult body, we should
all be sick from cancer from a young age if that were true.
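
In arithmetic form, the linear no-threshold extrapolation being criticized here looks like this (a sketch; the risk coefficient is a placeholder, not a measured value):

# The linear no-threshold extrapolation: predicted excess cancers scale linearly
# with dose, with no lower cutoff. The risk coefficient below is a placeholder.
def lnt_excess_cases(dose, population, risk_per_person_per_unit_dose):
    return dose * risk_per_person_per_unit_dose * population

risk_coefficient = 0.01    # 1 cancer per 100 people at a dose of 1 (arbitrary units)
print(lnt_excess_cases(1.0, 100, risk_coefficient))    # 1.0 case in 100 people
print(lnt_excess_cases(0.1, 1000, risk_coefficient))   # 1.0 case in 1000 people - same prediction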

There is no scientific evidence that doses below about 50 mSv in a short time or about 100
mSv per year carry any risk; in fact there is abundant evidence that doses up to 100 mSv per
year can have beneficial effects. This is known as radiation hormesis (Hecht, 2009; Kauffman,
2003, 2006; world-nuclear.org). Low levels of exposure can stimulate the body’s defence and
repair mechanisms, whereas very high levels overwhelm them.

The main source of exposure for most people is naturally-occurring background radiation.
Levels typically range from about 1.5 to 3.5 mSv/year but can be more than 50 mSv/year.
Lifetime doses range up to several thousand millisieverts, but there is no evidence of increased
cancers or other health problems arising from these high natural levels. Studies have shown
that medical and industrial workers who are exposed to radiation above background levels
often have lower rates of mortality from cancer and other causes than the general public.

The main contributor to background radiation exposure is usually radon gas from radioactive
sources deep underground. Many healing springs and baths derive their benefits from low-dose
radiation in the water, usually in the form of absorbed radon gas. In Europe, the use of hot
springs with high radon content dates back some 6000 years (radonmine.com).

It is worth noting that radiation protection standards are based on the discredited linear no-
threshold model.

Interestingly, due to the substantial amounts of granite in their construction, many
public buildings including Australia’s Parliament House and New York Grand Central
Station, would have some difficulty in getting a licence to operate if they were
nuclear power stations. (world-nuclear.org)

There have been no deaths or cases of radiation sickness from the Fukushima nuclear
accident, but over 100,000 people had to be evacuated from their homes to ensure this. In May
2013 UNSCEAR reported that most Japanese people were exposed to additional radiation
amounting to less than the typical natural background level of 2.1 mSv per year. People living in
Fukushima prefecture are expected to be exposed to around 10 mSv over their entire lifetimes,
while for those living further away the dose would be 0.2 mSv per year. However, 146
emergency workers received radiation doses of over 100 mSv during the crisis, and will be
closely monitored (world-nuclear.org).

New energy

Over the past 150 years various scientists and inventors have designed energy-producing
devices that challenge the mainstream scientific understanding of the physical world (Tutt,
2001; Manning & Garbon, 2009). Some of the devices seem to be extracting energy from an as
yet unrecognized source – the all-pervasive ether. In principle, the ether could provide an
unlimited source of clean energy – an idea that repels some people, who automatically
associate abundance with irresponsible consumerism and plundering of the earth’s resources.

At present no ‘free-energy’, ‘over-unity’ or ‘new-energy’ devices are available on the market.


Jeane Manning and Joel Garbon write:

While capital pours into the conventional technologies, visionary scientists and
inventors researching breakthrough energy technologies are languishing due to lack
of funding, ignorance and indifference at political level, and often official obstruction
and even heavy-handed oppression. (2009, 36)

A few of the more promising candidate technologies for future energy generation are outlined
below.

Cold fusion/LENR

‘Cold fusion’ was born in March 1989 when Martin Fleischmann and Stanley Pons reported that
experiments with electrochemical cells filled with heavy water, using a palladium cathode, had
generated so much excess heat that it had probably come from the fusion of two deuterium
nuclei (deuterons). Since the experiments took place at room temperature and pressure, this
contradicted the mainstream view that the only form of fusion possible is thermonuclear fusion,
or ‘hot fusion’, which requires extreme temperatures and pressures to overcome the Coulomb
barrier and force two positively-charged nuclei to merge. As a result, ‘cold fusion’ was
dismissed as ‘voodoo science’ by most orthodox scientists, especially since several attempts to
replicate Fleischmann and Pons’ results were unsuccessful.

Nevertheless, small-scale research has continued to this day in countries such as the US,
Russia, China, Japan, Italy, France and Israel, and various anomalous phenomena have
repeatedly been verified, though some experimental results are very unpredictable. Cold fusion
is now often referred to as low-energy nuclear reactions (LENR), chemically-assisted nuclear
reactions (CANR), or condensed matter nuclear science (CMNS). There are many different
experimental setups but they usually include: a metal, such as palladium or nickel, in bulk, thin
films or powder; deuterium and/or hydrogen, in the form of water, gas or plasma; and an
excitation in the form of electricity, magnetism, temperature, pressure, laser beams, or acoustic
waves (en.wikipedia.org).

Replicated phenomena include anomalous amounts of heat and helium, occasional small amounts of tritium and neutrons, and transmutation of one element (or isotope) into another.
The various reactions are thought to occur on or near the surface of certain special materials
containing hydrogen isotopes. And they take place without the application of high energy and
without the release of the harmful, high-intensity radiation normally associated with a nuclear
process. To overcome or penetrate the Coulomb barrier, hot fusion uses high energy, i.e. brute
force, whereas ‘cold fusion’ appears to involve a subtler process similar to that used by a
catalyst (Storms, 2010).

That modern science’s understanding of nuclear reactions is inadequate is also shown by the
evidence for transmutation in living organisms (Storms, 19). Starting with the work of Louis
Kervran in the 1960s, various researchers have established that organisms can create
elements they need by transmuting available elements. Moulds and yeasts, for example, are
able to increase the concentrations of potassium, magnesium, iron, and calcium in their cells.
The abundance of elements on earth has probably been modified by the presence of life, and it
may be possible to use bacteria to decontaminate soil. Nature apparently has gentler ways of
achieving transmutation and other nuclear reactions than the violent methods known to
mainstream science.

Nuclear fission and (hot) fusion involve ‘strong nuclear interactions’, which release energetic
electrons and alpha particles, and high-energy neutrons, gamma rays and X-rays. That is why
radiation containment structures for commercial fission reactors often have walls consisting of 1
metre thick reinforced concrete and 25 cm thick special steel plates. According to a theory
developed by Lewis Larsen and Allan Widom, LENR does not involve nuclear fusion in the strict
sense of the term, but rather weak-interaction neutron creation and ultra-low-momentum,
neutron-catalyzed, low-energy nuclear reactions (Krivit, 2009, 2010). ‘Weak interactions’ are
defined as any type of nuclear process that emits or absorbs a neutrino (a hypothetical,
electrically neutral particle with such a tiny mass that it barely interacts with other matter); an
example is beta decay, whereby a neutron in an unstable atom emits an electron and a neutrino
and turns into a proton.

According to the Widom-Larsen theory, the reason why LENR cells do not emit large fluxes of
high-energy neutrons is because nearly all ultra-low-momentum neutrons are absorbed locally.
And the reason researchers have seen little or no gamma emissions from LENR experiments –
of the kind associated with fission and fusion – is because gamma radiation is internally
converted into more-benign, infrared (heat) radiation.

Larsen and Widom see their theory as an extension of the mainstream ‘standard model’ of
particle physics, and that’s why they have managed to silence some long-standing critics of
‘cold fusion’. However, the standard model has many illogical and irrational features; a realistic
model requires an underlying, subquantum energy continuum – the ether (see The farce of
modern physics).

A large proportion of cold fusion researchers do not accept the Widom-Larsen theory on the
grounds that it cannot explain all the phenomena that occur. Its critics propose, for example,
that protons or deuterons, rather than neutrons, play a key role in transmutation (Storms, 2010).
At present there is no generally accepted, comprehensive theory of LENR.

Larsen (2008) argues that if successfully commercialized, low-energy nuclear reactions could
herald a new era of affordable, safe and clean energy.
Being nuclear, LENRs could potentially improve by many orders of magnitude the
density and longevity of energy storage compared with existing technologies such
as chemical batteries and electrostatic capacitors, and provide a vast array of cost
effective, scalable, portable, and distributed power generation systems that could be
deployed throughout the world. ... LENRs can be used to develop a safe nuclear
energy technology that does not create dangerous hard radiation and/or long-lived
radioactive and toxic wastes.

Larsen (2009) argues that LENR may also solve many serious public safety and environmental
problems associated with current nuclear fission technologies.

[R]adioactive nuclear waste in spent reactor fuel rods and assemblies could
potentially be processed onsite with LENR technology to transmute waste into
complex arrays of non-radioactive stable elements and isotopes. Exactly the same
approach could be used to get rid of fuel remaining in nuclear reactors after
permanent shutdown.

Italian inventor Andrea Rossi, with the help of Sergio Focardi, has developed a nickel-hydrogen
reactor, known as the Energy Catalyzer or E-Cat (ecat.com; e-catworld.com; pesn.com). The
device is said to work by infusing heated hydrogen into nickel with the aid of an unnamed
catalyst, resulting in the transmutation of nickel into copper and the release of heat; it allegedly
generates about 6 to 10 times more energy than it consumes. Rossi has been granted an
Italian patent, but his application for an international patent has run into difficulties because it is
not detailed enough. In a demonstration for an invited audience on 28 October 2011, a 1 MWth
device operated for over 5 hours at a power level of about 472 kW in self-sustained mode, but
Rossi says that the reactor is easier to control if the input power is kept on.

The E-Cat has generated great controversy both inside and outside the LENR community.
Exactly what is happening in the device remains unclear as Rossi has not published all the
necessary data; several critical questions have been raised about his claim that a proton is
added to the nickel nucleus (newenergytimes.com; en.wikipedia.org). Rossi says he is more
interested in making money than convincing sceptical scientists. An undisclosed branch of the
US military has reportedly shown interest in purchasing a 1 MW plant for 2 million euros and
ordered another 12. Rossi’s US-based company Leonardo Corporation hoped to start selling
home heaters in 2013, and several rival companies are planning to develop nickel-hydrogen
reactors (scientificexploration.org). Whether anything comes of these plans remains to be seen.

Andrea Rossi pictured with E-Cat.

Hydrinos

BlackLight Power Inc., founded by Randell Mills in 1991, claims to have discovered a new,
sustainable, nonpolluting energy source. The patented BlackLight process is said to involve the
formation of a previously unknown form of hydrogen called ‘hydrino’. Hydrinos are produced
when the electron in a hydrogen atom transitions to an energy state below its ‘ground state’,
resulting in a smaller-radius hydrogen atom – something which is impossible according to
orthodox science. This is accompanied by the release of large amounts of chemical energy,
which can generate power as either electricity or heat. The only consumable, the hydrogen fuel,
is obtainable from water, using only 0.5% of the electrical output.

Mills has received no government funding, but has attracted millions of dollars from private
investors. In 2009 and 2010 Rowan University scientists independently validated BlackLight’s
solid fuel chemistry and hydrino products. Recently, scientists at the Harvard Smithsonian
Center for Astrophysics verified the unique spectral emission of hydrinos (blacklightpower.com).

BlackLight Power is developing a Catalyst Induced Hydrino Transition (CIHT) cell, which
produces electricity by reacting hydrogen to form hydrinos.

The cost is forecast at $25 per kW with no dependence on the electrical grid, fuels
infrastructure, sun, wind, or other external variable power sources allowing the CIHT
cell to be autonomous. ... It is expected that CIHT will competitively, economically,
logistically, and environmentally displace essentially all power sources of all sizes:
thermal, electrical, automotive, marine, rail, aviation, and aerospace.
(blacklightpower.com)

Whether this is a realistic assessment or just hype remains to be seen.

Aetherometry

Paulo and Alexandra Correa have developed a detailed model of a dynamic ether, known as
aetherometry. Their experiments with electroscopes, ‘orgone accumulators’ (specially designed
metal enclosures or Faraday cages), and Tesla coils point to the existence of both electric and
nonelectric forms of etheric energy. They rule out a purely electromagnetic ether, such as the
zero-point field of quantum physics. They contend that ether units ‘superimpose’ to form
physical particles, which take the shape of a torus. Pursuing an insight of Wilhelm Reich, they
have found evidence that photons do not travel through space: the sun emits electric, etheric
radiation which can travel much faster than light, and photons are transient, vortex-like
structures generated from the energy shed by decelerating physical charges (such as
electrons). They argue that gravity is essentially an electrodynamic force, and have found
experimental evidence of antigravity (Aetherometry and gravity). Aetherometry proposes that
the rotational and translatory movements of planets, stars, and galaxies are the result of
spinning, vortical motions of ether on multiple scales.

The Correas have developed several power-generation technologies:


• the patented Pulsed Abnormal Glow Discharge (PAGD) plasma reactor, which produces
excess energy by setting up a resonance between accelerated electron plasma and local
etheric energy;
• the table-top Aetherometric Fusion Reactor, which uses hydrogen and deuterium to generate
both sensible heat and electricity without the need for any moving parts, while partially
regenerating the fuel. It employs two controlled nuclear reactions identified by aetherometry;
• the HYBORAC energy converter, which taps the latent heat of a Faraday cage and can supply
heat, mechanical work, and electricity around the clock using solar, atmospheric and geologic
sources of etheric energy;
• the patented self-sustaining aether motor, which extracts etheric energy from Faraday cage-
like enclosures or resonant cavities, living beings, the ground, vacuums, and atmospheric
antennas.

Paulo and Alexandra Correa holding PAGD reactors in their laboratory.


The Correas say that their PAGD technology has been ready for commercialization for well over
10 years. Yet despite intense efforts, ‘no sponsor has come forth to help this technology come
to fruition. Ecologist movements have been silent on the technology. Politicians, governments
and their granting agencies have refused to become involved unless total control is given to
them.’

Sources

Larry Bell, ‘Renewable energy delusions: getting a real grip on alternatives’, 15 March 2011,
blogs.forbes.com

Christopher Booker, ‘Why the £250bn wind power industry could be the greatest scam of our
age’, 28 February 2011, dailymail.co.uk

BP, BP Statistical Review of World Energy June 2010, 2010, bp.com

Robert Bryce, Power Hungry: The myths of ‘green’ energy and the real fuels of the future,
Philadelphia, PA: PublicAffairs, 2008

Caithness Windfarm Information Forum, Summary of wind turbine accident data to 31 March
2011, 2011, caithnesswindfarms.co.uk

Dong Choi, ‘Oceanic crust is continental; great, timely news for the oil industry!’ (Editorial), New
Concepts in Global Tectonics Newsletter, no. 45, 2007, 2

Cutler J. Cleveland, ‘Energy transitions past and future’, The Encyclopedia of Earth, 2011,
eoearth.org

Jeffrey A. Glassman, Internal modeling mistakes by IPCC are sufficient to reject its
anthropogenic global warming conjecture, 2009, rocketscientistsjournal.com

Jeffrey A. Glassman, The cause of earth’s climate change is the sun, 2010a,
rocketscientistsjournal.com

Jeffrey A. Glassman, On why CO2 is known not to have accumulated in the atmosphere & what
is happening with CO2 in the modern era, 2010b, rocketscientistsjournal.com

Indur M. Goklany, ‘Could biofuel policies increase death and disease in developing countries?’,
Journal of American Physicians and Surgeons, v. 16, 2011, 9-13, www.jpands.org

William Happer, The truth about greenhouse gases: the dubious science of the climate
crusaders, 2011, firstthings.com

Laurence Hecht, ‘Is the fear of radiation constitutional?’, 21st Century Science & Technology,
2009, 12-28, 21stcenturysciencetech.com

Craig D. Idso & Sherwood B. Idso, Carbon Dioxide and Earth’s Future: Pursuing the prudent
path, 2011, scienceandpublicpolicy.org

IEA (International Energy Agency), World Energy Outlook 2009, 2009, iea.org

IEA, Energy Poverty: How to make modern energy access universal?, 2010,
worldenergyoutlook.org

Zbigniew Jaworowski, ‘The sun, not man, still rules our climate’, 21st Century Science &
Technology, 2009, 10-28, 21stcenturysciencetech.com

Joel M. Kauffman, ‘Radiation hormesis: demonstrated, deconstructed, denied, dismissed, and some implications for public policy’, Journal of Scientific Exploration, v. 17, 2003, 389-407,
scientificexploration.org

Joel M. Kauffman, Malignant Medical Myths: Why medical treatment causes 200,000 deaths in
the USA each year, and how to protect yourself, West Conshohocken, PA: Infinity, 2006

J. Kill, S. Ozinga, S. Pavett & R. Wainwright, Trading Carbon: How it works and why it is
controversial, FERN, 2010, fern.org

Steven B. Krivit, ‘The decoupling of cold fusion from LENR’, 2009, newenergytimes.com

Steven B. Krivit, Cold Fusion Is Neither, 2010, newenergytimes.com

Lewis Larsen, ‘Low energy nuclear reactions for green energy’, 2008, i-sis.org.uk

Lewis Larsen, ‘Safe, less costly nuclear reactor decommissioning and more’, 2009, i-sis.org.uk

David J.C. MacKay, Sustainable Energy – Without the Hot Air, Cambridge: UIT, 2009 (online
version: withouthotair.com)

Jeane Manning & Joel Garbon, Breakthrough Power: How quantum-leap new energy
inventions can transform our world, Vancouver: Amber Bridge Books, 2009

Ross McKitrick, Stephen McIntyre & Chad Herman, ‘Panel and multivariate methods for tests of
trend equivalence in climate data series’, Atmospheric Science Letters, v. 11, 2011, 270-7,
http://onlinelibrary.wiley.com/doi/10.1002/asl.290/abstract

David Middleton, CO2: ice cores vs. plant stomata, 2010, wattsupwiththat.com

Donald Mitchell, A note on rising food prices, World Bank Policy Research Working Paper
4682, 2008, wds.worldbank.org

OECD, Biofuel policies in OECD countries costly and ineffective, says report, 2008, oecd.org

Harrison H. Schmitt, ‘The role of Greenland and Antarctic cores in climate science’, Science
and Environmental Policy Project, 2010, 4-7, sepp.org

Vaclav Smil, Energy in Nature and Society: General energetics of complex systems,
Cambridge, MA: MIT Press, 2008a

Vaclav Smil, ‘Moore’s curse and the great energy delusion’, The American, November 2008b,
american.com

Vaclav Smil, ‘Science, energy, ethics, and civilization’, in: R.Y. Chiao et al. (eds.), Visions of
Discovery: New light on physics, cosmology, and consciousness, Cambridge: Cambridge
University Press, 2010a, 709-29, vaclavsmil.com

Vaclav Smil, Power density primer: understanding the spatial dimension of the unfolding
transition to renewable electricity generation, 2010b, vaclavsmil.com
Robert P. Smith, Toward Rational Energy Planning, 2011, scienceandpublicpolicy.org

Roy W. Spencer, Satellite and climate model evidence against substantial manmade climate
change, 2008, drroyspencer.com

Edmund Storms, ‘Status of cold fusion (2010)’, Naturwissenschaften, v. 97, 2010, 861-81,
www.springerlink.com/content/9522x473v80352w9, preprint: lenr-canr.org

Stuart Young Consulting, Analysis of UK Wind Power Generation: November 2008 to December 2010, John Muir Trust, 2011, jmt.org

Keith Tutt, The Search for Free Energy: A scientific tale of jealousy, genius and electricity,
London: Simon & Schuster, 2001

Noor van Andel, CO2 and climate change, 2011, climategate.nl

B.I. Vasiliev & T. Yano, ‘Ancient and continental rocks discovered in the ocean floors’, New
Concepts in Global Tectonics Newsletter, no. 43, 2007, 3-17

World Bank, Biofuels: the promise and the risks, 2008, siteresources.worldbank.org
