
1) Crude Oil Exploration
a) Definition of Crude Oil
Crude oil is a black, dark brown or greenish liquid found in porous rock formations in the earth.

Biofuel is any fuel derived from biomass: recently living organisms or their metabolic byproducts, such as manure from cows. It is a renewable energy source, unlike other natural resources such as petroleum, coal and nuclear fuels. One definition of biofuel is any fuel with an 80% minimum content by volume of materials derived from living organisms harvested within the ten years preceding its manufacture. Like coal and petroleum, biomass is a form of stored solar energy. The energy of the sun is "captured" through the process of photosynthesis in growing plants. (See also: Systems ecology) One advantage of biofuel in comparison to most other fuel types is that the energy within the biomass can be stored for an indefinite period without hazard.

Sugar cane, a biofuel feedstock.
Agricultural products specifically grown for use as biofuels include corn and soybeans, primarily in the United States, as well as flaxseed and rapeseed, primarily in Europe; hemp is a growing crop around the world, except in America. Biodegradable outputs from industry, agriculture, forestry, and households can also be used to produce bioenergy; examples include straw, timber, manure, sewage, biodegradable waste and food leftovers. These feedstocks are converted into biogas through anaerobic digestion. Biomass used as fuel often consists of underutilized types, like chaff and animal waste. Much research is currently in progress into the utilization of microalgae as an energy source, with applications being developed for biodiesel, ethanol, methanol, methane, and even hydrogen. Use of hemp is on the rise, although politics currently restrains this technology.

Paradoxically, in some industrialized countries like Germany, food is cheaper than fuel when compared by price per joule. Central heating units supplied by food-grade wheat or maize are available. Biofuel can be used both for centralized and decentralized production of electricity and heat. As of 2005, bioenergy covers approximately 15% of the world's energy consumption. Most bioenergy is consumed in developing countries and is used for direct heating, as opposed to electricity production. However, Sweden and Finland supply 17% and 19%, respectively, of their energy needs with bioenergy, a high figure for industrialized countries. The production of biofuels to replace oil and natural gas is in active development, focusing on the use of cheap organic matter (usually cellulose, agricultural and sewage waste) in the efficient production of liquid and gas biofuels which yield a high net energy gain. The carbon in biofuels was recently extracted from atmospheric carbon dioxide by growing plants, so burning it does not result in a net increase of carbon dioxide in the Earth's atmosphere. As a result, biofuels are seen by many as a way to reduce the amount of carbon dioxide released into the atmosphere by using them to replace non-renewable sources of energy. Notably, the quality of timber or grassy biomass does not have a direct impact on its value as an energy source. Dried compressed peat is also sometimes considered a biofuel. However, it does not meet the criteria of being a renewable form of energy, or of the carbon being recently absorbed from atmospheric carbon dioxide by growing plants. Though more recent than petroleum or coal on the time scale of human industrialisation, peat is a fossil fuel and burning it does contribute to atmospheric CO2.

History
Biofuel has been used since the early days of the car industry. Nikolaus Otto, the inventor of the spark-ignition internal combustion engine, conceived his invention to run on ethanol, while Rudolf Diesel, the inventor of the diesel engine, conceived his engine to run on peanut oil. The Ford Model T, a car produced between 1908 and 1927, could run on ethanol. However, when crude oil began being cheaply extracted from deeper in the ground (thanks to drilling starting in the middle of the 19th century), cars began using fuels derived from oil. Then, with the oil shocks of 1973 and 1979, there was increased interest from governments and academics in biofuels. However, interest decreased with the oil counter-shock of 1986 that made oil prices cheaper again. Since about 2000, rising oil prices, concerns over a potential oil peak, greenhouse gas emissions (global warming), and instability in the Middle East have pushed renewed interest in biofuels. Government officials have made statements and given aid in favour of biofuels. For example, U.S. President George W. Bush said in his 2006 State of the Union speech that he wants the United States, by 2025, to replace 75% of the oil coming from the Middle East.

Types of high volume industrial biomass on Earth

Certain types of biomass have attracted research and industrial attention. Many of these are considered to be potentially useful for energy or for the production of bio-based products. Most of these are available in very large quantities and have low market value.

Algae, bagasse from sugarcane, dried distiller's grain, firewood, hemp, jatropha, landscaping waste, maiden grass, maize (corn), manure, meat and bone meal, miscanthus, peat, pet waste, plate waste, rice hulls, silage, stover, switchgrass, and whey.

Examples of biofuels
Biologically produced alcohols
Biologically produced alcohols, most commonly ethanol and methanol, and less commonly propanol and butanol, are produced by the action of microorganisms (see alcohol fuel).

Methanol, which is currently produced from natural gas, can also be produced from biomass, although this is not economically viable at present. The methanol economy is an interesting alternative to the hydrogen economy.
Biomass-to-liquid (BTL) fuels are synthetic fuels produced from syngas; syngas, in turn, is produced from biomass by gasification.
Ethanol fuel produced from sugar cane is being used as automotive fuel in Brazil. Ethanol produced from corn is being used as a gasoline additive (oxygenator) in the United States. Cellulosic ethanol is being manufactured from straw (an agricultural waste product) by Iogen Corporation of Ontario, Canada. ETBE containing 47% ethanol is currently the biggest biofuel contributor in Europe.
Butanol is formed by ABE fermentation (acetone, butanol, ethanol); experimental modifications of the ABE process show potentially high net energy gains, with butanol as the only liquid product. Butanol can be burned "straight" in existing gasoline engines (without modification to the engine or car), produces more energy, is less corrosive and less water-soluble than ethanol, and can be distributed via existing infrastructure.
Mixed alcohols (e.g., a mixture of ethanol, propanol, butanol, pentanol, hexanol and heptanol, such as Ecalene™) are obtained either by biomass-to-liquid technology (gasification to produce syngas followed by catalytic synthesis) or by the MixAlco process.
GTL (gas-to-liquid) and BTL (biomass-to-liquid) processes produce synthetic fuels via the so-called Fischer-Tropsch process; the synthetic biofuel, which contains oxygen, is used as an additive in high-quality diesel and petrol.

Biologically produced gases


Biogas is produced by the anaerobic digestion of organic material by anaerobes. It can be produced either from biodegradable waste materials or by the use of energy crops fed into anaerobic digesters to supplement gas yields. Biogas contains methane and can be recovered in industrial anaerobic digesters and mechanical biological treatment systems. Landfill gas is a less clean form of biogas which is produced in landfills through naturally occurring anaerobic digestion. If this gas is allowed to escape into the atmosphere, it acts as a potent greenhouse gas.

Biologically produced gases from wastes


Biologically produced oils and gases can be produced from various wastes:

Thermal depolymerization of waste can extract methane and other oils similar to petroleum. Pyrolysis oil may be produced from biomass, wood waste etc. using heat alone in the flash pyrolysis process. The oil has to be treated (water removal and pH adjustment) before use in conventional fuel systems or internal combustion engines. One company, GreenFuel Technologies Corporation, has developed a patented bioreactor system that uses nontoxic photosynthetic algae to take in smokestack flue gases and produce biofuels such as biodiesel, biogas and a dry fuel comparable to coal.

Biologically produced oils


Biologically produced oils can be used in diesel engines:

Straight vegetable oil (SVO).
Waste vegetable oil (WVO): waste cooking oils and greases, produced in quantity mostly by commercial kitchens.
Biodiesel, obtained from transesterification of animal fats and vegetable oil, directly usable in petroleum diesel engines.

Applications of biofuels
One widespread use of biofuels is in home cooking and heating. Typical fuels for this are wood, charcoal or dried dung. The biofuel may be burned on an open fireplace or in a special stove. The efficiency of this process varies widely, from 10% for a well-made fire (even less if the fire is not made carefully) up to 40% for a custom-designed charcoal stove. Inefficient use of fuel may be a minor cause of deforestation (though this is negligible compared to deliberate destruction to clear land for agricultural use), but more importantly it means that more work has to be put into gathering fuel; thus the quality of cooking stoves has a direct influence on the viability of biofuels. "American homeowners are turning to burning corn in special stoves to reduce their energy bills. Sales of corn-burning stoves have tripled this year [...] Corn-generated heat costs less than a fifth of the current rate for propane and about a third of electrical heat".

Direct electricity generation


The methane in biogas is often pure enough to be passed directly through gas engines to generate green energy. Anaerobic digesters or biogas power plants convert this renewable energy source into electricity, which can be used either commercially or on a local scale.

Use on farms
In Germany, small-scale use of biofuel is still a domain of agricultural farms. It is an official aim of the German government to use the entire potential of 200,000 farms for the production of biofuel and bioenergy. (Source: VDI-Bericht "Bioenergie - Energieträger der Zukunft".)

Home use
Small combustion engines have recently become available at very low prices. They allow the private homeowner to use small amounts of weakly compressed methane to generate electrical and thermal power (almost) sufficient for a well-insulated residential home.

Rolling Network
Although decentralised biofuel production is possible, so-called island operation poses problems with capacity and load balancing. If vehicles used for commuting, social or shopping trips are also used to transport energy, the result is a so-called rolling network. Higher efficiency is expected with wood-based biogas, which may be purified in a home filling station and released into the natural gas network at work or at special receiving gas stations. This kind of operation is not bound to constant delivery amounts and is flexible in both directions; i.e., refilling with gas is also possible if wood gas production is momentarily low or the distance travelled was high. With so-called plug-in hybrid electric vehicles it would in theory also be possible to carry energy produced along the way to work or home and feed it into the grid, but this is less efficient and also less probable.

Problems and solutions


Unfortunately, much cooking with biofuels is done indoors, without efficient ventilation, and using fuels such as dung produces airborne pollution. This can be a serious health hazard; 1.5 million deaths were attributed to this cause by the World Health Organisation as of 2000. There are various responses to this, such as improved stoves (including those with built-in flues) and switching to alternative fuel sources. Most of these responses have difficulties. One is that improved stoves are expensive and easily damaged. Another is that alternative fuels tend to be more expensive, but the people who rely on biofuels often do so precisely because they cannot afford alternatives. Organisations such as the Intermediate Technology Development Group work to make improved facilities for biofuel use and better alternatives accessible to those who cannot currently get them. This work is done through improving ventilation, switching to different uses of biomass such as the creation of biogas from solid biomatter, or switching to other alternatives such as micro-hydro power.

Direct biofuel
Direct biofuels are biofuels that can be used in existing, unmodified petroleum engines, like biodiesel and biobutanol. Another direct biofuel is E10 (in the US); however, most cars, even those built prior to 1988, will run on E5, and apparently E20 can be used in most post-1988 American cars without modification. Other than biodiesel, biobutanol and bioethanol, few direct biofuels, if any, exist.

International efforts
Recognizing the importance of bioenergy and its implementation, there are international organizations such as IEA Bioenergy, established in 1978 by the International Energy Agency (IEA), with the aim of improving cooperation and information exchange between countries that have national programs in bioenergy research, development and deployment.

Biomass is a renewable energy resource derived from the carbonaceous waste of various human and natural activities. It is derived from numerous sources, including by-products from the timber industry, agricultural crops, raw material from the forest, major parts of household waste, and wood. Biomass does not add carbon dioxide to the atmosphere, as it absorbs the same amount of carbon in growing as it releases when consumed as a fuel. Its advantage is that it can be used to generate electricity with the same equipment or power plants that now burn fossil fuels. Biomass is an important source of energy and the most important fuel worldwide after coal, oil and natural gas. Traditional use of biomass still exceeds its use in modern applications. In the developed world, biomass is again becoming important for applications such as combined heat and power generation. In addition, biomass energy is gaining significance as a source of clean heat for domestic heating and community heating applications. In fact, in countries like Finland, the USA and Sweden the per capita biomass energy used is higher than in India, China or elsewhere in Asia.

Biomass fuels used in India account for about one third of the total fuel used in the country, being the most important fuel in over 90% of rural households and about 15% of urban households. Instead of burning loose biomass fuel directly, it is more practical to compress it into briquettes (compressing it through a process to form blocks of different shapes) and thereby improve its utility and convenience of use. Such biomass in dense, briquetted form can either be used directly as fuel instead of coal in traditional chulhas and furnaces, or in a gasifier. A gasifier converts solid fuel into a more convenient-to-use gaseous fuel called producer gas.

Scientists are exploring the advantages of biomass energy as an alternative energy source, as it is renewable, free from net CO2 (carbon dioxide) emissions, and abundantly available on earth in the form of agricultural residue, city garbage, cattle dung, firewood, etc. Bioenergy, in the form of biogas derived from biomass, is expected to become one of the key energy resources for global sustainable development. At present, biogas technology provides an alternative source of energy in rural India for cooking. It is particularly useful for village households that have their own cattle. Through a simple process, cattle dung is used to produce a gas which serves as fuel for cooking; the residual dung is used as manure. Biogas plants have been set up in many areas and are becoming very popular. Using local resources, namely cattle waste and other organic wastes, energy and manure are derived. A mini biogas digester has recently been designed and developed, and is being field-tested for domestic lighting.

Indian sugar mills are rapidly turning to bagasse, the residue of cane after it is crushed and its juice extracted, to generate electricity. This is mainly being done to clean up the environment, cut power costs and earn additional revenue. According to current estimates, about 3,500 MW of power can be generated from bagasse in the existing 430 sugar mills in the country. Around 270 MW has already been commissioned and more is under construction.

The inside of the earth is very, very hot. Some of this heat energy escapes through volcanoes, geysers and hot springs. This natural heat energy is called geothermal energy. In countries such as New Zealand and Iceland, hot water and steam are near the earth's surface. Escaping steam from the ground is collected and used to generate electricity in geothermal power stations. Geothermal power is generated by mining the earth's heat. In areas with high-temperature ground water at shallow depths, wells are drilled into natural fractures in basement rock or into permeable sedimentary rocks. Hot water or steam flows up through the wells either by pumping or through boiling (flashing) flow. Experiments are in progress to determine whether a fourth method, deep wells into "hot dry rocks", can be economically used to heat water pumped down from the surface. A hot dry rock project in the United Kingdom was abandoned after it was pronounced economically unviable in 1989. HDR programs are currently being developed in Australia, France, Switzerland and Germany. Magma (molten rock) resources offer extremely high-temperature geothermal opportunities, but existing technology does not allow recovery of heat from these resources.

Electrical generation
Geothermal-generated electricity was first produced at Larderello, Italy, in 1904. Since then, the use of geothermal energy for electricity has grown worldwide to about 8,000 megawatts, of which the United States produces 2,700 megawatts. Three types of power plants are used to generate power from geothermal energy: dry steam, flash, and binary. Dry steam plants take steam out of fractures in the ground and use it to directly drive a turbine that spins a generator. Flash plants take hot water, usually at temperatures over 200 °C, out of the ground, allow it to boil as it rises to the surface, separate the steam phase in steam/water separators, and then run the steam through a turbine. In binary plants, the hot water flows through heat exchangers, boiling an organic fluid that spins the turbine. The condensed steam and remaining geothermal fluid from all three types of plants are injected back into the hot rock to pick up more heat. This is why geothermal energy is viewed as sustainable: the heat of the earth is so vast that there is no way to remove more than a small fraction even if most of the world's energy needs came from geothermal sources.

The largest dry steam field in the world is The Geysers, about 90 miles (145 km) north of San Francisco. Production there began in 1960; the field has 1,360 MW of installed capacity and produces about 1,000 MW net. Calpine Corporation now owns 19 of the 21 plants in The Geysers and is currently the United States' largest producer of renewable geothermal energy. The other two plants are owned jointly by the Northern California Power Agency and Santa Clara Electric. Since the activities of one geothermal plant affect those nearby, the consolidation of plant ownership at The Geysers has been beneficial because the plants operate cooperatively instead of in their own short-term interest. The Geysers is now recharged by injecting treated sewage effluent from the City of Santa Rosa and the Lake County sewage treatment plant. This sewage effluent used to be dumped into rivers and streams and is now piped to the geothermal field, where it replenishes the steam produced for power generation.

Another major geothermal area is located in south central California, on the southeast side of the Salton Sea, near the cities of Niland and Calipatria, CA. As of 2001, there were 15 geothermal plants producing electricity in the area. CalEnergy owns about half of them and the rest are owned by various companies. Combined, the plants produce about 570 megawatts. The Basin and Range geologic province in Nevada, southeastern Oregon, southwestern Idaho, Arizona and eastern Utah is now an area of rapid geothermal development. Several small power plants were built during the late 1980s, during times of high power prices. Rising energy costs have spurred new development. Plants in Nevada at Steamboat near Reno, Brady/Desert Peak, Dixie Valley, Soda Lake, Stillwater and Beowawe now produce about 235 MW. New projects are under development across the state.

Geothermal power is very cost-effective in the Rift area of Africa. Kenya's KenGen has built two plants, Olkaria I (45 MW) and Olkaria II (65 MW), with a third private plant, Olkaria III (48 MW), run by Israeli geothermal specialist Ormat. Plans are to increase production capacity by another 576 MW by 2017, covering 25% of Kenya's electricity needs and correspondingly reducing dependence on imported oil. Geothermal power is generated in over 20 countries around the world, including Iceland (producing 17% of its electricity from geothermal sources), the United States, Italy, France, New Zealand, Mexico, Nicaragua, Costa Rica, Russia, the Philippines (production output of 1,931 MW, second to the US, 27% of its electricity), Indonesia and Japan. Canada's government (which officially notes some 30,000 earth-heat installations providing space heating to Canadian residential and commercial buildings) reports a test geothermal-electrical site in the Meager Mountain-Pebble Creek area of British Columbia, where a 100 MW facility might be developed.

Desalination
Douglas Firestone began working with evaporation/condensation air-loop desalination around 1998 and demonstrated in 2001 that geothermal waters could be used as process water to produce potable water. In 2003, Professor Ronald A. Newcomb, now at the San Diego State University Center for Advanced Water Technologies, began working with Firestone to enhance the process of using geothermal energy for desalination. Geothermal energy is a primary energy source. In 2005, testing was done on the fifth prototype of a device called the Delta T, a closed air-loop, atmospheric-pressure, evaporation/condensation geothermally powered desalination device. The device used filtered sea water from the Scripps Institution of Oceanography and reduced the salt concentration from 35,000 ppm to 51 ppm w/w. [1]

Water injection
In some locations, the natural supply of water producing steam from the hot underground magma deposits has been exhausted, and processed waste water is injected to replenish the supply. Most geothermal fields have more fluid recharge than heat, so re-injection can cool the resource unless it is carefully managed. In at least one location, this has resulted in small but frequent earthquakes (see external link below). This has led to disputes about whether the plant owners are liable for the damage the earthquakes cause.

Heat depletion
Although geothermal sites are capable of providing heat for many decades, eventually they are depleted as the ground cools. [2] The government of Iceland states: "It should be stressed that the geothermal resource is not strictly renewable in the same sense as the hydro resource." It estimates that Iceland's geothermal energy could provide 1,700 MW for over 100 years, compared to the current production of 140 MW. However, the natural heat flow of the earth, largely from radioactive decay, does replenish the heat lost in geothermal heat mining.

Cost
Geothermal power is more competitive in countries that have limited hydrocarbon resources, such as Iceland, New Zealand, and Italy. From the period of low power prices in the 1980s until the recent rise in oil and gas prices, few geothermal resource areas in the United States were capable of generating electricity at a cost competitive with other energy sources. However, recent rises in power prices make geothermal more cost-competitive. Not all areas of the world have a usable geothermal resource, though many do. Also, some geothermal areas do not have a high enough temperature to produce steam. In those areas, geothermal power can be generated using a process called binary cycle technology, though the efficiency is lower. Other areas do not have the water to produce steam, which is necessary for current plant designs. Geothermal areas without steam are called hot dry rock areas, and methods for exploiting them are being researched. Also, instead of producing electricity, lower-temperature areas can provide space and process heating. As of 1998, the United States had 18 district heating systems, 28 fish farms, 12 industrial plants, 218 spas and 38 greenhouses that use geothermal heat.

Geothermal power is the use of geothermal heat for electricity generation. It is often referred to as a form of renewable energy, but because the heat at any location can eventually be depleted, it technically may not be strictly renewable. "Geothermal" comes from the Greek words geo, meaning earth, and therme, meaning heat; geothermal literally means "earth heat".
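The remark above that lower-temperature resources give lower conversion efficiency (and hence need binary cycle technology) can be illustrated with the Carnot limit, the theoretical ceiling for any heat engine. The sketch below is illustrative only: the resource and heat-rejection temperatures are assumed example values, and real flash and binary plants operate well below these ceilings.

# Illustrative Carnot-limit comparison for geothermal resources of different
# temperatures. The temperatures are example values, not data from any field.

def carnot_limit(resource_c: float, rejection_c: float = 30.0) -> float:
    """Maximum theoretical efficiency of a heat engine operating between the
    resource temperature and the heat-rejection temperature (in Celsius)."""
    t_hot = resource_c + 273.15
    t_cold = rejection_c + 273.15
    return 1.0 - t_cold / t_hot

for temp in (250, 150, 100):  # flash-steam range vs. binary-cycle range
    print(f"{temp} C resource: Carnot limit {carnot_limit(temp):.0%}")
# roughly 42% at 250 C, 28% at 150 C, 19% at 100 C

Actual plant efficiencies are a fraction of these limits, which is why low-temperature fields are often used for space and process heating rather than electricity.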


Hydroelectric and coal-fired power plants produce electricity in a similar way. In both cases a power source is used to turn a propeller-like piece called a turbine, which then turns a metal shaft in an electric generator, the machine that produces the electricity. A coal-fired power plant uses steam to turn the turbine blades, whereas a hydroelectric plant uses falling water to turn the turbine. The results are the same. A diagram of a hydroelectric power plant (courtesy of the Tennessee Valley Authority) shows the details:

The idea is to build a dam on a large river that has a large drop in elevation (there are not many hydroelectric plants in Kansas or Florida). The dam stores lots of water behind it in the reservoir. Near the bottom of the dam wall there is the water intake. Gravity causes the water to fall through the penstock inside the dam. At the end of the penstock there is a turbine propeller, which is turned by the moving water. The shaft from the turbine goes up into the generator, which produces the power. Power lines connected to the generator carry electricity to your home and mine. The water continues past the propeller through the tailrace into the river past the dam.

Hydroelectricity is electricity obtained from hydropower. Most hydroelectric power comes from the potential energy of dammed water driving a water turbine and generator. Less common variations make use of water's kinetic energy or undammed sources such as tidal power. Hydroelectricity is a renewable energy source. The energy extracted from water depends not only on the volume but on the difference in height between the source and the water's outflow. This height difference is called the head. The amount of potential energy in water is directly proportional to the head. To obtain a very high head, water for a hydraulic turbine may be run through a large pipe called a penstock.

While many hydroelectric projects supply public electricity networks, some were created for private commercial purposes. For example, aluminium processing requires substantial amounts of electricity, and in Britain's Scottish Highlands there are examples at Kinlochleven and Lochaber, designed and constructed during the early years of the 20th century. Similarly, the 'van Blommestein' lake, dam and power station were constructed in Suriname to provide electricity for the Alcoa aluminium industry. In many parts of Canada (the provinces of British Columbia, Manitoba, Ontario, Quebec and Newfoundland and Labrador) hydroelectricity is used so extensively that the word "hydro" is used to refer to any electricity delivered by a power utility. The government-run power utilities in these provinces are called BC Hydro, Manitoba Hydro, Hydro One (formerly "Ontario Hydro"), Hydro-Québec and Newfoundland and Labrador Hydro respectively. Hydro-Québec is the world's largest hydroelectric generating company, with a total installed capacity (2005) of 31,512 MW.
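The relationship between head, flow and output described above can be made concrete with the standard hydropower formula: power = efficiency x water density x gravity x flow x head. The sketch below is illustrative only; the flow rate, head and efficiency values are assumptions chosen for the example, not figures for any plant mentioned in this article.

# Illustrative hydropower output calculation (P = efficiency * rho * g * Q * h).
# All input values below are assumed for the example, not taken from a real plant.

RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def hydro_power_mw(flow_m3_s: float, head_m: float, efficiency: float = 0.9) -> float:
    """Return electrical output in megawatts for a given flow, head and
    overall turbine/generator efficiency."""
    power_watts = efficiency * RHO_WATER * G * flow_m3_s * head_m
    return power_watts / 1e6

if __name__ == "__main__":
    # Example: 500 m^3/s of water through a 100 m head at 90% efficiency.
    print(f"{hydro_power_mw(500, 100):.0f} MW")  # roughly 440 MW

Doubling the head doubles the output for the same flow, which is why high-head sites (and penstocks that preserve that head) are so valuable.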


Importance
Hydroelectric power supplies 20% of world electricity. Norway produces virtually all of its electricity from hydro, Iceland produces 83% of its requirements (2004), and Austria produces 67% of all electricity generated in the country from hydro (over 70% of its requirements). Canada is the world's largest producer of hydro power and produces over 70% of its electricity from hydroelectric sources. Apart from a few countries with an abundance of it, hydro capacity is normally applied to peak-load demand, because it can be readily stored during off-peak hours (in fact, pumped-storage hydroelectric reservoirs are sometimes used to store electricity produced by thermal plants for use during peak hours). It is not a major option for the future in developed countries because most major sites in these countries with potential for harnessing gravity in this way are either already exploited or unavailable for other reasons, such as environmental considerations. Regions where thermal plants provide the dominant supply of power use hydro power to provide the important functions of load following and regulation. This permits thermal plants to be operated closer to thermodynamically optimal points rather than varied continuously, which reduces efficiency and potentially increases pollutant emissions.

Concurrently, hydro plants are then utilized to provide hour-to-hour adjustments (load following) and to respond to changes in system frequency and voltage (regulation), with no additional economic or environmental effect.
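The pumped-storage role mentioned above can be sketched with a back-of-the-envelope arbitrage check. The round-trip efficiency and the two electricity prices below are assumed example values, not utility data; the point is simply that storing off-peak energy pays only when the peak price, after round-trip losses, still exceeds the off-peak price.

# Illustrative pumped-storage arbitrage check. Efficiency and prices are assumptions.

ROUND_TRIP_EFFICIENCY = 0.75   # combined pumping and generating losses, assumed
OFF_PEAK_PRICE = 30.0          # price per MWh bought off-peak, assumed
PEAK_PRICE = 80.0              # price per MWh sold at peak, assumed

def storage_margin_per_mwh(off_peak: float, peak: float, efficiency: float) -> float:
    """Net revenue per MWh of off-peak energy purchased, after round-trip losses."""
    return peak * efficiency - off_peak

margin = storage_margin_per_mwh(OFF_PEAK_PRICE, PEAK_PRICE, ROUND_TRIP_EFFICIENCY)
print(f"margin: {margin:.0f} per MWh purchased")  # 80 * 0.75 - 30 = 30
print("worthwhile" if margin > 0 else "not worthwhile")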

Advantages and disadvantages

(Photos: a house flooded since 1955, revealed following prolonged dry weather; the hydroelectric reservoir at Vianden, Luxembourg.)

The major advantage of hydro systems is elimination of the cost of fuel. Hydroelectric plants are immune to price increases for fossil fuels such as oil, natural gas or coal, and do not require imported fuel. Hydroelectric plants tend to have longer lives than fuel-fired generation, with some plants now in service having been built 50 to 100 years ago. Labor cost also tends to be low, since plants are generally heavily automated and have few personnel on site during normal operation. Pumped-storage plants currently provide the most significant means of storing energy on a scale useful for a utility, allowing low-value generation in off-peak times (which occurs because fossil-fuel plants cannot be entirely shut down on a daily basis) to be used to store water that can be released during high-load daily peaks. Operation of pumped-storage plants improves the daily load factor of the generation system. Reservoirs created by hydroelectric schemes often provide excellent leisure facilities for water sports and become tourist attractions in themselves. Multi-use dams installed for irrigation, flood control or recreation may have a hydroelectric plant added at relatively low construction cost, providing a useful revenue stream to offset the cost of dam operation.

In practice, the utilization of stored water is sometimes complicated by demand for irrigation, which may occur out of phase with peak electricity demand. Times of drought can cause severe problems, since water replenishment rates may not keep up with desired usage rates. Minimum discharge requirements represent an efficiency loss for the station if it is uneconomic to install a small turbine unit for that flow. Concerns have been raised by environmentalists that large hydroelectric projects might be disruptive to surrounding aquatic ecosystems. For instance, studies have shown that dams along the Atlantic and Pacific coasts of North America have reduced salmon populations by preventing access to spawning grounds upstream, even though most dams in salmon habitat have fish ladders installed. Salmon smolt are also harmed on their migration to sea when they must pass through turbines. This has led some areas to barge smolt downstream during parts of the year. Turbine and power-plant designs that are easier on aquatic life are an active area of research.

Generation of hydroelectric power can also have an impact on the downstream river environment. First, water exiting a turbine usually contains very little suspended sediment, which can lead to scouring of river beds and loss of riverbanks. Second, since turbines are often opened intermittently, rapid or even daily fluctuations in river flow are observed. In the Grand Canyon, the daily cyclic flow variation caused by Glen Canyon Dam was found to be contributing to erosion of sand bars. Dissolved oxygen content of the water may also change from pre-dam conditions. Finally, water exiting from turbines is typically much colder than the pre-dam water, which can change aquatic faunal populations, including endangered species.

The reservoirs of hydroelectric power plants in tropical regions may produce substantial amounts of methane and carbon dioxide. This is due to plant material in newly flooded and re-flooded areas being inundated with water, decaying in an anaerobic environment and forming methane, a very potent greenhouse gas. The methane is released into the atmosphere once the water is discharged from the dam and turns the turbines. According to the World Commission on Dams report, where the reservoir is large compared to the generating capacity (less than 100 watts per square metre of surface area) and no clearing of the forests in the area was undertaken prior to impoundment of the reservoir, greenhouse gas emissions from the reservoir may be higher than those of a conventional oil-fired thermal generation plant [1]. In boreal reservoirs of Canada and Northern Europe, however, greenhouse gas emissions are typically only 2 to 8% of those from conventional thermal generation.

Another disadvantage of hydroelectric dams is the need to relocate the people living where the reservoirs are planned. In many cases, no amount of compensation can replace ancestral and cultural attachments to places that have spiritual value to the displaced population. Additionally, historically and culturally important sites can be flooded and lost. Such problems have arisen at the Three Gorges Dam project in China, the Clyde Dam in New Zealand and the Ilisu Dam in southeastern Turkey.

The Dnieper Hydroelectric Station (1927-32) was the centrepiece of Lenin's GOELRO plan.

Some hydroelectric projects also utilize canals, typically to divert a river at a shallower gradient to increase the head of the scheme. In some cases, the entire river may be diverted, leaving a dry riverbed. Examples include the Tekapo and Pukaki Rivers.

Hydro-electric facts

Oldest

Cragside, Rothbury, England, completed 1870.
Appleton, Wisconsin, USA, completed 1882: a waterwheel on the Fox River supplied the first commercial hydroelectric power for lighting to two paper mills and a house, two years after Thomas Edison demonstrated incandescent lighting to the public. Within a matter of weeks of this installation, a power plant was also put into commercial service at Minneapolis.
Duck Reach, Launceston, Tasmania, completed 1895: the first publicly owned hydro-electric plant in the Southern Hemisphere, supplying power to the city of Launceston for street lighting.
Decew Falls 1, St. Catharines, Ontario, Canada, completed 25 August 1898. Owned by Ontario Power Generation; four units are still operational. Recognised as an IEEE Milestone in Electrical Engineering & Computing by the IEEE Executive Committee in 2002.


Largest hydro-electric power stations

The La Grande Complex in Quebec, Canada, is the world's largest hydroelectric generating system. The eight generating stations of the complex have a total generating capacity of 16,021 MW. The Robert-Bourassa station alone has a capacity of 5,616 MW. A ninth station (Eastmain-1) is currently under construction and will add 480 MW to the total. An additional project on the Rupert River, currently undergoing environmental assessments, would add two stations with a combined capacity of 888 MW.

Station (country, year(s) commissioned): installed capacity; annual generation
Itaipú (Brazil/Paraguay, 1984/1991/2003): 14,000 MW; 93.4 TWh
Guri (Venezuela, 1986): 10,200 MW; 46 TWh
Grand Coulee (United States, 1942/1980): 6,809 MW; 22.6 TWh
Sayano-Shushenskaya (Russia, 1983): 6,721 MW (2005); 23.6 TWh (2005)
Robert-Bourassa (Canada, 1981): 5,616 MW
Churchill Falls (Canada, 1971): 5,429 MW; 35 TWh
Iron Gates (Romania/Serbia, 1970): 2,280 MW; 11.3 TWh

These are ranked by maximum power.

In progress

Three Gorges Dam, China. First power in July 2003, scheduled completion 2009, 18,200 MW


Countries with the most hydro-electric capacity


Canada: 341,312 GWh (66,954 MW installed)
USA: 319,484 GWh (79,511 MW installed)
Brazil: 285,603 GWh (57,517 MW installed)
China: 204,300 GWh (65,000 MW installed)
Russia: 169,700 GWh (46,100 MW installed) (2005)
Norway: 121,824 GWh (27,528 MW installed)
Japan: 84,500 GWh (27,229 MW installed)
India: 82,237 GWh (22,083 MW installed)
France: 77,500 GWh (25,335 MW installed)

These are 1999 figures and include pumped-storage hydroelectricity schemes.

Every day, the sea rises and falls twice. This rising and falling of the sea level is called the tide. It is caused by the gravitational attraction of the moon and the sun on the water in the ocean.

The difference in height of the water at high tide and low tide is used in a tidal power station to generate electricity. As the tide comes in, the water flows through the turbines to generate electricity. The same thing happens when the tide goes out and the water flows in the opposite direction. At present, there is only one large-scale power station using tidal energy; this power station is in France.

A hydrogen vehicle is an automobile which uses hydrogen as its primary source of power for locomotion. These cars generally use the hydrogen in one of two ways: combustion or fuel-cell conversion. In combustion, the hydrogen is "burned" in engines in fundamentally the same way as in traditional gasoline cars. In fuel-cell conversion, the hydrogen is turned into electricity through fuel cells, which then power electric motors. Hydrogen can be obtained from decomposition of methane (natural gas), from coal (by a process known as coal gasification), from liquefied petroleum gas, from biomass (biomass gasification), from high-heat sources (by a process called thermolysis), or from water by electrolysis. A primary benefit of using pure hydrogen as a power source would be that it uses oxygen from the air to produce only water vapor as exhaust. Another benefit is that, theoretically, the source of pollution created today by burning fossil fuels could be moved to centralized power plants, where the byproducts of burning fossil fuels can be better controlled. Hydrogen could also be produced from renewable energy sources with no net increase in carbon dioxide emissions. However, as explained below, the technical challenges required to realize this benefit may not be solved for many decades. The main challenges in using hydrogen in cars are the very high costs and the low energy efficiencies; so far, there is not much likelihood of overcoming these challenges. Consequently, only a few demonstration vehicles have been made, at high cost.

Research and prototypes


Hydrogen does not act as a pre-existing source of energy like fossil fuels, but as a carrier, much like a battery. It is renewable on a realistic time scale, unlike fossil fuels, which can take millions of years to replenish (a few dispute this; see abiogenic petroleum origin). The largest potential advantage is that it could be produced and consumed continuously, using solar, water, wind and nuclear power for electrolysis. However, current hydrogen production methods utilizing hydrocarbons produce more pollution and cost more per mile driven than would direct consumption of the same hydrocarbon fuel (gasoline, diesel or methane) in a modern internal combustion engine. To reduce pollution and reliance on fossil fuels, sustainable and cost-effective methods of hydrogen production and containment would have to be improved beyond current capabilities. The costs of producing, containing and distributing hydrogen are likely to go up as the costs of fossil fuels go up from declining supply and increasing demand.

A small number of experimental hydrogen cars currently exist, and a significant amount of research is underway to try to make the technology viable. The common internal combustion engine, usually fueled with gasoline (petrol) or diesel liquids, can be converted to run on gaseous hydrogen. However, the more energy-efficient use of hydrogen involves the use of fuel cells and electric motors instead of a traditional engine. Hydrogen reacts with oxygen inside the fuel cells, which produces electricity to power the motors. One primary area of research is hydrogen storage, to try to increase the range of hydrogen vehicles while reducing the weight, energy consumption, and complexity of the storage systems. Two primary methods of storage are metal hydrides and compression. High-speed cars, buses, submarines, and space rockets can already run on hydrogen, in various forms, at great expense. There is even a working toy model car that runs on solar power, using a reversible fuel cell to store energy in the form of hydrogen and oxygen gas; it can then convert the fuel back into water to release the solar energy.

Hydrogen fuel cell difficulties


While fuel cells themselves are potentially highly energy efficient, and working prototypes were made by Roger E. Billings in the 1960s, at least four technical obstacles and other political considerations exist regarding the development and use of a fuel cell-powered hydrogen car.

Low volumetric energy


Hydrogen has a very low volumetric energy density at ambient conditions, equal to about one-third that of methane. Even when the fuel is stored as a liquid in a cryogenic tank or as a gas in a pressurized tank, the volumetric energy density (megajoules per liter) is small relative to that of gasoline. Because of the energy required to compress or liquefy the hydrogen gas, the supply chain for hydrogen has lower well-to-tank efficiency compared to gasoline. Some research has been done into using special crystalline materials to store hydrogen at greater densities and lower pressures. Instead of storing molecular hydrogen on board, some have advocated using hydrogen reformers to extract the hydrogen from more traditional fuels including methane, gasoline, and ethanol. Many environmentalists are irked by this idea, as it promotes continued dependence on fossil fuels, at least in the case of gasoline. However, vehicles using reformed gasoline or ethanol to power fuel cells could still be more efficient than vehicles running internal combustion engines, if the technology can be invented.
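To give a rough sense of scale for the volumetric problem just described, the figures below are approximate lower-heating-value energy densities taken as round textbook numbers; they are assumptions for illustration, not precise data, but they show why hydrogen at ambient pressure is unusable as a vehicle fuel and why even liquid hydrogen falls well short of gasoline.

# Rough volumetric energy densities (lower heating value), illustrative only.
energy_density_mj_per_litre = {
    "hydrogen gas, 1 atm": 0.011,
    "methane gas, 1 atm": 0.036,   # roughly 3x hydrogen, as noted in the text
    "liquid hydrogen": 8.5,
    "gasoline": 32.0,
}

baseline = energy_density_mj_per_litre["gasoline"]
for fuel, density in energy_density_mj_per_litre.items():
    print(f"{fuel:>22}: {density:7.3f} MJ/L ({density / baseline:.1%} of gasoline)")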

High cost of fuel cells


Currently, fuel cells are costly to produce and fragile, although technologies under development may soon result in robust and cost-efficient versions. Hydrogen fuel cells were initially plagued by the high production costs associated with converting the gas to the electricity ultimately required to power a hydrogen car. Scientists are also studying how to produce inexpensive fuel cells that are robust enough to survive the bumps and vibrations that all automobiles have to handle. Furthermore, freezing conditions are a major consideration, because fuel cells produce water and utilize moist air with varying water content; most fuel cell designs are fragile and cannot survive in such environments. Also, many designs require rare substances such as platinum as a catalyst in order to work properly, and such a catalyst can be contaminated by impurities in the hydrogen supply. In the past few years, however, a nickel-tin catalyst has been under development which may lower the cost of cells.

Hydrogen production costs


While hydrogen can be used as an energy carrier, it is not an energy source. It must still be produced from fossil fuels or from some other energy source, with a net loss of energy, because the conversion from energy to stored hydrogen and back to energy is not 100% efficient. Using hydrogen in a fuel cell is roughly two to three times as efficient as a traditional combustion engine, which only has an efficiency of 15-25%; hydrogen fuel cells can achieve thermodynamic efficiencies of 50-60%. The percentage will never be 100% because of the second law of thermodynamics.
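As a rough tank-to-wheel illustration of the figures quoted above, the sketch below multiplies the mid-range efficiencies together. The electric drivetrain value is an added assumption; all numbers are indicative only.

# Tank-to-wheel comparison using the mid-range figures from the text plus an
# assumed electric drivetrain efficiency.
fuel_cell_stack = 0.55   # mid-range of the 50-60% quoted above
electric_drive  = 0.90   # assumed motor/inverter efficiency
ice_engine      = 0.20   # mid-range of the 15-25% quoted above

fuel_cell_path = fuel_cell_stack * electric_drive
print(f"Fuel cell tank-to-wheel: {fuel_cell_path:.0%}")
print(f"ICE tank-to-wheel:       {ice_engine:.0%}")
print(f"Advantage:               {fuel_cell_path / ice_engine:.1f}x")

On these assumptions the fuel cell vehicle delivers roughly two and a half times more useful work per megajoule in the tank, which is where the "two to three times as efficient" comparison comes from.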

Hydrogen distribution
In order to distribute hydrogen to cars, the current gasoline fueling system would need to be replaced, or at least significantly supplemented, with hydrogen fuel stations.

Political considerations
Since all energy sources have drawbacks, a shift to hydrogen-powered vehicles may require difficult political decisions on how to produce this energy. The United States Department of Energy has already announced a plan to produce hydrogen directly from Generation IV reactors. These nuclear power plants would be capable of producing hydrogen and electricity at the same time. The main problem with the nuclear-to-hydrogen economy is that hydrogen is ultimately only a carrier of electricity. The costs associated with electrolysis and with the transportation and storage of hydrogen may make this method uneconomical in comparison to the direct utilization of electricity. Electric power transmission is about 95% efficient and the infrastructure is already in place, so tackling the current drawbacks of electric cars or hybrid vehicles may be easier than developing a whole new hydrogen infrastructure that mimics the obsolete model of oil distribution. Continuing research on cheaper, higher-capacity batteries is needed. Direct transmission through electric rails, for example in a guided vehicle configuration such as personal rapid transit, may turn out to make electric vehicles more economical than hydrogen fuel cell vehicles. Recently, alternative methods of creating hydrogen directly from sunlight and water through a metallic catalyst have been announced. This may provide a cheap, direct conversion of solar energy into hydrogen, a very clean solution for hydrogen production. Sodium borohydride (NaBH4), a chemical compound, may hold future promise because of the ease with which hydrogen can be stored under normal atmospheric pressure in automobiles that have fuel cells.
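The argument about carrying electricity directly versus carrying it as hydrogen can be made concrete with a back-of-the-envelope chain of efficiencies. Apart from the roughly 95% transmission figure quoted above, every number below is an assumption chosen purely to illustrate the comparison.

# Source-to-wheel sketch: grid electricity used directly vs. converted to
# hydrogen and back. All values except transmission are illustrative guesses.
transmission   = 0.95   # quoted in the text above
battery_cycle  = 0.90   # assumed battery charge/discharge efficiency
electrolysis   = 0.70   # assumed electrolyser efficiency
compress_store = 0.90   # assumed compression, storage and distribution losses
fuel_cell      = 0.55   # assumed fuel cell stack efficiency
motor          = 0.90   # assumed drivetrain efficiency, same for both paths

battery_path  = transmission * battery_cycle * motor
hydrogen_path = transmission * electrolysis * compress_store * fuel_cell * motor

print(f"Grid -> battery -> wheels:          {battery_path:.0%}")
print(f"Grid -> hydrogen -> fuel cell car:  {hydrogen_path:.0%}")

Under these assumptions roughly three-quarters of the generated electricity reaches the wheels of a battery vehicle, against under a third for the hydrogen route, which is the crux of the economic argument sketched in the paragraph above.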

United States President George W. Bush was optimistic that these problems could be overcome with research. In his 2003 State of the Union address, he announced the U.S. government's hydrogen fuel initiative, which complements the existing FreedomCAR initiative aimed at safe and affordable hydrogen fuel cell vehicles. Critics charge that focusing on the hydrogen car is a dangerous detour from more readily available solutions to reducing the use of fossil fuels in vehicles.

Hydrogen internal combustion


Hydrogen internal combustion engine cars are different from hydrogen fuel cell cars. The hydrogen internal combustion car is a slightly modified version of the traditional gasoline internal combustion engine car. Hydrogen internal combustion cars burn hydrogen directly, with no other fuels, and produce a pure water vapor exhaust. The problem with these cars is that the hydrogen fuel that can be stored in a normal-size tank is used up rapidly: a full tank of hydrogen in the gaseous state would last only a few miles before the tank is empty. However, methods are being developed to reduce tank space, such as storing liquid hydrogen or using metal hydrides in the tank.

In 1807, François Isaac de Rivaz built the first hydrogen-fueled internal combustion vehicle; however, the design was very unsuccessful. It is estimated that more than a thousand hydrogen-powered vehicles were produced in Germany before the end of WWII, prompted by the acute shortage of oil. BMW's CleanEnergy internal combustion hydrogen car has more power and is faster than hydrogen fuel cell electric cars. A BMW hydrogen car (the H2R) broke the speed record for hydrogen cars at 300 km/h (186 mi/h), making automotive history. Mazda has developed Wankel engines to burn hydrogen. The Wankel engine uses a rotary principle of operation, so the hydrogen burns in a different part of the engine from the intake; this reduces intake backfiring, a risk with hydrogen-fueled piston engines. However, major car companies like DaimlerChrysler and General Motors Corp. are investing in the slower, weaker, but more efficient hydrogen fuel cells instead.

A small proportion of hydrogen in an otherwise conventional internal combustion engine can both increase overall efficiency and reduce pollution. Such a conventional car can employ an electrolyzer to decompose water, or a mixture of hydrogen and other gases as produced in a reforming process. Since hydrogen can burn in a very wide range of air/fuel mixtures, a small amount of hydrogen can also be used to ignite various liquid fuels in existing internal combustion engines under extremely lean burning conditions. This process requires a number of modifications to existing engine air/fuel and timing controls. Roy McAlister of the American Hydrogen Association has been demonstrating these conversions. Other renewable energy sources, like biodiesel, are also practical for existing automobile conversions, but come with their own host of problems.

In 2005 an Israeli company claimed it had succeeded in overcoming most of the problems related to producing hydrogen by using a device called a metal-steam combustor that separates hydrogen out of heated water. The tip of a magnesium or aluminum coil is inserted into the small metal-steam combustor together with water, where it is heated to very high temperatures. The metal atoms bond with the oxygen from the water, creating metal oxide. As a result, the hydrogen molecules become free and are sent into the engine alongside the steam. The solid waste product of the process, in the form of metal oxide, would later be collected at the fuel station and recycled for further use by the metal industry. The problem is that it takes a lot of energy to make the magnesium or aluminum coils.

Outside of specialty and small-scale uses, the primary target for the widespread application of fuel cells (hydrogen, zinc, or other) is the transportation sector; however, to be economically and environmentally feasible, any fuel cell based engine would need to be more efficient from well head to wheel than what currently exists. At the time of this writing, hydrogen fuel cells are roughly equivalent to gasoline combustion in terms of energy efficiency and pollution; however, if the energy and pollution costs of producing the fuel cell are considered, hydrogen is sorely behind. Other fuel cell technologies, like zinc-air, are currently ahead of gasoline combustion in energy efficiency, and ahead of hydrogen in terms of production costs and safety, but have been widely overlooked by the advocates of gasoline combustion alternatives.
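A rough sense of the metal-steam combustor described above can be had from simple stoichiometry, assuming the reaction Mg + H2O -> MgO + H2 and a lower heating value for hydrogen of about 120 MJ/kg. Both figures are assumptions for this sketch, not values from the source.

# Illustrative hydrogen yield from magnesium, assuming Mg + H2O -> MgO + H2.
MOLAR_MASS_MG = 24.305       # g/mol
MOLAR_MASS_H2 = 2.016        # g/mol
H2_LHV_MJ_PER_KG = 120.0     # assumed lower heating value of hydrogen

kg_mg = 1.0
mol_mg = kg_mg * 1000.0 / MOLAR_MASS_MG        # one mole of H2 per mole of Mg
kg_h2 = mol_mg * MOLAR_MASS_H2 / 1000.0
energy_mj = kg_h2 * H2_LHV_MJ_PER_KG

print(f"{kg_mg:.0f} kg of Mg yields about {kg_h2:.3f} kg of H2 (~{energy_mj:.0f} MJ of fuel energy)")

On these figures each kilogram of magnesium releases only about 10 MJ worth of hydrogen, while regenerating the metal from its oxide costs considerably more energy, which is exactly the drawback the paragraph above points out.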

Automobile and bus makers


Many companies are currently researching the feasibility of building hydrogen cars. Funding has come from both private and government sources. In addition to the BMW and Mazda examples cited above, many automobile manufacturers have begun developing hydrogen cars. These include:

Hyundai Santa Fe FCEV in the background (on the left) and Toyota Highlander FCHV in the foreground (on the right) during UC Davis's Picnic Day activities

BMW - the 750hL is powered by a dual-fuel internal combustion engine with an auxiliary power unit based on UTC Power fuel cell technology. The BMW H2R speed record car is also powered by an ICE. Both models use liquid hydrogen as fuel.

DaimlerChrysler - F-Cell, a hydrogen fuel cell vehicle based on the Mercedes-Benz A-Class.
Ford Motor - Focus FCV, a hydrogen fuel cell modification of the Ford Focus.
General Motors - multiple models of fuel cell vehicles, including the Hy-wire and the HydroGen3.
Honda - currently experimenting with a variety of alternative fuels and fuel cells, with experimental vehicles based on the Honda EV Plus, most notably the Honda FCX.
Hyundai - Santa Fe FCEV, based on UTC Power fuel cell technology.
Mazda - RX-8, with a dual-fuel (hydrogen or gasoline) rotary engine.
Nissan - X-TRAIL FCV, based on UTC Power fuel cell technology.
Morgan Motor Company - LIFEcar, a performance-oriented hydrogen fuel cell vehicle developed with the aid of several other British companies.
Toyota - the Highlander FCHV and FCHV-BUS are currently under development and in active testing.
Volkswagen - also has hydrogen fuel cell cars in development.

A few bus companies are also conducting hydrogen fuel cell research. These include:

DaimlerChrysler, based on Ballard fuel cell technology
Thor Industries (the largest maker of buses in the U.S.), based on UTC Power fuel cell technology
Irisbus, based on UTC Power fuel cell technology

Supporting these automobile and bus manufacturers are fuel cell and hydrogen engine research and manufacturing companies. The largest of these is UTC Power, a division of United Technologies Corporation, currently in joint development with Hyundai, Nissan, and BMW, among other auto companies. Another major supplier is Ballard Power Systems. The Hydrogen Engine Center is a supplier of hydrogen-fueled engines. Most, but not all, of these vehicles are currently only available as demonstration models and are very expensive to make and run. They are not yet ready for general public use and are unlikely to be as feasible as plug-in biodiesel hybrids. There are, however, fuel cell powered buses currently active or in production, such as a fleet of Thor buses with UTC Power fuel cells in California, operated by the SunLine Transit Agency. Perth is also participating in a trial, with three fuel cell powered buses now operating between Perth and the port city of Fremantle; the trial is to be extended to other Australian cities over the next three years. Mazda leased two dual-fuel RX-8s to commercial customers in Japan in early 2006, becoming the first manufacturer to put a hydrogen vehicle in customer hands. BMW has recently released information to the media about a new car that runs on either hydrogen or petrol and is claimed to be completely clean. BMW also plans to release its first publicly available hydrogen vehicle in 2008.

Fuel stations
Since the turn of the millennium, filling stations offering hydrogen have been opening worldwide; home fueling stations are a newer development.

Planes
Hyfish is a general aviation aircraft concept from the Swiss company Smartfish, promoted as highly innovative in terms of safety, economy and design. It uses hydrogen fuel to power fuel cells in what is described as the world's first fuel cell-powered aircraft, and the design was presented at the Hannover Fair 2006. The plane has been flown with battery power, and installation of the fuel cell system for subsequent test flights is planned for the summer of 2006.
August 6th, 1945, 70,000 lives were ended in a matter of seconds. The United States had dropped an atomic bomb on the city of Hiroshima. Today many argue over whether or not the US should have taken such a drastic measure. Was it entirely necessary that we drop such a devastating weapon? Yes, it was.

First, we must look at what was going on at the time the decision was made. The US had been fighting a massive war since 1941. Morale was most likely low, and resources were probably at the same level as morale. However, each side continued to fight, and both were determined to win. Obviously, the best thing that could have possibly happened would have been to bring the war to a quick end, with a minimum of casualties.

What would have happened had the A-bomb not been used? The most obvious thing is that the war would have continued. US forces, therefore, would have had to invade the home island of Japan. Imagine the number of casualties that could have occurred had this happened! Also, Allied forces would not only have had to fight off the Japanese military, but they would have had to defend themselves against the civilians of Japan as well. It was also a fact that the Japanese government had been equipping the commoners with any kind of weapon they could get their hands on. It is true that this could mean a Japanese citizen could have anything from a gun to a spear, but many unsuspecting soldiers might have fallen victim to a surprise spear attack! The number of deaths that would have occurred would have been much greater, and an invasion would have taken a much longer period of time. The Japanese would have continued to fight the US with all of what they had: spears, guns, knives, whatever they could get their hands on, just as long as they continued to fight the enemy.

A counterargument for dropping the A-bomb is that Japan was so low on resources due to the US blockade that it could not have resisted for long. Japan obviously was very low on resources; however, Japanese civilians were ready to die with spears in their hands, and surely the military would have done the same. Besides, the Japanese military did still have some resources to go on. So again I must bring out the fact that Japan could have continued to fight, and they would have. And I am sure anyone can realize what would have happened if the war had continued: more deaths. It was the atomic bomb that forced Japan to surrender and in turn saved thousands if not millions of lives.

I don't believe that Hiroshima and Nagasaki were the best places to bomb, due to the high civilian numbers; however, it is still my belief that the atomic bomb was necessary to end the war. Also, leaflets and warnings had been issued to the people of those cities warning them of an attack. Some say that the United States should have warned what kind of attack it would be. This, however, seems ridiculous to me. It shouldn't matter what kind of warning is given; a threat under such conditions should be taken seriously.

I do not believe the second A-bomb was necessary; it was dropped merely to show the supreme power of the US government and to warn the USSR under Stalin's rule. After the first bomb the US government should have waited. The first bomb was dropped on Hiroshima on August 6th, 1945, and the second was dropped on Nagasaki three days later, on the 9th. The bombs caused horrible destruction on a scale never seen before, and the radioactive effects have been carried on over generations.

I am certain that despite other arguments, the atomic bomb was a necessity. Without it, the number of men that would have died on both sides far surpasses the number killed in the droppings of both atomic bombs. Let's face it: the goal of waging war is victory with minimum losses on one's own side, and if possible a minimum of losses on the enemy side. The atomic bomb cut losses to a minimum and drew the war to an end quickly. It was a military necessity. However, the second bomb was unnecessary, and I believe it was dropped only to assure political superiority for the post-war era.

The Manhattan Project
The United States concealed its project to develop an atomic bomb under the name "Manhattan Engineer District." Popularly known as the Manhattan Project, it carried out the first successful atomic explosion on July 16, 1945, in a deserted area called Jornada del Muerto ("Journey of the dead") near Alamogordo, New Mexico.

Dropping the First Atomic Bomb
At 2:45 A.M. local time, the Enola Gay, a B-29 bomber loaded with an atomic bomb, took off from the US air base on Tinian Island in the western Pacific. Six and a half hours later, at 8:15 A.M. Japan time, the bomb was dropped, and it exploded a minute later at an estimated altitude of 580 ± 20 meters over central Hiroshima.

The Hiroshima Bomb
Size: length 3 meters, diameter 0.7 meters. Weight: 4 tons. Nuclear material: Uranium-235. Energy released: equivalent to 12.5 kilotons of TNT. Code name: "Little Boy".

Initial Explosive Conditions
Maximum temperature at burst point: several million degrees centigrade. A fireball of 15-meter radius formed in 0.1 millisecond, with a temperature of 300,000 degrees centigrade, and expanded to its huge maximum size in one second. The top of the atomic cloud reached an altitude of 17,000 meters.

Black Rain
Radioactive debris was deposited by "black rain" that fell heavily for over an hour over a wide area.

Damaging Effects of the Atomic Bomb
Thermal heat. Intense thermal heat emitted by the fireball caused severe burns and loss of eyesight. Thermal burns of bare skin occurred as far as 3.5 kilometers from ground zero (directly below the burst point). Most people exposed to thermal rays within a 1-kilometer radius of ground zero died. Tile and glass melted; all combustible materials were consumed.
Blast. An atomic explosion causes an enormous shock wave followed instantaneously by a rapid expansion of air called the blast; these represent roughly half the explosion's released energy. Maximum wind pressure of the blast: 35 tons per square meter. Maximum wind velocity: 440 meters per second. Wooden houses within 2.3 kilometers of ground zero collapsed. Concrete buildings near ground zero (thus hit by the blast from above) had ceilings crushed and windows and doors blown off. Many people were trapped under fallen structures and burned to death.

Radiation. Exposure within 500 meters of ground zero was fatal. People exposed at distances of 3 to 5 kilometers later showed symptoms of aftereffects, including radiation-induced cancers.

Bodily Injuries
Acute symptoms. Symptoms appearing in the first four months were called acute. Besides burns and wounds, they included general malaise, fatigue, headaches, loss of appetite, nausea, vomiting, diarrhea, fever, abnormally low white blood cell count, bloody discharge, anemia, and loss of hair.
Aftereffects. Prolonged injuries were associated with aftereffects. The most serious in this category were keloids (massive scar tissue on burned areas), cataracts, leukemia and other cancers.

Atomic Demographics
Population. The estimated pre-bomb population was 300,000 to 400,000. Because official documents were burned, the exact population is uncertain.
Deaths. With an uncertain population figure, the death toll could only be estimated. According to data submitted to the United Nations by Hiroshima City in 1976, the death count reached 140,000 (plus or minus 10,000) by the end of December 1945.
Health Card Holders. Persons qualifying for treatment under the A-bomb Victims Medical Care Law of 1957 received Health Cards; holders as of March 31, 1990, numbered 352,550.
Nagasaki. The atomic bomb dropped on Nagasaki exploded at 11:02 A.M. on August 9. Using plutonium with an explosive power of 20 kilotons of TNT equivalent, it left an estimated 70,000 dead by the end of 1945, although both the population and the deaths are uncertain.

1950s

December 12, 1952 - The first serious nuclear accident occurred at AECL's NRX reactor in Chalk River, Canada. A reactor shutoff rod failure, combined with several operator errors, led to a major power excursion of more than double the reactor's rated output. The heavy water moderator was purged, killing the reaction in under 30 seconds. A cover gas system failure led to hydrogen explosions, which severely damaged the reactor's interior. The fission products of approximately 30 kg of uranium were released through the reactor stack. Irradiated light-water coolant leaked from the damaged coolant circuit into the reactor building; some 4,000 cubic metres were pumped via pipeline to a disposal area to avoid contamination of the Ottawa River. Subsequent monitoring of surrounding water sources revealed no contamination. No immediate fatalities or injuries resulted from the incident; a 1982 follow-up study of exposed workers showed no long-term health effects. Jimmy Carter, then a nuclear engineer in the US Navy, was among the cleanup crew.[1][2]

May 24, 1958 - At the NRU reactor in Chalk River, Canada, a damaged uranium fuel rod caught fire and was torn in two as it was being removed from the core, due to inadequate cooling. The fire was extinguished, but not before releasing a sizeable quantity of radioactive combustion products that contaminated the interior of the reactor building and, to a lesser degree, an area surrounding the laboratory site. Over 600 people were employed in the clean-up.[3][4]

1959 - A sodium-cooled reactor suffered a partial core meltdown at the Santa Susana Field Laboratory near Simi Valley, California.[5]


1960s

April 21, 1964 - A US Transit 5BN nuclear-powered navigational satellite failed to reach orbital velocity and reentered the atmosphere 150,000 feet (46 km) above the Indian Ocean. The satellite's SNAP generator contained 16 kCi (590 TBq) of plutonium-238, which at least partially burned upon reentry. Increased levels of Pu-238 were first documented in the stratosphere four months later. The EPA estimated the abortive launch resulted in little Pu-238 contamination to human lungs (0.06 mrem or 0.6 µSv) compared to fallout from weapons tests in the 1950s (0.35 mrem or 3.5 µSv) or the EPA's Clean Air Act airborne exposure limit of 10 mrem (100 µSv).[6][7]

July 24, 1964 - Wood River Junction facility in Charlestown, Rhode Island. A criticality accident occurred at the plant, which was designed to recover uranium from scrap material left over from fuel element production. An operator accidentally added a concentrated uranium solution to an agitated tank containing sodium carbonate, resulting in a critical nuclear reaction. The criticality exposed the operator to a fatal radiation dose of 10,000 rad (100 Gy). Ninety minutes later a second excursion happened, exposing two cleanup crew to doses of up to 100 rad (1 Gy) without ill effect.[8] pg27 [9]

October 5, 1966 - A sodium cooling system malfunction at the Enrico Fermi demonstration nuclear breeder reactor on the shore of Lake Erie near Monroe, Michigan, caused a partial core meltdown. The accident was attributed to a piece of zirconium that obstructed a flow-guide in the sodium cooling system. Two of the 105 fuel assemblies melted during the incident, but no contamination was recorded outside the containment vessel.[10]

May 1967 - Unit 2 at the Chapelcross Magnox nuclear power station in Dumfries and Galloway, Scotland, suffered a partial meltdown when a fuel rod failed and caught fire after the unit was refuelled. Following the incident, the reactor was shut down for two years for repairs.[11][12]

January 21, 1969 - A coolant malfunction occurred in an experimental underground nuclear reactor at Lucens, Canton of Vaud, Switzerland. No injuries or fatalities resulted. The cavern was heavily contaminated and was sealed.[13][14]


1970s

February 22, 1977 - The Czechoslovakian nuclear power plant A1 in Jaslovske Bohunice experienced a serious accident during fuel loading. This INES level 4 nuclear accident resulted in damaged fuel integrity, extensive corrosion damage to fuel cladding and release of radioactivity into the plant area. As a result, the A1 power plant was shut down and is being decommissioned.[15][16]

March 28, 1979 - Equipment failures and worker mistakes contributed to a loss of coolant and a partial core meltdown at the Three Mile Island nuclear reactor in Middletown, Pennsylvania. This is the worst commercial nuclear accident in the United States to date. Site boundary radiation exposure was under 100 millirems (1 mSv) (less than annual exposure due to natural sources), with an exposure of 1 millirem (10 µSv) to approximately 2 million people. There were no immediate fatalities, although follow-up radiological studies predict at most one long-term cancer fatality.[17][18][19]


1980s

March 1981 - More than 100 workers were exposed to doses of up to 155 millirem per day of radiation during repairs of a nuclear power plant in Tsuruga, Japan, violating the company's limit of 100 millirems (1 mSv) per day.[20]

January 25, 1982 - At Rochester Gas & Electric Company's Ginna plant in Rochester, New York, a steam generator pipe broke, spilling radioactive coolant on the plant floor. Small amounts (about 80 Ci or 3 TBq) of radioactive steam escaped into the air.[21][22][23]

September 23, 1983 - Buenos Aires, Argentina. An operator error during a fuel plate reconfiguration led to a criticality accident at the RA-2 facility in an experimental test reactor. An excursion of 3x10^17 fissions followed; the operator absorbed 2000 rad (20 Gy) of gamma and 1700 rad (17 Gy) of neutron radiation, which killed him two days later. Another 17 people outside of the reactor room absorbed doses ranging from 35 rad (0.35 Gy) to less than 1 rad (0.01 Gy).[24] pg103 [25]

April 26, 1986 - The worst accident in the history of nuclear power occurred at the Chernobyl nuclear power plant located near Kiev, USSR (now part of Ukraine). Fire and explosions resulting from an unauthorized experiment left 31 dead in the immediate aftermath. Radioactive nuclear material was spread over much of Europe. Over 135,000 people were evacuated from the areas immediately around Chernobyl (or, in Ukrainian, Chornobyl) and over 800,000 from the areas of fallout in Ukraine, Belarus and Russia. About 4,000 mi² (10,000 km²) were taken out of human use for an indefinite time. In 2005, a comprehensive study on the long-term health consequences of the accident was completed by the IAEA, the World Health Organization and six other UN agencies, as well as the governments of Russia, Belarus and Ukraine. The findings include 60 radiation-caused fatalities to date, with an estimated 4,000 additional fatalities to come within the lifetimes of those exposed; however, this is not nearly as many deaths as were predicted in the more immediate aftermath of the accident.[26] See Chernobyl accident.

May 4, 1986 - An experimental 300-megawatt THTR-300 HTGR located in Hamm-Uentrop, Germany released radiation after one of its spherical fuel pebbles became lodged in the pipe used to deliver fuel elements to the reactor. Operator actions to dislodge the obstruction during the event damaged the fuel pebble cladding, releasing radiation detectable up to two kilometers from the reactor.[27]

October 19, 1989 - A fire at the Vandellos nuclear power plant near Tarragona, Spain did not result in an external release of radioactivity, nor was there damage to the reactor core or contamination on site. However, the damage to the plant's safety systems caused by the fire degraded the defence-in-depth significantly. The event is classified as Level 3, based on the defence-in-depth criterion. The plant was closed due to this accident and is now being dismantled.

November 24, 1989 - Near-meltdown at Greifswald, East Germany.[28]


1990s

April 6, 1993 - Tomsk, Russia. At the Tomsk-7 Siberian Chemical Enterprise plutonium reprocessing facility, a pressure buildup led to an explosive mechanical failure in a 34 cubic meter stainless steel reaction vessel buried in a concrete bunker under building 201 of the radiochemical works. The vessel contained a mixture of concentrated nitric acid, uranium (8,757 kg) and plutonium (449 g), along with a mixture of radioactive and organic waste from a prior extraction cycle. The explosion dislodged the concrete lid of the bunker and blew a large hole in the roof of the building, releasing approximately 6 GBq of Pu-239 and 30 TBq of various other radionuclides into the environment. The accident exposed 160 on-site workers and almost two thousand cleanup workers to total doses of up to 50 mSv (the threshold limit for radiation workers is 100 mSv per 5 years).[29] The contamination plume extended 28 km NE of building 201, 20 km beyond the facility property. The small village of Georgievka (pop. 200) was at the end of the fallout plume, but no fatalities, illnesses or injuries were reported.[30]

September 30, 1999 - Japan's worst nuclear accident to date took place at a uranium reprocessing facility in Tokai-mura, Ibaraki prefecture, northeast of Tokyo, Japan. The direct cause of the criticality accident was workers putting uranyl nitrate solution containing about 16.6 kg of uranium, which exceeded the critical mass, into a precipitation tank. The tank was not designed to dissolve this type of solution and was not configured to prevent an eventual criticality. Three workers were exposed to (neutron) radiation doses in excess of allowable limits (two of these workers died); a further 116 received lesser doses of 1 mSv or greater.[31][32][33] For more details, see Tokai, Ibaraki and 5 yen coin.


2000s

February 15, 2000 - The Indian Point nuclear power plant's reactor 2 in Buchanan, New York, vented a small amount of radioactive steam when a steam generator tube failed. No detectable radioactivity was observed offsite. Con Edison was censured by the NRC for not following the procedures for timely notification of government agencies. Subsequently, Con Edison was required by the NRC to replace all four steam generators.[34] NRC Information Notice 2000-09.

February 9, 2002 - Two workers were exposed to a small amount of radiation and suffered minor burns when a fire broke out at the Onagawa Nuclear Power Station, Miyagi Prefecture, Japan. The fire occurred in the basement of reactor #3 during a routine inspection when a spray can was punctured accidentally, igniting a sheet of plastic.[35]

April 19, 2005 - Sellafield, UK. Twenty metric tons of uranium and 160 kilograms of plutonium dissolved in 83,000 liters of nitric acid leaked undetected over several months from a cracked pipe into a stainless steel sump chamber at the Thorp nuclear fuel reprocessing plant. The partially processed spent fuel was drained into holding tanks outside the plant.[36]

2005 - Dounreay, UK. In September, the site's cementation plant was closed when 266 litres of radioactive reprocessing residues were spilled inside containment.[37][38] In October, another of the site's reprocessing laboratories was closed down after nose-blow tests of eight workers tested positive for trace radioactivity.[39]

A loss-of-coolant accident (LOCA) is a mode of failure for a nuclear reactor; the results of a LOCA could be catastrophic to the reactor, the facility that houses it, and the immediate vicinity around the reactor. Each plant's emergency core cooling system (ECCS) exists specifically to deal with a LOCA. Nuclear reactors generate heat internally; to convert this heat into useful power, a coolant system is used. If this coolant is lost, the reactor may continue to generate heat while its temperature rises to the point of damaging the reactor. Particularly dangerous is the possibility that the high temperatures may prevent the control systems from slowing the reaction; if this happens, the temperature will continue to rise until the fuel or the structures around it fail.

If water is present, it may boil, bursting out of its pipes. (For this reason, nuclear power plants are equipped with pressure-operated relief valves.) If graphite and air are present, the graphite may catch fire, spreading radioactive contamination. (This situation exists only in AGRs, RBMKs, Magnox and weapons-production reactors, which use graphite as a neutron moderator.) The fuel and reactor internals may melt; if the melted configuration remains critical, the molten mass will continue to generate heat, possibly melting its way down through the bottom of the reactor. Such an event is called a nuclear meltdown and can have severe consequences. The so-called "China syndrome" would be this process taken to an extreme: the molten mass working its way down through the soil to the water table (and below). However, current understanding and experience of nuclear fission reactions suggest that the molten mass would become too disrupted to carry on heat generation before descending very far; for example, in the Chernobyl accident the reactor core melted and core material was found in the basement, too widely dispersed to carry on a chain reaction (but still dangerously radioactive).

A reactor may passively (that is, in the absence of any control systems) increase or decrease its power output in the event of a LOCA or of voids appearing in its coolant system (by water boiling, for example). This is measured by the void coefficient. All modern nuclear power plants (except, under some conditions, RBMKs) have a negative void coefficient, indicating that as water turns to steam, power instantly decreases. The Canadian CANDU reactor is a notable exception, with a positive void coefficient for reasons outlined at the site Nuclearfaq. Boiling water reactors are designed to have steam voids inside the reactor vessel. Modern reactors are designed to prevent and withstand loss of coolant using various techniques. Some, such as the pebble bed reactor, passively shut down the chain reaction when coolant is lost; others have extensive safety systems to shut down the chain reaction.
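The sign convention behind the void coefficient can be shown with a purely schematic calculation: the reactivity change is the coefficient times the change in void fraction. The numbers below are invented for illustration and are not from the text; this is not a reactor model.

# Toy illustration of the void coefficient sign convention.
def reactivity_change(void_coefficient_pcm_per_pct, delta_void_pct):
    """Reactivity change (in pcm) for a given change in steam void fraction."""
    return void_coefficient_pcm_per_pct * delta_void_pct

examples = [("negative void coefficient", -100.0), ("positive void coefficient", +100.0)]
for label, coeff in examples:
    d_rho = reactivity_change(coeff, delta_void_pct=5.0)  # 5% more steam voids
    trend = "power falls" if d_rho < 0 else "power rises"
    print(f"{label:26s} d(rho) = {d_rho:+7.1f} pcm -> {trend}")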

The Three Final Defenses


A great deal of work goes into the prevention of a serious core event. If such an event were to occur, three different physical processes are expected to increase the time between the start of the accident and the moment when a large release of radioactivity could occur. These three factors would give the plant operators additional time to mitigate the result of the event:

1. The time required for the water (coolant and moderator) to boil away. It is assumed that at the moment the accident occurs the reactor will be SCRAMed (immediate and full insertion of all control rods), reducing the thermal power input and further delaying the boiling.

2. The time required for the fuel to melt. After the water has boiled away, the time required for the fuel to reach its melting point is dictated by the heat input due to the decay of fission products, the heat capacity of the fuel and the melting point of the fuel (see the sketch after this list).

3. The time required for the molten fuel to melt its way through the pressure vessel. The time required for the molten core to burn through the bottom of the vessel depends on temperatures and vessel materials. Whether or not the fuel remains critical in the conditions inside the damaged core, or beyond it, will play a significant role.
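For item 2, the driving term is decay heat. A commonly used rule of thumb is the Way-Wigner approximation, P/P0 ~ 0.066 * (t^-0.2 - (t + T)^-0.2), with t the time since shutdown and T the prior operating time, both in seconds; it is quoted here as an assumption for illustration, not as part of the source text.

# Way-Wigner style estimate of decay heat after shutdown (illustrative only).
def decay_heat_fraction(t_after_shutdown_s, operating_time_s):
    """Approximate decay heat as a fraction of pre-shutdown thermal power."""
    t, T = t_after_shutdown_s, operating_time_s
    return 0.066 * (t ** -0.2 - (t + T) ** -0.2)

one_year = 365 * 24 * 3600.0
for t in (1.0, 60.0, 3600.0, 86400.0, 7 * 86400.0):
    frac = decay_heat_fraction(t, one_year)
    print(f"{t:10.0f} s after shutdown: ~{frac:.2%} of full power")

The point of the sketch is the time scale: decay heat falls from several percent of full power at shutdown to well under one percent after a day, but it never drops to zero quickly, which is why the boil-off and melt times in the list above matter.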

Three Mile Island Nuclear Accident

The accident at the Three Mile Island Unit 2 (TMI-2) nuclear power plant near Middletown, Pennsylvania, on March 28, 1979, was the most serious in U.S. commercial nuclear power plant operating history,(1) even though it led to no deaths or injuries to plant workers or members of the nearby community. But it brought about sweeping changes involving emergency response planning, reactor operator training, human factors engineering, radiation protection, and many other areas of nuclear power plant operations. It also caused the U.S. Nuclear Regulatory Commission to tighten and heighten its regulatory oversight. Resultant changes in the nuclear power industry and at the NRC had the effect of enhancing safety. A sequence of events -- equipment malfunctions, design-related problems and worker errors -- led to a partial meltdown of the TMI-2 reactor core but only very small off-site releases of radioactivity.

Summary of Events
The accident began about 4:00 a.m. on March 28, 1979, when the plant experienced a failure in the secondary, non-nuclear section of the plant. The main feedwater pumps stopped running, caused by either a mechanical or electrical failure, which prevented the steam generators from removing heat. First the turbine, then the reactor automatically shut down. Immediately, the pressure in the primary system (the nuclear portion of the plant) began to increase. In order to prevent that pressure from becoming excessive, the pilot-operated relief valve (a valve located at the top of the pressurizer) opened. The valve should have closed when the pressure decreased by a certain amount, but it did not. Signals available to the operator failed to show that the valve was still open. As a result, cooling water poured out of the stuck-open valve and caused the core of the reactor to overheat.

As coolant flowed from the core through the pressurizer, the instruments available to reactor operators provided confusing information. There was no instrument that showed the level of coolant in the core. Instead, the operators judged the level of water in the core by the level in the pressurizer, and since it was high, they assumed that the core was properly covered with coolant. In addition, there was no clear signal that the pilot-operated relief valve was open. As a result, as alarms rang and warning lights flashed, the operators did not realize that the plant was experiencing a loss-of-coolant accident. They took a series of actions that made conditions worse by simply reducing the flow of coolant through the core.

Because adequate cooling was not available, the nuclear fuel overheated to the point at which the zirconium cladding (the long metal tubes which hold the nuclear fuel pellets) ruptured and the fuel pellets began to melt. It was later found that about one-half of the core melted during the early stages of the accident. Although the TMI-2 plant suffered a severe core meltdown, the most dangerous kind of nuclear power accident, it did not produce the worst-case consequences that reactor experts had long feared. In a worst-case accident, the melting of nuclear fuel would lead to a breach of the walls of the containment building and release massive quantities of radiation to the environment. But this did not occur as a result of the Three Mile Island accident.

The accident caught federal and state authorities off guard. They were concerned about the small releases of radioactive gases that were measured off-site by the late morning of March 28 and even more concerned about the potential threat that the reactor posed to the surrounding population. They did not know that the core had melted, but they immediately took steps to try to gain control of the reactor and ensure adequate cooling to the core. The NRC's regional office in King of Prussia, Pennsylvania, was notified at 7:45 a.m. on March 28. By 8:00, NRC Headquarters in Washington, D.C. was alerted and the NRC Operations Center in Bethesda, Maryland, was activated. The regional office promptly dispatched the first team of inspectors to the site, and other agencies, such as the Department of Energy and the Environmental Protection Agency, also mobilized their response teams. Helicopters hired by TMI's owner, General Public Utilities Nuclear, and the Department of Energy were sampling radioactivity in the atmosphere above the plant by midday. A team from the Brookhaven National Laboratory was also sent to assist in radiation monitoring.

At 9:15 a.m., the White House was notified, and at 11:00 a.m., all non-essential personnel were ordered off the plant's premises. By the evening of March 28, the core appeared to be adequately cooled and the reactor appeared to be stable.

But new concerns arose by the morning of Friday, March 30. A significant release of radiation from the plant's auxiliary building, performed to relieve pressure on the primary system and avoid curtailing the flow of coolant to the core, caused a great deal of confusion and consternation. In an atmosphere of growing uncertainty about the condition of the plant, the governor of Pennsylvania, Richard L. Thornburgh, consulted with the NRC about evacuating the population near the plant. Eventually, he and NRC Chairman Joseph Hendrie agreed that it would be prudent for those members of society most vulnerable to radiation to evacuate the area. Thornburgh announced that he was advising pregnant women and preschool-age children within a 5-mile radius of the plant to leave the area.

Within a short time, the presence of a large hydrogen bubble in the dome of the pressure vessel, the container that holds the reactor core, stirred new worries. The concern was that the hydrogen bubble might burn or even explode and rupture the pressure vessel. In that event, the core would fall into the containment building and perhaps cause a breach of containment. The hydrogen bubble was a source of intense scrutiny and great anxiety, both among government authorities and the population, throughout the day on Saturday, March 31. The crisis ended when experts determined on Sunday, April 1, that the bubble could not burn or explode because of the absence of oxygen in the pressure vessel. Further, by that time, the utility had succeeded in greatly reducing the size of the bubble.

Health Effects
Detailed studies of the radiological consequences of the accident have been conducted by the NRC, the Environmental Protection Agency, the Department of Health, Education and Welfare (now Health and Human Services), the Department of Energy, and the State of Pennsylvania. Several independent studies have also been conducted. Estimates are that the average dose to about 2 million people in the area was only about 1 millirem. To put this into context, exposure from a full set of chest x-rays is about 6 millirem. Compared to the natural radioactive background dose of about 100-125 millirem per year for the area, the collective dose to the community from the accident was very small. The maximum dose to a person at the site boundary would have been less than 100 millirem. In the months following the accident, although questions were raised about possible adverse effects from radiation on human, animal, and plant life in the TMI area, none could be directly correlated to the accident. Thousands of environmental samples of air, water, milk, vegetation, soil, and foodstuffs were collected by various groups monitoring the area. Very low levels of radionuclides could be attributed to releases from the accident. However, comprehensive investigations and assessments by several well-respected organizations have concluded that in spite of serious damage to the reactor, most of the radiation was contained and that the actual release had negligible effects on the physical health of individuals or the environment.
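The collective dose figure implied by these numbers is simple arithmetic; the sketch below just multiplies the values quoted above (about 2 million people, roughly 1 millirem each) and converts units. It is illustrative only.

# Collective dose arithmetic using the figures quoted in the text.
population = 2_000_000            # people in the surrounding area (from the text)
avg_dose_mrem = 1.0               # average individual dose (from the text)
background_mrem_per_year = 112.5  # midpoint of the 100-125 mrem/yr range above

collective_person_rem = population * avg_dose_mrem / 1000.0   # mrem -> rem
collective_person_sv = collective_person_rem / 100.0           # rem -> sievert

print(f"Collective dose: ~{collective_person_rem:,.0f} person-rem "
      f"(~{collective_person_sv:.0f} person-Sv)")
print(f"Average dose as a share of annual background: "
      f"{avg_dose_mrem / background_mrem_per_year:.1%}")

On these figures the accident added roughly 2,000 person-rem to a population that receives on the order of 200,000 person-rem from natural background every year, which is the sense in which the collective dose is described above as very small.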

Impact of the Accident


The accident was caused by a combination of personnel error, design deficiencies, and component failures. There is no doubt that the accident at Three Mile Island permanently changed both the nuclear industry and the NRC. Public fear and distrust increased, NRC's regulations and oversight became broader and more robust, and management of the plants was scrutinized more carefully. The problems identified from careful analysis of the events during those days have led to permanent and sweeping changes in how NRC regulates its licensees -- which, in turn, has reduced the risk to public health and safety. Here are some of the major changes which have occurred since the accident:

Upgrading and strengthening of plant design and equipment requirements. This includes fire protection, piping systems, auxiliary feedwater systems, containment building isolation, reliability of individual components (pressure relief valves and electrical circuit breakers), and the ability of plants to shut down automatically;

Identifying human performance as a critical part of plant safety, revamping operator training and staffing requirements, followed by improved instrumentation and controls for operating the plant, and establishment of fitness-for-duty programs for plant workers to guard against alcohol or drug abuse;

Improved instruction to avoid the confusing signals that plagued operations during the accident;

Enhancement of emergency preparedness to include immediate NRC notification requirements for plant events and an NRC operations center which is now staffed 24 hours a day. Drills and response plans are now tested by licensees several times a year, and state and local agencies participate in drills with the Federal Emergency Management Agency and NRC;

Establishment of a program to integrate NRC observations, findings, and conclusions about licensee performance and management effectiveness into a periodic, public report;

Regular analysis of plant performance by senior NRC managers who identify those plants needing additional regulatory attention;

Expansion of NRC's resident inspector program -- first authorized in 1977 -- whereby at least two inspectors live nearby and work exclusively at each plant in the U.S. to provide daily surveillance of licensee adherence to NRC regulations;

Expansion of performance-oriented as well as safety-oriented inspections, and the use of risk assessment to identify vulnerabilities of any plant to severe accidents;

Strengthening and reorganization of enforcement as a separate office within the NRC;

The establishment of the Institute of Nuclear Power Operations (INPO), the industry's own "policing" group, and formation of what is now the Nuclear Energy Institute to provide a unified industry approach to generic nuclear regulatory issues, and interaction with NRC and other government agencies;

The installation of additional equipment by licensees to mitigate accident conditions, and to monitor radiation levels and plant status;

Employment of major initiatives by licensees in early identification of important safety-related problems, and in collecting and assessing relevant data so lessons of experience can be shared and quickly acted upon;

Expansion of NRC's international activities to share enhanced knowledge of nuclear safety with other countries in a number of important technical areas.

Current Status
Today, the TMI-2 reactor is permanently shut down and defueled, with the reactor coolant system drained, the radioactive water decontaminated and evaporated, radioactive waste shipped off-site to an appropriate disposal site, reactor fuel and core debris shipped off-site to a Department of Energy facility, and the remainder of the site being monitored. The owner says it will keep the facility in long-term, monitored storage until the operating license for the TMI-1 plant expires, at which time both plants will be decommissioned. Below is a chronology of highlights of the TMI-2 cleanup from 1980 through 1993.

July 1980 - Approximately 43,000 curies of krypton were vented from the reactor building.
July 1980 - The first manned entry into the reactor building took place.
Nov. 1980 - An Advisory Panel for the Decontamination of TMI-2, composed of citizens, scientists, and State and local officials, held its first meeting in Harrisburg, PA.
July 1984 - The reactor vessel head (top) was removed.
Oct. 1985 - Defueling began.
July 1986 - The off-site shipment of reactor core debris began.
Aug. 1988 - GPU submitted a request for a proposal to amend the TMI-2 license to a "possession-only" license and to allow the facility to enter long-term monitoring storage.
Jan. 1990 - Defueling was completed.
July 1990 - GPU submitted its funding plan for placing $229 million in escrow for radiological decommissioning of the plant.
Jan. 1991 - The evaporation of accident-generated water began.
April 1991 - NRC published a notice of opportunity for a hearing on GPU's request for a license amendment.
Feb. 1992 - NRC issued a safety evaluation report and granted the license amendment.
Aug. 1993 - The processing of accident-generated water was completed, involving 2.23 million gallons.
Sept. 1993 - NRC issued a possession-only license.
Sept. 1993 - The Advisory Panel for Decontamination of TMI-2 held its last meeting.
Dec. 1993 - Post-Defueling Monitoring Storage began.

Additional Information
Further information on the TMI-2 accident can be obtained from sources listed below. The documents can be ordered for a fee from the NRC's Public Document Room at 301-415-4737 or 1-800-397-4209; e-mail pdr@nrc.gov. The PDR is located at 11555 Rockville Pike, Rockville, Maryland; however the mailing address is U.S. Nuclear Regulatory Commission, Public Document Room, Washington, D.C. 20555. A glossary is also provided below.

Additional Sources for Information on Three Mile Island


NRC Annual Report - 1979, NUREG-0690
"Population Dose and Health Impact of the Accident at the Three Mile Island Nuclear Station," NUREG-0558
"Environmental Assessment of Radiological Effluents from Data Gathering and Maintenance Operation on Three Mile Island Unit 2," NUREG-0681
"Report of the President's Commission on the Accident at Three Mile Island," October 1979
"Investigation into the March 28, 1979 Three Mile Island Accident by the Office of Inspection and Enforcement," NUREG-0600
"Three Mile Island: A Report to the Commissioners and to the Public," by Mitchell Rogovin and George T. Frampton, NUREG/CR-1250, vols. I-II, 1980
"Lessons Learned from the Three Mile Island - Unit 2 Advisory Panel," NUREG/CR-6252
"The Status of Recommendations of the President's Commission on the Accident at Three Mile Island" (a 10-year review), NUREG-1355
"NRC Views and Analysis of the Recommendations of the President's Commission on the Accident at Three Mile Island," NUREG-0632
"Environmental Impact Statement related to decontamination and disposal of radioactive wastes resulting from March 28, 1979 accident Three Mile Island Nuclear Station, Unit 2," NUREG-0683
"Answers to Questions About Updated Estimates of Occupational Radiation Doses at Three Mile Island, Unit 2," NUREG-1060
"Answers to Frequently Asked Questions About Cleanup Activities at Three Mile Island, Unit 2," NUREG-0732
"Status of Safety Issues at Licensed Power Plants" (TMI Action Plan Requirements), NUREG-1435
Walker, J. Samuel, Three Mile Island: A Nuclear Crisis in Historical Perspective, Berkeley: University of California Press, 2004.

Other Organizations to Contact


GPU Nuclear Corp, One Upper Pond Road, Parsippany, NJ 07054, telephone 201-316-7249
Three Mile Island Public Health Fund, 1622 Locust Street, Philadelphia, PA 19103, telephone 215-875-3026
Pennsylvania Bureau of Radiation Protection, Department of Environmental Protection, Rachel Carson State Office Building, P.O. Box 8469, Harrisburg, PA 17105-8469, telephone 717-787-2480

March 2004

Glossary
Auxiliary feedwater - (see emergency feedwater)
Background radiation - The radiation in the natural environment, including cosmic rays and radiation from the naturally radioactive elements, both outside and inside the bodies of humans and animals. The usually quoted average individual exposure from background radiation is 360 millirem per year.
Cladding - The thin-walled metal tube that forms the outer jacket of a nuclear fuel rod. It prevents the corrosion of the fuel by the coolant and the release of fission products into the coolant. Aluminum, stainless steel and zirconium alloys are common cladding materials.
Emergency feedwater system - Backup feedwater supply used during nuclear plant startup and shutdown; also known as auxiliary feedwater.
Fuel rod - A long, slender tube that holds fuel (fissionable material) for nuclear reactor use. Fuel rods are assembled into bundles called fuel elements or fuel assemblies, which are loaded individually into the reactor core.
Containment - The gas-tight shell or other enclosure around a reactor to confine fission products that otherwise might be released to the atmosphere in the event of an accident.
Coolant - A substance circulated through a nuclear reactor to remove or transfer heat. The most commonly used coolant in the U.S. is water. Other coolants include air, carbon dioxide, and helium.
Core - The central portion of a nuclear reactor containing the fuel elements and control rods.
Decay heat - The heat produced by the decay of radioactive fission products after the reactor has been shut down.
Decontamination - The reduction or removal of contaminating radioactive material from a structure, area, object, or person. Decontamination may be accomplished by (1) treating the surface to remove or decrease the contamination; (2) letting the material stand so that the radioactivity is decreased by natural decay; and (3) covering the contamination to shield the radiation emitted.
Feedwater - Water supplied to the steam generator that removes heat from the fuel rods by boiling and becoming steam. The steam then becomes the driving force for the turbine generator.
Nuclear Reactor - A device in which nuclear fission may be sustained and controlled in a self-supporting nuclear reaction. There are several varieties, but all incorporate certain features, such as fissionable material or fuel, a moderating material (to control the reaction), a reflector to conserve escaping neutrons, provisions for removal of heat, measuring and controlling instruments, and protective devices.
Pressure Vessel - A strong-walled container housing the core of most types of power reactors.
Pressurizer - A tank or vessel that controls the pressure in a certain type of nuclear reactor.

Primary System - The cooling system used to remove energy from the reactor core and transfer that energy either directly or indirectly to the steam turbine.
Radiation - Particles (alpha, beta, neutrons) or photons (gamma) emitted from the nucleus of an unstable atom as a result of radioactive decay.
Reactor Coolant System - (see primary system)
Secondary System - The steam generator tubes, steam turbine, condenser and associated pipes, pumps, and heaters used to convert the heat energy of the reactor coolant system into mechanical energy for electrical generation.
Steam Generator - The heat exchanger used in some reactor designs to transfer heat from the primary (reactor coolant) system to the secondary (steam) system. This design permits heat exchange with little or no contamination of the secondary system equipment.
Turbine - A rotary engine made with a series of curved vanes on a rotating shaft, usually turned by water or steam. Turbines are considered to be the most economical means to turn large electrical generators.

Chernobyl Nuclear Disaster

Date and Time: The Chernobyl nuclear accident occurred on Saturday, April 26, 1986, at 1:23:58 a.m. local time.

Location: The V.I. Lenin Memorial Chernobyl Nuclear Power Station was located in Ukraine, near the town of Pripyat, which had been built to house power station employees and their families. The power station was in a wooded, marshy area near the Ukraine-Belarus border, approximately 18 kilometers northwest of the city of Chernobyl and 100 km north of Kiev, the capital of Ukraine.

Background: The Chernobyl Nuclear Power Station included four nuclear reactors, each capable of producing one gigawatt of electric power. At the time of the accident, the four reactors produced about 10 percent of the electricity used in Ukraine. Construction of the Chernobyl power station began in the 1970s. The first of the four reactors was commissioned in 1977, and Reactor No. 4 began producing power in 1983. When the accident occurred in 1986, two other nuclear reactors were under construction.

The Accident: On April 26, 1986, the operating crew planned to test whether the Reactor No. 4 turbines could produce enough energy to keep the coolant pumps running until the emergency diesel generator was activated in case of an external power loss. During the test, power surged unexpectedly, causing an explosion and driving temperatures in the reactor to more than 2,000 degrees Celsius, melting the fuel rods, igniting the reactor's graphite covering, and releasing a cloud of radiation into the atmosphere.

Causes: The precise causes of the accident are still uncertain, but it is generally believed that the series of incidents that led to the explosion, fire and nuclear meltdown at Chernobyl was caused by a combination of reactor design flaws and operator error.

Number of Deaths: By mid-2005, fewer than 60 deaths could be linked directly to Chernobyl, mostly workers who were exposed to massive radiation during the accident or children who developed thyroid cancer. Estimates of the eventual death toll from Chernobyl vary widely. A 2005 report by the Chernobyl Forum (eight U.N. organizations) estimated the accident eventually would cause about 4,000 deaths. Greenpeace places the figure at 93,000 deaths, based on information from the Belarus National Academy of Sciences.

Physical Health Effects: The Belarus National Academy of Sciences estimates that 270,000 people in the region around the accident site will develop cancer as a result of Chernobyl radiation and that 93,000 of those cases are likely to be fatal. Another report, by the Center for Independent Environmental Assessment of the Russian Academy of Sciences, found a dramatic increase in mortality since 1990 (60,000 deaths in Russia and an estimated 140,000 deaths in Ukraine and Belarus), probably due to Chernobyl radiation.

Psychological Effects: The biggest challenge facing communities still coping with the fallout of Chernobyl is the psychological damage to 5 million people in Belarus, Ukraine and Russia. "The psychological impact is now considered to be Chernobyl's biggest health consequence," said Louisa Vinton, of the UNDP. "People have been led to think of themselves as victims over the years, and are therefore more apt to take a passive approach toward their future rather than developing a system of self-sufficiency."

Countries and Communities Affected: Seventy percent of the radioactive fallout from Chernobyl landed in Belarus, affecting more than 3,600 towns and villages, and 2.5 million people. The radiation contaminated the soil, which in turn contaminated the crops that people rely on for food. Many regions in Russia, Belarus and Ukraine are likely to be contaminated for decades. Radioactive fallout carried by the wind was later found in sheep in the UK, on clothing worn by people throughout Europe, and in rain in the United States.

Status and Outlook: The Chernobyl accident cost the former Soviet Union hundreds
of billions of dollars, and some observers believe it may have hastened the collapse of the Soviet government. After the accident, Soviet authorities resettled more than 350,000 people outside the worst areas, including all 50,000 people from nearby Pripyat, but millions of people continue to live in contaminated areas.

After the breakup of the Soviet Union, many projects intended to improve life in the region were abandoned, and young people began to move away to pursue careers and build new lives in other places. "In many villages, up to 60 percent of the population is made up of pensioners," said Vasily Nesterenko, director of the Belrad Radiation Safety and Protection Institute in Minsk. "In most of these villages, the number of people able to work is two or three times lower than normal."

After the accident, Reactor No. 4 was sealed, but the Ukrainian government allowed the other three reactors to keep operating because the country needed the power they provided. Reactor No. 2 was shut down after a fire damaged it in 1991, and Reactor No. 1 was decommissioned in 1996. In November 2000, the Ukrainian president shut down Reactor No. 3 in an official ceremony that finally closed the Chernobyl facility. But Reactor No. 4, which was damaged in the 1986 explosion and fire, is still full of radioactive material encased inside a concrete barrier, called a sarcophagus, that is aging badly and needs to be replaced. Water leaking into the reactor carries radioactive material throughout the facility and threatens to seep into the groundwater. The sarcophagus was designed to last about 30 years, and current designs would create a new shelter with a lifetime of 100 years. But radioactivity in the damaged reactor would need to be contained for 100,000 years to ensure safety. That is a challenge not only for today, but for many generations to come.

1. The catastrophic Chernobyl accident in the former Soviet Union, in 1986, was by far the most severe nuclear reactor accident to occur in any country; it is widely believed an accident of that type could not occur in U.S.-designed plants. For more detail on the accident at Chernobyl, see Backgrounder.

A nuclear meltdown occurs when the core of a nuclear reactor ceases to be properly controlled and cooled due to failure of control or safety systems, and fuel assemblies (containing the uranium or plutonium reactor fuel and highly radioactive fission products) inside the reactor begin to overheat and melt. A meltdown is considered a serious nuclear accident because of the probability that it will defeat one or more reactor containment systems and potentially release highly radioactive fission products to the environment. Several nuclear meltdowns of differing severity have occurred throughout the history of both civilian and military nuclear reactor operations. All nuclear meltdowns are characterized by severe damage to the reactor in which they occur. In some cases this has required extensive repairs or decommissioning of the reactor, and in more severe cases it has required civilian evacuations.

Causes
In nuclear reactors, the fuel assemblies in the core can melt as a result of a loss of pressure control accident, loss of coolant accident, uncontrolled power excursion accident, or by a fire around the fuel assemblies.

In a loss of pressure control accident, the pressure of the confined coolant falls below specification without the means to restore it. In some cases this may reduce the heat transfer efficiency (when using an inert gas as a coolant) and in others may form an insulating 'bubble' of steam surrounding the fuel assemblies (in pressurized water reactors). In the latter case, because the steam 'bubble' is locally heated by decay heat, the pressure required to collapse it may exceed reactor design specifications until the reactor has had time to cool down.

In a loss of coolant accident, either the coolant itself (typically deionized water, an inert gas, or liquid sodium) is physically lost, or the means of ensuring a sufficient flow rate of the coolant is lost. Loss of coolant accidents and loss of pressure control accidents are closely related in some reactors. In a pressurized water reactor, a loss of coolant accident can also cause a steam 'bubble' to form in the core, either through excessive heating of stalled coolant or through the subsequent loss of pressure control caused by a rapid loss of coolant.

In an uncontrolled power excursion accident, a sudden power spike in the reactor exceeds reactor design specifications due to a sudden increase in reactor reactivity. An uncontrolled power excursion occurs when a parameter that affects the exponential rate of the nuclear chain reaction is significantly altered (examples include ejecting a control rod or significantly altering the nuclear characteristics of the moderator, such as by rapid cooling). In extreme cases the reactor may proceed to a condition known as prompt critical.

In certain reactor designs it is possible for hydrogen or graphite to ignite inside the reactor core. A fire inside the reactor may be caused by failure to carefully control the amount of hydrogen in the coolant, by air entering certain types of nuclear reactors, by the uncontrolled heating of the coolant or moderator through the types of accidents listed above, or by an external source. Fires can be a much more severe casualty for reactors that are moderated with graphite, because without proper precautions Wigner energy may accumulate (as during the Windscale fire).

In all of these cases, a reactor meltdown occurs when the fuel assemblies are heated significantly beyond their design specifications. In some cases (such as the Chernobyl accident) this may be almost instantaneous, and in others it could take hours or more (as at Three Mile Island). A nuclear reactor does not have to remain critical for a meltdown to occur, since decay heat or fires can continue to heat the fuel assemblies long after the reactor has shut down.

Sequence of events
What happens when a reactor core melts is the subject of conjecture and some actual experience (see below).

Before the core of a nuclear reactor can melt, a number of events/failures must already have happened. Once the core melts, it will almost certainly destroy the fuel bundles and internal structures of the reactor vessel (although it may not penetrate the reactor vessel). [Note that the core at Three Mile Island did melt nearly completely but stayed within the reactor vessel.] If the melt drops into a pool of water (for example, coolant or moderator), a steam explosion called a Fuel-Coolant Interaction (FCI) is likely. If air is available, any exposed flammable substances will probably burn fiercely, but the liquid nature of the molten core poses special problems.

In the worst case scenario, the above-ground containment would fail at an early stage (due, for example, to an FCI within the reactor vessel ejecting part of the vessel as a missile - this is the 'alpha-mode' failure of the 1975 Rasmussen (WASH-1400) study), or there could be a large hydrogen explosion or other over-pressure event. Such an event could scatter urania aerosol and volatile fission products directly into the atmosphere. However, these events are considered essentially incredible in modern 'large-dry' containments. (The WASH-1400 report has been supplanted by the 1991 NUREG-1150 study.)

It is an open question to what extent a molten mass can melt through a structure. The molten reactor core could penetrate the reactor vessel and the containment structure and burn down (core-concrete interaction) to ground water; this has not happened at any meltdown to date. It is possible that, as in the Chernobyl accident, the molten mass might mix with any material it melts, diluting itself down to a non-critical state. [Note that the molten core at Chernobyl flowed in a channel created by the structure of its reactor building, e.g., stairways, and froze in place before core-concrete interaction. In the basement of the reactor at Chernobyl, a large "elephant's foot" of congealed core material was found.] Furthermore, the time delay and the lack of a direct path to the atmosphere would work to significantly ameliorate the radiological release. Any steam explosions/FCIs which occurred would probably work mainly to increase cooling of the core debris. However, the ground water itself would likely be severely contaminated, and its flow could carry the contamination far afield.

In the best case scenario, the reactor vessel would hold the molten material (as at Three Mile Island), limiting most of the damage to the reactor itself. However, the Three Mile Island example also illustrates the difficulty of predicting such behavior: the reactor vessel was not built, and not expected, to sustain the temperatures it experienced during its meltdown, but because some of the melted material collected at the bottom of the vessel and cooled early in the accident, it created a resistant shell against further pressure and heat. Such a possibility was not predicted by the engineers who designed the reactor and would not necessarily occur under duplicate conditions, but it was largely seen as instrumental in preserving the integrity of the vessel.

Effects

The effects of a nuclear meltdown depend on the safety features designed into a reactor. A modern reactor is designed both to make a meltdown highly unlikely, and to contain one should it occur. In the future, passively safe or inherently safe designs will make the possibility exceedingly unlikely. In a modern reactor, a nuclear meltdown, whether partial or total, should be contained inside the reactor's containment structure. Thus (assuming that no other major disasters occur), while the meltdown will severely damage the reactor itself, possibly contaminating the whole structure with highly radioactive material, a meltdown alone will generally not lead to significant radiation release or danger to the public. The effects are therefore primarily economic. In practice, however, a nuclear meltdown is often part of a larger chain of disasters (although there have been so few meltdowns in the history of nuclear power that there is not a large pool of statistical information from which to draw a credible conclusion as to what "often" happens in such circumstances). For example, in the Chernobyl accident, by the time the core melted, there had already been a large steam explosion and graphite fire and a major release of radioactive contamination (as with almost all Soviet reactors, there was no containment structure at Chernobyl).

Reactor design
Although pressurized water reactors are more susceptible to nuclear meltdown in the absence of active safety measures, this is not a universal feature of civilian nuclear reactors. Much of the research in civilian nuclear reactors is for designs with passive safety features that would be much less susceptible to meltdown, even if all emergency systems failed. For example, pebble bed reactors are designed so that complete loss of coolant for an indefinite period does not result in the reactor overheating. The General Electric ESBWR and Westinghouse AP1000 have passively-activated safety systems. Fast breeder reactors are more susceptible to meltdown than other reactor types, due to the larger quantity of fissile material and the higher neutron flux inside the reactor core, which makes it more difficult to control the reaction. In addition, the liquid sodium coolant is highly corrosive and very difficult to manage.

Other theoretical consequences of a nuclear meltdown


If the reactor core becomes too hot, it might melt through the reactor vessel (although this has not happened to date) and the floor of the reactor chamber and descend until it becomes diluted by surrounding material and cooled enough to no longer melt through the material underneath, or until it hits groundwater. Note that a thermonuclear explosion does not happen in a nuclear meltdown, although a steam explosion may occur if the molten core hits water. The geometry and presence of the coolant has a twin role: it both cools the reactor and slows down emitted neutrons. The latter role is crucial to maintaining the chain reaction, so even without coolant the molten core is designed to be unable to form an uncontrolled critical mass (a recriticality). However, the molten reactor core will continue generating enough heat through unmoderated radioactive decay ('decay heat') to maintain or even increase its temperature. One possibility is that a large steam explosion could occur when the molten mass encountered water (in the lower plenum of the reactor vessel or on the floor of the room the reactor is in). Also, if the melt were to go through the floor of the reactor building, the result would depend on the substance underneath.
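The scale of this decay heat can be illustrated with the classical Way-Wigner approximation for fission-product decay power after shutdown. The sketch below is illustrative only; the 3,000 MW(thermal) core and one-year operating period are assumed values, not figures from the text.

# Rough sketch of decay-heat power after shutdown using the Way-Wigner approximation.
def decay_heat_fraction(t_after_shutdown_s, t_operating_s):
    # Fraction of the pre-shutdown thermal power still produced by fission-product decay.
    return 0.0622 * (t_after_shutdown_s ** -0.2 - (t_after_shutdown_s + t_operating_s) ** -0.2)

P0_MW = 3000.0                 # assumed thermal power of a large reactor before shutdown
one_year_s = 3.156e7           # assumed prior operating period, in seconds
for t in (1.0, 60.0, 3600.0, 86400.0):   # 1 s, 1 min, 1 h, 1 day after shutdown
    print(t, "s:", round(P0_MW * decay_heat_fraction(t, one_year_s), 1), "MW of decay heat")

Even a day after shutdown this rough estimate still gives on the order of ten megawatts of heat, which is why cooling must continue long after the chain reaction has stopped.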

Meltdowns
A number of Russian nuclear submarines have experienced nuclear meltdowns. The only known large scale nuclear meltdowns at civilian nuclear power plants were the Chernobyl accident in Ukraine in 1986 and the Three Mile Island accident in Pennsylvania, USA, in 1979, although there have been several partial core meltdowns, including accidents at:

NRX, Ontario, Canada, in 1952
EBR-I, Idaho, USA, in 1955
Windscale, Sellafield, England, in 1957
Santa Susana Field Laboratory, Simi Hills, California, in 1959
Enrico Fermi Nuclear Generating Station, Michigan, USA, in 1966
Chapelcross, Dumfries and Galloway, Scotland, in 1967

Not all of these were caused by a loss of coolant and in several cases (the Chernobyl accident and the Windscale fire, for example) the meltdown was not the most severe problem. The Three Mile Island accident was caused by a loss of coolant, but "despite melting of about one-third of the fuel, the reactor vessel itself maintained its integrity and contained the damaged fuel".

Nuclear power is the controlled use of nuclear reactions to release energy for work including propulsion, heat, and the generation of electricity. Human use of nuclear power to do significant useful work is currently limited to nuclear fission and radioactive decay. Nuclear energy is produced when a fissile material, such as uranium-235 (235U), is concentrated such that the natural rate of radioactive decay is accelerated in a controlled chain reaction and creates heat, which is used to boil water, produce steam, and drive a steam turbine. The turbine can be used for mechanical work and also to generate electricity. Nuclear power is used to power most military submarines and aircraft carriers and provides 7% of the world's energy and 17% of the world's electricity. The United States produces the most nuclear energy, with nuclear power providing 20% of the electricity it consumes, while France produces the highest percentage of its energy from nuclear reactors: 80% as of 2006.

International research is ongoing into various safety improvements, the use of nuclear fusion, and additional uses such as the generation of hydrogen (in support of hydrogen economy schemes), the desalination of sea water, and use in district heating systems. Construction of nuclear power plants declined following the 1979 Three Mile Island accident and the 1986 disaster at Chernobyl. Lately, there has been renewed interest in nuclear energy from national governments, the public, and some notable environmentalists due to increased oil prices, new passively safe plant designs, and the low emission rate of greenhouse gases, which some governments need in order to meet the standards of the Kyoto Protocol. A few reactors are under construction, and several new types of reactors are planned.

The use of nuclear power is controversial because of the problem of storing radioactive waste for indefinite periods, the potential for possibly severe radioactive contamination by accident or sabotage, and the possibility that its use in some countries could lead to the proliferation of nuclear weapons. Proponents believe that these risks are small and can be further reduced by the technology in the new reactors. They further claim that the safety record is already good when compared to fossil-fuel plants, that nuclear power releases much less radioactive waste than coal power, and that nuclear power is a sustainable energy source. Critics, including most major environmental groups, believe nuclear power is an uneconomic, unsound and potentially dangerous energy source, especially compared to renewable energy, and dispute whether the costs and risks can be reduced through new technology. There is concern in some countries over North Korea and Iran operating research reactors and fuel enrichment plants, since those countries refuse adequate IAEA oversight and are believed to be trying to develop nuclear weapons.
Nuclear power plants provide about 17 percent of the world's electricity. Some countries depend more on nuclear power for electricity than others. In France, for instance, about 75 percent of the electricity is generated from nuclear power, according to the International Atomic Energy Agency. In the United States, nuclear power supplies about 15 percent of the electricity overall, but some states get more power from nuclear plants than others. There are more than 400 nuclear power plants around the world, with more than 100 in the United States.

The dome-shaped containment building at the Shearon Harris Nuclear Power Plant near Raleigh, NC

Have you ever wondered how a nuclear power plant works or how safe nuclear power is? In this article, we will examine how a nuclear reactor and a power plant work. We'll explain nuclear fission and give you a view inside a nuclear reactor.

Uranium is a fairly common element on Earth, incorporated into the planet during the planet's formation. Uranium was originally formed in stars: old stars exploded, and the dust from these shattered stars aggregated together to form our planet. Uranium-238 (U-238) has an extremely long half-life (4.5 billion years), and therefore is still present in fairly large quantities. U-238 makes up 99 percent of the uranium on the planet. U-235 makes up about 0.7 percent of the remaining uranium found naturally, while U-234 is even more rare and is formed by the decay of U-238. (Uranium-238 goes through many stages of alpha and beta decay to form a stable isotope of lead, and U-234 is one link in that chain.)

Uranium-235 has an interesting property that makes it useful for both nuclear power production and nuclear bomb production. U-235 decays naturally, just as U-238 does, by alpha radiation. U-235 also undergoes spontaneous fission a small percentage of the time. However, U-235 is one of the few materials that can undergo induced fission. If a free neutron runs into a U-235 nucleus, the nucleus will absorb the neutron without hesitation, become unstable and split immediately. See How Nuclear Radiation Works for complete details.

Nuclear Fission
There are three things about this induced fission process that make it especially interesting:

The probability of a U-235 atom capturing a neutron as it passes by is fairly high. In a reactor working properly (known as the critical state), one neutron ejected from each fission causes another fission to occur.
The process of capturing the neutron and splitting happens very quickly, on the order of picoseconds (1 x 10^-12 seconds).

An incredible amount of energy is released, in the form of heat and gamma radiation, when a single atom splits. The two atoms that result from the fission later release beta radiation and gamma radiation of their own as well. The energy released by a single fission comes from the fact that the fission products and the neutrons, together, weigh less than the original U-235 atom. The difference in weight is converted directly to energy at a rate governed by the equation E = mc^2. Something on the order of 200 MeV (million electron volts) is released by the fission of one U-235 atom (if you would like to convert that into something useful, consider that 1 eV is equal to 1.602 x 10^-12 ergs, 1 x 10^7 ergs is equal to 1 joule, 1 joule equals 1 watt-second, and 1 BTU equals 1,055 joules). That may not seem like much, but there are a lot of uranium atoms in a pound of uranium. So many, in fact, that a pound of highly enriched uranium as used to power a nuclear submarine or nuclear aircraft carrier is equal to something on the order of a million gallons of gasoline. When you consider that a pound of uranium is smaller than a baseball, and a million gallons of gasoline would fill a cube 50 feet per side (50 feet is as tall as a five-story building), you can get an idea of the amount of energy available in just a little bit of U-235.
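As a rough, illustrative check of these figures (not part of the original text), the conversion from 200 MeV per fission to energy per kilogram can be done in a few lines; the gasoline energy density used below is an assumed round number.

# Back-of-envelope conversion of 200 MeV per fission into energy per kilogram of U-235.
MEV_TO_J = 1.602e-13            # 1 MeV in joules (1 eV = 1.602e-19 J)
AVOGADRO = 6.022e23             # atoms per mole
U235_GRAMS_PER_MOLE = 235.0

energy_per_fission_J = 200 * MEV_TO_J                     # ~3.2e-11 J
atoms_per_kg = AVOGADRO / U235_GRAMS_PER_MOLE * 1000.0    # ~2.6e24 atoms
energy_per_kg_J = energy_per_fission_J * atoms_per_kg     # ~8e13 J per kilogram
print(energy_per_kg_J)

# Comparison with gasoline, assuming roughly 1.3e8 J per US gallon.
pound_of_u235_J = energy_per_kg_J * 0.4536                # 1 lb = 0.4536 kg
print(pound_of_u235_J / 1.3e8)                            # hundreds of thousands of gallons

On these assumptions a pound of pure U-235 corresponds to a few hundred thousand gallons of gasoline, the same broad order of magnitude as the comparison quoted above.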

In order for these properties of U-235 to work, a sample of uranium must be enriched so that it contains 2 percent to 3 percent or more of uranium-235. Three-percent enrichment is sufficient for use in a civilian nuclear reactor used for power generation. Weapons-grade uranium is composed of 90 percent or more U-235.

Inside a Nuclear Power Plant
To build a nuclear reactor, what you need is some mildly enriched uranium. Typically, the uranium is formed into pellets with approximately the same diameter as a dime and a length of an inch or so. The pellets are arranged into long rods, and the rods are collected together into bundles. The bundles are then typically submerged in water inside a pressure vessel. The water acts as a coolant. In order for the reactor to work, the bundle, submerged in water, must be slightly supercritical. That means that, left to its own devices, the uranium would eventually overheat and melt. To prevent this, control rods made of a material that absorbs neutrons are inserted into the bundle using a mechanism that can raise or lower them. Raising and lowering the control rods allows operators to control the rate of the nuclear reaction. When an operator wants the uranium core to produce more heat, the rods are raised out of the uranium bundle. To create less heat, the rods are lowered into the uranium bundle. The rods can also be lowered completely into the uranium bundle to shut the reactor down in the case of an accident or to change the fuel.

The uranium bundle acts as an extremely high-energy source of heat. It heats the water and turns it to steam. The steam drives a steam turbine, which spins a generator to produce power. In some reactors, the steam from the reactor goes through a secondary, intermediate heat exchanger to convert another loop of water to steam, which drives the turbine. The advantage of this design is that the radioactive water/steam never contacts the turbine. Also, in some reactors, the coolant fluid in contact with the reactor core is a gas (carbon dioxide) or liquid metal (sodium, potassium); these types of reactors allow the core to be operated at higher temperatures.
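The effect of raising and lowering the control rods can be pictured with a toy, generation-by-generation neutron-multiplication model. This is an illustrative sketch only; the k_eff values and starting population below are made-up numbers, not reactor data.

# Toy model of neutron multiplication: each generation multiplies the neutron
# population by an effective multiplication factor k_eff. Inserting control rods
# absorbs neutrons and lowers k_eff; withdrawing them raises it.
def neutron_population(k_eff, generations, n0=1000.0):
    history = [n0]
    for _ in range(generations):
        history.append(history[-1] * k_eff)
    return history

print(neutron_population(1.01, 10))   # rods withdrawn slightly: population (and power) grows
print(neutron_population(1.00, 10))   # exactly critical: steady population, steady power
print(neutron_population(0.95, 10))   # rods inserted: population dies away (shutdown)

In a real reactor the kinetics are far more involved (delayed neutrons, temperature feedback), but the critical/supercritical/subcritical distinction above is the idea the control rods exploit.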

A nuclear weapon is a weapon which derives its destructive force from nuclear reactions, either nuclear fission or the more powerful fusion. As a result, even a nuclear weapon with a relatively small yield is significantly more powerful than the largest conventional explosives, and a single weapon is capable of destroying an entire city.

In the history of warfare, nuclear weapons have been used only twice, both during the closing days of World War II. The first event occurred on the morning of August 6, 1945, when the United States dropped a uranium gun-type device code-named "Little Boy" on the Japanese city of Hiroshima. The second event occurred three days later when a plutonium implosion-type device code-named "Fat Man" was dropped on the city of Nagasaki. The use of these weapons, which resulted in the immediate deaths of around 100,000 to 200,000 individuals and even more over time, was and remains controversial: critics charged that they were unnecessary acts of mass killing, while others claimed that they ultimately reduced casualties on both sides by hastening the end of the war. This topic has seen increased debate recently in the wake of increased terrorism involving killings of civilians by both state and non-state players, with parties claiming that the end justifies the means. See Atomic bombings of Hiroshima and Nagasaki for a full discussion.

Since the Hiroshima and Nagasaki bombings, nuclear weapons have been detonated on over two thousand occasions for testing and demonstration purposes. The only known countries to have detonated such weapons are the United States, Soviet Union, United Kingdom, France, People's Republic of China, India, and Pakistan. These countries are the declared nuclear powers (with Russia inheriting the weapons of the Soviet Union after its collapse). Various other countries may hold nuclear weapons but have never publicly admitted possession, or their claims to possession have not been verified. For example, Israel has modern airborne delivery systems and appears to have an extensive nuclear program with hundreds of warheads (see Israel and weapons of mass destruction); North Korea has recently stated that it has nuclear capabilities (although it has made several changing statements about the abandonment of its nuclear weapons programs, often dependent on the political climate at the time) but has never conducted a confirmed test and its weapons status remains unclear; and Iran currently stands accused by a number of governments of attempting to develop nuclear capabilities, though its government claims that its acknowledged nuclear activities, such as uranium enrichment, are for peaceful purposes. (For more information see List of countries with nuclear weapons.) Apart from their use as weapons, nuclear explosives have been tested and used for various non-military purposes.

Uranium is a very heavy (dense) metal which can be used as an abundant source of concentrated energy. It occurs in most rocks in concentrations of 2 to 4 parts per million and is as common in the Earth's crust as tin, tungsten and molybdenum. It occurs in seawater, and could be recovered from the oceans if prices rose significantly. It was discovered in 1789 by Martin Klaproth, a German chemist, in the mineral called pitchblende. It was named after the planet Uranus, which had been discovered eight years earlier. Uranium was apparently formed in supernovae about 6.6 billion years ago. While it is not common in the solar system, today its slow radioactive decay provides the main source of heat inside the Earth, causing convection and continental drift.

The high density of uranium means that it also finds uses in the keels of yachts and as counterweights for aircraft control surfaces (rudders and elevators), as well as for radiation shielding. Its melting point is 1132°C. The chemical symbol for uranium is U.

The Uranium Atom


On a scale arranged according to the increasing mass of their nuclei, uranium is the heaviest of all the naturally-occurring elements (hydrogen is the lightest). Uranium is 18.7 times as dense as water. Like other elements, uranium occurs in slightly differing forms known as 'isotopes'. These isotopes (16 in the case of uranium) differ from each other in the number of particles (neutrons) in the nucleus. Natural uranium as found in the Earth's crust is a mixture largely of two isotopes: uranium-238 (U-238), accounting for 99.3%, and U-235, about 0.7%. The isotope U-235 is important because under certain conditions it can readily be split, yielding a lot of energy. It is therefore said to be 'fissile', and we use the expression 'nuclear fission'. Meanwhile, like all radioactive isotopes, both decay. U-238 decays very slowly, its half-life being about the same as the age of the Earth (4,500 million years). This means that it is barely radioactive, less so than many other isotopes in rocks and sand. Nevertheless it generates 0.1 watts/tonne as decay heat, and this is enough to warm the Earth's core. U-235 decays slightly faster.
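How weakly radioactive a 4,500-million-year half-life makes U-238 can be estimated from the half-life alone. The short calculation below is an illustrative sketch using standard physical constants, not figures from the text.

import math

# Specific activity of U-238 estimated from its half-life.
HALF_LIFE_YEARS = 4.5e9
SECONDS_PER_YEAR = 3.156e7
AVOGADRO = 6.022e23
U238_GRAMS_PER_MOLE = 238.0

decay_constant_per_s = math.log(2) / (HALF_LIFE_YEARS * SECONDS_PER_YEAR)
atoms_per_gram = AVOGADRO / U238_GRAMS_PER_MOLE
activity_bq_per_gram = decay_constant_per_s * atoms_per_gram
print(activity_bq_per_gram)   # roughly 1.2e4 decays per second per gram

About twelve thousand decays per second per gram sounds like a lot, but each decay releases only a few MeV; with the whole decay chain included, this works out to roughly the 0.1 watts per tonne quoted above.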

Energy from the uranium atom
The nucleus of the U-235 atom comprises 92 protons and 143 neutrons (92 + 143 = 235). When the nucleus of a U-235 atom captures a moving neutron it splits in two (fissions) and releases some energy in the form of heat; two or three additional neutrons are also thrown off. If enough of these expelled neutrons cause the nuclei of other U-235 atoms to split, releasing further neutrons, a fission chain reaction can be achieved. When this happens over and over again, many millions of times, a very large amount of heat is produced from a relatively small amount of uranium. It is this process, in effect "burning" uranium, which occurs in a nuclear reactor. The heat is used to make steam to produce electricity.

Inside the reactor


In a nuclear reactor the uranium fuel is assembled in such a way that a controlled fission chain reaction can be achieved. The heat created by splitting the U-235 atoms is then used to make steam which spins a turbine to drive a generator, producing electricity. Nuclear power stations and fossil-fuelled power stations of similar capacity have many features in common. Both require heat to produce steam to drive turbines and generators. In a nuclear power station, however, the fissioning of uranium atoms replaces the burning of coal or gas. The chain reaction that takes place in the core of a nuclear reactor is controlled by rods which absorb neutrons and which can be inserted or withdrawn to set the reactor at the required power level. The fuel elements are surrounded by a substance called a moderator to slow the speed of the emitted neutrons and thus enable the chain reaction to continue. Water, graphite and heavy water are used as moderators in different types of reactors. Because of the kind of fuel used (i.e. the concentration of U-235, see below), if there is a major uncorrected malfunction in a reactor the fuel may overheat and melt, but it cannot explode like a bomb. A typical 1000 megawatt (MWe) reactor can provide enough electricity for a modern city of up to one million people. About 35 such nuclear reactors could provide Australia's total electricity needs.
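A quick sanity check of that last claim, for illustration only; the 90% capacity factor and the figure of roughly 250 TWh of annual Australian electricity demand are assumed values, not taken from the text.

# Rough check: how many 1000 MWe reactors would cover ~250 TWh of annual demand?
reactor_mwe = 1000
capacity_factor = 0.9              # assumed average availability
hours_per_year = 8760
twh_per_reactor = reactor_mwe * capacity_factor * hours_per_year / 1e6
print(twh_per_reactor)             # ~7.9 TWh per reactor per year
print(250 / twh_per_reactor)       # ~32 reactors, consistent with "about 35"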

Uranium and Plutonium Whereas the U-235 nucleus is 'fissile', that of U-238 is said to be 'fertile'. This means that it can capture one of the neutrons which are flying about in the core of the reactor and become (indirectly) plutonium-239, which is fissile. Pu-239 is very much like U-235, in that it fissions when hit by a neutron and this also yields a lot of energy. Because there is so much U-238 in a reactor core (most of the fuel), these reactions occur frequently, and in fact about one third of the energy yield comes from "burning" Pu-239. But sometimes a Pu-239 atom simply captures a neutron without splitting, and it becomes Pu-240. Because the Pu-239 is either progressively "burned" or becomes Pu-240, the longer the fuel stays in the reactor the more Pu-240 is in it.*
* The significance of this is that when the spent fuel is removed after about three years, the plutonium in it is not suitable for making weapons but can be recycled as fuel.

From uranium ore to reactor fuel


Uranium ore can be mined by underground or open-cut methods, depending on its depth. After mining, the ore is crushed and ground up. Then it is treated with acid to dissolve the uranium, which is recovered from solution. Uranium may also be mined by in situ leaching (ISL), where it is dissolved from a porous underground orebody in situ and pumped to the surface. The end product of the mining and milling stages, or of ISL, is uranium oxide concentrate (U3O8). This is the form in which uranium is sold. Before it can be used in a reactor for electricity generation, however, it must undergo a series of processes to produce a useable fuel.

For most of the world's reactors, the next step in making the fuel is to convert the uranium oxide into a gas, uranium hexafluoride (UF6), which enables it to be enriched. Enrichment increases the proportion of the uranium-235 isotope from its natural level of 0.7% to 3-4%. This enables greater technical efficiency in reactor design and operation, particularly in larger reactors, and allows the use of ordinary water as a moderator. After enrichment, the UF6 gas is converted to uranium dioxide (UO2) which is formed into fuel pellets. These fuel pellets are placed inside thin metal tubes which are assembled in bundles to become the fuel elements or assemblies for the core of the reactor. For reactors which use natural uranium as their fuel (and hence which require graphite or heavy water as a moderator) the U3O8 concentrate simply needs to be refined and converted directly to uranium dioxide. When the uranium fuel has been in the reactor for about three years, the used fuel is removed, stored, and then either reprocessed or disposed of underground (see Nuclear Fuel Cycle or Radioactive Waste Management in this series).
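A simple mass balance shows how much natural uranium feed the enrichment step consumes per kilogram of product. This is an illustrative sketch; the 0.25% tails assay used below is an assumed typical value, not stated in the text.

# Enrichment mass balance: feed needed per kilogram of enriched product.
# F * x_feed = P * x_product + W * x_tails, with F = P + W, rearranges to:
def feed_per_kg_product(x_product, x_feed=0.00711, x_tails=0.0025):
    return (x_product - x_tails) / (x_feed - x_tails)

print(feed_per_kg_product(0.035))   # ~7 kg of natural uranium per kg of 3.5%-enriched fuel
print(feed_per_kg_product(0.90))    # ~195 kg per kg of weapons-grade (90%) uranium

The same balance explains why most of the mined uranium ends up as depleted 'tails' rather than in the finished fuel.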

Who uses nuclear power?


Over 16% of the world's electricity is generated from uranium in nuclear reactors. This amounts to about 2,400 billion kWh each year, as much as from all sources of electricity worldwide in 1960. In a current perspective, it is twelve times Australia's or South Africa's total electricity production, five times India's, twice China's and 500 times Kenya's total. It comes from about 440 nuclear reactors with a total output capacity of more than 350,000 megawatts (MWe) operating in 31 countries. About thirty more reactors are under construction, and another 70 are on the drawing board.

Belgium, Bulgaria, Finland, France, Germany, Hungary, Japan, South Korea, Lithuania, Slovakia, Slovenia, Sweden, Switzerland and Ukraine all get 30% or more of their electricity from nuclear reactors. The USA has over 100 reactors operating, with a capacity of almost three times Australia's total, supplying 20% of its electricity. The UK gets almost a quarter of its electricity from uranium.

Table of the World's Nuclear Power Reactors

Who has and who mines uranium?


Uranium is widespread in many rocks, and even in seawater. However, like other metals, it is seldom sufficiently concentrated to be economically recoverable. Where it is, we speak of an orebody. In defining what is ore, assumptions are made about the cost of mining and the market price of the metal. Uranium reserves are therefore calculated as tonnes recoverable up to a certain cost.

Australia's reasonably assured resources and inferred resources of uranium are 1,142,000 tonnes of uranium recoverable at up to US$80/kg U (well under the market spot price); Canada's are 444,000 tonnes. Australia's resources in this category are just under 30% of the world's total, Canada's are 12%. Although Australia has more than any other country, there are others with significant uranium resources. In order they are: Kazakhstan (16% of the world total), Canada, USA, South Africa, Namibia, Brazil, Niger and Russia. Many more countries have smaller deposits which could be mined if needed.

Despite being so well-endowed with uranium resources, political factors mean that Canada is well in front of Australia as the main supplier of uranium to world markets. Uranium is sold only to countries which are signatories of the Nuclear Non-Proliferation Treaty, and which allow international inspection to verify that it is used only for peaceful purposes. Customer countries for Australia's uranium must also have a bilateral safeguards agreement with Australia. Canada has similar arrangements. Australian exports in 2005 amounted to over 12,000 tonnes of U3O8 valued at nearly A$600 million; actual production was about 23% of the world mine total. Canada produced almost 14,000 tonnes of U3O8 in 2005, about one third of the world total and mostly for export.

Other uses of nuclear energy


Many people, when talking about nuclear energy, have only nuclear reactors (or perhaps nuclear weapons) in mind. Few people realise the extent to which the use of radioisotopes has changed our lives over the last few decades. Using relatively small special-purpose nuclear reactors it has become possible to make a wide range of radioactive materials (radioisotopes) at low cost. For this reason the use of artificially produced radioisotopes has become widespread since the early 1950s, and there are now some 270 "research" reactors in 59 countries producing them.

Radioisotopes
In our daily life we need food, water and good health. Today, radioactive isotopes play an important part in the technologies that provide us with all three. They are produced by bombarding small amounts of particular elements with neutrons.

In medicine, radioisotopes are widely used for diagnosis and research. Radioactive chemical tracers emit gamma radiation which provides diagnostic information about a person's anatomy and the functioning of specific organs. Radiotherapy also employs radioisotopes in the treatment of some illnesses, such as cancer. More powerful gamma sources are used to sterilise syringes, bandages and other medical equipment. About one person in two in the western world is likely to experience the benefits of nuclear medicine in their lifetime, and gamma sterilisation of equipment is almost universal.

In the preservation of food, radioisotopes are used to inhibit the sprouting of root crops after harvesting, to kill parasites and pests, and to control the ripening of stored fruit and vegetables. Irradiated foodstuffs are accepted by world and national health authorities for human consumption in an increasing number of countries. They include potatoes, onions, dried and fresh fruits, grain and grain products, poultry and some fish. Some prepacked foods can also be irradiated.

In the growing of crops and breeding of livestock, radioisotopes also play an important role. They are used to produce high-yielding, disease-resistant and weather-resistant varieties of crops, to study how fertilisers and insecticides work, and to improve the productivity and health of domestic animals. Industrially, and in mining, they are used to examine welds, to detect leaks, to study the rate of wear of metals, and for on-stream analysis of a wide range of minerals and fuels. There are many other uses. A radioisotope derived from the plutonium formed in nuclear reactors is used in most household smoke detectors. Radioisotopes are used by police to fight crime, in detecting and analysing pollutants in the environment, to study the movement of surface water and to measure water runoff from rain and snow, as well as the flow rates of streams and rivers.

Other reactors
There are also other uses for reactors. Over 200 small nuclear reactors power some 150 ships, mostly submarines, but ranging from icebreakers to aircraft carriers. These can stay at sea for long periods without having to make refuelling stops. In the Russian Arctic, where operating conditions are beyond the capability of conventional icebreakers, very powerful nuclear-powered vessels operate almost year-round, where previously only two months could be used each year. The heat produced by nuclear reactors can also be used directly rather than for generating electricity. In Sweden and Russia, for example, it is used to heat buildings and to provide heat for a variety of industrial processes such as water desalination. Nuclear desalination is likely to be a major growth area in the future.

Military weapons
Both uranium and plutonium were used to make bombs before they became important for making electricity and radioisotopes. But the type of uranium and plutonium for bombs is different from that in a nuclear power plant. Bomb-grade uranium is highly enriched (>90% U-235, instead of about 3.5%); bomb-grade plutonium is fairly pure (>90%) Pu-239 and is made in special reactors. Today, due to disarmament, a lot of military uranium is becoming available for electricity production. The military uranium is diluted about 25:1 with depleted uranium (mostly U-238) from the enrichment process before being used (a quick arithmetic check of this ratio appears at the end of this section).

Australia's reactor
Australia has no nuclear reactors supplying electricity. However, it has a small (10 megawatt) old research reactor at Lucas Heights near Sydney which is due to be replaced by a more modern 20 megawatt unit in 2007. The Australian Nuclear Science and Technology Organisation's HIFAR reactor has operated since 1958. Its main work is providing an intense source of neutrons for researchers studying physics and the properties of various materials. In addition, it produces a wide range of medical and industrial radioisotopes for Australian hospitals and industry. Some of these isotopes are exported to nearby South-East Asian countries and New Zealand.
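As promised above, a quick check of the 25:1 dilution figure. This is an illustrative calculation only; the 90% and 0.3% assays below are assumed round numbers, not from the text.

# Down-blending check: 1 part highly-enriched uranium mixed with 25 parts depleted uranium.
heu_assay = 0.90      # assumed U-235 fraction of the military material
du_assay = 0.003      # assumed U-235 fraction of the depleted uranium blendstock
parts_du = 25         # dilution ratio quoted above

blended_assay = (1 * heu_assay + parts_du * du_assay) / (1 + parts_du)
print(blended_assay)  # ~0.038, i.e. roughly the 3-4% enrichment used in power-reactor fuel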

A History of Biodiesel/Biofuels
Concurrent histories of the diesel engine and biofuels are necessary to understand the foundation for today's perception of biofuels in general, and biodiesel in particular. The history of biofuel is more political and economical than technological. The process for making fuel from biomass feedstock used in the 1800's is basically the same one used today. It was the influence of the industrial magnates during the 1920's and 1930's on both the politics and economics of those times that created the foundation for our perceptions today.

Transesterification of vegetable oils has been in use since the mid-1800's. More than likely, it was originally used to distill out the glycerin used for making soap. The "by-products" of this process are methyl and ethyl esters. Biodiesel is composed of these esters. Ethyl esters are grain based while methyl esters are wood based. They are the residues of creating glycerin, or vice versa. Any source of complex fatty acid can be used to create biodiesel and glycerin. Early on, peanut oil, hemp oil, corn oil, and tallow were used as sources for the complex fatty acids used in the separation process. Currently, soybeans, rapeseed (or its cousin, canola), corn, recycled fryer oil, tallow, forest wastes, and sugar cane are common resources for the complex fatty acids and their by-product, biofuels. Research is being done into oil production from algae, which could have yields greater than any feedstock known today.

Ethanol and methanol are two other familiar biofuels. Distillation of grain or wood, resulting in an ethyl or methyl alcohol, is the process by which these two biofuels are created. Ethanol, made from soybeans or corn, is a common biofuel in the Midwest. The viscosity of the "original" biodiesel is lowered by adding approximately 10% methanol or ethanol to the biodiesel esters. Methanol is preferred because it gives a more reliable and predictable biodiesel reaction. However, ethanol is less toxic and is always produced from a renewable resource. The lower viscosity brings biodiesel in line with the viscosity requirements of today's diesel engines, making it a major competitor to petroleum-based diesel fuel.

In 1898, when Rudolf Diesel first demonstrated his compression ignition engine at the World's Exhibition in Paris, he used peanut oil - the original biodiesel. Diesel believed biomass fuel to be a viable alternative to the resource-consuming steam engine. Vegetable oils were used in diesel engines until the 1920's, when an alteration was made to the engine, enabling it to use a residue of petroleum - what is now known as diesel #2.

Diesel was not the only inventor to believe that biomass fuels would be the mainstay of the transportation industry. Henry Ford designed his automobiles, beginning with the 1908 Model T, to use ethanol. Ford was so convinced that renewable resources were the key to the success of his automobiles that he built a plant to make ethanol in the Midwest and formed a partnership with Standard Oil to sell it in their distributing stations. During the 1920's, this biofuel was 25% of Standard Oil's sales in that area. With the growth of the petroleum industry, Standard Oil cast its future with fossil fuels. Ford continued to promote the use of ethanol through the 1930's. The petroleum industry undercut the biofuel sales and by 1940 the plant was closed due to the low prices of petroleum.

Despite the fact that men such as Henry Ford, Rudolf Diesel, and subsequent manufacturers of diesel engines saw the future of renewable resource fuels, a political and economic struggle doomed the industry. Manufacturing industrialists made modifications to the diesel engines so they could take advantage of the extremely low prices of the residual, low-grade fuel now offered by the petroleum industry. The petroleum companies wanted control of the fuel supplies in the United States and, despite the benefits of biomass fuel versus the fossil fuels, they moved ahead to
eliminate all competition. One player in the biofuel, paper, textile, as well as many other industries, was hemp. Hemp had been grown as a major product in America since colonial times by such men as George Washington and Thomas Jefferson and has had both governmental and popular support. Hemp's long history in civilization and the multitude of products that can be derived from this single plant have made it one of the most valuable and sustainable plants in the history of mankind. More importantly to the biofuel industry, hemp provided the biomass that Ford needed for his production of ethanol. He found that 30% hemp seed oil is usable as a high-grade diesel fuel and that it could also be used as a machine lubricant and an engine oil.

In the 1930's, the industrialists entered the picture. William Randolph Hearst, who produced 90% of the paper in the United States; Secretary of the Treasury Andrew Mellon, who was a major financial backer of the DuPont Company, which had just patented the chemical necessary to process wood pulp into paper; and the Rockefellers and other "oil barons", who were developing vast empires from petroleum, all had a vested interest in seeing the renewable resources industry derailed, the hemp industry eliminated, and biomass fuels derided. A campaign was begun to discredit hemp. Playing on the racism that existed in America, Hearst used his newspapers to apply the name "marijuana" to hemp. Marijuana is the Mexican word for the hemp plant. This application, along with various "objective" articles, began to create a fear. By 1937, these industrialists were able to parlay the fear they created into the Marijuana Tax Act. This law was the precursor to the demise of the hemp industry in the United States and had a resultant long-reaching effect on the biofuel, petroleum and many other industries. Within three years, Ford closed his biofuel plant.

At the beginning of World War II, the groundwork for our current perceptions of biofuels was in place. First, the diesel engine had been modified, enabling it to use diesel #2. Second, the petroleum industry had established a market with very low prices for a residual product. Third, a major biomass industry was being shut down. Corn farmers were unable to organize at that time and provide a potential product to replace hemp as a biomass resource. Finally, industries with immense wealth behind them were acting in concert to push forward their own agenda - that of making more wealth for themselves. It is interesting to note that, during World War II, the United States government launched a slogan campaign, "Hemp for Victory", to encourage farmers to plant this discredited plant. Hemp made a multitude
of indispensable contributions to the war effort. It is also interesting that, during World War II, both the Allies and Nazi Germany utilized biomass fuels in their machines. Despite its use during World War II, biofuel remained in the obscurity to which it had been forced.

The postwar years brought new cars and increased petroleum use. The petroleum industries quietly bought the trolley car systems that ran on electricity and were a major part of the transportation infrastructure; they dismantled them. The trolleys were then sporadically replaced with diesel buses. These industries also pushed the government to build roads, highways, and freeways ("the ultimate solution to all our transportation and traffic problems"), so the automobiles they produced had a place to operate. This newly created transportation infrastructure was built with public funds, supporting and aiding the growth and strength of the petroleum, automobile, and related industries.

By the 1970's, we were dependent on foreign oil. Our supply of crude oil, like all supplies of fossil fuels, was limited. In 1973 we experienced the first of two crises: OPEC, the Middle Eastern organization controlling the majority of the oil in the world, reduced supplies and increased prices. The second crisis came five years later, in 1978. As was noted in the Diesel Engine section, automobile purchasers began to seriously consider the diesel car as an option. What is more, people began making their own biofuel. The potential of biofuels reentered the public consciousness.

The years since have brought many changes. Over 200 major fleets in the United States now run on biodiesel, with entities such as the United States Post Office, the US military, metropolitan transit systems, agricultural concerns, and school districts being major users. The biodiesel produced today can be used in unmodified diesel engines in almost all temperatures. It can be used in the individual automobile or in larger engines and machines. The base biomass comes from soybeans and corn in the Midwest, with tallow from the slaughter industries becoming a third source. Sugar cane provides the biomass for Hawaii, and forest wastes are becoming a source in the Northwest. The embargo on Cuba halted oil imports, depriving the country of heating oil; Cubans discovered that recycled fryer oil made a good biomass for fuel. Today, the fast food industry is one of the largest and fastest growing industries in the United States and, in fact, the world. This industry can provide a major resource for biofuels - the recycled fryer oil. The Veggie Van traveled 25,000 miles around the United States on recycled fryer oil, as did a group of women.

In Europe at this time, there is an option for biodiesel at many gas stations, and vehicles that use diesel are readily available. Over 1,000 stations in Germany alone offer biodiesel for their customers. Over 5% of all of France's energy uses are provided by biodiesel. Journey to Forever, a non-government organization, traveled from Hong Kong to Southern Africa producing their own biodiesel along the way and teaching the people of the small hamlets and villages how to make their own biofuel for use in their heaters, tractors, buses, automobiles, and other machines they might have. We have the opportunity and the resources to shed our dependence on foreign oil, if we choose. As in the 1930's, we are faced with tremendous political and economic pressure creating similar challenges. The enormous influence of the petroleum industries and other industries that might be threatened and/or impacted by a resurgence of the renewable, biomass, and associated industries is being felt on all levels. One only needs to look to Washington to see how that pressure is being played out. It is a time of choice and one in which small actions can lead to greater impact. Biodiesel remains in the political and economic arena and is playing a part in this process as awareness of alternative fuel spreads through the consciousness of the general public.

Anthracite (from the Greek anthrakites, literally "a form of coal", from anthrax, coal) is a hard, compact variety of mineral coal that has a high luster. It has the highest carbon count and contains the fewest impurities of all coals, despite its lower calorific content. Anthracite coal is of the highest metamorphic rank, in which the carbon content is between 92% and 98%. The term is applied to those varieties of coal which do not give off tarry or other hydrocarbon vapours when heated below their point of ignition. Anthracite ignites with difficulty and burns with a short blue flame, without smoke. Other terms having the same meaning are blue coal, hard coal, stone coal (not to be confused with the German Steinkohle), blind coal (in Scotland), Kilkenny coal (in Ireland), and black diamond. The imperfect anthracite of north Devon, which however is only used as a pigment, is known as culm, the same term being used in geological classification to distinguish the strata in which it is found, and similar strata in the Rhenish hill countries which are known as the Culm Measures. In America, culm is used as an equivalent for waste or slack in anthracite mining.

Properties Anthracite is similar in appearance to the mineraloid jet, and is sometimes used to imitate it. Physically, anthracite differs from ordinary bituminous coal by its greater hardness, its higher relative density of 1.3-1.4, and luster, the latter being often semi-metallic with a somewhat brownish reflection. It contains a high percentage of fixed carbon and a low percentage of volatile matter. It is also free from included soft or fibrous notches and does not soil the fingers when rubbed. Anthracitization is the transformation of bituminous coal into anthracite coal. The moisture content of fresh-mined anthracite generally is less than 15 percent. The heat content of anthracite ranges from 22 to 28 million Btu per short ton (26 to 33 MJ/kg) on a moist, mineral-matter-free basis. The heat content of anthracite coal consumed in the United States averages 25 million Btu/ton (29 MJ/kg), on the as-received basis (i.e., containing both inherent moisture and mineral matter). Note: Since the 1980s, anthracite refuse or mine waste has been used for steam electric power generation. This fuel typically has a heat content of 15 million Btu/ton (17 MJ/kg) or less. Anthracite may be considered to be a transition stage between ordinary bituminous coal and graphite, produced by the more or less complete elimination of the volatile constituents of the former; and it is found most abundantly in areas that have been subjected to considerable earthmovements, such as the flanks of great mountain ranges. Anthracite coal is a product of metamorphism and is associated with metamorphic rocks, just as bituminous coal is associated with sedimentary rocks. For example, the compressed layers of anthracite that are deep mined in the folded (metamorphic) Appalachian Mountains of the Coal Region of northeastern Pennsylvania are extensions of the layers of bituminous coal that are strip mined on the (sedimentary) Allegheny Plateau of Kentucky and West Virginia, and Eastern Pennsylvania. In the same way the anthracite region of South Wales is confined to the contorted portion west of Swansea and Llanelly, the central and eastern portions producing steam, coking and house coals. Structurally it shows some alteration by the development of secondary divisional planes and fissures so that the original stratification lines are not
always easily seen. The thermal conductivity is also higher, a lump of anthracite feeling perceptibly colder when held in the warm hand than a similar lump of bituminous coal at the same temperature. The chemical composition of some typical anthracites is given in the article coal.

Economic value

The anthracite coal history of Pottsville, Pennsylvania began in 1790 with the discovery of coal by the hunter Necho Allen in what is now called the Coal Region. Legend has it that Allen fell asleep at the base of the Broad Mountain and woke to the sight of a large fire. His campfire had ignited an outcropping of anthracite coal. By 1795, an anthracite-fired iron furnace was established on the Schuylkill River. Anthracite was first experimentally burned as a fuel on February 11, 1808 by Judge Jesse Fell in Wilkes-Barre, Pennsylvania, on an open grate in a fireplace. It delivers high energy per unit weight and burns cleanly with little soot, making it a sought-after variety of coal and hence of higher value. It is also used as a filter medium. The principal use of anthracite is as a smokeless fuel. In the eastern United States it is largely employed as domestic fuel, usually in close stoves or furnaces, as well as for steam purposes, since, unlike that from South Wales, it does not decrepitate when heated, or at least not to the same extent. For proper use, however, it is necessary that the fuel should be supplied in pieces as nearly uniform in size as possible, a condition that has led to the development of the breaker which is so characteristic a feature in American anthracite mining. The large coal as raised from the mine is passed through breakers with toothed rolls to reduce the lumps to smaller pieces, which are separated into different sizes by a system of graduated sieves, placed in descending order. Each size can be perfectly well burnt alone on an appropriate grate, if kept free from larger or smaller admixtures. In the early 20th century United States, the Lackawanna Railroad started using only the more expensive anthracite coal, dubbed themselves "The Road of Anthracite," and advertised widely that travelers on their line could make railway journeys without getting their clothing stained with soot. The advertisements featured a white-clad woman named Phoebe Snow and poems containing lines like "My gown stays white / from morn till night / upon the road of Anthracite".

Formerly, anthracite was largely used, both in America and South Wales, as blast-furnace fuel for iron smelting, but for this purpose it has been largely superseded by coke in the former country and entirely in the latter. An important application has, however, been developed in the extended use of internal combustion motors driven by the so-called "mixed," "poor," "semiwater" or "Dowson gas" produced by the gasification of anthracite with air and a small proportion of steam. This is probably the most economical method of obtaining power known; with an engine as small as 15 horsepower the expenditure of fuel is at the rate of only 1 lb per horse-power hour, and with larger engines it is proportionately less. Large quantities of anthracite for power purposes are now exported from South Wales to France, Switzerland and parts of Germany.

Anthracite coal mining today

Anthracite coal mining in Eastern Pennsylvania continues in the early 21st century and contributes up to 1% of the Pennsylvania Gross State Product. Over 2,000 people were making their living mining anthracite coal as of 1995. Most of the mining currently involves reclaiming coal from slag heaps (waste piles from past coal mining) next to closed mines. Some underground anthracite coal mining is also taking place to this day. As petroleum and natural gas grow more expensive, anthracite coal is growing more important as an energy source for an energy-hungry country.

Major reserves

The largest fields of anthracite coal in the United States are found in northeastern Pennsylvania, in an area called the Coal Region, where there are 7 billion short tons (6.4 petagrams) of minable reserves. Deposits at Crested Butte, Colorado were mined historically. Anthracite deposits of an estimated 3 billion short tons (2.7 Pg) in Alaska have never been mined. Anthracites of newer, Tertiary or Cretaceous age are found in the Crow's Nest part of the Rocky Mountains in Canada, and at various points in the Andes in Peru.

Classifications

The common American classification is as follows:

Lump, steamboat, egg and stove coals, the latter in two or three sizes, all three being above 1-1/2 in. size on round-hole screens. The smaller grades are:

Grade        Below        Above
Chestnut     1-1/2 in.    7/8 in.
Pea          7/8 in.      9/16 in.
Buckwheat    9/16 in.     3/8 in.
Rice         3/8 in.      3/16 in.
Barley       3/16 in.     3/32 in.

From the pea size downwards the principal use is for steam purposes. In South Wales a less elaborate classification is adopted; but great care is exercised in hand-picking and cleaning the coal from included particles of pyrites in the higher qualities known as best malting coals, which are used for kiln-drying malt and hops.
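Read as a lookup table, the American classification above lends itself to a few lines of code. The sketch below (Python) uses exactly the screen sizes just listed as grade boundaries; the function name and the fallback label for undersize material are illustrative only.

    # Illustrative lookup of the American anthracite size grades listed above.
    # Boundaries are the round-hole screen sizes from the classification, in inches.
    SIZE_GRADES = [            # (lower bound in inches, grade name)
        (1.5,   "lump/steamboat/egg/stove"),
        (7/8,   "chestnut"),
        (9/16,  "pea"),
        (3/8,   "buckwheat"),
        (3/16,  "rice"),
        (3/32,  "barley"),
    ]

    def classify(size_in_inches: float) -> str:
        """Return the trade grade for a given screen size in inches."""
        for lower_bound, name in SIZE_GRADES:
            if size_in_inches >= lower_bound:
                return name
        return "smaller than barley (culm or slack)"

    print(classify(1.0))    # chestnut (below 1-1/2 in., above 7/8 in.)
    print(classify(0.25))   # rice (below 3/8 in., above 3/16 in.)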

Bituminous coal is a relatively hard coal containing a tar-like substance called bitumen. It is of better quality than lignite coal but of poorer quality than anthracite coal. Bituminous coal is an organic sedimentary rock formed by diagenetic and submetamorphic compression of peat bog material. Bituminous coal has been compressed and heated so that its primary constituents are the macerals vitrinite, exinite, etc. The carbon content of bituminous coal is around 60-80%; the rest is composed of water, air, hydrogen, and sulphur which have not been driven off from the macerals. The heat content of bituminous coal ranges from 21 to 30 million Btu/ton (24 to 35 MJ/kg) on a moist, mineral-matter-free basis. Bituminous coal is usually black, sometimes dark brown, often with well-defined bands of bright and dull material. Bituminous coal seams are stratigraphically identified by the distinctive sequence of bright and dark bands and are classified accordingly as either "dull, bright-banded" or "bright, dull-banded" and so on.
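The Btu-per-short-ton figures quoted for anthracite and bituminous coal can be cross-checked against their MJ/kg equivalents with a simple unit conversion (1 Btu is about 1055 J and 1 short ton about 907.2 kg). A minimal Python sketch:

    # Convert heat contents quoted in million Btu per short ton to MJ/kg.
    BTU_IN_J = 1055.06           # joules per Btu
    SHORT_TON_IN_KG = 907.185    # kilograms per short ton

    def mmbtu_per_ton_to_mj_per_kg(mmbtu_per_short_ton: float) -> float:
        """Convert million Btu per short ton to MJ/kg."""
        joules_per_ton = mmbtu_per_short_ton * 1e6 * BTU_IN_J
        return joules_per_ton / SHORT_TON_IN_KG / 1e6   # J/kg -> MJ/kg

    for mmbtu in (22, 25, 28):
        print(f"{mmbtu} million Btu/short ton = "
              f"{mmbtu_per_ton_to_mj_per_kg(mmbtu):.1f} MJ/kg")
    # Prints roughly 25.6, 29.1 and 32.6 MJ/kg, matching the 26-33 MJ/kg
    # range quoted for anthracite above.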

Uses

Bituminous coals are graded according to vitrinite reflectance, moisture content, volatile content, plasticity and ash content. Generally, the highest-value bituminous coals are those which have a specific grade of plasticity, volatility and low ash content, especially with low carbonate, phosphorus and sulphur. Plasticity is vital for coking and steel making, where the coal has to behave in a manner which allows it to mix with the iron oxides during smelting. Low phosphorus content is vital for these coals, as phosphorus is a highly deleterious element in steel making. Coking coal is best if it has a very narrow range of volatility and plasticity. This is measured by the Free Swelling Index test. Tar content, volatile content and swelling index are used to select coals for coke blending. Volatility is also critical for steel making and power generation, as it determines the burn rate of the coal. High-volatile coals, while easy to ignite, are often not as prized as moderately volatile coals; low-volatile coal may be difficult to ignite, although it will contain more energy per unit volume. The smelter must balance the volatile content of the coals to optimise the ease of ignition, burn rate, and energy output of the coal. Low ash, sulphur, and carbonate coals are prized for power generation because they do not produce much boiler slag and they do not require as much effort to scrub the flue gases to remove particulate matter. Carbonates are deleterious as they readily stick to the boiler apparatus. Sulphur is also deleterious because it is emitted on burning and can form smog, acid rain and haze pollution. Again, scrubbers on the flue gases aim to eliminate particulate and sulphur emissions.
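The grading criteria just described amount to screening candidate coals against limits on ash, sulphur, phosphorus and volatile matter. The following sketch illustrates such a filter; the threshold values and seam names are placeholders for the example, not industry specifications.

    # Illustrative screening of coal assays for coking use; all limits are
    # placeholder values, not real specifications.
    from dataclasses import dataclass

    @dataclass
    class CoalAssay:
        name: str
        ash_pct: float         # ash content, weight %
        sulphur_pct: float     # total sulphur, weight %
        phosphorus_pct: float  # phosphorus, weight %
        volatiles_pct: float   # volatile matter, weight %

    def suitable_for_coking(coal: CoalAssay,
                            max_ash=10.0, max_s=1.0, max_p=0.05,
                            volatile_range=(20.0, 30.0)) -> bool:
        """Return True if the assay falls inside the (illustrative) limits."""
        lo, hi = volatile_range
        return (coal.ash_pct <= max_ash
                and coal.sulphur_pct <= max_s
                and coal.phosphorus_pct <= max_p
                and lo <= coal.volatiles_pct <= hi)

    candidates = [
        CoalAssay("Seam A", 8.5, 0.6, 0.02, 24.0),
        CoalAssay("Seam B", 14.0, 2.1, 0.08, 35.0),
    ]
    print([c.name for c in candidates if suitable_for_coking(c)])  # ['Seam A']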

Coking Coal

When used for many industrial processes, bituminous coal must first be "coked" to remove volatile components. Coking is achieved by heating the coal in the absence of oxygen, which drives off volatile hydrocarbons such as propane, benzene and other aromatic hydrocarbons, and some sulfur gases. This also drives off a considerable amount of the contained water of the bituminous coal.

Coking coal is blended with uncoked coal for power generation. The primary use for coking coal is in the manufacture of steel, where the carbon must be as free of volatiles and ash as possible.

Jurassic Coals

Extensive but low-value coals of Jurassic age extend through the Surat Basin in Australia, formed in an intracratonic sag basin, and contain evidence of dinosaur activity in the numerous ash plies. These coals are exploited in Queensland from the Walloon Coal Measures, which are up to 15 m thick of sub-bituminous to bituminous coals suited for coking, steam-raising and oil cracking.

Triassic Coals

Coals of Triassic age are known from the Clarence-Moreton and Ipswich Basins, near Ipswich, Australia, and the Esk Trough. Coals of this era are rare, and many contain fossils of flowering plants. Some of the best coking coals are Australian Triassic coals, although most economic deposits have been worked out.

Permian Coals

The second largest deposit of the world's bituminous coal is contained within Permian strata in Russia and also in the Bowen Basin in Queensland, Australia, as well as in the Sydney Basin and Perth Basin, where thicknesses in excess of 300 m are known. Current reserves and resources are projected to last for over 500 years. Australia exports the vast majority of its coal for coking and steel making in Japan. Certain Australian coals are the best in the world for these purposes, requiring little to no blending. Some bituminous coals from the Permian and Triassic in Australia are also the most suitable for cracking into oil. Vast deposits of oil shale exist in the Permian sediments of Queensland.

Carboniferous Coals

Much North American coal was created when swamps created organic material faster than it could decay, prior to the orogenies that created the Appalachian Mountains during the Carboniferous epoch, which is
subdivided in American literature into the Mississippian and Pennsylvanian eras after the two main coal-bearing time periods. Bituminous coal is mined in the Appalachian region, primarily for power generation. Mining is done via both surface and underground mines. Pocahontas bituminous coal at one time fueled half the world's navies and today stokes steel mills and power plants all over the globe. While coal mining is an important part of Appalachia's economy, many miners are afflicted with black lung disease.

Charcoal is the blackish residue consisting of impure carbon obtained by removing water and other volatile constituents from animal and vegetable substances. It is usually produced by heating wood in the absence of oxygen, but sugar charcoal, bone charcoal (which contains a great amount of calcium phosphate), and others can be produced as well. The soft, brittle, light, black, porous material is 85% to 98% carbon, the remainder consisting of volatile chemicals and ash, and resembles coal. The first part of the word is of obscure origin, but the first use of the term "coal" in English was as a reference to charcoal. In this compound term, the prefix "chare-" meant "turn", with the literal meaning being "to turn to coal". The independent use of "char", meaning to scorch, to reduce to carbon, is comparatively recent and must be a back-formation from the earlier charcoal. It may be a use of the word charren, meaning to turn, i.e., wood changed or turned to coal; or it may be from the French charbon. A person who manufactured charcoal was formerly known as a collier, though the term was used later for those who dealt in coal, and the ships that transported it. Production of wood charcoal in districts where there is an abundance of wood dates back to a very remote period, and generally consists of piling billets of wood on their ends so as to form a conical pile, openings being left at the bottom to admit air, with a central shaft to serve as a flue. The whole pile is covered with turf or moistened clay. The firing is begun at the bottom of the flue, and gradually spreads outwards and upwards. The success of the operation depends upon the rate of the combustion. Under average conditions, 100 parts of wood yield about 60 parts by volume, or 25 parts by weight, of charcoal; small-scale production on the spot often yields only
about 50%, while large-scale production was about 90% efficient even by the 17th century. The operation is so delicate that it was generally left to professional charcoal burners. These often worked in solitary groups in the woods and had a rather bad social reputation, especially travelling ones, who often sold a sack (priced at about a day's wage) with lots of rubbish mixed in to farmers and townsfolk. Historically the massive production of charcoal (at its height employing hundreds of thousands, mainly in Alpine and neighbouring forests) has been a major cause of deforestation, especially in Central Europe, and to a lesser extent even before, as in Stuart England. The increasing scarcity of easily harvested wood was a major factor in the switch to the fossil-fuel equivalents, mainly coal and brown coal, for industrial use. The modern process of carbonizing wood, either in small pieces or as sawdust in cast iron retorts, is extensively practiced where wood is scarce, and also by reason of the recovery of valuable byproducts (wood spirit, pyroligneous acid, wood tar) which the process permits. The question of the temperature of the carbonization is important; according to J. Percy, wood becomes brown at 220 °C, a deep brown-black after some time at 280 °C, and an easily powdered mass at 310 °C. Charcoal made at 300 °C is brown, soft and friable, and readily inflames at 380 °C; made at higher temperatures it is hard and brittle, and does not fire until heated to about 700 °C.

Uses

One of the most important historical applications of wood charcoal is as a constituent of gunpowder. It is also used in metallurgical operations as a reducing agent, but its application has been diminished by the introduction of coke, anthracite smalls, etc. A limited quantity is made up into the form of drawing crayons; but the greatest amount is used as a fuel, which burns hotter and cleaner than wood. Charcoal is often used by blacksmiths, for cooking, and for other industrial applications. Commercially, charcoal is often found in either lump, briquette or extruded forms. Lump charcoal is made directly from hardwood material, burns hotter than briquettes and produces far less ash than briquettes. While some briquettes are made from a combination of charcoal (heat source), brown coal (heat source), mineral carbon (heat source), borax (press release agent), sodium nitrate (ignition aid), limestone (uniform visual ashing), starch (binder), raw sawdust (ignition aid) and possibly additives like paraffin or lighter fluid to aid in
lighting them, other "natural" briquettes are made solely from charcoal and a starch binder. Extruded charcoal is made by extruding either raw ground wood or carbonized wood into logs without the use of a binder. The heat and pressure of the extruding process hold the charcoal together. If the extrusion is made from raw wood material, the extruded logs are then subsequently carbonized. The porosity of activated charcoal accounts for its ability to readily adsorb gases and liquids; charcoal is often used to filter water or adsorb odors. Its pharmacological action depends on the same property; it adsorbs the gases of the stomach and intestines, and also liquids and solids (hence its use in the treatment of certain poisonings). Charcoal filters are used in some types of gas mask to remove poisonous gases from inhaled air. Wood charcoal also to some extent removes coloring material from solutions, but animal charcoal is generally more effective. Animal charcoal or bone black is the carbonaceous residue obtained by the dry distillation of bones; it contains only about 10% carbon, the remainder being calcium and magnesium phosphates (80%) and other inorganic material originally present in the bones. It is generally manufactured from the residues obtained in the glue and gelatin industries. Its decolorizing power was applied in 1812 by Derosne to the clarification of the syrups obtained in sugar refining; but its use in this direction has now greatly diminished, owing to the introduction of more active and easily managed reagents. It is still used to some extent in laboratory practice. The decolorizing power is not permanent, becoming lost after using for some time; it may be revived, however, by washing and reheating. Charcoal is used in art for drawing, making rough sketches in painting, and is one of the possible media for making a parsemage. It must usually be preserved by the application of a fixative. Artists generally utilize charcoal in three forms:

Vine charcoal

Vine charcoal is created by burning sticks of wood (usually willow or linden/Tilia) into soft, medium, and hard consistencies. Bamboo charcoal is the principal tool in Japanese Sumi-e (lit. "charcoal drawing") art.

Compressed charcoal

Compressed charcoal is charcoal powder mixed with gum binder compressed into round or square sticks. The amount of binder determines the hardness of the stick. Compressed charcoal is used in charcoal pencils.

Powdered charcoal

Finely powdered charcoal is often used to "tone" or cover large sections of a drawing surface. Drawing over the toned areas will darken them further, but the artist can also lighten (or completely erase) within the toned area to create lighter tones.

One additional use of charcoal rediscovered recently is in horticulture. Although American gardeners have been using charcoal for a short while, research on Terra preta soils in the Amazon has found widespread use there by natives to turn otherwise unproductive soil into very rich soil.
Coal (previously referred to as pitcoal or seacoal) is a fossil fuel extracted from the ground by underground mining or open-pit mining (surface mining). It is a readily combustible black or brownish-black sedimentary rock. It is composed primarily of carbon, along with assorted other elements including sulfur. Often associated with the Industrial Revolution, coal remains an enormously important fuel and is the largest single source of electricity world-wide. In the United States, for example, coal-fired power plants generate about 50% of the electricity produced. In the UK, coal supplies 28% of electricity production (as of October 2004); this share has been decreasing since the late 1990s, with natural gas replacing coal as the primary electricity-generating fuel.

Early history, etymology and folklore


The word "coal" came from the Anglo-Saxon col, which meant charcoal, but archaeological evidence demonstrates a much longer history of use: outcrop coal was used in Britain as early as the Bronze Age (2000-3000 BCE), where it has been detected as forming part of the composition of funeral pyres. It was commonly used in the later period of the Roman occupation: coal cinders have been found in the hearths of villas, particularly in Northumberland, dated to ca. 400 CE. However, there is no evidence that the product was of importance in Britain before the high Middle Ages, i.e. after ca. 1000 CE. Mineral coal came to be referred to as seacoal, probably because it came to many places in eastern England, including London, by sea. This is accepted as the more likely explanation for the name than that it was found on beaches, having fallen from the exposed coal seams above or washed out of underwater coal seam outcrops. These easily accessible sources had largely become exhausted (or could not meet the growing demand) by the thirteenth century, when underground mining
from shafts or adits was developed. In London there is still a Seacoal Lane (off the north side of Ludgate Hill) where the coal merchants conducted their business. An alternative name was pitcoal, because it came from mines, that is pits. In America and Britain, when referring to unburned coal, the word is a mass noun, and a phrase used for individual pieces is "lumps of coal". The plural "coals" can conventionally be used for types of coal. However, pieces burning, whether of coal, charcoal, or wood are called individually "coals." It is associated with the astrological sign Capricorn. It is carried by thieves to protect them from detection and to help them to escape when pursued. It is an element of a popular ritual associated with New Year's Eve. To dream of burning coals is a symbol of disappointment, trouble, affliction, and loss, unless they are burning brightly, when the symbol gives promise of uplifting and advancement. Santa Claus is said to leave a lump of coal instead of Christmas presents in the stockings of naughty children.

Composition
Carbon forms more than 50 percent by weight and more than 70 percent by volume of coal (this includes inherent moisture). This is dependent on coal rank, with higher rank coals containing less hydrogen, oxygen and nitrogen, until 95% purity of carbon is achieved at anthracite rank and above. Graphite formed from coal is the end-product of the thermal and diagenetic conversion of plant matter (50% by volume of water) into pure carbon. Coal usually contains a considerable amount of incidental moisture, which is the water trapped within the coal in between the coal particles. Coals are usually mined wet and may be stored wet to prevent spontaneous combustion, so the carbon content of coal is quoted on both an 'as mined' and a 'moisture free' basis. Lignite and other low-rank coals still contain a considerable amount of water and other volatile components trapped within the particles of the coal, known as its macerals. This is present either within the coal particles, or as hydrogen and oxygen atoms within the molecules. This is because coal is converted from carbohydrate material, such as cellulose, into carbon in an incremental process (see below). Therefore coal carbon contents also depend heavily on the degree to which this cellulose component is preserved in the coal. Other constituents of coals include mineral matter, usually silicate minerals such as clays, illite, kaolinite and so forth, as well as carbonate minerals like siderite, calcite and aragonite. Iron sulfide minerals such as pyrite are common constituents of coals. Sulfate minerals are also found, as is some form of salt, and trace amounts of metals, notably iron, uranium, cadmium, and (rarely) gold.

Methane gas is another component of coal, produced not from bacterial means but from methanogenesis. Methane in coal is dangerous, as it can cause coal seam explosions, especially in underground mines, and may cause the coal to spontaneously combust. It is, however, a valuable by-product of some coal mining, serving as a significant source of natural gas. Coal composition is determined by specific coal assay techniques, and is performed to quantify the physical, chemical and mechanical behaviour of the coal, including whether it is a good candidate for coking coal. Some of the macerals of coal are:

vitrinite: fossil woody tissue, likely often charcoal from forest fires in the coal forests
fusinite: made from peat made from cortical tissue
exinite: fossil spore casings and plant cuticles
resinite: fossil resin and wax
alginite: fossil algal material

Origin of coal

Coal is formed from plant remains that have been compacted, hardened, chemically altered, and metamorphosed by heat and pressure over geologic time. Coal was formed in swamp ecosystems which persisted in lowland sedimentary basins similar, for instance, to the peat swamps of Borneo today. These swamp environments were formed during slow subsidence of passive continental margins, and most seem to have formed adjacent to estuarine and marine sediments suggesting that they may have been in tidal delta environments. They are often called the "coal forests". When plants die in these peat swamp environments, their biomass is deposited in anaerobic aquatic environments where low oxygen levels prevent their complete decay by bacteria and oxidation. For masses of undecayed organic matter to be preserved and to form economically valuable coal the environment must remain steady for prolonged periods of time, and the waters feeding these peat swamps must remain essentially free of sediment.

This requires minimal erosion in the uplands of the rivers which feed the coal swamps, and efficient trapping of the sediments. Eventually, and usually due to the initial onset of orogeny or other tectonic events, the coal forming environment ceases. In the majority of cases this is abrupt, with the majority of coal seams having a knife-sharp upper contact with the overlying sediments. This suggests that the onset of further sedimentation quickly destroys the peat swamp ecosystem and replaces it with meandering stream and river environments during ongoing subsidence. Burial by sedimentary loading on top of the peat swamp converts the organic matter to coal by the following processes;

compaction, due to loading of the sediments on the coal, which flattens the organic matter
removal of the water held within the peat in between the plant fragments
with ongoing compaction, removal of water from the inter-cellular structure of fossilised plants
with heat and compaction, removal of molecular water
methanogenesis; similar to treating wood in a pressure cooker, methane is produced, which removes hydrogen and some carbon, and some further oxygen (as water)
dehydrogenation, which removes hydroxyl groups from the cellulose and other plant molecules, resulting in the production of hydrogen-reduced coals

Generally, to form a coal seam 1 metre thick, between 10 and 30 metres of peat is required. Peat has a moisture content of up to 90%, so loss of water is of prime importance in the conversion of peat into lignite, the lowest rank of coal. Lignite is then converted by dehydrogenation and methanogenesis to sub-bituminous coal. Further dehydrogenation reactions, removing progressively more methane and higher hydrocarbon gases such as ethane, propane, etcetera, create bituminous coal and, when this process is complete at submetamorphic conditions, anthracite and graphite are formed.

Evidence of the types of plants that contributed to carbonaceous deposits can occasionally be found in the shale and sandstone sediments that overlie coal deposits and within the coal. Fossil evidence is best preserved in lignites and sub-bituminous coals, though fossils

in anthracite are not too rare. To date only three fossils have been found in graphite seams created from coal. The greatest coal-forming time in geologic history was during the Carboniferous era (280 to 345 million years ago). Further large deposits of coal are found in the Permian, with lesser but still significant Triassic and Jurassic deposits, and minor Cretaceous and younger deposits of lignite. In the modern European lowlands of Holland and Germany considerable thicknesses of peat have accumulated, testifying to the ubiquity of the coal-forming process. In Europe, Asia, and North America, the Carboniferous coal was formed from tropical swamp forests, which are sometimes called the "coal forests". Southern hemisphere Carboniferous coal was formed from the Glossopteris flora, which grew on cold periglacial tundra when the South Pole was a long way inland in Gondwanaland. As an alternative to the widely accepted theory of coal formation by decomposition of surface plants, a speculative creation process was proposed by Thomas Gold in his book The Deep Hot Biosphere: The Myth of Fossil Fuels. He proposes that black coal is continually created by bacteria living on upwelling methane and other hydrocarbons under the Earth's crust. This speculative hypothesis (which is not widely accepted) makes a distinction between brown and black coal, and holds that only brown coal is formed by the classical process of decomposition. It is notable that elements such as nickel, vanadium, chromium, arsenic, mercury, cadmium, lead and uranium are present in black coals.

Types of coal
As geological processes apply pressure to peat over time, it is transformed successively into:

Lignite - also referred to as brown coal, the lowest rank of coal, used almost exclusively as fuel for steam-electric power generation. Jet is a compact form of lignite that is sometimes polished and has been used as an ornamental stone since the Iron Age.
Sub-bituminous coal - whose properties range from those of lignite to those of bituminous coal; used primarily as fuel for steam-electric power generation.
Bituminous coal - a dense coal, usually black, sometimes dark brown, often with well-defined bands of bright and dull material, used primarily as fuel in steam-electric power generation, with substantial quantities also used for heat and power applications in manufacturing and to make coke.
Anthracite - the highest rank, used primarily for residential and commercial space heating.

Uses

Coal as fuel
Coal is primarily used as a solid fuel to produce heat through combustion. World coal consumption is about 5,800 million short tons (5.3 petagrams) annually, of which about 75% is used for electricity production. The region including China and India uses about 1,700 million short tons (1.5 Pg) annually, forecast to exceed 3,000 million short tons (2.7 Pg) in 2025. The USA consumes about 1,100 million short tons (1.0 Pg) of coal each year, using 90% of it for generation of electricity. Coal is the fastest growing energy source in the world, with coal use increasing by 25% for the three-year period ending in December 2004 (BP Statistical Energy Review, June 2005). When coal is used for electricity generation, it is usually pulverized and then burned in a furnace with a boiler. The furnace heat converts boiler water to steam, which is then used to spin turbines which turn generators and create electricity, with about 35-40% thermodynamic efficiency for the entire process. Approximately 40% of the world's electricity production uses coal, and the total known deposits recoverable by current technologies are sufficient for 300 years' use at current rates (see World Coal Reserves, below). A promising, more energy-efficient way of using coal for electricity production would be via solid-oxide fuel cells or molten-carbonate fuel cells (or any oxygen-ion-transport-based fuel cells that do not discriminate between fuels, as long as they consume oxygen), which would be able to reach 60-85% combined efficiency (direct electricity + waste heat steam turbine), compared to the 35-40% normally obtained with steam-only turbines. Currently these fuel cell technologies can only process gaseous fuels, and they are also sensitive to sulfur poisoning, issues which would first have to be worked out before large-scale commercial success is possible with coal. As far as gaseous fuels go, one idea is pulverized coal in a gas carrier (nitrogen), especially if the resulting carbon dioxide is to be sequestered and has to be separated from the carrier anyway. A better idea is coal gasification with water, with the water then recycled.
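The consumption and efficiency figures above allow a back-of-the-envelope estimate of generation. In the sketch below, the 24 MJ/kg heat content and 37% conversion efficiency are assumed illustrative values within the ranges quoted in this document, not measured averages.

    # Rough estimate of annual electricity from a coal tonnage.
    SHORT_TON_IN_KG = 907.185

    def electricity_twh(million_short_tons,
                        heat_content_mj_per_kg=24.0,   # assumed mid-range value
                        efficiency=0.37):              # assumed steam-cycle efficiency
        """Electrical output in terawatt-hours for an annual coal tonnage."""
        coal_kg = million_short_tons * 1e6 * SHORT_TON_IN_KG
        electrical_mj = coal_kg * heat_content_mj_per_kg * efficiency
        return electrical_mj / 3.6e9   # 1 TWh = 3.6e9 MJ

    # The USA burns roughly 1,100 million short tons a year, about 90% of it
    # for electricity generation:
    print(f"{electricity_twh(1100 * 0.9):.0f} TWh per year")   # roughly 2,200 TWh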

Gasification
High prices of oil and natural gas are leading to increased interest in "BTU Conversion" technologies such as coal gasification, methanation, liquefaction, and solidification. Coal gasification breaks down the coal into its components, usually by subjecting it to high temperature and pressure, using steam and measured amounts of oxygen. This leads to the production of carbon monoxide and hydrogen (syngas), together with carbon dioxide and other gaseous compounds. In the past, coal was converted to make coal gas, which was piped to customers to burn for illumination, heating, and cooking. At present, the safer natural gas is used instead. South Africa still uses gasification of coal for much of its petrochemical needs. Gasification is also a possibility for future energy use, as the product gas generally burns hotter and cleaner than conventional coal and can thus spin a more efficient gas turbine rather than a steam turbine. It also makes for the possibility of zero carbon dioxide emissions, even though the energy comes from the conversion of carbon to carbon dioxide. This is because gasification produces a much higher concentration of carbon dioxide than direct combustion of coal in air (which is mostly nitrogen). The higher concentration of carbon dioxide makes carbon capture and storage more economical than it otherwise would be.
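As a rough illustration of the chemistry, the idealized steam-gasification reaction C + H2O -> CO + H2 fixes the syngas yield per kilogram of carbon. The sketch below works this out; real gasifiers also burn part of the carbon with oxygen and run the water-gas shift, so these are idealized upper-bound figures rather than plant data.

    # Idealized steam gasification: C + H2O -> CO + H2, per kilogram of carbon.
    M_C, M_CO, M_H2, M_H2O = 12.011, 28.010, 2.016, 18.015   # molar masses, g/mol

    def syngas_per_kg_carbon():
        mol_c = 1000.0 / M_C              # moles of carbon in 1 kg
        co_kg = mol_c * M_CO / 1000.0     # one CO produced per C
        h2_kg = mol_c * M_H2 / 1000.0     # one H2 produced per C
        steam_kg = mol_c * M_H2O / 1000.0 # one H2O consumed per C
        return co_kg, h2_kg, steam_kg

    co, h2, steam = syngas_per_kg_carbon()
    print(f"per kg C: {co:.2f} kg CO + {h2:.3f} kg H2, consuming {steam:.2f} kg steam")
    # roughly 2.33 kg CO and 0.17 kg H2 per kg of carbon gasified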

Liquefaction
Coal can also be converted into liquid fuels like gasoline or diesel by several different processes. The Fischer-Tropsch process of indirect synthesis of liquid hydrocarbons was used in Nazi Germany, and for many years by Sasol in South Africa - in both cases because those regimes were politically isolated and unable to purchase crude oil on the open market. Coal would be gasified to make syngas (a balanced purified mixture of CO and H2 gas), and the syngas condensed using Fischer-Tropsch catalysts to make light hydrocarbons which are further processed into gasoline and diesel. Syngas can also be converted to methanol, which can be used as a fuel, fuel additive, or further processed into gasoline via the Mobil M-gas process. A direct liquefaction process, the Bergius process (liquefaction by hydrogenation), is also available but has not been used outside Germany, where such processes were operated both during World War I and World War II. SASOL in South Africa has experimented with direct hydrogenation. Several other direct liquefaction processes have been developed, among these being the SRC-I and SRC-II (Solvent Refined Coal) processes developed by Gulf Oil and implemented as pilot plants in the United States in the 1960s and 1970s. Yet another process to manufacture liquid hydrocarbons from coal is low temperature carbonization (LTC). Coal is coked at temperatures between 450 and 700 °C, compared to 800 to 1000 °C for metallurgical coke. These temperatures optimize the production of coal tars richer in lighter hydrocarbons than normal coal tar. The coal tar is then further processed into fuels. The Karrick process was developed by Lewis C. Karrick, an oil shale technologist at the U.S. Bureau of Mines in the 1920s.

All of these liquid fuel production methods release carbon dioxide (CO2) in the conversion process, far more than is released in the extraction and refinement of liquid fuels from petroleum. If these methods were adopted to replace declining petroleum supplies, carbon dioxide emissions would be greatly increased on a global scale. For future liquefaction projects, carbon dioxide sequestration is proposed to avoid releasing it into the atmosphere. As CO2 is one of the process streams, sequestration is easier than from flue gases produced in combustion of coal with air, where CO2 is diluted by nitrogen and other gases. Sequestration will, however, add to the cost. Coal liquefaction is one of the backstop technologies that could potentially limit escalation of oil prices and mitigate the effects of transportation energy shortage under peak oil. This is contingent on liquefaction production capacity becoming large enough to satisfy the very large and growing demand for petroleum. Also, a risk is that the extra carbon dioxide released in the process could catastrophically accelerate global warming and adverse climate effects. Estimates of the cost of producing liquid fuels from coal suggest that domestic U.S. production of fuel from coal becomes cost-competitive with oil priced at around 35 USD per barrel (break-even cost). This price, while above historical averages, is well below current oil prices. This makes coal a viable financial alternative to oil for the time being, although production is not great enough to make synfuels viable on a large scale. Among commercially mature technologies, indirect coal liquefaction is reported to have advantages over direct coal liquefaction by Williams and Larson (2003). Estimates are reported for sites in China where the break-even cost for coal liquefaction may be in the range of 25 to 35 USD per barrel of oil.

Coking and use of coke


Coke is a solid carbonaceous residue derived from low-ash, low-sulfur bituminous coal from which the volatile constituents are driven off by baking in an oven without oxygen at temperatures as high as 1,000 °C (about 1,800 °F), so that the fixed carbon and residual ash are fused together. Coke is used as a fuel and as a reducing agent in smelting iron ore in a blast furnace. Coke from coal is grey, hard, and porous and has a heating value of 24.8 million Btu/ton (29.6 MJ/kg). Byproducts of this conversion of coal to coke include coal-tar, ammonia, light oils, and "coal-gas". Petroleum coke is the solid residue obtained in oil refining, which resembles coke but contains too many impurities to be useful in metallurgical applications.

Harmful effects of coal burning


Combustion of coal, like that of any other carbon-containing fuel, produces carbon dioxide (CO2) and nitrogen oxides (NOx), along with varying amounts of sulfur dioxide (SO2) depending on where it was mined. Sulfur dioxide reacts with oxygen to form sulfur trioxide
(SO3), which then reacts with water to form sulfuric acid, which is returned to the Earth as acid rain. Emissions from coal-fired power plants represent the largest source of carbon dioxide emissions, a primary cause of global warming. Coal mining and abandoned mines also emit methane, another cause of global warming. Since the carbon content of coal is much higher than that of oil, burning coal is a more serious threat to global temperatures. Many other pollutants are present in coal power station emissions. Some studies claim that coal power plant emissions are responsible for tens of thousands of premature deaths annually in the United States alone. Modern power plants utilize a variety of techniques to limit the harmfulness of their waste products and improve the efficiency of burning, though these techniques are not subject to standard testing or regulation in the U.S. and are not widely implemented in some countries, as they add to the capital cost of the power plant. To eliminate CO2 emissions from coal plants, carbon capture and storage has been proposed but has yet to be used commercially. CO2 from natural sources has long been injected into depleted oil wells to recover the last of their reserves, but the use of CO2 from artificial sources rarely occurs. Coal and coal waste products, including fly ash, bottom ash, boiler slag, and flue gas desulfurization residue, contain many heavy metals, including arsenic, lead, mercury, nickel, vanadium, beryllium, cadmium, barium, chromium, copper, molybdenum, zinc, selenium and radium, which are dangerous if released into the environment. Coal also contains low levels of uranium, thorium, and other naturally occurring radioactive isotopes whose release into the environment may lead to radioactive contamination. While these substances are trace impurities, enough coal is burned that significant amounts of these substances are released, paradoxically resulting in more radioactive waste than nuclear power plants.

Coal fires
There are hundreds of coal fires burning around the world. Those burning underground can be difficult to locate and many cannot be extinguished. Fires can cause the ground above to subside, their combustion gases are dangerous to life, and breaking out to the surface can initiate surface wildfires. Coal seams can be set on fire by spontaneous combustion or contact with a mine fire or surface fire. A grass fire in a coal area can set dozens of coal seams on fire. Coal fires in China burn 120 million tons of coal a year, emitting 360 million metric tons of carbon dioxide. This amounts to 2-3% of the annual worldwide production of CO2 from fossil fuels, or as much as is emitted from all of the cars and light trucks in the United States. In the United States, a trash fire was lit in 1962 in the borough landfill, located in an abandoned anthracite strip-mine pit in the portion of the Coal Region called Centralia, Pennsylvania. It still burns underground today, 44 years later. The reddish siltstone rock that caps many ridges and buttes in the Powder River Basin (Wyoming) and in western North Dakota is called porcelanite, which also may resemble the coal-burning waste "clinker" or volcanic "scoria." Clinker is rock that has been fused by
the natural burning of coal. In the case of the Powder River Basin approximately 27 to 54 billion metric tons of coal burned within the past three million years. Wild coal fires in the area were reported by the Lewis and Clark expedition as well as explorers and settlers in the area. The Australian Burning Mountain was originally believed to be a volcano, but the smoke and ash comes from a coal fire which may have been burning for 5,000 years.
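The Chinese coal-fire figures above can be sanity-checked from the molar-mass ratio of CO2 to carbon (44/12). The carbon fraction assumed below (0.8) is an illustrative assumption rather than a figure from the text.

    # Rough check: CO2 released equals the carbon burned times 44/12.
    CO2_PER_C = 44.0 / 12.0     # kg of CO2 per kg of carbon

    def co2_from_coal(coal_mt, carbon_fraction=0.8):
        """Million tonnes of CO2 from burning coal_mt million tonnes of coal."""
        return coal_mt * carbon_fraction * CO2_PER_C

    print(f"{co2_from_coal(120):.0f} Mt CO2")
    # About 350 Mt, close to the 360 Mt cited for Chinese coal fires above.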

Petroleum diesel

A vintage Diesel pump

Diesel is produced from petroleum, and is sometimes called petrodiesel (or, less seriously, dinodiesel) when there is a need to distinguish it from diesel obtained from other sources such as vegidiesel (biodiesel) derived from pure (SVO) or recycled waste (WVO) vegetable oil. As a hydrocarbon mixture, it is obtained in the fractional distillation of crude oil between 250 °C and 350 °C at atmospheric pressure. The density of diesel is about 850 grams per liter, whereas gasoline has a density of about 720 g/L, or about 15% less. When burnt, diesel typically releases about 40.9 megajoules (MJ) per liter, whereas gasoline releases 34.8 MJ/L, also about 15% less. Diesel is generally simpler to refine than gasoline and often costs less (although price fluctuations sometimes mean that the inverse is true; for example, the cost of diesel traditionally rises during colder months as demand for heating oil, which is refined much the same way, rises). Diesel fuel, however, often contains higher quantities of sulfur. European emission standards and preferential taxation have forced oil refineries to dramatically reduce the level of sulfur in diesel fuels. In contrast, the United States has long had "dirtier" diesel, although more stringent emission standards have been adopted with the transition to
ultra-low sulfur diesel (ULSD), starting in 2006 and becoming mandatory on June 1, 2010 (see also diesel exhaust). U.S. diesel fuel typically also has a lower cetane number (a measure of ignition quality) than European diesel, resulting in worse cold weather performance and some increase in emissions. High levels of sulfur in diesel are harmful for the environment: they prevent the use of catalytic diesel particulate filters to control diesel particulate emissions, as well as more advanced technologies, such as nitrogen oxide (NOx) adsorbers (still under development), to reduce emissions. However, lowering sulfur also reduces the lubricity of the fuel, meaning that additives must be put into the fuel to help lubricate engines. Biodiesel is an effective lubricant. Diesel contains approximately 18% more energy per unit of volume than gasoline, which, along with the greater efficiency of diesel engines, contributes to fuel economy (distance traveled per volume of fuel consumed).
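The percentage comparisons in the two paragraphs above follow directly from the quoted densities and heating values. The short sketch below reproduces them and also derives per-kilogram values, which are not quoted in the text and therefore inherit the rounding of the figures given here.

    # Diesel vs gasoline, using the figures quoted above.
    fuels = {
        #            density g/L   energy MJ/L
        "diesel":   (850.0,        40.9),
        "gasoline": (720.0,        34.8),
    }
    d_rho, d_e = fuels["diesel"]
    g_rho, g_e = fuels["gasoline"]

    print(f"gasoline density is {(1 - g_rho / d_rho) * 100:.0f}% lower than diesel")
    print(f"gasoline energy/L is {(1 - g_e / d_e) * 100:.0f}% lower than diesel")
    print(f"diesel energy/L is {(d_e / g_e - 1) * 100:.0f}% higher than gasoline")
    # Per unit mass the two fuels are much closer (derived, not quoted):
    print(f"diesel:   {d_e / d_rho * 1000:.1f} MJ/kg")
    print(f"gasoline: {g_e / g_rho * 1000:.1f} MJ/kg")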

Chemical composition
Petroleum-derived diesel is composed of about 75% saturated hydrocarbons (primarily paraffins including n-, iso-, and cycloparaffins) and 25% aromatic hydrocarbons (including naphthalenes and alkylbenzenes). The average chemical formula for common diesel fuel is C12H26, ranging from approximately C10H22 to C15H32.
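From the average formula C12H26 and the roughly 0.85 kg/L density quoted earlier, one can estimate diesel's carbon mass fraction and the CO2 released per litre burned. This is a rough estimate based on the average formula alone, ignoring the aromatic fraction.

    # Rough carbon fraction and CO2 per litre from the average formula C12H26.
    C, H = 12.011, 1.008                       # atomic masses, g/mol

    molar_mass = 12 * C + 26 * H               # about 170.3 g/mol
    carbon_fraction = 12 * C / molar_mass      # about 0.846

    density_kg_per_l = 0.85                    # quoted earlier in the text
    co2_per_litre = density_kg_per_l * carbon_fraction * 44.0 / 12.0

    print(f"molar mass      : {molar_mass:.1f} g/mol")
    print(f"carbon fraction : {carbon_fraction:.3f}")
    print(f"CO2 per litre   : {co2_per_litre:.2f} kg")   # roughly 2.6 kg/L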

Algae, microbes and water


There has been a lot of discussion and misinformation about algae in diesel fuel. An alga is a plant, and it requires sunlight to live and grow. As there is no sunlight in a closed fuel tank, no algae can survive there. However, some microbes can survive there. They can feed on the diesel fuel. These microbes form a slimy colony that lives at the fuel/water interface. They grow quite rapidly in warmer temperatures. They can even grow in cold weather when fuel tank heaters are installed. Parts of the colony can break off and clog the fuel lines and fuel filters. It is possible to either kill this growth with a biocide treatment, or eliminate the water, a necessary component of microbial life. There are a number of biocides on the market, which must be handled very carefully. If a biocide is used, it must be added every time a tank is refilled until the problem is fully resolved. Biocides attack the cell wall of microbes resulting in lysis, the death of a cell by bursting. The risk of filter clogging may continue for a short period after biocide treatment until cellular residues break down and are absorbed into the fuel.

Synthetic diesel
Wood, straw, corn, garbage, and sewage sludge may be dried and gasified. After purification, the so-called Fischer-Tropsch process is used to produce synthetic diesel. Other approaches use enzymatic processes and can also be economical when oil prices are high. Synthetic diesel may also be produced from natural gas in the GTL process or from coal
in the CTL process. Such synthetic diesel has 30% less particulate emissions than conventional diesel (US- California).

Biodiesel
Biodiesel can be obtained from vegetable oil (vegidiesel / vegifuel), or animal fats (biolipids, using transesterification). Biodiesel is a non-fossil fuel alternative to petrodiesel. It can also be mixed with petrodiesel in any amount in modern engines, though when first using it, the solvent properties of the fuel tend to dissolve accumulated deposits and can clog fuel filters. Biodiesel has a higher gel point than petrodiesel, but is comparable to diesel #2. This can be overcome by using a biodiesel/petrodiesel blend, or by installing a fuel heater, but this is only necessary during the colder months. There have been reports that a diesel-biodiesel mix results in lower emissions than either can achieve alone. A small percentage of biodiesel can be used as an additive in low-sulfur formulations of diesel to increase the lubricity lost when the sulfur is removed. Chemically, most biodiesel consists of alkyl (usually methyl) esters instead of the alkanes and aromatic hydrocarbons of petroleum derived diesel. However, biodiesel has combustion properties very similar to petrodiesel, including combustion energy and cetane ratings. Paraffin biodiesel also exists. Due to the purity of the source, it has a higher quality than petrodiesel. Ethanol can be added to petroleum diesel fuel in amounts up to 15% along with additives to keep the ethanol emulsified.

Applications
Internal Combustion Engines
Diesel is used in diesel engines, a type of internal combustion engine. Rudolf Diesel originally designed the diesel engine to use peanut oil as a fuel in order to help support agrarian society. Diesel engines are used in cars, trucks, motorcycles, boats and locomotives. Packard diesel motors were used in aircraft as early as 1927, and Charles Lindbergh flew a Stinson SM1B with a Packard Diesel in 1928. A Packard diesel motor designed by L.M. Woolson was fitted to a Stinson X7654, and in 1929 it was flown 1000 km non-stop from Detroit to Langley, Virginia (near Washington, D.C.). In 1931, Walter Lees and Fredrick Brossy set the nonstop flight record flying a Bellanca powered by a Packard Diesel for 84h 32m. The Hindenburg was powered by four 16 cylinder diesel engines, each with approximately 1200 horsepower available in bursts, and 850 horsepower available for cruising.

The very first diesel-engine automobile trip was completed on January 6, 1930. The trip was from Indianapolis to New York City, a distance of nearly 800 miles (1300 km). This feat helped to prove the usefulness of the internal combustion engine.

Automobile racing
In 1931, Dave Evans drove his Cummins Diesel Special to a nonstop finish in the Indianapolis 500, the first time a car had completed the race without a pit stop. That car and a later Cummins Diesel Special are on display at the Indianapolis Motor Speedway Hall of Fame Museum. With turbocharged diesel cars growing stronger in the 1990s, they were entered in touring car racing, and BMW even won the 24 Hours Nürburgring in 1998 with a 320d. After winning the 12 Hours of Sebring in 2006 with their diesel-powered R10 LMP, Audi won the 24 Hours of Le Mans, too. This was the first time a diesel-fueled vehicle had won at Le Mans against cars powered with regular fuel or other alternative fuels like methanol or bioethanol. Competitors like Porsche predicted this victory for Audi, as the regulations are pro-diesel. French automaker Peugeot is also planning to enter a diesel-powered LMP in 2007.

Other uses
Bad quality (high sulfur) diesel fuel has been used as a palladium extraction agent for the liquid-liquid extraction of this metal from nitric acid mixtures. This has been proposed as a means of separating the fission product palladium from PUREX raffinate, which comes from used nuclear fuel. In this solvent extraction system the hydrocarbons of the diesel act as the diluent while the dialkyl sulfides act as the extractant. This extraction operates by a solvation mechanism. So far, neither a pilot plant nor a full-scale plant has been constructed to recover palladium, rhodium or ruthenium from nuclear wastes created by the use of nuclear fuel.

Taxation
Diesel fuel is very similar to heating oil, which is used in central heating. In Europe, the United States and Canada, taxes on diesel fuel are higher than on heating oil due to the fuel tax, and in those areas, heating oil is marked with fuel dyes and trace chemicals to prevent and detect tax fraud. Similarly, "untaxed" diesel is available in the United States for use primarily in agricultural applications such as tractor fuel. This untaxed diesel is also dyed red for identification purposes, and should a person be found to be using this untaxed diesel fuel for a typically taxed purpose (such as "over-the-road", or driving use), the user can be fined US$10,000. In the United Kingdom it is known as red diesel, and is also used by agricultural vehicles. Diesel fuel, or Marked Gas Oil, is dyed green in the Republic of Ireland. The term DERV (short for "diesel engined road vehicle") is also used in the UK as a synonym for diesel fuel. In India, taxes on diesel fuel are lower than on gasoline, as the majority of transportation that moves grain and other essential commodities across the country runs on diesel.

Lignite, often referred to as brown coal, is the lowest rank of coal and used almost exclusively as fuel for steam-electric power generation. It is brownish-black and has a high inherent moisture content, sometimes as high as 66 percent, and very high ash content compared to bituminous coal. It is also a heterogeneous mixture of compounds for which no single structural formula will suffice. The heat content of lignite ranges from 9 to 17 million Btu per short ton (10 to 20 MJ/kg) on a moist, mineral-matter-free basis. The heat content of lignite consumed in the United States averages 13 million Btu/ton (15 MJ/kg), on the as-received basis (i.e., containing both inherent moisture and mineral matter). When reacted with quaternary amine, amine-treated lignite (ATL) forms. ATL is used in oil well drilling fluids to reduce fluid loss.

Lignite mined, in millions of metric tons:

                        1970     1980     1990     2000     2001
1.  Germany            369.3    388.0    356.5    167.7    175.4
2.  Russia             127.0    141.0    137.3     86.4     83.2
3.  USA                  5.4     42.3     82.6     83.5     80.5
4.  Australia           24.2     32.9     46.0     65.0     67.8
5.  Greece               8.1     23.2     51.7     63.3     67.0
6.  Poland              32.8     36.9     67.6     61.3     59.5
7.  Turkey               4.4     15.0     43.8     63.0     57.2
8.  Czech Republic      67.0     87.0     71.0     50.1     50.7
9.  China               13.0     22.0     38.0     40.0     47.0
10. SFR Yugoslavia      26.0     43.0     60.0       -        -
10. FR Yugoslavia         -        -        -      35.5     35.5
11. Romania             14.1     27.1     33.5     17.9     29.8
12. North Korea          5.7     10.0     10.0     26.0     26.5
    ...
    Austria              3.7      1.7      2.5      1.3      1.2
    Total              804.0  1,028.0  1,214.0    877.4    894.8
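For convenience, the production table above can be held as a small data structure, which makes trends such as the change between 1990 and 2001 easy to compute. The figures are those in the table, in millions of metric tons; the country set is truncated as in the table.

    # Lignite production (million metric tons), from the table above.
    YEARS = (1970, 1980, 1990, 2000, 2001)

    LIGNITE_MT = {
        "Germany":        (369.3, 388.0, 356.5, 167.7, 175.4),
        "Russia":         (127.0, 141.0, 137.3,  86.4,  83.2),
        "USA":            (  5.4,  42.3,  82.6,  83.5,  80.5),
        "Australia":      ( 24.2,  32.9,  46.0,  65.0,  67.8),
        "Greece":         (  8.1,  23.2,  51.7,  63.3,  67.0),
        "Poland":         ( 32.8,  36.9,  67.6,  61.3,  59.5),
        "Turkey":         (  4.4,  15.0,  43.8,  63.0,  57.2),
        "Czech Republic": ( 67.0,  87.0,  71.0,  50.1,  50.7),
        "China":          ( 13.0,  22.0,  38.0,  40.0,  47.0),
    }

    for country, row in LIGNITE_MT.items():
        change = row[YEARS.index(2001)] - row[YEARS.index(1990)]
        print(f"{country:15s} 1990 -> 2001: {change:+7.1f} Mt")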

Gasoline, also called petrol, is a petroleum-derived liquid mixture consisting primarily of hydrocarbons and enhanced with benzenes to increase octane ratings, used as fuel in internal combustion engines. Many Commonwealth countries use the term petrol (abbreviated from petroleum spirit). The term gasoline is commonly used in North America. The word is commonly shortened in colloquial usage to "gas" (see other meanings). The term mogas, short for motor gasoline, for use in cars is used to distinguish it from avgas, aviation gasoline used in aircraft. This should be distinguished in usage from genuinely gaseous fuels used in internal combustion engines such as LPG.

The word gasoline can also be used in British English to refer to a different petroleum derivative historically used in lamps. However this use is now uncommon. (Refer to the Oxford English Dictionary.)

Pharmaceutical
Before internal combustion engines were invented in the mid-1800s, gasoline was sold in small bottles as a treatment against lice and their eggs. At that time, the word "Petrol" was a trade name. This treatment method is no longer common because of the inherent fire hazard and the risk of dermatitis. The word petrol may be derived from the Old French pétrole, meaning petroleum. During the Franco-Prussian War of 1870-71, pétrole was stockpiled in Paris for use against a possible Prussian attack on the city. Later in 1871, during the revolutionary Paris Commune, rumours spread around the city of pétroleuses, women using bottles of petrol to commit arson against city buildings. Petrol is also abused as a psychoactive inhalant.

Etymology
The word "gasolene" was coined in 1865 from the word gas and the chemical suffix -ine/-ene. The modern spelling was first used in 1871. The shortened form "gas" was first recorded in American English in 1905. Gasoline originally referred to any liquid used as the fuel for a gasoline-powered engine, other than diesel fuel or liquefied gas; methanol racing fuel would have been classed as a type of gasoline. The word "petrol" was first used in reference to the refined substance as early as 1892 (it previously referred to unrefined petroleum), and was registered as a trade name by English wholesaler Carless, Capel & Leonard. Although it was never officially registered as a trademark, Carless's competitors used the term "Motor Spirit" until the 1930s. Bertha Benz bought the petrol for her famous drive from Mannheim to Pforzheim and back at chemists' shops. In Germany petrol is called Benzin, though the name derives not from her but from the chemical benzine.

World War II and octane


One interesting historical issue involving octane rating took place during WWII. Germany received nearly all its oil from Romania, and set up huge distilling plants in Germany to produce gasoline from coal. In the US the oil was not "as good" and the oil industry had to invest heavily in various expensive boosting systems. This turned out to have benefits. The US industry started delivering fuels of ever-increasing octane ratings by adding more of the
boosting agents, and the infrastructure was in place for a post-war octane-additives industry. Good crude oil was no longer a factor during wartime, and by war's end American aviation fuel was commonly 130 to 150 octane. This high octane could easily be used in existing engines to deliver much more power by increasing the pressure delivered by the superchargers. The Germans, relying entirely on "good" gasoline, had no such industry, and instead had to rely on ever-larger engines to deliver more power. However, German aviation engines were of the direct fuel injection type and could use methanol-water injection and nitrous oxide injection, which gave 50% more engine power for five minutes of dogfight. This could be done only five times, or after 40 hours' run-time, before the engine would have to be rebuilt. Most German aero engines used 87 octane fuel (called B4), while some high-powered engines used 100 octane (C2/C3) fuel. This historical "issue" is based on a very common misapprehension about wartime fuel octane numbers. There are two octane numbers for each fuel, one for lean mix and one for rich mix, rich being always greater. So, for example, a common British aviation fuel of the later part of the war was 100/125. The misapprehension that German fuels had a lower octane number (and thus a poorer quality) arises because the Germans quoted the lean mix octane number for their fuels while the Allies quoted the rich mix number for their fuels. Standard German high-grade aviation fuel used in the later part of the war (given the designation C3) had lean/rich octane numbers of 100/130. The Germans would list this as a 100 octane fuel while the Allies would list it as 130 octane. After the war the US Navy sent a Technical Mission to Germany to interview German petrochemists and examine German fuel quality. Their report, entitled "Technical Report 145-45 Manufacture of Aviation Gasoline in Germany", chemically analyzed the different fuels and concluded that "Toward the end of the war the quality of fuel being used by the German fighter planes was quite similar to that being used by the Allies".

Chemical analysis and production


Gasoline is produced in oil refineries. Material separated from crude oil via distillation, called natural gasoline, does not meet the required specifications for modern engines (in particular octane rating; see below), but will form part of the blend. The bulk of a typical gasoline consists of hydrocarbons with between 5 and 12 carbon atoms per molecule. Many of these hydrocarbons are considered hazardous substances and are regulated in the United States by OSHA. The MSDS (Material Safety Data Sheet) for unleaded gasoline lists at least 15 hazardous chemicals occurring in varying amounts, some up to 35% by volume of gasoline. These include benzene (up to 5% by volume), toluene (up to 35% by volume), naphthalene (up to 1% by volume), trimethylbenzene (up to 7% by volume), MTBE (up to 18% by volume) and about ten others. (Reference: Tesoro Petroleum Companies, Inc.)

The various refinery streams blended together to make gasoline all have different characteristics. Some important streams are:

- Reformate, produced in a catalytic reformer, with a high octane rating, high aromatic content, and very low olefins (alkenes).
- Cat Cracked Gasoline or Cat Cracked Naphtha, produced from a catalytic cracker, with a moderate octane rating, high olefins (alkene) content, and moderate aromatics level. Here, "cat" is short for "catalyst".
- Hydrocrackate (Heavy, Mid, and Light), produced from a hydrocracker, with medium to low octane rating and moderate aromatic levels.
- Natural Gasoline (has very many names), directly from crude oil, with low octane rating, low aromatics (depending on the crude oil), some naphthenes (cycloalkanes) and zero olefins (alkenes).
- Alkylate, produced in an alkylation unit, with a high octane rating; it is pure paraffin (alkane), mainly branched chains.
- Isomerate (various names), which is made by isomerising Natural Gasoline to increase its octane rating and is very low in aromatics.

(The terms used here are not always the correct chemical terms. Typically they are old fashioned, but they are the terms normally used in the oil industry. The exact terminology for these streams varies by oil company and by country.) Overall a typical gasoline is predominantly a mixture of paraffins (alkanes), naphthenes (cycloalkanes), aromatics and olefins (alkenes). The exact ratios can depend on:

- the oil refinery that makes the gasoline, as not all refineries have the same set of processing units;
- the crude oil used by the refinery on a particular day;
- the grade of gasoline, in particular the octane rating.

Currently many countries set tight limits on gasoline aromatics in general, benzene in particular, and olefin (alkene) content. This is increasing the demand for high-octane pure paraffin (alkane) components, such as alkylate, and is forcing refineries to add processing units to reduce benzene content. Gasoline can also contain other organic compounds, such as organic ethers (deliberately added), plus small levels of contaminants, in particular sulfur compounds such as disulfides and thiophenes. Some contaminants, in particular thiols and hydrogen sulfide, must be removed because they cause corrosion in engines.

Volatility
Gasoline is more volatile than diesel oil, Jet-A or kerosene, not only because of its base constituents but also because of the additives put into it. The final control of volatility is often achieved by blending in butane. The desired volatility depends on the ambient temperature: in hotter climates, gasoline components of higher molecular weight and thus lower volatility are used. In cold climates, too little volatility results in cars failing to start. In hot climates, excessive volatility results in what is known as "vapour lock", where the fuel vaporizes in the fuel lines and starves the engine so that combustion fails to occur. In Australia the volatility limit changes every month and differs for each main distribution centre, but most countries simply have a summer, winter and perhaps intermediate limit. In the United States, volatility is regulated in large urban centres to reduce the emission of unburned hydrocarbons; in large cities, so-called reformulated gasoline that is less prone to evaporation, among other properties, is required. Volatility standards may be relaxed (allowing more gasoline components into the atmosphere) during anticipated or emergency gasoline shortages. For example, on 31 August 2005, in response to Hurricane Katrina, the United States permitted the sale of non-reformulated gasoline in some urban areas, which effectively permitted an early switch from summer- to winter-grade gasoline. As mandated by EPA administrator Stephen L. Johnson, this "fuel waiver" was made effective through 15 September 2005. Though relaxed volatility standards increase evaporative emissions that contribute to ground-level ozone and air pollution, higher-volatility gasoline (which contains fewer additives than gasoline whose volatility has been artificially lowered) effectively increases a nation's gasoline supply by making it easier for oil refiners to produce it.

Octane rating
The most important characteristic of gasoline is its octane rating, which is a measure of how resistant the fuel is to premature detonation, the cause of knocking. It is measured relative to a mixture of 2,2,4-trimethylpentane (an isomer of octane) and n-heptane. There are a number of different conventions for expressing the octane rating, so the same fuel may be labeled with a different number depending upon the system used.
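As an illustration of how the conventions differ, the "anti-knock index" (AKI) shown on US pumps is the average of the Research (RON) and Motor (MON) octane numbers. The sketch below assumes illustrative RON/MON pairs rather than measured values.

```python
# Minimal sketch: the US pump "anti-knock index" (AKI) is (RON + MON) / 2.
# The sample RON/MON pairs are illustrative, not measured data.
def anti_knock_index(ron: float, mon: float) -> float:
    """AKI, i.e. (R + M) / 2, as posted on US pumps."""
    return (ron + mon) / 2.0

samples = {
    "regular (example)": (92.0, 82.0),
    "premium (example)": (98.0, 88.0),
}
for grade, (ron, mon) in samples.items():
    print(f"{grade}: RON {ron}, MON {mon} -> AKI {anti_knock_index(ron, mon):.1f}")
```

This is why the same pump grade can carry a number several points lower in the US (AKI) than in Europe (RON).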

Energy content
Gasoline contains about 32 megajoules per litre (MJ/l), or roughly 131 MJ per US gallon. Volumetric energy density and research octane number (RON) of some fuels compared to gasoline:

Fuel type                                MJ/l     BTU/imp gal   BTU/US gal   RON
Diesel                                   40.9     176,000       147,000      25 (*)
Gasoline                                 32.0     150,000       125,000      91-98
Gasohol (10% ethanol + 90% gasoline)     28.06    145,200       120,900      93/94
LPG                                      22.16    114,660       95,475       115
Ethanol                                  19.59    101,360       84,400       129
Methanol                                 14.57    75,420        62,800       123
A high-octane fuel such as LPG has a lower energy content than lower-octane gasoline, resulting in lower overall power output at the compression ratio an engine would normally run at on gasoline. However, with an engine tuned to the use of LPG (i.e. via a higher compression ratio such as 12:1 instead of 8:1), this lower power output can be overcome. This is because higher-octane fuels allow a higher compression ratio: there is less space in the cylinder when the mixture is compressed and hence a higher cylinder temperature, which improves thermodynamic efficiency (see the ideal-cycle sketch below), along with fewer wasted hydrocarbons (therefore less pollution and wasted energy), bringing higher power levels coupled with less pollution overall because of the greater efficiency. The main reason for the lower energy content (per litre) of LPG in comparison to gasoline is its lower density; the energy content per kilogram is higher than for gasoline (higher hydrogen-to-carbon ratio). The density of gasoline is about 737.22 kg/m3. Different countries have some variation in what RON (Research Octane Number) is standard for gasoline, or petrol. In the UK, ordinary regular unleaded petrol is 91 RON (not commonly available), premium unleaded petrol is always 95 RON, and super unleaded is usually 97-98 RON; however, both Shell and BP produce fuel at 102 RON for cars with high-performance engines. In the US, octane ratings in fuels vary between 86-87 AKI (91-92 RON) for regular, through 89-90 AKI (94-95 RON) for mid-grade (European premium), up to 90-94 AKI (95-99 RON) for premium unleaded or E10 (super in Europe).
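For a concrete sense of the compression-ratio argument above, the ideal (air-standard) Otto-cycle efficiency is 1 - r^(1 - gamma). The sketch below uses the usual gamma = 1.4 assumption for air; real engine efficiencies are considerably lower than these ideal values.

```python
# Ideal (air-standard) Otto-cycle efficiency as a function of compression ratio.
# gamma = 1.4 is the standard air assumption; real engines fall well short of this.
def otto_efficiency(compression_ratio: float, gamma: float = 1.4) -> float:
    return 1.0 - compression_ratio ** (1.0 - gamma)

for r in (8.0, 12.0):
    print(f"compression ratio {r}:1 -> ideal efficiency {otto_efficiency(r):.1%}")
# -> roughly 56% at 8:1 versus roughly 63% at 12:1 (ideal-cycle values only)
```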

Additives
Lead
The mixture known as gasoline, when used in high-compression internal combustion engines, has a tendency to ignite early (pre-ignition or detonation), causing a damaging "engine knocking" (also called "pinging" or "pinking") noise. Early research into this effect was led by A. H. Gibson and Harry Ricardo in England and Thomas Midgley and Thomas Boyd in the United States. The discovery that lead additives modified this behavior led to their widespread adoption in the 1920s, and therefore to more powerful, higher-compression engines. The most popular additive was tetraethyl lead. However, with the discovery of the environmental and health damage caused by the lead, and the incompatibility of lead with the catalytic converters found on virtually all US automobiles built since 1975, this practice began to wane in the 1980s. Most countries are phasing out leaded fuel, and different additives have replaced the lead compounds; the most popular include aromatic hydrocarbons, ethers and alcohols (usually ethanol or methanol). In the U.S., lead had been blended with gasoline since the early 1920s, primarily to boost octane levels, and standards to phase out leaded gasoline were first implemented in 1973. In 1995, leaded fuel accounted for only 0.6% of total gasoline sales and less than 2,000 tons of lead per year. From January 1, 1996, the Clean Air Act banned the sale of leaded fuel for use in on-road vehicles; possession and use of leaded petrol in a regular on-road vehicle now carries a maximum $10,000 fine in the United States. However, fuel containing lead may continue to be sold for off-road uses, including aircraft, racing cars, farm equipment, and marine engines, until 2008. The ban on leaded gasoline was expected to lower levels of lead in people's bloodstreams, and it meant thousands of tons of lead per year were no longer released into the air by automobiles. A side effect of the lead additives was protection of valve seats from erosion, and many classic cars' engines have needed modification to use lead-free fuels since leaded fuels became unavailable; "lead substitute" products are produced and can sometimes be found at auto parts stores. Gasoline, as delivered at the pump, also contains additives to reduce internal engine carbon buildup, improve combustion, and allow easier starting in cold climates. In most of South America, Africa, and some parts of Asia and the Middle East, leaded gasoline is still common.

MMT
Methylcyclopentadienyl manganese tricarbonyl (MMT) has been used for many years in Canada and more recently in Australia to boost octane. It also helps old cars designed for leaded fuel run on unleaded fuel without the need for additives to prevent valve problems. There is ongoing debate as to whether MMT is harmful to the environment and toxic to humans; US federal sources state that MMT is suspected to be a powerful neurotoxin and respiratory toxin.

Dye
Sometimes dyes are added to fuel for identification. However, different systems are in use, and this has led to confusion. In the United States, one color scheme dyed a kind of aviation fuel red, while under another scheme red dye indicated untaxed agricultural diesel; this resulted in contaminated aviation fuel when these very different fuels of similar color were mixed.

Oxygenate blending
Oxygenate blending adds oxygen to the fuel in oxygen-bearing compounds such as MTBE, ethanol and ETBE, and so reduces the amount of carbon monoxide and unburned fuel in the exhaust gas, thus reducing smog. In many areas throughout the US, oxygenate blending is mandatory; for example, in Southern California fuel must contain 2% oxygen by weight. The resulting fuel is often known as reformulated gasoline (RFG) or oxygenated gasoline. The federal requirement that RFG contain oxygen was dropped on May 6, 2006. MTBE use is being phased out in some states due to contamination of groundwater, and in some places it is already banned. Ethanol, and to a lesser extent the ethanol-derived ETBE, are common replacements. Ethanol derived from biomass such as corn, sugar cane or grain is especially common and is often referred to as bio-ethanol. An ethanol-gasoline mix of 10% ethanol is called gasohol; an ethanol-gasoline mix of 85% ethanol is called E85. The most extensive use of ethanol takes place in Brazil, where the ethanol is derived from sugarcane. Over 3,400 million US gallons (about 13,000,000 m3) of ethanol, mostly produced from corn, was produced in the United States in 2004 for fuel use, and E85 is fast becoming available in much of the United States. The use of bio-ethanol, either directly or indirectly by conversion to bio-ETBE, is encouraged by the European Union Biofuels Directive. However, since producing bio-ethanol from fermented sugars and starches involves distillation, ordinary people in much of Europe cannot legally ferment and distill their own bio-ethanol at present (unlike in the US, where obtaining a BATF distillation permit has been easy since the 1973 oil crisis).
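As a rough illustration of the 2%-oxygen-by-weight requirement, the oxygen content of an ethanol blend can be estimated from the ethanol volume fraction. The densities and blend levels in the sketch below are approximate, illustrative assumptions, not specification data.

```python
# Rough estimate of oxygen content (by weight) of an ethanol-gasoline blend.
# Densities and the ethanol oxygen mass fraction (16 / 46.07, from C2H5OH) are
# approximate, illustrative values rather than specification data.
RHO_ETHANOL = 0.789      # kg/L, approximate
RHO_GASOLINE = 0.737     # kg/L, approximate (cf. the density quoted earlier)
O_FRACTION_ETHANOL = 16.0 / 46.07

def oxygen_weight_percent(ethanol_vol_fraction: float) -> float:
    eth_mass = ethanol_vol_fraction * RHO_ETHANOL
    gas_mass = (1.0 - ethanol_vol_fraction) * RHO_GASOLINE
    return 100.0 * eth_mass * O_FRACTION_ETHANOL / (eth_mass + gas_mass)

for blend, vol_frac in [("E10 (gasohol)", 0.10), ("~6% ethanol", 0.06)]:
    print(f"{blend}: about {oxygen_weight_percent(vol_frac):.1f}% oxygen by weight")
# E10 works out to roughly 3.7% oxygen; around 6% ethanol gives roughly the 2% level
```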

Health concerns
Many of the non-aliphatic hydrocarbons naturally present in gasoline (especially aromatic ones like benzene), as well as many anti-knock additives, are carcinogenic. Because of this, any large-scale or ongoing leak of gasoline poses a threat to public health and the environment should the gasoline reach a public supply of drinking water. The chief risks of such leaks come not from vehicles but from gasoline delivery truck accidents and leaks from storage tanks; because of this risk, most (underground) storage tanks now have extensive measures in place to detect and prevent such leaks, such as sacrificial anodes. Gasoline is rather volatile (meaning it readily evaporates), requiring that storage tanks on land and in vehicles be properly sealed. The high volatility also means that it will ignite easily in cold weather conditions, unlike diesel, for example. Appropriate venting is needed to keep the pressure inside and outside the tank similar. Gasoline also reacts dangerously with certain common chemicals; for example, gasoline and crystal Drano (sodium hydroxide) react together in a spontaneous combustion. It is also one of the few liquids for which vomiting should not be induced if swallowed, because of the risk of it burning the throat. Gasoline is also a source of pollutant gases: even gasoline free of lead or sulfur compounds produces carbon dioxide, nitrogen oxides and carbon monoxide in the exhaust of an engine running on it. Through misuse as an inhalant, gasoline also contributes to damage to health; petrol sniffing is a common way of obtaining a high and has become epidemic in some poorer communities, for example among Indigenous Australians. In response, Opal fuel has been developed by the BP Kwinana Refinery in Australia; it contains only 5% aromatics (unlike the usual 25%), which inhibits the effects of inhalation.

Petroleum

Petroleum (from Greek petra, "rock", and elaion, "oil", or Latin oleum, "oil"), or crude oil, is a black, dark brown or greenish liquid found in porous rock formations in the earth. The American Petroleum Institute, in its Manual of Petroleum Measurement Standards (MPMS), defines it as "a substance, generally liquid, occurring naturally in the earth and composed mainly of mixtures of chemical compounds of carbon and hydrogen with or without other nonmetallic elements such as sulfur, oxygen, and nitrogen." Petroleum is found in porous rock formations in the upper strata of some areas of the Earth's crust. It consists of a complex mixture of hydrocarbons, largely of the alkane series, but may vary much in appearance and composition. Petroleum is used mostly, by volume, for producing fuel oil and petrol (gasoline), both important "primary energy" sources (IEA Key World Energy Statistics). Petroleum is also the raw material for many chemical products, including solvents, fertilizers, pesticides, and plastics. About 88% of all petroleum extracted is processed as fuel; the other 12% is converted into other materials such as plastics. Since petroleum is a non-renewable resource, many people are worried about peak oil and eventual depletion in the near future. Due to its continual demand and consequent value, oil has been dubbed "black gold". The combining form of the word petroleum is petro-, as in petrodiesel (petroleum diesel).

Formation
Biogenic theory
Most geologists view crude oil, like coal and natural gas, as the product of compression and heating of ancient organic materials over geological time. According to this theory, oil is formed from the preserved remains of prehistoric zooplankton and algae which settled to the sea bottom in large quantities under anoxic conditions. (Terrestrial plants tend to form coal, and very few dinosaurs have been converted into oil.) Over geological time this organic matter, mixed with mud, is buried under heavy layers of sediment. The resulting high levels of heat and pressure cause the remains to metamorphose, first into a waxy material known as kerogen, which is found in various oil shales around the world, and then, with more heat, into liquid and gaseous hydrocarbons in a process known as catagenesis. Because most hydrocarbons are lighter than rock or water, they sometimes migrate upward through adjacent rock layers until they become trapped beneath impermeable rocks, within porous rocks called reservoirs. Concentration of hydrocarbons in a trap forms an oil field, from which the liquid can be extracted by drilling and pumping. Geologists often refer to an "oil window", the temperature range in which oil forms: below the minimum temperature the organic matter remains trapped in the form of kerogen, and above the maximum temperature the oil is converted to natural gas through the process of thermal cracking. Though this happens at different depths in different locations around the world, a "typical" depth for the oil window might be 4-6 km. Note that even if oil is formed at extreme depths, it may be trapped at much shallower depths than where it was formed. (In the case of the Athabasca Oil Sands, it is found right at the surface.) Three conditions must be present for oil reservoirs to form: first, a source rock rich in organic material buried deep enough for subterranean heat to cook it into oil; second, a porous and permeable reservoir rock for it to accumulate in; and last, a cap rock (seal) that prevents it from escaping to the surface. The vast majority of oil that has been produced by the earth has long ago escaped to the surface and been biodegraded by oil-eating bacteria; what oil companies are looking for is the small fraction that has been trapped by this rare combination of circumstances. Oil sands are reservoirs of partially biodegraded oil still in the process of escaping, but they contain so much migrating oil that, although most of it has escaped, vast amounts are still present, more than can be found in conventional oil reservoirs. Oil shales, on the other hand, are source rocks that have never been buried deep enough to convert their trapped kerogen into oil. The reactions that produce oil and natural gas are often modeled as first-order breakdown reactions, in which kerogen is broken down to oil and natural gas by a set of parallel reactions, and oil eventually breaks down to natural gas by another set of reactions. The first set was originally patented in 1694 under British Crown Patent No. 330, covering "a way to extract and make great quantityes of pitch, tarr, and oyle out of a sort of stone." The latter set is regularly used in petrochemical plants and oil refineries.
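To make the first-order breakdown modelling mentioned above concrete, the sketch below integrates a single kerogen-to-oil reaction with an Arrhenius rate constant. The activation energy, pre-exponential factor and burial temperature are illustrative assumptions, not measured values for any particular source rock.

```python
import math

# Minimal sketch of a first-order kerogen -> oil breakdown with an Arrhenius
# rate constant. All parameter values below are illustrative assumptions.
R = 8.314          # J/(mol*K), gas constant
A = 1e14           # 1/s, assumed pre-exponential factor
Ea = 218e3         # J/mol, assumed activation energy
T = 120 + 273.15   # K, assumed burial temperature inside an "oil window"

k = A * math.exp(-Ea / (R * T))      # first-order rate constant, 1/s

seconds_per_myr = 1e6 * 365.25 * 24 * 3600
for myr in (1, 10, 50, 100):
    remaining = math.exp(-k * myr * seconds_per_myr)   # fraction of kerogen left
    print(f"after {myr:>3} Myr: {1 - remaining:.1%} of the kerogen converted")
```

With these assumed values, conversion is minor after a million years but nearly complete after a hundred million, which is the qualitative point of the "oil window" picture.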

Abiogenic theory
The idea of abiogenic petroleum origin was championed in the Western world by astronomer Thomas Gold, based on ideas from Russia, mainly the studies of Nikolai Kudryavtsev. The idea proposes that large amounts of carbon exist naturally in the planet, some in the form of hydrocarbons. Hydrocarbons are less dense than aqueous pore fluids and migrate upward through deep fracture networks; thermophilic, rock-dwelling microbial life-forms are held to be in part responsible for the biomarkers found in petroleum. However, this theory is very much a minority opinion, especially amongst Western geologists. It is sometimes invoked when scientists cannot otherwise explain apparent oil inflows into certain oil reservoirs, but most of these "abiotic" fields are explained as the result of geologic quirks. No Western oil companies are currently known to explore for oil based on this theory.

Extraction
Locating an oil field is the first obstacle to be overcome. Today, petroleum engineers use instruments such as gravimeters and magnetometers in the search for petroleum. Generally, the first stage in the extraction of crude oil is to drill a well into the underground reservoir. Historically, in the USA, some oil fields existed where the oil rose naturally to the surface, but most of these fields have long since been depleted, except for certain remote locations in Alaska. Often many wells (called multilateral wells) are drilled into the same reservoir, to ensure that the extraction rate will be economically viable. Also, some wells (secondary wells) may be used to pump water, steam, acids or various gas mixtures into the reservoir to raise or maintain the reservoir pressure, and so maintain an economic extraction rate. If the underground pressure in the oil reservoir is sufficient, then the oil will be forced to the surface under this pressure. Gaseous fuels or natural gas are usually present, which also supply needed underground pressure. In this situation it is sufficient to place a complex arrangement of valves (the Christmas tree) on the well head to connect the well to a pipeline network for storage and processing. This is called primary oil recovery. Usually, only about 20% of the oil in a reservoir can be extracted this way.

Over the lifetime of the well the pressure will fall, and at some point there will be insufficient underground pressure to force the oil to the surface. If economical, and it often is, the remaining oil in the well is extracted using secondary oil recovery methods (see: energy balance and net energy gain). Secondary oil recovery uses various techniques to aid in recovering oil from depleted or low-pressure reservoirs. Sometimes pumps, such as beam pumps and electrical submersible pumps (ESPs), are used to bring the oil to the surface. Other secondary recovery techniques increase the reservoir's pressure by water injection, natural gas reinjection and gas lift, which injects air, carbon dioxide or some other gas into the reservoir. Together, primary and secondary recovery allow 25% to 35% of the reservoir's oil to be recovered. Tertiary oil recovery reduces the oil's viscosity to increase oil production. Tertiary recovery is started when secondary oil recovery techniques are no longer enough to sustain production, but only when the oil can still be extracted profitably. This depends on the cost of the extraction method and the current price of crude oil. When prices are high, previously unprofitable wells are brought back into production and when they are low, production is curtailed. Thermally enhanced oil recovery methods (TEOR) are tertiary recovery techniques that heat the oil and make it easier to extract. Steam injection is the most common form of TEOR, and is often done with a cogeneration plant. In this type of cogeneration plant, a gas turbine is used to generate electricity and the waste heat is used to produce steam, which is then injected into the reservoir. This form of recovery is used extensively to increase oil production in the San Joaquin Valley, which has very heavy oil, yet accounts for 10% of the United States' oil production. In-situ burning is another form of TEOR, but instead of steam, some of the oil is burned to heat the surrounding oil. Occasionally, detergents are also used to decrease oil viscosity. Tertiary recovery allows another 5% to 15% of the reservoir's oil to be recovered.
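As a back-of-the-envelope illustration of the recovery factors quoted above, the sketch below applies them to a hypothetical reservoir; the 1-billion-barrel size is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope use of the recovery factors quoted above, applied to a
# hypothetical 1-billion-barrel reservoir (the reservoir size is assumed).
OOIP = 1_000_000_000                     # original oil in place, barrels (assumed)

primary = 0.20                           # ~20% via primary recovery
primary_plus_secondary = (0.25, 0.35)    # 25-35% after secondary recovery
tertiary_extra = (0.05, 0.15)            # a further 5-15% via tertiary methods

print(f"primary only:            ~{OOIP * primary / 1e6:.0f} million bbl")
lo = OOIP * (primary_plus_secondary[0] + tertiary_extra[0]) / 1e6
hi = OOIP * (primary_plus_secondary[1] + tertiary_extra[1]) / 1e6
print(f"with secondary+tertiary: roughly {lo:.0f}-{hi:.0f} million bbl")
```

Even in the best case, well over half of the oil in place stays in the ground, which is why extraction economics and oil prices matter so much for what counts as "recoverable".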

History
The first oil wells were drilled in China in the 4th century or earlier. They had depths of up to 243 meters and were drilled using bits attached to bamboo poles. The oil was burned to evaporate brine and produce salt. By the 10th century, extensive bamboo pipelines connected oil wells with salt springs. Ancient Persian tablets indicate the medicinal and lighting uses of petroleum in the upper levels of their society. In the 8th century, the streets of the newly constructed Baghdad were paved with tar, derived from easily accessible petroleum from natural fields in the region. In the 9th century, oil fields were exploited in Baku, Azerbaijan, to produce naphtha. These fields were described by the geographer Masudi in the 10th century, and by Marco Polo in the 13th century, who described the output of those wells as hundreds of shiploads. (See also: Timeline of Islamic science and technology.)

The modern history of petroleum began in 1846 with the discovery of the process of refining kerosene from coal by Atlantic Canada's Abraham Pineo Gesner. Poland's Ignacy Łukasiewicz discovered a means of refining kerosene from the more readily available "rock oil" ("petr-oleum") in 1852, and the first rock oil mine was built at Bóbrka, near Krosno in southern Poland, the following year. These discoveries rapidly spread around the world, and Meerzoeff built the first Russian refinery in the mature oil fields at Baku in 1861. At that time Baku produced about 90% of the world's oil. (Much later, the German drive toward the Caucasus oil fields around Baku, now the capital of the Azerbaijan Republic, was a major objective of the campaign that culminated in the Battle of Stalingrad.)

(Image: Oil field in California, 1938.)

The first modern oil well was drilled in 1848 by Russian engineer F. N. Semyonov, on the Absheron Peninsula north-east of Baku. The first commercial oil well drilled in North America was in Oil Springs, Ontario, Canada in 1858, dug by James Miller Williams. The American petroleum industry began with Edwin Drake's discovery of oil in 1859, near Titusville, Pennsylvania. The industry grew slowly in the 1800s, driven by the demand for kerosene and oil lamps. It became a major national concern in the early part of the 20th century; the introduction of the internal combustion engine provided a demand that has largely sustained the industry to this day. Early "local" finds like those in Pennsylvania and Ontario were quickly exhausted, leading to "oil booms" in Texas, Oklahoma, and California. By 1910, significant oil fields had been discovered in Canada (specifically, in the province of Alberta), the Dutch East Indies (1885, in Sumatra), Persia (1908, in Masjed Soleiman), Peru, Venezuela, and Mexico, and were being developed at an industrial level. Until the mid-1950s coal was still the world's foremost fuel, but oil quickly took over. Following the 1973 energy crisis and the 1979 energy crisis, there was significant media coverage of oil supply levels. This brought to light the concern that oil is a limited resource that will eventually run out, at least as an economically viable energy source. At the time, the most common and popular predictions were quite dire, and when they did not come true, many dismissed all such discussion. The future of petroleum as a fuel remains somewhat controversial. USA Today (2004) reported that there were 40 years of petroleum left in the ground. Some would argue that because the total amount of petroleum is finite, the dire predictions of the 1970s have merely been postponed. Others argue that technology will continue to allow for the production of cheap hydrocarbons, and that the earth has vast sources of unconventional petroleum reserves in the form of tar sands, bitumen fields and oil shale that will allow petroleum use to continue in the future, with both the Canadian tar sands and United States shale oil deposits representing potential reserves matching existing liquid petroleum deposits worldwide. Today, about 90% of vehicular fuel needs are met by oil. Petroleum also makes up 40% of total energy consumption in the United States, but is responsible for only 2% of electricity generation. Petroleum's worth as a portable, dense energy source powering the vast majority of vehicles and as the base of many industrial chemicals makes it one of the world's most important commodities. Access to it was a major factor in several military conflicts, including World War II and the Persian Gulf War. About 80% of the world's readily accessible reserves are located in the Middle East, with 62.5% coming from the Arab 5: Saudi Arabia (12.5%), the UAE, Iraq, Qatar and Kuwait. The USA has less than 3%.

Alternative means of producing oil


As oil prices continue to escalate, other means of producing oil have been gaining importance. The best known such methods involve extracting oil from sources such as oil shale or tar sands. These resources are known to exist in large quantities; however, extracting the oil at low cost and without too deleterious an impact on the environment remains a challenge. It is also possible to transform natural gas or coal into oil (or, more precisely, into the various hydrocarbons found in oil). The best-known such method is the Fischer-Tropsch process, a concept pioneered in Nazi Germany when imports of petroleum were restricted due to war and Germany developed a method to produce oil from coal. The resulting fuel was known as Ersatz ("substitute" in German) and accounted for nearly half the total oil used by Germany in WWII. However, the process was used only as a last resort, as naturally occurring oil was much cheaper. As crude oil prices increase, the cost of coal-to-oil conversion becomes comparatively attractive. The method involves converting high-ash coal into synthetic oil in a multistage process. Ideally, a ton of coal produces nearly 200 liters (1.25 bbl, 52 US gallons) of crude, with byproducts ranging from tar to rare chemicals. Currently, two companies have commercialised their Fischer-Tropsch technology: Shell in Bintulu, Malaysia, uses natural gas as a feedstock and produces primarily low-sulfur diesel fuels, while Sasol in South Africa uses coal as a feedstock and produces a variety of synthetic petroleum products. The process is used today in South Africa to produce most of the country's diesel fuel from coal, and was used there to meet the country's energy needs during its isolation under apartheid. The process has received renewed attention in the quest to produce low-sulfur diesel fuel in order to minimize the environmental impact of diesel engines. An alternative method is the Karrick process, which converts coal into crude oil and was pioneered in the 1930s in the United States.
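The coal-to-crude yield quoted above can be sanity-checked with ordinary volume conversions; the constants below are standard unit factors, and the 200-litre figure is simply the one quoted in the text.

```python
# Sanity check of the quoted coal-to-crude yield (200 litres per ton of coal)
# expressed in barrels and US gallons, using standard volume conversion factors.
LITRES_PER_BARREL = 158.987   # 1 oil barrel (bbl) in litres
LITRES_PER_US_GAL = 3.78541   # 1 US gallon in litres

yield_litres = 200.0
print(f"{yield_litres} L ~= {yield_litres / LITRES_PER_BARREL:.2f} bbl "
      f"~= {yield_litres / LITRES_PER_US_GAL:.0f} US gallons")
# -> about 1.26 bbl and about 53 US gallons, close to the 1.25 bbl / 52 gal quoted
```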

More recently explored is thermal depolymerization (TDP). In theory, TDP can convert any organic waste into petroleum.

Production, consumption and alternatives


The term alternative propulsion or "alternative methods of propulsion" includes both:

- alternative fuels used in standard or modified internal combustion engines (e.g. hydrogen combustion);
- propulsion systems not based on internal combustion, such as those based on electricity (for example, electric or hybrid vehicles), compressed air, or fuel cells (e.g. hydrogen fuel cells).

Today's cars can be classified into the following main groups:

- petroleum-only cars, which run solely on petroleum-derived fuel;
- hybrid vehicles, which use petroleum together with another energy source, generally electricity;
- petroleum-free cars, which do not use petroleum at all, such as fully electric cars and hydrogen vehicles.

Black Gold
Black gold, in most of the world, refers to crude oil or petroleum. The name is derived from the black color of crude oil combined with its status as a highly valuable resource, serving in the industrial age, in many ways, the same role that gold did in the pre-industrial era. In the Appalachian Mountains of the United States, a major coal-producing region, the term refers to coal. In Taiwan, it means iron, petroleum, and coal. The term was used in the theme song of the TV show The Beverly Hillbillies, along with the term "Texas Tea", another synonym for crude oil.

Environmental effects
The presence of oil has significant social and environmental impacts, from accidents and routine activities such as seismic exploration, drilling, and generation of polluting wastes. Oil extraction is costly and sometimes environmentally damaging, although Dr. John Hunt of the Woods Hole Oceanographic Institution pointed out in a 1981 paper that over 70% of the reserves in the world are associated with visible macroseepages, and many oil fields are found due to natural leaks. Offshore exploration and extraction of oil disturbs the surrounding marine environment. Extraction may involve dredging, which stirs up the seabed, killing the sea plants that marine creatures need to survive. Crude oil and refined fuel spills from tanker ship accidents have damaged fragile ecosystems in Alaska, the Galapagos Islands, Spain, and many other places.

Burning oil releases carbon dioxide into the atmosphere, which contributes to global warming. Per energy unit, oil produces less CO2 than coal, but more than natural gas. However, oil's unique role as a transportation fuel makes reducing its CO2 emissions a particularly thorny problem; amelioration strategies such as carbon sequestering are generally geared for large power plants, not individual vehicles. Renewable energy alternatives do exist, although the degree to which they can replace petroleum and the possible environmental damage they may cause are uncertain and controversial. Sun, wind, geothermal, and other renewable electricity sources cannot directly replace high energy density liquid petroleum for transportation use; instead automobiles and other equipment must be altered to allow using electricity (in batteries) or hydrogen (via fuel cells or internal combustion) which can be produced from renewable sources. Other options include using biomass-origin liquid fuels (ethanol, biodiesel). Any combination of solutions to replace petroleum as a liquid transportation fuel will be a very large undertaking.

Future of oil
The Hubbert peak theory, also known as peak oil, is a theory concerning the long-term rate of production of conventional oil and other fossil fuels. It assumes that oil reserves are not replenishable (i.e. that abiogenic replenishment, if it exists at all, is negligible), and predicts that future world oil production must inevitably reach a peak and then decline as these reserves are exhausted. Controversy surrounds the theory, as predictions for when the global peak will actually take place are highly dependent on the past production and discovery data used in the calculation. Proponents of peak oil also point out that when a given oil well produces oil in volumes similar to the volume of water used to obtain that oil, it tends to produce less oil afterwards, leading to the relatively quick exhaustion and/or commercial unviability of the well in question. The issue can be considered from the point of view of individual regions or of the world as a whole. M. King Hubbert originally noticed that discoveries in the United States had peaked in the early 1930s, and concluded that production would peak in the early 1970s. His prediction turned out to be correct; after the US peaked in 1971, and thus lost its excess production capacity, OPEC was finally able to manipulate oil prices, which led to the oil crisis of 1973. Since then, most other countries have also peaked: Scotland's North Sea, for example, in the late 1990s. China has confirmed that two of its largest producing regions are in decline, and Mexico's national oil company, Pemex, has announced that Cantarell Field, one of the world's largest offshore fields, is expected to peak in 2006 and then decline 14% per annum. For various reasons (perhaps most importantly the lack of transparency in the accounting of global oil reserves), it is difficult to predict the oil peak in any given region. Based on available production data, proponents have previously (and incorrectly) predicted the peak for the world to be in 1989, 1995, or 1995-2000. However, these predictions date from before the recession of the early 1980s and the consequent reduction in global consumption, the effect of which was to delay the date of any peak by several years. A more recent prediction by Goldman Sachs picks 2007 for oil and some time later for natural gas. Just as the 1971 U.S. peak in oil production was only clearly recognized after the fact, a peak in world production will be difficult to discern until production clearly drops off. One signal is that 2005 saw a dramatic fall in announced new oil projects coming to production from 2008 onwards. Since it takes on average four to six years for a new project to start producing oil, these new projects would, in order to avoid the peak, have to not only make up for the depletion of current fields but also increase total production annually to meet increasing demand. 2005 also saw substantial increases in oil prices due to temporary circumstances, which then failed to be controlled by increasing production. The inability to increase production in the short term, indicating a general lack of spare capacity, and the corresponding uncontrolled price fluctuations, can be interpreted as a sign that peak oil has occurred or is presently in the process of occurring.
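For readers who want the quantitative shape behind Hubbert's argument, the sketch below evaluates the logistic-derivative ("Hubbert") curve commonly used to illustrate it. The ultimately recoverable resource, peak year and steepness parameter are illustrative assumptions, not a forecast.

```python
import math

# Minimal sketch of a Hubbert-style production curve (logistic-derivative form).
# URR, peak year and steepness below are illustrative assumptions, not a forecast.
URR = 2000.0        # ultimately recoverable resource, billions of barrels (assumed)
t_peak = 2007       # assumed peak year
s = 0.05            # steepness parameter, 1/year (assumed)

def hubbert_rate(t: float) -> float:
    """Annual production (Gb/yr) at year t for the assumed curve."""
    return URR * s / (2.0 * (1.0 + math.cosh(s * (t - t_peak))))

for year in (1980, 2000, 2007, 2020, 2050):
    print(year, round(hubbert_rate(year), 2))
```

The curve is symmetric about the assumed peak, which is why, as the text notes, the peak is only clearly recognizable in hindsight, once production has visibly fallen off.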

Classification
The oil industry classifies "crude" by the location of its origin (e.g., "West Texas Intermediate, WTI" or "Brent") and often by its relative weight (API gravity) or viscosity ("light", "intermediate" or "heavy"); refiners may also refer to it as "sweet", which means it contains relatively little sulfur, or as "sour", which means it contains substantial amounts of sulfur and requires more refining in order to meet current product specifications. Each crude oil has unique molecular characteristics which are understood by the use of crude oil assay analysis in petroleum laboratories. The world reference barrels are:Brent Crude, comprising 15 oils from fields in the Brent and Ninian systems in the East Shetland Basin of the North Sea. The oil is landed at Sullom Voe terminal in the Shetlands. Oil production from Europe, Africa and Middle Eastern oil flowing West tends to be priced off the price of this oil, which forms a benchmark. West Texas Intermediate (WTI) for North American oil. Dubai, used as benchmark for Middle East oil flowing to the Asia-Pacific region. Tapis (from Malaysia, used as a reference for light Far East oil) Minas (from Indonesia, used as a reference for heavy Far East oil) The OPEC basket used to be the average price of the following blends: Arab Light Saudi Arabia Bonny Light Nigeria Fateh Dubai Isthmus Mexico (non-OPEC) Minas Indonesia Saharan Blend Algeria Tia Juana Light Veneruela OPEC attempts to keep the price of the Opec Basket between upper and lower limits, by increasing and decreasing production. This makes the measure important for market analysts. The OPEC Basket, including a mix of light and heavy crudes, is heavier than both Brent and WTI. In June 15, 2005 the OPEC basket was changed to reflect the characteristics of the oil produced by OPEC members. The new OPEC Reference Basket (ORB) is made up of the

following: Saharan Blend (Algeria), Minas (Indonesia), Iran Heavy (Islamic Republic of Iran), Basra Light (Iraq), Kuwait Export (Kuwait), Es Sider (Libya), Bonny Light (Nigeria), Qatar Marine (Qatar), Arab Light (Saudi Arabia), Murban (UAE) and BCF 17 (Venezuela).
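The "light"/"heavy" labels above come from API gravity, computed from a crude's specific gravity. The classification cutoffs and sample specific gravities in the sketch below are commonly cited, approximate values used purely for illustration, not an official standard.

```python
# Sketch of the API gravity calculation behind the "light"/"heavy" labels.
# The cutoffs and sample specific gravities are approximate, illustrative values.
def api_gravity(specific_gravity_60f: float) -> float:
    """Degrees API from specific gravity relative to water at 60 degF."""
    return 141.5 / specific_gravity_60f - 131.5

def classify(api: float) -> str:
    if api > 31.1:
        return "light"
    if api >= 22.3:
        return "medium"
    if api >= 10.0:
        return "heavy"
    return "extra heavy"

for name, sg in [("WTI (approx.)", 0.827), ("Brent (approx.)", 0.835),
                 ("heavy crude (example)", 0.95)]:
    api = api_gravity(sg)
    print(f"{name}: {api:.1f} degrees API -> {classify(api)}")
```

Higher API gravity means lighter oil; water itself sits at 10 degrees API, so crudes heavier than water have API gravities below 10.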

Pricing
References to oil prices are usually references to the spot price of either WTI/Light Crude as traded on the New York Mercantile Exchange (NYMEX) for delivery in Cushing, Oklahoma, or of Brent as traded on the Intercontinental Exchange (ICE, into which the International Petroleum Exchange has been incorporated) for delivery at Sullom Voe. The price of a barrel of oil is highly dependent on both its grade (which is determined by factors such as its specific gravity or API and its sulphur content) and location. The vast majority of oil is not traded on an exchange but on an over-the-counter basis, typically with reference to a marker crude oil grade quoted via pricing agencies such as Argus Media Ltd and Platts. For example, in Europe a particular grade of oil, say Fulmar, might be sold at a price of "Brent plus US$0.25/barrel" or as an intra-company transaction. IPE claims that 65% of traded oil is priced off its Brent benchmarks. Other important benchmarks include Dubai, Tapis, and the OPEC basket. The Energy Information Administration (EIA) uses the Imported Refiner Acquisition Cost, the weighted average cost of all oil imported into the US, as its "world oil price". It is often claimed that OPEC sets the oil price and that the true cost of a barrel of oil is around $2, equivalent to the cost of extracting a barrel in the Middle East. These estimates ignore the cost of finding and developing oil reserves. Furthermore, the important cost as far as price is concerned is not the price of the cheapest barrel but the cost of producing the marginal barrel. By limiting production, OPEC has caused more expensive areas of production, such as the North Sea, to be developed before the Middle East has been exhausted. OPEC's power is also often overstated: investing in spare capacity is expensive, and the low oil price environment of the late 1990s led to cutbacks in investment. This has meant that during the oil price rally seen between 2003 and 2005, OPEC's spare capacity was not sufficient to stabilise prices. Oil demand is highly dependent on global macroeconomic conditions, so these are also an important determinant of price. Some economists claim that high oil prices have a large negative impact on global growth, but the relationship between the oil price and global growth is not particularly stable, although a high oil price is often thought of as a late-cycle phenomenon. A recent low point was reached in January 1999, after increased oil production from Iraq coincided with the Asian financial crisis, which reduced demand. Prices then rapidly increased, more than doubling by September 2000, fell until the end of 2001, and then steadily increased, reaching US$40 to US$50 per barrel by September 2004. In October 2004, light crude futures contracts on the NYMEX for November delivery exceeded US$53 per barrel, and contracts for December delivery exceeded US$55 per barrel. Crude oil prices surged to a record high above $60 a barrel in June 2005, sustaining a rally built on strong demand for gasoline and diesel and on concerns about refiners' ability to keep up. This trend continued into early August 2005, as NYMEX crude oil futures contracts surged past the $65 mark while consumers kept up the demand for gasoline despite its high price. Individuals can now trade crude oil through margin accounts at online trading sites, or through their banks via structured products indexed on the commodities markets.

Sub-bituminous coal is a coal whose properties range from those of lignite to those of bituminous coal; it is used primarily as fuel for steam-electric power generation. It may be dull, dark brown to black, soft and crumbly at the lower end of the range, to bright, jet-black, hard, and relatively strong at the upper end. Sub-bituminous coal contains 20 to 30 percent inherent moisture by weight. Its heat content ranges from 17 to 24 million Btu per short ton (20 to 28 MJ/kg) on a moist, mineral-matter-free basis. The heat content of sub-bituminous coal consumed in the United States averages 17 to 18 million Btu per short ton (20 to 21 MJ/kg) on the as-received basis (i.e., containing both inherent moisture and mineral matter). A major source of sub-bituminous coal in the United States is the Powder River Basin in Wyoming. Its relatively low density and high water content render some types of sub-bituminous coal susceptible to spontaneous combustion if not packed densely during storage to exclude free air flow.
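The Btu-per-short-ton to MJ/kg conversion quoted above can be checked directly; the sketch below uses standard unit factors only.

```python
# Quick check of the heat-content conversion quoted above
# (million Btu per short ton -> MJ/kg), using standard unit factors.
BTU_TO_J = 1055.06          # joules per Btu
SHORT_TON_KG = 907.185      # kilograms per short ton

def mmbtu_per_ton_to_mj_per_kg(mmbtu_per_short_ton: float) -> float:
    joules_per_ton = mmbtu_per_short_ton * 1e6 * BTU_TO_J
    return joules_per_ton / SHORT_TON_KG / 1e6   # MJ per kg

for x in (17, 24):
    print(f"{x} million Btu/short ton ~= {mmbtu_per_ton_to_mj_per_kg(x):.0f} MJ/kg")
# -> roughly 20 and 28 MJ/kg, matching the range given in the text
```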
Many of us assume that the nation's first serious push to develop renewable fuels was spawned while angry Americans waited in gas lines during the "energy crisis" of the 1970s. Held hostage by the OPEC oil embargo, the country suddenly seemed receptive to warnings from scientists, environmentalists, and even a few politicians to end its over-reliance on finite coal and oil reserves or face severe economic distress and political upheaval. But efforts to design and construct devices for supplying renewable energy actually began some 100 years before that turbulent time--ironically, at the very height of the Industrial Revolution, which was largely founded on the promise of seemingly inexhaustible supplies of fossil fuels. Contrary to the prevailing opinion of the day, a number of engineers questioned the practice of an industrial economy based on
nonrenewable energy and worried about what the world's nations would do after exhausting the fuel supply. More important, many of these visionaries did not just provide futuristic rhetoric but actively explored almost all the renewable energy options familiar today. In the end, most decided to focus on solar power, reasoning that the potential rewards outweighed the technical barriers. In less than 50 years, these pioneers developed an impressive array of innovative techniques for capturing solar radiation and using it to produce the steam that powered the machines of that era. In fact, just before World War I, they had outlined all of the solar thermal conversion methods now being considered. Unfortunately, despite their technical successes and innovative designs, their work was largely forgotten for the next 50 years in the rush to develop fossil fuels for an energy-hungry world. Now, a century later, history is repeating itself. After following the same path as the early inventors--in some cases reinventing the same techniques--contemporary solar engineers have arrived at the same conclusion: solar power is not only possible but eminently practical, not to mention more environmentally friendly. Alas, once again, just as the technology has proven itself from a practical standpoint, public support for further development and implementation is eroding, and solar power could yet again be eclipsed by conventional energy technologies. The First Solar Motor The earliest known record of the direct conversion of solar radiation into mechanical power belongs to Auguste Mouchout, a mathematics instructor at the Lyce de Tours. Mouchout began his solar work in 1860 after expressing grave concerns about his country's dependence on coal. "It would be prudent and wise not to fall asleep regarding this quasi-security," he wrote. "Eventually industry will no longer find in Europe the resources to satisfy its prodigious expansion. Coal will undoubtedly be used up. What will industry do then?" By the following year he was granted the first patent for a motor running on solar power and continued to improve his design until about 1880. During this period the inventor laid the foundation for our modern understanding of converting solar radiation into mechanical steam power. Mouchout's initial experiments involved a glass-enclosed iron cauldron: incoming solar radiation passed through the glass cover, and the trapped rays transmitted heat to the water. While this simple arrangement boiled water, it was of little practical value because the quantities and pressures of steam it produced were minimal. However, Mouchout soon discovered that by adding a reflector to concentrate additional radiation onto the cauldron, he could generate more steam. In late 1865, he succeeded in using his apparatus to operate a small, conventional steam engine. By the following summer, Mouchout displayed his solar motor to Emperor Napoleon III in Paris. The monarch, favorably impressed, offered financial assistance for developing an industrial solar motor for France. With the newly acquired funds, Mouchout enlarged his invention's capacity, refined the reflector, redesigning it as a truncated cone, like a dish with slanted sides, to more accurately focus the sun's rays on the boiler. Mouchout also constructed a tracking mechanism that enabled the entire machine to follow the sun's altitude and azimuth, providing uninterrupted solar reception. After six years of work, Mouchout exhibited his new machine in the library
courtyard of his Tours home in 1872, amazing spectators. One reporter described the reflector as an inverted "mammoth lamp shade...coated on the inside with very thin silver leaf" and the boiler sitting in the middle as an "enormous thimble" made of blackened copper and "covered with a glass bell." Anxious to put his invention to work, he connected the apparatus to a steam engine that powered a water pump. On what was deemed "an exceptionally hot day," the solar motor produced one-half horsepower. Mouchout reported the results and findings to the French Academy of Science. The government, eager to exploit the new invention to its fullest potential, decided that the most suitable venue for the new machine would be the tropical climes of the French protectorate of Algeria, a region blessed with almost constant sunshine and entirely dependent on coal, a prohibitively expensive commodity in the African region. Mouchout was quickly deployed to Algeria with ample funding to construct a large solar steam engine. He first decided to enlarge his invention's capacity yet again to 100 liters (70 for water and 30 for steam) and employ a multi-tubed boiler instead of the single cauldron. The boiler tubes had a better surface-area-to-water ratio, yielding more pressure and improved engine performance. In 1878, Mouchout exhibited the redesigned invention at the Paris Exposition. Perhaps to impress the audience or, more likely, his government backers, he coupled the steam engine to a refrigeration device. The steam from the solar motor, after being routed through a condenser, rapidly cooled the inside of a separate insulated compartment. He explained the result: "In spite of the seeming paradox of the statement, [it was] possible to utilize the rays of the sun to make ice." Mouchout was awarded a medal for his accomplishments. By 1881 the French Ministry of Public Works, intrigued by Mouchout's machine, appointed two commissioners to assess its cost efficiency. But after some 900 observations at Montpelier, a city in southern France, and Constantine, Algeria, the government deemed the device a technical success but a practical failure. One reason was that France had recently improved its system for transporting coal and developed a better relationship with England, on which it was dependent for that commodity. The price of coal had thus dropped, rendering the need for alternatives less attractive. Unable to procure further financial assistance, Mouchout returned to his academic pursuits. The Tower of Power During the height of Mouchout's experimentation, William Adams, the deputy registrar for the English Crown in Bombay, India, wrote an award-winning book entitled Solar Heat: A Substitute for Fuel in Tropical Countries. Adams noted that he was intrigued with Mouchout's solar steam engine after reading an account of the Tours demonstration, but that the invention was impractical, since "it would be impossible to construct [a dish-shaped reflector] of much greater dimensions" to generate more than Mouchout's one-half horsepower. The problem, he felt, was that the polished metal reflector would tarnish too easily, and would be too costly to build and too unwieldy to efficiently track the sun. Fortunately for the infant solar discipline, the English registrar did not spend all his time finding faults in the French inventor's efforts, but offered some creative
solutions. For example, Adams was convinced that a reflector of flat silvered mirrors arranged in a semicircle would be cheaper to construct and easier to maintain. His plan was to build a large rack of many small mirrors and adjust each one to reflect sunlight in a specific direction. To track the sun's movement, the entire rack could be rolled around a semicircular track, projecting the concentrated radiation onto a stationary boiler. The rack could be attended by a laborer and would have to be moved only "three or four times during the day," Adams noted, or more frequently to improve performance. Confident of his innovative arrangement, Adams began construction in late 1878. By gradually adding 17-by-10-inch flat mirrors and measuring the rising temperatures, he calculated that to generate the 1,200 F necessary to produce steam pressures high enough to operate conventional engines, the reflector would require 72 mirrors. To demonstrate the power of the concentrated radiation, Adams placed a piece of wood in the focus of the mirrored panes where, he noted, "it ignited immediately." He then arranged the collectors around a boiler, retaining Mouchout's enclosed cauldron configuration, and connected it to a 2.5-horsepower steam engine that operated during daylight hours "for a fortnight in the compound of [his] bungalow." Eager to display his invention, Adams notified newspapers and invited his important friends--including the Army's commander in chief, a colonel from the Royal Engineers, and the secretary of public works, various justices, and principal mill owners--to a demonstration. Adams wrote that all were impressed; even the local engineers who, while doubtful that solar power could compete directly with coal and wood, thought it could be a practical supplemental energy source. Adams's experimentation ended soon after the demonstration, though, perhaps because he had achieved his goal of proving the feasibility of his basic design, but more likely because, as some say, he lacked sufficient entrepreneurial drive. Even so, his legacy of producing a powerful and versatile way to harness and convert solar heat survives. Engineers today know this design as the Power Tower concept, which is one of the best configurations for large scale, centralized solar plants. In fact, most of the modern tower-type solar plants follow Adams's basic configuration: flat or slightly curved mirrors that remain stationary or travel on a semicircular track and either reflect light upward to a boiler in a receiver tower or downward to a boiler at ground level, thereby generating steam to drive an accompanying heat engine. Collection without Reflection Even with Mouchout's abandonment and the apparent disenchantment of England's sole participant, Europe continued to advance the practical application of solar heat, as the torch returned to France and engineer Charles Tellier. Considered by many the father of refrigeration, Tellier actually began his work in refrigeration as a result of his solar experimentation, which led to the design of the first nonconcentrating, or non-reflecting, solar motor. In 1885, Tellier installed a solar collector on his roof similar to the flat-plate collectors placed atop many homes today for heating domestic water. The collector was composed of ten plates, each consisting of two iron sheets riveted together to
form a watertight seal, and connected by tubes to form a single unit. Instead of filling the plates with water to produce steam, Tellier chose ammonia as a working fluid because of its significantly lower boiling point. After solar exposure, the containers emitted enough pressurized ammonia gas to power a water pump he had placed in his well at the rate of some 300 gallons per hour during daylight. Tellier considered his solar water pump practical for anyone with a south-facing roof. He also thought that simply adding plates, thereby increasing the size of the system, would make industrial applications possible. By 1889 Tellier had increased the efficiency of the collectors by enclosing the top with glass and insulating the bottom. He published the results in The Elevation of Water with the Solar Atmosphere, which included details on his intentions to use the sun to manufacture ice. Like his countryman Mouchout, Tellier envisioned that the large expanses of the African plains could become industrially and agriculturally productive through the implementation of solar power. In The Peaceful Conquest of West Africa, Tellier argued that a consistent and readily available supply of energy would be required to power the machinery of industry before the French holdings in Africa could be properly developed. He also pointed out that even though the price of coal had fallen since Mouchout's experiments, fuel continued to be a significant expense in French operations in Africa. He therefore concluded that the construction costs of his low-temperature, non-concentrating solar motor were low enough to justify its implementation. He also noted that his machine was far less costly than Mouchout's device, with its dish-shaped reflector and complicated tracking mechanism. Yet despite this potential, Tellier evidently decided to pursue his refrigeration interests instead, and do so without the aid of solar heat. Most likely the profits from conventionally operated refrigerators proved irresistible. Also, much of the demand for the new cooling technology now stemmed from the desire to transport beef to Europe from North and South America. The rolling motion of the ships combined with space limitations precluded the use of solar power altogether. And as Tellier redirected his focus, France saw the last major development of solar mechanical power on her soil until well into the twentieth century. Most experimentation in the fledgling discipline crossed the Atlantic to that new bastion of mechanical ingenuity, the United States. The Parabolic Trough Though Swedish by birth, John Ericsson was one of the most influential and controversial U.S. engineers of the nineteenth century. While he spent his most productive years designing machines of war--his most celebrated accomplishment was the Civil War battleship the Monitor--he dedicated the last 20 years of his life largely to more peaceful pursuits such as solar power. This work was inspired by a fear shared by virtually all of his fellow solar inventors that coal supplies would someday end. In 1868 he wrote, "A couple of thousand years dropped in the ocean of time will completely exhaust the coal fields of Europe, unless, in the meantime, the heat of the sun be employed." Thus by 1870 Ericsson had developed what he claimed to be the first solar-powered steam engine, dismissing Mouchout's machine as "a mere toy." In truth, Ericsson's first designs greatly resembled Mouchout's devices, employing a conical, dish-shaped

reflector that concentrated solar radiation onto a boiler and a tracking mechanism that kept the reflector directed toward the sun. Though unjustified in claiming his design original, Ericsson soon did invent a novel method for collecting solar rays--the parabolic trough. Unlike a true parabola, which focuses solar radiation onto a single, relatively small area, or focal point, like a satellite television dish, a parabolic trough is more akin to an oil drum cut in half lengthwise that focuses solar rays in a line across the open side of the reflector. This type of reflector offered many advantages over its circular (dish-shaped) counterparts: it was comparatively simple, less expensive to construct, and, unlike a circular reflector, had only to track the sun in a single direction (up and down, if lying horizontal, or east to west if standing on end), thus eliminating the need for complex tracking machinery. The downside was that the device's temperatures and efficiencies were not as high as with a dish-shaped reflector, since the configuration spread radiation over a wider area--a line rather than a point. Still, when Ericsson constructed a single linear boiler (essentially a pipe), placed it in the focus of the trough, positioned the new arrangement toward the sun, and connected it to a conventional steam engine, he claimed the machine ran successfully, though he declined to provide power ratings. The new collection system became popular with later experimenters and eventually became a standard for modern plants. In fact, the largest solar systems in the last decade have opted for Ericsson's parabolic trough reflector because it strikes a good engineering compromise between efficiency and ease of operation. For the next decade, Ericsson continued to refine his invention, trying lighter materials for the reflector and simplifying its construction. By 1888, he was so confident of his design's practical performance that he planned to mass-produce and supply the apparatus to the "owners of the sunburnt lands on the Pacific coast" for agricultural irrigation. Unfortunately for the struggling discipline, Ericsson died the following year. And because he was a suspicious and, some said, paranoid man who kept his designs to himself until he filed patent applications, the detailed plans for his improved sun motor died with him. Nevertheless, the search for a practical solar motor was not abandoned. In fact, the experimentation and development of large-scale solar technology was just beginning.

The First Commercial Venture

Boston resident Aubrey Eneas began his solar motor experimentation in 1892, formed the first solar power company (The Solar Motor Co.) in 1900, and continued his work until 1905. One of his first efforts resulted in a reflector much like Ericsson's early parabolic trough. But Eneas found that it could not attain sufficiently high temperatures, and, unable to unlock his predecessor's secrets, decided to scrap the concept altogether and return to Mouchout's truncated-cone reflector. Unfortunately, while Mouchout's approach resulted in higher temperatures, Eneas was still dissatisfied with the machine's performance. His solution was to make the bottom of the reflector's truncated cone-shaped dish larger by designing its sides to be more upright to focus radiation onto a boiler that was 50 percent larger.

Finally satisfied with the results, he decided to advertise his design by exhibiting it in sunny Pasadena, Calif., at Edwin Cawston's ostrich farm, a popular tourist attraction. The monstrous machine did not fail to attract attention. Its reflector, which spanned 33 feet in diameter, contained 1,788 individual mirrors. And its boiler, which was about 13 feet in length and a foot wide, held 100 gallons of water. After exposure to the sun, Eneas's device boiled the water and transferred steam through a flexible pipe to an engine that pumped 1,400 gallons of water per minute from a well onto the arid California landscape. Not everyone grasped the concept. In fact, one man thought the solar machine had something to do with the incubation of ostrich eggs. But Eneas's marketing savvy eventually paid off. Despite the occasional misconceptions, thousands who visited the farm left convinced that the sun machine would soon be a fixture in the sunny Southwest. Moreover, many regional newspapers and popular-science journals sent reporters to the farm to cover the spectacle. To Frank Millard, a reporter for the brand new magazine World's Work, the potential of solar motors placed in quantity across the land inspired futuristic visions of a region "where oranges may be growing, lemons yellowing, and grapes purpling, under the glare of the sun which, while it ripens the fruits it will also water and nourish them." He also predicted that the potential for this novel machine was not limited to irrigation: "If the sun motor will pump water, it will also grind grain and saw lumber and run electric cars." The future, like the machine itself, looked bright and shiny. In 1903 Eneas, ready to market his solar motor, moved his Boston-based company to Los Angeles, closer to potential customers. By early the following year he had sold his first complete system for $2,160 to Dr. A. J. Chandler of Mesa, Ariz. Unfortunately, after less than a week, the rigging supporting the heavy boiler weakened during a windstorm and collapsed, sending it tumbling into the reflector and damaging the machine beyond repair. But Eneas, accustomed to setbacks, decided to push onward and constructed another solar pump near Tempe, Ariz. Seven long months later, in the fall of 1904, John May, a rancher in Wilcox, Ariz., bought another machine for $2,500. Unfortunately, shortly afterward, it was destroyed by a hailstorm. This second weather-related incident all but proved that the massive parabolic reflector was too susceptible to the turbulent climatic conditions of the desert Southwest. And unable to survive on such measly sales, the company soon folded. Though the machine did not become a fixture as Eneas had hoped, the inventor contributed a great deal of scientific and technical data about solar heat conversion and initiated more than his share of public exposure. Despite his business failure, the lure of limitless fuel was strong, and while Eneas and the Solar Motor Company were suspending their operations, another solar pioneer was just beginning his.

Moonlight Operation

Henry E. Willsie began his solar motor construction a year before Eneas's company folded. In his opinion, the lessons of Mouchout, Adams, Ericsson, and Eneas proved the cost inefficiency of high-temperature, concentrating machines. He was convinced that a nonreflective, lower-temperature collection system similar to Tellier's invention was the best method for directly utilizing solar heat.
The inventor also felt that a solar motor would never be practical unless it could operate around the clock.

Thus thermal storage, a practice that lent itself to low-temperature operation, was the focus of his experimentation. To store the sun's energy, Willsie built large flat-plate collectors that heated hundreds of gallons of water, which he kept warm all night in a huge insulated basin. He then submerged a series of tubes, or vaporizing pipes, inside the basin to serve as boilers. When the acting medium--Willsie preferred sulfur dioxide to Tellier's ammonia--passed through the pipes, it transformed into a high-pressure vapor, which passed to the engine, operated it, and exhausted into a condensing tube, where it cooled, returned to a liquid state, and was reused. In 1904, confident that his design would produce continuous power, he built two plants, a 6-horsepower facility in St. Louis, Mo., and a 15-horsepower operation in Needles, Calif. And after several power trials, Willsie decided to test the storage capacity of the larger system. After darkness had fallen, he opened a valve that "allowed the solar-heated water to flow over the exchanger pipes and thus start up the engine." Willsie had created the first solar device that could operate at night using the heat gathered during the day. He also announced that the 15-horsepower machine was the most powerful arrangement constructed up to that time. Besides offering a way to provide continuous solar power production, Willsie also furnished detailed cost comparisons to justify his efforts: the solar plant offered a two-year payback period, he claimed, an exceptional value even when compared with today's standards for alternative energy technology. Originally, like Ericsson and Eneas before him, Willsie planned to market his device for desert irrigation. But in his later patents Willsie wrote that the invention was "designed for furnishing power for electric light and power, refrigerating and ice making, for milling and pumping at mines, and for other purposes where large amounts of power are required." Willsie determined that all that was left to do was to offer his futuristic invention for sale. Unfortunately, no buyers emerged. Despite the favorable long-term cost analysis, potential customers were suspicious of the machine's durability, deterred by the high ratio of machine size to power output, and fearful of the initial investment cost of Willsie's ingenious solar power plant. His company, like others before it, disintegrated.

A Certain Technical Maturity

Despite solar power's dismal commercial failures, some proponents continued to believe that if they could only find the right combination of solar technologies, the vision of a free and unlimited power source would come true. Frank Shuman was one who shared that dream. But unlike most dreamers, Shuman did not have his head in the clouds. In fact, his hardheaded approach to business and his persistent search for practical solar power led him and his colleagues to construct the largest and most cost-effective machine prior to the space age. Shuman's first effort in 1906 was similar to Willsie's flat-plate collector design except that it employed ether as a working fluid instead of sulfur dioxide. The machine performed poorly, however, because even at respectable pressures, the steam--or more accurately, the vapor--exerted comparatively little force to drive a motor because of its low specific gravity.

Shuman knew he needed more heat to produce steam, but felt that using complicated reflectors and tracking devices would be too costly and prone to mechanical failure. He decided that rather than trying to generate more heat, the answer was to better conserve the heat already being absorbed. In 1910, to improve the collector's insulation properties, Shuman enclosed the absorption plates not with a single sheet of glass but with dual panes separated by a one-inch air space. He also replaced the boiler pipes with a thin, flat metal container similar to Tellier's original greenhouse design. The apparatus could now consistently boil water rather than ether. Unfortunately, however, the pressure was still insufficient to drive industrial-size steam engines, which were designed to operate under pressures produced by hotter-burning coal or wood. After determining that the cost of building a larger absorber would be prohibitive, Shuman reluctantly conceded that the additional heat would have to be provided through some form of concentration. He thus devised a low-cost reflector by stringing together two rows of ordinary mirrors to double the amount of radiation intercepted. And in 1911, after forming the Sun Power Co., he constructed the largest solar conversion system ever built. In fact, the new plant, located near his home in Tacony, Penn., intercepted more than 10,000 square feet of solar radiation. The new arrangement increased the amount of steam produced, but still did not provide the pressure he expected. Not easily defeated, Shuman figured that if he couldn't raise the pressure of the steam to run a conventional steam engine, he would have to redesign the engine to operate at lower pressures. So he teamed up with E.P. Haines, an engineer who suggested that more precise milling, closer tolerances in the moving components, and lighter-weight materials would do the trick. Haines was right. When the reworked engine was connected to the solar collectors, it developed 33 horsepower and drove a water pump that gushed 3,000 gallons per minute onto the Tacony soil. Shuman calculated that the Tacony plant cost $200 per horsepower compared with the $80 of a conventionally operated coal system--a respectable figure, he pointed out, considering that the additional investment would be recouped in a few years because the fuel was free. Moreover, the fact that this figure was not initially competitive with coal or oil-fired engines in the industrial Northeast did not concern him because, like the French entrepreneurs before him, he was planning to ship the machine to the vast sunburnt regions in North Africa. To buy property and move the machine there, new investors were solicited from England and the Sun Power Co. Ltd. was created. But with the additional financial support came stipulations. Shuman was required to let British physicist C. V. Boys review the workings of the machine and suggest possible improvements. In fact, the physicist recommended a radical change. Instead of flat mirrors reflecting the sun onto a flat-plate configuration, Boys thought that a parabolic trough focusing on a glass-encased tube would perform much better. Shuman's technical consultant A.S.E. Ackermann agreed, but added that to be effective, the trough would need to track the sun continuously. Shuman felt that his conception of a simple system was rapidly disintegrating.
Fortunately, when the machine was completed just outside of Cairo, Egypt, in 1912, Shuman's fears that the increased complexity would render the device impractical

proved unfounded. The Cairo plant outperformed the Tacony model by a large margin--the machine produced 33 percent more steam and generated more than 55 horsepower--which more than offset the higher costs. Sun Power Co.'s solar pumping station offered an excellent value of $150 per horsepower, significantly reducing the payback period for solar-driven irrigation in the region. It looked as if solar mechanical power had finally developed the technical sophistication it needed to compete with coal and oil. Unfortunately, the beginning was also the end. Two months after the final Cairo trials, Archduke Franz Ferdinand was assassinated in the Balkans, igniting the Great War. The fighting quickly spread to Europe's colonial holdings, and the upper regions of Africa were soon engulfed. Shuman's solar irrigation plant was destroyed, the engineers associated with the project returned to their respective countries to perform war-related tasks, and Frank Shuman died before the armistice was signed. Whether or not Shuman's device would have initiated the commercial success that solar power desperately needed, we will never know. However, the Sun Power Co. can boast a certain technical maturity by effectively synthesizing the ideas of its predecessors from the previous 50 years. The company used an absorber (though in linear form) of Tellier and Willsie, a reflector similar to Ericsson's, simple tracking mechanisms first used by Mouchout and later employed by Eneas, and combined them to operate an engine specially designed to run with solar-generated steam. In effect, Shuman and his colleagues set the standard for many of the most popular modern solar systems 50 to 60 years before the fact.

The Most Rational Source

The aforementioned solar pioneers were only the most notable inventors involved in the development of solar thermal power from 1860 to 1914. Many others contributed to the more than 50 patents and the scores of books and articles on the subject. With all this sophistication, why couldn't solar mechanical technology blossom into a viable industry? Why did the discipline take a 50-year dive before again gaining a measure of popular interest and technical attention? First, despite the rapid advances in solar mechanical technology, the industry's future was rendered problematic by a revolution in the use and transport of fossil fuels. Oil and coal companies had established a massive infrastructure, stable markets, and ample supplies. Also, besides trying to perfect the technology, solar pioneers had the difficult task of convincing skeptics to see solar energy as something more than a curiosity. Visionary rhetoric without readily tangible results was not well received by a population accustomed to immediate gratification. Improving and adapting existing power technology, deemed less risky and more controlled, seemed to make far more sense. Finally, the ability to implement radically new hardware requires either massive commitment or the failure of existing technology to get the job done. Solar mechanical power production in the late nineteenth and early twentieth centuries did not meet either criterion. Despite warnings from noted scientists and engineers, alternatives to what seemed like an inexhaustible fuel supply did not fit into the U.S. agenda. Unfortunately, in many ways, these antiquated sentiments remain with us today. During the 1970s, while the OPEC nations exercised their economic power and as the environmental and "no-nuke" movements gained momentum, Americans

plotted an industrial coup whose slogans were energy efficiency and renewable resources. Consequently, mechanical solar power--along with its space-age, electricity-producing sibling photovoltaics, as well as other renewable sources such as wind power--underwent a revival. And during the next two decades, solar engineers tried myriad techniques to satisfy society's need for power. They discovered that dish-shaped reflectors akin to Mouchout's and Eneas's designs were the most efficient but also the most expensive and difficult to maintain. Low-temperature, nonconcentrating systems like Willsie's and Tellier's, though simple and less sensitive to climatic conditions, were among the least powerful and therefore suited only to small, specific tasks. Stationary reflectors like those used in Adams's device, now called Power Tower systems, offered a better solution but were still pricey and damage prone. By the mid-1980s, contemporary solar engineers, like their industrial-revolution counterparts Ericsson and Shuman, determined that for sunny areas, tracking parabolic troughs were the best compromise because they exhibited superior cost-to-power ratios in most locations. Such efforts led engineers at the Los Angeles-based Luz Co. to construct an 80-megawatt electric power plant using parabolic trough collectors to drive steam-powered turbines. The company had already used similar designs to build nine other solar electric generation facilities, providing a total of 275 megawatts of power. In the process, Luz engineers steadily lowered the initial costs by optimizing construction techniques and taking advantage of the economies of buying material in bulk to build ever-larger plants, until the price dropped from 24 to 12 cents per kilowatt-hour. The next, even larger plant--a 300-megawatt facility scheduled for completion last year--promised to provide electricity at 6 to 7 cents per kilowatt-hour, near the price of electricity produced by coal, oil, or nuclear technology. Once again, as with Shuman and his team, the gap was closing. But once again these facilities would not be built. Luz, producer of more than 95 percent of the world's solar-based electricity, filed for bankruptcy in 1991. According to Newton Becker, Luz's chairman of the board, and other investors, the demise of the already meager tax credits, declining fossil fuel prices, and the bleak prospects for future assistance from both federal and state governments drove investors to withdraw from the project. As Becker concluded, "The failure of the world's largest solar electric company was not due to technological or business judgment failures but rather to failures of government regulatory bodies to recognize the economic and environmental benefits of solar thermal generating plants." Other solar projects met with similar financial failure. For example, two plants that employed the Power Tower concept, Edison's 10-megawatt plant in Daggett, Calif., and a 30-megawatt facility built in Jordan, performed well despite operating on a much smaller scale and without Luz's advantages of heavy initial capital investment and a lengthy trial-and-error process to improve efficiency. Still, they were assessed as too costly to compete in the intense conventional fuel market. Although some of our brightest engineers have produced some exemplary solar power designs during the past 25 years, their work reflects a disjointed solar energy policy.
Had the findings of the early solar pioneers and the evolution of their machinery been more closely scrutinized, perhaps by Department of Energy officials or some other oversight committee, contemporary efforts might have focused on building a new infrastructure when social and political attitudes were more receptive

to solar technology. Rather than rediscovering the technical merits of the various systems, we might have been better served by reviewing history, selecting a relatively small number of promising systems, and combining them with contemporary materials and construction techniques. Reinventing the wheel when only the direction of the cart seems suspect is certainly not the best way to reach one's destination. The best period to make our energy transition may have passed, and though our energy future appears stable for now, the problems that initiated the energy crisis of the 1970s have not disappeared. Indeed, the instability of OPEC and the recent success in the Gulf War merely created an artificial sense of security about petroleum supplies. We should continue to develop clean, efficient petroleum and coal technology while our present supplies are plentiful, but this approach should not dominate our efforts. Alternative, renewable energy technologies must eventually be implemented in tandem with their fossil-fuel counterparts. Not doing so would simply provide an excuse for maintaining the status quo and invite economic disruption when reserves run low or political instability again erupts in oil-rich regions. Toward that end, we must change the prevailing attitude that solar power is an infant field born out of the oil shocks and the environmental movement of the past 25 years. Such misconceptions lead many to assert that before solar power can become a viable alternative, the industry must first pay its dues with a fair share of technological evolution. Solar technology already boasts a century of R&D, requires no toxic fuel and relatively little maintenance, is inexhaustible, and, with adequate financial support, is capable of becoming directly competitive with conventional technologies in many locations. These attributes make solar energy one of the most promising sources for many current and future energy needs. As Frank Shuman declared more than 80 years ago, it is "the most rational source of power."

The term solar power is used to describe a number of methods of harnessing energy from the light of the Sun. It has been used in many traditional technologies for centuries and has come into widespread use where other power supplies are absent, such as in remote locations and in space. Its use is spreading as the environmental costs and limited supply of other power sources such as fossil fuels are realized.

Energy from the Sun

Theoretical annual mean insolation, at the top of Earth's atmosphere (top) and at the surface on a horizontal square meter.

Global solar energy resources. The colors in the map show the actual local solar energy, averaged through the years of 1991-1993. The scale is in watts per square meter. The land area required to supply the current global primary energy demand by solar energy using available technology is represented by the dark disks.

The rate at which solar radiation reaches a unit of area in space in the region of the Earth's orbit is 1,366 W/m², as measured upon a surface normal (at a right angle) to the Sun. This number is referred to as the solar constant.[1] The atmosphere reflects 6% and absorbs 16% of incoming radiation, resulting in a peak power at sea level of 1,020 W/m².[2][3] Average cloud cover reduces incoming radiation by 20% through reflection and 16% through absorption.[4] The image on the right shows the average solar power available on the surface in W/m², calculated from satellite cloud data averaged over three years from 1991 to 1993 (24 hours a day). For example, in North America the average power of the solar radiation lies somewhere between 125 and 375 W/m², or between 3 and 9 kWh/m²/day.[5]

This is the maximum available power, not the power delivered by solar power technology. For example, photovoltaic panels currently have an efficiency of about 15%, so a solar panel delivers 19 to 56 W/m², or 0.45-1.35 kWh/m²/day (annual day-and-night average). The dark disks in the image on the right are an example of the land areas that, if covered with solar panels, would produce slightly more energy in the form of electricity than the total primary energy supply in 2003.[6] That is, solar cells with an assumed 8% efficiency installed in these areas would deliver a bit more energy in the form of electricity than what is currently available from oil, gas, hydropower, nuclear power, etc. combined. A more recent concern is global dimming, an effect of pollution that is allowing less and less sunlight to reach the Earth's surface. It is intricately linked with pollution particles and global warming, and is mostly of concern for issues of global climate change, but it also concerns proponents of solar power because of the existing and potential future decreases in available solar energy. The order of magnitude is about 10% less solar energy available at sea level, mostly due to more intense cloud reflection back into outer space: the clouds are whiter and brighter because pollution dust serves as a vapor-liquid phase-change nucleation site and generates clouds where there would otherwise be a moisture-filled but clear sky. After passing through the Earth's atmosphere, most of the sun's energy is in the form of visible and infrared radiation. Plants use solar energy to create chemical energy through photosynthesis. Humans regularly use this energy by burning wood or fossil fuels, or simply by eating the plants.
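These figures follow from simple arithmetic, and the short sketch below reproduces them; the 6%/16% atmospheric losses, the 125-375 W/m² surface average and the 15% panel efficiency are the values quoted above, combined in the most straightforward way (which is why the sea-level result lands near, rather than exactly on, the quoted 1,020 W/m²):

# Reproducing the insolation figures quoted above. The loss percentages and
# the 15% panel efficiency are the example values from the text.

SOLAR_CONSTANT = 1366.0  # W/m^2, normal to the Sun at Earth's orbit

# Atmosphere reflects 6% and absorbs 16%; a simple product gives ~1,065 W/m^2,
# close to the ~1,020 W/m^2 quoted (the exact figure depends on how losses combine).
peak_sea_level = SOLAR_CONSTANT * (1 - 0.06 - 0.16)
print(f"peak power at sea level: ~{peak_sea_level:.0f} W/m^2")

# Long-term surface average (day, night and weather included): 125-375 W/m^2.
for avg_power in (125.0, 375.0):
    daily_energy = avg_power * 24 / 1000       # kWh/m^2/day (3 to 9)
    panel_power = avg_power * 0.15             # W/m^2 from a 15%-efficient panel (19 to 56)
    panel_daily = panel_power * 24 / 1000      # kWh/m^2/day (0.45 to 1.35)
    print(f"{avg_power:5.0f} W/m^2 -> {daily_energy:3.1f} kWh/m^2/day; "
          f"panel: {panel_power:4.1f} W/m^2, {panel_daily:4.2f} kWh/m^2/day")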

Classification
Solar power technologies can be classified in a number of ways.

Direct or Indirect

A photovoltaic cell produces electricity directly from solar energy.

Direct solar power involves a single transformation of sunlight which results in a useable form of energy.

Sunlight hits a photovoltaic cell (also called a photoelectric cell) creating electricity. Sunlight hits the dark absorber surface of a solar thermal collector and the surface warms. The heat energy may be carried away by a fluid circuit. Sunlight strikes a solar sail on a space craft and is converted directly into a force on the sail which causes motion of the craft. Sunlight is collected using focusing mirrors and transmitted via optical fibers into a building's interior to supplement lighting.[7]

Hydroelectric power stations produce indirect solar power. The Itaipu Dam, Brazil / Paraguay.

Indirect solar power involves multiple transformations of sunlight which result in a useable form of energy.

Vegetation uses photosynthesis to convert solar energy to chemical energy, which can later be burned as fuel to generate electricity (see biofuel). Methane (natural gas) and hydrogen may be derived from the biofuel. Hydroelectric dams and wind turbines are powered by solar energy through its interaction with the Earth's atmosphere and the resulting weather phenomena. Ocean thermal energy production uses the thermal gradients that are present across ocean depths to generate power. These temperature differences are ultimately due to the energy of the sun. Energy obtained from oil, coal, and peat originated as solar energy captured by vegetation in the remote geological past and fossilised. Hence the term fossil fuel. The great time delay between the input of the solar energy and its recovery means these are not practically renewable and therefore not normally classified as solar power.


Passive or Active
Passive solar systems use non-mechanical techniques of capturing, converting and distributing sunlight into useable forms of energy such as heating, lighting or ventilation. These techniques include selecting materials with favorable thermal properties, designing spaces that naturally circulate air, and orienting a building relative to the sun.

Passive solar water heaters use a thermosiphon to circulate fluid.

A Trombe wall circulates air by natural circulation and acts as a thermal mass which absorbs heat during the day and radiates heat at night. Clerestory windows, light shelves, skylights and light tubes can be used among other daylighting techniques to illuminate a building's interior. Passive solar water distillers may use capillary action to pump water.

Active solar systems use mechanical components such as pumps and fans to process sunlight into useable forms of energy.

Concentrating or Non-concentrating

Point-focus parabolic dish with Stirling system and its solar tracker at the Plataforma Solar de Almería (PSA) in Spain.

Concentrating solar power systems use lenses or mirrors and tracking systems to focus sunlight into a high-intensity beam. Concentrating solar power systems are sub-classified by focus and tracking type.

Line focus
o A solar trough consists of a long parabolic reflector that uses single-axis tracking to follow the sun and concentrate light along a line formed at the parabola's focus. The SEGS systems in California are an example of this type of system.

Point focus
o A power tower consists of an array of flat reflectors that use dual-axis tracking to follow the sun and concentrate light at a single point on the tower where a thermal receiver is located.
o A parabolic dish or dish/engine system consists of a single unit that uses dual-axis tracking to follow the sun and focuses light at a single point where photovoltaic cells or a thermal receiver is located.
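The practical difference between line focus and point focus comes down to geometric concentration ratio, which largely determines the temperatures each type can reach. The sketch below uses idealized geometry with made-up, illustrative dimensions (a 5 m trough aperture with a 7 cm absorber tube, a 10 m dish with a 20 cm receiver); it is not data from any particular plant:

import math

# Idealized geometric concentration ratios; all dimensions are illustrative
# assumptions, not measurements of any particular plant.

def trough_concentration(aperture_width_m, receiver_diameter_m):
    """Line focus: aperture width per unit length over receiver tube circumference."""
    return aperture_width_m / (math.pi * receiver_diameter_m)

def dish_concentration(aperture_diameter_m, receiver_diameter_m):
    """Point focus: aperture area over receiver spot area."""
    return (aperture_diameter_m / receiver_diameter_m) ** 2

print(f"trough, 5 m aperture, 7 cm tube:  ~{trough_concentration(5.0, 0.07):.0f}x")
print(f"dish, 10 m aperture, 20 cm focus: ~{dish_concentration(10.0, 0.2):.0f}x")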

Non-concentrating systems include solar domestic hot water systems and most photovoltaic cells. These systems have the advantage that they can make use of diffuse solar radiation (which can not be focused). However, if high temperatures are required, this type of system is usually not suitable, because of the lower radiation intensity. Solar water

heating is arguably the most practical and economical way to harness "non-focused" solar energy.

Advantages and disadvantages of Solar power



Advantages

Solar power is pollution-free during use. Production-end wastes and emissions are manageable using existing pollution controls. Decommissioning and recycling technologies are under development.[8] Facilities can operate with very little maintenance or intervention after initial setup. Solar power is becoming more and more economical as costs associated with production decrease, the technology becomes more effective in energy conversion, and the costs of other energy source alternatives increase. In situations where connection to the electricity grid is difficult, costly, or impossible (such as island communities, areas not served by a power grid, illuminated roadside signs, and ocean-going vessels) harvesting solar power is often an economically competitive alternative to energy from traditional sources. When grid connected, solar electric generation can displace the highest-cost electricity during times of peak demand (in most climatic regions), can reduce grid loading, and can eliminate the need for local battery power for use in times of darkness and high local demand; such application is encouraged by net metering. Time-of-use net metering can be highly favorable to small photovoltaic systems. Grid-connected solar electricity can be used locally, thus minimizing transmission/distribution losses (approximately 7.2%).[9]


Disadvantages

Intermittency: It is not available at night and is reduced when there is cloud cover, decreasing the reliability of peak output performance or requiring a means of energy storage. For power grids to stay functional at all times,
o energy storage facilities, such as hydroelectric plants, would need to store excess solar power generation and/or be used in a dispatchable manner to 'gapfill' low points in solar generation. Some pumped-storage hydroelectric facilities exist for the sole purpose of storing excess energy to be used when needed.[1]
o other renewable energy sources (i.e., wind, geothermal, tidal, wave, ocean power, etc.) would need to be active, or
o backup conventional power plants would be needed. There is an energy cost to keep coal-burning power plants 'hot', which includes the burning of coal to keep boilers at temperature. Natural gas power plants can quickly come up to full load without requiring significant standby idling[2]. Without changes in the energy supply and control system (such as a shift to using current hydropower as nighttime backup across wider regions or the incorporation of more renewable power), few coal power plants could be displaced, according to critics.

Locations at high latitudes or with frequent substantial cloud cover offer reduced potential for solar power use. It can only realistically be used to power transport vehicles by converting light energy into another form of energy (e.g. battery stored electricity or by electrolysing water to produce hydrogen) suitable for transport, incurring an energy penalty similar to coal or nuclear electricity generation. While the burning of gasoline in an internal combustion engine is only about 20%-25% efficient[3], depending on driving mode, the use of battery electric technology can match or exceed that efficiency when various external factors are included, such as the loss of energy in the production of gasoline and the energy cost of battery manufacture and recycling. Solar cells produce DC which must be converted to AC when used in currently existing distribution grids. This incurs an energy penalty of 5-10%.
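To give a feel for how the intermittency and conversion penalties above combine, the sketch below estimates the annual AC energy delivered by a small grid-tied array; the 3 kW array size and 18% capacity factor are illustrative assumptions, while the 5-10% inverter loss is the range quoted above:

# Combining the intermittency and DC-to-AC conversion penalties into delivered
# energy. The 3 kW array and 18% capacity factor are illustrative assumptions;
# the 5-10% inverter loss is the range quoted above.

nameplate_kw = 3.0
capacity_factor = 0.18      # average output as a fraction of nameplate (assumed)
hours_per_year = 8760

dc_energy_kwh = nameplate_kw * capacity_factor * hours_per_year
for inverter_loss in (0.05, 0.10):
    ac_energy_kwh = dc_energy_kwh * (1 - inverter_loss)
    print(f"inverter loss {inverter_loss:.0%}: ~{ac_energy_kwh:,.0f} kWh/year delivered as AC")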


Types of technologies
Most solar energy used today is harnessed as heat or electricity.

Solar design in architecture


Main articles: Passive solar and Active solar

Solar design can be used to achieve comfortable temperature and light levels with little or no additional energy. This can be done through passive solar design, which maximises the entry of sunlight in cold conditions and reduces it in hot weather, and through active solar design, which uses additional devices such as pumps and fans to direct warm and cool air or fluid.


Solar heating systems


Main article: Solar hot water

Solar hot water systems use sunlight to heat water. These systems may be used to heat domestic hot water or for space heating, but are most commonly used to heat pools. These systems are basically composed of solar thermal collectors and a storage tank.[10] The three basic classifications of solar water heaters are:

Active systems, which use pumps to circulate water or a heat transfer fluid.
Passive systems, which circulate water or a heat transfer fluid by natural circulation. These are also called thermosiphon systems.
Batch systems, using a tank directly heated by sunlight.
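As a rough feel for the energy involved in domestic solar water heating, the sketch below estimates the collector area needed to meet a day's hot-water demand; every input value (tank draw, temperature rise, insolation, collector efficiency) is an illustrative assumption rather than a figure from the text:

# Back-of-the-envelope sizing of a domestic solar water heater.
# Every input value here is an illustrative assumption.

water_litres = 200.0          # daily hot-water draw (1 litre of water is about 1 kg)
delta_t = 35.0                # temperature rise in kelvin (e.g. 15 C to 50 C)
specific_heat = 4186.0        # J/(kg*K) for water
insolation = 4.0              # kWh/m^2/day falling on the collector
collector_efficiency = 0.5    # overall fraction of that energy captured

heat_needed_kwh = water_litres * specific_heat * delta_t / 3.6e6
area_m2 = heat_needed_kwh / (insolation * collector_efficiency)
print(f"~{heat_needed_kwh:.1f} kWh of heat per day, ~{area_m2:.1f} m^2 of collector")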

A Trombe wall is a thermal mass that is heated by sunlight during the day and radiates stored heat during the evening.

Solar cooking
Main article: Solar cooker

Pictured: Solar cooker

A solar box cooker traps the Sun's power in an insulated box; such boxes have been successfully used for cooking, pasteurization and fruit canning. Solar cooking is helping many developing countries, both reducing the demands for local firewood and maintaining a cleaner environment for the cooks. The first known western solar oven is attributed to Horace de Saussure.

Solar lighting
Main articles: Daylighting and Light tube

The interior of a building can be lit during daylight hours using light tubes.

For instance, fiber optic light pipes can be connected to a parabolic collector mounted on the roof. The manufacturer claims this gives a more natural interior light and can be used to reduce the energy demands of electric lighting.[11]

Photovoltaics
Main article: Photovoltaics

The solar panels (photovoltaic arrays) on this small yacht at sea can charge the 12 V batteries at up to 9 amps in full, direct sunlight.

Solar cells, also referred to as photovoltaic cells, are devices or banks of devices that use the photovoltaic effect of semiconductors to generate electricity directly from sunlight. Until recently, their use has been limited due to high manufacturing costs. One cost-effective use has been in very low-power devices such as calculators with LCDs. Another use has been in remote applications such as roadside emergency telephones, remote sensing, cathodic protection of pipelines, and limited "off grid" home power applications. A third use has been in powering orbiting satellites and other spacecraft. Declining manufacturing costs (dropping at 3 to 5% a year in recent years) are expanding the range of cost-effective uses. The average lowest retail cost of a large photovoltaic array declined from $7.50 to $4 per watt between 1990 and 2005. With many jurisdictions now giving tax and rebate incentives, solar electric power can now pay for itself in five to ten years in many places. "Grid-connected" systems - that is, systems with no battery that connect to the utility grid through a special inverter - now make up the largest part of the market. In 2004 the worldwide production of solar cells increased by 60%. 2005 is expected to see large growth again, but shortages of refined silicon have been hampering production worldwide since late 2004.
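The five-to-ten-year payback quoted above can be sanity-checked with a simple calculation. In the sketch below, the $4-per-watt figure is the 2005 retail price quoted above, while the system size, incentive fraction, electricity price and capacity factor are illustrative assumptions:

# Simple payback estimate for a grid-connected PV system. The $4/W figure is
# the 2005 retail price quoted above; the other inputs are assumptions.

system_w = 3000.0
cost_per_w = 4.0              # US$/W (quoted 2005 retail figure)
incentive_fraction = 0.30     # assumed combined tax credit / rebate
electricity_price = 0.20      # assumed US$/kWh of displaced utility power
capacity_factor = 0.18        # assumed average output as a fraction of nameplate

net_cost = system_w * cost_per_w * (1 - incentive_fraction)
annual_kwh = system_w / 1000 * capacity_factor * 8760
annual_savings = annual_kwh * electricity_price
print(f"net cost ${net_cost:,.0f}, savings ${annual_savings:,.0f}/yr, "
      f"payback ~{net_cost / annual_savings:.1f} years")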

Solar fibers
A photovoltaic device not using silicon is currently in development. [4] The device consists of a "solar tape," containing titanium dioxide (TiO2) in the form of a tape or fiber that could be combined with clothing or building materials. The material has inferior efficiency to conventional photovoltaics (5% for an initial commercial version to "near 12%" in the

lab as of 2004, versus 15-30% for conventional cells). Its advantages are its low manufacturing cost, low weight, flexibility, function in artificial light, and resulting versatility.[5]

Concentrating Photovoltaic (CPV) systems


Despite major progress made over the last decade, the use of solar panels remains relatively expensive compared to conventional electricity generation. One promising way to reduce cost even further is by using concentrating photovoltaic systems.[12][13][14] The idea is to concentrate sunlight by lenses or mirrors onto a small panel of high-efficiency solar cells. That way expensive solar panels are replaced by cheap plastic or glass, thus dramatically reducing the cost per watt. In addition, the amount of solar energy harvested per m² is increased, thus reducing the area needed for generating solar power. High-efficiency cells have been developed for special applications such as satellites and space exploration which require high performance. GaAs multijunction devices are the most efficient solar cells to date, reaching as high as 39% efficiency[15]. They are also some of the most expensive cells per unit area (up to US$40/cm²). In concentrating photovoltaic systems solar energy is concentrated several hundred times, which increases the solar energy conversion efficiency and reduces the semiconductor area needed per watt of power output. This may be beneficial as an application for multijunction solar cells, as the high costs and technical challenges of generating large-area multi-junction photovoltaics are prohibitive relative to current silicon PV technologies. Since concentrating photovoltaics requires solar tracking, the approach is most suited for large utility-scale applications.[16] Different approaches are being evaluated for that purpose,[17] in particular Fresnel lenses,[18] parabolic trough concentration systems,[19][20] and solar dishes.[21] For examples of concentrating photovoltaic systems suited for rooftop installation on commercial buildings, see the "Sunflower", and the "SunCube" for domestic applications.
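The economics of concentration follow directly from the numbers above: the same expensive cell area produces far more power under concentrated light. The sketch below uses the quoted 39% efficiency and US$40/cm² cell cost; the 1,000 W/m² reference irradiance and 85% optical efficiency are assumptions:

# Cell cost per watt with and without concentration. The 39% efficiency and
# US$40/cm^2 cost are the figures quoted above; the reference irradiance and
# optical efficiency are assumptions.

cell_cost_per_cm2 = 40.0
cell_efficiency = 0.39
irradiance = 1000.0           # W/m^2, assumed full-sun reference
optics_efficiency = 0.85      # assumed transmission of the lenses/mirrors

for concentration in (1, 100, 500):
    optics = optics_efficiency if concentration > 1 else 1.0
    on_cell = irradiance * concentration * optics          # W/m^2 reaching the cell
    watts_per_cm2 = cell_efficiency * on_cell / 10_000     # 1 m^2 = 10,000 cm^2
    print(f"{concentration:4d}x concentration: cell cost ~${cell_cost_per_cm2 / watts_per_cm2:,.0f} per watt")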

Solar thermal electric power plants

Solar Two, a concentrating solar power plant (an example of solar thermal energy).

Main article: Solar thermal energy

Solar thermal energy can be used to heat a fluid to high temperatures and use it to produce electric power.

Solar updraft tower


Main article: Solar updraft tower

A solar updraft tower is a relatively low-tech solar thermal power plant where air passes under a very large agricultural glass house (between 2 and 8 km in diameter), is heated by the sun and channeled upwards towards a convection tower. It then rises naturally and is used to drive turbines, which generate electricity.

Energy Tower
An energy tower is an alternative proposal to the solar updraft tower. The "Energy Tower" is driven by spraying water at the top of the tower; evaporation of the water causes a downdraft by cooling the air, thereby increasing its density and driving wind turbines at the bottom of the tower. It requires a hot, arid climate and large quantities of water (seawater may be used for this purpose), but it does not require the large glass house of the solar updraft tower.

Solar pond
A solar pond is a relatively low-tech, low-cost approach to harvesting solar energy. The principle is to fill a pond with 3 layers of water:
1. A top layer with a low salt content.
2. An intermediate insulating layer with a salt gradient, which sets up a density gradient that prevents heat exchange by natural convection in the water.
3. A bottom layer with a high salt content, which reaches a temperature approaching 90 degrees Celsius.
The different densities in the layers due to their salt content prevent convection currents developing which would normally transfer the heat to the surface and then to the air above. The heat trapped in the salty bottom layer can be used for different purposes, such as heating of buildings, industrial processes, or generating electricity.
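A rough estimate of how much heat such a pond can hold is easy to make. In the sketch below, the pond dimensions, brine properties and 60 K temperature difference are illustrative assumptions; only the roughly 90 degree Celsius bottom-layer temperature comes from the text:

# Heat held in the hot bottom layer of a solar pond. Dimensions and brine
# properties are illustrative assumptions; only the ~90 C layer temperature
# comes from the text.

area_m2 = 2000.0            # pond surface area (assumed)
layer_depth_m = 1.0         # thickness of the hot, salty bottom layer (assumed)
density = 1200.0            # kg/m^3 for concentrated brine (assumed)
specific_heat = 3300.0      # J/(kg*K) for brine (assumed, lower than pure water)
delta_t = 60.0              # K above ambient, e.g. a 90 C layer over 30 C surroundings

stored_joules = area_m2 * layer_depth_m * density * specific_heat * delta_t
print(f"~{stored_joules / 3.6e9:,.0f} MWh of low-grade heat in storage")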

Solar chemical
Solar chemical refers to a number of possible processes that harness solar energy by absorbing sunlight in a chemical reaction in a way similar to photosynthesis in plants but without using living organisms. No practical process has yet emerged. A promising approach is to use focused sunlight to provide the energy needed to split water into its constituent hydrogen and oxygen in the presence of a metallic catalyst such as zinc.
[22][23][24]

While metals, such as zinc, have been shown to drive photoelectrolysis of water, more research has focused on semiconductors. Further research has examined transition metal compounds, in particular titania, titanates, niobates and tantalates. Unfortunately, these materials exhibit very low efficiencies, because they require ultraviolet light to drive the photoelectrolysis of water. Current materials also require an electrical voltage bias for the hydrogen and oxygen gas to evolve from the surface, another disadvantage. Current research is focusing on the development of materials capable of the same water-splitting reaction using lower-energy visible light. It is also possible to use solar energy to drive industrial chemical processes without a requirement for fossil fuel.

Phytochemical energy storage (Biofuels)


See Biofuels and Biodiesel

The oil in plant seeds, in chemical terms, very closely resembles petroleum. Since the invention of the diesel engine, this form of captured solar energy has been used as a fuel comparable to petrodiesel, suitable for use in any diesel engine or generator, and known as biodiesel. A 1998 joint study by the U.S. Department of Energy (DOE) and the U.S. Department of Agriculture (USDA) traced

many of the various costs involved in the production of biodiesel and found that, overall, it yields 3.2 units of fuel product energy for every unit of fossil fuel energy consumed.[25] Other biofuels include ethanol, wood for stoves, ovens and furnaces, and methane gas produced from biomass through processes such as anaerobic digestion.

Energy storage
Main article: Grid energy storage

For a stand-alone system, some means must be employed to store the collected energy for use during hours of darkness or cloud cover. The following list includes both mature and immature techniques:

Electrochemically in batteries
Cryogenic liquid air or nitrogen
Compressed air in a cylinder
Flywheel energy storage
Hydrogen produced by electrolysis of water and then available for pollution-free combustion
Hydraulic accumulator
Pumped-storage hydroelectricity
Molten salt[26]
Superconducting magnetic energy storage

Storage always adds an extra stage of energy conversion, with consequent energy losses, and greatly increases capital costs. One way around this is to export excess power to the power grid, drawing it back when needed. This appears to use the power grid as a battery, but in fact it relies on conventional energy production through the grid during the night. However, since the grid always has a positive outflow, the result is exactly the same. Electric power costs are highly dependent on the consumption per time of day, since plants must be built for peak power (not average power). Expensive gas-fired "peaking generators" must be used when base capacity is insufficient. Fortunately for solar, solar capacity parallels energy demand, since much of the electricity is for removing heat produced by too much solar energy (air conditioners); this is less true in the winter. Wind power complements solar power since it can produce energy when there is no sunlight.
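The cost of that extra conversion stage can be illustrated with typical round-trip efficiencies; the values below are commonly cited ranges used here as assumptions, not figures from the text:

# What the extra conversion stage costs: energy returned per 10 kWh stored,
# for typical round-trip efficiencies (assumed values, not from the text).

surplus_kwh = 10.0
round_trip_efficiency = {
    "lead-acid battery": 0.75,
    "pumped-storage hydroelectricity": 0.80,
    "hydrogen (electrolysis + fuel cell)": 0.35,
}
for name, eff in round_trip_efficiency.items():
    print(f"{name:38s} returns ~{surplus_kwh * eff:.1f} of {surplus_kwh:.0f} kWh")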

Deployment of solar power to energy grids

Deployment of solar power depends largely upon local conditions and requirements. But as all industrialised nations share a need for electricity, solar power is likely to be used increasingly to supplement conventional electricity supplies. Several experimental photovoltaic (PV) power plants of 300 to 600 kW capacity are connected to electricity grids in Europe and the U.S. Other major research is investigating economic ways to store the energy which is collected from the sun's rays during the day.

Africa
Africa is home to the over 9 million km² Sahara desert, whose overall capacity, assuming a 50 MW/km² day/night/cloud average with 15%-efficient photovoltaic panels, is over 450 TW, or nearly 4,000,000 terawatt-hours per year. The current global energy consumption by humans, including all oil, natural gas, coal, nuclear, and hydroelectric, is pegged at about 13 TW.
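The arithmetic behind this estimate is simply area times average power density; the sketch below reproduces it using the figures quoted above (450 TW corresponds to roughly 3.9 million TWh per year):

# Reproducing the Sahara estimate: area times average power density, compared
# with total human energy consumption. All figures are the ones quoted above.

sahara_km2 = 9e6
avg_mw_per_km2 = 50.0            # day/night/cloud average with ~15% efficient panels
human_consumption_tw = 13.0

capacity_tw = sahara_km2 * avg_mw_per_km2 / 1e6     # MW -> TW, gives 450 TW
annual_twh = capacity_tw * 8760                     # ~3.9 million TWh per year
print(f"{capacity_tw:.0f} TW average, ~{annual_twh:,.0f} TWh/yr, "
      f"about {capacity_tw / human_consumption_tw:.0f}x current human consumption")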

Australia
The largest solar power station in Australia is the 400 kWp array at Singleton, New South Wales. Other significant solar arrays include the 220 kWp array on the Anangu Pitjantjatjara Lands in South Australia, the 200 kWp array at Queen Victoria Market in Melbourne and the 160 kWp array at Kogarah Town Square in Sydney. A building-integrated photovoltaic (BIPV) installation of 60 kW in Brisbane (at the Hall-Chadwick building) has an uninterruptible power supply (UPS) which gives around 10-15 minutes worth of emergency power in the event of the loss of electricity supply. Any power not used by the UPS is fed to the grid and goes towards reducing the building's overall power bills. Numerous smaller arrays have been established, mainly in remote areas where solar power is cost-competitive with diesel power.[27]

Asia
As of 2004, Japan had 1200 MWe installed. Japan currently consumes about half of worldwide production of solar modules, mostly for grid connected residential applications. In terms of overall installed PV capacity, India comes fourth after Japan, Germany, and the United States (Indian Ministry of Non-conventional Energy Sources 2002). Government support and subsidies have been major influences in its progress.[28] India's very long-term solar potential may be unparalleled in the world because it is one of the few places with an ideal combination of both high solar power reception and a large consumer base in the same place. India's theoretical solar potential is about 5000 TWh per year (i.e. 600 GW), far more than its current total consumption.

In 2005, the Israeli government announced an international contract for building a 100 MW solar power plant to supply the electricity needs of more than 200,000 Israelis living in southern Israel. The plan may eventually allow the creation of a gigantic 500 MW power plant, making Israel a leader in solar power production.[29]

Europe
The 10 megawatt Bavaria Solarpark in Germany is the world's largest solar electric system, covering 25 hectares (62 acres) with 57,600 photovoltaic panels. [30] A large solar PV plant is planned for the island of Crete. Research continues into ways to make the actual solar collecting cells less expensive and more efficient.

The scientific solar furnace at Odeillo, French Cerdagne.

A large parabolic-reflector solar furnace is located in the Pyrenees at Odeillo, France. It is used for various research purposes.[31] Another site is the Loser in Austria. The Plataforma Solar de Almería (PSA) in Spain, part of the Center for Energy, Environment and Technological Research (CIEMAT), is the largest center for research, development, and testing of concentrating solar technologies in Europe.[32] In the United Kingdom, the tallest building in Manchester, the CIS Tower, was clad in photovoltaic panels at a cost of £5.5 million and started feeding electricity to the national grid in November 2005.[33] On April 27, 2006, GE Energy Financial Services, PowerLight Corporation and Catavento Lda announced that they will build the world's largest solar photovoltaic power project. The 11-megawatt solar power plant, comprising 52,000 photovoltaic modules, will be built at a single site in Serpa, Portugal, 200 kilometers (124 miles) southeast of Lisbon in one of Europe's sunniest areas.[34]

North America

A laundromat in California supplements water heating with solar panels on the roof.

In some areas of the United States, solar electric systems are already competitive with utility systems. As of 2005, there is a list of technical conditions that factor into the economic feasibility of going solar: the amount of sunlight that the area receives; the purchase cost of the system; the ability of the system owner to sell power back to the electric grid; and, most important, the competing power prices from the local utility. For example, a photovoltaic system installed in Boston, Massachusetts, produces 25% less electricity than it would in Albuquerque, New Mexico, but yields roughly the same savings on utility bills since electricity costs more in Boston. In addition to these considerations, many states and regions offer substantial incentives to improve the economics for potential consumers. Congress recently adopted the first federal tax breaks for residential solar since 1985 -- temporary credits available for systems installed in 2006 or 2007. Homeowners can claim one federal credit of up to $2,000 to cover 30% of a photovoltaic system's cost and another 30% credit of up to $2,000 for a solar thermal system. Fifteen states also offer tax breaks for solar, and two dozen states offer direct consumer rebates.[35] Solar One is a pilot solar-thermal project in the Mojave Desert near Barstow, California. It uses heliostats and molten-salt storage technology to achieve longer periods of power generation. Solar Two, also near Barstow, built on and elaborated the success of Solar One. It was an R&D project in Barstow, California, financed by the US federal Department of Energy. Solar Two used liquid salts as a storage medium in order to continue to provide energy for much of the time when sunlight is not available. Its success has led to the larger Solar Tres project in Spain. On August 11, 2005, Southern California Edison announced an agreement to purchase solar-powered Stirling engines from Stirling Energy Systems over a twenty-year period and in quantities (20,000 units) sufficient to generate 500 megawatts of electricity.[36] These systems, to be installed on a 4,500-acre (18 km²) solar farm, will use mirrors to direct and concentrate sunlight onto the engines, which will drive generators. Less than a month later, Stirling Energy Systems announced another agreement with San Diego Gas & Electric to provide between 300 and 900 megawatts of electricity.[37]

The world's largest solar power plant is located in the Mojave Desert. Solel[38], an Israeli company, operates the plant, which consists of 1,000 acres (4 km²) of solar reflectors. This plant produces 90% of the world's commercially produced solar power. On January 12, 2006, the California Public Utilities Commission approved the California Solar Incentive Program[39], a comprehensive $2.8 billion program that provides incentives toward solar development over 11 years.

Deployment of Solar power in transport

The solar-powered car Nuna 3, built by the Dutch Nuna team.

Development of a practical solar-powered car has been an engineering goal for twenty years. The center of this development is the World Solar Challenge, a biennial solar-powered car race over 3,021 km through central Australia from Darwin to Adelaide. The race's stated objective is to promote research into solar-powered cars. Teams from universities and enterprises participate. In 1987, when it was founded, the winner's average speed was 67 km/h. By the 2005 race this had increased to a record average speed of 103 km/h.

World solar power production


Total peak power of installed solar panels is around 5,300 MW as of the end of 2005. (IEA statistics appear to be underreported: they report 2,600 MW as of 2004, which with 1,700 installed in 2005 would be a cumulative total of 4,300 for 2005). These figures include only photovoltaic generated power and not that produced by other solar means. Inclusion of the U.S.'s solar reflector plants would double its total, putting it at the level of the second place country on the list.

Installed PV Power as of the end of 2004 [40]

Country          Off-grid PV (kW)   Grid-connected (kW)   Cumulative total (kW)   Installed in 2004, total (kW)   Installed in 2004, grid-tied (kW)
Japan                  84,245            1,047,746               1,131,991                  272,368                       267,016
Germany                26,000              768,000                 794,000                  363,000                       360,000
United States         189,600              175,600                 365,200                   90,000                        62,000
Australia              48,640                6,760                  52,300                    6,670                           780
Netherlands             4,769               44,310                  49,079                    3,162                         3,071
Spain                  14,000               23,000                  37,000                   10,000                         8,460
Italy                  12,000               18,700                  30,700                    4,700                         4,400
France                 18,300                8,000                  26,300                    5,228                         4,183
Switzerland             3,100               20,000                  23,100                    2,100                         2,000
Austria                 2,687               16,493                  19,180                    2,347                         1,833
Mexico                 18,172                   10                  18,182                    1,041                             0
Canada                 13,372                  512                  13,884                    2,054                           107
Korea                   5,359                4,533                   9,892                    3,454                         3,106
United Kingdom            776                7,386                   8,164                    2,261                         2,197
Norway                  6,813                   75                   6,888                      273                             0

A solar car is an electric vehicle powered by solar energy obtained from solar panels on the surface of the car. Photovoltaic (PV) cells convert the sun's energy directly into electrical energy. Cells are grouped into strings or circuits. Solar cars are not currently a practical form of transportation. Although they can operate for limited distances without the sun, the solar cells are generally very fragile and there is only enough room for one or two people. This is because the teams have been putting time towards the functionality of the vehicle, with less concern for comfort. However, they are raced in competitions such as the World Solar Challenge and the American Solar Challenge. These events are often sponsored by government agencies, such as the United States Department of Energy, who are keen to promote the development of alternative

energy technology (such as solar cells). Such challenges are often entered by universities to develop their students' engineering and technological skills, but many professional teams have entered competitions as well, including teams from GM and Honda.

Solar vehicles
Driver's cockpit
The driver's cockpit usually only contains a single seat, although a few cars do contain room for a second passenger. They contain some of the features available to drivers of traditional vehicles such as brakes, accelerator, turn signals, rear view mirrors (or camera), ventilation, and sometimes cruise control. A radio for communication with their support crews is often included. Solar cars are fitted with some gauges seen in conventional cars. Aside from keeping the car on the road, the driver's main priority is to keep an eye on these gauges to spot possible problems. Drivers also have a safety harness, and optionally (depending on the race) a helmet similar to racing car drivers.

Electrical system
The electrical system is the most important of the car's systems as it controls all of the power that comes into and leaves the system. The battery pack plays the same role in a solar car that a petrol tank plays in a normal car, storing power for future use. Solar cars use a range of batteries including lead-acid batteries, nickel-metal hydride (NiMH) batteries, nickel-cadmium (NiCd) batteries, lithium ion batteries and lithium polymer batteries. Lead-acid batteries are less expensive and easier to work with but have a lower power-to-weight ratio. Typically, solar cars use voltages between 84 and 170 volts. Power electronics monitor and regulate the car's electricity. Components of the power electronics include the peak power trackers, the motor controller and the data acquisition system. The peak power trackers manage the power coming from the solar array so as to maximize it, and deliver it either to the batteries for storage or to the motor; they also protect the batteries from overcharging. The motor controller manages the electricity flowing to the motor according to signals from the accelerator. Many solar cars have complex data acquisition systems that monitor the whole electrical system, while even the most basic cars have systems that provide battery voltage and current information to the driver. One such system utilizes a Controller Area Network (CAN).
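A rough sketch of what a peak power tracker does is the perturb-and-observe loop below. The function and parameter names are hypothetical (a real tracker lives in the controller firmware, not in Python), but the logic is the common one: keep nudging the array's operating point in whichever direction last increased the measured power.

# Perturb-and-observe peak power tracking, sketched in Python.
# read_voltage/read_current/set_voltage stand in for the real hardware interface.
def perturb_and_observe(read_voltage, read_current, set_voltage,
                        v_start=100.0, step=0.5, cycles=1000):
    v_set = v_start
    direction = 1            # +1 = raise the setpoint, -1 = lower it
    last_power = 0.0
    for _ in range(cycles):
        set_voltage(v_set)
        power = read_voltage() * read_current()
        if power < last_power:
            direction = -direction   # power fell, so reverse the perturbation
        last_power = power
        v_set += direction * step    # nudge toward the maximum power point
    return v_set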

Drive train
The setup of the motor and transmission is unique in solar cars. The electric motor normally drives only one wheel at the back of the car because of the small amount of power available. Solar car motors are normally rated at between 2 and 5 hp (1 and 3 kW), and the most common type of motor is a dual-winding brushless DC motor. The dual-winding motor is sometimes also used as a transmission, because multi-geared transmissions are rarely used. There are three basic types of transmissions used in solar cars:

a single-reduction direct drive
a variable-ratio drive belt
a hub motor

There are several varieties of each type. The most common is the direct drive transmission.

Mechanical systems
The mechanical systems are designed to keep friction and weight to a minimum while maintaining strength. Designers normally use titanium and composites to ensure a good strength-to-weight ratio. Solar cars usually have three wheels, but some have four. Three-wheelers usually have two front wheels and one rear wheel: the front wheels steer and the rear wheel follows. Four-wheel vehicles are set up like normal cars, or similarly to three-wheeled vehicles with the two rear wheels close together. Solar cars have a wide range of suspensions because of varying bodies and chassis. The most common front suspension is the double-A-arm suspension found in traditional cars. The rear suspension is often a trailing-arm suspension, as found on motorcycles. Solar cars are required to meet rigorous standards for brakes. Disc brakes are the most commonly used because of their good braking ability and adjustability. Mechanical and hydraulic brakes are both widely used, with the brakes designed to move freely so as to minimize brake drag. Steering systems for solar cars also vary. The major design factors for steering systems are efficiency, reliability and precise alignment to minimize tire wear and power loss. The popularity of solar car racing has led some tire manufacturers to design tires for solar vehicles, which has increased overall safety and performance.

Solar array
The solar array consists of hundreds of photovoltaic solar cells converting sunlight into electricity. Cars can use a variety of solar cell technologies; most often polycrystalline silicon, monocrystalline silicon, or gallium arsenide. The cells are wired together into strings, while strings are often wired together to form a panel. Panels normally have voltages close to the nominal battery voltage. The main aim is to fit as many cells into as small a space as possible. Designers encapsulate the cells to protect them from the weather and breakage. Designing a solar array isn't as easy as just stringing a bunch of cells together. A solar array acts like a lot of very small batteries all hooked together in series: the total voltage produced is the sum of all cell voltages. The problem is that if a single cell is in shadow it acts like a diode, blocking the flow of current for the entire string of cells. To protect against this, array designers use bypass diodes in parallel with smaller segments of the string of cells, allowing current to flow around the non-functioning cell(s). Another consideration is that the battery itself can force current backwards through the array unless blocking diodes are placed at the end of each panel. The power produced by the solar array depends on the weather conditions, the position of the sun and the capacity of the array. At noon on a bright day, a good array can produce over 2 kilowatts (2.6 hp).
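The effect of shading and of bypass diodes can be illustrated with a deliberately simplified model; the cell voltage, currents and grouping below are assumed values for illustration only. Each cell is treated as a current source, a series string is limited to the current of its weakest cell, and a bypass diode lets the string skip a shaded group at the cost of that group's voltage.

# Simplified series-string model: how bypass diodes rescue output under shading.
def string_power_no_bypass(cell_currents, v_per_cell=0.5):
    # the weakest (most shaded) cell throttles the entire string
    return min(cell_currents) * v_per_cell * len(cell_currents)

def string_power_with_bypass(cell_currents, groups, v_per_cell=0.5):
    # try running the string at each group's limiting current; groups that
    # cannot carry that current are bypassed and contribute no voltage
    limits = [min(cell_currents[i] for i in g) for g in groups]
    volts = [v_per_cell * len(g) for g in groups]
    best = 0.0
    for i_string in limits:
        v_total = sum(v for lim, v in zip(limits, volts) if lim >= i_string)
        best = max(best, i_string * v_total)
    return best

cells = [5.0] * 35 + [0.5]                    # 36 cells, one heavily shaded
print(string_power_no_bypass(cells))          # ~9 W: one shaded cell chokes everything
groups = [range(0, 18), range(18, 36)]        # two bypass diodes, 18 cells each
print(string_power_with_bypass(cells, groups))  # ~45 W: the unshaded half still delivers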

Bodies and chassis


Solar cars have very distinctive shapes as there are no established standards for design. Designers aim to minimize drag, maximize exposure to the sun, minimize weight and make vehicles as safe as possible. In chassis design the aim is to maximize strength and safety while keeping the weight as low as possible. There are three main types of chassis:

space frame
semi-monocoque or carbon beam
monocoque

The space frame uses a welded tube structure to support the loads, with a lightweight composite shell attached separately as the body. The semi-monocoque chassis uses composite beams and bulkheads to support the weight and is integrated into the belly of the car, with the top sections often attached to the body. A monocoque structure uses the body of the car itself to support the weight. Composite materials are widely used in solar cars. Carbon fiber, Kevlar and fiberglass are common composite structural materials, while foam and honeycomb are commonly used filler materials. Epoxy resins are used to bond these materials together. Carbon fiber and Kevlar structures can be as strong as steel at a much lower weight.

Race Strategy
Optimizing energy consumption is of prime importance in a solar car race. It is therefore very important to be able to closely monitor the speed, energy consumption and energy intake from the solar panel, among other things, in real time. Some teams employ sophisticated telemetry that automatically keeps a following vehicle continuously up to date on the state of the car. The strategy employed depends upon the race rules and conditions. Most solar car races have set starting and stopping points, and the objective is to reach the final point in the least total time. Since aerodynamic drag rises with the square of speed, the power needed to overcome it rises roughly with the cube of speed, so the energy consumed over a given distance grows rapidly as the car goes faster. This means that the optimum strategy is to travel at a single steady speed during all phases of the race. Given the varied conditions in all races and the limited (and constantly changing) supply of energy, most teams have race speed optimization programs that continuously update the team on how fast the vehicle should be traveling.
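A toy calculation, with made-up drag and rolling-resistance coefficients, shows why a steady speed wins: covering the same distance in the same total time with a mix of slow and fast segments always costs more energy than cruising at the average speed.

# Energy to cover a distance at speed v (drag force grows with v^2); all coefficients are assumed.
RHO, CD, AREA = 1.2, 0.12, 1.0      # air density, drag coefficient, frontal area
CRR, MASS, G = 0.008, 300.0, 9.81   # rolling resistance, car + driver mass, gravity

def energy_joules(speed_kmh, distance_km):
    v = speed_kmh / 3.6
    drag_force = 0.5 * RHO * CD * AREA * v ** 2
    rolling_force = CRR * MASS * G
    return (drag_force + rolling_force) * distance_km * 1000.0

steady = energy_joules(100, 200)                            # 200 km at a steady 100 km/h (2 hours)
mixed = energy_joules(80, 100) + energy_joules(133.3, 100)  # same distance, same ~2 hours
print(round(steady), round(mixed))                          # the mixed plan uses noticeably more energy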

Solar car races

University of Michigan and University of Minnesota heading west toward the finish line in the North American Solar Challenge 2005
The two most notable solar car races are the World Solar Challenge and the North American Solar Challenge. They are contested by a variety of university and corporate teams. Corporate teams contest the race to give their design teams experience in working with both alternative energy sources and advanced materials (although some may view their participation as a mere PR exercise). GM and Honda are among the companies that have sponsored solar teams. University teams enter the races because they give their students experience in designing high-technology cars and working with environmental and advanced materials technology. These races are often sponsored by agencies such as the US Department of Energy, which are keen to promote renewable energy sources. The cars require intensive support teams similar in size to professional motor racing teams. This is especially the case with the World Solar Challenge, where sections of the race run through very remote country. There are other races, such as Suzuka and Phaethon. Suzuka is a yearly track race in Japan, and Phaethon was part of the Cultural Olympiad in Greece just before the 2004 Olympics.

The 2005 North American Solar Challenge, which ran from Austin, Texas, to Calgary, Canada, was the successor of the American Solar Challenge. The ASC ran in 2001 and 2003 from Chicago, Illinois, to Claremont, California along old Route 66. The ASC was in turn the successor to the old GM Sunrayce, which was run across the country in 1990, 1993, and then every two years through 1999. The 2005 North American Solar Challenge had two classes:

Open: teams are allowed to use space-grade solar cells - won by the University of Michigan.
Stock: limits the type of cells that can be used on solar arrays - won by Stanford University.

The North American Solar Challenge was sponsored in part by the US Department of Energy. However, funding was cut near the start of 2006, and the 2007 NASC will likely not happen. It is unclear what will become of solar racing in North America. The 20th anniversary World Solar Challenge will be run in October 2007 and is already shaping up to be a race to remember. Major regulation changes for this race were released in June 2006, intended to slow down the cars in the main event, which had been approaching the speed limit in previous years.

Timeline

US685957: rays falling on an insulated conductor connected to a capacitor; the capacitor charges electrically.

1800s

1839 - Alexandre Edmond Becquerel observes the photovoltaic effect via an electrode in a conductive solution exposed to light.
1873 - Willoughby Smith finds that selenium is photoconductive.

1877 - W.G. Adams and R.E. Day observe the photovoltaic effect in solid selenium and publish a paper on the selenium cell, "The action of light on selenium", in Proceedings of the Royal Society, A25, 113.
1883 - Charles Fritts develops a solar cell using selenium on a thin layer of gold, forming a device giving less than 1% efficiency.
1887 - Heinrich Hertz investigates ultraviolet light photoconductivity.
1888 - Edward Weston receives patent US389124, "Solar cell", and US389125, "Solar cell".
1894 - Melvin Severy receives patent US527377, "Solar cell", and US527379, "Solar cell".
1897 - Harry Reagan receives patent US588177, "Solar cell".


1900-1929

1901 - Nikola Tesla receives patent US685957, "Apparatus for the Utilization of Radiant Energy", and US685958, "Method of Utilizing of Radiant Energy".
1902 - Philipp von Lenard observes the variation in electron energy with light frequency.
1904 - Wilhelm Hallwachs makes a semiconductor-junction solar cell (copper and copper oxide).
1905 - Albert Einstein publishes a paper on the photoelectric effect.
1913 - William Coblentz receives patent US1077219, "Solar cell".
1914 - Sven Ason Berglund patents "methods of increasing the capacity of photosensitive cells".
1916 - Robert Millikan's experiments provide proof of the photoelectric effect.
1918 - Jan Czochralski, a Polish scientist, develops a method to grow single-crystal silicon.


1930-1959

1932 - Audobert and Stora discover the photovoltaic effect in cadmium sulfide (CdS), a photovoltaic material still used today.
1946 - Russell Ohl receives patent US2402662, "Light sensitive device".
1950s - Bell Labs produce solar cells for space activities.
1953 - Gerald Pearson begins research into lithium-silicon photovoltaic cells.
1954 - AT&T exhibits solar cells at Murray Hill, New Jersey.[1] Shortly afterwards, AT&T shows them at the National Academy of Sciences meeting. These cells have about 6% efficiency. The New York Times forecasts that solar cells will eventually lead to a source of "limitless energy of the sun".
1955 - Western Electric licenses commercial solar cell technologies. Hoffman Electronics' Semiconductor Division creates a 2% efficient commercial solar cell for $25 per cell, or $1,785 per watt.

1957 - AT&T assignors (Gerald L. Pearson, Daryl M. Chapin, and Calvin S. Fuller) receive patent US2780765, "Solar Energy Converting Apparatus"; they refer to it as the "solar battery". Hoffman Electronics creates an 8% efficient solar cell.
1958 - T. Mandelkorn of the U.S. Signal Corps Laboratories creates n-on-p silicon solar cells, which are more resistant to radiation damage and better suited for space. Hoffman Electronics creates 9% efficient solar cells. Vanguard I, the first solar powered satellite, is launched with a 0.1 W, 100 cm² solar panel.
1959 - Hoffman Electronics creates a 10% efficient commercial solar cell, and introduces the use of a grid contact, reducing the cell's resistance.


1960-1979

1960 - Hoffman Electronics creates a 14% efficient solar cell.
1961 - The "Solar Energy in the Developing World" conference is held by the United Nations.
1962 - The Telstar communications satellite is powered by solar cells.
1963 - Sharp Corporation produces a viable photovoltaic module of silicon solar cells.
1967 - Soyuz 1 is the first manned spacecraft to be powered by solar cells.
1971 - Salyut 1 is powered by solar cells.
1973 - Skylab is powered by solar cells.
1974 - The Florida Solar Energy Center begins operation.[2]
1976 - David Carlson and Christopher Wronski of RCA Laboratories create the first amorphous silicon PV cells, which have an efficiency of 1.1%.
1977 - The Solar Energy Research Institute is established at Golden, Colorado. World production of solar cells exceeds 500 kW.


1980-1999

1980 - The Institute of Energy Conversion at the University of Delaware develops the first thin-film solar cell exceeding 10% efficiency, using Cu2S/CdS technology.
1983 - Worldwide photovoltaic production exceeds 21.3 MW, and sales exceed $250 million.
1985 - 20% efficient silicon cells are created by the Centre for Photovoltaic Engineering at the University of New South Wales.
1989 - Reflective solar concentrators are first used with solar cells.
1990 - The Cathedral of Magdeburg installs solar cells on its roof, the first such installation on a church in East Germany.
1991 - Efficient photoelectrochemical cells are developed; the dye-sensitized solar cell is invented.

1991 - President George H. W. Bush directs the U.S. Department of Energy to establish the National Renewable Energy Laboratory (transferring the existing Solar Energy Research Institute).
1993 - The National Renewable Energy Laboratory's Solar Energy Research Facility is established.
1994 - NREL develops a GaInP/GaAs two-terminal concentrator cell (180 suns), which becomes the first solar cell to exceed 30% conversion efficiency.
1996 - The National Center for Photovoltaics is established. Grätzel at EPFL, Lausanne, Switzerland achieves 11% efficient energy conversion with dye-sensitized cells that use a photoelectrochemical effect, not a photovoltaic effect.
1999 - Total worldwide installed photovoltaic power reaches 1,000 MW.


2000-Today

2005 - Solar cells in modules can convert around 17% of incident visible radiant energy to electrical energy.
2006 - Estimated yearly solar cell production reaches 1,868 megawatts. Worldwide polysilicon production is projected to grow from 31,000 tons in 2005 to 36,000 tons in 2006.


Future developments
Solar power satellites are proposed satellites, to be built in high Earth orbit, that would use microwave power transmission to beam solar power to a very large antenna on Earth, where it would be used in place of conventional power sources. Tellurium has potential applications in cadmium-telluride solar cells. Some of the highest efficiencies for solar cell electric power generation have been obtained using this material, but these applications have not yet caused demand to increase significantly.

The History and Future of Wind Power

Humans have been harnessing wind energy for thousands of years to pump water, grind grain and power boats. The technologies for using wind energy have been refined over the last two centuries, and those refinement efforts continue today. References to windmills are common in literature; the best known is probably from the seventeenth-century writer Miguel de Cervantes, author of Don Quixote, whose main character mistook windmills for giants. Electricity-generating wind turbines have been used in the US and Europe for over 100 years. By 1800, an estimated 500,000 windmills were in use in Europe and China. In the early 1900s, many rural communities and homesteads used small-scale wind turbines, also known as windmills, for their electricity supply, and wind power played an important role in settling the Great Plains. By 1930, an estimated 600,000 windmills were at work in the US, pumping water and producing electricity.
The Turbines of Today
Modern use of wind energy has certainly changed since those early days of small windmills that farmers used to help grind feed for cattle. Today, wind machines continue to generate power on farms, but huge wind turbines that tower as high as twenty stories above the flat farmland have replaced the small windmill. Giant wind farms run by energy companies are becoming commonplace throughout the US and around the world. Some wind farms feature hundreds of turbines that work year-round generating electricity for entire communities. The turbines of today are very sophisticated and vary in size from small turbines that can be mounted on a house to large turbines that can generate an astonishing 3.6 megawatts of power.
The Growing Interest in Wind Power
Much of the growth in the industry has occurred in European countries such as Holland, Denmark, the United Kingdom and Germany. In the past decade, the installed wind capacity in Europe has increased by about forty percent per year. Today wind energy projects across Europe produce enough electricity to meet the domestic needs of five million people. There are plans to continue this strong growth, with a goal of 60,000 megawatts of wind energy capacity installed throughout Europe by 2010, enough to provide electricity to about 75 million people. Denmark, for example, currently generates about five percent of its electricity from wind turbines and intends to increase this to forty percent by 2030. Interest in wind power is also growing in countries such as India and China, and Australia is also paying increasing attention to the concept. Over ninety percent of the world's wind turbine manufacturers are in Europe and have a combined annual turnover of more than one billion euros. Although the costs of other forms of energy continue to rise from year to year, the costs of wind energy are actually coming down.
The Future of Wind Power
Today, wind produces only a small percentage of our nation's electricity. But many experts believe that by the year 2020, wind power will produce up to six percent of our electricity needs. That may not sound like much, but it is enough power to serve 25 million homes. Wind energy is also smart financially, because wind plants are cheaper to build and run than other power plants. As demand for electricity increases, adding more wind turbines makes sense. The cost of producing electricity from the wind has dropped dramatically in the last two decades: in 1975, the cost of electricity generated by the wind was about thirty cents per kilowatt-hour, but it has dropped to less than five cents per kilowatt-hour at some installations. The newest turbines are lowering the cost even further.
Wind Energy Benefits All
Wind energy provides significant rural economic development. Depending on its size and the wind conditions, a single wind turbine can provide between $2,000 and $4,000 a year in income to individual landowners, while occupying only a small portion of their land. In the US alone, over $5 million is paid annually to landowners hosting wind turbines. Wind farms can also be a valuable source of property tax income for local governments. The new wind farms will displace, on average, emissions of three million tons of carbon dioxide and more than 27,000 tons of noxious pollutants each year.

5000 BC - Wind energy propelled boats along the Nile River.
2000 BC - The first true windmill, a machine with vanes attached to an axis to produce circular motion, may have been built in ancient Babylon.
200 BC - Simple windmills in China were pumping water, while vertical-axis windmills with woven reed sails were grinding grain in Persia and the Middle East.
1219 AD - Earliest actual documentation of a Chinese windmill.
1270 AD - Four-bladed mill mounted on a central post (a "postmill").
1390 - The Dutch set out to refine the tower mill design.
Mechanical water pumping using relatively small systems, with rotor diameters of one to several meters, was perfected in the United States beginning with the Halladay windmill.
1870 - The most important refinement of the American fan-type windmill: steel blades.
1888 - The first use of a large windmill to generate electricity was a system built in Cleveland, Ohio by Charles F. Brush (the "Brush machine").
1889 - There were 77 windmill factories in the United States.

Wind energy is moving air. Traditional windmills have used the kinetic energy of wind to pump water and to grind corn. Modern windmills, or wind turbines as they are more correctly termed, are connected to generators which convert the kinetic energy of wind into electric energy. The use of wind turbines is steadily increasing, especially in the United States and Europe. Wind power is the conversion of wind energy into more useful forms, usually electricity, using wind turbines. In 2005, worldwide capacity of wind-powered generators was 58,982 megawatts, their production making up less than 1% of worldwide electricity use. Although still a relatively minor source of electricity for most countries, wind power accounts for 23% of electricity use in Denmark, 4.3% in Germany and around 8% in Spain. Globally, wind power generation more than quadrupled between 1999 and 2005. Most modern wind power is generated in the form of electricity by converting the rotation of turbine blades into electrical current by means of an electrical generator. In windmills (a much older technology), wind energy is used to turn mechanical machinery to do physical work, like crushing grain or pumping water. Wind power is used in large-scale wind farms for national electrical grids as well as in small individual turbines for providing electricity in isolated locations. Wind energy is abundant, renewable, widely distributed and clean, and mitigates the greenhouse effect when it replaces fossil-fuel-derived electricity.


Cost and growth


The cost of wind-generated electric power has dropped substantially. Since 2004, according to some sources, the price in the United States has been lower than the cost of fuel-generated electric power, even without taking externalities into account.[1][2][3] In 2005, wind energy cost one-fifth as much as it did in the late 1990s, and that downward trend is expected to continue as larger multi-megawatt turbines are mass-produced.[4] A British Wind Energy Association report gives an average generation cost for onshore wind power of around 3.2 pence per kilowatt-hour.[5] Wind power is growing quickly, at about 38%,[6] up from 25% growth in 2002. In the United States, as of 2003, wind power was the fastest growing form of electricity generation on a percentage basis.[7]

Wind energy
Main article: Wind
An estimated 1 to 3% of the energy from the Sun that hits the earth is converted into wind energy. This is about 50 to 100 times more energy than is converted into biomass by all the plants on earth through photosynthesis. Most of this wind energy is found at high altitudes, where continuous wind speeds of over 160 km/h (100 mph) occur. Eventually, the wind energy is converted through friction into diffuse heat throughout the earth's surface and atmosphere. The origin of wind is simple: the earth is unevenly heated by the sun, so the poles receive less energy from the sun than the equator does, and dry land heats up (and cools down) more quickly than the seas do. The differential heating powers a global atmospheric convection system reaching from the earth's surface to the stratosphere, which acts as a virtual ceiling. The change of seasons, the change of day and night, the Coriolis effect, the irregular albedo (reflectivity) of land and water, humidity, and the friction of wind over different terrain are some of the factors which complicate the flow of wind over the surface.

Wind variability and turbine power

A Darrieus wind turbine.
The power in the wind can be extracted by allowing it to blow past moving wings that exert torque on a rotor. The amount of power transferred is directly proportional to the density of the air, the area swept out by the rotor, and the cube of the wind speed. The mass flow of air that travels through the swept area of a wind turbine varies with the wind speed and air density. As an example, on a cool 15 °C (59 °F) day at sea level, air density is about 1.22 kilograms per cubic metre (it gets less dense with higher humidity). An 8 m/s breeze blowing through a 100 meter diameter rotor would move about 76,000 kilograms of air per second through the swept area. The kinetic energy of a given mass varies with the square of its velocity. Because the mass flow increases linearly with the wind speed, the wind energy available to a wind turbine increases as the cube of the wind speed; the power of the example breeze above through the example rotor would be about 2.5 megawatts. As the wind turbine extracts energy from the air flow, the air is slowed down, which causes it to spread out and diverts it around the wind turbine to some extent. Albert Betz, a German physicist, determined in 1919 that a wind turbine can extract at most 59% of the energy that would otherwise flow through the turbine's cross section. The Betz limit applies regardless of the design of the turbine. More recent work by Gorlov suggests a theoretical limit of about 30% for propeller-type turbines.[8] Actual efficiencies range from 10% to 20% for propeller-type turbines, and are as high as 35% for three-dimensional vertical-axis turbines like Darrieus or Gorlov turbines.
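The figures quoted above can be checked directly from the definition of kinetic power in an air flow; the Betz coefficient of 16/27 (about 59%) then caps what any turbine can extract from that flow. The short sketch below simply re-derives them.

import math

rho = 1.22            # kg/m^3, cool day at sea level (value from the text)
v = 8.0               # m/s wind speed
diameter = 100.0      # m rotor diameter
area = math.pi * (diameter / 2.0) ** 2

mass_flow = rho * area * v                  # roughly 76,000 kg of air per second
power_in_wind = 0.5 * rho * area * v ** 3   # roughly 2.5 MW in the undisturbed flow
betz_fraction = 16.0 / 27.0                 # ~59%, the Betz limit
max_extractable = betz_fraction * power_in_wind

print(round(mass_flow), round(power_in_wind), round(max_extractable))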

Distribution of wind speed (red) and energy (blue) for all of 2002 at the Lee Ranch facility in Colorado. The histogram shows measured data, while the curve is the Rayleigh model distribution for the same average wind speed. Energy is the Betz limit through a 100 meter diameter circle facing directly into the wind; total energy for the year through that circle was 15.4 gigawatt-hours.
Windiness varies, and an average value for a given location does not alone indicate the amount of energy a wind turbine could produce there. To assess the climatology of wind speeds at a particular location, a probability distribution function is often fitted to the observed data. Different locations will have different wind speed distributions. The distribution model most frequently used to model wind speed climatology is the two-parameter Weibull distribution, because it can conform to a wide variety of distribution shapes, from Gaussian to exponential. The Rayleigh model, an example of which is shown plotted against an actual measured dataset, is a specific form of the Weibull function in which the shape parameter equals 2, and it very closely mirrors the actual distribution of hourly wind speeds at many locations. Because so much power is generated at higher wind speeds, much of the average power available to a windmill comes in short bursts. The 2002 Lee Ranch sample is telling: half of the energy available arrived in just 15% of the operating time. The consequence of this is that wind energy is not dispatchable in the way that fuel-fired power plants are; additional output cannot be supplied in response to load demand. Since wind speed is not constant, a wind generator's annual energy production is never as much as its nameplate rating multiplied by the total hours in a year. The ratio of actual productivity in a year to this theoretical maximum is called the capacity factor. A well-sited wind generator will have a capacity factor of as much as 35%. This compares to typical capacity factors of 90% for nuclear plants, 70% for coal plants, and 30% for oil plants.[9] When comparing the size of wind turbine plants to fueled power plants, it is important to note that 1000 kW of wind-turbine potential power would be expected to produce as much energy in a year as approximately 500 kW of coal-fired generation. Though the short-term (hours or days) output of a wind plant is not completely predictable, the annual output of energy tends to vary only a few percentage points between years. When storage, such as pumped hydroelectric storage, or other forms of generation are used to "shape" wind power (by assuring constant delivery reliability), commercial delivery represents a cost increase of about 25%, yielding viable commercial performance.[1] Electricity consumption can be adapted to production variability to some extent with energy demand management and smart meters that offer variable market pricing over the course of the day. For example, municipal water pumps that feed a water tower do not need to operate continuously and can be restricted to times when electricity is plentiful and cheap, and consumers could choose when to run the dishwasher or charge an electric vehicle.
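A capacity factor can be illustrated by sampling hourly wind speeds from a Rayleigh distribution (the Weibull shape-2 case described above) and pushing them through a simple idealised power curve. Every turbine parameter below (rated power, cut-in, rated and cut-out speeds, annual mean wind speed) is an assumption chosen only for illustration, not a value from the text.

import math, random

def rayleigh_speed(mean_speed):
    # inverse-CDF sampling; for a Rayleigh distribution sigma = mean / sqrt(pi/2)
    sigma = mean_speed / math.sqrt(math.pi / 2.0)
    return sigma * math.sqrt(-2.0 * math.log(1.0 - random.random()))

def turbine_power_kw(v, rated_kw=2000.0, cut_in=3.5, rated_v=13.0, cut_out=25.0):
    if v < cut_in or v > cut_out:
        return 0.0
    if v >= rated_v:
        return rated_kw
    return rated_kw * (v / rated_v) ** 3    # cubic rise up to rated speed

hours = 8760
mean_v = 7.0    # assumed annual mean wind speed at hub height, m/s
energy_kwh = sum(turbine_power_kw(rayleigh_speed(mean_v)) for _ in range(hours))
capacity_factor = energy_kwh / (2000.0 * hours)
print(round(capacity_factor, 2))            # typically lands in the 0.25-0.35 range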

Wind power density classes


Wind maps in the United States and Europe divide areas into seven classes of wind power density, which give an indication of the quality of the wind power resource in the area. Each class is a range of power densities, so that an area rated as class 4, for example, would have an average power density from 200 to 250 W/m2 at 10 m above ground. Generally, economic development of wind power for electricity generation takes place in areas rated class 3 or higher.

Table 1-1: Classes of wind power density at 10 m and 50 m above ground.[10][11]

Wind power   10 m (33 ft) above ground                              50 m (164 ft) above ground
class        Power density (W/m2)   Speed m/s (mph)[12]             Power density (W/m2)   Speed m/s (mph)[12]
1            0 - 100                0 - 4.4 (9.8)                   0 - 200                0 - 5.6 (12.5)
2            100 - 150              4.4 (9.8) - 5.1 (11.5)          200 - 300              5.6 (12.5) - 6.4 (14.3)
3            150 - 200              5.1 (11.5) - 5.6 (12.5)         300 - 400              6.4 (14.3) - 7.0 (15.7)
4            200 - 250              5.6 (12.5) - 6.0 (13.4)         400 - 500              7.0 (15.7) - 7.5 (16.8)
5            250 - 300              6.0 (13.4) - 6.4 (14.3)         500 - 600              7.5 (16.8) - 8.0 (17.9)
6            300 - 400              6.4 (14.3) - 7.0 (15.7)         600 - 800              8.0 (17.9) - 8.8 (19.7)
7            400 - 1000             7.0 (15.7) - 9.4 (21.1)         800 - 2000             8.8 (19.7) - 11.9 (26.6)

Turbine siting

Map of available wind power over the United States; color codes indicate wind power density class.
As a general rule, wind generators are practical where the average wind speed is greater than 20 km/h (5.5 m/s or 12.5 mph). Meteorology plays an important part in determining possible locations for wind parks, though it has significant accuracy limitations; meteorological wind data alone is not usually sufficient for accurate siting of a large wind power project. An 'ideal' location would have a near constant flow of non-turbulent wind throughout the year and would not suffer too many sudden powerful bursts of wind. The wind blows faster at higher altitudes because of the reduced influence of drag from the surface (sea or land) and the reduced viscosity of the air. The increase in velocity with altitude is most dramatic near the surface and is affected by topography, surface roughness, and upwind obstacles such as trees or buildings. Typically, the increase of wind speed with height follows a logarithmic profile that can be reasonably approximated by the wind profile power law, using an exponent of 1/7, which predicts that wind speed rises proportionally to the seventh root of altitude. Doubling the altitude of a turbine, then, increases the expected wind speed by 10% and the expected power by 34%. Wind farms or wind parks often have many turbines installed. Since each turbine extracts some of the energy of the wind, it is important to provide adequate spacing between turbines to avoid excess energy loss. Where land area is sufficient, turbines are spaced three to five rotor diameters apart perpendicular to the prevailing wind, and five to ten rotor diameters apart in the direction of the prevailing wind, to minimize efficiency loss. The "wind park effect" loss can be as low as 2% of the combined nameplate rating of the turbines. Utility-scale wind turbine generators have minimum temperature operating limits which restrict their application in areas that routinely experience temperatures below -20 °C. Wind turbines must be protected from ice accumulation, which can make anemometer readings inaccurate and can cause high structural loads and damage. Some turbine manufacturers offer low-temperature packages at a few percent extra cost, which include internal heaters, different lubricants, and different alloys for structural elements, to make it possible to operate the turbines at lower temperatures. If a low-temperature interval is combined with a low-wind condition, the wind turbine will require station service power, equivalent to a few percent of its output rating, to maintain internal temperatures during the cold snap. For example, the St. Leon, Manitoba project has a total rating of 99 MW and is estimated to need up to 3 MW (around 3% of capacity) of station service power a few days a year for temperatures down to -30 °C. This factor affects the economics of wind turbine operation in cold climates.[citation needed] Rural communities are thought to welcome wind farms because they provide income to farmers and ranchers, skilled jobs, cheap electricity and additional tax revenue to upgrade schools and maintain roads.
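The 10% and 34% figures above follow directly from the 1/7 power law and the cubic dependence of power on speed; the reference speed and heights in the sketch below are arbitrary example values.

def speed_at_height(v_ref, h_ref, h, alpha=1.0 / 7.0):
    # wind profile power law: speed scales with (height ratio) ** alpha
    return v_ref * (h / h_ref) ** alpha

v10 = 6.0                                   # m/s measured at 10 m (example value)
v20 = speed_at_height(v10, 10.0, 20.0)      # about 10% faster at double the height
power_ratio = (v20 / v10) ** 3              # about 1.34, i.e. roughly 34% more power
print(round(v20, 2), round(power_ratio, 2))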

Onshore

Wind turbines near Walla Walla in Washington
Onshore turbine installations tend to be along mountain ridges or passes, or at the top of cliff faces. The change in ground elevation causes wind velocities to be generally higher in these areas, although there may be variation over short distances (a difference of 30 m can sometimes mean a doubling in output). Local winds are often monitored for a year or more with anemometers, and detailed wind maps are constructed, before wind generators are installed.

For smaller installations where such data collection is too expensive or time consuming, the normal way of prospecting for wind-power sites is to directly look for trees or vegetation that are permanently "cast" or deformed by the prevailing winds. Another way is to use a wind-speed survey map, or historical data from a nearby meteorological station, although these methods are less reliable. Sea shores also tend to be windy areas and good sites for turbine installation, because a primary source of wind is convection from the differential heating and cooling of land and sea over the course of day and night. Winds at sea level carry somewhat more energy than winds of the same speed in mountainous areas because the air at sea level is more dense. Unfortunately, windy areas onshore tend to be picturesque, and so there is sometimes opposition to the installation of wind turbines on what would otherwise be ideal sites.

Offshore

Wind blows briskly and smoothly over water since there are no obstructions; the large, slow-turning turbines of this offshore wind farm near Copenhagen take advantage of the moderate yet constant breezes here.
Offshore wind turbines cause less aesthetic controversy since they often cannot be seen from the shore. Because there are fewer obstacles and stronger winds, such turbines also don't need to be built as high into the air. However, offshore turbines are more inaccessible, and offshore conditions are harsh, abrasive, and corrosive, thereby increasing the costs of operation and maintenance compared to onshore turbines. In areas with extended shallow continental shelves and sand banks (such as Denmark), turbines are reasonably easy to install and give good service. At the site shown, the wind is not especially strong but is very consistent. The largest offshore wind turbines in the world are seven 3.6 MW rated machines off the east coast of Ireland, about sixty kilometres south of Dublin. The turbines are located on a sandbank approximately ten kilometres from the coast that has the potential for the installation of 500 MW of generation capacity. As of 2006, the largest offshore wind farm is the Nysted Offshore Wind Farm at Rødsand, located about ten kilometres south of Nysted and thirteen kilometres west of Gedser, Denmark. The wind farm consists of seventy-two turbines of 2.3 MW each, which together produce 165.6 MW of power at rated wind speed.[13] Three offshore wind farms in the United Kingdom are currently operating: North Hoyle (30 x 2 MW), Scroby Sands (30 x 2 MW) and Kentish Flats (30 x 3 MW). Another offshore wind farm, Barrow (30 x 3 MW), is under construction. Under the energy policy of the United Kingdom, further offshore facilities are feasible and expected by the year 2010.

Airborne
Main article: Airborne wind turbine
Wind turbines might also be flown in high speed winds at altitude,[14] although no such systems currently exist in the marketplace. An Ontario company, Magenn Power, Inc., is attempting to commercialize tethered aerial turbines suspended with helium.[citation needed]

Vertical axis turbines


A prototype vertical-axis wind turbine is the Italian project called "KiteGen". It is an innovative design (still under construction) consisting of a wind farm with a vertical spin axis that employs kites to exploit high-altitude winds. The Kite Wind Generator (KWG) or KiteGen is claimed to eliminate all the static and dynamic problems that limit the power (in terms of dimensions) obtainable from traditional horizontal-axis wind turbine generators. According to its developers, a one-gigawatt installation would cost about 1/40 as much as a corresponding nuclear power plant.

Utilization

Large scale
Total installed wind power capacity (end of year and latest estimates), in MW [15]

Rank  Nation          Latest     2005     2004
1     Germany                    18,428   16,629
2     Spain           10,941     10,027   8,263
3     USA             9,971      9,149    6,725
4     India           5,340      4,430    3,000
5     Denmark                    3,128    3,124
6     Italy           1,832      1,717    1,265
7     United Kingdom             1,353    888
8     China                      1,260    764
9     Netherlands                1,219    1,078
10    Japan                      1,040    896
11    Portugal                   1,022    522
12    Austria                    819      606
13    France          918        757      386
14    Canada          1,049      683      444
15    Greece          738        573      473
16    Australia                  572      379
17    Sweden                     510      452
18    Ireland                    496      339
19    Norway                     270      270
20    New Zealand                168      168
21    Belgium                    167      95
22    Egypt                      145      145
23    South Korea                119      23
24    Taiwan                     103      13
25    Finland                    82       82
26    Poland                     73       63
27    Ukraine                    73       69
28    Costa Rica                 70       70
29    Morocco                    64       54
30    Luxembourg                 35       35
31    Iran                       32       25
32    Estonia                    30       3
33    Philippines                29       29
34    Brazil                     29       24
35    Czech Republic             28       17
      World total     ~62,000    58,982   47,671

There are many thousands of wind turbines operating, with a total capacity of 58,982 MW, of which Europe accounts for 69% (2005). One megawatt of power provides for about 160 average American households. Wind power was the most rapidly growing means of alternative electricity generation at the turn of the century and provides a valuable complement to large-scale base-load power stations. World wind generation capacity more than quadrupled between 1999 and 2005. 90% of wind power installations are in the US and Europe, but the share of the top five countries in terms of new installations fell from 71% in 2004 to 55% in 2005. By 2010, the World Wind Energy Association expects 120,000 MW to be installed worldwide.[15]

Germany, Spain, the United States, India and Denmark have made the largest investments in wind generated electricity. Denmark is prominent in the manufacturing and use of wind turbines, with a commitment made in the 1970s to eventually produce half of the country's power by wind. Denmark generates over 20% of its electricity with wind turbines, the highest percentage of any country, and is fifth in the world in total wind power generation (which can be compared with the fact that Denmark is 56th on the general electricity consumption list). Denmark and Germany are leading exporters of large (0.66 to 5 MW) turbines.

Wind accounts for 1% of total electricity production on a global scale (2005). Germany is the leading producer of wind power, with 32% of the total world capacity in 2005 (6% of German electricity); the official target is that by 2010, renewable energy will meet 12.5% of German electricity needs, and it can be expected that this target will be reached even earlier. Germany has 16,000 wind turbines, mostly in the north of the country, including three of the biggest in the world, constructed by the companies Enercon (4.5 MW), Multibrid (5 MW) and Repower (5 MW). Germany's Schleswig-Holstein province generates 25% of its power with wind turbines.

Spain and the United States are next in terms of installed capacity. In 2005, the government of Spain approved a new national goal for installed wind power capacity of 20,000 MW by 2012. According to the American Wind Energy Association, wind generated enough electricity to power 0.4% of total electricity in the US (1.6 million households), up from less than 0.1% in 1999. In 2005, both Germany and Spain produced more electricity from wind power than from hydropower plants. US Department of Energy studies have concluded that wind harvested in just three of the fifty U.S. states could provide enough electricity to power the entire nation, and that offshore wind farms could do the same job.[1] Wind power could grow by 50% in the U.S. in 2006.[16]

India ranks 4th in the world with a total wind power capacity of 5,340 MW. Wind power generates 3% of all electricity produced in India. The World Wind Energy Conference in New Delhi in November 2006 will give additional impetus to the Indian wind industry.[15] In December 2003, General Electric installed the world's largest offshore wind turbines in Ireland, and plans are being made for more such installations on the west coast, including the possible use of floating turbines.

On August 15, 2005, China announced it would build a 1000-megawatt wind farm in Hebei for completion in 2020. China reportedly has set a generating target of 20,000 MW by 2020 from renewable energy sources; it says indigenous wind power could generate up to 253,000 MW. Following the World Wind Energy Conference in November 2004, organised by the Chinese and the World Wind Energy Association, a Chinese renewable energy law was adopted. In late 2005, the Chinese government increased the official wind energy target for the year 2020 from 20 GW to 30 GW.[citation needed]

Another growing market is Brazil, with a wind potential of 143 GW.[17] The federal government has created an incentive program, called Proinfa,[18] to build production capacity of 3300 MW of renewable energy for 2008, of which 1422 MW through wind energy. The program seeks to produce 10% of Brazilian electricity through renewable sources. Brazil produced 320 TWh in 2004. France recently announced a very ambitious target of 12,500 MW installed by 2010.

Over the six years from 2000 to 2005, Canada experienced rapid growth of wind capacity, moving from a total installed capacity of 137 MW to 943 MW and showing a growth rate of 38% and rising.[19] This growth was fed by provincial measures, including installation targets, economic incentives and political support. For example, the government of the Canadian province of Ontario announced on 21 March 2006 that it will introduce a feed-in tariff for wind power, referred to as 'Standard Offer Contracts', which may boost the wind industry across the entire country.[20] In the Canadian province of Quebec, the state-owned hydroelectric utility plans to generate 2000 MW from wind farms by 2013.[21]

Small scale

This rooftop-mounted urban wind turbine charges a 12 volt battery and runs various 12 volt appliances within the building on which it is installed.
Wind turbines have been used for household electricity generation in conjunction with battery storage over many decades in remote areas. Household generator units of more than 1 kW are now functioning in several countries. To compensate for the varying power output, grid-connected wind turbines may utilise some sort of grid energy storage. Off-grid systems either adapt to intermittent power or use photovoltaic or diesel systems to supplement the wind turbine. Wind turbines range from small four hundred watt generators for residential use to several-megawatt machines for wind farms and offshore use. The small ones have direct drive generators, direct current output, aeroelastic blades, lifetime bearings and use a vane to point into the wind; the larger ones generally have geared power trains, alternating current output, flaps and are actively pointed into the wind. Direct drive generators and aeroelastic blades for large wind turbines are being researched, and direct current generators are sometimes used. In urban locations, where it is difficult to obtain large amounts of wind energy, smaller systems may still be used to run low power equipment. Distributed power from rooftop-mounted wind turbines can also alleviate power distribution problems, as well as provide resilience to power failures. Equipment such as parking meters or wireless internet gateways may be powered by a wind turbine that charges a small battery, replacing the need for a connection to the power grid and/or maintaining service despite possible power grid failures.

Small-scale wind power in rural Indiana.
Small-scale turbines are available that are approximately 7 feet (2 m) in diameter and produce 900 watts. Units are lightweight, e.g. 16 kilograms (35 lbs), allowing rapid response to wind gusts typical of urban settings and easy mounting much like a television antenna. It is claimed that they are inaudible even a few feet under the turbine.[citation needed] Dynamic braking regulates the speed by dumping excess energy, so that the turbine continues to produce electricity even in high winds. The dynamic braking resistor may be installed inside the building to provide heat (during high winds when more heat is lost by the building, while more heat is also produced by the braking resistor). The proximal location makes low voltage (12 volt, or the like) energy distribution practical. An additional benefit is that owners become more aware of electricity consumption, possibly reducing their consumption down to the average level that the turbine can produce. According to the World Wind Energy Association, it is difficult to assess the total number or capacity of small-scale wind turbines, but in China alone, there are roughly 300,000 small-scale wind turbines generating electricity.[15]

Debate for and against wind power


Arguments for and against wind power are listed below.

Arguments of supporters

Erection of an Enercon E70-4
Supporters of wind energy state that:

Pollution
Wind power is a renewable resource, which means using it will not deplete the earth's supply of fossil fuels. It is also a clean energy source, and operation does not produce carbon dioxide, sulfur dioxide, mercury, particulates, or any other type of air pollution, as conventional fossil fuel power sources do. During manufacture of the wind turbine, steel, concrete, aluminium and other materials do have to be made and transported using energy-intensive processes, generally based on fossil energy sources; nevertheless, the energy used in manufacturing a wind turbine is earned back within four to six months of operation.

Long-term potential

Wind's long-term technical potential is believed to be five times current global energy consumption, or four times current electricity demand. This would require covering ~13% of all land area, or the land area with class 3 or greater potential at a height of 80 meters, and assumes a placement of six large wind turbines per square kilometer on land. Offshore resources experience mean wind speeds about 90% greater than those on land, and since available power grows with the cube of wind speed (1.9 cubed is roughly 7), offshore resources could contribute about seven times more energy than land.[22][23] This number could increase further with higher-altitude or airborne wind turbines.[24]


Coping with intermittency

As the fraction of energy produced by wind ("penetration") increases, different technical and economic factors affect the need, if there is one, for grid energy storage facilities. Large networks, connected to multiple wind plants at widely separated geographic locations, may accept a higher penetration of wind than small networks or those without storage systems or economical methods of compensating for the variability of wind. In systems with significant amounts of existing pumped storage (e.g. UK, eastern US) this proportion may be higher. Isolated, relatively small systems with only a few wind plants may only be stable and economic with a lower fraction of wind energy (e.g. Ireland). On most large power systems a moderate proportion of wind generation can be connected without the need for storage. For larger proportions, storage may be economically attractive or even technically necessary. Long-term storage of electrical energy involves substantial capital costs, space for storage facilities, and some portion of the stored power will be lost during conversion and transmission. The percentage retrievable from stored power is called the "efficiency of storage." The cost incurred to "shape" intermittent wind power for reliable delivery is about a 20% premium for most wind applications on large grids, but approaches 50% of the cost of generation when wind comprises more than 70% of the local grid's input power. See: Grid energy storage

Cement works in New South Wales, Australia. Energy-intensive processes like this could utilize burst electricity from wind.

Electricity demand is variable but generally very predictable on larger grids; errors in demand forecasting are typically no more than 2%. Because conventional powerplants can drop off the grid within a few seconds, for example due to equipment failures, in most systems the output of some coal or gas powerplants is intentionally part-loaded to follow demand and to replace rapidly lost generation. The ability to follow demand (by maintaining constant frequency) is termed "response." The ability to quickly replace lost generation, typically within timescales of 30 seconds to 30 minutes, is termed "spinning reserve." Nuclear power plants in contrast are not very flexible and are not intentionally part-loaded.

A power plant that operates in a steady fashion, usually for many days continuously, is termed a "base load" plant.

What happens in practice, therefore, is that as the power output from wind varies, part-loaded conventional plants, which must be there anyway to provide response (due to continuously changing demand) and reserve, adjust their output to compensate; they do this in response to small changes in the frequency (nominally 50 or 60 Hz) of the grid. In this sense wind acts like "negative" load or demand. The maximum proportion of wind power allowable in a power system will thus depend on many factors, including the size of the system, the attainable geographical diversity of wind, the conventional plant mix (coal, gas, nuclear) and seasonal load factors (heating in winter, air-conditioning in summer) and their statistical correlation with wind output. For most large systems the allowable penetration fraction (wind nameplate rating divided by system peak demand) is thus at least 15% without the need for any energy storage whatsoever. Note that the interconnected electrical system may be much larger than the particular country or state (e.g. Denmark, California) being considered. It should also be borne in mind that wind output, especially from large numbers of turbines and farms, can be predicted with a fair degree of confidence many hours ahead using weather forecasts. The allowable penetration may of course be further increased by increasing the amount of part-loaded generation available, or by using energy storage facilities, although if purpose-built for wind energy these may significantly increase the overall cost of wind power.

Existing European hydroelectric power plants can store enough energy to supply one month's worth of European electricity consumption. Improvement of the international grid would allow using this in the relatively short term at low cost, as a matching variable complementary source to wind power. Excess wind power could even be used to pump water up into collection basins for later use.

Energy Demand Management or Demand-Side Management refers to the use of communication and switching devices which can release deferrable loads quickly to correct supply/demand imbalances. Incentives can be created for the use of these systems, such as favorable rates or capital cost assistance, encouraging consumers with large loads to take advantage of renewable energy by adjusting their loads to coincide with resource availability. For example, pumping water to pressurize municipal water systems is an electricity-intensive application that can be performed when electricity is available.[25] Real-time variable electricity pricing can encourage all users to reduce usage when the renewable sources happen to be at low production.
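The "negative load" behaviour described above can be made concrete with a toy dispatch table: the part-loaded conventional fleet simply follows demand minus wind output. All numbers below are invented for illustration.

demand_mw = [900, 950, 1000, 1050, 1000, 950]   # system demand over successive hours
wind_mw = [120, 60, 200, 30, 150, 90]           # wind output, varying with the weather

for d, w in zip(demand_mw, wind_mw):
    net_load = d - w    # what the part-loaded conventional plants must supply
    print(f"demand {d} MW, wind {w} MW -> conventional output {net_load} MW")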

In energy schemes with a high penetration of wind energy, secondary loads, such as desalination plants and electric boilers, may be encouraged because their output (water and heat) can be stored. The utilization of "burst electricity", where excess electricity is used on windy days for opportunistic purposes, greatly improves the economic efficiency of wind turbine schemes. An ice storage device has been invented which allows cooling energy to be stored while the resource is available and dispatched as air conditioning during peak hours. Multiple wind farms spread over a wide geographic area and gridded together produce power much more constantly. Electricity produced from solar energy could be a counterbalance to the fluctuating supplies generated from wind: it tends to be windier at night and during cloudy or stormy weather, so there is likely to be more sunshine when there is less wind.


Ecology

Because it uses energy already present in the atmosphere, and can displace fossil-fuel-generated electricity (with its accompanying carbon dioxide emissions), wind power mitigates global warming. If the entire world's nameplate electrical demand expected in 2010 were served from wind power alone, the amount of energy extracted from the atmosphere would be less than the increase added by radiative forcing from additional carbon dioxide at 2000 levels above those of the year 1500, before fossil fuel consumption became significant.[citation needed] The energy payback ratio (the ratio of energy produced to energy expended in construction and operation) for wind turbines is between 17 and 39; that is, over its lifetime a wind turbine produces 17 to 39 times as much energy as is needed for its manufacture, construction, operation and decommissioning. This compares with 11 for coal power plants and 16 for nuclear power plants.[26] The energy consumed in the production, installation, operation and decommissioning of a wind turbine is usually earned back within 3 months of operation.[27] Unlike fossil or nuclear power stations, which circulate large amounts of water for cooling, wind turbines do not need water to generate electricity. Studies show that the number of birds killed by wind turbines is negligible compared to the number that die as a result of other human activities such as traffic, hunting, power lines and high-rise buildings, and especially compared to the environmental impacts of using non-clean power sources. For example, in the UK, where there are a few hundred turbines, about one bird is killed per turbine per year; 10 million per year are killed by cars alone.[28] In the United States, turbines kill 70,000 birds per year, compared to 57 million killed by cars and 97.5 million killed by collisions with plate glass.[29] Another study suggests that migrating birds adapt to obstacles; those birds which don't modify their route and continue to fly through a wind farm are capable of avoiding the windmills,[30] at least in the low-wind, non-twilight conditions studied. In the UK, the Royal Society for the Protection of Birds (RSPB) concluded that "The available evidence suggests that appropriately positioned wind farms do not pose a significant hazard for birds."[31] It notes that climate change poses a much more significant threat to wildlife, and therefore supports wind farms and other forms of renewable energy.

Clearing of wooded areas is often unnecessary, because it is common for farmers to lease their land to companies building wind farms. Farmers receive annual lease payments of two thousand to five thousand dollars per turbine.[32] The land can still be used for farming and cattle grazing. The ecological and environmental costs of wind plants are paid by those using the power produced, with no long-term effects on climate or local environment left for future generations. Less than 1% of the land is used for foundations and access roads; the other 99% can still be used for farming.[33] Turbines can also be sited on land left unused by techniques such as center-pivot irrigation. After wind turbines are decommissioned, even the foundations are removed.


Economic feasibility

Conventional and nuclear power plants receive massive amounts of direct and indirect governmental subsidies. If the comparison is made on real production costs, wind energy is competitive in many cases; if the full costs (environmental, health, etc.) are taken into account, wind energy is competitive in most cases. Furthermore, wind energy costs are continuously decreasing due to technology development and increasing scale. Nuclear power plants receive special immunity from the disasters they may cause, which prevents victims from recovering the cost of their continued health care from those responsible, even in the case of criminal malfeasance. Conventional and nuclear plants also suffer sudden, unpredictable outages (see above). Statistical analysis shows that 1000 MW of wind power can replace 300 MW of conventional power.
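A minimal back-of-the-envelope expression of the capacity-credit claim above, using only the figures stated in the text:

```python
wind_nameplate_mw = 1_000.0        # wind capacity from the claim above
conventional_replaced_mw = 300.0   # conventional capacity it can replace

capacity_credit = conventional_replaced_mw / wind_nameplate_mw
print(f"Implied capacity credit: {capacity_credit:.0%}")  # 30%
```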


Aesthetics

Wind power is nothing new. Windmills at La Mancha, Spain.

Improvements in blade design and gearing have quietened modern turbines to the point where a normal conversation can be held underneath one. Newer wind farms have more widely spaced turbines due to the greater power of the individual machines, and so look less cluttered. Wind turbines can be positioned alongside motorways, significantly reducing aesthetic concerns. The aesthetics of wind turbines have been compared favourably to those of pylons from conventional power stations. Areas under wind farms can be used for farming, and are protected from development. Offshore sites have on average a higher energy yield than onshore sites, and often cannot be seen from the shore.


Arguments of opponents

Some of the over 4000 wind turbines at Altamont Pass, in California. Developed during a period of tax incentives in the 1980s, this wind farm has more turbines than any other in the United States. These turbines are only a few tens of kilowatts each. They cost several times more per kWh and spin much more quickly than modern megawatt turbines, endangering birds and making noise.

Economics

To compete with traditional sources of energy, wind power often receives financial incentives. In the United States, wind power receives a tax credit of 1.9 cents per kilowatt-hour produced, with a yearly inflationary adjustment. However, in 2004 when the U.S. production tax credit had lapsed for nine months, wind power was still a rapidly growing form of electrical generation, calling into question the value of these production tax credits. Another tax benefit is accelerated depreciation. Many American states also provide incentives, such as exemption from property tax, mandated purchases, and additional markets for "green credits." Countries such as Canada and Germany also provide tax credits and other incentives for wind turbine construction. Many potential sites for wind farms are far from demand centers, requiring substantially more money to construct new transmission lines and substations.
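As an illustration of what the 1.9-cent-per-kWh production tax credit is worth for a single machine, here is a brief sketch; the 2 MW rating and 30% capacity factor are assumed example values, not figures from the text.

```python
PTC_USD_PER_KWH = 0.019   # production tax credit cited above
NAMEPLATE_KW = 2_000      # assumed turbine rating (2 MW)
CAPACITY_FACTOR = 0.30    # assumed annual capacity factor
HOURS_PER_YEAR = 8_760

annual_kwh = NAMEPLATE_KW * HOURS_PER_YEAR * CAPACITY_FACTOR
annual_credit = annual_kwh * PTC_USD_PER_KWH
print(f"Annual generation: {annual_kwh:,.0f} kWh")
print(f"Annual tax credit: ${annual_credit:,.0f}")  # on the order of $100,000
```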


Yield

The goals of renewable energy development are reduction of reliance on fossil and nuclear fuels, reduction of greenhouse gas and other emissions, and establishment of more sustainable sources of energy. Some critics question wind energy's ability to significantly move society towards these goals. They point out that a 25-30% annual load factor is typical for wind facilities. The intermittent and non-dispatchable nature of wind turbine power requires that "spinning reserves" be kept burning for supply security. The fluctuation in wind power requires more frequent load ramping of such plants to maintain grid system frequency. This can force operators to run conventional plants below optimal thermal efficiency, resulting in greater emissions. A recent European Nuclear Society study estimates that the equivalent of one third of the energy saved by wind generation is lost to these inefficiencies.[citation needed]
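To put the 25-30% annual load factor in perspective, a brief sketch of the resulting annual output; the 2 MW nameplate rating is an assumed example value.

```python
NAMEPLATE_MW = 2.0        # assumed turbine rating
HOURS_PER_YEAR = 8_760

full_output_mwh = NAMEPLATE_MW * HOURS_PER_YEAR   # 17,520 MWh if it ran flat out all year
for load_factor in (0.25, 0.30):
    annual_mwh = full_output_mwh * load_factor
    print(f"Load factor {load_factor:.0%}: ~{annual_mwh:,.0f} MWh per year")
# 25% -> ~4,380 MWh, 30% -> ~5,256 MWh
```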

CO2 Emissions

Electric power production is only part (about 39% in the USA[34]) of a country's energy use, so wind power alone does little to mitigate the larger part of the effects of energy use. For example, despite more than doubling the installed wind power capacity in the U.K. from 2002 to 2004, wind power contributed less than 1% of the national electricity supply,[5] and that country's CO2 emissions continued to rise in 2002 and 2003 (Department of Trade and Industry). Six of the U.K.'s nuclear reactors were closed in this period.[35] Groups such as the UN's Intergovernmental Panel on Climate Change state that the desired mitigation goals can be achieved at lower cost and to a greater degree by continued improvements in general efficiency in building, manufacturing, and transport than by wind power. Such statements, however, do not take into account long-term factors such as steeply rising prices for oil, gas, uranium and other fuels. Also, once the investment in a wind turbine is made, the cost of the electricity it produces is effectively fixed for its roughly 20-year operating life.


Ecological footprint

The clearing of trees may be necessary since obstructions reduce yield. Wind turbines should ideally be placed about ten times their rotor diameter apart in the direction of prevailing winds and five times their diameter apart in the perpendicular direction for minimal losses due to wind park effects. As a result, wind turbines require roughly 0.1 square kilometres of unobstructed land per megawatt of nameplate capacity. A wind farm that produces the energy equivalent of a conventional power plant might have turbines spread out over an area of approximately 200 square kilometres. A nuclear plant of comparable capacity would be surrounded by a 100 square kilometre exclusion zone, and strip mines supplying coal power plants claim large tracts of land. Though restrictions in land use possibilities are different in the three cases, the area needed for wind farming is not excessive compared to conventional power.[36]
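The spacing rule above (ten rotor diameters downwind by five crosswind) can be turned into a rough area estimate; the 90 m rotor diameter and 3 MW rating below are assumed values for a modern turbine, not figures from the text.

```python
ROTOR_DIAMETER_M = 90.0   # assumed rotor diameter
RATED_POWER_MW = 3.0      # assumed nameplate rating

# 10 diameters downwind x 5 diameters crosswind per turbine
area_per_turbine_km2 = (10 * ROTOR_DIAMETER_M) * (5 * ROTOR_DIAMETER_M) / 1e6
area_per_mw_km2 = area_per_turbine_km2 / RATED_POWER_MW

print(f"Area per turbine: {area_per_turbine_km2:.2f} km^2")  # ~0.41 km^2
print(f"Area per MW:      {area_per_mw_km2:.2f} km^2")       # ~0.14 km^2, same order as the ~0.1 km^2/MW above
```

Only a small fraction of this footprint is physically occupied by foundations and access roads; the spacing mainly determines how far apart the turbines must stand.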

A wind turbine at Greenpark, Reading, England

Wind turbines kill birds, especially birds of prey. Siting generally takes into account known bird flight patterns, but most paths of bird migration, particularly for birds that fly by night, are unknown. Although a Danish survey in 2005 (Biology Letters 2005:336) showed that less than 1% of migrating birds passing a wind farm in Rønde, Denmark, got close to collision, the site was studied only during low-wind, non-twilight conditions. A survey at Altamont Pass, California, conducted by the California Energy Commission in 2004 showed that turbines killed 4,700 birds annually (1,300 of which were birds of prey). Radar studies of proposed sites in the eastern U.S. have shown that migrating songbirds fly well within the reach of large modern turbines. Many more birds are killed by cars, and this is a widely accepted cost.

A wind farm in Norway's Smøla islands is reported to have destroyed a colony of sea eagles, according to the British Royal Society for the Protection of Birds.[citation needed] The society said turbine blades killed nine of the birds in a 10-month period, including all three of the chicks that fledged that year. Norway is regarded as the most important place for white-tailed eagles. In 1989, Smøla was designated as having one of the highest densities of white-tailed eagles in the world. But the society now fears the 100 or so more wind farms planned in the rest of Norway could have a similar impact. "Smøla is demonstrating the damage that can be caused by a wind farm in the wrong location. The RSPB strongly supports renewable energies including wind, but the deaths of adult birds and the three young born last year make the prospects for white-tailed eagles on the island look bleak," said Dr. Rowan Langston, senior research biologist at the RSPB.

The number of bats killed by existing facilities has troubled even industry personnel.[37] A six-week study in 2004 estimated that over 2,200 bats were killed by 63 turbines at two sites in the eastern US.[38] This study suggests some site locations may be particularly hazardous to local bat populations, and that more research is urgently needed. Migratory bat species appear to be particularly at risk, especially during key movement periods (spring and, more importantly, fall). Lasiurines such as the hoary bat (Lasiurus cinereus) and red bat (Lasiurus borealis), along with the semi-migratory silver-haired bat (Lasionycteris noctivagans), appear to be most vulnerable at North American sites. Almost nothing is known about the current populations of these species or the impact of mortality at wind power sites on bat numbers.

Scalability

To meet worldwide energy demand sustainably in the future, a much larger number of turbines than exist today will be required. Naturally this will affect more people and more wildlife habitat.


Aesthetics

Perceptions that wind turbines are noisy and contribute to "visual pollution" create resistance to the establishment of land-based wind farms in some places. Moving the turbines offshore mitigates the problem, but offshore wind farms are more expensive to maintain, and transmission losses increase with the longer power lines. One response to such objections is the early and close involvement of the local population, as recommended in the sustainability guidelines of the World Wind Energy Association[15], ideally through community or citizen ownership of wind farms. Some residents near wind turbines complain of "shadow flicker", the alternating pattern of sun and shade caused by a rotating turbine casting a moving shadow over residences. Efforts are made when siting turbines to avoid this problem.

History of Wind Energy



The wind has played a long and important role in the history of human civilization. The first known use of wind dates back 5,000 years to Egypt, where boats used sails to travel from shore to shore. The first true windmill, a machine with vanes attached to an axis to produce circular motion, may have been built as early as 2000 B.C. in ancient Babylon. By the 10th century A.D., windmills with wind-catching surfaces as long as 16 feet and as high as 30 feet were grinding grain in the area now known as eastern Iran and Afghanistan.

The western world discovered the windmill much later. The earliest written references to working wind machines date from the 12th century. These too were used for milling grain. It was not until a few hundred years later that windmills were modified to pump water and reclaim much of Holland from the sea.

The familiar multi-vane "farm windmill" of the American Midwest and West was invented in the United States during the latter half of the 19th century. In 1889 there were 77 windmill factories in the United States, and by the turn of the century, windmills had become a major American export. Until the diesel engine came along, many transcontinental rail routes in the U.S. depended on large multi-vane windmills to pump water for steam locomotives. Farm windmills are still being produced and used, though in reduced numbers, and show no sign of becoming obsolete. They are best suited for pumping ground water in small quantities to livestock water tanks; without the water supplied by the multi-vane windmill, beef production over large areas of the West would not be possible.

In the 1930s and 1940s, hundreds of thousands of electricity-producing wind turbines were built in the U.S. They had two or three thin blades which rotated at high speeds to drive electrical generators. These wind turbines provided electricity to farms beyond the reach of power lines and were typically used to charge storage batteries, operate radio receivers and power a light bulb or two. By the early 1950s, however, the extension of the central power grid to nearly every American household, via the Rural Electrification Administration, eliminated the market for these machines. Wind turbine development lay nearly dormant for the next 20 years.

Following the OPEC oil embargo of 1973, interest in wind energy resurfaced in response to climbing energy prices and questionable availability of conventional fuels. Federal and state tax incentives and aggressive government research programs triggered the development and use of many new wind turbine designs. Some experimental models were very large: with a blade diameter of 300 feet, a single machine was able to supply enough electricity for 700 homes. A wide variety of small-scale models also became available for home, farm and remote uses. In the 1970s there were nearly 50 domestic wind turbine manufacturers. Since then, the wind industry has undergone massive consolidation, resulting in fewer than a dozen domestic manufacturers by 1997, roughly half of which deal exclusively with small-scale models. This consolidation followed the expiration of the tax incentives in the mid-1980s and the easing of the energy crisis, both of which reduced market demand. Competition, which weeded out inferior products, further contributed to consolidation.

Meanwhile, a new market for wind systems, "wind farms," began in the early 1980s. This market evolved thanks in part to a new federal law, the Public Utility Regulatory Policies Act of 1978, which requires utilities to buy electricity from private, non-utility individuals and developers. California has been home to most wind farm development due to very attractive electricity buy-back rates and the availability of windy, sparsely populated mountain passes. As of 1997, nearly 2% of California's electricity was generated by the wind. As the cost of the technology has continued to decline, other areas of the country, namely the Great Plains, Pacific Northwest and Northeast, are beginning to see greater wind farm development.

500-900 AD: The first windmills were developed in Persia for pumping water and grinding grain.

Late 1880s: The development of steel blades made windmills more efficient. Six million windmills sprang up across America as settlers moved west.

1888: Charles F. Brush used the first large windmill to generate electricity in Cleveland, Ohio. The windmill began to be called a "wind turbine." In later years, General Electric acquired Brush's company, Brush Electric Co.

1941: On a hilltop in Rutland, Vermont, the "Grandpa's Knob" wind turbine supplied power to the local community for several months during World War II.

1973: The Organization of Petroleum Exporting Countries (OPEC) oil embargo caused the price of oil to rise sharply. High oil prices increased interest in other energy sources, such as wind energy.

1974: In response to the oil crisis, the National Aeronautics and Space Administration (NASA) developed a two-bladed wind turbine at the Lewis Research Center in Cleveland, Ohio. Unfortunately, the design did not include a "teetering hub", a feature very important for a two-bladed turbine to function properly.

1977-1981: New types of two-bladed turbines (MOD-0, MOD-1, MOD-2) were developed and tested. The first wind turbine rated over 1 megawatt, the MOD-1, began operating in 1979 with a 2-megawatt capacity rating. The improved design of the MOD-2s included a "teetering hub." The MOD-2s operated for several years on the Columbia River and could each power up to 630 households for a year.

1978: The Department of Energy's (DOE) budget for wind power research was $59.6 million, the first time it exceeded $50 million. The Public Utility Regulatory Policies Act (PURPA) required utility companies to buy a percentage of their electricity from non-utility power producers. PURPA has been an effective way of encouraging the use of renewable energy.

1980: The Crude Oil Windfall Profits Tax Act further increased tax credits for businesses using renewable energy. The federal tax credit for wind energy reached 25% and rewarded businesses choosing to use renewable energy.

1983: Because of a need for more electricity, California utilities contracted with facilities that qualified under PURPA to generate electricity independently. The price set in these contracts was based on the costs saved by not building the planned coal plants.

1985: Many wind turbines had been installed in California in the early 1980s to help meet growing electricity needs and take advantage of incentives. By 1985, California wind capacity exceeded 1,000 megawatts, enough power to supply 250,000 homes. These wind turbines were very inefficient.

1988: Many of the hastily installed turbines of the early 1980s were removed and later replaced with more reliable models.

1989: Throughout the 1980s, DOE funding for wind power research and development declined, reaching its low point in fiscal year 1989.

1990: More than 2,200 megawatts of wind energy capacity was installed in California, more than half of the world's capacity at the time.

1992: The Energy Policy Act reformed the Public Utility Holding Company Act and many other laws dealing with the electric utility industry. It also authorized a production tax credit of 1.5 cents per kilowatt-hour for wind-generated electricity.

1993: U.S. Windpower developed one of the first commercially available variable-speed wind turbines, the 33M-VS, over a period of five years. The final prototype tests were completed in 1992. The $20 million project was funded mostly by U.S. Windpower, but also involved the Electric Power Research Institute (EPRI), Pacific Gas & Electric, and Niagara Mohawk Power Company.

1995: The Federal Energy Regulatory Commission (FERC) prohibition on QF contracts above avoided cost was implemented. In a ruling against the California Public Utility Commission, FERC refused to allow a bidding procedure that would have had the effect of allowing rates above avoided cost from renewable QFs. The DOE wind program lowered technology costs: the DOE advanced turbine program, funded at $49 million, led to new turbines with energy costs of 5 cents per kilowatt-hour of electricity generated.

Mid-1990s: Standard Offer Number 4 contract rollovers in California led to lower rates being paid to the Qualifying Facilities (QFs). The ten-year QF contracts written during the mid-1980s (at rates of 6 cents per kilowatt-hour and higher) began rolling over in the mid-1990s to match the avoided costs (about 3 cents per kilowatt-hour). This "11th-year cliff" created financial hardships for most QFs on these contracts. Kenetech, the producer of most of the US-made wind generators, faced financial difficulties, sold off most of its assets and stopped making wind generators.

1999-2000: Installed capacity of wind-powered electricity generating equipment exceeded 2,500 megawatts. Contracts for new wind farms continued to be signed.

2005: The Energy Policy Act of 2005 strengthened incentives for wind and other renewable energy sources.
