ESSAY WRITING
Writing an essay takes time: time to gather information,
determine a topic, decide what directions you want to take, organize
your evidence, write the first draft, revise and edit the next draft or
drafts, and then prepare a final copy that says clearly and
convincingly what you want to say. You will be surprised at the
difference taking your time makes in the quality of your writing.
Time taken to plan an essay saves time staring at a blank sheet of
paper with no clear ideas in mind. The guide below outlines a writing
process.
Thinking about an essay topic
Before you start to write, study your essay topic carefully; be
sure you understand it and know what you're supposed to be doing
with it. Here's a sample essay topic:
Describe the image of women presented in television advertising. Do
you think it is fair or unfair?
Understand the topic: If you've done sufficient research, you
should have a fairly good understanding of the topic you have been
given or have chosen. Sometimes, as in this example, the factual
content of an essay topic will be fairly simple; at other times, you'll
have to read the material very carefully to be sure that you
understand it fully.
Be sure you know what you are to do with the topic. In the
topic given above you're asked to do two things: describe the image
of women presented in TV ads, and give your own opinion about its
fairness.
These two processes are different. When you describe
something, you don't judge it. In planning your essay, you would
first work at gathering evidence showing how women are portrayed
in TV ads. Only when you have presented as complete and fair a
picture as is possible in a short essay will you be able to argue
convincingly the second part of the essay topic: your personal
evaluation of the image.
In the final version of your essay you might not present the
materials in this order. You might decide to start your essay with
your evaluation and then go on to support it with evidence. But in
thinking about a topic, act like a scientist: always put together the
evidence first, and pass the judgments only after you've surveyed
your collection of data. If an assigned topic seems ambiguous, vague,
or too broad, make clear your interpretation of the topic before you
try to address it. Sometimes you'll find a vague word or phrase in an
essay topic and you will be forced to decide what the term really
means. This is not a trick; rather it's an invitation to think carefully
about a topic. Watch out for phrases like "more likely," "better,"
"desirable," and so on. Think through the idea: Better than what?
Desirable for what? In our example you would probably have to
decide what you mean by "fair": impartially accurate, presenting all
sides, helpful, or what?
Make sure that you understand the direction words in the
topic: Every essay question or topic has a controlling
idea expressed usually in one or two key words. Each of these words
has a precise meaning. In the previous example, you saw what the
word "describe" meant precisely. Here are a few more common
direction words:
• Compare. Examine a set of items and find the resemblances
among them. You may mention differences, but stress
similarities.
• Contrast. Stress the differences.
• Define. Give a short, clear statement of meaning.
• Discuss. Present various sides and views of a matter, give a
detailed answer.
• Evaluate. Appraise something. Do not simply present the
facts, but say why some appear more important, more valid
than others.
• Justify. Show the grounds for any judgments you've made.
• Summarize. Give the main points or facts in short form.
Leave out all details or illustrations.
• Trace. Describe a sequence of events or development from
the point of origin.
Directions may come in many other forms: single words like
"analyse," "criticize," and so on, or whole phrases like "Make up
your mind whether...," "Differentiate among..." Be sure you
understand what they mean before you begin to write.
Brainstorming
Your first step in the actual writing of an essay is to note
down quickly all the ideas that come into your head as you think
about the topic. Write everything down; don't pass judgments on
your own ideas yet. Something that looks unimportant at the outset
may lead you to think of something very important later on. This list
is for your eyes only and provides an invaluable reference as you
start to write the first draft of your essay. The ideas you write down
need not be in any order. This process is also called free writing or
prewriting.
Let's brainstorm on the sample topic given above:
• Older women are wise: women helping men in laundromat,
Mrs. Olson
• Men experts who have to help women solve problems like
"ring around the collar"
• Women concerned only/mostly with family and floor wax
• Women take care of themselves for men who possess them:
"She takes Geritol; I think I'll keep her"
• Some non-stereotyped woman driving Porsche, jockey, oil
engineer
• All professional women very young and beautiful
Continue brainstorming on your own. Add precise details:
names of products, actors, actresses. Keep adding to the list
whenever an idea flashes through your head at work, while reading,
after dinner while you're doing dishes. Don't erase anything! You're
not judging yet. You're amassing a store of information and ideas
that you can draw on later.
Formulating a thesis
Now you're ready to start deciding the direction of your
essay.
Begin by testing different possibilities. Here are three
possible directions to take our women-in-TV-ads topic:
• Although there are some positive images of women on TV,
in general the ads present an unfair view of women as
dependent, shallow materialists.
• While the TV ads are sometimes annoying in their
shallowness, they reflect fairly accurately the concerns of a
majority of American women.
• Since ads have a powerful influence beyond simply selling
products, it is only fair that they be used to improve the
position of women by showing their strengths as well as
their weaknesses.
Notice how the point of your essay takes time to pin
down. It is not just a simple restatement of a
question. In fact, you can try to write out a statement of purpose
without repeating the vocabulary of a question:
Most women find television ads demeaning and frustrating
because they depict American women as dependent, shallow
materialists.
Just be sure that the statement is a good response to the
topic.
Next, tentatively decide on a direction to take. Check your
brainstorming lists of ideas and facts to see which of the statements
you've written down has the most support in the evidence you've
gathered. Which of the directions you've explored sounds most
interesting or convincing, or for which you have gathered the most
ideas? This becomes the point you're going to make in your essay,
the reason you want someone to read what you've written: your
possible thesis.
Remember, however, that as you continue to write the essay,
new ideas will come to you. You wrote the original thesis statement;
you can change it at any time and must if you find that you can no
longer support your original position. (This change in your thesis
may start as early as the initial organizing of evidence, taken up in
the next section.)
You may end up writing an essay that says just the opposite
of what you started with; this happens, so don't let it worry you.
However, writing out a thesis statement in the early stage of writing
lets you start organizing your materials efficiently.
Organizing your evidence
Remember that evidence is fact, not opinion! Go over your
brainstorming notes and see whether you can group the mess of
ideas, examples, and questions into categories. In coming up with
your thesis, you probably started this process, thinking about which
examples would support your thesis and which might go against it.
You can use different coloured pens to underscore in your notes
those ideas that go together: examples of women as shallow all
underlined in red, for example. Or you may cut up your notes and
arrange them on a table top, like playing cards, trying to find a
pattern. As you sort your notes, think about the points you want to
stress in your essay.
In a 500- or 1000-word paper you're not going to be able to
make more than two or three points. Each of these points must be
supported with evidence (facts), and defended against possible attack
from a reader who thinks differently from you. Initially, think in
terms of one major point to a paragraph. You may end up with
something like this:
• Women are often portrayed as simple-minded in TV ads.
• The few women who are shown as intelligent are
unconvincing.
• To be fair to women, ads should portray them more
naturalistically, as men more often are portrayed.
Keep in mind that you're writing a short essay, not the final word on
the image of women in ads. For a short paper, select a limited set of
ideas and the best possible examples to illustrate them. Other ideas
and examples will have to be discarded, but don't destroy any notes
until you've finished the paper; if you change your thesis, you may
need to go back to the notes for fresh ideas.
Writing the rough draft
Now you are ready to start composing the first version of the
actual essay. Don't begin by constructing a magnificent introductory
paragraph. The chances are that by the time you finish writing the
essay, it will no longer fit what you've said!
On the other hand, don't worry if you find it hard to begin
writing. Start with the second paragraph. Or write "Dear Nick" at the
top of the page and pretend you're writing a letter to an intelligent
and sympathetic friend who really wants to know about this subject.
Write anything, in fact, to get started -- even an elaborate
introductory paragraph that you're prepared to throw out if
necessary!
Professional writers often have a hard time starting a piece
of writing: one author sits in front of a typewriter and does not allow
himself to get up, even to go to the bathroom, unless he has written
at least four sentences. Then, if he has to interrupt his work, he can
pick it up again easily. The successful writer Gene Fowler writes:
"Writing is easy; all you do is sit staring at the blank piece of paper
until the drops of blood form on your forehead." Solution? Get
something down so that the sheet is no longer blank!
As you compose the rough draft, remember that it is rough.
Concentrate on getting your ideas down in some logical order. Don't
worry about grammar or spelling or even final organization: that is
another process -- revising and editing. The rough draft is your first
effort at composing your ideas, not the final version of your essay.
"The idea," says novelist Bernard Malamud, "is to get the pencil
moving quickly."
Revising and editing
Once you have composed a rough draft, you're ready to
revise and edit it -- two separate and important steps that most
writers find they work on at the same time. When you revise, you are
looking for ways to improve the content; when you edit, you are
looking for technicalities of writing (grammar, spelling, and so
forth).
Do yourself a favour: make a clean copy of your rough draft
before revising and editing the essay. It makes both tasks easier. If
you are composing on a computer, make a printout at this point. If
you compose by hand, make a neatly written copy. Why? A clean
copy allows you to see everything more clearly. Problems in
organization of ideas or of facts supporting those ideas will stand out
more clearly during revising. And problems of sentence structure,
punctuation, even spelling, become more apparent during editing.
Revising for content. On the clean copy of your draft, first
check the sense of what you've said. Have you made the point you
set out to prove, or have you gone off into other areas? Have you
repeated yourself? Have you made a point, yet not backed it up with
evidence? Have you put in unnecessary evidence for a very simple
point? Are things in the right order? Does everything you say make
sense? Try writing a brief outline of your essay to see if it all hangs
together. Look again at the introductory paragraph: does it really
introduce your paper? Try to avoid editing the paper until you are
satisfied with what you've said.
Revising for shape. An important aspect of revising a rough
draft is making sure your essay has "shape." By that I mean the
simple, classical pattern of a beginning, a middle, and an end. Just
about every piece of writing, short or long, needs these three
elements. They ensure that your reader understands your thesis (main
point, central or controlling idea, or theme: they all mean the same
thing) and then becomes convinced that it is an interesting or valid
one. Here is one way to look at this three-part pattern:
Beginning: Say what you are going to do. State your thesis (this
is the main topic or single general idea of the entire essay).
Middle: Do it. Develop your thesis, discuss it and support it with
evidence. This is the "body" of the essay.
End: Note that you have done it. Restate your thesis and the
evidence for it, and state your conclusions.
You will probably follow this pattern quite mechanically in
your first two or three essays. In later thinking and writing you will
become increasingly subtle as you experiment with expanding on,
varying, and enriching this pattern.
Revising for style. As you revise your rough draft, think
carefully about your audience, the person who is going to read what
you are writing. For most college or university essays it is, quite
frankly, the professor. You would write the same ideas quite
differently in a letter to your grandmother or in a memo to your boss.
However, please do not fall prey to the misconception that intelligent
writing is pompous writing. It is not. Here is one way to picture your
audience:
Consider that your paper is going to be read by a small
group of knowledgeable peers who are interested in your ideas and
are willing to be convinced by them, given the information they need
about them.
Again, writing to convince is preferable to writing to
impress.
One rule of good style: when you revise, ask yourself
whether you can explain something more clearly, in simpler
language, or with even clearer evidence backing up your ideas. If
you can, go with the simpler version. Another rule of good style is to
use varied words: an active verb, a vivid noun, or just the right adjective
or adverb. As you read and write more, you will be automatically
building your vocabulary. Use only words for which you know the
meanings. As you add words to your vocabulary be sure that you are
using them accurately, precisely.
Editing for form. Once you're satisfied with the essay as an
essay (this may mean that you've had to do another draft copy)
review it very carefully for technical errors: spelling, grammar,
subject-verb agreement, adjectives used as adverbs, and so on.
Know your own weaknesses: if you can't spell "occurrence," look it
up. If you know that you sometimes leave the "s" off verbs, double-
check for that point. Don't just make a stab at corrections; be sure
you understand what you're doing. Keep asking questions until you
do!
Final copy
Type the final copy yourself. The retyping gives you a
chance to make fine adjustments to things you didn't notice before.
Proofread a hard copy of the final draft very carefully. Proofreading
is easier if you put the paper aside for at least a few hours before
rereading it. Do not hesitate to make corrections if you find errors;
it's your paper, and you have the right to change anything you've
written. However, don't panic and make unnecessary corrections.
Decide at some point that the paper is finished.
A Final Note: These notes on writing an essay are not
something to be read once and mastered. Return to these pages often
for ideas and guidance. If you reach a sticking point in your writing,
the guidelines can help you identify what points of the process are
giving you problems and then find ways to solve them. For example,
are you stuck on what to write about? You may need more
prewriting or you may need to reformulate your thesis into a more
workable one. Are you not satisfied with your second draft? You
may need to go back and expand on your rough draft or review the
ideas for revising again.
Revival of democracy
Pakistan's political history is a tale of opportunism,
confrontation, bickering and 'solemn efforts' to destabilise the
adversary at the cost of social and democratic institutions. Year after
year, government after government, all the political forces,
irrespective of their ideological affiliation, always tried to enjoy
unobstructed power.
It is the same people who, ousted from power and stupefied,
find themselves 'pained' by the derailment of democracy, and
ardently demand a return to the democratic form of government. But
as the demand for the restoration of democracy and the return of the
army to the barracks appears to gain momentum, politicians need to
answer some vital questions. Will they again enforce the 'democracy'
witnessed in Pakistan after 1985?
The country and the people cannot afford another round of
prime ministerial slots for Nawaz Sharif and Benazir Bhutto. It is
time to analyse the situation and embrace the 'bony arms of reality'.
The challenge and dilemma Pakistan faces today is the
strengthening of civil society and the consolidation of democratic
institutions. How can that be brought about, who is going to do it,
and who has brought things to such a pretty pass? These are the
pressing questions. From this viewpoint, the recent political history of the
country is a huge disappointment. After the 11-year era of Ziaul Haq,
the expectations of the people were high. But, from 1985 onwards,
Pakistan has seen power change hands between the two mainstream
political parties on four occasions.
At that time, two new leaders were coming to the forefront,
promising democratic days filled with prosperity. Unfortunately, the
PPP and the PML have only managed to establish themselves as
invalid and ineffective political forces, making it clear that they are
only interested in power and its perks, and nothing else. When not in
power, they tried to destabilise the government of the day by every
means.
Their ideological adherence and perseverance are open to
question. Both the PPP and the PML have been opportunistic enough
to form alliances with any party at any time. A harmonious
relationship between the PPP and the Altaf-led Muttahida Qaumi
Movement (MQM), then a similar bond between the PML and
MQM, and more recently, the possibility of an alliance between the
two mainstream parties, though ideologically discordant, is a telling
comment on their character.
During the last ten years, Pakistan witnessed social chaos,
economic bankruptcy, weakening of the political institutions and
system, disfiguring of the Constitution, lawlessness, social discord,
unemployment, mounting foreign debt and nothing to show for the
good of the people. Today, when Nawaz Sharif and Benazir Bhutto
are talking of joining hands for the revival of democracy, they should
be held accountable for their past misdeeds against each other.
Why did Nawaz Sharif, as the Punjab chief minister, instigate a no-
confidence vote against Benazir Bhutto during her first tenure as
prime minister? Why did Benazir rush back to Pakistan from abroad
to hand over the resignations of her partymen in the parliament as
Nawaz Sharif was battling it out for the seat of power with the then
president Ghulam Ishaq Khan? And why the smiling snaps of Nawaz
Sharif with then president Farooq Leghari as Benazir Bhutto was
lamenting the killing of her brother in Karachi? Nowadays, Farooq
Leghari is speaking out against both Sharif and Benazir.
Why, when one of them was in trouble, did the other not
care? Because the other saw an opportunity to grab power. Why did
neither of them care for democracy when the other was losing
power before completing the democratic term? Still, under the
prevailing impasse, one should welcome the alliance being forged
between the PML and the PPP under the umbrella of the Grand
Democratic Alliance. On the surface, it betokens political harmony
and reconciliation that Pakistan so badly needs. But the question is
how the alliance is going to affect the people of Pakistan. How
will it change their lives, how will it pull them out of the chaotic
mess they have been hurled into?
Will it create more jobs, bring down the price spiral, make
their lives easy, and help in retiring the foreign debt? From past bitter
experience, it will do none of these things, except for protecting the
interests of a handful of politicians. The alliance, essentially between
the two former prime ministers, one in jail and the other in exile, is a
prime example of the politics of opportunism. Not so long ago, their
main preoccupation was hurling abuses at each other, and extending
'compliments' such as 'security risk', 'corrupt' and 'anti-state'. To give
exit strategies is quite easy while sitting thousands of miles away
from Pakistan, but to give a clear roadmap for improving the
economy is something politicians do - that too with the help of
rhetoric - when in power. The PPP chairperson takes pride in giving
employment and initiating polio campaigns, but never tells the
nation how, under PPP rule, Pakistan would come out of the
donor agencies' vicious circle. And when is the PPP going to
undertake elections within the party itself?
Ironically, both Nawaz and Benazir are least prepared to
introduce democracy in their parties. Benazir is lifetime chairperson
of PPP while Nawaz, despite being in jail, continues to head the
PML, which has created an open rift within the party and even
risks its very survival. This alone should be emblematic of their
attitude towards power - 'never to submit or yield'. They, and others
of their kind, when entrusted with power, maligned the good name of
democracy, misused their authority, set dubious examples of gross
corruption and left the country impoverished, while democracy
weakened. These are the precedents which require the masses to be
cautious about the ins and outs of the present demands for a return to
democracy. People must realise that these very politicians, when in
opposition, encouraged the military to intervene and, on finding their
own dreams shattered after the event, started to give renewed calls
for the restoration of democracy. Even then, if one is generous
enough to overlook the past and grant the demand of these self-
professed champions of democracy, one is tempted to ask: "Do they
have any agenda, any consensus candidate, any programme to tackle
the economic crisis?" Are they going to use some magic wand to set
things right or is their criticism just for criticism's sake? One has
every reason, for instance, to suspect the intentions behind the
lobbying against the local government plan. Why doesn't anyone talk
of the inept governance at the local level? Maybe it is because the
existing system protects the vested interests. These people are in fact
happy with the inefficiency at the administrative level. It is perhaps
to their advantage that any proposed change that threatens their
'sacred interests' is given a tinge of anti-state conspiracy.
The people of Pakistan must question the intentions of these
politicians and ask what their plans are to improve
the economy and the condition of the common man. There is no
doubt that Pakistan today needs a stronger civil society, adequately
backed by supporting democratic institutions. But there are no
shortcuts and the journey is certainly long, requiring us to be patient,
aware and vigilant. As the present government continues to make
efforts for the implementation of the new system, it is indeed
encouraging that doors have been left open to suggestions and
proposals, and those found practically feasible are being accepted
with an open heart. Thus, the regime is proving itself more
democratic than the so-called democratic governments of the past.
Democracy, of course, is our ultimate goal and democracy has to be
installed. But to enjoy its true fruits, there is a need to educate the
masses and make them politically mature so that they can distinguish
right from wrong.
They must be made to understand that all that glitters is not
gold. They must know that those who proclaim themselves to be gold
have rusted in the past.
Cause of Conflict
In 1846, the British colonial rulers of India sold the territory,
including its populace (by a sale deed called the Treaty of Amritsar,
in return for a sum of money), to a Hindu warlord who had no roots
there. This warlord, who bought himself into royalty, styled himself
as the Maharajah of Jammu and Kashmir. The acts of brutality
during his regime have left bitter memories, some of which persist to
this present day. Several mosques were closed and occupied by his
forces.
The slaughtering of a cow was declared a crime punishable
by death. Between 1925 and 1947 Maharajah Hari Singh continued
this policy of discrimination against the 94 percent Muslim majority.
It was nearly 65 years ago, in 1931, that the people of Kashmir made
their first organised protest against Maharajah Hari Singh's cruelty.
That led to the "Quit Kashmir" campaign against the Maharajah in
1946, and eventually to the Azad Kashmir movement which gained
momentum a year later.
The first armed encounter between the Maharajah's troops
and insurgent forces occurred in August 1947. At this time, Britain
was liquidating its empire in the subcontinent. Faced with an
insurgency of his people, strengthened by a few hundred civilian
volunteers from Pakistan, the Maharajah fled to Jammu on 25th October
1947. In Jammu, after he secured a commitment of military
assistance from the government of India to crush the impending
revolution in Kashmir, he is alleged to have signed an "Instrument of
Accession" to India.
Lord Mountbatten conditionally accepted the "Instrument of
Accession" on behalf of the British Crown, and furthermore, outlined
the conditions for official acceptance in a letter dated 27th October
1947:
"In consistence with their policy that in the case of any
(native) state where the issue of accession has been subject of
dispute, the question of accession should be decided in accordance
with the wishes of the people of the state, it is my government's wish
that as soon as law and order have been restored in Kashmir and her
soil cleared of the invaders the question of state's accession should
be settled by a reference to the people."
Then Indian Prime Minister Jawaharlal Nehru, in a speech
aired on All-India Radio (2nd November 1947), reaffirmed the
Indian Government's commitment to the right of the Kashmiri people
to determine their own future through a plebiscite.
"We have declared that the fate of Kashmir is ultimately to
be decided by the people. That pledge we have given, and the
Maharajah has supported it, not only to the people of Jammu and
Kashmir, but also to the world. We will not and cannot back out of it.
We are prepared when peace and law have been established to have a
referendum held under international auspices like the United
Nations. We want it to be a fair and just reference to the people and
we shall accept their verdict."
The Government of India accepted the "Instrument of
Accession" conditionally, promising the people of the state and the
world at large that "accession" would be final only after the wishes
of the people of the state were ascertained upon return of normalcy
in the state.
Following this, India moved her forces into Srinagar and a
drawn-out fight ensued between Indian forces and the forces of
liberation. The forces of Azad Kashmir successfully resisted India's
armed intervention and liberated one-third of the State. Realising it
could not quell the resistance, India brought the issue to the United
Nations Security Council in January 1948. As the rebel forces had
undoubtedly been joined by volunteers from Pakistan, India charged
Pakistan with having sent "armed raiders" into the state, and
demanded that Pakistan be declared an aggressor in Kashmir.
Furthermore, India demanded that Pakistan stop aiding the freedom
fighters and stop allowing the transit of tribesmen into the state.
After acceptance of these demands, coupled with the assurance that
all "raiders" had been withdrawn, India would enable a plebiscite to
be held under impartial auspices to decide Kashmir's
future status. In reply, Pakistan charged India with having
manoeuvred the Maharajah's accession through "fraud and violence"
and with collusion with a "discredited" ruler in the repression of his
people. Pakistan's counter complaint was also coupled with the
proposal of a plebiscite under the supervision and control of the
United Nations to settle the dispute.
The Security Council exhaustively discussed the question
from January until April of 1948. It came to the conclusion that it
would be impossible to determine responsibility for the fighting and
futile to blame either side. Since both parties desired that the
question of accession should be decided through an impartial
plebiscite, the Council developed proposals based on the common
ground between them.
These were embodied in the resolution of 21st April 1948,
envisaging a cease-fire, the withdrawal of all outside forces from the
State, and a plebiscite under the control of an administrator who
would be nominated by the Secretary General. For negotiating the
details of the plan, the Council constituted a five-member
commission known as "United Nations Commission for India and
Pakistan" (UNCIP) to implement the resolution.
After the cease-fire, India began efforts to drag the issue
out, and under various pretexts tried to stop the UN resolution
from being implemented. To this day, India pursues the same plan,
and the resolution of 1948 has yet to be realised.
India and Pakistan were at war over Kashmir in 1947-48,
and all early U.N. Security Council resolutions contained
admonishments for both countries, demanding an immediate cease-
fire which would be followed by a UN-directed plebiscite.
However, although some fifteen resolutions were passed by
the United Nations to this very effect, India and Pakistan again
clashed militarily in 1965. At this point, another cease-fire
agreement was effected after United Nations intervention, followed
by an agreement at Tashkent with the good offices of the USSR.
In 1971, India and Pakistan once again became locked in
war. Efforts to bring the latest conflict to an end resulted in the Simla
Agreement, which was signed by both India and Pakistan and declared
a commitment to reach a "final settlement" on the Kashmir issue, but
this has yet to happen.
Novel Reading
OR
The Study of Fiction
Novel reading has become a popular pastime. To relax after
a day’s hard work, there is nothing handier than a novel or a detective
story. Life often proves too much for us and we seek the aid of our
imagination to escape from it. Reading is not a duty imposed on us;
it is a pleasure. Books exist to please, to entertain and to comfort us.
Some seek relaxation in intoxication through wine or gambling but
the reader of fiction quite harmlessly excites himself by an imaginary
story of love, intrigue or murder. Fiction, like poetry, provides us with an
escape from this dark drab world and takes us into the land of
romance where beauty dwells, justice is done and there is no misery.
The majority of fiction readers resort to fiction for mental relief.
Sometimes, when we are not tired, we feel lonely, as for example, on
a journey. In an express train a novel is an invaluable companion: it
helps us to drown out the rattling and puffing of the engine and to
forget the irksomeness of the journey. When we are engrossed in the
novel, we are hardly conscious of time passing till the porter shouts
out the name of our destination. Reading makes travel tolerable; we
hardly know how else to kill time on a railway journey. The ailing
patient, the convalescent and the attendant nurse find in novel
reading a matchless delight. Novel reading is light reading and puts
no strain on our mind or brain; rather, it is a palliative.
Not all novels are fairy stories of love and romance. We
have the picaresque novel of adventure and travel. The reading of
these novels provides us not only with information about the
countries concerned but also the pleasant sensation of sojourning in
them. Sitting by the side of the singing kettle on the hearth, we can
hunt wild game in the heart of Africa or the Sundarbans, climb to
the top of Kamet, or dive into the sea in Mr. Wells’s apparatus. Such
novels are not meant to be a substitute for an actual journey but as a
stimulus to interest. To those for whom a long journey is a forbidden
luxury they are more than a compensation.
The novel with a purpose has various branches. The best
among them is the historical novel, which deals with a particular
period of history, a famous personality or an incident. When we read
these novels we live in those periods and move in their long-dead
society. Usually historical novels are pure fictitious romances with
an appeal to young minds. Scott’s Waverley Novels and Thackeray’s
Henry Esmond belong to separate groups. Another variety of this
type of novel is one in which the author passes in review, through
imaginary characters, the conflicting forces and the changing values
which mark an epoch. Galsworthy’s Forsyte Saga describes the
disintegration of the Victorian society in England. Sinclair Lewis’
Babbitt and Main Street are its American kin. A minor offshoot of it
is the regional or provincial novel describing the characteristics of a
typical locality like Hardy’s Wessex novels and Arnold Bennett’s
Anna of the Five Towns series. We have the political novel in which
political strife is mirrored in symbols. Disraeli’s Coningsby or The
Flaming Sword by L. P. Jacks are novels of this type. For those who
are of a humanitarian turn of mind we have the social novel with its
main objective of reforming society. Mrs. Gaskell’s Cranford,
Thackeray’s Vanity Fair and Dickens’s novels went a very long way
toward improving the social conditions of their times. The novelist has a
wide canvas at his command. He can gradually work up his plot and
resolve it in the desired manner. His weapon is persuasion through
winning our sympathy. He aims at change of heart, perseveres and
ultimately wins. The novelist’s field is not limited and he is free to
take up any problem that is likely to interest the reader. He treads on
slippery ground when he handles sex. Writers like Joyce, Lawrence
and even Aldous Huxley have sometimes been banned for young
readers. But I say to you – Do not be afraid; read them all, especially
Aldous Huxley. I am not exaggerating when I say that Huxley’s
After Many a Summer has thrown more light on many tangles of our
society than years and years of analysis and introspection. It is better
to abstain from sentimental types of fiction. Not only do they
produce unhealthy reactions but they also vitiate our taste for good
literature. Classical writers always deal with the eternal problems of
life: of good and of evil. They invent characters, sometimes
individualised and sometimes types, and through them, as
mouthpieces, voice their own feelings. Psychological fiction is a
recent production and specialises in the analytical study of human
character. Reading of such novels is a mental exercise; all the time
we feel as if we were playing at chess. The human mind is ever
interesting. Jane Austen, George Eliot, and Meredith give us beautiful
characters. The modern novel is psychological and deals with the
stream of consciousness.
Modern life has multiplied crime and scientific research has
made the detection of crime an art as well as a science. The detective
novels of Edgar Wallace have covered a vast ground from the
Moonstone of Wilkie Collins. It is this type of novel which is very
popular today: it gives excitement and relaxation. Incidentally, it has
created a new character, Sherlock Holmes. The purely propaganda
novel has its votaries. War fiction too had its heyday. All Quiet on
the Western Front by Remarque had amazing sales.
Novel reading is regarded by the orthodox as sheer waste of
time. This is a mistaken notion. The variety of situation and the
diversity of characters in novels expand our outlook and broaden our
vision. Novels are cheap and are easily procured from every
circulating library or bookshop. Their very popularity shows that
reading them is widespread and conducive to happiness. The reader
with the discriminating eye avoids reading obscene and cheap
fiction. Just as too many sweets produce bile, so an overdose
of fiction gluts the appetite. Drama elevates and poetry refines
our mind, but fiction, of the earth earthy, interests us in the living of life.
Novel reading rouses the taste for reading. After poetry,
fiction is the greatest department of literature and should occupy an
honoured place on the shelves of every student.
Democracy
The word democracy has many meanings, but in the modern
world its use signifies that the ultimate authority in political affairs
rightfully belongs to citizens. There was a time when democrat was a
term of abuse, virtually synonymous with mob rule or anarchy.
Today democracy's connotations are honourable. This is especially
true given the growth of democratic trends in Eastern Europe
following the collapse of the Soviet empire. Dissidents in these
societies invoked democracy as the ideal alternative to a bureaucratic,
authoritarian state. A transition to democratic regimes appears to be a
dominant political pattern at the end of the 20th century.
Whereas in centuries past there were principled opponents to
democratic political rule, such antidemocrats are rarer today in
nearly all societies. Democracy's opponents tend to be
fundamentalists who favour theocratic regimes or adversaries who
find democracy wanting because it seems not to meet certain abstract
standards of justice or perfect freedom. Because democracy is so
much in favour, even dictators and authoritarians embrace the
democratic idiom to characterize their regimes and their actions. As a
result, the 20th century has seen a proliferation in the meanings of
democracy, though not all invocations of democracy, past or present,
are credible. The leaders of the Soviet-dominated authoritarian
regimes of Eastern Europe called themselves “workers’ republics”
and wrapped themselves in the mantle of democracy. The People's
Republic of China proclaims itself democratic even as protestors
demanding freedom of speech and of the press, hallmarks of
democratic polities, are routinely imprisoned. No one, it seems,
wants to be called “antidemocratic.”
In view of the variety of ways in which the term democracy
is used, the only way to distinguish between arbitrary definitions and
coherent ones is to observe under what circumstances positive or
negative judgments are made concerning the absence or presence of
democratic institutions. For example, when communists classified
the former Soviet Union as a socialist democracy and denied that
Spain under the regime of Gen. Francisco Franco had an organic
democracy, the reasons listed for denying the democratic nature of
the Spanish state would also apply to the communist states these
advocates had labelled democratic.
The converse is also true. Defenders of Franco's
authoritarian order characterized Spain as a democracy in some sense
and scornfully rejected the view that communist countries were
democracies in any sense. But the reasons they gave for refusing to
describe communist regimes as democratic largely invalidated their
ascription of a democratic character to Spain during the years of
Franco's reign.
Proceeding in this way, and using the historical context to
control specific applications of the term, a central or basic concept of
democracy may be presented that will approximate most non-
arbitrary uses. Democracy is a form of government in which the
major decisions of government—or the direction of policy behind
these decisions—rest directly or indirectly on the freely given
consent of the majority of the adults governed. This makes
democracy essentially a political concept even when it is used—and
sometimes misused—to characterize non-political institutions.
Democracy as a political process is obviously a matter of
degree—depending on the areas of society open to political debate
and adjudication and the number of adults qualifying as citizens
within the political system. The differences between non-democratic
and democratic states are sometimes characterized as being “merely”
one of degree. But this rhetorical ploy is used to minimize and
confuse the difference between democratic and non-democratic
states.
It becomes necessary, therefore, to supplement the above
definition with a working conception that will enable us to
distinguish democratic regimes from others. One such working
conception is the view that a democratic government is one in which
the minority or its representatives may peacefully become the
majority or the representatives of the majority. The presupposition is,
of course, that this transition is made possible by, and expresses the
freely given consent of, the majority of the adults governed. The
implications of the presence of freely given consent call attention to
the difference between ancient democracies, which stressed only
majority rule as a validating principle, and modern democracies,
which since the birth of the American republic have stressed the
operating presence of inalienable rights.
Before developing the implications of this distinction, it is
necessary to dissolve certain misconceptions that have often plagued
discussions of democracy. The first is the view that the only genuine
democracy is “direct” democracy in which all citizens of the
community are present and collectively pass on all legislation, as was
practiced in ancient Athens or as is the case in a New England town
meeting. From this point of view an “indirect” or “representative”
democracy is not a democracy but a constitutional republic or
commonwealth. This distinction breaks down because, literally
construed, there can be no direct democracy if laws are defined not
only in terms of their adoption but also in terms of their execution.
Delegation of authority is inescapable in any political assemblage
unless all citizens are in continuous service at all times, not only
legislating but also executing the laws together. The basic question is
whether the delegation of authority is reversible—controlled by
those who delegated it.
The second misconception is the identification of, or
confusion between, the terms democracy and republic. Strictly
speaking, a republican form of government is one in which the
position of the chief titular head of government is not hereditary. A
republic can have an undemocratic form of government, whereas a
monarchy can be a democracy. There is no necessary connection
between the two terms, although particular regimes usually embody
a complex mingling of republican and democratic principles.
Majority Rule and Minority Rights.
Any community in which a majority of the adult population
are slaves cannot be considered democratic. Nonetheless, there is a
valid distinction between the kinds of government that existed in
antiquity in which the freemen—however limited in numbers—were
the source of ultimate political authority and governments in which
the authority of government was vested in a dictator or an absolute
monarch. The former were democracies, even though the free
citizenry or its representatives recognized no limitation on the nature
and exercise of their rule and others enjoyed no political rights. The
result of elections in the ancient democracies often was the civil
equivalent of a military victory, and vae victis (“woe to the
vanquished”) often described the fate of the defeated. Under such
circumstances democratic rule was bloody, disorderly, and
frequently a preface to the emergence of a strongman or dictator.
Even where power was in the hands of the majority, there was no
democracy in the modern sense, for minority rights were not
considered.
With the emergence of a theory of human rights beginning in
the 17th century and its explicit development in the writings of
Thomas Hobbes and, above all, John Locke, the way was prepared
for a conception of democracy in which the principle of majority rule
was a necessary but not a sufficient condition. The will of the
majority was to enjoy democratic legitimacy only if it was an
expression of freely given consent. The specific provisions of the
U.S. Bill of Rights and the unwritten, but not unspoken, assumptions
of the British constitution after the Cromwellian revolution
expressed the limits set by human rights on the power of ruling
majorities, minorities, or monarchs.
Majorities could do everything except deprive minorities of
their civil rights, including freedom of speech, of the press, and of
assembly and the right to a fair trial, the exercise of which might
enable the minority to win over the electorate and come to power.
Minorities might do everything within the context of these human
rights to present their case, but so long as they accepted the
principles of democratic organization, they were bound by the
outcome of the give and take of free discussion until another
opportunity for persuasion might present itself. Since unanimity
among human beings about matters of great concern is impossible,
the majority principle, insofar as it truly respects human rights, is the
only one that makes democracy a viable alternative to tyranny,
whether ancient or modern.
What are the signs of freely given consent, or under what
conditions is it present? Briefly, freely given consent exists when
there is no physical coercion or threat of coercion employed against
expression of opinion; when there is no arbitrary restriction placed
on freedom of speech, of the press, and of assembly; where there is
no monopoly of propaganda by the ruling party; and where there is
no institutional control over the instruments or facilities of
communication. These are minimal conditions for the existence of
freely given consent. In their absence a plebiscite, even if
unanimous, is not democratically valid.
These may be considered negative conditions for the
presence of democratic rule. But it may be necessary for a
government to take positive measures to ensure that different groups
in the population have access to the means by which public opinion
is swayed. If, for example, an individual or a group had a monopoly
of newsprint or television channels and barred those with contrary
views from using them, both the spirit and letter of democracy would
be violated.
Philosophers of democracy, especially Thomas Jefferson,
John Stuart Mill, and John Dewey, have called attention to certain
positive conditions the presence of which quickens and strengthens
the democratic process. Foremost among these is the availability of
education, allowing for an informed and critical awareness of the
issues and problems of the times. If the avenues of communication
are open, an educated electorate can become aware of the
consequences and costs of past policies and of the present
alternatives.
If, as the 17th-century philosopher Baruch Spinoza declared,
men and women may be enslaved by their ignorance, uninformed
freedom of choice may lead to disaster. It is this fear of mass
ignorance or the excitability and gullibility of “the herd” that is one
root of opposition to democracy. The more informed and better
educated the electorate, the healthier the democracy is. This, at least,
has been the nearly universal claim of most democratic theorists. But
modern means of mass communication and persuasion, especially
political advertising, present challenges to this fondly held dictum of
democratic faith. How does one distinguish between unacceptable
manipulation of the citizenry and wholly legitimate efforts to
persuade? There is no consensus on these matters, and the debate
promises to grow more intense given the explosion in information
technology in the last quarter of the 20th century.
Citizen Participation.
A second positive condition for the existence of an effective
democracy is the active participation of the citizens in the processes
of government. Participation is all the more essential as government
grows in size and complexity and as individual citizens may be
tempted to succumb to a feeling of ineffectiveness in the face of
anonymous forces controlling their destiny. The result may be wide-
scale apathy and a decay in democratic vitality, even when
democratic forms are preserved. “The food of feeling,” observed
Mill, “is action. Let a person have nothing to do for his country, and
he will not care for it.”
It was Dewey and Jane Addams, however, who stressed the
importance of participation in the day-to-day political affairs of the
street, the borough, the city, the region, the state, and the nation, to a
point where the whole concept of democracy acquired a new
dimension. By involving the greatest number of citizens in different
ways and on different levels in political action, plural centres are
developed to counteract the tendency to expansion and centralization
of government, and the conditions of “a Great Community” are
established. “Democracy,” Dewey wrote, “is a name for a free and
enriching communion.” Civil rights leader Martin Luther King, Jr.,
evoking religious language, described American democracy as an
ideal of a “beloved community.” However, this generous conception
of a participatory democracy can be misunderstood and vulgarised.
Some have interpreted it to mean that there is no place for expertise
in a democracy, that all citizens are capable of administering all
things, and that all opinions not only have a right to be heard but also
are entitled to receive equal weight. This denies Jefferson's insistence
that one of the fruits of democracy is the emergence of an
“aristocracy of virtue and talent.”
This reinforces the third positive condition for effective
democracy. Intelligent delegation of power and responsibility is
essential because no community can sit in continuous legislative
session, and not everybody can do everything equally well. In
addition, it is necessary during periods of crisis to entrust certain
institutions and persons with emergency powers to ensure the
defense and preservation of the community.
Scepticism and Judgment.
The possibility of abuse of the delegation of power both in
ordinary and extraordinary times reinforces the fourth positive
condition for a healthy democracy. This is an intelligent scepticism
concerning claims to absolute truth, the possession of charisma
among leaders, or the infallibility of experts. As indispensable as
experts are, the assumption of both democratic thought and common
sense is that one does not have to be an expert to evaluate the work
of experts. One does not have to be a cook to judge the claims of
great cooks, a general to know when the war has been won or lost, or
a civil servant to discover whether the policy of bureaucracy leads to
well-being or woe. In a democracy the citizen is and should be king.
The problems and challenges of democracy are many. Some
flow from the tension between the emphasis on equality in the
democratic outlook and the desire to preserve individual variation
and freedom. Alexis de Tocqueville and other critical observers of
democracy, as well as friends of democracy such as Mill, feared that
its extension would lead to the erosion of personal freedom by
imposing legal restrictions on the use of property and on personal
behaviour.
To some extent restrictions on individual freedom in a
democratic society flow not from the theory and practice of
democracy but from the complexity of social relations in a growing
community. So long as there is recognition of the area of personal
privacy that may not be invaded by public power, freedom faces no
intolerable threats. Despite the fears of Tocqueville and Mill, there is
far greater allowance for, and tolerance of, deviant ideas and
practices in all areas of personal life in contemporary democratic
society than was the case in the less democratic world of these
scholars. According to some latter-day voices, the sphere of personal
freedom has been extended to a point where law and order seem
threatened. This is particularly true in the United States, where the
proliferation of weapons of deadly force in private hands, under the
constitutional right to bear arms, is implicated in dangerously high
rates of homicide and assault in large urban areas. Many critics claim
that this right has been extended to the point where public safety, the
most basic right of all, is increasingly jeopardized. Balancing
individual rights against one another in light of the legitimate need of
communities for safety and security promises to be one of the great
democratic challenges of the 21st century.
The acceptance of the inviolable rights of minorities reduces
the danger of dictatorship by the majority in a democracy. However,
the rights of minorities cannot be construed as absolute; rather, these
rights depend, in part, on the consequences of the actions of
minorities, on the freedom and safety of majorities, or on society as a
whole. In addition, rights may conflict. Freedom of speech may
interfere with a person's right to a fair trial and sometimes, as when
an orator is inciting a lynching mob, with the victim's right to life. In
such circumstances the rights of a minority may have to be abridged.
What, then, is the difference between democratic and non-
democratic governments? Do not the latter also abridge the rights of
citizens in the alleged interests of the common good?
The first distinction is that democratic government
recognizes the intrinsic as well as the instrumental value of civil
rights. When it moves to restrict or abridge civil rights, it does so
slowly and reluctantly. Second, if and when the exercise of a civil
right creates a clear and present danger of a social evil that threatens
other human rights, it is abridged only for a limited period of time
and is restored as soon as normalcy returns. Finally, the restrictions
imposed by government agencies on every level in a democracy are
subject to appeal, review, and check by an independent judiciary.
The relation between democracy and forms of property is
extremely tangled. It is sometimes argued that the collective
ownership of the means of production is incompatible with
democratic government, because the monopoly of control and the
necessities of a totally planned economy necessarily result in
dictatorship over the lives and movement of citizens.
It is true that in societies where the economic system was
centralized and socialized, as in the Soviet Union, or where it was
brought under complete political control, as in Nazi Germany,
democracy could not exist. In these situations political democracy
was destroyed, and control of all aspects of economic life was a
central feature of the overall assault on democracy. Measures of
partial economic socialization adopted in Britain and the
Scandinavian countries in the post–World War II era did not erode
democratic political processes. Nonetheless, although economic
centralization and democracy are not incompatible in principle, there
is an antidemocratic thrust to a completely socialized and state-
dominated economy. Concentrations of power of this magnitude will
always pose a threat to political democracy, even as democracy must
challenge excessive centralization of power; hence either political
democracy is destroyed first as a prelude to such
centralization, or the concentration of economic power foreshadows an
assault on democratic political forms.
Some have argued that capitalism is incompatible with
democracy because private ownership of the means of production
gives entrepreneurs control over the lives of those who earn their
living by using those means of production. Such ownership, it is
claimed, gives a disproportionate influence over the electorate to
those who command great wealth. Though not without merit, these
contentions overlook the fact that political processes in a democracy
make possible the limitation of economic power not only by
establishing free trade unions or other solidaristic organizations as
countervailing forces to capital but also by the use of taxation and
the regulation of elections. Laws protecting civil liberties guarantee a
dramatic extension of free expression and keep open the free
marketplace in ideas. Furthermore, new information-technologies
have decentralized political power, making it less likely that a
narrow elite will exert disproportionate control. This by no means
eliminates the disparity between social classes, but it does
complicate any simplistic picture of “haves” versus “have nots” as
characteristic of capitalist regimes.
Furthermore, once we distinguish between personal property
—home, land, tools, books—and property in the large social
instruments of production—mines, factories, plantations—we can
appreciate the insight of Locke and Jefferson that ownership of the
former actually may be a source and guarantee of individual
freedom.
The origins of modern democracy are rooted in the scientific
revolution of the 16th and 17th centuries and in the industrial and
technological revolutions of the 18th and 19th centuries. These
upheavals enlarged the imaginations of citizens and would-be
citizens by making what seemed merely possible probable. Thus they
transformed social relations to a point where persons whose status
was one of relative powerlessness—appendages of machines—began
to demand first suffrage and then their fair share of the social
product. The growth of political democracy depended more heavily
on the activities of trade unions, dissident religions, and social
reform movements than on the actions of traditionally established
institutions. Because power limits power, the landowning and
capitalist classes, in their struggles with each other, sought allies
among the lower classes and therewith extended the scope of
political suffrage and recognition in dramatic and irreversible ways.
With the extension of political suffrage, the middle and
lower classes acquired the strength and opportunity to carry
democratic principles into other dimensions of the social system. As
a result, a massive system of social security developed in most
democratic countries, and educational facilities and a higher standard
of living became available to greater numbers of citizens. The
welfare state emerged as a consequence of the influence of political
democracy on other areas of social life, particularly through the
redistribution of wealth.
The faith in democracy ultimately rests not in the belief in
the natural goodness of human beings but in the belief that most
human beings are open to democratic responsibility and possibility.
This faith derives from a notion of the human person as deserving of
recognition and respect. It is true that democracies are imperfect and
democratic citizens may do foolish or dangerous things. But the
democrat holds that the solution to such dilemmas is more informed
democratic action rather than salvation in a dictatorship, whether of a
single leader or of the proletariat. Those who have moved down this
latter path are responsible for much of the horror of 20th-century
politics, with its millions of human beings lost to political terror and
millions of others displaced, tortured, or tormented to varying degrees.
Democracy, as we have seen, is not indivisible—all or
nothing—in the sense that its political form necessitates the
extension of the democratic principle to other areas of experience. It
only makes an extension possible to those who have the vision,
courage, and intelligence to struggle for it. Nor is democracy
indivisible on the international scene in the sense that the world must
soon become one democratic community. Democratic regimes are
compelled to coexist with nondemocratic regimes for the sake of
peace and security.
What can be expected is that the ideals of freedom in
flourishing democratic cultures still struggling to solve problems of
poverty, ignorance, and violence have functioned and will continue
to function as an inspiration to the subjects of non-democratic
regimes. Having secured and extended democracy throughout the
years of the Cold War, the citizens of democratic states face new and
daunting challenges. Freedom may be infectious, but so are
nationalism and intolerance. “Eternal vigilance,” in Jefferson's
memorable phrase, is the continuing price citizens of a democracy
pay to sustain and secure freedom for future generations.
Air Pollution
The contamination of air by unwanted gases, smoke
particles, and other substances is generally considered a relatively
recent phenomenon. However, pollution of the air, particularly by
smoke, has plagued many communities since the beginning of the
Industrial Revolution. By the late 19th century there was
considerable agitation by citizens' groups in London who protested
the smoke-laden air of the city, but their protests were drowned out
by the clamour for industrial development. Complaints against air
pollution were registered elsewhere in Europe as well as in the
United States. Laws on air quality were first adopted as early as 1815
in Pittsburgh. Chicago and Cincinnati followed with smoke-control
measures in 1881. By 1912, 23 of the 28 American cities with
populations over 200,000 had smoke-abatement ordinances.
From the 1930s to the 1950s, when smoke pollution was at
its worst in the United States, air pollution was still regarded as a
nuisance worthy of only local attention. During that period, in a
number of Eastern and Midwestern industrial cities the smoke was so
thick, particularly during the winter, that at noon it was sometimes
nearly as dark as at midnight. Finally, the sheer soiling nuisance of
the problem provoked public outcries that resulted in the enactment
of smoke-control legislation, its partial enforcement, and visible
improvement in the atmosphere of a number of industrial cities.
These efforts were focused primarily on reducing smoke
from fossil fuels, particularly coal, by regulating the types of coal
that could be burned, by improving combustion practices, and in
some cases by using special devices to control emissions of particles
into the air. The replacement of steam locomotives by diesel engines
and the increased use of gas for heating also contributed greatly to
the reduction of air pollution.
In the 1940s a new type of air-pollution problem emerged.
When the residents of Los Angeles complained of smog, few people
suspected that smoke pollution and general air pollution were not the
same problem. Los Angeles used virtually none of the fuels primarily
responsible for the smoke problems of other cities, and yet its smog
problem became worse. It was discovered that the Los Angeles smog
was not due primarily to smoke in the air but to the action of sunlight
on gases emitted by car exhausts and certain industrial processes.
The experience of Los Angeles in the 1940s revealed a general
pattern of air-pollution problems that developed in the 1950s and
1960s in cities elsewhere in the United States and in other parts of
the world. This pattern, characterized by rapid industrial growth,
development of metropolitan areas with their associated urban
centres and suburban satellites, and dependence on motor vehicles,
creates new gaseous and particulate emissions that complement,
interact with, and further complicate the action of the more common
pollutants.
Urban development, increased use of motor vehicles,
industrial expansion, construction and operation of large facilities
using fossil fuels for generating electricity, production of iron and
steel, petrochemicals, and petroleum refining gave rise to regional
air-quality problems in many areas of the United States. They are
manifest as seasonal episodes of unhealthy air quality in the Los
Angeles south-coastal basin and the Baltimore to Boston urban
corridor, and as regional hazes and poor visibility in the western
Great Basin, Appalachia, and Adirondacks. Another manifestation is
the acid-rain phenomenon in several parts of the United States and
Canada.
These contemporary air quality problems are due to a
combination of factors—the physical nature of urban and industrial
developments, the spatial density of emissions, land and weather
features that affect diffusion, dispersion, and transport of pollutants,
and the chemical and physical transformation of the plumes of
pollution that hang over cities and industrial centres. A wide range of
pollutants currently pose an ecological threat in cities all over the
United States and in other industrialized countries.
Transportation equipment—chiefly cars, motor rickshaws
and trucks—is the greatest source of air pollution in Pakistan. It
is responsible for about 56% of the total. While motor vehicles—
automobiles and trucks—are the principal villains, aircraft, trains,
and boats can cause problems in certain areas—for example, airports,
railroad yards, and harbours. Although diesel engines in trucks and
other vehicles are minor contributors to the overall problem, they
produce objectionable odours and noxious fine particulate matter.
Second in importance is stationary combustion—electric-
power plants and heating plants—which contributes about 22% of
the total air pollutants. Industrial processes are a close third, with
15% of the total. Miscellaneous pollution sources, forest fires,
agricultural burning, coal-waste fires, and solvent disposal contribute
5% of all pollutants, and solid-waste disposal, primarily through
incineration, accounts for the remaining 2%.
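The source shares quoted in the two paragraphs above can be tallied as a quick consistency check. The percentages are the essay's own figures, not independently verified:

```python
# Tally the essay's source-apportionment figures for air pollution.
# The percentages come from the text above and are not verified here.
sources = {
    "transportation": 56,
    "stationary combustion": 22,
    "industrial processes": 15,
    "miscellaneous (fires, burning, solvents)": 5,
    "solid-waste incineration": 2,
}
total = sum(sources.values())
assert total == 100, f"shares should account for all pollution, got {total}%"

largest = max(sources, key=sources.get)
print(largest, f"{sources[largest]}%")  # transportation 56%
```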
The worst episode of air pollution in modern times occurred
in London in 1952. That famous “killer smog” is believed to have
been responsible for 4,000 deaths. In the United States a similar
incident occurred in 1948 in Donora, an industrial town in the
mountains of western Pennsylvania. Almost half of the town's
14,000 inhabitants became ill, and 20 people died during the five-day
smog episode. A study indicated that many who survived the episode
suffered permanent health impairment. Another serious episode took
place in New York City in 1953 when 200 people died as a result of
high levels of sulfur oxides and particulate matter. The London,
Donora, and New York City episodes occurred when unusual
weather conditions lasting several days prevented dispersal of the
pollutants.
More important than the major disasters are the subtle, long-
range effects on human health caused by exposure to low-level but
prolonged air pollution. This type of exposure contributes to the
incidence of such chronic respiratory ailments as emphysema and
bronchitis and to reduced exercise performance by healthy children
and adults. It also contributes to higher mortality rates from other
causes, including cancer and heart disease. Smokers who live in
polluted cities have a much higher rate of lung cancer than those in
rural areas.
Among children, air pollution has been shown to be
associated with a high incidence of asthma, allergies, and acute
respiratory infection. Such childhood disorders may lead to chronic
disease in later life.
Although present knowledge of the health effects of those
specific contaminants so far identified is incomplete, the more overt
effects of several major classes of pollutants are well defined and are
known to result in about 15,000 deaths, 7 million sick days, and 15
million days of reduced performance each year, affecting almost one
of every five persons.
Abatement and control of pollution in Australia, Brazil,
Japan, and Mexico strongly focuses on smog problems in
metropolitan centres. Smog in Mexico City, for example, is
particularly difficult to control because of the city's size, population,
elevation, high actinic radiation, terrain, and weather conditions.
These all favour seasonal accumulation of pollution and aggravate
smog formation from emissions of the many motor vehicles and
local industries. Motor vehicles in Australia and Japan are equipped
with emission-control systems, and state and local governments
inspect vehicles to ensure compliance. Brazil reduces transportation
pollution by use of clean-burning sugarcane alcohol as a partial
substitute for gasoline, and limits emissions from stationary sources.
Japan requires very advanced control systems for industrial
emissions in cities where land and weather conditions accentuate the
health risk of air pollution.
Nuclear War
Or
What would be the scene of the Earth after a
Nuclear War
Or
Devastation and havoc caused by a Nuclear War
Warfare involving the use of nuclear weapons is called nuclear
war. The dropping of an atomic bomb on Hiroshima, Japan, on Aug.
6, 1945, and another on Nagasaki three days later, introduced the
most revolutionary weapon ever to be used in warfare. Suddenly, a
tool of warfare could create conditions at the earth's surface similar
to those at the centre of the sun. By today's standards the two bombs
dropped on Japan were small—equivalent to 15,000 tons of TNT in
the case of the Hiroshima bomb and 20,000 tons in the case of the
Nagasaki bomb. Yet in these two attacks the damage was so
extensive and so many people were immediately killed that no
accurate death count was ever possible. Estimates vary from about
40,000 to 170,000 killed at Hiroshima and from 20,000 to 40,000 at
Nagasaki.
The two most common measures of the destructive power of
nuclear weapons are the kiloton and the megaton. A 1-kiloton
nuclear explosion releases as much energy as would an explosion of
1,000 tons of TNT. A megaton—1,000 kilotons—releases as much
energy as would an explosion of 1 million tons of TNT. Three 1-
megaton nuclear explosions would release the same amount of
energy as all the bombs dropped in the six years of World War II. As
will be seen in the following sections, however, such comparisons do
not yield an accurate picture of the destructive potential of nuclear
weapons, since much of the energy they release differs in form from
that released by high explosives.
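The unit relationships in this paragraph reduce to simple arithmetic. The sketch below uses the standard TNT-equivalence of about 4.184 gigajoules per ton; the implied World War II total of roughly 3 megatons follows from the text's comparison rather than from an independent figure:

```python
# TNT-equivalence arithmetic from the definitions above.
TON_TNT_J = 4.184e9            # energy of one ton of TNT, in joules
kiloton_J = 1_000 * TON_TNT_J  # 1 kiloton = 1,000 tons of TNT
megaton_J = 1_000 * kiloton_J  # 1 megaton = 1,000 kilotons

# Hiroshima ~15 kilotons, Nagasaki ~20 kilotons (figures from the text).
hiroshima_J = 15 * kiloton_J
nagasaki_J = 20 * kiloton_J

# "Three 1-megaton explosions equal all the bombs dropped in World War II"
# puts the WWII conventional total at about 3 megatons (implied, not stated).
wwii_total_kilotons = 3 * 1_000

print(megaton_J / kiloton_J)  # 1000.0 — a megaton is 1,000 kilotons
```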
Prompt effects are those that occur in the interval
immediately following detonation of a nuclear weapon. When a
nuclear bomb explodes, an enormous amount of energy is released in
an extraordinarily short interval of time—within hundredths of
millionths of a second. In the case of a 1-megaton bomb, so much
energy is released into such a small volume that the temperature can
rise quickly to about 100 million kelvins—about five times
hotter than the temperature at the center of the sun.
High explosives such as TNT derive their explosive power
from chemical reactions. Almost all the power of the explosion is in
the expanding gases yielded by the reaction. In a nuclear explosion,
however, more than 95% of the explosive power is at first in the
form of intense radiation. The initial temperature near the center of
the explosion is so high that this radiant energy is of a frequency
many thousands of times higher than that of visible light. Since air is
not transparent at these frequencies, the radiant energy is quickly
absorbed by the surrounding air, creating a superheated sphere of
high-pressure glowing-hot gas—a fireball.
Because the fireball is so hot, it undergoes a violent
expansion, initially moving outward at several millions of miles per
hour, while radiating tremendous amounts of light and heat. The
rapidly expanding fireball in turn compresses the surrounding air,
forming a steeply fronted shock wave of enormous extent and power.
By the time the fireball reaches its maximum diameter of
several thousand feet (over a thousand meters), each section of its
surface is radiating about three times as much heat and light as a
comparable area of the sun itself. Under ordinary atmospheric
conditions at a distance of 2.5 miles (4 km) the fireball radiates more
than 1,000 times more heat and light than a noontime sun on the
earth's desert. Even at a distance of 6 miles (10 km) it radiates more
than 100 times as much heat and light. Thus, extensive fires would
accompany attacks against urban/industrial targets and be more
destructive than the shock wave.
The nuclear reactions that cause the explosion also create
harmful nuclear radiation (gamma rays and neutrons). Well-
documented studies of the relatively low-yield detonations at
Hiroshima and Nagasaki indicate that nuclear radiation emanating
from the fireball was sufficiently intense at 0.7 to 0.9 mile (1.1–1.4
km) to kill or seriously injure unshielded or partially shielded
individuals. However, in the case of higher-yield weapons (100
kilotons to 10 megatons), unprotected individuals close enough to
the detonation point to suffer injury from nuclear radiations would
instead be killed by the far more intense effects of blast and heat.
The term “delayed” is applied to effects that follow the
formation of the fireball and the arrival of the initial shock wave.
Some such effects occur within minutes of an explosion, while others
may occur or persist months or even years after the detonations. The
three main delayed effects are radioactive fallout from dust that has
been mixed with radioactive bomb products and target debris; heat,
smoke, and toxic gases created by vast fires in and around target
areas; and depletion of the ozone layer by nitric and nitrous oxides
created by nuclear explosions.
The first of these phenomena—radioactive fallout—is
formed when a nuclear bomb explodes at such a low altitude that the
fireball touches or nearly touches the ground. When this occurs,
large amounts of material can be vaporized, lifted into the fireball,
and carried aloft, where they mix with the fireball's radioactive
materials. The result is a cloud of highly radioactive dust, which can
be carried great distances by wind before it drops from the air onto
the ground in the form of fallout. In the case of a 1-megaton weapon,
significant fallout is created at burst altitudes below about 0.6 mile (1
km).
Within 24 hours of a near-surface detonation, from 40% to
70% of the immense radioactivity in the cloud can be carried to the
ground as the dust settles. For a 1-megaton detonation, an area of
well over 1,000 square miles (2,600 sq km) could be so contaminated
with radiation-emitting dust that individuals who are unable to
immediately evacuate the area or find adequate shelter would suffer
serious injury or death from radiation. Especially heavy fallout
would result from near-surface nuclear explosions needed to destroy
hardened underground targets such as missile silos and military
command centers.
Long-term biological effects result from two of the most
abundant radioactive by-products contained in fallout—cesium-137
and strontium-90. Since the chemical properties of cesium resemble
those of the common body chemical potassium, cesium can find its
way into the organs of humans and other animals, where its
radioactivity can cause cancer and other diseases. Similarly,
strontium-90 is chemically similar to calcium, another common body
chemical, and it, too, can become incorporated into people's bodies.
These two isotopes, which have radioactive half-lives of about 30
years, can present a serious long-term hazard.
A 1-megaton air or near-surface burst usually would
immediately initiate fires out to 6 or 7 miles (10–11 km) from the
detonation point. The devastation caused by large numbers of
simultaneously initiated fires can hardly be overstated. At the end of
World War II, the U.S. Strategic Bombing Survey found that each
70-pound (32-kg) incendiary dropped against targets classified as
highly flammable was 12 times more effective, on a bomb-for-bomb
basis, than a 500-pound (225-kg) high-explosive bomb. Against fire-
resistant targets, the incendiaries were found to be one and a half
times more effective. Thus, on a pound-for-pound basis, incendiaries were
about 10 to 85 times more destructive than high explosives.
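The "10 to 85 times" pound-for-pound range follows from the bomb-for-bomb ratios and the two bomb weights given in the paragraph, as a short calculation shows:

```python
# Recover the pound-for-pound range from the figures in the text:
# a 70-lb incendiary vs. a 500-lb high-explosive bomb, with
# bomb-for-bomb effectiveness ratios of 12x and 1.5x.
incendiary_lb = 70
high_explosive_lb = 500
weight_ratio = high_explosive_lb / incendiary_lb  # ~7.14

flammable = 12.0 * weight_ratio       # highly flammable targets
fire_resistant = 1.5 * weight_ratio   # fire-resistant targets

print(round(fire_resistant, 1), round(flammable, 1))  # 10.7 85.7
```

The computed range of roughly 10.7 to 85.7 matches the essay's rounded "about 10 to 85 times."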
In some cases—depending on time of year, atmospheric
visibility, and types of dwellings—a nuclear explosion could ignite
fires at ranges of 10 miles (16 km) or more. Conceivably, an area of
more than 300 square miles (775 sq km) could be set on fire within
minutes of a nuclear attack. Such superfires, often called firestorms,
would be far more devastating than most other fires in human
experience. Because of the large area of the fire, the fire zone would
act as a gigantic air pump, driving enormous volumes of air skyward.
The heat released from burning debris would drive air temperatures
to many hundreds of degrees Fahrenheit. As cooler air is drawn in to
replace the air pumped away by the action of the fire, the pumping
action of the fire would create very high ground winds, possibly of
hurricane force. Large amounts of poisonous smoke and gases would
be generated by these mass fires. Conditions in the fire zone could
therefore kill many more people than blast effects alone. In cities
struck by only a few nuclear weapons, firestorms probably would kill
two to three times as many people as blast effects would.
For low-altitude nuclear explosions in urban-industrial areas,
the combined effects of fires and fallout would create a deadly
environment for people who survive the prompt effects. Within
minutes or tens of minutes of such an attack, both fires and lethal
levels of radiation would involve large areas surrounding the
detonation point. The fires would spread, while the larger and
heavier pieces of radioactive debris would fall onto the target area
and surrounding countryside.
A general nuclear war between superpowers might have
long-term effects on the earth's climate. In such a war, hundreds or
thousands of nuclear weapons might be detonated over highly
combustible urban/industrial targets. The resulting firestorms
possibly would inject massive amounts of smoke—along with
sulphur dioxide and other fire-produced gases—into the upper
atmosphere of the Northern Hemisphere. In addition, large amounts
of radioactive dust and nitric and nitrous oxides created in the high-
temperature radioactive environment of nuclear fireballs would be
injected into the atmosphere by the detonations themselves.
The massive amounts of soot, dust, and other materials
injected into the atmosphere would make it relatively opaque, greatly
reducing the amount of sunlight that reaches the earth's surface.
During the late spring and summer months, when substantial solar
heating of the earth normally occurs, the blockage of sunlight could
cause temperatures to drop—perhaps sharply—in mid-continental
regions of the middle latitudes of the Northern Hemisphere. Coastal
regions, where the moderating effects of oceans are likely to be
important, would not suffer temperature drops as large as those that
might occur in the middle of a large continent. Also, the reduction in
surface temperatures probably would be small during winter
months, when surface temperatures are already low.
Due to the complexity and uncertainty of many atmospheric
calculations, estimates of the severity and duration of the summer
temperature drop vary widely. Additional variation is created by
differing assumptions about the number and characteristics of the
targets attacked in a nuclear war. The most pessimistic estimates
have led to the prediction that a major nuclear war could result in a
“nuclear winter,” with freezing or near-freezing summer
temperatures in mid-continental regions.
Even if nuclear attacks were confined to the Northern
Hemisphere, the climatic effects might not be limited to that region.
As the smoke-filled air at high altitudes in the Northern Hemisphere
became warmer, it might expand into the cooler upper atmosphere of
the Southern Hemisphere. This might set up a circulating pattern in
which dust-and-soot-laden air of the Northern Hemisphere upper
atmosphere spread south, while air from the lower atmosphere of the
Southern Hemisphere moved north.
As with all other climatic effects, the nature and scale of this
involvement is not well understood. Even though the changes in
land-surface temperatures from smoke spreading in the atmosphere
would have serious ecological effects, these might be minor
compared with other effects on the weather. For example, some
research indicates that, under certain circumstances, shifts in weather
patterns would eliminate the monsoons that normally occur in India.
This would essentially eliminate agriculture on the Indian
subcontinent.
Other long-lasting climatic consequences might be caused by
nitric and nitrous oxides. These chemical compounds are formed
from nitrogen and oxygen in the very high temperatures and
radioactive environment of the early fireball. Typically, about 5,000
tons of the oxides are generated per megaton of yield. If the yield of
a nuclear detonation is a megaton or higher, the buoyancy and
momentum of the rising fireball will carry much of this material into
the stratosphere.
Nitric and nitrous oxides also may be injected into the
stratosphere by a second mechanism. In the aftermath of a nuclear
attack large volumes of the atmosphere around target areas could
become filled with thick, opaque smoke from mass fires.
Calculations indicate that much of this opaque smoke-filled air
would rise into the stratosphere as it is heated by the sun, carrying
additional amounts of nitric and nitrous oxides into the stratosphere.
If nitric and nitrous oxides should be carried to stratospheric
altitudes, they are expected to enter a complex series of chemical
reactions with the molecules normally present at those altitudes. The
end result of the reactions could be a loss of stratospheric ozone. The
full effects of this loss would not be felt until after the soot and dust
had settled out of the stratosphere, when the surface of the earth
would again be exposed to the full radiant energy of the sun. The
reduced levels of stratospheric ozone would then absorb a much
smaller than normal fraction of the sun's ultraviolet rays, and for
perhaps several years the level of ultraviolet radiation at the earth's
surface could be very much higher than normal. Since excessive
ultraviolet radiation injures living tissue, the reduction in
stratospheric ozone might cause widespread destruction of plant and
animal life.
The climatic effects here described are subject to
considerable uncertainty, both because the scientific basis for
predicting such effects is still under development and because no one
can know what course a nuclear war actually would take. Numerous
factors—such as the time of year, meteorological conditions, and
targeting strategies—would affect the environmental consequences
of a nuclear war.
Nationalism
The term nationalism refers to an ideology based on the
notion that people who have a sense of homogeneity rooted in a
conception of a shared history and a common ethnicity, cultural
heritage, language, or religion should be united in a single nation-
state free of “alien” political, economic, or cultural influence or
domination. The “alien” may be internal, for example, the Russian
immigrants who flooded into Estonia, Lithuania, and Latvia during
the five decades of Soviet occupation, or external, as in the case of
Great Britain, Belgium, or Portugal in relation to their colonies. The
consciousness of group identity and sense of the alien become
nationalism, an ideology, when they are linked to political
aspirations.
A political force rooted in 18th-century Europe, nationalism
has come to play an important role throughout the world. For two
centuries it has been one of the easiest and most effective means for
regimes and leaders of national or ethnic groups to generate political
support and influence. The disintegration of the Soviet Union and
Yugoslavia in the early 1990s resulted in an upsurge of nationalism
and in concerns about its role in contemporary domestic and
international politics. The disintegration of multinational states also
illuminates the underlying paradox of nationalism: it can be a force
for liberation and a force for repression, for consolidation and
disintegration, for ending conflicts and for bloodshed and war. Led
by nationalists, the republics of the Soviet Union gained
independence in 1991 without the use of force, but several have
sought to forcibly suppress nationalities or ethnic groups within their
own borders when these groups have demanded the right to self-
determination.
Partly because nationalism manifests itself in various guises
and partly because the term is used for different purposes, it is an
ambiguous concept. Scholars debate the meaning and role of
nationalism; political leaders and regimes may use it as a means to
influence and manipulate public opinion; and the general public may
regard nationalism as an emotional attachment to a mythical identity.
Compounding the difficulty of defining nationalism is the fact that
the term has been applied to a variety of phenomena that may be
related to but are distinct from nationalism: patriotism, chauvinism,
xenophobia, racism, and popular sentiment. These concepts are more
limited than nationalism or are extreme manifestations of some
aspect of the concept. Nationalism is not simply a sentiment that
focuses on group distinctions. Nor is it simply loyalty to the state.
That concept is appropriately called patriotism. An excessive or
belligerent form of patriotism based on a belief in the superiority of
one's own nation or state is chauvinism, named after Nicolas
Chauvin, a soldier in Napoleon Bonaparte's army. A fear and
loathing of foreigners or other ethnic groups is xenophobia, not
nationalism. Nor is racism, a belief that differences in human
conduct and achievement are determined by race, equivalent to
nationalism.
Finally, any analysis of nationalism is complicated by the
impossibility of disentangling its role from that of other political,
cultural, and economic influences. For example, it is difficult to
determine what role nationalism played in the disintegration of the
war-weakened Russian and Austro-Hungarian empires or, more
recently, in the disintegration of the Soviet Union as it began a
process of democratization that allowed a free rein to previously
repressed nationalisms.
Like other ideologies, nationalism offers an interpretation of
the historical and contemporary reality in which a nation finds itself,
a critique of that reality together with a conceptualization of an ideal
or preferred reality as a goal to be striven for, and a plan or set of
guidelines for reaching that goal. In effect, nationalism can be used
to mobilize people for political action by cultivating or even creating,
through propaganda and education, a national consciousness based
on myths of common identity and differentiation from others. Most
often these myths are defined in terms of a heroic, glorious, or
otherwise romanticized past or a conception of a threat to the
existence of the nation.
Because nationalism is an ideology that acquires its specific
content from the particular grievances, fears, and ambitions of a
nation—from the political context or social and economic
circumstances within which it arises—it assumes various forms and
plays different roles in the history of a given people or nation. These
manifestations of nationalism may be grouped in six categories:
anticolonial, secessionist, unifying, integrationist, irredentist, and
exclusive. These categories are not mutually exclusive—Northern
Ireland, for example, represents a case of both unifying (with the
Republic of Ireland) and secessionist (from Britain) nationalism—
but the categories are sufficiently distinct to merit separate
description of their dominant characteristics.
Nationalism in the form of anti-colonialism has been linked
primarily to Asian and African independence movements. Economic
exploitation of the Third World by Western colonial powers has been
a major theme of anti-colonial rhetoric. The more radical of the
arguments, ideologically based on a Marxist-Leninist analysis, have
sought to explain the poverty of the Third World as a result of
deliberate colonial policies of exploitation and suppression of
commercial and industrial development—policies that are held to
account for the economic wealth of the West. Within the first three
decades after World War II, nearly 50 African and more than 30
Asian states became independent as Britain, France, the Netherlands,
and later Spain and Portugal lost their colonial empires. Until
recently, this process of decolonization had been considered part of
the last phase in the history of nationalism, but a revival of
nationalist movements occurred as the Soviet Union began the
process of economic and political reform in the mid-1980s. Among
the last to regain independence from an imperial power were the
three Baltic republics, Estonia, Latvia, and Lithuania, which had
been annexed by the Soviet Union in the course of World War II but
not integrated into the regime's centers of power.
A second (and the most prevalent) manifestation of
nationalism in the last decade of the 20th century takes the form of
demands for secession from or autonomy within a state. In the 1990s
more than a dozen nationalities were seeking independence or
autonomy in the republics of the former Soviet Union. There were a
half-dozen such movements in Europe and North America, including
the Irish in Northern Ireland and the Basques in Spain and France;
several in the Middle East; nearly a dozen in Africa; and almost two
dozen in Asia. Some multinational states, notably China, Iraq, India,
Indonesia, and the Russian Federation, have been confronted with
several such movements at once. Many of these movements have
been engaged in decades-long violent struggles that have cost
hundreds and even thousands of lives.
The goal of a third category of nationalist movements is to
unite a people that live in areas within the borders of two or more
states either by establishing a unified nation-state or by joining a
neighboring state of a kindred nationality. The Kurds, who are
seeking to establish an independent Kurdish state, straddle the
borders of five states: Syria, Turkey, Iran, Iraq, and Azerbaijan. The
Basques, who are separated by the French-Spanish border, are an
example of a European people seeking a unified state. Aside from
these, most other divided people live in the republics of the former
Soviet Union, including Ukraine, Azerbaijan, Georgia, Kyrgyzstan,
Tajikistan, and the Russian Federation. Prolonged armed conflict or
terrorism has been a part of several of these nationalist campaigns.
A fourth category of nationalism manifests itself when there
is a dispersed minority within one or more states that seeks
reintegration in its former homeland. The Volga Germans and
Crimean Tatars are two examples of minorities that were forcibly
deported. The Germans were moved from their autonomous Volga
republic to Kazakhstan and other regions of Central Asia following
Germany's invasion of the Soviet Union in 1941; the Tatars were
deported from the Crimea in 1944 and dispersed throughout the
Soviet Union. Both seek to reclaim their homeland and establish
autonomous republics. As a nationalist movement claiming Palestine
as a Jewish homeland for a people in worldwide diaspora, Zionism is
the best-known case of this form of nationalism.
Territorial claims on other states on the basis that the areas in
question are inhabited by a kindred people or were inhabited or
controlled by them in the past, however remote, is another related,
but distinct, manifestation of nationalism. Serbian claims to
territories of the mid-14th-century Nemanjid dynasty, the era of the
greatest Serbian imperial expansion, are one example of this form of
nationalism. Israeli claims to territory inhabited for generations by
Palestinians on the grounds of Jewish habitation in biblical times are
another example of such irredentist claims—a word derived from the
19th-century Italian effort to “redeem” or free areas with historic
links to Rome that were under foreign control.
Finally, nationalism may manifest itself as a claim to
national exclusiveness. As policy, it has been practiced throughout
history in the form of expulsion or massacre of religious or ethnic
minorities. For centuries, Jews and Gypsies, two dispersed ethnic
minorities, were subject to more or less violent expulsion from many
European countries, but the practice of what has become known as
ethnic cleansing assumed particular ferocity in the 20th century. The
Ottoman Turks annihilated more than a million Armenians in 1915
and tens of thousands of Greeks at Smyrna in 1922. In the name of
preserving “ethnic purity,” Nazi Germany expelled from territories
occupied by or incorporated into the Third Reich and ultimately
killed more than five million Jews and hundreds of thousands of
Gypsies and other minorities. With the mass expulsions and killings
of Croats and Muslims by Serbs in Croatia and in Bosnia and
Herzegovina, of Serbs and Muslims by Croats, and, in fewer cases,
of the others by Muslims, as each tried to establish new or to
preserve existing ethnically homogeneous areas, the history of
nationalism in Europe has descended to its postwar nadir.
Population
The term “population” refers to the inhabitants of a
designated territory. The science of population study is called
demography. Population scientists (demographers) concentrate on
three aspects of population: numbers, characteristics, and distribution
of persons within the territory they inhabit. The viewpoint of
demographers is not unlike that of biologists, who often speak of
“animal populations” or “plant populations” and who count the total
number of a particular class of organism, the numbers of each of
several species or types, and the numbers inhabiting each segment of
a territory.
Like the biologists, demographers tend to view a human
population as an aggregate of living creatures that somehow must
organize and adapt themselves to their environment in order to
provide food, clothing, and shelter and to attain material comforts
above the requirements for subsistence. If the human species is to
survive, its members must breed, care for their young, and train them
for survival during the period of immaturity. Yet reproduction must
not be “too successful.” As with other animal species it is possible
for man to overpopulate his environment, creating a shortage of food
and other necessities of life.
Demographic matters are of high public interest because of
the unprecedented increases in the world population. The earth's
population doubled—from 2 billion to 4 billion—between 1930 and
1975, and would double again by about the year 2010 if the 1975
annual rate of increase, 1.9%, remained constant. A disproportionate
share of population growth is among nations that have been defined
as “developing.” These countries are least able to provide large
additions of youngsters with food, clothing, and education, and of
young adults with jobs, housing, and other consumer essentials,
while trying to break out of the vicious circle of poverty created by
technological backwardness.
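The doubling claim in the paragraph above can be checked with the standard compound-growth formula, taking the essay's 1.9% annual rate:

```python
import math

# At a constant growth rate r, population doubles after
# ln(2) / ln(1 + r) years. The 1.9% rate is the essay's 1975 figure.
rate = 0.019
doubling_years = math.log(2) / math.log(1 + rate)

print(round(doubling_years, 1))       # ~36.8 years
print(1975 + round(doubling_years))   # ~2012, i.e. "about the year 2010"
```

At 1.9% a year the doubling time is just under 37 years, which lands a little after 2010 — consistent with the essay's rounded estimate.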
In many of these areas, rapid accumulation of population has
continued to a point where the nations are entering or already have
entered a phase of net food deficit without adequate compensating
industrial and commercial development. Such nations can avert
famine only if they are aided by the few remaining countries having
large food surpluses or if they embark upon extensive programs of
modernizing their agriculture and at the same time try to slow their
population growth.
Colourful expressions, such as “population explosion,”
“population bomb,” and “demographic doom,” are often used to
dramatize the immediacy and seriousness of the population crisis.
The gravity of this crisis, as a cause of impending mass misery, as a
threat to world peace, and as a major obstacle in the path of
worldwide efforts to raise levels of living, has been acknowledged by
large and influential national and international organizations. Most
nations that have critical population problems now have population
control programs. The United Nations has declared the population
crisis equal in importance to the problems of world peace and human
rights.
Mafia
The Mafia had its origin as a secret society fighting foreign
rule in Sicily in the 15th century. In 1861, when Sicily became an
integral part of the Kingdom of Italy, the Mafia continued to operate
with its time-honoured methods of organised violence but discovered
new goals, new purposes, and new causes. It continued to challenge
established governments.
Mafia, name for an open-ended association of criminal
groups, sometimes bound by a blood oath and sworn to secrecy. The
Mafia first developed in Sicily in feudal times to protect the estates
of absentee landlords. By the 19th century it had become a network
of criminal bands that dominated the Sicilian countryside. The
members were bound by omertà, a rigid code of conduct that
included avoiding all contact and cooperation with the authorities.
The Mafia had neither a centralized organization nor a hierarchy; it
consisted of many small groups, each autonomous within its own
district. By employing terrorist methods against the peasant
electorate, the Mafia attained political office in several communities,
thus acquiring influence with the police and obtaining legal access to
weapons.
The Fascist government of Benito Mussolini succeeded for
a time in suppressing the Mafia, but the organization emerged again
after World War II. Over the next 30 years the Mafia became a
power not only in Sicily but all over Italy as well. The Italian
government began an anti-Mafia campaign in the early 1980s,
leading not only to a number of arrests and sensational trials, but also
to the assassination of several key law-enforcement officials in
retaliation. Public outrage was tempered by the arrest in 1993 of the
reputed Mafia leader, Salvatore Riina.
Beginning in the late 19th century, some members of the
Mafia emigrated to the United States. They soon became entrenched
in American organized crime, especially in the 1920s during
Prohibition. After the repeal of Prohibition in 1933 put an end to
most bootlegging, the Mafia moved into other areas, such as
gambling, protection, prostitution, and, in recent years, illicit drugs.
Links with the Italian Mafia were also maintained. As in Italy,
prosecution of reputed Mafia leaders in the United States increased
in the 1980s and 1990s.
Until the 19th century, the Sicilian word “mafia” did not
appear in Italian writings. In the mid-19th century, Sicilian-Italian
dictionaries offered different definitions of the term. One dictionary
(1868) said that “mafia” expressed bravado and dignity (“what a
mafioso horse!”), but another (1876) said “mafia” was the equivalent
of “gang” (camorra).
These definitions reflect the alternative, but related, senses in
which the word was, and is, employed in Sicily. In The Italians
(1964), Luigi Barzini says that the word should be spelled with a
lowercase “m” when it means a state of mind—admiration for manly
bravado. When the word is capitalized, it has a more specialized
meaning and is the name of a world-famous illegal organization.
The Sicilian Mafia never was a tightly knit bureaucracy.
Instead, it always has been an alliance of many small,
semiautonomous groups, each active only in a limited district and
each concerned only with specific occupations, services, or
commodities. While the alliance of semiautonomous groups
ordinarily is called “The Mafia,” in contemporary western Sicily
there are construction trades Mafie, public works Mafie, wholesale
fruit and vegetable Mafie, fishing fleet Mafie, and many others. The
crime common to each of these Mafie is extortion. If a bricklayer
does not pay a fee to a Mafia group, for example, his scaffolding is
likely to fall.
The Sicilian association originated in a peasant society,
where face-to-face relations between neighbors predominated. The
early Mafie provided law and order where the various official
governments occupying Sicily did not. They collected taxes, which
were payments for protection against bandits. They tended to be kin
groups, with a hierarchy of authority relevant only to family affairs
—the patriarch and his heirs.
The kinship groups adapted their organization and operation,
however, to meet changing conditions as Sicily became more
industrialized and urbanized. By the beginning of the 20th century,
each group had a designated chief and a concept of membership that
permitted “men of honor” in the group even if they were not
relatives. The demarcations between taxation and extortion, and
between peacekeeping and murder, became blurred. Gradually the
Mafia groups formed alliances that enabled them, collectively, to
dominate almost all aspects of life—economic, political, religious,
and social—in the western part of Sicily.
Near the end of the period of national Prohibition, in the
early 1930s, the basic structure of current American organized crime
was established as the final product of a series of “gangland wars.”
In these armed conflicts, an alliance of Italians and Sicilians first
conquered bootlegging gangs dominated by members of other ethnic
groups. Then, in a series of battles in 1930 and 1931, the same
Italians and Sicilians almost destroyed one another. The device used
to bring this “Castellamarese War” to a close set the pattern for
organized crime in America. Basically, the agreement was to divide
the nation into 20 or 30 territories, each with a semi-independent
“boss,” or capo. This arrangement for peaceful coexistence was
patterned on that of the Sicilian Mafia. It continues, with
modifications.
The 1931 “peace treaty” functioned to create a loose alliance
between the various Mafia groups that have operated in American
cities ever since. Members of this alliance have profited because
many Americans demand the illegal goods and services they offer
for sale. They have managed to secure control of a large part of the
illegal gambling in the United States. They have become identified
as loan sharks and as importers and wholesalers of narcotics. They
have infiltrated certain labour unions, where they extort money from
employers and, at the same time, cheat the members of the union.
Mafia members own a wide variety of legitimate retail firms,
restaurants and bars, hotels, trucking companies, food companies,
and other corporations. They also have corrupted some government
officials at the local, state, and federal levels.
The American alliance is thought to consist of at least 24
“families,” as the membership groups are called. All members of
these families are Italians or Sicilians, or of Italian-Sicilian descent.
The persons occupying key positions in the skeletal structure of each
“family”—boss, underboss, counsellor, captains, and soldiers—are
well known to law-enforcement agencies. Names of persons who
permanently or temporarily occupy other positions—buffers, money
movers, enforcers, and executioners—also are well known. The
“families” are linked to each other, and to associated syndicates,
through understandings and agreements and through mutual
deference to a “commission” that is made up of the leaders of the
most powerful “families.”
The Federal Bureau of Narcotics started calling this
American alliance “The Mafia” in the early 1930s, and news media
and some local law-enforcement agencies still use this name. The
Federal Bureau of Investigation (FBI) used the name “Cosa Nostra”
(“our affair”) until 1970, when the U.S. Department of Justice
dropped both the names Mafia and Cosa Nostra from official use.
Organized criminals on the eastern seaboard use the term Cosa
Nostra, but it is seldom heard in Chicago or the West, where
members may call themselves “The Syndicate,” “The Outfit,” or
“The Organization.” “The Mob” occasionally has been used by
journalists as well as by some criminals themselves to refer to all
organized criminals, whether or not they are members of the Italian-
Sicilian alliance.
Capital Punishment
Capital Punishment, legal infliction of the death penalty; in
modern law, corporal punishment in its most severe form. Lynching,
in contrast to capital punishment, is the unauthorized, illegal use of
death as a punishment. The usual alternative to the death penalty is
long-term or life imprisonment.
The earliest historical records contain evidence of capital
punishment. It was mentioned in the Code of Hammurabi (1750 BC).
In England during the reigns of King Canute and William
the Conqueror, the death penalty was not used, although the results
of interrogation and torture were often fatal. By the end of the 15th
century, English law recognized seven major crimes: treason (grand
and petty), murder, larceny, burglary, rape, and arson. By 1800, more
than 200 capital crimes were recognized, and, as a result, 1,000 or
more people were sentenced to death each year (although most
sentences were commuted by royal pardon). In the American
colonies before the War of Independence, the death penalty was
commonly authorized for a wide variety of crimes. Blacks, whether
slave or free, were threatened with death for many crimes that were
punished less severely when committed by whites.
Efforts to abolish the death penalty did not gather
momentum until the end of the 18th century; in Britain and the
United States this reform was led by the Quakers (Society of
Friends). In Europe, a short treatise, On Crimes and Punishments
(1764), by the Italian jurist Cesare Beccaria, inspired influential
thinkers such as the French philosopher Voltaire to oppose torture,
flogging, and the death penalty. Encouraged by the writings of the
philosopher Jeremy Bentham, Britain repealed all but a few of its
capital statutes during the 19th century. Several states in the United
States (led by Michigan in 1847) and a few countries (beginning with
Venezuela in 1853 and Portugal in 1867) abolished the death penalty
entirely.
Where complete abolition could not be achieved, reformers
concentrated on limiting the scope and mitigating the harshness of
the death penalty. The abolition movement finally succeeded in
Britain in 1965, after a number of dubious and even manifestly wrong
executions, carried out by hanging, in the previous two decades.
one case a posthumous pardon was issued. Since then there have
been regular and determined attempts to restore the death penalty,
but they have been rejected in Parliament. A series of miscarriages of
justice in Britain in the 1970s and 1980s in what would have been
capital crimes emphasized the dangers of executing the innocent and
made the return of the death penalty unlikely. It remains the
theoretical punishment for a very few offences such as piracy.
The death penalty has been inflicted in many ways now
regarded as barbaric and forbidden by law almost everywhere:
crucifixion, boiling in oil, drawing and quartering, impalement,
beheading, burning alive, crushing, tearing asunder, stoning, and
drowning are examples.
In the United States, the death penalty is currently authorized
in one of five ways: hanging (the traditional method of execution
throughout the English-speaking world), electrocution (introduced by
New York State in 1890), the gas chamber (adopted in Nevada in
1923), firing squad (used only in Utah), or lethal injection
(introduced in 1977 by Oklahoma). In most nations that still retain
the death penalty for some crimes, hanging or the firing squad are the
preferred methods of execution. In some countries that adhere
strictly to the traditional practices of Islam, beheading or stoning are
still occasionally employed as punishment.
The fundamental questions raised by the death penalty are
whether it is an effective deterrent to violent crime, and whether it is
more effective than the alternative of long-term imprisonment.
Defenders of the death penalty insist that because taking an
offender's life is a more severe punishment than any prison term, it
must be a better deterrent. Supporters also argue that without capital
punishment there is no adequate deterrent for those already serving a
life term who commit murder while incarcerated, or for those who
have not yet been caught but who would be liable to a life term if
arrested, as well as for revolutionaries, terrorists, traitors, and spies.
Those who argue against the death penalty as a deterrent to
crime in the United States cite the following: adjacent states, in
which one has a death penalty and the other does not, show no
significant long-term differences in the murder rate; states that use
the death penalty seem to have a higher number of homicides than
states that do not use it; states that abolish and then reintroduce the
death penalty do not seem to show any significant change in the
murder rate; no change in the rate of homicides in a given city or
state seems to occur following a local execution.
In the early 1970s, some published reports purported to show
that each execution in the United States deterred eight or more
homicides, but subsequent research has discredited this finding. The
current prevailing view among criminologists is that no conclusive
evidence exists to show that the death penalty is a more effective
deterrent to violent crime than long-term imprisonment.
The classic moral arguments in favour of the death penalty
have been biblical and retributive. "Whosoever sheds man's blood,
by man shall his blood be shed" (Genesis 9:6) has usually been
interpreted as a divine warrant for putting the murderer to death. "Let
the punishment fit the crime" is its secular counterpart. Both maxims
imply that the murderer deserves to die. Proponents of capital
punishment have also claimed that society has the right to kill in
defence of its members, just as the individual may kill in self-
defence. The analogy to self-defence, however, is somewhat
doubtful, as long as the effectiveness of the death penalty as a
deterrent to violent crimes has not been proved.
Critics of the death penalty have always pointed to the risk
of executing the innocent, although definitely established cases of
this sort in recent years are rare. They have also argued that one can
accept a retributive theory of punishment without necessarily
resorting to the death penalty; proportioning the severity of
punishment to the gravity of the crime does not require the primitive
rule of "a life for a life".
In the United States, the chief objection to capital
punishment has been that it has always been used unfairly, in at
least three major ways: with regard to race, sex, and social status.
Women, for example, are rarely sentenced to death and executed,
even though 20 per cent of all homicides in the United States in
recent years have been committed by women. Defenders of the death
penalty, however, have insisted that, because nothing inherent in the
laws of capital punishment causes sexist, racist, or class bias in its
use, these kinds of discrimination are not a sufficient reason for
abolishing the death penalty. Opponents have replied that the death
penalty is inherently subject to caprice and mistake in practice and
that it is impossible to administer fairly.
In many countries the death penalty is inflicted for a range of
crimes against people, property, public order, and the state, as, for
example, in some African, Middle Eastern (Arab), and Asian nations.
About a dozen European countries have carried out executions since
the 1970s. By the late 1980s, some Western nations had no capital
punishment, while others had abolished it except for military or
national security offences.
In the 1970s the Supreme Court of the United States reviewed
capital punishment, ruling it unconstitutional to impose
the death penalty in certain circumstances, for example, for a crime
that does not take or threaten life. However, many court decisions of
the 1980s and early 1990s have lowered bars to executions. In
addition, in the early 1990s the trend of Supreme Court rulings was
to cut back on the appeals that death row inmates could make to the
federal courts. During a visit to Mexico and the United States in
January 1999, Pope John Paul II made some of his strongest
denunciations of capital punishment to date, calling the death penalty
"cruel and unnecessary". After he made a direct request on behalf of
a death-row inmate, the state governor of Missouri, where the pope
was speaking, granted clemency. Capital punishment experts and
religious officials described the successful intervention as an
unprecedented act of papal influence.
Budget
The process of making decisions about raising revenues and
spending money for public purposes. The budget lies at the very core
of a government's decision-making process. Budgeting is the task of deciding
what share of a society's resources ought to be devoted to public
purposes and how much ought to be left in private hands. For
politicians, budgeting centers on deciding how to raise revenues and
on what programs to spend the money. It is a virtual lightning rod for
political conflict.
In most countries government budgeting is executive
centered. The president or prime minister prepares estimates of the
revenues and expenditures required for the government's programs.
In democracies these estimates are typically submitted as proposals
to the legislature for its approval. Governments prepare their budgets
for a fiscal year—that is, a budget year—which rarely coincides with
the calendar year. The U.S. government's fiscal year begins on
October 1 and ends the following September 30. In most state and
many local governments, the fiscal year runs from July 1 through
June 30. The fiscal year is numbered by the year in which it ends;
thus the federal budget year ending on Sept. 30, 1995, for example,
is known as fiscal year 1995.
In a given fiscal year, when a government's revenues equal
its expenditures, the government budget is said to be “in balance.” If
revenues exceed expenditures, the budget is “in surplus.” If
expenditures exceed revenues, the budget is “in deficit.” The budget
balance is therefore a snapshot of a government's financial condition
in a given fiscal year. The accumulated deficits of all previous fiscal
years make up the government's debt. Governments finance their debt
through borrowing—from their own citizens, from banks, and from
investors abroad. Borrowed funds are repaid with interest; these
payments usually have first call on the government's revenues and
must be made if the government is to remain solvent and
creditworthy. Because virtually all governments have a debt,
financing the debt is a central ongoing part of the budget process.
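The definitions above (balance, surplus, deficit, and debt as accumulated deficits) can be expressed directly; a minimal sketch with purely illustrative figures, not actual budget data:

```python
def budget_status(revenues, expenditures):
    """Classify a fiscal year's budget per the definitions above."""
    if revenues == expenditures:
        return "in balance"
    return "in surplus" if revenues > expenditures else "in deficit"

def accumulated_debt(yearly_balances):
    """Debt as the sum of all prior deficits (negative yearly balances)."""
    return -sum(b for b in yearly_balances if b < 0)

print(budget_status(100, 120))          # in deficit
print(accumulated_debt([-20, 5, -10]))  # 30
```

The sketch follows the text's simple definition, under which surpluses do not offset the debt; in practice governments may use surpluses to retire debt.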
The government budget both affects and is affected by the
economy. Surpluses in the budgets of nations tend to slow their
economic growth by withdrawing wealth from the private sector.
Deficits tend to fuel economic growth by pumping money into the
private sector. One of the most important decisions of national
budget makers thus is their calculation of the effect of their budget
on the economy. Given the persistently high deficits with which most
major countries have struggled—and their inability to bring them
under control—national budgets have become far less useful tools
for steering the economy. Instead, the decisions of central banks,
which make national monetary policy—managing the supply of
currency and the level of interest rates—have become much more
important in guiding the economy.
The economy, in turn, has a profound effect on government
budgets. Higher economic growth is likely to reduce government
deficits by increasing revenues faster than spending; lower economic
growth tends to increase the deficit by driving up spending for
unemployment, income security, and government health care. High
inflation is likely to increase government deficits, especially because
interest costs on the debt tend to increase at the same time.
Moreover, governments are indexing more expenditures to the level
of inflation so that service recipients keep even with the cost of
living. This has tied government spending even more closely to the
economy's performance.
One of the first steps in preparing a government budget,
therefore, is to forecast economic growth and inflation and to project
what effects they will have on government revenues and
expenditures. Since even small errors in economic forecasts can
produce large errors in estimating the budget, there is a high
premium on accurate forecasting. Because these forecasts have to be
prepared more than a year in advance of the end of the fiscal year,
and because economic forecasting is more art than science,
unexpected changes in the economy can easily disrupt a
government's economic plans.
The national budget also expresses the government's fiscal
policy. Governments are faced with numerous, often conflicting,
objectives: promoting maximum employment, fighting inflation, and
pursuing economic stability and growth. To assist in achieving these
objectives, the government may decide to stimulate the economy by
operating with a budget deficit. If inflationary pressures persist, the
government may choose to reduce the deficit, bring the budget into
balance, or produce a budget surplus to restrain the economy. To be
effective, fiscal policy must accord with monetary policy decisions
of the central bank.
Nuclear Proliferation
or
Nuclear Non-Proliferation
Nuclear Warfare and Nuclear Proliferation, dissemination
and use of nuclear weapons and military nuclear technology. The
first use of nuclear weapons in war took place in August 1945, when
the United States dropped atomic bombs on Hiroshima and Nagasaki
in Japan. Atomic or fission bombs involve a self-sustaining atom-
splitting chain reaction in a mass of uranium or plutonium, causing
the release of a huge amount of energy in a very short time. Atomic
bomb design and construction demand great precision, but it is now widely
accepted that a competent nuclear physicist could glean all the
information needed from readily available scientific literature. Most
modern nuclear weapons are scientifically more advanced and
belong in the second category of hydrogen or thermonuclear
weapons. These weapons employ a fission explosion to create
sufficient energy for the hydrogen fusion process to take place. There
is then a massive release of energy, much larger than that from an
atomic bomb. There is theoretically no limit to the size of a
thermonuclear explosion.
From the moment nuclear weapon technology was revealed
in 1945, it has been widely, though not universally, understood that
these are not "weapons" in the traditional sense of conferring a viable
military advantage on the possessor. When the use of nuclear
weapons threatened to annihilate not just the belligerents in any
conflict, but also to destroy and contaminate much of the Earth's
surface, it seemed that weapons development had finally gone
beyond the bounds of proportionality and utility. The British
strategist Basil Henry Liddell Hart commented that "with the advent
of atomic weapons, we have come either to the last page of war or
the last page of history".
A nuclear weapon involves a delivery system as well as the
bomb or warhead. The first delivery system was the long-range
bomber. As the Cold War progressed, so a wide variety of delivery
systems were developed: missiles of various ranges launched from
the air, ground, and sea; battlefield artillery; shipborne depth charges
and torpedoes; and landmines. Nuclear weapon technology is now so
diffuse that an effective means of delivery could be as simple as the
boot of a car or the proverbial "suitcase bomb". The yield or
explosive force of nuclear weapons has also increased dramatically.
The Hiroshima and Nagasaki bombs had a yield of about 13
kilotons, equivalent to 13,000 tonnes of TNT explosive. The yield of
some US and Soviet intercontinental ballistic missiles (ICBMs) was
in the region of 1.5 megatons, or 1,500,000 tonnes of TNT. In 1962
the Union of Soviet Socialist Republics exploded a thermonuclear
bomb with a staggering yield of 58 megatons. Nuclear weapons have
also become extremely accurate: the "circular error probable" of a
US MX missile is just 100 metres, meaning that there is a 50 per cent
chance of the 11,000-kilometre-range missile delivering a warhead
within 100 m of the target.
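The "circular error probable" figure can be unpacked with the standard circular-Gaussian accuracy model, under which the probability of an impact within radius r of the aim point is 1 − 0.5^((r/CEP)²); a minimal sketch using the 100-metre figure quoted above:

```python
def hit_probability(radius_m, cep_m):
    """P(impact within radius) under a circular normal error distribution."""
    return 1 - 0.5 ** ((radius_m / cep_m) ** 2)

cep = 100  # metres, the MX missile figure quoted in the text

print(hit_probability(100, cep))  # 0.5, the defining property of CEP
print(hit_probability(200, cep))  # 0.9375 within twice the CEP
```

At r = CEP the probability is exactly 50 per cent, which is what the text's definition states; the circular-Gaussian model is a common simplifying assumption, not something claimed by the passage itself.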
The first effects of a nuclear explosion are an extremely
bright and hot wave of thermal radiation, and a massive blast,
resulting in widespread destruction and fires. A series of
electromagnetic pulses (EMP) are also released which, while not
damaging to humans or buildings, could destroy communications
systems. Nuclear radiation, extremely harmful to all forms of life,
comes in the form of direct radiation at the time of the explosion, and
radiation from radioactive fallout, the dirt and debris sucked up and
irradiated during the explosion and falling back to Earth. It is
possible to design weapons which augment one or other effect.
Weapons designed to be used against military units might produce a
high EMP in order to break down the military communications
network. The best known special design was the so-called "neutron
bomb", or enhanced radiation/reduced blast bomb, which maximized
lethal direct radiation to kill tank crews but which kept the blast
effect to a minimum.
The United States had the monopoly in atomic bombs from
1945 until the first Soviet test in 1949. The US tested a
thermonuclear bomb in 1952, and the Soviet Union followed suit one
year later. In 1957 the Soviets launched the Sputnik satellite into the
first global orbit, prompting US fears of a "missile gap". Both sides
then raced to produce bombers, missiles, and other means of
delivery. By the end of the Cold War, each had well over 10,000
strategic (i.e. intercontinental) warheads, with many more in the sub-
strategic range. Other acknowledged possessors of nuclear weapons
are Britain, China, and France. Several states, such as Israel and,
until recently, South Africa, are widely suspected of possessing
nuclear weapons. Other suspected "threshold" states, those thought to
be capable of developing nuclear weapons, include Iran, North
Korea, India, and Pakistan. The dismantling of Iraq's nuclear
weapons programme after the Gulf War revealed it to be well on the
way to developing a nuclear device.
The "proliferation", or spread, of nuclear weapons has taken
place in two ways, "vertical" and "horizontal". Vertical proliferation
is the expansion and development of existing arsenals, and is
controlled through arms control agreements between the possessors.
The prevention of horizontal proliferation is the object of the nuclear
non-proliferation regime. The regime has several elements, including
national and multilateral export controls, inspection and verification
agencies, bans on weapon testing, and, most recently, an attempt to
prohibit further production of fissile material.
The key to the regime is the Nuclear Non-Proliferation
Treaty (NPT) signed in 1968. In essence, the NPT is a bargain in
which non-nuclear states agree to renounce weapons-related
research, development, and acquisition in exchange for access to
civil nuclear technology. Non-signatories, however, include Israel,
India, and Pakistan: three almost certainly nuclear-armed states that
have fought several wars in recent history. North Korea's threat to
withdraw from the NPT in 1993, to avoid opening its nuclear
facilities to inspection by the International Atomic Energy Agency
(IAEA), brought the counter-threat of a pre-emptive strike on the
facilities by US bombers. The NPT, now with over 170 signatories,
was in 1995, ahead of its official expiry in May of that year, the
subject of a major international review conference. The major powers possessing nuclear
weapons, with permanent seats on the United Nations Security
Council, secured indefinite continuation of the NPT regime by,
amongst other proposed policies, promising concerted retaliation
against any state guilty of nuclear attack or threats against an NPT
signatory.
Many states criticize the NPT as discriminatory, and
demand that the nuclear "haves" do more to meet their part of the
bargain by working more conscientiously towards nuclear
disarmament and complete test bans. The non-proliferation regime
faces several other challenges. There is a fundamental difficulty in
distinguishing between civil and military use of nuclear technology,
and it seems increasingly difficult to control the traffic in key
components and materials. The taboo against viewing nuclear
weapons as acceptable weapons of war may also have been eroded
in various parts of the world. And it is not beyond the realms
of possibility that a terrorist organization might eventually acquire
and use its own bomb.
In an important benchmark decision, the International Court
of Justice of the United Nations ruled in July 1996 that the use of
nuclear weapons in warfare was contrary to the rules of war, except
in "extreme circumstances". A ruling was initially requested in
December 1994 by a large majority vote of the UN General
Assembly, reflecting the concern of the non-nuclear majority at the
possession of nuclear weapons by the "nuclear club" of permanent
UN Security Council members. The decision rendered the use of
tactical or theatre nuclear weapons illegal in any circumstances.
Without an effective mechanism for enforcement, the decision is
significant chiefly as a moral precedent.
Environmentalism
Approach to economic and ecological questions stressing
that factors of environmental impact (such as pollution or loss of
biodiversity), whether local or global, must be taken into account and
properly weighted in assessing the acceptability of human actions.
By campaigning and other actions environmentalists, usually as
members of a like-minded group, seek to raise awareness of specific
environmental concerns. These activities may take the form of work
by pressure groups or looser associations of campaigners, or the
parliamentary processes of the various parties involved in green
politics.
Concern about environmental degradation is not a new
phenomenon. For instance, in the 4th century BC the Greek
philosopher Plato worried about the effects of deforestation and soil
erosion. The roots of modern environmentalism may be found in
Victorian Britain when natural history became a popular pastime.
Gradually, as the effects of rapid and often uncontrolled
industrialization became evident, the emphasis shifted from the study
of nature to a desire to protect it. The first recorded environmental
pressure group was the Commons, Open Spaces and Footpaths
Preservation Society, founded in 1865. At about the same time, the
first national parks were being established in the United States.
Today’s environmentalists can be seen as the direct
descendants of these early movements, and may be divided into four
broad categories: 1) Single-issue campaigning organizations, such as
Friends of the Earth, Greenpeace, or the World Wide Fund for
Nature. 2) Advocates of environmental protection within other
organizations and institutions such as the Church, education,
business, or professional bodies. 3) Developers of relevant theories
and practices for environmental protection, such as ecological
economics, organic farming, or renewable energy technology. 4)
“Green” political parties.
From the late 1960s to the mid-1980s the first three of these
groups brought the issues to the attention of both governments and
the public, using various tactics, including sometimes confrontational
campaigning.
During the 1980s a new phase of environmentalism began:
policy responses to a wide range of environmental problems were
developed; in some countries green parties were formed; a growing
number of local projects demonstrated how these policies might
work in practice; and people began to accept environmental
protection as a matter for sustained everyday concern, rating it in
opinion polls alongside unemployment and other manifestations of
economic insecurity. Membership of environmental organizations
outstripped that of political parties and began to challenge other mass
membership organizations, such as trade unions.
By the mid-1990s, the need to integrate environmental
protection properly with social and economic policies had led
environmental activists to form strategic partnerships. For instance,
Greenpeace, a group of insurance companies, and the G-77 group of
developing countries worked together to influence the first review of
the Convention on Climate Change agreed at the United Nations
Conference on Environment and Development (commonly known as
the Earth Summit, held in Rio de Janeiro in June, 1992). In April
1996, Real World was founded in the United Kingdom—a coalition
of over 30 pressure groups covering issues of the environment,
development, social justice, and democratic renewal, and united by
what members see as the root cause of their individual areas of
concern—the inherent unsustainability of current world trends in
economic and social policy. Sustainable development is increasingly
seen as a central concept in evolving new strategies for future
growth.
The Greenpeace and Real World initiatives echo at the
international and national levels the partnership approach that has
been active at the local level for some time. One high-profile instance is
the UK anti-roads campaigns. These have involved coalitions of
small, locally based groups joining forces to target and disrupt
specific road-building projects, such as the M3 extension at Twyford
Down, Hampshire, the M11 extension in north London, the M77
extension in Glasgow, and the Bath and Newbury bypasses. These
coalitions have been remarkable in uniting advocates of anarchist
and new-age philosophies with more conservative elements among
local settled communities in a common front against what they see as
an unacceptable trade-off between the conflicting interests of road
transport and the environment.
In the United Kingdom, an electoral process in which small
parties have difficulty gaining representation in Parliament has
meant that environmentalists have tended to bypass party politics
altogether, either through organizing in more powerful coalitions of
the type described above, or through a kaleidoscope of local self-help
activities which cross traditional activist group boundaries. In much
of Western Europe and beyond, however, environmentalism has
consolidated in and around the democratic process with political
parties taking certain issues into their electoral agenda. In some
countries, such as Germany and Sweden, green parties are now
viewed as an established part of the political spectrum.
The first time an environmental issue was taken to the polls
was when the United Tasmania Group (UTG) in Australia, amid
controversy over a hydroelectric plan to flood Lake Pedder, contested
state elections in April 1972. When UTG leader Richard Jones wrote
a pamphlet entitled New Ethic to outline his group’s programme, it
was as much about community and political integrity as
environmental protection. One month later, when the world’s first
nationwide green party was formed in neighbouring New Zealand, it
called itself Values.
Inspired by the Tasmanian and New Zealand greens, the first
European green party was founded in Britain in 1973. Originally
called People (later the Ecology Party and finally the Green Party), its
founder members were greatly influenced by the idea of a rapidly
growing population putting an intolerable strain on the Earth’s
capacity to provide resources and absorb pollution. This
understanding eventually led to the setting up of the United Nations
Environment Programme (UNEP).
One of the popular books of this time was Blueprint for
Survival, published by the British magazine The Ecologist. From it
People drew the basis of its political programme. Blueprint
suggested that the principal characteristics of a society that could “to
all intents and purposes ... be sustained indefinitely while giving
optimum satisfaction to its members” would be: (1) minimum
disruption of ecological processes; (2) maximum conservation of
materials and energy; (3) a population in which “recruitment”
(births) equals “loss” (deaths—that is, a static, rather than increasing,
population); (4) a social system in which individuals can enjoy,
rather than feel restricted by, the first three conditions.
Developments of these principles, which are closely related to the
concept of sustainable development, make up the programmes of
most green parties today.
A Swiss, Daniel Brélaz, made history in 1979 by becoming
the first green to be elected to a national parliament. Two years later
four greens were elected to the Belgian parliament. Neither event
made much impact. It was the gain in 1983 of 28 seats in the German
Bundestag by Die Grünen, led by the charismatic Petra Kelly, that
really heralded the birth of a new political force—the first for over
half a century. Since then, green parties have won seats at some level
of local government in a large number of countries, and entered both
the European parliament and 20 national parliaments (see table). The
Federation of European Green Parties notes the existence of over 70
parties on six continents.
Although in some countries the nature of the electoral
system or weak internal organization of green parties has prevented
electoral success, steady progress has occurred in the Netherlands,
Finland, the Republic of Ireland, and Switzerland. In March 1995
Finland’s Pekka Haavisto became the first green to join a national
government (as Minister for Environment and Planning). And
despite domestic political upheavals in Italy (the anti-corruption
drive) and Sweden (the collapse of social-democratic consensus),
greens there have managed to keep (and in Sweden’s case regain)
most of their parliamentary seats. Two capital cities, Dublin and
Rome, have had a green mayor.
Many of the democratic movements in Eastern Europe had
their roots in environmental groups, such as Ecoglasnost in Bulgaria,
the Danube Circle in Hungary, the Ecological Library in East
Germany, and the Polish Ecological Club, with several greens
joining transitional governments or winning seats in parliaments
when elections were held. However, as the difficulties of reforming
economies and building a civil society became apparent, many of
these seats were lost in subsequent elections.
Beyond Europe, Tasmanian Green Independents have held
seats in the state parliament, and Values has re-formed as the Green
Party of New Zealand. For greens in Canada and the United States,
the difficulties of nationwide organizing have kept activity local,
while in Japan, the green party has been eclipsed by the Seikatsu
Club. The Club promotes “green” consumption and, with a turnover
of around US$300 million per year, has influenced production of
both agricultural and manufactured goods.
Green parties are active in Africa and in South and Central
America, but only the long-established Partido Verde of Brazil is
represented in a national parliament. As in Japan, and especially in
countries where access to the political process is difficult or
impossible, it has tended to be non-party activity which has had the
biggest political impact. One of the best-known examples comes
from northern India, where women of the Chipko Andolan used the
slogan “ecology is permanent economy” and hugged trees to prevent
the logging which was destroying their communities.
In the 1990s environmentalism entered a new phase. With
governments in broad agreement over the seriousness of
environmental problems, sustainable development (defined by UNEP
as “development which improves people’s quality of life, within the
carrying capacity of the Earth’s life-support system”) has become a
main objective for all parties. At the same time, pressure groups and
coalitions of activists, working through specific and often
emblematic issues, seek to bring about a fundamental shift in
perceptions of the environment in the public at large.
We hear lots of bad news about the environment. But there’s
plenty of good news too. We need it. If environmentalists offer too
much bad news, the public will become despondent until they feel
paralysed—and then they will do nothing about the problems. This is
a natural human reaction. I would do it myself. I would tune out the
bad news in the first place.
So here are some pieces of good news. Let’s start with the
atmosphere and climate. The ozone layer is on the point of
recovering. This success story dates back to 1987, when scientists
began to speak with a single, loud voice. The world’s
governments moved in just nine months (instant speed for
governments) to conclude a treaty to eliminate chlorofluorocarbons
(CFCs) and other ozone-destroying chemicals. All too often, when a
dozen governments get together they cannot agree even on the
time of day. Yet 163 governments signed the treaty.
Now for what many scientists believe is the biggest
environmental problem ahead, global warming. To tackle it we need
to reduce our consumption of fossil fuels, viz. coal, oil, and natural
gas. Burning fossil fuels gives off carbon dioxide, and the build-up
of that greenhouse gas accounts for about half of global warming. To
cut back on fossil fuels, we should build more efficient cars, insulate
our buildings better, and use advanced light bulbs. Amory Lovins, a
Colorado scientist, is well on the way to inventing a streamlined and
hybrid-power car that will drive from New York to Los Angeles on a
single tankful of petrol. He and his wife Hunter live at 7,100 feet in
the Rocky Mountains, where the winter temperature plunges
below freezing every night for weeks, if not months, on end. Their
house with its exceptional energy-efficiency installations leaves them
with an annual heating bill of less than $50 (£30). Nor did the house
need way-out and costly technology. All items had long been
available at the local store, and the Lovins’s energy savings paid off
their capital investment within two years.
The same applies to most other forms of energy efficiency. If
all of us made use of them, we could save two-fifths of our carbon
dioxide emissions straightaway. We would cut back not only on
carbon dioxide but also on acid rain and urban smog—and we would
put money in our pockets. An average British household could save
enough during a year to take the family off for a long weekend
holiday. It is what is known in the trade as a “win-win” situation
where nobody ends up a loser.
Consider how our house lighting can save on electricity and
hence on fossil fuels. In Japan, more than 80 per cent of homes are lit
with low-power and long-lasting bulbs that give light as good as
conventional bulbs. In Norway, one home in every 25 (50,000 in
total) is powered by photovoltaics. In Kenya, 20,000 homes are
electrified with solar cells, 3,000 more than are connected to
the central power grid. If price trends of the 1990s continue, solar
technologies will provide power at 6 cents per kWh by the year
2000, making it broadly competitive with electricity derived from
fossil fuels.
Much the same applies to wind power. During just the past
few years, generating capacity has risen rapidly until wind power is
now the fastest-growing energy source. In Germany, its output has
topped 1,000 MW, making it the world’s most energetic (sic) wind-
power market. India possesses the second fastest growing wind-
power industry with 500 MW installed, while China plans on 1,300
MW by the year 2000. In California there are 1,700 turbines
generating enough electricity to supply all of San Francisco’s
people. By late 1995, 25,000 wind turbines worldwide produced
nearly 5,000 MW of power, albeit only 0.1 per cent of the world’s
electricity. In many parts of the world, the cost of wind-generated
electricity has fallen by two-thirds since 1990, and in many regions it
has become competitive with new coal-fired power plants. As wind
turbines enter mass production, costs should soon fall below 4 cents
per kWh, making wind one of the least expensive electricity sources.
The fossil-fuel industries, worth over $1 trillion worldwide,
will, of course, work to counter this threat, and they can mobilize
impressive financial muscle for lobbying campaigns. However, they
are being challenged by another corporate giant, the insurance
industry, worth $1.5 trillion worldwide. Insurers have suffered from
ever-increasing payouts for floods, droughts, hurricanes, and other
extreme weather events, all of which are steadily on the rise as
probable portents of global warming. Insurance leaders on both sides
of the Atlantic have asserted that if the trend persists, it could
precipitate a crisis in the industry by the year 2000, with all manner
of knock-on effects for banks and other finance institutions such as
pension funds, thus affecting all citizens.
Examples of environmentally friendly practices making
good business sense are increasing. Minnesota Mining and
Manufacturing, makers of Scotch tape and many other office
supplies, has saved more than $750 million since 1975 through its
recycling and waste management practices. The eco-technology
market as a whole was worth $210 billion in 1992 in developed
countries, and is expected to reach $320 billion by the year 2000—
only a little less than the global chemicals industry, now worth $350
billion.
Much eco-technology is being deployed in the
Mediterranean countries, following a remarkable political
breakthrough that is coming to full fruition. By the mid-1970s
concern was rising that the Mediterranean Sea was dying from
industrial effluents and the like. Both fishing and tourism were in
steep decline. The governments concerned, among them some
traditional enemies such as Israel and Syria, Greece and Turkey,
Egypt and Libya, France and Algeria, and Spain and Morocco, came
together around a United Nations table and tackled their common
problem with a common solution. Virtually all the coastal states have
ratified the Barcelona Convention for the Protection of the
Mediterranean Sea Against Pollution (1976) and its associated
protocols. Today the Mediterranean is in better shape than for
decades, thanks to its Clean-Up Plan. Many of the states now treat
their sewage before discharging it into the sea, industrial pollution
has been reduced, and eight out of ten beaches are considered safe
for swimming once more. The Mediterranean Clean-Up Plan is being
replicated in other “regional seas”, including the Persian Gulf where
Iran and Iraq sit side by side at the negotiating table.
There is still, of course, much to be done. In developing
countries, there can hardly be a more widespread pollution problem
today than dirty water. It is the source of 90 per cent of all disease
there, and it helps to kill millions of children every year. As long as
parents see their children dying, they won’t be interested in family
planning. Rather they will produce as many children as they can
manage in order to be sure that at least some survive to support the
parents in old age. So a prime means to defuse the population
explosion lies with clean water—and it is the main defence against
the number one problem, diarrhoea. We already save four million
children a year, but we could easily save another three million from
this scourge.
We could save a further two million children from other
water-related diseases through mass immunization. The two
measures together would cost only $15 (£9) per child a year, while
avoiding future medical costs averaging $150 (£90). Rich countries
usually contribute one quarter of the bill, the rest being paid by
developing countries themselves. To save the additional five million
children each year, the extra cost to the rich-world taxpayer would be
the equivalent of a beer every second month. It adds up to a splendid
opportunity. No other generation has ever had the chance to save so
many children, and at such trifling cost.
Why is it taking so long to do something about it? A major
reason is the lack of awareness. But that is changing, as the large
increase in environmental pressure groups and public opinion
surveys show. It is partly in response to this growing public
awareness that political and military leaders are starting to recall
what one visionary statesman, Mikhail Gorbachev, said, that the
threat from the skies is no longer nuclear missiles but climate
dislocation.
Finally, let us remind ourselves that there is no limit to what
we can do when we set our minds to it. Just the four years 1989-1992
saw the end of the Berlin Wall, the Cold War, Communism, and the
Soviet Union; and we made solid moves toward peace in South
Africa, the Middle East, and El Salvador. Who would have taken on
a bet in 1989 that we would achieve that much by the year 2000?
And in light of the good news items above, shouldn’t we consider
that we face insurmountable opportunities?
Nuclear Testing
Nuclear explosions conducted to test nuclear weapons.
These may be new designs under development or existing stockpile
weapons. The total explosive power of nuclear test explosions
carried out to date is estimated to be the equivalent of about 510
million tonnes (510 megatons) of TNT, some 40,000 times the
power of the atomic bomb dropped on Hiroshima in 1945.
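As a rough check, the 40,000-fold figure is consistent with the commonly cited Hiroshima yield of about 13 kilotons (an assumed value; the article does not state it):

```latex
\frac{510\,\text{Mt}}{13\,\text{kt}}
  = \frac{510{,}000\,\text{kt}}{13\,\text{kt}}
  \approx 39{,}000
  \approx 40{,}000
```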
A new type of nuclear weapon requires about seven or eight
tests before it is put into a country’s nuclear arsenal. In addition,
existing stockpile weapons are tested to check that they still work
reliably (a warhead is taken at random from the stockpile and
exploded), and to improve the safety of nuclear weapons. In these
tests, new safety features incorporated in the weapons, such as
insensitive high explosives, are tested. The sheer number, megaton
power, and spectacle of the above-ground tests were essential
elements of Cold War competition between the nuclear powers;
many considered this display the true purpose of the tests.
Nuclear explosions have also been carried out for non-
military purposes, for example, for creating underground storage
cavities for gas, extracting gas and oil, extinguishing burning oil
wells, and building canals. Non-military nuclear explosives are
similar to nuclear weapons and military information can be obtained
from exploding them. However, objections to “peaceful” nuclear
explosions tend to be mainly economic and environmental.
Nuclear tests have been conducted in space, under water, in
the atmosphere, on the surface of land, and on water (carried on a
raft or contained in a ship, for example). Nuclear weapons exploded
in the atmosphere have been dropped as bombs from aircraft,
suspended on balloons, exploded on towers, and fired aloft by
rockets.
Zero-yield or subcritical tests detonate high explosives with
fissile material that does not reach a critical mass; they produce
no significant fission energy but allow scientists to assess design
features and safety. Computer
simulation of nuclear explosions has also been carried out by the
major nuclear weapons laboratories. Some observers argue that zero-
yield tests and computer simulation allow nuclear-weapon powers to
improve the performance of their existing nuclear weapons and
possibly even develop new types of nuclear weapons.
Others claim that the development of new types of nuclear weapons
is not possible without actually testing them. It is also argued that
fundamental civilian scientific research will provide nuclear-
weapons designers with the knowledge needed to develop an entirely
new generation of nuclear weapons.
The first nuclear test, code-named Trinity, was conducted by
the United States on July 16, 1945, in the desert near Alamogordo,
New Mexico. As the culmination of the Manhattan Project, the
Trinity test was set up to try out the first plutonium weapon using the
implosion method of detonation. The Union of Soviet Socialist
Republics (USSR) began testing nuclear weapons in 1949; the
United Kingdom followed in 1952; France, in 1960; and China, in
1964. Each of these four powers conducted nuclear tests in the
atmosphere, until 1963, and underground thereafter. The United
States, the USSR, France, and the United Kingdom have also
conducted tests under water. India officially became a nuclear power
when it conducted five underground nuclear tests and a missile test
in May 1998, with Pakistan following suit days later with six
underground tests. These tests raised fears of a nuclear arms race, or
even a conflict, on the subcontinent.
The five officially recognized nuclear powers carried out
most of their atmospheric and underground nuclear tests at remote
sites, such as in the Nevada desert in the south-western United
States; the Marshall Islands and Christmas Island in the Pacific
Ocean; Moruroa and Fangataufa, two atolls in French Polynesia; and
Maralinga in Australia.
Out of a total of 2,050 nuclear explosions, about 1,300 have
been carried out by the United States. As the Cold War gathered
pace, public opposition to atmospheric testing grew, and on August
5, 1963, the United States, the USSR, and the United Kingdom
signed a Partial Test Ban Treaty, banning nuclear explosions in the
atmosphere, outer space, and under water. France stopped
conducting nuclear tests in the atmosphere in 1974; China, in 1980.
In 1962 alone, there had been 178 tests; altogether, the roughly 500
atmospheric nuclear tests had a total explosive power equivalent to that of about 30,000
Hiroshima bombs. The largest of these—and the most powerful
explosion ever—was conducted by the USSR at Novaya Zemlya in
1961; it had an explosive power equivalent to that of 58 megatons,
about four times larger than the largest US test.
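The “four times” comparison is consistent with the roughly 15-megaton yield of the largest US test, the Bravo shot described later in this article (the 15 Mt figure is an assumption, not stated here):

```latex
\frac{58\,\text{Mt}}{15\,\text{Mt}} \approx 3.9 \approx 4
```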
Underground nuclear tests were typically conducted at the
bottom of shafts. Instruments used to measure the characteristics and
effects of the nuclear explosions were placed in tunnels running off
the shafts. French tests at the Moruroa and Fangataufa atolls, for
example, were conducted at the bottoms of shafts bored to depths of
up to 1,200 m (4,000 ft) into the basalt core of the atoll.
Atmospheric nuclear tests have resulted in polluting the
Earth with radioactive isotopes. Near the test sites, they produced
intense radioactive fallout, seriously harming the health of local
populations and the environment. They also spread radioactivity
around the globe that will persist for thousands of years. Exposure to
the radiation from this radioactive fallout has already resulted in
hundreds of cancer deaths; it is estimated that eventually the death
toll may rise to about 2 million people.
In the Marshall Islands tests, the inhabitants of Rongelap
were seriously exposed to radiation. On average, they received a
radiation dose of about 190 rems (radiation units). This dose was,
according to current medical opinion, sufficient to cause a 1 in 7
(extra) risk of dying of cancer. Medical examinations carried out on
adults in Rongelap between 1970 and 1974, which compared
exposed and unexposed inhabitants, showed that there was a higher-
than-average incidence among those exposed of anaemia, thyroid
disease, rheumatic heart disease, and tumours. All water samples
taken from Bikini and Enewetak islands showed that the level of
radioactive contamination was too high to allow consumption of
food grown on the islands or fished from the sea; only in 1983 were
the inhabitants of Rongelap made aware of these findings. As a
result, in 1985 they had to evacuate to a smaller, uncultivated island
on Kwajalein Atoll.
The fragile Moruroa Atoll has been seriously affected by the
French nuclear testing programme, which was resumed briefly in
1995. The tests have caused the atoll to begin sinking—some 5 m
(16 ft) by the mid-1980s—into the lagoon. In 1979 a bomb was trapped in
the 800-m (2,625-ft) test shaft and exploded, causing a tidal wave. In
Australia, British tests were conducted on Aboriginal lands, vast
areas of which were left contaminated with scattered, partially buried
radioactive debris; many Aborigines suffered radiation sickness and
died.
In the 1980s, when British, Australian, and American ex-
servicemen began to die of cancer, the effects on army personnel
taking part in the tests—most of whom had no protective clothing or
medical monitoring for after-effects—began to be known. In many
cases, servicemen would observe the explosion and then undertake
military combat exercises thereafter, as training for the possible
future use of nuclear weapons on the battlefield. Livestock were also
used as experimental animals in many early nuclear tests.
When the biggest US bomb, code-named Bravo, was set off,
radioactive white ash fell on Rongelap, contaminating food and
drinking water; children playing nearby rubbed the ash on their faces
and arms. It has been claimed that the test was carried out with the
knowledge that winds were blowing directly towards unevacuated
islands.
Underground nuclear testing produces high levels of fallout
when the explosion is not totally contained underground and
radioactive material escapes through vents into the atmosphere.
Many underground tests
are known to have vented in this way. Even when the tests are
contained, they leave behind large amounts of long-lived radioactive
wastes in an underground cavity. Generally, the explosion destroys
the ability of the cavity to contain the radioactivity, which is likely to
last for thousands of years. It is probable that some radioactivity will
escape into the groundwater and then into the human environment at
some of the many sites where underground nuclear tests have been
carried out, such as Semipalatinsk in the former Soviet Union, which
are now severely blighted.
A Comprehensive Test Ban Treaty (CTBT) was opened for
signature at the United Nations General Assembly in September
1996; it is expected to put an end to all nuclear testing.
A comprehensive test ban can only be effective if it is
verified, so that the parties to the treaty can have confidence in it.
Verification will rely on a global network of seismic stations and
hydroacoustic arrays, supplemented by atmospheric monitoring
stations, which can reliably detect any nuclear test with an explosive
yield equivalent to 1,000 tonnes of TNT (1 kiloton) or more. By
April 1998 the CTBT had been signed by 149 countries, including
the UK, France, and Israel; out of this total, some 13 countries have
completed the process of ratification. However, the treaty has to be
ratified by 44 countries known to have either nuclear weapons or
nuclear reactors from which nuclear weapons might be derived.
France Detonates Nuclear Device at Pacific Test Site
Paris—Defying growing worldwide protests, France began a
series of nuclear weapons tests [on] Tuesday, detonating a nuclear
device at remote Mururoa atoll in the South Pacific.
The underground explosion marked the first of as many as
eight tests that the French government has said it will conduct in the
South Pacific through[out] May, and it is sure to add fuel to the
international campaign against France that has spawned angry
demonstrations from Australia to Japan to France.
“These programs are indispensable so that we can be in a
position to guarantee the viability and the certainty of our nuclear
arms in the long term,” said a statement issued by the French
Defense Ministry. “The nuclear deterrent guarantees our
independence and the ultimate protection of our vital interests.”
The bomb—the equivalent of less than 20,000 tons of TNT—was
detonated at 12:30 p.m. (2:30 p.m. PDT [Pacific Daylight
Time]), according to Col. Abel Moittier, a French military
spokesman in Papeete, Tahiti, the capital of French Polynesia in the
South Pacific.
“This is the first test, but it has to be the absolute last,” said
Sebia Hawkins, a spokesman in Fiji for the environmental group
Greenpeace, which has repeatedly mounted protests against France's
nuclear-testing plans. “The world has to unleash as much pressure as
it possibly can at this stage to ensure this becomes a reality.”
French President Jacques Chirac had decided during the
summer to begin what he described as a final round of tests at the
French-owned Pacific atolls of Mururoa and Fangataufa, arguing that
the explosions would give scientists important information and allow
them to do future tests by computer simulation.
He also has promised that when the testing is complete,
France will end its tests and sign an international nuclear test ban
treaty.
In Washington on Tuesday, the White House issued a
statement expressing regret at the French move.
“We continue to urge all of the nuclear powers, including
France, to refrain from further nuclear tests and to join in a global
moratorium as we work to complete and sign a comprehensive test
ban treaty in 1996,” the statement said.
Last month, U.S. diplomats, who had earlier pressed the
French not to conduct the tests at all, asked Chirac's government to at
least refrain from staging the test while President Clinton was
presiding over V-J Day commemorations in Hawaii over the
weekend.
Congressman Eni Falcomavaega, the non-voting delegate
from American Samoa in the Pacific, denounced the tests as “a very
sad commentary on French colonialism” and called for a worldwide
boycott of French products.
Falcomavaega, a Democrat, had returned earlier [on]
Tuesday from French Polynesia, where French commandos had
arrested and briefly detained him while he was aboard the
Greenpeace ship Rainbow Warrior 2 near Mururoa over the
weekend.
The Tuesday blast ended France's 3-year-old moratorium on
nuclear tests, which former President Francois Mitterrand declared
without consulting his own scientists or military officials.
Between 1960 and 1992, France conducted 204 nuclear tests,
17 of them in the 1960s in the Sahara desert and the remainder in
French Polynesia. Mururoa has been used for most of those.
Although Australia has been one of the most vocal opponents of the
resumption of French tests, the French atolls are actually closer to
Los Angeles than to Sydney.
Opponents of the tests, arguing that they could damage the
environment, had employed a wide variety of tactics in an effort to
persuade Chirac to back away from his decision. Dozens of marches
and hunger strikes have been staged in Papeete, about 600 miles
northwest of the blast site, as well as in Australia, Japan and France.
Protesters in Australia have burned the French flag—one
person even set fire to a French consulate—and the Australian
government is engaged in a trade war with France over the issue.
Police in Paris last week banned a protest march through the center
of the city and then arrested 300 demonstrators trying to deliver a
petition to Chirac's office at the Elysee Palace.
In Europe, anti-nuclear activists have called on consumers to
launch their own economic boycott against French products,
particularly wine and other well-known symbols of the country.
About 3 million people have signed petitions against the tests,
including 1,200 French scientists.
Greenpeace has sailed its flagship Rainbow Warrior 2 to
within a few miles of the Mururoa atoll. The ship has twice been
stopped by French commandos and towed back into international
waters. During the last clash with Greenpeace, on Friday, French
military officers also arrested several divers who swam beneath the
testing platform.
France and China are the only countries known to be
conducting nuclear tests. China carried out its most recent test last
month.
Seeking to counter widespread condemnation of the tests,
which are opposed by 60% of the French according to recent opinion
polls, the French government has released a barrage of information
on its nuclear program, including details of radiation contamination.
It even invited foreign reporters to Mururoa, a closed atoll, last
month to answer questions about its program.
French government scientists contend that of the more than
200 tests the country has conducted in the past, only three have
caused radiation contamination. Two tests in 1966 and a third in
1973 caused some contamination at Mururoa and 25 miles away at
Fangataufa, which the government says was cleaned up.
Current government statistics indicate that the level of
radiation in the atolls is lower than in France, where the effects of the
1986 Chernobyl nuclear plant accident in the former Soviet Union
still are measurable, although not considered to be a health threat. In
the lagoon waters of the atolls, radiation levels are slightly above
normal but still below levels in the Baltic Sea, according to the
French.
But anti-nuclear activists argue that the tests may have done
damage, still undetected, to the fragile coral reef. Atolls are used in
nuclear testing because they lie atop great volcanic mountains on the
floor of the ocean. Coral adheres to the mountain, thickening and
growing upward over millions of years, eventually creating a strip of
land above water that is the atoll.
Nuclear tests are typically carried out deep inside the dead
volcanic mountains. Scientists dig a well 2,000 to 3,000 feet deep,
then put long cylinders into the well that contain measuring
equipment and the nuclear device. When the device is detonated,
sensors have one-millionth of a second to record the blast data before
they are destroyed.
French scientists told reporters in Mururoa last month that
the blasts create a molten lava inside the well that turns within
minutes into a glass-like substance that traps the radioactivity in the
mountain. About 2,000 French soldiers, sailors, scientists and
technicians live on Mururoa.
International Monetary Fund (IMF)
Specialized agency of the United Nations, established, along
with the International Bank for Reconstruction and Development
(the World Bank), at the UN Monetary and Financial Conference
held in 1944 at Bretton Woods, New Hampshire. The IMF began
operations in 1947. Its purpose is to promote international monetary
co-operation and to facilitate the expansion and balanced growth of
international trade through the establishment of a multilateral system
of payments for current transactions and the elimination of foreign
trade restrictions. The IMF is a permanent forum for consideration of
issues of international payments, in which member nations are
encouraged to maintain an orderly pattern of exchange rates and to
avoid restrictive exchange practices. It also provides advice on
economic policy and fiscal policy, promotes world policy co-
ordination, and gives technical assistance for central banks,
accounting, taxation, and other financial matters. Membership,
currently comprising 182 countries, is open to all sovereign nations.
Members undertake to keep the IMF informed about
economic and financial policies that impinge on the exchange value
of their national currencies so that other members can make
appropriate policy decisions. On joining the fund, each member is
assigned a quota in special drawing rights (SDRs), the fund's unit of
account since its establishment in 1969, whose value is based on the
weighted average value of five major currencies. (In early 1999 one US
dollar was equivalent to about SDR 0.7338.) This replaced the old system whereby
subscription of members was to be 75 per cent currency and 25 per
cent gold. The total quotas at the end of August 1998 were SDR
145.3 billion. Each member's quota is an amount corresponding to its
relative position in the world economy. As the world's leading
economy, the United States had the largest quota in 1997, some SDR
25.5 billion; the smallest quota is about SDR 3.5 million. The
amount of the quota subscription determines how large a vote a
member will have in IMF deliberations, how much foreign exchange
it may withdraw from the fund, and how many SDRs it will receive
in periodic allocations. Thus, the European Union has almost 30 per
cent of the voting strength, while the United States has slightly less
than 20 per cent (1999).
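The basket valuation described above, in which the unit of account is worth the weighted sum of several currencies, can be sketched in a few lines. The currency amounts and exchange rates below are invented for illustration; they are not the actual SDR basket.

```python
# Sketch of a currency-basket unit of account such as the SDR:
# the unit's dollar value is the sum of fixed currency amounts,
# each converted to dollars at the day's exchange rate.
# All amounts and rates are hypothetical examples.

basket = {            # currency: fixed amount of that currency per unit
    "USD": 0.58, "DEM": 0.45, "JPY": 27.2, "FRF": 0.80, "GBP": 0.10,
}
usd_rate = {          # hypothetical USD value of one unit of each currency
    "USD": 1.0, "DEM": 0.55, "JPY": 0.009, "FRF": 0.17, "GBP": 1.60,
}

# Value of one basket unit in US dollars
sdr_in_usd = sum(amount * usd_rate[ccy] for ccy, amount in basket.items())
print(round(sdr_in_usd, 4))
```

Because the currency amounts are fixed while exchange rates move, the unit's dollar value changes daily with the market rates of the basket currencies.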
Members who have temporary balance of payments
difficulties may apply to the fund for needed foreign currency from
its pool of resources, to which all members have contributed through
payment of their quota subscriptions. The IMF may also borrow
from official institutions, and the General Arrangements to Borrow of
1962 gave it the right to borrow from the so-called “Paris Club” of
industrialized countries, which have undertaken to make up to
US$6.5 billion available if needed (this sum was raised to US$17
billion). The member may use this foreign exchange for a certain
time (up to about five years) to extricate itself from its balance of
payments problem, after which the currency is to be returned to the
IMF's pool of resources. The borrower pays a below-market rate of
interest for the IMF resources it uses; the member whose currency is
used receives almost all of these interest payments; the remainder
goes to the fund for operating expenses. The IMF is thus not a bank,
but sells countries SDRs in exchange for their own currency.
The IMF also supports economic development, such as the
establishment of functioning free market economies in the former
Warsaw Pact countries. This includes a special temporary fund,
established in 1993, to offset trade and balance of payments
difficulties experienced by any member country abandoning artificial
price control policies. Its Enhanced Structural Adjustment Facility
(ESAF) assists developing countries with economic reform. By the
end of April 1998 it had provided SDR 6.4 billion to 48 countries.
Loans under IMF terms frequently have stiff clauses attached
regarding domestic economic policy: these have been the cause of
some friction between the IMF and its debtors in the past.
The IMF commenced operating in 1947. It initially aimed to
confine exchange rate fluctuations between member currencies to
within 1 per cent of a par value quoted in terms of the US dollar and
hence linked to gold; 25 per cent of members' subscriptions were to
be in gold. The first major change in policy was the General
Arrangements to Borrow, concluded in 1962 when it became clear that
the fund needed increasing. The 1967 IMF meeting in Rio de Janeiro
led to the creation of the Special Drawing Right as a standard
international unit of account.
In 1971 the IMF's par value system was renegotiated to
allow a 10 per cent devaluation of the dollar and a broadening of
fluctuation ranges to 2.25 per cent. The sharp oil price rises after
1973 severely affected member countries' balances of payments, and
led effectively to the end of the Bretton Woods agreement to restrict
exchange rate fluctuations. Revision of the fund's articles in 1976
ended gold's role as a basis for the IMF and hastened the demise of
the gold standard, which the dollar left in 1978.
From 1982 the IMF devoted much of its resources to the
resolution of the worldwide debt crisis, caused by excessive lending
to developing countries. It assisted indebted members to devise
programmes of economic adjustment and has backed this assistance
with massive lending. In conjunction with its own loans, it
encouraged additional lending from commercial banks. As the
realization grew that the problems of its members involved long-term
structural inadequacies, the IMF established new facilities, using
funds borrowed from better-off members, to provide money in larger
amounts and for longer periods to members that seek to reorganize
their economies.
The IMF acquired an important new remit at the end of the
1980s with the implosion of European Communism and the
appearance of a host of European states determined to join the global
capitalist system. This role was initially met through a series of new
funds for overhauling the former command economies of Central and
Eastern Europe. The debt crisis by this time had largely abated.
The IMF has to some extent lost its original form and
purpose, since exchange rates are now largely left to the currency
markets to determine. Modern regimes that control exchange rates,
such as the European Exchange Rate Mechanism, are usually tied to
convergence programmes designed to produce international currencies,
and the ERM's breakdown in 1992 demonstrated the IMF's relative
impotence when confronted with currency problems in modern
developed economies. The financial crises in Mexico in 1995, and in
Asia and Russia during 1998, showed once more that IMF funds are now
unequal to the vast amounts of private capital circulating in the
world economy. In November 1998
it agreed a rescue package for the Brazilian economy that helped to
forestall the threat of a global financial collapse due to economic
turbulence in Asia, Latin America, and elsewhere.
The board of governors, made up of leading monetary
officials from each of the member nations, is the highest authority in
the IMF. Day-to-day operations are the responsibility of the 24-
member executive board, which represents member nations
individually (for larger countries) or in groups. The managing
director chairs the executive board. The IMF's headquarters are in
Washington, D.C.
Taxation
System of compulsory contributions levied by a government
or other qualified public body on people, corporations, and property,
in order to fund public expenditure. In deciding whom, what, and
how much to tax, all governments have economic and social
objectives. Some types of business activity or product, such as
cigarettes, may be discouraged by heavy taxes. Other businesses,
such as those operating in depressed areas, may be encouraged by
tax breaks. Or taxation may be used to bring about social reforms
through altering the distribution of wealth.
The effectiveness of any government, at central or local
level, depends on the willingness of the people governed to surrender
or exchange a measure of control over property in return for
protection and other services. Taxation is one form of this exchange.
In medieval times, taxes were customarily paid not in money
but in the form of labour or other payments in kind (such as work on
local roads or supplies of grain or other farm produce). As long as
the government’s services consisted largely of military actions and
the provision of roads and other public works, this form of taxation
satisfied most governmental needs reasonably well. Rulers could
require feudal lords to provide, as a form of tax, workers or soldiers
in numbers that reflected the noble’s rank and wealth. In the same
manner, grain levies could be imposed on landowners, both to feed
the workers or troops and to provide for other government needs. In
modern industrial nations, although taxes are levied in terms of
money, the fundamental pattern remains: the government designates
a tax base (such as income, property holdings, or a given
commodity); applies a tax-rate structure to the base; and collects the
tax (equal to the base multiplied by the applicable rate) from the
stipulated legal taxpayer.
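The pattern just described (designate a base, apply a rate structure, collect base multiplied by rate) can be sketched as a small calculation; every base and rate below is a made-up example, not a figure from any real tax code.

```python
# Illustrative sketch of the fundamental tax pattern: the government
# designates a base, applies a rate, and collects base * rate from
# the stipulated taxpayer. All figures are invented examples.

def tax_due(base: float, rate: float) -> float:
    """Tax owed on a designated base at a flat rate."""
    return base * rate

# Hypothetical bases: income, property holdings, a commodity sale.
bases = {"income": 30000.0, "property": 120000.0, "fuel_sale": 500.0}
rates = {"income": 0.20, "property": 0.01, "fuel_sale": 0.05}

# Total collected across the three designated bases
total = sum(tax_due(bases[k], rates[k]) for k in bases)
print(total)
```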
Tax systems, even today, are as varied as the nations that
devise them, ranging in complexity from the most basic
arrangements to computerized revenue systems. Simple tax
mechanisms are suitable only to the needs of those governments that
are extremely limited in scope. When government responsibilities are
extensive and diverse (as, for example, when taxes are used to
modify economic inequalities and to distribute benefits in ways that
are considered equitable), the underlying system of taxes must be
sophisticated. Elaborate networks of fiscal reporting become
essential, as do legal enforcement and a standard of public education
adequate to ensure a high degree of taxpayer compliance.
Tax systems perform differing functions, depending on the
responsibilities expected of the enacting government. Local
governments traditionally depend most heavily on property taxes,
and central governments on sales taxes and income taxes. Local
governments are required to keep their expenditures within budgetary
limits determined by their own revenues, augmented by
payments received from central government, though in some
circumstances they can borrow money. The central government,
however, can borrow or even create money; it does not have to raise
enough from its tax system to balance its budget. Taxation is also the
basic instrument of fiscal policy. In concert with its control over the
money supply (that is, its monetary policy), the government aims to
maintain the stability of the economy. In depressions, for example,
taxes may be lowered and budget deficits incurred so that consumers
will have money to buy goods and investors will have capital to put
into industry, thus stimulating production. In prosperous times, tax
increases may be needed to hold down or prevent the inflation
caused by too much money chasing too few goods; alternatively, if the
government prefers to control inflation through interest rates, taxes
may be cut for political or other ends.
Among the tax systems of different nations, wide variations
exist in how money is raised and spent. Tax and expenditure policies
reveal the fundamental ideology of a government and a political
system. Most democracies today derive their general notions of what
constitutes a good tax system from four principles enunciated in the
18th century by the Scottish economist Adam Smith.
Fairness Of fundamental importance is that any tax must be
fair—that is, citizens should be taxed in proportion to their abilities
to pay (a concept that Smith defined somewhat ambiguously as “in
proportion to the benefit they derive from the government”). A tax is
considered fair if those who have the means to pay are assessed
either in proportion to their capacity to pay or, depending on the
situation, in proportion to what they receive from the government.
Both “ability to pay” and “benefits received”, therefore, are criteria
of fairness. When government services confer identifiable personal
benefits on some individuals and not on others, and when it is
feasible to expect the users to bear a reasonable part of the cost,
financing the benefits, at least partly, by taxing the people who
benefit is considered fair, as in the repayment of loans to students by
subsequent taxation. (Obviously, this method does not apply to such
services as public welfare payments.) Taxation in accordance with
appropriately applied standards of ability to pay or of benefits
received is said to meet the requirements of vertical equity (because
such taxation exacts different amounts from people in different
situations). Just as important is horizontal equity—the principle that
people who are equally able to pay and who benefit equally should
be taxed equally.
Certainty The application of a tax should be clear and certain. This
principle, considered very important by Smith, has often been
underestimated in modern tax systems (in which open and impartial
administration usually can be taken for granted). Where the
application of taxes is uncertain and arbitrary, however, the public
can have no confidence in the system. The old British tax on
numbers of house windows was disliked and widely resisted partly
because its rationale was unclear; likewise, windfall taxes introduced
by a government on gains produced by the policies of a previous
government can appear uncertain.
Convenience Taxes should be easy to calculate and collect. Compliance
with income tax laws has increased dramatically where systems of
deducting tax from earnings before they are paid have been
introduced.
Efficiency A good tax system should be structured so that it can be
administered efficiently and economically. Taxes that are costly or
difficult to administer divert resources to non-productive uses and
diminish confidence in both the levy and the government. Worse
still, waste can also be created by excessive tax rates; economic
efforts are then shunted from high- into low-yielding activities, from
productive enterprises into tax shelters, and from open, above-board
transactions into hidden, off-the-record participation in the
underground economy. When this happens, the important principle
of tax neutrality (which maintains that a tax should not cause people
to change their economic behaviour), implied by Smith, is violated.
Smith’s tax maxims have stood the test of time remarkably well.
Other basic principles have been added to the list, but some have
occasionally been proven counter-productive. An example is the
desirability of tax elasticity—that is, the automatic response of taxes
to changing economic conditions without adjustments in tax rates.
In designing tax systems, governments customarily consider
three basic indicators of taxpayer wealth or ability to pay: what
people own, what they spend, and what they earn. Historically,
agriculture, as the fundamental basis of the subsistence economy,
became the earliest lucrative tax base. Thus, among major revenue
sources, the property tax on land and its produce is the oldest of
modern taxes.
Movable property was somewhat harder to tap as a source of
taxation, but as market places developed, taxes on the sale or transfer
of goods became productive sources of revenue. International
commerce gave rise to customs duties, levied both to yield revenue
and to control the amount and kind of imported merchandise.
Domestic trade spawned a variety of taxes, ranging from excises on
specific commodities (such as the ancient salt tax) to levies aimed at
taxing designated transactions. An example of the latter, still widely
used in some parts of the world, is the stamp tax on bills of sale and
other legal and financial documents. (The stamp tax levied by the
British government on American colonists became so prominent as a
symbol of tyranny—of “taxation without representation”—that it
helped trigger the American War of Independence.) Also widely
used today are excise taxes of many kinds, especially on luxury
items and on goods such as alcohol and cigarettes, the use of which
governments wish to regulate. Many countries levy sales taxes at the
retail level. To lighten the burden on the poor, many countries exempt
necessities such as food and prescription drugs. European Union
countries use a value added tax, levied on goods and services at each
stage of production on the value added at that stage.
Although the value added tax is comparatively new, taxes on what
people own, buy, transfer, or use have a far longer history than do
taxes on what people earn or otherwise receive in income. A
personal income tax was first used in Britain in 1799. It was dropped
for a time and then revived, and has been in continuous use in Britain
since 1842. Because an individual income tax is complex and
difficult to administer, this kind of tax was slow to take hold. By the
end of the 19th century, however, a number of countries in Europe
and elsewhere had adopted it. In the United States, the 16th
Amendment to the Constitution (ratified in 1913) was needed to
establish the legality of a federally imposed income tax.
Because no single form of wealth is a perfect indicator of
taxpayer ability to pay, most modern nations try to diversify their tax
systems. Many people think of ability to pay largely in terms of
income. This assumption, however, is losing ground as the inequities
in modern income tax systems become increasingly apparent.
Inheritance tax on bequeathed wealth has also come under
considerable criticism. A comprehensive form of taxation on
consumption expenditures has gained support among tax specialists,
but public acceptance has been lacking.
No tax is levied with perfect evenness or on a completely
comprehensive base; its burden inevitably falls more heavily on
some taxpayers than on others: this unfortunate fact exacerbates the
basic unpopularity of taxes per se. The exemptions, exceptions, and
other loopholes in tax laws are partly the result of humanitarian
concern for those who might be overburdened; partly, they reflect
political pressures; and partly, they come from administrative
inefficiency or inability to deal with the extremely complex tax
structure, or to foresee all possibilities for tax evasion. By using a
variety of taxes, governments can attempt to ensure that the tax
burden falls fairly across all taxpayers.
As governments find it ever harder to finance all their
commitments, and as taxpayers grow ever more resentful of the taxes
they are asked to pay, interest has grown in levies designed to
achieve fairness in terms of benefits received. Aside from simple
user charges such as those on public leisure amenities (which may be
thought of more as prices than as taxes), the benefits standard is
apparent in many major levies. These include petrol taxes that are
earmarked principally for road maintenance and construction;
business levies collected to provide unemployment insurance; and
social security taxes allocated to worker casualty-insurance and
retirement funds. The effectiveness of earmarking is a much disputed
issue, but it tends to appeal to politicians, though in fact all revenues
are generally pooled regardless of notional earmarking. Although
earmarking can make raising new revenue easier, it can also create
budgetary distortions, especially in times of economic stress, when
the general fund may be in need while special funds are more than
adequately filled.
The effects of taxation are difficult to judge. Even personal
income tax, which is presumed to fall entirely on the legal taxpayer,
has indirect consequences in the economy; it influences decisions to
work, save, and invest, and these decisions affect other people.
Corporate income tax may in some cases simply result in lower
corporate profits and dividends; in other cases, it may broadly reduce
the incomes of all owners of property and businesses. To the extent
that corporations compensate for the tax by raising the prices of their
products, the tax burden may simply be shifted on to consumers. To
the extent that tax-reduced corporate profit margins hold down
wages, the incidence of the tax is shifted backwards to workers.
Similar disagreements arise over the incidence of local
property taxes and over the employers’ share of social security
payroll taxes. Even the long-established view that retail sales taxes
are shifted forward from retailers to consumers is challenged in a
world in which wages and government transfers (that is, income
payments such as social security) are indexed, or automatically
adjusted upward, for inflation. Inclusion of the sales tax in the Retail
Price Index insulates recipients of indexed incomes against inflation-
induced tax increases and therefore puts the burden of those
increases on the recipients of non-indexed incomes. As awareness
grows of the difficulties in pinning down the burden patterns of
various taxes, the old distinction between direct and indirect taxes
becomes relatively meaningless.
Despite the difficulties of precise measurement, governments
are appropriately concerned with the vertical pattern of the tax
burden: does it fall proportionately more heavily on the rich than on
the poor (progressive taxation)? Does it burden everyone to the same
degree in relation to taxpaying ability (proportional taxation)? Or
does it place a relatively heavier burden on the poor (regressive
taxation)? In most modern nations, a generally progressive tax
structure is considered desirable for two reasons. First, a progressive
tax is considered more equitable (because the wealthy have more
ability to pay). Second, extremes of wealth and poverty are
considered injurious to the economic and social well-being of a
society, and a progressive tax structure tends to moderate such
extremes.
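The distinction between progressive, proportional, and regressive structures can be made concrete with a sketch of marginal-bracket taxation, the usual mechanism behind a progressive income tax; the brackets and rates below are invented for illustration.

```python
# A minimal sketch of progressive taxation: income is taxed in
# marginal brackets, so the effective (average) rate rises with
# income. The bracket limits and rates are hypothetical.

BRACKETS = [            # (upper limit of bracket, marginal rate)
    (10_000, 0.0),
    (30_000, 0.20),
    (float("inf"), 0.40),
]

def progressive_tax(income: float) -> float:
    """Tax due under the invented marginal-bracket schedule."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate  # slice taxed at this rate
        lower = upper
    return tax

for income in (20_000, 60_000):
    # The second effective rate is higher: the schedule is progressive.
    print(income, progressive_tax(income) / income)
```

A proportional tax would show the same effective rate at every income, and a regressive one a falling effective rate as income grows.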
On the other hand, tax rates that are too progressive—that
rise too steeply—may discourage both work and investment by
removing much of the reward. In the early 1980s concern about this
problem attracted the attention of policymakers to so-called supply-
side economics—to economic theories emphasizing the importance
of ensuring that taxes do not drain away incentives to investment,
either by individuals or by businesses.
In the system of feudalism that dominated Western Europe
during the Middle Ages, lords gave land and protection to their
subjects, called vassals, in exchange for military service, taxes, and
dues from the vassals. In this undated drawing, a vassal brings a tax
payment to his lord.
Taliban
Islamic fundamentalist movement in Afghanistan and
unofficial government of most of the country since 1996. The
Taliban movement was created in August 1994 by a senior mullah,
Muhammad Omar Akhund, in the southern Afghan town of
Kandahâr. The name “Taliban”, meaning “students”, supposedly
refers to the group’s origins, although most members have known
war all their lives and consequently have been students only for
rudimentary religious training.
The Taliban movement emerged out of the chaos and
uncertainty of the Afghan-Soviet conflict of 1979-1989, and
subsequent internal civil strife in Afghanistan. During the 1980s
Afghanistan was occupied by Soviet troops and thereafter ruled by a
Soviet-backed government. Afghanistan’s long war with the Union
of Soviet Socialist Republics (USSR) and its Afghan puppet
government was largely fought by mujahedin (guerrilla) factions
with military assistance from the United States; Pakistan also
provided places of refuge, military training, and other support.
After the Soviets withdrew in 1989, civil war broke out between the
mujahedin factions and the central government. Afghanistan’s
central government had long been dominated by the country’s
majority ethnic group, the Pashtuns, but after Soviet withdrawal, a
coalition government that included Tajiks, Uzbeks, Hazaras, and
other minority groups came to power. The Taliban, which emerged
as a mujahedin faction, consisted mostly of Pashtuns intent on once
again dominating the central government in Kabul. They were
trained and armed by the Frontier Constabulary, a quasi-military unit
in Pakistan, a country that also has a significant Pashtun population. The
Taliban promoted itself as a new force for peace and unity, and many
Afghans, particularly fellow Pashtuns, supported the Taliban in
hopes of respite from years of war.
In late 1994 and early 1995, the Taliban moved through the
south and west of Afghanistan, taking control of Kandahâr and many
other towns and cities dominated by fellow Pashtuns. Herât and most
of the other towns along the main southern and western highway
soon followed. In February 1995 the Taliban reached the outskirts of
Kabul, but were ousted by government forces in March. Again they
advanced to the capital in October. While continuing to bombard
Kabul, Taliban soldiers advanced and took control of eastern
Afghanistan, as well as the remote central area.
The Taliban continued their siege of Kabul intermittently
throughout 1996, until they were able to advance in September 1996
and capture the city. Government troops fled as Pashtun control was
once again restored to the capital. Shortly after the city fell to the
Taliban, Muhammad Najibullah, the last Soviet-backed president of
the country, and his brother, the security chief Shahpur Ahmadzai,
both of whom had taken refuge in the UN compound in Kabul in
1992, were dragged out by Taliban soldiers, beaten, shot, and hanged
in a public area.
After taking over Kabul, the Taliban created a government
agency, called the Ministry for Ordering What is Right and
Forbidding What is Wrong, to enforce its fundamentalist rules of
behaviour. Some of these rules had little to do with Islamic Shari’ah
law, however, and were more influenced by ancient Pashtun tribal
beliefs. Taliban leaders banned music, shut down cinemas and
burned the films, and bulldozed bottles and cans of alcohol taken
from foreign hotels. Men were ordered to grow full, untrimmed
beards (in accordance with orthodox Islam), and were rounded up
and beaten with sticks in an effort to force prayer in the mosques.
Women were told to cover themselves from head to toe in burkas
(long veils covering the whole body) with woven, dark screens in
front of the eyes; improperly dressed women were beaten. Girls’
schools were closed, and women were forbidden to work outside
their homes. As a result, hospitals lost almost all their staff and
children in orphanages were abandoned. In a country where
hundreds of thousands of men had been killed in warfare, widows who
were the sole source of income for their families found themselves
unable to work.
The Taliban continued to announce additional rules and
laws, using Radio Kabul, and trucks equipped with loudspeakers.
The Taliban made murder, adultery, and drug dealing punishable by
death, and allowed the stoning, sometimes fatal, of women escorted by
men not related to them. Other rules
enforced by the Taliban include the punishment of theft by
amputation of the hand. Many of these laws have alarmed human-
rights groups and provoked worldwide condemnation. Even Iran has
censured the Taliban’s excesses in the name of Islam.
The Taliban’s rapid takeover of Kabul in September 1996
paved the way for their conquest of the rest of the country, as
Taliban soldiers advanced north to the mountain strongholds of the
Tajiks, Uzbeks, and Hazaras. President Burhanuddin Rabbani and
Prime Minister Gulbuddin Hekmatyar fled when the Taliban took
over the capital, remaining in the northern part of the country and
fighting the Taliban alongside other factions. In November 1996 the
Taliban were driven back toward Kabul. Sporadic fighting between
the Taliban and the northern factions reached a stalemate in early
1997 with all but northern Afghanistan under Taliban rule. However,
by mid-1997 the Taliban had captured some of the northern area,
thus bringing most of Afghanistan under their control. Thousands of
refugees streamed into UN-supported camps outside Herât. Despite
concerns over human rights abuses, particularly against women, the
UN, along with the United States and other countries, has held
diplomatic talks with the Taliban in an effort to restore peace to the
area.