1. Blood
Chapter Editor: Bernard D. Goldstein

Haematopoietic and Lymphatic System

Written by ILO Content Manager

The lymphohaemopoietic system consists of the blood, the bone marrow, the spleen, the
thymus, lymphatic channels and lymph nodes. The blood and bone marrow together are
referred to as the haematopoietic system. The bone marrow is the site of cell production,
continually replacing the cellular elements of the blood (erythrocytes, neutrophils and
platelets). Production is under tight control of a group of growth factors. Neutrophils and
platelets are used as they perform their physiological functions, and erythrocytes eventually
become senescent and outlive their usefulness. For successful function, the cellular elements
of the blood must circulate in proper numbers and retain both their structural and
physiological integrity. Erythrocytes contain haemoglobin, which permits uptake and delivery
of oxygen to tissues to sustain cellular metabolism. Erythrocytes normally survive in the
circulation for 120 days while sustaining this function. Neutrophils are found in blood on their
way to tissues to participate in the inflammatory response to microbes or other agents.
Circulating platelets play a key role in haemostasis.

The production requirement of the bone marrow is a prodigious one. Daily, the marrow
replaces 3 billion erythrocytes per kilogram of body weight. Neutrophils have a circulating
half-life of only 6 hours, and 1.6 billion neutrophils per kilogram of body weight must be
produced each day. The entire platelet population must be replaced every 9.9 days. Because of
the need to produce large numbers of functional cells, the marrow is remarkably sensitive to
any infectious, chemical, metabolic or environmental insult that impairs DNA synthesis or
disrupts the formation of the vital subcellular machinery of the red blood cells, white blood
cells or platelets. Further, since the blood cells are marrow progeny, the peripheral blood
serves as a sensitive and accurate mirror of bone marrow activity. Blood is readily available
for assay via venipuncture, and examination of the blood can provide an early clue of
environmentally induced illness.
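As a rough illustration of the production figures quoted above, the following sketch converts the
per-kilogram rates into whole-body daily output; the 70 kg body weight is an illustrative
assumption, not a figure from this article.

    body_weight_kg = 70                      # illustrative adult weight (assumption)
    erythrocytes_per_kg_per_day = 3e9        # figure quoted in the text
    neutrophils_per_kg_per_day = 1.6e9       # figure quoted in the text

    print(f"Erythrocytes per day: {erythrocytes_per_kg_per_day * body_weight_kg:.1e}")
    print(f"Neutrophils per day:  {neutrophils_per_kg_per_day * body_weight_kg:.1e}")

For a 70 kg adult this works out to roughly 2 x 10^11 erythrocytes and 10^11 neutrophils every day.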

The haematological system can be viewed as both serving as a conduit for substances entering
the body and as an organ system that may be adversely affected by occupational exposures to
potentially harmful agents. Blood samples may serve as a biological monitor of exposure and
provide a way to assess the effects of occupational exposure on the lymphohaematopoietic
system and other body organs.

Environmental agents can interfere with the haematopoietic system in several ways, including
inhibition of haemoglobin synthesis, inhibition of cell production or function,
leukaemogenesis and increased red blood cell destruction.
Abnormality of blood cell number or function caused directly by occupational hazards can be
divided into those for which the haematological problem is the most important health effect,
such as benzene-induced aplastic anaemia, and those for which the effects on the blood are
direct but of less significance than the effects on other organ systems, such as lead-induced
anaemia. Sometimes haematological disorders are a secondary effect of a workplace hazard.
For example, secondary polycythaemia can be the result of an occupational lung disease.
Table 1 lists those hazards which are reasonably well accepted as having a direct effect on the
haematological system.

Table 1. Selected agents implicated in environmentally and occupationally acquired methaemoglobinaemia

 Nitrate-contaminated well water
 Nitrous gases (in welding and silos)
 Aniline dyes
 Food high in nitrates or nitrites
 Mothballs (containing naphthalene)
 Potassium chlorate
 Nitrobenzenes
 Phenylenediamine
 Toluenediamine

Examples of Workplace Hazards Primarily Affecting the Haematological System

Benzene

Benzene was identified as a workplace poison producing aplastic anaemia in the late 19th
century (Goldstein 1988). There is good evidence that it is not benzene itself but rather one or
more metabolites of benzene that is responsible for its haematological toxicity, although the
exact metabolites and their subcellular targets have yet to be clearly identified (Snyder, Witz
and Goldstein 1993).

Implicit in the recognition that benzene metabolism plays a role in its toxicity, as well as in
recent research on the metabolic pathways of compounds such as benzene, is the likelihood that
there will be differences in human sensitivity to benzene, based upon differences in metabolic
rates conditioned by environmental or genetic factors. There is
some evidence of a familial tendency towards benzene-induced aplastic anaemia, but this has
not been clearly demonstrated. Cytochrome P-450(2E1) appears to play an important role in
the formation of haematotoxic metabolites of benzene, and there is some suggestion from
recent studies in China that workers with higher activities of this cytochrome are more at risk.
Similarly, it has been suggested that thalassaemia minor, and presumably other disorders in
which there is increased bone marrow turnover, may predispose a person to benzene-induced
aplastic anaemia (Yin et al. 1996). Although there are indications of some differences in
susceptibility to benzene, the overall impression from the literature is that, in contrast to a
variety of other agents such as chloramphenicol, for which there is a wide range in sensitivity,
even including idiosyncratic reactions producing aplastic anaemia at relatively trivial levels of
exposure, there is a virtually universal response to benzene exposure, leading to bone marrow
toxicity and eventually aplastic anaemia in a dose-dependent fashion.

The effect of benzene on the bone marrow is thus analogous to the effect produced by
chemotherapeutic alkylating agents used in the treatment of Hodgkin’s disease and other
cancers (Tucker et al. 1988). With increasing dosage there is a progressive decline in all of
the formed elements of the blood, which is sometimes manifested initially as anaemia,
leucopenia or thrombocytopenia. It should be noted that it would be most unexpected to
observe a person with thrombocytopenia that was not at least accompanied by a low-normal
level of the other formed blood elements. Further, such an isolated cytopenia would not be
expected to be severe. In other words, an isolated white blood count of 2,000 per cubic millimetre, where
the normal range is 5,000 to 10,000, would suggest strongly that the cause of the leucopenia
was other than benzene (Goldstein 1988).

The bone marrow has substantial reserve capacity. Following even a significant degree of
hypoplasia of the bone marrow as part of a chemotherapeutic regimen, the blood count
usually eventually returns to normal. However, individuals who have undergone such
treatments cannot respond by producing as high a white blood cell count when exposed to a
challenge to their bone marrow, such as endotoxin, as can individuals who have never
previously been treated with such chemotherapeutic agents. It is reasonable to infer that there
are dose levels of an agent such as benzene which can destroy bone marrow precursor cells
and thus affect the reserve capability of the bone marrow without incurring sufficient damage
to lead to a blood count that was lower than the laboratory range of normal. Because routine
medical surveillance may not reveal abnormalities in a worker who may have indeed suffered
from the exposure, the focus on worker protection must be preventive and employ basic
principles of occupational hygiene. Although the extent of the development of bone marrow
toxicity in relationship to benzene exposure at the workplace remains unclear, it does not
appear that a single acute exposure to benzene is likely to cause aplastic anaemia. This
observation might reflect the fact that bone marrow precursor cells are at risk only in certain
phases of their cell cycle, perhaps when they are dividing, and not all the cells will be in that
phase during a single acute exposure. The rapidity with which cytopenia develops depends in
part on the circulating lifetime of the cell type. Complete cessation of bone marrow
production would lead first to a leucopenia because white blood cells, particularly
granulocytic blood cells, persist in circulation for less than a day. Next there would be a
decrease in platelets, whose survival time is about ten days. Lastly there would be a decrease
in red cells, which survive for a total of 120 days.

Benzene not only destroys the pluripotential stem cell, which is responsible for the production
of red blood cells, platelets and granulocytic white blood cells, but it also has been found to
cause a rapid loss in circulating lymphocytes in both laboratory animals and in humans. This
suggests the potential for benzene to have an adverse effect on the immune system in exposed
workers, an effect that has not been clearly demonstrated as yet (Rothman et al. 1996).

Benzene exposure has been associated with aplastic anaemia, which is frequently a fatal
disorder. Death is usually caused either by infection, because the reduction in white blood cells
(leucopenia) compromises the body’s defence system, or by haemorrhage, due to the
reduction in platelets necessary for normal clotting. An individual exposed to benzene at a
workplace who develops a severe aplastic anaemia must be considered to be a sentinel for
similar effects in co-workers. Studies based on the discovery of a sentinel individual often
have uncovered groups of workers who exhibit obvious evidence of benzene haematotoxicity.
For the most part, those individuals who do not succumb relatively quickly to aplastic
anaemia will usually recover following removal from the benzene exposure. In one follow-up
study of a group of workers who previously had significant benzene-induced pancytopenia
(decrease in all blood cell types) there were only minor residual haematological abnormalities
ten years later (Hernberg et al. 1966). However, some workers in these groups, with initially
relatively severe pancytopenia, progressed from aplastic anaemia, through a myelodysplastic
preleukaemic phase, to the eventual development of acute myelogenous leukaemia (Laskin and
Goldstein 1977). Such progression of disease is
not unexpected since individuals with aplastic anaemia from any cause appear to have a
higher-than-expected likelihood of developing acute myelogenous leukaemia (De Planque et
al. 1988).

Other causes of aplastic anaemia

Other agents in the workplace have been associated with aplastic anaemia, the most notable
being radiation. The effects of radiation on bone marrow stem cells have been employed in
the therapy of leukaemia. Similarly, a variety of chemotherapeutic alkylating agents produce
aplasia and pose a risk to workers responsible for producing or administering these
compounds. Radiation, benzene and alkylating agents all appear to have a threshold level
below which aplastic anaemia will not occur.

Protection of the production worker becomes more problematic when the agent has an
idiosyncratic mode of action in which minuscule amounts may produce aplasia, such as
chloramphenicol. Trinitrotoluene, which is absorbed readily through the skin, has been
associated with aplastic anaemia in munition plants. A variety of other chemicals has been
reported to be associated with aplastic anaemia, but it is often difficult to determine causality.
An example is the pesticide lindane (gamma-benzene hexachloride). Case reports have
appeared, generally following relatively high levels of exposure, in which lindane is
associated with aplasia. This finding is far from being universal in humans, and there are no
reports of lindane-induced bone marrow toxicity in laboratory animals treated with large
doses of this agent. Bone marrow hypoplasia has also been associated with exposure to
ethylene glycol ethers, various pesticides and arsenic (Flemming and Timmeny 1993).

Leukaemia, Malignant Lymphomas and Multiple Myeloma

Written by ILO Content Manager

Leukaemias

Leukaemias constitute 3% of all cancers worldwide (Linet 1985). They are a group of
malignancies of blood precursor cells, classified according to cell type of origin, degree of
cellular differentiation, and clinical and epidemiological behaviour. The four common types
are acute lymphocytic leukaemia (ALL), chronic lymphocytic leukaemia (CLL), acute
myelocytic leukaemia (AML) and chronic myelocytic leukaemia (CML). ALL develops
rapidly, is the most common form of leukaemia in childhood and originates in the white blood
corpuscles in the lymph nodes. CLL arises in bone marrow lymphocytes, develops very
slowly and is more common in aged persons. AML is the common form of acute leukaemia in
adults. Rare types of acute leukaemia include monocytic, basophilic, eosinophilic, plasma-,
erythro- and hairy-cell leukaemias. These rarer forms of acute leukaemia are sometimes
lumped together under the heading acute non-lymphocytic leukaemia (ANLL), due in part to
the belief that they arise from a common stem cell. Most cases of CML are characterized by a
specific chromosomal abnormality, the Philadelphia chromosome. The eventual outcome of
CML often is leukaemic transformation to AML. Transformation to AML also can occur in
polycythaemia vera and essential thrombocythaemia, neoplastic disorders with elevated red
cell or platelet levels, as well as myelofibrosis and myeloid dysplasia. This has led to
characterizing these disorders as related myeloproliferative diseases.

The clinical picture varies according to the type of leukaemia. Most patients suffer from
fatigue and malaise. Haematological count anomalies and atypical cells are suggestive of
leukaemia and indicate a bone marrow examination. Anaemia, thrombocytopenia,
neutropenia, elevated leucocyte count and elevated number of blast cells are typical signs of
acute leukaemia.

Incidence: The annual overall age-adjusted incidence of leukaemias varies between 2 and 12
per 100,000 in men and between 1 and 11 per 100,000 in women in different populations.
High figures are encountered in North American, western European and Israeli populations,
while low ones are reported for Asian and African populations. The incidence varies
according to age and to type of leukaemia. There is a marked increase in the incidence of
leukaemia with age, and there is also a childhood peak which occurs around two to four years
of age. Different leukaemia subgroups display different age patterns. CLL is about twice as
frequent in men as in women. Incidence and mortality figures of adult leukaemias have tended
to stay relatively stable over the past few decades.

Risk factors: Familial factors in the development of leukaemia have been suggested, but the
evidence for this is inconclusive. Certain immunological conditions, some of which are
hereditary, appear to predispose to leukaemia. Down’s syndrome is predictive of acute
leukaemia. Two oncogenic retroviruses (human T-cell leukaemia virus-I, human T-
lymphotropic virus-II) have been identified as being related to the development of
leukaemias. These viruses are thought to be early-stage carcinogens and as such are
insufficient causes of leukaemia (Keating, Estey and Kantarjian 1993).

Ionizing radiation and benzene exposure are established environmental and occupational
causes of leukaemias. The incidence of CLL, however, has not been associated with exposure
to radiation. Radiation and benzene-induced leukaemias are recognized as occupational
diseases in a number of countries.

Much less consistently, leukaemia excesses have been reported for the following groups of
workers: drivers; electricians; telephone linepersons and electronic engineers; farmers; flour
millers; gardeners; mechanics, welders and metal workers; textile workers; paper-mill
workers; and workers in the petroleum industry and distribution of petroleum products. Some
particular agents in the occupational environment have been consistently associated with
increased risk of leukaemia. These agents include butadiene, electromagnetic fields, engine
exhaust, ethylene oxide, insecticides and herbicides, machining fluids, organic solvents,
petroleum products (including gasoline), styrene and unidentified viruses. Paternal and
maternal exposures to these agents prior to conception have been suggested to increase the
leukaemia risk in the offspring, but the evidence at this time is insufficient to establish such
exposure as causative.
Treatment and prevention: Up to 75% of male cases of leukaemia may be preventable
(International Agency for Research on Cancer 1990). Avoidance of exposure to radiation and
benzene will reduce the risk of leukaemias, but the potential reduction worldwide has not
been estimated. Treatments of leukaemias include chemotherapy (single agents or
combinations), bone marrow transplant and interferons. Bone marrow transplant in both ALL
and AML is associated with a disease-free survival between 25 and 60%. The prognosis is
poor for patients who do not achieve remission or who relapse. Of those who relapse, about
30% achieve a second remission. The major cause of failure to achieve remission is death
from infection and haemorrhage. Only about 10% of patients with untreated acute leukaemia
survive one year after diagnosis. The median survival of patients with CLL before the initiation of treatment
is 6 years. The length of survival depends on the stage of the disease when the diagnosis is
initially made.

Leukaemias may occur following medical treatment with radiation and certain
chemotherapeutic agents of another malignancy, such as Hodgkin’s disease, lymphomas,
myelomas, and ovarian and breast carcinomas. Most of these secondary cases of leukaemia
are acute non-lymphocytic leukaemias or myelodysplastic syndrome, which is a preleukaemic
condition. Chromosomal abnormalities appear to be more readily observed in both treatment-
related leukaemias and in leukaemias associated with radiation and benzene exposure. These
acute leukaemias also share a tendency to resist therapy. Activation of the ras oncogene has
been reported to occur more frequently in patients with AML who worked in professions
deemed to be at high risk of exposure to leukaemogens (Taylor et al. 1992).

Malignant Lymphomas and Multiple Myeloma

Malignant lymphomas constitute a heterogeneous group of neoplasms primarily affecting
lymphoid tissues and organs. Malignant lymphomas are divided into two major cellular types:
Hodgkin’s disease (HD) (International Classification of Disease, ICD-9 201) and non-
Hodgkin lymphomas (NHL) (ICD-9 200, 202). Multiple myeloma (MM) (ICD-9 203)
represents a malignancy of plasma cells within the bone marrow and accounts usually for less
than 1% of all malignancies (International Agency for Research on Cancer 1993). In 1985,
malignant lymphomas and multiple myelomas ranked seventh among all cancers worldwide.
They represented 4.2% of all estimated new cancer cases and amounted to 316,000 new cases
(Parkin, Pisani and Ferlay 1993).

Mortality and incidence of malignant lymphomas do not reveal a consistent pattern across
socio-economic categories worldwide. Children’s HD has a tendency to be more common in
less developed nations, while relatively high rates have been observed in young adults in
countries in more developed regions. In some countries, NHL seems to be in excess among
people in higher socio-economic groups, while in other countries no such clear gradient has
been observed.

Occupational exposures may increase the risk of malignant lymphomas, but the
epidemiological evidence is still inconclusive. Asbestos, benzene, ionizing radiation,
chlorinated hydrocarbon solvents, wood dust and chemicals in leather and rubber-tire
manufacturing are examples of agents that have been associated with the risk of unspecified
malignant lymphomas. NHL is more common among farmers. Further suspect occupational
agents for HD, NHL and MM are mentioned below.
Hodgkin’s disease

Hodgkin’s disease is a malignant lymphoma characterized by the presence of multinucleated
giant (Reed-Sternberg) cells. Lymph nodes in the mediastinum and neck are involved in about
90% of the cases, but the disease may occur in other sites as well. Histological subtypes of
HD differ in their clinical and epidemiological behaviour. The Rye classification system
includes four subtypes of HD: lymphocytic predominance, nodular sclerosis, mixed cellularity
and lymphocytic depletion. The diagnosis of HD is made by biopsy and treatment is radiation
therapy alone or in combination with chemotherapy.

The prognosis of HD patients depends on the stage of the disease at diagnosis. About 85 to
100% of patients without massive mediastinal involvement survive for about 8 years from the
start of the treatment without further relapse. When there is massive mediastinal involvement,
about 50% of the cases suffer a relapse. Radiation therapy and chemotherapy may involve
various side effects, such as secondary acute myelocytic leukaemia discussed earlier.

The incidence of HD has not undergone major changes over time but for a few exceptions,
such as the populations of the Nordic countries, in which the rates have declined
(International Agency for Research on Cancer 1993).

Available data show that in the 1980s the populations of Costa Rica, Denmark and Finland
had median annual incidence rates of HD of 2.5 per 100,000 in men and 1.5 per 100,000 in
women (standardized to world population); these figures yielded a sex ratio of 1.7. The
highest rates in males were recorded for populations in Italy, the United States, Switzerland
and Ireland, while the highest female rates were in the United States and Cuba. Low incidence
rates have been reported for Japan and China (International Agency for Research on Cancer
1992).

Viral infection has been suspected as involved in the aetiology of HD. Infectious
mononucleosis, which is induced by the Epstein-Barr virus, a herpes virus, has been shown to
be associated with increased risk of HD. Hodgkin’s disease may also cluster in families, and
other time-space constellations of cases have been observed, but the evidence that there are
common aetiological factors behind such clusters is weak.

The extent to which occupational factors can lead to increased risk for HD has not been
established. There are three predominant suspect agents—organic solvents, phenoxy
herbicides and wood dust—but the epidemiological evidence is limited and controversial.

Non-Hodgkin lymphoma

About 98% of the NHLs are lymphocytic lymphomas. At least four different classifications of
lymphocytic lymphomas have been commonly used (Longo et al. 1993). In addition, Burkitt’s
lymphoma is endemic in certain areas of tropical Africa and New Guinea.

Thirty to fifty per cent of NHLs are curable with chemotherapy and/or radiotherapy. Bone
marrow transplants may be necessary.

Incidence: High annual incidences of NHL (over 12 per 100,000, standardized to world
standard population) have been reported during the 1980s for the White population in the
United States, particularly San Francisco and New York City, as well as in some Swiss
cantons, in Canada, in Trieste (Italy) and Porto Alegre (Brazil, in men). The incidence of
NHL is usually higher in men than in women, with rates in men typically 50 to 100% higher
than in women. In Cuba, and in the White population of Bermuda, however, the
incidence is slightly higher in women (International Agency for Research on Cancer 1992).

NHL incidence and mortality rates have been rising in a number of countries worldwide
(International Agency for Research on Cancer 1993). By 1988, the average annual incidence
in US White men had increased by 152%. Part of the increase is due to changes in the diagnostic
practices of physicians and part to an increase in immunosuppressive conditions which
are induced by the human immunodeficiency virus (HIV, associated with AIDS), other
viruses and immunosuppressive chemotherapy. These factors do not explain the entire
increase, and a considerable proportion of residual increase may be explained by dietary
habits, environmental exposures such as hair dyes, and possibly familial tendencies, as well as
some rare factors (Hartge and Devesa 1992).

Occupational determinants have been suspected to play a role in the development of NHL. It
is currently estimated that about 10% of NHLs in the United States are related to occupational
exposures (Hartge and Devesa 1992), but this percentage varies by time period and
location. The occupational causes are not well established. Excess risk of NHL has been
associated with electric power plant jobs, farming, grain handling, metal working, petroleum
refining and woodworking, and has been found among chemists. Occupational exposures that
have been associated with an increased NHL risk include ethylene oxide, chlorophenols,
fertilizers, herbicides, insecticides, hair dyes, organic solvents and ionizing radiation. A
number of positive findings for phenoxyacetic acid herbicide exposure have been reported
(Morrison et al. 1992). Some of the herbicides involved were contaminated with 2,3,7,8-
tetrachlorodibenzo-para-dioxin (TCDD). The epidemiological evidence for occupational
aetiologies of NHL is still limited, however.

Multiple myeloma

Multiple myeloma (MM) involves predominantly bone (especially the skull), bone marrow
and kidney. It represents malignant proliferation of B-lymphocyte-derived cells that
synthesize and secrete immunoglobulins. The diagnosis is made using radiology, a test for the
MM-specific Bence-Jones proteinuria, determination of abnormal plasma cells in the bone
marrow, and immunoelectrophoresis. MM is treated with bone marrow transplantation,
radiation therapy, conventional chemotherapy or polychemotherapy, and immunological
therapy. Treated MM patients survive 28 to 43 months on the average (Ludwig and Kuhrer
1994).

The incidence of MM increases sharply with increasing age. High age-standardized annual
incidence rates (5 to 10 per 100,000 in men and 4 to 6 per 100,000 in women) have been
encountered in the United States Black populations, in Martinique and among the Maoris in
New Zealand. Many Chinese, Indian, Japanese and Filipino populations have low rates (less
than 1.0 per 100,000 person-years in men and less than 0.3 per 100,000 person-years in
women) (International Agency for Research on Cancer 1992). The rate of multiple myeloma
has been on the increase in Europe, Asia, Oceania and in both the Black and White United
States populations since the 1960s, but the increase has tended to level off in a number of
European populations (International Agency for Research on Cancer 1993).
Throughout the world there is an almost consistent excess among males in the incidence of
MM. This excess is typically of the order of 30 to 80%.

Familial and other case clusterings of MM have been reported, but the evidence is
inconclusive as to the causes of such clusterings. The excess incidence among the United
States Black population as contrasted with the White population points towards the possibility
of differential host susceptibility among population groups, which may be genetic. Chronic
immunological disorders have on occasion been associated with the risk of MM. The data on
social class distribution of MM are limited and unreliable for conclusions on any gradients.

Occupational factors: Epidemiological evidence of an elevated risk of MM in gasoline-exposed
workers and refinery workers suggests a benzene aetiology (Infante 1993). An excess
of multiple myeloma has repeatedly been observed in farmers and farm workers. Pesticides
represent a suspect group of agents. The evidence for carcinogenicity is, however, insufficient
for phenoxyacetic acid herbicides (Morrison et al. 1992). Dioxins are sometimes impurities in
some phenoxyacetic acid herbicides. There is a reported significant excess of MM in women
residing in a zone contaminated with 2,3,7,8-tetrachlorodibenzo-para-dioxin after an accident
in a plant near Seveso, Italy (Bertazzi et al. 1993). The Seveso results were based on two
cases which occurred during ten years of follow-up, and further observation is needed to
confirm the association. Another possible explanation for the increased risk in farmers and
farm workers is exposure to some viruses (Priester and Mason 1974).

Further suspect occupations associated with increased risk of MM include painters and truck
drivers; further suspect agents include asbestos, engine exhaust, hair-colouring products,
radiation, styrene, vinyl chloride and wood dust. The evidence for these occupations and
agents remains inconclusive.

Agents or Work Conditions Affecting the Blood

Written by ILO Content Manager

Circulating Red Blood Cells

Interference in haemoglobin oxygen delivery through alteration of haeme

The major function of the red cell is to deliver oxygen to the tissue and to remove carbon
dioxide. The binding of oxygen in the lung and its release as needed at the tissue level
depends upon a carefully balanced series of physicochemical reactions. The result is a
complex dissociation curve which serves in a healthy individual to maximally saturate the red
cell with oxygen under standard atmospheric conditions, and to release this oxygen to the
tissues based upon oxygen level, pH and other indicators of metabolic activity. Delivery of
oxygen also depends upon the flow rate of oxygenated red cells, a function of viscosity and of
vascular integrity. Within the range of the normal haematocrit (the volume of packed red
cells), the balance is such that any decrease in blood count is offset by the decrease in
viscosity, allowing improved flow. A decrease in oxygen delivery to the extent that someone
is symptomatic is usually not observed until the haematocrit is down to 30% or less;
conversely, an increase in haematocrit above the normal range, as seen in polycythaemia, may
decrease oxygen delivery due to the effects of increased viscosity on blood flow. An
exception is iron deficiency, in which symptoms of weakness and lassitude appear, primarily
due to the lack of iron rather than to any associated anaemia (Beutler, Larsh and Gurney
1960).
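The dissociation curve mentioned above is often approximated with a Hill-type relationship; the
sketch below is purely illustrative, and the P50 (about 26.6 mmHg) and Hill coefficient (about
2.7) are standard textbook values for normal adult blood rather than figures taken from this
article.

    def hill_saturation(po2_mmhg, p50=26.6, n=2.7):
        """Approximate fractional haemoglobin O2 saturation (Hill equation).
        p50 and n are textbook approximations, not values from this article."""
        return po2_mmhg ** n / (po2_mmhg ** n + p50 ** n)

    # Roughly 97% saturation at a typical arterial pO2 (~100 mmHg)
    # and about 75% at a typical venous pO2 (~40 mmHg).
    print(hill_saturation(100), hill_saturation(40))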

Carbon monoxide is a ubiquitous gas which can have severe, possibly fatal, effects on the
ability of haemoglobin to transport oxygen. Carbon monoxide is discussed in detail in the
chemicals section of this Encyclopaedia.

Methaemoglobin-producing compounds. Methaemoglobin is another form of haemoglobin that
is incapable of delivering oxygen to the tissues. In haemoglobin, the iron atom at the
centre of the haeme portion of the molecule must be in its chemically reduced ferrous state in
order to participate in the transport of oxygen. A certain amount of the iron in haemoglobin is
continuously oxidized to its ferric state. Thus, approximately 0.5% of total haemoglobin in the
blood is methaemoglobin, which is the chemically oxidized form of haemoglobin that cannot
transport oxygen. An NADH-dependent enzyme, methaemoglobin reductase, reduces ferric
iron back to ferrous haemoglobin.

A number of chemicals in the workplace can induce levels of methaemoglobin that are
clinically significant, as for example in industries using aniline dyes. Other chemicals that
have been found frequently to cause methaemoglobinaemia in the workplace are
nitrobenzenes, other organic and inorganic nitrates and nitrites, hydrazines and a variety of
quinones (Kiese 1974). Some of these chemicals are listed in Table 1 and are discussed in
more detail in the chemicals section of this Encyclopaedia. Cyanosis, confusion and other
signs of hypoxia are the usual symptoms of methaemoglobinaemia. Individuals who are
chronically exposed to such chemicals may have blueness of the lips when methaemoglobin
levels are approximately 10% or greater. They may have no other overt effects. The blood has
a characteristic chocolate brown colour with methaemoglobinaemia. Treatment consists of
avoiding further exposure. Significant symptoms may be present, usually at methaemoglobin
levels greater than 40%. Therapy with methylene blue or ascorbic acid can accelerate
reduction of the methaemoglobin level. Individuals with glucose-6-phosphate dehydrogenase
deficiency may have accelerated haemolysis when treated with methylene blue (see below for
discussion of glucose-6-phosphate dehydrogenase deficiency).

There are inherited disorders leading to persistent methaemoglobinaemia, either due to
heterozygosity for an abnormal haemoglobin, or to homozygosity for deficiency of red cell
NADH-dependent methaemoglobin reductase. Individuals who are heterozygous for this
enzyme deficiency will not be able to decrease elevated methaemoglobin levels caused by
chemical exposures as rapidly as will individuals with normal enzyme levels.

In addition to oxidizing the iron component of haemoglobin, many of the chemicals causing
methaemoglobinaemia, or their metabolites, are also relatively non-specific oxidizing agents,
which at high levels can cause a Heinz-body haemolytic anaemia. This process is
characterized by oxidative denaturation of haemoglobin, leading to the formation of punctate
membrane-bound red cell inclusions known as Heinz bodies, which can be identified with
special stains. Oxidative damage to the red cell membrane also occurs. While this may lead to
significant haemolysis, the compounds listed in Table 1 primarily produce their adverse
effects through the formation of methaemoglobin, which may be life threatening, rather than
through haemolysis, which is usually a limited process.
In essence, two different red cell defence pathways are involved: (1) the NADH-
dependent methaemoglobin reductase required to reduce methaemoglobin to normal
haemoglobin; and (2) the NADPH-dependent process through the hexose monophosphate
(HMP) shunt, leading to the maintenance of reduced glutathione
as a means to defend against oxidizing species capable of producing Heinz-body
haemolytic anaemia (figure 1). Heinz-body haemolysis can be exacerbated by the treatment of
methaemoglobinaemic patients with methylene blue because it requires NADPH for its
methaemoglobin-reducing effects. Haemolysis will also be a more prominent part of the
clinical picture in individuals with (1) deficiencies in one of the enzymes of the NADPH
oxidant defence pathway, or (2) an inherited unstable haemoglobin. Except for the glucose-6-
phosphate dehydrogenase (G6PD) deficiency, described later in this chapter, these are
relatively rare disorders.

Figure 1. Red blood cell enzymes of oxidant defence and related reactions

2GSH + (O) → GSSG + H2O  (glutathione peroxidase)

GSSG + 2NADPH → 2GSH + 2NADP  (glutathione reductase)

Glucose-6-phosphate + NADP → 6-phosphogluconate + NADPH  (G6PD)

Fe+++·Haemoglobin (methaemoglobin) + NADH → Fe++·Haemoglobin  (methaemoglobin reductase)

Another form of haemoglobin alteration produced by oxidizing agents is a denatured species known as sulphaemoglobin. This
irreversible product can be detected in the blood of individuals with significant methaemoglobinaemia produced by oxidant
chemicals. Sulphaemoglobin is the name also given, and more appropriately, to a specific product formed during hydrogen sulphide
poisoning.

Haemolytic agents: There are a variety of haemolytic agents in the workplace. For many the
toxicity of concern is methaemoglobinaemia. Other haemolytic agents include naphthalene
and its derivatives. In addition, certain metals, such as copper, and organometals, such as
tributyl tin, will shorten red cell survival, at least in animal models. Mild haemolysis can also
occur during traumatic physical exertion (march haemoglobinuria); a more modern
observation is elevated white blood counts with prolonged exertion (jogger’s leucocytosis).
The most important of the metals that affects red cell formation and survival in workers is
lead, described in detail in the chemicals section of this Encyclopaedia.

Arsine: The normal red blood cell survives in the circulation for 120 days. Shortening of this
survival can lead to anaemia if not compensated by an increase in red cell production by the
bone marrow. There are essentially two types of haemolysis: (1) intravascular haemolysis, in
which there is an immediate release of haemoglobin within the circulation; and (2)
extravascular haemolysis, in which red cells are destroyed within the spleen or the liver.

One of the most potent intravascular haemolysins is arsine gas (AsH3). Inhalation of a
relatively small amount of this agent leads to swelling and eventual bursting of red blood cells
within the circulation. It may be difficult to detect the causal relation of workplace arsine
exposure to an acute haemolytic episode (Fowler and Wiessberg 1974). This is partly because
there is frequently a delay between exposure and onset of symptoms, but primarily because
the source of exposure is often not evident. Arsine gas is made and used commercially, often
now in the electronics industry. However, most of the published reports of acute haemolytic
episodes have been through the unexpected liberation of arsine gas as an unwanted by-product
of an industrial process—for example, if acid is added to a container made of arsenic-
contaminated metal. Any process that chemically reduces arsenic, such as acidification, can
lead to the liberation of arsine gas. As arsenic can be a contaminant of many metals and
organic materials, such as coal, arsine exposure can often be unexpected. Stibine, the hydride
of antimony, appears to produce a haemolytic effect similar to arsine.

Death can occur directly due to complete loss of red blood cells. (A haematocrit of zero has
been reported.) However, a major concern at arsine levels less than those producing complete
haemolysis is acute renal failure due to the massive release of haemoglobin within the
circulation. At much higher levels, arsine may produce acute pulmonary oedema and possibly
direct renal effects. Hypotension may accompany the acute episode. There is usually a delay
of at least a few hours between inhalation of arsine and the onset of symptoms. In addition to
red urine due to haemoglobinuria, the patient will frequently complain of abdominal pain and
nausea, symptoms that occur concomitantly with acute intravascular haemolysis from a
number of causes (Neilsen 1969).

Treatment is aimed at maintenance of renal perfusion and transfusion of normal blood. As the
circulating red cells affected by arsine appear to some extent to be doomed to intravascular
haemolysis, an exchange transfusion in which arsine-exposed red cells are replaced by
unexposed cells would appear to be optimal therapy. As in severe life-threatening
haemorrhage, it is important that replacement red cells have adequate 2,3-diphosphoglyceric
acid (DPG) levels so as to be able to deliver oxygen to the tissue.

Other Haematological Disorders

White blood cells

There are a variety of drugs, such as propylthiouracil (PTU), which are known to affect the
production or survival of circulating polymorphonuclear leucocytes relatively selectively. In
contrast, non-specific bone marrow toxins affect the precursors of red cells and platelets as
well. Workers engaged in the preparation or administration of such drugs should be
considered at risk. There is one report of complete granulocytopenia in a worker poisoned
with dinitrophenol. Alteration in lymphocyte number and function, and particularly of subtype
distribution, is receiving more attention as a possible subtle mechanism of effects due to a
variety of chemicals in the workplace or general environment, particularly chlorinated
hydrocarbons, dioxins and related compounds. Validation of the health implications of such
changes is required.

Coagulation

As with leucopenia, there are many drugs that selectively decrease the production or
survival of circulating platelets, which could be a problem in workers involved in the
preparation or administration of such agents. Otherwise, there are only scattered reports of
thrombocytopenia in workers. One study implicates toluene diisocyanate (TDI) as a cause of
thrombocytopenic purpura. Abnormalities in the various blood factors involved in coagulation
are not generally noted as a consequence of work. Individuals with pre-existing coagulation
abnormalities, such as haemophilia, often have difficulty entering the workforce. However,
although a carefully considered exclusion from a few selected jobs is reasonable, such
individuals are usually capable of normal functioning at work.
Haematological Screening and Surveillance in the Workplace

Markers of susceptibility

Due in part to the ease in obtaining samples, more is known about inherited variations in
human blood components than for those in any other organ. Extensive studies sparked by
recognition of familial anaemias have led to fundamental knowledge concerning the structural
and functional implications of genetic alterations. Of pertinence to occupational health are
those inherited variations that might lead to an increased susceptibility to workplace hazards.
There are a number of such testable variations that have been considered or actually used for
the screening of workers. The rapid increase in knowledge concerning human genetics makes
it a certainty that we will have a better understanding of the inherited basis of variation in
human response, and we will be more capable of predicting the extent of individual
susceptibility through laboratory tests.

Before discussing the potential value of currently available susceptibility markers, the major
ethical considerations in the use of such tests in workers should be emphasized. It has been
questioned whether such tests favour exclusion of workers from a site rather than a focus on
improving the worksite for the benefit of the workers. At the very least, before embarking on
the use of a susceptibility marker at a workplace, the goals of the testing and consequences of
the findings must be clear to all parties.

The two markers of haematological susceptibility for which screening has taken place most
frequently are sickle cell trait and G6PD deficiency. The former is at most of marginal value
in rare situations, and the latter is of no value whatsoever in most of the situations for which it
has been advocated (Goldstein, Amoruso and Witz 1985).

Sickle cell disease, in which there is homozygosity for haemoglobin S (HbS), is a fairly
common disorder among individuals of African descent. It is a relatively severe disease that
often, but not always, precludes entering the workforce. The HbS gene may be inherited with
other genes, such as HbC, which may reduce the severity of its effects. The basic defect in
individuals with sickle cell disease is the polymerization of HbS, leading to microinfarction.
Microinfarction can occur in episodes, known as sickle cell crises, and can be precipitated by
external factors, particularly those leading to hypoxia and, to a lesser extent, dehydration.
With a reasonably wide variation in the clinical course and well-being of those with sickle cell
disease, employment evaluation should focus on the individual case history. Jobs that have the
possibility of hypoxic exposures, such as those requiring frequent air travel, or those with a
likelihood of significant dehydration, are not appropriate.

Much more common than sickle cell disease is sickle cell trait, the heterozygous condition in
which there is inheritance of one gene for HbS and one for HbA. Individuals with this genetic
pattern have been reported to undergo sickle cell crisis under extreme conditions of hypoxia.
Some consideration has been given to excluding individuals with sickle cell trait from
workplaces where hypoxia is a common risk, probably limited to the jobs on military aircraft
or submarines, and perhaps on commercial aircraft. However, it must be emphasized that
individuals with sickle cell trait do very well in almost every other situation. For example,
athletes with sickle cell trait had no adverse effects from competing at the altitude of Mexico
City (2,200 m, or 7,200 ft) during the 1968 Summer Olympics. Accordingly, with the few
exceptions described above, there is no reason to consider exclusion or modification of work
schedules for those with sickle cell trait.
Another common genetic variant of a red blood cell component is the A– form of G6PD
deficiency. It is inherited on the X chromosome as a sex-linked recessive gene and is present
in approximately one in seven Black males and one in 50 Black females in the United States.
In Africa, the gene is particularly prevalent in areas of high malaria risk. As with sickle cell
trait, G6PD deficiency provides a protective advantage against malaria. Under usual
circumstances, individuals with this form of G6PD deficiency have red blood counts and
indices within the normal range. However, due to the inability to regenerate reduced
glutathione, their red blood cells are susceptible to haemolysis following ingestion of oxidant
drugs and in certain disease states. This susceptibility to oxidizing agents has led to workplace
screening on the erroneous assumption that individuals with the common A– variant of G6PD
deficiency will be at risk from the inhalation of oxidant gases. In fact, it would require
exposure to levels many times higher than the levels at which such gases would cause fatal
pulmonary oedema before the red cells of G6PD-deficient individuals would receive oxidant
stress sufficient to be of concern (Goldstein, Amoruso and Witz 1985). G6PD deficiency will
increase the likelihood of overt Heinz-body haemolysis in individuals exposed to aniline dyes
and other methaemoglobin-provoking agents (Table 1), but in these cases the primary clinical
problem remains the life-threatening methaemoglobinaemia. While knowledge of G6PD
status might be useful in such cases, primarily to guide therapy, this knowledge should not be
used to exclude workers from the workplace.

There are many other forms of familial G6PD deficiency, all far less common than the A–
variant (Beutler 1990). Certain of these variants, particularly in individuals from the
Mediterranean basin and Central Asia, have much lower levels of G6PD activity in their red
blood cells. Consequently the affected individual can be severely compromised by ongoing
haemolytic anaemia. Deficiencies in other enzymes active in defence against oxidants have
also been reported, as have unstable haemoglobins that render the red cell more susceptible to
oxidant stress in the same manner as in G6PD deficiency.

Surveillance

Surveillance differs substantially from clinical testing in both the evaluation of ill patients and
the regular screening of presumably healthy individuals. In an appropriately designed
surveillance programme, the aim is to prevent overt disease by picking up subtle early
changes through the use of laboratory testing. Therefore, a slightly abnormal finding should
automatically trigger a response—or at least a thorough review—by physicians.

In the initial review of haematological surveillance data in a workforce potentially exposed to a
haematotoxin such as benzene, there are two major approaches that are particularly helpful
in distinguishing false positives. The first is the degree of the difference from normal. As the
count gets further removed from the normal range, there is a rapid drop-off in the likelihood
that it represents just a statistical anomaly. Second, one should take advantage of the totality
of data for that individual, including normal values, keeping in mind the wide range of effects
produced by benzene. For example, there is a much greater probability of a benzene effect if a
slightly low platelet count is accompanied by a low-normal white blood cell count, a low-
normal red cell count, and a high-normal red cell mean corpuscular volume (MCV).
Conversely, the relevance of this same platelet count to benzene haematotoxicity can be
discounted if the other blood counts are at the opposite end of the normal spectrum. These
same two considerations can be used in judging whether the individual should be removed
from the workforce while awaiting further testing and whether the additional testing should
consist only of a repeat complete blood count (CBC).
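One possible codification of these two considerations is sketched below; the reference ranges and
cut-off values are illustrative assumptions, not figures given in this article.

    def suggests_benzene_effect(platelets, wbc, rbc, mcv):
        """Illustrative check: a slightly low platelet count becomes more suspicious
        when the other counts sit at the low end of their normal ranges and the MCV
        at the high end. All cut-offs below are assumptions for illustration."""
        platelets_low   = platelets < 150_000   # per cubic millimetre
        wbc_low_normal  = wbc < 6_000           # normal roughly 5,000-10,000
        rbc_low_normal  = rbc < 4.5e6           # per cubic millimetre
        mcv_high_normal = mcv > 95              # femtolitres
        supporting = sum([wbc_low_normal, rbc_low_normal, mcv_high_normal])
        return platelets_low and supporting >= 2

    print(suggests_benzene_effect(140_000, 5_200, 4.3e6, 97))   # True: pattern consistent with an effect
    print(suggests_benzene_effect(140_000, 9_500, 5.4e6, 85))   # False: other counts at the opposite end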
If there is any doubt as to the cause of the low count, the entire CBC should be repeated. If the
low count is due to laboratory variability or some short-term biological variability within the
individual, it is less likely that the blood count will again be low. Comparison with
preplacement or other available blood counts should help distinguish those individuals who
have an inherent tendency to be on the lower end of the distribution. Detection of an
individual worker with an effect due to a haematological toxin should be considered a sentinel
health event, prompting careful investigation of working conditions and of co-workers
(Goldstein 1988).

The wide range in normal laboratory values for blood counts can present an even greater
challenge since there can be a substantial effect while counts are still within the normal range.
For example, it is possible that a worker exposed to benzene or ionizing radiation may have a
fall in haematocrit from 50 to 40%, a fall in the white blood cell count from 10,000 to 5,000
per cubic millimetre and a fall in the platelet count from 350,000 to 150,000 per cubic
millimetre—that is, more than a 50% decrease in platelets; yet all these values are within the
“normal” range of blood counts. Accordingly, a surveillance programme that looks solely at
“abnormal” blood counts may miss significant effects. Therefore, blood counts that decrease
over time while staying in the normal range need particular attention.
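Because a fall of this magnitude can occur entirely within the reference range, trends have to be
judged against each worker’s own baseline; the sketch below flags large proportional declines,
with the 25% threshold being an arbitrary illustrative choice.

    def large_declines(baseline, current, threshold=0.25):
        """Return counts that have fallen by more than `threshold` (as a fraction)
        relative to the worker's own baseline, even if still within 'normal' limits."""
        drops = {name: (baseline[name] - current[name]) / baseline[name]
                 for name in baseline}
        return {name: drop for name, drop in drops.items() if drop > threshold}

    baseline = {"haematocrit": 50, "wbc": 10_000, "platelets": 350_000}  # values from the example above
    current  = {"haematocrit": 40, "wbc": 5_000,  "platelets": 150_000}
    print(large_declines(baseline, current))   # flags the ~50% WBC and ~57% platelet falls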

Another challenging problem in workplace surveillance is the detection of a slight decrease in
the mean blood count of an entire exposed population—for example, a decrease in mean
white blood cell count from 7,500 to 7,000 per cubic millimetre because of a widespread
exposure to benzene or ionizing radiation. Detection and appropriate evaluation of any such
observation requires meticulous attention to standardization of laboratory test procedures, the
availability of an appropriate control group and careful statistical analysis.
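Comparing the mean of an exposed group with that of a control group is, in its simplest form, a
two-sample test; the sketch below uses simulated counts because no real data are given here, and
the standard deviation and group sizes are arbitrary illustrative choices.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Simulated white blood cell counts (per cubic millimetre); the means come from
    # the example above, the spread and group sizes are illustrative assumptions.
    control = rng.normal(7_500, 1_800, size=200)
    exposed = rng.normal(7_000, 1_800, size=200)

    t_stat, p_value = stats.ttest_ind(exposed, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")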
Blood References
Bertazzi, A, AC Pesatori, D Consonni, A Tironi, MT Landi and C Zocchetti. 1993. Cancer
incidence in a population accidentally exposed to 2,3,7,8-tetrachlorodibenzo-para-dioxin,
Seveso, Italy. Epidemiology 4(5): 398-406.

Beutler, E. 1990. Genetics of glucose-6-phosphate dehydrogenase deficiency. Sem Hematol 27:137.

Beutler, E, SE Larsh, and CW Gurney. 1960. Iron therapy in chronically fatigued nonanemic
women: a double-blind study. Ann Intern Med 52:378.

De Planque, MM, HC Kluin-Nelemans, HJ Van Krieken, MP Kluin, A Brand, GC Beverstock,
R Willemze and JJ van Rood. 1988. Evolution of acquired severe aplastic anaemia to
myelodysplasia and subsequent leukaemia in adults. Brit J Haematol 70:55-62.

Flemming, LE and W Timmeny. 1993. Aplastic anemia and pesticides. J Med 35(1):1106-
1116.

Fowler, BA and JB Wiessberg. 1974. Arsine poisoning. New Engl J Med 291:1171-1174.

Goldstein, BD. 1988. Benzene toxicity. Occup Med: State Art Rev 3(3):541-554.

Goldstein, BD, MA Amoruso, and G Witz. 1985. Erythrocyte glucose-6-phosphate
dehydrogenase deficiency does not pose an increased risk for Black Americans exposed to
oxidant gases in the workplace or general environment. Toxicol Ind Health 1:75-80.

Hartge, P and SS Devesa. 1992. Quantification of the impact of known risk factors on time
trends in non-Hodgkin’s lymphoma incidence. Cancer Res 52:5566S-5569S.

Hernberg, S et al. 1966. Prognostic aspects of benzene poisoning. Brit J Ind Med 23:204.
Infante, P. 1993. State of the science on the carcinogenicity of gasoline with particular
reference to cohort mortality study results. Environ Health Persp 101 Suppl. 6:105-109.

International Agency for Research on Cancer (IARC). 1990. Cancer: Causes, Occurrence and
Control. IARC Scientific Publications, no. 100. Lyon: IARC.

——. 1992. Cancer Incidence in Five Continents. Vol. VI. IARC Scientific Publications, no.
120. Lyon: IARC.

——. 1993. Trends in Cancer Incidence and Mortality. IARC Scientific Publications, no. 121.
Lyon: IARC.

Keating, MJ, E Estey, and H Kantarjian. 1993. Acute leukaemia. In Cancer: Principles and
Practice of Oncology, edited by VTJ DeVita, S Hellman, and SA Rosenberg. Philadelphia: JB
Lippincott.

Kiese, M. 1974. Methemoglobinemia: A Comprehensive Treatise. Cleveland: CRC Press.


Laskin, S and BD Goldstein. 1977. Benzene toxicity, a clinical evaluation. J Toxicol Environ
Health Suppl. 2.

Linet, MS. 1985. The Leukemias, Epidemiologic Aspects. New York: Oxford Univ. Press.

Longo, DL, VTJ DeVita, ES Jaffe, P Mauch, and WJ Urba. 1993. Lymphocytic lymphomas.
In Cancer: Principles and Practice of Oncology, edited by VTJ DeVita, S Hellman, and SA
Rosenberg. Philadelphia: JB Lippincott.

Ludwig, H and I Kuhrer. 1994. The treatment of multiple myeloma. Wien klin Wochenschr
106:448-454.

Morrison, HI, K Wilkins, R Semenciw, Y Mao, and Y Wigle. 1992. Herbicides and cancer. J
Natl Cancer Inst 84:1866-1874.

Neilsen, B. 1969. Arsine poisoning in a metal refinery plant: fourteen simultaneous cases.
Acta Med Scand Suppl. 496.

Parkin, DM, P Pisani, and J Ferlay. 1993. Estimates of the worldwide incidence of eighteen
major cancers in 1985. Int J Cancer 54:594-606.

Priester, WA and TJ Mason. 1974. Human cancer mortality in relation to poultry population,
by county, in 10 southeastern states. J Natl Cancer Inst 53:45-49.

Rothman, N, G-L Li, M Dosemeci, WE Bechtold, GE Marti, Y-Z Wang, M Linet, L Xi, W
Lu, MT Smith, N Titenko-Holland, L-P Zhang, W Blot, S-N Yin, and RB Hayes. 1996.
Hematotoxicity among Chinese workers heavily exposed to benzene. Am J Ind Med 29:236-
246.

Snyder, R, G Witz, and BD Goldstein. 1993. The toxicology of benzene. Environ Health
Persp 100:293-306.

Taylor, JA, DP Sandler, CD Bloomfield, DL Shore, ED Ball, A Neubauer, OR McIntyre, and
E Liu. 1992. ras oncogene activation and occupational exposures in acute myeloid leukemia.
J Natl Cancer Inst 84:1626-1632.

Tucker, MA, CN Coleman, RS Cox, A Varghese, and SA Rosenberg. 1988. Risk of second
cancers after treatment for Hodgkin’s disease. New Engl J Med 318:76-81.

Yin, S-N, RB Hayes, MS Linet, G-L Li, M Dosemeci, LB Travis, C-Y Li, Z-N Zhang, D-G
Li, W-H Chow, S Wacholder, Y-Z Wang, Z-L Jiang, T-R Dai, W-Y Zhang, X-J Chao, P-Z
Ye, Q-R Kou, X-C Zhang, X-F Lin, J-F Meng, C-Y Ding, J-S Zho, and W-J Blot. 1996. A
cohort study of cancer among benzene-exposed workers in China: Overall results. Am J Ind
Med 29:227-235.
2. Cancer
Chapter Editor: Paolo Boffetta

Introduction

Written by ILO Content Manager

Magnitude of the Problem

The first clear-cut evidence of cancer causation involved an occupational carcinogen
(Checkoway, Pearce and Crawford-Brown 1989). Pott (1775) identified soot as the cause of
scrotal cancer in London chimney-sweeps, and graphically described the abysmal working
conditions, which involved children climbing up narrow chimneys that were still hot. Despite
this evidence, reports of the need to prevent fires in chimneys were used to delay legislation
on child labour in this industry until 1840 (Waldron 1983). An experimental model of soot
carcinogenesis was first demonstrated in the 1920s (Decoufle 1982), 150 years after the
original epidemiological observation.

In subsequent years, a number of other occupational causes of cancer have been demonstrated
through epidemiological studies (although the association with cancer has usually first been
noted by occupational physicians or by workers). These include arsenic, asbestos, benzene,
cadmium, chromium, nickel and vinyl chloride. Such occupational carcinogens are very
important in public health terms because of the potential for prevention through regulation
and improvements in industrial hygiene practices (Pearce and Matos 1994). In most instances,
these are hazards which markedly increase the relative risk of a particular type or types of
cancer. It is possible that other occupational carcinogens remain undetected because they
involve only a small increase in risk or because they simply have not been studied (Doll and
Peto 1981). Some key facts about occupational cancer are given in table 1.

Table 1. Occupational cancer: Key facts.

- Some 20 agents and mixtures are established occupational carcinogens; a similar number of chemicals are highly suspected occupational carcinogens.
- In industrialized countries, occupation is causally linked to 2 to 8% of all cancers; among exposed workers, however, this proportion is higher.
- No reliable estimates are available on either the burden of occupational cancer or the extent of workplace exposure to carcinogens in developing countries.
- The relatively low overall burden of occupational cancer in industrialized countries is the result of strict regulations on several known carcinogens; exposure to other known or highly suspected agents, however, is still allowed.
- Although several occupational cancers are listed as occupational diseases in many countries, a very small fraction of cases is actually recognized and compensated.
- Occupational cancer is, to a very large extent, a preventable disease.

Occupational causes of cancer have received considerable emphasis in epidemiological
studies in the past. However, there has been much controversy regarding the proportion of
cancers which are attributable to occupational exposures, with estimates ranging from 4 to
40% (Higginson 1969; Higginson and Muir 1976; Wynder and Gori 1977; Higginson and
Muir 1979; Doll and Peto 1981; Hogan and Hoel 1981; Vineis and Simonato 1991; Aitio and
Kauppinen 1991). The attributable cancer risk is the total cancer experience in a population
that would not have occurred if the effects associated with the occupational exposures of
concern were absent. It may be estimated for the exposed population, as well as for a broader
population. A summary of existing estimates is shown in table 2. Universal application of the
International Classification of Diseases is what makes such tabulations possible (see box).
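
As a rough illustration of how such attributable proportions are derived, the population attributable risk for a single exposure can be computed from the prevalence of exposure and the relative risk using Levin's formula. The minimal sketch below shows the calculation; the prevalence and relative risk values are assumed purely for illustration and are not taken from any of the studies in table 2.

    # Minimal sketch of Levin's formula for the population attributable risk (PAR).
    # The input values are illustrative assumptions, not figures from the cited studies.
    def population_attributable_risk(prevalence_exposed, relative_risk):
        """PAR = p(RR - 1) / (1 + p(RR - 1))."""
        excess = prevalence_exposed * (relative_risk - 1.0)
        return excess / (1.0 + excess)

    # Example: 5% of the population exposed to an agent that doubles risk (RR = 2)
    print(round(population_attributable_risk(0.05, 2.0), 3))  # about 0.048, i.e. ~5% of cases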

Table 2. Estimated proportions of cancer (PAR) attributable to occupations in selected studies.

Study | Population | PAR and cancer site | Comments
Higginson 1969 | Not stated | 1% oral cancer; 1-2% lung cancer; 10% bladder cancer; 2% skin cancer | No detailed presentation of exposure levels and other assumptions
Higginson and Muir 1976 | Not stated | 1-3% total cancer | No detailed presentation of assumptions
Wynder and Gori 1977 | Not stated | 4% total cancer in men, 2% for women | Based on one PAR for bladder cancer and two personal communications
Higginson and Muir 1979 | West Midland, United Kingdom | 6% total cancer in men, 2% total cancer in women | Based on 10% of non-tobacco-related lung cancer, mesothelioma, bladder cancer (30%) and leukaemia in women (30%)
Doll and Peto 1981 | United States, early 1980 | 4% (range 2-8%) total cancer | Based on all studied cancer sites; reported as ‘tentative’ estimate
Hogan and Hoel 1981 | United States | 3% (range 1.4-4%) total cancer | Risk associated with occupational asbestos exposure
Vineis and Simonato 1991 | Various | 1-5% lung cancer, 16-24% bladder cancer | Calculations on the basis of data from case-control studies. The percentage for lung cancer considers only exposure to asbestos. In a study with a high proportion of subjects exposed to ionising radiation, a 40% PAR was estimated. Estimates of PAR in a few studies on bladder cancer were between 0 and 3%.

The International Classification of Diseases

Human diseases are classified according to the International Classification of Diseases (ICD), a system that was started in 1893 and is regularly updated under the coordination of
the World Health Organization. The ICD is used in almost all countries for tasks such as
death certification, cancer registration and hospital discharge diagnosis. The Tenth Revision
(ICD-10), which was approved in 1989 (World Health Organization 1992), differs
considerably from the previous three revisions, which are similar to each other and have been
in use since the 1950s. It is therefore likely that the Ninth Revision (ICD-9, World Health
Organization 1978), or even earlier revisions, will still be used in many countries during the
coming years.

The large variability in the estimates arises from the differences in the data sets used and the
assumptions applied. Most of the published estimates on the fraction of cancers attributed to
occupational risk factors are based on rather simplified assumptions. Furthermore, although
cancer is relatively less common in developing countries due to the younger age structure
(Pisani and Parkin 1994), the proportion of cancers due to occupation may be higher in
developing countries due to the relatively high exposures which are encountered (Kogevinas,
Boffetta and Pearce 1994).

The most generally accepted estimates of cancers attributable to occupations are those
presented in a detailed review on the causes of cancer in the population of the United States in
1980 (Doll and Peto 1981). Doll and Peto concluded that about 4% of all deaths due to cancer may be caused by occupational carcinogens, with “acceptable limits” (i.e., still plausible in view of all the evidence at hand) of 2 and 8%. Because these estimates are proportions, they depend on how causes other than occupational exposures contribute to producing cancer. For example, the proportion would be higher in a population of lifetime non-smokers
(such as the Seventh-Day Adventists) and lower in a population in which, say, 90% are
smokers. Also the estimates do not apply uniformly to both sexes or to different social classes.
Furthermore, if one considers not the whole population (to which the estimates refer), but the
segments of the adult population in which exposure to occupational carcinogens almost
exclusively occurs (manual workers in mining, agriculture and industry, broadly taken, who in
the United States numbered 31 million out of a population, aged 20 and over, of 158 million
in the late 1980s), the proportion of 4% in the overall population would increase to about 20%
among those exposed.
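
The arithmetic behind this rescaling is straightforward: if essentially all occupationally induced cancers occur among the exposed segment of the population, the attributable proportion among the exposed is the overall proportion divided by the exposed fraction. The short sketch below reproduces the calculation with the figures quoted above, assuming for simplicity that exposed and unexposed adults have similar background cancer rates.

    # Doll and Peto's 4% overall estimate rescaled to the exposed subgroup.
    overall_attributable = 0.04          # 4% of all cancer deaths
    exposed_fraction = 31.0 / 158.0      # 31 million exposed workers out of 158 million adults
    among_exposed = overall_attributable / exposed_fraction
    print(round(among_exposed, 2))       # roughly 0.20, i.e. about 20% among the exposed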

Vineis and Simonato (1991) provided estimates on the number of cases of lung and bladder
cancer attributable to occupation. Their estimates were derived from a detailed review of
case-control studies, and demonstrate that in specific populations located in industrial areas,
the proportion of lung cancer or bladder cancer from occupational exposures may be as high
as 40% (these estimates being dependent not only on the local prevailing exposures, but also
to some extent on the method of defining and assessing exposure).

Mechanisms and Theories of Carcinogenesis

Studies of occupational cancer are complicated because there are no “complete” carcinogens;
that is, occupational exposures increase the risk of developing cancer, but this future
development of cancer is by no means certain. Furthermore, 20 to 30 years (and at least five years) may elapse between an occupational exposure and the subsequent induction of cancer; it may also take several more years for the cancer to become clinically detectable and for death to
occur (Moolgavkar et al. 1993). This situation, which also applies to non-occupational
carcinogens, is consistent with current theories of cancer causation.

Several mathematical models of cancer causation have been proposed (e.g., Armitage and
Doll 1961), but the model which is simplest and most consistent with current biological
knowledge is that of Moolgavkar (1978). This assumes that a healthy stem cell occasionally
mutates (initiation); if a particular exposure encourages the proliferation of intermediate cells
(promotion) then it becomes more likely that at least one cell will undergo one or more further
mutations producing a malignant cancer (progression) (Ennever 1993).
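
A deterministic toy version of this two-stage picture can make the roles of initiation, promotion and progression concrete. In the sketch below the number of intermediate (initiated) cells grows with a net proliferation rate g, and the hazard of a malignant conversion is proportional to the number of intermediate cells; all parameter values are invented for illustration and the stochastic extinction of small clones is ignored, so this is only a caricature of the Moolgavkar model, not a faithful implementation.

    import math

    # Toy deterministic approximation of a two-stage (initiation-promotion-progression) model.
    # All rates are illustrative assumptions, chosen only to show the qualitative behaviour.
    def cumulative_cancer_probability(years, n_stem=1e7, mu1=1e-7, mu2=1e-7, g=0.1):
        """Probability of at least one malignant conversion by a given age.

        n_stem : number of susceptible stem cells
        mu1    : initiation rate per stem cell per year
        mu2    : progression rate per intermediate cell per year
        g      : net proliferation (promotion) rate of intermediate cells per year
        """
        cumulative_hazard = 0.0
        intermediate = 0.0
        for _ in range(years):
            intermediate += n_stem * mu1     # newly initiated cells this year
            intermediate *= math.exp(g)      # clonal expansion driven by promotion
            cumulative_hazard += mu2 * intermediate
        return 1.0 - math.exp(-cumulative_hazard)

    # A stronger promoter (larger g) sharply shortens the apparent latency.
    for growth in (0.05, 0.10, 0.15):
        print(growth, round(cumulative_cancer_probability(60, g=growth), 4))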

Thus, occupational exposures can increase the risk of developing cancer either by causing
mutations in DNA or by various “epigenetic” mechanisms of promotion (those not involving
damage to DNA), including increased cell proliferation. Most occupational carcinogens which
have been discovered to date are mutagens, and therefore appear to be cancer initiators. This
explains the long “latency” period which is required for further mutations to occur; in many
instances the necessary further mutations may never occur, and cancer may never develop.

In recent years, there has been increasing interest in occupational exposures (e.g., benzene,
arsenic, phenoxy herbicides) which do not appear to be mutagens, but which may act as
promoters. Promotion may occur relatively late in the carcinogenic process, and the latency
period for promoters may therefore be shorter than for initiators. However, the
epidemiological evidence for cancer promotion remains very limited at this time (Frumkin
and Levy 1988).

Transfer of Hazards

A major concern in recent decades has been the problem of the transfer of hazardous
industries to the developing world (Jeyaratnam 1994). Such transfers have occurred in part
due to the stringent regulation of carcinogens and increasing labour costs in the industrialized
world, and in part from low wages, unemployment and the push for industrialization in the
developing world. For example, Canada now exports about half of its asbestos to the
developing world, and a number of asbestos-based industries have been transferred to
developing countries such as Brazil, India, Pakistan, Indonesia and South Korea (Jeyaratnam
1994). These problems are further compounded by the magnitude of the informal sector, the
large numbers of workers who have little support from unions and other worker organizations,
the insecure status of workers, the lack of legislative protection and/or the poor enforcement
of such protection, the decreasing national control over resources, and the impact of the third
world debt and associated structural adjustment programmes (Pearce et al. 1994).
As a result, it cannot be said that the problem of occupational cancer has been reduced in
recent years, since in many instances the exposure has simply been transferred from the
industrialized to the developing world. In some instances, the total occupational exposure has
increased. Nevertheless, the recent history of occupational cancer prevention in industrialized
countries has shown that it is possible to use substitutes for carcinogenic compounds in
industrial processes without leading industry to ruin, and similar successes would be possible
in developing countries if adequate regulation and control of occupational carcinogens were
in place.

Prevention of Occupational Cancer

Swerdlow (1990) outlined a series of options for the prevention of exposure to occupational
causes of cancer. The most successful form of prevention is to avoid the use of recognized
human carcinogens in the workplace. This has rarely been an option in industrialized
countries, since most occupational carcinogens have been identified by epidemiological
studies of populations that were already occupationally exposed. However, at least in theory,
developing countries could learn from the experience of industrialized countries and prevent
the introduction of chemicals and production processes that have been found to be hazardous
to the health of workers.

The next best option for avoiding exposure to established carcinogens is their removal once
their carcinogenicity has been established or suspected. Examples include the closure of
plants making the bladder carcinogens 2-naphthylamine and benzidine in the United Kingdom
(Anon 1965), termination of British gas manufacture involving coal carbonization, closure of
Japanese and British mustard gas factories after the end of the Second World War (Swerdlow
1990) and gradual elimination of the use of benzene in the shoe industry in Istanbul (Aksoy
1985).

In many instances, however, complete removal of a carcinogen (without closing down the
industry) is either not possible (because alternative agents are not available) or is judged
politically or economically unacceptable. Exposure levels must therefore be reduced by
changing production processes and through industrial hygiene practices. For example,
exposures to recognized carcinogens such as asbestos, nickel, arsenic, benzene, pesticides and
ionizing radiation have been progressively reduced in industrialized countries in recent years
(Pearce and Matos 1994).

A related approach is to reduce or eliminate the activities that involve the heaviest exposures.
For example, after an 1840 act was passed in England and Wales prohibiting chimney-sweeps
from being sent up chimneys, the number of cases of scrotal cancer decreased (Waldron
1983). Exposure also can be minimized through the use of protective equipment, such as
masks and protective clothing, or by imposing more stringent industrial hygiene measures.

An effective overall strategy in the control and prevention of exposure to occupational carcinogens generally involves a combination of approaches. One successful example is a
Finnish registry which has as its objectives to increase awareness about carcinogens, to
evaluate exposure at individual workplaces and to stimulate preventive measures (Kerva and
Partanen 1981). It contains information on both workplaces and exposed workers, and all
employers are required to maintain and update their files and to supply information to the
registry. The system appears to have been at least partially successful in decreasing
carcinogenic exposures in the workplace (Ahlo, Kauppinen and Sundquist 1988).

Occupational Carcinogens

Written by ILO Content Manager

The control of occupational carcinogens is based on the critical review of scientific investigations both in humans and in experimental systems. There are several review
programmes being undertaken in different countries aimed at controlling occupational
exposures which could be carcinogenic to humans. The criteria used in different programmes
are not entirely consistent, leading occasionally to differences in the control of agents in
different countries. For example, 4,4'-methylene-bis-2-chloroaniline (MOCA) was classified as an occupational carcinogen in Denmark in 1976 and in the Netherlands in 1988, but it was only in 1992 that the notation “suspected human carcinogen” was introduced by the American Conference of Governmental Industrial Hygienists in the United States.

The International Agency for Research on Cancer (IARC) has established, within the
framework of its Monographs programme, a set of criteria to evaluate the evidence of the
carcinogenicity of specific agents. The IARC Monographs programme represents one of the
most comprehensive efforts to review systematically and consistently cancer data, is highly
regarded in the scientific community and serves as the basis for the information in this article.
It also has an important impact on national and international occupational cancer control
activities. The evaluation scheme is given in table 1.

Table 1. Evaluation of evidence of carcinogenicity in the IARC Monographs programme.

1. The evidence for the induction of cancer in humans, which obviously plays an important role in the identification of human carcinogens, is considered. Three types of epidemiological
studies contribute to an assessment of carcinogenicity in humans: cohort studies, case-control
studies and correlation (or ecological) studies. Case reports of cancer in humans may also be
reviewed. The evidence relevant to carcinogenicity from studies in humans is classified into
one of the following categories:

- Sufficient evidence of carcinogenicity: A causal relationship has been established between exposure to the agent, mixture or exposure circumstance and human cancer. That is, a positive relationship has been observed between the exposure and cancer in studies in which chance, bias and confounding could be ruled out with reasonable confidence.
- Limited evidence of carcinogenicity: A positive association has been observed between exposure to the agent, mixture or exposure circumstance and cancer for which a causal interpretation is considered to be credible, but chance, bias or confounding could not be ruled out with reasonable confidence.
- Inadequate evidence of carcinogenicity: The available studies are of insufficient quality, consistency or statistical power to permit a conclusion regarding the presence or absence of a causal association, or no data on cancer in humans are available.
- Evidence suggesting lack of carcinogenicity: There are several adequate studies covering the full range of levels of exposure that human beings are known to encounter, which are mutually consistent in not showing a positive association between exposure to the agent and the studied cancer at any observed level of exposure.

2. Studies in which experimental animals (mainly rodents) are exposed chronically to
potential carcinogens and examined for evidence of cancer are reviewed and the degree of
evidence of carcinogenicity is then classified into categories similar to those used for human
data.

3. Data on biological effects in humans and experimental animals that are of particular
relevance are reviewed. These may include toxicological, kinetic and metabolic
considerations and evidence of DNA binding, persistence of DNA lesions or genetic damage
in exposed humans. Toxicological information, such as that on cytotoxicity and regeneration,
receptor binding and hormonal and immunological effects, and data on structure-activity
relationship are used when considered relevant to the possible mechanism of the carcinogenic
action of the agent.

4. The body of evidence is considered as a whole, in order to reach an overall evaluation of the carcinogenicity to humans of an agent, mixture or circumstance of exposure (see table 2).

Agents, mixtures and exposure circumstances are evaluated within the IARC Monographs if
there is evidence of human exposure and data on carcinogenicity (either in humans or in
experimental animals) (for IARC classification groups, see table 2).

Table 2. IARC Monograph programme classification groups.

The agent, mixture or exposure circumstance is described according to the wording of one of
the following categories:

Group 1— The agent (mixture) is carcinogenic to humans. The exposure circumstance entails exposures that are carcinogenic to humans.

Group 2A— The agent (mixture) is probably carcinogenic to humans. The exposure circumstance entails exposures that are probably carcinogenic to humans.

Group 2B— The agent (mixture) is possibly carcinogenic to humans. The exposure circumstance entails exposures that are possibly carcinogenic to humans.

Group 3— The agent (mixture, exposure circumstance) is not classifiable as to its carcinogenicity to humans.

Group 4— The agent (mixture, exposure circumstance) is probably not carcinogenic to humans.

Known and Suspected Occupational Carcinogens

At present, there are 22 chemicals, groups of chemicals or mixtures (excluding pesticides and drugs) for which exposures are mostly occupational and which are established human carcinogens (table 3). While some agents, such as asbestos, benzene and heavy metals, are currently widely used in many countries, other agents are mainly of historical interest (e.g., mustard gas and 2-naphthylamine).
Table 3. Chemicals, groups of chemicals or mixtures for which exposures are mostly occupational (excluding pesticides and drugs). Group 1—Chemicals carcinogenic to humans1

Exposure2 | Human target organ(s) | Main industry/use
4-Aminobiphenyl (92-67-1) | Bladder | Rubber manufacture
Arsenic (7440-38-2) and arsenic compounds3 | Lung, skin | Glass, metals, pesticides
Asbestos (1332-21-4) | Lung, pleura, peritoneum | Insulation, filter material, textiles
Benzene (71-43-2) | Leukaemia | Solvent, fuel
Benzidine (92-87-5) | Bladder | Dye/pigment manufacture, laboratory agent
Beryllium (7440-41-7) and beryllium compounds | Lung | Aerospace industry/metals
Bis(chloromethyl)ether (542-88-11) | Lung | Chemical intermediate/by-product
Chloromethyl methylether (107-30-2) (technical grade) | Lung | Chemical intermediate/by-product
Cadmium (7440-43-9) and cadmium compounds | Lung | Dye/pigment manufacture
Chromium (VI) compounds | Nasal cavity, lung | Metal plating, dye/pigment manufacture
Coal-tar pitches (65996-93-2) | Skin, lung, bladder | Building material, electrodes
Coal-tars (8007-45-2) | Skin, lung | Fuel
Ethylene oxide (75-21-8) | Leukaemia | Chemical intermediate, sterilant
Mineral oils, untreated and mildly treated | Skin | Lubricants
Mustard gas (sulphur mustard) (505-60-2) | Pharynx, lung | War gas
2-Naphthylamine (91-59-8) | Bladder | Dye/pigment manufacture
Nickel compounds | Nasal cavity, lung | Metallurgy, alloys, catalyst
Shale-oils (68308-34-9) | Skin | Lubricants, fuels
Soots | Skin, lung | Pigments
Talc containing asbestiform fibers | Lung | Paper, paints
Vinyl chloride (75-01-4) | Liver, lung, blood vessels | Plastics, monomer
Wood dust | Nasal cavity | Wood industry

1 Evaluated in the IARC Monographs, Volumes 1-63 (1972-1995) (excluding pesticides and drugs).
2 CAS Registry Nos. appear between parentheses.
3 This evaluation applies to the group of chemicals as a whole and not necessarily to all individual chemicals within the group.

An additional 20 agents are classified as probably carcinogenic to humans (Group 2A); they
are listed in table 4, and include exposures that are currently prevalent in many countries, such
as crystalline silica, formaldehyde and 1,3-butadiene. A large number of agents are classified
as possible human carcinogens (Group 2B, table 5) - for example, acetaldehyde,
dichloromethane and inorganic lead compounds. For the majority of these chemicals the
evidence of carcinogenicity comes from studies in experimental animals.

Table 4. Chemicals, groups of chemicals or mixtures for which exposures are mostly occupational (excluding pesticides and drugs). Group 2A—Probably carcinogenic to humans1

Exposure2 | Suspected human target organ(s) | Main industry/use
Acrylonitrile (107-13-1) | Lung, prostate, lymphoma | Plastics, rubber, textiles, monomer
Benzidine-based dyes | – | Paper, leather, textile dyes
1,3-Butadiene (106-99-0) | Leukaemia, lymphoma | Plastics, rubber, monomer
p-Chloro-o-toluidine (95-69-2) and its strong acid salts | Bladder | Dye/pigment manufacture, textiles
Creosotes (8001-58-9) | Skin | Wood preservation
Diethyl sulphate (64-67-5) | – | Chemical intermediate
Dimethylcarbamoyl chloride (79-44-7) | – | Chemical intermediate
Dimethyl sulphate (77-78-1) | – | Chemical intermediate
Epichlorohydrin (106-89-8) | – | Plastics/resins monomer
Ethylene dibromide (106-93-4) | – | Chemical intermediate, fumigant, fuels
Formaldehyde (50-0-0) | Nasopharynx | Plastics, textiles, laboratory agent
4,4´-Methylene-bis-2-chloroaniline (MOCA) (101-14-4) | Bladder | Rubber manufacture
Polychlorinated biphenyls (1336-36-3) | Liver, bile ducts, leukaemia, lymphoma | Electrical components
Silica (14808-60-7), crystalline | Lung | Stone cutting, mining, glass, paper
Styrene oxide (96-09-3) | – | Plastics, chemical intermediate
Tetrachloroethylene (127-18-4) | Oesophagus, lymphoma | Solvent, dry cleaning
Trichloroethylene (79-01-6) | Liver, lymphoma | Solvent, dry cleaning, metal
Tris(2,3-dibromopropyl)phosphate (126-72-7) | – | Plastics, textiles, flame retardant
Vinyl bromide (593-60-2) | – | Plastics, textiles, monomer
Vinyl fluoride (75-02-5) | – | Chemical intermediate

1 Evaluated in the IARC Monographs, Volumes 1-63 (1972-1995) (excluding pesticides and drugs).
2 CAS Registry Nos. appear between parentheses.

Table 5. Chemicals, groups of chemicals or mixtures for which exposures are mostly
occupational (excluding pesticides and drugs).
Group 2B—Possibly carcinogenic to humans1

Exposure2 Main industry/use

Acetaldehyde (75-07-0) Plastics manufacture, flavors

Acetamide (60-35-5) Solvent, chemical intermediate

Acrylamide (79-06-1) Plastics, grouting agent

p-Aminoazotoluene (60-09-3) Dye/pigment manufacture

o-Aminoazotoluene (97-56-3) Dyes/pigments, textiles

o-Anisidine (90-04-0) Dye/pigment manufacture

Antimony trioxide (1309-64-4) Flame retardant, glass, pigments

Auramine (492-80-8) (technical-grade) Dyes/pigments


Benzyl violet 4B (1694-09-3) Dyes/pigments

Bitumens (8052-42-4), extracts of steam-refined and air-refined Building material

Bromodichloromethane (75-27-4) Chemical intermediate

b-Butyrolactone (3068-88-0) Chemical intermediate

Carbon-black extracts Printing inks

Carbon tetrachloride (56-23-5) Solvent

Ceramic fibers Plastics, textiles, aerospace

Chlorendic acid (115-28-6) Flame retardant

Chlorinated paraffins of average carbon chain length C12 and average degree of chlorination approximately 60% Flame retardant

a-Chlorinated toluenes Dye/pigment manufacture, chemical intermediate

p-Chloroaniline (106-47-8) Dye/pigment manufacture

Chloroform (67-66-3) Solvent

4-Chloro-o-phenylenediamine (95-83-9) Dyes/pigments, hair dyes

CI Acid Red 114 (6459-94-5) Dyes/pigments, textiles, leather

CI Basic Red 9 (569-61-9) Dyes/pigments, inks

CI Direct Blue 15 (2429-74-5) Dyes/pigments, textiles, paper

Cobalt (7440-48-4) and cobalt compounds Glass, paints, alloys

p-Cresidine (120-71-8) Dye/pigment manufacture

N,N´-Diacetylbenzidine (613-35-4) Dye/pigment manufacture

2,4-Diaminoanisole (615-05-4) Dye/pigment manufacture, hair dyes

4,4´-Diaminodiphenyl ether (101-80-4) Plastics manufacture

2,4-Diaminotoluene (95-80-7) Dye/pigment manufacture, hair dyes

p-Dichlorobenzene (106-46-7) Chemical intermediate

3,3´-Dichlorobenzidine (91-94-1) Dye/pigment manufacture

3,3´-Dichloro-4,4´-diaminodiphenyl ether (28434-86-8) Not used

1,2-Dichloroethane (107-06-2) Solvent, fuels


Dichloromethane (75-09-2) Solvent

Diepoxybutane (1464-53-5) Plastics/resins

Diesel fuel, marine Fuel

Di(2-ethylhexyl)phthalate (117-81-7) Plastics, textiles

1,2-Diethylhydrazine (1615-80-1) Laboratory reagent

Diglycidyl resorcinol ether (101-90-6) Plastics/resins

Diisopropyl sulphate (29973-10-6) Contaminant

3,3´-Dimethoxybenzidine (o-Dianisidine) (119-90-4) Dye/pigment manufacture

p-Dimethylaminoazobenzene (60-11-7) Dyes/pigments

2,6-Dimethylaniline (2,6-Xylidine)(87-62-7) Chemical intermediate

3,3´-Dimethylbenzidine (o-Tolidine)(119-93-7) Dye/pigment manufacture

Dimethylformamide (68-12-2) Solvent

1,1-Dimethylhydrazine (57-14-7) Rocket fuel

1,2-Dimethylhydrazine (540-73-8) Research chemical

1,4-Dioxane (123-91-1) Solvent

Disperse Blue 1 (2475-45-8) Dyes/pigments, hair dyes

Ethyl acrylate (140-88-5) Plastics, adhesives, monomer

Ethylene thiourea (96-45-7) Rubber chemical

Fuel oils, residual (heavy) Fuel

Furan (110-00-9) Chemical intermediate

Gasoline Fuel

Glasswool Insulation

Glycidaldehyde (765-34-4) Textile, leather manufacture

HC Blue No. 1 (2784-94-3) Hair dyes

Hexamethylphosphoramide (680-31-9) Solvent, plastics

Hydrazine (302-01-2) Rocket fuel, chemical intermediate

Lead (7439-92-1) and lead compounds, inorganic Paints, fuels

2-Methylaziridine (75-55-8) Dyes, paper, plastics manufacture


4,4’-Methylene-bis-2-methylaniline (838-88-0) Dye/pigment manufacture

4,4’-Methylenedianiline (101-77-9) Plastics/resins, dye/pigment manufacture

Methylmercury compounds Pesticide manufacture

2-Methyl-1-nitroanthraquinone (129-15-7) (uncertain purity) Dye/pigment manufacture

Nickel, metallic (7440-02-0) Catalyst

Nitrilotriacetic acid (139-13-9) and its salts Chelating agent, detergent

5-Nitroacenaphthene (602-87-9) Dye/pigment manufacture

2-Nitropropane (79-46-9) Solvent

N-Nitrosodiethanolamine (1116-54-7) Cutting fluids, impurity

Oil Orange SS (2646-17-5) Dyes/pigments

Phenyl glycidyl ether (122-60-1) Plastics/adhesives/resins

Polybrominated biphenyls (Firemaster BP-6) (59536-65-1) Flame retardant

Ponceau MX (3761-53-3) Dyes/pigments, textiles

Ponceau 3R (3564-09-8) Dyes/pigments, textiles

1,3-Propane sulphone (1120-71-4) Dye/pigment manufacture

b-Propiolactone (57-57-8) Chemical intermediate; plastics manufacture

Propylene oxide (75-56-9) Chemical intermediate

Rockwool Insulation

Slagwool Insulation

Styrene (100-42-5) Plastics

2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) (1746-01-6) Contaminant

Thioacetamide (62-55-5) Textile, paper, leather, rubber manufacture

4,4’-Thiodianiline (139-65-1) Dye/pigment manufacture

Thiourea (62-56-6) Textile, rubber ingredient

Toluene diisocyanates (26471-62-5) Plastics

o-Toluidine (95-53-4) Dye/pigment manufacture

Trypan blue (72-57-1) Dyes/pigments


Vinyl acetate (108-05-4) Chemical intermediate

Welding fumes Metallurgy

1
Evaluated in the IARC Monographs, Volumes 1-63 (1972-1995) (excluding pesticides and
drugs).
2
CAS Registry Nos. appear between parentheses.

Occupational exposures may also occur during the production and use of some pesticides and
drugs. Table 6 presents an evaluation of the carcinogenicity of pesticides; two of them,
captafol and ethylene dibromide, are classified as probable human carcinogens, while a total
of 20 others, including DDT, atrazine and chlorophenols, are classified as possible human
carcinogens.

Table 6. Pesticides evaluated in IARC Monographs, Volumes 1-63 (1972-1995).

IARC Group | Pesticide1
2A—Probably carcinogenic to humans | Captafol (2425-06-1); Ethylene dibromide (106-93-4)
2B—Possibly carcinogenic to humans | Amitrole (61-82-5); Atrazine (1912-24-9); Chlordane (57-74-9); Chlordecone (Kepone) (143-50-0); Chlorophenols; Chlorophenoxy herbicides; DDT (50-29-3); 1,2-Dibromo-3-chloropropane (96-12-8); 1,3-Dichloropropene (542-75-6) (technical-grade); Dichlorvos (62-73-7); Heptachlor (76-44-8); Hexachlorobenzene (118-74-1); Hexachlorocyclohexanes (HCH); Mirex (2385-85-5); Nitrofen (1836-75-5), technical-grade; Pentachlorophenol (87-86-5); Sodium o-phenylphenate (132-27-4); Sulphallate (95-06-7); Toxaphene (Polychlorinated camphenes) (8001-35-2)

1 CAS Registry Nos. appear between parentheses.

Several drugs are human carcinogens (table 7): they are mainly alkylating agents and hormones; 12 more drugs, including chloramphenicol, cisplatin and phenacetin, are classified as probable human carcinogens (Group 2A). Occupational exposure to these known or suspected carcinogens, used mainly in chemotherapy, can occur in pharmacies and during their administration by nursing staff.

Table 7. Drugs evaluated in IARC Monographs, Volumes 1-63 (1972-1995).

Drug1 Target organ2

IARC GROUP 1—Carcinogenic to humans


Analgesic mixtures containing phenacetin Kidney, bladder

Azathioprine (446-86-6) Lymphoma, hepatobiliary system, skin

N,N-Bis(2-chloroethyl)-b-naphthylamine (Chlornaphazine) (494-03-1) Bladder

1,4-Butanediol dimethanesulphonate (Myleran) (55-98-1) Leukaemia

Chlorambucil (305-03-3) Leukaemia

1-(2-Chloroethyl)-3-(4-methylcyclohexyl)-1-nitrosourea (Methyl-CCNU) (13909-09-6) Leukaemia

Cyclosporin (79217-60-0) Lymphoma, skin

Cyclophosphamide (50-18-0) (6055-19-2) Leukaemia, bladder

Diethylstilboestrol (56-53-1) Cervix, vagina, breast

Melphalan (148-82-3) Leukaemia

8-Methoxypsoralen (Methoxsalen) (298-81-7) plus ultraviolet A radiation Skin

MOPP and other combined chemotherapy including alkylating agents Leukaemia

Oestrogen replacement therapy Uterus

Oestrogens, nonsteroidal Cervix, vagina, breast

Oestrogens, steroidal Uterus

Oral contraceptives, combined Liver

Oral contraceptives, sequential Uterus

Thiotepa (52-24-4) Leukaemia

Treosulfan (299-75-2) Leukaemia

IARC GROUP 2A—Probably carcinogenic to humans

Adriamycin (23214-92-8) –

Androgenic (anabolic) steroids (Liver)

Azacitidine (320-67-2) –

Bischloroethyl nitrosourea (BCNU) (154-93-8) (Leukaemia)

Chloramphenicol (56-75-7) (Leukaemia)

1-(2-Chloroethyl)-3-cyclohexyl-1-nitrosourea (CCNU) (13010-47-4) –

Chlorozotocine (54749-90-5) –

Cisplatin (15663-27-1) –

5-Methoxypsoralen (484-20-8) –

Nitrogen mustard (51-75-2) (Skin)

Phenacetin (62-44-2) (Kidney, bladder)

Procarbazine hydrochloride (366-70-1) –

1
CAS Registry Nos. appear between parentheses.
2
Suspected target organs are given in parentheses.

Several environmental agents are known or suspected causes of cancer in humans and are
listed in table 8; although exposure to such agents is not primarily occupational, there are
groups of individuals exposed to them because of their work: examples are uranium miners
exposed to radon decay products, hospital workers exposed to hepatitis B virus, food
processors exposed to aflatoxins from contaminated foods, outdoor workers exposed to
ultraviolet radiation or diesel engine exhaust, and bar staff or waiters exposed to
environmental tobacco smoke.

The IARC Monograph programme has covered most of the known or suspected causes of
cancer; there are, however, some important groups of agents that have not been evaluated by
IARC—namely, ionizing radiation and electrical and magnetic fields.

Table 8. Environmental agents/exposures known or suspected to cause cancer in humans.1

Agent/exposure Target organ2 Strength of evidence3

Air pollutants

Erionite Lung, pleura 1

Asbestos Lung, pleura 1

Polycyclic aromatic hydrocarbons4 (Lung, bladder) S

Water pollutants

Arsenic Skin 1

Chlorination by-products (Bladder) S

Nitrate and nitrite (Oesophagus, stomach) S

Radiation

Radon and its decay products Lung 1

Radium, thorium Bone E


Other X-irradiation Leukaemia, breast, thyroid, others E

Solar radiation Skin 1

Ultraviolet radiation A (Skin) 2A

Ultraviolet radiation B (Skin) 2A

Ultraviolet radiation C (Skin) 2A

Use of sunlamps and sunbeds (Skin) 2A

Electric and magnetic fields (Leukaemia) S

Biological agents

Chronic infection with hepatitis B virus Liver 1

Chronic infection with hepatitis C virus Liver 1

Infection with Helicobacter pylori Stomach 1

Infection with Opisthorchis viverrini Bile ducts 1

Infection with Clonorchis sinensis (Liver) 2A

Human Papilloma virus types 16 and 18 Cervix 1

Human Papilloma virus types 31 and 33 (Cervix) 2A

Human Papilloma virus types other than 16, 18, 31 and 33 (Cervix) 2B

Infection with Schistosoma haematobium Bladder 1

Infection with Schistosoma japonicum (Liver, colon) 2B

Tobacco, alcohol and related substances

Alcoholic beverages Mouth, pharynx, oesophagus, liver, larynx 1

Tobacco smoke Lip, mouth, pharynx, oesophagus, pancreas, larynx, lung, kidney, bladder, (others) 1
Smokeless tobacco products Mouth 1

Betel quid with tobacco Mouth 1

Dietary factors

Aflatoxins Liver 1

Aflatoxin M1 (Liver) 2B

Ochratoxin A (Kidney) 2B

Toxins derived from Fusarium moniliforme (Oesophagus) 2B

Chinese style salted fish Nasopharynx 1

Pickled vegetables (traditional in Asia) (Oesophagus, stomach) 2B

Bracken fern (Oesophagus) 2B

Safrole – 2B

Coffee (Bladder) 2B

Caffeic acid – 2B

Hot mate (Oesophagus) 2A

Fresh fruits and vegetables (protective) Mouth, oesophagus, stomach, colon, rectum, larynx, lung (others) E

Fat (Colon, breast, endometrium) S

Fiber (protective) (Colon, rectum) S

Nitrate and nitrite (Oesophagus, stomach) S

Salt (Stomach) S

Vitamin A, b-carotene (protective) (Mouth, oesophagus, lung, others) S

Vitamin C (protective) (Oesophagus, stomach) S

IQ (Stomach, colon, rectum) 2A

MeIQ – 2B

MeIQx – 2B

PhIP – 2B

Reproductive and sexual behavior


Late age at first pregnancy Breast E

Low parity Breast, ovary, corpus uteri E

Early age at first intercourse Cervix E

Number of sexual partners Cervix E

1
Agents and exposures, as well as medicines, occurring mainly in the occupational setting
are excluded.
2
Suspected target organs are given in parentheses.
3
IARC Monograph evaluation reported wherever available (1: human carcinogen; 2A:
probable human carcinogen; 2B: possible human carcinogen); otherwise E: established
carcinogen; S: suspected carcinogen.
4
Human exposure to polycyclic aromatic hydrocarbons occurs in mixtures, such as
engine emissions, combustion fumes and soots. Several mixtures and individual hydrocarbons
have been evaluated by IARC.

Industries and Occupations

Current understanding of the relationship between occupational exposures and cancer is far
from complete; in fact, only 22 individual agents are established occupational carcinogens (table 3), and for many more experimental carcinogens no definitive evidence is available based on exposed workers. In many cases, there is considerable evidence of increased risks associated with particular industries and occupations, although no specific agents can be identified as aetiological factors. Tables 9 and 10 present lists of industries and occupations
associated with excess carcinogenic risks, together with the relevant cancer sites and the
known (or suspected) causative agent(s).

Table 9. Industries, occupations and exposures recognized as presenting a carcinogenic risk.

Industry (ISIC code) | Occupation/process | Cancer site/type | Known or suspected causative agent

Agriculture, forestry and fishing (1)
- Vineyard workers using arsenic insecticides | Lung, skin | Arsenic compounds
- Fishermen | Skin, lip | Ultraviolet radiation

Mining and quarrying (2)
- Arsenic mining | Lung, skin | Arsenic compounds
- Iron ore (haematite) mining | Lung | Radon decay products
- Asbestos mining | Lung, pleural and peritoneal mesothelioma | Asbestos
- Uranium mining | Lung | Radon decay products
- Talc mining and milling | Lung | Talc containing asbestiform fibers

Chemical (35)
- Bis(chloromethyl) ether (BCME) and chloromethyl methyl ether (CMME) production workers and users | Lung (oat-cell carcinoma) | BCME, CMME
- Vinyl chloride production | Liver angiosarcoma | Vinyl chloride monomer
- Isopropyl alcohol manufacture (strong-acid process) | Sinonasal | Not identified
- Pigment chromate production | Lung, sinonasal | Chromium (VI) compounds
- Dye manufacturers and users | Bladder | Benzidine, 2-naphthylamine, 4-aminobiphenyl
- Auramine manufacture | Bladder | Auramine and other aromatic amines used in the process
- p-Chloro-o-toluidine production | Bladder | p-Chloro-o-toluidine and its strong acid salts

Leather (324)
- Boot and shoe manufacture | Sinonasal, leukaemia | Leather dust, benzene

Wood and wood products (33)
- Furniture and cabinet makers | Sinonasal | Wood dust

Pesticides and herbicides production (3512)
- Arsenical insecticides production and packaging | Lung | Arsenic compounds

Rubber industry (355)
- Rubber manufacture | Leukaemia | Benzene
- Calendering, tyre curing, tyre building | Bladder | Aromatic amines
- Millers, mixers | Leukaemia | Benzene
- Synthetic latex production | Bladder | Aromatic amines
- Tyre curing, calender operatives, reclaim, cable makers | Bladder | Aromatic amines
- Rubber film production | Leukaemia | Benzene

Asbestos production (3699)
- Insulated material production (pipes, sheeting, textile, clothes, masks, asbestos cement products) | Lung, pleural and peritoneal mesothelioma | Asbestos

Metals (37)
- Aluminum production | Lung, bladder | Polycyclic aromatic hydrocarbons, tar
- Copper smelting | Lung | Arsenic compounds
- Chromate production, chromium plating | Lung, sinonasal | Chromium (VI) compounds
- Iron and steel founding | Lung | Not identified
- Nickel refining | Sinonasal, lung | Nickel compounds
- Pickling operations | Larynx, lung | Inorganic acid mists containing sulphuric acid
- Cadmium production and refining; nickel-cadmium battery manufacture; cadmium pigment manufacture; cadmium alloy production; electroplating; zinc smelters; brazing and polyvinyl chloride compounding | Lung | Cadmium and cadmium compounds
- Beryllium refining and machining; production of beryllium-containing products | Lung | Beryllium and beryllium compounds

Shipbuilding, motor vehicle and railroad equipment manufacture (385)
- Shipyard and dockyard, motor vehicle and railroad manufacture workers | Lung, pleural and peritoneal mesothelioma | Asbestos

Gas (4)
- Coke plant workers | Lung | Benzo(a)pyrene
- Gas workers | Lung, bladder, scrotum | Coal carbonization products, 2-naphthylamine
- Gas-retort house workers | Bladder | Aromatic amines

Construction (5)
- Insulators and pipe coverers | Lung, pleural and peritoneal mesothelioma | Asbestos
- Roofers, asphalt workers | Lung | Polycyclic aromatic hydrocarbons

Other
- Medical personnel (9331) | Skin, leukaemia | Ionizing radiation
- Painters (construction, automotive industry and other users) | Lung | Not identified

Table 10. Industries, occupations and exposures reported to present a cancer excess but for
which the assessment of the carcinogenic risk is not definitive.

Industry (ISIC code) | Occupation/process | Cancer site/type | Known (or suspected) causative agent

Agriculture, forestry and fishing (1)
- Farmers, farm workers | Lymphatic and haematopoietic system (leukaemia, lymphoma) | Not identified
- Herbicide application | Malignant lymphomas, soft-tissue sarcomas | Chlorophenoxy herbicides, chlorophenols (presumably contaminated with polychlorinated dibenzodioxins)
- Insecticide application | Lung, lymphoma | Non-arsenical insecticides

Mining and quarrying (2)
- Zinc-lead mining | Lung | Radon decay products
- Coal mining | Stomach | Coal dust
- Metal mining | Lung | Crystalline silica
- Asbestos mining | Gastrointestinal tract | Asbestos

Food industry (3111)
- Butchers and meat workers | Lung | Viruses, PAH1

Beverage industry (3131)
- Beer brewers | Upper aero-digestive tract | Alcohol consumption

Textile manufacture (321)
- Dyers | Bladder | Dyes
- Weavers | Bladder, sinonasal, mouth | Dusts from fibers and yarns

Leather (323)
- Tanners and processors | Bladder, pancreas, lung | Leather dust, other chemicals, chromium
- Boot and shoe manufacture and repair | Sinonasal, stomach, bladder | Not identified

Wood and wood products (33), pulp and paper industry (341)
- Lumbermen and sawmill workers | Nasal cavity, Hodgkin lymphoma, skin | Wood dust, chlorophenols, creosotes
- Pulp and papermill workers | Lymphopoietic tissue, lung | Not identified
- Carpenters, joiners | Nasal cavity, Hodgkin lymphoma | Wood dust, solvents
- Woodworkers, unspecified | Lymphomas | Not identified
- Plywood production, particle-board production | Nasopharynx, sinonasal | Formaldehyde

Printing (342)
- Rotogravure workers, binders, printing pressmen, machine-room workers and other jobs | Lymphocytic and haemopoietic system, oral, lung, kidney | Oil mist, solvents

Chemical (35)
- 1,3-Butadiene production | Lymphocytic and haemopoietic system | 1,3-Butadiene
- Acrylonitrile production | Lung, colon | Acrylonitrile
- Vinylidene chloride production | Lung | Vinylidene chloride (mixed exposure with acrylonitrile)
- Isopropyl alcohol manufacture (strong-acid process) | Larynx | Not identified
- Polychloroprene production | Lung | Chloroprene
- Dimethylsulphate production | Lung | Dimethylsulphate
- Epichlorohydrin production | Lung, lymphatic and haemopoietic system (leukaemia) | Epichlorohydrin
- Ethylene oxide production | Lymphatic and haemopoietic system (leukaemia), stomach | Ethylene oxide
- Ethylene dibromide production | Digestive system | Ethylene dibromide
- Formaldehyde production | Nasopharynx, sinonasal | Formaldehyde
- Flame retardant and plasticizer use | Skin (melanoma) | Polychlorinated biphenyls
- Benzoyl chloride production | Lung | Benzoyl chloride

Herbicides production (3512)
- Chlorophenoxy herbicide production | Soft-tissue sarcoma | Chlorophenoxy herbicides, chlorophenols (contaminated with polychlorinated dibenzodioxins)

Petroleum (353)
- Petroleum refining | Skin, leukaemia, brain | Benzene, PAH, untreated and mildly treated mineral oils

Rubber (355)
- Various occupations in rubber manufacture | Lymphoma, multiple myeloma, stomach, brain, lung | Benzene, MOCA,2 other not identified
- Styrene-butadiene rubber production | Lymphatic and haematopoietic system | 1,3-Butadiene

Ceramic, glass and refractory brick (36)
- Ceramic and pottery workers | Lung | Crystalline silica
- Glass workers (art glass, container and pressed ware) | Lung | Arsenic and other metal oxides, silica, PAH

Asbestos production (3699)
- Insulation material production (pipes, sheeting, textiles, clothes, masks, asbestos cement products) | Larynx, gastrointestinal tract | Asbestos

Metals (37, 38)
- Lead smelting | Respiratory and digestive systems | Lead compounds
- Cadmium production and refining; nickel-cadmium battery manufacture; cadmium pigment manufacture; cadmium alloy production; electroplating; zinc smelting; brazing and polyvinyl chloride compounding | Prostate | Cadmium and cadmium compounds
- Iron and steel founding | Lung | Crystalline silica

Shipbuilding (384)
- Shipyard and dockyard workers | Larynx, digestive system | Asbestos

Motor vehicle manufacturing (3843, 9513)
- Mechanics, welders, etc. | Lung | PAH, welding fumes, engine exhaust

Electricity (4101, 9512)
- Generation, production, distribution, repair | Leukaemia, brain tumors | Extremely low frequency magnetic fields
- Generation, production, distribution, repair | Liver, bile ducts | PCBs3

Construction (5)
- Insulators and pipe coverers | Larynx, gastrointestinal tract | Asbestos
- Roofers, asphalt workers | Mouth, pharynx, larynx, oesophagus, stomach | PAH, coal tar, pitch

Transport (7)
- Railroad workers, filling station attendants, bus and truck drivers, operators of excavating machines | Lung, bladder | Diesel engine exhaust
- Railroad workers, filling station attendants, bus and truck drivers, operators of excavating machines | Leukaemia | Extremely low frequency magnetic fields

Other
- Service station attendants (6200) | Leukaemia and lymphoma | Benzene
- Chemists and other laboratory workers (9331) | Leukaemia and lymphoma, pancreas | Not identified (viruses, chemicals)
- Embalmers, medical personnel (9331) | Sinonasal, nasopharynx | Formaldehyde
- Health workers (9331) | Liver | Hepatitis B virus
- Laundry and dry cleaners (9520) | Lung, oesophagus, bladder | Tri- and tetrachloroethylene and carbon tetrachloride
- Hairdressers (9591) | Bladder, leukaemia and lymphoma | Hair dyes, aromatic amines
- Radium dial workers | Breast | Radon

1 PAH, polycyclic aromatic hydrocarbon.
2 MOCA, 4,4’-methylene-bis-2-chloroaniline.
3 PCBs, polychlorinated biphenyls.

Table 9 presents industries, occupations and exposures in which the presence of a carcinogenic risk is considered to be established, whereas Table 10 shows industrial
processes, occupations and exposures for which an excess cancer risk has been reported but
evidence is not considered to be definitive. Also included in table 10 are some occupations
and industries already listed in table 9, for which there is inconclusive evidence of association
with cancers other than those mentioned in table 9. For example, the asbestos production
industry is included in table 9 in relation to lung cancer and pleural and peritoneal
mesothelioma, whereas the same industry is included in table 10 in relation to gastrointestinal
neoplasms. A number of industries and occupations listed in tables 9 and 10 have also been
evaluated under the IARC Monographs programme. For example, “occupational exposure to
strong inorganic acid mist containing sulphuric acid” was classified in Group 1 (carcinogenic
to humans).

Constructing and interpreting such lists of chemical or physical carcinogenic agents and
associating them with specific occupations and industries is complicated by a number of
factors: (1) information on industrial processes and exposures is frequently poor, not allowing
a complete evaluation of the importance of specific carcinogenic exposures in different
occupations or industries; (2) exposures to well-known carcinogens, such as vinyl
chloride and benzene, occur at different intensities in different occupational situations;
(3) changes in exposure occur over time in a given occupational situation, either because
identified carcinogenic agents are substituted by other agents or (more frequently) because
new industrial processes or materials are introduced; (4) any list of occupational exposures
can refer only to the relatively small number of chemical exposures which have been
investigated with respect to the presence of a carcinogenic risk.

All of the above issues emphasize the most critical limitation of a classification of this type,
and in particular its generalization to all areas of the world: the presence of a carcinogen in an
occupational situation does not necessarily mean that workers are exposed to it and, in
contrast, the absence of identified carcinogens does not exclude the presence of yet
unidentified causes of cancer.
A particular problem in developing countries is that much of the industrial activity is
fragmented and takes place in local settings. These small industries are often characterized by
old machinery, unsafe buildings, employees with limited training and education, and
employers with limited financial resources. Protective clothing, respirators, gloves and other
safety equipment are seldom available or used. The small companies tend to be
geographically scattered and inaccessible to inspections by health and safety enforcement
agencies.

Environmental Cancer

Written by ILO Content Manager

Cancer is a common disease in all countries of the world. The probability that a person will
develop cancer by the age of 70 years, given survival to that age, varies between about 10 and
40% in both sexes. On average, in developed countries, about one person in five will die from
cancer. This proportion is about one in 15 in developing countries. In this article,
environmental cancer is defined as cancer caused (or prevented) by non-genetic factors,
including human behaviour, habits, lifestyle and external factors over which the individual
has no control. A stricter definition of environmental cancer is sometimes used, comprising
only the effect of factors such as air and water pollution, and industrial waste.

Geographical Variation

Variation between geographical areas in the rates of particular types of cancer can be much
greater than that for cancer as a whole. Known variation in the incidence of the more common
cancers is summarized in table 1. The incidence of nasopharyngeal carcinoma, for example,
varies some 500-fold between South East Asia and Europe. This wide variation in frequency
of the various cancers has led to the view that much of human cancer is caused by factors in
the environment. In particular, it has been argued that the lowest rate of a cancer observed in
any population is indicative of the minimum, possibly spontaneous, rate occurring in the
absence of causative factors. Thus the difference between the rate of a cancer in a given
population and the minimum rate observed in any population is an estimate of the rate of the
cancer in the first population which is attributable to environmental factors. On this basis it
has been estimated, very approximately, that some 80 to 90% of all human cancers are
environmentally determined (International Agency for Research on Cancer 1990).

Table 1. Variation between populations covered by cancer registration in the incidence of common cancers.1

Cancer (ICD9 code) | High-incidence area | CR2 | Low-incidence area | CR2 | Range of variation
Mouth (143-5) | France, Bas Rhin | 2 | Singapore (Malay) | 0.02 | 80
Nasopharynx (147) | Hong Kong | 3 | Poland, Warsaw (rural) | 0.01 | 300
Oesophagus (150) | France, Calvados | 3 | Israel (Israeli-born Jews) | 0.02 | 160
Stomach (151) | Japan, Yamagata | 11 | USA, Los Angeles (Filipinos) | 0.3 | 30
Colon (153) | USA, Hawaii (Japanese) | 5 | India, Madras | 0.2 | 30
Rectum (154) | USA, Los Angeles (Japanese) | 3 | Kuwait (non-Kuwaiti) | 0.1 | 20
Liver (155) | Thailand, Khon Khaen | 11 | Paraguay, Asuncion | 0.1 | 110
Pancreas (157) | USA, Alameda County (Calif.) (Blacks) | 2 | India, Ahmedabad | 0.1 | 20
Lung (162) | New Zealand (Maori) | 16 | Mali, Bamako | 0.5 | 30
Melanoma of skin (172) | Australia, Capital Terr. | 3 | USA, Bay Area (Calif.) (Blacks) | 0.01 | 300
Other skin cancers (173) | Australia, Tasmania | 25 | Spain, Basque Country | 0.05 | 500
Breast (174) | USA, Hawaii (Hawaiian) | 12 | China, Qidong | 1.0 | 10
Cervix uteri (180) | Peru, Trujillo | 6 | USA, Hawaii (Chinese) | 0.3 | 20
Corpus uteri (182) | USA, Alameda County (Calif.) (Whites) | 3 | China, Qidong | 0.05 | 60
Ovary (183) | Iceland | 2 | Mali, Bamako | 0.09 | 20
Prostate (185) | USA, Atlanta (Blacks) | 12 | China, Qidong | 0.09 | 140
Bladder (188) | Italy, Florence | 4 | India, Madras | 0.2 | 20
Kidney (189) | France, Bas Rhin | 2 | China, Qidong | 0.08 | 20

1 Data from cancer registries included in IARC 1992. Only cancer sites with cumulative rate larger or equal to 2% in the high-incidence area are included. Rates refer to males except for breast, cervix uteri, corpus uteri and ovary cancers.
2 Cumulative rate % between 0 and 74 years of age.

Source: International Agency for Research on Cancer 1992.
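
The cumulative rate used in table 1 is the sum of the age-specific incidence rates over the age classes from 0 to 74, each weighted by the width of its age class and expressed as a percentage. The sketch below shows the computation on a made-up set of age-specific rates; the figures are illustrative only and are not registry data.

    # Cumulative incidence rate (0-74), as used in table 1:
    # sum of age-specific rates times the width of each age class, expressed in %.
    # The age-specific rates below are invented for illustration (cases per 100,000 person-years).
    age_specific_rates = {
        (0, 44): 5.0,     # broad young age class, 45 years wide
        (45, 54): 60.0,
        (55, 64): 200.0,
        (65, 74): 450.0,
    }

    cumulative = 0.0
    for (start, end), rate in age_specific_rates.items():
        width = end - start + 1                  # width of the age class in years
        cumulative += (rate / 100000.0) * width  # convert to a proportion per year

    print(round(cumulative * 100, 2))            # cumulative rate 0-74, in %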

There are, of course, other explanations for geographical variation in cancer rates. Under-
registration of cancer in some populations may exaggerate the range of variation, but certainly
cannot explain differences of the size shown in table 1. Genetic factors also may be important.
It has been observed, however, that when populations migrate along a gradient of cancer
incidence they often acquire a rate of cancer which is intermediate between that of their home
country and that of the host country. This suggests that a change in environment, without
genetic change, has changed the cancer incidence. For example, when Japanese migrate to the
United States their rates of colon and breast cancer, which are low in Japan, rise, and their rate
of stomach cancer, which is high in Japan, falls, both tending more closely towards United
States’ rates. These changes may be delayed until the first post-migration generation but they
still occur without genetic change. For some cancers, change with migration does not occur.
For example, the Southern Chinese retain their high rate of cancer of the nasopharynx
wherever they live, thus suggesting that genetic factors, or some cultural habit which changes
little with migration, are responsible for this disease.
Time Trends

Further evidence of the role of environmental factors in cancer incidence has come from the
observation of time trends. The most dramatic and well-known change has been the rise in
lung cancer rates in males and females in parallel with but occurring some 20 to 30 years after
the adoption of cigarette use, which has been seen in many regions of the world; more
recently in a few countries, such as the United States, there has been the suggestion of a fall in
rates among males following a reduction in tobacco smoking. Less well understood are the
substantial falls in incidence of cancers including those of the stomach, oesophagus and
cervix which have paralleled economic development in many countries. It would be difficult
to explain these falls, however, except in terms of reduction in exposure to causal factors in
the environment or, perhaps, increasing exposure to protective factors—again environmental.

Main Environmental Carcinogenic Agents

The importance of environmental factors as causes of human cancer has been further
demonstrated by epidemiological studies relating particular agents to particular cancers. The
main agents which have been identified are summarized in table 8. This table does not contain the drugs for which a causal link with human cancer has been established (such as diethylstilboestrol and several alkylating agents) or suspected (such as cyclophosphamide) (see also table 7). In the case of these agents, the risk of cancer has to be balanced with the benefits of the treatment. Similarly, table 8 does not contain agents that occur primarily in the occupational setting, such as chromium, nickel and aromatic amines. For a detailed discussion of these agents see the previous article “Occupational Carcinogens.” The relative importance of the agents listed in table 8 varies widely, depending on the potency of the agent and the number of people involved. The evidence of carcinogenicity of several environmental agents has been evaluated within the IARC Monographs programme (International Agency for Research on Cancer 1995) (see again “Occupational Carcinogens” for a discussion of the Monographs programme); table 8 is based mainly on the IARC Monograph evaluations. The most important agents among those listed in table 8 are those to which a substantial proportion of the population is exposed in relatively large amounts. They include particularly:
ultraviolet (solar) radiation; tobacco smoking; alcohol drinking; betel quid chewing; hepatitis
B; hepatitis C and human papilloma viruses; aflatoxins; possibly dietary fat, and dietary fiber
and vitamin A and C deficiency; reproductive delay; and asbestos.

Attempts have been made to estimate numerically the relative contributions of these factors to
the 80 or 90% of cancers which might be attributed to environmental factors. The pattern
varies, of course, from population to population according to differences in exposures and
possibly in the genetic susceptibility to various cancers. In many industrialized countries,
however, tobacco smoking and dietary factors are likely to be responsible each for roughly
one-third of environmentally determined cancers (Doll and Peto 1981); while in developing
countries the role of biological agents is likely to be large and that of tobacco relatively small
(but increasing, following the recent increase in the consumption of tobacco in these
populations).

Interactions between Carcinogens

An additional aspect to consider is the presence of interactions between carcinogens. Thus for
example, in the case of alcohol and tobacco, and cancer of the oesophagus, it has been shown
that an increasing consumption of alcohol multiplies manyfold the rate of cancer produced by
a given level of tobacco consumption. Alcohol by itself may facilitate transport of tobacco
carcinogens, or others, into the cells of susceptible tissues. Multiplicative interaction may also
be seen between initiating carcinogens, as between radon and its decay products and tobacco
smoking in miners of uranium. Some environmental agents may act by promoting cancers
which have been initiated by another agent—this is the most likely mechanism for an effect of
dietary fat on the development of breast cancer (probably through increased production of the
hormones which stimulate the breast). The reverse may also occur, as, for example, in the
case of vitamin A, which probably has an anti-promoting effect on lung and possibly other
cancers initiated by tobacco. Similar interactions may also occur between environmental and
constitutional factors. In particular, genetic polymorphism of enzymes involved in the
metabolism of carcinogenic agents or in DNA repair is probably an important determinant of
individual susceptibility to the effect of environmental carcinogens.

The significance of interactions between carcinogens, from the point of view of cancer
control, is that withdrawal of exposure to one of two (or more) interacting factors may give
rise to a greater reduction in cancer incidence than would be predicted from consideration of
the effect of the agent when acting alone. Thus, for example, withdrawal of cigarettes may
eliminate almost entirely the excess rate of lung cancer in asbestos workers (although rates of
mesothelioma would be unaffected).
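
The arithmetic behind this point can be illustrated with a minimal sketch in Python. All of the figures below (baseline rate and relative risks) are invented for illustration and are not taken from any study cited here; the sketch only shows why removing one of two multiplicatively interacting exposures eliminates a larger share of the combined excess than it would under a simply additive joint effect.

    # Hypothetical baseline rate and relative risks (illustrative values only)
    baseline = 10.0        # cases per 100,000 person-years in people with neither exposure
    rr_a = 10.0            # relative risk from exposure A alone (e.g., smoking)
    rr_b = 5.0             # relative risk from exposure B alone (e.g., asbestos)

    additive_joint = baseline * (1 + (rr_a - 1) + (rr_b - 1))        # 140 per 100,000
    multiplicative_joint = baseline * rr_a * rr_b                    # 500 per 100,000

    # Excess rate removed by withdrawing exposure A while exposure B continues
    removed_additive = additive_joint - baseline * rr_b              # 90 of 130 excess cases
    removed_multiplicative = multiplicative_joint - baseline * rr_b  # 450 of 490 excess cases

    print(removed_additive / (additive_joint - baseline))              # about 0.69
    print(removed_multiplicative / (multiplicative_joint - baseline))  # about 0.92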

Implications for Prevention

The realization that environmental factors are responsible for a large proportion of human
cancers has laid the foundation for primary prevention of cancer by modification of exposure
to the factors identified. Such modification may comprise: removal of a single major
carcinogen; reduction, as discussed above, in exposure to one of several interacting
carcinogens; increasing exposure to protective agents; or combinations of these approaches.
While some of this may be achieved by community-wide regulation of the environment
through, for example, environmental legislation, the apparent importance of lifestyle factors
suggests that much of primary prevention will remain the responsibility of individuals.
Governments, however, may still create a climate in which individuals find it easier to take
the right decision.

Prevention

Written by ILO Content Manager

Occupational exposures account for only a minor proportion of the total number of cancers in
the entire population. It has been estimated that 4% of all cancers can be attributed to
occupational exposures, based on data from the United States, with a range of uncertainty
from 2 to 8%. This implies that even total prevention of occupationally induced cancers
would result in only a marginal reduction in national cancer rates.

However, for several reasons, this should not discourage efforts to prevent occupationally
induced cancers. First, the estimate of 4% is an average figure for the entire population,
including unexposed persons. Among people actually exposed to occupational carcinogens,
the proportion of tumours attributable to occupation is much larger. Second, occupational
exposures are avoidable hazards to which individuals are involuntarily exposed. An individual
should not have to accept an increased risk of cancer in any occupation, especially if the cause
is known. Third, occupationally induced cancers can be prevented by regulation, in contrast to
cancers associated with lifestyle factors.

Prevention of occupationally induced cancer involves at least two stages: first, identification
of a specific compound or occupational environment as carcinogenic; and second, imposing
appropriate regulatory control. The principles and practice of regulatory control of known or
suspected cancer hazards in the work environment vary considerably, not only among
different parts of the developed and developing world, but also among countries of similar
socio-economic development.

The International Agency for Research on Cancer (IARC) in Lyon, France, systematically
compiles and evaluates epidemiological and experimental data on suspected or known
carcinogens. The evaluations are presented in a series of monographs, which provide a basis
for decisions on national regulations on the production and use of carcinogenic compounds
(see “Occupational Carcinogens” above).

Historical Background

The history of occupational cancer dates back to at least 1775, when Sir Percivall Pott
published his classical report on scrotal cancer in chimney-sweeps, linking exposure to soot to
the incidence of cancer. The finding had some immediate impact in that sweeps in some
countries were granted the right to bathe at the end of the working day. Current studies of
sweeps indicate that scrotal and skin cancer are now under control, although sweeps are still
at increased risk for several other cancers.

In the 1890s, a cluster of bladder cancer cases was reported at a German dye factory by a surgeon at
a nearby hospital. The causative compounds were later identified as aromatic amines, and
these now appear in lists of carcinogenic substances in most countries. Later examples include
skin cancer in radium-dial painters, nose and sinus cancer among woodworkers caused by
inhalation of wood dust, and “mule-spinner’s disease”—that is, scrotal cancer among cotton
industry workers caused by mineral oil mist. Leukaemia induced by exposure to benzene in
the shoe repair and manufacturing industry also represents a hazard that has been reduced
after the identification of carcinogens in the workplace.

The history of linking asbestos exposure to cancer illustrates a situation with a
considerable time-lag between risk identification and regulatory action. Epidemiological
results indicating that exposure to asbestos was associated with an increased risk of lung
cancer were already starting to accumulate by the 1930s. More convincing evidence appeared
around 1955, but it was not until the mid-1970s that effective steps for regulatory action
began.

The identification of the hazards associated with vinyl chloride represents a different history,
where prompt regulatory action followed identification of the carcinogen. In the 1960s, most
countries had adopted an exposure limit value for vinyl chloride of 500 parts per million
(ppm). In 1974, the first reports of an increased frequency of the rare tumour liver
angiosarcoma among vinyl chloride workers were soon followed by positive animal
experimental studies. After vinyl chloride was identified as carcinogenic, regulatory actions
were taken for a prompt reduction of the exposure to the current limit of 1 to 5 ppm.

Methods Used for the Identification of Occupational Carcinogens

The methods in the historical examples cited above range from observations of clusters of
disease by astute clinicians to more formal epidemiological studies—that is, investigations of
the disease rate (cancer rate) among human beings. Results from epidemiological studies are
of high relevance for evaluations of the risk to humans. A major drawback of cancer
epidemiological studies is that a long time period, usually at least 15 years, is necessary to
demonstrate and evaluate the effects of an exposure to a potential carcinogen. This is
unsatisfactory for surveillance purposes, and other methods must be applied for a quicker
evaluation of recently introduced substances. Since the beginning of this century, animal
carcinogenicity studies have been used for this purpose. However, the extrapolation from
animals to humans introduces considerable uncertainty. The methods also have limitations in
that a large number of animals must be followed for several years.

The need for methods with a more rapid response was partly met in 1971, when the short-term
mutagenicity test (Ames test) was introduced. This test uses bacteria to measure the
mutagenic activity of a substance (its ability to cause irreparable changes in the cellular
genetic material, DNA). A problem in the interpretation of the results of bacterial tests is that
not all substances causing human cancers are mutagenic, and not all bacterial mutagens are
considered to be cancer hazards for human beings. However, the finding that a substance is
mutagenic is usually taken as an indication that the substance might represent a cancer hazard
for humans.

New genetic and molecular biology methods have been developed during the last 15 years,
with the aim of detecting human cancer hazards. This discipline is termed “molecular
epidemiology.” Genetic and molecular events are studied in order to clarify the process of
cancer formation and thus develop methods for early detection of cancer, or indications of
increased risk of the development of cancer. These methods include analysis of damage to the
genetic material and the formation of chemical linkages (adducts) between pollutants and the
genetic material. The presence of chromosomal aberrations clearly indicates effects on the
genetic material which may be associated with cancer development. However, the role of
molecular epidemiological findings in human cancer risk assessment remains to be settled,
and research is under way to indicate more clearly exactly how results of these analyses
should be interpreted.

Surveillance and Screening

The strategies for prevention of occupationally induced cancers differ from those applied for
control of cancer associated with lifestyle or other environmental exposures. In the
occupational field, the main strategy for cancer control has been reduction or total elimination
of exposure to cancer-causing agents. Methods based on early detection by screening
programmes, such as those applied for cervical cancer or breast cancer, have been of very
limited importance in occupational health.

Surveillance

Information from population records on cancer rates and occupation may be used for
surveillance of cancer frequencies in various occupations. Several methods to obtain such
information have been applied, depending on the registries available. The limitations and
possibilities depend largely on the quality of the information in the registries. Information on
disease rate (cancer frequency) is typically obtained from local or national cancer registries
(see below), or from death certificate data, while information on the age-composition and size
of occupational groups is obtained from population registries.

The classical example of this type of information is the “Decennial supplements on
occupational mortality,” published in the UK since the end of the nineteenth century. These
publications use death certificate information on cause of death and on occupation, together
with census data on frequencies of occupations in the entire population, to calculate cause-
specific death rates in different occupations. This type of statistic is a useful tool to monitor
the cancer frequency in occupations with known risks, but its ability to detect previously
unknown risks is limited. This type of approach may also suffer from problems associated
with systematic differences in the coding of occupations on the death certificates and in the
census data.
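
The underlying arithmetic of such occupational mortality statistics can be sketched briefly. The figures below are invented, and the simple standardized mortality ratio (SMR) shown here is only one of several measures used in the decennial supplements:

    # SMR for one occupation from death-certificate and census data (hypothetical figures)
    # age band -> (deaths in the occupation, persons in the occupation from the census)
    occupation = {"45-54": (12, 4000), "55-64": (30, 3500)}
    # age-specific death rates in the whole population (deaths per person-year)
    population_rates = {"45-54": 0.002, "55-64": 0.006}

    observed = sum(deaths for deaths, _ in occupation.values())
    expected = sum(persons * population_rates[age]
                   for age, (_, persons) in occupation.items())

    smr = 100.0 * observed / expected
    print(f"observed={observed}, expected={expected:.1f}, SMR={smr:.0f}")  # SMR of about 145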

The use of personal identification numbers in the Nordic countries has offered a special
opportunity to link individual census data on occupations with cancer registration data, and to
directly calculate cancer rates in different occupations. In Sweden, a permanent linkage of the
censuses of 1960 and 1970 and the cancer incidence during subsequent years have been made
available for researchers and have been used for a large number of studies. This Swedish
Cancer-Environment Registry has been used for a general survey of certain cancers tabulated
by occupation. The survey was initiated by a governmental committee investigating hazards
in the work environment. Similar linkages have been performed in the other Nordic countries.

Generally, statistics based on routinely collected cancer incidence and census data have the
advantage of ease in providing large amounts of information. The method gives information
on cancer frequency by occupation only, not in relation to specific exposures. This
introduces a considerable dilution of the associations, since exposure may differ considerably
among individuals in the same occupation. Epidemiological studies of the cohort type (where
the cancer experience among a group of exposed workers is compared with that in unexposed
workers matched for age, sex and other factors) or the case-control type (where the exposure
experience of a group of persons with cancer is compared to that in a sample of the general
population) give better opportunities for detailed exposure description, and thus better
opportunities for investigation of the consistency of any observed risk increase, for example
by examining the data for any exposure-response trends.
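
As a purely illustrative sketch (the counts are invented, not drawn from any study mentioned here), the following fragment computes rate ratios across increasing exposure categories in a cohort, the kind of exposure-response trend referred to above:

    # Hypothetical cohort data: exposure category -> (cases, person-years)
    cohort = {"unexposed": (20, 100000), "low": (15, 50000), "high": (18, 30000)}

    ref_cases, ref_py = cohort["unexposed"]
    ref_rate = ref_cases / ref_py

    for category, (cases, person_years) in cohort.items():
        rate_ratio = (cases / person_years) / ref_rate
        print(f"{category}: rate ratio = {rate_ratio:.2f}")
    # A rate ratio that rises with exposure level supports, but does not prove, causality.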

The possibility of obtaining more refined exposure data together with routinely collected
cancer notifications was investigated in a prospective Canadian case-control study. The study
was set up in the Montreal metropolitan area in 1979. Occupational histories were obtained
from males as they were added to the local cancer registry, and the histories were
subsequently coded for exposure to a number of chemicals by occupational hygienists. Later,
the cancer risks in relation to a number of substances were calculated and published
(Siemiatycki 1991).

In conclusion, the continuous production of surveillance data based on recorded information
provides an effective and comparatively easy way to monitor cancer frequency by occupation.
While the main purpose achieved is surveillance of known risk factors, the possibilities for the
identification of new risks are limited. Registry-based studies should not be used for
conclusions regarding the absence of risk in an occupation unless the proportion of
individuals significantly exposed is more precisely known. It is quite common that only a
relatively small percentage of members of an occupation actually are exposed; for these
individuals the substance may represent a substantial hazard, but this will not be observable
(i.e., will be statistically diluted) when the entire occupational group is analysed as a single
group.

Screening

Screening for occupational cancer in exposed populations for purposes of early diagnosis is
rarely applied, but has been tested in some settings where exposure has been difficult to
eliminate. For example, much interest has focused on methods for early detection of lung
cancer among people exposed to asbestos. With asbestos exposures, an increased risk persists
for a long time, even after cessation of exposure. Thus, continuous evaluation of the health
status of exposed individuals is justified. Chest x rays and cytological investigation of sputum
have been used. Unfortunately, when tested under comparable conditions neither of these
methods reduces the mortality significantly, even if some cases may be detected earlier. One
of the reasons for this negative result is that the prognosis of lung cancer is little affected by
early diagnosis. Another problem is that the x rays themselves represent a cancer hazard
which, while small for the individual, may be significant when applied to a large number of
individuals (i.e., all those screened).

Screening also has been proposed for bladder cancer in certain occupations, such as the rubber
industry. Investigations of cellular changes in, or mutagenicity of, workers’ urine have been
reported. However, the value of following cytological changes for population screening has
been questioned, and the value of the mutagenicity tests awaits further scientific evaluation,
since the prognostic value of having increased mutagenic activity in the urine is not known.

Judgements on the value of screening also depend on the intensity of the exposure, and thus
the size of the expected cancer risk. Screening might be more justified in small groups
exposed to high levels of carcinogens than among large groups exposed to low levels.

To summarize, no routine screening methods for occupational cancers can be recommended
on the basis of present knowledge. The development of new molecular epidemiological
techniques may improve the prospects for early cancer detection, but more information is
needed before conclusions can be drawn.

Cancer Registration

During this century, cancer registries have been set up at several locations throughout the
world. The International Agency for Research on Cancer (IARC) (1992) has compiled data on
cancer incidence in different parts of the world in a series of publications, “Cancer Incidence
in Five Continents.” Volume 6 of this publication lists 131 cancer registries in 48 countries.

Two main features determine the potential usefulness of a cancer registry: a well-defined
catchment area (defining the geographical area involved), and the quality and completeness of
the recorded information. Many of those registries that were set up early do not cover a
geographically well-defined area, but rather are confined to the catchment area of a hospital.

There are several potential uses of cancer registries in the prevention of occupational cancer.
A complete registry with nationwide coverage and a high quality of recorded information can
result in excellent opportunities for monitoring the cancer incidence in the population. This
requires access to population data to calculate age-standardized cancer rates. Some registries
also contain data on occupation, which therefore facilitates the monitoring of cancer risk in
different occupations.
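
A minimal sketch of the direct age-standardization referred to here, with invented figures and an arbitrary three-band standard population, might look as follows:

    # Direct age standardization of a cancer incidence rate (hypothetical figures)
    # age band -> (registered cases, person-years in the catchment population)
    observed = {"0-44": (50, 800000), "45-64": (400, 500000), "65+": (900, 300000)}
    # weights of a reference standard population (fractions summing to 1)
    standard_weights = {"0-44": 0.60, "45-64": 0.25, "65+": 0.15}

    asr = sum(standard_weights[age] * cases / person_years
              for age, (cases, person_years) in observed.items())
    print(f"age-standardized rate = {asr * 100000:.1f} per 100,000 per year")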

Registries also may serve as a source for the identification of cases for epidemiological
studies of both the cohort and case-control types. In the cohort study, personal identification
data of the cohort is matched to the registry to obtain information on the type of cancer (i.e.,
as in record linkage studies). This assumes that a reliable identifying system exists (for
example, personal identification numbers in the Nordic countries) and that confidentiality
laws do not prohibit use of the registry in this way. For case-control studies, the registry may
be used as a source for cases, although some practical problems arise. First, the cancer
registries cannot, for methodological reasons, be quite up to date regarding recently diagnosed
cases. The reporting system, and the necessary checks and corrections of the information
obtained, result in some lag time. For concurrent or prospective case-control studies,
where it is desirable to contact the individuals themselves soon after a cancer diagnosis, it
usually is necessary to set up an alternative way of identifying cases, for example via hospital
records. Second, in some countries, confidentiality laws prohibit the identification of potential
study participants who are to be contacted personally.

Registries also provide an excellent source for calculating background cancer rates to use for
comparison of the cancer frequency in cohort studies of certain occupations or industries.

In studying cancer, cancer registries have several advantages over mortality registries
commonly found in many countries. The accuracy of the cancer diagnoses is often better in
cancer registries than in mortality registries, which are usually based on death certificate data.
Another advantage is that the cancer registry often holds information on histological tumour
type and permits the study of living persons with cancer rather than being limited to deceased
persons. Above all, registries hold cancer morbidity data, permitting the study of cancers that
are not rapidly fatal or not fatal at all.

Environmental Control

There are three main strategies for reducing workplace exposures to known or suspected
carcinogens: elimination of the substance, reduced exposure by reduced emission or improved
ventilation, and personal protection of the workers.

It has long been debated whether a true threshold for carcinogen exposure exists, below which
no risk is present. It is often assumed that the risk should be extrapolated linearly down to
zero risk at zero exposure. If this is the case, then no exposure limit, no matter how low,
would be considered entirely risk-free. Despite this, many countries have defined exposure
limits for some carcinogenic substances, while, for others, no exposure limit value has been
assigned.
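
Under the linear no-threshold assumption described here, the excess risk is simply proportional to cumulative exposure, so it reaches zero only at zero exposure. A minimal sketch (the slope is an arbitrary illustrative value, not a measured potency):

    # Linear no-threshold extrapolation (illustrative slope only)
    def excess_risk(cumulative_exposure, slope=1e-4):
        """Excess risk assumed proportional to exposure; zero only at zero exposure."""
        return slope * cumulative_exposure

    for exposure in (0.0, 0.1, 1.0, 10.0):
        print(exposure, excess_risk(exposure))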

Elimination of a compound may give rise to problems when replacement substances are
introduced, since the toxicity of the replacement substance must be lower than that of the
substance it replaces.

Reducing the exposure at the source may be relatively easily accomplished for process
chemicals by encapsulation of the process and ventilation. For example, when the
carcinogenic properties of vinyl chloride were discovered, the exposure limit value for vinyl
chloride was lowered by a factor of one hundred or more in several countries. Although this
standard was at first considered impossible to achieve by industry, later techniques allowed
compliance with the new limit. Reduction of exposure at the source may be difficult to apply
to substances that are used under less controlled conditions, or are formed during the work
operation (e.g., motor exhausts). Compliance with exposure limits requires regular
monitoring of workroom air levels.

When exposure cannot be controlled either by elimination or by reduced emissions, the use of
personal protection devices is the only remaining way to minimize the exposure. These
devices range from filter masks to air-supplied helmets and protective clothing. The main
route of exposure must be considered in deciding appropriate protection. However, many
personal protection devices cause discomfort to the user, and filter masks introduce an
increased respiratory resistance which may be very significant in physically demanding jobs.
The protective effect of respirators is generally unpredictable and depends on several factors,
including how well the mask is fitted to the face and how often filters are changed. Personal
protection must be considered as a last resort, to be attempted only when more effective ways
of reducing exposure fail.

Research Approaches

It is striking how little research has been done to evaluate the impact of programmes or
strategies to reduce the risk to workers of known occupational cancer hazards. With the
possible exception of asbestos, few such evaluations have been conducted. Developing better
methods for control of occupational cancer should include an evaluation of how present
knowledge is actually put to use.

Improved control of occupational carcinogens in the workplace requires the development of a
number of different areas of occupational safety and health. The process of identification of
risks is a basic prerequisite for reducing exposure to carcinogens in the workplace. Risk
identification in the future must solve certain methodological problems. More refined
epidemiological methods are required if smaller risks are to be detected. More precise data on
exposure for both the substance under study and possible confounding exposures will be
necessary. More refined methods for description of the exact dose of the carcinogen delivered
to the specific target organ also will increase the power of exposure-response calculations.
Today, it is not uncommon for very crude substitutes, such as the number of years employed
in the industry, to be used in place of an actual measurement of target organ dose. It is
quite clear that such surrogates introduce considerable misclassification of dose. The
presence of an exposure-response relationship is usually taken as strong evidence of an
aetiological relationship. However, the reverse, lack of demonstration of an exposure-
response relationship, is not necessarily evidence that no risk is involved, especially when
crude measures of target organ dose are used. If target organ dose could be determined, then
actual dose-response trends would carry even more weight as evidence for causation.

Molecular epidemiology is a rapidly growing area of research. Further insight into the
mechanisms of cancer development can be expected, and the possibility of the early detection
of carcinogenic effects will lead to earlier treatment. In addition, indicators of carcinogenic
exposure will lead to improved identification of new risks.

Development of methods for supervision and regulatory control of the work environment is
as necessary as methods for the identification of risks. Methods for regulatory control differ
considerably even among western countries. The systems for regulation used in each country
depend largely on socio-political factors and the status of labour rights. The regulation of
toxic exposures is obviously a political decision. However, objective research into the effects
of different types of regulatory systems could serve as a guide for politicians and decision-
makers.

A number of specific research questions also need to be addressed. Methods to describe the
expected effect of withdrawal of a carcinogenic substance or reduction of exposure to the
substance need to be developed (i.e., the impact of interventions must be assessed). The
calculation of the preventive effect of risk reduction raises certain problems when interacting
substances are studied (e.g., asbestos and tobacco smoke). The preventive effect of removing
one of two interacting substances is comparatively greater than when the two have only a
simple additive effect.

The implications of the multistage theory of carcinogenesis for the expected effect of
withdrawal of a carcinogen add a further complication. This theory states that the
development of cancer is a process involving several cellular events (stages). Carcinogenic
substances may act either in early or late stages, or both. For example, ionizing radiation is
believed to affect mainly early stages in inducing certain cancer types, while arsenic acts
mainly at late stages in lung cancer development. Tobacco smoke affects both early and late
stages in the carcinogenic process. The effect of withdrawing a substance involved in an early
stage would not be reflected in a reduced cancer rate in the population for a long time, while
the removal of a “late-acting” carcinogen would be reflected in a reduced cancer rate within a
few years. This is an important consideration when evaluating the effects of risk-reduction
intervention programmes.

Finally, the effects of new preventive factors have recently attracted considerable interest.
During the last five years, a large number of reports have been published on the preventive
effect of fruit and vegetable consumption against lung cancer. The effect seems to be very
consistent and strong. For example, the risk of lung cancer has been reported to be twice as
high among those with a low consumption of fruits and vegetables as among those with a high
intake. Thus, future studies of occupational lung cancer would have greater precision and
validity if individual data on fruit and vegetable consumption could be included in the analysis.

In conclusion, improved prevention of occupational cancer involves both improved methods
for risk identification and more research on the effects of regulatory control. For risk
identification, developments in epidemiology should mainly be directed toward better exposure
information, while in the experimental field, validation of the results of molecular
epidemiological methods with regard to cancer risk is needed.
Cancer References
Aitio, A and T Kauppinen. 1991. Occupational cancer as occupational disease. In
Occupational Diseases. Helsinki: Institute of Occupational Health.

Aksoy, M. 1985. Malignancies due to occupational exposure to humans. Am J Ind Med
7:395-402.

Alho, M, T Kauppinen, and E Sundquist. 1988. Use of exposure registration in the prevention
of occupational cancer in Finland. Am J Ind Med 13:581-592.

Anon. 1965. Bladder tumours in industry. Lancet 2:1173.

Armitage, P and R Doll. 1961. Stochastic models for carcinogenesis. In Proceedings of the
Fourth Berkeley Symposium on Mathematical Statistics and Probability, edited by J Neyman.
Berkeley: Univ. of California Press.

Checkoway, H, NE Pearce, and DJ Crawford-Brown. 1989. Research Methods in
Occupational Epidemiology. New York: Oxford Univ. Press.

Decoufle, P. 1982. Occupation. In Cancer Epidemiology and Prevention, edited by D
Schottenfeld and JF Fraumenti. Philadelphia: WB Saunders.

Doll, R and R Peto. 1981. The causes of cancer. J Natl Cancer Inst 66:1191-1308.

Ennever, FK. 1993. Biologically based mathematical models of lung cancer risk.
Epidemiology 4:193-194.

Frumkin, H and BS Levy. 1988. Carcinogens. In Occupational Health, edited by BS Levy and
DH Wegman. Boston: Little, Brown & Co.

Higginson, J. 1969. Present trends in cancer epidemiology. Proc Canadian Cancer Res Conf
8:40-75.

Higginson, J and CS Muir. 1976. The role of epidemiology in elucidating the importance of
environmental factors in human cancer. Cancer Detec Prev 1:79-105.

—. 1979. Environmental carcinogenesis: Misconceptions and limitations to cancer control. J
Natl Cancer Inst 63:1291-1298.

Hogan, MD and DG Hoel. 1981. Estimated cancer risk associated with occupational asbestos
exposure. Risk Anal 1:67-76.

International Agency for Research on Cancer (IARC). 1972-1995. IARC Monographs on the
Evaluation of Carcinogenic Risks to Humans. Vol. 1-63. Lyon: IARC.

—. 1990. Cancer: Causes, Occurrence and Control. IARC Scientific Publication, No. 100.
Lyon: IARC.

—. 1992. Cancer Incidence in Five Continents. Vol. VI. IARC Scientific Publications, No.
120. Lyon: IARC.

Jeyaratnam, J. 1994. Transfer of hazardous industries. In Occupational Cancer in Developing
Countries, edited by NE Pearce, E Matos, H Vainio, P Boffetta, and M Kogevinas. Lyon:
IARC.

Kerva, A and T Partanen. 1981. Computerizing occupational carcinogenic data in Finland.
Am Ind Hyg Assoc J 42:529-533.

Kogevinas, M, P Boffetta, and N Pearce. 1994. Occupational exposure to carcinogens in
developing countries. In Occupational Cancer in Developing Countries, edited by NE Pearce,
E Matos, H Vainio, P Boffetta, and M Kogevinas. Lyon: International Agency for Research
on Cancer (IARC).

Moolgavkar, S. 1978. The multistage theory of carcinogenesis and the age distribution of
cancer in man. J Natl Cancer Inst 61:49-52.

Moolgavkar, SH, EG Luebeck, D Krewski, and JM Zielinski. 1993. Radon, cigarette smoke
and lung cancer: A re-analysis of the Colorado Plateau uranium miners’ data. Epidemiology
4:204-217.

Pearce, NE and E Matos. 1994. Strategies for prevention of occupational cancer in developing
countries. In Occupational Cancer in Developing Countries, edited by NE Pearce, E Matos, H
Vainio, P Boffetta, and M Kogevinas. Lyon: International Agency for Research on Cancer
(IARC).

Pearce, NE, E Matos, M Koivusalo, and S Wing. 1994. Industrialization and health. In
Occupational Cancer in Developing Countries, edited by NE Pearce, E Matos, H Vainio, P
Boffetta, and M Kogevinas. Lyon: International Agency for Research on Cancer (IARC).

Pisani, P and M Parkin. 1994. Burden of cancer in developing countries. In Occupational
Cancer in Developing Countries, edited by NE Pearce, E Matos, H Vainio, P Boffetta, and M
Kogevinas. Lyon: International Agency for Research on Cancer (IARC).

Pott, P. 1775. Chirurgical Observations. London: Hawes, Clarke and Collins.

Siemiatycki, J. 1991. Risk Factors for Cancer in the Workplace. London: CRC Press.

Swerdlow, AJ. 1990. Effectiveness of primary prevention of occupational exposures on
cancer risk. In Evaluating Effectiveness of Primary Prevention of Cancer, edited by M
Hakama, V Veral, JW Cullen, and DM Parkin. IARC Scientific Publications, No. 103. Lyon:
International Agency for Research on Cancer (IARC).

Vineis, P and L Simonato. 1991. Proportion of lung and bladder cancers in males resulting
from occupation: A systematic approach. Arch Environ Health 46:6-15.

Waldron, HA. 1983. A brief history of scrotal cancer. Br J Ind Med 40:390-401.
World Health Organization (WHO). 1978. International Classification of Diseases. Geneva:
WHO.

—. 1992. International Classification of Diseases and Related Health Problems. Geneva:
WHO.

Wynder, EJ and GB Gori. 1977. Contribution of the environment to cancer incidence: An
epidemiologic exercise. J Natl Cancer Inst 58:825-832.

Copyright 2015 International Labour Organization

3. Cardiovascular System
Chapter Editors: Lothar Heinemann and Gerd Heuchert
Physical, Chemical and Biological Hazards
Physical Factors

Written by ILO Content Manager

Noise

Hearing loss due to workplace noise has been recognized as an occupational disease for many
years. Cardiovascular diseases are at the centre of the discussion on possible chronic extra-
aural effects of noise. Epidemiological studies have been done both in occupational settings
(with high noise levels) and in the general environment (with lower noise levels). The best
studies to date were done on the relationship between exposure to noise and high blood
pressure. In numerous recent reviews, noise researchers have assessed the available research
results and summarized the current state of knowledge
(Kristensen 1994; Schwarze and Thompson 1993; van Dijk 1990).

Studies show that noise is a less significant risk factor for diseases of the cardiovascular
system than behavioural risk factors like smoking, poor nutrition or physical inactivity
(Aro and Hasan 1987; Jegaden et al. 1986; Kornhuber and Lisson 1981).

The results of epidemiological studies do not permit any final answer on the adverse
cardiovascular health effects of chronic workplace or environmental noise exposure. The
experimental knowledge on hormonal stress effects and changes in peripheral
vasoconstriction, on the one hand, and the observation, on the other, that a high workplace
noise level (>85 dBA) promotes the development of hypertension, allow noise to be included
as a non-specific stress stimulus in a multifactorial risk model for cardiovascular diseases,
with high biological plausibility.

The opinion is advanced in modern stress research that although increases in blood pressure
during work are connected to noise exposure, the blood pressure level per se depends on a
complex set of personality and environmental factors (Theorell et al. 1987). Personality and
environmental factors play an intimate role in determining the total stress load at the
workplace.

For this reason it appears all the more urgent to study the effect of multiple burdens at the
workplace and to clarify the largely unknown interactions between combined exogenous
factors and diverse endogenous risk characteristics.

Experimental studies

It is today generally accepted that noise exposure is a psychophysical stressor. Numerous
experimental studies on animals and human subjects permit extending the hypothesis on the
pathomechanism of noise to the development of cardiovascular diseases. There is a relatively
pathomechanism of noise to the development of cardiovascular diseases. There is a relatively
uniform picture with respect to acute peripheral reactions to noise stimuli. Noise stimuli
clearly cause peripheral vasoconstriction, measurable as a decrease in finger pulse amplitude
and skin temperature and an increase in systolic and diastolic blood pressure. Almost all
studies confirm an increase in heart rate (Carter 1988; Fisher and Tucker 1991; Michalak,
Ising and Rebentisch 1990; Millar and Steels 1990; Schwarze and Thompson 1993;
Thompson 1993). The degree of these reactions is modified by such factors as the type of
noise occurrence, age, sex, state of health, nervous state and personal characteristics (Harrison
and Kelly 1989; Parrot et al. 1992; Petiot et al. 1988).

A wealth of research deals with the effects of noise on metabolism and hormone levels.
Exposure to loud noise almost always results fairly quickly in changes such as in blood
cortisone, cyclic adenosine monophosphate (cAMP), cholesterol and certain lipoprotein
fractions, glucose, protein fractions, hormones (e.g., ACTH, prolactin), adrenalin and
noradrenalin. Increased catecholamine levels can be found in the urine. All of this clearly
shows that noise stimuli below the level that damages hearing can lead to hyperactivity of the
hypophyseal-adrenal cortex system (Ising and Kruppa 1993; Rebentisch, Lange-Asschenfeld
and Ising 1994).

Chronic exposure to loud noise has been shown to result in a reduction of magnesium content
in serum, erythrocytes and in other tissues, such as the myocardium (Altura et al. 1992), but
study results are contradictory (Altura 1993; Schwarze and Thompson 1993).

The effect of workplace noise on blood pressure is equivocal. A series of epidemiological
studies, mostly designed as cross-sectional studies, indicate that employees with long-term
exposure to loud noise show higher systolic and/or diastolic blood pressure values than those
who work under less noisy conditions. Counterpoised, however, are studies that found very
little or no statistical association between long-term noise exposure and increased blood
pressure or hypertension (Schwarze and Thompson 1993; Thompson 1993; van Dijk 1990).
Studies that use hearing loss as a surrogate for noise show varied results. In any case, hearing
loss is not a suitable biological indicator for noise exposure (Kristensen 1989; van Dijk 1990).
The indications are mounting that noise and the risk factors of increased blood pressure,
increased serum cholesterol level (Pillsburg 1986) and smoking (Baron et al. 1987) have a
synergistic effect on the development of noise-induced hearing loss. Differentiating between
hearing loss from noise and hearing loss from other factors is difficult. In the studies of
Talbott et al. (1990) and van Dijk, Veerbeck and de Vries (1987), no connection was found
between noise exposure and high blood pressure, whereas hearing loss and high blood pressure
were positively correlated after correction for the usual risk factors, especially age and body
weight. The relative risks for high blood pressure range between 1 and 3.1 in comparisons of
exposure to loud and less loud noise. Studies with superior methodology report a weaker
association. Differences among the blood pressure group means are relatively small, with
values between 0 and 10 mm Hg.

A large epidemiological study of women textile workers in China (Zhao, Liu and Zhang
1991) plays a key role in noise effect research. Zhao ascertained a dose-effect relationship
between noise levels and blood pressure among women industrial workers who had been
subject to various noise exposures over many years. In an additive logistic model, the factors
“indicated cooking salt use”, “family history of high blood pressure” and “noise level”
correlated significantly (p<0.05) with the probability of high blood pressure. The authors
judged that no confounding by overweight was present. The noise level factor nevertheless
carried about half the hypertension risk of the two other factors. An increase in the noise level
from 70 to 100 dBA raised the risk for high blood pressure by a factor of 2.5. Quantifying the
risk of hypertension at higher noise exposure levels was possible in this study only because
the hearing protection offered was not worn. This study looked at non-smoking women aged
35 ±8 years, so according to von Eiff’s results (1993), the noise-related risk of hypertension
among men could be significantly higher. Hearing protection is prescribed in western
industrialized countries for noise levels over 85 to 90 dBA. Many studies carried out in these
countries demonstrated no clear risk at such noise levels, so it can be concluded from Gierke
and Harris (1990) that limiting the noise level to the set limits prevents most extra-aural
effects.
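
The dose-effect relationship reported by Zhao, Liu and Zhang can be pictured with a simple logistic model. In the sketch below the slope is back-calculated so that a rise from 70 to 100 dBA multiplies the odds of hypertension by roughly 2.5, in line with the reported risk increase; the intercept is arbitrary, and neither value is taken from the original analysis.

    import math

    # Illustrative logistic model: probability of hypertension as a function of noise level.
    # beta reproduces a 2.5-fold odds increase between 70 and 100 dBA; the intercept is arbitrary.
    beta = math.log(2.5) / 30.0     # per dBA
    intercept = -3.0

    def p_hypertension(noise_dba):
        logit = intercept + beta * (noise_dba - 70.0)
        return 1.0 / (1.0 + math.exp(-logit))

    for level in (70, 85, 100):
        print(level, round(p_hypertension(level), 3))
    # odds at 100 dBA / odds at 70 dBA = exp(beta * 30) = 2.5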

Heavy Physical Work

The effects of “lack of movement” as a risk factor for cardiovascular disease and of physical
activity as promoting health were elucidated in such classic publications as those by Morris,
Paffenbarger and their co-workers in the 1950s and 1960s, and in numerous epidemiological
studies (Berlin and Colditz 1990; Powell et al. 1987). In previous studies, no direct cause-and-
effect relationship could be shown between lack of movement and the rate of cardiovascular
disease or mortality. Epidemiological studies, however, point to the positive, protective
effects of physical activity on reducing various chronic diseases, including coronary heart
disease, high blood pressure, non-insulin-dependent diabetes mellitus, osteoporosis and colon
cancer, as well as anxiety and depression. The connection between physical inactivity and the
risk of coronary heart disease has been observed in numerous countries and population
groups. The relative risk for coronary heart disease among inactive people compared to active
people varies between 1.5 and 3.0, with studies of higher methodological quality showing a
stronger association. This increased risk is comparable to that found for
hypercholesterolemia, hypertension and smoking (Berlin and Colditz 1990; Centers for
Disease Control and Prevention 1993; Kristensen 1994; Powell et al. 1987).

Regular, leisure-time physical activity appears to reduce the risk of coronary heart disease
through various physiological and metabolic mechanisms. Experimental studies have shown
that regular exercise training positively influences the known risk factors and other health-
related factors. It results, for example, in an increase in the HDL-cholesterol level and a
decrease in the serum triglyceride level and blood pressure (Bouchard, Shepard and
Stephens 1994; Pate et al. 1995).

A series of epidemiological studies, spurred on by the studies of Morris et al. on coronary risk
among London bus drivers and conductors (Morris, Heady and Raffle 1956; Morris et al.
1966), and the study of Paffenbarger et al. (1970) among American harbour workers, looked
at the relationship between the difficulty level of physical work and the incidence of
cardiovascular diseases. Based on earlier studies from the 1950s and 1960s the prevailing idea
was that physical activity at work could have a certain protective effect on the heart. The
highest relative risk for cardiovascular diseases was found in people with physically inactive
jobs (e.g., sitting jobs) as compared to people who do heavy physical work. But newer studies
have found no difference in the frequency of coronary disease between active and inactive
occupational groups or have even found a higher prevalence and incidence of cardiovascular
risk factors and cardiovascular diseases among heavy labourers (Ilmarinen 1989; Kannel et al.
1986; Kristensen 1994; Suurnäkki et al. 1987). Several reasons can be given for the
contradiction between the health-promoting effect of free-time physical activities on
cardiovascular morbidity and the lack of this effect with heavy physical labour:

• Primary and secondary selection processes (healthy worker effect) can lead to serious
distortions in occupational medical epidemiological studies.
• The relationship found between physical work and the onset of cardiovascular diseases can
be influenced by a number of confounding variables (like social status, education,
behavioural risk factors).
• Assessing the physical load, often solely on the basis of job descriptions, must be seen as an
inadequate method.

Social and technological development since the 1970s has meant that only a few jobs with
“dynamic physical activity” remain. Physical activity in the modern workplace often means
heavy lifting or carrying and a high proportion of static muscle work. So it is not surprising
that physical activity in occupations of this type lacks an essential criterion for a coronary-
protective effect: sufficient intensity, duration and frequency of loading of large muscle
groups. The physical work is, in general, intensive, but has little training effect on the
cardiovascular system. The combination of heavy, physically demanding work
and high free-time physical activity could establish the most favourable situation with respect
to the cardiovascular risk-factor profile and the onset of CHD (Saltin 1992).

The results of studies to date are also not consistent on the question of whether heavy physical
work is related to the onset of arterial hypertension.

Physically demanding work is related to changes in blood pressure. In dynamic work that
uses large muscle groups, blood supply and demand are in balance. In dynamic work that
involves small and medium-sized muscle groups, the heart may put out more blood than is
needed for the total physical work, and the result can be a considerable increase in systolic
and diastolic blood pressure (Frauendorf et al. 1986).

Even with combined physical-mental strain or physical strain under the effects of noise, a
substantial increase in blood pressure and heart rate is seen in a certain percentage
(approximately 30%) of people (Frauendorf, Kobryn and Gelbrich 1992; Frauendorf et al.
1995).

No studies are presently available on the chronic effects of this increased circulatory activity
in local muscle work, with or without noise or mental strain.

In two recently published independent studies by American and German researchers
(Mittleman et al. 1993; Willich et al. 1993), the question was pursued as to whether heavy
physical work can be a trigger for an acute myocardial infarction. In these studies, involving
1,228 and 1,194 people with acute myocardial infarction respectively, the physical strain one
hour before the infarction was compared with the situation 25 hours before. The following
relative risks were calculated for the onset of a myocardial infarction within one hour of heavy
physical strain in comparison with light activity or rest: 5.9 (95% CI: 4.6-7.7) in the American
and 2.1 (95% CI: 1.6-3.1) in the German study. The risk was highest for people in poor
physical condition. An important limiting observation is, however, that heavy physical strain
occurred in the hour before the infarction in only 4.4 and 7.1% of the infarction patients
respectively.
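
The comparison made in these studies amounts to contrasting how often heavy exertion occurred in the hour before the infarction with how often it occurred in a reference period. A crude sketch of that ratio, with invented counts rather than the published data (the original analyses used more refined case-crossover methods), is:

    # Crude sketch of the hazard-period versus control-period comparison (hypothetical counts)
    n_patients = 1200
    exerted_hazard_hour = 60    # heavy exertion in the hour before the infarction
    exerted_control_hour = 12   # heavy exertion in the comparison period 25 hours earlier

    crude_rr = (exerted_hazard_hour / n_patients) / (exerted_control_hour / n_patients)
    print(f"crude relative risk = {crude_rr:.1f}")   # 5.0 with these invented figures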

These studies raise questions about the significance of physical strain, or of a stress-induced
increased output of catecholamines, for the coronary blood supply, for the triggering of
coronary spasms, or for an immediately harmful effect of catecholamines on the beta-
adrenergic receptors of the heart muscle membrane, as causes of the manifestation of
infarction or of acute cardiac death. It can be assumed that such results will not ensue with a
healthy coronary vessel system and intact myocardium (Fritze and Müller 1995).

The observations make clear that statements on possible causal relationships between heavy
physical labour and effects on cardiovascular morbidity are not easy to substantiate. The
problem with this type of investigation clearly lies in the difficulty in measuring and assessing
“hard work” and in excluding preselections (healthy worker effect). Prospective cohort
studies are needed on the chronic effects of selected forms of physical work and also on the
effects of combined physical-mental or noise stress on selected functional areas of the
cardiovascular system.

It is paradoxical that the result of reducing heavy dynamic muscle work—until now greeted as
a significant improvement in the level of strain in the modern workplace—possibly results in
a new, significant health problem in modern industrial society. From the occupational
medicine perspective, one might conclude from the studies to date that static physical strain on
the musculoskeletal system, combined with lack of movement, presents a much greater health
risk than previously assumed.

Where monotonous, unbalanced physical strain cannot be avoided, counterbalancing with
free-time sports activities of comparable duration (e.g., swimming, bicycling, walking and
tennis) should be encouraged.

Heat and Cold

Exposure to extreme heat or cold is thought to influence cardiovascular morbidity (Kristensen
1989; Kristensen 1994). The acute effects of high outside temperatures or cold on the
circulatory system are well documented. An increase in mortality as a result of cardiovascular
diseases, mostly heart attacks and strokes, was observed at low temperatures (under +10°C) in
the winter in countries at northern latitudes (Curwen 1991; Douglas, Allan and Rawles 1991;
Kristensen 1994; Kunst, Looman and Mackenbach 1993). Pan, Li and Tsai (1995) found an
impressive U-shaped relationship between outside temperature and mortality rates for
coronary heart disease and strokes in Taiwan, a subtropical country, with a similarly falling
gradient between +10°C and +29°C and a sharp increase above +32°C. The
temperature at which the lowest cardiovascular mortality was observed is higher in Taiwan
than in countries with colder climates. Kunst, Looman and Mackenbach found in the
Netherlands a V-shaped relationship between total mortality and outside temperature, with the
lowest mortality at 17°C. Most cold-related deaths occurred in people with cardiovascular
diseases, and most heat-related deaths were associated with respiratory tract illnesses. Studies
from the United States (Rogot and Padgett 1976) and other countries (Wyndham and
Fellingham 1978) show a similar U-shaped relationship, with the lowest heart attack and
stroke mortality at outside temperatures around 25 to 27°C.

It is not yet clear how these results should be interpreted. Some authors have concluded that a
causal relationship possibly exists between temperature stress and the pathogenesis of
cardiovascular diseases (Curwen and Devis 1988; Curwen 1991; Douglas, Allan and Rawles
1991; Khaw 1995; Kunst, Looman and Mackenbach 1993; Rogot and Padgett 1976;
Wyndham and Fellingham 1978). This hypothesis was supported by Khaw in the following
observations:

• Temperature proved to be the strongest acute (day-to-day) predictor of cardiovascular
mortality after allowance for other, differently handled parameters, such as seasonal
environmental changes and factors like air pollution, sunlight exposure, incidence of flu and
nutrition. This speaks against the assumption that temperature acts only as a substitute
variable for other detrimental environmental conditions.
• The consistency of the connection in various countries and population groups, over time and
in different age groups, is furthermore convincing.
• Data from clinical and laboratory research suggest various biologically plausible
pathomechanisms, including effects of changing temperature on haemostasis, blood
viscosity, lipid levels, the sympathetic nervous system and vasoconstriction (Clark and
Edholm 1985; Gordon, Hyde and Trost 1988; Keatinge et al. 1986; Lloyd 1991; Neild et al.
1994; Stout and Grawford 1991; Woodhouse, Khaw and Plummer 1993b; Woodhouse et al.
1994).

Exposure to cold increases blood pressure, blood viscosity and heart rate (Kunst, Looman and
Mackenbach 1993; Tanaka, Konno and Hashimoto 1989; Kawahara et al. 1989). Studies by
Stout and Grawford (1991) and Woodhouse and co-workers (1993; 1994) show that
fibrinogen, blood clotting factor VIIc and lipids were higher among older people in the
winter.

An increase in blood viscosity and serum cholesterol was found with exposure to high
temperatures (Clark and Edholm 1985; Gordon, Hyde and Trost 1988; Keatinge et al. 1986).
According to Woodhouse, Khaw and Plummer (1993a), there is a strong inverse correlation
between blood pressure and temperature.

Still unclear is the decisive question of whether long-term exposure to cold or heat results in
lasting increased risk of cardiovascular disease, or whether exposure to heat or cold increases
the risk for an acute manifestation of cardiovascular diseases (e.g., a heart attack, a stroke) in
connection with the actual exposure (the “triggering effect”). Kristensen (1989) concludes
that the hypothesis of an acute risk increase for complications from cardiovascular disease in
people with underlying organic disease is confirmed, whereas the hypothesis of a chronic
effect of heat or cold can neither be confirmed nor rejected.

There is little, if any, epidemiological evidence to support the hypothesis that the risk of
cardiovascular disease is higher in populations with an occupational, long-term exposure to
high temperature (Dukes-Dobos 1981). Two recent cross-sectional studies focused on
metalworkers in Brazil (Kloetzel et al. 1973) and a glass factory in Canada (Wojtczak-
Jaroszowa and Jarosz 1986). Both studies found a significantly increased prevalence of
hypertension among those subject to high temperatures, which increased with the duration of
the hot work. Presumed influences of age or nutrition could be excluded. Lebedeva, Alimova
and Efendiev (1991) studied mortality among workers in a metallurgical company and found
high mortality risk among people exposed to heat over the legal limits. The figures were
statistically significant for blood diseases, high blood pressure, ischemic heart disease and
respiratory tract diseases. Karnaukh et al. (1990) report an increased incidence of ischemic
heart disease, high blood pressure and haemorrhoids among workers in hot casting jobs. The
design of this study is not known. Wild et al. (1995) assessed the mortality rates between
1977 and 1987 in a cohort study of French potash miners. The mortality from ischemic heart
disease was higher for underground miners than for above-ground workers (relative
risk = 1.6). Among people who were separated from the company for health reasons, the
ischemic heart disease mortality was five times higher in the exposed group as compared to
the above-ground workers. A cohort mortality study in the United States showed a 10% lower
cardiovascular mortality for heat-exposed workers as compared to the non-exposed control
group. However, among those workers who were in heat-exposed jobs for less than six months,
the cardiovascular mortality was relatively high (Redmond, Gustin and Kamon 1975;
Redmond et al. 1979). Comparable results were cited by Moulin et al. (1993) in a cohort study
of French steel workers. These results were attributed to a possible healthy worker effect
among the heat-exposed workers.

There are no known epidemiological studies of workers exposed to cold (e.g., cold-store,
slaughterhouse or fishery workers). It should be mentioned that cold stress is not only a
function of temperature. The effects described in the literature appear to be influenced by a
combination of factors like muscle activity, dress, dampness, drafts and possibly poor living
conditions. Workplaces with exposure to cold should pay special attention to appropriate
dress and avoiding drafts (Kristensen 1994).

Vibration

Hand-arm vibration stress

It has long been known and well documented that vibrations transmitted to the hands by
vibrating tools can cause peripheral vascular disorders in addition to damage to the muscle
and skeletal system, and peripheral nerve-function disorders in the hand-arm area (Dupuis et
al. 1993; Pelmear, Taylor and Wasserman 1992). “White finger disease”, first described by
Raynaud, appears with higher prevalence rates among exposed populations, and is recognized
as an occupational disease in many countries.

Raynaud’s phenomenon is marked by attacks of vasospastically reduced perfusion of all or
some fingers, with the exception of the thumbs, accompanied by sensory disturbances in the
affected fingers, feelings of cold, pallor and paraesthesia. After the exposure ends, circulation
resumes, accompanied by a painful hyperaemia.

It is assumed that endogenous factors (e.g., in the sense of a primary Raynaud’s phenomenon)
as well as exogenous exposures can be held responsible for the occurrence of a vibration-
related vasospastic syndrome (VVS). The risk is clearly greater with vibrations from
machines with higher frequencies (20 to over 800 Hz) than with machines that produce low-
frequency vibrations. The amount of static strain (gripping and pressing strength) appears to
be a contributing factor. The relative significance of cold, noise, other physical and
psychological stressors and heavy nicotine consumption in the development of Raynaud’s
phenomenon is still unclear.

Raynaud’s phenomenon is pathogenetically based on a vasomotor disorder. Despite a large
number of studies on functional, non-invasive (thermography, plethysmography,
capillaroscopy, cold test) and invasive examinations (biopsy, arteriography), the
capillaroscopy, cold test) and invasive examinations (biopsy, arteriography), the
pathophysiology of the vibration-related Raynaud’s phenomenon is not yet clear. Whether the
vibration directly causes damage to the vascular musculature (a “local fault”), or whether it is
a vasoconstriction as a result of sympathetic hyperactivity, or whether both these factors are
necessary, is at present still unclear (Gemne 1994; Gemne 1992).

The work-related hypothenar hammer syndrome (HHS) should be distinguished in the
differential diagnosis from vibration-caused Raynaud’s phenomenon. Pathogenetically, HHS
is chronic traumatic damage to the ulnar artery (an intima lesion with subsequent thrombosis)
in the area of its superficial course over the hamate bone (os hamatum). HHS is caused by
long-term mechanical effects in the form of external pressure or blows, or by sudden strain in
the form of mechanical partial-body vibrations (often combined with persistent pressure and
the effects of impacts). For this reason, HHS can occur as a complication of, or in connection
with, a VVS (Kaji et al. 1993; Marshall and Bilderling 1984).

In addition to the early peripheral vascular effects that are specific to hand-arm vibration
exposure, the so-called non-specific chronic changes in the autonomic regulation of organ
systems (for example, of the cardiovascular system), possibly provoked by vibration, are of
particular scientific interest (Gemne and Taylor 1983). The few experimental and
epidemiological studies of possible chronic effects of hand-arm vibration give no clear results
confirming the hypothesis of vibration-related endocrine and cardiovascular disturbances
of metabolic processes, cardiac function or blood pressure (Färkkilä, Pyykkö
and Heinonen 1990; Virokannas 1990), other than an increased activity of the adrenergic system
during exposure to vibration (Bovenzi 1990; Olsen 1990). This applies to vibration
alone and in combination with other strain factors such as noise or cold.

Whole-body vibration stress

If whole-body mechanical vibration affects the cardiovascular system, then a
series of parameters such as heart rate, blood pressure, cardiac output, the electrocardiogram,
the plethysmogram and certain metabolic parameters should show corresponding reactions.
Conclusions are made difficult by the methodological problem that these circulatory
measures do not react specifically to vibration, but can also be influenced by other
simultaneous factors. Increases in heart rate are apparent only under very heavy vibration
loads; the influence on blood pressure shows no systematic pattern, and
electrocardiographic (ECG) changes are not significantly differentiable.

Peripheral circulatory disorders resulting from vasoconstriction have been less researched and
appear weaker and of shorter duration than those from hand-arm vibration, which are marked
by an effect on the gripping strength of the fingers (Dupuis and Zerlett 1986).

In most studies the acute effects of whole-body vibration on the cardiovascular system of
vehicle drivers were found to be relatively weak and temporary (Dupuis and Christ 1966;
Griffin 1990).

Wikström, Kjellberg and Landström (1994), in a comprehensive overview, cited eight
epidemiological studies from 1976 to 1984 that examined the connection between whole-body
vibration and cardiovascular diseases and disorders. Only two of these studies found a higher
prevalence of such illnesses in the group exposed to vibration, and in neither was this
interpreted as an effect of whole-body vibration.

The view is widely accepted that changes in physiological functions caused by whole-body
vibration have only a very limited effect on the cardiovascular system. The causes and
mechanisms of the cardiovascular reaction to whole-body vibration are not yet sufficiently
understood. At present there is no basis for assuming that whole-body vibration per se
contributes to the risk of cardiovascular disease, but it should be borne in mind that this
factor is very often combined with exposure to noise, inactivity (sedentary work) and shift work.

Ionizing Radiation, Electromagnetic Fields, Radio and Microwaves, Ultra- and
Infrasound

Many case studies and a few epidemiological studies have drawn attention to the possibility
that ionizing radiation, introduced to treat cancer or other diseases, may promote the
development of arteriosclerosis and thereby increase the risk for coronary heart disease and
also other cardiovascular diseases (Kristensen 1989; Kristensen 1994). Studies on the
incidence of cardiovascular diseases in occupational groups exposed to ionizing radiation are
not available.

Kristensen (1989) reports on three epidemiological studies from the early 1980s on the
connection between cardiovascular diseases and exposure to electromagnetic fields. The
results are contradictory. In the 1980s and 1990s the possible effects of electrical and
magnetic fields on human health attracted increasing attention in occupational and
environmental medicine. Partially contradictory epidemiological studies that looked for
correlations between occupational and/or environmental exposure to weak, low-frequency
electrical and magnetic fields, on the one hand, and the onset of health disorders on the
other, aroused considerable attention. The numerous experimental and few epidemiological
studies have focused on possible long-term effects such as carcinogenicity, teratogenicity,
effects on the immune or hormone systems and on reproduction (with special attention to
miscarriages and malformations), as well as on “hypersensitivity to electricity” and
neuro-psychological behavioural reactions. A possible cardiovascular risk is not being
discussed at present (Gamberale 1990; Knave 1994).

Certain immediate effects of low-frequency magnetic fields on the organism, scientifically
documented through in vitro and in vivo studies at low to high field strengths, should be
mentioned in this connection (UNEP/WHO/IRPA 1984; UNEP/WHO/IRPA 1987). In a magnetic
field, moving charge carriers, such as those in the blood stream or in the contracting heart,
induce electrical fields and currents. Thus the electrical voltage created over the aorta near
the heart during cardiac activity in a strong static magnetic field can amount to 30 mV at a
flux density of 2 tesla (T), and such induced potentials are detectable in the ECG at flux
densities above 0.1 T. Effects on blood pressure, for example, were not found. Magnetic
fields that change with time (intermittent magnetic fields) induce electrical eddy fields in
biological objects that can, for example, excite nerve and muscle cells in the body. No
certain effect appears with electrical fields or induced current densities under 1 mA/m².
Visual effects (induced magnetophosphenes) and effects on the nervous system are reported at
10 to 100 mA/m². Extrasystoles and ventricular fibrillation appear at over 1 A/m².
According to currently available data, no direct health threat is to be expected from
short-term whole-body exposure up to 2 T (UNEP/WHO/IRPA 1987). However, the danger
threshold for indirect effects (e.g., from the force the magnetic field exerts on
ferromagnetic materials) lies lower than that for direct effects. Precautionary measures are
thus required for persons with ferromagnetic implants (unipolar pacemakers, magnetizable
aneurysm clips, haemoclips, artificial heart valve parts, other electrical implants, and also
metal fragments). The danger threshold for ferromagnetic implants begins at 50 to 100 mT.
The risk is that injuries or bleeding can result from migration or rotational movement of the
implant, and that its function (e.g., of heart valves, pacemakers and so on) can be impaired.
In research and industrial facilities with strong magnetic fields, some authors advise medical
surveillance examinations for people with cardiovascular diseases, including high blood
pressure, in jobs where the magnetic field exceeds 2 T (Bernhardt 1986; Bernhardt 1988).
Whole-body exposure of 5 T can lead to magnetoelectrodynamic and hydrodynamic effects on
the circulatory system, and it should be assumed that short-term whole-body exposure of 5 T
causes health hazards, especially for people with cardiovascular diseases, including high
blood pressure (Bernhardt 1988; UNEP/WHO/IRPA 1987).
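As a plausibility check on the 30 mV figure, the motional voltage induced across a vessel lying
perpendicular to a static field can be estimated as the product of flux density, blood velocity
and vessel diameter. The aortic velocity and diameter used below are assumed, typical
textbook values, not data from the studies cited above:

$$ U = B \, v \, d \approx 2\ \mathrm{T} \times 0.6\ \mathrm{m\,s^{-1}} \times 0.025\ \mathrm{m} \approx 0.03\ \mathrm{V} = 30\ \mathrm{mV} $$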

Studies that have examined the various effects of radio waves and microwaves have found no
detrimental effects on health. The possibility of cardiovascular effects from ultrasound
(frequencies between 16 kHz and 1 GHz) and infrasound (frequencies below 20 Hz) is discussed
in the literature, but the empirical evidence is very slight (Kristensen 1994).

Chemical Hazardous Materials

Written by ILO Content Manager

Despite numerous studies, the role of chemical factors in causing cardiovascular diseases is
still disputed, but is probably small. A calculation of the aetiological contribution of
occupational chemical factors to cardiovascular disease in the Danish population yielded a
value under 1% (Kristensen 1994). For a few materials, such as carbon disulphide and organic
nitro compounds, the effect on the cardiovascular system is generally recognized
(Kristensen 1994). Lead appears to affect blood pressure and cerebrovascular morbidity.
Carbon monoxide (Weir and Fabiano 1982) undoubtedly has acute effects, especially in
provoking angina pectoris in pre-existing ischaemia, but probably does not increase the risk of
the underlying arteriosclerosis, as was long suspected. Other materials such as cadmium, cobalt,
arsenic, antimony, beryllium, organic phosphates and solvents are under discussion but are not
yet sufficiently documented. Kristensen (1989, 1994) gives a critical overview. A selection
of relevant activities and industrial branches can be found in Table 1.

Table 1. Selection of activities and industrial branches that may be associated with
cardiovascular hazards

Hazardous material and occupational branch affected or use

Carbon disulphide (CS2): rayon and synthetic fibre fabrication; rubber, match, explosives and
cellulose industries; used as a solvent in the manufacture of pharmaceuticals, cosmetics and
insecticides.

Organic nitro compounds: explosives and munitions manufacture; pharmaceuticals industry.

Carbon monoxide (CO): employees in large industrial combustion facilities (blast furnaces,
coke ovens); manufacture and use of gas mixtures containing CO (producer gas facilities);
repair of gas pipelines; casting workers, firefighters, auto mechanics (in badly ventilated
spaces); accidental exposures (gases from explosions, fires in tunnel building or underground
work).

Lead: smelting of lead ore and of secondary raw materials containing lead; metal industry
(production of various alloys); cutting and welding of metals containing lead or of materials
coated with lead-containing coverings; battery factories; ceramics and porcelain industries
(production of leaded glazes); production of leaded glass; paint industry, application and
removal of leaded paints.

Hydrocarbons, halogenated hydrocarbons: solvents (paints, lacquers); adhesives (shoe and
rubber industries); cleaning and degreasing agents; basic materials for chemical syntheses;
refrigerants; medicine (narcotics); methyl chloride exposure in activities using solvents.

Exposure and effect data from the important studies of carbon disulphide (CS2), carbon
monoxide (CO) and nitroglycerine are given in the chemical section of the Encyclopaedia.
This listing makes clear that problems of study inclusion, combined exposures, varying
treatment of confounding factors, and differing outcome measures and assessment strategies
play a considerable role in the findings, so that uncertainties remain in the conclusions of
these epidemiological studies.

In such situations, clear pathogenetic concepts and knowledge can support the suspected
associations and thereby help to derive and substantiate the consequences, including
preventive measures. Carbon disulphide is known to affect lipid and carbohydrate metabolism,
thyroid function (triggering hypothyroidism) and coagulation (promoting platelet aggregation
and inhibiting plasminogen and plasmin activity). Changes in blood pressure, such as
hypertension, are mostly traceable to vascular changes in the kidney; a direct causal link
between carbon disulphide and high blood pressure has not yet been excluded with certainty,
and a direct (reversible) toxic effect on the myocardium, or an interference with
catecholamine metabolism, is suspected. A successful 15-year intervention study (Nurminen and
Hernberg 1985) documents the reversibility of the effect on the heart: a reduction in
exposure was followed almost immediately by a decrease in cardiovascular mortality. In
addition to these clearly direct cardiotoxic effects, arteriosclerotic changes in the brain,
eye, kidney and coronary vasculature, which can be considered the basis of encephalopathies,
retinal aneurysms, nephropathies and chronic ischaemic heart disease, have been demonstrated
among those exposed to CS2. Ethnic and nutritional factors play a part in the pathomechanism;
this was made clear in comparative studies of Finnish and Japanese viscose rayon workers. In
Japan, vascular changes in the area of the retina were found, whereas in Finland the
cardiovascular effects dominated. Aneurysmatic changes in the retinal vasculature were
observed at carbon disulphide concentrations under 3 ppm (Fajen, Albright and Leffingwell
1981). Reducing the exposure to 10 ppm clearly reduced cardiovascular mortality, but this
does not clarify whether cardiotoxic effects are entirely excluded at exposures below 10 ppm.

The acute toxic effects of organic nitrates involve dilation of the vessels, accompanied by a
fall in blood pressure, increased heart rate, patchy erythema (flushing), orthostatic
dizziness and headaches. Since the half-life of organic nitrates is short, these complaints
soon subside, and serious health consequences are not normally expected from acute
intoxication. In employees with long-term exposure to organic nitrates, a so-called withdrawal
syndrome appears when exposure is interrupted, with a latency period of 36 to 72 hours. It
includes complaints ranging from angina pectoris up to acute myocardial infarction, and cases
of sudden death. In the deaths investigated, coronary sclerotic changes often were not
documented; the cause is therefore suspected to be a “rebound vasospasm”. When the
vasodilating effect of the nitrate is removed, an autoregulatory increase in resistance occurs
in the vessels, including the coronary arteries, which produces the above-mentioned results.
In some epidemiological studies, suspected associations between the duration and intensity of
exposure to organic nitrates and ischaemic heart disease are considered uncertain, and
pathogenetic plausibility for them is lacking.

With regard to lead, metallic lead in dust form, the salts of divalent lead and organic lead
compounds are toxicologically important. Lead attacks the contractile mechanism of vascular
smooth muscle cells and causes vascular spasms, which are considered the cause of a series of
symptoms of lead intoxication. Among these is the temporary hypertension that appears with
lead colic. Lasting high blood pressure from chronic lead intoxication can be explained by
vasospasm as well as by kidney changes. In epidemiological studies with longer exposure
times, an association between lead exposure and increased blood pressure has been observed,
as well as an increased incidence of cerebrovascular disease, whereas there was little
evidence of increased cardiovascular disease.

Epidemiological data and pathogenetic investigations to date have produced no clear results
on the cardiovascular toxicity of other metals such as cadmium, cobalt and arsenic. However,
the action of halogenated hydrocarbons as myocardial irritants is considered well
established. The triggering mechanism of the occasionally life-threatening arrhythmias caused
by these materials is presumably a sensitization of the myocardium to epinephrine
(adrenaline), which acts as a natural transmitter substance of the autonomic nervous system.
It is still being discussed whether a direct cardiac effect also exists, such as reduced
contractility, suppression of impulse formation or impulse conduction, or reflex effects
resulting from irritation of the upper airways. The sensitizing potential of hydrocarbons
apparently depends on the degree of halogenation and on the type of halogen contained;
chlorinated hydrocarbons are thought to have a stronger sensitizing effect than fluorinated
compounds. The maximum myocardial effect of chlorinated hydrocarbons occurs at around four
chlorine atoms per molecule. Short-chain, non-substituted hydrocarbons have a higher toxicity
than longer-chain ones. Little is known about the arrhythmia-triggering dose of the individual
substances, as the reports on humans are predominantly case descriptions involving exposure
to high concentrations (accidental exposure and “sniffing”). According to Reinhardt et al.
(1971), benzene, heptane, chloroform and trichloroethylene are especially sensitizing, whereas
carbon tetrachloride and halothane have a weaker arrhythmogenic effect.
The toxic effects of carbon monoxide result from tissue hypoxaemia, caused by the increased
formation of CO-Hb (CO has an affinity for haemoglobin roughly 200 times greater than that of
oxygen) and the consequently reduced release of oxygen to the tissues. Along with the nervous
system, the heart is one of the organs that react most critically to such hypoxaemia. The
resulting acute heart complaints have been repeatedly examined and described according to
exposure time, respiratory rate, age and previous illnesses. Whereas in healthy subjects
cardiovascular effects first appear at CO-Hb concentrations of 35 to 40%, angina pectoris
could be produced experimentally in patients with ischaemic heart disease during physical
exertion at CO-Hb concentrations as low as 2 to 5% (Kleinman et al. 1989; Hinderliter et al.
1989). Fatal infarctions were observed at 20% CO-Hb among those with pre-existing disease
(Atkins and Baker 1985).
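The roughly 200-fold affinity difference can be used for a back-of-the-envelope estimate of the
equilibrium CO-Hb level via the Haldane relation, COHb/O2Hb = M · (pCO/pO2). The sketch below
is only an illustration of that relation, not a validated uptake model (it ignores uptake
kinetics); the affinity ratio M and the alveolar oxygen pressure are assumed values.

```python
# Rough equilibrium CO-Hb estimate from the Haldane relation:
#   COHb / O2Hb = M * (pCO / pO2)
# Assumed, illustrative constants: M ~ 220, alveolar pO2 ~ 0.13 atm.

def equilibrium_cohb_fraction(co_ppm, affinity_ratio=220.0, p_o2_atm=0.13):
    """Equilibrium CO-Hb fraction (0..1) for a given ambient CO level in ppm."""
    p_co_atm = co_ppm * 1e-6                      # 1 ppm CO = 1e-6 atm at 1 atm total pressure
    ratio = affinity_ratio * p_co_atm / p_o2_atm  # COHb : O2Hb
    return ratio / (1.0 + ratio)                  # fraction of haemoglobin carrying CO

if __name__ == "__main__":
    for ppm in (10, 50, 100, 400):
        print(f"{ppm:4d} ppm CO -> ~{equilibrium_cohb_fraction(ppm) * 100:.1f}% CO-Hb at equilibrium")
```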

The effects of long-term exposure to low CO concentrations are still a subject of controversy.
Whereas experimental animal studies have suggested a possible atherogenic effect, acting by
way of hypoxia of the vessel walls or by a direct CO effect on the vessel wall (increased
vascular permeability), on the flow characteristics of the blood (enhanced platelet
aggregation) or on lipid metabolism, corresponding proof for humans is lacking. The increased
cardiovascular mortality among tunnel workers (SMR 1.35, 95% CI 1.09-1.68) is more likely
explained by acute exposure than by chronic CO effects (Stern et al. 1988). The role of CO in
the cardiovascular effects of cigarette smoking is also not clear.
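The SMR quoted for the tunnel-worker cohort is simply the ratio of observed to expected deaths,
and a common large-sample approximation of its 95% confidence interval multiplies the SMR by
exp(±1.96/√observed). The snippet below is a generic sketch of that arithmetic with invented
counts; it does not reproduce the data of Stern et al. (1988).

```python
import math

def smr_with_ci(observed, expected, z=1.96):
    """Standardized mortality ratio with an approximate log-based confidence interval."""
    smr = observed / expected
    half_width = z / math.sqrt(observed)   # SE of ln(SMR) is roughly 1/sqrt(observed deaths)
    return smr, smr * math.exp(-half_width), smr * math.exp(half_width)

# Hypothetical example: 90 observed cardiovascular deaths against 66.7 expected
smr, low, high = smr_with_ci(90, 66.7)
print(f"SMR = {smr:.2f} (approx. 95% CI {low:.2f}-{high:.2f})")
```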

Biological Hazards

Written by ILO Content Manager

“A biological hazardous material can be defined as a biological material capable of
self-replication that can cause harmful effects in other organisms, especially humans”
(American Industrial Hygiene Association 1986).

Bacteria, viruses, fungi and protozoa are among the biological hazardous materials that can
harm the cardiovascular system through contact that is intentional (introduction of
technology-related biological materials) or unintentional (non-technology-related
contamination of work materials). Endotoxins and mycotoxins may play a role in addition to
the infectious potential of the micro-organism. They can themselves be a cause or
contributing factor in a developing disease.

As a complication of an infection, the cardiovascular system can react with localized organ
involvement: vasculitis (inflammation of the blood vessels), endocarditis (inflammation of the
endocardium, primarily from bacteria but also from fungi and protozoa; an acute form can
follow a septic episode, a subacute form the generalization of an infection), myocarditis
(inflammation of the heart muscle, caused by bacteria, viruses and protozoa), pericarditis
(inflammation of the pericardium, usually accompanying myocarditis) or pancarditis (the
simultaneous occurrence of endocarditis, myocarditis and pericarditis). Alternatively, it can
be drawn as a whole into a systemic general illness (sepsis, septic or toxic shock).

Involvement of the heart can appear either during or after the actual infection. The
pathomechanisms to be considered are direct colonization by the germ, or toxic or allergic
processes. In addition to the type and virulence of the pathogen, the efficiency of the immune
system plays a role in how the heart reacts to an infection. Germ-infected wounds can induce
myocarditis or endocarditis caused, for example, by streptococci and staphylococci; this can
affect virtually all occupational groups after a workplace accident. Ninety per cent of all
traced endocarditis cases can be attributed to streptococci or staphylococci, but only a small
portion of these to accident-related infections.

Table 1 gives an overview of possible occupation-related infectious diseases that affect the
cardiovascular system.

Table 1. Overview of possible occupation-related infectious diseases that affect the
cardiovascular system

Disease; effect on heart; occurrence/frequency of effects on the heart in case of disease;
occupational risk groups

AIDS/HIV. Effects on the heart: myocarditis, endocarditis, pericarditis. Occurrence/frequency: 42% (Blanc et al. 1990), mostly through opportunistic infections, but also by the HIV virus itself as lymphocytic myocarditis (Beschorner et al. 1990). Occupational risk groups: personnel in health and welfare services.

Aspergillosis. Effects on the heart: endocarditis. Occurrence/frequency: rare; among those with suppressed immune systems. Occupational risk groups: farmers.

Brucellosis. Effects on the heart: endocarditis, myocarditis. Occurrence/frequency: rare (Groß, Jahn and Schölmerich 1970; Schulz and Stobbe 1981). Occupational risk groups: workers in meatpacking and animal husbandry, farmers, veterinarians.

Chagas’ disease. Effects on the heart: myocarditis. Occurrence/frequency: varying data: 20% in Argentina (Acha and Szyfres 1980); 69% in Chile (Arribada et al. 1990); 67% (Higuchi et al. 1990); chronic Chagas’ disease always with myocarditis (Gross, Jahn and Schölmerich 1970). Occupational risk groups: business travelers to Central and South America.

Coxsackie virus. Effects on the heart: myocarditis, pericarditis. Occurrence/frequency: 5% to 15% with Coxsackie-B virus (Reindell and Roskamm 1977). Occupational risk groups: personnel in health and welfare services, sewer workers.

Cytomegaly. Effects on the heart: myocarditis, pericarditis. Occurrence/frequency: extremely rare, especially among those with suppressed immune systems. Occupational risk groups: personnel who work with children (especially small children), in dialysis and transplant departments.

Diphtheria. Effects on the heart: myocarditis, endocarditis. Occurrence/frequency: 10 to 20% with localized diphtheria, more common with progressive diphtheria (Gross, Jahn and Schölmerich 1970), especially with toxic development. Occupational risk groups: personnel who work with children and in health services.

Echinococcosis. Effects on the heart: myocarditis. Occurrence/frequency: rare (Riecker 1988). Occupational risk groups: forestry workers.

Epstein-Barr virus infections. Effects on the heart: myocarditis, pericarditis. Occurrence/frequency: rare; especially among those with defective immune systems. Occupational risk groups: health and welfare personnel.

Erysipeloid. Effects on the heart: endocarditis. Occurrence/frequency: varying data, from rare (Gross, Jahn and Schölmerich 1970; Riecker 1988) to 30% (Azofra et al. 1991). Occupational risk groups: workers in meatpacking and fish processing, fishers, veterinarians.

Filariasis. Effects on the heart: myocarditis. Occurrence/frequency: rare (Riecker 1988). Occupational risk groups: business travelers in endemic areas.

Typhus and other rickettsioses (excluding Q fever). Effects on the heart: myocarditis, vasculitis of small vessels. Occurrence/frequency: data vary; damage through the direct pathogen, toxic effects or reduced resistance during fever resolution. Occupational risk groups: business travelers in endemic areas.

Early summer meningoencephalitis. Effects on the heart: myocarditis. Occurrence/frequency: rare (Sundermann 1987). Occupational risk groups: forestry workers, gardeners.

Yellow fever. Effects on the heart: toxic damage to the vessels (Gross, Jahn and Schölmerich 1970), myocarditis. Occurrence/frequency: rare; in serious cases. Occupational risk groups: business travelers in endemic areas.

Haemorrhagic fevers (Ebola, Marburg, Lassa, Dengue, etc.). Effects on the heart: myocarditis, endocardial bleeding and cardiovascular failure through generalized haemorrhage. Occurrence/frequency: no information available. Occupational risk groups: health service employees in affected areas and in special laboratories, workers in animal husbandry.

Influenza. Effects on the heart: myocarditis, haemorrhages. Occurrence/frequency: data varying from rare to frequent (Schulz and Stobbe 1981). Occupational risk groups: health service employees.

Hepatitis. Effects on the heart: myocarditis (Gross, Willens and Zeldis 1981; Schulz and Stobbe 1981). Occurrence/frequency: rare (Schulz and Stobbe 1981). Occupational risk groups: health and welfare employees, sewage and waste-water workers.

Legionellosis. Effects on the heart: pericarditis, myocarditis, endocarditis. Occurrence/frequency: if it occurs, probably rare (Gross, Willens and Zeldis 1981). Occupational risk groups: maintenance personnel for air-conditioning systems, humidifiers and whirlpools, nursing staff.

Leishmaniasis. Effects on the heart: myocarditis. Occurrence/frequency: with visceral leishmaniasis (Reindell and Roskamm 1977). Occupational risk groups: business travelers to endemic areas.

Leptospirosis (icteric form). Effects on the heart: myocarditis. Occurrence/frequency: toxic or direct pathogen infection (Schulz and Stobbe 1981). Occupational risk groups: sewage and waste-water workers, slaughterhouse workers.

Listeriosis. Effects on the heart: endocarditis. Occurrence/frequency: very rare (cutaneous listeriosis predominates as an occupational disease). Occupational risk groups: farmers, veterinarians, meat-processing workers.

Lyme disease. Effects on the heart: in stage 2, myocarditis or pancarditis; in stage 3, chronic carditis. Occurrence/frequency: 8% (Mrowietz 1991) or 13% (Shadick et al. 1994). Occupational risk groups: forestry workers.

Malaria. Effects on the heart: myocarditis. Occurrence/frequency: relatively frequent with malaria tropica (Sundermann 1987); direct infection of the capillaries. Occupational risk groups: business travelers in endemic areas.

Measles. Effects on the heart: myocarditis, pericarditis. Occurrence/frequency: rare. Occupational risk groups: personnel in the health service and those who work with children.

Foot-and-mouth disease. Effects on the heart: myocarditis. Occurrence/frequency: very rare. Occupational risk groups: farmers and animal husbandry workers (especially with cloven-hoofed animals).

Mumps. Effects on the heart: myocarditis. Occurrence/frequency: rare, under 0.2 to 0.4% (Hofmann 1993). Occupational risk groups: personnel in the health service and those who work with children.

Mycoplasma pneumonia infections. Effects on the heart: myocarditis, pericarditis. Occurrence/frequency: rare. Occupational risk groups: health service and welfare employees.

Ornithosis/psittacosis. Effects on the heart: myocarditis, endocarditis. Occurrence/frequency: rare (Kaufmann and Potter 1986; Schulz and Stobbe 1981). Occupational risk groups: ornamental bird and poultry raisers, pet shop workers, veterinarians.

Paratyphoid fever. Effects on the heart: interstitial myocarditis. Occurrence/frequency: especially among the older and very sick, as toxic damage. Occupational risk groups: development aid workers in the tropics and subtropics.

Poliomyelitis. Effects on the heart: myocarditis. Occurrence/frequency: common in serious cases, in the first and second weeks. Occupational risk groups: health service employees.

Q fever. Effects on the heart: myocarditis, endocarditis, pericarditis. Occurrence/frequency: possible up to 20 years after the acute disease (Behymer and Riemann 1989); data range from rare (Schulz and Stobbe 1981; Sundermann 1987) to 7.2% (Conolly et al. 1990); more frequent (68%) in chronic Q fever with a weak immune system or pre-existing heart disease (Brouqui et al. 1993). Occupational risk groups: animal husbandry workers, veterinarians, farmers, possibly also slaughterhouse and dairy workers.

Rubella. Effects on the heart: myocarditis, pericarditis. Occurrence/frequency: rare. Occupational risk groups: health service and child care employees.

Relapsing fever. Effects on the heart: myocarditis. Occurrence/frequency: no information available. Occupational risk groups: business travelers and health service workers in the tropics and subtropics.

Scarlet fever and other streptococcal infections. Effects on the heart: myocarditis, endocarditis. Occurrence/frequency: rheumatic fever as a complication in 1 to 2.5% (Dökert 1981), of these 30 to 80% with carditis (Sundermann 1987); 43 to 91% (al-Eissa 1991). Occupational risk groups: personnel in the health service and those who work with children.

Sleeping sickness. Effects on the heart: myocarditis. Occurrence/frequency: rare. Occupational risk groups: business travelers to Africa between 20° southern and 20° northern latitude.

Toxoplasmosis. Effects on the heart: myocarditis. Occurrence/frequency: rare, especially among those with weak immune systems. Occupational risk groups: people with occupational contact with animals.

Tuberculosis. Effects on the heart: myocarditis, pericarditis. Occurrence/frequency: myocarditis especially in conjunction with miliary tuberculosis; pericarditis in up to 25% where tuberculosis prevalence is high, otherwise 7% (Sundermann 1987). Occupational risk groups: health service employees.

Typhoid fever (typhus abdominalis). Effects on the heart: myocarditis. Occurrence/frequency: toxic; 8% (Bavdekar et al. 1991). Occupational risk groups: development aid workers, personnel in microbiological laboratories (especially stool laboratories).

Chicken pox, herpes zoster. Effects on the heart: myocarditis. Occurrence/frequency: rare. Occupational risk groups: employees in the health service and those who work with children.

Introduction

Written by ILO Content Manager

Cardiovascular diseases (CVDs) are among the most common causes of illness and death in
the working population, particularly in industrialized countries. They are also increasing in
developing countries (Wielgosz 1993). In the industrialized countries, 15 to 20% of all
working people will suffer from a cardiovascular disorder at some time during their working
lives, and the frequency climbs sharply with age. Among those between 45 and 64 years of age,
more than a third of deaths among men and more than a quarter of deaths among women
are caused by this group of diseases (see Table 1). In recent years, CVDs have become the
most frequent cause of death among post-menopausal women.

Table 1. Mortality from cardiovascular diseases in 1991 and 1990 in the age groups 45-54 and
55-64 for selected countries.

Country Men Women


45-54 Years 55-64 Years 45-54 Years 55-64 Years
Rate % Rate % Rate % Rate %
Russia** 528 36 1,290 44 162 33 559 49
Poland** 480 38 1,193 45 134 31 430 42
Argentina* 317 40 847 44 131 33 339 39
Britain** 198 42 665 47 59 20 267 32
USA* 212 35 623 40 83 24 273 31
Germany** 181 29 597 38 55 18 213 30
Italy* 123 27 404 30 41 18 148 25
Mexico** 128 17 346 23 82 19 230 24
France** 102 17 311 22 30 12 94 18
Japan** 111 27 281 26 48 22 119 26

*1990. **1991. Rate=Deaths per 100,000 inhabitants. % is from all causes of death in the age
group.

Because of their complex aetiology, only a very small proportion of cases of
cardiovascular disease are recognized as occupational. Many countries, however, recognize
that occupational exposures contribute to CVDs (sometimes referred to as work-related
diseases). Working conditions and job demands play an important role in the multifactorial
process that leads to these diseases, but ascertaining the role of the individual causal
components is very difficult. The components interact in close, shifting relationships, and
often the disease is triggered by a combination or accumulation of different causal factors,
including those that are work related.

The reader is referred to the standard cardiology texts for details of the epidemiology,
pathophysiology, diagnosis and treatment of cardiovascular diseases. This chapter will focus
on those aspects of cardiovascular disease that are particularly relevant in the workplace and
are likely to be influenced by factors in the job and work environment.

Cardiovascular Morbidity and Mortality in the Workforce

Written by ILO Content Manager

In the following article, the term cardiovascular diseases (CVDs) refers to organic and
functional disorders of the heart and circulatory system, including the resultant damage to
other organ systems, which are classified under numbers 390 to 459 in the 9th revision of the
International Classification of Diseases (ICD) (World Health Organization (WHO) 1975).
Based essentially on international statistics assembled by the WHO and data collected in
Germany, the article discusses the prevalence of CVDs, new disease rates, and the frequency of
deaths, morbidity and disability.

Definition and Prevalence in the Working-Age Population

Coronary artery disease (ICD 410-414) resulting in ischaemia of the myocardium is probably
the most significant CVD in the working population, particularly in industrialized countries.
This condition results from a constriction in the vascular system that supplies the heart
muscle, a problem caused primarily by arteriosclerosis. It affects 0.9 to 1.5% of working-age
men and 0.5 to 1.0% of women.

Inflammatory diseases (ICD 420-423) may involve the endocardium, the heart valves, the
pericardium and/or the heart muscle (myocardium) itself. They are less common in
industrialized countries, where their frequency is well below 0.01% of the adult population,
but are seen more frequently in developing countries, perhaps reflecting the greater
prevalence of nutritional disorders and infectious diseases.

Heart rhythm disorders (ICD 427) are relatively rare, although much media attention has been
given to recent instances of disability and sudden death among prominent professional
athletes. Although they can have a significant impact on the ability to work, they are often
asymptomatic and transitory.

The myocardiopathies (ICD 425) are conditions which involve enlargement or thickening of
the heart musculature, effectively narrowing the ventricular cavities and weakening the heart.
They have attracted more attention in recent years, largely because of improved methods of
diagnosis, although their pathogenesis is often obscure. They have been attributed to
infections, metabolic diseases, immunologic disorders, inflammatory diseases involving the
capillaries and, of particular importance in this volume, to toxic exposures in the workplace.
They are divided into three types:

 dilative—the most common form (5 to 15 cases per 100,000 people), which is associated
with the functional weakening of the heart
 hypertrophic—thickening and enlargement of the myocardium resulting in relative
insufficiency of the coronary arteries
 restrictive—a rare type in which myocardial contractions are limited.

Hypertension (ICD 401-405) (increased systolic and/or diastolic blood pressure) is the most
common circulatory disease, being found among 15 to 20% of working people in
industrialized countries. It is discussed in greater detail below.

Atherosclerotic changes in the major blood vessels (ICD 440), often associated with
hypertension, cause disease in the organs they serve. Foremost among these is
cerebrovascular disease (ICD 430-438), which may result in a stroke due to infarction and/or
haemorrhage. This occurs in 0.3 to 1.0% of working people, most commonly among those
aged 40 and older.

Atherosclerotic diseases, including coronary artery disease, stroke and hypertension, by far
the most common cardiovascular diseases in the working population, are multifactorial in
origin and have their onset early in life. They are of importance in the workplace because:

 so large a proportion of the workforce has an asymptomatic or unrecognized form of


cardiovascular disease
 the development of that disease may be aggravated or acute symptomatic events
precipitated by working conditions and job demands
 the acute onset of a symptomatic phase of the cardiovascular disease is often attributed to
the job and/or the workplace environment
 most individuals with an established cardiovascular disease are capable of working
productively, albeit, sometimes, only after effective rehabilitation and job retraining
 the workplace is a uniquely propitious arena for primary and secondary preventive
programmes.

Functional circulatory disorders in the extremities (ICD 443) include Raynaud’s disease
(short-term pallor of the fingers) and are relatively rare. Some occupational conditions, such
as frostbite, long-term exposure to vinyl chloride and hand-arm exposure to vibration, can
induce these disorders.

Varicosities in the leg veins (ICD 454), often improperly dismissed as a cosmetic problem, are
frequent among women, especially during pregnancy. While a hereditary tendency to
weakness of the vein walls may be a factor, they are usually associated with long periods of
standing in one position without movement, during which the static pressure within the veins
is increased. The resultant discomfort and leg oedema often dictate change or modification
of the job.

Annual incidence rates

Among the CVDs, hypertension has the highest annual new case rate among working people
aged 35 to 64. New cases develop in approximately 1% of that population every year. Next in
frequency are coronary heart disease (8 to 92 new cases of acute heart attack per 10,000 men
per year, and 3 to 16 new cases per 10,000 women per year) and stroke (12 to 30 cases per
10,000 men per year, and 6 to 30 cases per 10,000 women per year). As demonstrated by
global data collected by the WHO-Monica project (WHO-MONICA 1994; WHO-MONICA
1988), the lowest new incidence rates for heart attack were found among men in China and
women in Spain, while the highest rates were found among both men and women in Scotland.
The significance of these data is that in the population of working age, 40 to 60% of heart
attack victims and 30 to 40% of stroke victims do not survive their initial episodes.

Mortality

Within the primary working ages of 15 to 64, only 8 to 18% of deaths from CVDs occur prior
to age 45. Most occur after age 45, with the annual rate increasing with age. The rates, which
have been changing, vary considerably from country to country (WHO 1994b).

Table 1 shows the death rates for men and for women aged 45 to 54 and 55 to 64 for some
countries. Note that the death rates for men are consistently higher than those for women of
corresponding ages. Table 2 compares the death rates for various CVDs among people aged
55 to 64 in five countries.

Table 1. Mortality from cardiovascular diseases in 1991 and 1990 in the age groups 45-54 and
55-64 for selected countries.

Country Men Women


45-54 Years 55-64 Years 45-54 Years 55-64 Years

Rate % Rate % Rate % Rate %

Russia** 528 36 1,290 44 162 33 559 49


Poland** 480 38 1,193 45 134 31 430 42
Argentina* 317 40 847 44 131 33 339 39
Britain** 198 42 665 47 59 20 267 32
USA* 212 35 623 40 83 24 273 31
Germany** 181 29 597 38 55 18 213 30
Italy* 123 27 404 30 41 18 148 25
Mexico** 128 17 346 23 82 19 230 24
France** 102 17 311 22 30 12 94 18
Japan** 111 27 281 26 48 22 119 26

*1990. **1991. Rate=Deaths per 100,000 inhabitants. % is from all causes of death in the age
group.

Table 2. Mortality rates from special cardiovascular diagnosis groups in the years 1991 and
1990 in the age group 55-64 for selected countries

Diagnosis group   Russia (1991)    USA (1990)       Germany (1991)    France (1991)    Japan (1991)
(ICD 9th Rev.)      M        F       M        F        M        F        M        F       M        F
393–398           16.8     21.9     3.3      4.6      3.6      4.4      2.2      2.3     1.2      1.9
401–405           22.2     18.5    23.0     14.6     16.9      9.7      9.4      4.4     4.0      1.6
410              160.2     48.9   216.4     79.9    245.2     61.3    100.7     20.5    45.9     13.7
411–414          586.3    189.9   159.0     59.5     99.2     31.8     35.8      6.8    15.2      4.2
415–429           60.9     24.0   140.4     64.7    112.8     49.2     73.2     27.0    98.7     40.9
430–438          385.0    228.5    54.4     42.2     84.1     43.8     59.1     26.7   107.3     53.6
440               50.0*    19.2*    4.4      2.1     11.8      3.8      1.5      0.3     0.3      0.1
441–448              *        *    18.4      6.7     15.5      4.2     23.4      3.8     3.8      2.6
390–459 (total)  1,290      559     623      273      597      213      311       94     281      119

Deaths per 100,000 inhabitants; M = male; F = female.
*For Russia, the figures for diagnosis groups 440 and 441–448 are reported only as a combined
total (50.0 for men, 19.2 for women).

Work Disability and Early Retirement

Diagnosis-related statistics on time lost from work represent an important perspective on the
impact of morbidity on the working population, even though the diagnostic designations are
usually less precise than in cases of early retirement because of disability. The case rates,
usually expressed in cases per 10,000 employees, provide an index of the frequency of the
disease categories, while the average number of days lost per case indicates the relative
seriousness of particular diseases. Thus, according to statistics on 10 million workers in
western Germany compiled by the Allgemeine Ortskrankenkasse (AOK), CVDs accounted for 7.7%
of the total days of work disability in 1991-92, although the number of cases for that period
was only 4.6% of the total (Table 3). In some countries, where early retirement is provided
when work ability is reduced due to illness, the pattern of disability mirrors the rates for
the different categories of CVD.
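The two indices just described, together with a diagnosis group's share of all disability days,
follow directly from the raw counts; the figures in the sketch below are invented for
illustration and are not the AOK data underlying Table 3.

```python
def disability_indices(cases, days_lost, employees, total_days_all_causes):
    """Frequency and severity indices for one diagnosis group."""
    case_rate = cases / employees * 10_000                     # cases per 10,000 employees
    days_per_case = days_lost / cases                          # average duration per case
    share_of_days = days_lost / total_days_all_causes * 100    # % of all disability days
    return case_rate, days_per_case, share_of_days

# Hypothetical example: a diagnosis group with relatively few but long cases
rate, duration, share = disability_indices(
    cases=46_000, days_lost=1_780_000,
    employees=10_000_000, total_days_all_causes=23_000_000)
print(f"{rate:.0f} cases per 10,000 employees, {duration:.1f} days per case, "
      f"{share:.1f}% of all disability days")
```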

Table 3. Rate of cardiovascular disease among early pensioners* due to reduced ability to
work (N = 576,079) and diagnosis-related work disability in the western part of Germany,
1990-92

Diagnosis group   Main cause of illness                            Early retirement:      Average annual work disability, 1990–92
(ICD 9th Rev.)                                                     number per 100,000     Cases per 100,000      Duration (days)
                                                                   early retirees         employed               per case
                                                                   Men       Women        Men       Women        Men      Women
390–392           Acute rheumatic fever                               16        24           49        60        28.1     32.8
393–398           Chronic rheumatic heart disease                    604       605           24        20        67.5     64.5
401–405           Hypertension, high blood pressure diseases       4,158     4,709          982     1,166        24.5     21.6
410–414           Ischaemic heart diseases                          9,635     2,981        1,176       529        51.2     35.4
410, 412          Acute and existing myocardial infarction          2,293       621          276        73        85.8     68.4
414               Coronary heart disease                            6,932     2,183          337       135        50.8     37.4
415–417           Pulmonary circulatory diseases                      248       124           23        26        58.5     44.8
420–429           Other non-rheumatic heart diseases                3,434     1,947          645       544        36.3     25.7
420–423           Inflammatory heart diseases                         141       118           20        12        49.4     48.5
424               Heart valve disorders                               108       119           22        18        45.6     38.5
425               Myocardiopathy                                    1,257       402           38        14        66.8     49.2
426               Impulse formation and conduction disorders           86        55           12         7        39.6     45.0
427               Cardiac rhythm disorders                            734       470          291       274        29.3     21.8
428               Cardiac insufficiency                               981       722           82        61        62.4     42.5
430–438           Cerebrovascular diseases                          4,415     2,592          172       120        75.6     58.9
440–448           Diseases of the arteries, arterioles and          3,785     1,540          238        90        59.9     44.5
                  capillaries
440               Arteriosclerosis                                  2,453     1,090           27        10        71.7     47.6
443               Raynaud’s disease and other vascular diseases       107        53           63        25        50.6     33.5
444               Arterial embolism and thrombosis                    219        72          113        34        63.3     49.5
451–456           Diseases of the veins                               464       679        1,020     1,427        22.9     20.3
457               Noninfectious diseases of the lymph nodes            16       122          142       132        10.4     14.2
458               Hypotension                                          29        62          616     1,501         9.4      9.5
459               Other circulatory diseases                           37        41        1,056     2,094        11.5     10.2
390–459           Total cardiovascular diseases                    26,843    15,426        6,143     7,761        29.6     18.9

*Early pensioners: statutory pension insurance of the former Federal Republic of Germany;
work disability: AOK-West.

The Risk Factor Concept in Cardiovascular Disease

Written by ILO Content Manager

Risk factors are genetic, physiological, behavioural and socioeconomic characteristics of
individuals that place them in a cohort of the population that is more likely to develop a
particular health problem or disease than the rest of the population. Usually applied to
multifactorial diseases for which there is no single precise cause, they have been particularly
useful in identifying candidates for primary preventive measures and in assessing the
effectiveness of prevention programmes in controlling the risk factors being targeted. They
owe their development to large-scale prospective population studies, such as the Framingham
study of coronary artery disease and stroke conducted in Framingham, Massachusetts, in the
United States, to other epidemiological studies, to intervention studies and to experimental
research.

It should be emphasized that risk factors are merely expressions of probability—that is, they
are not absolute nor are they diagnostic. Having one or more risk factors for a particular
disease does not necessarily mean that an individual will develop the disease, nor does it
mean that an individual without any risk factors will escape the disease. Risk factors are
individual characteristics which affect that person’s chances of developing a particular disease
or group of diseases within a defined future time period. Categories of risk factors include:

 somatic factors, such as high blood pressure, lipid metabolism disorders, overweight and
diabetes mellitus
 behavioural factors, such as smoking, poor nutrition, lack of physical movement, type-A
personality, high alcohol consumption and drug abuse
 strains, including exposures in the occupational, social and private spheres.

Naturally, genetic and dispositional factors also play a role in high blood pressure, diabetes
mellitus and lipid metabolism disorders. Many of the risk factors promote the development of
arteriosclerosis, which is a significant precondition for the onset of coronary heart disease.

Some risk factors may put the individual at risk for the development of more than one disease;
for example, cigarette smoking is associated with coronary artery disease, stroke and lung
cancer. At the same time, an individual may have multiple risk factors for a particular disease;
these may be additive but, more often, the combinations of risk factors may be multiplicative.
Somatic and lifestyle factors have been identified as the main risk factors for coronary heart
disease and stroke.
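As a purely illustrative example of the difference between the two models (the relative risks
below are assumed, not taken from a particular study): under a multiplicative model, two
factors that each double the risk roughly quadruple it, whereas an additive model would only
triple it:

$$ RR_{\text{combined}} \approx RR_1 \times RR_2 = 2 \times 2 = 4, \qquad RR_{\text{additive}} \approx 1 + (RR_1 - 1) + (RR_2 - 1) = 3 $$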

Hypertension

Hypertension (increased blood pressure), a disease in its own right, is one of the major risk
factors for coronary heart disease (CHD) and stroke. As defined by the WHO, blood pressure
is normal when the diastolic is below 90 mm Hg and the systolic is below 140 mm Hg. In
threshold or borderline hypertension, the diastolic ranges from 90 to 94 mm Hg and the
systolic from 140 to 159 mm Hg. Individuals with diastolic pressures equal to or greater than
95 mm Hg and systolic pressures equal to or greater than 160 mm Hg are designated as being
hypertensive. Studies have shown, however, that such sharp criteria are not entirely correct.
Some individuals have a “labile” blood pressure—the pressure fluctuates between normal and
hypertensive levels depending on the circumstances of the moment. Further, without regard to
the specific categories, there is a linear progression of relative risk as the pressure rises above
the normal level.
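The WHO cut-offs quoted above translate directly into a simple classification rule. The sketch
below encodes just those thresholds, letting the higher of the two component categories decide
the result (a common convention, but an assumption here); it is an illustration, not a clinical
decision tool.

```python
def who_bp_category(systolic_mmhg, diastolic_mmhg):
    """Classify one blood pressure reading using the WHO cut-offs cited in the text."""
    def component(value, borderline_from, hypertensive_from):
        if value >= hypertensive_from:
            return 2          # hypertensive
        if value >= borderline_from:
            return 1          # threshold / borderline
        return 0              # normal

    worst = max(component(systolic_mmhg, 140, 160),
                component(diastolic_mmhg, 90, 95))
    return ("normal", "borderline", "hypertensive")[worst]

print(who_bp_category(135, 85))   # -> normal
print(who_bp_category(150, 92))   # -> borderline
print(who_bp_category(165, 88))   # -> hypertensive
```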

In the United States, for example, the incidence rate of CHD and stroke among men aged 55
to 61 was 1.61% per year for those whose blood pressure was normal compared to 4.6% per
year for those with hypertension (National Heart, Lung and Blood Institute 1981).

Diastolic pressures over 94 mm Hg were found in 2 to 36% of the population aged 35 to 64
years, according to the WHO-MONICA study. In many countries of Central, Northern and
Eastern Europe (e.g., Russia, the Czech Republic, Finland, Scotland, Romania, France and
parts of Germany, as well as Malta), hypertension was found in over 30% of the population
aged 35 to 54, while in countries including Spain, Denmark, Belgium, Luxembourg, Canada
and the United States the corresponding figure was less than 20% (WHO-MONICA 1988).
The rates tend to increase with age, and there are racial differences. (In the United States, at
least, hypertension is more frequent among African-Americans than in the White population.)

Risks for developing hypertension

The important risk factors for developing hypertension are excess body weight, high salt
intake, a series of other nutritional factors, high alcohol consumption, physical inactivity, and
psychosocial factors, including stress (Levi 1983). Furthermore, there is a certain genetic
component whose relative significance is not yet fully understood (WHO 1985). Frequent
familial high blood pressure should be considered a danger and special attention paid to
controlling lifestyle factors.

There is evidence that psychosocial and psychophysical factors connected with the job can
influence the development of hypertension, especially short-term blood pressure increases.
Increases have been found in the concentration of certain hormones (adrenalin and
noradrenalin) as well as of cortisol (Levi 1972), which, alone and in combination with high
salt consumption, can lead to increased blood pressure. Work stress also appears to be related
to hypertension. A dose-effect relationship between the intensity of air traffic and blood
pressure was shown in comparisons of groups of air traffic controllers subject to differing
levels of psychological strain (Levi 1972; WHO 1985).

Treatment of hypertension

Hypertension can and should be treated, even in the absence of any symptoms. Lifestyle
changes such as weight control, reduction of sodium intake and regular physical exercise,
coupled when necessary with anti-hypertensive medication, regularly produce reductions in
blood pressure, often to normal levels. Unfortunately, many individuals found to be
hypertensive are not receiving adequate treatment. According to the WHO-MONICA study
(1988), less than 20% of hypertensive women in Russia, Malta, eastern Germany, Scotland,
Finland and Italy were receiving adequate treatment during the mid-1980s, while the
comparable figure for men in Ireland, Germany, China, Russia, Malta, Finland, Poland,
France and Italy was under 15%.

Prevention of hypertension

The essence of preventing hypertension is identifying individuals with blood pressure
elevation through periodic screening or medical examination programmes, repeated checks to
verify the extent and duration of the elevation, and the institution of an appropriate treatment
regimen that will be maintained indefinitely. Those with a family history of hypertension
should have their pressures checked more frequently and should be guided towards elimination
or control of any risk factors they may present. Control of alcohol abuse, physical training
and physical fitness, maintenance of normal weight and efforts to reduce psychological stress
are all important elements of prevention programmes. Improvements in workplace conditions,
such as reducing noise and excess heat, are further preventive measures.

The workplace is a uniquely advantageous arena for programmes aimed at the detection,
monitoring and control of hypertension in the workforce. Convenience and low or no cost
make them attractive to the participants and the positive effects of peer pressure from co-
workers tend to enhance their compliance and the success of the programme.

Hyperlipidemia

Many long-term international studies have demonstrated a convincing relationship between
abnormalities in lipid metabolism and an increased risk of CHD and stroke. This is
particularly true for elevated total cholesterol and LDL (low-density lipoproteins) and/or low
levels of HDL (high-density lipoproteins). Recent research provides further evidence linking
the excess risk with different lipoprotein fractions (WHO 1994a).

The frequency of elevated total cholesterol levels (>6.5 mmol/l) was shown to vary
considerably among population groups by the worldwide WHO-MONICA studies in the
mid-1980s (WHO-MONICA 1988). The rate of hypercholesterolemia for populations of
working age (35 to 64 years of age) ranged from 1.3 to 46.5% for men and 1.7 to 48.7% for
women. Although the ranges were generally similar, the mean cholesterol levels for the study
groups in different countries varied significantly: in Finland, Scotland, East Germany, the
Benelux countries and Malta, a mean of over 6 mmol/l was found, while the means were
lower in East Asian countries such as China (4.1 mmol/l) and Japan (5.0 mmol/l). In both
regions, the means were below 6.5 mmol/l (250 mg/dl), the level designated as the threshold
of normal; however, as noted above for blood pressure, there is a progressive increase of risk
as the level rises, rather than a sharp demarcation between normal and abnormal. Indeed,
some authorities have set a total cholesterol level of 180 mg/dl as the optimal level that
should not be exceeded.
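The two units used above are linked by the molar mass of cholesterol (about 386.7 g/mol), so
values in mmol/l convert to mg/dl by a factor of roughly 38.7, which is how 6.5 mmol/l
corresponds to about 250 mg/dl:

$$ 6.5\ \mathrm{mmol/l} \times 38.67\ \tfrac{\mathrm{mg/dl}}{\mathrm{mmol/l}} \approx 251\ \mathrm{mg/dl} \approx 250\ \mathrm{mg/dl} $$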

It should be noted that gender is a factor, with women averaging lower levels of HDL. This
may be one reason why women of working age have a lower mortality rate from CHD.

Except for the relatively few individuals with hereditary hypercholesterolemia, cholesterol
levels generally reflect the dietary intake of foods rich in cholesterol and saturated fats.
Diets based on fruit, plant products and fish, with reduced total fat intake and substitution
of polyunsaturated fats, are generally associated with low cholesterol levels. Although their
role is not yet entirely clear, intake of anti-oxidants (vitamin E, carotene, selenium and so
on) is also thought to influence cholesterol levels.

Factors associated with higher levels of HDL cholesterol, the “protective” form of lipoprotein,
include race (Black), gender (female), normal weight, physical exercise and moderate alcohol
intake.

Socio-economic level also appears to play a role, at least in industrialized countries, as in
West Germany, where higher cholesterol levels were found in population groups of both men
and women with lower education levels (under ten years of schooling) compared to those
completing 12 years of education (Heinemann 1993).

Cigarette Smoking

Cigarette smoking is among the most important risk factors for CVD. The risk from cigarette
smoking is directly related to the number of cigarettes one smokes, the length of time one has
been smoking, the age at which one began to smoke, the amount one inhales and the tar,
nicotine and carbon monoxide content of the inspired smoke. Figure 1 illustrates the striking
increase in CHD mortality among cigarette smokers compared to non-smokers. This increased
risk is demonstrated among both men and women and in all socio-economic classes.

The relative risk of cigarette smoking declines after tobacco use is discontinued. This is
progressive; after about ten years of non-smoking, the risk is down almost to the level of those
who never smoked.
Recent evidence has demonstrated that those inhaling “second-hand smoke” (i.e., passive
inhalation of smoke from cigarettes smoked by others) are also at significant risk (Wells
1994; Glantz and Parmley 1995).

Rates of cigarette smoking vary among countries, as demonstrated by the international WHO-
MONICA study (1988). The highest rates for men aged 35 to 64 were found in Russia,
Poland, Scotland, Hungary, Italy, Malta, Japan and China. More women smokers were found
in Scotland, Denmark, Ireland, the United States, Hungary and Poland (the recent Polish data
are limited to large cities).

Social status and occupational level are factors in the level of smoking among workers. Figure
1, for example, demonstrates that the proportions of smokers among men in East Germany
increased in the lower social classes. The reverse is found in countries with relatively low
numbers of smokers, where there is more smoking among those at higher social levels. In East
Germany, smoking is also more frequent among shift-workers when compared with those on
a “normal” work schedule.

Figure 1. Relative mortality risk from cardiovascular diseases for smokers (including
ex-smokers) and social classes compared to non-smoking, normal-weight, skilled workers (male),
based on occupational medical care examinations in East Germany, mortality 1985-89,
N = 2.7 million person-years.

Unbalanced Nutrition, Salt Consumption

In most industrialized countries traditional low-fat nutrition has been replaced by high-calorie,
high-fat, low carbohydrate, too sweet or too salty eating habits. This contributes to the
development of overweight, high blood pressure, and high cholesterol level as elements of
high cardiovascular risk. The heavy consumption of animal fats, with their high proportion of
saturated fatty acids, leads to an increase in LDL cholesterol and increased risk. Fats derived
from vegetables are much lower in these substances (WHO 1994a). Eating habits are
also strongly associated with both socio-economic level and occupation.

Overweight

Overweight (excess fat or obesity rather than increased muscle mass) is a cardiovascular risk
factor of lesser direct significance. There is evidence that the male pattern of excess fat
distribution (abdominal obesity) is associated with a greater risk of cardiovascular and
metabolic problems than the female (pelvic) type of fat distribution.

Overweight is associated with hypertension, hypercholesterolemia and diabetes mellitus and,
to a much greater extent in women than men, tends to increase with age (Heuchert and
Enderlein 1994) (Figure 2). It is also a risk factor for musculoskeletal problems and
osteoarthritis, and makes physical exercise more difficult. The frequency of significant
overweight varies considerably among countries. Random population surveys conducted by
the WHO-MONICA project found it in more than 20% of females aged 35 to 64 in the Czech
Republic, East Germany, Finland, France, Hungary, Poland, Russia, Spain and Yugoslavia,
and in both sexes in Lithuania, Malta and Romania. In China, Japan, New Zealand and
Sweden, fewer than 10% of both men and women in this age group were significantly
overweight.

Common causes of overweight include familial factors (these may in part be genetic but more
often reflect common dietary habits), overeating, high-fat and high-carbohydrate diets and
lack of physical exercise. Overweight tends to be more common among the lower socio-
economic strata, particularly among women, where, among other factors, financial constraints
limit the availability of a more balanced diet. Population studies in Germany demonstrated
that the proportion of significant overweight among those with lower education levels is 3 to 5
times greater than that among people with more education, and that some occupations,
notably food preparation, agriculture and to some extent shift work, have a high percentage of
overweight people (Figure 3) (Heinemann 1993).

Figure 2. Prevalence of hypertension by age, sex and six levels of relative body weight
according to the body-mass index (BMI) in occupational medical care examinations in East
Germany (normal BMI values are underlined).

Figure 3. Relative risk from overweight by length of education (years of schooling) in Germany
(population 25-64 years).

Physical Inactivity

The close association of hypertension, overweight and diabetes mellitus with lack of exercise
at work and/or off the job has made physical inactivity a significant risk factor for CHD and
stroke (Briazgounov 1988; WHO 1994a). A number of studies have demonstrated that,
holding all other risk factors constant, there was a lower mortality rate among persons
engaging regularly in high-intensity exercises than among those with a sedentary lifestyle.

The amount of exercise is readily measured by noting its duration and either the amount of
physical work accomplished or the extent of the exercise-induced increase in heart rate and
the time required for that rate to return to its resting level. The latter is also useful as an
indicator of the level of cardiovascular fitness: with regular physical training, there will be
less of an increase in heart rate and a more rapid return to the resting rate for a given intensity
of exercise.
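
As a rough illustration of the measurement just described, the following sketch summarizes the exercise-induced rise in heart rate and the time needed for the rate to return towards its resting level. The function name, the one-minute sampling interval, the 5 beats/min recovery threshold and the sample readings are all assumptions chosen for illustration; they do not represent a standardized test protocol.

def heart_rate_response(resting_rate: float, readings_after_exercise: list) -> dict:
    """Summarize the rise in heart rate and the recovery time after exercise.

    readings_after_exercise: heart rate sampled once per minute after stopping,
    beginning with the peak value reached during the exercise bout.
    """
    peak = readings_after_exercise[0]
    rise = peak - resting_rate
    recovery_minutes = None
    for minute, rate in enumerate(readings_after_exercise):
        # Recovery: first minute at which the rate is within 5 beats/min of the resting value.
        if rate <= resting_rate + 5:
            recovery_minutes = minute
            break
    return {"rise_bpm": rise, "recovery_minutes": recovery_minutes}

# A trained person typically shows a smaller rise and a quicker return to the resting
# rate than an untrained person performing the same work.
print(heart_rate_response(60, [150, 120, 95, 72, 63]))
print(heart_rate_response(70, [170, 155, 140, 120, 100, 90, 80, 74]))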

Workplace physical fitness programmes have been shown to be effective in enhancing
cardiovascular fitness. Participants in these tend also to give up cigarette smoking and to pay
greater attention to proper diets, thus significantly reducing their risk of CHD and stroke.

Alcohol

High alcohol consumption, especially the drinking of high-proof spirits, has been associated
with a greater risk of hypertension, stroke and myocardiopathy, while moderate alcohol use,
particularly of wine, has been found to reduce the risk of CHD (WHO 1994a). This has been
associated with the lower CHD mortality among the upper social strata in industrialized
countries, who generally prefer wine to “hard” liquors. It should also be noted that while their
alcohol intake may be similar to that of wine drinkers, beer drinkers tend to accumulate
excess weight, which, as noted above, may increase their risk.

Socio-economic Factors

A strong correlation between socio-economic status and the risk of CVD has been
demonstrated by analyses of death-register mortality data in Britain, Scandinavia,
Western Europe, the United States and Japan. For example, in eastern Germany, the
cardiovascular death rate is considerably lower for the upper social classes than for the lower
classes (see Figure 1) (Marmot and Theorell 1991). In England and Wales, where general
mortality rates are declining, the relative gap between the upper and lower classes is
widening.

Socio-economic status is typically defined by such indicators as occupation, occupational
qualifications and position, level of education and, in some instances, income level. These are
readily translated into standard of living, nutritional patterns, free-time activities, family size
and access to medical care. As noted above, behavioural risk factors (such as smoking and
diet) and the somatic risk factors (such as overweight, hypertension and hyperlipidemia) vary
considerably among social classes and occupational groups (Mielck 1994; Helmert, Shea and
Maschewsky-Schneider 1995).

Occupational Psychosocial Factors and Stress

Occupational stress

Psychosocial factors at the workplace refer primarily to the combined effect of the working
environment, work content, work demands and technological-organizational conditions, as
well as to personal factors such as capability and psychological sensitivity, and ultimately also
to health indicators (Karasek and Theorell 1990; Siegrist 1995).

The role of acute stress on people who already suffer from cardiovascular disease is
uncontested. Stress leads to episodes of angina pectoris, rhythm disorders and heart failure; it
can also precipitate a stroke and/or a heart attack. In this context stress is generally understood
to mean acute physical stress. But evidence has been mounting that acute psychosocial stress
can also have these effects. Studies from the 1950s showed that people who work two jobs at
a time, or who work overtime for long periods, have a relatively higher risk of heart attack,
even at a young age. Other studies showed that in the same job, the person with the greater
work and time pressure and frequent problems on the job is at significantly greater risk
(Mielck 1994).

Over the last 15 years, research on job stress has suggested a causal relationship between work
stress and the incidence of cardiovascular disease. This holds for cardiovascular mortality as
well as for the frequency of coronary disease and hypertension (Schnall, Landsbergis and
Baker 1994). Karasek’s job strain model defined two factors that could lead to an increased
incidence of cardiovascular disease:

 extent of job demands
 extent of decision-making latitude.

Later Johnson added as a third factor the extent of social support (Kristensen 1995), which is
discussed more fully elsewhere in this Encyclopaedia. The chapter Psychosocial and
Organizational Factors includes discussions on individual factors, such as Type A
personality, as well as social support and other mechanisms for overcoming the effects of
stress.
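
The demand-control formulation can be illustrated with a small classification sketch. It is only a schematic rendering of the quadrant logic implied by the two factors listed above, not an operational scoring instrument; the variable names, the 1-to-5 scale and the mid-point split are assumptions for illustration, and the social-support dimension added by Johnson is omitted.

def job_strain_quadrant(demands: float, decision_latitude: float, midpoint: float = 3.0) -> str:
    """Classify a job into the four quadrants of the demand-control model.

    demands and decision_latitude are assumed scores on the same arbitrary
    1-to-5 scale, split at an assumed mid-point of 3.
    """
    high_demands = demands >= midpoint
    high_latitude = decision_latitude >= midpoint
    if high_demands and not high_latitude:
        return "high strain"   # the combination associated with increased cardiovascular risk
    if high_demands and high_latitude:
        return "active job"
    if not high_demands and high_latitude:
        return "low strain"
    return "passive job"

# Heavy demands with little decision-making latitude fall into the high-strain quadrant.
print(job_strain_quadrant(demands=4.5, decision_latitude=1.5))   # high strain
print(job_strain_quadrant(demands=4.5, decision_latitude=4.0))   # active job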

The effects of factors, whether individual or situational, that lead to increased risk of
cardiovascular disease can be reduced by “coping mechanisms”, that is, by recognizing the
problem and overcoming it by attempting to make the best of the situation.

Until now, measures aimed at the individual have predominated in the prevention of the
negative health effects of work stress. Increasingly, improvements in organizing the work and
expanding employee decision-making latitude have been used (e.g., action research and
collective bargaining; in Germany, occupational quality and health circles) to achieve an
improvement in productivity as well as to humanize the work by decreasing the stress load
(Landsbergis et al. 1993).

Night and Shift Work

Numerous publications in the international literature cover the health risks posed by night and
shift work. It is generally accepted that shift work is one risk factor which, together with other
relevant (including indirect) work-related demands and expectation factors, leads to adverse
effects.

In the last decade, research on shift work has increasingly dealt with the long-term effects of
night and shift work on the frequency of cardiovascular disease, especially ischaemic heart
disease and myocardial infarction, as well as on cardiovascular risk factors. The results of
epidemiological studies, especially from Scandinavia, suggest an elevated risk of ischaemic
heart disease and myocardial infarction among shift workers (Alfredsson, Karasek and
Theorell 1982; Alfredsson, Spetz and Theorell 1985; Knutsson et al. 1986; Tüchsen 1993). In
Denmark it was even estimated that 7% of cardiovascular disease in men as well as women
can be traced to shift work (Olsen and Kristensen 1991).

The hypothesis that night and shift workers have a higher risk (estimated relative risk
approximately 1.4) of cardiovascular disease is supported by other studies that consider
cardiovascular risk factors such as hypertension or fatty acid levels in shift workers as compared
with day workers. Various studies have shown that night and shift work may induce increased
blood pressure and hypertension as well as increased triglyceride and/or serum cholesterol
levels (with HDL-cholesterol fluctuating only within the normal range while total cholesterol
is increased). These changes, together with other risk factors (like heavy cigarette smoking and
overweight among shift workers), can cause increased morbidity and mortality due to
atherosclerotic disease (DeBacker et al. 1984; DeBacker et al. 1987; Härenstam et al. 1987;
Knutsson 1989; Lavie et al. 1989; Lennernäs, Åkerstedt and Hambraeus 1994; Orth-Gomer
1983; Romon et al. 1992).
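
The Danish estimate that about 7% of cardiovascular disease can be traced to shift work, and the relative risk of roughly 1.4 cited above, can be related through the standard population attributable fraction, PAF = p(RR - 1) / (p(RR - 1) + 1), where p is the proportion of the population exposed. The sketch below is only an arithmetic illustration; the assumed exposure proportion of about 20% shift workers is not taken from the studies cited.

def population_attributable_fraction(exposed_proportion: float, relative_risk: float) -> float:
    """Levin's population attributable fraction: p(RR - 1) / (p(RR - 1) + 1)."""
    excess = exposed_proportion * (relative_risk - 1.0)
    return excess / (excess + 1.0)

# With roughly 20% of the workforce on shift work (an assumed figure) and a relative risk
# of about 1.4, the attributable fraction comes out near 7%, consistent with the Danish estimate.
paf = population_attributable_fraction(exposed_proportion=0.20, relative_risk=1.4)
print(f"{paf:.1%}")   # approximately 7.4%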

In all, the question of possible causal links between shift work and atherosclerosis cannot be
definitively answered at present, as the pathomechanism is not sufficiently clear. Possible
mechanisms discussed in the literature include changes in nutrition and smoking habits, poor
sleep quality, increases in lipid level, chronic stress from social and psychological demands
and disrupted circadian rhythms. Knutsson (1989) has proposed an interesting pathogenesis
for the long-term effects of shift work on chronic morbidity.

The effects of various associated attributes on risk estimation have hardly been studied, since
in the occupational field other stress-inducing working conditions (noise, hazardous chemicals,
psychosocial stress, monotony and so on) are coupled with shift work. From the
observation that unhealthy nutritional and smoking habits are often connected with shift work,
it is often concluded that an increased risk of cardiovascular disease among shift workers is
more the indirect result of unhealthy behaviour (smoking, poor nutrition and so on) than
directly the result of night or shift work (Rutenfranz, Knauth and Angersbach 1981).
Furthermore, the obvious hypothesis, whether shift work itself promotes such behaviour or
whether the difference arises primarily from the choice of workplace and occupation, remains
to be tested. But regardless of the unanswered questions, special attention must be paid in
cardiovascular prevention programmes to night and shift workers as a risk group.

Summary

In summary, risk factors represent a broad variety of genetic, somatic, physiological,
behavioural and psychosocial characteristics which can be assessed both for individuals and
for groups of individuals. In the aggregate, they reflect the probability that CVD, or more
precisely in the context of this article, CHD or stroke, will develop. In addition to elucidating
the causes and pathogenesis of multifactorial diseases, their chief importance is that they
delineate individuals who should be targets for risk factor elimination or control, an exercise
admirably suited to the workplace, while repeated risk assessments over time demonstrate the
success of that preventive effort.

Rehabilitation and Prevention Programmes

Written by ILO Content Manager


Most individuals with recognized CVD are able to work effectively and productively in most
of the jobs found in the modern workplace. Just a few decades ago, individuals surviving an
acute myocardial infarction were cosseted and pampered for weeks and months with close
supervision and enforced inactivity. Laboratory confirmation of the diagnosis was enough to
justify labelling the individual as “permanently and totally disabled”. New diagnostic
technology providing more accurate evaluation of cardiac status, together with the favourable
experiences of those who could not or would not accept such a label, soon demonstrated that
an early return to work and an optimal level of activity were not only possible but desirable
(Edwards, McCallum and Taylor 1988; Theorell et al. 1991; Theorell 1993). Today, patients
commence supervised physical activity as soon as the acute effects of the infarction subside,
are often out of the hospital in a few days instead of the mandatory 6 to 8 weeks of yore, and
are often back on the job within a few weeks. When desirable and feasible, surgical
procedures such as angioplasty, by-pass operations and even cardiac transplantation can
improve the coronary blood flow, while a regimen featuring diet, exercise and control of the
risk factors for CHD can minimize (or even reverse) the progression of coronary
atherosclerosis.

Once the acute, often life-threatening phases of the CVD have been overcome, passive
movement followed by active exercise should be initiated early during the stay in the hospital
or clinic. With heart attacks, this phase is completed when the individual can climb stairs
without great difficulty. At the same time, the individual is schooled in a risk-prevention
regimen that includes proper diet, cardiovascular conditioning exercises, adequate rest and
relaxation, and stress management. During these phases of rehabilitation, support from family
members, friends and co-workers can be particularly helpful (Brusis and Weber-
Falkensammer 1986). The programme can be carried out in rehabilitation facilities or in
ambulatory “heart groups” under the supervision of a trained physician (Halhuber and
Traencker 1986). The focus on controlling lifestyle and behavioural risk factors and
controlling stress has been shown to result in a measurable reduction in the risk of re-
infarction and other cardiovascular problems.

Throughout the programme the attending physician should maintain contact with the
employer (and particularly with the company doctor, if there is one) to discuss the prospects
for recovery and the probable duration of the period of disability, and to explore the feasibility
of any special arrangements that may be needed to permit an early return to the job. The
worker’s knowledge that the job is waiting and that he or she is expected to be able to return
to it is a potent motivating factor for the enhancement of recovery. Experience has amply
demonstrated that the success of the rehabilitation effort diminishes as the absence from work
lengthens.

In instances where desirable adjustments in the job and/or the workplace are not possible or
feasible, retraining and appropriate job placement can obviate unnecessary invalidism.
Specially protected workshops are often helpful in reintegrating into the workplace people
who have been absent from the job for long periods while receiving treatment for the serious
effects of stroke, congestive heart failure or disabling angina pectoris.

Following the return to work, continued surveillance by both the attending physician and the
occupational physician is eminently desirable. Periodic medical evaluations, at intervals that
are frequent initially but lengthen as recovery is assured, are helpful in assessing the worker’s
cardiovascular status, adjusting medications and other elements in the maintenance regimen
and monitoring the adherence to the lifestyle and behavioural recommendations. Satisfactory
findings in these examinations may allow the gradual easing of any work limitations or
restrictions until the worker is fully integrated into the workplace.

Workplace Health Promotion and Prevention Programmes

The prevention of occupational diseases and injuries is a prime responsibility of the
organization’s occupational health and safety programme. This includes primary prevention
(i.e., the identification and elimination or control of potential hazards and strains by
changing the work environment or the job). It is supplemented by secondary prevention
measures which protect the workers from the effects of existing hazards and strains that
cannot be eliminated (i.e., personal protective equipment and periodic medical surveillance
examinations). Workplace health promotion and prevention (HPP) programmes go beyond
these goals. They place their emphasis on health-conscious behaviour as it relates to lifestyle,
behavioural risk factors, eliminating or coping with stress and so on. They are of great
significance, particularly in preventing CVD. The goals of HPP, as formulated by the WHO
Committee on Environmental and Health Monitoring in Occupational Health, extend beyond
the mere absence of disease and injury to include well-being and functional capacity (WHO
1973).

The design and operation of HPP programmes are discussed in more detail elsewhere in the
chapter. In most countries, they have a particular focus on the prevention of CVDs. For
example, in Germany, the “Have a heart for your heart” programme supplements the heart
health circles organized by the health insurance companies (Murza and Laaser 1990, 1992),
while the “Take Heart” movement in Britain and Australia has similar goals (Glasgow et al.
1995).

That such programmes are effective was verified in the 1980s by the WHO Collaborative
Trial in Prevention of Heart Disease, which was carried out in 40 pairs of factories in four
European countries and involved approximately 61,000 men aged 40 to 59. The preventive
measures largely comprised health education activities, carried out primarily by the
organization’s employee health service, focused on cholesterol-lowering diets, giving up
cigarette smoking, weight control, increased physical activity and controlling hypertension. A
randomized screening of 10% of the eligible workers in the factories designated as controls
demonstrated that during the 4 to 7 years of the study, overall risk of CVDs could be reduced
by 11.1% (19.4% among those initially at high risk). In the study factories, mortality from
CHDs fell by 7.4%, while overall mortality fell by 2.7%. The best results were achieved in
Belgium, where the intervention was carried out continuously during the entire study period,
while the poorest results were seen in Britain, where the prevention activities were sharply
curtailed prior to the last follow-up examination. This disparity emphasizes the relationship of
success to the duration of the health education effort; it takes time to inculcate the desired
lifestyle changes. The intensity of the educational effort was also a factor: in Italy, where six
full-time health educators were involved, a 28% reduction in overall risk-factor profile was
achieved, whereas in Britain, where only two full-time educators served three times the
number of workers, a risk factor reduction of only 4% was achieved.

While the time required to detect reductions in CHD mortality and morbidity is a formidable
limiting factor in epidemiological studies aimed at evaluating the results of company health
programmes (Mannebach 1989), reductions in risk factors have been demonstrated (Janssen
1991; Gomel et al. 1993; Glasgow et al. 1995). Temporary decreases in the number of lost
workdays and a decline in hospitalization rates have been reported (Harris 1994). There seems
to be general agreement that HPP activities in the community and particularly in the
workplace have significantly contributed to the reduction in cardiovascular mortality in the
United States and other western industrialized countries.

Conclusion

CVDs loom large in the workplace, not so much because the cardiovascular system is
particularly vulnerable to environmental and job hazards, but because they are so common in
the population of working age. The workplace offers a singularly advantageous arena for
the detection of unrecognized, asymptomatic CVDs, for the circumvention of workplace
factors that might accelerate or aggravate them and for the identification of factors that
increase the risk of CVDs and the mounting of programmes to eliminate or control them.
When CVDs do occur, prompt attention to control of job-related circumstances that may
prolong or increase their severity can minimize the extent and duration of disability, while
early, professionally supervised rehabilitation efforts will facilitate the restoration of working
capacity and reduce the risk of recurrences.

Physical, Chemical and Biological Hazards

The intact cardiovascular system is remarkably resistant to the harmful effects of physical,
chemical and biological hazards encountered on the job or in the workplace. With a very few
exceptions, such hazards are rarely a direct cause of CVDs. On the other hand, once the
integrity of the cardiovascular system is compromised—and this may be entirely silent and
unrecognized—exposure to these hazards may contribute to the ongoing development of a
disease process or precipitate symptoms reflecting functional impairment. This dictates early
identification of workers with incipient CVD and modification of their jobs and/or the work
environment to reduce the risk of harmful effects. The following segments will include brief
discussions of some of the more commonly encountered occupational hazards that may affect
the cardiovascular system. Each of the hazards presented below is discussed more fully
elsewhere in the Encyclopaedia.

Cardiovascular System References


Acha, P and B Szyfres. 1980. Zoonoses and Communicable Diseases Common to Man and
Animals. Washington, DC: Regional Office of the WHO.
al-Eissa, YA. 1991. Acute rheumatic fever during childhood in Saudi Arabia. Ann Trop
Paediat 11(3):225-231.

Alfredsson, L, R Karasek, and T Theorell. 1982. Myocardial infarction risk and psychosocial
work environment: An analysis of male Swedish working force. Soc Sci Med 16:463-467.

Alfredsson, L, C-L Spetz, and T Theorell. 1985. Type of occupation and near-future
hospitalization for myocardial infarction (MI) and some other diagnoses. Int J Epidemiol
14:378-388.

Altura, BM. 1993. Extraaural effects of chronic noise exposure on blood pressure,
microcirculation and electrolytes in rats: Modulation by Mg2+. In Lärm und Krankheit [Noise
and Disease], edited by H Ising and B Kruppa. Stuttgart: Gustav Fischer.

Altura, BM, BT Altura, A Gebrewold, H Ising, and T Gunther. 1992. Noise-induced hypertension and magnesium in rats: Relationship to microcirculation and calcium. J Appl Physiol 72:194-202.

American Industrial Hygiene Association (AIHA). 1986. Biohazards—Reference Manual. Akron, Ohio: AIHA.

Arribada, A, W Apt, X Aguilera, A Solari, and J Sandoval. 1990. Chagas cardiopathy in the first region of Chile. Clinical, epidemiologic and parasitologic study. Revista Médica de Chile 118(8):846-854.

Aro, S and J Hasan. 1987. Occupational class, psychosocial stress and morbidity. Ann Clin Res 19:62-68.

Atkins, EH and EL Baker. 1985. Exacerbation of coronary artery disease by occupational carbon monoxide exposure: A report of two fatalities and a review of the literature. Am J Ind Med 7:73-79.

Azofra, J, R Torres, JL Gómez Garcés, M Górgolas, ML Fernández Guerrero, and M Jiménez Casado. 1991. Endocarditis por erysipelothrix rhusiopathiae. Estudio de dos casos y revisión de la literatura [Endocarditis due to Erysipelothrix rhusiopathiae. Study of two cases and review of the literature]. Enfermedades Infecciosas y Microbiologia Clinica 9(2):102-105.

Baron, JA, JM Peters, DH Garabrant, L Bernstein, and R Krebsbach. 1987. Smoking as a risk
factor in noise-induced hearing loss. J Occup Med 29:741-745.

Bavdekar, A, M Chaudhari, S Bhave, and A Pandit. 1991. Ciprofloxacin in typhoid fever. Ind
J Pediatr 58(3):335-339.

Behymer, D and HP Riemann. 1989. Coxiella burnetii infection (Q-fever). J Am Vet Med
Assoc 194:764-767.

Berlin, JA and GA Colditz. 1990. A meta-analysis of physical activity in the prevention of coronary heart disease. Am J Epidemiol 132:612-628.

Bernhardt, JH. 1986. Biological Effects of Static and Extremely Low Frequency Magnetic Fields. Munich: MMV Medizin Verlag.

—. 1988. The establishment of frequency dependent limits for electric and magnetic fields and evaluation of indirect effects. Radiat Environ Biophys 27:1-27.

Beschorner, WE, K Baughman, RP Turnicky, GM Hutchins, SA Rowe, AL Kavanaugh-McHugh, DL Suresch, and A Herskowitz. 1990. HIV-associated myocarditis pathology and immunopathology. Am J Pathol 137(6):1365-1371.

Blanc, P, P Hoffman, JF Michaels, E Bernard, H Vinti, P Morand, and R Loubiere. 1990. Cardiac involvement in carriers of the human immunodeficiency virus. Report of 38 cases. Annales de cardiologie et d’angiologie 39(9):519-525.

Bouchard, C, RJ Shephard, and T Stephens. 1994. Physical Activity, Fitness and Health.
Champaign, Ill: Human Kinetics.

Bovenzi, M. 1990. Autonomic stimulation and cardiovascular reflex activity in the hand-arm-
vibration syndrome. Kurume Med J 37:85-94.

Briazgounov, IP. 1988. The role of physical activity in the prevention and treatment of
noncommunicable diseases. World Health Stat Q 41:242-250.

Brouqui, P, HT Dupont, M Drancourt, Y Berland, J Etienne, C Leport, F Goldstein, P Massip, M Micoud, and A Bertrand. 1993. Chronic Q fever. Ninety-two cases from France, including 27 cases without endocarditis. Arch Int Med 153(5):642-648.

Brusis, OA and H Weber-Falkensammer (eds). 1986. Handbuch der Koronargruppenbetreuung [Handbook of Coronary Group Care]. Erlangen: Perimed.

Carter, NL. 1988. Heart rate and blood pressure response in medium artillery gun crews. Med
J Austral 149:185-189.

Centers for Disease Control and Prevention. 1993. Public health focus: Physical activity and
the prevention of coronary heart disease. Morb Mortal Weekly Rep 42:669-672.

Clark, RP and OG Edholm. 1985. Man and his Thermal Environment. London: Edward
Arnold.

Conolly, JH, PV Coyle, AA Adgey, HJ O’Neill, and DM Simpson. 1990. Clinical Q-fever in
Northern Ireland 1962-1989. Ulster Med J 59(2):137-144.

Curwen, M. 1991. Excess winter mortality: A British phenomenon? Health Trends 22:169-
175.
Curwen, M and T Devis. 1988. Winter mortality, temperature and influenza: Has the
relationship changed in recent years? Population Trends 54:17-20.

DeBacker, G, M Kornitzer, H Peters, and M Dramaix. 1984. Relation between work rhythm
and coronary risk factors. Eur Heart J 5 Suppl. 1:307.
DeBacker, G, M Kornitzer, M Dramix, H Peeters, and F Kittel. 1987. Irregular working hours
and lipid levels in men. In Expanding Horizons in Atherosclerosis Research, edited by G
Schlierf and H Mörl. Berlin: Springer.

Dökert, B. 1981. Grundlagen der Infektionskrankheiten für medizinische Berufe [Fundamentals of Infectious Diseases for the Medical Profession]. Berlin: Volk und Wissen.

Douglas, AS, TM Allan, and JM Rawles. 1991. Composition of seasonality of disease. Scott
Med J 36:76-82.

Dukes-Dobos, FN. 1981. Hazards of heat exposure. Scand J Work Environ Health 7:73.

Dupuis, H and W Christ. 1966. On the vibrating behavior of the stomach under the influence
of sinusoidal and stochastic vibration. Int J Appl Physiol Occup Physiol 22:149-166.

Dupuis, H and G Zerlett. 1986. The Effects of Whole-Body Vibration. Berlin: Springer.

Dupuis, H, E Christ, DJ Sandover, W Taylor, and A Okada. 1993. Proceedings of 6th International Conference on Hand-Arm Vibration, Bonn, Federal Republic of Germany, May 19-22, 1992. Essen: Druckzentrum Sutter & Partner.

Edwards, FC, RI McCallum, and PJ Taylor. 1988. Fitness for Work: The Medical Aspects.
Oxford: Oxford Univ. Press.

Eiff, AW v. 1993. Selected aspects of cardiovascular responses to acute stress. Lärm und
Krankheit [Noise and Disease], edited by H Ising and B Kruppa. Stuttgart: Gustav Fischer.

Fajen, J, B Albright, and SS Leffingwell. 1981. A cross-sectional medical and industrial hygiene survey of workers exposed to carbon disulfide. Scand J Work Environ Health 7 Suppl. 4:20-27.

Färkkilä, M, I Pyykkö, and E Heinonen. 1990. Vibration stress and the autonomic nervous
system. Kurume Med J 37:53-60.

Fisher, LD and DC Tucker. 1991. Air jet noise exposure rapidly increases blood pressure in
young borderline hypertensive rats. J Hypertension 9:275-282.

Frauendorf, H, U Kobryn, and W Gelbrich. 1992. [Circulatory reactions to physical strains of noise effects relevant to occupational medicine (in German)]. In Arbeitsmedizinische Aspekte der Arbeits(-zeit)organisation [Occupational Medical Aspects of the Organization of the Workplace and Working Time], edited by R Kreutz and C Piekarski. Stuttgart: Gentner.

Frauendorf, H, U Kobryn, W Gelbrich, B Hoffman, and U Erdmann. 1986. [Ergometric examinations on the different muscle groups and their effects on heart beat frequency and blood pressure (in German)]. Zeitschrift für klinische Medizin 41:343-346.

Frauendorf, H, G Caffier, G Kaul, and M Wawrzinoszek. 1995. Modelluntersuchung zur Erfassung und Bewertung der Wirkung kombinierter physischer und psychischer Belastungen auf Funktionen des Herz-Kreislauf-Systems (Schlußbericht) [Model Research on the Consideration and Assessment of the Effects of Combined Physical and Psychic Stresses on the Functions of the Cardiovascular System (Final Report)]. Bremerhaven: Wirtschaftsverlag NW.

Fritze, E and KM Müller. 1995. Herztod und akuter Myokardinfarkt nach psychischen oder
physischen Belastungen—Kausalitätsfragen und Versicherungsrecht. Versicherungsmedizin
47:143-147.

Gamberale, F. 1990. Physiological and psychological effects of exposure to extremely low-frequency and magnetic fields on humans. Scand J Work Environ Health 16 Suppl. 1:51-54.

Gemne, G. 1992. Pathophysiology and pathogenesis of disorders in workers using hand-held vibrating tools. In Hand-Arm Vibration: A Comprehensive Guide for Occupational Health Professionals, edited by PL Pelmear, W Taylor, and DE Wasserman. New York: Van Nostrand Reinhold.

—. 1994. Where is the research frontier for hand-arm vibration? Scand J Work Environ
Health 20, special issue:90-99.

Gemne, G and W Taylor. 1983. Hand-arm vibration and the central autonomic nervous
system. Proceedings of International Symposium, London, 1983. J Low Freq Noise Vib
special issue.

Gierke, HE and CS Harris. 1990. On the potential association between noise exposure and
cardiovascular disease. In Noise As a Public Health Problem, edited by B Berglund and T
Lindvall. Stockholm: Swedish Council for Building Research.

Glantz, SA and WW Parmley. 1995. Passive smoking and heart disease. JAMA 273:1047-
1053.

Glasgow, RE, JR Terborg, JF Hollis, HH Severson, and MB Shawn. 1995. Take Heart:
Results from the initial phase of a work site wellness program. Am J Public Health 85: 209-
216.

Gomel, M, B Oldenberg, JM Sumpson, and N Owen. 1993. Work site cardiovascular risk
reduction: A randomized trial of health risk assessment, education, counseling and incentives.
Am J Public Health 83:1231-1238.

Gordon, DJ, J Hyde, and DC Trost. 1988. Cyclic seasonal variation in plasma lipid and
lipoprotein levels: The Lipid Research Clinics Coronary Primary Prevention Trial placebo
group. J Clin Epidemiol 41:679-689.

Griffin, MJ. 1990. Handbook of Human Vibration. London: Academic.

Gross, R, D Jahn, and P Schölmerich (eds). 1970. Lehrbuch der Inneren Medizin [Textbook
of Internal Medicine]. Stuttgart: Schattauer.

Gross, D, H Willens, and St Zeldis. 1981. Myocarditis in Legionnaire’s disease. Chest 79(2):232-234.

Halhuber, C and K Traencker (eds). 1986. Die Koronare Herzkrankheit—eine Herausforderung an Politik und Gesellschaft [Coronary Heart Disease—A Political and Social Challenge]. Erlangen: Perimed.

Härenstam, A, T Theorell, K Orth-Gomer, U-B Palm, and A-L Unden. 1987. Shift work,
decision latitude and ventricular ectopic activity: A study of 24-hour electrocardiograms in
Swedish prison personnel. Work Stress 1:341-350.

Harris, JS. 1994. Health promotion in the workplace. In Occupational Medicine, edited by C
Zenz. St. Louis: Mosby.

Harrison, DW and PL Kelly. 1989. Age differences in cardiovascular and cognitive performance under noise conditions. Perceptual and Motor Skills 69:547-554.

Heinemann, L. 1993. MONICA East Germany Data Book. Berlin: ZEG.

Helmert, U, S Shea, and U Maschewsky-Schneider. 1995. Social class and cardiovascular disease risk factor changes in West Germany 1984-1991. Eur J Pub Health 5:103-108.

Heuchert, G and G Enderlein. 1994. Occupational registers in Germany—diversity in approach and setout. In Quality Assurance of Occupational Health Services. Bremerhaven: Wirtschaftsverlag NW.

Higuchi, M de L, CF DeMorais, NV Sambiase, AC Pereira-Barretto, G Bellotti, and F
Pileggi. 1990. Histopathological criteria of myocarditis—A study based on normal heart,
chagasic heart and dilated cardiomyopathy. Japan Circul J 54(4):391-400.

Hinderliter, AL, AF Adams, CJ Price, MC Herbst, G Koch, and DS Sheps. 1989. Effects of
low-level carbon monoxide exposure on resting and exercise-induced ventricular arrhythmias
in patients with coronary artery disease and no baseline ectopy. Arch Environ Health
44(2):89-93.

Hofmann, F (ed). 1993. Infektiologie—Diagnostik Therapie Prophylaxe—Handbuch und Atlas für Klinik und Praxis [Infectology—Diagnostic Therapy Prophylaxis—Handbook and Atlas for Clinic and Practice]. Landsberg: Ecomed.

Ilmarinen, J. 1989. Work and cardiovascular health: Viewpoint of occupational physiology. Ann Med 21:209-214.

Ising, H and B Kruppa. 1993. Lärm und Krankheit [Noise and Disease]. Proceedings of the
International Symposium “Noise and Disease”, Berlin, September 26-28, 1991. Stuttgart:
Gustav Fischer.

Janssen, H. 1991. Zur Frage der Effektivität und Effizienz betrieblicher Gesundheitsförderung—Ergebnisse einer Literaturrecherche [On the question of the effectiveness and efficiency of company health promotion—Results of a literature search]. Zeitschrift für Präventivmedizin und Gesundheitsförderung 3:1-7.

Jegaden, D, C LeFuart, Y Marie, and P Piquemal. 1986. Contribution à l’étude de la relation bruit-hypertension artérielle à propos de 455 marins de commerce âgés de 40 à 55 ans. Arch mal prof (Paris) 47:15-20.

Kaji, H, H Honma, M Usui, Y Yasuno, and K Saito. 1993. Analysis of 24 cases of
Hypothenar Hammer Syndrome observed among vibration exposed workers. In Proceedings
of 6th International Conference on Hand-Arm-Vibration, edited by H Dupuis, E Christ, DJ
Sandover, W Taylor, and A Okade. Essen: Druckzentrum Sutter.

Kannel, WB, A Belanger, R D’Agostino, and I Israel. 1986. Physical activity and physical
demand on the job and risk of cardiovascular disease and death: The Framingham Study. Am
Heart J 112:820-825.

Karasek, RA and T Theorell. 1990. Healthy Work. New York: Basic Books.

Karnaukh, NG, GA Petrow, CG Mazai, MN Zubko, and ER Doroklin. 1990. [The temporary
loss of work capacity in workers in the hot shops of the metallurgical industry due to disease
of the circulatory organs (in Russian)]. Vracebnoe delo 7:103-106.

Kaufmann, AF and ME Potter. 1986. Psittacosis. In Occupational Respiratory Diseases, edited by JA Merchant. Publication No. 86-102. Washington, DC: NIOSH.

Kawahara, J, H Sano, H Fukuzaki, H Saito, and J Hirouchi. 1989. Acute effects of exposure to
cold on blood pressure, platelet function and sympathetic nervous activity in humans. Am J
Hypertension 2:724-726.

Keatinge, WR, SRW Coleshaw, JC Eaton et al. 1986. Increased platelet and red cell counts,
blood viscosity and plasma cholesterol levels during heat stress, and mortality from coronary
and cerebral thrombosis. Am J Med 81: 795-800.

Khaw, K-T. 1995. Temperature and cardiovascular mortality. Lancet 345: 337-338.

Kleinman, MT, DM Davidson, RB Vandagriff, VJ Caiozzo, and JL Whittenberger. 1989. Effects of short-term exposure to carbon monoxide in subjects with coronary artery disease. Arch Environ Health 44(6):361-369.

Kloetzel, K, AE deAndrale, J Falleiros, and JC Pacheco. 1973. Relationship between hypertension and prolonged exposure to heat. J Occup Med 15:878-880.

Knave, B. 1994. Electric and magnetic fields and health outcomes—an overview. Scand J
Work Environ Health 20, special issue: 78-89.

Knutsson, A. 1989. Relationships between serum triglycerides and gamma-glutamyltransferase among shift and day workers. J Int Med 226:337-339.

Knutsson, A, T Åkerstedt, BG Jonsson, and K Orth-Gomer. 1986. Increased risk of ischemic heart disease in shift workers. Lancet 2:89-92.

Kornhuber, HH and G Lisson. 1981. Bluthochdruck—sind Industriestressoren, Lärm oder Akkordarbeit wichtige Ursachen? Deutsche medizinische Wochenschrift 106:1733-1736.

Kristensen, TS. 1989. Cardiovascular diseases and the work environment. Scand J Work
Environ Health 15:245-264.
—. 1994. Cardiovascular disease and the work environment. In Encyclopedia of
Environmental Control Technology, edited by PN Cheremisinoff. Houston: Gulf.

—. 1995. The demand-control-support model: Methodological challenges for future research. Stress Medicine 11:17-26.

Kunst, AE, CWN Looman, and JP Mackenbach. 1993. Outdoor air temperature and mortality in the Netherlands: A time series analysis. Am J Epidemiol 137:331-341.

Landsbergis, PA, SJ Schurman, BA Israel, PL Schnall, MK Hugentobler, J Cahill, and D Baker. 1993. Job stress and heart disease: Evidence and strategies for prevention. New Solutions :42-58.

Lavie, P, N Chillag, R Epstein, O Tzischinsky, R Givon, S Fuchs and B Shahal. 1989. Sleep
disturbance in shift-workers: As marker for maladaptation syndrome. Work Stress 3:33-40.

Lebedeva, NV, ST Alimova, and FB Efendiev. 1991. [Study of mortality among workers
exposed to heating microclimate (in Russian)]. Gigiena truda i professionalnye zabolevanija
10:12-15.

Lennernäs, M, T Åkerstedt, and L Hambraeus. 1994. Nocturnal eating and serum cholesterol
of three-shift workers. Scand J Work Environ Health 20:401-406.

Levi, L. 1972. Stress and distress in response to psychosocial stimuli. Acta Med Scand Suppl.
528.

—. 1983. Stress and coronary heart disease—causes, mechanisms, and prevention. Act Nerv
Super 25:122-128.

Lloyd, EL. 1991. The role of cold in ischaemic heart disease: A review. Public Health
105:205-215.

Mannebach, H. 1989. [Have the last 10 years improved the chances for preventing coronary
heart disease? (in German)]. J Prev Med Health Res 1:41-48.

Marmot, M and T Theorell. 1991. Social class and cardiovascular disease: The contribution of
work. In The Psychosocial Work Environment, edited by TV Johnson and G Johannson.
Amityville: Baywood.

Marshall, M and P Bilderling. 1984. [The Hypothenar-Hammer syndrome, an important differential diagnosis in vibration-related white-finger disease (in German)]. In Neurotoxizität von Arbeitsstoffen. Kausalitätsprobleme beim Berufskrebs. Vibration. [Neurotoxicity from Workplace Substances. Causality Problems with Occupational Cancer. Vibration.], edited by H Konietzko and F Schuckmann. Stuttgart: Gentner.

Michalak, R, H Ising, and E Rebentisch. 1990. Acute circulatory effects of military low
altitude flight noise. Int Arch Occup Environ Health 62:365-372.

Mielck, A. 1994. Krankheit und soziale Ungleichheit. Opladen: Leske & Budrich.
Millar, K and MJ Steels. 1990. Sustained peripheral vasoconstriction while working in
continuous intense noise. Aviat Space Environ Med 61:695-698.

Mittleman, MA, M Maclure, GH Tofler, JB Sherwood, RJ Goldberg, and JE Muller. 1993. Triggering of acute myocardial infarction by heavy physical exertion. New Engl J Med 329:1677-1683.

Morris, JN, JA Heady, and PAB Raffle. 1956. Physique of London busmen: Epidemiology of
uniforms. Lancet 2:569-570.

Morris, JN, A Kagan, DC Pattison, MJ Gardner, and PAB Raffle. 1966. Incidence and
prediction of ischaemic heart-disease in London busmen. Lancet 2:553-559.

Moulin, JJ, P Wild, B Mantout, M Fournier-Betz, JM Mur, and G Smagghe. 1993. Mortality
from lung cancer and cardiovascular diseases among stainless-steel producing workers.
Cancer Causes Control 4:75-81.

Mrowietz, U. 1991. Klinik und Therapie der Lyme-Borreliose. Informationen über Infektionen [Clinic and Therapy of Lyme-Borreliosis. Information on Infections—Scientific Conference, Bonn, June 28-29, 1990]. Basel: Editiones Roches.

Murza, G and U Laaser. 1990, 1992. Hab ein Herz für Dein Herz [Have a Heart for Your
Heart]. Gesundheitsförderung [Health research]. Vol. 2 and 4. Bielefeld: IDIS.

National Heart, Lung and Blood Institute. 1981. Control for Blood Pressure in the Work
Setting, University of Michigan. Washington, DC: US Government Printing Office.

Neild, PJ, P Syndercombe-Court, WR Keatinge, GC Donaldson, M Mattock, and M Caunce. 1994. Cold-induced increases in erythrocyte count, plasma cholesterol and plasma fibrinogen of elderly people without a comparable rise in protein C or factor X. Clin Sci Mol Med 86:43-48.

Nurminen, M and S Hernberg. 1985. Effects of intervention on the cardiovascular mortality of workers exposed to carbon disulphide: A 15 year follow up. Brit J Ind Med 42:32-35.

Olsen, N. 1990. Hyperreactivity of the central sympathetic nervous system in vibration-induced white finger. Kurume Med J 37:109-116.

Olsen, N and TS Kristensen. 1991. Impact of work environment on cardiovascular diseases in Denmark. J Epidemiol Community Health 45:4-10.

Orth-Gomer, K. 1983. Intervention on coronary risk factors by adapting a shift work schedule
to biologic rhythmicity. Psychosom Med 45:407-415.

Paffenbarger, RS, ME Laughlin, AS Gima, and RA Black. 1970. Work activity of longshoremen as related to death from coronary heart disease and stroke. New Engl J Med 282:1109-1114.

Pan, W-H, L-A Li, and M-J Tsai. 1995. Temperature extremes and mortality from coronary
heart disease and cerebral infarction in elderly Chinese. Lancet 345:353-355.
Parrot, J, JC Petiot, JP Lobreau, and HJ Smolik. 1992. Cardiovascular effects of impulse
noise, road traffic noise, and intermittent pink noise at LAeq=75 dB, as a function of sex, age
and level of anxiety: A comparative study. Int Arch Occup Environ Health 63:477-484;485-
493.

Pate, RR, M Pratt, SN Blair, WL Haskell, et al. 1995. Physical activity and public health. A
recommendation from the Centers for Disease Control and Prevention and the American
College of Sports Medicine. JAMA 273:402-407.

Pelmear, PL, W Taylor, and DE Wasserman (eds). 1992. Hand-Arm Vibration: A Comprehensive Guide for Occupational Health Professionals. New York: Van Nostrand Reinhold.

Petiot, JC, J Parrot, JP Lobreau, and JH Smolik. 1988. Individual differences in cardiovascular responses to intermittent noise in human females. Int J Psychophysiol 6:99-109;111-123.

Pillsburg, HC. 1986. Hypertension, hyperlipoproteinemia, chronic noise exposure: Is there synergism in cochlear pathology? Laryngoscope 96:1112-1138.

Powell, KE, PD Thompson, CJ Caspersen, and JS Kendrick. 1987. Physical activity and the
incidence of coronary heart disease. Ann Rev Pub Health 8:253-287.

Rebentisch, E, H Lange-Asschenfeld, and H Ising (eds). 1994. Gesundheitsgefahren durch Lärm: Kenntnisstand der Wirkungen von Arbeitslärm, Umweltlärm und lauter Musik [Health Hazards from Noise: State of the Knowledge of the Effects of Workplace Noise, Environmental Noise, and Loud Music]. Munich: MMV Medizin Verlag.

Redmond, CK, J Gustin, and E Kamon. 1975. Long-term mortality experience of steelworkers: VIII. Mortality patterns of open hearth steelworkers. J Occup Med 17:40-43.

Redmond, CK, JJ Emes, S Mazumdar, PC Magee, and E Kamon. 1979. Mortality of steelworkers employed in hot jobs. J Environ Pathol Toxicol 2:75-96.

Reindell, H and H Roskamm (eds). 1977. Herzkrankheiten: Pathophysiologie, Diagnostik, Therapie [Heart Diseases: Pathophysiology, Diagnostics, Therapy]. Berlin: Springer.

Riecker, G (ed). 1988. Therapie innerer Krankheiten [Therapy of Internal Diseases]. Berlin:
Springer.

Rogot, E and SJ Padgett. 1976. Associations of coronary and stroke mortality with
temperature and snowfall in selected areas of the United States 1962-1966. Am J Epidemiol
103:565-575.

Romon, M, M-C Nuttens, C Fievet, P Pot, JM Bard, D Furon, and JC Fruchart. 1992.
Increased triglyceride levels in shift workers. Am J Med 93:259-262.
Rutenfranz, J, P Knauth, and D Angersbach. 1981. Shift work research issues. In Biological
Rhythms, Sleep and Shift Work, edited by LC Johnson, DI Tepas, WP Colquhoun, and MJ
Colligan. New York: Spectrum.

Saltin, B. 1992. Sedentary lifestyle: An underestimated health risk. J Int Med 232:467-469.
Schnall, PL, PA Landsbergis, and D Baker. 1994. Job strain and cardiovascular disease. Ann
Rev Pub Health 15:381-411.

Schulz, F-H and H Stobbe (eds). 1981. Grundlagen und Klinik innerer Erkrankungen
[Fundamentals and Clinic of Internal Diseases]. Vol. III. Berlin: Volk and Gesundheit.

Schwarze, S and SJ Thompson. 1993. Research on non-auditory physiological effects of noise since 1988: Review and perspectives. In Bruit et Santé [Noise and Man ’93: Noise As a Public Health Problem], edited by M Vallet. Arcueil: Inst. national de recherche sur les transports et leur sécurité.

Siegrist, J. 1995. Social Crises and Health (in German). Gottingen: Hogrefe.

Shadick, NA, CB Phillips, EL Logigian, AC Steere, RF Kaplan, VP Berardi, PH Duray, MG Larson, EA Wright, KS Ginsburg, JN Katz, and MH Liang. 1994. The long-term clinical outcomes of Lyme disease—A population-based retrospective cohort study. Ann Intern Med 121:560-567.

Stern, FB, WE Halperin, RW Hornung, VL Ringenburg, and CS McCammon. 1988. Heart disease mortality among bridge and tunnel officers exposed to carbon monoxide. Am J Epidemiol 128(6):1276-1288.

Stout, RW and V Crawford. 1991. Seasonal variations in fibrinogen concentrations among elderly people. Lancet 338:9-13.

Sundermann, A (ed). 1987. Lehrbuch der Inneren Medizin [Textbook of Internal Medicine].
Jena: Gustav Fischer.

Suurnäkki, T, J Ilmarinen, G Wägar, E Järvinen, and K Landau. 1987. Municipal employees’ cardiovascular diseases and occupational stress factors in Finland. Int Arch Occup Environ Health 59:107-114.

Talbott, E, PC Findlay, LH Kuller, LA Lenkner, KA Matthews, RA Day, and EK Ishii. 1990. Noise-induced hearing loss: A possible marker for high blood pressure in older noise-exposed populations. J Occup Med 32:690-697.

Tanaka, S, A Konno, and A Hashimoto. 1989. The influence of cold temperatures on the progression of hypertension: An epidemiological study. J Hypertension 7 Suppl. 1:549-551.

Theorell, T. 1993. Medical and psychological aspects of job interventions. Int Rev Ind Organ
Psychol 8: 173-192.

Theorell, T, G Ahlberg-Hulten, L Alfredsson, A Perski, and F Sigala. 1987. Bullers Effekter Pa Människor. Stress Research Reports, No. 195. Stockholm: National Institute of Psychosocial Factors and Health.

Theorell, T, A Perski, K Orth-Gomér, and U deFaire. 1991. The effects of the strain of returning to work on the risk of cardiac death after a first myocardial infarction before the age of 45. Int J Cardiol 30:61-67.

Thompson, SJ. 1993. Review: Extraaural health effects of chronic noise exposure in humans.
In Lärm und Krankheit [Noise and Disease], edited by H Ising and B Kruppa. Stuttgart:
Gustav Fischer.

Tüchsen, F. 1993. Working hours and ischaemic heart disease in Danish men: A 4-year cohort
study of hospitalization. Int J Epidemiol 22:215-221.

United Nations Environment Programme (UNEP), World Health Organization (WHO), and
International Radiation Protection Association (IRPA). 1984. Extremely low frequency (ELF)
fields. Environmental Health Criteria, No. 35. Geneva: WHO.

—. 1987. Magnetic fields. Environmental Health Criteria, No. 69. Geneva: WHO.

van Dijk, FJH. 1990. Epidemiological research on non-auditory effects of occupational noise
exposure. Environ Int 16 (special issue):405-409.

van Dijk, FJH, JHA Verbeek, and FF de Vries. 1987. Non-auditory effects of occupational
noise in industry. V. A field study in a shipyard. Int Arch Occup Environ Health 59:55-
62;133-145.

Virokannas, H. 1990. Cardiovascular reflexes in workers exposed to hand-arm vibration. Kurume Med J 37:101-107.

Weir, FW and VL Fabiano. 1982. Reevaluation of the role of carbon monoxide in production
or aggravation of cardiovascular disease processes. J Occup Med 24(7):519-525

Wells, AJ. 1994. Passive smoking as a cause of heart disease. JAMA 24:546-554.

Wielgosz, AT. 1993. The decline in cardiovascular health in developing countries. World
Health Stat Q 46:90-150.

Wikström, B-O, A Kjellberg, and U Landström. 1994. Health effects of long-term occupational exposure to whole-body vibration: A review. Int J Ind Erg 14:273-292.

Wild, P, J-J Moulin, F-X Ley, and P Schaffer. 1995. Mortality from cardiovascular diseases
among potash miners exposed to heat. Epidemiology 6:243-247.

Willich, SN, M Lewis, H Löwel, H-R Arntz, F Schubert, and R Schröder. 1993. Physical
exertion as a trigger of acute myocardial infarction. New Engl J Med 329:1684-1690.

Wojtczak-Jaroszowa, J and D Jarosz. 1986. Health complaints, sicknesses and accidents of workers employed in high environmental temperatures. Canad J Pub Health 77:132-135.

Woodhouse, PR, KT Khaw, and M Plummer. 1993a. Seasonal variation in blood pressure in
relation to temperature in elderly men and women. J Hypertension 11:1267-1274.
—. 1993b. Seasonal variation of lipids in an elderly population. Age Ageing 22:273-278.

Woodhouse, PR, KT Khaw, TW Meade, Y Stirling, and M Plummer. 1994. Seasonal variations of plasma fibrinogen and factor VII activity in the elderly: Winter infections and death from cardiovascular disease. Lancet 343:435-439.

World Health Organization MONICA Project. 1988. Geographical variation in the major risk
factors of coronary heart disease in men and women aged 35-64 years. World Health Stat Q
41:115-140.

—. 1994. Myocardial infarction and coronary deaths in the World Health Organization
MONICA project. Registration procedures, event rates, and case-fatality in 38 populations
from 21 countries in four continents. Circulation 90:583-612.

World Health Organization (WHO). 1973. Report of a WHO Expert Committee on Environmental and Health Monitoring in Occupational Health. Technical Report Series, No. 535. Geneva: WHO.

—. 1975. International Classification of Diseases, 9th Revision. Geneva: WHO.

—. 1985. Identification and control of work-related diseases. Technical Report Series, No.
714. Geneva: WHO.

—. 1994a. Cardiovascular disease risk factors: New areas for research. Technical Report Series, No. 841. Geneva: WHO.

—. 1994b. World Health Statistics Annual 1993. Geneva: WHO.

Wyndham, CH and SA Fellingham. 1978. Climate and disease. S Afr Med J 53:1051-1061.

Zhao, Y, S Liu, and S Zhang. 1994. Effects of short-term noise exposure on heart rate and
ECG ST segment in male rats. In Health Hazards from Noise: State of the Knowledge of the
Effects of Workplace Noise, Environmental Noise, and Loud Music, edited by E Rebentisch,
H Lange-Asschenfeld, and H Ising. Munich: MMV, Medizin Verlag.

Copyright 2015 International Labour Organization


4. Digestive System
Chapter Editor: Heikki Savolainen

Digestive System

Written by ILO Content Manager

The digestive system exerts a considerable influence on the efficiency and work capacity of
the body, and acute and chronic illnesses of the digestive system are among the commonest
causes of absenteeism and disablement. In this context, the occupational physician may be
called upon in any of the following ways: to offer suggestions concerning hygiene and
nutritional requirements in relation to the particular needs of a given occupation; to assess the
influence that factors inherent in the occupation may have either in producing morbid
conditions of the digestive system or in aggravating others that may pre-exist or be otherwise
independent of the occupation; or to express an opinion concerning general or specific fitness
for the occupation.

Many of the factors that are harmful to the digestive system may be of occupational origin;
frequently a number of factors act in concert and their action may be facilitated by individual
predisposition. The following are among the most important occupational factors: industrial
poisons; physical agents; and occupational stress such as tension, fatigue, abnormal postures,
frequent changes in work tempo, shift work, night work and unsuitable eating habits
(quantity, quality and timing of meals).

Chemical Hazards

The digestive system may act as a portal for the entry of toxic substances into the body,
although its role here is normally much less important than that of the respiratory system
which has an absorption surface area of 80-100 m2 whereas the corresponding figure for the
digestive system does not exceed 20 m2. In addition, vapours and gases entering the body by
inhalation reach the bloodstream and hence the brain without meeting any intermediate
defence; however, a poison that is ingested is filtered and, to some degree, metabolized by the
liver before reaching the vascular bed. Nevertheless, the organic and functional damage may
occur both during entry into and elimination from the body or as a result of accumulation in
certain organs. This damage suffered by the body may be the result of the action of the toxic
substance itself, its metabolites or the fact that the body is depleted of certain essential
substances. Idiosyncrasy and allergic mechanisms may also play a part. The ingestion of
caustic substances is still a fairly common accidental occurrence. In a retrospective study in
Denmark, the annual incidence of oesophageal burns was 1/100,000, with a hospitalization
rate of 0.8/100,000 adult person-years. Many household chemicals are caustic.

Toxic mechanisms are highly complex and may vary considerably from substance to
substance. Some elements and compounds used in industry cause local damage in the
digestive system affecting, for example, the mouth and neighbouring area, stomach, intestine,
liver or pancreas.
Solvents have particular affinity for lipid-rich tissues. The toxic action is generally complex
and different mechanisms are involved. In the case of carbon tetrachloride, liver damage is
thought to be mainly due to toxic metabolites. In the case of carbon disulphide,
gastrointestinal involvement is attributed to the specific neurotropic action of this substance
on the intramural plexus whilst liver damage seems to be more due to the solvent’s cytotoxic
action, which produces changes in lipoprotein metabolism.

Liver damage constitutes an important part of the pathology of exogenic poisons since the
liver is the prime organ in metabolizing toxic agents and acts with the kidneys in detoxication
processes. The bile receives from the liver, either directly or after conjugation, various
substances that can be reabsorbed in the enterohepatic cycle (for instance, cadmium, cobalt,
manganese). Liver cells participate in oxidation (e.g., alcohols, phenols, toluene), reduction,
(e.g., nitrocompounds), methylation (e.g., selenic acid), conjugation with sulphuric or
glucuronic acid (e.g., benzene), acetylation (e.g., aromatic amines). Kupffer cells may also
intervene by phagocytosing the heavy metals, for example.

Severe gastro-intestinal syndromes, such as those due to phosphorus, mercury or arsenic are
manifested by vomiting, colic, and bloody mucus and stools and may be accompanied by liver
damage (hepatomegalia, jaundice). Such conditions are relatively rare nowadays and have
been superseded by occupational intoxications which develop slowly and even insidiously;
consequently liver damage, in particular, may often be insidious too.

Infectious hepatitis deserves particular mention; it may be related to a number of occupational
factors (hepatotoxic agents, heat or hot work, cold or cold work, intense physical activity,
etc.), may have an unfavourable course (protracted or persistent chronic hepatitis) and may
easily result in cirrhosis. It frequently occurs with jaundice and thus creates diagnostic
difficulties; moreover, it presents difficulties of prognosis and estimation of the degree of
recovery and hence of fitness for resumption of work.

Although the gastro-intestinal tract is colonized by abundant microflora which have important
physiological functions in human health, an occupational exposure may give rise to
occupational infections. For example, abattoir workers may be at risk of contracting a
Helicobacter infection. This infection may often be symptomless. Other important infections
include the Salmonella and Shigella species, which must also be controlled in order to
maintain product safety, such as in the food industry and in catering services.

Smoking and alcohol consumption are the major risks for oesophageal cancer in industrialized
countries, and occupational aetiology is of lesser importance. However, butchers and their
spouses seem to be at elevated risk of colorectal cancer.

Physical Factors

Various physical agents may cause digestive system syndromes; these include direct or
indirect disabling traumata, ionizing radiations, vibration, rapid acceleration, noise, very high
and low temperatures or violent and repeated climatic changes. Burns, especially if extensive,
may cause gastric ulceration and liver damage, perhaps with jaundice. Abnormal postures or
movements may cause digestive disorders especially if there are predisposing conditions such
as para-oesophageal hernia, visceroptosis or relaxatio diaphragmatica; in addition, extra-
digestive reflexes such as heartburn may occur where digestive disorders are accompanied by
autonomic nervous system or neuro-psychological troubles. Troubles of this type are common
in modern work situations and may themselves be the cause of gastro-intestinal dysfunction.

Occupational Stress

Physical fatigue may also disturb digestive functions, and heavy work may cause
secretomotor disorders and dystrophic changes, especially in the stomach. Persons with
gastric disorders, especially those who have undergone surgery are limited in the amount of
heavy work they can do, if only because heavy work requires higher levels of nutrition.

Shift work may cause important changes in eating habits with resultant functional gastro-
intestinal problems. Shift work may be associated with elevated blood cholesterol and
triglyceride levels, as well as increased gamma-glutamyltransferase activity in serum.

Nervous gastric dyspepsia (or gastric neurosis) seems to have no gastric or extragastric cause
at all, nor does it result from any humoral or metabolic disorder; consequently, it is considered
to be due to a primary disorder of the autonomic nervous system, sometimes associated with
excessive mental exertion or emotional or psychological stress. The gastric disorder is often
manifested by neurotic hypersecretion or by hyperkinetic or atonic neurosis (the latter
frequently associated with gastroptosis). Epigastric pain, regurgitation and aerophagia may
also come under the heading of neurogastric dyspepsia. Elimination of the deleterious
psychological factors in the work environment may lead to remission of symptoms.

Several observations point to an increased frequency of peptic ulcers among people carrying
responsibilities, such as supervisors and executives, workers engaged in very heavy work,
newcomers to industry, migrant workers, seafarers and workers subject to serious socio-
economic stress. However, many people suffering from the same disorders lead a normal
professional life, and statistical evidence is lacking. In addition to working conditions,
drinking, smoking and eating habits, and home and social life all play a part in the
development and prolongation of dyspepsia, and it is difficult to determine what part each one
plays in the aetiology of the condition.

Digestive disorders have also been attributed to shift work as a consequence of frequent
changes of eating hours and poor eating at workplaces. These factors can aggravate pre-
existing digestive troubles and precipitate a neurotic dyspepsia. Therefore, workers should be
assigned to shift work only after medical examination.

Medical Supervision

It can be seen that the occupational health practitioner is faced with many difficulties in the
diagnosis and estimation of digestive system complaints (due inter alia to the part played by
deleterious non-occupational factors) and that his or her responsibility in prevention of
disorders of occupational origin is considerable.

Early diagnosis is extremely important and implies periodical medical examinations and
supervision of the working environment, especially when the level of risk is high.

Health education of the general public, and of workers in particular, is a valuable preventive
measure and may yield substantial results. Attention should be paid to nutritional
requirements, choice and preparation of foodstuffs, the timing and size of meals, proper
chewing and moderation in the consumption of rich foods, alcohol and cold drinks, or
complete elimination of these substances from the diet.

Mouth and Teeth

Written by ILO Content Manager

The mouth is the portal of entry to the digestive system and its functions are, primarily, the
chewing and swallowing of food and the partial digestion of starches by means of salivary
enzymes. The mouth also participates in vocalizing and may replace or complement the nose
in respiration. Due to its exposed position and the functions it fulfils, the mouth is not only a
portal of entry but also an area of absorption, retention and excretion for toxic substances to
which the body is exposed. Factors which lead to respiration via the mouth (nasal stenoses,
emotional situations) and increased pulmonary ventilation during effort, promote either the
penetration of foreign substances via this route, or their direct action on the tissues in the
buccal cavity.

Respiration through the mouth promotes:

 greater penetration of dust into the respiratory tree since the buccal cavity has a retention
quotient (impingement) of solid particles much lower than that of the nasal cavities

 dental abrasion in workers exposed to large dust particles, dental erosion in workers exposed
to strong acids, caries in workers exposed to flour or sugar dust, etc.

The mouth may constitute the route of entry of toxic substances into the body either by
accidental ingestion or by slow absorption. The surface area of the buccal mucous membranes
is relatively small (in comparison with that of the respiratory system and gastro-intestinal
system) and foreign substances will remain in contact with these membranes for only a short
period. These factors considerably limit the degree of absorption even of substances which are
highly soluble; nevertheless, the possibility of absorption does exist and is even exploited for
therapeutic purposes (perlingual absorption of drugs).

The tissues of the buccal cavity may often be the site of accumulation of toxic substances, not
only by direct and local absorption, but also by transport via the bloodstream. Research using
radioactive isotopes has shown that even the tissues which seem metabolically the most inert
(such as dental enamel and dentine) have a certain accumulative capacity and a relatively
active turnover for certain substances. Classical examples of storage are various
discolorations of the mucous membranes (gingival lines) which often provide valuable
diagnostic information (e.g. lead).

Salivary excretion is of no value in the elimination of toxic substances from the body since
the saliva is swallowed and the substances in it are once more absorbed into the system, thus
forming a vicious circle. Salivary excretion has, on the other hand, a certain diagnostic value
(determination of toxic substances in the saliva); it may also be of importance in the
pathogenesis of certain lesions since the saliva renews and prolongs the action of toxic
substances on the buccal mucous membrane. The following substances are excreted in the
saliva: various heavy metals, the halogens (the concentration of iodine in the saliva may be 7-
700 times greater than that in plasma), the thiocyanates (smokers, workers exposed to
hydrocyanic acid and cyanogen compounds), and a wide range of organic compounds
(alcohols, alkaloids, etc.).

Aetiopathogenesis and Clinical Classification

Lesions of the mouth and teeth (also called stomatological lesions) of occupational origin may
be caused by:

 physical agents (acute traumata and chronic microtraumata, heat, electricity, radiations, etc.)

 chemical agents which affect the tissues of the buccal cavity directly or by means of systemic
changes

 biological agents (viruses, bacteria, mycetes).

However, when dealing with mouth and teeth lesions of occupational origin, a classification
based on topographical or anatomical location is preferred to one employing aetiopathogenic
principles.

Lips and cheeks. Examination of the lips and cheeks may reveal: pallor due to anaemia
(benzene, lead poisoning, etc.), cyanosis due to acute respiratory insufficiency (asphyxia) or
chronic respiratory insufficiency (occupational diseases of the lungs), cyanosis due to
methaemoglobinaemia (nitrites and organic nitro-compounds, aromatic amines), cherry-red
colouring due to acute carbon monoxide poisoning, and yellow colouring in cases of acute
poisoning with picric acid or dinitrocresol, or in cases of hepatotoxic jaundice (phosphorus,
chlorinated hydrocarbon pesticides, etc.). In argyrosis, there is brown or grey-bluish
coloration caused by the precipitation of silver or its insoluble compounds, especially in areas
exposed to light.

Occupational disorders of the lips include: dyskeratoses, fissures and ulcerations due to the
direct action of caustic and corrosive substances; allergic contact dermatitis (nickel, chrome)
which may also include the dermatitis found in tobacco industry workers; microbial eczemas
resulting from the use of respiratory protective equipment where the elementary rules of
hygiene have not been observed; lesions caused by anthrax and glanders (malignant pustules
and cancroid ulcer) of workers in contact with animals; inflammation due to solar radiation
and found among agricultural workers and fishermen; neoplastic lesions in persons handling
carcinogenic substances; traumatic lesions; and chancre of the lip in glassblowers.

Teeth. Discoloration caused by the deposition of inert substances or due to the impregnation
of the dental enamel by soluble compounds is of almost exclusively diagnostic interest. The
important colourings are as follows: brown, due to the deposition of iron, nickel and
manganese compounds; greenish-brown due to vanadium; yellowish-brown due to iodine and
bromine; golden-yellow, often limited to gingival lines, due to cadmium.
Of greater importance is dental erosion of mechanical or chemical origin. Even nowadays it is
possible to find dental erosions of mechanical origin in certain craftsmen (caused by holding
nails or string, etc., in the teeth) which are so characteristic that they can be considered
occupational stigmata. Lesions caused by abrasive dusts have been described in grinders,
sandblasters, stone industry workers and precious stone workers. Prolonged exposure to
organic and inorganic acids will often cause dental lesions occurring mainly on the labial
surface of the incisors (rarely on the canines); these lesions are initially superficial and limited
to the enamel but later become deeper and more extensive, reaching the dentine and resulting
in solubilization and mobilization of calcium salts. The localization of these erosions to the
anterior surface of the teeth is due to the fact that when the lips are open it is this surface
which is the most exposed and which is deprived of the natural protection offered by the
buffer effect of saliva.

Dental caries is such a frequent and widespread disease that a detailed epidemiological study
is required to determine whether the condition is really of occupational origin. The most
typical example is that of the caries found in workers exposed to flour and sugar dust
(flour millers, bakers, confectioners, sugar industry workers). This is a soft caries which
develops rapidly; it starts at the base of the tooth (rampant caries) and immediately progresses
to the crown; the affected sites blacken, the tissue is softened and there is considerable loss of
substance, and finally the pulp is affected. These lesions begin after a few years of exposure,
and their severity and extent increase with the duration of this exposure. X rays may also
cause rapidly developing dental caries which usually commences at the base of the tooth.

In addition to pulpitis due to dental caries and erosion, an interesting aspect of pulp pathology
is barotraumatic odontalgia, i.e., pressure-induced toothache. This is caused by the rapid
release of gas dissolved in the pulp tissue following sudden atmospheric decompression; it is
a common symptom among the clinical manifestations observed during rapid ascent in
aircraft. In persons suffering from septic-gangrenous pulpitis, where gaseous material is
already present, this toothache may commence at an altitude of 2,000-3,000 m.

Occupational fluorosis does not lead to dental pathology as is the case with endemic fluorosis:
fluorine causes dystrophic changes (mottled enamel) only when the period of exposure
precedes the eruption of permanent teeth.

Mucous membrane changes and stomatitis. Of definite diagnostic value are the various
discolorations of the mucous membranes due to the impregnation or precipitation of metals
and their insoluble compounds (lead, antimony, bismuth, copper, silver, arsenic). A typical
example is Burton’s line in lead poisoning, caused by the precipitation of lead sulphide
following the development in the oral cavity of hydrogen sulphide produced by the
putrefaction of food residues. It has not been possible to reproduce Burton’s line
experimentally in herbivorous animals.

There is a very curious discoloration in the lingual mucous membrane of workers exposed to
vanadium. This is due to impregnation by vanadium pentoxide which is subsequently reduced
to trioxide; the discoloration cannot be cleaned away but disappears spontaneously a few days
after termination of exposure.

The oral mucous membrane can be the site of severe corrosive damage caused by acids,
alkalis and other caustic substances. Alkalis cause maceration, suppuration and tissue necrosis
with the formation of lesions which slough off easily. Ingestion of caustic or corrosive
substances produces severe ulcerative and very painful lesions of the mouth, oesophagus and
stomach, which may develop into perforations and frequently leave scars. Chronic exposure
favours the formation of inflammation, fissures, ulcers and epithelial desquamation of the
tongue, palate and other parts of the oral mucous membranes. Inorganic and organic acids
have a coagulating effect on proteins and cause ulcerous, necrotic lesions which heal with
contractive scarring. Mercury chloride and zinc chloride, certain copper salts, alkaline
chromates, phenol and other caustic substances produce similar lesions.

A prime example of chronic stomatitis is that caused by mercury. It commences gradually,
with discrete symptoms and a prolonged course; the symptoms include excessive salivation, a
metallic taste in the mouth, bad breath, and slight gingival reddening and swelling, and these
constitute the first phase of periodontitis leading towards loss of teeth. A similar clinical
picture is found in stomatitis due to bismuth, gold, arsenic, etc.

Salivary glands. Increased salivary secretion has been observed in the following cases:

 in a variety of acute and chronic stomatites which is due mainly to the irritant action of the
toxic substances and may, in certain cases, be extremely intense. For example, in cases of
chronic mercurial poisoning, this symptom is so prominent and occurs at such an early stage
that English workers have called this the “salivation disease”.
 in cases of poisoning in which there is central nervous system involvement—as is the case in
manganese poisoning. However, even in the case of chronic mercurial poisoning, salivary
gland hyperactivity is thought to be, at least in part, nervous in origin.
 in cases of acute poisoning with organophosphorus pesticides which inhibit cholinesterases.

There is reduction in salivary secretion in severe thermoregulation disorders (heatstroke, acute
dinitrocresol poisoning), and in serious disorders of water and electrolyte balance during toxic
hepatorenal insufficiency.

In cases of acute or chronic stomatitis, the inflammatory process may, sometimes, affect the
salivary glands. In the past there have been reports of “lead parotitis”, but this condition has
become so rare nowadays that doubts about its actual existence seem justified.

Maxillary bones. Degenerative, inflammatory and productive changes in the skeleton of the
mouth may be caused by chemical, physical and biological agents. Probably the most
important of the chemical agents is white or yellow phosphorus which causes phosphorus
necrosis of the jaw or “phossy jaw”, at one time a distressing disease of match industry
workers. The absorption of phosphorus is facilitated by the presence of gingival and dental
lesions, and produces, initially, a productive periosteal reaction followed by destructive and
necrotic phenomena which are activated by bacterial infection. Arsenic also causes
ulceronecrotic stomatitis which may have further bone complications. The lesions are limited
to the tooth roots in the jaw and lead to the formation of small pieces of dead bone. Once the
teeth have fallen out and the dead bone has been eliminated, the lesions have a favourable
course and nearly always heal.

Radium was the cause of maxillary osteonecrotic processes observed during the First World
War in workers handling luminous compounds. In addition, damage to the bone may also be
caused by infection.
Preventive Measures

A programme for the prevention of mouth and teeth diseases should be based on the following
four main principles:

 application of measures of industrial hygiene and preventive medicine including monitoring
of workplace environment, analysis of production processes, elimination of hazards in the
environment, and, where necessary, the use of personal protective equipment
 education of workers in the need for scrupulous oral hygiene—in many cases it has been
found that lack of oral hygiene may reduce resistance to general and localized occupational
diseases
 a careful check on the mouth and teeth when workers undergo pre-employment or
periodical medical examinations
 early detection and treatment of any mouth or teeth disease, whether of an occupational
nature or not.

Liver

Written by ILO Content Manager

The liver acts as a vast chemical factory with diverse vital functions. It plays an essential role
in the metabolism of protein, carbohydrate and fat, and is concerned with the absorption and
storage of vitamins and with the synthesis of prothrombin and other factors concerned with
blood clotting. The liver is responsible for the inactivation of hormones and the detoxification
of many drugs and exogenous toxic chemical substances. It also excretes the breakdown
products of haemoglobin, which are the principal constituents of the bile. These widely
varying functions are performed by parenchymal cells of uniform structure which contain
many complex enzyme systems.

Pathophysiology

An important feature of liver disease is a rise in the level of bilirubin in the blood; if of
sufficient magnitude, this stains the tissues to give rise to jaundice. The mechanism of this
process is shown in figure 1. Haemoglobin released from worn out red blood cells is broken
down to haem and then, by removal of iron, to bilirubin before it reaches the liver (prehepatic
bilirubin). In its passage through the liver cell, bilirubin is conjugated by enzymatic activity
into water-soluble glucuronides (posthepatic bilirubin) and then secreted as bile into the
intestine. The bulk of this pigment is eventually excreted in the stool, but some is reabsorbed
through the intestinal mucosa and secreted a second time by the liver cell into the bile
(enterohepatic circulation). However, a small proportion of this reabsorbed pigment is finally
excreted in the urine as urobilinogen. With normal liver function there is no bilirubin in the
urine, as prehepatic bilirubin is protein bound, but a small amount of urobilinogen is present.

Figure 1. The excretion of bilirubin through the liver, showing the enterohepatic circulation.

Obstruction to the biliary system can occur in the bile ducts, or at cellular level by swelling of
the hepatic cells due to injury, with resulting obstruction to the fine bile canaliculi.
Posthepatic bilirubin then accumulates in the bloodstream to produce jaundice, and overflows
into the urine. The secretion of bile pigment into the intestine is hindered, and urobilinogen is
no longer excreted in the urine. The stools are therefore pale due to lack of pigment, the urine
dark with bile, and the serum conjugated bilirubin raised above its normal value to give rise to
obstructive jaundice.

Damage to the liver cell, which may follow ingestion of or exposure to toxic agents, also gives
rise to an accumulation of posthepatic, conjugated bilirubin (hepatocellular jaundice). This
may be sufficiently severe and prolonged to give rise to a transient obstructive picture, with
bilirubin but no urobilinogen in the urine. However, in the early stages of hepatocellular
damage, without obstruction present, the liver is unable to re-excrete reabsorbed bilirubin, and
an excessive amount of urobilinogen is excreted in the urine.

When blood cells are broken down at an excessive rate, as in the haemolytic anaemias, the
liver becomes overloaded and the unconjugated prehepatic bilirubin is raised. This again gives
rise to jaundice. However, prehepatic bilirubin cannot be excreted in the urine. Excessive
amounts of bilirubin are secreted into the intestine, rendering the faeces dark. More is
reabsorbed via the enterohepatic circulation and an increased amount of urobilinogen excreted
in the urine (haemolytic jaundice).
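
The patterns described in the preceding paragraphs can be brought together schematically. The short sketch below is purely illustrative and is not a clinical algorithm; the input findings and the category labels simply restate the text above in compact form.

# Schematic summary of the jaundice patterns described above (illustrative only).
# urine_bilirubin and pale_stools are booleans; urine_urobilinogen is "raised" or "absent".
def jaundice_pattern(urine_bilirubin, urine_urobilinogen, pale_stools):
    if not urine_bilirubin and urine_urobilinogen == "raised":
        return "haemolytic (prehepatic) pattern: raised unconjugated bilirubin, dark stools"
    if urine_bilirubin and urine_urobilinogen == "absent" and pale_stools:
        return "obstructive pattern (or severe hepatocellular damage): conjugated bilirubin in urine"
    if urine_bilirubin and urine_urobilinogen == "raised":
        return "early hepatocellular pattern: reabsorbed pigment not re-excreted by the damaged liver"
    return "indeterminate: further liver function tests needed"
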
Diagnosis

Liver function tests are used to confirm suspected liver disease, to estimate progress and to
assist in the differential diagnosis of jaundice. A series of tests is usually applied to screen the
various functions of the liver, those of established value being:

1. Examination of the urine for the presence of bilirubin and urobilinogen: The former is
indicative of hepatocellular damage or of biliary obstruction. The presence of excessive
urobilinogen can precede the onset of jaundice and forms a simple and sensitive test of
minimal hepatocellular damage or of the presence of haemolysis.
2. Estimation of total serum bilirubin: Normal value 5-17 µmol/l.
3. Estimation of serum enzyme concentration: Hepatocellular damage is accompanied by a
raised level of a number of enzymes, in particular of gamma-glutamyl transpeptidase, alanine
amino-transferase (glutamic pyruvic transaminase) and aspartate amino-transferase
(glutamic oxalo-acetic transaminase), and by a moderately raised level of alkaline
phosphatase. An increasing level of alkaline phosphatase is indicative of an obstructive
lesion.
4. Determination of plasma protein concentration and electrophoretic pattern: Hepatocellular
damage is accompanied by a fall in plasma albumin and a differential rise in the globulin
fractions, in particular in gamma-globulin. These changes form the basis for the flocculation tests of
liver function.
5. Bromsulphthalein excretion test: This is a sensitive test of early cellular damage, and is of
value in detecting its presence in the absence of jaundice.
6. Immunological tests: Estimation of the levels of immunoglobulins and detection of
autoantibodies is of value in the diagnosis of certain forms of chronic liver disease. The
presence of hepatitis B surface antigen is indicative of serum hepatitis and the presence of
alpha-fetoprotein suggests a hepatoma.
7. Haemoglobin estimation, red cell indices and report on blood film.

Other tests used in the diagnosis of liver disease include scanning by means of ultrasound or
radio-isotope uptake, needle biopsy for histological examination and peritoneoscopy.
Ultrasound examination provides a simple, safe, non-invasive diagnostic technique, but one
which requires skill in application.

Occupational disorders

Infections. Schistosomiasis is a widespread and serious parasitic infection which may give rise
to chronic hepatic disease. The ova produce inflammation in the portal zones of the liver,
followed by fibrosis. The infection is occupational where workers have to be in contact with
water infested with the free-swimming cercariae.

Hydatid disease of the liver is common in sheep-raising communities with poor hygienic
standards where people are in close contact with the dog, the definitive host, and sheep, the
intermediate host for the parasite, Echinococcus granulosus. When a person becomes the
intermediate host, a hydatid cyst may form in the liver giving rise to pain and swelling, which
may be followed by infection or rupture of the cyst.

Weil’s disease may follow contact with water or damp earth contaminated by rats harbouring
the causative organism, Leptospira icterohaemorrhagiae. It is an occupational disease of
sewer workers, miners, workers in rice-fields, fishmongers and butchers. The development of
jaundice some days after the onset of fever forms only one stage of a disease which also
involves the kidney.

A number of viruses give rise to hepatitis, the most common being virus type A (HAV)
causing acute infective hepatitis and virus type B (HBV) or serum hepatitis. The former,
which is responsible for world-wide epidemics, is spread by the faecal-oral route, is
characterized by febrile jaundice with liver cell injury and is usually followed by recovery.
Type B hepatitis is a disease with a more serious prognosis. The virus is readily transmitted
following skin puncture or venipuncture, or transfusion with infected blood products; it has been
transmitted by drug addicts using the parenteral route, by sexual contact (especially homosexual
contact) or any close personal contact, and also by blood-sucking arthropods. Epidemics
have occurred in dialysis and organ transplant units, laboratories and hospital wards. Patients
on haemodialysis and those in oncology units are particularly liable to become chronic
carriers and hence provide a reservoir of infection. The diagnosis can be confirmed by the
identification of an antigen in the serum, originally called Australia antigen but now termed
hepatitis B surface antigen (HBsAg). Serum containing the antigen is highly infectious. Type B
hepatitis is an important occupational hazard for health care personnel, especially for those
working in clinical laboratories and on dialysis units. High levels of serum positivity have
been found in pathologists and surgeons, but low levels in doctors without patient contact. A
non-A, non-B hepatitis virus has also been identified as hepatitis C virus (HCV), and other
hepatitis virus types probably remain unidentified. The delta virus cannot cause hepatitis
independently but acts in conjunction with the hepatitis B virus. Chronic viral hepatitis is
an important cause of liver cirrhosis and of liver cancer (malignant hepatoma).

Yellow fever is an acute febrile illness resulting from infection with a Group B arbovirus
transmitted by culicine mosquitoes, in particular Aedes aegypti. It is endemic in many parts of
West and Central Africa, in tropical South America and some parts of the West Indies. When
jaundice is prominent, the clinical picture resembles infective hepatitis. Falciparum malaria
and relapsing fever may also give rise to high fever and jaundice and require careful
differentiation.

Toxic conditions. Excessive red blood cell destruction giving rise to haemolytic jaundice may
result from exposure to arsine gas, or the ingestion of haemolytic agents such as
phenylhydrazine. In industry, arsine may be formed whenever nascent hydrogen is formed in
the presence of arsenic, which may be an unsuspected contaminant in many metallurgical
processes.

Many exogenous poisons interfere with liver-cell metabolism by inhibiting enzyme systems,
or may damage or even destroy the parenchymal cells, interfering with the excretion of
conjugated bilirubin and giving rise to jaundice. The injury caused by carbon tetrachloride
may be taken as a model for direct hepatotoxicity. In mild cases of poisoning, dyspeptic
symptoms may be present without jaundice, but liver damage is indicated by the presence of
excess urobilinogen in the urine, raised serum amino-transferase (transaminase) levels and
impaired bromsulphthalein excretion. In more severe cases the clinical features resemble
those of acute infective hepatitis. Loss of appetite, nausea, vomiting and abdominal pain are
followed by a tender, enlarged liver and jaundice, with pale stools and dark urine. An
important biochemical feature is the high level of serum amino-transferase (transaminase)
found in these cases. Carbon tetrachloride has been widely used in dry cleaning, as a
constituent of fire extinguishers and as an industrial solvent.
Many other halogenated hydrocarbons have similar hepatotoxic properties. Those of the
aliphatic series which damage the liver are methyl chloride, tetrachloroethane, and
chloroform. In the aromatic series the nitrobenzenes, dinitrophenol, trinitrotoluene and rarely
toluene, the chlorinated naphthalenes and chlorinated diphenyl may be hepatotoxic. These
compounds are used variously as solvents, degreasers and refrigerants, and in polishes, dyes
and explosives. While exposure may produce parenchymal cell damage with an illness not
dissimilar to infectious hepatitis, in some cases (e.g., following exposure to trinitrotoluene or
tetrachlorethane) the symptoms may become severe with high fever, rapidly increasing
jaundice, mental confusion and coma with a fatal termination from massive necrosis of the
liver.

Yellow phosphorus is a highly poisonous substance whose ingestion gives rise to jaundice
which may have a fatal termination. Arsenic, antimony and ferrous iron compounds may also
give rise to liver damage.

Exposure to vinyl chloride in the polymerization process for the production of polyvinyl
chloride has been associated with the development of hepatic fibrosis of a non-cirrhotic type
together with splenomegaly and portal hypertension. Angiosarcoma of the liver, a rare and
highly malignant tumour, developed in a small number of exposed workers. Exposure to vinyl
chloride monomer, in the 40-odd years preceding the recognition of angiosarcoma in 1974,
had been high, especially in men engaged in the cleaning of the reaction vessels, in whom
most of the cases occurred. During that period the TLV for vinyl chloride was 500 ppm,
subsequently reduced to 5 ppm (10 mg/m3). While liver damage was first reported in Russian
workers in 1949, attention was not paid to the harmful effects of vinyl chloride exposure until
the discovery of Raynaud’s syndrome with sclerodermatous changes and acro-osteolysis in
the 1960s.

Hepatic fibrosis in vinyl chloride workers can be occult: because parenchymal liver function can
be preserved, conventional liver function tests may show no abnormality. Cases have come to
light following haematemesis from the associated portal hypertension, the discovery of
thrombocytopenia associated with splenomegaly or the development of angiosarcoma. In
surveys of vinyl chloride workers, a full occupational history including information on
alcohol and drug consumption should be taken, and the presence of hepatitis B surface antigen
and antibody determined. Hepatosplenomegaly may be detected clinically, by radiography or
more precisely by grey scale ultrasonography. The fibrosis in these cases is of a periportal
type, with a mainly presinusoidal obstruction to portal flow, attributed to an abnormality of
the portal vein radicles or the hepatic sinusoids and giving rise to portal hypertension. The
favourable progress of workers who have undergone portocaval shunt operations following
haematemesis is probably attributable to the sparing of the liver parenchymal cells in this
condition.

Fewer than 200 cases of angiosarcoma of the liver which fulfil current diagnostic criteria have
been reported. Less than half of these have occurred in vinyl chloride workers, with an
average duration of exposure of 18 years, range 4-32 years. In Britain, a register set up in
1974 has collected 34 cases with acceptable diagnostic criteria. Two of these occurred in vinyl
chloride workers, with possible exposure in four others; eight were attributable to past
exposure to Thorotrast and one to arsenical medication. Thorium dioxide (Thorotrast), used in
the past as a radiographic contrast agent, continues to be responsible for new cases of
angiosarcoma and hepatoma. Chronic arsenic intoxication, following medication or as an
occupational disease among vintners in the Moselle, has also been followed by angiosarcoma.
Non-cirrhotic perisinusoidal fibrosis has been observed in chronic arsenic intoxication, as in
vinyl chloride workers.

Aflatoxin, derived from a group of moulds, in particular Aspergillus flavus, gives rise to liver
cell damage, cirrhosis and liver cancer in experimental animals. The frequent contamination
of cereal crops with A. flavus, particularly during storage in warm, humid conditions, may
explain the high incidence of hepatoma in certain parts of the world, especially in tropical
Africa. In industrialized countries hepatoma is uncommon, more often developing in cirrhotic
livers. In a proportion of cases HBsAg has been present in the serum, and some cases have
followed treatment with androgens. Hepatic adenoma has been observed in women taking
certain oral contraceptive formulations.

Alcohol and cirrhosis. Chronic parenchymal liver disease may take the form of chronic
hepatitis or of cirrhosis. The latter condition is characterized by cellular damage, fibrosis and
nodular regeneration. While in many cases the aetiology is unknown, cirrhosis may follow
viral hepatitis, or acute massive necrosis of the liver, which itself may result from drug
ingestion or industrial chemical exposure. Portal cirrhosis is frequently associated with
excessive alcohol consumption in industrialized countries such as France, Britain and the
United States, although multiple risk factors may be involved to explain variation in
susceptibility. While the mode of action of alcohol is unknown, liver damage depends primarily on
the amount and duration of drinking. Workers who have easy access to alcohol are at greatest
risk of developing cirrhosis. Among the occupations with the highest mortality from cirrhosis
are bartenders and publicans, restaurateurs, seafarers, company directors and medical
practitioners.

Fungi. Mushrooms of the genus Amanita (e.g., Amanita phalloides) are highly toxic.
Ingestion is followed by gastro-intestinal symptoms with watery diarrhoea and, after an
interval, by acute liver failure due to centrizonal necrosis of the parenchyma.

Drugs. A careful drug history should always be taken before attributing liver damage to an
industrial exposure, for a variety of drugs are not only hepatotoxic, but are capable of enzyme
induction which may alter the liver’s response to other exogenous agents. Barbiturates are
potent inducers of liver microsomal enzymes, as are some food additives and DDT.

The popular analgesic acetaminophen (paracetamol) gives rise to hepatic necrosis when taken
in overdose. Other drugs with a predictable dose-related direct toxic action on the liver cell
are hycanthone, cytotoxic agents and tetracyclines (though much less potent). Several
antituberculous drugs, in particular isoniazid and para-aminosalicylic acid, certain monoamine
oxidase inhibitors and the anaesthetic gas halothane may also be hepatotoxic in some
hypersensitive individuals.

Phenacetin, sulphonamides and quinine are examples of drugs which may give rise to a mild
haemolytic jaundice, but again in hypersensitive subjects. Some drugs may give rise to
jaundice, not by damaging the liver cell, but by damaging the fine biliary ducts between the
cells to give rise to biliary obstruction (cholestatic jaundice). The steroid hormones
methyltestosterone and other C-17 alkyl-substituted compounds of testosterone are
hepatotoxic in this way. In the evaluation of a case of jaundice it is therefore important to
determine whether a female worker is taking an oral contraceptive. The epoxy resin hardener
4,4´-diamino-diphenylmethane led to an epidemic of cholestatic jaundice in England
following ingestion of contaminated bread.

Several drugs have given rise to what appears to be a hypersensitivity type of intrahepatic
cholestasis, as it is not dose related. The phenothiazine group, and in particular
chlorpromazine, are associated with this reaction.

Preventive Measures

Workers who have any disorder of the liver or gall bladder, or a past history of jaundice,
should not handle or be exposed to potentially hepatotoxic agents. Similarly, those who are
receiving any drug which is potentially injurious to the liver should not be exposed to other
hepatic poisons, and those who have received chloroform or trichloroethylene as an anaesthetic
should avoid exposure for a subsequent interval. The liver is particularly sensitive to injury
during pregnancy, and exposure to potentially hepatotoxic agents should be avoided at this
time. Workers who are exposed to potentially hepatotoxic chemicals should avoid alcohol.
The general principle to be observed is the avoidance of a second potentially hepatotoxic
agent where there has to be exposure to one. A balanced diet with an adequate intake of first
class protein and essential food factors affords protection against the high incidence of
cirrhosis seen in some tropical countries. Health education should stress the importance of
moderation in the consumption of alcohol in protecting the liver from fatty infiltration and
cirrhosis. The maintenance of good general hygiene is invaluable in protecting against
infections of the liver like hepatitis, hydatid disease and schistosomiasis.

Control measures for type B hepatitis in hospitals include precautions in the handling of blood
samples in the ward; adequate labelling and safe transport to the laboratory; precautions in
the laboratory, with the prohibition of mouth pipetting; the wearing of protective clothing and
disposable gloves; prohibition of eating, drinking or smoking in areas where infectious
patients or blood samples might be handled; extreme care in the servicing of non-disposable
dialysis equipment; and surveillance of patients and staff for hepatitis, with mandatory
screening at intervals for the presence of HBsAg. Vaccination against hepatitis A and B
viruses is an effective method of preventing infection in high-risk occupations.

Peptic Ulcer

Written by ILO Content Manager

Gastric and duodenal ulcers—collectively called “peptic ulcers”—are sharply circumscribed
losses of tissue, involving the mucosa, submucosa and muscular layer, occurring in areas of the
stomach or duodenum exposed to acid-pepsin gastric juice. Peptic ulcer is a common cause of
recurring or persistent upper abdominal distress, especially in young men. Duodenal ulcer
comprises about 80% of all peptic ulcers, and is commoner in men than in women; in gastric
ulcer the gender ratio is about one. It is important to distinguish between gastric ulcer and
duodenal ulcer because of differences in diagnosis, treatment and prognosis. The causes of
peptic ulcer have not been completely determined; many factors are believed to be involved,
and in particular nervous tension, the ingestion of certain drugs (such as salicylates and
corticoids) and hormonal factors may play roles.
Persons at Risk

Although peptic ulcer cannot be regarded as a specific occupational disease, it has a higher-
than-average incidence among professional people and those working under stress. Stress,
either physical or emotional, is believed to be an important factor in the aetiology of peptic
ulcer; prolonged emotional stress in various occupations may increase the secretion of
hydrochloric acid and the susceptibility of the gastroduodenal mucosa to injury.

The results of many investigations of the relationship between peptic ulcer and occupation
clearly reveal substantial variations in the incidence of ulcers in different occupations.
Numerous studies point to the likelihood of transport workers, such as drivers, motor
mechanics, tramcar conductors and railway employees, contracting ulcers. Thus, in one
survey covering over 3,000 railway workers, peptic ulcers were found to be more frequent in
train crew, signal operators and inspectors than in maintenance and administrative staff; shift
work, hazards and responsibility being noted as contributing factors. In another large-scale
survey, however, transport workers evidenced “normal” ulcer rates, the incidence being
highest in doctors and a group of unskilled workers. Fishers and sea pilots also tend to suffer
from peptic ulcer, predominantly of the gastric type. In a study of coal miners, the incidence
of peptic ulcers was found to be proportional to the arduousness of the work, being highest in
miners employed at the coal face. Reports of cases of peptic ulcer in welders and in workers
in a magnesium refining plant suggest that metal fumes are capable of inducing this condition
(although here the cause would appear to be not stress, but a toxic mechanism). Elevated
incidences have also been found among overseers and business executives, i.e., generally in
persons holding responsible posts in industry or trade; it is noteworthy that duodenal ulcers
account almost exclusively for the high incidence in these groups, the incidence of gastric
ulcer being average.

On the other hand, low incidences of peptic ulcer have been found among agricultural
workers and apparently also prevail among sedentary workers, students and draftsmen.

Thus, while the evidence regarding the occupational incidence of peptic ulcer appears to be
contradictory to a degree, there is agreement at least on one point, namely that the higher the
stresses of the occupation, the higher the ulcer rate. This general relationship can also be
observed in the developing countries, where, during the process of industrialization and
modernization, many workers are coming increasingly under the influence of stress and strain,
caused by such factors as congested traffic and difficult commuting conditions, introduction
of complex machinery, systems and technologies, heavier workloads and longer working
hours, all of which are found to be conducive to the development of peptic ulcer.

Diagnosis

The diagnosis of peptic ulcer depends upon obtaining a history of characteristic ulcer distress,
with relief of distress on ingestion of food or alkali, or other manifestations such as gastro-
intestinal bleeding; the most useful diagnostic technique is a thorough x-ray study of the
upper gastro-intestinal tract.

Attempts to gather data on the prevalence of this condition have been seriously hampered by
the fact that peptic ulcer is not a reportable disease, that workers with peptic ulcer frequently
put off consulting a physician about their symptoms, and that when they do so, the criteria for
diagnosis are not uniform. The detection of peptic ulcer in workers is, therefore, not simple.
Some excellent researchers, indeed, have had to rely on attempts to gather data from necropsy
records, questionnaires to physicians, and insurance company statistics.

Preventive Measures

From the viewpoint of occupational medicine, the prevention of peptic ulcer—seen as a
psychosomatic ailment with occupational connotations—must be based primarily on the
alleviation, wherever possible, of overstress and nervous tension due to factors directly or
indirectly related to work. Within the broad framework of this general principle, there is room for a
wide variety of measures, including, for example, action on the collective plane towards a
reduction of working hours, the introduction or improvement of facilities for rest and
relaxation, improvements in financial conditions and social security, and (hand in hand with
local authorities) steps to improve commuting conditions and make suitable housing available
within a reasonable distance of workplaces—not to mention direct action to pinpoint and
eliminate particular stress-generating situations in the working environment.

At the personal level, successful prevention depends equally on proper medical guidance and
on intelligent cooperation by the worker, who should have an opportunity of seeking advice
on work-connected and other personal problems.

The liability of individuals to contract peptic ulcers is heightened by various occupational
factors and personal attributes. If these factors can be recognized and understood, and above
all, if the reasons for the apparent correlation between certain occupations and high ulcer rates
can be clearly demonstrated, the chances of successful prevention, and treatment of relapses,
will be greatly enhanced. A possible Helicobacter infection should also be eradicated. In the
meantime, as a general precaution, the implications of a past history of peptic ulcer should be
borne in mind by persons conducting pre-employment or periodic examinations, and efforts
should be made not to place—or to leave—the workers concerned in jobs or situations where
they will be exposed to severe stresses, particularly of a nervous or psychological nature.

Liver Cancer

Written by ILO Content Manager

The predominant type of malignant tumour of the liver (ICD-9 155) is hepatocellular
carcinoma (hepatoma; HCC), i.e., a malignant tumour of the liver cells. Cholangiocarcinomas
are tumours of the intrahepatic bile ducts. They represent some 10% of liver cancers in the US
but may account for up to 60% elsewhere, such as in north-eastern Thai populations (IARC
1990). Angiosarcomas of the liver are very rare and very aggressive tumours, occurring
mostly in men. Hepatoblastoma, a rare embryonal cancer, occurs in early life and shows little
geographic or ethnic variation.

The prognosis for HCC depends on the size of the tumour and on the extent of cirrhosis,
metastases, lymph node involvement, vascular invasion and presence/absence of a capsule.
HCCs tend to recur after resection. Small HCCs are resectable, with a five-year survival of
40-70%. Liver transplantation results in about 20% survival after two years for patients with
advanced HCC. For patients with less advanced HCC, the prognosis after transplantation is
better. For hepatoblastomas, complete resection is possible in 50-70% of the children. Cure
rates after resection range from 30-70%. Chemotherapy can be used both pre- and
postoperatively. Liver transplantation may be indicated for unresectable hepatoblastomas.

Cholangiocarcinomas are multifocal in more than 40% of the patients at the time of diagnosis.
Lymph node metastases occur in 30-50% of these cases. The response rates to chemotherapy
vary widely, but usually are less than 20% successful. Surgical resection is possible in only a
few patients. Radiation therapy has been used as the primary treatment or adjuvant therapy,
and may improve survival in patients who have not undergone a complete resection. Five-year
survival rates are less than 20%. Angiosarcoma patients usually present with distant metastases.
Resection, radiation therapy, chemotherapy and liver transplantation are, in most cases,
unsuccessful. Most patients die within six months of diagnosis (Lotze, Flickinger and Carr
1993).

An estimated 315,000 new cases of liver cancer occurred globally in 1985, with a clear
absolute and relative preponderance in populations of developing countries, except in Latin
America (IARC 1994a; Parkin, Pisani and Ferlay 1993). The average annual incidence of
liver cancer shows considerable variation across cancer registries worldwide. During the
1980s, average annual incidence ranged from 0.8 in men and 0.2 in women in Maastricht, The
Netherlands, to 90.0 in men and 38.3 in women in Khon Kaen, Thailand, per 100,000 of
population, standardized to the standard world population. Rates were high in China, Japan, the
rest of East Asia and Africa, while rates in Latin America, North America, Europe and Oceania
were lower, except among New Zealand Maoris (IARC 1992). The geographic distribution of liver
cancer is correlated with the distribution of the prevalence of chronic carriers of hepatitis B
surface antigen and also with the distribution of local levels of aflatoxin contamination of
foodstuffs (IARC 1990). Male-to-female ratios in incidence are usually between 1 and 3, but
may be higher in high-risk populations.
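
The rates quoted above are directly standardized to a world standard population; that is, the age-specific rates of each registry are averaged using a fixed set of age weights so that populations with different age structures can be compared. A minimal sketch of the calculation follows, using entirely hypothetical age bands, rates and weights rather than any registry's actual data.

# Direct age-standardization of an incidence rate (illustrative only).
# The age bands, age-specific rates and standard-population weights below
# are hypothetical and do not come from the registries cited in the text.
age_specific_rates = [0.5, 2.0, 15.0, 60.0]    # cases per 100,000 per year in each age band
standard_weights   = [0.35, 0.30, 0.25, 0.10]  # proportions of the world standard population

asr = sum(rate * weight for rate, weight in zip(age_specific_rates, standard_weights))
print(asr)  # age-standardized rate per 100,000 (about 10.5 with these numbers)
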

Statistics on the mortality and incidence of liver cancer by social class indicate a tendency of
excess risk to concentrate in the lower socio-economic strata, but this gradient is not observed
in all populations.

The established risk factors for primary liver cancer in humans include aflatoxin-
contaminated food, chronic infection with hepatitis B virus (IARC 1994b), chronic infection
with hepatitis C virus (IARC 1994b), and heavy consumption of alcoholic beverages (IARC
1988). HBV is responsible for an estimated 50-90% of hepatocellular carcinoma incidence in
high-risk populations, and for 1-10% in low-risk populations. Oral contraceptives are a further
suspected factor. The evidence implicating tobacco smoking in the aetiology of liver cancer is
insufficient (Higginson, Muir and Munoz 1992).

The substantial geographical variation in the incidence of liver cancer suggests that a high
proportion of liver cancers might be preventable. The preventive measures include HBV
vaccination (estimated potential theoretical reduction in incidence of roughly 70% in endemic
areas), reduction of contamination of food by mycotoxins through improved methods of
harvesting and dry storage of crops (40% reduction in endemic areas), and reduction of
consumption of alcoholic beverages (15% reduction in Western countries; IARC 1990).

Liver cancer excesses have been reported in a number of occupational and industrial groups in
different countries. Some of the positive associations are readily explained by workplace
exposures such as the increased risk of liver angiosarcoma in vinyl chloride workers (see
below). For other high-risk jobs, such as metal work, construction painting, and animal feed
processing, the connection with workplace exposures is not firmly established and is not
found in all studies, but could well exist. For others, such as service workers, police officers,
guards, and governmental workers, direct workplace carcinogens may not explain the excess.
Cancer data for farmers do not provide many clues for occupational aetiologies in liver
cancer. In a review of 13 studies involving 510 cases or deaths of liver cancer among farmers
(Blair et al. 1992), a slight deficit (aggregated risk ratio 0.89; 95% confidence interval 0.81-
0.97) was observed.
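
The aggregated risk ratio quoted from Blair et al. (1992) is the kind of figure obtained by pooling individual study estimates, typically on the logarithmic scale with each study weighted by the inverse of its variance. The sketch below illustrates the principle with three invented study results; it does not reproduce the actual meta-analysis of the 13 studies.

# Fixed-effect (inverse-variance) pooling of risk ratios on the log scale.
# The three study results below are invented for illustration only.
import math

studies = [(0.85, 0.70, 1.03), (0.95, 0.80, 1.13), (0.88, 0.75, 1.03)]  # (RR, lower, upper 95% CI)

weights, log_rrs = [], []
for rr, lower, upper in studies:
    se = (math.log(upper) - math.log(lower)) / (2 * 1.96)  # standard error of the log RR
    weights.append(1.0 / se ** 2)
    log_rrs.append(math.log(rr))

pooled_log = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5
print(round(math.exp(pooled_log), 2),                     # pooled risk ratio
      round(math.exp(pooled_log - 1.96 * pooled_se), 2),  # lower 95% limit
      round(math.exp(pooled_log + 1.96 * pooled_se), 2))  # upper 95% limit
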

Some of the clues provided by industry- or job-specific epidemiological studies do suggest
that occupational exposures may have a role in the induction of liver cancer. Minimization of
certain occupational exposures therefore would be instrumental in the prevention of liver
cancer in occupationally exposed populations. As a classical example, occupational exposure
to vinyl chloride has been shown to cause angiosarcoma of the liver, a rare form of liver
cancer (IARC 1987). As a result, vinyl chloride exposure has been regulated in a large
number of countries. There is increasing evidence that chlorinated hydrocarbon solvents may
cause liver cancer. Aflatoxins, chlorophenols, ethylene glycol, tin compounds, insecticides
and some other agents have been associated with the risk of liver cancer in epidemiological
studies. Numerous chemical agents occurring in occupational settings have caused liver
cancer in animals and may therefore be suspected of being liver carcinogens in humans. Such
agents include aflatoxins, aromatic amines, azo dyes, benzidine-based dyes, 1,2-
dibromoethane, butadiene, carbon tetrachloride, chlorobenzenes, chloroform, chlorophenols,
diethylhexyl phthalate, 1,2-dichloroethane, hydrazine, methylene chloride, N-nitrosoamines, a
number of organochlorine pesticides, perchloroethylene, polychlorinated biphenyls and
toxaphene.

Pancreatic Cancer

Written by ILO Content Manager

Pancreatic cancer (ICD-9 157; ICD-10 C25), a highly fatal malignancy, ranks amongst the 15
most common cancers globally but belongs to the ten most common cancers in the
populations of developed countries, accounting for 2 to 3% of all new cases of cancer (IARC
1993). An estimated 185,000 new cases of pancreatic cancer occurred globally in 1985
(Parkin, Pisani and Ferlay 1993). The incidence rates of pancreatic cancer have been
increasing in developed countries. In Europe, the increase has levelled off, except in the UK
and some Nordic countries (Fernandez et al. 1994). The incidence and mortality rates rise
steeply with advancing age between 30 and 70 years. The age-adjusted male/female ratio of
new cases of pancreatic cancer is 1.6/1 in developed countries but only 1.1/1 in developing
countries.

High annual incidence rates of pancreatic cancer (up to 30/100,000 for men; 20/100,000 for
women) in the period 1960-85 have been recorded for New Zealand Maoris, Hawaiians and
Black populations in the US. Regionally, the highest age-adjusted rates in 1985 (over
7/100,000 for men and 4/100,000 for women) were reported for both genders in Japan, North
America, Australia, New Zealand, and Northern, Western and Eastern Europe. The lowest
rates (up to 2/100,000 for both men and women) were reported in the regions of West and
Middle Africa, South-eastern Asia, Melanesia, and in temperate South America (IARC 1992;
Parkin, Pisani and Ferlay 1993).
Comparisons between populations in time and space are subject to several cautions and
interpretation difficulties because of variations in diagnostic conventions and technologies
(Mack 1982).

The vast majority of pancreatic cancers occur in the exocrine pancreas. The major symptoms
are abdominal and back pain and weight loss. Further symptoms include anorexia, diabetes
and obstructive jaundice. Symptomatic patients are subjected to procedures such as a series of
blood and urine tests, ultrasound, computerized tomography, cytological examination and
pancreatoscopy. Most patients have metastases at diagnosis, which makes their prognosis
bleak.

Only 15% of patients with pancreatic cancer have operable tumours. Local recurrence and distant
metastases occur frequently after surgery. Radiation therapy and chemotherapy do not bring
about significant improvements in survival except when combined with surgery on localized
carcinomas. Palliative procedures provide little benefit. Despite some diagnostic
improvements, survival remains poor. During the period 1983-85, the five-year average
survival in 11 European populations was 3% for men and 4% for women (IARC 1995). Very
early detection and diagnosis or identification of high-risk individuals may improve the
success of surgery. The efficacy of screening for pancreatic cancer has not been determined.

Mortality and incidence of pancreatic cancer do not reveal a consistent global pattern across
socio-economic categories.

The dismal picture offered by diagnostic problems and treatment inefficacy is completed by
the fact that the causes of pancreatic cancer are largely unknown, which effectively hampers
the prevention of this fatal disease. The only established cause of pancreatic cancer is
tobacco smoking, which explains about 20-50% of the cases, depending on the smoking
patterns of the population. It has been estimated that elimination of tobacco smoking would
decrease the incidence of pancreatic cancer by about 30% worldwide (IARC 1990). Alcohol
consumption and coffee drinking have been suspected as increasing the risk of pancreatic
cancer. On closer scrutiny of the epidemiological data, however, coffee consumption appears
unlikely to be causally connected to pancreatic cancer. For alcoholic beverages, the only
probable causal link with pancreatic cancer is via pancreatitis, a condition associated with
heavy alcohol consumption. Pancreatitis is a rare but potent risk factor for pancreatic cancer. It is
possible that some as yet unidentified dietary factors might account for a part of the aetiology
of pancreatic cancer.

Workplace exposures may be causally associated with pancreatic cancer. Results of several
epidemiological studies that have linked industries and jobs with an excess of pancreatic
cancer are heterogeneous and inconsistent, and exposures shared by alleged high-risk jobs are
hard to identify. The population aetiologic fraction for pancreatic cancer from occupational
exposures in Montreal, Canada, has been estimated to lie between 0% (based on recognized
carcinogens) and 26% (based on a multi-site case-control study in the Montreal area)
(Siemiatycki et al. 1991).
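
The population aetiologic (attributable) fraction used in estimates of this kind is commonly obtained with Levin's formula from the prevalence of exposure in the population and the relative risk in the exposed. The sketch below is illustrative only; the prevalence and relative risk values are hypothetical and are not the figures underlying the Montreal estimate.

# Levin's formula for the population attributable (aetiologic) fraction.
# The exposure prevalence and relative risk below are hypothetical values,
# chosen only to illustrate the calculation.
def population_attributable_fraction(prevalence, relative_risk):
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Example: 10% of the population exposed, relative risk of 2.0
print(round(population_attributable_fraction(0.10, 2.0), 2))  # 0.09, i.e. about 9%
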

No single occupational exposure has been confirmed to increase the risk of pancreatic cancer.
Most of the occupational chemical agents that have been associated with an excess risk in
epidemiological studies emerged in one study only, suggesting that many of the associations
may be artefacts from confounding or chance. If no additional information, e.g., from animal
bio-assays, is available, the distinction between spurious and causal associations presents
formidable difficulties, given the general uncertainty about the causative agents involved in
the development of pancreatic cancer. Agents associated with increased risk include
aluminium, aromatic amines, asbestos, ashes and soot, brass dust, chromates, combustion
products of coal, natural gas and wood, copper fumes, cotton dust, cleaning agents, grain dust,
hydrogen fluoride, inorganic insulation dust, ionizing radiation, lead fumes, nickel
compounds, nitrogen oxides, organic solvents and paint thinners, paints, pesticides, phenol-
formaldehyde, plastic dust, polycyclic aromatic hydrocarbons, rayon fibres, stainless steel
dust, sulphuric acid, synthetic adhesives, tin compounds and fumes, waxes and polishes, and
zinc fumes (Kauppinen et al. 1995). Among these agents, only aluminium, ionizing radiation
and unspecified pesticides have been associated with excess risk in more than one study.

Digestive System References

Blair, A, S Hoar Zahm, NE Pearce, EF Heineman, and JF Fraumeni. 1992. Clues to cancer
aetiology from studies of farmers. Scand J Work Environ Health 18:209-215.

Fernandez, E, C LaVecchia, M Porta, E Negri, F Lucchini, and F Levi. 1994. Trends in
pancreatic cancer mortality in Europe, 1955-1989. Int J Cancer 57:786-792.

Higginson, J, CS Muir, and N Munoz. 1992. Human Cancer: Epidemiology and Environmental
Causes. Cambridge Monographs on Cancer Research. Cambridge: Cambridge Univ. Press.

International Agency for Research on Cancer (IARC). 1987. IARC Monographs on the
Evaluation of Carcinogenic Risks to Humans. An Updating of IARC Monographs Volumes 1
to 42, Suppl. 7. Lyon: IARC.

—. 1988. Alcohol drinking. IARC Monographs on the Evaluation of Carcinogenic Risks to
Humans, No. 44. Lyon: IARC.

—. 1990. Cancer: Causes, occurrence and control. IARC Scientific Publications, No. 100.
Lyon: IARC.

—. 1992. Cancer incidence in five continents. Vol. VI. IARC Scientific Publications, No. 120.
Lyon: IARC.

—. 1993. Trends in cancer incidence and mortality. IARC Scientific Publications, No. 121.
Lyon: IARC.

—. 1994a. Hepatitis viruses. IARC Monographs on the Evaluation of Carcinogenic Risks to
Humans, No. 59. Lyon: IARC.

—. 1994b. Occupational cancer in developing countries. IARC Scientific Publications, No.
129. Lyon: IARC.

—. 1995. Survival of cancer patients in Europe. The EUROCARE study. IARC Scientific
Publications, No. 132. Lyon: IARC.

Kauppinen, T, T Partanen, R Degerth, and A Ojajärvi. 1995. Pancreatic cancer and
occupational exposures. Epidemiology 6(5):498-502.

Lotze, MT, JC Flickinger, and BI Carr. 1993. Hepatobiliary Neoplasms. In Cancer: Principles
and Practice of Oncology, edited by VT DeVita Jr, S Hellman, and SA Rosenberg.
Philadelphia: JB Lippincott.

Mack, TM. 1982. Pancreas. In Cancer Epidemiology and Prevention, edited by D Schottenfeld
and JF Fraumeni. Philadelphia: WB Saunders.

Parkin, DM, P Pisani, and J Ferlay. 1993. Estimates of the worldwide incidence of eighteen
major cancers in 1985. Int J Cancer 54:594-606.

Siemiatycki, J, M Gerin, R Dewar, L Nadon, R Lakhani, D Begin, and L Richardson. 1991.
Associations between occupational circumstances and cancer. In Risk Factors for Cancer in
the Workplace, edited by J Siemiatycki. Boca Raton: CRC Press.



5. Mental Health
Chapter Editors: Joseph J. Hurrell, Lawrence R. Murphy, Steven L. Sauter and
Lennart Levi

Mood and Affect


Depression

Written by ILO Content Manager

Depression is an enormously important topic in the area of workplace mental health, not only
in terms of the impact depression can have on the workplace, but also the role the workplace
can play as an aetiological agent of the disorder.

Greenberg et al. (1993a) estimated that the economic burden of depression in the United States in 1990 was approximately US$43.7 billion. Of that total, 28% was attributable to the direct costs of medical care, while 55% derived from a combination of absenteeism and decreased productivity while at work. In another paper, the same authors
(1993b) note:

“two distinguishing features of depression are that it is highly treatable and not widely
recognized. The NIMH has noted that between 80% and 90% of individuals suffering
from a major depressive disorder can be treated successfully, but that only one in three
with the illness ever seeks treatment.… Unlike some other diseases, a very large share
of the total costs of depression falls on employers. This suggests that employers as a
group may have a particular incentive to invest in programs that could reduce the costs
associated with this illness.”
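
As a rough arithmetic check on these figures (the split is approximate, and the remaining share of roughly 17% covers cost components not itemized in the passage above), the quoted percentages translate into dollar amounts as follows:

$$0.28 \times \text{US\$43.7 billion} \approx \text{US\$12.2 billion (direct medical care)}$$
$$0.55 \times \text{US\$43.7 billion} \approx \text{US\$24.0 billion (absenteeism and reduced productivity at work)}$$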

Manifestations

Everyone feels sad or “depressed” from time to time, but a major depressive episode,
according to the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM
IV) (American Psychiatric Association 1994), requires that several criteria be met. A full
description of these criteria is beyond the scope of this article, but portions of criterion A,
which describes the symptoms, can give one a sense of what a true major depression looks
like:

A. Five (or more) of the following symptoms have been present during the same 2-week
period and represent a change from previous functioning; at least one of the symptoms is
number 1 or 2.

1. depressed mood most of the day, nearly every day
2. markedly diminished interest or pleasure in all, or almost all, activities most of the day, nearly every day
3. significant weight loss when not dieting or weight gain, or decrease or increase in appetite
nearly every day
4. insomnia or hypersomnia nearly every day
5. psychomotor agitation or retardation nearly every day
6. fatigue or loss of energy nearly every day
7. feelings of worthlessness or excessive or inappropriate guilt nearly every day
8. diminished ability to think or concentrate, or indecisiveness nearly every day
9. recurrent thoughts of death, recurrent suicidal ideation, with or without a plan, or a suicide
attempt.
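
Purely as an illustration of the counting rule embodied in criterion A (and emphatically not as a diagnostic instrument), the logic can be sketched in a few lines of code; the symptom numbers refer to the nine items listed above, and the two-week duration and the change from previous functioning are assumed to have been established separately:

# Illustrative sketch of the DSM-IV criterion A counting rule for a major
# depressive episode: five or more of the nine symptoms above, at least one
# of which is item 1 (depressed mood) or item 2 (loss of interest/pleasure).
# Hypothetical helper for illustration only; not a clinical tool.
CORE_SYMPTOMS = {1, 2}
ALL_SYMPTOMS = set(range(1, 10))

def meets_criterion_a(present_symptoms):
    """Return True if the reported symptom numbers satisfy criterion A."""
    present = set(present_symptoms) & ALL_SYMPTOMS
    return len(present) >= 5 and bool(present & CORE_SYMPTOMS)

print(meets_criterion_a({2, 4, 5, 6, 8}))   # True: five symptoms, one of them core
print(meets_criterion_a({3, 4, 5, 6, 8}))   # False: neither core symptom present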

Besides giving one an idea of the discomfort suffered by a person with depression, a review of these criteria also shows the many ways in which depression can impact negatively on the workplace. It is also important to note the wide variation in symptoms. One depressed person may present as barely able to get out of bed, while another may be so anxious as to be hardly able to sit still, describing the feeling as crawling out of one's skin or losing one's mind. Sometimes multiple physical aches and pains without medical explanation may be a hint of depression.

Prevalence

The following passage from Mental Health in the Workplace (Kahn 1993) describes the
pervasiveness (and increase) of depression in the workplace:

“Depression … is one of the most common mental health problems in the workplace.
Recent research … suggests that in industrialized countries the incidence of depression
has increased with each decade since 1910, and the age at which someone is likely to
become depressed has dropped with every generation born after 1940. Depressive
illnesses are common and serious, taking a tremendous toll on both workers and
workplace. Two out of ten workers can expect a depression during their lifetime, and
women are one and a half times more likely than men to become depressed. One out
of ten workers will develop a clinical depression serious enough to require time off
from work.”

Thus, in addition to the qualitative aspects of depression, the quantitative/epidemiological aspects of the disease make it a major concern in the workplace.

Related Illnesses

Major depressive disorder is only one of a number of closely related illnesses, all under the
category of “mood disorders”. The most well known of these is bipolar (or “manic-
depressive”) illness, in which the patient has alternating periods of depression and mania,
which includes a feeling of euphoria, a decreased need for sleep, excessive energy and rapid
speech, and can progress to irritability and paranoia.

There are several different versions of bipolar disorder, depending on the frequency and
severity of the depressive and manic episodes, the presence or absence of psychotic features
(delusions, hallucinations) and so on. Similarly, there are several different variations on the
theme of depression, depending on severity, presence or absence of psychosis, and types of
symptom most prominent. Again, it is beyond the scope of this article to delineate all of these,
but the reader is again referred to DSM IV for a complete listing of all the different forms of
mood disorder.
Differential Diagnosis

The differential diagnosis of major depression involves three major areas: other medical
disorders, other psychiatric disorders and medication-induced symptoms.

Just as important as the fact that many patients with depression first present to their general
practitioners with physical complaints is the fact that many patients who initially present to a
mental health clinician with depressive complaints may have an undiagnosed medical illness
causing the symptoms. Some of the most common illnesses causing depressive symptoms are
endocrine (hormonal), such as hypothyroidism, adrenal problems or changes related to
pregnancy or the menstrual cycle. Particularly in older patients, neurological diseases, such as
dementia, strokes or Parkinson’s disease, become more prominent in the differential
diagnosis. Other illnesses that can present with depressive symptoms are mononucleosis,
AIDS, chronic fatigue syndrome and some cancers and joint diseases.

Psychiatrically, the disorders which share many common features with depression are the
anxiety disorders (including generalized anxiety, panic disorder and post-traumatic stress
disorder), schizophrenia and drug and alcohol abuse. The list of medications that can cause
depressive symptoms is quite lengthy, and includes pain medications, some antibiotics, many
anti-hypertensives and cardiac drugs, and steroids and hormonal agents.

For further detail on all three areas of the differential diagnosis of depression, the reader is
referred to Kaplan and Sadock’s Synopsis of Psychiatry (1994), or the more detailed
Comprehensive Textbook of Psychiatry (Kaplan and Sadock 1995).

Workplace Aetiologies

Much can be found elsewhere in this Encyclopaedia regarding workplace stress, but what is
important in this article is the manner in which certain aspects of stress can lead to depression.
There are many schools of thought regarding the aetiology of depression, including
biological, genetic and psychosocial. It is in the psychosocial realm that many factors relating
to the workplace can be found.

Issues of loss or threatened loss can lead to depression and, in today’s climate of downsizing,
mergers and shifting job descriptions, are common problems in the work environment.
Another result of frequently changing job duties and the constant introduction of new
technologies is to leave workers feeling incompetent or inadequate. According to
psychodynamic theory, as the gap between one’s current self image and “ideal self” widens,
depression ensues.

An animal experimental model known as “learned helplessness” can also be used to explain the aetiological link between stressful workplace environments and depression. In these
experiments, animals were exposed to electric shocks from which they could not escape. As
they learned that none of the actions they took had any effect on their eventual fate, they
displayed increasingly passive and depressive behaviours. It is not difficult to extrapolate this
model to today’s workplace, where so many feel a sharply decreasing amount of control over
both their day-to-day activities and long-range plans.
Treatment

In light of the aetiological link of the workplace to depression described above, a useful way
of looking at the treatment of depression in the workplace is the primary, secondary, tertiary
model of prevention. Primary prevention, or trying to eliminate the root cause of the problem,
entails making fundamental organizational changes to ameliorate some of the stressors
described above. Secondary prevention, or trying to “immunize” the individual from
contracting the illness, would include such interventions as stress management training and
lifestyle changes. Tertiary prevention, or helping to return the individual to health, involves
both psychotherapeutic and psychopharmacological treatment.

There is an increasing array of psychotherapeutic approaches available to the clinician today. The psychodynamic therapies look at the patient’s struggles and conflicts in a loosely
structured format that allows explorations of whatever material may come up in a session,
however tangential it may initially appear. Some modifications of this model, with boundaries
set in terms of number of sessions or breadth of focus, have been made to create many of the
newer forms of brief therapy. Interpersonal therapy focuses more exclusively on the patterns
of the patient’s relationships with others. An increasingly popular form of therapy is cognitive
therapy, which is driven by the precept, “What you think is how you feel”. Here, in a very
structured format, the patient’s “automatic thoughts” in response to certain situations are
examined, questioned and then modified to produce a less maladaptive emotional response.

As rapidly as the psychotherapies have developed, the psychopharmacological armamentarium has probably grown even faster. In the few decades before the 1990s, the
most common medications used to treat depression were the tricyclics (imipramine,
amitriptyline and nortriptyline are examples) and the monoamine oxidase inhibitors (Nardil,
Marplan and Parnate). These medications act on neurotransmitter systems thought to be
involved with depression, but also affect many other receptors, resulting in a number of side
effects. In the early 1990s, several new medications (fluoxetine, sertraline, paroxetine (Paxil), venlafaxine (Effexor), fluvoxamine and nefazodone) were introduced. These medications have enjoyed rapid growth in use because they are “cleaner” (they bind more specifically to depression-related neurotransmitter sites) and can thus treat depression effectively while causing far fewer side effects.

Summary

Depression is extremely important in the world of workplace mental health, both because of
depression’s impact on the workplace, and the workplace’s impact on depression. It is a
highly prevalent disease, and very treatable; but unfortunately frequently goes undetected and
untreated, with serious consequences for both the individual and the employer. Thus,
increased detection and treatment of depression can help lessen individual suffering and
organizational losses.

Work-Related Anxiety

Written by ILO Content Manager

Anxiety disorders as well as subclinical fear, worry and apprehension, and associated stress-
related disorders such as insomnia, appear to be pervasive and increasingly prevalent in
workplaces in the 1990s—so much so, in fact, that the Wall Street Journal has referred to the
1990s as the work-related “Age of Angst” (Zachary and Ortega 1993). Corporate downsizing,
threats to existing benefits, lay-offs, rumours of impending lay-offs, global competition, skill
obsolescence and “de-skilling”, re-structuring, re-engineering, acquisitions, mergers and
similar sources of organizational turmoil have all been recent trends that have eroded
workers’ sense of job security and have contributed to palpable, but difficult to precisely
measure, “work-related anxiety” (Buono and Bowditch 1989). Although there appear to be
some individual differences and situational moderator variables, Kuhnert and Vance (1992)
reported that both blue-collar and white-collar manufacturing employees who reported more
“job insecurity” indicated significantly more anxiety and obsessive-compulsive symptoms on
a psychiatric checklist. For much of the 1980s and accelerating into the 1990s, the transitional
organizational landscape of the US marketplace (or “permanent whitewater”, as it has been
described) has undoubtedly contributed to this epidemic of work-related stress disorders,
including, for example, anxiety disorders (Jeffreys 1995; Northwestern National Life 1991).

The problems of occupational stress and work-related psychological disorders appear to be global in nature, but there is a dearth of statistics outside of the United States documenting
their nature and extent (Cooper and Payne 1992). The international data that are available,
mostly from European countries, seem to confirm similar adverse mental health effects of job
insecurity and high-strain employment on workers as those seen in US workers (Karasek and
Theorell 1990). However, because of the very real stigma associated with mental disorders in
most other countries and cultures, many, if not most, psychological symptoms, such as
anxiety, related to work (outside of the United States) go unreported, undetected and untreated
(Cooper and Payne 1992). In some cultures, these psychological disorders are somatized and
manifested as “more acceptable” physical symptoms (Katon, Kleinman and Rosen 1982). A
study of Japanese government workers has identified occupational stressors such as workload
and role conflict as significant correlates of mental health in these Japanese workers (Mishima
et al. 1995). Further studies of this kind are needed to document the impact of psychosocial
job stressors on workers’ mental health in Asia, as well as in the developing and post-
Communist countries.

Definition and Diagnosis of Anxiety Disorders

Anxiety disorders are evidently among the most prevalent of mental health problems
afflicting, at any one time, perhaps 7 to 15% of the US adult population (Robins et al. 1981).
Anxiety disorders are a family of mental health conditions which include agoraphobia (or,
loosely, “houseboundness”), phobias (irrational fears), obsessive-compulsive disorder, panic
attacks and generalized anxiety. According to the American Psychiatric Association’s
Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM IV), symptoms of a
generalized anxiety disorder include feelings of “restlessness or feeling keyed up or on edge”,
fatigue, difficulties with concentration, excess muscle tension and disturbed sleep (American
Psychiatric Association 1994). An obsessive-compulsive disorder is defined as either
persistent thoughts or repetitive behaviours that are excessive/unreasonable, cause marked
distress, are time consuming and can interfere with a person’s functioning. Also, according to
DSM IV, panic attacks, defined as brief periods of intense fear or discomfort, are not actually
disorders per se but may occur in conjunction with other anxiety disorders. Technically, the
diagnosis of an anxiety disorder can be made only by a trained mental health professional
using accepted diagnostic criteria.
Occupational Risk Factors for Anxiety Disorders

There is a paucity of data pertaining to the incidence and prevalence of anxiety disorders in
the workplace. Furthermore, since the aetiology of most anxiety disorders is multifactorial, we
cannot rule out the contribution of individual genetic, developmental and non-work factors in
the genesis of anxiety conditions. It seems likely that both work-related organizational and
such individual risk factors interact, and that this interaction determines the onset, progression
and course of anxiety disorders.

The term job-related anxiety implies that there are work conditions, tasks and demands,
and/or related occupational stressors that are associated with the onset of acute and/or chronic
states of anxiety or manifestations of anxiety. These factors may include an overwhelming
workload, the pace of work, deadlines and a perceived lack of personal control. The demand-
control model predicts that workers in occupations which offer little personal control and
expose employees to high levels of psychological demand would be at risk of adverse health
outcomes, including anxiety disorders (Karasek and Theorell 1990). A study of pill
consumption (mostly tranquilizers) reported for Swedish male employees in high-strain
occupations supported this prediction (Karasek 1979). Certainly, the evidence for an increased
prevalence of depression in certain high-strain occupations in the United States is now
compelling (Eaton et al. 1990). More recent epidemiological studies, in addition to theoretical
and biochemical models of anxiety and depression, have linked these disorders not only by
identifying their co-morbidity (40 to 60%), but also in terms of more fundamental
commonalities (Ballenger 1993). Hence, the Encyclopaedia chapter on job factors associated
with depression may provide pertinent clues to occupational and individual risk factors also
associated with anxiety disorders. In addition to risk factors associated with high-strain work,
a number of other workplace variables contributing to employee psychological distress,
including an increased prevalence of anxiety disorders, have been identified and are briefly
summarized below.
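
The demand-control model referred to above is often summarized as a two-by-two classification of jobs. The sketch below shows that classification only schematically; the numeric scales and the fixed midpoint cut-off are illustrative assumptions (in published studies, demand and decision latitude are usually dichotomized at the sample median):

# Schematic sketch of the Karasek demand-control job quadrants (Karasek and
# Theorell 1990). The 0-10 scales and fixed cut-off are assumptions made for
# illustration; the model itself does not prescribe them.
def job_quadrant(psychological_demand, decision_latitude, cutoff=5.0):
    """Classify a job into one of the four demand-control quadrants."""
    high_demand = psychological_demand > cutoff
    high_control = decision_latitude > cutoff
    if high_demand and not high_control:
        return "high-strain"   # the quadrant linked to adverse health outcomes
    if high_demand and high_control:
        return "active"
    if high_control:
        return "low-strain"
    return "passive"

# Example: heavy psychological demands combined with little say over the work
print(job_quadrant(psychological_demand=8, decision_latitude=2))  # "high-strain"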

Individuals employed in dangerous lines of work, such as law enforcement and firefighting,
characterized by the probability that a worker will be exposed to a hazardous agent or
injurious activity, would also seem to be at risk of heightened and more prevalent states of
psychological distress, including anxiety. However, there is some evidence that individual
workers in such dangerous occupations who view their work as “exhilarating” (as opposed to
dangerous) may cope better in terms of their emotional responses to work (McIntosh 1995).
Nevertheless, an analysis of stress symptomatology in a large group of professional
firefighters and paramedics identified a central feature of perceived apprehension or dread.
This “anxiety stress pathway” included subjective reports of “being keyed up and jittery” and
“being uneasy and apprehensive.” These and similar anxiety-related complaints were
significantly more prevalent and frequent in the firefighter/paramedic group relative to a male
community comparison sample (Beaton et al. 1995).

Another worker population evidently at risk of experiencing high, and at times debilitating, levels of anxiety is professional musicians. Their work is subject to intense scrutiny by supervisors; they must perform before the public and cope with performance and pre-performance anxiety or “stage fright”; and they are expected (by others as well as by themselves) to produce “note-perfect performances”
(Sternbach 1995). Other occupational groups, such as theatrical performers and even teachers
who give public performances, may have acute and chronic anxiety symptoms related to their
work, but very little data on the actual prevalence or significance of such occupational anxiety
disorders have been collected.

Another group affected by work-related anxiety, and for which we have little data, is “computer phobics”, people who have responded anxiously to the advent of computing technology (Stiles 1994). Even though each generation of computer software is arguably more “user-friendly”, many workers are uneasy, while others are all but panicked by the challenges of “techno-stress”. Some fear personal and professional failure associated with their inability to acquire
the necessary skills to cope with each successive generation of technology. Finally, there is
evidence that employees subjected to electronic performance monitoring perceive their jobs as
more stressing and report more psychological symptoms, including anxiety, than workers not
so monitored (Smith et al. 1992).

Interaction of Individual and Occupational Risk Factors for Anxiety

It is likely that individual risk factors interact with, and may potentiate, the above-cited organizational risk factors in the onset, progression and course of anxiety disorders. For example, an individual employee with a “Type A personality” may be more prone to anxiety and other mental health problems in high-strain occupational settings (Shima et al. 1995). To offer a more specific example, an overly responsible paramedic with a “rescue personality” may be more on edge and hypervigilant while on duty than another paramedic with a more
philosophical work attitude: “You can’t save them all” (Mitchell and Bray 1990). Individual
worker personality variables may also serve to potentially buffer attendant occupational risk
factors. For instance, Kobasa, Maddi and Kahn (1982) reported that corporate managers with
“hardy personalities” seem better able to cope with work-related stressors in terms of health
outcomes. Thus, individual worker variables need to be considered and evaluated within the
context of the particular occupational demands to predict their likely interactive impact on a
given employee’s mental health.

Prevention and Remediation of Work-related Anxiety

Many of the US and global workplace trends cited at the beginning of this article seem likely
to persist into the foreseeable future. These workplace trends will adversely impact workers’
psychological and physical health. Psychological job enhancement, in terms of interventions
and workplace redesign, may deter and prevent some of these adverse effects. Consistent with
the demand-control model, workers’ well-being can be improved by increasing their decision
latitude by, for example, designing and implementing a more horizontal organizational
structure (Karasek and Theorell 1990). Many of the recommendations made by NIOSH
researchers, such as improving workers’ sense of job security and decreasing work role
ambiguity, if implemented, would also likely reduce job strain and work-related psychological
disorders considerably, including anxiety disorders (Sauter, Murphy and Hurrell 1992).

In addition to organizational policy changes, the individual employee in the modern workplace also has a personal responsibility to manage his or her own stress and anxiety.
Some common and effective coping strategies employed by US workers include separating
work and non-work activities, getting sufficient rest and exercise, and pacing oneself at work
(unless, of course, the job is machine paced). Other helpful cognitive-behavioural alternatives
in self-managing and preventing anxiety disorders include deep-breathing techniques,
biofeedback-aided relaxation training, and meditation (Rosch and Pelletier 1987). In certain
cases medications may be necessary to treat a severe anxiety disorder. These medications,
including antidepressants and other anxiolytic agents, are generally available only by
prescription.

Post-Traumatic Stress Disorder and its Relation to Occupational Health and Injury Prevention

Written by ILO Content Manager

Beyond the broad concept of stress and its relationship to general health issues, there has been
little attention to the role of psychiatric diagnosis in the prevention and treatment of the
mental health consequences of work-related injuries. Most of the work on job stress has been
concerned with the effects of exposure to stressful conditions over time, rather than to
problems associated with a specific event such as a traumatic or life-threatening injury or the
witnessing of an industrial accident or act of violence. At the same time, the diagnosis of Post-traumatic Stress Disorder (PTSD), which has gained considerable credibility and interest since the mid-1980s, is being applied more widely in contexts beyond war trauma and crime victimization. With respect to the workplace, PTSD has begun to appear as the
medical diagnosis in cases of occupational injury and as the emotional outcome of exposure
to traumatic situations occurring in the workplace. It is often the subject of controversy and
some confusion with respect to its relationship to work conditions and the responsibility of the
employer when claims of psychological injury are made. The occupational health practitioner
is called upon increasingly to advise on company policy in the handling of these exposures
and injury claims, and to render medical opinions with respect to the diagnosis, treatment and
ultimate job status of these employees. Familiarity with PTSD and its related conditions is
therefore increasingly important for the occupational health practitioner.

The following topics will be reviewed in this article:

 differential diagnosis of PTSD with other conditions such as primary depression and anxiety
disorders
 relationship of PTSD to stress-related somatic complaints
 prevention of post-traumatic stress reactions in survivors and witnesses of psychologically
traumatic events occurring in the workplace
 prevention and treatment of complications of work injury related to post-traumatic stress.

Post-traumatic Stress Disorder affects people who have been exposed to traumatizing events
or conditions. It is characterized by symptoms of numbing, psychological and social
withdrawal, difficulties controlling emotion, especially anger, and intrusive recollection and
reliving of experiences of the traumatic event. By definition, a traumatizing event is one that
is outside the normal range of everyday life events and is experienced as overwhelming by the
individual. A traumatic event usually involves a threat to one’s own life or to someone close,
or the witnessing of an actual death or serious injury, especially when this occurs suddenly or
violently.

The psychiatric antecedents of our current concept of PTSD go back to the descriptions of
“battle fatigue” and “shell shock” during and after the World Wars. However, the causes,
symptoms, course and effective treatment of this often debilitating condition were still poorly
understood when tens of thousands of Vietnam-era combat veterans began to appear in the US
Veterans Administration Hospitals, offices of family doctors, jails and homeless shelters in
the 1970s. Due in large part to the organized effort of veterans’ groups, in collaboration with
the American Psychiatric Association, PTSD was first identified and described in 1980 in the
3rd edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM III)
(American Psychiatric Association 1980). The condition is now known to affect a wide range
of trauma victims, including survivors of civilian disasters, victims of crime, torture and
terrorism, and survivors of childhood and domestic abuse. Although changes in the
classification of the disorder are reflected in the current diagnostic manual (DSM IV), the
diagnostic criteria and symptoms remain essentially unchanged (American Psychiatric
Association 1994).

Diagnostic Criteria for Post-Traumatic Stress Disorder

A. The person has been exposed to a traumatic event in which both of the following were
present:

1. The person experienced, witnessed, or was confronted with an event or events that involved
actual or threatened death or serious injury, or a threat to the physical integrity of self or
others.
2. The person’s response involved intense fear, helplessness or horror.

B. The traumatic event is persistently re-experienced in one (or more) of the following ways:

1. Recurrent and intrusive distressing recollections of the event, including images, thoughts or
perceptions.
2. Recurrent distressing dreams of the event.
3. Acting or feeling as if the traumatic event were recurring.
4. Intense psychological distress at exposure to internal or external cues that symbolize or
resemble an aspect of the traumatic event.
5. Physiological reactivity on exposure to internal or external cues that symbolize or resemble
an aspect of the traumatic event.

C. Persistent avoidance of stimuli associated with the trauma and numbing of general
responsiveness (not present before the trauma), as indicated by three (or more) of the
following:

1. Efforts to avoid thoughts, feelings or conversations associated with the trauma.
2. Efforts to avoid activities, places or people that arouse recollections of the trauma.
3. Inability to recall an important aspect of the trauma.
4. Markedly diminished interest or participation in significant activities.
5. Feeling of detachment or estrangement from others.
6. Restricted range of affect (e.g., unable to have loving feelings).
7. Sense of a foreshortened future (e.g., does not expect to have a career, marriage, children or
a normal life span).
D. Persistent symptoms of increased arousal (not present before the trauma), as indicated by
two (or more) of the following:

1. Difficulty falling or staying asleep.
2. Irritability or outbursts of anger.
3. Difficulty concentrating.
4. Hypervigilance.
5. Exaggerated startle response.

E. Duration of the disturbance (symptoms in criteria B, C and D) is more than 1 month.

F. The disturbance causes clinically significant distress or impairment in social, occupational or other important areas of functioning.

Specify if:

Acute: if duration of symptoms is less than 3 months

Chronic: if duration of symptoms is 3 months or more.

Specify if:

With Delayed Onset: if onset of symptoms is at least 6 months after the stressor.

Psychological stress has achieved increasing recognition as an outcome of work-related hazards. The link between work hazards and post-traumatic stress was first established in the 1970s with the discovery of high incidence rates of PTSD in workers in law enforcement,
emergency medical, rescue and firefighting. Specific interventions have been developed to
prevent PTSD in workers exposed to job-related traumatic stressors such as mutilating injury,
death and use of deadly force. These interventions emphasize providing exposed workers with
education about normal traumatic stress reactions, and the opportunity to actively surface their
feelings and reactions with their peers. These techniques have become well established in
these occupations in the United States, Australia and many European nations. Job-related
traumatic stress, however, is not limited to workers in these high-risk industries. Many of the
principles of preventive intervention developed for these occupations can be applied to
programmes to reduce or prevent traumatic stress reactions in the general workforce.

Issues in Diagnosis and Treatment

Diagnosis

The key to the differential diagnosis of PTSD and traumatic-stress-related conditions is the
presence of a traumatic stressor. Although the stressor event must conform to criterion A—
that is, be an event or situation that is outside of the normal range of experience—individuals
respond in various ways to similar events. An event that precipitates a clinical stress reaction
in one person may not affect another significantly. Therefore, the absence of symptoms in
other similarly exposed workers should not cause the practitioner to discount the possibility of
a true post-trauma reaction in a particular worker. Individual vulnerability to PTSD has as
much to do with the emotional and cognitive impact of an experience on the victim as with the intensity of the stressor itself. A prime vulnerability factor is a history of psychological trauma due to a previous traumatic exposure or significant personal loss of some kind. When a symptom picture suggestive of PTSD is presented, it is important to establish whether an event that may satisfy the criterion for a trauma has occurred. This is particularly important because the victim him- or herself may not make the connection between the symptoms and the traumatic event. This failure to connect symptoms with their cause follows from the common “numbing” reaction, which may cause forgetting or dissociation of the event, and from the fact that symptom appearance is not infrequently delayed for weeks or months. Chronic and frequently severe depression, anxiety and somatic conditions are often the result of a failure to diagnose and treat. Thus, early diagnosis is particularly important because of the often hidden nature of
the condition, even to the sufferer him- or herself, and because of the implications for
treatment.

Treatment

Although the depression and anxiety symptoms of PTSD may respond to usual therapies such as pharmacotherapy, effective treatment differs from that usually recommended for these conditions. PTSD may be the most preventable of all psychiatric conditions and, in the
occupational health sphere, perhaps the most preventable of all work-related injuries. Because
its occurrence is linked so directly to a specific stressor event, treatment can focus on
prevention. If proper preventive education and counselling are provided soon after the
traumatic exposure, subsequent stress reactions can be minimized or prevented altogether.
Whether the intervention is preventive or therapeutic depends largely on timing, but the
methodology is essentially similar. The first step in successful treatment or preventive
intervention is allowing the victim to establish the connection between the stressor and his or
her symptoms. This identification and “normalization” of what are typically frightening and
confusing reactions is very important for reduction or prevention of symptoms. Once the
normalization of the stress response has been accomplished, treatment addresses the
controlled processing of the emotional and cognitive impact of the experience.

PTSD or conditions related to traumatic stress result from the sealing off of unacceptable or
unacceptably intense emotional and cognitive reactions to traumatic stressors. It is generally
considered that the stress syndrome can be prevented by providing the opportunity for
controlled processing of the reactions to the trauma before the sealing off of the trauma
occurs. Thus, prevention through timely and skilled intervention is the keystone for the
treatment of PTSD. These treatment principles may depart from the traditional psychiatric
approach to many conditions. Therefore, it is important that employees at risk of post-
traumatic stress reactions be treated by mental health professionals with specialized training
and experience in treating trauma-related conditions. The length of treatment is variable. It
will depend on the timing of the intervention, the severity of the stressor, symptom severity
and the possibility that a traumatic exposure may precipitate an emotional crisis linked to
earlier or related experiences. A further issue in treatment concerns the importance of group
treatment modalities. Victims of trauma can achieve enormous benefit from the support of
others who have shared the same or similar traumatic stress experience. This is of particular
importance in the workplace context, when groups of co-workers or entire work organizations
are affected by a tragic accident, act of violence or traumatic loss.

Prevention of Post-Traumatic Stress Reactions after Incidents of Workplace Trauma

A range of events or situations occurring in the workplace may put workers at risk of post-
traumatic stress reactions. These include violence or threat of violence, including suicide,
inter-employee violence and crime, such as armed robbery; fatal or severe injury; and sudden
death or medical crisis, such as heart attack. Unless properly managed, these situations can
cause a range of negative outcomes, including post-traumatic stress reactions that may reach
clinical levels, and other stress-related effects that will affect health and work performance,
including avoidance of the workplace, concentration difficulties, mood disturbances, social
withdrawal, substance abuse and family problems. These problems can affect not only line
employees but management staff as well. Managers are at particular risk because of conflicts
between their operational responsibilities, their feelings of personal responsibility for the
employees in their charge and their own sense of shock and grief. In the absence of clear
company policies and prompt assistance from health personnel to deal with the aftermath of
the trauma, managers at all levels may suffer from feelings of helplessness that compound
their own traumatic stress reactions.

Traumatic events in the workplace require a definite response from upper management in
close collaboration with health, safety, security, communications and other functions. A crisis
response plan fulfils three primary goals:

1. prevention of post-traumatic stress reactions by reaching affected individuals and groups before they have a chance to seal over
2. communication of crisis-related information in order to contain fears and control rumours
3. fostering of confidence that management is in control of the crisis and demonstrating
concern for employees’ welfare.

The methodology for the implementation of such a plan has been fully described elsewhere
(Braverman 1992a,b; 1993b). It emphasizes adequate communication between management
and employees, assembling of groups of affected employees and prompt preventive
counselling of those at highest risk for post-traumatic stress because of their levels of
exposure or individual vulnerability factors.

Managers and company health personnel must function as a team, remaining alert to signs of continued or delayed trauma-related stress in the weeks and months after the traumatic event.
These can be difficult to identify for manager and health professional alike, because post-
traumatic stress reactions are often delayed, and they can masquerade as other problems. For a
supervisor or for the nurse or counsellor who becomes involved, any signs of emotional
stress, such as irritability, withdrawal or a drop in productivity, may signal a reaction to a
traumatic stressor. Any change in behaviour, including increased absenteeism, or even a
marked increase in work hours (“workaholism”) can be a signal. Indications of drug or
alcohol abuse or change in moods should be explored as possibly linked to post-traumatic
stress. A crisis response plan should include training for managers and health professionals to
be alert for these signs so that intervention can be rendered at the earliest possible point.
Stress-related Complications of Occupational Injury

It has been our experience reviewing workers’ compensation claims up to five years post-
injury that post-traumatic stress syndromes are a common outcome of occupational injury
involving life-threatening or disfiguring injury, or assault and other exposures to crime. The
condition typically remains undiagnosed for years, its origins unsuspected by medical
professionals, claims administrators and human resource managers, and even the employee
him- or herself. When unrecognized, it can slow or even prevent recovery from physical
injury.

Disabilities and injuries linked to psychological stress are among the most costly and difficult
to manage of all work-related injuries. In the “stress claim”, an employee maintains he or she
has been emotionally damaged by an event or conditions at work. Costly and hard to fight,
stress claims usually result in litigation and in the separation of the employee. There exists,
however, a vastly more frequent but seldom recognized source of stress-related claims. In
these cases, serious injury or exposure to life-threatening situations results in undiagnosed and
untreated psychological stress conditions that significantly affect the outcome of work-related
injuries.

On the basis of our work with traumatic worksite injuries and violent episodes over a wide
range of worksites, we estimate that at least half of disputed workers’ compensation claims
involve unrecognized and untreated post-traumatic stress conditions or other psychosocial
components. In the push to resolve medical problems and determine the employee’s
employment status, and because of many systems’ fear and mistrust of mental health
intervention, emotional stress and psychosocial issues take a back seat. When no one deals
with it, stress can take the form of a number of medical conditions, unrecognized by the
employer, the risk manager, the health care provider and the employee him- or herself.
Trauma-related stress also typically leads to avoidance of the workplace, which increases the
risk of conflicts and disputes regarding return to work and claims of disability.

Many employers and insurance carriers believe that contact with a mental health professional
leads directly to an expensive and unmanageable claim. Unfortunately, this is often the case.
Statistics bear out that claims for mental stress are more expensive than claims for other kinds
of injuries. Furthermore, they are increasing faster than any other kind of injury claim. In the
typical “physical-mental” claim scenario, the psychiatrist or psychologist appears only at the
point—typically months or even years after the event—when there is a need for expert
assessment in a dispute. By this time, the psychological damage has been done. The trauma-
related stress reaction may have prevented the employee from returning to the workplace,
even though he or she appeared visibly healed. Over time, the untreated stress reaction to the
original injury has resulted in a chronic anxiety or depression, a somatic illness or a substance
abuse disorder. Indeed, it is rare that mental health intervention is rendered at the point when
it can prevent the trauma-related stress reaction and thus help the employee fully recover from
the trauma of a serious injury or assault.

With a small measure of planning and proper timing, the costs and suffering associated with injury-related stress are among the most preventable of all injury-related costs. The following are the
components of an effective post-injury plan (Braverman 1993a):
Early intervention

Companies should require a brief mental health intervention whenever a severe accident,
assault or other traumatic event impacts on an employee. This evaluation should be seen as
preventive, rather than as tied to the standard claims procedure. It should be provided even if
there is no lost time, injury or need for medical treatment. The intervention should emphasize
education and prevention, rather than a strictly clinical approach that may cause the employee
to feel stigmatized. The employer, perhaps in conjunction with the insurance provider, should
take responsibility for the relatively small cost of providing this service. Care should be taken
that only professionals with specialized expertise or training in post-traumatic stress
conditions be involved.

Return to work

Any counselling or assessment activity should be coordinated with a return-to-work plan. Employees who have undergone a trauma often feel afraid or tentative about returning to the
worksite. Combining brief education and counselling with visits to the workplace during the
recovery period has been used to great advantage in accomplishing this transition and
speeding return to work. Health professionals can work with the supervisor or manager in
developing gradual re-entry into job functioning. Even when there is no remaining physical
limitation, emotional factors may necessitate accommodations, such as allowing a bank teller
who was robbed to work in another area of the bank for part of the day as she gradually
becomes comfortable returning to work at the customer window.

Follow-up

Post-traumatic reactions are often delayed. Follow-up at 1- and 6-month intervals with
employees who have returned to work is important. Supervisors should also be provided with fact sheets on how to spot possible delayed or long-term problems associated with post-traumatic stress.

Summary: The Link between Post-Traumatic Stress Studies and Occupational Health

Perhaps more than any other health science, occupational medicine is concerned with the
relationship between human stress and disease. Indeed, much of the research in human stress
in this century has taken place within the occupational health field. As the health sciences in
general became more involved in prevention, the workplace has become increasingly
important as an arena for research into the contribution of the physical and psychosocial
environment to disease and other health outcomes, and into methods for the prevention of
stress-related conditions. At the same time, since 1980 a revolution in the study of post-
traumatic stress has brought important progress to the understanding of the human stress
response. The occupational health practitioner is at the intersection of these increasingly
important fields of study.

As the landscape of work undergoes revolutionary transformation, and as we learn more about
productivity, coping and the stressful impact of continued change, the line between chronic
stress and acute or traumatic stress has begun to blur. The clinical theory of traumatic stress
has much to tell us about how to prevent and treat work-related psychological stress. As in all
health sciences, knowledge of the causes of a syndrome can help in prevention. In the area of
traumatic stress, the workplace has shown itself to be an excellent place to promote health and
healing. By being well acquainted with the symptoms and causes of post-traumatic stress
reactions, occupational health practitioners can increase their effectiveness as agents of
prevention.

Stress and Burnout and their Implication in the Work Environment

Written by ILO Content Manager

“An emerging global economy mandates serious scientific attention to discoveries that foster
enhanced human productivity in an ever-changing and technologically sophisticated work
world” (Human Capital Initiative 1992). Economic, social, psychological, demographic, political and ecological changes around the world are forcing us to reassess the concept of work and the impact of stress and burnout on the workforce.

Productive work “calls for a primary focus on reality external to oneself. Work therefore
emphasizes the rational aspects of people and problem solving” (Lowman 1993). The
affective and mood side of work is becoming an ever-increasing concern as the work
environment becomes more complex.

A conflict that may arise between the individual and the world of work lies in the transition required of the beginning worker from the self-centredness of adolescence to the disciplined subordination of personal needs to the demands of the workplace. Many workers need to learn and adapt to the reality that personal feelings and values are often of little importance or relevance to the workplace.

In order to continue a discussion of work-related stress, one needs to define the term, which
has been used widely and with varying meanings in the behavioural science literature. Stress
involves an interaction between a person and the work environment. Something happens in
the work arena which presents the individual with a demand, constraint, request or
opportunity for behaviour and consequent response. “There is a potential for stress when an
environmental situation is perceived as presenting a demand which threatens to exceed the
person’s capabilities and resources for meeting it, under conditions where he/she expects a
substantial differential in the rewards and costs from meeting the demand versus not meeting
it” (McGrath 1976).

The amount of stress a person experiences thus reflects both the degree to which the demand is perceived to exceed his or her capabilities and the size of the expected differential in rewards from meeting or not meeting that demand. McGrath further suggests
that stress may present itself in the following ways: “Cognitive-appraisal wherein subjectively
experienced stress is contingent upon the person’s perception of the situation. In this category
the emotional, physiological and behavioural responses are significantly influenced by the
person’s interpretation of the ‘objective’ or external stress situation.”

Another component of stress is the individual’s past experience with a similar situation and
his or her empirical response. Along with this is the reinforcement factor, whether positive or
negative, successes or failures which can operate to reduce or enhance, respectively, levels of
subjectively experienced stress.
Burnout is a form of stress. It is a process defined as a feeling of progressive deterioration and
exhaustion and an eventual depletion of energy. It is also often accompanied by a loss of
motivation, a feeling that suggests “enough, no more”. It is an overload that tends during the
course of time to affect attitudes, mood and general behaviour (Freudenberger 1975;
Freudenberger and Richelson 1981). The process is subtle; it develops slowly and sometimes
occurs in stages. It is often not perceived by the person most affected, since he or she is the
last individual to believe that the process is taking place.

The symptoms of burnout manifest themselves on a physical level as ill-defined psychosomatic complaints, sleep disturbances, excessive fatigue, gastrointestinal symptoms,
backaches, headaches, various skin conditions or vague cardiac pains of an unexplained origin
(Freudenberger and North 1986).

Mental and behavioural changes are more subtle. “Burnout is often manifest by a quickness to
be irritated, sexual problems (e.g. impotence or frigidity), fault finding, anger and low
frustration threshold” (Freudenberger 1984a).

Further affective and mood signs may be progressive detachment, loss of self-confidence and
lowered self-esteem, depression, mood swings, an inability to concentrate or pay attention, an
increased cynicism and pessimism, as well as a general sense of futility. Over a period of time
the contented person becomes angry, the responsive person becomes silent and withdrawn and
the optimist becomes a pessimist.

The affective states that appear to be most common are anxiety and depression. The anxiety
most typically associated with work is performance anxiety. The forms of work conditions
that are relevant in promoting this form of anxiety are role ambiguity and role overload
(Srivastava 1989).

Wilke (1977) has indicated that “one area that presents particular opportunity for conflict for
the personality-disordered individual concerns the hierarchical nature of work organizations.
The source of such difficulties can rest with the individual, the organization, or some
interactive combination.”

Depressive features are frequently found as part of the presenting symptoms of work-related
difficulties. Estimates from epidemiological data suggest that depression affects 8 to 12% of men and 20 to 25% of women. The lifetime experience of serious depressive reactions virtually assures that, for many people, workplace issues will at some time be affected by depression (Charney and Weissman 1988).

The seriousness of these observations was validated by a study conducted by Northwestern National Life Insurance Company—“Employee Burnout: America’s Newest Epidemic”
(1991). It was conducted among 600 workers nationwide and identified the extent, causes,
costs and solutions related to workplace stress. The most striking research findings were that
one in three Americans seriously thought about quitting work in 1990 because of job stress,
and a similar proportion expected to experience job burnout in the future. Nearly half of the 600 respondents described their stress levels as “extremely or very high.” Workplace changes such as cuts in employee benefits, changes of ownership, frequent required overtime or workforce reductions tend to intensify job stress.
MacLean (1986) further elaborates on job stressors as uncomfortable or unsafe working
conditions, quantitative and qualitative overload, lack of control over the work process and
work rate, as well as monotony and boredom.

Additionally, employers are reporting an ever-increasing number of employees with alcohol and drug abuse problems (Freudenberger 1984b). Divorce or other marital problems are
frequently reported as employee stressors, as are long-term or acute stressors such as caring
for an elderly or disabled relative.

Assessment and classification aimed at diminishing the possibility of burnout may be approached from the points of view of vocational interests, vocational choices or preferences, and the characteristics of people with different preferences (Holland 1973). One might utilize
computer-based vocational guidance systems, or occupational simulation kits (Krumboltz
1971).

Biochemical factors influence personality, and the effects of their balance or imbalance on
mood and behaviour are found in the personality changes attendant on menstruation. In the
last 25 years a great deal of work has been done on the adrenal catecholamines, epinephrine
and norepinephrine and other biogenic amines. These compounds have been related to the
experiencing of fear, anger and depression (Barchas et al. 1971).

The most commonly used psychological assessment devices are:

Eysenck Personality Inventory and Maudsley Personality Inventory
Gordon Personal Profile
 IPAT Anxiety Scale Questionnaire
 Study of Values
 Holland Vocational Preference Inventory
 Minnesota Vocational Interest Test
 Rorschach Inkblot Test
 Thematic Apperception Test

A discussion of burnout would not be complete without a brief overview of the changing
family-work system. Shellenberger, Hoffman and Gerson (1994) indicated that “Families are
struggling to survive in an increasingly complex and bewildering world. With more choices
than they can consider, people are struggling to find the right balance between work, play,
love and family responsibility.”

Concomitantly, women’s work roles are expanding, and over 90% of women in the US cite
work as a source of identity and self-worth. In addition to the shifting roles of men and
women, the preservation of two incomes sometimes requires changes in living arrangements,
including moving for a job, long-distance commuting or establishing separate residences. All
of these factors can put a great strain on a relationship and on work.

Solutions to diminish burnout and stress at the individual level include:

Learn to balance your life.
Share your thoughts and communicate your concerns.
 Limit alcohol intake.
 Re-evaluate personal attitudes.
 Learn to set priorities.
 Develop interests outside of work.
 Do volunteer work.
 Re-evaluate your need for perfectionism.
 Learn to delegate and ask for assistance.
 Take time off.
Exercise, and eat nutritious meals.
 Learn to take yourself less seriously.

On a larger scale, it is imperative that government and corporations accommodate family needs. Reducing or diminishing stress in the family-work system will require a significant reconfiguration of the entire structure of work and family life. This might include “a more equitable arrangement in gender relationships and the possible sequencing of work and non-work over the life span with parental leaves of absence and sabbaticals from work becoming common occurrences” (Shellenberger, Hoffman and Gerson 1994).

As indicated by Entin (1994), increased differentiation of self, whether in a family or corporation, has important ramifications in reducing stress, anxiety and burnout.

Individuals need to be more in control of their own lives and take responsibility for their
actions; and both individuals and corporations need to re-examine their value systems.
Dramatic shifts need to take place. If we do not heed the statistics, then most assuredly burnout and stress will remain the significant problems they have become for all of society.

Cognitive Disorders

Written by ILO Content Manager

A cognitive disorder is defined as a significant decline in one’s ability to process and recall
information. The DSM IV (American Psychiatric Association 1994) describes three major
types of cognitive disorder: delirium, dementia and amnestic disorder. A delirium develops
over a short period of time and is characterized by an impairment of short-term memory,
disorientation and perceptual and language problems. Amnestic disorders are characterized by
impairment of memory such that sufferers are unable to learn and recall new information.
However, no other declines in cognitive functioning are associated with this type of disorder.
Both delirium and amnestic disorders are usually due to the physiological effects of a general
medical condition (e.g., head injuries, high fevers) or of substance use. There is little reason to
suspect that occupational factors play a direct role in the development of these disorders.

However, research has suggested that occupational factors may influence the likelihood of
developing the multiple cognitive deficits involved in dementia. Dementia is characterized by
memory impairment and at least one of the following problems: (a) reduced language
function; (b) a decline in one’s ability to think abstractly; or (c) an inability to recognize
familiar objects even though one’s senses (e.g., vision, hearing, touch) are not impaired.
Alzheimer’s disease is the most common type of dementia.

The prevalence of dementia increases with age. Approximately 3% of people over the age of
65 years will suffer from a severe cognitive impairment during any given year. Recent studies
of elderly populations have found a link between a person’s occupational history and his or
her likelihood of suffering from dementia. For example, a study of the rural elderly in France
(Dartigues et al. 1991) found that people whose primary occupation had been farm worker,
farm manager, provider of domestic service or blue-collar worker had a significantly elevated
risk of having a severe cognitive impairment when compared to those whose primary
occupation had been teacher, manager, executive or professional. Furthermore, this elevated
risk was not due to differences between the groups of workers in terms of age, sex, education,
drinking of alcoholic beverages, sensory impairments or the taking of psychotropic drugs.

Because dementia is so rare among people younger than 65 years, no study has examined
occupation as a risk factor among this population. However, a large study in the United States
(Farmer et al. 1995) has shown that people under the age of 65 who have high levels of
education are less likely to experience declines in cognitive functioning than are similarly
aged people with less education. The authors of this study commented that education level
may be a “marker variable” that is actually reflecting the effects of occupational exposures. At
this point, such a conclusion is highly speculative.

Although several studies have found an association between one’s principal occupation and
dementia among the elderly, the explanation or mechanism underlying the association is not
known. One possible explanation is that some occupations involve higher exposure to toxic
materials and solvents than do other occupations. For example, there is growing evidence that
toxic exposures to pesticides and herbicides can have adverse neurological effects. Indeed, it
has been suggested that such exposures may explain the elevated risk of dementia found
among farm workers and farm managers in the French study described above. In addition,
some evidence suggests that the ingestion of certain minerals (e.g., aluminium and calcium as
components of drinking water) may affect the risk of cognitive impairment. Occupations may
involve differential exposure to these minerals. Further research is needed to explore possible
pathophysiological mechanisms.

Psychosocial stress levels of employees in various occupations may also contribute to the link
between occupation and dementia. Cognitive disorders are not among the mental health
problems that are commonly thought to be stress related. A review of the role of stress in
psychiatric disorders focused on anxiety disorders, schizophrenia and depression, but made no
mention of cognitive disorders (Rabkin 1993). One type of disorder, called dissociative
amnesia, is characterized by an inability to recall a previous traumatic or stressful event but
carries with it no other type of memory impairment. This disorder is obviously stress-related,
but is not categorized as a cognitive disorder according to the DSM IV.

Although psychosocial stress has not been explicitly linked to the onset of cognitive disorders,
it has been demonstrated that the experience of psychosocial stress affects how people process
information and their ability to recall information. The arousal of the autonomic nervous
system that often accompanies exposure to stressors alerts a person to the fact that “all is not
as expected or as it should be” (Mandler 1993). At first, this arousal may enhance a person’s
ability to focus attention on the central issues and to solve problems. However, on the
negative side, the arousal uses up some of the “available conscious capacity” or the resources
that are available for processing incoming information. Thus, high levels of psychosocial
stress ultimately (1) limit one’s ability to scan all of the relevant available information in an
orderly fashion, (2) interfere with one’s ability to rapidly detect peripheral cues, (3) decrease
one’s ability to sustain focused attention and (4) impair some aspects of memory
performance. To date, even though these decrements in information-processing skills can
result in some of the symptomatology associated with cognitive disorders, no relationship has
been demonstrated between these minor impairments and the likelihood of exhibiting a
clinically diagnosed cognitive disorder.

A third possible contributor to the relationship between occupation and cognitive impairment
may be the level of mental stimulation demanded by the job. In the study of rural elderly
residents in France described above, the occupations associated with the lowest risk of
dementia were those that involved substantial intellectual activity (e.g., physician, teacher,
lawyer). One hypothesis is that the intellectual activity or mental stimulation inherent in these
jobs produces certain biological changes in the brain. These changes, in turn, protect the
worker from decline in cognitive function. The well-documented protective effect of
education on cognitive functioning is consistent with such a hypothesis.

It is premature to draw any implications for prevention or treatment from the research
findings summarized here. Indeed, the association between one’s lifetime principal
occupation and the onset of dementia among the elderly may not be due to occupational
exposures or the nature of the job. Rather, the relationship between occupation and dementia
may be due to differences in the characteristics of workers in various occupations. For
example, differences in personal health behaviours or in access to quality medical care may
account for at least part of the effect of occupation. None of the published descriptive studies
can rule out this possibility. Further research is needed to explore whether specific
psychosocial, chemical and physical occupational exposures are contributing to the aetiology
of this cognitive disorder.

Karoshi: Death from Overwork

Written by ILO Content Manager

What Is Karoshi?

Karoshi is a Japanese word which means death from overwork. The phenomenon was first
identified in Japan, and the word is being adopted internationally (Drinkwater 1992). Uehata
(1978) reported 17 karoshi cases at the 51st annual meeting of the Japan Association of
Industrial Health. Among them seven cases were compensated as occupational diseases, but
ten cases were not. In 1988 a group of lawyers established the National Defense Counsel for
Victims of Karoshi (1990) and started a telephone consultation service to handle inquiries about
karoshi-related workers’ compensation insurance. Uehata (1989) described karoshi as a
sociomedical term that refers to fatalities or associated work disability due to cardiovascular
attacks (such as strokes, myocardial infarction or acute cardiac failure) which could occur
when hypertensive arteriosclerotic diseases are aggravated by a heavy workload. Karoshi is
not a pure medical term. The media have frequently used the word because it emphasizes that
sudden deaths (or disabilities) were caused by overwork and should be compensated. Karoshi
has become an important social problem in Japan.

Research on Karoshi

Uehata (1991a) conducted a study of 203 Japanese workers (196 males and seven females)
who had cardiovascular attacks. They or their next of kin consulted with him regarding
workers’ compensation claims between 1974 and 1990. A total of 174 workers had died; 55
cases had already been compensated as occupational disease. A total of 123 workers had
suffered strokes (57 arachnoidal bleedings, 46 cerebral bleedings, 13 cerebral infarctions,
seven unknown types); 50, acute heart failure; 27, myocardial infarctions; and four, aortic
ruptures. Autopsies were performed in only 16 cases. More than half of the workers had
histories of hypertension, diabetes or other atherosclerotic problems. A total of 131 cases had
worked for long hours—more than 60 hours per week, more than 50 hours overtime per
month or more than half of their fixed holidays. Eighty-eight workers had identifiable trigger
events within 24 hours before their attack. Uehata concluded that these were mostly male workers who had been working long hours under additional stressful overloads, and that these working styles exacerbated their other lifestyle habits and resulted in the attacks, which were finally triggered by minor work-related troubles or events.
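Purely as an illustration (not part of Uehata’s study), the long-hours criteria quoted above can be expressed as a simple check. The function and variable names below are hypothetical, and reading “more than half of their fixed holidays” as holidays actually worked is an assumption.

def worked_excessively(weekly_hours, monthly_overtime_hours, fixed_holidays, holidays_worked):
    """Illustrative check of the long-hours criteria quoted in the text.

    Thresholds follow the figures given above: more than 60 hours per week,
    more than 50 hours of overtime per month, or work on more than half of
    the fixed holidays. Names and structure are hypothetical.
    """
    return (
        weekly_hours > 60
        or monthly_overtime_hours > 50
        or (fixed_holidays > 0 and holidays_worked > fixed_holidays / 2)
    )

# Example: 58 hours per week with 55 hours of monthly overtime meets the
# second criterion.
print(worked_excessively(58, 55, 16, 2))  # True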

Karasek Model and Karoshi

According to the demand-control model by Karasek (1979), a high-strain job—one with a combination of high demand and low control (decision latitude)—increases the risk of psychological strain and physical illness; an active job—one with a combination of high demand and high control—requires learning motivation to develop new behaviour patterns.
Uehata (1991b) reported that the jobs in karoshi cases were characterized by a higher degree of work demands and lower social support, whereas the degree of work control varied greatly. He described the workers in these cases as highly engaged and enthusiastic about their work, and consequently likely to ignore their need for regular rest and so on—even the need for health care. It is suggested that workers not only in high-strain jobs but also in active jobs could be at high risk. Managers and engineers have high decision latitude. If they face extremely high demands and are enthusiastic about their work, they may not control their working hours. Such workers may be a risk group for karoshi.
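A minimal sketch of the quadrant logic of the demand-control model, as applied in the discussion above, is given below. The zero cut-off, the assumption of mean-centred scores and the function name are illustrative only and are not taken from Karasek’s or Uehata’s instruments.

def karasek_quadrant(demands, decision_latitude, cutoff=0.0):
    """Classify a job into the four quadrants of the demand-control model.

    `demands` and `decision_latitude` are assumed to be mean-centred scale
    scores; the zero cut-off is hypothetical.
    """
    high_demands = demands > cutoff
    high_control = decision_latitude > cutoff
    if high_demands and not high_control:
        return "high-strain job"   # high demand, low decision latitude
    if high_demands and high_control:
        return "active job"        # high demand, high decision latitude
    if not high_demands and high_control:
        return "low-strain job"
    return "passive job"

# Uehata's observations suggest that, when demands are extreme and social
# support is low, both "high-strain" and "active" jobs may carry karoshi risk.
print(karasek_quadrant(demands=1.2, decision_latitude=1.5))  # "active job"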

Type A Behaviour Pattern in Japan

Friedman and Rosenman (1959) proposed the concept of Type A behaviour pattern (TABP).
Many studies have shown that TABP is related to the prevalence or incidence of coronary
heart disease (CHD).

Hayano et al. (1989) investigated the characteristics of TABP in Japanese employees using
the Jenkins Activity Survey (JAS). Responses of 1,682 male employees of a telephone
company were analysed. The factor structure of the JAS among the Japanese was in most
respects equal to that found in the Western Collaborative Group Study (WCGS). However,
the average score of factor H (hard-driving and competitiveness) among the Japanese was
considerably lower than that in the WCGS.

Monou (1992) reviewed TABP research in Japan and summarized it as follows: TABP is less prevalent in Japan than in the United States; the relationship between TABP and coronary heart disease in Japan seems to be significant but weaker than that in the US; TABP among the Japanese places more emphasis on “workaholism” and “directivity into the group” than in the US; the percentage of highly hostile individuals in Japan is lower than in the US; and there is no relationship between hostility and CHD.

Japanese culture is quite different from those of Western countries. It is strongly influenced
by Buddhism and Confucianism. Generally speaking, Japanese workers are organization-centred. Cooperation with colleagues is emphasized rather than competition. In Japan,
competitiveness is a less important factor for coronary-prone behaviour than job involvement
or a tendency to overwork. Direct expression of hostility is suppressed in Japanese society.
Hostility may be expressed differently than in Western countries.

Working Hours of Japanese Workers

It is well known that Japanese workers work long hours compared with workers in other
developed industrial countries. Normal annual working hours of manufacturing workers in
1993 were 2,017 hours in Japan; 1,904 in the United States; 1,763 in France; and 1,769 in the
UK (ILO 1995). However, Japanese working hours are gradually decreasing. Average annual
working hours of manufacturing employees in enterprises with 30 employees or more was
2,484 hours in 1960, but 1,957 hours in 1994. Article 32 of the Labor Standards Law, which
was revised in 1987, provides for a 40-hour week. The general introduction of the 40-hour
week is expected to take place gradually in the 1990s. In 1985, the 5-day work week was
granted to 27% of all employees in enterprises with 30 employees or more; in 1993, it was
granted to 53% of such employees. The average worker was allowed 16 paid holidays in
1993; however, workers actually used an average of 9 days. In Japan, paid holidays are few,
and workers tend to save them to cover absence due to sickness.

Why do Japanese workers work such long hours? Deutschmann (1991) pointed out three
structural conditions underlying the present pattern of long working hours in Japan: first, the
continuing need of Japanese employees to increase their income; second, the enterprise-
centred structure of industrial relations; and third, the holistic style of Japanese personnel
management. These conditions were based on historical and cultural factors. Japan was
defeated in war in 1945 for the first time in its history. After the war Japan was a low-wage country. The Japanese were used to working long and hard to earn their subsistence. As
labour unions were cooperative with employers, there have been relatively few labour
disputes in Japan. Japanese companies adopted the seniority-oriented wage system and
lifetime employment. The number of hours worked is taken as a measure of the loyalty and cooperativeness of an employee, and becomes a criterion for promotion. Workers are not forced to work long
hours; they are willing to work for their companies, as if the company is their family.
Working life has priority over family life. Such long working hours have contributed to the
remarkable economic achievements of Japan.

National Survey of Workers’ Health

The Japanese Ministry of Labour conducted surveys on the state of employees’ health in
1982, 1987 and 1992. In the survey in 1992, 12,000 private worksites employing 10 or more
workers were identified, and 16,000 individual workers from them were randomly selected
nationwide based on industry and job classification to fill out questionnaires. The
questionnaires were mailed to a representative at the workplace who then selected workers to
complete the survey.

Sixty-five per cent of these workers complained of physical fatigue due to their usual work,
and 48% complained of mental fatigue. Fifty-seven per cent of workers stated that they had
strong anxieties, worries or stress concerning their job or working life. The prevalence of
stressed workers was increasing, as the prevalence had been 55% in 1987 and 51% in 1982.
The main causes of stress were: unsatisfactory relations in the workplace, 48%; quality of
work, 41%; quantity of work, 34%.

Eighty-six per cent of these worksites conducted periodic health examinations. Worksite
health promotion activities were conducted at 44% of the worksites. Of these worksites, 48%
had sports events, 46% had exercise programmes and 35% had health counselling.

National Policy to Protect and Promote Workers’ Health

The purpose of the Industrial Safety and Health Law in Japan is to secure the safety and
health of workers in workplaces as well as to facilitate the establishment of a comfortable
working environment. The law states that the employer shall not only comply with the
minimum standards for preventing occupational accidents and diseases, but also endeavour to
ensure the safety and health of workers in workplaces through the realization of a comfortable
working environment and the improvement of working conditions.

Article 69 of the law, amended in 1988, states that the employer shall make continuous and
systematic efforts for the maintenance and promotion of workers’ health by taking appropriate
measures, such as providing health education and health counselling services to the workers.
The Japanese Ministry of Labour publicly announced guidelines for measures to be taken by
employers for the maintenance and promotion of workers’ health in 1988. It recommends
worksite health promotion programmes called the Total Health Promotion Plan (THP):
exercise (training and counselling), health education, psychological counselling and
nutritional counselling, based on the health status of employees.

In 1992, the guidelines for the realization of a comfortable working environment were
announced by the Ministry of Labour in Japan. The guidelines recommend the following: the
working environment should be properly maintained under comfortable conditions; work
conditions should be improved to reduce the workload; and facilities should be provided for
the welfare of employees who need to recover from fatigue. Low-interest loans and grants for
small and medium-sized enterprises for workplace improvement measures have been
introduced to facilitate the realization of a comfortable working environment.

Conclusion

The evidence that overwork causes sudden death is still incomplete. More studies are needed
to clarify the causal relationship. To prevent karoshi, working hours should be reduced.
Japanese national occupational health policy has focused on work hazards and health care of
workers with problems. The psychological work environment should be improved as a step
towards the goal of a comfortable working environment. Health examinations and health
promotion programmes for all workers should be encouraged. These activities will prevent
karoshi and reduce stress.

Work and Mental Health

Written by ILO Content Manager

This chapter provides an overview of major types of mental health disorder that can be
associated with work—mood and affective disorders (e.g., dissatisfaction), burnout, post-
traumatic stress disorder (PTSD), psychoses, cognitive disorders and substance abuse. The
clinical picture, available assessment techniques, aetiological agents and factors, and specific
prevention and management measures will be provided. The relationship with work,
occupation or branch of industry will be illustrated and discussed where possible.

This introductory article first will provide a general perspective on occupational mental health
itself. The concept of mental health will be elaborated upon, and a model will be presented.
Next, we will discuss why attention should be paid to mental (ill) health and which
occupational groups are at greatest risk. Finally, we will present a general intervention
framework for successfully managing work-related mental health problems.

What Is Mental Health: A Conceptual Model

There are many different views about the components and processes of mental health. The
concept is heavily value laden, and one definition is unlikely to be agreed upon. Like the
strongly associated concept of “stress”, mental health is conceptualized as:

 a state—for example, a state of total psychological and social well-being of an individual in a given sociocultural environment, indicative of positive moods and affects (e.g., pleasure, satisfaction and comfort) or negative ones (e.g., anxiety, depressive mood and dissatisfaction).
 a process indicative of coping behaviour—for example, striving for independence, being
autonomous (which are key aspects of mental health).
 the outcome of a process—a chronic condition resulting either from an acute, intense
confrontation with a stressor, such as is the case in a post-traumatic stress disorder, or from
the continuing presence of a stressor which may not necessarily be intense. This is the case
in burnout, as well as in psychoses, major depressive disorders, cognitive disorders and
substance abuse. Cognitive disorders and substance abuse are, however, often considered as
neurological problems, since pathophysiological processes (e.g., degeneration of the myelin
sheath) resulting from ineffective coping or from the stressor itself (alcohol use or
occupational exposure to solvents, respectively) can underlie these chronic conditions.

Mental health may also be associated with:

 Person characteristics like “coping styles”—competence (including effective coping, environmental mastery and self-efficacy) and aspiration are characteristic of a mentally healthy person, who shows interest in the environment, engages in motivational activity and seeks to extend him- or herself in ways that are personally significant.

Thus, mental health is conceptualized not only as a process or outcome variable, but also as
an independent variable—that is, as a personal characteristic that influences our behaviour.
In figure 1 a mental health model is presented. Mental health is determined by environmental
characteristics, both in and outside the work situation, and by characteristics of the individual.
Major environmental job characteristics are elaborated upon in the chapter “Psychosocial and
organizational factors”, but some points on these environmental precursors of mental (ill)
health have to be made here as well.

Figure 1. A model for mental health.

There are many models, most of them stemming from the field of work and organizational
psychology, that identify precursors of mental ill health. These precursors are often labelled
“stressors”. Those models differ in their scope and, related to this, in the number of stressor
dimensions identified. An example of a relatively simple model is that of Karasek (Karasek
and Theorell 1990), describing only three dimensions: psychological demands, decision
latitude (incorporating skill discretion and decision authority) and social support. A more
elaborate model is that of Warr (1994), with nine dimensions: opportunity for control
(decision authority), opportunity for skill use (skill discretion), externally generated goals
(quantitative and qualitative demands), variety, environmental clarity (information about
consequences of behaviour, availability of feedback, information about the future, information
about required behaviour), availability of money, physical security (low physical risk, absence
of danger), opportunity for interpersonal contact (prerequisite for social support), and valued
social position (cultural and company evaluations of status, personal evaluations of
significance). From the above it is clear that the precursors of mental (ill) health are generally
psychosocial in nature, and are related to work content, as well as working conditions,
conditions of employment and (formal and informal) relationships at work.

Environmental risk factors for mental (ill) health generally result in short-term effects such as
changes in mood and affect, like feelings of pleasure, enthusiasm or a depressed mood. These
changes are often accompanied by changes in behaviour. We may think of restless behaviour,
palliative coping (e.g., drinking) or avoidance, as well as active problem-solving behaviour.
These affects and behaviours are generally accompanied by physiological changes as well,
indicative of arousal and sometimes also of a disturbed homeostasis. When one or more of
these stressors remains active, the short-term, reversible responses may result in more stable,
less reversible mental health outcomes like burnout, psychoses or major depressive disorder.
Situations that are extremely threatening may even immediately result in chronic mental
health disorders (e.g., PTSD) which are difficult to reverse.

Person characteristics may interact with psychosocial risk factors at work and exacerbate or
buffer their effects. The (perceived) coping ability may not only moderate or mediate the
effects of environmental risk factors, but may also determine the appraisal of the risk factors
in the environment. Part of the effect of the environmental risk factors on mental health
results from this appraisal process.

Person characteristics (e.g., physical fitness) may not only act as precursors in the
development of mental health, but may also change as a result of the effects. Coping ability
may, for example, increase as the coping process progresses successfully (“learning”). Long-
term mental health problems will, on the other hand, often reduce coping ability and capacity
in the long run.

In occupational mental health research, attention has been particularly directed to affective
well-being—factors such as job satisfaction, depressive moods and anxiety. The more chronic
mental health disorders, resulting from long-term exposure to stressors and to a greater or
lesser extent also related to personality disorders, have a much lower prevalence in the
working population. These chronic mental health problems have a multitude of causal factors.
Occupational stressors will consequently be only partly responsible for the chronic condition.
Also, people suffering from these kinds of chronic problem will have great difficulty in
maintaining their position at work, and many are on sick leave or have dropped out of work
for quite a long period of time (1 year), or even permanently. These chronic problems,
therefore, are often studied from a clinical perspective.

Since, in particular, affective moods and affects are so frequently studied in the occupational
field, we will elaborate on them a little further. Affective well-being has been treated both in a rather undifferentiated way (ranging from feeling good to feeling bad) and by considering two dimensions: “pleasure” and “arousal” (figure 2). When variations in arousal
are uncorrelated with pleasure, these variations alone are generally not considered to be an
indicator of well-being.

Figure 2. Three principal axes for the measurement of affective well-being.

When, however, arousal and pleasure are correlated, four quadrants can be distinguished (see the sketch after the list):

1. Highly aroused and pleased indicates enthusiasm.
2. Low aroused and pleased indicates comfort.
3. Highly aroused and displeased indicates anxiety.
4. Low aroused and displeased indicates depressed mood (Warr 1994).
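A minimal sketch of how the two dimensions combine into the four quadrants listed above is given here; the zero cut-offs, the function name and the labels are hypothetical and are not drawn from Warr’s measurement instruments.

def affect_quadrant(pleasure, arousal):
    """Map mean-centred pleasure and arousal scores onto the four quadrants.

    Positive values stand for "high"; the zero cut-offs are hypothetical.
    """
    if arousal > 0 and pleasure > 0:
        return "enthusiasm"      # highly aroused and pleased
    if arousal <= 0 and pleasure > 0:
        return "comfort"         # low arousal, pleased
    if arousal > 0 and pleasure <= 0:
        return "anxiety"         # highly aroused, displeased
    return "depressed mood"      # low arousal, displeased

print(affect_quadrant(pleasure=-0.4, arousal=0.9))  # "anxiety"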
Well-being can be studied at two levels: a general, context-free level and a context-specific
level. The work environment is such a specific context. Data analyses support the general
notion that the relation between job characteristics and context-free, non-work mental health
is mediated by an effect on work-related mental health. Work-related affective well-being has
commonly been studied along the horizontal axis (figure 2) in terms of job satisfaction.
Affects related to comfort in particular have, however, largely been ignored. This is
regrettable, since this affect might indicate resigned job satisfaction: people may not complain
about their jobs, but may still be apathetic and uninvolved (Warr 1994).

Why Pay Attention to Mental Health Issues?

There are several reasons that illustrate the need for attention to mental health issues. First of
all, national statistics from several countries indicate that many people drop out of work
because of mental health problems. In the Netherlands, for example, for one-third of those
employees who are diagnosed as disabled for work each year, the problem is related to mental
health. The majority of this category, 58%, is reported to be work related (Gründemann,
Nijboer and Schellart 1991). Together with musculoskeletal problems, mental health
problems account for about two-thirds of those who drop out for medical reasons each year.

Mental ill health is an extensive problem in other countries as well. According to the Health
and Safety Executive Booklet, it has been estimated that 30 to 40% of all sickness absence
from work in the UK is attributable to some form of mental illness (Ross 1989; O’Leary
1993). In the UK, it has been estimated that one in five of the working population suffers each
year from some form of mental illness. It is difficult to be precise about the number of
working days lost each year because of mental ill health. For the UK, a figure of 90 million
certified days—or 30 times that lost as a result of industrial disputes—is widely quoted
(O’Leary 1993). This compares with 8 million days lost as a result of alcoholism and drink-
related diseases and 35 million days as a result of coronary heart disease and strokes.

Apart from the fact that mental ill health is costly, both in human and financial terms, there is
a legal framework provided by the European Union (EU) in its framework directive on health
and safety at work (89/391/EEC), enacted in 1993. Although mental health is not as such an
element which is central to this directive, a certain amount of attention is given to this aspect
of health in Article 6. The framework directive states, among other things, that the employer
has:

“a duty to ensure the safety and health of workers in every aspect related to work,
following general principles of prevention: avoiding risks, evaluating the risks which
cannot be avoided, combating the risks at source, adapting the work to the individual,
especially as regards the design of workplaces, the choice of work equipment and the
choice of work and production methods, with a view, in particular, to alleviating
monotonous work and work at a predetermined work rate and to reduce their effects
on health.”

Despite this directive, not all European countries have adopted framework legislation on
health and safety. In a study comparing regulations, policies and practices concerning mental
health and stress at work in five European countries, those countries with such framework
legislation (Sweden, the Netherlands and the UK) recognize mental health issues at work as
important health and safety topics, whereas those countries which do not have such a
framework (France, Germany) do not recognize mental health issues as important (Kompier et
al. 1994).

Last but not least, prevention of mental ill health (at its source) pays. There are strong
indications that important benefits result from preventive programmes. For example, of the
employers in a national representative sample of companies from three major branches of
industry, 69% state that motivation increased; 60%, that absence due to sickness decreased;
49%, that the atmosphere improved; and 40%, that productivity increased as a result of a
prevention programme (Houtman et al. 1995).

Occupational Risk Groups for Mental Health Problems

Are specific groups of the working population at risk of mental health problems? This
question cannot be answered in a straightforward manner, since hardly any national or
international monitoring systems exist which identify risk factors, mental health consequences
or risk groups. Only a “scattergram” can be given. In some countries national data exist for
the distribution of occupational groups with respect to major risk factors (e.g., for the
Netherlands, Houtman and Kompier 1995; for the United States, Karasek and Theorell 1990).
The distribution of the occupational groups in the Netherlands on the dimensions of job
demands and skill discretion (figure 3) agrees fairly well with the US distribution shown by
Karasek and Theorell, for those groups that are in both samples. In those occupations with
high work pace and/or low skill discretion, the risk of mental health disorders is highest.

Figure 3. Risk for stress and mental ill health for different occupational groups, as determined
by the combined effects of work pace and skill discretion.

Also, in some countries there are data for mental health outcomes as related to occupational groups. Occupational groups that are especially prone to drop out for reasons of mental ill health in the Netherlands are those in the service sector, such as health care personnel and teachers, as well as cleaning personnel, housekeepers and occupations in the transport branch (Gründemann, Nijboer and Schellart 1991).

In the United States, occupations which were highly prone to major depressive disorder, as
diagnosed with standardized coding systems (i.e., the third edition of the Diagnostic and
Statistical Manual of Mental Disorders (DSM III)) (American Psychiatric Association 1980),
were juridical employees, secretaries and teachers (Eaton et al. 1990).

Management of Mental Health Problems

The conceptual model (figure 1) suggests at least two targets of intervention in mental health
issues:

1. The (work) environment.
2. The person—either his or her characteristics or the mental health consequences.

Primary prevention, the type of prevention that should prevent mental ill health from
occurring, should be directed at the precursors by alleviating or managing the risks in the
environment and increasing the coping ability and capacity of the individual. Secondary
prevention is directed at the maintenance of people at work who already have some form of
(mental) health problem. This type of prevention should embrace the primary prevention
strategy, accompanied by strategies to make both employees and their supervisors sensitive to
signals of early mental ill health in order to reduce the consequences or prevent them from
getting worse. Tertiary prevention is directed at the rehabilitation of people who have dropped
out of work due to mental health problems. This type of prevention should be directed at
adapting the workplace to the capacities of the individual (which is often found to be quite
effective), along with individual counselling and treatment. Table 1 provides a schematic
framework for the management of mental health disorders at the workplace. Effective
preventive policy plans of organizations should, in principle, take into account all three types
of strategy (primary, secondary and tertiary prevention), as well as be directed at risks,
consequences and person characteristics.

Table 1. A schematic overview of management strategies for mental health problems, with some examples.

Primary prevention
 Work environment: redesign of task content; redesign of communication structure.
 Person characteristics and/or health outcomes: training groups of employees in signalling and handling specific work-related problems (e.g., how to manage time pressure, robberies etc.).

Secondary prevention
 Work environment: introduction of a policy on how to act in case of absenteeism (e.g., training supervisors to discuss absence and return with the employees concerned); provision of facilities within the organization, especially for risk groups (e.g., a counsellor for sexual harassment).
 Person characteristics and/or health outcomes: training in relaxation techniques.

Tertiary prevention
 Work environment: adaptation of an individual workplace.
 Person characteristics and/or health outcomes: individual counselling; individual treatment or therapy (may also be with medication).

The framework presented in table 1 provides a method for systematic analysis of all possible types of measure. One can debate whether a certain measure belongs elsewhere in the framework; such a discussion is, however, not very fruitful, since primary preventive measures often work out positively for secondary prevention as well. The proposed systematic analysis may well result in a large number of potential measures, several of which may be adopted, either as a general aspect of the (health and safety) policy or in a specific case.

In conclusion: Although mental health is not a clearly defined state, process or outcome, it
covers a generally agreed upon area of (ill) health. Part of this area can be covered by
generally accepted diagnostic criteria (e.g., psychosis, major depressive disorder); the
diagnostic nature of other parts is neither as clear nor as generally accepted. Examples of the
latter are moods and affects, and also burnout. Despite this, there are many indications that
mental (ill) health, including the more vague diagnostic criteria, is a major problem. Its costs
are high, both in human and financial terms. In the following articles of this chapter, several
mental health disorders—moods and affects (e.g., dissatisfaction), burnout, post-traumatic
stress disorder, psychoses, cognitive disorders and substance abuse—will be discussed in
much more depth with respect to the clinical picture, available assessment techniques,
aetiological agents and factors, and specific prevention and management measures.

Work-Related Psychosis

Written by ILO Content Manager

Psychosis is a general term often used to describe a severe impairment in mental functioning.
Usually, this impairment is so substantial that the individual is unable to carry on normal
activities of daily living, including most work activities. More formally, Yodofsky, Hales and
Fergusen (1991) define psychosis as:

“A major mental disorder of organic or emotional origin in which a person’s ability to think, respond emotionally, remember, communicate, interpret reality and behave appropriately is sufficiently impaired so as to interfere grossly with the capacity to meet the ordinary demands of life. [Symptoms are] often characterized by regressive behaviour, inappropriate mood, diminished impulse control and such abnormal mental content as delusions and hallucinations [p. 618].”

Psychotic disorders are comparatively rare in the general population. Their incidence in the workplace is even lower, probably because many individuals who repeatedly become psychotic have problems maintaining stable employment (Jorgensen 1987). Precisely how rare they are is difficult to estimate. However, there are some suggestions that the
prevalence within the general population of psychoses (e.g., schizophrenia) is less than 1%
(Bentall 1990; Eysenck 1982). While psychosis is rare, individuals who are actively
experiencing a psychotic state usually exhibit profound difficulties in functioning at work and
in other aspects of their lives. Sometimes acutely psychotic individuals exhibit behaviours
which are engaging, inspiring or even humorous. For example, some individuals who suffer
from bipolar illness and are entering a manic phase exhibit high energy and grand ideas or
plans. For the most part, however, psychosis is associated with behaviours which evoke
reactions such as discomfort, anxiety, anger or fear in co-workers, supervisors and others.

This article will first provide an overview of the various neurological conditions and mental
states in which psychosis can occur. Then, it will review workplace factors potentially
associated with the occurrence of psychosis. Finally, it will summarize treatment approaches
for managing both the psychotic worker and the work environment (i.e., medical
management, return-to-work clearance procedures, workplace accommodations and
workplace consultations with supervisors and co-workers).

Neurological Conditions and Mental States within which Psychosis Occurs

Psychosis can occur within a number of diagnostic categories identified in the fourth edition
of the Diagnostic and Statistical Manual of Mental Disorders (DSM IV) (American
Psychiatric Association 1994). At this point, there is no commonly agreed upon definitive
diagnostic set. The following are widely accepted as medical conditions within which
psychoses arise.

Neurological and general medical conditions

Delusional symptomatology can be caused by a range of neurological disorders affecting the limbic system or basal ganglia, where cerebral cortical functioning remains intact. Partial
complex seizure episodes are often preceded by olfactory hallucinations of peculiar smells. To
an external observer, this seizure activity may appear to be simple staring or day-dreaming.
Cerebral neoplasms, especially in temporal and occipital areas, can cause hallucinations. Also,
delirium-causing diseases, such as Parkinson’s, Huntington’s, Alzheimer’s, and Pick’s, can
result in altered states of consciousness. Several sexually transmitted diseases such as tertiary
syphilis and AIDS can also produce psychosis. Lastly, deficiencies of certain nutrients, such
as B-12, niacin, folic acid and thiamine, have the potential of causing neurological problems
which can result in psychosis.

Psychotic symptoms such as hallucinations and delusions also occur among patients with
various general medical conditions. These include several systemic diseases, such as hepatic
encephalopathy, hypercalcaemia, diabetic ketoacidosis, and malfunction of endocrine glands
(i.e., adrenal, thyroid, parathyroid and pituitary). Sensory and sleep deprivation have also
been shown to cause psychosis.
Mental states

Schizophrenia is probably the most widely known of the psychotic disorders. It is a progressively deteriorating condition which usually has an insidious onset. A number of
specific subcategories have been identified including paranoid, disorganized, catatonic,
undifferentiated and residual types. People who suffer from this disorder often have limited
work histories and often do not remain in the workforce. Occupational impairment among
schizophrenics is very common, and many schizophrenics lose their interest or will to work as
the disease progresses. Unless a job is of very low complexity, it is usually very difficult for
them to stay employed.

Schizophreniform disorder is similar to schizophrenia, but an episode of this disorder is of short duration, usually lasting less than six months. Generally, persons with this disorder have
good premorbid social and occupational functioning. As the symptoms resolve, the person
returns to baseline functioning. Consequently, the occupational impact of this disorder may be
significantly less than in cases of schizophrenia.

Schizoaffective disorder also has a better prognosis than schizophrenia but a worse prognosis
than affective disorders. Occupational impairment is quite common in this group. Psychosis is
also sometimes observed in major affective disorders. With appropriate treatment,
occupational functioning among workers suffering from major affective disorders is generally
substantially better than for those with schizophrenia or schizoaffective disorders.

Severe stressors such as losing a loved one or losing one’s job can result in a brief reactive psychosis. This psychotic disorder is probably observed more frequently in the workplace than other types of psychotic disorder, especially among individuals with schizoid, schizotypal and borderline features.

Delusional disorders are probably relatively common in the workplace. There are several
types. The erotomanic type typically believes that another person, usually of a higher social
status, is in love with them. Sometimes, they harass the person who they believe is in love
with them by attempting contact via telephone calls, letters or even stalking. Often,
individuals with these disorders are employed in modest occupations, living isolated and
withdrawn lives with limited social and sexual contact. The grandiose type usually exhibits
delusions of inflated worth, power, knowledge or a special relationship with a deity or a
famous person. The jealous type believes inaccurately that their sexual partner has been
unfaithful. The persecutory type believes inaccurately that they (or someone to whom they are
close) are being cheated, maligned, harassed or in other ways malevolently treated. These
persons are often resentful and angry and may resort to violence against those they believe to
be hurting them. They rarely want to seek help, as they do not think there is anything wrong
with them. Somatic types develop delusions, contrary to all evidence, that they are afflicted
with infections. They can also believe that a part of their body is disfigured, or worry about
having a bad body odour. These workers with delusional beliefs can often create work-related
difficulties.

Work-related chemical factors

Chemical factors such as mercury, carbon disulphide, toluene, arsenic and lead have been
known to cause psychosis in blue-collar workers. For example, mercury has been found to be
responsible for causing psychosis in workers in the hat industry, appropriately named the
“Mad Hatter’s psychosis” (Kaplan and Sadock 1995). Stopford (personal communication, 6
November 1995) suggests that carbon disulphide was found to cause psychosis among
workers in France in 1856. In the United States, in 1989, two brothers in Nevada purchased a
carbon disulphide compound to kill gophers. Their physical contact with this chemical
resulted in severe psychosis—one brother shooting a person and the other shooting himself
due to severe confusion and psychotic depression. The incidence of suicide and homicide
increases thirteenfold with exposure to carbon disulphide. Further, Stopford reports that
exposure to toluene (used in making explosives and dyestuffs) is known to cause acute
encephalopathy and psychosis. Symptoms can manifest also as memory loss, mood changes
(e.g., dysphoria), deterioration in eye-hand coordination and speech impediments. Hence,
some organic solvents, especially those found in the chemical industry, have a direct influence
on the human central nervous system (CNS), causing biochemical changes and unpredictable
behaviour (Levi, Frandenhaeuser and Gardell 1986). Special precautions, procedures and
protocols have been established by the US Occupational Safety and Health Administration
(OSHA), National Institute for Occupational Safety and Health (NIOSH) and the chemical
industry to ensure minimum risk to employees working with toxic chemicals in their work
environments.

Other factors

A number of medications can cause delirium which in turn can result in psychosis. These
include antihypertensives, anticholinergics (including a number of medications used to treat
the common cold), antidepressants, anti-tuberculosis medicines, anti-Parkinson’s disease
medicines, and ulcer medicines (such as cimetidine). Further, substance-induced psychosis
can be caused by a number of licit and illicit drugs which are sometimes abused, such as
alcohol, amphetamines, cocaine, PCP, anabolic steroids and marijuana. The delusions and
hallucinations which result are usually temporary. Although the content may vary,
persecutory delusions are quite common. In alcohol-related hallucinations a person may
believe that he or she is hearing voices which are threatening, insulting, critical or
condemning. Sometimes, these insulting voices speak in the third person. As with individuals
exhibiting paranoid or persecutory delusions, these individuals should be carefully evaluated
for dangerousness to self or others.

Post-partum psychosis is comparatively uncommon in the workplace, but is worth noting as some women are returning to work more quickly. It tends to occur in new mothers (or more rarely fathers), usually within two to four weeks after delivery.

In a number of cultures, psychosis may result from various commonly held beliefs. A number
of culturally based psychotic reactions have been described, including episodes such as
“koro” in South and East Asia, “qi-gong psychotic reaction” within Chinese populations,
“piblokto” in Eskimo communities and “whitigo” among several American Indian groups
(Kaplan and Sadock 1995). The relationship of these psychotic phenomena to various
occupational variables does not appear to have been studied.

Workplace Factors Associated with the Occurrence of Psychosis

Although information and empirical research on work-related psychosis are extremely scarce,
due in part to the low prevalence in the work setting, researchers have noted a relationship
between psychosocial factors in the work environment and psychological distress (Neff 1968;
Lazarus 1991; Sauter, Murphy and Hurrell 1992; Quick et al. 1992). Significant psychosocial
stressors on the job, such as role ambiguity, role conflicts, discrimination, supervisor-
supervisee conflicts, work overload and work setting have been found to be associated with
greater susceptibility to stress-related illness, tardiness, absenteeism, poor performance,
depression, anxiety and other psychological distress (Levi, Frandenhaeuser and Gardell 1986;
Sutherland and Cooper 1988).

Stress appears to have a prominent role in the complex manifestations of various types of
physiological and psychological disorders. In the workplace, Margolis and Kroes (1974)
believe that occupational stress occurs when some factor or combination of factors at work
interacts with the worker to disrupt his or her psychological or physiological homeostasis.
These factors can be external or internal. External factors are the various pressures or
demands from the external environment which stem from a person’s occupation, as well as
from marriage, family or friends, whereas internal factors are the pressures and demands a
worker places upon him- or herself—for example, by being “ambitious, materialistic,
competitive and aggressive” (Yates 1989). It is these internal and external factors, separately
or in combination, which can result in occupational distress whereby the worker experiences
significant psychological and physical health problems.

Researchers have speculated on whether severe or cumulative stress, known as “stress-induced arousal”, originating from the work environment, could induce work-related psychotic disorders (Bentall, Dohrenwend and Skodol 1990; Link, Dohrenwend and Skodol
psychotic disorders (Bentall, Dohrenwend and Skodol 1990; Link, Dohrenwend and Skodol
1986). For example, there is evidence linking hallucinatory and delusional experiences to
specific stressful events. Hallucinations have been associated with stress-induced arousal
occurring as a result of mining accidents, hostage situations, chemical-factory explosions,
wartime exposure, sustained military operations and loss of a spouse (Comer, Madow and
Dixon 1967; Hobfoll 1988; Wells 1983).

DeWolf (1986) believes that either the exposure to or interaction of multiple stressful
conditions over an extended period of time is a complex process whereby some workers
experience psychological health-related problems. Brodsky (1984) found in her examination
of 2,000 workers who were her patients over 18 years that: (1) the timing, frequency, intensity
and duration of unpleasant work conditions were potentially harmful, and she believed that 8
to 10% of the workforce experienced disabling psychological, emotional and physical health-
related problems; and (2) workers react to work-related stress in part as “a function of
perceptions, personality, age, status, life stage, unrealized expectations, prior experiences,
social support systems and their capacity to respond adequately or adapt.” In addition,
psychological distress can potentially be exacerbated by the worker feeling a sense of
uncontrollability (e.g., inability to make decisions) and unpredictability in the work
environment (e.g., corporate downsizing and reorganizing) (Labig 1995; Link and Stueve
1994).

Specific examination of the work-related “antecedents” of workers experiencing psychosis has received limited attention. The few researchers who have empirically examined the
relationship between psychosocial factors in the work environment and severe
psychopathology have found a relationship between “noisome” work conditions (i.e., noise,
hazardous conditions, heat, humidity, fumes and cold) and psychosis (Link, Dohrenwend and
Skodol 1986; Muntaner et al. 1991). Link, Dohrenwend and Skodol (1986) were interested in
understanding the types of jobs schizophrenics had when they experienced their first
schizophrenic episode. First full-time occupations were examined for workers who
experienced: (a) schizophrenic or schizophrenic-like episodes; (b) depression; and (c) no
psychopathology. These researchers found that noisome work conditions were more common among blue-collar than white-collar professions, and they concluded that noisome work
conditions were potentially significant risk factors in the manifestation of psychotic episodes
(i.e., schizophrenia).

Muntaner et al. (1991) replicated the findings of Link, Dohrenwend and Skodol (1986) and
examined in greater detail whether various occupational stressors contributed to increased risk
of developing or experiencing psychoses. Three types of psychotic condition were examined
using the criteria of DSM III—schizophrenia; schizophrenia criterion A (hallucinations and
delusions); and schizophrenia criterion A with affective episode (psychotic-affective
disorder). Participants in their retrospective study were from a larger Epidemiologic
Catchment Area (ECA) study examining the incidence of psychiatric disorders across five
sites (Connecticut, Maryland, North Carolina, Missouri and California). These researchers
found that psychosocial work characteristics (i.e., high physical demands, lack of control over
work and working conditions—noisome factors) placed participants at increased risk of
psychotic occurrences.

As illustrations, in the Muntaner et al. (1991) study, people in construction trade occupations
(i.e., carpenters, painters, roofers, electricians, plumbers) were 2.58 times more likely to
experience delusions or hallucinations than people in managerial occupations. Workers in
housekeeping, laundry, cleaning and servant-type occupations were 4.13 times more likely to
become schizophrenic than workers in managerial occupations. Workers who identified
themselves as writers, artists, entertainers and athletes were 3.32 times more likely to
experience delusions or hallucinations in comparison to workers in executive, administrative
and managerial occupations. Lastly, workers in occupations such as sales, mail and message
delivery, teaching, library science and counselling were more at risk of psychotic, affective
disorders. It is important to note that the associations between psychotic conditions and
occupational variables were examined after alcohol and drug use was controlled for in their
study.

A significant difference between blue-collar and white-collar professions is the types of psychological demand and psychosocial stress placed on the worker. This is illustrated in the
findings of Muntaner et al. (1993). They found an association between a work environment’s
cognitive complexity and psychotic forms of mental illness. The most frequent occupations
held by schizophrenic patients during their last full-time job were characterized by their low
level of complexity in dealing with people, information and objects (e.g., janitors, cleaners,
gardeners, guards). A few researchers have examined some of the consequences of first
episodic psychosis relative to employment, job performance and capacity to work (Jorgensen
1987; Massel et al. 1990; Beiser et al. 1994). For example, Beiser and co-workers examined
occupational functioning after the first episode of psychosis. These researchers found 18
months after the first episode that the “psychosis compromise[d] occupational functioning”.
In other words, there was a higher post-morbid decline among schizophrenic workers than
among those suffering from affective disorders. Similarly, Massel et al. (1990) found that the
work capacity of psychotics (e.g., people with schizophrenia, affective disorders with
psychotic features or atypical psychotic disorders) was impaired in comparison to non-
psychotics (e.g., people with affective disorders without psychotic features, anxiety disorders,
personality disorders and substance abuse disorders). Psychotics in their study showed
marked thought disturbance, hostility and suspiciousness which correlated with poor work
performance.
In summary, our knowledge about the relationship between work-related factors and
psychosis is in the embryonic stage. As Brodsky (1984) states, “the physical and chemical
hazards of the workplace have received considerable attention, but the psychological stresses
associated with work have not been as widely discussed, other than in relation to managerial
responsibilities or to the coronary-prone behaviour pattern”. This means that research on the
topic of work-related psychosis is vitally needed, especially since workers spend an average
of 42 to 44% of their lives working (Hines, Durham and Geoghegan 1991; Lemen 1995) and
work has been associated with psychological well-being (Warr 1978). We need to have a
better understanding of what types of occupational stressor under what types of condition
influence which types of psychological disorder. For example, research is needed to
determine whether there are stages which workers move through based upon intensity,
duration and frequency of psychosocial stress in the work environment, in conjunction with
personal, social, cultural and political factors occurring in their daily lives. We are dealing
with complex issues which will require in-depth inquiries and ingenious solutions.

Acute Management of the Psychotic Worker

Typically, the primary role of persons in the workplace is to respond to an acutely psychotic
worker in a manner which facilitates the person being transported safely to an emergency
room or psychiatric treatment facility. The process may be greatly facilitated if the
organization has an active employee assistance programme and a critical incident response
plan. Ideally, the organization will train key employees in advance for emergency crisis
responses and will have a plan in place for coordinating as needed with local emergency
response resources.

Treatment approaches for the psychotic worker will vary depending upon the specific type of
underlying problem. In general, all psychotic disorders should be evaluated by a professional.
Often, immediate hospitalization is warranted for the safety of the worker and the workplace.
Thereafter, a thorough evaluation can be completed to establish a diagnosis and develop a
treatment plan. The primary goal is to treat the underlying cause(s). However, even prior to
conducting a comprehensive evaluation or initiating a comprehensive treatment plan, the
physician responding to the emergency may need to focus initially on providing symptomatic
relief. Providing a structured, low-stress environment is desirable. Neuroleptics may be used
to help the patient calm down. Benzodiazepines may help reduce acute anxiety.

After managing the acute crisis, a comprehensive evaluation may include collecting a detailed
history, psychological testing, a risk assessment to establish dangerousness to self or others
and careful monitoring of response to treatment (including not only response to medications,
but also to psychotherapeutic interventions). One of the more difficult problems with many
patients who exhibit psychotic symptomatology is treatment compliance. Often these
individuals tend not to believe that they have serious difficulties, or, even if they recognize the
problem, they are sometimes inclined to decide unilaterally to discontinue treatment
prematurely. In these instances, family members, co-workers, treating clinicians, occupational
health personnel and employers are sometimes placed in awkward or difficult situations.
Sometimes, for the safety of the employee and the workplace, it becomes necessary to
mandate compliance with treatment as a condition for returning to the job.
Managing the Psychotic Worker and the Work Environment

Case example

A skilled worker on the third shift at a chemical plant began to exhibit unusual behaviour as the
company began to modify its production schedule. For several weeks, instead of leaving work after his
shift ended, he began to stay for several hours discussing his concerns about increased job demands,
quality control and changes in production procedures with his counterparts on the morning shift. He
appeared quite distressed and behaved in a manner which was atypical for him. He had formerly
been somewhat shy and distant, with an excellent job performance history. During this period of time,
he became more verbal. He also approached individuals and stood close to them in a manner which
several co-workers reported made them feel uncomfortable. While these co-workers later reported
that they felt his behaviour was unusual, no one notified the employee assistance programme (EAP)
or management of their concerns. Then, suddenly one evening, this employee was observed by his co-
workers as he began to shout incoherently, walked over to a storage area for volatile chemicals, laid
down on the ground and began to flick a cigarette lighter on and off. His co-workers and supervisor
intervened and, after consultation with the EAP, he was taken by ambulance to a nearby hospital. The
treating physician determined that he was acutely psychotic. After a brief treatment period he was
successfully stabilized on medications.

After several weeks, his treating physician felt he was able to return to his job. He underwent
a formal return-to-work evaluation with an independent clinician and was judged ready to
return to work. While his company doctor and the treating physician determined that it was
safe for him to return, his co-workers and supervisors expressed substantial concerns. Some
employees noted that they might be harmed if this episode were repeated and the chemical
storage areas ignited. The company took steps to increase security in safety-sensitive areas.
Another concern also surfaced. A number of workers stated that they believed this individual
might bring a weapon to work and start shooting. None of the professionals involved in
treating this worker or in evaluating him for return to work believed that there was a risk of
violent behaviour. The company then elected to bring in mental health professionals (with the
worker’s consent) to assure co-workers that the risk of violent behaviour was exceedingly
low, to provide education on mental illnesses, and to identify proactive steps that co-workers
could take to facilitate the return to work of a colleague who had undergone treatment.
However, in this situation, even after this educational intervention, co-workers were unwilling
to interact with this worker, further compounding the return-to-work process. While the legal
rights of individuals suffering from mental disorders, including those associated with
psychotic states, have been addressed by the Americans with Disabilities Act, practically
speaking the organizational challenges to effectively managing occurrences of psychosis at
work are often as great or greater than the medical treatment of psychotic workers.
Return to Work

The primary question to be addressed after a psychotic episode is whether the employee can
safely return to his or her current job. Sometimes organizations permit this decision to be
made by the treating clinicians. However, ideally, the organization should require its
occupational medical system to conduct an independent fitness-for-duty evaluation
(Himmerstein and Pransky 1988). In the fitness-for-duty evaluation process a number of key
pieces of information should be reviewed, including the treating clinician’s evaluation,
treatment and recommendations, as well as the worker’s prior job performance and the
specific features of the job, including the required job tasks and the organizational
environment.

If the occupational medical physician is not trained in psychiatric or psychological
fitness-for-duty evaluation, then the evaluation should be performed by an independent mental health
professional who is not the treating clinician. If some aspects of the job pose safety risks, then
specific work restrictions should be developed. These restrictions may range from minor
alterations in work activities or work schedule to more significant modifications such as
alternate job placement (e.g., a light-duty assignment or a job transfer to an alternate
position). In principle, these work restrictions are not different in kind from other restrictions
commonly provided by occupational health physicians, such as specifying the amount of
weight which a worker may be cleared to lift following a musculoskeletal injury.

As is evident in the case example above, the return to work often raises challenges not only
for the affected worker, but also for co-workers, supervisors and the broader organization.
While professionals are obligated to protect the confidentiality of the affected worker to the
fullest extent permitted by law, if the worker is willing and competent to sign an appropriate
release of information, then the occupational medical system can provide or coordinate
consultation and educational interventions to facilitate the return-to-work process. Often,
coordination between the occupational medical system, the employee assistance programme,
supervisors, union representatives and co-workers is critical to a successful outcome.

The occupational health system should also periodically monitor the worker’s readjustment to
the workplace in collaboration with the supervisor. In some instances, it may be necessary to
monitor the worker’s compliance with a medication regimen recommended by the treating
physician—for example, as a precondition for being permitted to engage in certain safety-
sensitive job tasks. More importantly, the occupational medical system must consider not only
what is best for the worker, but also what is safe for the workplace. The occupational medical
system may also play a critical role in assisting the organization in complying with legal
requirements such as the Americans with Disabilities Act as well as in interfacing with
treatments provided under the organization’s health care plan and/or the workers’
compensation system.

Prevention Programming

At present, there is no literature on specific prevention or early intervention programmes for
reducing the incidence of psychosis in the workforce. Employee assistance programmes may
play a crucial role in the early identification and treatment of psychotic workers. Since stress
may contribute to the incidence of psychotic episodes within working populations, various
organizational interventions which identify and modify organizationally created stress may
also be helpful. These general programmatic efforts may include job redesign, flexible
scheduling, self-paced work, self-directed work teams and microbreaks, as well as specific
programming to reduce the stressful impact of reorganization or downsizing.

Conclusion

While psychosis is a comparatively rare and multiply determined phenomenon, its occurrence
within working populations raises substantial practical challenges for co-workers, union
representatives, supervisors and occupational health professionals. Psychosis may occur as a
direct consequence of a work-related toxic exposure. Work-related stress may also increase
the incidence of psychosis among workers who suffer from (or are at risk of developing)
mental disorders which place them at risk of psychosis. Additional research is needed to:
(1) better understand the relationship between workplace factors and psychosis; and
(2) develop more effective approaches for managing psychosis in the workplace and reducing
its incidence.
Mental Health References
American Psychiatric Association (APA). 1980. Diagnostic and Statistical Manual of Mental
Disorders (DSM III). 3rd edition. Washington, DC: APA Press.

—. 1994. Diagnostic and Statistical Manual of Mental Disorders (DSM IV). 4th edition.
Washington, DC: APA Press.

Ballenger, J. 1993. The co-morbidity and etiology of anxiety and depression. Update on
Depression. Smith-Kline Beecham Workshop. Marina del Rey, Calif., 4 April.

Barchas, JD, JM Stolk, RD Ciaranello, and DA Hamberg. 1971. Neuroregulatory agents and
psychological assessment. In Advances in Psychological Assessment, edited by P
McReynolds. Palo Alto, Calif.: Science and Behavior Books.

Beaton, R, S Murphy, K Pike, and M Jarrett. 1995. Stress-symptom factors in firefighters and
paramedics. In Organizational Risk Factors for Job Stress, edited by S Sauter and L Murphy.
Washington, DC: APA Press.

Beiser, M, G Bean, D Erickson, K Zhan, WG Iscono, and NA Rector. 1994. Biological and
psychosocial predictors of job performance following a first episode of psychosis. Am J
Psychiatr 151(6):857-863.

Bentall, RP. 1990. The illusion or reality: A review and integration of psychological research
on hallucinations. Psychol Bull 107(1):82-95.

Braverman, M. 1992a. Post-trauma crisis intervention in the workplace. In Stress and Well-
Being at Work: Assessments and Interventions for Occupational Mental Health, edited by JC
Quick, LR Murphy, and JJ Hurrell. Washington, DC: APA Press.

—. 1992b. A model of intervention for reducing stress related to trauma in the workplace.
Cond Work Dig 11(2).

—. 1993a. Preventing stress-related losses: Managing the psychological consequences of worker injury. Compens Benefits Manage 9(2) (Spring).

—. 1993b. Coping with trauma in the workplace. Compens Benefits Manage 9(2) (Spring).

Brodsky, CM. 1984. Long-term work stress. Psychosomatics 25(5):361-368.

Buono, A and J Bowditch. 1989. The Human Side of Mergers and Acquisitions. San
Francisco: Jossey-Bass.

Charney, EA and MW Weissman. 1988. Epidemiology of depressive and manic syndromes. In Depression and Mania, edited by A Georgotas and R Cancro. New York: Elsevier.

Comer, NL, L Madow, and JJ Dixon. 1967. Observation of sensory deprivation in a life-
threatening situation. Am J Psychiatr 124:164-169.
Cooper, C and R Payne. 1992. International perspectives on research into work, well-being
and stress management. In Stress and Well-Being at Work, edited by J Quick, L Murphy, and
J Hurrell. Washington, DC: APA Press.

Dartigues, JF, M Gagnon, L Letenneur, P Barberger-Gateau, D Commenges, M Evaldre, and R Salamon. 1991. Principal lifetime occupation and cognitive impairment in a French elderly cohort (Paquid). Am J Epidemiol 135:981-988.

Deutschmann, C. 1991. The worker-bee syndrome in Japan: An analysis of working-time practices. In Working Time in Transition: The Political Economy of Working Hours in Industrial Nations, edited by K Hinrichs, W Roche, and C Sirianni. Philadelphia: Temple Univ. Press.

DeWolf, CJ. 1986. Methodological problems in stress studies. In The Psychology of Work
and Organizations, edited by G Debus and HW Schroiff. North Holland: Elsevier Science.

Drinkwater, J. 1992. Death from overwork. Lancet 340: 598.

Eaton, WW, JC Anthony, W Mandel, and R Garrison. 1990. Occupations and the prevalence
of major depressive disorder. J Occup Med 32(11):1079-1087.

Entin, AD. 1994. The work place as family, the family as work place. Unpublished paper
presented at the American Psychological Association, Los Angeles, California.

Eysenck, HJ. 1982. The definition and measurement of psychoticism. Personality Indiv Diff
13(7):757-785.

Farmer, ME, SJ Kittner, DS Rae, JJ Bartko, and DA Regier. 1995. Education and change in
cognitive function. The epidemiological catchment area study. Ann Epidemiol 5:1-7.

Freudenberger, HJ. 1975. The staff burn-out syndrome in alternative institutions. Psycother
Theory, Res Pract 12:1.

—. 1984a. Burnout and job dissatisfaction: Impact on the family. In Perspectives on Work
and Family, edited by JC Hammer and SH Cramer. Rockville, Md: Aspen.

—. 1984b. Substance abuse in the work place. Cont Drug Prob 11(2):245.

Freudenberger, HJ and G North. 1986. Women’s Burnout: How to Spot It, How to Reverse It
and How to Prevent It. New York: Penguin Books.

Freudenberger, HJ and G Richelson. 1981. Burnout: How to Beat the High Cost of Success.
New York: Bantam Books.

Friedman, M and RH Rosenman. 1959. Association of specific overt behavior pattern with
blood and cardiovascular findings. J Am Med Assoc 169:1286-1296.

Greenberg, PE, LE Stiglin, SN Finkelstein, and ER Berndt. 1993a. The economic burden of
depression in 1990. J Clin Psychiatry 54(11):405-418.
—. 1993b. Depression: A neglected major illness. J Clin Psychiatry 54(11):419-424.

Gründemann, RWM, ID Nijboer, and AJM Schellart. 1991. The Work-Relatedness of Drop-
Out from Work for Medical Reasons. Den Haag: Ministry of Social Affairs and Employment.

Hayano, J, S Takeuchi, S Yoshida, S Jozuka, N Mishima, and T Fujinami. 1989. Type A behavior pattern in Japanese employees: Cross-cultural comparison of major factors in Jenkins Activity Survey (JAS) responses. J Behav Med 12(3):219-231.

Himmerstein, JS and GS Pransky. 1988. Occupational Medicine: Worker Fitness and Risk
Evaluations. Vol. 3. Philadelphia: Hanley & Belfus.

Hines, LL, TW Durham, and GR Geoghegan. 1991. Work and self-concept: The development
of a scale. J Soc Behav Personal 6:815-832.

Hobfoll, WE. 1988. The Ecology of Stress. New York: Hemisphere.

Holland, JL. 1973. Making Vocational Choices: A Theory of Careers. Englewood Cliffs, NJ:
Prentice Hall.

Houtman, ILD and MAJ Kompier. 1995. Risk factors and occupational risk groups for work
stress in the Netherlands. In Organizational Risk Factors for Job Stress, edited by SL Sauter
and LR Murphy. Washington, DC: APA Press.

Houtman, I, A Goudswaard, S Dhondt, M van der Grinten, V Hildebrandt, and M Kompier. 1995. Evaluation of the Monitor on Stress and Physical Load. The Hague: VUGA.

Human Capital Initiative (HCI). 1992. Changing nature of work. APS Observer Special Issue.

International Labour Organization (ILO). 1995. World Labour Report. No. 8. Geneva: ILO.

Jeffreys, J. 1995. Coping With Workplace Change: Dealing With Loss and Grief. Menlo Park,
Calif.: Crisp.

Jorgensen, P. 1987. Social course and outcome of delusional psychosis. Acta Psychiatr Scand
75:629-634.

Kahn, JP. 1993. Mental Health in the Workplace: A Practical Psychiatric Guide. New York:
Van Nostrand Reinhold.

Kaplan, HI and BJ Sadock. 1994. Synopsis of Psychiatry: Behavioral Sciences/Clinical Psychiatry. Baltimore: Williams & Wilkins.

Kaplan, HI and BJ Sadock. 1995. Comprehensive Textbook of Psychiatry. Baltimore: Williams & Wilkins.

Karasek, R. 1979. Job demands, job decision latitude, and mental strain: Implications for job
redesign. Adm Sci Q 24:285-307.
Karasek, R and T Theorell. 1990. Healthy Work. New York: Basic Books.
Katon, W, A Kleinman, and G Rosen. 1982. Depression and somatization: A review. Am J
Med 72:241-247.

Kobasa, S, S Maddi, and S Kahn. 1982. Hardiness and health: A prospective study. J Personal
Soc Psychol 45:839-850.

Kompier, M, E de Gier, P Smulders, and D Draaisma. 1994. Regulations, policies and practices concerning work stress in five European countries. Work Stress 8(4):296-318.

Krumboltz, JD. 1971. Job Experience Kits. Chicago: Science Research Associates.

Kuhnert, K and R Vance. 1992. Job insecurity and moderators of the relation between job
insecurity and employee adjustment. In Stress and Well-Being at Work, edited by J Quick, L
Murphy, and J Hurrell Jr. Washington, DC: APA Press.

Labig, CE. 1995. Preventing Violence in the Workplace. New York: AMACON.

Lazarus, RS. 1991. Psychological stress in the workplace. J Soc Behav Personal 6(7):114.

Lemen, R. 1995. Welcome and opening remarks. Presented at Work, Stress and Health ’95:
Creating Healthier Workplaces Conference, 15 September 1995, Washington, DC.

Levi, L, M Frankenhaeuser, and B Gardell. 1986. The characteristics of the workplace and the
nature of its social demands. In Occupational Stress: Health and Performance at Work, edited
by SG Wolf and AJ Finestone. Littleton, Mass: PSG.

Link, BG, BP Dohrenwend, and AE Skodol. 1986. Socio-economic status and schizophrenia:
Noisome occupational characteristics as a risk factor. Am Soc Rev 51 (April):242-258.

Link, BG and A Stueve. 1994. Psychotic symptoms and the violent/illegal behaviour of
mental patients compared to community controls. In Violence and Mental Disorders:
Development in Risk Assessment, edited by J Monahan and HJ Steadman. Chicago, Illinois:
Univ. of Chicago.

Lowman, RL. 1993. Counseling and Psychotherapy of Work Dysfunctions. Washington, DC:
APA Press.

MacLean, AA. 1986. High Tech Survival Kit: Managing Your Stress. New York: John Wiley
& Sons.

Mandler, G. 1993. Thought, memory and learning: Effects of emotional stress. In Handbook
of Stress: Theoretical and Clinical Aspects, edited by L Goldberger and S Breznitz. New
York: Free Press.

Margolis, BK and WH Kroes. 1974. Occupational stress and strain. In Occupational Stress,
edited by A McLean. Springfield, Ill: Charles C. Thomas.

Massel, HK, RP Liberman, J Mintz, HE Jacobs, RV Rush, CA Giannini, and R Zarate. 1990.
Evaluating the capacity to work of the mentally ill. Psychiatry 53:31-43.
McGrath, JE. 1976. Stress and behavior in organizations. In Handbook of Industrial and
Organizational Psychology, edited by MD Dunnette. Chicago: Rand McNally College.

McIntosh, N. 1995. Exhilarating work: An antidote for dangerous work. In Organizational Risk Factors for Job Stress, edited by S Sauter and L Murphy. Washington, DC: APA Press.

Mishima, N, S Nagata, T Haratani, N Kawakami, S Araki, J Hurrell, S Sauter, and N Swanson. 1995. Mental health and occupational stress of Japanese local government employees. Presented at Work, Stress, and Health '95: Creating Healthier Workplaces, 15 September 1995, Washington, DC.

Mitchell, J and G Bray. 1990. Emergency Service Stress. Englewood Cliffs, NJ: Prentice Hall.

Monou, H. 1992. Coronary-prone behavior pattern in Japan. In Behavioral Medicine: An Integrated Biobehavioral Approach to Health and Illness, edited by S Araki. Amsterdam: Elsevier Science.

Muntaner, C, A Tien, WW Eaton, and R Garrison. 1991. Occupational characteristics and the
occurrence of psychotic disorders. Social Psych Psychiatric Epidemiol 26:273-280.

Muntaner, C, AE Pulver, J McGrath, and WW Eaton. 1993. Work environment and schizophrenia: An extension of the arousal hypothesis to occupational self-selection. Social Psych Psychiatric Epidemiol 28:231-238.

National Defense Council for Victims of Karoshi. 1990. Karoshi. Tokyo: Mado Sha.
Neff, WS. 1968. Work and Human Behavior. New York: Atherton.

Northwestern National Life. 1991. Employee Burnout: America’s Newest Epidemic. Survey
Findings. Minneapolis, Minn: Northwestern National Life.

O’Leary, L. 1993. Mental health at work. Occup Health Rev 45:23-26.

Quick, JC, LR Murphy, JJ Hurrell, and D Orman. 1992. The value of work, the risk of distress
and the power of prevention. In Stress and Well-Being: Assessment and Interventions for
Occupational Mental Health, edited by JC Quick, LR Murphy, and JJ Hurrell. Washington,
DC: APA Press.

Rabkin, JG. 1993. Stress and psychiatric disorders. In Handbook of Stress: Theoretical and
Clinical Aspects, edited by L Goldberger and S Breznitz. New York: Free Press.

Robins, LN, JE Helzer, J Croughan, JBW Williams, and RE Spitzer. 1981. NIMH Diagnostic
Interview Schedule: Version III. Final report on contract no. 278-79-00 17DB and Research
Office grant no. 33583. Rockville, Md: Department of Health and Human Services.

Rosch, P and K Pelletier. 1987. Designing workplace stress management programs. In Stress
Management in Work Settings, edited by L Murphy and T Schoenborn. Rockville, Md: US
Department of Health and Human Services.

Ross, DS. 1989. Mental health at work. Occup Health Safety 19(3):12.
Sauter, SL, LR Murphy, and JJ Hurrell. 1992. Prevention of work-related psychological
disorders: A national strategy proposed by the National Institute for Occupational Safety and
Health (NIOSH). In Work and Well-Being: An Agenda for the 1990s, edited by SL Sauter and G
Puryear Keita. Washington, DC: APA Press.

Shellenberger, S, SS Hoffman, and R Gerson. 1994. Psychologists and the changing family-
work system. Unpublished paper presented at the American Psychological Association, Los
Angeles, California.

Shima, S, H Hiro, M Arai, T Tsunoda, T Shimomitsu, O Fujita, L Kurabayashi, A Fujinawa, and M Kato. 1995. Stress coping style and mental health in the workplace. Presented at Work, Stress and Health '95: Creating Healthier Workplaces, 15 September 1995, Washington, DC.

Smith, M, D Carayon, K Sanders, S Lim, and D LeGrande. 1992. Employee stress and health
complaints in jobs with and without electronic performance monitoring. Appl Ergon 23:17-
27.

Srivastava, AK. 1989. Moderating effect of n-self actualization on the relationship of role
stress with job anxiety. Psychol Stud 34:106-109.

Sternbach, D. 1995. Musicians: A neglected working population in crisis. In Organizational Risk Factors for Job Stress, edited by S Sauter and L Murphy. Washington, DC: APA Press.

Stiles, D. 1994. Video display terminal operators: Technology's biopsychosocial stressors. J Am Assoc Occup Health Nurses 42:541-547.

Sutherland, VJ and CL Cooper. 1988. Sources of work stress. In Occupational Stress: Issues
and Development in Research, edited by JJ Hurrell Jr, LR Murphy, SL Sauter, and CL
Cooper. New York: Taylor & Francis.

Uehata, T. 1978. A study on death from overwork. (I) Considerations about 17 cases. Sangyo
Igaku (Jap J Ind Health) 20:479.

—. 1989. A study of Karoshi in the field of occupational medicine. Bull Soc Med 8:35-50.

—. 1991a. Long working hours and occupational stress-related cardiovascular attacks among
middle-aged workers in Japan. J Hum Ergol 20(2):147-153.

—. 1991b. Karoshi due to occupational stress-related cardiovascular injuries among middle-aged workers in Japan. J Sci Labour 67(1):20-28.

Warr, P. 1978. Work and Well-Being. New York: Penguin.

—. 1994. A conceptual framework for the study of work and mental health. Work Stress
8(2):84-97.
Wells, EA. 1983. Hallucinations associated with pathological grief reaction. J Psychiat Treat
Eval 5:259-261.

Wilke, HJ. 1977. The authority complex and the authoritarian personality. J Anal Psychol
22:243-249.
Yates, JE. 1989. Managing Stress. New York: AMACON.

Yudofsky, S, RE Hales, and T Ferguson. 1991. What You Need to Know about Psychiatric
Drugs. New York: Grove Weidenfeld.

Zachary, G and B Ortega. 1993. Age of Angst: Workplace revolutions boost productivity at cost of job security. Wall Street J, 10 March.

Copyright 2015 International Labour Organization


6. Musculoskeletal System
Chapter Editors: Hilkka Riihimäki and Eira Viikari-Juntura

Overview

Written by ILO Content Manager

Musculoskeletal disorders are among the most important occupational health problems in both
developed and developing countries. These disorders affect the quality of life of most people
during their lifetime. The annual cost of musculoskeletal disorders is great. In the Nordic
countries, for example, it is estimated to vary from 2.7 to 5.2% of the gross national product
(Hansen 1993; Hansen and Jensen 1993). The proportion of all musculoskeletal diseases that
are attributable to work is thought to be approximately 30%. Thus, much is to be gained by
prevention of work-related musculoskeletal disorders. To accomplish this goal, a good
understanding is needed of the healthy musculoskeletal system, musculoskeletal diseases and
the risk factors for musculoskeletal disorders.
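
Combining the two estimates above gives a rough range for the work-attributable share of that cost. A minimal arithmetic sketch in Python (the percentages are those quoted in the text; applying the 30% attributable fraction directly to costs is a simplifying assumption):

    # Annual cost of musculoskeletal disorders as a share of GNP (Nordic estimates).
    total_cost_share = (0.027, 0.052)    # 2.7% to 5.2% of GNP
    work_attributable_fraction = 0.30    # about 30% of cases thought to be work related

    # Work-related share of GNP, assuming costs scale with the attributable fraction.
    low, high = (share * work_attributable_fraction for share in total_cost_share)
    print(f"Work-related musculoskeletal cost: {low:.1%} to {high:.1%} of GNP")
    # roughly 0.8% to 1.6% of GNP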

Most musculoskeletal diseases cause local ache or pain and restriction of motion that may
hinder normal performance at work or in other everyday tasks. Nearly all musculoskeletal
diseases are work-related in the sense that physical activity can aggravate or provoke
symptoms even if the diseases were not directly caused by work. In most cases, it is not
possible to point to one causal factor for musculoskeletal diseases. Conditions caused solely
by accidental injuries are an exception; in most cases several factors play a role. For many of
the musculoskeletal diseases, mechanical load at work and leisure is an important causal
factor. Sudden overload, or repetitive or sustained loading can injure various tissues of the
musculoskeletal system. On the other hand, too low a level of activity can lead to
deterioration of the condition of muscles, tendons, ligaments, cartilage and even bones.
Keeping these tissues in good condition requires appropriate use of the musculoskeletal
system.

The musculoskeletal system essentially consists of similar tissues in different parts of the
body, which provide a panorama of diseases. The muscles are the most common site of pain.
In the lower back the intervertebral discs are common problem tissues. In the neck and the
upper limbs, tendon and nerve disorders are common, while in the lower limbs, osteoarthritis
is the most important pathological condition.

In order to understand these bodily differences, it is necessary to comprehend basic


anatomical and physiological features of the musculoskeletal system and to learn the
molecular biology of various tissues, the source of nutrition and the factors affecting normal
function. The biomechanical properties of various tissues are also fundamental. It is necessary
to understand both the physiology of the normal function of the tissues and the pathophysiology,
that is, what goes wrong. These aspects are described in the first articles for intervertebral
discs, bones and joints, tendons, muscles and nerves. In the articles which follow,
musculoskeletal disorders are described for the different anatomical regions. Symptoms and
signs of the most important diseases are outlined and the occurrence of the disorders in
populations is described. Current understanding, based on epidemiological research, of both
work- and person-related risk factors is presented. For many disorders there are quite
convincing data on work-related risk factors, but, for the time being, only limited data are
available on exposure-effect relationships between the risk factors and the disorders. Such
data are needed in order to set guidelines to design safer work.

Despite the lack of quantitative knowledge, directions for prevention can be proposed. The
primary approach to prevention of work-related musculoskeletal disorders is redesign of work
in order to optimize the workload and make it compatible with the physical and mental
performance capacity of the workers. It is also important to encourage workers to keep fit
through regular physical exercise.

Not all musculoskeletal diseases described in this chapter have a causal relationship to work.
It is, however, important for occupational health and safety personnel to be aware of such
diseases and consider workload also in relation to them. Fitting the work to the performance
capacity of the worker will help him or her to work successfully and healthfully.

Muscles

Written by ILO Content Manager

Physical activity may increase muscle strength and working capacity through changes such
as growth in muscle volume and increased metabolic capacity. Different activity patterns
cause a variety of biochemical and morphological adaptations in the muscles. In general, a
tissue must be active to remain capable of living. Inactivity causes atrophy, especially in
muscle tissue. Sports medicine and scientific investigations have shown that various training
regimes can produce very specific muscular changes. Strength training, which places strong
forces on the muscles, increases the number of contractile filaments (myofibrils) and the
volume of the sarcoplasmic reticulum (see figure 1). High-intensity exercise increases
muscular enzyme activity. The fractions of glycolytic and oxidative enzymes are closely
related to the work intensity. In addition, prolonged intense exercise increases the capillary
density.

Figure 1. A diagrammatic representation of the major components of a muscle cell involved in excitation-contraction coupling, as well as the site of ATP production, the mitochondrion.

Sometimes, too much exercise can induce muscle soreness, a phenomenon well known to
everyone who has demanded muscular performance beyond his or her capacity. When a
muscle is overused, first deteriorating processes set in, which are followed by reparative
processes. If sufficient time for repair is allowed, the muscle tissue may end up with increased
capacities. Prolonged overuse with insufficient time for repair, on the other hand, causes
fatigue and impairs muscle performance. Such prolonged overuse may induce chronic
degenerative changes in the muscles.

Other aspects of muscle use and misuse include the motor control patterns for various work
tasks, which depend on force level, rate of force development, type of contraction, duration
and the precision of the muscle task (Sjøgaard et al. 1995). Individual muscle fibres are
“recruited” for these tasks, and some recruitment patterns may induce a high load on
individual motor units even when the load on the muscle as a whole is small. Extensive
recruitment of a particular motor unit will inevitably induce fatigue; and occupational muscle
pain and injury may follow and could easily be related to the fatigue caused by insufficient
muscle blood flow and intramuscular biochemical changes due to this high demand (Edwards
1988). High muscle tissue pressures may also impede muscle blood flow, which can reduce
the ability of essential chemicals to reach the muscles, as well as the ability of the blood to
remove waste products; this can cause energy crises in the muscles. Exercise can induce
calcium to accumulate, and free radical formation may also promote degenerative processes
such as the breakdown of muscle membrane and the impairment of normal metabolism
(mitochondrial energy turnover) (figure 2). These processes may ultimately lead to
degenerative changes in the muscle tissue itself. Fibres with marked degenerative
characteristics have been found more frequently in muscle biopsies from patients with work-
related chronic muscle pain (myalgia) than in normal subjects. Interestingly, the degenerated
muscle fibres thus identified are “slow twitch fibres”, which connect with low-threshold
motor nerves. These are the nerves normally recruited at low sustained forces, not high force
related tasks. The perception of fatigue and pain may play an important role in preventing
muscle injury. Protective mechanisms induce the muscles to relax and recover in order to
regain strength (Sjøgaard 1990). If such biofeedback from the peripheral tissues is ignored,
the fatigue and pain may eventually result in chronic pain.

Figure 2. A blow-up of the muscle membrane and the structures inside the muscle cell shown in figure 1. The chain of events in the pathogenesis of calcium (Ca2+) induced damage in muscle cells is illustrated.

Sometimes, after frequent overuse, various normal cellular chemical substances may not only
cause pain themselves but may increase the response of muscular receptors to other stimuli,
thereby lowering the threshold of activation (Mense 1993). The nerves which carry the signals
from the muscles to the brain (sensory afferents) may thus be sensitized over time, which
means that a given dose of pain-causing substances elicits a stronger excitation response.
That is, the threshold of activation is reduced and smaller exposures may cause larger
responses. Interestingly, the cells which normally serve as pain receptors (nociceptors) in
uninjured tissue are silent, but these nerves may also develop ongoing pain activity which can
persist even after the cause of the pain has terminated. This effect may explain chronic states
of pain which are present after the initial injury has healed. When pain persists after healing,
the original morphological changes in the soft tissues may be difficult to identify, even if the
primary or initial cause of the pain is located in these peripheral tissues. Thus, the real “cause”
of the pain may be impossible to trace.

Risk Factors and Preventive Strategies

Work-related risk factors of muscle disorders include repetition, force, static load, posture,
precision, visual demand and vibration. Inappropriate work/rest cycles may be a potential risk
factor for musculoskeletal disorders if sufficient recovery periods are not allowed before the
next working period, thus never affording enough time for physiological rest. Environmental,
sociocultural or personal factors may also play a role. Musculoskeletal disorders are
multifactorial, and, in general, simple cause-effect relationships are difficult to detect. It is,
however, important to document the extent to which occupational factors can be causally
related to the disorders, since, only in the case of causality, the elimination or minimization of
the exposure will help prevent the disorders. Of course, different preventive strategies must be
implemented depending on the type of work task. In the case of high-intensity work the aim is
to reduce force and work intensity, while for monotonous repetitive work it is more important
to induce variation in the work. In short, the aim is optimization of the exposure.

Occupational Diseases

Work-related muscle pain is reported most frequently in the neck and shoulder area, the
forearm and the low back. Although it is a major cause of sick leave, there is much confusion
with regard to classifying the pain and specifying diagnostic criteria. Common terms which
are used are given in three categories (see figure 3).

Figure 3. Classification of muscle diseases.

When muscular pain is assumed to be work-related, it can be classified into one of the
following disorders:

 Occupational cervicobrachial disorders (OCD)


 Repetition strain injury (RSI)
 Cumulative trauma disorders (CTD)
 Overuse (injury) syndrome
 Work-related neck and upper-limb disorders.
The taxonomy of the work-related neck and upper-limb disorders clearly demonstrates that
the aetiology includes external mechanical loads, which may well occur in the workplace.
Besides disorders in the muscle tissue itself, this category also includes disorders in other soft
tissues of the musculoskeletal system. Of note is that the diagnostic criteria may not allow the
disorder to be located specifically in one of these soft tissues. In fact, it is likely that
morphological changes at the musculo-tendinous junctions are related to the perception of
muscle pain, which argues for including the term fibromyalgia among the local muscle
disorders (see figure 3).

Unfortunately, different terms are used for essentially the same medical condition. In recent
years, the international scientific community has focused increasingly on classification and
diagnostic criteria for musculoskeletal disorders. A distinction is made between generalized
and local or regional pain (Yunus 1993). Fibromyalgia syndrome is a generalized pain
condition but is not considered to be work related. On the other hand, localized pain disorders
are likely to be associated with specific work tasks. Myofascial pain syndrome, tension neck
and rotator cuff syndrome are localized pain disorders that can be considered as work-related
diseases.

Tendons

Written by ILO Content Manager

The deformation that occurs immediately as force is applied or removed is called “elastic”
deformation. The time-dependent deformation that continues after force is applied or removed
is called “viscous” deformation. Because tissues of the body exhibit both elastic and viscous properties, they are
called “viscoelastic”. If the recovery time between successive exertions is not long enough for
a given force and duration, the recovery will not be complete and the tendon will be stretched
further with each successive exertion. Goldstein et al. (1987) found that when finger flexor
tendons were subjected to 8 seconds (s) physiological loads and 2 s rest, the accumulated
viscous strain after 500 cycles was equal to the elastic strain. When the tendons were
subjected to 2 s work and 8 s rest, the accumulated viscous strain after 500 cycles was
negligible. Critical recovery times for given work-rest profiles have not yet been determined.
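
The way viscous strain can build up cycle by cycle can be illustrated with a simple first-order creep-and-recovery model. The Python sketch below is not the model or the data of Goldstein et al. (1987); the time constant and plateau value are arbitrary assumptions chosen only to show why an 8 s load / 2 s rest profile leaves a residual strain while a 2 s load / 8 s rest profile leaves much less.

    import math

    def residual_strain(load_s, rest_s, cycles, tau=5.0, plateau=1.0):
        """Residual viscous strain after repeated load/rest cycles.

        First-order creep toward 'plateau' while loaded and exponential
        recovery while unloaded, both with time constant 'tau' (seconds).
        All parameter values are illustrative assumptions, not measurements.
        """
        strain = 0.0
        for _ in range(cycles):
            # Creep toward the plateau while the tendon is loaded.
            strain = plateau + (strain - plateau) * math.exp(-load_s / tau)
            # Partial recovery while the tendon is unloaded.
            strain *= math.exp(-rest_s / tau)
        return strain

    print(residual_strain(8, 2, 500))  # long load, short rest: large residual strain
    print(residual_strain(2, 8, 500))  # short load, long rest: much smaller residual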

Tendons can be characterized as composite structures with parallel bundles of collagen fibres
arranged in a gelatinous matrix of mucopolysaccharide. Tensile forces on the ends of the
tendon cause unfolding of corrugations and straightening of the collagen strands. Additional
loads cause stretching of the straightened strands. Consequently, the tendon gets stiffer as it
gets longer. Compressive forces perpendicular to the long axis of the tendon cause the
collagen strands to be forced closer together, and result in a flattening of the tendon. Shear
forces on the side of the tendon cause displacement of the collagen strands closest to the
surface with respect to those farthest away, and give the side view of the tendon a skewed
look.
Tendons as Structures

Forces are transmitted through tendons to maintain static and dynamic balance for specified
work requirements. Contracting muscles tend to rotate the joints in one direction while the
weight of the body and of work objects tends to rotate them in the other. Exact determination
of these tendon forces is not possible because there are multiple muscles and tendons acting
about each joint structure; however, it can be shown that the muscle forces acting on the
tendons are much greater than the weight or reaction forces of work objects.
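
That imbalance follows from a static moment (torque) balance about the joint: the tendon acts through a much shorter moment arm than the held object, so its tension must be correspondingly larger. A minimal sketch with hypothetical dimensions (the weight and moment arms below are assumptions, not anatomical data):

    # Static moment balance about a single joint. All values are assumed,
    # chosen only to illustrate the order-of-magnitude difference.
    object_weight_n = 20.0       # weight of a held work object, newtons
    object_moment_arm_m = 0.30   # distance from the joint centre to the object, metres
    tendon_moment_arm_m = 0.02   # distance from the joint centre to the tendon, metres

    # Rotational equilibrium: F_tendon * r_tendon = W_object * r_object.
    tendon_force_n = object_weight_n * object_moment_arm_m / tendon_moment_arm_m
    print(f"Tendon tension: {tendon_force_n:.0f} N for a {object_weight_n:.0f} N object")
    # 300 N, i.e., fifteen times the weight of the object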

The forces exerted by contracting muscles are called tensile forces because they stretch the
tendon. Tensile forces can be demonstrated by pulling on the ends of a rubber band. Tendons
also are subjected to compressive and shear forces and to fluid pressures, which are illustrated
in figure 1 for the finger flexor tendons in the wrist.

Figure 1. Schematic diagram of tendon stretched around an anatomical surface or pulley and
the corresponding tensile forces (Ft), compressive forces (Fc), friction forces (Ff) and
hydrostatic or fluid pressure (Pf).

Exertion of the fingers to grasp or manipulate work objects requires the contraction of
muscles in the forearm and hand. As the muscles contract, they pull on the ends of their
respective tendons, which pass through the centre and circumference of the wrist. If the wrist
is not held in a position so that the tendons are perfectly straight, they will press against
adjacent structures. The finger flexor tendons press against the bones and ligaments inside the
carpal tunnel. These tendons can be seen to protrude under the skin toward the palm during
forceful pinching with a flexed wrist. Similarly, the extensor and abductor tendons can be
seen to protrude on the back and side of the wrist when it is extended with outstretched
fingers.

Friction or shear forces are caused by dynamic exertions in which the tendons rub against
adjacent anatomical surfaces. These forces act on and parallel to the surface of the tendon.
Friction forces can be felt by simultaneously pressing and sliding the hand against a flat
surface. The sliding of tendons over an adjacent anatomical surface is analogous to a belt
sliding around a pulley.
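
The belt-and-pulley analogy also gives a simple way to estimate the compressive load that such a wrapped tendon exerts on the underlying surface: for a tendon under tension Ft changing direction by an angle θ, the resultant radial force is about 2·Ft·sin(θ/2), and the normal force per unit length of contact is roughly Ft divided by the local radius of curvature. The values in the sketch below are assumptions chosen only for illustration.

    import math

    # Assumed, illustrative values for a flexor tendon curving over the wrist.
    tendon_tension_n = 100.0            # tensile force in the tendon, newtons
    wrap_angle_rad = math.radians(40)   # change of direction of the tendon
    radius_m = 0.01                     # local radius of curvature of the support

    # Resultant radial (compressive) force pressing the tendon onto the support.
    radial_force_n = 2 * tendon_tension_n * math.sin(wrap_angle_rad / 2)

    # Normal force per unit length of tendon in contact with the support.
    force_per_length_n_per_m = tendon_tension_n / radius_m

    print(f"Radial load: {radial_force_n:.0f} N")                       # about 68 N
    print(f"Normal force per unit length: {force_per_length_n_per_m:.0f} N/m")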

Fluid pressure is caused by exertions or postures that displace fluid out of the spaces around
the tendons. Studies of carpal canal pressure show that wrist contact with external surfaces
and certain postures produce pressures high enough to impair circulation and threaten tissue
viability (Lundborg 1988).

Contraction of a muscle produces an immediate stretching of its tendon, which joins the muscle
to bone. If the exertion is sustained, the tendon will continue to stretch. Relaxation of the
muscle will result in a rapid recovery of the tendon followed by a slowed recovery. If the
initial stretching was within certain limits, the tendon will recover to its initial unloaded
length (Fung 1972).

Tendons as Living Tissues

The strength of tendons belies the delicacy of the underlying physiological mechanisms by
which they are nourished and heal. Interspersed within the tendon matrix are living cells,
nerve endings and blood vessels. Nerve endings provide information to the central nervous
system for motor control and warning of acute overload. Blood vessels play an important role
in the nourishment of some areas of the tendon. Some areas of tendons are avascular and rely
on diffusion from fluid secreted by synovial linings of outer tendon sheaths (Gelberman et al.
1987). Synovial fluid also lubricates movements of the tendons. Synovial sheaths are found at
locations where tendons come into contact with adjacent anatomical surfaces.

Excessive elastic or viscous deformation of the tendon can damage these tissues and impair
their ability to heal. It is hypothesized that deformation may impede or arrest circulation and
nourishment of tendons (Hagberg 1982; Viikari-Juntura 1984; Armstrong et al. 1993).
Without adequate circulation, cell viability will be impaired and the tendon’s capacity to heal
will be reduced. Tendon deformation can lead to small tears that further contribute to cell
damage and inflammation. If circulation is restored and the tendon is given adequate recovery
time, the damaged tissues will heal (Gelberman et al. 1987; Daniel and Breidenbach 1982;
Leadbetter 1989).

Tendon Disorders

It has been shown that tendon disorders occur in predictable patterns (Armstrong et al. 1993).
Their locations occur in those parts of the body associated with high stress concentrations
(e.g., in the tendons of the supraspinatus, the biceps, the extrinsic finger flexor and extensor
muscles). Also, there is an association between the intensity of work and the prevalence of
tendon disorders. This pattern also has been shown for amateur and professional athletes
(Leadbetter 1989). The common factors in both workers and athletes are repetitive exertions
and overloading of the muscle-tendon units.

Within certain limits, the injuries produced by mechanical loading will heal. The healing
process is divided into three stages: inflammatory, proliferatory and remodelling (Gelberman
et al. 1987; Daniel and Breidenbach 1982). The inflammatory stage is characterized by the
presence of polymorphonuclear cell infiltration, capillary budding and exudation, and lasts
for several days. The proliferatory stage is characterized by the proliferation of fibroblasts and
randomly oriented collagen fibres between areas of the wound and adjacent tissues, and lasts
for several weeks. The remodelling phase is characterized by the alignment of the collagen
fibres along the direction of loading, and lasts for several months. If the tissues are re-injured
before healing is complete, recovery may be delayed and the condition may worsen
(Leadbetter 1989). Normally healing leads to a strengthening or adaptation of the tissue to
mechanical stress.

The effects of repetitive loading are apparent in the forearm finger flexor tendons where they
contact the inside walls of the carpal tunnel (Louis 1992; Armstrong et al. 1984). It has been
shown that there is progressive thickening of the synovial tissue between the edges of the
carpal tunnel and the centre where the contact stresses on the tendons are the greatest.
Thickening of the tendons is accompanied by synovial hyperplasia and proliferation of
connective tissue. Thickening of the tendon sheaths is a widely cited factor in compression of
the median nerve inside the carpal tunnel. It can be argued that thickening of the synovial
tissues is an adaptation of the tendons to mechanical trauma. Were it not for the secondary
compression of the median nerve, which results in carpal tunnel syndrome, it might be
considered a desirable outcome.

Until optimal tendon loading regimes are determined, employers should monitor workers for
signs or symptoms of tendon disorders so that they can intervene with work modifications to
prevent further injuries. Jobs should be inspected for conspicuous risk factors any time an
upper limb problem is identified or suspected. Jobs also should be inspected any time there is
a change in the work standard, procedure or tooling, to ensure that risk factors are minimized.

Bones and Joints

Written by ILO Content Manager

Bone and cartilage are part of the specialized connective tissues that make up the skeletal
system. Bone is a living tissue that replaces itself continuously. The hardness of bone is well
suited to providing mechanical support, and the elasticity of cartilage to
the ability of joints to move. Both cartilage and bone consist of specialized cells that produce
and regulate a matrix of material outside the cells. The matrix is abundant in collagens,
proteoglycans and non-collagenous proteins. Minerals are present in bone matrix as well.

The external part of bone is called the cortex and is compact bone. The more spongy inner
part (trabecular bone) is filled with blood-forming (haematopoietic) bone marrow. The inner
and outer parts of the bone have different metabolic turnover rates, with important
consequences for late life osteoporosis. Trabecular bone regenerates itself at a greater rate
than compact bone, which is why osteoporosis is first seen in the vertebral bodies of the spine,
which have large trabecular parts.

Bone in the skull and other selected sites forms directly by bone formation (intramembranous
ossification) without passing through a cartilage intermediate phase. The long bones of the
limbs develop from cartilage through a process known as endochondral ossification. This
process is what leads to the normal growth of long bones, to the repair of fractures and, in late
adult life, to the unique formation of new bone in a joint which has become osteoarthritic.

The osteoblast is a type of bone cell that is responsible for synthesis of the matrix components
in bone: the distinct collagen (type I) and proteoglycans. Osteoblasts also synthesize other
non-collagenous proteins of bone. Some of these proteins can be measured in serum to
determine the rate of bone turnover.

The other distinct bone cell is called the osteoclast. The osteoclast is responsible for
resorption of bone. Under normal circumstances, old bone tissue is resorbed while new bone
tissue is generated. Bone is resorbed by production of enzymes that dissolve proteins. Bone
turnover is called remodelling and is normally a balanced and coordinated process of
resorption and formation. Remodelling is influenced by body hormones and by local growth
factors.

Movable (diarthrodial) joints are formed where two bones fit together. Joint surfaces are
designed for weight bearing, and to accommodate a range of motion. The joint is enclosed by
a fibrous capsule, whose inner surface is a synovial membrane, which secretes synovial fluid.
The joint surface is made of hyaline cartilage, beneath which is a backing of hard
(subchondral) bone. Within the joint, ligaments, tendons and fibrocartilaginous structures
(menisci in certain joints, such as the knee), provide stability and a close fit between joint
surfaces. The specialized cells of these joint components synthesize and maintain the matrix
macromolecules whose interactions are responsible for maintaining the tensile strength of
ligaments and tendons, the loose connective tissue that supports the blood vessels and cellular
elements of the synovial membrane, the viscous synovial fluid, the elasticity of hyaline
cartilage, and the rigid strength of subchondral bone. These joint components are
interdependent, and their relationships are shown in table 1.

Table 1. Structure-function relationships and inter-dependence of joint components.

Components | Structure | Functions
Ligaments and tendons | Dense, fibrous connective tissue | Prevents over-extension of joints, provides stability and strength
Synovial membrane | Areolar, vascular and cellular | Secretes synovial fluid, dissolves (phagocytoses) particulate material in synovial fluid
Synovial fluid | Viscous fluid | Provides nutrients for cartilage in joints, lubricates cartilage during joint motion
Cartilage | Firm hyaline cartilage | Constitutes the joint surface, bears weight, responds elastically to compression
Tidemark | Calcified cartilage | Separates joint cartilage from underlying bone
Subchondral bone | Hard bone with marrow spaces | Provides backing for the joint surface; the marrow cavity provides nutrients to the base of the cartilage and is the source of cells with potential for new bone formation

Source: Hamerman and Taylor 1993.

Selected Diseases of Bones and Joints

Osteopenia is the general term used to describe reduced bone substance detected on x rays.
Often asymptomatic in early stages, it may eventually manifest itself as weakening of bones.
Most of the conditions listed below induce osteopenia, although the mechanisms by which
this occurs differ. For example, excessive parathyroid hormone enhances bone resorption,
while calcium and phosphate deficiency, which can arise from multiple causes and is often
due to inadequate vitamin D, results in deficient mineralization. As people age, there is an
imbalance between formation and resorption of bone. In women around the age of
menopause, resorption often predominates, a condition called type I osteoporosis. In advanced
age, resorption can again dominate and lead to type II osteoporosis. Type I osteoporosis
usually manifests as vertebral bone loss and collapse, while hip fracture predominates in type II.

Osteoarthritis (OA) is the principal chronic disorder of certain movable joints, and its
incidence increases with age. By age 80, almost all people have enlarged joints on the fingers
(Heberden’s nodes). This is usually of very limited clinical significance. The principal
weight-bearing joints which are subject to osteoarthritis are the hip, knee, feet and facets of
the spine. The shoulder, while it is not weight bearing, may also suffer from a variety of
arthritic changes, including rotator cuff tear, subluxation of the humeral head and an effusion
high in proteolytic enzymes—a clinical picture often called “Milwaukee Shoulder” and
associated with substantial pain and limitation of motion. The main change in OA is
degradation of cartilage, but new bone formation in the form of osteophytes is usually seen on
x rays.

Intervertebral Discs

Written by ILO Content Manager

The intervertebral discs occupy about one-third of the spine. Since they not only provide the
spinal column with flexibility but also transmit load, their mechanical behaviour has a great
influence on the mechanics of the whole spine. A high proportion of cases of low-back pain
are associated with the disc, either directly through disc herniation, or indirectly, because
degenerated discs place other spinal structures under abnormal stress. In this article, we
review the structure and composition of the disc in relation to its mechanical function and
discuss changes to the disc in disease.
Anatomy

There are 24 intervertebral discs in the human spine, interspersed between the vertebral
bodies. Together these make up the anterior (front) component of the spinal column, with the
articulating facet joints and the transverse and spinous processes making up the posterior
(rear) elements. The discs increase in size down the spine, to approximately 45 mm antero-
posteriorly, 64 mm laterally and 11 mm in height in the lower back region.

The disc is made of cartilage-like tissue and consists of three distinct regions (see figure 1).
The inner region (nucleus pulposus) is a gelatinous mass, particularly in the young person.
The outer region of the disc (annulus fibrosus) is firm and banded. The fibres of the annulus
are criss-crossed in an arrangement which allows it to withstand high bending and twisting
loads. With increasing age the nucleus loses water, becomes firmer and the distinction
between the two regions is less clear than early in life. The disc is separated from the bone by
a thin layer of hyaline cartilage, the third region. In adulthood the cartilage endplate and the
disc itself normally have no blood vessels of their own but rely on the blood supply of
adjacent tissues, such as ligaments and vertebral body, to transport nutrients and remove
waste products. Only the outer portion of the disc is innervated.

Figure 1. The relative proportions of the three main components of the normal adult human
intervertebral disc and cartilage endplate.

Composition

The disc, like other cartilage, consists mainly of a matrix of collagen fibres (which are
embedded in a gel of proteoglycan) and of water. These together make up 90 to 95% of the
total tissue mass, although the proportions vary with location within the disc and with age and
degeneration. There are cells interspersed throughout the matrix that are responsible for
synthesizing and maintaining the different components within it (figure 2). A review of the
biochemistry of the disc can be found in Urban and Roberts 1994.
Figure 2. Schematic representation of disc structure, showing banded collagen fibres
interspersed with numerous bottle-brush-like proteoglycan molecules and few cells.

Proteoglycans: The major proteoglycan of the disc, aggrecan, is a large molecule consisting
of a central protein core to which many glycosaminoglycans (repeating chains of
disaccharides) are attached (see figure 3). These side chains have a high density of negative
charges associated with them, making them attract water molecules (hydrophilic); the
resulting swelling pressure is very important to the functioning of the disc.

Figure 3. Diagram of part of a disc proteoglycan aggregate. G1, G2 and G3 are globular,
folded regions of the central core protein.

Huge aggregates of proteoglycans can form when individual molecules link onto a chain of
another chemical, hyaluronic acid. The size of aggrecans varies (ranging in molecular weight
from 300,000 to 7 million daltons), depending on how many molecules make up the aggregate.
Other smaller types of proteoglycans have recently also been found in the disc and cartilage
endplate—for example, decorin, biglycan, fibromodulin and lumican. Their function is
generally unknown but fibromodulin and decorin may be involved in regulating collagen
network formation.

Water: Water is the main constituent in the disc, making up 65 to 90% of the tissue volume,
depending on age and region of the disc. There is a correlation between the quantity of
proteoglycan and the water content of the matrix. The amount of water also varies depending
on the load applied to the disc, hence water content differs night and day since load will be
very different when sleeping. Water is important both to the mechanical functioning of the
disc and for providing the medium for transport of dissolved substances within the matrix.

Collagen: Collagen is the main structural protein in the body, and consists of a family of at
least 17 distinct proteins. All collagens have helical regions and are stabilized by a series of
intra- and inter-molecular crosslinks, which make the molecules very strong in resisting
mechanical stresses and enzymatic degradation. The length and shape of different types of
collagen molecules and the proportion which is helical, vary. The disc is composed of several
types of collagens, with the outer annulus being predominantly type I collagen, and the
nucleus and cartilage endplate predominantly type II. Both types form fibrils which provide
the structural framework of the disc. The fibrils of the nucleus are much finer than those of
the annulus (0.1 to 0.2 mm in diameter). The disc cells are often
surrounded by a capsule of some of the other types of collagen, such as type VI.

Cells: The intervertebral disc has a very low density of cells in comparison to other tissues.
Although the density of cells is low, their continuing activity is vital for the health of the disc,
as the cells produce macromolecules throughout life, to replace those which break down and
are lost with the passage of time.

Function

The main function of the disc is mechanical. The disc transmits load along the spinal column
and also allows the spine to bend and twist. The loads on the disc arise from body weight and
muscular activity, and change with posture (see figure 4). During daily activities the disc is
subject to complex loads. Extending or flexing the spine produces mainly tensile and
compressive stresses on the disc, which increase in magnitude going down the spine, due to
differences in body weight and geometry. Rotating the spine produces crosswise (shear)
stresses.
Figure 4. Relative intradiscal pressures in different postures compared to the pressure in
upright standing (100%).

Discs are under pressure, which varies with posture from around 0.1 to 0.2 MPa at rest, to
around 1.5 to 2.5 MPa while bending and lifting. The pressure is mainly due to water pressure
across the nucleus and inner annulus in a normal disc. When load on the disc is increased,
pressure is distributed evenly across the endplate and throughout the disc.

During loading the disc deforms and loses height. The endplate and annulus bulge, increasing
the tension on these structures, and the pressure of the nucleus consequently rises. The degree
of deformation of the disc depends on the rate at which it is loaded. The disc can deform
considerably, compressing or extending by 30 to 60% during flexion and extension. Distances
between adjacent spinous processes can increase by over 300%. If a load is removed within a
few seconds, the disc quickly returns to its former state, but if the load is maintained, the disc
continues to lose height. This “creep” results from the continuing deformation of the disc
structures and also from fluid loss, because discs lose fluid as a result of the increased
pressure. Between 10 and 25% of the disc’s fluid is slowly lost during daily activities, when
the disc is under much greater pressure than at rest, and regained when lying down. This loss of
water can lead to a decrease in an individual’s height of 1 to 2 cm from morning to evening
among dayworkers.
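
A very simple way to picture this diurnal creep is a first-order model in which height loss
approaches an asymptote exponentially under sustained daytime loading. The asymptotic loss in
the sketch below reflects the 1 to 2 cm range quoted above; the time constant is an illustrative
assumption only.

import math

# First-order creep sketch of diurnal disc height loss (illustrative only).
max_loss_cm = 1.5     # asymptotic daytime height loss, within the 1-2 cm quoted above
tau_hours = 4.0       # assumed creep time constant

for hours in (0.5, 1, 2, 4, 8, 16):
    loss = max_loss_cm * (1.0 - math.exp(-hours / tau_hours))
    print(f"after {hours:4.1f} h upright: ~{loss:.2f} cm of height lost")
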
As the disc changes its composition because of ageing or degeneration, the response of the
disc to mechanical loads also changes. With a loss of proteoglycan and thus water content, the
nucleus can no longer respond as efficiently. This change results in uneven stresses across the
endplate and the annulus fibres, and, in severe cases of degeneration, the inner fibres may
bulge inward when the disc is loaded, which, in turn, may lead to abnormal stresses on other
disc structures, eventually causing their failure. The rate of creep is also increased in
degenerated discs, which thus lose height faster than normal discs under the same load.
Narrowing of the disc space affects other spinal structures, such as muscles and ligaments,
and, in particular, leads to an increase in pressure on the facet joints, which may be the cause
of the degenerative changes seen in the facet joints of spines with abnormal discs.

Contribution of Major Components to Function

Proteoglycans

Disc function depends on maintaining equilibrium in which the water pressure of the disc is
balanced by the disc swelling pressure. The swelling pressure depends on the concentration of
ions attracted into the disc by the negatively charged proteoglycans, and thus depends directly
on the concentration of proteoglycans. If the load on the disc is increased, water pressure rises
and disturbs the equilibrium. To compensate, fluid seeps out of the disc, increasing
proteoglycan concentration and disc osmotic pressure. Such fluid expression continues either
until the balance is restored or the load on the disc is removed.
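
A minimal numerical sketch of this equilibrium follows. The power-law relation between
concentration and swelling pressure, and the values assumed for p0 and n, are illustrative only,
chosen so that the predicted fluid loss falls roughly within the 10 to 25% range quoted earlier;
they are not measured disc properties.

# Toy model of the swelling-pressure equilibrium described above.
# Assumption: swelling pressure follows the power law p = p0 * (c/c0)**n,
# where c/c0 rises as fluid is expressed; p0 and n are illustrative values.

def fluid_loss_to_equilibrium(applied_mpa, p0=0.2, n=8, step=0.001):
    """Fraction of fluid expressed before swelling pressure balances the load."""
    f = 0.0                                   # fraction of fluid expressed so far
    while p0 * (1.0 / (1.0 - f)) ** n < applied_mpa and f < 0.9:
        # Express a little more fluid; concentration and swelling pressure rise.
        f += step
    return f

for load in (0.2, 0.5, 1.0, 2.0):             # MPa, from lying down to heavy lifting
    loss = fluid_loss_to_equilibrium(load)
    print(f"applied {load:.1f} MPa -> ~{loss * 100:.0f}% of the fluid expressed")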

Proteoglycans affect fluid movement in other ways, as well. Because of their high
concentration in the tissue, the spaces between chains are very small (0.003 to 0.004 mm).
Fluid flow through such small pores is very slow, and thus even though there is a large
pressure differential, the rate at which fluid is lost, and hence the rate of disc creep, is slow.
However, since discs which have degenerated have lower proteoglycan concentrations, fluid
can flow through the matrix faster. This may be why degenerated discs lose height more
quickly than normal discs. The charge and high concentration of proteoglycans control the
entry and movement of other dissolved substances into the disc. Small molecules (nutrients
like glucose, oxygen) can easily enter the disc and move through the matrix. Electropositive
chemicals and ions, such as Na+ or Ca2+, have higher concentrations in the negatively charged
disc than in the surrounding interstitial fluid. Large molecules, such as serum albumin or
immunoglobulins, are too bulky to enter the disc, and are present only in very low
concentrations. Proteoglycans may also affect cellular activity and metabolism. Small
proteoglycans, such as biglycan, may bind growth factors and other mediators of cellular
activity, releasing them when the matrix is degraded.

Water

Water is the major component of the disc and rigidity of the tissue is maintained by the
hydrophilic properties of the proteoglycans. With initial loss of water, the disc becomes more
flaccid and deformable as the collagen network relaxes. However, once the disc has lost a
significant fraction of water, its mechanical properties change drastically; the tissue behaves
more like a solid than a composite under load. Water also provides the medium through which
nutrients and wastes are exchanged between the disc and the surrounding blood supply.
Collagen

The collagen network, which can support high tensile loads, provides a framework for the
disc, and anchors it to the neighbouring vertebral bodies. The network is inflated by the water
taken in by the proteoglycans; in turn the network restrains the proteoglycans and prevents
them from escaping from the tissue. These three components together thus form a structure
which is able to support high compressive loads.

The organization of the collagen fibrils provides the disc with its flexibility. The fibrils are
arranged in layers, with the angle at which the fibrils of each layer run between the
neighbouring vertebral bodies, alternating in direction. This highly specialized weave allows
the disc to wedge extensively, thus allowing bending of the spine, even though collagen fibrils
themselves can extend by only about 3%.

Metabolism

The cells of the disc produce both large molecules and enzymes which can break down matrix
components. In a healthy disc, the rates of matrix production and breakdown are balanced. If
the balance is upset, the composition of the disc must ultimately change. In growth, synthesis
rates for new and replacement molecules are higher than degradation rates, and matrix
materials accumulate around the cells. With ageing and degeneration, the reverse occurs.
Proteoglycans normally last for about two years. Collagen lasts for many more years. If the
balance is disturbed, or if cellular activity falls, the proteoglycan content of the matrix
eventually decreases, which affects the mechanical properties of the disc.
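
The consequence of a disturbed balance can be illustrated with a simple first-order turnover
model, in which matrix content C obeys dC/dt = S - C/tau (synthesis at rate S, removal with mean
lifetime tau). The two-year proteoglycan lifetime is taken from the text; the sudden halving of
synthesis is an arbitrary, purely illustrative disturbance.

import math

# First-order turnover sketch: content C obeys dC/dt = S - C/tau.
# At steady state C = S * tau; if synthesis changes, C drifts exponentially
# (time constant tau) towards the new steady state.
tau_years = 2.0                   # mean proteoglycan lifetime (from the text)
C0 = 1.0                          # content normalized to the healthy steady state
S_new = 0.5 * (C0 / tau_years)    # synthesis suddenly halved (illustrative)
C_inf = S_new * tau_years         # new steady state the content decays towards

for t in (0, 1, 2, 4, 8):         # years after the disturbance
    C = C_inf + (C0 - C_inf) * math.exp(-t / tau_years)
    print(f"{t} years after synthesis halves: content ~ {C * 100:.0f}% of original")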

Disc cells also respond to changes in mechanical stress. Loading affects disc metabolism,
although the mechanisms are not clear. At present it is impossible to predict which
mechanical demands encourage a stable balance, and which may favour degradation over
synthesis of matrix.

Supply of nutrients

Because the disc receives nutrients from the blood supply of the adjacent tissues, the nutrients
such as oxygen and glucose must diffuse through the matrix to the cells in the centre of the
disc. Cells may be as much as 7 to 8 mm from the nearest blood supply, and steep concentration
gradients develop. At the interface between the disc and the vertebral body, the oxygen
concentration is around 50% of that in the blood, while at the centre of the disc it is below
1%. Disc metabolism is mainly
anaerobic. When oxygen falls below 5%, the disc increases production of lactate, a metabolic
waste product. The lactate concentration in the centre of the nucleus may be six to eight times
higher than that in the blood or interstitium (see figure 5).

Figure 5. The main nutritional pathways to the intervertebral disc are by diffusion from the
vasculature within the vertebral body (V), through the endplate (E) to the nucleus (N), or from
the blood supply outside the annulus (A).
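
The steepness of these gradients can be illustrated with a one-dimensional, steady-state
diffusion model with uniform consumption. The diffusivity and consumption rate used below are
assumed, normalized values chosen only so that the computed profile matches the figures quoted
above (about 50% of the blood concentration at the endplate, below 1% at the centre).

import numpy as np

# Steady-state diffusion with uniform consumption across half the disc height:
# D * c''(x) = Q, with c(0) fixed at the endplate and zero flux at the centre.
# D and Q are assumed, normalized values; L reflects the 7 to 8 mm quoted above.
L = 4.0            # mm, distance from endplate to disc centre (illustrative)
D = 1.0            # diffusivity, mm^2 per unit time (assumed)
Q = 0.0613         # consumption per unit time, chosen so the centre falls to ~1%
c_endplate = 0.5   # oxygen at the disc-vertebral body interface, ~50% of blood level

x = np.linspace(0.0, L, 9)                       # from endplate (0) to centre (L)
c = c_endplate - (Q / (2 * D)) * (L**2 - (L - x)**2)
for xi, ci in zip(x, c):
    print(f"{xi:3.1f} mm from endplate: oxygen ~ {ci * 100:4.1f}% of blood level")
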
A fall in the supply of nutrients is often suggested to be a major cause of disc degeneration.
Endplate permeability of the disc decreases with age, which may impede nutrient transport
into the disc and could lead to accumulation of wastes, such as lactate. In discs where nutrient
transport has been reduced, oxygen concentrations in the disc centre can fall to very low
levels. Here anaerobic metabolism, and consequently lactate production, increases, and the
pH in the disc centre may fall to as low as 6.4. Such low values of pH, as well as low
oxygen tensions, reduce the rate of matrix synthesis, resulting in a fall in proteoglycan
content. In addition, the cells themselves may not survive prolonged exposure to acid pH. A
high percentage of dead cells have been found in human discs.

Degeneration of the disc leads to a loss of proteoglycan and a shift in its structure,
disorganization of the collagen network and an ingrowth of blood vessels. There is the
possibility that some of these changes could be reversed. The disc has been shown to have
some capability of repair.

Diseases

Scoliosis: Scoliosis is a sideways bend of the spine, in which both the intervertebral discs and
the vertebral bodies are wedged. It is usually associated with a twisting or rotation of the
spine. Because of the manner in which the ribs are attached to the vertebrae, this gives rise to
a “rib hump”, visible when the affected individual bends forward. Scoliosis may be due to a
congenital defect in the spine, such as a wedge-shaped hemi-vertebra, or it may arise
secondary to a neuromuscular disorder such as muscular dystrophy. However, in the majority of
cases the cause is unknown and it is hence termed idiopathic scoliosis. Pain is rarely a problem
in scoliosis, and treatment is carried out mainly to halt further development of the lateral
curvature of the spine. (For details on clinical treatment of this and other spinal pathologies
see Tidswell 1992.)

Spondylolisthesis: Spondylolisthesis is a forward, horizontal slip of one vertebra in relation to
another. It may result from a fracture in the bridge of bone connecting the front to the
posterior of the vertebra. Obviously the intervertebral disc between two such vertebrae is
stretched and subjected to abnormal loads. The matrix of this disc, and to a lesser extent,
adjacent discs, shows changes in composition typical of degeneration—loss of water and
proteoglycan. This condition can be diagnosed by x ray.

Ruptured or prolapsed disc: Rupture of the posterior annulus is quite common in physically
active young or middle-aged adults. It cannot be diagnosed by x ray unless a discogram is
carried out, whereby radio-opaque material is injected into the centre of the disc. A tear can
then be demonstrated by the tracking of the discogram fluid. Sometimes isolated and
sequestered pieces of disc material can pass through this tear into the spinal canal. Irritation or
pressure on the sciatic nerve causes intense pain and paraesthesia (sciatica) in the lower limb.

Degenerative disc disease: This is a term applied to an ill-defined group of patients who
present with low-back pain. They may show changes in the x ray appearance, such as a
decrease in disc height and possibly osteophyte formation at the rim of the vertebral bodies.
This group of patients could represent the endstage of several pathological pathways. For
example, untreated annular tears may eventually take on this form.

Spinal stenosis: The narrowing of the spinal canal that occurs in spinal stenosis causes
mechanical compression of the spinal nerve roots and their blood supply. As such it can lead to
symptoms such as weakness, altered reflexes, pain or loss of feeling (paraesthesia), or
sometimes have no symptoms. The narrowing of the canal can, in turn, be caused by various
factors including protrusion of the intervertebral disc into the canal space, new bone
formation in the facet joints (facet hypertrophy) and arthritis with inflammation of other soft
connective tissues.

Interpretation of more recent imaging techniques in relation to disc pathology has not been
completely established. For example, degenerated discs on magnetic resonance imaging
(MRI) give an altered signal from that seen for “normal” discs. However, the correlation
between a disc of “degenerate” appearance on MRI and clinical symptoms is poor, with 45%
of MRI-degenerate discs being asymptomatic and 37% of patients with low-back pain having
normal MRI of the spine.

Risk Factors

Loading

Load on the discs depends on posture. Intradiscal measurements show that the sitting position
leads to pressures around five times greater than those in the resting spine (see figure 4). If
external weights are lifted, this can greatly increase the intradiscal pressure, especially if the
weight is held away from the body. Obviously an increased load can lead to a rupture in discs
that otherwise might remain intact.
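
A simple static lever-arm estimate shows why holding the weight away from the body matters so
much. This is a sketch only: the upper-body mass, the moment arms and the assumption of a
static, upright trunk are illustrative values, not data from this article.

# Static estimate of lumbar disc compression while holding a load in front of
# the body. All parameter values are rough, illustrative assumptions.
g = 9.81  # m/s^2

def disc_compression(load_kg, load_dist_m,
                     upper_body_kg=40.0,      # assumed body mass above the lumbar disc
                     upper_body_dist_m=0.20,  # assumed moment arm of the trunk weight
                     erector_arm_m=0.05):     # assumed moment arm of the back extensors
    """Return the approximate compressive force on the disc, in newtons."""
    # Moment balance about the disc: the extensor muscles must counter the
    # moments of the trunk weight and the held load.
    moment = upper_body_kg * g * upper_body_dist_m + load_kg * g * load_dist_m
    muscle_force = moment / erector_arm_m
    # Compression is roughly the muscle force plus the weights acting along the trunk.
    return muscle_force + (upper_body_kg + load_kg) * g

for dist in (0.25, 0.40, 0.60):   # load held progressively further from the body
    force = disc_compression(10.0, dist)
    print(f"10 kg held {dist:.2f} m from the spine: compression ~ {force / 1000:.1f} kN")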

Epidemiological investigations reviewed by Brinckmann and Pope (1990) agree in one
respect: repetitive lifting or carrying of heavy objects or performing work in flexed or
hyperextended posture represent risk factors for low-back problems. Similarly, certain sports,
such as weight lifting, may be associated with a higher incidence of back pain than, for
example, swimming. The mechanism is not clear, although the different loading patterns
could be relevant.
Smoking

The nutrition of the disc is very precarious, requiring only a small reduction in the flow of
nutrients to render it insufficient for the normal metabolism of the disc cells. Cigarette
smoking can cause such a reduction because of its effect on the circulatory system outside the
intervertebral disc. The transport of nutrients, such as oxygen, glucose or sulphate, into the
disc is significantly reduced after only 20 to 30 minutes of smoking, which may explain the
higher incidence of low-back pain in individuals who smoke compared to those who do not
(Rydevik and Holm 1992).

Vibration

Epidemiological studies have shown that there is an increased incidence of low-back pain in
individuals exposed to high levels of vibration. The spine is susceptible to damage at its
natural frequencies, particularly from 5 to 10 Hz. Many vehicles excite vibrations at these
frequencies. Studies reported by Brinckmann and Pope (1990) have shown a relationship
between such vibrations and the incidence of low-back pain. Since vibration has been shown
to affect the small blood vessels in other tissues, this may also be the mechanism for its effect
on the spine.

Low-Back Region

Written by ILO Content Manager

Low-back pain is a common ailment in populations of working age. About 80% of people
experience low-back pain during their lifetime, and it is one of the most important causes for
short- and long-term disability in all occupational groups. Based on the aetiology, low-back
pain can be classified into six groups: mechanical, infectious (e.g., tuberculosis),
inflammatory (e.g., ankylosing spondylitis), metabolic (e.g., osteoporosis), neoplastic (e.g.,
cancer) and visceral (pain caused by diseases of the inner organs).

The low-back pain in most people has mechanical causes, which include lumbosacral
sprain/strain, degenerative disc disease, spondylolisthesis, spinal stenosis and fracture. Here
only mechanical low-back pain is considered. Mechanical low-back pain is also called
regional low-back pain, which may be local pain or pain radiating to one or both legs
(sciatica). It is characteristic for mechanical low-back pain to occur episodically, and in most
cases the natural course is favourable. In about half of acute cases low-back pain subsides in
two weeks, and in about 90% within two months. About every tenth case is estimated to
become chronic, and it is this group of low-back pain patients that accounts for the major
proportion of the costs due to low-back disorders.

Structure and Function

Due to upright posture the structure of the lower part of the human spine (lumbosacral spine)
differs anatomically from that of most vertebrate animals. The upright posture also increases
mechanical forces on the structures in the lumbosacral spine. Normally the lumbar spine has
five vertebrae. The sacrum is rigid, and the tail (coccyx) has no function in human beings, as
shown in figure 1.
Figure 1. The spine, its vertebrae and curvature.

The vertebrae are bound together by intervertebral discs between the vertebral bodies, and by
ligaments and muscles. These soft-tissue bindings make the spine flexible. Two adjacent
vertebrae form a functional unit, as shown in figure 2. The vertebral bodies and the discs are
the weight-bearing elements of the spine. The posterior parts of the vertebrae form the neural
arch that protects the nerves in the spinal canal. The vertebral arches are attached to each
other via facet joints (zygapophyseal joints) that determine the direction of motion. The
vertebral arches are also bound together by numerous ligaments that determine the range of
motion in the spine. The muscles that extend the trunk backward (extensors) are attached to
the vertebral arches. Important attachment sites are three bony projections (the two lateral
transverse processes and the spinous process) of the vertebral arches.

Figure 2. The basic functional unit of the spine.

The spinal cord terminates at the level of the highest lumbar vertebrae (L1-L2). The lumbar
spinal canal is filled by the continuation of the spinal cord, the cauda equina, which is composed of
the spinal nerve roots. The nerve roots exit the spinal canal pairwise through intervertebral
openings (foramina). A branch innervating the tissues in the back departs from each of the
spinal nerve roots. There are nerve endings transmitting pain sensations (nociceptive endings)
in muscles, ligaments and joints. In a healthy intervertebral disc there are no such nerve
endings except for the outermost parts of the annulus. Yet, the disc is considered the most
important source of low-back pain. Annular ruptures are known to be painful. As a sequel of
disc degeneration a herniation of the semigelatinous inner part of the intervertebral disc, the
nucleus, can occur into the spinal canal and lead to compression and/or inflammation of a
spinal nerve along with symptoms and signs of sciatica, as shown in figure 3.

Figure 3. Herniation of the intervertebral disc.

Muscles are responsible for the stability and motion of the back. Back muscles bend the trunk
backward (extension), and abdominal muscles bend it forward (flexion). Fatigue due to
sustained or repetitive loading or sudden overexertion of muscles or ligaments can cause low-
back pain, albeit the exact origin of such pain is difficult to localize. There is controversy
about the role of soft tissue injuries in low-back disorders.

Low-Back Pain

Occurrence

The prevalence estimates of low-back pain vary depending on the definitions used in different
surveys. The prevalence rates of low-back pain syndromes in the Finnish general population
over 30 years of age are given in table 1. Three in four people have experienced low-back
pain (and one in three, sciatic pain) during their lifetime. Every month one in five people
suffers from low-back or sciatic pain, and at any point in time, one in six people has a
clinically verifiable low-back pain syndrome. Sciatica or herniated intervertebral disc is less
prevalent and afflicts 4% of the population. About half of those with a low-back pain
syndrome have functional impairment, and the impairment is severe in 5%. Sciatica is more
common among men than among women, but other low-back disorders are equally common.
Low-back pain is relatively uncommon before the age of 20, but then there is a steady
increase in the prevalence until the age of 65, after which there is a decline.
Table 1. Prevalence of back disorders in the Finnish population over 30 years of age,
percentages.

                                                          Men+    Women+
Lifetime prevalence of back pain                          76.3    73.3
Lifetime prevalence of sciatic pain                       34.6    38.8
Five-year prevalence of sciatic pain having caused
  bedrest for at least two weeks                          17.3    19.4
One-month prevalence of low-back or sciatic pain          19.4    23.3
Point prevalence of clinically verified:
  Low-back pain syndrome                                  17.5    16.3
  Sciatica or prolapsed disc*                              5.1     3.7

+ age-adjusted
* p < 0.005
Source: Adapted from Heliövaara et al. 1993.

The prevalence of degenerative changes in the lumbar spine increases with increasing age.
About half of 35- to 44-year-old men and nine out of ten men 65 years or older have
radiographic signs of disc degeneration of the lumbar spine. Signs of severe disc degeneration
are noted in 5 and 38%, respectively. Degenerative changes are slightly more common in
men than in women. People who have degenerative changes in the lumbar spine have low-
back pain more frequently than those without, but degenerative changes are also common
among asymptomatic people. In magnetic resonance imaging (MRI), disc degeneration has
been found in 6% of asymptomatic women 20 years or younger and in 79% of those 60 years
or older.

In general, low-back pain is more common in blue-collar occupations than in white-collar
occupations. In the United States, materials handlers, nurses’ aides and truck drivers have the
highest rates of compensated back injuries.

Risk factors at work

Epidemiological studies have quite consistently found that low-back pain, sciatica or
herniated intervertebral disc and degenerative changes of the lumbar spine are associated
with heavy physical work. Little is known, however, of the acceptable limits of physical load
on the back.

Low-back pain is related to frequent or heavy lifting, carrying, pulling and pushing. High
tensile forces are directed to the muscles and ligaments, and high compression to the bones
and joint surfaces. These forces can cause mechanical injuries to the vertebral bodies,
intervertebral discs, ligaments and the posterior parts of the vertebrae. The injuries may be
caused by sudden overloads or fatigue due to repetitive loading. Repeated microtrauma,
which may even occur without being noticed, have been proposed as a cause for degeneration
of the lumbar spine.

Low-back pain is also associated with frequent or prolonged twisting, bending or other non-
neutral trunk postures. Motion is necessary for the nutrition of the intervertebral disc and
static postures may impair the nutrition. In other soft tissues, fatigue can develop. Also
prolonged sitting in one position (for instance, machine seamstresses or motor vehicle drivers)
increases the risk of low-back pain.

Prolonged driving of motor vehicles has been found to increase the risk of low-back pain and
sciatica or herniated disc. Drivers are exposed to whole-body vibration that has an adverse
effect on disc nutrition. Also sudden impulses from rough roads, postural stress and materials
handling by professional drivers may contribute to the risk.

An obvious cause for back injuries is direct trauma caused by an accident such as falling or
slipping. In addition to the acute injuries, there is evidence that traumatic back injuries
contribute substantially to the development of chronic low-back syndromes.

Low-back pain is associated with various psychosocial factors at work, such as monotonous
work and working under time pressure, and poor social support from co-workers and
superiors. The psychosocial factors affect reporting and recovery from low-back pain, but
there is controversy about their aetiological role.

Individual risk factors

Height and overweight: Evidence for a relationship of low-back pain with body stature and
overweight is contradictory. Evidence is, however, quite convincing for a relationship
between sciatica or herniated disc and tallness. Tall people may have a nutritional
disadvantage due to a greater disc volume, and they may also have ergonomic problems at the
worksite.

Physical fitness: Study results on an association between physical fitness and low-back pain
are inconsistent. Low-back pain is more common in people who have less strength than their
job requires. In some studies poor aerobic capacity has not been found to predict future low-
back pain or injury claims. The least fit people may have an increased overall risk for back
injuries, but the most fit people may have the most expensive injuries. In one study, good
back muscle endurance prevented first-time occurrence of low-back pain.

There is considerable variation in the mobility of the lumbar spine among people. People with
acute and chronic low-back pain have reduced mobility, but in prospective studies mobility
has not predicted the incidence of low-back pain.

Smoking: Several studies have shown that smoking is associated with an increase in the risk of
low-back pain and herniated disc. Smoking also seems to enhance disc degeneration. In
experimental studies, smoking has been found to impair the nutrition of the disc.

Structural factors: Congenital defects of the vertebrae as well as unequal leg length can cause
abnormal loading in the spine. Such factors are, however, not considered very important in the
causation of low-back pain. A narrow spinal canal predisposes to nerve root compression and
sciatica.

Psychological factors: Chronic low-back pain is associated with psychological factors (e.g.,
depression), but not all people who suffer from chronic low-back pain have psychological
problems. A variety of methods have been used to differentiate low-back pain caused by
psychological factors from low-back pain caused by physical factors, but the results have
been contradictory. Mental stress symptoms are more common among people with low-back
pain than among symptomless people, and mental stress even seems to predict the incidence
of low-back pain in the future.

Prevention

The accumulated knowledge based on epidemiological studies on the risk factors is largely
qualitative and thus can give only broad guidelines for the planning of preventive
programmes. There are three principal approaches in prevention of work-related low-back
disorders: ergonomic job design, education and training, and worker selection.

Job design

It is widely believed that the most effective means to prevent work-related low-back disorders
is job design. An ergonomic intervention should address the following parameters (shown in
table 2).

Table 2. Parameters which should be addressed in order to reduce the risks for low-back pain
at work.

Parameter               Example
1. Load                 The weight and size of the object handled
2. Object design        The shape, location and size of handles
3. Lifting technique    The distance between the centre of gravity of the object and the
                        worker; twisting motions
4. Workplace layout     The spatial features of the task, such as carrying distance, range
                        of motion, obstacles such as stairs
5. Task design          Frequency and duration of the tasks
6. Psychology           Job satisfaction, autonomy and control, expectations
7. Environment          Temperature, humidity, noise, foot traction, whole-body vibration
8. Work organization    Team work, incentives, shifts, job rotation, machine pacing, job
                        security

Source: Adapted from Halpern 1992.

Most ergonomic interventions modify the loads, the design of objects handled, lifting
techniques, workplace layout and task design. The effectiveness of these measures in
controlling the occurrence of low-back pain or medical costs has not been clearly
demonstrated. It may be most efficient to reduce the peak loads. One suggested approach is to
design a job so that it is within the physical capacity of a large percentage of the working
population (Waters et al. 1993). In static jobs restoration of motion can be achieved by
restructuring the job, by job rotation or job enrichment.
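
The approach of Waters et al. (1993) is embodied in the revised NIOSH lifting equation, which
scales a 23 kg load constant by multipliers for the horizontal and vertical hand positions, the
vertical travel distance and the asymmetry of the lift, plus frequency and coupling multipliers
read from published tables. The sketch below is a simplified rendering only: the frequency and
coupling multipliers are passed in directly (defaulting to 1.0), and the range checks of the
full method are omitted.

# Simplified sketch of the revised NIOSH lifting equation (Waters et al. 1993).
# The frequency (fm) and coupling (cm) multipliers normally come from published
# tables; here they default to 1.0. Range checks of the full method are omitted.
def recommended_weight_limit(h_cm, v_cm, d_cm, a_deg, fm=1.0, cm=1.0):
    LC = 23.0                                 # load constant, kg
    HM = min(1.0, 25.0 / max(h_cm, 25.0))     # horizontal multiplier
    VM = 1.0 - 0.003 * abs(v_cm - 75.0)       # vertical multiplier
    DM = 0.82 + 4.5 / max(d_cm, 25.0)         # distance multiplier
    AM = 1.0 - 0.0032 * a_deg                 # asymmetry multiplier
    return LC * HM * VM * DM * AM * fm * cm

# Example: hands 40 cm in front of the ankles, lift from 30 cm to 100 cm,
# with a 30-degree trunk twist.
rwl = recommended_weight_limit(h_cm=40, v_cm=30, d_cm=70, a_deg=30)
print(f"Recommended weight limit ~ {rwl:.1f} kg")
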
Education and training

Workers should be trained to perform their work appropriately and safely. Education and
training of workers in safe lifting have been widely implemented, but the results have not
been convincing. There is general agreement that it is beneficial to keep the load close to the
body and to avoid jerking and twisting, but as to the advantages of leg lift and back lift, the
opinions of the experts are conflicting.

If mismatch between job demands and the strength of workers is detected and job redesign is
not possible, a fitness training programme should be provided for the workers.

In preventing disability or chronicity due to low-back pain, back school has proven
effective in subacute cases, and general fitness training in subchronic cases.

Training needs to be extended also to management. Aspects of management training include
early intervention, initial conservative treatment, patient follow-up, job placement and
enforcement of safety rules. Active management programmes can significantly reduce long-
term disability claims and accident rates.

Medical personnel should be trained in the benefits of early intervention, conservative
treatment, patient follow-up and job placement techniques. The Quebec Task Force report on
the management of activity-related spinal disorders and other clinical practice guidelines
give sound guidance for proper treatment (Spitzer et al. 1987; AHCPR 1994).

Worker selection

In general, pre-employment selection of workers is not considered an appropriate measure for
prevention of work-related low-back pain. History of previous back trouble, radiographs of
the lumbar spine, general strength and fitness testing—none of these has shown good enough
sensitivity and specificity in identifying persons with an increased risk for future low-back
trouble. The use of these measures in pre-employment screening can lead to undue
discrimination against certain groups of workers. There are, however, some special
occupational groups (e.g., fire-fighters and police officers) in which pre-employment
screening can be considered appropriate.

Clinical characteristics

The exact origin of low-back pain often cannot be determined, which is reflected in the
difficulties of classifying low-back disorders. To a great extent the classification
relies on symptom characteristics supported by clinical examination or by imaging results.
Basically, in clinical physical examination patients with sciatica caused by compression
and/or inflammation of a spinal nerve root can be diagnosed. As to many other clinical
entities, such as facet syndrome, fibrositis, muscular spasms, lumbar compartment syndrome
or sacro-iliac syndrome, clinical verification has proven unreliable.

As an attempt to resolve the confusion the Quebec Task Force on Spinal Disorders carried out
a comprehensive and critical literature review and ended up recommending the use of the
classification for low-back pain patients shown in table 3.
Table 3. Classification of low-back disorders according to the Quebec Task Force on Spinal
Disorders

1. Pain

2. Pain with radiation to lower limb proximally

3. Pain with radiation to lower limb distally

4. Pain with radiation to lower limb and neurological signs

5. Presumptive compression of a spinal nerve root on a simple radiogram (i.e., spinal
instability or fracture)

6. Compression of a spinal nerve root confirmed by specific imaging techniques (computerized
tomography, myelography or magnetic resonance imaging), or by other diagnostic techniques
(e.g., electromyography, venography)

7. Spinal stenosis

8. Postsurgical status, 1-6 weeks after intervention

9. Postsurgical status, >6 weeks after intervention
   9.1. Asymptomatic
   9.2. Symptomatic

10. Chronic pain syndrome

11. Other diagnoses

For categories 1-4, additional classification is based on:
(a) duration of symptoms (<7 days; 7 days-7 weeks; >7 weeks);
(b) working status (working; idle, i.e., absent from work, unemployed or inactive).

Source: Spitzer et al. 1987.

For each category, appropriate treatment measures are given in the report, based on critical
review of the literature.
Spondylolysis and spondylolisthesis

Spondylolysis means a defect in the vertebral arch (pars interarticularis or isthmus), and
spondylolisthesis denotes forward displacement of a vertebral body relative to the vertebra
below. The derangement occurs most frequently at the fifth lumbar vertebra.

Spondylolisthesis can be caused by congenital abnormalities, by a fatigue fracture or an acute
fracture, by instability between two adjacent vertebrae due to degeneration, and by infectious
or neoplastic diseases.

The prevalence of spondylolysis and spondylolisthesis ranges from 3 to 7%, but in certain
ethnic groups the prevalence is considerably higher (Lapps, 13%; Eskimos in Alaska, 25 to
45%; Ainus in Japan, 41%), which indicates a genetic predisposition. Spondylolysis is equally
common in people with and without low-back pain, but people with spondylolisthesis are
susceptible to recurrent low-back pain.

An acute traumatic spondylolisthesis can develop due to an accident at work. The prevalence
is increased among athletes in certain athletic activities, such as American football,
gymnastics, javelin throwing, judo and weight lifting, but there is no evidence that physical
exertion at work would cause spondylolysis or spondylolisthesis.

Piriformis syndrome

Piriformis syndrome is an uncommon and controversial cause of sciatica, characterized by
symptoms and signs of sciatic nerve compression in the region of the piriformis muscle where
it passes through the greater sciatic notch. No epidemiological data on the prevalence of this
syndrome are available. The present knowledge is based on case reports and case series.
Symptoms are aggravated by prolonged hip flexion, adduction and internal rotation. Recently
piriformis muscle enlargement has been verified in some cases of piriformis syndrome by
computed tomography and magnetic resonance imaging. The syndrome can result from an
injury to the piriformis muscle.

Thoracic Spine Region

Written by ILO Content Manager

The most common symptoms and signs that occur in the upper region of the back and spine
are pain, tenderness, weakness, stiffness and/or deformity in the back. Pain is much more
frequent in the lower (lumbar) back and in the neck than in the upper trunk (thoracic back).
Besides local symptoms, the thoracic disorders may cause pain that radiates to the lumbar
region and the lower limbs, to the neck and shoulders, to the rib cage and to the abdomen.

Painful Soft-Tissue Disorders

The causes of thoracic back pain are multifactorial and often obscure. The symptoms in many
cases arise from overuse, overstretching and/or usually mild ruptures of the soft tissues.
There are, however, also many specific disorders that can lead to back pain, such as severe
scoliosis or kyphosis (hunchback) of different aetiologies, Scheuermann’s disease
(osteochondritis of the thoracic spine, sometimes painful in adolescents but seldom in adults),
and other deformities which may follow trauma or some neurologic and muscular diseases.
Infection in the spine (spondylitis) is often localized to the thoracic region; it may be caused
by many kinds of microbes, tuberculosis being a classic example. Thoracic back pain may occur in
rheumatic diseases, especially in ankylosing spondylitis and in severe osteoporosis. Many
other intraspinal, intrathoracal and intra-abdominal diseases, such as tumours, may also result
in back symptoms. Generally, it is common that the pain may be felt in the thoracic spine
(referred pain). Skeletal metastases of cancer from other sites are frequently localized to the
thoracic spine; this is especially true of metastatic breast, kidney, lung and thyroid cancers. It
is extremely rare for a thoracic disc to rupture, the incidence being 0.25 to 0.5% of all
intervertebral disc ruptures.

Examination: At examination many intra- and extraspinal disorders causing symptoms in the
thoracic back should always be kept in mind. The older the patient, the more frequent the
back symptoms arising from primary tumours or metastases. A comprehensive interview and
a careful examination are therefore very important. The purpose of the examination is to
clarify the aetiology of the disease. The clinical examination should include ordinary
procedures, such as inspection, palpation, testing of muscle strength, joint mobility,
the neurological state and so on. In cases with prolonged and severe symptoms and signs, and
when a specific disease is suspected on plain x ray, further investigations, such as MRI, CT,
isotope imaging and ENMG, can contribute to clarifying the aetiological diagnosis and to
localizing the disorder process. Nowadays, MRI is usually the radiological method of choice
in thoracic back pain.

Degenerative Thoracic Spine Disorders

All adults suffer spinal degenerative changes which progress with age. Most people do not
have any symptoms from these changes, which are often found while investigating other
diseases, and are usually without any clinical importance. Infrequently, the degenerative
changes in the thoracic region lead to local and radiating symptoms—pain, tenderness,
stiffness and neurological signs.

Narrowing of the spinal canal, spinal stenosis, may lead to compression of vascular and
neurologic tissues resulting in local and/or radiating pain and neurologic deficiency. A
thoracic disc prolapse seldom provokes symptoms. In many cases a radiologically detected
disc prolapse is a side finding and does not provoke any symptoms.

The main signs of degenerative disorders of the thoracic spine are local tenderness, muscle
spasm or weakness and locally decreased mobility of the spine. In some cases there may be
neurological disturbances—muscle paresis, reflex and sensation deficiencies locally and/or
distally of the affected tissues.

The prognosis in thoracic disc prolapse is usually good. The symptoms subside as in the
lumbar and neck region within a few weeks.

Examination. A proper examination is essential, especially in elderly persons and in cases of
prolonged and severe pain or of paresis. Besides a detailed interview, there should be an
adequate clinical
examination, including inspection, palpation, testing of mobility, muscle strength and
neurological state. Of the radiological examinations, plain radiography, CT and especially
MRI are advantageous in evaluating the aetiological diagnosis and the localization of the
pathological changes in the spine. ENMG and isotope imaging may contribute to the
diagnosis. In the differential diagnosis laboratory tests may be valuable. In pure spinal disc
prolapse and degenerative changes there are no specific abnormalities in the laboratory tests.

Neck

Written by ILO Content Manager

Pain and discomfort in the neck are some of the most common symptoms associated with
work. They occur in heavy, manual work as well as in seated, sedentary work, and the
symptoms often last for prolonged periods of time—in fact, in some cases, over the whole
lifetime. It follows that disorders of the neck are difficult to cure once they have arisen, and
therefore much emphasis should be put into primary prevention. There are three main reasons
why neck disorders are common in working life:

1. The load on the neck structures is maintained for prolonged periods of time, due to high
visual demands of the job and to the need of stabilization of the neck-shoulder region in
working with the arms.
2. Psychologically demanding jobs with high demands on concentration and on quality and
quantity of work output are common, and induce an increased activity in neck muscles. This
tension increases further if the job in general is psychologically stressful, due to, for example,
poor industrial relations, little influence on the organization of work and so on.
3. The discs and joints of the neck are frequently the site of degenerative changes, which
increase in prevalence with age. This reduces the capacity to withstand occupational
workloads. It is also likely that the rate of degeneration increases as the result of physical
demands of the job.

Anatomy and Biomechanics of the Neck

The musculoskeletal part of the neck consists of seven vertebral bodies, six intervertebral
discs (consisting of cartilage), ligaments to hold these together and linking them to the skull
and to the thoracic spine, and muscles surrounding the spine. Although each joint of the
cervical spine has a very limited range of motion, the neck can be bent, extended, twisted and
tilted with a relatively large range of motion (see table 1). In a normal upright posture and
looking straight forward, the centre of gravity of the head and neck is actually situated in front
of the centre of support, and therefore needs to be balanced by the dorsal muscles, that is,
those situated behind the vertebral bodies. When the head is tilted forward more muscle force
is needed to balance the head, and when forward tilt of the head is maintained for prolonged
periods of time a substantial muscle fatigue can develop. In addition to muscle fatigue, tilting
and bending the head leads to increased compression of the inter-vertebral discs, which may
accelerate degenerative processes.
Table 1. Normal range of motion (ROM) of the head and the range permissible for prolonged
driving, in degrees.

               Normal(1)    Permissible for prolonged driving(2)
Lateral bend   45           –
Twist          60           0 to 15
Flexion        45           0 to 25
Extension      –45          0 to –5

(1) American Academy of Orthopedic Surgeons 1988.
(2) Hansson 1987.

The muscles surrounding the neck are also active in arm work, in order to stabilize the
shoulder/arm complex. The trapezius and several other muscles originate on the cervical spine
and extend downwards/outwards to insert on the shoulder. These muscles are commonly the
site of dysfunction and disorders, especially in static or repetitive work tasks where the arms
are elevated and the vision is fixed.

The structures stabilizing the neck are very robust, which serves to protect the nervous tissue
inside the spinal canal and the nerves emerging from the intervertebral openings and
supplying the neck, upper extremity and upper part of the thorax. The intervertebral discs, the
adjoining parts of the vertebral bodies and the facet joints of the intervertebral foramina are
often the site of degenerative changes, which can exert pressure on the nerves and narrow
their space. (See figure 1).

Figure 1. Schematic drawing of a cross-section of three of the lower cervical vertebral bodies
(1), with intervertebral discs (2), intervertebral foramina (3) and nerve roots (4), seen from
the side.

As mentioned in the introduction, symptoms like pain, ache and discomfort in the neck are
very common. Depending on the criteria used and the method of investigation, the prevalence
rates for neck disorders vary. If a postal enquiry or an interview focusing on musculoskeletal
disorders is used, the prevalence of disorders is usually higher than in a thorough investigation
also including a physical examination. Thus comparisons between groups should be made
only when the same investigation technique has been employed. Figure 2 gives one-year
prevalence figures for a representative sample of the Icelandic population who answered a
postal enquiry, the so-called “Nordic” questionnaire on musculoskeletal disorders (Kuorinka
et al. 1987). Neck trouble (pain, ache or discomfort) was the third most common (38%
average for the whole sample), after shoulder (43%), and low-back (56%) problems. Neck
trouble among women was more common than among men, and there was an increase in
prevalence up to age 25 to 30, when the rates stabilized; they again went down somewhat at
age 50 to 55. In a representative sample of 200 men and women from Stockholm, aged 16 to
65 years, the 12-month prevalence was about 30% among the men and 60% among women.
The experience of recent pain in the neck with a duration of at least one month, was found
among 22% of a population sample in Gothenburg, Sweden—again rated third most common
after shoulder and low-back pain.

Figure 2. Twelve-month prevalence of symptoms of neck trouble of a random sample of the
Icelandic population (n=1000).

Risk Factors at Work

Neck disorders are considerably more prevalent in certain occupational groups. Using the
Nordic questionnaire (Kuorinka et al. 1987), Swedish occupational health services have
compiled data from several occupations. The results indicate that the risk of neck trouble
(pain, ache or discomfort) is very high among visual display unit (VDU) operators, sewing
machine operators, seamstresses and electronic assembly workers, with a 12-month period
prevalence greater than 60%. In addition, up to one-third of those who report disorders also
state that the problems have an impact on their working lives, either causing them to take
sick-leave, or necessitating a change of job or work tasks.

Epidemiological studies of neck and shoulder disorders have been reviewed, and the different
studies have been pooled by type of exposure (repetitive work and work above shoulder level,
respectively). Soft-tissue disorders of the neck, such as tension neck and other myalgias, were
considerably increased in a number of occupational tasks like data entry, typing, scissors
manufacturing, lamp assembly and film rolling.

Degenerative disorders of the intervertebral discs of the neck are more common among coal-
miners, dentists and meat industry workers (Hagberg and Wegman 1987).
Posture

Prolonged flexion, extension, lateral bending and twisting of the neck induce muscle fatigue,
and may lead to chronic muscle injuries and degenerative changes of the cervical spine. The
muscle activity needed to counteract the weight of the head in forward flexion of the neck
increases with the flexion angle, as shown in figure 3. Fatigue and pain are common in neck
flexion if prolonged work is performed. When the head is tilted forward to the extreme of its
range of motion, the main load is transferred from muscles to ligaments and joint capsules
surrounding the cervical spine. It has been calculated that if the entire cervical spine is flexed
maximally, the torque exerted by the head and neck on the disc between the seventh cervical
and the first thoracic vertebral body is increased by a factor of 3.6. Such postures lead to pain
within only about 15 minutes, and usually the posture has to be normalized within 15 to 60
minutes because of intense pain. Postures where the neck is bent forward for prolonged
periods of time—several hours—are common in assembly work in industry, in VDT work and
in packaging and inspection tasks where the work stations are poorly designed. Such postures
are frequently caused by a compromise between the need to perform work with the hands,
without elevating the arms, and the simultaneous need for visual control. For a review of the
mechanisms leading from muscle fatigue to injury, see the accompanying article “Muscles”.

Figure 3. Percentage of maximal neck extension strength required at increasing neck
inclination (flexion).
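
The growth of the load with forward tilt can be illustrated by a static moment balance about the
disc between the seventh cervical and the first thoracic vertebra. The head-and-neck mass, the
moment arms and the way the gravitational moment arm grows with flexion are all illustrative
assumptions, chosen so that full flexion (about 45 degrees) reproduces roughly the 3.6-fold
increase in torque quoted above.

import math

# Static sketch of the neck extensor force needed to balance the head at
# different flexion angles. All numbers are illustrative assumptions.
g = 9.81
head_neck_kg = 5.0       # assumed mass of head and neck
arm_upright_m = 0.03     # assumed moment arm of the centre of gravity when upright
arm_growth_m = 0.11      # assumed increase of the moment arm at full forward tilt
extensor_arm_m = 0.05    # assumed moment arm of the dorsal neck muscles

for flexion_deg in (0, 15, 30, 45):
    # Tilting the head swings its centre of gravity further in front of the disc.
    arm = arm_upright_m + arm_growth_m * math.sin(math.radians(flexion_deg))
    torque = head_neck_kg * g * arm
    extensor_force = torque / extensor_arm_m
    print(f"{flexion_deg:2d} deg flexion: torque {torque:4.1f} Nm, "
          f"extensor force ~ {extensor_force:4.0f} N")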

Extension of the neck for prolonged periods, as in overhead work in the building industry, can
be very tiring for the muscles in front of the cervical spine. Especially when carrying heavy
protective equipment like safety helmets, the torque tilting the head backwards can be high.
Repetitive movements

Repetitive movements performed by the hands increase the demands on stabilization of the
neck and shoulder region, and thereby increase the risk of neck complaints. Factors like high
demands on speed and precision of movements, as well as high demands on force exerted by
the hands, imply even larger demands on stabilization of the proximal body regions.
Repetitive movements of the head are less common. Rapid and repeated changes between
visual targets are usually accomplished through eye movements, unless the distance between
the objects observed is fairly large. This may occur for example at large computerized work
stations.

Vibration

Local vibration of the hands, such as working with drills and other vibrating hand-held
machines, is transferred along the arm but the fraction transferred up to the shoulder-neck
region is negligible. However, the holding of a vibrating tool may induce muscle contractions
in the proximal shoulder-neck muscles in order to stabilize the hand and the tool, and may
thereby exert a tiring effect on the neck. The mechanisms and the prevalence of such
vibration-induced complaints are not well known.

Work organization

Work organization in this context is defined as the distribution of work tasks over time and
between workers, the duration of work tasks, and the duration and distribution of rest periods
and breaks. The duration of work and rest periods has a profound effect on tissue fatigue and
recovery. Few specific studies on the effect of work organization on neck disorders have been
performed. In a large epidemiological study in Sweden, it was found that VDU work
exceeding four hours per day was associated with elevated rates of neck symptoms
(Aronsson, Bergkvist and Almers 1992). These findings have subsequently been confirmed in
other studies.

Psychological and social factors

Associations between psychological and social factors at work and disorders of the neck
region have been demonstrated in several studies. Especially factors such as perceived
psychological stress, poor control of work organization, poor relations with management and
work mates and high demands on accuracy and speed of work have been highlighted. These
factors have been associated with an increased risk (up to twofold) of disorders in cross-
sectional studies. The mechanism is likely to be an increase of tension in the trapezius and
other muscles surrounding the neck, as part of a general “stress” reaction. Since well-
controlled longitudinal studies are scarce, it is still uncertain whether these factors are causal
or aggravating. Moreover, poor psychological and social conditions often occur in jobs also
characterized by prolonged awkward postures.

Individual factors

Individual characteristics like age, sex, muscle strength and endurance, physical fitness, body
size, personality, intelligence, leisure time habits (physical activity, smoking, alcohol, diet)
and previous musculoskeletal disorders have been discussed as factors which might modify
the response to physical and psychosocial exposures. Age as a risk factor is discussed above
and is illustrated in figure 2.

Females usually report a higher prevalence of neck symptoms than males. The most likely
explanation is that exposure to both physical and psychosocial risk factors is higher in women
than among men, such as in work with VDUs, assembly of small components and machine
sewing.

Studies of muscle groups other than those of the neck do not consistently indicate that a low
static strength implies an elevated risk of development of disorders. No data are available
concerning neck muscles. In a recent study of a random population of Stockholm, low
endurance at neck extension was weakly associated with later development of neck disorders
(Schüldt et al. 1993). Similar results have been reported for low-back disorders.

In a longitudinal study in Sweden, personality type was a risk factor for development of
shoulder-neck disorders (Hägg, Suurküla and Kilbom 1990). Those employees who had a
type A personality (e.g., were ambitious and impatient) developed more serious problems than
others, and these associations were not related to individual productivity.

Little is known of the association between other individual characteristics and neck disorders.

Prevention

Work station design

The work station should be organized so that the head is not statically bent, extended or
twisted beyond the limits of the range of motion permissible for prolonged driving given in
table 1. Now and then, movements within the limits of the normal range of motion are
acceptable, as well as the occasional movement to the individual extremes.
Experimental studies have shown that the load of the neck muscles is lower with a slightly
backward tilted trunk than with a straight upright posture, which in turn is better than a
forward tilted trunk (Schüldt 1988).

The set-up of the workstation and the positioning of the work object requires a careful
consideration and a trade-off between the demands for optimal head and shoulder-arm
posture. Usually the work object is positioned somewhat below elbow height, which may
however induce a high strain on the neck muscles (e.g., in assembly work). This requires
individually adjustable work stations.

Visual strain will increase the tension of the neck muscles, and therefore attention should be
given to the lighting and contrasts of the work station and to readability of information given
on VDUs and on printed material. For VDU work the viewing distance should be optimized
to about 45 to 50 cm and the viewing angle to 10 to 20 degrees below the horizontal. The vision
of the worker should be checked and, if necessary, corrected with the aid of glasses.
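
As a small worked example of these recommendations, the sketch below converts a viewing distance
and a downward gaze angle into the drop of the screen centre below eye level, using simple
trigonometry; the distances and angles are the ranges quoted above.

import math

# Convert viewing distance and downward gaze angle into the vertical drop of
# the screen centre below eye height (pure geometry).
def screen_drop_cm(view_dist_cm, gaze_angle_deg):
    return view_dist_cm * math.sin(math.radians(gaze_angle_deg))

for dist_cm in (45, 50):
    for angle_deg in (10, 20):
        drop = screen_drop_cm(dist_cm, angle_deg)
        print(f"distance {dist_cm} cm, angle {angle_deg} deg -> "
              f"screen centre ~ {drop:.0f} cm below eye level")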

Work organization

In work with static loads on the neck, such as in assembly and data entry VDU work, frequent
breaks should be introduced to provide recovery from fatigue. Recommendations to introduce
one break of about 10 minutes per hour and to limit VDU work to a maximum of four hours
per day have been issued in some localities. As pointed out above, the scientific basis for
these recommendations with regard to the neck is relatively weak.

Clinical Characteristics and Treatment of Neck Disorders

Painful soft-tissue disorders

Tension neck and other myalgias

The most common localization for neck tension and other myalgias is in the upper part of the
trapezius muscle, but other muscles originating in the neck are often affected simultaneously.
Symptoms are stiffness of the neck and ache at work and at rest. Frequently, excessive muscle
fatigue is perceived, even during short-lasting and low-level periods of work. The muscles are
tender, and often “tender points” can be found on palpation. Tension neck is common in jobs
with prolonged static loads on neck and shoulders. Microscopic examination of the tissue has
shown changes in the muscle morphology, but the mechanisms are incompletely understood
and are likely to involve both the blood circulation and the nervous regulation.

Acute torticollis

This state of acute pain and stiffness of the neck can be provoked by sudden twisting of the
head and extension of the opposite arm. Sometimes no provoking event can be identified.
Acute torticollis is believed to be caused by strain and partial ruptures of the ligaments of the
neck. Usually the pain and stiffness subside within a week following rest, external support of
the neck (collar) and muscle-relaxing medication.

Degenerative disorders

Acute disorder (disc herniation)

Degeneration of the cervical spine involves the discs, which lose some of their resistance to
even mild stresses. Herniation of the disc with extrusion of its contents, or bulging of it, can
compromise nervous tissue and blood vessels laterally and posteriorly to the disc. One acute
degenerative disorder of the disc is compression of the nerve roots extending from the spinal
cord and supplying the neck, arms and upper thorax. Depending on the level of compression
(disc between second and third cervical vertebrae, third and fourth, and so on), acute sensory
and motor symptoms arise from the regions supplied by the nerves. The investigation of acute
symptoms of the neck and arms includes a thorough neurological examination in order to
identify the level of a possible disc prolapse and plain x-ray examination, usually
supplemented with CT scanning and MRI.

Chronic disorders (Cervical spondylosis and cervical syndrome)

Degeneration of the cervical spine involves narrowing of the disc, formation of new bone (so-
called osteophytes) extending from the edges of the cervical vertebrae, and thickening of the
ligaments, as in the acute disorder. When osteophytes extend into the foramina, they may
compress the nerve roots. Spondylosis is the term used for the radiological changes in the
neck. Sometimes these changes are associated with chronic local symptoms. Radiological
changes may be advanced without serious symptoms and vice versa. Symptoms are usually
ache and pain in the neck, sometimes extending to the head and the shoulder region, and
reduced mobility. Whenever nerve roots are compressed, the diagnosis cervical syndrome is
used. Symptoms of cervical syndrome are ache and pain in the neck, reduced mobility of the
neck, and sensory and motor symptoms from the side of the compressed nerve root.
Symptoms like reduced sensitivity to touch, numbness, tingling and reduced strength are
common in the hand and arm. Thus symptoms are similar to those arising from acute disc
prolapse, but usually the onset is more gradual and the severity may fluctuate depending on
the external workload. Both cervical spondylosis and cervical syndrome are common in the
general population, particularly among aged persons. The risk of cervical spondylosis is
elevated in occupational groups with a sustained, high biomechanical load on the neck
structures, like coal-miners, dentists and meat industry workers.

Traumatic disorders (whiplash injuries)

In rear-end car accidents, the head (if not restricted by support from behind) is tilted backward
at high speed and with great force. In less severe accidents only partial muscle ruptures may
occur, whereas severe accidents may seriously damage the muscles and ligaments in front of
the cervical spine and also damage nerve roots. The most serious cases occur when the
cervical vertebrae are dislocated. Whiplash injuries need careful examination and treatment,
as long-lasting symptoms such as headaches may persist if the injury is not cared for properly.

Shoulder

Written by ILO Content Manager

Disorders of the shoulder region are common problems in both the general and working
population. As many as one-third of all women and one-quarter of all men report feeling pain
in the neck and shoulder every day or every other day. It is estimated that the prevalence of
shoulder tendinitis in the general population is about 2%. Among male and female workers in
the United States, the prevalence of shoulder tendinitis has been estimated to be as high as 8%
among those exposed to highly repetitive or high-force hand motions, compared to about 1%
for those without this type of musculoskeletal stress.

Anatomy

The bones in the shoulder include the collarbone (clavicle), the shoulder blade (scapula) and
the upper arm bone (humerus), which meet at the (shoulder) glenohumeral joint, as shown in
figure 1. The collarbone is connected to the body by the sternoclavicular joint, and to the
shoulder blade by the acromioclavicular joint. The sternoclavicular joint is the sole bony
connection between the upper extremity and the rest of the body. The shoulder blade has no
direct bony connection of its own, and the shoulder is thus dependent on muscles for its
fixation to the trunk. The upper arm is connected to the shoulder blade by the glenohumeral joint.
Figure 1. Schematic view of the skeletal parts of the shoulder-girdle.

The function of the shoulder is to provide a platform for the upper extremity and for some of
its muscles. Although the glenohumeral joint has a greater range of movement than, for
example, the hip joint of the lower extremity, this flexibility has been achieved at the price of
stability. While the hip joint has very strong ligaments, the ligaments in the glenohumeral
joint are few and weak. To compensate for this comparative weakness, the glenohumeral joint
is surrounded by a cuff of shoulder muscles, known as the rotator cuff.

Biomechanics

The arm represents about 5% of the total body weight, and its centre of gravity lies about
midway between the glenohumeral joint and the wrist. When the arm is raised or bent either
away from or towards the body (abduction or flexion), a lever arm is created: the horizontal
distance from the joint to the arm's centre of gravity increases, and with it the twisting force,
or loading torque, on the glenohumeral joint. The rate at which the torque increases is not,
however, simply proportional to the angle at which the arm is raised, because the function
describing the biomechanical load is not linear but is a sine function of the abduction angle.
The torque decreases by only about 10% if the flexion or abduction angle is reduced from 90
to 60 degrees; however, if the angle is reduced from 60 to 30 degrees, the torque is reduced by
as much as 50%.
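
The sine relationship described above can be illustrated with a short calculation. The following sketch (an illustrative addition, not part of the original text) computes the static gravitational torque relative to its value at 90 degrees of elevation and reproduces the approximate percentages quoted in the preceding paragraph.

```python
import math

def relative_torque(angle_deg):
    # Static gravitational torque on the glenohumeral joint, expressed relative
    # to the torque at 90 degrees of flexion/abduction; the torque is
    # proportional to the sine of the elevation angle.
    return math.sin(math.radians(angle_deg)) / math.sin(math.radians(90))

for angle in (90, 60, 30):
    print(f"{angle:>2} degrees: {relative_torque(angle):.2f} of the maximum torque")
# 90 degrees: 1.00; 60 degrees: 0.87 (a drop of roughly 10-15%);
# 30 degrees: 0.50 (about half of the maximum)
```
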
The flexion strength in the glenohumeral joint is about 40 to 50 Nm for women and about 80
to 100 Nm for men. When the arm is held straight out (90-degree forward flexion) and no
external load is placed on the arm—that is, the person is not holding anything or using the
arm to exert a force—the static load is still about 15 to 20% of the maximal voluntary
capacity (MVC) for women and about 10 to 15% MVC for men. If a tool weighing 1 kg is
held in the hand with an arm extended, the corresponding load in the shoulder would be about
80% of the MVC for women, as illustrated in figure 2.

Figure 2. Female and male strength showing the results of holding a 1 kilogramme tool in the
hand with the arm held straight at different angles of shoulder flexion.
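
The 15 to 20% MVC figure for the unloaded arm can be checked with a rough static calculation. The sketch below is illustrative only: the body mass, the distance from the glenohumeral joint to the wrist and the shoulder strength value are assumed numbers chosen within the ranges given in the text, not measured data.

```python
g = 9.81                       # gravitational acceleration, m/s^2
body_mass = 60.0               # kg, assumed body mass of a female worker
arm_mass = 0.05 * body_mass    # the arm is about 5% of total body weight
shoulder_to_wrist = 0.55       # m, assumed; the centre of gravity lies about midway
lever_arm = shoulder_to_wrist / 2
flexion_strength = 45.0        # Nm, mid-range of the 40-50 Nm quoted for women

arm_torque = arm_mass * g * lever_arm           # roughly 8 Nm at 90 degrees of flexion
load_fraction = arm_torque / flexion_strength   # roughly 0.18
print(f"Static load of the unloaded, horizontal arm: {100 * load_fraction:.0f}% of MVC")
```
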

The most important muscles for abduction—or raising the arm away from the body to the
side—are the deltoid muscle, the rotator cuff muscles and the long head of the biceps. The
most important muscles for forward flexion—raising the arm away from the body to the
front—are the anterior part of the deltoid muscle, the rotator cuff muscles, the
coracobrachialis muscle and the short head of the biceps brachii muscle. Inward rotation is
performed by the pectoralis major muscle, the subscapularis muscle, the anterior part of the
deltoid muscle and by the latissimus dorsi muscle. Outward rotation is performed by the
posterior part of the deltoid muscle, the infraspinatus muscle and the minor and major teres
muscles.

Rotator cuff muscles are engaged in any movement of the glenohumeral joint, which is to say
any movement of the arm. The rotator cuff muscles originate from the shoulder blade, and
their tendons are arranged around the humerus in the form of a cuff, from which their name is
derived. The four rotator cuff muscles are the supraspinatus, the infraspinatus, the teres minor
and the subscapularis. These muscles function as ligaments of the glenohumeral joint
and also keep the humeral head pressed against the shoulder blade. A rupture of the rotator cuff (e.g.,
of the supraspinatus tendon) will cause a reduction in abduction strength, particularly
involving those positions where the arm is bent away from the body. When the function of the
deltoid muscles is lost, the abduction strength can be reduced by as much as 50%, regardless
of the angle at which the arm is being bent.

Any time there is forward flexion or abduction of the arm, a load will be placed on the
system. Many motions will cause a twisting force, or torque, as well. Since the arm is
connected to the shoulder blade by the glenohumeral joint, any load that is placed on this joint
will be transferred to the shoulder-blade. The load in the glenohumeral joint, measured in %
MVC, is almost directly proportional to the load placed on the muscle which fixes the
shoulder blade into place, the upper trapezius.

Major Specific Work-Related Diseases

Rotator cuff disorders and biceps tendinitis

Tendinitis is inflammation of a tendon, and tenosynovitis is inflammation of the synovial membrane of a
tendon sheath. The tendons of the rotator cuff muscles (supraspinatus, infraspinatus,
subscapularis, and teres minor muscles) and the long head of the biceps brachii are common
sites for inflammation in the shoulder. Large movements of the tendons are involved at these
locations. During elevation, as the tendons pass to the shoulder joint and under the bony
structure there (the coraco-acromial arch), they may be impinged upon, and inflammation
may result. These disorders are sometimes termed impingement syndromes. Inflammation of
a tendon may be part of a general inflammatory disease, such as in rheumatoid arthritis, but
also may be caused by local inflammation which results from mechanical irritation and
friction.

Shoulder joint and acromioclavicular joint osteoarthritis

Shoulder joint and acromioclavicular joint osteoarthritis (OA) are degenerative changes of
the cartilage and underlying bone in these joints.

Epidemiology

There is a high prevalence of shoulder tendinitis among welders and steel-platers, with rates
of 18% and 16%, respectively. In one study which compared welders and steel-platers to male
office workers, the welders and steel-platers were 11 to 13 times more likely to suffer from
the disorder, as measured by the odds ratio. A similar odds ratio of 11 was found in a case-
control study of male industrial workers who worked with their hands held at or about
shoulder level. Automobile assemblers who suffered from acute shoulder pain and tendinitis
were required to elevate their arms more frequently and for longer durations than were
workers who did not have such job requirements.
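
For readers unfamiliar with the measure, an odds ratio such as those quoted above is computed from a two-by-two case-control table. The counts in the sketch below are invented purely for illustration; they are not the data of the cited studies.

```python
# Hypothetical counts: cases and controls, split by exposure to work at or
# above shoulder level. The odds ratio compares the odds of exposure among
# cases with the odds of exposure among controls.
exposed_cases, unexposed_cases = 54, 10
exposed_controls, unexposed_controls = 30, 60

odds_ratio = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
print(f"Odds ratio = {odds_ratio:.1f}")   # about 10.8 with these illustrative counts
```
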

Studies of industrial workers in the United States have shown a 7.8% prevalence of shoulder
tendinitis and degenerative joint disease of the shoulder (classified among the cumulative
trauma disorders (CTDs)) in workers whose tasks involved exerting force or repetitive motions,
or both, with the wrist and hands. In one study, female students performing repetitive shoulder
flexion developed reversible shoulder tendinitis. They developed the condition when the
flexion rate, over the course of one hour, was 15 forward flexions per minute and the angle of
flexion was between 0 and 90 degrees. Boarding, folding and sewing workers suffered about
twice as much shoulder tendinitis as did knitting workers. Among professional baseball
pitchers, approximately 10% have experienced shoulder tendinitis. A survey of swimmers in
Canadian swimming clubs found that 15% of the swimmers reported having significant
shoulder disability, primarily due to impingement. The problem was particularly related to the
butterfly and freestyle strokes. Tendinitis of the biceps brachii was found in 11% of the 84
best tennis players in the world.

Another study showed that shoulder joint osteoarthritis was more common in dentists than
among farmers, but the ergonomic exposure related to shoulder joint OA has not been
identified. An increased risk for acromioclavicular OA has been reported among construction
workers. Heavy lifting and handling of heavy tools with hand-arm vibration have been
suggested as the exposure related to acromioclavicular joint OA.

Mechanisms and Risk Factors of Disease

Pathophysiology of shoulder tendinitis

Tendon degeneration is often the predisposing factor for development of shoulder tendinitis.
Such degeneration of the tendon can be caused by impairment of circulation to the tendon so
that metabolism is disrupted. Mechanical stress may also be a cause. Cell death within the
tendon, which forms debris and in which calcium may deposit, may be the initial form of
degeneration. The tendons to the supraspinatus, the biceps brachii (long head) and the upper
parts of the infraspinatus muscles have a zone in which there are no blood vessels
(avascularity), and it is in this area that signs of degeneration, including cell death, calcium
deposits and microscopic ruptures, are predominantly located. When blood circulation is
impaired, such as through compression and static load on the shoulder tendons, then
degeneration can be accelerated because normal body maintenance will not be functioning
optimally.

Compression of the tendons occurs when the arm is elevated. A process that is often referred
to as impingement involves forcing the tendons through the bony passageways of the
shoulder, as illustrated in figure 3. Compression of the rotator cuff tendons (especially the
supraspinatus tendon) results because the space between the humeral head and the tight
coracoacromial arch is narrow. People who are suffering with long-term disability due to
chronic bursitis or complete or partial tears of the rotator cuff tendons or biceps brachii
usually also have impingement syndrome.
Figure 3. Impingement

The circulation of blood to the tendon also depends on muscle tension. In the tendon,
circulation will be inversely proportional to the tension. At very high tension levels,
circulation may cease completely. Recent studies have shown that the intramuscular pressure
in the supraspinous muscle can exceed 30 mm Hg at 30 degrees of forward flexion or
abduction in the shoulder joint, as shown in figure 4. Impairment of blood circulation occurs
at this pressure level. Since the major blood vessel supplying the supraspinous tendon runs
through the supraspinous muscle, it is likely that the circulation of the tendon may even be
disturbed at 30 degrees of forward flexion or abduction in the shoulder joint.
Figure 4. Raising the arm to different elevations and at different angles exerts different
intramuscular pressures on the supraspinous muscle.

Because of these biomechanical effects, it is not surprising to find a high risk of shoulder
tendon injuries among those who are involved in activities that require static contractions of
the supraspinatus muscle or repetitive shoulder forward flexions or abductions. Welders,
steel-platers and sewers are among the occupational groups whose work involves static
tension of these muscles. Assembly line workers in the automotive industry, painters,
carpenters and athletes such as swimmers are other occupational groups in which repetitive
shoulder joint movements are performed.

In the degenerated tendon, exertion may trigger an inflammatory response to the debris of
dead cells, resulting in an active tendinitis. Also, infection (e.g., viral, urogenital) or systemic
inflammation may predispose an individual to reactive tendinitis in the shoulder. One
hypothesis is that an infection, which makes the immune system active, increases the
possibility of a foreign body response to the degenerative structures in the tendon.

Pathogenesis of osteoarthrosis

The pathogenesis of osteoarthrosis, OA, is not known. Primary (idiopathic) OA is the most
common diagnosis in absence of predisposing factors such as previous fractures. If
predisposing factors exist, the OA is termed secondary. There are disputes between those who
claim (primary) OA to be a metabolic or genetic disorder and those who claim that cumulative
mechanical trauma also may play a part in the pathogenesis of primary OA. Microfractures
due to sudden impact or repetitive impact loading may be one pathogenic mechanism for
load-related OA.
Management and Prevention

In this section, non-medical management of shoulder disorders is considered. A change of
workplace design or change of work task is necessary if the tendinitis is considered to be due
to high local shoulder load. A history of shoulder tendinitis makes a worker doing repetitive
or overhead work susceptible to a relapse of tendinitis. Loading of the osteoarthritic joint
should be minimized by ergonomic optimization of work.

Primary prevention

Prevention of work-related musculoskeletal disorders in the shoulder can be achieved by
improving work postures, motions, material handling and work organization, and eliminating
external hazardous factors such as hand-arm vibration or whole-body vibration. A
methodology that may be advantageous in improving ergonomic working conditions is
participatory ergonomics, taking a macro-ergonomic approach.

- Work postures: Since compression of the shoulder tendons occurs at 30 degrees of arm
elevation (abduction), work should be designed to allow the upper arm to be kept close to
the trunk.
- Motions: Repetitive arm elevations may trigger shoulder tendinitis, and work should be
designed to avoid highly repetitive arm motions.
- Material handling: Handling of tools or objects may place severe loads on shoulder
tendons and muscles. Hand-held tools and objects should be kept at the lowest weight
feasible and should be used with supports to assist in lifting.
- Work organization: Work organization should be designed to allow pauses and rests.
Vacations, rotation and job enlargement are all techniques which may avoid repetitive
loading of single muscles or structures.
- External factors: Impact vibration and other impacts from power tools may strain both
tendons and joint structures, increasing the risk of osteoarthrosis. Vibration levels of power
tools should be minimized, and impact vibration and other types of impact exposure
avoided by using different types of support or levers. Whole-body vibration may cause
reflex contractions of the shoulder muscles and increase the load on the shoulder.
- Participatory ergonomics: This method involves the workers themselves in defining the
problems and solutions, and in the evaluation of the solutions. Participatory ergonomics
starts from a macro-ergonomic view, involving analysis of the whole production system.
The results of this analysis might lead to large-scale changes in production methods that
could increase health and safety as well as profit and productivity. The analysis could also
lead to smaller-scale changes, such as in workstation design.
- Preplacement examinations: Currently available information does not support the idea that
preplacement screening is effective in reducing the occurrence of work-related shoulder
disorders.
- Medical control and surveillance: Surveillance of shoulder symptoms is readily carried out
using standardized questionnaires and inspection walk-throughs of workplaces.
Elbow

Written by ILO Content Manager

Epicondylitis

Epicondylitis is a painful condition which occurs at the elbow, where the muscles that move
the wrist and fingers meet the bone. When this painful condition occurs on the outside of the
elbow it is called tennis elbow (lateral epicondylitis). When it occurs on the inside of the
elbow bend, it is called golfer's elbow (medial epicondylitis). Tennis elbow is a fairly
common disease in the general population, and in some studies a high occurrence has been
observed in some occupational groups with hand-intensive tasks (table 1); it is more common
than medial epicondylitis.

Table 1. Incidence of epicondylitis in various populations.

Study population                          Rate per 100 person-years   Reference
5,000 workers of diverse trades           1.5                         Manz and Rausch 1965
15,000 subjects of a normal population    <1.0                        Allander 1974
7,600 workers of diverse trades           0.6                         Kivi 1982
102 male meatcutters                      6.4                         Kurppa et al. 1991
107 female sausage makers                 11.3                        Kurppa et al. 1991
118 female packers                        7.0                         Kurppa et al. 1991
141 men in non-strenuous jobs             0.9                         Kurppa et al. 1991
197 women in non-strenuous jobs           1.1                         Kurppa et al. 1991
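
The rates in table 1 are incidence rates: new cases divided by the person-time at risk, scaled to 100 person-years. The sketch below illustrates the calculation with invented numbers; the case count and follow-up time are not taken from the cited studies.

```python
new_cases = 20           # new cases of epicondylitis observed during follow-up (hypothetical)
workers = 102            # number of workers under observation (hypothetical)
follow_up_years = 3.0    # average follow-up per worker, in years (hypothetical)

person_years = workers * follow_up_years
rate_per_100_person_years = 100 * new_cases / person_years
print(f"Incidence: {rate_per_100_person_years:.1f} cases per 100 person-years")  # about 6.5
```
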

Epicondylitis is thought to be caused by repetitive and forceful exertions of the wrist and
fingers; controlled studies have, however, given contradictory results concerning the role of
hand-intensive tasks in the development of the disease. Trauma can also play a role, and the
proportion of cases occurring after trauma has ranged from 0 to 26% in different studies.
Epicondylitis usually occurs in people aged 40 years and older. The disease is rare under the
age of 30. Little is known of other individual risk factors. A common view about the
pathology is that there is a tear at the insertion of the muscles. Symptoms of epicondylitis
include pain, especially during exertion of the hand and wrist, and gripping with the elbow
extended may be extremely painful.

There are various concepts of the pathogenesis of epicondylitis. The duration of epicondylitis
is usually from some weeks to some months, after which there is usually complete recovery.
Among workers with hand-intensive tasks the length of sick-leave due to epicondylitis has
usually been about or slightly over two weeks.

Olecranon Bursitis

Olecranon bursitis is an inflammation of the fluid-filled sac on the dorsal side of the elbow
(the olecranon bursa). It may be caused by repeated mechanical trauma (traumatic or "student's"
bursitis). It may also be due to infection or associated with gout. There is local swelling, and
fluctuation (a wave-like movement felt on palpation) due to the accumulation of fluid in the
bursa. Raised skin temperature suggests an infectious process (septic bursitis).

Osteoarthrosis

Osteoarthrosis, the degenerative disease that results from a breakdown of cartilage in the elbow,
is rarely observed in people under the age of 60. However, an excess prevalence of
osteoarthrosis has been found among some occupational groups whose work includes
intensive use of hand tools or other heavy manual work, such as coal miners and road
construction workers. Valid studies showing no excess risk in such occupations have also been
reported. Elbow arthrosis has also been associated with vibration, but it is believed
that osteoarthrosis of the elbow is not specific to vibration.

The symptoms include local pain, first during movement and later also during rest, and
limitation of the range of motion. In the presence of loose bodies in the joint, locking of the
joint may occur. Loss of the ability to extend the joint completely is especially disabling.
Abnormalities seen on x rays include the growth of new bone tissue at the sites where
ligaments and tendons meet the bone. Sometimes loose pieces of cartilage or bone can be
seen. Damage to the joint cartilage may result in destruction of the underlying bone tissue and
deformation of joint surfaces.

The prevention and treatment of elbow osteoarthrosis emphasize optimizing work load by
improving tools and work methods to decrease the mechanical loads imposed on the upper
limb, and minimizing exposure to vibration. Active and passive movement therapy may be
used in order to minimize restrictions in the range of motion.

Forearm, Wrist and Hand

Written by ILO Content Manager

Tenosynovitis and Peritendinitis

Wrist and finger extensors and flexors

In the wrist and hand the tendons are surrounded by tendon sheaths, which are tubular
structures containing fluid to provide lubrication and protection for the tendon. An
inflammation of the tendon sheath is called tenosynovitis. Inflammation of the site where the
muscle meets the tendon is called peritendinitis. The location of wrist tenosynovitis is at the
tendon sheath area in the wrist, and the location of peritendinitis is above the tendon sheath
area in the forearm. Insertion tendinitis denotes an inflammation of the tendon at the site
where it meets the bone (figure 1).
Figure 1. The muscle-tendon unit.

The terminology for the diseases of the tendon and its adjacent structures is often used
loosely, and sometimes “tendinitis” has been used for all painful conditions in the forearm-
wrist-hand region, regardless of the type of clinical appearance. In North America an umbrella
diagnosis “cumulative trauma disorder” (CTD) has been used for all upper extremity soft
tissue disorders believed to be caused, precipitated or aggravated by repetitive exertions of the
hand. In Australia and some other countries, the diagnosis of “repetitive strain injury” (RSI)
or “overuse injury” has been used, while in Japan the concept of “occupational
cervicobrachial disorder” (OCD) has covered soft-tissue disorders of the upper limb. The two
latter diagnoses include also shoulder and neck disorders.

The occurrence of tenosynovitis or peritendinitis varies widely according to the type of work.
High incidences have been reported typically among manufacturing workers, such as food-
processing workers, butchers, packers and assemblers. Some recent studies show that high
incidence rates exist even in modern industries, as shown in table 1. Tendon disorders are
more common on the back side than on the flexor side of the wrist. Upper extremity pain and
other symptoms are prevalent also in other types of tasks, such as modern keyboard work. The
clinical signs that keyboard workers present are, however, rarely compatible with
tenosynovitis or peritendinitis.

Table 1. Incidence of tenosynovitis/peritendinitis in various populations.

Study population                       Rate per 100 person-years   Reference
700 Muscovite tea packers              40.5                        Obolenskaja and Goljanitzki 1927
12,000 car factory workers             0.3                         Thompson et al. 1951
7,600 workers of diverse trades        0.4                         Kivi 1982
102 male meatcutters                   12.5                        Kurppa et al. 1991
107 female sausage makers              16.8                        Kurppa et al. 1991
118 female packers                     25.3                        Kurppa et al. 1991
141 men in non-strenuous jobs          0.9                         Kurppa et al. 1991
197 women in non-strenuous jobs        0.7                         Kurppa et al. 1991

Frequent repetition of work movements and high force demands on the hand are powerful risk
factors, especially when they occur together (Silverstein, Fine and Armstrong 1986).
Generally accepted values for acceptable repetitiveness and use of force do not, however, yet
exist (Hagberg et al. 1995). Being unaccustomed to hand-intensive work, either as a new
worker or after an absence from work, increases the risk. Deviated or bent postures of the
wrist at work and low environmental temperature have also been considered as risk factors,
although the epidemiological evidence to support this is weak. Tenosynovitis and
peritendinitis occur at all ages. Some evidence exists that women might be more susceptible
than men (Silverstein, Fine and Armstrong 1986). This has, however, been difficult to
investigate, because in many industries the tasks differ so widely between women and men.
Tenosynovitis may be due to bacterial infection, and some systemic diseases such as
rheumatoid arthritis and gout are often associated with tenosynovitis. Little is known about
other individual risk factors.

In tenosynovitis the tendon sheath area is painful, especially at the ends of the tendon sheath.
The movements of the tendon are restricted or locked, and there is weakness in gripping. The
symptoms are often worst in the morning, and functional ability improves after some activity.
The tendon sheath area is tender on palpation, and tender nodes may be found. Bending of the
wrist increases pain. The tendon sheath area may also be swollen, and bending the wrist back
and forth may produce crepitation or crackling. In peritendinitis, a typical fusiform swelling is
often visible on the backside of the forearm.

Tenosynovitis of the flexor tendons at the palmar aspect of the wrist may cause entrapment of
the median nerve as it runs through the wrist, resulting in carpal tunnel syndrome.

The pathology at an acute stage of the disease is characterized by the accumulation of fluid
and a substance called fibrin in the tendon sheath in tenosynovitis, and in the paratenon and
between the muscle cells in peritendinitis. Later, cell growth is noticed (Moore 1992).

It should be emphasized that tenosynovitis or peritendinitis that is clinically identifiable as
occupational is found in only a minor proportion of cases of wrist and forearm pain among
working populations. The majority of workers first seek medical attention with the symptom
of tenderness to palpation as the sole clinical finding. It is not fully known whether the
pathology in such conditions is similar to that in tenosynovitis or peritendinitis.

In the prevention of tenosynovitis and peritendinitis, highly repetitive and forceful work
movements should be avoided. In addition to attention to work methods, work organizational
factors (the quantity and pace of work, pauses and work rotation) also determine the local
load imposed on the upper limb, and the possibility of introducing variability to work by
affecting these factors should be considered as well. New workers and workers returning from
a leave or changing tasks should be gradually accustomed to repetitive work.

For industrial workers with hand-intensive tasks, the typical length of sick leave due to
tenosynovitis or peritendinitis has been about ten days. The prognosis of tenosynovitis and
peritendinitis is usually good, and most workers are able to resume their previous work tasks.
De Quervain’s tenosynovitis

De Quervain’s tenosynovitis is a stenosing (or constricting) tenosynovitis of the tendon
sheaths of the muscles that extend and abduct the thumb, at the outer aspect of the wrist. The
condition occurs in early childhood and at any later age. It may be more common among
women than among men. Prolonged repetitive movements of the wrist and blunt trauma have
been suggested as causative factors, but this has not been epidemiologically investigated.

The symptoms include local pain at the wrist and weakness of grip. The pain may sometimes
extend into the thumb or up into the forearm. There is tenderness, and sometimes thickening, on
palpation at the constriction site. Sometimes nodular thickening may be visible. Bending the
wrist towards the little finger with the thumb flexed in the palm (Finkelstein’s test) typically
exacerbates the symptoms. Some cases show triggering or snapping upon moving the thumb.

The pathological changes include thickened outer layers of the tendon sheaths. The tendon
may be constricted and show enlargement beyond the site of constriction.

Stenosing tenosynovitis of the fingers

The tendon sheaths of the flexor tendons of the fingers are held close to the joint axes by tight
bands, called pulleys. The pulleys may thicken and the tendon may show nodular swelling
beyond the pulley, resulting in stenosing tenosynovitis, often accompanied by painful locking
or triggering of the finger. The terms trigger finger and trigger thumb are used to denote such
conditions.

The causes of trigger finger are largely unknown. Some cases that occur in early childhood
are likely to be congenital, and some seem to appear after trauma. Trigger finger has been
postulated to be caused by repetitive movements, but no epidemiological studies to test this
have been carried out.

The diagnosis is based on local swelling, possible nodular thickening, and snapping or
locking. The condition is often encountered in the palm at the level of the metacarpal heads
(the knuckles), but may occur also elsewhere and in multiple sites.

Osteoarthrosis

Radiographically detectable osteoarthrosis in the wrist and hand is rare in the normal
population under the age of 40, and it is more common among men than women
(Kärkkäinen 1985). After the age of 50, hand arthrosis is more prevalent among women than
among men. Heavy manual labour, with and without exposure to low-frequency (below
40 Hz) vibration, has been associated, although not consistently, with an excess prevalence of
osteoarthrosis in the wrist and hand. For higher frequencies of vibration, no excess joint
pathology has been reported (Gemne and Saraste 1987).

Osteoarthrosis of the first joint between the base of the thumb and the wrist (carpometacarpal
joint) occurs fairly commonly among the general population and is more common among
women than men. Osteoarthrosis is less common in the knuckles (metacarpophalangeal
joints), with the exception of the metacarpophalangeal joint of the thumb. The aetiology of these
disorders is not well known.
Osteoarthrotic changes are common in the joints closest to the fingertip (distal interphalangeal
joints of fingers), in which the age-adjusted prevalence of radiographically detectable changes
(mild to severe) in different fingers varies between 9 and 16% among the men and 13 and
22% among the women of a normal population. Distal interphalangeal osteoarthrosis can be
detected by clinical examination as nodular outgrowths on the joints, called Heberden’s
nodes. In a Swedish population study among 55-year-old women and men, Heberden’s nodes
were detected in 5% of men and 28% of women. Most subjects showed changes in both
hands. The presence of Heberden’s nodes showed a correlation with heavy manual labour
(Bergenudd, Lindgärde and Nilsson 1989).

Joint load associated with the manipulation of tools, repetitive movements of the hand and
arm possibly together with minor traumatization, loading of the joint surfaces in extreme
postures, and static work have been considered as possible causative factors for wrist and
hand osteoarthrosis. Although osteoarthrosis has not been considered specific to low-
frequency vibration, the following factors might play a role as well: damage of the joint
cartilage from shocks from the tool, additional joint load associated with a vibration-induced
increase in the need for joint stabilization, the tonic vibration reflex and a stronger grip on the
tool handle induced when sensitivity to touch is diminished by vibration (Gemne and Saraste
1987).

The symptoms of osteoarthrosis include pain during movement in the initial stages, later also
during rest. Limitation of motion in the wrist does not markedly interfere with work activities
or other activities of daily living, whereas osteoarthrosis of the finger joints may interfere with
gripping.

To avoid osteoarthrosis, tools should be developed that help to minimize heavy manual
labour. Vibration from tools should be minimized as well.

Compartment Syndrome

The muscles, nerves and blood vessels in the forearm and hand are located in specific
compartments limited by bones, membranes and other connective tissues. Compartment
syndrome denotes a condition in which the intracompartmental pressure is constantly or
repeatedly increased to a level at which the compartmental structures may be injured
(Mubarak 1981). This may occur after trauma, such as fracture or crush injury to the arm.
Compartment syndrome after strenuous exertion of the muscles is a well-known disease in the
lower extremity. Some cases of exertional compartment syndrome in the forearm and hand
have also been described, although the cause of these conditions is not known. Neither have
generally accepted diagnostic criteria nor indications for treatment been defined. The afflicted
workers have usually had hand-intensive work, although no epidemiological studies on the
association between work and these diseases have been published.

The symptoms of compartment syndrome include tenseness of the fascial boundaries of the
compartment, pain during muscle contraction and later also during rest, and muscle weakness.
In clinical examination, the compartment area is tender, painful on passive stretching, and
there may be diminished sensitivity in the distribution of the nerves running through the
compartment. Intracompartmental pressure measurements during rest and activity, and after
activity, have been used to confirm the diagnosis, but full agreement on normal values does
not exist.
Intracompartmental pressure increases when the volume of the contents increases within the
rigid compartment. This is followed by an increase in venous blood pressure and a decrease in
the difference between arterial and venous blood pressure, which in turn impairs the blood
supply of the muscle. The result is anaerobic energy production and muscle injury.

The prevention of exertional compartment syndrome includes avoiding or restricting the
activity causing the symptoms to a level that can be tolerated.

Ulnar Artery Thrombosis (Hypothenar Hammer Syndrome)

The ulnar artery may be damaged, with subsequent thrombosis and occlusion of the vessel,
in Guyon's canal on the inner (ulnar) aspect of the palm. A history of repeated trauma to
the ulnar side of the palm (hypothenar eminence), such as intensive hammering or using the
hypothenar eminence as a hammer, has often preceded the disease (Jupiter and Kleinert
1988).

The symptoms include pain and cramping and cold intolerance of the fourth and fifth fingers.
Neurological complaints may also be present, such as aching, numbness and tingling, but the
performance of the muscles is usually normal. On clinical examination, coolness and
blanching of the fourth and fifth fingers may be observed, as well as trophic changes of the
skin. Allen's test is usually positive, indicating that when the radial artery is compressed, no
blood flows to the palm via the ulnar artery. A palpable tender mass may be found in the
hypothenar region.

Dupuytren’s Contracture

Dupuytren’s contracture is a progressive shortening (fibrosis) of the palmar fascia (connective
tissue joining the flexor tendons of the fingers) of the hand, leading to permanent contracture
of the fingers in a flexion posture. It is a common condition in people of North-European
origin, affecting about 3% of the general population. The prevalence of the disease among the
men is twice that among the women, and may be as high as 20% among males aged over 60.
Dupuytren’s contracture is associated with epilepsy, type 1 diabetes, alcohol consumption and
smoking. There is evidence for an association between vibration exposure from hand-held
tools and Dupuytren’s contracture. The presence of the disease has been associated also with
single injury and heavy manual labour. Some evidence exists to support an association
between heavy manual work and Dupuytren’s contracture, whereas the role of single injury
has not been adequately addressed (Liss and Stock 1996).

The fibrotic change appears first as a node. Later the fascia thickens and shortens, forming a
chordlike attachment to the digit. As the process progresses, the fingers turn to permanent
flexion. The fifth and fourth fingers are usually affected first, but other fingers also may be
involved. Knuckle pads may be seen on the back side of the digits.

Wrist and Hand Ganglia

A ganglion is a soft, liquid-filled small sac; ganglia represent the majority of all soft tissue
tumours of the hand. Ganglia are common, although the prevalence in populations is not
known. In clinical populations, women have shown a higher prevalence than men, and both
children and adults have been represented. Controversy exists on the causes of ganglia. Some
consider them inborn while others believe that acute or repeated trauma play a role in their
development. Different opinions exist also on the development process (Angelides 1982).

The most typical location of the ganglion is at the outer aspect of the back of the wrist
(dorsoradial ganglion), where it can present as a soft, clearly visible formation. A smaller
dorsal ganglion may not be noticeable without flexing the wrist markedly. The volar wrist
ganglion (at the palmar aspect of the wrist) is typically located on the outer side of the tendon
of the radial flexor of the wrist. The third commonly occurring ganglion is located at the
pulley of the finger flexor tendon sheath at the level of the knuckles. A volar wrist ganglion
may cause entrapment of the median nerve in the wrist, resulting in carpal tunnel syndrome.
In rare cases a ganglion may be located in the ulnar canal (Guyon’s canal) in the inner palm
and cause entrapment of the ulnar nerve.

The symptoms of wrist ganglia include local pain typically during exertion and deviated
postures of the wrist. The ganglia in the palm and fingers are usually painful during gripping.

Disorders of Motor Control of the Hand (Writer’s Cramp)

Tremor and other uncontrolled movements may disturb hand functions which demand high
precision and control, such as writing, assembly of small parts and playing musical
instruments. The classical form of the disorder is writer’s cramp. The occurrence rate of
writer’s cramp is not known. It affects both sexes and seems to be common in the third, fourth
and fifth decades.

The causes of writer’s cramp and the related disorders are not fully understood. A hereditary
predisposition has been suggested. The conditions are nowadays considered as a form of task-
specific dystonia. (Dystonias are a group of disorders characterized by involuntary sustained
muscle contractions, causing twisting and repetitive movements, or abnormal postures.)
Pathological evidence of brain disease has not been reported for patients with writer’s cramp.
Electrophysiological investigations have revealed abnormally prolonged activation of muscles
involved in writing, and excess activation of those muscles that are not directly involved with
the task (Marsden and Sheehy 1990).

In writer’s cramp, usually painless muscle spasm appears immediately or shortly after starting
to write. The fingers, wrist and hand may assume abnormal postures, and the pen is often
gripped with excessive force. The neurological status may be normal. In some cases an
increased tension or tremor of the affected arm is observed.

Some of the subjects with writer’s cramp learn to write with the non-dominant hand, and a
small proportion of these do develop cramp in the non-dominant hand as well. Spontaneous
healing of writer’s cramp is rare.

Hip and Knee

Written by ILO Content Manager

The hip joint is a ball-and-socket joint surrounded by ligaments, strong muscles and bursae.
The joint is weight bearing and has both high intrinsic stability and a wide range of motion. In
young people pain in the hip region usually originates in the muscles, tendon insertions or
bursae, while in older people, osteoarthrosis is the predominant disorder causing hip pain.

The knee is a weight-bearing joint that is important for walking, standing, bending, stooping
and squatting. The knee is rather unstable and depends for support on ligaments and strong
muscles as shown in figure 1. There are two joints in the knee, the femorotibial and the
femoropatellar. On both the inner and outer sides of the joint there are strong ligaments, and in the
centre of the femorotibial joint are the cruciate ligaments, which give stability and assist in the
normal mechanical function of the knee. The menisci are curved, fibrocartilaginous structures
that lie between the femoral bone (femoral condyles) and the tibial bone (tibial plateau). The knee
joint is both stabilized and powered by muscles that originate above the hip joint and at the
shaft of the femur and are inserted on bony structures below the knee joint. Around the
knee joint there is a synovial capsule, and the joint is protected by several bursae.

Figure 1. The knee.


All these structures are easily injured by trauma and overuse, and medical treatment for knee
pain is rather common. Osteoarthrosis of the knee is a common disorder among the elderly,
leading to pain and disability. In younger people, patellar bursitis, patellofemoral pain
syndrome and conditions such as a painful pes anserinus are rather common.

Osteoarthrosis

Osteoarthrosis (OA) is a common degenerative joint disorder in which the cartilage is more or
less destroyed and the structure of the underlying bone is affected. Sometimes it is
accompanied by few symptoms, but usually OA causes suffering, changes in ability to work
and a decreased quality of life. Changes in the joint can be seen on x ray, and an OA sufferer
usually seeks medical care because of pain, which is present even at rest, and a diminished
range of motion. In severe cases, the joint may become totally stiff, and even destroyed.
Surgery to remove a destroyed joint and replace it with a prosthesis is well developed today.

Studying the causes of osteoarthrosis of the hip is difficult. The onset of the disorder is
usually hard to pinpoint; the development is usually slow and insidious (that is, one doesn’t
necessarily know it is happening). The end point, for research purposes, can be different
things, varying from slight changes in x rays to symptomatic disorders that require surgery.
Indeed the end points used to identify the condition may differ because of different traditions
in different countries, and even between different clinics in the same town. These factors
cause problems in the interpretation of research studies.

Epidemiological research tries to identify associations between exposures, such as physical
load, and outcomes, such as osteoarthrosis. When combined with other knowledge, it is
possible to find associations that could be considered causal, but the cause-effect chain is
complicated. Osteoarthrosis is common in every population, and one must remember that the
disorder exists among persons with no known hazardous exposure, while there are healthy
subjects in groups with high and well-known harmful exposure. Unknown pathways between
exposure and disorder, unknown health factors, genetics and selection forces may account for
some of this.

Individual risk factors

Age: The occurrence of arthrosis increases with age. X-ray investigations of osteoarthrosis of
different joints, mainly the hip and the knee, have been made in different populations and the
prevalences found to vary. The explanation might be ethnic differences or variations in
investigation techniques and diagnostic criteria.

Congenital and developmental diseases and changes: Early changes in the joint, such as
congenital malformations, those caused by infections and so on, lead to an earlier and faster
progression of osteoarthrosis of the hip. Knock-knees (valgus) and bandy-legs (varus), for
example, put an uneven distribution of forces on the knee joint, which can have some
importance for the development of arthrosis.

Heredity: Hereditary factors are present for osteoarthrosis. For example, osteoarthrosis of the
hip is a rare disease among people of Asian origin but more common among Caucasians,
which suggests a hereditary factor. Osteoarthrosis in three or more joints is called generalized
osteoarthrosis and has a hereditary pattern. The hereditary pathway for osteoarthrosis of the
knee is not very well known.

Overweight: Overweight can probably cause osteoarthrosis of the knee and hip. The
relationship between overweight and knee osteoarthrosis has been shown in large
epidemiological studies of the general population, such as the National Health and Nutrition
Examination Survey (NHANES) and Framingham study in the United States. The association
was strongest for women but existed even for men (Anderson and Felson 1988; Felson et al.
1988).

Trauma: Accidents and injuries, especially those that interfere with the mechanics and
circulation of the joint and its ligaments, can give rise to early osteoarthrosis.

Sex and oestrogen use: Osteoarthrosis of the hip and knee seems to be equally distributed
among men and women. From a study on female participants in the Framingham study, it was
concluded that oestrogen use in women is associated with a modest but insignificant
protective effect against osteoarthrosis of the knee (Hannan et al. 1990).

Mechanical load

Experimental studies in monkeys, rabbits, dogs and sheep have shown that compression
forces on a joint, especially when it is held in an extreme position, with or without
simultaneous shifting loads, can lead to changes in the cartilage and bone similar to those of
osteoarthrosis in human beings.

Sports activities: Participation in sports can increase the load on different joints. The risk of
trauma is also increased. On the other hand, however, good muscle function and coordination
are developed at the same time. Few data are available as to whether participating in sports
prevents trauma or is harmful to the joints. Data drawn from good scientific studies are very
limited, and some are described here. Several studies of soccer players have shown that both
professionals and amateurs have more osteoarthrosis of the hip and knee than the general
male population. For example, one Swedish study of 50- to 70-year-old men with a severe
osteoarthrosis who were compared with healthy men in the same age group, showed that the
men with osteoarthrosis had been more heavily involved in sports activities in their youth.
Track and field, racket sports, and soccer seemed to be most harmful (Vingård et al. 1993). In
the scientific literature there are other studies that have not shown any differences between
athletes and those who do not participate in sports. However, most of them were performed on
still-active athletes and are thus not conclusive.

Workload factors

The aetiology of osteoarthrosis of the knee and hip is, as for all diseases, complex and
multifactorial. Recent well-performed studies have shown that physical load on the joint from
occupational exposures will play a role as a contributing cause for the development of
premature osteoarthrosis.

Most epidemiological studies concerning physical workload are cross-sectional and carried
out on occupational groups without making individual exposure assessments. These serious
methodological problems make generalizing the results of such studies extremely difficult.
Farmers have been found to have more osteoarthrosis of the hip than other occupational
groups in several studies. In a Swedish study of 15,000 farmers, farmers’ wives and other
farm workers were asked about past x-ray examinations in which the hip joint could be seen.
Among the 565 men and 151 women who had been examined, hip joints were studied using
the same criteria and the same investigator as in a population study from Sweden in 1984. The
distribution of osteoarthrosis of the hip among male farmers and the male population of
Malmö is shown in table 1 (Axmacher and Lindberg 1993).

Table 1. Prevalence of primary osteoarthrosis of the hip among male farmers and population
of different age groups in the city of Malmö.

                 Male farmers                        Male Malmö population
Age group        N       Cases    Prevalence        N       Cases    Prevalence
40–44            96      1        1.0%              250     0        0.0%
45–49            127     5        3.9%              250     1        0.4%
50–54            156     12       6.4%              250     2        0.8%
55–59            127     17       13.4%             250     3        1.2%
60–64            59      10       16.9%             250     4        1.6%

N = Number of men studied; cases = men with osteoarthrosis of the hip.


Source: Axmacher and Lindberg 1993.
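
The prevalence figures in table 1 are simply the number of cases divided by the number of men examined. As an illustration (using the 55 to 59-year age group from the table), the following sketch recomputes the prevalences and the farmer-to-city prevalence ratio.

```python
farmers_n, farmer_cases = 127, 17   # male farmers aged 55-59 (from table 1)
malmo_n, malmo_cases = 250, 3       # Malmö men aged 55-59 (from table 1)

prevalence_farmers = farmer_cases / farmers_n   # about 0.134, i.e. 13.4%
prevalence_malmo = malmo_cases / malmo_n        # about 0.012, i.e. 1.2%
ratio = prevalence_farmers / prevalence_malmo   # farmers show roughly 11 times the prevalence
print(f"Farmers: {100 * prevalence_farmers:.1f}%  Malmö men: {100 * prevalence_malmo:.1f}%  "
      f"ratio: {ratio:.1f}")
```
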

In addition to farmers, construction workers, food-processing workers (grain-mill workers,
butchers and meat preparers), firefighters, mail carriers, shipyard workers and professional
ballet dancers have all been found to be at an increased risk of hip osteoarthrosis. It is
important to realize that an occupational title alone does not adequately describe the stress on
a joint—the same job type can mean different loads for different workers. Further, the load of
interest in a study is the exact pressure placed on a joint. In a study from Sweden, physical
workload has been quantified retrospectively through individual interviews (Vingård et al.
1991). Men with high physical load exposures due to their occupations up to the age of 49 had
more than double the risk for developing osteoarthrosis of the hip compared to those with low
exposure. Both dynamic exposures, such as heavy lifting, and static exposure, such as
prolonged sitting in a twisted position, seemed to be equally harmful to the joint.

The risk of knee osteoarthrosis has been found to be increased in coal miners, dockers,
shipyard workers, carpet and floor layers and other construction workers, firefighters, farmers
and cleaners. Moderate to heavy physical demands at work, knee bending and traumatic
injury increase the risk.

In an English study from 1968, dockers were found to have more osteoarthrosis of the
knee than civil servants in sedentary occupations (Partridge and Duthie 1968).

In Sweden, Lindberg and Montgomery investigated workers in a shipyard and compared them
to office workers and teachers (Lindberg and Montgomery 1987). Among shipyard workers
3.9% had gonarthrosis, compared to 1.5% among office workers and teachers.

In Finland, Wickström compared concrete reinforcement workers with painters, but no
differences in knee disability were found (Wickström et al. 1983). In a later Finnish
study, knee disorders in carpet and floor layers and painters were compared (Kivimäki,
Riihimäki and Hänninen 1992). Knee pain, knee accidents, and treatment regimes for the
knees, as well as osteophytes around the patella, were more common among carpet and floor
layers than among the painters. The authors suggest that kneeling work increases the risk of
knee disorders and that the changes observed in x rays might be an initial sign of knee
degeneration.

In the United States, factors associated with osteoarthrosis of the knee in the first National
Health and Nutrition Examination Survey (NHANES 1) were examined for a total of 5,193
men and women aged 35 to 74 years, 315 of whom had x-ray-diagnosed osteoarthrosis of the
knee (Anderson 1988). In investigating occupational load, the authors characterized the
physical demands and knee-bending stress of jobs from occupational titles in the US
Department of Labor's Dictionary of Occupational Titles. For both men and women, for those whose jobs
were described as involving a lot of knee-bending, the risk for developing an osteoarthrosis of
the knee was more than double that for those without such jobs. When controlling for age and
weight in the statistical analysis, they found that 32% of the osteoarthrosis of the knee
occurring in these workers was attributable to occupation.
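
The attributable fraction quoted above can be understood through the standard population attributable fraction formula, PAF = Pe(RR - 1) / [Pe(RR - 1) + 1], where Pe is the proportion of workers exposed and RR the relative risk. The sketch below is illustrative: the exposure prevalence is a hypothetical value, not the actual NHANES 1 figure.

```python
relative_risk = 2.0       # "more than double the risk" for jobs with much knee-bending
exposed_fraction = 0.47   # hypothetical proportion of workers in such jobs

paf = (exposed_fraction * (relative_risk - 1)) / (exposed_fraction * (relative_risk - 1) + 1)
print(f"Population attributable fraction: {100 * paf:.0f}%")   # about 32% with these inputs
```
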

In the Framingham study in the United States, subjects from Framingham, a town outside
Boston, have been followed in an epidemiological study for more than 40 years (Felson
1990). Occupational status was reported for the years 1948–51 and 1958–61, and x rays were
examined for radiographic osteoarthrosis of the knee during the years 1983–85. Each
subject’s job was characterized by its level of physical demand and whether the job was
associated with knee-bending. This study also found that the risk for developing
osteoarthrosis of the knee was doubled for those with a lot of knee bending and at least
medium physical demands in their occupation.

In a study from California, the roles of physical activity, obesity and knee injury in the
development of severe osteoarthrosis of the knee were evaluated (Kohatsu and Schurman
1990). Forty-six people with gonarthrosis and 46 healthy people from the same community
were studied. The persons with osteoarthrosis were two to three times more likely than the
controls to have performed moderate to heavy work earlier in life and 3.5 times more likely to
have been obese at the age of 20. They were almost five times more likely to have had a knee
injury. There was no difference in the leisure time activities reported in the two groups.

In a register-based cohort study from Sweden (Vingård et al. 1991), subjects born between
1905 and 1945, living in 13 of the 24 counties in Sweden in 1980 and reporting that they held
the same blue-collar occupation in the censuses of 1960 and of 1970, were studied. The blue-
collar occupations reported were then classified as to whether they were associated with high
(more than average) or low (less than average) load on the lower extremity. During 1981,
1982 and 1983 it was determined whether the study population sought hospital care for
osteoarthrosis of the knee. Among men, firefighters, farmers and construction workers had an
elevated relative risk of developing osteoarthrosis of the knee. Among women, cleaners were
found to be at greater risk.

Chondromalacia patellae

A special case of osteoarthrosis is chondromalacia patellae, which often starts in the young. It
is a degenerative change in the cartilage on the back of the patella. The symptom is pain in the
knee, especially while bending it. In sufferers, the patella is very tender when tapped,
especially when pressure is applied to it. The treatment is quadriceps muscle training and, in
severe cases, surgery. The connection to occupational activity is unclear.

Patellar bursitis

In the knee, there is a bursa between the skin and the patella. The bursa, which is a sac
containing fluid, can be subject to mechanical pressure during kneeling and thus become
inflamed. Symptoms are pain and swelling. A substantial amount of serous fluid can be
aspirated from the bursa. This disorder is rather common among occupational groups that do a
lot of kneeling. Kivimäki (1992) has investigated soft-tissue changes in the front of the knee
using ultrasonography in two occupational groups. Among carpet and floor layers 49% had
thickening of the prepatellar or superficial infrapatellar bursa, compared to 7% among
painters.

Pes anserinus bursitis

The pes anserinus consists of the tendons of the sartorius, gracilis and semitendinosus muscles
at the inner aspect of the knee joint. Under the insertion point of these tendons there is a
bursa that can become inflamed. The pain is increased by forceful extension of the knee.

Trochanter bursitis

The hip is surrounded by many bursae. The trochanteric bursa lies between the tendon of the
gluteus maximus muscle and the posterolateral prominence of the greater trochanter (on the
outer side of the hip). Pain in this area is usually called trochanteric bursitis. Sometimes it
is a true bursitis. The pain can radiate down the thigh and may simulate sciatic pain.

Theoretically it is possible that a special occupational posture can cause the disorder, but there
are no scientific investigations.

Meralgia paresthetica

Meralgia paresthetica belongs to the entrapment disorders; the cause is probably entrapment of
the nervus cutaneus femoris lateralis where the nerve emerges between muscles and fasciae above
the edge of the pelvis (spina iliaca anterior superior). The sufferer has pain along the front
and lateral side of the thigh. The disorder can be rather difficult to cure. Different remedies,
from painkillers to surgery, have been used with varying success. Since some occupational
exposures cause pressure against the nerve, this condition may be an occupational disorder.
Anecdotal accounts of this exist, but there are no epidemiological investigations available to
verify it.

Leg, Ankle and Foot

Written by ILO Content Manager

In general, pain is the main symptom of disorders of the leg, ankle and foot. It often follows
exercise and may be aggravated by exercise. Muscle weakness, neurological deficiency,
problems with fitting shoes, instability or stiffness of joints, and difficulties in walking and
running are common problems in these disorders.
The causes of problems are usually multifactorial, but most often they arise from
biomechanical factors, infections and/or systemic diseases. Foot, knee and leg deformities,
bone and/or soft-tissue changes that follow an injury, excessive stress such as repetitive use,
instability or stiffness and improper shoes are common causes of these symptoms. Infections
may occur in the bony or soft tissues. Diabetes, rheumatic diseases, psoriasis, gout and blood
circulation disturbances often lead to such symptoms in the lower limb.

Besides the history, a proper clinical examination is always necessary. Deformities, function
disturbances, the blood circulation and the neurological state should be carefully examined.
Analysis of gait may be indicated. Plain radiographs, CT, MRI, sonography, ENMG, vascular
imaging and blood tests may contribute to the pathological and aetiological diagnosis and
treatment.

Principles of treatment. The treatment should always be directed towards eliminating the
cause. Except in trauma, the main treatment is usually conservative. If possible, the deformity
is corrected with proper shoes and/or an orthosis. Good ergonomic advice, including correction
of faulty walking and running habits, is often beneficial. Reduction of excessive loading,
physiotherapy, anti-inflammatory drugs and, in rare cases, a short immobilization may be
indicated. Redesign of work may also be indicated.

Surgery may also be recommended in some acute traumas, and especially for persistent symptoms
which have not responded to conservative therapy, but specific medical advice is needed in each
case.

Achilles Tendinitis

The disorder is usually due to overuse of the Achilles tendon, which is the strongest tendon in
the human body and is found in the lower leg and ankle. The tendon is exposed to excessive
loading, especially in sports, resulting in pathological inflammatory and degenerative changes
in the tendon and its surrounding tissues, bursae and paratenon. In severe cases a complete
rupture may follow. Predisposing factors are improper shoes, malalignment and deformities of
the foot, weakness or stiffness of the calf muscles, running on hard and uneven surfaces and
intensive training. Achilles tendinitis occasionally occurs in some rheumatic diseases, after
fractures of the crus or foot, in some metabolic diseases and following renal transplantation.

Pain and swelling in the region of the calcaneal tendon, the Achilles tendon, are rather
common symptoms, especially in sportsmen. The pain is located in the tendon or its
attachment to the calcaneum.

More men than women develop Achilles tendinitis. The symptoms are more frequent in
recreational sports than in professional athletics. Running and jumping sports may especially
lead to Achilles tendinitis.

The tendon is tender, often nodular and swollen, and fibrotic. Microruptures
may be present. A clinical examination can be supported mainly by MRI and ultrasonography
(US). MRI and US are superior to CT for demonstration of the region and quality of the soft-
tissue changes.

Proper shoes, orthotics to correct malalignment, and advice on correct biomechanical training
may prevent the development of Achilles tendinitis. When symptoms are present, conservative
treatment is often successful: avoidance of excessive training, proper shoes with heel lifts and
shock absorption, physiotherapy, anti-inflammatory drugs, and stretching and strengthening of
the calf muscles.

Calcaneal Bursitis

Pain behind the heel, usually aggravated by walking, is often caused by a calcaneal bursitis,
which frequently is associated with Achilles tendinitis. The disorder may be found in both
heels and can occur at any age. In children, calcaneal bursitis is often combined with an
exostosis or osteochondritis of the calcaneum.

In most cases improper footwear with a narrow and hard back of the shoe is the cause of this
disorder. In athletics, excessive loading of the heel region, as in running, may provoke
Achilles tendinitis and retrocalcaneal bursitis. A deformity of the back of the foot is a
predisposing factor. There is usually no infection involved.

Upon examination, the tender heel is thickened and the skin may be red. There is often an
inward bending of the hind part of the foot. Especially for differential diagnosis, radiographs
are important and may reveal changes in the calcaneum (e.g., Sever’s disease, osteochondral
fractures, osteophytes, bone tumours and osteitis). In most cases the history and the clinical
examination will be supported by MRI or sonography. A retrocalcaneal bursogram can
provide further insight into chronic cases.

The symptoms may subside without any treatment. In mild cases conservative treatment is
usually successful. The painful heel should be protected with strapping and proper shoes with
soft backs. An orthosis correcting the wrong position of the hind part of the foot may be
valuable. A correction of the walking and running behaviour is often successful.

Surgical excision of the bursa and the impinging part of the calcaneum is indicated only when
conservative treatment has failed.

Morton’s Metatarsalgia

Metatarsalgia is pain in the forefoot. It may be due to a neuroma of the plantar digital nerve,
Morton’s neuroma. The typical pain is in the forefoot, usually radiating into the third and
fourth toes, rarely into the second and third toes. The pain occurs on standing or walking, at any age,
but is most frequent in middle-aged women. At rest the pain disappears.

The condition is often connected with flat forefoot and callosities. Compression of the
metatarsal heads from side to side and of the space between the metatarsal heads may elicit
pain. In plain radiographs the neuroma is not seen but other changes (e.g., bony deformities
causing metatarsalgia) may be visible. MRI may reveal the neuroma.

Conservative treatment—proper shoes and pads—to support the anterior arch is often
successful.

Tarsal Tunnel Syndrome

Burning pain along the sole of the foot and in all the toes, which may be due to compression of
the posterior tibial nerve within the fibro-osseous tunnel under the flexor retinaculum of the
ankle, is the typical symptom of tarsal tunnel syndrome. Many conditions can lead to compression
of the nerve. The most common causes are bone irregularities, ankle fractures or dislocations,
local ganglia or tumours, or poorly fitting footwear.

There may be loss of feeling in the areas supplied by the medial and lateral plantar nerves,
weakness and paralysis of the foot muscles, especially the toe flexors, a positive Tinel’s sign
and tenderness along the course of the nerve.

A proper clinical examination of the function and the neurological and vascular state is
essential. The syndrome may also be diagnosed by electrophysiological tests.

Compartment Syndromes of the Lower Limb

A compartment syndrome is the result of prolonged high pressure within a closed intrafascial
muscle space, leading to markedly reduced blood circulation in the tissues. The high
intracompartmental pressure is usually due to trauma (crush injuries, fractures and
dislocations), but it may also result from overuse, tumours and infections. A tight cast may
lead to a compartment syndrome, as may diabetes and blood vessel disorders. The first symptoms
are tense swelling, pain and curtailment of function, which are not relieved when the leg is
elevated, immobilized or treated with common drugs. Later on there will be paresthesia, numbness
and paresis. In growing persons, a compartment syndrome may result in growth disturbances and
deformities in the affected region.

If a compartment syndrome is suspected, a careful clinical examination should be performed,
including examination of the vascular, neurological and muscular state, the active and passive
mobility of the joints and so on. Measurement of the pressure by multi-stick catheterization of
the compartments should be performed. MRI, Doppler investigation and sonography may be helpful
in the diagnosis.

Foot and Ankle Region Tenosynovitises

Of many symptoms in the foot, pain following tenosynovitis is rather common, especially in
the ankle region and the longitudinal arch. The cause of the synovitis may be deformities of
the foot, such as planovalgus, excessive stress, improper shoe fit, or sequelae to fractures and
other injuries, rheumatological disorders, diabetes, psoriasis and gout. Synovitis may occur in
many tendons, but the Achilles tendon is most often affected. Only rarely does tendinitis
involve infection. A medical history and clinical examination are essential in the diagnosis of
synovitis. Local pain, tenderness and painful movement are the main symptoms. Plain
radiographs to show bone changes, and MRI, especially for changes in the soft tissues, may be
needed.

Ergonomic advice is needed. Proper shoes, correction of the walking and running habits and
prevention of excessive stress situations on the job are usually beneficial. A short period of
rest, immobilization in a cast and anti-inflammatory drugs are often indicated.

Hallux Valgus

Hallux valgus consists of extreme deviation of the first joint of the great toe towards the
midline of the foot. It is often associated with other foot disorders (varus of the first
metatarsal; flat foot, pes planotransversus or planovalgus). Hallux valgus may occur at any
age, and it is seen more commonly in women than in men. The condition is in most cases
familial, and it is often due to the wearing of improperly fitted shoes, such as ones with high
heels and narrow pointed toe boxes.

In this condition, the metatarsophalangeal joint is prominent, the first metatarsal head is
enlarged, and there may be an (often inflamed) bursa, the bunion, over the medial aspect of the
joint. The great
toe frequently overrides the second toe. The soft tissues of the toe are often changed due to
the deformity. The range of extension and flexion of the metatarsophalangeal joint is usually
normal, but it may be stiff due to osteoarthritis (hallux rigidus). In the vast majority of cases
the hallux valgus is painless and requires no treatment. In some cases, however, hallux valgus
causes shoe fitting problems and pain.

The treatment should be individualized according to the age of the patient, the degree of the
deformity and the symptoms. Especially with adolescents and cases with mild symptoms,
conservative treatment is recommended—proper shoes, insoles, pads to protect the bunion
and so on.

Surgery is reserved especially for adult patients with severe shoe-fitting problems and pain,
whose symptoms are not relieved by conservative treatment. The surgical procedures are not
always successful, and therefore mere cosmetic factors should not be a real indication for
surgery; but there is a great range of opinions regarding the usefulness of the approximately
150 different surgical procedures for hallux valgus.

Fascitis Plantaris

The sufferer feels pain under the heel, especially on long standing and walking. The pain
radiates frequently to the sole of the foot. Plantar fascitis may occur at any age, but it is most
frequent in middle-aged persons. The patients are often obese. It is also a rather common
disorder in people who engage in sports. Often the foot has a flattened longitudinal arch.

There is local tenderness especially beneath the calcaneum at the attachment of the plantar
fascia. All the fascia may be tender. On x ray a bony spur is seen in the calcaneum in about
50% of the patients, but it is also present in 10 to 15% of symptomless feet.

The causes of fascitis plantaris are not always clear. An infection, particularly gonorrhoea,
rheumatoid arthritis and gout may cause the symptoms. Most frequently, no specific disease is
connected with the condition. Increased pressure and tension of the fascia may be the main cause
of the tenderness. The calcaneal spur may be a result of overuse of the plantar fascia. It is
probably not the primary cause of the calcaneal tenderness, because many patients with these
symptoms have no calcaneal spur and many with a calcaneal spur are symptomless.
Other Diseases

Written by ILO Content Manager

Primary Fibromyalgia

The cause of fibromyalgia is not known. Some patients associate trauma and infections with
the development of symptoms, but there is no hard evidence in favour of such triggering
events. However, many factors are known to aggravate the existing symptoms. Cold, damp
weather, mental disturbance, physical or mental stress, and also physical inactivity have all
been associated with fibromyalgia (Wolfe 1986).

A major feature is patients waking up in the morning feeling tired. Abnormal serotonin
metabolism is associated with both the sleep disturbance and decreased pain threshold typical
in these patients (Goldberg 1987).

The symptoms of fibromyalgia start insidiously with persisting widespread musculoskeletal
pains, multiple general symptoms such as fatigue, stiffness, subjective swelling of the fingers
not observed by the examining physician, non-refreshing sleep and muscular pain after
exertion. About one-third of the patients have additional symptoms, such as irritable bowel
syndrome, tension headaches, premenstrual syndrome, numbness and tingling in the
extremities, dryness of mouth and eyes and constriction of the blood vessels in the fingers
when exposed to cold (Raynaud’s phenomenon).

Typically, a patient with fibromyalgia has a large variety of symptoms, which, with the
exception of tender points, have no objective counterparts. Fibromyalgia runs a chronic
course. Most of the patients continue to have symptoms with varying intensity. Complete
remission is an exception. In primary fibromyalgia, no laboratory signs referring to
inflammatory arthritis are present. Patients with inflammatory arthritis (e.g. rheumatoid
arthritis) can also have fibromyalgic symptoms, in which case the term secondary
fibromyalgia is applied.

There is no single test for fibromyalgia. The diagnosis of fibromyalgia is based on the history
of the patient and on the clinical observation of tender points (figure 1). The prevalence of
fibromyalgia in the general population is 0.5 to 1%. Most (75 to 90%) of the patients are
women, usually between 25 and 45 years of age; children are rarely affected.
Figure 1. Tender point sites in fibromyalgia.

The American College of Rheumatology has set criteria for the classification of fibromyalgia
(figure 2).
Figure 2. The American College of Rheumatology 1990 criteria for the diagnosis of
fibromyalgia.

Diagnosis

Other ailments with similar symptoms must be excluded. Widespread pain must have been present
for at least three months. In addition, there must be pain in at least 11 of the 18 tender point
sites shown in figure 1 when they are pressed by an examiner’s finger.
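Since the rule above consists of just two explicit conditions, it can be summarized
schematically. The following is a minimal Python sketch, assuming that other ailments have
already been excluded as required; the function and constant names are illustrative only, and
the sketch is not a clinical instrument.

# Minimal sketch of the ACR 1990 fibromyalgia classification rule as summarized above.
# Assumes other ailments with similar symptoms have already been excluded.

def meets_acr_1990_criteria(widespread_pain_months: float,
                            tender_points_positive: int) -> bool:
    """Return True if both conditions described in the text are met."""
    TOTAL_TENDER_POINT_SITES = 18        # the 18 defined sites (figure 1)
    REQUIRED_POSITIVE_SITES = 11         # pain in at least 11 sites on digital pressure
    MINIMUM_PAIN_DURATION_MONTHS = 3     # widespread pain for at least three months

    chronic_widespread_pain = widespread_pain_months >= MINIMUM_PAIN_DURATION_MONTHS
    enough_tender_points = (REQUIRED_POSITIVE_SITES
                            <= tender_points_positive
                            <= TOTAL_TENDER_POINT_SITES)
    return chronic_widespread_pain and enough_tender_points

# Illustrative use: pain for six months and 12 of 18 tender points satisfies the criteria.
print(meets_acr_1990_criteria(widespread_pain_months=6, tender_points_positive=12))   # True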

Rheumatoid Arthritis

About 1% of the adult population has rheumatoid arthritis. The onset of the disease is usually
at 30 to 50 years of age, with females having a threefold higher risk than males. The
prevalence of the disease increases in older populations.

The cause of rheumatoid arthritis is not known. It is not inherited, but genetic factors increase
the risk for the development of the disease. In addition to one or multiple genetic factors,
some environmental triggering factors are thought to play a role in its pathogenesis, and viral
or bacterial infections are heavily suspected.

Rheumatoid arthritis usually starts gradually. Typically, the patient has mild swelling of the
small joints of fingers, and tenderness of the feet that manifests in a symmetrical fashion. If
joints on one hand, for example, are involved it is likely that the same joints on the other hand
will be affected. Stiffness of hands and feet in the morning is a major symptom. The patient
often has fatigue and can have mild fever. Laboratory features include evidence of
inflammation (elevated erythrocyte sedimentation rate and C-reactive protein level) and often
mild anaemia. About 70% of the patients have circulating rheumatoid factor (autoantibody
against IgG-class immunoglobulin). In early cases, radiological examination of hands and feet
is often normal, but later on, most patients develop radiological evidence of joint destruction
(erosions). The diagnosis of rheumatoid arthritis is based on a mixture of clinical, laboratory
and radiological findings (see figure 3).

Figure 3. Criteria for the diagnosis of rheumatoid arthritis.

The diseases which most often cause differential diagnostic problems are degenerative joint
diseases of the hands, arthritis following infections, spondylarthropathies, and some rare
connective tissue diseases (Guidelines 1992).

Patient education to diminish the work-load of the joints, use of ergonomic appliances, good
footwear, and proper treatment of infections form the basis of preventive measures. Treatment
guidelines are given in table 1.

Table 1. Guidelines for the treatment of rheumatoid arthritis

1. Treatment of joint pain: Nonsteroidal anti-inflammatory drugs; Acetaminophen
(Dextropropoxyphene)

2. Treatment of joint inflammation (disease-modifying antirheumatic drugs): Intramuscular gold;
Sulfasalazine; Auranofin; Antimalarials; D-penicillamine; Methotrexate; Azathioprine;
Cyclosporine (Cyclophosphamide); Glucocorticosteroid therapy

3. Local injections: Glucocorticosteroid; Chemical synovectomy with osmium tetroxide; Injection
of radioactive isotopes

4. Surgery: Early reparative surgery (synovectomy, tenosynovectomy); Reconstructive surgery

5. Rehabilitation: Occupational therapy; Physiotherapy; Education; Evaluation of needs for aids
and appliances

Spondylarthropathies

Epidemiology and aetiology

Spondylarthropathies include typical clinical entities such as ankylosing spondylitis and some
forms of arthritides associated with psoriasis, with chronic inflammatory bowel diseases, or
with bacterial infections in the urogenital tract or in the gut (so called reactive arthritis). The
diseases are common. The prevalence of the most chronic form, ankylosing spondylitis, in
Western populations varies between 0.1 and 1.8% (Gran and Husby 1993). It is estimated that, in
a population of 10,000, three new cases of transient arthritis such as reactive arthritis occur
annually for every one patient with ankylosing spondylitis. Most of the patients who develop
spondylarthropathy are young adults, between 20 and 40 years of age. There is evidence that the
mean age at onset of symptoms for patients with ankylosing spondylitis is increasing (Calin et
al. 1988).

Spondylarthropathies have a strong genetic component since a majority of the patients have
an inherited genetic marker, HLA-B27. The frequency of this marker is about 7 to 15% in
Western populations; 90 to 100% of patients with ankylosing spondylitis and 70 to 90% of
patients with reactive arthritis are HLA-B27 positive. However, at population levels, most of
the subjects with this marker are healthy. Therefore, it is thought that exogenous factors, in
addition to the genetic susceptibility, are needed for the development of the disease. Such
triggering factors include bacterial infections in the urogenital tract or in the gut (table 2),
skin lesions, and chronic inflammatory bowel diseases. The evidence in favour of infections is
most direct in the case of reactive arthritis. Salmonella infections are increasing widely, and
an increase in cases with joint complications can therefore be expected. Agriculture and poultry
production can be sources of these infections. As to yersinia infections, pigs harbour yersinia
bacteria in their tonsils, and slaughtering followed by cold storage of the meat products has
been suggested to contribute to the spread of infections to humans. In patients
with ankylosing spondylitis, however, usually no preceding infections can be traced as an
initiating event. Recent results have, however, focused on the finding that patients with
ankylosing spondylitis often have asymptomatic chronic gut inflammation, which could serve
as a triggering factor or as a contributing inflammatory focus in the chronicity of the disease.

Table 2. Infections known to trigger reactive arthritis

Upper respiratory tract: Chlamydia pneumoniae; beta-haemolytic streptococcus (usually causes
rheumatic fever)

Gut: Salmonella; Shigella; Yersinia enterocolitica; Yersinia pseudotuberculosis; Campylobacter
jejuni

Urogenital tract: Chlamydia trachomatis; Neisseria gonorrhoeae

Signs and symptoms

Peripheral arthritis is asymmetric, affects large joints and has a predilection for the lower
extremities. The patients often also have inflammatory low back pain, which is worse by night
and relieved by movement, not by rest. A typical feature is a tendency to inflammation of the
junction between tendons and bones (enthesopathy), which can manifest itself as pain under the
heel or at the calcaneus at the insertion of the Achilles tendon. In addition to inflammation in
the joints and at ligamentous insertions, patients can also have inflammatory symptoms in the
eyes (iritis or conjunctivitis), the skin (psoriasis, skin lesions on the palms or soles, or leg
induration) and sometimes in the heart.

The following are the diagnostic criteria for spondylarthropathy (Dougados et al. 1991); a
schematic rendering of the rule is sketched after the list.

Inflammatory low back pain or joint inflammation (synovitis) that is:

 asymmetric
 with a predilection for the lower extremities

and at least one of the following:

 positive family history for spondylarthropathy
 psoriasis
 inflammatory bowel disease
 buttock pain alternating from side to side
 pain at the junction between tendon and bone (enthesopathy).
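The criteria above reduce to an entry criterion (inflammatory low back pain or the described
synovitis) plus at least one additional finding. The following minimal Python sketch renders
that decision rule; the field names are illustrative rather than taken from Dougados et al., and
the sketch is not a diagnostic instrument.

# Minimal sketch of the spondylarthropathy criteria listed above (Dougados et al. 1991).
# Field names are illustrative; this is not a diagnostic instrument.

from dataclasses import dataclass

@dataclass
class PatientFindings:
    inflammatory_low_back_pain: bool
    synovitis_asymmetric_or_lower_limb: bool   # joint inflammation as described above
    family_history_spondylarthropathy: bool
    psoriasis: bool
    inflammatory_bowel_disease: bool
    alternating_buttock_pain: bool
    enthesopathy: bool                          # pain at tendon-bone junctions

def meets_spondylarthropathy_criteria(p: PatientFindings) -> bool:
    # Entry criterion: inflammatory low back pain OR the described synovitis.
    entry_criterion = p.inflammatory_low_back_pain or p.synovitis_asymmetric_or_lower_limb
    # At least one of the additional findings listed above.
    additional_finding = any([
        p.family_history_spondylarthropathy,
        p.psoriasis,
        p.inflammatory_bowel_disease,
        p.alternating_buttock_pain,
        p.enthesopathy,
    ])
    return entry_criterion and additional_finding

# Illustrative use: inflammatory low back pain plus psoriasis satisfies the criteria.
example = PatientFindings(True, False, False, True, False, False, False)
print(meets_spondylarthropathy_criteria(example))   # True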

Patients with ankylosing spondylitis have low-back pain, worse by night, and tenderness
between the spine and the pelvis at the sacroiliac joints. They can have limited mobility of the
spine with chest tenderness. A third of the patients have peripheral arthritis and enthesopathy.
The cornerstone of the diagnosis of ankylosing spondylitis is the presence of radiological
changes in the sacroiliac joints: loss of space between the joints and bony outgrowths. Such
changes add to the diagnostic accuracy in patients with spondylarthropathy but are required only
for the diagnosis of ankylosing spondylitis.

Gout

Epidemiology and aetiology

Gout is a metabolic disorder that is the most common cause of inflammatory arthritis in men.
Its prevalence among adults varies from 0.2 to 0.3 per 1000, and is 1.5% in adult males. The
prevalence of gout increases with age and with increasing serum urate levels.

Hyperuricemia (high level of uric acid in serum) is a risk factor. Contributing factors are
chronic kidney diseases which lead to renal insufficiency, hypertension, use of diuretic drugs,
high alcohol intake, lead exposure and obesity. Gouty attacks are precipitated by
supersaturation of the joint fluid with uric acid; the precipitated crystals irritate the joint,
leading to the development of acute arthritis.

Signs and symptoms

The natural course of gout runs through several stages, from asymptomatic hyperuricemia to acute
gouty arthritis, asymptomatic periods and chronic tophaceous gout (gout with nodules).

Acute gouty arthritis often manifests itself as an acute inflammation in one joint, usually at
the base of the big toe. The joint is very tender, swollen and highly painful; it is often red. The
acute attack can subside spontaneously within days. If untreated, repeated attacks can occur,
and in some patients these continue (during the subsequent years) so that the patient develops
chronic arthritis. In these patients, urate deposits can be observed in the ear helices, at the
elbows, or at the Achilles tendons, where they form non-tender subcutaneous palpable masses
(tophi).
Infectious Arthritis

Epidemiology and aetiology

In children, infectious arthritis often develops in a previously healthy child, but adults often
have some predisposing factor, such as diabetes, chronic arthritis, use of glucocorticosteroid
or immunosuppressive therapy, or previous injections or trauma in the joint. Patients with an
endoprosthesis are also susceptible to infections in the operated joint.

Bacteria are most often the cause of infectious arthritis. In immunosuppressed patients, fungi
can be found. Although bacterial infection in the joint is rare, it is very important to recognize,
because, if untreated, the infection rapidly destroys the joint. The microbes can reach the
joint via the circulation (septic infection), through a direct penetrating wound or a joint
injection, or from an adjacent infectious focus.

Signs and symptoms

In a typical case, a patient has acute joint inflammation, usually in one single joint, which is
painful, hot, red and tender to movement. There are general symptoms of infection (fever,
chills) and laboratory evidence of acute inflammation. The joint aspiration is turbid, and in
microscopic examination a high number of white blood cells are seen, with positive stain and
cultures for bacteria. The patient can have signs of a focus of infection elsewhere, such as
pneumonia.

Osteoporosis

Epidemiology and aetiology

Bone mass increases from childhood until adolescence; women gain 15% less bone density than men.
Bone mass is at its highest between 20 and 40 years of age, after which there is a constant
decrease. Osteoporosis is a condition in which bone mass decreases and bones become susceptible
to fracture. Osteoporosis is a major cause of morbidity in the elderly. The most important
manifestations are lumbar and hip fractures. About 40% of women who have reached 70 years of age
have suffered a fracture.

Peak bone mass is influenced by genetic factors. In women, bone mass decreases after
menopause. The decrease of bone mass in men is less distinct than in women. In addition to
lack of oestrogen, other factors influence the rate of bone loss and the development of
osteoporosis. These include physical inactivity, low amount of calcium in the diet, smoking,
coffee consumption and low body weight. Use of systemic corticosteroid therapy is also
associated with enhanced risk for osteoporosis.

Signs and symptoms

Osteoporosis can be asymptomatic. On the other hand, the most distinct manifestation of
osteoporosis is bone fracture, typically that of the hip, vertebrae (spine) and wrist. Hip and
wrist fractures usually are a result of falling, but vertebral fractures can develop insidiously
after a trivial trauma. The patient has back pain, kyphosis and loss of height.
Bone Cancer

Epidemiology and aetiology

Primary malignant bone tumours are uncommon. They occur most often in children and
young adults. Osteosarcoma is the most frequent of the malignant bone tumours. It is most
frequently observed in the second decade of life; in older adults it can arise secondary to a
bone disease (Paget’s disease). Ewing’s sarcoma is also mostly observed in children, who
present with destructive changes in the pelvis or in the long bones. Malignant tumours
originating from the cartilage (chondrosarcoma) can occur in many cartilage areas. In adults,
malignant bone lesions are often metastatic (i.e., the primary malignant disease is somewhere
else in the body).

Most malignant primary tumours have no known aetiology. However, Paget’s disease of
bone, osteomyelitis, osteonecrosis and radiation injury have been associated with malignant
transformation. Bone metastases are frequent in primary cancers of the breast, lung, prostate,
kidney or thyroid gland.

Signs and symptoms

Pain, limitation of movement and swelling are present in patients with osteosarcoma. In
addition to bone pain, patients with Ewing’s sarcoma often have systemic symptoms such as
fever, malaise and chills. Chondrosarcomas can cause varying symptoms depending on the
site of the tumour and its histologic details.

Osteomyelitis

Epidemiology and aetiology

Osteomyelitis is a bone infection that is usually bacterial, but it can be fungal or viral. In
an otherwise healthy person, osteomyelitis is a rare event, but in patients with chronic
diseases such as diabetes or rheumatoid arthritis, infection in the body can spread to the bones
by the bloodstream or by direct invasion. In children, the most favoured site of spread is the
shaft of a long bone, whereas in adults the infection is often in the spine. Penetrating or
blunt trauma and prior orthopaedic surgery (insertion of a prosthesis) can also be complicated
by osteomyelitis, since they provide a focal point from which infection can spread by the
bloodstream or by direct invasion.

Signs and symptoms

Acute bone infection of the long bones is associated with fever, chills and bone pain. Spinal
osteomyelitis can cause more vague symptoms with progressive pain and low-grade fever.
Infections around a prosthesis cause pain and tenderness when moving the operated joint.
Musculoskeletal System References
Agency for Health Care Policy and Research (AHCPR). 1994. Acute low-back problems in
adults. Clinical Practice Guidelines 14. Washington, DC: AHCPR.

Allander, E. 1974. Prevalence, incidence and remission rates of some common rheumatic
diseases or syndromes. Scand J Rheumatol 3:145-153.

American Academy of Orthopaedic Surgeons. 1988. Joint Motion. New York: Churchill
Livingstone.
Anderson, JAD. 1988. Arthrosis and its relation to work. Scand J Work Environ Health
10:429-433.

Anderson, JJ and DT Felson. 1988. Factors associated with osteoarthritis of the knee in the
first National Health and Nutrition Survey (HANES 1): Evidence for an association with
overweight, race and physical demands of work. Am J Epidemiol 128:179-189.

Angelides, AC. 1982. Ganglions of the hand and wrist. In Operative Hand Surgery, edited by
DP Green. New York: Churchill Livingstone.

Armstrong, TJ, WA Castelli, G Evans, and R Diaz-Perez. 1984. Some histological changes in
carpal tunnel contents and their biomechanical implications. J Occup Med 26(3):197-201.

Armstrong, TJ, P Buckle, L Fine, M Hagberg, B Jonsson, A Kilbom, I Kuorinka, B Silverstein, B
Sjøgaard, and E Viikari-Juntura. 1993. A conceptual model for work-related neck and upper-limb
musculoskeletal disorders. Scand J Work Environ Health 19:73-84.

Arnett, FC, SM Edworthy, DA Bloch, DJ McShane, JF Fries, NS Cooper, LA Healey, SR Kaplan, MH
Liang, HS Luthra, TAJ Medsger, DM Mitchell, DH Neustadt, RS Pinals, JG Schaller, JT Sharp, RL
Wilder, and GG Hunder. 1988. The American Rheumatism Association 1987 revised criteria for the
classification of rheumatoid arthritis. Arthritis Rheum 31:315-324.

Aronsson, G, U Bergkvist, and S Almers. 1992. Work Organization and Musculoskeletal Disorders in
VDU-Work (Swedish with Summary in English). Solna: National Institute of Occupational Health.
Axmacher, B and H Lindberg. 1993. Coxarthrosis in farmers. Clin Orthop 287:82-86.

Bergenudd, H, F Lindgärde, and B Nilsson. 1989. Prevalence and coincidence of degenerative
changes of the hands and feet in middle age and their relationship to occupational work load,
intelligence, and social background. Clin Orthop 239:306-310.

Brinckmann, P and MH Pope. 1990. Effects of repeat-ed loads and vibration. In The Lumbar
Spine, edited by J Weinstein and SW Weisel. Philadelphia: WB Saunders.

Calin, A, J Elswood, S Rigg, and SM Skevington. 1988. Ankylosing spondylitis - an analytical
review of 1500 patients: The changing pattern of disease. J Rheumatol 15:1234-1238.

Chaffin, D and GBJ Andersson. 1991. Occupational Biomechanics. New York: Wiley.

Daniel, RK and WC Breidenbach. 1982. Tendon: structure, organization and healing. Chap. 14 in
The Musculoskeletal System: Embryology, Biochemistry and Physiology, edited by RL Cruess. New
York: Churchill Livingstone.

Dougados, M, S van der Linden, R Juhlin, B Huitfeldt, B Amor, A Calin, A Cats, B Dijkmans, I
Olivieri, G Pasero, E Veys, and H Zeidler. 1991. The European Spondylarthropathy Study Group
preliminary criteria for the classification of spondylarthropathy. Arthritis Rheum 34:1218-1227.

Edwards, RHT. 1988. Hypotheses of peripheral and central mechanisms underlying occupational
muscle pain and injury. Eur J Appl Physiol 57(3):275-281.

Felson, DT. 1990. The epidemiology of knee osteoarthritis: Results from the Framingham
Osteoarthritis Study. Sem Arthrit Rheumat 20:42-50.

Felson, DT, JJ Anderson, A Naimark, AM Walker, and RF Meenan. 1988. Obesity and knee
osteoarthritis: The Framingham study. Ann Intern Med 109:18-24.

Fung, YB. 1972. Stress-strain history relations of soft tissues in simple elongation. Chap. 7 in
Biomechanics: Its Foundations and Objectives, edited by YC Fung, N Perrone, and M
Anliker. Englewood Cliffs, NJ: Prentice Hall.

Gelberman, R, V Goldberg, K An, and A Banes. 1987. Tendon. Chap. 1 in Injury and Repair
of the Musculoskeletal Soft Tissue, edited by SL Woo and JA Buckwalter. Park Ridge, Ill:
American Academy of Orthopaedic Surgeons.

Gemne, G and H Saraste. 1987. Bone and joint pathology in workers using hand-held
vibrating tools. Scand J Work Environ Health 13:290-300.

Goldberg, DL. 1987. Fibromyalgia syndrome. An emerging but controversial condition. JAMA
257:2782-2787.

Goldstein, SA, TJ Armstrong, DB Chaffin, and LS Matthews. 1987. Analysis of cumulative strain in
tendons and tendon sheaths. J Biomech 20(1):1-6.

Gran, JT and G Husby. 1993. The epidemiology of ankylosing spondylitis. Sem Arthrit
Rheumat 22:319-334.

Guidelines and audit measures for the specialist supervision of patients with rheumatoid
arthritis. Report of a Joint Working Group of the British Society for Rheumatology and the
Research Unit of the Royal College of Physicians. 1992. J Royal Coll Phys 26:76-82.

Hagberg, M. 1982. Local shoulder muscular strain symptoms and disorders. J Hum Ergol
11:99-108.
Hagberg, M and DH Wegman. 1987. Prevalence rates and odds ratios of shoulder neck
diseases in different occupational groups. Brit J Ind Med 44:602-610.

Hagberg, M, H Hendrick, B Silverstein, MJ Smith, R Well and P Carayon. 1995. Work Related
Musculoskeletal Disorders (WMSDs): A Reference Book for Prevention, edited by I Kuorinka, and L
Forcier. London: Taylor & Francis.
Hägg, GM, J Suurküla, and Å Kilbom. 1990. Predictors for Work-Related Shoulder-Neck
Disorders (Swedish with Summary in English). Solna: National Institute of Occupational
Health.

Halpern, M. 1992. Prevention of low back pain: Basic ergonomics in the workplace and the
clinic. Bailliere’s Clin Rheum 6:705-730.

Hamerman, D and S Taylor. 1993. Humoral factors in the pathogenesis of osteoarthritis. In
Humoral Factors in the Regulation of Tissue Growth, edited by PP Foá. New York: Springer.

Hannan, MT, DT Felson, JJ Anderson, A Naimark, and WB Kannel. 1990. Estrogen use and
radiographic osteoarthritis of the knee in women. Arthritis Rheum 33:525-532.

Hansen, SM. 1993. Arbejdsmiljø Og Samfundsøkonomi - En Metode Til Konsekvensbeskrivning. Nord:
Nordisk Ministerråd.

Hansen, SM and PL Jensen. 1993. Arbejdsmiljø Og Samfundsøkonomi - Regneark Og Dataunderlag.
Nord: Nordisk Ministerråd. (Nordiske Seminar - og Arbejdsrapporter 1993:556.)

Hansson, JE. 1987. Förararbetsplatser [Work stations for driving, in Swedish]. In Människan I
Arbete, edited by N Lundgren, G Luthman, and K Elgstrand. Stockholm:Almqvist & Wiksell.

Heliövaara, M, M Mäkelä, and K Sievers. 1993. Musculoskeletal Diseases in Finland (in Finnish).
Helsinki: Kansaneläkelaitoksen julkaisuja AL.

Järvholm U, G Palmerud, J Styf, P Herberts, R Kadefors. 1988. Intramuscular pressure in the
supraspinatus muscle. J Orthop Res 6:230-238.

Jupiter, JB and HE Kleinert. 1988. Vascular injuries of the upper extremity. In The Hand,
edited by R Tubiana. Philadelphia: WB Saunders.

Kärkkäinen, A. 1985. Osteoarthritis of the Hand in the Finnish Population Aged 30 Years and
Over (in Finnish with an English summary). Finland: Publications of the Social Insurance
Institution.

Kivi, P. 1982. The etiology and conservative treatment of humeral epicondylitis. Scand J
Rehabil Med 15:37-41.

Kivimäki, J. 1992. Occupationally related ultrasonic findings in carpet and floor layers knees.
Scand J Work Environ Health 18:400-402.

Kivimäki, J, H Riihimäki and K Hänninen. 1992. Knee disorders in carpet and floor layers
and painters. Scand J Work Environ Health 18:310-316.

Kohatsu, ND and D Schurman. 1990. Risk factors for the development of osteoarthrosis of the
knee. Clin Orthop 261:242-246.
Kuorinka, I, B Jonsson, Å Kilbom, H Vinterberg, F Biering-Sørensen, G Andersson, and K
Jørgensen. 1987. Standardised Nordic questionnaires for the analysis of musculoskeletal
symptoms. Appl Ergon 18:233-237.

Kurppa, K, E Viikari-Juntura, E Kuosma, M Huuskonen, and P Kivi. 1991. Incidence of
tenosynovitis or peritendinitis and epicondylitis in a meat-processing factory. Scand J Work
Environ Health 17:32-37.

Leadbetter, WB. 1989. Clinical staging concepts in sports trauma. Chap. 39 in Sports-Induced
Inflammation: Clinical and Basic Science Concepts, edited by WB Leadbetter, JA
Buckwalter, and SL Gordon. Park Ridge, Ill: American Academy of Orthopaedic Surgeons.

Lindberg, H and F Montgomery. 1987. Heavy labor and the occurrence of gonarthrosis. Clin
Orthop 214:235-236.

Liss, GM and S Stock. 1996. Can Dupuytren’s contracture be work-related?: Review of the
evidence. Am J Ind Med 29:521-532.

Louis, DS. 1992. The carpal tunnel syndrome in the work place. Chap. 12 in Occupational
Disorders of the Upper Extremity, edited by LH Millender, DS Louis, and BP Simmons. New
York: Churchill Livingstone.

Lundborg, G. 1988. Nerve Injury and Repair. Edinburgh: Churchill Livingstone.


Manz, A, and W Rausch. 1965. Zur Pathogenese und Begutachtung der Epicondylitis humeri.
Münch Med Wochenschr 29:1406-1413.

Marsden, CD and MP Sheehy. 1990. Writer’s cramp. Trends Neurosci 13:148-153.

Mense, S. 1993. Peripheral mechanisms of muscle nociception and local muscle pain. J
Musculoskel Pain 1(1):133-170.

Moore, JS. 1992. Function, structure, and responses of the muscle-tendon unit. Occup Med:
State Art Rev 7(4):713-740.

Mubarak, SJ. 1981. Exertional compartment syndromes. In Compartment Syndromes and Volkmann’s
Contracture, edited by SJ Mubarak and AR Hargens. Philadelphia: WB Saunders.

Nachemson, A. 1992. Lumbar mechanics as revealed by lumbar intradiscal pressure measurements. In
The Lumbar Spine and Back Pain, edited by MIV Jayson. Edinburgh: Churchill Livingstone.

Obolenskaja, AJ, and Goljanitzki, JA. 1927. Die seröse Tendovaginitis in der Klinik und im
Experiment. Dtsch Z Chir 201:388-399.

Partridge, REH and JJR Duthie. 1968. Rheumatism in dockers and civil servants: A
comparison of heavy manual and sedentary workers. Ann Rheum Dis 27:559-568.

Rafusson V, OA Steingrímsdóttir, MH Olafsson and T Sveinsdóttir. 1989. Muskuloskeletala besvär
bland islänningar. Nord Med 104:1070.
Roberts, S. 1990. Sampling of the intervertebral disc. In Methods in Cartilage Research,
edited by A Maroudas and K Kuettner. London: Academic Press.

Rydevik, BL and S Holm. 1992. Pathophysiology of the intervertebral disc and adjacent
structures. In The Spine, edited by RH Rothman and FA Simeone. Philadelphia: WB
Saunders.

Schüldt, K. 1988. On neck muscle activity and load reduction in sitting postures. Ph.D. thesis,
Karolinska Institute. Stockholm.

Schüldt, K, J Ekholm, J Toomingas, K Harms-Ringdahl, M Köster, and Stockholm MUSIC Study Group
1. 1993. Association between endurance/exertion in neck extensors and reported neck disorders
(in Swedish). In Stockholm Investigation 1, edited by M Hagberg and C Hogstedt. Stockholm: MUSIC
Books.

Silverstein, BA, LJ Fine, and J Armstrong. 1986. Hand wrist cumulative trauma disorders in
industry. Brit J Ind Med 43:779-784.

Sjøgaard, G. 1990. Exercise-induced muscle fatigue: The significance of potassium. Acta Physiol
Scand 140 Suppl. 593:1-64.

Sjøgaard, G, OM Sejersted, J Winkel, J Smolander, K Jørgensen, and R Westgaard. 1995. Exposure
assessment and mechanisms of pathogenesis in work-related musculoskeletal disorders: Significant
aspects in the documentation of risk factors. In Work and Health. Scientific Basis of Progress
in the Working Environment, edited by O Svane and C Johansen. Luxembourg: European Commission,
Directorate-General V.

Spitzer, WO, FE LeBlanc, M Dupuis, et al. 1987. Scientific approach to the assessment and
management of activity-related spinal disorders. Spine 12(7S).

Tidswell, M. 1992. Cash’s Textbook of Orthopaedics and Rheumatology for Physiotherapists.
Europa: Mosby.

Thompson, AR, LW Plewes, and EG Shaw. 1951. Peritendinitis crepitans and simple
tenosynovitis: A clinical study of 544 cases in industry. Brit J Ind Med 8:150-160.

Urban, JPG and S Roberts. 1994. Chemistry of the intervertebral disc in relation to functional
requirements. In Grieve’s Modern Manual Therapy, edited by JD Boyling and N Palastanga.
Edinburgh: Churchill Livingstone.

Viikari-Juntura, E. 1984. Tenosynovitis, peritendinitis and the tennis elbow syndrome. Scand
J Work Environ Health 10:443-449.

Vingård, E, L Alfredsson, I Goldie, and C Hogstedt. 1991. Occupation and osteoarthrosis of the
hip and knee. Int J Epidemiol 20:1025-1031.

Vingård, E, L Alfredsson, I Goldie, and C Hogstedt. 1993. Sports and osteoarthrosis of the
hip. Am J Sports Med 21:195-200.
Waters, TR, V Putz-Anderson, A Garg, and LJ Fine. 1993. Revised NIOSH equation for
design and evaluation of manual lifting tasks. Ergonomics 36:739-776.

Wickström, G, K Hänninen, T Mattsson, T Niskanen, H Riihimäki, P Waris, and A Zitting. 1983.
Knee degeneration in concrete reinforcement workers. Brit J Ind Med 40:216-219.

Wolfe, F. 1986. The clinical syndrome of fibrositis. Am J Med 81 Suppl. 3A:7-14.

Wolfe, F, HA Smythe, MB Yunus, RM Bennett, C Bombardier, DL Goldenberg, P Tugwell, SM Campbell,
M Abeles, P Clark, AG Fam, SJ Farber, JJ Fiechtner, CM Franklin, RA Gatter, D Hamaty, J Lessard,
AS Lichtbroun, AT Masi, GA McCain, WJ Reynolds, TJ Romano, IJ Russell, and RP Sheon. 1990. The
American College of Rheumatology criteria for the classification of fibromyalgia. Report of the
multicenter criteria committee. Arthritis Rheum 33:160-172.

Yunus, MB. 1993. Research in fibromyalgia and myofascial pain syndromes: Current status,
problems and future directions. J Musculoskel Pain 1(1):23-41.

Copyright 2015 International Labour Organization


7. Nervous System
Chapter Editor: Donna Mergler

Anatomy and Physiology

Written by ILO Content Manager

Nerve cells are the functional units of the nervous system. The nervous system is believed to
contain ten thousand million such cells, called neurons and glia, the glia being present in
greater numbers than the neurons.

The Neuron

Figure 1 is an idealized diagram of a neuron with its three most important structural features:
the cell body, the dendrites and the axon terminal.

Figure 1. The anatomy of the neuron

The dendrites are finely branched processes arising near the cell body of a neuron. The
dendrites receive excitatory or inhibitory effects via chemical messengers called
neurotransmitters. The cytoplasm is the material of the cell body in which the organelles—
including the cell nucleus—and other inclusions are found (figure 2). The nucleus contains the
cell’s chromatin, or genetic material.

Figure 2. The organelles

The nucleus of the nerve cell is atypical compared with that of other living cells in that,
although it contains the genetic material deoxyribonucleic acid (DNA), the DNA is not involved
in the process of cell division; that is, after reaching maturity, nerve cells do not divide.
(The neurons of the nose lining (olfactory epithelium) are an exception to this rule.) The
nucleus is rich in ribonucleic acid (RNA), which is necessary for the synthesis of protein.
Three types of proteins have been identified: cytosolic proteins, which form the fibrillar
elements of the nerve cell; intramitochondrial proteins, which generate energy for cell
activity; and proteins that form membranes and secretory products. Neurons are now conceived of as
modified secretory cells. Secretory granules are formed, stored in synaptic vesicles and later
released as neurotransmitter substances, the chemical messengers between nerve cells.

The fibrillar elements, which form the skeleton of the neuron, participate in the trophic
function of the neuron, acting as vehicles of transmission. Axonal transport can be
anterograde (cell body to axon terminal) and retrograde (axon terminal to cell body). From the
thickest to the thinnest, three types of fibrillar elements are recognized: microtubules,
neurofilaments and microfilaments.
Glial Cells

In contrast to neurons, glial cells do not, by themselves, carry electrical messages. There are
two types of glial cells: the macroglia and the microglia. The macroglia is a name given to at
least three types of cells: astrocytes, oligodendrocytes and ependymal cells. Microglial cells
are primarily scavenger cells for removing debris after neural damage or infection has
occurred.

The glial cells also have distinctive microscopic and ultramicroscopic features. Glial cells
physically support neurons, but a number of physiological properties are also now beginning
to be understood. Among the most important neuron-glial interactions are the glial cell’s role
in providing the neurons with nutrients, removing fragments of neurons after their death and,
most importantly, contributing to the process of chemical communication. Glial cells, in sharp
contrast to neurons, can divide and thus can reproduce themselves. Tumours of the nervous
system, for example, result from an abnormal reproduction of glial cells.

Myelin

What appears in the macroscopic observation of neural tissue as “grey matter” and “white
matter” has a microscopic and biochemical basis. Microscopically, the grey matter contains
the neuronal cell bodies, whereas the white matter is where neural fibres or axons are found.
The “white” appearance is due to a sheath—composed of a fatty substance called myelin—
covering these fibres. Myelin of the peripheral nerves originates from the membrane of the
Schwann cell which wraps around the axon. The myelin of fibres in the central nervous
system is provided by the membranes of the oligodendrocytes (a variety of glial cells).
Oligodendrocytes usually myelinate several axons, whereas the Schwann cell is associated
with only one axon. Discontinuities of the myelin sheath—designated as nodes of Ranvier—exist
between consecutive Schwann cells or oligodendrocytes. It is estimated that in the
longest central motor pathway, up to 2,000 Schwann cells form the myelin cover. Myelin,
whose role is to facilitate the propagation of the action potential, may be a specific target of
neurotoxic agents. A morphological classification of neurotoxic substances describes
characteristic neuropathological changes of the myelin as myelinopathies.

Trophic Function of the Neuron

The normal functions of the neuron include protein synthesis, axonal transport, generation and
conduction of the action potential, synaptic transmission, and formation and maintenance of
the myelin. Some of the basic trophic functions of the neuron were described as early as the
19th century by sectioning the axons (axotomy). Among the processes uncovered, one of the
most important was the Wallerian degeneration—after Waller, the English physiologist who
described it.

Wallerian degeneration provides a good opportunity to describe well-known changes in organelles
as a result of either traumatic or toxic damage. Parenthetically, the terms used to
describe Wallerian degeneration produced by traumatic axotomy are the same ones used to
describe changes resulting from neurotoxic agents. At the cellular level, neuropathological
changes resulting from toxic damage to neural tissue are far more complex than those
occurring as a result of traumatic damage. It is only recently that changes in neurons affected
by neurotoxic agents have been observed.
Twenty-four hours after cutting of the axon, the most distinctive feature is swelling on both
sides of the site of mechanical trauma. Swelling results from accumulation of fluids and
membranous elements on both sides of the site of injury. These changes are not unlike those
observed in a rain-flooded two-way road with vehicles stopped on both sides of the flooded
area. In this analogy, stalled vehicles are the swelling. After a few days, regeneration of the
ensheathed axons—i.e., those covered with myelin—occurs. Sprouts grow from the proximal
stump moving at the rate of 1 to 3 mm per day. Under favourable conditions, sprouts reach the
distal (farther from the cell body) stump. When reinnervation—joining of the stumps—is
completed, the basic features of normal transmission have been re-established. The cell body
of the injured neuron undergoes profound structural changes in protein synthesis and axonal
transport.
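As a rough, purely illustrative sense of the time scale implied by this growth rate (the 300 mm
regeneration distance below is a hypothetical example, not a figure from the text), the elapsed
time is simply the distance divided by the rate of sprout growth:

\[
t \;=\; \frac{d}{v} \;=\; \frac{300\ \mathrm{mm}}{1\text{–}3\ \mathrm{mm/day}} \;\approx\;
100\text{–}300\ \mathrm{days}
\]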

If molecular neurobiology is said to be a young discipline, the neurobiology of the neurotoxic
processes is even younger, and still in its infancy. True, the molecular basis of action of many
neurotoxins and pharmacological agents is now well understood. But with some notable
exceptions (e.g., lead, methyl mercury, acrylamide) the molecular basis of toxicity of the vast
majority of environmental and neurotoxic agents is unknown. That is why, instead of
describing the molecular neurobiology of a select group of occupational and environmental
neurotoxic agents, we still are forced to refer to the comparatively abundant strategies and
examples from classical neuropharmacology or from work in modern drug manufacture.

Neurotransmitters

A neurotransmitter is a chemical substance which, when released from axon terminals by the
action potential, produces a momentary change in the electrical potential of the nerve fibre it
stimulates. Neurotransmitters stimulate or inhibit adjacent neurons or effector organs
such as muscle and glands. Known neurotransmitters and their neural pathways are now being
intensively studied, and new ones are constantly being discovered. Some neurological and
psychiatric disorders are now understood to be caused by chemical changes in
neurotransmission—for example, myasthenia gravis, Parkinson’s disease, certain forms of
affective disorders such as depression, severe distortion of thought processes such as in
schizophrenia, and Alzheimer’s disease. Although excellent isolated reports on the effect of
several environmental and occupational neurotoxic agents on neurotransmission have been
published, the body of knowledge is meagre compared with that existing for neuropsychiatric
diseases. Pharmacological studies of manufactured drugs require an understanding of how
drugs affect neurotransmission. Drug manufacture and neurotransmission research are thus
intimately related. The changing views of drug action have been summarized by Feldman and
Quenzer (1984).

The effects of neurotoxic agents on neurotransmission are characterized by where in the nervous
system they act, by their chemical receptors, by the time course of their effects, by whether
they facilitate, block or inhibit neurotransmission, and by whether they alter the termination
or removal of the neurotransmitter’s pharmacological action.

One difficulty experienced by neuroscientists is the need to link known processes that occur at
the molecular level in the neuron with events at the cellular level, which in turn may explain
how normal and pathological neuropsychological changes occur, as clearly stated in the
following which to a large extent still applies: “(A)t the molecular level, an explanation of the
action of a drug is often possible; at the cellular level, an explanation is sometimes possible,
but at a behavioural level, our ignorance is abysmal” (Cooper, Bloom and Roth 1986).
The Main Components of the Nervous System

Knowledge of the main components of the nervous system is essential for the understanding
of the gross neuropsychological manifestations of neurotoxic illness, the rationale for the use
of specific techniques for the assessment of nervous system functions, and the understanding
of pharmacological mechanisms of neurotoxic action. From a functional standpoint, the
nervous system can be divided into two major compartments: The somatic nervous system
conveys sensory information (touch, temperature, pain and limb position—even when the
eyes are closed) from the body segments and carries the neural pathways that innervate and
control the movement of skeletal muscles, such as those of the arms, fingers, legs and toes.
The visceral nervous system controls internal organs and functions that are not normally under
voluntary control, such as the calibre of blood vessels, the dilation and constriction of the
pupils of the eyes and so on.

From an anatomical viewpoint, four main components need to be identified: the central
nervous system, the peripheral nervous system including cranial nerves, the autonomic system
and the neuroendocrine system.

The Central Nervous System

The central nervous system contains the brain and the spinal cord (figure 3). The brain lies in
the skull cavity and is protected by the meninges. It is divided into three major components; in
ascending order—that is, from the caudal (tail) to the cephalic (head) portion of the nervous
system—they are the hindbrain (also called the rhombencephalon), the midbrain (the
mesencephalon) and the forebrain (the prosencephalon).

Figure 3. The central and peripheral divisions of the nervous system


The hindbrain

The three major components of the hindbrain are the medulla oblongata, the pons and the
cerebellum (figure 4).

Figure 4. The brain shown from a lateral side.

The medulla oblongata contains neural structures that control heart rate and breathing,
sometimes the targets of neurotoxic agents and drugs causing death. Located between the
medulla oblongata and the midbrain, the pons (bridge) derives its name from the large number of
fibres traversing its anterior aspect en route to the cerebellar hemispheres. The
cerebellum—in Latin, little brain—is characteristically corrugated in appearance. The
cerebellum receives sensory information and sends motor messages essential for motor
coordination. It is responsible (among other functions) for the scheduling and execution of fine movements.
This scheduling—or programming—requires the adequate timing of sensory inputs and motor
responses. The cerebellum is often the target of numerous neurotoxic agents—for example,
alcoholic beverages, many industrial solvents, lead—which affect motor responses.

The midbrain

The midbrain is a narrow part of the brain connecting the hindbrain to the forebrain.
Structures of the midbrain are the cerebral aqueduct, the tectum, the cerebral peduncles, the
substantia nigra and the red nucleus. The cerebral aqueduct is a channel that connects the third
and fourth ventricles (fluid-filled cavities of the brain); the cerebrospinal fluid (CSF)
flows through this opening.

The forebrain

This part of the brain is subdivided into diencephalon (“between brain”) and the cerebrum.
The major regions of the diencephalon are the thalamus and the hypothalamus. “Thalamus”
means “inner room”. The thalami are made up of neuronal groupings, called nuclei, which
have five main functions:

 receiving sensory information and sending it to primary areas of the cerebral cortex
 sending information about ongoing movement to motor areas of the cerebral cortex
 sending information on the activity of the limbic system to areas of the cerebral cortex
related to this system
 sending information on intrathalamic activity to association areas of the cerebral cortex
 sending information of brain-stem reticular formation activity to widespread areas of the
cerebral cortex.

The name hypothalamus means “under the thalamus”. It forms the base of the third ventricle,
an important reference point for the imaging of the brain. The hypothalamus is a complex,
minute neural structure responsible for many aspects of behaviour such as basic biological
drives, motivation and emotion. It is the link between the nervous and the neuroendocrine
system, to be reviewed below. The pituitary gland (also called the hypophysis) is linked by
neurons to the hypothalamic nuclei. It is well established that the hypothalamic nerve cells
perform many neurosecretory functions. The hypothalamus is linked with many other major
regions of the brain including the rhinencephalon—the primitive cortex originally associated
with olfaction—and the limbic system, including the hippocampus.

The cerebrum is the largest component of the brain, consisting of two cerebral
hemispheres connected by a mass of white matter called the corpus callosum. The cerebral
cortex is the surface layer of each cerebral hemisphere. Deep sulci in the cerebral cortex—the
central and the lateral sulci (figure 4)—are taken as reference points to separate anatomical
regions of the brain. The frontal lobe lies in front of the central sulcus. The parietal lobe
begins at the back of the central sulcus, and lies next to the occipital lobe, which occupies the
posterior portion of the brain. The temporal lobe begins well inside the folding of the lateral
sulcus and extends into the ventral aspects of the brain hemispheres. Two important
components of the cerebrum are the basal ganglia and the limbic system.

The basal ganglia are nuclei—that is, clusters of nerve cells—located toward the centre of the
brain. The basal ganglia comprise major centres of the extra-pyramidal motor system. (The
pyramidal system, to which the term is contrasted, is involved in the voluntary control of
movement.) The extrapyramidal system is selectively affected by many neurotoxic agents
(e.g., manganese). In the past two decades, important discoveries have been made concerning
the role these nuclei play in several neural degenerative diseases (e.g., Parkinson’s disease,
Huntington’s chorea).

The limbic system comprises convoluted neural structures branching out in many
directions and establishing connections with many “old” regions of the brain, particularly with
the hypothalamus. It is involved in the control of emotional expression. The hippocampus is
believed to be a structure where many memory processes occur.

The spinal cord

The spinal cord is a whitish structure situated within the vertebral canal. It is divided into four
regions: cervical, thoracic, lumbar and sacral-coccygeal. The two most easily recognizable
features of the spinal cord are the grey matter containing the cell bodies of the neurons, and
the white matter containing the myelinated axons of the neurons. The ventral region of the
spinal cord’s grey matter contains nerve cells that regulate motor function; the middle region
of the thoracic spinal cord is associated with autonomic functions. The dorsal portion receives
sensory information from the spinal nerves.
The Peripheral Nervous System

The peripheral nervous system includes those neurons that are outside the central nervous
system. The term peripheral describes the anatomical distribution of this system, but
functionally it is artificial. The cell bodies of peripheral motor fibres, for example, are located
within the central nervous system. In experimental, clinical and epidemiological
neurotoxicology, the term peripheral nervous system (PNS) describes a system that is
selectively vulnerable to the effects of toxic agents and that is able to regenerate.

The spinal nerves

The ventral and dorsal roots are where the peripheral nerves enter and leave the spinal cord
along its length. Adjoining vertebrae contain openings to allow root fibres forming the spinal
nerves to leave the spinal canal. There are 31 pairs of spinal nerves, which are named
according to the region of the vertebral column with which they are associated: 8 cervical, 12
thoracic, 5 lumbar, 5 sacral and 1 coccygeal. A metamera is a region of the body innervated
by a spinal nerve (figure 5).

Figure 5. The segmental distribution of the spinal nerves (the metamera).

By carefully examining the motor and sensory functions of the metamerae, neurologists can infer the
location of the lesions where damage has occurred.
Table 1. Names and main functions of each pair of cranial nerves

I. Olfactory. Conducts impulses from nose to brain. Functions: sense of smell.
II. Optic. Conducts impulses from eye to brain. Functions: vision.
III. Oculomotor. Conducts impulses from brain to eye muscles. Functions: eye movements.
IV. Trochlear. Conducts impulses from brain to external eye muscles. Functions: eye movements.
V. Trigeminal (or trifacial). Conducts impulses from skin and mucous membrane of head and from teeth to brain; also from brain to chewing muscles. Functions: sensations of face, scalp and teeth; chewing movements.
VI. Abducens. Conducts impulses from brain to external eye muscles. Functions: turning eyes outward.
VII. Facial. Conducts impulses from taste buds of tongue to brain; from brain to face muscles. Functions: sense of taste; contraction of muscles of facial expression.
VIII. Acoustic. Conducts impulses from ear to brain. Functions: hearing; sense of balance.
IX. Glossopharyngeal. Conducts impulses from throat and taste buds of tongue to brain; also from brain to throat muscles and salivary glands. Functions: sensations of throat, taste, swallowing movements, secretion of saliva.
X. Vagus. Conducts impulses from throat, larynx and organs in thoracic and abdominal cavities to brain; also from brain to muscles of throat and to organs in thoracic and abdominal cavities. Functions: sensations of throat, larynx, and thoracic and abdominal organs; swallowing, voice production, slowing of heartbeat, acceleration of peristalsis.
XI. Spinal accessory. Conducts impulses from brain to certain shoulder and neck muscles. Functions: shoulder movements; turning movements of head.
XII. Hypoglossal. Conducts impulses from brain to muscles of tongue. Functions: tongue movements.

Note: The first letters of the words in the sentence “On Old Olympus’ Tiny Tops A Finn and German Viewed Some Hops” are the first letters of the names of the cranial nerves. Many generations of students have used this or a similar sentence to help them remember the names of the cranial nerves.

The cranial nerves

Brain stem is a comprehensive term that designates the region of the nervous system that
includes the medulla, the pons and the midbrain. The brain stem is a continuation of the spinal
cord upward and forward (ventrally). It is in this region where most of the cranial nerves
make their exits and entrances. There are 12 pairs of cranial nerves; Table 1 describes the
name and main function of each pair, and figure 6 shows the entrances and exits of some
cranial nerves in the brain.

Figure 6. The brain shown from below with the entrance and exits of many cranial nerves.

The Autonomic Nervous System

The autonomic nervous system is that part of the nervous system controlling the activity of
the visceral components of the human body. It is called “autonomic” because it performs its
functions automatically, meaning that its functioning cannot be easily controlled at will. From
an anatomical point of view, the autonomic system has two main components: the
sympathetic and the parasympathetic nervous system. The sympathetic nerves controlling
visceral activity arise from the thoracic and lumbar portions of the spinal cord;
parasympathetic nerves arise from the brain stem and the sacral portion of the spinal cord.

From a physiological point of view, no single generalization can be made that applies to the
manner in which the sympathetic and the parasympathetic nervous systems control different
body organs. In most cases, visceral organs are innervated by both systems, and each type has
an opposite effect in a system of checks and balances. The heart, for example, is innervated by
sympathetic nerves whose excitation produces an acceleration of the heartbeat, and by
parasympathetic nerves whose excitation produces a slowing of the heartbeat. Either system
can stimulate or inhibit the organs it innervates. In other cases, organs are predominantly or
exclusively controlled by one system or the other. A vital function of the autonomic nervous
system is the maintenance of homeostasis (a stable state of equilibrium) and the adaptation
of the animal body to its external environment. Homeostasis is the state of equilibrium of
body functions achieved by an active process; the control of body temperature, water and
electrolytes are all examples of homeostatic processes.
From the pharmacological point of view, there is no single neurotransmitter associated with
either sympathetic or parasympathetic functions, as was once believed. The old view that
acetylcholine was the predominant transmitter of the autonomic system had to be abandoned
when new classes of neurotransmitters and neuromodulators were found (e.g., dopamine,
serotonin, purines and various neuropeptides).

Neuroscientists have recently revived the behavioural point of view of the autonomic nervous
system. The autonomic nervous system is involved in the fight-or-flight instinctive reaction
still present in humans, which is, for the most part, the basis for the physiological reactions
caused by stress. Interactions between the nervous system and immunological functions are
possible through the autonomic nervous system. Emotions that originate from the autonomic
nervous system can be expressed via the skeletal muscles.

The autonomic control of smooth muscles

The muscles of the viscera—except for the heart—are the smooth muscles. Heart muscle has
characteristics of both skeletal and smooth muscle. Like skeletal muscles, smooth muscles
also contain the two proteins actin and, in smaller proportions, myosin. Unlike skeletal
muscles, they do not present the regular organization of sarcomeres, the contractile units of the
muscle fibre. The heart is unique in that it can generate myogenic activity—even after its
neural innervations have been severed, it can contract and relax for several hours by itself.

The neuromuscular coupling in smooth muscles differs from that of skeletal muscles. In
skeletal muscles, the neuromuscular junction is the link between the nerve and the muscle
fibres. In smooth muscle, there is no neuromuscular junction; the nerve endings enter the
muscle, spreading in all directions. Electrical events inside the smooth muscle therefore are
much slower than those in skeletal muscles. Finally, smooth muscle has the unique
characteristic of exhibiting spontaneous contractions, such as that exhibited by the gut. To a
large extent, the autonomic nervous system regulates the smooth muscles’ spontaneous
activity.

The central components of the autonomic nervous system

The main role of the autonomic nervous system is to regulate the activity of smooth muscles,
heart, glands in the digestive tract, sweat glands, and adrenal and other endocrine glands. The
autonomic nervous system has a central component—the hypothalamus, located at the base of
the brain—where many autonomic functions are integrated. Most importantly, the central
components of the autonomic system are directly involved in the regulation of biological
drives (temperature regulation, hunger, thirst, sex, urination, defecation and so on),
motivation, emotion and to a great extent in “psychological” functions such as moods, affect
and feelings.

Neuroendocrine System

Glands are the organs of the endocrine system. They are called endocrine glands because their
chemical messages are delivered inside the body, directly into the blood stream (in contrast
with exocrine glands, such as sweat glands, whose secretions appear on the outer surface of
the body). The endocrine system provides slow but long-lasting control over organs and
tissues through chemical messengers called hormones. Hormones are the main regulators of
body metabolism. But, because of intimate links among the central, peripheral, and autonomic
nervous systems, the neuroendocrine system—a term that captures such complex links—is
now conceived of as a powerful modifier of the structure and function of the human body and
behaviour.

Hormones have been defined as chemical messengers which are released from cells into the
bloodstream to exert their action on target cells some distance away. Until recently, hormones
were distinguished from neurotransmitters, discussed above. The latter are chemical
messengers released from neurons onto a synapse between the nerve terminals and another
neuron or an effector (i.e., muscle or gland). However, with the discovery that classical
neurotransmitters such as dopamine can also act as hormones, the distinction between
neurotransmitters and hormones is now less and less clear. Thus, based on purely anatomical
considerations, hormones derived from nerve cells may be called neurohormones. From a
functional point of view, the nervous system can be thought of as a truly neurosecretory
system.

The hypothalamus controls endocrine functions through a link with the pituitary gland (also
called the hypophysis, a tiny gland located at the base of the brain). Until the middle 1950s
the endocrine glands were viewed as a separate system governed by the pituitary gland, often
called the “master gland”. At that time, a neurovascular hypothesis was advanced that
established the functional role of the hypothalamic/hypophysial factors in the control of
endocrine function. In this view, the endocrine hypothalamus provides the final common
neuroendocrine pathway in the control of the endocrine system. It has now been firmly
established that the endocrine system is itself regulated by the central nervous system as well
as by endocrine inputs. Thus, neuroendocrinology is now an appropriate term to describe the
discipline that studies the reciprocal integrated roles of the nervous and the endocrine systems
in the control of physiological processes.

With increasing understanding of neuroendocrinology, original divisions are breaking down.


The hypothalamus, which is located above and connected to the pituitary gland, is the link
between the nervous and the endocrine systems, and many of its nerve cells perform secretory
functions. It is also linked with other major regions of the brain, including the
rhinencephalon—the primitive cortex originally associated with olfaction or sense of smell—
and the limbic system, associated with emotions. It is in the hypothalamus that hormones
released by the posterior pituitary gland are produced. The hypothalamus also produces
substances that are called releasing and inhibiting hormones. These act on the
adenohypophysis, causing it to enhance or inhibit the production of anterior pituitary gland
hormones, which act on glands located elsewhere (thyroid, adrenal cortex, ovaries, testicles
and others).

Chemical Neurotoxic Agents

Written by ILO Content Manager

Definition of Neurotoxicity

Neurotoxicity refers to the capability of inducing adverse effects in the central nervous
system, peripheral nerves or sensory organs. A chemical is considered to be neurotoxic if it is
capable of inducing a consistent pattern of neural dysfunction or change in the chemistry or
structure of the nervous system.

Neurotoxicity is generally manifested as a continuum of symptoms and effects, which depend
on the nature of the chemical, the dose, the duration of exposure and the traits of the exposed
individual. The severity of the observed effects, as well as the evidence for neurotoxicity,
increases through levels 1 to 6, shown in Table 1. Short-term or low-dose exposure to a
neurotoxic chemical may result in subjective symptoms such as headache and dizziness, but
the effect usually is reversible. With increasing dose, neurological changes may show up, and
eventually irreversible morphological changes are generated. The degree of abnormality
needed to imply neurotoxicity of a chemical agent is a controversial issue. According to
the definition, a consistent pattern of neural dysfunction or change in the chemistry or
structure of the nervous system is considered established if there is well-documented evidence of
persistent effects at level 3, 4, 5 or 6 in Table 1. These levels reflect the weight of evidence
provided by different signs of neurotoxicity. Neurotoxic substances include naturally
occurring elements such as lead, mercury and manganese; biological compounds such as
tetrodotoxin (from the puffer fish, a Japanese delicacy) and domoic acid (from contaminated
mussels); and synthetic compounds including many pesticides, industrial solvents and
monomers.

Table 1. Grouping neurotoxic effects to reflect their relative strength for establishing
neurotoxicity

Level 6. Morphological changes: include cell death and axonopathy, as well as subcellular morphological changes.
Level 5. Neurological changes: abnormal findings in neurological examinations of single individuals.
Level 4. Physiological/behavioural changes: experimental findings on groups of animals or humans, such as changes in evoked potentials and EEG, or changes in psychological and behavioural tests.
Level 3. Biochemical changes: changes in relevant biochemical parameters (e.g., transmitter levels, GFA-protein (glial fibrillary acidic protein) content or enzyme activities).
Level 2 (humans only). Irreversible, subjective symptoms: subjective symptoms with no evidence of abnormality on neurological, psychological or other medical examination.
Level 1 (humans only). Reversible, subjective symptoms: subjective symptoms with no evidence of abnormality on neurological, psychological or other medical examination.

Source: Modified from Simonsen et al. 1994.

In the United States between 50,000 and 100,000 chemicals are in commerce, and 1,000 to
1,600 new chemicals are submitted for evaluation each year. More than 750 chemicals and
several classes or groups of chemical compounds are suspected to be neurotoxic
(O’Donoghue 1985), but the majority of chemicals have never been tested for neurotoxic
properties. Most of the known neurotoxic chemicals available today have been identified by
case-reports or through accidents.

Although neurotoxic chemicals often are produced to fulfil specific uses, exposure may arise
from several sources—use in private homes, in agriculture and in industries, or from polluted
drinking water and so on. Fixed a priori preconceptions about which neurotoxic compounds
are to be expected in which occupations should therefore be viewed with caution, and
the listings that follow should be regarded as possible examples covering only a few of the
most common neurotoxic chemicals (Arlien-Søborg 1992; O’Donoghue 1985; Spencer and
Schaumburg 1980; WHO 1978).

Symptoms of Neurotoxicity

The nervous system generally reacts rather stereotypically to exposure to neurotoxic
substances (figure 1). Some typical syndromes are indicated below.

Figure 1. Neurological and behavioural effects of exposure to neurotoxic chemicals.

Polyneuropathy

This is caused by impairment of motor and sensory nerve function leading to weakness of the
muscles, with paresis usually most pronounced peripherally in the upper and lower
extremities (hands and feet). Prior or simultaneous paraesthesia (tingling or numbness in the
fingers and toes) may occur. This may lead to difficulties in walking or in the fine
coordination of hands and fingers. Heavy metals, solvents and pesticides, among other
chemicals, may result in such disability, even though the toxic mechanisms of these compounds may
be totally different.
Encephalopathy

This is caused by a diffuse impairment of the brain, and may result in fatigue; impairment of
learning, memory and ability to concentrate; anxiety, depression, increased irritability and
emotional instability. Such symptoms may indicate early diffuse degenerative brain disorder
as well as occupational chronic toxic encephalopathy. Increased frequency of
headaches, dizziness, changes in sleep pattern and reduced sexual activity may often also be present
from the early stages of the disease. Such symptoms may develop following long-term, low-
level exposure to several different chemicals such as solvents, heavy metals or hydrogen
sulphide, and are also seen in several dementing disorders not related to work. In some cases
more specific neurological symptoms can be seen (e.g., Parkinsonism with tremor, rigidity of
the muscles and slowing of movements, or cerebellar symptoms such as tremor and reduced
coordination of hand movements and gait). Such clinical pictures can be seen following
exposure to some specific chemicals such as manganese, or MPTP (1-methyl-4-phenyl-
1,2,3,6-tetrahydropyridine) in the former condition, and toluene or mercury in the latter.

Gases

A wide variety of chemicals with totally different chemical structures are gases at normal
temperature and have been proven neurotoxic (Table 2). Some of them are extremely toxic even
in very small doses, and have even been used as war gases (phosgene and cyanide); others
require high doses over longer periods to give symptoms (e.g., carbon dioxide). Some are
used for general anaesthesia (e.g., nitrous oxide); others are widely used in industry and in
agents used for disinfection (e.g., formaldehyde). The former may induce irreversible changes
in the nervous system after repeated low-level exposure, the latter apparently produce only
acute symptoms. Exposure in small rooms with poor ventilation is particularly hazardous.
Some of the gases are odourless, which makes them particularly dangerous (e.g., carbon
monoxide). As shown in Table 2, some gases are important constituents in industrial
production, while others are the result of incomplete or complete combustion (e.g., CO and
CO2 respectively). This is seen in mining, steel works, power stations and so on, but may also
be seen in private homes with insufficient ventilation. The essentials of treatment are to stop further
exposure and to provide fresh air or oxygen and, in severe cases, artificial ventilation.

Table 2. Gases associated with neurotoxic effects

Carbon dioxide (CO2)
  Sources of exposure: Welding; fermentation; manufacture, storage and use of dry ice
  Industries at risk: Metal industry; mining; breweries
  Effects: M: Dilates vessels. A: Headache; dyspnoea; tremor; loss of consciousness. C: Hardly any

Carbon monoxide (CO)
  Sources of exposure: Car repair; welding; metal melting; drivers; firemen
  Industries at risk: Metal industry; mining; transportation; power stations
  Effects: M: Deprivation of oxygen. A: Headache; drowsiness; loss of consciousness

Hydrogen sulphide (H2S)
  Sources of exposure: Fumigating of greenhouses; manure; fishermen; fish unloading; sewage handling
  Industries at risk: Agriculture; fishing; sewer work
  Effects: M: Blocking of oxidative metabolism. A: Loss of consciousness. C: Encephalopathy

Cyanide (HCN)
  Sources of exposure: Electro-welding; galvanic surface treatment with nickel, copper and silver; fumigation of ships, houses, foods and soil in greenhouses
  Industries at risk: Metal industry; chemical industry; nursery; mining; gasworks
  Effects: M: Blocking of respiratory enzymes. A: Dyspnoea; falling blood pressure; convulsions; loss of consciousness; death. C: Encephalopathy; ataxia; neuropathy (e.g., after eating cassava); occupational impairment uncertain

Nitrous oxide (N2O)
  Sources of exposure: General anaesthesia during operations; light narcosis in dental care and delivery
  Industries at risk: Hospitals (anaesthesia); dentists; midwives
  Effects: M: Acute change in nerve cell membrane; degeneration of nerve cells after long-term exposure. A: Light-headedness; drowsiness; loss of consciousness. C: Numbness of fingers and toes; reduced coordination; encephalopathy

M: mechanism; A: acute effects; C: chronic effects.
Neuropathy: dysfunction of motor and sensory peripheral nerve fibres.
Encephalopathy: brain dysfunction due to generalized impairment of the brain.
Ataxia: impaired motor coordination.

Metals

As a rule the toxicity of metals increases with increasing atomic weight, lead and mercury
being particularly toxic. Metals are usually found in nature at low concentrations, but in
certain industries they are used in great amounts (see Table 3) and may give rise to
occupational risk for the workers. Moreover, considerable amounts of metals are found in
waste water and may give rise to environmental risk for the residents close to the plants but
also at greater distances. Often the metals (or, for example, organic mercury compounds) are
taken up into the food chain and will accumulate in fish, birds and animals, representing a risk
for consumers. The toxicity and the way in which the metals are handled by the organism may
depend on the chemical structure. Pure metals may be taken up by inhalation or skin contact
of vapour (mercury) and/or small particles (lead), or orally (lead). Inorganic mercury
compounds (e.g., HgCl2) are mainly taken up by mouth, while organic metal compounds (e.g.,
tetraethyl lead) mainly are taken up by inhalation or by skin contact. The body burden may to
a certain degree be reflected in the concentration of metal in the blood or urine. This is the
basis for biological monitoring. In treatment it must be recalled that lead, in particular, is released
very slowly from deposits in the body. The amount of lead in bones will normally be reduced
by only 50% over 10 years. This release may be speeded up by the use of chelating agents:
BAL (dimercapto-1-propanol), Ca-EDTA or penicillamine.
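As an illustrative aside (not from the source text), if one assumes simple first-order elimination, a 50% reduction in ten years corresponds to a biological half-life of roughly ten years, so the fraction of the bone lead burden remaining after t years is approximately

$$ \text{fraction remaining} \approx \left(\tfrac{1}{2}\right)^{t/10} $$

so that, for example, about 25% of the original burden would still be present after 20 years. Real bone lead kinetics are more complex, and the half-life varies considerably between individuals.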

Table 3. Metals and their inorganic compounds associated with neurotoxicity

Lead
  Sources of exposure: Melting; soldering; grinding; repair; glazing; plasticizers
  Industries at risk: Metal work; mining; accumulator plants; car repair; shipyards; glass works; ceramics; pottery; plastics
  Effects: M: Impairment of oxidative metabolism of nerve cells and glia. A: Abdominal pain; headache; encephalopathy; seizures. C: Encephalopathy; polyneuropathy, including drop hand

Mercury, elemental
  Sources of exposure: Electrolysis; electrical instruments (gyroscopes; manometers; thermometers; batteries; electric bulbs; tubes, etc.); amalgam fillings
  Industries at risk: Chloralkali plants; mining; electronics; dentistry; polymer production; paper and pulp industry
  Effects: M: Impairment at multiple sites in nerve cells. A: Lung inflammation; headache; impaired speech. C: Inflammation of gums; appetite loss; encephalopathy, including tremor; irritability

Calomel (Hg2Cl2)
  Industries at risk: Laboratories
  Effects: A: Low acute toxicity. C: Chronic toxic effects, see above

Sublimate (HgCl2)
  Sources of exposure: Disinfection
  Industries at risk: Hospitals; clinics; laboratories
  Effects: M: Acute tubular and glomerular renal degeneration; very toxic even in small oral doses, lethal down to 30 mg/kg body weight. C: See above

Manganese
  Sources of exposure: Melting (steel alloys); cutting and welding in steel; dry batteries
  Industries at risk: Manganese mining; steel and aluminium production; metal industry; battery production; chemical industry; brickyards
  Effects: M: Not known; possibly changes in dopamine and catecholamines in the basal ganglia in the centre of the brain. A: Dysphoria. C: Encephalopathy, including Parkinsonism; psychosis; appetite loss; irritability; headache; weakness

Aluminium
  Sources of exposure: Metallurgy; grinding; polishing
  Industries at risk: Metal industry
  Effects: M: Unknown. C: Possibly encephalopathy

M: mechanism; A: acute effects; C: chronic effects.
Neuropathy: dysfunction of motor and sensory peripheral nerve fibres.
Encephalopathy: brain dysfunction due to generalized impairment of the brain.

Monomers

Monomers constitute a large, heterogeneous group of reactive chemicals used for chemical
synthesis and production of polymers, resins and plastics. Monomers comprise
polyhalogenated aromatic compounds such as p-chlorobenzene and 1,2,4-trichlorobenzene;
unsaturated organic solvents such as styrene and vinyltoluene; acrylamide and related
compounds; phenols; ε-caprolactam and ζ-aminobutyrolactam. Some of the widely used
neurotoxic monomers and their effects on the nervous system are listed in Table 4.
Occupational exposure to neurotoxic monomers may take place in industries manufacturing,
transporting and using chemical and plastic products. Substantial exposure also occurs during
the handling of polymers containing residual monomers and during moulding in boat yards and
in dental clinics. Uptake of these monomers may take place by inhalation (e.g., carbon disulphide and styrene) or by skin
contact (e.g., acrylamide). As monomers are a heterogeneous group of chemicals, several
different mechanisms of toxicity are likely. This is reflected by differences in symptoms
(Table 4).

Table 4. Neurotoxic monomers

Acrylamide
  Sources of exposure: Employees exposed to the monomer
  Industries at risk: Polymer production; tunnelling and drilling operations
  Effects: M: Impaired axonal transport. C: Polyneuropathy; dizziness; tremor and ataxia

Acrylonitrile
  Sources of exposure: Accidents in laboratories and industries; house fumigation
  Industries at risk: Polymer and rubber production; chemical synthesis
  Effects: A: Hyperexcitability; salivation; vomiting; cyanosis; ataxia; difficulty breathing

Carbon disulphide
  Sources of exposure: Production of rubber and viscose rayon
  Industries at risk: Rubber and viscose rayon industries
  Effects: M: Impaired axonal transport and enzyme activity is likely. C: Peripheral neuropathy; encephalopathy; headache; vertigo; gastrointestinal disturbances

Styrene
  Sources of exposure: Production of glass-reinforced plastics; monomer manufacture and transportation; use of styrene-containing resins and coatings
  Industries at risk: Chemical industry; fibreglass production; polymer industry
  Effects: M: Unknown. A: Central nervous system depression; headache. C: Polyneuropathy; encephalopathy; hearing loss

Vinyltoluene
  Sources of exposure: Resin production; insecticide compounds
  Industries at risk: Chemical and polymer industry
  Effects: C: Polyneuropathy; reduced motor nerve conduction velocity

M: mechanism; A: acute effects; C: chronic effects.
Neuropathy: dysfunction of motor and sensory peripheral nerve fibres.
Encephalopathy: brain dysfunction due to generalized impairment of the brain.
Ataxia: impaired motor coordination.

Organic solvents

Organic solvents is a common designation for a large group of more than 200 lipophilic
chemical compounds capable of dissolving fats, oils, waxes, resins, rubber, asphalt, cellulose
filaments and plastic materials. They are usually fluids at room temperature with boiling
points below 200 to 250°C, and are easily evaporated. They are mainly taken up via the lungs
but some may penetrate the skin as well. Due to their lipophilicity they are distributed to
organs rich in fat. Thus high concentrations are found in body fat, bone marrow, liver and
brain, which also may act as reservoirs of solvents. The octanol/water partition coefficient can
indicate whether high brain concentrations are to be expected. The mechanism of toxicity is
not yet known, but several possibilities have been envisioned: blocking important enzymes in
the metabolic breakdown of glucose and thus reducing energy available for neuronal
processing; reducing energy formation in the mitochondria; changing neuronal membranes,
leading to impairment of ion channel function; slowing of axonal flow. Methylene chloride is
metabolized to CO, which blocks the transport of oxygen in the blood. Large groups of
workers in a great variety of professions are exposed daily or at least frequently (see Table 5).
In some countries the consumption of organic solvents has declined in some occupations due
to hygienic improvements and substitution (e.g., house painters, graphic industry workers,
metal workers), while in other occupations the pattern of exposure has changed but the total
amount of organic solvents has remained unchanged. For example, trichloroethylene has been
replaced by 1,1,1-trichloroethane and freon. So solvents are still a major hygienic problem at
many workplaces. People are at particular risk when exposed in small rooms with poor
ventilation and with high temperature, increasing the evaporation. Physical work increases the
pulmonary uptake of solvents. In several countries (in particular the Nordic countries),
compensation has been given to workers who have developed chronic toxic encephalopathy
following long-term, low-level exposure to solvents.
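For orientation (a standard physical chemistry definition rather than text from this chapter), the octanol/water partition coefficient mentioned above is usually expressed on a logarithmic scale:

$$ \log P_{\mathrm{ow}} = \log_{10}\frac{C_{\text{octanol}}}{C_{\text{water}}} $$

where C denotes the equilibrium concentration of the solvent in each phase; the higher the log P value, the more lipophilic the compound and the more readily it partitions into fat-rich tissues such as the brain and bone marrow.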

Table 5. Organic solvents associated with neurotoxicity

Chlorinated hydrocarbons: trichloroethylene; 1,1,1-trichloroethane; tetrachloroethylene
  Sources of exposure: Degreasing; electroplating; painting; printing; cleaning; general and light anaesthesia
  Industries at risk: Metal industry; graphic industry; electronic industry; dry cleaners; anaesthetists
  Effects: M: Unknown. A: Prenarcotic symptoms. C: Encephalopathy; polyneuropathy; trigeminal affection (trichloroethylene); hearing loss

Methylene chloride
  Sources of exposure: Extraction, including extraction of caffeine; paint remover
  Industries at risk: Food industry; painters; graphic industry
  Effects: M: Metabolized to CO. A: Prenarcotic symptoms; coma. C: Encephalopathy

Methyl chloride
  Sources of exposure: Refrigerator production and repair
  Industries at risk: Refrigerator production; rubber industry; plastics industry
  Effects: M: Unknown. A: Prenarcotic symptoms; loss of consciousness; death. C: Encephalopathy

Toluene
  Sources of exposure: Printing; cleaning; degreasing; electroplating; painting; spray painting
  Industries at risk: Graphic industry; electronic industry
  Effects: M: Unknown. A: Prenarcotic symptoms. C: Encephalopathy; cerebellar dysfunction; polyneuropathy; hearing loss; visual disturbance

Xylene
  Sources of exposure: Printing; synthesis of phthalic anhydride; painting; histology laboratory procedures
  Industries at risk: Graphic industry; plastics industry; histology laboratories
  Effects: M: Unknown. A: Prenarcotic symptoms. C: Encephalopathy; visual disturbance; hearing loss

Styrene
  Sources of exposure: Polymerization; moulding
  Industries at risk: Plastics industry; fibreglass production
  Effects: M: Unknown. A: Prenarcotic symptoms. C: Encephalopathy; polyneuropathy; hearing loss

Hexacarbons: n-hexane; methyl butyl ketone (MBK); methyl ethyl ketone (MEK)
  Sources of exposure: Gluing; printing; plastic coating; painting; extraction
  Industries at risk: Leather and shoe industry; graphic industry; painters; laboratories
  Effects: M: Impairment of axonal transport. A: Prenarcotic symptoms. C: Polyneuropathy; encephalopathy

Various solvents: Freon 113
  Sources of exposure: Refrigerator production and repair; dry cleaning; degreasing
  Industries at risk: Refrigerator production; metal industry; electronic industry; dry cleaning
  Effects: M: Unknown. A: Mild prenarcotic symptoms. C: Encephalopathy

Diethyl ether; halothane
  Sources of exposure: General anaesthetics
  Industries at risk: Hospitals; clinics (nurses; doctors)
  Effects: M: Unknown. A: Prenarcotic symptoms. C: Encephalopathy

Carbon disulphide
  See monomers (Table 4)

Mixtures: white spirit and thinner
  Sources of exposure: Painting; degreasing; cleaning; printing; impregnation; surface treatment
  Industries at risk: Metal industry; graphic industry; wood industry; painters
  Effects: M: Unknown. A: Prenarcotic symptoms. C: Encephalopathy

M: mechanism; A: acute effects; C: chronic effects.
Neuropathy: dysfunction of motor and sensory peripheral nerve fibres.
Encephalopathy: brain dysfunction due to generalized impairment of the brain.

Pesticides

Pesticides is used as a generic term for any chemical designed to kill groups of plants or
animals that are a human health hazard or may cause economic loss. It includes insecticides,
fungicides, rodenticides, fumigants and herbicides. Approximately 5 billion pounds of
pesticide products made up of more than 600 active pesticide ingredients are annually used in
agriculture worldwide. Organophosphorus, carbamate and organochlorine pesticides together
with pyrethroids, chlorophenoxy herbicides and organic metal compounds used as fungicides
have neurotoxic properties (Table 6). Among the many different chemicals used as
rodenticides, some (e.g., strychnine, zinc phosphide and thallium) are neurotoxic too.
Occupational exposure to neurotoxic pesticides is mainly associated with agricultural work
such as pesticide handling and working with treated crops, but exterminators, pesticide
manufacturing and formulating employees, highway and railway workers, as well as
greenhouse, forestry and nursery workers may have a substantial risk of being exposed to
neurotoxic pesticides as well. Children, who constitute a significant proportion of the
agricultural workforce, are especially vulnerable because their nervous systems are not fully
developed. The acute effects of pesticides are generally well described, and long-lasting
effects upon repeated exposure or single high dose exposure are often seen (Table 6), but the
effect of repeated subclinical exposure is uncertain.

Table 6. Classes of common neurotoxic pesticides, exposure, effects and associated symptoms

Organophosphorus compounds: Beomyl; Demethon; Dichlorvos; Ethyl parathion; Mevinphos; Phosfolan; Terbufos; Malathion
  Sources of exposure: Handling; treatment of crops; working with treated crops; dock labourers
  Industries at risk: Agriculture; forestry; chemical industry; gardening
  Effects: M: Acetylcholinesterase inhibition. A: Hyperactivity; neuromuscular paralysis; visual impairment; breathing difficulty; restlessness; weakness; vomiting; convulsions

Carbamates: Aldicarb; Carbaryl; Carbofuran; Propoxur
  Effects: M: Delayed neurotoxicity axonopathy (mainly phosphates or phosphonates). C: Polyneuropathy; numbness and tingling in feet; muscle weakness; sensory disturbance; paralysis

Organochlorines: Aldrin; Dieldrin; DDT; Endrin; Heptachlor; Lindane; Methoxychlor; Mirex; Toxaphene
  Sources of exposure: See above
  Industries at risk: See above
  Effects: A: Excitability; apprehension; dizziness; headache; confusion; loss of balance; weakness; ataxia; tremors; convulsions; coma. C: Encephalopathy

Pyrethroids
  Sources of exposure: See above
  Industries at risk: See above
  Effects: M: Altering the flow of sodium ions through the nerve cell membrane. A: Repeated firing of the nerve cell; tremor; convulsions

2,4-D
  Sources of exposure: Herbicide
  Industries at risk: Agriculture
  Effects: C: Polyneuropathy

Triethyltin hydroxide
  Sources of exposure: Surface treatment; handling treated wood
  Industries at risk: Wood and wood products
  Effects: A: Headache; weakness; paralysis; visual disturbances. C: Polyneuropathy; CNS effects

Methyl bromide
  Sources of exposure: Fumigating insecticide
  Industries at risk: Greenhouses; manufacture of refrigerators
  Effects: M: Unknown. A: Visual and speech disturbances; delirium; convulsions. C: Encephalopathy

M: mechanism; A: acute effects; C: chronic effects.
Neuropathy: dysfunction of motor and sensory peripheral nerve fibres.
Encephalopathy: brain dysfunction due to generalized impairment of the brain.
Ataxia: impaired motor coordination.

Other chemicals

Several different chemicals which do not fit into the above-mentioned groups also possess
neurotoxicity. Some of these are used as pesticides but also in different industrial processes.
Some have well-documented acute and chronic neurotoxic effects; others have obvious acute
effects, but the chronic effects are only poorly examined. Examples of these chemicals, their
uses and effects are listed in Table 7.
Table 7. Other chemicals associated with neurotoxicity

Boric acid
  Sources of exposure: Welding; fluxes; preservation
  Industries at risk: Metal; glass
  Effects: A: Delirium; convulsions. C: CNS depression

Disulfiram
  Sources of exposure: Pharmaceutical
  Industries at risk: Rubber
  Effects: C: Fatigue; peripheral neuropathy; sleepiness

Hexachlorophene
  Sources of exposure: Antibacterial soaps
  Industries at risk: Chemical
  Effects: C: CNS oedema; peripheral nerve damage

Hydrazine
  Sources of exposure: Reducing agents
  Industries at risk: Chemical; army
  Effects: A: Excitement; appetite loss; tremor; convulsions

Phenol/Cresol
  Sources of exposure: Antiseptics
  Industries at risk: Plastics; resins; chemical; hospitals; laboratories
  Effects: M: Denatures proteins and enzymes. A: Reflex loss; weakness; tremor; sweating; coma. C: Appetite loss; mental disturbance; ringing in the ears

Pyridine
  Sources of exposure: Ethanol denaturation
  Industries at risk: Chemical; textile
  Effects: A: CNS depression; mental depression; fatigue; appetite loss. C: Irritability; sleep disorders; polyneuropathy; double vision

Tetraethyl lead
  Sources of exposure: Gasoline additive
  Industries at risk: Chemical; transport
  Effects: C: Irritability; weakness; tremor; vision difficulties

Arsine
  Sources of exposure: Batteries; insecticide; melting
  Industries at risk: Smelting; glasswork; ceramics; manufacture of paper
  Effects: M: Impairing enzyme function. A: Reduced sensation; paresis; convulsions; coma. C: Motor impairment; ataxia; vibration sense loss; polyneuropathy

Lithium
  Sources of exposure: Oil additive; pharmaceutical
  Industries at risk: Petrochemical
  Effects: A/C: Appetite loss; ringing in the ears; vision blurring; tremor; ataxia

Selenium
  Sources of exposure: Melting; production of rectifiers; vulcanization; cutting oils; antioxidant
  Industries at risk: Electronic; glass works; metal industry; rubber industry
  Effects: A: Delirium; anosmia. C: Odour of garlic; polyneuropathy; nervousness

Thallium
  Sources of exposure: Rodenticide
  Industries at risk: Glass; glass products
  Effects: A: Appetite loss; tiredness; drowsiness; metallic taste; numbness; ataxia

Tellurium
  Sources of exposure: Melting; rubber production; catalyst
  Industries at risk: Metal; chemical; rubber; electronic
  Effects: A: Headache; drowsiness; neuropathy. C: Odour of garlic; metallic taste; Parkinsonism; depression

Vanadium
  Sources of exposure: Melting
  Industries at risk: Mining; steel production; chemical industry
  Effects: A: Appetite loss; ringing in the ears; somnolence; tremor. C: Depression; tremor; blindness

M: mechanism; A: acute effects; C: chronic effects.
Neuropathy: dysfunction of motor and sensory peripheral nerve fibres.
Encephalopathy: brain dysfunction due to generalized impairment of the brain.
Ataxia: impaired motor coordination.

Clinical Syndromes Associated with Neurotoxicity

Written by ILO Content Manager

Neurotoxicant syndromes, brought about by substances which adversely affect nervous tissue,
constitute one of the ten leading occupational disorders in the United States. Neurotoxicant
effects constitute the basis for establishing exposure limit criteria for approximately 40% of
agents considered hazardous by the United States National Institute for Occupational Safety
and Health (NIOSH).

A neurotoxin is any substance capable of interfering with the normal function of nervous
tissue, causing irreversible cellular damage and/or resulting in cellular death. Depending on
its particular properties, a given neurotoxin will attack selected sites or specific cellular
elements of the nervous system. Those compounds, which are non-polar, have greater lipid
solubility, and thus have greater access to nervous tissue than highly polar and less lipid-
soluble chemicals. The type and size of cells and the various neurotransmitter systems
affected in different regions of the brain, innate protective detoxifying mechanisms, as well as
the integrity of cellular membranes and intracellular organelles all influence neurotoxicant
responses.

Neurons (the functional cell unit of the nervous system) have a high metabolic rate and are at
greatest risk for neurotoxicant damage, followed by oligodendrocytes, astrocytes, microglia
and cells of the capillary endothelium. Changes in cellular membrane structure impair
excitability and impede impulse transmission. Toxicant effects alter protein form, fluid
content and ionic exchange capability of membranes, leading to swelling of neurons and
astrocytes and to damage to the delicate cells lining blood capillaries. Disruption of
neurotransmitter mechanisms blocks access to post-synaptic receptors, produces false
neurotransmitter effects, and alters the synthesis, storage, release, re-uptake or enzymatic
inactivation of natural neurotransmitters. Thus, clinical manifestations of neurotoxicity are
determined by a number of different factors: the physical characteristics of the neurotoxicant
substance, the dose of exposure to it, the vulnerability of the cellular target, the organism’s
ability to metabolize and excrete the toxin, and by the reparative abilities of the structures and
mechanisms affected. Table 1 lists various chemical exposures and their neurotoxic
syndromes.

Table 1. Chemical exposures and associated neurotoxic syndromes

Metals

Arsenic
  Sources of exposure: Pesticides; pigments; antifouling paint; electroplating industry; seafood; smelters; semiconductors
  Clinical diagnosis: Acute: encephalopathy. Chronic: peripheral neuropathy
  Locus of pathology: Unknown (a); axon (c)

Lead
  Sources of exposure: Solder; lead shot; illicit whiskey; insecticides; auto body shops; storage battery manufacturing; foundries, smelters; lead-based paint; lead pipes
  Clinical diagnosis: Acute: encephalopathy. Chronic: encephalopathy and peripheral neuropathy
  Locus of pathology: Blood vessels (a); axon (c)

Manganese
  Sources of exposure: Iron, steel industry; welding operations; metal-finishing operations; fertilizers; manufacturers of fireworks, matches; manufacturers of dry cell batteries
  Clinical diagnosis: Acute: encephalopathy. Chronic: parkinsonism
  Locus of pathology: Unknown (a); basal ganglia neurons (c)

Mercury
  Sources of exposure: Scientific instruments; electrical equipment; amalgams; electroplating industry; photography; felt making
  Clinical diagnosis: Acute: headache, nausea, onset of tremor. Chronic: ataxia, peripheral neuropathy, encephalopathy
  Locus of pathology: Unknown (a); axon (c); unknown (c)

Tin
  Sources of exposure: Canning industry; solder; electronic components; polyvinyl plastics; fungicides
  Clinical diagnosis: Acute: memory defects, seizures, disorientation. Chronic: encephalomyelopathy
  Locus of pathology: Neurons of the limbic system (a & c); myelin (c)

Solvents

Carbon disulphide
  Sources of exposure: Manufacturers of viscose rayon; preservatives; textiles; rubber cement; varnishes; electroplating industry
  Clinical diagnosis: Acute: encephalopathy. Chronic: peripheral neuropathy, parkinsonism
  Locus of pathology: Unknown (a); axon (c); unknown

n-Hexane, methyl butyl ketone
  Sources of exposure: Paints; lacquers; varnishes; metal-cleaning compounds; quick-drying inks; paint removers; glues, adhesives
  Clinical diagnosis: Acute: narcosis. Chronic: peripheral neuropathy
  Locus of pathology: Unknown (a); axon (c)

Perchloroethylene
  Sources of exposure: Paint removers; degreasers; extraction agents; dry cleaning industry; textile industry
  Clinical diagnosis: Acute: narcosis. Chronic: peripheral neuropathy, encephalopathy
  Locus of pathology: Unknown (a); axon (c); unknown

Toluene
  Sources of exposure: Rubber solvents; cleaning agents; glues; manufacturers of benzene; gasoline, aviation fuels; paints, paint thinners; lacquers
  Clinical diagnosis: Acute: narcosis. Chronic: ataxia, encephalopathy
  Locus of pathology: Unknown (a); cerebellum (c); unknown

Trichloroethylene
  Sources of exposure: Degreasers; painting industry; varnishes; spot removers; process of decaffeination; dry cleaning industry; rubber solvents
  Clinical diagnosis: Acute: narcosis. Chronic: encephalopathy, cranial neuropathy
  Locus of pathology: Unknown (a); unknown (c); axon (c)

Insecticides

Organophosphates
  Sources of exposure: Agricultural industry, manufacturing and application
  Clinical diagnosis: Acute: cholinergic poisoning. Chronic: ataxia, paralysis, peripheral neuropathy
  Locus of pathology: Acetylcholinesterase (a); long tracts of spinal cord (c); axon (c)

Carbamates
  Sources of exposure: Agricultural industry, manufacturing and application; flea powders
  Clinical diagnosis: Acute: cholinergic poisoning. Chronic: tremor, peripheral neuropathy
  Locus of pathology: Acetylcholinesterase (a); dopaminergic system (c)

(a), acute; (c), chronic.

Source: Modified from Feldman 1990, with permission of the publisher.

Establishing a diagnosis of a neurotoxicant syndrome and differentiating it from neurologic
diseases of non-neurotoxicant aetiology requires an understanding of the pathogenesis of the
neurological symptoms and observed signs and symptoms; an awareness that particular
substances are capable of affecting nervous tissue; documentation of exposure; evidence of
presence of neurotoxin and/or metabolites in tissues of an affected individual; and careful
delineation of a time relationship between exposure and the appearance of symptoms with
subsequent decrease in symptoms after exposure is ended.

Proof that a particular substance has reached a toxicant dose level is usually lacking after
symptoms appear. Unless environmental monitoring is ongoing, a high index of suspicion is
necessary to recognize cases of neurotoxicologic injury. Identifying symptoms referable to the
central and/or the peripheral nervous systems can help the clinician focus on certain
substances, which have a greater predilection for one part or another of the nervous system, as
possible culprits. Convulsions, weakness, tremor/twitching, anorexia (weight loss),
equilibrium disturbance, central nervous system depression, narcosis (a state of stupor or
unconsciousness), visual disturbance, sleep disturbance, ataxia (inability to coordinate
voluntary muscle movements), fatigue and tactile disorders are commonly reported symptoms
following exposure to certain chemicals. Constellations of symptoms form syndromes
associated with neurotoxicant exposure.

Behavioural Syndromes

Disorders with predominantly behavioural features, ranging from acute psychosis to depression
and chronic apathy, have been described in some workers. It is essential to differentiate
memory impairment associated with other neurological diseases, such as Alzheimer’s disease,
arteriosclerosis or presence of a brain tumour, from the cognitive deficits associated with
toxicant exposure to organic solvents, metals or insecticides. Transient disturbances of
awareness or epileptic seizures with or without associated motor involvement must be
identified as a primary diagnosis separate from similarly appearing disturbances of
consciousness related to neurotoxicant effects. Subjective and behavioural toxicant syndromes
such as headache, vertigo, fatigue and personality change manifest as mild encephalopathy
with inebriation, and may indicate the presence of exposure to carbon monoxide, carbon
dioxide, lead, zinc, nitrates or mixed organic solvents. Standardized neuropsychological
testing is necessary to document elements of cognitive impairment in patients suspected of
toxicant encephalopathy, and these must be differentiated from those dementing syndromes
caused by other pathologies. Specific tests used in the diagnostic batteries of tests must
include a broad sampling of cognitive function tests which will generate predictions about the
patient’s functioning and daily life, as well as tests which have been demonstrated previously
to be sensitive to the effects of known neurotoxins. These standardized batteries must include
tests which have been validated on patients with specific types of brain damage and structural
deficits, to clearly separate these conditions from neurotoxic effects. In addition, tests must
include internal control measures to detect the influence of motivation, hypochondriasis,
depression and learning difficulties, and must contain language that takes into account cultural
as well as educational background effects.

A continuum exists from mild to severe central nervous system impairment experienced by
patients exposed to toxicant substances:

 Organic affective syndrome (Type I Effect), in which mild mood disorders predominate as the
patient’s chief complaint, with features most consistent with those of organic affective
disorders of the depressive type. This syndrome seems to be reversible following cessation of
exposure to the offending agent.
 Mild chronic toxicant encephalopathy, in which, in addition to mood disturbances, central
nervous system impairment is more prominent. Patients have evidence of memory and
psychomotor function disturbance which can be confirmed by neuropsychological testing. In
addition, features of visual spatial impairment and abstract concept formation may be seen.
Activities of daily living and work performance are impaired.
 Sustained personality or mood change (Type IIA Effect) or impairment in intellectual function
(Type IIB Effect) may be seen. In mild chronic toxicant encephalopathy, the course is insidious.
Features may persist after the cessation of exposure and disappear gradually, while in some
individuals, persistent functional impairment may be observed. If exposure continues, the
encephalopathy may progress to a more severe stage.
 In severe chronic toxicant encephalopathy (Type III Effect), dementia with global deterioration
of memory and other cognitive functions is noted. The clinical effects of toxicant
encephalopathy are not specific to a given agent. Chronic encephalopathy associated with
toluene, lead and arsenic is not different from that of other toxicant aetiologies. The
presence of other associated findings, however (visual disturbances with methyl alcohol),
may help differentiate syndromes according to particular chemical aetiologies.

Workers exposed to solvents for long periods of time may exhibit disturbances of central
nervous system function which are permanent. Since an excess of subjective symptoms,
including headache, fatigue, impaired memory, loss of appetite and diffuse chest pains, has
been reported, it is often difficult to confirm this effect in any individual case. An
epidemiological study comparing house painters exposed to solvents with unexposed
industrial workers showed, for example, that painters had significantly lower mean scores on
psychological tests measuring intellectual capacity and psychomotor coordination than
referent subjects. The painters also had significantly lower performances than expected on
memory and reaction time tests. Differences between workers exposed for several years to jet
fuel and unexposed workers, in tests demanding close attention and high sensory motor speed,
were apparent as well. Impairments in psychological performance and personality changes
have also been reported among car painters. These included impairment of visual and verbal memory,
reduced emotional reactivity, and poor performance on verbal intelligence tests.

Most recently, a controversial neurotoxicant syndrome, multiple chemical sensitivity, has been
described. Such patients develop a variety of features involving multiple organ systems when
they are exposed to even low levels of various chemicals found in the workplace and the
environment. Mood disturbances are characterized by depression, fatigue, irritability and poor
concentration. These symptoms recur on exposure to predictable stimuli, are elicited by
chemicals of diverse structural and toxicological classes, and occur at levels much lower than those
causing adverse responses in the general population. Many of the symptoms of multiple
chemical sensitivity are shared by individuals who show only a mild form of mood
disturbance, headache, fatigue, irritability and forgetfulness when they are in a building with
poor ventilation and with off-gassing of volatile substances from synthetic building materials
and carpets. The symptoms disappear when they leave these environments.

Disturbances of consciousness, seizures and coma

When the brain is deprived of oxygen—for example, in the presence of carbon monoxide,
carbon dioxide, methane or agents which block tissue respiration such as hydrocyanic acid, or
those which cause massive impregnation of the nerve such as certain organic solvents—
disturbances of consciousness may result. Loss of consciousness may be preceded by seizures
in workers with exposure to anticholinesterase substances such as organophosphate
insecticides. Seizures may also occur with lead encephalopathy associated with brain
swelling. Acute toxicity following organophosphate poisoning includes
autonomic nervous system manifestations which precede the occurrence of dizziness,
headache, blurred vision, miosis, chest pain, increased bronchial secretions and seizures.
These parasympathetic effects are explained by the inhibitory action of these toxicant
substances on cholinesterase activity.

Movement disorders

Slowness of movement, increased muscle tone, and postural abnormalities have been
observed in workers exposed to manganese, carbon monoxide, carbon disulphide and the
toxicity of a meperidine by-product, 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP).
At times, the individuals may appear to have Parkinson’s disease. Parkinsonism secondary to
toxicant exposure has features of other nervous disorders such as chorea and athetosis. The
typical “pill-rolling” tremor is not seen in these instances, and usually the cases do not
respond well to the drug levodopa. Dyskinesia (impairment of the power of voluntary motion)
can be a common symptom of bromomethane poisoning. Spasmodic movements of the
fingers, face, peribuccal muscles and the neck, as well as extremity spasms, may be seen.
Tremor is common following mercury poisoning. More obvious tremor associated with ataxia
(lack of coordination of muscular action) is noted in individuals following toluene inhalation.

Opsoclonus is an abnormal eye movement which is jerky in all directions. This is often seen
in brain-stem encephalitis, but may also be a feature following chlordecone exposure. The
abnormality consists of irregular bursts of abrupt, involuntary, rapid, simultaneous jerking of
both eyes in a conjugate manner, possibly multidirectional in severely affected individuals.

Headache

Head pain is a common complaint following exposure to metal fumes such as zinc fume, or to
solvent vapours, and may result from vasodilation (widening of the blood vessels) as well as
cerebral oedema (swelling). Pain is also common with carbon monoxide exposure, hypoxia
(low oxygen) and excess carbon dioxide. “Sick building syndrome” is thought to cause
headaches because of excess carbon dioxide present in a poorly ventilated area.

Peripheral neuropathy

Peripheral nerve fibres serving motor functions begin in motor neurons in the ventral horn of
the spinal cord. The motor axons extend peripherally to the muscles they innervate. A sensory
nerve fibre has its nerve cell body in the dorsal root ganglion or in the dorsal grey matter of
the spinal cord. Having received information from the periphery detected at distal receptors,
nerve impulses are conducted centrally to the nerve cell bodies where they connect with
spinal cord pathways transmitting information to the brain stem and cerebral hemispheres.
Some sensory fibres have immediate connections with motor fibres within the spinal cord,
providing a basis for reflex activity and quick motor responses to noxious sensations. These
sensory-motor relationships exist in all parts of the body; the cranial nerves are the peripheral
nerve equivalents arising in brain stem, rather than spinal cord, neurons. Sensory and motor
nerve fibres travel together in bundles and are referred to as the peripheral nerves.

Toxicant effects on peripheral nerve fibres may be divided into those which primarily affect
axons (axonopathies), producing distal sensory-motor loss, and those which primarily affect
the myelin sheath and Schwann cells. Axonopathies are evident in early stages in
the lower extremities where the axons are the longest and farthest from the nerve cell body.
Random demyelination occurs in segments between nodes of Ranvier. If sufficient axonal
damage occurs, secondary demyelination follows; as long as axons are preserved,
regeneration of Schwann cells and remyelination can occur. A pattern seen commonly in
toxicant neuropathies is distal axonopathy with secondary segmental demyelination. The loss
of myelin reduces the speed of conducting nerve impulses. Thus, gradual onset of intermittent
tingling and numbness progressing to lack of sensation and unpleasant sensations, muscle
weakness, and atrophy results from damage to the motor and sensory fibres. Reduced or
absent tendon reflexes and anatomically consistent patterns of sensory loss, involving the
lower extremities more than upper, are features of peripheral neuropathy.

Motor weaknesses may be noted in distal extremities and progress to unsteady gait and
inability to grasp objects. The distal portions of the extremities are involved to a greater
extent, but severe cases may produce proximal muscle weakness or atrophy as well. Extensor
muscle groups are involved before the flexors. Symptoms may sometimes progress for a few
weeks even after removal from exposure, and deterioration of nerve function may continue for
several weeks after exposure has ceased.

Depending on the type and severity of neuropathy, an electrophysiological examination of the
peripheral nerves is useful to document impaired function. Slowing of conduction velocity,
reduced amplitudes of sensory or motor action potentials, or prolonged latencies can be
observed. Slowing of motor or sensory conduction velocities is generally associated with
demyelination of nerve fibres. Preservation of normal conduction velocity values in the
presence of muscle atrophy suggests axonal neuropathy. Exceptions occur when there is
progressive loss of motor and sensory nerve fibres in axonal neuropathy which affects the
maximal conduction speed as a result of the dropping out of the larger diameter faster
conducting nerve fibres. Regenerating fibres occur in early stages of recovery in
axonopathies, in which conduction is slowed, especially in the distal segments. The
electrophysiological study of patients with toxicant neuropathies should include
measurements of motor and sensory conduction velocity in the upper and lower extremities.
Special attention should be given to the primarily sensory conducting characteristics of the
sural nerve in the leg. This is of great value when the sural nerve is then used for biopsy,
providing anatomical correlation between the histology of teased nerve fibres and the
conduction characteristics. A differential electrophysiological study of the conducting
capabilities of proximal versus distal segments of a nerve is useful in identifying a distal
toxicant axonopathy or in localizing a neuropathic conduction block, which is probably due to
demyelination.
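
The interpretive logic described above (slowed conduction pointing towards demyelination; reduced amplitudes with preserved velocity pointing towards axonal loss) can be caricatured as a simple rule-of-thumb classifier. The sketch below is only an illustration: the thresholds, expressed as percentages of a laboratory's reference values, are assumptions and are no substitute for expert electrodiagnostic interpretation.

# Rule-of-thumb classification of a nerve conduction study result.
# Thresholds (percent of the laboratory's normal reference values) are
# hypothetical and for illustration only.

def classify_neuropathy(velocity_pct_of_normal: float,
                        amplitude_pct_of_normal: float) -> str:
    if velocity_pct_of_normal >= 90 and amplitude_pct_of_normal >= 80:
        return "within normal limits"
    if velocity_pct_of_normal < 70:
        return "suggests demyelinating neuropathy"
    if amplitude_pct_of_normal < 50:
        return "suggests axonal neuropathy"
    return "borderline; repeat study and clinical correlation needed"

# Example: near-normal velocity but markedly reduced amplitude.
print(classify_neuropathy(95, 40))  # suggests axonal neuropathy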

Understanding the pathophysiology of a suspected neurotoxicant polyneuropathy has great
value. For example, in patients with neuropathy caused by n-hexane and methyl n-butyl ketone,
motor nerve conduction velocities are reduced, but in some cases the values may fall within
the normal range if only the fastest conducting fibres survive and determine the measured
outcome. Since neurotoxicant hexacarbon solvents cause axonal degeneration, secondary
changes arise in the myelin and explain the overall reduction in conduction velocity, even
where preserved fibres keep the measured value within the normal range.

Electrophysiological techniques include special tests other than the direct conduction velocity,
amplitude and latency studies. Somatosensory evoked potentials, auditory evoked potentials,
and visual evoked potentials are ways of studying the characteristics of the sensory
conducting systems, as well as specific cranial nerves. Afferent-efferent circuitry can be
tested with blink reflex tests, in which stimulation of the 5th cranial nerve elicits responses in
muscles innervated by the 7th cranial nerve; H-reflexes involve segmental motor reflex pathways.
Vibration stimulation preferentially tests the larger fibres and helps distinguish large-fibre from
small-fibre involvement. Well-controlled electronic
techniques are available for measuring the threshold needed to elicit a response, and then to
determine the speed of travel of that response, as well as the amplitude of the muscle
contraction, or the amplitude and pattern of an evoked sensory action potential. All
physiological results must be evaluated in light of the clinical picture and with an
understanding of the underlying pathophysiological process.
Conclusion

The differentiation of a neurotoxicant syndrome from a primary neurological disease poses a
formidable challenge to physicians in the occupational setting. Obtaining a good history,
maintaining a high degree of suspicion and adequate follow-up of an individual, as well as
groups of individuals, is necessary and rewarding. Early recognition of illness related to
toxicant agents in their environment or to a particular occupational exposure is critical, since
proper diagnosis can lead to early removal of an individual from the hazards of ongoing
exposure to a toxicant substance, preventing possible irreversible neurological damage.
Furthermore, recognition of the earliest affected cases in a particular setting may result in
changes that will protect others who have not yet become affected.

Diagnosis

Written by ILO Content Manager

The diagnosis of neurotoxic disease is not easy. The errors are usually of two types: either it is
not recognized that a neurotoxic agent is the cause of neurological symptoms, or neurological
(and especially neurobehavioural) symptoms are erroneously diagnosed as resulting from an
occupational, neurotoxic exposure. Both of these errors can be hazardous since an early
diagnosis is important in the case of neurotoxic disease, and the best treatment is avoiding
further exposure for the individual case and the surveillance of the condition of other workers
in order to prevent their exposure to the same danger. On the other hand, undue alarm can
sometimes develop in the workplace if a worker claims to have serious symptoms and suspects
a chemical exposure as the cause when, in fact, either the worker is mistaken or the hazard is
not actually present for others. There is a practical reason for correct diagnostic procedures as
well: in many countries the diagnosis and treatment of occupational diseases, and the loss of
working capacity and invalidity caused by those diseases, are covered by insurance, and
financial compensation may be disputed if the diagnostic criteria are not solid. An example of
a decision tree for neurological assessment is given in Table 1.

Table 1. Decision tree for neurotoxic disease

I. Relevant exposure: level, length and type

II. Appropriate symptoms: insidiously increasing central (CNS) or peripheral (PNS) nervous
system symptoms

III. Signs and additional tests: CNS dysfunction: neurological examination, psychological tests;
PNS dysfunction: quantitative sensory tests, nerve conduction studies

IV. Other diseases excluded in differential diagnosis
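
As a rough illustration of how the four steps of the decision tree combine, the sketch below encodes them as a simple screening function. The boolean criteria and their names are illustrative assumptions; in practice each level requires clinical judgement rather than a yes/no answer.

# Illustrative sketch of the four-level decision tree above.
# Criteria are hypothetical simplifications, not a validated instrument.

def neurotoxic_disease_suspected(relevant_exposure: bool,
                                 insidious_cns_or_pns_symptoms: bool,
                                 supporting_signs_or_tests: bool,
                                 other_diseases_excluded: bool) -> bool:
    """Return True only if all four levels of the decision tree are satisfied."""
    return (relevant_exposure
            and insidious_cns_or_pns_symptoms
            and supporting_signs_or_tests
            and other_diseases_excluded)

# Example: exposure, symptoms and test findings present, but the
# differential diagnosis has not yet been completed -> not confirmed.
print(neurotoxic_disease_suspected(True, True, True, False))  # False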


Exposure and Symptoms

Acute neurotoxic syndromes occur mainly in accidental situations, when workers are exposed
short-term to very high levels of a chemical or to a mixture of chemicals generally through
inhalation. The usual symptoms are vertigo, malaise and possible loss of consciousness as a
result of depression of the central nervous system. When the subject is removed from the
exposure, the symptoms disappear rather quickly, unless the exposure has been so intense that
it is life-threatening, in which case coma and death may follow. In these situations recognition
of the hazard must occur at the workplace, and the victim should be taken out into the fresh air
immediately.

In general, neurotoxic symptoms arise after short-term or long-term exposures, often at
relatively low occupational exposure levels. In these cases acute symptoms may have
occurred at work, but the presence of acute symptoms is not necessary for diagnosis of
chronic toxic encephalopathy or toxic neuropathy to be made. However, patients do often
report headache, light-headedness or mucosal irritation at the end of a working day, but these
symptoms initially disappear during the night, weekend or vacation. A useful checklist can be
found in Table 2.

Table 2. Consistent neuro-functional effects of worksite exposures to some leading neurotoxic substances

Functional domain / Mixed organic solvents / Carbon disulphide / Styrene / Organophosphates / Lead / Mercury

Acquisition + +
Affect + + +
Categorization +
Coding + + + +
Colour vision + +
Concept shifting +
Distractibility +
Intelligence + + + + +
Memory + + + + + +
Motor coordination + + + + +
Motor speed + + + + +
Near visual contrast sensitivity +
Odour perception threshold +
Odour identification + +
Personality + + +
Spatial relations + + +
Vibrotactile threshold + + +
Vigilance + + +
Visual field + +
Vocabulary +

Source: Adapted from Anger 1990.

Assuming that the patient has been exposed to neurotoxic chemicals, the diagnosis of
neurotoxic disease starts with symptoms. In 1985, a joint working group of the World Health
Organization and the Nordic Council of Ministers discussed the matter of chronic organic
solvent intoxication and found a set of core symptoms, which are found in most cases
(WHO/Nordic Council 1985). The core symptoms are fatigability, memory loss, difficulties in
concentration, and loss of initiative. These symptoms usually start after a basic change in
personality, which develops gradually and affects energy, intellect, emotion and motivation.
Among other symptoms of chronic toxic encephalopathy are depression, dysphoria, emotional
lability, headache, irritability, sleep disturbances and dizziness (vertigo). If there is also
involvement of the peripheral nervous system, numbness and possibly muscular weakness
develop. Such chronic symptoms last for at least a year after the exposure itself has ended.

Clinical Examination and Testing

The clinical examination should include a neurological examination, where attention should
be paid to impairment of higher nervous functions, such as memory, cognition, reasoning and
emotion; to impaired cerebellar functions, like tremor, gait, station and coordination; and to
peripheral nervous functions, especially vibration sensitivity and other tests of sensation.
Psychological tests can provide objective measures of higher nervous system functions,
including psychomotor, short-term memory, verbal and non-verbal reasoning and perceptual
functions. In individual diagnosis the tests should include some tests that give a clue as to the
person’s premorbid intellectual level. History of school performance and previous job
performance as well as possible psychological tests administered previously, for example in
connection with military service, can help in the evaluation of the person’s normal level of
performance.

The peripheral nervous system can be studied with quantitative tests of sensory modalities,
vibration and thermosensibility. Nerve conduction velocity studies and electromyography can
often reveal neuropathy at an early stage. In these tests special emphasis should be on sensory
nerve functions. The amplitude of the sensory nerve action potential (SNAP) decreases more often
than the sensory conduction velocity in axonal neuropathies, and most toxic neuropathies are
axonal in character. Neuroradiological studies such as computed tomography (CT) and
magnetic resonance imaging (MRI) usually do not reveal anything pertinent to chronic toxic
encephalopathy, but they may be useful in the differential diagnosis.

In the differential diagnosis other neurological and psychiatric diseases should be considered.
Dementia of other aetiology should be ruled out, as well as depression and stress symptoms of
various causes. Psychiatric consultation may be necessary. Alcohol abuse is a relevant
confounding factor; excessive use of alcohol causes symptoms similar to those of solvent
exposure, and on the other hand there are papers indicating that solvent exposure may induce
alcohol abuse. Other causes of neuropathy also have to be ruled out, especially entrapment
neuropathies, diabetes and kidney disease; alcohol, too, causes neuropathy. The combination
of encephalopathy and neuropathy is more likely to be of toxic origin than either of these alone.

In the final decision the exposure should be evaluated again. Was there relevant exposure,
considering the level, length and quality of exposure? Solvents are more likely to induce
psycho-organic syndrome or toxic encephalopathy; hexacarbons, however, usually first cause
neuropathy. Lead and some other metals cause neuropathy, although CNS involvement can be
detected later on.

Manifestations of Acute and Early Chronic Poisoning

Written by ILO Content Manager

Current knowledge of the short- and long-term manifestations of exposure to neurotoxic
substances comes from experimental animal studies and human chamber studies,
epidemiological studies of active and retired and/or diseased workers, clinical studies and
reports, as well as large-scale disasters, such as those that occurred in Bhopal, following a
leak of methyl isocyanate, and in Minamata, from methyl mercury poisoning.

Exposure to neurotoxic substances can produce immediate effects (acute) and/or long-term
effects (chronic). In both cases, the effects can be reversible and disappear over time
following reduction or cessation of exposure, or result in permanent, irreversible damage. The
severity of acute and chronic nervous system impairment depends on exposure dose, which
includes both the quantity and duration of exposure. Like alcohol and recreational drugs,
many neurotoxic substances may initially be excitatory, producing a sensation of well-being
or euphoria and/or speeding up motor functions; as the dose increases in quantity or in time,
these same neurotoxins will depress the nervous system. Indeed, narcosis (a state of stupor or
insensibility) is induced by a large number of neurotoxic substances, which are mind-altering
and depress the central nervous system.

Acute Poisoning

Acute effects reflect the immediate response to the chemical substance. The severity of the
symptoms and resulting disorders depends on the quantity that reaches the nervous system.
With mild exposures, acute effects are mild and transient, disappearing when exposure ceases.
Headache, tiredness, light-headedness, difficulty concentrating, feelings of drunkenness,
euphoria, irritability, dizziness and slowed reflexes are the types of symptoms experienced
during exposure to neurotoxic chemicals. Although these symptoms are reversible, when
exposure is repeated day after day, the symptoms recur as well. Moreover, since the
neurotoxic substance is not immediately eliminated from the body, symptoms can persist
following work. Reported symptoms at a particular workstation are a good reflection of
chemical interference with the nervous system and should be considered a warning signal for
potential over-exposure; preventive measures to reduce exposure levels should be initiated.

If exposure is very high, as can occur with spills, leaks, explosions and other accidents,
symptoms and signs of intoxication are debilitating (severe headaches, mental confusion,
nausea, dizziness, incoordination, blurred vision, loss of consciousness); if exposure is high
enough, effects can be long-lasting, possibly resulting in coma and death.

Acute pesticide-related disorders are a common occurrence among agricultural workers in
food-producing countries, where large amounts of toxic substances are used as insecticides,
fungicides, nematicides, and herbicides. Organophosphates, carbamates, organochlorines,
pyrethrum, pyrethrin, paraquat and diquat are among the major categories of pesticides;
however, there are thousands of pesticide formulations, containing hundreds of different
active ingredients. Some pesticides, such as maneb, contain manganese, while others are
dissolved in organic solvents. In addition to the symptoms mentioned above, acute
organophosphate and carbamate poisoning may be accompanied by salivation, incontinence,
convulsions, muscle twitching, diarrhoea, visual disturbances, as well as respiratory
difficulties and a rapid heart rate; these result from an excess of the neurotransmitter
acetylcholine, which accumulates when these substances inhibit the enzyme cholinesterase.
Blood cholinesterase activity decreases in proportion to the degree of acute organophosphate or
carbamate intoxication.
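
As a minimal sketch of how this relationship is used in biological monitoring, the degree of cholinesterase inhibition can be expressed relative to a worker's own pre-exposure baseline. The baseline value and the comment on action levels below are illustrative assumptions, not regulatory criteria.

# Percent cholinesterase depression relative to an individual baseline.
# The baseline value and the action level mentioned are hypothetical examples.

def cholinesterase_inhibition(baseline_activity: float,
                              current_activity: float) -> float:
    """Return percent depression of cholinesterase activity from baseline."""
    if baseline_activity <= 0:
        raise ValueError("baseline activity must be positive")
    return 100.0 * (baseline_activity - current_activity) / baseline_activity

# Example: baseline 10.0 U/mL, current 6.5 U/mL -> 35% depression, a degree
# that many surveillance schemes would treat as a warning (exact action
# levels vary between programmes).
print(f"{cholinesterase_inhibition(10.0, 6.5):.0f}% depression")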

With some substances, such as organophosphorus pesticides and carbon monoxide, high-level
acute exposures can produce delayed deterioration of certain parts of the nervous system. For
the former, numbness and tingling, weakness and disequilibrium can occur a few weeks after
exposure, while for the latter, delayed neurologic deterioration can take place, with symptoms
of mental confusion, ataxia, motor incoordination and paresis. Repeated acute episodes of
high levels of carbon monoxide have been associated with later-life Parkinsonism. It is
possible that high exposures to certain neurotoxic chemicals may be associated with an
increased risk for neurodegenerative disorders later on in life.

Chronic Poisoning

Recognition of the hazards of neurotoxic chemicals has led many countries to reduce the
permissible exposure levels. However, for most chemicals, the level at which no adverse
effect will occur over long-term exposure is still unknown. Repeated exposure to low to
medium levels of neurotoxic substances throughout many months or years can alter nervous
system functions in an insidious and progressive manner. Continued interference with
molecular and cellular processes causes neurophysiological and psychological functions to
undergo slow alterations, which in the early stages may go unnoticed since there are large
reserves in the nervous system circuitry and damage can, in the first stages, be compensated
through new learning.

Thus, initial nervous system injury is not necessarily accompanied by functional disorders and
may be reversible. However, as the damage progresses, symptoms and signs, often non-
specific in nature, become apparent, and individuals may seek medical attention. Finally,
impairment may become so severe that a clear clinical syndrome, generally irreversible, is
manifest.

Figure 1 schematizes the health deterioration continuum associated with exposure to
neurotoxic substances. Progression of neurotoxic dysfunction is dependent on both the
duration and concentration of exposure (dose), and may be influenced by other workplace
factors, individual health status and susceptibility as well as lifestyle, particularly drinking
and exposure to neurotoxic substances used in hobbies, such as glues applied in furniture
assembly or plastic model building, paints and paint removers.

Figure 1. Health deterioration on a continuum with increasing dosage


Different strategies are adopted for identification of neurotoxin-related illness among
individual workers and for the surveillance of early nervous system deterioration among
active workers. Clinical diagnosis relies on a constellation of signs and symptoms, coupled to
the medical and exposure history for an individual; aetiologies other than exposure must be
systematically ruled out. For the surveillance of early dysfunction among active workers, the
group portrait of dysfunction is important. Most often, the pattern of dysfunction observed for
the group will be similar to the pattern of impairment clinically observed in the disease. It is
somewhat like summing early, mild alterations to produce a picture of what is happening to
the nervous system. The pattern or profile of the overall early response provides an indication
of the specificity and the type of action of the particular neurotoxic substance or mixture. In
workplaces with potential exposure to neurotoxic substances, health surveillance of groups of
workers may prove particularly useful for prevention and workplace action in order to avoid
the development of more severe illness (see Figure 2). Workplace studies carried out
throughout the world, with active workers exposed to specific neurotoxic substances or to
mixtures of various chemicals, have provided valuable information on early manifestations of
nervous system dysfunction in groups of exposed workers.
Figure 2. Preventing neurotoxicity at work.

Early symptoms of chronic poisoning

Altered mood states are most often the first symptoms of the initial changes in nervous system
functioning. Irritability, euphoria, sudden mood changes, excessive tiredness, feelings of
hostility, anxiousness, depression and tension are among the mood states most often
associated with neurotoxic exposures. Other symptoms include memory problems,
concentration difficulties, headaches, blurred vision, feelings of drunkenness, dizziness,
slowness, tingling sensation in hands or feet, loss of libido and so on. Although in the early
stages these symptoms are usually not sufficiently severe to interfere with work, they do
reflect diminished well-being and affect one’s capacity to fully enjoy family and social
relations. Often, because of the non-specific nature of these symptoms, workers, employers
and occupational health professionals tend to ignore them and look for causes other than
workplace exposure. Indeed, such symptoms may contribute to or aggravate an already
difficult personal situation.

In workplaces where neurotoxic substances are used, workers, employers and occupational
health and safety personnel should be particularly aware of the symptomatology of early
intoxication, indicative of nervous system vulnerability to exposure. Symptom questionnaires
have been developed for worksite studies and surveillance of workplaces where neurotoxic
substances are used. Table 1 contains an example of such a questionnaire.

Table 1. Chronic symptoms checklist

Symptoms experienced in the past month

1. Have you tired more easily than expected for the type of activity you do?

2. Have you felt light-headed or dizzy?

3. Have you had difficulty concentrating?

4. Have you been confused or disoriented?

5. Have you had trouble remembering things?

6. Have your relatives noticed that you have trouble remembering things?

7. Have you had to make notes to remember things?

8. Have you found it hard to understand the meaning of newspapers?

9. Have you felt irritable?

10. Have you felt depressed?

11. Have you had heart palpitations even when you are not exerting yourself?
12. Have you had a seizure?

13. Have you been sleeping more often than is usual for you?

14. Have you had difficulty falling asleep?

15. Have you been bothered by incoordination or loss of balance?

16. Have you had any loss of muscle strength in your legs or feet?

17. Have you had any loss of muscle strength in your arms or hands?

18. Have you had difficulty moving your fingers or grasping things?

19. Have you had hand numbness and tingling in your fingers lasting for more than a day?

20. Have you had numbness and tingling in your toes lasting more than a day?

21. Have you had headaches at least once a week?

22. Have you had difficulty driving home from work because you felt dizzy or tired?

23. Have you felt “high” from the chemicals used at work?

24. Have you had a lower tolerance for alcohol (takes less to get drunk)?

Source: Taken from Johnson 1987.
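
A simple way such a checklist can be used in group surveillance is to tally positive answers per worker and compare the distributions between exposed and referent groups. The sketch below does only that tally; the responses and any cut-off for follow-up are purely hypothetical, since the source does not define a scoring scheme.

# Tally "yes" answers on the 24-item symptom checklist for each worker.
# All response data below are hypothetical.
from statistics import mean

def symptom_score(answers):
    """answers: sequence of 24 booleans (True = symptom reported in the past month)."""
    return sum(bool(a) for a in answers)

exposed = [[True] * 5 + [False] * 19,
           [True] * 8 + [False] * 16,
           [True] * 3 + [False] * 21]
referent = [[True] * 1 + [False] * 23,
            [True] * 2 + [False] * 22,
            [False] * 24]

print("mean symptoms, exposed:", mean(symptom_score(a) for a in exposed))
print("mean symptoms, referent:", mean(symptom_score(a) for a in referent))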

Early motor, sensory and cognitive changes in chronic poisoning

With increasing exposure, changes can be observed in motor, sensory and cognitive functions
in workers exposed to neurotoxic substances, who do not present clinical evidence of
abnormality. Since the nervous system is complex, and certain areas are vulnerable to specific
chemicals, while others are sensitive to the action of a large number of toxic agents, a wide
range of nervous system functions may be affected by a single toxic agent or a mixture of
neurotoxins. Reaction time, hand-eye coordination, short-term memory, visual and auditory
memory, attention and vigilance, manual dexterity, vocabulary, switching attention, grip
strength, motor speed, hand steadiness, mood, colour vision, vibrotactile perception, hearing
and smell are among the many functions that have been shown to be altered by different
neurotoxic substances.

Important information on the type of early deficits that result from exposure has been
provided by comparing performance between exposed and non-exposed workers and with
respect to the degree of exposure. Anger (1990) provides an excellent review of worksite
neurobehavioural research up to 1989. Table 2 adapted from this article, provides an example
of the type of neuro-functional deficits that have been consistently observed in groups of
active workers exposed to some of the most common neurotoxic substances.

Table 2. Consistent neuro-functional effects of worksite exposures to some leading neurotoxic substances

Functional domain / Mixed organic solvents / Carbon disulphide / Styrene / Organophosphates / Lead / Mercury

Acquisition + +
Affect + + +
Categorization +
Coding + + + +
Colour vision + +
Concept shifting +
Distractibility +
Intelligence + + + + +
Memory + + + + + +
Motor coordination + + + + +
Motor speed + + + + +
Near visual contrast sensitivity +
Odour perception threshold +
Odour identification + +
Personality + + +
Spatial relations + + +
Vibrotactile threshold + + +
Vigilance + + +
Visual field + +
Vocabulary +

Source: Adapted from Anger 1990.

Although at this stage in the continuum from well-being to disease, loss is not in the clinically
abnormal range, there can be health-related consequences associated with such changes. For
example, decreased vigilance and reduced reflexes may put workers in greater danger of
accidents. Smell is used to identify leaks and mask saturation (cartridge breakthrough), and
acute or chronic loss of smell renders one less apt to identify a potentially hazardous situation.
Mood changes may interfere with inter-personal relations at work, socially and in the home.
These initial stages of nervous system deterioration, which can be observed by examining
groups of exposed workers and comparing them to non-exposed workers or with respect to
their degree of exposure, reflect diminished well-being and may be predictive of risk of more
serious neurological problems in the future.

Mental health in chronic poisoning

Neuropsychiatric disorders have long been attributed to exposure to neurotoxic substances.


Clinical descriptions range from affective disorders, including anxiety and depression, to
manifestations of psychotic behaviour and hallucinations. Acute high-level exposure to many
heavy metals, organic solvents and pesticides can produce delirium. “Manganese madness”
has been described in persons with long-term exposure to manganese, and the well-known
“mad hatter” syndrome results from mercury intoxication. Type 2a toxic encephalopathy,
characterized by a sustained change in personality involving fatigue, emotional lability, impaired
impulse control, and changes in general mood and motivation, has been associated with organic solvent exposure.
There is growing evidence from clinical and population studies that personality disorders
persist over time, long after exposure ceases, although other types of impairment may
improve.

On the continuum from well-being to disease, mood changes, irritability and excessive fatigue
are often the very first indications of over-exposure to neurotoxic substances. Although
neuropsychiatric symptoms are routinely surveyed in worksite studies, these are rarely
presented as a mental health problem with potential consequences on mental and social well-
being. For example, changes in mental health status affect one’s behaviour, contributing to
difficult inter-personal relationships and disagreements in the home; these in turn can
aggravate one’s mental state. In workplaces with employee aid programmes, designed to help
employees with personal problems, ignorance of the potential mental health effects of
exposure to neurotoxic substances can lead to treatment dealing with the effects rather than
the cause. It is interesting to note that among the many reported outbreaks of “mass hysteria”
or psychogenic illness, industries with exposure to neurotoxic substances are over-
represented. It is possible that these substances, which for the most part went unmeasured,
contributed to the reported symptoms.

Mental health manifestations of neurotoxin exposure can be similar to those that are caused
by psychosocial stressors associated with poor work organization, as well as psychological
reactions to accidents, very stressful occurrences and severe intoxications, called post-
traumatic stress disorder (as discussed elsewhere in this Encyclopaedia). A good
understanding of the relation between mental health problems and working conditions is
important to initiating adequate preventive and curative actions.

General considerations in assessing early neurotoxic dysfunction

When evaluating early nervous system dysfunction among active workers, a number of
factors must be taken into account. Firstly, many of the neuropsychological and
neurophysiological functions that are examined diminish with age; some are influenced by
culture or educational level. These factors must be taken into account when considering the
relation between exposure and nervous system alterations. This can be done by comparing
groups with similar socio-demographic status or by using statistical methods of adjustment.
There are, however, certain pitfalls that should be avoided. For example, older workers may
have longer work histories, and it has been suggested that some neurotoxic substances may
accelerate ageing. Job segregation may confine poorly educated workers, women and
minorities to jobs with higher exposures. Secondly, alcohol consumption, smoking and drug use,
all of which involve neurotoxic substances, may also affect symptoms and performance. A good
understanding of the workplace is important in unravelling the different factors that contribute
to nervous system dysfunction and in implementing preventive measures.
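
One common way of implementing the statistical adjustment mentioned above is to regress the test score on exposure status together with age and education, so that the exposure effect is estimated net of those covariates. The sketch below uses ordinary least squares on simulated, hypothetical data and illustrates the principle only; it is not a recommended analysis plan for any particular study.

# Ordinary least squares adjustment of a neurobehavioural test score for
# age and years of education; all data are simulated and hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 200
exposed = rng.integers(0, 2, n)            # 0 = referent, 1 = exposed
age = rng.uniform(25, 60, n)
education = rng.uniform(8, 16, n)
# Hypothetical score: worsens with age, improves with education,
# and is lowered by exposure.
score = 60 - 0.3 * age + 1.2 * education - 4.0 * exposed + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), exposed, age, education])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print("adjusted exposure effect on score:", round(beta[1], 2))  # close to -4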

Measuring Neurotoxic Deficits

Written by ILO Content Manager

Neuro-functional Test Batteries

Sub-clinical neurologic signs and symptoms have long been noted among active workers
exposed to neurotoxins; however, it is only since the mid-1960s that research efforts have
focused on the development of sensitive test batteries capable of detecting subtle, mild
changes that are present in the early stages of intoxication, in perceptual, psychomotor,
cognitive, sensory and motor functions, and affect.

The first neurobehavioural test battery for use in worksite studies was developed by Helena
Hänninen, a pioneer in the field of neurobehavioural deficits associated with toxic exposure
(Hänninen Test Battery) (Hänninen and Lindstrom 1979). Since then, there have been
worldwide efforts to develop, refine and, in some cases, computerize neurobehavioural test
batteries. Anger (1990) describes five worksite neurobehavioural test batteries from Australia,
Sweden, Britain, Finland and the United States, as well as two neurotoxic screening batteries
from the United States, that have been used in studies of neurotoxin-exposed workers. In
addition, the computerized Neurobehavioral Evaluation System (NES) and the Swedish
Performance Evaluation System (SPES) have been extensively used around the world. There
are also test batteries designed to assess sensory functions, including measures of vision,
vibrotactile perception threshold, smell, hearing and sway (Mergler 1995). Studies of various
neurotoxic agents using one or another of these batteries have greatly contributed to our
knowledge of early neurotoxic impairment; however, cross-study comparisons have been
difficult since different tests are used and tests with similar names may be administered using
a different protocol.

In an attempt to standardize information from studies on neurotoxic substances, the notion of
a “core” battery was put forward by a working committee of the World Health Organization
(WHO) (Johnson 1987). Based on knowledge at the time of the meeting (1985), a series of
tests were selected to make up the Neurobehavioral Core Test Battery (NCTB), a relatively
inexpensive, hand-administered battery, which has been successfully used in many countries
(Anger et al. 1993). The tests that make up this battery were chosen to cover specific nervous
system domains, which had been previously shown to be sensitive to neurotoxic damage. A
more recent core battery, which comprises both hand-administered and computerized tests,
has been proposed by a workgroup of the United States Agency for Toxic Substances and
Disease Registry (Hutchison et al. 1992). Both batteries are presented in Table 1.

Table 1. Examples of "core" batteries for assessment of early neurotoxic effects

Neurobehavioural Core Test Battery (NCTB), in test order:

1. Motor steadiness: Aiming (Pursuit Aiming II)
2. Attention/response speed: Simple Reaction Time
3. Perceptual motor speed: Digit Symbol (WAIS-R)
4. Manual dexterity: Santa Ana (Helsinki Version)
5. Visual perception/memory: Benton Visual Retention
6. Auditory memory: Digit Span (WAIS-R, WMS)
7. Affect: POMS (Profile of Mood States)

Agency for Toxic Substances and Disease Registry Adult Environmental Neurobehavioural Test Battery (AENTB):

1. Vision: Visual acuity, near contrast sensitivity
2. Colour vision: Lanthony D-15 desaturated test
3. Somatosensory perception: Vibrotactile threshold
4. Motor strength: Dynamometer (including fatigue assessment)
5. Motor coordination: Santa Ana
6. Higher intellectual function: Raven Progressive Matrices (Revised)
7. Motor coordination: Fingertapping Test (one hand)¹
8. Sustained attention (cognitive), speed (motor): Simple Reaction Time (SRT) (extended)¹
9. Cognitive coding: Symbol-digit with delayed recall¹
10. Learning and memory: Serial Digit Learning¹
11. Index of educational level: Vocabulary¹
12. Mood: Mood Scale¹

¹ Available in computerized version; WAIS = Wechsler Adult Intelligence Scale; WMS =
Wechsler Memory Scale.

The authors of both core batteries stress that, although the batteries are useful to standardize
results, they by no means provide complete assessment of nervous system functions.
Additional tests should be used depending upon the type of exposure; for example, a test
battery to assess nervous system dysfunction among manganese-exposed workers would
include more tests of motor functions, particularly those that require rapid alternating
movements, while one for methylmercury-exposed workers would include visual field testing.
The choice of tests for any particular workplace should be made on the basis of current
knowledge on the action of the particular toxin or toxins to which the persons are exposed.

More sophisticated test batteries, administered and interpreted by trained psychologists, are an
important part of the clinical assessment for neurotoxic poisoning (Hart 1988). These include
tests of intellectual ability, attention, concentration and orientation, memory, visuo-
perceptive, constructive and motor skills, language, conceptual and executive functions, and
psychological well-being, as well as an assessment of possible malingering. The profile of the
patient’s performance is examined in the light of past and present medical and psychological
history, as well as exposure history. The final diagnosis is based on a constellation of deficits
interpreted in relation to the type of exposure.

Measures of Emotional State and Personality

Studies of the effects of neurotoxic substances usually include measures of affective or
personality disturbance, in the form of symptom questionnaires, mood scales or personality
indices. The NCTB, described above, includes the Profile of Mood States (POMS), a
quantitative measure of mood. Using 65 qualifying adjectives of mood states over the past
8 days, the degrees of tension, depression, hostility, vigour, fatigue and confusion are derived.
Most comparative workplace studies of neurotoxic exposure indicate differences between
exposed and non-exposed workers. A recent study of styrene-exposed workers shows dose-response
relations between post-shift urinary mandelic acid level, a biological indicator of styrene, and
scale scores of tension, hostility, fatigue and confusion (Sassine et al. 1996).
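
A dose-response relation of the kind reported by Sassine et al. can be summarized, in its simplest form, by correlating the biological exposure indicator with each mood scale score. The sketch below computes a Pearson correlation on hypothetical values; it is not a reconstruction of the published analysis.

# Correlation between post-shift urinary mandelic acid (biological indicator
# of styrene exposure) and a POMS tension score; all values are hypothetical.
import numpy as np

mandelic_acid = np.array([0.1, 0.3, 0.5, 0.8, 1.2, 1.5, 2.0])  # hypothetical levels
tension_score = np.array([4, 5, 7, 8, 9, 12, 13])               # hypothetical POMS scores

r = np.corrcoef(mandelic_acid, tension_score)[0, 1]
print(f"Pearson r = {r:.2f}")  # a positive r suggests a dose-response trend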

Lengthier and more sophisticated tests of affect and personality, such as the Minnesota
Multiphasic Personality Inventory (MMPI), which reflect both emotional states and personality
traits, have been used primarily for clinical evaluation, but also in workplace studies. The
MMPI likewise provides an assessment of symptom exaggeration and inconsistent responses.
In a study of microelectronics workers with a history of exposure to neurotoxic substances,
results from the MMPI indicated clinically significant levels of depression, anxiety, somatic
concerns and disturbances of thinking (Bowler et al. 1991).

Electrophysiological Measures

Electrical activity generated by the transmission of information along nerve fibres and from
one cell to another, can be recorded and used in the determination of what is happening in the
nervous system of persons with toxic exposures. Interference with neuronal activity can slow
down transmission or modify the electrical pattern. Electrophysiological recordings require
precise instruments and are most frequently carried out in a laboratory or hospital setting.
There have, however, been efforts to develop more portable equipment for use in workplace
studies.

Electrophysiological measures record a global response of a large number of nerve cells
and/or fibres, and a fair amount of damage must exist before it can be adequately recorded.
Thus, for most neurotoxic substances, symptoms, as well as sensory, motor and cognitive
changes, usually can be detected in groups of exposed workers before electrophysiological
differences are observed. For clinical examination of persons with suspected neurotoxic
disorders, electrophysiological methods provide information concerning the type and extent of
nervous system damage. A review of electrophysiological techniques used in the detection of
early neurotoxicity in humans is provided by Seppäläinen (1988).

The nerve conduction velocities of sensory nerves (conducting towards the brain) and motor nerves
(conducting away from the brain) are measured by electroneurography (ENG). By stimulating the
nerve at different anatomical positions and recording the response at another site, the conduction
velocity can be calculated from the distance between the stimulation points and the difference in
response latencies. This technique can provide information about the large myelinated fibres; slowing
of conduction velocity occurs when demyelination is present. Reduced conduction velocities
have frequently been observed among lead-exposed workers, in the absence of neurological
symptoms (Maizlish and Feo 1994). Slow conduction velocities of peripheral nerves have also
been associated with other neurotoxins, such as mercury, hexacarbons, carbon disulphide,
styrene, methyl-n-butyl ketone, methyl ethyl ketone, and certain solvent mixtures. The
trigeminal nerve (a cranial nerve serving the face) is affected by trichloroethylene exposure. However, if the
toxic substance acts primarily on thinly myelinated or unmyelinated fibres, conduction
velocities usually remain normal.
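
A minimal sketch of the calculation implied here: when a motor nerve is stimulated at two sites and the muscle response is recorded distally, the conduction velocity of the intervening segment is the distance between the stimulation points divided by the difference in response latencies. The distances and latencies below are hypothetical example values.

# Motor nerve conduction velocity from a two-site stimulation study.
# Distances and latencies are hypothetical example values.

def conduction_velocity(distance_mm: float,
                        proximal_latency_ms: float,
                        distal_latency_ms: float) -> float:
    """Return conduction velocity in metres per second."""
    dt = proximal_latency_ms - distal_latency_ms
    if dt <= 0:
        raise ValueError("proximal latency must exceed distal latency")
    return distance_mm / dt  # mm/ms is numerically equal to m/s

# Example: 240 mm between elbow and wrist stimulation sites,
# latencies 8.0 ms (elbow) and 3.2 ms (wrist) -> 50 m/s.
print(conduction_velocity(240, 8.0, 3.2))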

Electromyography (EMG) is used for measuring the electrical activity in muscles.
Electromyographic abnormalities have been observed among workers with exposure to such
substances as n-hexane, carbon disulphide, methyl-n-butyl ketone, mercury and certain
pesticides. These changes are often accompanied by changes in ENG and symptoms of
peripheral neuropathy.

Changes in brainwaves are evidenced by electroencephalography (EEG). In patients with
organic solvent poisoning, local and diffuse slow wave abnormalities have been observed.
Some studies report evidence of dose-related EEG alterations among active workers, with
exposure to organic solvent mixtures, styrene and carbon disulphide. Organochlorine
pesticides can cause epileptic seizures, with EEG abnormalities. EEG changes have been
reported with long-term exposure to organophosphorus and zinc phosphide pesticides.

Evoked potentials (EPs) provide another means of examining nervous system activity in
response to a sensory stimulus. Recording electrodes are placed on the specific area of the
brain that responds to the particular stimuli, and the latency and amplitude of the event-related
slow potential are recorded. Increased latency and/or reduced peak amplitudes have been
observed in response to visual, auditory and somatosensory stimuli for a wide range of
neurotoxic substances.

Electrocardiography (ECG or EKG) records changes in the electrical conduction of the heart.
Although it is not often used in studies of neurotoxic substances, changes in ECG waves have
been observed among persons with exposure to trichloroethylene. Electro-oculographic
(EOG) recordings of eye movements have shown alterations among workers with exposure to
lead.

Brain Imaging Techniques

In recent years, different techniques have been developed for brain imaging. Computed
tomographic (CT) images reveal the anatomy of the brain and spinal cord. They have been
used to study cerebral atrophy among solvent-exposed workers and patients; however, the
results are not consistent. Magnetic resonance imaging (MRI) examines the nervous system
using a powerful magnetic field. It is particularly useful clinically to rule out an alternative
diagnosis, such as brain tumours. Positron Emission Tomography (PET), which yields images
of biochemical processes, has been successfully used to study changes in the brain induced by
manganese intoxication. Single photon emission computed tomography (SPECT) provides
information about brain metabolism and may prove to be an important tool in understanding
how neurotoxins act on the brain. These techniques are all very costly, and not readily
available in most hospitals or laboratories throughout the world.

Nervous System: Overview

Written by ILO Content Manager

Knowledge of the nervous system in general and of the brain and human behaviour in
particular are of paramount importance to those who are dedicated to a safe and healthy
environment. Work conditions, and exposures that directly affect the operations of the brain,
influence the mind and behaviour. To evaluate information, to make decisions and to react in
a consistent and reasonable manner to perceptions of the world require that the nervous
system functions properly and that behaviour not be damaged by dangerous conditions, such
as accidents (e.g., a fall from a poorly designed ladder) or exposure to hazardous levels of
neurotoxic chemicals.

Damage to the nervous system can cause changes in sensory input (loss of vision, hearing,
smell, etc.), can hinder the capacity to control movement and body functions and/or can affect
the brain’s capacity to treat or store information. In addition, altered nervous system
functioning can cause behavioural or psychological disorders. Mood and personality changes
are a common occurrence following physical or organic damage to the brain. As our
knowledge develops, we are learning more about the way in which nervous system processes
are modified. Neurotoxic substances can cross the brain’s natural barrier and directly interfere
with its intricate workings. Although some substances have a particular affinity to certain
areas of the nervous system, most neurotoxins have widespread effects, targeting cell
processes involved in membrane transport, internal cellular chemical reactions, liberation of
secretory substances, and so on.

Damage to the various components of the nervous system can occur in different ways:

- direct physical injury from falling objects, collisions, blows or undue pressure on nerves
- changes in the internal environment, such as insufficient oxygen due to asphyxiants and heat exposure
- interference in the cellular processes through chemical action by substances, such as metals, organic solvents and pesticides.

The insidious and multifaceted development of many nervous system disorders requires
persons working in the field of occupational health to adopt different but complementary
approaches to the study, understanding, prevention and treatment of the problem. Early
alterations can be detected in groups of active, exposed workers using sensitive measures of
impairment. Identification of initial dysfunction can lead to preventive actions. In the latter
stages, a good clinical knowledge is required and differential diagnosis is essential to the
adequate treatment and care of disabled workers.
Although chemical substances are mostly examined one by one, it should be remembered that
in many workplaces mixtures of potentially neurotoxic chemicals are used, exposing workers
to what can be called a “cocktail”. In processes such as printing, painting, cleaning, in poorly
ventilated offices, in laboratories, pesticide application, microelectronics and many other
sectors, workers are exposed to chemical mixtures. Although there may be information on
each one of the substances separately, we have to consider their combined harmfulness and possible
additive or even synergistic effects on the nervous system. In some cases of multiple
exposure, each particular chemical may be present in very small quantity, even below the
detection level of exposure assessment techniques; however, when all are added together, the
total concentration can be very high.
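
One widely used screening convention for such mixtures, for example in threshold limit value practice, treats effects on the same organ system as additive: the sum of each component's concentration divided by its own exposure limit should not exceed 1. The sketch below applies that rule to hypothetical concentrations and limits; it is a screening heuristic and does not capture synergistic interactions.

# Additive mixture rule: sum(C_i / OEL_i) should not exceed 1.
# Concentrations and limits below are hypothetical illustration values.

def mixture_index(components):
    """components: list of (concentration, exposure_limit) pairs in the same units."""
    return sum(c / limit for c, limit in components)

solvent_mixture = [
    (20.0, 100.0),   # hypothetical solvent A: 20 ppm measured, 100 ppm limit
    (15.0, 50.0),    # hypothetical solvent B
    (40.0, 200.0),   # hypothetical solvent C
]

index = mixture_index(solvent_mixture)
print(f"mixture index = {index:.2f}")  # a value above 1 would indicate over-exposure
print("over-exposure" if index > 1 else "below combined limit")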

The reader should be aware of three major difficulties in reviewing facts about the nervous
system within the scope of this Encyclopaedia.

First, the understanding of occupational diseases affecting the nervous system and behaviour
has changed substantially as new approaches to viewing brain-behavioural relationships have
developed. The main interest of characterization of gross morphological changes that occur
due to mechanical trauma to the nervous system—particularly, but not exclusively to the
brain—was followed by an interest in the absorption of neurotoxic agents by the nervous
system; interest in the study of cellular mechanisms of nervous system pathology; and finally,
the search for the molecular basis of these pathologic processes began to grow. These
approaches coexist today and all contribute information for evaluating the working conditions
affecting the brain, mind, and behaviour.

Second, the information generated by neuroscientists is staggering. The third edition of the
book Principles of Neural Science, edited by Kandel, Schwartz and Jessell and published
in 1991 (one of the most valuable reviews of the field), weighs 3.5 kg and is more than
1,000 pages long.

Third, it is very difficult to review knowledge about the functional organization of the nervous
system as it applies to all niches of occupational health and safety. Until about 25 years ago,
the theoretical views supporting the health experts who specialize in the detection, monitoring,
prevention and clinical treatment of workers who have absorbed a neurotoxic agent sometimes
did not overlap with the theoretical views applied to workers' brain trauma and the behavioural
manifestations of minimal brain damage. Behavioural manifestations said to be the consequence
of the disruption of specific chemical pathways in the brain were the exclusive territory of the
neurotoxicologist, whereas structural tissue damage to specific regions of the brain, and to
distant neural structures linked to the area where the lesions occurred, was the explanation
invoked by neurologists. It is only in the past few years that converging views have begun to
appear.

With this in mind, this chapter addresses issues important to the understanding of the nervous
system and the effects of workplace conditions on its functioning. It begins with a description
of the anatomy and physiology, followed by a section on neurotoxicity, which reviews
exposure, outcomes and prevention.

Since the nervous system is central to the body’s well-being, many non-chemical hazards can
likewise affect its normal functioning. Many of these are considered in different chapters
dealing with these hazards. Traumatic head injuries are included in First Aid, heat stress is
considered in the article “Effects of heat stress and work in the heat”, and decompression
sickness is reviewed in the article “Gravitational stress”. Hand-arm vibration (“Hand-transmitted
vibration”) and repetitive movement (“Chronic outcomes, musculoskeletal”), both risk factors for
peripheral neuropathies, are considered in the chapter Musculoskeletal System.

The chapter ends with a review of special issues and the outlook for future research avenues.

Occupational Neuroepidemiology

Written by ILO Content Manager

Olav Axelson*

*Adapted from Axelson 1996.

Early knowledge about the neurotoxic effects of occupational exposures appeared through
clinical observations. The observed effects were more or less acute and concerned exposure to
metals such as lead and mercury or solvents like carbon disulphide and trichloroethylene.
With time, however, more chronic and clinically less obvious effects of neurotoxic agents
have been assessed through modern examination methods and systematic studies of larger
groups. Still, the interpretation of the findings has been controversial and debated such as the
chronic effects of solvent exposure (Arlien-Søborg 1992).

The difficulties met in interpreting chronic neurotoxic effects depend on both the diversity
and vagueness of symptoms and signs and the associated problem of defining a proper disease
entity for conclusive epidemiological studies. For example, in solvent exposure, the chronic
effects might include memory and concentration problems, tiredness, lack of initiative, affect
lability, irritability, and sometimes dizziness, headache, alcohol intolerance and reduced
libido. Neurophysiological methods have also revealed various functional disturbances, again
difficult to condense into any single disease entity.

Similarly, a variety of neurobehavioural effects also seems to occur due to other occupational
exposures, such as moderate lead exposure or welding with some exposure to aluminium,
lead, and manganese or exposure to pesticides. Again there are also neurophysiological or
neurological signs, among others, polyneuropathy, tremor, and disturbance of equilibrium, in
individuals exposed to organochlorine, organophosphorus and other insecticides.

In view of the epidemiological problems involved in defining a disease entity out of the many
types of neurobehavioural effects referred to, it has also become natural to consider some
clinically, more or less well-defined neuropsychiatric disorders in relation to occupational
exposures.

Since the 1970s several studies have especially focused on solvent exposure and the psycho-
organic syndrome, when of disabling severity. More recently also Alzheimer’s dementia,
multiple sclerosis, Parkinson’s disease, amyotrophic lateral sclerosis, and related conditions
have attracted interest in occupational epidemiology.
Regarding solvent exposure and the psycho-organic syndrome (or toxic chronic
encephalopathy in clinical occupational medicine, when exposure is taken into diagnostic
account), the problem of defining a proper disease entity was apparent and first led to
considering en bloc the diagnoses of encephalopathia, dementia, and cerebral atrophy, but
neurosis, neurasthenia, and nervositas were also included as not necessarily distinct from each
other in medical practice (Axelson, Hane and Hogstedt 1976). Recently, more specific disease
entities, such as organic dementia and cerebral atrophy, have also been associated with
solvent exposure (Cherry, Labréche and McDonald 1992). The findings have not been totally
consistent, however, as no excess of “presenile dementia” appeared in a large-scale case-
referent study in the United States with as many as 3,565 cases of various neuropsychiatric
disorders and 83,245 hospital referents (Brackbill, Maizlish and Fischbach 1990). However,
in comparison with bricklayers, there was about a 45% excess of disabling neuropsychiatric
disorders among white male painters, except spray painters.

Occupational exposures also seem to play a role for disorders more specific than the psycho-
organic syndrome. Hence, in 1982, an association between multiple sclerosis and solvent
exposure from glues was first indicated in the Italian shoe industry (Amaducci et al. 1982).
This relationship has been considerably strengthened by further studies in Scandinavia (Flodin
et al. 1988; Landtblom et al. 1993; Grönning et al. 1993) and elsewhere, so that 13 studies
with some information on solvent exposure could be considered in a review (Landtblom et al.
1996). Ten of these studies provided enough data for inclusion in a meta-analysis, showing
about a twofold risk for multiple sclerosis among individuals with solvent exposure. Some
studies also associate multiple sclerosis with radiological work, welding, and work with
phenoxy herbicides (Flodin et al. 1988; Landtblom et al. 1993). Parkinson’s disease seems to
be more common in rural areas (Goldsmith et al. 1990), especially at younger ages (Tanner
1989). More interestingly, a study from Calgary, Canada, showed a threefold risk of the disease
associated with herbicide exposure (Semchuk, Love and Lee 1992).

All the case persons who recalled specific exposures reported exposure to phenoxy herbicides
or thiocarbamates. One of them recalled exposure to paraquat, which is chemically similar to
MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine), an inducer of a Parkinson-like
syndrome. Paraquat workers have not yet been found to suffer from such a syndrome,
however (Howard 1979). Case-referent studies from Canada, China, Spain, and Sweden have
indicated a relation with exposure to unspecified industrial chemicals, pesticides, and metals,
especially manganese, iron and aluminium (Zayed et al. 1990).

In a study from the United States, an increased risk of motor neuron disease (encompassing
amyotrophic lateral sclerosis, progressive bulbar palsy and progressive muscular atrophy)
appeared in connection with welding and soldering (Armon et al. 1991). Welding also
appeared as a risk factor, as did electricity work, and also work with impregnating agents in a
Swedish study (Gunnarsson et al. 1992). A hereditary disposition to neurodegenerative and thyroid
disease, combined with solvent exposure and male gender, was associated with a risk as high as 15.6.
Other studies also indicate that exposure to lead and solvents could be of importance
(Campbell, Williams and Barltrop 1970; Hawkes, Cavanagh and Fox 1989; Chio, Tribolo and
Schiffer 1989; Sienko et al. 1990).

For Alzheimer’s disease, no clear indication of any occupational risk appeared in a meta-analysis of eleven case-referent studies (Graves et al. 1991), but more recently an increased risk has been connected with blue-collar work (Fratiglioni et al. 1993). Another recent study, which also included the oldest age groups, indicated that solvent exposure could be a rather strong risk factor (Kukull et al. 1995). Perhaps even more surprising was the recent suggestion that Alzheimer’s disease might be related to exposure to electromagnetic fields (Sobel et al. 1995). Both of these studies are likely to stimulate new investigations along the indicated lines.

Hence, in view of the current perspectives in occupational neuroepidemiology, as briefly outlined, there seems to be a reason for conducting additional work-related studies of
different, hitherto more or less neglected, neurological and neuropsychiatric disorders. It is
not unlikely that there are some contributing effects from various occupational exposures, in
the same manner as we have seen for many cancer types. In addition, as in etiologic cancer
research, new clues suggesting ultimate causes or triggering mechanisms behind some of the
serious neurological disorders may be obtained from occupational epidemiology.

Preventing Neurotoxicity at Work

Written by ILO Content Manager

A worker not exposed to a neurotoxic substance will never develop any adverse neurotoxic
health effects from that substance. Zero exposure leads to total protection against neurotoxic
health effects. This is the essence of all primary prevention measures.

Toxicity Testing

New chemical compounds introduced into workplaces and other occupational settings should already have been tested for neurotoxicity. Failure to perform pre-market toxicity testing can lead to worker exposure and potentially severe adverse health effects. The introduction of methyl n-butyl ketone into a workplace in the United States is a classic example of the possible hazards of untested neurotoxicants being introduced into the workplace (Spencer and Schaumburg 1980).

Engineering Controls

Engineering controls (e.g., ventilation systems, closed production facilities) are the best means of keeping workers’ exposures below permissible exposure limits. Closed chemical processes that keep all toxicants from being released into the workplace environment are the ideal. Where this is not possible, exhaust ventilation systems designed to pull contaminated air and vapours away from workers are useful when well designed, adequately maintained and properly operated.

Personal Protection Equipment

In situations where engineering controls are unavailable to reduce workers’ contact with
neurotoxicants, personal protective equipment must be provided. Because workplace
neurotoxicants are many, and routes of exposure differ across workplaces and work
conditions, the kind of personal protective equipment must be carefully selected for the
situation at hand. For example, the neurotoxicant lead can exert its toxicity when lead-laden
dust is breathed and when lead particles are ingested in food or water. Therefore, personal
protective equipment must protect against both routes of exposure. This would mean
respiratory protection equipment and adoption of personal hygiene measures to prevent
consumption of lead-contaminated food or beverages. For many neurotoxicants (like
industrial solvents), absorption of the substance through intact skin is a main route of
exposure. Impermeable gloves, aprons and other appropriate equipment must therefore be
provided to prevent skin absorption. This would be in addition to engineering controls or
personal respiratory protection equipment. Considerable planning is needed to match personal protective equipment to the specific work being performed.

Administrative Controls

Administrative controls consist of managerial efforts to reduce workplace hazards through planning, training, employee rotation on job sites, changes in production processes, and product substitution (Urie 1992), as well as strict adherence to all existing regulations.

Workers’ Right-to-Know

While the employer bears the responsibility for providing a workplace or work experience
that does not harm workers’ health, workers have the responsibility to follow workplace rules
that are intended to protect them. Workers must be in a position to know what actions to take
in protecting themselves. This means workers have the right to know about the neurotoxicity
of substances with which they come into contact, and what protective measures they can take.

Worker Health Surveillance

Where conditions permit, workers should be regularly given medical examinations. A regular
evaluation by occupational physicians or other medical specialists constitutes worker health
surveillance. For workers known to be working with or around neurotoxicants, physicians
should be knowledgeable of the effects of exposure. For example, low-level exposure to many
organic solvents will produce symptoms of fatigue, sleep disorders, headaches and memory
disturbances. For heavy doses of lead, wrist drop and peripheral nerve impairment would be
signs of lead intoxication. Any signs and symptoms of neurotoxicant intoxication should
result in reassignment of the worker to an area free of the neurotoxicant, and efforts to reduce
workplace levels of the neurotoxicant.

Nervous System References

Amaducci, L, C Arfaioli, D Inzitari, and M Marchi. 1982. Multiple sclerosis among shoe and leather workers: An epidemiological survey in Florence. Acta Neurol Scand 65:94-103.

Anger, WK. 1990. Worksite neurobehavioral research: Results, sensitive methods, test batteries and the transition from laboratory data to human health. Neurotoxicology 11:629-720.

Anger, WK, MG Cassitto, Y Liang, R Amador, J Hooisma, DW Chrislip, D Mergler, M Keifer, and J Hörtnagel. 1993. Comparison of performance from three continents on the WHO-recommended neurobehavioral core test battery (NCTB). Environ Res 62:125-147.

Arlien-Søborg, P. 1992. Solvent Neurotoxicity. Boca Raton: CRC Press.

Armon, C, LT Kurland, JR Daube, and PC O’Brian. 1991. Epidemiologic correlates of sporadic amyotrophic lateral sclerosis. Neurology 41:1077-1084.

Axelson, O. 1996. Where do we go in occupational neuroepidemiology? Scand J Work Environ Health 22:81-83.

Axelson, O, M Hane, and C Hogstedt. 1976. A case-referent study on neuropsychiatric disorders among workers exposed to solvents. Scand J Work Environ Health 2:14-20.

Bowler, R, D Mergler, S Rauch, R Harrison, and J Cone. 1991. Affective and personality disturbance among women former microelectronics workers. J Clin Psychiatry 47:41-52.

Brackbill, RM, N Maizlish, and T Fischbach. 1990. Risk of neuropsychiatric disability among painters in the United States. Scand J Work Environ Health 16:182-188.

Campbell, AMG, ER Williams, and D Barltrop. 1970. Motor neuron disease and exposure to lead. J Neurol Neurosurg Psychiatry 33:877-885.

Cherry, NM, FP Labrèche, and JC McDonald. 1992. Organic brain damage and occupational solvent exposure. Br J Ind Med 49:776-781.

Chio, A, A Tribolo, and D Schiffer. 1989. Motorneuron disease and glue exposure. Lancet 2:921.

Cooper, JR, FE Bloom, and RT Roth. 1986. The Biochemical Basis of Neuropharmacology. New York: Oxford Univ. Press.

Dehart, RL. 1992. Multiple chemical sensitivity—What is it? Multiple chemical sensitivities. Addendum to: Biologic markers in immunotoxicology. Washington, DC: National Academy Press.

Feldman, RG. 1990. Effects of toxins and physical agents on the nervous system. In Neurology in Clinical Practice, edited by WG Bradley, RB Daroff, GM Fenichel, and CD Marsden. Stoneham, Mass: Butterworth.

Feldman, RG and LD Quenzer. 1984. Fundamentals of Neuropsychopharmacology. Sunderland, Mass: Sinauer Associates.

Flodin, U, B Söderfeldt, H Noorlind-Brage, M Fredriksson, and O Axelson. 1988. Multiple sclerosis, solvents and pets: A case-referent study. Arch Neurol 45:620-623.

Fratiglioni, L, A Ahlbom, M Viitanen, and B Winblad. 1993. Risk factors for late-onset Alzheimer’s disease: A population-based case-control study. Ann Neurol 33:258-266.

Goldsmith, JR, Y Herishanu, JM Abarbanel, and Z Weinbaum. 1990. Clustering of Parkinson’s disease points to environmental etiology. Arch Environ Health 45:88-94.

Graves, AB, CM van Duijn, V Chandra, L Fratiglioni, A Heyman, AF Jorm, et al. 1991. Occupational exposure to solvents and lead as risk factors for Alzheimer’s disease: A collaborative re-analysis of case-control studies. Int J Epidemiol 20 Suppl. 2:58-61.

Grönning, M, G Albrektsen, G Kvåle, B Moen, JA Aarli, and H Nyland. 1993. Organic solvents and multiple sclerosis. Acta Neurol Scand 88:247-250.

Gunnarsson, L-G, L Bodin, B Söderfeldt, and O Axelson. 1992. A case-control study of motor neuron disease: Its relation to heritability and occupational exposures, particularly solvents. Br J Ind Med 49:791-798.

Hänninen, H and K Lindstrom. 1979. Neurobehavioral Test Battery of the Institute of Occupational Health. Helsinki: Institute of Occupational Health.

Hagberg, M, H Morgenstern, and M Kelsh. 1992. Impact of occupations and job tasks on the prevalence of carpal tunnel syndrome. Scand J Work Environ Health 18:337-345.

Hart, DE. 1988. Neuropsychological Toxicology: Identification and Assessment of Human Neurotoxic Syndromes. New York: Pergamon Press.

Hawkes, CH, JB Cavanagh, and AJ Fox. 1989. Motorneuron disease: A disorder secondary to solvent exposure? Lancet 1:73-76.

Howard, JK. 1979. A clinical survey of paraquat formulation workers. Br J Ind Med 36:220-223.

Hutchinson, LJ, RW Amsler, JA Lybarger, and W Chappell. 1992. Neurobehavioral Test Batteries for Use in Environmental Health Field Studies. Atlanta: Agency for Toxic Substances and Disease Registry (ATSDR).

Johnson, BL. 1987. Prevention of Neurotoxic Illness in Working Populations. Chichester: Wiley.

Kandel, ER, HH Schwartz, and TM Kessel. 1991. Principles of Neural Sciences. New York: Elsevier.

Kukull, WA, EB Larson, JD Bowen, WC McCormick, L Teri, ML Pfanschmidt, et al. 1995. Solvent exposure as a risk factor for Alzheimer’s disease: A case-control study. Am J Epidemiol 141:1059-1071.

Landtblom, A-M, U Flodin, M Karlsson, S Pålhagen, O Axelson, and B Söderfeldt. 1993. Multiple sclerosis and exposure to solvents, ionizing radiation and animals. Scand J Work Environ Health 19:399-404.

Landtblom, A-M, U Flodin, B Söderfeldt, C Wolfson, and O Axelson. 1996. Organic solvents and multiple sclerosis: A synthesis of the current evidence. Epidemiology 7:429-433.

Maizlish, D and O Feo. 1994. Alteraciones neuropsicológicas en trabajadores expuestos a neurotóxicos. Salud de los Trabajadores 2:5-34.

Mergler, D. 1995. Behavioral neurophysiology: Quantitative measures of sensory toxicity. In Neurotoxicology: Approaches and Methods, edited by L Chang and W Slikker. New York: Academic Press.

O’Donoghue, JL. 1985. Neurotoxicity of Industrial and Commercial Chemicals. Vol. I & II. Boca Raton: CRC Press.

Sassine, MP, D Mergler, F Larribe, and S Bélanger. 1996. Détérioration de la santé mentale chez des travailleurs exposés au styrène. Rev Epidémiol Méd Soc Santé Publ 44:14-24.

Semchuk, KM, EJ Love, and RG Lee. 1992. Parkinson’s disease and exposure to agricultural work and pesticide chemicals. Neurology 42:1328-1335.

Seppäläinen, AMH. 1988. Neurophysiological approaches to the detection of early neurotoxicity in humans. Crit Rev Toxicol 14:245-297.

Sienko, DG, JD Davis, JA Taylor, and BR Brooks. 1990. Amyotrophic lateral sclerosis: A case-control study following detection of a cluster in a small Wisconsin community. Arch Neurol 47:38-41.

Simonsen, L, H Johnsen, SP Lund, E Matikainen, U Midtgård, and A Wennberg. 1994. Evaluation of neurotoxicity data: A methodological approach to classification of neurotoxic chemicals. Scand J Work Environ Health 20:1-12.

Sobel, E, Z Davanipour, R Sulkava, T Erkinjuntti, J Wikström, VW Henderson, et al. 1995. Occupations with exposure to electromagnetic fields: A possible risk factor for Alzheimer’s disease. Am J Epidemiol 142:515-524.

Spencer, PS and HH Schaumburg. 1980. Experimental and Clinical Neurotoxicology. Baltimore: Williams & Wilkins.

Tanner, CM. 1989. The role of environmental toxins in the etiology of Parkinson’s disease. Trends Neurosci 12:49-54.

Urie, RL. 1992. Personal protection from hazardous materials exposures. In Hazardous Materials Toxicology: Clinical Principles of Environmental Health, edited by JB Sullivan and GR Krieger. Baltimore: Williams & Wilkins.

World Health Organization (WHO). 1978. Principles and Methods of Evaluating the Toxicity of Chemicals, Part 1 and 2. EHC, No. 6, Part 1 and 2. Geneva: WHO.

World Health Organization and Nordic Council of Ministers. 1985. Chronic Effects of Organic Solvents On the Central Nervous System and Diagnostic Criteria. EHC, No. 5. Geneva: WHO.

Zayed, J, G Ducic, G Campanella, JC Panisset, P André, H Masson, et al. 1990. Facteurs environnementaux dans l’étiologie de la maladie de Parkinson. Can J Neurol Sci 17:286-291.



8. Renal-Urinary System
Chapter Editor: George P. Hemstreet

Renal-Urinary Systems

Written by ILO Content Manager

The renal and urinary systems comprise a complex series of organs which together function to filter wastes from the blood and to manufacture, store and discharge urine. These
organ systems are vital to homeostasis through maintaining fluid balance, acid-base balance
and blood pressure. The primary organs of the renal-urinary systems are the two kidneys and
the urinary bladder. In the process of filtering waste products from the blood the kidneys are
potentially exposed to high concentrations of endogenous and exogenous toxic substances.
Thus, some kidney cells are exposed to concentrations a thousand times higher than in blood.

Problems that result in damage to the kidney may be pre-renal (affecting the blood supply to the kidney), renal (affecting the kidney itself) or post-renal (affecting any point along the path the urine travels from the kidney to the end of the urethra or penis). Post-renal problems are usually obstructive in nature; a common site of obstruction is the prostate, situated between the bladder and the urethra. Pre-existing disease of the prostate, bladder or ureters,
particularly infection, obstruction or foreign bodies such as stones, can compromise kidney
function and increase susceptibility to either acquired or genetic defects.

Understanding the microanatomy and molecular mechanisms of the kidneys and bladder is
important to assessing susceptibility to, and monitoring and prevention of, occupational
exposures. Toxicants seem to target specific parts of the kidney or bladder and result in the
expression of specific biomarkers directly related to the damaged segment. Historically,
predisposition to disease has been viewed from the epidemiological perspective of identifying
a group of workers at risk. Today, with better understanding of the fundamental mechanisms
of disease, individual risk assessment through the use of biomarkers of susceptibility,
exposure, effect and disease is on the horizon. New ethical issues arise because of the pressure
to develop cost-effective strategies to protect workers from occupational hazards. The
pressure arises, in part, because genetic testing is gaining acceptance for evaluating disease
predisposition and biomarkers of exposure and effect can serve as intermediate end-points at
which intervention may be beneficial. The purpose of this chapter is to provide a medical
review of the renal and urinary systems on the basis of which guidelines for assessing and
reducing individual risk in the workplace could be set forth with due account taken of the
ethical aspects involved.

Anatomy and Pathophysiology of the Kidney

The human kidney is a complex organ which functions to filter wastes from the blood through
the production of urine. The two kidneys also perform a variety of other vital functions
including maintaining homeostasis, regulating blood pressure, osmotic pressure and acid-base
balance. The kidneys receive 25% of the total cardiac output of blood, potentially exposing
them to endogenous and exogenous toxins.
The kidneys are located on each side of the spine in the lower portion of the back. Each
weighs about 150 grams and is about the size of an orange. The kidney consists of three
layers: the cortex (outer layer), the medulla and the renal pelvis. Blood flows into the cortex
and medulla through the renal artery and branches into increasingly smaller arteries. Each of
the arteries ends in a blood filtration unit called a nephron. A healthy kidney contains
approximately 1.2 million nephrons, strategically positioned within the cortex and medulla.

A nephron consists of the glomerulus (a group of tiny blood vessels) surrounded by Bowman’s capsule (a two-layer membrane) that opens into a convoluted tubule. The fluid
portion of blood, plasma, is forced through the glomerulus into Bowman’s capsule and then,
as filtered plasma, passes into the convoluted tubule. About 99% of the water and essential
nutrients that have been filtered are reabsorbed by the tubule cells and passed into the
capillaries which surround the convoluted tubule. The unfiltered blood which remains in the
glomerulus also flows into capillaries and returns through the renal vein to the heart.

The nephrons appear as long, looped ducts comprised of multiple segments each of which
performs a variety of different functions designed to maintain the body’s homeostatic
mechanisms. Figure 1 depicts a nephron and its orientation within the renal cortex and the
medulla. Each nephron segment has a differential blood supply regulating the ionic gradient.
Certain chemicals may directly affect specific segments of the nephron acutely or chronically
depending on the type and dose of xenobiotic exposure. Depending on the segment of the
microanatomy targeted, various aspects of kidney function may be affected.

Figure 1. Relationships of the vascular supply, the glomerulus and the tubular components of
the nephron to each other and the orientation of these components within the renal cortex and
medulla
Blood vessels to the kidney supply only the glomerular and tubular elements, delivering
wastes to be filtered and absorbing nutrients, proteins and electrolytes in addition to supplying
oxygen for organ viability. Ninety per cent of the blood flow is to the cortex, with a gradient
decrease to the medulla. Such differential blood flow, and the positioning of the nephron
units, are vital to the countercurrent mechanism which further concentrates the urine and
potential nephrotoxins.

The glomerulus is positioned between the afferent and efferent arterioles. The efferent arterioles form a web of capillaries around each nephron unit, with the exception of the distal tubule, which is juxtaposed to the afferent blood supply of the glomerulus. The afferent and efferent arterioles, innervated by sympathetic nerves, respond to autonomic stimulation and to hormonal mediators such as vasopressin (antidiuretic hormone, ADH). The juxtaglomerular apparatus, which includes an area called the macula densa, produces renin, a mediator of blood pressure, in response to osmotic changes and blood pressure. Renin acts on a liver-derived precursor to generate angiotensin, which is converted to the octapeptide angiotensin II; angiotensin II regulates blood flow to the kidneys, preferentially targeting the afferent arterioles and the mesangial cells of the glomerulus.

The glomerulus allows only proteins of a certain size and charge to pass through during filtration. Plasma filtration is controlled by a balance of osmotic and hydrostatic pressures. Specialized sugar molecules, glycosaminoglycans, provide a negative (anionic) charge which inhibits, by electrostatic repulsion, the filtration of negatively charged materials. The three-layered glomerular filtration barrier includes multiple epithelial foot processes that increase the filtration area and create the pores through which the filtrate passes. Damage to the
specialized basement membrane or the capillary endothelium may permit albumin, a type of
protein, to be spilled in increased amounts into the urine. The presence of an excess amount of
albumin or other micro-proteins in the urine serves as a marker of glomerular or tubular
damage.

The renal interstitium is the space between the nephron units and is more prominent in the
central medullary portion than in the outer cortex. Within the interstitium are interstitial cells
that are in close proximity to the medullary blood vessels and tubule cells. With ageing there
may be an increased prominence of interstitial cells in the cortex with associated fibrosis and
scarring. The interstitial cells contain lipid droplets and may be involved in the control of
blood pressure with the release of vascular relaxing or constricting factors. Chronic disease of
the interstitium may affect the glomerulus and tubules, or conversely, disease of the
glomerulus and tubules may affect the interstitium. Thus, in end-stage kidney disease it is
sometimes difficult to precisely define the pathological mechanisms of renal failure.

The proximal tubules reabsorb 80% of the sodium, water and chloride, and much of the filtered urea. Each proximal tubule has three segments, with the last segment (P-3) the most vulnerable to xenobiotic (toxic foreign substance) exposures. When the proximal cells are damaged by heavy metals such as chromium, the concentrating ability of the kidney is impaired and the urine may be more dilute. Toxicity to the P-3 segment results in the release into the urine of enzymes such as intestinal alkaline phosphatase, N-acetyl-beta-D-glucosaminidase (NAG) and Tamm-Horsfall protein, constituents associated with the brush border of the proximal tubule cells, which increases the effective absorptive area.

Diagnosis and Testing for Nephrotoxicity

Serum creatinine is another substance filtered by the glomerulus but minimally absorbed by
the proximal tubules. Damage to the glomerulus results in its inability to remove toxins
produced by the body and there is an accumulation of serum creatinine. Because serum
creatinine is a product of muscle metabolism and dependent on the patient’s body mass, it has
low sensitivity and specificity for measuring renal function, but it is used frequently because it
is convenient. A more sensitive and specific test is to quantitate the filtrate by measuring the creatinine (Cr) clearance, calculated by the general formula CCr = (UCr × V) / PCr, where UCr is the urinary creatinine concentration, V is the urine flow rate (so that UCr × V is the amount of creatinine excreted per unit time) and PCr is the plasma creatinine concentration. However, creatinine clearance is more complex, in terms of
sampling for the test, and is thus impractical for occupational testing. Isotope clearance tests
performed by radioactive labelling of compounds such as ortho-iodohippurate which are also
cleared by the kidney are also effective, but not practical or cost-effective in the workplace
setting. Differential function of individual kidneys may be determined using differential renal
nuclear scans or selective catheterization of both kidneys by passage of a catheter from the
bladder up through the ureter into the kidney. However these methods also are not readily
employed for large-scale workplace testing. Because kidney function may be reduced by 70 to
80% prior to a detectable elevation in serum creatinine, and because other existing tests are
either impractical or costly, non-invasive biomarkers are needed to detect low-dose acute
intermittent exposures to the kidney. A number of biomarkers for detecting low-dose kidney
damage or early changes associated with carcinogenesis are discussed in the section on
biomarkers.
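
As a minimal illustration of the clearance formula above (the example values are hypothetical and the function is not taken from any clinical software), a measured creatinine clearance from a timed urine collection can be computed as follows:

```python
def creatinine_clearance(u_cr_mg_dl, p_cr_mg_dl, urine_volume_ml, collection_minutes):
    """Measured creatinine clearance, CCr = (UCr x V) / PCr, in mL/min.

    u_cr_mg_dl         -- urinary creatinine concentration (mg/dL)
    p_cr_mg_dl         -- plasma (serum) creatinine concentration (mg/dL)
    urine_volume_ml    -- urine volume collected over the timed period (mL)
    collection_minutes -- duration of the collection (min); 24 h = 1,440 min
    """
    v_ml_per_min = urine_volume_ml / collection_minutes   # urine flow rate V
    return (u_cr_mg_dl * v_ml_per_min) / p_cr_mg_dl

# Hypothetical 24-hour collection: UCr 100 mg/dL, PCr 1.0 mg/dL, 1,440 mL of urine.
# CCr = (100 x 1.0) / 1.0 = 100 mL/min, i.e. roughly normal filtration.
print(round(creatinine_clearance(100.0, 1.0, 1440.0, 1440.0), 1))
```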

Although the proximal tubule cells reabsorb 80% of the filtered fluid, the countercurrent mechanism and the distal collecting ducts fine-tune the amount of fluid reabsorbed under the regulation of ADH. ADH is released from the pituitary gland deep within the brain in response to changes in osmotic pressure and fluid volume. Exogenous compounds such as lithium may damage the distal
collecting ducts and result in renal diabetes insipidus (passage of dilute urine). Inherited
genetic disorders may also cause this defect. Xenobiotics normally affect both kidneys but
complexities of interpretation arise when exposures are difficult to document or when there is
pre-existing renal disease. Consequently, high-dose accidental exposures have served as
markers for identifying nephrotoxic compounds in many instances. The majority of
occupational exposures occur at low doses, and are masked by the reserve filtration and repair
compensatory capability (hypertrophy) of the kidney. The challenge which remains is to
detect low-dose exposures clinically undetected by current methods.

Anatomy and Pathophysiology of the Bladder

The urinary bladder is a hollow pouch in which urine is stored; normally, it contracts on
demand for controlled emptying through the urethra. The bladder is located in the front, lower
part of the pelvic cavity. The bladder is joined on either side to the two kidneys by muscular,
peristaltic tubes, the ureters, which carry the urine from the kidneys to the bladder. The renal
pelvis, ureters and bladder are lined with transitional epithelium. The outer layer of the
urothelium consists of umbrella cells coated with a carbohydrate, glycosaminoglycan (GAG),
layer. The transitional cells extend to the basement membrane of the bladder. The deep basal
cells are thus protected by the umbrella cells but if the protective GAG layer is damaged the
basal cells are susceptible to injury from urinary components. The microanatomy of the
transitional epithelium allows it to expand and contract, and even with normal shedding of the
umbrella cells the protective integrity of the basal cells is maintained.
The balanced neurological system that regulates storage and emptying may be damaged
during electroshock or other trauma, such as spinal cord injury, occurring in the workplace. A
major cause of death among quadriplegics is loss of bladder function resulting in chronic
renal damage secondary to infection and stone formation. Chronic infection from incomplete
emptying due to neurogenic or obstructive causes such as pelvic fracture or other trauma to
the urethra and subsequent stricture formation is common. Persistent bacterial infection or
stone formation that results in chronic inflammatory and malignant conditions of the bladder
may be caused by reduced resistance (i.e., susceptibility) to exogenous exposures in the
workplace.

Molecules associated with damage and repair within the bladder serve as potential
intermediate end-point markers for both toxic and malignant conditions because many
biochemical alterations occur during the changes related to cancer development. Like the
kidney, bladder cells have active enzyme systems such as the cytochrome P-450 which may
activate or inactivate xenobiotics. The functional activity of the enzymes is determined by
genetic inheritance and exhibits genetic polymorphism. Voided urine contains cells exfoliated
from the kidney, ureters, bladder, prostate and urethra. These cells provide targets, through
the use of biomarkers, for evaluating changes in bladder and renal pathology. Remembering
Virchow’s comment that all diseases start in the cells focuses our attention on the importance
of cells, which are the molecular mirror of exposure episodes.

Environmental and Occupational Toxicology

A considerable volume of epidemiological data supports the causal role of occupational exposures in bladder cancer, but the precise contributions of workplace
exposures to kidney failure and kidney cancer are difficult to estimate. In a recent report, it
was estimated that up to 10% of end-stage renal disease could be attributed to workplace
exposures, but results are difficult to validate because of changing environmental and
chemical hazards, variations in diagnostic criteria and the often long latency period between
exposure and disease. It is estimated that function of two-thirds of the nephrons of both
kidneys may be lost before renal damage is clinically evident. However, evidence is mounting
that what were previously thought to be socioeconomic or ethnic causes of nephrotoxicity
may in fact be environmental, adding validity to the role of toxicants in disease development.

Nephrotoxicity may be directly related to the xenobiotic, or the xenobiotic may go through a
single-step or multi-step activation or inactivation in the kidney or the liver. Activation of
xenobiotics is regulated by complex sets of enzymes identified as Phase I, II and Ancillary.
One Phase I enzyme system is the cytochrome P-450 oxidative system; Phase I enzymes act through oxidation, reduction or hydrolysis pathways. Phase II enzymes catalyse conjugation, while ancillary enzymes support drug
metabolism (Table 1 lists these enzymes). Various animal models have provided insight into
metabolic mechanisms, and studies of kidney slices and microdissection of the kidney
nephron units in tissue culture add insight into the pathological mechanisms. However,
species and individual variables are considerable and, although mechanisms may be similar,
caution is mandated in extrapolating results to humans in the workplace. The primary issues
now are to determine which xenobiotics are nephrotoxic and/or carcinogenic, and to what
target sites, and to develop methods to identify more accurately subclinical toxicity in the
renal-urinary system.
Table 1. Drug-metabolism enzymes in the kidney

Phase I (catalyse oxidation, reduction or hydrolysis): cytochrome P-450; microsomal FAD-containing mono-oxygenase; alcohol and aldehyde dehydrogenases; epoxide hydrolase; prostaglandin synthase; monoamine oxidase.

Phase II (generally catalyse conjugation): esterase; N-acetyltransferase; GSH S-transferase; thiol S-methyltransferase; UDP glucuronosyltransferase; sulphotransferase.

Ancillary (function in a secondary or supporting manner to facilitate drug metabolism): GSH peroxidase; GSSG reductase; superoxide dismutase; catalase; DT-diaphorase; NADPH-generating pathways.

Source: National Research Council 1995.
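
Purely as an illustration of how the classification in Table 1 might be encoded as a simple data structure for programmatic lookup (the groupings follow the table; the dictionary and function names are hypothetical, not an established interface):

```python
# Drug-metabolism enzymes of the kidney, grouped as in Table 1.
KIDNEY_ENZYMES = {
    "Phase I": [
        "Cytochrome P-450", "Microsomal FAD-containing mono-oxygenase",
        "Alcohol and aldehyde dehydrogenases", "Epoxide hydrolase",
        "Prostaglandin synthase", "Monoamine oxidase",
    ],
    "Phase II": [
        "Esterase", "N-Acetyltransferase", "GSH S-transferase",
        "Thiol S-methyltransferase", "UDP glucuronosyltransferase", "Sulphotransferase",
    ],
    "Ancillary": [
        "GSH peroxidase", "GSSG reductase", "Superoxide dismutase",
        "Catalase", "DT-diaphorase", "NADPH-generating pathways",
    ],
}

def enzyme_phase(name):
    """Return the group (Phase I, Phase II or Ancillary) to which an enzyme belongs."""
    for phase, enzymes in KIDNEY_ENZYMES.items():
        if name in enzymes:
            return phase
    return None

print(enzyme_phase("N-Acetyltransferase"))   # -> "Phase II"
```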

Non-malignant Renal-Urinary Disorders

Glomerulonephritis is an inflammatory reactive condition of the glomerular basement membrane or capillary endothelium. Acute and chronic forms of the disease are caused by a
variety of infectious, autoimmune or inflammatory conditions or by exposure to toxic agents.
Glomerulonephritis is associated with vasculitis, either systemic or limited to the kidney.
Secondary chronic damage to the glomerulus also occurs during an intense cycle of assault
from nephrotoxicity to the interstitium of the tubule cells. Epithelial glomerular crescents or
proliferative forms are a hallmark of glomerulonephritis in kidney biopsy specimens. Blood,
red blood cell (RBC) casts, or protein in the urine, and hypertension are symptoms of
glomerulonephritis. A change in blood proteins may occur with lowering of certain fractions
of the serum complement, a complex set of interacting proteins involved in the immune
system, host defenses and clotting functions. Direct and indirect evidence supports the
significance of xenobiotics as a causal factor of glomerulonephritis.

The glomerulus normally prevents the oxygen-carrying red blood cells from passing through its filter. After centrifugation, normal urine contains only about one RBC in 10 ml when viewed with high-power light microscopy. When RBCs leak through the glomerular filter they may become individually dysmorphic, and RBC casts may form that assume the cylindrical shape of the collecting tubules.
In support of the importance of toxins as an aetiological factor in glomerulonephritis,
epidemiological studies reveal increased evidence of toxic exposures in patients who have
undergone dialysis or who have been diagnosed with glomerulonephritis. Evidence of
glomerular injury from acute hydrocarbon exposure is rare, but has been observed in
epidemiological studies, with odds ratios ranging from 2.0 to 15.5. One example of acute
toxicity is Goodpasture’s disease which results from hydrocarbon stimulation of antibody
production to liver and lung proteins that cross-react with the basement membrane.
Exacerbation of nephrotic syndrome, large amounts of protein in the urine, has also been
observed in individuals re-exposed to organic solvents, while other studies reveal an historic
relationship with a spectrum of renal disorders. Other solvents such as degreasing agents,
paints and glues are implicated in more chronic forms of the disease. Awareness of the
mechanisms of solvent excretion and reabsorption assists in identifying biomarkers because
even minimal damage to the glomerulus results in increased leakage of RBCs into the urine.
Although RBCs in the urine are a cardinal sign of glomerular injury, it is important to rule out
other causes of haematuria.

Interstitial and tubular nephritis. As mentioned previously, the aetiology of chronic end-stage
renal disease is frequently difficult to ascertain. It may be primarily glomerular, tubular or
interstitial in origin and occur because of multiple acute episodes or chronic, low-dose
processes. Chronic interstitial nephritis involves fibrosis and tubular atrophy. In its acute
form, the disease is expressed by marked inflammatory infiltrate with accompanying fluid
collection in the interstitial spaces. Interstitial nephritis may involve primarily the interstitium,
or be manifest as a secondary event from chronic tubular injury, or it may result from post-
renal causes such as obstruction. Prostaglandin-A synthase, an enzyme, is found primarily in
the interstitium and is associated with the endoplasmic reticulum, a part of the protein
machinery of the cell. Certain xenobiotics, such as benzidine and nitrofurans, are reducing
co-substrates for prostaglandin synthase and are toxic to the tubular interstitium.

Tubular and interstitial injury may occur from exposure to cadmium, lead or a variety of
organic solvents. Most of the exposures are chronic, low-dose and toxicity is masked by the
renal function reserve and the ability of the kidney to recover some functions. Interstitial
nephritis may also result from vascular injury as caused, for example, by chronic carbon
monoxide exposure. Proximal tubule cells are the most vulnerable to toxic substances in the
blood because of intense exposure to toxins which filter through the glomerulus, internal
enzyme systems that activate toxicants and the selective transport of toxicants. The epithelium
in the various segments of the proximal tubule has slightly different qualities of lysosomal
peroxidase enzymes and other compounds of genetic machinery. Thus, chromium exposure
may result in both interstitial and tubular injury. Damage to the collecting tubules may occur
when specific enzymes activate various xenobiotics such as chloroform, acetaminophen and
p-aminophenol, and antibiotics such as Loradine. A secondary result of damage to the
collecting ducts is the inability of the kidney to acidify the urine and the subsequent
development of metabolic acidosis.

Nephrogenic diabetes insipidus, the condition in which urine becomes dilute, may be genetic
or acquired. The genetic form involves mutations of the ADH receptors which are located on
the basal lateral membrane of the collecting ducts, in the descending loop of Henle. ADH
fine-tunes the reabsorption of water and certain ions such as potassium. Acquired diabetes
insipidus may involve the tubule cells or the associated interstitium, both of which may be
diseased because of a variety of conditions. Nephrogenic diabetes insipidus may accompany
end-stage renal disease because of diffuse involvement of the interstitium. Consequently, the
interstitium is unable to maintain a hypertonic environment for passive water movement from
the tubular collecting ducts. Conditions which may cause diffuse interstitial changes are
pyelonephritis, sickle cell anaemia and obstructive uropathy. The possible association of these
conditions in relation to occupational exposure is an increased susceptibility of the kidney to
xenobiotics. A limited number of nephrotoxic compounds have been identified that especially
target the collecting tubule cells. Urinary frequency, nocturia (more frequent voiding at night) and polydipsia (chronic thirst) are symptoms of nephrogenic diabetes insipidus. Movement of fluid through the collecting duct cells occurs via channels that form in response to ADH, a process that depends on the microtubular function of the cells; consequently, drugs such as colchicine, which disrupt microtubules, may blunt the response to ADH. Two drugs which appear to act by slightly different mechanisms to correct this ADH unresponsiveness are hydrochlorothiazide and indomethacin, a prostaglandin synthase inhibitor.

Lithium-induced diabetes insipidus correlates with the duration of lithium therapy, average
serum lithium level and total lithium carbonate dose. Interestingly, lithium concentrates in the
collecting ducts and interferes with cyclic AMP, a second messenger in the ADH signalling pathway.
Exposure to other compounds such as methoxyflurane and demeclocycline, the latter of which
is used for the treatment of acne, also results in nephrogenic diabetes insipidus through an
alternative pathway rendering the epithelial cells unresponsive to ADH.

Hypertension, or elevated blood pressure, the second most common cause of end-stage renal
disease, is associated with multiple aetiological pathways. Hypertension can be caused by
diabetic nephropathy, obstructive nephropathy, glomerulonephritis, polycystic kidney disease,
pyelonephritis and vasculitis, and many of those diseases are associated with exposure to
toxic compounds. A limited number of occupational exposures are directly associated with
hypertension. One is lead, which causes renal vascular ischaemia and injury. The mechanism
for lead-induced hypertension is probably regulated through the juxtaglomerular apparatus,
the release of renin and the subsequent generation of angiotensin II from its liver-derived precursor. Drugs
implicated in hypertension include amphetamines, oestrogens and oral contraceptives,
steroids, cis-platinum, alcohol and tricyclic antidepressants. Hypertension may be gradual in
onset or acute and malignant in nature. Malignant hypertension in which diastolic pressure is
greater than 110 mm Hg is associated with nausea, vomiting and severe headache, and
constitutes a medical emergency. Numerous drugs are available for the treatment of
hypertension but over-treatment may result in decreased renal perfusion and a further loss of
renal function. Whenever possible, withdrawal of the nephrotoxicant is the treatment of
choice.

Differential diagnosis of haematuria and proteinuria

Haematuria (RBCs in the urine) and pyuria (white blood cells in the urine) are primary
symptoms of many diseases of the renal-urinary system, and for categorical purposes may be
considered non-specific cellular biomarkers. Because of their importance they are discussed
separately here. A challenge to the occupational practitioner is to determine if haematuria
signifies a permanent underlying medical condition that may be potentially life threatening or
if it is attributable to occupational exposures. Clinical assessment of haematuria requires
standardization and determination of whether it is pre-renal, renal or post-renal in origin.

Haematuria may be derived from lesions in the kidney per se or anywhere along the pathway
of voided urine. Sites of origin include the kidney, collecting renal pelvis, ureters, bladder,
prostate and urethra. Because of the serious diseases associated with haematuria, a single
episode warrants a medical or urological evaluation. Greater than one RBC per high-power
field can be a signal of disease, but significant haematuria may be missed on microscopic
analysis in the presence of hypotonic (dilute) urine which may lyse RBCs. Pseudo-haematuria
may be caused by beets, berries, vegetable dyes and concentrated urates. Initial haematuria
suggests a urethral origin, terminal haematuria is usually prostatic in origin, and blood
throughout voiding is from the bladder, kidney or ureter. Gross haematuria is associated with
bladder tumours in 21% of the cases, but microscopic haematuria is much less frequently
associated (2.2 to 12.5%).

Finding dysmorphic cells when haematuria is quantitatively assessed suggests an upper tract
origin, particularly when associated with red blood cell casts. Understanding haematuria in
relation to proteinuria provides additional information. The glomerular filtration device
almost completely excludes proteins of a molecular weight greater than 250,000 Daltons,
while low molecular weight proteins are freely filtered and absorbed normally by the tubule
cells. The presence of high molecular weight proteins in the urine suggests lower tract
bleeding, while low molecular weight proteins are associated with tubular injury. Evaluation of the ratios of α1-microglobulin to albumin and α2-macroglobulin to albumin helps delineate glomerular disease from tubulo-interstitial nephropathy and from lower tract bleeding potentially associated with urothelial neoplasia and other post-renal causes such as urinary tract infections.
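
As an illustration of the ratio logic just described, the sketch below classifies a urinary protein pattern from marker-to-albumin ratios; the numerical cut-offs are placeholder assumptions introduced purely for illustration, not validated clinical decision limits.

```python
def proteinuria_pattern(albumin, alpha1_microglobulin, alpha2_macroglobulin,
                        tubular_cutoff=0.1, bleeding_cutoff=0.02):
    """Classify a urinary protein pattern from marker-to-albumin ratios.

    albumin, alpha1_microglobulin, alpha2_macroglobulin -- urinary
        concentrations in the same units (e.g., mg/L).
    tubular_cutoff, bleeding_cutoff -- illustrative placeholder thresholds,
        NOT validated clinical decision limits.
    """
    ratio_a1m = alpha1_microglobulin / albumin   # low-MW protein vs. albumin
    ratio_a2m = alpha2_macroglobulin / albumin   # high-MW protein vs. albumin
    if ratio_a2m > bleeding_cutoff:
        return "pattern suggests post-renal (lower-tract) bleeding"
    if ratio_a1m > tubular_cutoff:
        return "pattern suggests tubulo-interstitial (tubular) damage"
    return "pattern suggests predominantly glomerular proteinuria"

print(proteinuria_pattern(albumin=300.0, alpha1_microglobulin=10.0, alpha2_macroglobulin=1.0))
```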

A special diagnostic problem arises when two or more disease processes that cause the same
symptoms are present concurrently. For example, haematuria is seen in both urothelial
neoplasia and urinary tract infections. In a patient with both diseases, if the infection is treated
and resolved, the cancer would remain. Therefore, it is important to identify the true cause of
the symptoms. Haematuria is present in 13% of screened populations; approximately 20% of individuals with haematuria have significant renal or bladder disorders, and 10% of those will go on to develop genitourinary malignancy. Consequently, haematuria is an important biomarker of disease that must be appropriately evaluated.
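
To make these prevalence figures concrete, the following short calculation applies them to a hypothetical screened workforce of 10,000 (the workforce size is an assumption introduced purely for illustration):

```python
screened = 10_000                                # hypothetical screened workforce
with_haematuria = screened * 0.13                # 13% show haematuria
significant_disease = with_haematuria * 0.20     # ~20% of those have significant renal or bladder disorders
future_malignancy = significant_disease * 0.10   # ~10% of those develop genitourinary malignancy

print(with_haematuria, significant_disease, future_malignancy)   # 1300.0 260.0 26.0
```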

Clinical interpretation of haematuria is enhanced by a knowledge of the patient’s age and sex,
as indicated in Table 2 which shows causes of haematuria relative to the age and sex of the
patient. Other causes of haematuria include renal vein thrombosis, hypercalcuria and
vasculitis, as well as trauma such as jogging or other sports, and occupational events or
exposures. Clinical evaluation of haematuria requires an x ray of the kidney, intravenous
pyelogram (IVP), to rule out upper tract diseases including kidney stones and tumours, and a
cystoscopy (looking into the bladder through a lighted instrument) to exclude bladder,
prostate or urothelial cancers. Subtle vaginal causes must be excluded in women. Regardless
of a patient’s age, a clinical evaluation is indicated if haematuria occurs and, depending on the
identified aetiology, sequential follow-up evaluations may be indicated.

Table 2. The most common causes of haematuria, by age and sex

0–20 years: acute glomerulonephritis; acute urinary tract infection; congenital urinary tract anomalies with obstruction

20–40 years: acute urinary tract infection; stones; bladder tumour

40–60 years (males): bladder tumour; stones; acute urinary tract infection

40–60 years (females): acute urinary tract infection; stones; bladder tumour

60+ years (males): benign prostatic hyperplasia; bladder tumour; acute urinary tract infection

60+ years (females): bladder tumours; acute urinary tract infection

Source: Wyker 1991.

The use of recently identified biomarkers in conjunction with conventional cytology for
evaluation of haematuria helps to assure that no occult or incipient malignancy is missed (see
next section on biomarkers). For the occupational specialist, determining whether haematuria
is a result of toxic exposure or occult malignancy is important. Knowledge of exposure and
the patient’s age are critical parameters for making an informed clinical management
decision. A recent study has demonstrated that together haematuria and biomarker analysis on
exfoliated urinary cells from the bladder were the two best markers for detecting premalignant
bladder lesions. Haematuria is observed in all cases of glomerular injury, in only 60% of
patients with bladder cancer and in only 15% of patients with malignancies of the kidney
itself. Thus, haematuria remains a cardinal symptom of renal and post-renal disease, but the
final diagnosis may be complex.

Tests for nephrotoxicity: biomarkers

Historically, monitoring of toxins in the work environment has been the primary method of
identifying risk. However, not all toxicants are known and, therefore, cannot be monitored.
Also, susceptibility is a factor in whether xenobiotics will affect individuals.

Figure 2. Categories of biomarkers.

Biomarkers provide new opportunities for defining individual risk. For descriptive purposes
and to provide a framework for interpretation, biomarkers have been classified according to
the schema depicted in Figure 2. As in other diseases, biomarkers of nephrotoxicity and
genitourinary toxicity may be related to susceptibility, exposure, effect or disease. Biomarkers
may be genotypic or phenotypic, and may be functional, cellular or soluble in urine, blood or
other body fluids. Examples of soluble markers are proteins, enzymes, cytokines and growth
factors. Biomarkers may be assayed as the gene, message or protein product. These variable
systems add to the complexity of biomarker evaluation and selection. One advantage of
assaying the protein is that it is the functional molecule. The gene may not be transcribed and
the quantity of message may not correspond to the protein product. A list of criteria for
biomarker selection is shown in Table 3.

Table 3. Criteria for biomarker selection

Clinical utility: strong biomarker; sensitivity; specificity; negative predictive value; positive predictive value; functional role; sequence in oncogenesis

Assay considerations: stability of reagent; cost of reagent; fixation requirements; reproducibility of the assay; machine-sensible parameters; contribution to biomarker profile; adaptability to automation

Source: Hemstreet et al. 1996.
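
Several of the clinical-utility criteria in Table 3 (sensitivity, specificity and the predictive values) can be computed from a simple two-by-two validation table; the sketch below uses hypothetical counts for a candidate biomarker and is not drawn from the cited source.

```python
def assay_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, positive and negative predictive value
    from a 2x2 validation table (true/false positives and negatives)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "positive predictive value": tp / (tp + fp),
        "negative predictive value": tn / (tn + fn),
    }

# Hypothetical validation of a candidate urinary biomarker against biopsy results.
print(assay_performance(tp=45, fp=10, fn=5, tn=140))
```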

The international scientific commitment to map the human genome made possible by
advances in molecular biology established the basis for identifying biomarkers of
susceptibility. Most instances of human disease, especially those resulting from
environmental exposure to toxicants, involve a constellation of genes reflecting marked
genetic diversity (genetic polymorphism). An example of such a gene product, as mentioned
previously, is the P-450 oxidative enzyme system which may metabolize xenobiotics in the
liver, kidney or bladder. Susceptibility factors may also control the basic mechanism for DNA
repair, influence the susceptibility of various signalling pathways important to tumourigenesis
(i.e., growth factors) or be related to inherited conditions that predispose to disease. An
important example of an inherited susceptibility factor is the slow or fast acetylation
phenotype that regulates the acetylation and inactivation of certain aromatic amines known to
cause bladder cancer. Biomarkers of susceptibility may include not only genes that regulate
the activation of xenobiotics but also proto-oncogenes and suppressor-oncogenes. The control
of tumour cell growth involves a number of complex, interacting systems. These include a
balance of positive (proto) oncogenes and negative (suppressor) oncogenes. Proto-oncogenes
control normal cell growth and development, while suppressor-oncogenes control normal
cellular division and differentiation. Other genes may contribute to pre-existing conditions
such as a propensity to renal failure triggered by underlying conditions such as polycystic
kidney disease.

A biomarker of exposure may be the xenobiotic itself, its metabolite or markers such as DNA adducts. In some instances the biomarker may be bound to a protein.
Biomarkers of exposure may also be biomarkers of effect, if the effect is transient. If a
biomarker of effect persists, it may become a biomarker of disease. Useful biomarkers of
effect have a high association with a toxicant and are indicative of exposure. For disease
detection, expression of the biomarker in close sequence to the onset of disease will have the
highest specificity. The expected sensitivity and specificity of a biomarker depends on the risk
versus benefit of the intervention. For instance, a biomarker such as F-actin, a cytoskeletal
protein differentiation marker, that appears altered in early carcinogenesis may have a poor
specificity for detection of pre-cancerous states because not all individuals with an abnormal
marker will progress to disease. It may, however, be useful for selecting individuals and
monitoring them while undergoing chemoprevention, provided the therapy is non-toxic.
Understanding the time-frame and functional linkage between individual biomarkers is
extremely important to individual risk assessment and to comprehending the mechanisms of
carcinogenesis and nephrotoxicity.

Biomarkers of nephrotoxicity

Biomarkers of nephrotoxicity may be related to the aetiology of kidney failure (i.e., pre-renal,
renal or post-renal) and the mechanisms involved in the pathogenesis of the process. This
process includes cellular damage and repair. Toxic injury can affect the cells, glomerulus,
interstitium or tubules with release of corresponding biomarkers. Xenobiotics may affect
more than one compartment or may cause biomarker changes because of the interdependence
of cells within the compartment. Inflammatory changes, autoimmune processes and
immunological processes further promote the release of biomarkers. Xenobiotics may target
one compartment in some circumstances and another under different conditions. One example
is mercury which is, acutely, nephrotoxic to the proximal tubule while chronically it affects
the arterioles. Response to injury can be divided into several major categories including
hypertrophy, proliferation, degeneration (necrosis and apoptosis, or programmed cell death)
and membrane alterations.

The majority of susceptibility factors are related to non-xenobiotic-associated renal disease. However, 10% of renal failure cases are attributed to environmental exposures to toxic
compounds or iatrogenic induction by various compounds, such as antibiotics, or procedures
such as administration of kidney x-ray contrast to a diabetic. In the workplace, identifying
subclinical renal failure prior to potential additional nephrotoxic stress has potential practical
utility. If a compound is a suspected nephrotoxicant and it produces an effect specifically in the causal pathway of disease, intervention to reverse the effect is a possibility. Thus,
biomarkers of effect eliminate many of the problems of calculating exposure and defining
individual susceptibility. Statistical analysis of biomarkers of effect in relation to biomarkers
of susceptibility and exposure should improve marker specificity. The more specific the biomarker of effect, the smaller the sample size required for scientifically identifying potential toxins.

Biomarkers of effect are the most important class of markers and link exposure to
susceptibility and disease. We have previously addressed the combining of cellular and
soluble biomarkers to differentiate between haematuria originating in the upper tract or the
lower tract. A list of soluble biomarkers potentially related to cellular nephrotoxicity is shown
in Table 4. To date, none of these alone or as multiple biomarker panels detects subclinical
toxicity with adequate sensitivity. Some problems with using soluble biomarkers are lack of
specificity, enzyme instability, the dilutional effect of urine, variations in renal function, and
non-specific protein interactions that may cloud the specificity of analysis.
Table 4. Potential biomarkers linked to cell injury

Immunological factors: humoral (antibodies and antibody fragments, components of the complement cascade, and coagulation factors); cellular (lymphocytes, mononuclear phagocytes, and other marrow-derived effectors such as eosinophils, basophils, neutrophils and platelets)

Lymphokines

Major histocompatibility antigens

Growth factors and cytokines: platelet-derived growth factor, epidermal growth factor, transforming growth factor (TGF), tumour-necrosis factor, interleukin-1, etc.

Lipid mediators: prostaglandins, thromboxanes, leukotrienes, and platelet activating factor

Extracellular-matrix components: collagens, procollagen, laminin, fibronectin

Adhesion molecules

Reactive oxygen and nitrogen species

Transcription factors and proto-oncogenes: c-myc, c-fos, c-jun, c-Ha-ras, c-Ki-ras, and Egr-1

Endothelin

Heat shock proteins

Source: Finn, Hemstreet et al. in National Research Council 1995.

One soluble growth factor with potential clinical application is urinary epidermal growth
factor (EGF) which may be excreted by the kidney and is also altered in patients with
transitional cell carcinoma of the bladder. Quantitation of urinary enzymes has been
investigated but the usefulness of this has been limited by the inability to determine the origin
of the enzyme and lack of assay reproducibility. The use of urinary enzymes and their
widespread acceptance has been slow because of the restrictive criteria mentioned previously.
Enzymes evaluated include alanine aminopeptidase, NAG and intestinal alkaline phosphatase. NAG
is perhaps the most widely accepted marker for monitoring proximal tubule cell injury
because of its localization in the S3 segment of the tubule. Because the precise cell of origin
and pathological cause of urinary enzyme activity are unknown, interpretation of results is
difficult. Furthermore, drugs, diagnostic procedures and co-existing diseases such as
myocardial infarction may cloud the interpretation.

An alternative approach is to use monoclonal antibody biomarkers to identify and quantitate tubular cells in urine from the various nephron segments. The utility of this approach
will depend on maintaining the integrity of the cell for quantification. This requires
appropriate fixation and sample handling. Monoclonal antibodies are now available which
target specific tubule cells and distinguish, for example, proximal tubule cells from distal
tubule cells or convoluted tubule cells. Transmission microscopy cannot effectively resolve
differences between leukocytes and various types of tubule cells in contrast to electron
microscopy which has been effective in detecting transplant rejection. Techniques such as
high-speed quantitative fluorescence image analysis of tubular cells stained with monoclonal
antibodies should solve this problem. In the near future, it should be possible to detect
subclinical nephrotoxicity with a high degree of certainty as exposure occurs.

Biomarkers of malignant disease

Solid cancers arise in many cases from a field of biochemically altered cells which may or
may not be histologically or cytologically altered. Technologies such as quantitative
fluorescence image analysis capable of detecting biomarkers associated with premalignant
conditions with certainty provide the horizon for targeted chemoprevention. Biochemical
alterations may occur in a varied or ordered process. Phenotypically, these changes are
expressed by a gradual morphological progression from atypia to dysplasia and finally to
overt malignancy. Knowledge of the “functional role” of a biomarker and “when in the
sequence of tumorigenesis it is expressed” assists in defining its utility for identifying
premalignant disease, for making an early diagnosis and for developing a panel of biomarkers
to predict tumour recurrence and progression. A paradigm for biomarker evaluation is
evolving and requires the identification of single and multiple biomarker profiles.

Bladder cancer appears to develop along two separate pathways: a low-grade pathway
seemingly associated with alterations on chromosome 9 and a second pathway associated with
P-53 suppressor gene genetically altered on chromosome 17. Clearly, multiple genetic factors
are related to cancer development, and defining the genetic factors in each individual is a
difficult task, particularly when the genetic pathway must be linked to a complexity of
perhaps multiple exposures. In epidemiological studies, exposures over prolonged intervals
have been difficult to reconstruct. Batteries of phenotypic and genotypic markers are being
identified to define individuals at risk in occupational cohorts. One profile of phenotypic
biomarkers and their relationship to bladder cancer is shown in Figure 3, which illustrates that
G-actin, a precursor protein to the cytoskeletal protein F-actin, is an early differentiation
marker and may be followed by sequential alterations of other intermediate end-point markers
such as M344, DD23 and DNA ploidy. The strongest biomarker panels for detecting
premalignant disease and overt cancer, and for prognostication, remain to be determined. As
machine-sensible biochemical criteria are defined it may be possible to detect disease risk at
prescribed points in the disease continuum.

Figure 3. Four biomarkers, G-actin, P-300, DD23 and DNA, in relation to tumour progression
and response to surgical treatment and chemoprevention.
Diagnosis and management of work-related renal-urinary disease

Pre-existing renal disease

Changes in health care delivery systems worldwide bring into focus issues of insurability and
protection of workers from additional exposure. Significant pre-existing renal disease is
manifest by increased serum creatinine, glucosuria (sugar in the urine), proteinuria,
haematuria and dilute urine. Systemic underlying causes such as diabetes and hypertension
should be ruled out immediately and, depending on the age of the patient, other congenital
aetiologies such as polycystic kidneys should be investigated. Thus, the
urinalysis, both dipstick and microscopic evaluations, for detection of biochemical and
cellular alterations, is useful to the occupational physician. Tests of serum creatinine and
creatinine clearance are indicated if significant haematuria, pyuria or proteinuria suggests
underlying pathology.
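The text does not prescribe a particular method for estimating creatinine clearance; formal measurement uses a timed urine collection. As a minimal illustrative sketch, assuming the widely used Cockcroft-Gault approximation (an assumption, not a method specified by the source), clearance can be estimated from serum creatinine alone:

def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    # Cockcroft-Gault estimate of creatinine clearance (ml/min).
    # Illustrative only; a timed urine collection remains the reference method.
    clearance = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return clearance * 0.85 if female else clearance

# Example: a 55-year-old, 80 kg male worker with a serum creatinine of 1.4 mg/dl
print(round(cockcroft_gault(55, 80, 1.4)))   # approximately 67 ml/min

A value well below that expected for age would support the suspicion of significant pre-existing renal disease and prompt further evaluation before placement in jobs with potential nephrotoxicant exposure.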

Multiple factors are important to assess risk for progression of chronic disease or acute kidney
failure. The first is inherent or acquired limitation of the kidney to resist xenobiotic exposure.
The kidney’s response to the nephrotoxicant, such as an increase in the amount of toxicant
absorbed or alterations in kidney metabolism, may be influenced by a pre-existing condition.
Of particular importance is a decrease in detoxifying function in the very young or the very
old. In one study susceptibility to occupational exposure was correlated highly with family
history of renal disease, signifying the importance of hereditary predisposition. Underlying
conditions, such as diabetes and hypertension, increase susceptibility. Rare conditions, such as
lupus erythematosus and vasculitis, may be additional susceptibility factors. In the majority of
cases, increased susceptibility is multifactorial and frequently involves a battery of insults
which occur either alone or simultaneously. Thus, the occupational physician should be
cognizant of the patient’s family history of renal disease and pre-existing conditions affecting
renal function, as well as any vascular or cardiac disease, particularly in older workers.

Acute renal failure

Acute renal failure may arise from pre-renal, renal, or post-renal causes. The condition is
usually caused by an acute insult resulting in rapid, progressive loss of kidney function. When
the nephrotoxicant or precipitating causal factor is removed there is a progressive return of
renal function with a gradual decline of serum creatinine and improved renal concentrating
ability. A listing of occupational causes of acute renal failure is shown in Table 5. Acute renal
failure from high-dose xenobiotic exposure has been useful to signal potential aetiological
causes that may also contribute to more chronic forms of progressive renal disease. Acute
renal failure from obstruction of the outflow tract caused by benign disease or malignancy is
relatively rare, but surgical causes may contribute more frequently. Ultrasound of the upper
tract delineates the problem of obstruction, whatever the contributing factor. Renal failure
associated with drug or occupational toxicants results in a mortality rate of approximately
37%; the remainder of affected individuals improve to various degrees.
Table 5. Principal causes of acute renal insufficiency of occupational origin

Renal ischaemia: traumatic shock; anaphylactic shock; acute carbon monoxide poisoning; heat stroke

Tubular necrosis: mercury; chromium; arsenic; oxalic acid; tartrates; ethylene glycol; carbon tetrachloride; tetrachloroethane

Haemoglobinuria, myoglobinuria: arsine; crush syndrome; struck by lightning

Source: Crepet 1983.

Acute renal failure may be attributed to a variety of pre-renal causes which have as an
underlying theme renal ischaemia resulting from a prolonged decreased renal perfusion.
Cardiac failure and renal artery obstruction are two examples. Tubular necrosis may be
caused by an ever-growing number of nephrotoxicants present in the workplace. Herbicides
and pesticides have both been implicated in a number of studies. In a recent report, hemlock
poisoning resulted in the deposition of myosin and actin, released by the breakdown of muscle
cells, in the tubules and an acute decrease in renal function. Endosulfan, an insecticide, and
triphenyltin acetate (TPTA), an organotin, both were initially classified as neurotoxins but
have recently been reported to be associated with tubular necrosis. Anecdotal reports of
additional cases underscore the need for biomarkers capable of identifying subtle, subclinical
toxicity before high-dose toxic exposures have occurred.

Signs and symptoms of acute renal failure are: no urine output (anuria); oliguria (decreased
urine output); decreased renal concentrating capacity; and/or a rising serum potassium that
may stop the heart in a relaxed state (diastolic arrest). Treatment involves clinical support and,
whenever possible, removal from exposure to the toxicant. Rising serum potassium or
excessive fluid retention are the two primary indicators for either haemodialysis or peritoneal
dialysis, with the choice dependent on the patient’s cardiovascular stability and vascular
access for haemodialysis. The nephrologist, a medical kidney specialist, is key in the
management strategy for these patients who may also require the care of a urological surgical
specialist.

Long-term management of patients following renal failure is largely dependent on the degree
of recovery and rehabilitation and the patient’s overall health status. A return to limited work
and avoiding conditions that will stress the underlying condition are desirable. Patients with
persistent haematuria or pyuria require careful monitoring, possibly with biomarkers, for
2 years following recovery.

Chronic renal disease

Chronic or end-stage renal disease is most frequently the result of a chronic, ongoing
subclinical process that involves a multiplicity of factors most of which are poorly
understood. Glomerulonephritis, cardiovascular causes and hypertension are major
contributing factors. Other factors include diabetes and nephrotoxicants. Patients present with
progressive elevations in serum blood urea nitrogen, creatinine, serum potassium and oliguria
(decreased urine output). Improved biomarkers or biomarker panels are needed to identify
more precisely subclinical nephrotoxicity. For the occupational practitioner, the methods of
assessment need to be non-invasive, highly specific and reproducible. No single biomarker
has as yet met these criteria to become practical on a large clinical scale.

Chronic renal disease may result from a variety of nephrotoxicants, the pathogenesis of which
is better understood for some than others. A list of nephrotoxicants and sites of toxicity is
shown in Table 6. As mentioned, toxins may target the glomerulus, segments of the tubules or
the interstitial cells. Symptoms of xenobiotic exposure may include haematuria, pyuria,
glucosuria, amino acids in the urine, frequent urination and decreased urine output. The
precise mechanisms of renal damage for many nephrotoxicants have not been defined but the
identification of specific biomarkers of nephrotoxicity should assist in addressing this
problem. Although some protection of the kidney is afforded by the prevention of
vasoconstriction, tubular injury persists in most cases. As an example, lead toxicity is
primarily vascular in origin, while chromium at low doses affects the proximal tubule cells.
These compounds appear to affect the metabolic machinery of the cell. Multiple forms of
mercury have been implicated in acute elemental nephrotoxicity. Cadmium, in contrast to
mercury and like many other occupational nephrotoxicants, first targets the proximal tubule
cells.

Table 6. Segments of the nephron affected by selected toxicants

Proximal tubule: Antibiotics (cephalosporins, aminoglycosides); Antineoplastics (nitrosoureas, cisplatin and analogs); Radiographic contrast agents; Halogenated hydrocarbons (chlorotrifluoroethylene, hexafluoropropene, hexachlorobutadiene, trichloroethylene, chloroform, carbon tetrachloride); Maleic acid; Citrinin; Metals (mercury, uranyl nitrate, cadmium, chromium)

Glomerulus: Immune complexes; Aminoglycoside antibiotics; Puromycin aminonucleoside; Adriamycin; Penicillamine

Distal tubule/collecting duct: Lithium; Tetracyclines; Amphotericin; Fluoride; Methoxyflurane

Papilla: Aspirin; Phenacetin; Acetaminophen; Non-steroidal anti-inflammatory agents; 2-Bromoethylamine
Source: Tarloff and Goldstein 1994.

Renal-Urinary Cancers

Written by ILO Content Manager

Kidney Cancer

Epidemiology

Historically, kidney cancer has been used to mean either all malignancies of the renal system
(renal cell carcinoma (RCC), ICD-9 189.0; renal pelvis, ICD-9 189.1; and ureter, ICD-9
189.2) or RCC only. This categorization has led to some confusion in epidemiological studies,
resulting in a need to scrutinize previously reported data. RCC comprises 75 to 80% of the
total, with the remainder being primarily transitional cell carcinomas of the renal pelvis and
ureter. Separation of these two cancer types is appropriate since the pathogenesis of RCC and
of transitional cell carcinoma is quite different, and epidemiological risk factors are distinct as
are the signs and symptoms of the two diseases. This section focuses on RCC.

The major identified risk factor for kidney cancer is tobacco smoking, followed by suspected
but poorly defined occupational and environmental risk factors. It is estimated that the
elimination of tobacco smoking would decrease the incidence of kidney cancer by 30 to 40%
in industrialized countries, but occupational determinants of RCC are not well established.
The population attributable risk due to occupational exposures has been estimated to be
between zero, based on recognized carcinogens, and 21%, based on a multicentric, multisite
case-control study in the Montreal area of Canada. Early biomarkers of effect in association
with biomarkers of exposure should assist in clarifying important risk factors. Several
occupations and industries have been found in epidemiological studies to entail an increased
risk of renal cancer. However, with the possible exception of agents used in dry cleaning and
exposures in petroleum refining, the available evidence is not consistent. Statistical analysis
of epidemiological exposure data in relation to biomarkers of susceptibility and effect will
clarify additional aetiological causes.
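The attributable-risk range quoted above can be made concrete with Levin's formula for the population attributable risk, PAR = p(RR - 1)/[1 + p(RR - 1)], where p is the prevalence of exposure and RR the relative risk among the exposed. The sketch below is purely illustrative; the prevalence and relative risk used are assumed values, not figures from the studies cited.

def population_attributable_risk(exposure_prevalence, relative_risk):
    # Levin's formula: fraction of all cases in the population attributable
    # to the exposure, assuming the relative risk reflects a causal effect.
    excess = exposure_prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Assumed illustration: 15% of the population occupationally exposed,
# relative risk of 2.5 for renal cell carcinoma among the exposed.
print(f"{population_attributable_risk(0.15, 2.5):.0%}")   # about 18%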

Several epidemiological studies have associated specific industries, occupations and
occupational exposures with increased risks of renal cell carcinoma. The pattern that emerges
from these studies is not fully consistent. Oil refining, printing, dry cleaning and truck driving
are examples of jobs associated with excess risk of kidney cancer. Farmers usually display
decreased risk of RCC, but a Danish study linked long-term exposure to insecticides and
herbicides with an almost fourfold excess of RCC risk. This finding requires confirmation in
independent data, including specification of the possible causal nature of the association.
Other products suspected of being associated with RCC include: various hydrocarbon
derivatives and solvents; products of oil refining; petroleum, tar and pitch products; gasoline
exhaust; jet fuel; jet and diesel engine emissions; arsenic compounds; cadmium; chromium
(VI) compounds; inorganic lead compounds; and asbestos. Epidemiological studies have
associated occupational gasoline vapour exposure with kidney cancer risk, some in a dose-
response fashion, a phenomenon observed in the male rat for unleaded gasoline vapour
exposure. These findings gain some potential weight, given the widespread human exposure
to gasoline vapours in retail service stations and the recent increase in kidney cancer
incidence. Gasoline is a complex mixture of hydrocarbons and additives, including benzene,
which is a known human carcinogen.

The risk of kidney cancer is not consistently linked with social class, although increased risk
has occasionally been associated with higher socio-economic status. However, in some
populations a reverse gradient was observed, and in yet others, no clear pattern emerged.
Possibly these variations may be related to lifestyle. Studies with migrant people show
modification in RCC risk towards the level of the host country population, suggesting that
environmental factors are important in the development of this malignancy.

Except for nephroblastoma (Wilms’ tumour), which is a childhood cancer, kidney cancer
usually occurs after 40 years of age. An estimated 127,000 new cases of kidney cancer
(including RCC and transitional cell carcinoma (TCC) of the renal pelvis and ureter),
corresponding to 1.7% of the world total cancer incidence, occurred globally in 1985. The
incidence of kidney cancer varies among populations. High rates have been reported for both
men and women in North America, Europe, Australia and New Zealand; low rates in
Melanesia, middle and eastern Africa and southeastern and eastern Asia. The incidence of
kidney cancer has been increasing in most western countries but has stagnated in a few. Age-
standardized incidence of kidney cancer in 1985 was highest in North America and western,
northern and eastern Europe, and lowest in Africa, Asia (except in Japanese men) and the
Pacific. Kidney cancer is more frequent in men than in women and ranks among the ten most
frequent cancers in a number of countries.

Transitional cell carcinoma (TCC) of the renal pelvis is associated with similar aetiological
agents as bladder cancer, including chronic infection, stones and phenacetin-containing
analgesics. Balkan nephropathy, a slowly progressive, chronic and fatal nephropathy
prevalent in the Balkan countries, is associated with high rates of tumours of the renal pelvis
and ureter. The causes of Balkan nephropathy are unknown. Excessive exposure to ochratoxin
A, which is considered possibly carcinogenic to humans, has been associated with the
development of Balkan nephropathy, but the role of other nephrotoxic agents cannot be
excluded. Ochratoxin A is a toxin produced by fungi which can be found in many food stuffs,
particularly cereals and pork products.
Screening and diagnosis of kidney cancer

The sign and symptom pattern of RCC varies among patients, even up to the stage when
metastasis appears. Because of the location of the kidneys and the displacement of contiguous
organs by the expanding mass, these tumours are frequently very large at the time of clinical
detection. Although haematuria is the primary symptom of RCC,
bleeding occurs late compared to transitional cell tumours because of the intra-renal location
of RCC. RCC has been considered the “medical doctor’s dream” but the “surgeon’s curse”
because of the interesting constellation of symptoms related to paraneoplastic syndromes.
Substances that increase the red blood cell count, raise serum calcium or mimic abnormal
adrenal gland function have been reported, and abdominal mass, weight loss, fatigue, pain,
anaemia, abnormal liver function and hypertension have all been observed. Computerized
axial tomography (CAT scan) of the abdomen and ultrasound are being ordered by physicians
with increased frequency so, consequently, it is estimated that 20% of RCCs are diagnosed
serendipitously as a result of evaluation for other medical problems.

Clinical evaluation of an RCC case consists of a physical examination to identify a flank
mass, which occurs in 10% of patients. A kidney x ray with contrast may delineate a renal
mass and the solid or cystic nature is usually clarified by ultrasound or CAT scan. The
tumours are highly vascular and have a characteristic appearance when the artery is injected
with radio-opaque contrast material. Arteriography is performed to embolize the tumour if it
is very large or to define the arterial blood supply if a partial nephrectomy is anticipated. Fine-
needle aspiration may be used to sample suspect RCC.

Localized RCC tumours are surgically removed with regional lymph nodes and, operatively,
early ligation of the artery and vein is important. Symptomatically, the patient may be
improved by removing large or bleeding tumours that have metastasized, but this does not
improve survival. For metastatic tumours, localized pain control may be achieved with
radiation therapy but the treatment of choice for metastatic disease is biological response
modifiers (Interleukin-2 or α-interferon), although chemotherapy is occasionally used alone or
in combination with other therapies.

Markers such as the cancer gene on chromosome 3 observed in cancer families and in von
Hippel-Lindau disease may serve as biomarkers of susceptibility. Although tumour marker
antigens have been reported for RCC, there is currently no way to detect these reliably in the
urine or blood with adequate sensitivity and specificity. The low prevalence of this disease in
the general population requires a high specificity and sensitivity test for early disease
detection. Occupational cohorts at risk could potentially be screened with ultrasound.
Evaluation of this tumour remains a
challenge to the basic scientist, molecular epidemiologist and clinician alike.
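The demand for very high sensitivity and specificity follows directly from the low prevalence of RCC: when a rare disease is screened in the general population, even a good test yields mostly false positives. The sketch below uses hypothetical test characteristics and prevalence figures (none taken from the text) to show how the positive predictive value improves when screening is restricted to a high-risk occupational cohort.

def positive_predictive_value(sensitivity, specificity, prevalence):
    # Probability that a positive screening result reflects true disease.
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical test: 90% sensitive, 95% specific.
print(f"{positive_predictive_value(0.90, 0.95, 0.0005):.1%}")  # general population (~0.05% prevalence): ~0.9%
print(f"{positive_predictive_value(0.90, 0.95, 0.02):.1%}")    # high-risk cohort (2% prevalence): ~26.9%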

Bladder Cancer

Epidemiology

More than 90% of bladder cancers in Europe and North America are transitional cell
carcinomas (TCC). Squamous cell carcinoma and adenocarcinoma account for 5 and 1%,
respectively, of bladder cancer in these regions. The distribution of histopathological types in
bladder cancer is strikingly different in regions such as the Middle East and Africa where
bladder cancer is associated with schistosomal infection. For instance, in Egypt, where
schistosomiasis is endemic and bladder cancer is the major oncological problem, the most
common type is squamous cell carcinoma, but the incidence of TCC is increasing with the
rising prevalence of cigarette smoking. The discussion which follows focuses on TCC.

Bladder cancer continues to be a disease of significant importance. It accounted for about
3.5% of all malignancies in the world in 1980. In 1985, bladder cancer was estimated to be
11th in frequency on a global scale, being the eighth most frequent cancer among men, with
an expected total of 243,000 new cases. There is a peak incidence in the seventh decade of
life, and worldwide the male to female ratio is around three to one. Incidence has been
increasing in almost all populations in Europe, particularly in men. In Denmark, where annual
incidence rates are among the highest in the world, at 45 per 100,000 in men and 12 per
100,000 in women, the recent trend has been a further rise of 8 to 9% every 5 years. In Asia,
the very high rates among the Chinese in Hong Kong have declined steadily, but in both sexes
bladder cancer incidence is still much higher than elsewhere in Asia and more than twice as
high as that among the Chinese in Shanghai or Singapore. Bladder cancer rates among the
Chinese in Hawaii are also high.

Cigarette smoking is the single most important aetiological factor in bladder cancer, and
occupational exposures rank second. It has been estimated that tobacco is responsible for one-
third of all bladder cancer cases outside of regions where schistosomal infection is prevalent.
The number of bladder cancer cases attributed in 1985 to tobacco smoking has been estimated
at more than 75,000 worldwide, and may account for 50% of bladder cancer in western
populations. The fact that individuals who smoke similar amounts do not all develop bladder
cancer at the same rate suggests that genetic factors are important in controlling susceptibility.
Two aromatic amines, 4-aminobiphenyl and 2-naphthylamine, are carcinogens associated
with cigarette smoking; these are found in higher concentrations in “black tobacco” (air-
cured) than in “blend tobacco” (flue-cured). Passive smoke increases the adducts in the blood
and a dose-response of adduct formation has been correlated with increased risk of bladder
cancer. Higher levels of adduct formation have been observed in cigarette smokers who are
slow acetylators compared to fast acetylators, which suggests that genetically inherited
acetylation status may be an important biomarker of susceptibility. The lower incidence of
bladder cancer in Black compared to White races may be attributed to conjugation of
carcinogenic metabolic intermediates by sulphotransferases that produce electrophiles.
Detoxified phenolic sulphates may protect the urothelium. Liver sulphotransferase activity for
N-hydroxyarylamines has been reported to be higher in Blacks than Whites. This may result
in a decrease in the amount of free N-hydroxymetabolites to function as carcinogens.

Occupational bladder cancer is one of the earliest known and best documented occupational
cancers. The first identified case of occupational bladder
cancer appeared some 20 years after the inception of the synthetic dye industry in Germany.
Numerous other occupations have been identified in the last 25 years as occupational
bladder cancer risks. Occupational exposures may contribute to up to 20% of bladder cancers.
Workers occupationally exposed include those working with coal-tar pitches, coal gasification
and production of rubber, aluminium, auramine and magenta, as well as those working as
hairdressers and barbers. Aromatic amines have been shown to cause bladder cancer in
workers in many countries. Notable among this class of chemicals are 2-naphthylamine,
benzidine, 4-nitrobiphenyl and 3,3´-dichlorobenzidine. Two other aromatic amines, 4,4´-
methylene dianiline (MDA) and 4,4´-methylene-bis-2-chloroaniline (MOCA) are among the
most widely used of the suspected bladder carcinogens. Other carcinogens associated with
industrial exposures are largely undetermined; however, aromatic amines are frequently
present in the workplace.

Screening and diagnosis of bladder cancer

Screening for bladder cancer continues to receive attention in the quest to diagnose bladder
cancer before it becomes symptomatic and, presumably, less amenable to curative treatment.
Voided urine cytology and urinalysis for haematuria have been considered candidate
screening tests. A pivotal question for screening is how to identify high-risk groups and then
individuals within these groups. Epidemiological studies identify groups at risk while
biomarkers potentially identify individuals within groups. In general, occupational screening
for bladder cancer with haematuria testing and Papanicolaou cytology has been ineffective.

Improved detection of bladder cancer may be possible using the 14-day hemastick testing
described by Messing and co-workers. A positive test was observed at least once in 84% of 31
patients with bladder cancer at least 2 months prior to the cystoscopic diagnosis of disease.
This test suffers from a false-positive rate of 16 to 20% with half of these patients having no
urological disease. The low cost may make this a useful test in a two-tier screen in
combination with biomarkers and cytology (Waples and Messing 1992).

In a recent study, the DD23 monoclonal antibody using quantitative fluorescence image
analysis detected bladder cancer in exfoliated uroepithelial cells. A sensitivity of 85% and
specificity of 95% were achieved in a mixture of low- and high-grade transitional cell
carcinomas including TaT1 tumours. The M344 tumour-associated antigen in conjunction
with DNA ploidy had a sensitivity approaching 90%.

Recent studies indicate combining biomarkers with haematuria testing may be the best
approach. A list of the applications of quantitative fluorescence urinary cytology in
combination with biomarkers is summarized in Table 1. Genetic, biochemical and
morphological early cell changes associated with premalignant conditions support the concept
that individuals at risk can be identified years in advance of the development of overt
malignancy. Biomarkers of susceptibility in combination with biomarkers of effect promise to
detect individuals at risk with an even higher precision. These advances are made possible by
new technologies capable of quantitating phenotypic and genotypic molecular changes at the
single cell level thus identifying individuals at risk. Individual risk assessment facilitates
stratified, cost-effective monitoring of selected groups for targeted chemoprevention.
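The value of a two-tier approach can be sketched with elementary screening arithmetic. In the sketch below, repeated haematuria (hemastick) testing is assumed to be about 84% sensitive and about 82% specific (a value inferred from the 16 to 20% false-positive rate quoted above), and the DD23 quantitative fluorescence assay 85% sensitive and 95% specific; conditional independence of the two tests is a further assumption, so the figures are illustrative only.

def serial_screen(sens1, spec1, sens2, spec2):
    # Two-tier (serial) screening: the second test is applied only to subjects
    # positive on the first, and both tests must be positive to call a case.
    combined_sensitivity = sens1 * sens2
    combined_specificity = 1.0 - (1.0 - spec1) * (1.0 - spec2)
    return combined_sensitivity, combined_specificity

sens, spec = serial_screen(0.84, 0.82, 0.85, 0.95)
print(f"sensitivity {sens:.0%}, specificity {spec:.1%}")  # about 71% and 99.1%

Serial testing trades some sensitivity for a large gain in specificity, which is what makes an inexpensive first-tier test attractive when the second-tier assay is costly.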

Table 1. Applications of urinary cytology

Detection of CIS1 and bladder cancer

Monitoring surgical therapy:

Monitoring bladder following TURBT2


Monitoring upper urinary tract
Monitoring urethral remnant
Monitoring urinary diversion
Monitoring intravesical therapy

Selecting intravesical therapy

Monitoring effect of laser therapy

Evaluation of patients with haematuria

Establishing need for cystoscopy

Screening high-risk populations:


Occupational exposure groups
Drug abuse groups at risk for bladder cancer

Decision criteria for:


Cystectomy
Segmental ureteral resection versus nephroureterectomy

Other indications:
Detecting vesicoenteric fistula
Extraurological tumours invading the urinary tract
Defining effective chemopreventive agents
Monitoring effective chemotherapy
1 CIS, carcinoma in situ.
2 TURBT, transurethral resection for bladder tumour.
Source: Hemstreet et al. 1996.

Signs and symptoms of bladder cancer are similar to those of urinary tract infection and may
include pain on urination, frequent voiding and blood and pus cells in the urine. Because
symptoms of a urinary tract infection may herald a bladder tumour particularly when
associated with gross haematuria in older patients, confirmation of the presence of bacteria
and a keen awareness on the part of the examining physician are needed. Any patient treated
for a urinary tract infection which does not resolve promptly should be referred to a urology specialist
for further evaluation.

Diagnostic evaluation of bladder cancer first requires an intravenous pyelogram (IVP) to
exclude upper tract disease in the renal pelvis or ureters. Confirmation of bladder cancer
requires cystoscopy (inspection of the bladder with a lighted instrument passed through the
urethra) with multiple biopsies to determine whether the tumour is non-invasive (i.e.,
papillary or CIS) or invasive. Random biopsies of the bladder and prostatic urethra help to
define field cancerization and field effect changes. Patients with non-
invasive disease require close monitoring, as they are at risk of subsequent recurrences,
although stage and grade progression are uncommon. Patients who present with bladder
cancer that is already high-grade or invasive into the lamina propria are at equally high risk of
recurrence but stage progression is much more likely. Thus, they usually receive intravesical
instillation of immuno- or chemotherapeutic agents following transurethral resection. Patients
with tumours invading the muscularis propria or beyond are much more likely to have
metastasis already and can rarely be managed by conservative means. However, even when
treated by total cystectomy (the standard therapy for muscle-invading bladder cancer), 20 to
60% eventually succumb to their disease, almost always due to metastasis. When regional or
distal metastasis is present at diagnosis, the 5-year survival rates drop to 35 and 9%,
respectively, despite aggressive treatment. Systemic chemotherapy for metastatic bladder
cancer is improving with complete response rates reported at 30%. Recent studies suggest
chemotherapy prior to cystectomy may improve survival in selected patients.

Bladder cancer staging is predictive of the biological potential for progression, metastasis, or
recurrence in 70% of the cases. Staging of bladder cancer usually requires CAT scan to rule
out liver metastasis, radioisotope bone scan to exclude spread to the bone, and chest x ray or
CAT scan to exclude lung metastasis. A search continues for biomarkers in the tumour and
the bladder cancer field that will predict which tumours will metastasize or recur. The
accessibility of exfoliated bladder cells in voided specimens shows promise for using
biomarkers for monitoring recurrence and for cancer prevention.
Renal-Urinary System References
Committee on Biological Markers of the National Research Council. 1987. Biological
markers in environmental health research. Environ Health Persp 74:3-9.

Crepet, M. 1983. In ILO Encyclopaedia of Occupational Health and Safety. Geneva: International Labour Office (ILO).

Hemstreet, G, R Bonner, R Hurst, and G O’Dowd. 1996. Cytology of bladder cancer. In Comprehensive Textbook of Genitourinary Oncology, edited by NJ Vogelzang, Wu Shipley, PT Scardino, and DS Coffey. Baltimore: Williams & Wilkins.

National Research Council. 1995. Biological Markers in Urinary Toxicology. Washington, DC: National Academy Press.

Schulte, PA, K Ringen, GP Hemstreet, and E Ward. 1987. Exposure: Occupational cancer of
the urinary tract. In Occupational Cancer and Carcinogenesis. Philadelphia: Hanley & Belfus.

Tarloff, JB and RS Goldstein. 1994. Biochemical mechanisms of renal toxicity. In Introduction to Biochemical Toxicology, edited by E Hodgson and PE Levi. E. Norwalk, Conn.: Appleton and Lange.

Waples, M and EM Messing. 1992. The management of stage T1, grade 3 transitional cell
carcinoma of the bladder. In Advances in Urology. St. Louis: Mosby.

Wyker, A. 1991. Standard diagnostic considerations. In Adult and Pediatric Urology, edited
by JY Gillenwater et al. (3rd edn. 1996). St. Louis: Mosby.



9. Reproductive System
Chapter Editor: Grace Kawas Lemasters

Reproductive System: Introduction

Written by ILO Content Manager

Male and female reproductive toxicity are topics of increasing interest in consideration of
occupational health hazards. Reproductive toxicity has been defined as the occurrence of
adverse effects on the reproductive system that may result from exposure to environmental
agents. The toxicity may be expressed as alterations to the reproductive organs and/or the
related endocrine system. The manifestations of such toxicity may include:

 alterations in sexual behaviour


 reduced fertility
 adverse pregnancy outcomes
 modifications of other functions that are dependent on the integrity of the reproductive
system.

Mechanisms underlying reproductive toxicity are complex. More xenobiotic substances have
been tested and demonstrated to be toxic to the male reproductive process than to the female.
However, it is not known whether this is due to underlying differences in toxicity or to the
greater ease of studying sperm than oocytes.

Developmental Toxicity

Developmental toxicity has been defined as the occurrence of adverse effects on the
developing organism that may result from exposure prior to conception (either parent), during
prenatal development or postnatally to the time of sexual maturation. Adverse
developmental effects may be detected at any point in the life span of the organism. The
major manifestations of developmental toxicity include:

 death of the developing organism


 structural abnormality
 altered growth
 functional deficiency.

In the following discussion, developmental toxicity will be used as an all-inclusive term to
refer to exposures to the mother, father or conceptus that lead to abnormal development. The
term teratogenesis will be used to refer more specifically to exposures to the conceptus which
produce a structural malformation. Our discussion will not include the effects of postnatal
exposures on development.
Mutagenesis

In addition to reproductive toxicity, exposure to either parent prior to conception has the
potential of resulting in developmental defects through mutagenesis, changes in the genetic
material that is passed from parent to offspring. Such changes can occur either at the level of
individual genes or at the chromosomal level. Changes in individual genes can result in the
transmission of altered genetic messages while changes at the chromosomal level can result in
the transmission of abnormalities in chromosomal number or structure.

It is interesting that some of the strongest evidence for a role for preconception exposures in
developmental abnormalities comes from studies of paternal exposures. For example, Prader-
Willi syndrome, a birth defect characterized by hypotonicity in the newborn period and, later,
marked obesity and behaviour problems, has been associated with paternal occupational
exposures to hydrocarbons. Other studies have shown associations between paternal
preconception exposures to physical agents and congenital malformations and childhood
cancers. For example, paternal occupational exposure to ionizing radiation has been
associated with an increased risk of neural tube defects and increased risk of childhood
leukaemia, and several studies have suggested associations between paternal preconception
occupational exposure to electromagnetic fields and childhood brain tumours (Gold and Sever
1994). In assessing both reproductive and developmental hazards of workplace exposures
increased attention must be paid to the possible effects among males.

It is quite likely that some defects of unknown aetiology involve a genetic component which
may be related to parental exposures. Because of associations demonstrated between father’s
age and mutation rates it is logical to believe that other paternal factors and exposures may be
associated with gene mutations. The well-established association between maternal age and
chromosomal non-disjunction, resulting in abnormalities in chromosomal number, suggests a
significant role for maternal exposures in chromosomal abnormalities.

As our understanding of the human genome increases it is likely that we will be able to trace
more developmental defects to mutagenic changes in the DNA of single genes or structural
changes in portions of chromosomes.

Teratogenesis

The adverse effects on human development of exposure of the conceptus to exogenous
chemical agents have been recognized since the discovery of the teratogenicity of thalidomide
in 1961. Wilson (1973) has developed six “general principles of teratology” that are relevant
to this discussion. These principles are:

1. The final manifestations of abnormal development are death, malformation, growth
retardation and functional disorder.
2. Susceptibility of the conceptus to teratogenic agents varies with the developmental stage at
the time of exposure.
3. Teratogenic agents act in specific ways (mechanisms) on developing cells and tissues in
initiating abnormal embryogenesis (pathogenesis).
4. Manifestations of abnormal development increase in degree from the no-effect to the totally
lethal level as dosage increases.
5. The access of adverse environmental influences to developing tissues depends on the nature
of the agent.
6. Susceptibility to a teratogen depends on the genotype of the conceptus and on the manner
in which the genotype interacts with environmental factors.

The first four of these principles will be discussed in further detail, as will the combination of
principles 1, 2 and 4 (outcome, exposure timing and dose).

Spectrum of Adverse Outcomes Associated with Exposure

There is a spectrum of adverse outcomes potentially associated with exposure. Occupational
studies that focus on a single outcome risk overlooking other important reproductive effects.

Figure 1 lists some examples of developmental outcomes potentially associated with exposure
to occupational teratogens. Results of some occupational studies have suggested that
congenital malformations and spontaneous abortions are associated with the same
exposures—for example, anaesthetic gases and organic solvents.

Spontaneous abortion is an important outcome to consider because it can result from different
mechanisms through several pathogenic processes. A spontaneous abortion can be the result
of toxicity to the embryo or foetus, chromosomal alterations, single gene effects or
morphological abnormalities. It is important to try to differentiate between karyotypically
normal and abnormal conceptuses in studies of spontaneous abortions.

Figure 1. Developmental abnormalities and reproductive outcomes potentially associated with occupational exposures.
Timing of Exposure

Wilson’s second principle relates susceptibility to abnormal development to the time of
exposure, that is, the gestational age of the conceptus. This principle has been well established
for the induction of structural malformations, and the sensitive periods for organogenesis are
known for many structures. Considering an expanded array of outcomes, the sensitive period
during which any effect can be induced must be extended throughout gestation.

In assessing occupational developmental toxicity, exposure should be determined and
classified for the appropriate critical period, that is, the gestational age(s), for each outcome.
For example, spontaneous abortions and congenital malformations are likely to be related to
first and second trimester exposure, whereas low birth weight and functional disorders such as
seizure disorders and mental retardation are more likely to be related to second and third
trimester exposure.
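In practice this means each reported exposure must be dated against gestational age and matched to the outcome-specific window. A minimal sketch of such a classification is given below; the week boundaries are assumptions chosen to reflect the trimester pattern described above, not values specified by the source.

# Illustrative critical windows (completed weeks of gestation) for each outcome.
CRITICAL_WINDOWS = {
    "spontaneous abortion": (1, 27),       # first and second trimesters
    "congenital malformation": (1, 27),
    "low birth weight": (14, 40),          # second and third trimesters
    "functional disorder": (14, 40),
}

def exposure_relevant(outcome, exposure_week):
    # True if an exposure at the given gestational week falls within the
    # critical window assumed for the outcome.
    start, end = CRITICAL_WINDOWS[outcome]
    return start <= exposure_week <= end

print(exposure_relevant("congenital malformation", 6))   # True
print(exposure_relevant("low birth weight", 6))           # False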

Teratogenic Mechanisms

The third principle is the importance of considering the potential mechanisms that might
initiate abnormal embryogenesis. A number of different mechanisms have been suggested
which could lead to teratogenesis (Wilson 1977). These include:

 mutational changes in DNA sequences


 chromosomal abnormalities leading to structural or quantitative changes in DNA
 alteration or inhibition of intracellular metabolism, e.g., metabolic blocks and lack of co-
enzymes, precursors or substrates for biosynthesis
 interruption of DNA or RNA synthesis
 interference with mitosis
 interference with cell differentiation
 failure of cell-to-cell interactions
 failure of cell migrations
 cell death through direct cytotoxic effects
 effects on cell membrane permeability and osmolar changes
 physical disruption of cells or tissues.

By considering mechanisms, investigators can develop biologically meaningful groupings of
outcomes. This can also provide insight into potential teratogens; for example, relationships
between carcinogenesis, mutagenesis and teratogenesis have been discussed for some time.
From the perspective of assessing occupational reproductive hazards, this is of particular
importance for two distinct reasons: (1) substances that are carcinogenic or mutagenic have an
increased probability of being teratogenic, suggesting that particular attention should be paid
to the reproductive effects of such substances, and (2) effects on deoxyribonucleic acid
(DNA), producing somatic mutations, are thought to be mechanisms for both carcinogenesis
and teratogenesis.

Dose and Outcome

The fourth principle concerning teratogenesis is the relationship of outcome to dose. This
principle is clearly established in many animal studies, and Selevan (1985) has discussed its
potential relevance to the human situation, noting the importance of multiple reproductive
outcomes within specific dose ranges and suggesting that a dose-response relationship could
be reflected in an increasing rate of a particular outcome with increasing dose and/or a shift in
the spectrum of the outcomes observed.

In regard to teratogenesis and dose, there is considerable concern about functional
disturbances resulting from the possible behavioural effects of prenatal exposure to
environmental agents. Animal behavioural teratology is expanding rapidly, but human
behavioural environmental teratology is in a relatively early stage of development. At present,
there are critical limitations in the definition and ascertainment of appropriate behavioural
outcomes for epidemiological studies. In addition, it is possible that low-level exposures to
developmental toxicants are important for some functional effects.

Multiple Outcomes and Exposure Timing and Dose

Of particular importance with respect to the identification of workplace developmental
hazards are the concepts of multiple outcomes and exposure timing and dose. On the basis of
what we know about the biology of development, it is clear that there are relationships
between reproductive outcomes such as spontaneous abortion and intrauterine growth
retardation and congenital malformations. In addition, multiple effects have been shown for
many developmental toxicants (table 1).

Table 1. Examples of exposures associated with multiple adverse reproductive end-points

Exposure    Spontaneous abortion    Congenital malformation    Low birth weight    Developmental disabilities

Alcohol X X X X

Anaesthetic X X
gases

Lead X X X

Organic solvents X X X

Smoking X X X

Relevant to this are issues of exposure timing and dose-response relationships. It has long
been recognized that the embryonic period during which organogenesis occurs (two to eight
weeks post-conception) is the time of greatest sensitivity to the induction of structural
malformations. The foetal period from eight weeks to term is the time of histogenesis, with
rapid increase in cell number and cellular differentiation occurring during this time. It is then
that functional abnormalities and growth retardation are most likely to be induced. It is
possible that there may be relationships between dose and response during this period where
a high dose might lead to growth retardation and a lower dose might result in functional or
behavioural disturbance.
Male-Mediated Developmental Toxicity

While developmental toxicity is usually considered to result from exposure of the female and
the conceptus—that is, teratogenic effects—there is increasing evidence from both animal and
human studies for male-mediated developmental effects. Proposed mechanisms for such
effects include transmission of chemicals from the father to the conceptus via seminal fluid,
indirect contamination of the mother and the conceptus by substances carried from the
workplace into the home environment through personal contamination, and—as noted
earlier—paternal preconception exposures that result in transmissible genetic changes
(mutations).

Introduction to the Male and Female Reproductive Function

Written by ILO Content Manager

Reproductive toxicity has many unique and challenging differences from toxicity to other
systems. Whereas other forms of environmental toxicity typically involve development of
disease in an exposed individual, because reproduction requires interaction between two
individuals, reproductive toxicity will be expressed within a reproductive unit, or couple. This
unique, couple-dependent aspect, although obvious, makes reproductive toxicology distinct.
For example, it is possible that exposure to a toxicant by one member of a reproductive
couple (e.g., the male) will be manifest by an adverse reproductive outcome in the other
member of the couple (e.g., increased frequency of spontaneous abortion). Any attempt to
deal with environmental causes of reproductive toxicity must address the couple-specific
aspect.

There are other unique aspects that reflect the challenges of reproductive toxicology. Unlike
renal, cardiac or pulmonary function, reproductive function occurs intermittently. This means
that occupational exposures can interfere with reproduction but go unnoticed during periods
when fertility is not desired. This intermittent characteristic can make the identification of a
reproductive toxicant in humans more difficult. Another unique characteristic of reproduction,
which follows directly from the consideration above, is that complete assessment of the
functional integrity of the reproductive system requires that the couple attempt pregnancy.

Male Reproductive System and Toxicology

Written by ILO Content Manager

Spermatogenesis and spermiogenesis are the cellular processes that produce mature male sex
cells. These processes take place within the seminiferous tubules of the testes of the sexually
mature male, as shown in Figure 1. The human seminiferous tubules are 30 to 70 cm long and
150 to 300 µm in diameter (Zaneveld 1978). The spermatogonia (stem cells) are positioned
along the basement membrane of the seminiferous tubules and are the basic cells for the
production of sperm.
Figure 1. The male reproductive system

Sperm mature through a series of cellular divisions in which the spermatogonia proliferate
and become primary spermatocytes. The resting primary spermatocytes migrate through tight
junctions formed by the Sertoli cells to the luminal side of this testis barrier. By the time the
spermatocytes reach the membrane barrier in the testis, the synthesis of DNA, the genetic
material in the nucleus of the cell, is essentially complete. When the primary spermatocytes
actually encounter the lumen of the seminiferous tubule, these undergo a special type of cell
division which occurs only in germ cells and is known as meiosis. Meiotic cell division
results in the separation of the chromosome pairs in the nucleus, so that each resulting germ
cell contains only a single copy of each chromosome strand rather than a matched pair.

During meiosis the chromosomes change shape by condensing and becoming filamentous. At
a certain point, the nuclear membrane which surrounds them breaks down and microtubular
spindles attach to the chromosomal pairs, causing them to separate. This completes the first
meiotic division and two haploid secondary spermatocytes are formed. The secondary
spermatocytes then undergo a second meiotic division to form equal numbers of X- and Y-
chromosome bearing spermatids.

The morphological transformation of spermatids to spermatozoa is called spermiogenesis.


When spermiogenesis is complete, each sperm cell is released by the Sertoli cell into the
seminiferous tubule lumen by a process referred to as spermiation. The sperm migrate along
the tubule to the rete testis and into the head of the epididymis. Sperm leaving the
seminiferous tubules are immature: unable to fertilize an ovum and unable to swim.
Spermatozoa released into the lumen of the seminiferous tubule are suspended in fluid
produced primarily by the Sertoli cells. Concentrated sperm suspended within this fluid flow
continuously from the seminiferous tubules, through slight changes in the ionic milieu within
the rete testis, through the vasa efferentia, and into the epididymis. The epididymis is a single
highly coiled tube (five to six metres long) in which sperm spend 12 to 21 days.

Within the epididymis, sperm progressively acquire motility and fertilizing capacity. This
may be due to the changing nature of the suspension fluid in the epididymis. That is, as the
cells mature the epididymis absorbs components from the fluid including secretions from the
Sertoli cells (e.g., androgen binding protein), thereby increasing the concentration of
spermatozoa. The epididymis also contributes its own secretions to the suspension fluid,
including the chemicals glycerylphosphorylcholine (GPC) and carnitine.

Sperm morphology continues to transform in the epididymis. The cytoplasmic droplet is shed
and the sperm nucleus condenses further. While the epididymis is the principal storage
reservoir for sperm until ejaculation, about 30% of the sperm in an ejaculate have been stored
in the vas deferens. Frequent ejaculation accelerates passage of sperm through the epididymis
and may increase the number of immature (infertile) sperm in the ejaculate (Zaneveld 1978).

Ejaculation

Once within the vas deferens, the sperm are transported by the muscular contractions of
ejaculation rather than by the flow of fluid. During ejaculation, fluids are forcibly expelled
from the accessory sex glands giving rise to the seminal plasma. These glands do not expel
their secretions at the same time. Rather, the bulbourethral (Cowper’s) gland first extrudes a
clear fluid, followed by the prostatic secretions, the sperm-concentrated fluids from the
epididymides and ampulla of the vas deferens, and finally the largest fraction primarily from
the seminal vesicles. Thus, seminal plasma is not a homogeneous fluid.

Toxic Actions on Spermatogenesisand Spermiogenesis

Toxicants may disrupt spermatogenesis at several points. The most damaging, because of
irreversibility, are toxicants that kill or genetically alter (beyond repair mechanisms)
spermatogonia or Sertoli cells. Animal studies have been useful to determine the stage at
which a toxicant attacks the spermatogenic process. These studies employ short term exposure
to a toxicant before sampling to determine the effect. By knowing the duration for each
spermatogenic stage, one can extrapolate to estimate the affected stage.
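As an illustration of this extrapolation, the sketch below works backwards from the delay between a brief exposure and the appearance of an effect in the ejaculate. The stage durations are approximate human figures (roughly 74 days of spermatogenesis plus epididymal transit) assumed for the example; they are not taken from the text and differ between species.

# Approximate durations, in days, counted backwards from ejaculation (assumed values).
STAGES_DAYS = [
    ("epididymal transit", 12),
    ("spermatid (spermiogenesis)", 23),
    ("spermatocyte (meiosis)", 23),
    ("spermatogonium (proliferation)", 16),
]

def stage_exposed(days_to_effect):
    # Estimate which germ-cell stage was present at the time of a brief
    # exposure, given the delay before abnormal sperm appear in the ejaculate.
    elapsed = 0
    for stage, duration in STAGES_DAYS:
        elapsed += duration
        if days_to_effect <= elapsed:
            return stage
    return "spermatogonium or stem cell"

print(stage_exposed(40))   # spermatocyte (meiosis)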

Biochemical analysis of seminal plasma provides insights into the function of the accessory
sex glands. Chemicals that are secreted primarily by each of the accessory sex glands are
typically selected to serve as a marker for each respective gland. For example, the epididymis
is represented by GPC, the seminal vesicles by fructose, and the prostate gland by zinc. Note
that this type of analysis provides only gross information on glandular function and little or
no information on the other secretory constituents. Measuring semen pH and osmolality
provide additional general information on the nature of seminal plasma.

Seminal plasma may be analysed for the presence of a toxicant or its metabolite. Heavy
metals have been detected in seminal plasma using atomic absorption spectrophotometry,
while halogenated hydrocarbons have been measured in seminal fluid by gas chromatography
after extraction or protein-limiting filtration (Stachel et al. 1989; Zikarge 1986).

The viability and motility of spermatozoa in seminal plasma are typically a reflection of
seminal plasma quality. Alterations in sperm viability, as measured by stain exclusion or by
hypoosmotic swelling, or alterations in sperm motility parameters would suggest post-
testicular toxicant effects.

Semen analyses also can indicate whether production of sperm cells has been affected by a
toxicant. Sperm count and sperm morphology provide indices of the integrity of
spermatogenesis and spermiogenesis. Thus, the number of sperm in the ejaculate is directly
correlated with the number of germ cells per gram of testis (Zukerman et al. 1978), while
abnormal morphology is probably a result of abnormal spermiogenesis. Dead sperm or
immotile sperm often reflect the effects of post-testicular events. Thus, the type or timing of a
toxic effect may indicate the target of the toxicant. For example, exposure of male rats to 2-
methoxyethanol resulted in reduced fertility after four weeks (Chapin et al. 1985). This
evidence, corroborated by histological examination, indicates that the target of toxicity is the
spermatocyte (Chapin et al. 1984). While it is not ethical to intentionally expose humans to
suspected reproductive toxicants, semen analyses of serial ejaculates of men inadvertently
exposed for a short time to potential toxicants may provide similar useful information.

Occupational exposure to 1,2-dibromochloropropane (DBCP) reduced sperm concentration in
ejaculates from a median of 79 million cells/ml in unexposed men to 46 million cells/ml in
exposed workers (Whorton et al. 1979). Upon removing the workers from the exposure, those
with reduced sperm counts experienced a partial recovery, while men who had been
azoospermic remained sterile. Testicular biopsy revealed that the target of DBCP was the
spermatogonia. This substantiates the severity of the effect when stem cells are the target of
toxicants. There were no indications that DBCP exposure of men was associated with adverse
pregnancy outcome (Potashnik and Abeliovich 1985). Another example of a toxicant
targeting spermatogenesis/spermiogenesis was the study of workers exposed to ethylene
dibromide (EDB). They had more sperm with tapered heads and fewer sperm per ejaculate
than did controls (Ratcliffe et al. 1987).

Genetic damage is difficult to detect in human sperm. Several animal studies using the
dominant lethal assay (Ehling et al. 1978) indicate that paternal exposure can produce an
adverse pregnancy outcome. Epidemiological studies of large populations have demonstrated
increased frequency of spontaneous abortions in women whose husbands were working as
motor vehicle mechanics (McDonald et al. 1989). Such studies indicate a need for methods to
detect genetic damage in human sperm. Such methods are being developed by several
laboratories. These methods include DNA probes to discern genetic mutations (Hecht 1987),
sperm chromosome karyotyping (Martin 1983), and DNA stability assessment by flow
cytometry (Evenson 1986).
Figure 2. Exposures positively associated with adversely affecting semen quality

Figure 2 lists exposures known to affect sperm quality and table 1 provides a summary of the
results of epidemiological studies of paternal effects on reproductive outcomes.

Table 1. Epidemiological studies of paternal effects on pregnancy outcome

Reference    Type of exposure or occupation    Association with exposure1    Effect

Record-based population studies

Lindbohm et al. 1984 Solvents – Spontaneous abortion

Lindbohm et al. 1984 Service station + Spontaneous abortion

Daniell and Vaughan 1988 Organic solvents – Spontaneous abortion

McDonald et al. 1989 Mechanics + Spontaneous abortion

McDonald et al. 1989 Food processing + Developmental defects

Lindbohm et al. 1991a Ethylene oxide + Spontaneous abortion

Lindbohm et al. 1991a Petroleum refinery + Spontaneous abortion


Lindbohm et al. 1991a Impregnates of wood + Spontaneous abortion

Lindbohm et al. 1991a Rubber chemicals + Spontaneous abortion

Olsen et al. 1991 Metals + Child cancer risk

Olsen et al. 1991 Machinists + Child cancer risk

Olsen et al. 1991 Smiths + Child cancer risk

Kristensen et al. 1993 Solvents + Preterm birth

Kristensen et al. 1993 Lead and solvents + Preterm birth

Kristensen et al. 1993 Lead + Perinatal death

Kristensen et al. 1993 Lead + Male child morbidity

Case-control studies

Kucera 1968 Printing industry (+) Cleft lip

Kucera 1968 Paint (+) Cleft palate

Olsen 1983 Paint + Damage to central nervous system

Olsen 1983 Solvents (+) Damage to central nervous system

Sever et al. 1988 Low-level radiation + Neural tube defects

Taskinen et al. 1989 Organic solvents + Spontaneous abortion

Taskinen et al. 1989 Aromatic hydrocarbons + Spontaneous abortion

Taskinen et al. 1989 Dust + Spontaneous abortion

Gardner et al. 1990 Radiation + Childhood leukaemia

Bonde 1992 Welding + Time to conception

Wilkins and Sinks 1990 Agriculture (+) Child brain tumour

Wilkins and Sinks 1990 Construction (+) Child brain tumour

Wilkins and Sinks 1990 Food/tobacco processing (+) Child brain tumour

Wilkins and Sinks 1990 Metal + Child brain tumour

Lindbohm et al. 1991b Lead (+) Spontaneous abortion

Sallmen et al. 1992 Lead (+) Congenital defects


Veulemans et al. 1993 Ethylene glycol ether + Abnormal spermiogram

Chia et al. 1992 Metals + Cadmium in semen

1 – no significant association; (+) marginally significant association; + significant association.


Source: Adapted from Taskinen 1993.

Neuroendocrine System

The overall functioning of the reproductive system is controlled by the nervous system and
the hormones produced by the glands (the endocrine system). The reproductive
neuroendocrine axis of the male involves principally the central nervous system (CNS), the
anterior pituitary gland and the testes. Inputs from the CNS and from the periphery are
integrated by the hypothalamus, which directly regulates gonadotrophin secretion by the
anterior pituitary gland. The gonadotrophins, in turn, act principally upon the Leydig cells
within the interstitium and Sertoli and germ cells within the seminiferous tubules to regulate
spermatogenesis and hormone production by the testes.

Hypothalamic–Pituitary Axis

The hypothalamus secretes the neurohormone gonadotrophin releasing hormone (GnRH) into
the hypophysial portal vasculature for transport to the anterior pituitary gland. The pulsatile
secretion of this decapeptide causes the concomitant release of luteinizing hormone (LH), and
with lesser synchrony and one-fifth the potency, the release of follicle stimulating hormone
(FSH) (Bardin 1986). Substantial evidence exists to support the presence of a separate FSH
releasing hormone, although none has yet been isolated (Savy-Moore and Schwartz 1980;
Culler and Negro-Vilar 1986). These hormones are secreted by the anterior pituitary gland.
LH acts directly upon the Leydig cells to stimulate synthesis and release of testosterone,
whereas FSH stimulates aromatization of testosterone to estradiol by the Sertoli cell.
Gonadotropic stimulation causes the release of these steroid hormones into the spermatic vein.

Gonadotrophin secretion is, in turn, checked by testosterone and estradiol through negative
feedback mechanisms. Testosterone acts principally upon the hypothalamus to regulate GnRH
secretion, thereby reducing primarily the pulse frequency of LH release. Estradiol, on the
other hand, acts upon the pituitary gland to reduce the magnitude of gonadotrophin release.
Through these endocrine feedback loops, testicular function in general and testosterone
secretion specifically are maintained at a relatively steady state.

Pituitary–Testicular Axis

LH and FSH are generally viewed as necessary for normal spermatogenesis. Presumably the
effect of LH is secondary to inducing high intratesticular concentrations of testosterone.
Therefore, FSH from the pituitary gland and testosterone from the Leydig cells act upon the
Sertoli cells within the seminiferous tubule epithelium to initiate spermatogenesis. Sperm
production persists, although quantitatively reduced, after removing either LH (and
presumably the high intratesticular testosterone concentrations) or FSH. FSH is required for
initiating spermatogenesis at puberty and, to a lesser extent, to reinitiate spermatogenesis that
has been arrested (Matsumoto 1989; Sharpe 1989).
The hormonal synergism that serves to maintain spermatogenesis may entail recruitment by
FSH of differentiated spermatogonia to enter meiosis, while testosterone may control specific,
subsequent stages of spermatogenesis. FSH and testosterone may also act upon the Sertoli cell
to stimulate production of one or more paracrine factors which may affect the number of
Leydig cells and testosterone production by these cells (Sharpe 1989). FSH and testosterone
stimulate protein synthesis by Sertoli cells including synthesis of androgen binding protein
(ABP), while FSH alone stimulates synthesis of aromatase and inhibin. ABP is secreted
primarily into the seminiferous tubular fluid and is transported to the proximal portion of the
caput epididymis, possibly serving as a local carrier of androgens (Bardin 1986). Aromatase
catalyses the conversion of testosterone to estradiol in the Sertoli cells and in other peripheral
tissues.

Inhibin is a glycoprotein consisting of two dissimilar, disulphide-linked subunits, α and β.


Although inhibin preferentially inhibits FSH release, it may also attenuate LH release in the
presence of GnRH stimulation (Kotsugi et al. 1988). FSH and LH stimulate inhibin release
with approximately equal potency (McLachlan et al. 1988). Interestingly, inhibin is secreted
into the spermatic vein blood as pulses which are synchronous to those of testosterone
(Winters 1990). This probably does not reflect direct actions of LH or testosterone on Sertoli
cell activity, but rather the effects of other Leydig cell products secreted either into the
interstitial spaces or the circulation.

Prolactin, which is also secreted by the anterior pituitary gland, acts synergistically with LH
and testosterone to promote male reproductive function. Prolactin binds to specific receptors
on the Leydig cell and increases the amount of androgen receptor complex within the nucleus
of androgen responsive tissues (Baker et al. 1977). Hyperprolactinaemia is associated with
reductions of testicular and prostate size, semen volume and circulating concentrations of LH
and testosterone (Segal et al. 1979). Hyperprolactinaemia has also been associated with
impotence, apparently independently of altered testosterone secretion (Thorner et al. 1977).

If steroid hormone metabolites are measured in urine, consideration must be given to the
potential that the exposure being studied may alter the metabolism of the excreted metabolites.
This is especially pertinent since most metabolites are formed by the liver, a target of many
toxicants. Lead, for example, reduced the amount of sulphated steroids that were excreted into
the urine (Apostoli et al. 1989). Blood levels for both gonadotrophins become elevated during
sleep as the male enters puberty, while testosterone levels maintain this diurnal pattern
through adulthood in men (Plant 1988). Thus blood, urine or saliva samples should be
collected at approximately the same time of day to avoid variations due to diurnal secretory
patterns.

The overt effects of toxic exposure targeting the reproductive neuroendocrine system are most
likely to be revealed through altered biological manifestations of the androgens.
Manifestations significantly regulated by androgens in the adult man that may be detected
during a basic physical examination include: (1) nitrogen retention and muscular
development; (2) maintenance of the external genitalia and accessory sexual organs; (3)
maintenance of the enlarged larynx and thickened vocal cords causing the male voice; (4)
beard, axillary and pubic hair growth and temporal hair recession and balding; (5) libido and
sexual performance; (6) organ specific proteins in tissues (e.g., liver, kidneys, salivary
glands); and (7) aggressive behaviour (Bardin 1986). Modifications in any of these traits may
indicate that androgen production has been affected.
Examples of Toxicant Effects

Lead is a classic example of a toxicant that directly affects the neuroendocrine system. Serum
LH concentrations were elevated in men exposed to lead for less than one year. This effect did
not progress in men exposed for more than five years. Serum FSH levels were not affected.
On the other hand, serum levels of ABP were elevated and those of total testosterone were
reduced in men exposed to lead for more than five years. Serum levels of free testosterone
were significantly reduced after exposure to lead for three to five years (Rodamilans et al.
1988). In contrast, serum concentrations of LH, FSH, total testosterone, prolactin, and total
neutral 17-ketosteroids were not altered in workers with lower circulating levels of lead, even
though the distribution frequency of sperm count was altered (Assennato et al. 1986).

Exposure of shipyard painters to 2-ethoxyethanol also reduced sperm count without a
concurrent change in serum LH, FSH, or testosterone concentrations (Welch et al. 1988).
Thus toxicants may affect hormone production and sperm measures independently.

Male workers involved in the manufacture of the nematocide DBCP experienced elevated
serum levels of LH and FSH and reduced sperm count and fertility. These effects are
apparently sequelae to DBCP actions upon the Leydig cells to alter androgen production or
action (Mattison et al. 1990).

Several compounds may exert toxicity by virtue of structural similarity to reproductive steroid
hormones. Thus, by binding to the respective endocrine receptor, toxicants may act as
agonists or antagonists to disrupt biological responses. Chlordecone (Kepone), an insecticide
that binds to oestrogen receptors, reduced sperm count and motility, arrested sperm
maturation and reduced libido. While it is tempting to suggest that these effects result from
chlordecone interfering with oestrogen actions at the neuroendocrine or testicular level, serum
levels of testosterone, LH and FSH were not shown to be altered in these studies in a manner
similar to the effects of oestradiol therapy. DDT and its metabolites also exhibit steroidal
properties and might be expected to alter male reproductive function by interfering with
steroidal hormone functions. Xenobiotics such as polychlorinated biphenyls, polybrominated
biphenyls, and organochlorine pesticides may also interfere with male reproductive functions
by exerting oestrogenic agonist/antagonist activity (Mattison et al. 1990).

Sexual Function

Human sexual function refers to the integrated activities of the testes and secondary sex
glands, the endocrine control systems, and the central nervous system-based behavioural and
psychological components of reproduction (libido). Erection, ejaculation and orgasm are three
distinct, independent, physiological and psychodynamic events which normally occur
concurrently in men.

Little reliable data are available on occupational exposure effects on sexual function due to
the problems described above. Drugs have been shown to affect each of the three stages of
male sexual function (Fabro 1985), indicating the potential for occupational exposures to
exert similar effects. Antidepressants, testosterone antagonists and stimulants of prolactin
release effectively reduce libido in men. Antihypertensive drugs which act on the sympathetic
nervous system induce impotence in some men, but surprisingly, priapism in others.
Phenoxybenzamine, an adrenoceptive antagonist, has been used clinically to block seminal
emission but not orgasm (Shilon, Paz and Homonnai 1984). Anticholinergic antidepressant
drugs permit seminal emission while blocking seminal ejection and orgasm which results in
seminal plasma seeping from the urethra rather than being ejected.

Recreational drugs also affect sexual function (Fabro 1985). Ethanol may impair potency
while enhancing libido. Cocaine, heroin and high doses of cannabinoids reduce libido.
Opiates also delay or impair ejaculation.

The vast and varied array of pharmaceuticals that has been shown to affect the male
reproductive system provides support for the notion that chemicals found in the workplace
may also be reproductive toxicants. Research methods that are reliable and practical for field
study conditions are needed to assess this important area of reproductive toxicology.

Structure of the Female Reproductive System and Target Organ Vulnerability

Written by ILO Content Manager

Figure 1. The female reproductive system.

The female reproductive system is controlled by components of the central nervous system,
including the hypothalamus and pituitary. It consists of the ovaries, the fallopian tubes, the
uterus and the vagina (Figure 1). The ovaries, the female gonads, are the source of oocytes
and also synthesize and secrete oestrogens and progestogens, the major female sex hormones.
The fallopian tubes transport oocytes to and sperm from the uterus. The uterus is a pear-
shaped muscular organ, the upper part of which communicates through the fallopian tubes to
the abdominal cavity, while the lower part is contiguous through the narrow canal of the
cervix with the vagina, which passes to the exterior. Table 1 summarizes compounds, clinical
manifestations, site and mechanisms of action of potential reproductive toxicants.

Table 1. Potential female reproductive toxicants

Compound           Clinical manifestation                     Site                             Mechanism/target

Chemical reactivity

Alkylating agents  Altered menses; amenorrhoea; ovarian       Ovary; uterus                    Granulosa cell cytotoxicity; oocyte cytotoxicity;
                   atrophy; decreased fertility; premature                                     endometrial cell cytotoxicity
                   menopause
Lead               Abnormal menses; ovarian atrophy;          Hypothalamus; pituitary; ovary   Decreased FSH; decreased progesterone
                   decreased fertility
Mercury            Abnormal menses                            Hypothalamus; ovary              Altered gonadotrophin production and secretion;
                                                                                               follicle toxicity; granulosa cell proliferation
Cadmium            Follicular atresia; persistent diestrus    Ovary; pituitary; hypothalamus   Vascular toxicity; granulosa cell cytotoxicity;
                                                                                               cytotoxicity

Structural similarity

Azathioprine       Reduced follicle numbers                   Ovary; oogenesis                 Purine analogue; disruption of DNA/RNA synthesis
Chlordecone        Impaired fertility                         Hypothalamus                     Oestrogen agonist
DDT                Altered menses                             Pituitary                        FSH, LH disruption
2,4-D              Infertility
Lindane            Amenorrhoea
Toxaphene          Hypermenorrhoea
PCBs, PBBs         Abnormal menses                                                             FSH, LH disruption
Source: From Plowchalk, Meadows and Mattison 1992. These compounds are suggested to be
direct-acting reproductive toxicants based primarily on toxicity testing in experimental
animals.
The Hypothalamus and Pituitary

The hypothalamus is located in the diencephalon, which sits on top of the brainstem and is
surrounded by the cerebral hemispheres. The hypothalamus is the principal intermediary
between the nervous and the endocrine systems, the two major control systems of the body.
The hypothalamus regulates the pituitary gland and hormone production.

The mechanisms by which a chemical might disrupt the reproductive function of the
hypothalamus generally include any event that could modify the pulsatile release of
gonadotrophin releasing hormone (GnRH). This may involve an alteration in either the
frequency or the amplitude of GnRH pulses. The processes susceptible to chemical injury are
those involved in the synthesis and secretion of GnRH—more specifically, transcription or
translation, packaging or axonal transport, and secretory mechanisms. These processes
represent sites where direct-acting chemically reactive compounds might interfere with
hypothalamic synthesis or release of GnRH. An altered frequency or amplitude of GnRH
pulses could result from disruptions in stimulatory or inhibitory pathways that regulate the
release of GnRH. Investigations of the regulation of the GnRH pulse generator have shown
that catecholamines, dopamine, serotonin, γ-aminobutyric acid, and endorphins all have some
potential for altering the release of GnRH. Therefore, xenobiotics that are agonists or
antagonists of these compounds could modify GnRH release, thus interfering with
communication with the pituitary.

Prolactin, follicle-stimulating hormone (FSH) and luteinizing hormone (LH) are three protein
hormones secreted by the anterior pituitary that are essential for reproduction. These play a
critical role in maintaining the ovarian cycle, governing follicle recruitment and maturation,
steroidogenesis, completion of ova maturation, ovulation and luteinization.

The precise, finely tuned control of the reproductive system is accomplished by the anterior
pituitary in response to positive and negative feedback signals from the gonads. The
appropriate release of FSH and LH during the ovarian cycle controls normal follicular
development, and the absence of these hormones is followed by amenorrhoea and gonadal
atrophy. The gonadotrophins play a critical role in initiating changes in the morphology of
ovarian follicles and in their steroidal microenvironments through the stimulation of steroid
production and the induction of receptor populations. Timely and adequate release of these
gonadotrophins is also essential for ovulatory events and a functional luteal phase. Because
gonadotrophins are essential for ovarian function, altered synthesis, storage or secretion may
seriously disrupt reproductive capacity. Interference with gene expression—whether in
transcription or translation, post-translational events or packaging, or secretory mechanisms—
may modify the level of gonadotrophins reaching the gonads. Chemicals that act by means of
structural similarity or altered endocrine homeostasis might produce effects by interference
with normal feedback mechanisms. Steroid-receptor agonists and antagonists might initiate an
inappropriate release of gonadotrophins from the pituitary, thereby inducing steroid-
metabolizing enzymes, reducing steroid half-life and subsequently the circulating level of
steroids reaching the pituitary.

The Ovary

The ovary in primates is responsible for the control of reproduction through its principal
products, oocytes and steroid and protein hormones. Folliculogenesis, which involves both
intraovarian and extraovarian regulatory mechanisms, is the process by which oocytes and
hormones are produced. The ovary itself has three functional subunits: the follicle, the oocyte
and the corpus luteum. During the normal menstrual cycle, these components, under the
influence of FSH and LH, function in concert to produce a viable ovum for fertilization and a
suitable environment for implantation and subsequent gestation.

During the preovulatory period of the menstrual cycle, follicle recruitment and development
occur under the influence of FSH and LH. The latter stimulates the production of androgens
by thecal cells, whereas the former stimulates the aromatization of androgens into oestrogens
by the granulosa cells and the production of inhibin, a protein hormone. Inhibin acts at the
anterior pituitary to decrease the release of FSH. This prevents excess stimulation of follicular
development and allows continuing development of the dominant follicle—the follicle
destined to ovulate. Oestrogen production increases, stimulating both the LH surge (resulting
in ovulation) and the cellular and secretory changes in the vagina, cervix, uterus and oviduct
that enhance spermatozoa viability and transport.

In the postovulatory phase, thecal and granulosa cells remaining in the follicular cavity of the
ovulated ovum form the corpus luteum and secrete progesterone. This hormone stimulates
the uterus to provide a proper environment for implantation of the embryo if fertilization
occurs. Unlike the male gonad, the female gonad has a finite number of germ cells at birth and
is therefore uniquely sensitive to reproductive toxicants. Such exposure of the female can lead
to decreased fecundity, increased pregnancy wastage, early menopause or infertility.

As the basic reproductive unit of the ovary, the follicle maintains the delicate hormonal
environment necessary to support the growth and maturation of an oocyte. As previously
noted, this complex process is known as folliculogenesis and involves both intraovarian and
extraovarian regulation. Numerous morphological and biochemical changes occur as a
primordial follicle progresses to a pre-ovulatory follicle (which contains a developing oocyte),
and each stage of follicular growth exhibits unique patterns of gonadotrophin sensitivity,
steroid production and feedback pathways. These characteristics suggest that a number of
sites are available for xenobiotic interaction. Also, there are different follicle populations
within the ovary, which further complicates the situation by allowing for differential follicle
toxicity. This creates a situation in which the patterns of infertility induced by a chemical
agent would depend on the follicle type affected. For example, toxicity to primordial follicles
would not produce immediate signs of infertility but would ultimately shorten the
reproductive lifespan. On the other hand, toxicity to antral or preovulatory follicles would
result in an immediate loss of reproductive function. The follicle complex is composed of
three basic components: granulosa cells, thecal cells and the oocyte. Each of these
components has characteristics that may make it uniquely susceptible to chemical injury.

Several investigators have explored methodology for screening xenobiotics for granulosa cell
toxicity by measuring the effects on progesterone production by granulosa cells in culture.
Oestradiol suppression of progesterone production by granulosa cells has been utilized to
verify granulosa cell responsiveness. The pesticide p,p’-DDT and its o,p’-DDT isomer
produce suppression of progesterone production, apparently with potencies equal to that of
oestradiol. By contrast, the pesticides malathion, parathion and dieldrin and the fungicide
hexachlorobenzene are without effect. Further detailed analysis of isolated granulosa cell
responses to xenobiotics is needed to define the utility of this assay system. The attractiveness
of isolated systems such as this is economy and ease of use; however, it is important to
remember that granulosa cells represent only one component of the reproductive system.
Thecal cells provide precursors for steroids synthesized by granulosa cells. Thecal cells are
believed to be recruited from ovarian stroma cells during follicle formation and growth.
Recruitment may involve stromal cellular proliferation as well as migration to regions around
the follicle. Xenobiotics that impair cell proliferation, migration and communication will
impact on thecal cell function. Xenobiotics that alter thecal androgen production may also
impair follicle function. For example, the androgens metabolized to oestrogens by granulosa
cells are provided by thecal cells. Alterations in thecal cell androgen production, either
increases or decreases, are expected to have a significant effect on follicle function. For
example, it is believed that excess production of androgens by thecal cells will lead to follicle
atresia. In addition, impaired production of androgens by thecal cells may lead to decreased
poestrogen production by granulosa cells. Either circumstance will clearly impact on
reproductive performance. At resent, little is known about thecal cell vulnerability to
xenobiotics.

Although there is a paucity of information defining the vulnerability of ovarian cells to
xenobiotics, there are data clearly demonstrating that oocytes can be damaged or destroyed by
such agents. Alkylating agents destroy oocytes in humans and experimental animals. Lead
produces ovarian toxicity. Mercury and cadmium also produce ovarian damage that may be
mediated through oocyte toxicity.

Fertilization to Implantation

Gametogenesis, release and union of male and female germ cells are all preliminary events
leading to a zygote. Sperm cells deposited in the vagina must enter the cervix and move
through the uterus and into the fallopian tube to meet the ovum. Penetration of the ovum by sperm
and the merging of their respective DNA comprise the process of fertilization. After
fertilization cell division is initiated and continues during the next three or four days, forming
a solid mass of cells called a morula. The cells of the morula continue to divide, and by the
time the developing embryo reaches the uterus it is a hollow ball called a blastocyst.

Following fertilization, the developing embryo migrates through the fallopian tube into the
uterus. The blastocyst enters the uterus and implants in the endometrium approximately seven
days after ovulation. At this time the endometrium is in the postovulatory phase. Implantation
enables the blastocyst to absorb nutrients or toxicants from the glands and blood vessels of the
endometrium.

Maternal Occupational Exposures and Adverse Pregnancy Outcomes

Written by ILO Content Manager

Paid employment among women is growing worldwide. For example, almost 70% of women
in the United States are employed outside the home during their predominant childbearing
years (ages 20 to 34). Furthermore, since the 1940s there has been an almost linear trend in
synthetic organic chemical production, creating a more hazardous environment for the
pregnant worker and her offspring.

Ultimately, a couple’s reproductive success depends on a delicate physicochemical balance
within and between the father, the mother and the foetus. Metabolic changes occurring during
a pregnancy can increase exposure to hazardous toxicants for both the worker and the conceptus.
These metabolic changes include increased pulmonary absorption, increased cardiac output,
delayed gastric emptying, increased intestinal motility and increased body fat. As shown in
figure 1, exposure of the conceptus can produce varying effects depending on the phase of
development—early or late embryogenesis or the foetal period.

Figure 1. Consequences of maternal exposure to toxicants on the offspring.

Transport time of a fertilized ovum before implantation is between two and six days. During
this early stage the embryo may be exposed to chemical compounds that penetrate into the
uterine fluids. Absorption of xenobiotic compounds may be accompanied by degenerative
changes, alteration in the blastocystic protein profile or failure to implant. Insult during this
period is likely to lead to a spontaneous abortion. Based on experimental data, it is thought
that the embryo is fairly resistant to teratogenic insult at this early stage because the cells have
not initiated the complex sequence of chemical differentiation.

The period of later embryogenesis is characterized by differentiation, mobilization and
organization of cells and tissue into organ rudiments. Early pathogenesis may induce cell
death, failed cellular interaction, reduced biosynthesis, impaired morphogenic movement,
mechanical disruption, adhesions or oedema (Paul 1993). The mediating factors that
determine susceptibility include route and level of exposure, pattern of exposure and foetal
and maternal genotype. Extrinsic factors such as nutritional deficiencies, or the additive,
synergistic or antagonistic effects associated with multiple exposures may further impact the
response. Untoward responses during late embryogenesis may culminate in spontaneous
abortion, gross structural defects, foetal loss, growth retardation or developmental
abnormalities.

The foetal period extends from embryogenesis to birth and is defined as beginning at 54 to 60
gestational days, with the conceptus having a crown-rump length of 33 mm. The distinction
between the embryonic and foetal period is somewhat arbitrary. The foetal period is
characterized developmentally by growth, histogenesis and functional maturation. Toxicity
may be manifested by a reduction in cell size and number. The brain is still sensitive to injury;
myelination is incomplete until after birth. Growth retardation, functional defects, disruption
in the pregnancy, behavioural effects, transplacental carcinogenesis or death may result from
toxicity during the foetal period. This article discusses the biological, sociological and
epidemiological effects of maternal environmental/occupational exposures.

Embryonic/Foetal Loss

The developmental stages of the zygote, defined in days from ovulation (DOV), proceed from
the blastocyst stage at days 15 to 20 (one to six DOV), with implantation occurring on day 20
or 21 (six or seven DOV), to the embryonic period from days 21 to 62 (seven to 48 DOV),
and the foetal period from day 63 (49+ DOV) until the designated period of viability, ranging
from 140 to 195 days. Estimates of the probability of pregnancy termination at one of these
stages depend on both the definition of foetal loss and the method used to measure the event.
Considerable variability in the definition of early versus late foetal loss exists, ranging from
the end of week 20 to week 28. The definitions of foetal and infant death recommended by the
World Health Organization (1977) are listed in table 1. In the United States the gestational
age setting the lower limit for stillbirths is now widely accepted to be 20 weeks.

Table 1. Definition of foetal loss and infant death

Spontaneous abortion     ≤500 g, or 20-22 weeks' gestation, or 25 cm length

Stillbirth               500 g (1,000 g international), nonviable

Early neonatal death     Death of a live-born infant at ≤7 days (168 hours)

Late neonatal death      Death at 7 days to ≤28 days

Source: World Health Organization 1977.
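
These cut-offs can be encoded directly when outcomes are abstracted from records or
questionnaires. The following Python sketch is illustrative only, assuming simplified inputs
and hypothetical field names; in practice the thresholds should match the definitions actually
used by the registry or study in question.

    def classify_outcome(born_alive=False, birth_weight_g=None,
                         gestation_weeks=None, age_at_death_days=None):
        """Rough outcome classification following the WHO (1977) cut-offs in table 1.
        Field names and the simplified logic are illustrative only."""
        if born_alive:
            if age_at_death_days is None:
                return "live birth, survived the neonatal period"
            if age_at_death_days <= 7:
                return "early neonatal death"
            if age_at_death_days <= 28:
                return "late neonatal death"
            return "post-neonatal death"
        # Not born alive: separate spontaneous abortion from stillbirth.
        if (birth_weight_g is not None and birth_weight_g <= 500) or \
           (gestation_weeks is not None and gestation_weeks <= 22):
            return "spontaneous abortion"
        return "stillbirth"

    print(classify_outcome(birth_weight_g=450, gestation_weeks=21))   # spontaneous abortion
    print(classify_outcome(born_alive=True, age_at_death_days=3))     # early neonatal death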

Because the majority of early aborted foetuses have chromosomal anomalies, it has been
suggested that for research purposes a finer distinction should be made—between early foetal
loss, before 12 weeks’ gestation, and later foetal loss (Källén 1988). In examining late foetal
losses it also may be appropriate to include early neonatal deaths, as the cause may be similar.
WHO defines early neonatal death as the death of an infant aged seven days or younger and
late neonatal death as occurring between seven and 29 days. In studies conducted in
developing countries, it is important to distinguish between prepartum and intrapartum deaths.
Because of problematic deliveries, intrapartum deaths account for a large portion of stillbirths
in less developed countries.

In a review by Kline, Stein and Susser (1989) of nine retrospective or cross-sectional studies,
the foetal loss rates before 20 weeks’ gestation ranged from 5.5 to 12.6%. When the definition
was expanded to include losses up to 28 weeks’ gestation, the foetal loss rate varied between
6.2 and 19.6%. The rates of foetal loss among clinically recognized pregnancies in four
prospective studies, however, had a relatively narrow range of 11.7 to 14.6% for the
gestational period up to 28 weeks. This lower rate, seen in prospective versus retrospective or
cross-sectional designs, may be attributable to differences in underlying definitions,
misreporting of induced abortions as spontaneous or misclassification of delayed or heavy
menses as foetal loss.
When occult abortions or early “chemical” losses identified by an elevated level of human
chorionic gonadotrophin (hCG) are included, the total spontaneous abortion rate jumps
dramatically. In a study using hCG methods, the incidence of post-implantation subclinical
loss of fertilized ova was 22% (Wilcox et al. 1988). In these studies urinary hCG was
measured with immunoradiometric assay using a detection antibody. The assay originally
used by Wilcox employed a now-exhausted high-affinity polyclonal rabbit antibody. More recent
studies have used an inexhaustible monoclonal antibody that requires less than 5 ml of urine
for replicate samples. The limiting factor for use of these assays in occupational field studies
is not only the cost and resources needed to coordinate collection, storage and analysis of
urine samples but the large population needed. In a study of early pregnancy loss in women
workers exposed to video display terminals (VDTs), approximately 7,000 women were
screened in order to acquire a usable population of 700 women. This need for ten times the
population size in order to achieve an adequate sample stems from reduction in the available
number of women because of ineligibility due to age, sterility and the enrollment exclusively
of women who are using either no contraceptives or relatively ineffective forms of
contraception.

More conventional occupational studies have used recorded or questionnaire data to identify
spontaneous abortions. Recorded data sources include vital statistics and hospital, private
practitioner and outpatient clinic records. Use of record systems identifies only a subset of all
foetal losses, principally those that occur after the start of prenatal care, typically after two to
three missed periods. Questionnaire data are collected by mail or in personal or telephone
interviews. By interviewing women to obtain reproductive histories, more complete
documentation of all recognized losses is possible. Questions that are usually included in
reproductive histories include all pregnancy outcomes; prenatal care; family history of
adverse pregnancy outcomes; marital history; nutritional status; pre-pregnancy weight; height;
weight gain; use of cigarettes, alcohol and prescription and nonprescription drugs; health
status of the mother during and prior to a pregnancy; and exposures at home and in the
workplace to physical and chemical agents such as vibration, radiation, metals, solvents and
pesticides. Interview data on spontaneous abortions can be a valid source of information,
particularly if the analysis includes those of eight weeks’ gestation or later and those that
occurred within the last 10 years.

The principal physical, genetic, social and environmental factors associated with spontaneous
abortion are summarized in table 2. To ensure that the observed exposure-effect relationship
is not due to a confounding relationship with another risk factor, it is important to identify the
risk factors that may be associated with the outcome of interest. Conditions associated with
foetal loss include syphilis, rubella, genital Mycoplasma infections, herpes simplex, uterine
infections and general hyperpyrexia. One of the most important risk factors for clinically
recognized spontaneous abortion is a history of pregnancy ending in foetal loss. Higher
gravidity is associated with increased risk, but this may not be independent of a history of
spontaneous abortion. There are conflicting interpretations of gravidity as a risk factor
because of its association with maternal age, reproductive history and heterogeneity of women
at different gravidity ranks. Rates of spontaneous abortion are higher for women younger than
16 and older than 36 years. After adjusting for gravidity and a history of pregnancy loss,
women older than 40 were shown to have twice the risk of foetal loss of younger women. The
increased risk for older women has been associated with an increase in chromosomal
anomalies, particularly trisomy. Possible male-mediated effects associated with foetal loss
have been recently reviewed (Savitz, Sonnerfeld and Olshaw 1994). A stronger relationship
was shown with paternal exposure to mercury and anaesthetic gases, as well as a suggestive
but inconsistent relationship with exposure to lead, rubber manufacturing, selected solvents
and some pesticides.

Table 2. Factors associated with small for gestational age and foetal loss

Small for gestational age

Physical-genetic: preterm delivery; multiple births; malformed foetus; hypertension; placental
or cord anomaly; maternal medical history; history of adverse pregnancy outcomes; race;
chromosome anomalies; sex; maternal height, weight and weight gain; paternal height; parity;
length of gestation; short interval between pregnancies.

Environmental-social: malnutrition; low income/poor education; maternal smoking; maternal
alcohol consumption; occupational exposure; psychosocial stress; altitude; history of
infections; marijuana use.

Foetal loss

Physical-genetic: higher gravidity; maternal age; birth order; race; repeat spontaneous
abortion; insulin-dependent diabetes; uterine disorders; twinning; immunological factors;
hormonal factors.

Environmental-social: socio-economic status; smoking history; prescribed and recreational
drugs; alcohol use; poor nutrition; infections/maternal fever; spermicides; employment
factors; chemical exposure; irradiation.

Employment status may be a risk factor regardless of a specific physical or chemical hazard
and may act as a confounder in assessment of occupational exposure and spontaneous
abortion. Some investigators suggest that women who stay in the workforce are more likely to
have an adverse pregnancy history and as a result are able to continue working; others believe
this group is an inherently more fit subpopulation due to higher incomes and better prenatal
care.
Congenital Anomalies

During the first 60 days after conception, the developing infant may be more sensitive to
xenobiotic toxicants than at any other stage in the life cycle. Historically, terata and
congenital malformations referred to structural defects present at birth that may be gross or
microscopic, internal or external, hereditary or nonhereditary, single or multiple. Congenital
anomaly, however, is more broadly defined as including abnormal behaviour, function and
biochemistry. Malformations may be single or multiple; chromosomal defects generally
produce multiple defects, whereas single-gene changes or exposure to environmental agents
may cause either single defects or a syndrome.

The incidence of malformations depends on the status of the conceptus—live birth,
spontaneous abortus or stillbirth. Overall, the abnormality rate in spontaneous abortuses is
approximately 19%, a tenfold increase over what is seen in live births (Sheard, Fantel and
Fitsimmons 1989). A 32% rate of anomalies was found among stillborn foetuses weighing
more than 500 g. The incidence of major defects in live births is about 2.24% (Nelson and
Holmes 1989). The prevalence of minor defects ranges between 3 and 15% (averaging about
10%). Birth anomalies are associated with genetic factors (10.1%), multifactorial inheritance
(23%), uterine factors (2.5%), twinning (0.4%) or teratogens (3.2%). The causes of the
remaining defects are unknown. Malformation rates are approximately 41% higher for boys
than for girls and this is explained by the significantly higher rate of anomalies for male
genital organs.

One challenge in studying malformations is deciding how to group defects for analysis.
Anomalies can be classified by several parameters, including seriousness (major, minor),
pathogenesis (deformation, disruption), associated versus isolated, anatomic by organ system,
and aetiological (e.g., chromosomal, single gene defects or teratogen induced). Often, all
malformations are combined or the combination is based either on major or minor
categorization. A major malformation can be defined as one that results in death, requires
surgery or medical treatment or constitutes a substantial physical or psychological handicap.
The rationale for combining anomalies into large groups is that the majority arise, at
approximately the same time period, during organogenesis. Thus, by maintaining larger
sample sizes, the total number of cases is increased with a concomitant increase in the
statistical power. If, however, the exposure effect is specific to a particular type of
malformation (e.g., central nervous system), such grouping may mask the effect.
Alternatively, malformations may be grouped by organ system. Though this method may be
an improvement, certain defects may dominate the class, such as varus deformities of the feet
in the musculoskeletal system. Given a sufficiently large sample, the optimal approach is to
divide the defects into embryologically or pathogenetically homogenous groups (Källén
1988). Considerations should be given to the exclusion or inclusion of certain malformations,
such as those that are likely caused by chromosomal defects, autosomal dominant conditions
or malposition in utero. Ultimately, in analysing congenital anomalies, a balance has to be
struck between maintaining precision and preserving statistical power.

A number of environmental and occupational toxicants have been associated with congenital
anomalies in offspring. One of the strongest associations is maternal consumption of food
contaminated with methylmercury causing morphological, central nervous system and
neurobehavioural abnormalities. In Japan, the cluster of cases was linked to consumption of
fish and shellfish contaminated with mercury derived from the effluent of a chemical factory.
The most severely affected offspring developed cerebral palsy. Maternal ingestion of
polychlorinated biphenyl’s (CBs) from contaminated rice oil gave rise to babies with several
disorders, including growth retardation, dark brown skin pigmentation, early eruption of teeth,
gingival hyperplasia, wide sagittal suture, facial oedema and exophthalmoses. Occupations
involving exposures to mixtures have been linked with a variety of adverse outcomes. The
offspring of women working in the ul and aer industry, in either laboratory work or jobs
involving “conversions” or aer refinement, also had increased risk of central nervous system,
heart and oral cleft defects. Women working in industrial or construction work with
unspecified exposures had a 50% increase in central nervous system defects, and women
working in transportation and communication had two times the risk of having a child with an
oral cleft. Veterinarians represent a unique group of health care personnel exposed to
anaesthetic gases, radiation, trauma from animal kicks, insecticides and zoonotic diseases.
Though no difference was found in the rate of spontaneous abortions or in birth weight of the
offspring between female veterinarians and female lawyers, there was a significant excess of
birth defects among veterinarians (Schenker et al. 1990). Lists of known, possible and
unlikely teratogens are available as well as computer databases and risk lines for obtaining
current information on potential teratogens (Paul 1993). Evaluating congenital anomalies in
an occupational cohort is particularly difficult, however, because of the large sample size
needed for statistical power and our limited ability to identify specific exposures occurring
during a narrow window of time, primarily the first 55 days of gestation.
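
The sample-size problem can be made concrete with a standard two-proportion power
calculation. The sketch below, assuming the Python statsmodels package, compares the number
of exposed and unexposed pregnancies needed to detect a doubling of risk when all major
malformations are combined (baseline roughly 2.24%, as cited above) versus when a single
specific defect (here taken, hypothetically, as 0.1%) is analysed on its own.

    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    def n_per_group(baseline_rate, relative_risk, power=0.80, alpha=0.05):
        """Approximate number of pregnancies needed in each group to detect a given
        relative risk for a binary outcome (two-sided test of two proportions)."""
        es = proportion_effectsize(baseline_rate * relative_risk, baseline_rate)
        return NormalIndPower().solve_power(effect_size=es, alpha=alpha,
                                            power=power, alternative="two-sided")

    # Doubling of risk for all major malformations combined (baseline ~2.24%)
    # versus a single specific defect (hypothetical baseline 0.1%):
    print(round(n_per_group(0.0224, 2.0)))   # roughly several hundred per group
    print(round(n_per_group(0.001, 2.0)))    # on the order of ten thousand per group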

Small for Gestational Age

Among the many factors linked with infant survival, physical underdevelopment associated
with low birth weight (LBW) presents one of the greatest risks. Significant weight gain of the
foetus does not begin until the second trimester. The conceptus weighs 1 g at eight weeks, 14 g
at 12 weeks, and reaches 1.1 kg at 28 weeks. An additional 1.1 kg is gained every six weeks
thereafter until term. The normal newborn weighs approximately 3,200 g at term. The
newborn’s weight is dependent on its rate of growth and its gestational age at delivery. An
infant that is growth retarded is said to be small for gestational age (SGA). If an infant is
delivered prior to term it will have a reduced weight but will not necessarily be growth
retarded. Factors associated with a preterm delivery are discussed elsewhere, and the focus of
this discussion is on the growth-retarded newborn. The terms SGA and LBW will be used
interchangeably. A low birth-weight infant is defined as an infant weighing less than 2,500 g,
a very low birth weight is defined as less than 1,500 g, and extremely low birth weight is one
that is less than 1,000 g (WHO 1969).

When examining causes of reduced growth, it is important to distinguish between
asymmetrical and symmetrical growth retardation. Asymmetrical growth retardation, i.e.,
where the weight is affected more than the skeletal structure, is primarily associated with a
risk factor operating during late pregnancy. On the other hand, symmetrical growth
retardation may more likely be associated with an aetiology that operates over the entire
period of gestation (Kline, Stein and Susser 1989). The difference in rates between
asymmetrical and symmetrical growth retardation is especially apparent when comparing
developing and developed countries. The rate of growth retardation in developing countries is
10 to 43%, and is primarily symmetrical, with the most important risk factor being poor
nourishment. In developed countries foetal growth retardation is usually much lower, 3 to 8%,
and is generally asymmetrical with a multifactorial aetiology. Hence, worldwide, the
proportion of low birth-weight infants defined as intrauterine growth retarded rather than
preterm varies dramatically. In Sweden and the United States, the proportion is approximately
45%, while in developing countries, such as India, the proportion varies between
approximately 79 and 96% (Villar and Belizan 1982).

Studies of the Dutch famine showed that starvation confined to the third trimester depressed
foetal growth in an asymmetric pattern, with birth weight being primarily affected and head
circumference least affected (Stein, Susser and Saenger 1975). Asymmetry of growth also has
been observed in studies of environmental exposures. In a study of 202 expectant mothers
residing in neighbourhoods at high risk for lead exposures, prenatal maternal blood samples
were collected between the sixth and the 28th week of gestation (Bornschein, Grote and
Mitchell 1989). Blood lead levels were associated with both a decreased birth weight and
length, but not head circumference, after adjustment for other relevant risk factors including
length of gestation, socioeconomic status and use of alcohol or cigarettes. The finding of
maternal blood lead as a factor in birth length was seen entirely in Caucasian infants. The
birth length of Caucasian infants decreased approximately 2.5 cm per log unit increment in
maternal blood lead. Care should be given to selection of the outcome variable. If only birth
weight had been selected for study, the finding of the effects of lead on other growth
parameters might have been missed. Also, if Caucasians and African Americans had been
pooled in the above analysis, the differential effects on Caucasians, perhaps due to genetic
differences in the storage and binding capacity of lead, may have been missed. A significant
confounding effect also was observed between prenatal blood lead and maternal age and the
birth weight of the offspring after adjustment for other covariables. The findings indicate that
for a 30-year-old woman with an estimated blood lead level of about 20 μg/dl, the offspring
weighed approximately 2,500 g compared with approximately 3,000 g for a 20-year-old with
similar lead levels. The investigators speculated that this observed difference may indicate
that older women are more sensitive to the additional insult of lead exposure or that older
women may have had higher total lead burden from greater numbers of years of exposure or
higher ambient lead levels when they were children. Another factor may be increased blood
pressure. Nonetheless, the important lesson is that careful examination of high-risk
subpopulations by age, race, economic status, daily living habits, sex of the offspring and
other genetic differences may be necessary in order to discover the more subtle effects of
exposures on foetal growth and development.
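
The kind of model behind these findings can be sketched as an ordinary least-squares
regression of birth length on log-transformed maternal blood lead plus covariates. The code
below is a minimal illustration on simulated data, not the authors' actual analysis; all
column names and effect sizes are hypothetical stand-ins, and in practice the model would be
fitted separately by race or with interaction terms, as discussed above.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200
    # Simulated stand-in data; all column names and coefficients are hypothetical.
    df = pd.DataFrame({
        "blood_lead": rng.lognormal(mean=2.0, sigma=0.5, size=n),   # roughly 3-25 ug/dl
        "gest_weeks": rng.normal(39, 1.5, size=n),
        "smoker": rng.integers(0, 2, size=n),
        "ses": rng.normal(0, 1, size=n),
    })
    df["birth_length_cm"] = (50
                             - 2.5 * np.log(df["blood_lead"])       # built-in "true" effect
                             + 0.5 * (df["gest_weeks"] - 39)
                             - 1.0 * df["smoker"]
                             + rng.normal(0, 1.5, size=n))

    # Birth length as a continuous outcome; blood lead enters on a log scale, so its
    # coefficient is the change in length (cm) per log-unit increment of blood lead.
    model = smf.ols("birth_length_cm ~ np.log(blood_lead) + gest_weeks + smoker + ses",
                    data=df).fit()
    print(model.params["np.log(blood_lead)"])   # close to -2.5 in this simulated example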

Risk factors associated with low birth weight are summarized in table 2. Social class as
measured by income or education persists as a risk factor in situations in which there are no
ethnic differences. Other factors that may be operating under social class or race may include
cigarette smoking, physical work, prenatal care and nutrition. Women between the ages of 25
and 29 are least likely to deliver a growth-retarded offspring. Maternal smoking increases the
risk of low birth-weight offspring by about 200% for heavy smokers. Maternal medical
conditions associated with LBW include placental abnormalities, heart disease, viral
pneumonia, liver disease, pre-eclampsia, eclampsia, chronic hypertension, poor weight gain and
hyperemesis. An adverse pregnancy history of foetal loss, preterm delivery or prior LBW
infant increases the risk of a current preterm low birth-weight infant two- to fourfold. An
interval between births of less than a year triples the risk of having a low birth-weight
offspring. Chromosomal anomalies associated with abnormal growth include Down’s
syndrome, trisomy 18 and most malformation syndromes.

Smoking cigarettes is one of the primary behaviours most directly linked with lower weight
offspring. Maternal smoking during pregnancy has been shown to increase the risk of a low
birth-weight offspring two to three times and to cause an overall weight deficit of between
150 and 400 g. Nicotine and carbon monoxide are considered the most likely causative agents
since both are rapidly and preferentially transferred across the placenta. Nicotine is a powerful
vasoconstrictor, and significant differences in the size of umbilical vessels of smoking
mothers have been demonstrated. Carbon monoxide levels in cigarette smoke range from
20,000 to 60,000 ppm. Carbon monoxide has an affinity for haemoglobin 210 times that of
oxygen, and because of lower arterial oxygen tension the foetus is especially compromised.
Others have suggested that these effects are not due to smoking but are attributable to
characteristics of smokers. Certainly occupations with potential carbon monoxide exposure,
such as those associated with pulp and paper, blast furnaces, acetylene, breweries, carbon black,
coke ovens, garages, organic chemical synthesizers and petroleum refineries should be
considered possible high risk occupations for pregnant employees.
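
The significance of the 210-fold affinity can be illustrated with the Haldane relationship,
under which the equilibrium ratio of carboxyhaemoglobin to oxyhaemoglobin is approximately M
times the ratio of the partial pressures of CO and oxygen, with M around 210. The sketch below
is a rough equilibrium approximation only; it ignores uptake kinetics, endogenous CO and
maternal-foetal differences, and the air concentrations chosen are purely illustrative.

    # Haldane relationship (equilibrium approximation):
    #   COHb / O2Hb ~ M * (pCO / pO2), with M ~ 210 for carbon monoxide.
    M = 210.0

    def equilibrium_cohb_fraction(co_ppm, p_atm_mmhg=760.0, alveolar_po2_mmhg=100.0):
        """Approximate equilibrium carboxyhaemoglobin fraction at a given ambient CO
        concentration. Illustrative only; real COHb depends on exposure duration,
        ventilation and endogenous CO production."""
        p_co = co_ppm * 1e-6 * p_atm_mmhg        # partial pressure of CO, mmHg
        ratio = M * p_co / alveolar_po2_mmhg     # COHb : O2Hb at equilibrium
        return ratio / (1.0 + ratio)

    for ppm in (35, 100, 400):                   # hypothetical workplace air levels
        print(ppm, "ppm ->", round(100 * equilibrium_cohb_fraction(ppm), 1), "% COHb (approx.)")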

Ethanol is also a widely used and researched agent associated with foetal growth retardation
(as well as congenital anomalies). In a prospective study of 9,236 births, it was found that
maternal alcohol consumption of more than 1.6 oz per day was associated with an increase in
stillbirths and growth-retarded infants (Kaminski, Rumeau and Schwartz 1978). Smaller
infant length and head circumference also are related to maternal alcohol ingestion.

In evaluating the possible effects of exposures on birth weight, some problematic issues must
be considered. Preterm delivery should be considered as a possible mediating outcome and the
potential effects on gestational age considered. In addition, pregnancies having longer
gestational length also have a longer opportunity for exposure. If enough women work late in
pregnancy, the longest cumulative exposure may be associated with the oldest gestational
ages and heaviest babies purely as an artifact. There are a number of procedures that can be
used to overcome this problem including a variant of the Cox life-table regression model,
which has the ability to handle time-dependent covariables.
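
One common way to set up such an analysis is a counting-process (start/stop) data layout, in
which each pregnancy contributes one record per interval of constant exposure and a
time-varying Cox model is fitted over gestational age. The sketch below, assuming the Python
lifelines package and entirely hypothetical column names and simulated data, shows the
structure; it is not the specific model variant used in the studies discussed here.

    import numpy as np
    import pandas as pd
    from lifelines import CoxTimeVaryingFitter

    rng = np.random.default_rng(1)
    rows = []
    for pid in range(300):
        exposed = int(rng.integers(0, 2))
        delivery = int(np.clip(rng.normal(39 - 1.5 * exposed, 2), 25, 42))   # weeks
        stop_work = int(rng.integers(20, 38)) if exposed else None
        # Counting-process layout: one row per interval of constant exposure.
        if exposed and stop_work < delivery:
            rows.append({"id": pid, "start": 0, "stop": stop_work,
                         "exposed": 1, "delivered": 0})
            rows.append({"id": pid, "start": stop_work, "stop": delivery,
                         "exposed": 0, "delivered": 1})
        else:
            rows.append({"id": pid, "start": 0, "stop": delivery,
                         "exposed": exposed, "delivered": 1})

    df = pd.DataFrame(rows)
    ctv = CoxTimeVaryingFitter()
    ctv.fit(df, id_col="id", event_col="delivered", start_col="start", stop_col="stop")
    ctv.print_summary()   # hazard of delivery as a function of current exposure status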

Another problem centres on how to define lowered birth weight. Often studies define lower
birth weight as a dichotomous variable, less than 2,500 g. The exposure, however, must have
a very powerful effect in order to produce a drastic drop in the infant’s weight. Birth weight
defined as a continuous variable and analysed in a multiple regression model is more sensitive
for detecting subtle effects. The relative paucity of significant findings in the literature in
relation to occupational exposures and SGA infants may, in part, be caused by ignoring
these design and analysis issues.
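
The gain in sensitivity from treating birth weight as a continuous variable can be
demonstrated by fitting both models to the same data. In the simulated example below (all
numbers are hypothetical), a linear regression usually detects a deficit of roughly 75 g,
while a logistic model on the under-2,500 g indicator often does not, because relatively few
births cross the cut-off.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 1500
    df = pd.DataFrame({"exposed": rng.integers(0, 2, size=n)})
    # Simulate a subtle exposure effect: a 75 g deficit on a ~3,200 g mean.
    df["birth_weight_g"] = rng.normal(3200 - 75 * df["exposed"], 450)
    df["lbw"] = (df["birth_weight_g"] < 2500).astype(int)

    linear = smf.ols("birth_weight_g ~ exposed", data=df).fit()
    logistic = smf.logit("lbw ~ exposed", data=df).fit(disp=False)

    print("continuous outcome p-value: ", linear.pvalues["exposed"])
    print("dichotomous outcome p-value:", logistic.pvalues["exposed"])
    # The continuous model usually flags the 75 g deficit; the <2,500 g indicator
    # often does not, because relatively few births cross the cut-off.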

Conclusions

Studies of adverse pregnancy outcomes must characterize exposures during a fairly narrow
window of time. If the woman has been transferred to another job or laid off work during a
critical period of time such as organogenesis, the exposure-effect relationship can be severely
altered. Therefore, the investigator is held to a high standard of identifying the woman’s
exposure during a critical small time period as compared with other studies of chronic
diseases, where errors of a few months or even years may have minimal impact.

Intrauterine growth retardation, congenital anomalies and spontaneous abortions are frequently
evaluated in occupational exposure studies. There is more than one approach available to
assess each outcome. These end-points are of public health importance due to both the
psychological and the financial costs. Generally, nonspecificity in the exposure-outcome
relationships has been observed, e.g., with exposure to lead, anaesthetic gases and solvents.
Because of the potential for nonspecificity in the exposure-effect relationship, studies should be
designed to assess several end-points associated with a range of possible mechanisms.
Preterm Delivery and Work

Written by ILO Content Manager

The reconciliation of work and maternity is an important public health issue in industrialized
countries, where more than 50% of women of child-bearing age work outside the home.
Working women, unions, employers, politicians and clinicians are all searching for ways of
preventing work-induced unfavourable reproductive outcomes. Women want to continue
working while pregnant, and may even consider their physician’s advice about lifestyle
modifications during pregnancy to be overprotective and unnecessarily restrictive.

Physiological Consequences of Pregnancy

At this point, it would be useful to review a few of the physiological consequences of
pregnancy that may interfere with work.

A pregnant woman undergoes profound changes which allow her to adapt to the needs of the
foetus. Most of these changes involve the modification of physiological functions that are
sensitive to changes of posture or physical activity—the circulatory system, the respiratory
system and water balance. As a result, physically active pregnant women may experience
unique physiological and physiopathological reactions.

The main physiological, anatomical and functional modifications undergone by pregnant
women are (Mamelle et al. 1982):

1. An increase in peripheral oxygen demand, leading to modification of the respiratory and
circulatory systems. Tidal volume begins to increase in the third month and may amount to
40% above pre-pregnancy values by the end of the pregnancy. The resultant increase in gas
exchange may increase the hazard of the inhalation of toxic volatiles, while hyperventilation
related to increased tidal volume may cause shortness of breath on exertion.
2. Cardiac output increases from the very beginning of pregnancy, as a result of an increase in
blood volume. This reduces the heart’s ability to adapt to exertion and also increases venous
pressure in the lower limbs, rendering standing for long periods difficult.
3. Anatomical modifications during pregnancy, including exaggeration of dorsolumbar lordosis,
enlargement of the polygon of support and increases in abdominal volume, affect static
activities.
4. A variety of other functional modifications occur during pregnancy. Nausea and vomiting
result in fatigue; daytime sleepiness results in inattention; mood changes and feelings of
anxiety may lead to interpersonal conflicts.
5. Finally, it is interesting to note that the daily energy requirements during pregnancy are
equivalent to the requirements of two to four hours of work.

Because of these profound changes, occupational exposures may have special consequences
in pregnant women and may result in unfavourable pregnancy outcomes.
Epidemiological Studies of Working Conditions and Preterm Delivery

Although there are many possible unfavourable pregnancy outcomes, we review here the data
on preterm delivery, defined as the birth of a child before the 37th week of gestation. Preterm
birth is associated with low birth weight and with significant complications for the newborn.
It remains a major public health concern and is an ongoing preoccupation among obstetricians.

When we began research in this field in the mid-1980s, there was relatively strong legislative
protection of pregnant women’s health in France, with prenatal maternity leave mandated to
start six weeks prior to the due date. Although the preterm delivery rate has fallen from 10 to
7% since then, it appeared to have levelled off. Because medical prevention had apparently
reached the limit of its powers, we investigated risk factors likely to be amenable to social
intervention. Our hypotheses were as follows:

• Is working per se a risk factor for preterm birth?
• Are certain occupations associated with an increased risk of preterm delivery?
• Do certain working conditions constitute a hazard to the pregnant woman and foetus?
• Are there social preventive measures which could help reduce the risk of preterm birth?

Our first study, conducted in 1977–78 in two hospital maternity wards, examined 3,400
women, of whom 1,900 worked during pregnancy and 1,500 remained at home (Mamelle,
Laumon and Lazar 1984). The women were interviewed immediately after delivery and asked
to describe their home and work lifestyle during pregnancy as accurately as possible.

We obtained the following results:

Work per se

The mere fact of working outside the home cannot be considered to be a risk factor for
preterm delivery, since women remaining at home exhibited a higher prematurity rate than
did women who worked outside the home (7.2 versus 5.8%).

Working conditions

An excessively long work week appears to be a risk factor, since there was a regular increase
in preterm delivery rate with the number of work hours. Retail-sector workers, medical social
workers, specialized workers and service personnel were at higher risk of preterm delivery
than were office workers, teachers, management, skilled workers or supervisors. The
prematurity rates in the two groups were 8.3 and 3.8% respectively.

Table 1. Identified sources of occupational fatigue

Occupational fatigue index   “HIGH” index if:

Posture                      Standing for more than 3 hours per day

Work on machines             Work on industrial conveyor belts; independent work on industrial
                             machines with strenuous effort

Physical load                Continuous or periodical physical effort; carrying loads of more
                             than 10 kg

Mental load                  Routine work; varied tasks requiring little attention, without
                             stimulation

Environment                  Significant noise level; cold temperature; very wet atmosphere;
                             handling of chemical substances

Source: Mamelle, Laumon and Lazar 1984.

Task analysis allowed identification of five sources of occupational fatigue: posture, work with
industrial machines, physical workload, mental workload and the work environment. Each of
these sources of occupational fatigue constitutes a risk factor for preterm delivery (see tables
1 and 2).

Table 2. Relative risks (RR) and fatigue indices for preterm delivery

Index Low index % High index % RR Statistical significance

Posture 4.5 7.2 1.6 Significant

Work on machines 5.6 8.8 1.6 Significant

Physical load 4.1 7.5 1.8 Highly significant

Mental load 4.0 7.8 2.0 Highly significant

Environment 4.9 9.4 1.9 Highly significant

Source: Mamelle, Laumon and Lazar 1984.

Exposure to multiple sources of fatigue may result in unfavourable pregnancy outcomes, as
evidenced by the significant increase of the rate of preterm delivery with an increased number
of sources of fatigue (table 3). Thus, 20% of women had concomitant exposure to at least
three sources of fatigue, and experienced a preterm delivery rate twice as high as other
women. Occupational fatigue and excessively long work weeks exert cumulative effects, such
that women who experience intense fatigue during long work weeks exhibit an even higher
prematurity rate. Preterm delivery rates increase further if the woman also has a medical risk
factor. The detection of occupational fatigue is therefore even more important than the
detection of medical risk factors.

Table 3. Relative risk of prematurity according to number of occupational fatigue indices

Number of high fatigue indices   Proportion of exposed women (%)   Estimated relative risk

0     24   1.0
1     28   2.2
2     25   2.4
3     15   4.1
4-5    8   4.8

Source: Mamelle, Laumon and Lazar 1984
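
The relative risks in table 2 appear to be simply the ratios of the preterm delivery rates in the high- and low-index groups (for posture, 7.2/4.5 ≈ 1.6), and the score behind table 3 is obtained by counting how many of the five indices are rated “HIGH”. The sketch below illustrates such a counting scheme under those assumptions; the function, the field names and the example job profile are hypothetical illustrations, not the instrument actually used in the study.

```python
# Sketch of a fatigue-index count after Mamelle, Laumon and Lazar 1984 (tables 1 and 3).
# The dictionary keys and the example profile are illustrative assumptions only.

# Approximate relative risks of preterm delivery by number of "HIGH" indices (table 3);
# the 4-5 category of the table is mapped to both 4 and 5 here.
RELATIVE_RISK = {0: 1.0, 1: 2.2, 2: 2.4, 3: 4.1, 4: 4.8, 5: 4.8}

def fatigue_score(job_profile: dict) -> int:
    """Count how many of the five fatigue indices are rated 'HIGH' (table 1 cut-offs)."""
    criteria = {
        "posture": job_profile.get("standing_hours_per_day", 0) > 3,
        "machines": job_profile.get("conveyor_or_strenuous_machine_work", False),
        "physical_load": (job_profile.get("carries_loads_over_10kg", False)
                          or job_profile.get("continuous_physical_effort", False)),
        "mental_load": job_profile.get("routine_work_without_stimulation", False),
        "environment": job_profile.get("noise_cold_wet_or_chemicals", False),
    }
    return sum(criteria.values())

# Example: a retail worker standing 6 h/day, carrying loads, in a noisy shop.
example = {"standing_hours_per_day": 6,
           "carries_loads_over_10kg": True,
           "noise_cold_wet_or_chemicals": True}
score = fatigue_score(example)
print(score, RELATIVE_RISK[score])   # 3 high indices, estimated relative risk about 4.1
```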

European and North American studies have confirmed our results, and our fatigue scale has
been shown to be reproducible in other surveys and countries.

In a case-control follow-up study conducted in France a few years later in the same maternity
wards (Mamelle and Munoz 1987), only two of the five previously defined indices of fatigue
were significantly related to preterm delivery. It should however be noted that women had a
greater opportunity to sit down and were withdrawn from physically demanding tasks as a
result of preventive measures implemented in the workplaces during this period. The fatigue
scale nevertheless remained a predictor of preterm delivery in this second study.

In a study in Montreal, Quebec (McDonald et al. 1988), 22,000 pregnant women were
interviewed retrospectively about their working conditions. Long work weeks, alternating
shift work and carrying heavy loads were all shown to exert significant effects. The other
factors studied did not appear to be related to preterm delivery, although there appears to be a
significant association between preterm delivery and a fatigue scale based on the total number
of sources of fatigue.

With the exception of work with industrial machines, no significant association between
working conditions and preterm delivery was found in a French retrospective study of a
representative sample of 5,000 pregnant women (Saurel-Cubizolles and Kaminski 1987).
However, a fatigue scale inspired by our own was found to be significantly associated with
preterm delivery.

In the United States, Homer, Beredford and James (1990), in a historical cohort study,
confirmed the association between physical workload and an increased risk of preterm
delivery. Teitelman and co-workers (1990), in a prospective study of 1,200 pregnant women,
whose work was classified as sedentary, active or standing, on the basis of job description,
demonstrated an association between work in a standing position and preterm delivery.

Barbara Luke and co-workers (in press) conducted a retrospective study of US nurses who
worked during pregnancy. Using our occupational risk scale, she obtained similar results to
ours, that is, an association between preterm delivery and long work weeks, standing work,
heavy workload and unfavourable work environment. In addition, the risk of preterm delivery
was significantly higher among women with concomitant exposure to three or four sources of
fatigue. It should be noted that this study included over half of all nurses in the United States.

Contradictory results have however been reported. These may be due to small sample sizes
(Berkowitz 1981), different definitions of prematurity (Launer et al. 1990) and classification
of working conditions on the basis of job description rather than actual workstation analysis
(Klebanoff, Shiono and Carey 1990). In some cases, workstations have been characterized on
a theoretical basis only—by the occupational physician, for example, rather than by the
women themselves (Peoples-Sheps et al. 1991). We feel that it is important to take subjective
fatigue—that is, fatigue as it is described and experienced by women—into account in the
studies.

Finally, it is possible that the negative results are related to the implementation of preventive
measures. This was the case in the prospective study of Ahlborg, Bodin and Hogstedt (1990),
in which 3,900 active Swedish women completed a self-administered questionnaire at their
first prenatal visit. The only reported risk factor for preterm delivery was carrying loads
weighing more than 12 kg more often than 50 times per week, and even then the relative risk
of 1.7 was not significant. Ahlborg himself points out that preventive measures in the form of
paid maternity leave and the right to perform less tiring work during the two months preceding
their due date had been implemented for pregnant women engaged in tiring work. Maternity
leaves were five times as frequent among women who described their work as tiring and
involving the carrying of heavy loads. Ahlborg concludes that the risk of preterm delivery
may have been minimized by these preventive measures.

Preventive Interventions: French Examples

Are the results of aetiological studies convincing enough for preventive interventions to be
applied and evaluated? The first question which must be answered is whether there is a public
health justification for the application of social preventive measures designed to reduce the
rate of preterm delivery.

Using data from our previous studies, we have estimated the proportion of preterm births
caused by occupational factors. Assuming a rate of preterm delivery of 10% in populations
exposed to intense fatigue and a rate of 4.5% in non-exposed populations, we estimate that
21% of premature births are caused by occupational factors. Reducing occupational fatigue
could therefore result in the elimination of one-fifth of all preterm births in French working
women. This is ample justification for the implementation of social preventive measures.
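
The 21% figure can be reproduced with a standard population attributable fraction calculation. The sketch below assumes an exposure prevalence of roughly 22%, which is our reading of the proportion of women with three or more high fatigue indices reported above; it illustrates the arithmetic and is not the exact computation published by the authors.

```python
# Sketch: population attributable fraction (PAF) behind the "21% of preterm births" estimate.
# The exposure prevalence is an assumption taken from the proportion of women with three or
# more high fatigue indices; it is not stated explicitly in the paragraph above.

rate_exposed = 0.10        # preterm rate in populations exposed to intense fatigue
rate_unexposed = 0.045     # preterm rate in non-exposed populations
prevalence_exposed = 0.22  # assumed proportion of working women with intense fatigue

overall_rate = (prevalence_exposed * rate_exposed
                + (1 - prevalence_exposed) * rate_unexposed)
paf = (overall_rate - rate_unexposed) / overall_rate
print(f"PAF = {paf:.0%}")  # approximately 21%
```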

What preventive measures can be applied? The results of all the studies lead to the conclusion
that working hours can be reduced, fatigue can be lessened through workstation modification,
work breaks can be allowed and prenatal leave can be lengthened. Three cost-equivalent
alternatives are available:

 reducing the work week to 30 hours starting from the 20th week of gestation
 prescribing a work break of one week each month starting in the 20th week of gestation
 beginning prenatal leave at the 28th week of gestation.

It is relevant to recall here that French legislation provides the following preventive measures
for pregnant women:

 guaranteed employment after childbirth
 reduction of the workday by 30 to 60 minutes, applied through collective agreements
 workstation modification in cases of incompatibility with pregnancy
 work breaks during pregnancy, prescribed by attending physicians
 prenatal maternity leave six weeks prior to the due date, with a further two weeks available
in case of complications
 postnatal maternity leave of ten weeks.
A one-year prospective observational study of 23,000 women employed in 50 companies in
the Rhône-Alpes region of France (Bertucat, Mamelle and Munoz 1987) examined the effect of
tiring work conditions on preterm delivery. Over the period of the study, 1,150 babies were
born to the study population. We analysed the modifications of working conditions to
accommodate pregnancy and the relation of these modifications to preterm delivery
(Mamelle, Bertucat and Munoz 1989), and observed that:

 Workstation modification was performed for only 8% of women.
 33% of women worked their normal shifts, with the others having their workday reduced by
30 to 60 minutes.
 50% of women took at least one work break, apart from their prenatal maternity leave;
fatigue was the cause in one-third of cases.
 90% of women stopped working before their legal maternity leave began and obtained at
least the two weeks leave allowed for in the case of complications of pregnancy; fatigue was
the cause in half the cases.
 In all, given the legal prenatal leave period of six weeks prior to the due date (with an
additional two weeks available in some cases), the real duration of prenatal maternity leave
was 12 weeks in this population of women subjected to tiring work conditions.

Do these modifications of work have any effect on the outcome of pregnancy? Workstation
modification and the slight reduction of the workday (30 to 60 min) were both associated with
non-significant reductions of the risk of preterm delivery. We believe that further reductions
of the work week would have a greater effect (table 4).

Table 4. Relative risks of prematurity associated with modifications in working conditions

Modifications in working conditions     Number of women   Preterm birth rate (%)   Relative risk (95% confidence interval)

Change in work situation
  No                                    1,062             6.2                      0.5 (0.2-1.6)
  Yes                                   87                3.4

Reduction of weekly working hours
  No                                    388               7.7                      0.7 (0.4-1.1)
  Yes                                   761               5.1

Episodes of sick leave¹
  No                                    357               8.0                      0.4 (0.2-0.7)
  Yes                                   421               3.1

Increase of antenatal maternity leave¹
  None or only additional 2 weeks       487               4.3                      1.7 (0.9-3.0)
  Yes                                   291               7.2

¹ In a reduced sample of 778 women with no previous or present obstetric pathology.

Source: Mamelle, Bertucat and Munoz 1989.

To analyse the relation between prenatal leave, work breaks and preterm delivery, it is
necessary to discriminate between preventive and curative work breaks. This requires
restriction of the analysis to women with uncomplicated pregnancies. Our analysis of this
subgroup revealed a reduction of the preterm delivery rate among women who took work
breaks during their pregnancy, but not in those who took prolonged prenatal leave (table 4).

This observational study demonstrated that women who work in tiring conditions take more
work breaks during their pregnancies than do other women, and that these breaks, particularly
when motivated by intense fatigue, are associated with reductions of the risk of preterm
delivery (Mamelle, Bertucat and Munoz 1989).

Choice of Preventive Strategies in France

As epidemiologists, we would like to see these observations verified by experimental
preventive studies. We must however ask ourselves which is more reasonable: to wait for
such studies or to recommend social measures aimed at preventing preterm delivery now?

The French Government recently decided to include a “work and pregnancy guide”, identical
to our fatigue scale, in each pregnant woman’s medical record. Women can thus calculate
their fatigue score for themselves. If work conditions are arduous, they may ask the
occupational physician or the person responsible for occupational safety in their company to
implement modifications aimed at alleviating their workload. Should this be refused, they can
ask their attending physician to prescribe rest weeks during their pregnancy, and even to
prolong their prenatal maternity leave.

The challenge is now to identify preventive strategies that are well adapted to legislation and
social conditions in every country. This requires a health economics approach to the
evaluation and comparison of preventive strategies. Before any preventive measure can be
considered generally applicable, many factors have to be taken into consideration. These
include effectiveness, of course, but also low cost to the social security system, resultant job
creation, women’s preferences and the acceptability to employers and unions.

This type of problem can be resolved using multicriteria methods such as the Electra method.
These methods allow both the classification of preventive strategies on the basis of each of a
series of criteria, and the weighting of the criteria on the basis of political considerations.
Special importance can thus be given to low cost to the social security system or to the ability
of women to choose, for example (Mamelle et al. 1986). While the strategies recommended
by these methods vary depending on the decision makers and political options, effectiveness
is always maintained from the public health standpoint.
Occupational and Environmental Exposures to the Newborn

Written by ILO Content Manager

Environmental hazards pose a special risk for infants and young children. Children are not
“little adults”, either in the way they absorb and eliminate chemicals or in their response to
toxic exposures. Neonatal exposures may have a greater impact because the body surface area
is disproportionately large and metabolic capacity (or the ability to eliminate chemicals) is
relatively underdeveloped. At the same time, the potential toxic effects are greater, because
the brain, the lungs and the immune system are still developing during the early years of life.

Opportunities for exposure exist at home, in day care facilities and on playgrounds:

 Young children can absorb environmental agents from the air (by inhalation) or through the
skin.
 Ingestion is a major route of exposure, especially when children begin to exhibit hand-to-
mouth activity.
 Substances on the hair, clothes or hands of the parents can be transferred to the young child.
 Breast milk is another potential source of exposure for infants, although the potential
benefits of nursing far outweigh the potential toxic effects of chemicals in breast milk.

For a number of the health effects discussed in connection with neonatal exposures, it is
difficult to distinguish prenatal from postnatal events. Exposures taking place before birth
(through the placenta) can continue to be manifest in early childhood. Both lead and
environmental tobacco smoke have been associated with deficits in cognitive development
and lung function both before and after birth. In this review, we have attempted to focus on
postnatal exposures and their effects on the health of very young children.

Lead and Other Heavy Metals

Among the heavy metals, lead (Pb) is the most important elemental exposure for humans in
both environmental and occupational circumstances. Significant occupational exposures occur
in battery manufacture, smelters, soldering, welding, construction and paint removal. Parents
employed in these industries have long been known to bring dust home on their clothes that
can be absorbed by their children. The primary route of absorption by children is through
ingestion of lead-contaminated paint chips, dust and water. Respiratory absorption is efficient,
and inhalation becomes a significant exposure pathway if an aerosol of lead or alkyl lead is
present (Clement International Corporation 1991).

Lead poisoning can damage virtually every organ system, but current levels of exposure have
been associated chiefly with neurological and developmental changes in children. In addition,
renal and haematological disease have been observed among both adults and children
intensely exposed to lead. Cardiovascular disease as well as reproductive dysfunction are
known sequelae of lead exposure among adults. Subclinical renal, cardiovascular and
reproductive effects are suspected to arise from lower, chronic lead exposure, and limited data
support this idea. Animal data support human findings (Sager and Girard 1994).

In terms of measurable dose, neurological effects range from IQ deficits at low exposures
(blood lead = 10 μg/dl) to encephalopathy (80 μg/dl). Levels of concern in children in 1985
were 25 μg/dl, which was lowered to 10 μg/dl in 1993.
Neonatal exposure, as it resulted from dust brought home by working parents, was described
as “fouling the nest” by Chisholm in 1978. Since that time, preventive measures, such as
showering and changing clothing before leaving the workplace, have reduced the take-home
dust burden. However, occupationally derived lead is still an important potential source of
neonatal exposure today. A survey of children in Denmark found that blood lead was
approximately twice as high among children of exposed workers as in homes with only
non-occupational exposures (Grandjean and Bach 1986). Exposure of children to
occupationally derived lead has been documented among electric cable splicers (Rinehart and
Yanagisawa 1993) and capacitor manufacturing workers (Kaye, Novotny and Tucker 1987).

Non-occupational sources of environmental lead exposure continue to be a serious hazard to
young children. Since the gradual ban of tetraethyl lead as a fuel additive in the United States
(in 1978), average blood lead levels in children have declined from 13 to 3 μg/dl (Pirkle et al.
1994). Paint chips and paint dust are now the principal cause of childhood lead poisoning in
the United States (Roer 1991). For example, in one report, younger children (neonates aged
less than 11 months) with excessive lead in their blood were at greatest risk of exposure
through dust and water while older children (aged 24 months) were at risk more from
ingestion of paint chips (pica) (Shannon and Graef 1992). Lead abatement through paint
removal has been successful in protecting children from exposure to dust and paint chips
(Farfel, Chisholm and Rohde 1994). Ironically, workers engaged in this enterprise have been
shown to carry lead dust home on their clothes. In addition, it has been noted that the
continuing exposure of young children to lead disproportionately affects economically
disadvantaged children (Brody et al. 1994; Goldman and Carra 1994). Part of this inequity
arises from the poor condition of housing; as early as 1982, it was shown that the extent of
deterioration of housing was directly related to blood lead levels in children (Clement
International Corporation 1991).

Another potential source of occupationally derived exposure for the neonate is lead in breast
milk. Higher levels of lead in breast milk have been linked to both occupational and
environmental sources (Ryu, Ziegler and Fomon 1978; Dabeka et al. 1986). The
concentrations of lead in milk are small relative to blood (approximately 1/5 to 1/2) (Wolff
1993), but the large volume of breast milk ingested by an infant can add milligram quantities
to the body burden. In comparison, there is normally less than 0.03 mg Pb in the circulating
blood of an infant and the usual intake is less than 20 μg per day (Clement International
Corporation 1991). Indeed, absorption from breast milk is reflected in the blood lead level of
infants (Rabinowitz, Leviton and Needleman 1985; Ryu et al. 1983; Ziegler et al. 1978). It
should be noted that normal lead levels in breast milk are not excessive, and lactation
contributes an amount similar to that from other sources of infant nutrition. By comparison, a
small paint chip could contain more than 10 mg (10,000 μg) of lead.
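
A rough, illustrative calculation shows why breast milk can nonetheless matter. All input values below are assumptions chosen to be consistent with the ranges quoted above (milk lead at roughly one-third of maternal blood lead, an occupationally relevant maternal blood lead of 10 μg/dl, and a typical daily milk intake); they are not measured data.

```python
# Rough sketch of the daily lead dose an infant could receive from breast milk.
# Every input is an illustrative assumption within the ranges quoted in the text.

maternal_blood_pb = 10.0    # ug/dl, an occupationally relevant maternal blood lead level
milk_to_blood_ratio = 0.3   # milk Pb is roughly 1/5 to 1/2 of blood Pb
daily_milk_intake_l = 0.75  # litres of breast milk per day for a young infant

milk_pb_ug_per_l = maternal_blood_pb * 10 * milk_to_blood_ratio  # convert ug/dl to ug/l, then scale
daily_dose_ug = milk_pb_ug_per_l * daily_milk_intake_l
print(f"milk Pb ~ {milk_pb_ug_per_l:.0f} ug/l, daily dose ~ {daily_dose_ug:.0f} ug")
# About 22 ug/day under these assumptions, i.e., on the order of the usual total daily
# intake cited above, and roughly a milligram over a month or two of nursing.
```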

Developmental decrements in children have been linked with both prenatal and postnatal
exposures to lead. Prenatal exposure is thought to be responsible for lead-related deficits in
mental and behavioural development that have been found in children until the age of two to
four years (Landrigan and Cambell 1991; Bellinger et al. 1987). The effects of postnatal lead
exposure, such as that experienced by the neonate from occupational sources, may be detected
in children from ages two to six and even later. Among these are problem behaviour and
lower intelligence (Bellinger et al. 1994). These effects are not confined only to high
exposures; they have been observed at relatively low levels, e.g., where blood lead levels are
in the range of 10 μg/dl (Needleman and Bellinger 1984).
Mercury (Hg) exposure from the environment may occur as inorganic and organic (mainly
methyl) forms. Recent occupational exposures to mercury have been found among workers in
thermometer manufacture and in repair of high-voltage equipment containing mercury. Other
occupations with potential exposures include painting, dentistry, plumbing and chlorine
manufacture (Agency for Toxic Substances and Disease Registry 1992).

Prenatal and postnatal mercury poisoning has been well documented among children.
Children are more susceptible to effects of methylmercury than adults. This is largely because
the developing human central nervous system is so “remarkably sensitive” to methylmercury,
an effect also seen at low levels in animals (Clarkson, Nordberg and Sager 1985).
Methylmercury exposures in children arise chiefly from ingestion of contaminated fish or
from breast milk, while elemental mercury is derived from occupational exposures.
Household exposure incidental to occupational exposure has been noted (Zirschky and
Wetherell 1987). Accidental exposures in the home have been reported in recent years in
domestic industries (Meeks, Keith and Tanner 1990; Rowens et al. 1991) and in an accidental
spill of metallic mercury (Florentine and Sanfilippo 1991). Elemental mercury exposure occurs
mainly by inhalation, while alkyl mercury can be absorbed by ingestion, inhalation or dermal
contact.

In the best-studied episode of poisoning, sensory and motor dysfunction and mental
retardation were found following very high exposures to methylmercury either in utero or
from breast milk (Bakir et al. 1973). Maternal exposures resulted from ingestion of
methylmercury that had been used as a fungicide on grain.

Pesticides and Related Chemicals

Several hundred million tons of pesticides are produced worldwide each year. Herbicides,
fungicides and insecticides are employed mainly in agriculture by developed countries to
improve crop yield and quality. Wood preservatives are a much smaller, but still a major, part
of the market. Home and garden use represents a relatively minor proportion of total
consumption, but from the point of view of neonatal toxicity, domestic poisonings are perhaps
the most numerous. Occupational exposure is also a potential source of indirect exposure to
infants if a parent is involved in work that uses pesticides. Exposure to pesticides is possible
through dermal absorption, inhalation and ingestion. More than 50 pesticides have been
declared carcinogenic in animals (McConnell 1986).

Organochlorine pesticides include aromatic compounds, such as DDT (bis(4-chlorophenyl)-
1,1,1-trichloroethane), and cyclodienes, such as dieldrin. DDT came into use in the early
1940s as an effective means to eliminate mosquitoes carrying malaria, an application that is
still widely employed today in developing countries. Lindane is an organochlorine used
widely to control body lice and in agriculture, especially in developing countries.
Polychlorinated biphenyls (PCBs), another fat-soluble organochlorine mixture used since the
1940s, pose a potential health risk to young children exposed through breast milk and other
contaminated foods. Both lindane and PCBs are discussed separately in this chapter.
Polybrominated biphenyls (PBBs) also have been detected in breast milk, almost exclusively in
Michigan. Here, a fire retardant inadvertently mixed into livestock feed in 1973-74 became
widely dispersed across the state through dairy and meat products.

Chlordane has been used as a pesticide and as a termiticide in houses, where it is effective for
decades, no doubt because of its persistence. Exposure to this chemical can be from dietary
and direct respiratory or dermal absorption. Levels in human milk in Japan could be related
both to diet and to how recently homes had been treated. Women living in homes treated more
than two years earlier had chlordane levels in milk three times those of women living in
untreated homes (Taguchi and Yakushiji 1988).

Diet is the main source of persistent organochlorines, but smoking, air and water may also
contribute to exposure. This class of pesticides, also termed halogenated hydrocarbons, is
quite persistent in the environment, since these are lipophilic, resistant to metabolism or
biodegradation and exhibit low volatility. Several hundreds of ppm have been found in human
and animal fat among those with highest exposures. Because of their reproductive toxicity in
wildlife and their tendency to bioaccumulate, organochlorines have been largely banned or
restricted in developed countries.

At very high doses, neurotoxicity has been observed with organochlorines, but potential long-
term health effects are of more concern among humans. Although chronic health effects have
not been widely documented, hepatotoxicity, cancer and reproductive dysfunction have been
found in experimental animals and in wildlife. Health concerns arise mainly from
observations in animal studies of carcinogenesis and of profound changes in the liver and the
immune system.

Organophosphates and carbamates are less persistent than the organochlorines and are the most
widely used class of insecticides internationally. Pesticides of this class are degraded
relatively quickly in the environment and in the body. A number of the organophosphates and
carbamates exhibit high acute neurotoxicity and in certain cases chronic neurotoxicity as well.
Dermatitis is also a widely reported symptom of pesticide exposure.

The petroleum-based products used to apply some pesticides are also of potential concern.
Chronic effects including haematopoietic and other childhood cancers have been associated
with parental or residential exposures to pesticides, but the epidemiological data are quite
limited. Nevertheless, based on the data from animal studies, exposures to pesticides should
be avoided.

For the newborn, a wide spectrum of exposure possibilities and toxic effects have been
reported. Among children who required hospitalization for acute poisoning, most had
inadvertently ingested pesticide products, while a significant number had been exposed while
playing on sprayed carpets (Casey, Thomson and Vale 1994; Zwiener and Ginsburg 1988).
Contamination of workers’ clothing by pesticide dust or liquid has long been recognized.
Therefore, this route provides ample opportunity for home exposures unless workers take
proper hygienic precautions after work. For example, an entire family had elevated levels of
chlordecone (Kepone) in their blood, attributed to home laundering of a worker’s clothes
(Grandjean and Bach 1986). Household exposure to TCDD (dioxin) has been documented by
the occurrence of chloracne in the son and wife of two workers exposed in the aftermath of an
explosion (Jensen, Sneddon and Walker 1972).

Most of the possible exposures to infants arise from pesticide applications within and around
the home (Lewis, Fortmann and Camann 1994). Dust in home carpets has been found to be
extensively contaminated with numerous pesticides (Fenske et al. 1994). Much of reported
home contamination has been attributed to flea extermination or to lawn and garden
application of pesticides (Davis, Bronson and Garcia 1992). Infant absorption of chlorpyrifos
after treatment of homes for fleas has been predicted to exceed safe levels. Indeed, indoor air
levels following such fumigation procedures do not always rapidly diminish to safe levels.

Breast milk is a potential source of pesticide exposure for the neonate. Human milk
contamination with pesticides, especially the organochlorines, has been known for decades.
Occupational and environmental exposures can lead to significant pesticide contamination of
breast milk (D’Ercole et al. 1976; McConnell 1986). Organochlorines, which in the past have
been present in breast milk at excessive levels, are declining in developed countries, paralleling
the decline in adipose concentrations that has occurred after restriction of these compounds.
Therefore, DDT contamination of human milk is now highest in developing countries. There
is little evidence of organophosphates in breast milk. This may be attributable to properties of
water solubility and rapid metabolism of these compounds in the body.

Ingestion of water contaminated with pesticides is also a potential health risk for the neonate.
This problem is most pronounced where infant formula must be prepared using water. Otherwise,
commercial infant formulae are relatively free of contaminants (National Research Council
1993). Food contamination with pesticides may also lead to infant exposure. Contamination
of commercial milk, fruits and vegetables with pesticides exists at very low levels even in
developed countries where regulation and monitoring are most vigorous (The Referee 1994).
Although milk comprises most of the infant diet, fruits (especially apples) and vegetables
(especially carrots) are also consumed in a significant amount by young children and therefore
represent a possible source of pesticide exposure.

In the industrialized countries, including the United States and western Europe, most of the
organochlorine pesticides, including DDT, chlordane, dieldrin and lindane, have been either
banned, suspended or restricted since the 1970s (Maxcy Rosenau-Last 1994). Pesticides still
used for agricultural and non-agricultural purposes are regulated in terms of their levels in
foods, water and pharmaceutical products. As a result of this regulation, the levels of
pesticides in adipose tissue and human milk have significantly declined over the past four
decades. However, the organochlorines are still widely used in developing countries, where,
for example, lindane and DDT are among the most frequently employed pesticides for
agricultural use and for malaria control (Awumbila and Bokuma 1994).

Lindane

Lindane is the γ-isomer and active ingredient of the technical grade of benzene hexachloride
(BHC). BHC, also known as hexachlorocyclohexane (HCH), contains 40 to 90% of other
isomers— α, β and δ. This organochlorine has been used as an agricultural and non-
agricultural pesticide throughout the world since 1949. Occupational exposures may occur
during the manufacture, formulation and application of BHC. Lindane as a pharmaceutical
preparation in creams, lotions and shampoos is also widely used to treat scabies and body lice.
Because these skin conditions commonly occur among infants and children, medical
treatment can lead to absorption of BHC by infants through the skin. Neonatal exposure can
also occur by inhalation of vapour or dust that may be brought home by a parent or that may
linger after home use. Dietary intake is also a possible means of exposure to infants since
BHC has been detected in human milk, dairy products and other foods, as have many
organochlorine insecticides. Exposure through breast milk was more prevalent in the United
States prior to the ban on the commercial production of lindane. According to the IARC
(International Agency for Research on Cancer 1987), it is possible that
hexachlorocyclohexane is carcinogenic to humans. However, evidence for adverse health
outcomes among infants has been reported chiefly as effects on the neurological and
haematopoietic systems.

Household exposure to lindane has been described in the wife of a pesticide formulator,
demonstrating the potential for similar neonatal exposures. The wife had 5 ng/ml of γ-BHC in
her blood, a concentration lower than that of her husband (table 1) (Starr et al. 1974).
Presumably, γ-BHC was brought into the home on the body and/or clothes of the worker.
Levels of γ-BHC in the woman and her husband were higher than those reported in children
treated with lotion containing 0.3 to 1.0% BHC.

BHC in breast milk exists mainly as the β-isomer (Smith 1991). The half-life of the γ-isomer
in the human body is approximately one day, while the β-isomer accumulates.
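
A simple first-order clearance sketch may help illustrate why the two isomers behave so differently under repeated exposure: with a one-day half-life the body burden plateaus at about twice the daily dose, whereas a long half-life lets the burden build to a large multiple of it. The daily dose and the β-isomer half-life used below are purely illustrative assumptions, not measured values.

```python
# Sketch: steady-state body burden under repeated daily intake with first-order clearance.
# After many days the burden approaches dose / (1 - 2**(-1 / half_life_days)).
# The daily dose and the assumed beta-isomer half-life are illustrative only.

def steady_state_burden(daily_dose: float, half_life_days: float) -> float:
    retention_per_day = 0.5 ** (1.0 / half_life_days)  # fraction remaining after one day
    return daily_dose / (1.0 - retention_per_day)

dose = 1.0  # arbitrary units per day
print(steady_state_burden(dose, 1.0))    # gamma-isomer, ~1-day half-life: about 2x the daily dose
print(steady_state_burden(dose, 180.0))  # assumed long beta-isomer half-life: about 260x the daily dose
```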

Table 1. Potential sources and levels of exposure to newborns

Source of exposure                                         γ-BHC in blood (ng/ml; ppb)

Occupational exposures          Low exposures              5
                                High exposures             36
Adult male                      Attempted suicide          1300
Child                           Acute poisoning            100-800
Children                        1% BHC lotion (average)    13
Case report of home exposure¹   Husband                    17
                                Wife                       5
Unexposed populations           Yugoslavia                 5
since 1980²                     Africa                     7
                                Brazil                     9
                                India                      75

¹ Starr et al. (1974); other data from Smith (1991).
² Largely β-isomer.

Dermal absorption of lindane from pharmaceutical products is a function of the amount
applied to the skin and duration of exposure. Compared with adults, infants and young
children appear to be more susceptible to the toxic effects of lindane (Clement International
Corporation 1992). One reason may be that dermal absorption is enhanced by increased
permeability of the infant’s skin and a large surface-to-volume ratio. Levels in the neonate
may persist longer because the metabolism of BHC is less efficient in infants and young
children. In addition, exposure in neonates may be increased by licking or mouthing treated
areas (Kramer et al. 1990). A hot shower or bath before dermal application of medical
products may facilitate dermal absorption, thereby exacerbating toxicity.

In a number of reported cases of accidental lindane poisoning, overt toxic effects have been
described, some in young children. In one case, a two-month-old infant died after multiple
exposures to 1% lindane lotion, including a full-body application following a hot bath (Davies
et al. 1983).

Lindane production and use is restricted in most developed countries. Lindane is still used
extensively in other countries for agricultural purposes, as noted in a study of pesticide use on
farms in Ghana, where lindane accounted for 35 and 85% of pesticide use for farmers and
herdsmen, respectively (Awumbila and Bokuma 1994).

Polychlorinated biphenyls

Polychlorinated biphenyls were used from the mid-1940s until the late 1970s as insulating fluids
in electrical capacitors and transformers. Residues are still present in the environment because
of pollution, which is due largely to improper disposal or accidental spills. Some equipment
still in use or stored remains a potential source of contamination. An incident has been
reported in which children had detectable levels of PCBs in their blood following exposure
while playing with capacitors (Wolff and Schecter 1991). Exposure in the wife of an exposed
worker has also been reported (Fishbein and Wolff 1987).

In two studies of environmental exposures, pre- and postnatal exposure to PCBs has been
associated with small but significant effects in children. In one study, slightly impaired motor
development was detected among children whose mothers had immediate postnatal breast
milk PCB levels in the upper 95th percentile of the study group (Rogan et al. 1986). In the
other, sensory deficits (as well as smaller gestational size) were seen among children with
blood levels in approximately the top 25% (Jacobson et al. 1985; Fein et al. 1984). These
exposure levels were in the upper range for the studies (above 3 ppm in mother’s milk (fat basis)
and above 3 ng/ml in children’s blood), yet these are not excessively high. Common
occupational exposures result in levels ten to 100 times higher (Wolff 1985). In both studies,
effects were attributed to prenatal exposure. Such results however sound a cautionary note for
unduly exposing neonates to such chemicals both pre- and postnatally.

Solvents

Solvents are a group of volatile or semi-volatile liquids that are used mainly to dissolve other
substances. Exposure to solvents can occur in manufacturing processes, for example hexane
exposure during distillation of petroleum products. For most persons, exposures to solvents
will arise while these are being used on the job or in the home. Common industrial
applications include dry cleaning, degreasing, painting and paint removal, and printing.
Within the home, direct contact with solvents is possible during use of products such as metal
cleaners, dry cleaning products, paint thinners or sprays.

The major routes of exposure for solvents in both adults and infants are through respiratory
and dermal absorption. Ingestion of breast milk is one means of neonatal exposure to solvents
derived from the parent’s work. Because of the brief half-life of most solvents, their duration
in breast milk will be similarly short. However, following maternal exposure, some solvents
will be present in breast milk at least for a short time (at least one half-life). Solvents that have
been detected in breast milk include tetrachloroethylene, carbon disulphide and halothane (an
anaesthetic). A detailed review of potential infant exposure to tetrachloroethylene (TCE) has
concluded that levels in breast milk can easily exceed recommended health risk guidelines
(Schreiber 1993). Excess risk was highest for infants whose mothers might be exposed in the
workplace (58 to 600 per million persons). For the highest non-occupational exposures,
excess risks of 36 to 220 per 10 million persons were estimated; such exposures can exist in
homes directly above dry-cleaners. It was further estimated that milk concentrations of TCE
would return to “normal” (pre-exposure) levels four to eight weeks after cessation of exposure.

Non-occupational exposures are possible for the infant in the home where solvents or solvent-
based products are used. Indoor air has very low, but consistently detectable, levels of
solvents like tetrachloroethylene. Water may also contain volatile organic compounds of the
same type.

Mineral Dusts and Fibres: Asbestos, Fibreglass, Rock Wool, Zeolites, Talc

Mineral dust and fibre exposure in the workplace causes respiratory disease, including lung
cancer, among workers. Dust exposure is a potential problem for the newborn if a parent
carries particles into the home on the clothes or body. With asbestos, fibres from the workplace
have been found in the home environment, and resulting exposures of family members have
been termed bystander or family exposures. Documentation of familial asbestos disease has
been possible because of the occurrence of a signal tumour, mesothelioma, that is primarily
associated with asbestos exposure. Mesothelioma is a cancer of the pleura or peritoneum (linings
of lung and abdomen, respectively) that occurs following a long latency period, typically 30
to 40 years after the first asbestos exposure. The aetiology of this disease appears to be related
only to the length of time after initial exposure, not to intensity or duration, nor to age at first
exposure (Nicholson 1986; Otte, Sigsgaard and Kjaerulff 1990). Respiratory abnormalities
have also been attributed to bystander asbestos exposure (Grandjean and Bach 1986).
Extensive animal experiments support the human observations.

Most cases of familial mesothelioma have been reported among wives of exposed miners,
millers, manufacturers and insulators. However, a number of childhood exposures have also
been associated with disease. Quite a few of these children had initial contact that occurred at
an early age (Dawson et al. 1992; Anderson et al. 1976; Roggli and Longo 1991). For
example, in one investigation of 24 familial contacts with mesothelioma who lived in a
crocidolite asbestos mining town, seven cases were identified whose ages were 29 to 39 years
at diagnosis or death and whose initial exposure had occurred at less than one year of age
(n=5) or at three years (n=2) (Hansen et al. 1993).

Exposure to asbestos is clearly causative for mesothelioma, but an epigenetic mechanism has
been further proposed to account for unusual clustering of cases within certain families. Thus,
the occurrence of mesothelioma among 64 persons in 27 families suggests a genetic trait that
may render certain individuals more sensitive to the asbestos insult leading to this disease
(Dawson et al. 1992; Bianchi, Brollo and Zuch 1993). However, it also has been suggested
that exposure alone may provide an adequate explanation for the reported familial aggregation
(Alderson 1986).

Other inorganic dusts associated with occupational disease include fibreglass, zeolites and
talc. Both asbestos and fibreglass have been widely used as insulating materials. Pulmonary
fibrosis and cancer are associated with asbestos and much less clearly with fibreglass.
Mesothelioma has been reported in areas of Turkey with indigenous exposures to natural
zeolites. Exposures to asbestos may also arise from non-occupational sources. Diapers
(“nappies”) constructed from asbestos fibre were implicated as a source of childhood asbestos
exposure (Li, Dreyfus and Antman 1989); however, parental clothing was not excluded as a
source of asbestos contact in this report. Asbestos also has been found in cigarettes,
hairdryers, floor tiles and some types of talcum powder. Its use has been eliminated in many
countries. However, an important consideration for children is residual asbestos insulation in
schools, which has been widely investigated as a potential public health problem.

Environmental Tobacco Smoke

Environmental tobacco smoke (ETS) is a combination of exhaled smoke and smoke emitted
from the smoldering cigarette. Although ETS is not itself a source of occupational exposure
that may affect the neonate, it is reviewed here because of its potential to cause adverse health
effects and because it provides a good example of other aerosol exposures. Exposure of a non-
smoker to ETS is often described as passive or involuntary smoking. Prenatal exposure to
ETS is clearly associated with deficits or impairments in foetal growth. It is difficult to
distinguish postnatal outcomes from effects of ETS in the prenatal period, since parental
smoking is rarely confined to one time or the other. However, there is evidence to support a
relationship of postnatal exposure to ETS with respiratory illness and impaired lung function.
The similarity of these findings to experiences among adults strengthens the association.

ETS has been well characterized and extensively studied in terms of human exposure and
health effects. ETS is a human carcinogen (US Environmental Protection Agency 1992). ETS
exposure can be assessed by measuring levels of nicotine, a component of tobacco, and
cotinine, its major metabolite, in biological fluids including saliva, blood and urine. Nicotine
and cotinine have also been detected in breast milk. Cotinine has also been found in the blood
and urine of infants who were exposed to ETS only by breast-feeding (Charlton 1994;
National Research Council 1986).

Exposure of the neonate to ETS has been clearly established to result from paternal and
maternal smoking in the home environment. Maternal smoking provides the most significant
source. For example, in several studies urinary cotinine in children has been shown to
correlate with the number of cigarettes smoked by the mother per day (Marbury, Hammon
and Haley 1993). The major routes of ETS exposure for the neonate are respiratory and
dietary (through breast milk). Day care centers represent another potential exposure situation;
many child care facilities do not have a no-smoking policy (Sockrider and Coultras 1994).

Hospitalization for respiratory illness occurs more often among newborns whose parents
smoke. In addition, the duration of hospital visits is longer among infants exposed to ETS. In
terms of causation, ETS exposure has not been associated with specific respiratory diseases.
There is evidence, however, that passive smoking increases the severity of pre-existing
illnesses such as bronchitis and asthma (Charlton 1994; Chilmonczyk et al. 1993; Rylander et
al. 1993). Children and infants exposed to ETS also have higher frequencies of respiratory
infections. In addition, smoking parents with respiratory illnesses can transmit airborne
infections to infants by coughing.

Children exposed to ETS postnatally show small deficits in lung function which appear to be
independent of prenatal exposures (Frischer et al. 1992). Although the ETS-related changes
are small (a 0.5% decrement per year in forced expiratory volume) and not clinically
significant, they suggest changes in the cells of the developing lung that may portend later
risk. Parental smoking has also been associated with increased risk of otitis media, or middle
ear effusion, in children from infancy to age nine. This condition is a common cause of
deafness among children and can delay educational progress. The association is supported by
studies attributing one-third of all cases of otitis media to parental smoking (Charlton 1994).

Radiation Exposures

Ionizing radiation exposure is an established health hazard which is generally the result of
intense exposure, either accidental or for medical purposes. It can be damaging to highly
proliferative cells, and can therefore be very harmful to the developing foetus or neonate.
Radiation exposures that result from diagnostic x rays are generally very low level, and
considered to be safe. A potential household source of exposure to ionizing radiation is radon,
which exists in certain geographic areas in rock formations.

Prenatal and postnatal effects of radiation include mental retardation, lower intelligence,
growth retardation, congenital malformations and cancer. Exposure to high doses of ionizing
radiation is also associated with increased prevalence of cancer. Incidence for this exposure is
dependent upon dose and age. In fact, the highest relative risk observed for breast cancer (~9)
is among women who were exposed to ionizing radiation at a young age.

Recently, attention has focused on the possible effects of non-ionizing radiation, or
electromagnetic fields (EMF). The basis of a relationship between EMF exposure and cancer
is not yet known, and the epidemiological evidence is still unclear. However, in several
international studies an association has been reported between EMF and leukaemia and male
breast cancer.

Childhood exposure to excessive sunlight has been associated with skin cancer and melanoma
(Marks 1988).

Childhood Cancer

Although specific substances have not been identified, parental occupational exposures have
been linked to childhood cancer. The latency period for developing childhood leukaemia can
be two to 10 years following the onset of exposure, indicating that exposures in utero or in the
early postnatal period may be implicated in the cause of this disease. Exposure to a number of
organochlorine pesticides (BHC, DDT, chlordane) has been tentatively associated with
leukaemia, although these data have not been confirmed in more detailed studies. Moreover,
elevated risk of cancer and leukaemia has been reported for children whose parents engage in
work that involves pesticides, chemicals and fumes (O’Leary et al. 1991). Similarly, risk of
Ewing’s bone sarcoma in children was associated with parental occupations in agriculture or
exposure to herbicides and pesticides (Holly et al. 1992).

Summary

Many nations attempt to regulate safe levels of toxic chemicals in ambient air and food
products and in the workplace. Nevertheless, opportunities for exposure abound, and children
are particularly susceptible to both absorption and to effects of toxic chemicals. It has been
noted that “many of the 40,000 child lives lost in the developing world every day are a
consequence of environmental abuses reflected in unsafe water supplies, disease, and
malnutrition” (Schaefer 1994). Many environmental exposures are avoidable. Therefore,
prevention of environmental diseases takes high priority as a defence against adverse health
effects among children.
Maternity Protection in Legislation

Written by ILO Content Manager

During pregnancy, exposure to certain health and safety hazards of the job or the working
environment may have adverse effects on the health of a woman worker and her unborn child.
Before and after giving birth, she also needs a reasonable amount of time off from her job to
recuperate, breast-feed and bond with her child. Many women want and need to be able to
return to work after childbirth; this is increasingly recognized as a basic right in a world
where the participation of women in the labour force is continuously increasing and
approaching that of men in many countries. As most women need to support themselves and
their families, continuity of income during maternity leave is vital.

Over time, governments have enacted a range of legislative measures to protect women
workers during pregnancy and at childbirth. A feature of more recent measures is the
prohibition of discrimination in employment on the grounds of pregnancy. Another trend is to
provide the right for mothers and fathers to share leave entitlements after the birth so that
either may care for the child. Collective bargaining in many countries contributes to the more
effective application of such measures and often improves upon them. Employers also play an
important role in furthering maternity protection through the terms of individual contracts of
employment and enterprise policies.

The Limits of Protection

Laws providing maternity protection for working women are usually restricted to the formal
sector, which may represent a small proportion of economic activity. These do not apply to
women working in unregistered economic activities in the informal sector, who in many
countries represent the majority of working women. While there is a trend worldwide to
improve and extend maternity protection, how to protect the large segment of the population
living and working outside the formal economy remains a major challenge.

In most countries, labour legislation provides maternity protection for women employed in
industrial and non-industrial enterprises in the private and often also the public sector.
Homeworkers, domestic employees, own-account workers and workers in enterprises
employing only family members are frequently excluded. Since many women work in small
firms, the relatively frequent exclusion of undertakings which employ less than a certain
number of workers (e.g., five permanent workers in the Republic of Korea) is of concern.

Many women workers in precarious employment, such as temporary workers, or casual
workers in Ireland, are excluded from the scope of labour legislation in a number of countries.
Depending on the number of hours they work, part-time workers may also be excluded. Other
groups of women may be excluded, such as women managers (e.g., Singapore, Switzerland),
women whose earnings exceed a certain maximum (e.g., Mauritius) or women who are paid
by results (e.g., the Philippines). In rare cases, unmarried women (e.g., teachers in Trinidad
and Tobago) do not qualify for maternity leave. However, in Australia (federal), where
parental leave is available to employees and their spouses, the term “spouse” is defined to
include a de facto spouse. Where age limits are set (e.g., in Israel, women below the age of
18) they usually do not exclude very many women as they are normally fixed below or above
the prime child-bearing ages.
Public servants are often covered by special rules, which may provide for more favourable
conditions than those applicable to the private sector. For example, maternity leave may be
longer, cash benefits may correspond to the full salary instead of a percentage of it, parental
leave is more likely to be available, or the right to reinstatement may be more clearly
established. In a significant number of countries, conditions in the public service can act as an
agent of progress since collective bargaining agreements in the private sector are often
negotiated along the lines of public service maternity protection rules.

Similar to labour legislation, social security laws may limit their application to certain sectors
or categories of workers. While this legislation is often more restrictive than the
corresponding labour laws in a country, it may provide access to maternity cash benefits to
groups not covered by labour laws, such as self-employed women or women who work with
their self-employed husbands. In many developing countries, owing to a lack of resources,
social security legislation may only apply to a limited number of sectors.

Over the decades, however, the coverage of legislation has been extended to more economic
sectors and categories of workers. Yet, while an employee may be covered by a law, the
enjoyment of certain benefits, in particular maternity leave and cash benefits, may depend on
certain eligibility requirements. Thus, while most countries protect maternity, working women
do not enjoy a universal right to such protection.

Maternity Leave

Time off work for childbirth can vary from a few weeks to several months, often divided into
two parts, before and after the birth. A period of employment prohibition may be stipulated
for a part or the whole of the entitlement to ensure that women have sufficient rest. Maternity
leave is commonly extended in case of illness, preterm or late birth, and multiple births, or
shortened in case of miscarriage, stillbirth or infant death.

Normal duration

Under the ILO’s Maternity Protection Convention, 1919 (No. 3), “a woman shall not be
permitted to work during the six weeks following her confinement; [and] shall have the right
to leave her work if she produces a medical certificate stating that her confinement will
probably take place within six weeks”. The Maternity Protection Convention (Revised), 1952
(No. 103), confirms the 12-week leave, including an employment prohibition for six weeks
after the birth, but does not prescribe the use of the remaining six weeks. The Maternity
Protection Recommendation, 1952 (No. 95), suggests a 14-week leave. The Maternity
Protection Recommendation, 2000 (No. 191), suggests an 18-week leave [Edited, 2011]. Most
of the countries surveyed meet the 12-week standard, and at least one-third grant longer
periods.

A number of countries afford a possibility of choice in the distribution of maternity leave. In
some, the law does not prescribe the distribution of maternity leave (e.g., Thailand), and
women are entitled to start the leave as early or as late as they wish. In another group of
countries, the law indicates the number of days to be taken after confinement; the balance can
be taken either before or after the birth.

Other countries do not allow flexibility: the law provides for two periods of leave, before and
after confinement. These periods may be equal, especially where the total leave is relatively
short. Where the total leave entitlement exceeds 12 weeks, the prenatal period is often shorter
than the postnatal period (e.g., in Germany six weeks before and eight weeks after the birth).

In a relatively small number of countries (e.g., Benin, Chile, Italy), the employment of women
is prohibited during the whole period of maternity leave. In others, a period of compulsory
leave is prescribed, often after confinement (e.g., Barbados, Ireland, India, Morocco). The
most common requirement is a six-week compulsory period after birth. Over the past decade,
the number of countries providing for some compulsory leave before the birth has increased.
On the other hand, in some countries (e.g., Canada) there is no period of compulsory leave, as
it is felt that the leave is a right that should be freely exercised, and that time off should be
organized to suit the individual woman’s needs and preferences.

Eligibility for maternity leave

The legislation of most countries recognizes the right of women to maternity leave by stating
the amount of leave to which women are entitled; a woman needs only to be employed at the
time of going on leave to be eligible for the leave. In a number of countries, however, the law
requires women to have been employed for a minimum period prior to the date on which they
absent themselves. This period ranges from 13 weeks in Ontario or Ireland to two years in
Zambia.

In several countries, women must have worked a certain number of hours in the week or
month to be entitled to maternity leave or benefits. When such thresholds are high (as in
Malta, 35 hours per week), they can result in excluding a large number of women, who form
the majority of part-time workers. In a number of countries, however, thresholds have been
lowered recently (e.g., in Ireland, from 16 to eight hours per week).

A small number of countries limit the number of times a woman may request maternity leave
over a given period (for example two years), or restrict eligibility to a certain number of
pregnancies, either with the same employer or throughout the woman’s life (e.g., Egypt,
Malaysia). In Zimbabwe, for example, women are eligible for maternity leave once in every
24 months and for a maximum of three times during the period that they work for the same
employer. In other countries, the women who have more than the prescribed number of
children are eligible for maternity leave, but not for cash benefits (e.g., Thailand), or are
eligible for a shorter period of leave with benefits (e.g., Sri Lanka: 12 weeks for the first two
children, six weeks for the third and subsequent children). The number of countries that limit
eligibility for maternity leave or benefits to a certain number of pregnancies, children or
surviving children (between two and four) appears to be growing, although it is by no means
certain that the duration of maternity leave is a decisive factor in motivating decisions about
family size.

Advance notice to the employer

In most countries, the only requirement for women to be entitled to maternity leave is the
presentation of a medical certificate. Elsewhere, women are also required to give their
employer notice of their intention to take maternity leave. The period of notice ranges from as
soon as the pregnancy is known (e.g., Germany) to one week before going on leave (e.g.,
Belgium). Failure to meet the notice requirement may cause women to lose their right to maternity
leave. Thus, in Ireland, information regarding the timing of maternity leave is to be supplied
as soon as reasonably practicable, but not later than four weeks before the commencement of
the leave. An employee loses her entitlement to maternity leave if she fails to satisfy this
requirement. In Canada (federal), the notice requirement is waived where there is a valid
reason why the notice cannot be given; at provincial level, the notice period ranges from four
months to two weeks. If the notice period is not complied with, a woman worker is still
entitled to the normal maternity leave in Manitoba; she is entitled to shorter periods (usually
six weeks as opposed to 17 or 18) in most other provinces. In other countries, the law does not
clarify the consequences of failing to give notice.

Cash Benefits

Most women cannot afford to forfeit their income during maternity leave; if they had to, many
would not use all their leave. Since the birth of healthy children benefits the whole nation, as a
matter of equity, employers should not bear the full cost of their workers’ absences. Since
1919, ILO standards have held that during maternity leave, women should receive cash
benefits, and that these should be paid out of public funds or through a system of insurance.
Convention No. 103 requires that contributions due under a compulsory social insurance
scheme be paid based on the total number of men and women employed by the undertakings
concerned, without distinction based on sex. Although in a few countries, maternity benefits
represent only a relatively small percentage of wages, the level of two-thirds called for in
Convention No. 103 is reached in several and exceeded in many others. In more than half of
the countries surveyed, maternity benefits constitute 100% of insured wages or of full wages.

Many social security laws provide a specific maternity benefit, thus recognizing
maternity as a contingency in its own right. Others provide that during maternity leave, a
worker will be entitled to sickness or unemployment benefits. Treating maternity as a
disability or the leave as a period of unemployment could be considered unequal treatment
since, in general, such benefits are only available during a certain period, and women who use
them in connection with maternity may find they do not have enough left to cover actual
sickness or unemployment periods later. Indeed, when the 1992 European Council Directive
was drafted, a proposal that during maternity leave women would receive sickness benefits
was strongly challenged; it was argued that in terms of equal treatment between men and
women, maternity needed to be recognized as independent grounds for obtaining benefits. As
a compromise, the maternity allowance was defined as guaranteeing an income at least
equivalent to what the worker concerned would receive in the event of sickness.

In nearly 80 of the countries surveyed, benefits are paid by national social security schemes,
and in over 40, these are at the expense of the employer. In about 15 countries, the
responsibility for financing maternity benefits is shared between social security and the
employer. Where benefits are financed jointly by social security and the employer, each may
be required to pay half (e.g., Costa Rica), although other percentages may be found (e.g.,
Honduras: two-thirds by social security and one-third by the employer). Another type of
contribution may be required of employers: when the amount of maternity benefit paid by
social security is based on a statutory insurable income and represents a low percentage of a
woman’s full wage, the law sometimes provides that the employer will pay the balance
between the woman’s salary and the maternity benefit paid by the social security fund (e.g., in
Burkina Faso). Voluntary additional payment by the employer is a feature of many collective
agreements, and also of individual employment contracts. The involvement of employers in
the payment of cash maternity benefits may be a realistic solution to the problem posed by the
lack of other funds.
Protection of the Health of Pregnant and Nursing Women

In line with the requirements of the Maternity Protection Recommendation, 1952 (No. 95),
many countries provide for various measures to protect the health of pregnant women and
their children, seeking to minimize fatigue by the reorganization of working time or to protect
women against dangerous or unhealthy work.

In a few countries (e.g., the Netherlands, Panama), the law specifies an obligation of the
employer to organize work so that it does not affect the outcome of the pregnancy. This
approach, which is in line with modern occupational health and safety practice, permits
matching the needs of individual women with the corresponding preventive measures, and is
therefore most satisfactory. Much more generally, protection is sought through prohibiting or
limiting work which may be harmful to the health of the mother or child. Such a prohibition
may be worded in general terms or may apply to certain types of hazardous work. However,
in Mexico, the prohibition of employing women in unhealthy or dangerous work does not
apply if the necessary health protection measures have, in the opinion of the competent
authority, been taken; nor does it apply to women in managerial positions or those who
possess a university degree or technical diploma, or the necessary knowledge and experience
to carry on the work.

In many countries, the law provides that pregnant women and nursing mothers may not be
allowed to do work that is “beyond their strength”, which “involves hazards”, “is dangerous
to their health or that of their child”, or “requires a physical effort unsuited to their condition”.
The application of such a general prohibition, however, can present problems: how, and by
whom, shall it be determined that a job is beyond a person’s strength? By the worker
concerned, the employer, the labour inspector, the occupational health physician, the woman’s
own doctor? Differences in appreciation might lead to a woman being kept away from work
which she could in fact do, while another might not be removed from work which is too
taxing.

Other countries list, sometimes in great detail, the type of work that is prohibited to pregnant
women and nursing mothers (e.g., Austria, Germany). The handling of loads is frequently
regulated. Legislation in some countries specifically prohibits exposure to certain chemicals
(e.g., benzene), biological agents, lead and radiation. Underground work is prohibited in
Japan during pregnancy and one year after confinement. In Germany, piece-rate work and
work on an assembly line with a fixed pace are prohibited. In a few countries, pregnant
workers may not be assigned to work outside their permanent place of residence (e.g., Ghana,
after the fourth month). In Austria, smoking is not permitted in places where pregnant women
are working.

In a number of countries (e.g., Angola, Bulgaria, Haiti, Germany), the employer is required to
transfer the worker to suitable work. Often, the worker must retain her former salary even if
the salary of the post to which she is transferred is lower. In the Lao People’s Democratic
Republic, the woman keeps her former salary during a three-month period, and is then paid at
the rate corresponding to the job she is actually performing. In the Russian Federation, where
a suitable post is to be given to a woman who can no longer perform her work, she retains her
salary during the period in which a new post is found. In certain cases (e.g., Romania), the
difference between the two salaries is paid by social security, an arrangement which is to be
preferred, since the cost of maternity protection should not, as far as feasible, be borne by
individual employers.
Transfer may also be available from work that is not dangerous in itself but which a medical
practitioner has certified to be harmful to a particular woman’s state of health (e.g., France).
In other countries, a transfer is possible at the request of the worker concerned (e.g., Canada,
Switzerland). Where the law enables the employer to suggest a transfer, if there is a
disagreement between the employer and the worker, an occupational physician will determine
whether there is any medical need for changing jobs and whether the worker is fit to take up
the job that has been suggested to her.

A few countries clarify the fact that the transfer is temporary and that the worker must be
reassigned to her former job when she returns from maternity leave or at a specified time
thereafter (e.g., France). Where a transfer is not possible, some countries provide that the
worker will be granted sick leave (e.g., Seychelles) or, as was discussed above, that maternity
leave will start early (e.g., Iceland).

Non-discrimination

Measures are taken in a growing number of countries to ensure that women do not suffer
discrimination on account of pregnancy. Their aim is to ensure that pregnant women are
considered for employment and treated during employment on an equal basis with men and
with other women, and in particular are not demoted, do not lose seniority and are not denied
promotion solely on the grounds of pregnancy. It is now more and more common for national
legislation to prohibit discrimination on account of sex. Such a prohibition could be and
indeed has been in many cases interpreted by the courts as a prohibition to discriminate on
account of pregnancy. The European Court of Justice has followed this approach. In a 1989
judgement, the Court ruled that an employer who dismisses or refuses to recruit a woman
because she is pregnant is in breach of Directive 76/207/EEC of the European Council on
equal treatment. This judgement was important in clarifying the fact that sex discrimination
exists when employment decisions are made on the basis of pregnancy even though the law
does not specifically cite pregnancy as prohibited grounds for discrimination. It is customary
in sex equality cases to compare the treatment given to a woman with the treatment given to a
hypothetical man. The Court ruled that such comparison was not called for in the case of a
pregnant woman, since pregnancy was unique to women. Where unfavourable treatment is
made on grounds of pregnancy, there is by definition discrimination on grounds of sex. This is
consistent with the position of the ILO Committee of Experts on the Application of
Conventions and Recommendations concerning the scope of the Discrimination (Employment
and Occupation) Convention, 1958 (No. 111), which notes the discriminatory nature of
distinctions on the basis of pregnancy, confinement and related medical conditions (ILO
1988).

A number of countries provide for an explicit prohibition of discrimination on the grounds of
pregnancy (e.g., Australia, Italy, US, Venezuela). Other countries define discrimination on
grounds of sex to include discrimination on grounds of pregnancy or absence on maternity
leave (e.g., Finland). In the US, protection is further ensured through treating pregnancy as a
disability: in undertakings with more than 15 workers, discrimination is prohibited against
pregnant women, women at childbirth and women who are affected by related medical
conditions; and policies and practices in connection with pregnancy and related matters must
be applied on the same terms and conditions as applied to other disabilities.

In several countries, the law contains precise requirements which illustrate instances of
discrimination on the grounds of pregnancy. For example, in the Russian Federation, an
employer may not refuse to hire a woman because she is pregnant; if a pregnant woman is not
hired, the employer must state in writing the reasons for not recruiting her. In France, it is
unlawful for an employer to take pregnancy into account in refusing to employ a woman, in
terminating her contract during a period of probation or in ordering her transfer. It is also
unlawful for the employer to seek to determine whether an applicant is pregnant, or to cause
such information to be sought. Similarly, women cannot be required to reveal the fact that
they are pregnant, whether they apply for a job or are employed in one, except when they
request to benefit from any law or regulation governing the protection of pregnant women.

Transfers unilaterally and arbitrarily imposed on a pregnant woman can constitute
discrimination. In Bolivia, as in other countries in the region, a woman is protected against
involuntary transfer during pregnancy and up to a year after the birth of her child.

The issue of combining the right of working women to health protection during pregnancy
and their right not to suffer discrimination poses a special difficulty at the time of recruitment.
Should a pregnant applicant reveal her condition, especially one who applies for a position
involving work which is prohibited to pregnant women? In a 1988 judgement, the Federal
Labour Court of Germany held that a pregnant woman applying for a job involving
exclusively night work, which is prohibited to pregnant women under German legislation,
should inform a potential employer of her condition. The judgement was overruled by the
European Court of Justice as being contrary to the 1976 EC Directive on equal treatment. The
Court found that the Directive precluded an employment contract from being held to be void
on account of the statutory prohibition of night work, or from being avoided by the employer
on account of a mistake on his or her part as to an essential personal characteristic of the
woman at the time of the conclusion of the contract. The employee’s inability, due to
pregnancy, to perform the work for which she was being recruited was temporary since the
contract was not concluded with a fixed term. It would therefore be contrary to the objective
of the Directive to hold it invalid or void because of such an inability.

Employment Security

Many women have lost their jobs because of a pregnancy. Nowadays, although the extent of
protection varies, employment security is a significant component of maternity protection
policies.

International labour standards address the issue in two different ways. The Maternity
Protection Conventions prohibit dismissal during maternity leave and any extension thereof,
or at such time as a notice of dismissal would expire during the leave under the terms of
Convention No. 3, Article 4 and Convention No. 103, Article 6. Dismissal on grounds that
might be regarded as legitimate is not considered to be permitted during this period (ILO
1965). In the event that a woman has been dismissed before going on maternity leave, the
notice should be suspended for the time she is absent and continue after her return. The
Maternity Protection Recommendation, 1952 (No. 95), calls for the protection of a pregnant
woman’s employment from the date the employer is informed of the pregnancy until one
month after her return from maternity leave. It identifies cases of serious fault by the
employed woman, the shutting down of the undertaking and the expiry of a fixed-term
contract as legitimate grounds for dismissal during the protected period. The Termination of
Employment Convention, 1982 (No. 158; Article 5(d)–(e)), does not prohibit dismissal, but
provides that pregnancy or absence from work on maternity leave shall not constitute valid
reasons for termination of employment.
At the level of the European Union, the 1992 Directive prohibits dismissal from the beginning
of pregnancy until the end of the maternity leave, save in exceptional cases not connected
with the worker’s condition.

Usually, countries provide for two sets of rules regarding dismissal. Dismissal with notice
applies in such cases as the closure of the enterprise, redundancy and where, for a variety of
reasons, the worker is unable to perform the work for which he or she has been recruited or
fails to perform such work to the employer’s satisfaction. Dismissal without notice is used to
terminate the services of a worker who is guilty of gross negligence, serious misconduct or
other grave instances of behaviour, usually comprehensively listed in the legislation.

Where dismissal with notice is concerned, it is clear that employers could arbitrarily decide
that pregnancy is incompatible with a worker’s tasks and dismiss her on grounds of
pregnancy. Those who wish to avoid their obligations to pregnant women, or even simply do
not like to have pregnant women around the workplace, could find a pretext to dismiss
workers during pregnancy even if, in view of the existence of non-discrimination rules, they
would refrain from using pregnancy as grounds for dismissal. Many people agree that it is
legitimate to protect workers against such discriminatory decisions: the prohibition of
dismissal with notice on grounds of pregnancy or during pregnancy and maternity leave is
often viewed as a measure of equity and is in force in many countries.

The ILO Committee of Experts on the Application of Conventions and Recommendations
considers that protection against dismissal does not preclude an employer from terminating an
employment relationship because he or she has detected a serious fault on the part of a woman
employee: rather, when there are reasons such as this to justify dismissal, the employer is
obliged to extend the legal period of notice by any period required to complete the period of
protection under the Conventions. This is the situation, for example, in Belgium, where an
employer who has legal grounds for dismissing a woman cannot do so while she is on
maternity leave, but can serve notice so that it expires after the woman returns from leave.

The protection of pregnant women against dismissal in case of closure of the undertaking or
economic retrenchment poses a similar problem. It is indeed a burden for a firm which ceases
operation to continue to pay the salary of a person who is not working for them any more,
even for a short period. However, recruitment prospects are often bleaker for women who are
pregnant than for women who are not, or for men, and pregnant women particularly need the
emotional and financial security of continuing to be employed. Where women may not be
dismissed during pregnancy, they can put off looking for a job until after the birth. In fact,
where legislation prescribes the order in which various categories of workers are to be
dismissed in a retrenchment, pregnant women are among those to be dismissed last or next
to last (e.g., Ethiopia).

Leave and Benefits for Fathers and Parents

Going beyond the protection of the health and employment status of pregnant and nursing
women, many countries provide for paternity leave (a short period of leave at or about the
time of birth). Other forms of leave are linked to the needs of children. One type is adoption
leave, and another is leave to facilitate child-rearing. Many countries provide for the latter type of
leave, but use different approaches. One group provides for time off for the mother of very
young children (optional maternity leave), while another provides additional leave for both
parents (parental education leave). The view that both the father and mother need to be
available to care for young children is also reflected in integrated parental leave systems,
which provide a long period of leave available to both parents.

Pregnancy and US Work Recommendations

Written by ILO Content Manager

Changes in family life over recent decades have had dramatic effects on the relationship
between work and pregnancy. These include the following:

- Women, particularly those of childbearing age, continue to enter the labour force in
considerable numbers.
- A tendency has developed on the part of many of these women to defer starting their
families until they are older, by which time they have often achieved positions of
responsibility and become important members of the productive apparatus.
- At the same time, there is an increasing number of teenage pregnancies, many of which are
high-risk pregnancies.
- Reflecting increasing rates of separation, of divorce and of choices of alternative lifestyles, as
well as an increase in the number of families in which both parents must work, financial
pressures are forcing many women to continue working for as long as possible during
pregnancy.

The impact of pregnancy-related absences and lost or impaired productivity, as well as
concern over the health and well-being of both the mothers and their infants, has led
employers to become more proactive in dealing with the problem of pregnancy and work.
Where employers pay all or part of health insurance premiums, the prospect of avoiding the
sometimes staggering costs of complicated pregnancies and neonatal problems is a potent
incentive. Certain responses are dictated by laws and government regulations, for example,
guarding against potential occupational and environmental hazards and providing maternity
leave and other benefits. Others are voluntary: prenatal education and care programmes,
modified work arrangements such as flex-time and other work schedule arrangements,
dependant care and other benefits.

Management of pregnancy

Of primary importance to the pregnant woman—and to her employer—whether or not she
continues working during her pregnancy, is access to a professional health management
programme designed to identify and avert or minimize risks to the mother and her foetus, thus
enabling her to remain on the job without concern. At each of the scheduled prenatal visits,
the physician or midwife should evaluate medical information (childbearing and other
medical history, current complaints, physical examinations and laboratory tests) and
information about her job and work environment, and develop appropriate recommendations.

It is important that health professionals not rely on the simple job descriptions pertaining to
their patients’ work, as these are often inaccurate and misleading. The job information should
include details concerning physical activity, chemical and other exposures and emotional
stress, most of which can be provided by the woman herself. In some instances, however,
input from a supervisor, often relayed by the safety department or the employee health service
(where there is one), may be needed to provide a more complete picture of hazardous or
trying work activities and the possibility of controlling their potential for harm. This can also
serve as a check on patients who inadvertently or deliberately mislead their physicians; they
may exaggerate the risks or, if they feel it is important to continue working, may understate
them.

Recommendations for Work

Recommendations regarding work during pregnancy fall into three categories:

The woman may continue to work without changes in her activities or the environment. This
is applicable in most instances. After extensive deliberation, the Task Force on the Disability
of Pregnancy, comprising obstetrical health professionals, occupational physicians and nurses,
and women’s representatives assembled by ACOG (the American College of Obstetricians
and Gynecologists) and NIOSH (the National Institute for Occupational Safety and Health)
concluded that “the normal woman with an uncomplicated pregnancy who is in a job that
presents no greater hazards than those encountered in normal daily life in the community, may
continue to work without interruption until the onset of labor and may resume working
several weeks after an uncomplicated delivery” (Isenman and Warshaw, 1977).

The woman may continue to work, but only with certain modifications in the work
environment or her work activities. These modifications would be either “desirable” or
“essential” (in the latter case, she should stop work if they cannot be made).

The woman should not work. It is the physician’s or midwife’s judgement that any work
would probably be detrimental to her health or to that of the developing foetus.

The recommendations should not only detail the needed job modifications but should also
stipulate the length of time they should be in effect and indicate the date for the next
professional examination.

Non-medical Considerations

The recommendations suggested above are based entirely on considerations of the health of
the mother and her foetus in relation to job requirements. They do not take into account the
burden of such off-the-job activities as commuting to and from the workplace, housework and
care of other children and family members; these may sometimes be even more demanding
than those of the job. When modification or restriction of activities is called for, one should
consider whether it should be implemented on the job, in the home or both.

In addition, recommendations for or against continuing work may form the basis for decisions
on a variety of non-medical matters, for example, eligibility for benefits, paid versus unpaid
leave or guaranteed job retention. A critical issue is whether the woman is considered disabled.
Some employers categorically consider all pregnant workers to be disabled and strive to
eliminate them from the workforce, even though many are able to continue to work. Other
employers assume that all pregnant employees tend to magnify any disability in order to be
eligible for all available benefits. And some even challenge the notion that a pregnancy,
whether or not it is disabling, is a matter for them to be concerned about at all. Thus, disability
is a complex concept which, although fundamentally based on medical findings, involves
legal and social considerations.
Pregnancy and Disability

In many jurisdictions, it is important to distinguish between the disability of pregnancy and
pregnancy as a period in life that calls for special benefits and dispensations. The disability of
pregnancy falls into three categories:

1. Disability following delivery. From a purely medical standpoint, recovery following the
termination of pregnancy through an uncomplicated delivery lasts only a few weeks, but
conventionally it extends to six or eight weeks because that is when most obstetricians
customarily schedule their first postnatal check-up. However, from a practical and
sociological point of view, a longer leave is considered by many to be desirable in order to
enhance family bonding, to facilitate breast-feeding, and so on.
2. Disability resulting from medical complications. Medical complications such as eclampsia,
threatened abortion, cardiovascular or renal problems and so on, will dictate periods of
reduced activity or even hospitalization that will last as long as the medical condition persists
or until the woman has recovered from both the medical problem and the pregnancy.
3. Disability reflecting the necessity of avoiding exposure to toxic hazards or abnormal
physical stress. Because of the greater sensitivity of the foetus to many environmental
hazards, the pregnant woman may be considered disabled even though her own health
might not be in danger of being compromised.

Conclusion

The challenge of balancing family responsibilities and work outside the home is not new to
women. What may be new is a modern society that values the health and well-being of
women and their offspring while confronting women with the dual challenges of achieving
personal fulfillment through employment and coping with the economic pressures of
maintaining an acceptable standard of living. The increasing number of single parents and of
married couples both of whom must work suggests that work-family issues cannot be ignored.
Many employed women who become pregnant simply must continue to work.

Whose responsibility is it to meet the needs of these individuals? Some would argue that it is
purely a personal problem to be dealt with entirely by the individual or the family. Others
consider it a societal responsibility and would enact laws and provide financial and other
benefits on a community-wide basis.

How much should be loaded on the employer? This depends largely on the nature, the
location and often the size of the organization. The employer is driven by two sets of
considerations: those imposed by laws and regulations (and sometimes by the need to meet
demands won by organized labour) and those dictated by social responsibility and the
practical necessity of maintaining optimal productivity. In the last analysis, it hinges on placing
a high value on human resources and acknowledging the interdependence of work
responsibilities and family commitments and their sometimes counterbalancing effects on
health and productivity.
Reproductive System References
Agency for Toxic Substance and Disease Registry. 1992. Mercury toxicity. Am Fam Phys
46(6):1731-1741.

Ahlborg, JR, L Bodin, and C Hogstedt. 1990. Heavy lifting during pregnancy–A hazard to the
fetus? A prospective study. Int J Epidemiol 19:90-97.

Alderson, M. 1986. Occupational Cancer. London: Butterworths.


Anderson, HA, R Lilis, SM Daum, AS Fischbein, and IJ Selikoff. 1976. Household contact
asbestos neoplastic risk. Ann NY Acad Sci 271:311-332.

Apostoli, P, L Romeo, E Peroni, A Ferioli, S Ferrari, F Pasini, and F Aprili. 1989. Steroid
hormone sulphation in lead workers. Br J Ind Med 46:204-208.

Assennato, G, C Paci, ME Baser, R Molinini, RG Candela, BM Altmura, and R Giogino.


1986. Sperm count suppression with endocrine dysfunction in lead-exposed men. Arch
Environ Health 41:387-390.

Awumbila, B and E Bokuma. 1994. Survey of pesticides used in the control of ectoparasites
on farm animals in Ghana. Tropic Animal Health Prod 26(1):7-12.

Baker, HWG, TJ Worgul, RJ Santen, LS Jefferson, and CW Bardin. 1977. Effect of prolactin
on nuclear androgens in perifused male accessory sex organs. In The Testis in Normal and
Infertile Men, edited by P and HN Troen. New York: Raven Press.

Bakir, F, SF Damluji, L Amin-Zaki, M Murtadha, A Khalidi, NY Al-Rawi, S Tikriti, HT


Dhahir, TW Clarkson, JC Smith, and RA Doherty. 1973. Methyl mercury poisoning in Iraq.
Science 181:230-241.

Bardin, CW. 1986. Pituitary-testicular axis. In Reproductive Endocrinology, edited by SSC


Yen and RB Jaffe. Philadelphia: WB Saunders.

Bellinger, D, A Leviton, C Waternaux, H Needleman, and M Rabinowitz. 1987. Longitudinal


analyses of prenatal and postnatal lead exposure and early cognitive development. New Engl J
Med 316:1037-1043.

Bellinger, D, A Leviton, E Allred, and M Rabinowitz. 1994. Pre- and postnatal lead exposure
and behavior problems in school-aged children. Environ Res 66:12-30.

Berkowitz, GS. 1981. An epidemiologic study of preterm delivery. Am J Epidemiol 113:81-


92.

Bertucat, I, N Mamelle, and F Munoz. 1987. Conditions de travail des femmes enceintes–
étude dans cinq secteurs d’activité de la région Rhône-Alpes. Arch mal prof méd trav secur
soc 48:375-385.

Bianchi, C, A Brollo, and C Zuch. 1993. Asbestos-related familial mesothelioma. Eur J


Cancer 2(3) (May):247-250.
Bonde, JPE. 1992. Subfertility in relation to welding–A case referent study among male
welders. Danish Med Bull 37:105-108.

Bornschein, RL, J Grote, and T Mitchell. 1989. Effects of prenatal lead exposure on infant
size at birth. In Lead Exposure and Child Development, edited by M Smith and L Grant.
Boston: Kluwer Academic.

Brody, DJ, JL Pirkle, RA Kramer, KM Flegal, TD Matte, EW Gunter, and DC Pashal. 1994.
Blood lead levels in the US population: Phase one of the Third National Health and Nutrition
Examination survey (NHANES III, 1988 to 1991). J Am Med Assoc 272:277-283.

Casey, PB, JP Thompson, and JA Vale. 1994. Suspected paediatric poisoning in the UK; I-
Home accident surveillance system 1982-1988. Hum Exp Toxicol 13:529-533.

Chapin, RE, SL Dutton, MD Ross, BM Sumrell, and JC Lamb IV. 1984. The effects of
ethylene glycol monomethyl ether on testicular histology in F344 rats. J Androl 5:369-380.

Chapin, RE, SL Dutton, MD Ross, and JC Lamb IV. 1985. Effects of ethylene glycol
monomethyl ether (EGME) on mating performance and epididymal sperm parameters in F344
rats. Fund Appl Toxicol 5:182-189.

Charlton, A. 1994. Children and passive smoking. J Fam Pract 38(3)(March):267-277.

Chia, SE, CN Ong, ST Lee, and FHM Tsakok. 1992. Blood concentrations of lead, cadmium,
mercury, zinc, and copper and human semen parameters. Arch Androl 29(2):177-183.

Chisholm, JJ Jr. 1978. Fouling one’s nest. Pediatrics 62:614-617.

Chilmonczyk, BA, LM Salmun, KN Megathlin, LM Neveux, GE Palomaki, GJ Knight, AJ


Pulkkinen, and JE Haddow. 1993. Association between exposure to environmental tobacco
smoke and exacerbations of asthma in children. New Engl J Med 328:1665-1669.

Clarkson, TW, GF Nordberg, and PR Sager. 1985. Reproductive and developmental toxicity
of metals. Scand J Work Environ Health 11:145-154.
Clement International Corporation. 1991. Toxicological Profile for Lead. Washington, DC:
US Department of Health and Human Services, Public Health Service Agency for Toxic
Substances and Disease Registry.

——. 1992. Toxicological Profile for A-, B-, G-, and D-Hexachlorocyclohexane. Washington,
DC: US Department of Health and Human Services, Public Health Service Agency for Toxic
Substances and Disease Registry.

Culler, MD and A Negro-Vilar. 1986. Evidence that pulsatile follicle-stimulating hormone


secretion is independent of endogenous luteinizing hormone-releasing hormone.
Endocrinology 118:609-612.

Dabeka, RW, KF Karpinski, AD McKenzie, and CD Bajdik. 1986. Survey of lead, cadmium
and fluoride in human milk and correlation of levels with environmental and food factors.
Food Chem Toxicol 24:913-921.
Daniell, WE and TL Vaughn. 1988. Paternal employment in solvent related occupations and
adverse pregnancy outcomes. Br J Ind Med 45:193-197.
Davies, JE, HV Dedhia, C Morgade, A Barquet, and HI Maibach. 1983. Lindane poisonings.
Arch Dermatol 119 (Feb):142-144.

Davis, JR, RC Bronson, and R Garcia. 1992. Family pesticide use in the home, garden,
orchard, and yard. Arch Environ Contam Toxicol 22(3):260-266.

Dawson, A, A Gibbs, K Browne, F Pooley, and M Griffiths. 1992. Familial mesothelioma.


Details of seventeen cases with histopathologic findings and mineral analysis. Cancer
70(5):1183-1187.

D’Ercole, JA, RD Arthur, JD Cain, and BF Barrentine. 1976. Insecticide exposure of mothers
and newborns in a rural agricultural area. Pediatrics 57(6):869-874.

Ehling, UH, L Machemer, W Buselmaier, J Dycka, H Froomberg, J Dratochvilova, R Lang, D


Lorke, D Muller, J Peh, G Rohrborn, R Roll, M Schulze-Schencking, and H Wiemann. 1978.
Standard protocol for the dominant lethal test on male mice. Arch Toxicol 39:173-185.

Evenson, DP. 1986. Flow cytometry of acridine orange stained sperm is a rapid and practical
method for monitoring occupational exposure to genotoxicants. In Monitoring of
Occupational Genotoxicants, edited by M Sorsa and H Norppa. New York: Alan R Liss.

Fabro, S. 1985. Drugs and male sexual function. Rep Toxicol Med Lettr 4:1-4.

Farfel, MR, JJ Chisholm Jr, and CA Rohde. 1994. The long-term effectiveness of residential
lead paint abatement. Environ Res 66:217-221.

Fein, G, JL Jacobson, SL Jacobson, PM Schwartz, and JK Dowler. 1984. Prenatal exposure to


polychlorinated biphenyls: effects on birth size and gestational age. J Pediat 105:315-320.

Fenske, RA, KG Black, KP Elkner, C Lee, MM Methner, and R Soto. 1994. Potential
exposure and health risks of infants following indoor residential pesticide applications. Am J
Public Health 80(6):689-693.

Fischbein, A and MS Wolff. 1987. Conjugal exposure to polychlorinated biphenyls (PCBs).


Br J Ind Med 44:284-286.

Florentine, MJ and DJ II Sanfilippo. 1991. Elemental mercury poisoning. Clin Pharmacol


10(3):213-221.

Frischer, T, J Kuehr, R Meinert, W Karmaus, R Barth, E Hermann-Kunz, and R Urbanek.


1992. Maternal smoking in early childhood: A risk factor for bronchial responsiveness to
exercise in primary-school children. J Pediat 121 (Jul):17-22.

Gardner, MJ, AJ Hall, and MP Snee. 1990. Methods and basic design of case-control study of
leukemia and lymphoma among young people near Sellafield nuclear plant in West Cumbria.
Br Med J 300:429-434.
Gold, EB and LE Sever. 1994. Childhood cancers associated with parental occupational
exposures. Occup Med .

Goldman, LR and J Carra. 1994. Childhood lead poisoning in 1994. J Am Med Assoc
272(4):315-316.

Grandjean, P and E Bach. 1986. Indirect exposures: the significance of bystanders at work
and at home. Am Ind Hyg Assoc J 47(12):819-824.
Hansen, J, NH de-Klerk, JL Eccles, AW Musk, and MS Hobbs. 1993. Malignant
mesothelioma after environmental exposure to blue asbestos. Int J Cancer 54(4):578-581.

Hecht, NB. 1987. Detecting the effects of toxic agents on spermatogenesis using DNA probes.
Environ Health Persp 74:31-40.
Holly, EA, DA Aston, DK Ahn, and JJ Kristiansen. 1992. Ewing’s bone sarcoma, paternal
occupational exposure and other factors. Am J Epidemiol 135:122-129.

Homer, CJ, SA Beredford, and SA James. 1990. Work-related physical exertion and risk of
preterm, low birthweight delivery. Paediat Perin Epidemiol 4:161-174.

International Agency for Research on Cancer (IARC). 1987. Monographs On the Evaluation
of Carcinogenic Risks to Humans, Overall Evaluations of Carcinogenicity: An Updating of
IARC Monographs. Vol. 1-42, Suppl. 7. Lyon: IARC.

International Labour Organization (ILO). 1965. Maternity Protection: A World Survey of
National Law and Practice. Extract from the Report of the Thirty-fifth Session of the
Committee of Experts on the Application of Conventions and Recommendations, para. 199,
note 1, p. 235. Geneva: ILO.

——. 1988. Equality in Employment and Occupation, Report III (4B). International Labour
Conference, 75th Session. Geneva: ILO.

Isenman, AW and LJ Warshaw. 1977. Guidelines On Pregnancy and Work. Chicago:
American College of Obstetricians and Gynecologists.

Jacobson, SW, G Fein, JL Jacobson, PM Schwartz, and JK Dowler. 1985. The effect of
intrauterine PCB exposure on visual recognition memory. Child Development 56:853-860.

Jensen, NE, IB Sneddon, and AE Walker. 1972. Tetrachlorobenzodioxin and chloracne. Trans
St Johns Hosp Dermatol Soc 58:172-177.

Källén, B. 1988. Epidemiology of Human Reproduction. Boca Raton: CRC Press.

Kaminski, M, C Rumeau, and D Schwartz. 1978. Alcohol consumption in pregnant women


and the outcome of pregnancy. Alcohol, Clin Exp Res 2:155-163.

Kaye, WE, TE Novotny, and M Tucker. 1987. New ceramics-related industry implicated in
elevated blood lead levels in children. Arch Environ Health 42:161-164.
Klebanoff, MA, PH Shiono, and JC Carey. 1990. The effect of physical activity during
pregnancy on preterm delivery and birthweight. Am J Obstet Gynecol 163:1450-1456.

Kline, J, Z Stein, and M Susser. 1989. Conception to birth-epidemiology of prenatal


development. Vol. 14. Monograph in Epidemiology and Biostatistics. New York: Oxford
Univ. Press.

Kotsugi, F, SJ Winters, HS Keeping, B Attardi, H Oshima, and P Troen. 1988. Effects of


inhibin from primate sertoli cells on follicle-stimulating hormone and luteinizing hormone
release by perifused rat pituitary cells. Endocrinology 122:2796-2802.

Kramer, MS, TA Hutchinson, SA Rudnick, JM Leventhal, and AR Feinstein. 1990.


Operational criteria for adverse drug reactions in evaluating suspected toxicity of a popular
scabicide. Clin Pharmacol Ther 27(2):149-155.

Kristensen, P, LM Irgens, AK Daltveit, and A Andersen. 1993. Perinatal outcome among


children of men exposed to lead and organic solvents in the printing industry. Am J Epidemiol
137:134-144.

Kucera, J. 1968. Exposure to fat solvents: A possible cause of sacral agenesis in man. J Pediat
72:857-859.

Landrigan, PJ and CC Campbell. 1991. Chemical and physical agents. Chap. 17 in Fetal and
Neonatal Effects of Maternal Disease, edited by AY Sweet and EG Brown. St. Louis: Mosby
Year Book.

Launer, LJ, J Villar, E Kestler, and M de Onis. 1990. The effect of maternal work on fetal
growth and duration of pregnancy: a prospective study. Br J Obstet Gynaec 97:62-70.

Lewis, RG, RC Fortmann, and DE Camann. 1994. Evaluation of methods for monitoring the
potential exposure of small children to pesticides in the residential environment. Arch Environ
Contam Toxicol 26:37-46.

Li, FP, MG Dreyfus, and KH Antman. 1989. Asbestos-contaminated nappies and familial
mesothelioma. Lancet 1:909-910.

Lindbohm, ML, K Hemminki, and P Kyyronen. 1984. Parental occupational exposure and
spontaneous abortions in Finland. Am J Epidemiol 120:370-378.

Lindbohm, ML, K Hemminki, MG Bonhomme, A Anttila, K Rantala, P Heikkila, and MJ


Rosenberg. 1991a. Effects of paternal occupational exposure on spontaneous abortions. Am J
Public Health 81:1029-1033.

Lindbohm, ML, M Sallmen, A Antilla, H Taskinen, and K Hemminki. 1991b. Paternal


occupational lead exposure and spontaneous abortion. Scand J Work Environ Health 17:95-
103.

Luke, B, N Mamelle, L Keith, and F Munoz. 1995. The association between occupational
factors and preterm birth in US nurses’ survey. Obstet Gynecol Ann 173(3):849-862.
Mamelle, N, I Bertucat, and F Munoz. 1989. Pregnant women at work: Rest periods to
prevent preterm birth? Paediat Perin Epidemiol 3:19-28.

Mamelle, N, B Laumon, and PH Lazar. 1984. Prematurity and occupational activity during
pregnancy. Am J Epidemiol 119:309-322.

Mamelle, N and F Munoz. 1987. Occupational working conditions and preterm birth: A
reliable scoring system. Am J Epidemiol 126:150-152.

Mamelle, N, J Dreyfus, M Van Lierde, and R Renaud. 1982. Mode de vie et grossesse. J
Gynecol Obstet Biol Reprod 11:55-63.

Mamelle, N, I Bertucat, JP Auray, and G Duru. 1986. Quelles mesures de la prevention de la


prématurité en milieu professionel? Rev Epidemiol Santé Publ 34:286-293.

Marbury, MC, SK Hammon, and NJ Haley. 1993. Measuring exposure to environmental


tobacco smoke in studies of acute health effects. Am J Epidemiol 137(10):1089-1097.

Marks, R. 1988. Role of childhood in the development of skin cancer. Aust Paediat J 24:337-
338.

Martin, RH. 1983. A detailed method for obtaining preparations of human sperm
chromosomes. Cytogenet Cell Genet 35:252-256.

Matsumoto, AM. 1989. Hormonal control of human spermatogenesis. In The Testis, edited by
H Burger and D de Kretser. New York: Raven Press.

Mattison, DR, DR Plowchalk, MJ Meadows, AZ Al-Juburi, J Gandy, and A Malek. 1990.


Reproductive toxicity: male and female reproductive systems as targets for chemical injury.
Med Clin N Am 74:391-411.

Maxcy Rosenau-Last. 1994. Public Health and Preventive Medicine. New York: Appleton-
Century-Crofts.

McConnell, R. 1986. Pesticides and related compounds. In Clinical Occupational Medicine,


edited by L Rosenstock and MR Cullen. Philadelphia: WB Saunders.

McDonald, AD, JC McDonald, B Armstrong, NM Cherry, AD Nolin, and D Robert. 1988.


Prematurity and work in pregnancy. Br J Ind Med 45:56-62.

——. 1989. Fathers’ occupation and pregnancy outcome. Br J Ind Med 46:329-333.

McLachlan, RL, AM Matsumoto, HG Burger, DM de Kretzer, and WJ Bremner. 1988.


Relative roles of follicle-stimulating hormone and luteinizing hormone in the control of
inhibin secretion in normal men. J Clin Invest 82:880-884.

Meeks, A, PR Keith, and MS Tanner. 1990. Nephrotic syndrome in two members of a family
with mercury poisoning. J Trace Elements Electrol Health Dis 4(4):237-239.
National Research Council. 1986. Environmental Tobacco Smoke: Measuring Exposures and
Assessing Health Effects. Washington, DC: National Academy Press.

——. 1993. Pesticides in the Diets of Infants and Children. Washington, DC: National
Academy Press.

Needleman, HL and D Bellinger. 1984. The developmental consequences of childhood


exposure to lead. Adv Clin Child Psychol 7:195-220.

Nelson, K and LB Holmes. 1989. Malformations due to presumed spontaneous mutations in


newborn infants. New Engl J Med 320(1):19-23.

Nicholson, WJ. 1986. Airborne Asbestos Health Assessment Update. Document No.
EPS/600/8084/003F. Washington, DC: Environmental Criteria and Assessment.

O’Leary, LM, AM Hicks, JM Peters, and S London. 1991. Parental occupational exposures
and risk of childhood cancer: a review. Am J Ind Med 20:17-35.

Olsen, J. 1983. Risk of exposure to teratogens amongst laboratory staff and painters. Danish
Med Bull 30:24-28.

Olsen, JH, PDN Brown, G Schulgen, and OM Jensen. 1991. Parental employment at time of
conception and risk of cancer in offspring. Eur J Cancer 27:958-965.

Otte, KE, TI Sigsgaard, and J Kjaerulff. 1990. Malignant mesothelioma clustering in a family
producing asbestos cement in their home. Br J Ind Med 47:10-13.

Paul, M. 1993. Occupational and Environmental Reproductive Hazards: A Guide for


Clinicians. Baltimore: Williams & Wilkins.

Peoples-Sheps, MD, E Siegel, CM Suchindran, H Origasa, A Ware, and A Barakat. 1991.


Characteristics of maternal employment during pregnancy: Effects on low birthweight. Am J
Public Health 81:1007-1012.

Pirkle, JL, DJ Brody, EW Gunter, RA Kramer, DC Paschal, KM Flegal, and TD Matte. 1994.
The decline in blood lead levels in the United States. J Am Med Assoc 272 (Jul):284-291.

Plant, TM. 1988. Puberty in primates. In The Physiology of Reproduction, edited by E Knobil
and JD Neill. New York: Raven Press.

Plowchalk, DR, MJ Meadows, and DR Mattison. 1992. Female reproductive toxicity. In


Occupational and Environmental Reproductive Hazards: A Guide for Clinicians, edited by M
Paul. Baltimore: Williams and Wilkins.

Potashnik, G and D Abeliovich. 1985. Chromosomal analysis and health status of children
conceived to men during or following dibromochloropropane-induced spermatogenic
suppression. Andrologia 17:291-296.

Rabinowitz, M, A Leviton, and H Needleman. 1985. Lead in milk and infant blood: A dose-
response model. Arch Environ Health 40:283-286.
Ratcliffe, JM, SM Schrader, K Steenland, DE Clapp, T Turner, and RW Hornung. 1987.
Semen quality in papaya workers with long term exposure to ethylene dibromide. Br J Ind
Med 44:317-326.

Referee (The). 1994. J Assoc Anal Chem 18(8):1-16.

Rinehart, RD and Y Yanagisawa. 1993. Paraoccupational exposures to lead and tin carried by
electric-cable splicers. Am Ind Hyg Assoc J 54(10):593-599.

Rodamilans, M, MJM Osaba, J To-Figueras, F Rivera Fillat, JM Marques, P Perez, and J


Corbella. 1988. Lead toxicity on endocrine testicular function in an occupationally exposed
population. Hum Toxicol 7:125-128.

Rogan, WJ, BC Gladen, JD McKinney, N Carreras, P Hardy, J Thullen, J Tingelstad, and M


Tully. 1986. Neonatal effects of transplacental exposure to PCBs and DDE. J Pediat 109:335-
341.

Roggli, VL and WE Longo. 1991. Mineral fiber content of lung tissue in patients with
environmental exposures: household contacts vs. building occupants. Ann NY Acad Sci 643
(31 Dec):511-518.

Roper, WL. 1991. Preventing Lead Poisoning in Young Children: A Statement by the Centers
for Disease Control. Washington, DC: US Department of Health and Human Services.

Rowens, B, D Guerrero-Betancourt, CA Gottlieb, RJ Boyes, and MS Eichenhorn. 1991.


Respiratory failure and death following acute inhalation of mercury vapor. A clinical and
histologic perspective. Chest 99(1):185-190.

Rylander, E, G Pershagen, M Eriksson, and L Nordvall. 1993. Parental smoking and other
risk factors for wheezing bronchitis in children. Eur J Epidemiol 9(5):516-526.

Ryu, JE, EE Ziegler, and JS Fomon. 1978. Maternal lead exposure and blood lead
concentration in infancy. J Pediat 93:476-478.

Ryu, JE, EE Ziegler, SE Nelson, and JS Fomon. 1983. Dietary intake of lead and blood lead
concentration in early infancy. Am J Dis Child 137:886-891.

Sager, DB and DM Girard. 1994. Long term effects on reproductive parameters in female rats
after translactional exposure to PCBs. Environ Res 66:52-76.

Sallmen, M, ML Lindbohm, A Anttila, H Taskinen, and K Hemminki. 1992. Paternal


occupational lead exposure and congenital malformations. J Epidemiol Community Health
46(5):519-522.

Saurel-Cubizolles, MJ and M Kaminski. 1987. Pregnant women’s working conditions and


their changes during pregnancy: A national study in France. Br J Ind Med 44:236-243.

Savitz, DA, NL Sonnerfeld, and AF Olshaw. 1994. Review of epidemiologic studies of


paternal occupational exposure and spontaneous abortion. Am J Ind Med 25:361-383.
Savy-Moore, RJ and NB Schwartz. 1980. Differential control of FSH and LH secretion. Int
Rev Physiol 22:203-248.

Schaefer, M. 1994. Children and toxic substances: Confronting a major public health
challenge. Environ Health Persp 102 Suppl. 2:155-156.

Schenker, MB, SJ Samuels, RS Green, and P Wiggins. 1990. Adverse reproductive outcomes
among female veterinarians. Am J Epidemiol 132 (January):96-106.

Schreiber, JS. 1993. Predicted infant exposure to tetrachloroethene in human breastmilk. Risk
Anal 13(5):515-524.

Segal, S, H Yaffe, N Laufer, and M Ben-David. 1979. Male hyperprolactinemia: Effects on


fertility. Fert Steril 32:556-561.

Selevan, SG. 1985. Design of pregnancy outcome studies of industrial exposures. In


Occupational Hazards and Reproduction, edited by K Hemminki, M Sorsa, and H Vainio.
Washington, DC: Hemisphere.

Sever, LE, ES Gilbert, NA Hessol, and JM McIntyre. 1988. A case-control study of


congenital malformations and occupational exposure to low-level radiation. Am J Epidemiol
127:226-242.

Shannon, MW and JW Graef. 1992. Lead intoxication in infancy. Pediatrics 89:87-90.

Sharpe, RM. 1989. Follicle-stimulating hormone and spermatogenesis in the adult male. J
Endocrinol 121:405-407.

Shepard, T, AG Fantel, and J Fitsimmons. 1989. Congenital defect abortuses: Twenty years of
monitoring. Teratology 39:325-331.

Shilon, M, GF Paz, and ZT Homonnai. 1984. The use of phenoxybenzamine treatment in


premature ejaculation. Fert Steril 42:659-661.

Smith, AG. 1991. Chlorinated hydrocarbon insecticides. In Handbook of Pesticide


Toxicology, edited by WJ Hayes and ER Laws. New York: Academic Press.

Sockrider, MM and DB Coultras. 1994. Environmental tobacco smoke: a real and present
danger. J Resp Dis 15(8):715-733.

Stachel, B, RC Dougherty, U Lahl, M Schlosser, and B Zeschmar. 1989. Toxic environmental


chemicals in human semen: analytical method and case studies. Andrologia 21:282-291.

Starr, HG, FD Aldrich, WD McDougall III, and LM Mounce. 1974. Contribution of


household dust to the human exposure to pesticides. Pest Monit J 8:209-212.

Stein, ZA, MW Susser, and G Saenger. 1975. Famine and Human Development. The Dutch
Hunger Winter of 1944/45. New York: Oxford Univ. Press.
Taguchi, S and T Yakushiji. 1988. Influence of termite treatment in the home on the
chlordane concentration in human milk. Arch Environ Contam Toxicol 17:65-71.

Taskinen, HK. 1993. Epidemiological studies in monitoring reproductive effects. Environ


Health Persp 101 Suppl. 3:279-283.

Taskinen, H, A Antilla, ML Lindbohm, M Sallmen, and K Hemminki. 1989. Spontaneous


abortions and congenital malformations among the wives of men occupationally exposed to
organic solvents. Scand J Work Environ Health 15:345-352.

Teitelman, AM, LS Welch, KG Hellenbrand, and MB Bracken. 1990. The effects of maternal
work activity on preterm birth and low birth weight. Am J Epidemiol 131:104-113.

Thorner, MO, CRW Edwards, JP Hanker, G Abraham, and GM Besser. 1977. Prolactin and
gonadotropin interaction in the male. In The Testis in Normal and Infertile Men, edited by P
Troen and H Nankin. New York: Raven Press.

US Environmental Protection Agency (US EPA). 1992. Respiratory Health Effects of Passive
Smoking: Lung Cancer and Other Disorders. Publication No. EPA/600/6-90/006F.
Washington, DC: US EPA.

Veulemans, H, O Steeno, R Masschelein, and D Groesneken. 1993. Exposure to ethylene


glycol ethers and spermatogenic disorders in man: A case-control study. Br J Ind Med 50:71-
78.

Villar, J and JM Belizan. 1982. The relative contribution of prematurity and fetal growth
retardation to low birth weight in developing and developed societies. Am J Obstet Gynecol
143(7):793-798.

Welch, LS, SM Schrader, TW Turner, and MR Cullen. 1988. Effects of exposure to ethylene
glycol ethers on shipyard painters: ii. male reproduction. Am J Ind Med 14:509-526.

Whorton, D, TH Milby, RM Krauss, and HA Stubbs. 1979. Testicular function in DBCP


exposed pesticide workers. J Occup Med 21:161-166.

Wilcox, AJ, CR Weinberg, JF O’Connor, DD Baird, JP Schlatterer, RE Canfield, EG


Armstrong, and BC Nisula. 1988. Incidence of early loss of pregnancy. New Engl J Med
319:189-194.

Wilkins, JR and T Sinks. 1990. Parental occupation and intracranial neoplasms of childhood:
Results of a case-control interview study. Am J Epidemiol 132:275-292.

Wilson, JG. 1973. Environment and Birth Defects. New York: Academic Press.

——. 1977. Current status of teratology: General principles and mechanisms derived from
animal studies. In Handbook of Teratology, Volume 1, General Principles and Etiology,
edited by JG Fraser and FC Wilson. New York: Plenum.

Winters, SJ. 1990. Inhibin is released together with testosterone by the human testis. J Clin
Endocrinol Metabol 70:548-550.
Wolff, MS. 1985. Occupational exposure to polychlorinated biphenyls. Environ Health Persp
60:133-138.

——. 1993. Lactation. In Occupational and Environmental Reproductive Hazards: A Guide


for Clinicians, edited by M Paul. Baltimore: Williams & Wilkins.

Wolff, MS and A Schecter. 1991. Accidental exposure of children to polychlorinated


biphenyls. Arch Environ Contam Toxicol 20:449-453.

World Health Organization (WHO). 1969. Prevention of perinatal morbidity and mortality.
Public Health Papers, No. 42. Geneva: WHO.

——. 1977. Modification Recommended by FIGO. WHO recommended definitions,


terminology and format for statistical tables related to the perinatal period and use of a new
certificate for cause of perinatal death. Acta Obstet Gynecol Scand 56:247-253.

Zaneveld, LJD. 1978. The biology of human spermatozoa. Obstet Gynecol Ann 7:15-40.

Ziegler, EE, BB Edwards, RL Jensen, KR Mahaffey, and JS Fomon. 1978. Absorption and
retention of lead by infants. Pediat Res 12:29-34.

Zikarge, A. 1986. Cross-Sectional Study of Ethylene Dibromide-Induced Alterations of


Seminal Plasma Biochemistry as a Function of Post-Testicular Toxicity with Relationships to
Some Indices of Semen Analysis and Endocrine Profile. Dissertation, Houston, Texas:
Univ. of Texas Health Science Center.

Zirschky, J and L Wetherell. 1987. Cleanup of mercury contamination of thermometer


workers’ homes. Am Ind Hyg Assoc J 48:82-84.

Zukerman, Z, LJ Rodriguez-Rigau, DB Weiss, AK Chowdhury, KD Smith, and E


Steinberger. 1978. Quantitative analysis of the seminiferous epithelium in human testicular
biopsies, and the relation of spermatogenesis to sperm density. Fert Steril 30:448-455.

Zwiener, RJ and CM Ginsburg. 1988. Organophosphate and carbamate poisoning in infants


and children. Pediatrics 81(1):121-126

Copyright 2015 International Labour Organization


10. Respiratory System
Chapter Editors: Alois David and Gregory R. Wagner

Structure and Function

Written by ILO Content Manager

The respiratory system extends from the breathing zone just outside of the nose and mouth
through the conductive airways in the head and thorax to the alveoli, where respiratory gas
exchange takes place between the alveoli and the capillary blood flowing around them. Its
prime function is to deliver oxygen (O2) to the gas-exchange region of the lung, where it can
diffuse to and through the walls of the alveoli to oxygenate the blood passing through the
alveolar capillaries as needed over a wide range of work or activity levels. In addition, the
system must also: (1) remove an equal volume of carbon dioxide entering the lungs from the
alveolar capillaries; (2) maintain body temperature and water vapour saturation within the
lung airways (in order to maintain the viability and functional capacities of the surface fluids
and cells); (3) maintain sterility (to prevent infections and their adverse consequences); and
(4) eliminate excess surface fluids and debris, such as inhaled particles and senescent
phagocytic and epithelial cells. It must accomplish all of these demanding tasks continuously
over a lifetime, and do so with high efficiency in terms of performance and energy utilization.
The system can be abused and overwhelmed by severe insults such as high concentrations of
cigarette smoke and industrial dust, or by low concentrations of specific pathogens which
attack or destroy its defence mechanisms, or cause them to malfunction. Its ability to
overcome or compensate for such insults as competently as it usually does is a testament to its
elegant combination of structure and function.

Mass Transfer

The complex structure and numerous functions of the human respiratory tract have been
summarized concisely by a Task Group of the International Commission on Radiological
Protection (ICRP 1994), as shown in figure 1. The conductive airways, also known as the
respiratory dead space, occupy about 0.2 litres. They condition the inhaled air and distribute
it, by convective (bulk) flow, to the approximately 65,000 respiratory acini leading off the
terminal bronchioles. As tidal volumes increase, convective flow dominates gas exchange
deeper into the respiratory bronchioles. In any case, within the respiratory acinus, the distance
from the convective tidal front to alveolar surfaces is short enough so that efficient CO2-O2
exchange takes place by molecular diffusion. By contrast, airborne particles, with diffusion
coefficients smaller by orders of magnitude than those for gases, tend to remain suspended in
the tidal air, and can be exhaled without deposition.

Figure 1. Morphometry, cytology, histology, function and structure of the respiratory tract
and regions used in the 1994 ICRP dosimetry model.
A significant fraction of the inhaled particles do deposit within the respiratory tract. The
mechanisms accounting for particle deposition in the lung airways during the inspiratory
phase of a tidal breath are summarized in figure 2. Particles larger than about 2 μm in
aerodynamic diameter (the diameter of a unit density sphere having the same terminal settling
(Stokes) velocity) can have significant momentum and deposit by impaction at the relatively
high velocities present in the larger airways. Particles larger than about 1 μm can deposit by
sedimentation in the smaller conductive airways, where flow velocities are very low. Finally,
particles with diameters between 0.1 and 1 μm, which have a very low probability of
depositing during a single tidal breath, can be retained within the approximately 15% of the
inspired tidal air that is exchanged with residual lung air during each tidal cycle. This
volumetric exchange occurs because of the variable time-constants for airflow in the different
segments of the lungs. Due to the much longer residence times of the residual air in the lungs,
the low intrinsic particle displacements of 0.1 to 1 μm particles within such trapped volumes
of inhaled tidal air become sufficient to cause their deposition by sedimentation and/or
diffusion over the course of successive breaths.
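
The role of sedimentation can be illustrated with a simple calculation of the Stokes terminal
settling velocity. The following minimal sketch, in Python, assumes standard air viscosity at
about 20°C and neglects the slip correction, which matters mainly for particles well below
1 μm; the printed values are illustrative only.

    def stokes_settling_velocity(d_um, rho_p=1000.0, mu=1.81e-5, g=9.81):
        """Terminal settling velocity (m/s) of a small sphere in still air.

        d_um  : particle diameter in micrometres
        rho_p : particle density in kg/m3 (1000 corresponds to unit density)
        mu    : dynamic viscosity of air in Pa*s (about 1.81e-5 at 20 degrees C)
        Valid in the Stokes regime; the slip correction is neglected.
        """
        d = d_um * 1e-6  # micrometres to metres
        return rho_p * g * d ** 2 / (18.0 * mu)

    for d in (1, 2, 5, 10, 20):
        print(f"{d:>3} um: {stokes_settling_velocity(d) * 1000:.2f} mm/s")

The steep increase of settling velocity with diameter (about 0.03 mm/s at 1 μm versus about
3 mm/s at 10 μm) is why sedimentation is effective for particles larger than about 1 μm in the
small conductive airways, whereas 0.1 to 1 μm particles settle too slowly to deposit during a
single breath.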

Figure 2. Mechanisms for particle deposition in lung airways


The essentially particle-free residual lung air that accounts for about 15% of the expiratory
tidal flow tends to act like a clean-air sheath around the axial core of distally moving tidal air,
such that particle deposition in the respiratory acinus is concentrated on interior surfaces such
as airway bifurcations, while interbranch airway walls have little deposition.

The number of particles deposited and their distribution along the respiratory tract surfaces
are, along with the toxic properties of the material deposited, the critical determinants of
pathogenic potential. The deposited particles can damage the epithelial and/or the mobile
phagocytic cells at or near the deposition site, or can stimulate the secretion of fluids and cell-
derived mediators that have secondary effects on the system. Soluble materials deposited as,
on, or within particles can diffuse into and through surface fluids and cells and be rapidly
transported by the bloodstream throughout the body.

Aqueous solubility of bulk materials is a poor guide to particle solubility in the respiratory
tract. Solubility is generally greatly enhanced by the very large surface-to-volume ratio of
particles small enough to enter the lungs. Furthermore, the ionic and lipid contents of surface
fluids within the airways are complex and highly variable, and can lead to either enhanced
solubility or to rapid precipitation of aqueous solutes. Furthermore, the clearance pathways
and residence times for particles on airway surfaces are very different in the different
functional parts of the respiratory tract.

The revised ICRP Task Group’s clearance model identifies the principal clearance pathways
within the respiratory tract that are important in determining the retention of various
radioactive materials, and thus the radiation doses received by respiratory tissues and other
organs after translocation. The ICRP deposition model is used to estimate the amount of
inhaled material that enters each clearance pathway. These discrete pathways are represented
by the compartment model shown in figure 3. They correspond to the anatomic compartments
illustrated in Figure 1, and are summarized in table 1, along with those of other groups
providing guidance on the dosimetry of inhaled particles.
Figure 3. Compartment model to represent time-dependent particle transport from each region
in 1994 ICRP model

Table 1. Respiratory tract regions as defined in particle deposition models

(Anatomic structures included, with the corresponding region in each deposition scheme)

Nose, nasopharynx
  ACGIH region: Head airways (HAR)
  ISO and CEN region: Extrathoracic (E)
  1966 ICRP Task Group region: Nasopharynx (NP)
  1994 ICRP Task Group region: Anterior nasal passages (ET1)

Mouth, oropharynx, laryngopharynx
  ACGIH region: Head airways (HAR)
  ISO and CEN region: Extrathoracic (E)
  1966 ICRP Task Group region: Nasopharynx (NP)
  1994 ICRP Task Group region: All other extrathoracic airways (ET2)

Trachea, bronchi
  ACGIH region: Tracheobronchial (TBR)
  ISO and CEN region: Tracheobronchial (B)
  1966 ICRP Task Group region: Tracheobronchial (TB)
  1994 ICRP Task Group region: Trachea and large bronchi (BB)

Bronchioles (to terminal bronchioles)
  ACGIH region: Tracheobronchial (TBR)
  ISO and CEN region: Tracheobronchial (B)
  1966 ICRP Task Group region: Tracheobronchial (TB)
  1994 ICRP Task Group region: Bronchioles (bb)

Respiratory bronchioles, alveolar ducts, alveolar sacs, alveoli
  ACGIH region: Gas exchange (GER)
  ISO and CEN region: Alveolar (A)
  1966 ICRP Task Group region: Pulmonary (P)
  1994 ICRP Task Group region: Alveolar-interstitial (AI)

Extrathoracic airways

As shown in figure 1, the extrathoracic airways were partitioned by ICRP (1994) into two
distinct clearance and dosimetric regions: the anterior nasal passages (ET1) and all other
extrathoracic airways (ET2)—that is, the posterior nasal passages, the naso- and oropharynx,
and the larynx. Particles deposited on the surface of the skin lining the anterior nasal passages
(ET1) are assumed to be subject only to removal by extrinsic means (nose blowing, wiping
and so on). The bulk of material deposited in the naso-oropharynx or larynx (ET2) is subject
to fast clearance in the layer of fluid that covers these airways. The new model recognizes that
diffusional deposition of ultrafine particles in the extrathoracic airways can be substantial,
while the earlier models did not.

Thoracic airways

Radioactive material deposited in the thorax is generally divided between the
tracheobronchial (TB) region, where deposited particles are subject to relatively fast
mucociliary clearance, and the alveolar-interstitial (AI) region, where the particle clearance is
much slower.

For dosimetry purposes, the ICRP (1994) divided deposition of inhaled material in the TB
region between the trachea and bronchi (BB), and the more distal, small airways, the
bronchioles (bb). However, the subsequent efficiency with which cilia in either type of
airways are able to clear deposited particles is controversial. In order to be certain that doses
to bronchial and bronchiolar epithelia would not be underestimated, the Task Group assumed
that as much as half the number of particles deposited in these airways is subject to relatively
“slow” mucociliary clearance. The likelihood that a particle is cleared relatively slowly by the
mucociliary system appears to depend on its physical size.

Material deposited in the AI region is subdivided among three compartments (AI1, AI2 and
AI3) that are each cleared more slowly than TB deposition, with the subregions cleared at
different characteristic rates.
Figure 4. Fractional deposition in each region of respiratory tract for reference light worker
(normal nose breather) in 1994 ICRP model.

Figure 4 depicts the predictions of the ICRP (1994) model in terms of the fractional
deposition in each region as a function of the size of the inhaled particles. It reflects the
minimal lung deposition between 0.1 and 1 μm, where deposition is determined largely by
the exchange, in the deep lung, between tidal and residual lung air. Deposition increases
below 0.1 μm as diffusion becomes more efficient with decreasing particle size. Deposition
increases with increasing particle size above 1 μm as sedimentation and impaction become
increasingly effective.

Less complex models for size-selective deposition have been adopted by occupational health
and community air pollution professionals and agencies, and these have been used to develop
inhalation exposure limits within specific particle size ranges. Distinctions are made between:

1. those particles that are not aspirated into the nose or mouth and therefore represent no
inhalation hazard
2. the inhalable (also known as inspirable) particulate mass (IPM)—those that are inhaled and
are hazardous when deposited anywhere within the respiratory tract
3. the thoracic particulate mass (TPM)—those that penetrate the larynx and are hazardous
when deposited anywhere within the thorax and
4. the respirable particulate mass (RPM)—those particles that penetrate through the terminal
bronchioles and are hazardous when deposited within the gas-exchange region of the lungs.

In the early 1990s the quantitative definitions of IPM, TPM and RPM were harmonized
internationally. The size-selective inlet specifications for air samplers meeting the
criteria of the American Conference of Governmental Industrial Hygienists (ACGIH 1993),
the International Organization for Standardization (ISO 1991) and the European
Standardization Committee (CEN 1991) are enumerated in table 2. They differ from the
deposition fractions of ICRP (1994), especially for larger particles, because they take the
conservative position that protection should be provided for those who inhale through the
mouth and thereby bypass the more efficient filtration of the nasal passages.
Table 2. Inhalable, thoracic and respirable dust criteria of ACGIH, ISO and CEN, and PM10
criteria of US EPA

(Particle aerodynamic diameter in μm; collection criterion as a percentage of total airborne
particulate mass)

Inhalable (IPM)       Thoracic (TPM)        Respirable (RPM)      PM10 (TPM)
diam.    %            diam.    %            diam.    %            diam.    %
0        100          0        100          0        100          0        100
1        97           2        94           1        97           2        94
2        94           4        89           2        91           4        89
5        87           6        80.5         3        74           6        81.2
10       77           8        67           4        50           8        69.7
20       65           10       50           5        30           10       55.1
30       58           12       35           6        17           12       37.1
40       54.5         14       23           7        9            14       15.9
50       52.5         16       15           8        5            16       0
100      50           18       9.5          10       1
                      20       6
                      25       2

The US Environmental Protection Agency (EPA 1987) standard for ambient air particle
concentration is known as PM10, that is, particulate matter less than 10 μm in aerodynamic
diameter. It has a sampler inlet criterion that is similar (functionally equivalent) to that
for TPM but, as shown in table 2, with somewhat different numerical specifications.
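
The harmonized criteria in table 2 follow from simple closed-form sampling conventions. The
minimal sketch below assumes the commonly cited formulation of the ISO/CEN/ACGIH conventions:
an inhalable fraction of 50% x (1 + exp(-0.06 d)) for d up to 100 μm, with the thoracic and
respirable fractions obtained by multiplying the inhalable fraction by complementary cumulative
lognormal curves (assumed medians of 11.64 μm and 4.25 μm, geometric standard deviation 1.5).
These parameter values should be checked against the current standards before being relied upon.

    import math

    def inhalable(d):
        """Inhalable fraction (%) for aerodynamic diameter d (um), valid up to 100 um."""
        return 50.0 * (1.0 + math.exp(-0.06 * d))

    def _cum_lognormal(d, median, gsd):
        # Cumulative lognormal distribution evaluated at diameter d
        z = math.log(d / median) / math.log(gsd)
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def thoracic(d):
        """Thoracic fraction (%); assumed median 11.64 um, GSD 1.5."""
        return inhalable(d) * (1.0 - _cum_lognormal(d, 11.64, 1.5))

    def respirable(d):
        """Respirable fraction (%); assumed median 4.25 um, GSD 1.5."""
        return inhalable(d) * (1.0 - _cum_lognormal(d, 4.25, 1.5))

    for d in (2, 4, 10):
        print(d, round(inhalable(d)), round(thoracic(d)), round(respirable(d)))
    # Approximately reproduces the tabulated values, e.g., 10 um: 77% inhalable, 50% thoracic.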

Air Pollutants

Pollutants can be dispersed in air at normal ambient temperatures and pressures in gaseous,
liquid and solid forms. The latter two represent suspensions of particles in air and were given
the generic term aerosols by Gibbs (1924) on the basis of analogy to the term hydrosol, used
to describe dispersed systems in water. Gases and vapours, which are present as discrete
molecules, form true solutions in air. Particles consisting of moderate to high vapour pressure
materials tend to evaporate rapidly, because those small enough to remain suspended in air for
more than a few minutes (i.e., those smaller than about 10 μm) have large surface-to-volume
ratios. Some materials with relatively low vapour pressures can have appreciable fractions in
both vapour and aerosol forms simultaneously.
Gases and vapours

Once dispersed in air, contaminant gases and vapours generally form mixtures so dilute that
their physical properties (such as density, viscosity, enthalpy and so on) are indistinguishable
from those of clean air. Such mixtures may be considered to follow ideal gas law
relationships. There is no practical difference between a gas and a vapour except that the latter
is generally considered to be the gaseous phase of a substance that can exist as a solid or
liquid at room temperature. While dispersed in air, all molecules of a given compound are
essentially equivalent in their size and probabilities of capture by ambient surfaces,
respiratory tract surfaces and contaminant collectors or samplers.
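
Because such dilute mixtures behave essentially as ideal gases, a concentration expressed as a
volume mixing ratio (ppm) can be converted to a mass concentration (mg/m3) using the molar
volume of an ideal gas, about 24.45 l/mol at 25°C and 101.3 kPa. The short sketch below
illustrates the conversion; the substance and concentration chosen are arbitrary examples, not
recommendations.

    def ppm_to_mg_per_m3(ppm, molar_mass, molar_volume=24.45):
        """Convert a gas or vapour concentration from ppm (by volume) to mg/m3.

        molar_mass   : molar mass of the contaminant in g/mol
        molar_volume : molar volume of an ideal gas in l/mol (24.45 at 25 degrees C, 101.3 kPa)
        """
        return ppm * molar_mass / molar_volume

    # Example with illustrative values: 2 ppm of sulphur dioxide (molar mass 64.07 g/mol)
    print(round(ppm_to_mg_per_m3(2, 64.07), 2), "mg/m3")  # about 5.2 mg/m3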

Aerosols

Aerosols, being dispersions of solid or liquid particles in air, have the very significant
additional variable of particle size. Size affects particle motion and, hence, the probabilities of
physical phenomena such as coagulation, dispersion, sedimentation, impaction onto surfaces,
interfacial phenomena and light-scattering properties. It is not possible to characterize a given
particle by a single size parameter. For example, a particle’s aerodynamic properties depend
on density and shape as well as linear dimensions, and the effective size for light scattering is
dependent on refractive index and shape.

In some special cases, all of the particles are essentially the same in size. Such aerosols are
considered to be monodisperse. Examples are natural pollens and some laboratory-generated
aerosols. More typically, aerosols are composed of particles of many different sizes and hence
are called heterodisperse or polydisperse. Different aerosols have different degrees of size
dispersion. It is, therefore, necessary to specify at least two parameters in characterizing
aerosol size: a measure of central tendency, such as a mean or median, and a measure of
dispersion, such as an arithmetic or geometric standard deviation.

Particles generated by a single source or process generally have diameters following a log-
normal distribution; that is, the logarithms of their individual diameters have a Gaussian
distribution. In this case, the measure of dispersion is the geometric standard deviation, which
is the ratio of the 84.1 percentile size to the 50 percentile size. When more than one source of
particles is significant, the resulting mixed aerosol will usually not follow a single log-normal
distribution, and it may be necessary to describe it by the sum of several distributions.
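
For a log-normally distributed aerosol, the median diameter and the geometric standard
deviation can be estimated directly from the logarithms of the measured diameters, the
geometric standard deviation being the ratio of the 84.1 percentile size to the median. The
sketch below applies these definitions to a small, purely hypothetical set of measured
diameters.

    import math
    import statistics

    def geometric_stats(diameters_um):
        """Return (median diameter, geometric standard deviation) of a lognormal sample."""
        logs = [math.log(d) for d in diameters_um]
        median = math.exp(statistics.mean(logs))  # geometric mean = median of a lognormal
        gsd = math.exp(statistics.stdev(logs))    # ratio of the 84.1 percentile to the median
        return median, gsd

    sample = [0.4, 0.6, 0.7, 0.9, 1.1, 1.4, 1.9, 2.5]  # hypothetical diameters in um
    med, gsd = geometric_stats(sample)
    print(f"median = {med:.2f} um, GSD = {gsd:.2f}")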

Particle characteristics

There are many properties of particles other than their linear size that can greatly influence
their airborne behaviour and their effects on the environment and health. These include:

Surface. For spherical particles, the surface varies as the square of the diameter. However, for
an aerosol of given mass concentration, the total aerosol surface increases with decreasing
particle size. For non-spherical or aggregate particles, and for particles with internal cracks or
pores, the ratio of surface to volume can be much greater than for spheres.

Volume. Particle volume varies as the cube of the diameter; therefore, the few largest particles
in an aerosol tend to dominate its volume (or mass) concentration.
Shape. A particle’s shape affects its aerodynamic drag as well as its surface area and therefore
its motion and deposition probabilities.

Density. A particle’s velocity in response to gravitational or inertial forces increases as the
square root of its density.

Aerodynamic diameter. The diameter of a unit-density sphere having the same terminal
settling velocity as the particle under consideration is equal to its aerodynamic diameter.
Terminal settling velocity is the equilibrium velocity of a particle that is falling under the
influence of gravity and fluid resistance. Aerodynamic diameter is determined by the actual
particle size, the particle density and an aerodynamic shape factor.
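
For a compact particle in the Stokes regime, this relationship is often written as the geometric
diameter multiplied by the square root of the ratio of particle density to unit density, divided
by the square root of the dynamic shape factor. The sketch below assumes that conventional
formulation; the quartz example is illustrative only.

    import math

    def aerodynamic_diameter(d_geometric_um, rho_p, shape_factor=1.0, rho_0=1.0):
        """Approximate aerodynamic diameter (um) of a particle.

        d_geometric_um : equivalent-volume (geometric) diameter in um
        rho_p          : particle density in g/cm3
        shape_factor   : dynamic shape factor (1.0 for a sphere)
        rho_0          : unit density, 1 g/cm3
        """
        return d_geometric_um * math.sqrt(rho_p / (shape_factor * rho_0))

    # Example: a 2 um spherical quartz particle (density about 2.65 g/cm3)
    print(round(aerodynamic_diameter(2.0, 2.65), 1), "um")  # about 3.3 um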

Types of aerosols

Aerosols are generally classified in terms of their processes of formation. Although the
following classification is neither precise nor comprehensive, it is commonly used and
accepted in the industrial hygiene and air pollution fields.

Dust. An aerosol formed by mechanical subdivision of bulk material into airborne fines
having the same chemical composition. Dust particles are generally solid and irregular in
shape and have diameters greater than 1 μm.

Fume. An aerosol of solid particles formed by condensation of vapours formed by combustion
or sublimation at elevated temperatures. The primary particles are generally very small (less
than 0.1 μm) and have spherical or characteristic crystalline shapes. They may be chemically
identical to the parent material, or may be composed of an oxidation product such as metal
oxide. Since they may be formed in high number concentrations, they often rapidly coagulate,
forming aggregate clusters of low overall density.

Smoke. An aerosol formed by condensation of combustion products, generally of organic
materials. The particles are generally liquid droplets with diameters less than 0.5 μm.

Mist. A droplet aerosol formed by mechanical shearing of a bulk liquid, for example, by
atomization, nebulization, bubbling or spraying. The droplet size can cover a very large range,
usually from about 2 μm to greater than 50 μm.

Fog. An aqueous aerosol formed by condensation of water vapour on atmospheric nuclei at
high relative humidities. The droplet sizes are generally greater than 1 μm.

Smog. A popular term for a pollution aerosol derived from a combination of smoke and fog. It
is now commonly used for any atmospheric pollution mixture.

Haze. A submicrometre-sized aerosol of hygroscopic particles that take up water vapour at
relatively low relative humidities.

Aitken or condensation nuclei (CN). Very small atmospheric particles (mostly smaller than
0.1 μm) formed by combustion processes and by chemical conversion from gaseous
precursors.
Accumulation mode. A term given to the particles in the ambient atmosphere ranging from 0.1
to about 1.0 μm in diameter. These particles generally are spherical (having liquid surfaces),
and form by coagulation and condensation of smaller particles that derive from gaseous
precursors. Being too large for rapid coagulation and too small for effective sedimentation,
they tend to accumulate in the ambient air.

Coarse particle mode. Ambient air particles larger than about 2.5 μm in aerodynamic
diameter and generally formed by mechanical processes and surface dust resuspension.

Biological Responses of the Respiratory System to Air Pollutants

Responses to air pollutants range from nuisance to tissue necrosis and death, from generalized
systemic effects to highly specific attacks on single tissues. Host and environmental factors
serve to modify the effects of inhaled chemicals, and the ultimate response is the result of
their interaction. The main host factors are:

1. age—for example, older people, especially those with chronically reduced cardiovascular and
respiratory function, who may not be able to cope with additional pulmonary stresses
2. state of health—for example, concurrent disease or dysfunction
3. nutritional status
4. immunological status
5. sex and other genetic factors—for example, enzyme-related differences in biotransformation
mechanisms, such as deficient metabolic pathways, and inability to synthesize certain
detoxification enzymes
6. psychological state—for example, stress, anxiety and
7. cultural factors—for example, cigarette smoking, which may affect normal defences, or may
potentiate the effect of other chemicals.

The environmental factors include the concentration, stability and physicochemical properties
of the agent in the exposure environment and the duration, frequency and route of exposure.
Acute and chronic exposures to a chemical may result in different pathological
manifestations.

Any organ can respond in only a limited number of ways, and there are numerous diagnostic
labels for the resultant diseases. The following sections discuss the broad types of responses
of the respiratory system which may occur following exposure to environmental pollutants.

Irritant response

Irritants produce a pattern of generalized, non-specific tissue inflammation, and destruction
may result at the area of contaminant contact. Some irritants produce no systemic effect
because the irritant response is much greater than any systemic effect, while some also have
significant systemic effects following absorption—for example, hydrogen sulphide absorbed
via the lungs.

At high concentrations, irritants may cause a burning sensation in the nose and throat (and
usually also in the eyes), pain in the chest and coughing, reflecting inflammation of the
mucosa (tracheitis, bronchitis). Examples of irritants are gases such as chlorine, fluorine,
sulphur dioxide, phosgene and oxides of nitrogen; mists of acids or alkali; fumes of cadmium;
dusts of zinc chloride and vanadium pentoxide. High concentrations of chemical irritants may
also penetrate deep into the lungs and cause lung oedema (the alveoli are filled with liquid) or
inflammation (chemical pneumonitis).

Highly elevated concentrations of dusts which have no chemical irritative properties can also
mechanically irritate bronchi and, after entering the gastrointestinal tract, may also contribute
to stomach and colon cancer.

Exposure to irritants may result in death if critical organs are severely damaged. On the other
hand, the damage may be reversible, or it may result in permanent loss of some degree of
function, such as impaired gas-exchange capacity.

Fibrotic response

A number of dusts lead to the development of a group of chronic lung disorders termed
pneumoconioses. This general term encompasses many fibrotic conditions of the lung, that is,
diseases characterized by scar formation in the interstitial connective tissue. Pneumoconioses
are due to the inhalation and subsequent selective retention of certain dusts in the alveoli,
from which they are subject to interstitial sequestration.

Pneumoconioses are characterized by specific fibrotic lesions, which differ in type and pattern
according to the dust involved. For example, silicosis, due to the deposition of free crystalline
silica, is characterized by a nodular type of fibrosis, while a diffuse fibrosis is found in
asbestosis, due to asbestos-fibre exposure. Certain dusts, such as iron oxide, produce only
altered radiology (siderosis) with no functional impairment, while the effects of others range
from a minimal disability to death.

Allergic response

Allergic responses involve the phenomenon known as sensitization. Initial exposure to an
allergen results in the induction of antibody formation; subsequent exposure of the now
“sensitized” individual results in an immune response—that is, an antibody-antigen reaction
(the antigen is the allergen in combination with an endogenous protein). This immune
reaction may occur immediately following exposure to the allergen, or it may be a delayed
response.

The primary respiratory allergic reactions are bronchial asthma, reactions in the upper
respiratory tract which involve the release of histamine or histamine-like mediators following
immune reactions in the mucosa, and a type of pneumonitis (lung inflammation) known as
extrinsic allergic alveolitis. In addition to these local reactions, a systemic allergic reaction
(anaphylactic shock) may follow exposure to some chemical allergens.

Infectious response

Infectious agents can cause tuberculosis, anthrax, ornithosis, brucellosis, histoplasmosis,
Legionnaires’ disease and so on.
Carcinogenic response

Cancer is a general term for a group of related diseases characterized by the uncontrolled
growth of tissues. Its development is due to a complex process of interacting multiple factors
in the host and the environment.

One of the great difficulties in attempting to relate exposure to a specific agent to cancer
development in humans is the long latent period, typically from 15 to 40 years, between onset
of exposure and disease manifestation.

Examples of air pollutants that can produce cancer of the lungs are arsenic and its compounds,
chromates, silica, particles containing polycyclic aromatic hydrocarbons and certain nickel-
bearing dusts. Asbestos fibres can cause bronchial cancer and mesothelioma of the pleura and
peritoneum. Deposited radioactive particles may expose lung tissue to high local doses of
ionizing radiation and be the cause of cancer.

Systemic response

Many environmental chemicals produce a generalized systemic disease due to their effects
upon a number of target sites. Lungs are not only the target for many harmful agents but the
site of entry of toxic substances which pass through the lungs into the bloodstream without
any damage to the lungs. However, when distributed by the blood circulation to various
organs, they can damage them or cause general poisoning and have systemic effects. This role
of the lungs in occupational pathology is not the subject of this article. However, the effect of
finely dispersed particulates (fumes) of several metal oxides which are often associated with
an acute systemic syndrome known as metal fume fever should be mentioned.

Lung Function Examination

Written by ILO Content Manager

Lung function may be measured in a number of ways. However, the aim of the measurements
has to be clear before the examination, in order to interpret the results correctly. In this article
we will discuss lung function examination with special regard to the occupational field. It is
important to remember the limitations in different lung function measurements. Acute
temporary lung function effects may not be discernible in case of exposure to fibrogenic dust
like quartz and asbestos, but chronic effects on lung function after long-term (>20 years)
exposure may be. This is due to the fact that chronic effects occur years after the dust is
inhaled and deposited in the lungs. On the other hand, acute temporary effects of organic and
inorganic dust, as well as mould, welding fumes and motor exhaust, are well suited to study.
This is due to the fact that the irritative effect of these dusts will occur after a few hours of
exposure. Acute or chronic lung function effects also may be discernible in cases of exposure
to concentrations of irritating gases (nitrogen dioxide, aldehydes, acids and acid chlorides) in
the vicinity of well documented exposure limit values, especially if the effect is potentiated by
particulate air contamination.

Lung function measurements have to be safe for the examined subjects, and the lung function
equipment has to be safe for the examiner. A summary of the specific requirements for
different kinds of lung function equipment are available (e.g., Quanjer et al. 1993). Of course,
the equipment must be calibrated according to independent standards. This may be difficult to
achieve, especially when computerized equipment is being used. The result of the lung
function test is dependent on both the subject and the examiner. To provide satisfactory
results from the examination, technicians have to be well trained, and able to instruct the
subject carefully and also encourage the subject to carry out the test properly. The examiner
should also have knowledge about the airways and lungs in order to interpret the results from
the recordings correctly.

It is recommended that the methods used have a fairly high reproducibility both between and
within subjects. Reproducibility may be measured as the coefficient of variation, that is, the
standard deviation multiplied by 100 divided by the mean value. Values below 10% in
repeated measurements on the same subject are deemed acceptable.
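
A worked example may be helpful. The sketch below computes the coefficient of variation from
hypothetical repeated FVC readings for a single subject.

    import statistics

    def coefficient_of_variation(values):
        """Coefficient of variation in per cent: 100 * standard deviation / mean."""
        return 100.0 * statistics.stdev(values) / statistics.mean(values)

    fvc_litres = [4.81, 4.92, 4.88, 4.79, 4.95]  # hypothetical repeated measurements
    print(f"CV = {coefficient_of_variation(fvc_litres):.1f} %")  # below 10% is deemed acceptable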

In order to determine if the measured values are pathological or not, they must be compared
with prediction equations. Usually the prediction equations for spirometric variables are based
on age and height, stratified for sex. Men have on the average higher lung function values
than women, of the same age and height. Lung function decreases with age and increases with
height. A tall subject will therefore have higher lung volume than a short subject of the same
age. The outcome from prediction equations may differ considerably between different
reference populations. The variation in age and height in the reference population will also
influence the predicted values. This means, for example, that a prediction equation must not
be used if age and/or height for the examined subject are outside the ranges for the population
that is the basis for the prediction equation.

Smoking will also diminish lung function, and the effect may be potentiated in subjects who
are occupationally exposed to irritating agents. Lung function is usually considered not to be
pathological if the obtained values exceed 80% of the predicted value derived from
a prediction equation.
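
In practice, comparison with a prediction equation reduces to expressing the measured value as a
percentage of the predicted value. The sketch below is illustrative only: the linear coefficients
are invented to show the typical form of such equations (a linear function of height and age,
stratified by sex) and must be replaced by a published reference equation, such as those compiled
by Quanjer et al. (1993), for any real evaluation.

    def predicted_fev1(height_m, age_yr, male=True):
        """Illustrative linear prediction equation for FEV1 (litres).

        The coefficients are invented for demonstration only; use a published
        reference equation appropriate to the population being examined.
        """
        if male:
            return 4.0 * height_m - 0.03 * age_yr - 2.0
        return 3.5 * height_m - 0.025 * age_yr - 1.8

    def percent_predicted(measured, predicted):
        return 100.0 * measured / predicted

    predicted = predicted_fev1(height_m=1.78, age_yr=45, male=True)
    print(round(percent_predicted(3.1, predicted)), "% of predicted")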

Measurements

Lung function measurements are carried out to judge the condition of the lungs.
Measurements may either concern single or multiple measured lung volumes, or the dynamic
properties in the airways and lungs. The latter is usually determined through effort-dependent
manoeuvres. The conditions in the lungs may also be examined with regard to their
physiological function, that is, diffusion capacity, airway resistance and compliance (see
below).

Measurements concerning ventilatory capacity are obtained by spirometry. The breathing
manoeuvre is usually performed as a maximal inspiration followed by a maximal expiration,
vital capacity (VC, measured in litres). At least three technically satisfactory recordings (i.e.,
full inspiration and expiration effort and no observed leaks) should be done, and the highest
value reported. The volume may be directly measured by a water-sealed or a low-resistive
bell, or indirectly measured by pneumotachography (i.e., integration of a flow signal over
time). It is important here to note that all measured lung volumes should be expressed in
BTPS, that is, body temperature and ambient pressure saturated with water vapour.

Forced expired vital capacity (FVC, in litres) is defined as a VC measurement performed with
a maximally forced expiratory effort. Due to the simplicity of the test and the relatively
inexpensive equipment, the forced expirogram has become a useful test in the monitoring of
lung function. However, this has resulted in many poor recordings, of which the practical
value is debatable. In order to carry out satisfactory recordings, the updated guideline for the
collection and use of the forced expirogram, published by the American Thoracic Society in
1987, may be useful.

Instantaneous flows may be measured on flow-volume or flow-time curves, while
time-averaged flows or times are derived from the spirogram. Associated variables which can be
calculated from the forced expirogram are forced expired volume in one second (FEV1, in
litres), FEV1 as a percentage of FVC (FEV1%), peak flow (PEF, l/s) and maximal flows at
50% and 75% of forced vital capacity (MEF50 and MEF25, respectively). An illustration of the
derivation of FEV1 from the forced expirogram is outlined in figure 1. In healthy subjects,
maximal flow rates at large lung volumes (i.e., at the beginning of expiration) reflect mainly
the flow characteristics of the large airways while those at small lung volumes (i.e., the end of
expiration) are usually held to reflect the characteristics of the small airways, figure 2. In the
latter the flow is laminar, while in the large airways it may be turbulent.

Figure 1. Forced expiratory spirogram showing the derivation of FEV1 and FVC according to
the extrapolation principle.
Figure 2. Flow-volume curve showing the derivation of peak expiratory flow (PEF) and maximal
flows at 50% and 75% of forced vital capacity (MEF50 and MEF25, respectively).

PEF may also be measured by a small portable device such as the one developed by Wright in
1959. An advantage with this equipment is that the subject may carry out serial
measurements—for example, at the workplace. To get useful recordings, however, it is
necessary to instruct the subjects well. Moreover, one should keep in mind that measurements
of PEF with, for example, a Wright meter and those measured by conventional spirometry
should not be compared due to the different blow techniques.

The spirometric variables VC, FVC and FEV1 show a reasonable variation between
individuals where age, height and sex usually explain 60 to 70% of the variation. Restrictive
lung function disorders will result in lower values for VC, FVC and FEV1. Measurements of
flows during expiration show a great individual variation, since the measured flows are both
effort and time dependent. This means, for example, that a subject will have extremely high
flow in case of diminished lung volume. On the other hand, the flow may be extremely low in
case of very high lung volume. However, the flow is usually decreased in case of a chronic
obstructive disease (e.g., asthma, chronic bronchitis).
Figure 3. A principal outline of the equipment for determination of total lung capacity (TLC)
according to the helium dilution technique.

The residual volume (RV), that is, the volume of air which still remains in the lungs
after a maximal expiration, can be determined by gas dilution or by body plethysmography.
The gas dilution technique requires less complicated equipment and is therefore more
convenient to use in studies carried out at the workplace. In figure 3, the principle for the gas
dilution technique has been outlined. The technique is based on dilution of an indicator gas in
a rebreathing circuit. The indicator gas must be sparingly soluble in biological tissues so that
it is not taken up by the tissues and blood in the lung. Hydrogen was initially used, but
because of its ability to form explosive mixtures with air it was replaced by helium, which is
easily detected by means of the thermal conductivity principle.

The subject and the apparatus form a closed system, and the initial concentration of the gas is
thus reduced when it is diluted into the gas volume in the lungs. After equilibration, the
concentration of indicator gas is the same in the lungs as in the apparatus, and functional
residual capacity (FRC) can be calculated by means of a simple dilution equation. The volume
of the spirometer (including the addition of the gas mixture into the spirometer) is denoted by
VS, VL is the volume of the lung, Fi is the initial gas concentration and Ff is the final
concentration.

FRC = VL = [(VS · Fi) / Ff] – VS
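
A worked example may make the dilution equation more concrete. The numbers below are
hypothetical: a spirometer and circuit volume of 6 litres, an initial helium concentration of
10% and a final equilibrium concentration of 6.25%.

    def frc_helium_dilution(v_spirometer, f_initial, f_final):
        """Functional residual capacity (litres) from the closed-circuit helium dilution equation."""
        return (v_spirometer * f_initial) / f_final - v_spirometer

    # Hypothetical values: VS = 6 l, Fi = 0.10, Ff = 0.0625
    print(frc_helium_dilution(6.0, 0.10, 0.0625), "litres")  # 3.6 litres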

Two to three VC manoeuvres are carried out to provide a reliable base for the calculation of
TLC (in litres). The subdivisions of the different lung volumes are outlined in figure 4.

Figure 4. Spirogram labelled to show the subdivisions of the total capacity.

Due to change in the elastic properties of the airways, RV and FRC increase with age. In
chronic obstructive diseases, increased values of RV and FRC are usually observed, while VC
is decreased. However, in subjects with badly ventilated lung areas—for example, subjects
with emphysema—the gas dilution technique may underestimate RV, FRC and also TLC.
This is due to the fact that the indicator gas will not communicate with closed-off airways,
and therefore the decrease in the indicator gas concentration will give erroneously small
values.
Figure 5. A principal outline of the recording of airway closure and the slope of the alveolar
plateau (%N2/l).

Measures of airway closure and gas distribution in the lungs can be obtained in one and the
same manoeuvre by the single breath wash-out technique, figure 5. The equipment consists of
a spirometer connected to a bag-in-box system and a recorder for continuous measurements of
nitrogen concentration. The manoeuvre is carried out by means of a maximal inspiration of
pure oxygen from the bag. In the beginning of the expiration, the nitrogen concentration
increases as a result of emptying the subject’s deadspace, containing pure oxygen. The
expiration continues with the air from the airways and alveoli. Finally, air from the alveoli,
containing 20 to 40% nitrogen, is expired. When the expiration from the basal parts of the
lungs increases, the nitrogen concentration will rise abruptly in case of airway closure in
dependent lung regions, figure 5. This volume above RV, at which airways close during an
expiration, is usually expressed as closing volume (CV) in percentage of VC (CV%).
Distribution of the inspired air in the lungs is expressed as the slope of the alveolar plateau
(%N2 or phase III, %N2/l). It is obtained by taking the difference in nitrogen concentration
between the point when 30% of the air is exhaled and the point for airway closure, and
dividing this by the corresponding volume.
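
Expressed numerically, the slope of the alveolar plateau is thus a nitrogen concentration
difference divided by the corresponding expired volume. The readings in the sketch below are
hypothetical values taken from a single-breath wash-out trace.

    def alveolar_plateau_slope(n2_at_30_percent, n2_at_closure, volume_litres):
        """Slope of phase III in %N2 per litre.

        n2_at_30_percent : N2 concentration (%) when 30% of the vital capacity has been exhaled
        n2_at_closure    : N2 concentration (%) at the point of airway closure
        volume_litres    : expired volume (l) between those two points
        """
        return (n2_at_closure - n2_at_30_percent) / volume_litres

    # Hypothetical trace: 26.0 %N2 at the 30% point, 29.5 %N2 at airway closure, 2.5 l apart
    print(round(alveolar_plateau_slope(26.0, 29.5, 2.5), 1), "%N2/l")  # 1.4 %N2/l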

Ageing as well as chronic obstructive disorders will result in increased values for both CV%
and phase III. However, not even healthy subjects have a uniform gas distribution in the
lungs, resulting in slightly elevated values for phase III, that is, 1 to 2% N2/l. The variables
CV% and phase III are considered to reflect the conditions in the peripheral small airways
with an internal diameter about 2 mm. Normally, the peripheral airways contribute to a small
part (10 to 20%) of the total airway resistance. Quite extensive changes may occur in the
peripheral airways, for example as a result of exposure to irritating substances in the air,
without being detectable by conventional lung function tests such as dynamic spirometry.
This suggests that airway obstruction begins in the small airways. Results from studies also
have shown alterations in CV% and phase III before any changes from the dynamic and static
spirometry have occurred. These early changes may go into remission when exposure to
hazardous agents has ceased.

The transfer factor of the lung (mmol/(min·kPa)) is an expression of the diffusion capacity of
oxygen transport into the pulmonary capillaries. The transfer factor can be determined using
single or multiple breath techniques; the single breath technique is considered to be most
suitable in studies at the workplace. Carbon monoxide (CO) is used since the back pressure of
CO is very low in the peripheral blood, in contrast to that of oxygen. The uptake of CO is
assumed to follow an exponential model, and this assumption can be used to determine the
transfer factor for the lung.

Determination of TLCO (transfer factor measured with CO) is carried out by means of a
breathing manoeuvre including a maximal expiration, followed by a maximal inspiration of a
gas mixture containing carbon monoxide, helium, oxygen and nitrogen. After a breath-
holding period, a maximal exhalation is done, reflecting the content of the alveolar air (figure
6). Helium is used for the determination of the alveolar volume (VA). Assuming that the
dilution of CO is the same as for helium, the initial concentration of CO, before the diffusion
has started, can be calculated. TLCO is calculated according to the equation outlined below,
where k depends on the dimensions of the component terms, t is the effective time for breath-
holding and log is base 10 logarithm. Inspired volume is denoted Vi and the fractions F of CO
and helium are denoted by i and a for inspired and alveolar, respectively.

TLCO = k · Vi · (Fi,He / Fa,He) · log[(Fi,CO · Fa,He) / (Fa,CO · Fi,He)] / t
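
Written as a function, the calculation can be sketched as follows. The constant k is kept
symbolic, since it depends on the units chosen for the component terms, and the gas fractions in
the example are hypothetical.

    import math

    def tlco(k, v_inspired, fi_he, fa_he, fi_co, fa_co, t_breath_hold):
        """Single-breath transfer factor according to the equation above.

        v_inspired * (fi_he / fa_he) gives the alveolar volume; the logarithmic term
        describes the exponential disappearance of CO during the breath-hold of length t.
        """
        v_alveolar = v_inspired * (fi_he / fa_he)
        return k * v_alveolar * math.log10((fi_co * fa_he) / (fa_co * fi_he)) / t_breath_hold

    # Hypothetical data: Vi = 4.5 l, He 10% diluted to 7.5%, CO 0.30% falling to 0.12%, t = 10 s
    print(round(tlco(1.0, 4.5, 0.10, 0.075, 0.0030, 0.0012, 10.0), 3))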

Figure 6. A principal outline of the recording of transfer factor

The size of TLCO will depend on a variety of conditions—for example, the amount of
available haemoglobin, the volume of ventilated alveoli and perfused lung capillaries and
their relation to each other. Values for TLCO decrease with age and increase with physical
activity and increased lung volumes. Decreased TLCO will be found in both restrictive and
obstructive lung disorders.

Compliance (l/kPa) is a function, inter alia, of the elastic properties of the lungs. The lungs
have an intrinsic tendency to collapse. The force needed to keep the lungs
stretched depends on the elastic lung tissue, the surface tension in the alveoli and the
bronchial musculature. On the other hand, the chest wall tends to expand at lung volumes up to
1 to 2 litres above the FRC level; at higher lung volumes, force has to be applied to expand the
chest wall further. At the FRC level, the collapsing tendency of the lungs is balanced by the
expanding tendency of the chest wall. The FRC level is therefore termed the resting level of the lung.
The compliance of the lung is defined as the change in volume divided by the change in
transpulmonary pressure, that is, the difference between the pressures in the mouth
(atmospheric) and in the lung, as the result of a breathing manoeuvre. Measurements of the
pressure in the lung are not easily carried out and are therefore replaced by measurements of
the pressure in the oesophagus. The pressure in the oesophagus is almost the same as the
pressure in the lung, and it is measured with a thin polyethylene catheter with a balloon
covering the distal 10 cm. During inspiratory and expiratory manoeuvres, the changes in
volume and pressure are recorded by means of a spirometer and pressure transducer,
respectively. When the measurements are performed during tidal breathing, dynamic
compliance can be measured. Static compliance is obtained when a slow VC manoeuvre is
carried out. In the latter case, the measurements are carried out in a body plethysmograph, and
the expiration is intermittently interrupted by means of a shutter. However, measurements of
compliance are cumbersome to perform when examining exposure effects on lung function at
the worksite, and this technique is considered to be more appropriate in the laboratory.

A decreased compliance (increased elastance, i.e., a stiffer lung) is observed in fibrosis: to
cause a change in volume, large changes in pressure are required. On the other hand, a high
compliance is observed, for example, in emphysema, as the result of the loss of elastic tissue
and therefore also of elastic recoil in the lung.

The resistance in the airways essentially depends on the radius and length of the airways but
also on air viscosity. The airway resistance (RL, in kPa/(l/s)) can be determined by use of a
spirometer, pressure transducer and a pneumotachograph (to measure the flow). The
measurements may also be carried out using a body plethysmograph to record the changes in
flow and pressure during panting manoeuvres. By administration of a drug intended to cause
broncho-constriction, sensitive subjects, as a result of their hyperreactive airways, may be
identified. Subjects with asthma usually have increased values for RL.

Acute and Chronic Effects of Occupational Exposure on Pulmonary Function

Lung function measurement may be used to disclose an occupational exposure effect on the
lungs. Pre-employment examination of lung function should not be used to exclude job-
seeking subjects. This is because the lung function of healthy subjects varies within wide
limits and it is difficult to draw a borderline below which it can safely be stated that the lung
is pathological. Another reason is that the work environment should be good enough to allow
even subjects with slight lung function impairment to work safely.

Chronic effects on the lungs in occupationally exposed subjects may be detected in a number
of ways. The techniques are designed to determine historical effects, however, and are less
suitable to serve as guidelines to prevent lung function impairment. A common study design
is to compare the actual values in exposed subjects with the lung function values obtained in a
reference population without occupational exposure. The reference subjects may be recruited
from the same (or nearby) workplaces or from the same city.

Multivariate analysis has been used in some studies to assess differences between exposed
subjects and matched unexposed referents. Lung function values in exposed subjects may also
be standardized by means of a reference equation based on lung function values in the
unexposed subjects.
Another approach is to study the difference between the lung function values in exposed and
unexposed workers after adjustment for age and height with the use of external reference
values, calculated by means of a prediction equation based on healthy subjects. The reference
population may also be matched to the exposed subjects according to ethnic group, sex, age,
height and smoking habits in order to further control for those influencing factors.

The problem is, however, to decide if a decrease is large enough to be classified as
pathological when external reference values are being used. Although the instruments in the
studies have to be portable and simple, attention must be paid both to the sensitivity of the
chosen method for detecting small anomalies in airways and lungs and the possibility of
combining different methods. There are indications that subjects with respiratory symptoms,
such as exertion dyspnoea, are at a higher risk of having an accelerated decline in lung
function. This means that the presence of respiratory symptoms is important and so should not
be neglected.

The subject may also be followed up by spirometry, for example once a year, over a number of
years, in order to give early warning of developing illness. There are limitations, however,
since this is very time-consuming and lung function may already have deteriorated permanently
by the time the decrease can be observed. This approach must therefore not become an excuse
for delaying measures to reduce harmful concentrations of air pollutants.

Finally, chronic effects on lung function may also be studied by examining the individual
changes in lung function in exposed and unexposed subjects over a number of years. One
advantage of the longitudinal study design is that the intersubject variability is eliminated;
however, the design is considered to be time-consuming and expensive.

Susceptible subjects may also be identified by comparing their lung function with and without
exposure during working shifts. In order to minimize possible effects of diurnal variations,
lung function is measured at the same time of day on one unexposed and one exposed
occasion. The unexposed condition can be obtained, for example, by occasionally moving the
worker to an uncontaminated area or by use of a suitable respirator during a whole shift, or in
some cases by performing lung function measurements in the afternoon of a worker’s day off.

One special concern is that repeated, temporary effects can result in chronic effects. An acute
temporary lung function decrease may not only be a biological exposure indicator but also a
predictor of a chronic lung function decrement. Exposure to air pollutants may result in
discernible acute effects on lung function, although the mean values of the measured air
pollutants are below the hygienic limit values. The question thus arises, whether these effects
really are harmful in the long run. This question is hard to answer directly, especially since
the air pollution in workplaces often has a complex composition and the exposure cannot be
described in terms of mean concentrations of single compounds. The effect of an occupational
exposure is also partly due to the sensitivity of the individual. This means that some subjects
will react sooner or to a larger extent than others. The underlying pathophysiological ground
for an acute, temporary decrease in lung function is not fully understood. The adverse reaction
upon exposure to an irritating air contaminant is, however, an objective measurement, in
contrast to subjective experiences like symptoms of different origin.

The advantage of detecting early changes in airways and lungs caused by hazardous air
pollutants is obvious—the prevailing exposure may be reduced in order to prevent more
severe illnesses. Therefore, an important aim in this respect is to use the measurements of
acute temporary effects on lung function as a sensitive early warning system that can be used
when studying groups of healthy working people.

Monitoring of Irritants

Irritation is one of the most frequent criteria for setting exposure limit values. It is, however,
not certain that compliance with an exposure limit based on irritation will protect against
irritation. It should be considered that an exposure limit for an air contaminant usually
contains at least two parts—a time-weighted average limit (TWAL) and a short-term exposure
limit (STEL), or at least rules for exceeding the time-weighted average limit, “excursion
limits”. In the case of highly irritating substances, such as sulphur dioxide, acrolein and
phosgene, it is important to limit the concentration even during very short periods, and it has
therefore been common practice to fix occupational exposure limit values in the form of
ceiling limits, with a sampling period that is kept as short as the measuring facilities will
allow.
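
The time-weighted average itself is no more than an exposure-time-weighted mean of the measured
concentrations. The sketch below computes an eight-hour TWA from consecutive sampling intervals;
the concentrations and durations are hypothetical, and any unsampled time up to the reference
period is assumed to be exposure-free.

    def time_weighted_average(samples, reference_hours=8.0):
        """Eight-hour TWA from (concentration, duration_in_hours) pairs."""
        exposure = sum(concentration * hours for concentration, hours in samples)
        return exposure / reference_hours

    # Hypothetical shift: 2 h at 1.5 ppm, 4 h at 0.8 ppm and 2 h at 0.2 ppm
    shift = [(1.5, 2.0), (0.8, 4.0), (0.2, 2.0)]
    print(round(time_weighted_average(shift), 2), "ppm as an 8-hour TWA")
    print("highest sampled interval:", max(c for c, _ in shift), "ppm (compare with a STEL or ceiling)")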

Time-weighted average limit values for an eight-hour day combined with rules for excursion
above these values are given for most of the substances in the American Conference of
Governmental Industrial Hygienists (ACGIH) threshold limit value (TLV) list. The TLV list
of 1993-94 contains the following statement concerning excursion limits for exceeding limit
values:

“For the vast majority of substances with a TLV-TWA, there is not enough toxicological data
available to warrant a STEL (short-term exposure limit). Nevertheless, excursions above the
TLV-TWA should be controlled even where the eight-hour TWA is within recommended
limits.”

Exposure measurements of known air contaminants and comparison with well documented
exposure limit values should be carried out on a routine basis. There are, however, many
situations when the determination of compliance with exposure limit values is not enough.
This is the case in the following circumstances (inter alia):

1. when the limit value is too high to safeguard against irritation
2. when the irritant is unknown
3. when the irritant is a complex mixture and there is no suitable indicator known.

As advocated above, the measurement of acute, temporary effects on lung function can be
used in these cases as a warning against over-exposure to irritants.

In cases (2) and (3), acute, temporary effects on lung function may be applicable also in
testing the efficiency of control measures to decrease exposure to air contamination or in
scientific investigations, for example, in attributing biological effects to components of air
contaminants. A number of examples follow in which acute, temporary lung function effects
have been successfully employed in occupational health investigations.
Studies of Acute, Temporary Lung Function Effects

Work-related, temporary decrease of lung function over a work shift was recorded in cotton
workers at the end of the 1950s. Later, several authors reported work-related, acute, temporary
changes of lung function in hemp and textile workers, coal miners, workers exposed to
toluene di-isocyanate, fire-fighters, rubber processing workers, moulders and coremakers,
welders, ski waxers, workers exposed to organic dust and irritants in water-based paints.

However, there are also several examples where measurements before and after exposure,
usually during a shift, have failed to demonstrate any acute effects, despite a high exposure.
This is probably due to the effect of normal circadian variation, mainly in lung function
variables depending on the size of airway calibre. Thus the temporary decrease in these
variables must exceed the normal circadian variation to be recognized. The problem may be
circumvented, however, by measuring lung function at the same time of the day at each study
occasion. By using the exposed employee as his or her own control, the interindividual
variation is further decreased. Welders were studied in this way, and although the mean
difference between unexposed and exposed FVC values was less than 3% in 15 examined
welders, this difference was significant at the 95% confidence level with a power of more
than 99%.
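
Statistically, such cross-shift comparisons are paired by design, since each worker acts as his
or her own control. A minimal sketch of one possible analysis is given below; the FVC values are
invented and a conventional paired t-test is used, although other paired methods may of course be
preferred.

    from scipy import stats

    # Hypothetical FVC values (litres) for the same workers on an unexposed and an exposed day
    unexposed = [4.8, 5.1, 4.4, 5.6, 4.9, 5.2, 4.7, 5.0]
    exposed = [4.7, 5.0, 4.3, 5.4, 4.8, 5.1, 4.6, 4.9]

    t_statistic, p_value = stats.ttest_rel(unexposed, exposed)
    mean_fall = sum(u - e for u, e in zip(unexposed, exposed)) / len(exposed)
    print(f"mean cross-shift fall = {mean_fall:.2f} l, p = {p_value:.3f}")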

The reversible transient effects on the lungs can be used as an exposure indicator of
complicated irritating components. In the study cited above, particles in the work environment
were crucial for the irritating effects on the airways and lungs. The particles were removed by
a respirator consisting of a filter combined with a welding helmet. The results indicated that
the effects on the lungs were caused by the particles in welding fumes, and that the use of a
particulate respirator might prevent this effect.

Exposure to diesel exhaust also gives measurable irritative effects on the lungs, shown as an
acute, temporary lung function decrease. Mechanical filters mounted on the exhaust pipes of
trucks used in loading operations by stevedores relieved subjective disorders and reduced the
acute, temporary lung function decrease observed when no filtration was done. The results
thus indicate that the presence of particles in the work environment does play a role in the
irritative effect on airways and lungs, and that it is possible to assess the effect by
measurements of acute changes in lung function.

A multiplicity of exposures and a continually changing work environment may present
difficulties in discerning the causal relationship of the different agents existing in a work
environment. The exposure scenario in sawmills is an illuminating example. It is not possible
(e.g., for economic reasons) to carry out exposure measurements of all possible agents
(terpenes, dust, mould, bacteria, endotoxin, mycotoxins, etc.) in this work environment. A
feasible method may be to follow the development of lung function longitudinally. In a study
of sawmill workers in the wood-trimming department, lung function was examined before and
after a working week, and no statistically significant decrease was found. However, a follow-
up study carried out a few years later disclosed that those workers who actually had a
numerical decrease in lung function during a working week also had an accelerated long-term
decline in lung function. This may indicate that vulnerable subjects can be detected by
measuring changes in lung function during a working week.
Diseases Caused by Respiratory Irritants and Toxic Chemicals

Written by ILO Content Manager

The presence of respiratory irritants in the workplace can be unpleasant and distracting,
leading to poor morale and decreased productivity. Certain exposures are dangerous, even
lethal. In either extreme, the problem of respiratory irritants and inhaled toxic chemicals is
common; many workers face a daily threat of exposure. These compounds cause harm by a
variety of different mechanisms, and the extent of injury can vary widely, depending on the
degree of exposure and on the biochemical properties of the inhalant. However, they all have
the characteristic of nonspecificity; that is, above a certain level of exposure virtually all
persons experience a threat to their health.

There are other inhaled substances that cause only susceptible individuals to develop
respiratory problems; such complaints are most appropriately approached as diseases of
allergic and immunological origin. Certain compounds, such as isocyanates, acid anhydrides
and epoxy resins, can act not only as non-specific irritants in high concentrations, but can also
predispose certain subjects to allergic sensitization. These compounds provoke respiratory
symptoms in sensitized individuals at very low concentrations.

Respiratory irritants include substances that cause inflammation of the airways after they are
inhaled. Damage may occur in the upper and lower airways. More dangerous is acute
inflammation of the pulmonary parenchyma, as in chemical pneumonitis or non-cardiogenic
pulmonary oedema. Compounds that can cause parenchymal damage are considered toxic
chemicals. Many inhaled toxic chemicals also act as respiratory irritants, warning us of their
danger with their noxious odour and symptoms of nose and throat irritation and cough. Most
respiratory irritants are also toxic to the lung parenchyma if inhaled in sufficient amount.

Many inhaled substances have systemic toxic effects after being absorbed by inhalation.
Inflammatory effects on the lung may be absent, as in the case of lead, carbon monoxide or
hydrogen cyanide. Minimal lung inflammation is normally seen in the inhalation fevers (e.g.,
organic dust toxic syndrome, metal fume fever and polymer fume fever). Severe lung and
distal organ damage occurs with significant exposure to toxins such as cadmium and mercury.

The physical properties of inhaled substances predict the site of deposition; irritants will
produce symptoms at these sites. Large particles (10 to 20 μm) deposit in the nose and upper
airways, smaller particles (5 to 10 μm) deposit in the trachea and bronchi, and particles less
than 5 μm in size may reach the alveoli. Particles less than 0.5 μm are so small they behave
like gases. Toxic gases deposit according to their solubility. A water-soluble gas will be
adsorbed by the moist mucosa of the upper airway; less soluble gases will deposit more
randomly throughout the respiratory tract.
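The size bands quoted above can be read as a simple look-up rule. The following sketch (Python; the function name and the examples are ours, the size boundaries are those given in the text) illustrates it.

# Minimal sketch of the deposition rule of thumb for inhaled particles.
def likely_deposition_site(diameter_micrometres):
    if diameter_micrometres < 0.5:
        return "so small it behaves essentially like a gas"
    if diameter_micrometres < 5:
        return "may reach the alveoli"
    if diameter_micrometres <= 10:
        return "trachea and bronchi"
    return "nose and upper airways"

for d in (0.3, 2, 7, 15):
    print(f"{d} micrometre particle: {likely_deposition_site(d)}")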

Respiratory Irritants

Respiratory irritants cause non-specific inflammation of the lung after being inhaled. These
substances, their sources of exposure, physical and other properties, and effects on the victim
are outlined in Table 1. Irritant gases tend to be more water soluble than gases more toxic to
the lung parenchyma. Toxic fumes are more dangerous when they have a high irritant
threshold; that is, there is little warning that the fume is being inhaled because there is little
irritation.
Table 1. Summary of respiratory irritants

Chemical | Sources of exposure | Important properties | Injury produced | Dangerous exposure level under 15 min (ppm unless otherwise stated)

Acetaldehyde | Plastics, synthetic rubber industry, combustion products | High vapour pressure; high water solubility | Upper airway injury; rarely causes delayed pulmonary oedema | -
Acetic acid, organic acids | Chemical industry, electronics, combustion products | Water soluble | Ocular and upper airway injury | -
Acid anhydrides | Chemicals, paints and plastics industries; components of epoxy resins | Water soluble, highly reactive, may cause allergic sensitization | Ocular and upper airway injury, bronchospasm; pulmonary haemorrhage after massive exposure | -
Acrolein | Plastics, textiles, pharmaceutical manufacturing, combustion products | High vapour pressure, intermediate water solubility, extremely irritating | Diffuse airway and parenchymal injury | -
Ammonia | Fertilizers, animal feeds, chemicals and pharmaceuticals manufacturing | Alkaline gas, very high water solubility | Primarily ocular and upper airway burn; massive exposure may cause bronchiectasis | 500
Antimony trichloride, antimony pentachloride | Alloys, organic catalysts | Poorly soluble, injury likely due to halide ion | Pneumonitis, non-cardiogenic pulmonary oedema | -
Beryllium | Alloys (with copper), ceramics; electronics, aerospace and nuclear reactor equipment | Irritant metal, also acts as an antigen to promote a long-term granulomatous response | Acute upper airway injury, tracheobronchitis, chemical pneumonitis | 25 μg/m3
Boranes (diborane) | Aircraft fuel, fungicide manufacturing | Water soluble gas | Upper airway injury, pneumonitis with massive exposure | -
Hydrogen bromide | Petroleum refining | - | Upper airway injury, pneumonitis with massive exposure | -
Methyl bromide | Refrigeration, produce fumigation | Moderately soluble gas | Upper and lower airway injury, pneumonitis, CNS depression and seizures | -
Cadmium | Alloys with Zn and Pb, electroplating, batteries, insecticides | Acute and chronic respiratory effects | Tracheobronchitis, pulmonary oedema (often delayed onset over 24–48 hours); chronic low-level exposure leads to inflammatory changes and emphysema | 100
Calcium oxide, calcium hydroxide | Lime, photography, tanning, insecticides | Moderately caustic, very high doses required for toxicity | Upper and lower airway inflammation, pneumonitis | -
Chlorine | Bleaching, formation of chlorinated compounds, household cleaners | Intermediate water solubility | Upper and lower airway inflammation, pneumonitis and non-cardiogenic pulmonary oedema | 5–10
Chloroacetophenone | Crowd control agent, “tear gas” | Irritant qualities are used to incapacitate; alkylating agent | Ocular and upper airway inflammation, lower airway and parenchymal injury with massive exposure | 1–10
o-Chlorobenzomalononitrile | Crowd control agent, “tear gas” | Irritant qualities are used to incapacitate | Ocular and upper airway inflammation, lower airway injury with massive exposure | -
Chloromethyl ethers | Solvents, used in manufacture of other organic compounds | - | Upper and lower airway irritation; also a respiratory tract carcinogen | -
Chloropicrin | Chemical manufacturing, fumigant component | Former First World War gas | Upper and lower airway inflammation | 15
Chromic acid (Cr(VI)) | Welding, plating | Water soluble irritant, allergic sensitizer | Nasal inflammation and ulceration, rhinitis, pneumonitis with massive exposure | -
Cobalt | High temperature alloys, permanent magnets, hard metal tools (with tungsten carbide) | Non-specific irritant, also allergic sensitizer | Acute bronchospasm and/or pneumonitis; chronic exposure can cause lung fibrosis | -
Formaldehyde | Manufacture of foam insulation, plywood, textiles, paper, fertilizers, resins; embalming agents; combustion products | Highly water soluble, rapidly metabolized; primarily acts via sensory nerve stimulation; sensitization reported | Ocular and upper airway irritation; bronchospasm in severe exposure; contact dermatitis in sensitized persons | 3
Hydrochloric acid | Metal refining, rubber manufacturing, organic compound manufacture, photographic materials | Highly water soluble | Ocular and upper airway inflammation, lower airway inflammation only with massive exposure | 100
Hydrofluoric acid | Chemical catalyst, pesticides, bleaching, welding, etching | Highly water soluble, powerful and rapid oxidant, lowers serum calcium in massive exposure | Ocular and upper airway inflammation, tracheobronchitis and pneumonitis with massive exposure | 20
Isocyanates | Polyurethane production; paints; herbicide and insecticide products; laminating, furniture, enamelling, resin work | Low molecular weight organic compounds, irritants, cause sensitization in susceptible persons | Ocular, upper and lower airway inflammation; asthma, hypersensitivity pneumonitis in sensitized persons | 0.1
Lithium hydride | Alloys, ceramics, electronics, chemical catalysts | Low solubility, highly reactive | Pneumonitis, non-cardiogenic pulmonary oedema | -
Mercury | Electrolysis, ore and amalgam extraction, electronics manufacture | No respiratory symptoms with low-level, chronic exposure | Ocular and respiratory tract inflammation, pneumonitis, CNS, kidney and systemic effects | 1.1 mg/m3
Nickel carbonyl | Nickel refining, electroplating, chemical reagents | Potent toxin | Lower respiratory irritation, pneumonitis, delayed systemic toxic effects | 8 μg/m3
Nitrogen dioxide | Silos after new grain storage, fertilizer making, arc welding, combustion products | Low water solubility, brown gas at high concentration | Ocular and upper airway inflammation, non-cardiogenic pulmonary oedema, delayed onset bronchiolitis | 50
Nitrogen mustards; sulphur mustards | Military gases | Cause severe injury, vesicant properties | Ocular, upper and lower airway inflammation, pneumonitis | 20 mg/m3 (N); 1 mg/m3 (S)
Osmium tetroxide | Copper refining, alloy with iridium, catalyst for steroid synthesis and ammonia formation | Metallic osmium is inert; the tetroxide forms when heated in air | Severe ocular and upper airway irritation; transient renal damage | 1 mg/m3
Ozone | Arc welding, copy machines, paper bleaching | Sweet smelling gas, moderate water solubility | Upper and lower airway inflammation; asthmatics more susceptible | 1
Phosgene | Pesticide and other chemical manufacture, arc welding, paint removal | Poorly water soluble, does not irritate airways in low doses | Upper airway inflammation and pneumonitis; delayed pulmonary oedema in low doses | 2
Phosphoric sulphides | Production of insecticides, ignition compounds, matches | - | Ocular and upper airway inflammation | -
Phosphoric chlorides | Manufacture of chlorinated organic compounds, dyes, gasoline additives | Form phosphoric acid and hydrochloric acid on contact with mucosal surfaces | Ocular and upper airway inflammation | 10 mg/m3
Selenium dioxide | Copper or nickel smelting, heating of selenium alloys | Strong vesicant, forms selenious acid (H2SeO3) on mucosal surfaces | Ocular and upper airway inflammation, pulmonary oedema in massive exposure | -
Hydrogen selenide | Copper refining, sulphuric acid production | Water soluble; exposure to selenium compounds gives rise to garlic odour breath | Ocular and upper airway inflammation, delayed pulmonary oedema | -
Styrene | Manufacture of polystyrene and resins, polymers | Highly irritating | Ocular, upper and lower airway inflammation, neurological impairments | 600
Sulphur dioxide | Petroleum refining, pulp mills, refrigeration plants, manufacture of sodium sulphite | Highly water soluble gas | Upper airway inflammation, bronchoconstriction, pneumonitis on massive exposure | 100
Titanium tetrachloride | Dyes, pigments, sky writing | Chloride ions form HCl on mucosa | Upper airway injury | -
Uranium hexafluoride | Metal coat removers, floor sealants, spray paints | Toxicity likely from fluoride ions | Upper and lower airway injury, bronchospasm, pneumonitis | -
Vanadium pentoxide | Cleaning oil tanks, metallurgy | - | Ocular, upper and lower airway symptoms | 70
Zinc chloride | Smoke grenades, artillery | More severe than zinc oxide exposure | Upper and lower airway irritation, fever, delayed onset pneumonitis | 200
Zirconium tetrachloride | Pigments, catalysts | Chloride ion toxicity | Upper and lower airway irritation, pneumonitis | -

This condition is thought to result from persistent inflammation with reduction of epithelial cell layer permeability or reduced conductance threshold for subepithelial nerve endings.
Adapted from Sheppard 1988; Graham 1994; Rom 1992; Blanc and Schwartz 1994; Nemery 1990; Skornik 1988.
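Most of the dangerous exposure levels in table 1 are quoted in ppm, a few in mg/m3 or μg/m3. For gases and vapours the two kinds of unit can be interconverted using the molar volume of air (roughly 24.45 litres per mole at 25°C and 1 atmosphere). The short sketch below (Python) illustrates the arithmetic; the choice of chlorine as the example and the function name are ours.

# Convert a gas concentration from ppm to mg/m3 using the ideal-gas molar volume.
def ppm_to_mg_per_m3(ppm, molecular_weight_g_per_mol, molar_volume_litres=24.45):
    return ppm * molecular_weight_g_per_mol / molar_volume_litres

# Chlorine (Cl2, about 70.9 g/mol): the 5-10 ppm range in table 1 corresponds to
low = ppm_to_mg_per_m3(5, 70.9)    # about 14.5 mg/m3
high = ppm_to_mg_per_m3(10, 70.9)  # about 29 mg/m3
print(f"5-10 ppm chlorine is roughly {low:.0f}-{high:.0f} mg/m3")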

The nature and extent of the reaction to an irritant depends on the physical properties of the
gas or aerosol, the concentration and time of exposure, and on other variables as well, such as
temperature, humidity and the presence of pathogens or other gases (Man and Hulbert 1988).
Host factors such as age (Cabral-Anderson, Evans and Freeman 1977; Evans, Cabral-
Anderson and Freeman 1977), prior exposure (Tyler, Tyler and Last 1988), level of
antioxidants (McMillan and Boyd 1982) and presence of infection may play a role in
determining the pathological changes seen. This wide range of factors has made it difficult to
study the pathogenic effects of respiratory irritants in a systematic way.

The best understood irritants are those which inflict oxidative injury. The majority of inhaled
irritants, including the major pollutants, act by oxidation or give rise to compounds that act in
this way. Most metal fumes are actually oxides of the heated metal; these oxides cause
oxidative injury. Oxidants damage cells primarily by lipid peroxidation, and there may be
other mechanisms. On a cellular level, there is initially a fairly specific loss of ciliated cells of
the airway epithelium and of Type I alveolar epithelial cells, with subsequent disruption of the
tight junction interface between epithelial cells (Man and Hulbert 1988; Gordon, Salano and
Kleinerman 1986; Stephens et al. 1974). This leads to subepithelial and submucosal damage,
with stimulation of smooth muscle and parasympathetic sensory afferent nerve endings
causing bronchoconstriction (Holgate, Beasley and Twentyman 1987; Boucher 1981). An
inflammatory response follows (Hogg 1981), and the neutrophils and eosinophils release
mediators that cause further oxidative injury (Castleman et al. 1980). Type II pneumocytes
and cuboidal cells act as stem cells for repair (Keenan, Combs and McDowell 1982; Keenan,
Wilson and McDowell 1983).

Other mechanisms of lung injury eventually involve the oxidative pathway of cellular
damage, particularly after damage to the protective epithelial cell layer has occurred and an
inflammatory response has been elicited. The most commonly described mechanisms are
outlined in table 2.

Table 2. Mechanisms of lung injury by inhaled substances

Mechanism of injury | Example compounds | Damage that occurs
Oxidation | Ozone, nitrogen dioxide, sulphur dioxide, chlorine, oxides | Patchy airway epithelial damage, with increased permeability and exposure of nerve fibre endings; loss of cilia from ciliated cells; necrosis of type I pneumocytes; free radical formation and subsequent protein binding and lipid peroxidation
Acid formation | Sulphur dioxide, chlorine, halides | Gas dissolves in water to form acid that damages epithelial cells via oxidation; action mainly on upper airway
Alkali formation | Ammonia, calcium oxide, hydroxides | Gas dissolves in water to form alkaline solution that may cause tissue liquefaction; predominant upper airway damage, lower airway damage in heavy exposures
Protein binding | Formaldehyde | Reactions with amino acids lead to toxic intermediates with damage to the epithelial cell layer
Afferent nerve stimulation | Ammonia, formaldehyde | Direct nerve ending stimulation provokes symptoms
Antigenicity | Platinum, acid anhydrides | Low molecular weight molecules serve as haptens in sensitized persons
Stimulation of host inflammatory response | Copper and zinc oxides, lipoproteins | Stimulation of cytokines and inflammatory mediators without apparent direct cellular damage
Free radical formation | Paraquat | Promotion of formation or retardation of clearance of superoxide radicals, leading to lipid peroxidation and oxidative damage
Delayed particle clearance | Any prolonged inhalation of mineral dust | Overwhelming of mucociliary escalators and alveolar macrophage systems with particles, leading to a non-specific inflammatory response

Workers exposed to low levels of respiratory irritants may have subclinical symptoms
traceable to mucous membrane irritation, such as watery eyes, sore throat, runny nose and
cough. With significant exposure, the added feeling of shortness of breath will often prompt
medical attention. It is important to secure a good medical history in order to determine the
likely composition of the exposure, the quantity of exposure, and the period of time during
which the exposure took place. Signs of laryngeal oedema, including hoarseness and stridor,
should be sought, and the lungs should be examined for signs of lower airway or parenchymal
involvement. Assessment of the airway and lung function, together with chest radiography,
are important in short-term management. Laryngoscopy may be indicated to evaluate the
airway.

If the airway is threatened, the patient should undergo intubation and supportive care. Patients
with signs of laryngeal oedema should be observed for at least 12 hours to ensure that the
process is self-limited. Bronchospasm should be treated with beta-agonists and, if refractory,
intravenous corticosteroids. Irritated oral and ocular mucosa should be thoroughly irrigated.
Patients with crackles on examination or chest radiograph abnormalities should be
hospitalized for observation in view of the possibility of pneumonitis or pulmonary oedema.
Such patients are at risk of bacterial superinfection; nevertheless, no benefit has been
demonstrated by using prophylactic antibiotics.

The overwhelming majority of patients who survive the initial insult recover fully from
irritant exposures. Long-term sequelae are more likely after greater initial
injury. The term reactive airway dysfunction syndrome (RADS) has been applied to the
persistence of asthma-like symptoms following acute exposure to respiratory irritants
(Brooks, Weiss and Bernstein 1985).
High-level exposures to alkalis and acids can cause upper and lower respiratory tract burns
that lead to chronic disease. Ammonia is known to cause bronchiectasis (Kass et al. 1972);
chlorine gas (which becomes HCl in the mucosa) is reported to cause obstructive lung disease
(Donelly and Fitzgerald 1990; Das and Blanc 1993). Chronic low-level exposures to irritants
may cause continued ocular and upper airway symptoms (Korn, Dockery and Speizer 1987),
but deterioration of lung function has not been conclusively documented. Studies of the
effects of chronic low-level irritants on airway function are hampered by a lack of long-term
follow-up, confounding by cigarette smoking, the “healthy worker effect,” and the minimal, if
any, actual clinical effect (Brooks and Kalica 1987).

After a patient recovers from the initial injury, regular follow-up by a physician is needed.
Clearly, there should be an effort to investigate the workplace and evaluate respiratory
precautions, ventilation and containment of the culprit irritants.

Toxic Chemicals

Chemicals toxic to the lung include most of the respiratory irritants, given sufficiently high
exposure, but there are many chemicals that cause significant parenchymal lung injury despite
possessing low to moderate irritant properties. These compounds exert their effects by the
mechanisms reviewed in table 2 and discussed above. Pulmonary toxins tend to be less water
soluble than upper airway irritants. Examples of lung toxins and their sources of exposure are
reviewed in table 3.

Table 3. Compounds capable of lung toxicity after low to moderate exposure

Compound | Sources of exposure | Toxicity
Acrolein | Plastics, textiles, pharmaceutical manufacturing, combustion products | Diffuse airway and parenchymal injury
Antimony trichloride; antimony pentachloride | Alloys, organic catalysts | Pneumonitis, non-cardiogenic pulmonary oedema
Cadmium | Alloys with zinc and lead, electroplating, batteries, insecticides | Tracheobronchitis, pulmonary oedema (often delayed onset over 24–48 hours), kidney damage: tubular proteinuria
Chloropicrin | Chemical manufacturing, fumigant components | Upper and lower airway inflammation
Chlorine | Bleaching, formation of chlorinated compounds, household cleaners | Upper and lower airway inflammation, pneumonitis and non-cardiogenic pulmonary oedema
Hydrogen sulphide | Natural gas wells, mines, manure | Ocular, upper and lower airway irritation, delayed pulmonary oedema, asphyxiation from systemic tissue hypoxia
Lithium hydride | Alloys, ceramics, electronics, chemical catalysts | Pneumonitis, non-cardiogenic pulmonary oedema
Methyl isocyanate | Pesticide synthesis | Upper and lower respiratory tract irritation, pulmonary oedema
Mercury | Electrolysis, ore and amalgam extraction, electronics manufacture | Ocular and respiratory tract inflammation, pneumonitis, CNS, kidney and systemic effects
Nickel carbonyl | Nickel refining, electroplating, chemical reagents | Lower respiratory irritation, pneumonitis, delayed systemic toxic effects
Nitrogen dioxide | Silos after new grain storage, fertilizer making, arc welding, combustion products | Ocular and upper airway inflammation, non-cardiogenic pulmonary oedema, delayed onset bronchiolitis
Nitrogen mustards, sulphur mustards | Military agents, vesicants | Ocular and respiratory tract inflammation, pneumonitis
Paraquat | Herbicides (ingested) | Selective damage to type 2 pneumocytes leading to ARDS, pulmonary fibrosis; renal failure, GI irritation
Phosgene | Pesticide and other chemical manufacture, arc welding, paint removal | Upper airway inflammation and pneumonitis; delayed pulmonary oedema in low doses
Zinc chloride | Smoke grenades, artillery | Upper and lower airway irritation, fever, delayed onset pneumonitis

One group of inhalable toxins is termed asphyxiants. When present in high enough
concentrations, the simple asphyxiants (carbon dioxide, methane and nitrogen) displace oxygen
and in effect suffocate the victim. Hydrogen cyanide, carbon monoxide and hydrogen sulphide
act as chemical asphyxiants, inhibiting cellular respiration despite adequate delivery of oxygen
to the lung. Non-asphyxiant inhaled toxins damage target organs, causing a wide variety of
health problems and mortality.

The medical management of inhaled lung toxins is similar to the management of respiratory
irritants. These toxins often do not elicit their peak clinical effect for several hours after
exposure; overnight monitoring may be indicated for compounds known to cause delayed
onset pulmonary oedema. Since the therapy of systemic toxins is beyond the scope of this
chapter, the reader is referred to discussions of the individual toxins elsewhere in this
Encyclopaedia and in further texts on the subject (Goldfrank et al. 1990; Ellenhorn and
Barceloux 1988).

Inhalation Fevers

Certain inhalation exposures occurring in a variety of different occupational settings may result in debilitating flu-like illnesses lasting a few hours. These are collectively referred to as
inhalation fevers. Despite the severity of the symptoms, the toxicity seems to be self-limited
in most cases, and there are few data to suggest long-term sequelae. Massive exposure to
inciting compounds can cause a more severe reaction involving pneumonitis and pulmonary
oedema; these uncommon cases are considered more complicated than simple inhalation
fever.

The inhalation fevers have in common the feature of nonspecificity: the syndrome can be
produced in nearly anyone, given adequate exposure to the inciting agent. Sensitization is not
required, and no previous exposure is necessary. Some of the syndromes exhibit the
phenomenon of tolerance; that is, with regular repeated exposure the symptoms do not occur.
This effect is thought to be related to an increased activity of clearance mechanisms, but has
not been adequately studied.

Organic Dust Toxic Syndrome

Organic dust toxic syndrome (ODTS) is a broad term denoting the self-limited flu-like
symptoms that occur following heavy exposure to organic dusts. The syndrome encompasses
a wide range of acute febrile illnesses that have names derived from the specific tasks that
lead to dust exposure. Symptoms occur only after a massive exposure to organic dust, and
most individuals so exposed will develop the syndrome.

Organic dust toxic syndrome has previously been called pulmonary mycotoxicosis, owing to
its putative aetiology in the action of mould spores and actinomycetes. With some patients,
one can culture species of Aspergillus, Penicillium, and mesophilic and thermophilic
actinomycetes (Emmanuel, Marx and Ault 1975; Emmanuel, Marx and Ault 1989). More
recently, bacterial endotoxins have been proposed to play at least as large a role. The
syndrome has been provoked experimentally by inhalation of endotoxin derived from
Enterobacter agglomerans, a major component of organic dust (Rylander, Bake and Fischer
1989). Endotoxin levels have been measured in the farm environment, with levels ranging
from 0.01 to 100 μg/m3. Many samples had a level greater than 0.2 μg/m3, which is the level
where clinical effects are known to occur (May, Stallones and Darrow 1989). There is
speculation that cytokines, such as IL-1, may mediate the systemic effects, given what is
already known about the release of IL-1 from alveolar macrophages in the presence of
endotoxin (Richerson 1990). Allergic mechanisms are unlikely given the lack of need for
sensitization and the requirement for high dust exposure.

Clinically, the patient will usually present symptoms 2 to 8 hours after exposure to (usually
mouldy) grain, hay, cotton, flax, hemp or wood chips, or upon manipulation of pigs (Do Pico
1992). Often symptoms begin with eye and mucous membrane irritation and dry cough,
progressing to fever, malaise, chest tightness, myalgias and headache. The patient appears
ill but otherwise normal upon physical examination. Leukocytosis frequently occurs, with
levels as high as 25,000 white blood corpuscles (WBC)/mm3. The chest radiograph is almost
always normal. Spirometry may reveal a modest obstructive defect. In cases where fibre optic
bronchoscopy was performed and bronchial washings were obtained, an elevation of
leukocytes was found in the lavage fluid. The percentage of neutrophils was significantly
higher than normal (Emmanuel, Marx and Ault 1989; Lecours, Laviolette and Cormier 1986).
Bronchoscopy 1 to 4 weeks after the event shows a persistently high cellularity,
predominantly lymphocytes.

Depending on the nature of the exposure, the differential diagnosis may include toxic gas
(such as nitrogen dioxide or ammonia) exposure, particularly if the episode occurred in a silo.
Hypersensitivity pneumonitis should be considered, especially if there are significant chest
radiograph or pulmonary function test abnormalities. The distinction between hypersensitivity
pneumonitis (HP) and ODTS is important: HP will require strict exposure avoidance and has
a worse prognosis, whereas ODTS has a benign and self-limited course. ODTS is also
distinguished from HP because it occurs more frequently, requires higher levels of dust
exposure, does not induce the release of serum precipitating antibodies, and (initially) does
not give rise to the lymphocytic alveolitis that is characteristic of HP.

Cases are managed with antipyretics. A role for steroids has not been advocated given the
self-limited nature of the illness. Patients should be educated about massive exposure
avoidance. The long-term effect of repeated occurrences is thought to be negligible; however,
this question has not been adequately studied.

Metal Fume Fever

Metal fume fever (MFF) is another self-limited, flu-like illness that develops after inhalation
exposure, in this instance to metal fumes. The syndrome most commonly develops after zinc
oxide inhalation, as occurs in brass foundries, and in smelting or welding galvanized metal.
Oxides of copper and iron also cause MFF, and vapours of aluminium, arsenic, cadmium,
mercury, cobalt, chromium, silver, manganese, selenium and tin have been occasionally
implicated (Rose 1992). Workers develop tachyphylaxis; that is, symptoms appear only when
the exposure occurs after several days without exposure, not when there are regular repeated
exposures. An eight-hour TLV of 5 mg/m3 for zinc oxide has been established by the US
Occupational Safety and Health Administration (OSHA), but symptoms have been elicited
experimentally after a two-hour exposure at this concentration (Gordon et al. 1992).
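The observation above can be restated as a simple time-weighted-average (TWA) calculation: two hours at the 5 mg/m3 zinc oxide limit averages to well below the eight-hour TLV, yet it was sufficient to provoke symptoms experimentally. The figures in the sketch below (Python) are illustrative only.

# Eight-hour time-weighted average from (concentration in mg/m3, duration in hours) pairs.
def eight_hour_twa(exposure_segments):
    return sum(concentration * hours for concentration, hours in exposure_segments) / 8.0

segments = [(5.0, 2.0), (0.0, 6.0)]  # 2 h at 5 mg/m3 zinc oxide, remainder of the shift clean
print(f"8-hour TWA: {eight_hour_twa(segments):.2f} mg/m3 (TLV: 5 mg/m3)")  # prints 1.25 mg/m3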

The pathogenesis of MFF remains unclear. The reproducible onset of symptoms regardless of
the individual exposed argues against a specific immune or allergic sensitization. The lack of
symptoms associated with histamine release (flushing, itching, wheezing, hives) also militates
against the likelihood of an allergic mechanism. Paul Blanc and co-workers have developed a
model implicating cytokine release (Blanc et al. 1991; Blanc et al.1993). They measured the
levels of tumour necrosis factor (TNF), and of the interleukins IL-1, IL-4, IL-6 and IL-8 in the
fluid lavaged from the lungs of 23 volunteers experimentally exposed to zinc oxide fumes
(Blanc et al. 1993). The volunteers developed elevated levels of TNF in their bronchoalveolar
lavage (BAL) fluid 3 hours after exposure. Twenty hours later, high BAL fluid levels of IL-8
(a potent neutrophil attractant) and an impressive neutrophilic alveolitis were observed. TNF,
a cytokine capable of causing fever and stimulating immune cells, has been shown to be
released from monocytes in culture that are exposed to zinc (Scuderi 1990). Accordingly, the
presence of increased TNF in the lung may account for the onset of symptoms observed in MFF.
TNF is known to stimulate the release of both IL-6 and IL-8, in a time period that correlated
with the peaks of the cytokines in these volunteers’ BAL fluid. The recruitment of these
cytokines may account for the ensuing neutrophil alveolitis and flu-like symptoms that
characterize MFF. Why the alveolitis resolves so quickly remains a mystery.

Symptoms begin 3 to 10 hours after exposure. Initially, there may be a sweet metallic taste in
the mouth, accompanied by a worsening dry cough and shortness of breath. Fever and shaking
chills often develop, and the worker feels ill. The physical examination is otherwise
unremarkable. Laboratory evaluation shows a leukocytosis and a normal chest radiograph.
Pulmonary function studies may show slightly reduced FEF25-75 and DLCO values (Nemery
1990; Rose 1992).
With a good history the diagnosis is readily established and the worker can be treated
symptomatically with antipyretics. Symptoms and clinical abnormalities resolve within 24 to
48 hours. Otherwise, bacterial and viral aetiologies of the symptoms must be considered. In
cases of extreme exposure, or exposures involving contamination by toxins such as zinc
chloride, cadmium or mercury, MFF may be a harbinger of a clinical chemical pneumonitis
that will evolve over the next 2 days (Blount 1990). Such cases can exhibit diffuse infiltrates
on a chest radiograph and signs of pulmonary oedema and respiratory failure. While this
possibility should be considered in the initial evaluation of an exposed patient, such a
fulminant course is unusual and not characteristic of uncomplicated MFF.

MFF does not require a specific sensitivity of the individual for the metal fumes; rather, it
indicates inadequate environmental control. The exposure problem should be addressed to
prevent recurrent symptoms. Although the syndrome is considered benign, the long-term
effects of repeated bouts of MFF have not been adequately investigated.

Polymer Fume Fever

Polymer fume fever is a self-limited febrile illness similar to MFF, but caused by inhaled
pyrolysis products of fluoropolymers, including polytetrafluoroethylene (PTFE; trade names
Teflon, Fluon, Halon). PTFE is widely used for its lubricant, thermal stability and electrical
insulative properties. It is harmless unless heated above about 300°C, when it starts to release
degradation products (Shusterman 1993). This situation occurs when welding materials coated
with PTFE, when heating PTFE with a tool edge during high-speed machining, when operating
moulding or extruding machines (Rose 1992) and, rarely, during endotracheal laser surgery (Rom 1992a).

A common cause of polymer fume fever was identified through a period of classic public health
detective work in the early 1970s (Wegman and Peters 1974; Kuntz and McCord 1974).
Textile workers were developing self-limited febrile illnesses with exposures to
formaldehyde, ammonia and nylon fibre; they did not have exposure to fluoropolymer fumes
but handled crushed polymer. After finding that exposure levels of the other possible
aetiological agents were within acceptable limits, the fluoropolymer work was examined more
closely. As it turned out, only cigarette smokers working with the fluoropolymer were
symptomatic. It was hypothesized that the cigarettes were being contaminated with
fluoropolymer from the workers' hands; the polymer was then combusted when the cigarette
was smoked, exposing the worker to toxic fumes. After banning cigarette smoking in the
workplace and setting strict handwashing rules, no further illnesses were reported (Wegman
and Peters 1974). Since then, this phenomenon has been reported after working with
waterproofing compounds, mould-release compounds (Albrecht and Bryant 1987) and after
using certain kinds of ski wax (Strom and Alexandersen 1990).

The pathogenesis of polymer fume fever is not known. It is thought to be similar to the other
inhalation fevers owing to its similar presentation and apparently non-specific immune
response. There have been no human experimental studies; however, rats and birds both
develop severe alveolar epithelial damage on exposure to PTFE pyrolysis products (Wells,
Slocombe and Trapp 1982; Blandford et al. 1975). Accurate measurement of pulmonary
function or BAL fluid changes has not been done.

Symptoms appear several hours after exposure, and the tolerance or tachyphylaxis effect seen
in MFF does not occur. Weakness and myalgias are followed by fever and chills. Often there is
chest tightness and cough. Physical examination is usually otherwise normal. Leukocytosis is
often seen, and the chest radiograph is usually normal. Symptoms resolve spontaneously in 12
to 48 hours. There have been a few cases of persons developing pulmonary oedema after
exposure; in general, PTFE fumes are thought to be more toxic than zinc or copper fumes in
causing MFF (Shusterman 1993; Brubaker 1977). Chronic airways dysfunction has been
reported in persons who have had multiple episodes of polymer fume fever (Williams,
Atkinson and Patchefsky 1974).

The diagnosis of polymer fume fever requires a careful history with high clinical suspicion.
After ascertaining the source of the PTFE pyrolysis products, efforts must be made to prevent
further exposure. Mandatory handwashing rules and the elimination of smoking in the
workplace have effectively eliminated cases related to contaminated cigarettes. Workers who
have had multiple episodes of polymer fume fever or associated pulmonary oedema should
have long-term medical follow-up.

Occupational Asthma

Written by ILO Content Manager

Asthma is a respiratory disease characterized by airway obstruction that is partially or completely reversible, either spontaneously or with treatment; airway inflammation; and
increased airway responsiveness to a variety of stimuli (NAEP 1991). Occupational asthma
(OA) is asthma that is caused by environmental exposures in the workplace. Several hundred
agents have been reported to cause OA. Pre-existing asthma or airway hyper-responsiveness,
with symptoms worsened by work exposure to irritants or physical stimuli, is usually
classified separately as work-aggravated asthma (WAA). There is general agreement that OA
has become the most prevalent occupational lung disease in developed countries, although
estimates of actual prevalence and incidence are quite variable. It is clear, however, that in
many countries asthma of occupational aetiology causes a largely unrecognized burden of
disease and disability with high economic and non-economic costs. Much of this public health
and economic burden is potentially preventable by identifying and controlling or eliminating
the workplace exposures causing the asthma. This article will summarize current approaches
to recognition, management and prevention of OA. Several recent publications discuss these
issues in more detail (Chan-Yeung 1995; Bernstein et al. 1993).

Magnitude of the Problem

Prevalences of asthma in adults generally range from 3 to 5%, depending on the definition of
asthma and geographic variations, and may be considerably higher in some low-income urban
populations. The proportion of adult asthma cases in the general population that is related to
the work environment is reported to range from 2 to 23%, with recent estimates tending
towards the higher end of the range. Prevalences of asthma and OA have been estimated in
small cohort and cross-sectional studies of high-risk occupational groups. In a review of 22
selected studies of workplaces with exposures to specific substances, prevalences of asthma or
OA, defined in various ways, ranged from 3 to 54%, with 12 studies reporting prevalences
over 15% (Becklake, in Bernstein et al. 1993). The wide range reflects real variation in actual
prevalence (due to different types and levels of exposure). It also reflects differences in
diagnostic criteria, and variation in the strength of the biases, such as “survivor bias” which
may result from exclusion of workers who developed OA and left the workplace before the
study was conducted. Population estimates of incidence range from 14 per million employed
adults per year in the United States to 140 per million employed adults per year in Finland
(Meredith and Nordman 1996). Ascertainment of cases was more complete and methods of
diagnosis were generally more rigorous in Finland. The evidence from these different sources
is consistent in its implication that OA is often under-diagnosed and/or under-reported and is
a public health problem of greater magnitude than generally recognized.

Causes of Occupational Asthma

Over 200 agents (specific substances, occupations or industrial processes) have been reported
to cause OA, based on epidemiological and/or clinical evidence. In OA, airway inflammation
and bronchoconstriction can be caused by immunological response to sensitizing agents, by
direct irritant effects, or by other non-immunological mechanisms. Some agents (e.g.,
organophosphate insecticides) may also cause bronchoconstriction by direct pharmacological
action. Most of the reported agents are thought to induce a sensitization response. Respiratory
irritants often worsen symptoms in workers with pre-existing asthma (i.e., WAA) and, at high
exposure levels, can cause new onset of asthma (termed reactive airways dysfunction
syndrome (RADS) or irritant-induced asthma) (Brooks, Weiss and Bernstein 1985; Alberts
and Do Pico 1996).

OA may occur with or without a latency period. Latency period refers to the time between
initial exposure and development of symptoms, and is highly variable. It is often less than 2
years, but in around 20% of cases is 10 years or longer. OA with latency is generally caused
by sensitization to one or more agents. RADS is an example of OA without latency.

High molecular weight sensitizing agents (5,000 daltons (Da) or greater) often act by an IgE-
dependent mechanism. Low molecular weight sensitizing agents (less than 5,000 Da), which
include highly reactive chemicals like isocyanates, may act by IgE-independent mechanisms
or may act as haptens, combining with body proteins. Once a worker becomes sensitized to an
agent, re-exposure (frequently at levels far below the level that caused sensitization) results in
an inflammatory response in the airways, often accompanied by increases in airflow limitation
and non-specific bronchial responsiveness (NBR).

In epidemiological studies of OA, workplace exposures are consistently the strongest determinants of asthma prevalence, and the risk of developing OA with latency tends to
increase with estimated intensity of exposure. Atopy is an important and smoking a somewhat
less consistent determinant of asthma occurrence in studies of agents that act through an IgE-
dependent mechanism. Neither atopy nor smoking appears to be an important determinant of
asthma in studies of agents acting through IgE-independent mechanisms.

Clinical Presentation

The symptom spectrum of OA is similar to non-occupational asthma: wheeze, cough, chest tightness and shortness of breath. Patients sometimes present with cough-variant or nocturnal
asthma. OA can be severe and disabling, and deaths have been reported. Onset of OA occurs
due to a specific job environment, so identifying exposures that occurred at the time of onset
of asthmatic symptoms is key to an accurate diagnosis. In WAA, workplace exposures cause a
significant increase in frequency and/or severity of symptoms of pre-existing asthma.
Several features of the clinical history may suggest occupational aetiology (Chan-Yeung
1995). Symptoms frequently worsen at work or at night after work, improve on days off, and
recur on return to work. Symptoms may worsen progressively towards the end of the
workweek. The patient may note specific activities or agents in the workplace that
reproducibly trigger symptoms. Work-related eye irritation or rhinitis may be associated with
asthmatic symptoms. These typical symptom patterns may be present only in the initial stages
of OA. Partial or complete resolution on weekends or vacations is common early in the course
of OA, but with repeated exposures, the time required for recovery may increase to one or two
weeks, or recovery may cease to occur. The majority of patients with OA whose exposures
are terminated continue to have symptomatic asthma even years after cessation of exposure,
with permanent impairment and disability. Continuing exposure is associated with further
worsening of asthma. Brief duration and mild severity of symptoms at the time of cessation of
exposure are good prognostic factors and decrease the likelihood of permanent asthma.

Several characteristic temporal patterns of symptoms have been reported for OA. Early
asthmatic reactions typically occur shortly (less than one hour) after beginning work or the
specific work exposure causing the asthma. Late asthmatic reactions begin 4 to 6 hours after
exposure begins, and can last 24 to 48 hours. Combinations of these patterns occur as dual
asthmatic reactions with spontaneous resolution of symptoms separating an early and late
reaction, or as continuous asthmatic reactions with no resolution of symptoms between
phases. With exceptions, early reactions tend to be IgE mediated, and late reactions tend to be
IgE independent.

Increased NBR, generally measured by methacholine or histamine challenge, is considered a cardinal feature of occupational asthma. The time course and degree of NBR may be useful in
diagnosis and monitoring. NBR may decrease within several weeks after cessation of
exposure, although abnormal NBR commonly persists for months or years after exposures are
terminated. In individuals with irritant-induced occupational asthma, NBR is not expected to
vary with exposure and/or symptoms.

Recognition and Diagnosis

Accurate diagnosis of OA is important, given the substantial negative consequences of either under- or over-diagnosis. In workers with OA or at risk of developing OA, timely recognition,
identification and control of the occupational exposures causing the asthma improve the
chances of prevention or complete recovery. This primary prevention can greatly reduce the
high financial and human costs of chronic, disabling asthma. Conversely, since a diagnosis of
OA may obligate a complete change of occupation, or costly interventions in the workplace,
accurately distinguishing OA from asthma that is not occupational can prevent unnecessary
social and financial costs to both employers and workers.

Several case definitions of OA have been proposed, appropriate in different circumstances.


Definitions found valuable for worker screening or surveillance (Hoffman et al. 1990) may
not be entirely applicable for clinical purposes or compensation. A consensus of researchers
has defined OA as “a disease characterized by variable airflow limitation and/or airway
hyper-responsiveness due to causes and conditions attributable to a particular occupational
environment and not to stimuli encountered outside the workplace” (Bernstein et al. 1993).
This definition has been operationalized as a medical case definition, summarized in table 1
(Chan-Yeung 1995).
Table 1. ACCP medical case definition of occupational asthma

Criteria for diagnosis of occupational asthma1 (requires all 4, A-D):

(A) Physician diagnosis of asthma and/or physiological evidence of airways hyper-responsiveness

(B) Occupational exposure preceded onset of asthmatic symptoms1

(C) Association between symptoms of asthma and work

(D) Exposure and/or physiological evidence of relation of asthma to workplace environment (diagnosis of OA requires one or more of D2-D5; likely OA requires only D1)

(1) Workplace exposure to agent reported to give rise to OA

(2) Work-related changes in FEV1 and/or PEF

(3) Work-related changes in serial testing for non-specific bronchial responsiveness (e.g.,
Methacholine Challenge Test)

(4) Positive specific bronchial challenge test

(5) Onset of asthma with a clear association with a symptomatic exposure to an inhaled
irritant in the workplace (generally RADS)

Criteria for diagnosis of RADS (should meet all 7):

(1) Documented absence of preexisting asthma-like complaints

(2) Onset of symptoms after a single exposure incident or accident

(3) Exposure to a gas, smoke, fume, vapour or dust with irritant properties present in high
concentration

(4) Onset of symptoms within 24 hours after exposure with persistence of symptoms for
at least 3 months

(5) Symptoms consistent with asthma: cough, wheeze, dyspnoea

(6) Presence of airflow obstruction on pulmonary function tests and/or presence of non-
specific bronchial hyper-responsiveness (testing should be done shortly after exposure)

(7) Other pulmonary diseases ruled out


Criteria for diagnosis of work-aggravated asthma (WAA):

(1) Meets criteria A and C of ACCP Medical Case Definition of OA

(2) Pre-existing asthma or history of asthmatic symptoms (with active symptoms during
the year prior to start of employment or exposure of interest)

(3) Clear increase in symptoms or medication requirement, or documentation of work-related changes in PEFR or FEV1 after start of employment or exposure of interest
1 A case definition requiring A, C and any one of D1 to D5 may be useful in surveillance for OA, WAA and RADS.
Source: Chan-Yeung 1995.

Thorough clinical evaluation of OA can be time consuming, costly and difficult. It may
require diagnostic trials of removal from and return to work, and often requires the patient to
reliably chart serial peak expiratory flow (PEF) measurements. Some components of the
clinical evaluation (e.g., specific bronchial challenge or serial quantitative testing for NBR)
may not be readily available to many physicians. Other components may simply not be
achievable (e.g., patient no longer working, diagnostic resources not available, inadequate
serial PEF measurements). Diagnostic accuracy is likely to increase with the thoroughness of
the clinical evaluation. In each individual patient, decisions on the extent of medical
evaluation will need to balance costs of the evaluation with the clinical, social, financial and
public health consequences of incorrectly diagnosing or ruling out OA.

In consideration of these difficulties, a stepped approach to diagnosis of OA is outlined in table 2. This is intended as a general guide to facilitate accurate, practical and efficient
diagnostic evaluation, recognizing that some of the suggested procedures may not be available
in some settings. Diagnosis of OA involves establishing both the diagnosis of asthma and the
relation between asthma and workplace exposures. After each step, for each patient, the
physician will need to determine whether the level of diagnostic certainty achieved is
adequate to support the necessary decisions, or whether evaluation should continue to the next
step. If facilities and resources are available, the time and cost of continuing the clinical
evaluation are usually justified by the importance of making an accurate determination of the
relationship of asthma to work. Highlights of diagnostic procedures for OA will be
summarized; details can be found in several of the references (Chan-Yeung 1995; Bernstein et
al. 1993). Consultation with a physician experienced in OA may be considered, since the
diagnostic process may be difficult.
Table 2. Steps in diagnostic evaluation of asthma in the workplace

Step 1 Thorough medical and occupational history and directed physical examination.

Step 2 Physiological evaluation for reversible airway obstruction and/or non-specific bronchial
hyper-responsiveness.

Step 3 Immunologic assessment, if appropriate.

Assess Work Status:

Currently working: Proceed to Step 4 first.

Not currently working, diagnostic trial of return to work feasible: Step 5 first, then Step 4.

Not currently working, diagnostic trial of return to work not feasible: Step 6.

Step 4 Clinical evaluation of asthma at work or diagnostic trial of return to work.

Step 5 Clinical evaluation of asthma away from work or diagnostic trial of removal from
work.

Step 6 Workplace challenge or specific bronchial challenge testing. If available for suspected
causal exposures, this step may be performed prior to Step 4 for any patient.

This is intended as a general guide to facilitate practical and efficient diagnostic evaluation. It
is recommended that physicians who diagnose and manage OA refer to current clinical
literature as well.

RADS, when caused by an occupational exposure, is usually considered a subclass of OA. It is diagnosed clinically, using the criteria in table 1. Patients who have experienced significant
respiratory injury due to high-level irritant inhalations should be evaluated for persistent
symptoms and presence of airflow obstruction shortly after the event. If the clinical history is
compatible with RADS, further evaluation should include quantitative testing for NBR, if not
contra-indicated.

WAA may be common, and may cause a substantial preventable burden of disability, but little
has been published on diagnosis, management or prognosis. As summarized in table 1, WAA
is recognized when asthmatic symptoms preceded the suspected causal exposure but are
clearly aggravated by the work environment. Worsening at work can be documented either by
physiological evidence or through evaluation of medical records and medication use. It is a
clinical judgement whether patients with a history of asthma in remission, who have
recurrence of asthmatic symptoms that otherwise meet the criteria for OA, are diagnosed with
OA or WAA. One year has been proposed as a sufficiently long asymptomatic period that the
onset of symptoms is likely to represent a new process caused by the workplace exposure,
although no consensus yet exists.
Step 1: Thorough medical and occupational history and directed physical examination

Initial suspicion of possible OA in appropriate clinical and workplace situations is key, given
the importance of early diagnosis and intervention in improving prognosis. The diagnosis of
OA or WAA should be considered in all asthmatic patients in whom symptoms developed as
a working adult (especially recent onset), or in whom the severity of asthma has substantially
increased. OA should also be considered in any other individuals who have asthma-like
symptoms and work in occupations in which they are exposed to asthma-causing agents or
who are concerned that their symptoms are work-related.

Patients with possible OA should be asked to provide a thorough medical and occupational/environmental history, with careful documentation of the nature and date of
onset of symptoms and diagnosis of asthma, and any potentially causal exposures at that time.
Compatibility of the medical history with the clinical presentation of OA described above
should be evaluated, especially the temporal pattern of symptoms in relation to work schedule
and changes in work exposures. Patterns and changes in patterns of use of asthma
medications, and the minimum period of time away from work required for improvement in
symptoms should be noted. Prior respiratory diseases, allergies/atopy, smoking and other
toxic exposures, and a family history of allergy are pertinent.

Occupational and other environmental exposures to potential asthma-causing agents or processes should be thoroughly explored, with objective documentation of exposures if
possible. Suspected exposures should be compared with a comprehensive list of agents
reported to cause OA (Harber, Schenker and Balmes 1996; Chan-Yeung and Malo 1994;
Bernstein et al. 1993; Rom 1992b), although inability to identify specific agents is not
uncommon and induction of asthma by agents not previously described is possible as well.
Some illustrative examples are shown in table 3. Occupational history should include details
of current and relevant past employment with dates, job titles, tasks and exposures, especially
current job and job held at time of onset of symptoms. Other environmental history should
include a review of exposures in the home or community that could cause asthma. It is helpful
to begin the exposure history in an open-ended way, asking about broad categories of airborne
agents: dusts (especially organic dusts of animal, plant or microbial origin), chemicals,
pharmaceuticals and irritating or visible gases or fumes. The patient may identify specific
agents, work processes or generic categories of agents that have triggered symptoms. Asking
the patient to describe step by step the activities and exposures involved in the most recent
symptomatic workday can provide useful clues. Materials used by co-workers, or those
released in high concentration from a spill or other source, may be relevant. Further
information can often be obtained on product name, ingredients and manufacturer name,
address and phone number. Specific agents can be identified by calling the manufacturer or
through a variety of other sources including textbooks, CD ROM databases, or Poison Control
Centers. Since OA is frequently caused by low levels of airborne allergens, workplace
industrial hygiene inspections which qualitatively evaluate exposures and control measures
are often more helpful than quantitative measurement of air contaminants.
Table 3. Sensitizing agents that can cause occupational asthma

Classification | Sub-groups | Examples of substances | Examples of jobs and industries
High-molecular-weight protein antigens | Animal-derived substances | Laboratory animals, crab/seafood, mites, insects | Animal handlers, farming and food processing
High-molecular-weight protein antigens | Plant-derived substances | Flour and grain dusts, natural rubber latex gloves, bacterial enzymes, castor bean dust, vegetable gums | Bakeries, health care workers, detergent making, food processing
Low-molecular-weight/chemical sensitizers | Plasticizers, 2-part paints, adhesives, foams | Isocyanates, acid anhydrides, amines | Auto spray painting, varnishing, woodworking
Low-molecular-weight/chemical sensitizers | Metals | Platinum salts, cobalt | Platinum refineries, metal grinding
Low-molecular-weight/chemical sensitizers | Wood dusts | Cedar (plicatic acid), oak | Sawmill work, carpentry
Low-molecular-weight/chemical sensitizers | Pharmaceuticals, drugs | Psyllium, antibiotics | Pharmaceutical manufacturing and packaging
Low-molecular-weight/chemical sensitizers | Other chemicals | Chloramine T, polyvinyl chloride fumes, organophosphate insecticides | Janitorial work, meat packing

The clinical history appears to be better for excluding than for confirming the diagnosis
of OA, and an open-ended history taken by a physician is better than a closed questionnaire.
One study compared the results of an open-ended clinical history taken by trained OA
specialists with a “gold standard” of specific bronchial challenge testing in 162 patients
referred for evaluation of possible OA. The investigators reported that the sensitivity of a
clinical history suggestive of OA was 87%, specificity 55%, predictive value positive 63%
and predictive value negative 83%. In this group of referred patients, prevalence of asthma
and OA were 80% and 46%, respectively (Malo et al. 1991). In other groups of referred
patients, predictive values positive of a closed questionnaire ranged from 8 to 52% for a
variety of workplace exposures (Bernstein et al. 1993). The applicability of these results to
other settings needs to be assessed by the physician.
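Because predictive values depend on how common OA is in the group being tested, the figures quoted above transfer poorly to settings with a different prevalence. The sketch below (Python) recomputes predictive values from the reported sensitivity (87%) and specificity (55%) at several assumed prevalences; the lower prevalences are hypothetical.

# Predictive values from sensitivity, specificity and prevalence (Bayes' rule).
def predictive_values(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_neg = (1.0 - sensitivity) * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    true_neg = specificity * (1.0 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

for prevalence in (0.46, 0.20, 0.05):  # 0.46 as in the referred group above; others hypothetical
    ppv, npv = predictive_values(0.87, 0.55, prevalence)
    print(f"Prevalence {prevalence:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")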

Physical examination is sometimes helpful, and findings relevant to asthma (e.g., wheezing,
nasal polyps, eczematous dermatitis), respiratory irritation or allergy (e.g., rhinitis,
conjunctivitis) or other potential causes of symptoms should be noted.
Step 2: Physiological evaluation for reversible airway obstruction and/or non-specific
bronchial hyper-responsiveness

If sufficient physiological evidence supporting the diagnosis of asthma (NAEP 1991) is already in the medical record, Step 2 can be skipped. If not, technician-coached spirometry
should be performed, preferably post-workshift on a day when the patient is experiencing
asthmatic symptoms. If spirometry reveals airway obstruction which reverses with a
bronchodilator, this confirms the diagnosis of asthma. In patients without clear evidence of
airflow limitation on spirometry, quantitative testing for NBR using methacholine or
histamine should be done, the same day if possible. Quantitative testing for NBR in this
situation is a key procedure for two reasons. First, it can often identify patients with mild or
early stage OA who have the greatest potential for cure but who would be missed if testing
stopped with normal spirometry. Second, if NBR is normal in a worker who has ongoing
exposure in the workplace environment associated with the symptoms, OA can generally be
ruled out without further testing. If abnormal, evaluation can proceed to Step 3 or 4, and the
degree of NBR may be useful in monitoring the patient for improvement after diagnostic trial
of removal from the suspected causal exposure (Step 5). If spirometry reveals significant
airflow limitation that does not improve after inhaled bronchodilator, a re-evaluation after
more prolonged trial of therapy, including corticosteroids, should be considered (ATS 1995;
NAEP 1991).
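The text does not state a numerical criterion for a significant bronchodilator response; a widely used convention (for example, in ATS guidance) is an increase in FEV1 of at least 12% and at least 200 ml. The sketch below (Python) applies that convention to hypothetical pre- and post-bronchodilator values.

# Check bronchodilator reversibility against the common ">=12% and >=200 ml" convention.
def significant_reversibility(fev1_pre_litres, fev1_post_litres):
    increase_litres = fev1_post_litres - fev1_pre_litres
    increase_percent = 100.0 * increase_litres / fev1_pre_litres
    return increase_litres >= 0.200 and increase_percent >= 12.0

print(significant_reversibility(2.40, 2.80))  # +0.40 l, +16.7%: supports the diagnosis of asthma
print(significant_reversibility(3.50, 3.65))  # +0.15 l, +4.3%: does not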

Step 3: Immunological assessment, if appropriate

Skin or serological (e.g., RAST) testing can demonstrate immunological sensitization to a specific workplace agent. These immunological tests have been used to confirm the work-
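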
relatedness of asthma, and, in some cases, eliminate the need for specific inhalation challenge
tests. For example, among psyllium-exposed patients with a clinical history compatible with
OA, documented asthma or airway hyper-responsiveness, and evidence of immunological
sensitization to psyllium, approximately 80% had OA confirmed on subsequent specific
bronchial challenge testing (Malo et al. 1990). In most cases, diagnostic significance of
negative immunological tests is less clear. The diagnostic sensitivity of the immunological
tests depends critically on whether all the likely causal antigens in the workplace or hapten-
protein complexes have been included in the testing. Although the implication of sensitization
for an asymptomatic worker is not well defined, analysis of grouped results can be useful in
evaluating environmental controls. The utility of immunological evaluation is greatest for
agents for which there are standardized in vitro tests or skin-prick reagents, such as platinum
salts and detergent enzymes. Unfortunately, most occupational allergens of interest are not
currently available commercially. The use of non-commercial solutions in skin-prick testing
has on occasions been associated with severe reactions, including anaphylaxis, and thus
caution is necessary.

If results of Steps 1 and 2 are compatible with OA, further evaluation should be pursued if
possible. The order and extent of further evaluation depends on availability of diagnostic
resources, work status of the patient and feasibility of diagnostic trials of removal from and
return to work as indicated in Table 7. If further evaluation is not possible, a diagnosis must
be based on the information available at this point.
Step 4: Clinical evaluation of asthma at work, or diagnostic trial of return to work

Often the most readily available physiological test of airway obstruction is spirometry. To
improve reproducibility, spirometry should be coached by a trained technician. Unfortunately,
single-day cross-shift spirometry, performed before and after the workshift, is neither
sensitive nor specific in determining work-associated airway obstruction. Performing multiple spirometric measurements each day over several workdays would probably improve diagnostic accuracy, but this has not yet been adequately evaluated.

Due to difficulties with cross-shift spirometry, serial PEF measurement has become an
important diagnostic technique for OA. Using an inexpensive portable meter, PEF
measurements are recorded every two hours, during waking hours. To improve sensitivity,
measurements must be done during a period when the worker is exposed to the suspected
causal agents at work and is experiencing a work-related pattern of symptoms. Three
repetitions are performed at each time, and measurements are made every day at work and
away from work. The measurements should be continued for at least 16 consecutive days
(e.g., two five-day work weeks and 3 weekends off) if the patient can safely tolerate
continuing to work. PEF measurements are recorded in a diary along with notation of work
hours, symptoms, use of bronchodilator medications, and significant exposures. To facilitate
interpretation, the diary results should then be plotted graphically. Certain patterns suggest
OA, but none are pathognomonic, and interpretation by an experienced reader is often helpful.
Advantages of serial PEF testing are low cost and reasonable correlation with results of
bronchial challenge testing. Disadvantages include the significant degree of patient
cooperation required, inability to definitely confirm that data are accurate, lack of
standardized method of interpretation, and the need for some patients to take 1 or 2
consecutive weeks off work to show significant improvement. Portable electronic recording spirometers designed for patient self-monitoring, when available, can address some of the disadvantages of serial PEF.
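
As a simple illustration of how such a diary might be summarized, the Python sketch below computes the mean and the within-day variability of PEF for a work day and a day off. The example readings, the comparison format and the variability formula are illustrative assumptions, not a validated interpretation method; as noted above, interpretation by an experienced reader is still needed.

from statistics import mean

# Hypothetical two-hourly PEF readings (L/min), best of three at each time point
diary = {
    "Monday (work)":  [380, 350, 330, 320, 340, 360],
    "Saturday (off)": [460, 470, 465, 475, 470, 468],
}

def diurnal_variability(readings):
    """Within-day variability as (max - min) / max, in per cent."""
    return 100 * (max(readings) - min(readings)) / max(readings)

for day, readings in diary.items():
    print(f"{day}: mean {mean(readings):.0f} L/min, "
          f"variability {diurnal_variability(readings):.1f}%")

# A work-related pattern would typically show lower means and greater
# variability on work days than on days away from work.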

Asthma medications tend to reduce the effect of work exposures on measures of airflow.
However, it is not advisable to discontinue medications during airflow monitoring at work.
Rather, the patient should be maintained on a constant minimal safe dosage of anti-
inflammatory medications throughout the entire diagnostic process, with close monitoring of
symptoms and airflow, and the use of short-acting bronchodilators to control symptoms
should be noted in the diary.

The failure to observe work-related changes in PEF while a patient is working routine hours
does not exclude the diagnosis of OA, since many patients will require more than a two-day
weekend to show significant improvement in PEF. In this case, a diagnostic trial of extended
removal from work (Step 5) should be considered. If the patient has not yet had quantitative
testing for NBR, and does not have a medical contra-indication, it should be done at this time,
immediately after at least two weeks of workplace exposure.

Step 5: Clinical evaluation of asthma away from work or diagnostic trial of extended
removal from work

This step consists of completion of the serial 2-hourly PEF daily diary for at least 9
consecutive days away from work (e.g., 5 days off work plus weekends before and after). If
this record, compared with the serial PEF diary at work, is not sufficient for diagnosing OA, it
should be continued for a second consecutive week away from work. After 2 or more weeks
away from work, quantitative testing for NBR can be performed and compared to NBR while
at work. If serial PEF has not yet been done during at least two weeks at work, then a
diagnostic trial of return to work (see Step 4) may be performed, after detailed counselling,
and in close contact with the treating physician. Step 5 is often critically important in
confirming or excluding the diagnosis of OA, although it may also be the most difficult and
expensive step. If an extended removal from work is attempted, it is best to maximize the
diagnostic yield and efficiency by including PEF, FEV1, and NBR tests in one comprehensive
evaluation. Weekly physician visits for counselling and to review the PEF chart can help to
assure complete and accurate results. If, after monitoring the patient for at least two weeks at
work and two weeks away from it, the diagnostic evidence is not yet sufficient, Step 6 should
be considered next, if available and feasible.

Step 6: Specific bronchial challenge or workplace challenge testing

Specific bronchial challenge testing using an exposure chamber and standardized exposure
levels has been labelled the “gold standard” for diagnosis of OA. Advantages include
definitive confirmation of OA with ability to identify asthmatic response to sub-irritant levels
of specific sensitizing agents, which can then be scrupulously avoided. Of all the diagnostic
methods, it is the only one that can reliably distinguish sensitizer-induced asthma from
provocation by irritants. Drawbacks of this approach include the inherent cost of the procedure, the general requirement for close observation or hospitalization for several days, and its availability in only a few specialized centres. False negatives may occur
if standardized methodology is not available for all suspected agents, if the wrong agents are
suspected, or if too long a time has elapsed between last exposure and testing. False positives
may result if irritant levels of exposure are inadvertently obtained. For these reasons, specific
bronchial challenge testing for OA remains a research procedure in most localities.

Workplace challenge testing involves serial technician-coached spirometry in the workplace, performed at frequent (e.g., hourly) intervals before and during the course of a workday
exposure to the suspected causal agents or processes. It may be more sensitive than specific
bronchial challenge testing because it involves “real life” exposures, but since airway
obstruction may be triggered by irritants as well as sensitizing agents, positive tests do not
necessarily indicate sensitization. It also requires cooperation of the employer and much
technician time with a mobile spirometer. Both of these procedures carry some risk of
precipitating a severe asthmatic attack, and should therefore be done under close supervision
of specialists experienced with the procedures.

Treatment and Prevention

Management of OA includes medical and preventive interventions for individual patients, as well as public health measures in workplaces identified as high risk for OA. Medical
management is similar to that for non-occupational asthma and is well reviewed elsewhere
(NAEP 1991). Medical management alone is rarely adequate to optimally control symptoms,
and preventive intervention by control or cessation of exposure is an integral part of the
treatment. This process begins with accurate diagnosis and identification of causative
exposures and conditions. In sensitizer-induced OA, reducing exposure to the sensitizer does
not usually result in complete resolution of symptoms. Severe asthmatic episodes or
progressive worsening of the disease may be caused by exposures to very low concentrations of the agent, and complete and permanent cessation of exposure is recommended. Timely
referral for vocational rehabilitation and job retraining may be a necessary component of
treatment for some patients. If complete cessation of exposure is impossible, substantial
reduction of exposure accompanied by close medical monitoring and management may be an
option, although such reduction in exposure is not always feasible and the long-term safety of
this approach has not been tested. As an example, it would be difficult to justify the toxicity of
long-term treatment with systemic corticosteroids in order to allow the patient to continue in
the same employment. For asthma induced and/or triggered by irritants, dose response may be
more predictable, and lowering of irritant exposure levels, accompanied by close medical
monitoring, may be less risky and more likely to be effective than for sensitizer-induced OA.
If the patient continues to work under modified conditions, medical follow-up should include
frequent physician visits with review of the PEF diary, well-planned access to emergency
services, and serial spirometry and/or methacholine challenge testing, as appropriate.

Once a particular workplace is suspected to be high risk, due either to occurrence of a sentinel
case of OA or use of known asthma-causing agents, public health methods can be very useful.
Early recognition and effective treatment and prevention of disability of workers with existing
OA, and prevention of new cases, are clear priorities. Identification of specific causal agent(s)
and work processes is important. One practical initial approach is a workplace questionnaire
survey, evaluating criteria A, B, C, and D1 or D5 in the case definition of OA. This approach
can identify individuals for whom further clinical evaluation might be indicated and help
identify possible causal agents or circumstances. Evaluation of group results can help decide
whether further workplace investigation or intervention is indicated and, if so, provide
valuable guidance in targeting future prevention efforts in the most effective and efficient
manner. A questionnaire survey is not adequate, however, to establish individual medical
diagnoses, since predictive positive values of questionnaires for OA are not high enough. If a
greater level of diagnostic certainty is needed, medical screening utilizing diagnostic
procedures such as spirometry, quantitative testing for NBR, serial PEF recording, and
immunological testing can be considered as well. In known problem workplaces, ongoing
surveillance and screening programmes may be helpful. However, differential exclusion of
asymptomatic workers with history of atopy or other potential susceptibility factors from
workplaces believed to be high risk would result in removal of large numbers of workers to
prevent relatively few cases of OA, and is not supported by the current literature.

Control or elimination of causal exposures and avoidance and proper management of spills or
episodes of high-level exposures can lead to effective primary prevention of sensitization and
OA in co-workers of the sentinel case. The usual exposure control hierarchy of substitution,
engineering and administrative controls, and personal protective equipment, as well as
education of workers and managers, should be implemented as appropriate. Proactive
employers will initiate or participate in some or all of these approaches, but in the event that
inadequate preventive action is taken and workers remain at high risk, governmental
enforcement agencies may be helpful.

Impairment and Disability

Medical impairment is a functional abnormality resulting from a medical condition. Disability refers to the total effect of the medical impairment on the patient’s life, and is influenced by
many non-medical factors such as age and socio-economic status (ATS 1995).

Assessment of medical impairment is done by the physician and may include a calculated
impairment index, as well as other clinical considerations. The impairment index is based on
(1) degree of airflow limitation after bronchodilator, (2) either degree of reversibility of
airflow limitation with bronchodilator or degree of airway hyper-responsiveness on
quantitative testing for NBR, and (3) minimum medication required to control asthma. The
other major component of the assessment of medical impairment is the physician’s medical
judgement of the ability of the patient to work in the workplace environment causing the
asthma. For example, a patient with sensitizer-induced OA may have a medical impairment
which is highly specific to the agent to which he or she has become sensitized. The worker
who experiences symptoms only when exposed to this agent may be able to work in other
jobs, but permanently unable to work in the specific job for which she or he has the most
training and experience.

Assessment of disability due to asthma (including OA) requires consideration of medical impairment as well as other non-medical factors affecting ability to work and function in
everyday life. Disability assessment is initially made by the physician, who should identify all
the factors affecting the impact of the impairment on the patient’s life. Many factors such as
occupation, educational level, possession of other marketable skills, economic conditions and
other social factors may lead to varying levels of disability in individuals with the same level
of medical impairment. This information can then be used by administrators to determine
disability for purposes of compensation.

Impairment and disability may be classified as temporary or permanent, depending on the likelihood of significant improvement, and whether effective exposure controls are
successfully implemented in the workplace. For example, an individual with sensitizer-
induced OA is generally considered permanently, totally impaired for any job involving
exposure to the causal agent. If the symptoms resolve partially or completely after cessation
of exposure, these individuals may be classified with less or no impairment for other jobs.
Often this is considered permanent partial impairment/disability, but terminology may vary.
An individual with asthma which is triggered in a dose-dependent fashion by irritants in the
workplace would be considered to have temporary impairment while symptomatic, and less or
no impairment if adequate exposure controls are installed and are effective in reducing or
eliminating symptoms. If effective exposure controls are not implemented, the same
individual might have to be considered permanently impaired to work in that job, with
recommendation for medical removal. If necessary, repeated assessment for long-term
impairment/disability may be carried out two years after the exposure is reduced or
terminated, when improvement of OA would be expected to have plateaued. If the patient
continues to work, medical monitoring should be ongoing and reassessment of
impairment/disability should be repeated as needed.

Workers who become disabled by OA or WAA may qualify for financial compensation for
medical expenses and/or lost wages. In addition to directly reducing the financial impact of
the disability on individual workers and their families, compensation may be necessary to
provide proper medical treatment, initiate preventive intervention and obtain vocational
rehabilitation. The worker’s and physician’s understanding of specific medico-legal issues
may be important to ensuring that the diagnostic evaluation meets local requirements and does
not result in compromise of the rights of the affected worker.

Although discussions of cost savings frequently focus on the inadequacy of compensation systems, genuinely reducing the financial and public health burden placed on society by OA
and WAA will depend not only on improvements in compensation systems but, more
importantly, on effectiveness of the systems deployed to identify and rectify, or prevent
entirely, workplace exposures that are causing onset of new cases of asthma.
Conclusions

OA has become the most prevalent occupational respiratory disease in many countries. It is
more common than generally recognized, can be severe and disabling, and is generally
preventable. Early recognition and effective preventive interventions can substantially reduce
the risk of permanent disability and the high human and financial costs associated with
chronic asthma. For many reasons, OA merits more widespread attention among clinicians,
health and safety specialists, researchers, health policy makers, industrial hygienists, and
others interested in prevention of work-related diseases.

Diseases Caused by Organic Dusts

Written by ILO Content Manager

Organic Dust and Disease

Dusts of vegetable, animal and microbial origin have always been part of the human
environment. When the first aquatic organisms moved to land some 450 million years ago,
they soon developed defence systems against the many noxious substances present in the
terrestrial environment, most of them of plant origin. Exposures to this environment usually
cause no specific problems, even though plants contain a number of extremely toxic
substances, particularly those present in or produced by moulds.

During the development of civilization, climatic conditions in some parts of the world
necessitated certain activities to be undertaken indoors. Threshing in the Scandinavian
countries was performed indoors during the winter, a practice mentioned by chroniclers in
antiquity. The enclosure of dusty processes led to disease among the exposed persons, and
one of the first published accounts of this is by the Danish bishop Olaus Magnus (1555, as
cited by Rask-Andersen 1988). He described a disease among threshers in Scandinavia as
follows:

“In separating the grain from the chaff, care must be taken to choose a time when there
is a suitable wind which will sweep away the grain dust, so that it will not damage the
vital organs of the threshers. This dust is so fine that it will almost unnoticeably
penetrate into the mouth and accumulate in the throat. If this is not quickly dealt with
by drinking fresh ale, the thresher may never again or only for a short period eat what
he has threshed.”

With the introduction of machine processing of organic materials, treatment of large quantities of materials indoors with poor ventilation led to high levels of airborne dust. The
descriptions by bishop Olaus Magnus and later by Ramazzini (1713) were followed by several
reports on disease and organic dusts in the nineteenth century, particularly among cotton mill
workers (Leach 1863; Prausnitz 1936). Later, the specific pulmonary disease common among
farmers handling mouldy materials was also described (Campbell 1932).

During recent decades, a large number of reports on disease among persons exposed to
organic dusts have been published. Initially, most of these were based on persons seeking
medical help. The names of the diseases, when published, were often related to the particular
environment where the disease was first recognized, and a bewildering array of names
resulted, such as farmer’s lung, mushroom grower’s lung, brown lung and humidifier fever.

With the advent of modern epidemiology, more reliable figures were obtained for the
incidence of occupational respiratory diseases related to organic dust (Rylander, Donham and
Peterson 1986; Rylander and Peterson 1990). There was also advancement in the
understanding of the pathological mechanisms underlying these diseases, particularly the
inflammatory response (Henson and Murphy 1989). This paved the way for a more coherent
picture of diseases caused by organic dusts (Rylander and Jacobs 1997).

The following will describe the different organic dust environments where disease has been
reported, the disease entities themselves, the classical byssinosis disease and specific
preventive measures.

Environments

Organic dusts are airborne particles of vegetable, animal or microbial origin. Table 1 lists
examples of environments, work processes and agents involving the risk of exposure to
organic dusts.

Table 1. Examples of sources of hazards of exposure to organic dust

Agriculture

Handling of grain, hay or other crops

Sugar-cane processing

Greenhouses

Silos

Animals

Swine/dairy confinement buildings

Poultry houses and processing plants

Laboratory animals, farm animals and pets

Waste-processing

Sewage water and silt

Household garbage

Composting
Industry

Vegetable fibre processing (cotton, flax, hemp, jute, sisal)

Fermentation

Timber and wood processing

Bakeries

Biotechnology processing

Buildings

Contaminated water in humidifiers

Microbial growth on structures or in ventilation ducts

Agents

It is now understood that the specific agents in the dusts are the major reason why disease
develops. Organic dusts contain a multitude of agents with potential biological effects. Some
of the major agents are found in table 2.

Table 2. Major agents in organic dusts with potential biological activity

Vegetable agents

Tannins

Histamine

Plicatic acid

Alkaloids (e.g., nicotine)

Cytochalasins

Animal agents

Proteins

Enzymes

Microbial agents
Endotoxins

(1→3)-β-D-glucans

Proteases

Mycotoxins

The relative role of each of these agents, alone or in combination with others, for the
development of disease, is mostly unknown. Most of the information available relates to
bacterial endotoxins which are present in all organic dusts.

Endotoxins are lipopolysaccharide compounds which are attached to the outer cell surface of
Gram-negative bacteria. Endotoxin has a wide variety of biological properties. After
inhalation it causes an acute inflammation (Snella and Rylander 1982; Brigham and Meyrick
1986). An influx of neutrophils (leukocytes) into the lung and the airways is the hallmark of
this reaction. It is accompanied by activation of other cells and secretion of inflammatory
mediators. After repeated exposures, the inflammation decreases (adaptation). The reaction is
limited to the airway mucosa, and there is no extensive involvement of the lung parenchyma.

Another specific agent in organic dust is (1→3)-β-D-glucan. This is a polyglucose compound present in the cell wall structure of moulds and some bacteria. It enhances the inflammatory
response caused by endotoxin and alters the function of inflammatory cells, particularly
macrophages and T-cells (Di Luzio 1985; Fogelmark et al. 1992).

Other specific agents present in organic dusts are proteins, tannins, proteases and other
enzymes, and toxins from moulds. Very little data are available on the concentrations of these
agents in organic dusts. Several of the specific agents in organic dusts, such as proteins and
enzymes, are allergens.

Diseases

The diseases caused by organic dusts are shown in table 3 with the corresponding
International Classification of Disease (ICD) numbers (Rylander and Jacobs 1994).

Table 3. Diseases induced by organic dusts and their ICD codes

Bronchitis and pneumonitis (ICD J40)

Toxic pneumonitis (inhalation fever, organic dust toxic syndrome)

Airways inflammation (mucous membrane inflammation)

Chronic bronchitis (ICD J42)


Hypersensitivity pneumonitis (allergic alveolitis) (ICD J67)

Asthma (ICD J45)

Rhinitis, conjunctivitis

The primary route of exposure for organic dusts is by inhalation, and consequently the effects
on the lung have received the major share of attention in research as well as in clinical work.
There is, however, a growing body of evidence from published epidemiological studies and
case reports as well as anecdotal reports, that systemic effects also occur. The mechanism
involved seems to be a local inflammation at the target site, the lung, and a subsequent release
of cytokines either with systemic effects (Dunn 1992; Michel et al. 1991) or an effect on the
epithelium in the gut (Axmacher et al. 1991). Non-respiratory clinical effects are fever, joint
pains, neurosensory effects, skin problems, intestinal disease, fatigue and headache.

The different disease entities as described in table 3 are easy to diagnose in typical cases, and
the underlying pathology is distinctly different. In real life, however, a worker who has a
disease due to organic dust exposure, often presents a mixture of the different disease entities.
One person may have airways inflammation for a number of years, suddenly develop asthma
and in addition have symptoms of toxic pneumonitis during a particularly heavy exposure.
Another person may have subclinical hypersensitivity pneumonitis with lymphocytosis in the
airways and develop toxic pneumonitis during a particularly heavy exposure.

A good example of the mixture of disease entities that may appear is byssinosis. This disease
was first described in the cotton mills, but the individual disease entities are also found in
other organic dust environments. An overview of the disease follows.

Byssinosis

The disease

Byssinosis was first described in the 1800s, and a classic report involving clinical as well as
experimental work was given by Prausnitz (1936). He described the symptoms among cotton
mill workers as follows:

“After working for years without any appreciable trouble except a little cough, cotton
mill workers notice either a sudden aggravation of their cough, which becomes dry
and exceedingly irritating … These attacks usually occur on Mondays … but gradually
the symptoms begin to spread over the ensuing days of the week; in time the
difference disappears and they suffer continuously.”

The first epidemiological investigations were performed in England in the 1950s (Schilling et
al. 1955; Schilling 1956). The initial diagnosis was based on the appearance of a typical
Monday morning chest tightness, diagnosed using a questionnaire (Roach and Schilling
1960). A scheme for grading the severity of byssinosis based on the type and periodicity of
symptoms was developed (Mekky, Roach and Schilling 1967; Schilling et al. 1955). Duration
of exposure was used as a measure of dose and this was related to the severity of the response.
Based on clinical interviews of large numbers of workers, this grading scheme was later
modified to more accurately reflect the time intervals for the decrease in FEV1 (Berry et al.
1973).

In one study, a difference in the prevalence of byssinosis in mills processing different types of
cotton was found (Jones et al. 1979). Mills using high-quality cotton to produce finer yarns
had a lower prevalence of byssinosis than mills producing coarse yarns and using a lower
quality of cotton. Thus in addition to exposure intensity and duration, both dose-related
variables, the type of dust became an important variable for assessing exposure. Later it was
demonstrated that the differences in the response of workers exposed to coarse and medium cottons were dependent not only on the type of cotton but on other variables that affect
exposure, including: processing variables such as carding speed, environmental variables such
as humidification and ventilation, and manufacturing variables such as different yarn
treatments (Berry et al. 1973).

The relationship between exposure to cotton dust and response (either symptoms or objective measures of pulmonary function) was further refined by studies from the United States comparing those who worked with 100% cotton to workers using the same cotton in a 50:50 blend with synthetics and to workers without exposure to cotton (Merchant et al. 1973).
Workers exposed to 100% cotton had the highest prevalence of byssinosis independent of
cigarette smoking, one of the confounders of exposure to cotton dust. This semiquantitative
relationship between dose and response to cotton dust was further refined in a group of textile
workers stratified by sex, smoking, work area and mill type. A relationship was observed in
each of these categories between dust concentration in the lower dust ranges and byssinosis
prevalence and/or change in forced expiratory volume in one second (FEV1).

In later investigations, the FEV1 decrease over the work shift has been used to assess the
effects of exposure, and it is also a part of the US Cotton Dust Standard.
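
As an illustration only, the cross-shift change is the pre-shift minus post-shift FEV1 expressed as a percentage of the pre-shift value. In the Python sketch below, the example volumes and the 5% referral flag are assumptions for demonstration, not quotations from the US Cotton Dust Standard.

def cross_shift_fev1_decline(pre_litres: float, post_litres: float) -> float:
    """Percentage fall in FEV1 from pre-shift to post-shift."""
    return 100 * (pre_litres - post_litres) / pre_litres

decline = cross_shift_fev1_decline(pre_litres=4.0, post_litres=3.7)
print(f"Cross-shift FEV1 decline: {decline:.1f}%")       # 7.5%
# Flag workers exceeding an assumed 5% decline for further medical evaluation:
print("refer for evaluation" if decline >= 5 else "no action")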

Byssinosis was long regarded as a peculiar disease with a mixture of different symptoms and
no knowledge of the specific pathology. Some authors suggested that it was an occupational
asthma (Bouhuys 1976). A workgroup meeting in 1987 analysed the symptomatology and
pathology of the disease (Rylander et al. 1987). It was agreed that the disease comprised
several clinical entities, generally related to organic dust exposure.

Toxic pneumonitis may appear the first time an employee works in the mill, particularly when
working in the opening, blowing and carding sections (Trice 1940). Although habituation
develops, the symptoms may reappear after an unusually heavy exposure later on.

Airways inflammation is the most widespread disease, and it appears at different degrees of
severity from light irritation in the nose and airways to severe dry cough and breathing
difficulties. The inflammation causes constriction of airways and a reduced FEV1. Airway
responsiveness is increased as measured with a methacholine or histamine challenge test. It
has been discussed whether airways inflammation should be accepted as a disease entity by
itself or whether it merely represents a symptom. As the clinical findings in terms of severe
cough with airways narrowing can lead to a decrease in work ability, it is justified to regard it
as an occupational disease.

Continued airways inflammation over several years may develop into chronic bronchitis,
particularly among heavily exposed workers in the blowing and carding areas. The clinical
picture would be one of chronic obstructive pulmonary disease (COPD).
Occupational asthma develops in a small percentage of the workforce, but is usually not
diagnosed in cross-sectional studies as the workers are forced to leave work because of the
disease. Hypersensitivity pneumonitis has not been detected in any of the epidemiological
studies undertaken, nor have there been case reports relating to cotton dust exposure. The
absence of hypersensitivity pneumonitis may be due to the relatively low amount of moulds in
cotton, as mouldy cotton is not acceptable for processing.

A subjective feeling of chest tightness, most common on Mondays, is the classical symptom
of cotton dust exposure (Schilling et al. 1955). It is not, however, a feature unique to cotton
dust exposure as it appears also among persons working with other kinds of organic dusts
(Donham et al. 1989). Chest tightness develops slowly over a number of years but it can also
be induced in previously unexposed persons, provided that the dose level is high (Haglind and
Rylander 1984). The presence of chest tightness is not directly related to a decrease in FEV1.

The pathology behind chest tightness has not been explained. It has been suggested that the
symptoms are due to an increased adhesiveness of platelets which accumulate in the lung
capillaries and increase the pulmonary artery pressure. It is likely that chest tightness involves
some kind of cell sensitization, as it takes repeated exposures for the symptom to develop.
This hypothesis is supported by results from studies on blood monocytes from cotton workers
(Beijer et al. 1990). A higher ability to produce procoagulant factor, indicative of cell
sensitization, was found among cotton workers as compared to controls.

The environment

The disease was originally described among workers in cotton, flax and soft hemp mills. In
the first phase of cotton treatment within the mills—bale opening, blowing and carding—
more than half of the workers may have symptoms of chest tightness and airways
inflammation. The incidence decreases as the cotton is processed, reflecting the successive
cleaning of the causative agent from the fibre. Byssinosis has been described in all countries
where investigations in cotton mills have been performed. Some countries like Australia have,
however, unusually low incidence figures (Gun et al. 1983).

There is now uniform evidence that bacterial endotoxins are the causative agent for toxic
pneumonitis and airways inflammation (Castellan et al. 1987; Pernis et al. 1961; Rylander,
Haglind and Lundholm 1985; Rylander and Haglind 1986; Herbert et al. 1992; Sigsgaard et
al. 1992). Dose-response relationships have been described and the typical symptoms have
been induced by inhalation of purified endotoxin (Rylander et al. 1989; Michel et al. 1995).
Although this does not exclude the possibility that other agents could contribute to the
pathogenesis, endotoxins can serve as markers for disease risk. It is unlikely that endotoxins
are related to the development of occupational asthma, but they could act as an adjuvant for
potential allergens in cotton dust.

The case

The diagnosis of byssinosis is classically made using questionnaires with the specific question
“Does your chest feel tight, and if so, on which day of the week?”. Persons with Monday
morning chest tightness are classified as byssinotics according to a scheme suggested by
Schilling (1956). Spirometry can be performed, and, according to the different combinations
of chest tightness and decrease in FEV1, the diagnostic scheme illustrated in table 4 has
evolved.
Table 4. Diagnostic criteria for byssinosis

Grade ½. Chest tightness on the first day of some working weeks

Grade 1. Chest tightness on the first day of every working week

Grade 2. Chest tightness on the first and other days of the working week

Grade 3. Grade 2 symptoms accompanied by evidence of permanent incapacity in the form of diminished effort tolerance and/or reduced ventilatory capacity
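
A schematic rendering of this grading logic is sketched below in Python. The argument names and simplified symptom encoding are assumptions for illustration; actual grading rests on the standardized questionnaire and clinical judgement.

def byssinosis_grade(first_day_tightness: str, other_days: bool,
                     permanent_incapacity: bool) -> str:
    """first_day_tightness: 'none', 'some_weeks' or 'every_week'."""
    if other_days and permanent_incapacity:
        return "Grade 3"
    if other_days:
        return "Grade 2"
    if first_day_tightness == "every_week":
        return "Grade 1"
    if first_day_tightness == "some_weeks":
        return "Grade 1/2"
    return "No byssinosis grade"

print(byssinosis_grade("every_week", other_days=False, permanent_incapacity=False))  # Grade 1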

Treatment

Treatment in the light stages of byssinosis is symptomatic, and most of the workers learn to
live with the slight chest tightness and bronchoconstriction that they experience on Mondays
or when cleaning machinery or carrying out similar tasks with a higher than normal exposure.
More advanced stages of airways inflammation or regular chest tightness several days of the
week require transfer to less dusty operations. The presence of occupational asthma mostly
requires work change.

Prevention

Prevention in general is dealt with in detail elsewhere in the Encyclopaedia. The basic
principles for prevention in terms of product substitute, exposure limitation, worker protection
and screening for disease apply also for cotton dust exposure.

Regarding product substitutes, it has been suggested that cotton with a low level of bacterial
contamination be used. An inverse proof of this concept is found in reports from 1863 where
the change to dirty cotton provoked an increase in the prevalence of symptoms among the
exposed workers (Leach 1863). There is also the possibility of changing to other fibres,
particularly synthetic fibres, although this is not always feasible from a product point of view.
There is at present no production-applied technique to decrease the endotoxin content of
cotton fibres.

Regarding dust reduction, successful programmes have been implemented in the United
States and elsewhere (Jacobs 1987). Such programmes are expensive, and the costs for highly
efficient dust removal may be prohibitive for developing countries (Corn 1987).

Regarding exposure control, the level of dust is not a sufficiently precise measure of exposure
risk. Depending on the degree of contamination with Gram-negative bacteria and thus
endotoxin, a given dust level may or may not be associated with a risk. For endotoxins, no
official guidelines have been established. It has been suggested that a level of 200 ng/m3 is the
threshold for toxic pneumonitis, 100 to 200 ng/m3 for acute airways constriction over the
workshift and 10 ng/m3 for airways inflammation (Rylander and Jacobs 1997).
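
These suggested levels can be summarized in a small Python sketch; the function is illustrative only, and the values are research suggestions quoted above, not official exposure limits.

def suggested_endotoxin_effects(level_ng_per_m3: float) -> list:
    """Map an airborne endotoxin level to the suggested effect thresholds."""
    effects = []
    if level_ng_per_m3 >= 10:
        effects.append("airways inflammation")
    if level_ng_per_m3 >= 100:
        effects.append("acute airways constriction over the workshift")
    if level_ng_per_m3 >= 200:
        effects.append("toxic pneumonitis")
    return effects or ["below the suggested effect thresholds"]

print(suggested_endotoxin_effects(150))
# ['airways inflammation', 'acute airways constriction over the workshift']
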
Knowledge about the risk factors and the consequences of exposure is important for prevention. The information basis has expanded rapidly during recent years, but much of it is
not yet present in textbooks or other easily available sources. A further problem is that
symptoms and findings in respiratory diseases induced by organic dust are non-specific and
occur normally in the population. They may thus not be correctly diagnosed in the early
stages.

Proper dissemination of knowledge concerning the effects of cotton and other organic dusts
requires the establishment of appropriate training programmes. These should be directed not
only towards workers with potential exposure but also towards employers and health
personnel, particularly occupational health inspectors and engineers. Information must include
source identification, symptoms and disease description, and methods of protection. An
informed worker can more readily recognize work-related symptoms and communicate more
effectively to a health care provider. Regarding health surveillance and screening,
questionnaires are a major instrument to be used. Several versions of questionnaires
specifically designed for diagnosing diseases induced by organic dust have been reported in
the literature (Rylander, Peterson and Donham 1990; Schwartz et al. 1995). Lung function
testing is also a useful tool for surveillance and diagnosis. Measurements of airway
responsiveness have been found to be useful (Rylander and Bergström 1993; Carvalheiro et
al. 1995). Other diagnostic tools such as measurements of inflammatory mediators or cell
activity are still in the research phase.

Beryllium Disease

Written by ILO Content Manager

Beryllium disease is a systemic disorder involving multiple organs, with pulmonary manifestations being most prominent and common. It occurs on exposure to beryllium in its alloy form or in one of its various chemical compounds. The route of exposure is by inhalation and the disease can be either acute or chronic. Acute disease is now extremely rare; none has been reported since industrial hygiene measures were implemented to limit the high-dose exposures that accompanied the first widespread industrial use of beryllium in the 1940s. Chronic beryllium disease continues to be reported.

Beryllium, Alloys and Compounds

Beryllium, an industrial substance suspected of having carcinogenic potential, is notable for its lightness in weight, high tensile strength and corrosion resistance. Table 1 outlines the
properties of beryllium and its compounds.
Table 1. Properties of beryllium and its compounds

Beryllium (Be): atomic weight 9.01; specific gravity 1.85; melting/boiling point 1,298±5/2,970 ºC; grey to silver metal.

Beryllium oxide (BeO): molecular weight 25; specific gravity 3.02; melting point 2,530±30 ºC; soluble in acids and alkalis, insoluble in water; white amorphous powder.

Beryllium fluoride (BeF2)1: molecular weight 47.02; specific gravity 1.99; sublimes at 800 ºC; readily soluble in water, sparingly soluble in ethyl alcohol; hygroscopic solid.

Beryllium chloride (BeCl2)2: molecular weight 79.9; specific gravity 1.90; melting/boiling point 405/520 ºC; very soluble in water, soluble in ethyl alcohol, benzene, ethyl ether and carbon disulphide; white or slightly yellow deliquescent crystals.

Beryllium nitrate (Be(NO3)2·3H2O)3: molecular weight 187.08; specific gravity 1.56; melting/boiling point 60/142 ºC; soluble in water and ethyl alcohol; white to faintly yellow deliquescent crystals.

Beryllium nitride (Be3N2)4: molecular weight 55.06; melting point 2,200±100 ºC; hard, refractory white crystals.

Beryllium sulphate hydrate (BeSO4·4H2O)5: molecular weight 177.2; specific gravity 1.71; melting point 100 ºC; soluble in water, insoluble in ethyl alcohol; colourless crystals.

1. Beryllium fluoride is made by the decomposition at 900–950 ºC of ammonium beryllium fluoride. Its main use is in the production of beryllium metal by reduction with magnesium.
2. Beryllium chloride is manufactured by passing chlorine over a mixture of beryllium oxide and carbon.
3. Beryllium nitrate is produced by the action of nitric acid on beryllium oxide. It is used as a chemical reagent and as a gas mantle hardener.
4. Beryllium nitride is prepared by heating beryllium metal powder in an oxygen-free nitrogen atmosphere at 700–1,400 ºC. It is used in atomic energy reactions, including the production of the radioactive carbon isotope carbon-14.
5. Beryllium sulphate hydrate is produced by treating the fritted ore with concentrated sulphuric acid. It is used in the production of metallic beryllium by the sulphate process.

Sources

Beryl (3BeO·Al2O3·6SiO2) is the chief commercial source of beryllium, the most abundant of
the minerals containing high concentrations of beryllium oxide (10 to 13%). Major sources of
beryl are to be found in Argentina, Brazil, India, Zimbabwe and the Republic of South Africa.
In the United States, beryl is found in Colorado, South Dakota, New Mexico and Utah.
Bertrandite, a low-grade ore (0.1 to 3%) with an acid-soluble beryllium content, is now being
mined and processed in Utah.

Production

The two most important methods of extracting beryllium from the ore are the sulphate process
and the fluoride process.

In the sulphate process, crushed beryl is melted in an arc furnace at 1,650 ºC and poured
through a high-velocity water stream to form a frit. After heat treatment, the frit is ground in a
ball mill and mixed with concentrated sulphuric acid to form a slurry, which is sprayed in the
form of a jet into a directly heated, rotating sulphating mill. The beryllium, now in a water-
soluble form, is leached from the sludge, and ammonium hydroxide is added to the leach
liquor, which is then fed to a crystallizer where ammonium alum is crystallized out. Chelating
agents are added to the liquor to hold iron and nickel in solution, sodium hydroxide is then
added, and the sodium beryllate thus formed is hydrolyzed to precipitate beryllium hydroxide.
The latter product may be converted to beryllium fluoride for reduction by magnesium to
metallic beryllium, or to beryllium chloride for electrolytic reduction.

In the fluoride process (figure 1) a briquetted mixture of ground ore, sodium silicofluoride and
soda ash is sintered in a rotating hearth furnace. The sintered material is crushed, milled and
leached. Sodium hydroxide is added to the solution of beryllium fluoride thus obtained and
the precipitate of beryllium hydroxide is filtered in a rotary filter. Metallic beryllium is
obtained as in the previous process by the magnesium reduction of beryllium fluoride or by
electrolysis of beryllium chloride.
Figure 1. Production of beryllium oxide by the fluoride process

Uses

Beryllium is used in alloys with a number of metals including steel, nickel, magnesium, zinc
and aluminium, the most widely used alloy being beryllium-copper—properly called “a
bronze”—which has a high tensile strength and a capacity for being hardened by heat
treatment. Beryllium bronzes are used in non-spark tools, electrical switch parts, watch
springs, diaphragms, shims, cams and bushings.
One of the largest uses of the metal is as a moderator of thermal neutrons in nuclear reactors
and as a reflector to reduce the leakage of neutrons from the reactor core. A mixed uranium-
beryllium source is often used as a neutron source. As a foil, beryllium is used as window
material in x-ray tubes. Its lightness, high elastic modulus and heat stability make it an
attractive material for the aircraft and aerospace industry.

Beryllium oxide is made by heating beryllium nitrate or hydroxide. It is used in the manufacture of ceramics, refractory materials and other beryllium compounds. It was used for the manufacture of phosphors for fluorescent lamps until the
incidence of beryllium disease in the industry caused its use for this purpose to be abandoned
(in 1949 in the United States).

Hazards

Fire and health hazards are associated with processes involving beryllium. Finely divided
beryllium powder will burn, the degree of combustibility being a function of particle size.
Fires have occurred in dust filtration units and during the welding of ventilation ducting in
which finely divided beryllium was present.

Beryllium and its compounds are highly toxic substances. Beryllium can affect all organ
systems, although the primary organ involved is the lung. Beryllium causes systemic disease
by inhalation and can distribute itself widely throughout the body after absorption from the
lungs. Little beryllium is absorbed from the gastro-intestinal tract. Beryllium can cause skin
irritation and its traumatic introduction into subcutaneous tissue can cause local irritation and
granuloma formation.

Pathogenesis

Beryllium in all its forms, except for beryl ore, has been associated with disease. The route of
entry is by inhalation and in the acute disease there is a direct toxic effect on both the
nasopharyngeal mucosa and that of the entire tracheobronchial tree as well, causing oedema
and inflammation. In the lung it causes an acute chemical pneumonitis. The major form of
beryllium toxicity at this point in time is chronic beryllium disease. A beryllium-specific
delayed type of hypersensitivity is the major pathway of chronic disease. The entry of
beryllium into the system through the lungs leads to proliferation of specific CD4+ lymphocytes, with beryllium acting as a specific antigen, either alone or as a hapten, through an interleukin-2 (IL-2) receptor pathway. Individual susceptibility to beryllium thus can be explained on the basis of the individual CD4+ response.
activated lymphocytes then can lead to granuloma formation and macrophage recruitment.
Beryllium can be transported to sites outside the lung where it can cause granuloma
formation. Beryllium is released slowly from different sites and it is excreted by the kidneys.
This slow release can occur over a span of 20 to 30 years. The chronicity and latency of
disease can probably be explained on the basis of the slow metabolism and release
phenomenon. The immune mechanisms involved in the pathogenesis of beryllium disease
also allow for specific approaches to diagnosis, which will be discussed below.
Histopathology

The primary pathological finding in beryllium disease is the formation of non-caseating granulomas in the lungs, lymph nodes and at other sites. Histopathological studies of lungs in
patients with acute beryllium disease have shown a non-specific pattern of acute and subacute
bronchitis and pneumonitis. In chronic beryllium disease, there are varying degrees of
lymphocytic infiltration of the lung interstitium and non-caseating granuloma formation
(figure 2).

Figure 2. Lung tissue in a patient with chronic beryllium disease

Both granulomas and round cell infiltration are visible

Many of the granulomas are located in the peribronchiolar areas. In addition, there can be
histiocytes, plasma cells and giant cells with calcific inclusion bodies. If it is a case solely of
granuloma formation, the long-term prognosis is better. The histology of the lung in chronic
beryllium disease is indistinguishable from that of sarcoidosis. Non-caseating granulomas are
also found in lymph nodes, liver, spleen, muscle and skin.

Clinical Manifestations

Skin injuries

Acid salts of beryllium cause allergic contact dermatitis. Such lesions may be erythematous,
papular or papulovesicular, are commonly pruritic, and are found on exposed parts of the
body. There is usually a delay of 2 weeks from first exposure to occurrence of the dermatitis,
except in the case of heavy exposures, when an irritant reaction may be immediate. This delay
is regarded as the time required to develop the hypersensitive state.

Accidental implantation of beryllium metal or crystals of a soluble beryllium compound in an abrasion, a crack in the skin or under the nail may cause an indurated area with central
suppuration. Granulomas can also form at such sites.

Conjunctivitis and dermatitis may occur alone or together. In cases of conjunctivitis, periorbital oedema may be severe.

Acute disease

Beryllium nasopharyngitis is characterized by swollen and hyperaemic mucous membranes, bleeding points, fissures and ulceration. Perforation of the nasal septum has been described.
Removal from exposure results in reversal of this inflammatory process within 3 to 6 weeks.

Involvement of the trachea and bronchial tree following exposure to higher levels of
beryllium causes non-productive cough, substernal pain and moderate shortness of breath.
Rhonchi and/or rales may be audible, and the x ray of the chest may show increased
bronchovascular markings. The character and speed of onset and the severity of these signs
and symptoms depend on the quality and quantity of exposure. Recovery is to be expected
within 1 to 4 weeks if the worker is removed from further exposure.

The use of steroids is quite useful in countering the acute disease. No new cases of acute
disease have been reported to the US Beryllium Case Registry in over 30 years. The Registry,
which was started by Harriet Hardy in 1952, has almost 1,000 case records, among which are
listed 212 acute cases. Almost all of these occurred in the fluorescent lamp manufacturing
industry. Forty-four subjects with the acute disease subsequently developed chronic disease.

Chronic beryllium disease

Chronic beryllium disease is a pulmonary and systemic granulomatous disease caused by inhalation of beryllium. The latency of the disease can be from 1 to 30 years, most commonly
occurring 10 to 15 years after first exposure. Chronic beryllium disease has a variable course
with exacerbations and remissions in its clinical manifestations. However, the disease is
usually progressive. There have been a few cases with chest x-ray abnormalities with a stable
clinical course and without significant symptoms.

Exertional dyspnoea is the most common symptom of chronic beryllium disease. Other
symptoms are cough, fatigue, weight loss, chest pain and arthralgias. Physical findings may
be entirely normal or may include bibasilar crackles, lymphadenopathy, skin lesions,
hepatosplenomegaly and clubbing. Signs of pulmonary hypertension may be present in
severe, long-standing disease.

Renal stones and hyperuricaemia can occur in some patients and there have been rare reports
of parotid gland enlargement and central nervous system involvement. The clinical
manifestations of chronic beryllium disease are very similar to those of sarcoidosis.

Roentgenologic features

The x-ray pattern in chronic beryllium disease is non-specific and is similar to that which may
be observed in sarcoidosis, idiopathic pulmonary fibrosis, tuberculosis, mycoses and dust
disease (figure 3). Early in the course of the disease films may show granular, nodular or
linear densities. These abnormalities may increase, decrease or remain unchanged, with or
without fibrosis. Upper-lobe involvement is common. Hilar adenopathy, seen in
approximately one-third of patients, is usually bilateral and accompanied by mottling of the
lung fields. The absence of lung changes in the presence of adenopathy is a relative but not an
absolute differential consideration in favour of sarcoidosis as opposed to chronic beryllium
disease. Unilateral hilar adenopathy has been reported, but is quite rare.
Figure 3. Chest roentgenograph of a patient with chronic beryllium disease, showing diffuse
fibronodular infiltrates and prominent hila

The x-ray picture does not correlate well with clinical status and does not reflect particular
qualitative or quantitative aspects of the causal exposure.

Pulmonary function tests

Data from the Beryllium Case Registry show that 3 patterns of impairment may be found in
chronic beryllium disease. Of 41 patients studied over a period of an average of 23 years after
initial beryllium exposure, 20% had a restrictive defect, 36% had an interstitial defect (normal
lung volumes and air flow rates but reduced diffusing capacity for carbon monoxide), 39%
had an obstructive defect and 5% were normal. The obstructive pattern, which occurred in
both smokers and non-smokers, was associated with granulomas in the peribronchial region.
This study indicated that the pattern of impairment affects prognosis. Patients with interstitial
defect fared best, with the least deterioration over a five-year interval. Patients with
obstructive and restrictive defects experienced worsening of their impairment in spite of
corticosteroid therapy.

Studies of lung function in beryllium extraction workers who were asymptomatic showed the
presence of mild arterial hypoxaemia. This occurred usually within the first 10 years of
exposure. In workers exposed to beryllium for 20 years or more there was a reduction in the
forced vital capacity (FVC) and the forced expiratory volume in one second (FEV1). These
findings suggest that the initial mild hypoxaemia could be due to the early alveolitis and that
with further exposure and elapse of time the reduction in FEV1 and FVC could represent
fibrosis and granuloma formation.

Other laboratory tests

Non-specific abnormal laboratory tests have been reported in chronic beryllium disease and
include elevated sedimentation rate, erythrocytosis, increased gammaglobulin levels,
hyperuricaemia and hypercalcaemia.

The Kveim skin test is negative in beryllium disease, whereas it may be positive in
sarcoidosis. The angiotensin converting enzyme (ACE) level is usually normal in beryllium
disease, but can be increased in 60% or more of patients with active sarcoidosis.

Diagnosis

Diagnosis of chronic beryllium disease for many years was based on the criteria developed
through the Beryllium Case Registry, which included:

1. a history of significant beryllium exposure
2. evidence of lower respiratory tract disease
3. abnormal chest x ray with interstitial fibronodular disease
4. abnormal lung function tests with decreased carbon monoxide diffusing capacity (DLCO)
5. pathological changes consistent with beryllium exposure in lung or thoracic lymph nodes
6. the presence of beryllium in tissue.

Four of the six criteria had to be met and should have included either (1) or (6). Since the
1980s, advances in immunology have made it possible to make the diagnosis of beryllium
disease without requiring tissue specimens for histological examination or beryllium analysis.
The transformation of lymphocytes in blood in response to beryllium exposure (as in the lymphocyte transformation test, LTT) or of lymphocytes from bronchoalveolar lavage (BAL) has been proposed by Newman et al. (1989) as a useful diagnostic tool in making the diagnosis of beryllium disease in exposed subjects. Their data suggest that a positive blood
LTT is indicative of sensitization. However, recent data show that the blood LTT does not
correlate well with pulmonary disease. The BAL lymphocyte transformation correlates much
better with abnormal pulmonary function and does not correlate well with concurrent
abnormalities in the blood LTT. Thus, to make a diagnosis of beryllium disease, one needs a
combination of clinical, radiological and lung function abnormalities and a positive LTT in
the BAL. A positive blood LTT by itself is not diagnostic. Microprobe analysis of small tissue
samples for beryllium is another recent innovation which could help in diagnosis of disease in
small lung tissue samples obtained by transbronchial lung biopsy.
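
The historical registry rule described above lends itself to a compact illustration. The Python sketch below is for demonstration only (the function name and input encoding are assumptions); it simply checks that at least four of the six criteria are met and that criterion 1 or criterion 6 is among them.

def meets_registry_criteria(criteria_met: set) -> bool:
    """criteria_met contains the numbers (1 to 6) of the criteria satisfied."""
    return len(criteria_met) >= 4 and bool(criteria_met & {1, 6})

print(meets_registry_criteria({1, 2, 3, 4}))  # True
print(meets_registry_criteria({2, 3, 4, 5}))  # False: neither (1) nor (6) is met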

Sarcoidosis is the disorder most closely resembling chronic beryllium disease, and the
differentiation may be difficult. Thus far, no cystic bone disease or involvement of the eye or
tonsil has appeared in chronic beryllium disease. Similarly, the Kveim test is negative in
beryllium disease. Skin testing to demonstrate beryllium sensitization is not recommended, in
that the test itself is sensitizing, may possibly trigger systemic reactions in sensitized people
and does not of itself establish that the presenting disease is necessarily beryllium related.
More sophisticated immunological approaches in differential diagnosis should allow for better
differentiation from sarcoidosis in the future.

Prognosis

The prognosis of chronic beryllium disease has improved over the years; it has been suggested that the longer delays in onset observed among beryllium workers may reflect lower exposures or lower beryllium body burdens, resulting in a milder clinical course. Clinical
evidence is that steroid therapy, if used when measurable disability first appears, in adequate
doses for long enough periods, has improved the clinical status of many patients, allowing
some of them to return to useful jobs. There is no clear evidence that steroids have cured
chronic beryllium poisoning.

Beryllium and cancer

In animals, experimentally administered beryllium is a carcinogen, causing osteogenic


sarcoma after intravenous injection in rabbits and lung cancer after inhalation in rats and
monkeys. Whether beryllium may be a human carcinogen is a controversial issue. Some
epidemiological studies have suggested an association, particularly after acute beryllium
disease. This finding has been disputed by others. One can conclude that beryllium is
carcinogenic in animals and there may be a link between lung cancer and beryllium in
humans, particularly in those with the acute disease.

Safety and Health Measures

Safety and health precautions must cover the fire hazard as well as the much more serious
toxicity danger.

Fire prevention

Arrangements must be made to prevent possible sources of ignition, such as the sparking or
arcing of electrical apparatus, friction, and so forth, in the vicinity of finely divided beryllium
powder. Equipment in which this powder has been present should be emptied and cleaned
before acetylene or electrical welding apparatus is used on it. Oxide-free, ultrafine beryllium
powder that has been prepared in inert gas is liable to ignite spontaneously on exposure to air.

Suitable dry powder—not water—should be used to extinguish a beryllium fire. Full personal
protective equipment, including respiratory protective equipment, should be worn and
firefighters should bathe afterwards and arrange for their clothing to be laundered separately.

Health protection

Beryllium processes must be conducted in a carefully controlled manner to protect both the
worker and the general population. The main risk takes the form of airborne contamination
and the process and plant should be designed to give rise to as little dust or fume as possible.
Wet processes should be used instead of dry processes, and the ingredients of beryllium-containing preparations should be handled as aqueous suspensions rather than as dry powders;
whenever possible the plant should be designed as groups of separate enclosed units. The
permissible concentration of beryllium in the atmosphere is so low that enclosure must be
applied even to wet processes, otherwise escaping splashes and spills can dry out and the dust
can enter the atmosphere.

Operations from which dust may be evolved should be conducted in areas with maximum
degree of enclosure consistent with the needs of manipulation. Some operations are performed
in glove boxes, but many more are conducted in enclosures provided with exhaust ventilation
similar to that installed in chemical fume cupboards. Machining operations may be ventilated
by high-velocity, low-volume local exhaust systems or by hooded enclosures with exhaust
ventilation.

To check the effectiveness of these precautionary measures, atmosphere monitoring should be done in such a manner that the daily average exposure of workers to respirable beryllium can
be calculated. The work area should be cleaned regularly by means of a proper vacuum
cleaner or a wet mop. Beryllium processes should be segregated from the other operations in
the factory.

Personal protective equipment should be provided for workers engaged in beryllium


processes. Where they are fully employed in processes involving the manipulation of
beryllium compounds or in processes associated with the extraction of the metal from the ore,
provision should be made for a complete change of clothing so that the workers do not go
home wearing clothing in which they have been working. Arrangements should be made for
the safe laundering of such working clothes, and protective overalls should be provided even
to laundry workers to ensure that they too are not exposed to risk. These arrangements should
not be left to normal home laundering procedures. Cases of beryllium poisoning in the
families of workers have been attributed to workers taking contaminated clothing home or wearing it in the home.

An occupational health standard of 2μg/m3, proposed in 1949 by a committee operating under


the auspices of the US Atomic Energy Commission, continues to be widely observed.
Existing interpretations generally permit fluctuations to a “ceiling” of 5μg/m3 as long as the
time-weighted average is not exceeded. Additionally, an “acceptable maximum peak above
the ceiling concentration for an eight-hour shift” of 25μg/m3 for up to 30 min is also
permissible. These operational levels are achievable in current industrial practice, and there is
no evidence of adverse health experience among persons working in an environment thus
controlled. Because of a possible link between beryllium and lung cancer it has been
suggested that the allowable limit be reduced to 1μg/m3, but no official action has been taken
on this suggestion in the United States.
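As a rough illustration of how routine monitoring results might be compared with the figures quoted above (a 2 μg/m3 time-weighted average, a 5 μg/m3 ceiling and a 25 μg/m3 peak for up to 30 minutes), the sketch below computes an 8-hour time-weighted average from sampling segments. The sample data, the function name and the simplified reading of the ceiling and peak rules are assumptions of this sketch, not an official compliance procedure.

# Minimal sketch (not an official compliance tool): checks one shift's beryllium
# air-sampling results against the numerical limits quoted above.

def check_beryllium_shift(samples, shift_minutes=480):
    """samples: list of (duration_minutes, concentration_ug_m3) pairs
    covering the whole shift. Returns a dict of findings."""
    total_time = sum(t for t, _ in samples)
    if total_time != shift_minutes:
        raise ValueError("samples must cover the whole shift")

    # Time-weighted average over the 8-hour shift
    twa = sum(t * c for t, c in samples) / shift_minutes

    # Minutes above the 5 ug/m3 "ceiling" and above the 25 ug/m3 peak
    minutes_over_ceiling = sum(t for t, c in samples if c > 5.0)
    minutes_over_peak = sum(t for t, c in samples if c > 25.0)

    return {
        "twa_ug_m3": twa,
        "twa_ok": twa <= 2.0,
        "ceiling_ok": minutes_over_ceiling <= 30,  # excursions limited to 30 min
        "peak_ok": minutes_over_peak == 0,
    }

# Example: 30 min at 8 ug/m3 during a dusty task, the rest of the shift at 0.8 ug/m3
print(check_beryllium_shift([(30, 8.0), (450, 0.8)]))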

The population at risk for developing beryllium disease is that which in some manner deals
with beryllium in its extraction or subsequent use. However, a few “neighbourhood” cases
have been reported at distances of 1 to 2 km from beryllium extraction plants.

Pre-employment and periodical medical examinations of workers exposed to beryllium and its
compounds are compulsory in a number of countries. Recommended evaluation includes an
annual respiratory questionnaire, a chest x ray and lung function tests. With advances in
immunology, the LTT may also become part of this evaluation, although at this time not enough data are available to recommend its routine use. With evidence of beryllium disease, it is unwise to allow a worker to be exposed to beryllium further, even though the workplace meets the threshold criteria for beryllium concentration in the air.

Treatment

The major step in therapy is avoidance of further exposure to beryllium. Corticosteroids are
the primary mode of therapy in chronic beryllium disease. Corticosteroids appear to alter the
course of disease favourably but do not “cure” it.

Corticosteroids should be started on a daily basis at a relatively high dose of prednisone (0.5 to 1 mg per kg or more) and continued until improvement occurs or no further
deterioration in clinical or lung function tests occurs. Usually this takes 4 to 6 weeks. Slow
reduction of steroids is recommended, and eventually alternate-day therapy may be possible.
Steroid therapy ordinarily becomes a lifelong necessity.

Other supportive measures such as supplemental oxygen, diuretics, digitalis and antibiotics
(when infection exists) are indicated as the clinical condition of the patient would dictate.
Immunization against influenza and pneumococcus should also be considered, as with any
patient with chronic respiratory disease.

Pneumoconioses: Definition

Written by ILO Content Manager

The expression pneumoconiosis, from the Greek pneuma (air, wind) and konis (dust), was coined in Germany by Zenker in 1867 to denote changes in the lungs caused by the retention of inhaled dust. Gradually, the need for distinction between the effects of various types of dust became evident. It was necessary to discriminate between mineral and vegetable dusts and their microbiological components. Consequently, the Third International Conference of Experts on
Pneumoconiosis, organized by the ILO in Sydney in 1950, adopted the following definition:
“Pneumoconiosis is a diagnosable disease of the lungs produced by the inhalation of dust, the
term ‘dust’ being understood to refer to particulate matter in the solid phase, but excluding
living organisms.”

However, the word disease seems to imply some degree of health impairment which may not
be the case with pneumoconioses not connected with the development of lung
fibrosis/scarring. In general, the reaction of lung tissue to the presence of dust varies with
different dusts. Non-fibrogenic dusts evoke a tissue reaction in lungs characterized by
minimal fibrotic reaction and absence of lung function impairment. Such dusts, examples of
which are finely divided dusts of kaolinite, titanium dioxide, stannous oxide, barium sulphate
and ferric oxide, are frequently referred to as biologically inert.

Fibrogenic dust such as silica or asbestos causes a more pronounced fibrogenic reaction
resulting in scars in the lung tissue and obvious disease. The division of dusts into fibrogenic
and non-fibrogenic varieties is by no means sharp because there are many minerals, notably
silicates, which are intermediate in their ability to produce fibrotic lesions in the lungs.
Nevertheless, it proved useful for clinical purposes and is reflected in the classification of
pneumoconioses.

A new definition of pneumoconioses was adopted at the Fourth International Conference on


Pneumoconiosis, Bucharest, 1971: “Pneumoconiosis is the accumulation of dust in the lungs
and the tissue reactions to its presence. For the purpose of this definition, ‘dust’ is meant to be
an aerosol composed of solid inanimate particles.”

In order to avoid any misinterpretation, the expression non-neoplastic is sometimes added to


the words “tissue reaction”.

The Working Group at the Conference made the following comprehensive statement:

The Definition of Pneumoconiosis

Earlier on, in 1950, a definition of pneumoconiosis was established at the 3rd International
Conference of Experts on Pneumoconiosis and this has continued to be used until the present
time. In the meantime, the development of new technologies has resulted in more
occupational risks, particularly those related to the inhalation of airborne contaminants.
Increased knowledge in the field of occupational medicine has enabled new pulmonary
diseases of occupational origin to be recognized but has also demonstrated the necessity for a
re-examination of the definition of pneumoconiosis established in 1950. The ILO therefore
arranged for a Working Group to be convened within the framework of the IVth International
Pneumoconiosis Conference in order to examine the question of the definition of
pneumoconiosis. The Working Group held a general discussion on the matter and proceeded
to examine a number of proposals submitted by its members. It finally adopted a new
definition of pneumoconiosis which was prepared together with a commentary. This text is
reproduced below.

In recent years a number of countries have included under pneumoconiosis, for socio-economic reasons, conditions which are manifestly not pneumoconiosis but are nevertheless
occupational pulmonary diseases. Under the term “disease” are included for preventive
reasons the earliest manifestations which are not necessarily disabling or life shortening.
Therefore the Working Group has undertaken to redefine pneumoconiosis as the accumulation
of dust in the lungs and the tissue reactions to its presence. For the purpose of this definition,
“dust” is meant to be an aerosol composed of solid inanimate particles. From a pathological
point of view pneumoconiosis may be divided for the sake of convenience into collagenous or
non-collagenous forms. A non-collagenous pneumoconiosis is caused by a non-fibrogenic
dust and has the following characteristics:

i. the alveolar architecture remains intact


ii. the stromal reaction is minimal and consists mainly of reticulin fibres
iii. the dust reaction is potentially reversible.

Examples of non-collagenous pneumoconiosis are those caused by pure dusts of tin oxide
(stannosis) and barium sulphate (barytosis).

Collagenous pneumoconiosis is characterised by:

i. permanent alteration or destruction of alveolar architecture


ii. collagenous stromal reaction of moderate to maximal degree, and
iii. permanent scarring of lung.
Such collagenous pneumoconiosis may be caused by fibrogenic dusts or by an altered tissue
response to a non-fibrogenic dust.

Examples of collagenous pneumoconiosis caused by fibrogenic dusts are silicosis and


asbestosis, whereas complicated coalworkers’ pneumoconiosis or progressive massive fibrosis
(PMF) is an altered tissue response to a relatively non-fibrogenic dust. In practice, the
distinction between collagenous and non-collagenous pneumoconiosis is difficult to establish.
Continued exposure to the same dust, such as coal dust, may cause transition from a non-
collagenous to a collagenous form. Furthermore, exposure to a single dust is now becoming
less common and exposures to mixed dusts having different degrees of fibrogenic potential
may result in pneumoconiosis which can range from the non-collagenous to the collagenous
forms. There are in addition occupational chronic pulmonary diseases which, although they develop from the inhalation of dust, are excluded from the pneumoconioses because the
particles are not known to accumulate in the lungs. The following are examples of potentially
disabling occupational chronic pulmonary diseases: byssinosis, berylliosis, farmers’ lung, and
related diseases. They have one common denominator, namely the aetiologic component of
dust has sensitized the pulmonary or bronchial tissue so that if the lung tissue responds, the
inflammation tends to be granulomatous and if the bronchial tissue responds, there is apt to be
bronchial constriction. Exposures to noxious inhaled materials in certain industries are
associated with an increased risk of mortality from carcinoma of the respiratory tract.
Examples of such materials are radioactive ores, asbestos and chromates.

Adopted at the IVth ILO International Conference on Pneumoconiosis. Bucharest, 1971.

ILO International Classification of Radiographs of Pneumoconioses

Written by ILO Content Manager

Despite all the national and international energies devoted to their prevention,
pneumoconioses are still very present both in industrialized and developing countries, and are
responsible for the disability and impairment of many workers. This is why the International
Labour Office (ILO), the World Health Organization (WHO) and many national institutes for
occupational health and safety continue to fight against these diseases and to propose
sustainable programmes for preventing them. For instance, the ILO, the WHO and the US
National Institute for Occupational Safety and Health (NIOSH) have proposed in their
programmes to work in cooperation on a global fight against silicosis. Part of this programme
is based on medical surveillance which includes the reading of thoracic radiographs to help
diagnose this pneumoconiosis. This is one example which explains why the ILO, in
cooperation with many experts, has developed and updated on a continuous basis a
classification of radiographs of pneumoconioses that provides a means for recording
systematically the radiographic abnormalities in the chest provoked by the inhalation of dust.
The scheme is designed for classifying the appearances of postero-anterior chest radiographs.

The object of the classification is to codify the radiographic abnormalities of pneumoconioses


in a simple, reproducible manner. The classification does not define pathological entities, nor
take into account working capacity. The classification does not imply legal definitions of
pneumoconioses for compensation purposes, nor imply a level at which compensation is
payable. Nevertheless, the classification has been found to have wider uses than anticipated. It
is now extensively used internationally for epidemiological research, for the surveillance of workers in dusty occupations and for clinical purposes. Use of the scheme may lead to better
international comparability of pneumoconioses statistics. It is also used for describing and
recording, in a systematic way, part of the information needed for assessing compensation.

The most important condition for using this system of classification with full value from a
scientific and ethical point of view is to read, at all times, films to be classified by
systematically referring to the 22 standard films provided in the ILO International
Classification set of standard films. If the reader attempts to classify a film without referring
to any of the standard films, then no mention of reading according to the ILO International
Classification of Radiographs should be made. The risk of deviating from the classification by over- or under-reading is such that the reading should not be used, at least for epidemiological research or for international comparability of pneumoconioses statistics.

The first classification was proposed for silicosis at the First International Conference of
Experts on Pneumoconioses, held in Johannesburg in 1930. It combined both radiographic
appearances and impairment of lung functions. In 1958, a new classification based purely on
radiographic changes was established (Geneva classification 1958). Since then, it has been revised
several times, the last time in 1980, always with the objective of providing improved versions
to be extensively used for clinical and epidemiological purposes. Each new version of the
classification promoted by the ILO has brought modifications and changes based on
international experience gained in the use of earlier classifications.

In order to provide clear instructions for the use of the classification, the ILO issued in 1970 a
publication entitled International Classification of Radiographs of Pneumoconioses/1968 in
the Occupational Safety and Health Series (No. 22). This publication was revised in 1972 as
ILO U/C International Classification of Radiographs of Pneumoconioses/1971 and again in
1980 as Guidelines for the use of ILO International Classification of Radiographs of
Pneumoconioses, revised edition 1980. The description of standard radiographs is given in
table 1.

Table 1. Description of standard radiographs

For each of the 1980 standard radiographs, the technical quality, the profusion and shape/size of small opacities illustrated, any large opacities, the symbols recorded and the comments are listed below.

0/0 (example 1). Technical quality 1; profusion 0/0. Symbols: none. Comments: vascular pattern is well illustrated.

0/0 (example 2). Technical quality 1; profusion 0/0. Symbols: none. Comments: also shows the vascular pattern, but not as clearly as example 1.

1/1; p/p. Technical quality 1; profusion 1/1, shape/size p/p. Symbols: rp. Comments: rheumatoid pneumoconiosis in the left lower zone. Small opacities are present in all zones, but the profusion in the right upper zone is typical of (some would say a little more profuse than) that classifiable as category 1/1.

2/2; p/p. Technical quality 2; profusion 2/2, shape/size p/p. Symbols: pi, tb. Comments: quality defect, radiograph is too light.

3/3; p/p. Technical quality 1; profusion 3/3, shape/size p/p. Symbols: ax. Comments: none.

1/1; q/q. Technical quality 1; profusion 1/1, shape/size q/q. Symbols: none. Comments: illustrates profusion 1/1 better than shape or size.

2/2; q/q. Technical quality 1; profusion 2/2, shape/size q/q. Symbols: none. Comments: none.

3/3; q/q. Technical quality 2; profusion 3/3, shape/size q/q. Symbols: pi. Comments: quality defects, poor definition of pleura and cut basal angles.

1/1; r/r. Technical quality 2; profusion 1/1, shape/size r/r. Symbols: none. Comments: quality defect, subject movement. Profusion of small opacities is more marked in the right lung.

2/2; r/r. Technical quality 2; profusion 2/2, shape/size r/r. Symbols: none. Comments: quality defects, radiograph too light and contrast too high. The heart shadow is slightly displaced to the left.

3/3; r/r. Technical quality 1; profusion 3/3, shape/size r/r. Symbols: ax, ih. Comments: none.

1/1; s/t. Technical quality 2; profusion 1/1, shape/size s/t. Symbols: kl. Comments: quality defect, cut bases. Kerley lines in the lower right zone.

2/2; s/s. Technical quality 2; profusion 2/2, shape/size s/s. Symbols: em. Comments: quality defect, distortion of bases due to shrinking. Emphysema in the upper zones.

3/3; s/s. Technical quality 2; profusion 3/3, shape/size s/s. Symbols: ho, ih, pi. Comments: quality defect, radiograph is too light. Honeycomb lung appearance is not marked.

1/1; t/t (costophrenic angle obliteration). Technical quality 1; profusion 1/1, shape/size t/t. Symbols: none. Comments: this radiograph defines the lower limit for costophrenic angle obliteration. Note shrinkage in the lower lung fields.

2/2; t/t. Technical quality 1; profusion 2/2, shape/size t/t. Symbols: ih. Comments: pleural thickening is present in the apices of the lung.

3/3; t/t. Technical quality 1; profusion 3/3, shape/size t/t. Symbols: hi, ho, id, ih, tb. Comments: none.

1/1; u/u, 2/2; u/u, 3/3; u/u. Comments: this composite radiograph illustrates the mid-categories of profusion of small opacities classifiable for shape and size as u/u.

A. Technical quality 2; profusion 2/2, shape/size p/q; large opacities category A. Symbols: none. Comments: quality defects, radiograph is too light and pleural definition is poor.

B. Technical quality 1; profusion 1/2, shape/size p/q; large opacities category B. Symbols: ax, co. Comments: definition of pleura is slightly imperfect.

C. Technical quality 1; profusion 2/1, shape/size q/t; large opacities category C. Symbols: bu, di, em, es, hi, ih. Comments: the small opacities are difficult to classify because of the presence of the large opacities. Note the left costophrenic angle obliteration; this is not classifiable because it does not reach the lower limit defined by the standard radiograph 1/1; t/t.

Pleural thickening (circumscribed). Comments: the pleural thickening, present face on, is of indeterminate width and of extent 2.

Pleural thickening (diffuse). Comments: the pleural thickening, present in profile, is of width a and extent 2. No associated small calcifications.

Pleural thickening (calcification), diaphragm. Comments: circumscribed, calcified pleural thickening of extent 2.

Pleural thickening (calcification), chest wall. Comments: calcified and uncalcified pleural thickening, present face on, of indeterminate width and extent 2.

ILO 1980 Classification

The 1980 revision was carried out by the ILO with the cooperation of the Commission of the
European Communities, NIOSH and the American College of Radiology. The summary of
the classification is given in table 2. It retained the principle of former classifications (1968
and 1971).

Table 2. ILO 1980 International Classification of Radiographs of Pneumoconioses: Summary


of details of classification

Features Codes Definitions

Technical quality

1 Good.

2 Acceptable, with no
technical defect likely
to impair
classification of the
radiograph of
pneumoconiosis.

3 Poor, with some


technical defect but
still acceptable for
classification
purposes.

4 Unacceptable.

Parenchymal abnormalities

Small opacities Profusion The category of


profusion is based on
assessment of the
concentration of
opacities by
comparison with the
standard radiographs.

0/- 0/0 0/1
1/0 1/1 1/2
2/1 2/2 2/3
3/2 3/3 3/+

Category 0: small opacities absent or less profuse than the lower limit of category 1.

Categories 1, 2 and 3: increasing profusion of small opacities as defined by the corresponding standard radiographs.

Extent RU RM RL
 LU LM LL

The zones in which the opacities are seen are recorded. The right (R) and left (L) thorax are both divided into three zones: upper (U), middle (M) and lower (L). The category of profusion is determined by considering the profusion as a whole over the affected zones of the lung and by comparing this with the standard radiographs.

Shape and
Size

Rounded p/p q/q r/r The letters p, q and r


denote the presence
of small, rounded
opacities. Three sizes
are defined by the
appearances on
standard radiographs:

 p = diameter up to
about 1.5 mm
 q =
diameter exceeding
about 1.5 mm and up
to about 3 mm
 r =
diameter exceeding
about 3 mm and up to
about 10 mm

Irregular s/s t/t u/u The letters s, t and u


denote the presence
of small, irregular
opacities. Three sizes
are defined by the
appearances on
standard radiographs:

 s = width up to
about 1.5 mm
 t =
width exceeding
about 1.5 mm and up
to about 3 mm
 u =
width exceeding
about 3 mm and up to about
10 mm

Mixed p/s p/t p/u p/q p/r
 q/s q/t q/u q/p q/r
 r/s r/t r/u r/p r/q
 s/p s/q s/r s/t s/u
 t/p t/q t/r t/s t/u
 u/p u/q u/r u/s u/t

For mixed shapes (or sizes) of small opacities, the predominant shape and size is recorded first. The presence of a significant number of another shape and size is recorded after the oblique stroke.

Large opacities A B C The categories are


defined in terms of
the dimensions of the
opacities.
 Category
A – an opacity having
a greatest diameter
exceeding about
10 mm and up to and
including 50 mm, or
several opacities each
greater than about
10 mm, the sum of
whose greatest
diameters does not
exceed about 50 mm.

 Category B – one or
more opacities larger
or more numerous
than those in
category A whose
combined area does
not exceed the
equivalent of the right
upper zone.

Category C – one or
more opacities whose
combined area
exceeds the
equivalent of the right
upper zone.

Pleural abnormalities

Pleural thickening

Chest wall Type Two types of pleural


thickening of the
chest wall are
recognized:
circumscribed
(plaques) and diffuse.
Both types may occur
together

Site R L Pleural thickening of


the chest wall is
recorded separately
for the right (R) and
left (L) thorax.

Width a b c For pleural thickening


seen along the lateral
chest wall the
measurement of
maximum width is
made from the inner
line of the chest wall
to the inner margin of
the shadow seen
most sharply at the
parenchymal-pleural
boundary. The
maximum width
usually occurs at the
inner margin of the
rib shadow at its
outermost point.
 a =
maximum width up to
about 5 mm
 b =
maximum width over
about 5 mm and up
to about 10 mm
 c =
maximum width over
about 10 mm

Face on Y N The presence of


pleural thickening
seen face-on is
recorded even if it
can be seen also in
profile. If pleural
thickening is seen
face-on only, width
cannot usually be
measured.

Extent 1 2 3 Extent of pleural


thickening is defined
in terms of the
maximum length of
pleural involvement,
or as the sum of
maximum lengths,
whether seen in
profile or face-on.
 1
= total length
equivalent up to one
quarter of the
projection of the
lateral chest wall
 2 =
total length exceeding
one quarter but not
one half of the
projection of the
lateral chest wall
 3 =
total length exceeding
one half of the
projection of the
lateral chest wall

Diaphragm Presence Y N A plaque involving the


diaphragmatic pleura
is recorded as present
(Y) or absent (N),
separately for the
right (R) and left (L)
thorax.
Site R L

Costophrenic angle obliteration Presence Y N The presence (Y) or absence (N) of costophrenic angle obliteration is recorded separately from thickening over other areas, for the right (R) and left (L) thorax. The lower limit for this obliteration is defined by a standard radiograph.

Site R L If the thickening


extends up the chest
wall, then both
costophrenic angle
obliteration and
pleural thickening
should be recorded.

Pleural
 calcification Site The site and extent of


pleural calcification
are recorded
separately for the two
lungs, and the extent
defined in terms of
dimensions.

Chest wall R L

Diaphragm R L

Other R L “Other” includes


calcification of the
mediastinal and
pericardial pleura.

Extent 1 2 3 1 = an area of
calcified pleura with
greatest diameter up
to about 20 mm, or a
number of such areas
the sum of whose
greatest diameters
does not exceed
about 20 mm.
 2 =
an area of calcified
pleura with greatest
diameter exceeding
about 20 mm and up
to about 100 mm, or
a number of such
areas the sum of
whose greatest
diameters exceeds
about 20 mm but
does not exceed
about 100 mm.
 3 =
an area of calcified
pleura with greatest
diameter exceeding
about 100 mm, or a
number of such areas
whose sum of
greatest diameters
exceeds about
100 mm.

Symbols

It is to be taken that
the definition of each
of the symbols is
preceded by an
appropriate word or
phrase such as
“suspect”, “changes
suggestive of”, or
“opacities suggestive
of”, etc.

ax Coalescence of small
pneumoconiotic
opacities

bu Bulla(e)

ca Cancer of lung or
pleura

cn Calcification in small
pneumoconiotic
opacities
co Abnormality of
cardiac size or shape

cp Cor pulmonale

cv Cavity

di Marked distortion of
the intrathoracic
organs

ef Effusion

em Definite emphysema

es Eggshell calcification
of hilar or mediastinal
lymph nodes

fr Fractured rib(s)

hi Enlargement of hilar
or mediastinal lymph
nodes

ho Honeycomb lung

id Ill-defined diaphragm

ih Ill-defined heart
outline

kl Septal (Kerley) lines

od Other significant
abnormality

pi Pleural thickening in
the interlobar fissure
or mediastinum

px Pneumothorax

rp Rheumatoid
pneumoconiosis

tb Tuberculosis

Comments

Presence Y N Comments should be


recorded pertaining
to the classification of
the radiograph,
particularly if some
other cause is
thought to be
responsible for a
shadow which could
be thought by others
to have been due to
pneumoconiosis; also
to identify
radiographs for which
the technical quality
may have affected the
reading materially.

The Classification is based on a set of standard radiographs, a written text and a set of notes
(OHS No. 22). There are no features to be seen in a chest radiograph which are
pathognomonic of dust exposure. The essential principle is that all appearances which are
consistent with those defined and represented in the standard radiographs and the guideline
for the use of the ILO International Classification, are to be classified. If the reader believes
that any appearance is probably or definitely not dust related, the radiograph should not be
classified but an appropriate comment must be added. The 22 standard radiographs have been
selected after international trials, in such a way as to illustrate the mid-category standards of
profusion of small opacities and to give examples of category A, B and C standards for large
opacities. Pleural abnormalities (diffuse pleural thickening, plaques and obliteration of
costophrenic angle) are also illustrated on different radiographs.
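As a rough illustration of the kind of structured record a single reading under the 1980 scheme produces (technical quality, profusion on the 12-point scale, shape/size, affected zones, large opacities, symbols and comments), the sketch below defines a minimal data structure. The field names and validation rules are illustrative assumptions of this sketch, not an official ILO data format.

# Sketch of a record for one reading under the ILO 1980 scheme summarized in
# table 2. Field names and checks are illustrative, not an ILO specification.

from dataclasses import dataclass, field
from typing import List, Optional

PROFUSION_SCALE = ["0/-", "0/0", "0/1", "1/0", "1/1", "1/2",
                   "2/1", "2/2", "2/3", "3/2", "3/3", "3/+"]
SHAPES = {"p", "q", "r", "s", "t", "u"}
ZONES = {"RU", "RM", "RL", "LU", "LM", "LL"}

@dataclass
class IloReading:
    technical_quality: int                    # 1 (good) to 4 (unacceptable)
    profusion: str                            # one of the 12 subcategories
    shape_size: str                           # e.g. "q/t", predominant first
    zones: List[str] = field(default_factory=list)
    large_opacities: Optional[str] = None     # "A", "B" or "C"
    symbols: List[str] = field(default_factory=list)   # e.g. ["ax", "tb"]
    comment: str = ""

    def __post_init__(self):
        assert self.technical_quality in (1, 2, 3, 4)
        assert self.profusion in PROFUSION_SCALE
        primary, _, secondary = self.shape_size.partition("/")
        assert primary in SHAPES and secondary in SHAPES
        assert all(z in ZONES for z in self.zones)
        assert self.large_opacities in (None, "A", "B", "C")

# Example: a reading resembling the 2/1; q/t standard radiograph
reading = IloReading(technical_quality=1, profusion="2/1", shape_size="q/t",
                     zones=["RU", "RM", "LU"], large_opacities="C",
                     symbols=["bu", "em", "hi"])
print(reading.profusion, reading.shape_size)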

Discussion in particular at the Seventh International Pneumoconioses Conference, held in


Pittsburgh in 1988, indicated the need for improvement of some parts of the classification, in
particular those concerning pleural changes. A discussion group meeting on the revision of
the ILO International Classification of Radiographs of Pneumoconioses was convened in
Geneva by the ILO in November 1989. The experts made the suggestion that the short
classification is of no advantage and can be deleted. As regards pleural abnormalities, the
group agreed that this classification would now be divided into three parts: “Diffuse pleural
thickening”; “Pleural plaques”; and “Costophrenic angle obliteration”. Diffuse pleural
thickening might be divided into chest wall and diaphragm. They were identified according to
the six zones—upper, middle and lower, of both right and left lungs. If a pleural thickening is
circumscribed, it could be identified as a plaque. All plaques should be measured in
centimetres. The obliteration of the costophrenic angle should be systematically noted
(whether it exists or not). It is important to identify whether the costophrenic angle is visible
or not. This is because of its special importance in relation to pleural diffuse thickening.
of the diaphragm should be recorded by an additional symbol since it is a very important
of the diaphragm should be recorded by an additional symbol since it is a very important
feature in asbestos exposure. The presence of plaques should be recorded in these boxes using
the appropriate symbol “c” (calcified) or “h” (hyaline).

A full description of the classification, including its applications and limitations, is found in the
publication (ILO 1980). The revision of the classification of radiographs is a continuous ILO
process, and a revised guideline should be published in the near future (1997-98) taking into
account the recommendations of these experts.

Aetiopathogenesis of pneumoconioses

Written by ILO Content Manager

Pneumoconioses have been recognized as occupational diseases for a long time. Substantial
efforts have been directed to research, primary prevention and medical management. But
physicians and hygienists report that the problem is still present in both industrialized and
industrializing countries (Valiante, Richards and Kinsley 1992; Markowitz 1992). As there is
strong evidence that the three main industrial minerals responsible for the pneumoconioses
(asbestos, coal and silica) will continue to have some economical importance, thus further
entailing possible exposure, it is expected that the problem will continue to be of some
magnitude throughout the world, particularly among underserved populations in small
industries and small mining operations. Practical difficulties in primary prevention, or
insufficient understanding of the mechanisms responsible for the induction and the
progression of the disease are all factors which could possibly explain the continuing presence
of the problem.

The aetiopathogenesis of pneumoconioses can be defined as the appraisal and understanding


of all the phenomena occurring in the lung following the inhalation of fibrogenic dust
particles. The expression cascade of events is often found in the literature on the subject. The
cascade is a series of events that begins with exposure and, at its farthest extent, progresses to the disease in its more severe forms. If we except the rare forms of accelerated silicosis, which
can develop after only a few months of exposure, most of the pneumoconioses develop
following exposure periods measured in decades rather than years. This is especially true
nowadays in workplaces adopting modern standards of prevention. Aetiopathogenesis
phenomena should thus be analysed in terms of their long-term dynamics.

In the last 20 years, a large amount of information has become available on the numerous and
complex pulmonary reactions involved in interstitial lung fibrosis induced by several agents,
including mineral dusts. These reactions were described at the biochemical and cellular level
(Richards, Masek and Brown 1991). Contributions were made not only by physicists and
experimental pathologists but also by clinicians who used bronchoalveolar lavage extensively
as a new pulmonary technique of investigation. These studies pictured aetiopathogenesis as a
very complex entity, which can nonetheless be broken down to reveal several facets: (1) the
inhalation itself of dust particles and the consequent constitution and significance of the
pulmonary burden (exposure-dose-response relationships), (2) the physicochemical
characteristics of the fibrogenic particles, (3) biochemical and cellular reactions inducing the
fundamental lesions of the pneumoconioses and (4) the determinants of progression and
complication. The latter facet must not be ignored, since the more severe forms of
pneumoconioses are the ones which entail impairment and disability.

A detailed analysis of the aetiopathogenesis of the pneumoconioses is beyond the scope of


this article. One would need to distinguish the several types of dust and to go deeply into
numerous specialized areas, some of which are still the subject of active research. But
interesting general notions emerge from the currently available amount of knowledge on the
subject. They will be presented here through the four “facets” previously mentioned and the
bibliography will refer the interested reader to more specialized texts. Examples will be
essentially given for the three main and most documented pneumoconioses: asbestosis, coal
workers’ pneumoconioses (CWP) and silicosis. Possible impacts on prevention will be
discussed.

Exposure-Dose-Response Relationships

Pneumoconioses result from the inhalation of certain fibrogenic dust particles. In the physics
of aerosols, the term dust has a very precise meaning (Hinds 1982). It refers to airborne
particles obtained by mechanical comminution of a parent material in a solid state. Particles
generated by other processes should not be called dust. Dust clouds in various industrial
settings (e.g., mining, tunnelling, sand blasting and manufacturing) generally contain a
mixture of several types of dust. The airborne dust particles do not have a uniform size. They
exhibit a size distribution. Size and other physical parameters (density, shape and surface
charge) determine the aerodynamic behaviour of the particles and the probability of their
penetration and deposition in the several compartments of the respiratory system.
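As a minimal numerical illustration of how size and density combine to govern aerodynamic behaviour, the sketch below applies the usual approximation for smooth spheres, in which the aerodynamic diameter equals the geometric diameter scaled by the square root of particle density relative to unit density. Ignoring shape factor and slip correction, and the example particle itself, are assumptions of this sketch.

# Simplified illustration of how density and size combine into aerodynamic
# behaviour: aerodynamic diameter of a smooth spherical particle, ignoring
# shape factor and slip correction.

from math import sqrt

UNIT_DENSITY = 1.0  # g/cm3, density of the reference water droplet

def aerodynamic_diameter(geometric_diameter_um, particle_density_g_cm3):
    """Approximate aerodynamic diameter (um) of a spherical particle."""
    return geometric_diameter_um * sqrt(particle_density_g_cm3 / UNIT_DENSITY)

# A 2 um quartz sphere (density about 2.65 g/cm3) behaves like a roughly 3.3 um
# unit-density droplet, which is what respirable-dust sampling responds to.
print(round(aerodynamic_diameter(2.0, 2.65), 2))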

In the field of pneumoconioses, the compartment of interest is the alveolar compartment. Airborne particles small enough to reach this compartment are referred to as respirable particles. Not all particles reaching the alveolar compartment are deposited; some are still present in the exhaled air. The physical mechanisms responsible for
deposition are now well understood for isometric particles (Raabe 1984) as well as for fibrous
particles (Sébastien 1991). The functions relating the probability of deposition to the physical
parameters have been established. Respirable particles and particles deposited in the alveolar
compartment have slightly different size characteristics. For non-fibrous particles, size-
selective air sampling instruments and direct reading instruments are used to measure mass
concentrations of respirable particles. For fibrous particles, the approach is different. The
measuring technique is based upon filter collection of “total dust” and counting of fibres
under the optical microscope. In this case, the size selection is made by excluding from the
count the “non-respirable” fibres with dimensions exceeding predetermined criteria.

Following the deposition of particles on the alveolar surfaces there starts the so-called
alveolar clearance process. Chemotactic recruitment of macrophages and phagocytosis
constitute its first phases. Several clearance pathways have been described: removal of dust-
laden macrophages toward the ciliated airways, interaction with the epithelial cells and
transfer of free particles through the alveolar membrane, phagocytosis by interstitial
macrophages, sequestration into the interstitial area and transportation to the lymph nodes
(Lauweryns and Baert 1977). Clearance pathways have specific kinetics. Not only the
exposure regimen, but also the physicochemical characteristics of the deposited particles,
trigger the activation of the different pathways responsible for the lung’s retention of such
contaminants.

The notion of a retention pattern specific to each type of dust is rather new, but is now
sufficiently established to be integrated into aetiopathogenesis schemes. For example, this
author has found that after long term exposure to asbestos, fibres will accumulate in the lung
if they are of the amphibole type, but will not if they are of the chrysotile type (Sébastien
1991). Short fibres have been shown to be cleared more rapidly than longer ones. Quartz is
known to exhibit some lymph tropism and readily penetrates the lymphatic system. Modifying
the surface chemistry of quartz particles has been shown to affect alveolar clearance
(Hemenway et al. 1994; Dubois et al. 1988). Concomitant exposure to several dust types may
also influence alveolar clearance (Davis, Jones and Miller 1991).

During alveolar clearance, dust particles may undergo some chemical and physical changes.
Examples of these changes include coating with ferruginous material, the leaching of some
elemental constituents and the adsorption of some biological molecules.

Another notion recently derived from animal experiments is that of “lung overload”
(Mermelstein et al. 1994). Rats heavily exposed by inhalation to a variety of insoluble dusts
developed similar responses: chronic inflammation, increased numbers of particle-laden
macrophages, increased numbers of particles in the interstitium, septal thickening,
lipoproteinosis and fibrosis. These findings were not attributed to the reactivity of the dust
tested (titanium dioxide, volcanic ash, fly ash, petroleum coke, polyvinyl chloride, toner,
carbon black and diesel exhaust particulates), but to an excessive exposure of the lung. It is
not known if lung overload must be considered in the case of human exposure to fibrogenic
dusts.

Among the clearance pathways, the transfer towards the interstitium would be of particular
importance for pneumoconioses. Clearance of particles having undergone sequestration into
the interstitium is much less effective than clearance of particles engulfed by macrophages in
the alveolar space and removed by ciliated airways (Vincent and Donaldson 1990). In
humans, it was found that after long-term exposure to a variety of inorganic airborne
contaminants, the storage was much greater in interstitial than alveolar macrophages
(Sébastien et al. 1994). The view was also expressed that silica-induced pulmonary fibrosis
involves the reaction of particles with interstitial rather than alveolar macrophages (Bowden,
Hedgecock and Adamson 1989). Retention is responsible for the “dose”, a measure of the
contact between the dust particles and their biological environment. A proper description of
the dose would require that one know at each point in time the amount of dust stored in the
several lung structures and cells, the physicochemical states of the particles (including the
surface states), and the interactions between the particles and the pulmonary cells and fluids.
Direct assessment of dose in humans is obviously an impossible task, even if methods were
available to measure dust particles in several biological samples of pulmonary origin such as
sputum, bronchoalveolar lavage fluid or tissue taken at biopsy or autopsy (Bignon, Sébastien
and Bientz 1979). These methods were used for a variety of purposes: to provide information
on retention mechanisms, to validate certain exposure information, to study the role of several
dust types in pathogenic developments (e.g., amphiboles versus chrysotile exposure in
asbestosis or quartz versus coal in CWP) and to assist in diagnosis.

But these direct measurements provide only a snapshot of retention at the time of sampling
and do not allow the investigator to reconstitute dose data. New dosimetric models offer
interesting perspectives in that regard (Katsnelson et al. 1994; Smith 1991; Vincent and
Donaldson 1990). These models aim at assessing dose from exposure information by
considering the probability of deposition and the kinetics of the different clearance pathways.
Recently there was introduced into these models the interesting notion of “harmfulness
delivery” (Vincent and Donaldson 1990). This notion takes into account the specific reactivity
of the stored particles, each particle being considered as a source liberating some toxic entities
into the pulmonary milieu. In the case of quartz particles for example, it could be
hypothesized that some surface sites could be the source of active oxygen species. Models
developed along such lines could also be refined to take into account the great interindividual
variation generally observed with alveolar clearance. This was experimentally documented
with asbestos, “high retainer animals” being at greater risk of developing asbestosis (Bégin
and Sébastien 1989).

So far, these models have been used exclusively by experimental pathologists. But they could also
be useful to epidemiologists (Smith 1991). Most epidemiological studies looking at exposure
response relationships relied on “cumulative exposure”, an exposure index obtained by
integrating over time the estimated concentrations of airborne dust to which workers had been
exposed (product of intensity and duration). The use of cumulative exposure has some
limitations. Analyses based on this index implicitly assume that duration and intensity have
equivalent effects on risk (Vacek and McDonald 1991).
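A minimal sketch of both approaches is given below: the conventional cumulative-exposure index and a very simple one-compartment retention model with first-order clearance. The clearance half-time and the two job histories are hypothetical and are chosen only to show that equal cumulative exposures can correspond to very different retained burdens, which is the limitation noted above.

# Sketch contrasting the conventional cumulative-exposure index with a very
# simple one-compartment retention ("dose") model. The clearance half-time and
# job histories below are hypothetical illustrations, not measured values.

from math import exp, log

def cumulative_exposure(history):
    """history: list of (years, mg_m3) segments; returns mg/m3-years."""
    return sum(years * conc for years, conc in history)

def retained_burden(history, half_time_years=4.0, step=0.1):
    """Lung burden (arbitrary units) at the end of the history, assuming
    deposition proportional to concentration and first-order clearance."""
    k = log(2) / half_time_years
    burden = 0.0
    for years, conc in history:
        t = 0.0
        while t < years:
            burden = burden * exp(-k * step) + conc * step
            t += step
    return burden

short_intense = [(5, 4.0), (25, 0.0)]   # 5 years at 4 mg/m3, then none
long_dilute = [(30, 0.67)]              # roughly the same cumulative exposure

print(cumulative_exposure(short_intense), cumulative_exposure(long_dilute))
print(round(retained_burden(short_intense), 1), round(retained_burden(long_dilute), 1))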

Perhaps the use of these sophisticated dosimetric models could provide some explanation for a common observation in the epidemiology of pneumoconioses: the considerable between-workforce differences in risk. This phenomenon was clearly observed for asbestosis (Becklake
1991) and for CWP (Attfield and Morring 1992). When relating the prevalence of the disease
to the cumulative exposure, great differences—up to 50-fold—in risk were observed between
some occupational groups. The geological origin of the coal (coal rank) provided partial
explanation for CWP, mining deposits of high rank coal (a coal with high carbon content, like
anthracite) yielding greater risk. The phenomenon remains to be explained in the case of
asbestosis. Uncertainties about the proper exposure-response curve have some bearing, at least theoretically, on the outcome, even at current exposure standards.

More generally, exposure metrics are essential in the process of risk assessment and the
establishment of control limits. The use of the new dosimetric models may improve the
process of risk assessment for pneumoconioses with the ultimate goal of increasing the degree
of protection offered by control limits (Kriebel 1994).

Physicochemical Characteristics of Fibrogenic Dust Particles

A toxicity specific to each type of dust, related to the physicochemical characteristics of the
particles (including the more subtle ones such as the surface characteristics), constitutes
probably the most important notion to have emerged progressively during the last 20 years. In
the very earliest stages of research, no differentiation was made among “mineral dusts”.
Then generic categories were introduced: asbestos, coal, artificial inorganic fibres,
phyllosilicates and silica. But this classification was found to be not precise enough to account
for the variety in observed biological effects. Nowadays a mineralogical classification is used.
For example, the several mineralogical types of asbestos are distinguished: serpentine
chrysotile, amphibole amosite, amphibole crocidolite and amphibole tremolite. For silica, a
distinction is generally made between quartz (by far the most prevalent), other crystalline
polymorphs, and amorphous varieties. In the field of coal, high rank and low rank coals
should be treated separately, since there is strong evidence that the risk of CWP and
especially the risk of progressive massive fibrosis is much greater after exposure to dust
produced in high rank coal mines.

But the mineralogical classification also has some limits. There is evidence, both
experimental and epidemiological (taking into account “between-workforce differences”), that
the intrinsic toxicity of a single mineralogical type of dust can be modulated by acting on the
physicochemical characteristics of the particles. This raised the difficult question of the
toxicological significance of each of the numerous parameters which can be used to describe a
dust particle and a dust cloud. At the single particle level, several parameters can be
considered: bulk chemistry, crystalline structure, shape, density, size, surface area, surface
chemistry and surface charge. Dealing with dust clouds adds another level of complexity
because of the distribution of these parameters (e.g., size distribution and the composition of
mixed dust).

The size of the particles and their surface chemistry were the two parameters most studied to
explain the modulation effect. As seen before, retention mechanisms are size related. But size
may also modulate the toxicity in situ, as demonstrated by numerous animal and in vitro
studies.

In the field of mineral fibres, the size was considered of so much importance that it
constituted the basis of a pathogenesis theory. This theory attributed the toxicity of fibrous
particles (natural and artificial) to the shape and size of the particles, leaving no role for the
chemical composition. In dealing with fibres, size must be broken down into length and
diameter. A two-dimensional matrix should be used to report size distributions, the useful
ranges being 0.03 to 3.0 μm for diameter and 0.3 to 300 μm for length (Sébastien 1991).
Integrating the results of the numerous studies, Lippman (1988) assigned a toxicity index to
several cells of the matrix. There is a general tendency to believe that long and thin fibres are
the most dangerous ones. Since the standards currently used in industrial hygiene are based
upon the use of the optical microscope, they ignore the thinnest fibres. While assessing the specific toxicity of each cell within the matrix has some academic interest, its practical interest is limited by the fact that each type of fibre is associated with a specific size distribution that is relatively uniform. For compact particles, such as coal and silica, the evidence for a possible specific role of the different size sub-fractions of the particles deposited in the alveolar region of the lung is unclear.
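As an illustration of such a two-dimensional size matrix, the sketch below bins measured fibre dimensions over the diameter and length ranges quoted above. The bin edges and the sample fibres are arbitrary illustrative choices of this sketch, not a standard counting rule.

# Sketch of the two-dimensional size matrix: fibres are binned by diameter
# (0.03-3.0 um) and length (0.3-300 um) using illustrative log-spaced edges.

from collections import Counter

DIAM_EDGES = [0.03, 0.1, 0.3, 1.0, 3.0]      # um
LEN_EDGES = [0.3, 3.0, 30.0, 300.0]          # um

def bin_index(value, edges):
    """Index of the bin containing value, or None if outside the matrix."""
    for i in range(len(edges) - 1):
        if edges[i] <= value < edges[i + 1]:
            return i
    return None

def size_matrix(fibres):
    """fibres: list of (diameter_um, length_um); returns a Counter keyed by
    (diameter_bin, length_bin)."""
    counts = Counter()
    for d, l in fibres:
        di, li = bin_index(d, DIAM_EDGES), bin_index(l, LEN_EDGES)
        if di is not None and li is not None:
            counts[(di, li)] += 1
    return counts

# Example: four hypothetical fibres measured by electron microscopy
sample = [(0.05, 12.0), (0.2, 45.0), (1.5, 8.0), (0.08, 1.0)]
print(size_matrix(sample))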

More recent pathogenesis theories in the field of mineral dust imply active chemical sites (or
functionalities) present at the surface of the particles. When the particle is “born” by
separation from its parent material, some chemical bonds are broken in either a heterolytic or
a homolytic way. What occurs during breaking and subsequent recombinations or reactions
with ambient air molecules or biological molecules makes up the surface chemistry of the
particles. Regarding quartz particles for example, several chemical functionalities of special
interest have been described: siloxane bridges, silanol groups, partially ionized groups and
silicon-based radicals.

These functionalities can initiate both acid-base and redox reactions. Only recently has
attention been drawn to the latter (Dalal, Shi and Vallyathan 1990; Fubini et al. 1990; Pézerat
et al. 1989; Kamp et al. 1992; Kennedy et al. 1989; Bronwyn, Razzaboni and Bolsaitis 1990).
There is now good evidence that particles with surface-based radicals can produce reactive
oxygen species, even in a cellular milieu. It is not certain if all the production of oxygen
species should be attributed to the surface-based radicals. It is speculated that these sites may
trigger the activation of lung cells (Hemenway et al. 1994). Other sites may be involved in the
membranolytic activity of the cytotoxic particles with reactions such as ionic attraction,
hydrogen bonding and hydrophobic bonding (Nolan et al. 1981; Heppleston 1991).

Following the recognition of surface chemistry as an important determinant of dust toxicity,


several attempts were made to modify the natural surfaces of mineral dust particles to reduce
their toxicity, as assessed in experimental models.
Adsorption of aluminium on quartz particles was found to reduce their fibrogenicity and to
favour alveolar clearance (Dubois et al. 1988). Treatment with polyvinylpyridine-N-oxide
(PVPNO) had also some prophylactic effect (Goldstein and Rendall 1987; Heppleston 1991).
Several other modifying processes were used: grinding, thermal treatment, acid etching and
adsorption of organic molecules (Wiessner et al. 1990). Freshly fractured quartz particles
exhibited the highest surface activity (Kuhn and Demers 1992; Vallyathan et al. 1988).
Interestingly enough, every departure from this “fundamental surface” led to a decrease in
quartz toxicity (Sébastien 1990). The surface purity of several naturally occurring quartz
varieties could be responsible for some observed differences in toxicity (Wallace et al. 1994).
Some data support the idea that the amount of uncontaminated quartz surface is an important
parameter (Kriegseis, Scharman and Serafin 1987).

The multiplicity of the parameters, together with their distribution in the dust cloud, yields a
variety of possible ways to report air concentrations: mass concentration, number
concentration, surface area concentration and concentration in various size categories. Thus,
numerous indices of exposure can be constructed and the toxicological significance of each
has to be assessed. The current standards in occupational hygiene reflect this multiplicity. For
asbestos, the standards are based on the numerical concentration of fibrous particles in a
certain geometrical size category. For silica and coal, the standards are based on the mass
concentration of respirable particles. Some standards have also been developed for exposure
to mixtures of particles containing quartz. No standard is based upon surface characteristics.
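The sketch below illustrates, for an idealized monodisperse aerosol of smooth spheres, how a single particle population translates into number, mass and surface-area concentrations. The diameter, density and number concentration used in the example are hypothetical, and real dust clouds are of course polydisperse.

# Sketch of how one aerosol can be expressed as number, mass or surface-area
# concentration. Assumes monodisperse smooth spheres, which real dusts are not.

from math import pi

def aerosol_metrics(number_per_cm3, diameter_um, density_g_cm3):
    """Mass (mg/m3) and surface-area (cm2/m3) concentration of a monodisperse
    aerosol described by its number concentration (particles/cm3)."""
    volume_um3 = (pi / 6.0) * diameter_um ** 3          # volume per particle
    surface_um2 = pi * diameter_um ** 2                 # surface per particle
    mass_mg_m3 = number_per_cm3 * density_g_cm3 * volume_um3 * 1e-3
    surface_cm2_m3 = number_per_cm3 * surface_um2 * 1e-2
    return mass_mg_m3, surface_cm2_m3

# 100 quartz-like spheres per cm3, 2 um diameter, density 2.65 g/cm3
print(aerosol_metrics(100, 2.0, 2.65))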

Biological Mechanisms Inducing the Fundamental Lesions

Pneumoconioses are interstitial fibrous lung diseases, the fibrosis being diffuse or nodular.
The fibrotic reaction involves the activation of the lung fibroblast (Goldstein and Fine 1986)
and the production and metabolism of the connective tissue components (collagen, elastin and
glycosaminoglycans). It is considered to represent a late healing stage after lung injury
(Niewoehner and Hoidal 1982). Even if several factors, essentially related to the
characteristics of exposure, can modulate the pathological response, it is interesting to note
that each type of pneumoconiosis is characterized by what could be called a fundamental
lesion. The fibrosing alveolitis around the peripheral airways constitutes the fundamental
lesion of asbestos exposure (Bégin et al. 1992). The silicotic nodule is the fundamental lesion
of silicosis (Ziskind, Jones and Weil 1976). Simple CWP is composed of dust macules and
nodules (Seaton 1983).

The pathogenesis of the pneumoconioses is generally presented as a cascade of events whose


sequence runs as follows: alveolar macrophage alveolitis, signalling by inflammatory cell
cytokines, oxidative damage, proliferation and activation of fibroblasts and the metabolism of
collagen and elastin. Alveolar macrophage alveolitis is a characteristic reaction to retention of
fibrosing mineral dust (Rom 1991). The alveolitis is defined by increased numbers of
activated alveolar macrophages releasing excessive quantities of mediators including
oxidants, chemotaxins, fibroblast growth factors and protease. Chemotaxins attract
neutrophils and, together with macrophages, may release oxidants capable of injuring alveolar
epithelial cells. Fibroblast growth factors gain access to the interstitium, where they signal
fibroblasts to replicate and increase the production of collagen.

The cascade starts at the first encounter of particles deposited in the alveoli. With asbestos for
example, the initial lung injury occurs almost immediately after exposure at the alveolar duct
bifurcations. After only 1 hour of exposure in animal experiments, there is active uptake of
fibres by type I epithelial cells (Brody et al. 1981). Within 48 hours, increased numbers of
alveolar macrophages accumulate at sites of deposition. With chronic exposure, this process
may lead to peribronchiolar fibrosing alveolitis.

The exact mechanism by which deposited particles produce primary biochemical injury to the
alveolar lining, a specific cell, or any of its organelles, is unknown. It may be that extremely
rapid and complex biochemical reactions result in free radical formation, lipid peroxidation,
or a depletion in some species of vital cell protectant molecule. It has been shown that mineral
particles can act as catalytic substrates for hydroxyl and superoxide radical generation
(Guilianelli et al. 1993).

At the cellular level, there is slightly more information. After deposition at the alveolar level,
the very thin epithelial type I cell is readily damaged (Adamson, Young and Bowden 1988).
Macrophages and other inflammatory cells are attracted to the damage site and the
inflammatory response is amplified by the release of arachidonic acid metabolites such as
prostaglandins and leukotrienes together with exposure of the basement membrane (Holtzman
1991; Kuhn et al. 1990; Engelen et al. 1989). At this stage of primary damage, the lung
architecture becomes disorganized, showing an interstitial oedema.

During the chronic inflammatory process, both the surface of the dust particles and the
activated inflammatory cells release increased amounts of reactive oxygen species in the
lower respiratory tract. The oxidative stress in the lung has some detectable effects on the
antioxidant defense system (Heffner and Repine 1989), with expression of antioxidant
enzymes like superoxide dismutase, glutathione peroxidases and catalase (Engelen et al.
1990). These factors are located in the lung tissue, the interstitial fluid and the circulating
erythrocytes. The profiles of antioxidant enzymes may depend on the type of fibrogenic dust
(Janssen et al. 1992). Free radicals are known mediators of tissue injury and disease (Kehrer
1993).

Interstitial fibrosis results from a repair process. There are numerous theories to explain
how the repair process takes place. The macrophage/fibroblast interaction has received the
greatest attention. Activated macrophages secrete a network of proinflammatory fibrogenic
cytokines: TNF, IL-1, transforming growth factor and platelet-derived growth factor. They
also produce fibronectin, a cell surface glycoprotein which acts as a chemical attractant and,
under some conditions, as a growth stimulant for mesenchymal cells. Some authors consider certain factors more important than others. For example, special importance has been ascribed to TNF in the pathogenesis of silicosis: collagen deposition after silica instillation in mice was almost completely prevented by an anti-TNF antibody (Piguet et al. 1990). The release of platelet-derived growth factor and
transforming growth factor was presented as playing an important role in the pathogenesis of
asbestosis (Brody 1993).

Unfortunately, many of the macrophage/fibroblast theories tend to ignore the potential balance between the fibrogenic cytokines and their inhibitors (Kelley 1990). In fact, the resulting imbalances between oxidants and antioxidants, proteases and antiproteases, arachidonic acid metabolites, elastases and collagenases, and between the various cytokines and growth factors would determine the abnormal remodelling of the interstitium towards the several forms of pneumoconiosis (Porcher et al. 1993). In the pneumoconioses, the balance is clearly tipped towards an overwhelming effect of the damaging cytokine activities.

Because type I cells are incapable of division, the epithelial barrier is replaced with type II cells after the primary insult (Lesur et al. 1992). There is some indication that if this epithelial repair process is successful and the regenerating type II cells are not damaged further, fibrogenesis is unlikely to proceed. Under some conditions, repair by the type II cell is taken to excess, resulting in alveolar proteinosis; this process was clearly demonstrated after silica exposure (Heppleston 1991). To what extent the alterations in epithelial cells influence the fibroblasts is uncertain. Nevertheless, it would seem that fibrogenesis is initiated in areas of extensive epithelial damage, as fibroblasts replicate, then differentiate and produce more collagen, fibronectin and other components of the extracellular matrix.

There is abundant literature on the biochemistry of the several types of collagen formed in
pneumoconioses (Richards, Masek and Brown 1991). The metabolism of such collagen and
its stability in the lung are important elements of the fibrogenesis process. The same probably
holds for the other components of the damaged connective tissue. The metabolism of collagen
and elastin is of particular interest in the healing phase since these proteins are so important to
lung structure and function. It has been very nicely shown that alterations in the synthesis of
these proteins might determine whether emphysema or fibrosis evolves after lung injury
(Niewoehner and Hoidal 1982). In the disease state, mechanisms such as an increase in
transglutaminase activity could favour the formation of stable protein masses. In some CWP
fibrotic lesions, the protein components account for one-third of the lesion, the rest being dust
and calcium phosphate.

Considering only collagen metabolism, several stages of fibrosis are possible, some of which
are potentially reversible while others are progressive. There is experimental evidence that
unless a critical exposure is exceeded, the early lesions can regress and irreversible fibrosis is
an unlikely outcome. In asbestosis for example, several types of lung reactions were described
(Bégin, Cantin and Massé 1989): a transient inflammatory reaction without lesion, a low
retention reaction with fibrotic scar limited to the distal airways, a high inflammatory reaction
sustained by the continuous exposure and the weak clearance of the longest fibres.

It can be concluded from these studies that exposure to fibrotic dust particles is able to trigger
several complex biochemical and cellular pathways involved in lung injury and repair.
Exposure regimen, physicochemical characteristics of the dust particles, and possibly
individual susceptibility factors seem to be the determinants of the fine balance among the
several pathways. Physicochemical characteristics will determine the type of the ultimate
fundamental lesion. Exposure regimen seems to determine the time course of events. There is
some indication that sufficiently low exposure regimens can in most cases limit the lung
reaction to non-progressive lesions with no disability or impairment.

Medical surveillance and screening always have been part of the strategies for the prevention
of pneumoconioses. In that context, the possibility of detecting some early lesions is
advantageous. Increased knowledge of pathogenesis paved the way to the development of
several biomarkers (Borm 1994) and to the refinement and use of “non-classical” pulmonary
investigation techniques such as the measurement of the clearance rate of deposited
technetium-99m diethylenetriamine penta-acetate (99mTc-DTPA) to assess pulmonary epithelial
integrity (O’Brodovich and Coates 1987), and quantitative gallium-67 lung scan to assess
inflammatory activity (Bisson, Lamoureux and Bégin 1987).
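
As a purely illustrative aside (not part of the original discussion), the clearance rate measured in such 99mTc-DTPA studies is commonly summarized, assuming a monoexponential decline of lung radioactivity after aerosol deposition, as a clearance half-time:

A(t) = A_0 \, e^{-kt}, \qquad T_{1/2} = \frac{\ln 2}{k}

where A(t) is the lung activity at time t and k is the fitted rate constant; a shorter half-time (faster clearance) is generally taken to indicate increased epithelial permeability.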

Several biomarkers were considered in the field of pneumoconioses: sputum macrophages, serum growth factors, serum type III procollagen peptide, red blood cell antioxidants,
fibronectin, leucocyte elastase, neutral metalloendopeptidase and elastin peptides in plasma,
volatile hydrocarbons in exhaled air and TNF release by peripheral blood monocytes.
Biomarkers are conceptually quite interesting, but many more studies are necessary to assess
their significance precisely. This validation effort will be quite demanding, since it will
require investigators to conduct prospective epidemiological studies. Such an effort was
carried out recently for TNF release by peripheral blood monocytes in CWP. TNF was found
to be an interesting marker of CWP progression (Borm 1994). Besides the scientific aspects of
the significance of biomarkers in the pathogenesis of pneumoconioses, other issues related to
the use of biomarkers must be examined carefully (Schulte 1993), namely, opportunities for
prevention, impact on occupational medicine and ethical and legal problems.

Progression and Complication of Pneumoconioses

In the early decades of this century, pneumoconiosis was regarded as a disease that disabled
the young and killed prematurely. In industrialized countries, it is now generally regarded as
no more than a radiological abnormality, without impairment or disability (Sadoul 1983).
However, two observations should be set against this optimistic statement. First, even if pneumoconiosis remains a relatively silent and asymptomatic disease under limited exposure, it should be recognized that the disease may progress towards more severe and disabling forms.
Factors affecting this progression are definitely important to consider as part of the
aetiopathogenesis of the condition. Secondly, there is now evidence that some
pneumoconioses can affect general health outcome and can be a contributing factor for lung
cancer.

The chronic and progressive nature of asbestosis has been documented from the initial
subclinical lesion to clinical asbestosis (Bégin, Cantin and Massé 1989). Modern pulmonary
investigation techniques (BAL, CT scan, gallium-67 lung uptake) revealed that inflammation
and injury was continuous from the time of exposure, through the latent or subclinical phase,
to the development of the clinical disease. It has been reported (Bégin et al. 1985) that 75% of
subjects who initially had a positive gallium-67 scan but did not have clinical asbestosis at
that time, did progress to “full-blown” clinical asbestosis over a four-year period. In both
humans and experimental animals, asbestosis may progress after disease recognition and
exposure cessation. It is highly probable that exposure history prior to recognition is an
important determinant of progression. Some experimental data support the notion of non-
progressive asbestosis associated with light induction exposure and exposure cessation at
recognition (Sébastien, Dufresne and Bégin 1994). Assuming that the same notion applies to
humans, it would be of the first importance to establish precisely the metrics of “light
induction exposure”. In spite of all the efforts at screening working populations exposed to
asbestos, this information is still lacking.

It is well known that asbestos exposure can lead to an excess risk of lung cancer. Even though it is accepted that asbestos is a carcinogen per se, it has long been debated whether the risk of
lung cancer among asbestos workers was related to the exposure to asbestos or to the lung
fibrosis (Hughes and Weil 1991). This issue is not resolved yet.

Owing to continuous improvement of working conditions in modern mining facilities, CWP is nowadays a disease affecting essentially retired miners. Whereas simple CWP is a condition without symptoms and without demonstrable effect on lung function, progressive massive fibrosis (PMF) is a much more severe condition, with major structural alterations of the lung, deficits of lung function and reduced life expectancy. Many studies have aimed at identifying
the determinants of progression towards PMF (heavy retention of dust in the lung, coal rank,
mycobacterial infection or immunological stimulation). A unifying theory was proposed
(Vanhee et al. 1994), based upon a continuous and severe alveolar inflammation with
activation of the alveolar macrophages and substantial production of reactive oxygen species,
chemotactic factors and fibronectin. Other complications of CWP include mycobacterial
infection, Caplan’s syndrome and scleroderma. There is no evidence of elevated risk of lung
cancer among coal miners.

The chronic form of silicosis follows exposure, measured in decades rather than years, to
respirable dust generally containing less than 30% quartz. However, in cases of uncontrolled exposure to quartz-rich dust (historical exposures with sandblasting, for example), acute and
accelerated forms can be found after only several months. Cases of acute and accelerated
disease are particularly at risk of complication by tuberculosis (Ziskind, Jones and Weil
1976). Progression may also occur, with the development of large lesions that obliterate lung
structure, called either complicated silicosis or PMF.

A few studies examined the progression of silicosis in relation to exposure and yielded
diverging results about the relationships between progression and exposure, before and after
onset (Hessel et al. 1988). Recently, Infante-Rivard et al. (1991) studied the prognostic factors
influencing the survival of compensated silicotic patients. Patients with small opacities alone
on their chest radiograph and who did not have dyspnoea, expectoration or abnormal breath
sounds had a survival similar to that of the referents. Other patients had a poorer survival.
Finally, one should mention the recent concern about silica, silicosis and lung cancer. There is
some evidence for and against the proposition that silica per se is carcinogenic (Agius 1992).
Silica may synergize potent environmental carcinogens, such as those in tobacco smoke,
through a relatively weak promoting effect on carcinogenesis or by impairing their clearance.
Moreover, the disease process associated with or leading to silicosis might carry an increased
risk of lung cancer.

Nowadays, progression and complication of pneumoconioses could be considered as a key issue for medical management. The use of classical pulmonary investigation techniques has
been refined for early recognition of the disease (Bégin et al. 1992), at a stage where
pneumoconiosis is limited to its radiological manifestation, without impairment or disability.
In the near future, it is probable that a battery of biomarkers will be available to document
even earlier stages of the disease. The question of whether a worker diagnosed with
pneumoconiosis—or documented to be in its earlier stages—should be allowed to continue
with his or her job has puzzled occupational health decision makers for some time. It is a
rather difficult question which entails ethical, social and scientific considerations. While an overwhelming scientific literature is available on the induction of pneumoconiosis, the information on progression usable by decision makers is rather sparse and somewhat confusing. A few attempts have been made to study the roles of variables such as exposure history, dust retention and medical condition at onset. The relationships between all these variables do
complicate the issue. Recommendations are made for health screening and surveillance of
workers exposed to mineral dust (Wagner 1996). Programmes have already been, or will soon be, put in place accordingly. Such programmes would definitely benefit from better scientific
knowledge on progression, and especially on the relation between exposure and retention
characteristics.

Discussion
The information brought by many scientific disciplines to bear upon the aetiopathogenesis of
the pneumoconioses is overwhelming. The major difficulty now is to reassemble the scattered
elements of the puzzle into unifying mechanistic pathways leading to the fundamental lesions
of the pneumoconioses. Without this necessary integration, we would be left with the contrast
between a few fundamental lesions, and very numerous biochemical and cellular reactions.

Our knowledge of aetiopathogenesis has so far influenced the practices of occupational hygiene only to a limited extent, in spite of the strong intention of hygienists to operate
according to standards having some biological significance. Two main notions were
incorporated in their practices: the size selection of respirable dust particles and the dust type
dependence of toxicity. The latter yielded some limits specific to each type of dust. The
quantitative risk assessment, a necessary step in defining exposure limits, constitutes a
complicated exercise for several reasons, such as the variety of possible exposure indices,
poor information on past exposure, the difficulty one has with epidemiological models in
dealing with multiple indices of exposure and the difficulty in estimating dose from exposure
information. The current exposure limits, embodying sometimes considerable uncertainty, are
probably low enough to offer good protection. The between-workforce differences observed in exposure-response relationships, however, reflect our incomplete control of the
phenomenon.

The impact of newer understanding of the cascade of events in the pathogenesis of the
pneumoconioses has not modified the traditional approach to workers’ surveillance, but has
significantly helped physicians in their capacity to recognize the disease (pneumoconiosis)
early, at a time when the disease has had only a limited impact on lung function. It is indeed
subjects at the early stage of disease that should be recognized and withdrawn from further
significant exposure if prevention of disability is to be achieved by medical surveillance.

Silicosis

Written by ILO Content Manager

Silicosis is a fibrotic disease of the lungs caused by the inhalation, retention and pulmonary
reaction to crystalline silica. Despite knowledge of the cause of this disorder—respiratory
exposures to silica containing dusts—this serious and potentially fatal occupational lung
disease remains prevalent throughout the world. Silica, or silicon dioxide, is the predominant
component of the earth’s crust. Occupational exposure to silica particles of respirable size
(aerodynamic diameter of 0.5 to 5 μm) is associated with mining, quarrying, drilling, tunnelling and abrasive blasting with quartz-containing materials (sandblasting). Silica exposure also poses a hazard to stonecutters and to pottery, foundry, ground silica and
refractory workers. Because crystalline silica exposure is so widespread and silica sand is an
inexpensive and versatile component of many manufacturing processes, millions of workers
throughout the world are at risk of the disease. The true prevalence of the disease is unknown.
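
As a simplified, illustrative note (the assumptions here are not taken from the text): for an idealized smooth spherical particle, the aerodynamic diameter quoted above can be approximated from the geometric diameter d and the particle density ρ relative to unit density ρ0 (1 g/cm3):

d_{ae} \approx d \sqrt{\rho / \rho_{0}}

For quartz (ρ of about 2.65 g/cm3), a 3 μm sphere therefore behaves aerodynamically like a particle of roughly 3 × √2.65, or about 4.9 μm, near the upper end of the respirable range.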

Definition

Silicosis is an occupational lung disease attributable to the inhalation of silicon dioxide, commonly known as silica, in crystalline forms, usually as quartz, but also as other important
crystalline forms of silica, for example, cristobalite and tridymite. These forms are also called
“free silica” to distinguish them from the silicates. The silica content in different rock
formations, such as sandstone, granite and slate, varies from 20 to nearly 100%.

Workers in High-Risk Occupations and Industries

Although silicosis is an ancient disease, new cases are still reported in both the developed and
developing world. In the early part of this century, silicosis was a major cause of morbidity
and mortality. Contemporary workers are still exposed to silica dust in a variety of
occupations, and when new technology lacks adequate dust control, exposures may involve more hazardous dust levels and particles than in non-mechanized work settings. Whenever the
earth’s crust is disturbed and silica-containing rock or sand is used or processed, there are
potential respiratory risks for workers. Reports continue of silicosis from industries and work
settings not previously recognized to be at risk, reflecting the nearly ubiquitous presence of
silica. Indeed, due to the latency and chronicity of this disorder, including the development
and progression of silicosis after exposure has ceased, some workers with current exposures
may not manifest disease until the next century. In many countries throughout the world,
mining, quarrying, tunnelling, abrasive blasting and foundry work continue to present major
risks for silica exposure, and epidemics of silicosis continue to occur, even in developed
nations.

Forms of Silicosis—Exposure History and Clinicopathologic Descriptions

Chronic, accelerated and acute forms of silicosis are commonly described. These clinical and
pathologic expressions of the disease reflect differing exposure intensities, latency periods
and natural histories. The chronic or classic form usually follows one or more decades of
exposure to respirable dust containing quartz, and this may progress to progressive massive
fibrosis (PMF). The accelerated form follows shorter and heavier exposures and progresses
more rapidly. The acute form may occur after short-term, intense exposures to high levels of
respirable dust with high silica content for periods that may be measured in months rather
than years.

Chronic (or classic) silicosis may be asymptomatic or result in insidiously progressive exertional dyspnoea or cough (often mistakenly attributed to the ageing process). It presents
as a radiographic abnormality with small (<10 mm), rounded opacities predominantly in the
upper lobes. A history of 15 years or more since onset of exposure is common. The pathologic
hallmark of the chronic form is the silicotic nodule. The lesion is characterized by a cell-free
central area of concentrically arranged, whorled hyalinized collagen fibers, surrounded by
cellular connective tissue with reticulin fibers. Chronic silicosis may progress to PMF
(sometimes referred to as complicated silicosis), even after exposure to silica-containing dust
has ceased.

Progressive massive fibrosis is more likely to present with exertional dyspnoea. This form of
disease is characterized by nodular opacities greater than 1 cm on chest radiograph and
commonly will involve reduced carbon monoxide diffusing capacity, reduced arterial oxygen
tension at rest or with exercise, and marked restriction on spirometry or lung volume
measurement. Distortion of the bronchial tree may also lead to airway obstruction and
productive cough. Recurrent bacterial infection not unlike that seen in bronchiectasis may
occur. Weight loss and cavitation of the large opacities should prompt concern for
tuberculosis or other mycobacterial infection. Pneumothorax may be a life-threatening
complication, since the fibrotic lung may be difficult to re-expand. Hypoxaemic respiratory
failure with cor pulmonale is a common terminal event.

Accelerated silicosis may appear after more intense exposures of shorter (5 to 10 years)
duration. Symptoms, radiographic findings and physiological measurements are similar to
those seen in the chronic form. Deterioration in lung function is more rapid, and many
workers with accelerated disease may develop mycobacterial infection. Auto-immune disease,
including scleroderma or systemic sclerosis, is seen with silicosis, often of the accelerated
type. The progression of radiographic abnormalities and functional impairment can be very
rapid when auto-immune disease is associated with silicosis.

Acute silicosis may develop within a few months to 2 years of massive silica exposure.
Dramatic dyspnoea, weakness, and weight loss are often presenting symptoms. The
radiographic findings of diffuse alveolar filling differ from those in the more chronic forms of
silicosis. Histologic findings similar to pulmonary alveolar proteinosis have been described,
and extrapulmonary (renal and hepatic) abnormalities are occasionally reported. Rapid
progression to severe hypoxaemic ventilatory failure is the usual course.

Tuberculosis may complicate all forms of silicosis, but people with acute and accelerated
disease may be at highest risk. Silica exposure alone, even without silicosis, may also
predispose to this infection. M. tuberculosis is the usual organism, but atypical mycobacteria
are also seen.

Even in the absence of radiographic silicosis, silica-exposed workers may also have other
diseases associated with occupational dust exposure, such as chronic bronchitis and the
associated emphysema. These abnormalities are associated with many occupational mineral
dust exposures, including dusts containing silica.

Pathogenesis and the Association with Tuberculosis

The precise pathogenesis of silicosis is uncertain, but an abundance of evidence implicates the
interaction between the pulmonary alveolar macrophage and silica particles deposited in the
lung. Surface properties of the silica particle appear to promote macrophage activation. These
cells then release chemotactic factors and inflammatory mediators that result in a further
cellular response by polymorphonuclear leukocytes, lymphocytes and additional
macrophages. Fibroblast-stimulating factors are released that promote hyalinization and
collagen deposition. The resulting pathologic silicotic lesion is the hyaline nodule, containing
a central acellular zone with free silica surrounded by whorls of collagen and fibroblasts, and
an active peripheral zone composed of macrophages, fibroblasts, plasma cells, and additional
free silica as shown in figure 1.

Figure 1. Typical silicotic nodule, microscopic section. Courtesy of Dr. V. Vallyathan.


The precise properties of silica particles that evoke the pulmonary response described above
are not known, but surface characteristics may be important. The nature and the extent of the
biological response are in general related to the intensity of the exposure; however, there is
growing evidence that freshly fractured silica may be more toxic than aged dust containing
silica, an effect perhaps related to reactive radical groups on the cleavage planes of freshly
fractured silica. This may offer a pathogenic explanation for the observation of cases of
advanced disease in both sandblasters and rock drillers where exposures to recently fractured
silica are particularly intense.

The initiating toxic insult may occur with minimal immunological reaction; however, a
sustained immunological response to the insult may be important in some of the chronic
manifestations of silicosis. For example, antinuclear antibodies may occur in accelerated
silicosis and scleroderma, as well as other collagen diseases in workers who have been
exposed to silica. The susceptibility of silicotic workers to infections, such as tuberculosis and
Nocardia asteroides, is likely related to the toxic effect of silica on pulmonary macrophages.

The link between silicosis and tuberculosis has been recognized for nearly a century. Active
tuberculosis in silicotic workers may exceed 20% when community prevalence of tuberculosis
is high. Again, people with acute silicosis appear to be at considerably higher risk.

Clinical Picture of Silicosis


The primary symptom is usually dyspnoea, first noted with activity or exercise and later at
rest as pulmonary reserve is lost. However, in the absence of other respiratory
disease, shortness of breath may be absent and the presentation may be an asymptomatic
worker with an abnormal chest radiograph. The radiograph may at times show quite advanced
disease with only minimal symptoms. The appearance or progression of dyspnoea may herald
the development of complications including tuberculosis, airways obstruction or PMF. Cough
is often present secondary to chronic bronchitis from occupational dust exposure, tobacco use,
or both. Cough may at times also be attributed to pressure from large masses of silicotic
lymph nodes on the trachea or mainstem bronchi.

Other chest symptoms are less common than dyspnoea and cough. Haemoptysis is rare and
should raise concern for complicating disorders. Wheeze and chest tightness may occur
usually as part of associated obstructive airways disease or bronchitis. Chest pain and finger
clubbing are not features of silicosis. Systemic symptoms, such as fever and weight loss,
suggest complicating infection or neoplastic disease. Advanced forms of silicosis are
associated with progressive respiratory failure with or without cor pulmonale. Few physical
signs may be noted unless complications are present.

Radiographic Patterns and Functional Pulmonary Abnormalities

The earliest radiographic signs of uncomplicated silicosis are generally small rounded
opacities. These can be described by the ILO International Classification of Radiographs of
Pneumoconioses by size, shape and profusion category. In silicosis, “q” and “r” type opacities
dominate. Other patterns including linear or irregular shadows have also been described. The
opacities seen on the radiograph represent the summation of pathologic silicotic nodules.
They are usually found predominantly in the upper zones and may later progress to involve
other zones. Hilar lymphadenopathy is also noted sometimes in advance of nodular
parenchymal shadows. Egg shell calcification is strongly suggestive of silicosis, although this
feature is seen infrequently. PMF is characterized by the formation of large opacities. These
large lesions can be described by size using the ILO classification as categories A, B or C.
Large opacities or PMF lesions tend to contract, usually to the upper lobes, leaving areas of
compensatory emphysema at their margins and often in the lung bases. As a result, previously
evident small rounded opacities may disappear at times or be less prominent. Pleural
abnormalities may occur but are not a frequent radiographic feature in silicosis. Large
opacities may also pose concern regarding neoplasm and radiographic distinction in the
absence of old films may be difficult. All lesions that cavitate or change rapidly should be
evaluated for active tuberculosis. Acute silicosis may present with a radiologic alveolar filling
pattern with rapid development of PMF or complicated mass lesions. See figures 2 and 3.

Figure 2. Chest radiograph, acute silico-proteinosis in a surface coal mine driller. Courtesy of
Dr. NL Lapp and Dr. DE Banks.
Figure 3. Chest radiograph, complicated silicosis demonstrating progressive massive fibrosis.

Pulmonary function tests, such as spirometry and diffusing capacity, are helpful for the
clinical evaluation of people with suspected silicosis. Spirometry may also be of value in early
recognition of the health effects from occupational dust exposures, as it may detect
physiologic abnormalities that may precede radiologic changes. No solely characteristic
pattern of ventilatory impairment is present in silicosis. Spirometry may be normal, or when
abnormal, the tracings may show obstruction, restriction or a mixed pattern. Obstruction may
indeed be the more common finding. These changes tend to be more marked with advanced
radiologic categories. However, poor correlation exists between radiographic abnormalities
and ventilatory impairment. In acute and accelerated silicosis, functional changes are more
marked and progression is more rapid. In acute silicosis, radiologic progression is
accompanied by increasing ventilatory impairment and gas exchange abnormalities, which
leads to respiratory failure and eventually to death from intractable hypoxaemia.
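
As a rough illustration of how such spirometric patterns are distinguished in practice, the following minimal Python sketch applies common rule-of-thumb thresholds (a fixed FEV1/FVC ratio of 0.70 and 80% of predicted FVC). These cut-offs are simplifications chosen here for illustration only; formal interpretation relies on reference equations, lower limits of normal and, for restriction, measured lung volumes.

# Illustrative only: crude classification of a spirometric pattern.
# Thresholds (0.70 ratio, 80% of predicted FVC) are rule-of-thumb values.
def classify_pattern(fev1_l, fvc_l, fvc_predicted_l):
    ratio = fev1_l / fvc_l
    obstructed = ratio < 0.70
    low_fvc = fvc_l < 0.80 * fvc_predicted_l
    if obstructed and low_fvc:
        return "mixed pattern (confirm restriction with lung volumes)"
    if obstructed:
        return "obstructive pattern"
    if low_fvc:
        return "possible restrictive pattern (confirm with lung volumes)"
    return "no abnormality by these crude criteria"

# Example: FEV1 1.8 L, FVC 3.0 L, predicted FVC 4.2 L -> mixed pattern
print(classify_pattern(1.8, 3.0, 4.2))
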
Complications and Special Diagnostic Issues

With a history of exposure and a characteristic radiograph, the diagnosis of silicosis is generally not difficult to establish. Challenges arise only when the radiologic features are
unusual or the history of exposure is not recognized. Lung biopsy is rarely required to
establish the diagnosis. However, tissue samples are helpful in some clinical settings when
complications are present or the differential diagnosis includes tuberculosis, neoplasm or
PMF. Biopsy material should be sent for culture, and in research settings, dust analysis may
be a useful additional measure. When tissue is required, open lung biopsy is generally
necessary for adequate material for examination.

Vigilance for infectious complications, especially tuberculosis, cannot be overemphasized, and symptoms such as a change in cough, haemoptysis, fever or weight loss should trigger a work-up to exclude this treatable problem.

Substantial concern and interest about the relationship between silica exposure, silicosis and
cancer of the lung continues to stimulate debate and further research. In October of 1996, a
committee of the International Agency for Research on Cancer (IARC) classified crystalline silica as a Group 1 carcinogen, reaching this conclusion based on “sufficient evidence of
carcinogenicity in humans”. Uncertainty over the pathogenic mechanisms for the
development of lung cancer in silica-exposed populations exists, and the possible relationship
between silicosis (or lung fibrosis) and cancer in exposed workers continues to be studied.
Regardless of the mechanism that may be responsible for neoplastic events, the known
association between silica exposures and silicosis dictates controlling and reducing exposures
to workers at risk for this disease.

Prevention of Silicosis

Prevention remains the cornerstone of eliminating this occupational lung disease. The use of
improved ventilation and local exhaust, process enclosure, wet techniques, personal protection
including the proper selection of respirators, and where possible, industrial substitution of
agents less hazardous than silica all reduce exposure. The education of workers and
employers regarding the hazards of silica dust exposure and measures to control exposure is
also important.

If silicosis is recognized in a worker, removal from continuing exposure is advisable. Unfortunately, the disease may progress even without further silica exposure. Additionally,
finding a case of silicosis, especially the acute or accelerated form, should prompt a
workplace evaluation to protect other workers also at risk.

Screening and Surveillance

Silica and other mineral-dust exposed workers should have periodic screening for adverse
health effects as a supplement to, but not a substitute for, dust exposure control. Such
screening commonly includes evaluations for respiratory symptoms, lung function
abnormalities, and neoplastic disease. Evaluations for tuberculosis infection should also be
performed. In addition to individual worker screening, data from groups of workers should be
collected for surveillance and prevention activities. Guidance for these types of studies is
included in the list of suggested readings.

Therapy, Management of Complications and Control of Silicosis

When prevention has been unsuccessful and silicosis has developed, therapy is directed
largely at complications of the disease. Therapeutic measures are similar to those commonly
used in the management of airway obstruction, infection, pneumothorax, hypoxaemia, and
respiratory failure complicating other pulmonary disease. Historically, the inhalation of
aerosolized aluminium has been unsuccessful as a specific therapy for silicosis. Polyvinyl
pyridine-N-oxide, a polymer that has protected experimental animals, is not available for use in humans. Recent laboratory work with tetrandrine has shown in vivo reduction in fibrosis and collagen synthesis in silica-exposed animals treated with this drug. However, strong evidence
of human efficacy is currently lacking, and there are concerns about the potential toxicity,
including the mutagenicity, of this drug. Because of the high prevalence of disease in some
countries, investigations of combinations of drugs and other interventions continue. Currently,
no successful approach has emerged, and the search for a specific therapy for silicosis to date
has been unrewarding.

Further exposure is undesirable, and advice on leaving or changing the current job should be
given with information about past and present exposure conditions.

In the medical management of silicosis, vigilance for complicating infection, especially tuberculosis, is critical. The use of BCG in the tuberculin-negative silicotic patient is not
recommended, but the use of preventive isoniazid (INH) therapy in the tuberculin-positive
silicotic subject is advised in countries where the prevalence of tuberculosis is low. The
diagnosis of active tuberculosis infection in patients with silicosis can be difficult. Clinical
symptoms of weight loss, fever, sweats and malaise should prompt radiographic evaluation
and sputum acid-fast bacilli smears and cultures. Radiographic changes, including
enlargement or cavitation in conglomerate lesions or nodular opacities, are of particular
concern. Bacteriological studies on expectorated sputum may not always be reliable in
silicotuberculosis. Fiberoptic bronchoscopy for additional specimens for culture and study
may often be helpful in establishing a diagnosis of active disease. The use of multidrug
therapy for suspected active disease in silicotics is justified at a lower level of suspicion than
in the non-silicotic subject, due to the difficulty in firmly establishing evidence for active
infection. Rifampin therapy appears to have enhanced the success rate of treatment of silicosis
complicated by tuberculosis, and in some recent studies response to short-term therapy was
comparable in cases of silicotuberculosis to that in matched cases of primary tuberculosis.

Ventilatory support for respiratory failure is indicated when precipitated by a treatable complication. Pneumothorax, spontaneous and ventilator-related, is usually treated by chest
tube insertion. Bronchopleural fistula may develop, and surgical consultation and
management should be considered.

Acute silicosis may rapidly progress to respiratory failure. When this disease resembles
pulmonary alveolar proteinosis and severe hypoxaemia is present, aggressive therapy has
included massive whole-lung lavage with the patient under general anaesthesia in an attempt
to improve gas exchange and remove alveolar debris. Although appealing in concept, the
efficacy of whole lung lavage has not been established. Glucocorticoid therapy has also been
used for acute silicosis; however, it is still of unproven benefit.
Some young patients with end-stage silicosis may be considered candidates for lung or heart-
lung transplantation by centres experienced with this expensive and high-risk procedure.
Early referral and evaluation for this intervention may be offered to selected patients.

The discussion of an aggressive and high-technology therapeutic intervention such as transplantation serves dramatically to underscore the serious and potentially fatal nature of
silicosis, as well as to emphasize the crucial role for primary prevention. The control of
silicosis ultimately depends upon the reduction and control of workplace dust exposures. This
is accomplished by rigorous and conscientious application of fundamental occupational
hygiene and engineering principles, with a commitment to the preservation of worker health.

Coal Workers' Lung Diseases

Written by ILO Content Manager

Coal miners are subject to a number of lung diseases and disorders arising from their exposure
to coal mine dust. These include pneumoconiosis, chronic bronchitis and obstructive lung
disease. The occurrence and severity of disease depends on the intensity and duration of dust
exposure. The specific composition of the coal mine dust also has a bearing on some health
outcomes.

In the developed countries, where high prevalences of lung disease existed in the past,
reductions in dust levels brought about by regulation have led to substantial drops in disease
prevalence since the 1970s. In addition, major reductions in the mining work force in most of
those countries over recent decades, partly brought about by changes in technology and
resulting improvements in productivity, will result in further reductions in overall disease
levels. Miners in other countries, where coal mining is a more recent phenomenon and dust
controls are less aggressive, have not been so fortunate. This problem is exacerbated by the
high cost of modern mining technology, forcing the employment of large numbers of workers,
many of whom are at high risk of disease development.

In the following text, each disease or disorder is considered in turn. Those specific to coal
mining, such as coal workers’ pneumoconiosis, are described in detail; the description of
others, such as obstructive lung disease, is restricted to those aspects that relate to coal miners
and dust exposure.

Coal Workers’ Pneumoconiosis

Coal workers’ pneumoconiosis (CWP) is the disease most commonly associated with coal
mining. It is not a fast-developing disease, usually taking at least ten years to be manifested,
and often much longer when exposures are low. In its initial stages it is an indicator of
excessive lung dust retention, and may be associated with few symptoms or signs in itself.
However, as it advances, it puts the miner at increasing risk of development of the much more
serious progressive massive fibrosis (PMF).

Pathology

The classic lesion of CWP is the coal macule, a collection of dust and dust-laden
macrophages around the periphery of the respiratory bronchioles. The macules contain
minimal collagen and are thus usually not palpable. They are about 1 to 5 mm in size, and are
frequently accompanied by an enlargement of the adjacent air spaces, termed focal
emphysema. Though often very numerous, they are not usually evident on a chest radiograph.

Another lesion associated with CWP is the coal nodule. These larger lesions are palpable and
contain a mixture of dust-laden macrophages, collagen and reticulin. The presence of coal
nodules, with or without silicotic nodules (see below), indicates lung fibrosis, and is largely
responsible for the opacities seen on chest radiographs. Macronodules (7 to 20 mm in size)
may coalesce to form progressive massive fibrosis (see below), or PMF may develop from a
single macronodule.

Silicotic nodules (described under silicosis) have been found in a significant minority of
underground coal miners. For most, the cause may rest simply with the silica present in the
coal dust, although exposure to pure silica in some jobs is certainly an important factor (e.g.,
among surface drillers, underground motormen and roof bolters).

Radiography

The most useful indicator of CWP in miners during life is obtained using the routine chest
radiograph. Dust deposits and the nodular tissue reactions attenuate the x-ray beam and result
in opacities on the film. The profusion of these opacities can be assessed systematically by
using a standardized method of radiograph description such as that disseminated by the ILO
and described elsewhere in this chapter. In this method, individual posterior-anterior films are
compared to standard radiographs showing increasing profusion of small opacities, and the
film classified into one of four major categories (0, 1, 2, 3) based on its similarity to the
standard. A secondary classification is also made, depending on the reader’s assessment of the
film’s similarity to adjacent ILO categories. Other aspects of the opacities, such as size, shape
and region of occurrence in the lung are also noted. Some countries, such as China and Japan,
have developed similar systems for systematic radiograph description or interpretation that are
particularly suited to their own needs.
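
Purely as an illustration of what a systematic, ILO-style reading record might contain, the following Python sketch uses the conventions mentioned above (a major profusion category with a subcategory, small opacity shape/size codes p, q, r for rounded and s, t, u for irregular opacities, the affected lung zones, and large opacity categories A, B or C). The field names and example values are invented for illustration and do not represent an official data schema.

# Hypothetical record of one ILO-style radiograph reading (illustrative only).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RadiographReading:
    profusion: str                 # e.g., "1/0", "2/1" (major category/subcategory)
    primary_shape: str             # "p", "q", "r" (rounded) or "s", "t", "u" (irregular)
    secondary_shape: Optional[str] = None
    zones: List[str] = field(default_factory=list)   # e.g., ["RU", "LU", "RM"]
    large_opacity: Optional[str] = None               # "A", "B" or "C", if present

reading = RadiographReading(profusion="2/1", primary_shape="q",
                            secondary_shape="r", zones=["RU", "LU"],
                            large_opacity=None)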

Traditionally, small rounded types of opacity have been associated with coal mining.
However, more recent data indicate that irregular types can also result from exposure to coal
mine dust. The opacities of CWP and silicosis are often indistinguishable on the radiograph.
However, there is some evidence that larger sized opacities (type r) more often indicate
silicosis.

It is important to note that a substantial amount of pathologic abnormality related to pneumoconiosis may be present in the lung before it can be detected on the routine chest x ray. This is particularly true for macular deposition, but it becomes progressively less true
with greater profusion and size of nodules. Concomitant emphysema may also reduce the
visibility of lesions on the chest x ray. Computerized tomography (CT)—particularly high-
resolution computerized tomography (HRCT)—may permit visualization of abnormalities not
clearly evident on routine chest x rays, although CT is not necessary for routine clinical
diagnosis of miners’ lung diseases and is not indicated for medical surveillance of miners.

Clinical aspects

The development of CWP, although a marker of excessive lung dust retention, in itself is
often unaccompanied by any overt clinical signs. This should not, however, be taken to imply
that the inhalation of coal mine dust is without risk, for it is now well known that other lung
diseases can arise from dust exposure. Pulmonary hypertension is more often noted in miners
who develop airflow obstruction in association with CWP. Moreover, once CWP has
developed, it usually progresses unless dust exposure ceases, and may progress thereafter. It
also puts the miner at greatly increased risk of development of the clinically ominous PMF,
with the likelihood of subsequent impairment, disability and premature mortality.

Disease mechanisms

Development of the earliest change of CWP, the dust macule, represents the effects of dust
deposition and accumulation. The subsequent stage, that is, the development of nodules,
results from the lung’s inflammatory and fibrotic reaction to the dust. In this, the roles of
silica and non-silica dust have long been debated. On the one hand, silica dust is known to be
considerably more toxic than coal dust. Yet, on the other hand, epidemiological studies have
shown no strong evidence implicating silica exposure in CWP prevalence or incidence.
Indeed, it seems that almost an inverse relationship exists, in that disease levels tend to be
elevated where silica levels are lower (e.g., in areas where anthracite is mined). Recently,
some understanding of this paradox has been gained through studies of particle
characteristics. These studies indicate that not only the quantity of silica present in the dust (as
measured conventionally using infrared spectrometry or x-ray diffraction), but also the
bioavailability of the surface of the silica particles may be related to toxicity. For example,
clay coating (occlusion) may play an important modifying role. Another important factor
under current investigation concerns surface charge in the form of free radicals and the effects
of “freshly fractured” versus “aged” silica-containing dusts.

Surveillance and epidemiology

The prevalence of CWP among underground miners varies with the kind of job, tenure and
age. A recent study of US coal miners revealed that from 1970 to 1972 about 25 to 40% of
working coal miners had category 1 or greater small rounded opacities after 30 or more years
in mining. This prevalence reflects exposure to levels of 6 mg/m3 or more of respirable dust
among coal face workers prior to that time. The introduction of a dust limit of 3 mg/m3 in
1969, with a reduction to 2 mg/m3 in 1972, has led to a decline in disease prevalence to about
half of the former levels. Declines related to dust control have been noted elsewhere, for
example, in the United Kingdom and Australia. Unfortunately, these gains have been
counterbalanced by temporal increases in prevalence elsewhere.

An exposure-response relationship for prevalence or incidence of CWP and dust exposure has
been demonstrated in a number of studies. These have shown that the primary significant dust
exposure variable is exposure to mixed mine dust. Intensive studies by British researchers
failed to disclose any major influence of silica exposure, as long as the percentage of silica
was less than about 5%. Coal rank (percentage carbon) is another important predictor of CWP
development. Studies in the United States, the United Kingdom, Germany and elsewhere have
given clear indications that the prevalence and incidence of CWP increases markedly with
coal rank, these being substantially greater where anthracite (high rank) coal is mined. No
other environmental variables have been found to exert any major effects on CWP
development. Miner age appears to have some bearing on disease development, since older
miners appear to be at increased risk. However, it is not entirely clear whether this implies
that older miners are more susceptible, whether it is a residence time effect, or is simply an
artefact (the age effect might reflect underestimation of exposure estimates for older miners,
for example). Cigarette smoking does not appear to increase the risk of CWP development.
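
To make the exposure index used in such studies concrete, cumulative respirable dust exposure is typically computed as the sum of mean concentration multiplied by time over a miner's job history. The sketch below is illustrative only; the job titles, concentrations and durations are invented values, not measured data.

# Illustrative calculation of cumulative respirable dust exposure (mg/m3-years).
job_history = [
    # (job, mean respirable dust concentration in mg/m3, years worked) - hypothetical
    ("coal face worker", 2.0, 10),
    ("roof bolter",      1.5,  5),
    ("surface work",     0.5,  8),
]

cumulative_exposure = sum(conc * years for _job, conc, years in job_history)
print(f"Cumulative exposure: {cumulative_exposure:.1f} mg/m3-years")  # 31.5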

Research in which miners were followed up with chest radiographs every five years shows
that the risk of developing PMF over the five years is clearly related to the category of CWP
as revealed on the initial chest x ray. Since the risk at category 2 is much greater than that at
category 1, conventional wisdom at one time was that miners should be prevented from
reaching category 2 if at all possible. However, in most mines there are usually many more
miners with category 1 CWP compared to category 2. Thus, the lower risk for category 1
compared to category 2 is offset somewhat by the larger numbers of miners with category 1.
On this showing, it has become clear that all pneumoconiosis should be prevented.
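
The arithmetic behind this argument can be illustrated with deliberately hypothetical figures (the risks and workforce sizes below are invented for illustration, not taken from the studies cited): even when the individual five-year risk is much lower at category 1, the larger number of category 1 miners can contribute a comparable or greater number of PMF cases.

# Hypothetical illustration: expected PMF cases over five years by CWP category.
risk_cat1, risk_cat2 = 0.02, 0.15        # assumed five-year risks (illustrative)
miners_cat1, miners_cat2 = 600, 60       # assumed numbers of miners (illustrative)

expected_cat1 = risk_cat1 * miners_cat1  # 12.0 expected cases from category 1
expected_cat2 = risk_cat2 * miners_cat2  # 9.0 expected cases from category 2
print(expected_cat1, expected_cat2)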

Mortality

Miners as a group have been observed to have increased risk of death from non-malignant
respiratory diseases, and there is evidence that the mortality among miners with CWP is
somewhat increased over that of miners of similar age without the disease. However, the effect is
smaller than the excess seen for miners with PMF (see below).

Prevention

The only protection against CWP is minimization of dust exposure. If possible, this should be
achieved by dust suppression methods, such as ventilation and water sprays, rather than by
respirator use or administrative controls, for example, worker rotation. In this respect, there is
now good evidence that regulatory actions in some countries to reduce the level of dust, taken
around the 1970s, have resulted in greatly reduced levels of disease. Transfer of workers with
early signs of CWP to less dusty jobs is a prudent action, although there is little practical
evidence that such programmes have succeeded in preventing disease progression. For this
reason, dust suppression must remain the primary method of disease prevention.

Ongoing, aggressive monitoring of dust exposure and the conscious exertion of control efforts
can be supplemented by health screening surveillance of miners. If miners are found to
develop dust-related diseases, efforts at exposure control should be intensified throughout the
workplace and miners with dust effects should be offered work in low-dust areas of the mine
environment.

Treatment

Although several forms of treatment have been tried, including aluminium powder inhalation,
and the administration of tetrandine, no treatment is known that effectively reverses or slows
the fibrotic process in the lung. Currently, primarily in China, but elsewhere also, whole-lung
lavage is being tried with the intent of reducing the total lung dust burden. Although the
procedure can result in the removal of a considerable amount of dust, its risks, benefits and
role in the management of miners’ health are unclear.

In other respects, treatment should be directed towards preventing complications, maximizing the miners’ functional status and alleviating their symptoms, whether due to CWP or to other,
concomitant respiratory diseases. In general, miners who develop dust-induced lung diseases
should evaluate their current dust exposures and utilize the resources of government and
labour organizations to find the avenues available to reduce all adverse respiratory exposures.
For miners who smoke, smoking cessation is an initial step in personal exposure management.
Prevention of infectious complications of chronic lung disease with available pneumococcal
and yearly influenza vaccines is suggested. Early investigation of symptoms of lung infection,
with particular attention to mycobacterial disease, is also recommended. The treatments for
acute bronchitis, bronchospasm and congestive heart failure among miners are similar to those
for patients without dust-related disease.

Progressive Massive Fibrosis

PMF, sometimes referred to as complicated pneumoconiosis, is diagnosed when one or more large fibrotic lesions (whose definition depends on the mode of detection) are present in one
or both lungs. As its name implies, PMF often becomes more severe over time, even in the
absence of additional dust exposure. It can also develop after dust exposure has ceased, and
may often cause disability and premature mortality.

Pathology

PMF lesions may be unilateral or bilateral, and are most often found in the upper or middle
lobes of the lung. The lesions are formed of collagen, reticulin, coal mine dust and dust-laden
macrophages, while the centre may contain a black liquid which cavitates on occasion. US
pathology standards require the lesions to be 2 cm in size or larger to be identified as PMF
entities in surgical or autopsy specimens.

Radiology

Large opacities (>1 cm) on the radiograph, coupled with a history of extensive coal mine dust
exposure, are taken to imply the presence of PMF. However, it is important that other diseases
such as lung cancer, tuberculosis and granulomas be considered. Large opacities are usually
seen on a background of small opacities, but development of PMF from a category 0
profusion has been noted over a five-year period.

Clinical aspects

Diagnostic possibilities for each individual miner with large chest opacities must be
appropriately evaluated. Clinically stable miners with bilateral lesions in the typical upper-
lung distribution and with pre-existing simple CWP may present little diagnostic challenge.
However, miners with progressive symptoms, risk factors for other disorders (e.g.,
tuberculosis), or atypical clinical features should undergo a thorough appropriate examination
before the diagnostician attributes the lesions to PMF.

Dyspnoea and other respiratory symptoms often accompany PMF, but may not necessarily be
due to the disease itself. Congestive heart failure (due to pulmonary hypertension and cor
pulmonale) is a not infrequent complication.

Disease mechanisms

Despite extensive research, the actual cause of PMF development remains unclear. Over the
years, various hypotheses have been proposed, but none is fully satisfactory. One prominent
theory was that tuberculosis played a role. Indeed, tuberculosis is often present in miners with
PMF, particularly in the developing countries. However, PMF has been found to develop in
miners in whom there was no sign of tuberculosis, and tuberculin reactivity has not been
found to be elevated in miners with pneumoconiosis. Despite investigation, consistent
evidence of the role of the immune system in PMF development is lacking.

Surveillance and epidemiology

As with CWP, PMF levels have been declining in countries which have strict dust control
regulations and programmes. A recent study of US miners revealed that about 2% of coal
miners working underground had PMF after 30 or more years in mining (although this figure
may have been biased by affected miners leaving the work force).

Exposure-response investigations of PMF have shown that exposure to coal mine dust,
category of CWP, coal rank and age are the primary determinants of disease development. As
with CWP, epidemiological studies have found no major effect of silica dust. Although it was
thought at one time that PMF developed only on a background of the small opacities of CWP,
recently this has been found not to be the case. Miners with an initial chest x ray showing
category 0 CWP have been shown to develop PMF over five years, with the risk increasing
with their cumulative dust exposure. Also, miners may develop PMF after cessation of dust
exposure.

Mortality

PMF leads to premature mortality, the prognosis worsening with increasing stage of the
disease. A recent study showed that miners with category C PMF had only one-fourth the rate
of survival over 22 years compared to miners with no pneumoconiosis. This effect was
manifested over all age groups.

Prevention

Avoidance of dust exposure is the only way to prevent PMF. Since the risk of its development
increases sharply with increasing category of simple CWP, a strategy for secondary
prevention of PMF is for miners to undergo periodic chest x rays and to terminate or reduce
their exposure if simple CWP is detected. Although this approach appears valid and has been
adopted in certain jurisdictions, its effectiveness has not been evaluated systematically.

Treatment

There is no known treatment for PMF. Medical care should be organized around ameliorating
the condition and associated lung illnesses, while protecting against infectious complications.
Although maintaining functional stability may be more difficult in patients with PMF, in other
respects, management is similar to simple CWP.

Obstructive Lung Disease

There is now consistent and convincing evidence of a relationship between lung function loss
and dust exposure. Various studies in different countries have looked at the influence of dust
exposure on absolute values of, and temporal changes in, measurements of ventilatory
function, such as forced expiratory volume in one second (FEV1), forced vital capacity (FVC)
and flow rates. All have found evidence that dust exposure leads to a reduction in lung
function, and the results have been strikingly similar for several recent British and US
investigations. These indicate that over the course of a year, dust exposure at the coal face
brings about, on average, a reduction in lung function equivalent to smoking half a pack of
cigarettes each day. The studies also demonstrate that effects vary, and a given miner may
develop effects equal to, or worse than, those expected from cigarette smoking, particularly if
the individual has experienced higher dust exposures.
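
Schematically, and only as an illustration of the structure of such analyses (the coefficients are estimated separately in each cohort and are not reproduced here), the longitudinal studies referred to typically model the decline in FEV1 as a joint function of cumulative dust exposure, smoking and age:

\Delta \mathrm{FEV}_1 = \beta_0 + \beta_{dust}\,\mathrm{CDE} + \beta_{smoke}\,\mathrm{PY} + \beta_{age}\,\mathrm{Age} + \varepsilon

where CDE is cumulative respirable dust exposure (for example, in mg/m3-years), PY is pack-years of smoking and ε is residual variation; the comparison with half a pack of cigarettes a day amounts to comparing the dust and smoking terms accrued over one year.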

The effects of dust exposure have been found in both those who have never smoked and in
current smokers. Moreover, there is no evidence that smoking exacerbates the dust exposure
effect. Rather, studies have generally shown a slightly smaller effect in current smokers, a
result that may be due to healthy worker selection. It is important to note that the relationship
between dust exposure and ventilatory decline appears to exist independently of
pneumoconiosis. That is, it is not a requirement that pneumoconiosis be present for there to be
reduced lung function. To the contrary, it appears rather that the inhaled dust can act along
multiple pathways, leading to pneumoconiosis in some miners, to obstruction in others and to
multiple outcomes in yet others. In contrast to miners with CWP alone, miners with
respiratory symptoms have significantly lower lung function, after standardization for age,
smoking, dust exposure and other factors.

Recent work on ventilatory function changes has involved the exploration of longitudinal
changes. The results indicate that there may be a non-linear trend of decline over time in new
miners, a high initial rate of loss being followed by a more moderate decline with continued
exposure. Furthermore, there is evidence that miners who react to the dust may choose, if
possible, to remove themselves from the heavier exposures.

Chronic Bronchitis

Respiratory symptoms, such as chronic cough and phlegm production, are a frequent
consequence of work in coal mining, most studies showing an excess prevalence compared to
non-exposed control groups. Moreover, the prevalence and incidence of respiratory symptoms
have been shown to increase with cumulative dust exposure, after taking into account age and
smoking. The presence of symptoms appears to be associated with a reduction in lung
function over and above that due to dust exposure and other putative causes. This suggests
that dust exposure may be instrumental in initiating certain disease processes that then
progress regardless of further exposure. A relationship between bronchial gland size and dust
exposure has been demonstrated pathologically, and it has been found that mortality from
bronchitis and emphysema increases with increasing cumulative dust exposure.

Emphysema

Pathological studies have repeatedly found an excess of emphysema in coal miners compared
to control groups. Moreover, the degree of emphysema has been found to be related both to
the amount of dust in the lungs and to pathological assessments of pneumoconiosis.
Furthermore, there is evidence that the presence of emphysema is related both to dust
exposure and to the percentage of predicted FEV1. Hence, these
results are consistent with the view that dust exposure can lead to disability through causing
emphysema.

The form of emphysema most clearly associated with coal mining is focal emphysema. This
consists of zones of enlarged air spaces, 1 to 2 mm in size, adjacent to dust macules
surrounding the respiratory bronchioles. The current thinking is that the emphysema is formed
from tissue destruction, rather than from distension or dilation. Apart from focal emphysema,
there is evidence that centriacinar emphysema has an occupational origin, and that total
emphysema (i.e., the extent of all types) is correlated with tenure in mining, in those who
have never smoked as well as in smokers. There is no evidence that smoking potentiates the
dust exposure/emphysema relationship. However, there are indications of an inverse
relationship between the silica content of lungs and the presence of emphysema.

The issue of emphysema has long been controversial, with some stating that selection bias and
smoking make interpretation of pathological studies difficult. In addition, some consider that
focal emphysema has only trivial effects on lung function. However, pathological studies
undertaken since the 1980s have been responsive to earlier criticisms, and indicate that the
effect of dust exposure may be more significant for miners’ health than previously thought.
This point of view is supported by recent findings that mortality from bronchitis and
emphysema is related to cumulative dust exposure.

Silicosis

Silicosis, though associated more with industries other than coal mining, can occur in coal
miners. In underground mines, it is found most frequently in workers in certain jobs where
exposure to pure silica typically occurs. Such workers include roof bolters, who drill into the
ceiling rock, which can often be sandstone or other rock with high silica content; motormen,
drivers of rail transport who are exposed to the dust generated by sand placed on the tracks to
lend traction; and rock drillers, who are involved in mine development. Rock drillers at
surface coal mines have been shown to be at particular risk in the United States, with some
developing acute silicosis after only a few years of exposure. Based on pathological evidence,
as noted below, some degree of silicosis may afflict many more coal miners than just those
working the jobs noted above.

Silicotic nodules in coal miners are similar in nature to those observed elsewhere, and consist
of a whorled pattern of collagen and reticulin. One large autopsy study has revealed that about
13% of coal miners had silicotic nodules in their lungs. Although one job (that of motorman)
was notable for having a much higher prevalence of silicotic nodules (25%), there was little
variation in the prevalence among miners in other jobs, suggesting that the silica in the mixed
mine dust was responsible.

Silicosis cannot be reliably differentiated from coal workers’ pneumoconiosis on a
radiograph. However, there is some evidence that the larger type of small opacities (type r) is
indicative of silicosis.

Rheumatoid Pneumoconiosis

Rheumatoid pneumoconiosis, one variant of which is called Caplan’s syndrome, is the term
used for a condition affecting dust-exposed workers who develop multiple large radiographic
shadows. Pathologically, these lesions resemble rheumatoid nodules rather than PMF lesions,
and often arise over a short time interval. Active arthritis or circulating rheumatoid factor is
generally found, but occasionally both are absent.

Lung Cancer

Coal miners are occupationally exposed to a number of substances that are potential
carcinogens, including silica and benzo(a)pyrene. Yet, there is no
clear evidence of an excess of deaths from lung cancer in coal miners. One obvious
explanation for this is that coal miners are forbidden to smoke underground because of the
danger of fires and explosions. However, the fact that no exposure-response relationship
between lung cancer and dust exposure has been detected suggests that coal mine dust is not a
major cause of lung cancer in the industry.

Regulatory Limits on Dust Exposure

The World Health Organization (WHO) has recommended a “tentative health-based exposure
limit” for respirable coal mine dust (with less than 6% respirable quartz) ranging from 0.5 to
4 mg/m3. WHO suggests a 2 in 1,000 risk of PMF over a working lifetime as a criterion, and
recommends that mine-based environmental factors, including coal rank, percentage of quartz
and particle size should be taken into account when setting limits.

Currently, among the major coal-producing countries, limits are based on regulating coal dust
alone (e.g., 3.8 mg/m3 in the United Kingdom, 5 mg/m3 in Australia and Canada), on
regulating a mixture of coal and silica, as in the United States (2 mg/m3 when the quartz
content is 5% or less, otherwise (10 mg/m3)/(per cent SiO2)) or in Germany (4 mg/m3 when
the quartz content is 5% or less, or 0.15 mg/m3 otherwise), or on regulating pure quartz (e.g.,
Poland, with a 0.05 mg/m3 limit).
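
To make the arithmetic of the mixed coal/silica rule concrete, the following minimal sketch
(in Python; the function name and sample values are illustrative assumptions, not part of any
regulation) applies the United States formula quoted above: 2 mg/m3 when the quartz content
is 5% or less, otherwise 10 mg/m3 divided by the per cent SiO2.

def us_mixed_dust_limit(quartz_percent):
    # Respirable coal mine dust limit (mg/m3) under the US-style rule quoted above:
    # 2 mg/m3 when quartz is 5% or less, otherwise 10 divided by the per cent SiO2.
    if quartz_percent <= 5:
        return 2.0
    return 10.0 / quartz_percent

# Hypothetical samples: 4% quartz keeps the 2 mg/m3 limit; 20% quartz lowers it to 0.5 mg/m3.
print(us_mixed_dust_limit(4))    # 2.0
print(us_mixed_dust_limit(20))   # 0.5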

Asbestos-Related Diseases

Written by ILO Content Manager

Historical Perspective

Asbestos is a term used to describe a group of naturally occurring fibrous minerals which are
very widely distributed in rock outcrops and deposits throughout the world. Exploitation of
the tensile and heat-resistant properties of asbestos for human use dates from ancient times.
For instance, in the third century BC asbestos was used to strengthen clay pots in Finland. In
classic times, shrouds woven from asbestos were used to preserve the ashes of the famous
dead. Marco Polo returned from his travels in China with descriptions of a magic material
which could be manufactured into a flame resistant cloth. By the early years of the nineteenth
century, deposits were known to exist in several parts of the world, including the Ural
Mountains, northern Italy and other Mediterranean areas, in South Africa and in Canada, but
commercial exploitation only started in the latter half of the nineteenth century. By this time,
the industrial revolution not only created the demand (such as for insulating the steam
engine) but also facilitated production, with mechanization replacing hand cobbing of fibre
from the parent rock. The modern industry began in Italy and the United Kingdom after 1860
and was boosted by the development and exploitation of the extensive deposits of chrysotile
(white) asbestos in Quebec (Canada) in the 1880s. Exploitation of the similarly extensive
chrysotile deposits in the Ural Mountains was modest until the 1920s. The long thin fibres of
chrysotile were particularly suitable for spinning into cloth and felts, one of the early
commercial uses for the mineral. The exploitation of the deposits of crocidolite (blue)
asbestos of the northwest Cape, South Africa, a fibre more water-resistant than chrysotile and
better suited to marine use, and of the amosite (brown) asbestos deposits, also found in South
Africa, started in the early years of this century. Exploitation of the Finnish deposits of
anthophyllite asbestos, the only important commercial source of this fibre, took place between
1918 and 1966, while the deposits of crocidolite in Wittenoom, Western Australia, were
mined from 1937 to 1966.

Fibre Types

The asbestos minerals fall into two groups, the serpentine group which includes chrysotile,
and the amphiboles, which include crocidolite, tremolite, amosite and anthophyllite (figure 1).
Most ore deposits are heterogeneous mineralogically, as are most of the commercial forms of
the mineral (Skinner, Ross and Frondel 1988). Chrysotile and the various amphibole asbestos
minerals differ in crystalline structure, in chemical and surface characteristics and in the
physical characteristics of their fibres, usually described in terms of the length-to-diameter (or
aspect) ratio. They also differ in characteristics which distinguish commercial use and grade.
Pertinent to the current discussion is the evidence that the different fibres differ in their
biological potency (as considered below in the sections on various diseases).

Figure 1. Asbestos fibre types.


Seen on electron microscopy together with energy dispersive x-ray spectra, which enable
identification of individual fibres. Courtesy of A. Dufresne and M. Harrigan, McGill
University.

Commercial Production

The growth of commercial production, illustrated in figure 2, was slow in the early years of
this century. For instance, Canadian production exceeded 100,000 short tons per annum for
the first time in 1911 and 200,000 tons in 1923. Growth between the two World Wars was
steady, increased considerably to meet the demands of the Second World War and
spectacularly to meet peacetime demands (including those of the cold war) to reach a peak in
1976 of 5,708,000 short tons (Selikoff and Lee 1978). After this, production faltered as the ill-
health effects of exposure became a matter of increasing public concern in North America and
Europe and remain at approximately 4,000,000 short tons per annum up to 1986, but
decreased further in the 1990s. There was also a shift in the uses and sources of fibre in the
1980s; in Europe and North America demand declined as substitutes for many applications
were introduced, while on the African, Asian and South American continents, demand for
asbestos increased to meet the needs of a cheap durable material for use in construction and in
water reticulation. By 1981, Russia had become the world’s major producer, with an increase
in the commercial exploitation of large deposits in China and Brazil. In 1980, it was estimated
that a total of over 100 million tons of asbestos had been mined worldwide, 90% of which
was chrysotile, approximately 75% of which came from 4 chrysotile mining areas, located in
Quebec (Canada), Southern Africa and the central and southern Ural Mountains. Two to three
per cent of the world’s total production was crocidolite, from the Northern Cape, South
Africa, and from Western Australia, and another 2 to 3% was amosite, from the Eastern
Transvaal, South Africa (Skinner, Ross and Frondel 1988).

Figure 2. World production of asbestos in thousands of tons 1900-92


Asbestos-Related Diseases and Conditions

Like silica, asbestos has the capability of evoking scarring reactions in all biological tissue,
human and animal. In addition, asbestos evokes malignant reactions, adding a further element
to the concern for human health, as well as a challenge to science as to how asbestos exerts its
ill effects. The first asbestos-related disease to be recognized, diffuse interstitial pulmonary
fibrosis or scarring, later called asbestosis, was the subject of case reports in the United
Kingdom in the early 1900s. Later, in the 1930s, case reports of lung cancer in association
with asbestosis appeared in the medical literature though it was only over the next several
decades that the scientific evidence was gathered establishing that asbestos was the
carcinogenic factor. In 1960, the association between asbestos exposure and another much
less common cancer, malignant mesothelioma, which involves the pleura (a membrane that
covers the lung and lines the chest wall) was dramatically brought to attention by the report of
a cluster of these tumours in 33 individuals, all of whom worked or lived in the asbestos
mining area of the Northwest Cape (Wagner 1996). Asbestosis was the target of the dust
control levels introduced and implemented with increasing rigour in the 1960s and 1970s, and
in many industrialized countries, as the frequency of this disease decreased, asbestos-related
pleural disease emerged as the most frequent manifestation of exposure and the condition
which most frequently brought exposed subjects to medical attention. Table 1 lists diseases
and conditions currently recognized as asbestos-related. The diseases in bold type are those
most frequently encountered and for which a direct causal relationship is well established,
while for the sake of completeness, certain other conditions, for which the relationship is less
well established, are also listed (see the footnotes to Table 1 and the sections which follow in
the text below that expand upon the various disease headings).

Table 1. Asbestos-related diseases and conditions

Pathology | Organ(s) affected | Disease/condition1

Non-malignant
  Lungs: Asbestosis (diffuse interstitial fibrosis); small airway disease2 (fibrosis limited to
    the peri-bronchiolar region); chronic airways disease3
  Pleura: Pleural plaques; viscero-parietal reactions, including benign pleural effusion,
    diffuse pleural fibrosis and rounded atelectasis
  Skin: Asbestos corns4

Malignant
  Lungs: Lung cancer (all cell types); cancer of larynx
  Pleura and other mesothelium-lined cavities: Mesothelioma of pleura; mesothelioma of the
    peritoneum, pericardium and scrotum (in decreasing frequency of occurrence)
  Gastrointestinal tract5: Cancer of stomach, oesophagus, colon, rectum
  Other5: Cancer of ovary, gall bladder, bile ducts, pancreas, kidney

1 The diseases or conditions indicated in bold type are those most frequently encountered and
the ones for which a causal relationship is well established and/or generally recognized.
2 Fibrosis in the walls of the small airways of the lung (including the membranous and
respiratory bronchioles) is thought to represent the early lung parenchymal response to
retained asbestos (Wright et al. 1992) which will progress to asbestosis if exposure continues
and/or is heavy, but if exposure is limited or light, the lung response may be limited to these
areas (Becklake in Liddell & Miller 1991).
3 Included are bronchitis, chronic obstructive pulmonary disease (COPD) and emphysema. All
have been shown to be associated with work in dusty environments. The evidence for
causality is reviewed in the section Chronic Airways Diseases and Becklake (1992).
4 Related to direct handling of asbestos and of historical rather than current interest.
5 Data not consistent from all studies (Doll and Peto 1987); some of the highest risks were
reported in a cohort of over 17,000 American and Canadian asbestos insulation workers
(Selikoff 1990), followed from January 1, 1967 to December 31, 1986 in whom exposure had
been particularly heavy.

Sources: Becklake 1994; Liddell and Miller 1992; Selikoff 1990; Doll and Peto in Antman
and Aisner 1987; Wright et al. 1992.

Uses

Table 2 lists the major sources, products and uses of the asbestos minerals.

Table 2. Main commercial sources, products and uses of asbestos

Fibre type | Location of major deposits | Commercial products and/or uses

Chrysotile (white)
  Deposits: Russia; Canada (Québec, also British Columbia, Newfoundland); China (Szechwan
    province); Mediterranean countries (Italy, Greece, Corsica, Cyprus); Southern Africa (South
    Africa, Zimbabwe, Swaziland); Brazil; smaller deposits in the United States (Vermont,
    Arizona, California) and in Japan
  Products/uses: Construction materials (tiles, shingles, gutters and cisterns; roofing, sheeting
    and siding); pressure and other pipes; fireproofing (marine and other); insulation and
    soundproofing; reinforced plastic products (fan blades, switch gear); friction materials,
    usually in combination with resins, in brakes, clutches and other applications; textiles (used
    in belts, clothing, casing, fire barriers, autoclaves, yarns and packing); paper products (used
    in millboard, insulators, gaskets, roof felt, wall coverings, etc.); floats in paints, coatings
    and welding rods

Crocidolite (blue)
  Deposits: South Africa (Northwest Cape, Eastern Transvaal); Western Australia1
  Products/uses: Used mainly in combination in cement products (in particular pressure pipes)
    but also in many of the other products listed above

Amosite (brown)
  Deposits: South Africa (Northern Transvaal)1
  Products/uses: Used mainly in cement, thermal insulation and roofing products, particularly
    in the United States2, but also in combination in many of the products listed under chrysotile

Anthophyllite
  Deposits: Finland1
  Products/uses: Filler in the rubber, plastics and chemical industries

Tremolite
  Deposits: Italy, Korea and some Pacific Islands; mined on a small scale in Turkey, China and
    elsewhere; contaminates the ore-bearing rock in some asbestos, iron, talc and vermiculite
    mines; also found in agricultural soils in the Balkan Peninsula and in Turkey
  Products/uses: Used as a filler in talc; may or may not be removed in processing the ore, so
    it may appear in end products

Actinolite
  Deposits: Contaminates amosite and, less often, chrysotile, talc and vermiculite deposits
  Products/uses: Not usually exploited commercially

1 A list such as this is obviously not comprehensive and the readers should consult the sources
cited and other chapters in this Encyclopaedia for more complete information.
2 No longer in operation.

Sources: Asbestos Institute (1995); Browne (1994); Liddell and Miller (1991); Selikoff and
Lee (1978); Skinner et al (1988).

Though necessarily incomplete, this table emphasizes that:

1. Deposits are found in many parts of the world, most of which have been exploited non-
commercially or commercially in the past, and some of which are currently commercially
exploited.
2. There are many manufactured products in current or past use which contain asbestos,
particularly in the construction and transport industries.
3. Disintegration of these products or their removal carries with it the risk of the resuspension
of fibres and of renewed human exposure.

A figure of over 3,000 has been commonly quoted for the number of uses of asbestos, and this
no doubt led to asbestos being dubbed the “magic mineral” in the 1960s. A 1953 industry list
contains as many as 50 uses for raw asbestos, in addition to its use in the manufacture of the
products listed in Table 2, each of which has many other industrial applications. In 1972, the
consumption of asbestos in an industrialized country like the United States was attributed to
the following categories of product: construction (42%); friction materials, felts, packings and
gaskets (20%); floor tiles (11%); paper (9%); insulation and textiles (3%) and other uses
(15%) (Selikoff and Lee 1978). By contrast, a 1995 industry list of the main product
categories shows major redistribution on a worldwide basis as follows: asbestos cement
(84%); friction materials (10%); textiles (3%); seals and gaskets (2%); and other uses (1%)
(Asbestos Institute 1995).

Occupational Exposures, Past and Current

Occupational exposure, certainly in industrialized countries, has always been and is still the
most likely source of human exposure (see Table 2 and the references cited in its footnotes;
other sections of this Encyclopaedia contain further information). There have, however, been
major changes in industrial processes and procedures aimed at diminishing the release of dust
into the working environment (Browne 1994; Selikoff and Lee 1978). In countries with
mining operations, milling usually takes place at the minehead. Most chrysotile mines are
open cast, while amphibole mines usually involve underground methods which generate more
dust. Milling involves separating fibre from rock by means of mechanized crushing and
screening, which were dusty processes until the introduction of wet methods and/or enclosure
in most mills during the 1950s and 1960s. The handling of waste was also a source of human
exposure, as was transporting bagged asbestos, whether it involved loading and unloading
trucks and railcars or work on the dockside. These exposures have diminished since the
introduction of leak-proof bags and the use of sealed containers.

Workers have had to use raw asbestos directly in packing and lagging, particularly in
locomotives, and in spraying walls, ceilings and airducts, and in the marine industry,
deckheads and bulkheads. Some of these uses have been phased out voluntarily or have been
banned. In the manufacture of asbestos cement products, exposure occurs in receiving and
opening bags containing raw asbestos, in preparing the fibre for mixing in the slurry, in
machining end-products and in dealing with waste. In the manufacture of vinyl tiles and
flooring, asbestos was used as a reinforcing and filler agent to blend with organic resins, but
has now largely been replaced by organic fibre in Europe and North America. In the
manufacture of yarns and textiles, exposure to fibre occurs in receiving, preparing, blending,
carding, spinning, weaving and calendering the fibre, processes which were until recently
dry and potentially very dusty. Dust exposure has been considerably reduced in modern plants
through use of a colloidal suspension of fibre extruded through a coagulant to form wet
strands for the last-mentioned three processes. In the manufacture of asbestos paper products,
human exposure to asbestos dust is also most likely to occur in the reception and preparation
of the stock mix and in cutting the final products which in the 1970s contained from 30 to
90% asbestos. In the manufacture of asbestos friction products (dry mix-moulded, roll-
formed, woven or endless wound) human exposure to asbestos dust is also most likely to
occur during the initial handling and blending processes as well as in finishing the end
product, which in the 1970s contained from 30 to 80% of asbestos. In the construction
industry, prior to regular use of appropriate exhaust ventilation (which came in the 1960s), the
high-speed power sawing, drilling and sanding of asbestos-containing boards or tiles led to
the release of fibre-containing dust close to the operator’s breathing zone, particularly when
such operations were conducted in closed spaces (for instance in high-rise buildings under
construction). In the period after the Second World War, a major source of human exposure
was in the use, removal or replacement of asbestos-containing materials in the demolition or
refurbishing of buildings or ships. One of the chief reasons for this state of affairs was the
lack of awareness, both of the composition of these materials (i.e., that they contained
asbestos) and that exposure to asbestos could be harmful to health. Improved worker
education, better work practices and personal protection have reduced the risk in the 1990s in
some countries. In the transport industry, sources of exposure were the removal and
replacement of lagging in locomotive engines and of braking material in trucks and cars in the
automobile repair industry. Other sources of past exposure leading to, in particular, pleural
disease, continue to attract notice, even in the 1990s, usually on the basis of case reports, for
instance those describing workers using asbestos string in the manufacture of welding rods, in
the formation of asbestos rope for grouting furnaces and maintaining underground mine
haulage systems.

Other Sources of Exposure

Exposure of individuals engaged in trades which do not directly involve use or handling of
asbestos but who work in the same area as those who do deal with it directly is called para-
occupational (bystander) exposure. This has been an important source of exposure not only in
the past but also for cases presenting for diagnosis in the 1990s. Workers involved include
electricians, welders and carpenters in the construction and in the ship building or repair
industries; maintenance personnel in asbestos factories; fitters, stokers and others in power
stations and ships and boiler houses where asbestos lagging or other insulation is in place, and
maintenance personnel in post-war high-rise buildings incorporating various asbestos-
containing materials. In the past, domestic exposure occurred primarily from dust-laden
workclothes being shaken or laundered at home, the dust so released becoming entrapped in
carpets or furnishings and resuspended into the air with the activities of daily living. Not only
could airborne fibre levels reach as high as 10 fibres per millilitre (f/ml), that is, ten times
the occupational exposure limit of 1.0 f/ml proposed by a WHO consultation (1989), but
the fibres tended to remain airborne for several days. Since the 1970s, the practice of retaining
all work clothes at the worksite for laundering has been widely but not universally adopted. In
the past also, residential exposure occurred from contamination of air from industrial sources.
For instance, increased levels of airborne asbestos have been documented in the
neighbourhood of mines and asbestos plants and are determined by production levels,
emission controls and weather. Given the long lag time for, in particular, asbestos-related
pleural disease, such exposures are still likely to be responsible for some cases presenting for
diagnosis in the 1990s. In the 1970s and 1980s, with the increase in public awareness of both
the ill-health consequences of asbestos exposure and of the fact that asbestos-containing
materials are used extensively in modern construction (particularly in the friable form used for
spray-on applications to walls, ceilings and ventilation ducts), a major cause of concern
centred on whether, as such buildings age and are subject to daily wear and tear, asbestos
fibres may be released into the air in sufficient numbers to become a threat to the health of
those working in modern high-rise buildings (see below for risk estimates). Other sources of
contamination of the air in urban areas include the release of fibre from brakes of vehicles and
rescattering of fibres released by passing vehicles (Bignon, Peto and Saracci 1989).

Non-industrial sources of environmental exposure include naturally occurring fibres in soils,
for instance in eastern Europe, and in rock outcrops in the Mediterranean region, including
Corsica, Cyprus, Greece and Turkey (Bignon, Peto and Saracci 1989). An additional source of
human exposure results from the use of tremolite for whitewash and stucco in Greece and
Turkey, and according to more recent reports, in New Caledonia in the South Pacific (Luce et
al. 1994). Furthermore, in several rural villages in Turkey, a zeolite fibre, erionite, has been
found to be used both in stucco and in domestic construction and has been implicated in
mesothelioma production (Bignon, Peto and Saracci 1991). Finally, human exposure may
occur through drinking water, mainly from natural contamination, and given the widespread
natural distribution of the fibre in outcrops, most water sources contain some fibre, levels
being highest in mining areas (Skinner, Ross and Frondel 1988).

Aetiopathology of Asbestos-Related Disease

Fate of inhaled fibres

Inhaled fibres align themselves with the airstream and their capability of penetrating into the
deeper lung spaces depends on their dimensions, fibres of 5 μm or less in aerodynamic
diameter showing over 80% penetration but less than 10 to 20% retention. Larger
particles may impact in the nose and in major airways at bifurcations, where they tend to
collect. Particles deposited in the major airways are cleared by the action of ciliated cells and
are transported up the mucus escalator. Individual differences associated with what appears to
be the same exposure are due, at least in part, to differences between individuals in the
penetration and retention of inhaled fibres (Bégin, Cantin and Massé 1989). Small particles
deposited beyond the major airways are phagocytosed by alveolar macrophages, scavenger
cells which ingest foreign material. Longer fibres, that is, those over 10 μm, often come under
attack by more than one macrophage, are more likely to become coated and to form the
nucleus of an asbestos body, a characteristic structure recognized since the early 1900s as a
marker of exposure (see figure 3). Coating a fibre is considered to be part of the lungs’
defense to render it inert and non-immunogenic. Asbestos bodies are more likely to form on
amphibole than on chrysotile fibres, and their density in biological material (sputum,
bronchoalveolar lavage, lung tissue) is an indirect marker of lung burden. Coated fibres may
persist in the lung for long periods, to be recovered from sputum or bronchoalveolar lavage
fluid up to 30 years after last exposure. Clearance of non-coated fibres deposited in the lung’s
parenchyma is towards the lung periphery and subpleural regions, and then to lymph nodes at
the root of the lung.

Figure 3. Asbestos body


Magnification x 400, seen on microscopic section of the lung as a slightly curved elongated
structure with a finely beaded iron protein coat. The asbestos fibre itself can be identified as
the thin line near one end of the asbestos body (arrow). Source: Fraser et al. 1990

Theories to explain how fibres evoke the various pleural reactions associated with asbestos
exposure include:

1. direct penetration into the pleural space and drainage with the pleural fluid to pores in the
pleura lining the chest wall
2. release of mediators into the pleural space from subpleural lymphatic collections
3. retrograde flow from lymph nodes at the root of the lung to the parietal pleura (Browne
1994)

There may also be retrograde flow via the thoracic duct to the abdominal lymph nodes to
explain the occurrence of peritoneal mesothelioma.

Cellular effects of inhaled fibres

Animal studies indicate that the initial events which follow asbestos retention in the lung
include:

1. an inflammatory reaction, with accumulation of white blood cells followed by a macrophagic
alveolitis with release of fibronectin, growth factor and various neutrophil chemotactic
factors and, over time, the release of superoxide ion, and
2. proliferation of alveolar, epithelial, interstitial and endothelial cells (Bignon, Peto and Saracci
1989).

These events are reflected in the material recovered by bronchoalveolar lavage in animals and
humans (Bégin, Cantin and Massé 1989). Both fibre dimensions and their chemical
characteristics appear to determine biological potency for fibrogenesis, and these
characteristics, in addition to surface properties, are also thought to be important for
carcinogenesis. Long, thin fibres are more active than short ones, although the activity of the
latter cannot be discounted, and amphiboles are more active than chrysotile, a property
attributed to their greater biopersistence (Bégin, Cantin and Massé 1989). Asbestos fibres may
also affect the human immune system and change the circulating population of blood
lymphocytes. For instance, human cell mediated immunity to cell antigens (such as is
exhibited in a tuberculin skin test) may be impaired (Browne 1994). In addition, since
asbestos fibres appear to be capable of inducing chromosome abnormality, the view has been
expressed that they can also be considered capable of inducing as well as promoting cancer
(Jaurand in Bignon, Peto and Saracci 1989).

Dose versus exposure response relationships

In biological sciences such as pharmacology or toxicology, in which dose-response
relationships are used to estimate the probability of desired effects or the risk of undesired
effects, a dose is conceptualized as the amount of agent delivered to and remaining in contact
with the target organ for sufficient time to evoke a reaction. In occupational medicine,
surrogates for dose, such as various measures of exposure, are usually the basis for risk
estimates. Nevertheless, exposure-response relationships can usually be demonstrated in
workforce-based studies, although the most appropriate exposure measure may differ
between diseases. Somewhat disconcerting is the fact that although exposure-response
relationships will differ between workforces, these differences can be explained only in part
by the fibre, particle size and industrial process. Nevertheless, such exposure-response
relationships have formed the scientific basis for risk assessment and for setting permissible
exposure levels, which were originally focused on controlling asbestosis (Selikoff and Lee
1978). As the prevalence and/or incidence of this condition has decreased, concern has
switched to assure protection of human health against asbestos-related cancers. Over the last
decade, techniques have been developed for the quantitative measurement of lung dust burden
or biological dose directly in terms of fibres per gram of dry lung tissue. In addition, energy
dispersive x-ray analysis (EDXA) permits precise characterization of each fibre by fibre type
(Churg 1991). Though standardization of results between laboratories has not yet been
achieved, comparisons of results obtained within a given laboratory are useful, and lung
burden measurements have added a new tool for case evaluation. In addition, the application
of these techniques in epidemiological studies has

1. confirmed the biopersistence of amphibole fibres in the lung compared to chrysotile fibres
2. identified fibre burden in the lungs of some individuals in whom exposure was forgotten,
remote or thought to be unimportant
3. demonstrated a gradient in lung burden associated with rural and urban residence and with
occupational exposure and
4. confirmed a fibre gradient in the lung dust burden associated with the major asbestos-
related diseases (Becklake and Case 1994).

Asbestosis

Definition and history

Asbestosis is the name given to the pneumoconiosis consequent on exposure to asbestos dust.
The term pneumoconiosis is used here as defined in the article “Pneumoconioses:
Definitions”, of this Encyclopaedia as a condition in which there is “accumulation of dust in
the lungs and tissue responses to the dust”. In the case of asbestosis, the tissue reaction is
collagenous, and results in permanent alteration of the alveolar architecture with scarring. As
early as 1898, the Annual Report of Her Majesty’s Chief Inspector of Factories contained
reference to a lady factory inspector’s report on the adverse health consequences of asbestos
exposure, and the 1899 Report contained details of one such case in a man who had worked
for 12 years in one of the recently established textile factories in London, England. Autopsy
revealed diffuse severe fibrosis of the lung and what subsequently came to be known as
asbestos bodies were seen on subsequent histologic re-examination of the slides. Since
fibrosis of the lung is an uncommon condition, the association was thought to be causal and
the case was presented in evidence to a committee on compensation for industrial disease in
1907 (Browne 1994). Despite the appearance of reports of a similar nature filed by inspectors
from the United Kingdom, Europe and Canada over the next decade, the role of exposure to
asbestos in the genesis of the condition was not generally recognized until a case report was
published in the British Medical Journal in 1927. In this report, the term pulmonary
asbestosis was first used to describe this particular pneumoconiosis, and comment was made
on the prominence of the associated pleural reactions, in contrast, for instance, to silicosis, the
main pneumoconiosis recognized at the time (Selikoff and Lee 1978). In the 1930s, two major
workforce-based studies carried out among textile workers, one in the United Kingdom and
one in the United States, provided evidence of an exposure-response (and therefore likely
causal) relationship between level and duration of exposure and radiographic changes
indicative of asbestosis. These reports formed the basis of the first control regulations in the
United Kingdom, promulgated in 1930, and the first threshold limit values for asbestos
published by the American Conference of Governmental Industrial Hygienists in 1938
(Selikoff and Lee 1978).

Pathology

The fibrotic changes which characterize asbestosis are the consequence of an inflammatory
process set up by fibres retained in the lung. The fibrosis of asbestosis is interstitial, diffuse,
tends to involve the lower lobes and peripheral zones preferentially and, in the advanced case,
is associated with obliteration of the normal lung architecture. Fibrosis of the adjacent pleura
is common. Nothing in the histological features of asbestosis distinguishes it from interstitial
fibrosis due to other causes, except the presence of asbestos in the lung either in the form of
asbestos bodies, visible to light microscopy, or as uncoated fibres, most of which are too fine
to be seen except by means of electron microscopy. Thus, the absence of asbestos bodies in
images derived from light microscopy does not rule out either exposure or the diagnosis of
asbestosis. At the other end of the spectrum of disease severity, the fibrosis may be limited to
relatively few zones and affect mainly the peribronchiolar regions (see figure 4), giving rise to
what has been called asbestos-related small airways disease. Again, except perhaps for more
extensive involvement of membranous small airways, nothing in the histologic changes of this
condition distinguishes it from small airways disease due to other causes (such as cigarette
smoking or exposure to other mineral dusts) other than the presence of asbestos in the lung.
Small airways disease may be the only manifestation of asbestos-related lung fibrosis or it
may coexist with varying degrees of interstitial fibrosis, that is, asbestosis (Wright et al.
1992). Carefully considered criteria have been published for the pathological grading of
asbestosis (Craighead et al. 1982). In general, the extent and intensity of the lung fibrosis
relates to the measured lung dust burden (Liddell and Miller 1991).

Figure 4. Asbestos-related small airways disease

Peribronchiolar fibrosis and infiltration by inflammatory cells are seen on a histologic section
of a respiratory bronchiole (R) and its distal divisions or alveolar ducts (A). The surrounding
lung is mostly normal but with focal thickening of the interstitial tissue (arrow), representing
early asbestosis. Source: Fraser et al. 1990

Clinical features

Shortness of breath, the earliest, most consistently reported and most distressing complaint,
has led to asbestosis being called a monosymptomatic disease (Selikoff and Lee 1978).
Shortness of breath precedes other symptoms which include a dry, often distressing cough,
and chest tightness, which is thought to be due to associated pleural reactions. Late inspiratory rales
or crackles which persist after coughing are heard, first in the axilla and over the lung bases,
before becoming more generalized as the condition advances, and are thought to be due to the
explosive opening of airways which close on expiration. Coarse rales and rhonchi, if present,
are thought to reflect bronchitis either in response to working in a dusty environment, or due
to smoking.

Chest imaging

Traditionally, the chest radiograph has been the most important single diagnostic tool for
establishing the presence of asbestosis. This has been facilitated by the use of the ILO (1980)
radiological classification, which grades the small irregular opacities that are characteristic of
asbestosis on a continuum from no disease to the most advanced disease, both for severity
(described as profusion on a 12-point scale from 0/– to 3/+) and extent (described as the
number of zones affected). Despite between-reader differences, even among those who have
completed training courses in reading, this classification has proved particularly useful in
epidemiological studies, and has also been used clinically. However, pathological changes of
asbestosis can be present on lung biopsy in up to 20% of subjects with a normal chest
radiograph. In addition, small irregular opacities of low profusion (e.g., 1/0 on the ILO scale)
are not specific for asbestosis but can be seen in relation to other exposures, for instance to
cigarette smoking (Browne 1994). Computed tomography (CT) has revolutionized the
imaging of interstitial lung disease, including asbestosis, with high-resolution computed
tomography (HRCT) adding increased sensitivity to the detection of interstitial and pleural
disease (Fraser et al. 1990). Characteristics of asbestosis which can be identified by HRCT
include thickened interlobular (septal) and intralobular core lines, parenchymal bands,
curvilinear subpleural lines and subpleural dependent densities, the first two being the most
distinctive for asbestosis (Fraser et al. 1990). HRCT can also identify these changes in cases
with pulmonary function deficit in whom the chest radiograph is inconclusive. Based on
postmortem HRCT, thickened intralobular lines have been shown to correlate with
peribronchiolar fibrosis, and thickened interlobular lines with interstitial fibrosis (Fraser et al.
1990). As yet, no standardized reading method has been developed for the use of HRCT in
asbestos-related disease. In addition to its cost, the fact that a CT device is a hospital
installation makes it unlikely that it will replace the chest radiograph for surveillance and
epidemiological studies; its role will likely remain limited to individual case investigation or
to planned studies intended to address specific issues. Figure 5 illustrates the use of chest
imaging in the diagnosis of asbestos-related lung disease; the case shown exhibits asbestosis,
asbestos-related pleural disease and lung cancer. Large opacities, a complication of other
pneumoconioses, in particular silicosis, are unusual in asbestosis and are usually due to other
conditions such as lung cancer (see the case described in figure 5) or rounded atelectasis.
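
The 12-point profusion scale referred to above has a fixed ordering, which makes thresholds
such as the "1/0 or more" criterion used in epidemiological surveys easy to apply. The short
sketch below (Python, purely illustrative) simply lists the ordered categories and checks a
reading against a threshold.

# Ordered categories of the ILO (1980) profusion scale, from lowest to highest.
ILO_PROFUSION_SCALE = ["0/-", "0/0", "0/1", "1/0", "1/1", "1/2",
                       "2/1", "2/2", "2/3", "3/2", "3/3", "3/+"]

def meets_profusion_threshold(reading, threshold="1/0"):
    # True if the reading is at or above the threshold on the ordered scale.
    return ILO_PROFUSION_SCALE.index(reading) >= ILO_PROFUSION_SCALE.index(threshold)

print(meets_profusion_threshold("1/1"))   # True
print(meets_profusion_threshold("0/1"))   # False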

Figure 5. Chest imaging in asbestos-related lung disease.

A posteroanterior chest radiograph (A) shows asbestosis involving both lungs and assessed as
ILO category 1/1, associated with bilateral pleural thickening (open arrows) and a vaguely
defined opacity (arrow heads) in the left upper lobe. On HRCT scan (B), this was shown to be
a dense mass (M) abutting onto the pleura and transthoracic needle biopsy revealed an
adenocarcinoma of the lung. Also on CT scan (C), at high attenuation pleural plaques can be
seen (arrow heads) as well as a thin curvilinear opacity in the parenchyma underlying the
plaques with interstitial abnormality in the lung between the opacity and the pleura. Source:
Fraser et al. 1990

Lung function tests

Established interstitial lung fibrosis due to asbestos exposure, like established lung fibrosis
due to other causes, is usually but not invariably associated with a restrictive lung function
profile (Becklake 1994). Its features include reduced lung volumes, in particular vital capacity
(VC) with preservation of the ratio of forced expiratory volume in 1 second to forced vital
capacity (FEV1/FVC%), reduced lung compliance, and impaired gas exchange. Air flow
limitation with reduced FEV1/FVC may, however, also be present as a response to a dusty
work environment or to cigarette smoke. In the earlier stages of asbestosis, when the
pathological changes are limited to peribronchiolar fibrosis and even before small irregular
opacities are evident on the chest radiograph, impairment of tests reflecting small airway
dysfunction such as the Maximum Mid-expiratory Flow Rate may be the only sign of
respiratory dysfunction. Responses to the stress of exercise may also be impaired early in the
disease, with increased ventilation in relation to the oxygen requirement of the exercise (due
to an increased breathing frequency and shallow breathing) and impaired O2 exchange. As the
disease progresses, less and less exercise is required to compromise O2 exchange. Given that
the asbestos-exposed worker may exhibit features of both a restrictive and an obstructive lung
function profile, the wise physician interprets the lung function profile in the asbestos worker
for what it is, as a measure of impairment, rather than as an aid to diagnosis. Lung function
measurements, in particular vital capacity, provide a useful tool for the follow-up of subjects
individually, or
in epidemiological studies, for instance after exposure has ceased, to monitor the natural
history of asbestosis or asbestos-related pleural disease.
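
As a rough illustration of the restrictive and obstructive profiles described above, the sketch
below (Python; the cut-off values, function names and sample numbers are assumptions made
for illustration, not clinical criteria taken from this article) computes the FEV1/FVC% ratio
and flags which pattern a spirometry result suggests.

def fev1_fvc_percent(fev1_l, fvc_l):
    # Ratio of forced expiratory volume in one second to forced vital capacity, in per cent.
    return 100.0 * fev1_l / fvc_l

def crude_spirometric_pattern(fev1_l, fvc_l, fvc_percent_predicted):
    # Illustrative cut-offs only (assumed for this sketch): a reduced ratio suggests airflow
    # limitation; a preserved ratio with a reduced FVC suggests a restrictive profile such as
    # that seen in established asbestosis.
    ratio = fev1_fvc_percent(fev1_l, fvc_l)
    if ratio < 70.0:
        return "obstructive pattern (reduced FEV1/FVC%)"
    if fvc_percent_predicted < 80.0:
        return "restrictive pattern (reduced FVC with preserved FEV1/FVC%)"
    return "no clear spirometric abnormality"

# Hypothetical worker: FEV1 2.4 l, FVC 3.0 l, FVC at 65% of the predicted value.
print(crude_spirometric_pattern(2.4, 3.0, 65.0))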

Other laboratory tests

Bronchoalveolar lavage is increasingly used as a clinical tool in the investigation of asbestos-
related lung disease:

1. to rule out other diagnoses
2. to assess the activity of the pulmonary reactions under study, such as fibrosis, or
3. to identify the agent in the form of asbestos bodies or fibres.

It is also used to study disease mechanisms in humans and animals (Bégin, Cantin and Massé
1989). The uptake of Gallium-67 is used as a measure of the activity of the pulmonary
process, and serum antinuclear antibodies (ANA) and rheumatoid factors (RF), both of which
reflect the immunological status of the individual, have also been investigated as factors
influencing disease progression, and/or accounting for between individual differences in
response to what appears to be the same level and dose of exposure.

Epidemiology including natural history

The prevalence of radiological asbestosis documented in workforce-based surveys varies
considerably and, as might be expected, these differences relate to differences in exposure
duration and intensity rather than to differences between workplaces. However, even when these
are taken into account by restricting comparison of exposure-response relationships to those
studies in which exposure estimates were individualized for each cohort member and based on
job history and industrial hygiene measurements, marked fibre- and process-related gradients
are evident (Liddell and Miller 1991). For instance, a 5% prevalence of small irregular
opacities (1/0 or more on the ILO classification) resulted from a cumulative exposure to
approximately 1,000 fibre years in Quebec chrysotile miners, to approximately 400 fibre
years in Corsican chrysotile miners, and to under 10 fibre years in South African and
Australian crocidolite miners. By contrast, for textile workers exposed to Quebec chrysotile, a
5% prevalence of irregular small opacities resulted from a cumulative exposure to under 20
fibre years. Lung dust burden studies are also consistent with a fibre gradient for evoking
asbestosis: in 29 men in Pacific shipyard trades with asbestosis associated with mainly
amosite exposure, the average lung burden found in autopsy material was 10 million amosite
fibres per gram of dry lung tissue compared to an average chrysotile burden of 30 million
fibres per gram of dry lung tissue in 23 Quebec chrysotile miners and millers (Becklake and
Case 1994). Fibre size distribution contributes to but does not fully explain these differences,
suggesting that other plant-specific factors, including other workplace contaminants, may play
a role.
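
The cumulative exposures quoted above are expressed in fibre-years, that is, the sum over a
working lifetime of airborne fibre concentration multiplied by the time spent at that
concentration. A minimal sketch of that bookkeeping follows (Python; the job history values
are hypothetical and purely illustrative).

def cumulative_exposure_fibre_years(job_history):
    # job_history: list of (average fibre concentration in f/ml, duration in years) pairs.
    # Cumulative exposure is the sum of concentration times duration, in (f/ml)-years,
    # the "fibre years" used in the exposure-response comparisons above.
    return sum(concentration * years for concentration, years in job_history)

# Hypothetical miner: 10 years at 5 f/ml, then 20 years at 2 f/ml.
history = [(5.0, 10.0), (2.0, 20.0)]
print(cumulative_exposure_fibre_years(history))   # 90.0 fibre-years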

Asbestosis may remain stable or progress, but probably does not regress. Progression rates
increase with age, with cumulative exposure, and with the extent of existing disease, and are
more likely to occur if exposure was to crocidolite. Radiological asbestosis can both progress
and appear long after exposure ceases. Deterioration of lung functions may also occur after
exposure has ceased (Liddell and Miller 1991). An important issue (and one on which the
epidemiological evidence is not consistent) is whether continued exposure increases the
chance of progression once radiological changes have developed (Browne 1994; Liddell and
Miller 1991). In some jurisdictions, for example in the United Kingdom, the number of cases
of asbestosis presenting for workers’ compensation has decreased over the last decades,
reflecting the workplace controls put in place in the 1970s (Meredith and McDonald 1994). In
other countries, for instance in Germany (Gibbs, Valic and Browne 1994), rates of asbestosis
continue to rise. In the United States, age-adjusted asbestos-related mortality rates (based on
mention of asbestosis on the death certificate as either the cause of death or as playing a
contributory role) for age 1+ increased from under 1 per million in 1960 to over 2.5 in 1986,
and to 3 in 1990 (US Dept. of Health and Human Services, 1994).

Diagnosis and case management

Clinical diagnosis depends on:

1. establishing the presence of disease
2. establishing whether exposure occurred and
3. evaluating whether the exposure was likely to have caused the disease.

The chest radiograph remains the key tool to establish the presence of disease, supplemented
by HRCT if available in cases where there is doubt. Other objective features are the presence
of basal crackles, while lung function level, including exercise challenge, is useful in
establishing impairment, a step required for compensation evaluation. Since neither the
pathology, radiological changes, nor the symptoms and lung function changes associated with
asbestosis are different from those associated with interstitial lung fibrosis due to other
causes, establishing exposure is key to diagnosis. In addition, the many uses of asbestos
products whose content is often not known to the user make an exposure history a much
more daunting exercise in interrogation than was previously thought. If the exposure history
appears inadequate, identification of the agent in biological specimens (sputum,
bronchoalveolar lavage and when indicated, biopsy) can corroborate exposure; dose in the
form of lung burden can be assessed quantitatively by autopsy or in surgically removed lungs.
Evidence of disease activity (from a gallium-67 scan or bronchoalveolar lavage) may assist in
estimating prognosis, a key issue in this irreversible condition. Even in the absence of
consistent epidemiological evidence that progression is slowed once exposure ceases, such a
course may be prudent and certainly desirable. It is not, however, a decision easy to take or
recommend, particularly for older workers with little opportunity for job retraining. Certainly
exposure should not continue in any workplace not in conformity with current permissible
exposure levels. Criteria for the diagnosis of asbestosis for epidemiological purposes are less
demanding, particularly for cross-sectional workforce-based studies which include those well
enough to be at work. These usually address issues of causality and often use markers that
indicate minimal disease, based either on lung function level or on changes in the chest
radiograph. By contrast, criteria for diagnosis for medicolegal purposes are considerably more
stringent and vary according to the legal administrative systems under which they operate,
varying between states within countries as well as between countries.

Asbestos-Related Pleural Disease

Historical perspective

Early descriptions of asbestosis mention fibrosis of the visceral pleura as part of the disease
process (see “Pathology” above). In the 1930s there were also reports of circumscribed
pleural plaques, often calcified, in the parietal pleura (which lines the chest wall and covers
the surface of the diaphragm), and occurring in those with environmental, not occupational,
exposure. A 1955 workforce-based study of a German factory reported a 5% prevalence of
pleural changes on the chest radiograph, thereby drawing attention to the fact that pleural
disease might be the primary if not the only manifestation of exposure. Visceroparietal pleural
reactions, including diffuse pleural fibrosis, benign pleural effusion (reported first in the
1960s) and rounded atelectasis (first reported in the 1980s) are now all considered interrelated
reactions which are usefully distinguished from pleural plaques on the basis of pathology and
probably pathogenesis, as well as clinical features and presentation. In jurisdictions in which
the prevalence and/or incidence rates of asbestosis are decreasing, pleural manifestations,
increasingly common in surveys, are increasingly the basis of detection of past exposure, and
increasingly the reason for an individual seeking medical attention.

Pleural plaques

Pleural plaques are smooth, raised, white irregular lesions covered with mesothelium and
found on the parietal pleura or diaphragm (figure 6). They vary in size, are often multiple, and
tend to calcify with increasing age (Browne 1994). Only a small proportion of those detected
at autopsy are seen on the chest radiograph, though most can be detected by HRCT. In the
absence of pulmonary fibrosis, pleural plaques may cause no symptoms and may be detected
only in screening surveys using chest radiography. Nevertheless, in workforce surveys, they
are consistently associated with modest but measurable lung function impairment, mainly in
VC and FVC (Ernst and Zejda 1991). In radiological surveys in the United States, rates of 1%
are reported in men without known exposure, and 2.3% in men, including those in urban
populations, with occupational exposure. Rates are also higher in communities with asbestos
industries or high usage rates, while in some workforces, such as sheet metal workers,
insulators, plumbers and railroad workers, rates may exceed 50%. In a 1994 Finnish autopsy
survey of 288 men aged 35 to 69 years who died suddenly, pleural plaques were detected in
58%, and their prevalence tended to increase with age, with the probability of exposure (based
on history), with the concentration of asbestos fibres in lung tissue, and with smoking
(Karjalainen et al. 1994). The aetiologic fraction of plaques attributable to a lung dust burden
of 0.1 million fibres per gram of lung tissue was estimated at 24% (this value is considered to
be an underestimate). Lung dust burden studies are also consistent with a fibre gradient in
potency for evoking pleural reactions; in 103 men with amosite exposure in Pacific shipyard
trades, all with pleural plaques, the average autopsy lung burden was 1.4 million fibres per
gram of lung tissue, compared to 15.5 and 75 million fibres per gram of lung tissue for
chrysotile and tremolite respectively in 63 Quebec chrysotile miners and millers examined in
the same way (Becklake and Case 1994).
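
The aetiologic fraction quoted above is an attributable-fraction estimate. One standard
epidemiological formulation, sketched below (Python; the prevalence figures in the example
are hypothetical and are not the values or the method used in the Karjalainen study), expresses
the fraction of cases among the exposed that would not have occurred in the absence of
exposure, based on the relative risk or prevalence ratio.

def attributable_fraction_exposed(relative_risk):
    # Fraction of cases among the exposed attributable to the exposure:
    # AF = (RR - 1) / RR, one standard epidemiological formulation.
    return (relative_risk - 1.0) / relative_risk

# Hypothetical example: plaque prevalence of 0.60 above a given lung burden threshold
# versus 0.45 below it gives a prevalence ratio of about 1.33 and an AF of about 0.25.
prevalence_ratio = 0.60 / 0.45
print(round(attributable_fraction_exposed(prevalence_ratio), 2))
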
Figure 6. Asbestos-related pleural disease

A diaphragmatic pleural plaque (A) is seen in an autopsy specimen as a smooth well defined
focus of fibrosis on the diaphragm of a construction worker with incidental exposure to
asbestos and asbestos bodies in the lung. Visceral pleural fibrosis (B) is seen on an inflated
autopsy lung specimen, and radiates from two central foci on the visceral pleura of the lung of
a construction worker with asbestos exposure who also exhibited several parietal pleural
plaques. Source: Fraser et al. 1990.

Visceroparietal pleural reactions

Though the pathology and pathogenesis of the different forms of visceroparietal reaction to
asbestos exposure are almost certainly interrelated, their clinical manifestations and how they
come to attention differ. Acute exudative pleural reactions may occur in the form of
effusions in subjects whose lungs do not manifest other asbestos-related disease, or as an
exacerbation in the severity and extent of existing pleural reactions. Such pleural effusions are
called benign by way of distinguishing them from effusions associated with malignant
mesothelioma. Benign pleural effusions occur typically 10 to 15 years after first exposure (or
after limited past exposure) in individuals in their 20s and 30s. They are usually transient but
may recur, may involve one or both sides of the chest simultaneously or sequentially, and
may be either silent or associated with symptoms including chest tightness and/or pleural pain
and dyspnoea. The pleural fluid contains leucocytes, often blood, and is albumin-rich; only
rarely does it contain asbestos bodies or fibres which may, however, be found in biopsy
material of the pleura or underlying lung. Most benign pleural effusions clear spontaneously,
though in a small proportion of subjects (of the order of 10% in one series) these effusions
may evolve into diffuse pleural fibrosis (see figure 6), with or without the development of
lung fibrosis. Local pleural reactions may also fold in upon themselves, trapping lung tissue
and causing well defined lesions called rounded atelectasis or pseudotumour because they
may have the radiological appearance of lung cancer. In contrast to pleural plaques, which
seldom cause symptoms, visceroparietal pleural reactions are usually associated with some
shortness of breath as well as lung function impairment, particularly when there is obliteration
of the costophrenic angle. In one study, for instance, average FVC deficit was 0.07 l when the
chest wall was involved and 0.50 l when the costophrenic angle was involved (Ernst and
Zejda in Liddell and Miller 1991). As already indicated, the distribution and determinants of
pleural reactions vary considerably between workforces, with prevalence rates increasing
with:

1. estimated residence time of fibre in the lung (measured as time since first exposure)
2. exposures primarily to or including amphibole, and
3. possibly intermittence of exposure, given the high rates of contamination in occupations in
which use of asbestos materials is intermittent but exposure is probably heavy.

Lung Cancer

Historical perspective

The 1930s saw the publication of a number of clinical case reports from the United States, the
United Kingdom and Germany of lung cancer (a condition much less common then than it is
today) in asbestos workers, most of whom also had asbestosis of varying degrees of severity.
Further evidence of the association between the two conditions was provided in the 1947
Annual Report of His Majesty’s Chief Inspector of Factories, which noted that lung cancer
had been reported in 13.2% of male deaths attributed to asbestosis in the period 1924 to 1946
and in only 1.3% of male deaths attributed to silicosis. The first study to address the causal
hypothesis was a cohort mortality study of a large United Kingdom asbestos textile plant
(Doll 1955), one of the first such workforce-based studies, and by 1980, after at least eight
such studies in as many workforces had confirmed an exposure-response relationship, the
association was generally accepted as causal (McDonald and McDonald in Antman and
Aisner 1987).

Clinical features and pathology

In the absence of other associated asbestos disease, the clinical features and criteria for the
diagnosis of asbestos-associated lung cancer are no different from those for lung cancer not
associated with asbestos exposure. Originally, asbestos-associated lung cancers were
considered to be scar cancers, similar to lung cancer seen in other forms of diffuse lung
fibrosis such as scleroderma. Features which favoured this view were their location in the
lower lung lobes (where asbestosis is usually more marked), their sometimes multicentric
origin and a preponderance of adenocarcinoma in some series. However, in most reported
workforce-based studies, the distribution of cell types was no different from that seen in
studies of non-asbestos-exposed populations, supporting the view that asbestos itself may be a
human carcinogen, a conclusion reached by the International Agency for Research on Cancer
(World Health Organization: International Agency for Research on Cancer 1982). Most but
not all asbestos-related lung cancers occur in association with radiologic asbestosis (see
below).

Epidemiology

Cohort studies confirm that lung cancer risk increases with exposure, though the fractional
rate of increase for each fibre per millilitre per year of exposure varies, and is related both to fibre
type and to industrial process (Health Effects Institute—Asbestos Research 1991). For
instance, for mainly chrysotile exposures in mining, milling and friction product manufacture,
the increase ranged from approximately 0.01 to 0.17%, and in textile manufacture from 1.1 to
2.8%, while for exposure to amosite insulation products and some cement product exposures
involving mixed fibre, rates of as high as 4.3 and 6.7% have been recorded (Nicholson 1991).
Cohort studies in asbestos workers also confirm that cancer risk is demonstrable for non-
smokers and that risk is increased (closer to multiplicative than additive) by cigarette smoking
(McDonald and McDonald in Antman and Aisner 1987). The relative risk for lung cancer
declines after exposure ceases, although the decline appears slower than that which occurs
after quitting smoking. Lung dust burden studies are also consistent with a fibre gradient in
lung cancer production; 32 men in Pacific shipyard trades with mainly amosite exposure had a
lung dust burden of 1.1 million amosite fibres per gram of dry lung tissue compared to 36
Quebec chrysotile miners with an average lung dust burden of 13 million chrysotile fibres per
gram of lung tissue (Becklake and Case 1994).
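
The exposure-response figures quoted above are often summarized with a simple linear relative-risk model, in which the excess lung cancer risk is proportional to cumulative exposure in fibre/ml-years. The short Python sketch below is not part of the original studies; the coefficient values are merely illustrative points chosen within the ranges cited (Nicholson 1991).

```python
# Hedged sketch of the linear relative-risk model: RR = 1 + K_L * cumulative exposure,
# with cumulative exposure in fibre/ml-years and K_L the fractional increase in lung
# cancer risk per fibre/ml-year.  K_L values below are illustrative only.

K_L = {
    "chrysotile mining/milling/friction products": 0.0001,  # ~0.01% per f/ml-yr
    "chrysotile textile manufacture": 0.02,                 # ~2% per f/ml-yr
    "amosite insulation / mixed-fibre cement": 0.05,        # ~5% per f/ml-yr
}


def relative_risk(k_l: float, concentration_f_ml: float, years: float) -> float:
    """Relative risk of lung cancer under the simple linear model (illustrative)."""
    cumulative_exposure = concentration_f_ml * years  # fibre/ml-years
    return 1.0 + k_l * cumulative_exposure


# Example: 20 years at 1 f/ml (the ceiling recommended by the WHO meeting
# discussed later in this article) in each industrial setting.
for process, k in K_L.items():
    print(f"{process}: RR = {relative_risk(k, 1.0, 20):.2f}")
```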

Relationship to asbestosis

In the 1955 autopsy study of causes of death in 102 workers employed in the United Kingdom
asbestos textile factory referred to above (Doll 1955), lung cancer was found in 18
individuals, 15 of whom also had asbestosis. All subjects in whom both conditions were
found had worked for at least 9 years before 1931, when national regulations for asbestos dust
control were introduced. These observations suggested that as exposure levels decreased, the
competing risk of death from asbestosis also decreased and workers lived long enough to
exhibit the development of cancer. In most workforce-based studies, older workers with long
service have some pathological evidence of asbestosis (or asbestos-related small airways
disease) at autopsy even though this may be minimal and not detectable on the chest
radiograph in life (McDonald and McDonald in Antman and Aisner 1987). Several but not all
cohort studies are consistent with the view that not all excess lung cancers in populations
exposed to asbestos are related to asbestosis. More than one pathogenetic mechanism may in
fact be responsible for lung cancers in individuals exposed to asbestos depending on the site
and deposition of the fibres. For instance, long thin fibres, which are deposited preferentially
at airway bifurcations, are thought to become concentrated and to act as inducers of the
process of carcinogenesis through chromosomal damage. Promoters of this process may
include continued exposure to asbestos fibres or to tobacco smoke (Lippman 1995). Such
cancers are more likely to be squamous cell in type. By contrast, in lungs which are the site of
fibrosis, carcinogenesis may result from the fibrotic process: such cancers are more likely to
be adenocarcinomas.

Implications and attributability

While determinants of excess cancer risk can be derived for exposed populations,
attributability in the individual case cannot. Obviously, attributability to asbestos exposure is
more likely and credible in an exposed individual with asbestosis who has never smoked than
in an exposed individual without asbestosis who smokes. Nor can this probability be modelled
reasonably. Lung dust burden measurements may supplement a careful clinical assessment
but each case must be evaluated on its merits (Becklake 1994).

Malignant Mesothelioma

Pathology, diagnosis, ascertainment and clinical features

Malignant mesotheliomas arise from the serous cavities of the body. Approximately two-
thirds arise in the pleura, about one-fifth in the peritoneum, while the pericardium and tunica
vaginalis are much less frequently affected (McDonald and McDonald in Liddell and Miller
1991). Since mesothelial cells are pluripotential, the histological features of mesothelial
tumours may vary; in most series, epithelial, sarcomatous and mixed forms account for
approximately 50, 30 and 10% of cases respectively. Diagnosis of this rare tumour, even in
the hands of experienced pathologists, is not easy, and mesothelioma panel pathologists often
confirm only a small percentage, in some studies less than 50%, of cases submitted for review.
A variety of cytological and immunohistochemical techniques have been developed to assist
in differentiating malignant mesothelioma from the main alternative clinical diagnoses,
namely, secondary cancer or reactive mesothelial hyperplasia; this remains an active research
field in which expectations are high but findings inconclusive (Jaurand, Bignon and Brochard
1993). For all these reasons, ascertainment of cases for epidemiological surveys is not
straightforward, and even when based on cancer registries, may be incomplete. In addition,
confirmation by expert panels using specified pathological criteria is necessary to assure
comparability in criteria for registration.

Clinical features

Pain is usually the presenting feature. For pleural tumours, this starts in the chest and/or
shoulders, and may be severe. Breathlessness follows, associated with pleural effusion and/or
progressive encasement of the lung by tumour, and weight loss. With peritoneal tumours,
abdominal pain is usually accompanied by swelling. Imaging features are illustrated in figure
7. The clinical course is usually rapid and median survival times, six months in a 1973 report
and eight months in a 1993 report, have changed little over the last two decades, despite the
greater public and medical awareness which often leads to earlier diagnosis and despite
advances in diagnostic techniques and an increase in the number of treatment options for
cancer.

Figure 7. Malignant mesothelioma

Seen on an overpenetrated chest roentgenogram (A) as a large mass in the axillary region.
Note the associated reduction in volume of the right hemithorax with marked irregular
nodular thickening of the pleura of the whole right lung. CT scan (B) confirms the extensive
pleural thickening involving parietal and mediastinal pleura (closed arrows) in and around the
ribs. Source: Fraser et al. 1990

Epidemiology

In the 15 years which followed the 1960 report of the mesothelioma case series from the
Northwest Cape, South Africa (Wagner 1996), international confirmation of the association
came from reports of other case series from Europe (United Kingdom, France, Germany,
Holland), the United States (Illinois, Pennsylvania and New Jersey) and Australia, and of case
control studies from the United Kingdom (4 cities), Europe (Italy, Sweden, Holland) and from
the United States and Canada. Odds ratios in these studies ranged from 2 to 9. In Europe in
particular, the association with shipyard occupations was strong. In addition, proportional
mortality studies in asbestos-exposed cohorts suggested that risk was associated both with
fibre type and with industrial process, with rates attributable to mesothelioma ranging from
0.3% in chrysotile mining to 1% in chrysotile manufacturing, compared with 3.4% in
amphibole mining and manufacturing and as high as 8.6% for exposure to mixed fibre in
insulation (McDonald and McDonald in Liddell and Miller 1991). Similar fibre gradients are
shown in cohort mortality studies which, given the short survival times of these tumours, are a
reasonable reflection of incidence. These studies also show longer latent periods when
exposure was to chrysotile compared to amphiboles. Geographical variation in incidence has
been documented using Canadian age-and sex-specific rates for 1966 to 1972 to calculate
expected rates (McDonald and McDonald in Liddell and Miller 1991); rate ratios (values
actually observed over expected) were 0.8 for the United States (1972), 1.1 for Sweden (1958
to 1967), 1.3 for Finland (1965 to 1969), 1.7 for the United Kingdom (1967 to 1968), and 2.1 for
the Netherlands (1969 to 1971). While technical factors including ascertainment may
obviously contribute to the variation recorded, the results do suggest higher rates in Europe
than in North America.
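
The rate ratios quoted above are obtained by indirect standardization: age- and sex-specific reference rates (here, the Canadian rates for 1966 to 1972) are applied to the person-years of the study population to derive an expected number of cases, and the observed count is divided by that expectation. The sketch below illustrates the arithmetic only; the reference rates, person-years and case count are invented for the example.

```python
# Sketch of the observed/expected (indirectly standardized) rate ratio used to
# compare mesothelioma incidence between countries.  All numbers below are
# invented for illustration; only the method corresponds to the text.

reference_rates = {        # cases per million person-years, by age band
    "35-49": 2.0,
    "50-64": 10.0,
    "65+": 25.0,
}

study_person_years = {     # person-years at risk in the country being compared
    "35-49": 4_000_000,
    "50-64": 2_500_000,
    "65+": 1_200_000,
}

observed_cases = 95        # hypothetical count of registered mesotheliomas

expected_cases = sum(
    reference_rates[age] * study_person_years[age] / 1_000_000
    for age in reference_rates
)

rate_ratio = observed_cases / expected_cases
print(f"expected cases = {expected_cases:.1f}, rate ratio = {rate_ratio:.2f}")
```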

Time trends and gender differences in mesothelioma incidence have been used as a measure
of the health impact of asbestos exposure on populations. The best estimates for overall rates
in industrialized countries before 1950 are under 1.0 per million for men and women
(McDonald and McDonald in Jaurand and Bignon 1993). Subsequently, rates increased
steadily in men and less steeply, or not at all, in women. For instance, overall rates in men and
women per million were reported at 11.0 and under 2.0 in the United States in 1982, 14.7 and
7.0 in Denmark for 1975-80, 15.3 and 3.2 in the United Kingdom for 1980-83, and 20.9 and
3.6 in the Netherlands for 1978-87. Higher rates in men and women, but excluding younger
subjects, were reported for crocidolite mining countries: 28.9 and 4.7 respectively in Australia
(aged 2+) for 1986, and 32.9 and 8.9 respectively in South African Whites (aged 1+) for 1988
(Health Effects Institute—Asbestos Research 1991). The rising rates in men are likely to
reflect occupational exposure, and if so, they should level off or decrease within the 20- to 30-
year “incubation” period following the introduction of workplace controls and reduction of
exposure levels in most workplaces in most industrialized countries in the 1970s. In countries
in which the rates in women are rising, this increase may reflect their increasing engagement
in occupations with risk exposure, or the increasing environmental or indoor contamination of
urban air (McDonald 1985).

Aetiology

Environmental factors are clearly the main determinants of mesothelioma risk, exposure to
asbestos being the most important, though the occurrence of family clusters maintains interest
in the potential role of genetic factors. All asbestos fibre types have been implicated in
mesothelioma production, including anthophyllite for the first time in a recent report from
Finland (Meurman, Pukkala and Hakama 1994). However, there is a substantial body of
evidence, from proportional and cohort mortality studies and lung burden studies, which
suggests the role of a fibre gradient in mesothelioma production, risk being higher for
exposures to mainly amphiboles or amphibole-chrysotile mixtures, compared with mainly
chrysotile exposures. In addition, there are rate differences between workforces for the same
fibre at what appears to be the same exposure level; these remain to be explained, though fibre
size distribution is a likely contributing factor.
The role of tremolite has been widely debated, a debate sparked by the evidence of its
biopersistence in lung tissue, animal and human, compared to that of chrysotile. A plausible
hypothesis is that the many short fibres which reach and are deposited in peripheral lung
airways and alveoli are cleared to subpleural lymphatics where they collect; their potency in
mesothelioma production depends on their biopersistence in contact with pleural surfaces
(Lippmann 1995). In human studies, mesothelioma rates are lower for populations exposed at
work to chrysotile relatively uncontaminated by tremolite (for instance, in Zimbabwean
mines) compared to those exposed to chrysotile which is so contaminated (for instance, in
Quebec mines), and these findings have been replicated in animal studies (Lippmann 1995).
Also, in a multivariate analysis of lung fibre burden in material from a Canada-wide
mesothelioma case control study (McDonald et al. 1989), the results suggested that most if
not all mesotheliomas could be explained by tremolite lung fibre burden. Finally, a recent
analysis of the mortality in the cohort of over 10,000 Quebec chrysotile miners and millers
born between 1890 and 1920, and followed to 1988 (McDonald and McDonald 1995),
supports this view: in almost 7,300 deaths, the 37 mesothelioma deaths were concentrated in
certain mines from the Thetford area, yet the lung burden of 88 cohort members from the
mines implicated did not differ from that of miners from other mines in terms of chrysotile
fibre burden, only in terms of tremolite burden (McDonald et al. 1993).

What has been called the tremolite question is perhaps the most important of the currently
debated scientific issues, and it also has public health implications. Note must also be made of
the important fact that in all series and jurisdictions, a certain proportion of cases occur
without reported asbestos exposure, and that only in some of these cases do lung dust burden
studies point to previous environmental or occupational exposure. Other occupational
exposures have been implicated in mesothelioma production, for instance in talc, vermiculite
and possibly mica mining, but in these, the ore contained either tremolite or other fibres
(Bignon, Peto and Saracci 1989). An open search for other exposures, occupational or non-
occupational, to fibres, inorganic and organic, and to other agents which may be associated
with mesothelioma production, should continue.

Other Asbestos-Related Diseases

Chronic airways disease

Usually included under this rubric are chronic bronchitis and chronic obstructive pulmonary
disease (COPD), both of which can be diagnosed clinically, and emphysema, until recently
diagnosed only by pathological examination of lungs removed at autopsy or otherwise
(Becklake 1992). A major cause is smoking, and, over the past decades, mortality and
morbidity due to chronic airways disease has increased in most industrialized countries.
However, with the decline of pneumoconiosis in many workforces, evidence has emerged to
implicate occupational exposures in the genesis of chronic airways disease, after taking into
account the dominant role of smoking. All forms of chronic airways disease have been shown
to be associated with work in a variety of dusty occupations, including those occupations in
which an important component of the dust contaminating the workplace was asbestos (Ernst
and Zejda in Liddell and Miller 1991). Total pollutant burden, rather than exposure to any of
its particular components, in this case asbestos dust, is thought to be implicated, in much the
same way as the effect of smoking exposure on chronic airways diseases is viewed, that is, in
terms of total exposure burden (e.g., as pack-years), not exposure to any one of the over 4,000
constituents of tobacco smoke. (See elsewhere in this volume for a further discussion of the
relationship between occupational exposures and chronic airways disease.)

Other cancers

In several of the earlier cohort studies of asbestos exposed workers, mortality attributable to
all cancers exceeded that expected, based on national or regional vital statistics. While lung
cancer accounted for most of the excess, other cancers implicated were gastro-intestinal
cancers, laryngeal cancer and cancer of the ovaries, in that order of frequency. For gastro-
intestinal cancers, (including those affecting the oesophagus, the stomach, the colon and the
rectum), the relevant exposure in occupational cohorts is presumed to be via swallowing
asbestos-laden sputum raised from the major airways in the lung, and in earlier times, (before
protection measures were taken against exposure at lunch sites) direct contamination of food
in workplaces which had no lunch areas separate from working areas of plants and factories.
Retrograde flow via the thoracic duct from lymph nodes draining the lung might also occur
(see “Fate of inhaled fibres”, page 10.54). Because the association was inconsistent in the
different cohorts studied, and because exposure-response relationships were not always seen,
there has been a reluctance to accept the evidence of the association between these cancers
and occupational asbestos exposure as causal (Doll and Peto 1987; Liddell and Miller 1991).

Cancer of the larynx is much less common than gastro-intestinal or lung cancer. As early as
the 1970s, there were reports of an association between cancer of the larynx and asbestos
exposure. Like lung cancer, a major risk factor and cause of laryngeal cancer is smoking.
Laryngeal cancer is also strongly associated with alcohol consumption. Given the location of
the larynx (an organ exposed to all the inhaled pollutants to which the lungs are exposed) and
given the fact that it is lined by the same epithelium that lines the major bronchi, it is certainly
biologically plausible that cancer of the larynx occurs as a result of asbestos exposure.
However, the overall evidence available to date is inconsistent, even from large cohort studies
such as those of the Quebec and Balangero (Italy) chrysotile miners, possibly because it is a rare
cancer, and there is still reluctance to regard the association as causal (Liddell and Miller
1991), despite its biological plausibility. Cancer of the ovaries has been recorded in excess of
expected in three cohort studies (WHO 1989). Misdiagnosis, in particular as peritoneal
mesothelioma, may explain most of the cases (Doll and Peto 1987).

Prevention, Surveillance and Assessment

Historical and current approaches

Prevention of any pneumoconiosis, including asbestosis, has traditionally been through:

1. engineering and work practices to maintain airborne fibre levels as low as possible, or at
least in conformity with permissible exposure levels usually set by law or regulation
2. surveillance, conducted to record trends of markers of disease in exposed populations and
monitor the results of control measures
3. education and product labelling aimed at assisting workers as well as the general public in
avoiding non-occupational exposure.

Permissible exposure levels were originally directed at controlling asbestosis and were based
on industrial hygiene measurements in million particles per cubic foot, gathered using the
same methods as were used for the control of silicosis. With the shift in biological focus to
fibres, in particular long thin ones, as the cause of asbestosis, methods more appropriate to
their identification and measurement in air were developed and, given these methods, the
focus on the more abundant short fibres which contaminate most workplaces was minimized.
Aspect (length to diameter) ratios for most particles of milled chrysotile asbestos fall within
the range 5:1 to 20:1, going up to 50:1, in contrast to most particles of milled amphibole
asbestos (including cleavage fragments) whose values fall below 3:1. The introduction of the
membrane filter for fibre counting of air samples led to an arbitrary industrial hygiene and
medical definition of a fibre as a particle at least 5μm long, 3μm or less thick, and with a
length to width ratio of at least 3:1. This definition, used for many of the studies of exposure-
response relationships, forms the scientific basis for setting environmental standards.
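
The counting rule embodied in that definition is simple enough to state explicitly. The following sketch is an illustration of the three criteria, not a standardized counting protocol; the particle dimensions are hypothetical values such as might be read from a microscope field.

```python
# Minimal sketch of the membrane-filter counting rule described above: a particle
# is counted as a fibre if it is at least 5 um long, no more than 3 um thick,
# and has a length-to-width ratio of at least 3:1.

def is_countable_fibre(length_um: float, width_um: float) -> bool:
    """Apply the regulatory fibre definition used for exposure measurement."""
    return (
        length_um >= 5.0
        and width_um <= 3.0
        and length_um / width_um >= 3.0
    )

# Hypothetical (length, width) pairs in micrometres.
particles = [(8.0, 0.5), (4.0, 0.3), (10.0, 4.0), (6.0, 2.5)]
fibre_count = sum(is_countable_fibre(length, width) for length, width in particles)
print(f"countable fibres: {fibre_count} of {len(particles)} particles")
```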

For instance, it was used in a meeting sponsored by the World Health Organization (1989) to
propose occupational exposure limits and has been adopted by agencies such as the US
Occupational Safety and Health Administration; it is retained mainly for reasons of
comparability. The WHO meeting, chaired by Sir Richard Doll, while recognizing that the
occupational exposure limit in any country can only be set by the appropriate national body,
recommended that countries having high limits should take urgent steps to lower the
occupational exposure for an individual worker to 2 f/ml (eight-hour time-weighted average)
and that all countries should move as quickly as possible to 1 f/ml (eight-hour time-weighted
average) if they had not already done so. With the decrease in asbestosis rates in some
industrialized countries, and concern over asbestos-related cancers in all, attention has now
shifted to determining whether the same fibre parameters—that is, at least 5μm long, 3μm or
less thick, and with a length to width ratio of at least 3:1—are also appropriate for controlling
carcinogenesis (Browne 1994). A current theory of asbestos carcinogenesis implicates short
as well as long fibres (Lippmann 1995). In addition, given the evidence for a fibre gradient in
mesothelioma and lung cancer production, and to a lesser extent, for asbestosis production, an
argument could be made for permissible exposure levels taking fibre type into account. Some
countries have addressed the issue by banning the use (and thus the import) of crocidolite, and
setting more stringent exposure levels for amosite, namely 0.1 f/ml (McDonald and McDonald
1987).

Exposure levels in the workplace

Permissible exposure levels embody the hypothesis, based on all available evidence, that
human health will be preserved if exposure is maintained within those limits. Revision of
permissible exposure levels, when it occurs, is invariably towards greater stringency (as
described in the paragraph above). Nevertheless, despite good compliance with workplace
controls, cases of disease continue to occur, for reasons of personal susceptibility (for
instance, higher-than-average fibre retention rates) or because of failure of workplace controls
for certain jobs or processes. Engineering controls, improved workplace practices and the use
of substitutes, described elsewhere in the chapter, have been implemented internationally
(Gibbs, Valic and Browne 1994) in larger establishments through industry, union and other
initiatives. For instance, according to a 1986 worldwide industry review, compliance with the
current recommended standard of 1 f/ml had been achieved at 83% of production sites (mines
and mills) covering 13,499 workers in 6 countries; in 96% of 167 cement factories operating
in 23 countries; in 71% of 40 textile factories covering over 2,000 workers operating in 7
countries; and in 97% of 64 factories manufacturing friction materials, covering 10,190
workers in 10 countries (Bouige 1990). However, a not unimportant proportion of such
workplaces still do not comply with regulations, not all manufacturing countries participated
in this survey, and the anticipated health benefits are evident only in some national statistics,
not in others (“Diagnosis and case management”, page 10.57). Control in demolition
processes and small enterprises using asbestos continues to be less than successful, even in
many industrialized countries.

Surveillance

The chest radiograph is the main tool for asbestosis surveillance; cancer registries and
national statistics are the main sources for asbestos-related cancers. A commendable initiative in international
surveillance of mining, tunnelling and quarrying, undertaken by the ILO through voluntary
reporting from governmental sources, focuses on coal and hard-rock mining but could include
asbestos. Unfortunately, follow-through has been poor, with the last report, which was based
on data for 1973-77, being published in 1985 (ILO 1985). Several countries issue national
mortality and morbidity data, an excellent example being the Work-related Lung Disease
Surveillance Report for the United States, a report referred to above (USDHSS 1994). Such
reports provide information to interpret trends and evaluate the impact of control levels at a
national level. Larger industries should (and many do) keep their own surveillance statistics,
as do some unions. Surveillance of smaller industries may require specific studies at
appropriate intervals. Other sources of information include programmes such as the
Surveillance of Work-related Respiratory Diseases (SWORD) in the United Kingdom, which
gathers regular reports from a sample of the country’s chest and occupational physicians
(Meredith and McDonald 1994), and reports from compensation boards (which often,
however, do not provide information on workers at risk).

Product labelling, education and the information highway

Mandatory product labelling together with worker education and education of the general
public are powerful tools in prevention. While in the past, this took place within the context of
worker organizations, worker management committees, and union education programmes,
future approaches could exploit electronic highways to make available databases on health
and safety in toxicology and medicine.

Exposure in buildings and from water supplies

In 1988, a review of potential health risks associated with working in buildings constructed
using asbestos-containing materials was mandated by the US Congress (Health Effects
Institute—Asbestos Research 1991). The results of a large number of indoor sampling studies
from Europe, the United States and Canada were used in risk estimates. The lifetime risk for
premature cancer death was estimated to be 1 per million for those exposed for 15 years in
schools (for estimated exposure levels ranging from 0.0005 to 0.005 f/ml) and 4 per million for
those exposed for 20 years working in office buildings (for estimated exposure levels ranging
from 0.0002 to 0.002 f/ml). For comparison, the risk for occupational exposure to 0.1 f/ml (i.e.,
in compliance with the permissible exposure limit proposed by the US Occupational Safety
and Health Administration) for 20 years was estimated at 2,000 per million exposed.
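
The contrast between these scenarios can be appreciated by expressing each as a cumulative exposure (concentration multiplied by duration), as in the short sketch below. This is only an illustrative tabulation of the figures quoted above; the underlying HEI risk model also accounts for age at exposure and latency, which the sketch ignores.

```python
# Illustrative tabulation of the exposure scenarios quoted above, expressed as
# cumulative exposure (concentration x duration).  Risk figures are those given
# in the text.

scenarios = {
    # name: (low f/ml, high f/ml, years of exposure, estimated lifetime risk per million)
    "school occupant (15 yr)":    (0.0005, 0.005, 15, 1),
    "office worker (20 yr)":      (0.0002, 0.002, 20, 4),
    "worker at 0.1 f/ml (20 yr)": (0.1,    0.1,   20, 2000),
}

for name, (low, high, years, risk_per_million) in scenarios.items():
    cumulative_low, cumulative_high = low * years, high * years
    print(f"{name}: {cumulative_low:.3f}-{cumulative_high:.3f} f/ml-yr, "
          f"estimated lifetime risk {risk_per_million} per million")
```
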
Measurements in drinking water in urban communities show considerable variation, from
undetectable levels, through 0.7 million f/l in Connecticut, USA, to levels ranging from 1.1
million to 1.3 billion f/l in the mining areas of Quebec (Bignon, Peto and
Saracci 1989). Some contamination may also occur from the asbestos cement pipes which
service most urban water reticulation systems in the world. However, a working group
which reviewed the evidence in 1987 did not discount the potential associated hazard, but did
not regard the health risks associated with asbestos ingestion as “one of the most pressing
public health hazards” (USDHHS 1987), a view concordant with the concluding remarks in
an IARC (WHO) monograph on non-occupational exposure to mineral fibres (Bignon, Peto
and Saracci 1989).

Asbestos and other fibres in the 21st century

The first half of the twentieth century was characterized by what could be described as gross
neglect of asbestos-related ill health. The reasons for this before the Second World War are
not clear; the scientific basis for control was there but perhaps not the will and not the worker
militancy. During the war, there were other national and international priorities, and after the
war, pressures of urbanization by a rapidly increasing world population took precedence, and
perhaps fascination in an industrial age with the versatility of the “magic” mineral diverted
attention from its dangers. Following the first International Conference on the Biological
Effects of Asbestos in 1964 (Selikoff and Churg 1965), asbestos-related disease became a
cause célèbre, not only on its own account, but also because it marked a period of labour-
management confrontation concerning the rights of the worker to knowledge about workplace
hazards, health protection and fair compensation for injury or illness. In countries with no-
fault workers’ compensation, asbestos-related disease on the whole received fair recognition
and handling. In countries where product liability and class action suits were more usual,
large awards have been made to some affected workers (and their lawyers) while others have
been left destitute and without support. While the need for fibres in modern societies is
unlikely to diminish, the role of the mineral fibres vis-à-vis other fibres may change. There
has already been a shift in uses both within and between countries (see “Other sources of
exposure”, page 10.53). Though the technology exists to diminish workplace exposures, there
remain workplaces in which it has not been applied. Given the current knowledge, given
international communication and product labelling, and given worker education and industry
commitment, it should be possible to use this mineral to provide cheap and durable products
for use in construction and water reticulation on an international basis without risk to user,
worker, manufacturer or miner, or to the general public at large.

Hard Metal Disease

Written by ILO Content Manager

Shortly after the end of the First World War, while doing research to find a material able to
replace diamond in metal-drawing nozzles, Karl Schröter patented in Berlin a sintering
process (pressurization plus heating at 1,500°C) of a mixture of fine tungsten carbide (WC)
powder with 10% cobalt to produce “hard metal”. The main characteristics of this sinter are
its extreme hardness, only slightly inferior to that of diamond, and the maintenance of its
mechanical properties at high temperatures; these characteristics make it suitable for use in
drawing metal, for welded inserts, and for high-speed tools for machining of metals, stone,
wood and materials with high resistance to wear or to heat, in the mechanical, aeronautical
and ballistic fields. The use of hard metal is continually expanding all over the world. In 1927
Krupp extended the use of hard metal into the cutting tools field, calling it “Widia” (wie
Diamant—like diamond), a name still in use today.

Sintering remains the basis of all hard metal production: techniques are improved by the
introduction of other metallic carbides—titanium carbide (TiC) and tantalum carbide (TaC)—
and by treatment of hard metal parts for mobile cutting inserts with one or more layers of
titanium nitride or aluminium oxide and of other very hard compounds applied with chemical
vapour deposition (CVD) or physical vapour deposition (PVD). The fixed inserts welded to
the tools cannot be plated, but are repeatedly sharpened by a diamond grinding wheel (figures
1 and 2).

Figure 1. (A) Examples of some hard metal drawing mobile inserts, plated with golden-yellow
tungsten nitride; (B) insert welded to the tool and working in steel drawing.

Figure 2. Fixed inserts welded to (A) stone drill and (B) saw disk.

The hard metal sinter is formed by particles of metallic carbides incorporated in a matrix of
cobalt, which melts during sintering, interacting with and occupying the interstices.
Cobalt is therefore the binding material of the structure, which assumes metal-ceramic
characteristics (figures 3, 4, and 5).

Figure 3. Microstructure of a WC/Co sintering; WC particles are incorporated into the light
Co matrix (1,500x).
Figure 4. Microstructure of a WC + TiC + TaC + Co sintering. Along with WC prismatic
particles, globular particles formed by a solid solution of TiC + TaC are observed. The light
matrix is formed by Co (1,500x).

Figure 5. Sintering microstructure plated by multiple very hard layers (2,000x).

The sintering process uses very fine metallic carbide powders (average diameters from 1 to
9μm) and cobalt powders (average diameter from 1 to 4μm) which are mixed, treated with
paraffin solution, die-pressed, de-waxed at low temperature, pre-sintered at 700 to 750°C and
sintered at 1,500°C (Brookes 1992).

When sintering is done with inadequate methods, improper techniques and poor industrial
hygiene, the powders can pollute the atmosphere of the work environment: workers are
therefore exposed to the risk of inhalation of metallic carbide powders and cobalt powders.
Along with the primary process there are other activities which can expose the workers to the
risk of aerosol inhalation of hard metal. Sharpening of fixed inserts welded to tools is
normally carried out by dry diamond grinding or, more frequently, cooled with liquids of
different kinds, producing powders or mists formed by very small drops containing metallic
particles. Particles of hard metal are also used in the production of a high-resistance layer on
steel surfaces subjected to wear, applied through methods (plasma coating process and others)
based on the combination of a powder spray with an electric arc or a controlled explosion of a
gas mixture at high temperature. The electric arc or the explosive flow of the gas determines
the fusion of the metallic particles and their impact on the surface being plated.

The first observations of “hard metal disease” were described in Germany in the 1940s. They
reported a diffuse, progressive pulmonary fibrosis, called Hartmetallungenfibrose. During the
next 20 years parallel cases were observed and described in all industrial countries. The
workers affected were in the majority of cases in charge of the sintering. From 1970 to the
present, several studies have indicated that the pathology of the respiratory apparatus is caused
by the inhalation of hard metal particles. It affects only susceptible subjects, and consists of
the following symptoms:

- acute: rhinitis, asthma
- subacute: fibrosing alveolitis
- chronic: diffuse and progressive interstitial fibrosis.

It affects not only workers in charge of sintering, but anyone inhaling aerosol containing hard
metal and particularly cobalt. It is mainly and perhaps exclusively caused by cobalt.

The definition of hard metal disease now includes a group of pathologies of the breathing
apparatus, different from each other in clinical gravity and prognosis, but having in common a
variable individual reactivity to the aetiological factor, cobalt.

More recent epidemiological and experimental information agrees on the causal role of cobalt
for acute symptoms in the upper respiratory tract (rhinitis, asthma) and for subacute and
chronic symptoms in the lung parenchyma (fibrosing alveolitis and chronic interstitial
fibrosis).

The pathogenic mechanism is based on the induction by Co of a hypersensitive
immunoreaction: in fact, only some of the subjects present pathologies, whether after short
exposures to relatively low concentrations or after longer and more intense exposures. Co
concentrations in biological samples (blood, urine, skin) are not significantly different in
those who have the pathology and those who do not; there is no correlation between dose and
response at the tissue level; specific antibodies (immunoglobulins IgE and IgG) against a
Co-albumin compound have been identified in asthmatics, and the Co patch test is positive in
subjects with alveolitis or fibrosis; the cytological features of the giant-cell alveolitis are
compatible with an immunoreaction; and acute or subacute symptoms tend to regress when
the subjects are removed from exposure to Co (Parkes 1994).

The immunological basis of hypersensitivity to Co has not yet been satisfactorily explained; it
is not possible, therefore, to identify a reliable marker of individual susceptibility.

Pathologies identical to those found in subjects exposed to hard metals have also been observed
in diamond cutters, who use disks formed by microdiamonds cemented with Co and who
therefore inhale only Co and diamond particles.

It has not yet been fully demonstrated that pure Co (all other inhaled particles excluded) is
capable alone of producing the pathologies, and above all the diffuse interstitial fibrosis: the
particles inhaled with Co could have a synergistic as well as a modulating effect. Experimental
studies seem to demonstrate that the biological reactivity to a mixture of Co and tungsten
particles is stronger than that caused by Co alone, and significant pathology has not been
observed in workers in charge of the production of pure Co powder (Science of the Total
Environment 1994).

Clinical symptoms of hard metal disease, which, on the basis of current aetiopathogenic
knowledge should be more precisely called “cobalt disease”, are, as mentioned before, acute,
subacute and chronic.

Acute symptoms include specific respiratory irritation (rhinitis, laryngo-tracheitis,
pulmonary oedema) caused by exposure to high concentrations of Co powder or Co smoke;
they are observable only in exceptional cases. Asthma is observed more frequently. It appears
in 5 to 10% of workers exposed to cobalt concentrations of 0.05 mg/m3, the current US
threshold limit value (TLV). Symptoms of thoracic constriction with dyspnoea and cough
tend to appear at the end of the work shift or during the night. The diagnosis of occupational
allergic bronchial asthma due to cobalt can be suspected on the basis of case history criteria,
but it is confirmed by a specific bronchial challenge test which determines the appearance of
an immediate, delayed or dual bronchospastic response. Lung function tests carried out at the
beginning and at the end of the work shift can also help the diagnosis. Asthmatic symptoms
due to cobalt tend to disappear when the subject is removed from exposure but, as with all
other forms of occupational allergic asthma, symptoms can become chronic and irreversible
when exposure continues for a long time (years) despite the presence of respiratory
disturbances. Highly bronchoreactive subjects can present asthmatic symptoms of non-allergic
aetiology, with a non-specific response to inhalation of cobalt and other irritating powders. In
a high percentage of cases of allergic bronchial asthma, specific IgE antibodies against a
cobalt-human serum albumin compound have been found in serum. The chest radiograph is
usually unchanged: only in rare mixed forms of asthma plus alveolitis are radiological
alterations, attributable specifically to the alveolitis, found. Bronchodilator therapy, along
with an immediate end of the work exposure, leads to complete recovery in cases that are of
recent onset and not yet chronic.

Subacute and chronic symptoms include fibrosing alveolitis and chronic diffuse and
progressive interstitial fibrosis (DIPF). The clinical experience seems to indicate that the
transition from alveolitis to interstitial fibrosis is a process which evolves gradually and
slowly in time: one can find cases of pure initial alveolitis reversible with withdrawal from the
exposure plus corticosteroid therapy; or cases with an already present fibrosis component,
which can improve but not reach complete recovery by removing the subject from exposure,
even with additional therapy; and finally, cases in which the predominant situation is that of
an irreversible DIPF. The occurrence of such cases is low in the exposed workers, very much
lower than the percentage of allergic asthma cases.

Alveolitis is easy to study today in its cytological components through broncho-alveolar
lavage (BAL); it is characterized by a large increase of the total cell number, mainly formed
by macrophages, with numerous multinuclear giant cells showing the typical aspect of foreign-
body giant cells and at times containing other cells within their cytoplasm (figure 6); an
absolute or relative increase of lymphocytes is also frequent, with a decreased CD4/CD8 ratio,
associated with a large increase of eosinophils and mast cells. Rarely, alveolitis is mainly
lymphocytic, with an inverted CD4/CD8 ratio, as occurs in the pneumopathies due to
hypersensitivity.

Figure 6. Cytological BAL in a macrophagic mononuclear giant-cellular alveolitis case
caused by hard metal. Between the mononuclear macrophages and the lymphocyte, a giant
foreign-body type of cell (400x) is observed.

Subjects with alveolitis report dyspnoea linked with fatigue, loss of weight and dry cough.
Crepitations are present over the lower lungs, with functional alteration of a restrictive kind
and diffuse rounded or irregular radiological opacities. The patch test for cobalt is positive in
the majority of cases. In susceptible subjects, alveolitis is revealed after a relatively short
period of workplace exposure, of one or a few years. In its initial phases this form is
reversible up to complete recovery with simple removal from exposure, with better results if
this is combined with corticosteroid therapy.

The development of diffuse interstitial fibrosis aggravates the clinical picture, with worsening
of the dyspnoea, which appears first after minimal exertion and then even at rest, with
worsening of the restrictive ventilatory impairment linked to a reduction of capillary-alveolar
diffusion, and with the appearance of radiographic opacities of a linear type and of
honeycombing (figure 7). The histological picture is that of a fibrosing alveolitis of a
“mural type”.

Figure 7. Thoracic radiograph of a subject affected by interstitial fibrosis caused by hard
metal. Linear and diffuse opacities and honeycombing are observed.

The evolution is rapidly progressive; therapies are ineffective and the prognosis doubtful. One
of the cases diagnosed by the author eventually required a lung transplant.

The occupational diagnosis is based on case history, BAL cytological pattern and cobalt patch
test.

Prevention of hard metal disease, or, more precisely, of cobalt disease, is now mainly
technical: protecting the workers through the elimination of powder, smokes or mists with
adequate ventilation of the work areas. In fact, the lack of knowledge about the factors which
determine individual hypersensitivity to cobalt makes the identification of susceptible people
impossible, and the maximum effort must be made to reduce the atmospheric concentrations.

The number of people at risk is underestimated because many sharpening activities are carried
out in small industries or by craftspeople. In such workplaces, the US TLV of 0.05 mg/m3 is
frequently exceeded. There is also some question as to the adequacy of the TLV for protecting
workers against cobalt disease since dose-effect relationships for disease mechanisms
involving hypersensitivity are not completely understood.

Routine surveillance must be accurate enough to identify cobalt pathologies in their earliest
stages. An annual questionnaire aimed mainly at temporary symptoms must be administered,
along with a medical examination that includes pulmonary function testing and other
appropriate medical examinations. Since it has been demonstrated that there is a good
correlation between cobalt concentrations in the work environment and the urinary excretion
of the metal, it is appropriate to carry out semi-annual measurement of cobalt in urine (CoU)
on samples taken at the end of the work week. When the exposure is at the level of the TLV,
the biological exposure index (BEI) is estimated to be equal to 30μg Co/litre urine.
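
In a biological monitoring programme, end-of-work-week urinary cobalt results would be compared against this figure. The sketch below illustrates such a check; the worker identifiers and results are hypothetical, and a real programme would also track airborne cobalt and individual trends over time.

```python
# Minimal sketch of a biological-monitoring check against the urinary cobalt
# figure quoted above (about 30 ug Co per litre of urine at TLV-level exposure).
# Worker names and results are hypothetical.

BEI_UG_PER_L = 30.0  # end-of-week urinary cobalt corresponding to exposure at the TLV

def exceeds_bei(cou_ug_per_l: float) -> bool:
    """Flag an end-of-week urinary cobalt (CoU) result above the biological exposure index."""
    return cou_ug_per_l > BEI_UG_PER_L

semiannual_results = {"worker_A": 12.0, "worker_B": 41.5}  # CoU in ug/l
for worker, cou in semiannual_results.items():
    status = "above BEI - review exposure" if exceeds_bei(cou) else "below BEI"
    print(f"{worker}: {cou} ug/l ({status})")
```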

Pre-exposure medical examinations for the presence of pre-existing respiratory disease and
bronchial hypersensitivity can be useful in the counselling and placement of workers.
Methacholine tests are a useful indicator of non-specific bronchial hyperreactivity and may be
useful in some settings.

International standardization of environmental and medical surveillance methods for workers
exposed to cobalt is highly recommended.

Respiratory System: The Variety of Pneumoconioses

Written by ILO Content Manager

This article is devoted to a discussion of pneumoconioses related to a variety of specific
non-fibrous substances; exposures to these dusts are not covered elsewhere in this volume. For
each material capable of engendering a pneumoconiosis upon exposure, a brief discussion of
the mineralogy and commercial importance is followed by information related to the lung
health of exposed workers.

Aluminium

Aluminium is a light metal with many commercial uses in both its metallic and combined
states. (Abramson et al. 1989; Kilburn and Warshaw 1992; Kongerud et al. 1994.)
Aluminium-containing ores, primarily bauxite and cryolite, consist of combinations of the
metal with oxygen, fluorine and iron. Silica contamination of the ores is common. Alumina
(Al2O3) is extracted from bauxite, and may be processed for use as an abrasive or as a
catalyst. Metallic aluminium is obtained from alumina by electrolytic reduction in the
presence of fluoride. Electrolysis of the mixture is carried out by using carbon electrodes at a
temperature of about 1,000°C in cells known as pots. The metallic aluminium is then drawn
off for casting. Dust, fume and gas exposures in pot rooms, including carbon, alumina,
fluorides, sulphur dioxide, carbon monoxide and aromatic hydrocarbons, are accentuated
during crust breaking and other maintenance operations. Numerous products are
manufactured from aluminium plate, flake, granules and castings—resulting in extensive
potential for occupational exposures. Metallic aluminium and its alloys find use in the aircraft,
boat and automobile industries, in the manufacture of containers and of electrical and
mechanical devices, as well as in a variety of construction and structural applications. Small
aluminium particles are used in paints, explosives and incendiary devices. To maintain
particle separation, mineral oils or stearin are added; increased lung toxicity of aluminium
flakes has been associated with the use of mineral oil.

Lung health

Inhalation of aluminium-containing dusts and fumes may occur in workers involved in the
mining, extraction, processing, fabrication and end-use of aluminium-containing materials.
Pulmonary fibrosis, resulting in symptoms and radiographic findings, has been described in
workers with several differing exposures to aluminium-containing substances. Shaver’s
disease is a severe pneumoconiosis described among workers involved in the manufacture of
alumina abrasives. A number of deaths from the condition have been reported. The upper
lobes of the lung are most often affected and the occurrence of pneumothorax is a frequent
complication. High levels of silicon dioxide have been found in the pot room environment as
well as in workers’ lungs at autopsy, suggesting silica as a potential contributor to the clinical
picture in Shaver’s disease. High concentrations of aluminium oxide particulate have also
been observed. Lung pathology may show blebs and bullae, and pleural thickening is seen
occasionally. The fibrosis is diffuse, with areas of inflammation in the lungs and associated
lymph nodes.

Aluminium powders are used in making explosives, and there have been a number of reports
of a severe and progressive fibrosis in workers involved in this process. Lung involvement has
also occasionally been described in workers employed in the welding or polishing of
aluminium, and in bagging cat litter containing aluminium silicate (alunite). However, there
has been considerable variation in the reporting of lung diseases in relation to exposures to
aluminium. Epidemiological studies of workers in aluminium reduction have
generally shown low prevalence of pneumoconiotic changes and slight mean reductions in
ventilatory lung function. In various work environments, alumina compounds can occur in
several forms, and in animal studies these forms appear to have differing lung toxicities.
Silica and other mixed dusts may also contribute to this varying toxicity, as may the materials
used to coat the aluminium particles. One worker, who developed a granulomatous lung
disease after exposure to oxides and metallic aluminium, showed transformation of his blood
lymphocytes upon exposure to aluminium salts, suggesting that immunologic factors might
play a role.

An asthmatic syndrome has frequently been noted among workers exposed to fumes in
aluminium reduction pot rooms. Fluorides found in the pot room environment have been
implicated, although the specific agent or agents associated with the asthmatic syndrome has
not been determined. As with other occupational asthmas, symptoms are often delayed 4 to 12
hours after exposure, and include cough, dyspnoea, chest tightness and wheeze. An immediate
reaction may also be noted. Atopy and a family history of asthma do not appear to be risk
factors for development of pot room asthma. After cessation of exposure, symptoms may be
expected to disappear in most cases, although two-thirds of the affected workers show
persistent non-specific bronchial responsiveness and, in some workers, symptoms and airway
hyperresponsiveness continue for years even after exposure is terminated. The prognosis for
pot room asthma appears to be best in those who are immediately removed from exposure
when the asthmatic symptoms become manifest. Fixed airflow obstruction has also been
associated with pot room work.

Carbon electrodes are used in the aluminium reduction process, and known human
carcinogens have been identified in the pot room environment. Several mortality studies have
revealed lung cancer excesses among exposed workers in this industry.

Diatomaceous Earth

Deposits of diatomaceous earth result from the accretion of skeletons of microscopic
organisms. (Cooper and Jacobson 1977; Checkoway et al. 1993.) Diatomaceous earth may be
utilized in foundries and in the manufacture of filters, abrasives, lubricants and explosives.
Certain deposits comprise up to 90% free silica. Exposed workers may develop lung changes
involving simple or complicated pneumoconiosis. The risk of death from both nonmalignant
respiratory diseases and lung cancer has been related to the workers’ tenure in dusty work as
well as to cumulative crystalline silica exposures during the mining and processing of
diatomaceous earth.

Elemental Carbon

Aside from coal, the two common forms of elemental carbon are graphite (crystalline carbon)
and carbon black. (Hanoa 1983; Petsonk et al. 1988.) Graphite is used in the manufacture of
lead pencils, foundry linings, paints, electrodes, dry batteries and crucibles for metallurgical
purposes. Finely ground graphite has lubricant properties. Carbon black is a partially
decomposed form used in automotive tires, pigments, plastics, inks and other products.
Carbon black is manufactured from fossil fuels through a variety of processes involving
partial combustion and thermal decomposition.

Inhalation of carbon, as well as associated dusts, may occur during the mining and milling of
natural graphite, and during the manufacture of artificial graphite. Artificial graphite is
produced by the heating of coal or petroleum coke, and generally contains no free silica.

Lung health

Pneumoconiosis results from worker exposure to both natural and artificial graphite.
Clinically, workers with carbon or graphite pneumoconiosis show radiographic findings
similar to those for coal workers. Severe symptomatic cases with massive pulmonary fibrosis
were reported in the past, particularly related to the manufacture of carbon electrodes for
metallurgy, although recent reports emphasize that the materials implicated in exposures
leading to this sort of condition are likely to be mixed dusts.

Gilsonite

Gilsonite, also known as uintaite, is a solidified hydrocarbon. (Keimig et al. 1987.) It occurs
in veins in the western United States. Current uses include the manufacture of automotive
body seam sealers, inks, paints and enamels. It is an ingredient of oil-well drilling fluids and
cements; it is an additive in sand moulds in the foundry industry; it is to be found as a
component of asphalt, building boards and explosives; and it is employed in the production of
nuclear grade graphite. Workers exposed to gilsonite dust have reported symptoms of cough
and phlegm production. Five of ninety-nine workers surveyed showed radiographic evidence
of pneumoconiosis. No abnormalities in pulmonary function have been defined in relation to
gilsonite dust exposures.

Gypsum

Gypsum is hydrated calcium sulphate (CaSO4·2H2O) (Oakes et al. 1982). It is used as a
component of plasterboard, plaster of Paris and Portland cement. Deposits are found in
several forms and are often associated with other minerals such as quartz. Pneumoconiosis
has been observed in gypsum miners, and has been attributed to silica contamination.
Ventilatory abnormalities have not been associated with gypsum dust exposures.

Oils and Lubricants

Liquids containing hydrocarbon oils are used as coolants, cutting oils and lubricants (Cullen
et al. 1981). Vegetable oils are found in some commercial products and in a variety of
foodstuffs. These oils may be aerosolized and inhaled when metals that are coated with oils
are milled or machined, or if oil-containing sprays are used for purposes of cleaning or
lubrication. Environmental measurements in machine shops and mills have documented
airborne oil levels up to 9 mg/m3. One report implicated airborne oil exposure from the
burning of animal and vegetable fats in an enclosed building.

Lung health

Workers exposed to these aerosols have occasionally been reported to develop evidence of a
lipoid pneumonia, similar to that noted in patients who have aspirated mineral oil nose drops
or other oily materials. The condition is associated with symptoms of cough and dyspnoea,
inspiratory lung crackles, and impairments in lung function, generally mild in severity. A few
cases have been reported with more extensive radiographic changes and severe lung
impairments. Exposure to mineral oils has also been associated in several studies with an
increased risk of respiratory tract cancers.

Portland Cement

Portland cement is made from hydrated calcium silicates, aluminium oxide, magnesium
oxide, iron oxide, calcium sulphate, clay, shale and sand (Abrons et al. 1988; Yan et al. 1993).
The mixture is crushed and calcined at high temperatures with the addition of gypsum.
Cement finds numerous uses in road and building construction.

Lung health

Silicosis appears to be the greatest risk in cement workers, followed by a mixed dust
pneumoconiosis. (In the past, asbestos was added to cement to improve its characteristics.)
Abnormal chest radiographic findings, including small rounded and irregular opacities and
pleural changes, have been noted. Workers have occasionally been reported to have
developed pulmonary alveolar proteinosis after the inhalation of cement dust. Airflow
obstruction has been noted in some, but not all, surveys of cement workers.

Rare Earth Metals

Rare earth metals or “lanthanides” have atomic numbers between 57 and 71. Lanthanum
(atomic number 57), cerium (58), and neodymium (60) are the commonest of the group. The
other elements in this group include praseodymium (59), promethium (61), samarium (62),
europium (63), gadolinium (64), terbium (65), dysprosium (66), holmium (67), erbium (68),
thulium (69), ytterbium (70) and lutetium (71) (Hussain, Dick and Kaplan 1980; Sabbioni,
Pietra and Gaglione 1982; Vocaturo, Colombo and Zanoni 1983; Sulotto, Romano and Berra
1986; Waring and Watling 1990; Deng et al. 1991). The rare earth elements are found
naturally in monazite sand, from which they are extracted. They are used in a variety of alloy
metals, as abrasives for polishing mirrors and lenses, for high-temperature ceramics, in
fireworks and in cigarette lighter flints. In the electronics industry they are used in
electrowelding and are to be found in various electronic components, including television
phosphors, radiographic screens, lasers, microwave devices, insulators, capacitors and
semiconductors.

Carbon arc lamps are used widely in the printing, photoengraving and lithography industries
and were used for floodlighting, spotlighting and movie projection before the wide-scale
adoption of argon and xenon lamps. The rare earth metal oxides were incorporated into the
central core of carbon arc rods, where they stabilize the arc stream. Fumes emitted from the
lamps are a mixture of gaseous and particulate material composed of approximately 65% rare
earth oxides, 10% fluorides, and unburnt carbon and impurities.

Lung health

Pneumoconiosis in workers exposed to rare earths presents primarily as bilateral nodular
infiltrates on the chest radiograph. Lung pathology in cases of rare earth pneumoconiosis
has been described as an interstitial fibrosis accompanied by an accumulation of fine granular
dust particles, or granulomatous changes.

Variable pulmonary function impairments have been described, ranging from restrictive to
mixed restrictive-obstructive. However, the spectrum of pulmonary disease related to the
inhalation of rare earth elements remains to be defined, and data on the pattern and
progression of disease and on the histological changes are available mainly from a few case
reports.

A neoplastic potential of the rare earth isotopes has been suggested by a case report of lung
cancer, possibly related to ionizing radiation from the naturally occurring rare earth
radioisotopes.

Sedimentary Compounds

Sedimentary rock deposits form through the processes of physical and chemical weathering,
erosion, transport, deposition and diagenesis. They fall into two broad classes: clastics, which
include mechanically deposited erosion debris, and chemical precipitates, which include
carbonates, shells of organic skeletons and saline deposits.
Sedimentary carbonates, sulphates and halides provide relatively pure minerals that have
crystallized from concentrated solutions. Due to the high solubility of many of the
sedimentary compounds, they are rapidly cleared from the lungs and are generally associated
with little pulmonary pathology. In contrast, workers exposed to certain sedimentary
compounds, primarily clastics, have shown pneumoconiotic changes.

Phosphates

Phosphate ore, Ca5(F,Cl)(PO4)3, is used in the production of fertilizers, dietary supplements,
toothpaste, preservatives, detergents, pesticides, rodent poisons and ammunition (Dutton et
al. 1993). Extraction and processing of the ore may result in a variety of irritant exposures.
Surveys of workers in phosphate mining and extraction have documented increased symptoms
of cough and phlegm production, as well as radiographic evidence of pneumoconiosis, but
little evidence of abnormal lung function.

Shale

Shale is a sedimentary rock containing an organic material (kerogen) composed mainly of
carbon, hydrogen, oxygen, sulphur and nitrogen (Rom, Lee and Craft 1981; Seaton et al.
1981). The kerogen-bearing rock, called marlstone, is of a grey-brown colour and a layered
consistency. Oil shale has been used as an energy source since the 1850s in Scotland. Major
deposits exist in the United States, Scotland and Estonia. Dust in the atmosphere of
underground oil shale mines is of relatively fine dispersion, with up to 80% of the dust
particles under 2 μm in size.

Lung health

Pneumoconiosis related to the deposition of shale dust in the lung is termed shalosis. The dust
creates a granulomatous and fibrotic reaction in the lungs. This pneumoconiosis is similar
clinically to coal workers’ pneumoconiosis and silicosis, and may progress to massive fibrosis
even after the worker has left the industry.

Pathologic changes identified in lungs with shalosis are characterized by vascular and
bronchial deformation, with irregular thickening of interalveolar and interlobular septa. In
addition to interstitial fibrosis, shale pneumoconiosis produces enlarged hilar shadows, related
to the transport of shale dust and the subsequent development of well-defined sclerotic
changes in the hilar lymph nodes.

Shale workers have been found to have a prevalence of chronic bronchitis two and one-half
times that of age-matched controls. The effect of shale dust exposures on lung function has
not been studied systematically.

Slate

Slate is a metamorphic rock, made up of various minerals, clays and carbonaceous matter
(McDermott et al. 1978). The major constituents of slate include muscovite, chlorite, calcite
and quartz, along with graphite, magnetite and rutile. These have undergone metamorphosis
to form a dense crystalline rock that possesses strength but is easily cleaved, characteristics
which account for its economic importance. Slate is used in roofing, dimension stone, floor
tile, flagging, structural shapes such as panels and window sills, blackboards, pencils, billiard
tables and laboratory bench tops. Crushed slate is used in highway construction, tennis court
surfaces and lightweight roofing granules.

Lung health

Pneumoconiosis has been found in a third of workers studied in the slate industry in North
Wales, and in 54% of slate pencil makers in India. Various lung radiographic changes have
been identified in slateworkers. Because of the high quartz content of some slates and the
adjacent rock strata, slateworkers’ pneumoconiosis may have features of silicosis. The
prevalence of respiratory symptoms in slateworkers is high, and the proportion of workers
with symptoms increases with pneumoconiosis category, irrespective of smoking status.
Diminished values of forced expiratory volume in one second (FEV1) and forced vital
capacity (FVC) are associated with increasing pneumoconiosis category.

The lungs of miners exposed to slate dust reveal localized areas of perivascular and
peribronchial fibrosis, extending to macule formation and extensive interstitial fibrosis.
Typical lesions are fibrotic macules of variable configuration intimately associated with small
pulmonary blood vessels.

Talc

Talc is composed of magnesium silicates and is found in a variety of forms (Vallyathan and
Craighead 1981; Wegman et al. 1982; Stille and Tabershaw 1982; Wergeland, Andersen and
Baerheim 1990; Gibbs, Pooley and Griffith 1992).

Deposits of talc are frequently contaminated with other minerals, including both fibrous and
non-fibrous tremolite and quartz. Lung health effects in talc-exposed workers may therefore
be related both to the talc itself and to the other associated minerals.

Talc production occurs primarily in Australia, Austria, China, France and the United States.
Talc is used as a component in hundreds of products, and is used in the manufacture of paint,
pharmaceuticals, cosmetics, ceramics, automobile tires and paper.

Lung health

Diffuse rounded and irregular parenchymal lung opacities and pleural abnormalities are seen
on the chest radiographs of talc workers in association with the talc exposure. Depending on
the specific exposures experienced, the radiographic shadows may be ascribed to talc itself or
to contaminants in the talc. Talc exposure has been associated with symptoms of cough,
dyspnoea and phlegm production, and with evidence of airflow obstruction in pulmonary
function studies. Lung pathology has revealed various forms of pulmonary fibrosis:
granulomatous changes and ferruginous bodies have been reported, as well as dust-laden
macrophages collected around the respiratory bronchioles and intermingled with bundles of
collagen. Mineralogical examination of lung tissue from talc workers is also variable and may
show silica, mica or mixed silicates.

Since talc deposits may be associated with asbestos and other fibres, it is not surprising that an
increased risk of bronchogenic carcinoma has been reported in talc miners and millers. Recent
investigations of workers exposed to talc without associated asbestos fibres revealed trends
for higher mortality from non-malignant respiratory disease (silicosis, silico-tuberculosis,
emphysema and pneumonia), but the risk for bronchogenic cancer was not found to be
elevated.

Hairspray

Exposure to hairsprays occurs in the home environment as well as in commercial hairdressing
establishments (Rom 1992b). Environmental measurements in beauty salons have indicated
the potential for respirable aerosol exposures. Several case reports have implicated hairspray
exposure in the occurrence of a pneumonitis, thesaurosis, in heavily exposed individuals.
Clinical symptoms in the cases were generally mild, and resolved with termination of
exposure. Histology usually showed a granulomatous process in the lung and enlarged hilar
lymph nodes, with thickening of alveolar walls and numerous granular macrophages in the
airspaces. Macromolecules in hairsprays, including shellacs and polyvinylpyrrolidone, have
been suggested as potential agents. In contrast to the clinical case reports, increased lung
parenchymal radiographic shadows observed in radiological surveys of commercial
hairdressers have not been conclusively related to hairspray exposure. Although the results of
these studies do not allow definitive conclusions to be drawn, clinically important lung
disease from typical hairspray exposures does appear to be an unusual occurrence.

Respiratory System References

Abramson, MJ, JH Wlodarczyk, NA Saunders, and MJ Hensley. 1989. Does aluminum
smelting cause lung disease? Am Rev Respir Dis 139:1042-1057.

Abrons, HL, MR Peterson, WT Sanderson, AL Engelberg, and P Harber. 1988. Symptoms,
ventilatory function, and environmental exposures in Portland cement workers. Brit J Ind Med
45:368-375.

Adamson, IYR, L Young, and DH Bowden. 1988. Relationship of alveolar epithelial injury
and repair to the induction of pulmonary fibrosis. Am J Pathol 130(2):377-383.

Agius, R. 1992. Is silica carcinogenic? Occup Med 42: 50-52.

Alberts, WM and GA Do Pico. 1996. Reactive airways dysfunction syndrome (review). Chest
109:1618-1626.
Albrecht, WN and CJ Bryant. 1987. Polymer fume fever associated with smoking and use of a
mold release spray containing polytetrafluoroethylene. J Occup Med 29:817-819.

American Conference of Governmental Industrial Hygienists (ACGIH). 1993. 1993-1994
Threshold Limit Values and Biological Exposure Indices. Cincinnati, Ohio: ACGIH.

American Thoracic Society (ATS). 1987. Standards for the diagnosis and care of patients with
chronic obstructive pulmonary disease (COPD) and asthma. Am Rev Respir Dis 136:225-244.

—.1995. Standardization of Spirometry: 1994 update. Amer J Resp Crit Care Med 152: 1107-
1137.

Antman, K and J Aisner. 1987. Asbestos-Related Malignancy. Orlando: Grune & Stratton.

Antman, KH, FP Li, HI Pass, J Corson, and T Delaney. 1993. Benign and malignant
mesothelioma. In Cancer: Principles and Practice of Oncology, edited by VTJ DeVita, S
Hellman, and SA Rosenberg. Philadelphia: JB Lippincott.
Asbestos Institute. 1995. Documentation center: Montreal, Canada.

Attfield, MD and K Morring. 1992. An investigation into the relationship between coal
workers’ pneumoconiosis and dust exposure in US coal miners. Am Ind Hyg Assoc J
53(8):486-492.

Attfield, MD. 1992. British data on coal miners’ pneumoconiosis and relevance to US
conditions. Am J Public Health 82:978-983.

Attfield, MD and RB Althouse. 1992. Surveillance data on US coal miners’ pneumoconiosis,
1970 to 1986. Am J Public Health 82:971-977.

Axmacher, B, O Axelson, T Frödin, R Gotthard, J Hed, L Molin, H Noorlind Brage, and M
Ström. 1991. Dust exposure in coeliac disease: A case-referent study. Brit J Ind Med 48:715-
717.

Baquet, CR, JW Horm, T Gibbs, and P Greenwald. 1991. Socioeconomic factors and cancer
incidence among blacks and whites. J Natl Cancer Inst 83: 551-557.

Beaumont, GP. 1991. Reduction in airborne silicon carbide whiskers by process
improvements. Appl Occup Environ Hyg 6(7):598-603.

Becklake, MR. 1989. Occupational exposures: Evidence for a causal association with chronic
obstructive pulmonary disease. Am Rev Respir Dis. 140: S85-S91.

—. 1991. The epidemiology of asbestosis. In Mineral Fibers and Health, edited by D Liddell
and K Miller. Boca Raton: CRC Press.

—. 1992. Occupational exposure and chronic airways disease. Chap. 13 in Environmental and
Occupational Medicine. Boston: Little, Brown & Co.

—. 1993. In Asthma in the workplace, edited by IL Bernstein, M Chan-Yeung, J-L Malo and
D Bernstein. Marcel Dekker.

—. 1994. Pneumoconioses. Chap. 66 in A Textbook of Respiratory Medicine, edited by JF
Murray and J Nadel. Philadelphia: WB Saunders.

Becklake, MR and B Case. 1994. Fibre burden and asbestos-related lung disease:
Determinants of dose-response relationships. Am J Resp Critical Care Med 150:1488-1492.

Becklake, MR. et al. 1988. The relationships between acute and chronic airways responses to
occupational exposures. In Current Pulmonology. Vol. 9, edited by DH Simmons. Chicago:
Year Book Medical Publishers.

Bégin, R, A Cantin, and S Massé. 1989. Recent advances in the pathogenesis and clinical
assessment of mineral dust pneumoconioses: Asbestosis, silicosis and coal pneumoconiosis.
Eur Resp J 2:988-1001.

Bégin, R and P Sébastien. 1989. Alveolar dust clearance capacity as determinant of individual
susceptibility to asbestosis: Experimental observations. Ann Occup Hyg 33:279-282.

Bégin, R, A Cantin, Y Berthiaume, R Boileau, G Bisson, G Lamoureux, M Rola-
Pleszczynski, G Drapeau, S Massé, M Boctor, J Breault, S Péloquin, and D Dalle. 1985.
Clinical features to stage alveolitis in asbestos workers. Am J Ind Med 8:521-536.

Bégin, R, G Ostiguy, R Filion, and S Groleau. 1992. Recent advances in the early diagnosis of
asbestosis. Sem Roentgenol 27(2):121-139.

Bégin, T, A Dufresne, A Cantin, S Massé, P Sébastien, and G Perrault. 1989. Carborundum
pneumoconiosis. Chest 95(4):842-849.

Beijer L, M Carvalheiro, PG Holt, and R Rylander. 1990. Increased blood monocyte
procoagulant activity in cotton mill workers. J Clin Lab Immunol 33:125-127.

Beral, V, P Fraser, M Booth, and L Carpenter. 1987. Epidemiological studies of workers in
the nuclear industry. In Radiation and Health: The Biological Effects of Low-Level Exposure
to Ionizing Radiation, edited by R Russell Jones and R Southwood. Chichester: Wiley.

Bernstein, IL, M Chan-Yeung, J-L Malo, and D Bernstein. 1993. Asthma in the Workplace.
Marcel Dekker.

Berrino F, M Sant, A Verdecchia, R Capocaccia, T Hakulinen, and J Esteve. 1995. Survival
of Cancer Patients in Europe: The EUROCARE Study. IARC Scientific Publications, no 132.
Lyon: IARC.

Berry, G, CB McKerrow, MKB Molyneux, CE Rossiter, and JBL Tombleson. 1973. A study
of the acute and chronic changes in ventilatory capacity of workers in Lancashire Cotton
Mills. Br J Ind Med 30:25-36.

Bignon J, (ed.) 1990. Health-related effects of phyllosilicates. NATO ASI series Berlin:
Springer-Verlag.

Bignon, J, P Sébastien, and M Bientz. 1979. Review of some factors relevant to the
assessment of exposure to asbestos dusts. In The use of Biological Specimens for the
Assessment of Human Exposure to Environmental Pollutants, edited by A Berlin, AH Wolf,
and Y Hasegawa. Dordrecht: Martinus Nijhoff for the Commission of the European
Communities.

Bignon J, J Peto and R Saracci, (eds.) 1989. Non-occupational exposure to mineral fibres.
IARC Scientific Publications, no 90. Lyon: IARC.

Bisson, G, G Lamoureux, and R Bégin. 1987. Quantitative gallium 67 lung scan to assess the
inflammatory activity in the pneumoconioses. Sem Nuclear Med 17(1):72-80.

Blanc, PD and DA Schwartz. 1994. Acute pulmonary responses to toxic exposures. In


Respiratory Medicine, edited by JF Murray and JA Nadel. Philadelphia: WB Saunders.

Blanc, P, H Wong, MS Bernstein, and HA Boushey. 1991. An experimental human model of


a metal fume fever. Ann Intern Med 114:930-936.

Blanc, PD, HA Boushey, H Wong, SF Wintermeyer, and MS Bernstein. 1993. Cytokines in


metal fume fever. Am Rev Respir Dis 147:134-138.

Blandford, TB, PJ Seamon, R Hughes, M Pattison, and MP Wilderspin. 1975. A case of


polytetrafluoroethylene poisoning in cockatiels accompanied by polymer fume fever in the
owner. Vet Rec 96:175-178.

Blount, BW. 1990. Two types of metal fume fever: mild vs. serious. Milit Med 155:372-377.

Boffetta, P, R Saracci, A Anderson, PA Bertazzi, Chang-Claude J, G Ferro, AC Fletcher, R


Frentzel-Beyme, MJ Gardner, JH Olsen, L Simonato, L Teppo, P Westerholm, P Winter, and
C Zocchetti. 1992. Lung cancer mortality among workers in the European production of man-
made mineral fibers-a Poisson regression analysis. Scand J Work Environ Health 18:279-286.

Borm, PJA. 1994. Biological markers and occupational lung disease: Mineral dust-induced
respiratory disorders. Exp Lung Res 20:457-470.

Boucher, RC. 1981. Mechanisms of pollutant induced airways toxicity. Clin Chest Med
2:377-392.

Bouige, D. 1990. Dust exposure results in 359 asbestos-using factories from 26 countries. In
Seventh International Pneumoconiosis Conference Aug 23-26, 1988. Proceedings Part II.
Washington, DC: DHS (NIOSH).

Bouhuys A. 1976. Byssinosis: Scheduled asthma in the textile industry. Lung 154:3-16.

Bowden, DH, C Hedgecock, and IYR Adamson. 1989. Silica-induced pulmonary fibrosis
involves the reaction of particles with interstitial rather than alveolar macrophages. J Pathol
158:73-80.

Brigham, KL and B Mayerick. 1986. Endotoxin and Lung injury. Am Rev Respir Dis
133:913-927.

Brody, AR. 1993. Asbestos-induced lung disease. Environ Health Persp 100:21-30.

Brody, AR, LH Hill, BJ Adkins, and RW O’Connor. 1981. Chrysotile asbestos inhalation in
rats: Deposition pattern and reaction of alveolar epithelium and pulmonary macrophages. Am
Rev Respir Dis 123:670.

Bronwyn, L, L Razzaboni, and P Bolsaitis. 1990. Evidence of an oxidative mechanism for the
hemolytic activity of silica particles. Environ Health Persp 87: 337-341.

Brookes, KJA. 1992. World Directory and Handbook of Hard Metal and Hard Materials.
London: International Carbide Data.

Brooks, SM and AR Kalica. 1987. Strategies for elucidating the relationship between
occupational exposures and chronic air-flow obstruction. Am Rev Respir Dis 135:268-273.

Brooks, SM, MA Weiss, and IL Bernstein. 1985. Reactive airways dysfunction syndrome
(RADS). Chest 88:376-384.

Browne, K. 1994. Asbestos-related disorders. Chap. 14 in Occupational Lung Disorders,


edited by WR Parkes. Oxford: Butterworth-Heinemann.

Brubaker, RE. 1977. Pulmonary problems associated with the use of polytetrafluoroethylene.
J Occup Med 19:693-695.

Bunn, WB, JR Bender, TW Hesterberg, GR Chase, and JL Konzen. 1993. Recent studies of
man-made vitreous fibers: Chronic animal inhalation studies. J Occup Med 35(2):101-113.

Burney, MB and S Chinn. 1987. Developing a new questionnaire for measuring the
prevalence and distribution of asthma. Chest 91:79S-83S.
Burrell, R and R Rylander. 1981. A critical review of the role of precipitins in
hypersensitivity pneumonitis. Eur J Resp Dis 62:332-343.

Bye, E. 1985. Occurrence of airborne silicon carbide fibers during industrial production of
silicon carbide. Scand J Work Environ Health 11:111-115.

Cabral-Anderson, LJ, MJ Evans, and G Freeman. 1977. Effects of NO2 on the lungs of aging
rats I. Exp Mol Pathol 27:353-365.

Campbell, JM. 1932. Acute symptoms following work with hay. Brit Med J 2:1143-1144.

Carvalheiro MF, Y Peterson, E Rubenowitz, R Rylander. 1995. Bronchial activity and work-
related symptoms in farmers. Am J Ind Med 27: 65-74.

Castellan, RM, SA Olenchock, KB Kinsley, and JL Hankinson. 1987. Inhaled endotoxin and
decreased spirometric values: An exposure-response relation for cotton dust. New Engl J Med
317:605-610.

Castleman, WL, DL Dungworth, LW Schwartz, and WS Tyler. 1980. Acute respiratory
bronchiolitis - An ultrastructural and autoradiographic study of epithelial cell injury and
renewal in Rhesus monkeys exposed to ozone. Am J Pathol 98:811-840.

Chan-Yeung, M. 1994. Mechanism of occupational asthma due to Western red cedar. Am J


Ind Med 25:13-18.

—. 1995. Assessment of asthma in the workplace. ACCP consensus statement. American


College of Chest Physicians. Chest 108:1084-1117.
Chan-Yeung, M and J-L Malo. 1994. Aetiological agents in occupational asthma. Eur Resp J
7:346-371.

Checkoway, H, NJ Heyer, P Demers, and NE Breslow. 1993. Mortality among workers in the
diatomaceous earth industry. Brit J Ind Med 50:586-597.

Chiazze, L, DK Watkins, and C Fryar. 1992. A case-control study of malignant and non-
malignant respiratory disease among employees of a fibreglass manufacturing facility. Brit J
Ind Med 49:326-331.

Churg, A. 1991. Analysis of lung asbestos content. Brit J Ind Med 48:649-652.

Cooper, WC and G Jacobson. 1977. A twenty-one year radiographic follow-up of workers in


the diatomite industry. J Occup Med 19:563-566.

Craighead, JE, JL Abraham, A Churg, FH Green, J Kleinerman, PC Pratt, TA Seemayer, V


Vallyathan and H Weill. 1982. The pathology of asbestos associated diseases of the lungs and
pleural cavities. Diagnostic criteria and proposed grading system. Arch Pathol Lab Med 106:
544-596.

Crystal, RG and JB West. 1991. The Lung. New York: Raven Press.
Cullen, MR, JR Balmes, JM Robins, and GJW Smith. 1981. Lipoid pneumonia caused by oil
mist exposure from a steel rolling tandem mill. Am J Ind Med 2: 51-58.

Dalal, NA, X Shi, and V Vallyathan. 1990. Role of free radicals in the mechanisms of
hemolysis and lipid peroxidation by silica: Comparative ESR and cytotoxicity studies. J Tox
Environ Health 29:307-316.

Das, R and PD Blanc. 1993. Chlorine gas exposure and the lung: A review. Toxicol Ind
Health 9:439-455.

Davis, JMG, AD Jones, and BG Miller. 1991. Experimental studies in rats on the effects of
asbestos inhalation coupled with the inhalation of titanium dioxide or quartz. Int J Exp Pathol
72:501-525.

Deng, JF, T Sinks, L Elliot, D Smith, M Singal, and L Fine. 1991. Characterisation of
respiratory health and exposures at a sintered permanent magnet manufacturer. Brit J Ind Med
48:609-615.

de Viottis, JM. 1555. Magnus Opus. Historia de gentibus septentrionalibus. In Aedibus


Birgittae. Rome.

Di Luzio, NR. 1985. Update on immunomodulating activities of glucans. Springer Semin


Immunopathol 8:387-400.

Doll, R and J Peto. 1985. Effects on health of exposure to asbestos. London, Health and
Safety Commission London: Her Majesty’s Stationery Office.

—. 1987. In Asbestos-Related Malignancy, edited by K Antman and J Aisner. Orlando, Fla:


Grune & Stratton.

Donelly, SC and MX Fitzgerald. 1990. Reactive airways dysfunction syndrome (RADS) due
to acute chlorine exposure. Int J Med Sci 159:275-277.

Donham, K, P Haglind, Y Peterson, and R Rylander. 1989. Environmental and health studies
of farm workers in Swedish swine confinement buildings. Brit J Ind Med 46:31-37.

Do Pico, GA. 1992. Hazardous exposure and lung disease among farm workers. Clin Chest
Med 13: 311-328.

Dubois, F, R Bégin, A Cantin, S Massé, M Martel, G Bilodeau, A Dufresne, G Perrault, and P


Sébastien. 1988. Aluminum inhalation reduces silicosis in a sheep model. Am Rev Respir Dis
137:1172-1179.

Dunn, AJ. 1992. Endotoxin-induced activation of cerebral catecholamine and serotonin
metabolism: Comparison with Interleukin-1. J Pharmacol Exp Therapeut 261:964-969.

Dutton, CB, MJ Pigeon, PM Renzi, PJ Feustel, RE Dutton, and GD Renzi. 1993. Lung
function in workers refining phosphorus rock to obtain elementary phosphorus. J Occup Med
35:1028-1033.
Ellenhorn, MJ and DG Barceloux. 1988. Medical Toxicology. New York: Elsevier.
Emmanuel, DA, JJ Marx, and B Ault. 1975. Pulmonary mycotoxicosis. Chest 67:293-297.

—. 1989. Organic dust toxic syndrome (pulmonary mycotoxicosis) - A review of the


experience in central Wisconsin. In Principles of Health and Safety in Agriculture, edited by
JA Dosman and DW Cockcroft. Boca Raton: CRC Press.

Engelen, JJM, PJA Borm, M Van Sprundel, and L Leenaerts. 1990. Blood anti-oxidant
parameters at different stages in coal worker’s pneumoconiosis. Environ Health Persp 84:165-
172.

Englen, MD, SM Taylor, WW Laegreid, HD Liggit, RM Silflow, RG Breeze, and RW Leid.


1989. Stimulation of arachidonic acid metabolism in silica-exposed alveolar macrophages.
Exp Lung Res 15: 511-526.

Environmental Protection Agency (EPA). 1987. Ambient Air Monitoring reference and
equivalent methods. Federal Register 52:24727 (July l, 1987).

Ernst and Zejda. 1991. In Mineral Fibers and Health, edited by D Liddell and K Miller. Boca
Raton: CRC Press.

European Standardization Committee (CEN). 1991. Size Fraction Definitions for


Measurements of Airborne Particles in the Workplace. Report No. EN 481. Luxembourg:
CEN.

Evans, MJ, LJ Cabral-Anderson, and G Freeman. 1977. Effects of NO2 on the lungs of aging
rats II. Exp Mol Pathol 27:366-376.

Fogelmark, B, H Goto, K Yuasa, B Marchat, and R Rylander. 1992. Acute pulmonary toxicity
of inhaled (1→3)-β-D-glucan and endotoxin. Agents Actions 35:50-56.

Fraser, RG, JAP Paré, PD Paré, and RS Fraser. 1990. Diagnosis of Diseases of the Chest. Vol.
III. Philadelphia: WB Saunders.

Fubini, B, E Giamello, M Volante, and V Bolis. 1990. Chemical functionalities at the silica
surface determining its reactivity when inhaled. Formation and reactivity of surface radicals.
Toxicol Ind Health 6(6):571-598.

Gibbs, AE, FD Pooley, and DM Griffith. 1992. Talc pneumoconiosis: A pathologic and
mineralogic study. Hum Pathol 23(12):1344-1354.

Gibbs, G, F Valic, and K Browne. 1994. Health risk associated with chrysotile asbestos. A
report of a workshop held in Jersey, Channel Islands. Ann Occup Hyg 38:399-638.

Gibbs, WE. 1924. Clouds and Smokes. New York: Blakiston.

Ginsburg, CM, MG Kris, and JG Armstrong. 1993. Non-small cell lung cancer. In Cancer:
Principles & Practice of Oncology, edited by VTJ DeVita, S Hellman, and SA Rosenberg.
Philadelphia: JB Lippincott.
Goldfrank, LR, NE Flomenbaum, N Lewin, and MA Howland. 1990. Goldfrank’s
Toxicologic Emergencies. Norwalk, Conn.: Appleton & Lange.
Goldstein, B and RE Rendall. 1987. The prophylactic use of polyvinylpyridine-N-oxide
(PVNO) in baboons exposed to quartz dust. Environmental Research 42:469-481.

Goldstein, RH and A Fine. 1986. Fibrotic reactions in the lung: The activation of the lung
fibroblast. Exp Lung Res 11:245-261.
Gordon, RE, D Solano, and J Kleinerman. 1986. Tight junction alterations of respiratory
epithelia following long term NO2 exposure and recovery. Exp Lung Res 11:179-193.

Gordon, T, LC Chen, JT Fine, and RB Schlesinger. 1992. Pulmonary effects of inhaled zinc
oxide in human subjects, guinea pigs, rats, and rabbits. Am Ind Hyg Assoc J 53:503-509.

Graham, D. 1994. Noxious gases and fumes. In Textbook of Pulmonary Diseases, edited by
GL Baum and E Wolinsky. Boston: Little, Brown & Co.

Green, JM, RM Gonzalez, N Sonbolian, and P Renkopf. 1992. The resistance to carbon
dioxide laser ignition of a new endotracheal tube. J Clin Anesth 4:89-92.

Guilianelli, C, A Baeza-Squiban, E Boisvieux-Ulrich, O Houcine, R Zalma, C Guennou, H


Pezerat, and F MaraNo. 1993. Effect of mineral particles containing iron on primary cultures
of rabbit tracheal epithelial cells: Possible implication of oxidative stress. Environ Health
Persp 101(5):436-442.

Gun, RT, Janckewicz, A Esterman, D Roder, R Antic, RD McEvoy, and A Thornton. 1983.
Byssinosis: A cross-sectional study in an Australian textile factory. J Soc Occup Med 33:119-
125.

Haglind P and R Rylander. Exposure to cotton dust in an experimental cardroom. Br J Ind
Med 10: 340-345.

Hanoa, R. 1983. Graphite pneumoconiosis. A review of etiologic and epidemiologic aspects.
Scand J Work Environ Health 9:303-314.

Harber, P, M Schenker, and J Balmes. 1996. Occupational and Environmental Respiratory
Disease. St. Louis: Mosby.

Health Effects Institute - Asbestos Research. 1991. Asbestos in Public and Commercial
Buildings: A Literature Review and Synthesis of Current Knowledge. Cambridge, Mass.:
Health Effects Institute.

Heffner, JE and JE Repine. 1989. Pulmonary strategies of antioxidant defense. Am Rev


Respir Dis 140: 531-554.

Hemenway, D, A Absher, B Fubini, L Trombley, P Vacek, M Volante, and A Cabenago.


1994. Surface functionalities are related to biological response and transport of crystalline
silica. Ann Occup Hyg 38 Suppl. 1:447-454.

Henson, PM and RC Murphy. 1989. Mediators of the Inflammatory Process. New York:
Elsevier.
Heppleston, AG. 1991. Minerals, fibrosis and the Lung. Environ Health Persp 94:149-168.

Herbert, A, M Carvalheiro, E Rubenowiz, B Bake, and R Rylander. 1992. Reduction of


alveolar-capillary diffusion after inhalation of endotoxin in normal subjects. Chest 102:1095-
1098.

Hessel, PA, GK Sluis-Cremer, E Hnizdo, MH Faure, RG Thomas, and FJ Wiles. 1988.
Progression of silicosis in relation to silica dust exposure. Ann Occup Hyg 32 Suppl. 1:689-
696.

Higginson, J, CS Muir, and N Muñoz. 1992. Human cancer: Epidemiology and environmental
causes. In Cambridge Monographs on Cancer Research. Cambridge: Cambridge Univ. Press.

Hinds, WC. 1982. Aerosol Technology: Properties, Behavior, and Measurement of Airborne
Particles. New York: John Wiley.

Hoffman, RE, K Rosenman, F Watt, et al. 1990. Occupational disease surveillance:


Occupational asthma. Morb Mortal Weekly Rep 39:119-123.

Hogg, JC. 1981. Bronchial mucosal permeability and its relationship to airways
hyperreactivity. J Allergy Clin Immunol 67:421-425.

Holgate, ST, R Beasley, and OP Twentyman. 1987. The pathogenesis and significance of
bronchial hyperresponsiveness in airways disease. Clin Sci 73:561-572.

Holtzman, MJ. 1991. Arachidonic acid metabolism. Implications of biological chemistry for
lung function and disease. Am Rev Respir Dis 143:188-203.

Hughes, JM and H Weil. 1991. Asbestosis as a precursor of asbestos related lung cancer:
Results of a prospective mortality study. Brit J Ind Med 48: 229-233.

Hussain, MH, JA Dick, and YS Kaplan. 1980. Rare earth pneumoconiosis. J Soc Occup Med
30:15-19.

Ihde, DC, HI Pass, and EJ Glatstein. 1993. Small cell lung cancer. In Cancer: Principles and
Practice of Oncology, edited by VTJ DeVita, S Hellman, and SA Rosenberg. Philadelphia: JB
Lippincott.

Infante-Rivard, C, B Armstrong, P Ernst, M Peticlerc, L-G Cloutier, and G Thériault. 1991.


Descriptive study of prognostic factors influencing survival of compensated silicotic patients.
Am Rev Respir Dis 144:1070-1074.

International Agency for Research on Cancer (IARC). 1971-1994. Monographs on the


Evaluation of Carcinogenic Risks to Humans. Vol. 1-58. Lyon: IARC.

—. 1987. Monographs on the Evaluation of Carcinogenic Risks to Humans, Overall


Evaluations of Carcinogenicity: An Updating of IARC
Monographs. Vol. 1-42. Lyon: IARC. (Supplement 7.)
—. 1988. Man-made mineral fibres and radon. IARC Monographs on the Evaluation of
Carcinogenic Risks to Humans, No. 43. Lyon: IARC.

—. 1988. Radon. IARC Monographs on the Evaluation of Carcinogenic Risks to Humans,


No. 43. Lyon: IARC.

—. 1989a. Diesel and gasoline engine exhausts and some nitroarenes. IARC Monographs on
the Evaluation of Carcinogenic Risks to Humans, No. 46. Lyon: IARC.

—. 1989b. Non-occupational exposure to mineral fibres. IARC Scientific Publications, No.


90. Lyon: IARC.

—. 1989c. Some organic solvents, resin monomers and related compounds, pigments and
occupational exposure in paint manufacture and painting. IARC Monographs on the
Evaluation of Carcinogenic Risks to Humans, No. 47. Lyon: IARC.

—. 1990a. Chromium and chromium compounds. IARC Monographs on the Evaluation of


Carcinogenic Risks to Humans, No. 49. Lyon: IARC.

—. 1990b. Chromium, nickel, and welding. IARC Monographs on the Evaluation of


Carcinogenic Risks to Humans, No. 49. Lyon: IARC.

—. 1990c. Nickel and nickel compounds. IARC Monographs on the Evaluation of


Carcinogenic Risks to Humans, No. 49. Lyon: IARC.

—. 1991a. Chlorinated drinking-water; Chlorination by-products; Some other halogenated


compounds; Cobalt and cobalt compounds. IARC Monographs on the Evaluation of
Carcinogenic Risks to Humans, No. 52. Lyon: IARC.

—. 1991b. Occupational exposures in spraying and application of insecticides and some


pesticides. IARC Monographs on the Evaluation of Carcinogenic Risks to Humans, No. 53.
Lyon: IARC.

—. 1992. Occupational exposures to mists and vapours from sulfuric acid, other strong
inorganic acids and other industrial chemicals. IARC Monographs on the Evaluation of
Carcinogenic Risks to Humans, No. 54. Lyon: IARC.

—. 1994a. Beryllium and beryllium compounds. IARC Monographs on the Evaluation of
Carcinogenic Risks to Humans, No. 58. Lyon: IARC.

—. 1994b. Beryllium, cadmium and cadmium compounds, mercury and the glass industry.
IARC Monographs on the Evaluation of Carcinogenic Risks to Humans, No. 58. Lyon: IARC.

—. 1995. Survival of cancer patients in Europe: The EUROCARE study. IARC Scientific
Publications, No.132. Lyon: IARC.

International Commission on Radiological Protection (ICRP). 1994. Human Respiratory Tract


Model for Radiological Protection. Publication No. 66. ICRP.
International Labour Office (ILO). 1980. Guidelines for the use of ILO international
classification of radiographs of pneumoconioses. Occupational Safety and Health Series, No.
22. Geneva: ILO.

—. 1985. Sixth International Report on the Prevention and Suppression of Dust in Mining,
Tunnelling and Quarrying 1973-1977. Occupational Safety and Health Series, No.48. Geneva:
ILO.

International Organization for Standardization (ISO). 1991. Air Quality - Particle Size
Fraction Definitions for Health-Related Sampling. Geneva: ISO.

Janssen, YMW, JP Marsh, MP Absher, D Hemenway, PM Vacek, KO Leslie, PJA Borm, and
BT Mossman. 1992. Expression of antioxidant enzymes in rat lungs after inhalation of
asbestos or silica. J Biol Chem 267(15):10625-10630.

Jaurand, MC, J Bignon, and P Brochard. 1993. The mesothelioma cell and mesothelioma.
Past, present and future. International Conference, Paris, Sept. 20 to Oct. 2, 1991. Eur Resp
Rev 3(11):237.

Jederlinic, PJ, JL Abraham, A Churg, JS Himmelstein, GR Epler, and EA Gaensler. 1990.
Pulmonary fibrosis in aluminium oxide workers. Am Rev Respir Dis 142:1179-1184.

Johnson, NF, MD Hoover, DG Thomassen, YS Cheng, A Dalley, and AL Brooks. 1992. In


vitro activity of silicon carbide whiskers in comparison to other industrial fibers using four
cell culture systems. Am J Ind Med 21:807-823.

Jones, HD, TR Jones, and WH Lyle. 1982. Carbon fibre: Results of a survey of process
workers and their environment in a factory producing continuous filament. Ann Occup Hyg
26:861-868.

Jones, RN, JE Diem, HW Glindmeyer, V Dharmarajan, YY Hammad, J Carr, and H Weill.


1979. Mill effect and dose-response relationships in byssinosis. Br J Ind Med 36:305-313.

Kamp, DW, P Graceffa, WA Prior, and A Weitzman. 1992. The role of free radicals in
asbestos-induced diseases. Free Radical Bio Med 12:293-315.

Karjalainen, A, PJ Karhonen, K Lalu, A Pentilla, E Vanhala, P Kygornen, and A Tossavainen.


1994. Pleural plaques and exposure to mineral fibres in a male urban necropsy population.
Occup Environ Med 51:456-460.

Kass, I, N Zamel, CA Dobry, and M Holzer. 1972. Bronchiectasis following ammonia burns
of the respiratory tract. Chest 62:282-285.

Katsnelson, BA, LK Konyscheva, YEN Sharapova, and LI Privalova. 1994. Prediction of the
comparative intensity of pneumoconiotic changes caused by chronic inhalation exposure to
dusts of different cytotoxicity by means of a mathematical model. Occup Environ Med
51:173-180.

Keenan, KP, JW Combs, and EM McDowell. 1982. Regeneration of hamster tracheal


epithelium after mechanical injury I, II, III. Virchows Archiv 41:193-252.
Keenan, KP, TS Wilson, and EM McDowell. 1983. Regeneration of hamster tracheal
epithelium after mechanical injury IV. Virchows Archiv 41:213-240.
Kehrer, JP. 1993. Free radicals as mediators of tissue injury and disease. Crit Rev Toxicol
23:21-48.

Keimig, DG, RM Castellan, GJ Kullman, and KB Kinsley. 1987. Respiratory health status of
gilsonite workers. Am J Ind Med 11:287-296.

Kelley, J. 1990. Cytokines of the Lung. Am Rev Respir Dis 141:765-788.

Kennedy, TP, R Dodson, NV Rao, H Ky, C Hopkins, M Baser, E Tolley, and JR Hoidal.
1989. Dusts causing pneumoconiosis generate OH and produce hemolysis by acting as Fenton
catalysts. Arch Biochem Biophys 269(1):359-364.

Kilburn, KH and RH Warshaw. 1992. Irregular opacities in the lung, occupational asthma,
and airways dysfunction in aluminum workers. Am J Ind Med 21:845-853.

Kokkarinen, J, H Tuikainen, and EO Terho. 1992. Severe farmer’s lung following a


workplace challenge. Scand J Work Environ Health 18:327-328.

Kongerud, J, J Boe, V Soyseth, A Naalsund, and P Magnus. 1994. Aluminium pot room
asthma: The Norwegian experience. Eur Resp J 7:165-172.

Korn, RJ, DW Dockery, and FE Speizer. 1987. Occupational exposure and chronic respiratory
symptoms. Am Rev Respir Dis 136:298-304.

Kriebel, D. 1994. The dosimetric model in occupational and environmental epidemiology.


Occup Hyg 1:55-68.

Kriegseis, W, A Scharmann, and J Serafin. 1987. Investigations of surface properties of silica


dusts with regard to their cytotoxicity. Ann Occup Hyg 31(4A):417-427.

Kuhn, DC and LM Demers. 1992. Influence of mineral dust surface chemistry on eicosanoid
production by the alveolar macrophage. J Tox Environ Health 35: 39-50.

Kuhn, DC, CF Stanley, N El-Ayouby, and LM Demers. 1990. Effect of in vivo coal dust
exposure on arachidonic acid metabolism in the rat alveolar macrophage. J Tox Environ
Health 29:157-168.

Kunkel, SL, SW Chensue, RM Strieter, JP Lynch, and DG Remick. 1989. Cellular and
molecular aspects of granulomatous inflammation. Am J Respir Cell Mol Biol 1:439-447.

Kuntz, WD and CP McCord. 1974. Polymer fume fever. J Occup Med 16:480-482.

Lapin, CA, DK Craig, MG Valerio, JB McCandless, and R Bogoroch. 1991. A subchronic


inhalation toxicity study in rats exposed to silicon carbide whiskers. Fund Appl Toxicol
16:128-146.
Larsson, K, P Malmberg, A Eklund, L Belin, and E Blaschke. 1988. Exposure to
microorganisms, airway inflammatory changes and immune reactions in asymptomatic dairy
farmers. Int Arch Allergy Imm 87:127-133.

Lauweryns, JM and JH Baert. 1977. Alveolar clearance and the role of the pulmonary
lymphatics. Am Rev Respir Dis 115:625-683.

Leach, J. 1863. Surat cotton, as it bodily affects operatives in cotton mills. Lancet II:648.

Lecours, R, M Laviolette, and Y Cormier. 1986. Bronchoalveolar lavage in pulmonary


mycotoxicosis (organic dust toxic syndrome). Thorax 41:924-926.

Lee, KP, DP Kelly, FO O’Neal, JC Stadler, and GL Kennedy. 1988. Lung response to
ultrafine kevlar aramid synthetic fibrils following 2-year inhalation exposure in rats. Fund
Appl Toxicol 11:1-20.

Lemasters, G, J Lockey, C Rice, R McKay, K Hansen, J Lu, L Levin, and P Gartside. 1994.
Radiographic changes among workers manufacturing refractory ceramic fiber and products.
Ann Occup Hyg 38 Suppl 1:745-751.

Lesur, O, A Cantin, AK Transwell, B Melloni, J-F Beaulieu, and R Bégin. 1992. Silica
exposure induces cytotoxicity and proliferative activity of type II pneumocytes. Exp Lung Res
18:173-190.

Liddell, D and K Miller (eds.). 1991. Mineral Fibers and Health. Boca Raton, Fla: CRC
Press.
Lippman, M. 1988. Asbestos exposure indices. Environmental Research 46:86-92.

—. 1994. Deposition and retention of inhaled fibres: Effects on incidence of lung cancer and
mesothelioma. Occup Environ Med 5: 793-798.

Lockey, J and E James. 1995. Man-made fibers and nonasbestos fibrous silicates. Chap. 21 in
Occupational and Environmental Respiratory Diseases, edited by P Harber, MB Schenker,
and JR Balmes. St.Louis: Mosby.

Luce, D, P Brochard, P Quénel, C Salomon-Nekiriai, P Goldberg, MA Billon-Galland, and M


Goldberg. 1994. Malignant pleural mesothelioma associated with exposure to tremolite.
Lancet 344:1777.

Malo, J-L, A Cartier, J L’Archeveque, H Ghezzo, F Lagier, C Trudeau, and J Dolovich. 1990.
Prevalence of occupational asthma and immunological sensitization to psyllium among health
personnel in chronic care hospitals. Am Rev Respir Dis 142:373-376.

Malo, J-L, H Ghezzo, J L’Archeveque, F Lagier, B Perrin, and A Cartier. 1991. Is the clinical
history a satisfactory means of diagnosing occupational asthma? Am Rev Respir Dis 143:528-
532.

Man, SFP and WC Hulbert. 1988. Airway repair and adaptation to inhalation injury. In
Pathophysiology and Treatment of Inhalation Injuries, edited by J Locke. New York: Marcel
Dekker.
Markowitz, S. 1992. Primary prevention of occupational lung disease: A view from the
United States. Israel J Med Sci 28:513-519.

Marsh, GM, PE Enterline, RA Stone, and VL Henderson. 1990. Mortality among a cohort of
US man-made mineral fiber workers: 1985 follow-up. J Occup Med 32:594-604.

Martin, TR, SW Meyer, and DR Luchtel. 1989. An evaluation of the toxicity of carbon fiber
composites for lung cells in vitro and in vivo. Environmental Research 49:246-261.

May, JJ, L Stallones, and D Darrow. 1989. A study of dust generated during silo opening and
its physiologic effect on workers. In Principles of Health and Safety in Agriculture, edited by
JA Dosman and DW Cockcroft. Boca Raton: CRC Press.

McDermott, M, C Bevan, JE Cotes, MM Bevan, and PD Oldham. 1978. Respiratory function
in slateworkers. B Eur Physiopathol Resp 14:54.

McDonald, JC. 1995. Health implications of environmental exposure to asbestos. Environ
Health Persp 106: 544-96.

McDonald, JC and AD McDonald. 1987. Epidemiology of malignant mesothelioma. In


Asbestos-Related Malignancy, edited by K Antman and J Aisner. Orlando, Fla: Grune &
Stratton.

—. 1991. Epidemiology of mesothelioma. In Mineral Fibres and Health. Boca Raton: CRC
Press.

—. 1993. Mesothelioma: Is there a background? In The Mesothelioma Cell and


Mesothelioma: Past, Present and Future, edited by MC Jaurand, J Bignon, and P Brochard.

—. 1995. Chrysotile, tremolite, and mesothelioma. Science 267:775-776.

McDonald, JC, B Armstrong, B Case, D Doell, WTE McCaughey, AD McDonald, and P


Sébastien. 1989. Mesothelioma and asbestos fibre type. Evidence from lung tissue analyses.
Cancer 63:1544-1547.

McDonald, JC, FDK Liddell, A Dufresne, and AD McDonald. 1993. The 1891-1920 birth
cohort of Quebec chrysotile miners and millers: Mortality 1976-1988. Brit J Ind Med
50:1073-1081.

McMillan, DD and GN Boyd. 1982. The role of antioxidants and diet in the prevention or
treatment of oxygen-induced lung microvascular injury. Ann NY Acad Sci 384:535-543.

Medical Research Council. 1960. Standardized questionnaire on respiratory symptoms. Brit


Med J 2:1665.

Mekky, S, SA Roach, and RSF Schilling. 1967. Byssinosis among winders in the industry. Br
J Ind Med 24:123-132.

Merchant JA, JC Lumsden, KH Kilburn, WM O’Fallon, JR Ujda, VH Germino, and JD


Hamilton. 1973. Dose response studies in cotton textile workers. J Occup Med 15:222-230.
Meredith, SK and JC McDonald. 1994. Work-related respiratory disease in the United
Kingdom, 1989-1992. Occup Environ Med 44:183-189.

Meredith, S and H Nordman. 1996. Occupational asthma: Measures of frequency of four


countries. Thorax 51:435-440.

Mermelstein, R, RW Lilpper, PE Morrow, and H Muhle. 1994. Lung overload, dosimetry of


lung fibrosis and their implications to the respiratory dust standard. Ann Occup Hyg 38 Suppl.
1:313-322.

Merriman, EA. 1989. Safe use of Kevlar aramid fiber in composites. Appl Ind Hyg Special
Issue (December):34-36.

Meurman, LO, E Pukkala, and M Hakama. 1994. Incidence of cancer among anthophyllite
asbestos miners in Finland. Occup Environ Med 51:421-425.

Michael, O, R Ginanni, J Duchateau, F Vertongen, B LeBon, and R Sergysels. 1991.


Domestic endotoxin exposure and clinical severity of asthma. Clin Exp Allergy 21:441-448.

Michel, O, J Duchateau, G Plat, B Cantinieaux, A Hotimsky, J Gerain and R Sergysels. 1995.


Blood inflammatory response to inhaled endotoxin in normal subjects. Clin Exp Allergy
25:73-79.

Morey, P, JJ Fischer, and R Rylander. 1983. Gram-negative bacteria on cotton with particular
reference to climatic conditions. Am Ind Hyg Assoc J 44: 100-104.

National Academy of Sciences. 1988. Health risks of radon and other internally deposited
alpha-emitters. Washington, DC: National Academy of Sciences.

—. 1990. Health effects of exposure to low levels of ionizing radiation. Washington, DC:
National Academy of Sciences.

National Asthma Education Program (NAEP). 1991. Expert Panel Report: Guidelines for the
Diagnosis and Management of Asthma. Bethesda, Md: National Institutes of Health (NIH).

Nemery, B. 1990. Metal toxicity and the respiratory tract. Eur Resp J 3:202-219.

Newman, LS, K Kreiss, T King, S Seay, and PA Campbell. 1989. Pathologic and
immunologic alterations in early stages of beryllium disease. Reexamination of disease
definition and natural history. Am Rev Respir Dis 139:1479-1486.

Nicholson, WJ. 1991. In Health Effects Institute-Asbestos Research: Asbestos in Public and
Commercial Buildings. Cambrige, Mass: Health Effects Institute-Asbestos Research.

Niewoehner, DE and JR Hoidal. 1982. Lung Fibrosis and Emphysema: Divergent responses
to a common injury. Science 217:359-360.

Nolan, RP, AM Langer, JS Harrington, G Oster, and IJ Selikoff. 1981. Quartz hemolysis as
related to its surface functionalities. Environ Res 26:503-520.
Oakes, D, R Douglas, K Knight, M Wusteman, and JC McDonald. 1982. Respiratory effects
of prolonged exposure to gypsum dust. Ann Occup Hyg 2:833-840.

O’Brodovich, H and G Coates. 1987. Pulmonary Clearance of 99mTc-DTPA: A noninvasive


assessment of epithelial integrity. Lung 16:1-16.

Parkes, RW. 1994. Occupational Lung Disorders. London: Butterworth-Heinemann.

Parkin, DM, P Pisani, and J Ferlay. 1993. Estimates of the worldwide incidence of eighteen
major cancers in 1985. Int J Cancer 54:594-606.

Pepys, J and PA Jenkins. 1963. Farmer’s lung: Thermophilic actinomycetes as a source of


“farmer’s lung hay” antigen. Lancet 2:607-611.

Pepys, J, RW Riddell, KM Citron, and YM Clayton. 1962. Precipitins against extracts of hay
and molds in the serum of patients with farmer’s lung, aspergillosis, asthma and sarcoidosis.
Thorax 17:366-374.

Pernis, B, EC Vigliani, C Cavagna, and M Finulli. 1961. The role of bacterial endotoxins in
occupational diseases caused by inhaling vegetable dusts. Brit J Ind Med 18:120-129.

Petsonk, EL, E Storey, PE Becker, CA Davidson, K Kennedy, and V Vallyathan. 1988.
Pneumoconiosis in carbon electrode workers. J Occup Med 30: 887-891.

Pézerat, H, R Zalma, J Guignard, and MC Jaurand. 1989. Production of oxygen radicals by


the reduction of oxygen arising from the surface activity of mineral fibres. In Non-
occupational exposure to mineral fibres, edited by J Bignon, J Peto, and R Saracci. IARC
Scientific Publications, no.90. Lyon: IARC.

Piguet, PF, AM Collart, GE Gruaeu, AP Sappino, and P Vassalli. 1990. Requirement of


tumour necrosis factor for development of silica-induced pulmonary fibrosis. Nature 344:245-
247.

Porcher, JM, C Lafuma, R El Nabout, MP Jacob, P Sébastien, PJA Borm, S Hannons, and G
Auburtin. 1993. Biological markers as indicators of exposure and pneumoconiotic risk:
Prospective study. Int Arch Occup Environ Health 65:S209-S213.

Prausnitz, C. 1936. Investigations on respiratory dust disease in operatives in cotton industry.


Medical Research Council Special Report Series, No. 212. London: His Majesty’s Stationery
Office.

Preston, DL, H Kato, KJ Kopecky, and S Fujita. 1986. Life Span Study Report 10, Part 1.
Cancer Mortality Among A-Bomb Survivors in Hiroshima and Nagasaki, 1950-1982.
Technical Report. RERF TR.

Quanjer, PH, GJ Tammeling, JE Cotes, OF Pedersen, R Peslin and J-C Vernault. 1993. Lung
volumes and forced ventilatory flows. Report of Working Party, Standardization of Lung
Function Tests, European Community for Steel and Coal. Official Statement of the European
Respiratory Society. Eur Resp J 6(suppl 16): 5-40.
Raabe, OG. 1984. Deposition and clearance of inhaled particles. In Occupational Lung
Disease, edited by BL Gee, WKC Morgan, and GM Brooks. New York: Raven Press.

Ramazzini, B. 1713. De Morbis Artificum Diatriba (Diseases of Workers). In Allergy Proc
1990, 11:51-55.

Rask-Andersen A. 1988. Pulmonary reactions to inhalation of mould dust in farmers with


special reference to fever and allergic alveolitis. Acta Universitatis Upsalienses. Dissertations
from the Faculty of Medicine 168. Uppsala.

Richards, RJ, LC Masek, and RFR Brown. 1991. Biochemical and Cellular Mechanisms of
Pulmonary Fibrosis. Toxicol Pathol 19(4):526-539.

Richerson, HB. 1983. Hypersensitivity pneumonitis – pathology and pathogenesis. Clin Rev
Allergy 1: 469-486.

—. 1990. Unifying concepts underlying the effects of organic dust exposures. Am J Ind Med
17:139-142.

—. 1994. Hypersensitivity pneumonitis. In Organic Dusts - Exposure, Effects, and


Prevention, edited by R Rylander and RR Jacobs. Chicago: Lewis Publishing.

Richerson, HB, IL Bernstein, JN Fink, GW Hunninghake, HS Novey, CE Reed, JE Salvaggio,
MR Schuyler, HJ Schwartz, and DJ Stechschulte. 1989. Guidelines for the clinical evaluation
of hypersensitivity pneumonitis. J Allergy Clin Immunol 84:839-844.

Rom, WN. 1991. Relationship of inflammatory cell cytokines to disease severity in


individuals with occupational inorganic dust exposure. Am J Ind Med 19:15-27.

—. 1992a. Environmental and Occupational Medicine. Boston: Little, Brown & Co.

—. 1992b. Hairspray-induced lung disease. In Environmental and Occupational Medicine,


edited by WN Rom. Boston: Little, Brown & Co.

Rom, WN, JS Lee, and BF Craft. 1981. Occupational and environmental health problems of
the developing oil shale industry: A review. Am J Ind Med 2: 247-260.

Rose, CS. 1992. Inhalation fevers. In Environmental and Occupational Medicine, edited by
WN Rom. Boston: Little, Brown & Co.

Rylander R. 1987. The role of endotoxin for reactions after exposure to cotton dust. Am J Ind
Med 12: 687-697.

Rylander, R, B Bake, J-J Fischer and IM Helander 1989. Pulmonary function and symptoms
after inhalation of endotoxin. Am Rev Resp Dis 140:981-986.

Rylander R and R Bergström 1993. Bronchial reactivity among cotton workers in relation to
dust and endotoxin exposure. Ann Occup Hyg 37:57-63.
Rylander, R, KJ Donham, and Y Peterson. 1986. Health effects of organic dusts in the farm
environment. Am J Ind Med 10:193-340.

Rylander, R and P Haglind. 1986. Exposure of cotton workers in an experimental cardroom


with reference to airborne endotoxins. Environ Health Persp 66:83-86.

Rylander R, P Haglind, M Lundholm 1985. Endotoxin in cotton dust and respiratory function
decrement among cotton workers. Am Rev Respir Dis 131:209-213.

Rylander, R and PG Holt. 1997. Modulation of immune response to inhaled allergen by co-
exposure to the microbial cell wall components (1→3)-β-D-glucan and endotoxin.
Manuscript.

Rylander, R and RR Jacobs. 1994. Organic Dusts: Exposure, Effects, and Prevention.
Chicago: Lewis Publishing.

—. 1997. Environmental endotoxin – A criteria document. J Occup Environ Health 3: 51-548.

Rylander, R and Y Peterson. 1990. Organic dusts and lung disease. Am J Ind Med 17:1148.

—. 1994. Causative agents for organic dust related disease. Am J Ind Med 25:1-147.

Rylander, R, Y Peterson, and KJ Donham. 1990. Questionnaire evaluating organic dust


exposure. Am J Ind Med 17:121-126.

Rylander, R, RSF Schilling, CAC Pickering, GB Rooke, AN Dempsey, and RR Jacobs. 1987.
Effects after acute and chronic exposure to cotton dust - The Manchester criteria. Brit J Ind
Med 44:557-579.

Sabbioni, E, R Pietra, and P Gaglione. 1982. Long term occupational risk of rare-earth
pneumoconiosis. Sci Total Environ 26:19-32.

Sadoul, P. 1983. Pneumoconiosis in Europe yesterday, today and tomorrow. Eur J Resp Dis
64 Suppl. 126:177-182.

Scansetti, G, G Piolatto, and GC Botta. 1992. Airborne fibrous and non-fibrous particles in a
silicon carbide manufacturing plant. Ann Occup Hyg 36(2):145-153.

Schantz, SP, LB Harrison, and WK Hong. 1993. Tumours of the nasal cavity and paranasal
sinuses, nasopharynx, oral cavity,and oropharynx. In Cancer: Principles & Practice of
Oncology, edited by VTJ DeVita, S Hellman, and SA Rosenberg. Philadelphia: JB Lippincott.

Schilling, RSF. 1956. Byssinosis in cotton and other textile workers. Lancet 2:261-265.

Schilling, RSF, JPW Hughes, I Dingwall-Fordyce, and JC Gilson. 1955. An epidemiological


study of byssinosis among Lancashire cotton workers. Brit J Ind Med 12:217-227.

Schulte, PA. 1993. Use of biological markers in occupational health research and practice. J
Tox Environ Health 40:359-366.
Schuyler, M, C Cook, M Listrom, and C Fengolio-Preiser. 1988. Blast cells transfer
experimental hypersensitivity pneumonitis in guinea pigs. Am Rev Respir Dis 137:1449-
1455.

Schwartz DA, KJ Donham, SA Olenchock, WJ Popendorf, D Scott Van Fossen, LJ


Burmeister and JA Merchant. 1995. Determinants of longitudinal changes in spirometric
function among swine confinement operators and farmers. Am J Respir Crit Care Med 151:
47-53.

Science of the total environment. 1994. Cobalt and Hard Metal Disease 150(Special issue):1-
273.

Scuderi, P. 1990. Differential effects of copper and zinc on human peripheral blood monocyte
cytokine secretion. Cell Immunol 265:2128-2133.
Seaton, A. 1983. Coal and the lung. Thorax 38:241-243.

Seaton, J, D Lamb, W Rhind Brown, G Sclare, and WG Middleton. 1981. Pneumoconiosis of
shale miners. Thorax 36:412-418.

Sébastien, P. 1990. Les mystères de la nocivité du quartz. In Conférence Thématique. 23


Congrès International De La Médecine Du Travail Montréal: Commission international de la
Médecine du travail.

—. 1991. Pulmonary Deposition and Clearance of Airborne Mineral Fibers. In Mineral Fibers
and Health, edited by D Liddell and K Miller. Boca Raton: CRC Press.

Sébastien, P, A Dufresne, and R Bégin. 1994. Asbestos fibre retention and the outcome of
asbestosis with or without exposure cessation. Ann Occup Hyg 38 Suppl. 1:675-682.

Sébastien, P, B Chamak, A Gaudichet, JF Bernaudin, MC Pinchon, and J Bignon. 1994.


Comparative study by analytical transmission electron microscopy of particles in alveolar and
interstitial human lung macrophages. Ann Occup Hyg 38 Suppl. 1:243-250.

Seidman, H and IJ Selikoff. 1990. Decline in death rates among asbestos insulation workers
1967-1986 associated with diminution of work exposure to asbestos. Annals of the New York
Academy of Sciences 609:300-318.

Selikoff, IJ and J Churg. 1965. The biological effects of asbestos. Ann NY Acad Sci 132:1-
766.

Selikoff, IJ and DHK Lee. 1978. Asbestos and Disease. New York: Academic Press.

Sessions, RB, LB Harrison, and VT Hong. 1993. Tumours of the larynx, and hypopharynx. In
Cancer: Principles and Practice of Oncology, edited by VTJ DeVita, S Hellman, and SA
Rosenberg. Philadelphia: JB Lippincott.

Shannon, HS, E Jamieson, JA Julian, and DCF Muir. 1990. Mortality of glass filament
(textile) workers. Brit J Ind Med 47:533-536.
Sheppard, D. 1988. Chemical agents. In Respiratory Medicine, edited by JF Murray and JA
Nadel. Philadelphia: WB Saunders.

Shimizu, Y, H Kato, WJ Schull, DL Preston, S Fujita, and DA Pierce. 1987. Life span study
report 11, Part 1. Comparison of Risk Coefficients for Site-Specific Cancer Mortality based
on the DS86 and T65DR Shielded Kerma and Organ Doses. Technical Report. RERF TR 12-
87.

Shusterman, DJ. 1993. Polymer fume fever and other flourocarbon pyrolysis related
syndromes. Occup Med: State Art Rev 8:519-531.

Sigsgaard T, OF Pedersen, S Juul and S Gravesen. Respiratory disorders and atopy in cotton
wool and other textile mill workers in Denmark. Am J Ind Med 1992;22:163-184.

Simonato, L, AC Fletcher, and JW Cherrie. 1987. The International Agency for Research on
Cancer historical cohort study of MMMF production workers in seven European countries:
Extension of the follow-up. Ann Occup Hyg 31:603-623.

Skinner, HCW, M Roos, and C Frondel. 1988. Asbestos and Other Fibrous Minerals. New
York: Oxford Univ. Press.

Skornik, WA. 1988. Inhalation toxicity of metal particles and vapors. In Pathophysiology and
Treatment of Inhalation Injuries, edited by J Locke. New York: Marcel Dekker.

Smith, PG and R Doll. 1982. Mortality among patients with ankylosing sponchylitis after a
single treatment course with X-rays. Brit Med J 284:449-460.

Smith, TJ. 1991. Pharmacokinetic models in the development of exposure indicators in


epidemiology. Ann Occup Hyg 35(5):543-560.

Snella, M-C and R Rylander. 1982. Lung cell reactions after inhalation of bacterial
lipopolysaccharides. Eur J Resp Dis 63:550-557.

Stanton, MF, M Layard, A Tegeris, E Miller, M May, E Morgan, and A Smith. 1981. Relation
of particle dimension to carcinogenicity in amphibole asbestoses and other fibrous minerals. J
Natl Cancer Inst 67:965-975.

Stephens, RJ, MF Sloan, MJ Evans, and G Freeman. 1974. Alveolar type I cell response to
exposure to 0.5 ppm 03 for short periods. Exp Mol Pathol 20:11-23.

Stille, WT and IR Tabershaw. 1982. The mortality experience of upstate New York talc
workers. J Occup Med 24:480-484.

Strom, E and O Alexandersen. 1990. Pulmonary damage caused by ski waxing. Tidsskrift for
Den Norske Laegeforening 110:3614-3616.

Sulotto, F, C Romano, and A Berra. 1986. Rare earth pneumoconiosis: A new case. Am J Ind
Med 9: 567-575.

Trice, MF. 1940. Card-room fever. Textile World 90:68.


Tyler, WS, NK Tyler, and JA Last. 1988. Comparison of daily and seasonal exposures of
young monkeys to ozone. Toxicology 50:131-144.

Ulfvarson, U and M Dahlqvist. 1994. Pulmonary function in workers exposed to diesel


exhaust. In Encyclopedia of Environmental Control Technology New Jersey: Gulf Publishing.

US Department of Health and Human Services. 1987. Report on cancer risks associated with
the ingestion of asbestos. Environ Health Persp 72:253-266.

US Department of Health and Human Services (USDHHS). 1994. Work-Related Lung


Disease Surveillance Report. Washington, DC: Public Health Services, Center for Disease
Control and Prevention.

Vacek, PM and JC McDonald. 1991. Risk assessment using exposure intensivity: An


application to vermiculite mining. Brit J Ind Med 48:543-547.

Valiante, DJ, TB Richards, and KB Kinsley. 1992. Silicosis surveillance in New Jersey:
Targeting workplaces using occupational disease and exposure surveillance data. Am J Ind
Med 21:517-526.

Vallyathan, NV and JE Craighead. 1981. Pulmonary pathology in workers exposed to


nonasbestiform talc. Hum Pathol 12:28-35.

Vallyathan, V, X Shi, NS Dalal, W Irr, and V Castranova. 1988. Generation of free radicals
from freshly fractured silica dust. Potential role in acute silica-induced lung injury. Am Rev
Respir Dis 138:1213-1219.

Vanhee, D, P Gosset, B Wallaert, C Voisin, and AB Tonnel. 1994. Mechanisms of fibrosis in


coal workers’ pneumoconiosis. Increased production of platelet-derived growth factor,
insulin-like growth factor type I, and transforming growth-factor beta and relationship to
disease severity. Am J Resp Critical Care Med 150(4):1049-1055.

Vaughan, GL, J Jordan, and S Karr. 1991. The toxicity, in vitro, of silicon carbide whiskers.
Environmental Research 56:57-67.
Vincent, JH and K Donaldson. 1990. A dosimetric approach for relating the biological
response of the lung to the accumulation of inhaled mineral dust. Brit J Ind Med 47:302-307.

Vocaturo, KG, F Colombo, and M Zanoni. 1983. Human exposure to heavy metals. Rare
earth pneumoconiosis in occupational workers. Chest 83:780-783.

Wagner, GR. 1996. Health Screening and Surveillance of Mineral Dust Exposed Workers.
Recommendation for the ILO Workers Group. Geneva: WHO.

Wagner, JC. 1994. The discovery of the association between blue asbestos and mesotheliomas
and the aftermath. Brit J Ind Med 48:399-403.

Wallace, WE, JC Harrison, RC Grayson, MJ Keane, P Bolsaitis, RD Kennedy, AQ Wearden,


and MD Attfield. 1994. Aluminosilicate surface contamination of respirable quartz particles
from coal mine dusts and from clay works dust. Ann Occup Hyg 38 Suppl. 1:439-445.
Warheit, DB, KA Kellar, and MA Hartsky. 1992. Pulmonary cellular effects in rats following
aerosol exposures to ultrafine Kevlar aramid fibrils: Evidence for biodegradability of inhaled
fibrils. Toxicol Appl Pharmacol 116:225-239.

Waring, PM and RJ Watling. 1990. Rare deposits in a deceased movie projectionist. A new
case of rare earth pneumoconiosis? Med J Austral 153:726-730.

Wegman, DH and JM Peters. 1974. Polymer fume fever and cigarette smoking. Ann Intern
Med 81:55-57.

Wegman, DH, JM Peters, MG Boundy, and TJ Smith. 1982. Evaluation of respiratory effects
in miners and millers exposed to talc free of asbestos and silica. Brit J Ind Med 39:233-238.

Wells, RE, RF Slocombe, and AL Trapp. 1982. Acute toxicosis of budgerigars (Melopsittacus
undulatus) caused by pyrolysis products from heated polytetrafluoroethylene: Clinical study.
Am J Vet Res 43:1238-1248.

Wergeland, E, A Andersen, and A Baerheim. 1990. Morbidity and mortality in talc-exposed


workers. Am J Ind Med 17:505-513.

White, DW and JE Burke. 1955. The Metal Beryllium. Cleveland, Ohio: American Society
for Metals.

Wiessner, JH, NS Mandel, PG Sohnle, A Hasegawa, and GS Mandel. 1990. The effect of
chemical modification of quartz surfaces on particulate-induces pulmonary inflammation and
fibrosis in the mouse. Am Rev Respir Dis 141:11-116.

Williams, N, W Atkinson, and AS Patchefsky. 1974. Polymer fume fever: Not so benign. J
Occup Med 19:693-695.

Wong, O, D Foliart, and LS Trent. 1991. A case-control study of lung cancer in a cohort of
workers potentially exposed to slag wool fibres. Brit J Ind Med 48:818-824.

Woolcock, AJ. 1989. Epidemiology of Chronic airways disease. Chest 96 (Suppl): 302-306S.

World Health Organization (WHO) and International Agency for Research on Cancer
(IARC). 1982. IARC Monographs on the Evaluation of the Carcinogenic Risk of Chemicals
to Humans. Lyon: IARC.

World Health Organization (WHO) and Office of Occupational Health. 1989. Occupational
Exposure Limit for Asbestos. Geneva: WHO.

Wright, JL, P Cagle, A Shurg, TV Colby, and J Myers. 1992. Diseases of the small airways.
Am Rev Respir Dis 146:240-262.

Yan, CY, CC Huang, IC Chang, CH Lee, JT Tsai, and YC Ko. 1993. Pulmonary function and
respiratory symptoms of portland cement workers in southern Taiwan. Kaohsiung J Med Sci
9:186-192.
Zajda, EP. 1991. Pleural and airway disease associated with mineral fibers. In Mineral Fibers
and
Health, edited by D Liddell and K Miller. Boca Raton: CRC Press.

Ziskind, M, RN Jones, and H Weill. 1976. Silicosis. Am Rev Respir Dis 113:643-665.

Copyright 2015 International Labour Organization


11. Sensory Systems
Chapter Editor: Heikki Savolainen

The Ear

Written by ILO Content Manager

Marcel-André Boillat

Anatomy

The ear is the sensory organ responsible for hearing and the maintenance of equilibrium, via
the detection of body position and of head movement. It is composed of three parts: the outer,
middle, and inner ear; the outer ear lies outside the skull, while the other two parts are
embedded in the temporal bone (figure 1).

Figure 1. Diagram of the ear.

The outer ear consists of the auricle, a cartilaginous skin-covered structure, and the external
auditory canal, an irregularly-shaped cylinder approximately 25 mm long which is lined by
glands secreting wax.

The middle ear consists of the tympanic cavity, an air-filled space whose outer wall is formed by the tympanic membrane (eardrum); it communicates proximally with the nasopharynx via the Eustachian tubes, which maintain pressure equilibrium on either side of the tympanic membrane. For instance, this communication explains how swallowing allows equalization of
pressure and restoration of lost hearing acuity caused by rapid change in barometric pressure
(e.g., landing airplanes, fast elevators). The tympanic cavity also contains the ossicles—the
malleus, incus and stapes—which are controlled by the stapedius and tensor tympani muscles.
The tympanic membrane is linked to the inner ear by the ossicles, specifically by the mobile
foot of the stapes, which lies against the oval window.
The inner ear contains the sensory apparatus per se. It consists of a bony shell (the bony
labyrinth) within which is found the membranous labyrinth—a series of cavities forming a
closed system filled with endolymph, a potassium-rich liquid. The membranous labyrinth is
separated from the bony labyrinth by the perilymph, a sodium-rich liquid.

The bony labyrinth itself is composed of two parts. The anterior portion is known as the
cochlea and is the actual organ of hearing. It has a spiral shape reminiscent of a snail shell,
and is pointed in the anterior direction. The posterior portion of the bony labyrinth contains
the vestibule and the semicircular canals, and is responsible for equilibrium. The
neurosensory structures involved in hearing and equilibrium are located in the membranous
labyrinth: the organ of Corti is located in the cochlear canal, while the maculae of the utricle
and the saccule and the ampullae of the semicircular canals are located in the posterior
section.

Hearing organs

The cochlear canal is a spiral triangular tube, comprising two and one-half turns, which
separates the scala vestibuli from the scala tympani. One end terminates in the spiral ligament,
a process of the cochlea’s central column, while the other is connected to the bony wall of the
cochlea.

The scala vestibuli and tympani end in the oval window (the foot of the stapes) and round
window, respectively. The two chambers communicate through the helicotrema, the tip of the
cochlea. The basilar membrane forms the inferior surface of the cochlear canal, and supports
the organ of Corti, responsible for the transduction of acoustic stimuli. All auditory
information is transduced by only 15,000 hair cells (organ of Corti), of which the so-called
inner hair cells, numbering 3,500, are critically important, since they form synapses with
approximately 90% of the 30,000 primary auditory neurons (figure 2). The inner and outer
hair cells are separated from each other by an abundant layer of support cells. Traversing an
extraordinarily thin membrane, the cilia of the hair cells are embedded in the tectorial
membrane, whose free end is located above the cells. The superior surface of the cochlear
canal is formed by Reissner’s membrane.

Figure 2. Cross-section of one loop of the cochlea. Diameter: approximately 1.5 mm.
The bodies of the cochlear sensory cells resting on the basilar membrane are surrounded by
nerve terminals, and their approximately 30,000 axons form the cochlear nerve. The cochlear
nerve crosses the inner ear canal and extends to the central structures of the brain stem, the
oldest part of the brain. The auditory fibres end their tortuous path in the temporal lobe, the
part of the cerebral cortex responsible for the perception of acoustic stimuli.

Organs of Equilibrium

The sensory cells are located in the ampullae of the semicircular canals and the maculae of the
utricle and saccule, and are stimulated by pressure transmitted through the endolymph as a
result of head or body movements. The cells connect with bipolar cells whose peripheral
processes form two tracts, one from the anterior and external semicircular canals, the other
from the posterior semicircular canal. These two tracts enter the inner ear canal and unite to
form the vestibular nerve, which extends to the vestibular nuclei in the brainstem. Fibres from
the vestibular nuclei, in turn, extend to cerebellar centres controlling eye movements, and to
the spinal cord.

The union of the vestibular and cochlear nerves forms the 8th cranial nerve, also known as the
vestibulocochlear nerve.

Physiology of Hearing

Sound conduction through air

The ear is composed of a sound conductor (the outer and middle ear) and a sound receptor
(the inner ear).

Sound waves passing through the external auditory canal strike the tympanic membrane,
causing it to vibrate. This vibration is transmitted to the stapes through the hammer and anvil.
The surface area of the tympanic membrane is almost 16 times that of the foot of the stapes
(55 mm²/3.5 mm²), and this, in combination with the lever mechanism of the ossicles, results
in a 22-fold amplification of the sound pressure. Due to the middle ear’s resonant frequency,
the transmission ratio is optimal between 1,000 and 2,000 Hz. As the foot of the stapes
moves, it causes waves to form in the liquid within the vestibular canal. Since the liquid is
incompressible, each inward movement of the foot of the stapes causes an equivalent outward
movement of the round window, towards the middle ear.
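
To put these figures together, the pressure amplification can also be expressed in decibels. The following minimal Python sketch uses only the numbers quoted above (55 mm², 3.5 mm² and the overall 22-fold gain); the implied lever ratio and the decibel conversion are simple arithmetic added for illustration.

import math

# Figures quoted in the text (approximate)
tympanic_membrane_area_mm2 = 55.0   # effective area of the tympanic membrane
stapes_footplate_area_mm2 = 3.5     # area of the stapes footplate
overall_pressure_gain = 22.0        # total amplification quoted above

area_ratio = tympanic_membrane_area_mm2 / stapes_footplate_area_mm2  # about 15.7
implied_lever_ratio = overall_pressure_gain / area_ratio             # about 1.4

# A pressure ratio r corresponds to 20 * log10(r) decibels
gain_db = 20 * math.log10(overall_pressure_gain)                     # about 27 dB

print(f"area ratio = {area_ratio:.1f}, implied ossicular lever ratio = {implied_lever_ratio:.1f}")
print(f"overall middle-ear pressure gain = {gain_db:.0f} dB")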

When exposed to high sound levels, the stapes muscle contracts, protecting the inner ear (the
attenuation reflex). In addition to this function, the muscles of the middle ear also extend the
dynamic range of the ear, improve sound localization, reduce resonance in the middle ear, and
control air pressure in the middle ear and liquid pressure in the inner ear.

Between 250 and 4,000 Hz, the threshold of the attenuation reflex is approximately
80 decibels (dB) above the hearing threshold, and increases by approximately 0.6 dB/dB as
the stimulation intensity increases. Its latency is 150 ms at threshold, and 24-35 ms in the
presence of intense stimuli. At frequencies below the natural resonance of the middle ear,
contraction of the middle ear muscles attenuates sound transmission by approximately 10 dB.
Because of its latency, the attenuation reflex provides adequate protection from noise
generated at rates above two to three per second, but not from discrete impulse noise.
The speed with which sound waves propagate through the ear depends on the elasticity of the
basilar membrane. The elasticity increases, and the wave velocity thus decreases, from the
base of the cochlea to the tip. The transfer of vibration energy to Reissner’s membrane and the
basilar membrane is frequency-dependent. At high frequencies, the wave amplitude is greatest
at the base, while for lower frequencies, it is greatest at the tip. Thus, the point of greatest
mechanical excitation in the cochlea is frequency-dependent. This phenomenon underlies the
ability to detect frequency differences. Movement of the basilar membrane induces shear
forces in the stereocilia of the hair cells and triggers a series of mechanical, electrical and
biochemical events responsible for mechanical-sensory transduction and initial acoustic signal
processing. The shear forces on the stereocilia cause ionic channels in the cell membranes to
open, modifying the permeability of the membranes and allowing the entry of potassium ions
into the cells. This influx of potassium ions results in depolarization and the generation of an
action potential.

Neurotransmitters liberated at the synaptic junction of the inner hair cells as a result of
depolarization trigger neuronal impulses which travel down the afferent fibres of the auditory
nerve toward higher centres. The intensity of auditory stimulation depends on the number of
action potentials per unit time and the number of cells stimulated, while the perceived
frequency of the sound depends on the specific nerve fibre populations activated. There is a
specific spatial mapping between the frequency of the sound stimulus and the section of the
cerebral cortex stimulated.

The inner hair cells are mechanoreceptors which transform signals generated in response to
acoustic vibration into electric messages sent to the central nervous system. They are not,
however, responsible for the ear’s threshold sensitivity and its extraordinary frequency
selectivity.

The outer hair cells, on the other hand, send no auditory signals to the brain. Rather, their
function is to selectively amplify mechano-acoustic vibration at near-threshold levels by a
factor of approximately 100 (i.e., 40 dB), and so facilitate stimulation of inner hair cells. This
amplification is believed to function through micromechanical coupling involving the
tectorial membrane. The outer hair cells can produce more energy than they receive from
external stimuli and, by contracting actively at very high frequencies, can function as cochlear
amplifiers.

In the inner ear, interference between outer and inner hair cells creates a feedback loop which
permits control of auditory reception, particularly of threshold sensitivity and frequency
selectivity. Efferent cochlear fibres may thus help reduce cochlear damage caused by
exposure to intense acoustic stimuli. Outer hair cells may also undergo reflex contraction in
the presence of intense stimuli. The attenuation reflex of the middle ear, active primarily at
low frequencies, and the reflex contraction in the inner ear, active at high frequencies, are thus
complementary.

Bone conduction of sound

Sound waves may also be transmitted through the skull. Two mechanisms are possible:

In the first, compression waves impacting the skull cause the incompressible perilymph to
deform the round or oval window. As the two windows have differing elasticities, movement
of the endolymph results in movement of the basilar membrane.
The second mechanism is based on the fact that movement of the ossicles induces movement in the scala vestibuli only. In this mechanism, movement of the basilar membrane results from the translational movement produced by inertia.

Bone conduction is normally 30-50 dB lower than air conduction—as is readily apparent
when both ears are blocked. This is only true, however, for air-mediated stimuli, direct bone
stimulation being attenuated to a different degree.

Sensitivity range

Mechanical vibration induces potential changes in the cells of the inner ear, conduction
pathways and higher centres. Only frequencies of 16 Hz–25,000 Hz and sound pressures
(these can be expressed in pascals, Pa) of 20 μPa to 20 Pa can be perceived. The range of
sound pressures which can be perceived is remarkable—a 1-million-fold range! The detection
thresholds of sound pressure are frequency-dependent, lowest at 1,000-6,000 Hz and
increasing at both higher and lower frequencies.

For practical purposes, the sound pressure level is expressed in decibels (dB), a logarithmic
measurement scale corresponding to perceived sound intensity relative to the auditory
threshold. Thus, 20 μPa is equivalent to 0 dB. As the sound pressure increases tenfold, the
decibel level increases by 20 dB, in accordance with the following formula:

Lx = 20 log10(Px/P0)

where:

Lx = sound pressure level in dB
Px = sound pressure in pascals
P0 = reference sound pressure (2 × 10⁻⁵ Pa, the auditory threshold)
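
As a numerical illustration of this formula, the short Python sketch below converts a few sound pressures to decibel levels (the function name is ours; the values follow directly from the definitions above).

import math

P0 = 2e-5  # reference sound pressure in pascals (auditory threshold)

def sound_pressure_level_db(p_pascals):
    """Convert a sound pressure in pascals to a level in dB re 20 μPa."""
    return 20 * math.log10(p_pascals / P0)

print(sound_pressure_level_db(2e-5))  # 0 dB: the auditory threshold
print(sound_pressure_level_db(2e-4))  # 20 dB: a tenfold pressure increase adds 20 dB
print(sound_pressure_level_db(20.0))  # 120 dB: the upper end of the perceivable range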

The frequency-discrimination threshold, that is, the minimal detectable difference in frequency, is 1.5 Hz up to 500 Hz, and 0.3% of the stimulus frequency at higher frequencies.
At sound pressures near the auditory threshold, the sound-pressure-discrimination threshold is
approximately 20%, although differences of as little as 2% may be detected at high sound
pressures.
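
The frequency-discrimination rule can be restated compactly; the sketch below (Python; the function name is our own) simply encodes the figures given in the preceding paragraph.

def frequency_discrimination_threshold_hz(frequency_hz):
    """Smallest detectable frequency difference, per the rule above:
    about 1.5 Hz up to 500 Hz, about 0.3% of the stimulus frequency beyond that."""
    if frequency_hz <= 500:
        return 1.5
    return 0.003 * frequency_hz

print(frequency_discrimination_threshold_hz(250))   # 1.5 Hz
print(frequency_discrimination_threshold_hz(4000))  # 12.0 Hz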

If two sounds differ in frequency by a sufficiently small amount, only one tone will be heard.
The perceived frequency of the tone will be midway between the two source tones, but its sound pressure level fluctuates (beats). If two acoustic stimuli have similar frequencies but differing
intensities, a masking effect occurs. If the difference in sound pressure is large enough,
masking will be complete, with only the loudest sound perceived.

Localization of acoustic stimuli depends on the detection of the time lag between the arrival
of the stimulus at each ear, and, as such, requires intact bilateral hearing. The smallest
detectable time lag is 3 × 10⁻⁵ seconds. Localization is facilitated by the head’s screening
effect, which results in differences in stimulus intensity at each ear.
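
For a sense of scale, the largest interaural time lag available for localization can be estimated from the distance between the ears and the speed of sound; both figures in the sketch below (roughly 0.2 m and 343 m/s) are assumptions of ours, not values from the text.

# Rough estimate of the maximum interaural time difference (ITD)
speed_of_sound_m_per_s = 343.0  # assumed speed of sound in air
ear_to_ear_distance_m = 0.2     # assumed acoustic path between the ears

max_itd_s = ear_to_ear_distance_m / speed_of_sound_m_per_s  # about 6 x 10^-4 s
print(f"maximum ITD = {max_itd_s:.1e} s, versus a detectable lag of about 3e-5 s")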
The remarkable ability of human beings to resolve acoustic stimuli is a result of frequency
decomposition by the inner ear and frequency analysis by the brain. These are the
mechanisms that allow individual sound sources such as individual musical instruments to be
detected and identified in the complex acoustic signals that make up the music of a full
symphony orchestra.

Physiopathology

Ciliary damage

The ciliary motion induced by intense acoustic stimuli may exceed the mechanical resistance
of the cilia and cause mechanical destruction of hair cells. As these cells are limited in number
and incapable of regeneration, any cell loss is permanent, and if exposure to the harmful
sound stimulus continues, progressive. In general, the ultimate effect of ciliary damage is the
development of a hearing deficit.

Outer hair cells are the most sensitive cells to sound and toxic agents such as anoxia, ototoxic
medications and chemicals (e.g., quinine derivatives, streptomycin and some other antibiotics,
some anti-tumour preparations), and are thus the first to be lost. Only passive
hydromechanical phenomena remain operative in outer hair cells which are damaged or have
damaged stereocilia. Under these conditions, only gross analysis of acoustic vibration is
possible. In very rough terms, cilia destruction in outer hair cells results in a 40 dB increase in
hearing threshold.

Cellular damage

Exposure to noise, especially if it is repetitive or prolonged, may also affect the metabolism of
cells of the organ of Corti, and afferent synapses located beneath the inner hair cells. Reported
extraciliary effects include modification of cell ultrastructure (reticulum, mitochondria,
lysosomes) and, postsynaptically, swelling of afferent dendrites. Dendritic swelling is
probably due to the toxic accumulation of neurotransmitters as a result of excessive activity
by inner hair cells. Nevertheless, the extent of stereociliary damage appears to determine
whether hearing loss is temporary or permanent.

Noise-induced Hearing Loss

Noise is a serious hazard to hearing in today’s increasingly complex industrial societies. For
example, noise exposure accounts for approximately one-third of the 28 million cases of
hearing loss in the United States, and NIOSH (the National Institute for Occupational Safety
and Health) reports that 14% of American workers are exposed to potentially dangerous
sound levels, that is levels exceeding 90 dB. Noise exposure is the most widespread harmful
occupational exposure and is the second leading cause, after age-related effects, of hearing
loss. Finally, the contribution of non-occupational noise exposure must not be forgotten, such
as home workshops, over-amplified music especially with use of earphones, use of firearms,
etc.

Acute noise-induced damage. The immediate effects of exposure to high-intensity sound stimuli (for example, explosions) include elevation of the hearing threshold, rupture of the
eardrum, and traumatic damage to the middle and inner ears (dislocation of ossicles, cochlear
injury or fistulas).
Temporary threshold shift. Noise exposure results in a decrease in the sensitivity of auditory
sensory cells which is proportional to the duration and intensity of exposure. In its early
stages, this increase in auditory threshold, known as auditory fatigue or temporary threshold
shift (TTS), is entirely reversible but persists for some time after the cessation of exposure.

Studies of the recovery of auditory sensitivity have identified several types of auditory
fatigue. Short-term fatigue dissipates in less than two minutes and results in a maximum
threshold shift at the exposure frequency. Long-term fatigue is characterized by recovery in
more than two minutes but less than 16 hours, an arbitrary limit derived from studies of
industrial noise exposure. In general, auditory fatigue is a function of stimulus intensity,
duration, frequency, and continuity. Thus, for a given dose of noise, obtained by integration of
intensity and duration, intermittent exposure patterns are less harmful than continuous ones.

The severity of the TTS increases by approximately 6 dB for every doubling of stimulus
intensity. Above a specific exposure intensity (the critical level), this rate increases,
particularly if exposure is to impulse noise. The TTS increases asymptotically with exposure
duration; the asymptote itself increases with stimulus intensity. Due to the characteristics of
the outer and middle ears’ transfer function, low frequencies are tolerated the best.

Studies on exposure to pure tones indicate that as the stimulus intensity increases, the
frequency at which the TTS is the greatest progressively shifts towards frequencies above that
of the stimulus. Subjects exposed to a pure tone of 2,000 Hz develop TTS which is maximal
at approximately 3,000 Hz (a shift of a semi-octave). The noise’s effect on the outer hair cells
is believed to be responsible for this phenomenon.

The worker who shows TTS recovers to baseline hearing values within hours after removal
from noise. However, repeated noise exposures result in less hearing recovery and resultant
permanent hearing loss.

Permanent threshold shift. Exposure to high-intensity sound stimuli over several years may
lead to permanent hearing loss. This is referred to as permanent threshold shift (PTS).
Anatomically, PTS is characterized by degeneration of the hair cells, starting with slight
histological modifications but eventually culminating in complete cell destruction. Hearing
loss is most likely to involve frequencies to which the ear is most sensitive, as it is at these
frequencies that the transmission of acoustic energy from the external environment to the
inner ear is optimal. This explains why hearing loss at 4,000 Hz is the first sign of
occupationally induced hearing loss (figure 3). Interaction has been observed between
stimulus intensity and duration, and international standards assume the degree of hearing loss
to be a function of the total acoustic energy received by the ear (dose of noise).
Figure 3. Audiogram showing bilateral noise-induced hearing loss.

The development of noise-induced hearing loss varies with individual susceptibility. Various potentially important variables have been examined to explain this susceptibility, such as age, gender, race, cardiovascular disease and smoking, but the data have been inconclusive.

An interesting question is whether the amount of TTS could be used to predict the risk of
PTS. As noted above, there is a progressive shift of the TTS to frequencies above that of the
stimulation frequency. On the other hand, most of the ciliary damage occurring at high
stimulus intensities involves cells that are sensitive to the stimulus frequency. Should
exposure persist, the difference between the frequency at which the PTS is maximal and the
stimulation frequency progressively decreases. Ciliary damage and cell loss consequently occur in the cells most sensitive to the stimulus frequencies. It thus appears that TTS and PTS involve different mechanisms, and that it is impossible to predict an individual’s PTS on the basis of the observed TTS.

Individuals with PTS are usually asymptomatic initially. As the hearing loss progresses, they
begin to have difficulty following conversations in noisy settings such as parties or
restaurants. The progression, which usually affects the ability to perceive high-pitched sounds
first, is usually painless and relatively slow.

Examination of individuals suffering from hearing loss

Clinical examination

In addition to the history of the date when the hearing loss was first detected (if any) and how
it has evolved, including any asymmetry of hearing, the medical questionnaire should elicit
information on the patient’s age, family history, use of ototoxic medications or exposure to
other ototoxic chemicals, the presence of tinnitus (i.e., buzzing, whistling or ringing sounds in
one or both ears), dizziness or any problems with balance, and any history of ear infections
with pain or discharge from the outer ear canal. Of critical importance is a detailed life-long
history of exposures to high sound levels (note that, to the layperson, not all sounds are
“noise”) on the job, in previous jobs and off-the-job. A history of episodes of TTS would
confirm prior toxic exposures to noise.
Physical examination should include evaluation of the function of the other cranial nerves,
tests of balance, and ophthalmoscopy to detect any evidence of increased cranial pressure.
Visual examination of the external auditory canal will detect any impacted cerumen and, after
it has been cautiously removed (no sharp object!), any evidence of scarring or perforation of
the tympanic membrane. Hearing loss can be determined very crudely by testing the patient’s
ability to repeat words and phrases spoken softly or whispered by the examiner when
positioned behind and out of the sight of the patient. The Weber test (placing a vibrating
tuning fork in the centre of the forehead to determine if this sound is “heard” in either or both
ears) and the Rinne test (placing a vibrating tuning fork on the mastoid process
until the patient can no longer hear the sound, then quickly placing the fork near the ear canal;
normally the sound can be heard longer through air than through bone) will allow
classification of the hearing loss as transmission- or neurosensory.

The audiogram is the standard test to detect and evaluate hearing loss (see below). Specialized
studies to complement the audiogram may be necessary in some patients. These include:
tympanometry, word discrimination tests, evaluation of the attenuation reflex, electrophysiological
studies (electrocochleogram, auditory evoked potentials) and radiological studies (routine
skull x rays complemented by CAT scan, MRI).

Audiometry

This crucial component of the medical evaluation uses a device known as an audiometer to
determine the auditory threshold of individuals to pure tones of 250-8,000 Hz and sound
levels between –10 dB (the hearing threshold of intact ears) and 110 dB (maximal damage).
To eliminate the effects of TTSs, patients should not have been exposed to noise during the
previous 16 hours. Air conduction is measured by earphones placed on the ears, while bone
conduction is measured by placing a vibrator in contact with the skull behind the ear. Each
ear’s hearing is measured separately and test results are reported on a graph known as an
audiogram (figure 3). The threshold of intelligibility, that is, the sound intensity at which
speech becomes intelligible, is determined by a complementary test method known as vocal
audiometry, based on the ability to understand words composed of two syllables of equal
intensity (for instance, shepherd, dinner, stunning).

Comparison of air and bone conduction allows classification of hearing losses as transmission
(involving the external auditory canal or middle ear) or neurosensory loss (involving the inner
ear or auditory nerve) (figures 3 and 4). The audiogram observed in cases of noise-induced
hearing loss is characterized by an onset of hearing loss at 4,000 Hz, visible as a dip in the
audiogram (figure 3). As exposure to excessive noise levels continues, neighbouring
frequencies are progressively affected and the dip broadens, encroaching, at approximately
3,000 Hz, on frequencies essential for the comprehension of conversation. Noise-induced
hearing loss is usually bilateral and shows a similar pattern in both ears, that is, the difference
between the two ears does not exceed 15 dB at 500 Hz, at 1,000 Hz and at 2,000 Hz, and
30 dB at 3,000, at 4,000 and at 6,000 Hz. Asymmetric damage may, however, be present in
cases of non-uniform exposure, for example, with marksmen, in whom hearing loss is higher
on the side opposite to the trigger finger (the left side, in a right-handed person). In hearing
loss unrelated to noise exposure, the audiogram does not exhibit the characteristic 4,000 Hz
dip (figure 4).
Figure 4. Examples of right-ear audiograms. The circles represent air-conduction hearing loss,
the “<” bone conduction.
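
The symmetry limits just quoted lend themselves to a simple check. The Python sketch below is only an illustration of those limits (15 dB at 500, 1,000 and 2,000 Hz; 30 dB at 3,000, 4,000 and 6,000 Hz); the function and variable names are our own, and the example thresholds are the ones used later in table 1.

# Maximum acceptable interaural difference (dB) by frequency (Hz), as quoted above
SYMMETRY_LIMITS_DB = {500: 15, 1000: 15, 2000: 15, 3000: 30, 4000: 30, 6000: 30}

def audiogram_is_symmetric(right_ear_db, left_ear_db):
    """True if the interaural differences stay within the quoted limits."""
    return all(
        abs(right_ear_db[freq] - left_ear_db[freq]) <= limit
        for freq, limit in SYMMETRY_LIMITS_DB.items()
    )

right_ear = {500: 25, 1000: 35, 2000: 35, 3000: 45, 4000: 50, 6000: 60}
left_ear = {500: 25, 1000: 35, 2000: 40, 3000: 50, 4000: 60, 6000: 70}
print(audiogram_is_symmetric(right_ear, left_ear))  # True: differences of 0-10 dB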

There are two types of audiometric examinations: screening and diagnostic. Screening
audiometry is used for the rapid examination of groups of individuals in the workplace, in
schools or elsewhere in the community to identify those who appear to have some hearing
loss. Often, electronic audiometers that permit self-testing are used and, as a rule, screening
audiograms are obtained in a quiet area but not necessarily in a sound-proof, vibration-free
chamber. The latter is considered to be a prerequisite for diagnostic audiometry which is
intended to measure hearing loss with reproducible precision and accuracy. The diagnostic
examination is properly performed by a trained audiologist (in some circumstances, formal
certification of the competence of the audiologist is required). The accuracy of both types of
audiometry depends on periodic testing and recalibration of the equipment being used.

In many jurisdictions, individuals with job-related, noise-induced hearing loss are eligible for
workers’ compensation benefits. Accordingly, many employers are including audiometry in
their preplacement medical examinations to detect any existing hearing loss that may be the
responsibility of a previous employer or represent a non-occupational exposure.

Hearing thresholds progressively increase with age, with higher frequencies being more
affected (figure 3). The characteristic 4,000 Hz dip observed in noise-induced hearing loss is
not seen with this type of hearing loss.

Calculation of hearing loss

In the United States the most widely accepted formula for calculating functional limitation
related to hearing loss is the one proposed in 1979 by the American Academy of
Otolaryngology (AAO) and adopted by the American Medical Association. It is based on the
average of values obtained at 500, at 1,000, at 2,000 and at 3,000 Hz (table 1), with the lower
limit for functional limitation set at 25 dB.
Table 1. Typical calculation of functional loss from an audiogram

Frequency (Hz)     500    1,000   2,000   3,000   4,000   6,000   8,000
Right ear (dB)      25       35      35      45      50      60      45
Left ear (dB)       25       35      40      50      60      70      50

Unilateral loss

Percentage of unilateral loss = (average at 500, 1,000, 2,000 and 3,000 Hz – 25 dB (lower limit)) × 1.5

Example:
Right ear: ((25 + 35 + 35 + 45)/4 – 25) × 1.5 = 15 (per cent)
Left ear: ((25 + 35 + 40 + 50)/4 – 25) × 1.5 = 18.8 (per cent)

Bilateral loss

Percentage of bilateral loss = ((percentage of unilateral loss of the best ear × 5) + (percentage of unilateral loss of the worst ear))/6

Example: ((15 × 5) + 18.8)/6 = 15.6 (per cent)

Source: Rees and Duckert 1994.
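
Transcribed directly into code, the calculation reads as follows; the Python sketch below (function names are ours) reproduces the figures in the example above.

def unilateral_loss_percent(thresholds_db):
    """AAO-1979 functional loss for one ear.
    thresholds_db: hearing thresholds (dB) at 500, 1,000, 2,000 and 3,000 Hz."""
    average = sum(thresholds_db) / len(thresholds_db)
    return max(0.0, (average - 25) * 1.5)  # 25 dB lower limit; clamped at zero

def bilateral_loss_percent(better_ear_percent, worse_ear_percent):
    """Weighted combination: the better ear counts five times the worse ear."""
    return (5 * better_ear_percent + worse_ear_percent) / 6

right = unilateral_loss_percent([25, 35, 35, 45])  # 15.0 per cent
left = unilateral_loss_percent([25, 35, 40, 50])   # 18.75 per cent (18.8 in the table)
print(round(bilateral_loss_percent(right, left), 1))  # 15.6 per cent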

Presbycusis

Presbycusis or age-related hearing loss generally begins at about age 40 and progresses
gradually with increasing age. It is usually bilateral. The characteristic 4,000 Hz dip observed
in noise-induced hearing loss is not seen with presbycusis. However, it is possible to have the
effects of ageing superimposed on noise-related hearing loss.

Treatment

The first essential of treatment is avoidance of any further exposure to potentially toxic levels
of noise (see “Prevention” below). It is generally believed that no more subsequent hearing
loss occurs after the removal from noise exposure than would be expected from the normal
ageing process.
While conduction losses, for example, those related to acute traumatic noise-induced damage,
are amenable to medical treatment or surgery, chronic noise-induced hearing loss cannot be
corrected by treatment. The use of a hearing aid is the sole “remedy” possible, and is only
indicated when hearing loss affects the frequencies critical for speech comprehension (500 to
3,000 Hz). Other types of support, for example lip-reading and sound amplifiers (on
telephones, for example), may, however, be possible.

Prevention

Because noise-induced hearing loss is permanent, it is essential to apply any measure likely to
reduce exposure. This includes reduction at the source (quieter machines and equipment or
encasing them in sound-proof enclosures) or the use of individual protective devices such as
ear plugs and/or ear muffs. If reliance is placed on the latter, it is imperative to verify that
their manufacturers’ claims for effectiveness are valid and that exposed workers are using
them properly at all times.

The designation of 85 dB(A) as the highest permissible occupational exposure limit was intended to protect the greatest number of people. But, since there is significant interpersonal variation,
strenuous efforts to keep exposures well below that level are indicated. Periodic audiometry
should be instituted as part of the medical surveillance programme to detect as early as
possible any effects that may indicate noise toxicity.

Chemically-Induced Hearing Disorders

Written by ILO Content Manager

Hearing impairment due to the cochlear toxicity of several drugs is well documented (Ryback
1993). Until the last decade, however, little attention was paid to the audiological effects of industrial chemicals. Recent research on chemically-induced hearing disorders has
focused on solvents, heavy metals and chemicals inducing anoxia.

Solvents. In studies with rodents, a permanent decrease in auditory sensitivity to high-frequency tones has been demonstrated following weeks of high-level exposure to toluene.
Histopathological and auditory brainstem response studies have indicated a major effect on
the cochlea with damage to the outer hair cells. Similar effects have been found in exposure to
styrene, xylenes or trichloroethylene. Carbon disulphide and n-hexane may also affect
auditory functions while their major effect seems to be on more central pathways (Johnson
and Nylén 1995).

Several human cases with damage to the auditory system together with severe neurologic
abnormalities have been reported following solvent sniffing. In case series of persons with
occupational exposure to solvent mixtures, to n-hexane or to carbon disulphide, both cochlear
and central effects on auditory functions have been reported. Exposure to noise was prevalent
in these groups, but the effect on hearing has been considered greater than expected from
noise.

Only a few controlled studies have so far addressed the problem of hearing impairment in humans exposed to solvents without significant noise exposure. In a Danish study, a statistically significant elevated risk of self-reported hearing impairment of 1.4 (95% CI: 1.1-1.9) was found after exposure to solvents for five years or more. In a group exposed to both
solvents and noise, no additional effect from solvent exposure was found. A good agreement
between reporting hearing problems and audiometric criteria for hearing impairment was
found in a subsample of the study population (Jacobsen et al. 1993).

In a Dutch study of styrene-exposed workers a dose-dependent difference in hearing thresholds was found by audiometry (Muijser et al. 1988).

In another study from Brazil the audiologic effect from exposure to noise, toluene combined
with noise, and mixed solvents was examined in workers in printing and paint manufacturing
industries. Compared to an unexposed control group, significantly elevated risks for
audiometric high frequency hearing loss were found for all three exposure groups. For noise
and mixed solvent exposures the relative risks were 4 and 5 respectively. In the group with
combined toluene and noise exposure a relative risk of 11 was found, suggesting interaction
between the two exposures (Morata et al. 1993).

Metals. The effect of lead on hearing has been studied in surveys of children and teenagers
from the United States. A significant dose-response association between blood lead and
hearing thresholds at frequencies from 0.5 to 4 kHz was found after controlling for several
potential confounders. The effect of lead was present across the entire range of exposure and
could be detected at blood lead levels below 10 μg/100 ml. In children without clinical signs of lead toxicity, a linear relationship between blood lead and the latencies of waves III and V in brainstem auditory evoked potentials (BAEP) has been found, indicating a site of action central to the
cochlear nucleus (Otto et al. 1985).

Hearing loss is described as a common part of the clinical picture in acute and chronic
methyl-mercury poisoning. Both cochlear and postcochlear lesions have been involved
(Oyanagi et al. 1989). Inorganic mercury may also affect the auditory system, probably
through damage to cochlear structures.

Exposure to inorganic arsenic has been implicated in hearing disorders in children. A high
frequency of severe hearing loss (>30 dB) has been observed in children fed with powdered
milk contaminated with inorganic arsenic V. In a study from Czechoslovakia, environmental
exposure to arsenic from a coal-burning power plant was associated with audiometric hearing
loss in ten-year-old children. In animal experiments, inorganic arsenic compounds have
produced extensive cochlear damage (WHO 1981).

In acute trimethyltin poisoning, hearing loss and tinnitus have been early symptoms.
Audiometry has shown pancochlear hearing loss between 15 and 30 dB at presentation. It is
not clear whether the abnormalities have been reversible (Besser et al. 1987). In animal
experiments, trimethyltin and triethyltin compounds have produced partly reversible cochlear
damage (Clerisi et al. 1991).

Asphyxiants. In reports on acute human poisoning by carbon monoxide or hydrogen sulphide, hearing disorders have often been noted along with central nervous system disease (Ryback
1992).
In experiments with rodents, exposure to carbon monoxide had a synergistic effect with noise
on auditory thresholds and cochlear structures. No effect was observed after exposure to
carbon monoxide alone (Fechter et al. 1988).

Summary

Experimental studies have documented that several solvents can produce hearing disorders
under certain exposure circumstances. Studies in humans have indicated that the effect may
be present following exposures that are common in the occupational environment. Synergistic
effects between noise and chemicals have been observed in some human and experimental
animal studies. Some heavy metals may affect hearing, most of them only at exposure levels
that produce overt systemic toxicity. For lead, minor effects on hearing thresholds have been
observed at exposures far below occupational exposure levels. A specific ototoxic effect from
asphyxiants has not been documented at present although carbon monoxide may enhance the
audiological effect of noise.

Physically-Induced Hearing Disorders

Written by ILO Content Manager

By virtue of its position within the skull, the auditory system is generally well protected
against injuries from external physical forces. There are, however, a number of physical
workplace hazards that may affect it. They include:

Barotrauma. Sudden variation in barometric pressure (due to rapid underwater descent or ascent, or sudden aircraft descent) associated with malfunction of the Eustachian tube (failure
to equalize pressure) may lead to rupture of the tympanic membrane with pain and
haemorrhage into the middle and external ears. In less severe cases stretching of the
membrane will cause mild to severe pain. There will be a temporary impairment of hearing
(conductive loss), but generally the trauma has a benign course with complete functional
recovery.

Vibration. Simultaneous exposure to vibration and noise (continuous or impact) does not
increase the risk or severity of sensorineural hearing loss; however, the rate of onset appears
to be increased in workers with hand-arm vibration syndrome (HAVS). The cochlear
circulation is presumed to be affected by reflex sympathetic spasm, when such workers have
bouts of vasospasm (Raynaud’s phenomenon) in their fingers or toes.

Infrasound and ultrasound. The acoustic energy from both of these sources is normally
inaudible to humans. The common sources of ultrasound, for example, jet engines, high-speed
dental drills, and ultrasonic cleaners and mixers all emit audible sound so the effects of
ultrasound on exposed subjects are not easily discernible. It is presumed to be harmless below
120 dB and therefore unlikely to cause noise-induced hearing loss (NIHL). Likewise, low-frequency noise is relatively
safe, but with high intensity (119-144 dB), hearing loss may occur.

“Welder’s ear”. Hot sparks may penetrate the external auditory canal to the level of the
tympanic membrane, burning it. This causes acute ear pain and sometimes facial nerve
paralysis. With minor burns, the condition requires no treatment, while in more severe cases,
surgical repair of the membrane may be necessary. The risk may be avoided by correct
positioning of the welder’s helmet or by wearing ear plugs.

Equilibrium

Written by ILO Content Manager

Balance System Function

Input

Perception and control of orientation and motion of the body in space is achieved by a system
that involves simultaneous input from three sources: vision, the vestibular organ in the inner
ear and sensors in the muscles, joints and skin that provide somatosensory or “proprioceptive”
information about movement of the body and physical contact with the environment (figure
1). The combined input is integrated in the central nervous system which generates
appropriate actions to restore and maintain balance, coordination and well-being. Failure to compensate in any part of the system may produce unease, dizziness and unsteadiness, and can lead to falls.

Figure 1. An outline of the principal elements of the balance system

The vestibular system directly registers the orientation and movement of the head. The
vestibular labyrinth is a tiny bony structure located in the inner ear, and comprises the
semicircular canals filled with fluid (endolymph) and the otoliths (figure 2). The three
semicircular canals are positioned at right angles so that acceleration can be detected in each
of the three possible planes of angular motion. During head turns, the relative movement of
the endolymph within the canals (caused by inertia) results in deflection of the cilia projecting
from the sensory cells, inducing a change in the neural signal from these cells (figure 2). The
otoliths contain heavy crystals (otoconia) which respond to changes in the position of the
head relative to the force of gravity and to linear acceleration or deceleration, again bending
the cilia and so altering the signal from the sensory cells to which they are attached.

Figure 2. Schematic diagram of the vestibular labyrinth.

Figure 3. Schematic representation of the biomechanical effects of a ninety-degree (forward) inclination of the head.

Integration

The central interconnections within the balance system are extremely complex; information
from the vestibular organs in both ears is combined with information derived from vision and
the somatosensory system at various levels within the brainstem, cerebellum and cortex
(Luxon 1984).

Output

This integrated information provides the basis not only for the conscious perception of
orientation and self-motion, but also the preconscious control of eye movements and posture,
by means of what are known as the vestibuloocular and vestibulospinal reflexes. The purpose
of the vestibuloocular reflex is to maintain a stable point of visual fixation during head
movement by automatically compensating for the head movement with an equivalent eye
movement in the opposite direction (Howard 1982). The vestibulospinal reflexes contribute to
postural stability and balance (Pompeiano and Allum 1988).

Balance System Dysfunction

In normal circumstances, the input from the vestibular, visual and somatosensory systems is
congruent, but if an apparent mismatch occurs between the different sensory inputs to the
balance system, the result is a subjective sensation of dizziness, disorientation, or illusory
sense of movement. If the dizziness is prolonged or severe it will be accompanied by
secondary symptoms such as nausea, cold sweating, pallor, fatigue, and even vomiting.
Disruption of reflex control of eye movements and posture may result in a blurred or
flickering visual image, a tendency to veer to one side when walking, or staggering and
falling. The medical term for the disorientation caused by balance system dysfunction is
“vertigo,” which can be caused by a disorder of any of the sensory systems contributing to
balance or by faulty central integration. Only 1 or 2% of the population consult their doctor
each year on account of vertigo, but the incidence of dizziness and imbalance rises steeply
with age. “Motion sickness” is a form of disorientation induced by artificial environmental
conditions with which our balance system has not been equipped by evolution to cope, such
as passive transport by car or boat (Crampton 1990).

Vestibular causes of vertigo

The most common causes of vestibular dysfunction are infection (vestibular labyrinthitis or
neuronitis), and benign paroxysmal positional vertigo (BPPV) which is triggered principally
by lying on one side. Recurrent attacks of severe vertigo accompanied by loss of hearing and
noises (tinnitus) in one ear are typical of a syndrome known as Menière’s disease. Vestibular
damage can also result from disorders of the middle ear (including bacterial disease, trauma
and cholesteatoma), ototoxic drugs (which should be used only in medical emergencies), and
head injury.

Non-vestibular peripheral causes of vertigo

Disorders of the neck, which may alter the somatosensory information relating to head
movement or interfere with the blood-supply to the vestibular system, are believed by many
clinicians to be a cause of vertigo. Common aetiologies include whiplash injury and arthritis.
Sometimes unsteadiness is related to a loss of feeling in the feet and legs, which may be
caused by diabetes, alcohol abuse, vitamin deficiency, damage to the spinal cord, or a number
of other disorders. Occasionally the origin of feelings of giddiness or illusory movement of
the environment can be traced to some distortion of the visual input. An abnormal visual input
may be caused by weakness of the eye muscles, or may be experienced when adjusting to
powerful lenses or to bifocal glasses.

Central causes of vertigo

Although most cases of vertigo are attributable to peripheral (mainly vestibular) pathology,
symptoms of disorientation can be caused by damage to the brainstem, cerebellum or cortex.
Vertigo due to central dysfunction is almost always accompanied by some other symptom of
central neurological disorder, such as sensations of pain, tingling or numbness in the face or
limbs, difficulty speaking or swallowing, headache, visual disturbances, and loss of motor
control or loss of consciousness. The more common central causes of vertigo include
disorders of the blood supply to the brain (ranging from migraine to strokes), epilepsy,
multiple sclerosis, alcoholism, and occasionally tumours. Temporary dizziness and imbalance
is a potential side-effect of a vast array of drugs, including widely-used analgesics,
contraceptives, and drugs used in the control of cardiovascular disease, diabetes and
Parkinson’s disease, and in particular the centrally-acting drugs such as stimulants, sedatives,
anti-convulsants, anti-depressants and tranquillizers (Ballantyne and Ajodhia 1984).

Diagnosis and treatment

All cases of vertigo require medical attention in order to ensure that the (relatively
uncommon) dangerous conditions which can cause vertigo are detected and appropriate
treatment is given. Medication can be given to relieve symptoms of acute vertigo in the short
term, and in rare cases surgery may be required. However, if the vertigo is caused by a
vestibular disorder the symptoms will generally subside over time as the central integrators
adapt to the altered pattern of vestibular input—in the same way that sailors continuously
exposed to the motion of waves gradually acquire their “sea legs”. For this to occur, it is
essential to continue to make vigorous movements which stimulate the balance system, even
though these will at first cause dizziness and discomfort. Since the symptoms of vertigo are
frightening and embarrassing, sufferers may need physiotherapy and psychological support to
combat the natural tendency to restrict their activities (Beyts 1987; Yardley 1994).

Vertigo in the Workplace

Risk factors

Dizziness and disorientation, which may become chronic, are common symptoms in workers
exposed to organic solvents; furthermore, long-term exposure can result in objective signs of
balance system dysfunction (e.g., abnormal vestibular-ocular reflex control) even in people
who experience no subjective dizziness (Gyntelberg et al. 1986; Möller et al. 1990). Changes
in pressure encountered when flying or diving can cause damage to the vestibular organ
which results in sudden vertigo and hearing loss requiring immediate treatment (Head 1984).
There is some evidence that noise-induced hearing loss can be accompanied by damage to the
vestibular organs (van Dijk 1986). People who work for long periods at computer screens
sometimes complain of dizziness; the cause of this remains unclear, although it may be related
to the combination of a stiff neck and moving visual input.
Occupational difficulties

Unexpected attacks of vertigo, such as occur in Menière’s disease, can cause problems for
people whose work involves heights, driving, handling dangerous machinery, or responsibility
for the safety of others. An increased susceptibility to motion sickness is a common effect of
balance system dysfunction and may interfere with travel.

Conclusion

Equilibrium is maintained by a complex multisensory system, and so disorientation and imbalance can result from a wide variety of aetiologies, in particular any condition which
affects the vestibular system or the central integration of perceptual information for
orientation. In the absence of central neurological damage the plasticity of the balance system
will normally enable the individual to adapt to peripheral causes of disorientation, whether
these are disorders of the inner ear which alter vestibular function, or environments which
provoke motion sickness. However, attacks of dizziness are often unpredictable, alarming and
disabling, and rehabilitation may be necessary to restore confidence and assist the balance
function.

Vision and Work

Written by ILO Content Manager

Anatomy of the Eye

The eye is a sphere (Graham et al. 1965; Adler 1992), approximately 20 mm in diameter, that
is set in the bony orbit, with the six extrinsic (ocular) muscles that move the eye attached to
the sclera, its external wall (figure 1). In front, the sclera is replaced by the cornea, which is
transparent. Behind the cornea in the anterior chamber is the iris, which regulates the diameter
of the pupil, the space through which the optic axis passes. The back of the anterior chamber
is formed by the biconvex crystalline lens, whose curvature is determined by the ciliary
muscles attached at the front to the sclera and behind to the choroidal membrane, which lines
the posterior chamber. The posterior chamber is filled with the vitreous humour—a clear,
gelatinous liquid. The choroid, the inner surface of the posterior chamber, is black to prevent
interference with visual acuity by internal light reflections.
Figure 1. Schematic representation of the eye.

The eyelids help to maintain a film of tears, produced by the lacrymal glands, which protects the anterior surface of the eye.
Blinking facilitates the spread of tears and their emptying into the lacrymal canal, which
empties in the nasal cavity. The frequency of blinking, which is used as a test in ergonomics,
varies greatly depending on the activity being undertaken (for example, it is slower during
reading) and also on the lighting conditions (the rate of blinking is lowered by an increase of
illumination).

The anterior chamber contains two muscles: the sphincter of the iris, which contracts the
pupil, and the dilator, which widens it. When a bright light is directed toward a normal eye,
the pupil contracts (pupillary reflex). It also contracts when viewing a nearby object.

The retina has several inner layers of nerve cells and an outer layer containing two types of
photoreceptor cells, the rods and cones. Thus, light passes through the nerve cells to the rods
and cones where, in a manner not yet understood, it generates impulses in the nerve cells
which pass along the optic nerve to the brain. The cones, numbering four to five million, are
responsible for the perception of bright images and colour. They are concentrated in the inner
portion of the retina, most densely at the fovea, a small depression at the centre of the retina
where there are no rods and where vision is most acute. With the help of spectrophotometry,
three types of cones have been identified, whose absorption peaks lie in the yellow, green and
blue zones of the spectrum, accounting for the sense of colour. The 80 to 100 million rods
become more and more
numerous toward the periphery of the retina and are sensitive to dim light (night vision). They
also play a major role in black-white vision and in the detection of motion.

The nerve fibres, along with the blood vessels which nourish the retina, traverse the choroid,
the middle of the three layers forming the wall of the posterior chamber, and leave the eye as
the optic nerve at a point somewhat off-centre, which, because there are no photoreceptors
there, is known as the “blind spot.”
The retinal vessels, the only arteries and veins that can be viewed directly, can be visualized
by directing a light through the pupil and using an ophthalmoscope to focus on their image
(the images can also be photographed). Such ophthalmoscopic examinations, part of the routine
medical examination, are important in evaluating the vascular components of such diseases as
arteriosclerosis, hypertension and diabetes, which may cause retinal haemorrhages and/or
exudates that may cause defects in the field of vision.

Properties of the Eye that Are Important for Work

Mechanism of accommodation

In the emmetropic (normal) eye, as light rays pass through the cornea, the pupil and the lens,
they are focused on the retina, producing an inverted image which is reversed by the visual
centres in the brain.

When a distant object is viewed, the lens is flattened. When viewing nearby objects, the lens
accommodates (i.e., increases its power): contraction of the ciliary muscles allows the lens to
assume a more rounded, convex shape. At the same time, the iris constricts the pupil, which improves the quality
of the image by reducing the spherical and chromatic aberrations of the system and increasing
the depth of field.

In binocular vision, accommodation is necessarily accompanied by proportional convergence
of both eyes.

The visual field and the field of fixation

The visual field (the space covered by the eyes at rest) is limited by anatomical obstacles in
the horizontal plane (more reduced on the side towards the nose) and in the vertical plane
(limited by the upper edge of the orbit). In binocular vision, the horizontal field is about 180
degrees and the vertical field 120 to 130 degrees. In daytime vision, most visual functions are
weakened at the periphery of the visual field; by contrast, perception of movement is
improved. In night vision there is a considerable loss of acuity at the centre of the visual field,
where, as noted above, the rods are less numerous.

The field of fixation extends beyond the visual field thanks to the mobility of the eyes, head
and body; in work activities it is the field of fixation that matters. The causes of reduction of
the visual field, whether anatomical or physiological, are very numerous: narrowing of the
pupil; opacity of the lens; pathological conditions of the retina, visual pathways or visual
centres; the brightness of the target to be perceived; the frames of spectacles for correction or
protection; the movement and speed of the target to be perceived; and others.

Visual acuity

“Visual acuity (VA) is the capacity to discriminate the fine details of objects in the field of
view. It is specified in terms of the minimum dimension of some critical aspects of a test
object that a subject can correctly identify” (Riggs, in Graham et al. 1965). A good visual
acuity is the ability to distinguish fine details. Visual acuity defines the limit of spatial
discrimination.
The retinal size of an object depends not only on its physical size but also on its distance from
the eye; it is therefore expressed in terms of the visual angle (usually in minutes of arc).
Visual acuity is the reciprocal of this angle.

Riggs (1965) describes several types of “acuity task”. In clinical and occupational practice,
the recognition task, in which the subject is required to name the test object and locate some
details of it, is the most commonly applied. For convenience, in ophthalmology, visual acuity
is measured relative to a value called “normal” using charts presenting a series of objects of
different sizes; they have to be viewed at a standard distance.

In clinical practice Snellen charts are the most widely used tests for distant visual acuity; a
series of test objects are used in which the critical details of the characters are designed to
subtend an angle of 1 minute of arc at a standard distance which varies from country to country (in
the United States, 20 feet between the chart and the tested individual; in most European
countries, 6 metres). The normal Snellen score is thus 20/20. Larger test objects which form
an angle of 1 minute of arc at greater distances are also provided.

The visual acuity of an individual is given by the relation VA = D′/D, where D′ is the
standard viewing distance and D the distance at which the smallest test object correctly
identified by the individual subtends an angle of 1 minute of arc. For example, a person’s VA
is 20/30 if, at a viewing distance of 20 ft, he or she can just identify an object which subtends
an angle of 1 minute at 30 feet.
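
By way of illustration only, the following minimal Python sketch (the function names are hypothetical and not part of any clinical standard) evaluates the relation VA = D′/D and also computes the physical size of a detail that subtends 1 minute of arc at a given test distance, as described for the Snellen chart above.

    import math

    def decimal_acuity(standard_distance, identification_distance):
        """Decimal visual acuity VA = D'/D; e.g., 20/30 gives about 0.67."""
        return standard_distance / identification_distance

    def detail_size_m(test_distance_m, angle_arcmin=1.0):
        """Physical size of a detail subtending a given angle at a given distance."""
        angle_rad = math.radians(angle_arcmin / 60.0)
        return 2.0 * test_distance_m * math.tan(angle_rad / 2.0)

    # A subject who, at 20 ft, just identifies the optotype designed for 30 ft:
    print(round(decimal_acuity(20, 30), 2))      # 0.67, i.e., Snellen 20/30
    # Size of a 1-arcmin critical detail at the European test distance of 6 m:
    print(round(detail_size_m(6.0) * 1000, 2))   # about 1.75 mm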

In optometric practice, the objects are often letters of the alphabet (or familiar shapes, for
illiterates or children). However, when the test is repeated, charts should present unlearnable
characters for which the recognition of differences involves no educational or cultural
features. This is one reason why it is nowadays internationally recommended to use Landolt
rings, at least in scientific studies. Landolt rings are circles with a gap, the directional position
of which has to be identified by the subject.

Except in ageing people or in those individuals with accommodative defects (presbyopia), the
far and the near visual acuity parallel each other. Most jobs require both a good far (without
accommodation) and a good near vision. Snellen charts of different kinds are also available
for near vision (figures 2 and 3); such a chart is typically held at 40 cm (16 inches)
from the eye; in Europe, similar charts exist for a reading distance of 30 cm (the
appropriate distance for reading a newspaper).

Figure 2. Example of a Snellen chart: Landolt rings (acuity in decimal values (reading
distance not specified)).
Figure 3. Example of a Snellen chart: Sloan letters for measuring near vision (40 cm)(acuity
in decimal values and in distance equivalents).
With the widespread use of visual display units (VDUs), however, there is increased interest in
occupational health in testing operators at a longer distance (60 to 70 cm, according to Krueger
1992) in order to correct VDU operators properly.

Vision testers and visual screening

For occupational practice, several types of visual testers are available on the market which
have similar features; they are named Orthorater, Visiotest, Ergovision, Titmus Optimal C
Tester, C45 Glare Tester, Mesoptometer, Nyctometer and so on.

They are small; they are independent of the lighting of the testing room, having their own
internal lighting; they provide several tests, such as far and near binocular and monocular
visual acuity (most of the time with unlearnable characters), but also depth perception, rough
colour discrimination, muscular balance and so on. Near visual acuity can be measured,
sometimes at both short and intermediate distances of the test object. The most recent of these
devices make extensive use of electronics to provide automatically recorded scores for
different tests. Moreover, these instruments can be handled by non-medical personnel after
some training.

Vision testers are designed for the purpose of pre-recruitment screening of workers, or
sometimes later testing, taking into account the visual requirements of their workplace. Table
1 indicates the level of visual acuity needed to fulfil unskilled to highly skilled activities,
when using one particular testing device (Fox, in Verriest and Hermans 1976).

Table 1. Visual requirements for different activities when using Titmus Optimal C Tester,
with correction

Category 1: Office work

Far visual acuity 20/30 in each eye (20/25 for binocular vision)

Near VA 20/25 in each eye (20/20 for binocular vision)

Category 2: Inspection and other activities in fine mechanics

Far VA 20/35 in each eye (20/30 for binocular vision)

Near VA 20/25 in each eye (20/20 for binocular vision)

Category 3: Operators of mobile machinery

Far VA 20/25 in each eye (20/20 for binocular vision)

Near VA 20/35 in each eye (20/30 for binocular vision)


Category 4 : Machine tools operations

Far and near VA 20/30 in each eye (20/25 for binocular vision)

Category 5 : Unskilled workers

Far VA 20/30 in each eye (20/25 for binocular vision)

Near VA 20/35 in each eye (20/30 for binocular vision)

Category 6 : Foremen

Far VA 20/30 in each eye (20/25 for binocular vision)

Near VA 20/25 in each eye (20/20 for binocular vision)

Source: According to Fox in Verriest and Hermans 1975.

It is recommended by manufacturers that employees be measured while wearing their
corrective glasses. Fox (1965), however, stresses that such a procedure may lead to wrong
results—for example, workers may be tested with glasses whose prescription is out of date at
the time of measurement, or with lenses that have been worn down by exposure to dust or other
noxious agents. It is also very often the case that people come to the testing room with the
wrong glasses. Fox (1976) suggests therefore that, if “the corrected vision is not improved to
20/20 level for distance and near, referral should be made to an ophthalmologist for a proper
evaluation and refraction for the current need of the employee on his job”. Other deficiencies
of vision testers are referred to later in this article.

Factors influencing visual acuity

VA meets its first limitation in the structure of the retina. In daytime vision, it may exceed
10/10ths at the fovea and may rapidly decline as one moves a few degrees away from the
centre of the retina. In night vision, acuity is very poor or nil at the centre but may reach one
tenth at the periphery, because of the distribution of cones and rods (figure 4).
Figure 4. Density of cones and rods in the retina as compared with the relative visual acuity in
the corresponding visual field.

The diameter of the pupil acts on visual performance in a complex manner. When dilated, the
pupil allows more light to enter into the eye and stimulate the retina; the blur due to the
diffraction of the light is minimized. A narrower pupil, however, reduces the negative effects
of the aberrations of the lens mentioned above. In general, a pupil diameter of 3 to 6 mm
favours clear vision.

Thanks to the process of adaptation it is possible for the human being to see as well by
moonlight as by full sunshine, even though there is a difference in illumination of 1 to
10,000,000. Visual sensitivity is so wide that luminous intensity is plotted on a logarithmic
scale.

On entering a dark room we are at first completely blind; then the objects around us become
perceptible. As the light level is increased, we pass from rod-dominated vision to cone-
dominated vision. The accompanying change in sensitivity is known as the Purkinje shift. The
dark-adapted retina is mainly sensitive to low luminosity, but is characterized by the absence
of colour vision and poor spatial resolution (low VA); the light-adapted retina is not very
sensitive to low luminosity (objects have to be well illuminated in order to be perceived), but
is characterized by a high degree of spatial and temporal resolution and by colour vision.
After the desensitization induced by intense light stimulation, the eye recovers its sensitivity
according to a typical progression: at first a rapid change involving cones and daylight or
photopic adaptation, followed by a slower phase involving rods and night or scotopic
adaptation; the intermediate zone involves dim light or mesopic adaptation.

In the work environment, night adaptation is hardly relevant except for activities in a dark
room and for night driving (although the reflection on the road from headlights always brings
some light). Simple daylight adaptation is the most common in industrial or office activities,
provided either by natural or by artificial lighting. However, nowadays with emphasis on
VDU work, many workers like to operate in dim light.
In occupational practice, the behaviour of groups of people is particularly important (in
comparison with individual evaluation) when selecting the most appropriate design of
workplaces. The results of a study of 780 office workers in Geneva (Meyer et al. 1990) show
the shift in percentage distribution of acuity levels when lighting conditions are changed. It
may be seen that, once adapted to daylight, most of the tested workers (with eye correction)
reach a quite high visual acuity; as soon as the surrounding illumination level is reduced, the
mean VA decreases, but also the results are more spread, with some people having very poor
performance; this tendency is aggravated when dim light is accompanied by some disturbing
glare source (figure 5). In other words, it is very hard to predict the behaviour of a subject in
dim light from his or her score in optimal daylight conditions.

Figure 5. Percentage distribution of tested office workers’ visual acuity.


Glare. When the eyes are directed from a dark area to a lighted area and back again, or when
the subject looks for a moment at a lamp or window (luminance varying from 1,000 to
12,000 cd/m²), changes in adaptation concern a limited area of the visual field (local
adaptation). Recovery time after disabling glare may last several seconds, depending on
illumination level and contrast (Meyer et al. 1986) (figure 6).

Figure 6. Response time before and after exposure to glare for perceiving the gap of a Landolt
ring: Adaptation to dim light.

Afterimages. Local disadaptation is usually accompanied by the persisting image of a bright
spot, coloured or not, which produces a veil or masking effect (this is the afterimage).
Afterimages have been studied very extensively to better understand certain visual
phenomena (Brown in Graham et al. 1965). After visual stimulation has ceased, the effect
remains for some time; this persistence explains, for example, why perception of continuous
light may be present when facing a flickering light (see below). If the frequency of flicker is
high enough, or when watching moving car lights at night, we see a continuous line of light.
These afterimages are produced in the dark after viewing an illuminated spot; they are also
produced by coloured areas, leaving coloured images. This is the reason why VDU operators may be exposed to sharp
afterimages after looking for a prolonged time at the screen and then moving their eyes
towards another area in the room.

Afterimages are very complicated. For example, one experiment on afterimages found that a
blue spot appears white during the first seconds of observation, then pink after 30 seconds,
and then bright red after a minute or two. Another experiment showed that an orange-red field
appeared momentarily pink, then within 10 to 15 seconds passed through orange and yellow
to a bright green appearance which remained throughout the whole observation. When the
point of fixation moves, usually the afterimage moves too (Brown in Graham et al. 1965).
Such effects could be very disturbing to someone working with a VDU.

Diffused light emitted by glare sources also has the effect of reducing the object/background
contrast (veiling effect) and thus reducing visual acuity (disability glare).
Ergophthalmologists also describe discomfort glare, which does not reduce visual acuity but
causes uncomfortable or even painful sensation (IESNA 1993).

The level of illumination at the workplace must be adapted to the level required by the task. If
all that is required is to perceive shapes in an environment of stable luminosity, weak
illumination may be adequate; but as soon as it is a question of seeing fine details that require
increased acuity, or if the work involves colour discrimination, retinal illumination must be
markedly increased.

Table 2 gives recommended illuminance values for the lighting design of a few workstations
in different industries (IESNA 1993).

Table 2. Recommended illuminance values for the lighting design of a few workstations

Cleaning and pressing industry

Dry and wet cleaning and steaming 500-1,000 lux or 50-100 footcandles

Inspection and spotting 2,000-5,000 lux or 200-500 footcandles

Repair and alteration 1,000-2,000 lux or 100-200 footcandles

Dairy products, fluid milk industry

Bottle storage 200-500 lux or 20-50 footcandles

Bottle washers 200-500 lux or 20-50 footcandles

Filling, inspection 500-1,000 lux or 50-100 footcandles

Laboratories 500-1,000 lux or 50-100 footcandles

Electrical equipment, manufacturing

Impregnating 200-500 lux or 20-50 footcandles


Insulating coil winding 500-1,000 lux or 50-100 footcandles

Electricity-generating stations

Air-conditioning equipment, air preheater 50-100 lux or 5-10 footcandles

Auxiliaries, pumps, tanks, compressors 100-200 lux or 10-20 footcandles

Clothing industry

Examining (perching) 10,000-20,000 lux or 1,000-2,000 footcandles

Cutting 2,000-5,000 lux or 200-500 footcandles

Pressing 1,000-2,000 lux or 100-200 footcandles

Sewing 2,000-5,000 lux or 200-500 footcandles

Piling up and marking 500-1,000 lux or 50-100 footcandles

Sponging, decating, winding 200-500 lux or 20-50 footcandles

Banks

General 100-200 lux or 10-20 footcandles

Writing area 200-500 lux or 20-50 footcandles

Tellers’ stations 500-1,000 lux or 50-100 footcandles

Dairy farms

Haymow area 20-50 lux or 2-5 footcandles

Washing area 500-1,000 lux or 50-100 footcandles

Feeding area 100-200 lux or 10-20 footcandles

Foundries

Core-making: fine 1,000-2,000 lux or 100-200 footcandles

Core-making: medium 500-1,000 lux or 50-100 footcandles

Moulding: medium 1,000-2,000 lux or 100-200 footcandles

Moulding: large 500-1,000 lux or 50-100 footcandles

Inspection: fine 1,000-2,000 lux or 100-200 footcandles

Inspection: medium 500-1,000 lux or 50-100 footcandles

Source: IESNA 1993.
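
The table above gives each value both in lux and in footcandles, rounding the conversion to roughly 10 lux per footcandle; the exact factor is about 10.76 lux per footcandle (1 footcandle being 1 lumen per square foot). The short sketch below (function names are illustrative assumptions, not taken from the standard) performs the conversion.

    LUX_PER_FOOTCANDLE = 10.764  # 1 footcandle = 1 lumen/ft^2, about 10.76 lux

    def lux_to_footcandles(lux):
        return lux / LUX_PER_FOOTCANDLE

    def footcandles_to_lux(footcandles):
        return footcandles * LUX_PER_FOOTCANDLE

    # 1,000 lux corresponds to roughly 93 footcandles; the table rounds this to 100:
    print(round(lux_to_footcandles(1000), 1))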


Brightness contrast and spatial distribution of luminances at the workplace. From the point of
view of ergonomics, the ratio between luminances of the test object, its immediate
background and the surrounding area has been widely studied, and recommendations on this
subject are available for different requirements of the task (see Verriest and Hermans 1975;
Grandjean 1987).

The object-background contrast is commonly defined by the formula (Lf – Lo)/Lf, where Lo is
the luminance of the object and Lf the luminance of the background. It thus varies from 0 to 1.

As shown by figure 7, visual acuity increases with the level of illumination (as previously
said) and with the increase of object-background contrast (Adrian 1993). This effect is
particularly marked in young people. A large light background and a dark object thus provides
the best efficiency. However, in real life, contrast will never reach unity. For example, when a
black letter is printed on a white sheet of paper, the object-background contrast reaches a
value of only around 90%.

Figure 7. Relationship between visual acuity of a dark object perceived on a background
receiving increasing illumination, for four contrast values.
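
As a purely illustrative aid (the luminance values below are hypothetical, chosen only to reproduce the roughly 90% figure quoted above for black print on white paper), the contrast formula can be evaluated directly:

    def object_background_contrast(l_object, l_background):
        """Contrast C = (Lf - Lo) / Lf for an object darker than its background."""
        return (l_background - l_object) / l_background

    # Illustrative luminances in cd/m2 for black ink on a well-lit white page:
    print(object_background_contrast(l_object=8.0, l_background=80.0))  # 0.9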

In the most favourable situation—that is, in positive presentation (dark letters on a light
background)—acuity and contrast are linked, so that visibility can be improved by affecting
either one or the other factor—for example, increasing the size of letters or their darkness, as
in Fortuin’s table (in Verriest and Hermans 1975). When video display units appeared on the
market, letters or symbols were presented on the screen as light spots on a dark background.
Later on, new screens were developed which displayed dark letters on a light background.
Many studies were conducted in order to verify whether this presentation improved vision.
The results of most experiments stress without any doubt that visual acuity is enhanced when
reading dark letters on a light background; of course a dark screen favours reflections of glare
sources.
The functional visual field is defined by the relationship between the luminosity of the
surfaces actually perceived by the eye at the workstation and those of the surrounding areas.
Care must be taken not to create too great differences of luminosity in the visual field;
according to the size of the surfaces involved, changes in general or local adaptation occur
which cause discomfort in the execution of the task. Moreover, it is recognized that in order to
achieve good performance, the contrasts in the field must be such that the task area is more
illuminated than its immediate surroundings, and that the far areas are darker.

Time of presentation of the object. The capacity to detect an object depends directly on the
quantity of light entering the eye, and this is linked with the luminous intensity of the object,
its surface qualities and the time during which it appears (this is known in tests of
tachystocopic presentation). A reduction in acuity occurs when the duration of presentation is
less than 100 to 500 ms.

Movements of the eye or of the target. Loss of performance occurs particularly when the eye
jerks; nevertheless, total stability of the image is not required in order to attain maximum
resolution. But it has been shown that vibrations such as those of construction site machines
or tractors can adversely affect visual acuity.

Diplopia. Visual acuity is higher in binocular than in monocular vision. Binocular vision
requires optical axes that both meet at the object being looked at, so that the image falls into
corresponding areas of the retina in each eye. This is made possible by the activity of the
external muscles. If the coordination of the external muscles fails, more or less transitory
double images may appear, as in excessive visual fatigue, and may cause annoying sensations
(Grandjean 1987).

In short, the discriminating power of the eye depends on the type of object to be perceived
and the luminous environment in which it is measured; in the medical consulting room,
conditions are optimal: high object-background contrast, direct daylight adaptation, characters
with sharp edges, presentation of the object without a time limit, and certain redundancy of
signals (e.g., several letters of the same size on a Snellen chart). Moreover, visual acuity
determined for diagnostic purposes is a maximal, one-off measurement made in the absence of
accommodative fatigue. Clinical acuity is thus a poor reference for the visual performance
attained on the job. What is more, good clinical acuity does not necessarily mean the absence
of discomfort at work, where conditions of individual visual comfort are rarely attained. At
most workplaces, as stressed by Krueger (1992), objects to be perceived are blurred and of
low contrast, background luminances are unequally scattered with many glare sources
producing veiling and local adaptation effects and so on. According to our own calculations,
clinical results have little predictive value for the amount and nature of visual fatigue
encountered, for example, in VDU work. A more realistic laboratory set-up in which
conditions of measurement were closer to task requirements did somewhat better (Rey and
Bousquet 1990; Meyer et al. 1990).

Krueger (1992) is right when claiming that ophthalmological examination is not really
appropriate in occupational health and ergonomics, that new testing procedures should be
developed or extended, and that existing laboratory set-ups should be made available to the
occupational practitioner.

Relief Vision, Stereoscopic Vision

Binocular vision allows a single image to be obtained by means of synthesis of the images
received by the two eyes. Analogies between these images give rise to the active cooperation
that constitutes the essential mechanism of the sense of depth and relief. Binocular vision has
the additional property of enlarging the field, improving visual performance generally,
relieving fatigue and increasing resistance to glare and dazzle.

When fusion of the images from the two eyes is insufficient, ocular fatigue may appear earlier.

Although monocular vision cannot match the efficiency of binocular vision in appreciating the
relief of relatively near objects, the sensation of relief and the perception of depth are
nevertheless possible by means of cues that do not require binocular disparity. We know
that the size of objects does not change; that is why apparent size plays a part in our
appreciation of distance; thus retinal images of small size will give the impression of distant
objects, and vice versa (apparent size). Near objects tend to hide more distant objects (this is
called interposition). The brighter one of two objects, or the one with a more saturated colour,
seems to be nearer. The surroundings also play a part: more distant objects are lost in mist.
Two parallel lines seem to meet at infinity (this is the perspective effect). Finally, if two
targets are moving at the same speed, the one whose speed of retinal displacement is slower
will appear farther from the eye.

In fact, monocular vision does not constitute a major obstacle in the majority of work
situations. The subject needs to get accustomed to the narrowing of the visual field and also to
the rather exceptional possibility that the image of the object may fall on the blind spot. (In
binocular vision the same image never falls on the blind spot of both eyes at the same time.) It
should also be noted that good binocular vision is not necessarily accompanied by relief
(stereoscopic) vision, since this also depends on complex nervous system processes.

For all these reasons, regulations for the need of stereoscopic vision at work should be
abandoned and replaced by a thorough examination of individuals by an eye doctor. Such
regulations or recommendations exist nevertheless and stereoscopic vision is supposed to be
necessary for such tasks as crane driving, jewellery work and cutting-out work. However, we
should keep in mind that new technologies may modify deeply the content of the task; for
example, modern computerized machine-tools are probably less demanding in stereoscopic
vision than previously believed.

As far as driving is concerned, regulations are not necessarily similar from country to country.
In table 3, French requirements for driving either light or heavy vehicles are
mentioned. The American Medical Association guidelines are the appropriate reference for
American readers. Fox (1973) mentions that, for the US Department of Transportation in
1972, drivers of commercial motor vehicles should have a distant VA of at least 20/40, with
or without corrective glasses; a field of vision of at least 70 degrees is needed in each eye.
Ability to recognize the colours of the traffic lights was also required at that time, but today in
most countries traffic lights can be distinguished not only by colour but also by shape.
Table 3. Visual requirements for a driving licence in France

Visual acuity (with eyeglasses)

For light vehicles At least 6/10th for both eyes with at least 2/10th in the
worse eye

For heavy vehicles VA with both eyes of 10/10th with at least 6/10th in the
worse eye

Visual field

For light vehicles No licence if peripheral reduction in candidates with one eye or with the
second eye having a visual acuity of less than 2/10th

For heavy vehicles Complete integrity of both visual fields (no peripheral
reduction, no scotoma)

Nystagmus (spontaneous eye movements)

For light vehicles No licence if binocular visual acuity of less than 8/10th

For heavy vehicles No defects of night vision are acceptable

Eye Movements

Several types of eye movements are described whose objective is to allow the eye to take
advantage of all the information contained in the images. The system of fixation allows us to
maintain the object in place at the level of the foveolar receptors where it can be examined in
the retinal region with the highest power of resolution. Nevertheless, the eyes are constantly
subject to micromovements (tremor). Saccades (particularly studied during reading) are
intentionally induced rapid movements whose aim is to displace the gaze from one detail of a
motionless object to another; because the movement is anticipated, the brain does not interpret
the resulting displacement of the image across the retina as movement of the object. When eye
movements are unanticipated, however, such an illusion of movement arises, as in pathological
conditions of the central nervous system or the vestibular organ. Search movements are
partially voluntary when they involve the tracking of relatively small objects, but become
rather irrepressible when very large objects are concerned. Several mechanisms for
suppressing images (including jerks) allow the retina to prepare to receive new information.

Illusions of movement (autokinetic movements) of a luminous point or a motionless object,
such as the apparent movement of a bridge over a watercourse, are explained by retinal
persistence and by conditions of vision that are not integrated in our central system of
reference. The resulting effect may be merely a simple error of interpretation of a luminous
message (sometimes harmful in the working environment) or may result in serious
neurovegetative disturbances. The
illusions caused by static figures are well known. Movements in reading are discussed
elsewhere in this chapter.

Flicker Fusion and de Lange Curve

When the eye is exposed to a succession of short stimuli, it first experiences flicker and then,
with an increase in frequency, has the impression of stable luminosity: this is the critical
fusion frequency. If the stimulating light fluctuates in a sinusoidal manner, the subject may
experience fusion for all frequencies below the critical frequency insofar as the level of
modulation of this light is reduced. All these thresholds can then be joined by a curve which
was first described by de Lange and which can be altered when changing the nature of the
stimulation: the curve will be depressed when the luminance of the flickering area is reduced
or if the contrast between the flickering spot and its surroundings decreases; similar changes of
the curve can be observed in retinal pathologies or in post-effects of cranial trauma (Meyer et
al. 1971) (Figure 8).

Figure 8. Flicker-fusion curves connecting the frequency of intermittent luminous stimulation
and its amplitude of modulation at threshold (de Lange’s curves): average and standard
deviation in 43 patients suffering from cranial trauma and 57 controls (dotted line).

Therefore one must be cautious when interpreting a fall in the critical flicker fusion frequency in
terms of work-induced visual fatigue.

Occupational practice should make a better use of flickering light to detect small retinal
damage or dysfunctioning (e.g., an enhancement of the curve can be observed when dealing
with slight intoxication, followed by a drop when intoxication becomes greater); this testing
procedure, which does not alter retinal adaptation and which does not require eye correction,
is also very useful for the follow-up of functional recovery during and after a treatment
(Meyer et al. 1983) (figure 9).

Figure 9. De Lange’s curve in a young man absorbing ethambutol; the effect of treatment can
be deduced by comparing the flicker sensitivity of the subject before and after treatment.

Colour Vision

The sensation of colour is connected with the activity of the cones and therefore exists only in
the case of daylight (photopic range of light) or mesopic (middle range of light) adaptation. In
order for the system of colour analysis to function satisfactorily, the luminance of the
perceived objects must be at least 10 cd/m². Generally speaking, three colour sources, the so-
called primary colours—red, green and blue—suffice to reproduce a whole spectrum of
colour sensations. In addition, a phenomenon is observed of induction of colour contrast
between two colours which mutually reinforce each other: the green-red pair and the yellow-
blue pair.

The two theories of colour sensation, the trichromatic theory and the opponent-colour theory,
are not mutually exclusive; the first appears to apply at the level of the cones and the second
at more central levels of the visual system.

To understand the perception of coloured objects against a luminous background, other
concepts need to be used. The same colour may in fact be produced by different types of
radiation. To reproduce a given colour faithfully, it is therefore necessary to know the spectral
composition of the light sources and the spectrum of the reflectance of the pigments. The
colour rendering index used by lighting specialists allows the selection of fluorescent
tubes appropriate to the requirements. Our eyes have developed the faculty of detecting very
slight changes in the tonality of a surface obtained by changing its spectral distribution; the
spectral colours (the eye can distinguish more than 200) recreated by mixtures of
monochromatic light represent only a small proportion of the possible colour sensations.

The importance of anomalies of colour vision in the work environment should thus not be
exaggerated, except in activities such as inspecting the appearance of products or, for example,
in decorating and similar trades, where colours must be correctly identified. Moreover, even in
electricians’ work, size and shape or other markers may replace colour.

Anomalies of colour vision may be congenital or acquired (degenerations). In abnormal
trichromates, the change may affect the basic red sensation (Dalton type), or the green or the
blue (the rarest anomaly). In dichromates, the system of three basic colours is reduced to two.
In deuteranopia, it is the basic green that is lacking. In protanopia, it is the disappearance of
the basic red; although less frequent, this anomaly, as it is accompanied by a loss of
luminosity in the range of reds, deserves attention in the work environment, in particular by
avoiding the deployment of red notices especially if they are not very well lighted. It should
also be noted that these colour vision defects can be found in various degrees in the so-called
normal subject; hence the need for caution in using too many colours. It should be kept in
mind also that only broad colour defects are detectable with vision testers.

Refractive Errors

The near point (Weymouth 1966) is the shortest distance at which an object can be brought
into sharp focus; the farthest away is the far point. For the normal (emmetropic) eye, the far
point is situated at infinity. For the myopic eye, the image of a distant object is focused in
front of the retina and the far point lies at a finite distance in front of the eye; this excess of
refractive power is corrected by means of concave lenses. For the hyperopic (hypermetropic)
eye, the image of a distant object would be focused behind the retina; this lack of refractive
power is corrected by means of convex lenses (figure 10). In a case of mild hyperopia, the
defect is spontaneously compensated by accommodation and may be ignored by the
individual. In myopics who are not wearing their spectacles, the age-related loss of
accommodation can be compensated for by the fact that the far point is nearer.
Figure 10. Schematic representation of refractive errors and their correction.
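
As a numerical illustration of the correction principle (a simplified sketch not drawn from the text: it treats the spectacle as a thin lens placed at the eye and ignores the vertex distance), the power of the concave lens needed by a myope is approximately minus the reciprocal of the far-point distance expressed in metres.

    def myopia_correction_dioptres(far_point_m):
        """Thin-lens approximation: the corrective lens must image infinity at the
        myope's far point, so its power is P = -1 / far_point (in dioptres)."""
        return -1.0 / far_point_m

    # A myope whose far point lies 0.5 m in front of the eye needs roughly a -2 D lens:
    print(myopia_correction_dioptres(0.5))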

In the ideal eye, the surface of the cornea should be perfectly spherical; however, our eyes
show differences in curvature in different axes (this is called astigmatism); refraction is
stronger when the curvature is more accentuated, and the result is that rays emerging from a
luminous point do not form a precise image on the retina. These defects, when pronounced,
are corrected by means of cylindrical lenses (see lowest diagram in figure 10); in
irregular astigmatism, contact lenses are recommended. Astigmatism becomes particularly
troublesome during night driving or in work on a screen, that is, in conditions where light
signals stand out on a dark background or when using a binocular microscope.

Contact lenses should not be used at workstations where the air is too dry or where dust or
similar hazards are present (Verriest and Hermans 1975).
In presbyopia, which is due to loss of elasticity of the lens with age, it is the amplitude of
accommodation that is reduced—that is, the distance between the far and near points; the
latter (from about 10 cm at the age of 10 years) moves further away the older one gets; the
correction is made by means of unifocal or multifocal convergent lenses; the latter correct for
ever nearer distances of the object (usually up to 30 cm) by taking into account that nearer
objects are generally perceived in the lower part of the visual field, while the upper part of the
spectacles is reserved for distance vision. New lenses are now proposed for work at VDUs
which are different from the usual type. The lenses, known as progressive, almost blur the
limits between the correction zones. Progressive lenses require the user to be more
accustomed to them than do the other types of lenses, because their field of vision is narrow
(see Krueger 1992).
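
Although the text above describes the amplitude of accommodation in terms of the distance between the far and near points, it is conventionally expressed in dioptres as the difference between the reciprocals of those distances; the following minimal sketch (the dioptric convention and the values are illustrative assumptions) shows how the near points quoted in this article translate into dioptres.

    def accommodation_amplitude_dioptres(near_point_m, far_point_m=float("inf")):
        """Amplitude of accommodation A = 1/near - 1/far (distances in metres, result in dioptres)."""
        far_term = 0.0 if far_point_m == float("inf") else 1.0 / far_point_m
        return 1.0 / near_point_m - far_term

    # A near point of 10 cm (about age 10) gives roughly 10 D of accommodation;
    # a near point receded to 50 cm leaves only about 2 D:
    print(accommodation_amplitude_dioptres(0.10))
    print(accommodation_amplitude_dioptres(0.50))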

When the visual task requires alternative far and near vision, bifocal, trifocal or even
progressive lenses are recommended. However, it should be kept in mind that the use of
multifocal lenses can create important modifications to the posture of an operator. For
example, VDU operators with presbyopia corrected by means of bifocal lenses tend to
extend the neck and may suffer cervical and shoulder pain. Spectacle manufacturers will then
propose progressive lenses of different kinds. Another remedy is the ergonomic improvement of
VDU workplaces, to avoid placing the screen too high.

The detection of refractive errors (which are very common in the working population) is not
independent of the type of measurement. Snellen charts fixed on a wall will not necessarily
give the same results as various kinds of apparatus in which the image of the object is
projected on a near background. In fact, in a vision tester (see above), it is difficult for the
subject to relax the accommodation, particularly as the axis of vision is lower; this is known
as “instrumental myopia”.

Effects of Age

With age, as already explained, the lens loses its elasticity, with the result that the near point
moves farther away and the power of accommodation is reduced. Although the loss of
accommodation with age can be compensated for by means of spectacles, presbyopia is a real
public health problem. Kauffman (in Adler 1992) estimates its cost, in terms of means of
correction and loss of productivity, to be of the order of tens of billions of dollars annually for
the United States alone. In developing countries we have seen workers obliged to give up
work (in particular the making of silk saris) because they are unable to buy spectacles.
Moreover, when protective glasses need to be used, it is very expensive to offer both
correction and protection. It should be remembered that the amplitude of accommodation
declines even in the second decade of life (and perhaps even earlier) and that it disappears
completely by the age of 50 to 55 years (Meyer et al. 1990) (figure 11).

Figure 11. Near point measured with the rule of Clement and Clark, percentage distribution of
367 office workers aged 18-35 years (below) and 414 office workers aged 36-65 years
(above).
Other phenomena due to age also play a part: the sinking of the eye into the orbit, which
occurs in very old age and varies more or less according to individuals, reduces the size of the
visual field (because of the eyelid). Dilation of the pupil is at its maximum in adolescence and
then declines; in older people, the pupil dilates less and the reaction of the pupil to light slows
down. Loss of transparency of the media of the eye reduces visual acuity (some media have a
tendency to become yellow, which modifies colour vision) (see Verriest and Hermans 1976).
Enlargement of the blind spot results in the reduction of the functional visual field.
With age and illness, changes are observed in the retinal vessels, with consequent functional
loss. Even the movements of the eye are modified; there is a slowing down and reduction in
amplitude of the exploratory movements.

Older workers are at a double disadvantage in conditions of weak contrast and weak
luminosity of the environment; first, they need more light to see an object, but at the same
time they benefit less from increased luminosity because they are dazzled more quickly by
glare sources. This handicap is due to changes in the transparent media which allow less light
to pass and increase its diffusion (the veil effect described above). Their visual discomfort is
aggravated by too sudden changes between strongly and weakly lighted areas (slowed pupil
reaction, more difficult local adaptation). All these defects have a particular impact in VDU
work, and it is very difficult, indeed, to provide good illumination of workplaces for both
young and older operators; it can be observed, for example, that older operators will reduce by
all possible means the luminosity of the surrounding light, although dim light tends to
decrease their visual acuity.

Risks to the Eye at Work

These risks may be expressed in different ways (Rey and Meyer 1981; Rey 1991): by the
nature of the causal agent (physical agent, chemical agents, etc.), by the route of penetration
(cornea, sclera, etc.), by the nature of the lesions (burns, bruises, etc.), by the seriousness of
the condition (limited to the outer layers, affecting the retina, etc.) and by the circumstances
of the accident (as for any physical injury); these descriptive elements are useful in devising
preventive measures. Only the eye lesions and circumstances most frequently encountered in
the insurance statistics are mentioned here. Let us stress that workers’ compensation can be
claimed for most eye injuries.

Eye conditions caused by foreign bodies

These conditions are seen particularly among turners, polishers, foundry workers,
boilermakers, masons and quarrymen. The foreign bodies may be inert substances such as
sand, irritant metals such as iron or lead, or animal or vegetable organic materials (dusts).
This is why, in addition to the eye lesions, complications such as infections and intoxications
may occur if the amount of substance introduced into the organism is sufficiently large.
Lesions produced by foreign bodies will of course be more or less disabling, depending on
whether they remain in the outer layers of the eye or penetrate deeply into the bulb; treatment
will thus be quite different and sometimes requires the immediate transfer of the victim to the
eye clinic.

Burns of the eye

Burns are caused by various agents: flashes or flames (during a gas explosion); molten metal
(the seriousness of the lesion depends on the melting point, with metals melting at higher
temperature causing more serious damage); and chemical burns due, for example, to strong
acids and bases. Burns due to boiling water, electrical burns and many others also occur.

Injuries due to compressed air

These are very common. Two phenomena play a part: the force of the jet itself (and the
foreign bodies accelerated by the air flow); and the shape of the jet, a less concentrated jet
being less harmful.

Eye conditions caused by radiation

Ultraviolet (UV) radiation

The source of the rays may be the sun or certain lamps. The degree of penetration into the eye
(and consequently the danger of the exposure) depends on the wavelength. Three zones have
been defined by the International Lighting Commission: UVC rays (100 to 280 nm) are
absorbed at the level of the cornea and conjunctiva; UVB rays (280 to 315 nm) are more
penetrating and reach the anterior segment of the eye; UVA rays (315 to 400 nm) penetrate still
further.

For welders the characteristic effects of exposure have been described, such as acute
keratoconjunctivitis, chronic photo-ophthalmia with decreased vision, and so on. The welder
is subjected to a considerable amount of visible light, and it is essential that the eyes be
protected with adequate filters. Snowblindness, a very painful condition for workers in
mountains, needs to be avoided by wearing appropriate sunglasses.

Infrared radiation

Infrared rays are situated between the visible rays and the shortest radio-electric waves. They
begin, according to the International Lighting Commission, at 750 nm. Their penetration into
the eye depends on their wavelength; the longest infrared rays can reach the lens and even the
retina. Their effect on the eye is due to the heat they generate. The characteristic condition is
found in those who blow glass facing the furnace. Other workers, such as blast furnace
workers, suffer from thermal irradiation with various clinical effects (such as
keratoconjunctivitis, or membranous thickening of the conjunctiva).

LASER (Light amplification by stimulated emission of radiation)

The wavelength of the emission depends on the type of laser—visible light, ultraviolet and
infrared radiation. It is principally the quantity of energy projected that determines the level of
the danger incurred.

Ultraviolet rays cause inflammatory lesions; infrared rays can cause caloric lesions; but the
greatest risk is destruction of retinal tissue by the beam itself, with loss of vision in the
affected area.

Radiation from cathode screens

The emissions coming from the cathode screens commonly used in offices (x rays, ultraviolet,
infrared and radio waves) are all below the limits set by international standards. There is no evidence
of any relationship between video terminal work and the onset of cataract (Rubino 1990).
Harmful substances

Certain solvents, such as the esters and aldehydes (formaldehyde being very widely used), are
irritating to the eyes. The inorganic acids, whose corrosive action is well known, cause tissue
destruction and chemical burns by contact. The organic acids are also dangerous. Alcohols are
irritants. Caustic soda, an extremely strong base, is a powerful corrosive that attacks the eyes
and the skin. Also included in the list of harmful substances are certain plastic materials
(Grant 1979) as well as allergenic dusts or other substances such as exotic woods, feathers
and so on.

Finally, infectious occupational diseases can be accompanied by effects on the eyes.

Protective glasses

Since the wearing of individual protection (glasses and masks) can itself obstruct vision
(reduced visual acuity owing to loss of transparency of the lenses caused by the impact of
foreign bodies, and obstacles in the visual field such as the sidepieces of glasses), workplace
hygiene also tends towards other means, such as removing dust and dangerous particles from
the air through general ventilation.

The occupational physician is frequently called upon to advise on the quality of glasses
adapted to the risk; national and international directives will guide this choice. Moreover,
better goggles are now available, which include improvements in efficacy, comfort and even
aesthetics.

In the United States, for example, reference can be made to ANSI standards (particularly
ANSI Z87.1-1979) that have the force of law under the federal Occupational Safety and
Health Act (Fox 1973). ISO Standard No. 4007-1977 refers also to protective devices. In
France, recommendations and protective material are available from the INRS in Nancy. In
Switzerland, the national insurance company CNA provides rules and procedures for
extraction of foreign bodies at the workplace. For serious damage, it is preferable to send the
injured worker to the eye doctor or the eye clinic.

Finally, people with eye pathologies may be more at risk than others; to discuss such a
controversial problem goes beyond the scope of this article. As previously said, their eye
doctor should be aware of the dangers that they may encounter at their workplace and monitor
them carefully.

Conclusion

At the workplace, most information and signals are visual in nature, although acoustic signals
may play a role; nor should we forget the importance of tactile signals in manual work, as
well as in office work (for example, keyboarding speed).

Our knowledge of the eye and vision comes mostly from two sources: medical and scientific.
For the purpose of diagnosis of eye defects and diseases, techniques have been developed
which measure visual functions; these procedures may not be the most effective for
occupational testing purposes. Conditions of medical examination are indeed very far from
those which are encountered at the workplace; for example, to determine visual acuity the eye
doctor will make use of charts or instruments where contrast between test object and
background is the highest possible, where the edges of test objects are sharp, where no
disturbing glare sources are perceptible and so on. In real life, lighting conditions are often
poor and visual performance is under stress for several hours.

This emphasizes the need to utilize laboratory apparatus and instrumentation which display a
higher predictive power for visual strain and fatigue at the workplace.

Many of the scientific experiments reported in textbooks were performed for a better
theoretical understanding of the visual system, which is very complex. References in this
article have been limited to that knowledge which is immediately useful in occupational
health.

While pathological conditions may impede some people in fulfilling the visual requirements
of a job, it seems safer and fairer—apart from highly demanding jobs with their own
regulations (aviation, for example)—to give the eye doctor the power of decision, rather than
refer to general rules; and it is in this way that most countries operate. Guidelines are
available for more information.

On the other hand, hazards exist for the eye when it is exposed at the workplace to various
noxious agents, whether physical or chemical. Hazards to the eye in industry have been briefly
enumerated above. On current scientific knowledge, no danger of developing cataract is to be
expected from working at a VDU.

Taste

Written by ILO Content Manager

The three chemosensory systems, smell, taste, and the common chemical sense, require direct
stimulation by chemicals for sensory perception. Their role is to monitor constantly both
harmful and beneficial inhaled and ingested chemical substances. Irritating or tingling
properties are detected by the common chemical sense. The taste system perceives only sweet,
salty, sour, bitter and possibly metallic and monosodium glutamate (umami) tastes. The
totality of the oral sensory experience is termed “flavour,” the interaction of smell, taste,
irritation, texture and temperature. Because most flavour is derived from the smell, or aroma,
of food and beverages, damage to the smell system is often reported as a problem with
“taste”. Verifiable taste deficits are more likely to be present if specific losses of sweet, sour,
salty and bitter sensations are described.

Chemosensory complaints are frequent in occupational settings and may result from a normal
sensory system perceiving environmental chemicals. Conversely, they may also indicate an
injured system: requisite contact with chemical substances renders these sensory systems
uniquely vulnerable to damage (see table 1). In the occupational setting, these systems can
also be damaged by trauma to the head as well as by agents other than chemicals (e.g.,
radiation). Taste disorders are either temporary or permanent: complete or partial taste loss
(ageusia or hypogeusia), heightened taste (hypergeusia) and distorted or phantom tastes
(dysgeusia) (Deems, Doty and Settle 1991; Mott, Grushka and Sessle 1993).
Table 1. Agents/processes reported to alter the taste system

Agent/process Taste disturbance Reference


Amalgam Metallic taste Siblerud 1990; see text
Dental restorations/appliances Metallic taste See text
Diving (dry saturation) Sweet, bitter; salt, sour See text
Diving and welding Metallic taste See text
Drugs/Medications Varies See text
Hydrazine Sweet dysgeusia Schweisfurth and Schottes 1993
Hydrocarbons Hypogeusia, “glue” dysgeusia Hotz et al. 1992
Lead poisoning Sweet/metallic dysgeusia Kachru et al. 1989
Metals and metal fumes (also, some specific metals listed in chart) Sweet/metallic See text; Shusterman and Sheedy 1992
Nickel Metallic taste Pfeiffer and Schwickerath 1991
Pesticides (organophosphates) Bitter/metallic dysgeusia +
Radiation Increased DT & RT *
Selenium Metallic taste Bedwal et al. 1993
Solvents “Funny taste”, H +
Sulphuric acid mists “Bad taste” Petersen and Gormsen 1991
Underwater welding Metallic taste See text
Vanadium Metallic taste Nemery 1990

DT = detection threshold; RT = recognition threshold; * = Mott & Leopold 1991; + = Schiffman & Nagle 1992.
Specific taste disturbances are as stated in the articles referenced.

The taste system is sustained by regenerative capability and redundant innervation. Because
of this, clinically notable taste disorders are less common than olfactory disorders. Taste
distortions are more common than significant taste loss and, when present, are more likely to
have secondary adverse effects such as anxiety and depression. Taste loss or distortion can
interfere with occupational performance where keen taste acuity is required, such as in the
culinary arts and the blending of wines and spirits.

Anatomy and Physiology

Taste receptor cells, found throughout the oral cavity, the pharynx, the larynx and the
oesophagus, are modified epithelial cells located within the taste buds. While on the tongue
taste buds are grouped in superficial structures called papillae, extralingual taste buds are
distributed within the epithelium. The superficial placement of taste cells makes them
susceptible to injury. Damaging agents usually come in contact with the mouth through
ingestion, although mouth breathing associated with nasal obstruction or other conditions
(e.g., exercise, asthma) allows oral contact with airborne agents. The taste receptor cell’s
average ten-day life span permits rapid recovery if superficial damage to receptor cells has
occurred. Also, taste is innervated by four pairs of peripheral nerves: the front of the tongue
by the chorda tympani branch of the seventh cranial nerve (CN VII); the posterior of the
tongue and the pharynx by the glossopharyngeal nerve (CN IX); the soft palate by the greater
superficial petrosal branch of CN VII; and the larynx/oesophagus by the vagus (CN X). Lastly,
the central taste pathways, although not completely mapped in humans (Ogawa 1994), appear
more divergent than the central olfactory pathways.

The first step in taste perception involves interaction between chemicals and taste receptor
cells. The four taste qualities, sweet, sour, salty and bitter, enlist different mechanisms at the
level of the receptor (Kinnamon and Getchell 1991), ultimately generating action potentials in
taste neurons (transduction).

Tastants diffuse through salivary secretions and also mucus secreted around taste cells to
interact with the surface of taste cells. Saliva ensures that tastants are carried to the buds, and
provides an optimal ionic environment for perception (Spielman 1990). Alterations in taste
can be demonstrated with changes in the inorganic constituents of saliva. Most taste stimuli
are water soluble and diffuse easily; others require soluble carrier proteins for transport to the
receptor. Salivary output and composition, therefore, play an essential role in taste function.

Salt taste is stimulated by cations such as Na+, K+ or NH4+. Most salty stimuli are transduced
when ions travel through a specific type of sodium channel (Gilbertson 1993), although other
mechanisms may also be involved. Changes in the composition of taste pore mucus or the
taste cell’s environment could alter salt taste. Also, structural changes in nearby receptor
proteins could modify receptor membrane function. Sour taste corresponds to acidity.
Blockade of specific sodium channels by hydrogen ions elicits sour taste. As with salt taste,
however, other mechanisms are thought to exist. Many chemical compounds are perceived as
bitter, including cations, amino acids, peptides and larger compounds. Detection of bitter
stimuli appears to involve more diverse mechanisms that include transport proteins, cation
channels, G proteins and second messenger mediated pathways (Margolskee 1993). Salivary
proteins may be essential in transporting lipophilic bitter stimuli to the receptor membranes.
Sweet stimuli bind to specific receptors linked to G protein-activated second-messenger
systems. There is also some evidence in mammals that sweet stimuli can gate ion channels
directly (Gilbertson 1993).

Taste Disorders

General Concepts

The anatomic diversity and redundancy of the taste system is sufficiently protective to prevent
total, permanent taste loss. Loss of a few peripheral taste fields, for example, would not be
expected to affect whole mouth taste ability (Mott, Grushka and Sessle 1993). The taste
system may be far more vulnerable to taste distortion or phantom tastes. For example,
dysgeusias appear to be more common in occupational exposures than taste losses per se.
Although taste is thought to be more robust than smell with respect to the ageing process,
losses in taste perception with ageing have been documented.

Temporary taste losses can occur when the oral mucosa has been irritated. Theoretically, this
can result in inflammation of the taste cells, closure of taste pores or altered function at the
surface of taste cells. Inflammation can alter blood flow to the tongue, thereby affecting taste.
Salivary flow can also be compromised. Irritants can cause swelling and obstruct salivary
ducts. Toxicants absorbed and excreted through salivary glands could damage ductal tissue
during excretion. Either of these processes could cause long-term oral dryness with resultant
taste effects. Exposure to toxicants could alter the turnover rate of taste cells, modify the taste
channels at the surface of the taste cell, or change the internal or external chemical
environments of the cells. Many substances are known to be neurotoxic and could injure
peripheral taste nerves directly, or damage higher taste pathways in the brain.

Pesticides

Pesticide use is widespread and contamination occurs as residues in meat, vegetables, milk,
rain and drinking water. Although workers exposed during the manufacture or use of
pesticides are at greatest risk, the general population is also exposed. Important pesticides
include organochloride compounds, organophosphate pesticides, and carbamate pesticides.
Organochloride compounds are highly stable and therefore exist in the environment for
lengthy periods. Direct toxic effects on central neurons have been demonstrated.
Organophosphate pesticides have more widespread use because they are not as persistent, but
they are more toxic; inhibition of acetylcholinesterase can cause neurological and behavioural
abnormalities. Carbamate pesticide toxicity is similar to that of the organophosphorus
compounds, and carbamates are often used when the latter fail. Pesticide exposures have been associated
with persistent bitter or metallic tastes (Schiffman and Nagle 1992), unspecified dysgeusia
(Ciesielski et al. 1994), and less commonly with taste loss. Pesticides can reach taste receptors
via air, water and food and can be absorbed from the skin, gastrointestinal tract, conjunctiva,
and respiratory tract. Because many pesticides are lipid soluble, they can easily penetrate lipid
membranes within the body. Interference with taste can occur peripherally irrespective of
route of initial exposure; in mice, binding to the tongue has been seen with certain insecticides
after injection of pesticide material into the bloodstream. Alterations in taste bud morphology
after pesticide exposure have been demonstrated. Degenerative changes in the sensory nerve
terminations have been also noted and may account for reports of abnormalities of neural
transmission. Metallic dysgeusia may be a sensory paresthesia caused by the impact of
pesticides on taste buds and their afferent nerve endings. There is some evidence, however,
that pesticides can interfere with neurotransmitters and therefore disrupt transmission of taste
information more centrally (El-Etri et al. 1992). Workers exposed to organophosphate
pesticides can demonstrate neurological abnormalities on electroencephalography and
neuropsychological testing independent of cholinesterase depression in the blood stream. It is
thought that these pesticides have a neurotoxic effect on the brain independent of the effect
upon cholinesterase. Although increased salivary flow has been reported to be associated with
pesticide exposure, it is unclear what effect this might have on taste.

Metals and metal fume fever

Alterations of taste have occurred after exposure to certain metals and metallic compounds
including mercury, copper, selenium, tellurium, cyanide, vanadium, cadmium, chromium and
antimony. Metallic taste has also been noted by workers exposed to the fumes of zinc or
copper oxide, from the ingestion of copper salt in poisoning cases, or from exposure to
emissions resulting from the use of torches for cutting of brass piping.

Exposure to freshly formed fumes of metal oxides can result in a syndrome known as metal
fume fever (Gordon and Fine 1993). Although zinc oxide is most frequently cited, this
disorder has also been reported after exposure to oxides of other metals, including copper,
aluminium, cadmium, lead, iron, magnesium, manganese, nickel, selenium, silver, antimony
and tin. The syndrome was first noted in brass foundry workers, but is now most common in
welding of galvanized steel or during galvanization of steel. Within hours after exposure,
throat irritation and a sweet or metallic dysgeusia may herald more generalized symptoms of
fever, shaking chills, and myalgia. Other symptoms, such as cough or headache, may also
occur. The syndrome is notable for both rapid resolution (within 48 hours) and development
of tolerance upon repeated exposures to the metal oxide. A number of possible mechanisms
have been suggested, including immune system reactions and a direct toxic effect on
respiratory tissue, but it is now thought that lung exposure to metal fumes results in release of
specific mediators into the blood stream, called cytokines, that cause the physical symptoms
and findings (Blanc et al. 1993). A more severe, potentially fatal, variant of metal fume fever
occurs after exposure to zinc chloride aerosol in military screening smoke bombs (Blount
1990). Polymer fume fever is similar to metal fume fever in presentation, with the exception
of the absence of metallic taste complaints (Shusterman 1992).

In lead poisoning cases, sweet metallic tastes are often described. In one report, silver
jewellery workers with confirmed lead toxicity exhibited taste alterations (Kachru et al.
1989). The workers were exposed to lead fumes by heating jewellers’ silver waste in
workshops which had poor exhaust systems. The vapours condensed on the skin and hair of
the workers and also contaminated their clothing, food and drinking water.

Underwater welding

Divers describe oral discomfort, loosening of dental fillings and metallic taste during
electrical welding and cutting underwater. In a study by Örtendahl, Dahlen and Röckert
(1985), 55% of 118 divers working under water with electrical equipment described metallic
taste. Divers without this occupational history did not describe metallic taste. Forty divers
were recruited into two groups for further evaluation; the group with underwater welding and
cutting experience had significantly more evidence of dental amalgam breakdown. Initially, it
was theorized that intraoral electrical currents erode dental amalgam, releasing metal ions
which have direct effects on taste cells. Subsequent data, however, demonstrated intraoral
electrical activity of insufficient magnitude to erode dental amalgam, but of sufficient
magnitude to directly stimulate taste cells and cause metallic taste (Örtendahl 1987; Frank and
Smith 1991). Divers may be vulnerable to taste changes without welding exposure;
differential effects on taste quality perception have been documented, with decreased
sensitivity to sweet and bitter and increased sensitivity to salty and sour tastants (O’Reilly et
al. 1977).

Dental restorations and oral galvanism

In a large prospective, longitudinal study of dental restorations and appliances, approximately 5% of subjects reported a metallic taste at any given time (Participants of SCP Nos. 147/242
& Morris 1990). Frequency of metallic taste was higher with a history of teeth grinding; with
fixed partial dentures than with crowns; and with an increased number of fixed partial
dentures. Interactions between dental amalgams and the oral environment are complex
(Marek 1992) and could affect taste through a variety of mechanisms. Metals that bind to
proteins can acquire antigenicity (Nemery 1990) and might cause allergic reactions with
subsequent taste alterations. Soluble metal ions and debris are released and may interact with
soft tissues in the oral cavity. Metallic taste has been reported to correlate with nickel
solubility in saliva from dental appliances (Pfeiffer and Schwickerath 1991). Metallic taste
was reported by 16% of subjects with dental fillings and by none of the subjects without fillings
(Siblerud 1990). In a related study of subjects who had amalgam removed, metallic taste
improved or abated in 94% (Siblerud 1990).

Oral galvanism, a controversial diagnosis (Council on Dental Materials report 1987), describes the generation of oral currents from either corrosion of dental amalgam restorations
or electrochemical differences between dissimilar intraoral metals. Patients considered to have
oral galvanism appear to have a high frequency of dysgeusia (63%) described as metallic,
battery, unpleasant or salty tastes (Johansson, Stenman and Bergman 1984). Theoretically,
taste cells could be directly stimulated by intraoral electric currents and generate dysgeusia.
Subjects with symptoms of oral burning, battery taste, metallic taste and/or oral galvanism
were determined to have lower electrogustometric thresholds (i.e. more sensitive taste) on
taste testing than control subjects (Axéll, Nilner and Nilsson 1983). Whether galvanic currents
related to dental materials are causative is debatable, however. A brief tin-foil taste shortly after restorative work is thought to be possible, but more lasting effects are considered unlikely (Council on Dental Materials 1987). Yontchev, Carlsson and Hedegård (1987) found
similar frequencies of metallic taste or oral burning in subjects with these symptoms whether
or not there was contact between dental restorations. Alternative explanations for taste
complaints in patients with restorations or appliances are sensitivity to mercury, cobalt,
chrome, nickel or other metals (Council on Dental Materials 1987), other intraoral processes
(e.g., periodontal disease), xerostomia, mucosal abnormalities, medical illnesses, and
medication side effects.

Drugs and medications

Many drugs and medications have been linked to taste alterations (Frank, Hettinger and Mott
1992; Mott, Grushka and Sessle 1993; Della Fera, Mott and Frank 1995; Smith and Burtner
1994) and are mentioned here because of possible occupational exposures during the
manufacture of these drugs. Antibiotics, anticonvulsants, antilipidemics, antineoplastics,
psychiatric, antiparkinsonism, antithyroid, arthritis, cardiovascular, and dental hygiene drugs
are broad classes reported to affect taste.

The presumed site of action of drugs on the taste system varies. Often the drug is tasted directly during oral administration, or the drug or its metabolites are tasted after being excreted in saliva. Many drugs, for example anticholinergics or some antidepressants,
cause oral dryness and affect taste through inadequate presentation of the tastant to the taste
cells via saliva. Some drugs may affect taste cells directly. Because taste cells have a high
turnover rate, they are especially vulnerable to drugs that interrupt protein synthesis, such as
antineoplastic drugs. It has also been thought that there may be an effect on impulse
transmission through the taste nerves or in the ganglion cells, or a change in the processing of
the stimuli in higher taste centres. Metallic dysgeusia has been reported with lithium, possibly
through transformations in receptor ion channels. Anti-thyroid drugs and angiotensin
converting enzyme inhibitors (e.g., captopril and enalapril) are well known causes of taste
alterations, possibly because of the presence of a sulphydryl (-SH) group (Mott, Grushka and
Sessle 1993). Other drugs with -SH groups (e.g., methimazole, penicillamine) also cause taste
abnormalities. Drugs that affect neurotransmitters could potentially alter taste perception.

Mechanisms of taste alterations vary, however, even within a class of drug. For example, taste
alterations after treatment with tetracycline may be caused by oral mycosis. Alternatively, an
increased blood urea nitrogen, associated with the catabolic effect of tetracycline, can
sometimes result in a metallic or ammonia-like taste.
Side effects of metronidazole include alteration of taste, nausea, and a distinctive distortion of the taste of carbonated and alcoholic beverages. Peripheral neuropathy and paraesthesias can also sometimes occur. It is thought that the drug and its metabolites may have a direct effect upon taste receptor function, and also on the sensory cell.

Radiation exposure

Radiation treatment can cause taste dysfunction through (1) taste cell changes, (2) damage to
taste nerves, (3) salivary gland dysfunction, and (4) opportunistic oral infection (Della Fera et
al. 1995). There have been no studies of occupational radiation effects on the taste system.

Head trauma

Head trauma occurs in the occupational setting and can cause alterations in the taste system.
Although perhaps only 0.5% of head trauma patients describe taste loss, the frequency of
dysgeusia may be much higher (Mott, Grushka and Sessle 1993). Taste loss, when it occurs, is
likely quality-specific or localized and may not even be subjectively apparent. The prognosis
of subjectively noted taste loss appears better than that for olfactory loss.

Non-occupational causes

Other causes of taste abnormalities must be considered in the differential diagnosis, including
congenital/genetic, endocrine/metabolic, or gastrointestinal disorders; hepatic disease;
iatrogenic effects; infection; local oral conditions; cancer; neurological disorders; psychiatric
disorders; renal disease; and dry mouth/Sjogren’s syndrome (Deems, Doty and Settle 1991;
Mott and Leopold 1991; Mott, Grushka and Sessle 1993).

Taste testing

Psychophysics is the measurement of a response to an applied sensory stimulus. “Threshold” tasks, tests that determine the minimum concentration that can be reliably perceived, are less
useful in taste than olfaction because of wider variability in the former in the general
population. Separate thresholds can be obtained for detection of tastants and recognition of
tastant quality. Suprathreshold tests assess the ability of the system to function at levels above
threshold and may provide more information about “real world” taste experience.
Discrimination tasks, telling the difference between substances, can elicit subtle changes in
sensory ability. Identification tasks may yield different results than threshold tasks in the same
individual. For example, a person with central nervous system injury may be able to detect
and rank tastants, but may not be able to identify them. Taste testing can assess whole mouth
taste through swishing of tastants throughout the oral cavity, or can test specific taste areas
with targeted droplets of tastants or focally applied filter paper soaked with tastants.
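
To make the distinction between detection and recognition thresholds concrete, the following short Python sketch scores a hypothetical ascending-concentration series; the sucrose steps, the recorded responses and the two-consecutive-responses criterion are illustrative assumptions, not a standardized clinical protocol.

# Illustrative sketch only: estimating detection and recognition thresholds
# from an ascending series of tastant concentrations. The dilution steps,
# the responses and the two-consecutive-responses criterion are assumed
# for demonstration and do not describe any standardized test.

def threshold(concentrations, responses, needed=2):
    """Lowest concentration at which `needed` consecutive positive
    responses begin; None if the criterion is never met."""
    run = 0
    for i, positive in enumerate(responses):
        run = run + 1 if positive else 0
        if run == needed:
            return concentrations[i - (needed - 1)]
    return None

# Hypothetical sucrose dilution series in mol/l, weakest first
steps = [0.002, 0.004, 0.008, 0.016, 0.032, 0.064]

# Did the subject report tasting "something" at each step (detection) ...
detected = [False, False, True, True, True, True]
# ... and did the subject correctly name the quality "sweet" (recognition)?
recognized = [False, False, False, False, True, True]

print("Detection threshold:  ", threshold(steps, detected))    # 0.008
print("Recognition threshold:", threshold(steps, recognized))  # 0.032

The same bookkeeping applies whether the stimuli are swished for whole-mouth testing or applied to a specific taste field with droplets or soaked filter paper.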

Summary

The taste system is one of three chemosensory systems, together with olfaction and the
common chemical sense, committed to monitoring harmful and beneficial inhaled and
ingested substances. Taste cells are rapidly replaced, are innervated by four pairs of peripheral
nerves, and appear to have divergent central pathways in the brain. The taste system is
responsible for the appreciation of four basic taste qualities (sweet, sour, salty, and bitter) and,
debatably, metallic and umami (monosodium glutamate) tastes. Clinically significant taste
losses are rare, probably because of the redundancy and diversity of innervation. Distorted or
abnormal tastes, however, are more common and can be more distressing. Toxic agents that do not destroy the taste system, or halt transduction or transmission of taste information, nevertheless have ample opportunity to impede the perception of normal taste qualities.
Irregularities or obstacles can occur through one or more of the following: suboptimal tastant
transport, altered salivary composition, taste cell inflammation, blockage of taste cell ion
pathways, alterations in the taste cell membrane or receptor proteins, and peripheral or central
neurotoxicity. Alternatively, the taste system may be intact and functioning normally, but be
subjected to disagreeable sensory stimulation through small intraoral galvanic currents or the
perception of intraoral medications, drugs, pesticides or metal ions.

Smell

Written by ILO Content Manager

Three sensory systems are uniquely constructed to monitor contact with environmental
substances: olfaction (smell), taste (sweet, salty, sour, and bitter perception), and the common
chemical sense (detection of irritation or pungency). Because they require stimulation by
chemicals, they are termed “chemosensory” systems. Olfactory disorders consist of temporary or permanent, complete or partial smell loss (anosmia or hyposmia) and parosmias (perverted smells, or dysosmia, and phantom smells, or phantosmia) (Mott and Leopold 1991; Mott, Grushka and Sessle 1993). After chemical exposures, some individuals describe a heightened sensitivity to
chemical stimuli (hyperosmia). Flavour is the sensory experience generated by the interaction
of the smell, taste and irritating components of food and beverages, as well as texture and
temperature. Because most flavour is derived from the smell, or aroma, of ingestants, damage
to the smell system is often reported as a problem with “taste”.

Chemosensory complaints are frequent in occupational settings and may result from a normal
sensory system’s perceiving environmental chemicals. Conversely, they may also indicate an
injured system: requisite contact with chemical substances renders these sensory systems
uniquely vulnerable to damage. In the occupational setting, these systems can also be
damaged by trauma to the head and agents other than chemicals (e.g., radiation). Pollutant-
related environmental odours can exacerbate underlying medical conditions (e.g., asthma,
rhinitis), precipitate development of odour aversions, or cause a stress-related type of illness.
Malodors have been demonstrated to decrease complex task performance (Shusterman 1992).

Early identification of workers with olfactory loss is essential. Certain occupations, such as
the culinary arts, wine making and the perfume industry, require a good sense of smell as a
prerequisite. Many other occupations require normal olfaction for either good job
performance or self-protection. For example, parents or day care workers generally rely on
smell to determine children’s hygiene needs. Firefighters need to detect chemicals and smoke.
Any worker with ongoing exposure to chemicals is at increased risk if olfactory ability is
poor.

Olfaction provides an early warning system to many harmful environmental substances. Once
this ability is lost, workers may not be aware of dangerous exposures until the concentration
of the agent is high enough to be irritating, damaging to respiratory tissues or lethal. Prompt
detection can prevent further olfactory damage through treatment of inflammation and
reduction of subsequent exposure. Lastly, if loss is permanent and severe, it may be
considered a disability requiring new job training and/or compensation.

Anatomy and Physiology

Olfaction

The primary olfactory receptors are located in patches of tissue, termed olfactory
neuroepithelium, at the most superior portion of the nasal cavities (Mott and Leopold 1991).
Unlike other sensory systems, the receptor is the nerve. One portion of an olfactory receptor
cell is sent to the surface of the nasal lining, and the other end connects directly via a long
axon to one of two olfactory bulbs in the brain. From here, the information travels to many
other areas of the brain. Odorants are volatile chemicals that must contact the olfactory
receptor for smell perception to occur. Odorant molecules are trapped by and then diffuse
through mucus to attach to cilia at the ends of the olfactory receptor cells. It is not yet known
how we are able to detect more than ten thousand odorants, discriminate among as many as 5,000 of them, and judge varying odorant intensities. Recently, a multigene family was discovered
that codes for odorant receptors on primary olfactory nerves (Ressler, Sullivan and Buck
1994). This has allowed investigation of how odours are detected and how the smell system is
organized. Each neuron may respond broadly to high concentrations of a variety of odorants,
but will respond to only one or a few odorants at low concentrations. Once stimulated, surface
receptor proteins activate intracellular processes that translate sensory information into an
electrical signal (transduction). It is not known what terminates the sensory signal despite
continued odorant exposure. Soluble odorant binding proteins have been found, but their role
is undetermined. Proteins that metabolize odorants may be involved or carrier proteins may
transport odorants either away from the olfactory cilia or toward a catalytic site within the
olfactory cells.

The portions of the olfactory receptors connecting directly to the brain are fine nerve
filaments that travel through a plate of bone. The location and delicate structure of these
filaments render them vulnerable to shear injury from blows to the head. Also, because the
olfactory receptor is a nerve, physically contacts odorants, and connects directly to the brain,
substances entering the olfactory cells can travel along the axon into the brain. Because of
continued exposure to agents damaging to the olfactory receptor cells, olfactory ability might
be lost early in the lifespan if it were not for a critical attribute: olfactory receptor nerves are
capable of regeneration and may be replaced, provided the tissue has not been completely
destroyed. If the damage to the system is more centrally located, however, the nerves can not
be restored.

Common chemical sense

The common chemical sense is initiated by stimulation of mucosal, multiple, free nerve
endings of the fifth (trigeminal) cranial nerve. It perceives the irritating properties of inhaled
substances and triggers reflexes designed to limit exposure to dangerous agents: sneezing,
mucus secretion, reduction of breathing rate or even breath-holding. Strong warning cues
compel removal from the irritation as soon as possible. Although the pungency of substances varies, generally the odour of a substance is detected before irritation becomes apparent (Ruth
1986). Once irritation is detected, however, small increases in concentration enhance irritation
more than odorant appreciation. Pungency may be evoked through either physical or chemical
interactions with receptors (Cometto-Muñiz and Cain 1991). The warning properties of gases
or vapours tend to correlate with their water solubilities (Shusterman 1992). Anosmics appear
to require higher concentrations of pungent chemicals for detection (Cometto-Muñiz and Cain
1994), but thresholds of detection are not elevated as one ages (Stevens and Cain 1986).

Tolerance and adaptation

Perception of chemicals can be altered by previous encounters. Tolerance develops when exposure reduces the response to subsequent exposures. Adaptation occurs when a constant or
rapidly repeated stimulus elicits a diminishing response. For example, short-term solvent
exposure markedly, but temporarily, reduces solvent detection ability (Gagnon, Mergler and
Lapare 1994). Adaptation can also occur with prolonged exposure at low concentrations or, with some chemicals, rapidly when extremely high concentrations are present. The latter can lead to rapid and reversible olfactory “paralysis”. Nasal pungency
typically shows less adaptation and development of tolerance than olfactory sensations.
Mixtures of chemicals can also alter perceived intensities. Generally, when odorants are
mixed, perceived odorant intensity is less than would be expected from adding the two
intensities together (hypoadditivity). Nasal pungency, however, generally shows additivity
with exposure to multiple chemicals, and summation of irritation over time (Cometto-Muñiz
and Cain 1994). With odorants and irritants in the same mixture, the odour is always
perceived as less intense. Because of tolerance, adaptation, and hypoadditivity, one must be
careful to avoid relying on these sensory systems to gauge the concentration of chemicals in
the environment.
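
As a rough numerical illustration of hypoadditivity versus additivity (a sketch only: the vector-summation rule and every number below are assumptions introduced here, not values from the text), the perceived odour intensity of a two-component mixture can be modelled as falling well short of the sum of the component intensities, while irritation is treated as roughly additive.

import math

# Toy model of hypoadditivity for odour mixtures versus additivity for
# nasal pungency. The vector-summation rule and all numbers are assumed
# for demonstration only.

def mixture_odour_intensity(i1, i2, alpha_deg=115.0):
    """Resultant of two intensity 'vectors' separated by an assumed
    interaction angle; with alpha greater than 90 degrees the result is
    always smaller than i1 + i2 (hypoadditivity)."""
    a = math.radians(alpha_deg)
    return math.sqrt(i1 ** 2 + i2 ** 2 + 2 * i1 * i2 * math.cos(a))

def mixture_irritation(i1, i2):
    """Pungency treated as roughly additive across components."""
    return i1 + i2

# Perceived intensities of each component presented alone (arbitrary units)
print(round(mixture_odour_intensity(6.0, 4.0), 1))  # about 5.6, well below 10
print(mixture_irritation(3.0, 2.5))                 # 5.5, simple summation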

Olfactory Disorders

General concepts

Olfaction is disrupted when odorants can not reach olfactory receptors, or when olfactory
tissue is damaged. Swelling within the nose from rhinitis, sinusitis or polyps can preclude
odorant accessibility. Damage can occur with: inflammation in the nasal cavities; destruction
of the olfactory neuroepithelium by various agents; trauma to the head; and transmittal of
agents via the olfactory nerves to the brain with subsequent injury to the smell portion of the
central nervous system. Occupational settings contain varying amounts of potentially
damaging agents and conditions (Amoore 1986; Cometto-Muñiz and Cain 1991; Shusterman
1992; Schiffman and Nagle 1992). Recently published data from 712,000 National Geographic Smell Survey respondents suggest that factory work impairs smell; male and
female factory workers reported poorer senses of smell and demonstrated decreased olfaction
on testing (Corwin, Loury and Gilbert 1995). Specifically, chemical exposures and head
trauma were more frequently reported than by workers in other occupational settings.

When an occupational olfactory disorder is suspected, identification of the offending agent can be difficult. Current knowledge is largely derived from small series and case reports. Importantly, few studies mention examination of the nose and sinuses; most rely on patient history for olfactory status rather than testing of the olfactory system. An additional
complicating factor is the high prevalence of non-occupationally related olfactory
disturbances in the general population, mostly due to viral infections, allergies, nasal polyps,
sinusitis or head trauma. Some of these, however, are also more common in the work
environment and will be discussed in detail here.

Rhinitis, sinusitis and polyposis

Individuals with olfactory disturbance must first be assessed for rhinitis, nasal polyps and
sinusitis. It is estimated that 20% of the United States population, for example, has upper
airway allergies. Environmental exposures can be unrelated, cause inflammation or
exacerbate an underlying disorder. Rhinitis is associated with olfactory loss in occupational
settings (Welch, Birchall and Stafford 1995). Some chemicals, such as isocyanates, acid
anhydrides, platinum salts and reactive dyes (Coleman, Holliday and Dearman 1994), and
metals (Nemery 1990) can be allergenic. There is also considerable evidence that chemicals
and particles increase sensitivity to nonchemical allergens (Rusznak, Devalia and Davies
1994). Toxic agents alter the permeability of the nasal mucosa and allow greater penetration
of allergens and enhanced symptoms, making it difficult to discriminate between rhinitis due
to allergies and that due to exposure to toxic or particulate substances. If inflammation and/or
obstruction in the nose or sinuses is demonstrated, return of normal olfactory function is
possible with treatment. Options include topical corticosteroid sprays, systemic antihistamines
and decongestants, antibiotics and polypectomy/sinus surgery. If inflammation or obstruction
is not present or treatment does not secure improvement in olfactory function, olfactory tissue
may have sustained permanent damage. Irrespective of cause, the individual must be
protected from future contact with the offending substance or further injury to the olfactory
system could occur.

Head trauma

Head trauma can alter olfaction through (1) nasal injury with scarring of the olfactory
neuroepithelium, (2) nasal injury with mechanical obstruction to odours, (3) shearing of the
olfactory filaments, and (4) bruising or destruction of the part of the brain responsible for
smell sensations (Mott and Leopold 1991). Although trauma is a risk in many occupational
settings (Corwin, Loury and Gilbert 1995), exposure to certain chemicals can increase this
risk.

Smell loss occurs in 5% to 30% of head trauma patients and may ensue without any other
nervous system abnormalities. Nasal obstruction to odorants may be surgically correctable,
unless significant intranasal scarring has occurred. Otherwise, no treatment is available for
smell disorders resulting from head trauma, although spontaneous improvement is possible.
Rapid initial improvement may occur as swelling subsides in the area of injury. If olfactory
filaments have been sheared, regrowth and gradual improvement of smell may also occur.
Although this occurs in animals within 60 days, improvements in humans have been reported
as long as seven years after injury. Parosmias developing as the patient recovers from injury
may indicate regrowth of olfactory tissue and herald return of some normal smell function.
Parosmias occurring at the time of injury or shortly thereafter are more likely due to brain
tissue damage. Damage to the brain will not repair itself and improvement in smell ability
would not be expected. Injury to the frontal lobe, the portion of the brain integral to emotion
and thinking, may be more frequent in head trauma patients with smell loss. The resultant
changes in socialization or thinking patterns may be subtle, though harmful to family and
career. Formal neuropsychiatric testing and treatment may, therefore, be indicated in some
patients.

Environmental agents

Environmental agents can gain access to the olfactory system through either the bloodstream
or inspired air and have been reported to cause smell loss, parosmia and hyperosmia.
Responsible agents include metallic compounds, metal dusts, nonmetallic inorganic
compounds, organic compounds, wood dusts and substances present in various occupational
environments, such as metallurgical and manufacturing processes (Amoore 1986; Schiffman and Nagle 1992) (table 1). Injury can occur after both acute and chronic exposures and can be
either reversible or irreversible, depending on the interaction between host susceptibility and
the damaging agent. Important substance attributes include bioactivity, concentration, irritant
capacity, length of exposure, rate of clearance and potential synergism with other chemicals.
Host susceptibility varies with genetic background and age. There are gender differences in
olfaction, hormonal modulation of odorant metabolism and differences in specific anosmias.
Tobacco use, allergies, asthma, nutritional status, pre-existing disease (e.g., Sjogren’s
syndrome), physical exertion at time of exposure, nasal airflow patterns and possibly
psychosocial factors influence individual differences (Brooks 1994). Resistance of the
peripheral tissue to injury and presence of functioning olfactory nerves can alter
susceptibility. For example, acute, severe exposure could decimate the olfactory
neuroepithelium, effectively preventing spread of the toxin centrally. Conversely, long-term,
low-level exposure might allow preservation of functioning peripheral tissue and slow but steady transit of damaging substances into the brain. Cadmium, for example, has a half-life
of 15 to 30 years in humans, and its effects might not be apparent until years after exposure
(Hastings 1990).

Table 1. Agents/processes associated with olfactory abnormalities

Agent Smell disturbance Reference


Acetaldehyde H 2
Acetates, butyl and ethyl H or A 3
Acetic acid H 2
Acetone H, P 2
Acetophenone Low normal 2
Acid chloride H 2
Acids (organic and inorganic) H 2
Acrylate, methacrylate vapours Decreased odour ID 1
Alum H 2
Aluminium fumes H 2
Ammonia H 1, 2
Anginine H 1
Arsenic H 2
Ashes (incinerator) H 4
Asphalt (oxidized) Low normal 2
Benzaldehyde H 2
Benzene Below average 2
Benzine H/A 1
Benzoic acid H 2
Benzol H/A 1
Blasting powder H 2
Bromine H 2
Butyl acetate H/A 1
Butylene glycol H 2
Cadmium compounds, dust, oxides H/A 1; Bar-Sela et al. 1992; Rose, Heywood and Costanzo 1992
Carbon disulphide H/A 1
Carbon monoxide A 2
Carbon tetrachloride H 2
Cement H 4
Chalk dust H 1
Chestnut wood dust A 1
Chlorine H 2
Chloromethanes Low normal 2
Chlorovinylarsine chlorides H 2
Chromium (salts and plating) H 2 ;4
Chromate Olfactory disorder 1
Chromate salts A 2
Chromic acid H 2
Chromium fumes H 2
Cigarette smoking Decreased ID 1
Coal (coal-bunker) H 4
Coal tar fumes H 2
Coke H or A 4
Copper (and sulphuric acid) Olfactory disturbance Savov 1991
Copper arsenite 2
Copper fumes H 2
Cotton, knitting factory H 4
Creosote fumes H 5
Cutting oils (machining) Abnormal UPSIT 2
Cyanides Below average 2
H
Dichromates H 2
Ethyl acetate H/A 1
Ethyl ether H 2
Ethylene oxide Decreased smell Gosselin, Smith and Hodge 1984
Flax H 2
Flour, flour mill H 4
Fluorides H or A 3
Fluorine compounds H 2
Formaldehyde H 1, 2 ; Chia et al. 1992
Fragrances Below average 2
Furfural H 2
Grain H or A 4
Halogen compounds H 2
Hard woods A 2
Hydrazine H/A 1
Aromatic hydrocarbon solvent combinations (e.g., toluene, xylene, ethyl benzene) Decreased UPSIT, H 2; 5; Hotz et al. 1992
Hydrogen chloride H 2
Hydrogen cyanide A 2
Hydrogen fluoride H 1
Hydrogen selenide H/A
Hydrogen sulphide H or A 5; Guidotti 1994
Iodoform H 2
Iron carbonyl H 1
Isocyanates H 2
Lead H 4
Lime H 2
Lye H 2
Magnet production H 2
Manganese fumes H 2
Menthol H 2 ; Naus 1968
Mercury Low normal 2
N-Methylformimino-methyl ester A 2
Nickel dust, hydroxide, plating and H/A 1;4; Bar-Sela et al. 1992
refining A 2
Nickel hydroxide Low normal 2
Nickel plating A 2
Nickel refining (electrolytic) H 2
Nitric acid H 2
Nitro compounds H 2
Nitrogen dioxide
Oil of peppermint H/A 1
Organophosphates Garlic odour; H or A 3; 5
Osmium tetroxide H 2
Ozone Temporary H 3
Paint (lead) Low normal 2
Paint (solvent based) H or A Wieslander, Norbäck and Edling 1994
Paper, packing factory Possible H 4
Paprika H 2
Pavinol (sewing) Low normal 2
Pentachlorophenol A 2
Pepper and creosol mixture H/A 1
Peppermint H or A 3
Perfumes (concentrated) H 2
Pesticides 5
Petroleum H or A 3
Phenylenediamine H or A 2
Phosgene H 2
Phosphorous oxychloride H 1
Potash H/A 1
Printing H 2
Low normal
Rubber vulcanization H 2
Selenium compounds (volatile) H 2
Selenium dioxide H 2
Silicone dioxide H 4
Silver nitrate H 2
Silver plating Below normal 2
Solvents H, P, Low normal 1; Ahlström, Berglund and Berglund 1986; Schwartz et al. 1991; Bolla et al. 1995
Spices H
Steel production Low normal 4
Sulphur compounds H 2
Sulphur dioxide H 2
Sulphuric acid H 1; 2; Petersen and Gormsen 1991
Tanning H 2
Tetrabromoethane Parosmia, H or A 5
Tetrachloroethane H 2
Tin fumes H 2
Tobacco H 2; 4
Trichloroethane H 2
Trichloroethylene H/A 2
Vanadium fumes H 2
Varnishes H 2
Wastewater Low normal 2
Zinc (fumes, chromate) and production Low normal 2

H = hyposmia; A = anosmia; P = parosmia; ID = odour identification ability

1 = Mott and Leopold 1991. 2 = Amoore 1986. 3 = Schiffman and Nagle 1992. 4 = Naus
1985. 5 = Callendar et al. 1993.

Specific smell disturbances are as stated in the articles referenced.

Nasal passages are ventilated by 10,000 to 20,000 litres of air per day, containing varying
amounts of potentially harmful agents. The upper airways almost totally absorb or clear
highly reactive or soluble gases, and particles larger than 2 μm (Evans and Hastings 1992).
Fortunately, a number of mechanisms exist to protect against tissue damage. Nasal tissues are
enriched with blood vessels, nerves, specialized cells with cilia capable of synchronous
movement, and mucus-producing glands. Defensive functions include filtration and clearing
of particles, scrubbing of water soluble gases, and early identification of harmful agents
through olfaction and mucosal detection of irritants that can initiate an alarm and removal of
the individual from further exposure (Witek 1993). Low levels of chemicals are absorbed by
the mucus layer, swept away by functioning cilia (mucociliary clearance) and swallowed.
Chemicals can bind to proteins or be rapidly metabolized to less damaging products. Many
metabolizing enzymes reside in the nasal mucosa and olfactory tissues (Bonnefoi, Monticello
and Morgan 1991; Schiffman and Nagle 1992; Evans et al. 1995). Olfactory neuroepithelium,
for example, contains cytochrome P-450 enzymes which play a major role in the
detoxification of foreign substances (Gresham, Molgaard and Smith 1993). This system may
protect the primary olfactory cells and also detoxify substances that would otherwise enter the
central nervous system through olfactory nerves. There is also some evidence that intact
olfactory neuroepithelium can prevent invasion by some organisms (e.g., cryptococcus; see
Lima and Vital 1994). At the level of the olfactory bulb, there may also be protective
mechanisms preventing transport of toxic substances centrally. For example, it has been
recently shown that the olfactory bulb contains metallothioneins, proteins which have a
protective effect against toxins (Choudhuri et al. 1995).

Exceeding protective capacities can precipitate a worsening cycle of injury. For example, loss
of olfactory ability halts early warning of the hazard and allows continued exposure. Increase
in nasal blood flow and blood vessel permeability causes swelling and odorant obstruction.
Cilial function, necessary for both mucociliary clearance and normal smell, may be impaired.
Change in clearance will increase contact time between injurious agents and nasal mucosa.
Intranasal mucus abnormalities alter absorption of odorants or irritant molecules.
Overpowering the ability to metabolize toxins allows tissue damage, increased absorption of
toxins, and possibly enhanced systemic toxicity. Damaged epithelial tissue is more vulnerable
to subsequent exposures. There are also more direct effects on olfactory receptors. Toxins can
alter the turnover rate of olfactory receptor cells (normally 30 to 60 days), injure receptor cell
membrane lipids, or change the internal or external environment of the receptor cells.
Although regeneration can occur, damaged olfactory tissue can exhibit permanent changes of
atrophy or replacement of olfactory tissue with nonsensory tissue.

The olfactory nerves provide a direct connection to the central nervous system and may serve
as a route of entry for a variety of exogenous substances, including viruses, solvents and some
metals (Evans and Hastings 1992). This mechanism may contribute to some of the olfactory-
related dementias (Monteagudo, Cassidy and Folb 1989; Bonnefoi, Monticello and Morgan
1991) through, for example, transmittal of aluminium centrally. Intranasally, but not
intraperitoneally or intratracheally, applied cadmium can be detected in the ipsilateral olfactory
bulb (Evans and Hastings 1992). There is further evidence that substances may be
preferentially taken up by olfactory tissue irrespective of the site of initial exposure (e.g.,
systemic versus inhalation). Mercury, for example, has been found in high concentrations in
the olfactory brain region in subjects with dental amalgams (Siblerud 1990). On
electroencephalography, the olfactory bulb demonstrates sensitivity to many atmospheric
pollutants, such as acetone, benzene, ammonia, formaldehyde and ozone (Bokina et al. 1976).
Because of central nervous system effects of some hydrocarbon solvents, exposed individuals
might not readily recognize and distance themselves from the danger, thereby prolonging
exposure. Recently, Callender and colleagues (1993) obtained a 94% frequency of abnormal
SPECT scans, which assess regional cerebral blood flow, in subjects with neurotoxin
exposures and a high frequency of olfactory identification disorders. The location of
abnormalities on SPECT scanning was consistent with distribution of toxin through olfactory
pathways.

The site of injury within the olfactory system differs with various agents (Cometto-Muñiz and
Cain 1991). For example, ethyl acrylate and nitroethane selectively damage olfactory tissue
while the respiratory tissue within the nose is preserved (Miller et al. 1985). Formaldehyde
alters the consistency, and sulphuric acid the pH of nasal mucus. Many gases, cadmium salts,
dimethylamine and cigarette smoke alter ciliary function. Diethyl ether causes leakage of
some molecules from the junctions between cells (Schiffman and Nagle 1992). Solvents, such
as toluene, styrene and xylene change olfactory cilia; they also appear to be transmitted into
the brain by the olfactory receptor (Hotz et al. 1992). Hydrogen sulphide is not only irritating
to mucosa, but highly neurotoxic, effectively depriving cells of oxygen, and inducing rapid
olfactory nerve paralysis (Guidotti 1994). Nickel directly damages cell membranes and also
interferes with protective enzymes (Evans et al. 1995). Dissolved copper is thought to directly
interfere with different stages of transduction at the olfactory receptor level (Winberg et al.
1992). Mercuric chloride selectively distributes to olfactory tissue, and may interfere with
neuronal function through alteration of neurotransmitter levels (Lakshmana, Desiraju and
Raju 1993). After injection into the bloodstream, pesticides are taken up by nasal mucosa
(Brittebo, Hogman and Brandt 1987), and can cause nasal congestion. The garlic odour noted with organophosphorus pesticides, however, is not due to damaged tissue but to detection of butylmercaptan.

Although smoking can inflame the lining of the nose and reduce smell ability, it may also
confer protection from other damaging agents. Chemicals within the smoke may induce
microsomal cytochrome P450 enzyme systems (Gresham, Molgaard and Smith 1993), which
would accelerate metabolism of toxic chemicals before they can injure the olfactory
neuroepithelium. Conversely, some drugs, for example tricyclic antidepressants and
antimalarial drugs, can inhibit cytochrome P450.

Olfactory loss after exposure to wood and fibre board dusts (Innocenti et al. 1985;
Holmström, Rosén and Wilhelmsson 1991; Mott and Leopold 1991) may be due to diverse
mechanisms. Allergic and nonallergic rhinitis can result in obstruction to odorants or
inflammation. Mucosal changes can be severe; dysplasia has been documented (Boysen and Solberg 1982), and adenocarcinoma may result, especially in the area of the ethmoid sinuses
near the olfactory neuroepithelium. Carcinoma associated with hard woods may be related to
a high tannin content (Innocenti et al. 1985). Inability to effectively clear nasal mucus has
been reported and may be related to an increased frequency of colds (Andersen, Andersen and
Solgaard 1977); resultant viral infection could further damage the olfactory system. Olfactory
loss may also be due to chemicals associated with woodworking, including varnishes and
stains. Medium-density fibre board contains formaldehyde, a known respiratory tissue irritant
that impairs mucociliary clearance, causes olfactory loss, and is associated with a high
incidence of oral, nasal and pharyngeal cancer (Council on Scientific Affairs 1989), all of
which could contribute to an understanding of formaldehyde-induced olfactory losses.

Radiation therapy has been reported to cause olfactory abnormalities (Mott and Leopold
1991), but little information is available about occupational exposures. Rapidly regenerating
tissue, such as olfactory receptor cells, would be expected to be vulnerable. Mice exposed to
radiation in a spaceflight demonstrated smell tissue abnormalities, while the rest of the nasal
lining remained normal (Schiffman and Nagle 1992).

After chemical exposures, some individuals describe a heightened sensitivity to odorants. “Multiple chemical sensitivities” or “environmental illness” are labels used to describe
disorders typified by “hypersensitivity” to diverse environmental chemicals, often in low
concentrations (Cullen 1987; Miller 1992; Bell 1994). Thus far, however, lower thresholds to
odorants have not been demonstrated.

Non-occupational causes of olfactory problems

Ageing and smoking decrease olfactory ability. Upper respiratory viral damage, idiopathic
(“unknown”), head trauma, and diseases of the nose and sinuses appear to be the four leading
causes of smell problems in the United States (Mott and Leopold 1991) and must be
considered as part of the differential diagnosis in any individual presenting with possible
environmental exposures. Congenital inabilities to detect certain substances are common. For
example, 40 to 50% of the population can not detect androsterone, a steroid found in sweat.

Testing of chemosensation

Psychophysics is the measurement of a response to an applied sensory stimulus. “Threshold” tests, tests that determine the minimum concentration that can be reliably perceived, are
frequently used. Separate thresholds can be obtained for detection of odorants and
identification of odorants. Suprathreshold tests assess ability of the system to function at
levels above threshold and also provide useful information. Discrimination tasks, telling the
difference between substances, can elicit subtle changes in sensory ability. Identification tasks
may yield different results than threshold tasks in the same individual. For example, a person
with central nervous system injury may be able to detect odorants at usual threshold levels,
but may not be able to identify common odorants.
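
By way of illustration, the short sketch below scores a forced-choice odour identification task of the kind just described; the item list, answer key and cutoff scores are hypothetical placeholders, not the content or norms of any particular validated test.

# Illustrative sketch: scoring a forced-choice odour identification task.
# Items, answers and cutoffs are hypothetical; a real evaluation would use
# the normative data of the validated instrument actually administered.

def identification_score(answers, key):
    """Number of odorants correctly identified."""
    return sum(1 for given, correct in zip(answers, key) if given == correct)

def classify(score, normal_cutoff, anosmia_cutoff):
    """Crude three-way classification against assumed cutoffs."""
    if score >= normal_cutoff:
        return "within normal limits"
    if score <= anosmia_cutoff:
        return "consistent with anosmia"
    return "consistent with hyposmia"

key = ["banana", "smoke", "mint", "leather", "rose", "petrol"]
answers = ["banana", "smoke", "rose", "leather", "rose", "mint"]

score = identification_score(answers, key)
print(score, "of", len(key), "correct:",
      classify(score, normal_cutoff=5, anosmia_cutoff=1))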

Summary

The nasal passages are ventilated by 10,000 to 20,000 litres of air per day, which may be
contaminated by possibly hazardous materials in varying degrees. The olfactory system is
especially vulnerable to damage because of requisite direct contact with volatile chemicals for
odorant perception. Olfactory loss, tolerance and adaptation prevent recognition of the
proximity of dangerous chemicals and may contribute to local injury or systemic toxicity.
Early identification of olfactory disorders can prompt protective strategies, ensure appropriate
treatment and prevent further damage. Occupational smell disorders can manifest themselves
as temporary or permanent anosmia or hyposmia, as well as distorted smell perception.
Identifiable causes to be considered in the occupational setting include rhinitis, sinusitis, head
trauma, radiation exposure and tissue injury from metallic compounds, metal dusts,
nonmetallic inorganic compounds, organic compounds, wood dusts, and substances present in
metallurgical and manufacturing processes. Substances differ in their site of interference with
the olfactory system. Powerful mechanisms for trapping, removing and detoxifying foreign
nasal substances serve to protect olfactory function and also prevent spread of damaging
agents into the brain from the olfactory system. Exceeding protective capacities can
precipitate a worsening cycle of injury, ultimately leading to greater severity of impairment
and extension of sites of injury, and converting temporary reversible effects into permanent
damage.

Cutaneous Receptors

Written by ILO Content Manager

Cutaneous sensitivity shares the main elements of all the basic senses. Properties of the
external world, such as colour, sound, or vibration, are received by specialized nerve cell
endings called sensory receptors, which convert external data into nervous impulses. These
signals are then conveyed to the central nervous system, where they become the basis for
interpreting the world around us.
It is useful to recognize three essential points about these processes. First, energy, and
changes in energy levels, can be perceived only by a sense organ capable of detecting the
specific type of energy in question. (This is why microwaves, x rays, and ultraviolet light are
all dangerous; we are not equipped to detect them, so that even at lethal levels they are not
perceived.) Second, our perceptions are necessarily imperfect shadows of reality, as our
central nervous system is limited to reconstructing an incomplete image from the signals
conveyed by its sensory receptors. Third, our sensory systems provide us with more accurate
information about changes in our environment than about static conditions. We are well-
equipped with sensory receptors sensitive to flickering lights, for example, or to the tiny
fluctuations of temperature provoked by a slight breeze; we are less well-equipped to receive
information about a steady temperature, say, or a constant pressure on the skin.

Traditionally the skin senses are divided into two categories: cutaneous and deep. While deep
sensitivity relies on receptors located in muscle, tendons, joints, and the periosteum
(membrane surrounding the bones), cutaneous sensitivity, with which we are concerned here,
deals with information received by receptors in the skin: specifically, the various classes of
cutaneous receptors that are located in or near the junction of the dermis and the epidermis.

All sensory nerves linking cutaneous receptors to the central nervous system have roughly the
same structure. The cell’s large body resides in a cluster of other nerve cell bodies, called a
ganglion, located near the spinal cord and connected to it by a narrow branch of the cell’s
trunk, called its axon. Most nerve cells, or neurons, that originate at the spinal cord send
axons to bones, muscle, joints, or, in the case of cutaneous sensitivity, to the skin. Just like an
insulated wire, each axon is covered along its course and at its endings with protective layers
of cells known as Schwann cells. These Schwann cells produce a substance known as myelin,
which coats the axon like a sheath. At intervals along the way are tiny breaks in the myelin,
known as nodes of Ranvier. Finally, at the end of the axon are found the components that
specialize in receiving and retransmitting information about the external environment: the
sensory receptors (Mountcastle 1974).

The different classes of cutaneous receptors, like all sensory receptors, are defined in two
ways: by their anatomical structures, and by the type of electrical signals they send along their
nerve fibres. Distinctly structured receptors are usually named after their discoverers. The
relatively few classes of sensory receptors found in the skin can be divided into three main
categories: mechanoreceptors, thermal receptors, and nociceptors.

All of these receptors can convey information about a particular stimulus only after they have
first encoded it in a type of electrochemical neural language. These neural codes use varying
frequencies and patterns of nerve impulses that scientists have only just begun to decipher.
Indeed, an important branch of neurophysiological research is devoted entirely to the study of
sensory receptors and the ways in which they translate energy states in the environment into
neural codes. Once the codes are generated, they are conveyed centrally along afferent fibres,
the nerve cells that serve receptors by conveying the signals to the central nervous system.

The messages produced by receptors can be subdivided on the basis of the response given to a
continuous, unvarying stimulation: slowly adapting receptors send electrochemical impulses
to the central nervous system for the duration of a constant stimulus, whereas rapidly adapting
receptors gradually reduce their discharges in the presence of a steady stimulus until they
reach a low baseline level or cease entirely, thereupon ceasing to inform the central nervous
system about the continuing presence of the stimulus.
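
The contrast can be pictured with a small toy model (a sketch only; the gains, decay constant and step stimulus are arbitrary assumptions, and real receptor dynamics are far richer): the slowly adapting unit keeps discharging at a rate set by the stimulus amplitude, while the rapidly adapting unit responds mainly to changes and falls back toward silence during a steady stimulus.

# Toy model contrasting slowly and rapidly adapting receptor discharge.
# Gains, the decay constant and the step stimulus are arbitrary assumptions
# chosen only to make the qualitative difference visible.

def slowly_adapting(stimulus, gain=10.0):
    """Firing rate tracks stimulus amplitude for as long as it lasts."""
    return [gain * s for s in stimulus]

def rapidly_adapting(stimulus, gain=40.0, decay=0.5):
    """Firing rate is driven by changes in the stimulus and then decays."""
    rates, rate, previous = [], 0.0, 0.0
    for s in stimulus:
        rate = decay * rate + gain * abs(s - previous)  # respond to change
        previous = s
        rates.append(round(rate, 1))
    return rates

# A step of skin indentation: off for three samples, on for five, off for two
step = [0, 0, 0, 1, 1, 1, 1, 1, 0, 0]

print("slowly adapting: ", slowly_adapting(step))   # sustained while the step is on
print("rapidly adapting:", rapidly_adapting(step))  # bursts at onset and offset, then fades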
The distinctly different sensations of pain, warmth, cold, pressure, and vibration are thus
produced by activity in distinct classes of sensory receptors and their associated nerve fibres.
The terms “flutter” and “vibration,” for example, are used to distinguish two slightly different
vibratory sensations encoded by two different classes of vibration-sensitive receptors
(Mountcastle et al. 1967). The three important categories of pain sensation known as pricking
pain, burning pain, and aching pain have each been associated with a distinct class of
nociceptive afferent fibre. This is not to say, however, that a specific sensation necessarily
involves only one class of receptor; more than one receptor class may contribute to a given
sensation, and, in fact, sensations may differ depending on the relative contribution of
different receptor classes (Sinclair 1981).

The preceding summary is based on the specificity hypothesis of cutaneous sensory function,
first formulated by a German physician named Von Frey in 1906. Although at least two other
theories of equal or perhaps greater popularity have been proposed during the past century,
Von Frey’s hypothesis has now been strongly supported by factual evidence.

Receptors that Respond to Constant Skin Pressure

In the hand, relatively large myelinated fibres (5 to 15 μm in diameter) emerge from a subcutaneous nerve network called the subpapillary nerve plexus and end in a spray of nerve
terminals at the junction of the dermis and the epidermis (figure 1). In hairy skin, these nerve
endings culminate in visible surface structures known as touch domes; in glabrous, or hairless,
skin, the nerve endings are found at the base of skin ridges (such as those forming the
fingerprints). There, in the touch dome, each nerve fibre tip, or neurite, is enclosed by a
specialized epithelial cell known as a Merkel cell (see figures 2 and 3).

Figure 1. A schematic illustration of a cross-section of the skin


Figure 2. The touch dome on each raised region of skin contains 30 to 70 Merkel cells.

Figure 3. At a higher magnification available with the electron microscope, the Merkel cell, a
specialized epithelial cell, is seen to be attached to the basement membrane that separates the
epidermis from the dermis.

The Merkel cell neurite complex transduces mechanical energy into nerve impulses. While
little is known about the cell’s role or about its mechanism of transduction, it has been
identified as a slowly adapting receptor. This means that pressure on a touch dome containing
Merkel cells causes the receptors to produce nerve impulses for the duration of the stimulus.
These impulses rise in frequency in proportion to the intensity of the stimulus, thereby
informing the brain of the duration and magnitude of pressure on the skin.

Like the Merkel cell, a second slowly adapting receptor also serves the skin by signalling the
magnitude and duration of steady skin pressures. Visible only through a microscope, this
receptor, known as the Ruffini receptor, consists of a group of neurites emerging from a
myelinated fibre and encapsulated by connective tissue cells. Within the capsule structure are
fibres that apparently transmit local skin distortions to the neurites, which in turn produce the
messages sent along the neural highway to the central nervous system. Pressure on the skin
causes a sustained discharge of nerve impulses; as with the Merkel cell, the frequency of
nerve impulses is proportional to the intensity of the stimulus.
Despite their similarities, there is one outstanding difference between Merkel cells and Ruffini
receptors. Whereas sensation results when Ruffini receptors are stimulated, stimulation of
touch domes housing Merkel cells produces no conscious sensation; the touch dome is thus a
mystery receptor, for its actual role in neural function remains unknown. Ruffini receptors,
then, are believed to be the only receptors capable of providing the neural signals necessary
for the sensory experience of pressure, or constant touch. In addition, it has been shown that
the slowly adapting Ruffini receptors account for the ability of humans to rate cutaneous
pressure on a scale of intensity.

Receptors that Respond to Vibration and Skin Movement

In contrast with slowly adapting mechanoreceptors, rapidly adapting receptors remain silent
during sustained skin indentation. They are, however, well-suited to signal vibration and skin
movement. Two general categories are noted: those in hairy skin, which are associated with
individual hairs; and those which form corpuscular endings in glabrous, or hairless, skin.

Receptors serving hairs

A typical hair is enveloped by a network of nerve terminals branching from five to nine large
myelinated axons (figure 4). In primates, these terminals fall into three categories: lanceolate
endings, spindle-like terminals, and papillary endings. All three are rapidly adapting, such that
a steady deflection of the hair causes nerve impulses only while movement occurs. Thus,
these receptors are exquisitely sensitive to moving or vibratory stimuli, but provide little or no
information about pressure, or constant touch.

Figure 4. The shafts of hairs are a platform for nerve terminals that detect movements.
Lanceolate endings arise from a heavily myelinated fibre that forms a network around the
hair. The terminal neurites lose their usual coverage of Schwann cells and work their way
among the cells at the base of the hair.

Spindle-like terminals are formed by axon terminals surrounded by Schwann cells. The
terminals ascend to the sloping hair shaft and end in a semicircular cluster just below a
sebaceous, or oil-producing, gland. Papillary endings differ from spindle-like terminals
because instead of ending on the hair shaft, they terminate as free nerve endings around the
orifice of the hair.

There are, presumably, functional differences among the receptor types found on hairs. This
can be inferred in part from structural differences in the way the nerves end on the hair shaft
and in part from differences in the diameter of axons, as axons of different diameters connect
to different central relay regions. Still, the functions of receptors in hairy skin remain an area
for further study.

Receptors in glabrous skin

The correlation of a receptor’s anatomical structure with the neural signals it generates is most
pronounced in large and easily manipulable receptors with corpuscular, or encapsulated,
endings. Particularly well understood are the pacinian and Meissner corpuscles, which, like
the nerve endings in hairs discussed above, convey sensations of vibration.

The pacinian corpuscle is large enough to be seen with the naked eye, making it easy to link
the receptor with a specific neural response. Located in the dermis, usually around tendons or
joints, it is an onion-like structure, measuring 0.5 × 1.0 mm. It is served by one of the body’s
largest afferent fibres, having a diameter of 8 to 13 μm and conducting at 50 to 80 metres per
second. Its anatomy, well-studied by both light and electron microscopy, is well known.

The principal component of the corpuscle is an outer core formed of cellular material
enclosing fluid-filled spaces. The outer core itself is then surrounded by a capsule that is
penetrated by a central canal and a capillary network. Passing through the canal is a single
myelinated nerve fibre 7 to 11 μm in diameter, which becomes a long, nonmyelinated nerve
terminal that probes deep into the centre of the corpuscle. The terminal axon is elliptical, with
branch-like processes.

The pacinian corpuscle is a rapidly adapting receptor. When subjected to sustained pressure, it
thus produces an impulse only at the beginning and the end of the stimulus. It responds to
high-frequency vibrations (80 to 400 Hz) and is most sensitive to vibrations around 250 Hz.
Often, these receptors respond to vibrations transmitted along bones and tendons, and because
of their extreme sensitivity, they may be activated by as little as a puff of air on the hand
(Martin 1985).

In addition to the pacinian corpuscle, there is another rapidly adapting receptor in glabrous
skin. Most researchers believe it to be the Meissner corpuscle, located in the dermal papillae
of the skin. Responsive to low-frequency vibrations of 2 to 40 Hz, this receptor consists of the
terminal branches of a medium-sized myelinated nerve fibre enveloped in one or several
layers of what appear to be modified Schwann cells, called laminar cells. The receptor’s
neurites and laminar cells may connect to a basal cell in the epidermis (figure 5).

Figure 5. The Meissner corpuscle is a loosely encapsulated sensory receptor in the dermal
papillae of glabrous skin.

If the Meissner corpuscle is selectively inactivated by the injection of a local anaesthetic
through the skin, the sense of flutter or low-frequency vibration is lost. This suggests that it
functionally complements the high frequency capacity of the pacinian corpuscles. Together,
these two receptors provide neural signals sufficient to account for human sensibility to a full
range of vibrations (Mountcastle et al. 1967).

Cutaneous Receptors Associated with Free Nerve Endings

Many still unidentifiable myelinated and unmyelinated fibres are found in the dermis. A large
number are only passing through, on their way to skin, muscles, or periosteum, while others
(both myelinated and unmyelinated) appear to end in the dermis. With a few exceptions, such
as the pacinian corpuscle, most fibres in the dermis appear to end in poorly defined ways or
simply as free nerve endings.

While more anatomical study is needed to differentiate these ill-defined endings,
physiological research has clearly shown that these fibres encode a variety of environmental
events. For example, free nerve endings found at the junction between the dermis and
epidermis are responsible for encoding the environmental stimuli that will be interpreted as
cold, warmth, heat, pain, itch, and tickle. It is not yet known which of these different classes
of small fibres convey particular sensations.

The apparent anatomical similarity of these free nerve endings is probably due to the
limitations of our investigative techniques, since structural differences among free nerve
endings are slowly coming to light. For example, in glabrous skin, two different terminal
modes of free nerve endings have been distinguished: a thick, short pattern and a long, thin
one. Studies of human hairy skin have demonstrated histochemically recognizable nerve
endings that terminate at the dermal-epidermal junction: the penicillate and papillary endings.
The former arise from unmyelinated fibres and form a network of endings; in contrast, the
latter arise from myelinated fibres and end around the hair orifices, as mentioned earlier.
Presumably, these structural disparities correspond to functional differences.

Although it is not yet possible to assign specific functions to individual structural entities, it is
clear from physiological experiments that there exist functionally different categories of free
nerve endings. One small myelinated fibre has been found to respond to cold in humans.
Another unmyelinated fibre serving free nerve endings responds to warmth. How one class of
free nerve endings can respond selectively to a drop in temperature, while a rise in skin
temperature provokes another class to signal warmth, is unknown. Studies show that
activation of one small fibre with a free ending may be responsible for itching or tickling
sensations, while there are believed to be two classes of small fibres specifically sensitive to
noxious mechanical and noxious chemical or thermal stimuli, providing the neural basis for
pricking and burning pain (Keele 1964).

The definitive correlation between anatomy and physiological response awaits the
development of more advanced techniques. This is one of the major stumbling blocks in the
management of disorders such as causalgia, paraesthesia, and hyperpathia, which continue to
present a dilemma to the physician.

Peripheral Nerve Injury

Neural function can be divided into two categories: sensory and motor. Peripheral nerve
injury, usually resulting from the crushing or severing of a nerve, can impair either function or
both, depending on the types of fibres in the damaged nerve. Certain aspects of motor loss
tend to be misinterpreted or overlooked, as these signals do not go to muscles but rather affect
autonomic vascular control, temperature regulation, the nature and thickness of the epidermis,
and the condition of cutaneous mechano-receptors. The loss of motor innervation will not be
discussed here, nor will the loss of innervation affecting senses other than those responsible
for cutaneous sensation.

The loss of sensory innervation to the skin creates a vulnerability to further injury, as it leaves
an anaesthetic surface that is incapable of signalling potentially harmful stimuli. Once injured,
anaesthetized skin surfaces are slow to heal, perhaps in part on account of the lack of
autonomic innervation that normally regulates such key factors as temperature regulation and
cellular nutrition.

Over a period of several weeks, denervated cutaneous sensory receptors begin to atrophy, a
process which is easy to observe in large encapsulated receptors such as pacinian and
Meissner corpuscles. If regeneration of the axons can occur, recovery of function may follow,
but the quality of the recovered function will depend upon the nature of the original injury and
upon the duration of denervation (MacKinnon and Dellon 1988).

Recovery following a nerve crush is more rapid, much more complete and more functional
than is recovery after a nerve is severed. Two factors explain the favourable prognosis for a
nerve crush. First, more axons may again achieve contact with the skin than after a
transection; second, the connections are guided back to their original site by Schwann cells
and linings known as basement membranes, both of which remain intact in a crushed nerve,
whereas after a nerve transection the nerves often travel to incorrect regions of the skin
surface by following the wrong Schwann cell paths. The latter situation results in distorted
spatial information being sent to the somatosensory cortex of the brain. In both cases,
however, regenerating axons appear capable of finding their way back to the same class of
sensory receptors that they previously served.

The reinnervation of a cutaneous receptor is a gradual process. As the growing axon reaches
the skin surface, receptive fields are smaller than normal, while the threshold is higher. These
receptive points expand with time and gradually coalesce into larger fields. Sensitivity to
mechanical stimuli becomes greater and often approaches the sensitivity of normal sensory
receptors of that class. Studies using the stimuli of constant touch, moving touch, and
vibration have shown that the sensory modalities attributed to different types of receptors
return to anaesthetic areas at different rates.

Viewed under a microscope, denervated glabrous skin is seen to be thinner than normal,
having flattened epidermal ridges and fewer layers of cells. This confirms that nerves have a
trophic, or nutritional, influence on skin. Soon after innervation returns, the dermal ridges
become better developed, the epidermis becomes thicker, and axons can be found penetrating
the basement membrane. As the axon comes back to the Meissner corpuscle, the corpuscle
begins to increase in size, and the previously flattened, atrophic structure returns to its original
form. If the denervation has been of long duration, a new corpuscle may form adjacent to the
original atrophic skeleton, which remains denervated (Dellon 1981).

As can be seen, an understanding of the consequences of peripheral nerve injury requires
knowledge of normal function as well as of the degrees of functional recovery. While this
information is available for certain nerve cells, others require further investigation, leaving a
number of gaps in our understanding of the role of cutaneous nerves in health and disease.

Sensory Systems References
Adler, FH. 1992. Physiology of the Eye: Clinical Application. St. Louis: Mosby Year Book.

Adrian, WK. 1993. Visual Performance, Acuity and Age: Lux Europa Proceedings of the
VIIth European Lighting Conference. London: CIBSE.

Ahlström, R, B Berglund, and U Berglund. 1986. Impaired odor perception in tank cleaners.
Scand J Work Environ Health 12:574-581.

Amoore, JE. 1986. Effects of chemical exposure on olfaction in humans. In Toxicology of the
Nasal Passages, edited by CS Barrow. Washington, DC: Hemisphere Publishing.

Andersen, HC, I Andersen, and J Solgard. 1977. Nasal cancers, symptoms and upper airway
function in woodworkers. Br J Ind Med 34:201-207.

Axéll, T, K Nilner, and B Nilsson. 1983. Clinical evaluation of patients referred with
symptoms related to oral galvanism. Scand Dent J 7:169-178.

Ballantyne, JC and JM Ajodhia. 1984. Iatrogenic dizziness. In Vertigo, edited by MR Dix and
JD Hood. Chichester: Wiley.

Bar-Sela, S, M Levy, JB Westin, R Laster, and ED Richter. 1992. Medical findings in nickel-
cadmium battery workers. Israel J Med Sci 28:578-583.

Bedwal, RS, N Nair, and MP Sharma. 1993. Selenium-its biological perspectives. Med
Hypoth 41:150-159.

Bell, IR. 1994. White paper: Neuropsychiatric aspects of sensitivity to low-level chemicals: A
neural sensitization model. Toxicol Ind Health 10:277-312.

Besser, R, G Krämer, R Thümler, J Bohl, L Gutmann, and HC Hopf. 1987. Acute trimethyltin
limbic cerebellar syndrome. Neurology 37:945-950.

Beyts, JP. 1987. Vestibular rehabilitation. In Adult Audiology, Scott-Brown’s
Otolaryngology, edited by D Stephens. London: Butterworths.

Blanc, PD, HA Boushey, H Wong, SF Wintermeyer and MS Bernstein. 1993. Cytokines in
metal fume fever. Am Rev Respir Dis 147:134-138.

Blount, BW. 1990. Two types of metal fume fever: mild vs. serious. Mil Med 155(8):372-377.

Bokina, AI, ND Eksler, and AD Semenenko. 1976. Investigation of the mechanism of action
of atmospheric pollutants on the central nervous system and comparative evaluation of
methods of study. Environ Health Persp 13:37-42.

Bolla, KI, BS Schwartz, and W Stewart. 1995. Comparison of neurobehavioral function in
workers exposed to a mixture of organic and inorganic lead and in workers exposed to
solvents. Am J Ind Med 27:231-246.

Bonnefoi, M, TM Monticello, and KT Morgan. 1991. Toxic and neoplastic responses in the
nasal passages: Future research needs. Exp Lung Res 17:853-868.

Boysen, M and Solberg. 1982. Changes in the nasal mucosa of furniture workers. Scand J
Work Environ Health :273-282.

Brittebo, EB, PG Hogman, and I Brandt. 1987. Epithelial binding of hexachlorocyclohexanes
in the respiratory and upper alimentary tracts: A comparison between the alpha-, beta-, and
gamma-isomers in mice. Food Chem Toxicol 25:773-780.

Brooks, SM. 1994. Host susceptibility to indoor air pollution. J Allergy Clin Immunol
94:344-351.

Callender, TJ, L Morrow, K Subramanian, D Duhon, and M Ristovv. 1993. Three-dimensional
brain metabolic imaging in patients with toxic encephalopathy. Environmental
Research 60:295-319.

Chia, SE, CN Ong, SC Foo, and HP Lee. 1992. Medical student’s exposure to formaldehyde
in a gross anatomy dissection laboratory. J Am Coll Health 41:115-119.

Choudhuri, S, KK Kramer, and NE Berman. 1995. Constitutive expression of metallothionein
genes in mouse brain. Toxicol Appl Pharmacol 131:144-154.

Ciesielski, S, DP Loomis, SR Mims, and A Auer. 1994. Pesticide exposures, cholinesterase
depression, and symptoms among North Carolina migrant farmworkers. Am J Public Health
84:446-451.

Clerisi, WJ, B Ross, and LD Fechter. 1991. Acute ototoxicity of trialkyltins in the guinea pig.
Toxicol Appl Pharmacol :547-566.

Coleman, JW, MR Holliday, and RJ Dearman. 1994. Cytokine-mast cell interactions:
Relevance to IgE-mediated chemical allergy. Toxicology 88:225-235.

Cometto-Muñiz, JE and WS Cain. 1991. Influence of airborne contaminants on olfaction and
the common chemical sense. In Smell and Taste in Health and Disease, edited by TV
Getchell. New York: Raven Press.

—. 1994. Sensory reactions of nasal pungency and odor to volatile organic compounds: The
alkylbenzenes. Am Ind Hyg Assoc J 55:811-817.

Corwin, J, M Loury, and AN Gilbert. 1995. Workplace, age, and sex as mediators of olfactory
function: Data from the National Geographic Smell Survey. Journal of Gerontology: Psychol
Sci 50B:P179-P186.

Council on Dental Materials, Instruments and Equipment. 1987. American Dental Association
status report on the occurrence of galvanic corrosion in the mouth and its potential effects. J
Am Dental Assoc 115:783-787.

Council on Scientific Affairs. 1989. Council report: Formaldehyde. JAMA 261:1183-1187.

Crampton, GH. 1990. Motion and Space Sickness. Boca Raton: CRC Press.

Cullen, MR. 1987. Workers with multiple chemical sensitivities. Occup Med: State Art Rev
2(4).

Deems, DA, RL Doty, and RG Settle. 1991. Smell and taste disorders, a study of 750 patients
from the University of Pennsylvania Smell and Taste Center. Arch Otolaryngol Head Neck
Surg 117:519-528.

Della Fera, MA, AE Mott, and ME Frank. 1995. Iatrogenic causes of taste disturbances:
Radiation therapy, surgery, and medication. In Handbook of Olfaction and Gustation, edited
by RL Doty. New York: Marcel Dekker.

Dellon, AL. 1981. Evaluation of Sensibility and Re-Education of Sensation in the Hand.
Baltimore: Williams & Wilkins.

Dykes, RW. 1977. Sensory receptors. In Reconstructive Microsurgery, edited by RK Daniel
and JK Terzis. Boston: Little Brown & Co.

El-Etri, MM, WT Nickell, M Ennis, KA Skau, and MT Shipley. 1992. Brain norepinephrine
reductions in soman-intoxicated rats: Association with convulsions and AchE inhibition, time
course, and relation to other monoamines. Experimental Neurology 118:153-163.

Evans, J and L Hastings. 1992. Accumulation of Cd(II) in the CNS depending on the route of
administration: Intraperitoneal, intratracheal, or intranasal. Fund Appl Toxicol 19:275-278.

Evans, JE, ML Miller, A Andringa, and L Hastings. 1995. Behavioral, histological, and
neurochemical effects of nickel(II) on the rat olfactory system. Toxicol Appl Pharmacol
130:209-220.

Fechter, LD, JS Young, and L Carlisle. 1988. Potentiation of noise induced threshold shifts
and hair cell loss by carbon monoxide. Hearing Res 34:39-48.

Fox, SL. 1973. Industrial and Occupational Ophthalmology. Springfield: Charles C. Thomas.

Frank, ME, TP Hettinger, and AE Mott. 1992. The sense of taste: Neurobiology, aging, and
medication effects. Critical Reviews in Oral Biology Medicine 3:371-393.

Frank, ME and DV Smith. 1991. Electrogustometry: A simple way to test taste. In Smell and
Taste in Health and Disease, edited by TV Getchell, RL Doty, and LM Bartoshuk. New York:
Raven Press.

Gagnon, P, D Mergler, and S Lapare. 1994. Olfactory adaptation, threshold shift and recovery
at low levels of exposure to methyl isobutyl ketone (MIBK). Neurotoxicology 15:637-642.

Gilbertson, TA. 1993. The physiology of vertebrate taste reception. Curr Opin Neurobiol
3:532-539.

Gordon, T and JM Fine. 1993. Metal fume fever. Occup Med: State Art Rev 8:505-517.

Gosselin, RE, RP Smith, and HC Hodge. 1984. Clinical Toxicology of Commercial Products.
Baltimore: Williams & Wilkins.

Graham, CH, NR Bartlett, JL Brown, Y Hsia, CG Mueller, and LA Riggs. 1965. Vision and
Visual Perception. New York: John Wiley and Sons, Inc.

Grandjean, E. 1987. Ergonomics in Computerized Offices. London: Taylor & Francis.

Grant, A. 1979. Optical danger of fiberglass hardener. Med J Austral 1:23.

Gresham, LS, CA Molgaard, and RA Smith. 1993. Induction of cytochrome P-450 enzymes
via tobacco smoke: A potential mechanism for developing resistance to environmental toxins
as related to Parkinsonism and other neurologic disease. Neuroepidemiol 12:114-116.

Guidotti, TL. 1994. Occupational exposure to hydrogen sulfide in the sour gas industry: Some
unresolved issues. Int Arch Occup Environ Health 66:153-160.

Gyntelberg, F, S Vesterhauge, P Fog, H Isager, and K Zillstorff. 1986. Acquired intolerance
to organic solvents and results of vestibular testing. Am J Ind Med 9:363-370.

Hastings, L. 1990. Sensory neurotoxicology: use of the olfactory system in the assessment of
toxicity. Neurotoxicology and Teratology 12:455-459.

Head, PW. 1984. Vertigo and barotrauma. In Vertigo, edited by MR Dix and JD Hood.
Chichester: Wiley.

Hohmann, B and F Schmuckli. 1989. Dangers du bruit pour l’ouïe et l’emplacement de
travail. Lucerne: CNA.

Holmström, M, G Rosén, and B Wilhelmsson. 1991. Symptoms, airway physiology and
histology of workers exposed to medium-density fiber board. Scand J Work Environ Health
17:409-413.

Hotz, P, A Tschopp, D Söderström, and J Holtz. 1992. Smell or taste disturbances,
neurological symptoms, and hydrocarbon exposure. Int Arch Occup Environ Health 63:525-
530.

Howard, IP. 1982. Human Visual Orientation. Chichester: Wiley.

Iggo, A and AR Muir. 1969. The structure and function of a slowly adapting touch corpuscle
in hairy skin. J Physiol Lond 200(3):763-796.

Illuminating Engineering Society of North America (IESNA). 1993. Vision and perception. In
Lighting Handbook: Reference and Application, edited by MS Rea and Fies. New York:
IESNA.

Innocenti, A, M Valiani, G Vessio, M Tassini, M Gianelli, and S Fusi. 1985. Wood dust and
nasal diseases: Exposure to chestnut wood dust and loss of smell (pilot study). Med Lavoro
4:317-320.

Jacobsen, P, HO Hein, P Suadicani, A Parving, and F Gyntelberg. 1993. Mixed solvent
exposure and hearing impairment: An epidemiological study of 3284 men. The Copenhagen
male study. Occup Med 43:180-184.

Johansson, B, E Stenman, and M Bergman. 1984. Clinical study of patients referred for
investigation regarding so-called oral galvanism. Scand J Dent Res 92:469-475.

Johnson, A-C and PR Nylén. 1995. Effects of industrial solvents on hearing. Occup Med:
State of the art reviews. 10:623-640.

Kachru, DM, SK Tandon, UK Misra, and D Nag. 1989. Occupational lead poisoning among
silver jewelry workers. Indian Journal of Medical Sciences 43:89-91.

Keele, CA. 1964. Substances Producing Pain and Itch. London: Edward Arnold.

Kinnamon, SC and TV Getchell. 1991. Sensory transduction in olfactory receptor neurons and
gustatory receptor cells. In Smell and Taste in Health and Disease, edited by TV Getchell, RL
Doty, and LM Bartoshuk. New York: Raven Press.

Krueger, H. 1992. Exigences visuelles au poste de travail: Diagnostic et traitement. Cahiers
médico-sociaux 36:171-181.

Lakshmana, MK, T Desiraju, and TR Raju. 1993. Mercuric chloride-induced alterations of
levels of noradrenaline, dopamine, serotonin and acetylcholine esterase activity in different
regions of rat brain during postnatal development. Arch Toxicol 67:422-427.

Lima, C and JP Vital. 1994. Olfactory mucosa response in guinea pigs following intranasal
instillation with Cryptococcus neoformans: A histological and immunocytochemical study.
Mycopathologia 126:65-73.

Luxon, LM. 1984. The anatomy and physiology of the vestibular system. In Vertigo, edited
by MR Dix and JD Hood. Chichester: Wiley.

MacKinnon, SE and AL Dellon. 1988. Surgery of the Peripheral Nerve. New York: Thieme
Medical Publishers.

Marek, J-J. 1993. The molecular biology of taste transduction. Bioessays 15:645-650.

Marek, M. 1992. Interactions between dental amalgams and the oral environment. Adv Dental
Res 6:100-109.

Margolskee, RF. 1993. The biochemistry and molecular biology of taste transduction. Curr
Opin Neurobiol 3:526-531.

Martin, JH. 1985. Receptor physiology and submodality coding in the somatic sensory
system. In Principles of Neural Science, edited by ER Kandel and JH Schwartz.

Meyer, J-J. 1990. Physiologie de la vision et ambiance lumineuse. Document de
l’Aerospatiale, Paris.

Meyer, J-J, A Bousquet, L Zoganas and JC Schira. 1990. Discomfort and disability glare in
VDT operators. In Work with Display Units 89, edited by L Berlinguet and D Berthelette.
Amsterdam: Elsevier Science.

Meyer, J-J, P Rey, and A Bousquet. 1983. An automatic intermittent light stimulator to record
flicker perceptive thresholds in patients with retinal disease. In Advances in Diagnostic Visual
Optics, edited by GM Brenin and IM Siegel. Berlin: Springer-Verlag.

Meyer, J-J, P Rey, B Thorens, and A Beaumanoire. 1971. Examen de sujets atteints d’un
traumatisme cranio-cérébral par un test de perception visuelle: courbe de Lange. Swiss Arch of
Neurol 108:213-221.

Meyer, J-J, A Bousquet, JC Schira, L Zoganas, and P Rey. 1986. Light sensitivity and visual
strain when driving at night. In Vision in Vehicles, edited by AG Gale. Amsterdam: Elsevier
Science Publisher.

Miller, CS. 1992. Possible models for multiple chemical sensitivity: conceptual issues and
role of the limbic system. Toxicol Ind Health 8:181-202.

Miller, RR, JT Young, RJ Kociba, DG Keyes, KM Bodner, LL Calhoun, and JA Ayres. 1985.
Chronic toxicity and oncogenicity bioassay of inhaled ethyl acrylate in Fischer 344 rats and
B6C3F1 mice. Drug Chem Toxicol 8:1-42.

Möller, C, L Ödkvist, B Larsby, R Tham, T Ledin, and L Bergholtz. 1990. Otoneurological
findings among workers exposed to styrene. Scand J Work Environ Health 16:189-194.

Monteagudo, FSE, MJD Cassidy, and PI Folb. 1989. Recent developments in aluminum
toxicology. Med Toxicol 4:1-16.

Morata, TC, DE Dunn, LW Kretschmer, GK Lemasters, and RW Keith. 1993. Effects of
occupational exposure to organic solvents and noise on hearing. Scand J Work Environ
Health 19:245-254.

Mott, AE, M Grushka, and BJ Sessle. 1993. Diagnosis and management of taste disorders and
burning mouth syndrome. Dental Clinics of North America 37:33-71.

Mott, AE and DA Leopold. 1991. Disorders in taste and smell. Med Clin N Am 75:1321-
1353.

Mountcastle, VB. 1974. Medical Physiology. St. Louis: CV Mosby.

Mountcastle, VB, WH Talbot, I Darian-Smith, and HH Kornhuber. 1967. Neural basis of the
sense of flutter-vibration. Science :597-600.

Muijser, H, EMG Hoogendijk, and J Hoosima. 1988. The effects of occupational exposure to
styrene on high-frequency hearing thresholds. Toxicology :331-340.

Nemery, B. 1990. Metal toxicity and the respiratory tract. Eur Respir J 3:202-219.

Naus, A. 1982. Alterations of the smell acuity caused by menthol. J Laryngol Otol 82:1009-
1011.

Örtendahl, TW. 1987. Oral changes in divers working with electrical welding/cutting
underwater. Swedish Dent J Suppl 43:1-53.

Örtendahl, TW, G Dahlén, and HOE Röckert. 1985. The evaluation of oral problems in divers
performing electrical welding and cutting under water. Undersea Biomed Res 12:55-62.

Ogawa, H. 1994. Gustatory cortex of primates: Anatomy and physiology. Neurosci Res 20:1-
13.

O’Reilly, JP, BL Respicio, and FK Kurata. 1977. Hana Kai II: A 17-day dry saturation dive at
18.6 ATA. VII: Auditory, visual and gustatory sensations. Undersea Biomed Res 4:307-314.

Otto, D, G Robinson, S Bauman, S Schroeder, P Mushak, D Kleinbaum, and L Boone. 1985.
5-year follow-up study of children with low-to-moderate lead absorption:
Electrophysiological evaluation. Environ Research 38:168-186.

Oyanagi, K, E Ohama, and F Ikuta. 1989. The auditory system in methyl mercurial
intoxication: A neuropathological investigation on 14 autopsy cases in Niigata, Japan. Acta
Neuropathol 77:561-568.

Participants of SCP Nos. 147/242 and HF Morris. 1990. Veterans administration cooperative
studies project no. 147: Association of metallic taste with metal ceramic alloys. J Prosthet
Dent 63:124-129.

Petersen, PE and C Gormsen. 1991. Oral conditions among German battery factory workers.
Community Dentistry and Oral Epidemiology 19:104-106.

Pfeiffer, P and H Schwickerath. 1991. Nickel solubility and metallic taste. Zwr 100:762-
764,766,768-779.

Pompeiano, O and JHJ Allum. 1988. Vestibulospinal Control of Posture and Locomotion.
Progress in Brain Research, no.76. Amsterdam: Elsevier.

Rees, T and L Duckert. 1994. Hearing loss and other otic disorders. In Textbook of Clinical,
Occupational and Environmental Medicine, edited by C Rosenstock. Philadelphia: WB
Saunders.

Ressler, KJ, SL Sullivan, and LB Buck. 1994. A molecular dissection of spatial patterning in
the olfactory system. Curr Opin Neurobiol 4:588-596.

Rey, P. 1991. Précis De Medecine Du Travail. Geneva: Medicine et Hygiène.

Rey, P and A Bousquet. 1990. Medical eye examination strategies for VDT operators. In
Work With Display Units 89, edited by L Berlinguet and D Berthelette. Amsterdam: Elsevier
Science.

Rose, CS, PG Heywood, and RM Costanzo. 1992. Olfactory impairment after chronic
occupational cadmium exposure. J Occup Med 34:600-605.

Rubino, GF. 1990. Epidemiologic survey of ocular disorders: The Italian multicentric
research. In Work with Display Units 89, edited by L Berlinguet and D Berthelette.
Amsterdam: Elsevier Science Publishers B.V.

Ruth, JH. 1986. Odor thresholds and irritation levels of several chemical substances: A
review. Am Ind Hyg Assoc J 47:142-151.

Rusznak, C, JL Devalia, and RJ Davies. 1994. The impact of pollution on allergic disease.
Allergy 49:21-27.

Rybak, LP. 1992. Hearing: The effects of chemicals. Otolaryngology-Head and Neck
Surgery 106:677-686.

—. 1993. Ototoxicity. Otolaryngol Clin N Am 5(26).

Savov, A. 1991. Damages to the ears, nose and throat in copper production. Problemi na
Khigienata 16:149-153.

Schiffman, SS. 1994. Changes in taste and smell: Drug interactions and food preferences.
Nutr Rev 52(II): S11-S14.

Schiffman, SS and HT Nagle. 1992. Effect of environmental pollutants on taste and smell.
Otolaryngology-Head and Neck Surgery 106:693-700.

Schwartz, BS, DP Ford, KI Bolla, J Agnew, and ML Bleecker. 1991. Solvent-associated
olfactory dysfunction: Not a predictor of deficits in learning and memory. Am J Psychiatr
148:751-756.

Schweisfurth, H and C Schottes. 1993. Acute intoxication of a hydrazine-like gas by 19
workers in a garbage dump. Zbl Hyg 195:46-54.

Shusterman, D. 1992. Critical review: The health significance of environmental odor
pollution. Arch Environ Health 47:76-87.

Shusterman, DJ and JE Sheedy. 1992. Occupational and environmental disorders of the
special senses. Occup Med: State Art Rev 7:515-542.

Siblerud, RL. 1990. The relationship between mercury from dental amalgam and oral cavity
health. Ann Dent 49:6-10.

Sinclair. 1981. Mechanisms of Cutaneous Sensation. Oxford: Oxford Univ. Press.

Spielman, AI. 1990. Interaction of saliva and taste. J Dental Res 69:838.

Stevens, JC and WS Cain. 1986. Aging and the perception of nasal irritation. Physiol Behav
37:323-328.

van Dijk, FJH. 1986. Non-auditory effects of noise in industry. II A review of the literature.
Int Arch Occup Environ Health 58.

Verriest, G and G Hermans. 1975. Les aptitudes visuelles professionnelles. Bruxelles:
Imprimerie médicale et scientifique.

Welch, AR, JP Birchall, and FW Stafford. 1995. Occupational rhinitis - Possible mechanisms
of pathogenesis. J Laryngol Otol 109:104-107.

Weymouth, FW. 1966. The eye as an optical instrument. In Physiology and Biophysics,
edited by TC Ruch and HD Patton. London: Saunders.

Wieslander, G, D Norbäck, and C Edling. 1994. Occupational exposure to water based paint
and symptoms from the skin and eyes. Occup Environ Med 51:181-186.

Winberg, S, R Bjerselius, E Baatrup, and KB Doving. 1992. The effect of Cu(II) on the
electro-olfactogram (EOG) of the Atlantic salmon (Salmo salar L) in artificial freshwater of
varying inorganic carbon concentrations. Ecotoxicology and Environmental Safety 24:167-
178.

Witek, TJ. 1993. The nose as a target for adverse effects from the environment: Applying
advances in nasal physiologic measurements and mechanisms. Am J Ind Med 24:649-657.

World Health Organization (WHO). 1981. Arsenic. Environmental Health Criteria, no.18.
Geneva: WHO.

Yardley, L. 1994. Vertigo and Dizziness. London: Routledge.

Yontchev, E, GE Carlsson, and B Hedegård. 1987. Clinical findings in patients with orofacial
discomfort complaints. Int J Oral Maxillofac Surg 16:36-44.



12. Skin Diseases
Chapter Editor: Louis-Philippe Durocher

Overview: Occupational Skin Diseases

Written by ILO Content Manager

The growth of industry, agriculture, mining and manufacturing has been paralleled by the
development of occupational diseases of the skin. The earliest reported harmful effects were
ulcerations of the skin from metal salts in mining. As populations and cultures have expanded
the uses of new materials, new skills and new processes have emerged. Such technological
advances brought changes to the work environment and during each period some aspect of the
technical change has impaired workers’ health. Occupational diseases, in general and skin
diseases, in particular, have long been an unplanned by-product of industrial achievement.

Fifty years ago in the United States, for example, occupational diseases of the skin accounted
for no less than 65-70% of all reported occupational diseases. Recently, statistics collected by
the United States Department of Labor indicate a drop in frequency to approximately 34%.
This decreased number of cases is said to have resulted from increased automation, from
enclosure of industrial processes and from better education of management, supervisors and
workers in the prevention of occupational diseases in general. Without doubt such preventive
measures have benefited the workforce in many larger plants where good preventive services
may be available, but many people are still employed in conditions which are conducive to
occupational diseases. Unfortunately, there is no accurate assessment of the number of cases,
causal factors, time lost or actual cost of occupational skin disease in most countries.

General terms, such as industrial or occupational dermatitis or professional eczema, are used
for occupational skin diseases but names related both to cause and effect are also commonly
used. Cement dermatitis, chrome holes, chloracne, fibreglass itch, oil bumps and rubber rash
are some examples. Because of the variety of skin changes induced by agents or conditions at
work, these diseases are appropriately called occupational dermatoses—a term which includes
any abnormality resulting directly from, or aggravated by, the work environment. The skin
can also serve as an avenue of entry for certain toxicants which cause chemical poisoning via
percutaneous absorption.

Cutaneous Defence

From experience we know that the skin can react to a large number of mechanical, physical,
biological and chemical agents, acting alone or in combination. Despite this vulnerability,
occupational dermatitis is not an inevitable accompaniment of work. The majority of the
workforce manages to remain free of disabling occupational skin problems, due in part to the
inherent protection provided by the skin’s design and function, and in part due to the daily use
of personal protective measures directed towards minimizing skin contact with known skin
hazards at the worksite. Hopefully, the absence of disease in the majority of workers may also
be due to jobs which have been designed to minimize exposure to conditions hazardous to the
skin.
The skin

Human skin, except for palms and soles, is quite thin and of variable thickness. It has two
layers: the epidermis (outer) and dermis (inner). Collagen and elastic components in the
dermis allow it to function as a flexible barrier. The skin provides a unique shield which
protects within limits against mechanical forces, or penetration by various chemical agents.
The skin limits water loss from the body and guards against the effects of natural and artificial
light, heat and cold. Intact skin and its secretions provide a fairly effective defence zone
against micro-organisms, providing mechanical or chemical injury does not impair this
defence. Figure 1 provides an illustration of the skin and description of its physiological
functions.

Figure 1. Schematic representation of the skin.

The outer epidermal layer of dead cells (keratin) provides a shield against elements in the
outside world. These cells, if exposed to frictional pressures, can form a protective callus and
can thicken after ultraviolet exposure. Keratin cells are normally arranged in 15 or 16 shingle-
like layers and provide a barrier, though limited, against water, water-soluble materials and
mild acids. They are less able to act as a defence against repeated or prolonged contact with
even low concentrations of organic or inorganic alkaline compounds. Alkaline materials
soften but do not totally dissolve the keratin cells. The softening disturbs their inner structure
enough to weaken cellular cohesiveness. The integrity of the keratin layer is allied to its water
content which, in turn, influences its pliability. Lowered temperatures and humidity, and
dehydrating chemicals such as acids, alkali, strong cleaners and solvents, cause water loss
from the keratin layer, which, in turn, causes the cells to curl and crack. This weakens its
ability to serve as a barrier and compromises its defence against water loss from the body and
the entry of various agents from outside.

Cutaneous defence systems are effective only within limits. Anything which breaches one or
more of the links endangers the entire defence chain. For example, percutaneous absorption is
enhanced when the continuity of the skin has been altered by physical or chemical injury or
by mechanical abrasion of the keratin layer. Toxic materials can be absorbed not only through
the skin, but also through the hair follicles, sweat orifices and ducts. These latter routes are
not as important as transepidermal absorption. A number of chemicals used in industry and in
farming have caused systemic toxicity by absorption through the skin. Some well established
examples are mercury, tetraethyllead, aromatic and amino nitro compounds and certain
organophosphates and chlorinated hydrocarbon pesticides. It should be noted that for many
substances, systemic toxicity generally arises through inhalation but percutaneous absorption
is possible and should not be overlooked.

A remarkable feature of cutaneous defence is the ability of the skin to continually replace the
basal cells which provide the epidermis with its own built-in replication and repair system.

The skin’s ability to act as a heat exchanger is essential to life. Sweat gland function, vascular
dilation and constriction under nervous control are vital to regulating body heat, as is
evaporation of surface water on skin. Constriction of the blood vessels protects against cold
exposures by preserving central body heat. Multiple nerve endings within the skin act as
sensors for heat, cold and other excitants by relaying the presence of the stimulant to the
nervous system which responds to the provoking agent.

A major deterrent against injury from ultraviolet radiation, a potentially harmful component
of sunlight and some forms of artificial light is the pigment (melanin) manufactured by the
melanocytes located in the basal cell layer of the epidermis. Melanin granules are picked up
by the epidermal cells and serve to add protection against the rays of natural or artificial light
which penetrate the skin. Additional protection, though less in degree, is furnished by the
keratin cell layer which thickens following ultraviolet exposure. (As discussed below, for
those whose worksites are outdoors it is essential to protect exposed skin with a sunscreen
agent protective against both UV-A and UV-B (with a protection factor of 15 or greater),
together with appropriate clothing, to provide a high level of shielding against sunlight
injury.)

Types of Occupational Skin Diseases

Occupational dermatoses vary both in their appearance (morphology) and severity. The effect
of an occupational exposure may range from the slightest erythema (reddening) or
discoloration of the skin to a far more complex change, such as a malignancy. Despite the wide
range of substances that are known to cause skin effects, in practice it is difficult to associate
a specific lesion with exposure to a specific material. However, certain chemical groups are
associated with characteristic reaction patterns. The nature of the lesions and their location
may provide a strong clue as to causality.

A number of chemicals with or without direct toxic effect on the skin can also cause systemic
intoxication following absorption through the skin. In order to act as a systemic toxin, the
agent must pass through the keratin and the epidermal cell layers, then through the epidermal-
dermal junction. At this point it has ready access to the bloodstream and the lymphatic system
and can now be carried to vulnerable target organs.
Acute contact dermatitis (irritant or allergic)

Acute contact eczematous dermatitis can be caused by hundreds of irritant and sensitizing
chemicals, plants and photoreactive agents. Most occupational allergic dermatoses can be
classified as acute eczematous contact dermatitis. Clinical signs are heat, redness, swelling,
vesiculation and oozing. Symptoms include itch, burning and general discomfort. The back of
the hands, the inner wrists and the forearms are the usual sites of attack, but acute contact
dermatitis can occur anywhere on the skin. If the dermatosis occurs on the forehead, the
eyelids, the ears, the face or the neck, it is logical to suspect that a dust or a vapour may be
involved in the reaction. When there is a generalized contact dermatitis, not restricted to one
or a few specific sites, it is usually caused by a more extensive exposure, such as the wearing
of contaminated clothing, or by autosensitization from a pre-existing dermatitis. Severe
blistering or destruction of tissue generally indicates the action of an absolute or strong
irritant. The exposure history, which is taken as part of the medical control of occupational
dermatitis, may reveal the suspected causative agent. An accompanying article in this chapter
provides more details on contact dermatitis.

Sub-acute contact dermatitis

Through a cumulative effect, repeated contact with both weak and moderate irritants can cause
a sub-acute form of contact dermatitis characterized by dry, red plaques. If the exposure
continues, the dermatitis will become chronic.

Chronic eczematous contact dermatitis

When a dermatitis recurs over an extended period of time it is called chronic eczematous
contact dermatitis. The hands, fingers, wrists and forearms are the sites most often affected by
chronic eczematous lesions, characterized by dry, thickened and scaly skin. Cracking and
fissuring of the fingers and the palms may be present. Chronic nail dystrophy is also
commonly found. Frequently, the lesions will begin to ooze (sometimes called “weeping”)
because of re-exposure to the responsible agent or by imprudent treatment and care. Many
materials not responsible for the original dermatosis will sustain this chronic recurrent skin
problem.

Photosensitivity dermatitis (phototoxic or photoallergic)

Most photoreactions on the skin are phototoxic. Natural or artificial light sources, alone or in
combination with various chemicals, plants or drugs, can induce a phototoxic or photoallergic
response. A phototoxic reaction is generally limited to light-exposed areas, while a
photoallergic reaction can frequently develop on non-exposed body surfaces as well. Some
examples of photoreactive chemicals are coal tar distillation products, such as creosote, pitch
and anthracene. Members of the plant family Umbelliferae are well known photoreactors.
Family members include cow parsnip, celery, wild carrot, fennel and dill. The reactive agents
in these plants are psoralens.

Folliculitis and acneform dermatoses, including chloracne

Workers with dirty jobs often develop lesions involving the follicular openings. Comedones
(blackheads) may be the only obvious effect of the exposure, but often a secondary infection
of the follicle may ensue. Poor personal hygiene and ineffective cleansing habits can add to
the problem. Follicular lesions generally occur on the forearms and less often on the thighs
and buttocks, but they can occur anywhere except on the palms and soles.

Follicular and acneform lesions are caused by overexposure to insoluble cutting fluids, to
various tar products, paraffin, and certain aromatic chlorinated hydrocarbons. The acne
caused by any of the above agents can be extensive. Chloracne is the most serious form, not
only because it can lead to disfigurement (hyperpigmentation and scarring) but also because
of the potential liver damage, including porphyria cutanea tarda and other systemic effects
that the chemicals can cause. Chloronaphthalenes, chlorodiphenyls, chlorotriphenyls,
hexachlorodibenzo-p-dioxin, tetrachloroazoxybenzene and tetrachlorodibenzodioxin (TCDD),
are among the chloracne-causing chemicals. The blackheads and cystic lesions of chloracne
often appear first on the sides of the forehead and the eyelids. If exposure continues, lesions
may occur over widespread areas of the body, except for the palms and soles.

Sweat-induced reactions

Many types of work involve exposure to heat. Where there is too much heat and sweating,
with too little evaporation of the sweat from the skin, prickly heat can develop. When
there is chafing of the affected area by skin rubbing against skin, a secondary bacterial or
fungal infection may frequently occur. This happens particularly in the underarm area, under
the breast, in the groin and between the buttocks.

Pigment change

Occupationally induced changes in skin colour can be caused by dyes, heavy metals,
explosives, certain chlorinated hydrocarbons, tars and sunlight. The change in skin colour
may be the result of a chemical reaction within the keratin, as for example, when the keratin is
stained by metaphenylenediamine, methylene blue or trinitrotoluene. Sometimes
permanent discoloration may occur more deeply in the skin as with argyria or traumatic
tattoo. Increased pigmentation induced by chlorinated hydrocarbons, tar compounds, heavy
metals and petroleum oils generally results from melanin stimulation and overproduction.
Hypopigmentation or depigmentation at selected sites can be caused by a previous burn,
contact dermatitis, contact with certain hydroquinone compounds or other antioxidant agents
used in selected adhesives and sanitizing products. Among the latter are tertiary amyl phenol,
tertiary butyl catechol and tertiary butyl phenol.

New growths

Neoplastic lesions of occupational origin may be malignant or benign (cancerous or non-
cancerous). Melanoma and non-melanocytic skin cancer are discussed in two other articles in
this chapter. Traumatic cysts, fibromata, asbestos, petroleum and tar warts, and
keratoacanthoma are typical benign new growths. Keratoacanthomas can be associated with
excessive exposure to sunlight and also have been ascribed to contact with petroleum, pitch
and tar.

Ulcerative changes

Chromic acid, concentrated potassium dichromate, arsenic trioxide, calcium oxide, calcium
nitrate and calcium carbide are documented ulcerogenic chemicals. Favourite attack sites are
the fingers, hands, folds and palmar creases. Several of these agents also cause perforation of
the nasal septum.

Chemical or thermal burns, blunt injury or infections resulting from bacteria and fungi may
result in ulcerous excavations on the affected part.

Granulomas

Granulomas can arise from many occupational sources if the appropriate circumstances are
present. Granulomas can be caused by occupational exposures to bacteria, fungi, viruses or
parasites. Inanimate substances, such as bone fragments, wood splinters, cinders, coral and
gravel, and minerals such as beryllium, silica and zirconium, can also cause granulomas after
skin embedment.

Other conditions

Occupational contact dermatitis accounts for at least 80% of all cases of occupational skin
diseases. However, a number of other changes that affect the skin, hair and nails are not
included in the foregoing classification. Hair loss caused by burns, or mechanical trauma or
certain chemical exposures, is one example. A facial flush that follows the combination of
drinking alcohol and inhaling certain chemicals, such as trichloroethylene and disulfiram, is
another. Acroosteolysis, a type of bony disturbance of the digits, plus vascular changes of the
hands and forearm (with or without Raynaud’s syndrome) has been reported among polyvinyl
chloride polymerization tank cleaners. Nail changes are covered in a separate article in this
chapter.

Physiopathology or Mechanisms of Occupational Skin Diseases

The mechanisms by which primary irritants act are understood only in part. For instance,
vesicant or blister gases (nitrogen mustard, bromomethane, Lewisite, etc.) interfere
with certain enzymes and thereby block selective phases in the metabolism of carbohydrates,
fats and proteins. Why and how the blister results is not clearly understood, but observations
of how chemicals react outside the body yield some ideas about possible biological
mechanisms.

In brief, because alkali reacts with acid or lipid or protein, it has been presumed that it also
reacts with skin lipid and protein. In so doing, surface lipids are changed and keratin structure
becomes disturbed. Organic and inorganic solvents dissolve fats and oils and have the same
effect on cutaneous lipids. Additionally, however, it appears that solvents abstract some
substance or change the skin in such a way that the keratin layer dehydrates and the skin’s
defence is no longer intact. Continued insult results in an inflammatory reaction eventuating
in contact dermatitis.

Certain chemicals readily combine with the water within skin or on the surface of the skin,
and cause a vigorous chemical reaction. Calcium compounds, such as calcium oxide and
calcium chloride, produce their irritant effect in this way.

Substances such as coal tar pitch, creosote, crude petroleum, certain aromatic chlorinated
hydrocarbons, in combination with sunlight exposure, stimulate the pigment-producing cells
to over function, leading to hyperpigmentation. Acute dermatitis also may give rise to
hyperpigmentation after healing. Conversely, burns, mechanical trauma, chronic contact
dermatitis, contact with monobenzyl ether of hydroquinone or certain phenolics can induce
hypo- or de-pigmented skin.

Arsenic trioxide, coal tar pitch, sunlight and ionizing radiation, among other agents, can
damage the skin cells so that abnormal cell growth results in cancerous change of the exposed
skin.

Unlike primary irritation, allergic sensitization is the result of a specifically acquired
alteration in the capacity to react, brought about by T-cell activation. For several years it has
been agreed that contact allergic eczematous dermatitis accounts for about 20% of all the
occupational dermatoses. This figure is probably too conservative in view of the continued
introduction of new chemicals, many of which have been shown to cause allergic contact
dermatitis.

Causes of Occupational Skin Diseases

Materials or conditions known to cause occupational skin disease are unlimited. They are
currently divided into mechanical, physical, biological and chemical categories, which
continue to grow in number each year.

Mechanical

Friction, pressure or other forms of more forceful trauma may induce changes ranging from
callus and blisters to myositis, tenosynovitis, osseous injury, nerve damage, laceration,
shearing of tissue or abrasion. Lacerations, abrasions, tissue disruption and blisters
additionally pave the way for secondary infection by bacteria or, less often, fungi to set in.
Almost everyone is exposed each day to one or more forms of mechanical trauma which may
be mild or moderate in degree. However, those who use pneumatic riveters, chippers, drills
and hammers are at greater risk of suffering neurovascular, soft tissue, fibrous or bone injury
to the hands and forearms because of the repetitive trauma from the tool. The use of
vibration-producing tools which operate in a certain frequency range can induce painful
spasms in the fingers of the tool-holding hand. Transfer to other work, where possible,
generally provides relief. Modern equipment is designed to reduce vibration and thus obviate
the problems.

Physical agents

Heat, cold, electricity, sunlight, artificial ultraviolet, laser radiation and high energy sources
such as x rays, radium and other radioactive substances are potentially injurious to skin and to
the entire body. High temperature and humidity at work or in a tropical work environment can
impair the sweat mechanism and cause systemic effects known as sweat retention syndrome.
Milder exposure to heat may induce prickly heat, intertrigo (chafing), skin maceration and
supervening bacterial or fungal infection, particularly in overweight and diabetic individuals.

Thermal burns are frequently experienced by electric furnace operators, lead burners, welders,
laboratory chemists, pipe-line workers, road repairmen, roofers and tar plant workers
contacting liquid tar. Prolonged exposure to cold water or lowered temperatures causes mild
to severe injury ranging from erythema to blistering, ulceration and gangrene. Frostbite
affecting the nose, ears, fingers and toes of construction workers, firemen, postal workers,
military personnel and other outdoor workers is a common form of cold injury.

Electricity exposure resulting from contact with short circuits, bare wires or defective
electrical apparatus causes burns of the skin and destruction of deeper tissue.

Few workers are without exposure to sunlight and some individuals with repeated exposure
incur severe actinic damage to skin. Modern industry also has many sources of potentially
injurious artificial ultraviolet wavelengths, such as in welding, metal burning, molten-metal
pouring, glass blowing, electric furnace tending, plasma torch burning and laser beam
operations. Apart from the natural capacity of ultraviolet rays in natural or artificial light to
injure skin, coal tar and several of its by-products, including certain dyes, selected light-
receptive components of plants and fruits and a number of topical and parenteral medications
contain harmful chemicals which are activated by certain wavelengths of ultraviolet rays.
Such photoreaction effects may operate by either phototoxic or photoallergic mechanisms.

High-intensity electromagnetic energy associated with laser beams is well able to injure
human tissue, notably the eye. Skin damage is less of a risk but can occur.

Biological

Occupational exposures to bacteria, fungi, viruses or parasites may cause primary or
secondary infections of the skin. Prior to the advent of modern antibiotic therapy, bacterial
and fungal infections were more commonly encountered and associated with disabling illness
and even death. While bacterial infections can occur in any kind of work setting, certain
workers, such as animal breeders and handlers, farmers, fishermen, food processors and hide
handlers, have greater exposure potential. Similarly, fungal (yeast) infections are common among
bakers, bartenders, cannery workers, cooks, dishwashers, child-care workers and food
processors. Dermatoses due to parasitic infections are not common, but when they do occur
they are seen most often among agricultural and livestock workers, grain handlers and
harvesters, longshoremen and silo workers.

Cutaneous viral infections caused by work are few in number, yet some, such as milker’s
nodules among dairy workers, herpes simplex among medical and dental personnel and sheep
pox among livestock handlers, continue to be reported.

Chemicals

Organic and inorganic chemicals are the major source of hazards to the skin. Hundreds of new
agents enter the work environment each year and many of these will cause cutaneous injury
by acting as primary skin irritants or allergic sensitizers. It has been estimated that 75% of the
occupational dermatitis cases are caused by primary irritant chemicals. However, in clinics
where the diagnostic patch test is commonly used, the frequency of occupational allergic
contact dermatitis is increased. By definition, a primary irritant is a chemical substance which
will injure every person’s skin if sufficient exposure takes place. Irritants can be rapidly
destructive (strong or absolute) as would occur with concentrated acids, alkalis, metallic salts,
certain solvents and some gases. Such toxic effects can be observed within a few minutes,
depending upon the concentration of the contactant and the length of contact which occurs.
Conversely, dilute acids and alkalis, including alkaline dusts, various solvents and soluble
cutting fluids, among other agents, may require several days of repeated contact to produce
observable effects. These materials are termed “marginal or weak irritants”.

Plants and woods

Plants and woods are often classified as a separate cause of skin disease, but they can also be
correctly included in the chemical grouping. Many plants cause mechanical and chemical
irritation and allergic sensitization, while others have gained attention because of their
photoreactive capacity. The family Anacardiaceae, which includes poison ivy, poison oak,
poison sumac, cashew-nut shell oil and the Indian marking nut, is a well-known cause of
occupational dermatitis due to its active ingredients (polyhydric phenols). Poison ivy, oak and
sumac are common causes of allergic contact dermatitis. Other plants associated with
occupational and non-occupational contact dermatitis include castor bean, chrysanthemum,
hops, jute, oleander, pineapple, primrose, ragweed, hyacinth and tulip bulbs. Fruits and
vegetables, including asparagus, carrots, celery, chicory, citrus fruits, garlic and onions, have
been reported as causing contact dermatitis in harvesters, food packing and food preparation
workers.

Several varieties of wood have been named as causes of occupational dermatoses among
lumberers, sawyers, carpenters and other wood craftspeople. However, the frequency of skin
disease is much less than is experienced from contact with poisonous plants. It is likely that
some of the chemicals used for preserving the wood cause more dermatitic reactions than the
oleoresins contained in wood. Among the preservative chemicals used to protect against
insects, fungi and deterioration from soil and moisture are chlorinated diphenyls, chlorinated
naphthalenes, copper naphthenate, creosote, fluorides, organic mercurials, tar and certain
arsenical compounds, all known causes of occupational skin diseases.

Non-Occupational Factors in Occupational Skin Disease

Considering the numerous direct causes of occupational skin disease cited above, it can be
readily understood that practically any job has obvious and often hidden hazards. Indirect or
predisposing factors may also merit attention. A predisposition can be inherited and related to
skin colour and type or it may represent a skin defect acquired from other exposures.
Whatever the reason, some workers have lower tolerance to materials or conditions in the
work environment. In large industrial plants, medical and hygiene programmes can provide
the opportunity for placement of such employees in work situations that will not further
impair their health. In small plants, however, predisposing or indirect causal factors may not
be given proper medical attention.

Pre-existing skin conditions

Several non-occupational diseases affecting the skin can be worsened by various occupational
influences.

Acne. Adolescent acne in employees is generally made worse by machine tool, garage and tar
exposures. Insoluble oils, various tar fractions, greases and chloracnegenic chemicals are
definite hazards to these people.

Chronic eczemas. Detecting the cause of chronic eczema affecting the hands and sometimes
distant sites can be elusive. Allergic dermatitis, pompholyx, atopic eczema, pustular psoriasis
and fungal infections are some examples. Whatever the condition, any number of irritant
chemicals, including plastics, solvents, cutting fluids, industrial cleansers and prolonged
moisture, can worsen the eruption. Employees who must continue to work will do so with
much discomfort and probably lowered efficiency.

Dermatomycosis. Fungal infections can be worsened at work. When fingernails become
involved, it may be difficult to assess the role of chemicals or trauma in the nail involvement.
Chronic tinea of the feet is subject to periodic worsening, particularly when heavy footgear is
required.

Hyperhidrosis. Excessive sweating of the palms and soles can soften the skin (maceration),
particularly when impervious gloves or protective footgear are required. This will increase a
person’s vulnerability to the effects of other exposures.

Miscellaneous conditions. Employees with polymorphous light eruption, chronic discoid
lupus erythematosus, porphyria or vitiligo are definitely at greater risk, particularly if there is
simultaneous exposure to natural or artificial ultraviolet radiation.

Skin type and pigmentation

Redheads and blue-eyed blondes, particularly those of Celtic origin, have less tolerance to
sunlight than people of darker skin type. Such skin is also less able to tolerate exposures to
photoreactive chemicals and plants and is suspected of being more susceptible to the action of
primary irritant chemicals, including solvents. In general, black skin has a superior tolerance
to sunlight and photoreactive chemicals and is less prone to the induction of cutaneous cancer.
However, darker skin tends to respond to mechanical, physical or chemical trauma by
displaying post-inflammatory pigmentation. It is also more prone to develop keloids
following trauma.

Certain skin types, such as hairy, oily, swarthy skins, are more likely to incur folliculitis and
acne. Employees with dry skin and those with ichthyoses are at a disadvantage if they must
work in low humidity environments or with chemical agents which dehydrate skin. For those
workers who sweat profusely, a need to wear impervious protective gear will add to their
discomfort. Similarly, overweight individuals usually experience prickly heat during the
warm months in hot working environments or in tropical climates. While sweat can be helpful
in cooling the skin, it can also hydrolyze certain chemicals that will act as skin irritants.

Diagnosing Occupational Skin Diseases

Cause and effect of occupational skin disease can be best ascertained through a detailed
history, which should cover the past and present health and work status of the employee.
Family history, particularly of allergies, personal illness in childhood and the past, is
important. The title of the job, the nature of the work, the materials handled, how long the job
has been done, should be noted. It is important to know when and where on the skin the rash
appeared, the behaviour of the rash away from work, whether other employees were affected,
what was used to cleanse and protect the skin, and what has been used for treatment (both
self-medication and prescribed medication); as well as whether the employee has had dry skin
or chronic hand eczema or psoriasis or other skin problems; what drugs, if any, have been
used for any particular disease; and finally, which materials have been used in home hobbies
such as the garden or woodworking or painting.
The following elements are important parts of the clinical diagnosis:

 Appearance of the lesions. Acute or chronic eczematous contact dermatoses are most
common. Follicular, acneform, pigmentary, neoplastic, ulcerative and granulomatous lesions, as
well as conditions such as Raynaud’s syndrome and contact urticaria, can also occur.
 Sites involved. The hands, the digits, the wrists and the forearms are the most common sites
affected. Exposure to dusts and fumes usually causes the dermatosis to appear on the
forehead, face and V of the neck. Widespread dermatitis can result from autosensitization
(spread) of an occupational or non-occupational dermatosis.
 Diagnostic tests. Laboratory tests should be employed when necessary for the detection of
bacteria, fungi and parasites. When allergic reactions are suspect, diagnostic patch tests can
be used to detect occupational as well as non-occupational allergies, including
photosensitization. Patch tests are a highly useful procedure and are discussed in an
accompanying article in this chapter. At times, useful information can be obtained through
the use of analytical chemical examination of blood, urine, or tissue (skin, hair, nails).
 Course. Of all the cutaneous changes induced by agents or certain conditions at work, acute
and chronic eczematous contact dermatoses are foremost in number. Next in frequency are
follicular and acneform eruptions. The other categories, including chloracne, constitute a
smaller but still important group because of their chronic nature and the scarring and
disfigurement which may be present.

An occupationally induced acute contact eczematous dermatitis tends to improve upon
cessation of contact. Additionally, modern therapeutic agents can shorten the period of
recovery. However, if a worker returns to work and to the same conditions, without proper
preventive measures undertaken by the employer and necessary precautions explained and
understood by the worker, it is probable that the dermatosis will recur soon after re-exposure.

Chronic eczematous dermatoses, acneform lesions and pigmentary changes are less
responsive to treatment even when contact is eliminated. Ulcerations usually improve with
elimination of the source. With granulomatous and tumour lesions, eliminating contact with
the offending agent may prevent future lesions but will not dramatically change already
existing disease.

When a patient with a suspected occupational dermatosis has not improved within two months
after no longer having contact with the suspected agent, other reasons for the persistence of
the disease should be explored. However, dermatoses caused by metals such as nickel or
chrome have a notoriously prolonged course partly because of their ubiquitous nature. Even
removal from work cannot eliminate the workplace as the source of the disease. If these and
other potential allergens have been eliminated as causal, it is reasonable to conclude that the
dermatitis is either non-occupational or is being perpetuated by non-occupational contacts,
such as maintenance and repair of automobiles and boats, tile-setting glues, garden plants, or
even medical therapy, prescribed or otherwise.
Non-Melanocytic Skin Cancer

Written by ILO Content Manager

There are three histological types of non-melanocytic skin cancers (NMSC) (ICD-9: 173;
ICD-10: C44): basal cell carcinoma, squamous cell carcinoma and rare soft tissue sarcomas
involving the skin, subcutaneous tissue, sweat glands, sebaceous glands and hair follicles.

Basal cell carcinoma is the most common NMSC in white populations, representing 75 to
80% of them. It develops usually on the face, grows slowly and has little tendency to
metastasize.

Squamous cell cancers account for 20 to 25% of reported NMSCs. They can occur on any part
of the body, but especially on the hands and legs and can metastasize. In darkly pigmented
populations squamous cell cancers are the most common NMSC.

Multiple primary NMSCs are common. The bulk of the NMSCs occur on the head and neck,
in contrast with most of the melanomas which occur on the trunk and limbs. The localization
of NMSCs reflects clothing patterns.

NMSCs are treated by various methods of excision, radiation and topical chemotherapy. They
respond well to treatment and over 95% are cured by excision (IARC 1990).

The incidence of NMSCs is hard to estimate because of gross underreporting and since many
cancer registries do not record these tumours. The number of new cases in the US was
estimated at 900,000 to 1,200,000 in 1994, a frequency comparable to the total number of all
non-cutaneous cancers (Miller & Weinstock 1994). The reported incidences vary widely and
are increasing in a number of populations, e.g., in Switzerland and the US. The highest annual
rates have been reported for Tasmania (167/100,000 in men and 89/100,000 in women) and
the lowest for Asia and Africa (overall 1/100,000 in men and 5/100,000 in women). NMSC is
the most common cancer in Caucasians. NMSC is about ten times as common in White as in
non-White populations. The lethality is very low (Higginson et al. 1992).

Susceptibility to skin cancer is inversely related to the degree of melanin pigmentation, which
is thought to protect by buffering against the carcinogenic action of solar ultraviolet (UV)
radiation. Non-melanoma risk in white-skinned populations increases with the proximity to
the equator.

In 1992, the International Agency for Research on Cancer (IARC 1992b) evaluated the
carcinogenicity of solar radiation and concluded that there is sufficient evidence in humans
for the carcinogenicity of solar radiation and that solar radiation causes cutaneous malignant
melanoma and NMSC.

Reduction of exposure to sunlight would probably reduce the incidence of NMSCs. In Whites,
90 to 95% of NMSCs are attributable to solar radiation (IARC 1990).

NMSCs may develop in areas of chronic inflammation, irritation and scars from burns.
Traumas and chronic ulcers of the skin are important risk factors for squamous cell skin
cancers, particularly in Africa.
Radiation therapy, chemotherapy with nitrogen mustard, immunosuppressive therapy,
psoralen treatment combined with UV-A radiation and coal tar preparations applied on skin
lesions have been associated with an increased risk of NMSC. Environmental exposure to
trivalent arsenic and arsenical compounds has been confirmed to be associated with excess skin
cancer in humans (IARC 1987). Arsenicism can give rise to palmar or plantar arsenical
keratoses, epidermoid carcinoma and superficial basal cell carcinoma.

Hereditary conditions such as lack of enzymes required to repair the DNA damaged by UV
radiation may increase the risk of NMSC. Xeroderma pigmentosum represents such a
hereditary condition.

A historical example of an occupational skin cancer is scrotal cancer that Sir Percival Pott
described in chimney sweeps in 1775. The cause of these cancers was soot. In the early 1900s,
scrotal cancers were observed in mulespinners in cotton textile factories where they were
exposed to shale oil, which was used as a lubricant for cotton spindles. The scrotal cancers in
both chimney sweeps and mulespinners were later associated with polycyclic aromatic
hydrocarbons (PAHs), many of which are animal carcinogens, particularly some 3-, 4- and 5-
ring PAHs such as benzo(a)pyrene and dibenz(a,h)anthracene (IARC 1983, 1984a, 1984b,
1985a). In addition to mixtures that already contain carcinogenic PAHs, carcinogenic
compounds may be formed by cracking when organic compounds are heated.

Further occupations with which PAH-related excesses of NMSC have been associated
include: aluminium reduction workers, coal gasification workers, coke oven workers, glass
blowers, locomotive engineers, road pavers and highway maintenance workers, shale oil
workers, tool fitters and tool setters (see table 1). Coal tars, coal-based pitches, other coal-
derived products, anthracene oil, creosote oil, cutting oils and lubricating oils are some of the
materials and mixtures that contain carcinogenic PAHs.

Table 1. Occupations at risk

Carcinogenic material or agent: industry or hazard (process or group at risk)

Pitch, tar or tarry products: aluminium reduction (pot room workers); coal, gas and coke
industries (coke ovens, tar distillation, coal gas manufacture, pitch loading); patent fuel
manufacture (briquette making); asphalt industry (road construction); creosote users (brick
and tile workers, timber proofers)

Soot: chimney sweeps; rubber industry (mixers of carbon black (commercial soot) and oil)

Lubricating and cutting oils: glass blowing; shale oil refining; cotton industry (mulespinners);
paraffin wax workers; engineering (toolsetters and setter-operators in automatic machine
shops using cutting oils)

Arsenic: oil refinery (still cleaners); sheep dip factories; arsenical insecticides (manufacturing
workers and users, including gardeners, fruit farmers and vintagers); arsenic mining

Ionizing radiation: radiologists; other radiation workers

Ultraviolet radiation: outdoor workers (farmers, fishermen, vineyard and other outdoor
construction workers); industrial UV sources (welding arcs, germicidal lamps, cutting and
printing processes)

Additional job titles that have been associated with increased NMSC risk include jute
processors, outdoor workers, pharmacy technicians, sawmill workers, shale oil workers,
sheep-dip workers, fishermen, tool setters, vineyard workers and watermen. The excess for
watermen (who are primarily involved in traditional fishing tasks) was noticed in Maryland,
USA and was confined to squamous cell cancers. Solar radiation probably explains
fishermen’s, outdoor workers’, vineyard workers’ and watermen’s excess risks. Fishermen
also may be exposed to oils and tar and inorganic arsenic from the consumed fish, which may
contribute to the observed excess, which was threefold in a Swedish study, as compared with
the county-specific rates (Hagmar et al. 1992). The excess in sheep dip workers may be
explained by arsenical compounds, which induce skin cancers through ingestion rather than
through skin contact. While farmers have slightly increased risk of melanoma, they do not
appear to have increased risk of NMSC, based on epidemiological observations in Denmark,
Sweden and the USA (Blair et al. 1992).

Ionizing radiation has caused skin cancer in early radiologists and workers who handled
radium. In both situations, the exposures were long-lasting and massive. Occupational
accidents involving skin lesions or long-term cutaneous irritation may increase the risk of
NMSC.

Prevention (of Non-Melanocytic Occupational Skin Cancer)

The use of appropriate clothing and a sunscreen having a protective UV-B factor of 15 or
greater will help protect outdoor workers exposed to ultraviolet radiation. Further, the
replacement of carcinogenic materials (such as feed stocks) by non-carcinogenic alternatives
is another obvious protective measure which may, however, not always be possible. The
degree of exposure to carcinogenic materials can be reduced by the use of protective shields
on equipment, protective clothing and hygienic measures.

Of overriding importance is the education of the workforce about the nature of the hazard and
the reasons for and value of the protective measures.

Finally, skin cancers usually take many years to develop, and many of them pass through
several premalignant stages, such as arsenical keratoses and actinic keratoses, before
achieving their full malignant potential. These early stages are readily detectable by visual
inspection. For this reason, skin cancers offer the real possibility that regular screening could
reduce mortality among those known to have been exposed to any skin carcinogen.

Malignant Melanoma

Written by ILO Content Manager

Malignant melanoma is rarer than non-melanocytic skin cancer. Apart from exposure to solar
radiation, no other environmental factors show a consistent association with malignant
melanoma of the skin. Associations with occupation, diet and hormonal factors are not firmly
established (Koh et al. 1993).

Malignant melanoma is an aggressive skin cancer (ICD-9: 172.0 to 172.9; ICD-10: C43). It
arises from pigment-producing cells of the skin, usually in an existing naevus. The tumour is
usually a few millimetres to several centimetres in size, brown or black in colour, and has
typically grown in size, changed colour and may bleed or ulcerate (Balch et al. 1993).

Indicators of poor prognosis of malignant melanoma of the skin include nodular subtype,
tumour thickness, multiple primary tumours, metastases, ulceration, bleeding, long tumour
duration, body site and, for some tumour sites, male sex. A history of malignant melanoma of
the skin increases the risk for a secondary melanoma. Five-year post-diagnosis survival rates
in high incidence areas are 80 to 85%, but in low incidence areas the survival is poorer
(Ellwood and Koh 1994; Stidham et al. 1994).

There are four histologic types of malignant melanoma of the skin. Superficial spreading
melanomas (SSM) represent 60 to 70% of all melanomas in Whites and less in non-Whites.
SSMs tend to progress slowly and are more common in women than in men. Nodular
melanomas (NM) account for 15 to 30% of malignant melanomas of the skin. They are
invasive, grow rapidly and are more frequent in men. Four to 10% of malignant melanomas of
the skin are lentigo maligna melanomas (LMM), or Hutchinson’s melanotic freckles. LMMs
grow slowly, occur frequently on the face of elderly persons and rarely metastasize. Acral
lentiginous melanomas (ALM) represent 35 to 60% of all malignant melanomas of the skin in
non-Whites and 2 to 8% in Whites. They occur frequently on the sole of the foot (Bijan 1993).

For the treatment of malignant melanomas of the skin, surgery, radiation therapy,
chemotherapy and biologic therapy (interferon alpha or interleukin-2) may be applied singly
or in combination.

During the 1980s, the reported age-standardized annual incidence rates of malignant
melanoma of the skin varied per 100,000 from 0.1 in males in Khon Kaen, Thailand to around
30.9 in males and 28.5 in females in Queensland, Australia (IARC 1992b). Malignant
melanomas of the skin represent less than 1% of all cancers in most populations. An annual
increase of about 5% in melanoma incidence has been observed in most white populations
from the early 1960s to about 1972. Melanoma mortality has increased in the last decades in
most populations, but less rapidly than incidence, probably due to early diagnoses and
awareness of the disease (IARC 1985b, 1992b). More recent data show different rates of
change, some of them suggesting even downward trends.

Malignant melanomas of the skin are among the ten most frequent cancers in incidence
statistics in Australia, Europe and North America, representing a lifetime risk of 1 to 5%.
White-skinned populations are more susceptible than non-White populations. Melanoma risk
in white-skinned populations increases with proximity to the equator.

The gender distribution of melanomas of the skin varies widely between populations (IARC
1992a). Women have lower incidence rates than men in most populations. There are gender
differences in patterns of body distribution of the lesions: trunk and face dominate in men,
extremities in women.

Malignant melanomas of the skin are more common in higher than in lower socio-economic
groups (IARC 1992b).

Familial melanomas are uncommon but have been well documented, with between 4% and
10% of patients describing a history of melanoma among their first-degree relatives.

Solar UV-B irradiation is probably the major cause for the widespread increase in the
incidence of melanomas of the skin (IARC 1993). It is not clear whether depletion of the
stratospheric ozone layer and the consequent increase in UV irradiance has caused the
increase in the incidence of malignant melanoma (IARC 1993, Kricker et al. 1993). The effect
of UV irradiation depends on host characteristics, such as skin phenotype I or II and blue eyes. A
role for UV radiation emanating from fluorescent lamps is suspected, but not conclusively
established (Beral et al. 1982).

It has been estimated that reduction in recreational sun exposure and use of sun-screens could
reduce the incidence of malignant melanomas in high risk populations by 40% (IARC 1990).
Among outdoor workers, the application of sunscreens having a UV-B protection factor of at
least 15 together with UV-A protection, and the use of appropriate clothing, are practical protective
measures. Although a risk from outdoor occupations is plausible, given the increased
exposure to solar radiation, results of studies on regular outdoor occupational exposure are
inconsistent. This is probably explained by the epidemiological findings suggesting that it is
not regular exposures but rather intermittent high doses of solar radiation that are associated
with excess melanoma risk (IARC 1992b).

Therapeutic immunosuppression may result in increased risk of malignant melanoma of the
skin. An increased risk with the use of oral contraceptives has been reported, but they seem
unlikely to increase the risk of malignant melanoma of the skin (Hannaford et al. 1991).
Melanomas can be produced by oestrogen in hamsters, but there is no evidence of such an
effect in humans.

In White adults, the majority of primary intraocular malignant tumours are melanomas,
usually arising from uveal melanocytes. The estimated rates for these cancers do not show the
geographic variations and increasing time trends observed for melanomas of the skin. The
incidence and mortality of ocular melanomas are very low in Black and Asiatic populations
(IARC 1990, Sahel et al. 1993). The causes of ocular melanoma are unknown (Higginson et al.
1992).

In epidemiological studies, excess risk for malignant melanoma has been observed in
administrators and managers, airline pilots, chemical processing workers, clerks, electrical
power workers, miners, physical scientists, policemen and guards, refinery workers and
gasoline exposed workers, salesmen and warehouse clerks. Excess melanoma risks have been
reported in industries such as cellulose fibre production, chemical products, clothing industry,
electrical and electronics products, metal industry, non-metallic mineral products,
petrochemical industry, printing industry and telecommunications. Many of these findings
are, however, solitary and have not been replicated in other studies. A series of meta-analyses
of cancer risks in farmers (Blair et al. 1992; Nelemans et al. 1993) indicated a slight, but
significant excess (aggregated risk ratio of 1.15) of malignant melanoma of the skin in 11
epidemiological studies.

In a multi-site case-control study of occupational cancer in Montreal, Canada (Siemiatycki et
al. 1991), the following occupational exposures were associated with a significant excess of
malignant melanoma of the skin: chlorine, propane engine emissions, plastics pyrolysis
products, fabric dust, wool fibres, acrylic fibres, synthetic adhesives, “other” paints,
varnishes, chlorinated alkenes, trichloroethylene and bleaches. Based on the significant
associations observed in that study, the population attributable risk due to occupational
exposures was estimated at 11.1%.
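
For readers unfamiliar with the measure, the population attributable risk (or attributable
fraction) is conventionally estimated with Levin’s formula; the notation below reflects
standard epidemiological usage rather than the specific calculations of the Montreal study:

    AF_p = \frac{p_e (RR - 1)}{1 + p_e (RR - 1)}

where p_e is the proportion of the population exposed and RR is the relative risk associated
with the exposure. An estimate of 11.1% means that, under these assumptions, roughly one
melanoma case in nine in that population would be attributable to the occupational exposures
identified.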

Occupational Contact Dermatitis

Written by ILO Content Manager

The terms dermatitis and eczema are interchangeable and refer to a particular type of
inflammatory reaction of the skin which may be triggered by internal or external factors.
Occupational contact dermatitis is an exogenous eczema caused by the interaction of the skin
with chemical, biological or physical agents found in the work environment.

Contact dermatitis accounts for 90% of all occupational dermatoses and in 80% of the cases,
it will impair a worker’s most important tool, the hands (Adams 1988). Direct contact with the
offending agent is the usual mode of production of the dermatitis, but other mechanisms may
be involved. Particulate matter such as dust or smoke, or vapours from volatile substances,
may give rise to airborne contact dermatitis. Some substances will be transferred from the
fingers to distant sites on the body to produce ectopic contact dermatitis. Finally, a
photocontact dermatitis will be induced when a contactant has become activated by exposure
to ultraviolet light.

Contact dermatitis is divided into two broad categories based on different mechanisms of
production. Table 1 lists the salient features of irritant contact dermatitis and of allergic
contact dermatitis.

Table 1. Types of contact dermatitis

Features | Irritant contact dermatitis | Allergic contact dermatitis
Mechanism of production | Direct cytotoxic effect | Delayed-type cellular immunity (Gell and Coombs type IV)
Potential victims | Everyone | A minority of individuals
Onset | Progressive, after repeated or prolonged exposure | Rapid, within 12–48 hours in sensitized individuals
Signs | Subacute to chronic eczema with erythema, desquamation and fissures | Acute to subacute eczema with erythema, oedema, bullae and vesicles
Symptoms | Pain and burning sensation | Pruritus
Concentration of contactant | High | Low
Investigation | History and examination | History and examination; patch tests

Irritant Contact Dermatitis

Irritant contact dermatitis is caused by a direct cytotoxic action of the offending agent.
Participation of the immune system is secondary to cutaneous damage and results in visible
skin inflammation. It represents the most common type of contact dermatitis and accounts for
80% of all cases.

Irritants are mostly chemicals, which are classified as immediate or cumulative irritants.
Corrosive substances, such as strong acids and alkalis are examples of the former in that they
produce skin damage within minutes or hours of exposure. They are usually well identified,
so that contact with them is most often accidental. By contrast, cumulative irritants are more
insidious and often are not recognized by the worker as deleterious because damage occurs
after days, weeks or months of repeated exposure. As shown in table 2, such
irritants include solvents, petroleum distillates, dilute acids and alkalis, soaps and detergents,
resins and plastics, disinfectants and even water (Gellin 1972).
Table 2. Common irritants

Acids and alkalis

Soaps and detergents

Solvents
Aliphatic: petroleum distillates (kerosene, gasoline, naphtha)
Aromatic: benzene, toluene, xylene
Halogenated: trichloroethylene, chloroform, methylene chloride
Miscellaneous: turpentine, ketones, esters, alcohols, glycols, water

Plastics
Epoxy, phenolic and acrylic monomers
Amine catalysts
Styrene, benzoyl peroxide

Metals
Arsenic
Chrome

Irritant contact dermatitis, which appears after years of trouble-free handling of a substance,
may be due to loss of tolerance, when the epidermal barrier ultimately fails after repeated
subclinical insults. More rarely, thickening of the epidermis and other adaptive mechanisms
can induce a greater tolerance to some irritants, a phenomenon called hardening.

In summary, irritant contact dermatitis will occur in a majority of individuals if they are
exposed to adequate concentrations of the offending agent for a sufficient length of time.

Allergic Contact Dermatitis

A cell-mediated, delayed allergic reaction, similar to that seen in graft rejection, is responsible
for 20% of all cases of contact dermatitis. This type of reaction, which occurs in a minority of
subjects, requires active participation of the immune system and very low concentrations of
the causative agent. Many allergens are also irritants, but the threshold for irritancy is usually
much higher than that required for sensitization. The sequence of events which culminate in
visible lesions is divided in two phases.
The sensitization (induction or afferent) phase

Allergens are heterogeneous, organic or non-organic chemicals, capable of penetrating the
epidermal barrier because they are lipophilic (attracted to the fat in the skin) and of small
molecular weight, usually less than 500 daltons (table 3). Allergens are incomplete antigens,
or haptens; that is, they must bind to epidermal proteins to become complete antigens.

Langerhans cells are antigen-presenting dendritic cells which account for less than 5% of all
epidermal cells. They trap cutaneous antigens, internalize and process them before re-
expressing them on their outer surface, bound to proteins of the major histocompatibility
complex. Within hours of contact, Langerhans cells leave the epidermis and migrate via the
lymphatics towards draining lymph nodes. Lymphokines such as interleukin-1 (IL-1) and
tumour necrosis factor alpha (TNF-α) secreted by keratinocytes are instrumental in the
maturation and migration of Langerhans cells.

Table 3. Common skin allergens

Metals

Nickel
Chrome
Cobalt
Mercury

Rubber additives

Mercaptobenzothiazole
Thiurams
Carbamates
Thioureas

Dyes

Paraphenylene diamine
Photographic colour developers
Disperse textile dyes

Plants

Urushiol (Toxicodendron)
Sesquiterpene lactones (Compositae)
Primin (Primula obconica)
Tulipalin A (Tulipa, Alstroemeria)
Plastics

Epoxy monomer
Acrylic monomer
Phenolic resins
Amine catalysts

Biocides

Formaldehyde
Kathon CG
Thimerosal

In the paracortical area of regional lymph nodes, Langerhans cells make contact with naive
CD4+ helper T cells and present them with their antigenic load. Interaction between
Langerhans cells and helper T cells involves recognition of the antigen by T-cell receptors, as
well as the interlocking of various adhesion molecules and other surface glycoproteins.
Successful antigen recognition results in a clonal expansion of memory T cells, which spill
into the bloodstream and the entire skin. This phase requires 5 to 21 days, during which no
lesion occurs.

The elicitation (efferent) phase

Upon re-exposure to the allergen, sensitized T cells become activated and secrete potent
lymphokines such as IL-1, IL-2 and interferon gamma (IFN-γ). These in turn induce blast
transformation of T cells, generation of cytotoxic as well as suppressor T cells, recruitment
and activation of macrophages and other effector cells and production of other mediators of
inflammation such as TNF-α and adhesion molecules. Within 8 to 48 hours, this cascade of
events results in vasodilatation and reddening (erythema), dermal and epidermal swelling
(oedema), blister formation (vesiculation) and oozing. If left untreated, this reaction may last
between two and six weeks.

Dampening of the immune response occurs with shedding or degradation of the antigen,
destruction of Langerhans cells, increased production of CD8+ suppressor T cells and
production by keratinocytes of IL-10 which inhibits the proliferation of helper/cytotoxic T
cells.

Clinical Presentation

Morphology. Contact dermatitis may be acute, subacute or chronic. In the acute phase, lesions
appear rapidly and present initially as erythematous, oedematous and pruritic urticarial
plaques. The oedema may be considerable, especially where the skin is loose, such as the
eyelids or the genital area. Within hours, these plaques become clustered with small vesicles
which may enlarge or coalesce to form bullae. When they rupture, they ooze an amber-
coloured, sticky fluid.

Oedema and blistering are less prominent in subacute dermatitis, which is characterized by
erythema, vesiculation, peeling of skin (desquamation), moderate oozing and formation of
yellowish crusts.

In the chronic stage, vesiculation and oozing are replaced by increased desquamation,
thickening of the epidermis, which becomes greyish and furrowed (lichenification) and
painful, deep fissures over areas of movement or trauma. Long-lasting lymphoedema may
result after years of persistent dermatitis.

Distribution. The peculiar pattern and distribution of a dermatitis will often allow the clinician
to suspect its exogenous origin and sometimes identify its causative agent. For example, linear
or serpiginous streaks of erythema and vesicles on uncovered skin are virtually diagnostic of a
plant contact dermatitis, while an allergic reaction due to rubber gloves will be worse on the
back of the hands and around the wrists.

Repeated contact with water and cleansers is responsible for the classic “housewives’
dermatitis”, characterized by erythema, desquamation and fissures of the tips and backs of the
fingers and involvement of the skin between the fingers (interdigital webs). By contrast,
dermatitis caused by friction from tools, or by contact with solid objects tends to be localized
on the palm and underside (volar) area of the fingers.

Irritant contact dermatitis due to fibreglass particles will involve the face, hands and forearms
and will be accentuated in flexures, around the neck and waist, where movement and friction
from clothes will force the spicules into the skin. Involvement of the face, upper eyelids, ears
and submental area suggests an airborne dermatitis. A photocontact dermatitis will spare sun-
protected areas such as the upper eyelids, the submental and retroauricular areas.

Extension to distant sites. Irritant dermatitis remains localized to the area of contact. Allergic
contact dermatitis, especially if acute and severe, is notorious for its tendency to disseminate
away from the site of initial exposure. Two mechanisms may explain this phenomenon. The
first, autoeczematization, also known as id-reaction or the excited skin syndrome, refers to a
state of hypersensitivity of the entire skin in response to a persistent or severe localized
dermatitis. Systemic contact dermatitis occurs when a patient topically sensitized to an
allergen is re-exposed to the same agent by oral or parenteral route. In both cases, a
widespread dermatitis will ensue, which may easily be mistaken for an eczema of endogenous
origin.

Predisposing factors

The occurrence of an occupational dermatitis is influenced by the nature of the contactant, its
concentration and the duration of contact. The fact that under similar conditions of exposure
only a minority of workers will develop a dermatitis is proof of the importance of other
personal and environmental predisposing factors (table 4).
Table 4. Predisposing factors for occupational dermatitis

Age: Younger workers are often inexperienced or careless and are more likely to develop
occupational dermatitis than older workers.

Skin type: Orientals and Blacks are generally more resistant to irritation than Whites.

Pre-existing disease: Atopy predisposes to irritant contact dermatitis; psoriasis or lichen
planus may worsen because of the Koebner phenomenon.

Temperature and humidity: High humidity reduces the effectiveness of the epidermal barrier;
low humidity and cold cause chapping and desiccation of the epidermis.

Working conditions: A dirty worksite is more often contaminated with toxic or allergenic
chemicals; obsolete equipment and lack of protective measures increase the risk of
occupational dermatitis; repetitive movements and friction may cause irritation and calluses.

Age. Younger workers are more likely to develop occupational dermatitis. It may be that they
are often less experienced than their older colleagues, or may have a more careless attitude
about safety measures. Older workers may have become hardened to mild irritants, or they
may have learned how to avoid contact with hazardous substances; alternatively, older workers
may be a self-selected group that did not experience problems, while those who did left the job.

Skin type. Most Black or Oriental skin appears to be more resistant to the effects of contact
irritants than the skin of most Caucasians.

Pre-existing disease. Allergy-prone workers (having a background of atopy manifested by
eczema, asthma or allergic rhinitis) are more likely to develop irritant contact dermatitis.
Psoriasis and lichen planus may be aggravated by friction or repetitive trauma, a phenomenon
called koebnerization. When such lesions are limited to the palms, they may be difficult to
distinguish from chronic irritant contact dermatitis.

Temperature and humidity. Under conditions of extreme heat, workers often neglect to wear
gloves or other appropriate protective gear. High humidity reduces the effectiveness of the
epidermal barrier, while dry and cold conditions promote chapping and fissures.

Working conditions. The incidence of contact dermatitis is higher in worksites which are
dirty, contaminated with various chemicals, have obsolete equipment, or lack protective
measures and hygiene facilities. Some workers are at higher risk because their tasks are
manual and they are exposed to strong irritants or allergens (e.g., hairdressers, printers, dental
technicians).

Diagnosis

A diagnosis of occupational contact dermatitis can usually be made after a careful history and
a thorough physical examination.

History. A questionnaire that includes the name and address of the employer, the worker’s job
title and a description of functions should be completed. The worker should provide a list of
all the chemicals handled and supply information about them, such as is found on the Material
Safety Data Sheets. The date of onset and location of the dermatitis should be noted. It is
important to document the effects of vacation, sick leave, sun exposure and treatment on the
course of the disease. The examining physician should obtain information about the worker’s
hobbies, personal habits, history of pre-existing skin disease, general medical background and
current medication, as well.

Physical examination. The involved areas must be carefully examined. Note should be taken
of the severity and stage of the dermatitis, of its precise distribution and of its degree of
interference with function. A complete skin examination must be performed, looking for tell-
tale stigmata of psoriasis, atopic dermatitis, lichen planus, tinea, etc., which may signify that
the dermatitis is not of occupational origin.

Complementary investigation

The information obtained from history and physical examination is usually sufficient to
suspect the occupational nature of a dermatitis. However, additional tests are required in most
cases to confirm the diagnosis and to identify the offending agent.

Patch testing. Patch testing is the technique of choice for the identification of cutaneous
allergens and it should be routinely performed in all cases of occupational dermatitis
(Rietschel et al. 1995). More than 300 substances are now commercially available. The
standard series, which regroup the most common allergens, can be supplemented with
additional series aimed at specific categories of workers such as hairdressers, dental
technicians, gardeners, printers, etc. Table 5 lists the various irritants and sensitizers
encountered in some of these occupations.

Table 5. Examples of skin irritants and sensitizers with occupations where contact can occur

Construction workers
Irritants: turpentine, thinner, fibreglass, glues
Sensitizers: chromates, epoxy and phenolic resins, colophony, turpentine, woods

Dental technicians
Irritants: detergents, disinfectants
Sensitizers: rubber, epoxy and acrylic monomers, amine catalysts, local anaesthetics, mercury,
gold, nickel, eugenol, formaldehyde, glutaraldehyde

Farmers, florists, gardeners
Irritants: fertilizers, disinfectants, soaps and detergents
Sensitizers: plants, woods, fungicides, insecticides

Food handlers, cooks, bakers
Irritants: soaps and detergents, vinegar, fruits, vegetables
Sensitizers: vegetables, spices, garlic, rubber, benzoyl peroxide

Hairdressers, beauticians
Irritants: shampoos, bleach, peroxide, permanent wave solutions, acetone
Sensitizers: paraphenylenediamine in hair dye, glyceryl monothioglycolate in permanents,
ammonium persulphate in bleach, surfactants in shampoos, nickel, perfume, essential oils,
preservatives in cosmetics

Medical personnel
Irritants: disinfectants, alcohol, soaps and detergents
Sensitizers: rubber, colophony, formaldehyde, glutaraldehyde, disinfectants, antibiotics, local
anaesthetics, phenothiazines, benzodiazepines

Metal workers, machinists and mechanics
Irritants: soaps and detergents, cutting oils, petroleum distillates, abrasives
Sensitizers: nickel, cobalt, chrome, biocides in cutting oils, hydrazine and colophony in
welding flux, epoxy resins and amine catalysts, rubber

Printers and photographers
Irritants: solvents, acetic acid, ink, acrylic monomer
Sensitizers: nickel, cobalt, chrome, rubber, colophony, formaldehyde, paraphenylene diamine
and azo dyes, hydroquinone, epoxy and acrylic monomers, amine catalysts, black-and-white
and colour developers

Textile workers
Irritants: solvents, bleaches, natural and synthetic fibres
Sensitizers: formaldehyde resins, azo and anthraquinone dyes, rubber, biocides

The allergens are mixed in a suitable vehicle, usually petroleum jelly, at a concentration
which was found by trial and error over the years to be non-irritant but high enough to reveal
allergic sensitization. More recently, prepackaged, ready-to-apply allergens embedded in
adhesive strips have been introduced, but so far only the 24 allergens of the standard series are
available. Other substances must be bought in individual syringes.

At the time of testing, the patient must be in a quiescent phase of dermatitis and not be taking
systemic corticosteroids. A small amount of each allergen is applied to shallow aluminium or
plastic chambers mounted on porous, hypoallergenic adhesive tape. These rows of chambers
are affixed to an area free of dermatitis on the patient’s back and left in place for 24 or, more
commonly, 48 hours. A first reading is done when the strips are removed, followed by a
second and sometimes a third reading after four and seven days respectively. Reactions are
graded as follows:

Nil   no reaction

?     doubtful reaction, mild macular erythema

+     weak reaction, mild papular erythema

++    strong reaction, erythema, oedema, vesicles

+++   extreme reaction, bullous or ulcerative

IR    irritant reaction, glazed erythema or erosion resembling a burn

When a photocontact dermatitis (one that requires exposure to ultraviolet light, UV-A) is
suspected, a variant of patch testing, called photopatch testing, is performed. Allergens are
applied in duplicate to the back. After 24 or 48 hours, one set of allergens is exposed to 5
joules of UV-A and the patches are put back in place for another 24 to 48 hours. Equal
reactions on both sides signify allergic contact dermatitis, positive reactions on the UV-
exposed side only are diagnostic of photocontact allergy, while reactions on both sides but
stronger on the UV-exposed side mean contact and photocontact dermatitis combined.
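
These reading rules amount to a simple decision table. The short Python sketch below merely
restates them for clarity; the function name and the wording of the labels are illustrative
inventions and not part of any standard testing protocol or software.

    def interpret_photopatch(covered_positive, uv_exposed_positive, stronger_on_uv_side=False):
        # Classify duplicate patch readings after one set has been irradiated with UV-A.
        if covered_positive and uv_exposed_positive:
            if stronger_on_uv_side:
                return "contact and photocontact dermatitis combined"
            return "allergic contact dermatitis"
        if uv_exposed_positive:
            return "photocontact allergy"
        if covered_positive:
            return "pattern not described above; review technique and consider retesting"
        return "no reaction"

    # Example: a reaction appears only on the irradiated set of patches.
    print(interpret_photopatch(covered_positive=False, uv_exposed_positive=True))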

The technique of patch testing is easy to perform. The tricky part is the interpretation of the
results, which is best left to the experienced dermatologist. As a general rule, irritant reactions
tend to be mild, they burn more than they itch, they are usually present when the patches are
removed and they fade rapidly. By contrast, allergic reactions are pruritic, they reach a peak at
four to seven days and may persist for weeks. Once a positive reaction has been identified, its
relevance must be assessed: is it pertinent to the current dermatitis, or does it reveal past
sensitization? Is the patient exposed to that particular substance, or allergic to a different
but structurally related compound with which it cross-reacts?

The number of potential allergens far exceeds the 300 or so commercially available
substances for patch testing. It is therefore often necessary to test patients with the actual
substances that they work with. While most plants can be tested “as is,” chemicals must be
precisely identified and buffered if their acidity level (pH) falls outside the range of 4 to 8.
They must be diluted to the appropriate concentration and mixed in a suitable vehicle
according to current scientific practice (de Groot 1994). Testing a group of 10 to 20 control
subjects will ensure that irritant concentrations are detected and rejected.
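
As an illustration of the dilution step (the figures are purely hypothetical; appropriate test
concentrations must be taken from the published patch-test literature), preparing roughly 10 g
of a 1% w/w test preparation in petrolatum requires

    0.01 × 10 g = 0.1 g of the substance, mixed into 9.9 g of petrolatum.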

Patch testing is usually a safe procedure. Strong positive reactions may occasionally cause
exacerbation of the dermatitis under investigation. On rare occasions, active sensitization may
occur, especially when patients are tested with their own products. Severe reactions may leave
hypo- or hyperpigmented marks, scars or keloids.

Skin biopsy. The histological hallmark of all types of eczema is epidermal intercellular
oedema (spongiosis) which stretches the bridges between keratinocytes to the point of rupture,
causing intraepidermal vesiculation. Spongiosis is present even in the most chronic dermatitis,
when no macroscopic vesicle can be seen. An inflammatory infiltrate of lymphohistiocytic
cells is present in the upper dermis and migrates into the epidermis (exocytosis). Because a
skin biopsy cannot distinguish between the various types of dermatitis, this procedure is rarely
performed, except in rare cases where the clinical diagnosis is unclear and in order to rule out
other conditions such as psoriasis or lichen planus.

Other procedures. It may at times be necessary to perform bacterial, viral or fungal cultures,
as well as potassium hydroxide microscopic preparations in search of fungi or ectoparasites.
Where the equipment is available, irritant contact dermatitis can be assessed and quantified by
various physical methods, such as colorimetry, evaporimetry, laser-Doppler velocimetry,
ultrasonography and the measurement of electrical impedance, conductance and capacitance
(Adams 1990).
Workplace. On occasion, the cause of an occupational dermatitis is uncovered only after a
careful observation of a particular worksite. Such a visit allows the physician to see how a
task is performed and how it might be modified to eliminate the risk of occupational
dermatitis. Such visits should always be arranged with the health officer or supervisor of the
plant. The information that it generates will be useful to both the worker and the employer. In
many localities, workers have the right to request such visits and many work sites have active
health and safety committees which do provide valuable information.

Treatment

Local treatment of an acute, vesicular dermatitis will consist of thin, wet dressings soaked in
lukewarm saline, Burow’s solution or tap water, left in place for 15 to 30 minutes, three to
four times a day. These compresses are followed by the application of a strong topical
corticosteroid. As the dermatitis improves and dries up, the wet dressings are spaced and
stopped and the strength of the corticosteroid is decreased according to the part of the body
being treated.

If the dermatitis is severe or widespread, it is best treated with a course of oral prednisone, 0.5
to 1.0 mg/kg/day for two to three weeks. Systemic first-generation antihistamines are given as
needed to provide sedation and relief from pruritus.
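
As a worked example of this weight-based dosing (the 70 kg body weight is purely
illustrative), a course for a 70 kg worker would be

    0.5 to 1.0 mg/kg/day × 70 kg = 35 to 70 mg of prednisone daily for two to three weeks.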

Subacute dermatitis usually responds to mid-strength corticosteroid creams applied two to
three times a day, often combined with protective measures such as the use of cotton liners
under vinyl or rubber gloves when contact with irritants or allergens cannot be avoided.

Chronic dermatitis will require the use of corticosteroid ointments, coupled with the frequent
application of emollients, the greasier the better. Persistent dermatitis may need to be treated
with psoralen and ultraviolet-A (PUVA) phototherapy, or with systemic immunosuppressors
such as azathioprine (Guin 1995).

In all cases, strict avoidance of causative substances is a must. It is easier for the worker to
stay away from offending agents if he or she is given written information which specifies their
names, synonyms, sources of exposure and cross-reaction patterns. This printout should be
clear, concise and written in terms that the patient can easily understand.

Worker’s compensation

It is often necessary to withdraw a patient from work. The physician should specify as
precisely as possible the estimated length of the disability period, keeping in mind that full
restoration of the epidermal barrier takes four to five weeks after the dermatitis is clinically
cured. The legal forms that will allow the disabled worker to receive adequate compensation
should be diligently filled out. Finally, the extent of permanent impairment or the presence of
functional limitations must be determined, since these may render a patient unfit to return to
his or her former work and make him or her a candidate for rehabilitation.
Prevention of Occupational Dermatoses

Written by ILO Content Manager

The goal of occupational health programmes is to allow workers to maintain their job and
their health over several years. The development of effective programmes requires the
identification of sectoral, population-based, and workplace-specific risk factors. This
information can then be used to develop prevention policies both for groups and individuals.

The Québec Occupational Health and Safety Commission (Commission de la santé et de la
sécurité au travail du Québec) has characterized work activities in 30 industrial, commercial
and service sectors (Commission de la santé et de la sécurité au travail 1993). Its surveys
reveal that occupational dermatoses are most prevalent in the food and beverage industries,
medical and social services, miscellaneous commercial and personal services and construction
(including public works). Affected workers are typically engaged in service, manufacturing,
assembly, repair, materials handling, food-processing, or health-care activities.

Occupational dermatoses are particularly prevalent in two age groups: young and
inexperienced workers, who may be unaware of the sometimes insidious risks associated with
their work, and workers approaching retirement age, who may not have noticed the
progressive drying of their skin over the years, a dryness that increases over several
consecutive workdays. Because of such dehydration, repeated exposure to previously well-
tolerated irritant or astringent substances may cause irritative dermatitis in these workers.

As table 1 indicates, even though most cases of occupational dermatoses do not involve
compensation exceeding two weeks, a significant number of cases may persist for over two
months (Durocher and Paquette 1985). This table clearly illustrates the importance of
preventing chronic dermatoses requiring prolonged work absences.

Table 1. Occupational dermatoses in Quebec in 1989: Distribution by length of compensation

Length of compensation (days)   0     1–14   15–56   57–182   >183
Number of cases (total: 735)    10    370    195     80       80

Source: Commission de la santé et de la sécurité au travail, 1993.
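
Reading the table against the two-month threshold mentioned above (taking 56 days as
roughly two months), the proportion of prolonged cases is

    (80 + 80) / 735 ≈ 0.22,

that is, roughly one compensated case in five lasted more than two months.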

Risk Factors

Many substances used in industry are capable of causing dermatoses, the risk of which
depends on the concentration of the substance and the frequency and duration of skin contact.
The general classification scheme presented in table 2, based on the classification of
risk factors as mechanical, physical, chemical or biological, is a useful tool for the
identification of risk factors during site visits. During workplace evaluation, the presence of
risk factors may be either directly observed or suspected on the basis of observed skin lesions.
Particular attention is paid to this in the classification scheme presented in table 2. In some
cases effects specific to a given risk factor may be present, while in others, the skin disorders
may be associated with several factors in a given category. Disorders of this last type are
known as group effects. The specific cutaneous effects of physical factors are listed in table 2
and described in other sections of this chapter.
Table 2. Risk factors and their effects on the skin

Mechanical factors: trauma, friction, pressure, dusts
Group effects: cuts, punctures, blisters; abrasions, isomorphism; lichenification; calluses

Physical factors: radiation, humidity, heat, cold
Specific effects: photodermatitis, radiodermatitis, cancer; maceration, irritation; heat rash,
burns, erythema; frostbite, xeroderma, urticaria, panniculitis, Raynaud’s phenomenon

Chemical factors: acids, bases; detergents, solvents; metals, resins; cutting oils; dyes, tar;
rubber, etc.
Group effects: dehydration, inflammation, necrosis, allergy, photodermatitis, dyschromia

Biological factors: bacteria, viruses, dermatophytes, parasites, plants, insects
Specific effects: pyodermatitis, multiple warts, dermatomycosis, parasitosis, phytodermatitis,
urticaria

Risk co-factors: eczema (atopic, dyshidrotic, seborrhoeic, nummular), psoriasis, xeroderma,
acne

Mechanical factors include repeated friction, excessive and prolonged pressure, and the
physical action of some industrial dusts, whose effects are a function of the shape and size of
the dust particles and the extent of their friction with the skin. The injuries themselves may be
mechanical (especially in workers exposed to repeated vibrations), chemical, or thermal, and
include physical lesions (ulcers, blisters), secondary infection, and isomorphism (Koebner
phenomenon). Chronic changes, such as scars, keloid, dyschromia, and Raynaud’s
phenomenon, which is a peripheral neurovascular alteration caused by prolonged use of
vibrating tools, may also develop.

Chemical factors are by far the most common cause of occupational dermatoses. To establish
an exhaustive list of the many chemicals is not practical. They may cause allergic, irritant or
photodermatotic reactions, and may leave dyschromic sequelae. The effects of chemical
irritation vary from simple drying to inflammation to complete cell necrosis. More
information on this subject is provided in the article on contact dermatitis. Material Safety
Data Sheets, which provide toxicological and other information, are indispensable tools for
developing effective preventive measures against chemicals. Several countries, in fact, require
chemical manufacturers to provide every workplace using their products with information on
the occupational health hazards posed by their products.

Bacterial, viral and fungal infections contracted in the workplace arise from contact with
contaminated materials, animals, or people. Infections include pyodermatitis, folliculitis,
whitlow (panaris), dermatomycosis, anthrax and brucellosis. Workers in the food-processing sector may
develop multiple warts on their hands, but only if they have already suffered microtraumas
and are exposed to excessive levels of humidity for prolonged periods (Durocher and Paquette
1985). Both animals and humans may act as sources of parasitic infestations such as mites,
scabies and head lice; day-care and health-care workers are among those exposed. Phytodermatitis may be caused
by plants (Rhus sp.) or flowers (alstroemeria, chrysanthemums, tulips). Finally, some wood
extracts may cause contact dermatitis.

Risk Co-factors

Some non-occupational cutaneous pathologies may exacerbate the effects of environmental
factors on workers’ skin. For example, it has long been recognized that the risk of irritant
contact dermatitis is greatly increased in individuals with a medical history of atopy, even in
the absence of an atopic dermatitis. In a study of 47 cases of irritant contact dermatitis of the
hands of food-processing workers, 64% had a history of atopy (Cronin 1987). Individuals
with atopic dermatitis have been shown to develop more severe irritation when exposed to
sodium lauryl sulphate, commonly found in soaps (Agner 1991). Predisposition to allergies
(Type I) (atopic diathesis) does not however increase the risk of delayed allergic (Type IV)
contact dermatitis, even to nickel (Schubert et al. 1987), the allergen most commonly
screened for. On the other hand, atopy has recently been shown to favour the development of
contact urticaria (Type I allergy) to rubber latex among health-care workers (Turjanmaa 1987;
Durocher 1995) and to fish among caterers (Cronin 1987).

In psoriasis, the outermost layer of the skin (stratum corneum) is thickened but parakeratotic
and therefore less resistant to skin irritants and mechanical traction. Frequent skin injury
may worsen pre-existing psoriasis, and new isomorphic psoriatic lesions may develop on scar
tissue.

Repeated contact with detergents, solvents, or astringent dusts may lead to secondary irritant
contact dermatitis in individuals suffering from xeroderma. Similarly, exposure to frying oils
may exacerbate acne.

Prevention

A thorough understanding of relevant risk factors is a prerequisite to establishing prevention
programmes, which may be either collective (institutional) or personal, such as reliance on
personal protective equipment. The efficacy of prevention programmes depends on the close
collaboration of workers and employers during their development. Table 3 provides some
information on prevention.
Table 3. Collective measures (group approach) to prevention

Collective measures
· Substitution
· Environmental control: use of tools for handling materials; ventilation; closed systems; automation
· Information and training
· Careful work habits
· Follow-up

Personal protection
· Skin hygiene
· Protective agents
· Gloves

Workplace Prevention

The primary goal of workplace preventive measures is the elimination of hazards at their
source. When feasible, substitution of a toxic substance by a non-toxic one is the ideal
solution. For example, the toxic effects of a solvent being incorrectly used to clean the skin
can be eliminated by substituting a synthetic detergent that presents no systemic hazard and
that is less irritating. Several non-allergenic cement powders, in which ferrous sulphate is added
to reduce the hexavalent chromium content, a well-known allergen, are now available. In water-based cooling
systems, chromate-based anti-corrosion agents can be replaced by zinc borate, a weaker
allergen (Mathias 1990). Allergenic biocides in cutting oils can be replaced by other
preservatives. The use of gloves made of synthetic rubber or PVC can eliminate the
development of latex allergies among health-care workers. Replacement of
aminoethylethanolamine by triethanolamine in welding fluxes used to weld aluminium cables has
led to a reduction in allergies (Lachapelle et al. 1992).

Modification of production processes to avoid skin contact with hazardous substances may be
an acceptable alternative when substitution is impossible or the risk is low. Simple
modifications include using screens or flexible tubes to eliminate splashing during the transfer
of liquids, or filters that retain residues and reduce the need for manual cleaning. Grasp points
on tools and equipment designed to avoid excessive pressure and friction on the hands and to
prevent skin contact with irritants may also help. Local exhaust ventilation with capture inlets
that limit nebulisation or reduce the concentration of airborne dusts is also
useful. Where processes have been completely automated in order to avoid environmental
hazards, particular attention should be paid to training workers responsible for repairing and
cleaning the equipment and specific preventive measures may be required to limit their
exposure (Lachapelle et al. 1992).

All personnel must be aware of the hazards present in their workplace, and collective
measures can only be effective when implemented in conjunction with a comprehensive
information programme. Material Safety Data Sheets can be used to identify hazardous and
potentially hazardous substances. Hazard warning signs can be used to rapidly identify these
substances. A simple colour code allows visual coding of the risk level. For example, a red
sticker could signal the presence of a hazard and the necessity of avoiding direct skin contact.
This code would be appropriate for a corrosive substance that rapidly attacks the skin.
Similarly, a yellow sticker could indicate the need for prudence, for example when dealing
with a substance capable of damaging the skin following repeated or prolonged contact
(Durocher 1984). Periodic display of posters and the occasional use of audio-visual aids
reinforce the information delivered and stimulate interest in occupational dermatosis
prevention programmes.

Complete information on the hazards associated with work activities should be provided to
workers prior to starting work. In several countries, workers are given special occupational
training by professional instructors.

Workplace training must be repeated each time a process or task is changed with resulting
change in risk factors. Neither an alarmist nor a paternalistic attitude favours good working
relationships. Employers and workers are partners who both desire work to be executed
safely, and the information delivered will only be credible if it is realistic.

Given the absence of safety standards for dermatotoxic substances (Mathias 1990), preventive
measures must be supported by vigilant observation of the state of workers’ skin. Fortunately
this is easily implemented, since the skin, particularly that on the hands and face, can be
directly observed by everyone. The goal of this type of observation is the identification of
early signs of cutaneous modifications indicating an overwhelming of the body’s natural
equilibrium. Workers and health and safety specialists should therefore be on the lookout for
the following early warning signs:

· progressive drying
· maceration
· localized thickening
· frequent trauma
· redness, particularly around hairs.

Prompt identification and treatment of cutaneous pathologies is essential, and their underlying
causal factors must be identified, to prevent them from becoming chronic.

When workplace controls are unable to protect the skin from contact with hazardous
substances, the duration of skin contact should be minimized. For this purpose, workers
should have ready access to appropriate hygienic equipment. Contamination of cleaning
agents can be avoided by using closed containers equipped with a pump that dispenses an
adequate amount of the cleanser with a single press. Selecting a cleanser requires a
compromise between cleaning power and the potential for irritation. For example, so-called
high-performance cleansers often contain solvents or abrasives which increase irritation. The
cleanser selected should take into account the specific characteristics of the workplace, since
workers will often simply use a solvent if available cleansers are ineffective. Cleansers may
take the form of soaps, synthetic detergents, waterless pastes or creams, abrasive preparations
and antimicrobial agents (Durocher 1984).

In several occupations, the application of a protective cream before work facilitates skin
cleaning, regardless of the cleaner used. In all cases, the skin must be thoroughly rinsed and
dried after each washing. Failure to do so may increase irritation, for example by re-
emulsification of the soap residues caused by the humidity inside impermeable gloves.

Industrial soaps are usually provided as liquids dispensed by hand pressure. They are
composed of fatty acids of animal (lard) or vegetable (oil) origin, neutralized with a base (e.g.,
sodium hydroxide). Neutralization may be incomplete and may leave residual free alkali
capable of irritating the skin. To avoid this, a near-neutral pH (4 to 10) is desirable. These
liquid soaps are adequate for many tasks.

Synthetic detergents, available in both liquid and powder form, emulsify greases and thus
tend to remove the skin’s sebum, the substance that protects the skin against drying. This
degreasing of the skin is generally less marked with soaps than with synthetic detergents and
is proportional to the detergent concentration. Emollients such as glycerine, lanolin and
lecithin are often added to detergents to counteract this effect.

Pastes and creams, also known as “waterless soaps” are emulsions of oil-based substances in
water. Their primary cleaning agent is a solvent, generally a petroleum derivative. They are
called “waterless” because they are effective in the absence of tap water, and are typically
used to remove stubborn soils or to wash hands when water is unavailable. Because of their
harshness, they are not considered cleansers of choice. Recently, “waterless soaps” containing
synthetic detergents that are less irritating to the skin than solvents have become available.
The American Association of Soap and Detergent Manufacturers recommends washing with a
mild soap after using solvent-based “waterless soaps.” Workers who use “waterless soaps”
three or four times per day should apply a moisturizing lotion or cream at the end of the work
day, in order to prevent drying.

Abrasive particles, which are often added to one of the cleansers described above to increase
their cleaning power, are irritants. They may be soluble (e.g., borax) or insoluble. Insoluble
abrasives may be mineral (e.g., pumice), vegetable (e.g., nut shells) or synthetic (e.g.,
polystyrene).
Antimicrobial cleaners should only be used in workplaces where there is a real risk of
infection, since several of them are potential allergens and workers should not be exposed
needlessly.

Under the influence of certain substances or repeated washings, workers’ hands may tend to
dry out. Long-term maintenance of good skin hygiene under these conditions requires daily
moisturizing, the frequency of which will depend on the individual and the type of work. In
many cases, moisturizing lotions or creams, also known as hand creams, are adequate. In
cases of severe drying or when the hands are immersed for prolonged periods, hydrophilic
vaselines are more appropriate. So-called protective or barrier creams are usually moisturizing
creams; they may contain silicones or zinc or titanium oxides. Exposure-specific protective
creams are rare, with the exception of those which protect against ultraviolet radiation. These
have been greatly improved over the last few years and now provide effective protection
against both UV-A and UV-B. A minimum protection factor of 15 (North American scale) is
recommended. Stokogard™ cream appears to be effective against contact dermatitis caused by
poison ivy. Protective or barrier creams should never be seen as equivalent to some form of
invisible impermeable glove (Sasseville 1995). Furthermore, protective creams are only
effective on healthy skin.

While few people like wearing protective equipment, there may be no choice when the
measures described above are inadequate. Protective equipment includes: boots, aprons,
visors, sleeves, overalls, shoes, and gloves. These are discussed elsewhere in the
Encyclopaedia.

Many workers complain that protective gloves reduce their dexterity, but their use is
nevertheless inevitable in some situations. Special efforts are required to minimize their
inconvenience. Many types are available, both water-permeable (cotton, leather, metal mesh,
Kevlar™, asbestos) and water-impermeable (rubber latex, neoprene, nitrile, polyvinyl chloride,
Viton™, polyvinyl alcohol, polyethylene). The type selected should take into account the
specific needs of each situation. Cotton offers minimal protection but good ventilation.
Leather is effective against friction, pressure, traction and some types of injury. Metal mesh
protects against cuts. Kevlar™ is fire-resistant. Asbestos is fire- and heat-resistant. The solvent
resistance of water-impermeable gloves is highly variable and depends on their composition
and thickness. To increase solvent resistance, some researchers have developed gloves
incorporating multiple polymer layers.

Several characteristics have to be taken into account when selecting gloves. These include
thickness, flexibility, length, roughness, wrist and finger adjustment, and chemical,
mechanical, and thermal resistance. Several laboratories have developed techniques, based on
the measurement of break-through times and permeability constants, with which to estimate
the resistance of gloves to specific chemicals. Lists to help guide glove selection are also
available (Lachapelle et al. 1992; Berardinelli 1988).
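
As an illustration of how break-through time and steady-state permeation data can be combined in a rough screening calculation, the following Python sketch estimates the mass of a chemical crossing a glove during a task. It is not a published selection method; the function name, the default exposed area and the two glove data sets are hypothetical placeholders used only to show the arithmetic.

# Illustrative screening sketch only; the glove data below are hypothetical,
# not measured values from any published permeation table.

def mass_permeated_ug(breakthrough_min, permeation_rate_ug_cm2_min,
                      task_duration_min, exposed_area_cm2=800.0):
    """Rough estimate of chemical mass (µg) crossing a glove during a task.

    breakthrough_min: time before the chemical is first detected inside the glove
    permeation_rate_ug_cm2_min: steady-state permeation rate (µg per cm2 per minute)
    task_duration_min: duration of continuous contact with the chemical
    exposed_area_cm2: glove area in contact (default is roughly one hand)
    """
    if task_duration_min <= breakthrough_min:
        return 0.0  # the task ends before break-through: essentially no permeation
    return (permeation_rate_ug_cm2_min * exposed_area_cm2
            * (task_duration_min - breakthrough_min))

# Hypothetical comparison of two glove materials for a 120-minute solvent task
for material, bt, rate in [("glove A", 90, 1.5), ("glove B", 10, 25.0)]:
    print(material, round(mass_permeated_ug(bt, rate, task_duration_min=120)), "µg")

Actual glove selection should, of course, rely on measured break-through and permeation data for the specific chemical, glove model and thickness, such as the lists cited above.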

In some cases, the prolonged wear of protective gloves may cause allergic contact dermatitis
due to glove components or to allergens that penetrate the gloves. Wearing protective gloves
is also associated with an increased risk of skin irritation, due to prolonged exposure to high
levels of humidity within the glove or penetration of irritants through perforations. To avoid
deterioration of their condition, all workers suffering from hand dermatitis, regardless of its
origin, should avoid wearing gloves that increase the heat and humidity around their lesions.

Establishing a comprehensive occupational dermatosis prevention programme depends on
careful adaptation of standards and principles to the unique characteristics of each workplace.
To ensure their effectiveness, prevention programmes should be revised periodically to take
into account changes in the workplace, experience with the programme and technological
advances.

Occupational Nail Dystrophy

Written by ILO Content Manager

The function of the epithelium of the epidermis is to form the surface or horny layer of the
skin, of which the major component is the fibrous protein, keratin. In certain areas the
epithelium is specially developed to produce a particular type of keratin structure. One of
these is hair, and another is nail. The nail plate is formed partly by the epithelium of the
matrix and partly by that of the nail bed. The nail grows in the same way as the hair and the
horny layer and is affected by similar pathogenic mechanisms to those responsible for
diseases of the hair and epidermis. Some elements such as arsenic and mercury accumulate in
the nail as in the hair.

Figure 1 shows that the nail matrix is an invagination of the epithelium and it is covered by
the nail fold at its base. A thin film of horny layer called the cuticle serves to seal the
paronychial space by stretching from the nail fold to the nail plate.

Figure 1. The structure of the nail.

The most vulnerable parts of the nail are the nail fold and the area beneath the tip of the nail
plate, although the nail plate itself may suffer direct physical or chemical traumata. Chemical
substances or infective agents may penetrate under the nail plate at its free margin. Moisture
and alkali may destroy the cuticle and allow the entry of bacteria and fungi which will cause
inflammation of the paronychial tissue and produce secondary growth disturbance of the nail
plate.
The most frequent causes of nail disease are chronic paronychia, ringworm, trauma, psoriasis,
impaired circulation and eczema or other dermatitis. Paronychia is an inflammation of the nail
fold. Acute paronychia is a painful suppurative condition requiring antibiotic and sometimes
surgical treatment. Chronic paronychia follows loss of the cuticle which allows water,
bacteria and Candida albicans to penetrate into the paronychial space. It is common among
persons with intense exposure to water, alkaline substances and detergents, such as kitchen
staff, cleaners, fruit and vegetable preparers and canners and housewives. Full recovery
cannot be achieved until the integrity of the cuticle and eponychium sealing the paronychial
space has been restored.

Exposure to cement, lime and organic solvents, and work such as that of a butcher or
poulterer may also cause trauma of the cuticle and nail folds.

Any inflammation or disease of the nail matrix may result in dystrophy (distortion) of the nail
plate, which is usually the symptom which has brought the condition to medical attention.
Exposure to chilling cold, or the arterial spasm of Raynaud’s phenomenon, can also damage
the matrix and produce nail dystrophy. Sometimes the damage is temporary and the nail
dystrophy will disappear after removal of the cause and treatment of the inflammatory
condition. (An example is shown in figure 2.)

Figure 2. Onychodystrophy secondary to contact dermatitis resulting from chronic irritation.

One cause of nail damage is the direct application of certain cosmetic preparations, such as
base coats under nail polish, nail hardeners and synthetic nail dressings to the nail.

Some special occupations may cause nail damage. There has been a report of dystrophy due
to handling the concentrated dipyridylium pesticide compounds paraquat and diquat. During
the manufacture of selenium dioxide, a fine powder of this substance may get under the fringe
of the nail plate and cause intense irritation and necrosis of the finger tip and damage to the
nail plate. Care should be taken to warn workers of this hazard and advise them always to
clean the subungual areas of their fingers each day.

Certain types of allergic contact dermatitis of the finger tips frequently result in secondary
nail dystrophy. Six common sensitizers which will do this are:
1. amethocaine and chemically related local anaesthetics used by dental surgeons
2. formalin used by mortuary attendants, anatomy, museum and laboratory assistants
3. garlic and onion used by cooks
4. tulip bulbs and flowers handled by horticulturists and florists
5. p-tert-butylphenol formaldehyde resin used by shoe manufacturers and repairers
6. aminoethylethanolamine used in some aluminium fluxes.

The diagnosis can be confirmed by a positive patch test. The condition of the skin and nails
will recover when contact ceases.

Protective measures

In many cases nails can be safeguarded by the use of suitable hand protection. However,
where hand exposure exists, nails should receive adequate care, consisting essentially of
preserving the cuticle and protecting the subungual area. The skin under the free margin of the
nails should be cleaned daily in order to remove foreign debris or chemical irritants. Where
barrier creams or lotions are employed, care should be taken to ensure that the cuticle and the
area under the free margin are coated.

To preserve the intact cuticle it is necessary to avoid excessive manicure or trauma,
maceration by prolonged exposure to water, and dissolution by repeated exposure to alkali,
solvent and detergent solutions.

Stigmata

Written by ILO Content Manager

Occupational stigma or occupational marks are work-induced anatomical lesions which do not
impair working capacity. Stigmata are generally caused by mechanical, chemical or thermal
skin irritation over a long period and are often characteristic of a particular occupation. Any
kind of pressure or friction on the skin may produce an irritating effect, and a single violent
pressure may break the epidermis, leading to the formation of excoriations, seropurulent
blisters and infection of the skin and underlying tissues. Frequent repetition of moderate
irritant action, on the other hand, does not disrupt the skin but stimulates defensive
reactions (thickening and keratinization of the epidermis). The process may take three forms:

1. a diffuse thickening of the epidermis which merges into the normal skin, with preservation
and occasional accentuation of skin ridges and unimpaired sensitivity
2. a circumscribed callosity made up of smooth, elevated, yellowish, horny lamellae, with
partial or complete loss of skin ridges and impairment of sensitivity. The lamellae are not
circumscribed; they are thicker in the centre and thinner towards the periphery and blend
into the normal skin
3. a circumscribed callosity, mostly raised above the normal skin, 15 mm in diameter, yellowish-
brown to black in colour, painless and occasionally associated with hypersecretion of the
sweat glands.
Callosities are usually produced by mechanical agents, sometimes with the aid of a thermal
irritant (as in the case of glass-blowers, bakers, firefighters, meat curers, etc.), when they are
dark-brown to black in colour with painful fissures. If, however, the mechanical or thermal
agent is combined with a chemical irritant, callosities undergo discoloration, softening and
ulceration.

Callosities which represent a characteristic occupational reaction (particularly on the skin of
the hand, as shown in figures 1 and 2) are seen in many occupations. Their form and
localization are determined by the site, force, manner and frequency of the pressure exerted,
as well as by the tools or materials used. The size of callosities may also reveal a congenital
tendency to skin keratinization (ichthyosis, hereditary keratosis palmaris). These factors may
also often be decisive as concerns deviations in the localization and size of callosities in
manual workers.

Figure 1. Occupational stigmata on the hands.

(a) Tanner’s ulcers; (b) Blacksmith; (c) Sawmill worker; (d) Stonemason; (e) Mason;
(f) Marble mason; (g) Chemical factory worker; (h) Paraffin refinery worker; (i) Printer;
(j) Violinist
Figure 2. Calluses at pressure points on the palm of the hand.

Callosities normally act as protective mechanisms but may, under certain conditions, acquire
pathological features; for this reason they should not be overlooked when pathogenesis and,
particularly, prophylaxis of occupational dermatoses are envisaged.

When a worker gives up a callosity-inducing job, the superfluous horny layers undergo
exfoliation, the skin becomes thin and soft, the discoloration disappears and the normal
appearance is restored. The time required for skin regeneration varies: occupational callosities
on the hands may occasionally be seen several months or years after the work has been given
up (especially in blacksmiths, glass-blowers and sawmill workers). They persist longer in
senile skin and when associated with connective tissue degeneration and bursitis.

Fissures and erosions of the skin are characteristic of certain occupations (railway workers,
gunsmiths, bricklayers, goldsmiths, basket weavers, etc.). The painful “tanner’s ulcer”
associated with chromium compound exposures (figure 1) is round or oval in shape and from 2
to 10 mm in diameter. The localisation of occupational lesions (e.g., on confectioners’ fingers,
tailors’ fingers and palms) is also characteristic.

Pigment spots are caused by the absorption of dyes through the skin, the penetration of
particles of solid chemical compounds or industrial metals, or the excessive accumulation of
the skin pigment, melanin, in workers in coking or generator plants after three to five years of
work. In some establishments, about 32% of workers were found to exhibit melanoderma.
Pigment spots are mostly found in chemical workers.

As a rule, dyes absorbed through the skin cannot be removed by routine washing, hence their
permanence and significance as occupational stigmata. Pigment spots occasionally result from
impregnation with chemical compounds, plants, soil or other substances to which the skin is
exposed during the work process.

A number of occupational stigmata may be seen in the region of the mouth (e.g., the Burton’s
line on the gums of workers exposed to lead, tooth erosion in workers exposed to acid fumes,
blue colouring of the lips in workers engaged in aniline manufacture, and acne). Characteristic
odours connected with certain occupations may also be considered occupational stigmata.
Skin Diseases References
Adams, RM. 1988. Medicolegal aspects of occupational skin diseases. Dermatol Clin 6:121.

—. 1990. Occupational Skin Disease. 2nd edn. Philadelphia: Saunders.

Agner, T. 1991. Susceptibility of atopic dermatitis patients to irritant dermatitis caused by
sodium lauryl sulfate. A Derm-Ven 71:296-300.

Balch, CM, AN Houghton, and L Peters. 1993. Cutaneous melanoma. In Cancer: Principles
and Practice of Oncology, edited by VTJ DeVita, S Hellman, and SA Rosenberg.
Philadelphia: JB Lippincott.

Beral, V, H Evans, H Shaw, and G Milton. 1982. Malignant melanoma and exposure to
fluorescent lighting at work. Lancet II:290-293.

Berardinelli, SP. 1988. Prevention of occupational skin disease through use of chemical
protective gloves. Dermatol Clin 6:115-119.

Bijan, S. 1993. Cancers of the skin. In Cancer: Principles & Practice of Oncology, edited by
VTJ DeVita, S Hellman, and SA Rosenberg. Philadelphia: JB Lippincott.

Blair, A, S Hoar Zahm, NE Pearce, EF Heinerman, and J Fraumeni. 1992. Clues to cancer
etiology from studies of farmers. Scand J Work Environ Health 18:209-215.

Commission de la santé et de la sécurité du travail. 1993. Statistiques sur les lésions
professionnelles de 1989. Québec: CSST.

Cronin, E. 1987. Dermatitis of the hands in caterers. Contact Dermatitis 17: 265-269.

De Groot, AC. 1994. Patch Testing: Test Concentrations and Vehicles for 3,700 Allergens.
2nd ed. Amsterdam: Elsevier.

Durocher, LP. 1984. La protection de la peau en milieu de travail. Le Médecin du Québec
19:103-105.

—. 1995. Les gants de latex sont-ils sans risque? Le Médecin du Travail 30:25-27.

Durocher, LP and N Paquette. 1985. Les verrues multiples chez les travailleurs de
l’alimentation. L’Union Médicale du Canada 115:642-646.

Ellwood, JM and HK Koh. 1994. Etiology, epidemiology, risk factors, and public health
issues of melanoma. Curr Opin Oncol 6:179-187.

Gellin, GA. 1972. Occupational Dermatoses. Chicago: American Medical Assoc.

Guin, JD. 1995. Practical Contact Dermatitis. New York: McGraw-Hill.


Hagmar, L, K Linden, A Nilsson, B Norrving, B Akesson, A Schutz, and T Moller. 1992.
Cancer incidence and mortality among Swedish Baltic Sea fisherman. Scand J Work Environ
Health 18:217-224.

Hannaford, PC, L Villard Mackintosh, MP Vessey, and CR Kay. 1991. Oral contraceptives
and malignant melanoma. Br J Cancer 63:430-433.

Higginson, J, CS Muir, and M Munoz. 1992. Human Cancer: Epidemiology and Environmental
Causes. Cambridge Monographs on Cancer Research. Cambridge, UK: CUP.

International Agency for Research on Cancer (IARC). 1983. Polynuclear aromatic compounds,
Part 1, Chemical, environmental and experimental data. Monographs on the Evaluation of the
Carcinogenic Risk of Chemicals to Humans, No. 32. Lyon: IARC.

—. 1984a. Polynuclear aromatic compounds, Part 2, Carbon blacks, mineral oils and some
Nitroarenes. Monographs on the Evaluation of the Carcinogenic Risk of Chemicals to
Humans, No. 33. Lyon: IARC.

—. 1984b. Polynuclear aromatic compounds, Part 3, Industrial exposures in aluminium
production, coal gasification, coke production, and iron and steel founding. Monographs on
the Evaluation of the Carcinogenic Risk of Chemicals to Humans, No. 34. Lyon: IARC.

—. 1985a. Polynuclear aromatic compounds, Part 4, Bitumens, coal tars and derived products,
shale-oils and soots. Monographs on the Evaluation of the Carcinogenic Risk of Chemicals to
Humans, No. 35. Lyon: IARC.

—. 1985b. Solar and ultraviolet radiation. Monographs on the Evaluation of the Carcinogenic
Risk of Chemicals to Humans, No. 55. Lyon: IARC.

—. 1987. Overall Evaluations of Carcinogenicity: An Updating of IARC Monographs Volumes
1 to 42. Monographs on the Carcinogenic Risks to Humans, Suppl. 7. Lyon: IARC.

—. 1990. Cancer: Causes, occurrence and control. IARC Scientific Publications, No. 100.
Lyon: IARC.

—. 1992a. Cancer incidence in five continents. Vol. VI. IARC Scientific Publications, No.
120. Lyon: IARC.

—. 1992b. Solar and ultraviolet radiation. Monographs on the Evaluation of Carcinogenic
Risks to Humans, No. 55. Lyon: IARC.

—. 1993. Trends in cancer incidence and mortality. IARC Scientific Publications, No. 121.
Lyon: IARC.

Koh, HK, TH Sinks, AC Geller, DR Miller, and RA Lew. 1993. Etiology of melanoma.
Cancer Treat Res 65:1-28.

Kricker, A, BK Armstrong, ME Jones, and RC Burton. 1993. Health, solar UV radiation and
environmental change. IARC Technical Report, No. 13. Lyon: IARC.
Lachapelle, JM, P Frimat, D Tennstedt, and G Ducombs. 1992. Dermatologie professionnelle
et de l’environnement. Paris: Masson.

Mathias, T. 1990. Prevention of occupational contact dermatitis. J Am Acad Dermatol
23:742-748.

Miller, D and MA Weinstock. 1994. Nonmelanoma skin cancer in the United States:
Incidence. J Am Acad Dermatol 30:774-778.

Nelemans, PJ, R Scholte, H Groenendal, LA Kiemeney, FH Rampen, DJ Ruiter, and AL
Verbeek. 1993. Melanoma and occupation: results of a case-control study in The Netherlands.
Brit J Ind Med 50:642-646.

Rietschel, RI, and JF Fowler Jr. 1995. Fisher’s Contact Dermatitis. 4th ed. Baltimore:
Williams & Wilkins.

Sahel, JA, JD Earl, and DM Albert. 1993. Intraocular melanomas. In Cancer: Principles &
Practice of Oncology, edited by VTJ DeVita, S Hellman, and SA Rosenberg. Philadelphia: JB
Lippincott.

Sasseville, D. 1995. Occupational dermatoses: Employing good diagnostic skills. Allergy
8:16-24.

Schubert, H, N Berova, A Czernielewski, E Hegyi and L Jirasek. 1987. Epidemiology of
nickel allergy. Contact Dermatitis 16:122-128.

Siemiatycki, J, M Gerin, R Dewar, L Nadon, R Lakhani, D Begin, and L Richardson. 1991.
Associations between occupational circumstances and cancer. In Risk Factors for Cancer in
the Workplace, edited by J Siematycki. London, Boca Raton: CRC Press.

Stidham, KR, JL Johnson, and HF Seigler. 1994. Survival superiority of females with
melanoma. A multivariate analysis of 6383 patients exploring the significance of gender in
prognostic outcome. Archives of Surgery 129:316-324.

Turjanmaa, K. 1987. Incidence of immediate allergy to latex gloves in hospital personnel.
Contact Dermatitis 17:270-275.

Copyright 2015 International Labour Organization


13. Systemic Conditions
Chapter Editor: Howard M. Kipen

Systemic Conditions: An Introduction

Written by ILO Content Manager

The last edition of this Encyclopaedia did not contain articles on either sick building
syndrome (SBS) or multiple chemical sensitivities (MCS) (the latter term was coined by
Cullen, 1987). Most practitioners of occupational medicine are not comfortable with such
symptomatically driven and frequently psychologically related phenomena, at least partly for
the reason that patients with these syndromes do not respond reliably to the standard means of
occupational health intervention, namely, exposure reduction. Non-occupational physicians in
general medical practice also react similarly: patients with little verifiable pathology, such as
those complaining of chronic fatigue syndrome or fibromyalgia, are regarded as more difficult
to treat (and generally regard themselves as more disabled) than patients with deforming
conditions such as rheumatoid arthritis. There is clearly less regulatory imperative for sick
building syndrome and multiple chemical sensitivities than for the classic occupational
syndromes such as lead intoxication or silicosis. This discomfort on the part of treating
physicians and the lack of appropriate regulatory guidance is unfortunate, however
understandable it may be, because it leads to minimization of the importance of these
increasingly common, albeit largely subjective and non-lethal complaints. Since many
workers with these conditions claim total disability, and few examples of cures can be found,
multiple chemical sensitivities and sick building syndrome present important challenges to
compensation systems.

In the developed world, since many classic occupational toxins are better controlled,
symptomatic syndromes, such as those under present scrutiny that are associated with lower-
level exposures, are assuming increasing recognition as significant economic and health
concerns. Managers are frustrated by these conditions for a number of reasons. As there are
no clear-cut regulatory requirements in most jurisdictions which cover indoor air or
hypersusceptible individuals (with the important exception being persons with recognized
allergic disorders), it is impossible for management to be certain whether or not they are in
compliance. Agent-specific contaminant levels developed for industrial settings, such as the
US Occupational Safety and Health Administration’s (OSHA’s) permissible exposure limits
(PELs) or the American Conference of Governmental Industrial Hygienists’ (ACGIH’s)
threshold limit values (TLVs), are clearly not able to prevent or predict symptomatic
complaints in office and school workers. Finally, because of the apparent importance of
individual susceptibility and psychological factors as determinants of response to low levels
of contaminants, the impact of environmental interventions is not as predictable as many
would like before a decision is taken to commit scarce building or maintenance resources.
Often after complaints arise, a potential culprit such as elevated volatile organic compound
levels with respect to outdoor air is found, and yet following remediation, complaints persist
or reoccur.

Employees who suffer from symptoms of either sick building syndrome or multiple chemical
sensitivities are often less productive and frequently accusatory when management or
government is reluctant to commit itself to interventions which cannot be reliably
predicted to ameliorate symptoms. Clearly, occupational health providers are among the few
key individuals who may be able to facilitate reasonable middle ground outcomes to the
advantage of all concerned. This is true whether or not an underlying cause is low levels of
contaminants, or even in the rare case of true mass hysteria, which may often have low-level
environmental triggers. Using skill and sensitivity to address, evaluate and incorporate a
combination of factors into solutions is an important approach to management.

Sick building syndrome is the more contained and definable of the two conditions, and has
even had definitions established by the World Health Organization (1987). Although there is
debate, both in general and in specific instances, about whether a given problem is more
attributable to individual workers or to the building, it is widely acknowledged, based on
controlled exposure studies with volatile organic compounds, as well as survey epidemiology,
that modifiable environmental factors do drive the kinds of symptom which are subsumed
under the following article entitled Sick Building Syndrome. In that article, Michael Hodgson
(1992) details the triad of personal, work activity and building factors which may contribute
in various proportions to symptoms among a population of workers. A major problem is in
maintaining good employee-employer communication while investigation and attempts at
remediation take place. Health professionals will usually require expert environmental
consultation to assist in the evaluation and remediation of identified outbreaks.

Multiple chemical sensitivities is a more problematic condition to define than sick building
syndrome. Some organized medical entities, including the American Medical Association,
have published position papers which question the scientific basis of the diagnosis of this
condition. Many physicians who practise without a rigorous scientific basis have nevertheless
championed the validity of this diagnosis. They rely on unproven or over-interpreted
diagnostic tests such as lymphocyte activation or brain imaging and may recommend
treatments such as sauna therapies and megadoses of vitamins, practices which have in large
part engendered the animosity of groups such as the American Medical Association.
However, no one denies that there is a group of patients who present with complaints of
becoming symptomatic in response to low levels of ambient chemicals. Their constitutional
symptoms overlap those of other subjective syndromes such as chronic fatigue syndrome and
fibromyalgia. These symptoms include pain, fatigue and confusion; they worsen with low-
level chemical exposure and they are reported to be present in a substantial percentage of
patients who have been diagnosed with these other syndromes. Of great import, but still
unresolved, is the question whether chemical sensitivity symptoms are acquired (and to what
extent) because of a preceding chemical overexposure, or whether—as in the commonly
reported situation—they arise without a major identified precipitating event.

Multiple chemical sensitivities is sometimes invoked as an outcome in certain sick building
syndrome outbreaks which are not resolved or ameliorated after routine investigation and
remediation. Here it is clear that MCS afflicts an individual or small number of people, rarely
a population; it is the effect on a population that may even be a criterion for the sick building
syndrome by some definitions. MCS seems to be endemic in populations, whereas sick
building syndrome is often epidemic; however, preliminary investigations suggest that some
degree of chemical sensitivity (and chronic fatigue) may occur in outbreaks, as was found
among American veterans of the Persian Gulf conflict. The controlled exposure studies which
have done much to clarify the role of volatile organic compounds and irritants in sick building
syndrome have yet to be performed in a controlled manner for multiple chemical sensitivities.
Many practitioners claim to recognize MCS when they see it, but there is no agreed-upon
definition. It may well be included as a condition which “overlaps” other non-occupational
syndromes such as chronic fatigue syndrome, fibromyalgia, somatization disorder and others.
Its relationship to psychiatric diagnoses remains to be sorted out; early reports suggest that
when the onset of the syndrome is fairly definable, there is a much lower rate of diagnosable
psychiatric co-morbidity (Fiedler et al. 1996). The phenomenon of odor-triggered symptoms
is distinctive, but clearly not unique, and the extent to which this is an occupational condition
at all is debated. This is important because Dr. Cullen’s (1987) definition, like many others,
describes multiple chemical sensitivities as a sequel to a better-characterized occupational or
environmental disorder. However, as stated above, symptoms following exposure to ambient
levels of odorants are common among individuals both with and without clinical diagnoses,
and it may be just as important to explore the similarities between MCS and other conditions
as to define the differences (Kipen et al. 1995; Buchwald and Garrity 1994).

Sick Building Syndrome

Written by ILO Content Manager

Sick building syndrome (SBS) is a term used to describe office worker discomfort and
medical symptoms that are related to building characteristics, to pollutant exposures and to
work organization, and that are mediated through personal risk factors. A wide range of
definitions exists, but disagreement remains (a) as to whether a single individual in a building
can develop this syndrome or whether a set numeric criterion (the proportion affected) should
be used; and (b) as to the necessary symptom components. Figure 1 lists symptoms
commonly included in SBS; in recent years, with increased understanding, complaints related
to odours have generally been dropped from the list and chest symptoms included under
mucous membrane irritation. A critical distinction needs to be made between SBS and
building-related illness (BRI), where verifiable irritation, allergy or illness such as
hypersensitivity pneumonitis, asthma or carbon monoxide-induced headaches may be present
as an outbreak associated with a building. SBS should also be distinguished from multiple
chemical sensitivities (MCS; see below) which is more sporadic in occurrence, often occurs
within an SBS population, and is much less responsive to modifications of the office
environment.

Figure 1. Sick building syndrome.


SBS should be simultaneously viewed from and informed by three disparate perspectives. For
health professionals, the view is from the perspective of medicine and the health sciences as
they define symptoms related to indoor work and their associated pathophysiological
mechanisms. The second perspective is that of engineering, including design, commissioning,
operations, maintenance and exposure assessment for specific pollutants. The third
perspective includes the organizational, social and psychological aspects of work.

Epidemiology

Since the mid-1970s, increasingly voiced office worker discomfort has been studied in formal
ways. These have included field epidemiological studies using a building or a workstation as
the sampling unit to identify risk factors and causes, population-based surveys to define
prevalence, chamber studies of humans to define effects and mechanisms, and field
intervention studies.

Cross-sectional and case-control studies

Approximately 30 cross-sectional surveys have been published (Mendell 1993; Sundell et al.
1994). Many of these have included primarily “non-problem” buildings, selected at random.
These studies consistently demonstrate an association between mechanical ventilation and
increased symptom reporting. Additional risk factors have been defined in several case-
control studies. Figure 2 presents a grouping of widely recognized risk factors associated with
increased rates of complaints.

Many of these factors overlap; they are not mutually exclusive. For example, the presence of
inadequate housekeeping and maintenance, the presence of strong indoor pollution sources
and increased individual susceptibility may lead to much greater problems than the presence
of any one factor alone.

Figure 2. Risk factors for and causes of the sick building syndrome.

Factor and principal components analyses of questionnaire responses in cross-sectional
surveys have explored the interrelationship of various symptoms. Consistently, symptoms
related to single organ systems have clustered together more strongly than symptoms relating
to different organ systems. That is, eye irritation, eye tearing, eye dryness, and eye itching all
appear to correlate very strongly, and little benefit is obtained from looking at multiple
symptoms within an organ system.

Controlled exposure studies

Animal testing to determine irritant properties and thresholds has become standard. A
consensus method of the American Society for Testing and Materials (1984) is widely
regarded as the basic instrument. This method has been used to develop structure-activity
relationships, to demonstrate that more than one irritant receptor may exist in the trigeminal
nerve and to explore interactions between multiple exposures. Most recently, it has been used
to demonstrate the irritating properties of office equipment offgassing.

Analogous to this method, several approaches have been defined to document methods and
dose-response relationships for irritation in humans. This work meanwhile suggests that, at
least for “non-reactive” compounds such as saturated aliphatic hydrocarbons, the percentage
of vapour pressure saturation of a compound is a reasonable predictor of its irritant potency.
Some evidence also supports the view that increasing the number of compounds in complex
mixtures decreases the irritant thresholds. That is, the more agents that are present, even at a
constant mass, the greater the irritation.
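
For illustration, the following short Python sketch expresses a measured vapour concentration as a percentage of vapour pressure saturation, the index referred to above. It assumes ideal-gas behaviour, and the n-decane figures in the example are rounded approximations chosen only to show the calculation, not reference data.

# Illustrative sketch: concentration as a percentage of vapour pressure saturation.
# Assumes ideal-gas behaviour; the n-decane values below are rounded approximations.

ATM_PA = 101_325.0  # standard atmospheric pressure, Pa

def percent_of_saturation(conc_ppm, vapour_pressure_pa):
    """Measured vapour concentration expressed as % of its saturation concentration."""
    saturation_ppm = (vapour_pressure_pa / ATM_PA) * 1e6  # saturation mole fraction, in ppm
    return 100.0 * conc_ppm / saturation_ppm

# Example: about 10 ppm of n-decane, vapour pressure roughly 170 Pa at room temperature
print(round(percent_of_saturation(10, 170), 2), "% of saturation")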

Controlled exposure studies have been performed on volunteers in stainless steel chambers.
Most have been performed with one constant mixture of volatile organic compounds (VOC)
(Mølhave and Nielsen 1992). These consistently document relationships between symptoms
and increasing exposure levels. Office workers who perceived themselves as “susceptible” to
the effects of usual levels of VOCs indoors demonstrated some impairment on standard tests
of neuropsychological performance (Mølhave, Bach and Pederson 1986). Healthy volunteers,
on the other hand, demonstrated mucous membrane irritation and headaches at exposures in
the range of 10 to 25 mg/m3, but no changes on neuropsychological performance. More
recently, office workers demonstrated similar symptoms after simulated work in environments
where pollutants from commonly used office equipment were generated. Animals reacted
similarly when a standardized test of irritant potency was used.

Population-based studies

To date, three population-based studies have been published in Sweden, Germany and the
United States. The questionnaires differed considerably, and thus prevalence estimates cannot
be directly compared. Nevertheless, between 20 and 35% of respondents from various
buildings not known to be sick were found to have complaints.

Mechanisms

A number of potential mechanisms and objective measures to explain and examine symptoms
within specific organ systems have been identified. None of these has a high predictive value
for the presence of disease, and they are therefore not suitable for clinical diagnostic use.
They are useful in field research and epidemiological investigations. For many of these it is
unclear whether they should be regarded as mechanisms, as markers of effect, or as measures
of susceptibility.

Eyes

Both allergic and irritant mechanisms have been proposed as explanations for eye symptoms.
Shorter tear-film break-up time, a measure of tear film instability, is associated with increased
levels of symptoms. “Fat-foam thickness” measurement and photography for documentation
of ocular erythema have also been used. Some authors attribute eye symptoms at least in part
to increased individual susceptibility as measured by those factors. In addition, office workers
with ocular symptoms have been demonstrated to blink less frequently when working at video
display terminals.

Nose

Both allergic and irritant mechanisms have been proposed as explanations for nasal
symptoms. Measures that have successfully been used include nasal swabs (eosinophils),
nasal lavage or biopsy, acoustic rhinometry (nasal volume), anterior and posterior
rhinomanometry (plethysmography) and measures of nasal hyperreactivity.

Central nervous system

Neuropsychological tests have been used to document decreased performance on standardized
tests, both as a function of controlled exposure (Mølhave, Bach and Pederson 1986) and as a
function of the presence of symptoms (Middaugh, Pinney and Linz 1982).

Individual risk factors

Two sets of individual risk factors have been discussed. First, two commonly recognized
diatheses, atopy and seborrhea, are considered predisposing factors for medically defined
symptoms. Second, psychological variables may be important. For example, personal traits
such as anxiety, depression or hostility are associated with sick-role susceptibility. Similarly,
work stress is so consistently associated with building-related symptoms that some causal
association is likely to be present. Which of the three components of work stress—individual
traits, coping skills, and organization function such as poor management styles—is the
dominant cause remains undetermined. It is recognized that failing to intervene in a well-
defined problem leads workers to experience their discomfort with increasing distress.

Engineering and Sources

Beginning in the late 1970s, the US National Institute for Occupational Safety and Health
(NIOSH) responded to requests for help in identifying causes of occupant discomfort in
buildings, attributing problems to ventilation systems (50%), microbiological contamination
(3 to 5%), strong indoor pollution sources (tobacco 3%, others 14%), pollutants entrained
from the outside (15%) and others. On the other hand, Woods (1989) and Robertson et al.
(1988) published two well-known series of engineering analyses of problem buildings,
documenting on average the presence of three potential causal factors in each building.

One current professional ventilation standard (American Society of Heating, Refrigerating
and Air-Conditioning Engineers 1989) suggests two approaches to ventilation: a ventilation
rate procedure and an air quality procedure. The former provides a tabular approach to
ventilation requirements: office buildings require 20 cubic feet of outside air per occupant per
minute to maintain occupant complaint rates of environmental discomfort at below 20%. This
assumes relatively weak pollution sources. When stronger sources are present, that same rate
will provide less satisfaction. For example, when smoking is permitted at usual rates
(according to data from the early 1980s), approximately 30% of occupants will complain of
environmental discomfort. The second approach requires the selection of a target
concentration in air (particulates, VOCs, formaldehyde, etc.), information on emission rates
(pollutant per time per mass or surface), and derives the ventilation requirements. Although
this is an intellectually much more satisfying procedure, it remains elusive because of
inadequate emissions data and disagreement on target concentrations.
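
The arithmetic behind the two procedures can be illustrated with a short Python sketch. The 20 cubic feet per minute per occupant figure for offices is taken from the text above; the occupancy, emission rate and target concentration used in the example are hypothetical values included only to show how each approach works, not design data.

# Illustrative sketch of the two approaches described above.
# All numbers other than the 20 cfm/occupant office figure are hypothetical.

CFM_TO_LPS = 0.4719  # litres per second in one cubic foot per minute

def ventilation_rate_procedure(occupants, cfm_per_person=20.0):
    """Tabular approach: total outdoor air requirement for an office, in L/s."""
    return occupants * cfm_per_person * CFM_TO_LPS

def air_quality_procedure(emission_mg_per_h, target_mg_per_m3, outdoor_mg_per_m3=0.0):
    """Mass-balance approach: outdoor air (L/s) needed to hold a steady-state target
    concentration in a single, well-mixed space."""
    airflow_m3_per_h = emission_mg_per_h / (target_mg_per_m3 - outdoor_mg_per_m3)
    return airflow_m3_per_h * 1000.0 / 3600.0  # convert m3/h to L/s

print(round(ventilation_rate_procedure(occupants=50)), "L/s by the ventilation rate procedure")
print(round(air_quality_procedure(emission_mg_per_h=500, target_mg_per_m3=0.3)),
      "L/s by the air quality procedure")

The second function is the simplest, single-zone, steady-state form of the mass-balance reasoning behind the air quality procedure; its usefulness in practice is limited by the same lack of emissions data and agreed target concentrations noted above.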

Pollutants

Environmental scientists have generally defined exposure and health effects on a pollutant-by-
pollutant basis. The American Thoracic Society (1988) defined six important categories, listed
in figure 3.

Figure 3. Principal pollutant categories.

Environmental criteria have been established for many of the individual substances in these
six groups. The utility and applicability of such criteria for indoor environments is
controversial for many reasons. For example, the goals of threshold limit values often do not
include prevention of eye irritation, a common complaint in indoor environments with
requirements for close eye work at video display units. For most of the pollutant categories,
the problem of interactions, commonly termed the “multiple contaminants problem,” remains
inadequately defined. Even for agents that are thought to affect the same receptor, such as
aldehydes, alcohols and ketones, no prediction models are well established. Finally, the
definition of “representative compounds” for measurement is unclear. That is, pollutants must
be measurable, but complex mixtures vary in their composition. It is unclear, for instance,
whether the chronic residual odor annoyance due to environmental tobacco smoke correlates
better with nicotine, particulates, carbon monoxide or other pollutants. The measure “total
volatile organic compounds” is meanwhile considered an interesting concept, but is not useful
for practical purposes as the various components have such radically different effects
(Mølhave and Nielsen 1992; Brown et al. 1994). Particulates indoors may differ in
composition from their outdoor counterparts, as filter sizes affect entrained concentrations,
and indoor sources may differ from outdoor sources. There are measurement problems as
well, since the sizes of filters used will affect which particles are collected. Different filters
may be needed for indoor measurements.

Finally, emerging data suggest that reactive indoor pollutants may interact with other
pollutants and lead to new compounds. For example, the presence of ozone, either from office
machines or entrained from outdoors, may interact with 4-phenylcyclohexene and generate
aldehydes (Wechsler 1992).

Primary Aetiological Theories

Organic solvents

Buildings have always relied on general dilution strategies for pollutant removal, but
designers have assumed that humans were the primary source of pollutants. More recently,
emissions from “solid materials” (such as particle board desks, carpeting and other furniture),
from wet products (such as glues, wall paints, office machine toners) and personal products
(perfumes) have been recognized as contributors to a complex mixture of very low levels of
individual pollutants (summarized in Hodgson, Levin and Wolkoff 1994).

Several studies suggest that the presence of reactive volatile organic compounds, such as
aldehydes and halogenated hydrocarbons, is associated with increasing levels of symptoms.
Offices with higher complaint rates have had greater “loss” of VOCs between incoming and
outgoing air than did offices with lower complaints. In a prospective study of schools, short
chain VOCs were associated with symptom development. In another survey, higher personal
samples for VOCs using a screening sampler that “over-reacts” to reactive VOCs, such as
aldehydes and halogenated hydrocarbons, were associated with higher symptom levels. In that
study, women had higher levels of VOCs in their breathing zone, suggesting another potential
explanation for the increased rate of complaints among women. VOCs might adsorb onto
sinks, such as fleecy surfaces, and be re-emitted from such secondary sources. The interaction
of ozone and relatively non-irritant VOCs to form aldehydes is also consistent with this
hypothesis.

The presence of multiple potential sources, the consistency of VOC health effects and SBS
symptoms, and the widely recognized problems associated with ventilation systems make
VOCs an attractive aetiological agent. Solutions other than better design and operation of
ventilation systems include the selection of low-emitting materials, better housekeeping and
prevention of “indoor chemistry.”

Bioaerosols

Several studies have suggested that bioaerosols have the potential to contribute to occupant
discomfort. They may do this through several different mechanisms: irritant emissions;
release of fragments, spores or viable organisms leading to allergy; and secretion of complex
toxins. Fewer data exist to support this theory than the others. Nevertheless, it is clear that
heating, ventilating and air-conditioning systems may be sources of micro-organisms.

They have also been described in building construction materials (as a result of improper
curing), as a result of unwanted water incursion and in office dust. The presence of sensitizers
in the office environment, such as dust mites or cat dander brought in from home on clothing,
presents another possibility of exposure. To the extent that biological agents contribute to the
problem, dirt and water management become primary control strategies.

In addition, toxigenic fungi may be found on other porous products in buildings, including
ceiling tile, spray-on insulation and wooden joists. Especially in residential environments,
fungal proliferation associated with inadequate moisture control has been associated with
symptoms.

Psychosocial aspects of work

In all studies where it has been examined, “work stress” was clearly associated with SBS
symptoms. Workers’ perceptions of job pressures, task conflicts, and non-work stressors such
as spousal or parental demands may clearly lead to the subjective experience of “stronger”
irritation as a function of illness behaviour. At times, such perceptions may in fact result from
poor supervisory practices. In addition, the presence of irritants leading to subjective irritation
is thought to lead to “work stress”.

Evaluation of the Patient

The examination should be directed at identification or exclusion of a significant component
of building-related illness (BRI). Allergic disease should be identified and optimally managed.
However, this must be done with awareness that non-allergic mechanisms may contribute to a
substantial residual symptom burden. Sometimes individuals can be reassured of the absence
of clear disease by studies such as portable peak flow monitoring or pre- and post-work
pulmonary function tests. Once such observable or pathologically verifiable disease has been
ruled out, evaluation of the building itself becomes paramount and should be done with
industrial hygiene or engineering input. Documentation, management and remediation of
identified problems are discussed in Controlling the Indoor Environment.

Conclusion

SBS is a phenomenon that can be experienced by an individual, but is usually seen in groups;
it is associated with engineering deficiencies and is likely caused by a series of pollutants and
pollutant categories. As with all “dis-ease,” a component of personal psychology serves as an
effect modifier that can lead to varying degrees of symptom intensity at any given level of
distress.

Multiple Chemical Sensitivities

Written by ILO Content Manager

Introduction

Since the 1980s, a new clinical syndrome, characterized by the occurrence of diverse
symptoms after exposure to low levels of artificial chemicals, has been described in
occupational and environmental health practice, although it still lacks a widely accepted
definition. The disorder may develop in individuals who have experienced a single episode, or
recurring episodes of a chemical injury such as solvent or pesticide poisoning. Subsequently,
many types of environmental contaminant in air, food or water may elicit a wide range of
symptoms at doses below those which produce toxic reactions in others.

Although there may not be measurable impairment of specific organs, the complaints are
associated with dysfunction and disability. Although idiosyncratic reactions to chemicals are
probably not a new phenomenon, it is believed that multiple chemical sensitivities (MCS), as
the syndrome is now most frequently called, is being brought by patients to the attention of
medical practitioners far more commonly than in the past. This syndrome is prevalent enough
to have generated substantial public controversy as to who should treat patients suffering with
the disorder and who should pay for the treatment, but research has yet to elucidate many
scientific issues relevant to the problem, such as its cause, pathogenesis, treatment and
prevention. Despite this, MCS clearly does occur and causes significant morbidity in the
workforce and general population. It is the purpose of this article to elucidate what is known
about it at this time in the hope of enhancing its recognition and management in the face of
uncertainty.

Definition and Diagnosis

Although there is no general consensus on a definition for MCS, certain features allow it to be
differentiated from other well-characterized entities. These include the following:

• Symptoms typically occur after a definitely characterizable occupational or environmental
incident, such as an inhalation of noxious gases or vapours or other toxic exposure. This
“initiating” event may be a single episode, such as an exposure to a pesticide spray, or a
recurrent one, such as frequent solvent overexposure. Often the effects of the apparently
precipitating event, or events, are mild and may merge without clear demarcation into the
syndrome which follows.

• Acute symptoms similar to those of the preceding exposure begin to occur after re-exposures
to lower levels of various materials, such as petroleum derivatives, perfumes and other
common work and household products.

• Symptoms are referable to multiple organ systems. Central nervous system complaints, such
as fatigue, confusion and headache, occur in almost every case. Upper and lower respiratory,
cardiac, dermal, gastrointestinal and musculoskeletal symptoms are common.

• It is generally the case that very diverse agents may elicit the symptoms at levels of exposure
orders of magnitude below accepted TLVs or guidelines.
• Complaints of chronic symptomatology, such as fatigue, cognitive difficulties, gastrointestinal
and musculoskeletal disturbances, are common. Such persistent symptoms may predominate
over reactions to chemicals in some cases.

• Objective impairment of the organs which would explain the pattern or intensity of
complaints is typically absent. Patients examined during acute reactions may hyperventilate
or demonstrate other manifestations of excess sympathetic nervous system activity.

• No better-established diagnosis easily explains the range of responses or symptoms.

While not every patient precisely meets the criteria, each point should be considered in the
diagnosis of MCS. Each serves to rule out other clinical disorders which MCS may resemble,
such as somatization disorder, sensitization to environmental antigens (as with occupational
asthma), late sequelae of organ system damage (e.g., reactive airways dysfunction syndrome
after a toxic inhalation) or a systemic disease (e.g., cancer). On the other hand, MCS is not a
diagnosis of exclusion and exhaustive testing is not required in most cases. While many
variations occur, MCS is said to have a recognizable character which facilitates diagnosis as
much as or more than the specific criteria themselves.

In practice, diagnostic problems with MCS occur in two situations. The first is with a patient
early in the course of the condition in whom it is often difficult to distinguish MCS from the
more proximate occupational or environmental health problem which precedes it. For
example, patients who have experienced symptomatic reactions to pesticide spraying indoors
may find that their reactions are persisting, even when they avoid direct contact with the
materials or spraying activities. In this situation a clinician might assume that significant
exposures are still occurring and direct unwarranted effort toward altering the environment further,
which generally does not relieve the recurrent symptoms. This is especially troublesome in an
office setting where MCS may develop as a complication of sick building syndrome. Whereas
most office workers will improve after steps are taken to improve air quality, the patient who
has acquired MCS continues to experience symptoms, despite the lower exposures involved.
Efforts to improve the air quality further typically frustrate patient and employer.

Later in the course of MCS, diagnostic difficulty occurs because of the chronic aspects of the
illness. After many months, the MCS patient is often depressed and anxious, as are other
medical patients with new chronic diseases. This may lead to an exaggeration of psychiatric
manifestations, which may predominate over chemically stimulated symptoms. Without
diminishing the importance of recognizing and treating these complications of MCS, or even
the possibility that MCS itself is psychological in origin (see below), the underlying MCS
must be recognized in order to develop an effective mode of management which is acceptable
to the patient.

Pathogenesis

The pathogenic sequence which leads in certain people from a self-limited episode or
episodes of an environmental exposure to the development of MCS is not known. There are
several current theories. Clinical ecologists and their adherents have published extensively to
the effect that MCS represents immune dysfunction caused by accumulation in the body of
exogenous chemicals (Bell 1982; Levin and Byers 1987). At least one controlled study did not
confirm immune abnormalities (Simon, Daniel and Stockbridge 1993). Susceptibility factors
under this hypothesis may include nutritional deficiencies (e.g., lack of vitamins or
antioxidants) or the presence of subclinical infections such as candidiasis. In this theory, the
“initiating” illness is important because of its contribution to lifelong chemical overload.

Less well developed, but still very biologically oriented, are the views that MCS represents
unusual biological sequelae of chemical injury. As such, the disorder may represent a new
form of neurotoxicity due to solvents or pesticides, injury to the respiratory mucosae after an
acute inhalational episode or similar phenomena. In this view, MCS is seen as a final common
pathway of different primary disease mechanisms (Cullen 1994; Bascom 1992).

A more recent biological perspective has focused on the relationship between the mucosae of
the upper respiratory tract and the limbic system, especially with respect to the linkage in the
nose (Miller 1992). Under this perspective, relatively small stimuli to the nasal epithelium
could produce an amplified limbic response, explaining the dramatic, and often stereotypic,
responses to low-dose exposures. This theory also may explain the prominent role of highly
odoriferous materials, such as perfumes, in triggering responses in many patients.

Conversely, many experienced investigators and clinicians have invoked psychological
mechanisms to explain MCS, linking it to other somatoform disorders (Brodsky 1983; Black,
Ruth and Goldstein 1990). Variations include the theory that MCS is a variant of
post-traumatic stress disorder (Schottenfeld and Cullen 1985) or a conditioned response to an
initial toxic experience (Bolle-Wilson, Wilson and Bleecker 1988). One group has
hypothesized MCS as a late-life response to early childhood traumas such as sexual abuse
(Selner and Strudenmayer 1992). In each of these theories, the precipitating illness plays a
more symbolic than biological role in the pathogenesis of MCS. Host factors are seen as very
important, especially the predisposition to somaticize psychological distress.

Although there is much published literature on the subject, few clinical or experimental
studies have appeared to support strongly any of these views. Investigators have not generally
defined their study populations nor compared them with appropriately matched groups of
control subjects. Observers have not been blinded to subject status or research hypotheses. As
a result, most available data are effectively descriptive. Furthermore, the legitimate debate
over the aetiology of MCS has been distorted by dogma. Since major economic decisions
(e.g., patient benefit entitlements and physician reimbursement acceptance) may hinge upon
the way in which cases are viewed, many physicians have very strong opinions about the
illness, which limit the scientific value of their observations. Caring for MCS patients requires
a recognition of the fact that these theories are often well known to patients, who may also
have very strong views on the matter.

Epidemiology

Detailed knowledge of the epidemiology of MCS is not available. Estimates of its prevalence
in the US population (from which most reports continue to come) range as high as several
per cent, but the scientific basis for these estimates is obscure, and other evidence exists to
suggest that MCS in its clinically apparent form is rare (Cullen, Pace and Redlich 1992). Most
available data derive from case series by practitioners who treat MCS patients. These
shortcomings notwithstanding, some general observations can be made. Although patients of
virtually all ages have been described, MCS occurs most commonly among mid-life subjects.
Workers in jobs of higher socio-economic status seem disproportionately affected, while the
economically disadvantaged and non-White population seems underrepresented; this may be
an artefact of differential access or of clinician bias. Women are more frequently affected than
men. Epidemiological evidence strongly implicates some host idiosyncrasy as a risk factor,
since mass outbreaks have been uncommon and only a small fraction of victims of chemical
accidents or overexposures appear to develop MCS as a sequela (Welch and Sokas 1992;
Simon 1992). Perhaps surprising in this regard is the fact that common atopic allergic
disorders do not appear to be a strong risk factor for MCS among most groups.

Several groups of chemicals have been implicated in the majority of initiating episodes,
specifically organic solvents, pesticides and respiratory irritants. This may be a function of the
widespread usage of these materials in the workplace. The other commonplace setting in
which many cases occur is the sick building syndrome, with some patients evolving from typical
SBS-type complaints into MCS. Although the two illnesses have much in common, their
epidemiological features should distinguish them. Sick building syndrome typically affects
most individuals sharing a common environment, who improve in response to environmental
remediation; MCS occurs sporadically and does not respond predictably to modifications of
the office environment.

Finally, there is great interest in whether MCS is a new disorder or a new presentation or
perception of an old one. Views are divided according to the proposed pathogenesis of MCS.
Those favouring a biological role for environmental agents, including the clinical ecologists,
postulate that MCS is a twentieth century disease with rising incidence related to increased
chemical usage (Ashford and Miller 1991). Those who support the role of psychological
mechanisms see MCS as an old somatoform illness with a new societal metaphor (Brodsky
1983; Shorter 1992). According to this view, the social perception of chemicals as agents of
harm has resulted in the evolution of new symbolic content to the historic problem of
psychosomatic disease.

Natural History

MCS has not yet been studied sufficiently to define its course or outcome. Reports of large
numbers of patients have provided some clues. First, the general pattern of illness appears to
be one of early progression as the process of generalization develops, followed by less
predictable periods of incremental improvements and exacerbations. While these cycles may
be perceived by the patient to be due to environmental factors or treatment, no scientific
evidence for such relationships has been established.

Two important inferences follow. First, there is little evidence to suggest that MCS is
progressive. Patients do not deteriorate from year to year in any measurable physical way, nor
have complications such as infections or organ system failure occurred in the absence of
intercurrent illness. There is no evidence that MCS is potentially lethal, despite the
perceptions of the patients. While this may be the basis of a hopeful prognosis and
reassurance, it has been equally clear from clinical descriptions that complete remissions are
rare. While significant improvement occurs, this is generally based on enhanced patient
function and sense of well-being. The underlying tendency to react to chemical exposures
tends to persist, although symptoms may become sufficiently bearable to allow the victim to
return to a normal lifestyle.

Clinical Management

Very little is known about the treatment of MCS. Many traditional and non-traditional
methods have been tried, though none has been subjected to the usual scientific standards to
confirm its efficacy. As with other conditions, approaches to treatment have paralleled
theories of pathogenesis. Clinical ecologists and others, who believe that MCS is caused by
immune dysfunction due to high burdens of exogenous chemicals, have focused attention on
avoidance of artificial chemicals. This view has been accompanied by the use of various
unvalidated diagnostic tests to determine “specific” sensitivities and by efforts to “desensitize”
patients. Coupled with this have been strategies to enhance underlying immunity with dietary
supplements, such as vitamins and antioxidants, and efforts to eradicate yeasts or other
commensal organisms. A most radical approach involves efforts to eliminate toxins from the
body by chelation or accelerated turnover of fat where lipid-soluble pesticides, solvents and
other organic chemicals are stored.

Those inclined to a psychological view of MCS have tried appropriately alternative
approaches. Supportive individual or group therapies and more classic behavioural
modification techniques have been described, though the efficacy of these approaches remains
conjectural. Most observers have been struck by the intolerance of the patients to
pharmacological agents typically employed for affective and anxiety disorders, an impression
supported by a small placebo-controlled double-blind trial with fluvoxamine that was
conducted by the author and aborted due to side effects in five of the first eight enrolees.

The limitations of present knowledge notwithstanding, certain treatment principles can be
enunciated.

First, to the extent possible, the search for a specific “cause” of MCS in the individual case
should be minimized—it is fruitless and counterproductive. Many patients have had
considerable medical evaluation by the time MCS is considered and equate testing with
evidence of pathology and the potential for a specific cure. Whatever the theoretical beliefs of
the clinician, it is vital that the existing knowledge and uncertainty about MCS be explained
to the patient, including specifically that its cause is unknown. The patient should be
reassured that consideration of psychological issues does not make the illness less real, less
serious or less worthy of treatment. Patients can also be reassured that MCS is not likely to be
progressive or fatal, and they should be made to understand that total cures are not likely with
present modalities.

Uncertainty about pathogenesis aside, it is most often necessary to remove the patient from
components of their work environment which trigger symptoms. Although radical avoidance
is of course counterproductive to the goal of enhancing the worker’s functioning, regular and
severe symptomatic reactions should be controlled as far as possible as the basis for a strong
therapeutic relationship with the patient. Often this requires a job change. Workers’
compensation may be available; even in the absence of detailed understanding of disease
pathogenesis, MCS may correctly be characterized as a complication of a work exposure
which is more readily identified (Cullen 1994).

The goal of all subsequent therapy is improvement of function. Psychological problems, such
as adjustment difficulties, anxiety and depression should be treated, as should coexistent
problems like typical atopic allergies. Since MCS patients do not tolerate chemicals in
general, non-pharmacological approaches may be necessary. Most patients need direction,
counselling and reassurance to adjust to an illness without an established treatment (Lewis
1987). To the extent possible, patients should be encouraged to expand their activities and
should be discouraged from passivity and dependence, which are common responses to the
disorder.

Prevention and Control

Obviously, primary prevention strategies cannot be developed given the present limited
knowledge of the pathogenesis of the disorder and of its predisposing host risk factors. On the other hand,
reduction of opportunities in the workplace for the uncontrolled acute exposures which
precipitate MCS in some hosts, such as those involving respiratory irritants, solvents and
pesticides, will likely reduce the occurrence of MCS. Proactive measures to improve the air
quality of poorly ventilated offices would also probably help.

Secondary prevention would appear to offer a greater opportunity for control, although no
specific interventions have been studied. Since psychological factors may play a role in
victims of occupational overexposures, careful and early management of exposed persons is
advisable even when the prognosis from the point of view of the exposure itself is good.
Patients seen in clinics or emergency rooms immediately after acute exposures should be
assessed for their reactions to the events and should probably receive very close follow-up
where undue concerns of long-term effects or persistent symptoms are noted. Obviously,
efforts should be made on behalf of such patients to ensure that preventable re-exposures do not
occur, since this kind of exposure may be an important risk factor for MCS regardless of the
causal mechanism.
Systemic Conditions References
American Society of Heating, Refrigerating, and Airconditioning Engineers (ASHRAE).
1989. Standard 62-89: Ventilation for Acceptable Indoor Air Quality. Atlanta: ASHRAE.

American Society for Testing and Materials (ASTM). 1984. Standard Test Method for the
Estimation of Sensory Irritancy of Airborne Chemicals. Philadelphia: ASTM.

Anon. 1990. Environmental controls and lung disease. Am Rev Respir Dis 142:915-939.
(Erratum in Am Rev Respir Dis 143(3):688, 1991.)

Ashford, NA and CS Miller. 1991. Chemical Exposures: Low Levels and High Stakes. New
York: Van Nostrand Reinhold.
Bascom, R. 1992. Multiple chemical sensitivity: A respiratory disorder? Toxicol Ind Health
8:221-228.

Bell, I. 1982. Clinical Ecology. Colinas, Calif.: Common Knowledge Press.

Black, DW, A Ruth, and RB Goldstein. 1990. Environmental illness: A controlled study of 26
subjects with 20th century disease. J Am Med Assoc 264:3166-3170.

Bolle-Wilson, K, RJ Wilson, and ML Bleecker. 1988. Conditioning of physical symptoms
after neurotoxic exposure. J Occup Med 30:684-686.

Brodsky, CM. 1983. Psychological factors contributing to somatoform diseases attributed to
the workplace. The case of intoxication. J Occup Med 25:459-464.

Brown, SK, MR Sim, MJ Abramson, and CN Gray. 1994. Concentrations of VOC in indoor
air. Indoor Air 2:123-134.

Buchwald, D and D Garrity. 1994. Comparison of patients with chronic fatigue syndrome,
fibromyalgia, and multiple chemical sensitivities. Arch Int Med 154:2049-2053.

Cullen, MR. 1987. The worker with multiple chemical sensitivities: An overview. In Workers
with Multiple Chemical Sensitivities, edited by M Cullen. Philadelphia: Hanley & Belfus.

—. 1994. Multiple chemical sensitivities: Is there evidence of extreme vulnerability of the
brain to environmental chemicals? In The Vulnerable Brain and Environmental Risks, Vol. 3,
edited by RL Isaacson and KIF Jensen. New York: Plenum.

Cullen, MR, PE Pace, and CA Redlich. 1992. The experience of the Yale Occupational and
Environmental Medicine Clinics with MCS, 1986-1989. Toxicol Ind Health 8:15-19.

Fiedler, NL, H Kipen, J De Luca, K Kelly-McNeil, and B Natelson. 1996. A controlled
comparison of multiple chemical sensitivities and chronic fatigue syndrome. Psychosom Med
58:38-49.

Hodgson, MJ. 1992. A series of field studies on the sick-building syndrome. Ann NY Acad
Sci 641:21-36.
Hodgson, MJ, H Levin, and P Wolkoff. 1994. Volatile organic compounds and indoor air
(review). J Allergy Clin Immunol 94:296-303.

Kipen, HM, K Hallman, N Kelly-McNeil, and N Fiedler. 1995. Measuring chemical
sensitivity prevalence. Am J Public Health 85(4):574-577.

Levin, AS and VS Byers. 1987. Environmental illness: A disorder of immune regulation.
State Art Rev Occup Med 2:669-682.

Lewis, BM. 1987. Workers with multiple chemical sensitivities: Psychosocial interventions.
State Art Rev Occup Med 2:791-800.

Mendell, MJ. 1993. Non-specific symptoms in office workers: A review and summary of the
literature. Indoor Air 4:227-236.

Middaugh, DA, SM Pinney, and DH Linz. 1992. Sick building syndrome: Medical evaluation
of two work forces. J Occup Med 34:1197-1204.

Miller, CS. 1992. Possible models for multiple chemical sensitivity: Conceptual issues and
the role of the limbic system. Toxicol Ind Health 8:181-202.

Mølhave, L, R Bach, and OF Pederson. 1986. Human reactions to low concentrations of
volatile organic compounds. Environ Int 12:167-175.

Mølhave, L and GD Nielsen. 1992. Interpretation and limitations of the concept “Total
volatile organic compounds” (TVOC) as an indicator of human responses to exposures of
volatile organic compounds (VOC) in indoor air. Indoor Air 2:65-77.

Robertson, A, PS Burge, A Hedge, S Wilson, and J Harris-Bass. 1988. Relation between
passive cigarette smoke exposure and “building sickness”. Thorax 43:263P.

Schottenfeld, RS and MR Cullen. 1985. Occupation-induced post-traumatic stress disorder.
Am J Psychiatry 142:198-202.

Selner, JC and H Strudenmayer. 1992. Neuropsychophysiologic observations in patients
presenting with environmental illness. Toxicol Ind Health 8:145-156.

Shorter, E. 1992. From Paralysis to Fatigue. New York: The Free Press.

Simon, GE. 1992. Epidemic MCS in an industrial setting. Toxicol Ind Health 8:41-46.

Simon, GE, W Daniel, and H Stockbridge. 1993. Immunologic, psychologic, and
neuropsychological factors in multiple chemical sensitivity. Ann Intern Med 119:97-103.

Sundell, J, T Lindvall, B Stenberg, and S Wall. 1994. SBS in office workers and facial skin
symptoms among VDT workers in relation to building and room characteristics: Two case-
referent studies. Indoor Air 2:83-94.

Wechsler, CJ. 1992. Indoor chemistry: Ozone, volatile organic compounds, and carpets.
Environ Sci Technol 26:2371-2377.
Welch, LS and P Sokas. 1992. Development of MCS after an outbreak of sick building
syndrome. Toxicol Ind Health 8:47-50.

Woods, JE. 1989. Cost avoidance and productivity. State Art Rev Occup Med 4:753-770.

Copyright 2015 International Labour Organization
