
Approaches to the Job-classification Problem in the Use of Beer's Predictive Model

A.G. Adeagbo-Sheikh
Department of Mathematics, Obafemi Awolowo University, Ile-Ife, Nigeria

Introduction
Beer's methodology on the Control of Operations in an enterprise is presented
in different equivalent forms by Beer[1-3]. However, in Beer[1, chapter 13] a
predictive model is shown to emerge from the procedure and the emergence
of this predictive model is believed to be about the most important result of
the methodology. Some of the problems that arise in producing the model have
already been discussed, elsewhere, by this author[4]. This article considers
certain problems that arise in the use of the model.
The main feature of the Predictive Model is the partitioning of the different
job items in the operations unit into "achievement" groups. This partitioning
is to be used to classify incoming jobs (some of which may not even have been
handled previously but which can be handled by the unit) into correct
achievement groups. The importance of this is that the stability of the operations
unit is always assured. On the partitioning Beer says[1, pp. 335-6]:
. . . actual events are being used to create a structure for the situation, a structure designated
by a set of statistical control groups. This structure is then used to classify the jobs that
are coming along:... and because it is characterized by an elaborate definition, it is almost
infallible in recognizing the job to be done by the time it will take. In other words, it is an
excellent predictor.

If the structure referred to in the above quotation is to be used to classify
incoming jobs, then essentially we have a pattern recognition problem. This
is made clearer by Beer's earlier words[1, p. 326]:
Predictive value lies in identifying the class of black box output to which events of this sort
tend in general to correspond.
Pattern recognition in modern technology is a quantitative affair. It involves
measurements to obtain the so-called feature vector used in the classification
exercise which uses statistical procedures. It appears, however, that Beer has
some other approach to the problem; when apparently trying to throw light
on how to go about the identification exercise with an example of a job involving
hypothetical white and red pieces of some input material, he says[1, p. 333]:
Attached to this group is a definition, perhaps rather involved, (and certainly quite
unconventional) which says what kinds of white pieces belong to it . . .
The "rather involved definition" referred to in this excerpt is also mentioned Communications
(as "an elaborate definition"[1, pp. 335-6]). It appears from this mention of
"definition" that Beer believes that the items in an achievement group can
be defined verbally in some way. If such a definition exists, it could only be
in terms of some common attributes possessed by the items in the group. Beer
supports this conclusion as he says[l, p. 324]:
By determining which features the members of each group have in common, a classification
system for events determines itself. 57
There are various ways in which groups of objects could be identified for
classification purposes. We shall consider two, namely:
(1) the process of identifying a group of objects by the number of attributes
they have in common. The classification system that emerges from this
we shall call the Common Attribute Classification System (CACS);
(2) the process of identifying a collection of items when it is believed that
they are components of a functional object or functional group. The
classification system resulting from this we shall refer to as the Functional
Object Classifying System (FOCS).
The Common Attributes Classification System (CACS)
Here it is assumed that the items in an achievement group Gr fall into this
group by virtue of a set of attributes that they have in common.
The CACS is the quadruple

(G, {Gr}, A, {Ar})

where G is the set of all job items in the operations unit, {Gr} is the class of
achievement groups Gr ⊂ G, r = 1, 2, . . ., n, that emerge in the Predictive
Model. The set A, assumed finite, is the set that includes any attribute that
a job item in the operations set-up could possess. AGrj ⊆ A is the set of
attributes of the element grj ∈ Gr. {Ar} is the class of subsets Ar ⊆ A where
Ar satisfies the following axioms:
Ar ≠ Ø; (A1)

a ∈ Ar if and only if a ∈ AGrj for all j = 1, 2, . . ., Nr, where Nr (finite) is the number
of elements in group Gr; (A2)

Ar ≠ As if r ≠ s. (A3)
We shall call the set Ar the classifying set for achievement group Gr under
CACS.
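As an illustration of how a CACS might be realized computationally, the following Python sketch computes a candidate classifying set Ar as the attributes common to every item in a group (Axiom (A2)) and assigns an incoming job to a group whose classifying set is contained in the job's attributes. The group names, the attribute names and the decision rule for incoming jobs are assumptions introduced here for illustration only; they are not part of Beer's procedure.

# A minimal sketch of a Common Attributes Classification System (CACS).
# Group names, attribute names and example data are hypothetical.
from functools import reduce

# Attribute sets AGrj for the items grj of each achievement group Gr.
groups = {
    "G1": [{"heavy", "manual", "outdoor"}, {"heavy", "manual", "supervised"}],
    "G2": [{"light", "machine", "indoor"}, {"light", "machine", "batch"}],
}

def classifying_set(item_attribute_sets):
    """Candidate Ar: attributes possessed by every item in the group
    (Axiom (A2): a is in Ar iff a is in AGrj for all j)."""
    return reduce(set.intersection, item_attribute_sets)

classifiers = {name: classifying_set(items) for name, items in groups.items()}

def classify(job_attributes):
    """Assign an incoming job to a group whose classifying set it satisfies.
    Returns None when no group, or more than one group, matches."""
    matches = [name for name, a_r in classifiers.items()
               if a_r and a_r <= job_attributes]
    return matches[0] if len(matches) == 1 else None

print(classifiers)                                    # e.g. {'G1': {'heavy', 'manual'}, ...}
print(classify({"heavy", "manual", "night-shift"}))   # -> 'G1'

The rule used here refuses to classify when no group, or more than one group, matches; other decision rules are of course possible.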
We now establish some properties of CACS.
Theorem 1. Ar is the union of the class {Sα} of sets such that Sα ⊆ AGrj for all j = 1,
2, . . ., Nr.
(From Axiom (A2) above, the theorem formalizes the intuitive notion that the
intersection of a number of sets is the largest set common to them all.)

Proof. We first observe that Ar ∈ {Sα}. Suppose Ar ≠ ⋃Sα; then Ar ⊂ ⋃Sα,
so that ⋃Sα - Ar is not empty. Let a ∈ ⋃Sα - Ar; then a ∉ Ar but a ∈ AGrj
for all j = 1, 2, . . ., Nr. This is not in agreement with Axiom (A2) of CACS.
Hence Ar = ⋃Sα. □
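In the notation above, and reading Axiom (A2) as the definition of Ar, the theorem may be summarized by the identity

Ar = ⋃α Sα = ⋂j AGrj, j = 1, 2, . . ., Nr.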
Corollary 1. Let {Cw} be a class of sets, partially ordered by "Cu is a subset
of Cv". Suppose further that Cw ⊆ AGrj for all j = 1, 2, . . ., Nr. Then the class
{Cw} has a maximal element which coincides with Ar.

Proof. The ordering is total and, since A (the universal attributes set) is finite,
the class {Cw} has a unique maximal element, ⋃wCw. Since Cw ⊆ AGrj for all
j = 1, 2, . . ., Nr, we have by Theorem 1 that ⋃wCw = Ar. □
Definition 1. The class {Gr}, r = 1, 2, . . ., n, of achievement groups is said
to be totally discriminated if for every pair Gr, Gs, r ≠ s, Ar ⋂ As = Ø.

Definition 2. The class {Gr} is said to be partially discriminated if for every pair
Gr, Gs, r ≠ s, Ar ⋂ As ≠ Ø.

Definition 3. The class {Gr} is said to be of mixed discrimination if it is totally
discriminated for a number of the Gr and partially discriminated for the rest.
Theorem 2. For a class {Gr} of n achievement groups, the universal attributes
set A must contain at least n distinct elements for a CACS to be set up.

Proof. Let {Gr} be totally discriminated; then Ar ⋂ As = Ø and Axiom (A3)
of CACS is satisfied. Ak ≠ Ø for all k, so | Ak | ≥ 1. Also Ak ⊆ AGkp for all
p = 1, 2, . . ., Nk, so Ak ⊆ A. Since the n sets Ak are pairwise disjoint and non-empty,

| A | ≥ | ⋃kAk | ≥ n.

Let {Gr} be partially discriminated; then Ar ⋂ As ≠ Ø for every pair r, s =
1, 2, 3, . . ., n, so | Ar ⋂ As | ≥ 1 for every pair r, s. To satisfy Axiom (A3)
of CACS we must also have | (Ar - As) ⋃ (As - Ar) | ≥ 1. To meet the
minimum of each of the above two inequalities, we order the class {Ar} by
set inclusion and relabel its members as Arj, j = 1, 2, . . ., n, such that Arj ⊂ Arj+1,
j = 1, 2, . . ., n - 1, with the conditions | Ar1 | = 1 and | Arj+1 - Arj | = 1.
The ordering is total, so the ordered class has a maximal element Arn = Aα
for some α = 1, 2, 3, . . ., or n. By our construction we have | Arj | = j, 1 ≤ j ≤ n.
Consequently,

| A | ≥ | Aα | = | Arn | = n.

Finally, let {Gr} be of mixed discrimination. Label {Gr} such that G1, G2, . . .,
Gq, q < n, is the totally discriminated subclass and Gq+1, . . ., Gn is the
partially discriminated subclass. Then by the foregoing results we have

| A | ≥ q + (n - q) = n.

The proof is complete. □
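A small computational check may make the constructions used in the proof concrete. The Python sketch below builds, for a hypothetical n, the two extreme cases considered above (pairwise disjoint singleton classifying sets, and a strictly nested chain with | Arj | = j) and verifies that Axiom (A3) is satisfied while the universal attribute set contains exactly n elements. The data are invented purely for illustration.

# Sketch: the two extreme cases in the proof of Theorem 2 (hypothetical data).
n = 4
attributes = [f"a{i}" for i in range(1, n + 1)]      # universal set A, |A| = n

# Totally discriminated: pairwise disjoint, non-empty classifying sets.
disjoint = [{a} for a in attributes]
assert all(r == s or not (disjoint[r] & disjoint[s])
           for r in range(n) for s in range(n))

# Partially discriminated: strictly nested chain with |Arj| = j.
chain = [set(attributes[:j]) for j in range(1, n + 1)]
assert all(len(chain[j]) == j + 1 for j in range(n))
assert all(chain[j] < chain[j + 1] for j in range(n - 1))    # Arj is a proper subset of Arj+1

# In both cases Axiom (A3) holds (all classifying sets distinct)
# with a universal attribute set of exactly n elements.
assert len({frozenset(s) for s in disjoint}) == n
assert len({frozenset(s) for s in chain}) == n
print("both constructions satisfy (A3) with |A| =", len(attributes))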
It should be noted that it is assumed that all attributes referred to in the
above theorem have discrimination power.
We shall leave further development of the CACS theory for subsequent
discussion and introduce another non-numerical classifying system.
The Functional Object Classifying System (FOCS)
The FOCS is based on the assumption that the items in an achievement group
are components of a functional object, e.g. chair, bicycle, steam engine etc.
or of a functional group of objects like digging tools, toiletries etc. which can
be brought under a collective title.
Formally we conceive of a functional object μ in its microstructure as the
quintuple

(Fu, E, S, Mu, Cu) (1)

where Mu and Cu are maps given by

Mu: Fu → E × S, Mu(Fu) = Eu × Su (2)

Cu: Eu → S for Eu ∈ E (3)

with the range Su ⊆ S.
The characters have the following definitions: Fu is the set of functions or
attributes to be satisfied by the functional object μ; E is the class of sets of
components for the functional objects of the world; S is the class of sets of
states associated with the class of sets of components of world objects;
Eu ∈ E is the set of components for a particular μ; Mu maps Fu into E ×
S and it must be defined for all Fuj ∈ Fu; Su ⊆ S is the class of sets of
states corresponding to the component set Eu of μ in the realization of μ;
Cu will be called the map of the system of couplings for μ.
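One possible data-structure reading of the quintuple, given here only as a sketch, is the following Python fragment. The example object (a chair), its components and states, and the dictionary-based forms of Mu and Cu are assumptions introduced for illustration; the article itself does not prescribe any particular representation.

# A minimal sketch of the functional-object quintuple; component and state
# names ("seat", "legs", ...) are hypothetical illustrations.
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, Set

@dataclass
class FunctionalObject:
    functions: Set[str]                  # Fu: functions/attributes to be satisfied
    components: FrozenSet[str]           # Eu: components realizing mu
    states: Dict[str, Set[str]]          # Su: admissible states per component
    m_u: Callable[[str], tuple]          # Mu: function -> (components, states)
    c_u: Callable[[str], Set[str]]       # Cu: coupling map, component -> states

chair_states = {"seat": {"level"}, "legs": {"upright"}, "back": {"fixed"}}

chair = FunctionalObject(
    functions={"support a seated person"},
    components=frozenset(chair_states),
    states=chair_states,
    m_u=lambda f: (frozenset(chair_states), chair_states),   # every function met by all parts
    c_u=lambda component: chair_states[component],            # couplings fix each part's states
)

print(chair.c_u("seat"))    # -> {'level'}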
We shall not look at the properties of a functional object now, but rather we
shall consider our main topic.
A Functional Object Classification system exists in a Beer predictive model
if the items of each achievement group are components of some functional object
and the same functional object does not arise in two groups. If G is the class
of achievement groups Gr, and U is the set of all functional objects, then the
FOCS is the triple
(G, U, α) (4)
such that
α: G → U (5)
is a one-to-one map. The implementation problem for FOCS involves determining
what functional object is represented in an achievement group. Theorems on FOCS
should mainly be about this problem. We shall take a theorem in this direction.
Definition 4. Two objects are said to be connected if one of them cannot vary
its state independently of the state of the other.

Theorem 3. In a functional object any two components are connected (either
directly or indirectly, i.e. through the medium of other components).

Proof. The theorem is of course intuitively obvious. Formally, the map (Equation
(3)) above assigns a fixed subset Su of S to the component set Eu. Let SuEi
and SuEj be the states of elements Eui and Euj respectively in the realization
of μ. Then SuEi, SuEj ∈ Su. But Su is a fixed set. Thus SuEi or SuEj cannot be
arbitrary with respect to the other. Therefore Eui and Euj are connected. □

The application of Theorem 3 is the suggestion that, with enough experience
and imagination, one can infer the functional object in which two given objects
could be connected. Further evidence from other items in the group may confirm
or negate initial conceptions since the functional object that connects two given
items may not be unique.

Remark on CACS and FOCS


The above approaches to CACS and FOCS may be applied if the conditions
for the existence of the given classifying systems are satisfied in a given problem.
We shall see, however, that these approaches may have serious limitations. In
the CACS we have looked for attributes that are common to items in an
achievement group. We recall that each group was formed because the productivity
values of the items in it follow the same distribution. There is no theory
in the set-up which guarantees that the common attributes we have discovered
were actually responsible for the grouping. It is possible therefore that the next
incoming item has the attributes of a certain group but does not have a
productivity value falling into that group. However, it is reported by Beer[1,
p. 336] that the approach does work. We should expect this, since productivity
is, approximately, the result of the actual way the job is handled in relation
to the optimal way it could be handled. Similar handling sequences for two
products could place them in the same achievement group. We look for this
kind of thing (generally those factors that affect handling) when we are searching
for the common attributes for an achievement group.

The Features Approach


Introduction
This is the approach from statistical pattern recognition, which has already been
mentioned. The approach is quantitative. It uses measurements of features
that are common to the items to be discriminated, but whose measured values
are known to discriminate between the items. For example, in Fu and Mendel[5],
in order to discriminate the Bs from the 8s in a collection of characters consisting
of handwritten Bs and 8s, Duda uses the straightness (the ratio of the distance
between the end points of a line to its arc length) of the middle third of the
left side of the character and the ratio of the maximum width of the top half
of the figure to the maximum width of the bottom half (many handwritten Bs
tend to be larger in the bottom half, while many handwritten 8s tend to be larger
at the top). Those are measurements from two features of each of the figures.
Each figure has each of the features but the figures are discriminated by the
measurements obtained from these features. In the example Duda has used
two measurements and they become the components of what is known as the
Feature Vector which the classifier will use to discriminate the figures. Unlike
the CACS, in which we search for common attributes of items in a group to
use in classifying incoming jobs, the features whose measurements are known
to discriminate the items are chosen here. This pattern-recognition approach
thus has a stronger footing than either CACS or FOCS. Enough features are
chosen to give discrimination as accurately as we may wish. For example, the
two measurements used by Duda above give better classification than just either
one of them. Pattern recognition uses statistical procedures and a detailed
discussion is found in Fu and Mendel[5, chapters 1-2].
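To make the feature-vector idea concrete, the following Python sketch works in the spirit of the B-versus-8 example: each character is reduced to two measurements (a straightness ratio and a top-to-bottom width ratio), and a simple nearest-mean classifier, trained on labelled samples, assigns new cases. The numerical values are invented for illustration and are not taken from Fu and Mendel[5].

# Sketch of a two-feature nearest-mean classifier (illustrative data only).
import math

# Feature vectors: (straightness of middle third of left side,
#                   max width of top half / max width of bottom half).
training = {
    "B": [(0.95, 0.80), (0.92, 0.75), (0.97, 0.85)],   # Bs: straight left side, wider bottom
    "8": [(0.60, 1.25), (0.55, 1.30), (0.65, 1.15)],   # 8s: curved left side, wider top
}

def mean(vectors):
    k = len(vectors)
    return tuple(sum(v[i] for v in vectors) / k for i in range(len(vectors[0])))

prototypes = {label: mean(vs) for label, vs in training.items()}

def classify(feature_vector):
    """Assign the label whose class mean is nearest in Euclidean distance."""
    return min(prototypes,
               key=lambda label: math.dist(feature_vector, prototypes[label]))

print(classify((0.90, 0.78)))   # -> 'B'
print(classify((0.58, 1.20)))   # -> '8'

Any standard statistical classifier could replace the nearest-mean rule used in this sketch; the essential point is that classification rests on measurements rather than on verbal definitions.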

Link with Psychological Measurement Theory


To use a reliable method of statistical pattern recognition in our job classification
problem, we need measurements. We need first to collect features (of jobs)
whose measures do discriminate the jobs into the productivity groups of our
predictive model. The features we are likely to come up with will, in most cases,
not be those that can be measured physically in the way, for example, in which
we measure the upper half of a handwritten letter B at its widest part. The
features to be tried for measurement are normally suggested as a result of
experience and intuition. They are then tried for their discriminating power
with samples that have already been correctly classified. One would suspect,
for example, that the amount of irrelevant variety that a job can generate will
be one of the factors determining productivity in such a job. The job under
an unsupervised set of workers is likely to generate more irrelevant variety
than that under a supervised set. We cannot measure this variety feature
physically, but there are results in psychological measurement theory that could
help us.
Psychological measurement theory uses subjective judgements to create
measurement scales for situations that cannot be measured physically. For
example if for every pair of jobs Jp and Jq we can find a job Js such that its
amount of irrelevant variety is judged to be halfway between that of Jp and that
ofJpthen the situation can be handled by the Bisection System of the psycho­
logical measurement models and a so-called interval scale can be constructed
for the measurement of the feature. Alternatively, if we are able to obtain our
judgements under different combinations of two (or more) independent variables,
then "an interval scale measurement of the additive type" can be constructed
for our variable of interest under the additive conjoint measurement system
of the psychological measurement models. Here we obtain our judgements of
relative magnitudes of our variable of interest as joint effects of two (or more)
measurable factors at various combinations of their values. Coombs[6,
chapter 1] discusses these models in some detail.
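As a rough sketch of the bookkeeping involved, the following Python fragment shows how bisection judgements could be turned into an interval scale for "irrelevant variety": two anchor jobs receive arbitrary scale values, and each job judged halfway between two already-scaled jobs receives the mean of their values. The job names and judgements are hypothetical, and the fragment illustrates only the arithmetic, not the full bisection model treated by Coombs[6].

# Sketch: building an interval scale from bisection judgements (hypothetical data).

# Anchors: the jobs judged to have the least and the most irrelevant variety.
scale = {"J_low": 0.0, "J_high": 100.0}

# Each judgement says: job s is perceived as halfway between jobs p and q.
bisection_judgements = [
    ("J_mid",   "J_low", "J_high"),
    ("J_lower", "J_low", "J_mid"),
    ("J_upper", "J_mid", "J_high"),
]

for s, p, q in bisection_judgements:
    # The bisected job gets the mean of the two already-scaled jobs.
    scale[s] = (scale[p] + scale[q]) / 2.0

print(scale)
# {'J_low': 0.0, 'J_high': 100.0, 'J_mid': 50.0, 'J_lower': 25.0, 'J_upper': 75.0}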
It should be noted that all attributes or features for classification purposes
in each of the above approaches must be independent of process results, since
incoming jobs are to be classified before being undertaken.

Summary and Conclusion


In this article we have discussed non-numerical and quantitative approaches
to the job classification problem, which arises in the application of Beer's Predictive
Model for control of operations in enterprises. We have tried to bring to the
notice of researchers the problems we believe are involved and to point out
how solutions may be obtained. In effect, we have not solved the problem
completely. We believe that research in this area should be very rewarding,
judging from the potential of the practical applications of such research. We
particularly advocate research into features of jobs, with their measurement
methods, for use in the pattern recognition approach. The CACS and the FOCS
are also worthy of consideration.

References
1. Beer, S., Decision and Control, John Wiley & Sons, New York, NY, 1966.
2. Beer, S., Brain of the Firm, John Wiley & Sons, New York, NY, 1972.
3. Beer, S., The Heart of Enterprise, John Wiley & Sons, New York, NY, 1979.
4. Adeagbo-Sheikh, A.G., "Research-adjustment Problem in the Construction of Beer's
Predictive Model", Kybernetes, Vol. 21 No. 1, 1992, pp. 46-51.
5. Fu, K.S. and Mendel, J.M., Adaptive, Learning and Pattern Recognition Systems, Academic
Press, New York, NY, 1970.
6. Coombs, C.H., Dawes, R.M. and Tversky, A., Mathematical Psychology, Prentice-Hall,
Englewood Cliffs, NJ, 1970.
