Kybernetes, Vol. 22 No. 2
Problem in the Use of Beer's
Predictive Model
A.G. Adeagbo-Sheikh
Department of Mathematics, Obafemi Awolowo University,
Ile-Ife, Nigeria
Introduction
Beer's methodology on the Control of Operations in an enterprise is presented
in different equivalent forms by Beer[1-3]. However, in Beer[1, Chapter 13] a
predictive model is shown to emerge from the procedure, and the emergence
of this predictive model is believed to be about the most important result of
the methodology. Some of the problems that arise in producing the model have
already been discussed elsewhere by this author[4]. This article considers
certain problems that arise in the use of the model.
The main feature of the Predictive Model is the partitioning of the different
job items in the operations unit into "achievement" groups. This partitioning
is to be used to classify incoming jobs (some of which may not even have been
handled previously but which can be handled by the unit) into correct
achievement groups. The importance of this is that the stability of the operations
unit is always assured. On the partitioning Beer says[1, pp. 335-6]:
. . . actual events are being used to create a structure for the situation, a structure designated
by a set of statistical control groups. This structure is then used to classify the jobs that
are coming along: . . . and because it is characterized by an elaborate definition, it is almost
infallible in recognizing the job to be done by the time it will take. In other words, it is an
excellent predictor.
The system, which we shall refer to as CACS, is built from the following sets:
G is the set of all job items in the operations unit; {Gr} is the class of
achievement groups Gr ⊂ G, r = 1, 2, . . ., n, that emerge in the Predictive
Model; the set A, assumed finite, is the set of all attributes that a job item
in the operations set-up could possess; Agrj ⊆ A is the set of attributes of
the element grj ∈ Gr; and {Ar} is the class of subsets Ar ⊆ A, where each
Ar satisfies the following axioms:
Ar ≠ Ø; (A1)
a ∈ Ar if and only if a ∈ Agrj for all j = 1, 2, . . ., Nr, where Nr (finite)
is the number of elements in group Gr; (A2)
Ar ≠ As if r ≠ s. (A3)
We shall call the set Ar the classifying set for achievement group Gr under
CACS.
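By axiom (A2), the classifying set Ar is exactly the set of attributes common to every member of Gr, so it can be computed directly from the members' attribute sets and then used to classify an incoming job. A minimal sketch in Python; the groups, attribute names and jobs below are invented purely for illustration:

```python
# Hypothetical attribute sets Agrj for the members of two achievement groups.
groups = {
    "G1": [{"weld", "cut", "paint"}, {"weld", "cut", "drill"}],
    "G2": [{"wire", "solder"}, {"wire", "solder", "test"}],
}

def classifying_set(member_attrs):
    """Axiom (A2): a is in Ar if and only if a belongs to every member's attributes."""
    return set.intersection(*map(set, member_attrs))

# One classifying set Ar per achievement group Gr.
A = {name: classifying_set(members) for name, members in groups.items()}

def classify(job_attrs, A):
    """An incoming job is assigned to every group whose classifying set it satisfies."""
    return [name for name, Ar in A.items() if Ar <= job_attrs]

print(classify({"weld", "cut", "grind"}, A))  # ['G1']
```

Note that a previously unseen job (here one carrying the extra attribute "grind") is still classified, which is exactly the predictive use of the partitioning described above.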
We now consider some properties of CACS.
Theorem 1. Ar is the union of the class {Sα} of all sets Sα such that
Sα ⊆ Agrj for all j = 1, 2, . . ., Nr.
(Together with Axiom (A2) above, the theorem formalizes the intuitive notion
that the intersection of a number of sets is the largest set common to them all.)
Proof. We first observe that Ar ∈ {Sα}. Suppose Ar ≠ ⋃Sα; then Ar ⊂ ⋃Sα,
so that ⋃Sα − Ar is not empty. Let a ∈ ⋃Sα − Ar; then a ∉ Ar but a ∈ Agrj
for all j = 1, 2, . . ., Nr. This contradicts axiom (A2) of CACS.
Hence Ar = ⋃Sα. □
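Theorem 1 can be checked exhaustively on a small example: the union of every set Sα contained in all the Agrj coincides with the intersection of the Agrj, which by axiom (A2) is Ar. A sketch with invented attribute sets:

```python
from itertools import chain, combinations

# Hypothetical attribute sets Agrj of the members of one achievement group.
member_attrs = [{"a", "b", "c"}, {"a", "b", "d"}, {"a", "b"}]

universe = set().union(*member_attrs)

def powerset(s):
    """All subsets of s, as sets."""
    s = list(s)
    return (set(c) for c in chain.from_iterable(
        combinations(s, k) for k in range(len(s) + 1)))

# Union of every S_alpha that is a subset of all the Agrj ...
common_subsets = [S for S in powerset(universe)
                  if all(S <= m for m in member_attrs)]
union_of_common = set().union(*common_subsets)

# ... equals the intersection of the Agrj, i.e. the classifying set Ar.
intersection = set.intersection(*member_attrs)
assert union_of_common == intersection == {"a", "b"}
```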
Corollary 1. Let {Cw} be a class of sets, partially ordered by "Cu is a subset
of Cv". Suppose further that Cw ⊆ Agrj for all j = 1, 2, . . ., Nr. Then the class
{Cw} has a maximal element which coincides with Ar.
Proof. The ordering is total and, since A (the universal attributes set) is finite,
the class {Cw} has a unique maximal element, namely ⋃w Cw. Since Cw ⊆ Agrj for all
j = 1, 2, . . ., Nr, we have by Theorem 1 that ⋃w Cw = Ar. □
Definition 1. The class {Gr}, r = 1, 2, . . ., n, of achievement groups is said
to be totally discriminated if, for every pair Gr, Gs with r ≠ s, Ar ∩ As = Ø.
Definition 2. The class {Gr} is said to be partially discriminated if, for every
pair Gr, Gs with r ≠ s, Ar ∩ As ≠ Ø.
Definition 3. The class {Gr} is said to be of mixed discrimination if it is totally
discriminated for some of the Gr and partially discriminated for the rest.
Theorem 2. For a class {Gr} of n achievement groups, the universal attributes
set A must contain at least n distinct elements for a CACS set-up:
|A| ≥ n.
Proof. Let {Gr} be totally discriminated; then the sets Ar are pairwise disjoint
and, by axiom (A1), each is non-empty, so
|A| ≥ |⋃r Ar| ≥ n.
Next let {Gr} be partially discriminated; then Ar ∩ As ≠ Ø for every pair r, s =
1, 2, . . ., n, so |Ar ∩ As| ≥ 1 for every pair r, s. To satisfy axiom (A3)
of CACS we must also have |(Ar − As) ∪ (As − Ar)| ≥ 1. To meet the
minimum of each of these two inequalities, we order the class {Ar} by set
inclusion and relabel its members as Arj, j = 1, 2, . . ., n, such that Arj ⊂ Arj+1,
j = 1, 2, . . ., n − 1, with the conditions |Ar1| = 1 and |Arj+1 − Arj| = 1.
The ordering is total, so the ordered class has a maximal element Arn. By our
construction |Arj| = j, 1 ≤ j ≤ n. Consequently,
|A| ≥ |Arn| = n.
Finally, let {Gr} be of mixed discrimination. Label {Gr} such that G1, G2, . . .,
Gq, q < n, is the totally discriminated subclass and Gq+1, . . ., Gn is the
partially discriminated subclass. Then, by the foregoing results,
|A| ≥ q + (n − q) = n.
The proof is complete. □
Note that all attributes referred to in the above theorem are assumed to have
discriminating power.
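The nested chain used in the proof attains the bound exactly: taking Arj = {a1, . . ., aj} gives n pairwise distinct, pairwise overlapping classifying sets drawn from exactly n attributes. A sketch with invented attribute names:

```python
n = 5
attributes = [f"a{i}" for i in range(1, n + 1)]

# The chain A_r1 ⊂ A_r2 ⊂ ... ⊂ A_rn with |A_r1| = 1 and |A_rj+1 − A_rj| = 1.
chain_sets = [set(attributes[:j]) for j in range(1, n + 1)]

# Axiom (A3): all sets are distinct.
assert all(chain_sets[i] != chain_sets[j]
           for i in range(n) for j in range(i + 1, n))

# Partial discrimination: every pair of sets overlaps.
assert all(chain_sets[i] & chain_sets[j]
           for i in range(n) for j in range(i + 1, n))

# The universal attributes set needs exactly n elements for this construction.
assert len(set().union(*chain_sets)) == n
```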
We shall leave further development of the CACS theory for subsequent
discussion and introduce another non-numerical classifying system.
The Functional Object Classifying System (FOCS)
The FOCS is based on the assumption that the items in an achievement group
are components of a functional object (e.g. chair, bicycle, steam engine) or
of a functional group of objects (e.g. digging tools, toiletries) which can
be brought under a collective title.
Formally we conceive of a functional object μ in its microstructure as the
quintuple
μ = (Fu, Eu, Su, Mu, Cu) (1)
where Mu and Cu are maps given by
Mu: Fu → E × S, Mu(Fu) = Eu × Su (2)
Cu: Eu → S, for Eu ∈ E, (3)
with the range Su ⊆ S.
The characters have the following definitions: Fu is the set of functions or
attributes to be satisfied by the functional object μ; E is the class of sets of
components for the functional objects of the world; S is the class of sets of
states associated with the class of sets of components of world objects;
Eu ∈ E is the set of components for a particular μ; Mu maps Fu into E ×
S and must be defined for all Fuj ∈ Fu; Su ⊆ S is the class of sets of
states corresponding to the component set Eu of μ in the realization of μ;
Cu will be called the map of the system of couplings for μ.
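Reading the quintuple as collecting Fu, Eu, Su and the two maps Mu and Cu, a functional object can be sketched as a small data structure. The "chair" below and all its names are invented for illustration, and the maps are deliberately simplified so that every function is realized by the whole component set in its admissible states:

```python
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, Set, Tuple

@dataclass
class FunctionalObject:
    """A sketch of the quintuple for a functional object μ; the field names
    are illustrative stand-ins for Fu, Eu, Su, Mu and Cu in the text."""
    functions: Set[str]                 # Fu: functions/attributes to satisfy
    components: FrozenSet[str]          # Eu: the component set
    states: Dict[str, Set[str]]         # Su: admissible states per component
    M: Callable[[str], Tuple[FrozenSet[str], Dict[str, Set[str]]]]  # Mu
    C: Callable[[FrozenSet[str]], Dict[str, Set[str]]]              # Cu

components = frozenset({"seat", "legs", "back"})
states = {"seat": {"level"}, "legs": {"upright"}, "back": {"attached"}}

chair = FunctionalObject(
    functions={"support sitting"},
    components=components,
    states=states,
    M=lambda f: (components, states),   # every function maps to (Eu, Su)
    C=lambda e: states,                 # couplings fix the admissible states
)
assert chair.M("support sitting")[0] == chair.components
```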
We shall not look at the properties of a functional object now, but rather we
shall consider our main topic.
A Functional Object Classifying System exists in a Beer predictive model
if the items of each achievement group are components of some functional object
and the same functional object does not arise in two groups. If G is the class
of achievement groups Gr, and U is the set of all functional objects, then the
FOCS is the triple
(G, U, α) (4)
such that
α: G → U (5)
is a one-to-one map. The implementation problem for FOCS involves determining
what functional object is represented in an achievement group. Theorems on FOCS
should mainly be about this problem. We shall take a theorem in this direction.
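The defining requirement of Equation (5), that α be one-to-one, is straightforward to state as a check on a concrete assignment; the group and object names below are invented:

```python
def is_focs(alpha):
    """A FOCS requires α: G → U to be one-to-one: the same functional
    object must not arise in two achievement groups."""
    objects = list(alpha.values())
    return len(objects) == len(set(objects))

# Hypothetical assignment of achievement groups to functional objects.
assert is_focs({"G1": "bicycle", "G2": "chair"})
assert not is_focs({"G1": "bicycle", "G2": "bicycle"})
```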
Definition 4. Two objects are said to be connected if one of them cannot vary
its state independently of the state of the other.
Theorem 3. In a functional object any two components are connected (either
directly or indirectly, i.e. through the medium of other components).
Proof. The theorem is of course intuitively obvious. Formally, the map (Equation
(3)) above assigns a fixed subset Su of S to the component set Eu. Let SuEi
and SuEj be the states of elements Eui and Euj respectively in the realization
of μ. Then SuEi, SuEj ∈ Su. But Su is a fixed set. Thus neither SuEi nor SuEj can be
arbitrary with respect to the other. Therefore Eui and Euj are connected. □
The application of Theorem (3) is the suggestion that, with enough experience
and imagination, one can infer the functional object in which two given items
could be connected. Further evidence from other items in the group may confirm
or negate initial conceptions, since the functional object that connects two given
items may not be unique.
References
1. Beer, S., Decision and Control, John Wiley & Sons, New York, NY, 1966.
2. Beer, S., Brain of the Firm, John Wiley & Sons, New York, NY, 1972.
3. Beer, S., Heart of Enterprise, John Wiley & Sons, New York, NY, 1979.
4. Adeagbo-Sheikh, A.G., "Research-adjustment Problem in the Construction of Beer's
Predictive Model", Kybernetes, Vol. 21 No. 1, 1992, pp. 46-51.
5. Fu, K.S. and Mendel, J.M., Adaptive, Learning and Pattern Recognition Systems, Academic
Press, New York, NY, 1970.
6. Coombs, C.H., Dawes, R.M. and Tversky, A., Mathematical Psychology, Prentice-Hall,
Englewood Cliffs, NJ, 1970.