
Microprocessors and Microsystems 23 (1999) 435–448

www.elsevier.nl/locate/micpro

Integrated modular avionics: system modelling


R.J. Bluff*
DERA Farnborough, Avionics Systems Architecture Section, Avionics and Sensors Department, A5 Probert Building, Room 1005, Farnborough, Hampshire GU14 0LX, UK
* Tel.: +44-1252-393600; fax: +44-1252-395120; e-mail: rjbluff@dera.gov.uk (R.J. Bluff)

Abstract

Through the need to reduce the risk of adopting future avionic architectures and standards, a better understanding of the operation of evolving system instances and standards early in the design cycle is required. A flexible means of achieving this aim is to conduct an analysis through the development of an avionic system model based on behavioural and performance simulation. This paper defines a modelling framework for the construction of an avionic system model that would provide a means of assessing the hardware and software components of a system and their interaction. The Shlaer–Mellor object-oriented analysis method has been used to define the model. To form an avionic system model, three modelling domains have been defined: behavioural, performance and visualisation. Behavioural models constitute primarily the software components of the system, the performance models analyse the hardware concepts, and visualisation models provide a means of understanding the system's operation. The modelling domains can be applied separately or they can be linked together to form a system model. A developed performance model of the Aeronautical Radio, Inc. (ARINC) 651 integrated modular avionic architecture is described, and it is discussed how architecture bottlenecks and system capacity can be derived. An APplication/EXecutive (APEX) model is also described and the behavioural analysis of this interface is illustrated. System visualisation concepts are also discussed, along with validation of the avionic modelling suite and how the various modelling components can be combined to form an avionic system model. © 1999 Published by Elsevier Science B.V.

Keywords: Integrated modular avionic architecture; Shlaer–Mellor analysis; Avionic system modelling; Performance analysis

1. Introduction

In order to reduce the cost of future avionic system developments, avionic architectures and standards have been developed to encourage system component reuse and portability. In the civil avionic community this has led to the development of the Aeronautical Radio, Inc. (ARINC) 650 series of avionic architectures and standards [1]. Their purpose is to define a compatible framework that airframe manufacturers and avionic suppliers can make reference to. A competitive source of supply is encouraged to ensure that the most cost-effective solutions are adopted.

An important architecture concept that forms an integral part of the evolving avionic standards is the notion of an integrated modular avionic (IMA) architecture. Individual architecture building blocks are defined that can be reused to form individual architecture instances and system implementations. These building blocks can include hardware components (processing modules, data communication networks) as well as components of the software architecture. ARINC 651 defines a set of rules and guidelines for the realisation of a civil IMA architecture [2]. A range of architecture options is described to support a potential system development.

In support of portability, the 650 series of standards has defined an APplication/EXecutive (APEX), ARINC 653, which describes an interface between an avionic application and an underlying avionic operating system [3].

The instantiation of an IMA system also encourages the enhancement of fault tolerance and the improvement of the availability of avionic systems. The guarantee of a maintenance-free operating period is an important objective for the airlines [4].

The development of these standards and their potential realisation in an avionic system development needs to be assessed in order to ensure that current and future operating requirements can be met. Simulation modelling and analysis is a flexible means of contributing to this evaluation, since a range of application loads can be investigated through modelling. This paper is concerned with the development of a modelling framework that can support the analysis of the ARINC 650 series standards, in order to reduce the risk in their adoption in future avionic systems.

An important aim of modelling is to gain an understanding of a potential system's operation early in the design cycle, when changes can be made more flexibly and with less cost. Significant design changes later in a design cycle can become prohibitively expensive.

This paper is organised as follows. An avionic system model framework is defined. Component models of this framework are then described, including an ARINC 651 (IMA) performance model and an ARINC 653 software behavioural model. Visualisation techniques in support of the performance model are then briefly described. A simple analytical model is then discussed to assist in the validation of the modelling framework. Also in support of validation, options for the statistical analysis of data captured from an avionic rig compared against corresponding simulation results are discussed. How a system model can be realised from the various modelling components is then reviewed. Options for future work are then described.

Fig. 1. Avionic system model framework domains.

Fig. 2. Avionic architecture baseline subsystems.



Fig. 3. Software architecture objects.

2. Avionic system model framework

An avionic system model framework has been defined to formalise the relationships between the different modelling components, a subset of which would be required to analyse a proposed avionic system. The Shlaer–Mellor object-oriented analysis (OOA) method [5] has been used to develop this definition. The SES/objectbench tool [6] supports Shlaer–Mellor and was used to capture the analysis information.

As shown in Fig. 1, the first stage in a Shlaer–Mellor analysis is to define the independent domains: each domain has its own rules and policies that are common to the domain's objects. The behavioural, performance and concept visualisation domains support the need to assess an avionic architecture. These domains collectively support the realisation of an IMA system model.

The avionic architecture baseline provides the modelling component reference for the individual modelling domains. Fig. 2 shows the individual subsystems defined within this domain. Each subsystem defines objects for the software architecture, the hardware components and facilities for the provision of fault tolerance. Collectively these form the baseline reference.

Within the software architecture subsystem, objects have been defined that formalise the relationships between the various elements of the system software. Fig. 3 shows a selection of these objects: instances of software interfaces are shown between avionic applications and instances of supporting hardware.

Fig. 4. System hardware objects.

In a similar way, the hardware components that instances of the software architecture will reside in can be defined as objects, as shown in Fig. 4. A partial range of line replaceable modules is shown, a subset of which would reside in instances of IMA cabinets as defined in ARINC 651 [2].

Fig. 5. System fault tolerance objects.

Future avionic systems will need to progressively support more extensive facilities for fault tolerance. Fig. 5 specifies the range of techniques that will need to be supported.

It has been found that a Shlaer–Mellor analysis provides a succinct means of specifying systems and their various options and instances. Consequently this method was found to be very suitable for describing a modelling framework for an avionic system model. From the defined avionic architecture baseline, modelling components can be identified for the software and hardware elements of the system.

2.1. ARINC 651 (IMA) performance model

The system hardware objects have been modelled as a system architecture hardware performance analysis model based on ARINC 651 [2]. The SES/workbench tool [7] has been utilised to develop this model.

This tool provides the capability to develop multi-purpose discrete event simulations. Performance statistics can be gathered, reporting means, maximum and minimum values and their standard deviations.

Fig. 6. ARINC 651 main modules.

The main modules for the ARINC 651 performance model are shown in Fig. 6. The IMA architecture defines the avionic architecture to be modelled. The model can be configured to represent different architecture instance options. Computing resources exist in the IMA cabinets; data can be transferred via the bus. Various aircraft sensors, actuators and interfaces have also been modelled.

Fig. 7. An example IMA cabinet.

A typical IMA cabinet is shown in Fig. 7. Three processors with access to a main memory module via a backplane bus have been modelled. The backplane also has access to the main data bus via a gateway module.

Fig. 8. Processing resource performance model.

The performance model of a processor server is shown in Fig. 8. Results from the processor can be sent to an output or written to memory. The "Write_To_RAM" modelling component allocates memory space from the "Local_RAM" resource pool (local processor memory). Alternatively the processor may need to access data from its memory or may need to delete unwanted data. These alternatives are shown by the transaction arcs "Read_From_RAM" and "Release_Memory", respectively. Any access required to ROM is shown by the "ROM_Access" server and its associated transaction arc. Access to RAM is also modelled by a server ("RAM_Access"), since memory is a finite resource that takes time to access.

The processor server icon shown in Fig. 8, for example, models a queue and a delay. Different queuing disciplines can be modelled; first come first served is commonly used. A range of probability distributions can be entered for the service delay; a negative exponential is often used for queuing systems. Processor response time and queue length can be monitored. Processor utilisation can be measured; it is defined as the proportion of time the server is busy and can be expressed as a percentage.
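To make the finite "Local_RAM" pool concrete, the fragment below sketches an equivalent resource pool in plain Python. It is a minimal illustration only: the capacity and access time are invented values, and the method names merely mirror the transaction arcs named above rather than any SES/workbench interface.

```python
class LocalRam:
    """Finite local RAM pool with a fixed access delay (illustrative values only)."""
    def __init__(self, capacity_words=65_536, access_time=1.0e-3):
        self.capacity = capacity_words
        self.free = capacity_words
        self.access_time = access_time

    def write(self, words):            # cf. the Write_To_RAM transaction arc
        if words > self.free:
            raise MemoryError("Local_RAM exhausted")
        self.free -= words
        return self.access_time        # delay charged to the transaction

    def read(self, words):             # cf. Read_From_RAM
        return self.access_time

    def release(self, words):          # cf. Release_Memory
        self.free = min(self.capacity, self.free + words)

ram = LocalRam()
elapsed = ram.write(128) + ram.read(128)   # time spent on the two accesses
ram.release(128)
```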

Fig. 10. APEX subsystems.



Fig. 11. Process management objects.

Fig. 12. Process object lifecycle.



Fig. 13. Suspend–self object lifecycle.

Time to access memory can also be monitored. From these basic performance parameters, architecture performance bottlenecks can be identified and the system's capacity to support avionic applications can be derived. A continuously growing queue length found for a server during a simulation could indicate that a bottleneck has been identified and that the capacity of the system to perform its tasks will potentially be exceeded. In this situation either the arrival rate of jobs needs to be reduced or the power of the server needs to be increased.
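The server just described is, in queueing terms, a single first-come-first-served queue with a random service delay. The sketch below is a minimal hand-rolled simulation of that idea (not the SES/workbench model itself), assuming Poisson arrivals and negative-exponential service; it reports the utilisation, mean response time and residual backlog that are used above to spot a bottleneck.

```python
import random

def simulate_server(arrival_rate, service_rate, n_jobs=50_000, seed=1):
    """Single FCFS server with Poisson arrivals and exponential service times."""
    rng = random.Random(seed)
    clock = 0.0            # time of the current arrival
    server_free_at = 0.0   # time the server finishes its current backlog
    busy_time = 0.0        # accumulated service time
    total_response = 0.0   # accumulated queueing + service time

    for _ in range(n_jobs):
        clock += rng.expovariate(arrival_rate)    # next arrival
        service = rng.expovariate(service_rate)   # service demand
        start = max(clock, server_free_at)        # FCFS: wait if the server is busy
        server_free_at = start + service
        busy_time += service
        total_response += server_free_at - clock

    utilisation = busy_time / server_free_at
    mean_response = total_response / n_jobs
    backlog = server_free_at - clock              # unfinished work at the end
    return utilisation, mean_response, backlog

print(simulate_server(arrival_rate=0.8, service_rate=1.0))
print(simulate_server(arrival_rate=1.05, service_rate=1.0))
```

With the arrival rate above the service rate, the backlog grows without bound, which is the continuously growing queue length symptom described above.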
2.2. ARINC 653 software behavioural model

A software behavioural model of ARINC 653 [3] has been completed utilising the Shlaer–Mellor method. The first stage is to define a domain chart, as illustrated in Fig. 9.

Fig. 9. Software behavioural model domain chart.

By convention the top of the domain chart is the application domain, the middle the service domain and the bottom the implementation or architecture domain. The interface between domains is an important area of development in the Shlaer–Mellor method.

The operation of civil aircraft requires the support of an avionic software architecture, and in turn APEX, the application domain, is an important component of that architecture. The individual subsystems within the APEX domain are shown in Fig. 10.

Each subsystem contains a set of objects relating to the service calls required by its functional grouping. As an example, Fig. 11 illustrates a selection of objects defined for process management. The process object is created, suspended or stopped, for example, by the surrounding service call objects. In the Shlaer–Mellor method, roles as well as entities can be defined as objects [5].

Each object has a lifecycle; Fig. 12 shows the lifecycle for the process object. The state transitions can be modelled with the support of a discrete event simulator. The APEX model has been implemented using the SES/objectbench tool [6]; simulation of the object lifecycles is supported. Delays for event transitions can be defined in a similar manner to the service delays defined in SES/workbench [7], including probability distributions, for example.

An important aim of behavioural modelling is to understand the operation of a system and to ensure that the sequence of events occurs as expected. Through simulation, race conditions can be investigated. Fig. 13 defines the lifecycle of the Suspend–Self object. Shlaer–Mellor supports a timer mechanism whereby events can be generated after a pre-determined delay.

Fig. 14. Processor utilisation.

In the Suspend–Self lifecycle, if the process has suspended itself with a time-out, it is resumed when the time-out delay expires. The time-out mechanism ensures that a process does not lock up. Any form of failure is reported in the return code.

The investigation of the determinism of the behaviour of the object lifecycles is well suited to this form of simulation modelling. This form of analysis is particularly important for the development of avionic systems.
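A minimal sketch of such a lifecycle is given below. The state names, service call and return codes are simplified placeholders chosen for illustration; they are not the ARINC 653 definitions, and the timer is driven by an explicit clock argument rather than a discrete event simulator.

```python
from enum import Enum, auto

class State(Enum):
    DORMANT = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()          # suspended, optionally with a time-out armed

class Process:
    """Toy process lifecycle; names and codes are illustrative only."""
    def __init__(self, name):
        self.name = name
        self.state = State.DORMANT
        self.timeout_at = None
        self.return_code = "NO_ERROR"

    def start(self):
        self.state = State.READY

    def dispatch(self):
        if self.state is State.READY:
            self.state = State.RUNNING

    def suspend_self(self, now, time_out=None):
        """Suspend the running process; arm a timer so it cannot lock up forever."""
        if self.state is not State.RUNNING:
            self.return_code = "INVALID_MODE"     # failure reported via return code
            return self.return_code
        self.state = State.WAITING
        self.timeout_at = None if time_out is None else now + time_out
        self.return_code = "NO_ERROR"
        return self.return_code

    def tick(self, now):
        """Driven by the simulation clock: resume once the time-out expires."""
        if self.state is State.WAITING and self.timeout_at is not None \
                and now >= self.timeout_at:
            self.state = State.READY
            self.timeout_at = None

p = Process("nav_update")
p.start(); p.dispatch()
p.suspend_self(now=0.0, time_out=5.0)
p.tick(now=6.0)                        # time-out expired: process is READY again
```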
2.3. Visualisation

Visualisation displays have been developed to support the avionic modelling framework. Fig. 14 presents a graphic visualisation of a five-cabinet avionic system's ACRs (avionics computing resources), including the processing resources, backplane buses and the main global avionic data bus. In this case the utilisation of the resources is being monitored during a simulation. Highly utilised resources are shown in a darker colour. Potential system bottlenecks can be readily appreciated with this form of visualisation.

The Altia tool has been utilised for the development of the visualisation displays [8]. The graphical display can be developed to react to an event generated by SES/workbench or SES/objectbench; data monitored by these tools can also be passed to a graphical display, including on-line performance statistics as illustrated in Fig. 14.

Visualisation displays are an important way of conveying modelling results to non-experts and can be very useful for generating customer-oriented displays. In terms of an avionic system this could include an example cockpit display.

2.4. Analytical model

In order to assist in the initial validation of the modelling framework, an analytical model has been developed. Service centres have been defined for the main components of the architecture, including processors, backplane buses and main avionic data buses. The inputs and outputs of the analytical model are defined as follows:

Inputs:
K = number of service centres
k = 1, …, K
λ = arrival rate to the system (= V/T)
v_k = number of visits to service centre k
n = number of transactions in the system
V_k = v_k / n
S_k = mean service time at service centre k
T = model simulation time (in simulation units)
Outputs:
λ_max = maximum capacity of the system
R_k = mean response time
X_k = throughput (or completion rate)
U_k = server utilisation
Q_k = mean queue length
N = mean number of transactions in the system
R = mean system response time

From these definitions the following equations can be derived from Little's Law [9]:

λ_max = 1 / max_k {V_k S_k}    (1)

(if λ ≥ λ_max the system has insufficient capacity)

X_k = λ    (2)

U_k = X_k S_k    (3)

V_b S_b = max_k {V_k S_k}    (4)

Eq. (1) derives the maximum capacity of a system to absorb tasks, Eq. (2) calculates the system throughput, Eq. (3) calculates the utilisation of a model component and, finally, Eq. (4) identifies the system's bottleneck (service centre b).

These equations are valid provided the number of tasks entering the system equals the number completing service. The avionic system model simulation must therefore run until flow balance is achieved. Table 1 lists the simulation statistics for the main avionic bus under flow balance conditions during an example analysis. Table 2 uses these statistics to solve Eqs. (1)–(4). As can be seen, the calculated utilisation is in agreement with the simulation results. Since flow balance can be assumed, the system bottleneck can be identified (Eq. (4)) and the maximum capacity of the system can be derived (Eq. (1)).

Table 1
Global bus simulation statistics

Table 2
Global bus statistics

Model component                        Avionic bus
Measured values
  Number of visits (V)                 2056
  Mean response time (rt)              86.5611 s
  Mean queue response time (qt)        68.5463 s
Calculated values
  Mean service time (S = rt − qt)      18.0148 s
  Mean arrival rate (λ = V/T)          0.04112 transactions/s
  Throughput (X = λ)                   0.04112 transactions/s
  Utilisation (U = XS)                 0.7408

As the arrival rate at the system bottleneck increases (note: there can be more than one bottleneck simultaneously), the response time is bounded asymptotically, as shown in Fig. 15. The points plotted were obtained from the example simulation analysis. Through the above analysis and the use of Eq. (1) the maximum sustainable arrival rate for the system can be derived.
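The fragment below reproduces the Table 2 arithmetic and the capacity and bottleneck calculations of Eqs. (1)–(4). It assumes a single service centre with a visit ratio of one, which is sufficient for this illustration; with further centres added to the dictionary, the same loop would identify the largest demand V_k S_k.

```python
# Statistics for the global avionic bus, taken from Table 2.
centres = {
    "avionic_bus": {"visits": 2056, "resp_time": 86.5611, "queue_time": 68.5463},
}
T = 2056 / 0.04112            # simulation time implied by the quoted rate V/T
lam = centres["avionic_bus"]["visits"] / T       # arrival rate (transactions/s)

demands = {}
for name, c in centres.items():
    S = c["resp_time"] - c["queue_time"]          # mean service time
    X = lam                                       # Eq. (2): throughput
    U = X * S                                     # Eq. (3): utilisation, ~0.7408
    demands[name] = S                             # V_k = 1 assumed for this sketch
    print(name, round(S, 4), round(U, 4))

bottleneck = max(demands, key=demands.get)        # Eq. (4): largest V_k * S_k
lam_max = 1.0 / demands[bottleneck]               # Eq. (1): ~0.0555 transactions/s
print(bottleneck, round(lam_max, 4))
```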

Fig. 15. Effect of arrival data on response time.

2.5. Model validation

System performance measures from an avionic demonstration rig can be compared against the corresponding simulation results. The aim would be to determine whether the avionic modelling suite could be formulated to be a reasonably accurate representation of a demonstrable avionic concept.

One possible approach is to apply a statistical test to determine whether the underlying distributions of the two data sets can safely be regarded as being the same. Such tests assume the data are independent and identically distributed (IID); for most real-world systems and simulations this assumption is not valid.

An alternative approach is to use a confidence interval. A hypothesis test is simply either rejected or accepted; a confidence interval not only provides this information but also gives the possible range of values for the parameter. To determine the range of a measured parameter, probabilistic bounds can be derived. For a given parameter there is a probability of 1 − α that the parameter lies in the interval (c1, c2):

Probability{c1 ≤ μ ≤ c2} = 1 − α

The interval (c1, c2) is the confidence interval, α is the significance level, 100(1 − α) is the confidence level, and 1 − α is the confidence coefficient. Confidence levels are normally near 100%, while significance levels are typically near zero.

A validation exercise would ideally conduct n experiments on an avionic demonstrator and on the simulation model. There would then be a one-to-one correspondence between the ith test on the demonstrator and the ith test on the simulation, so that the observations are paired. The two data sets would then be treated as one sample of n pairs. For each pair, the difference in the performance measure would be derived and a confidence interval constructed for the difference. If the confidence interval includes zero, the avionic demonstrator and the simulation model would not be significantly different; hence the model would have been validated. If there is a difference then the data would need to be examined and the model refined as necessary. The confidence interval test for the data set would then need to be repeated.

Fig. 16. Software–hardware simulation link.



Fig. 17. Hardware–software simulation link.

From the central limit theorem, a 100(1 − α)% confidence interval for a population mean is given by:

(x̄ − Z_{1−α/2} s/√n,  x̄ + Z_{1−α/2} s/√n)

where x̄ is the sample mean, s the sample standard deviation, n the sample size and Z_{1−α/2} the (1 − α/2) quantile of a unit normal variate. These values can be looked up from a standard table.

The above confidence interval can only be applied to data sets with large sample sizes, n > 30. For smaller sample sizes the confidence interval is given by:

(x̄ − t_{1−α/2; n−1} s/√n,  x̄ + t_{1−α/2; n−1} s/√n)

where t_{1−α/2; n−1} is the (1 − α/2) quantile of a t-variate with n − 1 degrees of freedom. These values can also be looked up from a table. In a simpler form:

Confidence interval for the mean = x̄ ± t √(s²/n)

where s² is the sample variance.

The sample means can be derived from the full range of performance data measured from the demonstrator and the avionic modelling suite: response times for equivalent workloads, for example.
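As an illustration of the paired comparison described above, the sketch below computes a t-based confidence interval for the mean rig-minus-model difference. The response-time figures are invented, and the t quantile is assumed to be read from a standard table, as the text suggests.

```python
import math

def paired_confidence_interval(rig, model, t_quantile):
    """Confidence interval for the mean paired difference between rig and model runs."""
    diffs = [r - m for r, m in zip(rig, model)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance s^2
    half_width = t_quantile * math.sqrt(var / n)
    return mean - half_width, mean + half_width

# Illustrative response times (ms) for ten equivalent workloads.
rig   = [86.1, 84.9, 87.3, 85.8, 86.6, 85.1, 86.9, 84.7, 85.9, 86.4]
model = [85.7, 85.2, 86.8, 86.1, 86.2, 85.5, 86.4, 85.0, 86.1, 86.0]
lo, hi = paired_confidence_interval(rig, model, t_quantile=2.262)  # t_{0.975; 9}
print(lo, hi)   # the interval contains zero: no significant difference detected
```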
3. System model

A system model would aim to provide an analysis of the mapping of avionic applications and supporting system software across the available hardware resources. In the case of the avionic system model framework, this would provide a link between software behavioural models and their associated hardware performance models.

Fig. 16 illustrates a simple object lifecycle that could be modelling a software thread. In State_2 an event is passed from SES/objectbench to an SES/workbench performance model along with two data items. In State_3 an event is returned from SES/workbench. In this case a parameter is passed between the models during the event transition and defines the time delay for the state transition. A link can therefore be made between the behaviour of software and the associated performance implications. Instances of software objects can be mapped across supporting hardware for analysis purposes to form a system model.

The performance analysis domain is best suited to modelling contention for resources. Fig. 17 shows the companion SES/workbench model that provides an event to the software model defined in Fig. 16. The "Write_To_Memory_Event" transfers resources from the Memory resource pool. The next icon, "Event_Release", transfers resources back to the Memory resource pool and returns an event to the software model.

This combined model could represent a software behavioural model accessing physical memory in the hardware domain. Fig. 18 represents, for example, a physical processor. A link can be established between the software and the physical processor it is executing on.
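The coupling shown in Figs. 16 and 17 can be sketched in ordinary code: a behavioural object raises an event against a performance-model stand-in, which returns the service delay used to advance the behavioural model's clock. The class and method names below are illustrative assumptions and do not correspond to the SES tool interfaces.

```python
import random

class MemoryAccessServer:
    """Stand-in for the performance (SES/workbench) side of Fig. 17."""
    def __init__(self, mean_access_time=0.002, seed=7):
        self.rng = random.Random(seed)
        self.mean = mean_access_time

    def write_to_memory_event(self, size_words):
        # Delay grows with the amount written; exponential jitter for contention.
        return size_words * self.mean + self.rng.expovariate(1.0 / self.mean)

class SoftwareThread:
    """Stand-in for the behavioural (SES/objectbench) side of Fig. 16."""
    def __init__(self, hardware):
        self.hardware = hardware
        self.clock = 0.0
        self.state = "State_1"

    def run(self, size_words):
        self.state = "State_2"                      # event sent to the hardware model
        delay = self.hardware.write_to_memory_event(size_words)
        self.clock += delay                         # returned event carries the delay
        self.state = "State_3"
        return self.clock

thread = SoftwareThread(MemoryAccessServer())
print(thread.run(size_words=64))
```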

Fig. 18. Physical CPU.



Fig. 19. Software architecture model.

An understanding can then be developed of the interaction between the hardware and software components of a system. How software is mapped across an architecture could be assessed in terms of behaviour and the performance implications. In general a system model should aim to provide an optimisation of the architecture concepts under assessment. The developed modelling suite (ARINC 651 and 653) can be used as individual models or linked together, as discussed, to form a system model.

Fig. 20. Avionic architecture performance model.



4. Future work

Future modelling activities planned include the extension of the software architecture model to include the hardware support layer that exists beneath APEX, and the development of a distributed executive above the application interface layer. A distributed executive is required to control the distribution of applications across the hardware resources of the entire avionic architecture. Fig. 19 illustrates the proposed extension to the existing model.

Also shown in Fig. 19 is a configuration management subsystem that would assist in the fault management of a system. Fig. 20 illustrates this same subsystem modelled as part of the performance-modelling domain.

In addition to the further development of avionic modelling components, it is planned to validate the modelling suite against an avionic systems demonstrator initially developed under the Control Technology Programme [10]. A study of the ARINC standards was included in this project. The avionic system model will be formulated to represent the demonstrator. Similar experiments will then be conducted on the avionic rig and the model. Behaviour and performance results will be gathered and their data sets statistically compared. If they are found to be statistically equal, then the modelling suite will be considered validated against the rig; alternatively, model refinements will be made as necessary.

5. Conclusions

A framework for an avionic system model has been defined. ARINC 653 and 651 software behavioural and performance models, respectively, have been described. It has been shown how architecture bottlenecks can be identified and the system's capacity derived. Validation methods for the modelling suite have also been considered. How the various modelling components can be combined to form a system model has also been described.

Acknowledgements

This research was supported by the DTI as part of the Long term Research into Civil Avionic Systems (LORCAS) programme and was funded by CARAD III.

References

[1] ARINC, Integrated Modular Avionics Packaging and Interfaces, Aeronautical Radio, Inc. (ARINC) Specification 650, 1994.
[2] ARINC, Design Guidance for Integrated Modular Avionics, Aeronautical Radio, Inc. (ARINC) Report 651, 1991.
[3] ARINC, Avionics Application Software Standard Interface, Aeronautical Radio, Inc. (ARINC) Specification 653, 1997.
[4] C.R. Baxter, The Airline Wish-List, International Avionics Conference, Integrated Avionics - How Far, How Fast?, Heathrow, 1993, pp. 1.1.1-1.1.5.
[5] S. Shlaer, J. Mellor, Object Lifecycles: Modelling the World in States, Yourdon Press Computing Series, New Jersey, 1992.
[6] SES/objectbench: Technical Reference Manual, Release 2.3, SES, Inc., Austin, TX, 1996.
[7] SES/workbench: Sim-Simulation Language Reference Manual, Release 3.1, SES, Inc., Austin, TX, 1996.
[8] Altia Design Users Manual, Version 2.0, Altia, Inc., Colorado Springs, 1996.
[9] J.D.C. Little, A Proof for the Queuing Formula: L = λW, Operations Research 9 (3) (1961) 383-387.
[10] J.W.L. Penny, A Review of the Control Technology Programme, International Avionics Conference, Advances in Systems Engineering for Civil and Military Avionics, Heathrow, 1991, pp. 8.2.1-8.2.12.

Roger Bluff has been involved in the research and development of Modular Avionics since the late 1980s. He received an MSc in Microelectronics System Design in 1989 from Brunel University. He was part of the team that developed the Modular Avionic Research Facility (MARF) with the aim of providing a flexible means of evaluating future avionic architectures and standards. Since 1995 he has been involved in the development of an avionic modelling framework to provide a suite of simulation tools that can assess avionic architectures and standards. His interests include performance analysis, behavioural modelling and visualisation, and how these techniques can be applied to the assessment of new avionic architecture concepts.
