
Introduction

By definition an operating system is supposed to control the computer upon which it is running. It parcels out the resources of the system to programs that ask nicely and rejects or delays those that don't make the cut. The movie TRON characterized the Master Control Program as a megalomaniacal dictator that arbitrarily interfered with the peaceful lives of programs running within its purview. Laying such anthropomorphism aside, it is easy to see how a typical desktop computer user could identify with characterizations like this. Most operating systems exercise a large degree of control over the programs and resources being used by a computer, and sometimes that control appears to be wielded arbitrarily or even malevolently. But the relationship between the operating system and the application software in a typical embedded system is much simpler in some ways and much more complex in others. There are specific requirements that grow out of the dedicated environment that simply aren't present in the multipurpose world of desktop or larger computers. This paper explores that relationship, what makes some operating systems much more appropriate for embedded systems, and where it might be appropriate to press desktop operating systems into embedded service.

Really Real-Time
The most apparent dividing line usually revolves around the phrase "real-time." Most of the operating systems specifically targeted at the embedded world are characterized as Real-Time Operating Systems (RTOSs), and much is made of that distinction. But what exactly does that mean? There are also desktop operating systems with priority levels that are characterized as real-time. Are these the same thing? What does real-time mean anyway? Do other operating systems run in imaginary time? Are RTOSs just very fast operating systems?

Deterministic versus fast


The key to understanding the distinction of real-time is the concept of determinism. A deterministic system always takes the same amount of time to execute a particular function of that system. This can be extremely desirable for many embedded systems, since they depend on being able to meet deadlines. The idea of determinism is usually associated with being fast, but there is no real correlation. Theoretically, a system could be made deterministic by using a hardware timer set to the worst case for a particular function call. The system would then be very deterministic but would probably be less than useful, because the deterministic time interval would simply be too slow. The key therefore is to be both predictable and fast enough to be useful. This definition can cover a multitude of systems. For example, a real-time accounting system might be defined as one that always generated paychecks by payday. A real-time weather simulation might be defined as one that predicted the weather before the weather actually happened. In that case, real-time is defined as happening faster than real time, an interesting definition indeed.

The overriding concept in determinism is therefore predictability. What this means to an RTOS is that when a function is invoked it will complete in a consistent timeframe. If I am allocating memory, for example, the RTOS should not go off and do garbage collection on fragmented memory in an attempt to answer the request. By definition, a late answer is a wrong answer, so a serious RTOS will instead return an error condition if it cannot respond to the request within the allocated time window. This has some serious implications for the embedded applications programmer. They must make sure that they somehow avoid situations where such errors are likely to happen, and if the errors do happen there has to be some way of handling the problem. For example, if the system being designed involves moving chunks of data, it might be necessary to use a scheme of fixed-size data buffers instead of variable-sized memory allocations. There must also be code to handle situations where buffers are not available. It is rarely acceptable for an embedded system to give the equivalent of a Windows General Protection Fault and roll over dead.
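
To make that idea concrete, here is a minimal C sketch of the kind of fixed-size buffer pool just described. The names buf_pool_init, buf_alloc, and buf_free are invented for illustration and are not part of any particular RTOS API. The point is that allocation is a constant-time list operation that either succeeds immediately or returns NULL, which is exactly the error condition the application must be written to handle. A real system would also protect the free list with a lock or by briefly disabling interrupts.

    /* Sketch of a fixed-size buffer pool.  Allocation is a constant-time
     * list operation that never blocks and never searches, so its timing
     * is deterministic.  A NULL return is the "no buffer available" case
     * the application must be prepared to handle. */
    #include <stddef.h>

    #define BUF_SIZE   256
    #define BUF_COUNT  16

    typedef struct buf {
        struct buf    *next;
        unsigned char  data[BUF_SIZE];
    } buf_t;

    static buf_t  pool[BUF_COUNT];
    static buf_t *free_list;

    void buf_pool_init(void)
    {
        for (int i = 0; i < BUF_COUNT - 1; i++)
            pool[i].next = &pool[i + 1];
        pool[BUF_COUNT - 1].next = NULL;
        free_list = &pool[0];
    }

    buf_t *buf_alloc(void)
    {
        buf_t *b = free_list;      /* constant time: pop the list head */
        if (b != NULL)
            free_list = b->next;
        return b;                  /* NULL means the pool is exhausted */
    }

    void buf_free(buf_t *b)
    {
        b->next = free_list;       /* constant time: push onto the list */
        free_list = b;
    }

When buf_alloc() returns NULL the application might drop the oldest sample, retry later, or raise an alarm, but it should never simply fall over.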

Revisiting fast
The RTOS market is very competitive and very fragmented. Each vendor is therefore looking for any advantage it can get. Given this, it is to be expected that the major RTOSs are both deterministic and very fast. The critical sections of code in each RTOS have been optimized and reoptimized, to the point where there is very little difference in the speed with which different RTOSs perform these critical functions. There is a price to be paid for this speed, however. It is basically the difference between the family station wagon and an Indy 500 racecar. The former is very good at taking care of Grandma as she drives to the market on Sunday, but the latter will do 200 MPH. Each is adapted to the requirements of its environment. In other words, the programmer should not assume that a typical RTOS will have all of the niceties of a desktop operating system. The way the racecar obtains its speed is by stripping out everything that is not absolutely essential to achieving maximum speed. This has the added advantage of reducing the memory footprint of the RTOS. Most desktop operating systems these days require multiple megabytes of RAM to operate, while a minimally configured RTOS kernel can often run in just a few tens of kilobytes. Granted, RAM is much cheaper these days than it has been in the past, but many embedded systems are still very sensitive to price.

Codependence with the application


But a fast, deterministic RTOS is only half of the equation for having a fast, deterministic system. The application software is much more important in embedded applications than in desktop ones. A careless programmer can misuse the RTOS and end up with a sloppy, bug-ridden mess in spite of having a good RTOS. In fact, it is even easier for an embedded programmer to introduce strange and exotic misbehavior into an embedded system. Most desktop operating systems have some form of hardware memory management in place. The aforementioned Windows GPFs simply mean that some application was treading on no man's land, outside the memory space to which it was allotted. As irritating as these are, they are infinitely preferable to the obscure behavior that can be caused by not detecting such events.

Most RTOS implementations do not implement hardware memory management. This is a symptom of the racecar design alluded to earlier. Most systems run measurably faster using physical, untranslated addresses than they do using virtual addresses. In addition, communication is faster if tasks can simply share variables in a common memory space. The downside is that such embedded applications are often running without the benefit of a safety net. The application code is literally running in the same memory space as the RTOS and has the same degree of system privilege. Unfortunately, in many cases it doesn't have the same level of maturity. This is not to say that any bug in the system is automatically the fault of the application code. I have seen released versions of several RTOSs that have blatant errors. In most cases, however, it has been some time since that has been evident; most commercial RTOS vendors have survived by delivering relatively high-quality code. If a problem is evident in your system, don't completely discount the RTOS as a source of the problem. But don't overlook the application code either.
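
As an illustration only (the variable names and the bug are invented), consider how a simple bounds error behaves when there is no hardware memory protection between tasks. The stray write does not trap; it silently lands in whatever the linker happened to place next, and the symptom shows up later in a completely unrelated task.

    /* Illustration only.  With no memory protection between tasks, an
     * out-of-bounds write does not trap.  It lands in whatever the linker
     * placed after log_buffer, which might well be data owned by a
     * completely different task. */
    #include <stdint.h>

    static uint32_t motor_speed;       /* conceptually owned by the control task */
    static uint8_t  log_buffer[32];    /* conceptually owned by the logging task */

    void log_append(uint8_t value, int index)
    {
        /* Bug: no bounds check.  With index == 32 the write silently
         * corrupts a neighboring variable instead of causing a fault. */
        log_buffer[index] = value;
    }

With hardware memory management in place, the same bug would have produced an immediate, attributable fault instead of a mystery.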

Multitasking/Multithreading
The most commonly used feature of any given RTOS is easily multitasking. This powerful tool allows a programmer to interleave the operation of relatively unrelated portions of the application code at a very fine granularity. This creates the illusion that the system is doing multiple things at once, a very useful capability for a programmer with a complex set of requirements to implement. But there are two primary models for multitasking: the multithreading model is the simplest and fastest (and therefore the most popular in many applications), while the multiprocessing model is the most robust. Each is examined in more detail below.

Multiprocessing model
This is the model typically used in desktop operating systems. Each task has a distinct code and data space, and these boundaries are typically enforced by hardware memory management. The communication methods between processes typically involve the operating system to a much higher degree, as do most other forms of I/O. Essentially, this is desktop computing done on embedded systems. The operating system takes a much more commanding role over the application code, which means that it now has the capability to shut down and restart misbehaving processes. The downside, of course, is increased overhead. At a minimum there is a runtime cost for accessing memory through the CPU's virtual memory translation. There is also an extra step necessary for each context switch, where the new memory context is established. The time to perform these functions is relatively small on modern CPUs, but if your application is pushing the CPU to the limit, even small time deltas can be critical.
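
The following sketch assumes a POSIX-like environment rather than any specific RTOS, and it shows the flavor of the multiprocessing model: because each process has its own address space, data has to be handed through the operating system (here a pipe), and the kernel copies it between the two spaces.

    /* Sketch of the multiprocessing model, assuming a POSIX-like
     * environment.  Each process has its own address space, so data is
     * handed through the operating system (a pipe here) and copied by the
     * kernel; a raw pointer from one process would be meaningless in the
     * other. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int  fd[2];
        char msg[32];

        if (pipe(fd) < 0)
            return 1;

        if (fork() == 0) {                 /* child process: producer */
            const char *text = "sensor reading";
            write(fd[1], text, strlen(text) + 1);
            _exit(0);
        }

        /* parent process: consumer */
        read(fd[0], msg, sizeof(msg));
        printf("received: %s\n", msg);
        return 0;
    }

That kernel-mediated copy, along with the memory-context change on every task switch, is the overhead referred to above.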

Multithreading model
The alternative is a simpler multithreading model. This model has been co-opted into many desktop environments to allow multiple threads of operation within a single process. In the embedded world it has acted as the primary means of multitasking for RTOSs like pSOS and VxWorks, which follow the racecar motif described above. In this model each task shares the code and primary data space with all other tasks in the system. In other words, the only data area that distinguishes one task from another is the stack that is in use at the time. Any static or global variable is shared and ultimately accessible from any other task. Therefore, pointers to data items can be passed very freely within this type of environment, without the address translation or copying that crossing address-space boundaries requires in the multiprocessing model.
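
For contrast, here is an equally small sketch of the multithreading model. It uses POSIX threads purely because they are widely available; pSOS and VxWorks have their own task-creation calls, but the essential point is the same: both tasks see the same globals, and a raw pointer can be handed directly from one task to the other.

    /* Sketch of the multithreading model using POSIX threads.  Both tasks
     * share one address space, so globals are visible everywhere and a
     * raw pointer passes from one task to another with no translation. */
    #include <pthread.h>
    #include <stdio.h>

    static int             shared_counter;   /* visible to every task */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        int *samples = arg;                  /* raw pointer, no translation */
        for (int i = 0; i < 5; i++) {
            pthread_mutex_lock(&lock);
            shared_counter += samples[i];
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        int       samples[5] = { 1, 2, 3, 4, 5 };
        pthread_t tid;

        pthread_create(&tid, NULL, worker, samples);
        pthread_join(tid, NULL);
        printf("total = %d\n", shared_counter);
        return 0;
    }

The price of that convenience is that shared data must be guarded explicitly (the mutex here), and nothing stops a buggy task from scribbling on it.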

Which is better?
This is the inevitable question, and the inevitable answer is simply "It depends." Multithreading has a relatively long history in the embedded world and has been the correct answer for many applications. The dominating reason for this is the relative simplicity that it brings to intertask communication and the lower overhead that it entails. Add to this the reduced system cost in terms of simpler CPUs and less memory, and it seems like the hands-down winner. But there are the issues of more complex applications and higher reliability requirements. These have raised the bar in some applications to the point where they can no longer afford the simplicity of the multithreading model. It is a very different job to debug an application written by two or three programmers with a dozen or so tasks than it is to debug one written by a dozen programmers with a hundred or more tasks. If the latter project is done without the protection of memory access control, there had better be some strict coding standards in place. As usual, there is no easy answer for the vast majority of applications, but the ones on either end of the spectrum are relatively simple. A relatively simple system with tight cost constraints will probably go with a multithreading model, assuming there is a need for multitasking at all. A complex, multimegabyte application with very high reliability requirements will probably go with a multiprocessing model. In between there is a lot of latitude for selection by systems engineers.

Peripheral Support
Another way that an RTOS brings value to a system is in the organization of the code that interfaces to peripheral devices. This partitioning is very useful for extending the life of a system, since it allows the system to adapt more easily to new devices. It also simplifies the job of the applications programmers, since they can follow familiar methods to access devices.

Device driver interface models


Every operating system provides some way of adding code to control devices. It also provides a standardized API for getting to that code from an application. The application side of that equation is usually very standardized and straightforward. It usually follows the ANSI C device I/O conventions, albeit sometimes with the addition of some bells and whistles to make it more efficient in an embedded environment. The device driver side of the equation is much less standardized, though. There is usually a function table that has to be filled in with pointers to particular functions, and the specifics of each of those functions are an exercise left up to the author of the RTOS. There are some advances being made on this front, though. Some RTOS vendors are moving towards a Streams model for device drivers. This provides a defined set of system calls and a functionality model that is well suited to supporting device driver code. An added bonus is that much of the code becomes portable across a variety of target systems. The Streams subsystem comes out of the UNIX world, which has a long history of providing fodder for embedded RTOS systems. Many RTOS vendors have based much of their current I/O interface on the stdio-style I/O model that was popular in earlier versions of UNIX.
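
What such a function table looks like varies from one RTOS to another, so the sketch below is only representative; the drv_ops_t structure and the uart_* names are invented. It shows the general shape: the driver author fills in a table of entry points, and the kernel calls through that table when the application opens, reads, or writes the device.

    /* Representative only: the field names, signatures, and uart_* stubs
     * are invented, since every RTOS defines its own driver interface. */
    #include <stddef.h>

    typedef struct drv_ops {
        int (*open)(int unit);
        int (*close)(int unit);
        int (*read)(int unit, void *buf, size_t len);
        int (*write)(int unit, const void *buf, size_t len);
        int (*ioctl)(int unit, int cmd, void *arg);
    } drv_ops_t;

    /* Device-specific implementations (bodies reduced to stubs). */
    static int uart_open(int unit)                               { return 0; }
    static int uart_close(int unit)                              { return 0; }
    static int uart_read(int unit, void *buf, size_t len)        { return 0; }
    static int uart_write(int unit, const void *buf, size_t len) { return (int)len; }
    static int uart_ioctl(int unit, int cmd, void *arg)          { return 0; }

    /* The table that would be handed to the RTOS's driver-registration call. */
    static const drv_ops_t uart_ops = {
        .open  = uart_open,
        .close = uart_close,
        .read  = uart_read,
        .write = uart_write,
        .ioctl = uart_ioctl,
    };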

Subsystem interface models


This is an area that can be a little fuzzy, but it can also be the difference between a viable product and one that is simply too expensive to produce. The types of subsystems I am referring to here are things like specific communications packages or graphics libraries. For example, it would be fairly daunting for the typical application developer to write their own TCP/IP protocol stack. It is a real advantage to simply buy one that goes with the particular RTOS that has been selected. The same can be true of a graphics subsystem. Unfortunately, graphics hasn't had as good a standard model as TCP/IP has been for networking. The closest equivalent for a long time was the X11 windowing system, which had a lot of capability but was also too resource-intensive for many smaller systems. As an alternative, some RTOS vendors (most notably Microsoft itself) are moving towards implementations of the Windows GUI. The idea of providing major subsystems like this is to absorb the reusable parts of a system's design into a package that can be used in a large number of designs. The tradeoff for the system implementer is engineering resources versus the chance to add value and uniqueness to their product. In the case of a network stack, the requirement for interoperability may easily override the need to make it more efficient, so in that case it would probably be smarter to buy rather than build. The opposite might be true for a graphics subsystem that had to support a unique environment, however.
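
Many commercial TCP/IP stacks for RTOSs expose something close to the BSD sockets API, so application code tends to look like the sketch below (which assumes such an API and invents the send_status helper for illustration). The value of buying the stack is that code like this, and the interoperability behind it, comes essentially for free.

    /* Sketch of application code on top of a purchased TCP/IP stack,
     * assuming it exposes the common BSD sockets API; send_status is an
     * invented helper. */
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int send_status(const char *server_ip, unsigned short port, const char *msg)
    {
        struct sockaddr_in addr;
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0)
            return -1;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(port);
        inet_pton(AF_INET, server_ip, &addr.sin_addr);

        if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            close(sock);
            return -1;
        }

        write(sock, msg, strlen(msg));
        close(sock);
        return 0;
    }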

Third-party support and other politics


The key to being able to leverage these advantages is to have them available on the RTOS that you want to use. This leads to a situation where the vendors of these products have to decide, before anything else, which RTOSs their customers are most likely to want them ported to. In other words, the availability of a particular third-party layered product may make an RTOS a viable contender within a particular market segment. If that is true, then it is clearly in the interest of the RTOS vendor to have the package available, and often the vendor will support the development involved. On the other hand, if the RTOS is already a leader in that market segment, or already has similar capability from another package, there is often much less interest on the vendor's part in making it happen. This can make things much more difficult for the third-party vendor when it comes to getting information on internal hooks into the RTOS, for example. What it boils down to is whose interests are served by having the third-party software available. Ultimately there must be a customer or two involved, or the whole exercise is rather futile. Everything else is simply salesmanship and politics between the RTOS vendors and the third-party developers.

Conclusion
The RTOS world is a complex one. The needs of customers vary widely, with some needing only a minimal kernel and some online documentation, and others needing a full-blown development system and training classes followed by periodic consultation. Any company that attempts to cover the whole space runs the risk of stretching itself very thin. But the essentials of an RTOS itself, and of how to work with it, are relatively straightforward. The key is to understand what an RTOS is and what it is not. There are no magic instruction opcodes available to the RTOS vendor that cannot be used by applications programmers in most embedded systems, so having an RTOS separate from the application is a choice, not a necessity as it is in the desktop world. I believe it is usually a wise choice, in that it divides and therefore simplifies the task of creating a complex system, but the ultimate choice is up to the individual system designer. If you choose to separate your application into system software and application software, the next choice is whether to build or buy the system part. The embedded world has a long history of NIH (Not Invented Here) syndrome, but I think it has to get over that in order to meet the demand for new systems and quick revisions to old ones. This does not mean that you must automatically go to the nearest RTOS vendor and write them a check. There are public domain options available, or there is always the choice of writing your own RTOS. If you select that path, however, please make sure you completely understand the support and other costs that are involved. It may turn out that the commercial RTOS is cheaper in the long run!
