A kernel connects the application software to the hardware of a computer. In computing, the kernel is the central component of most computer operating systems (OSs). Its responsibilities include managing the system's resources and the communication between hardware and software components. As a basic component of an operating system, a kernel provides the lowest-level abstraction layer for the resources (especially memory, processors and I/O devices) that applications must control to perform their function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls. These tasks are done differently by different kernels, depending on their design and implementation. While monolithic kernels try to achieve these goals by executing all the code in the same address space to increase the performance of the system, microkernels run most of their services in user space, aiming to improve the maintainability and modularity of the codebase.[1] A range of possibilities exists between these two extremes.
Overview
A typical vision of a computer architecture as a series of abstraction layers: hardware, firmware, assembler, kernel, operating system and applications.

Most operating systems rely on the kernel concept. The existence of a kernel is a natural consequence of designing a computer system as a series of abstraction layers, each relying on the functions of the layers beneath it. The kernel, from this viewpoint, is simply the name given to the lowest level of abstraction that is implemented in software. In order to avoid having a kernel, one would have to design all the software on the system not to use abstraction layers; this would increase the complexity of the design to such a point that only the simplest systems could feasibly be implemented. While it is today mostly called the kernel, the same part of the operating system has also in the past been known as the nucleus or core. (Note, however, that the term core has also been used to refer to the primary memory of a computer system, typically because some early computers used a form of memory called core memory.) In most cases, the boot loader starts executing the kernel in supervisor mode; the kernel then initializes itself and starts the first process. After this, the kernel does not typically execute directly, only in response to external events (e.g. via system calls used by applications to request services from the kernel, or via interrupts used by the hardware to notify the kernel of events). Additionally, the kernel typically provides a loop that is executed whenever no processes are available to run; this is often called the idle process.
Kernel development is considered one of the most complex and difficult tasks in programming. Its central position in an operating system implies the necessity for good performance, which makes the kernel a critical piece of software and makes its correct design and implementation difficult. For various reasons, a kernel might not even be able to use the abstraction mechanisms it provides to other software. Such reasons include memory management concerns (for example, a user-mode function might rely on memory being subject to demand paging, but as the kernel itself provides that facility it cannot use it, because then it might not remain in memory to provide that facility) and lack of reentrancy, thus making its development even more difficult for software engineers. A kernel will usually provide features for low-level scheduling of processes (dispatching), inter-process communication, process synchronization, context switching, manipulation of process control blocks, interrupt handling, process creation and destruction, and process suspension and resumption (see process states).
Kernels also usually provide methods for synchronization and communication between processes (called inter-process communication or IPC). A kernel may implement these features itself, or rely on some of the processes it runs to provide the facilities to other processes, although in this case it must provide some means of IPC to allow processes to access the facilities provided by each other. Finally, a kernel must provide running programs with a method to make requests to access these facilities.
Process management
The main task of a kernel is to allow the execution of applications and support them with features such as hardware abstractions. To run an application, a kernel typically sets up an address space for the application, loads the file containing the application's code into memory (perhaps via demand paging), sets up a stack for the program and branches to a given location inside the program, thus starting its execution.[10] Multi-tasking kernels are able to give the user the illusion that the number of processes being run simultaneously on the computer is higher than the maximum number of processes the computer is physically able to run simultaneously. Typically, the number of processes a system may run simultaneously is equal to the number of CPUs installed (however this may not be the case if the processors support simultaneous multithreading). In a pre-emptive multitasking system, the kernel gives every program a slice of time and switches from process to process so quickly that it appears to the user as if these processes were being executed simultaneously. The kernel uses scheduling algorithms to determine which process runs next and how much time it will be given. The algorithm chosen may allow some processes to have higher priority than others. The kernel generally also provides these processes a way to communicate; this is known as inter-process communication (IPC) and the main approaches are shared memory, message passing and remote procedure calls (see concurrent computing).
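The time-slicing idea described above can be sketched as a toy round-robin scheduler. This simulates only the bookkeeping (the process names and CPU demands are invented for illustration); a real kernel dispatches on hardware timer interrupts and manipulates process control blocks:

```python
from collections import deque

def round_robin(processes, quantum):
    """Toy round-robin scheduler. Each entry is (name, remaining_time).

    Returns the order in which processes finish: each process runs for at
    most one quantum, then is preempted and re-queued behind the others.
    """
    ready = deque(processes)
    finished = []
    while ready:
        name, remaining = ready.popleft()   # dispatch the next ready process
        remaining -= quantum                # let it run for one time slice
        if remaining > 0:
            ready.append((name, remaining)) # preempt: back of the queue
        else:
            finished.append(name)           # process has completed
    return finished

# Three hypothetical processes with different CPU demands, 2-unit slices.
print(round_robin([("editor", 3), ("compiler", 6), ("shell", 1)], 2))
# → ['shell', 'editor', 'compiler']
```

Note how the short-running "shell" finishes first even though it was queued last: every process gets regular slices, which is exactly the interactivity benefit of pre-emptive scheduling.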
Other systems (particularly on smaller, less powerful computers) may provide co-operative multitasking, where each process is allowed to run uninterrupted until it makes a special request that tells the kernel it may switch to another process. Such requests are known as "yielding", and typically occur in response to requests for inter-process communication, or while waiting for an event to occur. Older versions of Windows and Mac OS both used co-operative multitasking but switched to pre-emptive schemes as the power of the computers they targeted grew. The operating system might also support multiprocessing (SMP or Non-Uniform Memory Access); in that case, different programs and threads may run on different processors. A kernel for such a system must be designed to be re-entrant, meaning that it may safely run two different parts of its code simultaneously. This typically means providing synchronization mechanisms (such as spinlocks) to ensure that no two processors attempt to modify the same data at the same time.
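Co-operative multitasking can be sketched with Python generators, where `yield` plays the role of the special "yielding" request: a task runs uninterrupted until it voluntarily hands control back. The task names and the tiny scheduler below are invented for illustration:

```python
def task(name, steps):
    """A task that does `steps` units of work, yielding between each."""
    for i in range(steps):
        yield f"{name}:{i}"        # yield = voluntarily hand back the CPU

def cooperative_run(tasks):
    """Toy cooperative scheduler: a task runs only until it yields.

    A task that never yielded would monopolize the CPU forever -- the
    classic weakness of co-operative multitasking.
    """
    tasks = list(tasks)
    trace = []
    while tasks:
        t = tasks.pop(0)
        try:
            trace.append(next(t))  # resume the task until its next yield
            tasks.append(t)        # re-queue it behind the others
        except StopIteration:
            pass                   # task finished; drop it
    return trace

print(cooperative_run([task("a", 2), task("b", 1)]))
# → ['a:0', 'b:0', 'a:1']
```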
Memory management
The kernel has full access to the system's memory and must allow processes to access this memory safely as they require it. Often the first step in doing this is virtual addressing, usually achieved by paging and/or segmentation. Virtual addressing allows the kernel to make a given physical address appear to be another address, the virtual address. Virtual address spaces may be different for different processes; the memory that one process accesses at a particular (virtual) address may be different memory from what another process accesses at the same address. This allows every program to behave as if it is the only one (apart from the kernel) running, and thus prevents applications from crashing each other.[10] On many systems, a program's virtual address may refer to data which is not currently in memory. The layer of indirection provided by virtual addressing allows the operating system to use other data stores, like a hard drive, to hold what would otherwise have to remain in main memory (RAM). As a result, operating systems can allow programs to use more memory than the system has physically available. When a program needs data which is not currently in RAM, the CPU signals to the kernel that this has happened, and the kernel responds by writing the contents of an inactive memory block to disk (if necessary) and replacing it with the data requested by the program. The program can then be resumed from the point where it was stopped. This scheme is generally known as demand paging. Virtual addressing also allows creation of virtual partitions of memory in two disjoint areas, one reserved for the kernel (kernel space) and the other for the applications (user space). The applications are not permitted by the processor to address kernel memory, thus preventing an application from damaging the running kernel.
This fundamental partition of memory space has contributed much to the current designs of actual general-purpose kernels and is almost universal in such systems, although some research kernels (e.g. Singularity) take other approaches.
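The demand-paging mechanism described above can be sketched as a toy pager. The class name and the eviction policy are invented for illustration; a real kernel relies on hardware page tables and on more careful replacement algorithms (typically LRU approximations):

```python
class ToyPager:
    """Toy demand pager: a virtual page is loaded into one of a fixed pool
    of physical frames only when first touched (a page fault); if no frame
    is free, an arbitrary resident page is evicted back to 'disk'."""

    def __init__(self, num_frames):
        self.free = list(range(num_frames))  # physical frames not in use
        self.table = {}                      # virtual page -> physical frame
        self.faults = 0

    def access(self, vpage):
        if vpage not in self.table:          # page fault: page not resident
            self.faults += 1
            if not self.free:                # no free frame: pick a victim
                victim = next(iter(self.table))
                self.free.append(self.table.pop(victim))
            self.table[vpage] = self.free.pop()
        return self.table[vpage]             # translated physical frame

pager = ToyPager(num_frames=2)
for page in [0, 1, 0, 2, 1]:                 # a hypothetical access pattern
    pager.access(page)
print(pager.faults)
# → 3 (pages 0, 1 and 2 each fault once; the repeat accesses hit)
```

The key property the sketch shows is the indirection: the program only ever names virtual pages, so the kernel is free to place, evict and reload them without the program noticing anything but a delay.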
Device management
To perform useful functions, processes need access to the peripherals connected to the computer, which are controlled by the kernel through device drivers. For example, to show the user something on the screen, an application would make a request to the kernel, which would forward the request to its display driver, which is then responsible for actually plotting the character/pixel. A kernel must maintain a list of available devices. This list may be known in advance (e.g. on an embedded system where the kernel will be rewritten if the available hardware changes), configured by the user (typical on older PCs and on systems that are not designed for personal use) or detected by the operating system at run time (normally called Plug and Play). In a Plug and Play system, a device manager first performs a scan on the different hardware buses, such as Peripheral Component Interconnect (PCI) or Universal Serial Bus (USB), to detect installed devices, then searches for the appropriate drivers. As device management is a very OS-specific topic, these drivers are handled differently by each kind of kernel design, but in every case, the kernel has to provide the I/O to allow drivers to physically access their devices through some port or memory location. Very important decisions have to be made when designing the device management system, as in some designs accesses may involve context switches, making the operation very CPU-intensive and easily causing a significant performance overhead.
System calls
To actually perform useful work, a process must be able to access the services provided by the kernel. This is implemented differently by each kernel, but most provide a C library or an API, which in turn invokes the related kernel functions.
The method of invoking the kernel function varies from kernel to kernel. If memory isolation is in use, it is impossible for a user process to call the kernel directly, because that would be a violation of the processor's access control rules. A few possibilities are:

Using a software-simulated interrupt. This method is available on most hardware, and is therefore very common.

Using a call gate. A call gate is a special address which the kernel has added to a list stored in kernel memory and whose location the processor knows. When the processor detects a call to that address, it instead redirects to the target location without causing an access violation. This requires hardware support, but the hardware for it is quite common.

Using a special system call instruction. This technique requires special hardware support, which common architectures (notably, x86) may lack. System call instructions have been added to recent models of x86 processors, however, and some (but not all) operating systems for PCs make use of them when available.

Using a memory-based queue. An application that makes large numbers of requests but does not need to wait for the result of each may add details of requests to an area of memory that the kernel periodically scans to find requests.
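On POSIX systems the user-visible end of this machinery can be observed from Python: `os.write` is a thin wrapper that traps into the kernel's `write` system call using whichever invocation method the platform provides (on modern x86, typically a dedicated system call instruction):

```python
import os

# os.write is a thin user-space wrapper around the kernel's write system
# call: it hands the file descriptor and buffer to the kernel, which runs
# in supervisor mode, performs the privileged I/O on the device, and
# returns the number of bytes written back to user space.
message = b"hello from user space\n"
written = os.write(1, message)   # file descriptor 1 = standard output
assert written == len(message)   # the kernel reports how much it wrote
```

The application never touches the display hardware itself; it only asks, and the kernel (via the appropriate driver) does the privileged work.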
Security
An important kernel design decision is the choice of the abstraction levels at which the security mechanisms and policies should be implemented. One approach is to use firmware and kernel support for fault tolerance (see above), and build the security policy for malicious behavior on top of that (adding features such as cryptography mechanisms where necessary), delegating some responsibility to the compiler. Approaches that delegate enforcement of security policy to the compiler and/or the application level are often called language-based security.
Disadvantages include:

Longer application start-up time. Applications must be verified when they are started to ensure they have been compiled by the correct compiler, or may need recompiling either from source code or from bytecode.

Inflexible type systems. On traditional systems, applications frequently perform operations that are not type safe. Such operations cannot be permitted in a language-based protection system, which means that applications may need to be rewritten and may, in some cases, lose performance.
Process cooperation
Edsger Dijkstra proved that, from a logical point of view, atomic lock and unlock operations operating on binary semaphores are sufficient primitives to express any functionality of process cooperation.[18] However, this approach is generally held to be lacking in terms of safety and efficiency, whereas a message passing approach is more flexible.
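The binary semaphore can be demonstrated with Python threads: a semaphore initialized to 1 provides exactly the atomic lock (Dijkstra's P) and unlock (V) operations, and suffices to keep a shared counter consistent across concurrent workers. The worker function and iteration counts are invented for illustration:

```python
import threading

counter = 0
mutex = threading.Semaphore(1)       # binary semaphore: 1 = unlocked

def worker():
    global counter
    for _ in range(10_000):
        mutex.acquire()              # P (lock): wait until the semaphore is free
        counter += 1                 # critical section: one thread at a time
        mutex.release()              # V (unlock): let another thread in

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                       # deterministically 40000 with the lock
```

Without the acquire/release pair, the unsynchronized read-modify-write on `counter` could interleave and lose updates; with it, every increment is atomic with respect to the other workers.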
An important design decision is the separation between mechanism and policy, where a policy is a particular "mode of operation". In a minimal microkernel just some very basic policies are included, and its mechanisms allow what is running on top of the kernel (the remaining part of the operating system and the other applications) to decide which policies to adopt (such as memory management, high-level process scheduling, file system management, etc.). A monolithic kernel instead tends to include many policies, therefore restricting the rest of the system to rely on them.
Monolithic kernels
Main article: Monolithic kernel
Diagram of a monolithic kernel.

In a monolithic kernel, all OS services run along with the main kernel thread, thus also residing in the same memory area. This approach provides rich and powerful hardware access. Some developers maintain that monolithic systems are easier to design and implement than other solutions, and are extremely efficient if well-written. The main disadvantages of monolithic kernels are the dependencies between system components (a bug in a device driver might crash the entire system) and the fact that large kernels can become very difficult to maintain.
Microkernels

Main article: Microkernel

In the microkernel approach, the kernel itself only provides basic functionality that allows the execution of servers, separate programs that assume former kernel functions, such as device drivers, GUI servers, etc. The microkernel approach consists of defining a simple abstraction over the hardware, with a set of primitives or system calls to implement minimal OS services such as memory management, multitasking, and inter-process communication. Other services, including those normally provided by the kernel such as networking, are implemented in user-space programs referred to as servers. Microkernels are easier to maintain than monolithic kernels, but the large number of system calls and context switches might slow down the system because they typically generate more overhead than plain function calls. Microkernels generally underperform traditional designs, sometimes dramatically. This is due in large part to the overhead of moving in and out of the kernel (a context switch) to move data between the various applications and servers. By the mid-1990s, most researchers had abandoned the belief that careful tuning could reduce this overhead dramatically, but recently, newer microkernels, optimized for performance, have addressed these problems.
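The server idea can be sketched with ordinary threads and queues standing in for microkernel IPC: the client never calls the server's functions directly, it only exchanges messages. The "file server", its operations, and the message format below are all invented for illustration:

```python
import queue
import threading

# Toy microkernel-style IPC: a "file server" runs as a separate thread of
# control and is reached only through message passing, never by a direct
# function call into its private state.
requests = queue.Queue()

def file_server():
    files = {"motd": "hello"}            # the server's private state
    while True:
        op, name, reply = requests.get() # receive the next IPC message
        if op == "stop":
            break
        reply.put(files.get(name, ""))   # answer on the client's reply channel

server = threading.Thread(target=file_server)
server.start()

reply = queue.Queue()
requests.put(("read", "motd", reply))    # client sends a request message...
response = reply.get()                   # ...and blocks awaiting the reply
print(response)                          # → hello
requests.put(("stop", None, None))
server.join()
```

Every round trip here costs a send, a wake-up and a receive, which is the message-passing overhead the text describes, compared with a monolithic kernel's plain function call into shared kernel memory.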
A microkernel allows the implementation of the remaining part of the operating system as a normal application program written in a high-level language, and the use of different operating systems on top of the same unchanged kernel. It is also possible to dynamically switch among operating systems.
Maintaining a monolithic kernel also means working with a very large body of code, which can be very difficult with non-obvious interdependencies between parts of a kernel with millions of lines of code. Due to the problems that monolithic kernels pose, they were considered obsolete by the early 1990s. As a result, the design of Linux using a monolithic kernel rather than a microkernel was the topic of a famous flame war between Linus Torvalds and Andrew Tanenbaum. There is merit on both sides of the argument presented in the Tanenbaum–Torvalds debate. Some, including early UNIX developer Ken Thompson, argued that while microkernel designs were more aesthetically appealing, monolithic kernels were easier to implement. However, a bug in a monolithic system usually crashes the entire system, while this doesn't happen in a microkernel with servers running apart from the main thread. Monolithic kernel proponents reason that incorrect code doesn't belong in a kernel, and that microkernels offer little advantage over correct code. Microkernels are often used in embedded robotic or medical computers where crash tolerance is important and most of the OS components reside in their own private, protected memory space. This is impossible with monolithic kernels, even with modern module-loading ones. However, the monolithic model tends to be more efficient through the use of shared kernel memory, rather than the slower IPC system of microkernel designs, which is typically based on message passing.
Hybrid kernels
Main article: Hybrid kernel
The hybrid kernel approach tries to combine the speed and simpler design of a monolithic kernel with the modularity and execution safety of a microkernel. Hybrid kernels are essentially a compromise between the monolithic kernel approach and the microkernel system. This implies running some services (such as the network stack or the filesystem) in kernel space to reduce the performance overhead of a traditional microkernel, while still running kernel code (such as device drivers) as servers in user space.
Nanokernels
Main article: Nanokernel

A nanokernel delegates virtually all services, including even the most basic ones like interrupt controllers or the timer, to device drivers to make the kernel memory requirement even smaller than that of a traditional microkernel.
Exokernels
Main article: Exokernel

An exokernel is a type of kernel that does not abstract hardware into theoretical models. Instead it allocates physical hardware resources, such as processor time, memory pages, and disk blocks, to different programs. A program running on an exokernel can link to a library operating system that uses the exokernel to simulate the abstractions of a well-known OS, or it can develop application-specific abstractions for better performance.
Strictly speaking, an operating system (and thus, a kernel) is not required to run a computer. Programs can be directly loaded and executed on the "bare metal" machine, provided that the authors of those programs are willing to work without any hardware abstraction or operating system support. Most early computers operated this way during the 1950s and early 1960s; they were reset and reloaded between the execution of different programs. Eventually, small ancillary programs such as program loaders and debuggers were left in memory between runs, or loaded from ROM. As these were developed, they formed the basis of what became early operating system kernels. The "bare metal" approach is still used today on some video game consoles and embedded systems, but in general, newer computers use modern operating systems and kernels.
Unix
Main article: Unix

Unix represented the culmination of decades of development towards a modern operating system. During the design phase, programmers decided to model every high-level device as a file, because they believed the purpose of computation was data transformation. For instance, printers were represented as a "file" at a known location: when data was copied to the file, it printed out. Other systems, to provide similar functionality, tended to virtualize devices at a lower level; that is, both devices and files would be instances of some lower-level concept. Virtualizing the system at the file level allowed users to manipulate the entire system using their existing file management utilities and concepts, dramatically simplifying operation. As an extension of the same paradigm, Unix allows programmers to manipulate files using a series of small programs, using the concept of pipes, which allowed users to complete operations in stages, feeding a file through a chain of single-purpose tools. Although the end result was the same, using smaller programs in this way dramatically increased flexibility as well as ease of development and use, allowing the user to modify their workflow by adding or removing a program from the chain. In the Unix model, the operating system consists of two parts: the huge collection of utility programs that drive most operations, and the kernel that runs the programs. Under Unix, from a programming standpoint the distinction between the two is fairly thin: the kernel is a program running in supervisor mode that acts as a program loader and supervisor for the small utility programs making up the rest of the system, and provides locking and I/O services for these programs; beyond that, the kernel didn't intervene at all in user space. Over the years the computing model changed, and Unix's treatment of everything as a file no longer seemed to be as universally applicable as it was before.
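The pipe idea can be sketched as a chain of single-purpose stages, each consuming the previous stage's output, just as `sort` consumes the output of `cat` in a shell pipeline. The stage functions here are invented stand-ins for small Unix tools:

```python
def pipeline(data, *stages):
    """Feed `data` through a chain of single-purpose stages, each consuming
    the previous stage's output, like `tool1 | tool2 | tool3` in a shell."""
    for stage in stages:
        data = stage(data)
    return data

# Three tiny "tools": split text into words, sort them, drop duplicates.
words  = lambda text: text.split()
sort   = sorted
unique = lambda items: list(dict.fromkeys(items))  # keep first occurrence

print(pipeline("to be or not to be", words, sort, unique))
# → ['be', 'not', 'or', 'to']
```

Adding or removing a stage changes the workflow without touching the other tools, which is exactly the flexibility the text attributes to the pipe model.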
Although a terminal could be treated as a file or a stream, which is printed to or read from, the same did not seem to be true for a graphical user interface. Networking posed another problem. Even if network communication can be compared to file access, the low-level packet-oriented architecture dealt with discrete chunks of data and not with whole files. As the capability of computers grew, Unix became increasingly cluttered with code. While kernels might have had 100,000 lines of code in the seventies and eighties, kernels of modern Unix successors like Linux have more than 4.5 million lines. Thus, the biggest problem with monolithic kernels, or monokernels, was sheer size. The code was so extensive that working on such a large codebase was extremely tedious and time-consuming. Modern Unix derivatives are generally based on module-loading monolithic kernels. Examples of this are Linux distributions like Debian GNU/Linux, Red Hat Linux and Ubuntu Linux, as well as Berkeley Software Distributions such as FreeBSD and NetBSD. Apart from these alternatives, amateur developers maintain an active operating system development community, populated by self-written hobby kernels which mostly end up sharing many features with Linux and/or being compatible with it.
Mac OS
Main article: Mac OS history

Apple Computer first launched Mac OS in 1984, bundled with its Apple Macintosh personal computer. For the first few releases, Mac OS (or System Software, as it was called) lacked many essential features, such as multitasking and a hierarchical filesystem. With time, the OS evolved and eventually became Mac OS 9, with many new features added, but the kernel basically stayed the same. In contrast, Mac OS X is based on Darwin, which uses a hybrid kernel called XNU, created by combining the 4.3BSD kernel and the Mach kernel.
Amiga
Main article: AmigaOS

The Commodore Amiga was released in 1985, and was among the first (and certainly most successful) home computers to feature a microkernel operating system. The Amiga's kernel, exec.library, was small but capable, providing fast pre-emptive multitasking on hardware similar to that of the cooperatively-multitasked Apple Macintosh, and an advanced dynamic linking system that allowed for easy expansion.
Windows
Main article: History of Microsoft Windows

Microsoft Windows was first released in 1985 as an add-on to DOS. Similarly to Mac OS, it lacked important features at first but eventually acquired them in later releases. This product line would continue until the release of the Windows 9x series and end with Windows Me. At the same time, Microsoft had been developing Windows NT since 1993, an operating system intended for high-end and business users. This line started with the release of Windows NT 3.1 and replaced the main product line with the release of the NT-based Windows 2000. The highly successful Windows XP brought these two product lines together, combining the stability of the NT line and the visual appeal of the 9x series. It uses the NT kernel, which is generally considered a hybrid kernel because the kernel itself contains tasks such as the Window Manager and the IPC Manager, but several subsystems run in user mode.