Aim: To study Network Operating System and Distributed Operating System.

Theory:

Network Operating System:
A network operating system (NOS) is an operating system that contains components and programs allowing a computer on a network to serve requests from other computers for data and to provide access to other resources such as printers and file systems. Network operating systems are designed for the specific task of linking computers and devices into a local-area network (LAN).
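To make this concrete, here is a minimal sketch, in Python, of the kind of request/response service a NOS makes possible: a node answering file requests from other computers on the LAN. The one-line protocol, file names, and port are invented for illustration; no real NOS product works exactly this way.

```python
import socketserver

# Stand-in for a shared file system exported to the network (hypothetical data).
SHARED_FILES = {"readme.txt": b"hello from the file server\n"}

class FileRequestHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Hypothetical one-line protocol: the client sends "GET <filename>".
        request = self.rfile.readline().decode().strip()
        _, _, name = request.partition(" ")
        data = SHARED_FILES.get(name)
        if data is None:
            self.wfile.write(b"ERROR not found\n")
        else:
            self.wfile.write(b"OK\n" + data)

if __name__ == "__main__":
    # Any other computer on the network can now connect and request a file.
    with socketserver.TCPServer(("0.0.0.0", 9000), FileRequestHandler) as server:
        server.serve_forever()
```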

Fig.: Network Operating System

A network operating system is a software platform that supports both the functionality of an individual computer and that of multiple computers within an interconnected network. Basically, a network operating system controls other software and computer hardware to run applications, share resources, protect data, and establish communication. Individual computers run client operating systems, while network operating systems create the software infrastructure that allows wireless, local, and wide area networks to function.

Origin and Evolution of Network Operating Systems:
Contemporary network operating systems are mostly advanced, specialized branches of POSIX-compliant software platforms and are rarely developed from scratch. The main reason is the high cost of developing a world-class operating system all the way from concept to finished product. By adopting a general-purpose OS architecture, network vendors can focus on routing-specific code, decrease time to market, and benefit from the years of technology and research that went into the design of the original (donor) products.

1. First-Generation OS: Monolithic Architecture:
Typically, first-generation network operating systems for routers and switches were proprietary images running in a flat memory space, often directly from flash memory or ROM. All first-generation network operating systems shared one trait: they avoided the risks of running full-size commercial operating systems on embedded hardware. Memory management, protection, and context switching were either rudimentary or nonexistent, the primary goals being a small footprint and speed of operation. Nevertheless, first-generation network operating systems made networking commercially viable and were deployed on a wide range of products. The downside was that these systems were plagued by problems of resource management and fault isolation; a single runaway process could easily consume the processor or cause the entire system to fail. Such failures were not uncommon in data networks controlled by older software and could be triggered by software errors, rogue traffic, and operator errors.

2. Second-Generation OS: Control Plane Modularity:
The mid-1990s were marked by a significant increase in the use of data networks worldwide, which quickly challenged the capacity of existing networks and routers. By this time, it had become evident that embedded platforms could run full-size commercial operating systems, at least on high-end hardware, but with one catch: they could not sustain packet forwarding at satisfactory data rates. A breakthrough solution was needed. It came in the form of a hard separation between the control and forwarding planes, an approach that became widely accepted after the success of the industry's first application-specific integrated circuit (ASIC)-driven routing platform, the Juniper Networks M40. Forwarding packets entirely in silicon was proven viable, clearing the path for next-generation designs (a toy code sketch of this control/forwarding split follows at the end of this section).

3. Third-Generation OS: Flexibility, Scalability and Continuous Operation:
Although second-generation designs were very successful, the past 10 years have brought new challenges. Increased competition created the need to lower operating expenses and made a coherent case for network software flexible enough to be redeployed in network devices across the larger part of the end-to-end packet path.

Unlike operating systems such as DOS and Windows, which are designed for a single user controlling one computer, network operating systems (NOS) coordinate the activities of multiple computers across a network. The network operating system acts as a director to keep the network running smoothly.
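The control/forwarding separation described above can be sketched in a few lines. This is purely conceptual, with invented table and interface names; in real platforms such as the M40 the fast path is silicon, not software. The control plane computes and installs routes, while the forwarding path does nothing but lookups.

```python
import ipaddress

class ForwardingTable:
    """Longest-prefix-match table shared between the two planes (toy model)."""
    def __init__(self):
        self._routes = {}  # prefix -> next-hop interface

    def install(self, prefix, next_hop):
        # Called by the control plane: slow, runs routing protocols.
        self._routes[ipaddress.ip_network(prefix)] = next_hop

    def lookup(self, dst):
        # Called by the forwarding plane: fast, per-packet, lookup only.
        dst = ipaddress.ip_address(dst)
        matches = [p for p in self._routes if dst in p]
        if not matches:
            return None
        best = max(matches, key=lambda p: p.prefixlen)  # longest prefix wins
        return self._routes[best]

fib = ForwardingTable()
fib.install("10.0.0.0/8", "ge-0/0/1")    # control plane installs routes
fib.install("10.1.0.0/16", "ge-0/0/2")
print(fib.lookup("10.1.2.3"))            # -> ge-0/0/2 (more specific route)
```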

The two major types of Network Operating Systems are:

1. Peer-to-Peer
2. Client/Server

1. Peer-to-Peer:
Peer-to-peer network operating systems allow users to share resources and files located on their computers and to access shared resources found on other computers. However, they do not have a file server or a centralized management source. In a peer-to-peer network, all computers are considered equal; they all have the same ability to use the resources available on the network. Peer-to-peer networks are designed primarily for small to medium local area networks. AppleShare and Windows for Workgroups are examples of programs that can function as peer-to-peer network operating systems.
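Since every peer is both a server and a client, one way to picture the arrangement is two copies of the same code talking to each other. The following Python sketch uses invented ports, file names, and loopback addresses; real peer-to-peer NOS products are far more elaborate.

```python
import socket, threading, time

class Peer:
    """Every node runs the same code: it shares files and fetches from others."""
    def __init__(self, port, shared):
        self.port, self.shared = port, shared

    def serve(self):
        # Answer requests from other peers for this node's files.
        with socket.create_server(("127.0.0.1", self.port)) as srv:
            while True:
                conn, _ = srv.accept()
                with conn:
                    name = conn.recv(1024).decode()
                    conn.sendall(self.shared.get(name, b"<not found>"))

    def fetch(self, peer_port, name):
        # Act as a client against another, equal peer.
        with socket.create_connection(("127.0.0.1", peer_port)) as conn:
            conn.sendall(name.encode())
            return conn.recv(65536)

a = Peer(9001, {"notes.txt": b"from peer A"})
b = Peer(9002, {"plan.txt": b"from peer B"})
for p in (a, b):
    threading.Thread(target=p.serve, daemon=True).start()
time.sleep(0.2)                    # give both listeners a moment to start
print(a.fetch(9002, "plan.txt"))   # peer A reads a file shared by peer B
print(b.fetch(9001, "notes.txt"))  # and vice versa: the peers are equals
```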

Fig.: Peer-to-peer network

Advantages of a Peer-to-Peer Network:
- Less initial expense: No need for a dedicated server.
- Setup: An operating system (such as Windows XP) already in place may only need to be reconfigured for peer-to-peer operations.

Disadvantages of a Peer-to-Peer Network:
- Decentralized: No central repository for files and applications.
- Security: Does not provide the security available on a client/server network.

2. Client/Server:
Client/server network operating systems allow the network to centralize functions and applications in one or more dedicated file servers. The file servers become the heart of the system, providing access to resources and providing security. Individual workstations (clients) have access to the resources available on the file servers. The network operating system provides the mechanism to integrate all the components of the network and allow multiple users to simultaneously share the same resources irrespective of physical location.

Novell NetWare and Windows 2000 Server are examples of client/server network operating systems.
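The defining trait, centralization, can be sketched without any networking at all: one server object owns the accounts, the shared files, and the security checks, and the clients hold nothing. The account names and files below are invented for illustration.

```python
class FileServer:
    """Toy model of the dedicated server at the heart of a client/server NOS."""
    def __init__(self):
        self._accounts = {"alice": "secret"}        # central user database
        self._files = {"report.doc": "Q3 numbers"}  # central shared storage
        self._sessions = set()

    def login(self, user, password):
        # Security is enforced in exactly one place: the server.
        if self._accounts.get(user) == password:
            self._sessions.add(user)
            return True
        return False

    def read(self, user, name):
        if user not in self._sessions:
            raise PermissionError("log in at the server first")
        return self._files[name]

server = FileServer()                   # the "heart of the system"
server.login("alice", "secret")
print(server.read("alice", "report.doc"))
# Workstations hold no shared state of their own, which is also why the
# dependence disadvantage listed below applies: if the server goes down,
# operations cease across the network.
```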

Fig.: Client/Server Model

Advantages of a Client/Server Network:
- Centralized: Resources and data security are controlled through the server.
- Scalability: Any or all elements can be replaced individually as needs increase.
- Flexibility: New technology can be easily integrated into the system.
- Interoperability: All components (client/network/server) work together.
- Accessibility: The server can be accessed remotely and across multiple platforms.

Disadvantages of a Client/Server Network:
- Expense: Requires an initial investment in a dedicated server.
- Maintenance: Large networks require a staff to ensure efficient operation.
- Dependence: When the server goes down, operations cease across the network.

Examples of Network Operating Systems:
The following are some of the more popular peer-to-peer and client/server network operating systems: Novell NetWare, Microsoft Windows NT, Microsoft Windows 2000, Microsoft Windows XP, Sun Solaris, Linux, AppleShare, and Microsoft Windows Server.

Advantages of NOS:
- Centralized
- Flexibility
- Accessibility
- Scalability

Disadvantages of NOS:

- Expensive
- Maintenance
- Dependence

Characteristics of Network Operating System:
- It provides the necessary operating system facilities, such as support for different protocols and processors, multiprocessing of applications, and automatic hardware detection.
- It enforces security parameters such as authorization, authentication, logon restrictions, and access control.
- It provides file, directory, and user-naming services.
- It provides web services, printing facilities, and backup and replication services.
- It supports internetworking, for example WAN ports and routing.
- It helps in user management, supports logon and logoff, and provides administration, system management, remote access, and monitoring tools along with graphical interfaces.
- It has clustering capabilities, such as fault tolerance and high-availability features.
- It offers the capacity to share devices, hardware, and files throughout the network.

Distributed Operating System:
A distributed operating system is the logical aggregation of operating system software over a collection of independent, networked, communicating, and spatially disseminated computational nodes. Each node holds a discrete software subset of the global aggregate operating system, and each node-level subset is a composition of two distinct providers of services. The first is a ubiquitous minimal kernel, or microkernel, situated directly above each node's hardware; it provides only the mechanisms necessary for a node's functionality. The second is a higher-level collection of system management components, providing all the necessary policies for a node's individual and collaborative activities. This collection sits immediately above the microkernel and below any user applications or APIs that might reside at higher levels. These two entities, the microkernel and the collection of management components, work together to support the global system's goal of seamlessly integrating all network-connected resources and processing functionality into an efficient, available, and unified system. This seamless integration of individual nodes into a global system is referred to as transparency, or the single system image: the illusion that the global system appears to its users as a singular, local computational entity.
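The split between microkernel mechanism and management-component policy described above can be sketched as follows. This is a toy single-process model with invented component names, not the interface of any actual distributed OS: the kernel only delivers messages, while a management component decides what to do with them.

```python
import queue

class Microkernel:
    """Mechanism only: deliver messages between registered components."""
    def __init__(self):
        self._ports = {}

    def register(self, name):
        self._ports[name] = queue.Queue()
        return self._ports[name]

    def send(self, dest, msg):
        self._ports[dest].put(msg)  # the kernel moves messages, nothing more

class MemoryManager:
    """Policy lives above the kernel, in a management component."""
    def __init__(self, kernel):
        self.inbox = kernel.register("memory")
        self.kernel = kernel

    def handle_one(self):
        sender, request = self.inbox.get()
        # Policy decision (trivial here): grant every allocation request.
        self.kernel.send(sender, ("granted", request))

kernel = Microkernel()
mm = MemoryManager(kernel)
app_inbox = kernel.register("app")
kernel.send("memory", ("app", "alloc 4KB"))  # an application asks for memory
mm.handle_one()
print(app_inbox.get())                       # -> ('granted', 'alloc 4KB')
```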

A System within a System:
A distributed operating system is an operating system. This statement may be trivial, but it is not always obvious, because the distributed operating system is such an integral part of the distributed system. The idea is analogous to a square: a square might not immediately be recognized as a rectangle. Although it possesses all the attributes defining a rectangle, a square's additional attributes and specific configuration provide a disguise. At its core, the distributed operating system provides only the essential services and minimal functionality required of an operating system, but its additional attributes and particular configuration make it different. The distributed operating system fulfills its role as an operating system in a manner indistinguishable from a centralized, monolithic operating system. That is, although distributed in nature, it supports a single system image through the implementation of transparency; more simply put, the system appears to its users as a singular, local entity.

Working together as an Operating System:
The architecture and design of a distributed operating system are specifically aligned with realizing both individual-node and global-system goals. Any architecture or design must be approached in a manner consistent with separating policy from mechanism. In doing so, a distributed operating system attempts to provide a highly efficient and reliable distributed computing framework that demands an absolute minimum of user awareness of the underlying command and control efforts. The multi-level collaboration between the kernel and the system management components, and in turn between the distinct nodes of a distributed operating system, is its central functional challenge. This is the point in the system that must maintain perfect harmony of purpose while simultaneously maintaining a complete disconnection of intent from implementation. This challenge is also the distributed operating system's opportunity: to produce the foundation and framework for a reliable, efficient, available, robust, extensible, and scalable system. However, this opportunity comes at a very high cost in complexity.

Location Transparency:
Location transparency comprises two distinct sub-aspects, naming transparency and user mobility. Naming transparency requires that nothing in the physical or logical references to any system entity expose any indication of the entity's location, or of its local or remote relationship to the user. User mobility requires consistent referencing of system entities, regardless of the system location from which the reference originates. Transparency dictates that the relative location of a system entity, whether local or remote, must be both invisible to and undetectable by the user.
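Here is a minimal sketch of naming transparency, with invented node and file names: the user-visible interface takes only a logical name, and the name-to-node resolution stays entirely inside the system.

```python
class TransparentNamespace:
    """Toy name service: users see logical names, never locations."""
    def __init__(self):
        # System-internal mapping; never exposed through the interface.
        self._location = {"documents/plan": "node-17",
                          "documents/notes": "node-03"}
        self._store = {("node-17", "documents/plan"): "the plan",
                       ("node-03", "documents/notes"): "the notes"}

    def read(self, name):
        node = self._location[name]       # resolved inside the system
        return self._store[(node, name)]  # fetched from wherever it lives

ns = TransparentNamespace()
# The same name works regardless of where the user sits (user mobility)
# and regardless of where the file is stored (naming transparency).
print(ns.read("documents/plan"))
```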

Access Transparency:
Local and remote system entities must remain indistinguishable when viewed through the user interface. The distributed operating system maintains this perception by exposing a single access mechanism for a system entity, regardless of whether that entity is local or remote to the user. Transparency dictates that any differences in the methods of accessing any particular system entity, whether local or remote, must be both invisible to and undetectable by the user.

Migration Transparency:
Logical resources and physical processes migrated by the system from one location to another, in an attempt to maximize efficiency, reliability, availability, security, or some other quality, should move automatically, controlled solely by the system. There are myriad possible reasons for migration; in any such event, the entire process of migration, before, during, and after, should occur without user knowledge or interaction. Transparency dictates that both the need for and the execution of any system entity migration must be both invisible to and undetectable by the user.

Replication Transparency:
A system's elements or components may need to be copied to strategic remote points in the system, for example to increase efficiency through better proximity or to improve reliability through back-up duplication. This duplication of a system entity and its subsequent movement to a remote system location may occur for any number of reasons; in any event, the entire process, before, during, and after, should occur without user knowledge or interaction. Transparency dictates that the necessity for and execution of replication, as well as the existence of replicated entities throughout the system, must be both invisible to and undetectable by the user.
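Migration and replication transparency can be sketched together, again with invented node names: the system moves and copies an entity behind a stable handle, and reads through that handle never reveal which replica answered.

```python
import random

class ReplicatedObject:
    """Toy entity the system may move or copy without the user noticing."""
    def __init__(self, value, home):
        self._replicas = {home: value}  # node -> copy

    def migrate(self, src, dst):
        # Driven by the system, never by the user.
        self._replicas[dst] = self._replicas.pop(src)

    def replicate(self, src, dst):
        # Duplication for proximity or reliability.
        self._replicas[dst] = self._replicas[src]

    def read(self):
        # Any replica will do; which node served the read is never exposed.
        return self._replicas[random.choice(list(self._replicas))]

obj = ReplicatedObject("config data", home="node-A")
before = obj.read()
obj.replicate("node-A", "node-B")  # invisible to the user
obj.migrate("node-A", "node-C")    # also invisible
assert obj.read() == before        # same handle, same answer
print(before)
```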

The Amoeba Distributed Operating System:
Amoeba is a powerful microkernel-based system that turns a collection of workstations or single-board computers into a transparent distributed system. It has been in use in academia, industry, and government for about 5 years. It runs on the SPARC (Sun4c and Sun4m), the 386/486, the 68030, and the Sun 3/50 and Sun 3/60. At the Vrije Universiteit, Amoeba runs on a collection of 80 single-board SPARC computers connected by an Ethernet, forming a powerful processor pool. It is used for research in distributed and parallel operating systems, runtime systems, languages, and applications.

History:
The Amoeba distributed operating system has been in development and use since 1981. Amoeba was originally designed and implemented at the Vrije Universiteit in Amsterdam, The Netherlands, by Prof. Andrew S. Tanenbaum and two of his Ph.D. students, Sape Mullender and Robbert van Renesse. From 1986 until 1990 it was developed jointly there and at the Centre for Mathematics and Computer Science, also in Amsterdam. Since then, development has continued at the Vrije Universiteit. It has passed through several versions, each experimenting with different file servers, network protocols, and remote procedure call mechanisms. Although Amoeba has reached a point where it seems relatively stable, it is still undergoing change, so it is important to take note of the various warnings and advice about the proposed design changes for future releases.

The Amoeba Design Philosophy:
Amoeba has been developed with several ideas in mind. The first is that computers are rapidly becoming cheaper and faster. In one decade we have progressed from many people sharing one computer to each person having their own computer. As computers continue to get cheaper, it should be possible for each person to have many computers available for individual use; this is already the case to some extent. The second relates to the widespread use and increasing performance of computer networks. The need for a uniform communication mechanism between computers on either a local network or a wide-area network is already apparent for many applications. What is needed is the ability to deal with physically distributed hardware while using logically centralized software. Amoeba allows the connection of large numbers of computers, on both local and wide-area networks, in a coherent way that is easy to use and understand.

The basic idea behind Amoeba is to provide users with the illusion of a single powerful timesharing system, when in fact the system is implemented on a collection of machines, potentially distributed across several countries. The chief design goals were to build a distributed system that would be small, simple to use, scalable to large numbers of processors, fault-tolerant to a degree, and very high in performance, including the possibility of parallelism, and above all to be usable in a way that is transparent to the users. What is meant by transparent can best be illustrated by contrasting it with a network operating system, in which each machine retains its own identity. With a network operating system, each user logs into one specific machine: their home machine. When a program is started, it executes on the home machine unless the user gives an explicit command to run it elsewhere. Similarly, files are local unless a remote file system is explicitly mounted or files are explicitly copied. In short, the user is clearly aware that multiple independent computers exist, and must deal with them explicitly.

The System Architecture:
The Amoeba architecture consists of four principal components, as shown in figure 2.1. First are the workstations, one per user, on which users can carry out editing and other tasks that require fast interactive response. The workstations are all diskless and are used primarily as intelligent terminals that do window management, rather than as computers for running complex user programs. Currently Suns, IBM PC/AT clones, and X terminals can be used as workstations.

Second are the pool processors: a group of CPUs that can be dynamically allocated as needed, used, and then returned to the pool. This makes it possible to take advantage of parallelism within a job. For example, the make command might need to do six compilations, so six processors could be selected from the pool to do them. Many applications, such as heuristic search in AI applications (e.g., playing chess), use large numbers of pool processors for their computing. The processor pool also offers the possibility of doing many jobs in parallel (e.g., several large text-processing jobs and program compilations) without affecting the perceived performance of the system, because new work is assigned to idle processors (or to the most lightly loaded ones).

Third are the specialized servers, such as directory servers, file servers, boot servers, and various other servers with specialized functions. Each server is dedicated to performing a specific function. In some cases there are multiple servers that provide the same function, for example as part of the replicated file system.

Fourth are the gateways, which are used to link Amoeba systems at different sites and in different countries into a single, uniform system. The gateways isolate Amoeba from the peculiarities of the protocols that must be used over the wide-area networks.

All the Amoeba machines run the same kernel, which primarily provides multithreaded processes, communication services, and little else. The basic idea behind the kernel was to keep it small, to enhance its reliability, and to allow as much of the operating system as possible to run as user processes (i.e., outside the kernel), providing for flexibility and experimentation. This approach is called a microkernel.
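A toy sketch of the pool-processor allocation just described, using an invented least-loaded rule (Amoeba's actual allocator is more sophisticated): idle processors are picked first, and once the pool is busy, new work goes to the most lightly loaded CPU.

```python
class ProcessorPool:
    """Toy model of dynamic allocation from a pool of CPUs."""
    def __init__(self, n):
        self.load = [0] * n  # number of jobs currently on each CPU

    def allocate(self):
        # Idle CPUs (load 0) win; otherwise pick the least loaded one.
        cpu = min(range(len(self.load)), key=self.load.__getitem__)
        self.load[cpu] += 1
        return cpu

    def release(self, cpu):
        self.load[cpu] -= 1  # the CPU returns to the pool

pool = ProcessorPool(4)
compile_jobs = [f"cc file{i}.c" for i in range(6)]  # e.g., one "make" run
placement = {job: pool.allocate() for job in compile_jobs}
print(placement)  # six compilations spread across four pool processors
```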
Conclusion: Thus we have successfully studied Network Operating System and Distributed Operating System.