
Abstract

Mobile and handheld devices incorporating several user-centric applications owe their proliferation to the development of the internet and to the networking technologies that interface their operating systems to the internet. However, their usage is still limited by the capacities of their memories. Modern technology has facilitated the development of virtual memory systems in which devices interface with computer systems that hold larger amounts of memory and applications. These computers, also called servers, significantly increase the computational efficiency of individual systems. Their good performance is, however, restricted to the coverage of the local wireless network. When a device is migrated to another geographic region covered by a different wireless network, the challenge for the network is to maintain seamless data transfer as well as the previous context of the device. If this is not done, there will be significant losses in terms of data and connectivity, as well as increased start-up times. The Context Transfer Protocol provides a method of ensuring that devices can be migrated from one networked area to another without any of the bottlenecks mentioned above. With the incorporation of this protocol, network storage devices functioning as servers can be seamlessly integrated with mobile devices, forming network memory servers.

Apart from memory management, another key requirement of operating systems is reliability. Because they incorporate monolithic kernels, current operating systems have a tendency to crash often, owing to software bugs that interfere with the functioning of the kernel. Microkernel technology can significantly improve the reliability of operating systems. In particular, the Minix 3 operating system, with its small kernel code, its fault isolation properties and its reincarnation server process, holds promise as a reliable, self-troubleshooting platform for portable devices.
An interface between a reliable operating system like Minix running on mobile devices and a network memory service would dramatically increase the processing and computational capabilities of these devices. However, Minix is still evolving and the possibilities of interfacing it with an NMS have yet to be ascertained. The scope of this dissertation is limited to testing basic system calls of Minix against an NMS, such as process identification and read/write command functions. If this basic interface is possible, it stands to reason that there is scope for much further development in this area.

Chapter 1 - Introduction

According to Amir et al. (2006), technological advancement is one of the fifty trends changing lifestyles worldwide. Technology has transformed the telephone into the mobile and the computer into the laptop. Mobile telephony has turned the cell phone into the smart phone, which has changed the way people work and live. Laptops have become the preferred medium of work for all sections of people and are set to supersede even desktop computers (Harter and Hopper, 2009).

These devices owe their popularity to their ability to offer anytime, anywhere access to the internet and hence to a whole range of online applications and offerings. Other mobile devices include notebooks, tablet personal computers and personal digital assistants (PDAs). All these devices have been made possible by rapid strides in networking technologies, which have evolved from wired networks to wireless virtual networks. These networks use sophisticated protocols that enable mobile devices to interface with the internet seamlessly.

Key to the effective performance of all computer systems, including mobile devices, is the operating system. Operating system efficiency is largely determined by primary memory. The greater the primary memory, the more efficient the processing and computational powers of the operating system (Balakrishnan et al., 2010). However, as per Fox et al. (2008), there exists a vast and ever-widening gap between the sophisticated applications being developed and the memory available to mobile devices. The memory available on these devices is restricted to 64 or 128 megabytes of random-access memory, which is grossly insufficient for current usage (Gribble and Brewer, 2010). Moreover, hard disk drives are heavy, expensive, consume more power and are limited in terms of data storage capacity. Users also hesitate to store data on these devices because loss or theft of the device results in a concomitant loss of the data (Perkins, 2000).

Virtual memory offers a solution to this problem. Virtual memory here refers to memory storage architecture located external to the mobile device or computer system. This memory is managed by a server system that interfaces with the mobile device through the network (Schilit et al., 2007). Because this server serves several mobile devices, or clients, across the network, it is called a network memory server (NMS). Network memory protocols are used to send data back and forth between the mobile device and the server (Weiser, 2010).

The network memory server provides a secure link between the mobile device and the data it wishes to access. It vastly increases the storage capacity of these devices and provides anytime, anywhere access to a whole host of applications, thereby increasing their processing and computational powers. It makes hard disk drives redundant, making for more economical devices along with power savings (Joseph et al., 2009). Since data is stored on the network server, data is not compromised even if the device is lost.

The increase in data transmission speeds from the current 1 Gb per second to the 10 Gb per second that seems possible in the near future will give a huge boost to network memory server technology (Hodes and Katz, 2007). At that speed, accessing data from a virtual memory server will be faster than accessing a hard drive located on the device itself.

However, for any functionality to be efficient, the operating systems that run on devices must be reliable. Current operating systems are subject to frequent failure on account of the monolithic kernels they are built on.
For example, 78% of Windows users experienced 15 crashes per month, whereas 24% of users experienced more than 65 monthly crashes (Noble et al., 2006). Monolithic kernels are vulnerable to malfunctioning software code, that is, bugs. These bugs incapacitate the kernel and consequently the operating system itself (Noble et al., 2006). Hence the need to incorporate microkernel technology in operating systems. A microkernel includes only a small amount of code in kernel space, thereby limiting the number of bugs; the other processes run on mutually independent drivers. Thus a bug in any process affects that particular process only and not the entire system. The Minix 3 operating system offers the advantages of microkernel technology in terms of small size, enhanced reliability and the ability to repair faulty processes seamlessly (Katz and Brewer, 2010). Because of these features, it is suitable for use in handheld mobile devices.

The combination of an efficient, reliable operating system with a network memory server offers immense possibilities for extending the functionality and utility of handheld devices.

1.1. Research Question: To explore the possibility of interfacing the Minix operating system running on a mobile device to the Network Memory Server.

1.2. Aim of the Dissertation: NMS technology has the potential to overcome the memory storage roadblocks that hamper the processing efficiency of mobile devices. Minix, with its unique ability to troubleshoot snags, combined with its small size, makes it ideal for use in mobile devices. This dissertation aims to demonstrate the possibility of an interface between Minix and NMS, which would combine the advantages of each.

1.3. Objectives:
To explore the storage architecture of an NMS system and to identify its advantages over other forms of storage.
To explore the Minix operating system and to determine its suitability for use in terms of reliability and automatic fault rectification.
To demonstrate the possibility of an interface between Minix and NMS.

Chapter 2 - Literature Review

2.1. The Internet

The precursor of the modern computer network as we know it today is the ARPANET, first introduced in 1969 as a means to connect the computers of government research and development establishments (Keshav and Morgan, 1997). ARPANET morphed into USENET, a system of interlinked college computers used to exchange bulletins and other communications. Usenet gave rise to email, while the chat facility was introduced in 1988 (Keshav and Morgan, 1997). The development of personal computers in the nineteen seventies gave a big boost to USENET. The introduction of the World Wide Web in the 1990s by Tim Berners-Lee and Robert Cailliau ushered in the Internet era. Today the internet has become an integral part of daily life, both personal and professional. Its primary applications include the World Wide Web, email, data and file transfer, and chat (Perkins, 2000). These applications are offered to millions of computers globally interconnected through a large network.

The networks used to connect computer systems have themselves evolved from hard cables to wireless networks, all of which are still in use today. While a LAN (Local Area Network) connects a group of computers mostly within the same building, a MAN (Metropolitan Area Network) connects computers located across several buildings in the same city or town. A WAN (Wide Area Network) connects several LANs together and can be restricted to a particular organization or accessible to the general public. The Internet is a WAN that can be used by the public, while a particular organization may employ a WAN to connect its systems across several countries (Weiser, 2010). The latest development in WAN technology is the WWAN, or Wireless WAN. While the other networks use physical cables for connection purposes, a WWAN employs mobile telecommunication technologies to transmit information amongst the computers it connects.
The advantages of a WWAN are the possibility of larger area coverage and faster transmission of data (Stemm et al., 2008).

The internet functions by exchanging data between individual computer systems, also called nodes. This data transfer takes place over a network. To provide a standard framework governing the data transfer process, the Open Systems Interconnection (OSI) model was developed by the International Organization for Standardization (ISO) (Mummert et al., 2002). The OSI model is supported across almost all computer systems and networks, across businesses and geographies, thus allowing for a uniform method of data transfer.

2.2. Internet Protocols

The OSI model is a conceptual construct that defines the how of data transfer. It divides the data transfer process into seven sequential layers. Each layer has its own task in the overall function of data transfer and communicates only with the layers immediately above and below it. The functioning of each layer is governed by a series of standard rules, also known as protocols, which have been defined by bodies such as the Institute of Electrical and Electronics Engineers (IEEE) and the American National Standards Institute (ANSI) (Nguyen et al., 2007). These protocols regulate data formats across the different layers, time data transfers so that they happen sequentially, and troubleshoot potential problems that can occur. Without these rules, individual computer systems would not be able to comprehend in a meaningful manner the data being fed into them (Kornacker and Gilstrap, 2008).

With the development of wireless connectivity technology engaging in seamless data transfer through protocols, several handheld and portable electronic devices have proliferated in the market. These include laptops as well as mobiles, all offering a large number of applications and anywhere, anytime access to data (Nguyen et al., 2007). Users now demand instant connectivity at all times. Key to our understanding of how these services can be provided is the manner in which individual computer systems are connected to each other through the network, and how they function.

2.3. Client Server Architecture

Client server architecture is the term used to describe the manner in which computer systems are interconnected in a network (Hodes and Newman, 2010). Also referred to as two-tier architecture, in this setup every computer in a network is either a client or a server, an arrangement primarily intended to accelerate the computing efficiency of the system. The server is a large-capacity device able to store large amounts of data and provide processing functionality. The server serves several individual computers in the network, called clients, by maintaining the files and applications they use; it also manages resources such as disk drives and printers and regulates network data. Clients in turn access the server for applications, files, accessories and computing power. Hence, while the primary task of a server is to store applications and files, it also acts as a source of data processing, providing individual computer systems with greater computation speed and capability. Servers can be individual computers or a cluster of computers; the latter have more powerful functionality than the former (Nguyen et al., 2007).

2.4. Operating Systems

Key to the efficient functioning of servers and their clients is the operating system. Also known as the OS, it is the platform that runs the other software programs on a system, managing both the software and the hardware. Considered the brains or the backbone of a system, it controls and allocates resources such as memory, file management and computer peripherals, and it recognizes input data and transmits output data to the display unit (Narayanaswamy, 2001).
The operating system makes sure that the various programs run without interference from one another, even though they may be used simultaneously. It also acts as the system guard, preventing unauthorized access to the computer system. Hence it is a multi-tasker as well as a multi-processor manager. Typical operating systems include UNIX, Mac OS and Linux.

One of the most important tasks an operating system performs is managing the memory requirements of the computer (Seshan et al., 2007). Computer memory is an essential component of computer systems, providing information storage capability: memory devices store and record digital data.

2.5. Computer Memory

Computer memory is segregated into primary and secondary memory. Primary memory, also known as temporary memory or Random Access Memory (RAM), acts as a buffer between the Central Processing Unit (CPU) and the hard disk drives of the system. The importance of primary memory is its capacity to significantly reduce processing times: whenever the CPU wishes to process data, it is much faster to access it from primary memory than from the much slower disk drives. This increases the operating efficiency of the whole system. Thus the more memory on a system, the faster its performance. Secondary memory refers to permanent storage devices, including hard disk and optical disk drives. While data can also be accessed from here, access is much slower than from primary memory, so secondary memory is used for more permanent storage purposes (Hodes et al., 2006).

The operating system, in its capacity as memory manager, allocates main memory amongst the various applications running on the system simultaneously. It does this through a process of dynamic memory allocation, which involves determining memory space for an application that is already running. The operating system must not only allocate memory space but also deallocate that space once it is used.
Otherwise memory leaks and consequent degradation in performance occur. When memory is shared across several processes, allocation and deallocation become even more critical to the efficient functioning of each application (Hodes et al., 2006).

Modern computers are finding that, large though their memories may be, they are not sufficient. This is because of the large size of modern programs and applications, and because most users run several applications simultaneously, which results in an overall slowing down of processing. One of the techniques used to ameliorate this problem is virtual memory. This involves a large main memory maintained external to the individual computer station. It is a process of simulating memory, in which the operating system maps the memory requirements of the various applications onto the virtual memory (Veizades et al., 2005).

Configuring virtual memory involves designing storage systems on servers and using efficient network technology. Servers, as mentioned earlier, can be standalone systems or a cluster of systems. Modern requirements have necessitated the development of cluster-type servers which are networked together (Seshan et al., 2007). Along with applications and processing and computational facilities, these servers, also called networked storage systems, offer memory storage facilities to the other computers connected on the network.

2.6. Network Storage Architectures

The development of networked storage systems has been facilitated by advances in storage architectures and networking technologies. Three storage architectures are most commonly used:

2.6.1. Direct Attached Storage (DAS)

DAS is the most widely used memory storage architecture in personal computers and at the workplace. DAS systems offer block storage of data and use SCSI interfaces to connect directly with the input/output systems of host computers. DAS is a secure system but is limited in terms of connectivity, storage capacity and the ability to dynamically manage memory allocation. It is most commonly used in systems requiring high performance, but its ability to share data amongst servers is limited (Kleinrock et al., 2004).

Fig 2A: DAS Network, (Kleinrock et al. 2004)

2.6.2. Storage Area Networks (SAN)

SANs were developed to overcome the limited connectivity of DAS systems. These networks interface with a large number of client systems. A SAN may be defined as a group of interconnected systems and servers all configured on a common data transmission network channel; the term used to refer to this data transfer medium is the storage fabric. The goal of configuring a SAN is to permit multi-server access to storage systems (Schilit et al., 2010).

SAN was developed to manage the increasing need for data storage. SANs connect various devices together in such a manner that anytime, anywhere access is provided. They incorporate existing technologies such as SCSI to connect large storage devices to client computer systems. The storage protocols map all data to be stored onto blocks on the storage device. For this mapping, metadata is used: descriptive information that enables one to store, retrieve, use and manage data resources (Schilit et al., 2010).

SANs are characterized by high connectivity, interfaces across many network segments, and many network management and troubleshooting mechanisms. They allow centralized storage of data, employ a common infrastructure across multiple systems and devices, guarantee data integrity and enforce security measures.

Fig 2B: SAN Network, (Schilit et al 2010)

2.6.3. Network Attached Storage (NAS)

NAS evolved to simplify and improve the efficiency of the data storage capabilities of SAN systems. NAS incorporates the client/server format. It comprises a hardware unit called a NAS box, or NAS head, that interfaces between the actual storage device and the client systems. This box has no input/output facility of its own but incorporates an embedded operating system, which uses metadata techniques to manage data storage on the NAS. All individual clients interface with the NAS head through an Ethernet connection. The protocol used for transmission of data is TCP/IP, and each device identifies the head through its own unique IP address (Cox, 2011).

A NAS can also interface with a SAN or DAS; thus a NAS may be configured over a SAN or alongside a DAS. The advantages of a NAS are that it allows easier access to data than the other storage architectures, and that it stores any form of data as long as the data is configured in file format, including emails, web data and system backups. NAS systems are reliable and easy to manage, and they offer large storage spaces, authentication and alert mechanisms (Cox, 2011).

Fig 2C: NAS Network, (Cox 2011)

2.7. Transmission Mechanisms

All storage devices interface with client systems through networks to transmit data. The network together with the storage devices forms a networked storage system. These systems incorporate specialized storage networking technologies, the key requirements of which are high performance and reliable transfer of large volumes of data (Brooks et al., 2006). Some of the most important technologies are:

Gigabit Ethernet - This technology enables data to be transmitted at the rate of 1,000 megabits per second. Because of this large capacity it is the preferred networking technology for storage systems. It uses the TCP protocol for controlling the flow of data and for troubleshooting during data transmission (Brooks et al., 2006).

Fibre Channel - Fibre Channel provides point-to-point connectivity between systems. It is purely a storage networking technology, with standard protocol definitions across all its layers. The protocol is implemented in hardware and is not software dependent, which permits faster data processing; the hardware also incorporates troubleshooting and data flow control mechanisms. Fibre Channel provides transmission rates of up to 4 gigabits per second, with 10 gigabit per second technology currently being evolved. However, this technology is more expensive and complex than Ethernet (Brooks et al., 2006).

Ten Gigabit Ethernet - This technology is a development of Gigabit Ethernet, permitting 10 gigabits of data to be transmitted per second. At these speeds, accessing data over the network is faster than accessing it from local hard drives.

2.8. Network Memory Server

We have examined how technology enables data to be stored virtually, thereby increasing the processing capacity of operating systems, and how sophisticated networking technologies provide fast access to this stored data.
However, DAS, SAN and NAS all function well only within a local area network or a limited geographical spread. The true measure of their utility is their ability to provide efficient access to data storage while combining mobility with connectivity. This should be achieved without compromising performance when movement takes place from location to location: users should always be connected to the systems where their data is stored, and the transfer from one networked area to another must happen seamlessly.

The development of the Context Transfer Protocol (CXTP) has dramatically improved the ability to transfer data from fixed storage locations to a mobile device (Brown, 2011). It overcomes the primary challenge that devices face when moved across networked areas serviced by different access routers (ARs). When a device makes such a crossover, its OS generates a handover request. These requests are necessary to avoid disruption of services and to preserve session continuity and seamless connectivity. The challenge for the system is to carry the context of the device over from its "from" state to its new "to" state. This avoids repeating a service session each time the user migrates from one area to another; for example, it makes it unnecessary to key in the username and password that authenticate the user each time a mobile device is migrated across regions.

CXTP, in conjunction with the Mobile IPv6 protocol, transfers context from the access router of the old region (oAR) to the access router of the new region being entered (nAR). While CXTP handles the transfer of the context, Mobile IPv6 lays down the rules by which data is transferred within a networked area. This is done in order to preserve the integrity of the data (Brown, 2011).

CXTP is configured between a source node and a target node. It uses the following messages to ensure seamless handover of context:

Context Transfer Activate Request Message (CTAR)
Context Transfer Request Message (CTR)
Context Transfer Data Message (CTD)
Context Transfer Activate Acknowledge Message (CTAA)
Context Transfer Data Reply Message (CTDR)

The transfer of context can be initiated either by the mobile node being migrated or by the access router in the new region. The former case is called the predictive, or mobile-initiated, mode and the latter the reactive, or network-initiated, mode.

In the predictive mode, the old access router (oAR) is sent a CTAR message from the mobile node being migrated. This message contains the IP address of the new access router (nAR) as well as a token of authorization. The oAR then transmits to the nAR a CTD containing the details of the context to be transferred. This message also includes data for the nAR to generate another authorization token that verifies the mobile node's token. The mobile node sends a CTAA to the nAR in order to verify the integrity of the context transfer; this message contains the authorization token of the mobile node, which is verified by the nAR (Brown, 2011).

In the reactive mode, the mobile node sends a CTAR to the nAR. The CTAR contains an authorization token calculated from confidential data shared between the mobile node and the oAR. Once the nAR receives this message, it in turn generates a CT-Request message including the authorization token and a description of the context to be transferred.
This message is in turn received by the oAR, which cross-checks the authorization token, verifies it and sends a CTD message that includes the context to be transferred.

Mobile Node              nAR                  oAR
     |------ CTAR ------->|                    |
     |                    |---- CT-Request --->|
     |                    |<------- CTD -------|
     |------ CTAA ------->|                    |

Fig 2D: CXTP Mechanism, (Brown 2011)

With the development of CXTP, it has become increasingly possible to configure various networked storage systems with the operating systems of mobile devices. In this case the storage devices act as servers, providing the client devices with increased memory, applications and computational capacity. Since they operate across networks, they are called network memory servers.

2.9. Advantages of Network Memory Servers

The first main advantage that network memory servers bring to users of portable electronic devices is security. Most users fear the loss of confidential data if a device is stolen or lost, and hence limit the data they store on such devices. Such data can instead be safely stored on network memory servers and accessed whenever required (Balakrishnan et al., 2010).

To store data, modern laptops use hard drives. However, these are costly and increase both the weight and the power consumption of the laptop. With data storage happening on the network server, the need for hard drive storage becomes redundant.

Developments in network transfer technologies have also fuelled the demand for network memory servers. While current networks deliver data at 1 Gb per second, future projections are at the level of 10 Gb per second. While most devices have wireless network interfaces with over-the-air transfer rates of 53 Mb per second, the latest technologies are expected to deliver over 530 Mb per second over the air (Brooks et al., 2006). At this rate, networks will be able to deliver data at speeds higher than those provided by local disks. Home networks will be configured to deliver 100 Mb per second, enabling work-from-home possibilities in future too (Brown, 2010). This will further accelerate the demand for network memory servers.

2.10. Kernel

While the operating system is the backbone of a computer system, its essential core is the kernel. The kernel manages communication between hardware and software; it forms the interface between applications and the data being processed by the hardware (Chawathe, 2006). It allows applications to use system resources, including:

The CPU - The CPU executes, or processes, programs. The kernel schedules the programs to be processed on the CPU one at a time.

Memory management - For a program to be executed, its instructions and data both need to be stored in memory. The kernel allocates memory space to each program and, as programs complete, deallocates that memory.

Input/output device management - The kernel is the interface to input/output devices. It takes in data from the input device, determines which computer resource will best process it, and then sends the processed output to the display unit.
In addition, kernels maintain authentication and security mechanisms and synchronize the intercommunication processes within the system. From the above we conclude that proper functioning of the kernel is essential to good operating system performance, and hence to the whole computer system.

Kernels function in virtual memory, which is segregated into kernel space and user space (Fox and Gribble, 2009). Kernel space is used only for performing kernel-related tasks, while user space is used to run applications. A key difference between kernel and user space is that the latter can be swapped out when required.

Traditionally, UNIX operating systems incorporate monolithic kernels. A monolithic kernel is a single program that controls the core activities of the operating system as well as the peripheral devices. It runs exclusively in kernel space and contains all the code needed to perform any and every kernel-related function. While it allows ease of implementation, its main disadvantage is that a single bug can cause it to malfunction, crashing the operating system and the computer system itself. Moreover, the larger the kernel, the more difficult it is to maintain (Harter and Hopper, 2009).

Because of these disadvantages, operating systems are nowadays configured on microkernels. Also known as minimal kernels, these are configured so that only essential operating system functions (scheduling of processes, management of memory and of input/output devices) are processed in kernel space. Other services are handled in user space. The advantages of a microkernel over a monolithic kernel are that it can be troubleshooted in parts and that it is easier to maintain (Katz and Brewer, 2010).

Fig 2E: Kernel Configuration, (Katz and Brewer 2010)

From the above, it can be concluded that operating system reliability depends to a large part on the stable performance of the kernel.
The above summarizes the concepts of operating systems and kernels, which are key to our understanding of the Minix operating system.

2.11. The Minix 3 Operating System

A system can be considered dependable if 99% of users never experience any kind of failure during its use (Zhou et al., 2010). However, despite developments in technology, computer systems, laptops and other handheld devices are still highly unreliable. They are constantly subject to crashes, which irks the vast majority of users; every time a crash occurs, the system has to be rebooted, adding to the inconvenience caused. The benchmark for these devices should be the television set, which once bought lasts for many years without breaking down (Madden et al., 2009). The key to system reliability lies in the security and functionality of the operating system.

An operating system provides the foundation for all computer activities. Hence it should possess high reliability and function perfectly at all times. However, most operating systems today, including Windows and Linux, do not function faultlessly. This is because they incorporate, and rely on, large monolithic kernels, which results in too many privileges being granted (Lynch and Lo, 2010): too many modules are allowed to run in the kernel. A programming error, called a bug, in any of these modules can affect the entire kernel and cause it to crash (Cidon and Sidi, 2009). Bugs occur in software code; statistics reveal that there are between one and nineteen bugs per thousand lines of code (Lynch and Lo, 2010). Thus the more code there is, the more bugs there are. Lack of adequate security causes problems when third-party programs are run on the operating system, since these offer a further source of bugs that can affect the kernel. Moreover, bugs can be wilfully introduced into the system in the form of viruses and worms, which further damage the system (Rhee et al., 2010).
Monolithic kernels are also very large, difficult to maintain and prone to structural defects, all of which have the potential to compromise computer systems.

The solution is to use microkernel technologies rather than monolithic kernels. With a microkernel, most of the code (the source of bugs) can be moved out of kernel space, where a malfunction can crash the system, into user space, where it cannot. The code that does run in kernel space should be small, thereby reducing the number of bugs (Cidon and Sidi 2009). Each of the other kernel drivers should run as a separate, protected process in user space.

Minix 3 technology offers these facilities and hence better reliability to the systems on which it runs. Minix was first developed in 1987 as an alternative operating system to UNIX (Rhee et al., 2010) and was the inspiration for the Linux operating system. It then evolved into Minix 2, which was mostly used for teaching operating system courses at colleges, and in 2005 it was reconfigured into Minix 3. The main motive behind the development of Minix 3 was to provide a reliable operating system despite the presence of bugs in the code (Zhou et al., 2010).

Minix 3 offers a high degree of reliability through a process of fault isolation. The Minix 3 kernel contains only about 5000 lines of code (Rhee et al., 2010), handling the key processes of system interrupts, scheduling of processes and communication between processes. The rest of the software consists of separate, protected user mode processes, none of which is allowed to function as a super user. Thus a bug in a particular process might result in the faulty performance of that process, but the system as a whole will not require rebooting. The fault is isolated and its harmful effects restricted.
One of the processes on Minix is called the reincarnation server (Rhee et al., 2010). This process monitors the functioning of all the other processes. Whenever any of them starts to malfunction, the reincarnation server replaces it instantly with a fresh version, which in most cases means restarting the malfunctioning process. The main advantage of this is that the system repairs itself without needing to be rebooted. This self-healing occurs without the knowledge of the user and does not result in any loss of data.

Minix is organized as a series of layers. The lowest layer is the microkernel, containing about 4000 lines of code controlling essential functions. Above that are the drivers that run devices such as input / output, disk and display systems; each driver runs as a separate process in user space. Then come the server processes, including the reincarnation server, network server, file servers etc. Finally come the user processes (Madden et al., 2009).

Fig 2F: Minix 3 Architecture, (Madden et al., 2009)

Another advantage of Minix 3 is that it supports all software that runs on the UNIX platform. Moreover, it is open source software, which means it can be downloaded from its own website and installed on various systems.

Because of these advantages it is run in those applications where a high degree of reliability is required, and because of its small size it is ideal for embedded systems used in mobile phones and laptops. Its popularity can be gauged from the fact that its website has had about three hundred thousand visitors, with the software being downloaded at least 25000 times (Madden et al., 2009).

2.12. Summary of the Literature Review

Mobile portable devices incorporating several user centric applications owe their proliferation to the development of the internet and to networking technologies that interface their systems to the internet. However, their usage is still limited by the capacities of their memories. Modern technology has facilitated the development of virtual memory systems where devices can interface with computer systems that hold larger amounts of memory and applications. These computers, also called servers, significantly increase the computational efficiencies of individual systems. Their good performance is however restricted to the coverage of the local wireless network. When a device migrates to another geographic region covered by another wireless network, the challenge for the network is to maintain seamless data transfer as well as the previous context of the device. If this is not done there will be significant losses in terms of data and connectivity, as well as increased start up times.
The Context Transfer Protocol provides a method of ensuring that devices can be migrated from one networked area to another without any of the bottlenecks mentioned earlier. With the incorporation of this protocol, network storage devices functioning as servers can be seamlessly integrated with mobile devices, forming network memory servers.

Apart from memory management, another key requirement of operating systems is reliability. However, due to their incorporation of monolithic kernels, operating systems have a tendency to crash often because of software bugs that interfere with the functioning of the kernel. The development of microkernels can significantly improve their reliability. In particular the Minix 3 operating system, with its small kernel code, fault isolation properties and reincarnation server process, holds promise for providing a reliable, self troubleshooting platform for portable devices. An interface between a reliable operating system like Minix running on mobile devices and the network memory service would dramatically increase the processing and computational capabilities of these devices. However, Minix is still evolving and the possibilities of an interface with the NMS have yet to be ascertained. The scope of this dissertation is limited to testing basic system calls, such as ID identification and read / write command functions, of Minix on the NMS. If this basic interface is possible, it stands to reason that there is scope for much more development in this area in future.

Chapter 3 - Methodology

3.1. Introduction

This chapter critically evaluates various research methodologies with the intention of choosing the most suitable approach to achieve our research objectives. The strengths and weaknesses of the chosen method are then discussed. Katz and Brewer (2010) proposed the following different research methodologies which can be chosen for a particular project.

Figure 3A: Research Wheel, (Katz and Brewer, 2010)

3.2. Research approaches

All research methodologies can be classified as deductive or inductive (Perkins, 2000). Deductive methodologies apply a general postulate or hypothesis to specific individual cases, while inductive methodologies start with individual cases and arrive at a hypothesis or postulate. Deductive methodologies move from the general to the specific; inductive methodologies move from the specific to the general (Perkins, 2000).

The following table illustrates some key differences between the inductive and deductive approaches.

Deductive:
1) Starts from a scientific postulate or hypothesis
2) Moves from theory to principle, i.e. from the general to the specific
3) The collected data can be quantitative
4) Concepts need to be properly defined to ensure clarity
5) The approach is highly structured

Inductive:
1) Experimental or empirical approach
2) Moves from the specific to the general
3) Generates and uses qualitative data
4) The approach is minimally structured

Table 3B: Difference between Inductive and Deductive approach, (Perkins, 2000)

As per the above, the inductive approach starts with experiments and observation under scientific conditions, resulting in the collection of primary data. This data is then subjected to empirical analysis and a general theory or hypothesis is arrived at.

A deductive approach first starts with the postulation of a general, all encompassing hypothesis which may or may not be true.
The deductive approach in a particular domain consists of theoretical considerations and the subjection of that theory to empirical scrutiny (Cidon and Sidi, 2009). The hypothesis is tested by applying it to specific cases and observing the outcomes through research (Cidon and Sidi, 2009). Because it is number based and tests hypotheses through experiments, this method is also known as the quantitative method (Perkins, 2000); the results of the experiments either validate the hypothesis or prove it false. It involves administering, processing and analyzing data to write up findings and conclusions, and is number based rather than theoretical. The inductive method, being descriptive in nature, is also known as the qualitative method: it involves the analysis of collated data leading to the formulation of theory.

Figure 3C. Deductive and Inductive Research approaches, (Cidon and Sidi, 2009).

Insofar as our research methodology administers both experiments and questionnaires, it is both quantitative and qualitative in nature. As per Perkins (2000), the quantitative research method suffers from the disadvantage that the measurement process results in an artificial and spurious sense of precision and accuracy. He further says that the analysis of relationships between variables produces a static view of social life that is independent of people's real lives. In other words, results of quantitative methods performed under controlled, experimental conditions may bear no relation to the realities of the modern world. Taking cognizance of this view, the author has carried out risk management for the research conducted in order to correctly and realistically analyze the variables and interpret the results obtained from the experiments.

3.3. Research Strategies

The challenge was to identify a research strategy that would be most suitable for investigating in detail the results of the experiments conducted to test the hypothesis.

The deductive approach used in this dissertation first entailed a detailed study of the literature available on the subject of network technologies. The author then presented his hypothesis, or research question: the possibility of an interface between Minix 3 and a Network Memory Server. This hypothesis was then tested through experiments and the collection and analysis of the data obtained from them.

The hypothesis was arrived at through extensive reading of as many literature sources as possible on the subjects of networking technologies and operating systems. This provided the theoretical foundation on which to base the hypothesis, which was then tested through experiments.

3.4. Experiments

The experimental testing of the hypothesis was conducted under simulated conditions in the computer lab of the university. It involved the following steps:

Determining the specifications of the hardware and software components.
Downloading the Minix 3 OS onto the CPU of the mobile device.
Testing basic system calls, which included:
o Get Password
o Get Client ID
o Get Block Memory
o Read Block Memory
o Write Block Memory

3.4.1. Specifications of Hardware and Software Components

Hardware Components:
a) A Pentium CPU
b) 16 MB RAM
c) Hard disk: 50 MB minimum, or 600 MB minimum for the full source

Software Components:
a) Minix 3 operating system, downloaded from the net and configured on the hard disk
b) Linux operating system, for testing and comparison

3.4.2. Downloading Minix 3

The Minix operating system is open source software. It can be freely downloaded from http://www.minix3.org, which also provides installation instructions along with the source code download. Accordingly, the author accessed the Minix OS from the aforesaid site.
To compile the OS on the mobile CPU, the following procedure was used.

Run the make image command. This converts the source code files in the src/kernel directory and the other subdirectories of src/servers and src/drivers into object files. The next step is the linking of all the kernel object files into a single program which can then be executed. The kernel object files include pm (process manager) and fs (file system). Apart from the kernel programs, additional programs in charge of other processes are installed in their respective directories. These are loaded not in kernel space but in user space; they include tty (console and keyboard driver) and printer (printer driver).

Some of the important Minix 3 system components loaded into the CPU are given below.

Component   Description
kernel      Kernel tasks
pm          Manages processes
fs          Manages files on the system
rs          (Re)starts servers and drivers
memory      RAM disk driver
log         Buffers log output
tty         Console and keyboard driver
driver      Disk (at, bios, or floppy) driver
init        Parent of all user processes
floppy      Floppy driver (if booted from hard disk)
is          Information server (for debug dumps)
cmos        Reads CMOS clock to set time
random      Random number generator
printer     Printer driver

Fig 3D: Minix 3 Commands

The Minix 3 system has to be bootable. For this, the installboot program was run, which loads all the files independently and links them to each other. The modular nature of Minix results in the loading of independent programs whose communication is restricted to the passing of messages; they also execute processes independently. This makes modification of a particular file very easy. The Ethernet driver, which is used to interface with the Network Memory Server, was activated after Minix 3 was loaded.

The advantage of using the Minix OS was that the entire system required only 700 kilobytes of memory, including the programs loaded in kernel space and user space.

3.4.3. System Calls

Data management in Minix 3 is done through device files. These files control access to disks and terminals, and each file is housed in an independent module. System calls are used to read data, write data or allocate memory through these device files.

System call configuration depends on the method of storing data in a Network Memory Server. Data is typically stored in blocks. A block may be defined as a series of bytes of a certain length; this length is called the block size. Data configured in this manner is said to be blocked. This form of storage makes data handling much easier, since data can be stored and read one block at a time.
The five system calls are discussed below.

Get Password

This command is used by the NMS to verify and authenticate the client system. Each client has a unique IP address and is provided with a unique password. In order to use system resources, users are prompted to enter their passwords. The entered password is then checked against the one stored in the NMS's memory. If there is a match, the user is authenticated to use NMS resources; otherwise an error message is generated.

The procedure used in Minix 3 to generate a password is given below:

Log into Minix 3 as root.
Use the password command (passwd user) to set a password. A suitable password may then be entered.

Creation of a New User

Create a group for normal users: vi /etc/group
Add the line (9 is the next available group number): users:*:9:

Fig 3E: Screenshot of the Get Password Commands

A home directory is then created which contains all the users' home directories:
mkdir /usr/home
Add the user:
adduser usernamegoeshere users /usr/home/usernamegoeshere
Assign the user a good password:
passwd usernamegoeshere
Log out and back in as the user:
exit
noname login: usernamegoeshere
password: userpasswdgoeshere

A password is thus generated and stored in the memory of the NMS for future authentication of the user.

Get Client ID

Each client on the network is identified by a unique number or address called the IP (Internet Protocol) number. Addressing is performed using 32 bit numbers, as the following example illustrates. Consider the 32 bit binary number:

11000000101010000000000111001000

To identify the particular computer system this number refers to, it is divided into four parts of 8 bits each:

11000000.10101000.00000001.11001000

To render the address more reader friendly, it is converted from binary to decimal form. Thus the above number reads:

192.168.1.200

This number can be interpreted as follows:
192.168.1: the network and sub network containing the host computer
200: the host number of the client computer, which identifies it to the Network Memory Server

Get Block Memory

This command is used to allocate blocks of memory on the NMS. It uses memory manager algorithms for this purpose. The algorithm first scans the NMS OS to see if there are any unallocated memory blocks.
When it finds a contiguous block of memory which has not yet been allocated and which is larger than the requested size, it splits this block to the requested size and allocates it. The remaining unused memory is merged with neighboring unallocated blocks to form another contiguous block.

The driver used to access block memory is the character device driver, identified by the file type letter c. Each NMS server has a device number, which is divided into a major and a minor device number by the mknod command. The major device number is used by fs to recognize the particular device driver that allocates the block memory. The minor device number is the network memory number, which points to the data block stored by the device on the NMS.

Read Block Memory

The read_block() function is used to access data stored in block form on the Network Memory Server. When this command is sent by the client, the NMS first detects the byte order of the request and sequences it accordingly. Once a block of data is read the function returns 1; if no further data remains to be read it returns 0, indicating that the system call is over.

The arguments used to read data blocks are given below:

fp: Input. A pointer to the channel.* file being read.

here: Input/Output. A pointer to an array of shorts, which is where the data will be found when read_block() returns. If allocate = 0, this pointer is input; if allocate is non-zero, this pointer is output.

n: Output. A pointer to an integer holding the number of data items read from the block and written to *here. These data items are typically short integers, so the number of bytes output is twice *n.

tstart: Output. The time stamp (elapsed time since the beginning of the run) at the start of the data block, taken from the binary header.

srate: Output. The sample rate, in Hz, taken from the binary header.

allocate: Input. If allocate is zero, read_block() places the data it has read in a user defined array. If allocate is set, it uses malloc() to allocate a block of memory and sets *here to point to that block. Further calls to read_block() then use realloc() if necessary to re-size the block of memory to accommodate additional data points. Note that in either case read_block() puts into the array only the data from the next block; it over-writes any existing data in memory.

nalloc: Input/Output. If allocate is zero, this tells read_block() the size (in shorts) of the array *here.
An error message will be generated by read_block() if this array is too small to accommodate the data. If allocate is nonzero, this integer is set (and reset, if needed) to the number of array entries allocated by malloc()/realloc(). In this case, be sure that *nalloc is zero before the first call to read_block(), or the function will think that it has already allocated memory.

seek: Input. If seek is set to zero, the function reads data. If seek is nonzero, read_block() does not copy any data into *here; instead it simply skips over the actual data.

bh: Output. A pointer to the binary header structure defined above.

mh: Output. A pointer to the main header structure defined above.

Write Block Memory

The write_block() function is a low level function designed to write data into a block of memory that is either supplied by the caller (via the allocate argument) or allocated by the function itself. In the latter case, write_block() allocates memory using the malloc()/realloc() commands, and nalloc is used to track the allocation and reallocate it if needed. When the memory block is no longer needed, the user can free it with the free() command.

Chapter 4 - Analysis & Discussion

While research indicates that the Minix 3 OS, being open source, has been downloaded at least 250,000 times, its widespread adoption will still take more time. This is because Minix 3 is not yet competitive with more established operating systems like UNIX and Linux: it needs more elaborate programming to become more complete in terms of application offerings such as the GNOME desktop and browsers like Firefox.

Because of this, it was thought fit to restrict the testing of the Minix 3 OS, loaded onto a mobile device CPU, to the five basic system calls: Get Password, Get Client ID, Get Block Memory, Write Block Memory and Read Block Memory.
While Get Password and Get Client ID were tested from the NMS to the Minix 3 OS, the others were tested from the Minix 3 OS to the NMS. The purpose of this experiment was to gauge the possibility of an interface between Minix 3 and an NMS. The operating system run on the NMS was Linux.

The mobile devices on which Minix 3 and Linux were loaded are known, in networking parlance, as Network Memory Clients or NMCs. They are served by the Network Memory Server, or NMS. The two measures of the success of the experiment were:
The possibility of compiling the Minix 3 OS commands on the NMS
The response time of the system

In addition, another test was conducted between a Linux OS loaded on another mobile device and the NMS. The same parameters as given above were measured and a comparison made with Minix 3.

4.1. Result of the test between Minix 3 and the NMS

The code used to interface between the Minix 3 client and the NMS is given below.

Fig 4A: Screenshot of interface code between Minix 3 & NMS

The code run to interface between the NMS and the NMC is given below.

Fig 4B: Screenshot of interface code between NMS and NMC

Linux to Linux

Fig 4C: Linux to Linux interface

Linux to Minix 3

Fig 4D: Linux to Minix 3 Interface

Results Table

The results of the experiment are listed in the table above. The codes were successfully run in the Minix OS and in the Linux OS of the NMS. The time taken for the processing is also given above.

A similar experiment was run from a mobile device configured with a Linux OS. The same commands were tested from the mobile Linux OS to the Linux OS of the NMS. The table below summarizes the results of these tests.

A comparison between the two response times is summarized below in the form of a graph.

Fig 4E: Graphs showing performance comparison between Minix and Linux.

It can be clearly seen that a significant breakthrough has indeed been achieved: it is possible to interface a Minix 3 OS running on a mobile device with the NMS. However, there is a significant difference between the response times of the Minix 3 OS and the Linux OS, the former being considerably slower than the latter. This can be attributed to the fact that Minix is still a relatively new system with several programming gaps that need to be filled. In fact, formal research conducted on Minix 3 reveals that the time difference for system calls between Linux (represented by the top bar in the graph below) and Minix (the lower bar) is nearly tenfold.

Fig 4F: External Performance Comparison Graph

This is borne out by the results of the author's experiment. Except in the case of the Get Password command, there is an approximately tenfold difference in system call performance between the Minix 3 and Linux operating systems.

The longer execution times of a Minix 3 system can be attributed to its construction. The microkernel of the Minix 3 OS has only thirty system calls. All other functions run in independent modules in user space, so a malfunction in any one module will not destabilize the entire system. For increased security these modules do not have direct access to the core kernel; each command has to pass through individual layers before it can reach the kernel. This back and forth movement is what causes the increased processing delay.

However, what Minix lacks in speed it promises to deliver in reliability. Linux systems, while supporting large functionalities, incorporating a vast number of applications and running on several hardware systems, are also subject to frequent crashes. This is because the Linux kernel consists of millions of lines of code. With software statistics revealing a programming error (bug) density in the order of 0.5 to 0.75 per thousand lines of code, the number of crashes that can potentially occur increases dramatically. These crashes are extremely inconvenient to users because they close all running applications, destroy all unsaved data and involve rebooting the entire system, which is time consuming. Minix, on the other hand, has at most about 5000 lines of code in kernel space, severely limiting the number of bugs that can occur. This smaller kernel promises to be far more reliable than that of any other operating system.

Apart from reliability, Minix also incorporates the reincarnation server, which no other operating system currently has. This server self-heals any potential problems or bugs that can interfere with the smooth functioning of the system.

Regarding the tradeoff between higher speed and reliability, the developer of Minix 3, Tanenbaum, himself said that he "would take a system anytime that was half as fast, if only it were error-free".

4.2. Thus the key differences between Linux and Minix 3 can be summarized as follows:

1) Kernel Size
Linux operating systems have monolithic kernels, which typically incorporate millions of lines of code. The Minix 3 kernel is very small, having at most about 5000 lines of code.

2) Reliability
Since the Linux kernel has more lines of code, the possibility of malfunction and consequent crashing is high. Minix 3 has relatively few lines of code in its kernel and hence is more stable.
3) Drivers for Devices
In Linux, all the device drivers reside in the monolithic kernel. Whenever a new device is incorporated into the system, unknown and possibly harmful code is introduced into the kernel; if this code malfunctions, the entire OS can crash. In Minix 3, device drivers are installed in user space in independent modules, and a crash in any one module will not incapacitate the whole system.

4) Access
In Linux, drivers can easily access memory and thus possibly corrupt running programs. In Minix 3, getting data from memory involves building descriptors, access information and address IDs; only when these are verified will data be released from memory. Minix 3 also restricts access to core kernel functions.

5) Reincarnation Server
Minix 3 is the only operating system with self troubleshooting mechanisms. If there is a problem in any of the drivers, the reincarnation server will correct the system transparently without inconveniencing the users. A Linux OS, on the other hand, has to be shut down and restarted for the affected driver to be corrected.

6) Functionality
Linux has far more functionality, more applications and runs on more devices than Minix. As of now, Minix 3 can run only on an X86 personal computer, a CD or a USB drive. It will need more development to be used as widely as Linux.

From the above we can conclude that an interface is indeed possible between a Minix 3 operating system and the Network Memory Server. The advantages of both technologies can therefore accrue to any system incorporating both of them. However, the response time of system calls in Minix is considerably slower than that of a Linux system; this is the price of its increased reliability. The software code of Minix 3 needs to be more fully developed for wider use and incorporation into bigger systems.

Chapter 5 - Conclusion, Limitations & Future Scope

5.1. Conclusion

While the Minix 3 system promises increased reliability amongst currently used operating systems, its compatibility is limited to X86 PCs, CDs and USBs. The purpose of this dissertation was to determine whether an interface was possible between the Minix 3 OS loaded onto a mobile device and a network memory server.
If such an interface is possible, the reliability of a Minix 3 OS can be combined with the large functionalities of a Network Memory Server, resulting in increased utility to end users of mobile devices.

Currently, mobile devices possess limited memories, suffer from frequent breakdowns and cannot be moved across geographies without compromising service. All these issues can be addressed by Minix 3 and the Network Memory Server together. Minix is an extremely reliable operating system; the NMS provides devices with larger memories, more applications and increased functionality; and improved networking technologies allow for faster transfer of data between the virtual memory and the mobile device. In the experiment conducted above, a few basic system calls were tested between a Minix 3 OS loaded onto a mobile device and a Linux OS Network Memory Server. The author was able to perform these system calls, thereby proving the possibility of an interface. However, the time taken to process these calls is significantly higher than that of a Linux OS.

Thus it can be concluded that, with existing technologies, an interface between the two systems is possible; however, users need to decide whether they want increased reliability at the cost of speed. Research shows that most users prefer reliable operating systems even if they perform at speeds slower than Linux operating systems. Hence the challenge is to improve on the existing technologies and ensure that the performance speed of Minix 3 is improved. This would be a significant addition to its reliability offering.

The Minix 3 technology also demonstrates that existing operating systems such as Linux and Unix suffer from frequent breakdowns primarily because they incorporate monolithic kernels. Shifting device drivers to user space and retaining only key functions with limited code in kernel space is the only solution to increased system stability.
All operating systems should incorporate the reincarnation server technology of Minix 3. This would make it possible to self-heal a system without inconveniencing users.

Minix 3 itself suffers from several limitations. It is currently under research at the VU University in Amsterdam; a measure of its potential is that the research is supported by a EUR 2.5 million grant from the EU. It is the earnest wish of the author that this research successfully extends the functionalities of Minix 3, which will significantly increase its user base.

5.2. Limitations & Future Scope

1) This dissertation examined only a few of the system calls of Minix 3 on the NMS. The possibility of an interface with a larger number of system calls has not been demonstrated.
2) The possibilities of a faster interface with Network Memory Servers have to be explored.
3) Minix 3 also has to keep pace with developments in Network Memory Server technologies.
4) Minix 3 is not yet complete software because:
It does not incorporate popular applications such as the Firefox browser or a KDE desktop, so its functionality is limited.
Its library and framework support is limited to the POSIX interface. If it is to be developed further, larger library formats have to be incorporated.
It provides support for essential drivers, but it still cannot effectively support drivers for a larger range of peripherals and input / output devices.
While it runs on X86 personal computers, it cannot be configured on the MIPS and ARM systems that run on embedded systems.
It cannot be used to support multiprocessor operations.
Knowledge of its proper usage is confined to the Minix 3 community. Better documentation and user manuals would render it accessible to more people.
5) Software development must include improving the response times of Minix 3 system calls, which are currently ten times slower than those of Linux.
6) The reliability of Minix 3 is due to its device drivers being configured in user space. If the same technology can be applied to Linux and UNIX, it will increase the stability of these operating systems as well. This needs to be developed further.
7) Along with fault isolation techniques, the security features of Minix 3 can be extended to protecting against protocol violations, driver intrusion detection and improving the overall resilience of the system.
8) The overall performance parameters of the Minix 3 OS has to be within
