
DEVELOPING WEB-BASED CLIENT/SERVER MANAGEMENT INFORMATION SYSTEM

THESIS

In Partial Fulfilment of the Requirements for the Degree of Master of Science in Electrical Engineering

Presented by Ismail Khalil Ibrahim 9472/I-1/726/97

Submitted to
POST GRADUATE SCHOOL
GADJAH MADA UNIVERSITY
YOGYAKARTA
1998

THESIS

DEVELOPING WEB-BASED CLIENT/SERVER MANAGEMENT INFORMATION SYSTEM

Submitted By Ismail Khalil Ibrahim 9472/I-1/726/97

Faculty of Engineering Department of Electrical Engineering

Approved by

Supervisors

Ir. Dr. F. Soesianto, B.Sc. E., Ph.D. Drs. Bambang Prastomo, M.Sc.

Date: Date:

/ December/ 1998 / December/ 1998


PERNYATAAN

I hereby declare that this thesis contains no work previously submitted for a degree at any institution of higher education and that, to the best of my knowledge, it contains no material previously written or published by another person, except where written reference is made in the text and the source is listed in the bibliography.

Yogyakarta, December 1998

Ismail Khalil Ibrahim
(signature and full name)


PREFACE

"Whatever you do will be insignificant, but it is very important that you do it."
Mahatma Gandhi

Web-based client/server computing has become the model for a new information architecture that will carry enterprise-wide computing into the 21st century. It promises many things to many people: to end users, easier access to corporate and external data; to managers, dramatically lower processing costs; to programmers, reduced maintenance; to developers, an infrastructure that enables business processes to be reengineered for strategic benefit.

However, the subject of web-based client/server computing still seems shrouded in mystery for most people. Is web-based client/server computing really new? What has propelled it to the information systems forefront? Should the decision to move to web-based client/server computing be made on the basis of cost, or are there productivity gains as well? Is there a hidden downside? Is it a new technology or a new methodology?

The purpose of this research is to answer these questions by introducing the concept of an integrated web-based client/server architecture and by discussing a number of complex issues surrounding the implementation and management of this architecture. The approach followed is proof-of-concept implementation, taking the Belawan container terminal unit as a case study for developing a web-based client/server terminal operations system.


ACKNOWLEDGMENT
In the name of Allah, Most Merciful, Most Compassionate. Praise and thanks go to Almighty Allah SWT, for it is only with His guidance and mercy that this research has been accomplished.

In preparing this thesis, I am highly indebted to the many people who helped me in one way or another. First, and above all, I would like to express my sincerest and deepest appreciation to my supervisor, Ir. Dr. F. Soesianto, the kind of person one rarely meets in life. His daily question "What is new?" and my eternal answer "No news is good news!" made the completion of this thesis in less than six months no miracle, and made me very proud of the certificate I earned under his supervision. Without his insights and guidance this thesis would not have been possible.

My thanks go also to my second supervisor, Drs. Bambang Prastowo, M.Sc., for his advice and guidance, along with his research experience, which helped greatly in carrying out this research.

I also wish to thank the Faculty of Engineering, and especially the Department of Electrical Engineering: all my lecturers and colleagues in the department, and all the staff members, for their help and support during my studies here, which truly lightened my load. Special thanks to Dra. Tita Lestari, M.Sc., M.Pd. for her continuing support and encouragement as I worked on this thesis; without her help and encouragement, it would not have been completed this quickly.

Lastly, I am greatly indebted to my family in Iraq: my great mother, my beloved brothers and sisters, the great Haifa, the most beloved Farah and Mohammed. They have supported me with all the love in the world and patiently endured all the consequences of my being away. Without their do'as (prayers), love, and spiritual support, no success could be realized.

Peace, Mercy and Blessings of Allah, the Exalted, be upon them all.

Ismail Khalil Ibrahim

TABLE OF CONTENTS
Page
COVER  i
CONFIRMATION  ii
PERNYATAAN  iii
PREFACE  iv
TABLE OF CONTENTS  vi
LIST OF FIGURES  viii
LIST OF APPENDIXES  ix
ABSTRAK  x
ABSTRACT  xi
CHAPTER I  INTRODUCTION  1
  A. Background to the Research  1
  B. Formulation of the Problem  9
  C. Authenticity of the Research  10
  D. Research Objectives  11
  E. Benefits Expected from the Research  11
CHAPTER II  FOUNDATION OF THE THEORY  12
  A. Client/Server Evolution  12
  B. Client/Server System Anatomy  17
  C. Client/Server System Architecture  19
  D. Distributed or Network Computing  31
  E. Web-Based Client/Server  35
  F. Client/Server Database Computing  36
  G. Advantages of Client/Server Computing  45
CHAPTER III  RESEARCH METHODOLOGY  47
  A. Client/Server Systems Methodology  47
  B. System Development Life Cycle  50
CHAPTER IV  RESULTS AND DISCUSSION  68
  A. Results  68
  B. Discussion  72
CHAPTER V  CONCLUSIONS AND RECOMMENDATIONS  74
  A. Conclusions  74
  B. Recommendations  79
CHAPTER VI  SUMMARY  83
  A. Introduction  83
  B. Background  84
  C. Research Methodology  92
  D. Results  94
  E. Conclusions  96
REFERENCES  98
APPENDIXES  100


PENGEMBANGAN SISTEM INFORMASI MANAJEMEN CLIENT/SERVER BERBASIS WEB

ABSTRAK

Kemajuan Internet dan perkembangan bisnis melalui World Wide Web mengubah dunia client/server secara dramatis. Karena kebutuhan pemakaian bersama informasi dan sumber daya antar lokasi yang terpisah meningkat, perusahaan perangkat lunak mulai memberikan fasilitas web enable pada aplikasinya. Ini berarti aplikasi itu dapat dijalankan di Internet melalui Web browser.

Sebuah sistem client/server berbasis web biasanya merupakan variasi dari arsitektur sistem client/server tiga tier. Sistem ini didesain untuk menggunakan sepenuhnya keunggulan dari pemrosesan kooperatif dan komputasi terdistribusi menggunakan Internet atau intranet. Lapis presentasi dalam sistem client/server berbasis web terdiri atas Web browser sebagai aplikasi antarmuka utama. Pada umumnya, sebagian lapis pemrosesan ditempatkan di Web browser untuk menerjemahkan file HTML yang berisi skrip seperti JavaScript atau VBScript, karena aplikasi berbasis web disusun dengan obyek komponen yang disisipkan menggunakan teknologi Java atau ActiveX dalam HTML. Lapis bisnis sistem client/server berbasis web ditempatkan di Web server yang terpisah, dan data pendukung ditempatkan di server basis data yang dalam paradigma ini letaknya terpisah juga.

Sangat penting untuk menentukan apakah solusi client/server berbasis web adalah solusi terbaik untuk suatu organisasi. Jawabannya tergantung pada beberapa hal. Bagaimanapun, sebelumnya faktor penentu utama untuk mengevaluasi kelayakan penggunaan client/server berbasis web harus dipilih. Fokusnya berkaitan dengan kecocokan sistem yang diusulkan dengan infrastruktur sistem informasi yang ada: apakah sistem itu akan membantu atau menghambat aktivitas bisnis?

Jika pertanyaan ini telah dijawab, maka keputusan untuk menggunakan client/server berbasis web dapat diambil dengan mempertimbangkan biaya implementasi, sumber daya manusia, dampak keseluruhan terhadap operasi bisnis, dan manfaat potensial yang diperoleh.

Tujuan penelitian ini adalah menentukan bentuk dan kelayakan solusi client/server berbasis web untuk sebuah organisasi. Unit Usaha Terminal Peti Kemas Belawan digunakan sebagai studi kasus dalam penelitian ini.


DEVELOPING WEB-BASED CLIENT/SERVER MANAGEMENT INFORMATION SYSTEM

ABSTRACT

The advent of the Internet and the growth of commerce on the World Wide Web are changing the client/server landscape dramatically. As the demand for information and resource sharing between remote locations continues to escalate, vendors are beginning to web-enable their desktop application solutions.

A web-based client/server system is typically a variation on the standard three-tier client/server system architecture. It is designed to take full advantage of cooperative processing and distributed computing, using the Internet or an intra-company WAN as the network infrastructure. The presentation logic in a web-based client/server system consists of a Web browser as the primary application interface. Generally, some of the processing logic is built into the Web browser to interpret HTML pages with embedded scripting languages such as JavaScript or VBScript, since web-based applications are constructed by embedding component objects, developed using Java or ActiveX technology, into the HTML. The business logic layer of a web-based client/server system generally resides on a remote Web server, and, as usual, the supporting data resides on a database server, which, under the web-based paradigm, is also remote.

It is important to determine whether a web-based client/server solution is the right solution for an organization. The answer depends on a number of variables; however, a primary determinant for evaluating the feasibility of adopting a client/server solution had to be chosen. The focus will be on the fitness of the current information system: is the current information system infrastructure a help or a hindrance to the business activity?

Once this question is answered, the decision to adopt a client/server solution can be evaluated in light of issues such as implementation cost, staffing requirements, overall impact on business operations, and the potential benefits to be realized from making a change.

The purpose of the research is to determine the state and feasibility of adopting a web-based client/server solution for an organization. The Belawan container terminal operations unit is taken as a case study.


CHAPTER I INTRODUCTION A. Background to the Research

We are in the midst of a fundamental change in both technology and its applications. In the early 80s, both applications and data were based on desktop operating systems, such as MS-DOS, and were dedicated to a single user. While freeing users to perform personal computing, isolated PCs made it difficult to share data among users. As the number of different types of PCs and operating systems increased, the need to share data effectively became acute for the effective functioning of organizations. This demand sparked the evolution from stand-alone PCs to groups of PCs linked via LANs. But trends in the modern workspace are pushing even traditional LAN computing to its limits. As IS professionals begin implementing enterprise-wide networks, the limitations of traditional LAN computing become apparent. In enterprise-wide networks, existing minicomputers and mainframes need to communicate with LAN systems in a manner that allows users to access host resources transparently, without requiring users to know the precise location of every resource they need. This requirement has sparked an interest in client/server computing, which combines the cost-effective and almost additive power of desktop computers with multi-user access to shared resources and data. Since evaluating and reducing risk in a multi-faceted environment confronts every IS manager, it is important to understand the underlying causal factors that affect the migration towards client/server computing. These can be stated as follows:

1. Competitive Forces

In a competitive world it is necessary for organizations to take advantage of every opportunity to reduce cost, improve quality, and provide service. Most organizations today recognize the need to be market driven, to be competitive, and to demonstrate added value. A strategy being adopted by many organizations is to flatten the management hierarchy. With the elimination of layers of middle management, the remaining individuals must be empowered to make the strategy successful. The client/server model provides power to the desktop, with information available to support the decision-making process and enable decision-making authority. The following are some key drivers in organizational philosophy, policies, and practices.

Business Process Reengineering. Competitiveness is forcing organizations to find new ways to manage their business, despite fewer personnel, more outsourcing, a market-driven orientation, and rapid product obsolescence. Technology can be the enabler of organizational nimbleness.

Globalization. To survive and prosper in a world where trade barriers are being eliminated, organizations must look for partnerships and processes that are not restrained

by artificial borders. Quality, cost, product differentiation, and service are the new marketing priorities, and information systems must support them.

Operational Systems. Competition demands that information systems organizations justify their costs. Companies are questioning the return on their existing investments. Centralized IS operations in particular are under the microscope.

Market Driven. Product obsolescence has never been so vital a factor. Buyers have more options and are more demanding. Technology must enable organizations to anticipate demand and meet it.

Downsized Organizational Structure. Quality and flexibility require decisions to be made by individuals who are in touch with the customer. Many organizations are eliminating layers of middle management. Technology must provide the necessary information and support to this new structure.

Enterprise Network Management. If a business is run from its distributed locations, the technology supporting these units must be as reliable as the existing central systems. Technology for remote management of the distributed technology is essential in order to use scarce expertise appropriately and to reduce costs.

Information and Technology Viewed as a Corporate Asset.

Each individual must have access to all information he or she has a "need and right" to access, without regard to where it is collected, determined, or located. Technology can be used today to provide this "single-system image" of information at the desk, whatever the technology used to create it.

Cost Competitive. Standardization has introduced many new suppliers and has dramatically reduced costs. Competition is driving innovation. Organizations must use architectures that take advantage of cost-effective offerings as they appear.

Increasing Power and Capacity of Workstations. Desktop workstations now provide the power and capacity that mainframes did only a few years ago. The challenge is to use this power and capacity effectively to create solutions to real business problems.

Growing Importance of Workgroup Computing. Downsizing and empowerment require that the workgroup have access to information and work collectively. Decisions are being made in the workplace, not in the head office.

Expanded Network Access. Standards and new technologies enable workstation users to access information and systems without regard to location. Remote network management enables experts to provide support and central-system-like reliability to distributed systems. However, distributed systems are not transparent. Data access across a network often returns unpredictable result sets;

therefore, performance on existing networks is often inadequate, requiring a retooling of the existing network infrastructure to support the new data access environment.

Open Systems. Standards enable many new vendors to enter the market. With a common platform target, every product has the entire marketplace as a potential customer. With the high rate of introduction of products, it is certain that organizations will have to deal with multiple vendors. Only through a commitment to standards-based technology will the heterogeneous multiple-vendor environment effectively serve the buyer.

Client/Server Computing. Workstation power, workgroup empowerment, preservation of existing investments, remote network management, and market-driven business are the forces creating the need for client/server computing.

2. Technological Innovations

In recent years, there has been a fundamental paradigm shift from centralized (mainframe-centric) computing to distributed (business-centric) computing. With increasing competition and upheaval in the business environment, it is critical that computing resources in an organization be integrated and aligned with the business processes to ensure competitive advantage. The hallmark of client/server is that it brings the product or service closer to the customer and changes the way business interacts with a customer (figure 1.1). As

a result of these changes, the processes that drive companies must be redesigned. System integration presents a challenge, as it must protect the large investment in existing applications and platforms while taking advantage of new technologies seamlessly. Client/server computing allows us to create a coherent architecture out of autonomous workstations and to create software solutions that leverage the lowest-cost technology options.

Client/server computing was born out of the technological revolution. Improvements in chip technologies, high-performance relational database management systems, and improvements in the price/performance of desktop systems have all pushed the industry towards client/server. In 1980, the cost per MIPS of mainframes or minicomputers was 15 times that of workstations. In 1990, it was 100 times. By the year 2000, it could be anywhere from 700 to 2000 times. Implemented wisely, client/server solutions can provide more flexible, productive, and user-friendly systems than in the past.

Figure 1.1. Technological innovation

3. Organizational Innovations

The technological change has been paralleled by transformations in the way organizations do business. Over the last decade, we have witnessed changes ranging from small improvements in remote parts of the organization to large-scale transformations (strategic, technological, structural, and human-resource) that span entire organizations. The challenges that stem from a highly competitive global environment, new enabling technologies, and deregulation are all forcing organizations to seek higher levels of performance to reestablish their dominance, regain their market share, and in some cases ensure their survival. The competitive environment is forcing organizations to redesign critical business processes to achieve five main objectives: provide the service, reduce cost, reduce cycle time, improve quality, and improve customer satisfaction.

It is known that successful organizations can be distinguished by their ability to use information technology to transform their business (structures, processes, and roles) to obtain competitive advantage in the marketplace. This creates the need to juxtapose and integrate rapid technological progress with turbulent changes in organizational needs. Business process redesign and downsizing involve rethinking existing applications to remove non-value-added activities and bring the consumers and producers of information into closer alignment without any middlemen. It is increasingly evident that client/server computing is being used by organizations for downsizing. Downsizing (moving applications from mainframes and minicomputers to workstations, PC networks, and other distributed multi-platform systems) is neither a passing phenomenon nor a total solution to a firm's information systems problems. It represents a major evolution toward the better utilization of information technology to address business needs. In spite of this, there is a noticeable lack of coherent strategy for successfully downsizing distributed workstation- and LAN-based application systems within the corporate environment.

Downsizing is one of the hottest computing topics of the 90s. It creates new demands, problems, and opportunities for the way an organization performs its functions, and it has its plus and minus sides. On the plus side, downsized computing offers tremendous cost savings, system flexibility, more user-friendly interfaces, stronger bonds between end users and IS departments, and enhanced user efficiency and productivity. On the minus side, downsizing can result in a quagmire of hardware and software incompatibilities, forays into incomplete product lines, decreased system security, IS staff alienation, and end-user anarchy. Downsizing computing in large inter- and intra-enterprise transaction networks means having an architecture and a well-defined method to choose and integrate new applications and platforms for business needs while leveraging the lowest-cost technology options. This requires protecting the large investment in existing applications while taking advantage of new technologies seamlessly. Today's downsized client/server and network-based systems offer inexpensive processor time and ease of modification. In addition, new development tools make rapid development of custom systems on top of a well-designed database possible. Unfortunately, downsized systems are still being developed with the same System Development Life Cycle methodologies of the past. The need for an iterative development approach to developing networked and client/server applications is clear. The dos and don'ts of downsizing are not written in any textbook or taught in any classroom: each corporate situation is different and unique unto itself. It is also important to establish guidelines for understanding and measuring the impact of downsizing on the organization.

Innovations in data processing (batch processing in the 50s, time sharing in the 60s, desktop personal computers in the 70s, graphical user interfaces in the 80s, and distributed processing in the 90s) have radically changed the way work is done in a corporation. Online Transaction Processing (OLTP) and Distributed Transaction Processing (DTP) have revolutionized the banking and financial sectors. Distributed processing is a technology that allows workstations in different locations to cooperate and coordinate by means of computer networks. Figure 1.2 clarifies these concepts.
[Figure 1.2 contrasts three stages: the legacy enterprise (manual back office, manual front office, proprietary systems, mainframes), the automated enterprise (back office automation, online transaction processing), and the new enterprise (open systems, client/server, distributed computing, multimedia, object orientation, business process reengineering, paperless office, expert systems).]

Figure 1.2. Organizational innovations


B. Formulation of the Problem

The advent of the Internet and the growth of commerce on the World Wide Web are changing the client/server landscape dramatically. As the demand for information and resource sharing between remote locations continues to escalate, vendors are beginning to web-enable their desktop application solutions. This means that these applications can run right over the Internet via a Web browser. It is very important to determine the state and feasibility of adopting a web-based client/server solution for an organization in light of issues such as implementation cost, staffing requirements, overall impact on business operations, and the potential benefits of making such a change; these questions are the focus of this research.

C. Authenticity of the Research

Client/server architecture is a complex, involved, and often misunderstood subject. The amount of information related to this subject is tremendous and covers a wide range of functions, services, and other aspects of the distributed environment. This research is intended to contribute to the evolution of today's client/server technology by determining and evaluating the state and feasibility of adopting a web-based client/server solution for an organization, based on the theoretical foundation of distributed co-operative processing models. It introduces the concept of a client/server Systems Development Life Cycle methodology, which provides a consistent approach to the analysis and


investigation of all the issues surrounding the implementation and management of client/server systems. A real-world case study was chosen to apply these concepts: the Belawan container terminal operations unit in Northern Sumatra.

D. Objectives of the Research

The purpose of this research is to determine the state and feasibility of using a web-based client/server solution for an organization. The Belawan container terminal operations unit is taken as a case study.

E. Benefits Expected from the Research

It is important to determine whether a client/server solution is the right solution for an organization. The answer to this question depends on a number of variables. However, a primary determinant for evaluating the feasibility of adopting a client/server solution had to be chosen. The focus will be on the fitness of the current information system: is the current information system infrastructure a help or a hindrance to the business activity? Once this question is answered, the decision to adopt a client/server solution can be evaluated in light of issues such as implementation cost, staffing requirements, overall impact on business operations, and the potential benefits to be realized from making such a change. That evaluation is expected to be the main benefit of this research.


CHAPTER II CLIENT/SERVER SYSTEMS


The term client/server was first used in the 1980s in reference to personal computers (PCs) on a network, and the client/server model started gaining acceptance in the late 1980s. The client/server software architecture is a versatile, message-based, and modular infrastructure that is intended to improve usability, flexibility, interoperability, and scalability as compared to centralized, mainframe, time-sharing computing (Umar, 1993). A client is defined as a requester of services, and a server is defined as a provider of services. A single machine can be both a client and a server, depending on the software configuration.
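The requester/provider relationship above can be sketched with plain TCP sockets. This is an illustrative sketch only, not part of the thesis: the function names and the "service" offered (uppercasing text) are invented for the example.

```python
import socket
import threading

def run_server(host="127.0.0.1"):
    """Server role: waits for a request and provides a service (here, uppercasing text)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))            # port 0: let the OS pick a free port
    srv.listen(1)

    def serve_once():
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)       # receive the client's request
            conn.sendall(request.upper())   # provide the service: a response
        srv.close()

    threading.Thread(target=serve_once, daemon=True).start()
    return srv.getsockname()                # (host, port) for the client to contact

def request_service(addr, text):
    """Client role: a requester of services."""
    with socket.create_connection(addr) as cli:
        cli.sendall(text.encode())
        cli.shutdown(socket.SHUT_WR)        # signal the end of the request
        return cli.recv(1024).decode()

addr = run_server()
print(request_service(addr, "hello server"))   # -> HELLO SERVER
```

Note that the same machine runs both roles here, which mirrors the point that a single machine can be both client and server depending on its software.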

A. Evolution

Client/server computing is a relatively new technology that only recently has been adopted as a system architecture for the deployment of applications. C/S computing is the wave of the 90s, and it is anticipated that it will continue to gain popularity. To understand the reasons behind the success of C/S computing, it helps to understand the other common types of computing: mainframe and PC/file sharing.

1. Mainframe computing

Before the late 80s and early 90s, mainframe computing, also called host-based computing (figure 2.1), was about the only computer choice for organizations


that required heavy-duty processing and support for a large number of users. Mainframes have been in existence for over 20 years, and their longevity has led to their reliability. The ability of mainframes to support a large number of concurrent users while maintaining fast database retrieval times contributed to their corporate acceptance.

Mainframe computing refers to all processing carried out on the mainframe computer. The mainframe computer is responsible for running the Database Management System (DBMS), managing the application that is accessing the DBMS, and handling communications between the mainframe computer and dumb terminals. A dumb terminal is about as intelligent as its name implies: it is limited to displaying text and accepting data from the user. The application does not run on the dumb terminal; instead, it runs on the mainframe and is echoed back to the user through the terminal (Orfali, 1992).

[Figure 2.1 shows dumb terminals connected to a mainframe that holds the data.]

Figure 2.1. Mainframe computing


The main drawback of mainframe computing is that it is very expensive to operate: mainframes require specialized operational facilities, demand extensive support, and do not use common computer components. Rather than using common components, mainframes typically use hardware and software proprietary to the mainframe manufacturer. This proprietary approach can lock a customer into a limited selection of components from one vendor.

2. PC/File sharing computing

PC/file sharing computing (figure 2.2) became popular in the corporate environment during the mid to late 80s, when business users began to turn to the PC as an alternative to the mainframe. Users liked the ease with which they could develop their own applications through the use of fourth-generation languages (4GLs) such as dBase III+. These 4GLs provided easy-to-use report writers and user-friendly programming languages. In PC/file sharing computing, the PC runs both the application and the DBMS. Users are typically connected to the file server through a LAN. The PC is responsible for DBMS processing; the file server provides a centralized storage area for accessing shared data.

The original PC networks were based on PC/file sharing architectures, in which the server downloads files from the shared location to the desktop environment. The requested user job is then run (including logic and data) in the desktop environment. PC/file sharing architectures work if shared usage is low, update contention is low, and the volume of data to be transferred is low. In the 1990s, PC LAN (local area network) computing changed because the capacity of file sharing was strained as the number of online


users grew (a file server can satisfy only about 12 simultaneous users) and graphical user interfaces (GUIs) became popular, making mainframe and terminal displays appear out of date. PCs are now being used in client/server architectures.

The drawback of PC-based file sharing computing is that all DBMS processing is done on the local PC. When a query is made to the file server, the file server does not process the query; instead, it returns the data required to process the query. For example, when a user asks to view all customers in a given state, the file server might return all the records in the customer table to the local PC, and the local PC then has to extract the customers who live in that state. Because the DBMS runs on the local PC and not on the server, the file server does not have the intelligence to process queries. This can result in decreased performance and increased network bottlenecks (Watterson, 1995).

[Figure 2.2 shows PCs running both the applications and the DBMS, exchanging requests for data and the data itself with a file server.]

Figure 2.2. PC/file sharing computing.
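The full-file transfer just described can be mimicked in a few lines. This sketch is not from the thesis: the table contents and function names are invented, and an in-memory list stands in for a dBase file held on the file server.

```python
# Hypothetical in-memory "customer table" standing in for a file on the file server.
CUSTOMER_FILE = [
    {"name": "Ali",   "state": "NY"},
    {"name": "Budi",  "state": "CA"},
    {"name": "Citra", "state": "NY"},
]

def file_server_fetch():
    """File server role: no query intelligence, so it returns the whole file."""
    return list(CUSTOMER_FILE)      # every record crosses the network

def pc_side_query(state):
    """The local PC runs the DBMS logic: it filters the full file itself."""
    records = file_server_fetch()   # network cost: all records, regardless of the query
    return [r["name"] for r in records if r["state"] == state]

print(pc_side_query("NY"))  # -> ['Ali', 'Citra'], yet all 3 records were transferred
```

The point of the sketch is the mismatch between the two line counts: the answer is two names, but the transfer is the whole table.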


3. Client/server computing

As a result of the limitations of file sharing architectures, the client/server architecture (figure 2.3) emerged. This approach introduced a database server to replace the file server. Using a relational database management system (RDBMS), user queries can be answered directly. The client/server architecture reduces network traffic by providing a query response rather than a total file transfer, and it improves multi-user updating through a GUI front end to a shared database. In client/server architectures, Remote Procedure Calls (RPCs) or Structured Query Language (SQL) statements are typically used to communicate between the client and the server.

[Figure 2.3 shows client workstations running the application, sending queries to and receiving query results from a server running the DBMS.]

Figure 2.3. Client/server computing.


B. Anatomy

There are a variety of definitions for the term client/server. For the purposes of this research, the systems point of view is taken. In other words, a client/server system is defined in terms of individual pieces that work together as a whole, and is viewed as a system that integrates hardware, software, and networking. This integration of technology is specifically designed to share resources, in support of one or more business functions, in multiple locations simultaneously. An example is the Internet, the world's largest client/server system, comprising thousands of clients and servers transferring information and supporting millions of business functions across a network that spans the globe.

Every client/server system consists of at least one of each of the following: a client, a server, and a network carrying requests and responses.

[Figure 2.4 shows a client and a server linked by a network, with a request flowing from client to server and a response flowing back.]

Figure 2.4. Client/server system

The client component of a client/server system can be either hardware or software. In the hardware context, a client is the personal computer functioning as a workstation. This client workstation is capable of stand-alone information processing, which distinguishes it from its mainframe predecessor, the dumb terminal. In the software context, a client is the software that allows the user to interact with the information


residing on the server. Web browsers are examples of software clients, as are email programs. The server component can also be considered both hardware and software. As hardware, the server is typically a personal computer or workstation with enhanced storage capacity. Often, it resides in the same location as the business activity it is required to support. As software, servers have a variety of incarnations, depending on the operational function. For example, Windows NT Server acts as a secure file server, allowing users to share files and printers over a network. Web servers like Microsoft's Internet Information Server provide access to and delivery of information over the World Wide Web. In a client/server system, the network is the glue that binds the various pieces of hardware together and allows the sharing of data and information. Networks are generally considered hardware, as they provide a physical link between clients, servers, printers, and other shared resources. In addition, in cases where networks are required to link, and facilitate interaction between, varying vendor-specific resources, middleware may play a role in supporting network functionality. Middleware, as the name implies, is the piece of hardware or software placed between two different technologies to allow them to interact with each other. In the context of client/server systems, middleware has a variety of incarnations; however, the two most common are network middleware, which handles message routing between different platforms, and database middleware, which handles the translation of data requests into database commands.
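As an illustration of the database-middleware role described above, the following hypothetical sketch translates a generic, vendor-neutral data request into an SQL command (all table and field names are invented; a real middleware product would also handle connectivity and vendor dialect differences):

```python
# Hypothetical sketch of database middleware: translate a generic request
# description into a concrete, parameterized SQL command string.

def to_sql(request):
    """Translate a request dict into an SQL SELECT statement plus parameters."""
    columns = ", ".join(request.get("fields", ["*"]))
    sql = f"SELECT {columns} FROM {request['table']}"
    filters = request.get("where", {})
    if filters:
        clause = " AND ".join(f"{k} = ?" for k in filters)
        sql += f" WHERE {clause}"
    return sql, tuple(filters.values())

sql, params = to_sql({"table": "vessels", "fields": ["name"],
                      "where": {"berth": 3}})
print(sql)     # SELECT name FROM vessels WHERE berth = ?
print(params)  # (3,)
```

The client-side application deals only in request dictionaries; the middleware layer owns the translation into database commands.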


C. Architecture

1. Purpose

Client/server systems architecture must serve three purposes:

1. Respond to the organization's need for easy information access, flexibility, smooth administration, reliability, security, and proficient application development.

2. Utilize current installations of hardware, software, and networks, which often run mission-critical applications that cannot be compromised.

3. Anticipate future demands on information systems and networks.

C/S computing systems come under the broad category of system architectures known as Open Systems Architecture. In an open systems environment, hardware and software from different vendors are interchangeable and can be combined into an integrated working environment. Two important concepts in open systems are interoperability and interconnectivity. Interoperability means that open systems can exchange information meaningfully with each other even when they are from different vendors. Interconnectivity, which refers to the ability to connect computers from possibly different vendors, is an integral part of interoperability and operates at the hardware level.

2. Characteristics

Most client/server systems have the following distinguishing characteristics:


1. Service: client/server is primarily a relationship between processes running on separate machines. The server process is the provider of services; the client is a consumer of services. In essence, client/server provides a clean separation of function based on the idea of service.

2. Shared resources: a server can service many clients at the same time and regulate their access to shared resources.

3. Asymmetrical protocols: there is a many-to-one relationship between clients and servers. Clients always initiate the dialog by requesting a service; servers wait passively for requests from the clients.

4. Transparency of location: the server is a process that can reside on the same machine as the client or on a different machine across the network. Client/server software usually masks the location of the server from the clients by redirecting the service calls when needed.

5. Mix and match: the ideal client/server software is independent of hardware or operating system software platform. One should be able to mix and match client and server platforms.

6. Message-based exchanges: clients and servers are loosely coupled systems which interact through a message-passing mechanism. The message is the delivery mechanism for service requests and replies.

7. Encapsulation of services: the server is a specialist. A message tells a server what service is requested, and it is up to the server to determine how to get the job done. Servers can be upgraded without affecting the clients as long as the published message interface does not change.

8. Scalability: client/server systems can be scaled horizontally or vertically. Horizontal scaling means adding or removing client workstations with only a slight performance impact. Vertical scaling means migrating to a larger and faster server machine or to multiple servers.

9. Integrity: the server code and server data are centrally maintained, which results in cheaper maintenance and the guarding of shared data integrity. At the same time, the clients remain personal and independent.

3. Functional Layers

Client/server systems can be configured in a variety of ways, depending on what is to be accomplished. These different configurations are called client/server system architectures, the most common of which are the two-tier and three-tier architectures. To better understand the difference between the two, client/server systems should be broken up into three functional layers:

1. The presentation or user interface layer: presentation is directly related to end-user I/O and includes such items as screen/window management, dialog control, function key mapping, field validation, range checking, context-sensitive help, data capture, and security.

2. The function or application processing layer: function includes the programs responsible for business logic, including programmed access to user data, end-user I/O, and the issuing of DML statements.


3. The data management layer: data management represents the functions related to executing a data manipulation language (DML), e.g., disk I/O, buffer management, index maintenance, and lock management. A DBMS or file management system usually performs these functions (Berson, 1996).
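The three functional layers can be sketched as three separate functions (an illustrative modern Python example, not from the thesis; the tariff rule and field names are invented):

```python
# Illustrative separation of the three functional layers. Each layer does
# only its own job: validation, business logic, or data management.

def presentation_layer(raw_input):
    """Presentation: field validation and range checking on user input."""
    weight = int(raw_input)
    if not 0 < weight <= 40000:
        raise ValueError("weight out of range")
    return weight

def business_layer(weight_kg, rate_per_ton):
    """Function/application layer: the business logic (an invented tariff rule)."""
    return round(weight_kg / 1000 * rate_per_ton, 2)

_store = {}  # data management layer stand-in for a DBMS

def data_layer_save(key, value):
    """Data management: persist the result (stand-in for DML execution)."""
    _store[key] = value
    return value

charge = business_layer(presentation_layer("25000"), rate_per_ton=4.5)
data_layer_save("invoice-001", charge)
print(charge)  # 112.5
```

How these three layers are assigned to client and server machines is exactly what distinguishes the two-tier and three-tier architectures discussed next.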

Figure 2.5. Client/server system functional layers (presentation: user interface, data entry, field validation; business logic: data manipulation, calculations, business rules; data access: data input/output, stored procedures)

4. Architecture

The two basic principles of client/server architectures for distributed applications are:

1. Most networked software systems can be viewed as a combination of a client or front-end portion that interacts with the user, and a server or back-end portion that interacts with the shared resource. The client process contains solution-specific logic and provides the interface between the user and the rest of the application system. The server process acts as a software engine that manages shared resources such as databases, printers, modems, or high-powered processors.


2. The front-end task and back-end task have fundamentally different requirements for computing resources such as processor speeds, memory, disk speeds and capacities, and input/output devices.

A single processor can complete multiple tasks if given time, but dividing the application along functional lines and distributing the work between two or more machines can usually be more efficient. This is especially true if the front-end processor specializes in, for example, graphical displays, and the application server processor specializes in calculation, data manipulation, communication services, or whatever shareable resource resides at the server. For certain kinds of applications, a high degree of efficiency may be realized by applying a parallel-processing paradigm to allow the application tasks to occur simultaneously. The simplest form of a client/server application has only three pieces: a client process, a server process, and middleware. Client process: the client-based process is the front end of the application that the user sees and interacts with. The client process contains solution-specific logic and provides the interface between the user and the rest of the application system. The client process also manages the local resources that the user interacts with, such as the monitor, keyboard, workstation CPU, and peripherals. Server process: the server-based process runs on another machine on the network. This application server could be the host operating system or a network file server; the server then provides both file system services and application services. Or, in some cases, another desktop machine provides the application services. The server process


acts as a software engine that manages shared resources such as databases, printers, communication links, or high-powered processors. The server process performs the back-end tasks that are common to similar applications. More complex client/server arrangements allow one server process to service multiple client processes running simultaneously; similarly, a single client process can request services from multiple servers. The connectivity layer, or middleware, directs the request from the client to the appropriate server that has the needed resources. A server component may reside on almost any type of computer, from PC to mainframe. Servers usually perform the following functions (Martin, 1997):
Optimize and execute SQL
Enforce business rules
Store procedures and triggers
Enforce security
Manage concurrency
Manage transactions
Log transactions for recovery (also called rollback)
Serve as repositories for enterprise data
Connectivity: connectivity (also called middleware) allows applications to transparently communicate with other programs or processes, regardless of their location. The key element of connectivity is the network operating system (NOS). NOSs provide services such as routing, distribution, messaging, file and print, and


network management services. NOSs rely on communication protocols to provide specific services. The protocols are divided into three groups:

1. Media protocols: media protocols determine the type of physical connections used on a network (some examples of media protocols are Ethernet, Token Ring, Fiber Distributed Data Interface (FDDI), coaxial cable, and twisted pair).

2. Transport protocols: transport protocols provide the mechanism to move packets of data from client to server (some examples of transport protocols are Novell's IPX/SPX, Apple's AppleTalk, Transmission Control Protocol/Internet Protocol (TCP/IP), Open Systems Interconnection (OSI), and the Government Open Systems Interconnection Profile (GOSIP)).

3. Client-server protocols: once the physical connection has been established and transport protocols chosen, a client-server protocol is required before the user can access the network services. A client-server protocol dictates the manner in which clients request information and services from a server and also how the server replies to that request (some examples of client-server protocols are NetBIOS, RPC, Advanced Program-to-Program Communication (APPC), Named Pipes, Sockets, Transport Level Interface (TLI), and Sequenced Packet Exchange (SPX)) (Comer, 1991).
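The request/reply pattern of a client-server protocol on top of TCP/IP can be sketched with sockets (an illustrative example only; the one-line "protocol" here is invented for demonstration, with the client initiating and the passive server answering):

```python
import socket
import threading

# Minimal sketch of a client-server protocol over TCP/IP: one request
# line from the client is answered by one response line from the server.

def server(sock):
    conn, _ = sock.accept()               # passive server waits for a client
    with conn:
        request = conn.recv(1024).decode().strip()
        conn.sendall(f"OK {request.upper()}\n".encode())  # reply to request

srv = socket.socket()
srv.bind(("127.0.0.1", 0))                # port 0: let the OS pick a free port
srv.listen(1)
threading.Thread(target=server, args=(srv,), daemon=True).start()

cli = socket.socket()                     # client initiates the dialog
cli.connect(("127.0.0.1", srv.getsockname()[1]))
cli.sendall(b"status berth-3\n")
reply = cli.recv(1024).decode().strip()
cli.close()
print(reply)  # OK STATUS BERTH-3
```

The transport protocol (TCP/IP) moves the bytes; the agreed request/reply format on top of it plays the role of the client-server protocol.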


5. Applications

1. Two-tier architectures

With two-tier client/server architectures (figure 2.6.), the user system interface is usually located in the user's desktop environment, and the database management services are usually in a server, a more powerful machine that services many clients. Processing management is split between the user system interface environment and the database management server environment. The database management server provides stored procedures and triggers. A number of software vendors provide tools to simplify development of applications for the two-tier client/server architecture. The two-tier client/server architecture is a good solution for distributed computing when work groups of a dozen to 100 people interact on a LAN simultaneously. It does, however, have a number of limitations. When the number of users exceeds 100, performance begins to deteriorate. This limitation is a result of the server maintaining a connection via "keep-alive" messages with each client, even when no work is being done. A second limitation of the two-tier architecture is that implementing processing management services with vendor-proprietary database procedures restricts flexibility and the choice of DBMS for applications. Finally, current implementations of the two-tier architecture provide limited flexibility in moving (repartitioning) program functionality from one server to another without manually regenerating procedural code.
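The two-tier split can be sketched as follows (illustrative only; sqlite3 stands in for the remote DBMS, and the storage-tariff rule is invented). Note how presentation and business logic live together in the "fat" client program, which ships raw SQL directly to the data server:

```python
import sqlite3

# Sketch of the two-tier split: presentation and business logic in the
# client program; only data management on the server (sqlite3 stand-in).

class FatClient:
    """Two-tier client: user interface plus business logic in one program."""

    def __init__(self, db):
        self.db = db  # direct connection to the database server

    def overdue_storage(self, days_free=3):
        # Business logic runs on the client and sends SQL to the server.
        cur = self.db.execute(
            "SELECT container, days FROM storage WHERE days > ?", (days_free,))
        return {c: (d - days_free) * 10.0 for c, d in cur}  # invented tariff

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE storage (container TEXT, days INTEGER)")
db.executemany("INSERT INTO storage VALUES (?, ?)",
               [("MSKU100", 2), ("MSKU200", 5), ("MSKU300", 8)])

print(FatClient(db).overdue_storage())  # {'MSKU200': 20.0, 'MSKU300': 50.0}
```

Because the tariff rule is compiled into every client, changing it means redeploying every desktop, which is one source of the repartitioning limitation noted above.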


Figure 2.6. Two-tier architecture (presentation and business logic on the client; data access on the server)

2. Three-tier architectures

The three-tier architecture (figure 2.7.) emerged to overcome the limitations of the two-tier architecture. In the three-tier architecture, a middle tier is added between the user system interface client environment and the database management server environment. There are a variety of ways of implementing this middle tier, such as transaction processing monitors, message servers, or application servers. The middle tier can perform queuing, application execution, and database staging. For example, if the middle tier provides queuing, the client can deliver its request to the middle layer and disengage, because the middle tier will access the data and return the answer to the client. In addition, the middle layer adds scheduling and prioritization for work in progress. The three-tier client/server architecture has been shown to improve performance for groups with large numbers of users (in the thousands) and improves flexibility when compared to the two-tier approach. Flexibility in partitioning can be as simple as "dragging and dropping" application code modules onto different computers in some three-tier architectures. A limitation of three-tier architectures is that the development environment is reportedly more difficult to use


than the visually oriented development of two-tier applications. Recently, mainframes have found a new use as servers in three-tier architectures.
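The queuing role of the middle tier can be sketched with a simple in-process queue (illustrative only; queue.Queue stands in for the middle-tier request queue and a dict for the data tier):

```python
import queue
import threading

# Sketch of middle-tier queuing: the client drops a request on the queue
# and disengages; a middle-tier worker does the data access and places
# the answer where the client can collect it later.

requests = queue.Queue()
_data = {"MSKU100": "on vessel", "MSKU200": "in yard"}  # data tier stand-in

def middle_tier_worker():
    while True:
        container, reply_box = requests.get()
        reply_box.append(_data.get(container, "unknown"))  # data access
        requests.task_done()

threading.Thread(target=middle_tier_worker, daemon=True).start()

reply = []
requests.put(("MSKU200", reply))  # client delivers its request, disengages
requests.join()                   # (here we wait so the answer can be shown)
print(reply[0])  # in yard
```

In a real three-tier system the queue, worker, and data store would run on separate machines; the structure of the exchange is the same.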

Figure 2.7. Three-tier architecture (presentation on the client; business logic on one server; data access on another server)

3. Three-tier architecture with transaction processing monitor technology

The most basic type of three-tier architecture has a middle layer consisting of Transaction Processing (TP) monitor technology. TP monitor technology is a type of message queuing, transaction scheduling, and prioritization service in which the client connects to the TP monitor (middle tier) instead of the database server. The transaction is accepted by the monitor, which queues it and then takes responsibility for managing it to completion, thus freeing up the client. When this capability is provided by third-party middleware vendors, it is referred to as "TP Heavy" because it can service thousands of users. When it is embedded in the DBMS (and could be considered a two-tier architecture), it is referred to as "TP Lite" because experience has shown performance degradation when over 100 clients are connected. TP monitor technology also provides: the ability to update multiple different DBMSs in a single transaction; connectivity to a variety of data sources including flat files, non-relational DBMSs, and the mainframe; the ability to attach priorities to transactions; and robust security. Using a three-tier client/server architecture with TP monitor technology


results in an environment that is considerably more scalable than a two-tier architecture with a direct client-to-server connection. For systems with thousands of users, TP monitor technology (not embedded in the DBMS) has been reported as one of the most effective solutions. A limitation of TP monitor technology is that the implementation code is usually written in a lower-level language (such as COBOL) and is not yet widely available in the popular visual toolsets (Orfali, 1992).

4. Three-tier with a message server

Messaging is another way to implement three-tier architectures. Messages are prioritized and processed asynchronously. Messages consist of headers, which contain priority information, the address, and an identification number. The message server connects to the relational DBMS and other data sources. The difference between TP monitor technology and a message server is that the message server architecture focuses on intelligent messages, whereas the TP monitor environment has the intelligence in the monitor and treats transactions as dumb data packets. Messaging systems are good solutions for wireless infrastructures.

5. Three-tier with an application server

The three-tier application server architecture allocates the main body of an application to run on a shared host rather than in the user system interface client environment. The application server does not drive the GUI; rather, it shares business logic, computations, and a data retrieval engine. The advantages are that, with less software on the client, there is less security to worry about, applications are more scalable, and support and installation costs are lower on a single server than


when maintaining each on a desktop client. The application server design should be used when security, scalability, and cost are major considerations.

6. Three-tier with an ORB architecture

Industry is currently working on developing standards to improve interoperability and to determine what the common Object Request Broker (ORB) will be. Developing client/server systems using technologies that support distributed objects holds great promise, as these technologies support interoperability across languages and platforms and enhance the maintainability and adaptability of the system. There are currently two prominent distributed object technologies: the Common Object Request Broker Architecture (CORBA) and COM/DCOM. Industry is working on standards to improve interoperability between CORBA and COM/DCOM. The Object Management Group (OMG) has developed a mapping between CORBA and COM/DCOM that is supported by several products [OMG 96].

7. Distributed/collaborative enterprise architecture

The distributed/collaborative enterprise architecture emerged in 1993. This software architecture is based on Object Request Broker (ORB) technology, but goes further than the Common Object Request Broker Architecture (CORBA) by using shared, reusable business models (not just objects) on an enterprise-wide scale. The benefit of this architectural approach is that standardized business object models and distributed object computing are combined to give an organization the flexibility to improve effectiveness organizationally, operationally, and technologically. An enterprise is defined here as a system comprised of multiple business systems or


subsystems. Distributed/collaborative enterprise architectures are limited by a lack of commercially available object-oriented analysis and design tools that focus on applications.

D. Technologies

Network computing views the network of individual machines as a robust computing device in itself, measured by the aggregate power of all the machines on the network. A business-wide network can potentially function as an integrated multiprocessor computer system. While LAN computing began with the goal of integrating PCs into a single vendor's network, network computing involves the integration of local and wide area network technologies in a multi-vendor environment. A survey of a typical business will reveal host systems based on such operating systems as DEC's VMS, IBM's VM, MVS, and CMS, and UNIX, as well as LAN servers such as those based on NetWare or LAN Manager. The PC desktop presents an equally diverse group of operating systems, with DOS, OS/2, Macintosh, and UNIX workstations. Network computing systems are designed with one goal in mind: to readily distribute data and computing resources to users anywhere in the organization. While traditional LAN systems base resources and services in a specific server, network computing uses the entire network as the platform for network services. In the past, conflicting standards at every level of network design made it difficult to achieve true interconnectivity or transparency. Media standards, protocol standards,


and a wide variety of host and client operating systems present formidable integration tasks. Before users can realize the benefits of network computing, networking solutions must offer independence from these multiple standards and allow users to integrate and support multiple standards within a single business-wide system. The applications that take advantage of this capability, using the business-wide network as a single processing entity, will usher in the new phase of network computing. These applications, which are sometimes called distributed applications, networked applications, or network computing applications, require program-to-program communication and interaction across the boundaries of machine type, server operating system platform, and workstation system standards. An event handler is a program that controls process distribution among processors: it monitors the load at the various processors and distributes processes based on each processor's load. Client-based applications meet many needs. Applications that create a minimal level of disk I/O, for example, will typically remain client-based. But client-based implementations can limit the efficiency of other types of applications, especially in the areas of communications and data management (Goscinski, 1991).

1. Cooperative Processing

Cooperative processing is computing which requires two or more distinct processors to complete a single transaction. Cooperative processing is related to both distributed and client/server processing. It is a form of distributed computing where two or more distinct processes are required to complete a single business transaction.


Usually, these programs interact and execute concurrently on different processors. Cooperative processing can also be considered a style of client/server processing if communication between processors is performed through a message-passing architecture (Goscinski, 1991).

2. Distributed Processing

Distributed processing is the distribution of applications and business logic across multiple processing platforms. Distributed processing implies that processing will occur on more than one processor in order for a transaction to be completed. In other words, processing is distributed across two or more machines, and the processes are most likely not running at the same time, i.e., each process performs part of an application in sequence. Often the data used in a distributed processing environment is also distributed across platforms.

3. Client/server Processing

Client/server processing refers to a style of computing based on a requester-server approach where communication is accomplished through messaging. Client/server processing occurs when the functionality of an application is divided between client programs and server programs that communicate through messages. Client/server processing does not require multiple distinct processors; it can be conducted on a single host.
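The last point, that client/server is a messaging style rather than a hardware arrangement, can be sketched on a single host (illustrative only; two coroutines stand in for the client and server processes, which share no state and interact purely through messages):

```python
# Sketch of client/server processing on a single host: two "processes"
# (here, a generator coroutine and a caller) that communicate purely
# through request and reply messages, with no shared state.

def server_process():
    """Passive server: waits for request messages and replies to each."""
    inventory = {"cranes": 4, "tugboats": 2}  # invented server-side data
    reply = None
    while True:
        message = yield reply  # receive the next request message
        reply = {"status": "ok", "count": inventory.get(message["item"], 0)}

server = server_process()
next(server)  # start the server; it now waits for the first message

def client_request(item):
    """Client: builds a request message and consumes the reply message."""
    return server.send({"request": "count", "item": item})

print(client_request("cranes"))  # {'status': 'ok', 'count': 4}
```

Nothing in the exchange depends on where the two parties run; moving the server behind a socket would change the transport, not the messaging style.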


Figure 2.8. Distributed, cooperative, and client/server processing

There are five styles of client/server processing, depending on how the application functions are split between client and server programs (Berson, 1996):

1. Distributed presentation

The presentation layer is distributed between the client and server machines. This is one implementation of a frontware solution, where a graphical user interface (GUI) front end is added to an IBM MVS 3270 application and is placed on a workstation.

2. Remote Presentation

The entire presentation layer resides on the client. The host/server executes all business logic and data management. The client may perform dialog processing and


basic data validation, but no data resides on the client. This is the second type of frontware application and is conceptually very similar to distributed presentation.

3. Distributed Business Logic

Here the business logic is split between the client and the server. Distributed business logic applications are the most complex of the five processing styles, since two separately compiled application programs must be developed. Developers must analyze where each function should reside and what type of dialog must occur between the two programs. The underlying communications facilities may implement either a message-based or a remote procedure call (RPC) mechanism for the transfer of dialog and data.

4. Remote Data Management

In this implementation, the entire business logic resides on the client, and data management is located on a remote server/host. A typical implementation is provided by popular RDBMS vendors such as Oracle or Ingres. Remote data management is relatively easy to program for because there is just one application program. The client communicates with the server using SQL; the server then responds with data that satisfies the query. RDBMS products that offer remote data management provide a layer of software on the client to handle the communication with the DBMS server.

5. Distributed Data Management

Distributed data management is an extension of remote data management, in which the data management layer is distributed between client and server or between


multiple servers. It is usually the responsibility of the DBMS vendor to provide for recovery and integrity in case of a failure of any node involved in a distributed data transaction. Sybase and INGRES are examples of database vendors that provide this functionality.

Figure 2.9. Styles of client/server processing (for each of the five styles, the presentation, business logic, and data access layers are divided between client and server as described above)


E. Web-Based Client/Server

A web-based client/server system is typically a variation on the standard three-tier client/server system architecture. It is designed to take full advantage of cooperative processing and distributed computing, using the Internet or an intra-company WAN as the network infrastructure. The presentation logic in a web-based client/server system consists of a Web browser as the primary application interface. Generally, some of the processing logic is built into the Web browser to interpret HTML pages with embedded scripting languages like JavaScript or VBScript. Because web-based applications are constructed by embedding component objects developed using Java or ActiveX technology into the HTML, the business logic layer of a web-based client/server system generally resides on a remote Web server. And as usual, the supporting data resides on a database server, which, under the web-based paradigm, is also remote. Functionally, web-based applications support many of the same business processes as their desktop counterparts. In fact, many vendors are Internet-enabling their popular desktop applications to conform to the web-based client/server paradigm. For example, a popular client/server-based financial system, built for a two-tier architecture, is now offered as a web-based solution for a three-tier architecture. Consequently, the standard GUI, traditionally resident on the client PC, has given way to Netscape or Internet Explorer. The application, which would typically reside on a LAN server, can now be hosted on a WAN server in a separate geographic location, as can the database used to support the system.
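The web-based three-tier split can be sketched in modern Python (illustrative only, not the thesis system; urllib stands in for the Web browser, an HTTP handler for the business logic tier, and a dict for the remote database server):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Sketch of the web-based paradigm: browser (simulated with urllib) as
# presentation, the HTTP handler as business logic, a dict as data tier.

_database = {"vessels_in_port": 7}  # data tier stand-in (invented data)

class AppServer(BaseHTTPRequestHandler):
    def do_GET(self):  # business logic layer on the remote Web server
        body = json.dumps(_database).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

httpd = HTTPServer(("127.0.0.1", 0), AppServer)  # port 0: any free port
threading.Thread(target=httpd.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{httpd.server_port}/status"
with urllib.request.urlopen(url) as response:    # the "browser" request
    status = json.loads(response.read())
httpd.shutdown()
print(status)  # {'vessels_in_port': 7}
```

Swapping the dict for a real database server and urllib for an actual browser changes the components, not the three-tier division of labor.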


CHAPTER III
RESEARCH METHODOLOGY

A. Client/Server Systems Methodology

1. Background

Most major systems integrators and many large in-house IT groups have their own life cycle management methodology. However, every methodology has its own strengths, which are important to understand as part of the systems integration process (Alter, 1992).

2. Purpose

The purpose of a methodology is to describe a disciplined process through which technology can be applied to achieve business objectives. A client/server methodology helps define the environmental attributes that might be enhanced by client/server implementation. It should describe the processes involved through the entire life cycle, from BPR and systems planning through, and including, maintenance of systems in production.

3. Paradigm

In the client/server systems development methodology, it is necessary to understand and adhere to the flow of information through the life cycle. This flow allows the creation and maintenance of the systems encyclopedia, or electronic repository of data definitions, relationships, revision information, and so on. This is


the location of the data models of all systems. Figure 3.1 shows the processes of a client/server systems methodology in a typical systems development life cycle.

Figure 3.1. Systems Development Life Cycle (system investigation, system analysis, system design, system implementation, system maintenance)

4. Stages, activities, deliverables

The client/server systems methodology includes a strict project management discipline that describes the deliverables expected from each stage of the life cycle. These deliverables ensure that the models are built and maintained. Each application is built from the specifications in the model and in turn maintains the model's where-used and how-used relationships. Figure 3.2 details the major activities of each stage of the client/server systems development methodology.


Stage: System Investigation
Goals: Determine whether a business problem or opportunity exists; determine whether a new or improved C/S system is a feasible solution.
Activities: Gather data; conduct interviews; review procedures and policies; observe operations; identify the current situation; describe the existing system; define requirements.
Deliverables: Information technology profile on the feasibility of a web-based client/server solution.

Stage: System Analysis
Goals: Analyze the information needs of end users, the organizational environment, and presently used systems; develop the functional requirements of a system that meets those needs.
Activities: Gather data; expand the requirements to the next level of detail; define information system requirements; prepare the external system design.
Deliverables: Functional requirements model; process flow model; data flow model.

Stage: System Design
Goals: Develop specifications for the hardware, software, people, and data resources, and the information products, that will satisfy the functional requirements of the proposed system.
Activities: Determine possible improvement goals; analyze application and data architectures; analyze technology platforms; prepare the implementation plan; identify the systems development environment; perform preliminary design; perform detailed design; design the system test.
Deliverables: System specifications (user interface, database, software, hardware, personnel).

Stage: System Implementation
Goals: Implement the IS solution.
Activities: Acquire hardware and software; test the system; train people; convert to the new system.
Deliverables: Proof-of-the-implementation.

Stage: System Evaluation & Maintenance
Goals: Monitor, evaluate, and modify the system as needed.
Activities: Support the hardware, software, and communication configuration.
Deliverables: Post-implementation review.

Figure 3.2. Stages and Activities of the Client/server Systems Development Methodology


1. System Investigation

Before any changes can be made to an IS, or a new one developed, an investigation of the current system must be made to determine problems, opportunities, and objectives. Questions like "Do we have a business problem (or opportunity)?", "What is causing the problem?", "Would a new or improved information system help solve the problem?", and "What would be a feasible information system solution to our problem?" have to be answered in the system investigation stage. The system investigation stage requires a preliminary study, called a feasibility study, to determine the information needs of prospective users and to determine the resource requirements, costs, benefits, and feasibility of the proposed information system (O'Brien, 1996). The feasibility of the proposed system should then be evaluated in terms of four major categories:

1. Organizational feasibility: how well the proposed system supports the strategic objectives of the organization.

2. Economic feasibility: cost savings, increased revenue, decreased investment, increased profits, and service quality improvement.

3. Technical feasibility: hardware and software capability, reliability, and availability.

4. Operational feasibility: end-user acceptance, management support, and customer requirements.
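One hypothetical way to combine the four categories into a single figure is a weighted score (the weights and ratings below are invented for illustration; a real feasibility study would justify each rating with evidence):

```python
# Hypothetical sketch of weighting the four feasibility categories into
# one score; all weights and ratings are invented for illustration.

def feasibility_score(ratings, weights):
    """Weighted average of per-category ratings on a 0-10 scale."""
    total = sum(weights.values())
    return round(sum(ratings[c] * w for c, w in weights.items()) / total, 2)

weights = {"organizational": 3, "economic": 3, "technical": 2, "operational": 2}
ratings = {"organizational": 8, "economic": 6, "technical": 9, "operational": 7}
print(feasibility_score(ratings, weights))  # 7.4
```

The weights make explicit which categories the organization treats as most important when comparing candidate solutions.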


2. System Analysis

The goal of system analysis is to obtain a clear understanding of the current system and its shortcomings and to determine opportunities for improvement. Several basic activities of system analysis should be performed. Many of these activities are extensions of those used in the feasibility study, and some of the same information-gathering methods are used. However, system analysis is not a preliminary study: it is an in-depth study of end-user information needs that produces the functional requirements used as the basis for the design of the proposed information system. System analysis involves a detailed study of:
- The information needs of the organization and end users.
- The activities, resources, and products of the presently used information system.
- The information system capabilities required to meet the information needs of the organization and end users.

System analysis can be treated as a whole or broken into the following views (O'Brien, 1996):
1. Organizational analysis
This is done to learn about the organization: its management structure, its people, its business activities, the environmental systems it deals with, and its current information system.
2. Analysis of the present system


This is used to analyze how the system uses hardware, software, and people resources to convert data resources, such as transaction data, into information products, such as reports and displays.
3. Functional requirements analysis
This is done to specify the information system capabilities required to meet the information needs of users, without being tied to the hardware, software, and people resources that end users presently use or might use in the new system.

3. System Design

System analysis describes what a system should do to meet the information needs of users; system design specifies how the system will accomplish this objective. System design consists of design activities that produce system specifications satisfying the functional requirements developed in the system analysis stage. These specifications are used as the basis for software development, hardware acquisition, system testing, and other activities of the implementation stage. According to the client/server systems development methodology, the system design stage was accomplished through the following steps:

a) Architecture definition

The purpose of the architecture definition step is to define the application architecture and select the technology platform for the application. To select the application architecture wisely, the choice must be based on an evaluation of the business priorities. The organization must consider and weigh the following criteria:


1. Cost of operation: How much can the organization afford to pay?
2. Ease of use: Are all system users well-trained, computer-literate, regular users? Are some occasional users, intimidated by computers, users with little patience, or familiar with another easy-to-use system? Will the system be used by the public in situations that do not allow for training, or in which mistakes are potentially dangerous?
3. Response time: What is the real speed requirement?
4. Availability: What is the real requirement? Is it 24 hours per day, 7 days per week, or something less? What is the impact of outages? How long can they last before the impact changes?
5. Security: What is the real security requirement? What is the cost or impact of unauthorized access? Is the facility secure? Where else can this information be obtained?
6. Flexibility to change: How frequently might this application change? Is the system driven by marketing priorities, legislative changes, or technology changes?
7. Use of existing technology: What is the existing investment? What are the growth capabilities? What are the maintenance and support issues?
8. System interface: What systems must this application deal with? Are these internal or external? Can the systems being interfaced be modified?

Once managers understand the application architecture issues, it becomes appropriate to evaluate the technical architecture options.


The following is a representative set of technical architecture choices:
1. Hardware (including peripherals): Are there predefined standards for the organization? Are there environmental issues, such as temperature, dirt, and service availability?
2. Distributed versus centralized: Does the organization have a requirement for one type of processing over the other? Are there organizational standards?
3. Network configuration: Does the organization have an existing network? Is there a network available to all the sites? What is the capacity of the existing network? What is the requirement of the new one?
4. Communications protocols: What does the organization use today? Are there standards that must be followed?
5. System software: What is used today? Are there standards in place? What options are available in the locale and on the anticipated hardware and communications platforms?
6. Database software: Is there a standard in the organization? What exists today?
7. Application development tools (for example, CASE): What tools are in use today? What tools are available for the candidate platforms, database engine, operating system, and communications platforms?
8. Development environment: Does such an environment exist today? What standards are in place for users and developers? What other platform tools are being considered? What are the architectural priorities related to development?


9. Application software (make or buy, package selection, and so on): Does the organization have a standard? How consistent is this requirement with industry-standard products? If there is a product, what platforms does it run on? Are these consistent with the potential architecture here? How viable is the vendor? What support is available? Is source code available? What are the application architecture requirements related to product acquisition?
10. Human interface: What are the requirements? What is in place today? What are users expecting?

b) Systems Development Environment

Once the organization has defined its application and technical architectures and selected its tools, the next step is to define how to use these tools. A systems development environment (SDE) comprises the hardware, software, interfaces, standards, procedures, and training that are selected and used by an enterprise to optimize its information systems support for strategic planning, management, and operations.
1. An architecture definition should be conducted to select a consistent technology platform.
2. Interfaces that isolate the user and developer from the specifics of the technical platform should be used to support the creation of a single-system image.
3. Standards and standard procedures should be defined and built to provide the applications with a consistent look and feel.
4. Reusable components must be built to gain productivity and support a single-system image.


5. Training programs must ensure that users and developers understand how to work in the environment.

The most significant advantages are obtained from an SDE when a conscious effort is made to build reusable components. These are functions that will be used in many applications and will therefore improve productivity. The advantages of building an SDE and including these types of components are most evident in the following areas:
1. Rapid prototyping: The development environment generates skeleton applications with embedded logic for navigation, database views, security, menus, help, table maintenance, and standard screen builds. This framework enables the analyst or developer to sit with a user and work up a prototype of the application rapidly. To get business users to participate actively in the specification process, it is necessary to show them something real: a prototype is more effective for validating the process model than traditional business modeling techniques, and only through the use of an SDE is such prototyping possible. Workstation technology facilitates this prototyping; the powerful GUI technology and the low cost of direct development at the workstation make it the most productive choice for developing client/server applications.
2. Rapid coding: Incorporating standard, reusable components into every program reduces the number of lines of custom code that must be written. In addition, there is a substantial reduction in design time, because much of the design employs reusable, standard services from the SDE. The prototype becomes the design tool.
3. Consistent application design: As mentioned earlier, much of the design is inherent in the SDE. Thus, by virtue of the prototype, systems have a common look and feel from the user's and the developer's perspectives. This is an essential component of the single-system image.
4. Simplified maintenance: The standard code included with every application ensures that when maintenance is being done the program will look familiar. Because more than 50 percent of most programs will be generated from reusable code, the maintenance programmer will know these modules and can ignore them unless global changes are to be made in these service functions. The complexity of maintenance corresponds to the size of the code and the programmer's familiarity with the program source. Reusable code gives the programmer a common look and much less new code to learn.
5. Enhanced performance: Because the reusable components are written once and incorporated in many applications, it is easier to justify getting the best developers to build these pieces. The ability to make global changes in reusable components means that when performance problems do arise, they can often be fixed globally with a single change.
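The idea of a reusable SDE component can be sketched with a small example: a generic field validator that every data-entry screen configures rather than re-implements. The field names (vessel_id, container_no) are hypothetical, chosen only to echo the container terminal setting; this is an illustrative sketch, not code from the system.

```python
# Hypothetical reusable SDE component: one generic validator shared by all
# data-entry screens, so individual applications write no custom validation code.
def make_validator(required_fields, max_lengths):
    """Return a validation function configured for one screen's fields."""
    def validate(record):
        errors = []
        for field in required_fields:
            if not record.get(field):
                errors.append(f"{field}: required")
        for field, limit in max_lengths.items():
            if len(str(record.get(field, ""))) > limit:
                errors.append(f"{field}: longer than {limit} characters")
        return errors
    return validate

# Two different screens reuse the same component with different configurations.
validate_vessel = make_validator(["vessel_id", "berth"], {"vessel_id": 10})
validate_container = make_validator(["container_no"], {"container_no": 11})

print(validate_vessel({"vessel_id": "", "berth": "B1"}))  # ["vessel_id: required"]
```

Because every screen flows through the same component, a global change to validation behavior (point 5 above) is a single edit rather than one edit per application.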


c) Design the new system

In this step, the specifications for the hardware, software, people, and data resources and the information products that will satisfy the functional requirements of the proposed information system were developed as follows:
- User interface specifications: the content, format, and sequence of user interface products and methods such as display screens, interactive dialogues, audio responses, forms, documents, and reports.
- Database specifications: the content, structure, distribution, access, response, maintenance, and retention of databases.
- Software specifications: the required software package or programming specifications of the proposed system, including performance and control specifications.
- Hardware and facilities specifications: the physical and performance characteristics of the equipment and facilities required by the proposed system.
- Personnel specifications: job descriptions of the persons who will operate the system.
- Controls and security: controls and security passwords were established here.

d) Develop the new system

After the system was designed and approved, it must be developed. This is when the hardware and software are actually acquired. Certain decisions must be made: Should the software be bought or written? Who supplies the hardware? In addition, users must be trained on the new system; sometimes the software vendors give training seminars. Also, any new procedures or policies must be taught.


Lastly, the system must be tested. Prototyping is used: the system is tested with minimal prototype data to catch bugs and errors.

4. System Implementation

After the system has been developed and tested, it must be implemented. There are different ways to accomplish this; the four main implementation methods are:
- Direct implementation: The change to the new system is made all at once. This method is used in a small business, or in a larger one where a model was previously developed and thoroughly tested.
- Parallel implementation: Both the old and the new systems are up and running at the same time. When the operation of the new system is satisfactory, the old system is shut down.
- Phased implementation: A part of the system is implemented, and when it is running satisfactorily, another part follows. This is used for extremely large and complicated systems.
- Pilot implementation: If a company has many widely dispersed locations, the new system is implemented in one location at a time. Some of the trained staff are then moved to the new location.

5. System Evaluation and Maintenance


After the system has been implemented and has been running for a few months, an evaluation is made to determine whether it is meeting its objectives. The client/server SDLC is an ongoing process, since all C/S systems are constantly in development and revision; the development of a C/S system never really stops. One thing that can stop development on a C/S system is the failure of a systems development project. When this happens, a new system must be created from scratch, and the SDLC must be started again from the beginning. Following this process makes systems development more straightforward and raises the chances of success. A systems development project's failure results when the SDLC is not properly followed. It means the end of development on the current MIS and the restarting of the development process at the first step. In addition, it is a large waste of time, money, and resources. Development fails when there is:
1. Lack of end-user involvement
This occurs when end users and department supervisors are not consulted. If they do not like the system, they will not use it.
2. Continuation of a flawed design
When a flawed design is continued using fix-ups, failure is sure to follow. This creates an unstable, unfriendly system with strange behavior that end users do not appreciate.


3. Failure of systems integration
The IS is not developed as one big, single project; it is a collection of smaller systems. A project fails when two or more portions of the system do not fit together properly. This demonstrates the need for an orderly development process.

B. Client/Server Business Model

The client/server business model is an approach that helps define environmental attributes that might be enhanced by a client/server implementation. This approach not only provides a basis for the business case that supports such a recommendation, it also lays a foundation for performing Business Process Re-engineering. Business Process Re-engineering is the process of analyzing the functional and operational procedures in an organization for efficiency. This analysis is directed at identifying processes that may be redundant across organizations, or obsolete in their application. The ultimate intent is to define the optimum flow of work for an organization, and then determine how the optimized process might be enhanced through the implementation of automated solutions.

The focus of this research is to determine whether a web-based client/server solution is the right solution for an organization. The answer depends on a number of variables; however, a primary determinant for evaluating the feasibility of adopting such a solution had to be chosen. The focus will be on the fitness of the current information system: is the current information system infrastructure a help or a hindrance to the business activity? Once this question is answered, the decision to adopt a client/server solution can be evaluated in light of issues such as implementation cost, staffing requirements, overall impact on business operations, and the potential benefits to be realized from making a change.

The web-based client/server business model developed for the purposes of this research can be used to identify general areas about the business activities of the case study considering a client/server solution, which include the following:
1. Needs: A clear statement of the case study's information processing needs, to be enabled by sharing information among all of its business units in a fast, cost-effective manner.
2. End-user roles and responsibilities: A perspective on the roles and responsibilities of the case study's end users, to determine the resource-sharing requirements among business units.
3. Information flow: An understanding of how information flows throughout the organization; in other words, what information goes into the organization and what comes out.
4. Standards and procedures: Insight into the case study's existing information management standards, policies, and procedures. This insight helps in understanding what governs the way information is handled in the organization.
5. Education and training: An understanding of the case study's education and training processes related to the system.


6. System configuration: An inventory of the existing information processing system configuration.
7. Risk: Management's budget and risk tolerance thresholds. This information establishes limits for how much an organization is willing to spend in relation to the capabilities received.
8. Problems and issues: A perspective on known problems and issues with the organization's existing information processing infrastructure.
9. Metrics: A perspective on measurements that might be collected to objectively evaluate proposed capability enhancements.

After developing the client/server business model, the research began to make assessments of the feasibility and potential benefits of a client/server solution for the Belawan container terminal operations unit (Unit UTPK), which resulted in the decision to downsize the existing mainframe system to an integrated web-based client/server system. The study conducted to determine the state and feasibility of a client/server solution, called "Information Technology Profile on the Feasibility of a Web-based Client/Server Solution", is shown in the appendix. The objective of this study is to describe the:
1. Existing business environment
2. Approach taken regarding assessments of client/server solution feasibility, including organizational, economic, technical, and operational considerations


3. Conclusions and recommendations regarding the feasibility of a client/server solution.

This study was based on a report written by Ir. Dr. F. Soesianto for PT Pelabuhan Indonesia 1 - Medan, titled "Management Information System: an approach to a mission for a vision", which represents the cornerstone of the approach developed throughout this research. The models developed for the system analysis stage and the information about the business processes were based on a High Spot Review conducted by PT Sisindosat Lintasbuana for PT Pelabuhan Indonesia 1 - Medan. The implementation method applied is the proof-of-concept implementation, an experimental approach to determine the state and feasibility of adopting a client/server solution that can later be developed into any one of the other implementation methods.


CHAPTER IV
RESULTS AND DISCUSSION

A. Results

Client/server database computing evolved as an answer to the drawbacks of the mainframe and PC/file server computing environments. By combining the processing power of the mainframe with the flexibility and price of the PC, C/S database computing combines the best of both worlds. Many corporations have turned to client/server database computing as their computing answer. The following are some of the underlying reasons for its popularity:
- Affordability: C/S database computing can be less expensive than mainframe computing. The underlying reason is simplicity: C/S database computing is based on an open architecture, which allows more vendors to produce competing products, driving the cost down. This is unlike mainframe-based systems, which typically use proprietary components available only through a single vendor. Also, C/S workstations and servers are often PC based, and PC prices have fallen dramatically over the years, which has led to reduced C/S computing costs.
- Speed: The separation of processing between the client and the server reduces network bottlenecks, which allows a C/S database system to deliver mainframe performance while exceeding PC/file server performance.


- Adaptability: The C/S database computing architecture is more open than the proprietary mainframe architecture. Therefore, it is possible to build an application by selecting an RDBMS from one vendor, hardware from another vendor, and development software from yet another vendor. Customers can select the components that best fit their needs.
- Simplified data access: C/S database computing makes data available to the masses. Mainframe computing was notorious for tracking huge amounts of data that could be accessed only by developers. With C/S database computing, data access is not limited to those who understand procedural programming languages (which are difficult to learn and require specialized data access knowledge). Instead, data access is provided by common software products that hide the complexities of data access: word processing, spreadsheet, and reporting software are just a few of the common packages that provide simplified access to C/S data.

The potential benefits associated with the decision to adopt a web-based client/server computing solution for the case study of this research include, but are not limited to, the following:
1. Improved graphical user interface. Client user interfaces are significantly more friendly than mainframe dumb terminals and typically use graphical elements that are familiar to users of desktop PCs and Macintoshes.
2. Easy data access.


Data is more accessible (especially to managers and executives) in the web-based client/server model than in the mainframe model, where access is more cumbersome and tightly controlled.
3. Access to application development tools. Because clients have processing power, they can also be used to develop applications.
4. Multi-platform integration. An open systems architecture facilitates solution integration across different platforms.
5. Improved business operations. Often, part of the client/server solution includes business process re-engineering, which usually results in more efficient business operations.
6. Distributed application processing. Since clients have processing power, they can share the processing load, whereas in the mainframe environment the mainframe computer is responsible for all the processing. Distributed application processing can also be better because it allows the application to be closer to where the work is actually performed.
7. Lower system operating costs. Tasks such as ad hoc reporting and data analysis can be performed by the user rather than by a dedicated IT staff. This increases user productivity, which translates to labor cost savings. Additionally, mainframe support often carries


with it the expense of providing around-the-clock administrative coverage. Client/server solutions do not create such expenses.
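The speed advantage attributed earlier to splitting processing between client and server can be illustrated with a minimal sketch contrasting the file-server model (the whole table crosses the network and the client filters) with the client/server model (the server filters and ships only matching rows). The table contents and field names are invented for illustration.

```python
# Invented sample table: 1000 container records, half "import", half "export".
TABLE = [
    {"container": f"C{i:04d}", "status": "import" if i % 2 else "export"}
    for i in range(1000)
]

def file_server_fetch():
    """File-server model: the whole table is shipped; the client filters locally."""
    rows = list(TABLE)                      # everything crosses the network
    return [r for r in rows if r["status"] == "import"], len(rows)

def client_server_fetch():
    """Client/server model: the server filters; only matching rows are shipped."""
    rows = [r for r in TABLE if r["status"] == "import"]
    return rows, len(rows)

_, shipped_fs = file_server_fetch()
_, shipped_cs = client_server_fetch()
print(f"file server shipped {shipped_fs} rows; client/server shipped {shipped_cs}")
```

Here the client/server path moves half as many rows across the (simulated) network; with realistic tables and selective queries, the reduction, and hence the relief of the network bottleneck, is usually far larger.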

B. Discussion

The client/server business model makes the enterprise available at the desk; it provides access to data that previous architectures did not. Standards have been defined for client/server computing. If these standards are understood and used in developing web-based applications, organizations can reasonably expect to buy solutions today that can grow with their business needs without the constant need to revise those solutions. Architectures based on open systems standards can be implemented throughout the world as global systems become the norm for large organizations. While a supportable common platform on a global scale is far from standardized, it certainly is becoming much easier to accomplish. From the desktop, enterprise-wide applications are indistinguishable from workgroup and personal applications. Powerful enabling technologies with built-in conformance to open systems standards are evolving rapidly. Examples include object-oriented development, relational and object-oriented databases, multimedia, imaging, expert systems, geographic information systems (GIS), voice recognition and voice response, and text management. These technologies provide the opportunity to integrate their generic capabilities with the particular requirements of an organization to create a cost-effective, customized business solution. The web-based client/server model provides the ideal platform with which to integrate these


enabling technologies. Well-defined interface standards enable integration of products from several vendors to provide the right application solution. Enterprise systems are those that create and provide a shared information resource for the entire corporation. They do not imply centralized development and control, but they do treat information and technology as corporate resources. Enterprise network management requires all devices and applications in the enterprise-computing environment to be visible and managed. This remains a major challenge as organizations move to distributed processing. Standards are defined and are being implemented within the client/server model. Client/server applications give greater viability to worker empowerment in a distributed organization than do the host-centered environments.


CHAPTER V
CONCLUSIONS AND RECOMMENDATIONS

A. Conclusions

Many IS organizations are moving to client/server technology because mainframe architectures are 20 years old or more, and much of the software they house is undocumented, inadequate, and unwieldy. This results in hard-to-use applications, information bottlenecks, reliability problems, a very long development cycle for new applications, trouble changing and updating mainframe applications, and high maintenance costs. Yet no organization can afford to dump its enormous investment in mainframe technology. That is why vendors are stepping into the gap with client/server products that both improve old ways of doing things and provide bridges to newer technologies.

1. C/S-based downsizing and business process redesign

Client/server-based downsizing and business process redesign can produce dramatically positive change in an organization's information system:
- Less expensive systems, as downsizing significantly reduces hardware costs and reduces the need for costly staff to run the mainframe.
- Improved end-user access to corporate data without compromising security.
- Greater productivity of end users, workgroups, and departments, which in turn produces ideas for entirely new kinds of applications.


- System flexibility and scalability: adding another server or several new clients is much easier than increasing the size or capacity of a mainframe.
- Increasingly responsive applications that can be adjusted to meet an organization's changing business requirements.
- Responsiveness in a competitive world: putting the right applications in the right hands at the right time improves an organization's responsiveness to changing conditions and opportunities.

2. Information integration

Client/server permits the creation of a new class of applications by computerizing and integrating different types of information, most notably data, voice, and image. Client/server integrates these types of data in system applications, allowing existing applications to be enhanced to provide a more natural process for users. For example, C/S allows the capture and exchange of data through digitized voice rather than written correspondence. The integration of these multiple forms of information allows previously manual tasks that would be difficult or impossible to automate in a traditional environment to be automated; for example, imaging allows document-intensive tasks to be automated.

3. Processing costs

Organizations are adopting C/S applications to take advantage of the cheaper computing power of the workstation. Processing costs of C/S applications are lower than those of mainframes. The cost of microprocessor-based computer


hardware has remained constant, or decreased, while its capabilities have dramatically improved. Client/server user training costs can be much less than user training costs on mainframe systems; this lower cost applies to ongoing training, while retraining costs can be the same or higher compared to mainframe systems. C/S applications can be developed on cheaper workstations and then ported to a more expensive production system, which can reduce development costs: the development environment can differ from the production environment until the application is tested. However, it should be recognized that porting to the production system is not always easy and requires testing. Even though workstation processing costs are in many cases lower than mainframe processing costs, workstations cannot perform some tasks as well. Mainframes may be a better choice when:
- Processing requires large batch runs.
- There is a large amount of transaction processing.
- Processing requires a high level of security.

4. User productivity

The computer revolution has not increased white-collar and blue-collar productivity as promised. The implementation of C/S processing increases user productivity as follows:
- Provides the potential for seamless application integration
- Delivers user-friendly information at the point of need
- Facilitates group/team computing
- Improves the usability of the system


- Empowers end users

Client/server empowers end users because GUI-based C/S systems are easy to use, and anyone within the organization can perform tasks on the system with minimal training; these tasks were previously reserved for data entry and systems personnel. Client/server enables end users to develop solutions to their data and processing needs through easier-to-use tools combined with a familiar, workstation-based environment. The familiar environment is facilitated by the use of an application-related metaphor and a consistent, user-friendly GUI that incorporates help and training. GUI applications provide more natural, intuitive interfaces to users. This encourages users to experiment and find better, more productive ways to work, while increasing user job satisfaction. Data of particular interest to a user can be located close to the user (i.e., on a workstation or LAN), while data of interest to the whole organization can be centrally located (i.e., on a mainframe). This allows end users to satisfy their local needs on a more timely basis. Data is usually distributed based on processing needs or data organization, thus increasing the efficiency and responsiveness of the system. The nature of the relational databases used in C/S processing allows end users to view data in the format they want. Each user has dedicated processing, and more computer resources are available to perform tasks.

5. Team/group computing

Client/server processing facilitates team computing through the use of groupware. Groupware is computer software that is designed to support the


collective work of teams. Examples include Lotus Notes, Ventana GroupSystem, Collaborative Technologies VisionQuest, and IBM TeamFocus. Consumers and producers of information are brought into closer alignment, which allows faster response times on collaborative projects and requests for information. Multiple individuals can share files and documents electronically, which facilitates communication between users in multiple locations and allows tasks and meetings to be completed faster. The amount of paper required to complete tasks is greatly reduced. Everyone can "speak" at once during meetings, since the groupware will accept input from all the computers and display it on the screen as received. Also, meetings can be conducted across wide areas, since the only requirement is for the computers to be connected to the groupware package. More and more vendors of groupware packages are including workflow automation capabilities in their offerings. These provide an assortment of facilities (forms, routing, and tracking) that aim to improve the handling of business processes. At its simplest, workflow automation based on electronic mail software serially routes tasks. Somewhat more complex offerings coordinate parallel processes and maintain results in a database, which allows work-in-progress to be tracked. At the high end of workflow automation are large, complex transaction-processing-oriented systems that handle claims processing, airline reservations, and the like.


B. Recommendations

There are some risks associated with the adoption of a client/server solution that must be addressed.

1. Technical complexity

It is more difficult to support user-developed applications, since the MIS department lacks knowledge about each application's content and purpose. The MIS department must address the increased risks associated with end-user computing: lack of restore/recovery features in user-developed applications, lack of standards, lack of documentation, lack of resources for ongoing maintenance and support, potential duplication of hardware, system software, or data, and potential misuse of human resources (e.g., a user spends more time developing applications than performing intended tasks).

Diversity of hardware, system software, and data will require a greater understanding of the unique features of each platform and system software package. MIS must keep informed of new releases of technical products so it can increase the capability of the system, improve the manner in which tasks are performed, and reduce the number of interfaces and the amount of system code required to connect different platforms and products.


2. Inertia and change management

Client/server generally involves business process redesign and organizational change. These changes must be managed to implement C/S systems successfully. Some of the change management issues to consider when implementing C/S systems are listed below:

The MIS department needs a more diversified skill base to maintain the multiple components, languages, and tools found in a C/S environment. These skills must be developed, or new personnel with these skills must be hired; as an alternative, consultants may be brought in to fill the skill gaps.

The organization of the MIS department must be changed to more closely integrate the various support groups. Traditional MIS group organizations (e.g., mainframe, communications, and PC support groups) often cannot handle all the problems that arise in a C/S environment, since some problems may involve all areas.

There will be an initial GUI learning curve for end users, although the ongoing training effort is normally smaller when using GUIs. Client/server can produce new classes of users with varying requirements. Some existing system users will be more familiar with text-based, block-mode screens and will need to adapt to the GUI environment. Executives, who may not be familiar with computers, often become users and will require training that explains job differences in addition to the system's use and purpose.


The transition to GUI is not usually difficult, but it must be identified early and managed to ensure acceptance of the new system and the success of the changes implemented.

3. When the web-based client/server model may not be the best choice

Several types of applications utilize C/S technology. These applications can be grouped into the following areas:

Customer services: customer service/customer information, customer automation/integration

Insurance case processing: claims processing, loan underwriting, medical admissions, collections

Professional applications: executive information systems, brokerage house applications, real estate systems, sales force applications

Document management: record keeping/archiving systems, revenue accounting, publishing

Some characteristics of situations in which C/S may not be the best choice are listed below:

Processes may require heads-down data entry activities. If the user's task is to routinely enter data and fill out reports (i.e., heads-down data entry), GUIs (mouse-driven interfaces) are often more time-consuming and cumbersome to use than text screens (keyboard-driven interfaces).


If the current system satisfies the organization's requirements, it is often difficult to justify changing it. Unless a business process will be significantly improved, a C/S system is usually not cost-justified.

The technical tradeoffs between C/S systems and traditional transaction processing systems must be weighed when deciding which type of system to install. There is a smaller mean time between failures for the components in a C/S system than for the components in a traditional transaction processing system, and there is a greater potential for problems in a C/S environment due to the increased number of components.

Data that requires a high level of security, or mission-critical systems that require data synchronization, dictate a mainframe solution.


CHAPTER VI

SUMMARY

A. Introduction

1. Background to the Research

Web-based client/server computing has become the model for a new information architecture that will take enterprise-wide computing into the 21st century. It promises many things to many people: to end users, easy access to corporate and external data; to managers, dramatically lower costs for processing; to programmers, reduced maintenance; to developers, an infrastructure that enables business processes to be re-engineered for strategic benefit. However, the subject of web-based client/server computing still seems shrouded in mystery for most people. Is web-based client/server computing really new? What has propelled it to the information systems forefront? Should the decision to move to web-based client/server computing be made on the basis of cost, or are there productivity gains as well? Is there a hidden downside? Is it a new technology or a new methodology?

The purpose of this research is to answer these questions by introducing the concept of an integrated web-based client/server architecture and discussing a number of complex issues surrounding the implementation and management of this architecture. The approach followed in this research is the proof-of-concept implementation approach, which has taken the Belawan container terminal operations unit, Northern Sumatra, as a case study to develop the web-based client/server system.


2. Formulation of the Problem

The advent of the Internet and the growth of commerce on the World Wide Web are changing the client/server landscape dramatically. As the demand for information and resource sharing between remote locations continues to escalate, vendors are beginning to web-enable their desktop application solutions, which means those solutions can run right over the Internet via a Web browser. It is therefore very important to determine the state and feasibility of adopting a web-based client/server solution for an organization.

3. Authenticity of the Research

Client/server architecture is a complex, involved, and often misunderstood subject. The amount of information related to this subject is tremendous and covers a wide range of functions, services, and other aspects of the distributed environment. This research is intended to contribute to the evolution of today's client/server technology by determining and evaluating the state and feasibility of adopting a web-based client/server solution for an organization, based on the theoretical foundations of distributed co-operative processing models. It introduces a client/server systems development methodology, which provides a consistent approach to the analysis and investigation of all the issues surrounding the implementation and management of client/server systems. A real-world case study, the Belawan container terminal operations unit, was chosen to apply these concepts.


4. Objectives of the Research

The purpose of this research is to determine the state and feasibility of using a web-based client/server solution for an organization by developing a client/server business model and introducing the client/server systems development methodology.

5. Benefits Expected from the Research

It is important to determine whether a client/server solution is the right solution for an organization. The answer depends on a number of variables; however, a primary determinant for evaluating the feasibility of adopting a client/server solution had to be chosen. The focus is on the fitness of the current information system: is the current information system infrastructure a help or a hindrance to the business activity? Once this question is answered, the decision to adopt a client/server solution can be evaluated in light of issues such as implementation cost, staffing requirements, overall impact on business operations, and the potential benefits to be realized from making such a change. That is the main benefit expected from this research.

B. Background

1. Client/Server System Concepts

The term client/server was first used in the 1980s in reference to personal computers (PCs) on a network, and the client/server model started gaining acceptance in the late 1980s. The client/server software architecture is a versatile,


message-based, and modular infrastructure that is intended to improve usability, flexibility, interoperability, and scalability as compared to centralized, mainframe, time-sharing computing.

There is a variety of definitions of the term client/server. For the purposes of this research, the systems point of view is taken. In other words, a client/server system is defined in terms of individual pieces that work together as a whole, and is viewed as a system that integrates hardware, software, and networking. This integration of technology is specifically designed to share resources, in support of one or more business functions, in multiple locations simultaneously. An example is the Internet, the world's largest client/server system, comprising thousands of clients and servers transferring information and supporting millions of business functions across a network that spans the globe.

Every client/server system consists of at least one of each of the following:

A client that requests information

A server that supplies information

A network that transfers information between the client and the server

The client component of a client/server system can be either hardware or software. In the hardware context, a client is a personal computer functioning as a workstation. This client workstation is capable of stand-alone information processing, which distinguishes it from its mainframe predecessor, the dumb terminal. In the software context, a client is the software that allows the user to interact with the information


residing on the server. Web browsers are examples of software clients, as are e-mail programs.

The server component can also be considered both hardware and software. As hardware, the server is typically a personal computer or workstation with enhanced storage capacity. Often, it resides in the same location as the business activity it is required to support. As software, servers have a variety of incarnations, depending on the operational function. For example, Windows NT Server acts as a secure server, allowing users to share files and printers over a network, while Web servers such as Microsoft's Information Server provide access to and delivery of information over the World Wide Web.

In a client/server system, the network is the glue that binds the various pieces of hardware together and allows the sharing of data and information. Networks are generally considered hardware, as they provide a physical link between clients, servers, printers, and other shared resources. In addition, where networks are required to link and facilitate interaction between varying vendor-specific resources, middleware may play a role in supporting network functionality. Middleware, as the name implies, is hardware or software placed between two different technologies to allow them to interact with each other. In the context of client/server systems, middleware has a variety of incarnations; the two most common are:

Network middleware, which handles message routing between different platforms


Database middleware, which handles the translation of data requests into database commands.

2. Client/Server Systems Architecture

A client/server information systems architecture must serve three purposes:

Respond to the organization's need for easy information access, flexibility, smooth administration, reliability, security, and proficient application development.

Utilize current installations of hardware, software, and networks, which often run mission-critical applications that cannot be compromised.

Anticipate future demands on information systems and networks.

Client/server computing systems come under the broad category of system architectures known as open systems architecture. In an open systems environment, hardware and software from different vendors are interchangeable and can be combined into an integrated working environment. Two important concepts in open systems are interoperability and interconnectivity.

Interoperability means that open systems can exchange information meaningfully with each other even when they are from different vendors. For example, Manufacturing relies on VAX/VMS, Accounting runs on the IBM 3090 mainframe, Sales and Marketing use an OS/2 server running NetWare (a network operating system) on a Token Ring network, and top management and corporate planning need real-time data from all three on their PCs and Macintoshes. Interoperability is at the software application level.
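The essence of such an exchange, a client that initiates a request and a server that passively waits and replies over the network, can be sketched with standard TCP sockets. This is a toy illustration only: the plain-text "protocol" and the message contents are invented for the example, and real middleware adds routing, translation, and security on top of this basic pattern.

```python
# Toy client/server exchange over TCP: the server waits passively for a
# request and replies; the client initiates the dialog. The plain UTF-8
# message format is an illustrative stand-in for a real protocol.
import socket
import threading

def run_server(sock):
    conn, _ = sock.accept()          # block until a client connects
    with conn:
        request = conn.recv(1024).decode("utf-8")
        conn.sendall(("SERVED: " + request).encode("utf-8"))

server = socket.socket()
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

thread = threading.Thread(target=run_server, args=(server,))
thread.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"container manifest")
reply = client.recv(1024).decode("utf-8")
client.close()
thread.join()
server.close()

print(reply)  # SERVED: container manifest
```

Because the exchange is just bytes over a network connection, the two ends could run on entirely different hardware and operating systems, which is exactly the interoperability property described above.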


Interconnectivity, which refers to the ability to connect computers from possibly different vendors, is an integral part of interoperability. Interconnectivity is at the hardware level.

Most client/server systems have the following distinguishing characteristics:

1. Service: client/server is primarily a relationship between processes running on separate machines. The server process is the provider of services; the client is a consumer of services. In essence, client/server provides a clean separation of function based on the idea of service.

2. Shared resources: a server can service many clients at the same time and regulate their access to shared resources.

3. Asymmetrical protocols: there is a many-to-one relationship between clients and servers. Clients always initiate the dialog by requesting a service; servers passively wait for requests from the clients.

4. Transparency of location: the server is a process that can reside on the same machine as the client or on a different machine across the network. Client/server software usually masks the location of the server from the clients by redirecting the service calls when needed.

5. Mix and match: the ideal client/server software is independent of hardware or operating system software platform. One should be able to mix and match client and server platforms.


6. Message-based exchanges: clients and servers are loosely coupled systems that interact through a message-passing mechanism. The message is the delivery mechanism for service requests and replies.

7. Encapsulation of services: the server is a specialist. A message tells a server what service is requested, and it is up to the server to determine how to get the job done. Servers can be upgraded without affecting the clients as long as the published message interface does not change.

8. Scalability: client/server systems can be scaled horizontally or vertically. Horizontal scaling means adding or removing client workstations with only a slight performance impact. Vertical scaling means migrating to a larger and faster server machine or to multiple servers.

9. Integrity: the server code and data are centrally maintained, which results in cheaper maintenance and the guarding of shared data integrity. At the same time, the clients remain personal and independent.

3. Client/Server System Functional Layers

Client/server systems can be configured in a variety of ways, depending on what is to be accomplished. These different configurations are called client/server system architectures, the most common of which are the two-tier and three-tier architectures. To better understand the difference between the two, client/server systems should be broken up into three functional layers:

1. The presentation or user interface layer


Presentation is directly related to end-user I/O and includes such items as screen/window management, dialog control, function key mapping, field validation, range checking, context-sensitive help, data capture, and security.

2. The function or application processing layer

Function includes the programs responsible for business logic. It includes programmed access to user data, end-user I/O, and issuing DML statements.

3. The data management layer

Data management represents the functions related to executing data manipulation language (DML) statements, e.g., disk I/O, buffer management, index maintenance, and lock management. A DBMS or file management system usually performs these functions.

4. Two-Tier and Three-Tier Client/Server Systems

In a two-tier architecture, the client generally handles the responsibilities of both the presentation and business logic layers, and the data access layer is managed by the server. Because the client performs the processing requirements of both the presentation and business logic, the two-tier architecture is also known as a fat-client architecture.

Over time, it was found that two-tier architectures did not scale very well. In addition, so much client processing required installing, updating, and maintaining large, platform-specific applications on each client, which could be very expensive. For these reasons, there has been a move toward the three-tier architecture. With the three-tier architecture, each functional layer is isolated into a distinct (logical) component. In this architecture, clients are responsible for the presentation layer,


servers for the business logic layer, and servers for the data access layer. Because clients are then only responsible for handling presentation logic, this architecture is also known as a thin-client architecture. In a three-tier architecture, applications reside on the server rather than on the client, so they are platform-independent, and the same application can be used by many clients on multiple platforms. In the example of a corporate intranet, applications can reside on a Web server, and thin clients, such as a PC or a network computer, can access these applications from anywhere in the intranet. Organizations that adopt the three-tier client/server architecture can realize benefits in reduced maintenance cost and system flexibility.

5. Web-Based Client/Server Architecture

A web-based client/server system is typically a variation on the standard three-tier client/server system architecture. It is designed to take full advantage of co-operative processing and distributed computing, using the Internet or an intra-company WAN as the network infrastructure. The presentation logic in a web-based client/server system consists of a Web browser as the primary application interface. Generally, some of the processing logic is built into the Web browser to interpret HTML pages with embedded scripting languages such as JavaScript or VBScript. Because web-based applications are constructed by embedding component objects developed using Java or ActiveX technology into the HTML, the business logic layer of a web-based client/server system generally resides on a remote Web server. And, as usual, the supporting data


would reside on a database server, which, under the web-based paradigm, is also remote.

Functionally, web-based applications support many of the same business processes as their desktop counterparts. In fact, many vendors are Internet-enabling their popular desktop applications to conform to the web-based client/server paradigm. For example, a popular client/server financial system, built for a two-tier architecture, is now offered as a web-based solution. Under the new web-based configuration, the product is offered for a three-tier architecture. Consequently, the standard GUI, traditionally resident on the client PC, has given way to Netscape or Internet Explorer. The application, which would typically reside on a LAN server, can now be hosted on a WAN server in a separate geographic location, as can the database used to support the system.
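The three-tier split just described can be sketched in miniature: the browser (presentation tier) would request a URL, a middle-tier function applies the business logic and renders HTML, and a database answers the query. In this sketch an in-memory SQLite database stands in for the remote database server, and the "vessels" table and its fields are invented purely for illustration.

```python
# Sketch of the three-tier split: a browser would request a URL; the
# function below plays the middle tier (business logic), and SQLite
# stands in for the remote data tier. The "vessels" table is invented
# for illustration.
import sqlite3

db = sqlite3.connect(":memory:")               # data tier
db.execute("CREATE TABLE vessels (name TEXT, berth INTEGER)")
db.execute("INSERT INTO vessels VALUES ('MV Sumatra', 3)")
db.commit()

def handle_request(path):
    """Middle tier: map a URL path to a query and render HTML for the client."""
    if path == "/vessels":
        rows = db.execute("SELECT name, berth FROM vessels").fetchall()
        items = "".join(f"<li>{name} at berth {berth}</li>" for name, berth in rows)
        return "<html><body><ul>" + items + "</ul></body></html>"
    return "<html><body>Not found</body></html>"

html = handle_request("/vessels")
print(html)  # an HTML list containing 'MV Sumatra at berth 3'
```

Because the rendering lives entirely in the middle tier, any browser on any platform can display the result, which is the platform independence claimed for the thin-client model above.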

C. Research Methodology

1. Client/Server Systems Development Methodology

Microcomputers have been termed power tools or mind tools for an information age in which the key to success lies in acquiring and using knowledge. In client/server systems, end users should be involved in the design of the system and should help the systems people define the new requirements expected from the system. End users should also help with testing, as they will be using the new system.


In order to develop an effective client/server system, the system development life cycle should be viewed from the client/server business model point of view, and a clearly defined cycle of phases had to be designed. The client/server systems development methodology is an ongoing process, since all client/server systems are constantly in development and revision; the development of a client/server system never really stops. The system development life cycle using a client/server business model is a five-stage process that is constantly in effect on any given information system. These stages should be followed carefully to ensure an effective system and to reduce the chance of failure. The following figure details these five stages and the different activities involved in them.


System Investigation
Goals: Determine whether a business problem or opportunity exists; determine whether a new or improved C/S system is a feasible solution.
Activities: Gather data; conduct interviews; review procedures and policies; observe operations; identify the current situation; describe the existing system; define requirements.
Deliverables: Information Technology Profile on the Feasibility of a Web-based Client/Server Solution.

System Analysis
Goals: Analyze the information needs of end users, the organizational environment, and presently used systems; develop the functional requirements of a system that meets those needs.
Activities: Gather data; expand the requirements to the next level of detail; define information system requirements; prepare the external system design.
Deliverables: Functional requirement model; process flow model; data flow model.

System Design
Goals: Develop specifications for H/W, S/W, people, and data resources, and the information products that will satisfy the functional requirements of the proposed system.
Activities: Determine possible improvement goals; analyze application and data architectures; analyze technology platforms; prepare the implementation plan; identify the systems development environment; perform preliminary design; perform detailed design; design the system test.
Deliverables: System specifications (user interface, database, software, hardware, personnel).

System Implementation
Goals: Implement the IS solution.
Activities: Acquire H/W and S/W; test the system; train people; convert to the new system.
Deliverables: Proof-of-concept implementation.

System Evaluation and Maintenance
Goals: Monitor, evaluate, and modify the system as needed.
Activities: Support the H/W, S/W, and communication configuration.
Deliverables: Post-implementation review.

Figure 6.1. Stages and Activities of the Client/Server Systems Development Methodology


2. Client/Server Business Model

The client/server business model is an approach that helps define environmental attributes that might be enhanced by a client/server implementation. This approach not only provides a basis for the business case that supports such a recommendation, it also lays a foundation for performing business process re-engineering. Using the client/server business model, we engage in a discovery process to identify general areas of an enterprise considering a client/server solution. Typically, these include the following:

Needs: a clear statement of the organization's information processing needs.

End-user roles and responsibilities: a perspective on the organizational roles and responsibilities of end users.

Information flow: an understanding of how information flows throughout the organization; in other words, what information goes into the organization and what comes out.

Standards and procedures: insight into existing information management standards, policies, and/or procedures. This insight helps us understand what governs the way information is handled in the organization.

Education and training: an understanding of organizational education and training processes related to the system.

System configuration: an inventory of the existing information processing system configuration.


Risk: management's budget and risk tolerance thresholds. This information establishes limits on how much the organization is willing to spend in relation to the capabilities received.

Problems and issues: a perspective on known problems and/or issues with the organization's existing information processing infrastructure.

Metrics: a perspective on measurements that might be collected to objectively evaluate proposed capability enhancements.
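The discovery areas above can be captured as a simple structured profile, with a completeness check before any feasibility assessment is attempted. The sketch below is only one possible representation; the area keys mirror the list above, and the sample answers are invented for the example.

```python
# Illustrative structure for the client/server business-model discovery
# process: each area from the list above is recorded, and unanswered
# areas are flagged before a feasibility assessment is attempted.
DISCOVERY_AREAS = [
    "needs", "end_user_roles", "information_flow", "standards_and_procedures",
    "education_and_training", "system_configuration", "risk",
    "problems_and_issues", "metrics",
]

def missing_areas(profile):
    """Return the discovery areas that have not yet been answered."""
    return [area for area in DISCOVERY_AREAS if not profile.get(area)]

profile = {
    "needs": "Timely terminal operations reporting",
    "risk": "Limited budget for new hardware",
}
print(missing_areas(profile))  # seven areas still unanswered in this example
```

Only when every area is answered does the discovery process yield the information technology profile on which the feasibility assessment rests.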

D. Results

After developing the client/server business model, the research made assessments of the feasibility and potential benefits of a web-based client/server computing model for the case study, which resulted in the decision to downsize the existing mainframe system to an integrated web-based client/server system. The potential benefits associated with such a decision include, but are not limited to, the following:

1. Improved graphical user interface.
2. Easy data access.
3. Access to application development tools.
4. Multi-platform integration.
5. Improved business operations.
6. Distributed application processing.
7. Lower system operating costs.


E. Conclusions

The client/server business model makes the enterprise available at the desk. It provides access to data that previous architectures did not. Standards have been defined for client/server computing. If these standards are understood and used in developing web-based applications, organizations can reasonably expect to buy solutions today that can grow with their business needs without the constant need to revise those solutions. Architectures based on open systems standards can be implemented throughout the world, as global systems become the norm for large organizations. While a supportable common platform on a global scale is far from standardized, it is certainly becoming much easier to accomplish. From the desktop, enterprise-wide applications are indistinguishable from workgroup and personal applications.

Powerful enabling technologies with built-in conformance to open systems standards are evolving rapidly. Examples include object-oriented development, relational and object-oriented databases, multimedia, imaging, expert systems, geographic information systems (GIS), voice recognition and voice response, and text management. These technologies provide the opportunity to integrate their generic capabilities with the particular requirements of an organization to create a cost-effective and customized business solution. The web-based client/server model provides the ideal platform with which to integrate these enabling technologies, and well-defined interface standards enable the integration of products from several vendors to provide the right application solution.


REFERENCES
Umar, A.: Introduction to Client/Server Systems, New York: Wiley, 1993.

Umar, A.: Object Oriented Computing and Client/Server Systems, Englewood Cliffs, NJ: Prentice Hall, 1997.

Abrams, M. D., and I. W. Cotton: Introduction to Computer Networks: A Tutorial, Computer Network Association, 1978.

Alter, Steven: Information Systems, a Management Perspective, California: Addison-Wesley, 1992.

Berson, A.: Client/Server Architecture, New York: McGraw-Hill, 1996.

Chorafas, D.: Local Area Network Reference, New York: McGraw-Hill, 1989.

Comer, D. E.: Internetworking with TCP/IP: Principles, Protocols, and Architecture, Englewood Cliffs, NJ: Prentice Hall, 1991.

Crowley, C.: Operating Systems: A Design-Oriented Approach, Irwin, 1997.

Fortier, P. J.: "Generalized Simulation Model for the Evaluation of Local Computer Networks," Proceedings of HICSS-16, January 1983.

Fortier, P. J.: Handbook of LAN Technology, New York: McGraw-Hill, 1992.

Coulouris, G., et al.: Distributed Systems: Concepts and Design, New York: Addison-Wesley, 1994.

Goscinski, A.: Distributed Operating Systems: The Logical Design, New York: Addison-Wesley, 1991.

Hammond, J. L., and P. J. P. O'Reilly: Performance Analysis of Local Area Networks, Reading, MA: Addison-Wesley, 1986.

Jordan, L., and B. Churchill: Communications and Networking for the PC, New Riders Publishing, 1994.

Martin, Richard J., and Weadock: Bulletproofing Client/Server Systems, New York: McGraw-Hill, 1997.


Nugroho, Lukito Edi, and F. Soesianto: Pengembangan Sistem Informasi Managemen Produksi Untuk Menunjang Automasi Industri, Yogyakarta, 1998.

O'Brien, James A.: Management Information Systems: Managing Information Technology in the Networked Enterprise, 3rd ed., New York: Irwin, 1996.

Orfali, Robert, and D. Harkey: Client/Server Programming with OS/2 2.0, New York: Van Nostrand Reinhold, 1992.

Orfali, Robert, et al.: The Essential Client/Server Survival Guide, New York: John Wiley & Sons, 1996.

Siong, Neo Boow: Exploiting Information Technology for Business Competitiveness, Singapore: Addison-Wesley, 1996.

Sisindosat: Interim Report, Dokumen High Spot Review, Penyusunan Rencana Induk Sistem Informasi Manajemen PT. Pelabuhan Indonesia I, Medan, 1998.

Slonim, J., et al.: Building an Open System, Van Nostrand Reinhold, 1987.

Soesianto, F.: Sistem Informasi Managemen: Sebuah Pernyataan Atas Misi Untuk Sebuah Visi, Computec Executive Course, Yogyakarta, 1998.

Watterson, Karen: Client/Server for Managers, New York: Addison-Wesley, 1995.


APPENDIX

Information Technology Profile on the Feasibility of a Web-based Client/Server Solution
