Advantages
Easy and simple to set up, requiring only a hub or a switch to connect all the computers together.
You can access any file on another computer as long as it is in a shared folder.
If one computer fails, all the other computers connected to the network continue to work.
Real-time communication is any mode of telecommunications in which all users can exchange
information instantly or with negligible latency.
Content distribution can be used to send text-based information to a large group of users; this is
usually seen with product updates and news lists.
Peer-to-peer collaboration enables a group of people to share a workspace and files.
Disadvantages
Security is poor: little more than setting passwords on files that you don't want people to
access.
If the cables are not connected to the computers properly, there can be problems
accessing certain files.
It does not run efficiently with many computers; it is best used with two to eight computers.
There is no central server.
No centralization of data.
Lacks managed back-up and recovery facilities.
Upgrading and scalability are always a problem.
Client/Server Computing
Objectives
Optimized utilization of Hardware Resources
Sharing of Software Systems
High Utilization of Database
Better Security Aspects
Cost Effectiveness
Online Transaction Processing
Advantages
A client/server system can be scaled up to offer many services that can be used by multiple users.
Security is more advanced than on a peer-to-peer network: users can have password-protected
individual profiles so that nobody else can access their data at will.
All the data is stored on the servers, which generally have far greater security controls than
most clients. The server can control access and resources better, to guarantee that only those
clients with the appropriate permissions may access and change data.
Disadvantages
More expensive than a peer-to-peer network; you have to pay the start-up costs.
When the server goes down or crashes, all the computers connected to it become unavailable.
As the network expands, the server starts to slow down because of limits on the bit rate per
second it can handle.
When everyone tries to do the same thing at once, it takes a while for the server to complete
certain tasks.
The factors driving the move to client/server computing include:
PC-based end-user application software, together with the increasing power
and capacity of workstations.
The growing cost and performance advantages of PC-based platforms.
Client/Server Architecture
Client/Server architecture is based on hardware and software components that
interact to form a system. The system includes mainly three components.
(i) Client
(ii) Server
(iii) Communication middleware
The client is any computer process that requests services from a server. The
client uses the services provided by one or more server processes. The client is
also known as the front-end application, reflecting the fact that the end user usually
interacts with the client process.
The server is any computer process that provides services to clients and
supports multiple, simultaneous client requests. The server is also
known as the back-end application, reflecting the fact that the server process
provides the background services for the client process.
The communication middleware is any computer process through which
client and server communicate. Middleware is used to integrate application
programs and other software components in a distributed environment. It is also
known as the communication layer, which is made up of several
layers of software that aid the transmission of data and control information
between client and server. Communication middleware is usually associated
with a network.
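The three components can be sketched in a few lines of Python. This is a minimal illustration, not part of the original text: a server process provides a trivial background service (upper-casing text), a client process requests it on behalf of the user, and TCP sockets stand in for the communication middleware. The host and port values are arbitrary choices for the example.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # illustrative local endpoint
ready = threading.Event()

def server():
    # Server (back end): waits for a request and performs a background service
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        ready.set()                      # the middleware endpoint is now reachable
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)       # request arrives via the network layer
            conn.sendall(data.upper())   # the "service": upper-case the text

def client(text):
    # Client (front end): requests the service and returns the response
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(text.encode())
        return cli.recv(1024).decode()

t = threading.Thread(target=server)
t.start()
ready.wait()                  # don't connect before the server is listening
reply = client("hello server")
t.join()
print(reply)                  # HELLO SERVER
```

In a real system the middleware would also handle name resolution, data conversion, and security, but the request/response division of labor is the same.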
How these components interact depends on where data storage takes place and which
networking scheme has been selected to build the system.
A single-system image of all the organization's data and easy management of
change are the promises of client/server computing.
The single-system image is best implemented through the client/server model. Our experience
confirms that client/server computing can bring the enterprise to the desktop. Because the desktop
computer is the user's view into the enterprise, there is no better way to guarantee a single image than
to start at the desktop.
Character mode applications, usually driven from a block mode screen, attempt to display as
much data as possible in order to reduce the number of transmissions required to complete a
function. Dumb terminals impose limitations on the user interface including fixed length
fields, fixed length lists, crowded screens, single or limited character fonts, limited or no
graphics icons, and limited windowing for multiple application display. In addition, the fixed
layout of the screen makes it difficult to support the display of conditionally derived
information.
In contrast, the workstation GUI provides facilities to build the screen dynamically. This
enables screens to be built with a variable format based conditionally on the data values of
specific fields. Variable length fields can be scrollable, and lists of fields can have a scrollable
number of rows. This enables a much larger virtual screen to be used with no additional data
communicated between the client workstation and server.
Additional information can be encapsulated by varying the display's colors, fonts, graphics
icons, scrollable lists, pull-down lists, and option boxes. Option lists can be provided to
enable users to quickly select input values. Help can be provided, based on the context and
the cursor location, using the same pull-down list facilities.
Although it is a limited use of client/server computing capability, a GUI front end to an
existing application is frequently the first client/server-like application implemented by
organizations familiar with the host mainframe and dumb-terminal approach. The GUI
preserves the existing investment while providing the benefits of ease of use associated with a
GUI. It is possible to provide dramatic and functionally rich changes to the user interface
without host application change.
Electronic data interchange (EDI) is an example of this front-end processing. EDI enables
organizations to communicate electronically with their suppliers or customers. Frequently,
these systems provide the workstation front end to deal with the EDI link but continue to
work with the existing back-end host system applications. Messages are reformatted and
responses are handled by the EDI client, but application processing is done by the existing
application server. Productivity may be enhanced significantly by capturing information at
the source and making it available to all authorized users.
Typically, if users employ a multipart form for data capture, the form data is entered into
multiple systems. Capturing this information once to a server in a client/server application,
and reusing the data for several client applications can reduce errors, lower data entry costs,
and speed up the availability of this information.
Figure: Multiple applications (Application 1, Application 2, Application 3) integrated at the desktop.
The figure above illustrates how multiple applications can be integrated in this way. The data is
available to authorized users as soon as it is captured. There is no delay while the forms are
passed around the organization. This is usually a better technique than forms imaging
technology in which the forms are created and distributed internally in an organization. The
use of workflow-management technology and techniques, in conjunction with imaging
technology, is an effective way of handling this process when forms are filled out by a person
who is physically remote from the organization.
Intelligent Character Recognition (ICR) technology can be an extremely effective way to
automate the capture of data from a form, without the need to key. Current experience with
this technique shows accuracy rates greater than 99.5 percent for typed forms and greater
than 98.5 percent for handwritten forms.
organizationally. This is called business downsizing. All this results in hundreds of smaller
systems, all communicating with each other and serving the needs of local teams as well as individuals
working in an organization. This is called cultural downsizing. The net result is distributed computer
systems that support decentralized decision-making. This is the client/server revolution of the
nineties.
The benefits of rightsizing are reduction in cost and/or increased functionality, performance,
and flexibility in the applications of the enterprise. Significant cost savings usually are
obtained from a resulting reduction in employee, hardware, software, and maintenance
expenses. Additional savings typically accrue from the improved effectiveness of the user
community using client/server technology.
Downsizing is frequently implemented in concert with a flattening of the organizational
hierarchy. Eliminating middle layers of management implies empowerment to the first level
of management with the decision-making authority for the whole job. Information provided
at the desktop by networked PCs and workstations integrated with existing host (such as
mainframe and minicomputer) applications is necessary to facilitate this empowerment.
These desktop-host integrated systems house the information required to make decisions
quickly. To be effective, the desktop workstation must provide access to this information as
part of the normal business practice. Architects and developers must work closely with
business decision makers to ensure that new applications and systems are designed to be
integrated with effective business processes. Much of the cause of poor return on technology
investment is attributable to a lack of understanding by the designers of the day-to-day
business impact of their solutions.
Downsizing information systems is more than an attempt to use cheaper workstation
technologies to replace existing mainframes and minicomputers in use. Although some
benefit is obtained by this approach, greater benefit is obtained by reengineering the business
processes to really use the capabilities of the desktop environment. Systems solutions are
effective only when they are seen by the actual user to add value to the business process.
Client/server technology implemented on low-cost standard hardware will drive downsizing.
Client/server computing makes the desktop the user's enterprise. As we move from the
machine-centered era of computing into the workgroup era, the desktop workstation is empowering the
business user to regain ownership of his or her information resource. Client/server computing
combines the best of the old with the new: the reliable multiuser access to shared data and resources
with the intuitive, powerful desktop workstation.
Object-oriented development concepts are embodied in the use of an SDE created for an organization
from an architecturally selected set of tools. The SDE provides more effective development and
maintenance than companies have experienced with traditional host-based approaches.
Client/server computing is open computing. Mix and match is the rule. Development tools and
development environments must be created with both openness and standards in mind.
Mainframe applications rarely can be downsized, without modifications, to a workstation
environment. Modifications can be minor, wherein tools are used to port (or rehost) existing
mainframe source code, or major, wherein the applications are rewritten using completely new tools.
In porting, native COBOL compilers, functional file systems, and emulators for DB2, IMS DB/DC,
and CICS are available for workstations. In rewriting, there is a broad array of tools, ranging from
PowerBuilder, Visual Basic, and Access to larger-scale tools such as Forte and Dynasty.
Data that is collected as part of the normal business process and maintained on a server is immediately
available to all authorized users. The use of Structured Query Language (SQL) to define and
manipulate the data provides support for open access from all client processors and software. SQL
grants all authorized users access to the information through a view that is consistent with their
business need. Transparent network services ensure that the same data is available with the same
currency to all designated users.
2. Integrated Services
In the client/server model, all information that the client (user) is entitled to use is available at the
desktop. The desktop tools (e-mail, spreadsheet, presentation graphics, and word processing) are
available and can be used to deal with information provided by corporate application and database
servers resident on the network.
Example of Integration
Another excellent and easily visualized example of the integration possible in the client/server model
is implemented in the retail automobile service station. Figure illustrates the comprehensive business
functionality required in a retail gas service station. The service station automation (SSA) project
integrates the services of gasoline flow measurement, gas pump billing, credit card validation, cash
register management, point-of-sale, inventory control, attendance recording, electronic price signs,
tank monitors, accounting, marketing, truck dispatch, and a myriad of other business functions. These
business functions are all provided within the computer-hostile environment of the familiar service
station with the same type of workstations used to create this book. The system uses all of the familiar
client/server components, including local and wide-area network services. Most of the system users
are transitory employees with minimal training in computer technology. An additional challenge is the
need for real-time processing of the flow of gasoline as it moves through the pump. If the processor
does not detect and measure the flow of gasoline, the customer is not billed. The service station
automation system is a classic example of the capabilities of an integrated client/server application
implemented and working today.
3. Sharing Resources Among Diverse Platforms
The client/server computing model provides opportunities to achieve true open system computing:
applications may be created and implemented without regard to the hardware platforms or the
technical characteristics of the software. Thus, users may obtain client services and transparent access
to the services provided by database, communications, and applications servers. Operating systems
software and platform hardware are independent of the application and masked by the development
tools used to build the application.
In this approach, business applications are developed to deal with business processes invoked by the
existence of a user-created "event." An event such as the push of a button, selection of a list element,
entry in a dialog box, scan of a bar code, or flow of gasoline occurs without the application logic
being sensitive to the physical platforms.
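The event-driven structure described above can be sketched as a small dispatch table: business-logic handlers are registered against named events, and the platform layer calls a single dispatch function, so the handlers never inspect the physical source of the event. All names here are illustrative, not taken from the original text.

```python
# Registry mapping event names to business-logic handlers
handlers = {}

def on(event_name):
    """Decorator: register a handler for a named business event."""
    def register(func):
        handlers[event_name] = func
        return func
    return register

def dispatch(event_name, payload):
    """Called by the platform layer (button push, bar-code scan, pump flow)."""
    return handlers[event_name](payload)

@on("barcode_scanned")
def record_item(code):
    # Business logic sees only the event payload, never the scanner hardware
    return f"recorded item {code}"

@on("fuel_flow")
def bill_fuel(litres):
    # The same dispatch path serves a completely different physical source
    return f"billed for {litres} litres"

print(dispatch("barcode_scanned", "4711"))   # recorded item 4711
print(dispatch("fuel_flow", 35.5))           # billed for 35.5 litres
```

Swapping the physical platform only changes who calls `dispatch`; the registered business logic is untouched.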
Client/server applications operate in one of two ways. They can function as the front end to an
existing application (the more limited mainframe-centric model discussed earlier), or they can provide
data entry, storage, and reporting by using a distributed set of clients and servers. In either case, the
use, or even the existence, of a mainframe host is totally masked from the workstation developer by
the use of standard interfaces such as SQL.
4. Data Interchangeability and Interoperability
SQL is an industry-standard data definition and access language. This standard definition has enabled
many vendors to develop production-class database engines to manage data as SQL tables. Almost all
the development tools used for client/server development expect to reference a back-end database
server accessed through SQL. Network services provide transparent connectivity between the client
and local or remote servers. With some database products, such as Ingres Star, a user or application
can define a consolidated view of data that is actually distributed between heterogeneous, multiple
platforms.
The client/server model provides the capability to make ad hoc requests for information. As a result,
optimization of dynamic SQL and support for distributed databases are crucial for the success of the
second generation of a client/server application. The first generation implements the operational
aspects of the business process. The second generation is the introduction of ad hoc requests
generated by the knowledgeable user looking to gain additional insight from the information
available.
5. Masked Physical Data Access
When SQL is used for data access, users can access information from databases anywhere in the
network. From the local PC, local server, or wide area network (WAN) server, data access is
supported with the developer and user using the same data request. The only noticeable difference
may be performance degradation if the network bandwidth is inadequate. Data may be accessed from
dynamic random-access memory (D-RAM), from magnetic disk, or from optical disk, with the same
SQL statements. Logical tables can be accessed, without any knowledge of the ordering of columns
or awareness of extraneous columns, by selecting a subset of the columns in a table. Several tables
may be joined into a view that creates a new logical table for application program manipulation,
without regard to its physical storage format.
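A short sketch (using Python's built-in `sqlite3` module; table and column names are illustrative) shows how a view joins two tables into a new logical table that the application queries without any knowledge of the physical layout:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE supplier (supp_number TEXT PRIMARY KEY, supp_name TEXT, city TEXT);
    CREATE TABLE delivery (supp_number TEXT, part_number TEXT, quantity INTEGER);
    INSERT INTO supplier VALUES ('S1', 'Jones', 'New York');
    INSERT INTO delivery VALUES ('S1', 'P1', 200);
    -- The view is the application's logical table: a subset of columns,
    -- joined across physical tables
    CREATE VIEW supplier_deliveries AS
        SELECT s.supp_name, d.part_number, d.quantity
        FROM supplier s JOIN delivery d ON s.supp_number = d.supp_number;
""")
rows = con.execute("SELECT * FROM supplier_deliveries").fetchall()
print(rows)   # [('Jones', 'P1', 200)]
```

The application's `SELECT` against the view stays valid even if the underlying tables are reordered, extended, or moved to different storage.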
The use of new data types, such as binary large objects (BLOBs), enables other types of information
such as images, video, and audio to be stored and accessed using the same SQL statements for data
access. RPCs frequently include data conversion facilities to translate the stored data of one processor
into an acceptable format for another.
6. Location Independence of Data and Processing
Developers today are provided with considerable independence. Data is accessed through SQL
without regard to the hardware, operating system, or location providing the data. Consistent network
access methods envelop the application and SQL requests within an RPC. The network may be based
in Open Systems Interconnect (OSI), Transmission Control Protocol/Internet Protocol (TCP/IP), or
Systems Network Architecture (SNA), but no changes are required in the business logic coding. The
developer of business logic deals with a standard process logic syntax without considering the
physical platform. Development languages such as COBOL, C, and Natural, and development tools
such as Telon, Ingres 4GL, PowerBuilder, CSP, as well as some evolving CASE tools such as
Bachman, Oracle CASE, and Texas Instruments' IEF all execute on multiple platforms and generate
applications for execution on multiple platforms.
The application developer deals with the development language and uses a version of SDE
customized for the organization to provide standard services. The specific platform characteristics are
transparent and subject to change without affecting the application syntax.
7. Centralized Management
As processing steers away from the central data center to the remote office and plant, workstation
server, and local area network (LAN) reliability must approach that provided today by the centrally
14
located mini- and mainframe computers. The most effective way to ensure this is through the
provision of monitoring and support from these same central locations. A combination of technologies
that can "see" the operation of hardware and software on the LANmonitored by experienced
support personnelprovides the best opportunity to achieve the level of reliability required.
The first step in effectively providing remote LAN management is to establish standards for hardware,
software, networking, installation, development, and naming. These standards, used in concert with
products such as IBM's Systemview, Hewlett-Packard's Openview, Elegant's ESRA, Digital's EMA,
and AT&T's UNMA products, provide the remote view of the LAN. Other tools, such as PC Connect
for remote connect, PCAssure from Centel for security, products for hardware and software inventory,
and local monitoring tools such as Network General's Sniffer, are necessary for completing the
management process.
SUPP NAME
STATUS
CITY
PART
PART NUMBER
PART NAME
COLOR
WEIGH
T
DELIVERY
SUPP NUMBER PART NUMBER
Figure: Sample schema diagram
QUANTITY
15
CITY
The Schema diagram displays only some aspect of a schema, such as the names of record types and
data items, and some types of constraints. The actual data in a database may change quite frequently;
for example, the database shown in Figure changes every time we add a supplier or enter a new color
for a part. The data in the database in a particular moment in time is called database state or
snapshot. It is also called the current set of occurrences or instances in the database. Every time we
insert or delete a record, or change the value of a data item in a record, we change a one state of the
database into another state.
The distinction between database schema and database state is very important. When we define
a new database, we specify its database schema only to the DBMS. At this point, the corresponding
database state is the empty state with no data. We get the initial state of the database when the
database is first loaded with the initial data. From then, on every time an update operation is applied
to the database, we get another database state. At any point in time, the database has a current state.
The DBMS is partly responsible for ensuring that every state of the database is a valid state - that is, a
state that satisfies the structure and constraints specified in the schema. Example in Figure shows
current state of the database.
SUPPLIER
SUPP NUMBER
S1
S2
S3
SUPP NAME
Jones
Black
Smith
STATUS
20
30
10
CITY
New York
Paris
London
PART
PART NUMBER
PART NAME
COLOR
P1
P2
Desk
Monitor
Blue
Red
WEIGH
T
20
10
DELIVERY
SUPP NUMBER
S1
PART NUMBER
P1
CITY
London
Paris
QUANTITY
200
operations, and constraints. A high-level data model or an implementation data model can be
used at this level.
The external or view level includes a number of external schemas or user views. Each
external schema describes the part of the database that a particular user group is interested in
and hides the rest of the database from that user group. A high-level data model or an
implementation data model can be used at this level.
Logical data independence is the capacity to change the conceptual schema without having
to change external schemas or application programs. We may change the conceptual schema
to expand the database (by adding a record type or data item), or to reduce the database (by
removing a record type or data item). In the latter case, external schemas that refer only to
the remaining data should not be affected. Only the view definition and the mappings need be
changed in a DBMS that supports logical data independence. Application programs that
reference the external schema constructs must work as before, after the conceptual schema
undergoes a logical reorganization. Changes to constraints can be applied also to the
conceptual schema without affecting the external schemas or application programs.
Physical data independence is the capacity to change the internal schema without having to
change the conceptual (or external) schemas. Changes to the internal schema may be needed
because some physical files had to be reorganized - for example, by creating additional
access structures - to improve the performance of retrieval or update. If the same data as
before remains in the database, we should not have to change the conceptual schema.
17
1. DDL - For describing data and data structures a suitable description tool, a data definition
language
(DDL), is needed. With this help a data scheme can be defined and also changed later. Typical
DDL operations are:
Creation of tables and definition of attributes (CREATE TABLE ...)
Change of tables by adding or deleting attributes (ALTER TABLE )
Deletion of whole table including content (!) (DROP TABLE )
2. DML
Additionally a language for the descriptions of the operations with data like store, search, read,
change, etc. the so-called data manipulation, is needed. Such operations can be done with a data
manipulation language (DML). Within such languages keywords like insert, modify, update, delete,
select, etc. are common. Typical DML operations are:
Add data (INSERT)
Change data (UPDATE)
Delete data (DELETE)
Query data (SELECT)
Often these two languages for the definition and manipulation of databases are combined in one
comprehensive language.
3.2. Database Interfaces (Figure 10)
operations might be restricted by the application. Form-based user interfaces are wide spread and are
a very important means of interacting with a DBMS. They are easy to use and have the advantage that
the user does not need special knowledge about database languages like SQL.
19