
NMCA-E12: CLIENT SERVER COMPUTING

Unit -1: Client Server Computing


Syllabus
DBMS concept and architecture,
Single system image,
Client Server architecture,
Mainframe-centric client server computing,
Downsizing and client server computing,
Preserving mainframe applications investment through porting,
Client server development tools,
Advantages of client server computing.
Definition
Client - A client is a single-user workstation that provides presentation services and the appropriate
computing, connectivity, and database services and interfaces relevant to the user.
Server - A server is one or more multi-user processors with shared memory providing computing,
connectivity, and database services and the interfaces relevant to the users.
Client Server Computing
Client/Server describes the relationship between two computer programs in which one program, the
client, makes a service request from another program, the server, which fulfills the request.
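As a minimal sketch of this request/response relationship, the following Python example runs a tiny server and a client in one process; the port number and the "TIME" request are illustrative assumptions, not part of the definition above.

    import socket
    import threading
    import time

    def server():
        # Server: waits for a service request and fulfills it.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind(("localhost", 9090))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                if conn.recv(1024) == b"TIME":
                    conn.sendall(time.ctime().encode())

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)  # give the server a moment to start listening

    # Client: makes the service request; the server fulfills it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("localhost", 9090))
        cli.sendall(b"TIME")
        print("Server replied:", cli.recv(1024).decode())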
Peer-peer network
A computer network in which each computer can act as a client or a server for the other
computers in the network, allowing shared access to files and peripherals without the need for a
central server. It is a communications model in which each party has the same capabilities and either
party can initiate a communication session. Other models with which it might be contrasted include
the client/server model and the master/slave model. In some cases, peer-to-peer communication is
implemented by giving each communication node both server and client capabilities.
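A hedged sketch of such a node in Python, giving one process both capabilities; the ports and the shared-file reply are illustrative assumptions.

    import socket
    import threading
    import time

    def serve(port):
        # Server capability: answer any peer that asks for our shared files.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind(("localhost", port))
            srv.listen(1)
            while True:
                conn, _ = srv.accept()
                with conn:
                    conn.sendall(b"shared: notes.txt, report.pdf")

    def ask(port):
        # Client capability: initiate a session with another peer.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect(("localhost", port))
            return cli.recv(1024).decode()

    # Two peers; either one can initiate a session with the other.
    for port in (9001, 9002):
        threading.Thread(target=serve, args=(port,), daemon=True).start()
    time.sleep(0.2)
    print(ask(9001))  # peer B acting as client of peer A
    print(ask(9002))  # peer A acting as client of peer B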
Advantages

Easy and simple to set up, requiring only a hub or a switch to connect all the computers.
You can access any file on another computer as long as it is in a shared folder.
If one computer fails, all the other computers connected to the network continue to work.
Real-time communication is possible: all users can exchange information instantly or with
negligible latency.
Content distribution can be used to send text-based information to a large group of users;
this is usually seen with product updates and news lists.

Peer-to-peer collaboration enables a group of people to share a workspace and files.

Disadvantages

Security is weak: there is little protection beyond setting passwords on files that you don't
want people to access.
If the connections between the computers are faulty, there can be problems accessing
certain files.
It does not run efficiently with many computers; it works best with two to eight.
There is no central server.
There is no centralization of data.
Back-up and recovery are difficult to manage.
Upgrading and scaling are always a problem.

Peer-peer system vs. Client/Server

Client/Server Computing
Objectives
Optimized utilization of Hardware Resources
Sharing of Software Systems
High Utilization of Database
Better Security Aspects
Cost Effectiveness
Online Transaction Processing

Advantages

A client/server system can be scaled up to many services that can be used by multiple users.
Security is more advanced than in a peer-to-peer network: users can have password-protected
individual profiles, so nobody can access resources they are not entitled to.
All the data is stored on the servers, which generally have far greater security controls than
most clients. A server can control access and resources better, to guarantee that only those
clients with the appropriate permissions may access and change data.

Disadvantages
More expensive than a peer-to-peer network because of the start-up costs.
When the server goes down or crashes, all the computers connected to it become unavailable
to use.
As you expand the network, the server starts to slow down because the available bandwidth
(bits per second) is shared among more clients.
When everyone tries to do the same thing at once, the server takes longer to complete certain
tasks.

Client/server design: the classical model

Forces that drive the Client/Server


The general forces that drive the move to client/server can be classified in two general
categories based on:
(i) Business perspective.
(ii) Technology perspective.
Business Perspective
The business perspective aims at obtaining the following achievements through the system:

Increased productivity.
Superior quality.
Improved responsiveness.
Focus on core business.

The effective factors that govern the driving forces are given below:

1. The changing business environment: Business process reengineering has become
necessary for competitiveness in the market, forcing organizations to find new ways to
manage their business despite fewer personnel, more outsourcing, a market-driven
orientation, and rapid product obsolescence.

2. Globalization: Information Technology plays an important role in bringing all trade onto a
single platform by eliminating barriers. IT helps and supports various marketing priorities
like quality, cost, product differentiation, and services.
3. The growing need for enterprise data access: One of the major MIS functions is to provide
quick and accurate data access for decision-making at many organizational levels. Managers
and decision makers need fast, on-demand data access through easy-to-use interfaces.
4. The demand for end user productivity gains based on the efficient
use of data resources: The growth of personal computers is a direct
result of the productivity gains experienced by end-users at all business
levels.
Technology Perspective
Technological advances have made client/server computing practical through the proper use
of the following:

Intelligent desktop devices.
Computer network architectures.
Technical advances like microprocessor technology, data communication and the Internet,
database systems, operating systems, and graphical user interfaces.
Trends in computer usage like:
(i) Standardization: trend towards open systems and adoption of industry standards,
which include:
de facto standards: protocols or interfaces that are made public and widely
accepted (e.g., SNA, TCP/IP, VGA);
de jure standards: protocols or interfaces specified by a formal standards-making
body (e.g., ISO's OSI, ANSI C).
(ii) Human-Computer Interaction (HCI): trend towards GUIs and user control.
(iii) Information dissemination: trend towards data warehousing and data mining.

PC-based end-user application software, together with the increasing power and capacity
of workstations.
The growing cost and performance advantages of PC-based platforms.

Client/Server Architecture
Client/Server architecture is based on hardware and software components that
interact to form a system. The system includes mainly three components:
(i) Hardware (the client and the server).
(ii) Software (which makes the hardware operational).
(iii) Communication middleware (associated with a network, used to link the hardware
and software).

The client is any computer process that requests services from a server. The client uses the
services provided by one or more server processors. The client is also known as the front-end
application, reflecting the fact that the end user usually interacts with the client process.
The server is any computer process providing services to clients; it supports multiple
simultaneous client requests. The server is also known as the back-end application, reflecting
the fact that the server process provides the background services for the client process.
The communication middleware is any computer process through which client and server
communicate. Middleware is used to integrate application programs and other software
components in a distributed environment. It is also known as the communication layer,
which is made up of several layers of software that aid the transmission of data and control
information between client and server. Communication middleware is usually associated
with a network.
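As a rough sketch of what middleware hides, the example below uses Python's standard XML-RPC modules: the middleware layer marshals the call, carries it across the network, and returns the reply, so neither the client nor the server logic touches sockets directly. The service name, port, and account data are illustrative assumptions.

    from xmlrpc.server import SimpleXMLRPCServer
    import threading
    import xmlrpc.client

    def get_balance(account_id):
        # Back-end (server) logic; the account data is hypothetical.
        return {"A-100": 2500.0}.get(account_id, 0.0)

    srv = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
    srv.register_function(get_balance, "get_balance")
    threading.Thread(target=srv.serve_forever, daemon=True).start()

    # Front-end (client): the call looks local; the communication
    # middleware transmits the request and the result transparently.
    proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
    print(proxy.get_balance("A-100"))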

Single System Image

A single system image is the illusion, created by software or hardware, that presents a
collection of resources as one, more powerful resource. SSI makes the system appear like a
single machine to the user, to applications, and to the network. With it, all network resources
present themselves to every user in the same way from every workstation (see the figure) and
can be used transparently after the user has authenticated once. The user environment, with
the desktop and often-used tools such as editors and mailers, is also organized in a uniform
way. The workstation on the desk appears to provide all these services. In such an
environment the user need not bother about how the processors (both the client and the
server) are working, where data storage takes place, or which networking scheme has been
selected to build the system.
A single-system image of all the organization's data and easy management of change are the
promises of client/server computing.

Further desired services in a single-system-image environment are:

Single File Hierarchy; for example: xFS, AFS, Solaris MC Proxy.
Single Control Point: management from a single GUI, with access to every resource provided
to each user as per their valid requirements.
Single virtual networking.
Single memory space, e.g., Network RAM / DSM.
Single Job Management, e.g., GLUnix, Codine, LSF.
Single User Interface: like a workstation/PC windowing environment (CDE in Solaris/NT);
Web technology can also be used.
Standard security procedure: access to every application is provided through a standard
security procedure, maintained in a security layer.
Every application reports errors in the same way and also resolves them in the same way.
Standard functions work in the same way, so new applications can be added with minimal
training; emphasis is placed only on new business functions.

Hence, single-system-image is the only way to achieve acceptable technological transparency.


Some of the visible benefits due to single-system image are as given below:

Increases the utilization of system resources transparently.
Facilitates process migration across workstations transparently, along with load balancing.
Provides improved reliability and higher availability.
Provides overall improved system response time and performance.
Gives simplified system management.
Reduces the risk of operator errors.
The user need not be aware of the underlying system.
Provides an architecture to use the machines in the network effectively.

Client/Server Computing
The single-system image is best implemented through the client/server model. Our experience
confirms that client/server computing can provide the enterprise to the desktop. Because the desktop
computer is the user's view into the enterprise, there is no better way to guarantee a single image than
to start at the desktop.

Mainframe-Centric Client/Server Computing


The mainframe-centric model uses the presentation capabilities of the workstation to front-end
existing applications. The data is displayed or entered through the use of pull-down lists, scrollable
fields, check boxes, and buttons; the user interface is easy to use, and information is presented more
clearly. In this mainframe-centric model, mainframe applications continue to run unmodified, because
the existing terminal data stream is processed by the workstation-based communications API.

Character-mode applications, usually driven from a block-mode screen, attempt to display as
much data as possible in order to reduce the number of transmissions required to complete a
function. Dumb terminals impose limitations on the user interface, including fixed-length
fields, fixed-length lists, crowded screens, single or limited character fonts, limited or no
graphics icons, and limited windowing for multiple application display. In addition, the fixed
layout of the screen makes it difficult to support the display of conditionally derived
information.
In contrast, the workstation GUI provides facilities to build the screen dynamically. This
enables screens to be built with a variable format based conditionally on the data values of
specific fields. Variable length fields can be scrollable, and lists of fields can have a scrollable
number of rows. This enables a much larger virtual screen to be used with no additional data
communicated between the client workstation and server.
Additional information can be encapsulated by varying the display's colors, fonts, graphics
icons, scrollable lists, pull-down lists, and option boxes. Option lists can be provided to
enable users to quickly select input values. Help can be provided, based on the context and
the cursor location, using the same pull-down list facilities.
Although it is a limited use of client/server computing capability, a GUI front end to an
existing application is frequently the first client/server-like application implemented by
organizations familiar with the host mainframe and dumb-terminal approach. The GUI
preserves the existing investment while providing the benefits of ease of use associated with a
GUI. It is possible to provide dramatic and functionally rich changes to the user interface
without host application change.
Electronic data interchange (EDI) is an example of this front-end processing. EDI enables
organizations to communicate electronically with their suppliers or customers. Frequently,
these systems provide the workstation front end to deal with the EDI link but continue to
work with the existing back-end host system applications. Messages are reformatted and
responses are handled by the EDI client, but application processing is done by the existing
application server. Productivity may be enhanced significantly by capturing information at
the source and making it available to all authorized users.
Typically, if users employ a multipart form for data capture, the form data is entered into
multiple systems. Capturing this information once to a server in a client/server application,
and reusing the data for several client applications can reduce errors, lower data entry costs,
and speed up the availability of this information.

Figure: A single desktop front-ending multiple applications (Application 1, Application 2, Application 3)

The figure above illustrates how multiple applications can be integrated in this way. The data is
available to authorized users as soon as it is captured. There is no delay while the forms are
passed around the organization. This is usually a better technique than forms imaging
technology in which the forms are created and distributed internally in an organization. The
use of workflow-management technology and techniques, in conjunction with imaging
technology, is an effective way of handling this process when forms are filled out by a person
who is physically remote from the organization.
Intelligent Character Recognition (ICR) technology can be an extremely effective way to
automate the capture of data from a form, without the need to key. Current experience with
this technique shows accuracy rates greater than 99.5 percent for typed forms and greater
than 98.5 percent for handwritten forms.

Downsizing and Client Server computing


Rightsizing and downsizing are strategies used with the client/server model to take advantage
of the lower cost of workstation technology. Rightsizing and upsizing may involve the
addition of more diverse or more powerful computing resources to an enterprise computing
environment.
Downsizing: The downward migration of business applications, often from mainframes to PCs, is
driven by the low cost of workstations; today's workstations are as powerful as the last decade's
mainframes. The result is that clients get more power for less money and better performance, and the
system offers the flexibility to make other purchases or to increase overall benefits.
Rightsizing: Moves client/server applications to the most appropriate server platform; servers from
different vendors can co-exist, and the network is known as "the system." Getting data from the
system no longer refers to a single mainframe. As a matter of fact, we probably don't know where the
server physically resides.
Upsizing: The bottom-up trend of networking all the stand-alone PCs and workstations at the
department or workgroup level. Early LANs were implemented to share hardware (printers, scanners,
etc.), but LANs are now being implemented to share data and applications in addition to hardware.
Mainframes are being replaced by less expensive PCs on networks; this is called computer
downsizing. Companies implementing business process reengineering are downsizing
organizationally; this is called business downsizing. All this results in hundreds of smaller systems,
all communicating with each other and serving the needs of local teams as well as individuals
working in an organization; this is called cultural downsizing. The net result is distributed computer
systems that support decentralized decision-making. This is the client/server revolution of the
nineties.

The benefits of rightsizing are reduction in cost and/or increased functionality, performance,
and flexibility in the applications of the enterprise. Significant cost savings usually are
obtained from a resulting reduction in employee, hardware, software, and maintenance
expenses. Additional savings typically accrue from the improved effectiveness of the user
community using client/server technology.
Downsizing is frequently implemented in concert with a flattening of the organizational
hierarchy. Eliminating middle layers of management implies empowerment to the first level
of management with the decision-making authority for the whole job. Information provided
at the desktop by networked PCs and workstations integrated with existing host (such as
mainframe and minicomputer) applications is necessary to facilitate this empowerment.
These desktop-host integrated systems house the information required to make decisions
quickly. To be effective, the desktop workstation must provide access to this information as
part of the normal business practice. Architects and developers must work closely with
business decision makers to ensure that new applications and systems are designed to be
integrated with effective business processes. Much of the cause of poor return on technology
investment is attributable to a lack of understanding by the designers of the day-to-day
business impact of their solutions.
Downsizing information systems is more than an attempt to use cheaper workstation
technologies to replace existing mainframes and minicomputers in use. Although some
benefit is obtained by this approach, greater benefit is obtained by reengineering the business
processes to really use the capabilities of the desktop environment. Systems solutions are
effective only when they are seen by the actual user to add value to the business process.
Client/server technology implemented on low-cost standard hardware will drive downsizing.
Client/server computing makes the desktop the users' enterprise. As we move from the
machine-centered era of computing into the workgroup era, the desktop workstation is empowering
the business user to regain ownership of his or her information resource. Client/server computing
combines the best of the old with the new: the reliable multiuser access to shared data and resources
together with the intuitive, powerful desktop workstation.
Object-oriented development concepts are embodied in the use of a systems development
environment (SDE) created for an organization from an architecturally selected set of tools. The SDE
provides more effective development and maintenance than companies have experienced with
traditional host-based approaches.
Client/server computing is open computing. Mix and match is the rule. Development tools and
development environments must be created with both openness and standards in mind.
Mainframe applications rarely can be downsized, without modifications, to a workstation
environment. Modifications can be minor, wherein tools are used to port (or rehost) existing
mainframe source code, or major, wherein the applications are rewritten using completely new tools.
In porting, native COBOL compilers, functional file systems, and emulators for DB2, IMS DB/DC,
and CICS are available for workstations. In rewriting, there is a broad array of tools, ranging from
PowerBuilder, Visual Basic, and Access to larger-scale tools such as Forte and Dynasty.

Preserving Your Mainframe Applications Investment Through Porting
Although the percentage of client/server applications development is rapidly moving away
from a mainframe-centric model, it is possible to downsize and still preserve a large amount
of the investment in application code. The Micro Focus COBOL/2 Workbench product
provides the capability to develop systems on a PC LAN for production execution on an IBM
mainframe. These products, in conjunction with the ProxMVS product, enable extensive unit
and integration testing to be done on a PC LAN before moving the system to the mainframe
for final system and performance testing. Used within a properly structured development
environment, these products can dramatically reduce mainframe development costs.
Micro Focus COBOL/2 supports GUI development targeted for implementation with OS/2
Presentation Manager and Microsoft Windows 3.x. Another product, the Dialog System,
provides support for GUI and character mode applications that are independent of the
underlying COBOL applications. Micro Focus has added an Object Oriented (OO) option to
its workbench to facilitate the creation of reusable components. The OO option supports
integration with applications developed under Smalltalk/V PM.
IBM's CICS for OS/2, OS400, RS6000, and HP/UX products enable developers to directly
port applications using standard CICS call interfaces from the mainframe to the workstation.
These applications can then run under OS/2, AIX, OS400, HP/UX, or MVS/VSE without
modification. This enables developers to create applications for execution in the CICS MVS
environment and later to port them to these other environments without modification.
Conversely, applications can be designed and built for such environments and subsequently
ported to MVS (if this is a logical move). Organizations envisioning such a migration should
ensure that their SDE incorporates standards that are consistent for all of these platforms.
Telon provides particularly powerful facilities that support the object-oriented development
concepts necessary to create a structured development environment and to support code and
function reuse. This combination, used in conjunction with a structured development
environment that includes appropriate standards, provides the capability to build
single-system-image applications today. In an environment that requires preservation of existing
host-based applications, this product suite is among the most complete for client/server
computing.
These products, combined with the cheap processing power available on the workstation,
make the workstation LAN an ideal development and maintenance environment for existing
host processors. When an organization views mainframe or minicomputer resources as real
dollars, developers can usually justify offloading the development in only three to six
months. Developers can be effective only when a proper systems development environment is
put in place and provided with a suite of tools offering the host capabilities plus enhanced
connectivity. Workstation operating systems are still more primitive than the existing host
server MVS, VMS, or UNIX operating systems. Therefore, appropriate standards and
procedures must be put in place to coordinate shared development. The workstation
environment will change. Only projects built with common standards and procedures will be
resilient enough to remain viable in the new environment.
The largest savings come from new projects that can establish appropriate standards at the
start and do all development using the workstation LAN environment. It is possible to retrofit
standards to an existing environment and establish a workstation with a LAN-based
maintenance environment. The benefits are less because retrofitting the standards creates
some costs. However, these costs are justified when the application is scheduled to undergo
significant maintenance or if the application is very critical and there is a desire to reduce the
error rate created by changes. The discipline associated with the movement toward
client/server-based development, and the transfer of code between the host and client/server
will almost certainly result in better testing and fewer errors. The testing facilities and
usability of the workstation will make the developer and tester more effective and therefore
more accurate.
Key to the success of full client/server applications is selecting an appropriate application and
technical architecture for the organization. Once the technical architecture is defined, the
tools are known. The final step is to implement an SDE to define the standards needed to use
the tools effectively. This SDE is the collection of hardware, software, standards, standard
procedures, interfaces, and training built up to support the organization's particular needs.

Client/Server Development Tools


Many construction projects fail because their developers assume that a person with a toolbox
full of carpenter's tools is a capable builder. To be a successful builder, a person must be
trained to build according to standards. By reusing the models previously built to accomplish
integration, we all benefit from cost and risk reduction.
Computer systems development using an SDE takes advantage of these same concepts: Let's
build on what we've learned. Let's reuse as much as possible to save development costs,
reduce risk, and provide the users with a common "look and feel." Selecting a good set of
tools affords an opportunity to be successful. Without the implementation of a comprehensive
SDE, developers will not achieve such success.
The introduction of a whole new generation of Object Technology based tools for
client/server development demands that proper standards be put in place to support shared
development, reusable code, and interfaces to existing systems, security, error handling, and
an organizational standard "look and feel." As with any new technology, there will be
changes. Developers can build application systems closely tied to today's technology or use
an SDE and develop applications that can evolve along with the technology platform.

The Advantages of Client/Server Computing


The client/server computing model provides the means to integrate personal productivity applications
for an individual employee or manager with specific business data processing needs to satisfy total
information processing requirements for the entire enterprise.
1. Enhanced Data Sharing

Data that is collected as part of the normal business process and maintained on a server is immediately
available to all authorized users. The use of Structured Query Language (SQL) to define and
manipulate the data provides support for open access from all client processors and software. SQL
grants all authorized users access to the information through a view that is consistent with their
business need. Transparent network services ensure that the same data is available with the same
currency to all designated users.
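A minimal sketch of this idea, using Python's built-in sqlite3 module as a stand-in for a database server (the employee table and the two views are hypothetical): each user group gets a view consistent with its business need, over the same shared data.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE employee (id INTEGER, name TEXT, dept TEXT, salary REAL);
    INSERT INTO employee VALUES (1, 'Jones', 'Sales', 40000),
                                (2, 'Black', 'Plant', 35000);
    -- HR needs salaries; the phone directory view must not expose them.
    CREATE VIEW hr_view AS SELECT name, dept, salary FROM employee;
    CREATE VIEW directory_view AS SELECT name, dept FROM employee;
    """)
    print(db.execute("SELECT * FROM directory_view").fetchall())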
2. Integrated Services
In the client/server model, all information that the client (user) is entitled to use is available at the
desktop. The desktop tools (e-mail, spreadsheet, presentation graphics, and word processing) are
available and can be used to deal with information provided by corporate application and database
servers resident on the network.
Example of Integration

Another excellent and easily visualized example of the integration possible in the client/server model
is implemented in the retail automobile service station. The figure illustrates the comprehensive
business functionality required in a retail gas service station. The service station automation (SSA)
project integrates the services of gasoline flow measurement, gas pump billing, credit card validation,
cash register management, point-of-sale, inventory control, attendance recording, electronic price signs,
tank monitors, accounting, marketing, truck dispatch, and a myriad of other business functions. These
business functions are all provided within the computer-hostile environment of the familiar service
station with the same type of workstations used to create this book. The system uses all of the familiar
client/server components, including local and wide-area network services. Most of the system users
are transitory employees with minimal training in computer technology. An additional challenge is the
need for real-time processing of the flow of gasoline as it moves through the pump. If the processor
does not detect and measure the flow of gasoline, the customer is not billed. The service station
automation system is a classic example of the capabilities of an integrated client/server application
implemented and working today.
3. Sharing Resources Among Diverse Platforms
The client/server computing model provides opportunities to achieve true open system computing:
applications may be created and implemented without regard to the hardware platforms or the
technical characteristics of the software. Thus, users may obtain client services and transparent access
to the services provided by database, communications, and applications servers. Operating systems
software and platform hardware are independent of the application and masked by the development
tools used to build the application.
In this approach, business applications are developed to deal with business processes invoked by the
existence of a user-created "event." An event, such as the push of a button, selection of a list element,
entry in a dialog box, scan of a bar code, or flow of gasoline, occurs without the application logic
being sensitive to the physical platforms.
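A small sketch of this event-driven style (the event names, payloads, and handlers are illustrative assumptions): business logic is registered against named events and never asks which physical device raised them.

    handlers = {}

    def on(event):
        # Register a business function as the handler for a named event.
        def register(fn):
            handlers[event] = fn
            return fn
        return register

    @on("button_push")
    def start_sale(payload):
        print("sale started at pump", payload["pump"])

    @on("gas_flow")
    def meter_flow(payload):
        print("billing", payload["litres"], "litres")

    def dispatch(event, payload):
        handlers[event](payload)  # logic is platform-independent

    dispatch("button_push", {"pump": 3})
    dispatch("gas_flow", {"litres": 41.5})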
Client/server applications operate in one of two ways. They can function as the front end to an
existing application (the more limited mainframe-centric model discussed above), or they can provide
data entry, storage, and reporting by using a distributed set of clients and servers. In either case, the
use, or even the existence, of a mainframe host is totally masked from the workstation developer by
the use of standard interfaces such as SQL.
4. Data Interchangeability and Interoperability
SQL is an industry-standard data definition and access language. This standard definition has enabled
many vendors to develop production-class database engines to manage data as SQL tables. Almost all
the development tools used for client/server development expect to reference a back-end database
server accessed through SQL. Network services provide transparent connectivity between the client
and local or remote servers. With some database products, such as Ingres Star, a user or application
can define a consolidated view of data that is actually distributed between heterogeneous, multiple
platforms.
The client/server model provides the capability to make ad hoc requests for information. As a result,
optimization of dynamic SQL and support for distributed databases are crucial for the success of the
second generation of a client/server application. The first generation implements the operational
aspects of the business process. The second generation is the introduction of ad hoc requests
generated by the knowledgeable user looking to gain additional insight from the information
available.
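A hedged sketch of such an ad hoc (dynamic SQL) request, again with sqlite3 standing in for a server and a hypothetical sales table: the user chooses columns and a filter at run time, and the SQL text is assembled on the fly, with a column whitelist so only known names reach the query.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
    db.execute("INSERT INTO sales VALUES ('East', 'Desk', 1200.0)")

    ALLOWED = {"region", "product", "amount"}

    def ad_hoc(columns, where_col, value):
        # Build the SQL dynamically from the user's run-time choices.
        assert set(columns) <= ALLOWED and where_col in ALLOWED
        sql = f"SELECT {', '.join(columns)} FROM sales WHERE {where_col} = ?"
        return db.execute(sql, (value,)).fetchall()

    print(ad_hoc(["product", "amount"], "region", "East"))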
5. Masked Physical Data Access
When SQL is used for data access, users can access information from databases anywhere in the
network. From the local PC, local server, or wide area network (WAN) server, data access is
supported with the developer and user using the same data request. The only noticeable difference
may be performance degradation if the network bandwidth is inadequate. Data may be accessed from
dynamic random-access memory (D-RAM), from magnetic disk, or from optical disk, with the same
SQL statements. Logical tables can be accessed, without any knowledge of the ordering of columns
or awareness of extraneous columns, by selecting a subset of the columns in a table. Several tables
may be joined into a view that creates a new logical table for application program manipulation,
without regard to its physical storage format.
The use of new data types, such as binary large objects (BLOBs), enables other types of information
such as images, video, and audio to be stored and accessed using the same SQL statements for data
access. RPCs frequently include data conversion facilities to translate the stored data of one processor
into an acceptable format for another.
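The following sketch (sqlite3 again as a stand-in; the part table and file name are hypothetical) shows the masking described above: the same SQL statement, selecting only a subset of the columns, runs unchanged whether the table lives in memory or in a file on disk.

    import sqlite3

    SETUP = "CREATE TABLE IF NOT EXISTS part (num TEXT, name TEXT, color TEXT)"
    QUERY = "SELECT num, name FROM part"  # subset of columns only

    for target in (":memory:", "parts.db"):  # RAM versus a disk file
        con = sqlite3.connect(target)
        con.execute(SETUP)
        con.execute("INSERT INTO part VALUES ('P1', 'Desk', 'Blue')")
        print(target, con.execute(QUERY).fetchall())
        con.close()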
6. Location Independence of Data and Processing
Developers today are provided with considerable independence. Data is accessed through SQL
without regard to the hardware, operating system, or location providing the data. Consistent network
access methods envelop the application and SQL requests within an RPC. The network may be based
on Open Systems Interconnect (OSI), Transmission Control Protocol/Internet Protocol (TCP/IP), or
Systems Network Architecture (SNA), but no changes are required in the business logic coding. The
developer of business logic deals with a standard process logic syntax without considering the
physical platform. Development languages such as COBOL, C, and Natural, and development tools
such as Telon, Ingres 4GL, PowerBuilder, CSP, as well as some evolving CASE tools such as
Bachman, Oracle CASE, and Texas Instruments' IEF all execute on multiple platforms and generate
applications for execution on multiple platforms.
The application developer deals with the development language and uses a version of SDE
customized for the organization to provide standard services. The specific platform characteristics are
transparent and subject to change without affecting the application syntax.
7. Centralized Management
As processing steers away from the central data center to the remote office and plant, workstation,
server, and local area network (LAN) reliability must approach that provided today by the centrally
located mini- and mainframe computers. The most effective way to ensure this is through the
provision of monitoring and support from these same central locations. A combination of technologies
that can "see" the operation of hardware and software on the LAN, monitored by experienced
support personnel, provides the best opportunity to achieve the level of reliability required.
The first step in effectively providing remote LAN management is to establish standards for hardware,
software, networking, installation, development, and naming. These standards, used in concert with
products such as IBM's Systemview, Hewlett-Packard's Openview, Elegant's ESRA, Digital's EMA,
and AT&T's UNMA products, provide the remote view of the LAN. Other tools, such as PC Connect
for remote connect, PCAssure from Centel for security, products for hardware and software inventory,
and local monitoring tools such as Network General's Sniffer, are necessary for completing the
management process.

DBMS Concept and Architecture


1. Data models, Schemas, and Instances
One fundamental characteristic of the database approach is that it provides some level of data
abstraction by hiding details of data storage that are not needed by most database users. A data model,
a collection of concepts that can be used to describe the structure of a database, provides the
necessary means to achieve this abstraction. By the structure of a database we mean the data types,
relationships, and constraints that should hold on the data. Most data models also include a set of
basic operations for specifying retrievals and updates on the database.
1.1 Categories of Data Models
Many data models have been proposed, and we can categorize them according to the types of
concepts they use to describe the database structure.
1. Conceptual (high-level) data models provide concepts that are close to the way many end
users perceive data. Conceptual Data Models use concepts such as entities, attributes, and
relationships.
2. Physical (low-level) data models describe how data is stored in the computer by representing
information such as stored record formats, record orderings, and access paths. An access path
is a structure that makes the search for particular database records efficient.
3. Implementation (record-oriented) data models: Provide concepts that fall between the
above two, balancing user views with some computer storage details.
1.2 Schemas, Instances, and Database State
In any data model it is important to distinguish between the description of the database and the
database itself. The description of a database is called the database schema, which is specified
during database design and is not expected to change frequently. Most data models have certain
conventions for displaying schemas as diagrams. A displayed schema is called a schema diagram.
The figure below shows a sample schema diagram; it displays only the structure of each record type,
not the actual instances of records.
SUPPLIER (SUPP NUMBER, SUPP NAME, STATUS, CITY)
PART (PART NUMBER, PART NAME, COLOR, WEIGHT, CITY)
DELIVERY (SUPP NUMBER, PART NUMBER, QUANTITY)

Figure: Sample schema diagram

The schema diagram displays only some aspects of a schema, such as the names of record types and
data items, and some types of constraints. The actual data in a database may change quite frequently;
for example, the database shown in the figure changes every time we add a supplier or enter a new
color for a part. The data in the database at a particular moment in time is called the database state or
snapshot. It is also called the current set of occurrences or instances in the database. Every time we
insert or delete a record, or change the value of a data item in a record, we change one state of the
database into another state.
The distinction between database schema and database state is very important. When we define
a new database, we specify its database schema only to the DBMS. At this point, the corresponding
database state is the empty state with no data. We get the initial state of the database when the
database is first loaded with the initial data. From then, on every time an update operation is applied
to the database, we get another database state. At any point in time, the database has a current state.
The DBMS is partly responsible for ensuring that every state of the database is a valid state - that is, a
state that satisfies the structure and constraints specified in the schema. Example in Figure shows
current state of the database.
SUPPLIER
SUPP NUMBER | SUPP NAME | STATUS | CITY
S1          | Jones     | 20     | New York
S2          | Black     | 30     | Paris
S3          | Smith     | 10     | London

PART
PART NUMBER | PART NAME | COLOR | WEIGHT | CITY
P1          | Desk      | Blue  | 20     | London
P2          | Monitor   | Red   | 10     | Paris

DELIVERY
SUPP NUMBER | PART NUMBER | QUANTITY
S1          | P1          | 200

Figure: Sample current state of database
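The schema/state distinction can be tried directly; a minimal sketch with Python's sqlite3 module and the SUPPLIER record type from the schema diagram above:

    import sqlite3

    db = sqlite3.connect(":memory:")
    # Defining the schema yields the empty state: no data yet.
    db.execute("CREATE TABLE supplier (num TEXT, name TEXT, status INT, city TEXT)")
    print(db.execute("SELECT * FROM supplier").fetchall())  # [] - empty state

    # Loading initial data produces the initial state; every further
    # update operation changes one state into another.
    db.execute("INSERT INTO supplier VALUES ('S1', 'Jones', 20, 'New York')")
    db.execute("INSERT INTO supplier VALUES ('S2', 'Black', 30, 'Paris')")
    print(db.execute("SELECT * FROM supplier").fetchall())  # current state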


2. DBMS Architecture
Database management systems are complex software systems that have often been developed and
optimized over many years. From the user's point of view, however, most of them have quite a similar
basic architecture. The discussion of this basic architecture should help in understanding the
connection with data modeling and the 'data independence' of the database approach postulated in
the introduction to this module.
2.1 The Three-Schema Architecture
Three important characteristics of the database approach are insulation of programs and data, support
of multiple user views, and use of a catalog to store the database description. Let us specify an
architecture for database systems, called the three-schema architecture, which was proposed to help
achieve and visualize these characteristics. The goal of the three-schema architecture is to separate
the user applications from the physical database. In this architecture, schemas can be defined at the
following three levels:
The internal level has an internal schema, which describes the physical storage structure of
the database. The internal schema uses a physical data model and describes the complete
details of data storage and access paths for the database.
The conceptual level has a conceptual schema, which describes the structure of the whole
database for a community of users. The conceptual schema hides the details of physical
storage structures and concentrates on describing entities, data types, relationships, user
operations, and constraints. A high-level data model or an implementation data model can be
used at this level.
The external or view level includes a number of external schemas or user views. Each
external schema describes the part of the database that a particular user group is interested in
and hides the rest of the database from that user group. A high-level data model or an
implementation data model can be used at this level.

Figure: Three-Schema Architecture


2.2 Data Independence
The three-schema architecture can be used to explain the concept of data independence, which can
be defined as the capacity to change the schema at one level of a database system without having to
change the schema at the next higher level. We can define two types of data independence:

Logical data independence is the capacity to change the conceptual schema without having
to change external schemas or application programs. We may change the conceptual schema
to expand the database (by adding a record type or data item), or to reduce the database (by
removing a record type or data item). In the latter case, external schemas that refer only to
the remaining data should not be affected. Only the view definition and the mappings need be
changed in a DBMS that supports logical data independence. Application programs that
reference the external schema constructs must work as before, after the conceptual schema
undergoes a logical reorganization. Changes to constraints can be applied also to the
conceptual schema without affecting the external schemas or application programs.

Physical data independence is the capacity to change the internal schema without having to
change the conceptual (or external) schemas. Changes to the internal schema may be needed
because some physical files had to be reorganized - for example, by creating additional
access structures - to improve the performance of retrieval or update. If the same data as
before remains in the database, we should not have to change the conceptual schema.
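A hedged sketch of logical data independence (sqlite3; the table and view names are illustrative): the application program references only an external schema, modelled here as a view, so expanding the conceptual schema leaves it untouched.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE supplier (num TEXT, name TEXT, city TEXT)")
    db.execute("CREATE VIEW supplier_ext AS SELECT num, name FROM supplier")

    def app_query():
        # Application program: references the external schema only.
        return db.execute("SELECT num, name FROM supplier_ext").fetchall()

    db.execute("INSERT INTO supplier VALUES ('S1', 'Jones', 'New York')")
    before = app_query()

    # Conceptual schema expands: a STATUS data item is added. Only the
    # view mapping would ever need changing; the program works as before.
    db.execute("ALTER TABLE supplier ADD COLUMN status INT")
    assert app_query() == before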

3. Database Languages and Database Interfaces


3.1 Database Languages


1. DDL: For describing data and data structures a suitable description tool, a data definition
language (DDL), is needed. With its help a data schema can be defined and also changed later.
Typical DDL operations are:
Creation of tables and definition of attributes (CREATE TABLE ...)
Change of tables by adding or deleting attributes (ALTER TABLE ...)
Deletion of a whole table including its content (!) (DROP TABLE ...)
2. DML: Additionally, a language for describing operations on data such as storing, searching,
reading, and changing (the so-called data manipulation) is needed. Such operations can be done with
a data manipulation language (DML). Within such languages keywords like insert, modify, update,
delete, and select are common. Typical DML operations are:
Add data (INSERT)
Change data (UPDATE)
Delete data (DELETE)
Query data (SELECT)
Often these two languages for the definition and manipulation of databases are combined in one
comprehensive language.
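SQL is such a comprehensive language. A short sketch running the typical DDL and DML operations listed above through Python's sqlite3 module, on the SUPPLIER table from the schema diagram:

    import sqlite3

    db = sqlite3.connect(":memory:")
    # DDL: create and change structures
    db.execute("CREATE TABLE supplier (num TEXT, name TEXT, city TEXT)")
    db.execute("ALTER TABLE supplier ADD COLUMN status INT")

    # DML: add, change, query, and delete data
    db.execute("INSERT INTO supplier VALUES ('S1', 'Jones', 'New York', 20)")
    db.execute("UPDATE supplier SET status = 25 WHERE num = 'S1'")
    print(db.execute("SELECT * FROM supplier").fetchall())
    db.execute("DELETE FROM supplier WHERE num = 'S1'")

    # DDL again: dropping the table removes it including its content (!)
    db.execute("DROP TABLE supplier")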
3.2 Database Interfaces (Figure)

Figure: Working Principle of a Database Interface


The application poses a query, with the help of SQL (a query language), to the database system.
There, the corresponding answer (result set) is prepared and, again with the help of SQL, given back
to the application. This communication can take place interactively or be embedded into another
language.
Type and Use of the Database Interface
Two important uses of a database interface like SQL are listed below:
Interactive: SQL can be used interactively from a terminal.
Embedded: SQL can be embedded into another language (host language) which might be
used to create a database application.
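A small sketch of the embedded case, with Python as the (hypothetical) host language and the built-in sqlite3 module carrying the SQL: the query is shipped to the database system and the result set comes back to the application.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE part (num TEXT, name TEXT, color TEXT)")
    db.executemany("INSERT INTO part VALUES (?, ?, ?)",
                   [("P1", "Desk", "Blue"), ("P2", "Monitor", "Red")])

    color = "Blue"  # host-language variable embedded in the query
    for row in db.execute("SELECT num, name FROM part WHERE color = ?", (color,)):
        print(row)  # the result set, given back to the application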
3.3. User Interfaces
A user interface is the view of a database interface that is seen by the user. User interfaces are often
constructed graphically, or at least partly graphically (GUI: graphical user interface), and offer tools
which make interaction with the database easier.
1. Form-based Interfaces
This interface consists of forms which are adapted to the user, who can fill in all of the fields to make
new entries in the database, or only some of the fields to query the remaining ones; some operations
might be restricted by the application. Form-based user interfaces are widespread and are a very
important means of interacting with a DBMS. They are easy to use and have the advantage that the
user does not need special knowledge about database languages like SQL.

Figure: Example of a Form-based User Interface


2. Text-based Interfaces (Figure)
To administer the database, or for other professional users, there are ways to communicate with the
DBMS directly in the query language (in code form) via an input/output window. Text-based
interfaces are very powerful tools and allow comprehensive interaction with a DBMS. However,
their use requires active knowledge of the respective database language.

Figure: Example of a Text-based User Interface

