
Computer Applications in Pharmacy

Compiled from Different Books and Websites


No Citation is given due to lack of time.
It’s not my work; it is 100% copied and compiled from books and websites.

University College of Pharmacy


University of the Punjab, Lahore
Computer Application in Pharmacy Fundamentals

Computer Types According to Capability


Supercomputers
A supercomputer is a computer that performs at or near the currently highest operational rate for
computers. A supercomputer is typically used for scientific and engineering applications that must
handle very large databases or do a great amount of computation (or both). At any given time, there are
usually a few well-publicized supercomputers that operate at extraordinary speeds.
Perhaps the best-known builder of supercomputers has been Cray Research, now a part of Silicon
Graphics. Some supercomputers are housed at "supercomputer centers," usually university research centers,
some of which, in the United States, are interconnected on an Internet backbone (a backbone is a
larger transmission line that carries data gathered from the smaller lines that interconnect with it) known as
vBNS or NSFNet.
At the high end of supercomputing are computers like IBM's "Blue Pacific," announced on October 29,
1998. Built in partnership with Lawrence Livermore National Laboratory in California, Blue Pacific is
reported to operate at 3.9 teraflops (trillion floating-point operations per second), 15,000 times faster
than the average personal computer. It consists of 5,800 processors containing a total of 2.6 trillion
bytes of memory and interconnected with five miles of cable.

Mainframe Computers
A very large and expensive computer capable of supporting hundreds, or even thousands, of users
simultaneously. In the hierarchy that starts with a simple microprocessor (in watches, for example) at
the bottom and moves to supercomputers at the top, mainframes are just below supercomputers. In
some ways, mainframes are more powerful than supercomputers because they support more
simultaneous programs. But supercomputers can execute a single program faster than a mainframe.
The distinction between small mainframes and minicomputers is vague (not clearly expressed),
depending really on how the manufacturer wants to market its machines.

Servers / Minicomputers
A midsized computer. In size and power, minicomputers lie between workstations and mainframes. In
the past decade, the distinction between large minicomputers and small mainframes has blurred,
however, as has the distinction between small minicomputers and workstations. But in general, a
minicomputer is a multiprocessing system capable of supporting from 4 to about 200 users
simultaneously.

Desktops
These are also called microcomputers. Low-end desktops are called PCs and high-end ones
"workstations". They generally contain a single processor (sometimes two), along with
megabytes of memory and gigabytes of storage. PCs are used for running productivity applications, Web
surfing, and messaging. Workstations are used for more demanding tasks like low-end 3-D simulations and
other engineering and scientific applications. They are not as reliable and fault-tolerant as servers. Workstations
cost a few thousand dollars; a PC around $1,000.

Portables
Portable computer is a personal computer that is designed to be easily transported and relocated, but is
larger and less convenient to transport than a notebook computer. The earliest PCs designed for easy
transport were called portables. As the size and weight of most portables decreased, they became
known as laptop computer and later as notebook computer. Today, larger transportable computers
continue to be called portable computers. Most of these are special-purpose computers - for example,
those for use in industrial environments where they need to be moved about frequently.


 
PDAs

PDA (personal digital assistant) is a term for any small mobile hand-held device that provides
computing and information storage and retrieval capabilities for personal or business use, often for
keeping schedule calendars and address book information handy. The term handheld is a synonym.
Many people use the name of one of the popular PDA products as a generic term. These include
Hewlett-Packard's Palmtop and 3Com's PalmPilot.
Most PDAs have a small keyboard. Some PDAs have an electronically sensitive pad on which
handwriting can be received. Apple's Newton, which has been withdrawn from the market, was the
first widely-sold PDA that accepted handwriting. Typical uses include schedule and address book
storage and retrieval and note-entering. However, many applications have been written for PDAs.
Increasingly, PDAs are combined with telephones and paging systems. Some PDAs offer a variation
of the Microsoft Windows operating system called Windows CE. Other products have their own or
another operating system.
At the highest level, two things are required for computing:
Hardware
Computer equipment such as the CPU, disk drives, CRT, or printer.
Software
A computer program, which provides the instructions that enable the computer hardware to work.

Components of Computer
All computers have the following essential Hardware Components
Input
The devices used to give the computer data or commands are called input devices. These include the
keyboard, mouse, scanner, etc.
Mouse
A mouse is a small device that a computer user pushes across a desk surface in order to point to a place
on a display screen and to select one or more actions to take from that position. The mouse first
became a widely-used computer tool when Apple Computer made it a standard part of the Apple
Macintosh. Today, the mouse is an integral part of the graphical user interface (GUI) of any personal
computer. The mouse apparently got its name by being about the same size and color as a toy mouse.
Keyboard
On most computers, a keyboard is the primary text input device. A keyboard on a computer is almost
identical to a keyboard on a typewriter. Computer keyboards will typically have extra keys, however.
Some of these keys (common examples include Control, Alt, and Meta) are meant to be used in
conjunction with other keys just like shift on a regular typewriter. Other keys (common examples
include Insert, Delete, Home, End, Help, function keys, etc.) are meant to be used independently and
often perform editing tasks.
Joystick
In computers, a joystick is a cursor control device used in computer games. The joystick, which got its
name from the control stick used by a pilot to control the ailerons and elevators of an airplane, is a
handheld lever that pivots on one end and transmits its coordinates to a computer. It often has one or
more push-buttons, called switches, whose position can also be read by the computer.
Digital Camera


 

A digital camera records and stores photographic images in digital form that can be fed to a computer
as the impressions are recorded or stored in the camera for later loading into a computer or printer.
Currently, Kodak, Canon, and several other companies make digital cameras.
Microphone
A device that converts sound waves into audio signals is called a microphone. It can be used for
sound recording as well as voice chatting over the Internet.
Scanner
A scanner is a device that captures images from photographic prints, posters, magazine pages, and
similar sources for computer editing and display. Scanners come in hand-held, feed-in, and flatbed
types and for scanning black-and-white only, or color. Very high resolution scanners are used for
scanning for high-resolution printing, but lower resolution scanners are adequate for capturing images
for computer display. Scanners usually come with software, such as Adobe's Photoshop product, that
lets you resize and otherwise modify a captured image.
Processor
A processor is the logic circuitry that responds to and processes the basic instructions that drive a
computer.
The term processor has generally replaced the term central processing unit (CPU). The processor in a
personal computer or embedded in small devices is often called a microprocessor.
Short for microprocessor, it is the central processing unit in a computer. The processor is the logic of a
computer and functions comparably to a human central nervous system, directing signals from one
component to another and enabling everything to happen.
Memory
Memory is the electronic holding place for instructions and data that your computer's microprocessor
can reach quickly. When your computer is in normal operation, its memory usually contains the main
parts of the operating system and some or all of the application programs and related data that are
being used. Memory is often used as a shorter synonym for random access memory (RAM). This kind
of memory is located on one or more microchips that are physically close to the microprocessor in your
computer. Most desktop and notebook computers sold today include at least 16 megabytes of RAM,
and are upgradeable to include more. The more RAM you have, the less frequently the computer has to
access instructions and data from the more slowly accessed hard disk form of storage.
Memory is also called primary or main memory.
RAM
RAM (random access memory) is the place in a computer where the operating system, application
programs, and data in current use are kept so that they can be quickly reached by the computer's
processor. RAM is much faster to read from and write to than the other kinds of storage in a computer,
the hard disk, floppy disk, and CD-ROM. However, the data in RAM stays there only as long as your
computer is running. When you turn the computer off, RAM loses its data. When you turn your
computer on again, your operating system and other files are once again loaded into RAM, usually
from your hard disk.
ROM
ROM is "built-in" computer memory containing data that normally can only be read, not written to.
ROM contains the programming that allows your computer to be "booted up" or regenerated each time
you turn it on. Unlike a computer's random access memory (RAM), the data in ROM is not lost when
the computer power is turned off.
Because ROM is non-volatile, it needs no battery; it is the computer's CMOS configuration memory, not the ROM, that is sustained by a small long-life battery.
Storage
Computer storage is the holding of data in an electromagnetic form for access by a computer processor.
It is also called secondary storage. In secondary storage data resides on hard disks, tapes, and other
external devices.


 

Primary storage is much faster to access than secondary storage because of the proximity of the storage
to the processor or because of the nature of the storage devices. On the other hand, secondary storage
can hold much more data than primary storage.
Hard disk
A hard disk is a computer storage device which saves and retrieves data when required. Its capacity is
much greater than the computer's memory (RAM, ROM). Data on a hard disk is stored on, and retrieved from,
an electromagnetically charged surface.
Today we can save huge amounts of data on a single hard disk. Hard disks can now hold several
billion bytes.
Floppy disk
A diskette is a random access, removable data storage medium that can be used with personal
computers. The term usually refers to the magnetic medium housed in a rigid plastic cartridge
measuring 3.5 inches square and about 2 millimeters thick. Also called a "3.5-inch diskette," it can
store up to 1.44 megabytes (MB) of data.
Tape
In computers, tape is an external storage medium, usually both readable and writable, that stores data in
the form of electromagnetic charges which can be read and also erased. A tape drive is the device that
positions the tape, writes to it, and reads from it.
CD
A compact disc [sometimes spelled disk] (CD) is a small, portable, round medium for electronically
recording, storing, and playing back audio, video, text, and other information in digital form.
DVD
DVD (digital versatile disc) is an optical disc technology that is expected to rapidly replace the
CD-ROM disc (as well as the audio compact disc) over the next few years. The digital versatile disc
(DVD) holds 4.7 gigabytes of information on one of its two sides, or enough for a 133-minute movie.
Classifying Memory/Storage
• Electronic (RAM, ROM), Magnetic (HD, FD, Tape), Optical (CD, DVD)
• Volatile (RAM), Non-Volatile (HD)
• Random Access (RAM, HD), Serial Access (Tape)
• Read/Write (HD, RAM), Read-Only (CD)
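The four classification axes above can be restated as a small lookup table. The following Python sketch is illustrative only (the attribute names are invented here), but the values simply restate the lists in the text.

```python
# Sketch: the storage-media classification above as a lookup table.
# Attribute names are invented for illustration; the values restate
# the four classification lines in the text.
MEDIA = {
    "RAM":  {"technology": "electronic", "volatile": True,  "access": "random", "writable": True},
    "ROM":  {"technology": "electronic", "volatile": False, "access": "random", "writable": False},
    "HD":   {"technology": "magnetic",   "volatile": False, "access": "random", "writable": True},
    "FD":   {"technology": "magnetic",   "volatile": False, "access": "random", "writable": True},
    "Tape": {"technology": "magnetic",   "volatile": False, "access": "serial", "writable": True},
    "CD":   {"technology": "optical",    "volatile": False, "access": "random", "writable": False},
    "DVD":  {"technology": "optical",    "volatile": False, "access": "random", "writable": False},
}

def media_with(**criteria):
    """Return the names of all media matching every given attribute."""
    return [name for name, attrs in MEDIA.items()
            if all(attrs[k] == v for k, v in criteria.items())]
```

For example, media_with(technology="magnetic") returns ["HD", "FD", "Tape"], and media_with(volatile=True) returns only ["RAM"].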
Output
The devices to which the computer writes data are called output devices. They often convert the data into a
human-readable form. Examples include:
• Printer
• Plotter
• Speakers
• Monitor
Ports
On computer and telecommunication devices, a port is generally a specific place for being physically
connected to some other device, usually with a socket and plug of some kind. Typically, a personal
computer is provided with one or more serial ports and usually one parallel port.
Types of Ports:
Parallel
An interface on a computer that supports transmission of multiple bits at the same time; almost
exclusively used for connecting a printer. On IBM or compatible computers, the parallel port uses a 25-pin
connector.
Serial


 
It is a general-purpose personal computer communications port in which 1 bit of information is
transferred at a time. In the past, most digital cameras were connected to a computer's serial port in
order to transfer images to the computer. Recently, however, the serial port is being replaced by the
much faster USB port on digital cameras as well as computers.
SCSI
A port that's faster than the serial and parallel ports but slower and harder to configure than the newer
USB port. Also known as the Small Computer System Interface, it is a high-speed connection that
enables devices, such as hard-disk drives and network adapters, to be attached to a computer.
USB
USB (Universal Serial Bus) is a plug-and-play hardware interface for peripherals such as the keyboard,
mouse, joystick, scanner, printer and modem. USB has a maximum bandwidth of 12 Mbits/sec and up
to 127 devices can be attached. With USB, a new device can be added to your computer without
having to add an adapter card. USB ports are typically located at the back of the PC.
Communication Devices
Ethernet Card
An Ethernet card is one kind of network adapter. These adapters support the Ethernet standard for
high-speed network connections via cables. Ethernet cards are sometimes known as network interface
cards (NICs).
Ethernet cards are available in several different standard packages called form factors:
• Years ago, large ISA cards were the first standard for PCs, requiring users to open their computer
case for installation.
• Newer Ethernet cards installed inside desktop computers use the PCI standard and are usually
installed by the manufacturer.
• Smaller PCMCIA Ethernet cards that resemble credit cards are readily available for laptop and
other mobile computers. These insert conveniently into slots on the side or front of the device. The
PC Card is a common PCMCIA device, although only certain PC Card and PCMCIA products
support Ethernet.
• Though they look more like small boxes than cards, external USB Ethernet adapters also exist.
These are a convenient alternative to PCI cards for desktop computers and also commonly used
with video game consoles and other consumer devices lacking PCMCIA slots.
Ethernet cards may operate at different network speeds depending on the protocol standard they
support. Old Ethernet cards were capable only of the 10 Mbps maximum speed offered by Ethernet
originally. Modern Ethernet adapters all support the 100 Mbps Fast Ethernet standard and an
increasing number now also offer Gigabit Ethernet support at 1 Gbps (1000 Mbps).
An Ethernet card does not directly support Wi-Fi wireless networking, but home network broadband
routers contain the necessary technology to allow Ethernet devices to connect via cables and
communicate with Wi-Fi devices via the router.
Modem
A modem is an input as well as an output device. It receives data (analog signals) coming
through the telephone line, converts them to digital signals, and sends them to the computer to which it is
attached. It also receives data from the computer and converts it to analog signals.

Computer Software
The Hardware needs Software to be useful; the Software needs Hardware to be useful. When the user
needs something done by the computer, he/she gives instructions in the form of Software to the computer
Hardware. These instructions need to be written in a language that is readily understood by the
microprocessor of the computer.
Computer Software is divided into two major classes: System Software and Application Software.
System Software

 

System software is responsible for controlling, integrating, and managing the individual hardware
components of a computer system. System software performs tasks like transferring data from memory
to disk, or rendering text onto a display. Specific kinds of system software include loaders,
operating systems, device drivers, compilers, assemblers, linkers, and utilities.
Software libraries that perform generic functions also tend to be regarded as system software. System
software stored on non-volatile storage on integrated circuits is usually termed firmware. These
generally perform the background tasks in a computer. These programs, many times, talk directly to
the Hardware.
Machine Language
A system of codes directly understandable by a computer's CPU is termed this CPU's native or
machine language. Although machine code may seem similar to assembly language, they are in fact
two different types of languages. Machine code is composed only of the two binary digits 0 and 1.
Every CPU has its own machine language, although there is considerable overlap between some. If
CPU A understands the full language of CPU B it is said that A is compatible with B. CPU B may not
be compatible with CPU A, as A may know a few codes that B does not.
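The compatibility rule above can be sketched with sets: A is compatible with B exactly when A's instruction set includes all of B's. The opcode values below are made up purely for illustration.

```python
# Sketch: CPU compatibility as a superset test over instruction sets.
# Opcode values are invented for illustration.
cpu_a = {0x01, 0x02, 0x03, 0x04}   # CPU A knows a few extra codes
cpu_b = {0x01, 0x02, 0x03}

def is_compatible(cpu, other):
    """True if `cpu` understands the full machine language of `other`."""
    return other <= cpu   # subset test: every opcode of `other` is in `cpu`
```

Here is_compatible(cpu_a, cpu_b) is True, but is_compatible(cpu_b, cpu_a) is False, matching the asymmetry described in the text.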
Language Translators
Programs that take code written in a High Level Language and translate it into a low-level language
that is easily understood by the microprocessor are called Language Translators.
Human programmers write programs in a language that is easy for them to understand. They use
language translators to convert such a program into machine language, i.e., to convert the
human-understandable code into code the microprocessor understands. Two categories of
language translators are compilers and interpreters.
Compiler
A compiler translates the whole program written in a High Level Language in one go. The translated code is
then used by the microprocessor whenever the program needs to be run.
Interpreter
An interpreter translates a High Level Language program one statement at a time. It reads a single
statement, translates it into machine language, passes that machine language code to the
microprocessor, and then translates the next statement, and so on.
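The distinction can be made concrete with a toy language of assignment statements. The sketch below is purely illustrative (the mini-language and function name are invented); it behaves like an interpreter, translating and executing one statement before reading the next, whereas a compiler would translate the whole program first and run it afterwards.

```python
# Sketch: a toy interpreter for statements like "x = 2 + 3".
# Each statement is translated and executed before the next is read --
# the defining behaviour of an interpreter. A compiler would instead
# translate the entire program once, up front.
def interpret(program):
    variables = {}
    for statement in program.splitlines():
        statement = statement.strip()
        if not statement:
            continue
        name, expression = statement.split("=", 1)
        # eval() stands in for "translate and execute one statement"
        variables[name.strip()] = eval(expression, {}, dict(variables))
    return variables
```

For example, interpret("x = 2 + 3\ny = x * 4") yields {"x": 5, "y": 20}, with each line processed in turn.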
Operating System
It performs its work invisibly to control the internal functions of a computer, e.g. maintaining files on
the disk drive, managing the screen, controlling which tasks the microprocessor performs and in what
order. It interacts directly with the computer Hardware. Other Software normally does not directly
interact with the Hardware, e.g. Microsoft Windows, Mac OS, Linux, UNIX, Sun Solaris, DOS, CP/M,
VMS, and Firmware etc.
Part of the operating system is permanently stored on a ROM chip; it is a firmware program. When a
computer is powered on, this is the first program that it always executes. The firmware consists of the
startup routines; on IBM-compatible PCs, it is called the BIOS.
Device Drivers
A device driver, often called a driver for short, is a computer program that is intended to allow another
program (typically, an operating system) to interact with a hardware device. Think of a driver as a
manual that gives the operating system (e.g., Windows) instructions on how to use a particular piece of
hardware.
A device driver essentially converts the more general input/output instructions of the operating system
to messages that the device type can understand.
Utilities
It is a small program that provides an addition to the capabilities provided by the operating system. In
some usages, a utility is a special and nonessential part of the operating system. Utilities are the computer
programs that perform a particular function related to computer system management and maintenance.
Examples include anti-virus software, data compression software, disk optimization software, and disk
backup software.
Application Software
Programs that generally interact with the user to perform work that is useful to the user. These
programs generally talk to the Hardware through the assistance of System Software.
Application Software consists of programs that interact directly with the user for the performance of a certain
type of work:
• Scientific/Engineering/Graphics Software: Mathematica, AutoCad, Corel Draw
• Business Software: the billing system for a mobile phone company
• Productivity Software: word processors, spreadsheets
• Entertainment Software: games
• Educational Software: electronic encyclopedias


 
Computer Application in Pharmacy Operating System

Operating System
An operating system is a program that manages the computer hardware. It also provides a basis for
application programs and acts as an intermediary between a user of a computer and the computer
hardware.

Why an Operating System?


An operating system is an important part of almost every computer system. A computer system can be
divided roughly into four components:
• The hardware
• The operating system
• The application programs
• And the users
The hardware – the central processing unit (CPU), the memory, and the input/output devices –
provides the basic computing resources. The application programs – such as word processors,
spreadsheets, compilers, and web browsers, define the ways in which these resources are used to solve
the computing problems of the users. The operating system controls and coordinates the use of the
hardware among various application programs for the various users.
The components of a computer system are its hardware, software, and data. The operating system
provides the means for the proper use of these resources in the operation of the computer system. An
operating system is similar to a government. Like a government, it performs no useful function by
itself. It simply provides an environment within which other programs can do useful work. Operating
systems can be explored from two viewpoints: the user and the system.

Operating System Definition


User View
The user view of the computer varies by the interface being used. Most computer users sit in front of a
PC, consisting of monitor, keyboard, mouse, and system unit. Such a system is designed for one user to
monopolize its resources, to maximize the work (or play) that the user is performing. In this case, the
operating system is designed mostly for ease of use, with some attention paid to performance, and
none paid to resource utilization. Performance is important to the user, but it does not matter if most of the
system is sitting idle, waiting for the slow I/O speed of the user.
Some users sit at a terminal connected to a mainframe or minicomputer. Other users are accessing
the same computer through other terminals. These users share resources and may exchange
information. The operating system is designed to maximize resource utilization – to assure that all
available CPU time, memory, and I/O are used efficiently, and that no individual user takes more than
her fair share.
Other users sit at workstations, connected to networks of other workstations and servers. These users
have dedicated resources at their disposal, but they also share resources such as networking and servers
– file, compute and print servers. Therefore, their operating system is designed to compromise between
individual usability and resource utilization.
System View
From the computer’s point of view, the operating system is the program that is most intimate with the
hardware. We can view an operating system as a resource allocator. A computer system has many
resources – hardware and software – that may be required to solve a problem: CPU time, memory
space, file-storage space, I/O devices, and so on. The operating system acts as the manager of these
resources. Facing numerous and possibly conflicting requests for resources, the operating system must
decide how to allocate them to specific programs and users so that it can operate the computer system
efficiently and fairly.


 

Mainframe Systems
Mainframe Computer Systems were the first computers used to tackle many commercial and
scientific applications. In this section, we trace the growth of mainframe systems from simple Batch
Systems, where the computer runs one – and only one – application, to Time-Shared Systems, which
allow for user interaction with the computer system.

Batch Systems
Early computers were physically enormous machines run from a console. The common input devices
were card readers and tape drives. The common output devices were line printers, tape drives and card
punches. The user did not interact directly with the computer systems. Rather, the user prepared a job –
which consisted of the program, the data, and some control information about the nature of the job
(control cards) – and submitted it to the computer operator. The job was usually in the form of punch
cards. At some later time (after minutes, hours, or days), the output appeared. The output consisted of
the result of the program, as well as a dump of the final memory and register contents for debugging.
The operating system in these early computers was fairly simple. Its major task was to transfer control
automatically from one job to the next. The operating system was always resident in memory.
To speed up processing, operators batched together jobs with similar needs and ran them through the
computer as a group.

Multiprogrammed Systems
The most important aspect of job scheduling is the ability to multiprogram. A single user cannot, in
general, keep either the CPU or the I/O devices busy at all times. Multiprogramming increases CPU
utilization by organizing jobs so that the CPU always has one to execute.
The idea is as follows: The operating system keeps several jobs in memory simultaneously. This set of
jobs is a subset of the jobs kept in the job pool – since the number of jobs that can be kept
simultaneously in memory is usually much smaller than the number of jobs that can be in the job pool.
The operating system picks and begins to execute one of the jobs in memory. Eventually, the job
may have to wait for some task, such as an I/O operation, to complete. In a non-multiprogrammed
system, the CPU would sit idle. In a multiprogramming system, the operating system simply switches
to, and executes, another job, and so on. Eventually, the first job finishes waiting and gets the CPU
back. As long as at least one job needs to execute, the CPU is never idle.
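A classic back-of-the-envelope model from the operating-systems literature (not from this text) shows how much multiprogramming helps: if each job waits for I/O a fraction p of the time, and the waits are independent, the CPU is idle only when all n resident jobs are waiting at once.

```python
# Sketch: approximate CPU utilization under multiprogramming.
# If each job waits for I/O a fraction p of the time, and the waits
# are independent, the CPU idles only when ALL n jobs wait at once,
# so utilization = 1 - p**n.  (A simplifying model, not a measurement.)
def cpu_utilization(io_wait_fraction, n_jobs):
    return 1 - io_wait_fraction ** n_jobs
```

With a single job that waits for I/O 80% of the time, the CPU is busy only 20% of the time; keeping five such jobs in memory raises utilization to about 67%.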
This idea is common in other life situations. A lawyer does not work for only one client at a time.
While one case is waiting to go to trial or have papers typed, the lawyer can work on another case. If
she has enough clients, the lawyer will never be idle for lack of work. (Idle lawyers tend to become
politicians, so there is a certain social value in keeping lawyers busy.)

Time-Sharing Systems
Multiprogrammed, batched systems provided an environment where the various system resources (for
example, CPU, memory, peripheral devices) were utilized effectively, but they did not provide for user
interaction with the computer system. Time sharing (or multitasking) is a logical extension of
multiprogramming. The CPU executes multiple jobs by switching among them, but the switches occur
so frequently that the users can interact with each program while it is running.
An interactive (or hands-on) computer system provides direct communication between the user and the
system. The user gives instructions to the operating system or to a program directly, using a keyboard
or a mouse, and waits for immediate results. Accordingly, the response time should be short – typically
within 1 second or so.
A time-shared operating system allows many users to share the computer simultaneously. Since each
action or command in a time-shared system tends to be short, only a little CPU time is needed for each
user. As the system switches rapidly from one user to the next, each user is given the impression that
the entire computer system is dedicated to her use, even though it is being shared among many users.


 

A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with
a small portion of a time-shared computer. Each user has at least one separate program in memory. A
program loaded into memory and executing is commonly referred to as a process. When a process
executes, it typically executes for only a short time before it either finishes or needs to perform I/O. I/O
may be interactive; that is, output is to a display for the user and input is from a user keyboard, mouse,
or other device. Since interactive I/O typically runs at “people speeds,” it may take a long time to
complete. Input, for example, may be bounded by the user’s typing speed; seven characters per second
is fast for people, but incredibly slow for computers. Rather than let the CPU sit idle when this
interactive input takes place, the operating system will rapidly switch the CPU to the program of some
other user.
Time-sharing operating systems are even more complex than multiprogrammed operating systems. In
both, several jobs must be kept simultaneously in memory, so the system must have memory
management and protection. To obtain a reasonable response time, jobs may have to be swapped in and
out of main memory to the disk that now serves as a backing store for main memory. A common
method for achieving this goal is virtual memory, which is a technique that allows the execution of a
job that may not be completely in memory.
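The rapid switching described above is commonly implemented as round-robin scheduling. The sketch below is an illustrative model (the quantum and burst values are invented): each process runs for at most one time quantum, then goes to the back of the ready queue.

```python
from collections import deque

# Sketch: round-robin scheduling, the mechanism behind time sharing.
# Each process runs for at most one quantum before the CPU switches
# to the next one, so every user gets frequent slices of CPU time.
def round_robin(burst_times, quantum):
    """Return the order of (process, time_slice) pairs on the CPU."""
    ready_queue = deque(burst_times.items())
    schedule = []
    while ready_queue:
        name, remaining = ready_queue.popleft()
        run = min(quantum, remaining)
        schedule.append((name, run))
        if remaining > run:                            # not finished yet:
            ready_queue.append((name, remaining - run))  # requeue it
    return schedule
```

For two processes needing 3 and 5 time units with a quantum of 2, the CPU alternates: A for 2, B for 2, A for 1 (finishing), B for 2, B for 1.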

Multiprocessor Systems
Most systems to date are single-processor systems; that is, they have only one main CPU. However,
multiprocessor systems (also known as parallel systems or tightly coupled systems) are growing in
importance. Such systems have more than one processor in close communication, sharing the computer
bus, the clock, and sometimes memory and peripheral devices.
Multiprocessor systems have three main advantages:
Increased Throughput
By increasing the number of processors, we hope to get more work done in less time. The speed-up
ratio with N processors is not N; rather, it is less than N. When multiple processors cooperate on a task,
a certain amount of overhead is incurred in keeping all the parts working correctly. This overhead, plus
contention for shared resources, lowers the expected gain from additional processors. Similarly, a
group of N programmers working closely together does not result in N times the amount of work being
accomplished.
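The claim that the speed-up is less than N can be illustrated with a deliberately simple model (the overhead figure below is invented): suppose each processor added costs every processor a small fixed fraction of its time in coordination.

```python
# Sketch: why N processors yield less than an N-fold speed-up.
# Assume each processor loses a fixed fraction of its time to
# coordination overhead for every processor in the system.
# (A toy model with an invented overhead figure, for illustration.)
def speedup(n_processors, overhead_per_processor):
    useful_fraction = 1 - overhead_per_processor * n_processors
    return n_processors * max(useful_fraction, 0.0)
```

With 5% overhead per processor, four processors give a speed-up of 4 × (1 − 0.20) = 3.2, not 4; with too many processors the overhead can swallow the gain entirely.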
Economy of Scale
Multiprocessor systems can save more money than multiple single-processor systems, because they
can share peripherals, mass storage, and power supplies. If several programs operate on the same set of
data, it is cheaper to store those data on one disk and to have all the processors share them, than to have
many computers with local disks and many copies of the data.
Increased Reliability
If functions can be distributed properly among several processors, then the failure of one processor will
not halt the system, only slow it down. If we have ten processors and one fails, then each of the
remaining nine processors must pick up a share of the work of the failed processor. Thus, the entire
system runs only 10 percent slower, rather than failing altogether. This ability to continue providing
service proportional to the level of surviving hardware is called graceful degradation. Systems
designed for graceful degradation are also called fault tolerant.
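A quick sketch of graceful degradation in the ten-processor example above:

```python
# Graceful degradation: the failure of some processors slows the system
# down in proportion to the surviving hardware, rather than halting it.

def remaining_capacity(total, failed):
    """Fraction of service still available after some processors fail."""
    working = total - failed
    if working <= 0:
        return 0.0          # no surviving hardware: total failure
    return working / total

# Ten processors, one fails: the system runs at 90% capacity
# (about 10 percent slower) instead of failing altogether.
print(remaining_capacity(10, 1))   # -> 0.9
```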
The most common multiple-processor systems now use symmetric multiprocessing (SMP), in which
each processor runs an identical copy of the operating system, and these copies communicate with one
another as needed. Some systems use asymmetric multiprocessing, in which each processor is assigned
a specific task. A master processor controls the system; the other processors either look to the master
for instruction or have predefined tasks. This scheme defines a master-slave relationship. The master
processor schedules and allocates work to the slave processors.

Distributed Systems

 
Computer Application in Pharmacy Operating System

A network, in the simplest terms, is a communication path between two or more systems. Distributed
systems depend on networking for their functionality. By being able to communicate, distributed
systems are able to share computational tasks, and provide a rich set of features to users.
Networks are characterized based on the distances between their nodes. A local-area network (LAN)
exists within a room, a floor, or a building. A wide-area network (WAN) usually exists between
buildings, cities, or countries. A global company may have a WAN to connect its offices worldwide.
These networks could run one protocol or several protocols. The continuing advent of new
technologies brings about new forms of networks. For example, a metropolitan-area network
(MAN) could link buildings within a city. Bluetooth devices communicate over a short distance of
several feet, in essence creating a small-area network.

Client-Server Systems
As PCs have become faster, more powerful, and cheaper, designers have shifted away from the
centralized system architecture. Terminals connected to centralized systems are now being supplanted
by PCs. Correspondingly, user-interface functionality that used to be handled directly by the
centralized systems is increasingly being handled by the PCs. As a result, centralized systems today act
as server systems to satisfy requests generated by client systems.
Server systems can be broadly categorized as compute servers and file servers.
• Compute-server systems provide an interface to which clients can send requests to perform an
action, in response to which they execute the action and send back results to the client.
• File-server systems provide a file-system interface where clients can create, update, read, and
delete files.
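The file-server interface described above can be sketched as a minimal in-process class (the class and method names are hypothetical, chosen to mirror the create, update, read, and delete operations; a real file server would accept these requests over a network):

```python
# Minimal sketch of a file-server interface: clients create, update,
# read, and delete files held by the server.

class FileServer:
    def __init__(self):
        self._files = {}            # filename -> contents

    def create(self, name):
        self._files[name] = ""      # new, empty file

    def update(self, name, data):
        self._files[name] = data

    def read(self, name):
        return self._files[name]

    def delete(self, name):
        del self._files[name]

server = FileServer()
server.create("report.txt")
server.update("report.txt", "quarterly figures")
print(server.read("report.txt"))    # -> quarterly figures
server.delete("report.txt")
```

A compute server would instead expose an interface for submitting an action and returning its result, rather than a file-system interface.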

Peer-to-Peer Systems
The growth of computer networks – especially the internet and World Wide Web (WWW) – has had a
profound influence on the recent development of operating systems. When PCs were introduced in the
1970s, they were designed for ‘personal’ use and were generally considered standalone computers.
With the beginning of widespread public use of the internet in the 1980s for electronic mail, ftp, and
gopher, many PCs became connected to computer networks. With the introduction of the Web in the
mid-1990s, network connectivity became an essential component of a computer system.

REAL-TIME SYSTEMS
Another form of a special-purpose operating system is the real-time system. A real-time system is
used when rigid time requirements have been placed on the operation of a processor or the flow of
data; thus, it is often used as a control device in a dedicated application. Sensors bring data to the
computer. The computer must analyze the data and possibly adjust controls to modify the sensor
inputs. Systems that control scientific experiments, medical imaging systems, industrial control
systems, and certain display systems are real-time systems. Some automobile-engine fuel-injection
systems, home-appliance controllers, and weapon systems are also real-time systems.
A real-time system has well-defined, fixed time constraints. Processing must be done within the
defined constraints, or the system will fail. For instance, it would not do for a robot arm to be
instructed to halt after it had smashed into the car it was building. A real-time system functions
correctly only if it returns the correct result within its time constraints. Contrast this requirement to a
time-sharing system, where it is desirable (but not mandatory) to respond quickly, or to a batch system,
which may have no time constraints at all.
Real-time systems come in two flavors: hard and soft. A hard real-time system guarantees that critical
tasks be completed on time. This goal requires that all delays in the system be bounded, from the
retrieval of stored data to the time that it takes the operating system to finish any request made of it.
Such time constraints dictate the facilities that are available in hard real-time systems. Secondary
storage of any sort is usually limited or missing, with data instead being stored in short-term memory
or in read-only memory (ROM).
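The deadline requirement can be illustrated with a small sketch (illustrative only: it detects a missed deadline after the fact, whereas a true hard real-time system must bound all delays in advance):

```python
# Sketch of a real-time constraint: a task is correct only if it returns
# the right result AND finishes within its deadline.

import time

def run_with_deadline(task, deadline_seconds):
    start = time.monotonic()
    result = task()
    elapsed = time.monotonic() - start
    if elapsed > deadline_seconds:
        # In a hard real-time system a late result is a failure,
        # even if the value itself is correct.
        raise TimeoutError("real-time deadline missed")
    return result

# A fast task comfortably meets a generous deadline:
print(run_with_deadline(lambda: 2 + 2, deadline_seconds=1.0))  # -> 4
```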


 

A less restrictive type of real-time system is a soft real-time system, where a critical real-time task
gets priority over other tasks, and retains that priority until it completes. As in hard real-time systems,
the operating-system kernel delays need to be bounded. A real-time task cannot be kept waiting
indefinitely for the kernel to run it. Soft real-time is an achievable goal that can be mixed with other
types of systems. Soft real-time systems, however, have more limited utility than hard real-time
systems. Given their lack of deadline support, they are risky to use for industrial control and robotics.
They are useful, however, in several areas, including multimedia, virtual reality, and advanced scientific
projects – such as undersea exploration and planetary rovers.

Operating System Structure


An operating system provides the environment within which programs are executed. Internally,
operating systems vary greatly in their makeup, being organized along many different lines. The design
of a new operating system is a major task. The goals of the system must be well defined before the
design begins. The type of system desired is the basis for choices among various algorithms and strategies.
An operating system may be viewed from several vantage points. One is by examining the services
that it provides. Another is by looking at the interface that it makes available to users and
programmers. A third is by disassembling the system into its components and their interconnections.
In this chapter, we explore all three aspects of operating systems, showing the viewpoints of users,
programmers, and operating system designers. We consider what services an operating system
provides, how they are provided, and what the various methodologies are for designing such systems.

System Components
We can create a system as large and complex as an operating system only by partitioning it into smaller
pieces. Each piece should be a well-delineated portion of the system, with carefully defined inputs,
outputs, and functions. Obviously, not all systems have the same structure. However, many modern
systems share common components: process management, memory management, secondary-storage
management, file-system management, and I/O-system management.
Process Management
A program does nothing unless its instructions are executed by a CPU. A process can be thought of as
a program in execution, but its definition will broaden as we explore it further. A time-shared user
program such as a compiler is a process. A word-processing program being run by an individual user
on a PC is a process. A system task, such as sending output to a printer, is also a process. For now, you
can consider a process to be a job or a time-shared program, but later you will learn that the concept is
more general.
A process needs certain resources-including CPU time, memory, files, and, I/O devices-to accomplish
its task. These resources are either given to the process when it is created, or allocated to it while it is
running. In addition to the various physical and logical resources that a process obtains when it is
created, various initialization data (or input) may be passed along. For example, consider a process
whose function is to display the status of a file on the screen of a terminal. The process will be given as
an input the name of the file, and will execute the appropriate instructions and system calls to obtain
and display on the terminal the desired information. When the process terminates, the operating system
will reclaim any reusable resources.
We emphasize that a program by itself is not a process; a program is a passive entity, such as the
contents of a file stored on a disk, whereas a process is an active entity, with a program counter
specifying the next instruction to execute. The execution of a process must be sequential. The CPU
executes the instructions of the process one after another, until the process completes. Further, at any
time, at most one instruction is executed on behalf of the process. Thus, although two processes may
be associated with the same program, they are nevertheless considered two separate execution
sequences. It is common to have a program that spawns many processes as it runs.


 

A process is a unit of work in a system. Such a system consists of a collection of processes, some of
which are operating-system processes (those that execute system code) and the rest of which are user
processes (those that execute user code). All these processes can potentially execute concurrently, by
multiplexing the CPU among them.
The operating system is responsible for the following activities in connection with process
management:
• Creating and deleting both user and system processes
• Suspending and resuming processes
• Providing mechanisms for process synchronization
• Providing mechanisms for process communication
• Providing mechanisms for deadlock handling
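The first few activities above can be sketched as a toy process table (purely illustrative; a real kernel tracks far more state per process than a single label):

```python
# Toy process table: creating, deleting, suspending, and resuming
# processes, each tracked by a process ID and a state.

class ProcessTable:
    def __init__(self):
        self._next_pid = 1
        self._state = {}            # pid -> "ready" | "suspended"

    def create(self):
        pid = self._next_pid
        self._next_pid += 1
        self._state[pid] = "ready"
        return pid

    def suspend(self, pid):
        self._state[pid] = "suspended"

    def resume(self, pid):
        self._state[pid] = "ready"

    def delete(self, pid):
        del self._state[pid]

    def state(self, pid):
        return self._state[pid]

table = ProcessTable()
pid = table.create()
table.suspend(pid)
print(table.state(pid))     # -> suspended
table.resume(pid)
table.delete(pid)
```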
Main Memory Management
The main memory is central to the operation of a modern computer system. Main memory is a large
array of words or bytes, ranging in size from hundreds of thousands to billions. Each word or byte has
its own address. Main memory is a repository of quickly accessible data shared by the CPU and I/O
devices. The central processor reads instructions from main memory during the instruction-fetch cycle.
The I/O operations implemented via DMA also read and write data in main memory. The main
memory is generally the only large storage device that the CPU is able to address and access directly.
For example, for the CPU to process data from disk, those data must first be transferred to main
memory by CPU-generated I/O calls. Equivalently, instructions must be in memory for the CPU to
execute them. When a program terminates, its memory space is declared available, and the next
program can be loaded and executed.
To improve both the utilization of the CPU and the speed of the computer’s response to its users, we
must keep several programs in memory. Many different memory management schemes are available,
and the effectiveness of the different algorithms depends on the particular situation. Selection of a
memory management scheme for a specific system depends on many factors, especially on the
hardware design of the system. Each algorithm requires its own hardware support.
The operating system is responsible for the following activities in connection with memory
management:
• Keeping track of which parts of memory are currently being used and by whom.
• Deciding which processes are to be loaded into memory when memory space becomes available.
• Allocating and de-allocating memory space as needed.
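The bookkeeping above can be sketched as a toy fixed-size block allocator (an illustrative model, not any particular OS scheme):

```python
# Toy block allocator: the system keeps track of which blocks of memory
# are in use and by whom, and allocates and de-allocates them on demand.

class BlockAllocator:
    def __init__(self, n_blocks):
        self.owner = [None] * n_blocks      # None means the block is free

    def allocate(self, pid):
        """Give the first free block to process `pid`; None if memory is full."""
        for i, owner in enumerate(self.owner):
            if owner is None:
                self.owner[i] = pid
                return i
        return None                         # no memory available

    def free(self, block):
        self.owner[block] = None            # de-allocate the block

mem = BlockAllocator(4)
b = mem.allocate(pid=7)
print(b, mem.owner)     # block 0 is now owned by process 7
mem.free(b)
```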
File Management
File management is one of the most visible components of an operating system. Computers can store
information on several different types of physical media. Magnetic tape, magnetic disk, and optical
disk are the most common media. Each of these media has its own characteristics and physical
organization. Each medium is controlled by a device, such as a disk drive or tape drive, that also has
unique characteristics. These properties include access speed, capacity, data transfer rate, and access
method (sequential or random).
For convenient use of the computer, the operating system provides a uniform logical view of information storage.
The operating system abstracts from the physical properties of its storage devices to define a logical
storage unit, the file. The operating system maps files onto physical media, and accesses these files via
the storage devices.
A file is a collection of related information defined by its creator. Commonly, files represent programs
(both source and object forms) and data. Data files may be numeric, alphabetic, or alphanumeric. Files
may be free-form (for example, text files), or may be formatted rigidly (for example, fixed fields). A
file consists of a sequence of bits, bytes, lines, or records whose meanings are defined by their creators.
The concept of a file is an extremely general one.
The operating system implements the abstract concept of a file by managing mass storage media, such
as disks and tapes, and the devices that control them. Also, files are normally organized into directories
to ease their use. Finally, when multiple users have access to files, we may want to control by whom
and in what ways (for example, read, write, append) files may be accessed.
The operating system is responsible for the following activities in connection with file management:
• Creating and deleting files
• Creating and deleting directories
• Supporting primitives for manipulating files and directories.
• Mapping files onto secondary storage
• Backing up files on stable (nonvolatile) storage media
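From the user's side, several of the activities above map onto familiar operations, sketched here with Python's standard pathlib module (the operating system performs the low-level work of mapping files onto storage; this only shows the interface a user sees):

```python
# Creating and deleting files and directories via the OS file-system
# interface, using a temporary scratch directory for the example.

import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())     # scratch area for the example

d = root / "reports"
d.mkdir()                           # creating a directory
f = d / "notes.txt"
f.write_text("hello")               # creating (and writing) a file
print(f.read_text())                # -> hello
f.unlink()                          # deleting the file
d.rmdir()                           # deleting the (now empty) directory
```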
I/O System Management
One of the purposes of an operating system is to hide the peculiarities of specific hardware devices
from the user. For example, in UNIX, the peculiarities of I/O devices are hidden from the bulk of the
operating system itself by the I/O subsystem. The I/O subsystem consists of
• A memory management component that includes buffering, caching, and spooling
• A general device driver interface
• Drivers for specific hardware devices
Only the device driver knows the peculiarities of the specific device to which it is assigned.
Secondary Storage Management
The main purpose of a computer system is to execute programs. These programs, with the data they
access, must be in main memory, or primary storage, during execution. Because main memory is too
small to accommodate all data and programs, and because the data that it holds are lost when power is
lost, the computer system must provide secondary storage to back up main memory. Most modern
computer systems use disks as the principal online storage medium, for both programs and data. Most
programs, including compilers, assemblers, sort routines, editors, and formatters, are stored on a disk
until loaded into memory, and then use the disk as both the source and destination of their processing.
Hence, the proper management of disk storage is of central importance to a computer system.
The operating system is responsible for the following activities in connection with disk management:
• Free space management
• Storage allocation
• Disk scheduling
Because secondary storage is used frequently, it must be used efficiently. The entire speed of operation
of a computer may hinge on the speeds of the disk subsystem and of the algorithms that manipulate
that subsystem.
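The impact of the disk-scheduling algorithm can be illustrated by comparing first-come-first-served (FCFS) with shortest-seek-time-first (SSTF) on the same request queue (the cylinder numbers below are a classic textbook example, not from this text):

```python
# Total head movement (in cylinders) under two disk-scheduling policies.

def fcfs_movement(start, requests):
    """Serve requests in arrival order."""
    total, head = 0, start
    for r in requests:
        total += abs(r - head)
        head = r
    return total

def sstf_movement(start, requests):
    """Always serve the pending request closest to the current head."""
    total, head, pending = 0, start, list(requests)
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_movement(53, queue))   # -> 640
print(sstf_movement(53, queue))   # -> 236
```

SSTF here cuts total head movement by more than half, which is exactly why the speed of the disk subsystem can hinge on the scheduling algorithm.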


 
Computer Application in Pharmacy Computer Networks

Computer Networks
A computer network is a set of heterogeneous electronic components connected together for the
purpose of data communication.
A network is a set of devices (nodes) connected by communication links. A node can be a computer,
printer, or any other device capable of sending and/or receiving data generated by other nodes on the
network.
Distributed Processing
Most networks use distributed processing, in which a task is divided among multiple computers.
Instead of a single large machine being responsible for all aspects of a process, separate computers
each handle a subset.

Network Criteria
A network must be able to meet a certain number of criteria. The most important of these are
performance, reliability and security.
Performance
Performance can be measured in many ways, including transit time and response time. Transit time is
the amount of time required for a message to travel from one device to another. Response time is the
elapsed time between an inquiry and a response. The performance of a network depends on a
number of factors, including the number of users, the type of transmission medium, the capabilities of
the connected hardware, and the efficiency of the software.
Reliability
In addition to accuracy of delivery, network reliability is measured by the frequency of failure, the time
it takes a link to recover from a failure, and the network ‘s robustness in a catastrophe.
Security
Network security issues include protecting data from unauthorized access.

Physical Structure
Before discussing networks we need to define some attributes.
Types of connection
A network is two or more devices connected together through links. A link is a communication
pathway that transfers data from one device to another. For visualization purposes, it is simplest to
imagine any link as a line drawn between two points. For communication to occur, two devices must
be connected in some way to the same link at the same time. There are two possible types of
connections: point-to-point and multipoint.
Point-to-Point
A point-to-point connection provides a dedicated link between two devices. The entire capacity of the
link is reserved for transmission between those two devices. Most point-to-point connections use an
actual length of wire or cable to connect the two ends, but other options, such as microwave or satellite
links, are also possible. When you change television channels by infrared remote control, you are
establishing a point-to-point connection between the remote control and the television's control
system.
Multipoint
A Multipoint connection is one in which more than two specific devices share a single link.
In a multipoint environment, the capacity of the channel is shared either spatially or temporally. If
several devices can use the link simultaneously, it is a spatially shared connection. If users must take
turns, it is a timeshared connection.


 

Physical Topology
The term physical topology refers to the way in which a network is laid out physically. Two or more
devices connect to a link; two or more links form a topology. The topology of a network is a geometric
representation of the relationship of all the links and linking devices to one another. There are four
basic topologies possible: mesh, star, bus, and ring.
Star
In a star topology, each device has a dedicated point-to-point link only to a central controller, usually
called a hub. The devices are not directly linked to one another. Unlike a mesh topology, a star topology
does not allow direct traffic between devices. The controller acts as an exchange: if one device wants
to send data to another, it sends the data to the controller, which then relays the data to the other
connected device.
A star topology is less expensive than a mesh topology. In a star, each device needs only one link and
one I/O port to connect it to any number of others. This factor also makes it easy to install and
reconfigure. Far less cabling needs to be housed, and additions, moves, and deletions involve only one
connection: between that device and the hub.
Other advantages include robustness. If one link fails, only that link is affected. All other links remain
active. This factor also lends itself to easy fault identification and fault isolation. As long as the hub is
working, it can be used to monitor link problems and bypass defective links.
However, although a star requires far less cable than a mesh, each node must be linked to a central
hub. For this reason, often more cabling is required in a star than in some other topologies.
Bus
The preceding examples all describe point-to-point connections. A bus topology, on the other hand, is
multipoint. One long cable acts as a backbone to link all the devices in a network.
Nodes are connected to the bus cable by drop lines and taps. A drop line is a connection running
between the device and the main cable. A tap is a connector that either splices into the main cable or
punctures the sheathing of a cable to create a contact with the metallic core. As a signal travels along
the backbone, some of its energy is transformed into heat. Therefore, it becomes weaker and weaker as
it has to travel farther and farther.
Advantages of a bus topology include ease of installation. Backbone cable can be laid along the most
efficient path, then connected to the nodes by drop lines of various lengths. In this way, a bus uses less
cabling than mesh or star topologies. In a star, for example, four network devices in the same room
require four lengths of cable reaching all the way to the hub. In a bus, this redundancy is eliminated.
Only the backbone cable stretches through the entire facility. Each drop line has to reach only as far as
the nearest point on the backbone.
Disadvantages include difficult reconnection and fault isolation. It can therefore be difficult to add new
devices. Signal reflection at the taps can cause degradation in quality. This degradation can be
controlled by limiting the number and spacing of devices connected to a given length of cable.
Adding new devices may therefore require modification or replacement of the backbone.
In addition, a fault or break in the bus cable stops all transmission, even between devices on the
same side of the problem. The damaged area reflects signals back in the direction of origin, creating
noise in both directions.
Ring
In a ring topology, each device has a dedicated point-to-point connection only with the two devices on
either side of it. A signal is passed along the ring in one direction, from device to device, until it reaches
its destination. Each device in the ring incorporates a repeater. When a device receives a signal
intended for another device, its repeater regenerates the bits and passes them along.
A ring is relatively easy to install and reconfigure. Each device is linked only to its immediate
neighbors. To add or delete a device requires changing only two connections. The only constraints are
media and traffic considerations. In addition, fault isolation is simplified. Generally, in a ring a signal is
circulating at all times.
However, unidirectional traffic can be a disadvantage. In a simple ring, a break in the ring can disable
the entire network. This weakness can be solved by using a dual ring or a switch capable of closing off
the break.
Mesh
In a mesh topology, every device has a dedicated point-to-point link to every other device. The term
dedicated means that the link carries traffic only between the two devices it connects. A fully connected
mesh network therefore has n(n-1)/2 physical channels to link n devices. To accommodate that many
links, every device on the network must have n-1 input/output (I/O) ports.
A mesh offers several advantages over other network topologies. First, the use of dedicated links
guarantees that each connection can carry its own data load, thus eliminating the traffic problems that
can occur when links must be shared by multiple devices. Second, a mesh topology is robust. If one link
becomes unusable, it does not incapacitate the entire system. Another advantage is privacy or security.
When every message travels along a dedicated line, only the intended recipient sees it. Physical
boundaries prevent other users from gaining access to messages. Finally, point-to-point links make
fault identification and fault isolation easy. Traffic can be routed to avoid links with suspected
problems. This facility enables the network manager to discover the precise locations of the fault and
aids in finding its cause and solution.
The main disadvantages of a mesh are related to the amount of cabling and the number of I/O ports
required. First, because every device must be connected to every other device, installation and
reconnection are difficult. Second, the sheer bulk of the wiring can be greater than the available space
can accommodate. Finally, the hardware required to connect each link can be prohibitively expensive.
For these reasons, a mesh topology is usually implemented in a limited fashion, for example, as a
backbone connecting the main computers of a hybrid network that can include several other topologies.
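The cabling and port requirements discussed above can be summarized in a short sketch (n devices per topology):

```python
# Link counts for the basic topologies with n devices: a full mesh needs
# a dedicated link per pair, while star and ring need only one link per
# device, which is why mesh cabling costs grow so quickly.

def mesh_links(n):
    return n * (n - 1) // 2     # one dedicated link for every pair

def star_links(n):
    return n                    # one link from each device to the hub

def ring_links(n):
    return n                    # each device linked to its two neighbors

print(mesh_links(5), star_links(5), ring_links(5))   # -> 10 5 5
```

In a mesh, each of the n devices also needs n-1 I/O ports, whereas a star needs only one port per device.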

Categories of Networks


Today when we speak of networks we are generally referring to three primary categories: local area
networks, metropolitan area networks and wide area networks. Into which category a network falls is
determined by its size, its ownership, the distance it covers and its physical architecture.
Local Area Networks
A local area network is usually privately owned and links the devices in a single office building or
campus. Depending on the needs of an organization and the type of technology used a LAN can be as
simple as two PCs and a printer in someone’s home office or it can extend throughout the company and
include audio and video peripherals. Currently LAN size is limited to a few kilometers.
LANs are designed to allow resources to be shared between personal computers or workstations. The
resources to be shared can include hardware, software, or data. A common example of a LAN, found in
many business environments, links a workgroup of task-related computers, for example, engineering
workstations or accounting PCs. One of the computers may be given a large-capacity disk drive and
may become a server to the other clients. Software can be stored on this central server and used as
needed by the whole group. In this example, the size of the LAN may be determined by licensing
restrictions on the number of users per copy of software, or by restrictions on the number of users
licensed to access the operating system.
In addition to size, LANs are distinguished from other types of networks by their transmission
media and topology. In general, a given LAN will use only one type of transmission medium. The most
common LAN topologies are bus, ring, and star.
Traditionally, LANs have data rates in the 4 to 16 megabits per second range. Today, however, speeds
are increasing and can reach 100 Mbps, with gigabit systems in development.
Metropolitan Area Network
A metropolitan area network (MAN) is designed to extend over an entire city. It may be a single
network, such as a cable television network, or it may be a means of connecting a number of LANs into
a larger network so that resources may be shared LAN-to-LAN as well as device-to-device. For
example, a company can use a MAN to connect the LANs in all its offices throughout a city.
A MAN may be wholly owned and operated by a private company, or it may be a service provided by a
public company, such as a local telephone company. Many telephone companies provide a popular
MAN service called Switched Multi-megabit Data Service (SMDS).
Wide Area Network
A wide area network (WAN) provides long-distance transmission of data, voice, image, and video
information over large geographic areas that may comprise a country or even the whole world.
In contrast to LANs, WANs may utilize public, leased, or private communication equipment, usually in
combinations, and can therefore span an unlimited number of miles.
A WAN that is wholly owned and used by a single company is often referred to as an enterprise
network.
Internetworks
When two or more networks are connected, they become an internetwork, or internet.

OSI Model
The standard model for networking protocols and distributed applications is the International
Standards Organization's Open System Interconnect (ISO/OSI) model. It defines seven network
layers.
Short for Open System Interconnection, an ISO standard for worldwide communications that
defines a networking framework for implementing protocols in seven layers. Control is passed
from one layer to the next, starting at the application layer in one station, and proceeding to
the bottom layer, over the channel to the next station and back up the hierarchy.
At one time, most vendors agreed to support OSI in one form or another, but OSI was too
loosely defined and proprietary standards were too entrenched. Except for the OSI-compliant
X.400 and X.500 e-mail and directory standards, which are widely used, what was once
thought to become the universal communications standard now serves as the teaching model
for all other protocols.

The Structure of a Layer


One of the ways that the OSI Reference Model has been beneficial is its definition of concepts and
structure for the layers. The elements of the layer structure (using the Transport Layer as an example)
are illustrated in Figure 3.6. Some readers will observe that this layer structure does not seem to apply
at the top, where there is no service user, and at the bottom, where there is no service provider. These
boundary conditions will be discussed in Part II.
The Entity
The focal point of a layer is the entity, which is the representation in the OSI Reference Model of the
“things at the end of a protocol.” Entities exist within each open system. A single entity cannot be
spread across more than one open system or layer; thus layer and system independence are preserved.


 

An entity has two important aspects: types and invocations. An entity-type is a class of entities
described in terms of a set of capabilities defined for the layer. A layer can be defined as the set of
entities of a certain type (e.g., entity-type = transport). An entity-invocation is a specific utilization of all
or part of the capabilities of a given entity.
As an example of this concept, the “telephone” is a device type, while the red telephone on your desk is
an invocation of that type. OSI entities correspond in an undefined, implementation-dependent fashion
to software in a real system that generates and responds to the protocol interactions of the layer.
An (N)-function is part of the activity of an (N)-entity. An (N)-directory and (N)-address-mapping are
examples of such functions.
Independence of layers is achieved by constraining protocol interactions to be strictly between entities
(called peer-entities) in the same layer. Formally defined interfaces to the layer directly above and
directly below ensure that changes in one layer do not affect adjacent layers.
The Service Primitive
An (N)-service is a capability of the (N)-layer that is provided to the (N+1)-layer. The (N)-layer creates
these services using the services of the layer below, if necessary. An entity in the layer above (in the same
open system) may use the services of one or more entities located in the layer below.
A service consists of a number of elements of service called service primitives. Services can be of two
types (confirmed or unconfirmed), depending upon whether or not the destination provides a reply to
the sender's message. Four types of service primitive have been defined: request, indication, response,
and confirm. These are illustrated in Figure 3.7.
The Service-Access-Point
(N)-services are made available at an (N)-service-access-point, usually abbreviated as (N)-SAP
(transport-service-access-point, or TSAP, in the case of the Transport Layer). A SAP is the conceptual
point of interaction between two layers within an open system. Relationships between entities in
adjacent layers can be one-to-one or one-to-many (Figure 3.8). The SAP is an abstract modeling
concept and is not meant to be a real physical location.
A SAP is a fixed reference point in the OSIE; its main purpose is to be an addressable point to which
entities can be associated or “attached”. Although the concepts of identifiers were included in the basic
OSI Reference Model, the whole area of naming and addressing is now the subject of more detailed
treatment in a refinement to the basic model. This topic will be reviewed in detail in chapter 4.
Figure 3.8 illustrates how the SAP acts as the “point of attachment” for entities in two layers. The basic
rule for a SAP is that it can be attached to only one entity in the layer above and one entity in the layer
below at any given point in time. A SAP may be detached from its entities and reattached to the same
or different entities.
The Protocol
In OSI terminology an (N)-protocol is defined as “a set of rules and formats (semantic and syntactic)
which determines the communication behavior of (N)-entities in the performance of (N)-
functions.” More than one protocol may be defined for a layer and used by an entity. Meaningful
communication, however, requires that peer-entities agree on a single protocol to use for a particular
instance of communication.
The characteristics of a protocol are determined by the functions it supports. In general, however, all
protocols are used to transfer “user-data” on behalf of the entities in the layer above (the service
users). OSI protocols typically operate between two entities. It is possible, however, to have protocols
that operate among more than two entities.
Layers in Detail
Control is passed from one layer to the next, starting at the application layer in one station, proceeding
to the bottom layer, over the channel to the next station and back up the hierarchy.
Layer 1 - Physical
The physical layer defines the cable or physical medium itself, e.g., thinnet, thicknet, or unshielded
twisted pair (UTP). All media are functionally equivalent; the main differences are in the convenience
and cost of installation and maintenance. Converters from one medium to another operate at this level.
Layer 2 - Data Link
The data link layer defines the format of data on the network. A network data frame, aka packet, includes a
checksum, source and destination addresses, and the data itself. The largest packet that can be sent through a data
link layer defines the Maximum Transmission Unit (MTU). The data link layer handles the physical
and logical connections to the packet's destination, using a network interface. A host connected to an
Ethernet would have an Ethernet interface to handle connections to the outside world, and a loopback
interface to send packets to itself.
Ethernet addresses a host using a unique, 48-bit address called its Ethernet address or Media Access
Control (MAC) address. MAC addresses are usually represented as six colon-separated pairs of hex
digits, e.g., 8:0:20:11:ac:85. This number is unique and is associated with a particular Ethernet device.
Each of a host's network interfaces has its own MAC address. The data link
layer's protocol-specific header specifies the MAC addresses of the packet's source and destination.
When a packet is sent to all hosts (broadcast), a special MAC address (ff:ff:ff:ff:ff:ff) is used.
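The MAC-address conventions above can be sketched in a few lines of Python; `normalize_mac` and `is_broadcast` are hypothetical helper names for this example, not part of any networking library.

```python
# A minimal sketch of MAC-address handling as described above.
def normalize_mac(mac: str) -> str:
    """Return a MAC address as six zero-padded, colon-separated hex pairs."""
    parts = mac.split(":")
    if len(parts) != 6:
        raise ValueError("a MAC address has six hex pairs")
    return ":".join(f"{int(p, 16):02x}" for p in parts)

def is_broadcast(mac: str) -> bool:
    """True for the special all-ones broadcast address ff:ff:ff:ff:ff:ff."""
    return normalize_mac(mac) == "ff:ff:ff:ff:ff:ff"

print(normalize_mac("8:0:20:11:ac:85"))   # 08:00:20:11:ac:85
print(is_broadcast("FF:FF:FF:FF:FF:FF"))  # True
```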
Layer 3 - Network
NFS uses the Internet Protocol (IP) as its network layer interface. IP is responsible for routing,
directing datagrams from one network to another. The network layer may have to break large
datagrams, larger than the MTU, into smaller packets, and the host receiving the packet will have to reassemble
the fragmented datagram. The Internetwork Protocol identifies each host with a 32-bit IP address. IP
addresses are written as four dot-separated decimal numbers between 0 and 255, e.g., 129.79.16.40.
The leading 1-3 bytes of the IP identify the network and the remaining bytes identify the host on that
network. The network portion of the IP is assigned by InterNIC Registration Services, under
contract to the National Science Foundation, and the host portion of the IP is assigned by the local
network administrators. For large sites, the first two bytes represent the network portion of the IP, and
the third and fourth bytes identify the subnet and host respectively.
Even though IP packets are addressed using IP addresses, hardware addresses must be used to actually
transport data from one host to another. The Address Resolution Protocol (ARP) is used to map the IP
address to its hardware address.
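The dotted-decimal notation and the network/host split described above can be explored with Python's standard `ipaddress` module; the /16 prefix here assumes the "large site" scheme in which the first two bytes name the network.

```python
import ipaddress

# Split the example address 129.79.16.40 into network and host portions
# under a /16 (first-two-bytes-are-the-network) addressing scheme.
iface = ipaddress.ip_interface("129.79.16.40/16")
print(iface.network)              # 129.79.0.0/16  (network portion)
print(int(iface.ip) & 0xFFFF)     # host portion as an integer
print(iface.ip in iface.network)  # True: the host lies inside its network
```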
Layer 4 - Transport
The transport layer subdivides the user buffer into network-buffer-sized datagrams and enforces the
desired transmission control. Two transport protocols, Transmission Control Protocol (TCP) and User
Datagram Protocol (UDP), sit at the transport layer. Reliability and speed are the primary differences
between these two protocols. TCP establishes connections between two hosts on the network through
'sockets' which are determined by the IP address and port number. TCP keeps track of the packet
delivery order and the packets that must be resent. Maintaining this information for each connection
makes TCP a stateful protocol. UDP on the other hand provides a low overhead transmission service,
but with less error checking. NFS is built on top of UDP because of its speed and statelessness.
Statelessness simplifies the crash recovery.
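As a minimal sketch of the UDP behavior described above, the following loopback exchange sends one datagram with no connection setup; port 0 (let the OS pick) and the 1024-byte receive buffer are arbitrary choices for the example.

```python
import socket

# UDP delivers independent datagrams with no handshake and no delivery
# tracking, which is why a stateless service favors it over TCP.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
recv.settimeout(2)                 # avoid blocking forever in this demo
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", addr)        # no connection setup needed

data, peer = recv.recvfrom(1024)
print(data)                        # b'hello'
send.close()
recv.close()
```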
Layer 5 - Session
The session protocol defines the format of the data sent over the connections. NFS uses the
Remote Procedure Call (RPC) for its session protocol. RPC may be built on either TCP or UDP. Login
sessions use TCP, whereas NFS and broadcast use UDP.
Layer 6 - Presentation
External Data Representation (XDR) sits at the presentation level. It converts the local representation of
data to its canonical form and vice versa. The canonical form uses a standard byte ordering and
structure-packing convention, independent of the host.
Layer 7 - Application
Provides network services to the end users. Mail, ftp, telnet, DNS, NIS, and NFS are examples of network
applications.
Internet Model (TCP/IP Protocol Suite)
The layered protocol stack that dominates data communications and networking today is the five-layer
Internet model, sometimes called the TCP/IP protocol suite. The model is composed of five ordered
layers: physical (layer 1), data link (layer 2), network (layer 3), transport (layer 4), and application
(layer 5). Figure shows the layers involved when a message is sent from device A to device B. As the
message travels from A to B, it may pass through many intermediate nodes. These intermediate nodes
usually involve only the first three layers of the model.
In developing the model, the designers distilled the process of transmitting data to its most fundamental
elements. They identified which networking functions had related use and collected those functions
into discrete groups that became the layers. By defining and localizing functionality in this fashion, the
designers created an architecture that is both comprehensive and flexible.
Within a single machine, each layer calls upon the services of the layer just below it. Layer 3, for
example, uses the services provided by layer 2 and provides services for layer 4. Between machines,
layer x on one machine communicates with layer x on another machine. This communication is
governed by an agreed-upon series of rules and conventions called protocols. The processes on each
machine that communicate at a given layer are called peer-to-peer processes. Communication between
machines is therefore a peer-to-peer process using the protocols appropriate to a given layer.
Peer-to-Peer Processes
At the physical layer, communication is direct: In Figure, device A sends a stream of bits to device B.
At the higher layers, however, communication must move down through the layers on device A, over
to device B, and then back up through the layers. Each layer in the sending device adds its own
information to the message it receives from the layer just above it and passes the whole package to the
layer just below it.
At layer 1 the entire package is converted to a form that can be transferred to the receiving device. At
the receiving machine, the message is unwrapped layer by layer, with each process receiving and
removing the data meant for it. For example, layer 2 removes the data meant for it, then passes the rest
to layer 3. Layer 3 then removes the data meant for it and passes the rest to layer 4, and so on.
Interfaces between Layers
The passing of the data and network information down through the layers of the sending device and
back up through the layers of the receiving device is made possible by an interface between each pair
of adjacent layers. Each interface defines what information and services a layer must provide for the
layer above it. Well-defined interfaces and layer functions provide modularity to a network. As long as
a layer provides the expected services to the layer above it, the specific implementation of its functions
can be modified or replaced without requiring changes to the surrounding layers.
Organization of the Layers
The five layers can be thought of as belonging to three subgroups. Layers 1, 2, and 3 (physical, data
link, and network) are the network support layers; they deal with the physical aspects of moving data
from one device to another (such as electrical specifications, physical connections, physical addressing,
and transport timing and reliability). Layer 5 (application) can be thought of as the user support layer;
it allows interoperability among unrelated software systems. Layer 4, the transport layer, links the two
subgroups and ensures that what the lower layers have transmitted is in a form that the upper layers can
use.
In figure, which gives an overall view of the layers, L5 data means the data unit at layer 5, L4 data
means the data unit at layer 4, and so on. The process starts at layer 5 (the application layer), then
moves from layer to layer in descending, sequential order. At each layer, a header can be added to the
data unit. At layer 2, a trailer is added as well. When the formatted data unit passes through the
physical layer (layer 1), it is changed into an electromagnetic signal and transported along a physical
link.
Upon reaching its destination, the signal passes into layer 1 and is transformed back into digital form.
The data units then move back up through the layers. As each block of data reaches the next-higher
layer, the headers and trailers attached to it at the corresponding sending layer are removed, and actions
appropriate to that layer are taken. By the time it reaches layer 5, the message is again in a form
appropriate to the application and is made available to the recipient.
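The wrapping and unwrapping described above can be shown with a toy sketch, with invented `H`/`T` markers standing in for real headers and trailers:

```python
# Toy encapsulation: each layer on the way down prepends its header
# (layer 2 also appends a trailer); the receiver strips them in reverse.
def send_down(message: str) -> str:
    unit = message                      # L5 data
    for layer in (4, 3, 2):
        unit = f"H{layer}|{unit}"       # header added at layers 4, 3, 2
    return unit + "|T2"                 # trailer added at layer 2 only

def receive_up(frame: str) -> str:
    frame = frame.rsplit("|T2", 1)[0]   # layer 2 removes its trailer
    for layer in (2, 3, 4):
        frame = frame.split(f"H{layer}|", 1)[1]  # each layer strips its header
    return frame                        # original L5 data

wire = send_down("hello")
print(wire)                  # H2|H3|H4|hello|T2
print(receive_up(wire))      # hello
```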
Framing
The data link layer divides the stream of bits received from the network layer into manageable data
units called frames.
Physical addressing
If frames are to be distributed to different systems on the network, the data link layer adds a header to
the frame to define the sender and/or receiver of the frame. If the frame is intended for a system
outside the sender’s network, the receiver address is the address of the connecting device that connects
the network to the next one.
Flow control
If the rate at which the data are absorbed by the receiver is less than the rate produced by the sender, the
data link layer imposes a flow control mechanism to prevent overwhelming the receiver.
Error control
The data link layer adds reliability to the physical layer by adding mechanisms to detect and retransmit
damaged or lost frames. It also uses a mechanism to recognize duplicate frames. Error control is
normally achieved through a trailer added to the end of the frame.
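The idea of an error-detecting trailer can be illustrated with a deliberately simple one-byte checksum; real data link layers use a CRC, and `add_trailer`/`is_valid` are made-up names for this sketch.

```python
# Append a one-byte checksum as a trailer, then verify it on receipt.
# (Real frames use a CRC; a modular sum keeps the example short.)
def add_trailer(frame: bytes) -> bytes:
    return frame + bytes([sum(frame) % 256])

def is_valid(frame_with_trailer: bytes) -> bool:
    frame, trailer = frame_with_trailer[:-1], frame_with_trailer[-1]
    return sum(frame) % 256 == trailer

sent = add_trailer(b"data")
print(is_valid(sent))             # True
corrupted = b"dbta" + sent[-1:]   # a byte changed in transit
print(is_valid(corrupted))        # False
```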
Access control
When two or more devices are connected to the same link, data link layer protocols are necessary to
determine which device has control over the link at any given time.
Network layer
The network layer is responsible for the source-to-destination delivery of a packet, possibly across
multiple networks. Whereas the data link layer oversees the delivery of the packet between two
systems on the same network, the network layer ensures that each packet gets from its point of origin to its
final destination.
If two systems are connected to the same link, there is usually no need for a network layer. However, if
the two systems are attached to different networks with connecting devices between the networks, there
is often a need for the network layer to accomplish source-to-destination delivery.
Logical Addressing
The physical addressing implemented by the data link layer handles the addressing problem locally. If
a packet passes the network boundary, we need another addressing system to help distinguish the
source and destination systems. The network layer adds a header to the packet coming from the upper layer
that, among other things, includes the logical addresses of the sender and receiver.
Routing
When independent networks or links are connected to create an internetwork (a network of networks) or a
large network, the connecting devices (called routers or switches) route or switch the packets to their
final destination. One of the functions of the network layer is to provide this mechanism.
Transport Layer
The transport layer is responsible for process-to-process delivery of the entire message. Whereas the
network layer oversees source-to-destination delivery of individual packets, it does not recognize any
relationship between those packets. It treats each one independently, as though each piece belonged to a
separate message, whether or not it does. The transport layer, on the other hand, ensures that the whole
message arrives intact and in order, overseeing both error control and flow control at the process-to-
process level.
The major duties of the transport layer are as follows:
Port Addressing
Computers often run several processes (running programs) at the same time. For this reason, process-to-
process delivery means delivery not only from one computer to the next but also from a specific
process on one computer to a specific process on the other. The transport layer header must therefore
include a type of address called a port address. The network layer gets each packet to the correct
computer; the transport layer gets the entire message to the correct process on that computer.
Segmentation and Reassembly
A message is divided into transmittable segments, each segment containing a sequence number. These
numbers enable the transport layer to reassemble the message correctly upon arrival at the destination
and to identify and replace packets that were lost in the transmission.
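A sketch of segmentation with sequence numbers, using the invented helpers `segment` and `reassemble`; sorting by sequence number restores the original order even when segments arrive out of order.

```python
# Split a message into numbered segments, then rebuild it even when the
# segments arrive out of order.
def segment(message: str, size: int):
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(segments):
    return "".join(data for _, data in sorted(segments))

segs = segment("the whole message arrives intact", 8)
segs.reverse()            # simulate out-of-order arrival
print(reassemble(segs))   # the whole message arrives intact
```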
Connection Control
The transport layer can be either connectionless or connection-oriented. A connectionless transport
layer treats each segment as an independent packet and delivers it to the transport layer at the
destination machine. A connection-oriented transport layer makes a connection with the transport layer
at the destination machine first before delivering the packets. After all the data are transferred, the
connection is terminated.
Flow Control
Like the data link layer, the transport layer is responsible for flow control. However, flow control at this
layer is performed end to end rather than across a single link.
Error Control
Like the data link layer, the transport layer is responsible for error control. However, error control at this
layer is performed end to end rather than across a single link. The sending transport layer makes sure
that the entire message arrives at the receiving transport layer without error (damage, loss, or
duplication). Error correction is usually achieved through retransmission.
Application Layer
The application layer enables the user, whether human or software, to access the network. It provides
user interfaces and support for services such as electronic mail, remote file access and transfer, access
to the World Wide Web, and so on.
The major duties of the application layer are as follows:
Mail Services
This application is the basis for email forwarding and storage.
File Transfer and Access
This application allows a user to access files on a remote host (to make changes or read data), to
retrieve files from a remote computer for use on the local computer, and to manage or control files on a
remote computer locally.
Remote Log-in
A user can log into a remote computer and access the resources of that computer.
Accessing the World Wide Web
The most common application today is the access of the World Wide Web (WWW).
Transmission Media
Guided Media
Guided media, which are those that provide a conduit from one device to another, include twisted-pair
cable, coaxial cable, and fiber-optic cable. A signal travelling along any of these media is directed and
contained by the physical limits of the medium. Twisted-pair and coaxial cable use metallic conductors
that accept and transport signals in the form of electric current. Optical fiber is a cable that
accepts and transports signals in the form of light.
Twisted Pair Cable
A twisted pair consists of two conductors (normally copper), each with its own plastic insulation,
twisted together.
One of the wires is used to carry signals to the receiver, and the other is used only as a ground
reference. The receiver uses the difference between the two. In addition to the signal sent by
the sender on one of the wires, interference (noise) and crosstalk may affect both wires and create
unwanted signals. The receiver at the end, however, operates only on the differences between these
unwanted signals. This means that if the two wires are affected by noise or crosstalk equally, the
receiver is immune (the difference is zero).
If two wires are parallel, the effect of those unwanted signals is not the same in both wires because
they are at different locations relative to the noise or crosstalk sources (e.g., one closer and one farther).
This results in a difference at the receiver. By twisting the pairs, a balance is maintained. For example,
suppose in one twist one wire is closer to the noise source and the other farther; in the next twist, the
reverse is true. This means that the receiver, which calculates the difference between the two, receives
no unwanted signals.
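The cancellation argument above can be checked numerically: model the two wires as carrying the signal and its inverse, add the same (common-mode) noise to both, and take half the difference at the receiver. The numbers are arbitrary illustrative values.

```python
# Differential reception: noise that hits both wires equally cancels,
# because the receiver reads only the difference between the wires.
signal = [5, -5, 5, 5, -5]
noise = [3, -2, 4, 1, -3]     # common-mode noise, identical on both wires

wire_a = [s + n for s, n in zip(signal, noise)]    # signal + noise
wire_b = [-s + n for s, n in zip(signal, noise)]   # inverted signal + noise

received = [(a - b) / 2 for a, b in zip(wire_a, wire_b)]
print(received)   # [5.0, -5.0, 5.0, 5.0, -5.0] -- the noise is gone
```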
Unshielded versus Shielded Twisted- Pair cable
The most common twisted-pair cable used in communications is referred to as unshielded twisted-pair
(UTP). IBM has also produced a version of twisted-pair cable for its own use, called shielded twisted-pair
(STP). STP cable has a metal foil or braided-mesh covering that encases each pair of insulated
conductors. Although the metal casing improves the quality of the cable by preventing the penetration of
noise or crosstalk, it is bulkier and more expensive.
Categories
The Electronic Industries Association (EIA) has developed standards to classify unshielded twisted-pair
cable into seven categories. Categories are determined by cable quality, with 1 as the lowest and 7 as
the highest. Each EIA category is suitable for specific uses.
Connectors
The most common UTP connector is RJ45 (RJ stands for Registered Jack). The RJ45 is a keyed
connector, meaning the connector can be inserted in only one way.
Performance
One way to measure the performance of twisted-pair cable is to compare attenuation versus frequency and
distance. A twisted-pair cable can pass a wide range of frequencies. However, with increasing
frequency, the attenuation, measured in decibels per mile (dB/mi), increases sharply at frequencies
above 100 kHz. Gauge is a measure of the thickness of the wire.
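Attenuation in decibels compares output power to input power via 10·log10(Pout/Pin); a short sketch (the function name is invented):

```python
import math

# Attenuation in decibels; a negative result means the signal lost power.
def attenuation_db(p_in: float, p_out: float) -> float:
    return 10 * math.log10(p_out / p_in)

print(attenuation_db(10.0, 5.0))   # about -3 dB: half the power was lost
print(attenuation_db(10.0, 1.0))   # -10 dB: only a tenth of the power left
```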
Applications
Twisted-pair cables are used in telephone lines to provide voice and data channels. The local loop, the
line that connects subscribers to the central telephone office, is most commonly unshielded
twisted-pair cable.
The DSL lines that are used by the telephone companies to provide high data rate connections also use
the high- bandwidth capability of unshielded twisted-pair cables.
Local area networks, such as 10Base-T and 100Base-T, also use twisted-pair cables.
Coaxial Cables
Coaxial cable carries signals of higher frequency ranges than twisted-pair cable, in part because the two
media are constructed quite differently. Instead of having two wires, coax has a central core conductor
of solid or stranded wire (usually copper) enclosed in an insulating sheath, which is, in turn, encased in
an outer conductor of metal foil, braid, or a combination of the two. The outer metallic wrapping serves
both as a shield against noise and as the second conductor, which completes the circuit. This outer conductor
is also enclosed in an insulating sheath, and the whole cable is protected by a plastic cover.
Coaxial Cable Standards
Coaxial cables are categorized by their radio government (RG) ratings. Each RG number denotes a
unique set of physical specifications, including the wire gauge of the inner conductor, the thickness and
construction of the shield, and the size and type of the outer casing.
Coaxial Cable Connectors
To connect coaxial cable to devices, we need coaxial connectors. The most common type of connector
used today is the Bayonet Neill-Concelman (BNC) connector.
The BNC connector is used to connect the end of the cable to a device, such as a TV set. The BNC T
connector is used in Ethernet networks to branch out a cable for connection to a computer or other
devices. The BNC terminator is used at the end of the cable to prevent the reflection of the signal.
Performance
As we did with twisted-pair cable, we can measure the performance of a coaxial cable. We notice that
attenuation is much higher in coaxial cable than in twisted-pair cable. In other words, although coaxial
cable has a much higher bandwidth, the signal weakens rapidly and requires the frequent use of repeaters.
Applications
The use of coaxial cable started in analog telephone networks, where a single coaxial network could
carry 10,000 voice signals. Later it was used in digital telephone networks, where a single coaxial
cable could carry digital data up to 600 Mbps. However, coaxial cable in telephone networks has largely
been replaced today with fiber-optic cable.
Cable TV networks also use coaxial cable. In the traditional TV network, the entire network used
coaxial cable. Later, however, cable TV providers replaced most of the network with fiber-optic cable;
hybrid networks use coaxial cable only at the network boundaries, near the consumer premises. Cable
TV uses RG-59 coaxial cable.
Another common application of coaxial cable is in traditional Ethernet LANs. Because of its high
bandwidth, and consequently high data rate, coaxial cable was chosen for digital transmission in early
Ethernet LANs. 10Base-2, or Thin Ethernet, uses RG-58 coaxial cable with BNC connectors to
transmit data at 10 Mbps with a range of 185 m. 10Base5, or Thick Ethernet, uses RG-11 to transmit 10
Mbps with a range of 500 m. Thick Ethernet has specialized connectors.
Fiber-Optic Cable
A fiber-optic cable is made of glass or plastic and transmits signals in the form of light. To understand optical
fiber, we first need to explore several aspects of the nature of light.
Light travels in a straight line as long as it is moving through a single uniform substance. If a ray of
light travelling through one substance enters another (more or less dense) substance, the direction of the ray
changes.
If the angle of incidence (the angle the ray makes with the line perpendicular to the interface between
the two substances) is less than the critical angle, the ray refracts and moves closer to the surface. If the
angle of incidence is equal to the critical angle, the light bends along the interface. If the angle is
greater than the critical angle, the ray reflects (makes a turn) and travels again in the denser substance.
Note that the critical angle is a property of the substance, and its value differs from one substance
to another.
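The critical angle follows from Snell's law: sin θc = n2/n1 when light moves from a denser medium (index n1) to a less dense one (index n2), as from a fiber core into its cladding. A sketch with illustrative refractive indices (the exact values are assumptions, chosen near typical silica-fiber figures):

```python
import math

# Critical angle for total internal reflection, from Snell's law:
# sin(theta_c) = n_cladding / n_core, valid only when the core is denser.
def critical_angle_deg(n_core: float, n_cladding: float) -> float:
    if n_cladding >= n_core:
        raise ValueError("total internal reflection needs n_core > n_cladding")
    return math.degrees(math.asin(n_cladding / n_core))

print(round(critical_angle_deg(1.48, 1.46), 1))   # 80.6
```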
Optical fibers use reflection to guide light through a channel. A glass or plastic core is surrounded by
a cladding of less dense glass or plastic. The difference in density of the two materials must be such
that a beam of light moving through the core is reflected off the cladding instead of being refracted into
it.
Propagation Modes
Current technology supports two modes (multimode and single mode) for propagating light along
optical channels, each requiring fiber with different physical characteristics. Multimode can be
implemented in two forms: step-index or graded-index.
Multimode
Multimode is so named because multiple beams from a light source move through the core in different
paths. How these beams move within the cable depends on the structure of the core.
In multimode step-index fiber, the density of the core remains constant from the center to the edges.
A beam of light moves through this constant density in a straight line until it reaches the interface of the
core and the cladding. At the interface, there is an abrupt change to a lower density that alters the angle
of the beam’s motion. The term step-index refers to the suddenness of this change.
A second type of fiber, called multimode graded-index fiber, decreases this distortion of the signal
through the cable. The word index here refers to the index of refraction, which is related to density. A
graded-index fiber, therefore, is one with varying densities: density is highest at the center of the core
and decreases gradually to its lowest at the edge.
Single Mode
Single-mode uses step-index fiber and a highly focused source of light that limits beams to a small
range of angles, all close to the horizontal. The single-mode fiber itself is manufactured with a much
smaller diameter than that of multimode fiber, and with substantially lower density (index of
refraction). The decrease in density results in a critical angle that is close enough to 90° to make the
propagation of beams almost horizontal. In this case, propagation of different beams is almost identical
and delays are negligible. All the beams arrive at the destination “together” and can be recombined
with little distortion to the signal.
Fiber Sizes
Optical fibers are defined by the ratio of the diameter of their core to the diameter of their cladding,
both expressed in micrometers.
Cable Composition
The outer jacket is made of either PVC or Teflon. Inside the jacket are Kevlar strands to strengthen the
cable. Kevlar is a strong material used in the fabrication of bulletproof vests.
Connectors
The subscriber channel (SC) connector is used for cable TV. It uses a push/pull locking system. The
straight-tip (ST) connector is used for connecting cable to networking devices. It uses a bayonet
locking system and is more reliable than SC. MT-RJ is a connector that is the same size as RJ45.
Performance
The plot of attenuation versus wavelength in Figure 7.16 shows a very interesting phenomenon in
fiber-optic cable: attenuation is flatter than in the case of twisted-pair and coaxial cable. The
performance is such that we need fewer repeaters (about one-tenth as many) when we use fiber-optic
cable.
Applications
Fiber-optic cable is often found in backbone networks because its wide bandwidth is cost-effective.
Today, with wavelength-division multiplexing (WDM), we can transfer data at a rate of 1600 Gbps.
The SONET network that we discuss in Chapter 9 provides such a backbone.
Some cable TV companies use a combination of optical fiber and coaxial cable, thus creating a hybrid
network. Optical fiber provides the backbone structure while coaxial cable provides the connection to
the user premises. This is a cost-effective configuration since the narrow bandwidth requirement at
the user end does not justify the use of optical fiber.
Local-area networks such as the 100Base-FX (Fast Ethernet) and 1000Base-X networks also use fiber-optic
cable.
Advantages and Disadvantages of Optical Fiber
Advantages
Fiber-optic cable has several advantages over metallic cable (twisted- pair or coaxial).
Higher Bandwidth
Fiber-optic cable can support dramatically higher bandwidths (and hence data rates) than either
twisted-pair or coaxial cable. Currently, data rates and bandwidth utilization over fiber-optic cable are
limited not by the medium but by the signal generation and reception technology available.
Less Signal Attenuation
Fiber-optic transmission distance is significantly greater than that of other guided media. A signal can
run for 50 km without requiring regeneration. We need repeaters every 5 km for coaxial or twisted-pair
cable.
Immunity to Electromagnetic Interference
Electromagnetic noise cannot affect fiber-optic cables.
Resistance to Corrosive Materials
Glass is more resistant to corrosive materials than copper.
Light Weight
Fiber-optic cables are much lighter than copper cables.
More Immune to Tapping
Fiber-optic cables are more immune to tapping than copper cables. Copper cables create antenna
effects that can easily be tapped.
Disadvantages
There are some disadvantages in the use of optical fiber.
Installation/ maintenance
Fiber-optic cable is a relatively new technology. Its installation and maintenance require expertise
that is not yet available everywhere.
Unidirectional
Propagation of light is unidirectional. If we need bidirectional communication, two fibers are needed.
Cost
The cable and the interfaces are relatively more expensive than those of other guided media. If the
demand for bandwidth is not high, the use of optical fiber often cannot be justified.
Unguided Media: Wireless
Unguided media transport electromagnetic waves without using a physical conductor. This type of
communication is often referred to as wireless communication. Signals are normally broadcast through
free space and thus are available to anyone who has a device capable of receiving them.
Radio Waves
Although there is no clear-cut demarcation between radio waves and microwaves, electromagnetic
waves ranging in frequency between 3 kHz and 1 GHz are normally called radio waves; waves
ranging in frequency between 1 and 300 GHz are called microwaves. However, the behavior of the
waves, rather than the frequencies, is a better criterion for classification.
Radio waves, for the most part, are omnidirectional. When an antenna transmits radio waves, they are
propagated in all directions. This means that the sending and receiving antennas do not have to be
aligned: a sending antenna transmits waves that can be received by any receiving antenna. The
omnidirectional property has a disadvantage, too. The radio waves transmitted by one antenna are
susceptible to interference from another antenna that may send signals using the same frequency or
band.
Radio waves, particularly those waves that propagate in the sky mode, can travel long distances. This
makes radio waves a good candidate for long-distance broadcasting such as AM radio.
Radio waves, particularly those of low and medium frequencies, can penetrate walls. This
characteristic can be both an advantage and a disadvantage. It is an advantage because, for example, an
AM radio can receive signals inside a building. It is a disadvantage because we cannot isolate a
communication to just inside or outside a building. The radio wave band is relatively narrow, just
under 1 GHz, compared to the microwave band. When this band is divided into subbands, the subbands
are also narrow, leading to a low data rate for digital communications.
Almost the entire band is regulated by authorities (e.g., the FCC in the United States). Using any part
of the band requires permission from the authorities.
Microwaves
Electromagnetic waves having frequencies between 1 and 300 GHz are called microwaves.
Microwaves are unidirectional. When an antenna transmits microwaves, they can be narrowly
focused. This means that the sending and receiving antennas need to be aligned. The unidirectional
property has an obvious advantage. A pair of antennas can be aligned without interfering with another
pair of aligned antennas.
Microwave propagation is line-of-sight. Since the towers with the mounted antennas need to be in
direct sight of each other, towers that are far apart need to be very tall. The curvature of the earth, as
well as other blocking obstacles, does not allow two short towers to communicate by using microwaves.
Repeaters are often needed for long-distance communication.
Very high-frequency microwaves cannot penetrate walls. This characteristic can be a disadvantage if
receivers are inside buildings.
The microwave band is relatively wide, almost 299 GHz. Therefore, wider subbands can be assigned,
and a high data rate is possible.
Computer Application in Pharmacy Computer Networks
Use of certain portions of the band requires permission from authorities.
Unidirectional Antenna
Microwaves need unidirectional antennas that send out signals in one direction. Two types of antennas
are used for microwave communications: the parabolic dish and the horn.
A parabolic dish antenna is based on the geometry of a parabola. Every line parallel to the line of
symmetry (line of sight) reflects off the curve at angles such that all the lines intersect in a common
point called the focus. The parabolic dish works as a funnel, catching a wide range of waves and
directing them to a common point. In this way, more of the signal is recovered than would be possible
with a single-point receiver.
Outgoing transmissions are broadcast through a horn aimed at the dish. The microwaves hit the dish
and are deflected outward in a reversal of the receipt path.
A horn antenna looks like a gigantic scoop. Outgoing transmissions are broadcast up a stem
(resembling a handle) and deflected outward in a series of narrow parallel beams by the curved head.
Received transmissions are collected by the scooped shape of the horn, in a manner similar to the
parabolic dish, and are deflected down into the stem.
Applications
Microwaves, due to their unidirectional properties, are very useful when unicasting (one-to-one)
communication is needed between the sender and the receiver. They are used in cellular phones
(Chapter 17), satellite networks (Chapter 17), and wireless LANs (Chapter 15).
Microwaves are used for unicast communication such as cellular telephones, satellite networks, and
wireless LANs.
Infrared
Infrared waves, with frequencies from 300 GHz to 400 THz (wavelengths from 1 mm to 770 nm), can
be used for short-range communication. Infrared waves, having high frequencies, cannot penetrate
walls. This advantageous characteristic prevents interference between one system and another; a short-
range communication system in one room cannot be affected by another system in the next room.
When we use our infrared remote control, we do not interfere with the use of the remote by our
neighbors. However, this same characteristic makes infrared signals useless for long-range
communication. In addition, we cannot use infrared waves outside a building because the sun’s rays
contain infrared waves that can interfere with the communication.
Applications
The infrared band, almost 400 THz, has an excellent potential for data transmission. Such a wide
bandwidth can be used to transmit digital data with a very high data rate. The Infrared Data
Association (IrDA), an association for sponsoring the use of infrared waves, has established standards
for using these signals for communication between devices such as keyboards, mice, PCs, and printers.
For example, some manufacturers provide a special port called the IrDA port that allows a wireless
keyboard to communicate with a PC.
Computer Application in Pharmacy Computer Network & Security
Security Attacks
A useful means of classifying security attacks is in terms of passive attacks and active attacks. A
passive attack attempts to learn or make use of information from the system but does not affect system
resources. An active attack attempts to alter system resources or affect their operation.
Passive Attacks
Passive attacks are in the nature of eavesdropping on, or monitoring of, transmissions. The goal of the
opponent is to obtain information that is being transmitted. Two types of passive attacks are release of
message contents and traffic analysis.
The release of message contents is easily understood. A telephone conversation, an electronic mail
message, and a transferred file may contain sensitive or confidential information. We would like to
prevent an opponent from learning the contents of these transmissions.
Passive attacks are very difficult to detect because they do not involve any alteration of the data.
Typically, the message traffic is sent and received in an apparently normal fashion and neither the
sender nor receiver is aware that a third party has read the messages or observed the traffic pattern.
However, it is feasible to prevent the success of these attacks, usually by means of encryption. Thus,
the emphasis in dealing with passive attacks is on prevention rather than detection.
Active Attacks
Active attacks involve some modification of the data stream or the creation of a false stream and can be
subdivided into four categories: masquerade, replay, modification of messages, and denial of service.
Masquerade
A masquerade takes place when one entity pretends to be a different entity. A masquerade attack
usually includes one of the other forms of active attack. For example, authentication sequences can be
captured and replayed after a valid authentication sequence has taken place, thus enabling an
authorized entity with few privileges to obtain extra privileges by impersonating an entity that has
those privileges.
Replay
Replay involves the passive capture of a data unit and its subsequent retransmission to produce an
unauthorized effect.
Modification
Modification of messages simply means that some portion of a legitimate message is altered, or that
messages are delayed or reordered, to produce an unauthorized effect. For example, a message
meaning “Allow John Smith to read confidential file accounts” is modified to mean “Allow Fred
Brown to read confidential file accounts.”
Denial of Service
The denial of service prevents or inhibits the normal use or management of communications facilities.
This attack may have a specific target; for example, an entity may suppress all messages directed to a
particular destination (e.g. the security audit service). Another form of service denial is the disruption
of an entire network, either by disabling the network or by overloading it with messages so as to
degrade performance.
Active attacks present the opposite characteristics of passive attacks. Whereas passive attacks are
difficult to detect, measures are available to prevent their success. On the other hand, it is quite difficult
to prevent active attacks absolutely, because of the wide variety of potential physical, software, and
network vulnerabilities. Instead, the goal is to detect active attacks and to recover from any disruption
or delays caused by them. If the detection has a deterrent effect, it may also contribute to prevention.
X.800 divides security services into five categories and fourteen specific services.
Authentication
The authentication service is concerned with assuring that a communication is authentic. In the case of
a single message, such as a warning or alarm signal, the function of the authentication service is to
assure the recipient that the message is from the source that it claims to be from. In the case of an
ongoing interaction, such as the connection of a terminal to a host, two aspects are involved. First, at
the time of connection initiation, the service assures that the two entities are authentic, that is, that each
is the entity that it claims to be. Second, the service must assure that the connection is not interfered
with in such a way that a third party can masquerade as one of the two legitimate parties for the
purposes of unauthorized transmission or reception.
Two specific authentication services are defined in X.800:
• Peer Entity Authentication: Provides for the corroboration of the identity of a peer entity in an
association. It is provided for use at the establishment of, or at times during the data transfer phase
of, a connection. It attempts to provide confidence that an entity is not performing either a
masquerade or an unauthorized replay of a previous connection.
• Data Origin Authentication: Provides for the corroboration of the source of a data unit. It does
not provide protection against the duplication or modification of data units. This type of service
supports applications like electronic mail where there are no prior interactions between the
communicating entities.
Access Control
In the context of network security, access control is the ability to limit and control the access to host
systems and applications via communications links. To achieve this, each entity trying to gain access
must first be identified, or authenticated, so that access rights can be tailored to the individual.
Data Confidentiality
Confidentiality is the protection of transmitted data from passive attacks. With respect to the content of
a data transmission, several levels of protection can be identified. The broadest service protects all user
data transmitted between two users over a period of time. For example, when a TCP connection is set
up between two systems, this broad protection prevents the release of any user data transmitted over
the TCP connection. Narrower forms of this service can also be defined, including the protection of a
single message or even specific fields within a message. These refinements are less useful than the
broad approach and may even be more complex and expensive to implement.
The other aspect of confidentiality is the protection of traffic flow from analysis. This requires that an
attacker not be able to observe the source and destination, frequency, length, or other characteristics of
the traffic on a communications facility.
Data Integrity
As with confidentiality, integrity can apply to a stream of messages, a single message, or selected
fields within a message. Again, the most useful and straightforward approach is total stream protection.
A connection-oriented integrity service, one that deals with a stream of messages, assures that
messages are received as sent, with no duplication, insertion, modification, reordering, or replays. The
destruction of data is also covered under this service. Thus, the connection-oriented integrity service
addresses both message stream modification and denial of service. On the other hand, a connectionless
integrity service, one that deals with individual messages without regard to any larger context,
generally provides protection against message modification only.
NonRepudiation
Nonrepudiation prevents either sender or receiver from denying a transmitted message. Thus, when a
message is sent, the receiver can prove that the alleged sender in fact sent the message. Similarly, when
a message is received, the sender can prove that the alleged receiver in fact received the message.
Summary of the X.800 security services:
Authentication: The assurance that the communicating entity is the one that it claims to be.
• Peer Entity Authentication: Used in association with a logical connection to provide confidence in
the identity of the entities connected.
• Data Origin Authentication: In a connectionless transfer, provides assurance that the source of
received data is as claimed.
Access Control: The prevention of unauthorized use of a resource (i.e., this service controls who can
have access to a resource, under what conditions access can occur, and what those accessing the
resource are allowed to do).
Data Confidentiality: The protection of data from unauthorized disclosure.
• Connection Confidentiality: The protection of all user data on a connection.
• Connectionless Confidentiality: The protection of all user data in a single data block.
• Selective-Field Confidentiality: The confidentiality of selected fields within the user data on a
connection or in a single data block.
• Traffic Flow Confidentiality: The protection of the information that might be derived from
observation of traffic flows.
Data Integrity: The assurance that data received are exactly as sent by an authorized entity (i.e.,
contain no modification, insertion, deletion, or replay).
• Connection Integrity with Recovery: Provides for the integrity of all user data on a connection and
detects any modification, insertion, deletion, or replay of any data within an entire data sequence,
with recovery attempted.
• Connection Integrity without Recovery: As above, but provides only detection without recovery.
• Selective-Field Connection Integrity: Provides for the integrity of selected fields within the user
data of a data block transferred over a connection and takes the form of determination of whether
the selected fields have been modified, inserted, deleted, or replayed.
• Connectionless Integrity: Provides for the integrity of a single connectionless data block and may
take the form of detection of data modification. Additionally, a limited form of replay detection
may be provided.
• Selective-Field Connectionless Integrity: Provides for the integrity of selected fields within a
single connectionless data block; takes the form of determination of whether the selected fields
have been modified.
Non-Repudiation: Provides protection against denial by one of the entities involved in a
communication of having participated in all or part of the communication.
• Non-repudiation, Origin: Proof that the message was sent by the specified party.
• Non-repudiation, Destination: Proof that the message was received by the specified party.
Cryptography
Cryptography can be defined as the conversion of data into a scrambled code that can be deciphered
and sent across a public or private network. Cryptography uses two main forms of encryption:
symmetric and asymmetric. Symmetric algorithms use the same key for encryption as they do for
decryption. Other names for this type of encryption are secret-key, shared-key, and private-key
encryption. The encryption key can be loosely related to the decryption key; it does not necessarily
need to be an exact copy.

Symmetric Cryptography
Cryptography in which a single key is used for both encryption and decryption. Symmetric
cryptography is susceptible to known-plaintext attacks and linear cryptanalysis, meaning that such
ciphers can be broken and are at times simple to decode. With careful design of the coding and
functions of the cryptographic process, these threats can be greatly reduced.
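The defining property of symmetric encryption, one shared key for both directions, can be sketched with a toy XOR cipher (an illustrative example only, not a secure algorithm; the key and message below are made up):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; applying the same
    # function twice with the same key restores the original data.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"shared-key"                      # hypothetical shared secret
ciphertext = xor_cipher(b"attack at dawn", secret_key)
plaintext = xor_cipher(ciphertext, secret_key)  # the same key decrypts
```

The symmetry is visible in the code: encryption and decryption are literally the same function with the same key.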

Asymmetric Cryptography
Asymmetric cryptography uses different encryption keys for encryption and decryption. In this case an
end user on a network, public or private, has a pair of keys; one for encryption and one for decryption.
These keys are labeled or known as a public and a private key; in this instance the private key cannot
be derived from the public key.
The asymmetrical cryptography method has been proven to be secure against computationally limited
intruders. The security is a mathematical definition based upon the application of said encryption.
Essentially, asymmetric encryption is as good as its applied use; this is defined by the method in which
the data is encrypted and for what use. The most common application of asymmetric encryption is in
sending messages, where the sender encrypts the message with the receiver's public key and the
receiver decodes it with the matching private key.
Techniques
Substitution Ciphers:
“A substitution cipher replaces one symbol with another”
A substitution cipher replaces one symbol with another. If the symbols in the plaintext are alphabetic
characters, we replace one character with another. For example, we can replace letter A with letter D,
and letter T with letter Z. If the symbols are digits (0 to 9), we can replace 3 with 7, and 2 with 6.
Substitution ciphers can be categorized as either monoalphabetic ciphers or polyalphabetic ciphers.
Monoalphabetic Ciphers:
“In monoalphabetic substitution, the relationship between a symbol in the plaintext to a symbol in the
cipher text is always one-to-one”
In monoalphabetic substitution, a character (or a symbol) in the plaintext is always changed to the
same character (or symbol) in the cipher text regardless of its position in the text. For example, if the
algorithm says that letter A in the plaintext is changed to letter D, every letter A is changed to letter D.
In other words, the relationship between letters in the plaintext and the cipher text is one-to-one.
Monoalphabetic ciphers are easy to break because they reflect the frequency data of the original
alphabet. A countermeasure is to provide multiple substitutes, known as homophones, for a single
letter. For example, the letter e could be assigned a number of different cipher symbols, such as 16, 74,
35, and 21, with each homophone used in rotation or randomly. If the number of symbols assigned to
each letter is proportional to the relative frequency of that letter, then single-letter frequency
information is completely obliterated. The great mathematician Carl Friedrich Gauss believed that he
had devised an unbreakable cipher using homophones. However, even with homophones, each element
of plaintext affects only one element of cipher text, and multiple-letter patterns (e.g., digram
frequencies) still survive in the cipher text, making cryptanalysis relatively straightforward.
Caesar Cipher
The earliest known use of a substitution cipher, and the simplest, was by Julius Caesar. The Caesar
cipher involves replacing each letter of the alphabet with the letter standing three places further down
the alphabet. For example,
Plain Text: MEET ME AFTER THE TOGA PARTY
Cipher Text: PHHW PH DIWHU WKH WRJD SDUWB
Note that the alphabet is wrapped around, so that the letter following Z is A. We can define the
transformation by listing all possibilities, as follows:
Plain: A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Cipher: D E F G H I J K L M N O P Q R S T U V W X Y Z A B C
Let us assign a numerical equivalent to each letter:
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
Then the algorithm can be expressed as follows. For each plaintext letter p, substitute the ciphertext
letter C:
C = E (3, p) = (p + 3) mod 26
A shift may be of any amount, so that the general Caesar algorithm is
C = E (k, p) = (p + k) mod 26
Where k takes on a value in the range 1 to 25. The decryption algorithm is simply
p = D(k, C) = (C − k) mod 26
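The two formulas above can be sketched in a few lines of Python (a minimal illustration; the function names are ours, and non-letter characters are simply dropped):

```python
def caesar_encrypt(plaintext: str, k: int) -> str:
    # C = E(k, p) = (p + k) mod 26, with A = 0, ..., Z = 25
    return ''.join(chr((ord(c) - 65 + k) % 26 + 65)
                   for c in plaintext.upper() if c.isalpha())

def caesar_decrypt(ciphertext: str, k: int) -> str:
    # p = D(k, C) = (C - k) mod 26
    return ''.join(chr((ord(c) - 65 - k) % 26 + 65)
                   for c in ciphertext.upper() if c.isalpha())

# Caesar's original shift of k = 3 reproduces the text's example:
print(caesar_encrypt("MEET ME AFTER THE TOGA PARTY", 3))
# PHHWPHDIWHUWKHWRJDSDUWB
```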
If it is known that a given ciphertext is a Caesar cipher, then a brute-force cryptanalysis is easily
performed: Simply try all the 25 possible keys. Figure 2.3 shows the results of applying this strategy to
the example ciphertext. In this case, the plaintext leaps out as occupying the third line.
Three important characteristics of this problem enabled us to use a brute-force cryptanalysis:
1. The encryption and decryption algorithm are known.
2. There are only 25 keys to try.
3. The language of the plaintext is known and easily recognizable.
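The brute-force attack just described can be sketched as follows (an illustrative script; it simply tries all 25 keys and leaves recognizing the English plaintext to the reader):

```python
def brute_force(ciphertext: str) -> dict:
    # Try every possible key; characteristic 2 above says there
    # are only 25 of them.
    letters = [c for c in ciphertext if c.isalpha()]
    candidates = {}
    for k in range(1, 26):
        candidates[k] = ''.join(chr((ord(c) - 65 - k) % 26 + 65)
                                for c in letters)
    return candidates

for k, text in brute_force("PHHW PH DIWHU WKH WRJD SDUWB").items():
    print(k, text)   # the k = 3 line reads as English
```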
With only 25 possible keys, the Caesar cipher is far from secure. A dramatic increase in the key space
can be achieved by allowing an arbitrary substitution. Recall the assignment for the Caesar cipher:
Plain: A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Cipher: D E F G H I J K L M N O P Q R S T U V W X Y Z A B C
If, instead, the “cipher” line can be any permutation of the 26 alphabetic characters, then there are 26!,
or more than 4 × 10^26, possible keys. This is 10 orders of magnitude greater than the key space for
DES and would seem to eliminate brute-force techniques for cryptanalysis. Such an approach is
referred to as a monoalphabetic substitution cipher, because a single cipher alphabet (mapping from
plain alphabet to cipher alphabet) is used per message.
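Such an arbitrary-substitution cipher can be sketched by drawing the key as a random permutation of the alphabet (an illustrative sketch; the seed value is arbitrary, chosen only to make the example repeatable):

```python
import random
import string

def make_monoalphabetic_key(seed=None) -> dict:
    # Any permutation of the 26 letters is a valid key,
    # giving the 26! possible keys mentioned above.
    rng = random.Random(seed)
    letters = list(string.ascii_uppercase)
    shuffled = letters[:]
    rng.shuffle(shuffled)
    return dict(zip(letters, shuffled))

key = make_monoalphabetic_key(seed=42)        # seed chosen arbitrarily
inverse = {v: k for k, v in key.items()}      # decryption table
cipher = ''.join(key[c] for c in "MEETME")
plain = ''.join(inverse[c] for c in cipher)
```

Because the mapping is one-to-one, repeated plaintext letters still map to the same ciphertext letter, which is exactly the frequency weakness discussed above.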
Polyalphabetic Ciphers
Another way to improve on the simple monoalphabetic technique is to use different monoalphabetic
substitutions as one proceeds through the plaintext message. The general name for this approach is
polyalphabetic substitution cipher. All these techniques have the following features in common:
1. A set of related monoalphabetic substitution rules is used.
2. A key determines which particular rule is chosen for a given transformation.
In polyalphabetic substitution each occurrence of a character may have a different substitute. The
relationship between a character in the plaintext and a character in the cipher text is one-to-many. For
example, “A” could be enciphered as “D” at the beginning of the text, but as “N” in the middle.
Polyalphabetic ciphers have the advantage of hiding the letter frequencies of the underlying language.
Eve cannot use single-letter frequency statistics to break the cipher text.
To create a polyalphabetic cipher, we need to make each cipher text character dependent on both the
corresponding plaintext character and the position of the plaintext character in the message. This
implies that our key should be a stream of subkeys, in which each subkey depends somehow on the
position of the plaintext character that uses the subkey for encipherment.
Playfair Ciphers
The best-known multiple-letter encryption cipher is the Playfair, which treats digrams in the plaintext
as single units and translates these units into ciphertext digrams.
The Playfair algorithm is based on the use of a 5 × 5 matrix of letters constructed using a keyword.
Here is an example, solved by Lord Peter Wimsey in Dorothy L. Sayers's Have His Carcase.
M O N A R
C H Y B D
E F G I/J K
L P Q S T
U V W X Z
In this case, the keyword is monarchy. The matrix is constructed by filling in the letters of the keyword
(minus duplicates) from left to right and from top to bottom and then filling in the remainder of the
matrix with remaining letters in alphabetic order. The letters I and J count as one letter. Plaintext is
encrypted two letters at a time, according to the following rules:
1. Repeating plaintext letters that are in the same pair are separated with a filler letter, such as x, so
that balloon would be treated as BA LX LO ON.
2. Two plaintext letters that fall in the same row of the matrix are each replaced by the letter to the
right, with the first element of the row circularly following the last. For example, AR is encrypted
as RM.
3. Two plaintext letters that fall in the same column are each replaced by the letter beneath, with the
top element of the column circularly following the last. For example, MU is encrypted as CM.
4. Otherwise, each plaintext letter in a pair is replaced by the letter that lies in its own row and the
column occupied by the other plaintext letter. Thus, HS becomes BP and EA becomes IM (or JM,
as the encipherer wishes).
The Playfair cipher meets our criteria for a polyalphabetic cipher. The key is a stream of subkeys in
which the subkeys are created two at a time. In the Playfair cipher, the key stream and the cipher stream
are the same. This means that the above-mentioned rules can be thought of as the rules for creating the
key stream. The encryption algorithm takes a pair of characters from the plaintext and creates a pair of
subkeys by following the above-mentioned rules. We can say that the key stream depends on the
position of the character in the plaintext. Position dependency has a different interpretation here: the
subkey for each plaintext character depends on the next or previous neighbor. Looking at the Playfair
cipher in this way, the ciphertext is actually the key stream.
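The matrix construction and rules 2 to 4 above can be sketched as follows (a partial illustration; it encrypts one digram at a time and omits the filler-letter preprocessing of rule 1):

```python
def build_matrix(keyword: str = "monarchy"):
    # 5x5 matrix: keyword letters (minus duplicates) first, then the
    # rest of the alphabet; I and J share one cell, so J is omitted.
    cells = []
    for ch in keyword + "abcdefghiklmnopqrstuvwxyz":
        ch = "i" if ch == "j" else ch
        if ch not in cells:
            cells.append(ch)
    return [cells[r * 5:r * 5 + 5] for r in range(5)]

def locate(matrix, ch):
    ch = "i" if ch == "j" else ch
    for r, row in enumerate(matrix):
        if ch in row:
            return r, row.index(ch)

def encrypt_pair(matrix, a, b):
    ra, ca = locate(matrix, a)
    rb, cb = locate(matrix, b)
    if ra == rb:                                  # rule 2: same row
        pair = matrix[ra][(ca + 1) % 5] + matrix[rb][(cb + 1) % 5]
    elif ca == cb:                                # rule 3: same column
        pair = matrix[(ra + 1) % 5][ca] + matrix[(rb + 1) % 5][cb]
    else:                                         # rule 4: rectangle
        pair = matrix[ra][cb] + matrix[rb][ca]
    return pair.upper()

m = build_matrix("monarchy")
print(encrypt_pair(m, "a", "r"))   # RM, as in rule 2's example
print(encrypt_pair(m, "m", "u"))   # CM, as in rule 3's example
print(encrypt_pair(m, "h", "s"))   # BP, as in rule 4's example
```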
Vigenere Cipher
One interesting kind of polyalphabetic cipher was designed by Blaise de Vigenere, a sixteenth-century
French mathematician. A Vigenere cipher uses a different strategy to create the key stream. The key
stream is a repetition of an initial secret key stream of length m, where 1 ≤ m ≤ 26. The cipher can be
described as follows, where (k1, k2, …, km) is the initial secret key agreed to by Alice and Bob:
P = P1P2P3…   C = C1C2C3…
K = [(k1, k2, …, km), (k1, k2, …, km), …]
Encryption: Ci = (Pi + ki) mod 26
Decryption: Pi = (Ci − ki) mod 26
One important difference between the Vigenere cipher and the other two polyalphabetic ciphers we
have looked at is that the Vigenere key stream does not depend on the plaintext characters; it depends
only on the position of the character in the plaintext. In other words, the key stream can be created
without knowing what the plaintext is.
To aid in understanding the scheme and to aid in its use, a matrix known as the Vigenere tableau is
constructed. Each of the 26 ciphers is laid out horizontally, with the key letter for each cipher to its left.
A normal alphabet for the plaintext runs across the top. The process of encryption is simple: given a
key letter x and a plaintext letter y, the ciphertext letter is at the intersection of the row labeled x and
the column labeled y. For example, with key letter d and plaintext letter s, the ciphertext letter is V.
To encrypt a message, a key is needed that is as long as the message. Usually, the key is a repeating
keyword. For example, if the keyword is deceptive, the message “we are discovered save yourself” is
encrypted as follows:
Key: D E C E P T I V E D E C E P T I V E D E C E P T I V E
Plaintext: W E A R E D I S C O V E R E D S A V E Y O U R S E L F
Ciphertext: Z I C V T W Q N G R Z G V T W A V Z H C Q Y G L M G J
Decryption is equally simple. The key letter again identifies the row. The position of the ciphertext
letter in that row determines the column, and the plaintext letter is at the top of that column.
The strength of this cipher is that there are multiple ciphertext letters for each plaintext letter, one for
each unique letter of the keyword. Thus, the letter frequency information is obscured. However, not all
knowledge of the plaintext structure is lost. An improvement is achieved over the Playfair cipher, but
considerable frequency information remains.
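A minimal sketch of the Vigenere cipher that reproduces the deceptive example above (non-letters are dropped, and the function name is ours):

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    # Ci = (Pi + ki) mod 26; the key stream repeats the keyword,
    # so the subkey depends only on the character's position.
    sign = -1 if decrypt else 1
    letters = [c for c in text.upper() if c.isalpha()]
    return ''.join(
        chr((ord(c) - 65 + sign * (ord(key.upper()[i % len(key)]) - 65)) % 26 + 65)
        for i, c in enumerate(letters))

ct = vigenere("we are discovered save yourself", "deceptive")
print(ct)                                # ZICVTWQNGRZGVTWAVZHCQYGLMGJ
print(vigenere(ct, "deceptive", True))   # WEAREDISCOVEREDSAVEYOURSELF
```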
One-Time Pad
If a truly random key as long as the message itself is used, and each key is used to encrypt only a
single message, the resulting scheme is known as the one-time pad. In theory, we need look no further
for a cipher. The one-time pad offers complete security but, in practice, has two fundamental
difficulties:
1. There is the practical problem of making large quantities of random keys. Any heavily used
system might require millions of random characters on a regular basis. Supplying truly random
characters in this volume is a significant task.
2. Even more daunting is the problem of key distribution and protection. For every message to be
sent, a key of equal length is needed by both sender and receiver. Thus, a mammoth key
distribution problem exists.
Because of these difficulties, the one-time pad is of limited utility, and is useful primarily for low-
bandwidth channels requiring very high security.
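A one-time pad can be sketched using Python's secrets module for the random key (illustrative only; real use would also require secure key distribution, which is exactly the second difficulty noted above):

```python
import secrets

def one_time_pad(message: bytes):
    # The key must be truly random, as long as the message,
    # and used only once.
    key = secrets.token_bytes(len(message))
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return ciphertext, key

ciphertext, key = one_time_pad(b"attack at dawn")
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
```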
Transposition Ciphers
A transposition cipher does not substitute one symbol for another; instead, it changes the location of
the symbols. A symbol in the first position of the plaintext may appear in the tenth position of the
ciphertext; a symbol in the eighth position of the plaintext may appear in the first position of the
ciphertext. In other words, a transposition cipher reorders (transposes) the symbols.
Keyless Transposition Ciphers
Simple transposition ciphers, which were used in the past, are keyless. There are two methods for
permutation of characters. In the first method, the text is written into a table column by column and
then transmitted row by row. In the second method, the text is written into the table row by row and
then transmitted column by column.
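The second keyless method, writing the text into the table row by row and transmitting it column by column, can be sketched as follows (the sample text, column count, and padding character are our own choices for illustration):

```python
import math

def columnar(text: str, cols: int = 4, pad: str = "x") -> str:
    # Write the text into a table row by row, then read
    # (transmit) it column by column.
    rows = math.ceil(len(text) / cols)
    grid = text.ljust(rows * cols, pad)
    table = [grid[r * cols:(r + 1) * cols] for r in range(rows)]
    return ''.join(table[r][c] for c in range(cols) for r in range(rows))

print(columnar("meetmeatthepark"))   # mmtaeehreaekttpx
```

Note that no key is involved: every symbol keeps its identity and only its position changes.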
Cryptanalysis
The autokey cipher definitely hides the single-letter frequency statistics of the plaintext. However, it is
still as vulnerable as to the brute-force attack as the additive cipher. The first subkey can be only one of
the 25 values (1 to 25). We need polyalphabetic ciphers that not only hide the characteristics of the
language but also have large key domains.
Computer Virus
A computer virus is a small software program that spreads from one computer to another computer and
that interferes with computer operation. A computer virus may corrupt or delete data on a computer,
use an e-mail program to spread the virus to other computers, or even delete everything on the hard
disk.
Computer viruses are most easily spread by attachments in e-mail messages or by instant messages.
Therefore, you should never open an e-mail attachment unless you know who sent the message or you
are expecting the attachment. Computer viruses can be disguised as
attachments of funny images, greeting cards, or audio and video files. Computer viruses also spread by
using downloads on the Internet. Computer viruses can be hidden in pirated software or in other files or
programs that you may download.
Symptoms of a computer virus
If you suspect or confirm that your computer is infected with a computer virus, obtain current
antivirus software. The following are some primary indicators that a computer may be infected:
• The computer runs slower than usual.
• The computer stops responding, or it locks up frequently.
• The computer crashes, and then it restarts every few minutes.
• The computer restarts on its own. Additionally, the computer does not run as usual.
• Applications on the computer do not work correctly.
• Disks or disk drives are inaccessible.
• You cannot print items correctly.
• You see unusual error messages.
• You see distorted menus and dialog boxes.
• There is a double extension on an attachment that you recently opened, such as a .jpg, .vbs, .gif, or
.exe extension.
• An antivirus program is disabled for no reason. Additionally, the antivirus program cannot be
restarted.
• An antivirus program cannot be installed on the computer, or the antivirus program will not run.
• New icons appear on the desktop that you did not put there, or the icons are not associated with
any recently installed programs.
• Strange sounds or music play from the speakers unexpectedly.
• A program disappears from the computer even though you did not intentionally remove the
program.
What Viruses Don't Do!
• Computer viruses cannot infect write-protected disks or infect written documents. Viruses do not
infect compressed files, unless the file was infected prior to the compression. [Compressed files
are programs or files with their common characters, etc. removed to take up less space on a disk.]
Viruses do not infect computer hardware, such as monitors or computer chips; they only infect
software.
• In addition, Macintosh viruses do not infect DOS/Windows computer software and vice versa. For
example, the Melissa virus incident of 1999 and the ILOVEYOU virus of 2000 worked only on
Windows-based machines and could not operate on Macintosh computers.
• One further note: viruses do not necessarily let you know they are present in your machine, even
after being destructive. If your computer is not operating properly, it is a good practice to check
for viruses with a current "virus checking" program.
How do Viruses Spread?
Viruses begin to work and spread when you start up the program or application in which the virus is
present. For example, a word processing program that contains a virus will place the virus in memory
every time the word processing program is run.
Once in memory, one of a number of things can happen. The virus may be programmed to attach to
other applications, disks, or folders. It may infect a network if given the opportunity.
Viruses behave in different ways. Some viruses stay active only when the application they are part of
is running. Turn the computer off and the virus is inactive. Other viruses will operate every time you
turn on your computer after infecting a system file or network.

How to Prevent a Virus Invasion!


1. Load only software from original disks or CDs. Pirated or copied software is always a risk for a
virus.
2. Execute only programs whose origin you know and trust. Programs sent by email should
always be treated as suspicious.
3. Computer uploads and "system configuration" changes should always be performed by the person
who is responsible for the computer. Password protection should be employed.
4. Check all shareware and free programs downloaded from on-line services with a virus checking
program.
5. Purchase a virus program that runs as you boot or work on your computer. Update it frequently.
6. On the computer, turn on the firewall.
7. Keep the computer operating system up to date.
8. Use updated antivirus software on the computer.
9. Use updated antispyware software on the computer.

Symptoms that may be the result of ordinary Windows functions


A computer virus infection may cause the following problems. However, these symptoms may also be
the result of ordinary Windows functions:


• Windows does not start even though you have not made any system changes or even though you
have not installed or removed any programs.
• There is frequent modem activity. If you have an external modem, you may notice the lights
blinking frequently when the modem is not being used. You may be unknowingly supplying
pirated software.
• Windows does not start because certain important system files are missing. Additionally, you
receive an error message that lists the missing files.
• The computer sometimes starts as expected. However, at other times, the computer stops
responding before the desktop icons and the taskbar appear.
• The computer runs very slowly. Additionally, the computer takes longer than expected to start.
• You receive out-of-memory error messages even though the computer has sufficient RAM.
• New programs are installed incorrectly.
• Windows restarts unexpectedly.
• Programs that used to run stop responding frequently. Even if you remove and reinstall the
programs, the issue continues to occur.
• A disk utility such as Scandisk reports multiple serious disk errors.
• A partition disappears.
• The computer always stops responding when you try to use Microsoft Office products.
• You cannot start Windows Task Manager.
• Antivirus software indicates that a computer virus is present.

How to remove a computer virus


Even for an expert, removing a computer virus can be a difficult task without the help of computer
virus removal tools. Some computer viruses and other unwanted software, such as spyware, even
reinstall themselves after the viruses have been detected and removed. Fortunately, by updating the
computer and by using antivirus tools, you can help permanently remove unwanted software.
To remove a computer virus, follow these steps:
1. Install the latest updates from Microsoft Update on the computer.
2. Update the antivirus software on the computer. Then, perform a thorough scan of the computer by
using the antivirus software.
3. Download, install, and then run the Microsoft Malicious Software Removal Tool to remove
existing viruses from the computer.

Trojan Horses
A Trojan Horse is not a virus. It is a program that you run because you think it will serve a useful
purpose, such as a game, or provide entertainment. Like the Trojan Horse of legend, it serves not as it
claims, but to damage files or perhaps plant a virus into your computer. A Trojan horse does not
replicate or spread like a virus. Most virus checking programs detect Trojan Horses.

Computer Worm
A computer worm is a self-replicating malware computer program, which uses a computer network to
send copies of itself to other nodes (computers on the network) and it may do so without any user
intervention. This is due to security shortcomings on the target computer. Unlike a computer virus, it
does not need to attach itself to an existing program. Worms almost always cause at least some harm to
the network, even if only by consuming bandwidth, whereas viruses almost always corrupt or modify
files on a targeted computer.

Symptoms of Worms and Trojan Horse viruses in e-mail messages


When a computer virus infects e-mail messages or infects other files on a computer, you may notice
the following symptoms:
• The infected file may make copies of itself. This behavior may use up all the free space on the
hard disk.
• A copy of the infected file may be sent to all the addresses in an e-mail address list.


• The computer virus may reformat the hard disk. This behavior will delete files and programs.
• The computer virus may install hidden programs, such as pirated software. This pirated software
may then be distributed and sold from the computer.
• The computer virus may reduce security. This could enable intruders to remotely access the
computer or the network.
• You receive an e-mail message that has a strange attachment. When you open the attachment,
dialog boxes appear, or a sudden degradation in system performance occurs.
• Someone tells you that they have recently received e-mail messages from you that contained
attached files that you did not send. The files that are attached to the e-mail messages have
extensions such as .exe, .bat, .scr, and .vbs.
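The "double extension" symptom above can be checked mechanically. This is a minimal sketch, and the two extension lists are illustrative assumptions rather than a complete set:

```python
# Attachments such as holiday.jpg.exe disguise an executable as an image.
SUSPICIOUS_FINAL = {".exe", ".bat", ".scr", ".vbs"}
DISGUISE = {".jpg", ".gif", ".doc", ".pdf", ".txt"}

def looks_disguised(filename: str) -> bool:
    """Flag filenames with a harmless-looking inner extension and an executable outer one."""
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False  # no double extension at all
    inner, outer = "." + parts[-2], "." + parts[-1]
    return inner in DISGUISE and outer in SUSPICIOUS_FINAL

print(looks_disguised("holiday.jpg.exe"))  # True
print(looks_disguised("report.pdf"))       # False
```

Note that legitimate double extensions such as .tar.gz exist, which is why the check pairs a "disguise" extension with an executable one rather than flagging every double extension.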

Computer Application in Pharmacy Information Systems

Organization
A social unit of people, systematically structured and managed to meet a need or to pursue collective
goals on a continuing basis. All organizations have a management structure that determines
relationships between functions and positions, and subdivides and delegates roles, responsibilities, and
authority to carry out defined tasks. Organizations are open systems in that they affect and are affected
by the environment beyond their boundaries.

Virtual Organization
One that (1) does not have a physical (bricks and mortar) presence but exists electronically (virtually)
on the internet, (2) is not constrained by the legal definition of a company, or (3) is formed in an
informal manner as an alliance of independent legal entities.
Advantages of a virtual organization are:
• Reduced costs of physical facilities.
• More rapid response to customer needs.
• Flexibility for employees to care for children or aging parents.

Information System
“Information systems are computer systems that support end users, giving them access to the
information. For a large number of systems the information is held in databases and access is via
database management systems. Information systems perform a variety of tasks. While all of the
information processes are represented in information systems, the emphasis in this topic is on the
processes of organising, storing and retrieving with database systems and hypermedia.” Page 35 of IPT
HSC Syllabus, 1999.

Five Types of Information Systems


Information systems are constantly changing and evolving as technology continues to grow. Very
importantly, the information systems described below are not mutually exclusive, and some (especially
Expert Systems, Management Information Systems and Executive Information Systems) can be
seen as subsets of Decision Support Systems. However, these examples are not the only overlaps, and
the division of these information systems will change over time.
At present there are five main types:
• Transaction Processing Systems (TPS)
• Decision Support Systems (DSS)
• Expert Systems (ES)
• Management Information Systems (MIS)
• Office Automation Systems (OAS)

Transaction Processing System


A transaction processing system (TPS) collects, stores, modifies and retrieves the transactions of an
organization. Examples of such systems are automatic teller machines (ATMs) and electronic funds
transfer at point of sale (EFTPOS, also referred to as POS). There are two types of transaction
processing:
• Batch processing: where all of the transactions are collected and processed as one group
or batch at a later stage.
• Real-time processing: where the transaction is processed immediately.
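The contrast between the two modes can be sketched as follows; the accounts and amounts are invented for illustration:

```python
# Toy transaction processing: each transaction is an (account, amount) pair.
balances = {"A": 100, "B": 50}

def process(txn):
    account, amount = txn
    balances[account] += amount

# Real-time processing: the transaction is applied the moment it arrives.
process(("A", -30))

# Batch processing: transactions are queued and applied later as one group.
batch = [("B", 20), ("A", 10), ("B", -5)]
for txn in batch:
    process(txn)

print(balances)  # {'A': 80, 'B': 65}
```

The end state is the same either way; the difference is when the updates happen, which is why ATMs and EFTPOS must use real-time processing while payroll runs can be batched.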
Examples of Real-time Transaction Processing
There are 3 examples of TPS examined in the HSC: Reservation Systems, POS, Library Loans
Reservation Systems


These are used by businesses where services need to be booked. Examples of reservation systems can
be found at travel agencies, airline companies, rental companies, and entertainment agencies such as
Ticketek or the relevant venues. Every time a booking is made, the available resources need to be
reduced by the number of resources reserved or paid for. Once the transaction has taken place,
tickets need to be printed, credits will be transferred from the customer's account to the booking
agency, receipts need to be printed and transaction records need to be updated in the database.
Point of Sale (POS)
POS is the system that is in place for purchases involving EFTPOS (electronic funds transfer at point
of sale). POS is just an abbreviation of the full name of EFTPOS. POS systems are found in many
businesses now from major outlets such as Coles or Woolworths to many mixed businesses and petrol
stations. The POS system is what allows the convenience of an ATM at midnight. When a transaction
takes place, goods are scanned from barcodes or details are typed in, any required credit card checks
are made, a receipt is sent to the customer (displayed on the screen or printed on a receipt), coded
inventory data is sent through to update the database and the inventory or bank balance is updated. It
will also work out the change required, and any receipt will normally itemize the purchases.
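A highly simplified sketch of the steps just described (scanning barcodes, totalling the sale, updating inventory and working out the change); the barcodes, prices and stock levels are invented:

```python
# Hypothetical price list and inventory for a small POS terminal.
prices = {"1234": 2.50, "5678": 4.00}   # barcode -> unit price
stock  = {"1234": 10,   "5678": 3}      # barcode -> units on hand

def checkout(scanned, cash_tendered):
    """Total the scanned barcodes, update stock, and work out the change."""
    total = 0.0
    for code in scanned:
        total += prices[code]
        stock[code] -= 1          # inventory update sent through to the database
    change = round(cash_tendered - total, 2)
    return total, change

total, change = checkout(["1234", "1234", "5678"], cash_tendered=10.00)
print(total, change)   # 9.0 1.0
print(stock)           # {'1234': 8, '5678': 2}
```

A real POS system would also handle credit card checks and print an itemized receipt, but the core transaction (total, change, inventory update) follows this pattern.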
Library Loans
The borrower will normally have a library membership card, which will be scanned and checked for
overdue books and fines; the borrowed items will be scanned, and each item will then be placed under
the borrower's name. The details about available books will then be updated in the database. The
system will also allow reservations to be placed on the books.

Management Information Systems


Management Information Systems provide information to managers of an organisation. This relates to
reports, statistics, stock inventories, payroll details, budgets or any other details that assist managers
with running an organisation. An EIS (Executive Information System) is a form of MIS designed for
upper management; it provides information which might help them make decisions on a strategic level
about future directions or issues concerning managers.

Decision Support Systems


Decision Support Systems are created to help people make decisions by providing access to
information and analysis tools. Many stockbrokers now use programs that will automatically put in
requests to sell shares once they reach a certain price (either high or low). A DSS creates a
mathematical model of the system, which helps decision making about actions affecting a person or
organization. Another example of a decision support system is the simple analysis tools that banks use
to help formulate loans for prospective customers. A DSS allows the users to pose what-if questions
by changing a number of variables and then find out what the outcomes would be. In the home
loan DSS, customers can analyze how paying off more each pay period would affect their loans, or how
a different type of loan may make it easier to make ends meet, and by so doing tailor the loan to suit the
customer.
A DSS depends upon the accuracy of the mathematics involved in creating the model and the ability of
the user to accurately interpret the resulting data.
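The home-loan what-if question described above can be sketched with a simple repayment model. The loan figures are illustrative only, and the model ignores fees and rate changes:

```python
def months_to_repay(principal, annual_rate, monthly_payment):
    """Count the months needed to clear the loan at a given repayment level."""
    monthly_rate = annual_rate / 12
    balance, months = principal, 0
    while balance > 0:
        # Interest accrues first, then the monthly payment is deducted.
        balance = balance * (1 + monthly_rate) - monthly_payment
        months += 1
    return months

# What if the customer pays 1,200 instead of 1,000 per month?
base = months_to_repay(100_000, 0.06, 1_000)
extra = months_to_repay(100_000, 0.06, 1_200)
print(base, extra)  # 139 109: paying more each month shortens the loan
```

Changing the payment variable and re-running the model is exactly the kind of what-if analysis a DSS automates, leaving the interpretation of the results to the user.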
Example
Computer-driven trade has significantly affected the stock exchange. The Australian stock exchange
uses a system called SEATS (Stock Exchange Automated Trading System). Computer and
telecommunications technology, besides opening a wide market in over the counter dealings, has also
given rise to trading on an international level. Personal computers and modems allow trading to occur
around the clock, and the securities trading on one major stock exchange can now significantly affect
the trading on others.

Expert Systems


Expert systems help to guide users to find solutions to problems that would otherwise need expert
advice. They are useful in diagnosing, monitoring, selecting, designing, predicting and training. An
expert system will ask the user to answer a series of questions, after which it will provide a suggested
course of action. Expert systems will not take over but require the user to make the final decision;
expert systems advise. Some examples of expert systems are programs which help doctors to
diagnose a patient. In fact, there are some websites that will even diagnose patients. Another example
of an expert system might be a program where a person puts in all of their vital statistics and the
program helps to advise about a fitness program which that person can then follow.
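A toy rule-based sketch of the question-and-answer flow described above; the fitness rules and answer fields are invented purely for illustration:

```python
# Each rule pairs a condition on the user's answers with a piece of advice.
RULES = [
    (lambda a: a["age"] > 50 and not a["exercises"], "Start with gentle walking."),
    (lambda a: a["exercises"] and a["goal"] == "strength", "Add resistance training."),
    (lambda a: True, "Maintain your current routine."),  # default rule
]

def advise(answers):
    """Fire the first rule whose condition matches the user's answers."""
    for condition, advice in RULES:
        if condition(answers):
            return advice  # advisory only: the user makes the final decision

print(advise({"age": 60, "exercises": False, "goal": "health"}))
print(advise({"age": 30, "exercises": True, "goal": "strength"}))
```

A real expert system would chain many more rules and ask follow-up questions based on earlier answers, but the advisory pattern is the same.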

Office Automation Systems


Office Automation Systems are software packages such as MS Office which include word processors,
spreadsheets, databases, presentation software, email, internet, desktop publishing programs and
project management software. In office automation much work is processed electronically with the aim
of saving space, being more efficient, and reducing paper usage. In reality, since office automation
began in the mid-1980s, paper usage has soared as more people demand hard copies purely because
they are easier to produce. Also, because documents are so much easier to produce than they were
with typewriters, higher standards of presentation are now expected.

Management
Management is a universal phenomenon. It is a very popular and widely used term. All organizations -
business, political, cultural or social are involved in management because it is the management which
helps and directs the various efforts towards a definite purpose. According to Harold Koontz,
“Management is an art of getting things done through and with the people in formally organized
groups. It is an art of creating an environment in which people can perform as individuals and can
co-operate towards attainment of group goals”. According to F.W. Taylor, “Management is an art of
knowing what to do, when to do it, and seeing that it is done in the best and cheapest way”.
Management is a purposive activity. It is something that directs group efforts towards the attainment of
certain pre - determined goals. It is the process of working with and through others to effectively
achieve the goals of the organization, by efficiently using limited resources in the changing world. Of
course, these goals may vary from one enterprise to another. For example, for one enterprise it may be
launching new products by conducting market surveys, and for another it may be profit maximization
by minimizing costs.

Levels of Management
The term “Levels of Management” refers to a line of demarcation between various managerial
positions in an organization. The number of levels in management increases when the size of the
business and workforce increases, and vice versa. The level of management determines a chain of
command and the amount of authority and status enjoyed by any managerial position. The levels of
management can be classified into three broad categories:
1. Top level / Administrative level
2. Middle level / Executory
3. Low level / Supervisory / Operative / First-line managers
Managers at all these levels perform different functions. The role of managers at all three levels is
discussed below.


LEVELS OF MANAGEMENT

Top Level of Management


It consists of the board of directors, chief executive or managing director. The top management is the
ultimate source of authority and it sets goals and policies for the enterprise. It devotes more time
to planning and coordinating functions.
The role of the top management can be summarized as follows -
a. Top management lays down the objectives and broad policies of the enterprise.
b. It issues necessary instructions for preparation of department budgets, procedures, schedules etc.
c. It prepares strategic plans & policies for the enterprise.
d. It appoints the executive for middle level i.e. departmental managers.
e. It controls & coordinates the activities of all the departments.
f. It is also responsible for maintaining a contact with the outside world.
g. It provides guidance and direction.
h. The top management is also responsible towards the shareholders for the performance of the
enterprise.

Middle Level of Management


The branch managers and departmental managers constitute middle level. They are responsible to the
top management for the functioning of their department. They devote more time to organizational and
directional functions. In small organizations, there is only one layer of middle-level management, but
in big enterprises, there may be senior and junior middle-level management. Their role can be
emphasized as -
a. They execute the plans of the organization in accordance with the policies and directives of the
top management.
b. They make plans for the sub-units of the organization.
c. They participate in employment & training of lower level management.
d. They interpret and explain policies from top level management to lower level.
e. They are responsible for coordinating the activities within the division or department.
f. They also send important reports and other important data to top level management.
g. They evaluate performance of junior managers.
h. They are also responsible for inspiring lower level managers towards better performance.

Lower Level of Management


Lower level is also known as the supervisory / operative level of management. It consists of supervisors,
foremen, section officers, superintendents, etc. According to R.C. Davis, “Supervisory management
refers to those executives whose work has to be largely with personal oversight and direction of
operative employees”. In other words, they are concerned with direction and controlling function of
management. Their activities include -
a. Assigning of jobs and tasks to various workers.
b. They guide and instruct workers for day to day activities.


c. They are responsible for the quality as well as quantity of production.


d. They are also entrusted with the responsibility of maintaining good relation in the organization.
e. They communicate workers' problems, suggestions, recommendations, etc., to the higher level, and
higher-level goals and objectives to the workers.
f. They help to solve the grievances of the workers.
g. They supervise & guide the sub-ordinates.
h. They are responsible for providing training to the workers.
i. They arrange necessary materials, machines, tools etc for getting the things done.
j. They prepare periodical reports about the performance of the workers.
k. They ensure discipline in the enterprise.
l. They motivate workers.
m. They are the image builders of the enterprise because they are in direct contact with the workers.

Management Styles
What makes a good leader or manager? For many it is someone who can inspire and get the most from
their staff. There are many qualities that are needed to be a good leader or manager.
• Be able to think creatively to provide a vision for the company and solve problems
• Be calm under pressure and make clear decisions
• Possess excellent two-way communication skills
• Have the desire to achieve great things
• Be well informed and knowledgeable about matters relating to the business
• Possess an air of authority
Do you have to be born with the correct qualities or can you be taught to be a good leader? It is most
likely that well-known leaders or managers (Winston Churchill, Richard Branson or Alex Ferguson?)
are successful due to a combination of personal characteristics and good training.
Managers deal with their employees in different ways. Some are strict with their staff and like to be in
complete control, whilst others are more relaxed and allow workers the freedom to run their own
working lives (just like the different approaches you may see in teachers!). Whatever approach is
predominantly used, it will be vital to the success of the business. “An organisation is only as good as
the person running it”.
There are three main categories of leadership styles: autocratic, paternalistic and democratic.

Autocratic
Autocratic (or authoritarian) managers like to make all the important decisions and closely supervise
and control workers. Managers do not trust workers and simply give orders (one-way communication)
that they expect to be obeyed. This approach derives from the views of Taylor as to how to motivate
workers and relates to McGregor’s theory X view of workers. This approach has limitations (as
highlighted by other motivational theorists such as Mayo and Herzberg) but it can be effective in
certain situations. For example:
• When quick decisions are needed in a company (e.g. in a time of crisis)
• When controlling large numbers of low-skilled workers.

Paternalistic
Paternalistic managers give more attention to the social needs and views of their workers. Managers
are interested in how happy workers feel and in many ways they act as a father figure (pater means
father in Latin). They consult employees over issues and listen to their feedback or opinions. The
manager will however make the actual decisions (in the best interests of the workers) as they believe
the staff still need direction and in this way it is still somewhat of an autocratic approach. The style is
closely linked with Mayo’s Human Relation view of motivation and also the social needs of Maslow.

Democratic
A democratic style of management will put trust in employees and encourage them to make decisions.
They will delegate to them the authority to do this (empowerment) and listen to their advice. This


requires good two-way communication and often involves democratic discussion groups, which can
offer useful suggestions and ideas. Managers must be willing to encourage leadership skills in
subordinates.
The ultimate democratic system occurs when decisions are made based on the majority view of all
workers. However, this is not feasible for the majority of decisions taken by a business; indeed, one of
the criticisms of this style is that it can take longer to reach a decision. This style has close links with
Herzberg’s motivators and Maslow’s higher order skills and also applies to McGregor’s theory Y view
of workers.

System Development
Like any other set of engineering products, software products are also oriented towards the customer. It
is either market driven or it drives the market. Customer satisfaction was the buzzword of the 1980s,
customer delight is today's buzzword, and customer ecstasy is the buzzword of the new
millennium. Products that are not customer or user friendly have no place in the market although
they are engineered using the best technology. The interface of the product is as crucial as the internal
technology of the product.

Market Research
A market study is made to identify a potential customer's need. This process is also known as market
research. Here, the already existing need and the possible and potential needs that are available in a
segment of the society are studied carefully. The market study is done based on a lot of assumptions.
Assumptions are the crucial factors in the development or inception of a product's development.
Unrealistic assumptions can cause a nosedive in the entire venture. Though assumptions are abstract,
there should be a move to develop tangible assumptions to come up with a successful product.
Research and Development
Once the Market Research is carried out, the customer's need is given to the Research &
Development division (R&D) to conceptualize a cost-effective system that could potentially solve the
customer's needs in a manner that is better than the one adopted by the competitors at present. Once the
conceptual system is developed and tested in a hypothetical environment, the development team takes
control of it. The development team adopts one of the software development methodologies that is
given below, develops the proposed system, and gives it to the customer.
The Sales & Marketing division starts selling the software to the available customers and
simultaneously works to develop a niche segment that could potentially buy the software. In addition,
the division also passes the feedback from the customers to the developers and the R&D division to
make possible value additions to the product.
While developing software, the company outsources the non-core activities to other companies that
specialize in those activities. This greatly accelerates the software development process. Some
companies work on tie-ups to bring out a highly matured product in a short period.
Popular Software Development Models
The following are some basic popular models that are adopted by many software development firms:
A. System Development Life Cycle (SDLC) Model
B. Prototyping Model
C. Rapid Application Development Model
D. Component Assembly Model

System Development Life Cycle (SDLC) Model


This is also known as Classic Life Cycle Model (or) Linear Sequential Model (or) Waterfall Method.
This model has the following activities.
1. System/Information Engineering and Modeling
As software is always part of a larger system (or business), work begins by establishing the
requirements for all system elements and then allocating some subset of these requirements to


software. This system view is essential when the software must interface with other elements such
as hardware, people and other resources. A working system is the basic and very critical requirement
for the existence of software in any entity. So if the system is not in place, it should be
engineered and put in place. In some cases, to extract the maximum output, the system should
be re-engineered and spruced up. Once the ideal system is engineered or tuned, the development
team studies the software requirements for the system.
2. Software Requirement Analysis
This process is also known as feasibility study. In this phase, the development team visits the
customer and studies their system. They investigate the need for possible software automation in
the given system. By the end of the feasibility study, the team furnishes a document that holds the
different specific recommendations for the candidate system. It also includes the personnel
assignments, costs, project schedule, target dates, etc. The requirement gathering process is
intensified and focused specially on software. To understand the nature of the program(s) to be
built, the system engineer or "Analyst" must understand the information domain for the software,
as well as the required function, behavior, performance and interfacing. The essential purpose of this
phase is to find the need and to define the problem that needs to be solved.
3. System Analysis and Design
In this phase, the software development process, the software's overall structure and its nuances
are defined. In terms of the client/server technology, the number of tiers needed for the package
architecture, the database design, the data structure design, etc., are all defined in this phase. A
software development model is thus created. Analysis and Design are very crucial in the whole
development cycle. Any glitch in the design phase could be very expensive to solve in the later
stage of the software development. Much care is taken during this phase. The logical system of
the product is developed in this phase.
4. Code Generation
The design must be translated into a machine-readable form. The code generation step performs
this task. If the design is performed in a detailed manner, code generation can be accomplished
without much complication. Programming tools like compilers, interpreters, debuggers, etc., are
used to generate the code. Different high-level programming languages like C, C++, Pascal and Java
are used for coding. With respect to the type of application, the right programming language is
chosen.
5. Testing
Once the code is generated, the software program testing begins. Different testing methodologies
are available to unravel the bugs that were committed during the previous phases. Different testing
tools and methodologies are already available. Some companies build their own testing tools that
are tailor made for their own development operations.
6. Maintenance
The software will definitely undergo change once it is delivered to the customer. There can be
many reasons for this change to occur. Change could happen because of some unexpected input
values into the system. In addition, the changes in the system could directly affect the software
operations. The software should be developed to accommodate changes that could happen during
the post implementation period.

The Waterfall Model Explained


The subject of Software Engineering doesn't deal only with software development; it is about
developing good software by using knowledge of available theories with the help of various defined
methods and effective use of the tools at hand.


There are various software development approaches defined and designed which are used/employed
during the development process of software; these approaches are also referred to as "Software
Development Process Models". Each process model follows a particular life cycle in order to ensure
success in the process of software development.
One such approach used in Software Development is "The Waterfall Model". The Waterfall
approach was the first Process Model to be introduced and widely followed in Software Engineering to
ensure success of the project. In "The Waterfall" approach, the whole process of software development
is divided into separate process phases. The phases in the Waterfall model are: Requirement
Specifications phase, Software Design, Implementation and Testing & Maintenance. All these phases
are cascaded to each other, so that the second phase starts as and when a defined set of goals is
achieved for the first phase and it is signed off; hence the name "Waterfall Model". All the methods and
processes undertaken in the Waterfall Model are more visible.
The stages of "The Waterfall Model" are:
Requirement Analysis & Definition: All possible requirements of the system to be developed are
captured in this phase. Requirements are set of functionalities and constraints that the end-user (who
will be using the system) expects from the system. The requirements are gathered from the end-user by
consultation; these requirements are analyzed for their validity, and the possibility of incorporating the
requirements in the system to be developed is also studied. Finally, a Requirement Specification
document is created which serves as a guideline for the next phase of the model.
System & Software Design: Before starting actual coding, it is highly important to understand
what we are going to create and what it should look like. The requirement specifications from the first
phase are studied in this phase and system design is prepared. System Design helps in specifying
hardware and system requirements and also helps in defining overall system architecture. The system
design specifications serve as input for the next phase of the model.
Implementation & Unit Testing: On receiving the system design documents, the work is divided into
modules/units and actual coding begins. The system is first developed in small programs called
units, which are integrated in the next phase. Each unit is developed and tested for its functionality;
this is referred to as Unit Testing. Unit testing mainly verifies that the modules/units meet their
specifications.
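Unit testing as described here can be sketched with Python's built-in unittest framework. The unit under test (the dose_per_kg function) and its specification are purely illustrative assumptions, not from the text:

```python
import unittest

# Hypothetical unit under development: a small dosage-calculation helper.
def dose_per_kg(total_dose_mg, weight_kg):
    """Return the dose in mg per kg of body weight (the unit's specification)."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return total_dose_mg / weight_kg

class TestDosePerKg(unittest.TestCase):
    """Unit tests: verify that the unit meets its specification."""

    def test_normal_case(self):
        self.assertEqual(dose_per_kg(500, 50), 10)

    def test_rejects_invalid_weight(self):
        with self.assertRaises(ValueError):
            dose_per_kg(500, 0)

# Run with: python -m unittest <module_name>
```

Each unit would get its own small test module like this; only after a unit passes its tests is it handed on to the Integration phase.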
Integration & System Testing: As specified above, the system is first divided into units, which are
developed and tested for their functionalities. During the Integration phase, these units are integrated
into a complete system and tested to check that all modules/units coordinate with each other and that
the system as a whole behaves as per the specifications. After the software has been successfully
tested, it is delivered to the customer.
Operations & Maintenance: This phase of the Waterfall Model is a virtually never-ending (very long)
phase. Generally, problems with the developed system that were not found during the development
life cycle come up after its practical use starts, so issues related to the system are solved after its
deployment. Not all problems come to light immediately; they arise from time to time and need to be
solved. Hence, this process is referred to as Maintenance.
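The cascaded, sign-off-driven flow of the five phases described above can be sketched as a simple sequence. The sign_off callable below is an illustrative stand-in for the real review that closes a phase; only the phase names come from the text:

```python
# Minimal sketch of the Waterfall cascade: each phase must be signed off
# before the next one begins, mirroring the description in the text.
PHASES = [
    "Requirement Analysis & Definition",
    "System & Software Design",
    "Implementation & Unit Testing",
    "Integration & System Testing",
    "Operations & Maintenance",
]

def run_waterfall(sign_off):
    """Run phases in order; stop at the first phase that is not signed off.

    `sign_off` is a callable deciding whether a phase's goals were achieved.
    Returns the list of completed phases.
    """
    completed = []
    for phase in PHASES:
        if not sign_off(phase):
            break  # later phases never start without sign-off of this one
        completed.append(phase)
    return completed

# Example: every phase except the (open-ended) maintenance phase is signed off.
done = run_waterfall(lambda phase: phase != "Operations & Maintenance")
```

The strict ordering is exactly what the disadvantages below criticise: a phase that was signed off too early cannot easily be reopened.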
Computer Application in Pharmacy Information Systems
Disadvantages of Waterfall Model


There are some disadvantages of the Waterfall Model.
1. Although it is very important to gather all possible requirements during the Requirement Gathering
   and Analysis phase in order to properly design the system, not all requirements are received at
   once; the customer keeps adding requirements to the list even after the "Requirement Gathering
   and Analysis" phase has ended. This negatively affects the system development process and its
   success.
2. The problems encountered in one phase are never solved completely during that phase; in fact,
   many problems regarding a particular phase arise after the phase is signed off. This results in a
   badly structured system, as not all the problems related to a phase are solved during that same
   phase.
3. The project is not partitioned into phases in a flexible way.
4. As the customer's requirements keep getting added to the list, not all of them are fulfilled; this
   results in the development of an almost unusable system. These requirements are then met in a
   newer version of the system, which increases the cost of system development.