
Hardware is a general term that describes all the physical components used in the assembly of a computer.

It covers every physical structure that uses binary states to move or hold computer data.

Software is the intangible element that constitutes the instructions on which the computer acts. It mostly consists of programs meant to control and direct the performance of the computer.

CLASSIFICATION OF SOFTWARE
Software can be classified into three distinct groups: system software, application software and utility software.

System Software
The software that instructs the computer to control and manage its internal functions, such as initializing on startup, controlling external devices, and organizing memory during operations, is called system software. The most important system software is the Operating System (OS), which is truly the soul of the system, e.g. Windows XP, Unix, Linux.
Examples of system software: operating systems (Windows), device drivers (sound card driver, display driver), etc.

Application Software
The software which performs a specific data-processing job is called application software. It consists of programs which carry out the specific processing required for the user's application; word processors, spreadsheets, financial accounting packages and computer-aided design packages fall under this category. Examples of application software: Microsoft Office, a payroll system, an inventory system.

Utility Software
Utility software may be considered either application software or system software; it is very often used in the development of a program.

Examples of utility software: WinZip, FrontPage, Notepad, web browsers, etc.

[Diagram: layers of a computer system - Hardware, System Software, Application Software, Users]

[Diagram: classification tree - a computer system comprises hardware and software; software divides into system software (the OS and utilities such as editors, monitors and debuggers) and application software; languages divide into low-level (machine, assembly) and high-level, the latter being compiler based (COBOL, Fortran, PASCAL) or interpreter based (BASIC)]

Firmware refers to those programs that are permanently written and stored in computer memory and are necessary for controlling startup, switch-off and input/output procedures in a computer. They are introduced at the time of manufacture and cannot normally be altered.

Machine Language
A computer can understand only a binary-based language. Instructions written as sequences of 0s and 1s are known as machine language. Every instruction is composed of two parts:
1. Operation code, or opcode (like add, multiply, move, etc.)
2. Operand, which is the address of the data that has to be acted upon.

Example
Opcode: 001  Operand: 0100011101001
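The opcode/operand split above can be sketched in code. This is a minimal illustration assuming a hypothetical 16-bit instruction word with a 3-bit opcode and a 13-bit operand address, matching the example; real instruction formats vary by machine.

```python
# Hypothetical 16-bit instruction: 3-bit opcode followed by a 13-bit
# operand address (the layout is illustrative only).
instruction = 0b0010100011101001

opcode = instruction >> 13        # top 3 bits: the operation code
operand = instruction & 0x1FFF    # bottom 13 bits: the data address

print(format(opcode, "03b"))      # 001
print(format(operand, "013b"))    # 0100011101001
```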

Assembly Language
This was the first step in the evolution of programming languages. It used mnemonics (symbolic codes) to represent operation codes and strings of characters to represent addresses.

Example
Operation   Operand address
READ        M
ADD         L

A program written in assembly language needs to be translated into machine language before the computer can execute it. This is done by a special program called an assembler, which translates it into equivalent machine code.

Assembly language program - source program
Machine language program - object program

Assembler
An assembler is a program that accepts an assembly language program and produces its machine language equivalent.

Multipass Assembler
Multi-pass translation of an assembly language program can take care of the problem of forward references. Most assemblers process an assembly program in multiple passes. While analysing the statements of the program for the first time, location counter (LC) processing is performed and symbols defined in the program are entered into the symbol table. In the second pass, statements are processed for the purpose of synthesising the target form. Since all defined symbols and their addresses can be found in the symbol table, no problems are faced in assembling forward references.
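The two passes described above can be sketched as follows. The mnemonics, opcode values and 3-bit/13-bit word layout are invented for illustration; real assemblers are considerably more involved.

```python
# A minimal two-pass assembler sketch for a toy assembly language.
OPCODES = {"READ": 0, "ADD": 1, "STORE": 2, "HALT": 3}

def assemble(lines):
    # Pass 1: location counter (LC) processing - record every label's
    # address in the symbol table so forward references resolve later.
    symtab, lc = {}, 0
    for line in lines:
        if line.endswith(":"):
            symtab[line[:-1]] = lc      # label definition, e.g. "DATA:"
        else:
            lc += 1                     # each instruction takes one word

    # Pass 2: synthesize the target form; all symbols are now known.
    code = []
    for line in lines:
        if line.endswith(":"):
            continue
        parts = line.split()
        addr = symtab[parts[1]] if len(parts) > 1 else 0
        code.append((OPCODES[parts[0]] << 13) | addr)
    return code

# "DATA" is referenced before it is defined - a forward reference
# that the two passes handle without trouble.
print(assemble(["READ DATA", "ADD DATA", "HALT", "DATA:"]))
# [3, 8195, 24576]
```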


High Level Language
A lack of portability of programs between different computers led to the development of high-level languages. Algorithmic, or procedural, languages are designed for solving a particular type of problem. They contain commands that are particularly suited to one type of application.

High-level language program - source program
Equivalent machine-level program - object program

Compilers
A compiler is a program which translates from a high-level language (HLL) into the machine language of a computer. Besides program translation, the compiler performs another very important function: diagnostics, i.e., error detection.
Input: program in high-level language (source program) -> COMPILER -> Output: program in machine language (object program)

The compiler translates the HLL program input (hereafter referred to as the source program) into an equivalent machine language program.

Interpreter
An interpreter translates the instructions of a program one statement at a time. Each translated statement is executed before the interpreter begins work on the next line; thus instructions are translated and executed simultaneously. Object code is not stored in the computer's memory for future use: the next time an instruction is needed, it must be freshly translated by the interpreter.

Input: source program -> INTERPRETER (translates and executes one statement at a time) -> Output: result of program execution
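The statement-at-a-time behaviour can be sketched with a toy interpreter. The tiny accumulator language here (LOAD, ADD, PRINT) is invented for illustration only.

```python
# A toy line-by-line interpreter: each statement is decoded and
# executed immediately; no object code is stored for later reuse.
def interpret(program):
    acc = 0
    for line in program:
        op, _, arg = line.partition(" ")
        if op == "LOAD":
            acc = int(arg)        # load a constant into the accumulator
        elif op == "ADD":
            acc += int(arg)       # add a constant to the accumulator
        elif op == "PRINT":
            print(acc)
    return acc

interpret(["LOAD 5", "ADD 3", "PRINT"])   # prints 8
```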

UNIT II - Data Representation


Common forms of data are letters of the alphabet (A to Z), numerals (0 to 9), some symbols (@, &, *) and certain control characters (Ctrl, Shift). These forms of data are convenient for human beings, but all data in a digital computer is represented in binary form. Coding systems are used to represent such data in binary form (characters are represented by a sequence of bits).


The best-known coding systems used for data representation are:
ASCII (American Standard Code for Information Interchange)
EBCDIC (Extended Binary Coded Decimal Interchange Code)
BCD (Binary Coded Decimal)



ASCII is used in almost all present-day personal computers. Each alphabetic, numeric or special character is represented with a 7-bit binary number (a string of seven 0s and 1s), so 2^7 = 128 possible characters can be represented. The eighth bit may be set to 0, or used as a parity bit for error checking on communication lines or for other device-specific functions. Example: the character 'A' is 65 in decimal, 41 in hex, 0100 0001 in binary.
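The 'A' example above can be checked directly, since Python's built-in ord() returns a character's code. The even-parity calculation at the end is one illustration of how the eighth bit might be used; it is a sketch, not a specific protocol.

```python
# ASCII in practice: ord() exposes the character codes described above.
code = ord("A")
print(code, hex(code), format(code, "08b"))   # 65 0x41 01000001

# Even parity for the eighth bit (one device-specific use of bit 8):
# 'A' = 0100 0001 has two 1-bits, so the even-parity bit is 0.
parity = bin(code).count("1") % 2
print(parity)   # 0
```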



ASCII Printable Characters
Code 32 is the "space" character, denoting the space between words, which is produced by the large space bar of a keyboard.

Codes 33 to 126 are called the printable characters, which represent letters, digits, punctuation marks, and a few miscellaneous symbols.




BCD
In BCD, a 4-bit binary number is used to represent one decimal digit (e.g. 3d = 0011b, 9d = 1001b).

The highest decimal digit coded in BCD is 9 (1001); thus the codes 1010 through 1111 are not used.
To encode a number such as 43, each digit is coded separately: 43d = 0100 0011b.

The BCD format is usually used in the BIOS of a personal computer (PC) to keep the date and time, for historical reasons.


Unicode
Unicode provides a consistent way of encoding multilingual plain text and brings order to a chaotic state of affairs that has made it difficult to exchange text files internationally. Computer users who deal with multilingual text (business people, linguists, researchers, scientists and others) will find that the Unicode Standard greatly simplifies their work. Mathematicians and technicians, who regularly use mathematical symbols and other technical characters, will also find the Unicode Standard valuable.

The design of Unicode is based on the simplicity and consistency of ASCII, but goes far beyond ASCII's limited ability to encode only the Latin alphabet. The Unicode Standard provides the capacity to encode all of the characters used for the written languages of the world. To keep character coding simple and efficient, the Unicode Standard assigns each character a unique numeric value and name. The Unicode Standard and ISO/IEC 10646 support three encoding forms (UTF-8, UTF-16, UTF-32) that use a common repertoire of characters. These encoding forms allow for encoding as many as a million characters. This is sufficient for all known character encoding requirements, including full coverage of all historic scripts of the world, as well as common notational systems.
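The three encoding forms can be compared directly: the same character takes a different number of bytes in each. The little-endian variants are used here only to exclude the byte-order mark from the count.

```python
# The same character, different byte lengths per Unicode encoding form.
for ch in ("A", "€"):
    print(ch,
          len(ch.encode("utf-8")),
          len(ch.encode("utf-16-le")),
          len(ch.encode("utf-32-le")))
# "A" (U+0041): 1 byte in UTF-8, 2 in UTF-16, 4 in UTF-32
# "€" (U+20AC): 3 bytes in UTF-8, 2 in UTF-16, 4 in UTF-32
```

UTF-8 stays byte-compatible with ASCII for the first 128 characters, which is one reason it dominates text interchange.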
