
DATA PROCESSING CYCLE

Introduction
Data processing refers to the transformation of raw data into meaningful output.
Data can be processed manually using pen and paper, mechanically using simple devices such as
typewriters, or electronically using modern data processing tools such as computers.

Data collection involves getting the data/facts needed for processing from their point of origin to
the computer.

Data input is the stage at which the collected data is converted into machine-readable form by an
input device and sent into the machine.

Processing is the transformation of the input data into a more meaningful form (information) in the
CPU.

Output is the production of the required information, which may serve as input for future
processing.

Stages of the Data Processing Cycle


As discussed earlier, data processing has three broad stages, each with sub-stages or steps in
between. These steps deal with the collection of data, the choice of processing method, data
management best practices, the information processing cycle, and the use of the processed data for
the desired purpose. The data processing cycle diagram is presented below. The steps include:
1. Data Collection: This is the first step, and it provides the data for the input. Collecting
data is hard work in its own right, but it is essential, because the results depend on it: the
quality of the input determines the quality of the output. Data can be collected in various ways
from primary or secondary sources, and might include census data, GDP or other monetary
figures, data about a number of industries, the profit of a company, etc. Depending on the data
requirement, the source from which data will be collected must be identified.

2. Preparation/Sieving: Some people consider this part of processing, but it does not
involve any actual processing. Preparation includes sorting and filtering the data that will
finally be used as input. This stage requires you to remove extra or unusable data to make
processing faster and better; it is a broad step that reduces the quantity of data to yield a
better result.

3. Input: This is the feeding of the raw, sieved data in for processing. If the input is not done
properly, or is done wrong, the result will be adversely affected, because software follows the
rule of "garbage in, garbage out." Utmost care should be taken to provide the right data.

4. Processing: This is the step where data is processed by mechanical or automated means.
Processed data is what gives the user information that can be put to use; raw data cannot be
understood directly, so it needs the processing done in this step. Processing may take time
depending on the complexity and volume of the input data, and the preparation step mentioned
above helps make it faster.

5. Output/Result: This is the last step of the data processing cycle, in which the processed data
is delivered in the form of information/results. Once the result or output is received, it may
be further processed or interpreted by the user or by software for additional value. The output
can be used directly in presentations or records, and it may even be saved to serve as input for
further data processing, which is what makes this the cycle being discussed. If the output is
not used as input, the complete process cannot be considered a cycle and remains a one-time
data processing activity. For the output to be used as input, it must be stored or be
simultaneously available for further processing.
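The five steps above can be sketched in Python. The student records, field names, and the average-score computation below are illustrative assumptions, not examples from the text:

```python
# 1. Collection: raw records gathered from a (hypothetical) primary source.
raw_records = [
    {"name": "Amina", "score": "78"},
    {"name": "Brian", "score": ""},        # unusable: missing score
    {"name": "Chen",  "score": "91"},
]

# 2. Preparation/Sieving: filter out unusable records and sort them.
prepared = sorted(
    (r for r in raw_records if r["score"].strip()),
    key=lambda r: r["name"],
)

# 3. Input: convert the sieved data into machine-usable form (integer scores).
inputs = [(r["name"], int(r["score"])) for r in prepared]

# 4. Processing: transform the input into meaningful information.
average = sum(score for _, score in inputs) / len(inputs)

# 5. Output: the result, which could be stored and fed back as future input.
result = {"average_score": average, "count": len(inputs)}
print(result)
```

Note how the sieving in step 2 (dropping the record with no score) is what keeps step 4 from failing on bad input, mirroring the "garbage in, garbage out" rule.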

All these steps or stages follow a particular sequence, which must be observed if processing is done
manually; automatic processing has built-in algorithms with pre-defined steps. In automatic
processing, the chances of error are drastically reduced, but only when the input data is correct.
Most programs that process data completely or partially have a back-end with a pre-defined
algorithm and set of operations. A single piece of software that performs all the required steps is
considered to have a complete data processing cycle in its back-end. In partial data processing, a
combination of different hardware and software is needed to complete the cycle, and it becomes the
responsibility of the person operating this set to feed the data and receive the output in the correct
sequence.

Limitations of the data processing cycle

The data processing cycle is, in most cases, a complete cycle in itself. But, as mentioned above, a
set of hardware and software may also be employed in some cases with special needs. In such
cases, several things must be taken care of to obtain sensible, useful output: the correct sequence,
operating skills, and an understanding of the steps forming the cycle, since the partial output from
one part is used as the input for the next part. If a person, operator, machine, or piece of software
fails to perform the steps in sequence, the output will not be useful.

Data integrity

Data integrity refers to the dependability, timeliness, availability, relevance, accuracy and
completeness of data/information.

Threats to data integrity

Data integrity may be compromised through:

 Human error, whether malicious or unintentional.


 Transfer errors, including unintended alterations or data compromise during transfer from
one device to another.
 Bugs, viruses/malware, hacking, and other cyber threats.
 Compromised hardware, such as a device or disk crash.

Ways of minimizing threats to data integrity

 Backing up the data on external storage media


 Enforcing security measures to control access to data
 Using error detection & correction software when transmitting data
 Designing user interfaces that minimize chances of invalid data being entered.
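The checksum idea behind error detection during transmission can be sketched as follows. The payload contents are illustrative assumptions, and CRC-32 stands in for whatever error-detection software is actually used:

```python
import zlib

def send(payload: bytes) -> tuple[bytes, int]:
    # Sender attaches a CRC-32 checksum computed over the payload.
    return payload, zlib.crc32(payload)

def receive(payload: bytes, checksum: int) -> bool:
    # Receiver recomputes the checksum; a mismatch signals corruption in transit.
    return zlib.crc32(payload) == checksum

data, check = send(b"account=123;amount=500")
assert receive(data, check)                # intact transfer passes the check
corrupted = b"account=123;amount=900"      # altered during transfer
assert not receive(corrupted, check)       # corruption is detected
```

Detection alone tells the receiver to request retransmission; full error-correcting codes go further and repair small errors without a resend.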

DATA PROCESSING METHODS

1. Manual Data Processing

In manual data processing, data is processed by hand, without using any machine or tool to get the
required results: all calculations and logical operations are performed manually on the data, and
data is likewise transferred manually from one place to another. This method of data processing is
very slow, and errors may occur in the output. Data is still mostly processed manually in many
small business firms as well as government offices and institutions. In an educational institute, for
example, mark sheets, fee receipts, and other financial calculations (or transactions) are prepared
by hand. This method is avoided as far as possible because of its very high probability of error and
because it is labour-intensive and very time-consuming. This type of data processing belongs to
the very primitive stage, when technology was either not available or not affordable. With
advances in technology, dependence on manual methods has drastically decreased.

2. Mechanical Data Processing

In the mechanical data processing method, data is processed using devices such as typewriters,
mechanical printers, or other mechanical devices. This method of data processing is faster and
more accurate than manual data processing, but it still belongs to the early stages of data
processing. With the invention and evolution of more complex machines with better computing
power, this type of processing also started to fade away. Examination boards and printing presses
frequently used mechanical data processing devices.

3. Electronic Data Processing

Electronic data processing, or EDP, is the modern technique for processing data. The data is
processed by computer: data and a set of instructions are given to the computer as input, and the
computer automatically processes the data according to the given instructions. The computer is
also known as an electronic data processing machine.

This method of processing data is very fast and accurate. For example, in a computerized
education environment, students' results are prepared by computer; in banks, customers' accounts
are maintained (or processed) through computers; and so on.

a. Batch Processing

Batch processing is a method where the information to be organized is sorted into groups to allow
for efficient and sequential processing. Transactions are collected over a period of time and then
processed together in a single run, as in payroll or billing systems.
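A minimal Python sketch of the grouped, sequential nature of batch processing, assuming hypothetical payroll records:

```python
# Transactions accumulated over a period, to be processed together in one run
# (e.g. an end-of-day payroll job). The records are illustrative assumptions.
batch = [
    {"employee": "E03", "hours": 42, "rate": 18},
    {"employee": "E01", "hours": 40, "rate": 15},
    {"employee": "E02", "hours": 35, "rate": 20},
]

def run_batch(jobs):
    # Sort first for efficient sequential processing, then handle every
    # record in a single pass rather than one at a time as each arrives.
    payslips = []
    for job in sorted(jobs, key=lambda j: j["employee"]):
        payslips.append((job["employee"], job["hours"] * job["rate"]))
    return payslips

print(run_batch(batch))
```

The trade-off is latency: no individual record is answered until the whole batch runs, which is why interactive systems use the online or real-time methods below instead.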

b. Online Processing

This is a method that utilizes Internet connections and equipment directly attached to a computer.
It allows data stored in one place to be used in an altogether different place. Cloud computing can
be considered an example of this type of processing. It is used mainly for information recording
and research.

c. Real-Time Processing

This technique has the ability to respond almost immediately to various signals in order to acquire
and process information. It involves high maintenance and upfront costs, attributable to the very
advanced technology and computing power required. The time saved is greatest in this case, as the
output is seen in real time, for example in banking transactions.
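The per-event behaviour can be sketched as follows; the account balance and transaction amounts are illustrative assumptions:

```python
# Real-time processing: each event is handled the moment it arrives, so the
# stored information is always current, unlike a batch run done later.
balance = 1000

def on_transaction(amount: int) -> int:
    # The handler fires immediately per event; the caller sees the updated
    # balance at once rather than after a scheduled processing run.
    global balance
    balance += amount
    return balance

assert on_transaction(-200) == 800   # withdrawal reflected instantly
assert on_transaction(+50) == 850    # deposit reflected instantly
```

A real system would attach such a handler to an input signal (a card swipe, a sensor reading) instead of calling it directly.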

Examples of real-time processing


 Airline reservation systems
 Theatre (cinema) booking
 Hotel reservations
 Banking systems
 Police enquiry systems
 Chemical processing plants
 Hospitals to monitor the progress of a patient
 Missile control systems

Advantages

 Provides up-to-date information


 The information is readily available for instant decision-making
 Provides better services to users/customers.
 Fast and reliable
 Reduces circulation of hardcopies.

Disadvantages

 Require a complex OS and are very expensive.


 Not easy to develop
 Real-time systems usually use two or more processors to share the workload, which is
expensive.
 Require large communication equipment.

d. Distributed Processing

This method is commonly utilized by remote workstations connected to one big central workstation
or server; ATMs are good examples of this data processing method. All the end machines run fixed
software located in a particular place and make use of exactly the same information and set of
instructions.
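A minimal sketch of that central-server arrangement, using an in-process class to stand in for the network link between the ATMs and the server (an assumption for illustration):

```python
class CentralServer:
    # Holds the single shared copy of the data that all end machines use.
    def __init__(self):
        self.accounts = {"A1": 500}   # illustrative account and balance

    def withdraw(self, account: str, amount: int) -> bool:
        if self.accounts.get(account, 0) >= amount:
            self.accounts[account] -= amount
            return True
        return False

class ATM:
    # Every end machine runs identical fixed instructions against the
    # same central server rather than keeping its own copy of the data.
    def __init__(self, server: CentralServer):
        self.server = server

    def dispense(self, account: str, amount: int) -> bool:
        return self.server.withdraw(account, amount)

server = CentralServer()
atm_1, atm_2 = ATM(server), ATM(server)
assert atm_1.dispense("A1", 300)        # succeeds: balance 500 -> 200
assert not atm_2.dispense("A1", 300)    # fails: only 200 left centrally
```

Because both ATMs consult the same server, the second withdrawal is correctly refused, which is exactly the consistency a distributed arrangement buys.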