
INTRODUCTION

The earliest form of recording LAB experiment data involved manually
taking measurements, recording them in a written log, and plotting them on graph
paper. Conducting a LAB experiment in this way requires a team of technically
qualified engineers. The reliability of the readings is limited by human error, and
both the interval between readings and the total number of readings are restricted
by human capability. A considerable amount of time also elapses between the start
of the experiment and the declaration of the result. Conducting an experiment in
this manner therefore has the following disadvantages: (a) a team of technically
qualified engineers is required; (b) the minimum time between two readings is
restricted by human capability, e.g. one reading per second is beyond human limits;
(c) the total number of readings can be impractical, e.g. an experiment that requires
a reading every 30 seconds and must run nonstop for one week, day and night.
Such an experiment is a very costly affair because of the manpower and time it
demands. Our project, THE ONE MINUTE LAB, offers a suggested working model
that addresses all of these problems: it is economical and far less time consuming,
unless the experiment itself demands a long duration. In the late 19th century, the
manual process was automated with strip-chart recorders, which mechanically
recorded measurements onto paper. Strip-chart recorders were a great leap over
manual recording but still had drawbacks. Today, the more widely used method of
recording data is the data logger (or paperless chart recorder). Data loggers are
stand-alone instruments that measure signals, convert them to digital data, and store
the data internally. Many data loggers include built-in displays and the ability to
transfer the data to a PC for offline analysis, permanent storage, or report generation.
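To make the contrast with manual logging concrete, the following minimal Python
sketch shows the kind of loop a simple data logger runs: sample at a fixed interval,
timestamp each reading, and store it for later transfer to a PC. The read_sensor()
function and the file name are hypothetical placeholders, not part of the actual
ONE MINUTE LAB implementation.

    import csv
    import time

    SAMPLE_INTERVAL_S = 1.0   # one reading per second, beyond manual logging
    DURATION_S = 60.0         # total logging time for this run

    def read_sensor():
        """Placeholder for the real hardware read (e.g. via an ADC driver)."""
        return 0.0

    def log_readings(path="readings.csv"):
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "value"])
            start = time.monotonic()
            next_sample = start
            while next_sample - start < DURATION_S:
                writer.writerow([time.time(), read_sensor()])
                # schedule each sample relative to the start to avoid drift
                next_sample += SAMPLE_INTERVAL_S
                time.sleep(max(0.0, next_sample - time.monotonic()))

    if __name__ == "__main__":
        log_readings()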
INTERFACE CIRCUIT DIAGRAM

[Figure: interface circuit diagram showing the device under test]
The semiconductor device parameter acquisition system, in conjunction with a
software technical computing environment, gives you the ability to measure
and analyze physical phenomena. The purpose of any h-parameter
acquisition system is to provide you with the tools and resources necessary to
do so.
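For reference, the common-emitter h-parameters are ratios of small-signal
increments: h_ie = Δv_BE/Δi_B and h_fe = Δi_C/Δi_B with the output
short-circuited, and h_re = Δv_BE/Δv_CE and h_oe = Δi_C/Δv_CE with the input
open-circuited. The Python sketch below shows the arithmetic on made-up
measurement values; it is an illustration only, not the project's measurement code.

    # Common-emitter h-parameters from small-signal increments.
    # All measurement values below are illustrative, not real data.

    # With the output short-circuited (delta V_CE = 0):
    delta_vbe = 2.0e-3   # V, change in base-emitter voltage
    delta_ib  = 2.0e-6   # A, change in base current
    delta_ic  = 4.0e-4   # A, change in collector current

    h_ie = delta_vbe / delta_ib   # input impedance (ohms)
    h_fe = delta_ic / delta_ib    # forward current gain (dimensionless)

    # With the input open-circuited (delta I_B = 0):
    delta_vce   = 1.0      # V, change in collector-emitter voltage
    delta_vbe_o = 2.5e-4   # V, resulting change in base-emitter voltage
    delta_ic_o  = 2.5e-5   # A, resulting change in collector current

    h_re = delta_vbe_o / delta_vce   # reverse voltage ratio (dimensionless)
    h_oe = delta_ic_o / delta_vce    # output admittance (siemens)

    print(f"h_ie = {h_ie:.0f} ohm, h_fe = {h_fe:.0f}, "
          f"h_re = {h_re:.1e}, h_oe = {h_oe:.1e} S")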

The field of data acquisition and control (DA&C) encompasses a very wide
range of activities. At its simplest level, it involves reading electrical signals
into a computer from some form of sensor. These signals may represent the
state of a physical process, such as the position and orientation of machine
tools, the temperature of a furnace or the size and shape of a manufactured
component. The acquired data may have to be stored, printed or displayed.
Often the data have to be analyzed or processed in some way in order to
generate further signals for controlling external equipment or for interfacing
to other computers. This may involve manipulating only static readings, but it
is frequently necessary to deal with time-varying signals as well.
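At its simplest, the read-analyze-control cycle described above can be pictured as
a loop. The Python sketch below is a generic illustration, with hypothetical
read_sensor() and set_actuator() placeholders standing in for real device I/O; the
setpoint and gain values are made up.

    import time

    SETPOINT = 100.0   # desired process value (e.g. furnace temperature, deg C)
    GAIN = 0.5         # proportional gain, illustrative value only

    def read_sensor():
        """Placeholder for reading the process value from real hardware."""
        return 95.0

    def set_actuator(output):
        """Placeholder for driving real control hardware (heater, valve, ...)."""
        pass

    def control_loop(cycles=10, period_s=1.0):
        for _ in range(cycles):
            value = read_sensor()        # acquire
            error = SETPOINT - value     # analyze/process
            set_actuator(GAIN * error)   # generate a control signal
            time.sleep(period_s)         # wait for the next cycle

    control_loop()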

Some systems may require data to be gathered slowly, over time spans of many days
or weeks. Others will necessitate short bursts of very high speed data
acquisition, perhaps at rates of up to several thousand readings per second,
as the sketch below illustrates.
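A short burst at several thousand readings per second leaves no time to write each
sample to disk individually, so samples are normally collected in a pre-allocated
memory buffer and processed after the burst ends. The following Python sketch
illustrates the idea; read_sensor() is again a hypothetical placeholder for a fast
hardware read.

    import array
    import time

    def read_sensor():
        """Placeholder for a fast hardware read (e.g. from an ADC register)."""
        return 0.0

    def burst_acquire(n_samples=5000, rate_hz=5000):
        """Collect a short high-speed burst into a pre-allocated buffer."""
        period = 1.0 / rate_hz
        buf = array.array("d", [0.0] * n_samples)  # pre-allocated, no disk I/O
        next_t = time.monotonic()
        for i in range(n_samples):
            buf[i] = read_sensor()
            next_t += period
            # busy-wait for the next sample slot; sleep() is too coarse here
            while time.monotonic() < next_t:
                pass
        return buf

    samples = burst_acquire()  # store or analyze the buffer after the burst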
The dynamic nature of many DA&C applications is a fundamental
consideration to which we will repeatedly return in this project. The IBM PC
is, unfortunately, not an ideal platform for DA&C: there are a number of
problems associated with using it in situations which demand guaranteed
response times. Nevertheless, it is used widely for laboratory automation,
industrial monitoring and control, and a variety of other time-critical
applications. So why is it so popular? The most obvious reason is, of course,
that the proliferation of office desktop systems, running word processing,
accounting, DTP, graphics, CAD and many other types of software, has led
IBM and numerous independent PC-clone manufacturers to develop ever
more powerful and inexpensive computer systems.
The technology is now well developed and stable in most respects. For the same
reason, an enormous software base now exists for this platform. This includes
all manner of scientific, statistical analysis, mathematical and engineering
packages that may be used to analyze acquired data. A wide range of software
development tools, libraries, data-acquisition hardware and technical
documentation is also available. Perhaps the most important reason for using
the PC for data acquisition and control is that there is now a large and
expanding pool of programmers, engineers and scientists who are familiar
with the PC. Indeed, it is quite likely that many of these personnel will have
learnt how to program on an IBM PC or PC clone.

This project sets out to present some of the basic concepts of DA&C programming
from a practical perspective and to illustrate how elements of the PC
architecture can be employed in DA&C systems. Although it contains quite
detailed descriptions of certain elements of the PC’s hardware and interface
adaptors, the text concentrates on the software techniques that are required to
make effective use of the PC for DA&C. The first two sections begin by
discussing the structure of DA&C systems and attempt to assess how well the
PC and its operating systems meet the stringent requirements of data
acquisition and real-time operation.