
Computer Architecture and Interfacing to Mechatronic Systems

by Dr. Dario J. Toncich PhD, M Eng, B.E.E. (Hons), MIE Aust, CP Eng

Published by...

Chrystobel Engineering
7 Wrixon Avenue Brighton Australia 3187 Orders/Enquiries Facsimile Number: +61-3-9578 5025

ISBN 0 646 16089 3 Copyright D.J. Toncich, 1994

Layout and artwork by Chrystobel Engineering of Brighton Australia. Printed and Bound in Australia (1994) (1995) (1996) (1997)

First Printed in Australia, 1994. Additional copies of this text may be obtained by mailing or faxing purchase enquiries directly to Chrystobel Engineering. Subject conveners and lecturers prescribing this text as a course reference may be eligible for complimentary copies.

Author
Dr. Dario Toncich was born in Melbourne Australia in 1960. He graduated (with honours) in Electrical Engineering from the University of Melbourne in 1983. Since that time he has held both industry and academic positions and has experience in the fields of data communications, computer architecture, FMS control and simulation. During his time in industry, Dr. Toncich spent a number of years developing computer control, simulation and communications equipment for FMS at an Australian-based Advanced Manufacturing Technology (AMT) Company. In 1988, he was appointed as Manager of Research Activity for the CIM Centre at the Swinburne Institute in Hawthorn, Victoria, Australia. As the manager of the Centre Dr. Toncich led research into AMT fields including FMS control and simulation and industrial data communications. He is also a professional consulting engineer to industry in the fields of FMS and data communications and has a Master of Engineering (by Research) degree from the Swinburne Institute of Technology and a PhD from the Swinburne University of Technology. In 1995, Dr. Toncich was appointed Research Leader in Control and Automation at the Industrial Research Institute Swinburne (IRIS). Dr. Toncich is also a subject convener and lecturer in a range of computer and CIM related subjects at postgraduate and undergraduate level at the Swinburne University. His research work has been published in international refereed journals and conference proceedings and he has also authored a number of guided learning programs in computer architecture and data communications. This is his second text book in the field of industrial control and communications systems.

No portion of this text book may be reproduced in any form whatsoever without the explicit, written consent of the author. Notification of errors or omissions in this text book, or suggestions for its improvement, can be mailed or faxed directly to Chrystobel Engineering.


Table of Contents
Chapter 1  An Overview of the Computer Interfacing Process ... 1
    1.1 General Issues - Analog and Digital Computation ... 2
    1.2 Interfacing Computers Via Hardware ... 6
    1.3 Interfacing Computers Via Software ... 9

Chapter 2  Computers and Control - Mechatronic Systems ... 13
    2.1 A Review of Developments in Digital Computer Control ... 14
    2.2 Programmable Logic Controllers ... 20
    2.3 Intelligent Indexers and Servo-Drive Systems ... 24
    2.4 CNC and Robotic Controllers ... 30
    2.5 Development Systems for Mechatronic Control ... 36
    2.6 Manufacturing Systems ... 41

Chapter 3  Fundamental Electrical and Electronic Devices and Circuits ... 47
    3.1 Introduction to Electronic Devices ... 48
    3.2 Diodes, Regulators and Rectifiers ... 50
        3.2.1 Fundamentals and Semiconductor Architecture ... 50
        3.2.2 Zener Diodes for Voltage Regulation ... 55
        3.2.3 Diodes For Rectification and Power Supplies ... 57
    3.3 Basic Transistor Theory and Models ... 69
        3.3.1 Introduction ... 69
        3.3.2 Bipolar Junction Transistors (BJTs) ... 70
        3.3.3 Field Effect Transistors (FETs) ... 86
    3.4 Analog and Digital Circuit External Characteristics ... 98
    3.5 Operational Amplifiers ... 103
    3.6 Linearity of Circuits - Accuracy and Frequency Response ... 115
    3.7 Thyristors ... 120
        3.7.1 Introduction ... 120
        3.7.2 Silicon Controlled Rectifiers ... 120
        3.7.3 Diacs and Triacs ... 127
        3.7.4 Unijunction Transistors (UJTs) ... 128

Chapter 4  Fundamentals of Digital Circuits ... 131
    4.1 A Building Block Approach to Computer Architecture ... 132
    4.2 Number Systems, Conversion and Arithmetic ... 138
    4.3 Representation of Alpha-numerics ... 146
    4.4 Boolean Algebra ... 150
    4.5 Digital Logic Circuits ... 166
        4.5.1 Introduction ... 166
        4.5.2 Transistor to Transistor Logic (TTL) ... 168
        4.5.3 Schottky TTL ... 173
        4.5.4 Emitter Coupled Logic (ECL) ... 174
        4.5.5 CMOS Logic ... 175
    4.6 Digital Circuits for Arithmetic Functions ... 177
    4.7 Flip-Flops and Registers ... 184
    4.8 Counters - Electronic Machines ... 192

Chapter 5  Memory Systems and Programmable Logic ... 197
    5.1 Introduction ... 198
    5.2 Overview of Memory Operation ... 202
    5.3 Volatile Read/Write Memory ... 205
    5.4 Non-Volatile Read Only Memory ... 208
    5.5 Non Volatile Read/Write Memory ... 210
    5.6 Programmable Logic Devices ... 211

Chapter 6  State Machines and Microprocessor Systems ... 217
    6.1 State Machines ... 218
    6.2 Microprocessor System Fundamentals ... 222
    6.3 Microprocessor I/O - Data and Address Bus Structures ... 230
    6.4 Memory Mapping ... 234
    6.5 Microprocessor Program Execution ... 239
    6.6 Programming Levels for Processors ... 245
    6.7 Interrupts and Interrupt Programming ... 250
    6.8 Paging ... 255
    6.9 Multi-Tasking Multi-User Systems ... 257
    6.10 Combining the Elements into a Cohesive System ... 258

Chapter 7  Interfacing Computers to Mechatronic Systems ... 259
    7.1 Introduction ... 260
    7.2 The Interfacing Process ... 262
    7.3 A/D and D/A Conversion ... 268
    7.4 Signal Conditioning, Protection and Isolation ... 279
        7.4.1 Introduction ... 279
        7.4.2 Signal Conditioning Circuits ... 280
        7.4.3 Protection Circuits ... 290
        7.4.4 Isolation Circuits ... 294
    7.5 Energy Conversion - Transducers ... 297
    7.6 Attenuation Problems ... 304
    7.7 Data Communications ... 310
    7.8 Combining the Interfacing Stages ... 313
    7.9 Commercial Realities of Interface Design ... 315

Chapter 8  Software Development Issues ... 319
    8.1 Introduction ... 320
    8.2 Operating System Issues ... 323
    8.3 User Interface Issues ... 329
    8.4 Programming Language Issues - OOP ... 332
    8.5 Software Engines ... 337
    8.6 Specialised Development Systems ... 339

Chapter 9  Electromagnetic Actuators & Machines - Basic Mechatronic Units ... 341
    9.1 Introduction to Electromagnetic Devices ... 342
    9.2 Fundamentals of d.c. Machines ... 355
        9.2.1 Introduction to d.c. Machines ... 355
        9.2.2 Physical Characteristics of d.c. Machines ... 356
        9.2.3 Separately Excited d.c. Machines ... 364
        9.2.4 Series d.c. Machines ... 366
        9.2.5 Shunt d.c. Machines ... 369
        9.2.6 Compound d.c. Machine Configurations ... 372
        9.2.7 Basics of Speed Control ... 376
    9.3 Fundamentals of a.c. Synchronous Machines ... 383
        9.3.1 Introduction to Synchronous Machines ... 383
        9.3.2 Physical and Magnetic Circuit Characteristics of Synchronous Machines ... 384
        9.3.3 Electrical Models and Performance Characteristics ... 400
        9.3.4 Basics of Synchronous Motor Speed Control ... 409
    9.4 Fundamentals of Induction (Asynchronous) Machines ... 410
        9.4.1 Introduction to Induction Machines ... 410
        9.4.2 Physical Characteristics of Induction Machines ... 411
        9.4.3 Electrical and Magnetic Models and Performance ... 413
        9.4.4 Basics of Induction Motor Speed Control ... 419
    9.5 Stepping (Stepper) Motors ... 421
    9.6 A Computer Controlled Servo Drive System ... 425

Appendix A - References ... A-1

Appendix B - Index ... B-1

List of Figures
Chapter 1  An Overview of the Computer Interfacing Process
    1.1 Simple Analog Computation - Addition ... 3
    1.2 (a) Cascaded Digital Circuits (b) Allowable Input and Output Voltage Ranges for TTL Circuits ... 4
    1.3 Interfacing Digital Computers to External Systems ... 6
    1.4 The Computer Interfacing Software Development Process ... 10

Chapter 2  Computers and Control - Mechatronic Systems
    2.1 Hierarchical Computer Control Architecture ... 15
    2.2 Distributed Computer Control Architecture ... 17
    2.3 Heterarchical Control Architecture ... 19
    2.4 Schematics of a Programmable Logic Controller ... 20
    2.5 Distributed Control Using PLCs ... 22
    2.6 Stepper Motor Arrangement ... 25
    2.7 Schematic of Traditional Servo Drive System Arrangement ... 26
    2.8 Closed-Velocity-Loop Servo Motor System ... 27
    2.9 (a) A Simple XY-Machine (b) Problems with Using Position Control for Interdependent Axes ... 28
    2.10 Schematics of a CNC or Robotic Axis Control System ... 31
    2.11 Integrated PLC Control of High Power Peripherals ... 34
    2.12 Inter-locking a CNC Machine to a Robot ... 35
    2.13 Basic Interface Board Arrangement for Data-Acquisition and Control in Analog Systems ... 36
    2.14 Using Advanced Interface Boards with "On-Board Processors" for Standard Closed-Loop Control Functions ... 37
    2.15 Miniature Controller Development System ... 39
    2.16 Realms of Metal-Cutting Manufacturing Systems ... 41
    2.17 Schematic of Dedicated, In-Line Transfer Machine ... 42
    2.18 Schematic of a Flexible Manufacturing System ... 45



Chapter 3 Fundamental Electrical and Electronic Devices and Circuits 3.1 3.2 3.3 3.4 3.5 3.6 3.7 The Semiconductor Diode Second-order Approximation of Diode Behaviour Third-order Approximation of Silicon Diode Behaviour Circuit Approximations for Silicon Diodes Approximate Models for Zener Diode in Reverse Breakdown Typical Zener Diode Characteristic (a) Schematic of Transformer Construction (b) A Manageable Circuit Model for a Transformer (c) Transformer Characteristics for Varying Load Current and Operating Frequency Simple Half-Wave Rectifier Power Supply with (a) Capacitance Smoothing for Low Load Currents (b) Inductance (Choke) Smoothing for High Load Currents Analysis of Half-Wave Rectifier With Capacitance or Inductance Filtering Single-Phase Bridge Rectifier with Capacitance Filtering (a) Circuit Diagram (b) Effective Circuit Diagram for Voltage vab > 0 (c) Effective Circuit Diagram for Voltage vab < 0 Three-Phase Bridge Rectifier Operation of Three-Phase Bridge Rectifier (a) Zener Diode Regulating Output Voltage Vu from a Rectifier (b) Equivalent Circuit Replacing Diode With its a.c. Resistance Rz Grown (npn) Transistor Schematic and Circuit Symbol Representation Transistor Diode Analogy for "npn" and "pnp" Devices Classical Closed-Loop Control System Common Emitter Circuit Typical Common-Emitter Output Characteristic of an npn Transistor Circuit Diagram for a realistic TTL Inverter Circuit Emitter-Feedback Circuit Commonly Used for Amplification Emitter-Feedback Circuit with Thvenin Equivalent of Base Biasing Emitter-Feedback Circuit with a.c. Input Signal Applied Small-Signal (Hybrid-) Model for BJT Complete Circuit Model for Emitter-Feedback Amplifier Incorporating Small-Signal Model for BJT Total Voltage Waveforms (Quiescent + Signals) in an Emitter Feedback Amplifier Circuit Schematic of "n-channel" JFET Structure 50 52 53 54 56 56

59

3.8

62 62

3.9 3.10

3.11 3.12 3.13 3.14 3.15 3.16 3.17 3.18 3.19 3.20 3.21 3.22 3.23 3.24 3.25 3.26

63 63 64 65 67 70 71 74 76 76 78 80 81 81 82 84 85 87



3.27 Schematic of MOSFET Construction (a) Enhancement Mode n-Channel MOSFET (b) Depletion Mode n-Channel MOSFET 3.28 Complementary n and p channel MOSFETs (CMOS) 3.29 Circuit Symbols for Different FET Devices 91 3.30 Typical Output Characteristic for an n-channel MOSFET Operating in Both Enhancement (VGS Forward Biased) and Depletion (VGS Reverse Biased) Mode 92 3.31 Digital Inverter Gate Based Upon MOSFETs 93 3.32 CMOS Based Inverter Circuit with PMOS (Q2) Load 93 3.33 Amplifier Feedback Arrangement for n-Channel JFET 95 3.34 Source-Feedback Amplifier Circuit of Figure 3.33 with a.c. Signals and Thvenin Equivalent of Input Stage 95 3.35 Small Signal (Hybrid-) Model for JFET or MOSFET 96 3.36 Modelling Analog and Digital Circuits in Terms of External Characteristics 98 3.37 (a) Cascaded Circuits (b) Fan-out From One Circuit 99 3.38 (a) Thvenin Equivalent of a Circuit (b) Norton Equivalent of a Circuit 101 3.39 Determining Thvenin and Norton Equivalent Circuits from an Existing Circuit 102 3.40 Simplified Circuit Diagram for Single-Chip 741 Operational Amplifier 3.41 (a) Circuit Symbol for Operational Amplifier (b) Idealised Model of Operational Amplifier Circuit 3.42 (a) Operational Amplifier Based "d.c. Voltage Follower" Circuit (b) Operational Amplifier Based "a.c. Voltage Follower" Circuit 3.43 (a) Transconductance Amplifier for Floating Load (b) Transconductance Amplifier for Grounded Load 108 3.44 Transresistance Amplifier - Current to Voltage Conversion 3.45 (a) Inverting Amplifier Arrangement (b) Inverting "Summing" Amplifier for Providing the Weighted Sum of "N" Input Voltages 110 3.46 (a) Non-Inverting Amplifier (b) Non-Inverting Summing Amplifier for Providing the Weighted Sum of "N" Input Voltages 111 3.47 Differential Amplifier Configuration 112 3.48 (a) Integrator Circuit (b) Differentiator Circuit 113

88 90

104 105 107

109



3.49 Imperfections of Realistic Energy Transducers and Circuits (a) Accuracy Problems; (b) Limited Frequency Response; (c) Non-Linearity 3.50 Silicon Controlled Rectifier (a) Schematic of Semiconductor Structure (b) Equivalent Two-Transistor Circuit (c) Circuit Symbol 3.51 SCR Voltage-Current Characteristic 3.52 SCRs for Inversion and d.c. Transmission Systems 3.53 SCR Crowbar Circuit 3.54 SCR Phase Controller for Resistive Loads and Motors 3.55 Using a Variable Resistance to Adjust the Average Output From an SCR Phase Controller 3.56 (a) Circuit Symbol for Diac (b) Circuit Symbol for Triac 3.57 Unijunction Transistor (a) Circuit Symbol (b) Equivalent Circuit

116 121

121 121 123 124 125 126 128

129

Chapter 4 Fundamentals of Digital Circuits 4.1 4.2 4.3 4.4 4.5 4.6 4.7 4.8 4.9 4.10 4.11 4.12 4.13 4.14 Simple Mechanical Controller Microprocessor Analogy to Mechanical Controller Basic Computer System Elements Building Blocks in Computer Architecture Typical Digital Waveforms on an 8-bit Data Bus The True Analog Nature of Digital Waveforms Boolean Logic Circuit to Test for Humidity Digital Control System for Incubator Boolean Logic Circuit for Heater in Incubator Control Karnaugh Map for Original Expression for "Z" in Design Problem 2 Karnaugh Map for "Z" in Design Problem 2 with Regions Circled Sample Karnaugh Map Problems Karnaugh Maps Marked Out to Maximise Regions of Ones Karnaugh Maps for Incubator Control (i) Heater; (ii) Fan 4.15 Simplified Incubator Control System 4.16 7400 Quad 2-Input NAND-Gate, Dual-In-Line Package Chip 133 134 136 137 139 140 152 153 155 159 160 161 162 164 165 166



4.17 4.18 4.19 4.20 4.21 4.22 4.23 4.24 4.25 4.26 4.27 4.28 4.29 4.30 4.31 4.32 4.33 4.34 4.35 4.36 TTL Inverter Gate Input and Output Voltage Levels Associated with TTL Effect of Fan-Out on TTL Gate Performance Typical Data-Book Timing Diagrams Illustrating Propagation Delays from Gate Input to Gate Output (a) Open-Collector TTL Inverter (b) Combining Open-Collector Gates to Create an "AND" Function CMOS Inverter Including "p" and "n" Channel Devices Block Diagram for Full-Adder Circuit Karnaugh Maps for Full-Adder Circuit A Circuit to Add two 8-Bit Numbers "A" and "B" (a) Schematic of R-S Flip-Flop Construction (b) Block Diagram Representation of R-S Flip-Flop (a) Clocked R-S Flip-Flop Schematic (b) Clocked R-S Flip-Flop Block Diagram Typical Timing Diagrams for a Clocked R-S Flip-Flop (a) Clocked D Flip-Flop Schematic (b) Clocked D Flip-Flop Block Representation Negative-Edge-Triggered JK Master-Slave Flip-Flop (a) Schematic; (b) Block Diagram Form Timing Diagrams Highlighting the Operation of Simple Negative Edge Triggered Flip-Flop and Master-Slave Flip-Flop Eight Bit Storage Register Eight Bit Shift Register and Timing Diagrams Asynchronous Up-Counter (a) Synchronous Hexadecimal Counter (b) Synchronous Binary Coded Decimal Counter Modified Incubator Control System 169 169 170 171 172 176 178 179 180 184 185 186 187 188 189 190 191 193 194 196

Chapter 5 Memory Systems and Programmable Logic 5.1 Tristate Logic Devices (a) Non-Inverting Circuit with Active High Enable (b) Inverting Circuit with Active High Enable (c) Non-Inverting Circuit with Active Low Enable (d) Inverting Circuit with Active Low Enable Inverter Gate with Both Inverting and Non-Inverting Outputs Schematic of Data Storage in Memory Chips Designing a One Word Static RAM Chip

5.2 5.3 5.4

200 201 203 205



5.5 Representations for AND gates (a) Traditional Representation (b) PLD Equivalent Representation with Fixed Connections () (c) PLD Equivalent Representation with Programmable Connections () Programmable Array Logic (PAL) Structure Programmable Logic Array (PLA) Structure PAL Solution to Incubator Design Problem 1 from Chapter 4

5.6 5.7 5.8

212 213 214 216

Chapter 6 State Machines and Microprocessor Systems 6.1 6.2 6.3 6.4 6.5 6.6 6.7 6.8 The State Machine Concept Typical ASM Chart Schematic of Building Blocks and Data Flow Within a Microprocessor Chip Hypothetical 8-Bit Microprocessor Showing Important Pin-Out Features 68 Pin Grid-Array, Typical of Packaging for Devices with Large Pin-Outs Information Flow in a Microprocessor Based System Schematic of a Hypothetical 16-Word Memory Chip Using Memory Mapping Techniques to Create a Common Shared Area of Memory (or Registers) to Transfer Data To and From Non-Memory Devices (eg: Graphics Controller Card) Timing Diagram for Addition Program Execution A Simple Hexadecimal Keypad to Automate Machine Code Programming Interfacing Hardware and Software via an Operating System Polling Techniques (a) Waiting for Inputs (b) Executing a Task While Waiting for Inputs Interrupt Programming for Serial Communications (a) Schematic (b) Memory Map Paging From Disk (Virtual Memory) to RAM (Physical Memory) Combining Basic Elements into One Computer System 218 219 224 230 231 232 234

6.9 6.10 6.11 6.12

237 242 246 249

251

6.13

6.14 6.15

253 255 258



Chapter 7 Interfacing Computers to Mechatronic Systems 7.1 7.2 7.3 7.4 7.5 7.6 7.7 7.8 7.9 7.10 7.11 7.12 7.13 7.14 7.15 The Computer Interfacing Process - Closed Loop Control Basic Steps in the Computer Interfacing Process Interfacing External Asynchronous Signals to the Synchronous Internal Environment via a Programmable Parallel Interface Functionality of A/D and D/A Devices Interacting with a PPI Device (a) Schematic Representation of A/D Converter (b) Schematic Representation of D/A Converter Schematic of D/A Converter Operation (3-Bit) Comparator Circuit Formed from an Operational Amplifier Flash or Parallel A/D Converter Schematic of Dual-Slope, Integrating A/D Converter Operation of Dual-Slope Integrating A/D Converter Schematic of 8-Bit Successive Approximation A/D Converter Results after A/D then D/A Conversion after Sampling Waveform (a) at a Range of Frequencies - (b) fs (c) 2fs (d) 4fs Schematic of Sample and Hold Circuit Using Sample and Hold Circuits with a Multiplexer to Share an A/D Converter Between Eight Analog Input Lines (a) The Schmitt Trigger Circuit Symbol (a) "Squaring Up" Slowing Changing Input Signals (c) Producing Digital Signals from an Offset Sinusoidal Input Waveform Switch Debouncing Circuits Based on R-S Flip-Flops The Concept of Amplification The Transformer in Concept PWM Output for a Range of Different Duty-Cycles Switch Mode Power Supplies Based Upon Amplification of PWM Signals Filtering Out Unwanted Signals with Low-Pass, Band-Pass and Notch Filters Relay Configurations Protection Using Zener Diodes and Relays Driving Simple High Current Circuits from Digital Outputs by Using Relays The Problem of Measuring Small Voltage Differences Between Two Voltages Which are Both High with Respect to Earth Isolation Using Transformers 262 263 265 266 268 270 271 271 272 273 274 275 277 278

281 282 283 284 286 287 289 291 292 293 294 295

7.16 7.17 7.18 7.19 7.20 7.21 7.22 7.23 7.24 7.25 7.26



7.27 Using a Current Transformer to Isolate and Monitor Current 7.28 Opto-Couplers (Isolators) (a) Simple Opto-Isolator (b) Darlington-Pair High Gain Opto-Isolator 7.29 Schematic of Potentiometer 7.30 Incremental Position Encoder 7.31 Idealised Point to Point Link 7.32 Lumped Parameter Approximation of a Conductor 7.33 Exaggerated Output Voltage at the end of l Segment 7.34 Degenerative Effects of Long Conductors 7.35 A Common Interfacing Problem where the Distance "L" Between the Transducer and Control System is Large 7.36 Interfacing Via Data Communications Links (Point to Point) 7.37 Interfacing Devices Via a Bus Network 7.38 The Basic Closed Loop for the Interfacing Process 295

296 300 301 304 305 307 308 308 310 311 313

Chapter 8 Software Development Issues 8.1 Basic Hardware and Software Elements Within a Computer System 8.2 Interfacing Hardware and Software via an Operating System 8.3 The Basic Closed-Loop Control Elements 8.4 Using Distributed Control Based on Intelligent Interface Cards 8.5 Typical Screen from Microsoft Corporation Word for Windows 8.6 - Typical Screen from Borland International Turbo Pascal for Windows

320 321 322 328 329 330

Chapter 9 Electromagnetic Actuators & Machines - Basic Mechatronic Units 9.1 9.2 9.3 9.4 9.5 9.6 9.7 9.8 Energy Conversion in Electrical Machines Magnetic Field Induced Around a Current Carrying Conductor Typical Shape of a Magnetisation Curve Energised Toroidal Core with an N Turn Winding Magnetic Circuit with Air Gap Electromagnetic Induction Typical Hysteresis Characteristic Generation of Eddy-Currents in Ferromagnetic Cores 342 343 345 345 347 349 349 351



9.9 Torque Produced in a Current Loop 352 9.10 (a) Schematic Cross-section of a Stator on a d.c. Machine (b) Schematic Cross-section of a Rotor on a d.c. Machine 356 9.11 Flux Density Distribution as a Function of Angular Position in Air Gap 357 9.12 Horizontal and Vertical Elevations of a Rotor (Schematic) 358 9.13 Rectification Effect of Commutator 359 9.14 Electrical Model for d.c. Machine 360 9.15 Magnetic Fields Caused By Field and Armature Windings 361 9.16 Affect of Armature Field on Flux Density at North Pole of a 2-Pole Machine 362 9.17 Separately Excited d.c. Generator Driven by a Prime Mover 365 9.18 Separately Excited d.c. Motor with Load of Torque "T" 365 9.19 Series d.c. Generator Driven by a Prime Mover 366 9.20 Series d.c. Motor Connected to a Mechanical Load 367 9.21 Shunt d.c. Generator (Self-Excited) 369 9.22 Shunt d.c. Motor Connected to Mechanical Load of Torque "T" 370 9.23 (a) Short and (b) Long Shunt Compound Generator Configuration 372 9.24 Long Shunt d.c. Motor Connected to Load of Torque "T" 374 9.25 Variation of Separately Excited Motor Torque-Speed Characteristic through - (a) Field Current Control; (b) Armature Voltage Control 378 9.26 Variation of Series Motor Torque-Speed Characteristic through (a) Terminal Voltage Variation; (b) Circuit Resistance Variation 379 9.27 Variation of Series Motor Torque-Speed Characteristic through (a) Terminal Voltage Variation; (b) Circuit Resistance Variation 380 9.28 Comparative Torque-Speed Characteristics for Different d.c. Motor Configurations 381 9.29 Schematic of Single-Phase Salient-Pole Synchronous Machine 384 9.30 (a) Flux Distribution as a Function of Rotor Position (b) Shaping Rotor Ends to Approximate a Sinusoidal Distribution (c) Induced Voltage in Stator as a Result of Rotor Movement 385 9.31 Schematic of Single-Phase Cylindrical (Round) Rotor Synchronous Machine 387 9.32 Three-Phase Salient Pole Synchronous Machine 388 9.33 (a) Stator Flux Distribution in Three-Phase Synchronous Machine (b) Induced Stator Voltages as a Result of Rotor Movement 388 9.34 Fields Generated through Application of Three-Phase Currents to Three Symmetrically Displaced Coils 389 9.35 Resultant Magnetic Field Vector for Varying Times (t) 391 9.36 Contribution of Total Field Vector to a Point "A" at Orientation "" 392 9.37 Fields Generated in a 4-Pole (2 Pole-Pair), 3 Phase Stator Winding 393



9.38 Synchronous Machine with 3-Phase, 2-Pole-Pair Stator and Four-Pole Rotor 9.39 Torque Development in Synchronous Machines 9.40 Torque Production in a Three-Phase, Single-Pole-Pair, Synchronous Machine 9.41 Torque-Angle Characteristic for Three-Phase, Synchronous Machine 9.42 Typical Open and Short-Circuit Characteristics for a Synchronous Machine 9.43 Circuit Model for One-Phase of a Three-Phase, Cylindrical-Rotor Synchronous Machine 9.44 Phasor Diagram for Synchronous Machine in Generator Mode 9.45 Power Angle Characteristic for Cylindrical Rotor Synchronous Machine 9.46 Phasor Diagrams for Salient Pole Machines 9.47 Possible Three-Phase Synchronous Motor Connections 9.48 Different Configurations of Induction Machine Rotors 9.49 Electrical Circuit Representation of One Phase of a Three-Phase Induction Motor at Stand-Still 9.50 Electrical Circuit Representation of One Phase of a Three-Phase Induction Motor at Speed "N" and Slip "s" 9.51 Equivalent Representations of One Phase of a Three-Phase Induction Machine Rotating with Slip "s" 9.52 Torque-Slip (or Speed) Characteristic for a Three-Phase Induction Machine 9.53 Schematic of Stepper Motor with 4-Phase Stator and 2-Pole Rotor 9.54 Driving Currents in a 4-Phase Stepper Motor 9.55 A Simple Proportional Feedback Servo Control System 9.56 Intelligent Servo Drives (a) Retaining Analog Amplification (b) Complete Digital Servo Drive 9.57 "H-Bridge" Amplifier Configuration, Driven by PWM Output Coupled With Combinational (Boolean) Logic 9.58 A Distributed Control System Based Upon a Number of Intelligent Servo Drive Systems

394 395 398 399 400 401 402 404 406 407 411 413 414 415 417 421 423 425

426 427 428


List of Tables
Chapter 1 An Overview of the Computer Interfacing Process No tables in this chapter

Chapter 2 Computers and Control - Mechatronic Systems No tables in this chapter

Chapter 3 Fundamental Electrical and Electronic Devices and Circuits No tables in this chapter

Chapter 4  Fundamentals of Digital Circuits
    4.1 Representation of Decimal Numbers from 0 to 20 ... 144
    4.2 ASCII and EBCDIC Character Representation ... 147
    4.3 Special Functions of the first 32 ASCII Characters ... 149
    4.4 Common Boolean Logic Functions and Representation ... 151
    4.5 Truth Table for Digital Incubator Controller ... 154
    4.6 Fundamental Principles of Boolean Algebra ... 156
    4.7 Truth Table for Original Expression in Design Problem 2 ... 158
    4.8 Truth Table for Simple Addition Circuit ... 177
    4.9 Truth Table for Full-Adder ... 178
    4.10 Truth Table for R-S Flip-Flop ... 185
    4.11 Truth Table for JK Flip-Flop ... 187
    4.12 Flip-Flop States for Asynchronous "Up-Counter" ... 192



Chapter 5  Memory Systems and Programmable Logic
    5.1 Truth-Table for Tristate Inverter with Active High Enable ... 199

Chapter 6  State Machines and Microprocessor Systems
    6.1 Truth Table for States Derived from ASM Chart for Figure 6.2 ... 221
    6.2 Part of an Instruction Set for a Hypothetical Microprocessor ... 226
    6.3 Memory Mapping of Identical Memory Chips in Figure 6.6 to Unique Locations with respect to Microprocessor ... 236
    6.4 Contents of System Memory (of Figure 6.6) after Program and Boot Address Entry ... 240
    6.5 Key Sequence Required to Enter Addition Program on a Simple Hexadecimal Keypad ... 245

Chapter 7  Interfacing Computers to Mechatronic Systems
    7.1 Interfacing Options for Figure 7.38 ... 314
    7.2 Sample Interfacing Issues and Possible Courses of Action ... 317

Chapter 8 Software Development Issues No tables in this chapter

Chapter 9  Electromagnetic Actuators & Machines - Basic Mechatronic Units
    9.1 "Dual" Electrical and Magnetic Circuit Properties ... 347
    9.2 Magnitude of Field Intensity Vectors at Differing Times ... 390


Introduction
Very few systems in modern engineering are either purely mechanical or purely electronic in nature. Most engineering devices, machines and systems are a combination of mechanical elements, computer controls and the electronic interfacing circuitry that binds these elements together. In the mid-1980s, these hybrid systems were recognised as being a rapidly growing part of the engineering world and were given the rather commercial, but nonetheless appropriate, title of "mechatronic" systems. It is somewhat ironic that the two fields of engineering (electronic and mechanical) that were derived from the classical mechanical engineering streams of the early 19th century have now had to be brought back together as a result of the growing need for mechatronic systems.

A mechatronic system can have many forms. At a domestic level, it could be a compact-disk player or a video recorder. At an industrial level, a mechatronic system could be a robot, a computer controlled production machine or an entire production line. The extraordinary increase in low-cost processing power that has arisen as a result of the microprocessor now means that few mechanical devices in the modern world are born without some form of intelligence.

The problem that most engineers now face is that their undergraduate courses have simply not equipped them to undertake the task of designing mechatronic systems. Mechanical and manufacturing engineers seldom understand enough about electronic engineering and computing concepts to tackle the inter-disciplinary realities of system design. Electrical and electronics engineers similarly understand very little about the mechanical systems for which they design computer controls and interfaces. It is surprising, therefore, that in this day and age we still retain separate courses for electronic and mechanical engineering - and yet the trend towards greater specialisation is unfortunately continuing. The common university argument is that in order to be a good electronic or mechanical engineer, one needs to have a highly specialised undergraduate program. The reality is that in order to be a good engineer, one needs to have a good understanding of both mechanical and electronic engineering disciplines and a degree of specialisation that is born of practical realities, rather than esoteric theories.

The purpose of this book is to address the links that need to be bridged between modern electronic and mechanical equipment. In other words, we look at the issue of what a mechanical or manufacturing engineer needs to know in order to sensibly design mechatronic systems. We also introduce the basic concepts that electrical / electronics engineers will need to understand when interfacing computer systems to mechanical devices.


Computer Architecture and Interfacing to Mechatronic Systems is not an exhaustive applications guide for computer interfacing and systems design. The issues that have been selected for discussion in this book are wide-ranging and at first glance may appear to be somewhat unusual. However, a great deal of thought has gone into the structure of this book so that mechanical, manufacturing and even chemical engineers can come to terms with the electronics, computers and control systems that are used to drive mechatronic systems. Similarly, electronics engineers will find that the book summarises the basic concepts that they have learnt in their undergraduate engineering courses and places this knowledge into perspective with the mechanical devices to which they must tailor their designs.

In the final analysis, only a small percentage of engineers will undertake a complete interfacing exercise from first principles. Many would argue that there is little need for such developments in light of the number of commercially available building blocks that can be used to interface computers to the outside world. However, even if one accepts that interfacing is gradually (and fortunately) becoming a systems engineering task, one must also accept that this task cannot be undertaken without a sound understanding of the basic principles and limitations of the building blocks involved. It is to be hoped that this book will give you an understanding of those principles and limitations.


How to Read and Use This Book


"Computer Architecture and Interfacing to Mechatronic Systems" is one of a series of books that has been designed to enable electrical, mechanical and manufacturing engineers to tackle mechatronic systems design and to enable electronic engineers to understand the realities of the industrial devices around which computer controls and interfaces must be designed. The other text book, currently released in the same series is: Toncich, D.J., "Data Communications and Networking for Manufacturing Industries (Second Edition)", 1993, Chrystobel Engineering, ISBN 0 646 10522 1 These books have been designed with a view to bringing together all the major elements in modern industrial mechatronic equipment, including robotics, CNC and Flexible Manufacturing Systems (FMSs). If you are a mechanical or manufacturing engineer, then this is probably the first of the books that you should read. All the books in the series feature a similar format, in the sense that they have modular chapters which can be read in isolation from other chapters in the same book. All books in the series have overlapping sections, to enable modules to be covered in their entirety and to allow readers to migrate from one text to another with reinforcement of critical issues at appropriate points. The writing style of all books in the series is such that each chapter begins in a qualitative form and then introduces equations and technical detail only after the broad concepts have been described. For this reason, you should find "Computer Architecture and Interfacing to Mechatronic Systems" to be a very readable text. Each chapter begins with a summary and a diagram of the overall interfacing process that this text has set out to address. The parts of the interfacing diagram that are most relevant to the chapter are shown in bold text and heavy lines, while the remainder are shown in normal text and dotted lines. The diagram should assist you in understanding where each chapter fits in to the global issues of this text book. This particular book has been written in a chapter (module) sequence that is felt to be the most suitable for learning the concepts to which the title alludes. You may feel that the contents of a particular chapter are already familiar to you and hence you may choose to omit that chapter. However, be judicious in omitting chapters - the time spent reading an additional chapter will more than be recovered if it assists you in understanding the concepts of a following chapter by refreshing your memory on subject areas that you may have forgotten (or misunderstood at undergraduate level).


Chapter 1
An Overview of the Computer Interfacing Process

A Summary...
• A qualitative description and overview of the entire computer interfacing process (shown below).
• The analog and digital computing domains and the basis of digital circuit design.
• Steps in computer interfacing including hardware design phases and software issues.

[Chapter overview diagram: the computer interfacing process. The computer exchanges digital voltages with digital to analog and analog to digital converters; the resulting analog voltages pass through scaling or amplification, isolation and protection circuits; and energy conversion stages, fed from external voltage supplies, link these analog voltages to the analog energy forms of the external system.]


1.1 General Issues - Analog and Digital Computation


It is difficult to know whether the modern digital computer, and its Von Neumann architecture, were ever intended to interface to the real world. A computer is, by definition, a "reckoning machine" and certainly its original designers went to great lengths to make it into a device that could carry out repetitive computations and provide a limited amount of human reasoning. The analog computer, on the other hand, has always been closely associated with the control of physical systems, but has never been able to provide the reckoning ability that we associate with the digital computer. Analog computers no longer play any significant role in modern engineering, and so, this book is predominantly about digital computers and the problems that we face in connecting them to engineering systems.

It is interesting to note that when we use digital computers as islands of intelligence (that is, on their own), we find that their capabilities are only restricted by our own reasoning ability (that is, our ability to generate software) and by the speed at which computers can carry out the reasoning that we have instilled via our software. However, when we wish to interface computers to the real world (so that they can use their programmed reasoning to control a physical system) we find that our reasoning needs to be supplemented with an understanding of electronics, physics and engineering design principles before we can generate sensible solutions.

It is altogether likely that any computer programmer, with no knowledge of computer architecture or electronics, could create a working computer control system. Certainly, there are a sufficient number of commercially available, "black-box" solutions to assist in interfacing computers to external systems. The problem with using such solutions, without a proper understanding of the design principles behind them, is that sooner or later the seemingly minor problems that arise (system instability, unwanted spurious signals, irregular behaviour, etc.) become insoluble. This book has not been written with the intention of by-passing the ready-made solution, but rather, with the intention of helping you to understand the basis of these solutions and the problems that arise when using them.

The first issue that really needs to be addressed is that of the digital and analog computing domains. In all our "Newtonian" time-frames, the world is essentially analog in nature. Physical quantities do not change from one energy level to another in zero time - there is normally a continuous transition from one state to another, rather than a quantum variation. One may well ask why, if the world is essentially analog, we have chosen to discard the concept of analog computing and replace it with digital (quantum) computing. There are many reasons why analog computers were discontinued in the 1970s. These include:


• Accuracy
• Size
• Power consumption
• Cost.

Underlying all the problems in analog computing is the issue of representing quantities accurately. Consider an analog computer that is required to take in two numbers (between zero and ten) and add them together to achieve a given result. How is this achieved? We could represent each of the inputs and outputs with a voltage and use a circuit to electrically add the inputs to provide the required output. This is shown in Figure 1.1.

[Figure: Input A [0, 10] and Input B [0, 10] feed an Analog Computer (Addition Circuit) which produces an Output in the range [0, 20].]

Figure 1.1 - Simple Analog Computation - Addition

From Figure 1.1, we would assume that if Input A is equal to 5 volts and Input B is equal to 2 volts, then the output would be equal to 7 volts. What happens, however, if:

Input A = 5.001322447 volts
Input B = 1.999933821 volts ?

The answer to this question really depends upon how accurately we can design and fabricate our analog addition circuit. In engineering, we know that accuracy in setting or measuring energy levels normally equates to higher complexity and higher cost. This is certainly true in electronic circuits.

The alternative to generating accurate (expensive) circuits is digital computing, where we only allow electronic hardware to represent the binary numbers zero and one. Digital circuits allow us to carry out both arithmetic and a limited amount of human reasoning (tautology). The reasoning component is achieved by using zero to represent "false" or "off" and one to represent "true" or "on".

The basis of digital circuits, and hence digital computing, is that voltage accuracy must not be permitted to dominate circuit design. As long as we can achieve voltages within certain ranges, then we can represent the only two numbers that we need to handle in digital computing.
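To make this point concrete, the short C fragment below quantises the two awkward input voltages of the example above into the integer counts of an assumed 12-bit converter spanning an assumed 0 to 10 volt range, and then adds them. The fragment is purely illustrative (the resolution and voltage range are assumptions, not values from any particular device); its point is that once the quantities are held as integers, the addition itself is exact and perfectly repeatable.

    #include <stdio.h>

    /* Illustrative assumptions only - not values prescribed by the text. */
    #define FULL_SCALE_VOLTS 10.0     /* assumed 0..10 volt input range   */
    #define MAX_COUNTS       4095.0   /* assumed 12-bit converter         */

    /* Quantise an analog voltage into an integer count. */
    static long quantise(double volts)
    {
        return (long)(volts / FULL_SCALE_VOLTS * MAX_COUNTS + 0.5);
    }

    int main(void)
    {
        long a = quantise(5.001322447);   /* Input A from the example */
        long b = quantise(1.999933821);   /* Input B from the example */

        /* Integer addition is exact and repeatable; any error is confined
           to the (known) quantisation step of the converter itself.      */
        printf("A = %ld counts, B = %ld counts, A + B = %ld counts (about %.4f volts)\n",
               a, b, a + b, (a + b) * FULL_SCALE_VOLTS / MAX_COUNTS);
        return 0;
    }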

D.J. Toncich - Computer Architecture and Interfacing to Mechatronic Systems

The major design effort in the fabrication of digital semiconductor devices is therefore oriented towards achieving:

• High circuit speeds
• Small component sizes / High Component Density
• Low power dissipation
• Low circuit fabrication costs.

Figure 1.2 (a) shows two, cascaded digital circuits, each with two inputs and one output. Figure 1.2 (b) shows the allowable output and input voltage ranges for each of the circuits. An error margin exists between the output of one circuit and the input of the subsequent circuit, so that minor voltage variations that may occur do not affect the meaning of the information represented.

[Figure: (a) two cascaded digital circuits, each with inputs A and B and one output; (b) allowable voltage ranges - circuit outputs: True / 1 / ON from 2.4 v up to 5.0 v and False / 0 / OFF from 0 v to 0.4 v; circuit inputs: True / 1 / ON above 2.0 v and False / 0 / OFF below 0.8 v, with an error margin between the output and input ranges.]

Figure 1.2 - (a) Cascaded Digital Circuits (b) Allowable Input and Output Voltage Ranges for Transistor to Transistor Logic (TTL) Digital Circuits
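In software terms, the input ranges of Figure 1.2 (b) amount to nothing more than a pair of threshold tests. The following C fragment is purely illustrative: it classifies a measured input voltage against the 0.8 volt and 2.0 volt TTL input thresholds of the figure, returning logic 0, logic 1, or "undefined" for voltages that fall inside the error margin.

    #include <stdio.h>

    /* TTL input thresholds taken from Figure 1.2 (b). */
    #define TTL_V_IL 0.8   /* volts: at or below this, an input reads as logic 0 */
    #define TTL_V_IH 2.0   /* volts: at or above this, an input reads as logic 1 */

    /* Returns 0 (false/off), 1 (true/on) or -1 (inside the forbidden band). */
    static int ttl_input_level(double volts)
    {
        if (volts <= TTL_V_IL) return 0;
        if (volts >= TTL_V_IH) return 1;
        return -1;   /* between thresholds: the logic level is undefined */
    }

    int main(void)
    {
        const double samples[] = { 0.3, 1.5, 2.7, 4.9 };
        for (int i = 0; i < 4; i++)
            printf("%.1f volts -> level %d\n", samples[i], ttl_input_level(samples[i]));
        return 0;
    }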


The actual digital voltage ranges shown in Figure 1.2 (b) correspond to a family of digital circuits known as Transistor to Transistor Logic or TTL. This is the oldest of the widely used digital circuit technologies (dating back to the 1960s) and is important because most modern digital circuits are still designed with the same allowable input and output voltage ranges.

One of the major objectives of this book is to help you to come to terms with the design of modern digital computer systems, so that you can understand why it is so difficult to interface them to the outside world. The strategy chosen in order to achieve this is to begin with an overview of the role of computers in the industrial arena, so that you may gain some insight into the range of performance attributes required by modern computers. The next stage in the process is to introduce (or rekindle) the basics of modern electronic circuits, including transistors, thyristors and so on. This serves two purposes. Firstly, it will help you to understand how both analog and digital circuits are designed and the limitations of those circuits. Secondly, it will introduce you to the basic elements used in interfacing computers to the outside world.

As we progress through this text, we will examine how digital circuits are designed and how they are joined together to carry out binary arithmetic and limited "human" logic functions. Once we understand these concepts, we will move on to the development of digital storage devices (memory) and the digital state machine concept - the heart of modern microprocessor technology. We will then see how all the basic elements are brought together via an address and data bus structure to create what we now understand to be the modern digital computer.

This text book does not concern itself with the specifics of any one computer system or computer architecture. There are a number of reasons for this. Firstly, there are so many different microprocessor and computer system architectures that any reasonable treatment of their architecture would make this book unwieldy and out of date (by the time the information was compiled). Secondly, there are so many technical side-issues involved in the design of a commercial computer system that they tend to cloud the reader's understanding of the more important basic issues. Despite many manufacturers' claims to the contrary, this author believes that most modern computer systems are still only variations on a central theme. It is that theme which we shall explore in this book. You should then have enough knowledge to move on to the task of examining and understanding the specifics of a particular architecture in your own right.

The same principle applies in terms of the treatment of microprocessors and Digital Signal Processors (DSPs) - they are both lumped together under the generic term of microprocessor. The assumption is that the DSP is a specialised form of microprocessor that has been optimised for low level control functions. Although most DSPs have what is referred to as a "Harvard" architecture (rather than Von Neumann), in this book they are simply treated as variations upon the central Von Neumann theme.


1.2 Interfacing Computers Via Hardware


When we talk of interfacing computers to external systems, we are generally endeavouring to create closed control loops that enable a digital computer to generate (compute) some desired driving force, based upon the feedback obtained from the outside system. Sometimes, of course, we don't need a closed loop. This is particularly true if we only use a computer based device as a data logging or monitoring system, or if we use the computer to drive a system independently of the feedback (ie: as an open loop controller). However, regardless of the application, the same sorts of issues need to be addressed as a result of the incompatibilities between the "Newtonian-analog outside world" and the "digital computing world".

The issues are shown schematically in Figure 1.3, together with the book chapters (shown in braces { }) that address them. You will see this diagram in various forms throughout this text, particularly on the front page of each chapter, where it will be used to remind you of the topics covered within that chapter. The topics most closely related to that chapter will be shown in bold lines and bold text, while the remainder will be shown in dotted lines and plain text.

[Figure: the computer interfacing process, annotated with the chapters that address each stage - the computer { 2,3,4,5,6,7,8 }; digital to analog and analog to digital conversion { 7 }; scaling or amplification { 3,7 }; isolation { 3,7 }; protection circuits { 3,7 }; energy conversion by actuators { 2,9 } and by transducers { 3,7 }; the external voltage supplies; and the external system { 2,9 }.]

Figure 1.3 - Interfacing Digital Computers to External Systems

When observing Figure 1.3, the most important point to understand is that modern computers will only respond to incoming digital voltage signals and will only generate outgoing digital voltage signals. The levels of these digital voltages are normally in the order of those shown for TTL circuits (in Figure 1.2 (b)), although the actual levels depend upon the specific architecture of the digital circuits within the computer.


Modern computers do not respond to energy represented in any form other than voltage - generally, not even current. However, Figure 1.3 shows the generalised "outside world" as an external system which is propelled by continuous (analog) energy sources (mechanical, electrical, thermal, etc.) and feeds back analog signals that represent some form of energy (eg: temperature, voltage, current, pressure, stress, strain, position, velocity, acceleration, etc.) which is, in general, not a voltage.

The closed loop of Figure 1.3 clearly represents some problems in terms of interfacing the small digital voltage signal world of the computer to the analog energy world outside. Signals fed back from the external system to the computer need to be:

(i) Converted from their original form to voltage
(ii) The raw voltage levels may need to be isolated from the external system
(iii) The raw voltage levels need to be scaled to a level compatible with the needs of computer circuits
(iv) In the event that the scaled voltages may suffer from unwanted spikes (as a result of spikes in the energy levels received from the outside world), protection circuits need to be considered to prevent high signals from damaging the computer
(v) The scaled voltages need to be converted into a digital form.

The other major problem with connecting digital computers to the outside world is that the circuits within the computer are not only low in energy consumption, but also incapable of providing a large amount of energy. We know that signals within the computer are represented by digital voltages, somewhere in the order of those shown for TTL circuits in Figure 1.2 (b). However, we also need to appreciate that the major limitation of those circuits within a computer is that they are unable to provide large amounts of current. Typically, a digital circuit can provide less than a milli-Amp of current. This means that digital circuits can only provide a few milli-Watts of power. The actual amount depends upon the type of digital circuit but, in any event, is generally much less than the energy normally required to drive many external systems.

Consider the case where a computer is required to control a power station. The external system is a generator driven by a rotating turbine. The signals fed back to the computer may include items such as turbine speed, output voltages, etc. Based upon these signals, the computer needs to provide a controlled driving force that will cause the turbine to rotate at a given speed - clearly the computer is not going to supply the Mega-Watts of mechanical power that are required to rotate the turbine. Rather, the "driving force" produced by the computer is a very low level electrical signal that is used to drive actuators, valves, etc. that control the flow of steam or water to the generator turbine. The problem is then how to convert the computer's controlling signals into some form of energy that can ultimately drive the external system.
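Returning to the feedback path, the last of the steps listed above leaves the computer holding nothing more than integer counts from an analog to digital converter, and the software must undo the scaling of the earlier steps to recover the original physical quantity. The following C fragment is a sketch of that final conversion only; the converter resolution, converter voltage range and transducer scaling are illustrative assumptions rather than values prescribed here.

    #include <stdio.h>

    /* Illustrative assumptions only - not values from any particular device. */
    #define ADC_MAX_COUNTS    4095.0   /* assumed 12-bit A/D converter             */
    #define ADC_FULL_SCALE_V  5.0      /* assumed 0..5 volt converter input range  */
    #define DEGREES_PER_VOLT  20.0     /* assumed: 0..5 volts spans 0..100 degrees C */

    /* Step (v) in reverse: turn raw converter counts back into a voltage. */
    static double counts_to_volts(unsigned int counts)
    {
        return (counts / ADC_MAX_COUNTS) * ADC_FULL_SCALE_V;
    }

    /* Undo the scaling of step (iii) to recover the original physical quantity. */
    static double counts_to_degrees_c(unsigned int counts)
    {
        return counts_to_volts(counts) * DEGREES_PER_VOLT;
    }

    int main(void)
    {
        unsigned int raw = 2048;   /* a raw reading delivered by the converter */
        printf("%u counts -> %.2f degrees C\n", raw, counts_to_degrees_c(raw));
        return 0;
    }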


Assuming, then, that the computer circuits are unable to provide the required driving energy for an external system, there are a number of interfacing steps that may need to be carried out in order to achieve the required outcome. These include:

(i) Conversion of the computer's internal digital voltages to analog form
(ii) Amplification of the analog voltages to higher energy levels
(iii) Isolation of the computer circuits from the external system
(iv) Conversion from the analog voltage form to the required final energy (mechanical energy, thermal energy, etc.).
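The first two of these steps have a simple software counterpart: the computer must translate the analog output level it wants into the integer code written to a digital to analog converter, allowing for the gain of whatever amplifier follows. The C fragment below is again only a sketch, and the converter resolution, reference voltage and amplifier gain are assumed values.

    #include <stdio.h>

    /* Illustrative assumptions only. */
    #define DAC_MAX_CODE   4095       /* assumed 12-bit D/A converter          */
    #define DAC_REF_VOLTS  5.0        /* assumed 0..5 volt converter output    */
    #define AMP_GAIN       10.0       /* assumed gain of the power amplifier   */

    /* Convert a desired actuator voltage into the code written to the D/A. */
    static unsigned int actuator_volts_to_dac_code(double actuator_volts)
    {
        double dac_volts = actuator_volts / AMP_GAIN;   /* allow for amplifier gain */

        /* Clamp to the converter's physical output range. */
        if (dac_volts < 0.0)           dac_volts = 0.0;
        if (dac_volts > DAC_REF_VOLTS) dac_volts = DAC_REF_VOLTS;

        return (unsigned int)(dac_volts / DAC_REF_VOLTS * DAC_MAX_CODE + 0.5);
    }

    int main(void)
    {
        printf("Request 24 volts at the actuator -> D/A code %u\n",
               actuator_volts_to_dac_code(24.0));
        return 0;
    }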

Chapter 3 of this text actually has two objectives. The first is to introduce you to the basic electronic elements so that you can understand how these are used in order to create modern digital computers. The second objective of the chapter is to introduce you to the basic electronic elements used in the computer interfacing process. One of the key elements in all digital and analog circuits is the transistor. Chapter 3 shows how the transistor can be used to create basic digital circuit elements and also how it can be used as an analog amplifier. In both cases, the transistor is shown to be a device whose function is to control the flow of energy from a fixed, external electrical energy supply in such a way that the outputs of transistor circuits are related to the inputs by some predefined characteristic. This concept is fundamental to both computing and interfacing.

Following an examination of the basic elements of modern computer architecture, in Chapters 4, 5 and 6, Chapter 7 continues the hardware interfacing theme by examining the basic elements of closed-loop control systems. As you may gather from Figure 1.3, Chapters 3 and 7 are closely tied together.

Chapter 9 completes the study of interfacing hardware by examining one of the classic mechatronic systems and benchmark control problems - the servo motor. The purpose of this chapter is to bring together, in hardware and software, the sort of elements that have been introduced during the course of this book. Electromagnetic actuators (motor drives) have been singled out from other mechatronic devices for special attention in this text. This is because of their enormous importance in modern industrial systems, most notably robots, CNC machines, transfer lines and flexible manufacturing systems (FMSs). As a result of their widespread application, inherent complexity and the fact that they embody the concepts of mechatronics and interfacing as a whole, they have been allocated an entire chapter.

One major issue that isn't dealt with in this book is the hardware interfacing of computers via networks. This is an important subject in its own right and has therefore been dealt with in the complementary text book, "Data Communications and Networking for Manufacturing Industries".


1.3 Interfacing Computers Via Software


The problems associated with interfacing computers to external systems are often accentuated when the software development phase commences. It is then that the designer realises the limitations of the hardware and has to make the decision of whether to continue on the existing course or perhaps revise the original hardware design altogether.

In most applications where computers are interfaced to external systems (particularly mechatronic systems), the critical issue is whether incoming signals can be processed quickly enough and, in a closed loop, whether outputs can be generated quickly enough, to ensure the stable operation of the combined system. The most efficiently coded software cannot overcome inadequacies of the processing and interfacing hardware, but poorly written software can certainly exacerbate hardware problems. The objective therefore, in most time-critical applications, is to develop software that can carry out a given task within a specified time. This is referred to as real-time software development and has an added dimension of difficulty that isn't found in many other software development tasks - that is, the computational time factor.

There are a number of features which are common to most pieces of software related to computer interfacing. These include:

(i) Time-critical input/output routines that read incoming data from interfacing hardware and write outgoing data to interfacing hardware
(ii) Control routines that process incoming data through some set of algorithmic or decision making criteria in order to generate an output response
(iii) User input/output routines that enable a system user to interact with the computer and, thereby, the external system to which it is interfaced (a short illustrative sketch of how these routines fit together is given below).

These elements are shown schematically in Figure 1.4, which needs to be studied for a few moments in order to understand the interaction between the various components. Of particular significance is the so-called "operating system" of the computer control system, which is a piece of software that acts as a "shell" in which all the other software elements execute. In the majority of applications, the operating system will be a commercially produced piece of software. However in low level environments, such as application-specific microprocessor controls, the system designer may write a very limited form of operating system to achieve a given task.
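The skeleton below shows, in C, how these three classes of routine typically hang together in a simple single-threaded controller. It is a generic illustration only: the function names are hypothetical placeholders, the proportional control law and the 10 millisecond sampling period are assumptions, and a practical system would replace the stubs with calls to real interfacing hardware and operating system services.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical hardware and user-interface stubs; in a real system these
       would talk to interface cards and to the operating system.            */
    static double read_feedback(void)             { return 0.0; }   /* time-critical input  */
    static void   write_drive(double output)      { (void)output; } /* time-critical output */
    static bool   user_requested_stop(void)       { return true; }  /* user I/O routine     */
    static void   wait_for_next_period_ms(int ms) { (void)ms; }     /* period timer         */

    /* Control routine: a simple proportional law (illustrative only). */
    static double control_law(double setpoint, double feedback)
    {
        const double kp = 2.5;            /* assumed proportional gain */
        return kp * (setpoint - feedback);
    }

    int main(void)
    {
        const double setpoint = 100.0;    /* assumed target value */

        /* The real-time loop: read inputs, compute, write outputs, on time. */
        while (!user_requested_stop()) {
            double feedback = read_feedback();
            write_drive(control_law(setpoint, feedback));
            wait_for_next_period_ms(10);  /* assumed 10 ms sampling period */
        }
        printf("Control loop stopped by user request.\n");
        return 0;
    }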


An operating system is not only a shell in which software can execute but, more importantly, a manager that controls each program's access to resources such as disk-drives, screens, memory locations, etc. We will examine the important role of the operating system in some detail in Chapter 8, before we move on to other issues of software development and software selection.

[Figure: within the computer / microprocessor controller, an operating system hosts user input and output routines (keyboard and screen), a control algorithm, output routines that apply a driving force to the external system, and input routines that receive feedback from it.]

Figure 1.4 - The Computer Interfacing Software Development Process

Attempting to provide the specifics of how pieces of software, such as input/output routines, etc. should be written is akin to telling people how long to cut a piece of string - it all depends upon the specific application. More importantly, the software development strategy depends upon the specifics of the functions provided by the operating system itself.

The basic software routines illustrated in Figure 1.4, although common, are not universal to all applications. For example, the user input/output routines sometimes don't exist on a very low level microprocessor control system whose only purpose is to carry out some predefined task whenever activated. Output routines are not required in data logging applications and inputs are not required in open-loop control systems. However, in as much as both these tasks are common to most mechatronic control systems, they will be reviewed together in Chapter 8.


Given that there is a wide range of applications in computer interfacing, the issue arises of how we can tackle the subject of software development within the confines of a single book or chapter. The answer is that we clearly cannot cover all the issues involved in software development. More importantly, it must be remembered that this book has not been written with the intention of superseding the countless, existing text books on programming languages and packages. In this book, our objective is to examine the process of selecting development platforms and packages to achieve a set of criteria.

Some years ago, one might have argued that the selection of software development platforms was a trivial task - simply use low-level assembly language for time-critical tasks and high level languages (such as C or Pascal) for user input/output and common control algorithm development. However, this decision is no longer clear cut as a result of the introduction of "windows" type operating system environments and so-called "software development engines" (such as spread-sheets, databases, etc.). These are over and above the novel, 1980s attempts at computer programming, including neural networks, expert systems, etc.

As we go through Chapter 8, we shall examine the task of software development with these new tools in mind, so that we can establish a decision making process for the selection and design of software. At the end of Chapter 9, we will see how hardware and software can be brought together in the design of the traditional servo motor control, implemented via both analog and digital techniques.


Chapter 2
Computers and Control - Mechatronic Systems

A Summary...
An overview and history of industrial computer control systems.
General-purpose computer systems adapted for process and system control.
Specialised computer control systems (micro-controllers, Computer Numerical Controllers (CNC) and robot controllers).
Industrial computer interfaces for process control - the Programmable Logic Controller.
Mechatronic elements and systems - servo motors, transfer lines, FMS, etc.



2.1 A Review of Developments in Digital Computer Control


Digital computer control is a relatively recent phenomenon. It began to proliferate in the 1970s as a result of the increased computational capacity of digital computers and the increased availability (and "affordability") of integrated digital circuits. However, early developers in digital computer controls faced a difficult task because the external systems requiring control were predominantly analog in nature. Ironically, now that we have an enormous supply of devices to simplify the process of interfacing to analog systems, we increasingly find that the "systems" themselves are becoming more digital in nature as a result of the increasing intelligence of constituent components.

Unfortunately for early system designers, the cost of control computers was relatively high and so, as a result, only the most expensive and complex processes were generally considered for computerisation. The cost of applying computers to simple control systems was prohibitive and so designers had to add to their professional skills by first tackling the worst possible problems, with few experts to turn to for advice. Typical applications included:

Power station control systems
Chemical plant and refinery controllers
Metal smelting plant controllers
Large-scale food processing control systems.

All of these types of computer applications can be classified under the umbrella of "real-time control" and all suffer from similar problems. The problems include:

(i) The need to extract information from hundreds of sensors and energy transducers
(ii) The need to process incoming information in "real-time" (ie: before the next change of information occurs)
(iii) The need to output signals to hundreds of actuators and transducers.

Very few real-time control problems had been tackled by computer manufacturers up until the 1970s. Most manufacturers had been busy enough just developing computers to handle the growing number of data processing tasks that had arisen during the 1960s. However, the requirements of real-time control were quite different and, given the limitations of the technologies available in the 1970s, designers did an outstanding job in creating relatively reliable end systems.

Typical computer control systems in the 1970s had an architecture of the form shown in Figure 2.1. This is commonly referred to as a "hierarchical" control architecture. It is composed of an intelligent control device (computer), referred to as the host, and a number of unintelligent slave devices (sensors, transducers, actuators, amplifiers, etc.) that together make up a functioning system.


Figure 2.1 - Hierarchical Computer Control Architecture

The hierarchical control architecture came about as a result of necessity rather than outright design acumen. Computer processing was a relatively expensive commodity in the 1970s and as a result it was most uncommon (and expensive) to consider the use of more than one computer for a control problem. For this reason, the host computer had to carry out all the functions associated with real-time control, including:

Input/output (normally abbreviated to I/O)
Control algorithm execution
Interaction with the system supervisors (users)
Display of current status on screens and mimic panels.

This level of functionality was a radical departure from the traditional concept of computing that had developed in the late 1950s and throughout the 1960s. Computers had to become devices that could execute programs within a given time-frame and interact with the outside world to an extent that had previously been difficult to imagine.


There were a number of important issues to be overcome before a transition could be made from the data-processing role of the computer to a real-time control scenario. Firstly, specialised input/output (I/O) boards and circuits had to be devised in order to enable the computer to interact with the wide range of analog signals that emanated from external systems and to enable the computer to drive high level analog outputs. Secondly, the most fundamental piece of software within a computer system, the operating system, had to change its functionality. The operating system still had to remain as a resource manager, a scheduler and an interface between the computer hardware and software. The big change was that it now had to perform these functions in such a way that programs could execute quickly enough to process incoming signals before the status of those signals altered (otherwise data would be lost). This was the critical, "real-time" issue.

In the 1950s and 1960s, operating systems were really designed so that large computer systems (main-frames) could process office data, entered on punch-cards (known as Hollerith cards), as efficiently as possible. The basic principle was that it didn't really matter which data the computer processed first as long as, over a given time-period (eg: one day), the total amount of information processed was maximised. A small amount of task prioritisation was allowed, but in general, operating systems tended to treat all data in terms of "files". In strict terms, a file was a quantum of data stored on either magnetic disk or ferrite-core memory. However, the changing nature of data input and output (from punch-cards and print-outs through to video terminals and serial communications links) could not readily be accommodated in terms of the older operating systems. The operating systems were therefore modified to consider newer forms of data as though they were files (even though they were physically something else).

Although the concept sounds rather convoluted, treating all inputs and outputs as though they were files on a disk worked satisfactorily in the office but it became unwieldy for control purposes. If inputs and outputs related to an external system under control were treated as though they were files, then they were subject to the same level of prioritisation as files. In simple terms this meant that the operating systems associated with office computing could not deliver the time responses required for real-time control. In other words, it could take the older computers longer to process data from external systems than it may take for the data to change - information could be lost.

One of the first organisations to recognise the limitations of the older style computer architectures and operating systems was the Digital Equipment Corporation (DEC). Their response was to develop a range of computers that are still revered today for their innovative hardware and software. The computer range was given the title prefix PDP-11 and was really the first computer series that engineers would claim for themselves.


Several factors made the DEC PDP-11 series revolutionary in its time (and led to its survival for some 20 years). The first was an innovative hardware design that simplified the control system development task for engineers through an enormous range of powerful instructions. The second advantage of the PDP-11 was that the company began to produce a range of products that enabled the computer to interact with the outside world, at a time when other manufacturers were still producing computers that worked in isolation. The third major advantage of the PDP-11 was its new operating system, specifically designed for real-time control applications. The operating system was given the acronym RSX-11 and also endured for some 20 years.

The DEC PDP-11 series of computers became a bench-mark for digital control systems and were widely used in the complex types of applications cited earlier. Indeed, even in the 1980s (when computer processing costs had diminished), system designers still worked with the sort of hierarchical control architecture made popular (and feasible) by DEC. However, by the mid-1980s, the cost of microprocessor chips and microprocessor-based devices (including personal computers) had plummeted and a new trend in both computing and control emerged.

Mid-range computers such as the DEC PDP-11 became relatively expensive in comparison to microprocessor based computers and controllers and so, a new industry arose, focused on the task of minimising the use of mid-range computers and maximising the use of low-range (single microprocessor) based devices. This was greatly assisted by the proliferation of intelligent (microprocessor controlled) devices in the early 1980s. The net effect was to make possible a different form of control architecture, which we now refer to as "distributed control". The concept of distributed control is shown schematically in Figure 2.2.

Figure 2.2 - Distributed Computer Control Architecture


The basic principle of distributed control is that a complex control system is divided up into a number of components and each component is controlled by a local computer (which may be microprocessor or Digital Signal Processor (DSP) based). The role of the host computer is then only to coordinate (schedule) the activities of each of the local (slave) processors and to interact with the system users. The host computer and local processors are normally connected to one another via data communications links or a network, both of which unfortunately create a new range of design problems that have to be resolved. However, if the communications issue can be resolved then the distributed control architecture has a great deal of potential.

The idea of distributed control is to make each local control unit modular and simple so that the overall control system becomes more robust than one large and complex system. This means that the host system can be a much smaller computer than would be required for a hierarchical control system. The collective cost of the smaller host computer and the local processors can be comparable to, or lower than, the cost of one large computer. Moreover, when a local processor fails, it is often much more cost effective to replace it entirely than it is to repair a mid-range or mainframe computer. A typical area of control that is commonly allocated to some form of local processor is the collection and processing of signals from external systems. If the local processor is designed to gather and process signals and it is not loaded down with other tasks, then it may result in a total system that can perform time-critical functions more efficiently than a large computer carrying out many tasks.

There is some ambiguity amongst different text books on the subject of "hierarchical" and "distributed" control, largely because the definitions are rather subjective. In this book, we will simply define a distributed control system as being one in which the total control structure is divided up amongst a number of computers or processors. One could also argue that the structure shown in Figure 2.2 is both hierarchical and distributed since there is a host computer (ie: higher level computer) that controls the local processors.

The distributed control architecture concept can be further extended to the point where there is no longer a need for a host computer. In other words, a system is controlled by a collection of computers (processors) that all work together in order to achieve some particular objective. This is referred to as a "heterarchical" control structure and is shown schematically in Figure 2.3. This is a departure from the other two computer control architectures in the sense that there is no coordinating device in the control system. The structure is sometimes referred to as a "functionally decomposed" control architecture, since all control functions have been devolved down to local devices. The heterarchical control system looks interesting and has a number of characteristics in common with the way in which the human brain operates - that is, as a collection of equally intelligent nodes operating together for some common purpose.


Figure 2.3 - Heterarchical Control Architecture

The problem with the heterarchical control architecture is that it makes the development of control software rather difficult because no single node has a coordinating role. This really requires a new way of thinking and many modern system designers are graduates of the hierarchical control school and have difficulty in translating their existing techniques to heterarchical control.

Over and above the problems related to the development of control systems in heterarchical structures, there is the issue of networking. Networking has always been one of the irritating bottlenecks in the development of digital computers and computer control systems. Since the need for networking emerged in the 1970s, the progress towards standardisation has been painfully slow. In the case of heterarchical control systems, the networks form the backbone for communications between devices and are critical to the success or failure of the system. However, the speed of communications between devices across a network has always been a limiting factor in the use of heterarchical control.

The heterarchical control architecture is making some progress as the performance of computer networks improves and system designers begin to change their ways of thinking about control problems. However, all three forms of control architecture are currently in use and all have advantages and limitations that need to be considered. It is to be hoped that once you have completed reading this book, you will have a much greater understanding of the issues that are involved in selecting a digital computer control architecture for a mechatronic control system. The remainder of this chapter is devoted to exploring a number of different computer devices that are available in common mechatronic systems and the way in which they are used in those systems.


2.2 Programmable Logic Controllers


The Programmable Logic Controller, more commonly known as a PLC, is an unusual device in many ways. The modern PLC is an industrial control computer in every sense but its predecessors were really only designed to be low-level digital replacements for the electromechanical control systems found on industrial equipment during the 1940s, 1950s and 1960s. As a result of this rather unusual heritage, the PLC has been developed in a rather peculiar way relative to other computer systems. The first point to note about the PLC's heritage is that it was originally developed as a "tradesman's tool" rather than the "professional's tool" that the traditional office computer was designed to be. For this reason, a great deal of emphasis in early PLC design was to create a programming language that was best understood by industrial electricians rather than professional programmers. This language became known as "relay-ladder-logic" and can still be found today on older equipment. The second point to note about the PLC is that it was always designed as a computerised device with extensive input and output facilities that were very rare in traditional office computers. PLCs are now amongst the most prolific of all modern industrial control systems. They are used for a wide range of applications and are very diverse in their capabilities. A PLC is shown schematically at its simplest level in Figure 2.4.

Figure 2.4 - Schematics of a Programmable Logic Controller


PLCs use power-transistor technology, in combination with microprocessors and digital circuitry, in order to produce a specialised computer system for high power switching and control. The power transistor front and back ends are used to buffer the low-voltage microprocessor computer circuitry from high power industrial inputs and outputs. PLCs therefore provide the ideal combination of a small computer system together with interfacing to the industrial environment.

We earlier noted that PLCs were introduced to replace the electromechanical relay-ladder logic systems used to implement industrial controls. A typical function could be as follows:

"If Input A is high and Input B is greater than 50 volts, then delay 10 seconds and then set Output C to High."

The starting point in PLC design was to give the devices the ability to perform such functions with minimum programming effort and, moreover, to enable the industrial electricians who once created relay ladders to program the newer-technology PLCs. The inherent ability of PLCs to perform such functions makes them ideal for sequential control functions where (say) a number of hydraulic and pneumatic actuators and sensors have to be governed - for example, the opening and closing of safety doors or the switching of fluid pumps in a production system.

Modern PLCs are relatively inexpensive items, which are industrially rugged in design and extremely modular in structure. It is commonplace to buy a Central Processing Unit PLC, together with any number of bus-connected Input/Output (I/O) modules. This allows both simple and complex machines to be based upon the same, basic PLC unit. An Original Equipment Manufacturer (OEM) may choose to design a machine using a basic PLC system (with say 10 to 20 inputs and outputs) and then purchase expansion I/O modules as customer design requirements change.

Programming languages for Programmable Logic Controllers are as diverse as the controllers themselves. Early PLCs were only programmable in "ladder-logic diagrams" that were a pictorial representation of Boolean circuits coupled with delay and timer elements. However, as people grew to realise the enormous range of design applications for these programmable devices, it became less and less attractive to use the now-dated ladder-logic diagrams. Many modern PLCs are sold with specialised implementations of the BASIC programming language as a built-in feature. This allows a much more sensible and structured approach to system design to be used. It has also been a logical step, since the proliferation of Personal Computers meant that more and more technical people were comfortable with the concept of programming in a language, rather than using diagrams. In recent years, PLCs have also become available with specialised "C" or "PASCAL" compilers that allow complex program development for control applications.
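Expressed in a high-level language, the rule quoted above is only a few lines of code. The sketch below shows it in C, in the spirit of the "C"-programmable PLCs just mentioned; the input and output variables, the voltage value and the delay routine are illustrative stand-ins for a real controller's I/O image and timer facilities, not any vendor's actual library.

#include <stdio.h>
#include <stdbool.h>

/* Stand-ins for the PLC's input/output image - on a real controller these
   would be reads and writes of the I/O image table, not plain variables.  */
static bool   input_a  = true;    /* digital input A                       */
static double input_b  = 62.5;    /* analog input B, in volts              */
static bool   output_c = false;   /* digital output C                      */

/* Placeholder for the controller's timer facility. */
static void delay_seconds(int seconds)
{
    printf("delaying %d seconds\n", seconds);
}

/* "If Input A is high and Input B is greater than 50 volts, then delay
   10 seconds and then set Output C to High."                              */
static void scan_rule(void)
{
    if (input_a && input_b > 50.0) {
        delay_seconds(10);
        output_c = true;
    }
}

int main(void)
{
    scan_rule();                             /* one pass of the scan cycle */
    printf("Output C is now %s\n", output_c ? "High" : "Low");
    return 0;
}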


Despite the availability of high-level languages on PLCs, they still remain very much a "tradesman's tool". PLCs are generally weak in terms of their ability to carry out complex computations and control algorithms. The real strength of the PLC is in its ability to interact with high voltage and current inputs and outputs. Moreover, since PLCs are designed in a modular (building-block) manner, an enormous range of energy conversion transducers can be used to turn PLC outputs into useful drivers for mechatronic equipment. For example, PLCs can readily be connected to a range of solenoids and pneumatic actuators to convert a voltage output into a mechanical movement.

The functionality of the PLC makes it ideal for controlling sequential, "event-oriented" systems such as conveyors, dedicated transfer machining lines and so on. However, the ability of the modern PLC to communicate with higher level computer systems through a data communications link or network has made it into a useful processing device for distributed control. Figure 2.5 illustrates a commonly used distributed control system (similar to that shown in Figure 2.2) in which the PLCs are responsible for interacting with the outside world while the host computer system carries out some complex control algorithm.

Figure 2.5 - Distributed Control Using PLCs

The distributed system of Figure 2.5 enables the PLCs to input information from hundreds of external sources, carry out minor processing on that information and then feed it as control input data to the host computer. Once the host computer has calculated the next set of required outputs (as determined by the control algorithm) it sends the information to the PLCs via the communications links or network. The PLCs are then responsible for creating the high current or voltage output.
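One pass of that data flow might look like the following sketch. The link routines and the control calculation are hypothetical placeholders (a real installation would use the PLC vendor's communications library and an application-specific algorithm); they are stubbed here only so that the round trip from PLC inputs, through the host algorithm, back to PLC outputs can be seen in one place.

#include <stdio.h>

#define NUM_PLCS 3
#define NUM_IO   8

/* Hypothetical link routines - stand-ins for a vendor's serial or network
   protocol library.                                                       */
static void plc_read_inputs(int plc, double in[NUM_IO])
{
    (void)plc;
    for (int i = 0; i < NUM_IO; i++)
        in[i] = 1.0;                      /* stubbed, pre-processed data   */
}

static void plc_write_outputs(int plc, const double out[NUM_IO])
{
    printf("PLC %d: first output = %.2f\n", plc, out[0]);
}

/* Placeholder for the host's control algorithm. */
static void control_algorithm(const double in[NUM_IO], double out[NUM_IO])
{
    for (int i = 0; i < NUM_IO; i++)
        out[i] = 2.0 * in[i];
}

int main(void)
{
    double inputs[NUM_IO], outputs[NUM_IO];

    /* One pass of the host's scheduling loop: gather data from each PLC,
       run the algorithm and send the new outputs back down the link.     */
    for (int plc = 0; plc < NUM_PLCS; plc++) {
        plc_read_inputs(plc, inputs);
        control_algorithm(inputs, outputs);
        plc_write_outputs(plc, outputs);
    }
    return 0;
}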


The problem with the sort of architecture shown in Figure 2.5 is that the commonly used data communications links and networks are relatively slow. If, for example, the system shown therein was used for some form of continuous process control (such as in a power station or chemical plant) then the communications link between the PLC and the host computer could form an unacceptable bottle-neck in terms of data flow. For this reason, a number of PLC manufacturers are now developing devices that have a much closer coupling to the host computer system so that the external communications link can be avoided. This form of design involves connecting the PLC directly into the internal architecture of the host computer system and is referred to as a "back-plane" connection. The range of PLCs currently on the market is quite extensive and can include small devices costing less than a personal computer up to large systems costing more than most mid-range computer systems. Simple PLCs have an appearance similar to a programmable pocket calculator - however, the PLC is equipped with a number of high-voltage, high-current input/output terminals which are used for interaction with the outside world. These systems are useful for simple sequential control. Sophisticated PLCs can resemble computer workstations in both appearance and functionality. Most of the high-end PLCs are capable of executing multiple programs simultaneously, and interacting with the user through graphical software interfaces. These systems normally represent the "break-even" point in control systems design. A control system designer would sometimes need to decide whether to use one of these costly PLCs to carry out a complete control system or whether it would be more cost effective to use a traditional computer and a lower cost PLC system. The criteria typically used to select a PLC for a particular application include the following:

PLC Programming Language
Number of Inputs and Outputs (I/O capability)
Ability to interact with the user and/or display graphical system information
Expansion Capability
Processor Execution Speed
Modularity of Design
Ruggedness of Design
Capacity for Integration with other systems through:
    Serial Communication
    Back-plane (Bus) Communication
    Local Area Network Communication.

Overriding the technical factors are the always considerable political factors, which cause a company to choose a PLC vendor based upon conformance with other systems already installed in a plant. This reduces the need for maintenance personnel to become familiar with a wide range of programming languages and implementation techniques.


2.3 Intelligent Indexers and Servo-Drive Systems


One of the most important issues in the development of mechatronic control systems for industrial applications is the ability to accurately move some mechanical element, such as a cutting tool or end-effector, from one position to another. This is essential to the development of precision devices such as robots, Computer Numerical Control (CNC) machine tools, indexing tables for laboratory equipment, etc.

There are many ways in which an element can be moved from one position to another. Older industrial systems traditionally used hydraulics and pneumatics to propel mechanical elements. For centuries, mechanical clocks have used springs for energy and gears to index the arms with a relatively high degree of accuracy. However, in modern industrial systems the most common approach is to use electromagnetic techniques - that is, electric motors whose rotational positions can be accurately controlled. Rotational movement is transformed into linear motion by driving simple "screw-feeds" or low-friction, low-backlash, recirculating ball-bearing screw-feeds (ball-screw-feeds).

Those who are unfamiliar with the intricacies of electric motors may assume that the only function of a motor is to rotate continuously within a required velocity range. However, when we talk about the accurate positioning of a robot arm or CNC machine axis, we are really talking about motors that are designed to rotate a fraction of a revolution and then stop. The actual motors used for these applications are similar to the a.c. and d.c. motors that rotate continuously in other electrical machinery - the difference is in the way they are controlled. There are essentially two types of motor control systems that can be used for accurate positioning of mechanical elements:

Stepper Motor or Indexer Control (Open loop control)
Servo Motor Control (Closed loop control).

The stepper motor system is based upon a special type of motor that rotates (indexes) by a fraction of a revolution each time a voltage pulse is applied to one of its windings. Unlike traditional motors, the stepper motor does not rotate smoothly but rather, steps from one position to another and hence its name. The overall stepper drive system is shown schematically in Figure 2.6. Stepper motors were originally used for small scale applications such as in printers, plotters, etc. However, they are now also used in industrial applications where the mechanical load on the motor is known (and stable). Although the control principles are similar, the industrial systems tend to be referred to as "indexers" in commercial literature.


Figure 2.6 - Stepper Motor Arrangement

The nature of the actual stepper motor drive can vary considerably. For very small motors, the drive can be implemented on a single silicon chip. On large systems, the stepper motor drive has to include power electronics and cooling fins to dissipate heat and hence has to be implemented on a circuit board, with discrete transistors and digital circuits.

The fundamental limitation of stepper motors arises when they are used as shown in Figure 2.6 - that is, as "open-loop" devices. If the load on the motor shaft is larger than the torque generated by the electrical energy input to the motor windings, then the motor will not index from one position to another in a predictable manner. The absolute positioning characteristic of the motor is therefore lost. As a result, open-loop stepper motors are only used in situations where the load is always well defined - for example, in parts transfer (shuttle or indexing) systems where the maximum load on the motor can be calculated during the system design phase. A stepper motor running in "open-loop" mode would be inappropriate for positioning a cutting tool, since the load caused by the tool could vary substantially depending on the work-piece properties and the amount of material being removed.

Some stepper motor controllers can function in a closed loop, where the position of the motor shaft is fed back to the controller from a resolver or position encoder. However, once these devices are converted into closed-loop systems, then they lose their cost advantage over traditional servo drives that can provide a "smoother" rotation.
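To make the open-loop principle concrete, the sketch below converts a demanded displacement into a pulse count and simply issues the pulses - nothing confirms that the motor actually moved, which is precisely the limitation just described. The steps-per-revolution and screw-pitch figures are illustrative assumptions, as is the pulse-output routine.

#include <stdio.h>
#include <math.h>

#define STEPS_PER_REV 200        /* assumed: a 1.8 degree-per-step motor    */
#define MM_PER_REV    5.0        /* assumed: screw-feed pitch of 5 mm/rev   */

/* Placeholder for the hardware access that pulses the drive's step input. */
static void issue_step_pulse(int direction)
{
    printf("step %s\n", direction >= 0 ? "+" : "-");
}

/* Open-loop move: convert a demanded displacement into a pulse count and
   send the pulses.  No feedback is read, so an overloaded motor that
   stalls or loses steps goes undetected.                                  */
static void move_open_loop(double millimetres)
{
    int  direction = (millimetres >= 0.0) ? 1 : -1;
    long pulses    = lround(fabs(millimetres) * STEPS_PER_REV / MM_PER_REV);

    for (long i = 0; i < pulses; i++)
        issue_step_pulse(direction);
}

int main(void)
{
    move_open_loop(2.5);    /* 2.5 mm = 100 pulses with the assumed figures */
    return 0;
}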


The servo motor system is the embodiment of classical control theory and operates in a closed loop that enables the controller to drive the motor according to its current velocity or position. The traditional servo motor system is shown in Figure 2.7. It is composed of:

An a.c. or d.c. motor
An analog resolver or digital encoder device that provides a voltage signal or signals corresponding to the orientation of the motor shaft
The servo drive controller itself
An electrical energy supply.

Figure 2.7 - Schematic of Traditional Servo Drive System Arrangement

The servo drive is an electronic device that is used to provide a regulated flow of electrical energy from an external power supply to the motor, based upon the difference between a specified voltage signal (the set-point or reference position) and the feedback signal from the encoder or resolver on the motor shaft. This provides closed-position loop control of the shaft.


A number of servo motor systems have servo drives that do not provide closed-position loop control. Instead, their purpose is to provide closed-velocity-loop control. In these systems, the servo drive provides an output proportional to the difference between the actual velocity of the motor and a specified voltage signal (the set-point or reference velocity). In servo drives such as this, the velocity feedback can either be obtained by differentiating the position feedback signal (readily achieved in both analog and digital servo drives) or from an additional element known as a tacho-generator. A tacho-generator is a d.c. machine that is mounted onto the same shaft as the main motor and provides an output voltage proportional to the speed of rotation of the shaft. This closed-velocity-loop form of servo drive is shown in Figure 2.8.

Figure 2.8 - Closed-Velocity-Loop Servo Motor System

The "velocity" terminology in regard to servo drives will cause some annoyance to those concerned with engineering etiquette. Strictly, of course we are referring to the speed of rotation (not velocity). However, this speed is normally directly related to the linear velocity of some end-effector and so the terms tend to be used interchangeably. The closed-position and closed-velocity loop servo drives both have roles to fulfil in industrial applications. The closed-position loop system is most useful where only one independent axis of movement is required in order to move some element or "endeffector" to a given position. However, where two or more motors are used to drive an end-effector to a given position, the axes are often interdependent. The path taken to reach that position also needs to be controlled through the velocities of the servo motors.


The difference between velocity and position control is best demonstrated with a simple "XY" machine, as shown in Figure 2.9 (a), where one motor controls X movement and the other controls Y movement. Figure 2.9 (b) shows two paths taken by the end-effector in order to reach the point "P". Path (i) is obtained using velocity control of servo motors so that a straight line is generated between the starting point and the final destination. This is called "linear interpolation". Path (ii) is what can result from using two position controlled servo motors, which each attempt to reach their final positions independently. In path (ii) it is clear that the "X" movement is faster than the "Y" movement and hence when the X motor reaches its final position, the Y motor still has to continue for some time. Since it is obviously not possible to have "total velocity control" over a motor, because of acceleration and deceleration, most multiple-axis machines (robots and CNCs) use a dual feedback loop arrangement incorporating both velocity and position.

Figure 2.9 - (a) A Simple XY-Machine (b) Problems with Using Position Control for Interdependent Axes
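Linear interpolation of the kind shown as path (i) amounts to splitting the commanded feed rate between the two axes in proportion to the distance each has to travel, so that both arrive at "P" at the same instant. A minimal sketch, with illustrative units and values, follows; real controllers add acceleration and deceleration ramps on top of this.

#include <stdio.h>
#include <math.h>

/* Split a path feed rate between the X and Y axes in proportion to the
   distance each axis must travel, so that both reach the target together. */
static void interpolate_velocities(double dx, double dy, double feed_rate,
                                   double *vx, double *vy)
{
    double path_length = sqrt(dx * dx + dy * dy);

    if (path_length == 0.0) {               /* already at the target point */
        *vx = *vy = 0.0;
        return;
    }
    *vx = feed_rate * dx / path_length;
    *vy = feed_rate * dy / path_length;
}

int main(void)
{
    double vx, vy;

    /* Move 30 mm in X and 40 mm in Y at a path feed rate of 100 mm/min. */
    interpolate_velocities(30.0, 40.0, 100.0, &vx, &vy);
    printf("vx = %.1f mm/min, vy = %.1f mm/min\n", vx, vy);  /* 60 and 80 */
    return 0;
}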


There are many different types of servo drive control available for industrial applications. The first distinction between drives is that some are designed for d.c. motors and others are designed for a.c. motors (induction and synchronous motors). Direct current motors are much simpler to control than alternating current motors but cost considerably more and are less reliable. As a result of technology limitations, older servo drives were only designed to control d.c. motors and a.c. drives did not emerge until the 1980s. The older types of servo drives (both d.c. and a.c.) are analog in nature and are characterised by the fact that the circuit boards are relatively bulky. Modern drives utilise digital technology to control the power flow to the motors and as a result, less "waste" heat is generated and hence the drives can be much smaller.

Traditional servo drives did not have any in-built "intelligence" and could thus only carry out simple forms of closed-loop control. The most prolific form of control was (and still is) the so-called Proportional-Integral-Differential or PID control which is a classical, closed-loop control methodology. In recent years, servo drives have also begun to utilise the low-cost processing power that has become available through microprocessors and Digital Signal Processors (DSPs). This enables manufacturers to design servo drives that can "intelligently" control the flow of energy to the motors through some complex algorithm. A number of commercially available indexers (stepper motors) are now also equipped with microprocessor control. In addition to allowing a broader range of control algorithms to be implemented (in addition to PID), the on-board processor can also allow the servo drive controller to be networked, so that it will respond to positioning or velocity commands.

Servo motors and drives, both a.c. and d.c., are not only the basis for a great range of modern machinery design but are also contributors to the improved factory environment that now exists in many Western countries. Servo drive systems are much quieter than hydraulic and pneumatic systems and considerably reduce noise emission levels in the factory when they replace these older drive systems in low power applications (up to a few kilowatts). The servo motors not only provide quieter operation but also the ability to position elements with a high degree of accuracy over an entire range of displacements. Hydraulic and pneumatic systems, on the other hand, tend to be used only for point to point positioning and are not suitable for graduated positioning. However, the hydraulic and pneumatic drives still have advantages in situations where extremely high forces need to be applied to move actuators or end-effectors from one position to another.
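The PID law mentioned above takes only a few lines in discrete form. The sketch below is the generic textbook formulation with assumed gains and sample period; it is not the algorithm of any particular drive and omits the practical refinements (output limiting, integral wind-up protection, derivative filtering) that commercial implementations add.

#include <stdio.h>

typedef struct {
    double kp, ki, kd;       /* proportional, integral and derivative gains */
    double dt;               /* sample period in seconds                    */
    double integral;         /* running sum of the error                    */
    double previous_error;   /* error at the previous sample                */
} pid_state;

/* One sample of the discrete PID law:
   output = Kp*e + Ki*integral(e) + Kd*de/dt                               */
static double pid_update(pid_state *c, double setpoint, double feedback)
{
    double error      = setpoint - feedback;
    double derivative = (error - c->previous_error) / c->dt;

    c->integral       += error * c->dt;
    c->previous_error  = error;

    return c->kp * error + c->ki * c->integral + c->kd * derivative;
}

int main(void)
{
    pid_state loop = { .kp = 2.0, .ki = 0.5, .kd = 0.1,
                       .dt = 0.001, .integral = 0.0, .previous_error = 0.0 };

    printf("output = %.3f\n", pid_update(&loop, 1.0, 0.6));
    return 0;
}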


2.4 CNC and Robotic Controllers


The servo motor drive system is a basic building block for most modern industrial equipment. In particular, it is the basis for Computer Numerical Control (CNC) machine and robot design. These two elements are, in turn, amongst the most prolific pieces of machinery in a modern industrial complex.

The CNC machine is very much a "fish out of water" in the worlds of computing and manufacturing. CNC has never provided a full-fledged computer control system (in the way most engineers would understand it) and the machines which are controlled by CNC are still designed like machines that should be manually driven. Robotics, on the other hand, benefited enormously by arriving at a more opportune "technological time", after suitable electronics and processing became available to provide sensible controls.

In the middle of the twentieth century, the majority of lathes and mills were manually driven by operators who moved axes by turning hand-wheels. The axes were composed of "screw-feeds" that were used to move some end-effector (such as a cutting tool or work-piece fixture) so that the work-piece could be processed. These machines were designed around a rudimentary, geometric axis system (based on either orthogonal or cylindrical coordinates) so that the operators could easily relate the required geometry of a work-piece to the movement of a single hand-wheel on the machine. The most logical first step in automating these machines in the 1950s was to replace each of the manual hand-wheels with a servo motor drive. Initially, the servo motor drives were all connected to a controller, known as a Numerical Controller or NC, that would cause each of the drives to move, based on a punched, paper-tape program. Considering the high cost of computing in the 1950s, these machines provided an extremely good mechanism for automated processing of work-pieces.

Unfortunately however, as computer technology evolved and low cost microprocessors proliferated in the 1970s, NC became Computer Numerical Control or CNC, with little revision of the fundamental concepts. CNC is conceptually little more advanced than NC and its basic advantage is that it provides the ability for programmers to enter, edit and simulate cutter paths on the controller itself. Few people question the design of CNC machines. As with many other industrial systems, features that were ill-designed due to lack of technology have remained as industry standards, long after the enabling technologies have emerged.

There are many other limitations that have arisen in CNC, in terms of the design of the computer control itself. Many CNCs are still designed as though machines were intended to exist in isolation from other computers and the outside world - that is, as "islands of automation". However, in industry we now know that it is important for all computer controllers to either receive instructions from the outside world or send data to the outside world in order to simplify the task of factory automation.


Robots, unlike CNC machines, really only came into being (on a large scale) after the advent of microprocessors and the dawn of "low-cost" computing. Modern robot design has therefore suffered far less from the manacles of the by-gone manual era than has CNC machine design. As a result, robots tend to look and perform like devices designed for a specific function using modern concepts of computer control. Even after all the advances in CNC design, robots still tend to interact with the outside world in a far more proficient manner than the CNCs. However, despite the obvious physical differences between, say, an articulated welding robot and a CNC milling machine, the principles behind the actual control systems are essentially similar. The schematics of CNC and robot control systems are shown in Figure 2.10.

Figure 2.10 - Schematics of a CNC or Robotic Axis Control System

The supervisory controller in robotic and CNC systems is responsible for a number of simultaneous activities including the user (system) interface and part program parsing and execution. As each step (block) of a part program executes, the supervisory controller passes down positioning information to the axis control computer. The axis controller is responsible for achieving the desired position, using appropriate acceleration and deceleration curves. It does this by sending reference velocities to the servo-drive controller, on the basis of the actual position that has been achieved. As a result, a double feedback loop (velocity and position) is established.


The relationship between the servo-drive controller and the servo-motor is dependent upon the type of motor in use. In d.c. systems, the servo-controller varies the average voltage (hence current) applied to the motor windings. In both synchronous and induction motor, variable-speed a.c. systems, the servo controller supplies the armature of the motor with a variable frequency supply voltage. Note also that the servo motor drives in traditional CNC and robotic control systems are based upon closed-velocity-loop control. The CNC or robotic axis controller is then responsible for closing the position loop.

In older CNC and robotic control systems, a number of processors were used to implement the entire control. One processor acted as the supervisory controller, another as the axis controller and so on. In some cases, each axis had its own processor because the task of closing the position loop was so "processor-intensive". Current processor technologies enable a typical four axis machine to be controlled by one microprocessor which performs all processing (supervisory, axis control, etc.) functions in real-time through multi-tasking.

The transducers used to provide velocity and position feedback on both robots and CNC machines vary according to the specific applications. Commonly, shaft-mounted tacho-generators are used to provide velocity feedback and linear resolvers or pulse-code transducers (encoders) provide position feedback.

Both CNC and robot control systems share another common trait in that they tend to be very specialised in their design. They are optimised to achieve multi-axis control, with minimum user programming, and hence the languages that they use tend to be somewhat restrictive. For historical reasons, related to early hardware limitations, CNC machines were (and still are) traditionally programmed in a "G-Code" language. This dated system provides a user with a number of sub-programs, commonly prefaced with either a "G", "F", "S", or "T" and suffixed with a subroutine number or parameter. These facilitate the movement of a cutting tool through a predefined path; the selection of a cutting tool and so on. However, few (if any) of these languages will allow a programmer to do more than this.

G-Code languages were never intended to provide the user with routines for accessing various aspects of the machine controller itself and for interfacing it to the outside world. These features, which are now both desirable and important, are difficult or impractical to implement on CNCs. For example, with older CNCs, it is often difficult to display user programmed screens as a part program executes. Further, it is generally not possible to access the serial communications facilities of a CNC through the G-Code language itself.


CNC designers often attempt to augment the limited features of the G-Code languages by running additional (concurrent) tasks on the CNC. For example, many CNCs have programs designed to handle serial communications and remote commands, running as tasks, while a part program executes. This type of task is referred to as a Direct Numerical Control or DNC task/facility. It generally enables a host computer to remotely control a CNC machine through a serial link. There are other tasks, such as concurrent, graphic information displays, which are also added by a number of manufacturers. The deficiency with older forms of CNC architecture is that they ultimately provide a closed (black) box to the end-user. It is often difficult for end-users to reprogram CNCs in order to change graphics displays, or the way in which serial communication occurs. So, while traditional CNCs provided great flexibility in terms of cutting and shaping materials, they generally provided very little flexibility in terms of tailoring the user environment. Modern CNC designs, however, have improved since the early 1990s and are gradually moving towards high-level, structured languages and open-architecture programming capabilities. Robot controllers have generally been better than traditional CNCs in terms of programming flexibility. Unlike CNC machines, robots seldom, if ever, work as devices in total isolation from other systems. They are generally linked to other computer controlled or logic controlled mechanical systems. For example, a spraypainting robot must be linked to the production line that feeds it with work-pieces for painting - otherwise there can be no inter-locking between line movement and robot cycles. As a result, programming languages on robots have tended to reflect the systems oriented nature of these devices. However, it is far more difficult to categorise the capabilities of robot controllers, because they are far more diverse in software architecture than CNC systems. Some robot-controllers can be programmed in PASCAL or "C" (or special structured languages such as VAL) in the same manner as any normal computer system. These systems provide users with a high level of access to the internal hardware of the controller itself. This makes such controllers more amenable to interfacing with the outside world. A few, "less sophisticated", robot controllers are analogous to older CNCs and can only be programmed in restrictive, specialised, movement languages (similar to proprietary G-Codes). These systems suffer from the same interfacing disadvantages as CNC systems.


CNC systems and robot controllers generally come with built-in Programmable Logic Controllers, usually of a specialised and complementary design, and normally produced by the CNC or robot manufacturer. The PLCs are integrated into the CNC or robot control system, under the control of the main processor. They are used to control a range of sundry functions requiring high voltage or current switching. For example, on a CNC machine, the internal PLC may control the switching of coolant pumps, the opening and closing of doors, etc. On a robot, the PLC may control the opening and closing of grippers and the switching of inter-locked equipment. The inputs and outputs of PLCs on both robot and CNC systems are accessible from within the robot programming language or G-Code programming language. This scheme is shown in Figure 2.11.

Figure 2.11 - Integrated PLC Control of High Power Peripherals

CNC and robot controls are generally both provided with a "hard-wire" interface to the outside world. This provides a simple means of integrating the devices into automated systems. In a hard-wire interface, spare inputs and outputs from the integrated PLC are selectively connected to external devices so that they can be interlocked. Program execution is then made dependent upon the condition of inputs. For example, if we wished to use a robot to feed a CNC machine with workpieces, a hard-wire inter-locking arrangement, such as the one shown in Figure 2.12 may be used.


Figure 2.12 - Inter-locking a CNC Machine to a Robot

In such a system, the CNC machine program should start execution as soon as the robot has loaded a part (ie: when the robot program has been completed). The robot should unload a part when the CNC machine program has ended. This can be achieved by the robot setting output number "n" high when it completes a program (last executable line of code) and the CNC machine setting output "x" high when it completes a program (last executable line of code). The first lines of code on both the CNC and robot are to wait for inputs "y" and "m", respectively to go high before continuing. This form of inter-locking is suitable for simple systems, but is unable to deal with problems that occur during the execution of a robot program or CNC program. For example, the robot may jam a component while loading the CNC machine and stop while its grippers are still inside the machine. The CNC has no way of knowing the actual position of the robot and so may damage the grippers. These sorts of issues can only be resolved by having the robot and CNC intelligently communicate with one another through a data communications link. These links require specialised communications software packages (known as protocols) to execute on each of the devices. Robots have been equipped with communications protocols since the early 1980s and CNCs became available with similar protocols shortly thereafter. The actual implementation of a control system via a communications protocol is the subject of another book and is outside the scope of this text.
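In program terms, the hard-wire interlock just described reduces to a wait-run-signal sequence on each machine. The sketch below shows the CNC side of the handshake of Figure 2.12 in C; the input and output variables and the part-program routine are stand-ins for the integrated PLC's I/O image and the controller's program execution, included only so the logic can be seen in one place.

#include <stdio.h>
#include <stdbool.h>

/* Stand-ins for the integrated PLC's I/O image.  Input "y" is wired to the
   robot's output "n"; output "x" is wired to the robot's input "m".       */
static bool input_y_part_loaded = true;     /* robot signals part loaded   */
static bool output_x_cycle_done = false;    /* CNC signals cycle complete  */

/* Placeholder for execution of the machining part program. */
static void run_part_program(void)
{
    printf("machining cycle runs\n");
}

/* The CNC side of the interlock: wait for the robot, machine the part,
   then tell the robot that the cycle is complete.                         */
static void cnc_cycle(void)
{
    while (!input_y_part_loaded)
        ;                                   /* first line: wait for "y"    */

    output_x_cycle_done = false;
    run_part_program();
    output_x_cycle_done = true;             /* last line: raise "x"        */
}

int main(void)
{
    cnc_cycle();
    printf("output x = %s\n", output_x_cycle_done ? "High" : "Low");
    return 0;
}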


2.5 Development Systems for Mechatronic Control


The advances in computer processing power which took place during the 1970s produced an enormous number of low cost personal computers in the 1980s. However, by the mid-1980s it was evident that these personal computers were limited by their inability to communicate with the outside world. As a result, their use in engineering control was limited. Engineers with a good background knowledge of electronics were able to design interfaces between personal computers and real-world systems but these were "one-off" solutions that were very costly to pursue. For those without an electronic engineering background, the concept of linking personal computers and workstations to the outside world presented serious problems. By the latter part of the 1980s, a number of companies had recognised the need and potential for equipment suitable for interfacing personal computers to the outside world. Vendors began to launch a range of "building-block" products that could be used by engineers, with limited knowledge of computer hardware, to implement engineering control systems. Since that time, the interfacing-board market has grown to the extent where plug-in interfaces have almost become a commodity. There are literally thousands of different types of products that are suitable for mechatronic control systems in industrial and laboratory environments. In general, these are designed to plug directly into a variety of commonly available personal computers and workstations. At their most basic level, these interfacing boards provide the ability for computers to input and output analog voltage signals through a number of different channels. This is shown schematically in Figure 2.13.

Figure 2.13 - Basic Interface Board Arrangement for Data-Acquisition and Control in Analog Systems


An interface board such as the one shown in Figure 2.13 would also typically provide a library of software routines that would carry out the low level hardware access to the input and output channels of the board. The end user can utilise these routines in a common high-level language program such as C or Pascal without ever understanding the complexities of the board or the processes that transfer data between the board and the computer's memory areas. As with nearly all consumer items, the more one pays, the more one gets. A more sophisticated version of the board could provide protection and isolation between the external signals and the computer hardware. The basic arrangement of Figure 2.13 assumes that the end-user will wish to develop control software entirely on the "host" personal computer. However, there are many instances where the development of a control algorithm may be a case of "reinventing the wheel". A typical example would be the implementation of a basic PID control, where an incoming feedback signal is processed via a standard algorithm to produce a required output signal. A number of boards are equipped with their own "on-board" microprocessors or Digital Signal Processors that are available to carry out basic "closed-loop" control functions. The host personal computer then essentially becomes a development tool that provides a screen and keyboard/mouse input arrangement. However, when the control system is fully implemented, the system designer can develop software that will enable the personal computer screen and keyboard to become the interface between the user and the control system. This is shown schematically in Figure 2.14.
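As an illustration of how such a library is typically used, the sketch below reads one analog input channel, applies a trivial control law and writes the result back through the board. The routine names are hypothetical stand-ins for a vendor's functions - every product defines its own - and they are stubbed here so that the example is self-contained.

#include <stdio.h>

/* Hypothetical stand-ins for an interface-board library.  A real board
   ships its own header file and routine names.                            */
static double board_read_analog(int channel)
{
    (void)channel;
    return 2.5;                              /* stubbed feedback, in volts */
}

static void board_write_analog(int channel, double volts)
{
    printf("channel %d driven to %.2f V\n", channel, volts);
}

int main(void)
{
    /* The user program never touches the board hardware directly - it
       simply calls the library routines.                                   */
    double feedback = board_read_analog(0);      /* read input channel 0   */
    double command  = 2.0 * (5.0 - feedback);    /* trivial control law    */

    board_write_analog(0, command);              /* drive output channel 0 */
    return 0;
}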

Figure 2.14 - Using Advanced Interface Boards with "On-Board Processors" for Standard Closed-Loop Control Functions


The concept shown in Figure 2.14 is often extended to provide more than just a simple general-purpose control. A number of intelligent controller boards are available for specific functions including:

Servo motor control
PC-based digital storage oscilloscope applications
Waveform synthesis
Micro-stepper and stepper motor control
Video acquisition (frame-grabbing and image-processing, etc.).

To some extent, a number of plug-in boards available for personal computers and workstations really do little more than replicate the functionality available in a Programmable Logic Controller. There is, however, one major advantage in using plug-in interface boards in preference to PLCs: the interface board connects directly into the bus structure of the host computer. This provides the fastest possible link between the outside world and the computer. Most PLCs can only be connected to a computer via a network or point-to-point communications link, both of which are relatively slow for real-time control. In some instances, PLCs have evolved to a level where there is really no difference between the functionality of an interface board and a PLC - this is particularly true of PLCs that plug directly into the bus structure of a personal computer or workstation in much the same way as the arrangement of Figure 2.14.

The major disadvantage of nearly all plug-in boards (and PLCs) is their relative cost. An interface board can often cost as much as or more than the host computer system. If the board is to be used for a "one-off" system design then this is not an issue, since the cost of developing an interface board would be a much more expensive proposition. However, if the objective is to develop control systems for mass production, then clearly the off-the-shelf interfaces are unacceptable.

One of the reasons for the relatively high cost of interface boards is the fact that they are designed to be "general-purpose". Most of the boards accept a wide range of voltage and current inputs and can provide a wide range of output voltages and currents. This sort of functionality is expensive. In situations where many boards need to be produced and cost is of the essence, specialised boards with strictly limited functionality (in other words, purpose-built) need to be designed from first principles. The focus of this book is to help you to come to terms with the basic principles behind the design of interfaces so that you can understand how commercial systems operate, their limitations and the appropriateness of designing from first principles.


In some cases, the decisions in regard to control system design are already fixed by overriding physical or commercial factors. For simple controller applications, the cost of a personal computer plus interfacing boards may be too high to make such a solution viable. In other situations, the physical size of a personal computer and interfacing boards precludes the use of such a solution. There are several ways to resolve these control problems. The first is to design a complete microprocessor or DSP based controller from first principles. This is normally the most cost-effective solution where mass production is involved, but is expensive for low volume applications. The other alternatives are based upon the tailoring of so-called "miniature controllers" or "micro-controllers".

Miniature controllers are microprocessor based computers that are usually designed to fit onto a relatively small printed-circuit board. Unlike the mother-board of a personal computer or workstation, a miniature controller already has built-in control functionality such as analog inputs and outputs and relay-drivers. The software for miniature controllers is normally stored in special memory chips known as "Electrically Erasable Programmable Read Only Memory" or EEPROM. On a normal computer, programs are stored on magnetic disks and transferred to memory for execution. This allows for much greater storage but also makes the overall computer system larger in size than the miniature controller. A typical arrangement is shown in Figure 2.15.

Figure 2.15 - Miniature Controller Development System


The beauty of miniature controller development systems is that a system designer can develop software on a standard PC workstation using a special development compiler (normally programmed in the "C" language). The software is then downloaded to the miniature controller memory via a serial communications link. Thereafter, the miniature controller can be disconnected from the PC and act as a standalone unit.

Miniature controllers are designed in a building-block fashion so that the development does not require a highly skilled electronics engineer nor the production of printed circuit boards and so on. Typical accessories include liquid crystal display screens and operator keypads that enable the final users of such controllers to interact with them at a very basic level.

As with all other general-purpose "building-block" devices, the cost of a miniature controller can become excessive when large volumes need to be produced. However, when one takes into account the fact that development times are minimised and hardware reliability is much higher than for "one-off" designs, the miniature controller is a very useful device.
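The following fragment gives a feel for the kind of stand-alone program that might be compiled with such a development kit and downloaded into a miniature controller's EEPROM. The adc_read, dac_write and lcd_print routines are hypothetical place-holders for the input/output library of a particular kit; they are stubbed out here so that the sketch is self-contained and can be run on an ordinary PC.

    #include <stdio.h>

    /* Hypothetical kit routines, stubbed out for illustration only. */
    static unsigned adc_read(int channel)        { (void)channel; return 512u; }
    static void dac_write(int channel, unsigned counts)
    {
        printf("D/A channel %d <- %u counts\n", channel, counts);
    }
    static void lcd_print(const char *message)   { printf("LCD: %s\n", message); }

    int main(void)
    {
        const unsigned alarm_level = 800u;     /* raw A/D counts from a load cell */

        for (int cycle = 0; cycle < 3; cycle++)    /* a real unit would loop forever */
        {
            unsigned load = adc_read(0);           /* channel 0 wired to the load cell */

            if (load > alarm_level)
            {
                dac_write(0, 0u);                  /* back the drive output off   */
                lcd_print("OVERLOAD");
            }
            else
            {
                dac_write(0, load / 2u);           /* simple proportional output  */
                lcd_print("RUNNING");
            }
        }
        return 0;
    }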


2.6 Manufacturing Systems

There are very few areas of engineering where control systems proliferate to the extent that they do in modern manufacturing systems. A sophisticated manufacturing system can contain control elements including:

• Microprocessor or DSP controlled Servo Drives
• CNCs
• Robot Controllers
• PLCs
• Cell controllers (PC workstations or dedicated computers).

The difficulty, of course, lies in getting all these different types of controllers to talk to one another so that a cohesive manufacturing system can be produced. In continuous processes (such as in chemical, food and petrochemical production and power generation) the interaction between different levels of control is very tightly governed because the level of intelligence ascribed to each element tends to be limited. However, in discrete processes, such as in metal-cutting manufacturing systems or textile production systems, the boundaries between the different levels of control are somewhat blurred and cohesive control is more difficult to achieve.

A number of different metal-cutting manufacturing systems are used in order to satisfy the performance criteria demanded by a wide range of industries, including workshops and automotive and aerospace manufacturers. The common system configurations are shown in Figure 2.16, which gives an indication of how each system fits into annual volume / variety regions in the production environment.
(Figure 2.16 maps each system type onto a graph of annual production volume against part variety: dedicated systems and flexible transfer lines (F.T.L.) occupy the high-volume, low-variety region; FMS with CNCs, AGVs and robots occupy the middle region; robot-fed CNCs and stand-alone CNCs occupy the low-volume, high-variety region.)

Figure 2.16 - Realms of Metal-Cutting Manufacturing Systems


The systems mapped onto the graph in Figure 2.16 have differing performance characteristics and also place different demands upon the data communications used by their control systems. We shall now examine the structure of each of the integrated systems and the way in which communication between devices occurs.

The dedicated, in-line transfer machine is shown in Figure 2.17 and is a high-volume, low part-variety system. It is composed of a number of machining stations and a transfer conveyor. Each of the machining stations is designed and tooled for a specific application. For each station, tools are loaded into an induction motor driven, multi-spindle cutting head, which has an advance and retract motion. When a work-piece comes into position within a machine, the head advances for a fixed period, then retracts, to allow the work-piece to move down-line to the next destination.

Each machining module is generally controlled by a PLC, which is hard-wired to its sensors, coolant pumps, etc. These machining modules are generally not user-programmable devices. They are pre-programmed to perform only a fixed task.


Figure 2.17 - Schematic of Dedicated, In-Line Transfer Machine

The transport mechanism for in-line transfer machines is also a PLC controlled device. In simple systems, the transport mechanism controller is also the system controller, and is hard-wire inter-locked to the dedicated machining modules. In more complex systems, a separate, high powered PLC is used to coordinate the running of the system and drive mimic-panels and graphic information displays.


In transfer machines, where individual modules are controlled by a range of PLCs produced by different vendors, it is common practice to simply hard-wire from the supervisory PLC to the other PLCs in the system. However, in a single-PLC-vendor environment, a number of proprietary solutions are generally feasible. Some of these solutions allow PLC data buses to be inter-connected through a back-plane system for information exchange. Other solutions allow for interconnection of PLCs through high speed Local Area Networks. Regardless of which system is chosen, the objective is for the supervisory PLC to implement sequential control over the system through input/output inter-locking with individual module controllers.

Rotary transfer machines are analogous to in-line transfer machines except that parts transfer from machine to machine occurs via an indexing mechanism in a circular path. The control principles however are almost identical.

The hard-wired, inter-locking communications techniques shown in Figure 2.17 for dedicated systems are generally adequate because:

• Individual machining modules are relatively simple devices, executing simple, fixed programs
• The amount of information which any one machine can feed back to a supervisor comprises little more than off/on limit-switch status
• The supervisory controller does not need to change programs on individual modules in the system.

Dedicated manufacturing systems, of the type shown in Figure 2.17, fulfil a vital role in the high-volume production of a small variety of parts. However, in order to vary the type of part that passes through such a system, it is necessary to manually retool each of the machining stations. If the type of part to be produced is radically altered, then such systems require major re-engineering or, as is often the case, complete replacement.

Since these systems are designed for the production of a specific item, their cost and production life are calculated on the basis of anticipated product life. Increased competition in manufacturing, coupled with increasing consumer demands for new products, means that product life-spans are decreasing. The cost effectiveness of dedicated production systems is therefore diminished accordingly. In addition, companies driving towards export competitiveness now find that they need to produce a "family" of products, tailored to specific global markets. These requirements engender a need for flexibility in production systems.


Flexibility in production systems is achieved through the ability of individual modules in those systems to respond to changes in part variety. This is, in turn, achieved through the use of fully programmable machining modules and flexible parts transport techniques.

It would be sensible to suggest that flexibility in manufacturing could be achieved by taking a dedicated system, such as that shown in Figure 2.17, and replacing some or all of the fixed machining modules with CNC machines. This is common practice, and the result is referred to as a "Flexible Transfer Line" or FTL. However, it should be noted that the cost ratio of a CNC machining module to a dedicated module can be as high as 10 to 1. It is therefore not economically feasible to replace, say, 50 dedicated machining modules in a fixed system with 50 CNC machines. Generally, each CNC machine uses automatic, programmable tool changing in order to perform the functions of a number of dedicated modules. Thus, production flexibility is increased but throughput is decreased in what becomes a normal trade-off situation. While it may be common in a dedicated production line to have 50 to 100 machining stations, a flexible production system may have only 5 to 10 CNC machining stations, performing the same net function at a lower production rate. However, the benefits of flexible production become self-evident when production needs change, because flexible systems can respond very quickly to new demands, with minimal human intervention.

The transfer line arrangement of Figure 2.17, whilst very fast, does not provide the optimal transport mechanism for maximum production flexibility. Robots, Gantry Robots and Automated Guided Vehicles (AGVs), on the other hand, provide a high degree of transportation flexibility at the cost of production throughput. All three devices use relatively sophisticated control systems. AGVs in particular commonly use a powerful PLC as a Constant System Monitor (CSM), which governs the positions to which vehicles move.

The Flexible Manufacturing System (FMS), designed for a very wide variety of parts, is more likely to resemble the schematic shown in Figure 2.18, rather than that of 2.17. The intelligence level of each module (machine) within the system is much greater than that within the dedicated production line. CNC machines in sophisticated FMS environments may even be augmented with specialised robots to transfer tools from AGVs to machine tool carousels and vice-versa. The practicality of such systems has long been questioned by industrialists and their reliability has been unsatisfactory. The primary reason for this is the difficulty of creating a cohesive and robust control system that can recover from the numerous system faults that arise when handling a wide range of parts.



Figure 2.18 - Schematic of a Flexible Manufacturing System

In a complex FMS environment, where a number of different part-types may be within the system simultaneously, the controller is required to:

• Coordinate the flow of work-pieces of differing types, from one machine to another, based upon a rolling schedule
• Activate different part programs on CNC machines, as required by the part-types present in the system
• Down-load part programs to CNC machines as required by the machines
• Coordinate (inter-lock) the role of the work-piece transport system with the operation of CNC machines.


Some of these functions can be (and sometimes are) implemented through the hard-wired inter-locking of devices to the FMS controller, which can be a powerful PC, PLC, workstation or mini-computer. However, FMS control is more appropriately achieved through data communications between the controller and the other computerised (intelligent) modules within the system. In a complex FMS environment, the system controller must have the capacity to interrogate other equipment whilst programs are running. This gives the controller access to a wide variety of information regarding the status and error-conditions of machines, thereby allowing for intelligent decision making in the control algorithm.

Simple hard-wiring techniques only allow devices to exchange one piece of data per wire (say an on/off state or transducer voltage). They do not allow one computer to transfer data files to another computer. This of course means that down-loading of CNC machine programs from a supervisory computer cannot be achieved with a hard-wired system alone. In simple, hard-wired FMS installations, machine programs are normally resident in the local memories of each machine during a production run. Programs are generally down-loaded (or file-dumped) to machines, via data communications links, prior to the start of automatic FMS control.

One of the major benchmarks of FMS is the ability to tolerate and reconcile fault conditions. Each module in the system performs a complex task and is therefore subject to a large number of possible faults or errors. It is costly for an FMS controller to shut down an entire system simply because one machine has developed a fault. The objective is for the controller to attempt to maintain orderly and safe system operation even under certain fault conditions. However, as previously stated, this is far more readily achieved in a rarefied academic environment than it ever has been in industry.

One of the major problems in FMS is the difficulty involved in integrating a range of different and proprietary computer controllers (PLCs, CNCs, robot controllers, etc.) via computer networks. There are considerable problems with lack of standardisation and an overwhelming sensation that many of the "intelligent" controlling devices to be linked in an FMS were designed to operate as "islands" of control, without the necessary functionality for interaction. These sorts of integration problems are the subject of another book, but as we progress through this text we shall see why it is that integration problems arise with computer based devices.


Chapter 3
Fundamental Electrical and Electronic Devices and Circuits

A Summary...
• An overview of the electrical and electronic devices that are the basis of modern analog and digital circuits.
• Basic analog devices including diodes, Bipolar Junction Transistors (BJTs) and Field Effect Transistors (FETs).
• Diode based circuits including regulators and rectifiers.
• Simple analog transistor amplifier circuits.
• Operational amplifier circuits.
• Transistors for digital logic.
• Interfacing circuits to one another - input/output characteristics.
• Thyristors and thyristor based circuits.

(Chapter overview diagram: a computer exchanges digital voltages with the outside world through digital-to-analog and analog-to-digital conversion stages; the resulting analog voltages are linked to external energy forms and external systems through scaling or amplification, isolation, protection circuits and energy conversion stages, each supported by external voltage supplies.)


3.1 Introduction to Electronic Devices

Most people can readily relate to the concept of resistance in electric circuits and to the concept of energy storage and release through inductance and capacitance. Our understanding is greatly enhanced by the fact that relatively straightforward and systematic techniques can be used to model circuits with these elements. In the so-called "time-domain" we can use the simple interrelationships between voltage and current in these devices to analyse these passive circuits:

      v = iR           (Resistor Relationship - Ohm's Law)

      i = C dv/dt      (Capacitor Relationship)

      v = L di/dt      (Inductor Relationship)                    ...(1)

We couple these relationships with the use of Kirchhoff's voltage and current laws in order to analyse circuits. As circuits become more complex, we introduce the traditional mathematical approaches to the solution of differential equations in order to determine the transient and steady-state conditions of circuits. The Laplace Transform technique and the Phasor-Method technique are, respectively, the two most common (and interrelated) methods used to solve for transient and steady-state circuit conditions.

When we introduce common electronic devices, such as diodes, transistors, etc. into our circuits, analysis becomes far more complicated - particularly when circuits are used for analog applications. Not only do we have to contend with all of the above analysis techniques, but we additionally need to consider dependent voltage and current sources that add substantially to analysis problems. Moreover, analysis of circuits with devices such as diodes and thyristors requires the use of intuition in order to simplify circuits that are otherwise unwieldy. It is therefore much more difficult to develop systematic techniques for analysing and understanding the operation of such circuits.

It is not only the analysis of analog electronic circuits that causes problems. Implementation introduces a whole range of complex problems with which we need to contend. It is often said that the implementation and testing of analog electronic circuits is composed of 10% design and 90% trouble-shooting. This rule-of-thumb arises because of the parasitic characteristics of each of the electronic devices that we will be examining in this chapter.
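The capacitor relationship in equation (1) can be illustrated numerically. The short sketch below steps a simple series resistor-capacitor charging circuit forward in time using i = C dv/dt; the component values are arbitrary examples and the crude Euler integration is used purely for illustration.

    #include <stdio.h>

    int main(void)
    {
        const double Vs = 10.0;      /* source voltage (volts)   */
        const double R  = 1000.0;    /* series resistance (ohms) */
        const double C  = 100e-6;    /* capacitance (farads)     */
        const double dt = 1e-4;      /* time step (seconds)      */

        double vc = 0.0;             /* capacitor voltage        */

        for (int k = 0; k <= 5000; k++)              /* 0.5 seconds of charging     */
        {
            double i = (Vs - vc) / R;                /* Ohm's law for the resistor  */
            vc += (i / C) * dt;                      /* from i = C dv/dt            */

            if (k % 1000 == 0)                       /* print every 0.1 s           */
                printf("t = %.1f s   vc = %.3f V\n", k * dt, vc);
        }
        return 0;
    }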


Most of the devices which we shall look at in this chapter are fabricated on semiconductor materials (Group IV in the Periodic Table) that have been doped with Group III and Group V impurities. The interaction of doped regions within the semiconductor gives rise to the valuable properties of each particular device and also leads to other parasitic or non-ideal behaviour patterns. As a result of these parasitic (non-ideal) characteristics, it is often difficult to justify the cost of engineers designing and debugging analog electronic circuits from first-principles. In addition, the staggering growth in digital computing since the 1960s has led to a need for circuits that can co-exist in heterogeneous analog/digital circuits. For these reasons, a number of interesting trends have arisen:

(i) An enormous range of commonly used electronic circuits are normally available in a modular form in single-chip Integrated Circuit (IC) packages

(ii) IC packages are normally designed in family groups so that a range of different devices can be put together in "building-block" fashion to create new systems

(iii) Analog devices are often made compatible with digital circuits in order to facilitate bridge building between computers and continuous external signals.

In terms of power electronics (ie: the conversion of low energy electronic signals to high-voltage and/or high-current outputs) it is also necessary to note specific trends that have arisen in electronic devices. Firstly, there is the ability of small, single-chip semiconductor devices to absorb, supply and switch high currents and voltages. Secondly, there has been a trend away from the traditional analog approach to circuit design. As we shall see later in this text, devices such as transistors can form far more energy-efficient amplification circuits when they are used as digital switches in Pulse Width Modulation (PWM) based circuits, rather than when they are used as analog (linear) amplification devices.

There are many issues that need to be examined in detail before one can carry out any electronic circuit design that will have industrial relevance in terms of reliability and accuracy. The objective of this chapter is not to make you an expert in electronic circuit design, but to assist you in understanding the basic phenomena involved, so that you can make intelligent decisions in the analysis and application of the semiconductor modules and electronic interfacing devices required in modern systems design.


3.2 Diodes, Regulators and Rectifiers

3.2.1 Fundamentals and Semiconductor Architecture

Diodes are the most basic of electronic devices and are an important part of any electronic circuit design because they (and their controlled derivatives such as thyristors) are used for:

• Providing uni-directional current paths through a circuit
• Regulating and limiting voltages
• Power supplies
• Converting a.c. signals into d.c. (rectification)
• Converting d.c. signals into a.c. (inversion).

Modern diodes are formed through the p-n junction, which can be created by doping intrinsic (pure) silicon with Group V elements (giving n-type semiconductor) and Group III elements (giving p-type semiconductor). This is shown in Figure 3.1.


Figure 3.1 - The Semiconductor Diode


Intrinsic (pure) Silicon is charge neutral (equal number of protons and electrons) and is a poor conductor because it exists in a "covalent lattice" form, with all its valence shells complete. However, introducing charge neutral Group V elements also introduces additional electrons free for conduction (n-type semiconductor). Introducing charge neutral Group III elements reduces the total number of electrons and thereby introduces "holes" for conduction (p-type semiconductor).

When we have p-type semiconductor butted against n-type semiconductor, we have a region of instability. At the junction, the excess electrons in the n-type semiconductor (majority n-type carriers) recombine with the excess holes in the p-type semiconductor (majority p-type carriers). The junction region is therefore normally depleted of excess carriers and is referred to as the "depletion region". The n side of the junction is depleted of electrons and therefore has a net positive charge and, because the p side of the junction is depleted of holes, it has a net negative charge. In other words, there is a barrier potential formed across the junction (referred to as Vxy in Figure 3.1). In a silicon based diode, the junction potential Vxy is in the order of 0.7 volts. In a germanium based diode, the junction potential is in the order of 0.3 volts.

If we apply a positive external voltage supply (Vac) to the diode (as shown in Figure 3.1), then the potential of the anode is higher than that of the cathode. Majority carriers in the p-type material (holes) are repelled from the positive side of the supply towards the junction, thereby replenishing the depletion region on the p-side. Similarly, majority carriers in the n-type material are repelled from the negative side of the supply towards the junction, thereby replenishing that region. Provided that the applied voltage (Vac) is greater than the opposing barrier potential (Vxy), the diode can freely conduct current.

If we apply a negative external supply (Vac) to the diode, such that the cathode potential is higher than the anode potential, then majority carriers are attracted away from the junction, thereby increasing the depletion layer width and thus prohibiting the flow of current. The diode acts as an open circuit. In practice, a small leakage current still flows due to the presence of minority carrier holes and electrons in the vicinity of the depletion region. However, if we make the negative supply extremely large, then the potential across the junction is sufficient to force carriers across the depletion region. The structure of the junction effectively breaks down because new electron-hole pairs are created and conduction can once again occur. This is referred to as "avalanche breakdown". Provided that power dissipation in the diode is limited, the avalanche breakdown is not destructive.

A first-order approximation of diode behaviour is to say that the diode is a perfect conductor (short-circuit) whenever it is forward biased (ie: Vac positive) and a perfect insulator (open-circuit) whenever it is reverse biased (ie: Vac negative).


A second-order approximation of the behaviour of the diode structure (of Figure 3.1) is given in Figure 3.2. It shows a device which is an ideal conductor (short-circuit) when forward biased, provided that the opposing barrier potential has been exceeded. It also shows a device which behaves as an ideal insulator (open-circuit) when reverse biased - provided that the reverse breakdown voltage is not exceeded. After the reverse breakdown voltage is exceeded, the diode becomes an ideal conductor in the reverse direction.


Figure 3.2 - Second-order Approximation of Diode Behaviour

A third-order approximation of diode behaviour takes into account the reverse leakage current of the diode and the resistance of the bulk semiconductor material (which can never be an ideal conductor or short-circuit). The resistance of the semiconductor material contributes to what is termed the "bulk resistance" of the diode. The third-order characteristic for a Silicon diode is shown in Figure 3.3, using different scales for the forward and reverse bias regions. When analysing circuits containing diodes, we intelligently select the diode approximation model which is best suited to our level of analysis. The circuit model for each order of approximation is shown in Figure 3.4.
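A simple series example (with arbitrary, illustrative values) shows how little the choice of model often matters in practice. For a 5 volt source in series with a 1 kΩ resistor and a forward-biased silicon diode, the first-order model predicts a current of 5 mA; the second-order model predicts (5 - 0.7)/1000 ≈ 4.3 mA; and a third-order model with, say, a 0.65 volt knee and 10 Ω of bulk resistance predicts (5 - 0.65)/1010 ≈ 4.3 mA. The second-order model is therefore usually adequate unless the source voltage is comparable to the barrier potential itself.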



*Note Different Scales in Forward and Reverse Directions

Figure 3.3 - Third-order Approximation of Silicon Diode Behaviour

None of the approximate models fully describe the characteristics of the diode, particularly in the so-called "knee" regions where the diode starts to conduct. An accurate model is not generally necessary and makes any practical analysis of circuits extremely difficult. In realistic situations, where we wish to accurately determine currents in a circuit, we generally measure and plot a voltage-current characteristic and then use the graphical "load-line" technique to determine exact operating points. The technique for analysing circuits with diodes is relatively straightforward. One generally starts with the first order approximation of the diode and redraws the circuit diagram at least twice - once for the condition where the diode is forward biased and once for the condition where the diode is reverse biased. A third diagram is required if the diode is likely to go into reverse breakdown. The operation of the circuit can then be traced through via normal network analysis principles. Once the general operation of the circuit is understood, a more accurate picture can be obtained by substituting second and third-order models. The power dissipation in a diode is simply calculated by multiplying the operating voltage and current together. In situations where voltages and currents are time varying, the power consumption is also time-variant and therefore the average power needs to be derived through integration. The average power determines the heating in the diode and therefore its susceptibility to damage. Note that the r.m.s. (root mean square) value of a power waveform has no physical significance whatsoever in engineering terms and should never be used for calculations.


(Figure 3.4 collects the equivalent circuits for each level of approximation: first-order models for forward and reverse bias, second-order models which add the 0.7 V barrier potential and the breakdown voltage Vb, and third-order models which also include the bulk resistance Rb and the reverse leakage current.)

Figure 3.4 - Circuit Approximations for Silicon Diodes


3.2.2 Zener Diodes for Voltage Regulation

Zener diodes are a special type of p-n junction diode which are specifically designed for operation in the reverse breakdown region. These diodes do not suffer from avalanche breakdown, but rather from a phenomenon known as "Zener" breakdown. The doping in the p and n regions of a Zener diode is much higher than in a normal diode. This creates a far smaller depletion region at the junction and subsequently, a lower reverse-bias voltage will cause the junction to break down. Zener breakdown is not destructive in diodes, provided that the power dissipation within the device is kept within defined limits.

The forward characteristic of the Zener diode is similar to that of the traditional diode. However, Zener diodes are seldom operated in the forward active region because it is their reverse characteristic that is of value. The reverse breakdown voltage of a Zener diode can be well defined by the semiconductor manufacturer and varies little with current. End-users can purchase Zener diodes with reverse breakdown voltages ranging from a couple of volts through to hundreds of volts. These features make the Zener diode ideal for voltage regulation, since the voltage drop across the diode can be selected and varies little with the current flowing through it. The reverse breakdown characteristic of the Zener diode is therefore of prime importance and the forward active region is seldom discussed at length.

The second and third order approximate circuit models for the Zener diode are shown in Figure 3.5. Note that the circuit symbol for a Zener diode is slightly different to that of the traditional diode (small wings are drawn on the cathode side). A typical characteristic for a Zener diode is shown in Figure 3.6. The voltage cited as the reverse breakdown voltage of the diode is quoted at a particular test current. Looking at the characteristic, it is clear that if we wish to achieve the rated breakdown voltage, then we need to ensure that we operate the diode at the appropriate current rating.

Connecting a Zener diode across a component (ie: in parallel) not only helps protect the component from voltage spikes, but additionally regulates the voltage across that component and maintains it at approximately the reverse breakdown level of the diode.



Figure 3.5 - Approximate Models for Zener Diode in Reverse Breakdown


Figure 3.6 - Typical Zener Diode Characteristic


3.2.3 Diodes For Rectification and Power Supplies

Those who are not familiar with modern semiconductor technology could be forgiven for believing that semiconductors can only be used for the fabrication of low power diodes, transistors and digital circuits. In fact, a good proportion of modern semiconductor applications are in high power circuits and there is a wide range of devices that can handle both high currents and high voltages.

Diodes in particular are used in high powered circuits for conversion from a.c. to d.c. (rectification) and complete converters are also commonly available as single-module solid-state devices. Rectification is one of the most important functions that diodes are used to perform because there is an enormous demand for d.c. power supplies in engineering design - particularly with the overwhelming emphasis on digital circuit technology and computing.

When we talk of designing a power supply, there are essentially three basic blocks that we need to look at:

(a) The transformer
(b) The rectification
(c) The regulation.

(a) The Transformer

The transformer is used to convert an incoming a.c. waveform (normally from a general purpose power outlet) to a suitable level for rectification. Since power supplies can be either single-phase or three-phase, we need to have transformers for both applications. However, for most analyses, the three-phase transformers are essentially treated as three, single-phase transformers.

The basic construction of a single-phase transformer is shown schematically in Figure 3.7 (a). This is composed of a laminated ferromagnetic core (that provides a low reluctance magnetic flux path) and a primary and secondary winding. The core is laminated to reduce power losses due to the circulation of unwanted "eddy currents".

When we model transformers, we begin with a concept known as the "ideal transformer", which takes no account of losses in real systems, and then add the parasitic "loss" elements. The ideal transformer is shown in the shaded region of Figure 3.7 (b) and its characteristics are as follows:

• The ratio of the secondary voltage to the primary voltage is the turns ratio (voltage transformation):

      v2 = (N2 / N1) v1


• The ratio of secondary current to primary current is the inverse of the turns ratio (current transformation):

      i2 = (N1 / N2) i1

• An impedance placed on the secondary side of a transformer (Z2) has a value that can be measured on the primary side as Z1. Z1 is said to be the value of the secondary impedance "referred" to the primary side and is equal to the secondary impedance multiplied by the square of the turns ratio (impedance transformation):

      Z1 = (N1 / N2)² Z2
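A quick numerical check of these three relationships is sketched below, using arbitrary example figures (a 2400-turn primary, a 240-turn secondary and a 240 volt primary supply).

    #include <stdio.h>

    int main(void)
    {
        const double N1 = 2400.0, N2 = 240.0;   /* primary / secondary turns  */
        const double v1 = 240.0;                /* primary voltage (volts)    */
        const double i1 = 0.5;                  /* primary current (amps)     */
        const double Z2 = 48.0;                 /* secondary load (ohms)      */

        double v2 = (N2 / N1) * v1;             /* voltage transformation     */
        double i2 = (N1 / N2) * i1;             /* current transformation     */
        double Z1 = (N1 / N2) * (N1 / N2) * Z2; /* impedance referred to the  */
                                                /* primary side               */

        printf("v2 = %.1f V  i2 = %.1f A  Z1 = %.0f ohms\n", v2, i2, Z1);
        return 0;
    }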

The losses in a transformer include:

• Resistances of the primary and secondary windings
• Flux leaking from the core, so that the same magnetic flux does not couple both windings
• Power losses due to "eddy currents" circulating in the core
• Power losses arising from the magnetisation and de-magnetisation of the core through the application of a.c. voltages - that is, "hysteresis" losses.

A complete transformer model is difficult to work with in an analytical sense, and so a number of minor approximations are made to create a working model. The approximate "working" model for a single-phase transformer is shown in Figure 3.7 (b). This circuit lumps together primary and secondary resistances and leakage reactances into single elements, "referred" to the primary side. The approximate model also includes a shunt resistance to represent hysteresis and eddy current losses and a shunt inductance to represent the magnetising current required for the transformer to operate even when no load current is flowing. This model is adequate in practical terms and makes analysis considerably easier than the complete model.


(In the circuit model of Figure 3.7 (b): Rw is the combined winding resistance of the primary and secondary coils; XL is the reactance representing flux leakage in the transformer core; Xo is the reactance of the core, representing the magnetisation current; and Ro is the resistance representing hysteresis and eddy current losses in the core.)

Figure 3.7 - (a) Schematic of Transformer Construction (b) A Manageable Circuit Model for a Transformer (c) Transformer Characteristics for Varying Load Current and Operating Frequency


The approximate circuit model of the transformer reveals the effects of the basic loss elements in the following way:

• As the secondary (load) current, I2, increases, the voltage drop across the winding resistance and flux leakage reactance increases, resulting in a decreasing secondary voltage (for a constant primary voltage) - this is referred to as regulation and is responsible for the characteristic shown in Figure 3.7 (c)
• At zero frequency, the transformer is essentially short-circuited except for the winding resistance. Transformers therefore are unable to transfer d.c. voltages from the primary side to the secondary side
• As frequency increases from zero, the impedance due to the leakage reactance (jωL) increases and the output is attenuated, ultimately to zero as frequency tends towards infinity.

(b) Rectification

Nearly all modern large-scale electricity generation is a.c. in nature. This is primarily because the generation and transformation of a.c. voltages was originally far more practical than the d.c. alternative. In some instances the end-use of the electricity generation process is also very efficient in the a.c. form - particularly in the case of a.c. machines (motors). In other situations, both a.c. and d.c. are equally suitable - for example in resistive heating or incandescent lighting.

However, there are a great number of applications for which a.c. is not well suited. These are primarily in the field of small-scale electronics (both digital and analog), metal smelting (furnaces, etc.) and historically in motor speed-control (servo applications). In recent years, however, in the case of motor speed-control, a.c. technology has also reached levels comparable to d.c. In the case of electricity transmission, it has been suggested in recent years that losses can be minimised by transmission of d.c. rather than a.c. and hence that there needs to be conversion from a.c. to d.c. and back to a.c. again.

Regardless of the relative merits of either system, the result of the differing end uses for electricity is that we need to be able to convert from a.c. to d.c. and from d.c. to a.c. Diodes are the primary mechanism for conversion from a.c. to d.c. and the associated process is referred to as "rectification". The reverse process (d.c. to a.c.) involves the use of triggered diodes (called thyristors) and is referred to as "inversion". We will briefly look at the process of inversion later in this chapter.


Rectification is based upon the use of transformers to provide a suitable level of source voltage which can then be converted to the required d.c. level. The rectified d.c. output voltage normally contains some ripple which is eliminated via two techniques:

• For low load currents, a capacitor is connected across the load to reduce output ripple
• For high load currents, an inductor (choke) is placed in series with the load to reduce output voltage ripple.

The simplest circuit is the single-phase, half-wave rectifier shown in Figure 3.8, with both low and high-current filtering for minimisation of ripple. The circuit is analysed empirically in several stages:

(i) The capacitance or inductance filtering is ignored and the turns ratio of the transformer is ignored

(ii) The output voltage waveform is determined for the situation where the transformer secondary voltage has a positive polarity (in Figure 3.8, this means the diode is approximately a short-circuit)

(iii) The output voltage waveform is determined for the situation where the transformer secondary voltage has a negative polarity (in Figure 3.8, this means that the diode is approximately open-circuit)

(iv) The effects of filtering components are then included. The net effect of the inductance or capacitance is to store and release energy in such a way as to minimise the change in current or voltage, respectively

(v) The load current is determined by dividing the load voltage by the load resistance - the waveform shapes are identical for resistive loads.

The results of the multi-stage analysis of the rectifier circuit are shown in Figure 3.9. Note the smoothing effects of the capacitance or inductance, which introduce an exponential decay into the output waveform (with a time-constant of RL·C or L/RL) during periods where the output would otherwise have been zero. The final stage of analysis is of course to determine the load current, which is simply obtained by dividing the load voltage by the load resistance.

The output voltage waveform is uni-polar but is still time-variant. We therefore quantify such values by referring to their average or r.m.s. level, rather than their peak level. We can also approximately consider the effects of diode voltage drop, by subtracting a value of, say, 0.7 volts from the load voltage waveform while the diode is turned on.
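The behaviour described above can also be checked numerically. The sketch below steps the capacitively smoothed half-wave rectifier of Figure 3.8 (a) through several mains cycles, treating the diode as a constant 0.7 volt drop while it conducts and letting the capacitor discharge exponentially into the load while it is off; the component values are arbitrary examples.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double PI    = 3.141592653589793;
        const double Vpeak = 15.0;    /* secondary peak voltage (volts) */
        const double Vd    = 0.7;     /* assumed diode forward drop     */
        const double RL    = 470.0;   /* load resistance (ohms)         */
        const double C     = 1000e-6; /* smoothing capacitance (farads) */
        const double f     = 50.0;    /* supply frequency (hertz)       */
        const double dt    = 20e-6;   /* time step (seconds)            */

        double vout = 0.0;            /* capacitor / load voltage       */

        for (int k = 0; k < 5000; k++)               /* 0.1 s = five cycles  */
        {
            double t  = k * dt;
            double vs = Vpeak * sin(2.0 * PI * f * t);

            if (vs - Vd > vout)
                vout = vs - Vd;                      /* diode ON: output follows source */
            else
                vout -= (vout / (RL * C)) * dt;      /* diode OFF: capacitor feeds load */

            if (k % 100 == 0)                        /* print every 2 ms    */
                printf("t = %.4f s  vout = %.2f V\n", t, vout);
        }
        return 0;
    }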



Figure 3.8 - Simple Half-Wave Rectifier Power Supply with (a) Capacitance Smoothing for Low Load Currents (b) Inductance (Choke) Smoothing for High Load Currents


Figure 3.9 - Analysis of Half-Wave Rectifier With Capacitance or Inductance Filtering


One of the most common single-phase rectifiers is the bridge-rectifier, which is shown schematically in Figure 3.10. The diagram is shown in three parts. The first shows the total circuit and the other two diagrams consider the conditions where the output from the transformer (vab) has a positive and a negative polarity respectively. The net effect of the bridge is to provide a uni-polar output voltage and current independent of the input voltage polarity. We can again make the analysis more accurate by accounting for diode voltage drop and we can also analyse the effects of filtering (via capacitors, as shown in Figure 3.10, or via an inductive choke).


Figure 3.10 - Single-Phase Bridge Rectifier with Capacitance Filtering (a) Circuit Diagram (b) Effective Circuit Diagram for Voltage vab > 0 (c) Effective Circuit Diagram for Voltage vab < 0


The bridge rectifier is most commonly represented in circuits as a diamond of four diodes. In this text, however, it is displayed as in Figure 3.10 in a rectangular shape, so that its functional similarity to the three-phase bridge rectifier is more apparent.

The three-phase bridge rectifier is shown in Figure 3.11, connected to the secondary (star) windings of a three-phase star-star transformer. Note that while four diodes are required in order to fully rectify a single-phase sinusoidal voltage, only six diodes are required to rectify a three-phase voltage waveform. Note also that in Figure 3.11, an inductive smoothing choke is shown rather than the parallel capacitance alternative. The reason is that three-phase rectifiers are predominantly used with high load currents and hence inductive smoothing is the only practical option for such systems.


Figure 3.11 - Three-Phase Bridge Rectifier

The three-phase rectifier of Figure 3.11 is a little more complex to understand than its single-phase equivalent, but is functionally similar. Its operation is best understood by looking at the three-phase waveforms generated on the secondary side of the transformer. These are shown in Figure 3.12. Each of the six diodes can only conduct (turn on) whenever the magnitude of its corresponding phase voltage is greater than that of the other two phase voltages. In Figure 3.12, conduction regions are shown on the voltage waveforms with heavy lines. The output voltage at any time is the difference between the voltage on the top half of the bridge and the bottom half of the bridge and this is shown in the second part of the diagram in Figure 3.12, highlighting the six-phase ripple that is produced.



Figure 3.12 - Operation of Three-Phase Bridge Rectifier


Although the output (Vout) contains a six-phase ripple, we assume that in an ideal bridge rectifier, the output inductance is infinite and hence the d.c. output current is time invariant (ie: constant). This constant output current is made up of the currents flowing in diodes 1 to 6. The current in each of the diodes is therefore a rectangular pulse, whose duration is one third of the period of the phase voltage. These diode currents are shown in sequence in Figure 3.12.

The phase currents flowing from the secondary side of the transformer can be determined by graphically summing the appropriate diode currents (eg: IR1 = I4 - I1). If you carry out this computation you will note that at every instant in time, these phase currents sum to zero. This means that the three-phase bridge rectifier can be used with both Delta-Star and Star-Star transformer configurations. We can again improve our analysis by accounting for the traditional diode voltage drop on each of the six diodes.

There are many other configurations of single and three-phase rectifier circuits, generally less common than the bridge circuits discussed thus far. However, their analysis is carried out in an analogous manner, using a combination of graphical and analytical techniques as shown here.
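The construction of the output waveform can be verified with a short numerical sketch: at every instant the ideal bridge output is simply the most positive phase voltage minus the most negative one. The example below assumes a 240 volt (r.m.s.) phase voltage purely for illustration and reports the average output and the peak-to-peak six-pulse ripple.

    #include <stdio.h>
    #include <math.h>

    static double max3(double a, double b, double c)
    {
        double m = a > b ? a : b;
        return m > c ? m : c;
    }

    static double min3(double a, double b, double c)
    {
        double m = a < b ? a : b;
        return m < c ? m : c;
    }

    int main(void)
    {
        const double PI    = 3.141592653589793;
        const double Vpeak = 240.0 * sqrt(2.0);  /* phase peak voltage     */
        const int    N     = 360;                /* samples in one cycle   */
        double sum = 0.0, vmin = 1e9, vmax = -1e9;

        for (int k = 0; k < N; k++)
        {
            double wt = 2.0 * PI * k / N;
            double vr = Vpeak * sin(wt);
            double vy = Vpeak * sin(wt - 2.0 * PI / 3.0);
            double vb = Vpeak * sin(wt + 2.0 * PI / 3.0);

            /* top half of bridge minus bottom half of bridge */
            double vout = max3(vr, vy, vb) - min3(vr, vy, vb);
            sum += vout;
            if (vout < vmin) vmin = vout;
            if (vout > vmax) vmax = vout;
        }

        printf("Average output = %.1f V  ripple = %.1f V peak-to-peak\n",
               sum / N, vmax - vmin);
        return 0;
    }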

(c) Regulation

Thus far we have seen little that could lead us to designing a variable d.c. power supply. There are in fact a number of different techniques that can be used. The most obvious method is to use a variable resistance across the d.c. output and to tap the output voltage from the wiper arm. The problem with this technique is of course that it wastes a lot of energy and subsequently generates a lot of heat, neither of which are desirable traits in power supplies.

Another technique is to use a variable transformer (traditionally known by one of its early trademark names "Variac") to vary the a.c. voltage supplied to the rectification stage of the system. A Variac simply has a mechanical knob that is used to position a wiper, which effectively varies the number of turns across which the secondary voltage is extracted. This is then fed into a rectifier and ultimately to a load.

A third technique is to use a closed-loop amplifier to vary the a.c. input or d.c. output voltage of a power supply. We shall look at these devices later in this chapter.

Finally, the digital technique is to "chop" (switch on and off) the output d.c. voltage so that its average value increases or decreases according to the duty cycle (on:off ratio). If the output voltage waveform is filtered by an inductive choke then the net result is that a variable d.c. output has been obtained.
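The chopping technique can be illustrated with a trivial calculation: once the switching ripple has been filtered out, the mean output is simply the duty cycle multiplied by the supply rail. The values below are arbitrary examples.

    #include <stdio.h>

    int main(void)
    {
        const double Vdc = 24.0;                       /* supply rail (volts)       */
        const double duty[] = { 0.25, 0.50, 0.75 };    /* fraction of time "on"     */

        for (int k = 0; k < 3; k++)
            printf("duty = %.2f  ->  average output = %.1f V\n",
                   duty[k], duty[k] * Vdc);
        return 0;
    }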


There are however many instances where one does not need a variable power supply and the objective is to design a circuit which simply provides a very stable nominally-defined output voltage. As we have seen in (b) above, the rectifier circuit (in its own right) does not provide a pure, time-invariant output waveform and hence capacitive filtering or inductive choking are sometimes used. These techniques still leave some ripple in the output waveform. The solution to this problem is to use Zener diodes to regulate the output waveform and to minimise the ripple to a known range. Figure 3.13 (a) shows the output of a single-phase bridge rectifier being fed into a load via a limiting resistor (RL) and regulated by a Zener diode of a known reverse breakdown voltage. The load on the circuit could be a simple resistance or the front end of some other, more complicated circuit. We know that as long as the output voltage from the rectifier is lower than the reverse breakdown voltage of the Zener diode then the diode is effectively an open circuit and plays no part in the circuit. However, when the rectifier output voltage is greater than the reverse diode breakdown voltage then the diode begins to conduct and draws current away from the load. The voltage is then effectively tied to the reverse breakdown voltage of the Zener diode.


Figure 3.13 - (a) Zener Diode Regulating Output Voltage Vu from a Rectifier (b) Equivalent Circuit Replacing Diode With its a.c. Resistance Rz


The unregulated output voltage from the rectifier circuit is the sum of two components - a d.c. offset and an a.c. ripple (in this case Vu + vu). The principle of superposition tells us that we can always analyse such circuits by examining the effect of each voltage acting in isolation and then adding the results for the total solution. Figure 3.13 (a) is adequate for analysing the d.c. effects of the problem, but for the effect of the a.c. ripple, we need to look at the a.c. resistance of the Zener diode (Rz). The a.c. resistance of a Zener diode (normally the value quoted by manufacturers) is the slope of the reverse V-I characteristic of the diode obtained for a constant junction temperature (this is normally much lower than the d.c. value measured by users as in Figure 3.6). The equivalent circuit for a.c. operation is shown in Figure 3.13 (b). From this circuit we can determine that the output voltage fluctuation vl caused by the a.c. component (ripple) from the rectifier is given by:

      vl = ( Rz / (Rz + RL) ) vu                    ...(2)

Equation (2) is derived by simple voltage division, making the assumption that Rz is much lower than the load resistance Rx (which is normally the case in practical circuits), so that the parallel combination of the Zener diode and the load is approximately Rz. The output ripple can therefore be minimised by making the limiting resistance much larger than the a.c. resistance of the Zener diode. These sorts of circuits only work with low load currents and nominally constant load voltages. In other situations, transistorised voltage regulators need to be designed for greater stability.
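A quick numerical check of equation (2), using arbitrary example values, shows how effective this arrangement can be: with a Zener a.c. resistance of 10 Ω behind a 470 Ω limiting resistor, a 1 volt ripple on the rectifier output is attenuated to roughly 20 mV at the load.

    #include <stdio.h>

    int main(void)
    {
        const double Rz = 10.0;    /* Zener a.c. resistance (ohms)  */
        const double RL = 470.0;   /* limiting resistance (ohms)    */
        const double vu = 1.0;     /* ripple from the rectifier (V) */

        double vl = (Rz / (Rz + RL)) * vu;   /* equation (2)         */

        printf("Output ripple vl = %.3f V\n", vl);
        return 0;
    }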


3.3 Basic Transistor Theory and Models

3.3.1 Introduction

In terms of their wide-ranging operational characteristics, transistors are by far the most complex of all the discrete semiconductor-based electronic components. They are also the basis for an enormous range of analog and digital integrated circuits. There are essentially two major groups of devices to be examined herein - that is, the Bipolar Junction Transistors (BJTs) and the Field Effect Transistors (FETs). The FETs also have a sub-group known as the Metal Oxide Semiconductor Field Effect Transistors or MOSFETs, which have similar characteristics to the FETs, but some operational advantages under certain conditions.

As with p-n junction diodes, transistors can be analysed at a number of different levels. A complete discussion and analysis of the behavioural characteristics of transistors is beyond the scope of an entire book, much less a chapter section such as this. The purpose of this section is therefore to provide a brief overview of the basic transistor technologies, their functionality and typical circuits in which they are used. Firstly, we will examine the use of transistors in digital circuits, where they perform simple switching functions. Subsequently, we will examine the use of transistors in analog circuits.

In the discussions on transistor devices we again need to refer to the principle of superposition. We treat all d.c. voltages and a.c. signals separately in what are referred to as the "large signal" and "small signal" models, respectively. In transistor theory, we are generally not greatly interested in the total solution (d.c. offset + a.c. signal) but rather the individual components. In particular, the d.c. components are of primary interest in digital circuits and the small signal (a.c.) components are of primary interest in analog circuits. Moreover, we need to understand that in analog circuits the d.c. voltages are only used to place transistors into a linear (analog) mode so that the a.c. signals can be amplified.

There are many differing opinions on how transistor theory should be introduced. Some authors prefer to avoid the complexities of the so-called "small-signal" model, whilst others get so carried away with the semiconductor physics that the readers tend to lose sight of any practical applications of the transistors themselves. In this book, we will endeavour to tread the middle ground in order to provide as much insight into the design and analysis of analog circuits as is practical in this short treatise.


3.3.2 Bipolar Junction Transistors (BJTs)

The Bipolar Junction Transistor (BJT) is one of the earliest forms of transistor. It looks deceptively simple and yet its true functionality is quite complex. It can be formed by successively doping intrinsic (pure) Silicon with "n-type" (Group V) impurities, then a higher concentration of "p-type" (Group III) impurities, then again with a still higher concentration of "n-type" impurities. This creates a device with three regions and two p-n junctions in close proximity to one another. Simplistically, the transistor can be considered as two p-n junction diodes back to back.

Figure 3.14 schematically shows the so-called "grown" construction of the "npn" transistor, together with its symbolic circuit representation. The three "doped" regions within the transistor are connected to the outside world via conductors and the three resulting terminals of the device are referred to as the "collector", "base" and "emitter".

In this chapter we shall concentrate on the npn transistor, although the "pnp" transistor is also readily available as a complementary device. The only major difference between the two types is that the d.c. voltages (biases) applied to pnp devices need to be of opposite polarity to those applied to the npn devices described herein. In both transistor types, we normally refer to the base (B) as the input side and the collector (C) and emitter (E) as the output side.


Figure 3.14 - Grown (npn) Transistor Schematic and Circuit Symbol Representation


There are a number of points to note about the BJT. Firstly, it is not a symmetrical device. Like the diode, the transistor is fabricated on a piece of intrinsic (pure) semiconductor, successively doped with increasing concentrations of impurities. The impurity concentration of "n-type" dopant in the emitter is therefore much higher than that in the collector.

Secondly, the width of the base region of the transistor is of critical importance. In Section 3.2, we briefly examined the p-n junction. In the BJT there are two p-n junctions and the width of the base region is such that these two junctions can interact with one another, thereby producing a variety of different effects. Moreover, it needs to be noted that the width of the depletion region at each of the p-n junctions in the transistor is dependent on a number of factors, particularly the voltage applied across the junction itself (the "biasing" of the transistor). As the width of the depletion regions varies, so too does the effective base width of the transistor and hence a number of different effects can be achieved.

The simplistic, two-diode approximation of the transistor is shown in Figure 3.15.


Figure 3.15 - Transistor Diode Analogy for "npn" and "pnp" Devices


We can begin to examine the operation of the BJT by establishing the basic relationships between the currents flowing within the device (using the conventions shown in Figure 3.15). From Kirchhoff's current law, we know that:

IC + IB = IE    ...(3)

The transistor is designed such that there is a large "forward current gain" or Beta (β) and hence the collector current:

IC = β.IB    ...(4)

Since typical values of β can range from 100 to 1000, the base current is normally negligible compared to the collector current and hence IC is approximately equal to IE. However, the base current effectively flows into the emitter and hence the emitter current is actually slightly higher than the collector current.

Transistors can be placed into a variety of different modes depending upon the voltages applied to their terminals - that is, the d.c. "biasing". These modes are known as:

(i) Cut-off
(ii) Forward Active
(iii) Saturation
(iv) Reverse (Inverse) Active

We will not enter into a discussion of the physical phenomena that occur within the semiconductor as the d.c. voltages (biases) applied to the outside terminals are altered. Our objective herein is to summarise the electrical characteristics of the transistor for a range of different biasing conditions described in (i) - (iv). However, in order to understand the concept of forward and reverse biasing of junctions, it is necessary to refer back to our earlier discussions on diodes (section 3.2.1). A p-n junction is said to be forward biased when the potential of the "p" side is greater than the potential of the "n" side. The simple way to remember this is to think of the junction as being forward biased when "p" is connected to positive and "n" is connected to negative. The junction is reverse biased when this polarity is reversed. With these basic concepts in mind, the four operational modes of the BJT are summarised as follows:


(i) Cut-off Mode

A BJT device is cut-off when the emitter-base junction and the collector-base junction are both reverse biased. Looking at the diode representation of Figure 3.15, we can see that this will be achieved when VBE has a value less than the barrier potential voltage required to cause forward conduction across the emitter-base junction (typically 0.7 volts). In this condition, no base current flows and hence no collector current flows. If we view the transistor as a switch that facilitates current flow from collector to emitter, then we can say that a cut-off transistor is effectively open circuit between collector and emitter. Cut-off mode is used in digital circuits.

(ii) Forward Active Mode

A BJT device is placed into forward active mode when the emitter-base junction is forward biased (with a voltage greater than the junction potential) and the collector-base junction is reverse biased. When d.c. voltages are applied to the terminals of a transistor to establish this forward active mode, then a.c. signals can be superimposed onto the base to be amplified at the collector. When there are no a.c. signals input into the transistor, then the circuit is said to be in the "quiescent" state. However, when small signals are applied to the base, then the transistor can be used to amplify them in a linear fashion and hence forward active is the operational mode of BJTs in analog circuits. This mode is also referred to as the linear region for the device.

(iii) Saturation Mode

A BJT device is placed into saturation mode when the emitter-base junction is forward biased and the collector-base junction is also forward biased. In an "npn" transistor, this normally occurs when a relatively large d.c. voltage is applied to the base, thereby forward-biasing both transistor junctions. The net effect of this can be understood by examining Figure 3.15, where it can be seen that, between the collector and emitter terminals, the transistor becomes almost short-circuited because the two junctions behave like two forward biased diodes. In practice, there is a small residual voltage drop between collector and emitter - approximately 0.2 volts (much less than for individual p-n junction diodes). Saturation mode can be compared to the closed (short-circuit) state of a switch and hence forms the complementary digital circuit function to cut-off mode.


(iv) Reverse (Inverse) Active Mode

One may intuitively feel that a transistor is a device which will operate equally well in both directions, since its semiconductor structure is essentially an "npn" or "pnp" sandwich. However, the difference in doping levels between the collector and emitter means that the BJT is not symmetrical. Endeavouring to operate a transistor with the collector-base junction forward biased and the emitter-base junction reverse biased will place the transistor into reverse (inverse) active mode. While the transistor will still function, the forward gain will be greatly reduced and the transistor will be relatively inefficient. This mode is normally entered by accident, when the emitter and collector terminals are inadvertently swapped by a developer.

The circuits that are used in order to create the different transistor modes are called biasing circuits. There are many different types of biasing circuits and their names are often somewhat confusing to novices in the field. Biasing circuits actually have two major functions:

•	To provide quiescent voltages that will place the transistor into an appropriate mode of operation
•	To provide feedback that will stabilise the gain of the transistor.

We have already covered the need for the first function in our discussion of operating modes. The second function is of particular importance in analog circuits. When we wish to amplify a signal, we generally need to be sure that the gain will be well defined. However, the forward gain of a BJT device (the Beta) is an ill-defined value and is subject to variation due to the manufacturing process. The variation in the value of β between a number of transistors (of the same type) can be as high as three to one. For this reason it is rather futile designing circuits whose amplification is dependent upon this parameter.

The solution to the problem is extracted from classical control theory, where we feed a proportion of the output signal back into the input circuit for closed-loop control. This is shown schematically in Figure 3.16.


Figure 3.16 - Classical Closed-Loop Control System


In Figure 3.16, the open-loop gain is defined as the amplification of a device acting in isolation (that is, with no feedback). The circle with the cross and polarity signs is the common symbol for a summing junction - in the case of Figure 3.16, the summing junction contains a "+" and a "-" and hence the output of the junction (U) is the difference between the two inputs. The whole system is referred to as a negative feedback arrangement and causes the device with an open-loop gain of "A" to amplify the difference between the input and the output. Analytically this system is described as follows:
Xout = A.U

U = Xin - F.Xout

and hence:

Xout = A.(Xin - F.Xout)

which gives:

Xout / Xin = A / (1 + A.F) = 1 / (1/A + F)    ...(5)

Equation (5) simply tells us that if the open-loop gain of the system is large enough (that is, "A" tends to infinity), then the ratio of output to input under closed-loop control will be inversely proportional to the feedback. In an analog circuit, the device with an open-loop gain of "A" could simply be a transistor with a forward gain of β. The feedback arrangement is normally achieved through a simple network of resistors. The net result is that we can design bias circuits that fulfil both the mode selection and feedback roles and thereby provide us with the basis for stable amplifiers.

It is important to keep the classical control system model in mind when examining transistor circuits. Not only does a basic understanding of this model assist in analysing amplifier circuits, but it also assists in understanding the nomenclature used in regard to transistor circuits. For example, some circuits are referred to as "Common Emitter (CE)" or "Common Base (CB)". This terminology refers to the fact that the emitter or base (respectively) is common to both the input and output circuits.

A Common Emitter circuit is shown in Figure 3.17. This particular circuit can have a number of different roles. Firstly, it can be used to measure the output characteristics of a transistor, which show the dependence of IC upon VCE for a range of different base currents. A typical Common-Emitter output characteristic is shown in Figure 3.18.
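To get a numerical feel for equation (5), the following minimal Python sketch (with purely illustrative values for "A" and "F" - they are not taken from any circuit in this chapter) shows how strongly negative feedback desensitises the closed-loop gain to variations in the open-loop gain:

    # Closed-loop gain of the negative feedback system of Figure 3.16:
    #   Xout / Xin = A / (1 + A.F)      (equation 5)
    def closed_loop_gain(A, F):
        return A / (1.0 + A * F)

    F = 0.1                      # assumed feedback fraction, so 1/F = 10
    for A in (1000.0, 3000.0):   # open-loop gains differing by 3:1
        print(f"A = {A:6.0f}  ->  closed-loop gain = {closed_loop_gain(A, F):.2f}")

    # Even though A varies by a factor of three, the closed-loop gain stays
    # within about one percent of 1/F - the basis of stable amplifier design.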


There are, however, several points to note about the circuit shown in Figure 3.17. Firstly, we see the base and the emitter circuits grounded at the same point (common). Secondly, the resistance connected to the collector is referred to as the load. Since we know that the transistor can amplify the base current by a factor of β, we know that there is potential to use the transistor to drive a higher current load than the voltage source Vin may be able to provide. Thirdly, we note the presence of the d.c. supply rail Vcc.


Figure 3.17 - Common Emitter Circuit

(Axes: collector current IC in mA versus collector-emitter voltage VCE in volts; curves shown for base currents IB from 40 μA to 200 μA, with the saturation, forward active and breakdown regions indicated.)

Figure 3.18 - Typical Common-Emitter Output Characteristic of an npn Transistor


Most text books tend to understate the importance of the supply rail Vcc. It is referred to as a supply rail for two reasons: firstly, because it is normally connected directly to the d.c. power supply for the system and secondly, because it supplies all the energy required for amplification of signals. In other words, the transistor is really a mechanism for controlling the energy obtained from the power supply rails. In digital circuits the transistor acts as a switch whose open/closed status is dependent upon the emitter-base voltage. In analog circuits the transistor acts as an infinitely variable valve (or resistance), controlled by the emitter-base voltage, that regulates current flow from the collector through to the emitter.

Kirchhoff's current law tells us that because some base current is flowing into the transistor, the collector and emitter currents cannot be identical. There is an additional parameter which, like β, is also referred to as a current gain parameter. This other parameter "α" is defined as follows:

α = IC / IE    ...(6)

It is less useful than β because the base current in a transistor is normally much smaller than the collector current and hence the collector current and emitter current are almost identical for practical measurements.

Thus far, we have only examined the d.c. characteristics of the transistor circuit shown in Figure 3.17 and the output characteristic of Figure 3.18, which shows some of the typical operating regions of the transistor. The d.c. voltages applied to the transistor terminals as a result of biasing circuits such as that in Figure 3.17 are referred to as quiescent operating conditions and it is conventional to represent all quiescent (d.c.) voltages and currents with upper-case letters. We shall later see that we can superimpose a.c. signals (represented by lower-case letters) onto the base terminal of the transistor so that it can be used as an analog amplifier. However, we first need to look at the role of the transistor as a digital switch, where we are only interested in using the device to switch quiescent voltages and currents on and off.

Referring again to Figure 3.17, which illustrates the common-emitter transistor configuration, we can see that if we set the input voltage Vin to zero, then the emitter-base voltage is less than the barrier potential and hence the emitter-base junction is reverse biased, as of course is the collector-base junction. As a result, the transistor is said to be "cut-off" (open-circuit) and hence no current flows from the collector to the emitter. The quiescent output voltage Vout is therefore equal to Vcc:

Vout = Vcc - IC.Rc = Vcc      (since IC = 0)
In other words, if we input a low voltage, then we obtain a high voltage as an output from such a circuit.


On the other hand, if we input a high quiescent voltage into the base of the transistor (the same size as Vcc, for example), then we force the transistor into saturation mode, where the collector and emitter are effectively short-circuited together (in fact a 0.2 volt drop across VCE is typical). As a result, the output voltage is close to zero. In other words, a high input voltage results in a low output voltage. This circuit is referred to as a Boolean inverter because the output has the inverse status of the input.

For practical reasons, an actual inverter is fabricated onto a single piece of semiconductor and is typically made up of a number of transistors. A realistic circuit diagram for an inverter fabricated from BJTs is shown in Figure 3.19 and typically, a number of inverters would be integrated onto the same piece of semiconductor for economic reasons. Boolean logic circuits fabricated from BJTs are the oldest digital circuits and are referred to as "Transistor to Transistor Logic" or "TTL", typically operating with a value of Vcc equal to around 5 volts.
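As a rough numerical illustration of these two switching states, the following Python sketch estimates the output levels and load current of the simple common-emitter switch of Figure 3.17. The supply rail, load resistor and saturation voltage are assumed example values rather than figures taken from the text:

    # Output levels of a common-emitter transistor switch (Figure 3.17).
    # All values below are assumed for illustration only.
    VCC = 5.0        # supply rail (volts)
    RC = 1000.0      # collector load resistor (ohms)
    VCE_SAT = 0.2    # typical collector-emitter voltage in saturation (volts)

    # Cut-off: no collector current flows, so there is no drop across Rc.
    vout_cutoff = VCC                      # logic "high" output

    # Saturation: collector-emitter behaves almost like a closed switch.
    vout_saturated = VCE_SAT               # logic "low" output
    i_collector = (VCC - VCE_SAT) / RC     # current drawn through Rc

    print(f"Cut-off output    : {vout_cutoff:.1f} V")
    print(f"Saturated output  : {vout_saturated:.1f} V")
    print(f"Saturated current : {i_collector * 1000:.1f} mA through Rc")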


Figure 3.19 - Circuit Diagram for a realistic TTL Inverter Circuit

The circuit of Figure 3.19 can readily be analysed if one always assumes that the transistors therein can only be in either cut-off mode or saturation mode. For example, if Vin is low, then Q1 saturates (because both the collector and emitter junctions are forward biased), thereby short-circuiting the base of Q2 to low. Q2 is therefore cut-off and hence the base of Q3 is high. Q3 saturates and the output diode conducts, and therefore the output voltage is high. The reverse analysis can be applied for a high input.


Another feature of the TTL circuit is the output stage, composed of two transistors (Q3 and Q4) connected together in what is referred to as a "totem-pole". This feature is common to a number of TTL circuits and is designed such that only one transistor or the other conducts at any one time.

The circuit of Figure 3.19 also illustrates another important point about digital circuits integrated on a single piece of semiconductor material. Transistors switching from cut-off to saturation (and vice-versa) only consume a small amount of energy. However, it is evident that there are also resistors fabricated into the digital circuit. These linear elements dissipate energy and generate heat. As long as the current flowing through these devices is small, then there is no problem - however, if the current flowing through the resistors is large, then they become a potential source of device failure.

Let us assume that the output of the circuit in Figure 3.19 is used to drive a load (eg: a resistor or light-emitting-diode between the output terminal and ground). When the transistor Q3 is saturated, then the current flows from the supply rail, through the 130 Ω resistor, through Q3 and through the external resistive load. If the load is large (ie: low resistance) then the current flowing through Q3 and the 130 Ω resistor is also large. It is the 130 Ω resistor that can cause problems, since the current flowing through it generates heat in the small semiconductor. For this reason, such a circuit can only provide (source) a very limited amount of output current.

The level of current available for a load (eg: light-emitting diode, relay, etc.) can be increased by using a different form of TTL chip that eliminates the "heat-generating" totem-pole output resistor. This is known as "Open-Collector TTL", because the resistor, npn transistor and diode in the normal totem-pole output (shown in the shaded region of the inverter of Figure 3.19) are omitted. Instead, the load is connected between the supply rail and the collector of the npn output transistor (Q4 in Figure 3.19), via an external, current-limiting resistor. The external resistor can of course be rated at a higher power level than the semiconductor one that is normally fabricated in the totem-pole. The other advantage of the open-collector gates is that the outputs from a number of gates can be "tied" together in order to generate a Boolean "AND" function. This is referred to as "wired" logic and cannot be achieved with standard TTL.

There are a number of different digital circuit devices that can be fabricated into Small Scale Integrated (SSI) circuits in order to implement common Boolean functions. The generic name for such devices is gates, and we will examine these and their logic further in Chapter 4.

At this stage however, it is necessary to examine the operation of the BJT as an amplifier. This often causes some confusion and so we will take a step by step approach to the analysis. The key point to remember is that amplifier circuits are analysed by using the principle of superposition. Firstly we analyse the quiescent circuit, assuming that all signal voltages are zero, and then we analyse the small-signal circuit, assuming that all the d.c. voltages are zero. The actual voltage at any point in the circuit is the sum of the two components. However, we normally only concern ourselves with either one or the other.


Figure 3.20 shows the common-emitter circuit of Figure 3.17 in the way it would normally be modified for amplification purposes. The d.c. voltage source at the base of the transistor is actually replaced with a resistive ladder arrangement and the values of the resistors are chosen to create a situation where the emitter-base junction is forward biased and the collector-base junction is reverse biased, thus setting the transistor into its normal (forward active) mode of operation. A resistor, Re, has been placed between the emitter terminal and the ground to provide another negative feedback element that helps stabilise the circuit. If the collector current starts to increase in this circuit, the voltage drop across Re increases and hence the emitter voltage increases; the base-emitter voltage therefore decreases, hence the base current decreases and hence the collector current decreases. The overall circuit is called an "emitter-feedback" circuit.


Figure 3.20 - Emitter-Feedback Circuit Commonly Used for Amplification

Figure 3.21 shows the same circuit, with the resistive ladder replaced by its "Thévenin Equivalent" circuit, which is composed of a voltage source in series with a resistance Rb (refer to Section 3.4). The effective values of the voltage source and resistance are simply calculated from the original circuit of Figure 3.20 as follows:

VB = ( R2 / (R1 + R2) ) . Vcc

Rb = ( R1 . R2 ) / (R1 + R2)    ...(7)



Figure 3.21 - Emitter-Feedback Circuit with Thévenin Equivalent of Base Biasing

Given the circuit of Figure 3.21, the output characteristic of the transistor (as shown in Figure 3.18) and the fact that the base-emitter voltage must be greater than approximately 0.7 volts in order for the transistor to be placed into active mode, we can calculate all the quiescent voltages and currents in the circuit. We can then look at applying small a.c. signals to the base of the transistor and determining the response of the circuit at the collector terminal.

In order to do this we need to place one capacitor between the a.c. source and the existing base circuit and one capacitor between the collector terminal and the a.c. output. The capacitors "de-couple" (separate) the a.c. and d.c. components and are normally selected so that they have negligible impedance at normal operating frequencies. The combined bias and signal circuit is shown in Figure 3.22.
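The quiescent calculation described above can be summarised in a few lines of Python. The component values, supply rail and the 0.7 volt base-emitter drop used below are assumed, illustrative figures (not taken from the text), and the usual approximation that the base current is negligible is applied:

    # Approximate quiescent operating point of the emitter-feedback circuit
    # of Figure 3.21.  Component values are assumed for illustration only.
    VCC = 12.0              # supply rail (volts)
    R1, R2 = 47e3, 10e3     # base bias ladder (ohms)
    RC, RE = 2.2e3, 1.0e3   # collector and emitter resistors (ohms)
    VBE = 0.7               # assumed base-emitter drop in forward active mode

    # Thevenin equivalent of the bias ladder (equation 7)
    VB = VCC * R2 / (R1 + R2)
    RB = R1 * R2 / (R1 + R2)

    # Neglecting base current, the emitter sits one VBE drop below VB.
    VE = VB - VBE
    IE = VE / RE            # emitter current (amps)
    IC = IE                 # IC is approximately IE for large beta
    VC = VCC - IC * RC      # quiescent collector voltage
    VCE = VC - VE

    print(f"VB = {VB:.2f} V, RB = {RB/1000:.2f} kohm")
    print(f"IC = {IC*1000:.2f} mA, VC = {VC:.2f} V, VCE = {VCE:.2f} V")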


Figure 3.22 - Emitter-Feedback Circuit with a.c. Input Signal Applied


Figure 3.22 shows the complete circuit and the distribution of a.c. signals and d.c. voltages at different points in the system. The circuit is now treated in two separate components and, since we have already looked at the d.c. components, we now need to examine the a.c. influences in the system. We do this, following the principle of superposition, by setting all the d.c. voltages in the system to zero.

When we began to analyse the BJT, we were only concerned with its characteristics under large-signal (quiescent / d.c.) conditions and we did our analysis using a very simplistic model, which we did not derive ourselves, but rather assumed to be correct from our limited overview of the semiconductor structure. It is interesting to note however, that when we wish to apply a.c. signals to the transistor, then we need a more sophisticated model. In particular, we are concerned with what happens when we make small a.c. perturbations about the quiescent operating points established by bias circuits, such as the emitter-feedback arrangement.

A thorough derivation of the a.c. (small-signal) model is outside the scope of this book, and so it will need to be taken as read that the model we have presented is correct. The small-signal or so-called "hybrid-π" model for the BJT (operating in forward-active mode) is shown in Figure 3.23.


Figure 3.23 - Small-Signal (Hybrid-π) Model for BJT

The small-signal circuit model is composed of a number of different elements which are explained as follows:

•	The element rbb' represents the resistance of the contact between the actual base terminal of the transistor (on the semiconductor) and the outside world. Typically rbb' is small (50 - 200 Ω) and is sometimes neglected in analysis.

•	The quantity gm is referred to as the mutual conductance or transconductance of the transistor and is obtained by taking the partial derivative of collector current with respect to base-emitter voltage, for a constant collector-emitter voltage. The value of gm depends upon the quiescent (d.c.) current in the transistor (the operating point) and the temperature at which the transistor is operated. The following expressions are used to derive the transconductance of a transistor:

gm = IC / VT

VT = k.T / q

where k is Boltzmann's Constant in Joules per degree Kelvin, q is the electron charge and T is the temperature in Kelvin.

From the above expressions, we can determine that the transconductance of a transistor at room temperature (25°C), where VT is approximately 25.8 mV, is given by:

gm = IC / 25.8   (mA/V)    ...(8)

•	The resistance between the semiconductor base and emitter terminals, rbe, is dependent upon both the β and the transconductance of the transistor and is defined as follows:

rbe = β / gm    ...(9)

•	The current source in the collector circuit is a dependent source whose value is determined by the transconductance and the base-emitter voltage:

ic = gm.vb'e

where:

vb'e = ib.rbe = ib.β / gm

and hence:

ic = β.ib    ...(10)

Equation (10) illustrates that the small signal and large signal current gains from base to collector are identical and either form of the relationships described in this equation can be used for analysis purposes.
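The following short Python sketch pulls equations (8), (9) and (10) together by evaluating the small-signal parameters at an assumed operating point. The quiescent collector current and β used here are illustrative values only:

    # Hybrid-pi small-signal parameters of a BJT at an assumed operating point.
    IC = 1.4e-3          # quiescent collector current (amps) - assumed
    BETA = 200           # forward current gain - assumed; varies widely in practice
    VT = 0.0258          # thermal voltage kT/q at room temperature (volts)

    gm = IC / VT         # transconductance (A/V), as in equation (8)
    rbe = BETA / gm      # base-emitter resistance (ohms), as in equation (9)

    ib = 5e-6            # a small base signal current (amps)
    ic = BETA * ib       # corresponding collector signal current, equation (10)

    print(f"gm  = {gm*1000:.1f} mA/V")
    print(f"rbe = {rbe/1000:.2f} kohm")
    print(f"ic  = {ic*1000:.2f} mA for ib = {ib*1e6:.0f} microamps")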


In order to analyse the emitter-feedback circuit of Figure 3.22 in terms of its performance as a small signal amplifier, we set the d.c. supply voltages to zero. In this case, the only voltage we need to concern ourselves with is Vcc (since VB is not actually a voltage source, but rather a Thévenin equivalent element). We also replace the circuit symbol for the BJT in Figure 3.22 with the small signal model of Figure 3.23. Finally, we can work on the assumption that Cb and Cc are large capacitances and hence their impedance is assumed to be zero. The end result of these three actions is the circuit shown in Figure 3.24.


Figure 3.24 - Complete Circuit Model for Emitter-Feedback Amplifier Incorporating Small-Signal Model for BJT

The circuit shown in Figure 3.24 will only provide us with the small (a.c.) signals existing in the amplifier. The original diagram in Figure 3.22 shows that at some points in the system both a.c. and d.c. voltages will exist and hence the total voltage is calculated by adding the small signal voltage to the quiescent d.c. values obtained from the large-signal model in Figure 3.22. The results are shown in Figure 3.25. Note well how the Vcc voltage is fixed at some d.c. level because it represents a d.c. power supply and hence no signal can exist at that point in the circuit.

An important question that arises therefore is what happens when the signal at the collector and its quiescent voltage add up to a value higher than the Vcc supply rail. The simple answer is that they can't. If, for example, the signal waveform at the collector is sinusoidal, as shown in Figure 3.25, and its level increases as a result of increasing input, then it will ultimately become distorted (flattened off to the Vcc level). This is referred to as clipping. Similarly, if the waveform at the collector is so large that its bottom end attempts to go below the emitter voltage, then it too will distort into the shape of the waveform already existent at the emitter. In fact, the emitter and collector voltages can never be separated by less than approximately 0.2 volts (the voltage at saturation).
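The clipping behaviour just described (and illustrated in Figure 3.25) is easy to reproduce numerically. The following minimal Python sketch superimposes a sinusoidal signal on an assumed quiescent collector voltage and limits the result between the supply rail and a level 0.2 volts above an assumed emitter voltage; all of the numbers are illustrative only:

    import math

    # Illustrative operating conditions - not taken from the text.
    VCC = 12.0        # supply rail (volts)
    VC_Q = 7.0        # quiescent collector voltage (volts)
    VE = 1.5          # emitter voltage (volts)
    AMPLITUDE = 6.0   # signal amplitude large enough to cause clipping

    for step in range(8):
        t = step / 8.0
        ideal = VC_Q + AMPLITUDE * math.sin(2 * math.pi * t)
        # The collector can never rise above Vcc, nor fall closer than
        # about 0.2 V to the emitter voltage (the saturation limit).
        actual = min(max(ideal, VE + 0.2), VCC)
        note = "  <- clipped" if actual != ideal else ""
        print(f"t = {t:.3f}  ideal = {ideal:6.2f} V  actual = {actual:6.2f} V{note}")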



Figure 3.25 - Total Voltage Waveforms (Quiescent + Signals) in an Emitter Feedback Amplifier Circuit

Figure 3.24 shows that the emitter-feedback amplifier circuit can be analysed just like any other simple network, using Kirchhoff's voltage and current laws, and Figure 3.25 shows how the principle of superposition enables us to get a total solution for all node voltages. The only point to remember, however, is that in general the results will tend to be somewhat complicated in an algebraic sense, since the circuit contains a current source which is dependent upon the true base to emitter voltage.

As an amplifier, our main concern is to ensure that the signal input at the base of the system is amplified at the collector and so generally we wish to develop an expression for the voltage gain of the system (vout/vin). We also need to ensure that the voltage gain is independent of ill-defined values such as β. However, we are additionally interested in ensuring that the input impedance of the system (vin/ib) is high and that the output impedance (vout/iout) is low. All these parameters can be adjusted by varying the size of resistive components in the emitter-feedback circuit.
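Although the full small-signal analysis is algebraically involved, a commonly used approximation (stated here without derivation, and ignoring rbb', the bias ladder loading and any external load) is that the voltage gain of the emitter-feedback stage is close to -Rc/Re when β is large. The Python sketch below, using the same assumed operating point as the earlier examples, compares that rule of thumb with the gain computed from the hybrid-π parameters:

    # Approximate voltage gain of the emitter-feedback amplifier.
    # Component values and operating point are assumed for illustration.
    RC, RE = 2.2e3, 1.0e3    # collector and emitter resistors (ohms)
    BETA = 200
    IC = 1.4e-3              # quiescent collector current (amps)
    VT = 0.0258              # thermal voltage (volts)

    gm = IC / VT             # equation (8)
    rbe = BETA / gm          # equation (9)

    # Gain from base signal voltage to collector signal voltage.
    av_model = -BETA * RC / (rbe + (BETA + 1) * RE)
    av_rule_of_thumb = -RC / RE

    print(f"Gain from small-signal model  : {av_model:.2f}")
    print(f"Gain from -Rc/Re approximation: {av_rule_of_thumb:.2f}")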


In a realistic amplifier, the values of input and output capacitance are finite and hence these need to be included in the model of Figure 3.24. The fact that these have imaginary impedance values (1/jωC) means that the signal voltage gain will be frequency dependent and the output voltages will (in general) be phase-shifted from the inputs. We therefore also need to concern ourselves with determining and adjusting the frequency bandwidth in which the amplifier will give us a well-defined signal gain.

The emitter-feedback amplifier is one of the most fundamental amplifier circuits. However, there are many other BJT circuits that can be used for amplification and many of these rely upon multiple transistors. These circuits are typically designed so that the stability of gain is not compromised by minor variations in component tolerances, transistor β, etc. They are also specially designed to provide higher levels of input impedance or lower levels of output impedance, etc. The analysis of all these amplifier circuits is carried out on the same basis described for the emitter-feedback circuit and, with experience, it is possible to simplify this analysis by making a range of different assumptions.

Our next step however, is to move on to a slightly different transistor technology and to evaluate its effectiveness in digital and amplifier circuits.

3.3.3 Field Effect Transistors (FETs)


There are two generic types of Field Effect Transistor (FET). These are the Junction Field Effect Transistor (JFET) and the Metal Oxide Semiconductor Field Effect Transistor (MOSFET). FET devices are quite different in semiconductor structure to BJTs and yet, in terms of their electrical models, they can ultimately be used in similar applications to the BJTs. JFETs and MOSFETs offer both advantages and disadvantages over the BJT architecture, and it is in digital circuits and switching where the FET devices offer their greatest potential. In particular, FETs are much more compact than BJTs and have a higher input impedance, thereby making them suitable as loads on circuits with a high "fan-out". FETs can also be configured to act as resistive loads and hence integrated circuits can be designed using only FETs, without a need for other devices. The BJT, on the other hand, has the capacity to provide a higher gain over a much larger frequency range (bandwidth) than a FET device and hence it is more useful as an analog amplifier. The BJT can also switch at higher speeds than a FET in digital circuit applications. The semiconductor structure of the so-called "n-channel" JFET is shown schematically in Figure 3.26. Like the BJT, the JFET is composed of three, doped semiconductor regions. However, in the FET, two of these regions have the same doping and are connected together to form a terminal known as the gate. The third region is called a channel and one terminal is connected to each end. One end of the channel is referred to as the source and the other end of the channel is known as the drain..



Figure 3.26 - Schematic of "n-channel" JFET Structure

An examination of Figure 3.26 reveals that the JFET is a three-terminal device which relies upon the flow of majority carriers from the source to the drain. JFETs can be designed with either "n" or "p" channels and hence rely upon either electron or hole conduction (respectively) in the channel. The flow of majority carriers from source to drain is facilitated by providing an electric field, which can be generated by applying a voltage between the source and the drain.

In Section 3.2.1, we looked at the p-n junction diode and we examined the phenomena that occurred at the junction when "p" and "n" type materials were butted together. In particular, we noted the depletion region that this generates for some distance on either side of the physical junction. The same also applies in the BJT and FET devices. Moreover, in all these devices, the external potential applied to the junction can be used to vary the width of the depletion region. In the JFET, if we vary the width of the two depletion regions, we can effectively vary the width of the channel through which majority carrier conduction can occur and hence control the flow of current from source to drain. We again note that the depletion region is so named because it is an area depleted of majority carriers for conduction purposes.

The width of the depletion regions is controlled by varying the gate potential with respect to the source. If this potential is sufficiently large, then the depletion regions will effectively widen to the extent where conduction in the channel is restricted to a value which is independent of the drain to source potential. This is referred to as the "pinch-off" condition.


In addition to the JFET devices there are Metal Oxide Semiconductor (MOS) devices, of which three different types exist. These are the:

•	Enhancement MOSFET device
•	Depletion MOSFET device
•	Complementary MOSFET (CMOS) device.

In fact, the MOS devices are of greater significance to us than the JFET device since they are far more prolific in both digital electronic circuits and power electronics circuits. The structure of the Enhancement MOSFET and the Depletion MOSFET are both shown schematically in Figure 3.27 (a) and 3.27 (b) respectively.


Figure 3.27 - Schematic of MOSFET Construction (a) Enhancement Mode n-Channel MOSFET (b) Depletion Mode n-Channel MOSFET


The Enhancement MOSFET is unlike the JFET in the sense that no physical channel exists from source to drain until the transistor is biased. If a positive voltage is applied to the gate of the n-channel device shown in Figure 3.27 (a), while the source, drain and p-type substrate are all connected to ground (say), then the majority carriers in the substrate (positively charged holes) move away from the surface and are replaced by carriers (electrons) from the two neighbouring regions in the source and drain, thus creating an inversion layer just below the Silicon Dioxide coating. A conducting channel is therefore formed in the substrate, joining the source and drain. The channel current is enhanced by the positive gate voltage, and hence the name of the device. The current flowing through the channel varies with the potential difference between the drain and the source. If the gate voltage is kept constant as the drain to source voltage increases, then the voltage of the gate, with respect to the drain, decreases, thus reducing the size of the inversion channel, until a point is reached where the channel is effectively "pinched" off. Despite the physical difference between the enhancement MOSFET and the JFET, this characteristic is similar.

Figure 3.27 (b) shows the Depletion MOSFET (also n-channel), which already has a channel diffused between the source and the drain. Current can be made to flow through the channel simply by applying a voltage between the drain and the source and maintaining the gate to source voltage at zero. If the gate voltage on the Depletion MOSFET is made more negative, then positive charges (holes) will be induced into the channel (from the p-type substrate), thereby recombining with the free electrons and hence diminishing the number of majority carriers available for conduction in the channel. Eventually the channel is pinched off. This type of MOSFET clearly derives its name from the fact that the channel conductivity is depleted as a negative gate voltage is applied. It is interesting to note, however, that the Depletion MOSFET can also be used in enhancement mode, provided that the gate to source voltage is kept positive.

Despite the apparently more complex semiconductor structure of the MOSFET over the BJT, it is interesting to note that even in the mid 1970s (when the devices were introduced) it was possible to build MOSFET devices in a semiconductor area less than one twentieth of that required by the BJT. As a result of the smaller size of devices such as the enhancement MOSFET, it was decided at an early stage to combine two complementary devices (one n-channel and one p-channel) onto the same chip. These devices are called Complementary MOSFETs or CMOS and are commonly used in digital circuits. The semiconductor structure of a CMOS pair is shown in Figure 3.28.



Figure 3.28 - Complementary n and p channel MOSFETs (CMOS)

The circuit symbols for the n and p channel versions of the JFET, the Enhancement and Depletion mode MOSFET devices are shown in Figure 3.29. Note that because the Depletion MOSFET can be used in Enhancement Mode, there are two possible circuit representations for the Enhancement MOSFET (as shown in Figure 3.29 (b) and (c)). Note also that the symbols for the CMOS devices are the same as for normal MOSFETs, since a CMOS pair is simply two transistors fabricated within the same piece of semiconductor material.

A typical output characteristic for a FET device is shown in Figure 3.30. Note that the normal mode of operation for a FET is in the so-called saturation (or pinch-off) region, where drain current is approximately constant with changing drain-source voltages. This differs somewhat from the BJT characteristic of Figure 3.18 where the device is not "saturated" in its normal mode of operation. Despite the physical differences between all the FET devices and the BJT, it is interesting to note that they all share a similar output characteristic - that is, a linear region which knees into an approximately constant-current region, which in turn knees into a breakdown region when high output voltages are applied to the end terminals of the devices.

Regardless of their similarities, it turns out that all the devices have strengths in different applications. BJTs, for example, are used where high digital switching speeds are required or in wide-bandwidth, general-purpose linear amplifiers. JFETs, on the other hand, are typically used in special-purpose low-noise amplifiers and in circuits requiring a high input-impedance. JFETs can also be used as voltage controlled resistors. MOSFETs are predominantly used in power-switching circuits and in digital circuits where low power consumption and high input impedances are required.


Figure 3.29 - Circuit Symbols for Different FET Devices: (a) n-channel and p-channel JFET; (b) n-channel and p-channel Depletion or Enhancement MOSFET; (c) n-channel and p-channel Enhancement MOSFET


(Axes: drain current ID in mA versus drain-source voltage VDS in volts; curves shown for VGS from -4 V to +4 V, with the ohmic and saturation regions and the enhancement and depletion operating modes indicated.)

Figure 3.30 - Typical Output Characteristic for an n-channel MOSFET Operating in Both Enhancement (VGS Forward Biased) and Depletion (VGS Reverse Biased) Mode
Unlike BJTs, JFETs are bi-directional devices because either end of the channel can be used as the source or the drain and the direction of current flow is only determined by the channel type and the polarity of the voltage applied across the channel. For an n-channel JFET, however, a negative gate-source voltage is applied in order to control (and ultimately pinch off) conduction in the channel (in contrast with the npn transistor, where a positive base-emitter voltage needs to be applied to activate the device). The source of the transistor is always connected in keeping with the channel type - for example, the source on an n-channel FET is connected to the negative side of the circuit, while the source on a p-channel FET is connected to the positive side of the circuit.

All FET devices can readily be used as variable resistive loads, by virtue of the fact that below the pinch-off region the channel current is dependent upon the drain to source voltage. This is an important feature of FETs because a normal resistor, diffused into semiconductor material (such as the 130 Ω device shown in the TTL Gate of Figure 3.19), takes up 20 times the space of a FET device. As a result, it is rare to see MOSFET based digital circuits containing diffused resistors and instead, MOSFET loads are used.

Consider again the simplistic inverter circuit of Figure 3.17 (the Common-Emitter arrangement). The MOSFET equivalent would be a Common-Source circuit using one MOSFET as a driver (Q1) and another as a pull-up resistance (Q2), as shown in Figure 3.31. Another common technique is to use Complementary MOSFETs, or CMOS devices, in order to create the driver and load from an n-channel MOSFET (Q1) and a p-channel MOSFET (Q2) respectively. A CMOS inverter circuit is shown in Figure 3.32.



Figure 3.31 - Digital Inverter Gate Based Upon MOSFETs


Figure 3.32 - CMOS Based Inverter Circuit with PMOS (Q2) Load


The sorts of MOSFET based digital circuits shown in Figures 3.31 and 3.32 provide a number of advantages over BJT based (TTL) circuits. The major advantage of course is that the size of FET gates is much smaller than BJT gates, so that a much higher chip density can ultimately be achieved in order to implement more complex digital functions. Secondly, however, MOSFET based circuits consume far less power than BJT circuits because they are essentially closer to the concept of an ideal switch (with very little current flowing when the devices are cut off and almost no resistance when short-circuited). The disadvantage of MOSFET based circuits is of course that they are slower than equivalent TTL devices, although the CMOS based circuits generally provide superior performance to the standard MOSFET devices.

MOSFETs are also referred to as "Insulated Gate FETs" or IGFETs, because of the Silicon Dioxide layer isolating the gate terminal from the semiconductor material and because of their high input impedances. It is also possible to create hybrid devices with the high input impedance characteristic of MOSFETs and the switching performance of BJTs. These devices are referred to as "Insulated Gate Bipolar Transistors" or IGBTs.

In general, the number of analog circuit applications is diminishing with respect to the number of digital circuit applications - primarily because it is possible to create digital switching circuits which:

•	Emulate in digital form the functions of analog circuits (amplification, voltage conversion, etc.)
•	Integrate with computer-based (microprocessor or Digital Signal Processor) controls
•	Dissipate far less power than analog circuits because power is only supplied to devices when they are performing their required function.

As a result, the number of applications requiring MOSFET type devices is also proportionally increasing. However, JFET devices still have a role to play in low-noise, high-input impedance analog circuits, particularly amplifiers. To this end, we still need to configure JFETs in feedback circuits in order to create amplifier circuits whose gain is insensitive to ill-defined FET parameters. In particular, the pinch-off voltage level Vp and the drain current flowing with zero gate-source voltage (referred to as IDSS) are both temperature dependent parameters whose values (although they can be calculated from manufacturer's data at any temperature) need to be isolated from the transfer characteristic of amplifier circuits.

The JFET analogy to the Emitter-Feedback circuit of Figure 3.20 is one of many circuits that can be used to stabilise the gain of the FET device as an amplifier. This circuit is shown in Figure 3.33 and the complete circuit, including a.c. signals and the Thévenin equivalent of the resistive ladder input stage, is shown in Figure 3.34.



Figure 3.33 - Amplifier Feedback Arrangement for n-Channel JFET


Figure 3.34 - Source-Feedback Amplifier Circuit of Figure 3.33 with a.c. Signals and Thévenin Equivalent of Input Stage

In analysing the circuit of Figure 3.34, we follow the same sort of procedure that we used with the BJT device. That is, we undertake a large-signal (quiescent, d.c.) analysis to establish the operating points of the system and then a small-signal (a.c.) analysis to examine the amplification of the circuit.


We first analyse the quiescent levels in the circuit so that we can select resistive components that will place the transistor into its normal mode for amplification - in the case of the FET this means the pinch-off or saturation region in the output characteristic. In doing this analysis, we set the a.c. voltages to zero and we calculate all the resistive values required to place the transistor into its required mode by using the output characteristic (ID vs VDS) of the transistor in question. With the FET device, we can assume that the quiescent gate current is negligible in comparison with the drain current.

For the second stage of the analysis, we restore the a.c. input and set all d.c. voltage sources to zero. We then replace the FET circuit symbol with the small-signal model for that device. The small-signal model for the FET is not unlike that for the BJT device. It is shown in Figure 3.35.


Figure 3.35 - Small Signal (Hybrid-π) Model for JFET or MOSFET

One interesting difference between the small-signal model for the FET and that of the BJT is that there is no (measurable) analogy to the base-current path for the FET - in other words, the gate and source terminals are open-circuit. The other parameters of the small-signal model, however, are derived in an analogous manner to the BJT. For example, the small-signal transconductance of the FET is defined as the partial derivative of drain current with respect to gate-source voltage (in an analogous manner to the transconductance of the BJT). We will again simply cite the relationship between the transconductance and quiescent operating parameters for the transistor without proof, since this is outside the scope of this text. The transconductance relationship is as follows:

gm = 2.√( IDSS . ID ) / |VP|    ...(11)


where:

IDSS is the drain current for zero gate-source voltage
ID is the quiescent operating drain current in the saturation region
VP is the "Pinch-Off" voltage for a depletion FET or the Threshold Voltage for an enhancement FET

All of the above parameters are either quoted in manufacturer's data for a given transistor or else can be deduced from provided or measured operating characteristics. The dynamic drain resistance of the FET ("rd" in the small-signal model) is obtained by taking the partial derivative of the drain-source voltage with respect to drain current, for a given value of gate-source voltage. This needs to be done in the saturation or pinch-off region of the transistor's operation and can be achieved by measuring the slope of the output characteristic (as in Figure 3.30) for a given value of gate-source voltage, rd being the reciprocal of that slope.

Once the small-signal model has been placed into the circuit, and all d.c. voltages have been set to zero, then an a.c. or small-signal analysis can take place in order to determine the transfer characteristic of the amplifier circuit in the same way in which the BJT circuits are analysed.
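As with the BJT, the FET small-signal parameters can be estimated directly from the quiescent operating point. The Python sketch below evaluates equation (11) for an assumed set of JFET parameters; the values of IDSS, VP and ID are illustrative only and are not taken from any manufacturer's data sheet:

    import math

    # Estimate JFET transconductance from equation (11).
    # All parameter values below are assumed for illustration only.
    IDSS = 8e-3     # drain current at zero gate-source voltage (amps)
    VP = -4.0       # pinch-off voltage for an n-channel depletion JFET (volts)
    ID = 2e-3       # quiescent drain current in the saturation region (amps)

    gm = 2.0 * math.sqrt(IDSS * ID) / abs(VP)   # transconductance (A/V)

    print(f"gm = {gm*1000:.2f} mA/V at ID = {ID*1000:.1f} mA")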


3.4 Analog and Digital Circuit External Characteristics


Regardless of whether circuits are analog or digital and regardless of whether they are BJT or FET based, there are a number of common characteristics which need to be determined in order to use them in conjunction with other circuits. In particular, we need to know what sort of an electrical load a particular circuit will place upon another circuit, or perhaps what sort of output characteristic a given circuit may have. For this reason, we commonly need to identify, measure or calculate the external characteristics of a given circuit. In order to do this, we normally model a circuit, be it an amplifier or digital gate, as a black-box with some basic external parameters, as shown in Figure 3.36. We sometimes then treat systems as a cascade of black-box circuits as shown in Figure 3.37 (a).

One of the most important parameters that we need to derive for a circuit is the input impedance (Zin). The input impedance defines the amount of current that the circuit will draw from a preceding circuit stage, for a given output voltage from that stage. This is important because many circuits, particularly digital circuits, cannot supply very much current and so we need to know exactly how much current the subsequent circuit will drain. In addition, in digital systems, we normally connect a number of circuits, in parallel, to the output of one particular circuit. This is referred to as "fan-out" and is shown in Figure 3.37 (b). With "fanned-out" circuits, we often need to know how many subsequent circuits, in total, can be connected to the output of a given device.


Figure 3.36 - Modelling Analog and Digital Circuits in Terms of External Characteristics

When we deal with families of circuits, such as with digital TTL or CMOS gates, the manufacturers specify the total fan-out for a device, assuming that other devices of the same family type will be connected to the output of the first stage. These characteristics are all defined on the basis of the input impedance of the family of devices in question.
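Fan-out limits can be estimated by comparing the current that a gate output can source or sink with the current that each driven input requires in the corresponding state. The current figures in the Python sketch below are hypothetical, for illustration only, and are not taken from any particular manufacturer's data sheet:

    # Estimating the fan-out of a digital gate from its drive and input currents.
    # The current figures below are hypothetical illustrative values (microamps).
    I_OL_uA = 8000   # current the output can sink in the low state
    I_IL_uA = 400    # current each driven input feeds back in the low state
    I_OH_uA = 400    # current the output can source in the high state
    I_IH_uA = 20     # current each driven input draws in the high state

    # The usable fan-out is limited by the worse of the two logic states.
    fan_out = min(I_OL_uA // I_IL_uA, I_OH_uA // I_IH_uA)
    print(f"Maximum fan-out = {fan_out} gates of the same family")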



Figure 3.37 - (a) Cascaded Circuits (b) Fan-out From One Circuit

The input impedance for a circuit can either be derived or measured, depending upon whether or not the circuit diagram for the device is actually known. In either event, in terms of Figure 3.36, the input impedance is defined to be:

Zin = vin / iin    ...(12)

If a known voltage is applied to the input terminals of the device and the input current measured, then the input impedance can readily be determined. If, for example, the circuit was known to be an analog amplifier, such as an emitter feedback circuit, then the input impedance could be algebraically determined from the complete small-signal model for the system, by deriving an expression for vin/iin.
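The measurement route is straightforward. The short Python sketch below applies equation (12) to an assumed test voltage and an assumed measured input current to give the magnitude of the input impedance at the test frequency:

    # Input impedance magnitude from a simple measurement (equation 12).
    # The measured values below are assumed for illustration.
    v_in = 0.10      # r.m.s. test voltage applied to the input (volts)
    i_in = 45e-6     # r.m.s. input current measured at the test frequency (amps)

    z_in = v_in / i_in
    print(f"|Zin| = {z_in/1000:.2f} kohm at the test frequency")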


In general, circuits are composed of resistors, capacitors, inductors and, in the case of small-signal transistor circuits, dependent current sources. As a result, the input impedance to a.c. signals may be frequency dependent, due to the "jω" terms that arise in applying the phasor method to circuit solution. Moreover, the a.c. and d.c. input impedances will not generally be the same due to this frequency dependence. For this reason, if we are measuring input impedance, we will be measuring magnitude only and we need to be sure that we are measuring at a frequency commensurate with the frequency at which the circuit will normally be operating. When we derive expressions for input impedance, we can of course calculate the magnitude of the impedance at any given frequency. For analog circuits, we need to be aware of the frequency dependence of input impedance, whereas in digital circuits, we are primarily concerned with only the d.c. impedance to current flow.

Figures 3.37 (a) and (b) can be somewhat misleading in the sense that they may induce a novice to believe that any circuit can be divided up into isolated stages which can be analysed independently and then brought together at a later stage. This is only true if we always assume that there will be an output current and voltage from each stage of the circuit - that is, that every stage will be loaded. We cannot simply derive characteristics on the assumption that each stage will have no output current and then join these "independent" stages together. In general, the characteristics of a circuit, including its input impedance, depend upon the load applied on the output of that circuit, and hence, the subsequent stages of the system. There may be some specific circuits, particularly amplifiers and digital circuits, whose input characteristics do not vary greatly with the load applied to the output. This is however a function of the isolating properties of transistors and should not be taken as a general rule. If you are still in doubt, take a circuit composed of resistors, capacitors and inductors and determine the input impedance for the entire circuit. Then, divide the circuit at an appropriate point, analysing each stage on the assumption that the other stage has no loading effect - note the difference.

In order to understand the concept of output impedance, it is necessary to review one of the most basic elements of network analysis - that is, the Thévenin and Norton equivalent circuits. Essentially, Thévenin and Norton showed that any circuit, regardless of how complex it is, can be represented as:

•	A voltage source in series with a resistor (called a Thévenin equivalent circuit)
•	A current source in parallel with a resistor (called a Norton equivalent circuit).


The two equivalent circuits are shown in Figure 3.38 and are analytically derived as shown in Figure 3.39, using the following technique:

•	Take the original circuit and calculate the voltage output when the output is open-circuited (this is called voc).
•	Take the original circuit and calculate the current that flows when the output terminals are short-circuited (this is called isc).

The values for the Thévenin and Norton equivalent circuits are then:

vTH = vOC          ZTH = vOC / iSC

iN = iSC           ZN = vOC / iSC


Figure 3.38 - (a) Thvenin Equivalent of a Circuit (b) Norton Equivalent of a Circuit



Figure 3.39 - Determining Thévenin and Norton Equivalent Circuits from an Existing Circuit

Having briefly re-examined the concept of the Thévenin and Norton equivalent circuits, we are in a position to understand the concept of the output impedance of a given circuit. Referring back to Figure 3.36, we can see that the output of our "black-box" circuit is in fact a Thévenin equivalent model. The question is, how can we determine the value of the output impedance? Analytically, we can do the open-circuit voltage and short-circuit current test to calculate the value of the impedance, as described in terms of the Thévenin circuit model. However, in terms of measurement, a little more thought is required in order to determine the value.

Theoretically, if we measure the open-circuit voltage and short-circuit current of a circuit, then we should be able to determine the output impedance. The problem with this technique is that in endeavouring to short-circuit the output we may damage the device in question by causing an excessive current to flow. The solution therefore is to measure the open-circuit voltage and then to measure the current flow when a known impedance (RL) is connected as a load (rather than a short-circuit). The value of the known impedance is chosen such that the maximum current flow in the circuit (ie: assuming the output impedance is zero) does not exceed the ratings of the circuit in question. The measured current flow in the circuit is then equal to:

iout = vout / ( Zout + RL )

from which the magnitude of the output impedance can be calculated. In analog circuits, we must again keep in mind that the output impedance, like the input impedance, may be frequency dependent.
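The calculation implied above is summarised in the following Python sketch, using an assumed open-circuit voltage reading and an assumed current reading taken with a known load resistor connected:

    # Output impedance from an open-circuit voltage measurement and a
    # loaded-current measurement.  The measured values are assumed examples.
    v_oc = 4.8        # measured open-circuit output voltage (volts)
    R_L = 1000.0      # known load resistance for the second measurement (ohms)
    i_out = 4.4e-3    # current measured with the load connected (amps)

    # Using i_out = v_oc / (Zout + RL), where v_oc is the open-circuit
    # voltage, it follows that Zout = v_oc / i_out - RL.
    z_out = v_oc / i_out - R_L
    print(f"|Zout| is approximately {z_out:.0f} ohms")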


3.5 Operational Amplifiers

In Section 3.3 of this chapter, we examined the basic characteristics of various transistors and how they could be used in order to create digital and analog circuits, with some emphasis on analog amplifiers. We also examined amplifiers that were created with a single transistor, together with a feedback circuit for gain stabilisation. There are many variations on single-transistor amplifier circuits and many more variations on multiple-transistor amplifier circuits. In general, the detailed, small-signal analysis of multiple-transistor circuits is quite complex without computer simulation, and the parasitic effects (such as stray capacitances, etc.) of transistor circuits can cause them to oscillate or resonate with minor changes in feedback resistance values. In the final analysis, the design of realistic amplifier circuits from first principles of transistor theory is a specialised task, and not one which would be tackled by engineers involved in general-purpose systems design.

For this reason, a number of commercially available amplifier packages have been designed as building blocks for systems design work. The generic name for these circuits is "Operational Amplifiers". Developments in power semiconductor technology have meant that circuits are now available for both low power and medium power applications. The objective of our discussions on operational amplifiers is not so much to delve into the intricacies of their design characteristics, but rather to understand their functionality and their typical applications in the process of interfacing microprocessor based control systems to industrial level signals.

Operational amplifiers are not new devices. One of the oldest types of operational amplifier is the 741 series, which was introduced in the mid-1960s and has been in widespread use since that time. Despite the fact that the 741 has been superseded by more modern devices, it tends to remain as a reference system upon which the more modern designs have been based. Even this rather old amplifier design is based upon some 24 bipolar junction transistors and some 11 diffused resistors and hence its analysis is outside the scope of this book. A simplified version of this amplifier circuit is shown in Figure 3.40, wherein a number of the transistor circuits have been replaced with their functional equivalents.

The operational amplifier is essentially a "difference" amplifier. It only amplifies the difference between the voltages at terminals A and B (as shown in Figure 3.40). Transistors Q1 and Q2 provide a high input impedance to the circuit (so that it only draws a very small current from preceding circuits) and Q3 and Q4 amplify the difference in the signals. Q5 and Q6 provide an active resistive load for the system. Ultimately, the differential input is converted to a single-ended signal and amplified again. The combined collection of transistors creates a device with:

•	High Input Impedance
•	Low Sensitivity to Common-Mode Signals
•	Low Output Impedance.



Figure 3.40 - Simplified Circuit Diagram for Single-Chip 741 Operational Amplifier

One of the key points to note about the 741 device shown in Figure 3.40 is the supply rail system. In this rather old device, a positive 15 volt d.c. supply needs to be connected to create the VCC supply rail and a negative 15 volt d.c. supply needs to be connected to create the VEE supply rail. As with all analog amplifiers, it is the supply rails which provide the additional power that is injected into the system in order to provide amplification. The input signals are, for the most part, just signals and their energy levels are generally very low in comparison to the output energy. In digital circuits, the same philosophy applies - supply rails provide the energy required by the transistors in order to switch from their open-circuit mode to their short-circuit mode, and again, the inputs are just signals and not generally providers of the switching energy.

One of the problems with the 741 design is its requirement for positive and negative d.c. supply rails, which add to system cost, and hence more modern designs endeavour to achieve the same sort of functionality as the 741 device with only a single, positive d.c. supply rail.


The most common way to design systems using operational amplifiers is to treat the entire amplifier as a black-box device, normally represented with a circuit symbol as shown in Figure 3.41(a). The input marked with the "-" sign is referred to as the inverting terminal and the input with the "+" sign is referred to as the non-inverting terminal. The supply rails are not normally shown, because they vary from device to device and do not actually affect the functionality of the amplifier itself. However, it must always be remembered that it is these rails that make the operational amplifier an active device which can inject energy into a system. The idealised circuit model of the operational amplifier is shown in Figure 3.41 (b).


Figure 3.41 - (a) Circuit Symbol for Operational Amplifier (b) Idealised Model of Operational Amplifier Circuit


In most operational amplifier system designs, we often make assumptions to further simplify the idealised circuit of Figure 3.41 (b). For example, we assume that the amplification factor, or gain, of the amplifier "a" is very large and tending towards infinity - hence, for any realistic output voltage, vin must always be approximately zero. For rough calculations we can also assume that the input impedance of the device is infinite and the output impedance is zero.

One may well ask what the practical significance of an amplifier with infinite gain would be in terms of electrical circuits. This clearly doesn't happen in the real world and, as it turns out, the so-called open-loop gain of an operational amplifier is actually a finite quantity, albeit only nominally defined. Moreover, most operational amplifier based systems are effectively designed in a manner which is not specific to a particular type of amplifier. The net result is that operational amplifiers are not used as open-loop devices, but rather as elements in closed-loop systems, surrounded by feedback elements. A proper understanding of closed-loop amplifier systems requires an understanding of classical control theory, where the objective is to realise a stable system transfer characteristic, which is independent of all unstable or ill-defined variables. Once we understand this concept, then we should appreciate the fact that an infinite open-loop gain does not necessarily lead to a system with infinite closed-loop gain, but rather a gain which is dependent upon the feedback elements in the system (as shown in Figure 3.16 and described by equation (5)).

In order to understand the purpose of a particular operational amplifier circuit, analysis is normally carried out using the approximate model in the following way:

(i)	Assume that the voltage across the input terminals is approximately zero
(ii)	Assume that the current flowing through the input terminals is approximately zero
(iii)	Apply Kirchhoff's Voltage and Current Laws in order to obtain an expression for output voltage or current in terms of input voltage or current.

A more accurate analysis can be carried out by assuming a finite voltage gain "a" between the input and output terminals and by assuming an input impedance of Zin between the input terminals. Values for these parameters can normally be obtained from relevant data sheets for the specific operational amplifier in question.

The first operational amplifier based circuit to be examined is called the d.c. voltage-follower. The circuit diagram for this is shown in Figure 3.42 (a). This is a rather peculiar circuit because the entire output is used as feedback into the inverting terminal of the amplifier.


Applying Kirchhoff's Voltage Law to the d.c. voltage-follower circuit gives us the following relationship:

vs - vin = vout
However, vin is approximately zero and hence:

vout = vs


Figure 3.42 - (a) Operational Amplifier Based "d.c. Voltage Follower" Circuit (b) Operational Amplifier Based "a.c. Voltage Follower" Circuit

The voltage-follower circuit may appear to be somewhat unusual in the sense that it doesn't seem to do anything - there is no amplification or attenuation. However, the voltage-follower has a very important role to fulfil. It acts as a buffer or interface that provides a high input impedance circuit (ie: small load) to a preceding circuit. The voltage-follower replicates the voltage from the previous stage (unity gain) and is capable of driving a subsequent circuit that has a low input impedance (ie: high load).


In Figure 3.42 (b), the a.c. equivalent of the d.c. voltage-follower circuit is shown. This circuit, like the one in Figure 3.42 (a), also acts as a high input impedance buffer between a primary circuit and a high-load circuit that would normally draw a current that is too large for the primary circuit to supply. The a.c. equivalent circuit has two capacitors, C1 and C2, whose sizes are selected in order to provide negligible impedance at the a.c. operating frequencies of the system. R1 and R2 provide a path for d.c. current flow in the system.

A common role for operational amplifiers is to act as energy transducers in circuits - that is, to convert voltage into current (transconductance amplification) or current into voltage (transresistance amplification). Figure 3.43 (a) and (b) show two circuits suitable for transconductance amplification.


Figure 3.43 - (a) Transconductance Amplifier for Floating Load (b) Transconductance Amplifier for Grounded Load


The circuit in Figure 3.43 (a) is the simpler alternative that is used when neither end of the load circuit (ZL) is grounded. The more complex arrangement of Figure 3.43 (b) needs to be used whenever one end of the load is grounded. A simple analysis of the circuit (assuming infinite input impedance and amplifier gain) reveals that for the circuit of Figure 3.43 (a), the output load current is:
iL = vs / R        ...(13)

For the circuit of Figure 3.43 (b), the output load current is:

iL = vs / R1        ...(14)

The dual of the circuits shown in Figure 3.43 is the transresistance amplifier, which converts current-based signals to voltages. This is shown in Figure 3.44.


Figure 3.44 - Transresistance Amplifier - Current to Voltage Conversion

The transresistance amplifier is simple to analyse. The non-inverting terminal is connected to earth and hence the inverting terminal is also approximately at earth potential (virtual earth). The current through the source resistance Rs is also zero since one end is at earth potential and the other end is at virtual earth. The output voltage is therefore simply:
vout = is R        ...(15)


One of the most common applications of operational amplifiers is in the amplification and attenuation of signals in control systems. This can be readily accomplished with the inverting amplifier circuit of Figure 3.45 (a) or the non-inverting amplifier circuit of Figure 3.46 (a). One circuit provides a negative amplification and the other a positive amplification.


Figure 3.45 - (a) Inverting Amplifier Arrangement (b) Inverting "Summing" Amplifier for Providing the Weighted Sum of "N" Input Voltages

For the circuit of Figure 3.45 (a), the gain of the system is defined by the feedback resistors and the output signal is the negative of the input signal:

vout = -(Rf / R) vs        ...(16)


For the circuit of Figure 3.45 (b), a number of weighted inputs can be summed together, amplified and inverted. For this amplifier:

vout = -[ (Rf / R1) vs1 + (Rf / R2) vs2 + ... + (Rf / RN) vsN ]        ...(17)
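As a rough numerical sketch of equation (17) (the resistor and source values below are hypothetical, chosen purely for illustration), the weighted sum can be computed directly:

r_f  = 100e3                      # feedback resistor Rf (ohms) - assumed
r_in = [10e3, 20e3, 50e3]         # input resistors R1, R2, R3 (ohms) - assumed
v_s  = [0.10, -0.25, 0.40]        # source voltages vs1, vs2, vs3 (volts) - assumed

# Equation (17): the output is the negative, weighted sum of the inputs.
v_out = -sum((r_f / r) * v for r, v in zip(r_in, v_s))
print(f"v_out = {v_out:.3f} V")   # -(10*0.10 + 5*(-0.25) + 2*0.40) = -0.550 V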


Figure 3.46 - (a) Non-Inverting Amplifier (b) Non-Inverting Summing Amplifier for Providing the Weighted Sum of "N" Input Voltages

For the circuit of Figure 3.46 (a), the output waveform is a scaled, positive version of the input and is defined as follows:

vout = (1 + Rf / R) vs        ...(18)


For the circuit of Figure 3.46 (b), it is somewhat more difficult to calculate the transfer function, which ultimately depends upon the number of inputs. We know, however, that the transfer function can be derived from the single-input circuit equation (18) and will be:

vout = (1 + Rf / R) vx        ...(19)

where vx is the voltage at the non-inverting terminal of the amplifier. The method for calculating vx is to take the Thévenin equivalent circuit of all the inputs at the non-inverting terminal, assuming that no current flows into that terminal. The result is then inserted into equation (19).

A number of energy transducers provide a pair of outputs and it is often the difference between the two that needs to be amplified or attenuated. This can be achieved by using a differential amplifier configuration, as shown in Figure 3.47.


Figure 3.47 - Differential Amplifier Configuration

The output can be determined in terms of the two inputs by using the principle of superposition (ie: calculating the output voltage for each of the inputs individually and then adding the result). The relationship is as follows:

vout = (R2 / R1) (vs1 - vs2)        ...(20)

The relationship is not altogether surprising since the differential configuration is a hybrid of the inverting and non-inverting amplifier configurations.
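A short sketch (again with hypothetical component values) confirms that the superposition argument and equation (20) agree:

r1, r2 = 10e3, 50e3               # resistor values (ohms) - assumed
v_s1, v_s2 = 2.00, 1.95           # the two input voltages (volts) - assumed

# vs2 acting alone (vs1 grounded): a plain inverting amplifier, gain -R2/R1.
out_from_s2 = -(r2 / r1) * v_s2
# vs1 acting alone (vs2 grounded): divider R2/(R1+R2) feeding a
# non-inverting amplifier of gain (1 + R2/R1).
out_from_s1 = (r2 / (r1 + r2)) * (1 + r2 / r1) * v_s1

v_out = out_from_s1 + out_from_s2
print(v_out, (r2 / r1) * (v_s1 - v_s2))   # both expressions give 0.25 V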


The final two operational amplifier circuits to be examined are the integrator and the differentiator. These are shown in Figures 3.48 (a) and 3.48 (b) respectively. The purpose of these circuits, as their names suggest, is to provide an output voltage proportional to the integral or differential of the input voltage waveform. This sort of functionality is achieved by using a capacitor, whose current is proportional to the derivative of the voltage across it.


Figure 3.48 - (a) Integrator Circuit (b) Differentiator Circuit

For the integrator circuit of Figure 3.48 (a), the inverting terminal is approximately at zero voltage, so the current through the capacitor is proportional to the derivative of the output voltage, and is equal to the current flowing through the resistor. The following relationships apply:


i = -C (dvout/dt) = vs / R

vout = -(1 / RC) ∫ vs dt        ...(21)

For the differentiator circuit of Figure 3.48 (b), the inverting terminal is also approximately at zero voltage and we again equate the current through the resistor to the current through the capacitor (ie: assume infinite amplifier input impedance) in order to ascertain the relationship between output and input, as follows:

i = -vout / R = C (dvs/dt)

vout = -R C (dvs/dt)        ...(22)

The integrator and differentiator both featured prominently in analog computers and controllers because they provided a very effective means of providing integrals and derivatives in real-time. These were particularly useful in classical control theory where Proportional Integral Derivative (PID) control techniques are used as a control strategy. However, despite the increase in availability of low-cost, sophisticated digital processing technology, these devices still retain their usefulness in control applications because they free the controlling processor from the task of carrying out integration and differentiation. This not only minimises the performance drain on the processor but additionally simplifies programming because many low-level controllers are often programmed in a machine or assembly language, where the software implementation of a digital integration or differentiation can substantially complicate the programming of a control algorithm.
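As a simple numerical check of equation (21) (component values and time step assumed purely for illustration), a crude step-by-step integration shows the expected output ramp:

R, C, v_s, dt = 100e3, 1e-6, 0.5, 1e-4     # 100 kohm, 1 uF, 0.5 V constant input, 0.1 ms steps

v_out, t = 0.0, 0.0
while t < 0.2:                              # integrate for 0.2 seconds
    v_out += -(v_s / (R * C)) * dt          # vout = -(1/RC) * integral of vs dt
    t += dt

print(f"v_out after 0.2 s is roughly {v_out:.2f} V (ideal answer: -1.0 V)")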


3.6 Linearity of Circuits - Accuracy and Frequency Response

Analog circuits are commonly used to interface incompatible devices by:

•	Scaling
•	Converting from voltage to current
•	Converting from current to voltage
•	Injecting energy into systems from external supply rails.

All of these analog functions require circuits with a high degree of accuracy. In digital circuits, on the other hand, we only ever deal with voltage levels that correspond to the binary numbers one or zero. Therefore, the absolute values of voltage in digital circuits are not critical, provided that they are within the appropriate ranges used to describe those binary numbers. Analog circuits therefore provide a greater challenge to system designers than digital circuits because information is always explicitly contained within analog voltages.

There are three main factors which need to be considered when using analog devices such as energy transducers (of which circuits such as the basic transistor amplifiers and the operational amplifiers are a subset). These are:

(i)	Accuracy
(ii)	Frequency response
(iii)	Linearity.

The overall effect of these factors is shown schematically in Figure 3.49. There is an enormous variety of energy transducers currently applied to engineering designs. Rather than attempt to cover the entire spectrum of devices, we will only discuss the performance factors in terms of the sorts of devices that we have examined in this chapter, such as transformers, basic transistor amplifiers and operational amplifiers. Most other energy transducers have analogous limitations to those exemplified by these devices.

In order to begin our examination of deviations from ideal behaviour, let us recall the work we did in Section 3.3.2, where we examined the basic attributes of Bipolar Junction Transistors and noted that a number of BJT parameters were ill-defined and subject to variation with temperature or operating current. Our objective then was to design circuits that would make the transfer functions of these circuits independent of these parameters. This was achieved by the traditional method of feedback circuits, such as the one shown in Figure 3.20 for the BJT or those shown in Figures 3.42 - 3.48 for the Operational Amplifier. In all these cases, provided that the ill-defined open-loop gains of the systems are sufficiently large, then their transfer functions are only dependent upon the feedback circuits.



Figure 3.49 - Imperfections of Realistic Energy Transducers and Circuits (a) Accuracy Problems; (b) Limited Frequency Response; (c) Non-Linearity

The inverting amplifier of Figure 3.45 (a) is a classic example of a circuit whose gain is stabilised by the feedback of output to input. In this circuit, the transfer ratio of output voltage to input voltage is only dependent upon the ratio of circuit resistors as shown in equation (16). However, this analysis assumed that the open-loop gain and the input impedance of the amplifier were both infinite. Let us therefore examine the factors that cause the inverting amplifier circuit to deviate from ideal behaviour:


(i)	Accuracy

In a practical sense, we know that resistance values provided by resistor manufacturers are only nominal and subject to accuracy tolerances. The accuracy of the inverting amplifier transfer ratio (output to input) is therefore dependent upon the actual resistance values, rather than the specified values. Actual resistances can vary due to manufacturing tolerances and also due to the effects of age and operating temperature. Another factor affecting the accuracy of the circuit is the open-loop gain of the operational amplifier. If the open-loop gain of the operational amplifier is small, then the relationship in equation (16) no longer holds and so the circuit deviates from the expected behaviour (a short numerical sketch of this effect follows item (iii) below).

(ii)	Frequency Response

All physical devices have performance characteristics that change with operating frequency. In the case of the operational amplifier, we have a system made up from transistor components, fabricated onto a single chip. When we examined the small-signal (hybrid-π) model of the BJT and the FET, we did not include the parasitic capacitances that exist between various points within the devices themselves because at normal operating frequencies these elements were insignificant. However, since the impedance of a capacitor is inversely proportional to frequency (in an a.c. system), we know that the impedance of these parasitic capacitances to high-frequency voltage components tends towards zero (short-circuit). As a result, the parasitic capacitance between the base and emitter of a BJT (or gate and source of a FET) can ultimately form a short-circuit between these terminals, thereby lowering the gain of the transistor.

As a result of parasitic capacitances within the semiconductor structure, even the transistor has a limited operating frequency range. This limited range obviously applies to the combination of transistors within an operational amplifier. The ratio of output signal to input signal begins to attenuate outside the normal operating frequency range of the transistor devices until eventually, the output attenuates to zero at high frequencies. The frequency response of the operational amplifier is not quite like the one shown in Figure 3.49 (b) because the amplifier does provide gain even with zero frequency signals (d.c.). However, in the case of the inverting amplifier circuit, a reduction in the open-loop gain of the operational amplifier (at high frequencies) leads to a deviation from the assumption of "infinite gain" and hence a deviation from the transfer relationship of equation (16). Another frequency issue with operational amplifiers is the so-called "slew-rate", or maximum rate of rise of output for a given input change. This specifies how quickly the outputs of an amplifier can respond to a change in input level.


(iii)	Linearity

When we examined the use of the BJT in the emitter-feedback circuit, we came up with a total solution for the voltage at each node. This was shown in Figure 3.25. We noted then that the signal at the collector node of the transistor could never exceed the supply rail voltage and that, as a result of increasing the magnitude of the input signal, the output signal would eventually distort (clip). This phenomenon occurs in all amplifier circuits, including the operational amplifier, and ultimately leads to distortion, because the input and output waveforms are not linearly related. So, in the case of the inverting amplifier circuit, the system is only linear while the transistors within the operational amplifier have signals below the rail voltages.
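To put a number on the accuracy effect described in item (i) above, the following sketch (hypothetical resistor values; input impedance and output impedance assumed ideal) compares the ideal gain of equation (16) with the gain obtained when the open-loop gain "a" is finite:

r_f, r = 100e3, 10e3               # assumed values: ideal gain is -Rf/R = -10

def closed_loop_gain(a):
    # Exact inverting-amplifier gain for a finite open-loop gain "a"
    # (input impedance and output impedance still assumed ideal).
    return -(r_f / r) / (1 + (1 + r_f / r) / a)

for a in (1e2, 1e3, 1e5):
    print(f"open-loop gain {a:>8.0f}:  closed-loop gain = {closed_loop_gain(a):.4f}")
# As "a" increases, the result converges on the ideal value of -10.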
Another familiar circuit that we can use in order to understand the three major limitations of circuits and transducers is the transformer:

(i)	Accuracy

At normal load currents, realistic transformers do not provide a voltage transfer ratio equivalent to the turns ratio. This is because of the resistance of the windings and because of flux leakage in the core (as represented by the circuit elements shown in Figure 3.7 (b)). At no load (zero output current), very little current flows in the windings and hence the voltage ratio is almost identical to the turns ratio - however, as the load current increases, the voltage transfer ratio falls below that specified by the turns ratio. This deviation from ideal is referred to as the "regulation" effect of the transformer and affects the accuracy of the circuits using the transformer as an energy transducer.

(ii)	Frequency Response

Unlike operational amplifiers, transformers do not operate at zero frequency (d.c.). They do, however, provide a relatively stable output/input ratio from low frequencies up into the kilohertz ranges and then gradually begin to attenuate the output voltage until, at very high input voltage frequencies, no output voltage is obtained. The low-frequency limitation is due to Faraday's Law of Electromagnetic Induction and the high-frequency limitation occurs because of the magnetisation/demagnetisation of the core that has to take place with each voltage cycle - there are limits to the speed with which magnetic domains within a ferromagnetic material can be rearranged.

Every physical device has analogous frequency limitations caused by the inability of one energy form to be converted into another form at infinite speed. Even the resistance of a simple piece of wire increases with frequency, because of "skin-effect", where current tends to flow on the outside of a conductor at high frequencies. This tends to accentuate the attenuation of output voltage in transformers at high frequencies.


(iii)	Linearity

The flux density (B) in the core of the transformer is non-linearly related to the magnetic field intensity (H) applied to the core by what is known as the magnetisation curve for the core. At low values of H, B and H are linearly related. However, as H is increased, the core saturates and the value of flux density in the core becomes almost independent of the magnetic field applied. H is dependent upon the current (hence voltage) applied to the primary windings. Increasing the primary voltage to a high level causes the transformer to saturate and creates non-linearity between the input and output sides of the device.
The above two examples illustrate the sort of reasoning that should be applied to any energy transducer before it is placed into a circuit, so that either the device can be operated within a limited range or else so that the deviations from ideal behaviour can be accounted for by other means. Manufacturers normally provide specifications for the limitations of their particular energy conversion device. For example, the specifications for an operational amplifier include frequency response characteristic curves, slew-rate, data for calculating (interpolating or extrapolating) parameters affected by temperature or operating current, etc. However, not all transducer manufacturers are as forthcoming with this sort of data, and in the absence of this sort of information, a reasoned system analysis cannot be carried out without a background investigation into the physics and design of the device in question.


3.7 Thyristors
3.7.1 Introduction
Once you begin to read this section, you may feel that it is rather peculiar that we have chosen to discuss diodes, then transistors and amplifiers, and have now reverted back to discussing diode-like devices again. The reason we have done so is that, in semiconductor terms, the modern thyristor is more akin to a pair of transistors than it is to a single diode. However, the main purpose of thyristors is to provide a current path that is turned on and off whenever a triggering voltage exceeds a certain level.

Thyristors were originally divided into a number of sub-groups that included the Silicon Unilateral Switch or "SUS" (which has now been discontinued) and the Silicon Controlled Rectifier or "SCR". The SCR is now by far the most common form of thyristor, although it is also possible to purchase bi-directional thyristors such as Diacs and Triacs, which we shall examine in Section 3.7.3 as part of our overview of thyristor devices.

Thyristors can be used with sinusoidal voltages in order to provide phase control and, by controlling conduction, they can reduce the average value of an a.c. input voltage in order to provide motor speed control. Thyristor-based circuits can also be used in light dimmer circuits in order to reduce the voltage supplied to light fittings. More importantly, in large-scale power circuits, thyristors can be used to replace traditional diodes in rectifier bridges. The advantage of thyristors over diodes in rectifier applications is that it is possible to use bridges for d.c. to a.c. conversion. This is the reverse procedure to rectification and is referred to as "inversion". The circuits that are used to accomplish this are called inverters.

3.7.2 Silicon Controlled Rectifiers

The semiconductor structure for an SCR is shown schematically in Figure 3.50 (a). Figure 3.50 (b) shows the circuit equivalent for the SCR, which is essentially a pnp and npn transistor pair, joined between the base of the pnp and the collector of the npn. Figure 3.50 (c) shows the common circuit symbol for the SCR, which is effectively that of a diode with a controlling gate terminal.

The voltage-current characteristic for the SCR appears most unusual at first glance and to some extent misleads one as to the true functionality of the device. The typical voltage-current characteristic for the SCR operating in the forward mode (anode voltage greater than cathode voltage) is shown in Figure 3.51, for a range of different gate currents.



Figure 3.50 - Silicon Controlled Rectifier (a) Schematic of Semiconductor Structure (b) Equivalent Two-Transistor Circuit (c) Circuit Symbol


Figure 3.51 - SCR Voltage-Current Characteristic


In order to understand the SCR characteristic of Figure 3.51, first consider the curve for IG0, which we can assume to be the characteristic for the situation where no current flows into the SCR gate terminal. In this curve, we can see that simply applying a forward bias from the anode to cathode does not cause a significant amount of conduction until the threshold blocking voltage, VBL, is reached. At this level of voltage, both transistors within the SCR become saturated and almost provide a short-circuit between the anode and cathode, whereupon the voltage across the SCR rapidly drops. The rectifier remains "on" until the current flowing through the device drops below a threshold holding level IH, after which the device reverts back to its "off" mode and cannot be reactivated until the voltage again exceeds its threshold level.

The next stage in the understanding process is to examine what happens for increasing levels of gate current. Note from Figure 3.51 that the threshold voltage level at which the SCR begins to conduct decreases for increasing levels of gate current. If the gate current is sufficiently high, then the SCR characteristic will resemble that of a normal diode. The net effect is that the rectifier is a diode which can be caused to conduct by either applying a sufficiently large voltage across its terminals, or by supplying a sufficient gate current and applying a low voltage across its terminals. When we trigger the operation of the SCR with a gate current, we say that the SCR is being "fired". Once the SCR has fired into conduction, it continues to do so until the current flowing through it falls below the threshold level and then reverts back to its low conduction state.

The reverse bias characteristic of the SCR has not been shown in Figure 3.51, because SCRs are normally designed to operate in their forward bias region and, for all intents and purposes, the SCR is a unidirectional device. The reverse characteristic is not unlike that of any other diode, except that the reverse breakdown voltage is of a similar magnitude to the forward blocking voltage.

The actual mechanism for triggering an SCR is not directly through the application of current to the gate, but rather by the application of a rectangular voltage pulse to the gate terminal (which then causes the Q2 transistor in the SCR to saturate and the SCR to fire). A typical data sheet for an SCR device would specify the trigger level voltage ranges allowable at the gate terminal and the blocking voltage and blocking current. In most circuits involving SCRs, the objective is to create a triggering pulse which will cause the SCR to conduct. This makes the SCR a useful device that can be controlled with digital circuits and microprocessor based control systems. However, it is interesting to note that a common parasitic problem with the SCR is that applying a rapidly changing anode to cathode voltage (VAC) can also cause the SCR to fire at inappropriate times. This is referred to as dv/dt breakdown and is generally an undesirable phenomenon which is commonly overcome through a so-called "snubber" circuit.


A snubber circuit is simply a high frequency filter that consists of a resistor and capacitor in series. The snubber is connected between the anode and cathode terminals of the SCR. Since a capacitor has a very low impedance to high-frequency signals, the snubber circuit will draw current away from the SCR, thus preventing it from turning on. The series resistor is used to limit the current through the capacitor.

Most applications involving SCRs are generally concerned with switching and hence there is always a potential for unwanted voltage spikes to occur or be induced in circuits. While the snubber circuit protects the SCR from dv/dt breakdown, the gate terminal also needs to be protected from unwanted noise accidentally firing the SCR. The most common approach is to simply connect a capacitor between the gate and the cathode. The low impedance of the capacitor to high frequency signals means that any short-duration spikes are effectively short-circuited between the gate and the cathode, thereby preventing the SCR from switching on.

There are many applications to which SCRs are suited and our objective is only to examine a few that are of relevance. One major use of SCRs is in the inversion of voltages from d.c. to a.c. In Figure 3.11, we briefly examined the three-phase bridge rectifier circuit. However, we can actually replace all the standard diodes in this circuit with SCRs. If we set up the gate triggering such that the SCRs will fire whenever the anode voltage is greater than the cathode voltage, then the bridge will act as a normal three-phase rectifier. However, consider what happens if we reverse the operation of the three-phase bridge and supply a d.c. input across the terminals of the bridge and then selectively fire the SCRs. The net result is a conversion from d.c. to three-phase a.c. In other words, by having two three-phase bridges made up from SCR devices and connected via a pair of cables, we can have a d.c. transmission line system. This is shown in Figure 3.52.


Figure 3.52 - SCRs for Inversion and d.c. Transmission Systems


One may well ask why there is a need for transmission of d.c. over long distances but, in fact, there are a number of potential benefits in terms of minimising the number of conductors and minimising transmission losses. In addition, there are many systems that are based upon d.c. machines, such as tramway / trolley bus power systems and railway power systems. Since most power generation is in a.c. form, there are many benefits to be realised by transmitting at least some portion of the total energy requirements of a large city via d.c. systems such as those established with thyristors.

At a lower level, SCRs can be used in conjunction with Zener diodes in order to protect circuits from voltage spikes. This is achieved via a circuit known as an "SCR crowbar". The circuit diagram for the SCR crowbar is shown in Figure 3.53.


Figure 3.53 - SCR Crowbar Circuit

The operation of the circuit is really an extension of the voltage regulator design discussed in Section 3.2.3. Under normal operation, neither the Zener diode nor the SCR in Figure 3.53 is conducting and all current flows through the load. If, however, a voltage spike occurs on the input side, then the Zener diode goes into breakdown mode and conducts. This causes a current to flow in RG and hence a voltage to be developed across it (thereby creating a voltage at the gate of the SCR). The SCR then turns on and acts as a by-pass for the current and reduces the voltage across the load.

The SCR also has the effect of short-circuiting the device that created the voltage spike. If the device is protected by a fuse, then the fuse should blow. Another alternative is to place the coil of a "normally-closed" (ie: normally short-circuited) relay in series with the SCR and the switch terminals of the relay in series with the d.c. supply. When current flows in the SCR, the relay is activated and the d.c. supply is temporarily isolated from the load.

The SCR in the crowbar circuit essentially duplicates the role of the Zener diode. It is there because it is capable of withstanding a much higher current than the diode can withstand. The SCR in such circuits is expendable and really only needs to survive long enough to blow out a series fuse.


Another common application for the SCR is in controlling the average value of the voltage applied to a load. This is done with an SCR phase controller circuit such as the one shown in Figure 3.54. The load could be a simple light-globe or it could be the armature terminals of a universal motor. The input waveform is a sinusoidal a.c. voltage and the output is a d.c. waveform whose shape is determined by the firing of the SCR.


Figure 3.54 - SCR Phase Controller for Resistive Loads and Motors

It is evident, from Figure 3.54, that whenever the SCR is conducting then the output voltage will approximately equal the input voltage. When the SCR is not conducting, then the output voltage will be zero. The capacitor between the gate and the cathode of the SCR is present to prevent unwanted triggering. The SCR will only conduct when the anode to cathode voltage is positive and when the gate is triggered, and so the output voltage can only be greater than zero when the input waveform is on the positive half of its cycle.

The SCR in Figure 3.54 is triggered from a voltage level derived from the sinusoidal input waveform. The voltage level at which triggering occurs can be changed by adjusting the variable resistor, R2. This has the effect of changing vT and hence the SCR gate voltage (which is vT minus the diode voltage drop of 0.7 volts). If we adjust vT (by adjusting R2) so that the SCR is always triggered, then the output waveform will be a half-wave rectified version of the input (remember that the SCR does not allow for negative conduction). This provides the maximum possible average output voltage. Other values of R2 provide a lower average value of output voltage. The effect is shown in Figure 3.55.


The purpose of the diode in the circuit of Figure 3.54 is to prevent the SCR from breaking down when the input voltage waveform is in the negative half of its cycle. Whenever the input voltage moves into the negative half of its cycle, the diode, D, becomes open circuit and so no voltage is applied to the gate.


Figure 3.55 - Using a Variable Resistance to Adjust the Average Output From an SCR Phase Controller
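The effect of the trigger point on the average output can be quantified with a short calculation. The sketch below assumes a 325 volt peak (240 volt r.m.s.) sinusoidal input purely for illustration; the firing angle stands in for the setting of R2:

import math

v_peak = 325.0                     # assumed peak of the sinusoidal input (volts)

def average_output(firing_angle_deg):
    # The SCR conducts from the firing angle up to 180 degrees, once per
    # cycle, so the average of the output is (Vm / 2*pi) * (1 + cos(alpha)).
    a = math.radians(firing_angle_deg)
    return (v_peak / (2 * math.pi)) * (1 + math.cos(a))

for angle in (0, 45, 90, 135):
    print(f"firing at {angle:>3} degrees -> average output {average_output(angle):6.1f} V")

Firing at zero degrees corresponds to the SCR always being triggered (the maximum average output), while later firing angles progressively reduce the average voltage applied to the load.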


The most common application of the circuit in Figure 3.54 is in light dimmer circuits, where the phase controller reduces the average voltage applied to the light. One may well ask why the same function cannot be achieved with a simple series resistor. The answer, of course, is that the series resistor would dissipate a great deal of energy and generate a lot of unwanted heat. The resistance values in the phase controller circuit, however, can be made large, thereby conducting very little current and dissipating very little energy. The result is a much smaller light dimmer switch than can be achieved via a variable resistor.

A universal motor is essentially a d.c. motor whose armature and field terminals are both supplied from a common source. The end result is that the d.c. motor can function on either a.c. or d.c. voltages. In addition, the armature and field inductances act as a choke so that even when a time-varying voltage is applied, the currents flowing in the field and armature are relatively smooth. The SCR phase controller is therefore an ideal way of controlling the flow of energy to the motor without wasting heat in additional resistors. The SCR phase controller arrangement is used in domestic motors such as those found in vacuum cleaners, power tools, etc.

3.7.3 Diacs and Triacs

In Section 3.7.2 we noted that the traditional thyristors (Silicon Unilateral Switches and Silicon Controlled Rectifiers) were unidirectional devices. That is, when triggered by either a large anode-cathode voltage or by a gate pulse, the devices would conduct current in one direction. Diacs and Triacs are, in effect, bi-directional versions of the SUS and the SCR, respectively.

A Diac is basically an npn or pnp transistor, except that unlike the BJT, the doping density in each region is identical and hence the device is bi-directional. The Diac has no gate terminal and hence it can only be triggered by applying a forward or reverse bias greater than the blocking voltage. Its forward and reverse characteristics are identical and the forward region of a Diac is similar to an SCR with zero gate current.

The Triac is a considerably more complex device than the Diac, with six doped semiconductor regions. Operationally, the Triac behaves like two complementary SCRs connected in parallel and is capable of providing conduction in both directions provided that either the magnitude of the voltage across the device is greater than the blocking voltage or the magnitude of the gate pulse voltage is greater than the triggering level.

The circuit symbols for the Diac and Triac are shown in Figure 3.56. The Diac is both physically and operationally a symmetrical device. The Triac, on the other hand, is not. In terms of operation it is best to view the Triac's Main Terminal 2 (MT 2) as being equivalent to the anode on the SCR and Main Terminal 1 (MT 1) as being equivalent to the cathode on the SCR.



Figure 3.56 - (a) Circuit Symbol for Diac (b) Circuit Symbol for Triac

In order to operate the Triac in its forward region (that is, identically to the SCR), we apply a voltage at MT 2 greater than MT 1 and apply a positive gate pulse. In order to operate the device in the reverse mode, we not only have to make MT 1 greater than MT 2, but we also have to apply a negative pulse to the gate terminal. The Triac can be used for phase control of a.c. waveforms just like the SCR, except that it provides the potential for full-wave phase control, rather than the half-wave control discussed in Section 3.7.2.

3.7.4 Unijunction Transistors (UJTs)

The Unijunction Transistor or UJT is a device specially designed for use as a triggering mechanism for SCRs and Triacs. The construction of the UJT is not unlike the JFET and its circuit symbol is also similar. The circuit symbol for the UJT and the equivalent circuit are shown in Figure 3.57 (a) and Figure 3.57 (b) respectively.

The method of UJT operation is relatively straightforward and can be understood by examining Figure 3.57 (b). Base 1 (B1) is the output of the device and the Emitter is the input. When the voltage between the emitter and B1 is sufficiently high, the diode conducts and the voltage at B1 is approximately equal to the Emitter voltage. However, by varying the voltage between B2 and B1, we vary the cathode voltage of the diode and hence the emitter voltage required to cause the diode to conduct. The base to base voltage therefore determines the threshold level at which the input is passed through to the output.



Figure 3.57 - Unijunction Transistor (a) Circuit Symbol (b) Equivalent Circuit

The voltage at the cathode of the diode can be determined by using voltage division:

VC = VB2B1 · R1 / (R1 + R2)        ...(23)

where:

VC is the voltage at the cathode of the diode
VB2B1 is the voltage between B2 and B1.

The resistance ratio derived from voltage division in equation (23) is referred to as the intrinsic stand-off ratio "η" for the UJT and is typically in the order of 0.8. In a typical application for the UJT, the B1 terminal of the device would be connected to the gate of the SCR. B2 would be connected to a supply rail and the Emitter would be fed with an input which is some scaled version of the supply rail voltage. The UJT is really only designed as a mechanism for firing SCRs and Triacs and is seldom used for other purposes. Although the UJT is often associated with SCRs and Triacs, it is not in itself a thyristor.
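As a small numerical sketch of equation (23) (all values assumed for illustration, including the nominal 0.7 volt diode drop), the firing threshold seen at the Emitter can be estimated as follows:

r1, r2 = 4.0e3, 1.0e3              # assumed internal base resistances (ohms)
v_b2b1 = 10.0                      # assumed base-to-base voltage (volts)

eta = r1 / (r1 + r2)               # intrinsic stand-off ratio (0.8 here)
v_c = eta * v_b2b1                 # cathode voltage of the internal diode, equation (23)
v_trigger = v_c + 0.7              # emitter voltage needed to turn the diode on

print(f"eta = {eta:.2f}; the emitter must exceed roughly {v_trigger:.1f} V to fire the UJT")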


Chapter 4
Fundamentals of Digital Circuits

A Summary...
An introduction to digital circuits and systems and the way they are used to create logical and arithmetic circuits, storage devices (flip-flops and registers), counters and sequential logic. The binary number system and binary arithmetic. Boolean Algebra and digital logic circuit design. Boolean reduction techniques (Karnaugh Mapping and Boolean laws of tautology). Logic gates and types of hardware logic (TTL, MOS, CMOS, etc.).

[Chapter-opening diagram: the interfacing chain linking a computer's digital voltages, via digital-to-analog and analog-to-digital conversion, scaling or amplification, isolation, protection circuits and energy conversion (with external voltage supplies), to the analog voltages and energy forms of an external system.]


4.1 A Building Block Approach to Computer Architecture

Many people who use and design systems based upon microprocessors never fully understand the architecture of such processors. There are many reasons for this. Some people have managed to live their professional lives without ever having learnt, whilst others have learnt but have failed to understand. The difficulty in understanding the architecture of a computer system or microprocessor is that these devices are a combination of many component pieces of knowledge and technology. Many universities and text books explain, in great detail, how individual components work but fail to show how the pieces are brought together into a computer system or microprocessor. Some have endeavoured to explain the basic operation of computers and microprocessors by choosing a realistic example, based upon a particular processor. The problem with this approach is that there are so many side (technical) issues involved in the practical implementation of a computer system (for commercial purposes) that they tend to obscure the basic principles.

In this book, we will be taking a slightly different approach to computer architecture. Firstly, we will introduce, by analogy, the functionality of the microprocessor. We will then overview the basic building blocks that fit together to make up an operational microprocessor and computer system. We will then expand upon these blocks stage by stage until we can piece them together into something resembling a workable system. We will be looking at the design of a system by examining a hypothetical (generic) processor that exhibits the rudimentary traits of most modern processors. If you can come to terms with the basic aspects of the modern processor, then you should be able to approach any technical or commercial description of a particular device and understand where and why it differs from the basic form.

In order to begin our discussions on the architecture of modern digital computers, we examine a relatively basic "mechanical" device. We do so because our objective in this book is to establish, convincingly, that the most sophisticated microprocessors are still very much electronic machines and nothing more. Many people have great difficulty in developing hardware and software because they have come to believe, in a literal sense, the term "intelligent" that has been ascribed to microprocessor based devices. Ironically perhaps, those who use microprocessors as nothing more than machines often achieve far more with them than those who revere their ability. This shouldn't be altogether surprising because maximising the performance of a computer based system really depends upon understanding its intrinsic limitations.


Consider a mechanical control system such as the one found on an older style automatic washing machine. The arrangement first appeared in the 1950s and is shown in Figure 4.1. The system is composed of a dial, driven by a small motor. The dial steps from one position to another at uniform time intervals. At each position, the control system generates a number of output voltages that are used to drive relays and solenoids (which in turn activate pumps, motors, etc.). These output voltages, which we refer to as the "state" of the system controller, are dependent upon the inputs to the controller (from the user buttons, temperature sensors, etc.) and the current position of the dial. The speed at which the system moves from one state to the next is determined by the speed of the motor driving the dial. The motor driven dial can be described as a "state machine".


Figure 4.1 - Simple Mechanical Controller
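The behaviour of such a controller can be sketched in a few lines of code. The states, outputs and signal names below are invented purely for illustration; the point is simply that the outputs are a fixed function of the current dial position, and the "motor" does nothing more than step the dial:

CYCLE = [
    ("FILL",  {"water_valve": 1, "drum_motor": 0, "pump": 0}),
    ("WASH",  {"water_valve": 0, "drum_motor": 1, "pump": 0}),
    ("DRAIN", {"water_valve": 0, "drum_motor": 0, "pump": 1}),
    ("SPIN",  {"water_valve": 0, "drum_motor": 1, "pump": 1}),
]

state = 0
for tick in range(len(CYCLE)):          # each "tick" is the motor stepping the dial
    name, outputs = CYCLE[state]
    print(f"tick {tick}: state = {name}, outputs = {outputs}")
    state = (state + 1) % len(CYCLE)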

The modern microprocessor is, in principle, little more than an electronic version of such a mechanical controller. The major difference of course is that the microprocessor can be fabricated onto a microscopic piece of semiconductor material and can move from one state to another in micro or nano seconds rather than seconds. In Figure 4.2, the microprocessor is shown in a completely analogous form to the mechanical controller of Figure 4.1.



Figure 4.2 - Microprocessor Analogy to Mechanical Controller

In the case of the microprocessor, the heart of the system is an electronic "state machine", which moves the system from one state to another based upon a digital clock input. The faster the clock signal, the faster the machine moves from one state to another. The inputs to the microprocessor system come from the data bus and the outputs from the microprocessor can either be fed back onto the data bus or onto the address bus. The internal states of the microprocessor are decoded by conversion circuits so that some useful work can be performed before an output is provided. For example, successive inputs from the data bus can be added together or subtracted from one another and the result sent to the output side of the processor (data bus or address bus).

Outsiders to the world of computing are often surprised to find that even the most powerful microprocessors are relatively primitive devices in the sense that they can do little more than add or subtract (most don't even multiply or divide), temporarily store and manipulate inputs and then feed them out again. However, this highlights the very "mechanical" nature of computers and the need for a high level of understanding before they can effectively be utilised.


A number of basic elements are common to all modern processors. These include the following:

•	Reasoning circuits (referred to as combinational or Boolean logic)
•	Storage circuits (referred to as registers)
•	Mathematical circuits (referred to as numerical logic)
•	Sequential circuits or "state machines"
•	A cluster of conductors for input and output of voltages (called the data bus)
•	A cluster of conductors for output of voltages (called the address bus).

The microprocessor, in isolation, does not perform any realistic computation. In order to operate effectively, it must be coupled to a number of other devices, including:

•	A collection of registers for temporary storage of data (referred to as memory) that can provide input to the processor or receive output from the processor
•	A bulk data storage facility controlled by another state-machine or processor (referred to as a disk-drive)
•	A data entry pad for human users (referred to as a keyboard)
•	An output device, driven by another state machine or processor, for interaction with human users (referred to as a graphics card).

The complete system is then referred to as a computer. However, the computer cannot perform any useful work unless the processor generates meaningful outputs (to the data and address bus structures). This, in turn, can only occur if the processor receives meaningful inputs (from the data bus). The meaningful inputs are entered by the human user and are stored on disk or in memory until they are fed through the processor. A collection of "meaningful" inputs is referred to as a program or as software. Since the microprocessor is only a machine, and all other devices in a computer system are of lesser or equal intellect, the only intelligence which can be ascribed to computer systems is via software. The computer system is shown schematically in Figure 4.3, with a number of its key elements.

Most of the elements within the computer system share a number of common attributes. Firstly, all the devices work with only two numbers (zero and one) that are represented by two voltages - typically in the order of zero volts and five volts, respectively. Secondly, all the devices use a kind of reasoning or, more appropriately, logic, which we refer to as "Boolean logic". This is named after the mathematician George Boole (1815-1864) and is implemented via a range of different circuits that we refer to as "gates". Boolean logic gates can not only be used to instil human reasoning, but they can also be used to generate numerical circuits that can perform simple arithmetic, such as addition and subtraction. Boolean logic gates are most commonly formed by fabricating a number of transistors into circuits within a piece of semiconductor material, and in this chapter, we shall examine a few of the different technologies that are in use.
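As a small illustration of the claim that logic gates can perform arithmetic, the following sketch builds a "half-adder" - the circuit that adds two single bits - from nothing more than an AND operation and an exclusive-OR operation:

def half_adder(a, b):
    total = a ^ b       # an exclusive-OR gate gives the sum bit
    carry = a & b       # an AND gate gives the carry bit
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> carry {c}, sum {s}")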



Figure 4.3 - Basic Computer System Elements

Boolean logic gates are fundamental to almost all areas of computing and so we need to find sensible ways of dealing with them. To this end, we shall examine a number of design and simplification techniques, such as the basic laws of Boolean Algebra (established by George Boole) and Karnaugh mapping. Boolean logic gates can also be used to create memory storage elements that we refer to as "flip-flops", and a collection of flip-flops can be used to create a register. A number of registers fabricated onto a chip creates "memory". Moreover, if we take a collection of flip-flops and interconnect them with other Boolean logic gates, then we can create "counters" that move from one set of outputs (state) to another on each clock cycle. Counters are the most basic form of electronic "state machine" and hence the basis for the "heart" of modern processors.

So far, we have only discussed microprocessors and computers, and many would say that not all computers are based upon microprocessors. This is certainly true, and many larger computer systems do not contain a single-chip microprocessor. However, all modern computers contain some form of Central Processing Unit (CPU). Despite what many manufacturers would argue, there is little fundamental difference between the CPU of a large computer system, which is composed of many individual chips, and the CPU of a microcomputer system, which is composed of a single chip (the microprocessor). Most systems operate on the so-called "Von Neumann" architecture and the major difference between processors is in the way CPU functions are distributed - that is, on a single chip or over a number of discrete chips for improved performance.


The Von Neumann architecture is one in which program instructions and data are all stored in a common area of memory and transferred into and out of the CPU on a common data bus. Not all processors function on the Von Neumann architecture, which we will discuss in some detail. A number of specialised processors, referred to as Digital Signal Processors or DSPs, function on a slightly different architecture, referred to as the so-called "Harvard" architecture (notably, Von Neumann was a Princeton man!). In the Harvard architecture, program instructions and data are stored and transferred separately from one another. This has advantages in a number of signal-processing areas because it enables sophisticated control calculations (such as Fast Fourier Transforms, etc.) to execute on a Harvard processor more rapidly. For this reason, devices such as DSPs are widely used where low-cost, high-performance processors are required. The Harvard architecture is, however, less suitable for common applications than the Von Neumann architecture and is therefore not as widely used. In the final analysis, if one can come to terms with the Von Neumann architecture then one should also be able to understand the Harvard architecture without difficulty.

The building blocks required to construct either the Von Neumann architecture or the Harvard architecture are the same. The difference is in the way the blocks are put together. The basic building blocks for both architectures are shown in Figure 4.4. It is these building blocks that we shall pursue for the remainder of this chapter and in Chapters 5 and 6, where we see how the basic elements fit together.

(Diagram not reproduced: the building blocks, from the lowest level upwards, are Integrated Circuit Transistors (BJTs, FETs, CMOS); Boolean Logic Gates and Programmable Array Logic; Combinational Logic (Reasoning) and Flip-Flops (Storage); Numerical Circuits (Calculation), Counters & Sequential State Machines and Registers & Memory (RAM, ROM, etc.); and finally the CPU (Microprocessor, DSP, etc.).)

Figure 4.4 - Building Blocks in Computer Architecture


4.2 Number Systems, Conversion and Arithmetic


Digital computer systems have been designed in such a manner as to minimise the range of possible voltage levels that exist within them. In fact, only two voltage ranges are normally permitted to represent data. One voltage range (normally somewhere near zero volts) represents the number zero and the other voltage range (normally around five volts) represents the number one. The fact that we only have two possible voltage ranges means that we do not need to concern ourselves with circuit accuracy but rather with more important issues such as increasing speed and minimising size, cost and power dissipation. However, the restriction of having only two possible numbers (in other words, a binary number system) means that we need to be able to come to terms with a range of number systems other than the decimal one to which most humans are accustomed.

In Section 4.1 we looked at a simplified model of the microprocessor and computer system, as shown in Figures 4.2 and 4.3. Within that system we noted that the microprocessor essentially had one input, called the data bus, and two outputs, these being the data bus and the address bus. The term "bus" really refers to a number of conductors that are used to transfer energy (by current and voltage) or information via voltage levels, as in a computer system. The data bus is therefore not a single input/output line but rather a cluster of lines. A data bus can typically be composed of anywhere between 8 and 64 conductors, depending on the microprocessor system in question. The same applies to the address bus. Each conductor on the address or data bus can, at any instant in time, be either "high" (around 5 volts) or "low" (around 0 volts) and in digital systems, we generally do not concern ourselves with the values in between. At any instant, therefore, a conductor can contain one binary digit of data. This is referred to by the abbreviated term "bit".

Figure 4.5 shows the typical sort of voltage waveforms that could be travelling along an 8-bit data bus. At any instant in time, "T", the data lines contain 8 bits of binary data. In many instances the 8 bits are used to represent a number or a character. For example, at time "T", in Figure 4.5, the data bus could be representing the number 10111101 or a character corresponding to that binary number. There is no stage at which the microprocessor (or computer) ever sees numbers or characters in the form in which humans enter them. They are always represented by binary numbers.

The diagram of Figure 4.5 is actually only an approximation of what is actually occurring within a computer system. In fact, although we talk of digital circuits, most circuits only approximate digital behaviour. Figure 4.6 shows a time-scale enlarged version of a digital waveform which actually turns out to be analog in nature. However, for most design purposes the digital approximation is extremely good and it is only in relatively sophisticated "trouble-shooting" situations that we need to consider the analog behaviour of digital circuits.


(Waveform diagram not reproduced: voltage versus time on each of the eight data bus lines, D7 down to D0, with a sampling instant marked at time T.)

Figure 4.5 - Typical Digital Waveforms on an 8-bit Data Bus


(Diagram not reproduced: the actual, analog output waveform overlaid on its idealised, rectangular digital approximation, plotted against time.)

Figure 4.6 - The True Analog Nature of Digital Waveforms

All of the above discussions point to the fact that we need to understand how binary numbers can be used and manipulated and how we can convert from the binary number system to the decimal number system and so on. However, in order to understand how other number systems work, we must first ensure that we understand the decimal (or base 10) number system. To begin with, we note that the following is the natural, base-10, count sequence:

0   1   2   3   4   5   6   7   8   9
10  11  12  13  14  15  16  17  18  19
20  21  22  23  24  25  26  27  28  29
. .
90  91  92  93  94  95  96  97  98  99
100 101 102 103 104 105 106 107 108 109

Note the way in which we "carry" a digit each time we exceed the number "9". These representations are symbolic of the quantities that we actually wish to represent. For example, the decimal number 721 actually represents the following:

(7 x 10^2) + (2 x 10^1) + (1 x 10^0)


Since the electronic circuitry in computer systems is designed to handle only two types of voltages (high and low), this representation is clearly inappropriate for our needs. There are however other commonly used number systems, which more closely relate to the needs of the computer, albeit indirectly. For example, the base 8, or "Octal" number system arises regularly. A count sequence in base 8 takes on the following form:

0   1   2   3   4   5   6   7
10  11  12  13  14  15  16  17
. .
70  71  72  73  74  75  76  77
100 101 102 103 104 105 106 107

The octal number 721 actually represents the following:

(7 x 8^2) + (2 x 8^1) + (1 x 8^0)

which is equal to decimal 465 and not decimal 721. When working with a range of different number systems, it is common practice to subscript numbers with the base of the number system involved. For example, we can validly write the following expression:

721_8 = 465_10

Another number system that is commonly used with computer systems is the base 16 or hexadecimal number system. Since we do not have enough of the ordinary numerals (0..9) to represent 16 different numbers with a single symbol, we "borrow" the first six letters of the alphabet (A..F). A count sequence in base 16 then takes on the following form:

0   1   2   3   4   5   6   7   8   9   A   B   C   D   E   F
10  11  12  13  14  15  16  17  18  19  1A  1B  1C  1D  1E  1F
. .
F0  F1  F2  F3  F4  F5  F6  F7  F8  F9  FA  FB  FC  FD  FE  FF
100 101 102 103 104 105 106 107 108 109 10A 10B 10C 10D 10E 10F

To similarly convert the hexadecimal number 721 to decimal:

721_16 = (7 x 16^2) + (2 x 16^1) + (1 x 16^0) = 1825_10
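The positional rule used in all of these conversions lends itself to a few lines of code. The following C sketch is purely illustrative (the function name and structure are arbitrary and not part of this text); it evaluates a digit string in any base from 2 to 16 by weighting each symbol with successive powers of the base:

#include <ctype.h>
#include <stdio.h>

/* Convert a digit string (e.g. "721") in the given base to its decimal value.
   Each new symbol shifts the accumulated value up one position, exactly as in
   721 (base 8) = 7 x 8^2 + 2 x 8^1 + 1 x 8^0 = 465 (base 10).                */
long to_decimal(const char *digits, int base)
{
    long value = 0;
    for (; *digits != '\0'; digits++) {
        int d = isdigit((unsigned char)*digits)
                    ? *digits - '0'
                    : toupper((unsigned char)*digits) - 'A' + 10;
        value = value * base + d;   /* weight previous digits by the base, add the new one */
    }
    return value;
}

int main(void)
{
    printf("%ld %ld %ld\n",
           to_decimal("721", 8),        /* 465  */
           to_decimal("721", 16),       /* 1825 */
           to_decimal("10111101", 2));  /* 189  */
    return 0;
}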


Finally we move on to the number system most closely related to the architecture of computer systems themselves, the binary number system, in which we can only count from 0 to 1 before performing a "shift" operation. The following is a base 2 count sequence:

0 1 10 11 100 101 110 111 1000 1001 1010 1011 1100 1101 1110 1111

As an example, in order to convert the number 10111101_2 to decimal, we use the following procedure:

10111101_2 = (1 x 2^7) + (0 x 2^6) + (1 x 2^5) + (1 x 2^4) + (1 x 2^3) + (1 x 2^2) + (0 x 2^1) + (1 x 2^0) = 189_10

We have now seen that it is a relatively straightforward task to convert numbers from different bases to their decimal (base 10) equivalents. However it is also possible to convert from base 10 numbers into different number systems through a process of long division. In order to do this, the original decimal number is repeatedly divided by the new base (to which we wish to convert) and the remainders of each division are stored. The process is repeated until the original number is diminished to zero. The remainders then form the representation of the decimal number in the new base. This sounds complex, but in essence is relatively straightforward. For example, if we wish to convert the decimal number 189 into its binary representation, the following long division quickly achieves the result:

189 / 2 = 94   remainder 1   (low order remainder)
 94 / 2 = 47   remainder 0
 47 / 2 = 23   remainder 1
 23 / 2 = 11   remainder 1
 11 / 2 =  5   remainder 1
  5 / 2 =  2   remainder 1
  2 / 2 =  1   remainder 0
  1 / 2 =  0   remainder 1   (high order remainder)

Therefore 189_10 = 10111101_2 as proven earlier.


Conversion from the binary number system to the octal number system is a simple task, since each group of three bits directly represents an octal number. Binary numbers are partitioned into groups of 3 bits (binary digits), starting from the low order bits. Then each group of three digits is individually converted into its octal equivalent. For example, to convert the binary number 1011011110111 to its octal equivalent, the following procedure is used:

1   011   011   110   111
1    3     3     6     7

Therefore 1011011110111_2 = 13367_8. Conversion from the binary number system to the hexadecimal number system is similar to the binary-octal conversion, except that binary digits are placed into groups of four (since 4 bits represent 16 combinations). Each group of four is then individually converted into its hexadecimal equivalent. For example, to convert the binary number 1011011110111 to its hexadecimal equivalent, the following procedure is used:

1   0110   1111   0111
1    6      F      7

Therefore 1011011110111_2 = 16F7_16. Octal and hexadecimal numbers can also be readily transformed into their binary representation, simply by converting each digit individually to its equivalent 3 or 4 bit representation. This is the reverse operation to that shown in the previous two examples. You should now observe that we have a simple and direct mechanism for converting from octal and hexadecimal numbers to binary, but that in order to convert from decimal to binary we need to perform the long division calculation, shown previously. In order to establish an analogous, direct relationship between binary and decimal, another number representation has also been used. This is referred to as the Binary Coded Decimal or BCD system. In the BCD system, each decimal digit is represented in binary by four bits. For example, the BCD equivalent of the number 721 is given by:

0111 0010 0001

This is similar to the relationship between hexadecimal and binary, except that certain bit combinations can never occur, since the BCD system uses 4 bits (with 16 combinations) in order to represent the ten decimal digits, 0 to 9.
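Because each group of four bits maps directly onto one hexadecimal digit, the same grouping can be performed in software with simple shift and mask operations. The following C fragment is only a sketch of the idea:

#include <stdio.h>

/* Print the hexadecimal digits of a 16-bit value by taking the bits in
   groups of four, from the high order group down, mirroring the manual
   grouping method described above.                                      */
void print_hex_groups(unsigned int value)
{
    const char symbols[] = "0123456789ABCDEF";
    for (int shift = 12; shift >= 0; shift -= 4) {
        unsigned int nibble = (value >> shift) & 0xF;   /* isolate one 4-bit group */
        putchar(symbols[nibble]);
    }
    putchar('\n');
}

int main(void)
{
    print_hex_groups(0x16F7);   /* 1 0110 1111 0111 in binary; prints 16F7 */
    return 0;
}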


Strictly speaking, BCD should not be regarded as a number system, but rather as a mechanism for directly converting human (decimal) input into an electronically usable binary form. It is most commonly used at a human to computer interface. For example, if a person pushes a number 7 (say) on a simple key-pad, then the appropriate voltages (low, high, high, high) can be generated. BCD is not used in sophisticated keyboards such as those found on most personal computers, workstations and main-frames. A more sophisticated representation is used for such keyboards and is discussed in Section 4.3. To summarise the various number representations, most commonly associated with computers, Table 4.1 shows how each of the number systems represents the decimal numbers from 0 to 20.

Decimal   Hexadecimal   Octal   Binary     BCD

 0        0             0       00000000   0000 0000
 1        1             1       00000001   0000 0001
 2        2             2       00000010   0000 0010
 3        3             3       00000011   0000 0011
 4        4             4       00000100   0000 0100
 5        5             5       00000101   0000 0101
 6        6             6       00000110   0000 0110
 7        7             7       00000111   0000 0111
 8        8             10      00001000   0000 1000
 9        9             11      00001001   0000 1001
10        A             12      00001010   0001 0000
11        B             13      00001011   0001 0001
12        C             14      00001100   0001 0010
13        D             15      00001101   0001 0011
14        E             16      00001110   0001 0100
15        F             17      00001111   0001 0101
16        10            20      00010000   0001 0110
17        11            21      00010001   0001 0111
18        12            22      00010010   0001 1000
19        13            23      00010011   0001 1001
20        14            24      00010100   0010 0000

Table 4.1 - Representation of Decimal Numbers from 0 to 20


Numbers from different bases can be dealt with arithmetically in exactly the same manner as decimal numbers, except that a "shift" or "carry" has to occur each time a digit equals or exceeds the base value. The following are simple examples of addition, subtraction, multiplication and division using the binary number system:

(i) Addition of 11101_2 and 1011_2

      11111    (carry)
      11101
    +  1011
    -------
     101000

(ii) Subtraction of 1011_2 from 11101_2

      11101
    -  1011
    -------
      10010

(iii) Multiplication of 11101_2 by 1011_2

         11101
       x  1011
      --------
         11101
        111010
      11101000
     ---------
     100111111

(iv) Division of 11101_2 by 1011_2

     11101 / 1011 = 10.10100011 (approximately)
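These results can be checked with a few lines of C, since the machine performs exactly the same binary arithmetic internally; the hexadecimal constants below are simply 11101 and 1011 expressed in base 16:

#include <stdio.h>

int main(void)
{
    /* 11101 (binary) = 0x1D = 29 decimal, 1011 (binary) = 0x0B = 11 decimal */
    unsigned a = 0x1D, b = 0x0B;

    printf("%u\n", a + b);   /* 40  = 101000    binary, as in example (i)   */
    printf("%u\n", a - b);   /* 18  = 10010     binary, as in example (ii)  */
    printf("%u\n", a * b);   /* 319 = 100111111 binary, as in example (iii) */
    return 0;
}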


4.3 Representation of Alpha-numerics


It should be clear from the discussions in Sections 4.1 and 4.2 that microprocessor based systems (and indeed digital systems) can only understand voltage waveforms which represent bit streams. They have no capacity for a direct interpretation of the alphanumeric characters which humans use for communication. In Section 4.2, the Binary Coded Decimal system was cited as a means by which numbers, entered on a simple keypad, could be directly and electronically represented in a computer. This is, however, very restrictive as only 16 different numeric characters can be represented by a 4 bit scheme (and only 10 combinations are actually used in 4 bit BCD). In order to represent all the upper and lower case alphabetic characters on a typical computer keyboard, plus symbols, carriage-returns, etc., it is necessary to use strings of 7 or 8 bits, which then provide enough combinations for 128 or 256 alphanumeric characters.

Two specifications for the bit patterns representing alpha-numeric characters are in common use. These are the 7 bit ASCII (American Standard Code for Information Interchange) and the 8 bit EBCDIC (Extended Binary Coded Decimal Interchange Code) systems. The ASCII system is by far the more prolific of the two specifications and it is used on the majority of Personal Computers. The EBCDIC system is used predominantly in a mainframe (notably IBM) computer environment.

The ASCII character set is listed in Table 4.2. This table shows each character beside its hexadecimal ASCII value, which explicitly defines the bit pattern representation for each character. For example, the character 'X' has the hexadecimal ASCII value of "58" that translates to a bit pattern of:

5      8
0101   1000

The corresponding EBCDIC hexadecimal values are also provided beside each character for comparison. Note that the EBCDIC system uses an 8 bit representation and therefore provides a much larger character set than the ASCII system. However some of the bit patterns in the EBCDIC system are unused.


CHAR    ASCII Value    EBCDIC Value        (the table is printed as three such column groups, each read downwards)

NULL SOH STX ETX EOT ENQ ACK BEL BS HT LF VT FF CR SO SI DLE DC1 DC2 DC3 DC4 NAK SYN ETB CAN EM SUB ESC FS GS RS US SP ! " # $ % & ' ( ) *

00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F 20 21 22 23 24 25 26 27 28 29 2A

00 01 02 03 37 2D 2E 2F 16 05 25 0B 0C 0D 0E 0F 10 11 12 13 3C 3D 32 26 18 19 3F 27 22 -35 -40 5A 7F 7B 5B 6C 50 7D 4D 5D 5C

+ , . / 0 1 2 3 4 5 6 7 8 9 : ; < = > ? @ A B C D E F G H I J K L M N O P Q R S T U

2B 2C 2D 2E 2F 30 31 32 33 34 35 36 37 38 39 3A 3B 3C 3D 3E 3F 40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F 50 51 52 53 54 55

4E 6B 60 4B 61 F0 F1 F2 F3 F4 F5 F6 F7 F8 F9 7A 5E 4C 7E 6E 6F 7C C1 C2 C3 C4 C5 C6 C7 C8 C9 D1 D2 D3 D4 D5 D6 D7 D8 D9 E2 E3 E4

V W X Y Z [ \ ] ^ _ ` a b c d e f g h i j k l m n o p q r s t u v w x y z { | } ~ DEL

56 57 58 59 5A 5B 5C 5D 5E 5F 60 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 73 74 75 76 77 78 79 7A 7B 7C 7D 7E 7F

E5 E6 E7 E8 E9 4B E0 5B -DF -81 82 83 84 85 86 87 88 89 91 92 93 94 95 96 97 98 99 A2 A3 A4 A5 A6 A7 A8 A9 C0 6A D0 A1 07

Table 4.2 - ASCII and EBCDIC Character Representation


Table 4.2 may appear to be somewhat confusing at first glance and so a number of points need to be noted about its contents:

(i) The first 32 characters in the ASCII character set are special characters that cannot be generated by pressing a single key on a keyboard. In the ASCII character set they are represented by acronyms (abbreviations), but it must be noted that typing the characters in the acronyms will not generate these special characters. A special key-stroke sequence is required to produce these characters. For example, on an IBM or compatible Personal Computer (PC), holding down the "Ctrl" key and then pressing "A" will generate the ASCII character with a value of 1. The first 32 ASCII characters are sometimes referred to as non-printable characters. However, they do actually generate some symbols on particular computers - for example, on an IBM (or compatible) PC, these characters produce symbols such as smiling faces, card suits and so on. The main purpose of such characters is in data communications and they are also used for special instructions to printers. Table 4.3 lists the values of the first 32 non-printable characters for reference purposes, together with their commonly cited names and abbreviations.

(ii) The ASCII system only uses 7 bits to represent characters with values from 0 to 7F (127) but most computers work with 8 bit units. In order to utilise the high order bit, an extended ASCII character set, using all 8 bits, displays special symbols on Personal Computers (PCs), but unfortunately there is no uniformity in definition. Some older PC software packages take advantage of the spare high-order bit to store additional character information such as bolding, underlining, etc.

(iii) The choice of bit patterns to represent characters and numerics is essentially an arbitrary one. For example, in both the ASCII and the EBCDIC system, the number characters '0' to '9' are not represented by their equivalent binary values. In ASCII, the character '0' is represented by hexadecimal 30, which has a bit pattern of 00110000 and so on. This means that any numbers typed on a computer keyboard, that are intended to enter the microprocessor as numbers, need to be converted from their binary string (ASCII, EBCDIC, etc.) equivalent to their actual numeric value. For example, if we enter the characters "1" then "6" on the keyboard, then we generate the following ASCII string: 0011 0001 0011 0110

However, the binary number equivalent of 16 is actually 00010000 and so the microprocessor has to make the conversion from: 0011 0001 0011 0110 to 00010000.

This is generally carried out by the program executing on the microprocessor itself.
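A minimal C sketch of that conversion (illustrative only, with arbitrary names) accumulates the value one ASCII digit at a time:

#include <stdio.h>

/* Convert a string of ASCII digit characters (e.g. "16", stored internally as
   0011 0001  0011 0110) into the binary number it represents (00010000).      */
unsigned ascii_to_number(const char *s)
{
    unsigned value = 0;
    for (; *s >= '0' && *s <= '9'; s++)
        value = value * 10 + (*s - '0');   /* the character '0' has the ASCII value 30 hex */
    return value;
}

int main(void)
{
    printf("%u\n", ascii_to_number("16"));   /* prints 16 */
    return 0;
}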


HEX VALUE (ASCII)   NAME                        ABBREVIATED NAME   KEY CODE

00                  NULL                        NULL               CTRL @
01                  START OF HEADER             SOH                CTRL A
02                  START OF TEXT               STX                CTRL B
03                  END OF TEXT                 ETX                CTRL C
04                  END OF TRANSMISSION         EOT                CTRL D
05                  ENQUIRY                     ENQ                CTRL E
06                  ACKNOWLEDGE                 ACK                CTRL F
07                  BELL                        BEL                CTRL G
08                  BACK SPACE                  BS                 CTRL H
09                  HORIZONTAL TAB              HT                 CTRL I
0A                  LINE FEED                   LF                 CTRL J
0B                  VERTICAL TAB                VT                 CTRL K
0C                  FORM FEED                   FF                 CTRL L
0D                  CARRIAGE RETURN             CR                 CTRL M
0E                  SHIFT OUT                   SO                 CTRL N
0F                  SHIFT IN                    SI                 CTRL O
10                  DATA LINE (LINK) ESCAPE     DLE                CTRL P
11                  DEVICE CONTROL 1 (XON)      DC1                CTRL Q
12                  DEVICE CONTROL 2            DC2                CTRL R
13                  DEVICE CONTROL 3 (XOFF)     DC3                CTRL S
14                  DEVICE CONTROL 4            DC4                CTRL T
15                  NEGATIVE ACKNOWLEDGE        NAK                CTRL U
16                  SYNCHRONOUS IDLE            SYN                CTRL V
17                  END OF TRANSMIT BLOCK       ETB                CTRL W
18                  CANCEL                      CAN                CTRL X
19                  END OF MEDIUM               EM                 CTRL Y
1A                  SUBSTITUTE                  SUB                CTRL Z
1B                  ESCAPE (ESC)                ESC                CTRL [
1C                  FILE SEPARATOR              FS                 CTRL \
1D                  GROUP SEPARATOR             GS                 CTRL ]
1E                  RECORD SEPARATOR            RS                 CTRL ^
1F                  UNIT SEPARATOR              US                 CTRL _

Table 4.3 - Special Functions of the first 32 ASCII Characters


4.4 Boolean Algebra


There are really only two major attributes that are instilled within the modern computer. One is the ability to "calculate" or to manipulate numbers. The other attribute is the ability to undertake some form of human reasoning. The term "computer" is actually defined as meaning "reckoning machine" and, in a sense, both calculation and reasoning are forms of reckoning. We shall later observe that calculation and reasoning are somewhat interrelated phenomena in the computing domain because computers only ever calculate with the numbers "1" and "0" and only ever reason with the states "True" and "False".

In order to understand how digital circuits are assembled in order to carry out both these functions, one must understand the fundamentals of Boolean Algebra, which is the mathematics of binary numbers and the reasoning (tautology) of systems with only "True" and "False" states. Boolean Algebra was named after the mathematician George Boole (1815-64) and is the simplest means by which we can convert human reasoning and tautology into a mathematical and electronic form for computation. The basic circuits used to provide Boolean logic within computer systems are referred to as "logic gates" and we shall examine these in a little more detail as we progress through this chapter.

Table 4.4 shows the logic gate symbols for the basic Boolean logic functions, together with their equivalent algebraic expressions. The actual standards for the logic symbols vary from country to country, but the ones adopted herein are in widespread use. The logic gate symbols don't necessarily have to describe electronic circuits - they can equally well be symbols for human reasoning or symbols representing mechanical circuits in hydraulic or pneumatic systems. Note that in Boolean Algebra, the symbol "+" signifies a logical "OR" (not addition) and the symbol "." signifies a logical "AND" (not multiplication).

Table 4.4 is little more than a formal description of what many may feel to be perfectly obvious - simple reasoning elements. However, the objective of Boolean Algebra is to formalise a minute portion of the human reasoning process. The first step in doing so is to create a "truth table". This is shown beside each logic gate symbol in Table 4.4. A truth table is a listing of outputs corresponding to every possible input and input combination to a system. In a digital system there are only two possible values for every input - zero (False) or one (True). This clearly means that for a system with "n" inputs, there are 2^n possible input combinations or lines in the truth table. For example, the "inverter" gate of Table 4.4 has only one input and hence there are two lines in the truth table. The other gates each have two inputs and hence there are four lines in each of their truth tables.


LOGIC GATE   FUNCTION                          TRUTH TABLE (X Y : Z)   BOOLEAN EXPRESSION

Inverter     Z is NOT X                        0   : 1                 Z = X'
                                               1   : 0

AND          Z is X AND Y                      0 0 : 0                 Z = X.Y
                                               0 1 : 0
                                               1 0 : 0
                                               1 1 : 1

NAND         Z is NOT (X AND Y)                0 0 : 1                 Z = (X.Y)'
                                               0 1 : 1
                                               1 0 : 1
                                               1 1 : 0

OR           Z is X OR Y (Inclusive OR)        0 0 : 0                 Z = X + Y
                                               0 1 : 1
                                               1 0 : 1
                                               1 1 : 1

NOR          Z is NOT (X OR Y)                 0 0 : 1                 Z = (X + Y)'
                                               0 1 : 0
                                               1 0 : 0
                                               1 1 : 0

XOR          Z is either X OR Y but not both   0 0 : 0                 Z = X ⊕ Y
             (Exclusive OR)                    0 1 : 1
                                               1 0 : 1
                                               1 1 : 0

(In the expressions above, and throughout this chapter, a prime denotes the complement of a term, so X' means NOT X, and ⊕ denotes the Exclusive-OR operator.)

Table 4.4 - Common Boolean Logic Functions and Representation
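The same truth tables can be generated in software using the C bitwise operators, which behave exactly like the gates when the operands are restricted to 0 and 1. The short, illustrative program below prints the two-input tables:

#include <stdio.h>

/* Print the truth table of the basic two-input gates using the C bitwise
   operators; '~ ... & 1' keeps only the least significant bit of the result. */
int main(void)
{
    for (unsigned x = 0; x <= 1; x++) {
        for (unsigned y = 0; y <= 1; y++) {
            unsigned and_  =  x & y;
            unsigned nand_ = ~(x & y) & 1;
            unsigned or_   =  x | y;
            unsigned nor_  = ~(x | y) & 1;
            unsigned xor_  =  x ^ y;
            printf("%u %u | AND=%u NAND=%u OR=%u NOR=%u XOR=%u\n",
                   x, y, and_, nand_, or_, nor_, xor_);
        }
    }
    return 0;
}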


Consider how we can use Boolean logic to replicate human reasoning. For example, if we say that:

A = It is Hot
B = It is Cloudy
C = It is Humid
D = It is Cold
E = It is Wet

Then we can make Boolean statements such as the following:

It is cold and it is cloudy = D.B (read D AND B)

We can also instil our own reasoning into a systematic equation form. For example, we can say:

It is always humid when it is hot and cloudy and wet


This can be converted into:

C = A.B.E

It doesn't really matter whether our reasoning is valid or not. The issue is how to "automate" our reasoning. Following on from the previous example, we can make an electronic reasoning circuit using the gates shown in Table 4.4. The result is shown in Figure 4.7.

(Circuit not reproduced: the Hot, Cloudy and Wet signals are ANDed together to produce the Humid output; the Cold signal is shown but unused.)

Figure 4.7 - Boolean Logic Circuit to Test for Humidity


Figure 4.7 shows how we can make a very trivial reckoning circuit to test for humidity by replicating our own reasoning with a circuit.


However, in order to understand the ramifications of building human logic into systems via Boolean circuits, we need to tackle a somewhat more substantial design exercise. Consider the following problem:

Design Problem 1: An incubation chamber needs to be controlled by a simple digital controller. The complete system is shown in Figure 4.8. The chamber is equipped with a fan (F) and a heating element (H). The temperature of the system is fed back to a digital control system in a binary form. The temperature is represented by a 3-bit binary number, T2T1T0, which represents the incubation temperature on a scale from 000 to 111 (ie: 0 to 7 in decimal). If the temperature is less than 3, then the controller must switch the heating element on and the fan off. If the temperature is greater than 3 then the controller must switch the fan on and the heating element off. If the temperature is equal to 3 then the controller must switch both the fan and the heater element off. Design the control system using simple Boolean logic gates.

(System diagram not reproduced: a digital controller drives the fan (F) motor and heater (H) of the incubation chamber, and a digital temperature probe returns the 3-bit temperature code T2 T1 T0 to the controller.)

Figure 4.8 - Digital Control System for Incubator

Solution to Design Problem 1: The first step in solving most digital design problems is to identify the inputs and outputs of the system. Sometimes this isn't as simple a task as it may seem. In this instance, however, it is clear that the inputs are the three binary signals being fed back from the temperature probe (T2, T1 and T0) and that the outputs are the heater (H) and fan (F) control signals. The next step in solving the problem is to construct a truth table for the problem. We will assume that the state "heater on" is represented by "H = 1", similarly, we assume that "fan on" is represented by "F = 1". The truth table is shown in Table 4.5.


System Inputs (2^3 combinations)     System Outputs

T2  T1  T0                           H   F
 0   0   0                           1   0
 0   0   1                           1   0
 0   1   0                           1   0
 0   1   1                           0   0
 1   0   0                           0   1
 1   0   1                           0   1
 1   1   0                           0   1
 1   1   1                           0   1

Table 4.5 - Truth Table for Digital Incubator Controller

The next step in the design process is to determine the digital logic required to fulfil the logic in the truth table. The simplest technique is to use the so-called "sum-of-products" method. In order to do this, we go through each line of the truth table until we come across a line where the output variable is equal to one. We then write out the product of input variables that causes this to happen and then move down the truth table until we have written down a product for each time the output variable has a value of one. The products are then "ORed" together and the result is called the sum-of-products. In the case of the heater output "H" and the fan output "F", we have the following sum-of-products expressions:
H = T2'.T1'.T0' + T2'.T1'.T0 + T2'.T1.T0'
F = T2.T1'.T0' + T2.T1'.T0 + T2.T1.T0' + T2.T1.T0

The sum-of-products expressions are really nothing more than common sense and define exactly the sort of logic that will fulfil the truth table. Looking at the sum-of-products expression for H, we can say that the heater is on whenever: (T2 and T1 and T0 are all low) OR (T2 and T1 are low and T0 is high) OR (T2 is low and T1 is high and T0 is low). Once we have a sum-of-products expression, we can convert that expression into logic gate symbols so that we have a Boolean logic circuit. This is shown in Figure 4.9 for the heating circuit (H).


(Circuit not reproduced: inverters generate T2', T1' and T0', three 3-input AND gates form the products T2'.T1'.T0', T2'.T1'.T0 and T2'.T1.T0', and a 3-input OR gate combines them to produce H.)

Figure 4.9 - Boolean Logic Circuit for Heater in Incubator Control

A similar circuit can be constructed for the incubator fan. Design this circuit, as an exercise, using the sum-of-products expression, defined above, for F. Combine the two circuits to show the complete control system.
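For readers who would like to check the truth table by machine, the following illustrative C fragment evaluates the two sum-of-products expressions for all eight temperature codes (a prime in the expressions corresponds to the C operator '!'):

#include <stdio.h>

/* Evaluate the sum-of-products expressions for the incubator heater (H) and
   fan (F) directly from the truth table terms; all signals are 0 or 1.       */
int main(void)
{
    for (int t2 = 0; t2 <= 1; t2++)
        for (int t1 = 0; t1 <= 1; t1++)
            for (int t0 = 0; t0 <= 1; t0++) {
                int h = (!t2 & !t1 & !t0) | (!t2 & !t1 & t0) | (!t2 & t1 & !t0);
                int f = (t2 & !t1 & !t0) | (t2 & !t1 & t0) |
                        (t2 &  t1 & !t0) | (t2 &  t1 & t0);
                printf("T=%d%d%d  H=%d  F=%d\n", t2, t1, t0, h, f);
            }
    return 0;
}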

Design problem 1 gives us a good insight into the way in which relatively simple digital controls can be constructed using simple logic building blocks. However, it also illustrates that a large number of components may be required for what is a relatively simple circuit. The sum-of-products technique is the most direct way of designing a Boolean logic circuit, however, in general it does not provide the simplest possible circuit to achieve a given objective. Many different logic circuit combinations may achieve exactly the same truth-table result as the one shown in Table 4.5 but some will use far fewer gates than others. There are a number of laws and postulates in Boolean algebra that can be used to reduce an expression to its simplest form. These are listed in Table 4.6 and they are the basis of Boolean algebra. Using such laws to determine whether one Boolean algebraic expression is equivalent to another is referred to as "tautology". The final arbiter of tautology is the truth table. If two expressions are equivalent, then the truth table of the left hand side must be identical to the truth table of the right hand side for all possible variable combinations. The Boolean laws in Table 4.6 can all be verified by truth table.


Postulates of Boolean Algebra

All variables must have either the value 0 or 1. If the value of a variable is not zero then it must be one and vice-versa. The following rules apply:

OR:    1 + 1 = 1      1 + 0 = 1      0 + 0 = 0
AND:   1.1 = 1        1.0 = 0        0.0 = 0
NOT:   1' = 0         0' = 1

Boolean Laws of Combination

A.B = B.A                  A + B = B + A              (Laws of Commutation)
A + B + C = A + (B + C)    (A.B).C = A.(B.C)          (Laws of Association)
A.(B + C) = A.B + A.C                                 (Laws of Distribution)
A + A = A                  A.A = A                    (Laws of Tautology - Idempotent Rule)
A + 1 = 1                  A.1 = A
A + 0 = A                  A.0 = 0
A + A' = 1                 A.A' = 0
A + A'.B = A + B
(A')' = A                                             (Law of Double Complementation)
(A.B)' = A' + B'           (A + B)' = A'.B'           (DeMorgan's Theorem)

Table 4.6 - Fundamental Principles of Boolean Algebra


The laws of Boolean algebra, as defined in Table 4.6, are normally used in order to make complex expressions simpler. Boolean logic is used for a number of functions including:

Design of digital circuits from logic gates
Design of logic circuits for hydraulics, pneumatics, etc.
Writing conditional expressions in computer programs.

For these applications we always need to establish the simplest form of a Boolean expression before committing ourselves to an implementation phase. This reduces the complexity and cost of circuits and the running time of software.

Design Problem 2: Using the laws and postulates of Boolean Algebra, simplify the following expression:

Z = (A.B)' + (C'.(A + B).D)'

Solution to Design Problem 2:

Z = (A.B)' + (C'.(A + B).D)'
  = ( (A.B).(C'.(A + B).D) )'         (By De Morgan's Theorem)
  = ( A.B.C'.D.(A + B) )'             (By laws of commutation and association)
  = ( A.B.C'.D.A + A.B.C'.D.B )'      (By law of distribution)
  = ( A.A.B.C'.D + A.B.B.C'.D )'      (By law of commutation)
  = ( A.B.C'.D + A.B.C'.D )'          (By law of tautology: A.A = A)
  = ( A.B.C'.D )'                     (By law of tautology: A + A = A)
  = A' + B' + C + D'                  (By De Morgan's Theorem)
Design Problem 2 highlights the difficulty in applying the laws of Boolean algebra to simplify a circuit. The main problem is that there is no predefined way of beginning the simplification process - the first step is arbitrary. Secondly, there are multiple paths that can be taken to achieve the same objective and the approach shown in Design Problem 2 is not systematic. Finally, for complicated expressions, we never really know when we have reached the simplest expression.
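One systematic (if brute-force) check is to compare the truth tables of two expressions by program. The illustrative C fragment below does this for the reading of Design Problem 2 adopted above, testing all sixteen input combinations:

#include <stdio.h>

/* Brute-force tautology check: compare the original and the simplified
   expressions from Design Problem 2 for every combination of A, B, C, D.
   The complement of a single-bit value x is written !x.                  */
int main(void)
{
    int all_equal = 1;
    for (int a = 0; a <= 1; a++)
     for (int b = 0; b <= 1; b++)
      for (int c = 0; c <= 1; c++)
       for (int d = 0; d <= 1; d++) {
           int original   = !(a & b) | !(!c & (a | b) & d);
           int simplified = !a | !b | c | !d;
           if (original != simplified)
               all_equal = 0;
       }
    printf(all_equal ? "equivalent\n" : "NOT equivalent\n");
    return 0;
}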


There are a number of techniques that can systematically simplify a Boolean expression. The most common is called "Karnaugh Mapping" and it is this technique which we shall explore herein. A Karnaugh Map is really nothing more than a strategically drawn truth-table that plots the output variable in terms of the input variables. Table 4.7 is the conventional truth table for the original expression in Design Problem 2.

A  B  C  D  |  Z
0  0  0  0  |  1
0  0  0  1  |  1
0  0  1  0  |  1
0  0  1  1  |  1
0  1  0  0  |  1
0  1  0  1  |  1
0  1  1  0  |  1
0  1  1  1  |  1
1  0  0  0  |  1
1  0  0  1  |  1
1  0  1  0  |  1
1  0  1  1  |  1
1  1  0  0  |  1
1  1  0  1  |  0
1  1  1  0  |  1
1  1  1  1  |  1

Table 4.7 - Truth Table for Original Expression in Design Problem 2

Figure 4.10 shows the Karnaugh Map, equivalent to Table 4.7, for the output variable (Z) in the expression. The Karnaugh map is just a truth table plotted in a different way. However, there are two points to note about the Karnaugh Map:

The count sequence on the map does not follow a normal binary count pattern. The reason for this is to ensure that only one variable changes at a time; this ordering is referred to as a "Gray Code" count sequence (a short software sketch for generating such a sequence follows below).

The map needs to be considered as a sheet of paper which can be folded around on itself. In other words, there really aren't any ends on the map. It can be rolled either vertically or horizontally.
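For reference, an n-bit Gray code sequence can be generated very simply in software; the illustrative C sketch below uses the well-known rule that the i-th code word is i XOR (i shifted right by one bit):

#include <stdio.h>

/* Generate an n-bit Gray code sequence: the i-th code is i ^ (i >> 1).
   For n = 2 this prints 00 01 11 10, the ordering used along each axis
   of a 4 x 4 Karnaugh map, so only one variable changes per step.       */
void gray_sequence(int n)
{
    for (unsigned i = 0; i < (1u << n); i++) {
        unsigned g = i ^ (i >> 1);
        for (int bit = n - 1; bit >= 0; bit--)
            putchar((g >> bit) & 1 ? '1' : '0');
        putchar(' ');
    }
    putchar('\n');
}

int main(void)
{
    gray_sequence(2);   /* 00 01 11 10 */
    return 0;
}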


(Karnaugh map not reproduced: a 4 x 4 map with AB along one axis and CD along the other, both Gray-coded 00, 01, 11, 10; every cell contains a 1 except the cell A=1, B=1, C=0, D=1, which contains a 0.)

Figure 4.10 - Karnaugh Map for Original Expression for "Z" in Design Problem 2

Once we have constructed a Karnaugh Map for an expression, the objective is to look for regions where the output variable is independent of the input variables. How do we do this? Firstly, by circling regions where the output has a value of 1 - however, we can only do this in a certain way:

(i) In a 4 x 4 map as shown in Figure 4.10, we begin by looking for a region in which there are 16 "ones". If we find such a region then it means that the output is equal to one regardless of the inputs and hence is independent of the inputs. If this is the case then the process is concluded, otherwise we move on to (ii).

(ii) We then move on to regions where there are 8 "ones" and circle all those. When there are no regions of 8 "ones" we look for regions of 4 "ones", then 2 "ones" and then 1 "one". It doesn't matter if some of the circled regions overlap.

(iii) Ultimately, all "ones" on the map have to be circled.

(iv) If a map has only regions of 1 "one", then there is no possibility of simplifying the expression.

(v) When all the regions have been circled, we look for regions of independence. In other words, we ask ourselves for each circled region, "do any of the inputs change in the circled region where the output remains constant?" If the answer is yes, then the output is independent of these inputs. If inputs do not change within a circled region, then the output is dependent upon those inputs.


(vi) The expression for the output variable is the sum of expressions for all the circled regions. The Karnaugh Mapping technique can only be fully understood after some practice, so let us begin by simplifying the expression in Design Problem 2. As a first step, we identify regions as shown in Figure 4.11. Note how we only put boundaries around regions in either horizontal or vertical directions.

(Karnaugh map not reproduced: the same 4 x 4 map as Figure 4.10 with four overlapping regions of "ones" circled - Region 1 (shaded), Region 2, Region 3 and Region 4.)

Figure 4.11 - Karnaugh Map for "Z" in Design Problem 2 with Regions Circled

The simplest Boolean expression can be determined by creating the largest possible regions (16, 8, 4, 2 and 1 consecutively) and then deducing how the output is affected in those regions. Let us begin with region 1 in Figure 4.11 (the shaded region). Note that the value of "Z" in this region is always one, despite the fact that input variables A, B and D change within the region. This means that for this region, the value of Z only depends upon C being equal to one. Therefore the "simplest" expression for Z is:

Z = C + ? + ? + ?

The second term in the expression for Z can be obtained from region 2 in Figure 4.11. In this region, the value of Z remains one, despite the fact that B, C and D inputs vary between one and zero. Hence in this region, Z is only dependent upon A being equal to zero. Our simplest expression for Z now becomes:

Z = C + A' + ? + ?


The third term in the expression for Z can be obtained from region 3 in Figure 4.11, which is the "outside" region (remember that the Karnaugh Map can be rolled around so that the ends meet in both vertical and horizontal directions). In this region, A, C and D all vary between zero and one and only the variable B remains constant with a value of zero. Therefore, Z depends upon B having a value of zero. Our simplest expression for Z now becomes:

Z = C + A' + B' + ?
The final term in our expression for Z is obtained from region 4 in Figure 4.11. In this region, A, B and C all vary between one and zero and have no effect upon Z. However, D must remain constant with a value of zero. Hence, our simplest expression for Z is:

Z = C + A' + B' + D'
as determined earlier by the unsystematic process of applying algebraic laws. Provided that we always select the largest possible groups, we will always arrive at the simplest possible expressions. Karnaugh Mapping is always difficult to come to terms with at first, so following are a number of simple design problems to assist your understanding.

Design Problem 3: Using the Karnaugh Maps shown in Figure 4.12, determine the simplest expressions for Z in each case.

(Karnaugh maps not reproduced: four sample maps, (i) and (ii) in the variables A, B, C, D and (iii) and (iv) in the variables A, B, C, whose simplified expressions are derived in the solution which follows.)

Figure 4.12 - Sample Karnaugh Map Problems


Solution to Design Problem 3: The largest possible regions (containing binary ones) for each of the Karnaugh maps are identified and marked as in Figure 4.13. Regions can be octets, quads, pairs or singles.

(Karnaugh maps not reproduced: the four maps of Figure 4.12 with the largest possible regions of "ones" marked out.)

Figure 4.13 - Karnaugh Maps Marked out to Maximise Regions of "Ones"

We begin by considering Karnaugh Map (i) in Figure 4.13. Note how we have been able to group the four corners together into a quad group. This is because we can "roll" the edges of a Karnaugh map so that they join one another in either the vertical or horizontal directions. Rolling is permissible provided that we adhere to the basic rule that no more than one variable can change from any one position in the map to any adjacent position. This binary number sequence, in which only one bit changes at a time, is referred to as "Gray code". From Figure 4.13 (i), we find that there are only two regions to be considered. The simplest expression for Z in (i) is:

Z = B'.D' + A.B.C'.D
In Figure 4.13 (ii), we have three "quad" regions. Two of the quads are in a line, but one arises from rolling the horizontal edges of the map together. The simplest expression for Z in (ii) is:

Z = C'.D + A'.B' + B'.D


The Karnaugh Map of Figure 4.13 (iii) has only three variables and hence eight possible combinations. Karnaugh Maps can also be constructed for expressions with only two input variables - these are 2 x 2 Maps. In Figure 4.13 (iii), the largest possible region is a quad, obtained by rolling the vertical (left and right) edges of the Map. Note also that we have overlapping regions in the Map. If we had not joined the lone "one" with another one to form a pair, then the expression we derived would not be in its simplest form. The simplest expression for Z in (iii) is:
Z = B' + C.A'

Figure 4.13 (iv) is an unusual Map and has deliberately been included because it is one instance where the Karnaugh Mapping technique doesn't provide the simplest Boolean expression at first sight. From the Map of Figure 4.13 (iv) it is evident that no octets, quads or pairs can be formed and that:

Z = A'.B'.C + A'.B.C' + A.B'.C' + A.B.C

However, it turns out that this expression is identical to

Z = A ⊕ B ⊕ C

which is a much simpler expression, formed from "Exclusive-OR" operators. In order to find Exclusive-OR operators in a Karnaugh Map one needs to look for specific patterns such as the one in Figure 4.13 (iv). As an exercise, construct truth tables and Karnaugh Maps for 2 input and 4 input Exclusive-OR systems and note the patterns that arise.
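The equivalence can also be confirmed mechanically; the following illustrative C fragment compares the four product terms listed above with the three-input Exclusive-OR for all eight input combinations:

#include <stdio.h>

/* Confirm that the four-term sum-of-products from Figure 4.13 (iv) is the
   same function as the three-input Exclusive-OR, A XOR B XOR C.            */
int main(void)
{
    for (int a = 0; a <= 1; a++)
     for (int b = 0; b <= 1; b++)
      for (int c = 0; c <= 1; c++) {
          int sop  = (!a & !b & c) | (!a & b & !c) | (a & !b & !c) | (a & b & c);
          int xor3 = a ^ b ^ c;
          printf("%d%d%d  sum-of-products=%d  XOR=%d\n", a, b, c, sop, xor3);
      }
    return 0;
}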

The examples in Design Problem 3 really don't highlight the radical simplification that can occur in expressions as a result of Karnaugh Maps. In order to observe this phenomenon, we really need an example that highlights "before" and "after" cases.

Design Problem 4: Using the Karnaugh Mapping technique, redesign the incubation controller developed in Design Problem 1.

Solution to Design Problem 4: The truth table derived for the incubator design problem was shown in Table 4.5. The Karnaugh Maps for the heater, "H", and the cooling fan, "F", are derived from that truth table and are shown in Figures 4.14 (i) and (ii) respectively.


(Karnaugh maps not reproduced: two maps with T2T1 along one axis and T0 along the other; map (i) for H contains ones for temperature codes 0, 1 and 2 only, and map (ii) for F contains ones wherever T2 = 1.)

Figure 4.14 - Karnaugh Maps for Incubator Control (i) Heater; (ii) Fan

From these Karnaugh Maps, we can determine new expressions for H and F as follows:

H = T2'.T1' + T2'.T0' = T2'.(T1' + T0')
F = T2
These are clearly much simpler than the previously derived expressions and lead to the new (simplified) controller circuit of Figure 4.15, which performs precisely the same function as the original, but with far fewer components.
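As a quick software check (illustrative only), the simplified expressions can be compared against the original sum-of-products design over all eight temperature codes:

#include <stdio.h>

/* Check that the Karnaugh-map-simplified expressions give the same outputs
   as the original sum-of-products design for every temperature code 0..7.   */
int main(void)
{
    for (int t = 0; t <= 7; t++) {
        int t2 = (t >> 2) & 1, t1 = (t >> 1) & 1, t0 = t & 1;

        /* original sum-of-products forms */
        int h_orig = (!t2 & !t1 & !t0) | (!t2 & !t1 & t0) | (!t2 & t1 & !t0);
        int f_orig = t2 & ((!t1 & !t0) | (!t1 & t0) | (t1 & !t0) | (t1 & t0));

        /* simplified forms: H = T2'.(T1' + T0'), F = T2 */
        int h_simple = !t2 & (!t1 | !t0);
        int f_simple = t2;

        printf("T=%d  H: %d %d   F: %d %d\n", t, h_orig, h_simple, f_orig, f_simple);
    }
    return 0;
}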


(Circuit not reproduced: the simplified incubator controller, in which F is taken directly from T2 and H is formed from the complemented T2, T1 and T0 signals using only a handful of gates.)

Figure 4.15 - Simplified Incubator Control System


4.5 Digital Logic Circuits


4.5.1 Introduction

In section 4.4 we examined a range of different digital logic circuits that could be used to exert some form of human reasoning (control) over a system. In that section, we dealt only with the symbol associated with each digital logic gate and assumed that the actual device could be fabricated from some form of electronic circuit. We now need to gain some understanding of how digital logic devices are actually fabricated so that we can understand their applications and limitations. As a starting point, it should be noted that all of the digital gates described in 4.4 are available in an integrated circuit (IC) fabricated within a semiconductor chip. Normally, digital logic gates are implemented in a low-density semiconductor fabrication referred to as SSI, which is an acronym for Small-Scale-Integration. Even with SSI, one chip generally contains more than one logic gate. For example, Figure 4.16 schematically shows the contents of a "7400 quad 2-Input NAND gate" device.

(Package diagram not reproduced: a 14-pin Dual-In-Line package with an alignment notch at one end and an indicator mark for pin 1, with VCC on pin 14 and GND on pin 7.)

Figure 4.16 - 7400 Quad 2-Input NAND-Gate Dual-In-Line Package Chip


A number of features need to be noted about commercially available chips such as the one shown in Figure 4.16:

(i) The semiconductor material upon which the digital circuits are fabricated is less than a few square millimetres in area. The user generally doesn't see the semiconductor material on low cost devices such as the one in Figure 4.16.

(ii) The semiconductor is encased in plastic or ceramic material that is referred to as the "package". This provides a practical casing that simplifies manual handling of the device and allows a larger area for external connections to pins on the outside of the package.

(iii) Extremely fine wires connect various points in the semiconductor to the physical conducting pins protruding from the package.

(iv) The number of pins and their functionality is referred to as the "pin-out" of the package.

(v) The pin numbers are generally not marked onto the package of the chip. Most packages therefore have identifying features (notches, circular indentations, etc.) that enable users to determine the pin numbering and orientation of the device.

(vi) The functionality of each pin in a particular package can only be determined by reference to a data sheet from the manufacturer and should never be "guessed" by looking at patterns for common chips.

(vii) Each chip has two power supply pins (normally referred to as VCC and GND). Unless a power supply is connected to these pins then the chip will not generate the required digital logic.

From the above points it is evident that the package is generally much larger than the semiconductor material itself. In the 1960s, when this technology originated, few would have imagined the complexity of the circuits developed today and the major objective was to make packages to which users could easily connect other devices by manual techniques. However, one of the modern problems of developing complex circuits using packages such as the one shown in Figure 4.16 is that the size of the circuit largely represents packaging and not functional devices. In automated production environments, manual handling of devices can be eliminated and the bulk of the packaging removed. This makes circuits far more compact. The most common automated technique for using integrated circuits (ICs) without packaging is called "surface-mount" technology.


A surface-mount machine is similar to a 3-axis (XYZ) CNC machine but its purpose is to pick and place components onto circuit boards positioned in the machine bed. The specially designed ICs are normally purchased in bulk in a "bubble-pack" roll. The surface mount machine removes each IC from its bubble (by suction) and places it onto the circuit board. A light adhesive holds the IC in place temporarily and the entire board is then heated in an oven in order to create conducting joints at appropriate locations between the board and the IC. A range of ICs and other components (capacitors, resistors, etc.) are available in bubble-pack rolls for integration onto surface mount boards and the technology has been in widespread use since the early 1980s. It is one of the most efficient techniques for mass production of both analog and digital circuits.

Most small-scale designers and prototype builders will have little use for surface-mount technology, since the machinery involved is quite substantial, even when purchased in a manual "pick and place" form. The two simplest approaches for building circuit boards with digital circuits involve:

Hot soldering (the traditional method of connecting electrical and electronic circuit elements) onto printed circuit boards
Wire-Wrapping or cold soldering (a technique which involves wrapping wires around the pins of various IC sockets into which are plugged the ICs themselves).

Neither of these approaches is suitable for prototyping because it is difficult to "undo" mistakes or rectify design faults. A common short-term approach is to build digital circuits on prototyping boards that are specially designed so that wires, ICs, resistors, etc. can all be inserted into spring-loaded, conducting tracks so that temporary connections can be made for test purposes. These boards are sometimes referred to as "bread boards". As long as one operates digital circuits at moderate speeds, connects ICs together with short lengths of wire and doesn't load the output of one device with too many inputs from other devices, most physical circuits will function precisely as predicted in theory. However, the problems that arise from breaking such rules can only be understood when one understands the circuits used to fabricate digital circuit chips.

4.5.2 Transistor to Transistor Logic (TTL)

Transistor to Transistor Logic was one of the earliest and most widespread forms of digital logic and is still prominent today because most modern digital circuits still comply with the voltage and current standards established for that logic. The actual circuit for a TTL inverter gate is shown in Figure 4.17.


(Circuit not reproduced: a TTL inverter with input transistor Q1 and diode D1, phase-splitter Q2, and a totem-pole output stage formed by Q3, diode D2, a 130 ohm resistor and Q4 (with 4 k, 1.6 k and 1 k base resistors to Vcc and ground), driving the load at Vout.)

Figure 4.17 - TTL Inverter Gate

The actual operation of this circuit was discussed briefly in Chapter 3 and the input/output voltage levels associated with TTL type circuits (originally shown in Figure 1.2 (b) ) are reproduced in Figure 4.18.

(Diagram not reproduced: for circuit outputs, True/1/ON spans 2.4 v to 5.0 v and False/0/OFF spans 0 v to 0.4 v; for circuit inputs, True/1/ON spans 2.0 v to 5.0 v and False/0/OFF spans 0 v to 0.8 v, the differences providing an error margin.)

Figure 4.18 - Input and Output Voltage Levels Associated with TTL


The acceptable input and output voltage levels for TTL have been designed on the assumption that no logic gate is ideal and that its deviation from the ideal is restricted by good design. A primary consideration in gate design is the loading. Only a limited number of gates can be connected to the output of a TTL gate before its performance suffers. When TTL gates output a logical "High" they act as a source of current and when they output a logical "Low" they act as a sink for current. A gate's ability to source or sink current determines the maximum number of gates which can be connected to its output - that is, its "fan-out". If too much current is drawn from the output when a gate is in the high state, the current will eventually drag the gate down to a logical low, which is clearly unacceptable. The typical maximum output current from a TTL gate is in the order of a few milli-Amps and permissible fan-outs are normally in the order of 10. Fan-out not only affects output voltage levels but also gate performance. Figure 4.19 is a time-scale enlarged diagram showing the output of an inverter gate (in response to a changing input) when the output is loaded with one gate and then with ten gates.

(Waveform diagram not reproduced: the output of a TTL inverter responding to a changing input, plotted over roughly 80 nS, once with a fan-out of 1 and again with a fan-out of 10; the transitions are visibly slower with the heavier loading.)

Figure 4.19 - Effect of Fan-Out on TTL Gate Performance


Notice in Figure 4.19 how the performance of a TTL gate suffers as a result of extra loading. Switching times are increased. In the case of most TTL gates, such as the inverter shown in Figure 4.17, the most notable effect of loading is that the transition from low to high is affected. This is due to the fact that the totem-pole transistor Q4 is heavily saturated when it is sinking a large load current. The time taken to move this transistor from saturation back to cut-off is affected by the level of transistor saturation. When only one gate is connected to the output of Q4, the transistor is only just saturated and can recover more quickly. Most semiconductor data books do not show detailed diagrams of gate performance as in Figure 4.19. Rather, a simplified approach is taken towards displaying the time performance of logic gates. This is shown in Figure 4.20 for the inverter gate.

(Timing diagram not reproduced: the inverter input and output voltages plotted against time, with the propagation delays t(HL) and t(LH) measured between the points at which each waveform crosses a threshold voltage.)

Figure 4.20 - Typical Data-Book Timing Diagrams Illustrating Propagation Delays from Gate Input to Gate Output

In Figure 4.20, we see a time-scale enlarged diagram approximating the behaviour of gate inputs and outputs in the case of an inverter. The transition from high to low (HL) or from low to high (LH) is defined to occur when the input or output voltage reaches some predefined threshold level (typically around 1.5 volts in the case of TTL).


The totem-pole output stage of the inverter gate shown in Figure 4.17 has what is referred to as an "active" pull-up section composed of Q3, the 130 ohm resistor and a diode. A modified version of TTL removes this active pull-up stage and is referred to as open-collector TTL. The pull-up resistor can be supplied externally or can even be the "load" (such as a light-emitting diode, etc.) which is activated whenever the normal output (Vout) is low. An open-collector version of the inverter gate is shown in Figure 4.21 (a). Figure 4.21 (b) shows how a number of these gates can be interconnected to create a wired "AND" function, thus sparing one additional gate.

(Circuit diagrams not reproduced: (a) shows the TTL inverter of Figure 4.17 with the active pull-up stage removed and an external pull-up resistor connected from the output to Vcc; (b) shows several open-collector gates with their outputs wired together to a single pull-up resistor, producing the wired "AND" function Z = A.(B+C).(D.E).)

Figure 4.21 - (a) Open-Collector TTL Inverter (b) Combining Open-Collector Gates to Create an "AND" Function


TTL gates are normally available in IC packages that contain several of the same devices. TTL based ICs cover all the common Boolean logic functions including:

NOT (Invert)
AND
NAND
OR
NOR
XOR.

However, it is interesting to note that any Boolean combinational logic can be realised using only NAND gates because all the other gates can be created from NAND gates. The reason for the other functions is really to minimise the number of chips required to create a circuit - that is, minimise the "chip-count". Minimising the chip-count in a circuit is much more important than minimising the number of gates because extra chips add to the size of a board, the complexity of the wiring and the energy requirements of the board. When we examined Karnaugh Mapping, our objective was to minimise Boolean logic expressions. However, we have to follow this up with an analysis of how many gates are available on each chip and how many chips we need to make a given logic. Another point that needs to be noted with TTL gates is that although we have only looked at 2-input gates, functions such as NAND, NOR, etc. are normally available with a range of inputs in order to simplify circuits. For example, it is possible to purchase 2, 3, 4 and 8 input NAND gates so that we can adjust designs to minimise the logic circuitry.
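The universality of the NAND gate is easy to demonstrate in a few lines of illustrative C, where a two-input NAND function is used to build NOT, AND and OR:

#include <stdio.h>

/* Build the other basic functions from two-input NAND alone, as noted above.
   Inputs and outputs are single bits (0 or 1).                               */
static int nand(int a, int b) { return !(a & b); }

static int not_(int a)        { return nand(a, a); }                    /* NAND(A, A)          */
static int and_(int a, int b) { return nand(nand(a, b), nand(a, b)); }  /* NOT of NAND(A, B)   */
static int or_(int a, int b)  { return nand(nand(a, a), nand(b, b)); }  /* NAND of A' and B'   */

int main(void)
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d %d | NOT a=%d AND=%d OR=%d\n",
                   a, b, not_(a), and_(a, b), or_(a, b));
    return 0;
}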

4.5.3 Schottky TTL

Digital circuits have the same trade-off problems as most other modern devices. One of the most prevalent is the speed versus power consumption issue. High-speed logic generally uses more power than low-speed logic. Standard TTL is relatively fast compared to circuits built from MOSFETs but it dissipates significantly more energy. Energy consumption is an important issue because many digital logic devices are designed for battery operation and so energy usage has to be minimised in order to provide acceptable battery life levels. Standard TTL can be modified in a number of ways in order to vary the speed versus power consumption trade-off. Increasing resistance values in TTL gates decreases speed but also reduces power consumption. Decreasing resistance values increases both speed and power consumption. Both High-Speed TTL and Low Power TTL have been implemented by semiconductor companies.


Schottky TTL is another interesting variation on standard TTL. The term "Schottky" refers to a special type of diode which is also known as a barrier diode or hot-carrier diode. The Schottky diode can be used in conjunction with a Bipolar Junction Transistor (BJT) to prevent that transistor from completely saturating. This is achieved by connecting the anode of a Schottky diode to the base of a BJT and the cathode to the collector of the BJT. In Section 4.5.2, we noted that one cause of propagation delays in gates (ie: performance degradation) is the level of saturation in transistors. If transistors can be kept from complete saturation then the performance of a logic gate can be increased and hence the development of Schottky TTL. In Schottky TTL, the diodes are actually fabricated as part of the transistors, rather than as separate elements. Logic circuits developed using Schottky TTL are much faster (typically three to five times faster) than those using normal TTL. This translates into another advantage in that it is possible to fabricate low power consumption gates with Schottky TTL (using higher resistance values) that still perform as fast as standard TTL.

4.5.4 Emitter Coupled Logic (ECL)

So far we have looked at a number of different gates that have all been based upon the Bipolar Junction Transistor (BJT). As a general rule, digital circuits based upon BJTs are significantly faster than those based upon Field Effect Transistors (MOSFETs, CMOS, etc.). However, we also noted that standard TTL can be varied to minimise power consumption or maximise speed. This can be achieved by variation of resistance values in the gate circuits. Performance can also be improved through the addition of Schottky diodes that prevent critical transistors from heavily saturating. All of these techniques have been used to produce commercially available integrated circuits. For some time-critical applications, none of the simple variations on TTL, cited above, can produce a sufficiently high switching speed. In these instances it is necessary to use a special type of high performance logic that is still based upon BJTs but uses a more complex circuitry. This high performance logic is referred to as Emitter Coupled Logic or ECL. The core of all ECL gates, as the name suggests, is a pair of transistors whose emitters are joined (coupled) together. The speed of ECL digital circuits is achieved through the use of high speed transistors and the use of a coupled-pair where transistors mutually prevent one another from saturating heavily.


The problem with ECL is that it is much more difficult to work with than other forms of logic. Primarily, this is because the circuits tend to be operated at very high switching speeds, thereby emphasising parasitic effects in wires and so on. Most circuits designed using ECL devices have to be built using the so-called "ground-plane" construction technique. In this form of construction, one side of the circuit board is covered with conducting material (normally copper sheet) and this sheet is the ground point for the circuit. The objective of this technique is to avoid using wires to connect devices to ground. The inductance associated with ground "wires" is sufficient (at high switching speeds) to create an additional circuit between two wired ground points and this is referred to as an "earth-loop". This leads to unwanted noise in ECL circuits. Ground wires are not the only wires that cause problems in ECL circuit design. All wires need to be kept as short as possible within a circuit board. If ECL devices need to be interconnected across two circuit boards then a transmission line (such as a twisted-pair line or coaxial cable) link needs to be implemented thereby increasing the cost and complexity of the system. As a result of the practical difficulties of working with ECL it is normally reserved for applications that cannot be served by the more traditional logic circuits.

4.5.5 CMOS Logic

All the BJT based circuits that we have looked at have been primarily designed for high-speed operation. However, there are a number of problems with BJT based circuits. Firstly, BJTs are physically much larger than Field Effect Transistor based devices such as MOSFETs. This means that we cannot create high-density circuits using BJT technology. The second problem is that the gate circuits built from BJT devices are heavy consumers of power when compared to MOSFET based devices. This means that they are less desirable for battery operated systems, where power consumption can be a serious issue. The more common, modern technology for general-purpose applications that do not require high-speed switching is therefore based upon MOSFETs, and in particular, circuits containing one p-channel and one n-channel pair. This is referred to as Complementary MOSFET logic or CMOS. This form of digital logic is extensively used in memory circuits, microprocessors and covers the spectrum of devices commonly found in Small, Medium and Large Scale Integrated (SSI, MSI and LSI) circuits. The circuit for an inverter gate is shown in simplified form in Figure 4.22. The diagram omits a number of diodes which are integrated across each of the MOSFET devices for protection purposes.



Figure 4.22 - CMOS Inverter Including "p" and "n" Channel Devices

The operation of CMOS logic is conceptually very simple to understand. A high input to the CMOS pair turns one MOSFET on and the other off. A low input to the CMOS pair causes the reverse switching to occur. In the case of the inverter, a high input causes the n-channel device to short-circuit the output to ground. A low input causes the p-channel device to short-circuit the output to the supply rail. The result is an output voltage which is the Boolean NOT of the input. As with most other forms of logic, CMOS provides a broad spectrum of gates, including AND, OR, etc. which can provide the same functionality as TTL. In many cases, voltage levels are kept compatible with TTL. There are two distinct disadvantages with CMOS devices. The first, as we have already noted, is that they are slower than TTL type devices. The second is that CMOS circuits are quite susceptible to damage from voltage spikes that can break down the insulating oxide layer in the MOSFETs. Despite the diode protection that is built into CMOS circuits, even static charges from humans handling CMOS chips can be sufficient to cause irreparable damage. Static charges, although very low in energy, are still associated with high voltages capable of destroying the delicate oxide layer in the MOSFETs. CMOS devices therefore need to be handled with care, particularly where synthetic floor-coverings can be responsible for the build-up of static. Moreover, when using CMOS devices in circuits, all inputs must be terminated in a manner that allows for a current discharge path to the circuit ground. Despite the disadvantages associated with CMOS, devices formed from this integrated-circuit technology are amongst the most prolific digital circuits currently available.


4.6 Digital Circuits for Arithmetic Functions


Section 4.5 provided an important break from the primary objective of this chapter, which has been written to show how digital circuits can be designed in order to carry out the two forms of human reckoning that we wish to instil into computers, namely:

- Simple human reasoning (Boolean logic)
- Arithmetic manipulation.

The reason for the departure of Section 4.5 is that one needs to understand the different forms of technology available to produce the various digital circuits that were described earlier (in design problems) and those which are to follow. In Section 4.4, we examined some simple circuits that could be used to provide some degree of human reasoning in low level control applications. In this section, we return to the design of digital circuits in order to pursue the development of systems that can be used to manipulate numbers. We already know that the sorts of numbers with which humans must work can be represented in a binary form as strings of digits. At the very least, we would expect computers to be able to carry out functions such as the addition and subtraction of numbers that have been converted into a binary form. To begin with, we can examine the simplest of these circuits - that is, one that can add two binary digits. In order to design such a circuit, the starting point, as with all other digital designs, is a truth table. A truth table is shown in Table 4.8 for two binary digits, A and B, that are added to produce a sum, S.

A    B    S
0    0    0
0    1    1
1    0    1
1    1    0

Table 4.8 - Truth Table for Simple Addition Circuit


The truth table shown in Table 4.8 doesn't really represent a "full" adder circuit. When we add the binary numbers "1" and "1" we should end up with binary "10" - or more specifically, zero carry one. However, for a simple addition circuit, there is no carry. This is referred to as a "half-adder circuit". We could create a Karnaugh Map for Table 4.8 or a sum-of-products expression in order to generate the circuit. However, it is obvious from looking at the table that S is the exclusive-OR of A and B. That is:

S = A ⊕ B
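The half-adder behaviour can be checked with a few lines of Python (a sketch for illustration only; the text's simple addition circuit produces only the sum, and the carry bit is shown here purely for comparison).

# Half-adder: the sum is the exclusive-OR of the two input bits.
def half_adder(a, b):
    s = a ^ b          # sum bit
    carry = a & b      # the carry that a "full" circuit would need to propagate
    return s, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))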


The half-adder circuit is obviously not very useful for any sensible addition of binary digits and so we need to design a full-adder circuit. In a full-adder circuit, such as the one shown in Figure 4.23, we have two binary inputs and one carry input that are added to produce both a sum and a carry output. The truth table is shown in Table 4.9.


Figure 4.23 - Block Diagram for Full-Adder Circuit

A    B    Cin    S    Cout
0    0     0     0     0
0    0     1     1     0
0    1     0     1     0
0    1     1     0     1
1    0     0     1     0
1    0     1     0     1
1    1     0     0     1
1    1     1     1     1

Table 4.9 - Truth Table for Full-Adder

In order to derive the simplest expressions for the outputs in the full-adder circuit, we really need to go directly to the Karnaugh Mapping technique. The Karnaugh Maps for the outputs S and Cout are shown in Figure 4.24.


Karnaugh map for the sum output, S:

            AB
  Cin     00   01   11   10
   0       0    1    0    1
   1       1    0    1    0

Karnaugh map for the carry output, Cout:

            AB
  Cin     00   01   11   10
   0       0    0    1    0
   1       0    1    1    1

Figure 4.24 - Karnaugh Maps for Full-Adder Circuit

The Karnaugh Map for the output sum, S, cannot be simplified by normal techniques since there are no groupings. However, as discussed in Design Problem 3, earlier in this chapter, the pattern in the map corresponds to a known pattern for the exclusive-OR gate. The expression for S is:

S = A ⊕ B ⊕ Cin
The Karnaugh Map in Figure 4.24 for the output carry variable has three paired regions which lead to the following simplified expression:

Cout = A·B + A·Cin + B·Cin
The circuit diagram for the full-adder circuit can be derived directly from the above expressions. A number of full-adder circuits can be cascaded so that two digital numbers composed of multiple bits can be added together. For example, say that we wished to design a full-adder circuit that could add two 8-bit numbers as follows:

          A7 A6 A5 A4 A3 A2 A1 A0
   +      B7 B6 B5 B4 B3 B2 B1 B0
   ------------------------------
Cout7     S7 S6 S5 S4 S3 S2 S1 S0

The 8-bit full-adder circuit is shown schematically in Figure 4.25. Note that if the high-order output carry (ie: the output from the eighth adder, Cout7) is high, then an overflow error has occurred because the result is nine bits long.



Figure 4.25 - A Circuit to Add two 8-Bit Numbers "A" and "B"
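The cascading idea of Figure 4.25 can also be modelled behaviourally. The Python sketch below (an illustration only; the bit ordering and function names are assumptions) applies the full-adder equations S = A ⊕ B ⊕ Cin and Cout = A·B + A·Cin + B·Cin to each bit position in turn and reports overflow from the final carry.

# Full-adder equations derived from the Karnaugh maps of Figure 4.24.
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_add_8bit(a_bits, b_bits):
    # a_bits and b_bits are lists of 8 bits, index 0 being the least significant bit.
    carry = 0
    s_bits = []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        s_bits.append(s)
    return s_bits, carry          # a final carry of 1 indicates overflow

# Example: add 0b11001010 and 0b01010101.
a = [(0b11001010 >> i) & 1 for i in range(8)]
b = [(0b01010101 >> i) & 1 for i in range(8)]
print(ripple_add_8bit(a, b))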


As an exercise, you should attempt to design a circuit that will subtract one 8-bit number, B (the subtrahend), from another 8-bit number, A (the minuend), assuming that A is greater than or equal to B:

          A7 A6 A5 A4 A3 A2 A1 A0      (Minuend)
   -      B7 B6 B5 B4 B3 B2 B1 B0      (Subtrahend)
   ------------------------------
          D7 D6 D5 D4 D3 D2 D1 D0

The problem with designing subtracter circuits is that when the subtrahend is larger than the minuend, we end up with a negative number. We clearly need methods of representing negative numbers. If we can find such techniques, then we don't really need special subtracter circuits because we can simply use addition circuits and add a negative number instead of a positive number. There are three techniques by which we can perform binary subtraction. These are all dependent upon the way in which we represent negative numbers and include:

(i) One's Complement
(ii) Two's Complement
(iii) Signed Numbers.

These methods are discussed below:

(i) One's Complement Arithmetic


The one's complement of an n-bit number is obtained by taking the Boolean NOT of every bit - that is, inverting each bit. In order to subtract an n-bit number, B, from an n-bit number, A, using one's complement arithmetic, the following procedure is followed:

- Take the one's complement of B
- Add the one's complement of B to the number A
- If the overflow bit is one, then the result is positive and the overflow bit is added to the result (this is referred to as an "end around carry")
- If the overflow bit is zero, then the result is negative and its magnitude is obtained by taking the one's complement of the result (excluding the carry bit).


As an example, consider the following subtraction:

    11100011
  - 00100010
  ----------
    11000001

Using one's complement arithmetic:

    11100011
  + 11011101      (the one's complement of 00100010)
  ------------
  1 11000000      (the overflow bit is 1, so the result is positive)
  +        1      (end around carry)
  ------------
    11000001

In this instance the results are verified by a simple examination of the original expression.
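The same procedure can be expressed as a short Python sketch (illustrative only; an 8-bit word length is assumed).

# One's complement subtraction A - B for n-bit words.
def ones_complement_subtract(a, b, n=8):
    mask = (1 << n) - 1
    total = a + ((~b) & mask)        # add the one's complement of B
    if total > mask:                 # overflow bit is one: the result is positive
        return (total & mask) + 1    # add the end around carry
    return -((~total) & mask)        # otherwise negative: magnitude is the
                                     # one's complement of the result

print(bin(ones_complement_subtract(0b11100011, 0b00100010)))   # 0b11000001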

(ii) Two's Complement Arithmetic


The two's complement of an n-bit number is obtained by adding binary one to the one's complement of that number. In order to subtract an n-bit number, B, from an n-bit number, A, using two's complement arithmetic, the following procedure is followed:

- Take the two's complement of B
- Add the two's complement of B to the number A
- If the overflow bit is one, then the result is positive and the overflow bit is ignored
- If the overflow bit is zero, then the result is negative and its magnitude is obtained by taking the two's complement of the result (excluding the carry bit).

As an example, consider the following subtraction:


    11100011
  - 00100010
  ----------
    11000001

Using two's complement arithmetic:

    11100011
  + 11011110      (the two's complement of 00100010)
  ------------
  1 11000001      (the overflow bit is 1, so the result is positive and the overflow bit is ignored)

Again, the results can be verified by a visual examination of the original binary subtraction.
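A corresponding sketch for the two's complement method follows (again illustrative only; an 8-bit word length is assumed).

# Two's complement subtraction A - B for n-bit words.
def twos_complement_subtract(a, b, n=8):
    mask = (1 << n) - 1
    total = a + ((((~b) & mask) + 1) & mask)    # add the two's complement of B
    if total > mask:                            # overflow bit is one: result positive
        return total & mask                     # the overflow bit is ignored
    return -((((~total) & mask) + 1) & mask)    # otherwise negative: magnitude is
                                                # the two's complement of the result

print(bin(twos_complement_subtract(0b11100011, 0b00100010)))   # 0b11000001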


(iii) Signed Numbers


In a signed number system, one bit out of each n-bit number (the high-order bit) is used to represent the sign of that number. Zero represents positive and one represents negative. For example, in an 8-bit system, the number +5 is represented as 00000101 and the number -5 is represented as 10000101. The high-order bit is referred to as the "sign bit". The problem with signed number representation is that it fails to take advantage of special purpose hardware available on most processors that performs one's or two's complement subtraction as shown in (i) and (ii).


4.7 Flip-Flops and Registers


In the previous sections of this chapter, we examined a range of circuits that were designed to perform some instantaneous logic or arithmetic function - for example to produce a control output based upon inputs or to instantaneously add numbers together. However, in computer systems we need to do far more than add numbers together; we also need to be able to store the results. The most basic storage element in digital systems is called the flip-flop. The simplest of all flip-flops is the S-R flip-flop which is constructed from two NOR gates as shown in Figure 4.26.


Figure 4.26 - (a) Schematic of R-S Flip-Flop Construction (b) Block Diagram Representation of R-S Flip-Flop

The R-S flip-flop looks deceptively simple but it is not a combinational circuit. The difference is that in a flip-flop, the outputs are fed back into the inputs. This means that the output "Q" is influenced by both the current inputs (R and S) and the current value of Q. For this reason, we can refer to the current value of Q as Qn and the next value of Q as Qn+1. Moreover, since the flip-flop output is dependent upon its own current value, combined with inputs, we refer to Q as being the "state" of the flip-flop, rather than just the output. Notice also that the flip-flop has two outputs which, from symmetry, are complementary (ie: one is the NOT of the other). The truth table for the R-S flip-flop is shown in Table 4.10. The truth table is easy to determine when only one or other of the inputs is high. However, when both inputs are high, then the output cannot be determined from simple Boolean logic and really only depends upon the switching speed of the gates involved. The condition where both inputs are set high is referred to as a "race condition" and is a "not allowed" state because it is not defined by digital logic.


S    R    Qn    Qn+1
0    0    0     0
0    0    1     1
0    1    0     0
0    1    1     0
1    0    0     1
1    0    1     1
1    1    0     Not Defined
1    1    1     Not Defined

Table 4.10 - Truth Table for R-S Flip-Flop

Looking at the truth table of Table 4.10, it can be seen that the effect of "S" is to change the output state, Q, from 0 to 1. The effect of "R" is to change the output state from 1 to 0. For this reason, S and R are referred to as "Set" and "Reset", respectively. A further inspection of Table 4.10 reveals that once the output state of an R-S flip-flop is 1, it remains equal to 1 regardless of the value of S and until the value of R is set to 1. Similarly, the output state of a flip-flop remains equal to 0, regardless of R and until the value of S is set to 1. In other words, the flip-flop acts like a "latch" and it is often referred to by this alternate name. Despite the problem of the race condition, R-S flip-flops are still used in a number of circuits. In particular, the R-S flip-flop circuit can be modified so that the inputs are only allowed to influence the outputs when additional requirements are fulfilled. The so-called "clocked" R-S flip-flop is one such circuit and is shown in Figure 4.27.


Figure 4.27 - (a) Clocked R-S Flip-Flop Schematic (b) Clocked R-S Flip-Flop Block Diagram


In "clocked" digital devices, the inputs are gated through an additional binary variable, abbreviated "Clk", and can only influence outputs when this variable is high. A clock waveform can be generated by a number of mechanisms, including a crystal oscillator or a special digital IC known as a "555 Timer". Although a clock can be just another binary signal, it is more typically a repetitive binary waveform, as its name implies. A typical set of waveforms for a clocked R-S flip-flop are shown in Figure 4.28.


Figure 4.28 - Typical Timing Diagrams for a Clocked R-S Flip-Flop

The clocked R-S flip-flop arrangement does not eliminate the "race" condition, since it is still possible that both inputs will be high when the clock is high. One simple technique for avoiding the race condition is to ensure that S and R are never simultaneously high. This is achieved by connecting an inverter between S and R as shown in Figure 4.29. The result is a single-input flip-flop. The single input is called "D" and the flip-flop is called a "D flip-flop".



Figure 4.29 - (a) Clocked D Flip-Flop Schematic (b) Clocked D Flip-Flop Block Representation

A more sophisticated flip-flop that is very commonly used in digital circuits is the so-called JK flip-flop, whose J and K inputs are analogous to S and R, respectively. The JK flip-flop behaves identically to the R-S flip-flop whenever one or other of the inputs is high - however, the JK flip-flop does not suffer from the race condition and when both inputs are high, the outputs toggle (invert) from their previous value. The truth table for a JK flip-flop is shown in Table 4.11. Note that if Qn is 0 and both J and K are set to 1 then Qn+1 is 1. If Qn is 1 and J and K are both set to 1, then Qn+1 becomes 0. JK flip-flops, like R-S flip-flops, are available in a clocked form.

J    K    Qn    Qn+1
0    0    0     0
0    0    1     1
0    1    0     0
0    1    1     0
1    0    0     1
1    0    1     1
1    1    0     1 (Toggle)
1    1    1     0 (Toggle)

Table 4.11 - Truth Table for JK Flip-Flop
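The behaviour summarised in Table 4.11 can also be captured in a small next-state function; the sketch below is a behavioural model only, not a gate-level design.

# Behavioural model of a JK flip-flop: returns the next state Qn+1.
def jk_next_state(j, k, q):
    if j == 0 and k == 0:
        return q            # hold the current state
    if j == 0 and k == 1:
        return 0            # reset
    if j == 1 and k == 0:
        return 1            # set
    return 1 - q            # J = K = 1: toggle

for j in (0, 1):
    for k in (0, 1):
        for q in (0, 1):
            print(j, k, q, "->", jk_next_state(j, k, q))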


The problem with all clocked flip-flops is that it is very difficult to produce clock pulses such as those shown in Figure 4.28 - that is, of uniform width. Flip-flop circuits that depend upon uniform timing from clocks are subject to faulty state changes. For this reason, it is much more practical to design flip-flops that only change state on the positive or negative going edges of a clock pulse, thereby making them independent of the clock pulse width. These flip-flops are referred to as "edge-triggered" devices. A flip-flop which changes state as the clock pulse goes from high to low is referred to as "negative-edge-triggered" and one that changes state as the clock goes from low to high is referred to as "positive-edge-triggered". The problem with edge-triggered flip-flops is that they can miss small input pulses that do not coincide with the positive or negative edge of the clock waveform. For this reason, the so-called JK-Master-Slave flip-flop has been developed. This is shown in Figure 4.30.


Figure 4.30 - Negative-Edge-Triggered JK Master-Slave Flip-Flop (a) Schematic (b) Block Diagram Form


Note from Figure 4.30 (b) how a small circle is placed in front of the Clk input terminal on the flip-flop to indicate that the flip-flop is triggered on the negative edge. Figure 4.31 shows the timing diagram for the master-slave flip-flop of Figure 4.30 and compares it with the standard, negative edge triggered device that "misses" small input pulses. In the master-slave device, the master section is active whenever the clock is high, so the input values (whenever the clock is high) are registered and they can only change the outputs when the slave becomes active (on the negative edge).


Figure 4.31 - Timing Diagrams Highlighting the Operation of Simple Negative Edge Triggered Flip-Flop and Master-Slave Flip-Flop


In addition to the normal inputs, such as J, K, S, R and Clk, most flip-flops have two additional (asynchronous) inputs, normally referred to as PRESET and CLEAR. Whenever a PRESET terminal is enabled, the flip-flop output is immediately set to 1, regardless of other inputs and clock state. Whenever CLEAR is enabled, the flip-flop output is immediately set to 0, regardless of other inputs and clock state. A collection of flip-flops connected together for some common purpose is referred to as a register. There are many different kinds of registers to perform a range of different functions. The most basic function of a register is to act as a data storage device, with each flip-flop storing 1 bit of data. This is shown in Figure 4.32. The "state" of the register at any point in time is defined by the outputs (Qn to Q0) of the flip-flops in the register.


Figure 4.32 - Eight Bit Storage Register


The device in Figure 4.32 is simply an 8-bit storage location. Data is clocked in (in parallel) and outputs are held in flip-flops until required. Figure 4.33 shows a slightly more sophisticated form of register, called a "shift register". The purpose of a shift register is to take an incoming serial bit stream at the input and shift the data along from flip-flop to flip-flop until all are loaded with one bit of data. In the case of the shift register in Figure 4.33, if the first bit in the serial stream is the least significant bit (LSB) and the last is the most significant bit (MSB), then after eight clock pulses, Q0 contains the LSB and Q7 contains the MSB. The shift register has effectively converted serial data into parallel form.


Figure 4.33 - Eight Bit Shift Register and Timing Diagrams
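The serial-in, parallel-out behaviour can be modelled in a few lines of Python (a behavioural sketch only; the register length and bit ordering are assumptions made for illustration).

# Serial-in, parallel-out shift register model with eight stages.
def shift_register(serial_bits, stages=8):
    reg = [0] * stages               # flip-flop outputs, input stage first
    for bit in serial_bits:          # one clock pulse per incoming bit
        reg = [bit] + reg[:-1]       # each stage passes its value to the next
    return reg

# Sending an 8-bit value LSB first: after eight clock pulses the first bit
# sent (the LSB) has travelled to the far end of the register and the whole
# word is available in parallel on the flip-flop outputs.
value = 0b10110010
print(shift_register([(value >> i) & 1 for i in range(8)]))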


4.8 Counters - Electronic Machines


At the beginning of this chapter, we examined the possibility of generating an electronic machine that could actually move from one state to another. If each state can then be decoded using both Boolean logic and numerical circuits, then we can effectively produce a reckoning machine or computer. However, nearly all the Boolean logic and flip-flop circuits that we have examined in this chapter have been rather static in nature, except for the shift register of Figure 4.33. In the shift register, the state changed with every cycle of the clock. In this section we shall examine a few basic circuits that change from one state to another on each clock cycle and act as "state machines". The simplest state machines are actually counters and we shall see why they are so named after examining Figure 4.34. In this diagram, four JK flip-flops are interconnected to produce an "asynchronous up-counter". Each flip-flop has its J and K inputs tied to binary one and hence each is in toggle mode. Assuming that each flip-flop is negative-edge-triggered, then each output will invert on the negative edge of each clock cycle. The timing diagram in Figure 4.34 shows that the output Q0 then effectively runs at half the clock frequency. Since Q0 is connected to the clock input of the second flip-flop, Q1 then runs at one quarter of the original clock frequency and so on. Table 4.12 lists the outputs Q3 to Q0 on each successive negative edge of the clock pulse.

Pulse    Q3    Q2    Q1    Q0
  0       0     0     0     0
  1       0     0     0     1
  2       0     0     1     0
  3       0     0     1     1
  4       0     1     0     0
  5       0     1     0     1
  6       0     1     1     0
  7       0     1     1     1
  8       1     0     0     0
  9       1     0     0     1
 10       1     0     1     0
 11       1     0     1     1
 12       1     1     0     0
 13       1     1     0     1
 14       1     1     1     0
 15       1     1     1     1
 16       0     0     0     0

Table 4.12 - Flip-Flop States for Asynchronous "Up-Counter"



Figure 4.34 - Asynchronous Up-Counter

The asynchronous up-counter of Figure 4.34 is composed of four flip-flops and hence can generate a maximum of sixteen different states (ie: 2^4), before we return to the original state. The sixteen different states produced by the up-counter follow a standard binary count sequence (from zero to fifteen) and hence the name of the circuit. In order to make a device that counts down from fifteen to zero, that is, an asynchronous down-counter, we take the inverted output from each flip-flop and connect it to the clock of the subsequent flip-flop (the output states are still taken from the non-inverting outputs).
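The count sequence of Table 4.12 can be reproduced behaviourally by toggling each modelled flip-flop whenever the preceding stage falls from one to zero; the sketch below is an illustration only.

# Behavioural model of a 4-bit asynchronous (ripple) up-counter.
def ripple_count(clock_pulses, stages=4):
    q = [0] * stages                    # Q0 (LSB) .. Q3 (MSB)
    states = [list(q)]
    for _ in range(clock_pulses):
        falling = True                  # the falling edge of the external clock
        for i in range(stages):
            if not falling:
                break
            previous = q[i]
            q[i] ^= 1                   # toggle (J = K = 1)
            falling = (previous == 1)   # a 1-to-0 change clocks the next stage
        states.append(list(q))
    return states

for pulse, state in enumerate(ripple_count(16)):
    print(pulse, state[::-1])           # printed in the order Q3 Q2 Q1 Q0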


The problem with asynchronous state machines, such as the up-counter of Figure 4.34, is that each device has a different clock signal. The up-counter is called a ripple counter because flip-flops change state at slightly different times due to the propagation delay through each flip-flop. This can create problems in situations where the circuits are required to operate at very high speeds. An alternative solution is to create synchronous counter circuits. Although slightly more complex, synchronous counters are more predictable in their high-speed behaviour because all flip-flops share the same clock signal. Figure 4.35 (a) and Figure 4.35 (b) show two common counters. The first (a) is a synchronous hexadecimal up-counter (counting from 0 to 15 before resetting) and (b) is a synchronous decimal, or BCD, up-counter (counting from 0 to 9 before resetting).


Figure 4.35 - (a) Synchronous Hexadecimal Counter (b) Synchronous Binary Coded Decimal Counter


From the above discussions and diagrams it becomes evident that counters are analogous to the electronic machines that we were seeking at the beginning of this chapter. We will look at state machines in a little more detail in Chapter 6 in order to show how we can use basic digital logic to create an electronic machine that is capable of carrying out some form of reckoning. At this stage however, it is more important to consider what we can do with state machines such as counters. The self-evident application for counters is to count numbers so that they can provide a digital output proportional to time (ie: clock pulses). Another use for counters is to carry out some form of sequential control. As an illustration of the use of counters in control functions, and as a conclusion to this chapter, consider the following problem.

Design Problem 5: Referring back to the incubator control system of Design Problem 1, modify the original design so that the incubator temperature is only read once every ten seconds. After reading the temperature, the fan or heater should be turned on as before, but only for a period of three seconds and switched off thereafter until the next reading.

Solution to Design Problem 5: In order to solve the problem, we need to add a counter circuit to the original system and two flip-flops to store the required state of the fan/heater at the time the reading is taken. Since we would like to have a new cycle every 10 seconds, the BCD up-counter of Figure 4.35 (b) would be appropriate. If we drive the counter with a clock having a frequency of one cycle per second, then the counter increments one count per second and resets after ten seconds. The clock signal can be generated via a 555 Timer IC. The outputs from the counter (Q3Q2Q1Q0) become additional inputs to the digital controller previously designed. The new circuit is shown in Figure 4.36.

When all counter outputs are zero, then the digital controller initialises the system by resetting both flip-flops (this is achieved by setting outputs rf and rh high) and setting "enable" outputs ef and eh to zero. When the counter has a binary output of 1, then the control calculation takes place as before. However, instead of turning on the fan or heater directly, the outputs f and h are stored in two S-R flip-flops by connecting to the set pin of each corresponding flip-flop. The controller outputs ef and eh are ANDed with the flip-flop outputs to prevent a high signal reaching the fan or heater until the appropriate time.


When the counter has binary outputs equivalent to 2, 3 or 4, the outputs of the appropriate flip-flops are gated to the fan or heater. When the counter has binary outputs equivalent to 5, 6, 7, 8 or 9, the reset pins on the relevant flip-flops are enabled. The combinational logic required to perform this task can be designed using previously discussed techniques.


Figure 4.36 - Modified Incubator Control System
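The timing scheme of Figure 4.36 can be outlined behaviourally as follows; this is only a sketch, and control_decision() merely stands in for the combinational controller of Design Problem 1 (an assumption here, since that logic is not reproduced).

# Behavioural outline of the counter-gated incubator timing described above.
def incubator_step(count, control_decision, latch):
    # latch holds the two S-R flip-flop states: latch['f'] and latch['h'].
    enable = 0                          # the controller outputs ef and eh
    if count == 0:                      # initialise: reset both flip-flops
        latch['f'] = latch['h'] = 0
    elif count == 1:                    # read the temperature and latch f and h
        latch['f'], latch['h'] = control_decision()
    elif count in (2, 3, 4):            # gate the latched outputs for 3 seconds
        enable = 1
    else:                               # counts 5 to 9: reset the flip-flops
        latch['f'] = latch['h'] = 0
    fan = enable & latch['f']           # enables ANDed with the flip-flop outputs
    heater = enable & latch['h']
    return fan, heater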

As an exercise, design the complete circuit for the incubator controller, beginning with the previously derived outputs for F and H.


Chapter 5
Memory Systems and Programmable Logic

A Summary...
A short chapter overviewing the types of memory devices available and their applications. An introduction to special purpose memory devices for implementing digital circuits (PROM, EPROM, EEPROM) and Boolean logic (Programmable Array Logic (PAL), Programmable Logic Array (PLA), etc.).



5.1 Introduction
In Chapter 4, we examined a number of techniques that could be used to create a reckoning machine, or computer, through simple digital circuits. In that chapter, we were able to instil a crude form of human reasoning into the machine through Boolean logic circuits and to carry out simple manipulation of numbers via numerical circuits designed from digital circuits. Clearly, these circuits are not sufficient to create the sort of computer systems with which we have become familiar. Another important trait of computer systems is their ability to store data on both a short and long term basis. In Chapter 4, we examined the flip-flop and the register (collection of flip-flops) as a mechanism for short-term data storage. In this chapter, we need to delve further into more practical memory storage devices that can be built with a higher density and lower cost.

Traditionally, magnetic (tape, floppy and hard disk) storage media have been used for the long term storage of data. The major reason for this is that these media are considered to be "non-volatile". This means that the contents remain intact even when power is removed from the circuits responsible for storing and retrieving data from the media. More recently, optical devices such as compact disks have been introduced as an alternative format, again offering a relatively non-volatile storage of data at a very high density. Another reason for using all these types of storage systems is that they have provided relatively low cost data storage formats at times when other alternatives proved considerably more expensive. The problem with all these storage techniques is that they are based upon the movement of mechanical components that scan the surface of the medium and hence are very slow relative to the processing abilities provided by microprocessors and other digital circuits.

For several decades now, short term data storage has been facilitated by semiconductor memory. Digital memory circuits are orders of magnitude faster than any of the mechanically driven, long-term data storage formats described above. However, in the past, they have taken up more physical space than the equivalent mechanical formats (particularly because of IC packaging and pin-out) and have generally suffered from volatility problems - in other words, most semiconductor memory storage devices lose their contents when power is removed. The last two decades have seen dramatic increases in the density of memory storage devices, thanks largely to the introduction of CMOS based circuits and improvements in semiconductor fabrication technology. The increases in density have been coupled to dramatic decreases in storage costs, to the extent where the costs of semiconductor memory are approaching those of mechanical storage systems. At this point however, the question of volatility has still not been resolved satisfactorily. Although it is possible to purchase memory devices which do not lose their contents when power is removed, the sorts of circuits that have this characteristic, and can be written to and read from, are still relatively costly and inefficient for general computing.


Data storage technologies change so rapidly that it is really inappropriate to dwell upon specific technologies within a text such as this. This chapter really provides an overview of the sorts of semiconductor data storage techniques that are available and their relevance to computing. The secondary purpose of this chapter is to look at a range of devices that can be used to store Boolean logic functions, rather than simple binary data. The reason for examining these particular circuits (referred to generically as Programmable Logic Devices) in a chapter associated with memory circuits is because of the similarity between the corresponding devices.

Prior to beginning a discussion on memory circuits, we need to examine a few new Boolean devices that will have significance to us as we progress. The first group of new Boolean devices that we need to examine are referred to as "Tristate Devices". So far, all the other devices that we have examined have only been able to provide two outputs - a voltage equivalent to binary one and a voltage equivalent to binary zero. However, in some digital circuits, such as in memory chips, we need to be able to electronically connect and disconnect (ie: isolate) devices from the remainder of the circuit. This is achieved by a third state, referred to as a high-impedance state or Hi-Z state. The voltage output from a tristate device in a Hi-Z state is midway between the one and zero logic levels (ie: in the forbidden band where it cannot be misinterpreted as a logical output). A range of tristate devices are shown in Figure 5.1. The simplest way to understand the operation of tristate circuits is to assume that the circuits perform their normal functions whenever enabled. For example, the circuit in Figure 5.1 (b) is a simple inverter whenever enabled - that is, whenever E = 1. The truth table for the circuit of Figure 5.1 (b) is shown in Table 5.1.

A    E    Z
0    0    Hi-Z
0    1    1
1    0    Hi-Z
1    1    0

Table 5.1 - Truth-Table for Tristate Inverter with Active High Enable

The Hi-Z state means that the impedance between the "A" and "Z" terminals is ideally infinite and so the inputs are isolated from the outputs. The circuit of Figure 5.1 (a) simply has A and Z equivalent whenever the circuit is enabled (E = 1) but isolated from one another when disabled (E = 0).
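A behavioural model of these buffers (an illustration only; the string 'Z' is used here to stand for the high-impedance state) reproduces Table 5.1 directly and also covers the active-low variants discussed below.

# Behavioural model of the tristate buffers of Figure 5.1.
HI_Z = 'Z'    # high-impedance state: the output is isolated from the input

def tristate(a, enable, active_low=False, inverting=False):
    enabled = (enable == 0) if active_low else (enable == 1)
    if not enabled:
        return HI_Z
    return (1 - a) if inverting else a

# Figure 5.1 (b): inverting buffer with an active-high enable (Table 5.1).
for a in (0, 1):
    for e in (0, 1):
        print(a, e, tristate(a, e, inverting=True))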



Figure 5.1 - Tristate Logic Devices (a) Non-Inverting Circuit with Active High Enable (b) Inverting Circuit with Active High Enable (c) Non-Inverting Circuit with Active Low Enable (d) Inverting Circuit with Active Low Enable

In order to understand the operation of the circuits in Figures 5.1 (c) and 5.1 (d), one needs to understand the "Active Low" terminology. Many digital circuits are enabled whenever a particular terminal is set to a voltage equivalent to binary zero. This inverse logic is represented by placing a bar over the top of the relevant input terminal. For example, in Figures 5.1 (c) and 5.1 (d), the enable input is shown as:

E̅ (ie: E with a bar drawn over it)
thereby signifying that it should be set low to enable the circuit. An alternative is to place a circle at the input terminal to signify inverted logic. In Figure 5.1 (c) and 5.1 (d), both symbols are used but circuits are often drawn using only one symbol or the other.


Once one understands the significance of the inverted enabling logic, then the operation of the circuits in Figure 5.1 (c) and 5.1 (d) is almost self-evident. The circuit of Figure 5.1 (d) acts as an inverter whenever enabled (E̅ = 0) and otherwise isolates terminal "A" from "Z". The circuit of Figure 5.1 (c) has "Z" equivalent to "A" whenever the circuit is enabled (E̅ = 0) and "Z" isolated from "A" whenever the circuit is disabled (E̅ = 1). Another simple circuit that needs to be introduced is a modified representation of the inverter gate as shown in Figure 5.2. It is introduced herein so that it can be differentiated from the Tristate logic device.


Figure 5.2 - Inverter Gate with Both Inverting and Non-Inverting Outputs

The inverter gate of Figure 5.2, with both inverting and non-inverting outputs looks confusingly similar to the circuit of Figure 5.1 (c) so one has to be careful when examining circuits. Normally, tristate enables are shown with a "bent" line, whereas the two outputs of the inverter are shown with straight lines. However, it is best to look for further written evidence when dealing with circuits that may contain both devices.


5.2 Overview of Memory Operation


Many people tend to classify computer memory into two distinct types - Random Access Memory (RAM) and Read Only Memory (ROM). This is rather misleading because all common types of semiconductor memory are, in principle, "random access". In other words, we can normally access any location at any time without first having accessed other locations. Strictly speaking, when people make such a division between memory types, they are really differentiating between Read/Write Memory and Read Only Memory. Given that we have to cope with both the common nomenclature for memory and its true functionality, in this section, we will be looking at the following different types of memory devices:

(i) Static Read/Write Memory (commonly referred to as Static RAM or SRAM)
(ii) Dynamic Read/Write Memory (commonly referred to as Dynamic RAM or DRAM) and Integrated RAM or IRAM
(iii) Non-Volatile Read-Only Memory (ROM) including: Masked ROM, PROM, EPROM and EEPROM
(iv) Non-Volatile Read/Write Memory (commonly referred to as NVRAM).

Despite their differences, most of the above memory types have a number of common traits and we shall examine these before discussing the specific attributes of any one type. Digital memory can be considered in terms of an array of storage locations in which each location or cell can store one bit of data. Schematically, we show the memory in terms of rows of bits. This is shown in Figure 5.3 for a memory chip that has 2^N rows of storage, each of which is 8 bits wide. The width of each row is referred to as the "word length" of the memory chip and varies from device to device. The word length defines the smallest unit that can be written to, or read from, a memory chip.

If we assume a hypothetical chip where N = 3 (say) then we have 8 rows of storage (most realistic chips have much more storage than this). Each row has a unique address within the chip. For example, the first row in an 8 row chip would have the address 000 and the last row would have the address 111. In order to enter data into a particular row (ie: write to that row) or extract data from a particular row (ie: read from that row), we need to address that row by activating the appropriate chip address. This is achieved via the address lines A0 - AN-1. A Boolean decoder within the memory chip enables the row of data, corresponding to the given address, to be written to, or read from, the outside world via the data lines D7-D0.



Figure 5.3 - Schematic of Data Storage in Memory Chips

All semiconductor memory chips, including non-volatile devices, need to have power applied to them via supply rails in order to function. However, even when power is connected to a memory chip, nothing can be written to or read from the device until the device is enabled or unlocked. Enabling is achieved via a special pin known as a "Chip Select" or "Chip Enable" pin that has to be set to the appropriate logic level in order to allow access to internal chip locations. Chip Enable and Chip Select pins are actually connected to the "enable" lines of tristate logic devices within the memory chip to control access to internal locations. In memory devices designed for both read and write functions, an additional pin is normally provided to control the flow of data to and from internal addresses. The Read/Write pin is also connected to tristate logic devices within the chip that control the flow of data into or out of the chip. In Figure 5.3, for example, the chip can be written to by applying binary zero to the Read/Write pin and the chip can be read from by applying binary one to the same pin.


To summarise then, a number of steps have to be taken in order to access data in a memory chip. These include:

(i) Supply power to the chip
(ii) Apply the appropriate bit combination to the address lines of the chip, corresponding to the internal word address that is to be accessed
(iii) Enable the chip by applying appropriate logic to the Chip Enable or Chip Select pin
(iv) Place the chip into read or write mode
(v) Apply a binary combination to the data lines to write to the chip or extract the binary combination from the data lines to read from the chip.
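These steps map naturally onto a small behavioural model; the class below is a sketch of the access protocol only (the Read/Write pin convention follows Figure 5.3, while everything else is an assumption made for illustration).

# Behavioural model of a simple read/write memory chip.
class MemoryChip:
    def __init__(self, address_bits, word_length=8):
        self.words = [0] * (2 ** address_bits)       # 2^N rows of storage
        self.word_mask = (1 << word_length) - 1

    def access(self, address, chip_select, read_write, data_in=0):
        if not chip_select:           # chip not enabled: data lines isolated (Hi-Z)
            return None
        if read_write == 1:           # binary one on the Read/Write pin: read
            return self.words[address]
        self.words[address] = data_in & self.word_mask    # binary zero: write
        return None

chip = MemoryChip(address_bits=3)                              # 8 words of 8 bits
chip.access(0b101, chip_select=1, read_write=0, data_in=0x3C)  # write
print(hex(chip.access(0b101, chip_select=1, read_write=1)))   # read back 0x3c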


5.3 Volatile Read/Write Memory


There are two basic types of volatile read/write memory. These are referred to as static RAM (SRAM) and dynamic RAM (DRAM). Static RAM is perhaps the easier of the two types to understand in terms of common Boolean circuits, since each word (or row of data) is effectively a register, composed of flip-flops. This is shown schematically in Figure 5.4. In this diagram, the sort of logic circuit that could be used to construct a static RAM chip with one (4-bit) row of storage is shown, together with the enabling logic that allows the "Chip Select" and "Read/Write" functions to be performed.


Figure 5.4 - Designing a One Word Static RAM Chip

In Figure 5.4, the data inputs to the device correspond to the inputs to the D flip-flops and the outputs correspond to the Q outputs of each flip-flop. Since the inputs and outputs are never enabled simultaneously by the Read/Write pin, it is possible to join the two together without a conflict and thereby form a bidirectional data port. Note also that because the memory storage is purely transistor based, it is not possible to access data unless power is supplied to the digital devices within the chips. Moreover, the contents of the flip-flops are lost whenever power is removed. The flip-flops in static RAM chips are not actually fabricated from NAND gates or NOR gates but directly from transistors. This reduces the number of transistors per flip-flop (ie: storage bit) to approximately four. Even so, this takes up a considerable amount of space, even when CMOS devices are used, and hence SRAM chips are generally only used in applications that require relatively small amounts of memory.


The major advantage of SRAM chips is their speed of operation. Moreover, because static RAM chips are low density, it is easy to address individual words. For example, the Intel Corporation's 2114A device was designed for only 10 address lines, thereby storing 1024 words, each of which is 4 bits in length. Static RAM chips can be connected in parallel so that the effective word length of a chip can be increased. For example, a device which only stores 4 bits can be converted to an 8, 12, 16, 32 or even 64 bit device. As long as the chip select lines and read/write lines are connected in parallel, the data can also be stored and retrieved in parallel.

The problem of low density storage in static RAM chips is a serious one and makes them difficult to use in large quantities in computer systems which require ever increasing amounts of memory space for modern programming applications. The more common form of memory for high volume data storage applications is the dynamic RAM chip or DRAM. In a dynamic RAM chip, each storage cell is made up of a transistor and a capacitor, rather than a flip-flop. The capacitor emulates the role of the flip-flop by retaining its charge for a period of time after having been "set" or "reset". The overall amount of semiconductor space required to store a bit of data is reduced significantly. However, the charge leakage within the capacitor means that the stored charge or voltage is effectively lost within a few milliseconds. This means that every single cell in a DRAM chip needs to be "refreshed" every few milliseconds in order to retain the data. In terms of modern microprocessor systems, a few milliseconds is a considerable period of time and so the refresh problem is not as drastic as may first appear. However, it does mean that DRAM chips can only be used in conjunction with special support circuits whose role is to constantly refresh each memory location. DRAM, like SRAM, can only be operated when power is applied to the chips and all data is lost when power is removed.

The major advantage of the DRAM chips (high density) is also the source of another major drawback. That is, the problem of addressing each memory location. DRAM chips can store hundreds of kilobytes of data and so, theoretically, they would require a large number of addressing lines on each chip. This presents major problems because the number of pins on a chip (ie: the pin-out) is a major expense in terms of fabrication. The problem is resolved by using a smaller number of address lines, in two phases, to address the same word. In other words, each address is divided into two segments and the address lines are switched (multiplexed) to accept the low order and high order address segments. The low order address segment is called the "row" and the high order address segment is called the "column". This enables a chip with 256K (ie: 256 x 2^10) addresses to be designed with only 9 address pins (instead of the usual 18), thereby allowing the chip to be implemented in a 16-pin integrated circuit (IC) package.
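The two-phase addressing can be illustrated with a few lines of code; the particular bit split below is an assumption made purely to show the idea.

# Splitting an 18-bit DRAM address into multiplexed row and column halves
# that share the same nine address pins.
def split_dram_address(address, pins=9):
    row = address & ((1 << pins) - 1)                 # low-order half, strobed first
    column = (address >> pins) & ((1 << pins) - 1)    # high-order half, strobed second
    return row, column

row, column = split_dram_address(0b101101001011001110)
print(bin(row), bin(column))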


Refreshing of DRAM chips is carried out by pulsing a special pin on the chip once for each row of data. This sort of functionality is achieved by special DRAM controller chips. However, the refresh cycle takes time and, while in progress, prevents the DRAM chip from carrying out its normal functions. The net result is that the DRAM chip is both slower and more difficult to use than the SRAM chip. Despite the problems associated with DRAMs, they are still the most prevalent form of semiconductor memory at this point in time because of their high storage density and hence, most modern systems are designed with sophisticated controllers that refresh memory chips at optimum periods during computer cycles.

One of the more innovative solutions to the problem of refreshing memory chips is to combine the refresh circuitry with the chips themselves. This relieves the designer of the task of designing the special control circuitry required to handle the refresh cycles. The combined devices are given the title "Integrated RAM" or IRAM. Although these would seem to provide the ideal combination of characteristics, the built-in control circuitry takes up a considerable amount of chip space and tends to eliminate many of the gains made by using high density dynamic RAM in the first instance.


5.4 Non-Volatile Read Only Memory


The term "Read Only Memory" or ROM is another misnomer, since any storage device that can never be "written to" is clearly of little value. Read Only Memory chips are devices in which it is difficult to store data on a regular basis. In some cases the data is physically burnt into the chips by blowing fuses and in other cases it is built into the chip itself during the semiconductor fabrication process. In all cases, the objective is to build a device whose contents are permanently retained, even when power is removed. However, all chips must have power applied to them before data can be accessed. There are many reasons why some memory chips need to be designed as "Read" devices rather than as Read/Write devices. Quite often, it is necessary to build special purpose computer controllers that always work on a fixed program. In small-scale systems, the cost of the controller can be reduced by eliminating disk-drives and other unnecessary devices. This means that programs need to be stored entirely in nonvolatile memory devices that retain their contents even when power is removed. ROM chips are ideal for these types of applications. Another common use for ROM chips is to prevent users from tampering with particular areas in a computer system. Most modern computers have a number of basic functions burnt into ROM so that users cannot interfere with their functionality and create complex system faults. Finally, ROM chips can also be thought of as small disk-drives or cartridges that can be used to store programs and that can be removed and replaced with other chips to change the operation of a particular system. In order to have any purpose, all ROM chips need to be written to at least once. Some ROM chips can be written to a number of times. A device which can be written to only once is referred to, in computer jargon, as a "WORM" (Write Once Read Many) device. The "Masked ROM" is effectively a WORM chip. It is notable for its low cost in high volume applications and it is effectively "written" to by the semiconductor manufacturer. The "masked" notation comes from the fact that data is embedded into the chip by growing layers of silicon, silicon-dioxide and metal in areas of semiconductor selectively masked to create the required pattern of zeros and ones (ie: the data). The Masked ROM is really aimed at the mass production market (eg: video games, etc.) because of the set-up costs for manufacturing what are effectively special-purpose semiconductor devices. The Programmable Read Only Memory (PROM) chip is another WORM device. However, it can be programmed by low-cost hardware connected to a small computer work-station. A PROM is composed of a number of microscopic fuses (commonly known as "fusable links") and there is one fuse per bit of stored data. A logical one or zero is represented by the ability of a region to conduct or not conduct data.


Fuses within a PROM chip are blown by addressing a row in a chip and applying "programming voltages" to the data lines of the chip (corresponding to the data to be stored). The programming voltages are considerably larger than the normal binary voltage levels and are sufficient to blow fuses in the required pattern. After a PROM chip has been burnt, its contents can be accessed just like any other memory chip. The disadvantage of PROMs is that once they have been burnt, the contents of the chip can never be changed, so they are not well suited to prototyping.

A commonly used prototyping chip is the Erasable PROM or EPROM. In an EPROM, a one or zero is represented by the presence or absence of charge in an electrically isolated region called a cell. Like the PROM chip, the EPROM is programmed by the application of "programming voltages" to selected addresses within the chip. The programming voltages trap electric charge within cells, thereby forming a non-volatile storage method. The advantage of EPROM devices is that they can be erased by applying ultraviolet light to the semiconductor material. In fact, EPROM chips are notable for the glass window that exposes the semiconductor material through the packaging of the chip. Natural sunlight and most indoor lights emit a component of ultra-violet and hence EPROMs need to be shielded in order to prevent accidental erasure of data. This shielding can be realised through the application of opaque tapes across the erasure window. However, even without the shielding, it normally takes several years' exposure to fluorescent lighting to erase an EPROM, so special-purpose lights need to be purchased for quick erasure when reprogramming.


5.5 Non-Volatile Read/Write Memory


In Section 5.4, we examined a few of the basic non-volatile Read Only Memory systems. However, the so-called Electrically Erasable PROM or EEPROM or E2PROM more closely resembles read/write memory than it does read only memory. It is effectively a non-volatile read/write memory and is sometimes referred to as "read mostly memory". EEPROM devices can be electrically written to and read from but with some restrictions. Although reading times are similar to most other memory chips, write times are in the order of milliseconds, thereby (currently) making them impractically slow for general purpose computing. Moreover, EEPROMs can only be written to for a limited number of times before their reliability becomes unacceptable as a result of charge retention problems within each storage cell. The most common application for EEPROM is to store system configuration data in personal computer systems and small-scale controllers. There are difficulties involved in improving the response time of EEPROM chips and so some hybrid systems have been developed in recent years. One such hybrid is the so-called "Non-Volatile RAM" or NVRAM, where a static RAM chip is combined with an EEPROM chip. The RAM chip operates as normal, but every few milliseconds mirrors its data onto a parallel EEPROM chip in case power is removed from the system. Ideally, of course, the objective is to have non-volatile RAM chips that can perform both read and write functions at normal operating speeds and can be produced at a relatively low cost.


5.6 Programmable Logic Devices


Memory devices are good at storing binary data, but there are also good reasons why we need a range of other devices that can be used to store Boolean logic - in other words to replace hard-wired Boolean gates. If we can store relatively complex Boolean logic into individual chips, then we can eliminate unreliable wiring, protect our logic designs and minimise power consumption. We can make circuits with fewer parts (again increasing reliability) and streamline the prototyping of circuits. The devices which help us to accomplish these functions have the generic title of "Programmable Logic Devices" or PLDs. PLDs are included in this chapter because they are really special-purpose memory devices that are used to store Boolean logic and also because their structure is not dissimilar to that of the traditional PROM chip - particularly because it relies upon the "blowing of fuses" in order to ascribe a particular logic. There are many different types of programmable logic devices and these include the following:

- Programmable Array Logic (PAL)
- Programmable Logic Array (PLA)
- Gate Array.

The gate array is a chip in which logic is partially embedded by the semiconductor manufacturer, then tailored by an end-user and subsequently completed by the semiconductor manufacturer. Such devices are clearly designed for high-volume applications and so in this chapter we will look at the PAL and PLA devices, which can be tailored in low volumes by developers. As we shall see later, the basic concepts of PAL and PLA are similar, but for practical reasons, the PAL device is the more prolific of the two.

In order to understand the functionality of the PAL and PLA devices and the difference between them, one needs to understand the special Boolean symbols used to represent elements in these programmable devices. The special symbols are used in place of traditional logic symbols because they enable structures to be drawn in a straightforward manner. The conventional and PLD representations for an AND gate are shown in Figure 5.5. All three diagrams in Figure 5.5 represent the same AND function in different ways. The representation of Figure 5.5 (c) shows programmable points in the circuit, which are actually created by small fuses that can be blown by applying a programming voltage. As a general rule, if the input variables are drawn with either a dot "•" (representing a fixed connection) or a cross "×" or "X" (representing an intact fuse), then there is a connection between the input variable and the gate - otherwise there is no connection (representing a blown fuse).



Figure 5.5 - Representations for AND gates: (a) Traditional Representation (b) PLD Equivalent Representation with Fixed Connections (•) (c) PLD Equivalent Representation with Programmable Connections (×)

If one can come to terms with the representation of Figure 5.5, then it is not difficult to understand the functionality of either the PAL device or the PLA device, which are shown in Figures 5.6 and 5.7 respectively.

The PAL device is composed of a programmable array of AND gates and a fixed array of OR gates. The size of the array and the number of inputs and outputs depends upon the specific device. In Figure 5.6, the hypothetical device is composed of four inputs and four outputs. By selectively blowing fuses in the AND array, we can create a required logic in the form of a sum of products expression.

The PLA device is composed of a programmable array of AND gates and a programmable array of OR gates, thereby offering maximum programming flexibility. In the PLA device, fuses can be blown on the AND array and on the OR array in order to achieve a required logic function.



Figure 5.6 - Programmable Array Logic (PAL) Structure

The PLA device offers more flexibility than is generally required for most applications and so is used less frequently than the PAL. However, the common feature of both devices is that they enable a range of logic functions to be implemented on a single chip. In both cases, the logic has to be converted into an AND/OR form before implementation. Another point of interest is that the PAL and PLA devices are similar in concept to the PROM device, which is actually composed of a fixed AND array and a programmable OR array. This structural similarity between programmable logic storage devices and programmable data storage devices is another reason for combining them in this chapter.



Figure 5.7 - Programmable Logic Array (PLA) Structure

PAL and PLA devices are programmed in much the same way as PROM devices, with a low-cost PAL programmer that is connected to a personal computer workstation. Developers normally purchase special software that enables them to generate the required logic in the PAL or PLA and then simulate its operation. The required logic is then burnt into the PAL/PLA by applying suitably high voltage levels to appropriate pins on the device. The programming process involves fuse-blowing and is therefore irreversible, so the circuit simulation software used in the design of a PAL/PLA is important if wastage is to be minimised. Consider the application of a PAL in the following exercise, in order to cement your understanding of the PAL design process.


Design Problem 6:

In Design Problem 1 (in Chapter 4) you were asked to determine the logic required to control an incubation chamber. Using the Sum of Products expressions initially derived from the truth table in that problem, implement the logic using the hypothetical PAL device of Figure 5.6.

Solution to Design Problem 6:

From Design Problem 1, we know that the original Sum of Products expressions (unsimplified) are as follows:
H = T2.T1.T0 + T2.T1.T0 + T2.T1.T0
F = T2.T1.T0 + T2.T1.T0 + T2.T1.T0 + T2.T1.T0

Although these were greatly simplified by Karnaugh Mapping (in Design Problem 4), we will take these raw expressions and apply them to the PAL to show how such logic can be implemented. The result is shown in Figure 5.8. Note how even the raw Sum of Products expression originally determined in Design Problem 1 can still be transferred to a single PAL chip. Although one would normally use techniques such as Karnaugh Mapping to simplify expressions before committing them to a PAL implementation, this problem demonstrates that as long as the logic fits into a given PAL chip, the inefficiency of the expressions does not affect the cost of implementation. However, simplification of Boolean expressions is important because it minimises the complexity of the implementation and hence the probability of generating an erroneous design. Moreover, it is important, as a general engineering rule, to always begin with the best possible design approach, rather than allowing technology to cover up inefficiencies that should be removed in the first instance by the designer. Realistic PAL chips can support 16 or more inputs and so it is possible to implement relatively sophisticated Boolean logic on a single device. Moreover, because PAL circuit implementations are produced on only two levels of logic, propagation delays in signals are minimised.



Figure 5.8 - PAL Solution to Incubator Design Problem 1 from Chapter 4
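Although the PAL is of course a hardware device, the structure of Figure 5.6 can be mimicked in software, and doing so is a useful way of cementing the idea of a programmable AND array feeding a fixed OR array. In the C sketch below (purely illustrative - this is not how real PAL development tools work), each product term is described by two hypothetical bit-masks: one recording which inputs are still connected (intact fuses) and one recording which of those connections use the complemented input. Blowing a fuse corresponds to clearing a bit in the first mask.

#include <stdio.h>

#define NUM_INPUTS 4                     /* four inputs, as in Figure 5.6     */
#define NUM_TERMS  4                     /* product terms feeding one OR gate */

struct product_term
{
    unsigned char use;                   /* bit i = 1: input i connected (fuse intact) */
    unsigned char complement;            /* bit i = 1: the complemented input is used  */
};

/* One output of the "PAL": the OR of all the programmed product terms. */
static int pal_output(const struct product_term *term, unsigned char inputs)
{
    int t, i;

    for (t = 0; t < NUM_TERMS; t++)
    {
        int product = 1;

        if (term[t].use == 0)
            continue;                    /* fully disconnected terms are treated as unused here */

        for (i = 0; i < NUM_INPUTS; i++)
        {
            if (term[t].use & (1 << i))
            {
                int bit = (inputs >> i) & 1;
                if (term[t].complement & (1 << i))
                    bit = !bit;
                product = product && bit;
            }
        }
        if (product)
            return 1;
    }
    return 0;
}

int main(void)
{
    /* Hypothetical "fuse pattern" implementing Z = A.B + C.(NOT D),
       with the inputs packed into one byte in the order D C B A.     */
    struct product_term z[NUM_TERMS] = {
        { 0x03, 0x00 },                  /* A AND B       */
        { 0x0C, 0x08 },                  /* C AND (NOT D) */
        { 0x00, 0x00 },                  /* unused        */
        { 0x00, 0x00 }                   /* unused        */
    };
    unsigned char in;

    for (in = 0; in < 16; in++)
        printf("D C B A = %d %d %d %d   Z = %d\n",
               (in >> 3) & 1, (in >> 2) & 1, (in >> 1) & 1, in & 1,
               pal_output(z, in));
    return 0;
}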


Chapter 6
State Machines and Microprocessor Systems

A Summary...
This chapter brings together the digital elements introduced in Chapter 4 and Chapter 5 to show how a microprocessor based computer system can be created. The chapter begins with a more detailed examination of the state machine concept introduced in Chapter 4 and moves on to the microprocessor and its interaction with other elements via memory mapping techniques. The elements are then brought together to show how a computer system is formed. This chapter also covers interrupt programming and polling techniques, multi-tasking and paging.

[Chapter opening diagram: the computer is linked to an external system through digital-to-analog and analog-to-digital conversion, scaling or amplification, isolation, protection circuits and energy conversion stages, with external voltage supplies on both the computer and external system sides.]


6.1 State Machines


A state machine is a digital circuit composed of "n" flip-flops that are interconnected to one another and to other inputs, with Boolean logic, in order to generate 2^n internal states. The movement from one state to another is determined by a combination of the Boolean logic interconnecting the flip-flops and the inputs to the state machine. The internal states are then decoded with additional Boolean logic to provide useful outputs at required time intervals. A counter is therefore a very simple state machine.

In Chapter 5, we examined a range of memory devices, some of which were based upon the use of flip-flop registers. In essence therefore, a state-machine can also be envisaged as a combination of Boolean logic, together with memory storage that holds the current "state" of the system. The overall concept is shown in Figure 6.1.


Figure 6.1 - The State Machine Concept

During the 1970s, state machines were extremely important to system designers because they enabled simple digital control systems to be implemented at a relatively low cost. The incubator control system highlighted in Design Problem 5 (Chapter 4) is a good example of a control system that is well suited to the state machine concept. In more recent times however, the need to design special state machines for specific control problems has been greatly reduced by the extremely low cost of microprocessors which provide a standard hardware solution that can be tailored with software.


State machines are now most commonly used where microprocessor solutions are too expensive (ie: on very-high-volume, very-low-cost systems) or where it is necessary to perform very specific functions directly in hardware at a very high speed. The state machine is also of importance to us because it is really the basis of the modern microprocessor. In other words, a microprocessor is a generalised state machine that can be programmed to perform a specific sequence of functions. This is analogous to the PAL which is a generalised Boolean logic device that can be programmed to produce a required logic. The design of state machines can become rather complex when a large number of internal states need to be produced and it is fair to say that most modern designers would prefer to apply microprocessors or Digital Signal Processors rather than create a specific hardware design that requires substantial "debugging". However, a number of different techniques have been developed to design state machines in a more systematic manner than the ad hoc approach we chose in Design Problem 5. These include the Algorithmic State Machine (ASM) approach developed by the Hewlett Packard corporation and the Mnemonic Documented State (MDS) approach. The ASM approach is relatively easy to understand because it is carried out with a flow-chart like technique. A typical ASM chart is shown in Figure 6.2.


Figure 6.2 - Typical ASM Chart


The ASM chart contains a few elements that help systematise the design of sequential digital circuits:

(i) Rectangular Boxes: These represent system states. The labels on the left of the boxes are the names given to those states. The labels inside the boxes represent outputs that have to be set high or low whenever that particular state is reached. For example, in Figure 6.2, the first rectangular box indicates that the variable "Light" needs to be set to zero (because the variable has a bar over the top of it). The last box in the ASM chart indicates that the variable "Light" has to be set to one. The labels on the top right hand corner of each box correspond to flip-flop outputs (ie: the current state) that can be decoded to achieve required outputs defined inside the box. The state labels can only be entered after the chart is completed because they cannot be determined until the total number of states is known.

(ii) Decision Diamonds: As in most flow charts, the diamonds control the flow of the chart. The label inside the diamond refers to a system input whose value determines the future flow from one state to another. For example, in Figure 6.2, the first diamond causes branching depending upon the value of input variable "Cold". The second diamond causes branching depending upon the value of input variable "Hot". Decision making is assumed to take no time.

(iii) Rounded Boxes: These do not represent states, but rather outputs that have to be set high or low when a particular condition is reached. For example, the first rounded box in Figure 6.2 indicates that the output variable "Heater" should be set to one when the state A (00) is reached and the value of input variable "Cold" is equal to one.

The ASM chart allows a designer to systematically map out a sequence of events that is to occur before commencing the logic design phase. This helps minimise the complexity of the final product. For example, the ASM chart shown in Figure 6.2 shows that four states exist (A, B, C, D). This means that the system can be implemented using only two JK flip-flops, providing two output state variables (Qx and Qy, say). The numbers associated with each state can be arbitrarily assigned, but it makes sense to try and make them as close as possible to the required outputs to minimise decoding. In Figure 6.2, the states have been arbitrarily assigned a "Gray-Code" count sequence but any binary sequence would also have provided a solution. The current states and next states are then written down into a truth table as shown in Table 6.1. In situations where the input variables influence the transition from one state to another, then these also need to be included in the truth table.


State Name     Present State              Next State
               Qx(N)      Qy(N)           Qx(N+1)      Qy(N+1)

A              0          0               0            1
B              0          1               1            0
C              1          1               1            0
D              1          0               0            0

Table 6.1 - Truth Table for States Derived from ASM Chart for Figure 6.2

The relationship between the flip-flops inside the state machine can be determined by using the general successive-state dependence relationship for JK flip-flops. Given that the next state of each flip-flop is governed by its current inputs and current state, such that:

Q(N+1) = J.Q(N)' + K'.Q(N)

(where ' denotes the logical complement), we can determine the input logic for each flip-flop that will provide the appropriate next states for a problem. The design of an algorithmic state machine for any given problem generally follows a number of steps:

- Identification of digital inputs and outputs
- Identification of timing relationship between inputs and outputs (timing diagram)
- Construction of ASM chart
- Identification of total number of states (hence total number of flip-flops required)
- Allocation of binary sequences corresponding to "states" so that output logic can be minimised
- Construction of truth tables with present and next states
- Design of combinational logic to link flip-flops and inputs and outputs.

There is a considerable amount of detail that we could enter into on the subject of systematic design of state machines. However, since the task of interfacing computers to mechatronic systems is generally related to low volume activities, the amount of engineering design work involved in specialised (ie: "hardware-programmable") state machines generally precludes their use in favour of standard hardware solutions. These solutions are most commonly based upon low-cost microprocessors and Digital Signal Processors and hence the emphasis of this book is on solutions based upon these "software-programmable" state machines.
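To illustrate what a "software-programmable" state machine looks like in practice, the following C sketch mirrors the structure of Figure 6.1: a variable holds the current state (the memory element), a switch statement plays the role of the next-state logic and a separate function decodes the outputs. The states, transitions and outputs are loosely based on the flavour of the incubator example but are hypothetical - they are not intended to reproduce the exact chart of Figure 6.2.

#include <stdio.h>

enum state { IDLE, HEATING, COOLING };   /* hypothetical internal states */

struct outputs
{
    int heater;
    int fan;
    int light;
};

/* Next-state logic: the software equivalent of the combinational logic block
   that feeds the state memory in Figure 6.1.                                  */
static enum state next_state(enum state current, int cold, int hot)
{
    switch (current)
    {
    case IDLE:
        if (cold) return HEATING;
        if (hot)  return COOLING;
        return IDLE;
    case HEATING:
        return cold ? HEATING : IDLE;
    case COOLING:
        return hot ? COOLING : IDLE;
    }
    return IDLE;
}

/* Output decoding: here the outputs depend only upon the current state. */
static struct outputs decode(enum state s)
{
    struct outputs o = { 0, 0, 0 };

    if (s == HEATING) { o.heater = 1; o.light = 1; }
    if (s == COOLING) { o.fan = 1;    o.light = 1; }
    return o;
}

int main(void)
{
    int cold[6] = { 1, 1, 0, 0, 0, 0 };  /* a made-up sequence of input samples */
    int hot[6]  = { 0, 0, 0, 1, 1, 0 };
    enum state s = IDLE;
    struct outputs o;
    int i;

    for (i = 0; i < 6; i++)
    {
        s = next_state(s, cold[i], hot[i]);
        o = decode(s);
        printf("step %d: state %d  heater %d  fan %d  light %d\n",
               i, (int) s, o.heater, o.fan, o.light);
    }
    return 0;
}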


6.2 Microprocessor System Fundamentals


In order to maintain an accurate perspective on the capabilities of the computer, a microprocessor, or Central Processing Unit (CPU), should be viewed as a general-purpose, software-programmable state machine. This device can be coupled to a few basic elements, such as memory chips and other interfacing devices, to create simple control systems. However, we can also combine a microprocessor with other devices such as memory chips, disk drives, graphics controller cards, screens, keyboards, etc. to form another standard package that can be used for both analysis and control. We call that device a computer.

The term microprocessor can be somewhat misleading. Although it is intended to convey the size of the silicon onto which it is embedded, many still equate the term "micro" with low performance and this is certainly no longer the case. In general, we refer to a microprocessor as being a single chip that is used as the CPU of a computer or computer-based system. Many computers have CPUs that are composed of a number of chips. Those chips, working together, have the same overall functionality as a single microprocessor. Historically, larger computer systems used multiple-chip CPUs and smaller computers used single-chip CPUs (microprocessors). However, there is currently no simple way of telling whether a multiple-chip CPU is more or less powerful than a single-chip CPU (microprocessor). There are many factors that influence performance and the historical demarcation is no longer relevant. In this text, therefore, when we talk of microprocessors, we are really talking about CPUs, since the basic principles apply to both single and multiple chip systems.

In discussing microprocessors, one inevitably comes across a "sister" product known as a Digital Signal Processor or DSP. There are some key differences between the architecture of a microprocessor and that of a DSP. Microprocessors have traditionally been developed on the so-called "Von Neumann" architecture, whereas DSPs have been developed along the so-called "Harvard" architecture. The difference is that microprocessor based program execution depends upon program instructions and data being stored together, whereas DSP program execution depends upon isolation of program instructions and data. We shall see the ramifications of this a little later when we examine microprocessor program execution. However, in simple terms, the Harvard architecture enables a DSP to carry out common control functions (digital filtering, Fast Fourier Transforms, etc.) for a lower cost per given speed than the Von Neumann architecture. This performance advantage is offset by the relatively low volumes in which DSPs are produced. DSPs are therefore used for specialised control functions, involving significant amounts of digital processing, whereas microprocessors are used for both control and general computing.


In this text, we will not be entering into a discussion of DSP architecture. This is not because the issue is unimportant, but rather because it is a difficult subject to approach, given the wide variation between DSPs and the relatively low volumes that each has in the market place (relative to common microprocessors). However, a sound understanding of microprocessor architecture will enable you to readily come to terms with DSP architecture from manufacturers' data books, should you need to examine it in more detail. In terms of industrial control applications, we can make the assertion that microprocessors are still the basic building blocks for nearly all of the "intelligent" control systems found in a modern manufacturing organisation. Smaller control systems have a single microprocessor chip acting as the entire Central Processing Unit (CPU). This is typical of Personal Computers, Workstations and small industrial controllers. Larger computer-based systems use microprocessors as building blocks for entire boards, which may themselves act as CPUs or closed loop controllers. In more recent times, Digital Signal Processors have also made some small inroads into the industrial control arena, most notably in areas such as servo motor control. In terms of microprocessor architecture, we have already stated that the device can be envisaged as a machine that generates a number of internal voltage levels which together define the internal "state" of that machine. The internal state of the microprocessor changes at a rate determined by an external clock chip. The internal "state" voltage levels are decoded (by appropriate circuits) in order to:

- Move data into or out of the microprocessor
- Manipulate data within the microprocessor (add, subtract, etc.)
- Move data from one internal storage location (register) to another.

A microprocessor has a number of different input and output lines. These include:

- The address bus, which is a collection of conductors that carry binary data and are generally considered as outputs from the microprocessor
- The data bus, which is a collection of conductors that also carry binary data (in two directions) and act as input/output lines for the microprocessor
- The interrupt pin/s that act as special purpose inputs to the microprocessor.

Each cycle (tick) of the clock causes the microprocessor to jump from one internal state to another. As with other state machines, the "next state" of the microprocessor is determined by a logical combination of its current internal state, together with the condition of all the various input lines connected to it. This is shown schematically in Figure 6.3.


The microprocessor is a device composed of all the other elements that we have examined in Chapters 4 and 5 and the beginning of this chapter. These elements include:

- State machines
- Counters
- Registers
- Numerical circuits (Addition, Subtraction, etc.)
- Boolean logic circuits (OR, AND, etc.)
- Combinational Boolean logic.

By utilising these elements, at each cycle of the system clock, a microprocessor can execute a very simple operation - read data in; write data out; store data in an internal register; add data contained in internal registers; compare data in internal registers and so on. So, in essence, despite what one may intuitively believe, the microprocessor is a very crude electronic machine. General computing data and program instructions can be input into the microprocessor via the data bus and output from the microprocessor via either the data bus or the address bus. The number of bits that a microprocessor can handle as data or instructions defines the "bit-size" of the microprocessor. Typical sizes are 8, 16, 32 and 64 bits. Since both data and program instructions appear at the data bus of the microprocessor as a collection of ones and zeros, the microprocessor cannot differentiate between them in any way other than the sequence in which they arrive.


Figure 6.3 - Schematic of Building Blocks and Data Flow Within a Microprocessor Chip


When we consider the microprocessor as a sophisticated state machine, then we can gain an accurate definition of a program instruction. A program instruction is nothing more than a specific bit pattern of ones and zeros that appears at the data port of the microprocessor. The state machine within the microprocessor has been designed to move from one state to another based upon those specific bit patterns.

In section 6.1, when we looked at the design of state machines, we referred to them as being "hardware programmable" because the hardware interconnection defines the functionality of the machine. The same is true of a microprocessor. The hardware design of the state machine within the microprocessor defines the course of events that will take place whenever a specific bit pattern arrives at the data port of the microprocessor. The hardware "programs" that are developed by the semiconductor manufacturers are known as micro-code. Micro-code is the lowest possible form of programming in microprocessors and it is the micro-code that defines the characteristics of the processor that end-users would recognise. Micro-code is analogous to PAL programming in that it instils the logic that defines the operational sequence for a state machine and defines how it will move from one state to another, based upon bit patterns entering via the data bus.

The data bus bit-patterns to which a microprocessor will respond are referred to as the "machine code" for the microprocessor or the "instruction set" of the microprocessor. The instruction set for a microprocessor is generally rudimentary and far less sophisticated than one would expect. Table 6.2 shows part of an instruction set for a hypothetical microprocessor. The left column of Table 6.2 lists the bit patterns which the microprocessor has been designed to recognise as a result of micro-code programming. The middle column explains the actions that follow when the given bit patterns are entered into the data port of a microprocessor. The right hand column contains some abbreviated words or acronyms which are referred to as "mnemonics" (pronounced "nemonics"). These are short-hand reminders of what the bit patterns represent.

If we wanted our hypothetical microprocessor to add the numbers 3 and 7 together, then the following machine code might be used to generate a program:

00000001   (LDA)
00000011   (3)
00000010   (LDB)
00000111   (7)
00000011   (ADD)

There are several questions that need to be examined at this point. Firstly, where does the program reside before it is executed by the microprocessor? Secondly, how does the microprocessor fetch these instructions? Thirdly, how can a microprocessor differentiate between the bit pattern corresponding to an instruction and that corresponding to data (eg: ADD and the number 3 clearly have the same bit patterns)?


Binary Machine Code    Instruction                                                       Abbreviation (Mnemonic)

0000 0001              Load internal microprocessor register "A" with the number
                       represented by the next byte appearing on the data bus            LDA

0000 0010              Load internal microprocessor register "B" with the number
                       represented by the next byte appearing on the data bus            LDB

0000 0011              Add the contents of register "A" to the contents of register
                       "B" and store the sum in register "A"                             ADD

0000 0100              Take each bit in register "B" and "OR" it with the
                       corresponding bit which comes from the next byte appearing
                       on the data bus                                                   ORB

0000 0101              If the contents of register "A" are not zero then jump to the
                       program instruction in memory specified by the next two bytes
                       appearing on the data bus                                         JNZA

Table 6.2 - Part of an Instruction Set for a Hypothetical Microprocessor

The answer to the third question is the easiest to tackle and it has already been touched upon. The microprocessor cannot differentiate between data and instructions by any means other than the order in which they arrive. Each instruction within the microprocessor's set must be accompanied by an "argument" or "qualifier" of a predefined length. For example, in the hypothetical processor, the LDA command must be qualified by an 8-bit number. The ADD command, on the other hand, has a qualifier of 0 bits in length (ie: no qualifier). Therefore, the microprocessor will treat the second line of the above program as data (3) for the LDA instruction and not as the ADD command.

The first and second questions are somewhat interrelated. In order to understand the answer to these questions and thereby, the mechanism by which program execution in a microprocessor based system occurs, we need to understand the concepts of "addressing" and memory mapping. Both of these concepts are fundamental to understanding microprocessor operation and computer interfacing and both will be covered in sections 6.3 and 6.4.
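Before moving on, the qualifier mechanism can be made concrete with a small sketch. The C fragment below models how a processor built around the hypothetical instruction set of Table 6.2 would consume a byte stream: each opcode implies how many qualifier bytes follow it, so the bit pattern 0000 0011 is treated as data when it follows LDA but as the ADD instruction when it arrives in an instruction position. The qualifier lengths are taken from the descriptions in Table 6.2; the function and variable names are simply invented for the illustration.

#include <stdio.h>

/* Opcodes of the hypothetical processor (from Table 6.2). */
#define OP_LDA  0x01
#define OP_LDB  0x02
#define OP_ADD  0x03
#define OP_ORB  0x04
#define OP_JNZA 0x05

/* Number of qualifier bytes that follow each opcode. */
static int qualifier_length(unsigned char opcode)
{
    switch (opcode)
    {
    case OP_LDA:  return 1;
    case OP_LDB:  return 1;
    case OP_ADD:  return 0;
    case OP_ORB:  return 1;
    case OP_JNZA: return 2;              /* two address bytes */
    default:      return 0;
    }
}

int main(void)
{
    /* The addition program exactly as it appears, byte by byte, on the data bus. */
    unsigned char stream[5] = { 0x01, 0x03, 0x02, 0x07, 0x03 };
    int i = 0;

    while (i < 5)
    {
        unsigned char opcode = stream[i++];
        int len = qualifier_length(opcode);
        int k;

        printf("instruction 0x%02X with %d qualifier byte(s):", opcode, len);
        for (k = 0; k < len && i < 5; k++)
            printf(" 0x%02X", stream[i++]);
        printf("\n");
    }
    return 0;
}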


In order to move on to the concepts of addressing and memory mapping, we need to review what we currently understand about the microprocessor:

(i) It is a device based upon a state machine
(ii) It contains a number of registers for temporary storage of data
(iii) It contains numerical and logical circuits that can perform simple data manipulations between registers (eg: ADD, OR, etc.)
(iv) It contains one input port for data (the data bus) and two output ports for data (the data bus and address bus)
(v) The state machine has been designed (through micro-coding) to respond to particular bit patterns entering the data port. The specific bit patterns are the instruction set for the microprocessor
(vi) The microprocessor moves from state to state at a speed determined by a "system clock" which is generally provided by another external chip
(vii) It takes one or more clock cycles to execute each instruction. The number of clock cycles depends upon the architecture of the microprocessor (and its micro-coding) and the complexity of the instructions.

Points (vi) and (vii) are most important, since they are directly related to the performance of the microprocessor. Many factors influence the performance of a microprocessor. These include:

- The number of bits that a microprocessor can handle as data
- The maximum speed at which the microprocessor moves from one instruction to another
- The efficiency with which the microprocessor handles instructions.

We shall not enter into a debate on which factors are the most important to performance, since many experts seem to have difficulty in reaching agreement on this point. However, there are some general trends worth noting:

Increasing the Bit-Size of the Microprocessor: This is one of the two most obvious solutions to increasing performance - simply make the processor handle larger numbers as single entities. Between the 1970s and 1990s, general-purpose processors increased in size from 8 bits up to 64 bits. In the early 1980s, the concept of "bit-slice" logic was heralded as a useful technique, but has failed to attract widespread support. In bit-slice technology, the processor is divided up into its basic elements, each of which can handle, say, 1 bit. A designer then puts together as many of each element as required in parallel in order to make an "n-bit" machine. One of the problems with increasing the bit-size of the processor is that the number of data bus lines needs to be increased and this creates system design and compatibility problems.


Increasing the Clock Speed of the Microprocessor: This is the other of the two most obvious solutions to increasing performance. Most processors are rated to operate at a clock speed which is not deleterious to the semiconductor structure. Digital switching, like most other processes, generates heat. The faster that switching occurs, the more heat is generated and hence the higher the likelihood of failure. Normally, a substantial increase in clock speed needs to be accompanied by a redesign of the original processor and this generally leads designers to other performance enhancements as described below.

Complex Instruction Set Computer (CISC) Design: In the 1970s, it was thought that by providing a complex range of machine-code instructions, programmers would need fewer instructions to execute a program and hence performance could be improved. This assumption has turned out to be incorrect for several reasons. Firstly, CISC processors require more clock cycles to execute instructions because they are more complex and more qualifiers are involved. Secondly, because of variations from one processor to another, most programmers tended to only use the instructions common to a range of processors. Hence, CISC systems were not utilised in the way originally intended. CISC fell out of favour with processor designers in the early 1980s.

Reduced Instruction Set Computer (RISC) Design: Since the early 1980s, the trend has been to generate processors with a basic set of instructions that can be executed in a minimal number of clock cycles. The ideal objective is to average one clock cycle per instruction. The RISC architecture has been responsible for substantial performance gains, particularly since few software developers used complex instructions anyway.

Pipelining and Caching: Some instructions are more complex than others and take several clock cycles to execute. One way of speeding up the process is to recognise that instruction processing is a multi-stage operation. Therefore, rather than wait for one instruction to be fully processed before executing the next, it is preferable to start processing the next instruction as soon as the first instruction has passed through the first stage. This is called "pipelining". As an example, consider an instruction that passes through ten stages, each requiring one clock cycle. To process two identical instructions separately would require 20 clock cycles - however, with pipelining, as few as 11 clock cycles may achieve the same result. On average, processing time can be greatly reduced.
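As a rough rule of thumb (and assuming that the pipeline can be kept full, with no instruction having to wait on the result of another), a pipeline with S stages processing N similar instructions requires of the order of S + (N - 1) clock cycles, compared with S x N cycles without pipelining. For the ten-stage example above, two instructions therefore require 10 + 1 = 11 cycles rather than 20.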


As a consequence of the higher processing speed, it is necessary to be able to store program instructions and data in locations that can be accessed very quickly - otherwise the benefits of pipelining are lost. Some microprocessors therefore use fast, on-chip memory, referred to as "cache memory" to speed up processing. The above factors are only some of the many techniques that have been used to extract more performance from a microprocessor and new techniques are evolving all the time. However, as we shall see in subsequent sections, improvements in processor technology need to be coupled to improvements in memory and address/data bus technology in order to prevent bottle-necking elsewhere.


6.3 Microprocessor I/O - Data and Address Bus Structures


Consider a hypothetical microprocessor chip, mounted in a plastic package. The basic external elements, common to most devices and available to designers are shown in Figure 6.4.


Figure 6.4 - Hypothetical 8-Bit Microprocessor Showing Important Pin-Out Features

The processor shown in Figure 6.4 is somewhat dated in a number of ways. Firstly, it appears to be an 8-bit processor (this is a somewhat risky assumption we often make based upon the size of the data bus). Secondly, the processor has separate data bus (port) pins and address bus (port) pins. Most modern processors combine these two input/output (I/O) ports. This minimises the pin-out of the chip, which is important when data and address bus sizes reach 64 bits in size. It is also important in terms of the parallel conducting lines (buses) that have to connect to the processor from the base-board (mother-board) on which it and the other devices are mounted. Data and address buses take up a considerable amount of space on printed circuit boards and so combining the two together makes a lot of sense. In a system with dual-purpose bus structures, the bus is switched (multiplexed) from one function to another when required.


The final point that highlights the age of our hypothetical processor is the package. Most modern processors (because of their bit-size and hence data/address bus sizes) have moved away from the so-called "dual-in-line" package because they have so many pins. The more common configuration is in a so-called square "grid-array" package which can support a pin-out of over 100. A typical example is shown in Figure 6.5.


Figure 6.5 - 68 Pin Grid-Array, Typical of Packaging for Devices with Large Pin-Outs

The reason we will be working with the hypothetical, older style device in all our subsequent discussions is because it highlights the basic principles of interaction between microprocessors and other devices, without clouding the issues with the implementation technicalities of more complex devices. The hypothetical processor is effectively a "common base" for discussions because it has the sort of attributes that are common to all modern processors. These include external characteristics such as:

- Power supply lines
- Data and address lines (bus or buses)
- Clock input
- Interrupt line or lines
- Read/Write line
- Reset line.

We will examine the role of these lines as we progress through this chapter and we can begin by noting the power supply lines which are of fundamental importance since (as with all other digital circuits) nothing will function unless power is supplied to provide the energy for switching transistors.


In section 6.2, we saw a simple addition program that we could execute on our hypothetical processor. Theoretically, we could get this program to execute by simply applying the voltages corresponding to the relevant machine codes and data directly to the data port of the processor and by pulsing the clock pin high and low to simulate a system clock. However, more practically, a microprocessor needs to work in conjunction with memory chips that are used to feed instructions to the processor and are used as medium-term storage facilities for data. Most processors have some onboard storage facilities, but these are designed for short term storage of data (scratchpads) during processing and are not intended for general-purpose program storage. Onboard facilities include a number of registers and, in some cases, a small amount of memory for small programs (normally only provided on special microprocessors so that low-cost controllers can be implemented without additional memory chips). The general flow of information from memory chips to the microprocessor is illustrated in Figure 6.6.


Figure 6.6 - Information Flow in a Microprocessor Based System


The basic process of program execution is relatively straightforward to understand. The sequence is as follows:

(i) The program is normally stored in one or more memory chips in sequential order
(ii) The microprocessor is given sufficient information by the system designer to locate the first instruction of the program in memory
(iii) The microprocessor selectively activates memory locations using its address bus outputs, which are set aside for this purpose
(iv) Memory chips place the required information (program instructions and data) onto the data bus where it is fed into the microprocessor
(v) The microprocessor has an internal register that always stores the location of the current memory location that has been accessed, so that it can then move on to the next location.

In the case of the simple addition program introduced in section 6.2, data only ever needed to flow from the memory devices to the microprocessor. However, in general, data needs to flow in both directions because information resulting from a program's execution may need to be stored in memory. Note also the role of the address bus - it is a special purpose bus that is not used for general data transfer but only to activate memory locations so that they can feed their data onto (or from) the data bus. Any data flow to or from memory chips then has to occur via the data bus. The program execution sequence implies that each memory location needs to be uniquely accessed by the microprocessor. This is normally achieved through memory mapping techniques that are described in section 6.4. These techniques ensure that, with appropriate decoding (as shown in Figure 6.6) each memory chip is given a unique location with respect to the microprocessor.


6.4 Memory Mapping


In order to help us understand the role of the address bus and the concept of memory mapping, we take another look at the memory chips themselves. Figure 6.7 schematically shows a hypothetical memory chip, which has storage for 16 words (each 8 bits in length) of information. Actual memory chips typically store many hundreds of kilobytes and have a correspondingly larger number of memory access pins. You should observe however, that in our hypothetical chip of Figure 6.7, the bit patterns we have shown inside are the same as our addition program from discussions in section 6.2.


Figure 6.7 - Schematic of a Hypothetical 16-Word Memory Chip

The memory chip is composed of two functional sections - the Boolean decoding logic section and the actual storage section. The decoding section is responsible for controlling the transfer of data to/from the computer data bus from/to a storage address in the chip. This access control is based upon the status of the memory address pins (MA3..MA0), the Read/Write pin and the Chip Enable (or Chip Select) pin. The Chip Enable (or Chip Select) pin is the access device for the chip. In order to use the chip, the Chip Enable pin must be set either high or low (depending on the manufacturer's design). The Read/Write pin is used on RAM chips to define whether data should flow into or out of the chip. Depending upon the high or low state of this pin, data will be written to or read from the appropriate address location in the chip. As an example, if the microprocessor in our hypothetical system is to force the memory chip to place its fourth row (0011) of storage (ie: 00000111) onto the data bus, then it would have to set:


MA3 = 0
MA2 = 0
MA1 = 1
MA0 = 1
Read/Write = 1
Chip Enable = 1

thus accessing the fourth row in the memory chip.

The microprocessor controls the memory chip address pins and chip enable (select) pin by setting appropriate lines on the address bus high or low. In other words, a selection of address lines from the microprocessor is connected to pins on the memory chips. These lines are selectively set or reset by the microprocessor in order to make the memory chips respond in the desired manner. The Read/Write line of the memory chip is tied to a corresponding driver line on the microprocessor.

The dilemma that immediately arises is that if there are many identical memory chips, connected to the address bus of a microprocessor system in an identical manner, then all the chips will respond simultaneously to each request from the microprocessor. This is clearly ridiculous, since it would imply that no matter how many memory chips we have, we would only effectively have the storage capacity of one chip. Since it would be equally ridiculous to have scores of specially designed memory chips, the problem is overcome through the use of "memory mapping" techniques.

Each memory chip in a microprocessor system must have a unique connection to the microprocessor address bus, otherwise a conflict occurs. This unique addressing is achieved through address bus decoding logic, as shown in Figure 6.6. In a simple system, this logic can be implemented through Boolean logic gates. Let us now assume that two of the memory chips in the system shown in Figure 6.6 both have a structure identical to the hypothetical one shown in Figure 6.7. The following Boolean logic could be used to decode the address lines for chip 1:

MA0 = A0
MA1 = A1
MA2 = A2
MA3 = A3
Chip Enable = X1

where:

X1 = (A4 + A5 + A6 + A7 + A8 + A9 + A10 + A11 + A12 + A13 + A14 + A15)'

(the ' denoting the logical complement, so that X1 is high only when address lines A4 to A15 are all low)


Memory chip number 2 in the system could have the following Boolean logic gates for address bus decoding:

MA0 = A0
MA1 = A1
MA2 = A2
MA3 = A3
Chip Enable = A4 . X2

where:

X2 = (A5 + A6 + A7 + A8 + A9 + A10 + A11 + A12 + A13 + A14 + A15)'


This arrangement implies that memory chip 1 can only be accessed (enabled) when all microprocessor address lines A4 through to A15 are low. Otherwise chip 1 remains locked. Memory chip 2 can only be accessed when address line A4 is high and lines A5 to A15 are low. Otherwise chip 2 remains locked. Table 6.3 shows the effect that this has from the microprocessor's point of view. A number of such chips could be selectively "mapped" so that to the microprocessor, all address locations from 0000 0000 0000 0000 to 1111 1111 1111 1111 can be accessed, with no two chips responding to the same address.

Address Bus Value           Memory Chip Accessed     Memory Chip Internal Address

0000 0000 0000 0000         1                        0000
0000 0000 0000 0001         1                        0001
0000 0000 0000 0010         1                        0010
0000 0000 0000 0011         1                        0011
        .                   .                        .
        .                   .                        .
0000 0000 0000 1111         1                        1111
0000 0000 0001 0000         2                        0000
0000 0000 0001 0001         2                        0001
0000 0000 0001 0010         2                        0010
0000 0000 0001 0011         2                        0011
        .                   .                        .
        .                   .                        .
0000 0000 0001 1111         2                        1111

Table 6.3 - Memory Mapping of Identical Memory Chips in Figure 6.6 to Unique Locations with respect to Microprocessor
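The decoding arrangement summarised in Table 6.3 can also be checked in software before being committed to gates. The short C fragment below is a sketch (the function names are invented for the illustration) that reproduces the two enable conditions: chip 1 responds when address lines A4 to A15 are all low, and chip 2 responds when A4 is high and A5 to A15 are all low.

#include <stdio.h>

/* Chip Enable conditions for the two hypothetical memory chips of Figure 6.6. */
static int chip1_enabled(unsigned int address)
{
    return (address & 0xFFF0) == 0x0000;     /* A4 to A15 all low          */
}

static int chip2_enabled(unsigned int address)
{
    return (address & 0xFFF0) == 0x0010;     /* A4 high, A5 to A15 all low */
}

int main(void)
{
    unsigned int test[5] = { 0x0003, 0x000F, 0x0013, 0x001F, 0x0023 };
    int i;

    for (i = 0; i < 5; i++)
    {
        unsigned int a = test[i];
        printf("address %04X -> chip 1: %d   chip 2: %d   internal address: %X\n",
               a, chip1_enabled(a), chip2_enabled(a), a & 0x000F);
    }
    return 0;
}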


Memory mapping is a vitally important concept in computing because most devices are interfaced to the microprocessor via a range of memory addresses. For example, the parallel to serial conversion chip (UART) in a computer system has a number of internal registers. One of the registers in the chip contains data that tells the device how to perform its function and another register is used for incoming data and yet another for outgoing data. All these registers are mapped, just as if they were normal system memory. The microprocessor communicates with them as though they are normal memory locations, when in reality they are often part of another special-purpose chip or system.

The same memory mapping technique can be used to interface disk-drive controllers and graphics controller cards to a microprocessor within a computer system. A range of memory locations or registers in these devices are mapped into system memory as if they are normal memory chips. The microprocessor moves data to and from them as though they are normal memory locations. The devices that are mapped onto the system use the data supplied by the microprocessor to do their respective tasks. For example, the graphics controller card uses the data provided by the microprocessor to create an image on a screen. This is shown in Figure 6.8.


Figure 6.8 - Using Memory Mapping Techniques to Create a Common Shared Area of Memory (or Registers) to Transfer Data To and From Non-Memory Devices (eg: Graphics Controller Card)
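From the programmer's point of view, a memory-mapped register is simply an address. In the C sketch below, a device register is reached through a pointer to a fixed location. The address 0x8000 and the register layout are entirely hypothetical - on a real system they would come from the designer's memory map, and access might go through an operating system driver rather than a raw pointer.

/* Hypothetical memory-mapped peripheral: a parallel output port whose data
   and control registers occupy two consecutive memory locations at 0x8000.   */
#define PORT_DATA    (*(volatile unsigned char *) 0x8000)
#define PORT_CONTROL (*(volatile unsigned char *) 0x8001)

void port_init(void)
{
    PORT_CONTROL = 0x01;     /* hypothetical "enable outputs" control bit       */
}

void port_write(unsigned char value)
{
    PORT_DATA = value;       /* an ordinary store instruction becomes an output */
}

The volatile qualifier warns the compiler that these locations can change, or have side effects, independently of the program, so that every read and write genuinely appears on the bus.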


Most computer systems are composed of many memory chips and each and every memory chip must ultimately have a unique address. Most computer users would be familiar with the concept of memory in the form of a card that plugs into the address/data bus structure of the computer. Each card may hold a number of memory chips. The manufacturers of the card generate the decoding logic that enables the entire card to be treated as one large "chip" (because each of the memory chips is actually decoded on the card itself). Users then complete the memory mapping function by setting dip-switches on the card to make the entire card map into an appropriate region of the computer's system memory. The dip-switches enable one card to be easily mapped into a number of different memory locations so that they can be used on a range of computers.

Newer computer systems resolve the problem of memory mapping for cards by using very sophisticated address and data bus structures. Each device that plugs into the address and data bus structure has to be equipped with a special interface that is capable of responding to commands issued by the CPU. The memory mapping and configuration process can then occur automatically. Although this may appear, to end-users, to be a very useful solution to the memory mapping problem, it creates considerable problems and costs for those that have to generate cards for such systems.


6.5 Microprocessor Program Execution


Now that we have seen how data is stored and transferred within a microprocessor based system, we can re-examine the addition program from section 6.2, where we had the following code for our hypothetical microprocessor:

00000001   (LDA)
00000011   (3)
00000010   (LDB)
00000111   (7)
00000011   (ADD)

In order to understand how this program will be executed, we need to refer back to the system configuration of Figure 6.6 and the memory map of Table 6.3. We now know that such a program would be stored in contiguous words of memory and therefore the microprocessor would need to fetch the instructions and data on a step by step basis by selectively activating the appropriate address lines on the address bus. Most microprocessors have one or more internal registers that are used to store the memory location of the current step (instruction or qualifying data) in the program. Each time a program instruction or data is fetched from memory and executed within the processor, the internal program counter register is incremented so that it contains the address of the next piece of information in the executing program. The program counter register within the microprocessor is the one that is normally transferred to the output stage (address bus) of the processor in the form of addressing information. The question however, is how does the microprocessor program counter register get primed with the address of the very first instruction in order for the program execution process to begin? Let us assume that our hypothetical processor works in the following way:

- When the reset line of the microprocessor is asserted, the microprocessor's first function is to set a particular address bit pattern (which we will refer to as the "boot" address)
- The microprocessor's second automatic function (after reset) is to input data from the memory device at the predefined (boot) address and to store that data in the program counter register
- The data in the program counter register (obtained from the boot address) then becomes the address of the first instruction of the program.


The operational boot sequence for the hypothetical microprocessor helps us to understand the purpose of the reset pin and also gives us a solution to the programming problem. We can tell the microprocessor the starting address of our program by loading it into the boot address in memory.

Referring now to the memory map of Table 6.3 and assuming that our program is stored in chip number 2 of the hypothetical system in Figure 6.6, beginning from address:

0000 0000 0001 0000

Let us further assume that when the microprocessor "reset" pin is asserted, then the microprocessor will load 16 bits of information, beginning at memory address:

0000 0000 0000 0000

and ending at:

0000 0000 0000 0001

into its program counter register/s. Note that since, in our hypothetical system, each memory chip only stores 8-bit words, and addresses are 16-bit quantities, two consecutive addresses are needed to prime the program counter.

Let us now examine the complete memory map, which contains the addition program and the starting address of the program at the boot address location. This is shown in Table 6.4.

Address Bus Value           Explanation of Memory Contents       Contents of Memory Location

0000 0000 0000 0000         Boot Address (High Order)            0000 0000
0000 0000 0000 0001         Boot Address (Low Order)             0001 0000
0000 0000 0000 0010         Don't Care                           XXXX XXXX
0000 0000 0000 0011         Don't Care                           XXXX XXXX
        .                   .                                    .
        .                   .                                    .
0000 0000 0000 1111         Don't Care                           XXXX XXXX
0000 0000 0001 0000         LDA Instruction                      0000 0001
0000 0000 0001 0001         3 (Qualifier Data)                   0000 0011
0000 0000 0001 0010         LDB Instruction                      0000 0010
0000 0000 0001 0011         7 (Qualifier Data)                   0000 0111
0000 0000 0001 0100         ADD Instruction                      0000 0011
0000 0000 0001 0101         Don't Care                           XXXX XXXX
        .                   .                                    .
        .                   .                                    .
0000 0000 0001 1111         Don't Care                           XXXX XXXX

Table 6.4 - Contents of System Memory (of Figure 6.6) after Program and Boot Address Entry


Table 6.5 helps us to follow the program execution after resetting the microprocessor (when it has loaded the program counter register with the starting address of our program) so that we can clearly see the sequence of events that follows. The first instruction, according to the memory map in Table 6.4, has been loaded into address 0000 0000 0001 0000. The microprocessor uses the address bus and asserts the READ mode on memory chips to force them to place their contents on the data bus. Both instructions and data flow through the data bus into the microprocessor, where they are processed. This is all "time-governed" by the microprocessor system clock input. Figure 6.9 shows the voltage waveforms on the data bus (which enter into the microprocessor's data port) that make up our simple addition program. The following is the logical sequence of events for the execution of the addition program:

(i) Microprocessor sets address bus to 0000 0000 0001 0000 and asserts READ line
(ii) Memory chip 2 sets data bus to 0000 0001 (LDA)
(iii) Microprocessor sets address bus to 0000 0000 0001 0001 and asserts the read line
(iv) Memory chip 2 sets data bus to 0000 0011 (3)
(v) Microprocessor sets address bus to 0000 0000 0001 0010 and asserts the read line
(vi) Memory chip 2 sets data bus to 0000 0010 (LDB)
(vii) Microprocessor sets address bus to 0000 0000 0001 0011 and asserts the read line
(viii) Memory chip 2 sets data bus to 0000 0111 (7)
(ix) Microprocessor sets address bus to 0000 0000 0001 0100 and asserts the read line
(x) Memory chip 2 sets data bus to 0000 0011 (ADD).
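The whole fetch-and-execute cycle just described can also be mimicked with a few lines of C. The sketch below models the hypothetical system of Figure 6.6: a 32-byte "memory" holds the boot address and the addition program laid out as in Table 6.4, and the loop plays the role of the microprocessor, priming its program counter from the boot location and then fetching and executing instructions. It is only an illustration of the mechanism - register widths, the byte order of the boot address and so on are assumptions, not features of any real device.

#include <stdio.h>

#define OP_LDA 0x01
#define OP_LDB 0x02
#define OP_ADD 0x03

int main(void)
{
    /* Memory laid out as in Table 6.4 (only the first 32 locations are modelled). */
    unsigned char memory[32] = { 0 };
    unsigned char reg_a = 0;
    unsigned char reg_b = 0;
    unsigned int pc;
    int running = 1;

    memory[0x00] = 0x00;                 /* boot location: high order byte of start address */
    memory[0x01] = 0x10;                 /* boot location: low order byte of start address  */
    memory[0x10] = OP_LDA;
    memory[0x11] = 3;
    memory[0x12] = OP_LDB;
    memory[0x13] = 7;
    memory[0x14] = OP_ADD;

    /* Reset: prime the program counter from the boot location. */
    pc = ((unsigned int) memory[0x00] << 8) | memory[0x01];

    /* Fetch and execute, one instruction at a time. */
    while (running)
    {
        unsigned char opcode = memory[pc++];     /* fetch via the "address bus" */

        switch (opcode)
        {
        case OP_LDA:
            reg_a = memory[pc++];                /* next byte is qualifier data */
            break;
        case OP_LDB:
            reg_b = memory[pc++];
            break;
        case OP_ADD:
            reg_a = (unsigned char) (reg_a + reg_b);
            running = 0;                         /* the simulation stops here; a real
                                                    program could not simply stop - see
                                                    the discussion in section 6.5       */
            break;
        default:
            running = 0;                         /* meaningless memory contents */
            break;
        }
    }
    printf("register A holds %d\n", reg_a);
    return 0;
}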

The timing diagram of Figure 6.9 shows that information transfer along the data bus is essentially parallel, with all 8 bits issued and arriving simultaneously at their destination. Addressing information is also transferred in this parallel manner. However, it should be noted that the actual mechanism for data flow in realistic systems is complicated by the fact that the design needs to account for differences between microprocessor and peripheral (memory, etc.) chip speeds. This is generally achieved through the insertion of "idle" or "wait" states.



Figure 6.9 - Timing Diagram for Addition Program Execution


The explanation provided so far still leaves a number of important questions unanswered. The most obvious question to ask is what happens at the end of the program execution? The program counter increments to the next address, but the information at that address is meaningless. How then does the program terminate? The simple answer is that the addition program we have written is unacceptable. In principle, low level machine code programs such as the one we have written can never be allowed to just terminate - they must either go on in some form of loop, ad infinitum, or else branch to some other address that contains executable code. If this doesn't happen, then the operation of the entire system becomes unpredictable because the microprocessor attempts to load meaningless information from memory.

Most instruction sets for microprocessors contain "branch instructions" that help us to create machine programs that don't just terminate unpredictably. These instructions cause the microprocessor to load a new address into the program counter register/s. The objective for the programmer is to ensure that a program always ends by setting the program counter register to an address that is known to contain meaningful code. For example, in the addition program, another statement could be added to tell the microprocessor to jump back to the starting instruction address. In the last entry in Table 6.2, we saw one such instruction which is common to most microprocessors - it is referred to as a conditional branch instruction and causes the microprocessor to load a new program counter address based on the outcome of some function or the condition of a register. Most microprocessors provide many such branching possibilities.

When novices look at a professionally designed computer system, such as a personal computer or workstation, they tend to assume that when their software (eg: word-processor, spreadsheet, etc.) is not executing, then the processor is idle. We now know that this is not the case. In fact, the processor is executing a program loop that is waiting for a branching condition to occur based on some input from the user keyboard, mouse, disk-drive, etc.

Another question that has remained unanswered up to this point is the issue of transferring data back into memory locations. So far, we have only seen how information flows from memory to the processor. Needless to say, the reverse is also true, since we often need to store results of calculations back into memory regions. The process is also important when we remember that most peripheral devices are memory mapped (including hard-disk drives, graphics driver cards, etc.). Storing data from the microprocessor into the registers or memory regions of peripheral devices (which appear just like any other memory chips to the microprocessor) is the only means that the microprocessor has of communicating with these devices. So, for example, if the microprocessor wanted to display the result of our addition program on a screen, then the microprocessor would need to send the appropriate output data to the memory mapped address of the graphics controller card which would then display and refresh the number as required.
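To make the branching idea concrete using only the instruction set of Table 6.2, the addition program could, for example, be followed by the conditional branch instruction JNZA. Since register "A" holds the (non-zero) sum after the ADD, the branch would be taken and the program counter reloaded with the address of the first instruction, so that the program loops indefinitely instead of running off into meaningless memory contents. The listing below assumes the program is stored from address 0000 0000 0001 0000 (as in Table 6.4) and that the high order address byte is supplied first - the actual byte ordering is a design detail of the hypothetical processor.

00000001   (LDA)
00000011   (3)
00000010   (LDB)
00000111   (7)
00000011   (ADD)
00000101   (JNZA)
00000000   (high order byte of the branch address 0000 0000 0001 0000)
00010000   (low order byte of the branch address 0000 0000 0001 0000)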


Most microprocessors have, in their basic instruction sets, a number of instructions for transferring data directly to and from memory. Typically, these instructions have mnemonic names such as MOV or GET, etc. Their purpose is to transfer one or more words to/from registers from/to specific memory locations. These instructions are amongst the most important in any CPU instruction set because they really govern information flow within the computer system. In some older computer designs (CISC architecture), a large number of such instructions were provided in order to assist programmers. However, since it was discovered that most programmers tended to adhere to a basic set of instructions, more modern processors (RISC architecture) tend to only provide that basic set of "processor-memory" transfer instructions.

Another issue that has not been given a great deal of prominence in this chapter is that of the complexity of the address/data bus structure in commercial computer systems. If one were to design a simple, one-off microprocessor based control system, then the sorts of design principles that we have discussed are adequate to realise a solution. However, for a general-purpose computer system, the address/data bus problem is much more complex, particularly because of timing problems between various devices and the range of devices that have to be accounted for in variations on a particular design. For this reason, most commercial systems often require an additional bus, composed of special function control lines. This is sometimes referred to as a "control bus" and often special purpose devices, referred to as "bus controllers", are added to coordinate the flow of data and addressing between devices and contention for use of the bus. Therefore, in a commercial system, the microprocessor is given less authority over peripheral devices than we have alluded to in discussions on simple systems.

It is difficult to tackle commercial implementation issues, such as the control bus, in a text book because to do so requires concentration on one specific architecture and that is not the purpose of this book. Moreover, there is a broad range of techniques available to commercial system designers. However, if one can come to terms with the basic principles, then one can also come to understand why these need to be modified for commercial computer systems as they are described in manufacturers' design data.


6.6 Programming Levels for Processors


After reading section 6.5, one may be driven to despair and wonder how anything sensible could be achieved with programming languages as primitive as those provided by a common CPU instruction set. Granted, we only looked at some very basic instructions, but the sum total of most modern instruction sets provides little more programming power to work with than the set of instructions used in section 6.5. Multiply and divide functions are notably absent in many processors and few processors have instructions related to input and output of data via common interfaces such as keyboards and screens.

Designing a sophisticated programming environment requires a layered approach. The low level machine code can be used to create a crude programming language and the crude programming language can be used to create a more sophisticated programming language and so on. Ultimately, we can end up with sophisticated Windows-based macro programming languages, such as those found in modern spreadsheets, word-processors and databases.

Assuming that we had no more tools than the basic system layout of Figure 6.6, and the machine code instruction set of Table 6.2, then we would have great difficulty in developing realistic programs because we would physically need to set address lines high and low, enable certain memory addresses, then write data into those addresses by selectively setting data bus lines high and low with mechanical switches. However, this sort of process can be considerably improved by the addition of a hexadecimal keypad as shown in Figure 6.10. The purpose of the keypad is to apply binary voltages to selected memory addresses. This simplifies the programming task considerably because we no longer have to key in lengthy sequences of ones and zeros, but rather the much shorter hexadecimal equivalent values. As an example, if we wished to load our addition program into memory (as listed in Table 6.4), we might use the key sequence shown in Table 6.5.

Command Key    Address (in Hex)    Data (in Hex)
M              0010                01
M              0011                03
M              0012                02
M              0013                07
M              0014                03

Table 6.5 - Key Sequence Required to Enter Addition Program on a Simple Hexadecimal Keypad
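
The net effect of this key sequence is simply to deposit five bytes into consecutive memory locations. Expressed as a C sketch (purely illustrative - at this level there is, of course, no compiler available, and the opcodes LDA = 0x01, LDB = 0x02 and ADD = 0x03 are those of the hypothetical processor used in this chapter), the keypad performs the equivalent of:

/* Illustrative only: what the Table 6.5 key sequence writes into memory. */
void load_addition_program(volatile unsigned char *mem)
{
    mem[0x0010] = 0x01;   /* LDA       */
    mem[0x0011] = 0x03;   /* operand 3 */
    mem[0x0012] = 0x02;   /* LDB       */
    mem[0x0013] = 0x07;   /* operand 7 */
    mem[0x0014] = 0x03;   /* ADD       */
}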


(Figure: a hexadecimal keypad with keys 0 to F plus "M" and "X" function keys, connected to the address bus, the data bus, the memory read/write line and the microprocessor reset line.)

Figure 6.10 - A Simple Hexadecimal Keypad to Automate Machine Code Programming

Each time the "M" function key is pressed on the keypad, the address line outputs are activated and take on the bit pattern corresponding to the four keys pressed. The keypad has some form of digital display to echo the hexadecimal keys that have been pressed. After four hexadecimal keys have been pressed (ie: the address), the data bus lines are activated and the write line is asserted. The data bus lines take on the binary value equivalent to the next two hexadecimal keys pressed (ie: the program instruction or data). Once the starting address is primed, pressing the execute (or X key) on the keypad should reset the processor to enable the program to run. Variations on this form of programming can be used to generate simple programs for purpose-built, small-scale microprocessor controllers. However, this is still rather laborious and prone to error. Even though machine code programming in hexadecimal is a vast improvement over binary, only short programs can be generated before the process becomes unwieldy. The solution has been to use machine code programming to generate a more sophisticated programming language that can operate in conjunction with a full alphabetic keyboard and text screen. The machine code program that does the interpretation from the source code (written in more sophisticated language) back to the necessary binary executable code is called an assembler. The programming language is called an assembly language and its syntax is nothing more than the mnemonics used to describe the machine code instructions.


To write our addition program in assembly language, using a traditional keyboard and screen, we would type the following:

LDA 3
LDB 7
ADD

This source code program could then either be stored in memory or on disk. The assembler (written in machine code) would take the raw source code and convert it into binary executable code. This is not as straightforward a task as may first appear, because the assembly language program above is typically entered using a conventional keyboard that generates ASCII data. This is quite different from a hexadecimal keypad that generates binary numbers corresponding to hexadecimal numbers. For example, by examining the ASCII codes in Table 4.2, we can see that typing the instruction LDA 3 on the keyboard generates the following binary code:

L    0100 1100
D    0100 0100
A    0100 0001
3    0011 0011

The three binary strings corresponding to LDA need to be converted into the actual machine code for load register A (which is 0000 0001 in our hypothetical processor). Note also how even the number 3 needs to be converted from ASCII form (0011 0011) to binary form (0000 0011).

Most modern assemblers are relatively sophisticated pieces of software and have been written in a layered form in order to reach that level of sophistication. In addition to converting programs, assemblers need to check the syntax of the source code file and alert the user to any inconsistencies or typing errors so that these can be corrected before the program is executed.

Assembly language programming is a vast improvement over machine code programming since it is more intelligible to the human user. It is used extensively in situations where computer users or system designers need to access memory locations (and hence system hardware) directly and is therefore an important tool in computer interfacing. However, assembly language is still far too crude to enable us to develop sophisticated applications because it is difficult to find errors in long and complicated programs. For this reason, a higher level language must be used in order to improve the legibility of source code and to make it more structured so that it can be readily modified. Typical, compiled high-level languages include C, Pascal, Fortran, etc. and will be discussed in more detail in Chapter 8. High-level language compilers, like assemblers, are written in a layered approach that ultimately enables the development of the sophisticated development languages with which we are now familiar.
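
In its simplest form, the core of such an assembler is just a lookup from mnemonic text to opcode bytes. The following C sketch is illustrative only: it uses the opcodes of this chapter's hypothetical processor and ignores the many other duties (labels, expressions, error reporting, listing files) that a real assembler must perform:

#include <stdio.h>
#include <string.h>

/* Mnemonic-to-opcode table for the hypothetical processor of this chapter. */
struct op_entry { const char *mnemonic; unsigned char opcode; int has_operand; };

static const struct op_entry op_table[] = {
    { "LDA", 0x01, 1 },
    { "LDB", 0x02, 1 },
    { "ADD", 0x03, 0 },
};

/* Translate one source line (eg: "LDA 3") into machine code bytes.
 * Returns the number of bytes produced, or 0 on a syntax error.     */
int assemble_line(const char *line, unsigned char *out)
{
    char mnemonic[8];
    int  operand = 0;
    int  fields  = sscanf(line, "%7s %d", mnemonic, &operand);

    if (fields < 1)
        return 0;                               /* blank or unreadable line */

    for (size_t i = 0; i < sizeof op_table / sizeof op_table[0]; i++) {
        if (strcmp(mnemonic, op_table[i].mnemonic) == 0) {
            out[0] = op_table[i].opcode;
            if (!op_table[i].has_operand)
                return 1;
            if (fields < 2)
                return 0;                       /* missing operand          */
            out[1] = (unsigned char)operand;    /* ASCII "3" converted to
                                                   binary inside sscanf()   */
            return 2;
        }
    }
    return 0;                                   /* unknown mnemonic         */
}

Note that the conversion of the character "3" from its ASCII code to the binary value 0000 0011 - the step highlighted above - is buried inside the sscanf() call.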


There are many activities that need to be coordinated when developing software for microprocessor based systems. These include:

•  Entering source code programs
•  Verifying data entry
•  Storing source code on some "permanent" medium
•  Retrieving source code from storage media
•  Converting source code to executable binary form
•  Executing the binary code.

If we were to develop a "one-off" microprocessor based control system for some low level function, then we could use some makeshift solution (such as the hexadecimal keypad) to achieve our aims. However, when we need to repeatedly develop, execute and debug software, we need a more reliable and consistent approach. This is provided by the operating system, which is a program that acts as an interface between the human user and the computer hardware. It enables the user to carry out "house-keeping" tasks, such as storing and retrieving files, executing programs, etc. in a simple and efficient manner.

The bulk of a typical (commercial) operating system is stored on disk because it is normally too large (in total) to reside in memory. Normally, only a kernel portion of the total system remains in memory (and executing) to call in other disk-based modules (sometimes referred to as external functions). External modules of the operating system software are then brought into memory for execution only when needed.

In most modern computer systems, several small modules of software (that one could argue are part of the operating system) are stored in ROM chips. These modules provide low level access to system input/output hardware (such as keyboard, disk-drive, etc.) and make it easier for operating system developers to interface their software to specific system hardware. In other words, they are the link between the highest levels of hardware design and the lowest levels of software. The ROM based software modules that perform this interfacing function are often referred to as the Basic Input Output System or BIOS for the computer.

Many computers do not even store the basic kernel for the operating system in ROM chips because of its size. For this reason, when the computer system is initiated and the microprocessor within is reset, its first function is to execute a small program (normally in ROM) that searches the disk drives to find the operating system kernel and load it into memory so that normal operations can commence. The small program that does the search for the operating system is referred to as a "bootstrap" program. The role of the operating system, bootstrap program, BIOS and other typical pieces of computer hardware and software are shown schematically in Figure 6.11.


(Figure: the end-user interacts with application software - assembler, high level language compiler and executable programs - which sits above the operating system; the operating system in turn rests on the BIOS and bootstrap software at the software/hardware interface, beneath which lie the hardware elements: CPU, memory, keyboard, graphics controller and disk-drive controller.)

Figure 6.11 - Interfacing Hardware and Software via an Operating System

The role of the operating system will also be expanded upon in Chapter 8, when we examine the task of programming in more detail.


6.7 Interrupts and Interrupt Programming


We now know how data is represented and transferred within microprocessor based systems, through the data bus. We have also seen how the microprocessor exerts control over other chips within the system, by selectively enabling and disabling "Chip Enable" (or Chip Select) pins, through the system address bus. However, up to this point, we have largely assumed that all devices connected in such systems are passive, and incapable of performing complex functions akin to those of the microprocessor. This is in fact not the case. Many devices that are mapped into the microprocessor system are intelligent or semi-intelligent in their own right. The chip devices that control the serial communications port (UARTs) in a computer system are a good example of semi-intelligent devices. The disk-drive controller card and graphics controller card are another two examples. Mathematical co-processors are another.

Such devices require a degree of autonomy in order to unload work from the microprocessor. However, there is little value in unloading work from the microprocessor if its only function (while the peripheral is working) is to wait until the peripheral device has completed its task. There are two types of programming that can be used to improve the utilisation of the processor while it is waiting for some external event to occur. These are referred to as:

•  Polling
•  Interrupt Programming.

Both programming techniques are used in a range of different control applications. However, it is important to understand the difference between the two. Most control programs require input from a number of devices. The inputs are then processed and outputs generated. The problem is that the devices generating inputs to the control systems are not always ready to provide data when it is required.

In a polling program, the objective is to interrogate (or poll) the various input devices to see if they are ready to provide data. If they are not ready, then the program can either execute a wait loop until the devices are ready (as shown in Figure 6.12 (a)), or else it can continue on to do some other control function and then poll the device again later. This is shown in Figure 6.12 (b).

The polling techniques shown in Figure 6.12 both have problems. The one in Figure 6.12 (a) wastes processing time and only works when there is one input for which the processor has to wait. What happens if three inputs all need to be waited for at overlapping time intervals?


(Figure: two flowcharts. In (a) the program loops on "Input Ready?" until the input can be read, then performs the control process and outputs data. In (b) the program tests "Input Ready?"; if the input is not ready it performs some other task before polling again, otherwise it reads the input, performs the control process and outputs data.)

Figure 6.12 - Polling Techniques (a) Waiting for Inputs (b) Executing a Task While Waiting for Inputs

The problem with the technique shown in Figure 6.12 (b) is that if the program executes some other task while waiting for an input, and the time taken to perform that task is longer than the time taken for the first input to change, then by the time the control program has returned to interrogate the input, data has been lost. This is a serious problem in most control applications because the stability of a control system depends upon accurate sampling of data. However, if the timing of the additional tasks imposed on the system is known and the rate at which the input needs to be sampled is known, then the technique can be made to work and is relatively simple to implement. It also leaves open the possibility of polling a number of inputs, while waiting for events to occur.

Another shortcoming of the technique shown in Figure 6.12 (b) is that it is dependent upon the processor speed, because the timing of the additional task may vary with a different processor. Hence, if the polling control algorithm was developed on processor X (and found to be operating) and was subsequently moved to processor Y (which ran at half the speed, say), then the algorithm may no longer work.
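
Both flowcharts of Figure 6.12 translate into very simple program structures. The following C sketch is illustrative only - input_ready(), read_input(), control_process(), output_data() and other_task() are hypothetical placeholders for whatever routines a real system would provide:

#include <stdbool.h>

/* Hypothetical device and control routines - placeholders only. */
extern bool input_ready(void);
extern int  read_input(void);
extern void control_process(int sample);
extern void output_data(void);
extern void other_task(void);

/* Figure 6.12 (a): busy-wait until the input is ready. */
void poll_and_wait(void)
{
    for (;;) {
        while (!input_ready())
            ;                          /* processing time is simply wasted here */
        control_process(read_input());
        output_data();
    }
}

/* Figure 6.12 (b): do some other task between polls.  This only works if
 * other_task() is guaranteed to complete before the input can change again,
 * otherwise a sample may be over-written and lost.                          */
void poll_between_tasks(void)
{
    for (;;) {
        if (input_ready()) {
            control_process(read_input());
            output_data();
        } else {
            other_task();
        }
    }
}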


A more complicated, but more systematic approach to control programming is to use the so-called interrupt-driven software technique. This is used internally within a computer to handle peripheral devices such as graphics controller cards, disk-drive controllers, etc. and also in control systems software for scanning inputs from external systems under computer control.

Interrupt programming is based upon the one or more interrupt pins available on most processors. When an interrupt line is asserted on a processor, the processor immediately stops its currently executing program. The processor accesses an address in memory (the interrupt vector) that contains (ie: points to) the starting address of a program specially written to handle the interrupt. The processor executes a "branch" and the next instruction executed is the first step in the interrupt handling program. The interrupt handling program is called an Interrupt Service Routine or ISR.

The obvious question to ask is what happens to the normal program that was executing before the interrupt occurs. In order to understand this, we need to examine the "current state" of the processor. If a program is executing on a processor and we can stop that program executing and save the contents of all the internal registers (including the program counter and processor status register), then we have saved the current state of the processor. In theory, we can switch off the processor, switch it back on, execute other programs and so on, but provided that we restore all the register contents to their original state, then the processor will continue executing the previous program as though it had never been interrupted.

The same is true of the interrupt service routine. The first few lines in any interrupt service routine need to save the current state of the processor - this is normally achieved by transferring all the register contents to a convenient area of memory. The routine is then written just like any other program, but its sole purpose should be to "handle the interrupt". The last few lines of code in the interrupt service routine should restore the original processor register contents by doing a transfer from memory back to the processor. The control of the processor then returns to the normal program as though nothing had happened.

There are many instances where the interrupt programming technique is used for normal computer operation. We have already mentioned that these include interaction with the disk-drive controller, graphics controller cards, serial and parallel ports, keyboard, etc. The keyboard interrupt system is a good example of how the technique is used to handle incoming information that only enters the system on a spasmodic basis. A key may be pressed every few milliseconds or every few hours or every few months, depending on the user. The processor can go about normal program execution until an interrupt is asserted. When a keyboard interrupt occurs, an interrupt service routine takes the information from the keyboard register and places it into a normal memory area (called the keyboard buffer) where it can remain until required by an executing program. In this way, the executing program does not have to continuously check whether the keyboard has been touched.


The serial port (UART chip) example is very similar to the keyboard example and is important in terms of control, because serial data links are often involved. The microprocessor may require data entering the UART from an external system. There may be long and varying time intervals between the arrival of data. If the microprocessor has to spend all its time interrogating (polling) such a device to determine whether data has arrived, then there is little scope for the processor to carry out normal program execution. This problem is overcome through the use of interrupts as shown schematically in Figure 6.13 (a). Figure 6.13 (b) is a memory map that shows how the flow of the control program is disrupted and then restored by an interrupt service routine.

(Figure: in (a), data from an external system arrives at a UART within the computer control system; the UART's "data ready" line drives an interrupt pin on the microprocessor chip. In (b), a memory map shows the control program being interrupted; the interrupt vector points to the ISR for the UART, which saves the registers, handles the UART and then restores the registers before control returns to the control program.)

Figure 6.13 - Interrupt Programming for Serial Communications (a) Schematic (b) Memory Map
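
A rough C-style sketch of the arrangement in Figure 6.13 is given below. The register address, buffer size and names are all hypothetical, and the saving and restoring of the processor registers (shown in the memory map) is assumed to be generated by the compiler or written separately in assembly language - the sketch shows only the "handle UART" portion:

#define BUF_SIZE 64

/* Hypothetical memory-mapped UART receive register. */
#define UART_RX_REG (*(volatile unsigned char *)0xC000)

static volatile unsigned char rx_buffer[BUF_SIZE];  /* buffer read later by the */
static volatile unsigned int  rx_head = 0;          /* normal control program   */

/* Interrupt Service Routine for the UART "data ready" interrupt. */
void uart_isr(void)
{
    unsigned char byte = UART_RX_REG;        /* take the character from the UART */
    rx_buffer[rx_head % BUF_SIZE] = byte;    /* place it in a conventional area  */
    rx_head++;                               /* of memory for later use          */
}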


Most computer systems use a number of interrupts in order to function. In smaller systems, interrupt lines may be connected directly to the processor. However, in most commercial systems, in order to handle the number of devices generating interrupts, a special device known as an "interrupt controller" is used as an interface between the processor and incoming interrupt signals.

Interrupt service routines are just like any other programs and hence they too need to be considered in terms of their overall importance to system control. For this reason, most processors and interrupt controllers allow for prioritisation of interrupts. If several interrupts occur simultaneously, then the one with the highest priority is serviced first and the rest in respective order of priority. Some systems allow ISRs for lower priority interrupts to be interrupted by higher priority ISRs.


6.8 Paging
Paging is a technique that has largely been designed to overcome the price differential between magnetic storage media (disk) and semiconductor storage media (memory). Although simple to understand in principle, it is a relatively sophisticated technique and its application varies from one processor to another. Some CPUs and microprocessors have in-built design characteristics that enable them to handle paging, while in other cases it is a result achieved largely through specialised software, particularly in operating systems.

Modern computer systems exclusively run programs directly from memory, but in the past the high cost of memory limited the number of programs that could reside within it at any point in time. Disk storage space was traditionally much lower in cost than semiconductor storage, but it was also orders of magnitude slower in operation. Paging is a solution that enables memory and disk to work together in such a way as to appear to create more memory. The extra memory is really disk space but is referred to as "virtual memory", while the semiconductor memory is differentiated as being "physical memory". The concept is shown schematically in Figure 6.14.

(Figure: numbered program/data segments held in virtual memory on magnetic disk, with a small subset of those segments currently paged into the much smaller physical (semiconductor) memory.)

Figure 6.14 - Paging From Disk (Virtual Memory) to RAM (Physical Memory)


A portion of the hard disk space in a computer system can be devoted to the special task of paging. Effectively, this portion of the hard disk is treated like an enormous area of memory that requires a large number of bits for addressing. The paging area is divided up into smaller segments. The physical memory space available for program execution is also divided up into the same size segments as those on disk and the normal addressing is extended with offsets to provide an address for each segment within the hard disk. The address bus in a paged system therefore contains information on both the physical and virtual locations of data. Some processors, designed for paging purposes, are capable of dealing with both physical and virtual memory very efficiently because they can divide raw addresses into physical addresses and virtual memory offsets.

When a segment of code or data is required by the processor during program execution, it is taken from hard disk and paged into memory. When the segment is no longer required it is paged back to hard disk. Since there are many more virtual memory segments than there are physical memory segments, competition for physical memory can become intense. Some processors have the ability to carry out a paging algorithm that efficiently works out which segments have been used least frequently (and should therefore be sent back to disk) and which segments are required from disk in memory.

The technicalities of paging implementation vary somewhat from one processor to another and it is difficult to go into a great deal of detail on this process without discussing a particular type of processor. However, the principles of paging described herein are common to most implementations, whether they are largely performed in hardware or software.

As can be deduced from the above discussions, paging is an overhead rather than an advantage and it exists because of the rapidly escalating memory requirements for multi-tasking, multi-user computer systems. A simple microprocessor controller, running a single task, on the other hand, does not generally require paging. Despite the dramatic decrease in the cost of semiconductor memory, it is still relatively expensive in relation to magnetic storage (particularly since hard disk storage costs have also dramatically decreased). Another reason for paging is that semiconductor memory (particularly in plastic package form) is still physically larger, on average, than disk storage. In the final analysis, the improvements in memory technology, considerable as they have been, have not kept pace with the escalation in memory requirements. For these reasons, paging will remain as an inherent part of computer architecture for some time to come.
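
The splitting of a raw address into a segment (page) number and an offset can be illustrated with a small C sketch. The page size, address width and table structure below are hypothetical and far simpler than any real processor's paging hardware:

#include <stdint.h>
#include <stdbool.h>

#define PAGE_SIZE   4096u        /* hypothetical 4 Kbyte pages        */
#define PAGE_SHIFT  12           /* log2 of the page size             */
#define NUM_PAGES   1024u        /* hypothetical virtual page count   */

/* One page-table entry: is the segment currently in physical memory,
 * and if so, which physical frame holds it?                          */
struct page_entry {
    bool     present;
    uint32_t frame;
};

static struct page_entry page_table[NUM_PAGES];

/* Translate a virtual address into a physical address.  If the segment is
 * not in physical memory, a real system would raise a page fault and bring
 * it in from disk; here we simply report the miss.                         */
bool translate(uint32_t virtual_addr, uint32_t *physical_addr)
{
    uint32_t page   = virtual_addr >> PAGE_SHIFT;      /* virtual page number */
    uint32_t offset = virtual_addr & (PAGE_SIZE - 1);  /* offset within page  */

    if (page >= NUM_PAGES || !page_table[page].present)
        return false;                                  /* page fault          */

    *physical_addr = (page_table[page].frame << PAGE_SHIFT) | offset;
    return true;
}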


6.9 Multi-Tasking Multi-User Systems


In this chapter, we have examined in some detail the process of program execution in a microprocessor, which, as we have already stated, can also be applied to a range of multi-chip CPUs. However, nowhere in our discussions have we ever alluded to the fact that more than one program can execute on a processor at one time. How then can one explain the implementation of multi-tasking and multi-user systems?

A multi-tasking computer system is one that appears to run a number of tasks concurrently. The important point is that it only appears to do so. At any instant in time, the processor still only executes one single instruction, and the majority of CPUs are physically only single-tasking devices, along the lines we have already described. Multi-tasking is in fact achieved through sophisticated software that allows portions of one program to run for a short period and then switches to another program and so on until the first program is permitted to continue again for a short period. This phenomenon is known as time-slicing. If the switching from one program to another is done quickly enough, then it appears as though a number of programs are running simultaneously. The switching from one program to another is based upon a relatively complex arrangement of interrupt programming that is normally built into the operating system software.

The level of sophistication in the multi-tasking process is largely a function of the complexity of the operating system. Some operating systems allow for the allocation of priorities between tasks, the percentage of CPU time that each task can average and so on. Of equal importance is the operating system's ability to control and allocate the use of resources (keyboard, screen, serial and parallel ports, disk-drives, etc.) between tasks. Programs written for control purposes often have special needs in multi-tasking systems and so some operating systems are marketed as being "real-time" while others are not. The difference is in the ability to prioritise certain tasks and their access to system resources.

A multi-user computer system is really an extension of the multi-tasking concept, since it is based on the idea of a number of programs executing simultaneously. The only real difference is that a multi-user system enables a number of different users to interact with the computer through a number of input/output devices (keyboards, screens, mice, etc.). The complexity of a multi-user operating system can be significantly greater than that of a single-tasking, single-user or multi-tasking, single-user system. This is because a multi-tasking, multi-user system needs to include security measures that prevent users from corrupting one another's data and programs or from dominating CPU usage. In this book, we shall not be looking at multi-user, multi-tasking systems in any detail.


6.10 Combining the Elements into a Cohesive System


The conclusion to this chapter really brings us back to the beginning of Chapter 4, where we endeavoured to gain a global perspective of the computer systems that we were trying to realise with basic digital elements. We therefore re-examine Figure 4.3, in the light of our current knowledge and combine it with the information that we gained from Figure 6.11, in order to create a more complete picture of the computer system. This is shown in Figure 6.15.

(Figure: the software layers - assembler, high level language compiler, executable programs and the operating system - sitting above the hardware: a CPU connected by the address bus and data bus to memory, ROM (BIOS and bootstrap), an interrupt controller, a clock, and the keyboard, graphics and disk-drive controllers, which in turn connect to the keyboard, monitor and disk-drive.)

Figure 6.15 - Combining Basic Elements into One Computer System

There are still many issues that have not been discussed in this chapter. In particular, the concept of parallel processing, where the task of executing an instruction or instructions is divided up amongst a number of processing elements, has not been introduced. However, what has been covered should be sufficient to enable you to read manufacturers' design data and handbooks with an understanding of the principles behind the technicalities of specific implementations. In many ways, this is a more important achievement than an exhaustive treatment of every detail of computer architecture.


Chapter 7
Interfacing Computers to Mechatronic Systems

A Summary...
A chapter covering the basic technical issues related to designing and selecting hardware to interface computers to mechatronic devices. This chapter delves into the basic aspects of the closed control loop in terms of protection, isolation and signal conversion between external systems and the computer. The chapter also looks in some detail at the problems of digital control including quantisation error in A/D conversion and the problems associated with transducers.

(Chapter schematic: the chain of digital to analog conversion, scaling or amplification, isolation and energy conversion (actuators) linking the computer's digital voltages to the analog energy forms of the external system, and the return path through energy conversion (transducers), isolation, scaling or amplification, protection circuits and analog to digital conversion, with external voltage supplies feeding the actuator and transducer stages.)


7.1 Introduction
Most modern devices that engineers encounter in the industrial environment are mechatronic in nature. That is, they are a combination of some or all of the following elements:

•  Mechanical components (both moving and stationary)
•  Electro-mechanical components (motors, solenoids, relays, etc.)
•  Traditional power circuits (single and three-phase power supplies)
•  Power electronic circuits (servo drives, switched-mode power supplies, etc.)
•  Digital circuits (discrete combinational logic and microprocessor logic)
•  Analog circuits (transistor amplifiers, etc.)
•  Energy conversion devices (transducers)
•  Software.

In simple terms, mechatronic systems involve the application of digital and analog electronics and computer systems to the control of both traditional and modern mechanical devices. Since nearly every one of the elements in a mechatronic system is very nearly the basis of a professional discipline in its own right, the task of designing such devices and interfacing them to other computer systems can be quite daunting. No single text can therefore cover in absolute detail the spectrum of issues related to designing and interfacing to mechatronic systems and this text certainly does not endeavour to do so. This text, and this chapter in particular, should be viewed as an index that gives the reader an overview of the problems involved in interfacing digital electronics (particularly computers) to industrial systems.

The titles of this chapter and this book are somewhat ambiguous and deliberately so. The reason for the ambiguity is that sometimes we need to interface computers to mechatronic systems that already have computer control and other times we need to interface computers to mechatronic systems that may have analog and digital power electronics, but no computer control. In both cases, the introduction of a computer ultimately creates a larger mechatronic system and hence the ambiguity.

Interfacing industrial signals to computers (or computer-based devices) can be a complex and time consuming task. There are many problems involved in isolating low-voltage, low-current computer circuits from the high voltages and currents that are prevalent in industrial environments. However, these problems are just one part of the overall interfacing dilemma. Converting signals from one energy form to another or reducing or amplifying signal levels is the other part of the problem.


In the engineering market-place there are an enormous number of commercially available (off-the-shelf) interfacing boards and transducers that have been designed to allow computers to interact with the outside world. These can satisfy the majority of our interfacing needs. Off-the-shelf interfacing boards are normally ideal for one-off applications because they save on development and testing costs. Unfortunately, commercial boards are normally "general-purpose" and can therefore be relatively expensive because they are designed to handle a range of interfacing needs. If we need to use large volumes of interfaces for specific applications, then we generally find that these commercial solutions are no longer viable. As with most commercial decisions, there is a natural "break-even" point and beyond this point, we discover that it is necessary to develop our own, special-purpose interfaces from basic concepts.

There are a number of fundamental concepts that need to be understood, in terms of computer interfacing, and this text is designed to introduce you to some of the more important ones as they pertain to mechatronic systems. In Chapter 3, we examined a range of different electrical and electronic devices that are used as mechanisms for interfacing computers to external systems. However, in order to understand how these mechanisms fit in to the interfacing process, we need to understand the process itself. To this end, our task, for the remainder of this chapter, will be to examine the global objectives of interfacing and to see how the sorts of devices introduced in Chapter 3 can be used to interface computers to external systems. If you have not already done so, then you should go through the concepts espoused in Chapter 3 before proceeding with this chapter.

By the time you have completed this chapter, you should clearly understand that there is actually nothing unique about the problem of interfacing a computer to a mechatronic system. The issues involved in interfacing to chemical systems, large-scale power systems, etc., are generic and all require an application of the basic principles that will be introduced in this chapter.


7.2 The Interfacing Process


The purpose of computer interfacing is to make the software on a computer system communicate with an external system that provides or accepts information in various physical forms and sizes. More precisely, we can say that signals must be transferred from the external system to the computer's CPU and memory devices via the data bus in order to create a feedback path. Signals must be transferred from the CPU and memory devices, via the data bus, to the external system to create a driving force. This input and output is normally used to create a closed-loop control system, as shown schematically in Figure 7.1 or a data-acquisition (monitoring) system if the computer is not required to provide any driving force as an output.

(Figure: a computer whose data/address bus connects to a computer interface containing memory locations for I/O; the interface sends an output (driving force) to the external system and receives an input (feedback) from it, closing the control loop.)

Figure 7.1 - The Computer Interfacing Process - Closed Loop Control

The computer in Figure 7.1 could be a general-purpose personal computer, a specially designed (one-off) microprocessor or for that matter, a mini-computer or main-frame system. The problems involved in interfacing any of these devices to the external system of Figure 7.1 are numerous. The key point to remember in understanding these problems is that the circuits in computer systems are generally designed to:

•  Respond to small, digital voltages (typically less than ten volts) and not currents or pressures or temperatures
•  Provide small digital voltage outputs (typically less than ten volts) and very small current outputs (typically less than one milli-Amp).


However, our knowledge of most physical systems requiring control tells us that the external systems:

•  Do not generally provide voltage feedback and are not necessarily voltage driven (other energy and signal forms are also prevalent, including current, temperature, pressure, capacitance, inductance, etc.)
•  Often do not provide or require digital signals (many feedback and drive signals are analog in nature)
•  Generally provide and require signals with energy levels having magnitudes not compatible with the computer system (signals may be too large or too small and need to be converted).

As a result of this incompatibility, there are a number of steps involved in the interfacing process. These are shown in Figure 7.2 for the closed-loop system.

(Figure: on the output side, the computer's digital voltages pass through digital to analog conversion, scaling or amplification, isolation and energy conversion (actuators) to drive the external system; on the feedback side, the external system's analog energy forms pass through energy conversion (transducers), isolation, scaling or amplification, protection circuits and analog to digital conversion back to the computer. External voltage supplies feed the actuator and transducer stages.)

Figure 7.2 - Basic Steps in the Computer Interfacing Process

For the signals fed from the external system to the computer, the following steps are generally required:


(i) Conversion from initial energy form to voltage (via transducers)
(ii) Conversion of raw voltage to levels appropriate to computer circuits
(iii) Protection and isolation of the computer circuits from raw signals where spuriously high signals can occur in the external system
(iv) Conversion of analog signal voltages to digital form.

For the signals fed from the computer to the external system, the following steps are generally required:

(v) Conversion from digital form to analog form
(vi) Conversion of raw analog output to appropriate voltage and current levels (amplification and/or transformation)
(vii) Conversion of output currents and voltages to appropriate drive energies (magnetic field, mechanical force, etc.).

However, in examining Figure 7.1 and Figure 7.2, it must also be remembered that any signals fed back from the external system into the computer are effectively "asynchronous" or random. In other words, their timing has no relationship to the carefully timed activities within the computer system and its data and address bus structure. The first objective of computer interfacing is therefore to provide a mechanism that will accept this random, incoming information and retain it until it can be released, by normal internal addressing and timing techniques, to the data bus and CPU.

As far as the computer software is concerned, all incoming and outgoing data must ultimately appear in memory locations which are mapped within the normal computer address/data bus structure. In order to get an interface to provide this functionality, we have to convince the CPU that the entire interface is nothing more than a collection of memory addresses. There are a range of special chips whose internal registers can be mapped into a computer system (just like memory chips) and which are equipped with input/output ports so that a range of asynchronous, incoming digital signals can be received or transmitted. These devices are often referred to as "Programmable Parallel Interfaces" (abbreviated to PPI), "Peripheral Interface Adaptors" or "Parallel Interface Adaptors" (normally abbreviated to PIA). The actual name usually depends upon the manufacturer. The integration of these chips into a computer system is shown schematically in Figure 7.3.

The specific functionality of each PPI chip depends upon the manufacturer and a range of different chips are available to meet different requirements in terms of inputs and outputs, number of internal registers and so on. Generally, a PPI provides a number of "ports" to which external system connections are made. Each port consists of a number of digital lines that can be used for input or output. Each port has a corresponding register that is mapped onto the address/data bus structure of the computer system, as schematically shown in Figure 7.3.


(Figure: a microprocessor connected via the data bus to the registers of a Programmable Parallel Interface, with the address bus reaching the PPI through an address decoder; the PPI exchanges asynchronous digital inputs and outputs with other devices and can assert an interrupt line back to the microprocessor.)

Figure 7.3 - Interfacing External Asynchronous Signals to the Synchronous Internal Environment via a Programmable Parallel Interface

The CPU reads from the registers (which the CPU views as nothing more than memory locations) in order to get the current status of input ports and writes to registers in order to change the status of output ports. The PPI is therefore a device which performs the conversion between the asynchronous and synchronous data forms found outside and inside the computer, respectively. Most PPI devices have additional registers that provide the CPU with information about the status of incoming data and the PPI itself. Additional registers are also normally provided so that the operation of the PPI can be changed by the CPU. This attribute is common to many peripheral devices (including UARTs, etc.) whose characteristics can be programmed by changing the contents of internal registers.

We have already noted that a substantial proportion of external feedback signals will be analog and similarly, that the driving forces required by external systems will also, in general, be analog. The PPI essentially only provides digital inputs and outputs. However, there are numerous single-chip devices available to perform the conversion between analog and digital (and vice-versa). The self-evident titles of these devices are "Analog to Digital (A/D) Converters" and "Digital to Analog (D/A) Converters". The functionality of these devices is shown schematically (in conjunction with the PPI chip) in Figure 7.4.


(Figure: the PPI sits on the data and address buses; its output register drives a D/A converter, which provides an analog output voltage to the external system, while an 8-bit A/D converter accepts an analog input from the external system and feeds the PPI's input register.)

Figure 7.4 - Functionality of A/D and D/A Devices Interacting with a PPI Device

In Figure 7.4, we have shown an A/D converter which takes in an analog voltage (within a limited range) and converts it to an 8-bit form that can be fed into an input port of the PPI device. Similarly, we have shown an output port of the PPI feeding 8 bits of data (in parallel) into a D/A converter that then provides a voltage proportional to the number represented by the 8 bits. The actual number of bits to which analog to digital (and vice-versa) conversion occurs clearly defines the accuracy of the incoming data or outgoing driving force and is referred to as the "resolution" of the device. As a general rule, the higher the resolution of the device, the higher the cost.

The conversion of data from and to an analog form takes time, and again, the faster the conversion, the higher the cost of the device. As a rule, the cost of A/D conversion is normally much higher than that of D/A conversion. As a result, it is often necessary to share a single A/D chip amongst a number of incoming analog voltages and to selectively switch between them. The device that performs this selective switching is referred to as a "multiplexer". We shall examine the A/D and D/A processes in more detail in section 7.3 because they are most important to the stability of a control system and the accuracy of a data acquisition system.


Another interesting point to note is that it is possible to purchase the combined functionality of analog to digital conversion with the functionality of the programmable parallel interface, thereby having a single, memory-mapped device that is capable of accepting external analog signals. These inputs are accessed through registers in the same way as those in the discrete PPI device.

Once the PPI chip is in place within a computer system, the "control software" developer needs to generate a range of sub-programs (procedures) which allow the main program to read and write to those "memory locations" (which are in fact the Input/Output registers of the adaptor). These programs normally have to be written in a low-level assembly language (a minimal sketch of this sort of register access is given at the end of this section).

We have already noted that, as far as the internal computer hardware is concerned, external feedback signals arrive at essentially random times. As far as the software is concerned, we generally have little or no control over the rate at which feedback data enters the registers in the PPI device. Normally the control software running on the CPU also has to perform many functions in addition to reading and using data from the interface adaptor. It therefore takes a finite amount of time between successive readings of data from the adaptor. A program which regularly checks inputs, in between performing other functions, is said to be "polling". The polling technique is simple and efficient, provided that inputs don't change too quickly. However, if the time taken to use the incoming data is longer than the refresh time then data in the I/O registers may be lost (ie: over-written).

The "interrupt programming technique" is more complex than the polling technique but is used to avoid data loss in real-time systems. In an interrupt-driven system, the PPI "interrupts" normal program execution whenever new data arrives. This can be done (as in Figure 7.3) by directly interrupting the CPU or (more commonly) by asserting an interrupt on a special "interrupt controller" chip. Control of the CPU is then transferred to an Interrupt Service Routine (ISR), whose task it is to read data from the PPI and place it into a conventional area of memory (a buffer which is later accessed by the main program). Once control of the CPU is returned to the main program, it can use the data at its own pace. The development of an appropriate ISR is therefore an important part of the total interfacing process. The polling and interrupt techniques have already been discussed in some detail in section 6.7. A number of other issues related to software development in control systems will be covered in more detail in Chapter 8.

As we progress through the remainder of this chapter, we will examine the major issues related to the hardware interfacing of a computer system to an external system as we have cited them in Figure 7.2. After we have examined these aspects in more detail, we shall return to the overall interfacing problem in section 7.8, where we can bring the basic elements together with a greater understanding of the broader problems.
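
As a minimal illustration of the register access referred to above, the following C sketch treats the PPI ports as ordinary memory locations. The addresses and the control value are entirely hypothetical (they depend on the particular PPI chip and on how it has been decoded into the memory map), and on many systems such routines must in practice be written in assembly language:

#include <stdint.h>

/* Hypothetical addresses at which the PPI registers have been decoded. */
#define PPI_PORT_A   (*(volatile uint8_t *)0xE000)   /* input port register   */
#define PPI_PORT_B   (*(volatile uint8_t *)0xE001)   /* output port register  */
#define PPI_CONTROL  (*(volatile uint8_t *)0xE003)   /* mode/control register */

/* Configure the PPI (the value written is purely illustrative). */
void ppi_init(void)
{
    PPI_CONTROL = 0x90;        /* eg: port A as input, port B as output  */
}

/* Read the current state of the digital input lines on port A. */
uint8_t ppi_read_inputs(void)
{
    return PPI_PORT_A;         /* looks to the CPU like any memory read  */
}

/* Drive the digital output lines on port B. */
void ppi_write_outputs(uint8_t value)
{
    PPI_PORT_B = value;        /* looks to the CPU like any memory write */
}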


7.3 A/D and D/A Conversion


The analog to digital (A/D) and digital to analog (D/A) conversion processes are two of the most important aspects of computer control. Although both of these devices are most commonly fabricated into commercially-available, single chip implementations, an understanding of their design and behaviour is important to selecting the appropriate devices for a particular task. The two devices are shown schematically in Figure 7.5.

(Figure: (a) an A/D converter with an analog input Vin and digital outputs D1 to Dn; (b) a D/A converter with digital inputs D1 to Dn and an analog output Vout.)

Figure 7.5 - (a) Schematic Representation of A/D Converter (b) Schematic Representation of D/A Converter

Figure 7.5 (a) shows the A/D device which provides a digital number output that is an approximation of the analog input. The accuracy of the approximation depends upon the resolution of the A/D converter, which in turn depends upon the number of output bits. For an A/D device of "n" bit resolution (as shown in Figure 7.5 (a)), there are 2^n possible combinations of output. If the analog input voltage range is Vi, then the A/D output can only change when the input voltage changes by a value greater than or equal to:

Vi / 2^n    ...(1)


Incremental voltage changes of less than this size are not reflected in the binary output and so information is lost. For example, an 8-bit A/D with an analog input range of 10 volts can only detect a change of voltage greater than or equal to:

10 / 2^8 volts = 39.0625 mV

If we were seeking to develop a control system that could respond to changes of 10 mV, then clearly an 8-bit A/D would not be appropriate in this instance.

Figure 7.5 (b) shows a D/A converter that provides an analog voltage whose value is dependent on the binary number at the input. The resolution of the D/A is also defined by the number of input bits into the system. The analog output of the D/A is therefore quantised and, for an n-bit converter, with an output voltage range of Vo, the output can only change in increments of:

Vo / 2^n    ...(2)
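
Equations (1) and (2) are easily checked numerically. The short C sketch below reproduces the 8-bit, 10 volt example used in the text (the figures are those of the worked example, not of any particular commercial converter):

#include <stdio.h>

int main(void)
{
    double v_range = 10.0;   /* analog input (or output) range in volts */
    int    n_bits  = 8;      /* converter resolution in bits            */

    double step = v_range / (1u << n_bits);   /* V / 2^n, as in (1) and (2) */

    printf("Smallest resolvable change: %.4f mV\n", step * 1000.0);
    /* Prints 39.0625 mV - input changes smaller than this are lost. */
    return 0;
}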

The loss of data involved in the A/D and D/A conversion processes is referred to as "quantisation error". Quantisation error can be minimised by increasing the bit resolution of the A/D and D/A devices, but it can never be eliminated. Hence, if one were to take an analog signal, pass it through an A/D and then through the "dual" process, D/A, the original signal could not be recovered. The objective, therefore, in selecting A/D and D/A devices is to ensure that the approximated signals entering and leaving the computer are sufficiently accurate to ensure stable control.

D/A conversion is easier to realise than A/D conversion and is actually used within some A/D circuits. For this reason, we shall firstly examine the D/A process. Figure 7.6 shows the traditional way of achieving a conversion from binary input voltages to an analog output voltage. The technique is referred to as an "R-2R Resistive Ladder Network" and forms one side of the inputs to an operational amplifier, connected up to act as an inverting amplifier (see Figure 3.45 (b)).

In Figure 7.6, the digital inputs to the D/A device are each connected to a voltage controlled switch (either a BJT or FET). When a low input is provided to the switch, then the corresponding component of VREF (from the resistive ladder) is switched to the non-inverting terminal of the operational amplifier (which is connected to ground), thus making no contribution to the output. A high digital input, on the other hand, provides a voltage to the inverting terminal of the operational amplifier. The resistive ladder is designed to provide an input weighting corresponding to the order of the bit at the digital input. In the simple, 3-bit converter of Figure 7.6, the weighting ascribed to the digital signal input to switch 1 is twice that of switch 2, which is correspondingly twice that of switch 3. This provides the required relationship between the binary digital inputs and the analog outputs.


(Figure: an R-2R resistive ladder network fed from VREF; each of the three digital inputs drives a voltage controlled switch (BJT or FET) that steers its branch of the ladder either to the grounded non-inverting terminal or to the inverting terminal of an operational amplifier, whose output is the analog output.)

Figure 7.6 - Schematic of D/A Converter Operation (3-Bit)

The problem with the D/A technique shown in Figure 7.6 (which is widely used) is that the voltages representing the digital inputs may not all be the same. With TTL, for example, the voltage representing a binary one can be anywhere between 2.4 and 5.0 volts. Ordinarily, the variation in digital voltages is of no consequence, provided that it is within a given range - however, in D/A conversion, digital voltage variations can cause problems because they are unfortunately amplified into an analog output value. The problem is resolved by the voltage reference circuitry in the D/A converter, which performs appropriate level shifting to compensate for variations in input voltages. The voltage reference circuit also controls the range of analog output values which can be set (within limits) for a given range of digital inputs.

Analog to digital converters can be designed in a range of different ways. The fastest of the A/D converters are the so-called "flash converters" or "parallel converters". In order to understand the operation of these devices, one needs to understand the operation of a "comparator". A comparator is simply a specialised form of operational amplifier that provides a digital output voltage based upon the difference between two analog input voltages. The device is shown in Figure 7.7.

The flash-converter A/D is composed of a number of comparators, each of which is given a different reference voltage (via a resistive ladder network) together with the incoming voltage. The comparators effectively then sort the voltage into different levels, depending upon the reference levels with which they work. As for the D/A circuit, reference levels are set up so that they effectively form a base-2 number system. The result is shown in Figure 7.8.


(Figure: an operational amplifier used as a comparator, with analog inputs v1 and v2; the digital output vout is high when v1 > v2 and low when v1 < v2.)

Figure 7.7 - Comparator Circuit Formed from an Operational Amplifier

(Figure: a resistive ladder divides VREF into 2^n - 1 reference levels, each feeding one comparator alongside the input voltage Vin; the comparator outputs are decoded by Boolean logic into the digital outputs D1 to Dn.)

Figure 7.8 - Flash or Parallel A/D Converter

The problem with the flash-converter is that for n bits of resolution it requires 2^n - 1 comparators, which makes it unwieldy for any high levels of resolution (eg: > 4 bits).


There are two, more commonly used A/D conversion circuits that are far more practical than the flash converter, despite the fact that they are not as fast. These are:

•  Integrating Converters
•  Successive Approximation Converters.

The operation of both these devices is relatively easy to understand. The integrating A/D converter is composed of two operational amplifiers (one configured as an integrator and the other as a comparator), a counter circuit and some relatively simple Boolean control logic. The device is shown schematically in Figure 7.9.

(Figure: a switch selects either the input voltage Vin or -VREF into an operational amplifier integrator (resistor R, capacitor C); the integrator output feeds a comparator, and the comparator, a start signal and a clock feed Boolean control logic which drives a counter whose outputs form the digital result.)

Figure 7.9 - Schematic of Dual-Slope, Integrating A/D Converter

In order to start the A/D conversion process in Figure 7.9, the Boolean control logic switches the analog input signal voltage to the integrator and at the same time switches on the counter. The output voltage of the integrator is the integral of the incoming analog waveform and increases with time. When the counter has reached a fixed number of counts, the integration process is halted by switching the input of the integrator to the negative reference voltage and resetting the counter. Given the negative reference voltage, the output of the integrator decreases until it reaches zero volts, which triggers the comparator and switches off the counter. The resulting digital output of the counter is then proportional to the analog voltage. The results of a typical conversion are shown in Figure 7.10. The dual-slope process is used to make the conversion independent of component tolerances.


Figure 7.10 - Operation of Dual-Slope Integrating A/D Converter

The problem with the integrating A/D converters is that their conversion time is dependent upon the magnitude of the input voltage. This can be observed by studying Figure 7.10. The maximum height of the integrator output is determined by the size of the input voltage (since the slope of the waveform is proportional to the input voltage). The negative reference voltage is fixed and so the down-going edge of the integrator output voltage always has the same slope. Hence, the higher the input voltage, the longer the time taken to obtain a conversion to a digital representation, since the number of counts increases with input voltage.

The successive approximation A/D converter is another device that is conceptually easy to understand. It is a simple, closed-loop system in which a digital number, stored in a register, is converted to analog (via an internal D/A converter) and then compared with the incoming analog signal (via a comparator). When the two voltages match, the value stored in the register is an accurate representation of the incoming analog signal. The successive approximation A/D is shown schematically in Figure 7.11. The conversion control logic is implemented in the form of a state machine.


Figure 7.11 - Schematic of 8-Bit Successive Approximation A/D Converter

In a successive approximation A/D, the conversion process begins when the most significant bit in the approximation register is set (while the other bits are reset). The analog output (from the D/A) resulting from this number is compared with the input signal. If the input voltage is greater than the voltage represented by the digital number, then the bit remains set to one, otherwise it is reset to zero. The second most significant bit is then set and the same test carried out. The process continues until all the bits in the approximation register have been tested, and thereafter, the register contains the digital equivalent of the analog input.

The conversion time in a successive approximation A/D is only dependent on the number of bits in the approximation register and is therefore constant for any particular device, regardless of the input voltage level. This is a major advantage over the integrating A/D and is one reason why successive approximation A/Ds are amongst the most widely used A/D devices.

It is clear that all A/D devices require a period of time in which to convert a signal from analog to digital representation. We already know that, as a result of the limited number of bits used in digital representation, the conversion of data is accompanied by a loss of information, through quantisation error. There is, however, another form of error introduced by the A/D conversion process, because of the time delay in converting from analog to digital. This is referred to as a "sampling error" because, in effect, when we use A/D devices, we are really only sampling an analog waveform.
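Before examining sampling in more detail, the bit-testing procedure of the successive approximation converter described above can be summarised in software. The following C sketch is illustrative only (not from the text); compare() and the reference values stand in for the comparator and internal D/A of Figure 7.11.

/*
 * Minimal sketch (assumptions, not from the text): the bit-testing
 * procedure of an 8-bit successive approximation A/D converter.
 */
#include <stdio.h>

static double g_vin;                       /* analog input under test  */
static const double VREF = 5.0;            /* full-scale reference     */

static int compare(unsigned char code)     /* comparator + internal D/A */
{
    double vdac = VREF * code / 256.0;
    return g_vin > vdac;
}

unsigned char sar_convert(void)
{
    unsigned char result = 0;

    for (int bit = 7; bit >= 0; bit--) {
        result |= (unsigned char)(1u << bit);       /* set trial bit    */
        if (!compare(result))                       /* D/A output high? */
            result &= (unsigned char)~(1u << bit);  /* then clear it    */
    }
    return result;                                  /* digital result   */
}

int main(void)
{
    g_vin = 3.3;
    printf("%u\n", sar_convert());    /* about 3.3/5.0 * 256 = 168      */
    return 0;
}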


Sampling is normally carried out at uniform time intervals (regardless of A/D type), but the sampling frequency is of crucial importance, as demonstrated in Figure 7.12.

Figure 7.12 - Results after A/D then D/A Conversion after Sampling Waveform (a) at a Range of Frequencies - (b) fs (c) 2fs (d) 4fs


Figure 7.12 (a) shows a voltage waveform which is to be converted into a digital form by a 4-bit A/D converter and then converted back to analog form by a 4-bit D/A converter. Figures 7.12 (b), (c) and (d) show the results for a range of different sampling frequencies. The sampling frequency in Figure 7.12 (d) is twice that in Figure 7.12 (c) which is, in turn, twice that in Figure 7.12 (b). The higher the sampling frequency, the greater the accuracy of the approximation of the original waveform. Since A/D converters require a finite time for conversion, we must ensure that the conversion speed is sufficient to enable an adequate sampling rate to be achieved, otherwise the results may be meaningless.

The so-called "Nyquist Sampling Theorem" establishes the basic rule-of-thumb for sampling of waveforms. The theorem is based on the fact that any waveform is the sum of a number of sinusoidal components of differing frequency and amplitude (the Fourier Theorem). The Nyquist Sampling Theorem states that the sampling frequency must be at least twice the highest frequency component of the original waveform in order for the waveform to be adequately represented in a digital form. In practice, sampling frequencies are selected to be three to four times the highest frequency component of the incoming analog waveform.

Another problem that arises in the A/D conversion process, as a result of the time taken to complete a conversion, is that the analog input waveform may vary during the conversion period. This means that even though a processor may initiate the analog to digital conversion cycle on an A/D chip at uniform time intervals, the digital outputs may not be uniformly spaced in time. This is particularly true for integrating A/D devices, whose conversion times are dependent on the voltage magnitude. The solution to this problem is realised by a device that samples the analog input waveform at uniform time intervals and holds that voltage level until the next sample is taken. In other words, these devices track the incoming waveform, providing an output voltage equal to the input voltage, until triggered by an external signal (eg: from a processor). After triggering has occurred, the devices hold their output voltage at a constant level until they return to sampling mode. A/D conversion can then take place during the hold period. These devices are known as "sample and hold" devices and can either be designed individually or purchased as a commercial item.

The basic structure of the sample and hold device is shown in Figure 7.13. It is composed of a capacitor (for holding a captured voltage), a transistor switch (for switching between hold and sample modes) and two operational amplifiers (configured as voltage followers). The first operational amplifier provides a high-input/low-output impedance circuit to drive the switch. The second operational amplifier provides a high input impedance to minimise the charge drain on the capacitor. When the sample line on the device is asserted, the output voltage approximately equals the input voltage, as does the voltage across the capacitor. When the sample line is reset (ie: hold is asserted) the capacitor is disconnected from the input and retains the previously attained voltage and charge, thus providing the "hold" function of the device.


Figure 7.13 - Schematic of Sample and Hold Circuit
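A crude software model of this track-and-hold behaviour is sketched below (illustrative only, not from the text). The small loss of voltage per step while holding anticipates the "droop" discussed in the next paragraph; the leakage value is an assumption.

/*
 * Minimal sketch (assumptions, not from the text): behaviour of a
 * sample and hold stage.  While the sample line is asserted the output
 * tracks the input; in hold mode the capacitor voltage is retained,
 * apart from a small leakage per time step.
 */
#include <stdio.h>

#define DROOP_PER_STEP 0.001     /* volts lost per step while holding  */

typedef struct {
    double capacitor_voltage;    /* voltage held on the capacitor      */
} sample_hold_t;

double sample_hold(sample_hold_t *sh, double vin, int sample_asserted)
{
    if (sample_asserted)
        sh->capacitor_voltage = vin;               /* track the input  */
    else if (sh->capacitor_voltage > 0.0)
        sh->capacitor_voltage -= DROOP_PER_STEP;   /* slow discharge   */
    return sh->capacitor_voltage;                  /* buffered output  */
}

int main(void)
{
    sample_hold_t sh = { 0.0 };

    sample_hold(&sh, 2.50, 1);                     /* capture 2.50 V   */
    for (int step = 0; step < 5; step++)           /* hold for the A/D */
        printf("held output = %.3f V\n", sample_hold(&sh, 3.0, 0));
    return 0;
}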

A number of factors determine the effectiveness of the sample and hold circuit. The primary one is the capacitor's ability to retain charge. The rate at which the capacitor discharges (thereby diminishing the stored voltage) is referred to as the droop rate of the circuit. The lower the droop rate, the better the holding characteristics of the circuit. The other major factor affecting the circuit is the switching speed of the transistor which ultimately determines how quickly a sample can be captured for holding. In addition to the above-mentioned reasons for using sample and hold circuits, there are several common applications in which sample and hold circuits can be used to precede the A/D conversion phase. These are:

When the input signal has a high frequency component that needs to be captured and digitised within a time-frame shorter than the conversion time of an A/D device.

When the input signal is not continuous or continuously present. The latter situation may occur with transients or when there are several signals feeding into the A/D through a switch (multiplexer).

When the cost of a high-speed A/D is unacceptably high for a given commercial application and a lower speed device can be substituted in tandem with a sample and hold circuit.

The last application is of interest to us because it highlights one of the major issues associated with A/D conversion - that is, component cost. A/D devices are relatively expensive in digital circuit terms and their cost is dependent upon conversion speed, resolution, etc. Although the cost of such devices is gradually decreasing, there may be instances in which it is necessary to share one A/D amongst several incoming signals. In digital circuit terms, the switching circuit that performs this function is called a multiplexer and is a commonly available IC component.


Figure 7.14 shows an application with eight incoming analog signals which share one A/D converter via sample and hold circuits and an "8 to 1" multiplexer.

Figure 7.14 - Using Sample and Hold Circuits with a Multiplexer to Share an A/D Converter Between Eight Analog Input Lines

The operation of the multiplexer is relatively easy to understand. A number of digital input-selector lines are used to connect the required input to the output of the circuit. For example, in the system shown in Figure 7.14, setting S2, S1 and S0 to one, zero and one, respectively, would connect analog input line 5 to the A/D converter and so on. The concept of sharing an A/D converter between a number of inputs really only has marginal benefits and normally one has to determine whether it is better to use one high performance A/D or several lower performance A/Ds to accomplish the task. If conversion speed is an issue, and one A/D is handling many input lines then its sampling performance per input line is reduced accordingly. However, the major benefit of sharing an A/D is when the device has a very high output resolution which cannot be readily achieved by several low-resolution chips in parallel.
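The following C sketch shows how a processor might select a multiplexer channel and then trigger and read the shared A/D in an arrangement like that of Figure 7.14. It is illustrative only: the register layout, bit assignments and the stub port routines are assumptions, not part of the text, and would be replaced by real port I/O on actual hardware.

/*
 * Minimal sketch (assumptions, not from the text): reading one of eight
 * multiplexed analog inputs through a shared A/D converter.
 */
#include <stdint.h>
#include <stdio.h>

/* Stub register file standing in for a real parallel interface. */
static uint8_t regs[4];
static void    write_port(int addr, uint8_t v) { regs[addr] = v; }
static uint8_t read_port(int addr)
{
    if (addr == 2) return 0x01;            /* pretend conversion done  */
    if (addr == 3) return 0x80 + regs[0];  /* dummy result per channel */
    return regs[addr];
}

enum { MUX_SELECT = 0, ADC_CONTROL = 1, ADC_STATUS = 2, ADC_DATA = 3 };

uint8_t read_channel(uint8_t channel)
{
    write_port(MUX_SELECT, channel & 0x07);     /* e.g. 5 = binary 101 */
    /* (a short settling delay for the MUX and S&H would go here)      */
    write_port(ADC_CONTROL, 0x01);              /* trigger conversion  */
    while ((read_port(ADC_STATUS) & 0x01) == 0)
        ;                                       /* poll until complete */
    return read_port(ADC_DATA);                 /* converted value     */
}

int main(void)
{
    printf("%u\n", read_channel(5));            /* read analog input 5 */
    return 0;
}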


7.4 Signal Conditioning, Protection and Isolation


7.4.1 Introduction
The concepts behind signal conditioning, protection and isolation are almost as broad as the entire field of Electrical Engineering and it is not practical to provide a detailed coverage of all the points involved. The purpose of the sections that follow is to overview some basic devices that can be considered for the various tasks. However, before we do so, it is important to clarify what we mean by the terms encompassed in this section.

The basic problem that we need to address really relates back to Figure 7.2, which is our global diagram for interfacing. The general case of interfacing signals from the real world to digital circuits requires that we take analog signals in various energy forms and convert them to digital voltages of a size compatible with the inputs to the computer (or, more specifically, the programmable parallel interface). In section 7.3, we looked at the process of converting analog voltages into a digital form. In section 7.5 we will look at converting signals from other energy forms into a voltage representation. In this section, however, we examine the process of taking voltages and converting them into a size and shape compatible with digital circuits, and conversely, taking outputs from digital circuits and converting them to a form suitable for driving external systems. We also consider the need to shelter digital circuits, and the users of those circuits, from dangerously high voltage and current levels that may exist in external systems.

The following definitions are used in this book:

Signal Conditioning: Conversion of voltages from their raw form (either analog or digital) to a form suitable for use in another (analog or digital) circuit

Protection: The design of circuits that can be used to prevent spuriously high signal levels from damaging other circuits or human operators

Isolation: The design of circuits, based on devices that transfer electrical energy via some non-electrical intermediary form (magnetic circuits, optical circuits) and facilitate energy transfer without direct electrical connection.


7.4.2 Signal Conditioning Circuits


Signal conditioning circuits can be broadly classified into several groups, based upon the type of incoming signals and required outgoing signals. These include:

(i) Waveform Correction Circuits:
Schmitt Trigger
Debouncing Circuits

(ii) Scaling Circuits:
Analog Amplifiers
Digital (PWM) Amplifiers
Transformers

(iii) Filtering Circuits:
Analog (R-L-C circuits)
Digital (Switched-Capacitance Filters)

We shall examine each of these circuits briefly, in turn.

(i) Waveform Correction Circuits

Practical digital circuits operate at a very high speed and require inputs that are close to the "ideal" rectangular waveform shape in order to function correctly. Many incoming waveforms, which are thought to be digital, may not in fact change at a sufficiently high speed to be considered thus by a TTL or MOS digital circuit. Sometimes, it is also desirable to have an incoming analog signal converted to a digital signal by some "simple" circuit. The circuit that achieves these ends is known as a "Schmitt Trigger" and is a commonly available component. The circuit symbol for the Schmitt Trigger is a triangle with a hysteresis loop drawn within. This is shown in Figure 7.15 (a). The operation of the circuit is shown schematically in Figures 7.15 (b) and (c). The circuit works with two distinct threshold or "trip" levels, known as the upper trip point (UTP) and the lower trip point (LTP). When the input voltage exceeds the UTP, the output of the trigger sharply rises, as shown in Figures 7.15 (b) and (c). Similarly, when the input waveform drops below the LTP, the output of the trigger drops sharply.


The fact that the LTP is lower than the UTP means that the Schmitt Trigger can also be used to improve incoming digital signals that contain some noise. Noise can sometimes vary a digital signal below or above the required digital logic level, thereby creating an erroneous bit. However, when the Schmitt Trigger is used to improve the signal, any noise induced during a "high" period in the waveform would have to reduce the signal below the LTP before any change in output would occur. Similarly, any noise induced during a low period in the waveform would have to raise the signal above the UTP before a change in output would occur.
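The hysteresis behaviour described above can also be summarised in software form. The following C sketch is illustrative only (the trip point values are assumptions, not from the text); it shows why a noisy input that stays between the LTP and the UTP cannot change the output.

/*
 * Minimal sketch (assumptions, not from the text): the hysteresis of a
 * Schmitt Trigger expressed in software.  A sample only changes the
 * output when it crosses the UTP on the way up or the LTP on the way
 * down; values in between leave the output unchanged.
 */
#include <stdio.h>

#define UTP 3.0    /* upper trip point, volts (illustrative) */
#define LTP 1.0    /* lower trip point, volts (illustrative) */

int schmitt(double vin, int previous_output)
{
    if (vin > UTP) return 1;           /* rose above the UTP  */
    if (vin < LTP) return 0;           /* fell below the LTP  */
    return previous_output;            /* in between: hold    */
}

int main(void)
{
    double samples[] = { 0.5, 2.0, 3.5, 2.5, 1.5, 0.5, 2.9 };
    int out = 0;

    for (int i = 0; i < 7; i++) {
        out = schmitt(samples[i], out);
        printf("vin = %.1f -> out = %d\n", samples[i], out);
    }
    return 0;
}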

Figure 7.15 - (a) The Schmitt Trigger Circuit Symbol (b) "Squaring Up" Slowly Changing Input Signals (c) Producing Digital Signals from an Offset Sinusoidal Input Waveform


The Schmitt Trigger circuit can improve signals with a small amount of noise but there are some inputs that contain so much noise that they need to be processed by other circuits before being fed into a digital circuit. The most common problem is with mechanical switches, which all suffer from a property known as "bounce". Bounce, as its name implies, means that switches do not open and close (break and make) cleanly, but rather oscillate from one state to another for a few milliseconds before settling. Although a human user would not notice this short time-frame phenomenon by observing an ohm-meter, the results of switch bounce can lead to incorrect data flowing into a digital circuit. As a result, all switches need to be "debounced" before information from them can be processed. Debouncing can be achieved by simply waiting several milliseconds before reading a switch state or, in simpler digital circuits, through a simple "debouncing circuit". A debouncing circuit can be achieved via a Schmitt Trigger based system with an RC network or, preferably, through an R-S Flip-Flop arrangement, made from two NAND gates, such as the one shown in Figure 7.16.

Figure 7.16 - Switch Debouncing Circuits Based on R-S Flip-Flops
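As a software counterpart to the hardware circuits above, the following C sketch (illustrative only, not from the text) accepts a new switch state only after it has been read consistently for several consecutive polls - the "wait several milliseconds" approach mentioned earlier. The poll count and the stubbed switch readings are assumptions.

/*
 * Minimal sketch (assumptions, not from the text): debouncing a switch
 * in software.  read_raw_switch() stands in for reading the real input
 * line; here it returns a canned, bouncing sequence.
 */
#include <stdio.h>

#define STABLE_POLLS 5    /* e.g. 5 polls at 1 ms each = 5 ms settling */

static int read_raw_switch(void)
{
    static int fake[] = { 1, 0, 1, 1, 1, 1, 1, 1, 1, 1 };  /* bouncing */
    static int i = 0;
    return fake[i < 10 ? i++ : 9];
}

int debounced_read(void)
{
    int stable_count = 0;
    int last = read_raw_switch();

    while (stable_count < STABLE_POLLS) {
        int now = read_raw_switch();   /* poll again (1 ms apart, say) */
        if (now == last)
            stable_count++;            /* reading is holding steady    */
        else
            stable_count = 0;          /* still bouncing - start again */
        last = now;
    }
    return last;                       /* accepted, debounced state    */
}

int main(void)
{
    printf("switch = %d\n", debounced_read());
    return 0;
}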


(ii) Scaling Circuits

One of the most common requirements in interfacing devices is the scaling of voltages. There are two techniques by which voltages can be scaled. These are:

Amplification
Transformation.

Amplification can be either analog (linear transistor circuits such as operational amplifiers) or digital (PWM switching of transistors). Both analog and digital amplification can be used on time-invariant or time-variant, alternating and direct current signals and are the most versatile methods for scaling. The simplest techniques are realised through the use of operational amplifier circuits, which have been discussed in detail in 3.3 and 3.5, but in recent years, the more energy-efficient (and complex) PWM digital techniques have been applied for power circuits. The basic concept of analog and digital amplification is shown in Figure 7.17.

Figure 7.17 - The Concept of Amplification


Looking at Figure 7.17, and applying the concept of conservation of energy, we know that:

v_in·i_in + v_supply·i_supply = v_out·i_out + P_amp    ...(3)

where:

P_amp is the power converted to heat in the amplifier.

The important point to note about equation (3) is that the output signal power of the amplifier (v_out·i_out) can be larger than the input signal power (v_in·i_in) because additional power is provided by the amplifier supply rails. This means that the amplifier can be used to convert small signals into larger signals. We also know that the output voltage and current from an amplifier are a scaled version of the input voltage and current, but that the relationship between input and output is also frequency dependent as a result of the physical attributes of the amplifier, as shown in Figure 7.17 and discussed in detail in 3.4 for the analog amplifier.

The transformer is the other obvious device that we can use for scaling of analog signals. The operation of the transformer is shown conceptually in Figure 7.18.

Figure 7.18 - The Transformer in Concept


Looking at Figure 7.18, we can deduce the following relationship between input and output power in a transformer:

v_in·i_in = v_out·i_out + P_trans    ...(4)

where:

P_trans is the power consumed by the transformer through hysteresis, eddy-current and copper losses.

Equation (4) highlights the major difference between the transformer and the amplifier. The transformer is a passive device and if we use it to scale up voltage, then the corresponding output current decreases and vice versa. The other drawback of the transformer is that its frequency response is somewhat limited. Firstly, no transformation occurs at zero frequency and secondly, the upper frequency limits of common transformers are typically much lower than those of common amplifiers. A further limitation of the transformer is its physically large size. However, the transformer has the major advantage of providing complete electrical isolation between the input and output.

While analog amplification and transformation are both suitable for scaling inputs to digital circuits, the problem of converting the small signals produced by digital circuits into large voltages and currents for driving high powered systems presents a number of additional problems. Analog transistor amplifiers can be used at high power levels and a number of high-powered operational amplifiers can also be purchased. The problem with all analog amplifier circuits is that they are really using transistors as a variable resistance that controls energy flow from the supply rails to external circuits. The bulk of the difference between the signal and supply energy input and the signal energy output is dissipated in the variable resistance provided by the transistors (when in linear operation). This means that a great deal of energy is converted into unwanted heat in the analog amplification process. The energy wastage is irrelevant in small circuits but can be significant in large systems. The heat generated by the energy wastage is relevant in all circuits because it has to be removed by cooling mechanisms such as fins or electric fans.

A more modern, digital form of signal amplification that is used to provide a variable output voltage is called pulse-width modulation or PWM. PWM circuits are available as commercial ICs and are really sequential circuits that can generate rectangular output waveforms with varying duty cycles (ie: varying "on" to "off" ratios). The outputs are then used to switch power transistors on and off.


The average value of the PWM output can thus be varied between zero and the supply rail voltage. This is shown in Figure 7.19, which illustrates the average output voltage for a range of PWM output waveforms.

Figure 7.19 - PWM Output for a Range of Different Duty-Cycles

Although we normally talk about varying the average voltage with PWM circuits, the reality is that the output is always a rectangular waveform, whose average value can be periodically varied. It is not a smooth output. Many systems are however tolerant of this type of digital waveform, including d.c. motors, where the technique is used. This is because the motors have an inductance whose natural tendency is to smooth (choke) the waveform. In other applications, smoothing filters may need to be added to make the outputs more suitable.
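The relationship between duty cycle and average output voltage can be illustrated with a short sketch. The following C fragment is illustrative only - the supply voltage, slice count and output routine are assumptions, not taken from the text - but it captures the idea that the average output equals the duty cycle multiplied by the supply rail voltage.

/*
 * Minimal sketch (assumptions, not from the text): duty cycle versus
 * average output voltage, plus a crude software PWM period.
 */
#include <stdio.h>

#define V_SUPPLY 24.0                   /* supply rail, volts (assumed) */

double pwm_average(double duty_cycle)   /* duty_cycle between 0 and 1   */
{
    return duty_cycle * V_SUPPLY;
}

static void set_output(int level) { (void)level; /* drive the line here */ }

void pwm_period(double duty_cycle)      /* one period in 100 time slices */
{
    for (int slice = 0; slice < 100; slice++)
        set_output(slice < (int)(duty_cycle * 100.0));   /* on then off */
}

int main(void)
{
    printf("20%% duty cycle -> %.1f V average\n", pwm_average(0.20));
    printf("50%% duty cycle -> %.1f V average\n", pwm_average(0.50));
    pwm_period(0.20);
    return 0;
}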


PWM circuits are only low power digital circuits and cannot be directly used to drive any high power circuits. Instead, the output of a PWM circuit is normally used to drive the base or gate of high power transistors which, in turn, switch between high and low supply-rail voltages and can thereby provide substantial power outputs. The net effect is the creation of a digital amplifier circuit.

The benefits of PWM based amplification over analog amplification can be substantial. Firstly, the transistors in a PWM based amplifier do not function in their linear region as variable resistors. They are only used in their "on" and "off" modes. This means that the power dissipation in the amplifier is substantially reduced and that power amplifier circuits based on PWM techniques can be much smaller than those based on analog circuits. The disadvantage is that we are always dealing with rectangular waveforms and this means that we sometimes need to develop more sophisticated control systems when we wish to drive systems based on, for example, sinusoidal voltages as in the case of three-phase motors. PWM circuits also have a limited bandwidth in which they can operate. This is governed by the switching speed of the transistors within the PWM and, more importantly, the switching speed of the power transistors in the amplification section of PWM based drives.

The most common application of PWM based amplification is in switch-mode power supplies, commonly found in computers. The typical arrangement is shown in Figure 7.20.

Figure 7.20 - Switch Mode Power Supplies Based Upon Amplification of PWM Signals


(iii) Filtering Circuits


Fourier analysis tells us that any waveform can be represented as the sum of a number of sinusoidal components of differing frequency and amplitude. Although we normally deal with systems in the time domain, there are instances where we also need to examine the frequency domain of signals. Mathematically, the conversion between the time-domain representation of a signal, x(t), and its frequency domain (spectrum) representation, X(f), is defined by the dual Fourier transforms:

X(f) = ∫_{-∞}^{+∞} x(t) e^{-j2πft} dt    ...(5)

and

x(t) = ∫_{-∞}^{+∞} X(f) e^{+j2πft} df    ...(6)

Equation (5) gives us the frequency spectrum of any time domain waveform, which, in general, is composed of a number of wanted components and also unwanted components that need to be removed. The process of removing unwanted components at various frequencies is called filtering. The spectrum of an incoming signal is shown in Figure 7.21, including wanted and unwanted components. The unwanted components can be filtered out by using a range of different filters, generically known as:

Low-Pass (which only pass frequencies from zero to an upper limit)
Band-Pass (which only pass frequencies between two frequency limits)
Notch (which pass frequencies between zero and an upper limit but not between two intermediary points).

The design of filter circuits is a specialised field in its own right and is a complex process, which will not be detailed in this book, particularly since there are many classic text books in the field. However, suffice to say that there are three techniques for filtering waveforms that have been obtained from some external system. These are:

Develop an analog circuit, composed of resistors, capacitors and inductors, that has the appropriate frequency domain characteristic to achieve the desired result. There are many text books written on how this can be achieved


Develop a digital filter circuit using modern filtering techniques such as switched-capacitance filters (which are available as standard components). This is relatively easy to do but has performance limitations.

Use an algorithmic approach, where the unfiltered signal is fed into a computer system, and an algorithm developed to obtain the frequency spectrum, and then only use the wanted components.

Figure 7.21 - Filtering Out Unwanted Signals with Low-Pass, Band-Pass and Notch Filters


Of the three possible approaches to filtering, the second is perhaps the most practical because it can be based upon a commercial solution that only needs minor tailoring in order to get the desired effect. The other two approaches require a sound understanding of network theory and Fourier analysis techniques and are not really suitable for novices in the field. Moreover, even if one does understand the basic theory of analog filter design, fabrication can be a problem if elements such as inductors need to be included. Inductors can take up a significant amount of space and, in any case, normally need to be specially wound for filter applications.

As a final point, it is interesting to note that all electrical circuits exhibit filtering characteristics because they have resistance, capacitance and inductance, even when these are parasitic entities and not actual (lumped) components in the circuit. The most common characteristic is therefore the low-pass characteristic, which occurs with all forms of amplifier and was typified in Figure 7.17. The transformer is a circuit element which exhibits a band-pass characteristic, because it will pass neither d.c. nor high frequency signals.
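As a simple illustration of the algorithmic approach, the following C sketch (illustrative only, not from the text) applies a first-order low-pass update to already-digitised samples. It is a time-domain alternative to the spectrum-based method described above, and the smoothing constant is an assumption chosen purely for the example.

/*
 * Minimal sketch (assumptions, not from the text): a first-order
 * low-pass filter applied to sampled data.
 *
 * y[n] = y[n-1] + alpha * (x[n] - y[n-1])
 */
#include <stdio.h>

double lowpass_step(double input, double previous_output, double alpha)
{
    return previous_output + alpha * (input - previous_output);
}

int main(void)
{
    /* A slow trend with alternating high-frequency noise superimposed. */
    double x[] = { 0.0, 2.0, 0.5, 2.5, 1.0, 3.0, 1.5, 3.5, 2.0, 4.0 };
    double y = 0.0;

    for (int n = 0; n < 10; n++) {
        y = lowpass_step(x[n], y, 0.25);
        printf("x = %.2f  filtered = %.2f\n", x[n], y);
    }
    return 0;
}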

7.4.3 Protection Circuits


Industrial systems, composed of mechanical and electronic devices, can be very hostile as far as the circuits within a computer system are concerned. Digital circuits in a computer system are designed to work at low (digital) voltages and can only source or sink very small currents (in the order of milli-Amps). The outside world, on the other hand, is filled with large analog voltages and currents which could very easily damage the internal circuitry of a computer system. The signals fed back to a computer from an external system are also unpredictable for a number of reasons:

The external equipment may be susceptible to large voltage or current spikes (for example, an induction motor current is abnormally large when the motor is started direct on line)

The external equipment may accidentally be connected to other high voltage systems, as a result of cables on moving equipment accidentally disconnecting themselves

Terminals on external equipment may be short-circuited as a result of human error or conducting materials falling across terminals.


As far as feedback signals into the computer system are concerned, simply scaling the levels to a suitable size and conditioning them into an appropriate shape may not be adequate for protecting the computer circuits from damage. There are situations in which external devices could be subjected to unexpected energy surges which ultimately lead to voltage spikes at the input of the computer system. If this is a possibility then there are some simple protection measures that can be employed. There are several devices available for protecting digital circuits from high voltages and currents by disconnecting them from the source. They include:

Zener Diodes
Thyristors
Relays.

The first two devices have been covered in detail in Chapter 3 and so we will now examine the operation of the relay, which is a relatively simple device to understand. Essentially, the relay is a spring-loaded switch that is pulled open or closed by a force generated by a magnetic field which is, in turn, generated by the flow of current in a coil. Relays come in an enormous range of sizes and can be triggered by coil currents as low as micro-amps or as high as amps. Relays also come in two different configurations, referred to as "normally-open" and "normally-closed". A normally-open relay is one where the switch is in its open-circuit position until current flows in the coil. A normally-closed relay is one where the switch is in its short-circuit position until current flows in the coil. The relay is shown schematically in Figure 7.22.

Figure 7.22 - Relay Configurations (Normally-Open and Normally-Closed)


Relays can operate relatively quickly in mechanical terms, typically changing state in the order of milli-seconds. This switching speed is generally fast enough to prevent injury to human users but may not be fast enough to protect delicate circuits. In these instances, it is necessary to revert to semiconductor based switches that can provide response times in the order of nanoseconds or microseconds rather than milliseconds. In Chapter 3, it was noted that Zener diodes can be used to regulate voltage levels. If the breakdown voltages on these diodes are judiciously chosen, then these diodes will restrict voltage inputs to computer circuits to a maximum acceptable level, while still allowing normal signals to pass through. Back to back Zener diodes enable protection in a.c. circuits. The configuration is shown in Figure 7.23, where an additional safety feature, a (normally-closed) relay, is incorporated into the Zener diode branch of the circuit. The relay is "slow-acting" in comparison to the Zener diode but, when a current flows through the Zener diode branch for a sufficient period, the relay will isolate the offending current source in order to minimise the potential for damage.

Figure 7.23 - Protection Using Zener Diodes and Relays

The semiconductor alternative to the sort of relay circuit shown in Figure 7.23 is the Silicon Controlled Rectifier (SCR) based circuit known as the crowbar. The crowbar circuit has already been discussed in 3.7.2 and is shown in Figure 3.53. High currents in mechatronic circuits can arise from short-circuit conditions or changing mechanical conditions such as excessive loads on motors. Most over-current conditions can be handled through the use of relays that are triggered once currents exceed a certain level. Sometimes, the current flowing through a particular branch of a circuit can be too high for a given relay. In those instances, a scaled version of the current may need to be passed through the relay coil and this is normally achieved by using a transformer with a suitable turns ratio.


So far we have only looked at protecting the inputs of low power circuits. However, there may be instances where we need to protect the outputs. This typically occurs when we need to drive some external device from a low-power circuit. The common solution is to use transistor based amplification, but sometimes simpler relay based solutions can achieve the same result. Consider the situation where the programmable parallel interface (PPI) from a computer system needs to perform a function such as turning on a lamp that draws several Amperes of current. A simple circuit, composed of an external power supply that is turned on by a relay with a triggering current of a few micro-Amperes, can be used to drive the lamp as shown in Figure 7.24. The triggering current for the relay can be set up by inserting an appropriate current-limiting resistor RL in series with the coil. The current through the external device can be adjusted by designing an appropriate power supply or by varying an appropriate series resistance.

Figure 7.24 - Driving Simple High Current Circuits from Digital Outputs by Using Relays

The circuit of Figure 7.24 is really a crude electro-mechanical version of an amplifier and can also be used in situations where a low current output is incapable of driving a large fan-out of other circuits. Providing that the relay switching speed is adequate, this simple type of circuit protects the low-power digital circuit from damaging itself by providing an excessively high output current.
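A minimal software view of this arrangement is sketched below. The port address, bit assignment and write_port() routine are assumptions made purely for illustration - a real programmable parallel interface would be driven through whatever I/O mechanism the particular computer provides.

/*
 * Minimal sketch (assumptions, not from the text): switching the relay
 * of Figure 7.24 from a programmable parallel interface output line.
 */
#include <stdint.h>
#include <stdio.h>

#define PPI_PORT_B   0x61    /* hypothetical output port address       */
#define LAMP_RELAY   0x01    /* bit 0 drives the relay coil (assumed)  */

static uint8_t shadow;       /* last value written to the port         */

static void write_port(uint16_t addr, uint8_t value)
{
    (void)addr;
    shadow = value;          /* replace with a real port write         */
}

void lamp_on(void)  { write_port(PPI_PORT_B, shadow |  LAMP_RELAY); }
void lamp_off(void) { write_port(PPI_PORT_B, shadow & ~LAMP_RELAY); }

int main(void)
{
    lamp_on();               /* energise the relay coil -> lamp on     */
    printf("port = 0x%02X\n", shadow);
    lamp_off();              /* de-energise the coil -> lamp off       */
    printf("port = 0x%02X\n", shadow);
    return 0;
}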


7.4.4 Isolation Circuits


A common problem with extracting voltages from external systems is that voltages are relative quantities. For example, if we had two terminals on an external device (A and B) and we wanted to feed the voltage across those terminals (VAB) back to the computer through an A/D converter, then we may need to provide special isolation circuits, even if VAB is relatively small. The problem is clearly illustrated by the example in Figure 7.25.

Figure 7.25 - The Problem of Measuring Small Voltage Differences Between Two Voltages Which are Both High with Respect to Earth

In Figure 7.25, it is evident that if VB is large (say 500 volts) and VA is larger (501 volts) then, even though the difference between the two is small, the value of either, with respect to earth, is high. In these situations it is not the voltage VAB which is of concern. It is the fact that the voltage on either of the two lines (with respect to the ground pin that will exist on the A/D) will be very high. In other words, two pins, separated by a few millimetres, may have a very large voltage across them. This is clearly dangerous and highlights the need for isolation. It would be very convenient if terminal B was simply floating on the external device and we could simply ground it without affecting the voltage across A and B. However, in general we do not have this luxury and grounding a high voltage can be extremely dangerous. For this reason, we need to use devices that can isolate the computer circuitry from the high voltages and still provide the required signal. If the voltages are low-frequency a.c., then a transformer inserted between the two devices will perform the necessary task, by allowing us to ground the side closest to the A/D device. This is shown in Figure 7.26.


Figure 7.26 - Isolation Using Transformers

A similar problem exists when we need to measure the current passing through a conductor. Theoretically, we could simply insert a small resistance into the line and measure the voltage across it - however, that would simply create the same problem as in Figure 7.25. A good solution for a.c. waveforms is to use a current transformer (which is simply a voltage transformer with a small resistance across one side). This is shown in Figure 7.27 where, as an example, we wish to measure the current through a motor circuit. The current in the motor circuit is transformed to the primary side, and since the A/D has a very high input resistance, the current predominantly passes through the small resistor, R. The voltage across the resistor is then proportional to the current in the motor. Moreover, the apparent resistance in the motor circuit is reduced by the square of the turns ratio, thereby minimising the effects of the current monitoring process.

Figure 7.27 - Using a Current Transformer to Isolate and Monitor Current


Transformers are convenient devices to use if space permits because they not only provide isolation but also scaling of voltage levels through their turns-ratio. However, transformers do not work with time-invariant signals and generally attenuate high frequency signals, so they clearly have a limited application range. Other devices such as opto-couplers (opto-isolators) provide another mechanism for isolation that can be used for both a.c. and d.c. signals. The operation of the opto-coupler is shown schematically in Figure 7.28 (a).

Figure 7.28 - Opto-Couplers (Isolators) (a) Simple Opto-Isolator (b) Darlington-Pair High Gain Opto-Isolator

The simple opto-isolator is composed of a Light-Emitting-Diode (LED) and a light-sensitive phototransistor. The output of the transistor is dependent upon the intensity of light emitted from the LED which is, in turn, dependent upon the amount of current flowing through it. The transfer gain can be improved significantly (by a factor of 15 or more) by using a Darlington-Pair transistor configuration as in Figure 7.28 (b). Opto-isolators sometimes suffer from linearity problems and so it is necessary to ensure that non-linearity is compensated for by processing algorithms. However, the opto-isolators do fulfil an important role in interfacing because they are physically smaller than transformers and are capable of isolating both a.c. and d.c. signals.
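One common way of compensating for such non-linearity in software is a calibration table with interpolation between measured points. The following C sketch is illustrative only - the table values are invented, not measured data from the text - but it shows the general technique of correcting an A/D reading taken through a non-linear device.

/*
 * Minimal sketch (assumptions, not from the text): lookup-table
 * linearisation of a non-linear transfer characteristic, using linear
 * interpolation between calibration points.
 */
#include <stdio.h>

#define N_POINTS 5

/* Measured A/D reading versus the true input value that produced it. */
static const double adc_point[N_POINTS]  = { 0.0, 40.0, 95.0, 165.0, 255.0 };
static const double true_point[N_POINTS] = { 0.0, 25.0, 50.0,  75.0, 100.0 };

double linearise(double adc_reading)
{
    for (int i = 1; i < N_POINTS; i++) {
        if (adc_reading <= adc_point[i]) {
            double span_adc  = adc_point[i]  - adc_point[i - 1];
            double span_true = true_point[i] - true_point[i - 1];
            return true_point[i - 1] +
                   (adc_reading - adc_point[i - 1]) * span_true / span_adc;
        }
    }
    return true_point[N_POINTS - 1];       /* clamp at full scale */
}

int main(void)
{
    printf("%.1f\n", linearise(120.0));    /* interpolated true value */
    return 0;
}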


7.5 Energy Conversion - Transducers


The purpose of a transducer is to convert one energy form into another. In terms of interfacing industrial systems to digital circuits (and computers), the purpose of transducers is to convert some raw energy form, such as pressure, temperature, etc. into voltage. Depending on the type of transducer, the output can either be analog (which is still the more common form) or digital. The voltage then has to be scaled and, if analog, converted into a digital form for interaction with other digital circuits.

We could also extend the idea of transducers and say that an electric motor, relay or solenoid are transducers because they convert electrical energy into some form of mechanical energy, normally to provide movement. In these instances, the transducers are more specifically "actuators". These devices are generally electromagnetic in nature and are vitally important in the creation of mechatronic systems. Electromagnetic actuator devices will be covered in some detail in Chapter 9.

Limiting our discussions here primarily to input transducers, we can say that there are an enormous number of commercially available devices to meet the requirements of modern interfacing tasks and the range is continually expanding. It is well nigh impossible to cover the range of available devices and there is little point in doing so, since their characteristics are ultimately dependent on the specific devices in question. In this section therefore, we will only review a few of the basic transducers.

There are a number of characteristics that need to be ascertained before transducers can be selected for an application. These include:

(i) Threshold
(ii) Resolution
(iii) Input Range
(iv) Linearity
(v) Frequency Response
(vi) Monotonicity
(vii) Hysteresis
(viii) Repeatability
(ix) Slew-Rate
(x) Stability.

Characteristics (i) to (iii) are the ones which most people would intuitively check. The input threshold defines the minimum input energy that the transducer will detect. The resolution defines the minimum change in input energy that will be reflected in the output of the device and the input range determines the minimum and maximum input energy levels for the transducer.


Characteristics (iv) to (vi) really define the real-time performance of the transducer. Linearity in transducers signifies that the output energy is linearly related to the input energy and simplifies processing of data. However, not all transducers are linear and many have a limited range of linearity, after which they tend to saturate. As with any other transformation device, the frequency response is important. Most mechanical and electronic systems tend to act as low-pass filters that attenuate high-frequency components. This is a reflection of the difficulty involved in changing any physical energy at a very high rate. However, when selecting transducers it is necessary to ensure that they can at least cope with the range of input frequencies expected. Sometimes it is necessary to measure or estimate the highest frequency input before selecting a transducer. Monotonicity is a characteristic that can have significance in control applications. In a monotonic device, an increasing input always leads to an increasing or static output. Similarly, a decreasing input always leads to a decreasing or static output. Devices which are not monotonic are difficult to deal with in a control sense.

Characteristics (vii) to (x) relate to both the short and long-term accuracy of the transducer. Hysteresis, which leads to differences between forward and reverse characteristics of many physical devices, is also common in transducers. This can lead to two different outputs for the same input, depending on whether the input is on the rise or fall when the output is recorded. Repeatability is really a tolerance or accuracy factor. It defines the maximum error between successive output values for a given input value. Slew-rate is a performance factor and some might argue that it is related to the frequency response. Slew-rate is the maximum rate of rise of output and so defines how quickly the transducer can respond to inputs such as step changes. Stability relates to the ability of the transducer to maintain its characteristics and accuracy over a period of time. Most electronic components have characteristics that vary with age - doping levels in semiconductors change, resistance characteristics change and so on. Mechanical components wear with use and so the working lifespan of a transducer needs to be considered during the selection process.

There are too many transducers to enable a complete coverage of this subject and so, what follows is an overview of some of the more common devices:

(a) Switches

Few would consider the simple mechanical switch to be a transducer and yet, upon reflection, it is evident that this device converts mechanical energy into electrical voltages. More specifically, switches in all their various forms (including key-pads and keyboards) are the fundamental interface between digital circuits and human users. Basic mechanical switch operation is self-evident and the problems in converting mechanical switch movements into sensible voltages (by debouncing) have already been covered in 7.4.2.


(b) Light Emitting Diodes

Light Emitting Diodes or LEDs are another family of simple devices that many would overlook when discussing transducers. However, they are an invaluable part of many control and monitoring systems because they provide a very simple mechanism for output of information to human users. In essence, the purpose of the LED is to convert electrical energy into light in the visible spectrum. LEDs are also used in conjunction with phototransistors in order to generate opto-couplers as described in 7.4.4.
LEDs are available in a range of different colors, including red, green, yellow, orange and white. The common circuit symbol for all LEDs is the same, regardless of output color, and is the simple diode symbol with two additional arrows, as shown in Figure 7.28. The characteristics of the diode are essentially similar to those of the normal p-n junction diode, discussed in Chapter 3, with the only major differences being that the LED's forward breakdown voltage is typically higher and reverse breakdown voltage lower than that of a normal diode. LEDs are extremely useful because they can be driven by most common digital circuits, normally with an open-collector gate, that can provide the necessary forward current to cause illumination.

(c) Potentiometers

Potentiometers are another family of simple transducers that are commonly used in interfacing and control circuits. A potentiometer is really nothing more than a variable resistor, whose resistance changes with the movement of a central arm known as a wiper. The potentiometer therefore translates rotational position into a variable voltage. Potentiometers are used at the human-electronic interface, where they normally appear as control knobs, and also in older servo motor control systems, where they were coupled to the shaft of the motor to indicate the rotational orientation (position) of the motor. The potentiometer is shown schematically in Figure 7.29.
A fixed d.c. or a.c. voltage can be applied across the two ends of the potentiometer and the output voltage taken between any one end and the wiper arm. The resolution of the potentiometer depends on the number of turns of wire forming the total resistance. High quality potentiometers can become relatively expensive devices, particularly when they are used in high power circuits for accurate position detection. High quality potentiometers are also designed to provide minimal resistance variation with temperature. Potentiometers were primarily used in analog systems and most modern, computer-based control systems achieve accurate user input via other mechanisms such as keyboard or mouse input.


Figure 7.29 - Schematic of Potentiometer

(d) Shaft Encoders and Resolvers

One of the most common requirements of transducers is to convert a rotational or linear position into a voltage signal that can be used in a digital control system. Transducers which achieve this objective are known as resolvers or encoders. The term "resolver" tends to relate to a family of analog devices, whereas encoders are digital devices. By far the larger of the position feedback problems is the conversion of rotational position to voltage because this is a basic requirement of all servo drive systems and is therefore used in robots and CNC machines.
A simple, linear, position feedback system can be created by using a linear version of the potentiometer shown in Figure 7.29. The movement of a wiper arm, along one axis, creates an output voltage which is directly proportional to position. This is analogous to the orientation sensing of the traditional potentiometer. A more sophisticated form of analog position feedback system for shaft rotation is the so-called synchro-resolver, which was used for some years in NC and CNC machine tool systems. The system works by having one set of electrically-energised rotational coils (known as the armature) connected to the shaft of a rotating device and one stationary set of coils, within which the rotor spins. The sinusoidal and cosinusoidal output voltage waveforms (generated in accordance with Faraday's Law) are then used to detect relative position. The problem with these devices is their cost and complexity but the accuracy can be significantly better than the potentiometer technique.


The most common form of rotational position detection system is the digital shaft encoder. There are essentially two types of shaft encoder - those that provide absolute position and those that provide incremental position. Both of them work on the same principle, which is based on a rotating disk with transparent slots. A set of LEDs is placed on one side of the disk and a set of phototransistors on the other. Each time a transparent slot passes by a LED, the corresponding phototransistor on the other side of the disk generates a pulse. The basic scheme is shown schematically for the incremental position encoder of Figure 7.30.

Figure 7.30 - Incremental Position Encoder

In Figure 7.30, two LED-phototransistor pairs are placed 90° apart in space. The rotational movement of the disk causes an identical pulse train to be generated by each phototransistor - however, the pulse trains are different in phase because of the physical separation of the detectors. Using a counter circuit to count the pulses provides an indication of incremental movement, while the phase sequencing between the two outputs provides an indication of direction.

Absolute position encoders are generally less accurate than incremental encoders. In an absolute position encoder, each disk has "n" circular tracks of slots, from the outside to the central hub. An equivalent number of LED-phototransistor pairs are placed along the same radial position on the disk. At any instant, the reading from the in-line phototransistors is an "n-bit" number representing absolute position.
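Returning to the incremental encoder, the direction-sensing role of the phase sequencing can be illustrated with a short decoding sketch. The following C fragment is illustrative only (not from the text); it uses the conventional quadrature state table to turn successive (channel A, channel B) readings into position increments.

/*
 * Minimal sketch (assumptions, not from the text): decoding the two
 * phase-shifted pulse trains of an incremental encoder.  Successive
 * (A,B) states follow 00 -> 01 -> 11 -> 10 in one direction and the
 * reverse sequence in the other, so comparing the previous and current
 * states gives both the increment and the direction of movement.
 */
#include <stdio.h>

int quadrature_step(int prev_state, int new_state, long *position)
{
    /* Index = (previous << 2) | current; table holds the step value.  */
    static const int table[16] = {
         0, +1, -1,  0,
        -1,  0,  0, +1,
        +1,  0,  0, -1,
         0, -1, +1,  0
    };
    int step = table[((prev_state & 3) << 2) | (new_state & 3)];
    *position += step;
    return step;
}

int main(void)
{
    /* Three forward transitions then one reverse transition.          */
    int states[] = { 0, 1, 3, 2, 3 };
    long pos = 0;

    for (int i = 1; i < 5; i++)
        quadrature_step(states[i - 1], states[i], &pos);

    printf("position = %ld\n", pos);   /* 3 forward, 1 back = 2        */
    return 0;
}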


(e) Pressure Transducers

Pressure is a relatively difficult phenomenon to measure directly and the general trend in quantifying force per unit area is to measure the displacement that the force creates. There are a number of different types of pressure transducer available. The majority of these are two-part systems. The first part converts pressure into displacement and the second part converts displacement into a voltage signal which can be used as a feedback element. There are a range of different mechanisms that can be used to create a displacement as a result of a given pressure. These include pistons, Bourdon tubes, diaphragms, etc. Strain gauges, discussed in (f), below, can also be used in some instances.
Of the different types of pressure transducers, those based on the Bourdon tube offer the widest pressure operating range and a degree of accuracy in the order of half of one percent. However, one of the most versatile types of pressure transducers is based upon the piezo-electric crystal (composed of either natural quartz, Barium Titanate ceramics, etc), whose molecular structure creates a diaphragm. The application of a force across two faces of the crystal will generate a potential difference across another two faces, as a result of the electrostatic charge developed. Transducers based on piezo-electric crystal tend to be based upon a shear of the crystal, rather than a simple compression, because the latter causes unacceptably high charges to be built up. Despite their wide operating range, the accuracy of piezo-electric transducers is somewhat lower than other systems and is typically in the order of one percent.

(f) Strain Gauges

The strain gauge operates on the principle that the electrical resistance of a material is dependent upon its geometry. Therefore, if the geometry is altered through a deformation resulting from an applied stress, then the electrical resistance changes accordingly:

∆R / R ∝ ∆L / L    ...(7)

where:

R is the original electrical resistance of the material
L is the original length of the material.

The constant of proportionality is called the strain-gauge factor.


The resistive material, which is the basis of the strain gauge is physically attached to the device under test by some mechanism (such as glue, etc.) and the material needs to be temperature matched to the device under test, so that the thermal expansion and contraction of the device is not recorded by the gauge. Strain-gauge resistive materials can also be connected in groups of four as a bridge (two vertical resistance elements and two horizontal resistance elements) to eliminate temperature sensitivity and facilitate measurement in compressive and tensile loads.
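As a small worked illustration of equation (7), the following C sketch (illustrative only; the gauge resistance and gauge factor are assumed values, not from the text) converts a measured change in gauge resistance into strain.

/*
 * Minimal sketch (assumptions, not from the text): converting a
 * measured change in strain-gauge resistance into strain using
 * (delta_R / R) = gauge_factor * (delta_L / L).
 */
#include <stdio.h>

double strain_from_resistance(double r_nominal, double r_measured,
                              double gauge_factor)
{
    double delta_r = r_measured - r_nominal;
    return (delta_r / r_nominal) / gauge_factor;   /* = delta_L / L */
}

int main(void)
{
    /* A 120 ohm gauge (gauge factor 2.0) measured at 120.12 ohms.     */
    double strain = strain_from_resistance(120.0, 120.12, 2.0);
    printf("strain = %.6f (%.0f microstrain)\n", strain, strain * 1e6);
    return 0;
}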

(g) Temperature Transducers

The most commonly used transducers for measurement of temperature are thermo-couples. These analog devices are based on the junction of two dissimilar metals. One side of the junction is held at a reference temperature and the other is subjected to the test temperature. A voltage is developed across the junction as a result of the temperature difference. There are two major problems with thermo-couples. One is that they are non-linear devices and the other is that they only provide a very low output voltage. Thermo-couples are very widely available as commercial devices, some incorporating amplification stages and so on. In particular, it is possible to purchase devices which incorporate data transmission facilities to enable results to be sent over long distances via a low level network.

(h) Proximity and Level Sensors

A spectrum of devices has been designed to sense parameters such as proximity and level, through widely varying techniques. Proximity can be detected by micro-switches (as in many CNC machines and robots) or by inductive and capacitive sensors, where the moving device has suitable electromagnetic or electrostatic properties (permeance and permittivity). Proximity can also be detected by a light-beam (light source plus photo detector) or ultrasonic beam interruption system. Level sensing depends upon the properties of the medium. Simple level sensing systems can be composed of float-balls coupled to position transducers (such as potentiometers) and more sophisticated systems can be based on light-beam interruption (for translucent fluids) or pressure sensing techniques.

The discussion on transducers could go on indefinitely and there are many commercial organisations regularly issuing catalogues of devices and specifications which, in the final analysis, are far more useful in a practical sense than the general descriptions that can be included in any text book. The solution to most interfacing problems requiring transducers normally begins with a search of such catalogues and the descriptions provided above should only serve as a general introduction into this enormous field.


7.6 Attenuation Problems


One of the most awkward problems that needs to be overcome in the interfacing process is the tyranny of distance. In particular, we often view the conductor (wire), between two nodes, as an ideal element in an electrical circuit as is shown in Figure 7.31. We assume that the wire has no resistance to the flow of current and that therefore, the signal emanating from node A is the same as the one reaching node B.

Figure 7.31 - Idealised Point to Point Link
(Node A connected to Node B by a single conductor, with earth potential as the common reference)

In order to understand why this isn't the case over long distances, we need to look at a more appropriate model of the conducting material. Firstly, we look at an infinitely small section of the wire (l in length) and examine its physical properties. There are no "lossless" conducting materials. All materials have some resistance to current flow, and energy is converted to heat within the conducting medium. So the circuit model for our infinitely small length (l) of wire has a series resistor "R" to reflect the loss of energy at the receiving end of the conductor.

Equally, the air between the conductor and earth is not a perfect insulator and therefore provides an alternative path through which current can flow to earth. The conductance (inverse of resistance) of this alternate path to earth is "G" and reflects the current that does not appear at the receiving end of the conductor as a result of charge flow through the alternate path.

Since the conductor has current flowing through it, a magnetic field is produced around the conductor, and the resultant magnetic flux linkage of the infinitely small section of wire is represented by a series inductor "L". The conductor will also have a certain voltage (and net charge) with respect to earth, causing an electric field between the conductor and earth, thereby giving rise to a capacitance "C".


The series inductance and the shunt capacitance reflect energy storage and release within the conductor. Since both devices store and release energy at differing rates, the voltage at the "output" end of the infinitely small length of conductor will not generally be in phase with that at the "input" end. The circuit model for the entire conductor can be built up from these infinitely small sections and hence we could draw it as shown in Figure 7.32.

Figure 7.32 - Lumped Parameter Approximation of a Conductor
(Input voltage Vi, a cascade of small line segments - each with series R and L and shunt G and C - and output voltage Vo)

Figure 7.32 shows what is referred to as a "lumped parameter" approximation of the conductor because all the physical properties that, in reality, are distributed evenly along the line are represented by simple "lumped" circuit elements. Nevertheless, the approximate circuit provides us with some insight into what happens when signals pass through the conductor. Mathematical analysis (Fourier Series) tells us that any voltage waveform, regardless of its shape, can be represented by the sum of a number of sinusoidal waveforms of differing frequency and amplitude. So, we can analyse any type of waveform on the conductor by assuming that it is made up of a number of sinusoidal components and using the phasor method to obtain a transfer ratio for the conductor.

For any of the individual sinusoidal components of the digital waveform, the ratio of output voltage (at the end of an infinitely short length of wire, l) over input voltage is obtained as follows. The impedance of the parallel branch is given by:

$$Z_p \;=\; \frac{1}{j\,2\pi f\,C + G} \qquad \ldots(8)$$


The impedance of the series branch is given by:

$$Z_s \;=\; j\,2\pi f\,L + R \qquad \ldots(9)$$

The ratio of output voltage to input voltage is then obtained through voltage division and is given by the expression:

$$\frac{v_o}{v_i} \;=\; \frac{Z_p}{Z_p + Z_s} \qquad \ldots(10)$$

Substituting equations (8) and (9) into (10) gives us the complex number expression:

$$\frac{v_o}{v_i} \;=\; \frac{\dfrac{1}{j\,2\pi f\,C + G}}{\left(j\,2\pi f\,L + R\right) + \dfrac{1}{j\,2\pi f\,C + G}} \qquad \ldots(11)$$

The magnitude ratio of output voltage over input voltage is given by:

$$\left|\frac{v_o}{v_i}\right| \;=\; \frac{1}{\sqrt{\left(2\pi f R C + 2\pi f L G\right)^2 + \left(1 + RG - 4\pi^2 f^2 L C\right)^2}} \qquad \ldots(12)$$

The phase difference of the output voltage with respect to the input voltage is given by:

$$\angle\,\frac{v_o}{v_i} \;=\; -\arctan\!\left(\frac{2\pi f R C + 2\pi f L G}{1 + RG - 4\pi^2 f^2 L C}\right) \qquad \ldots(13)$$

Expressions (12) and (13) are both frequency dependent and therefore we can say that the phase and magnitude-ratio of output voltage to input voltage will be different for each sinusoidal component of the waveform on the conductor. Substituting some "limit" values into these expressions, we can observe that in the infinitely small section of conductor:

•	Very high frequency ("f" tending to infinity) sinusoidal components will be attenuated (diminished) to zero

•	Low frequency ("f" tending to zero) sinusoidal components will be attenuated by a factor of (1 + GR)


•	Low frequency ("f" tending to zero) sinusoidal components will be slightly "shifted" in phase with respect to the input.
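To make the frequency dependence of expressions (12) and (13) more concrete, the short C program below evaluates the magnitude ratio and phase shift of a single small line segment at a few spot frequencies. The per-segment R, L, G and C values are arbitrary illustrative figures, not measured cable parameters.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* Illustrative per-segment line parameters (arbitrary values). */
        const double PI = 3.14159265358979;
        double R = 0.5;       /* series resistance, ohms     */
        double L = 1.0e-6;    /* series inductance, henries  */
        double G = 1.0e-6;    /* shunt conductance, siemens  */
        double C = 100e-12;   /* shunt capacitance, farads   */

        double freqs[] = { 1.0e3, 1.0e5, 1.0e7, 1.0e9 };
        int i;

        for (i = 0; i < 4; i++) {
            double f = freqs[i];
            double w = 2.0 * PI * f;

            /* Real and imaginary parts of 1 + (R + jwL)(G + jwC),       */
            /* the denominator of vo/vi for one segment.                 */
            double re = 1.0 + R * G - w * w * L * C;
            double im = w * R * C + w * L * G;

            /* Equations (12) and (13): magnitude ratio and phase shift. */
            double mag   = 1.0 / sqrt(re * re + im * im);
            double phase = -atan2(im, re) * 180.0 / PI;

            printf("f = %10.0f Hz : |vo/vi| = %8.6f, phase = %8.2f deg\n",
                   f, mag, phase);
        }
        return 0;
    }

Running this sort of calculation across a range of frequencies shows the low-pass behaviour described above - the ratio stays close to unity at low frequencies and collapses towards zero as the frequency rises.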

If we were to exaggerate this effect for, say, a digital voltage waveform, then the output voltage at the end of the infinitely small section would look like the voltage waveform illustrated in Figure 7.33.

Figure 7.33 - Exaggerated Output Voltage at the end of l Segment
(Voltage versus time, showing the input waveform and the rounded, delayed output waveform)

The attenuation of high frequency sinusoidal components in the output waveform means that the edges lose their sharpness. The phase shift in low frequency components means that the output waveform appears to "lag" behind the input waveform. As a result of attenuation in some of the sinusoidal components, the output waveform also appears attenuated in some areas. Since a conductor is composed entirely of these infinitely small sections, the distortion and attenuation of the output voltage, with respect to the input voltage, increases with length. If the length of the transmission line is sufficiently large, then the output waveform will be attenuated and distorted to such an extent that it cannot be discerned from noise. This is shown schematically in Figure 7.34 for a digital signal, but the same is true for analog signals.
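The importance of the high frequency components can be seen from the Fourier series of an ideal square wave (a standard result, quoted here only for illustration). For a square wave of amplitude ±A and fundamental frequency f0:

$$v(t) \;=\; \frac{4A}{\pi}\left(\sin 2\pi f_0 t + \frac{1}{3}\sin 6\pi f_0 t + \frac{1}{5}\sin 10\pi f_0 t + \ldots\right)$$

The sharp edges of the waveform are formed by the higher odd harmonics, so any attenuation of those harmonics by the conductor rounds the edges off in exactly the manner shown in Figure 7.33.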


Figure 7.34 - Degenerative Effects of Long Conductors
(Voltage versus time: the input waveform and an output waveform attenuated and distorted down towards the noise level)

Now that we know what happens to voltages over long lengths of conductor, we need to consider what happens in a typical interfacing problem. Figure 7.35 shows a relatively common scenario.

Figure 7.35 - A Common Interfacing Problem where the Distance "L" Between the Transducer and Control System is Large
(A transducer at the remote device produces voltage v1, which arrives at the computer's A/D converter and PPI as voltage v2 after travelling along the conductor)

In Figure 7.35, we have a transducer separated by a long distance, "L", from the control computer. The term "long" is difficult to quantify, because it ultimately depends upon the original size of the signal (v1) and the characteristics of the conductor (resistance per unit length, etc.). However, if we were to place an order of magnitude figure of say 10 metres on the term "long" then it gives one some feel for the problem.


Transducers normally produce relatively low output voltages which may attenuate to noise level over a long distance. Further, from equation (12), we know that the conductor connecting the two nodes is effectively acting as a low-pass filter that attenuates high frequency components of the transducer output. If the computer needs the high frequency components for monitoring or control purposes, then clearly the conductor may be responsible for corrupting the signal acquisition process.

The obvious solution is to amplify the signal at the transducer end before it is transmitted down the conductor. This can be achieved with a simple operational amplifier circuit. However, all amplifiers need a low-voltage d.c. power supply which is not normally available at the remote end. The provision of such a power supply can be an issue, particularly if there is no power whatsoever at the transducer end that can be converted to low-voltage d.c. It may then be necessary to use a battery solution or solar cells, if sufficient light is available. The attenuation problem is particularly irritating if there are a number of widely-spaced transducers that each need to have their own power supply.

There is no simple solution to the attenuation problem, but one alternative is to use data communications techniques, introduced in 7.7. In fact, some commercial transducers come equipped with data communications transceivers that help overcome the problem of attenuation.

Another problem related to long distance interconnection of low voltage devices is the problem of electromagnetic interference or EMI. Long conductors are liable to have voltages induced in them as a result of magnetic fields produced by currents in other high power conductors. This is common in industrial environments, particularly where high-power induction machines or furnaces are switched on and off. The effect can be minimised by using conductors shielded by a conducting foil or copper-braid (thereby creating a Faraday Cage).

EMI is also a problem where two long conductors, both carrying small signals, are close to one another and in parallel - the current in one induces a voltage in the other. The phenomenon is really EMI, but is more specifically referred to as "crosstalk". It is resolved by removing the parallel path between conductors which is, in turn, achieved by twisting pairs of small-signal conductors around one another. There are many commercially available cables composed of twisted-pairs for minimising crosstalk. Normally, twisted-pair cables are also shielded with foil and/or copper braid and are bound together in some form of plastic sheath.


7.7 Data Communications


Data communications techniques are useful for interfacing purposes because they can provide a modular hardware structure which one can use to transmit information over relatively long distances. Data communications techniques are primarily used to transfer information from one computer to another, but in many instances they can also be used to transfer data to and from remote sensors, actuators and transducers. This is shown schematically in Figure 7.36.

Figure 7.36 - Interfacing Via Data Communications Links (Point to Point)
(Computer with a data communications interface, linked by a digital data communication link to a data communications interface at the remote device, which connects to the transducer)

There are basically two types of data communications links that can be established. They are:

(i)  Point to point links
(ii) Networks:
     •	Star
     •	Bus
     •	Ring

In a point to point link, only two nodes are generally involved and they are connected together via a number of conductors that form the data communications link or transmission medium. The point to point environment is shown in Figure 7.36. In a network, a number of nodes can be interconnected via a range of different topologies (star, bus and ring), with the bus topology currently being the most prevalent. The bus topology is shown in Figure 7.37.


Figure 7.37 - Interfacing Devices Via a Bus Network
(Computer, transducers 1 to N and an actuator, each attached to the bus network transmission medium through its own network interface)

In both the point to point link and the network, data is transmitted in a digital format. The role of the data communications or network interface is to take the raw signal and convert it into the appropriate size and digital format for transmission. The problem, however, is that such interfaces are not always available and this means that the system developer has to undertake this role by combining a number of commercially available components. In some cases, particularly for common transducers, the manufacturers have already pre-empted the need for networks (because they have realised that their transducers may be situated some distance away from the control or monitoring computer) and they have integrated the transducer with a suitable network interface.

Communications between devices (nodes) on a network or point to point link can occur in two ways:

•	Parallel
•	Serial.

External parallel communications is analogous to the data/address bus communications that occurs within a computer system and it also tends to exhibit the same limitations, particularly transmission distance. The most common parallel networking system available is the so-called General Purpose Instrumentation Bus (GPIB), otherwise known as the Hewlett-Packard Instrumentation Bus (HPIB) or as IEEE 488. The most common point to point parallel link is the computer to printer link, commonly referred to as a "Centronics" link. The Centronics link was originally designed for computer to printer communication but can theoretically be used for a range of different applications.


Serial communications is the more common form of communication outside the computer system because it minimises the number of conductors between points or nodes on a network. In a serial link, information passing out of the computer data bus is converted into a sequential pulse train via a clocked shift-register. The data can then be transmitted on a single conductor (plus reference conductor) or converted to light pulses by LEDs for transmission on an optic fibre cable. There are a plethora of standards and defacto standards for serial communications and for networks and this has tended to make the networking of devices such as transducers somewhat difficult.

In terms of interfacing computers to mechatronic systems, the other problem with point to point links and networks is that they can be relatively slow for real-time processing of data. This is particularly true of serial systems. The field of data communications is enormous and is the subject of the complementary text to this one (Data Communications and Networking for Manufacturing Industries), so it will not be pursued in any detail herein. The main objective of introducing it in this text is the number of commercially available transducers with proprietary network interfaces that can make the task of interfacing devices over long distances considerably easier. However, if one had to design a network interface from first principles, for each type of transducer, then it is doubtful that data communications would be the preferred option.

In particular, in Figures 7.36 and 7.37, if we assumed that the transducers provided an analog output signal, then the data communications or network interface for the transducer would need to:

•	Convert the signal to digital form and scale it to an appropriate size
•	Convert the signal to the data communications form, with appropriate timing, etc.
•	Amplify the signal to the required data communications level for transmission
•	Read information from the host computer (via the data link), interpret it and adjust transducer parameters accordingly
•	Respond to and obey the rules (communications protocol) specified for the link or network.

Similarly, the data communications or network interface at the computer end would need to:

•	Interface to the internal system (Address/Data) bus
•	Read incoming signals and generate outgoing signals based upon the application software running on the computer
•	Respond to and obey the rules (communications protocol) specified for the link or network.

None of these tasks, in their own right, is trivial and the development of systems which combine all these factors requires a significant amount of skill, thereby making non-standard solutions difficult to generate.
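As an indication of the kind of work involved in just one of the transducer-end tasks, the C sketch below packs a single 12-bit reading into a simple serial message frame with a node address and a checksum. The frame layout is entirely hypothetical - it does not follow any particular commercial protocol - and is intended only to show that formatting, addressing and error checking must all be handled by the interface.

    #include <stdio.h>

    /* Hypothetical frame: [STX][node address][data high][data low][checksum] */
    #define STX 0x02

    /* Build a 5-byte message frame for one 12-bit A/D reading.          */
    /* 'frame' must point to at least 5 bytes of storage.                */
    static void build_frame(unsigned char node, unsigned int reading,
                            unsigned char frame[5])
    {
        frame[0] = STX;
        frame[1] = node;
        frame[2] = (unsigned char)((reading >> 8) & 0x0F);  /* upper 4 bits */
        frame[3] = (unsigned char)(reading & 0xFF);         /* lower 8 bits */
        /* Simple additive checksum over the address and data bytes.        */
        frame[4] = (unsigned char)((frame[1] + frame[2] + frame[3]) & 0xFF);
    }

    int main(void)
    {
        unsigned char frame[5];
        int i;

        build_frame(0x07, 0x3A5, frame);     /* node 7, reading 0x3A5 */

        for (i = 0; i < 5; i++)
            printf("byte %d : 0x%02X\n", i, frame[i]);

        /* In a real system these bytes would be passed to a UART or    */
        /* network transceiver for transmission, and the receiving node */
        /* would verify the checksum before using the reading.          */
        return 0;
    }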


7.8 Combining the Interfacing Stages


Now that we have examined the basic hardware elements in the interfacing process, we can return to the global diagram that has been featured throughout this book. This has been redrawn in Figure 7.38, where each of the basic stages has been labelled with a number in parentheses.

Figure 7.38 - The Basic Closed Loop for the Interfacing Process
(Feedback path from the external system to the computer: (1) energy conversion, (2) isolation, (3) scaling or amplification, (4) protection circuits, (5) multiplexing, (6) A/D conversion and (7) the PPI. Driving path from the computer to the external system: (8) D/A conversion, (9) scaling or amplification, (10) isolation and (11) energy conversion. External voltage supplies power both the input and output sides.)

With the hindsight that is available after having read the previous sections of this chapter, Table 7.1 summarises the basic options which are available for each of the interfacing stages labelled in Figure 7.38. Table 7.1 should provide a reasonable summary of the issues involved in the interfacing of digital systems to mechatronic systems.

However, we have not addressed the problems that arise from the physical environment itself. These issues include chemical, vaporous, metallic (swarf), thermal, vibrational and electromagnetic problems common to a range of different industrial situations. A full discussion of these is outside the scope of this book. However, it is indeed fortunate that for many years, specialist industrial enclosures, disk-drive systems, key-boards, screens, etc. have been available for protecting digital computer controls in situations where they must be located in a harsh environment. The only limiting feature of industrial enclosures is that they are relatively expensive (generally costing more than the computer or digital control system itself) and their cost needs to be considered as part of the complete interfacing process.


Feedback stages:

(1) Energy Conversion
    Typical devices: transducers - strain gauges, thermo-couples, current-transformers, tacho-generators, encoders, etc.
    Issues / selection criteria: energy rating; output voltage range; linearity; isolation of output voltages from inputs.

(2) Isolation
    Typical devices: transformers; opto-isolators.
    Issues / selection criteria: frequency response; linearity; extent of electrical isolation.

(3) Scaling
    Typical devices: operational amplifiers; transformers.
    Issues / selection criteria: frequency response; linearity; scaling range.

(4) Protection
    Typical devices: Zener diodes; relays; thyristors.
    Issues / selection criteria: reverse diode breakdown voltage; switching speed and turn-on current for relays.

(5) Multiplexing
    Typical devices: multiplexer chips.
    Issues / selection criteria: number of input channels; switching speed.

(6) A/D Conversion
    Typical devices: analog to digital converter chips.
    Issues / selection criteria: number of output bits (resolution); conversion type and speed; cost; integration with PPI (PIA).

(7) PPI
    Typical devices: programmable parallel interface or peripheral interface adaptor (PPI / PIA).
    Issues / selection criteria: number of input and output ports; number of bits per port.

Driving force stages:

(8) D/A Conversion
    Typical devices: digital to analog conversion chips.
    Issues / selection criteria: number of input bits; integration with PPI (PIA).

(9) Scaling / Amplification
    Typical devices: transformers for scaling; operational amplifiers for amplification.
    Issues / selection criteria: scaling ratio; frequency response; buffering; power consumption; linearity; heat generation; configuration for current or voltage amplification.

(10) Isolation / Buffering
    Typical devices: external circuits switched by relays activated by the computer; Pulse-Width Modulator (PWM) amplifiers; transformers; opto-isolators; operational amplifiers.
    Issues / selection criteria: relay turn-on current and switching speed; duty cycle and parasitic switching effects; linearity; frequency response; extent of system isolation.

(11) Energy Conversion
    Typical devices: actuators - motors, solenoids, relays, speakers, heating, lighting, chemical (batteries), etc.
    Issues / selection criteria: depends on final system.

Table 7.1 - Interfacing Options for Figure 7.38


7.9 Commercial Realities of Interface Design


Upon reading sections 7.1 to 7.8 of this text, one might come to the conclusion that the design of an interface between a complex digital circuit, such as a computer, and a complex external system is a difficult task. In a commercial sense, one also has to consider the developer's time and the technical resources required to design, build and debug an interfacing circuit. At the very least, voltmeters, ammeters, soldering and desoldering equipment, a high bandwidth oscilloscope, PAL programming board and personal computer may be required. One also needs to consider the external services that may be needed to produce a professional looking board - including items such as printed circuit board fabrication and design if in-house software is not available. Most organisations ultimately conclude that this work can only be amortised and hence justified on two occasions:

•	When many boards need to be produced
•	When no commercial (off-the-shelf) packages are available.

In practice therefore, the solution to a one-off interfacing problem is generally to tailor a commercial interfacing board that most closely resembles the final product to be produced. This simplifies the task enormously because it enables designers to concentrate on the system at hand rather than the intricacies of debugging circuits that generate "glitches" at seemingly random times. However, the commercial interfacing board solution is also rather costly because boards are designed in a general-purpose manner, normally providing more functionality than may be required for any one application. Moreover, the selection of an appropriate board still requires a relatively sound understanding of all the basic design principles raised throughout this text.

There is an enormous range of commercial interfacing boards, including servo drive control boards for motors, closed-loop PID controllers, etc. Many of these boards plug directly into the back-plane bus of a personal computer or workstation and provide a building-block solution to industrial problems. Often, the complementary part of such boards is a range of transducers and actuators that enable many common industrial problems to be resolved without special hardware design.

The fundamental problem with designing hardware interfaces to computer systems from first principles is the complexity of the back-plane bus on the computer system. Mapping programmable parallel interfaces and A/D and D/A devices onto the address and data bus structure of a modern computer can be a difficult and time consuming task. To a large extent it is also a question of "reinventing the wheel". A half-way solution, between a full commercial interfacing board (with on-board processing of signals, etc.) and a design from first principles, is a so-called "Input/Output (I/O)" or "A/D Board". Normally such boards plug directly into the back-plane bus of a computer system and provide analog and digital inputs and outputs. The application specific portions of the interface are then left to the designer.
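To give a feel for what that application specific software can involve, the C fragment below polls a hypothetical A/D converter on such a board for a conversion-complete flag and then assembles the 12-bit result. The register addresses, bit assignments and port access routines are all assumptions made for this example - real boards publish their own register maps and usually supply their own access libraries - so the hardware accesses are simulated here simply to keep the sketch self-contained and runnable.

    #include <stdio.h>

    /* Hypothetical register map for a simple plug-in A/D board.          */
    /* Real boards publish their own addresses and bit assignments.       */
    #define ADC_CONTROL   0x300   /* write: bit 0 starts a conversion     */
    #define ADC_STATUS    0x301   /* read : bit 7 set = conversion done   */
    #define ADC_DATA_LOW  0x302   /* read : low byte of 12-bit result     */
    #define ADC_DATA_HIGH 0x303   /* read : upper 4 bits of result        */

    /* Stand-ins for compiler-specific port access routines. Here they    */
    /* are simulated so that the sketch compiles and runs on its own.     */
    static unsigned char read_port(unsigned int address)
    {
        if (address == ADC_STATUS)    return 0x80;  /* pretend "done"     */
        if (address == ADC_DATA_LOW)  return 0xA5;  /* dummy result bytes */
        if (address == ADC_DATA_HIGH) return 0x03;
        return 0;
    }

    static void write_port(unsigned int address, unsigned char value)
    {
        (void)address;
        (void)value;               /* a real version would access the bus */
    }

    int main(void)
    {
        unsigned int result;

        write_port(ADC_CONTROL, 0x01);                 /* start conversion */
        while ((read_port(ADC_STATUS) & 0x80) == 0)    /* poll "done" flag */
            ;                                          /* (busy wait)      */

        result = ((unsigned int)(read_port(ADC_DATA_HIGH) & 0x0F) << 8)
                 | read_port(ADC_DATA_LOW);

        printf("A/D reading: %u (0x%03X)\n", result, result);
        return 0;
    }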


Higher quality I/O boards resolve the problem of isolation by providing opto-isolated connections to the outside world and other useful hardware items such as counter-timers, etc. Low-end boards are generally only composed of a memory-mapped PPI device with A/D and D/A converters.

With these points in mind, one should generally approach interfacing problems by undertaking a study of commercial catalogues to ensure that one is not re-inventing the wheel (unless that is one's intention). One should also consider the possibility of the Programmable Logic Controller (PLC) as an alternative to either personal computers or specialised digital control boards. In many instances, the PLC can provide sequential control of simple systems and resolve many of the interfacing problems that are encountered in industry.

Table 7.2 provides a rough-cut guide to the sorts of problems that typically arise and, for each, a logical sequence of steps to consider in resolving the problems. Note particularly how we use the personal computer as a low-cost tool for interfacing - in low volume interfacing applications, the cost of a personal computer is less than 50% of an engineer's salary for one week. This means that we no longer simply treat the PC as a complex device for computation but also, because it has a widely accepted back-plane bus structure, as a low-cost junction point for commercially available transducers and interface boards.

Problem: Old sequential control machine (relay controlled) requires modernising of control.
Available professionals: Mechanical / Manufacturing Engineers.
Possible courses of action: Consider PLC control - evaluate the range of sensors and actuators from PLC manufacturers' data books.

Problem: Need to develop 200-off low-cost, hand-held remote controllers (digital) to open and close safety doors on machinery.
Available professionals: Electrical / Electronic Engineers.
Possible courses of action: Consider development of a digital circuit from first principles - possibly using a low-cost processor and a commercial transmitter / receiver combination.

Problem: Need to control and monitor a chemical process in the laboratory environment.
Available professionals: Mechanical / Manufacturing / Chemical Engineers.
Possible courses of action: Consider a personal computer (PC) based solution using a commercial I/O card and transducers. Consider a PC based solution with commercially available data-acquisition and control hardware and software.

Problem: Need to develop a special-purpose robot control system with multiple axes.
Available professionals: Electrical / Mechanical / Manufacturing Engineers.
Possible courses of action: Consider basing the system on a commercial (PC) computer and plug-in servo control cards with on-board processing and PID control - only develop supervisory software. Consider basing the system on a commercial (PC) computer motherboard with commercially designed I/O facilities and design the remainder of the interface hardware and software from first principles.

Problem: Need to develop 4-off high-bandwidth signal acquisition, processing and control systems that are coordinated by a host computer.
Available professionals: Electrical / Electronic / Mechanical / Manufacturing Engineers.
Possible courses of action: Consider development based on a Digital Signal Processor (DSP) development kit plugged into the backplane of a PC - after development, consider transferring the entire control and monitoring function to a stand-alone DSP device networked to a host computer system. Consider using 4 commercial PC motherboards for the control and one full PC for host coordination - use a commercial network.

Table 7.2 - Sample Interfacing Issues and Possible Courses of Action


Chapter 8
Software Development Issues

A Summary...
A short chapter covering the basic issues regarding the development and selection of software for interfacing problems. Real-time operating systems, multi-tasking, multiuser systems, windows environments and their relationship with object-oriented programming (OOP). The user interface.

(Chapter frontispiece: the basic interfacing loop between the computer and the external system - D/A and A/D conversion, scaling or amplification, isolation, protection circuits and energy conversion, with external voltage supplies on both sides.)


8.1 Introduction
Those who relish the development of electronic hardware often place insufficient emphasis upon the software that needs to drive that hardware and conversely, those who specialise in software development tend to place insufficient emphasis on the functionality of the total system. The development of a modern mechatronic system requires a balanced approach and a recognition of the obvious point that the end system must be a cohesive union of electronics, mechanics and software.

Most modern computers are themselves mechatronic systems and in Chapter 6, we began to look at the interrelationship between the electronics (digital circuits), mechanics (disk-drives, key-boards, etc.) and software (operating systems, executable programs, etc.) that are combined to generate a very effective development tool. However, as we now know, the computer can often become a single building block within a larger mechatronic system involving other devices such as motors, relays, etc.

In this chapter, we will examine the software aspects of mechatronic design. We begin by reviewing two of the relevant diagrams from Chapter 6. Figure 8.1 shows the basic hardware and software elements that are combined to form a modern computer system.

Figure 8.1 - Basic Hardware and Software Elements Within a Computer System
(Software layers - assembler, high level language compiler, executable programs 1 to N and the operating system - sitting above the hardware: CPU, memory, ROM (BIOS + bootstrap), interrupt controller, clock, and the keyboard, graphics and disk-drive controllers with their peripherals, all linked by the address and data buses)

Software Development Issues

321

Figure 8.2, also reproduced from Chapter 6, shows the interrelationship between software entities in a little more detail.

Figure 8.2 - Interfacing Hardware and Software via an Operating System
(End-user software - assembler, high level language compiler and executable programs 1 to N - interfaced to the hardware (CPU, memory, keyboard, graphics and disk-drive controllers) through the operating system and the BIOS and bootstrap software)

This chapter is really about the decision making process that one needs to develop before selecting various software elements that are required for a mechatronic application. This is not a chapter designed to teach you computer programming or about all the intricacies of a particular operating system - there are countless good text books on such subjects already. In order to set the framework for further discussions, we need to develop some sort of model for the type of systems that we will be discussing. Figure 8.3 shows the basic arrangement with which many will already be familiar. For any given set of computer and interface hardware, one is left with the software elements that need to be selected:

•	The operating system for the computer
•	The development language for the interfacing software
•	The development language for the application software
•	The type of user interface to be provided by the application software.

This is not to suggest that these issues should be decided upon independently of the hardware selection. In fact, although the hardware is most likely to be implemented first, both software and hardware should be jointly considered in preliminary design stages.


Figure 8.3 - The Basic Closed-Loop Control Elements
(Application software and operating system within the computer, interface software, interface hardware and the external system, forming a closed loop)

If one was designing a system of the type shown in Figure 8.3 during the 1970s, then the hardware and software selection issues could be readily resolved, fundamentally because there were few options from which to choose. A closed-loop control system would have typically been implemented on a Digital Equipment Corporation PDP-11 computer because it was one of the few capable of providing input/output channels - the choice of computer then defined the range of operating systems, the operating system defined the types of programming languages available and the interface cards were normally provided by the computer manufacturer, thereby defining the software interfacing. However, in recent times, the number of options has expanded quite dramatically and so one has to ensure that the broad range of possibilities is examined before committing to one particular hardware/software architecture.


8.2 Operating System Issues


Operating systems have meandered through a number of different phases since the 1960s to reach the point at which they are today. In fact, the phases have included multi-tasking-multi-user systems (mainframes and mini-computers), then single-tasking-single-user systems (PCs), then multi-tasking-single-user systems (workstations) and more recently, multi-tasking-multi-user systems (PCs and workstations).

Early operating systems were primarily designed for large mainframe and mini-computer systems in banks, airline companies, insurance companies, etc. The emphasis there was to ensure that many users could access a large central computer and hence the systems were based upon multi-tasking-multi-user operating systems. Another key feature of such systems was that they had to provide security between application tasks and between the multiple users to ensure system reliability and integrity of data. We shall however, classify these operating systems as "general-purpose" in nature, since they are not targeted towards one specific field. They provide an office-computing environment where the response times are generally designed around the human users.

In the 1970s, companies such as Digital Equipment Corporation (DEC) realised the growing need for control computer systems and also determined that general-purpose operating systems were not directly suitable for control applications. Companies such as DEC developed operating systems which were referred to as being "real-time" in nature and suitable for specialised control applications. A typical example was RSX-11M. Such operating systems were initially implemented on mini-computers and still provided multi-tasking-multi-user environments, but here the emphasis was on the performance of the control tasks rather than the human operator at the keyboard. At the same time, the general-purpose multi-tasking-multi-user operating systems continued to flourish and the most notable of these was the UNIX system, originally developed at AT&T's Bell Laboratories and subsequently extended at the University of California, Berkeley.

The wide-scale acceptance of the PC in the 1980s undoubtedly surprised even the original designers (IBM) and software manufacturers. Initially, the intention was that the PC was to be used as either a stand-alone device in the home or office or as a "dumb" terminal to a mainframe. The operating system requirements were apparently minimal and so the single-tasking-single-user Microsoft DOS system came into being, essentially as a subset of the increasingly popular UNIX system. The major restrictions of early DOS versions included a 640 kB memory limit and an inability to cope with large hard disk-drives. Had the designers known then what we know now then perhaps they would have created an operating system more amenable to migration over to the more sophisticated UNIX parent system. However, in the 1980s, the market was still heavily segmented, with mainframes, mini-computers, workstations and PCs each holding a distinct niche.


As it eventuated, the inadequacies of the PC in the 1980s were not resolved directly through the improvement of the DOS system (since this was still acquiring an increasing market share) but rather through the introduction of workstations, which were a half-way house between the mini-computers and the PCs. The workstations tended to have more powerful CPUs than the PCs and, unrestricted by the limitations of DOS, were able to run software traditionally run on mini-computers and mainframes, particularly because workstations followed the UNIX operating system path.

Nevertheless, the wide-scale support for PCs in the 1980s led to a number of third party companies developing improvements to DOS or alternatives to DOS that would enable PC users to expand the capabilities of their machines, particularly in the area of real-time control. The most common improvement/alternative to DOS was an operating system that provided multi-tasking so that the computer could be used for control purposes. Two examples are the QNX system (a UNIX-like real-time operating system for PCs) and Intel's IRMX operating system which was designed for real-time control.

Although workstations in the 1980s tended to be based upon the UNIX operating system, a clear new trend had also emerged in this area. Workstation operating systems had abandoned the old-fashioned text-based user interfaces in favour of an environment composed of graphical, on-screen windows, with each window representing one active task or application of interest. This interactive type of graphical environment, originally developed by the XEROX corporation at their Palo Alto Research Centre (PARC), made the human task of interacting with the operating system considerably easier because it was based on the use of graphical icons, selected by a mouse or pointer. The same environment was exploited by the Apple corporation in their computers.

In many instances, the advent of the window environment was seen to be a simple adjunct to the older operating systems, but its ramifications proved considerably greater. In the 1970s, programmers tended to develop software with text-based user interfaces. These proved to be rather tiresome and, if poorly designed, made data entry considerably more difficult than it might otherwise need to be because entire sections of text or data often had to be re-entered when incorrect. This was improved upon considerably by the adaptation of the XEROX interactive environment in the 1980s, where software developers designed programs with pull-down menu structures. Typical examples of this were found in DOS applications such as XEROX Ventura Publisher and numerous other third-party packages. However, a fundamental problem still existed - that is, all application programs had different ways of implementing the same, common functions. Although experienced users could quickly come to terms with software based on pull-down menus, most novices had great difficulty moving from one package to another.


The windows operating system approach tends to unify the user-interface side of the applications that run within the environment. It does so because it provides considerably better development tools to software houses. Older operating systems provided only bare-bones functions - that is, the basic interface between the computer hardware and the human user. Windows operating systems provide the basic interface and, additionally, an enormous range of graphical, windowing and interactive text and menu functions that are not only used by the operating system, but can also be called up from libraries by user programs. Thus, the source code for a high-level-language program developed for a windows environment is likely to be shorter than an equivalent piece of code for a non-windows environment - the windows program makes use of standard user-interface libraries while the latter requires the complete set of routines to be developed. As a consequence, windows based programs all tend to have a familiar appearance and operation, thereby minimising the learning curve.

The Microsoft corporation endeavoured, in the late 1980s, to transfer the benefits of the windows environment to PC level. It did so by superimposing a package, which it named MS-Windows, over the top of the executing DOS system. This was a significant step because it provided a migration path for the enormous DOS market to move to a windows environment. The MS-Windows system took the single-tasking-single-user DOS system and converted it into a multi-tasking-single-user system. As one can imagine, this was a substantial achievement but it could only be seen as an intermediate measure, designed to wean users and developers away from DOS, to the extent where DOS could be removed from beneath the windows environment. This is certainly the case with the more modern windows operating systems for PCs, which more closely resemble workstation operating systems (ie: UNIX based) and can, in fact, be ported to a range of different hardware platforms.

At the same time, it has become evident that the lifespan of the mainframe computer system is now very limited. Advances in processor performance, networking and operating systems have considerably lessened the need for high-cost mainframe systems, to the extent where they are gradually being phased out. The end result is that for the next decade, we will live in an era where similar or identical operating systems will reside on a range of different hardware platforms and those operating systems will provide a much greater degree of functionality for end-users and user-interface support for software developers.

It is important to keep this historical perspective in mind in terms of selecting an operating system for control purposes. Certainly, there are short-term and direct requirements such as real-time performance that cannot be overlooked. However, there are also political factors, such as the widespread support of the operating system in terms of development tools and so on, that need to be considered.


In a control environment, there are typically a number of tasks that need to run concurrently and this tends to limit the choice of operating system. In particular, referring to Figure 8.3, a number of tasks can make up the final application:

(i) A task that takes in data from the hardware interface and places it into memory (variable storage locations) where it can be used by other tasks - the same task may also take data from memory (variable storage locations) and transfer it to relevant registers that cause the interface to output the information

(ii) A task that reads the input variables acquired by (i) and processes them through a control algorithm, thereby generating output variables which are passed back through (i)

(iii) A task that is responsible for interacting with the system user, displaying system state and enabling the user to change parameters or stop the system as and when required.

Over and above these tasks, there are other tasks which the operating system may be running for general house-keeping or because they are initiated by a system user. Since computers only have a limited amount of memory, and most multi-tasking operating systems use paging, it is possible that any one task will be switched out of memory and placed onto disk for a short period of time.

A number of issues need to be resolved before the operating system can be selected to perform such a control task. The questions that need to be resolved are:

•	Can we be assured that the timing of inputs and outputs in (i) is deterministic? In other words, can the operating system provide a mechanism whereby the time between task (i) issuing a command to change a variable and the actual change of that variable is well defined?

•	Following on from the point above, is there an accurate system clock that can be read on a regular basis to ensure appropriate timing? In general-purpose operating systems, when a program makes an operating system call requesting the current time, the actual time may not be passed back to the program for a lengthy (in control terms) period, thereby making the figure meaningless (a simple check of clock granularity is sketched after this list)

•	Can tasks (i) - (iii) be allocated relative priorities in the overall operating system, so that they always receive a fixed proportion of the CPU's time?

•	Do we know whether the important control tasks will always remain in memory, or will they be paged to and from disk, thereby potentially slowing down an important, real-time control function?
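One quick, practical check related to the second of these questions is to call the operating system clock repeatedly and observe the smallest non-zero increment it reports. The minimal C sketch below uses the standard clock() routine purely as an illustration; its resolution, and whether it measures elapsed or processor time, are implementation dependent, so a real assessment would use whatever high-resolution timer the target operating system provides.

    #include <stdio.h>
    #include <time.h>

    /* Measures the smallest non-zero step reported by the standard clock()  */
    /* routine. A coarse step (e.g. tens of milliseconds) would make such a  */
    /* clock unsuitable for timing fast control loops.                       */
    int main(void)
    {
        clock_t previous = clock();
        clock_t smallest = 0;
        int     samples  = 0;

        while (samples < 20) {
            clock_t now = clock();
            if (now != previous) {
                clock_t step = now - previous;
                if (smallest == 0 || step < smallest)
                    smallest = step;
                previous = now;
                samples++;
            }
        }

        printf("Smallest observed clock step: %.6f seconds\n",
               (double)smallest / CLOCKS_PER_SEC);
        return 0;
    }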


A number of general-purpose operating systems cannot provide the levels of support that will satisfactorily address the above questions. In older versions of UNIX, for example, I/O (such as serial communications, etc.) was handled in the same way as files were handled. This meant that all requests for I/O were queued along with requests for file handling, thereby creating potential problems for important control signal input/output. On the other hand, however, even a general-purpose operating system can be used for real-time control functions, provided that the system designers are aware of the limitations. Many control computers are dedicated devices and so it is possible to minimise or eliminate undesirable characteristics from a general-purpose operating system. For example, in a system where I/O is handled in a queued fashion, along with files, it may be possible to ensure that file handling is minimised or eliminated while the control system is running.

Real-time operating systems generally address all the questions cited above, primarily because those issues are the basis of their fundamental design. However, a drawback of real-time operating systems is that they are specialised, and as a result, have a much lower market share than many general-purpose operating systems. This has important ramifications for system designers. In the short term, it means that development tools (compilers, etc.) for real-time operating systems may be less sophisticated and more costly than those offered for general-purpose operating systems. It also means that the computer selected for control purposes is more likely to become an "island of automation" because of the difficulty of transferring or porting software or information between it and other general-purpose systems.

In the long term, the lower market share of real-time operating systems is more significant. It ultimately means that there is less money invested in such systems for improvement and development of tools than there is in general-purpose systems. The end result can be that the long-term improvements in general-purpose operating systems can ultimately provide better performance than the real-time systems and also, that the real-time systems are discontinued and designers are left to port software back to the general-purpose platform.

As a result of the above points, one needs to formulate an operating system selection strategy that is both technical and political, if one is to have a product that is viable in the long-term. A logical approach to consider in selecting an operating system is to begin with the highest-volume, lowest-cost, general-purpose operating system available. If this is capable of performing the required task, keeping in mind the sort of questions raised above, then one should pursue this course. However, if it is not capable of performing the task, and it is not possible to compensate for software inadequacies with higher performance hardware, then one needs to pursue more specialised systems, again, commencing with those that are most widely accepted.


Thus far, we have not examined the possibility of using single-tasking-single-user operating systems for control purposes. In fact, the Microsoft DOS system has been successfully used in many applications for both monitoring and control functions. There are several techniques that can be used to enable a single-tasking operating system to carry out real-time control. These include:

•	Polling techniques built into the control program
•	Interrupt programming techniques
•	Distributed control techniques.

The polling and interrupt programming techniques have already been discussed in 6.7 and have both been widely used in control systems development based upon PCs. The distributed control technique has arisen because of the ever decreasing cost of processing power that enables interface cards to be developed with on-board processors. This sometimes means that the personal computer provides little more than a front-end user-interface, while the bulk of the control work is done by microprocessor or DSP based interface cards. A typical scenario is shown in Figure 8.4. In effect however, the system is actually multi-tasking because each intelligent interface board is running one or more tasks (eg: PID control loops as shown in Figure 8.4).
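Of the three techniques, polling is the simplest to illustrate. The C sketch below shows the skeleton of a polled control loop of the kind often written under a single-tasking operating system; the input, output and control-algorithm routines are placeholders only, and in a real system they would access the interface hardware and implement the chosen control law.

    #include <stdio.h>

    /* Placeholder routines - in a real system these would read the        */
    /* interface hardware, run the control algorithm and drive the output. */
    static double read_feedback(void)          { return 0.0; }
    static double control_law(double error)    { return 0.5 * error; } /* e.g. proportional only */
    static void   write_actuator(double value) { (void)value; }
    static int    user_requested_stop(void)    { return 1; }  /* stop after one pass here */

    int main(void)
    {
        double setpoint = 10.0;   /* desired value (illustrative) */

        /* The loop polls the feedback signal, computes a correction and     */
        /* writes it out - there is no operating system scheduling involved, */
        /* so the loop timing is determined entirely by this code.           */
        do {
            double feedback = read_feedback();
            double error    = setpoint - feedback;
            double output   = control_law(error);

            write_actuator(output);
        } while (!user_requested_stop());

        printf("Control loop terminated by user request\n");
        return 0;
    }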

Figure 8.4 - Using Distributed Control Based on Intelligent Interface Cards
(A personal computer running a single-tasking operating system, an application program providing the user interface and interrupt-driven interfacing software, connected to intelligent interfaces 1 to N - each performing PID control of its own external system)


8.3 User Interface Issues


The user interface, frequently underestimated by engineers, is probably one of the most important parts of a mechatronic system. It largely determines:

•	The user-friendliness of the system
•	The learning curve associated with using the system
•	The outward appearance of the system and hence, its market appeal.

Moreover, the user-interface can affect the operation of the system because it assists in the accurate entry of data. There are essentially five different types of user interface, reflecting different phases of computer development since the 1960s. These are:

(i) Hollerith punch card input and line-printer output
(ii) Line-oriented input via keyboard and output via text screen
(iii) Simple menu selection input via keyboard and cursor keys and output using text or graphics screen
(iv) Pull-down menu selection input via keyboard, cursor keys and mouse and output using text or graphics screen
(v) Full interactive graphics environment with pull down menus, graphical icons, etc. - input via mouse, keyboard and cursor and output via graphics screen using multiple window formats (as exemplified in Figure 8.5).

Figure 8.5 - Typical Screen from Microsoft Corporation Word for Windows


The Hollerith card system has effectively been obsolete since the late 1970s and is no longer in use. The line-oriented text input/output system was originally used on all levels of computer but is now becoming obsolete, although still used in some mainframe applications. User interface types (iii) to (v) have all been used extensively on PCs and workstations, with type (v) interfaces currently the industry standard. The major difference between the type (v) user interface and all the others is that the framework for the interface and the executable code for many of the functions is becoming an integral part of modern "window" operating systems.

Those developing software in the windows formats often do so with cognisance of other common packages. This enables people to keep common features (such as file handling) operating in a similar way over a wide range of different software applications. For example, Figure 8.5 shows the user interface from the Microsoft corporation's Word for Windows word-processing system. Figure 8.6 shows the user interface from Borland International's Turbo Pascal compiler for Windows. Note the similarity between common functions such as File, Edit, Window, Help, etc. The tools provided by various windows environments to create such software don't restrict the software developer to such formats but they do make it easier for the developer to create similar functions - particularly file handling, help, scrolling, etc.

Figure 8.6 - Typical Screen from Borland International Turbo Pascal for Windows


There is much to be said for developing control and other engineering applications that follow common software trends, such as those provided in word-processors, spread-sheets, etc. If system users sense some familiarity with a new piece of software then they are more likely to use and explore that software systematically. This should minimise unexpected results and damage to physical systems. Since the mid-1980s, most software houses have created programs that enable data entry to occur in a word-processing type format - in other words, the defacto standard technique is to enter data only once and thereafter, correct only the portions that have been mis-keyed or entered incorrectly. Compare this to the old, line-oriented technique where incorrect data had to be completely re-entered - as often as not, an old mistake was corrected and a new mistake created. The software development tools provided in a windows environment are all designed to facilitate the "correct only incorrect portions" technique of data entry.

However, while the user-interface techniques cited in (i) to (v) get progressively easier for the end-user, they unfortunately become progressively more complex for the software developer. In other words, a menu system is more difficult to implement than simple lines of text entry, a pull-down menu system is more difficult than a simple menu and a windows based menu system is much more difficult again. Although windows environments provide the tools and low-level routines that enable developers to use pull-down menus, file and window handling procedures, the scope of the tools is, by necessity, enormous because they can be used in so many different ways. The learning curve for software developers is therefore considerably larger than it was for older user interfaces.

Data entry is of course only one side of the user interface and data output is the other side. It has long been established that displaying countless numbers on screens is quite ineffective, particularly when those numbers represent entities such as system state and the state is continually changing. Graphical representations that enable system users to relate to numerical quantities are naturally the preferred option. However, given limited screen resolutions, graphical representations (animations) are generally only an approximation of the actual system behaviour. Quite often, these need to be supplemented with important numerical quantities or alarms that alert system users to specific conditions that may go unnoticed in approximate displays.

The widespread acceptance of windows operating systems has led to a large number of third-party software houses developing scientific and engineering software tools to supplement the basic user interface input/output tools provided by the operating system. Typically, the additional tools provided include the ability to display animated meters (instruments), graphs, warning lights, gauges, etc. - in fact, a graphical simulation of typical industrial devices that help the user to identify with various quantities.

In the final analysis, the development of user interfaces is really a closed-loop, iterative process. It requires a great deal of interaction between the developers and untrained end-users in order to observe how a basic interface can be changed or enhanced to improve performance.


8.4 Programming Language Issues - OOP


There has been much debate, since the 1960s, regarding the various options in programming languages. Computer scientists are continually arguing the merits of new languages and the benefits of "C" over "Pascal" or "Fortran". From an engineering perspective, we need to divorce ourselves from these arguments because in a larger sense, they are trivialising the main objective of computer programming - that is, to create an operational piece of software that:

•	Will reliably and predictably perform a required function
•	Can be easily read and understood by a range of people
•	Can be readily modified or upgraded because of its structure and modularity.

If we refer back to Figure 8.3, we can see that for a general mechatronic control application, there are several levels of software that need to be written:

•	The interface between the hardware (interfacing card) and the main application - the I/O routines
•	The user interface
•	The control algorithm.

This leaves us with the problem of deciding upon various levels of programming and possibly, programming languages. The software that couples the hardware interface card to the main application is the one most likely to be written in an assembly language, native to the particular computer processor upon which the system is based. Traditionally, most time critical routines (normally I/O) were written in an assembly language to maximise performance. Additionally, many routines that directly accessed system hardware (memory locations, etc.) were also coded in assembler. However, there are two reasons why many developers may not need to resort to assembly language programming. Firstly, most modern compilers are extremely efficient in converting the high-level source code down to an optimised machine code and there is generally little scope for improving on this performance by manually coding in assembly language. Secondly, many manufacturers of I/O and interfacing boards tend to do much of the low level work for the developer and provide a number of highly efficient procedures and functions that interface their hardware to high level language compilers such as Pascal and C.

Given that the choice of interface software has largely been determined by the board manufacturer, a system developer is still left with the problem of selecting a high level language for implementation of the control algorithm and user interface. Contrary to the opinions of many computer scientists, from an engineering perspective, the choice of a high level language is largely irrelevant and is best decided on a political basis rather than a technical basis - in other words, issues such as:


- Market penetration of the language
- Industry acceptance
- Compatibility with common windows operating systems
- Availability of third-party development tools
- Availability of skilled programmers

are far more important than the actual syntax differences between Basic, C, Fortran and Pascal. In fact, viewing the process with an open mind, one would probably find that most modern compilers satisfy the above criteria. In the 1970s and 1980s there was much ado about the deficiencies of the Basic programming language. Many of these were valid because the language at that time was largely unstructured (ie: was based on GOTO statements) and was often interpreted (converted to machine code line by line) rather than compiled (converted in its entirety to an executable binary code). This meant that Basic was very slow and programs developed in the language were often untidy and difficult to maintain. However, modern versions of Basic contain the same structures as the other languages, including records, objects, dynamic data structures, etc. Provided that one has the discipline to write structured, modular code, with no procedure, function or main program greater than the rule-of-thumb "30 lines" in length, then the differences between modern Basic and C become largely syntactic. Fortran is another language that appeared to be in its final days in the 1980s but has since had a recovery. Fortran was originally the traditional programming language for engineers because of its ability to handle complex numbers and to provide a wide range of mathematical functions. An enormous number of engineering packages were developed in Fortran, particularly older control systems, finite element analysis packages and so on. When these advantages disappeared as a result of the large number of third-party development tools for other languages, it seemed as though C would become the dominant new language in the 1980s. However, the availability of Fortran compilers, complete with tool-boxes for the windows environments has slowed the conversion of older programs from Fortran to C and has encouraged continued development in Fortran. Pascal, originally considered to be a language for teaching structured programming, came into widespread use in the 1980s as a result of the Borland International release of Turbo Pascal. This low cost development tool sparked an enormous range of third-party development packages and particularly encouraged interface manufacturers to provide Turbo Pascal code for their hardware. Again, as a result of windows development tools for the language, it is evident that it will remain viable for some years to come.


The C programming language, most commonly favoured by computer scientists as a result of its nexus with operating systems such as UNIX, has also become a dominant, professional high-level language. Although structurally no more sophisticated than most modern implementations of the other languages, it has become the preferred language of software development houses and as a result, is supported by a large range of third-party software. Many hardware manufacturers provide C-level code support for their products and since C is the basis of modern windows environments it is also extensively supported by windows development tools.

Object-Oriented Programming (or OOP) based software development has become a major issue in recent years. It should not be regarded as a major change to programming techniques, but rather, a logical extension of structured programming languages. Most modern compilers in Pascal, Fortran and Basic incorporate objects in their structure. The C language incorporating objects is generally referred to as C++. An object is really just an extension of the concept of a record, where a group of variables (fields) are combined under a common umbrella variable. For example, using Pascal syntax, the following record variable can be defined:

   Patient_Details = Record
      Name    : String [12];
      Address : String [25];
      Phone   : String [7];
      Age     : Integer;
   End {Record};

A variable of type Patient_Details then contains the fields of Name, Address, Phone and Age which can either be accessed individually or as a record group. An object combines a group of variables and the functions and procedures (subroutines) that handle those variables. For example:

   Patient_Details = Object
      Name    : String [12];
      Address : String [25];
      Phone   : String [7];
      Age     : Integer;
      Procedure Enter_Name;
      Procedure Enter_Age;
      Procedure Write_Name;
      Procedure Write_Age;
      :
   End {Object};


Although the concept of combining variables with the functions and procedures that handle those variables may not seem to be of importance, it becomes far more significant because objects are permitted to inherit all the characteristics of previous objects and to over-write particular procedures and functions:

   Special_Patient_Details = Object (Patient_Details)
      History : String [100];
      Procedure Enter_Name;
      Procedure Enter_History;
      :
   End {Object};

In the above piece of code, variables of the type "Special_Patient_Details" inherit all the fields of variables of type "Patient_Details" - moreover, they have an additional field (History) and procedure (Enter_History) added and a new procedure (Enter_Name) which over-writes the other procedure of the same name.

Most modern programming is based upon the principles of OOP - particularly for windows environments. All the basic characteristics of the windows environments, including file handling, window and screen handling, etc. are provided to developers as objects. The developer's task is to write programs where the basic attributes are inherited and relevant procedures and functions over-written to achieve a specific task. The problem with the concept is that there are so many objects, variables and procedures provided in the windows environment that it is difficult to know where to begin and hence the learning curve is much longer than for older forms of programming - however, the end-results should be far more professional in terms of the user interface and in the long-term, programmers can devote more time to the task at hand rather than the user interface.

The other major programming difficulty that people tend to find with the windows environments is the event-driven nature of such environments. This presents a significant departure from the traditional form of programming. In windows environments, any number of events can trigger some routine in a program - for example, the pressing of a mouse button, the pressing of a key on the keyboard, the arrival of a message from the network, etc. In essence then, the task of developing a program for a windows environment dissolves into "WHEN" type programming. In other words, we need to develop a procedure for "when the left mouse button is pressed" and "when a keyboard button is pressed" and so on and so forth. This is not as easy a task as one might imagine, particularly given the number of events that can occur and also because of the general complexity of the environment in terms of the number of objects and variables. Consider, for example, how many events can occur at any instant for the environment of Figure 8.5 - and these events are only for the user interface. In a control system one also has to deal with the dynamic interaction with hardware interfaces.
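As a purely illustrative sketch (reusing the hypothetical Patient_Details and Special_Patient_Details declarations above, which are assumed to be in scope), the following fragment shows how a program might use these objects. Note that the descendant type automatically uses its own, over-written version of Enter_Name while still inheriting Enter_Age unchanged:

   Var
      Standard : Patient_Details;
      Special  : Special_Patient_Details;

   Begin
      Standard.Enter_Name;      { Patient_Details version of the procedure }
      Standard.Enter_Age;
      Special.Enter_Name;       { over-written version in the descendant object }
      Special.Enter_History;    { new procedure added by the descendant object }
      Special.Enter_Age         { inherited, unchanged, from Patient_Details }
   End.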


It appears that both OOP and windows environments based on OOP will remain as a major entity in computing throughout the 1990s and so the issues of development complexity must be tackled in modern mechatronic control systems. While the windows environments have provided considerable benefits for software houses developing commercial programs, they have (because of the extended learning curves) also provided new problems for engineers that only need to develop specialised programs on an infrequent basis. It is evident that many modern compilers, even those with OOP, have still made the software development task somewhat difficult because there have been few attempts at simplifying the problems associated in dealing with windows environments. However, it is also apparent that newer compilers will address the shortcomings of the windows development process for infrequent programmers. In particular, the most recent trend in compilers has been to provide a much simpler interface between the infrequent programmer and the complexity of the windows environment. This has important ramifications for engineering users who are unlikely to exploit more than a small percentage of the functionality of a windows environment, as opposed to the general-purpose software developers that may require the broad spectrum of functions.


8.5 Software Engines


Most professionals prefer not to "reinvent the wheel" when they design systems or products because re-invention often leads to the resolution of numerous problems that are better solved (or have already been solved) elsewhere. However, in terms of software, it is interesting to note that many designers have not recognised the development of software "wheels" that can be better used than traditional high-level-language compilers.

Most modern, windows-based applications, including spreadsheets, databases, word-processors, CAD systems, etc. share a common object-oriented structure that leaves open the possibility of modifying their appearance to achieve some specific objective. In other words, we may be able to modify a spreadsheet program, for example, so that it can act as a control system. We can do so by changing the user interface of the application and by changing the direction of data flow. There are two ways in which modern applications can be restructured to create new applications:

- Through software developed in a traditional high-level-language compatible with the object-oriented nature of the original application
- Through the use of embedded or macro languages built into the application itself.

This means that modern general-purpose applications, such as spreadsheets, can no longer be considered simply as an end in themselves, but also as "software engines" that can be used to achieve some other objective. The ramifications of this are quite substantial, particularly because modern spreadsheet and database programs already have considerable graphics, drawing, animation and networking capabilities that can be harnessed and adapted rather than re-invented. In the case of databases there is also the issue of access to a range of other common database formats that has already been resolved. The question that designers of modern mechatronic control systems need to ask themselves is no longer simply one of:

"Which high-level-language should be applied to achieve our objectives?"

but rather:

"Should one first look at the possibility of achieving the objective with a modified spreadsheet or database package before looking at total development through the use of high-level languages?"


The answer to the latter question is not always intuitively obvious. For all the benefits involved in modifying an existing "engine", there are also shortcomings, most notably, the fact that the performance of such a system may not be compatible with real-time requirements. On the other hand, if one examines the rapid escalation of processing power and the rapidly diminishing cost of both the processing power and peripheral devices (memory, disk storage, etc.), one is left with the conclusion that in the long-term, the use of software engines in a range of engineering applications will become significant. Another shortcoming of the software engine approach is that it does not necessarily diminish the need for the developer (or more appropriately, modifier) to understand the intricacies of both the operating system in which the application runs and the programming language which can be used to adapt the original software engine. On the contrary, those contemplating the use of software engines may well need to be more familiar with the entire software picture than those developing applications from first principles.


8.6 Specialised Development Systems


For many years now, computer scientists have debated about the way in which software should be developed. The argument has not only been in regard to the type of high level language that should be used but also about the very nature of programming. Most high level languages are little more than one step away from the assembly language or machine code instruction set that physically binds the Von Neumann hardware architecture, of most modern processors, to the software that end-users would recognise. Computer scientists have argued as to whether programming should be undertaken at a more human level than is provided by most high level languages - in other words, should programming be further divorced from the hardware architecture of the modern computer? In order to make computers appear to be more human, it is clear that a higher level of hardware is required in order to create an environment in which the more sophisticated (human) software can reside. The reality is that the "friendlier" the software development/application platform, the more overheads are imposed on the hardware. However, when we consider that the cost of hardware has been decreasing for almost four decades, it is clear that there is scope for improving software development techniques. There are two techniques which have received a considerable amount of exposure in international research journals since the 1980s. These are:

- Artificial Intelligence (AI) languages / Expert System shells
- Neural Network development systems.

Both of these development strategies have tried to emulate some of the human thought processes and brain functions. However, both also suffer from the same problem, born from the adage: "Be careful what you wish for - you may get your wish" Novices in the field of computing always have problems in learning programming and wish for better development tools because they think that the computer is too complex. However, as we have seen, the computer is not at all complex, relative to humans, and its logic processes are incredibly simplistic. The problem in coming to terms with computing is that the uninitiated view the computer as some form of electronic brain, with an intelligence level similar to that of a human. Few people however, realise the complexities and, more importantly, the inconsistencies and anomalies in the human thought process. If one therefore wishes to have software that emulates human thought processes then one must also be prepared to endure the anomalies and inconsistencies of the process.


AI programming languages, such as LISP and PROLOG, have been in existence for many years and their purpose has been to develop "expert" control and advisory systems that can come up with multiple solutions to problems. However, the programming languages themselves are a problem. The flexibility that they allow in program structure, variable definitions, etc., makes them difficult to debug and impractical to maintain because of the bizarre programming syntax. Over and above these serious shortcomings is the fact that they impose considerable overheads on even the simplest programs. For these reasons, such languages lost credibility in the late 1980s, with many expert system developers opting for traditional languages which provided "maintainable" software solutions to problems. As a result of the shortcomings of the AI languages, a good deal of research and development was applied to making expert system shells, in which "intelligent" software could readily be developed. These too, impose considerable overheads for somewhat limited benefits over traditional development techniques. Their performance generally makes them unsuitable for most real-time control functions. Expert systems are often said to "learn" as they progress. In engineering terms, this simply means that expert systems are composed of a database with sophisticated access and storage methods. Overall however, the difficulty with any system that "learns" and then provides intelligent solutions to problems is whether or not we can trust a device, with a complex thought process, to determine how we carry out some function. As we already know, it is difficult enough to entrust a control or advisory device programmed in a simplistic and deterministic language (such as C or Fortran or Pascal), much less one that is less predictable in the software sense. Neural networks, on the other hand, have had a more promising start in the programming world. The basic premise of neural networks is that it should not be necessary to understand the mathematical complexity of a system in order to control its behaviour. Neural networks can provide a useful "black-box" approach to control. The network (software) has a number of inputs and outputs. The software is "trained" by developers providing it with the required outputs for a given set of inputs. This can be a complex task and the first "rough-cut" training may not provide the desired outcomes. An iterative training process is used so that the system can ultimately provide a reliable control performance. The difficulty with these systems is normally in determining and characterising the inputs and outputs in terms of the neural network requirements. Neural networks have already been successfully applied in a range of different control environments where a direct high-level-language-algorithmic approach is difficult. Examples include hand-written-character-recognition systems for postal sorting, high-speed spray-gun positioning control systems for road marking and so on. In fact, any control system which is difficult to categorise mathematically or algorithmically can be considered for neural network application. Another advantage of most neural network software packages is that they ultimately "de-compile" to provide Pascal or C source code that can then be efficiently applied without a development shell.


Chapter 9
Electromagnetic Actuators & Machines - Basic Mechatronic Units

A Summary...
This chapter examines the physics, background theory and performance characteristics of a range of electromagnetic devices that are commonly used in industrial applications to convert electrical energy into some form of mechanical movement. The basic devices introduced herein are used to generate one of the most fundamental mechatronic elements, the servo motor positioning system. The chapter also examines how all the basic elements (computer, interface hardware, transducers, software, actuators, etc.) can be brought together to form a closed-loop mechatronic system that is commonly used in robotics, CNC machines and many other industrial applications.

[Chapter opening figure: a computer exchanges digital voltages with an external system via digital-to-analog and analog-to-digital conversion, scaling or amplification, isolation and protection circuits; energy conversion elements (actuators and transducers), supported by external voltage supplies, link the resulting analog voltages to the analog energy forms of the external system.]


9.1 Introduction to Electromagnetic Devices


Electromagnetic devices, placed under some form of computer control, such as in the case of servo motor drives, are one of the most convenient ways of converting electrical signals into mechanical movements with a high degree of accuracy and a low level of acoustic noise. Electromagnetic devices are one of the most prolific elements in modern mechatronic systems including robots, CNC machines and precision actuators. In low to medium power applications, they can be economically used to replace hydraulics and pneumatics, thereby leading to reductions in noise emission and an increase in the degree of control over the end system. However, in order to comprehend the design and performance of the "intelligent" motor control system, it is first necessary to come to terms with basic electromagnetic principles and the architecture and performance of the machines themselves.

A complete analysis of the many electrical, electromagnetic and mechanical characteristics of the machines described herein is outside the scope of this book. The purpose of this chapter is to summarise and review the basic characteristics peculiar to the different types of electromagnetic machines and to show how these characteristics can be harnessed through intelligent control strategies.

As a starting point, it can be noted that all of the machines that we will examine in this chapter can also be considered as transducers that convert electrical energy to mechanical energy (or vice versa). When these machines convert electrical energy to mechanical energy then they are said to be motors and when they convert mechanical energy to electrical energy they are said to be generators. Despite the apparent physical differences between the electrical machines that we shall explore herein, the basic principles of energy conversion are similar. All the machines use the coupling of magnetic and/or electromagnetic fields in order to convert from electrical energy to mechanical energy and vice versa. As with all forms of energy conversion, there are losses that need to be considered in the process and these are shown in Figure 9.1.

[Figure: block diagram showing an electrical input or output coupled through an electrical system, a magnetic system and a mechanical system to a mechanical input or output; electrical energy losses, field losses and mechanical energy losses occur at each stage. Energy flows from electrical to mechanical for a motor and from mechanical to electrical for a generator.]

Figure 9.1 - Energy Conversion in Electrical Machines


All the electrical machines discussed in this chapter follow the energy conversion processes shown in Figure 9.1 and all the machines suffer from the same characteristic losses that diminish their efficiency and contribute to unwanted heat. In order to understand the energy conversion and loss mechanisms described in Figure 9.1, it is first necessary to review the fundamental physical relationships that govern electromechanical energy conversion in all machines. These are summarised below, together with some common definitions used in machine theory.

(i)

Right Hand Thumb Rule

One of the most basic rules of field theory is the so-called "thumb" or "right hand thumb" rule. It is most readily understood by referring to Figure 9.2.


Figure 9.2 - Magnetic Field Induced Around a Current Carrying Conductor

The thumb rule is used to determine the direction of the magnetic field intensity vector (H) surrounding a conductor that carries a current (i). In order to apply the rule, one grasps the conductor in the right hand, with the thumb pointing in the direction of current flow. The fingers curl around the conductor and the finger tips indicate the direction of the magnetic field intensity.

(ii)

Turns, Coils and Windings

A "turn" is hereafter defined as two conductors that are joined at one end or alternatively as a single, unclosed loop of conducting material. A "coil" is defined to be a number of turns connected in series (as its name implies). A "winding" is defined to be a number of coils connected in series.


(iii) Ampere's Law

Looking again at Figure 9.2, it is possible to determine the relationship between the current in a conductor and the resulting magnetic field intensity around the conductor by applying Ampere's Law:

$\oint \mathbf{H} \cdot d\mathbf{l} = N\,i$ ...(1)

where H is the vector representing magnetic field intensity, dl is the vector representing the incremental length of the closed-integral path, N is the number of "turns" of conductor enclosed by the integral path and i is the current in the conductor. The quantity "Ni" is given the special name "magnetomotive force" or m.m.f.. In the case of Figure 9.2, the geometry presented is trivial because the vectors H and dl are collinear and the following relationship results from equation (1):

$H \cdot 2\pi r = N\,i$

$H = \dfrac{N\,i}{2\pi r}$ (Amps / Metre)
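As a purely illustrative calculation (the numbers are assumed rather than taken from any particular device), a single conductor (N = 1) carrying a current of 10 A produces, at a radius of 5 cm from its centre, a field intensity of:

$H = \dfrac{N\,i}{2\pi r} = \dfrac{1 \times 10}{2\pi \times 0.05} \approx 31.8$ Amps / Metre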

(iv) Magnetic Flux Density

The magnetic flux density (B) that is generated by a magnetic field intensity (H) is defined by the following relationships:

$B = \mu\,H = \mu_r\,\mu_o\,H$ ...(2)

where $\mu$ is referred to as the "permeability" (or "permeance") of the medium, $\mu_o$ is the permeability of free space (and is a constant) and $\mu_r$ is the relative permeability of the medium ($\mu_r$ equals unity for free space). The units of flux density are Tesla (T) or Webers per square metre (Wb/m²). The actual value of relative permeability in magnetic materials varies with the magnitude of field intensity and a typical relationship is shown in Figure 9.3. This is known as a magnetisation curve. It represents the value of flux density as field intensity is increased from zero up to some finite value and the characteristic is different for each type of magnetic material.


Figure 9.3 - Typical Shape of a Magnetisation Curve
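As an illustrative calculation using assumed values, a field intensity of H = 500 A/m applied to a ferromagnetic material with a relative permeability of $\mu_r = 1000$ (well below saturation) produces a flux density of:

$B = \mu_r\,\mu_o\,H = 1000 \times 4\pi \times 10^{-7} \times 500 \approx 0.63$ T

whereas the same field intensity in free space ($\mu_r = 1$) produces only about $6.3 \times 10^{-4}$ T - this is precisely why ferromagnetic cores are used to concentrate flux in electrical machines.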

(v)

Magnetic Flux

Figure 9.4 shows a simple magnetic circuit, composed of a toroidal piece of ferromagnetic material, energised by a coil of N turns. If the coil carries a current, i, then a magnetic field intensity H is produced and a flux density B results in the core of the toroid. Ideally, it is assumed that there is no flux leakage and that all flux passes through the core of the toroid and not through the air. The flux within the toroid is given by the following surface integral over the cross-section of the core:

$\Phi = \int_S \mathbf{B} \cdot d\mathbf{s}$ ...(3)

where $d\mathbf{s}$ is a vector perpendicular to the infinitesimally small element of surface area through which the flux density B passes.


Figure 9.4 - Energised Toroidal Core with an N Turn Winding


If the cross-sectional area of the toroid is A, then the flux ($\Phi$) within the toroid is simply given by:

$\Phi = B\,A$ (Wb)

because the cross-section of the toroid is perpendicular to the magnetic flux density vector B.

(vi) Reluctance of Magnetic Paths

In magnetic circuits, it is normal to define a quantity known as the "reluctance" of a magnetic path. This is analogous to the resistance of an electrical path in network theory. Looking again at Figure 9.4, we can see that if the mean path length around the toroid is defined by "l", then using Ampere's law:

$H = \dfrac{N\,i}{l}$

$B = \dfrac{\mu\,N\,i}{l}$

$\Phi = \dfrac{\mu\,A\,N\,i}{l} = \dfrac{N\,i}{R}$, where $R = \dfrac{l}{\mu\,A}$

The quantity R is referred to as the reluctance of the magnetic path.
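To illustrate these relationships with assumed figures, consider a toroid with a mean path length of 0.2 m, a cross-sectional area of $4 \times 10^{-4}$ m² and a relative permeability of 1000, wound with 500 turns carrying 0.2 A:

$R = \dfrac{l}{\mu\,A} = \dfrac{0.2}{1000 \times 4\pi \times 10^{-7} \times 4 \times 10^{-4}} \approx 4.0 \times 10^{5}$ A-t/Wb

$\Phi = \dfrac{N\,i}{R} = \dfrac{500 \times 0.2}{4.0 \times 10^{5}} \approx 2.5 \times 10^{-4}$ Wb

which corresponds to a core flux density of approximately 0.63 T, provided the material remains on the linear portion of its magnetisation curve.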

(vii) Magnetic Circuits with Air Gaps

The magnetic circuits in a.c. and d.c. machines are essentially divided into three parts. The static portion of the machine (the stator) forms one part of the circuit, the rotational part of the machine (the rotor) forms a second part and the air gap that exists between them forms the third part of the circuit. For this reason it is necessary to be able to examine composite magnetic systems that are a series of paths of differing reluctances. A simplistic system, composed of a ferromagnetic core and air gap is shown in Figure 9.5.



Figure 9.5 - Magnetic Circuit with Air Gap

Compound magnetic circuits, such as that in Figure 9.5, can be analysed in an analogous way to electrical circuits, by using the "magnetic" equivalent of Kirchhoff's voltage law. In order to understand how this equivalent analysis arises, it is necessary to examine Table 9.1, which shows the "dual" electrical and magnetic properties.

Property                    Electrical Circuit                              Magnetic Circuit
Primary Energy Source       Electromotive Force (e.m.f.), E                 Magnetomotive Force (m.m.f.), F
Energy Transfer Mechanism   Current, i = e.m.f. / Resistance                Flux, Φ = m.m.f. / Reluctance
Impediment                  Resistance, R = l / (σA)                        Reluctance, R = l / (μA)
                            (σ = conductivity, l = length)                  (μ = permeability, l = length)

Table 9.1 - "Dual" Electrical and Magnetic Circuit Properties


Looking again at Figure 9.5, we can sum the m.m.f.s around the closed path, just as we would with voltages around a closed loop:

$N\,i = R_f\,\Phi + R_a\,\Phi$

$N\,i = H_f\,l_f + H_a\,l_a$

$N\,i = \dfrac{B_f\,l_f}{\mu_f} + \dfrac{B_a\,l_a}{\mu_o}$

where the subscript "f" refers to quantities in the ferromagnetic material and the subscript "a" refers to quantities in the air-gap. It should also be noted that solving these equations for particular variables is not as straightforward as one might first anticipate. Whilst the relationship between Flux Density (B) and Magnetic Field Intensity (H) in the air gap is linear, the same cannot be said for other magnetic materials, where a characteristic such as that shown in Figure 9.3 exists. The solution to such magnetic circuit problems normally comes from using a traditional graphical "load-line" approach that is based upon the magnetisation curve of the magnetic material used for the core.

Another point to note in regard to solving compound magnetic circuit problems is that the magnetic flux lines are not consistent all the way around the circuit. In the air gap, the lines bow outwards, thereby giving the air gap a greater effective area than the core. This is called "fringing". However, if the length of the air gap is much smaller than the length of the core then the effects of fringing can often be ignored.

(viii) Faraday's Law

Consider the magnetic circuit shown in Figure 9.6. If a time varying current, i(t), is applied to the primary coil, then a time varying magnetic flux is produced in the magnetic core. Faraday's law tells us that a voltage is induced into the secondary coil as a result of the time variant flux that passes through it. The relationship is as follows:

$v(t) = N\,\dfrac{d\phi(t)}{dt}$ ...(4)

This is the mathematical description that simply tells us that the voltage induced in a conductor is proportional to the rate of change of flux linking that conductor. In the case of a winding, the number of turns (N) becomes a multiplier.
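As a simple illustration with assumed values, if the flux linking a 200 turn secondary coil rises uniformly at a rate of 0.01 Wb/s, the induced voltage is:

$v(t) = N\,\dfrac{d\phi(t)}{dt} = 200 \times 0.01 = 2$ V

for as long as the flux continues to change at that rate.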



Figure 9.6 - Electromagnetic Induction

(ix) Hysteresis

Figure 9.4 shows a number of turns of conductor wrapped around a toroidal magnetic core, whose magnetisation curve is of the form shown in Figure 9.3. If a d.c. current is used to supply the m.m.f. for the core and the value of the current is increased from zero to some finite value, then the form of magnetisation curve typified by Figure 9.3 results. However, if the value of m.m.f. is then reduced, then a different relationship exists during demagnetisation. Similarly, if the m.m.f. is again increased, a different relationship exists during remagnetisation. This relationship is typified by the loop shown in Figure 9.7. This loop is referred to as a "Hysteresis Loop". Hysteresis is an important concept because whenever a sinusoidal, a.c. current excitation is applied to a core, then for each cycle of the current waveform, one hysteresis loop is generated, thereby dissipating energy.


Figure 9.7 - Typical Hysteresis Characteristic


The energy dissipation due to hysteresis can be best understood by looking again at Figure 9.4, where it can be said that if a sinusoidal current waveform, i(t), passes through the coil on the toroid, then the e.m.f. across the coil can be deduced from Faraday's law:

$v(t) = N\,\dfrac{d\phi(t)}{dt}$

The power, p(t), consumed in the core is defined by the product of voltage and current:

$p(t) = v(t)\,i(t)$

and we also know that:

$\phi(t) = B\,A \qquad i(t) = \dfrac{H\,l}{N}$

so that the power consumed in the core is:

$p(t) = l\,A\,H\,\dfrac{dB}{dt}$ ...(5)

and hence the energy consumed in the core is:

$W = l\,A \int_{t_1}^{t_2} H\,\dfrac{dB}{dt}\,dt$

$W = l\,A \int_{B_1}^{B_2} H\,dB$ ...(6)

where the quantity "lA" represents the volume of the core. Equation (6) shows that for each magnetisation, demagnetisation and remagnetisation cycle of the hysteresis loop, energy is consumed. The amount of energy consumed is proportional to the area of the hysteresis loop and is dependent upon the volume of the core.


The volume dependence of the energy consumption due to hysteresis can be readily explained on physical grounds. The magnetisation and demagnetisation of a ferromagnetic material requires the rearrangement of the magnetic "domains" within the material. The larger the volume of material, the more rearrangement that has to take place and hence the higher energy consumption. It is normally not practical to evaluate the area under a hysteresis loop and so empirical methods are normally used to define the loss. The actual equations for determining power loss are dependent on the ferromagnetic material used to form the core of the magnetic circuit.
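Purely as an illustration of the orders of magnitude involved (the figures below are assumed rather than measured), if the area of the hysteresis loop for a particular core material is 250 J/m³ per cycle, the core volume is $5 \times 10^{-4}$ m³ and the excitation frequency is 50 Hz, then the hysteresis power loss is of the order of:

$P_h \approx 250 \times 5 \times 10^{-4} \times 50 \approx 6$ W

since one complete loop is traversed in every cycle of the supply.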

(x)

Eddy Current Losses

Faraday's law of electromagnetic induction tells us that the voltage induced in a conducting material is proportional to the rate of change of flux linkage in the material. In the case of ferromagnetic cores that are used to provide magnetic circuit paths, voltage is induced not only in coupled windings but also in the core itself. This results in a time-varying current flow through the core. The phenomenon is referred to as an "eddy-current" and is shown in Figure 9.8.


Figure 9.8 - Generation of Eddy-Currents in Ferromagnetic Cores

Any current flow in the ferromagnetic material will naturally lead to energy loss because the ferromagnetic material is not an ideal conductor and resistive heating occurs. The power loss due to eddy currents is normally minimised by making cores for electrical machines and transformers from thin laminations of ferromagnetic material, separated by insulating material. Since the power loss due to eddy currents is proportional to the square of the induced voltage in the material and inversely proportional to the resistance of the material, increasing the resistance through lamination decreases the losses.


(xi) Lorentz Force

The magnetic force (F) on a conductor carrying a current (i) within a magnetic field of density B is summarised by two equivalent Lorentz vector relationships:
$\mathbf{F} = i\,\mathbf{l} \times \mathbf{B}$ ...(7)

$\mathbf{F} = q\,\mathbf{v} \times \mathbf{B}$ ...(8)

In equation (7), $\mathbf{l}$ represents a length vector which is in the direction of current flow. In equation (8), q represents the electron charge and $\mathbf{v}$ represents a velocity vector corresponding to the direction of electron flow.
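For example (using assumed values), a conductor of length 0.3 m carrying 10 A at right angles to a field of 1.2 T experiences a force of magnitude:

$F = B\,i\,l = 1.2 \times 10 \times 0.3 = 3.6$ N

directed according to the vector cross product in equation (7).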

(xii) Torque on a Current Loop in a Magnetic Field

The concept of calculating the torque produced on a single current loop exposed to a magnetic field is perhaps the most important physical concept relating to the operation of all electric motors. The basic elements of a simple circuit are shown in Figure 9.9, where a current loop is pivoted within a magnetic field of density B.


Figure 9.9 - Torque Produced in a Current Loop


Applying the vector equation (7) to sides b and d of the current loop shows that the forces imposed on each of these sides are always equal and opposite (regardless of the angular position of the loop) and therefore have no net effect on the movement of the coil. The magnitude of the forces on sides a and c of the loop are also always equal and opposite - however they do not act on the same line of action and therefore a net torque is produced. Applying the vector cross product rule for Lorentz force to conductor a (of length a) shows that the magnitude of the force on the conductor is F = i a B and the force is in the "-x" direction. The force on conductor c is the same in magnitude but exists in the "+x" direction. From symmetry, it is clear that the torque on the loop (τ) is double that generated by the force on one conductor and is given by the expression:

$\tau = 2\,i\,a\,B\,\dfrac{b}{2}\,\sin\theta$ ...(9)

where b is the length of side b of the coil. The product "ab" represents the area "A" of the loop. If we replace the loop with a coil of N turns, then the net torque is multiplied by a factor of N. Therefore, the magnitude of the torque on a coil carrying a current "i" and subjected to a magnetic field "B" is:

$\tau = N\,i\,A\,B\,\sin\theta$ ...(10)
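As a numerical illustration with assumed values, a 100 turn coil of area 0.01 m² carrying 2 A in a field of 0.8 T, in the position of maximum torque ($\sin\theta = 1$), experiences a torque of:

$\tau = N\,i\,A\,B\,\sin\theta = 100 \times 2 \times 0.01 \times 0.8 \times 1 = 1.6$ N·m

and the torque falls to zero as the coil rotates towards the position where $\sin\theta = 0$ - which is one reason why practical machines use commutation and multiple coils to maintain a continuous torque.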

(xiii) The e.m.f. Equation

The e.m.f. equation arises from time to time when discussing a.c. machine theory and so it is valuable to understand its origins. Essentially the equation comes from applying Faraday's law to systems with sinusoidal excitation:

$e.m.f. = N\,\dfrac{d\phi}{dt}$


If $\phi(t) = \Phi\,\sin\omega t$, then the generated e.m.f. becomes:

$e.m.f. = N\,\omega\,\Phi\,\cos\omega t = 2\pi f\,N\,\Phi\,\cos\omega t$

The r.m.s. value of the e.m.f. ($E_{rms}$) is obtained by dividing the peak value of the cosinusoidal waveform by $\sqrt{2}$ and is therefore:

$E_{rms} = \dfrac{2\pi f\,N\,\Phi}{\sqrt{2}} = 4.44\,f\,N\,\Phi$ ...(11)

This relationship is referred to as the e.m.f. equation.
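As an illustrative calculation with assumed values, a 200 turn winding linking a sinusoidal flux of peak value 5 mWb at 50 Hz generates:

$E_{rms} = 4.44\,f\,N\,\Phi = 4.44 \times 50 \times 200 \times 0.005 \approx 222$ V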


9.2 Fundamentals of d.c. Machines


9.2.1 Introduction to d.c. Machines
Direct current machines, like most other electrical machines, are electromechanical energy converters that can function in two directions - as both generators and motors. However, early technical difficulties in transmitting d.c. voltages and currents for residential and commercial use have meant that the bulk of d.c. machine implementations have been for motoring purposes. One of the key advantages of the d.c. motor is its relative flexibility in terms of speed control and the range of torque-speed characteristics that can be achieved by reconfiguration of motor windings. Unlike induction machines and synchronous machines, the speed of a d.c. motor can be accurately regulated through simple electrical circuits, and it was this factor above all others that led to the proliferation of the d.c. motor in applications such as servo drives and traction systems for trains, trams and electric trolley-buses. The introduction of power electronics in the 1970s diminished many of the advantages of the d.c. machine over its a.c. rivals. Accurate speed control of a.c. machines not only became more viable but also more attractive because of the widespread generation of a.c. power. The other relative disadvantage of d.c. machines is their additional maintenance requirement that arises through the use of limited-life "brushes" to connect the outside world to the main (armature) windings. Although the cost of brushes is not significant, the down-time that can arise from motor maintenance is of crucial importance when electrical machines form part of a major industrial installation. On the other hand, the simplicity and existing proliferation of d.c. machines means that their use is ensured for some time to come. Generally, you will find that books dealing with a range of electrical machines tend to devote more time to d.c. machines than to a.c. machines. As you will discover, the same is true within this chapter. There are two reasons for this emphasis. Firstly, although d.c. and a.c. machines differ in many ways, fundamental characteristics and magnetic phenomena are common to all machines and are perhaps best exemplified and introduced with d.c. machines. Secondly, d.c. machines can be configured in several different ways whereas a.c. machines each tend to have only one basic configuration. In essence, therefore, when we discuss d.c. machines we need to examine several different machine configurations, each of which has its own characteristics.


9.2.2 Physical Characteristics of d.c. Machines


The d.c. machine, like all other rotating electrical machines, is composed of two major components. The stationary component, that makes up the frame of the machine, is referred to as the "stator" and the rotating inner component is referred to as the "rotor". As with other electrical machines, both the stator and the rotor are equipped with either windings or permanent magnets that either cause an interaction of magnetic fields to produce a rotational torque or the induction of voltage in the complementary windings. In an electrical machine, the main winding through which power is generated (or absorbed during motoring) is referred to as the "armature". The secondary winding that is used solely to generate a magnetic field is referred to as the "field" winding. In a d.c. machine, the armature winding is located on the rotor and the field windings are located on the stator. Figure 9.10 shows in simple schematic form the fundamental components of a d.c. machine (with only one coil shown on the rotor for simplicity).


Figure 9.10 - (a) Schematic Cross-section of a Stator on a d.c. Machine (b) Schematic Cross-section of a Rotor on a d.c. Machine

In Figure 9.10, the magnetic field in the stator is shown to be produced by windings that create magnetic poles, but the field could also have been produced by using permanent magnets. The limiting factor with permanent magnets is of course the size of the field that can be produced (thereby limiting the size of the motor itself). Regardless of whether the d.c. machine acts as a generator or as a motor, the magnetic field must be present in the stator in order for the machine to function correctly.


In order to connect the rotating armature to the outside world, each end of the armature winding is connected to a "commutator" (that facilitates both voltage conversion and external connection to the rotating windings). The commutator is then connected by brushes to external wiring. A brush is simply a conducting material block that touches the surface of a rotating conductor in order to provide a good electrical (non-sparking) connection to a stationary external circuit. In order to prevent excessive wear on the rotor, the brush material must be relatively soft and therefore brushes are normally expendable components that need to be periodically replaced. Brushes are made from materials such as hard carbon, electrographite or metal graphite.

The commutator is a device which is conceptually simple to understand but difficult to comprehend in realistic d.c. machines. The confusion arises because of the number of coils inside the armature of a practical d.c. machine. However, the basic concept of commutation can be best understood if we look at a simplistic d.c. machine with only one turn (loop) within the armature, such as that shown in Figure 9.10 (b). In Figure 9.10 (a) we see the sort of flux distribution that exists within the air gap of the stator. If we were to plot the magnitude of flux density (B) in the air gap as a function of the angular position (θ) of a rotor within the air gap then we would have a distribution of the form shown in Figure 9.11.


Figure 9.11 - Flux Density Distribution as a Function of Angular Position in Air Gap

If the rotor of Figure 9.10 (b) is inserted into the constant magnetic field provided by the stator of Figure 9.10 (a) and the rotor remains stationary, then Faraday's law tells us that no e.m.f. is generated in the armature because there is no change in flux linkage to the armature winding. On the other hand, if we spin the rotor (thereby changing θ) by using a prime mover, then we create an effective flux linkage change in the armature and thereby generate an e.m.f. The e.m.f. generated across the armature turn will have a waveform dependent upon the flux density distribution of Figure 9.11 (in fact the derivative of the flux waveform with respect to θ or time).


We now need to note well that a d.c. machine actually maintains an a.c. voltage waveform across the armature winding. The a.c. waveform in the winding is converted to and from d.c. (for connection to the outside world) through the action of the mechanical rectifier which we refer to as the commutator. Figure 9.12 schematically shows two elevations of a rotor (with only one turn in the armature), including the commutator.


Figure 9.12 - Horizontal and Vertical Elevations of a Rotor (Schematic)

The commutator is made up of a number of conducting segments and insulating segments. One end of each armature turn is connected to one end of each conducting commutator segment. The size of the insulating segments is smaller than the size of the brushes, so that when the brushes are across the insulating segments they are effectively short-circuiting the commutator (and the turn).


The commutator spins with the rotor while the brushes remain stationary. When the rotor (commutator) is in the position shown in Figure 9.12, the voltage across the turn (and the output voltage Va) is at its maximum or minimum value (as defined by the flux density in Figure 9.11). When the rotor is at θ = 90° from the original position, then the commutator is short-circuited by the brushes so the output voltage Va is zero. When the rotor is at θ = 180° from the original position, then the voltage across the turn has the opposite polarity - however, the position of the commutator has also effectively reversed the connection to the outside world. Therefore, although the voltage across the turn is changing polarity, the output voltage Va from the commutator has been rectified. The waveform is shown in Figure 9.13.


Figure 9.13 - Rectification Effect of Commutator

Although the output waveform shown in Figure 9.13 is d.c. it obviously contains a high degree of ripple. However, the more turns there are in the armature, and the more commutator segments, the smaller the ripple. The turns in a realistic machine are all connected together in series through the commutator segments to form a complete armature winding that produces little ripple in output voltage waveforms.

Another factor that affects the variation of flux density as the rotor spins is the number of magnetic poles within the stator. So far, we have only examined simplistic machines which have two poles. However, larger machines tend to have more poles so that there are more high-flux density regions in the air gap. Looking at Figure 9.11, which shows the variation of flux density with rotor position in a two pole machine, we see that a 360° change in θ (South-North-South) corresponds to a 360° change in the voltage waveform induced in the turn. However, if there were four poles in the machine of Figure 9.10, then a 360° change in θ (South-North-South-North-South) would result in two complete cycles of the output voltage waveform or 720° in an electrical sense. Therefore it can be said that the number of electrical degrees in a d.c. machine with "p" pairs of magnetic poles is "p" times the number of mechanical degrees.


The distance between the centres of adjacent magnetic poles in a d.c. machine is referred to as the pole-pitch of the machine. In the machine of Figure 9.10, the pole pitch (which corresponds to 180° in an electrical sense) is mechanically 180°. The pole pitch of the machine is a parameter that is used to characterise the angular distance between the sides of each turn (or coil) in the armature of a machine. In other words, it relates the separation of conductors in the armature to the relative position of poles in the stator and is a factor that is used in machine design.

We have now seen that the combined operation of the rotor, stator field and commutator can act to produce a d.c. generator. However, we also know from the Lorentz equation that we can readily use the d.c. machine as a motor simply by applying a d.c. voltage to the armature winding and thereby producing a net torque (as shown in section 9.1 (xii)). It is the motoring characteristics that we shall be most concerned with in this book.

However, it is important to note, in terms of our previous discussions, the effects of induced armature voltage during motoring mode. At first glance, one might expect that when the armature winding on a d.c. machine is energised, there will be no voltage induced in the armature. Of course, this is not the case and a close inspection of Figures 9.10 and 9.11 reveals that regardless of whether the armature windings are energised or not, there must always be an induced voltage in the windings if they are rotating. The voltage that is induced in the armature winding of a d.c. motor or generator, as a result of those windings being rotated through a non-uniform magnetic field, is referred to as the "back e.m.f." of the machine and is generally given the abbreviation Ea. At any given speed, the relationship between the back e.m.f. of a machine and its field current is the magnetisation curve for that machine and has the traditional form exemplified by many ferromagnetic B-H magnetisation curves. Once we have come to terms with the concept of "back e.m.f.", we can model the d.c. machine using normal electric circuit elements, as in Figure 9.14.

[Figure: (a) the armature winding is modelled as a back e.m.f. Ea in series with the armature resistance Ra and inductance La, with terminal voltage Va and armature current Ia; (b) the field winding is modelled as the field resistance Rf and inductance Lf, with field voltage Vf and field current If.]

Figure 9.14 - Electrical Model for d.c. Machine

In Figure 9.14 (a) we see that the basic elements included in the model are the back e.m.f. (Ea), the winding and brush resistance (Ra) and the winding inductance (La). For the field circuit, we include only the winding resistance (Rf) and winding inductance (Lf).


It should be evident that in the steady state, neither the armature winding nor field winding inductances will affect the performance of the machine. However, in all transient states (change of speed, current or load), these inductances can clearly have a marked effect on the relationship between the voltages and currents in the machine. Another magnetic phenomenon which impinges upon the electrical characteristics of realistic d.c. machines is "Armature Reaction" or "AR". In order to understand the concept of armature reaction, we can again look at the simplistic machine in Figure 9.10. This time however, we need to focus on the interaction between the magnetic fields generated by the rotor (armature) and the stator (field). These are shown in Figure 9.15.

[Figure: (a) Field (Stator), showing the field winding, the north and south poles and the direct and quadrature axes; (b) Armature (Rotor), showing the armature conductors at positions 1 and 2.]

Figure 9.15 - Magnetic Fields Caused By Field and Armature Windings

If the rotor in Figure 9.15 (b) is placed inside the stator in Figure 9.15 (a), and an armature current flows in the rotor, then we clearly need to contend with two distinct magnetic fields that will interact with one another. Assuming that the rotor and stator are in the orientation shown in Figure 9.15, we can see that the rotor field at "1" will directly oppose the stator field at the top end of its north pole (in other words the effective m.m.f. at the pole is reduced). Conversely, the rotor field at "2" will enhance the field at the bottom end of the stator's north pole (effective m.m.f. at the pole is increased). The reverse situation applies at the stator's south pole. One might be drawn to the conclusion that the net enhancement of flux at one end of a pole will be cancelled out by a net reduction at the other end of that pole - however this is not necessarily the case. In general there is an overall reduction in flux at that pole. Although the physical system is symmetrical, the magnetic system is asymmetric due to the saturation effects that are exemplified by the magnetisation curve. The effect at the north pole of the system in Figure 9.15 is shown in the magnetisation curve of Figure 9.16 for the north pole.


[Figure: magnetisation curve of flux density (B) against effective m.m.f. at the north pole, marking the original m.m.f. (without armature field, flux density Bo), the m.m.f. at the bottom end of the north pole (with armature field, flux density Be) and the m.m.f. at the top end of the north pole (with armature field, flux density Br).]

Figure 9.16 - Effect of Armature Field on Flux Density at North Pole of a 2-Pole Machine

Look closely at both Figures 9.15 and 9.16. We can see that even if we assume that the effective increase in m.m.f. at the bottom half of the north pole (due to armature field) is the same as the effective decrease in m.m.f. at the top half of the north pole, the changes in flux density are not. If the magnetic material is near saturation, then the flux density, Be, due to field enhancement is close to the original value (where no armature current exists). Conversely the flux density, Br, due to field reduction may be substantially lower than the value where no armature current is applied. The effect of armature reaction is to decrease the magnetic flux density at magnetic poles within machines. It can be counterbalanced by providing a higher field current that will increase the field at the pole. For this reason, the effect of armature reaction is measured in terms of the amount of additional field current required to generate the same back e.m.f. that would be generated when no armature current flows. Conversely, we say that for any given armature current, the effective field current is diminished by the value of armature reaction current:

$I_{f\,effective} = I_{f\,measured} - I_{f\,AR}$ ...(12)


We now know that e.m.f. is affected by the value of field current, but we also know that it must be related to the rotational speed of the machine. In section 9.1, we saw how Faraday's law could be applied to determine the e.m.f. in a coil or winding that experienced a changing flux linkage. Applying this law to d.c. machines provides us with a simple relationship for the average value of back e.m.f.:

$E_a \propto \phi\,\omega$  or  $E_a = K_a\,\phi\,\omega$ ...(13)

where $\phi$ is the flux per pole in the machine, $\omega$ is the rotational speed of the armature and $K_a$ is referred to as the "armature constant" for the d.c. machine. The armature constant is a term which arises regularly in d.c. machine theory and its origin has no significance other than the fact that it is composed of a number of related geometrical machine constants that are grouped together when applying Faraday's law. The armature constant is defined as follows:

$K_a = \dfrac{N\,p}{\pi\,a}$ ...(14)

where "N" is the number of turns in the armature winding, "p" is the number of poles in the machine and "a" is the number of parallel windings in the armature. If we apply Lorentz force calculations to an armature winding within a d.c. machine, we can determine the torque (T) produced in motor mode or the torque required in generator mode:

$T \propto \phi\,I_a$  or  $T = K_a\,\phi\,I_a$ ...(15)

Even without analysis, one would intuitively expect this form of relationship, because the force acting on any conductor in the armature will be determined by the size of the current in the armature and the size of the field in which the armature conductor is placed. It is interesting to note however that the armature constant arises in the torque equation as well as the e.m.f. equation.
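To put some assumed numbers to equations (13) and (15), consider a machine for which the product $K_a\,\phi$ is 0.8 V·s/rad (equivalently 0.8 N·m/A). Running at 1500 rev/min (157 rad/s) it generates a back e.m.f. of $E_a = 0.8 \times 157 \approx 126$ V, and an armature current of 20 A produces a torque of $T = 0.8 \times 20 = 16$ N·m. Note that $T\,\omega \approx E_a\,I_a \approx 2.5$ kW, which anticipates the power balance of equation (16) below.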


Equations (13) and (15) can be combined by virtue of the fact that the mechanical power input is largely converted into electrical output or vice versa:

$T\,\omega = E_a\,I_a$ ...(16)

Equations (15) and (16) essentially summarise the complete electromechanical energy conversion process in d.c. machines. With these equations and the theory presented earlier, we now have a reasonable understanding of the basic magnetic and electrical constructs within the d.c. machine. We can therefore move on to examine the way in which this theory is used to provide operational machines. In fact, one of the most interesting features of d.c. machines is the way in which their characteristics can be altered simply by varying the way in which we connect the field and armature windings. There are essentially four possible combinations for a d.c. machine:

- Separately Excited (the field and armature windings are each connected to their own separate supplies)
- Series (the field and armature windings are connected together in series so that the field and armature currents are always the same)
- Shunt (the field and the armature windings are connected together in parallel so that the terminal voltage across them is the same)
- Compound (one part of the field winding is connected in series with the armature and the other part of the field winding is connected in parallel across the armature winding)

We shall see how these machines perform in sections 9.2.3 - 9.2.6.

9.2.3 Separately Excited d.c. Machines


A novice in the field of electrical machines may intuitively expect that all d.c. machines have separate power supplies for the field and armature, since on the surface the roles of these windings are totally different. In practice, however, only a small proportion of d.c. machines have separate electrical excitation for the armature and field windings, although permanent magnet d.c. machines can also be considered as separately excited machines with constant field excitation (eg: a dynamo). For both types of machine, acting in generator mode, the armature is rotated by a prime mover in order to provide a d.c. output voltage through the commutator. This is shown in Figure 9.17. Note that when we discuss generators, it is conventional to show positive current (Ia) as flowing out of the armature into a load.


Figure 9.17 - Separately Excited d.c. Generator Driven by a Prime Mover

In the steady-state, the effects of armature and field inductance are ignored and the basic electrical relationships are derived from simple circuit theory:
Ea = Ka Φ ω
Va = Ea - Ra Ia    ...(17)
Vf = Rf If

These relationships are used to construct the so-called "external characteristic" of the machine. The external characteristic of a generator plots the terminal voltage against the terminal (load) current for the machine. In this case the terminal voltage is taken to be the armature voltage and the terminal current is the armature current, so that the characteristic is a simple linear relationship defined by equation (17). In Figure 9.18, the separately excited machine is shown in motoring mode, where it is conventional to show armature current flowing into the armature winding.

Figure 9.18 - Separately Excited d.c. Motor with Load of Torque "T"


In motoring mode, the following circuit relationships are appropriate in order to examine the external mechanical characteristic of the machine:

Ea = Ka Φ ω

Va - Ia Ra = Ea = Ka Φ ω

T = Ka Φ Ia

ω = Va/(Ka Φ) - (Ra/(Ka Φ)²) T    ...(18)

Equation (18) shows that for a constant terminal voltage (Va) and flux (Φ), the relationship between speed and torque is linear. The equation also shows that the speed of the machine, at a given torque, can be changed by varying the terminal voltage, flux or total armature winding resistance. Conversely, if the mechanical load on the machine (T) changes, then we need to change either the terminal voltage, flux (through field current) or armature resistance to maintain a constant speed.
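The external characteristic of equation (18) is easy to tabulate numerically. The C sketch below sweeps the load torque and prints the corresponding steady-state speed; the grouped quantity KaΦ and the circuit values are assumed, illustrative figures only.

```c
#include <stdio.h>

/* Speed-torque relationship for a separately excited d.c. motor,
 * from equation (18): omega = Va/(Ka*flux) - Ra*T/(Ka*flux)^2
 * Parameter values are assumed, for illustration only.
 */
int main(void)
{
    double Va   = 240.0;  /* terminal (armature) voltage, V (assumed) */
    double Ra   = 0.5;    /* armature resistance, ohms (assumed)      */
    double Kphi = 1.5;    /* grouped constant Ka*flux, V.s (assumed)  */

    for (double T = 0.0; T <= 100.0; T += 20.0) {
        double omega = Va / Kphi - (Ra * T) / (Kphi * Kphi);
        printf("T = %5.1f Nm  ->  omega = %6.1f rad/s\n", T, omega);
    }
    return 0;
}
```

The linear droop of speed with increasing torque, at constant armature voltage and flux, is immediately visible in the printed results.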

9.2.4 Series d.c. Machines


Series d.c. machines are commonly used as traction motors for electric trains, trams and trolley buses, and so speed control is an important issue for these machines when they are used in motoring mode. However, we commence our examination by looking at the d.c. series generator which is shown in Figure 9.19.

Figure 9.19 - Series d.c. Generator Driven by a Prime Mover


In order to obtain the electrical terminal characteristics of the generator, we again assume that, in the steady state, the armature and field inductances can be ignored. We also note that in the series machine, the terminal voltage and the armature voltage are not the same, but that the field, armature and load currents are identical. The following relationships then arise:

Ea = Ka Φ ω

Ea - (Ra + Rf) Ia = Vt    ...(19)

At first glance, equation (19) might suggest a linear relationship between terminal voltage and current - however this is not the case. In a series machine, the field current and the armature current are identical, so that if the armature current changes, so too does the field current, the flux and hence the back e.m.f. of the machine. In order to obtain the terminal characteristic, we need to have the magnetisation curve for the machine. This tells us how flux density (hence flux and back e.m.f.) varies with field current (hence armature current) at a given machine speed. We could then plot the terminal characteristic for the machine. We can also make the assumption of linearity, where we say that the machine is operated in the region where flux (Φ) is proportional to field current (If) and obtain an approximate characteristic for the machine. Figure 9.20 shows a series d.c. machine configured as a motor and connected to a mechanical load of torque "T".

Figure 9.20 - Series d.c. Motor Connected to a Mechanical Load

We again apply simple electrical circuit theory, together with the torque equation (15), to determine the external mechanical characteristic for the d.c. series motor:

Vt - (Ra + Rf) Ia = Ea = Ka Φ ω

T = Ka Φ Ia    ...(20)


If we make the assumption that the motor operates in the linear region of the magnetisation curve for the stator's ferromagnetic material, then we can say that:

Φ ∝ If

and since If = Ia, the flux in the machine is (to a first order approximation) proportional to the armature current:

Φ = K′ Ia

Substituting this into the torque equation (15), we find that:

T = Ka Φ Ia = Ka K′ Ia² = K Ia²    ...(21)

where K′ is a constant of proportionality that relates field (hence armature) current to flux in the core within the linear region of operation, and K = Ka K′. Using these results we can combine the expressions with equation (20) as follows:

ω = Vt/(Ka Φ) - (Rf + Ra) Ia/(Ka Φ)

ω = Vt/(K Ia) - (Rf + Ra)/K

ω = Vt/√(K T) - (Rf + Ra)/K    ...(22)

Equation (22) shows that the speed of the motor is inversely proportional to the square root of the torque. It is an interesting relationship because it implies that if the mechanical load is removed from the d.c. series motor then the motor will theoretically accelerate towards infinite speed. In practice, a series d.c. motor without a mechanical load will eventually destroy itself. Another factor that is evident from equation (22) is that, for a constant torque, we can vary the speed of the motor through its terminal voltage or through the total resistance in the armature and field circuit. In other words, for a given torque, we can (for example) decrease motor speed by adding resistance to the motor circuit or by lowering the terminal voltage.
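A minimal sketch of the approximate series-motor characteristic of equation (22) is given below in C. The grouped constant K = KaK′ and the circuit values are assumed, illustrative figures; note how the computed speed grows without bound as the load torque approaches zero, which is the runaway condition just described.

```c
#include <stdio.h>
#include <math.h>

/* Approximate speed-torque curve for a series d.c. motor, from
 * equation (22): omega = Vt/sqrt(K*T) - (Rf + Ra)/K
 * where K = Ka*K' groups the machine constants under the linearity
 * assumption.  All numbers are assumed, for illustration only.
 */
int main(void)
{
    double Vt = 240.0;   /* terminal voltage, V (assumed)            */
    double Rf = 0.3;     /* field winding resistance, ohms (assumed) */
    double Ra = 0.4;     /* armature resistance, ohms (assumed)      */
    double K  = 0.05;    /* grouped constant Ka*K' (assumed)         */

    for (double T = 5.0; T <= 100.0; T += 5.0) {
        double omega = Vt / sqrt(K * T) - (Rf + Ra) / K;
        printf("T = %6.1f Nm  ->  omega = %7.1f rad/s\n", T, omega);
    }
    return 0;
}
```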


9.2.5 Shunt d.c. Machines


In the series d.c. machine, the most pronounced characteristic arises in motoring mode where the machine can theoretically accelerate to infinite speed when there is no applied mechanical torque. In the shunt d.c. machine, it is perhaps the generator mode which provides the most interesting characteristics. A shunt d.c. generator is shown in Figure 9.21.

Figure 9.21 - Shunt d.c. Generator (Self-Excited)

The shunt generator is interesting because, unlike other d.c. machine configurations, there is no current supplied to the field unless there is a voltage present across the armature winding. There is no voltage present across the armature winding unless there is a back e.m.f. generated in the rotor and there is no back e.m.f. generated unless there is a field present in the machine. It would appear then that the shunt generator can never function - however this is not the case in practice. In a realistic machine, even with no field current applied, there still exists some residual magnetism in the stator, leading to a small magnetic field between poles. This is enough to generate a small back e.m.f. which then leads to an increase in armature voltage and thereby an increase in field current. As field current increases, armature voltage increases and hence field current again increases. This self-excitation continues until a steady state operating point is reached where the back e.m.f is defined by:
Ea = Ka Φ ω

We also know that for the shunt machine, the terminal voltage is the same as the armature voltage and that the load current is the difference between the armature current and the field current. Ignoring inductance in the steady state, the following relationships apply:


IL = Ia - If
Va = Ea - Ra Ia
Ea = Ka Φ ω
If = Va / Rf

Va = Ea - Ra (IL + Va/Rf)

Va = [Rf/(Rf + Ra)] Ea - [Ra Rf/(Rf + Ra)] IL    ...(23)

The actual electrical characteristic relationship is in effect quite complex because back e.m.f. depends upon flux (ie: field current); field current depends upon armature voltage and armature voltage depends upon armature current which depends upon load current. Figure 9.22 shows the shunt d.c. machine configured as a motor and connected to a mechanical load of torque "T".

Figure 9.22 - Shunt d.c. Motor Connected to Mechanical Load of Torque "T"

In order to obtain the external mechanical characteristic of the shunt motor for steady-state operation, we again ignore armature and field inductance and appeal to the same basic relational equations used above:

T = Ka Φ Ia

Va - Ia Ra = Ea = Ka Φ ω


ω = Va/(Ka Φ) - (Ra/(Ka Φ)²) T

If we assume that the motor is operated in the linear region of its magnetisation curve, then we say that the flux (Φ) is proportional to the field current:

Φ = K′ If

We then have the following external characteristic:

ω = Va/(K If) - (Ra/(K If)²) T

where

K = Ka K′

However, in a shunt machine:

If = Va / Rf

so therefore

ω = Rf/K - (Ra/(K If)²) T    ...(24)

From equation (24) we can see that for a constant output torque, the speed of the shunt d.c. machine can be varied by altering the field current. This can be achieved by altering the total resistance in the field circuit. Since the winding resistance is fixed, an additional (variable) resistance has to be used to change the field current. The other factor that is of significance in the shunt machine is its theoretical ability to reach infinite speed if the field current is removed, whilst the mechanical load is a constant. This is a destructive condition that is analogous to running a series d.c. machine with no mechanical load. The other feature that is apparent from equation (24) is that since the field current is proportional to the armature voltage, then by changing the armature (therefore terminal) voltage of the machine, we can also vary the speed of the machine. Major speed changes are normally made by armature voltage control and adjustments from a set speed are made by weakening the field (through increased field resistance).
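The field-weakening behaviour described by equation (24) can be illustrated with the short C sketch below, which sweeps the total field-circuit resistance at a fixed load torque. The machine constants and the load value are assumed figures, for illustration only.

```c
#include <stdio.h>

/* Effect of field-circuit resistance on shunt motor speed, using the
 * approximate characteristic of equation (24):
 *     If    = Va / Rf
 *     omega = Rf/K - Ra*T/(K*If)^2
 * All parameter values are assumed, for illustration only.
 */
int main(void)
{
    double Va = 240.0;   /* armature (terminal) voltage, V (assumed) */
    double Ra = 0.5;     /* armature resistance, ohms (assumed)      */
    double K  = 0.8;     /* grouped constant Ka*K' (assumed)         */
    double T  = 40.0;    /* constant load torque, Nm (assumed)       */

    /* Sweep the total field-circuit resistance (winding plus added
     * rheostat) and observe the change in steady-state speed. */
    for (double Rf = 150.0; Rf <= 250.0; Rf += 25.0) {
        double If    = Va / Rf;
        double omega = Rf / K - (Ra * T) / ((K * If) * (K * If));
        printf("Rf = %5.1f ohm, If = %4.2f A  ->  omega = %6.1f rad/s\n",
               Rf, If, omega);
    }
    return 0;
}
```

Increasing the field-circuit resistance weakens the field and raises the steady-state speed, which is exactly the fine speed adjustment mechanism described above.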


9.2.6 Compound d.c. Machine Configurations


The field poles in a d.c. machine can be excited by two sets of field windings which can be externally interconnected to produce a range of different machines. One set of field windings can be used as a shunt field and the other set as a series field. In machines with two such sets of windings, the shunt windings are normally much larger (ie: have many more turns) and therefore provide a field circuit that takes only a small percentage of the current in the armature circuit. Series windings on the other hand, as the name implies, take the same current as the armature but have fewer turns than the shunt windings. When both series and shunt windings are excited in a d.c. machine then the machine is said to be running in a compound configuration.

There are clearly two methods of combining the shunt and series windings in a d.c. machine. The first is to connect the shunt winding directly across the armature circuit and then place the series winding in series with the shunt combination. This is known as a "short shunt" machine. The second alternative is to place the shunt winding across both the series winding and the armature winding. This is known as a "long shunt" machine. The two configurations are shown in generator form in Figure 9.23.

Figure 9.23 - (a) Short and (b) Long Shunt Compound Generator Configuration


Depending on the polarity of the connection to the series and shunt windings it should also be evident that the two windings can either produce m.m.f.s which enhance each other or oppose each other. The former is known as a cumulative compound machine and the latter is known as a differential compound machine. In compound d.c. machines, the shunt field provides the bulk of the stator m.m.f. and so the characteristics of the compound machine should predominantly be those of the shunt machine, with deviations caused by the effect of the series winding. The series winding in a compound machine is essentially only present as a "loss compensation" winding that serves to provide additional m.m.f. to overcome terminal voltage drops due to armature resistance and field current diminution due to armature reaction. The net effect of the series winding is to create a machine which is capable of providing (in cumulative mode) a regulated terminal voltage under varying electrical load.

To examine the electrical characteristic for the compound machine, in the steady state, we assume that the inductance in the field and armature can be ignored. In the following analyses, the subscript "p" refers to the parallel (shunt) winding and the subscript "s" refers to the series winding. Taking the long shunt machine as an example, we have the following relationships:

Ea - Ia (Ra + Rfs) = Vt    ...(25)

Ea = Ka Φ ω

If we assume that the machine is operated in the linear region of its magnetisation curve, then the flux in the machine (Φ) is proportional to the effective field current (If) in the stator. In order to determine the effective field current, we really need to work in terms of the m.m.f.s generated by each of the field components. Ignoring armature reaction, we find the following relationships:

Nfp If = Nfp Ifp ± Nfs Ifs

If = Ifp ± (Nfs/Nfp) Ifs    ...(26)

where Nfp and Nfs are the number of turns in the parallel and series windings respectively and Nfp >> Nfs. Ifs is the field current in the series winding. In the case of the long shunt machine, this is the same as the armature current - in the case of the short shunt machine Ifs is the load current. The "±" relationship depends upon whether the fields are cumulative or differential, respectively.


The long shunt machine is shown operating as a motor in Figure 9.24, where it is connected to a load of torque "T". Note the different direction of armature current flow that occurs in changing from generator mode to motor mode. It means that if we use the same connection polarity to the series field windings then a differential compound generator becomes a cumulative compound motor and vice versa.

Figure 9.24 - Long Shunt d.c. Motor Connected to Load of Torque "T"

In order to determine the external mechanical characteristic for the long shunt d.c. motor we assume that armature and field inductances have no influence in the steady state and then apply the normal relational equations. As with other machines we also assume linearity in operation so that the flux is proportional to the effective field current. The analysis is as follows:

Vt - Ia (Ra + Rfs) = Ka Φ ω

Let Rx = Ra + Rfs, so that:

Vt - Ia Rx = Ka Φ ω

Assuming the machine is cumulative, then the effective field current in the machine is given by:

If = Ifp + (Nfs/Nfp) Ifs

If = Ifp + x Ifs = Ifp + x Ia

where x = Nfs/Nfp and, for the long shunt motor, Ifs = Ia.


Φ = K′ If = K′ (Ifp + x Ia) = K′ (Vt/Rfp + x Ia)

Vt - Ia Rx = Ka K′ ω (Vt/Rfp + x Ia)

Letting K = Ka K′ and solving for the armature current:

Ia = Vt (1 - K ω/Rfp) / (K ω x + Rx)    ...(27)

Substituting the expression for armature current into the relationship between torque, flux and armature current gives us the required relationship between torque and speed:

T = Ka Φ Ia = K Ia (Vt/Rfp + x Ia)

T = [K Vt (1 - K ω/Rfp)/(K ω x + Rx)] [Vt/Rfp + x Vt (1 - K ω/Rfp)/(K ω x + Rx)]    ...(28)

Clearly a complex relationship exists between the speed and torque in a compound cumulative motor and a similar relationship applies for the differential motor (making allowances for plus and minus signs in the relevant equations).
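Even though equation (28) is awkward to interpret by inspection, it is straightforward to evaluate numerically. The C sketch below computes the armature current from equation (27) and the corresponding torque at a few speeds for a cumulative, long shunt motor; every parameter value is an assumed, illustrative figure.

```c
#include <stdio.h>

/* Torque-speed points for a cumulative, long-shunt compound motor,
 * using equations (27) and (28): the armature current is found from
 * (27) and the torque from T = K*Ia*(Vt/Rfp + x*Ia).
 * All parameter values are assumed, for illustration only.
 */
int main(void)
{
    double Vt  = 240.0;  /* terminal voltage, V (assumed)              */
    double Rfp = 200.0;  /* parallel (shunt) field resistance, ohms    */
    double Ra  = 0.4;    /* armature resistance, ohms (assumed)        */
    double Rfs = 0.1;    /* series field resistance, ohms (assumed)    */
    double K   = 0.8;    /* grouped constant Ka*K' (assumed)           */
    double x   = 0.02;   /* series/shunt turns ratio Nfs/Nfp (assumed) */
    double Rx  = Ra + Rfs;

    for (double w = 100.0; w <= 250.0; w += 50.0) {
        double Ia = Vt * (1.0 - K * w / Rfp) / (K * w * x + Rx);  /* (27) */
        double T  = K * Ia * (Vt / Rfp + x * Ia);                 /* (28) */
        printf("omega = %5.1f rad/s  ->  Ia = %6.1f A, T = %6.1f Nm\n",
               w, Ia, T);
    }
    return 0;
}
```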


9.2.7 Basics of Speed Control


In sections 9.2.3 to 9.2.6 we derived expressions for the relationships between torque and speed in the various configurations of d.c. machines. It is evident from the analysis of the different motor types discussed in these sections that there are a number of electrical mechanisms that can be used to control the speed of a d.c. motor. These include:

Control of terminal or armature voltage through a variable external supply
Control of field current
Control of armature current.

Each configuration of d.c. motor has its own unique range of mechanical characteristics and each has a common method by which speed is normally varied in order to achieve a desired result. The simplest techniques for motor speed control involve the variation of resistance in either the field or armature circuits of the motor in order to vary the corresponding current. This can be achieved via:

Manual or automated rheostats
Manual switched-resistance boxes
Electromagnetic switching of discrete resistances by "contactors".

These techniques are amongst the earliest and most reliable forms of d.c. motor speed control and are still in use today - however, it should be self-evident that they contribute to unwanted energy losses and heating in d.c. motor systems.

The control of terminal (or armature) voltage through variation of the supply voltage can be simple if voltage variation occurs through a variable resistance between the supply and the motor. However, this is somewhat crude and generates energy losses through resistive heating. The more common, modern approach is to use a pulse width modulator (PWM) IC with power electronics to switch a steady d.c. power supply waveform on and off. This generates a square waveform with a lower average voltage than the original. Varying the duty cycle (the ratio of "on time" to total switching period) in the switching process can provide an average voltage from zero up to the original supply level.

One may well ask how a square wave voltage applied to a d.c. machine would affect its operation. In practice, the relevant machine circuit inductances (field and/or armature) smooth the waveform and the machine can operate as normal.
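The duty cycle relationship is trivial to compute; the C sketch below simply tabulates the average armature voltage delivered by such a chopper for a few duty cycles, assuming an illustrative 240 V d.c. supply.

```c
#include <stdio.h>

/* Average output voltage of a PWM "chopper" as a function of duty
 * cycle.  The switched waveform toggles between the supply voltage and
 * zero, so its average value is duty * supply, which is what the motor
 * effectively sees once the winding inductance has smoothed the
 * waveform.  Values are illustrative only.
 */
int main(void)
{
    double supply = 240.0;   /* d.c. supply voltage, V (assumed) */

    for (double duty = 0.0; duty <= 1.0001; duty += 0.25) {
        double v_avg = duty * supply;   /* average armature voltage */
        printf("duty cycle = %3.0f %%  ->  average voltage = %5.1f V\n",
               duty * 100.0, v_avg);
    }
    return 0;
}
```

In a practical drive the switching period is normally chosen to be much shorter than the armature electrical time constant (La/Ra), so that the current ripple caused by the square waveform remains small.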


These control techniques are not only important for general speed variation during normal motor operation, but are also relevant to the starting of d.c. motors. We already know that the back e.m.f. of a motor depends on its rotational velocity and therefore at "start", when the velocity of the motor is zero, the back e.m.f. of the machine is also zero. This means that if full terminal voltage is applied to the armature at start up, then an excessive current can flow until the motor speed builds up enough back e.m.f. to reduce that current. If the starting current reaches a sufficiently high level then the windings on the machine can be damaged. The two alternative mechanisms for reducing starting current in d.c. motors are relevant to our discussions on speed control and so are introduced at this point.

The first alternative is to provide a limiting resistance in series with the motor windings that will reduce the level of starting current. The limiting resistance is gradually switched out as the motor speed increases. In its manual form, this type of arrangement is simply referred to as a "d.c. starter" and is realised through a mechanical lever that switches out resistors as it is moved. This sort of device has been used in series motors (such as those used in trolley/tram cars for most of this century). A more modern version uses electromechanical contactors to switch in and out resistors and can be automated through simple computer control.

The second alternative in motor starting is to gradually increase the terminal voltage to the rated level as speed (and hence back e.m.f.) increases. This can be done by either of the two voltage control techniques described earlier.

In terms of speed variation during normal motor operation, we can best examine how this can be effected for each type of d.c. motor by examining some of the torque-speed characteristics calculated in 9.2.3 - 9.2.6. In the analyses that follow, we use the same terminology and labelling that was introduced in those earlier sections. We again neglect the effects of armature reaction and so our analyses are only approximate, but nevertheless adequate for our purposes.
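The starting problem can be quantified with a small calculation. The C sketch below compares the direct-on-line starting current with the current obtained when a series starting resistance is sized (in this illustration) to limit the starting current to roughly twice the rated value; all machine figures are assumed.

```c
#include <stdio.h>

/* Starting current of a d.c. motor with and without a series starting
 * resistance.  At standstill the back e.m.f. is zero, so the armature
 * current is limited only by the circuit resistance:
 *     Ia(start) = Va / (Ra + Rstart)
 * All values are assumed, for illustration only.
 */
int main(void)
{
    double Va     = 240.0;  /* applied armature voltage, V (assumed) */
    double Ra     = 0.5;    /* armature resistance, ohms (assumed)   */
    double Irated = 30.0;   /* rated armature current, A (assumed)   */

    double i_direct  = Va / Ra;                   /* direct-on-line start   */
    double Rstart    = Va / (2.0 * Irated) - Ra;  /* limit to ~2x rated     */
    double i_limited = Va / (Ra + Rstart);

    printf("Direct start current      : %6.1f A (%4.1f x rated)\n",
           i_direct, i_direct / Irated);
    printf("Added starting resistance : %6.2f ohm\n", Rstart);
    printf("Limited start current     : %6.1f A (%4.1f x rated)\n",
           i_limited, i_limited / Irated);
    return 0;
}
```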

(a) Separately Excited d.c. Motor

If we carry out a simple rearrangement of equation (18), we can see that the relationship between speed and torque in a separately excited d.c. motor is relatively straightforward:

T = (Ka Φ/Ra) Va - ((Ka Φ)²/Ra) ω

We could also have made the assumption of magnetic circuit linearity (as we did with analyses for other machines) and have arrived at the following relationship:


T = (K If/Ra) Va - (K² If²/Ra) ω    ...(29)

Using the derived equation, (29), we can plot the way in which the torque-speed characteristic is affected by changes in either armature (terminal) voltage or field current. This is shown in Figure 9.25 (a) and (b).

Figure 9.25 - Variation of Separately Excited Motor Torque-Speed Characteristic through (a) Field Current Control; (b) Armature Voltage Control

Field current control can be most simply realised by placing variable resistance in the field winding or alternatively through a variable d.c. voltage supply for the field. Armature voltage control can be realised through a variable d.c. voltage supply. A third mechanism for varying the torque-speed characteristic of the separately excited motor is by varying the total resistance of the armature winding. This is evident from the dependence of torque upon armature resistance Ra defined in equation (29). By adding an additional, variable resistance (Rp) into the armature circuit, whilst maintaining constant terminal voltage and field current we can also vary the motor characteristic. Figure 9.25 is somewhat misleading because of its scale, which actually goes well outside the normal operating range in terms of torque. In fact, separately excited and shunt motor configurations provide an essentially constant torque output over their normal operating range.


(b) Series Motor

The torque-speed characteristic for a series d.c. motor can be determined from a simple rearrangement of equation (22):

T = K Vt² / (K ω + Rf + Ra)²    ...(30)

From equation (30), it is clear that there are two mechanisms for varying the torque-speed characteristic of such a motor. The first is to vary the terminal voltage, as previously described, and the second is to vary the total resistance in the combined armature and field circuit. This is achieved by adding an additional variable resistance (Rs) into the series circuit. The effect of the two mechanisms on the characteristic curve is shown in Figure 9.26 (a) and (b).

Figure 9.26 - Variation of Series Motor Torque-Speed Characteristic through (a) Terminal Voltage Variation; (b) Circuit Resistance Variation

Regardless of the mechanism used for speed control in series motors, it is evident that their major advantage is a high (in fact maximum) torque development at start (zero speed). This is to be expected since the back e.m.f. of the machine is zero at start and hence both field and armature current are at their maxima.


(c) Shunt Motor

The torque-speed characteristic for the shunt motor can be determined from a rearrangement of equation (24):

T = (K Rf/Ra) If² - (K²/Ra) If² ω    ...(31)

We also know that the field current in a shunt d.c. machine is simply the armature voltage divided by the total field resistance, so the torque-speed characteristic is also dependent upon terminal (armature) voltage:

T = (K/(Rf Ra)) Va² - (K²/(Rf² Ra)) Va² ω    ...(32)

From equations (31) and (32), we can see that the torque-speed characteristic for a shunt motor is linear provided that armature voltage and field current are kept constant. We can vary the field current by changing the total resistance of the field winding - that is, by adding a variable resistance to selectively lower the field current. This provides a mechanism for accurately fine-tuning the speed over a small range. Major changes in speed are effected by variation of the armature voltage. The effect of the two mechanisms upon the torque-speed characteristic is shown in Figure 9.27 (a) and (b).

Figure 9.27 - Variation of Shunt Motor Torque-Speed Characteristic through (a) Field Current Control; (b) Armature Voltage Control


(d) Compound Motor

The torque-speed characteristic for the compound motor is somewhat complex, even given the assumptions we have made (neglecting armature reaction, assuming linear operation, etc.). This was described by equation (28):

T = [K Vt (1 - K ω/Rfp)/(K ω x + Rx)] [Vt/Rfp + x Vt (1 - K ω/Rfp)/(K ω x + Rx)]

Plotting the relationship between torque and speed should provide a characteristic which is intermediate between the shunt and the series motor. Upon examination of equation (28), we find that the basic mechanisms for speed control are:

Variation of total armature resistance
Variation of terminal voltage
Variation of total parallel field resistance.

A comparison of typical motor characteristics is shown in Figure 9.28, illustrating performance of differing motor configurations over normal operating ranges.

Figure 9.28 - Comparative Torque-Speed Characteristics for Different d.c. Motor Configurations


In the final analysis, it should be evident that any variation of motor speed or torque for any d.c. motor can only be effected by changing the magnitude of the current in either the field or the armature winding. Depending upon the machine configuration, this can be achieved through the addition of a variable resistance in the relevant winding or by variation of the voltages applied to the field and armature windings. This can be achieved by either analog (traditional) or digital electronic circuits.

All of the speed control techniques have limitations. For example, variation of field resistance in a shunt machine provides accurate speed control over a limited range. However, excessive weakening of the field, combined with the effect of armature reaction, can ultimately cause a machine to become unstable. On the other hand, in a series machine, excessive armature circuit resistance control leads to high resistive losses, which are manifested as overall system heating.

Another, rather outmoded technique for d.c. motor speed control is the so-called "Ward-Leonard" system. In this system, a prime mover is used to drive a d.c. generator, which in turn provides the armature voltage for a d.c. motor. The armature voltage output of the generator (and hence armature voltage to the coupled motor) is controlled by varying the field current to the generator. If the d.c. motor in the Ward-Leonard system is separately excited and its field current can be varied then the motor has both terminal voltage and field current control. This was one of the system's main advantages in its inception. The obvious disadvantage is that three separate machines are required to drive a single motor, so the system is both large and costly.

The need for a Ward-Leonard arrangement was effectively eliminated by the introduction of power electronics capable of "chopping" (switching on and off) a constant d.c. waveform into a rectangular waveform whose average value is lower than the original level. This technique represents the modern, energy-efficient way of controlling motor speed by terminal voltage variation. In subsequent sections of this text, we will look, in detail, at the means by which this form of d.c. motor control can be implemented with modern processing techniques for use in modern servo-drive systems.


9.3 Fundamentals of a.c. Synchronous Machines


9.3.1 Introduction to Synchronous Machines
The synchronous machine is most commonly associated with large, regional power supply systems and is predominantly used in its generator mode. It is traditionally also associated with three-phase power circuits in both generator and motor mode. However, on a smaller scale, permanent (rare-earth) magnet synchronous machines are also used in motoring mode for servo control applications in CNC machines and robots.

There are many factors that influence the overall efficiency of a power supply and distribution system, and the methods of generation and transmission are certainly amongst the most important. In the early parts of the twentieth century, as a result of then insurmountable technical barriers associated with switching, transformation and transmission of d.c. voltages, it was generally decided that a.c. generation and transmission were the optimal solution for power supply systems. This was essentially despite the efforts of Thomas Edison, who was, in fact, a major proponent of d.c. systems, and because of the efforts of Nikola Tesla, who strove to implement a.c. systems. Whilst Edison was able to demonstrate the concept of d.c. electric motors, Tesla arrived at the remarkable observation that a rotating magnetic field could be created by applying three sinusoidal voltage waveforms (separated by 120 electrical degrees) to three machine windings (separated spatially by 120 mechanical degrees), thus generating the basis for the first a.c. motor. Despite many attempts at discrediting the concept of a.c. power generation and transmission (Edison, it was said, went around the United States electrocuting animals with a.c. electricity just to show how dangerous it really could be!), it is self-evident that a.c. power transmission has proven to be a successful phenomenon.

It is also evident that, given modern semiconductor technology, many of the problems associated with d.c. transmission and transformation can now be overcome - however it is most unlikely that there are any short term prospects for large-scale d.c. power generation. Moreover, it has long been established that three-phase a.c. voltage generation produces a more efficient power system than single-phase power generation. As a result, we are left to ponder the use of a.c. devices such as the synchronous and asynchronous (induction) machines, which can efficiently utilise supplied energy in its raw form. In this text, we examine the use of the synchronous machine and induction machine in their roles as motors and their potential for servo-drive applications.


9.3.2 Physical and Magnetic Circuit Characteristics of Synchronous Machines


The synchronous machine, like the d.c. machine, is composed of a stator and a rotor - however, in the synchronous machine, the main windings are normally in the stationary (stator) part of the machine and the field windings are in the rotational (rotor) part of the machine. The field windings are brought out to terminals via a reliable slip-ring arrangement. There is no commutator involved in an a.c. machine and hence the need for short-lifespan brushes is eliminated, thus providing a considerable advantage.

Another point that normally confuses people, on their first introduction to synchronous machines, is the fact that although the stator (armature) voltage is a.c., the actual current (excitation) applied to the rotor (field) of the machine is in fact d.c. This should not really be surprising when one considers that the objective of a field winding is to replicate (in a variable form) the magnetic fields that would normally be produced by a permanent magnet - in other words a constant field.

A simplistic, single-phase synchronous generator configuration is shown in cross-section in Figure 9.29, with one closed coil in the stator. Compare this with the schematic of Figure 9.10 for the d.c. machine. In the orientations shown in Figure 9.10 and Figure 9.29, the flux distributions through the stator are essentially similar - however, in the d.c. machine, the flux distribution through the stator remains fixed regardless of rotor position while, in the a.c. machine, the flux distribution changes with rotor position. Also note that the rotor shown in Figure 9.29 is not cylindrical, and is referred to as a "salient-pole" rotor. A "salient-pole" rotor has very distinct (projecting) poles with concentrated windings. Synchronous machines can also have cylindrical rotors (non-salient-pole) that more closely resemble the structure of a d.c. machine.

Figure 9.29 - Schematic of Single-Phase Salient-Pole Synchronous Machine


In Figure 9.29, with the rotor orientation shown (θ = 0°), the flux coupled through the stator coil is equal to zero, since the flux lines are collinear with the coil. As the rotor moves anticlockwise to θ = 90°, the flux lines are perpendicular to the stator coil and the magnitude of the flux coupled through the coil is at a maximum. Moving the rotor to θ = 180° returns the flux coupled through the stator coil to zero again. Moving the rotor anticlockwise through to θ = 270° brings the flux coupled through the stator coil to its maximum magnitude, but with a direction opposite to that at θ = 90°. At θ = 360°, the rotor is at its original position and the flux coupled through the stator winding has returned to zero. This is shown in Figure 9.30 (a).

Figure 9.30 - (a) Flux Distribution as a Function of Rotor Position (b) Shaping Rotor Ends to Approximate a Sinusoidal Distribution (c) Induced Voltage in Stator Coil as a Result of Rotor Movement


The shape of the flux distribution, as a function of angular position (θ), is actually dependent upon the shape of the rotor ends. We now know that the flux linked through the stator coil changes from zero to positive maximum, to zero, to negative maximum and back to zero as the rotor's angular position moves through 360 mechanical degrees. It should also be clear that if the ends of the rotor can be suitably shaped, then the flux distribution, as a function of angular position, can be made to approximate a sinusoidal waveform as shown in Figure 9.30 (b) and the generated armature voltage (per phase) can be determined from the e.m.f. equation (11). If the rotor is turned at a uniform speed within the stator, then the angular position (θ) is clearly proportional to time. Faraday's law tells us that the voltage induced in the stator coil will be the derivative of the flux waveform with respect to time and so, Figure 9.30 (c) shows that an approximately sinusoidal output voltage arises from the synchronous machine. Figure 9.30 (c) also highlights the fact that in a machine with 1 pole-pair, 360 mechanical degrees (2π radians) are equivalent to 360 electrical degrees. Many synchronous machines have more than 1 pole-pair. If a machine had, say, two pole-pairs, then for each complete 360 mechanical revolution of the rotor, the induced stator voltage would complete two electrical cycles because the flux would change polarity for every 90 mechanical degrees of rotor movement. It is therefore evident that:

Number of electrical Degrees = P x Number of Mechanical Degrees


and

Electrical Frequency = P x Mechanical Frequency


where P is the number of pole-pairs in a synchronous machine. It was earlier stated that pole-pairs in synchronous machines can either be generated through distinct (salient) structures, or through the traditional cylindrical rotor structure. We have already examined the salient-pole structure exemplified in Figure 9.29. Figure 9.31 shows the alternative structure through a schematic of a simplistic, cylindrical rotor synchronous machine with a single-phase stator coil. A number of coils embedded into the cylindrical rotor are used to produce the two poles in this simplistic machine. The cylindrical rotor machine is used in situations where the speed of revolution is relatively fast, and in generator mode, an adequate voltage output frequency can be produced by a low number of pole-pairs. The salient-pole configuration, on the other hand, provides the opportunity for more pole-pairs to be included in the rotor. This means that in situations where machines are used in generator mode, and the speed of rotor revolution is relatively slow, the salient-pole configuration can still provide a reasonable output supply frequency.


Figure 9.31 - Schematic of Single-Phase Cylindrical (Round) Rotor Synchronous Machine

The machine configurations discussed thus far have only been single-phase. However, the larger proportion of a.c. machines in existence are designed to operate as three-phase machines, particularly in the case of generators, where three-phase generation and transmission of electricity has been almost universally adopted. It is not difficult, however, to extrapolate the concept of the single-phase machine to the three-phase synchronous machine when we examine it in generator mode. Take, for example, the simplistic, three-phase, salient-pole machine illustrated in Figure 9.32. This machine is identical to that in Figure 9.29, except that two additional stator coils have been added. The original coil is labelled R-R (Red Phase). The second coil, displaced 120 mechanical degrees from the first (θ = 120°), is labelled Y-Y (Yellow Phase). The third coil, displaced 120 mechanical degrees from the second coil (at θ = 240°), is labelled B-B (Blue Phase). For each phase we can carry out the same flux distribution analysis that was described for the single-phase machine of Figure 9.29. Assuming that the salient-pole ends are appropriately shaped, the net result will be three sinusoidal flux distributions, displaced by 120°. Applying Faraday's law, we find that the voltages induced in the stator coils will be three sinusoidal waveforms, displaced by 120 electrical degrees. This is shown in Figure 9.33.


Figure 9.32 - Three-Phase Salient Pole Synchronous Machine

Figure 9.33 - (a) Stator Flux Distribution in Three-Phase Synchronous Machine (b) Induced Stator Voltages as a Result of Rotor Movement


It is somewhat more difficult to comprehend how a synchronous machine acts as a motor. In order to understand how synchronous and asynchronous motors operate, one needs to come to terms with the concept of the three-phase rotating field - a phenomenon first brought to light by Tesla in the latter part of the nineteenth century. The three-phase rotating field is created by applying three sinusoidal currents, displaced 120 electrical degrees from one another, to three coils, displaced 120 mechanical degrees from one another. This is shown in Figure 9.34.

Figure 9.34 - Fields Generated through Application of Three-Phase Currents to Three Symmetrically Displaced Coils

Firstly one needs to look at just one winding in isolation (R-R, for example). When a steady current flows in R-R (in the direction indicated by the "into-page" and "out-of-page" symbols in Figure 9.34), we can apply the right-hand-thumb rule to see that the magnetic field intensity (hR) will act in the direction shown in that diagram. One can similarly deduce the directions of the magnetic field intensity vectors hY and hB. Consider however, what happens to the magnetic fields in Figure 9.34 when we apply time-varying currents of the following form:

iR(t) = I cos ωt
iY(t) = I cos (ωt - 120°)
iB(t) = I cos (ωt - 240°)


Ampere's law tells us that the magnetic field intensity vectors will also be time varying. Each of the magnetic field vectors maintains its axis of force, but its size and polarity varies with time in a cosinusoidal manner:

hR(t) = H cos ωt
hY(t) = H cos (ωt - 120°)
hB(t) = H cos (ωt - 240°)    ...(33)

Each vector is effectively a standing wave, whose axis is constant, but whose magnitude varies cosinusoidally with time. The total magnetic field at any instant is the vector sum of the three components (the magnitude of each of which is time-variant). That is:

htotal(t) = hR(t) + hY(t) + hB(t)    ...(34)

In Table 9.2, the size of each of the component vectors in (33) is listed in terms of the value of the cosine component at various values of time.

Time (ωt)    hR(t) = H cos ωt    hY(t) = H cos(ωt - 120°)    hB(t) = H cos(ωt - 240°)
0°           H                   -H/2                        -H/2
90°          0                   (√3/2) H                    -(√3/2) H
180°         -H                  H/2                         H/2
270°         0                   -(√3/2) H                   (√3/2) H
360°         H                   -H/2                        -H/2

Table 9.2 - Magnitude of Field Intensity Vectors at Differing Times


The total field intensity vector at any time can be obtained graphically by adding the vectors described in equation (33). Figure 9.35 shows the graphical additions for the times calculated in Table 9.2. Note well that in one electrical cycle (360 electrical degrees), the total field intensity vector has rotated 360 mechanical degrees.

Figure 9.35 - Resultant Magnetic Field Vector for Varying Times (ωt)

At any instant in time, the total magnetic field intensity will have a particular orientation with respect to the stator. Looking at Figures 9.35 (a) to (d), and applying the cosine rule, we can deduce that the magnitude of the total field vector is a constant for all values of "ωt" and is equal to:

(3/2) H

We can also see that at any instant in time, the total field vector points in the direction defined by "ωt". However, we now need to calculate the effect of the total field vector at a time "t" upon a point "A" at orientation "θ" in the stator. This is shown in Figure 9.36.
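The constant-magnitude, rotating nature of the resultant vector can be confirmed numerically. The C sketch below evaluates equation (33) at several instants, resolves each standing-wave component along its own fixed axis, and sums them; the printed magnitude stays at 1.5H while the printed direction tracks ωt. H is an arbitrary, illustrative amplitude.

```c
#include <stdio.h>
#include <math.h>

/* Numerical check of equations (33) and (34): three standing-wave field
 * components on axes at 0, 120 and 240 mechanical degrees, each varying
 * cosinusoidally in time, sum to a single vector of constant magnitude
 * 1.5*H that rotates with the supply angle (wt).  H is an arbitrary,
 * illustrative amplitude.
 */
int main(void)
{
    const double PI  = 3.14159265358979323846;
    const double deg = PI / 180.0;
    const double H   = 1.0;              /* peak field intensity (assumed) */

    for (int wt = 0; wt <= 360; wt += 45) {
        double a = wt * deg;

        /* Component magnitudes, equation (33) */
        double hR = H * cos(a);
        double hY = H * cos(a - 120.0 * deg);
        double hB = H * cos(a - 240.0 * deg);

        /* Resolve each standing wave along its own fixed axis and sum */
        double hx = hR + hY * cos(120.0 * deg) + hB * cos(240.0 * deg);
        double hy =      hY * sin(120.0 * deg) + hB * sin(240.0 * deg);

        double mag   = sqrt(hx * hx + hy * hy);
        double angle = atan2(hy, hx) / deg;

        printf("wt = %3d deg : |h_total| = %.3f H, direction = %6.1f deg\n",
               wt, mag / H, angle);
    }
    return 0;
}
```

The direction is reported in the range -180° to +180°, so, for example, the ωt = 270° instant appears as -90°.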


Figure 9.36 - Contribution of Total Field Vector to a Point "A" at Orientation "θ"

In Figure 9.36, we have continued to use the field axis of the "red" phase as our origin (to retain consistency with previous calculations), but this is an arbitrary choice. The contribution of the total field vector towards the orientation of point "A" at orientation "θ" is given by:

hP(θ, t) = (3/2) H cos(ωt - θ)    ...(35)

We can therefore say that at any point within the stator, the magnetic field intensity is a standing wave whose magnitude is dependent upon both time and the orientation of that point. It is, however, the resultant rotating magnetic field intensity vector within the stator that is of prime importance to us in the understanding of synchronous motor operation. The speed at which the resultant magnetic field intensity vector rotates is referred to as the synchronous speed of the machine. It is independent of the number of poles on the rotor and it is independent of the rotor type (cylindrical / salient pole). The speed at which the vector rotates is in fact dependent only upon the frequency of the currents in the stator windings and the number of poles in the stator. In Figure 9.35 we graphically saw how the resultant field intensity vector rotated in the two-pole stator of the machine shown in Figure 9.34. However, consider now a stator with four poles - that is, one in which each phase current energises two series-connected coils. This is shown in Figure 9.37.


Figure 9.37 - Fields Generated in a 4-Pole (2 Pole-Pair), 3 Phase Stator Winding

In the diagram of Figure 9.37, it is evident that we have a very different field distribution in a four-pole stator to that which we had with the two-pole stator. For simplicity, look at one phase only (the red phase, say) and apply the right-hand-thumb rule to determine the magnetic field lines when a steady current flows in the red conductors. Figure 9.37 shows the general tendency of the field intensity lines inside the stator air gap. The other two phases (yellow and blue) also have the same distribution, but are each displaced from the red phase by 120 mechanical degrees. Looking again at the red phase, we can see that if we were to plot the magnitude of magnetic field intensity (hR) as a function of angular position (θ), over 360 mechanical degrees, then we find that the movement from a South pole through North, South and North poles back to the South pole gives us two cycles of an approximately sinusoidal distribution. In other words, 360 mechanical degrees correspond to 720 electromagnetic (electrical) degrees. It then follows that the electrical frequency of the field intensity waveform (ω) is given by:

ω = P ωmech    ...(36)

where P is the number of pole-pairs in the stator. In dealing with machines, it is generally simpler to visualise fields in terms of a single-pole-pair machine and to translate results to multiple-pole-pair machines using the simple relationship given in (36).


Equation (35) describes the speed (electrical) of the rotating magnetic field vector in a single-pole-pair machine. This was referred to as the "synchronous speed" of the machine, and in the single-pole-pair machine, this is equivalent to the mechanical speed of rotation for the vector. In general, it is the mechanical synchronous speed of rotation in which we are interested so, for multiple-pole-pair machines, we use the translation of equation (36) to define the synchronous mechanical speed of rotation:

ωmech(synchronous) = ω / P    (radians/second)

2π Ns / 60 = 2π f / P

Ns = 60 f / P    (revolutions/minute)    ...(37)

where:
Ns is the mechanical synchronous speed of the machine in revolutions/minute
f is the electrical supply frequency applied to the stator coils
P is the number of pole-pairs in the stator.

It must be stressed that the synchronous speed of the magnetic field in the stator is dependent on the number of poles in the stator and is not dependent on rotor poles, nor is it even dependent upon the machine having a rotor at all. However, that being said, it should also be noted that in synchronous machines, the rotor is designed so that there are an equivalent number of rotor and stator poles. This is shown in Figure 9.38.
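Equation (37) is easily evaluated; the C sketch below tabulates the synchronous speed for a few pole-pair counts at an assumed 50 Hz supply.

```c
#include <stdio.h>

/* Synchronous speed from equation (37): Ns = 60*f/P (rev/min), where f
 * is the electrical supply frequency and P is the number of pole-pairs
 * in the stator.  The 50 Hz supply figure is illustrative only.
 */
int main(void)
{
    const double PI = 3.14159265358979323846;
    double f = 50.0;                 /* supply frequency, Hz (assumed) */

    for (int P = 1; P <= 4; P++) {
        double Ns    = 60.0 * f / P;        /* rev/min */
        double omega = 2.0 * PI * f / P;    /* rad/s   */
        printf("P = %d pole-pair(s): Ns = %6.1f rev/min (%6.1f rad/s)\n",
               P, Ns, omega);
    }
    return 0;
}
```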

Figure 9.38 - Synchronous Machine with 3-Phase, 2-Pole-Pair Stator and Four-Pole Rotor


Given that we have a synchronous machine, whose rotor and stator are energised with current, it is evident that there must be a net torque developed on the rotor. Conceptually, this is best understood by examining a simplistic, single-phase, cylindrical-rotor synchronous machine as shown in Figure 9.39.

Figure 9.39 - Torque Development in Synchronous Machines

The cylindrical-rotor machine (non-salient-pole) configuration is chosen because it creates an electromagnetic environment analogous to that described in section 9.1 (xii), where we examined the torque produced on a current loop. In order to calculate the torque produced in Figure 9.39, we can assume that a current if(t) flows in the field winding and a current ia(t) flows in the armature. We use the Lorentz relationship (7), to determine the force on each of the rotor conductors:
F = if lr Bs = μo if lr hs sin 90° = μo if lr hs


The net torque on the rotor is determined by resolving the force vectors into components parallel to and perpendicular to the axis of the rotor coil (as shown in Figure 9.39). The components perpendicular to the rotor coil are additive and are those that create a net torque on the coil (those parallel to the coil are collinear and cancel out). The net torque on the coil (of radius "r") is twice that attributed to any one conductor and is given by:
T = 2 μo if r lr hs cos(90° - δ) = 2 μo if r lr hs sin δ    ...(38)

If there are "Nf" turns on the field winding, then the torque on the rotor increases by a factor of Nf. The product of Nf and if is the rotor m.m.f. (Fr). The magnetic field intensity of the stator (hs) can be determined from Ampere's law:

∮ hs · dl = Ns is = Fs

and is thus dependent upon the stator m.m.f. (Fs). We should also note that in a machine with "P" pole-pairs, the torque is increased by a factor of P. Combining these pieces of information, we can therefore write an expression for the torque developed in a synchronous machine as follows:

T = K P Ns is Nr ir sin δ

T = K P Fs Fr sin δ    ...(39)

where K is a synchronous machine constant, dependent upon the geometry and material properties of the machine. The components of "K" arise from the application of Ampere's Law. The net torque acts in such a way as to try to align the two magnetic fields. Looking at the geometry of Figure 9.39, we can determine the following relationship between the field intensities:

hs sin δ = hsr sin δr

Thus, equation (39) can be rewritten in another common form - that is, in terms of the angle between the rotor field and the resultant stator/rotor field:

T = K P Fsr Fr sin δr    ...(40)

The angle "δr" is referred to as the "Torque Angle" of the rotor.


Until now, in examining the torque produced in a synchronous motor, we have assumed a single-phase stator winding. In a single-phase machine such as that in Figure 9.39:

The direction of the stator field intensity vector is constant
The stator current is sinusoidal and so the magnitude of the stator field vector (or Fs) is sinusoidal
The rotor current has a constant d.c. value.

Equation (39) tells us that there is no net starting torque produced in such a machine if the rotor and stator fields are aligned (ie: δ = 0). If the rotor is physically displaced so that the torque angle is greater than zero, then a torque will be produced such that its magnitude varies sinusoidally with time (ie: pulsating torque).

The same torque analysis can be carried out for a three-phase synchronous machine, because we know that the three armature currents in the stator produce a rotating field intensity vector hs. The physical effect of the interaction between the rotor field and the stator field is to create a net torque which endeavours to align the two fields. If we assume that the rotor spins at synchronous speed (Ns), then we can see that the torque on the rotor will continually attempt to bring it into alignment with the moving stator field. The result, in the steady state, is that the rotor spins at synchronous speed. However, at any instant, the torque exerted on the rotor is still dependent upon the sine of the relative displacement between the moving stator and rotor fields. If there is no displacement between the fields (ie: the fields are aligned) then there is no torque on the rotor. The torque relationship in a three-phase synchronous machine can best be understood by examining Figure 9.40, where the stator field rotates at synchronous speed (ωs radians/second) and the rotor spins at ωr (radians/second). In the steady state, where the rotor speed is equal to the stator speed, the angle δ is a constant and the torque equations (39) or (40) can be simply applied:

T = K F s Fr sin or T = K F sr Fr sin r
Although the result looks similar to that for a single-phase motor, there is a substantial difference. The magnitude of the stator field that arises from the three, time-displaced armature currents (and hence m.m.f., Fs) is constant in size, as we have shown in Figure 9.35. This means that for a constant , the three phase machine produces a constant torque rather than a pulsating torque.


Figure 9.40 - Torque Production in a Three-Phase, Single-Pole-Pair, Synchronous Machine

If the stator field speed and the rotor speed are different, then equations (39) and (40) become quite complex, because δ (or δr) is time-varying and dependent upon the relative movement of the stator and rotor fields. In a realistic synchronous motor, if we were to apply a mechanical load to the rotor shaft, then we would increase the value of δ (or δr) and hence the armature would draw more current to produce the corresponding increase in torque. In practice this occurs by the rotor instantaneously slowing down when a load is applied, thus increasing the value of δ (or δr). The rotor then returns to synchronous speed. However, once we have loaded the machine to the extent where δ (or δr) becomes greater than 90 degrees, the machine would become unstable, because the torque produced by the machine would start to decrease (in accordance with its sinusoidal dependence upon δ or δr). The rotor would pull out of synchronism with the stator field and the torque would then fluctuate with the time-dependent change of torque angle. The overall characteristic is shown in the typical synchronous machine chart in Figure 9.41. The Torque vs Torque Angle characteristic is plotted on the assumption that the field current, and the resultant flux generated by the armature and field, remain constant.
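The sinusoidal torque-angle dependence of equation (40), and the pull-out condition at 90 degrees, can be illustrated with the short C sketch below. The grouped torque constant is an arbitrary, assumed figure standing in for KPFsrFr with constant excitation.

```c
#include <stdio.h>
#include <math.h>

/* Torque versus torque angle for a synchronous machine, following the
 * form of equation (40): T = Kt * sin(delta_r).  Kt stands in for the
 * grouped term K*P*Fsr*Fr (constant excitation assumed) and is an
 * arbitrary, illustrative value.
 */
int main(void)
{
    const double PI = 3.14159265358979323846;
    const double Kt = 500.0;    /* grouped torque constant, Nm (assumed) */

    for (int deg = 0; deg <= 180; deg += 30) {
        double T = Kt * sin(deg * PI / 180.0);
        printf("torque angle = %3d deg -> T = %6.1f Nm%s\n",
               deg, T, (deg > 90) ? "  (beyond pull-out)" : "");
    }
    return 0;
}
```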


Figure 9.41 - Torque-Angle Characteristic for Three-Phase, Synchronous Machine

In terms of torque production, it should be noted that in generator mode, the developed torque opposes the direction of motor rotation. In motor mode, the torque is in the direction of motor rotation (it is, of course, the cause of motor rotation).

It should also be noted that neither the salient-pole nor cylindrical-rotor synchronous machines are naturally "self-starting". In other words, the motors, as we have shown them thus far, would not automatically run up to synchronous speed. This might seem somewhat surprising on an intuitive basis, but the explanation is actually straightforward. The stationary rotor has inertia which needs to be overcome by the torque produced through the interaction of the rotor and stator fields. However, the torque is produced when a north pole of the rotor attempts to align itself with a south pole of the stator. When the rotor is stationary, the rotational movement of the stator field (hence poles) is too rapid for the rotor to be able to lock on to the synchronous speed. In other words, the alignment torque that is produced is only produced for a very short time period and hence insufficient energy is generated for the rotor to spin.

This problem is overcome through the induction motor effect. To generate a starting torque, a number of additional short-circuited conductors are inserted into the rotor. These are referred to as "damper bars". Alternating currents are induced into the damper bars and a starting torque is then produced. The damper bars go out of action once the motor has pulled into synchronous speed.


9.3.3 Electrical Models and Performance Characteristics


In dealing with the magnetic characteristics of the synchronous machine in section 9.3.2, we needed to consider the cylindrical-rotor and salient-pole machines separately, because the physical shapes of the rotors differed and because the mechanisms for torque production differed. It should not be surprising, therefore, that the electrical circuit analysis of synchronous machines is also dependent upon the rotor configuration. There are, however, some common analyses and representations amongst the two different physical configurations. The most common of these are the open-circuit and short-circuit characteristic curves.

The open-circuit characteristic curve for a synchronous machine is obtained by running the machine in generator mode and measuring the open-circuit (induced) armature terminal voltage for a range of different rotor field currents, while the machine rotates at a constant (synchronous) speed. This characteristic is typical of the magnetisation curves of many ferromagnetic materials and is analogous to that obtained for the d.c. machine. The short-circuit characteristic is obtained by running a synchronous machine as a generator (at synchronous speed), for a range of field currents, while the armature terminals are short-circuited. Typical open and short-circuit characteristics are shown in Figure 9.42.


Figure 9.42 - Typical Open and Short-Circuit Characteristics for a Synchronous Machine


Normally, when we discuss synchronous machines, we are referring to three-phase machines. However, the electrical models that we create for the armatures of such machines are developed on a per-phase basis. The models can then be converted to three-phase forms, when the phases are interconnected in a star or delta arrangement. The models are described below:

(i) Cylindrical (Round) Rotor Synchronous Machine


The cylindrical rotor machine is electrically modelled in much the same way as a d.c. machine - that is, by a back e.m.f. voltage element, together with a winding resistance and a winding inductance. This is shown in Figure 9.43.


Figure 9.43 - Circuit Model for One-Phase of a Three-Phase, Cylindrical-Rotor Synchronous Machine

In Figure 9.43, we see clearly the need to be able to deal with both steady-state and transient problems. In the steady-state, the field current is constant d.c., hence the field inductance has no reactance and the rotor circuit is therefore trivial to analyse. Under transient conditions, both the armature and the field circuits can be analysed using Laplace methods to derive the time-dependent responses. The armature winding naturally carries an a.c. current, even in the steady state, so it is commonly analysed in the frequency domain using phasor methods.

The armature inductance creates an impedance (reactance) known as the synchronous reactance (Xs). The synchronous reactance in a cylindrical rotor machine is made up of two components:

• Magnetising reactance
• Leakage reactance.


The magnetising reactance accounts for the voltage drop caused by the field-weakening effects of armature reaction (similar to the effect in d.c. machines). The leakage reactance accounts for flux leakage across the armature slots and coil ends. Leakage reactance also accounts for the fact that in a realistic machine, in addition to the rotating magnetic field, there are other harmonic components due to deviations from ideal sinusoidal field waveforms. The combined synchronous reactance and resistance are referred to as the synchronous impedance of the machine.

In a cylindrical rotor machine, the synchronous reactance is independent of rotor position and the value of the armature resistance is much less than the value of synchronous reactance. The synchronous impedance is therefore approximately equal to the synchronous reactance. It can be determined from the characteristic curves of Figure 9.42 by the Thévenin/Norton technique of dividing the open-circuit voltage by the short-circuit current at a given field excitation. A typical phasor diagram for a single phase of a synchronous machine is shown in Figure 9.44.
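Before moving on to the phasor diagram, note that this estimation of synchronous reactance from the two characteristic curves lends itself to a simple calculation. The fragment below is a minimal sketch of that calculation in C; the function name and the test readings are assumptions introduced purely for illustration.

#include <stdio.h>

/* Illustrative estimate of the per-phase synchronous reactance (ohms) from
   open-circuit and short-circuit test readings taken at the same field
   current.  Armature resistance is neglected, as discussed above.        */
double synchronous_reactance(double voc_per_phase, double isc_per_phase)
{
    return voc_per_phase / isc_per_phase;      /* Xs ~ Zs = Voc / Isc */
}

int main(void)
{
    double voc = 240.0;    /* hypothetical open-circuit phase voltage (V)  */
    double isc = 60.0;     /* hypothetical short-circuit phase current (A) */

    printf("Estimated synchronous reactance: %.2f ohms\n",
           synchronous_reactance(voc, isc));
    return 0;
}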

(a) Generator Mode: Eo - Ia.(Rs + jXs) = Vt

(b) Motor Mode: Vt - Ia.(Rs + jXs) = Eo

Figure 9.44 - Phasor Diagrams for a Synchronous Machine in Generator and Motor Modes

Electromagnetic Actuators & Machines - Basic Mechatronic Units

403

In both motoring and generating modes, the cosine of the angle between the terminal voltage on the machine and the armature current flowing through the machine is the power factor of the machine and is defined in an analogous manner to all other a.c. devices. The angle between the back e.m.f. of the machine and the terminal voltage of the machine is referred to as the "power angle" of the machine. If we assume that the armature resistance is negligible compared to the synchronous reactance, then we can derive the following expression from the diagrams in Figure 9.44:

Ia.Xs.cos φ = Eo.sin δ                                      ...(41)

The power, per phase, developed by a synchronous machine, in generator mode, is defined as follows:

P = Vt.Ia.cos φ                                             ...(42)

Combining equations (41) and (42) gives us an expression for developed power in terms of the angle between terminal voltage and back e.m.f.:

P = (Eo.Vt / Xs).sin δ                                      ...(43)

The phasor diagram for motoring mode provides a negative power angle (δ) and, similarly, the power developed by the motor is defined as negative (since it consumes electrical power). The relationship defined in (43) therefore holds for both generator and motor mode and is illustrated in Figure 9.45.

Note the duality between the torque-angle characteristic for the cylindrical rotor machine and the power-angle characteristic. They are similar relationships but are defined in opposite frames of reference. In the torque-angle characteristic, torque (hence mechanical power) produced by the machine is defined as positive. In the power-angle characteristic, electrical power produced by the machine is defined as positive.
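As a simple numerical illustration of equation (43), the following C fragment evaluates the per-phase developed power over a range of power angles. The machine values are assumptions chosen only for the example, and positive power corresponds to generator mode, following the convention of Figure 9.45.

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979

int main(void)
{
    /* Hypothetical per-phase machine parameters (assumed for illustration) */
    double Eo = 260.0;     /* back e.m.f. magnitude (V)      */
    double Vt = 240.0;     /* terminal voltage magnitude (V) */
    double Xs = 4.0;       /* synchronous reactance (ohms)   */

    /* Developed power per phase, P = (Eo.Vt/Xs).sin(delta), equation (43) */
    for (int deg = -180; deg <= 180; deg += 30) {
        double delta = deg * PI / 180.0;
        printf("delta = %4d deg   P = %10.1f W per phase\n",
               deg, (Eo * Vt / Xs) * sin(delta));
    }
    return 0;
}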


Figure 9.45 - Power Angle Characteristic for Cylindrical Rotor Synchronous Machine

Looking at equation (41), together with the phasor diagram for motoring mode operation in Figure 9.44, we can analyse the cylindrical rotor synchronous motor in terms of its performance under constant output power. Equation (41) and the phasor diagram tell us that if we keep the terminal voltage constant, and the power output constant, then varying the field current must change both the armature current and the power factor. This is because the magnitude of the back e.m.f. of the machine (Eo) is dependent upon the size of the field current (as illustrated by the open-circuit characteristic). If the magnitude of Eo changes while the terminal voltage remains constant, then both armature current and power factor must change.

Whenever the field current is reduced to a level where Eo is less than Vt, the motor is said to be "under-excited" and the power factor is "lagging" (ie: Ia lags Vt in phase). When the field current is increased to a level where Eo is greater than Vt, the motor is said to be "over-excited" and the power factor is "leading" (ie: Ia leads Vt in phase). The synchronous motor can therefore be used as a mechanism for adjusting the power factor on a load.
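The effect of varying the excitation at constant load can be illustrated numerically. The sketch below assumes a fixed per-phase power and terminal voltage, neglects armature resistance, and sweeps through a set of hypothetical back e.m.f. values; all of the figures are invented for the example.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double Vt = 240.0;     /* per-phase terminal voltage (V), reference phasor */
    double Xs = 4.0;       /* synchronous reactance (ohms)                     */
    double P  = 8000.0;    /* per-phase power drawn by the motor (W)           */
    double Eo_values[] = { 200.0, 240.0, 280.0, 320.0 };   /* back e.m.f. (V)  */

    for (int i = 0; i < 4; i++) {
        double Eo = Eo_values[i];
        double delta = asin(P * Xs / (Eo * Vt));       /* power angle, eq. (43) */

        /* Armature current phasor: Ia = (Vt - Eo.exp(-j.delta)) / (j.Xs) */
        double ia_re =  Eo * sin(delta) / Xs;
        double ia_im = -(Vt - Eo * cos(delta)) / Xs;
        double ia_mag = sqrt(ia_re * ia_re + ia_im * ia_im);
        double pf = ia_re / ia_mag;                    /* cos of angle to Vt    */

        printf("Eo = %5.1f V   Ia = %6.2f A   pf = %.3f (%s)\n",
               Eo, ia_mag, pf, (ia_im < 0.0) ? "lagging" : "leading");
    }
    return 0;
}

As the back e.m.f. is raised at constant power, the armature current passes through a minimum and the power factor swings from lagging to leading, which is the behaviour described above.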


(ii) Salient Pole Synchronous Machine


In terms of magnetic circuits, the distinguishing factor of the salient pole synchronous machine is that there is a high reluctance in the air gap between the salient poles of the rotor, whereas the magnetic path along the pole (direct) axis of the rotor is low in reluctance. When we analyse such machines in an electrical sense, we need to do so by considering the direct (d) axis components of the model, aligned with the rotor pole axis, and the quadrature (q) axis components, aligned mid-way between the poles.

As a result of the non-uniformity of the air gap in a salient pole machine, its characteristics are dependent upon the geometric position of the rotor. We can model this by resolving the armature current (ia) into two components. One of these is in phase with the direct flux axis of the machine (id) and the other is in quadrature with the direct flux axis of the machine (iq). Faraday's law tells us that the induced voltage in a coil is proportional to the derivative of the changing flux linkage. In phasor notation, this means that the back e.m.f. in the armature (Eo) is 90° ahead of the direct axis flux and is therefore in line with iq. The direct axis current (id) is 90° behind the quadrature axis component.

It is also important to note that the salient pole machine is modelled in terms of two synchronous reactances - one for the direct axis current (Xd) and one for the quadrature axis current (Xq). These values are not the same. However, as with the cylindrical rotor machine, each of the components of the synchronous reactance is composed of two quantities - a leakage reactance (Xl) and a magnetising reactance (here written Xmd and Xmq respectively). The relationship is as follows:

Xd = Xl + Xmd
Xq = Xl + Xmq

The phasor diagram for the salient pole synchronous machine is shown in Figure 9.46, where, as in cylindrical rotor machine analysis, we can make the assumption that the winding resistance is negligible in comparison with the direct and quadrature synchronous reactances. To derive the equivalent "power angle" characteristic for the salient pole machine, we refer to the phasor diagram in Figure 9.46 (a) and apply basic trigonometry:

Pd = Vt.Ia.cos φ = Vt.(Iq.cos δ + Id.sin δ)


However, from the phasor diagram,

Iq.Xq = Vt.sin δ
Id.Xd = Eo - Vt.cos δ

Therefore,

Pd = (Eo.Vt / Xd).sin δ + (Vt²/2).(1/Xq - 1/Xd).sin 2δ        ...(44)

which provides the same characteristic as the cylindrical rotor machine in situations where the quadrature and direct axis reactances are identical.
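Equation (44) is easily evaluated numerically. The following sketch (with machine values invented purely for illustration) separates the excitation term from the reluctance term, and makes it clear that the reluctance term disappears when Xd and Xq are equal:

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979

int main(void)
{
    /* Hypothetical per-phase values, assumed purely for illustration */
    double Eo = 260.0, Vt = 240.0;
    double Xd = 5.0, Xq = 3.0;    /* direct and quadrature reactances (ohms) */

    /* Equation (44): excitation term plus reluctance term.  When Xd == Xq
       the reluctance term vanishes and (44) reduces to (43).             */
    for (int deg = 0; deg <= 180; deg += 15) {
        double d = deg * PI / 180.0;
        double p_excitation = (Eo * Vt / Xd) * sin(d);
        double p_reluctance = 0.5 * Vt * Vt * (1.0 / Xq - 1.0 / Xd) * sin(2.0 * d);
        printf("delta = %3d deg  Pd = %9.1f W (excitation %9.1f, reluctance %9.1f)\n",
               deg, p_excitation + p_reluctance, p_excitation, p_reluctance);
    }
    return 0;
}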


Figure 9.46 - Phasor Diagrams for Salient Pole Machines

Electromagnetic Actuators & Machines - Basic Mechatronic Units

407

It has already been stated that the majority of synchronous motors are of the three-phase variety, and we have already seen how the individual phases in the synchronous machine are modelled through the use of vector analysis, for both cylindrical rotor and salient-pole machines. However, in order to look at the three-phase synchronous machine as a component in an a.c. system, we provide a simplified circuit showing just the armature windings of the machine. We do not generally show the field winding, firstly because it is d.c. and secondly because variations in field current affect the back e.m.f. of the machine in a non-linear manner, and therefore its role in the circuit does not lend itself to simple phasor analysis. A three-phase machine is shown schematically in both star and delta arrangements in Figure 9.47.

(a) Star-Connected Three-Phase Machine Configuration
(b) Delta-Connected Three-Phase Machine Configuration

Figure 9.47 - Possible Three-Phase Synchronous Motor Connections


If we assume, in Figure 9.47, that the same machine has been connected in the two different configurations, with the same line-to-line supply voltages in each case, then the following points emerge:

• The voltage across the armature windings in the delta connection is √3 times the voltage across the windings in the star connection
• The current through the armature windings in the delta connection is √3 times the current through the armature windings in the star connection (see the numerical sketch below)
• A natural neutral point is created in the star connection by joining one side of each of the three armature windings to a common point.
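A minimal numerical sketch of the first two points is given below; the winding impedance and line voltage are assumptions chosen only to make the √3 relationships visible.

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Hypothetical machine: per-phase winding impedance of 10 ohms magnitude,
       supplied from a 415 V line-to-line source (values assumed only for the
       purpose of illustrating the star/delta relationships listed above).   */
    double v_line = 415.0;
    double z_winding = 10.0;

    double v_wind_star  = v_line / sqrt(3.0);       /* winding voltage, star  */
    double v_wind_delta = v_line;                   /* winding voltage, delta */
    double i_wind_star  = v_wind_star  / z_winding;
    double i_wind_delta = v_wind_delta / z_winding;

    printf("Star : winding voltage %.1f V, winding current %.1f A\n",
           v_wind_star, i_wind_star);
    printf("Delta: winding voltage %.1f V, winding current %.1f A\n",
           v_wind_delta, i_wind_delta);
    printf("Delta/star ratio (voltage and current) = %.3f\n",
           v_wind_delta / v_wind_star);
    return 0;
}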

With these points in mind, we can treat the vector models of Figures 9.44 and 9.46 as a single phase of either of the three-phase systems shown in Figure 9.47.

It may appear strange that we have discussed the modelling of three-phase machines using single-phase models but that we have made little mention of the single-phase synchronous machine. The major reason for this is that such machines are not common (particularly in servo applications) and, secondly, that they are often custom designed for low-cost systems. It is of course possible to have single-phase synchronous motors, but as we have seen, these motors develop a pulsating torque rather than the uniform torque produced in a three-phase machine. We have also noted that three-phase motors can only be made to run up to synchronous speed through the addition of extra coils in the rotor (damper bars) or through "pony" motors that drive the rotor up to synchronous speed.

Single-phase motors can be started by the use of auxiliary windings which are connected in parallel with the main winding and physically displaced from it by 90 electrical degrees. Although an identical voltage is applied to both windings, the ratio of synchronous reactance to winding resistance in each of the windings is different, thus resulting in armature currents of different phase. The motor is called a "split-phase" motor or two-phase motor. However, one of the windings (the auxiliary) is by-passed via a centrifugal switch that activates when the motor runs up towards synchronous speed. The phase difference between armature currents can also be obtained through the use of a capacitor, connected in series with the auxiliary winding. Motors with this configuration are also "split-phase" motors but are also referred to as "capacitor start" motors.


9.3.4 Basics of Synchronous Motor Speed Control


It should be evident from the discussions in sections 9.3.2 and 9.3.3 that there is really only one feasible way to control the speed of a synchronous motor in order to create a servo controlled motor system. Unlike the d.c. machine, we cannot vary the terminal voltage applied to the armature of the synchronous machine in order to change its speed, nor can we vary the intensity of the field current. These measures only serve to reduce torque, thereby causing the rotor to move away from synchronous speed and become unstable. Once we discard these measures, it is almost self-evident that the only electrical technique available to control the speed of the synchronous motor is the variation of the natural, synchronous speed. We know that this speed is dependent upon the frequency of the supply voltage and the number of magnetic poles created in the stator windings. Since we cannot practically vary the number of poles in such a machine, our only technique for speed control is the variation of the armature voltage supply frequency.

At first glance, this may seem a simple task but, when one applies some thought to the issue, one realises that it is indeed quite complex. It is, in fact, only since the advent of low cost digital circuits, processors and power electronics devices that supply-frequency based speed control of synchronous machines could be achieved. Moreover, the advantages of synchronous machines (over d.c.) are not enormous if one has to supply a d.c. field (through slip-rings) and a variable frequency a.c. armature voltage. For this reason, the ability to produce "rare-earth" (permanent) magnet based rotors for such machines has also made a major contribution to the feasibility of such motors for servo applications.

There is a potential to provide square-wave (chopped or PWM) voltages to modern synchronous machines in place of the traditional sinusoidal voltages. This substantially minimises the complexity of the speed control techniques that need to be implemented, but needs to take into account the electromagnetic effects generated by the harmonic constituents of the non-sinusoidal voltage waveforms. The net result is that we can expect the overall speed-control circuitry for synchronous a.c. motors to be more complex and costly than that for d.c. machines.


9.4 Fundamentals of Induction (Asynchronous) Machines


9.4.1 Introduction to Induction Machines
Induction machines are a.c. devices which can function on one, two or three phases. They are perhaps the most common of all rotating electrical machines. Induction machines are predominantly used as motors because they do not inherently have the capacity to act as generators and have poor characteristics when forced into generator mode.

Many text books cover the fundamentals of induction machines before they cover the subject of synchronous machines. In this book, however, we have deliberately chosen to reverse the order because induction machines can be regarded as a subset of synchronous machines. An induction machine is, in essence, a synchronous machine with the rotor coils short-circuited (rather than energised with d.c. currents). The fact that no energy needs to be supplied to or by rotor windings means that the induction machine is the most cost-effective machine to produce for a given power rating. The induction machine, unlike the d.c. machine and the synchronous machine, does not require slip rings, short-lifespan brushes or expensive commutators to transfer energy to and from a rotating set of windings. Unlike the synchronous machine, the induction machine only requires an a.c. armature excitation and needs neither d.c. field current excitation nor costly permanent magnets in the rotor in order to produce a torque.

There are some varieties of induction motor which do provide slip-rings and terminals to the outside world. These are referred to as "wound rotor" machines and the purpose of the rotor terminals is to allow users to insert variable resistances into the rotor coils to vary the speed of the machine. The more common induction machine is the "squirrel-cage" type, which has a rotor composed of longitudinal bars, short-circuited by end-rings, and has no external rotor connections.

The fact that induction machine rotors require no connection to the outside world means that the machines can be better protected from chemical, gaseous and physical contaminants in the industrial environment. Added to this is the fact that the machine speed varies little over the normal operating torque range. All of these factors tend to indicate that induction machines would intuitively be an ideal choice for the majority of motor applications, and particularly for servo drive systems. The problem, of course, is that in the early years of motor development, it was difficult to accurately control the rotational speed of the induction motor by using traditional analog techniques. The advent of low-cost, high-performance processors, coupled with compact power electronics, now means that it is possible to create speed and position control circuits for the induction machine. As the cost of these drive circuits becomes lower, the natural advantages of the three-phase induction machine (coupled with its low cost) will increase its usage in servo drive systems.


9.4.2 Physical Characteristics of Induction Machines


For all intents and purposes, the stator in a three-phase induction machine can be assumed to be identical to that of a synchronous machine. The induction machine stator is composed of three sets of windings, displaced by 120 mechanical degrees and energised by currents displaced by 120 electrical degrees. In fact, in some machines, the rotor is an interchangeable component, so that the machine can be converted from a synchronous machine to an induction machine and vice-versa.

In section 9.3.2, we addressed the issue of the generation of a rotating magnetic field in the three-phase stator windings of a synchronous machine. The same analysis also applies to the three-phase induction machine. For this reason, in this section, we deal with the areas in which the induction machine differs from the synchronous machine. The most obvious difference between the machines is in the rotor windings. The two rotor configurations common to the induction machine are shown in Figure 9.48.

(a) Squirrel Cage Rotor - iron rotor core composed of laminated disks, with skewed conducting bars short-circuited by conducting end-rings
(b) Wound Rotor - iron rotor core composed of laminated disks, with skewed, three-phase conducting windings brought out via slip-rings

Figure 9.48 - Different Configurations of Induction Machine Rotors


Figure 9.48 (a) shows the more prevalent form of induction machine rotor configuration - the so-called "brushless" or "squirrel cage" configuration. In this robust configuration, conducting loops are formed inside the rotor by embedding conducting bars into the laminated iron core of the rotor and then short-circuiting the ends of the bars with end-rings. This is the simplest and most robust form of induction machine, since it requires no rotor connections to the outside world and because the rotor bars are often cast into place. However, the design of such motors tends to be compromised by the fact that the rotor resistance is non-variable. As we shall later see, a high starting torque is achieved by a high rotor resistance, but during normal operating conditions, a higher motor efficiency requires a low rotor resistance.

Figure 9.48 (b) shows the "wound rotor" configuration of induction machine. This design has many of the inherent disadvantages of both d.c. and synchronous machines, in that rotor conductors need to be brought out via slip rings and brushes. However, the objective of such an arrangement is to enable variable external resistors to be placed into the rotor circuit in order to alter the starting torque of the machine, while providing a higher operating efficiency under normal conditions. We need to note that in both Figure 9.48 (a) and (b), the conductors/bars on the rotor are not parallel to the axis of rotation. They are in fact skewed across the cylindrical surface. The purpose of the skewing is to minimise the effects of induced harmonics in the rotor.

Electromagnetically, both the squirrel cage and wound rotors behave similarly. We already know that the application of three-phase voltages to the armature terminals of a three-phase stator winding produces a magnetic field that rotates at a synchronous speed Ns, as defined by equation (37). We also know that there can be no torque produced on the rotor unless there are two magnetic fields within the system tending to align themselves. In the induction machine, the first of these fields is produced by the armature currents in the stator, but we have neither a permanent magnet nor an energised set of windings in the rotor, so there is no inherent second field. The induction machine, as its name suggests, relies on an e.m.f. being induced in the rotor windings to create a rotor current, which then creates the second magnetic field required for torque production.

In order to understand the mechanisms for torque production, it is first necessary to understand the definition of "slip". We will shortly see that in an induction motor, the rotor can never reach synchronous speed. In general it will have a rotational speed (N) slightly less than synchronous speed. Slip is then defined as follows:

s = (Ns - N)/Ns = (ωs - ω)/ωs                                 ...(45)


Faraday's law tells us that the e.m.f. induced in the rotor windings is proportional to the rate of change of flux linkage through the windings. The current through the rotor windings is then determined by dividing the induced e.m.f. by the impedance of the rotor windings. However, if the rotor spins at synchronous speed and the stator field spins at synchronous speed, then there is no changing flux linkage through the rotor, hence no e.m.f. induced in the rotor, hence no current induced in the rotor and hence no torque produced. The rotor in an induction motor must therefore spin at less than synchronous speed in order for torque to be produced.

When the rotor is stationary (ie: slip = 1), the frequency of the e.m.f. (or current) induced in the rotor is the same as the frequency of the stator supply. However, as the rotor speeds up, the changing flux linkage is dependent upon the difference between the stator and rotor field speeds, and so the frequency of the induced rotor e.m.f. is then "s" times the frequency of the armature supply. It is therefore apparent that the induction machine acts as a transformer, which produces an induced secondary (rotor) voltage equal to "s" times its stand-still value and also transforms the frequency of the secondary voltage to "s" times the primary frequency. The induced rotor e.m.f. is said to have a frequency known as the "slip" frequency.
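A brief numerical sketch of these relationships is given below. The pole number, supply frequency and measured speed are assumed values, and the synchronous speed is computed from the usual relationship between supply frequency and number of poles.

#include <stdio.h>

int main(void)
{
    /* Hypothetical 4-pole, 50 Hz machine (values assumed for illustration) */
    double f_supply = 50.0;                  /* armature supply frequency (Hz) */
    double poles    = 4.0;
    double Ns = 120.0 * f_supply / poles;    /* synchronous speed (rpm)        */
    double N  = 1440.0;                      /* measured rotor speed (rpm)     */

    double slip    = (Ns - N) / Ns;          /* equation (45)                  */
    double f_rotor = slip * f_supply;        /* slip frequency of rotor e.m.f. */

    printf("Synchronous speed = %.0f rpm\n", Ns);
    printf("Slip = %.3f, rotor (slip) frequency = %.1f Hz\n", slip, f_rotor);
    return 0;
}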

9.4.3 Electrical and Magnetic Models and Performance


In an electrical sense, the induction machine is normally modelled in the same way as a transformer, since it is, after all, a rotating transformer providing both voltage and frequency conversion. The electrical circuit model for the induction machine takes into account both the rotor and stator windings and their currents and voltages. It is shown in Figure 9.49 for the condition where the rotor is stationary.


Figure 9.49 - Electrical Circuit Representation of One Phase of a Three-Phase Induction Motor at Stand-Still


In Figure 9.49, a number of electrical parameters are shown in an analogous sense to the electrical transformer. They are defined as follows:

R1: Resistance of the armature winding (normally neglected)
X1: Leakage reactance of the stator (normally small and neglected)
Ro: Element representing hysteresis and eddy current loss
Xo: Magnetising reactance of the core (the magnetising current is normally much larger than for a comparable transformer, because of the air gap)
X2: Leakage reactance of the rotor
R2: Resistance of the rotor
E2: Induced rotor e.m.f. (= Nr/Ns times the stator e.m.f.)

Once the machine is rotating at a speed "N" and hence a slip "s" then the circuit model needs to be changed slightly. The voltage induced in the rotor and its frequency become "s" times the stand-still value. As a result, the effective leakage reactance in the rotor is "s" times the stand-still value. This is shown in Figure 9.50.


Figure 9.50 - Electrical Circuit Representation of One Phase of a Three-Phase Induction Motor at Speed "N" and Slip "s"

The model shown in Figure 9.50 is not as useful as we might like, because it shows an induced voltage in the rotor dependent upon the slip. We can, however, manipulate the model by scaling down the induced rotor voltage and the rotor impedances by a factor of "s", so that we can more simply examine the induction machine characteristics. This development of the rotor model is shown in Figure 9.51.

In Figure 9.51, Model (a), we note that the rotor current I2 is unaffected by scaling the induced rotor voltage and impedances by a factor of "s". In Model (b) of Figure 9.51, we divide the rotor resistance of Model (a) into two components. One of these is the physical resistance of the windings at stand-still (R2) and the other component is referred to as an "equivalent load resistance", which is dependent upon slip. Adding these resistances together obviously gives the same value as in Model (a). However, the net effect of this manipulation is to divide the rotor circuit into slip-independent impedances (the same as those in Figure 9.49) and a slip-dependent apparent load resistance.


Model (a): rotor branch with induced e.m.f. E2, leakage reactance X2 and resistance R2/s
Model (b): rotor branch with induced e.m.f. E2, leakage reactance X2, winding resistance R2 and equivalent load resistance R2.(1-s)/s

Figure 9.51 - Equivalent Representations of One Phase of a Three-Phase Induction Machine Rotating with Slip "s"

Given model (b) of Figure 9.51, we can easily determine the torque-speed characteristic of the induction machine in motoring mode (in this book we will ignore the seldom used induction generator mode) when it is rotating at a speed N (or ω) and the synchronous speed is Ns (ωs). The mechanical power developed in the motor is equal to the power developed in the equivalent load resistance in the rotor. The developed power for the three-phase motor is simply calculated by multiplying the power in the single-phase circuit by three:

Pd = 3.I2².R2.(1 - s)/s

where

I2 = s.E2 / (R2 + j.s.X2)    so that    |I2|² = s².E2² / (R2² + (s.X2)²)

Therefore,

Pd = 3.s.E2².R2.(1 - s) / (R2² + (s.X2)²)                      ...(47)


We also know that:

Pd = Td.ω = Td.ωs.(1 - s)

Therefore,

Td = 3.s.E2².R2 / (ωs.(R2² + (s.X2)²))                         ...(48)

The torque developed at the shaft is slightly less than that calculated above because of the effects of friction and windage. It is also a relatively straightforward task to determine the speed at which maximum torque occurs. This is done by taking the derivative of equation (48) with respect to slip and calculating the value of slip for which the derivative equals zero. This yields a slip value of:

s = R2 / X2

Substituting this value of slip into equation (48) gives us a maximum torque of:

Td max = 3.E2² / (2.ωs.X2)                                      ...(49)
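Equations (48) and (49) can be illustrated with a short numerical sketch. The rotor-referred parameters below are invented for the example; the loop simply tabulates the torque-slip curve and then checks the maximum-torque result of equation (49).

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979

/* Developed torque from equation (48) */
double torque(double s, double E2, double R2, double X2, double w_sync)
{
    double sx2 = s * X2;
    return 3.0 * s * E2 * E2 * R2 / (w_sync * (R2 * R2 + sx2 * sx2));
}

int main(void)
{
    /* Hypothetical rotor-referred parameters, assumed purely for illustration */
    double E2 = 100.0, R2 = 0.5, X2 = 2.0;
    double w_sync = 2.0 * PI * 1500.0 / 60.0;     /* 1500 rpm in rad/s */

    for (int i = 20; i >= 1; i--) {
        double s = 0.05 * i;
        printf("slip = %.2f   Td = %7.2f Nm\n", s, torque(s, E2, R2, X2, w_sync));
    }

    printf("Maximum torque %.2f Nm at slip %.2f (equation (49))\n",
           3.0 * E2 * E2 / (2.0 * w_sync * X2), R2 / X2);
    return 0;
}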

There are many important points which arise from this analysis of induction machines. They are summarised as follows:

(i) Torque is developed even with a slip equal to one (zero speed) and hence the motor is self-starting
(ii) The size of the starting torque can be changed by variation of rotor circuit resistance
(iii) The maximum value of torque is independent of the rotor circuit resistance
(iv) The starting torque is typically under 50% of the maximum torque
(v) The torque is zero at synchronous speed
(vi) If the rotor speed is greater than the synchronous speed then the machine absorbs mechanical energy and converts it to electricity - in this mode the machine acts as an induction generator


A typical torque-slip (or torque-speed) characteristic for an induction machine is shown in Figure 9.52.

Figure 9.52 - Torque-Slip (or Speed) Characteristic for a Three-Phase Induction Machine

An induction motor is said to have a linear torque-speed characteristic because its normal operating range is between a slip value of zero and (R2/X2). The maximum torque shown in Figure 9.52 is actually two to three times larger than the rated full-load torque for the machine. The maximum torque is also referred to as a "pull-out" or "breakdown" torque because the induction motor becomes unstable when the mechanical load is increased beyond the maximum torque capacity. If this should occur, then the induction motor actually produces less torque and hence speed decreases rapidly. The starting torque of an electrical motor is an important factor. We can see from Figure 9.52 that if we make the ratio (R2/X2) larger, then the characteristic curve will have a wider spread around the zero slip point and hence the torque at stand-still (ie: starting torque at slip = 1) will be higher. This can also be deduced mathematically from equation (48). The leakage reactance of the rotor is clearly difficult to alter in a practical sense, and so the only mechanism we have for changing the starting torque is the variation of rotor resistance. If we increase the size of the rotor resistance relative to the reactance, then the starting torque will increase. In a wound rotor machine this is not difficult to achieve, because we can add additional resistors in series with the rotor windings through the slip-ring connections. The resistance is gradually reduced as the machine reaches operating speed and a clutch arrangement is used to internally shortcircuit the slip-rings and lift the brushes to minimise wear.

418

D.J. Toncich - Computer Architecture and Interfacing to Mechatronic Systems

In order to change the starting torque of a squirrel cage motor, other techniques must be employed, because no electrical access is provided to the rotor during machine operation. One alternative is to use two squirrel cages in the rotor, acting in parallel. One cage is of low resistance and high inductance and the other is of low inductance and high resistance. At stand-still, rotor currents are of a high frequency and hence the bulk of the current flows through the low inductance (reactance) cage. At operating speed, rotor currents are of a low frequency and so the bulk of the current flows through the low resistance cage. Another alternative for varying the starting torque of the squirrel cage motor is the use of so-called deep-bar rotors. If the conducting bars in the rotor are sufficiently deep, the resistance/reactance characteristic changes with frequency. At stand-still (high rotor frequency), the effective, relative resistance is larger than at full operating speed (low rotor frequency). This is not unlike the skin-effect phenomenon, where resistance increases with frequency as current flow tends towards the surface of a conductor.

The other factor that emerges from the electrical models of the induction motor is that at maximum slip (ie: stand-still), the rotor current I2 is also a maximum. In fact, the stand-still or starting current is in the order of six times the full-load current for typical motors started by applying rated voltage to the armature. This form of starting is referred to as Direct-On-Line (or D.O.L.) starting and is generally not harmful to the motor, but can cause problems for the supply. It is also apparent from equation (48) that torque is proportional to the square of the induced rotor voltage (E2), which is in turn dependent upon the armature voltage (E1). For a constant supply terminal voltage, the armature voltage drops as the current through the armature causes a larger voltage drop through the stator impedance (R1 and X1). The lower value of E2 causes a reduction in the available starting torque.

As with synchronous machines, in this section we have concentrated on the three-phase machine configuration. Induction motors can also be produced in single-phase versions. We have already seen in 9.3.2 that a single-phase stator will only produce a pulsating field and not the smooth rotating field inherent to the three-phase configuration. Nevertheless, single-phase motors are commonly produced, with specialised starting mechanisms such as the "split-phase" and "capacitor start" techniques described in section 9.3.3. These motors are often specially designed and purpose-built for low-cost applications. We shall not be analysing them in this text because they are generally not used for servo applications and because they have widely varying design features.


9.4.4 Basics of Induction Motor Speed Control


There are a number of issues that need to be discussed in relation to induction motor speed control. The first issue relates to the use of induction motors as constant speed devices. We have already noted that the speed of an induction motor changes little over a normal operating range of torques. However, the term "little" is relative and normally implies a variation in the order of 10-20%. This means that if we were to use an induction motor to drive a machine tool spindle (for example), then we could assume that the induction motor provided approximately constant speed characteristics, because we are generally not interested in a high degree of accuracy in rotational speed. In terms of servo applications, however, a speed variation of 20% is substantial and so clearly there need to be techniques established for accurate control of motor speed and rotor orientation.

We already know that the induction machine has some of the characteristics of the synchronous machine, because it essentially has the same type of stator winding. However, we also know that because the induction machine rotor is not independently energised, it cannot produce torque at synchronous speed in the same way as the synchronous motor. In the synchronous motor, we know that provided we do not apply an excessive load, the speed of the motor will be accurately known for a given supply voltage and frequency. We can never, however, run the induction motor as such an "open-loop" device, because its speed (slip) varies with mechanical load. For servo applications, therefore, induction motors need to be incorporated into a closed-loop system with some mechanism for velocity feedback (either a tacho-generator or a differentiated position-encoder signal).

Once we accept that accurate control of induction motor speed needs to be carried out in a closed-loop arrangement, we can look at a number of mechanisms for speed variation:

(i) Number of motor poles
(ii) Supply voltage frequency
(iii) Supply voltage magnitude
(iv) Rotor resistance
(v) Insertion of variable frequency voltages into the rotor.

Mechanism (i) works on the principle that the synchronous speed of the motor is inversely proportional to the number of magnetic poles in the stator and that by varying the number of poles, the motor speed can also be varied. Although pole-changing motors do exist, they clearly cannot provide an infinitely variable range of speeds for servo applications and are therefore of little use in most mechatronic applications.


Mechanism (ii) works on the principle that the synchronous speed (and hence the rotor speed) is proportional to the frequency of the supply voltage and that, by changing the supply frequency, the motor speed can be accurately varied. This technique is the most prevalent modern method of accurate speed control because it is energy efficient and feasible to implement using modern power electronics drive circuits controlled by low cost microprocessors or Digital Signal Processors (DSPs). Speed control strategies actually focus on controlling the orientation of the rotating flux vector (by changing supply frequency). However, if the supply voltage frequency is varied, then the magnitude of the voltage also needs to be varied in order to maintain a constant flux density in the motor and thus retain a constant value of maximum torque.

Mechanism (iii) stems from equation (48), which indicates that for a constant torque, variation of the terminal voltage (and hence E1 and E2) will cause a variation in speed. This technique is in fact used with smaller squirrel cage motors, but clearly relies on achieving sufficient torque at a given speed.

Mechanism (iv) also stems from equation (48) and provides a simple technique for speed variation. However, it clearly depends upon the use of a wound rotor configuration and thereby diminishes one of the most significant advantages of the induction motor. For servo motor applications, the use of an induction motor with brushes provides little if any advantage over the use of a d.c. machine.

Mechanism (v) is again related to the use of wound rotor induction motors and entails the energising of rotor windings with complex variable frequency voltages. This is by far the most complex technique for speed control and effectively eliminates nearly all the inherent advantages of the induction motor over other electrical machines. For servo control applications this has no benefits over the use of a d.c. machine that can be driven with simpler circuits and controls.

It is apparent, given sufficient processing power and low cost power electronics devices, that mechanism (ii) (variation of supply frequency in squirrel cage motors) has the most potential for modern servo systems. However, it must also be remembered that induction motors are used in a wide range of other applications within industry. In particular, induction motors are widely used as spindle motors on CNC machines. In these instances, speed variation is also required, but the accuracy of the variation is not critical. A number of machines still resort to driving spindles via gear-box arrangements in order to facilitate broad changes of spindle-speed.
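As a sketch of the constant volts-per-hertz idea that underlies mechanism (ii), the fragment below computes an inverter frequency and voltage command for a demanded speed. The rated values, pole number and demanded speed are assumptions introduced only for illustration.

#include <stdio.h>

int main(void)
{
    double v_rated = 415.0;     /* rated line voltage (V)      */
    double f_rated = 50.0;      /* rated supply frequency (Hz) */
    double poles   = 4.0;
    double v_per_hz = v_rated / f_rated;

    double n_demand = 900.0;    /* demanded (approximate) speed (rpm) */

    /* Supply frequency needed for the demanded synchronous speed, and the
       corresponding voltage that holds the flux (and hence the peak torque)
       roughly constant.                                                   */
    double f_demand = n_demand * poles / 120.0;
    double v_demand = v_per_hz * f_demand;

    printf("Demand %.0f rpm -> inverter output %.1f Hz at %.0f V\n",
           n_demand, f_demand, v_demand);
    return 0;
}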


9.5 Stepping (Stepper) Motors


Stepper motors are slightly different from the other motors examined in this chapter because they are seldom used in closed velocity or position loop servo systems. In fact, the stepper motor is generally used as an "open-loop" alternative to the servo systems applied to traditional motor problems. Stepper motors are typically small in size and cost and are normally categorised as "fractional horsepower" motors. However, they are an important part of many smaller mechatronic systems, so we need to include a brief description of their attributes for the sake of completeness. A stepper motor is shown schematically in Figure 9.53.

Figure 9.53 - Schematic of Stepper Motor with 4-Phase Stator and 2-Pole Rotor

The basic purpose of the stepper motor is to provide a relatively accurate positioning (or indexing) facility that can be driven in an open-loop manner by a digital circuit or computer. Some manufacturers even label their stepper motors as "indexers", because that is their primary function.


In Figure 9.53, we see the stepper motor armature energised by four currents. These currents can either be provided by a computer output, suitably scaled, or more commonly by electronic drive circuitry provided with the stepper motor arrangement. In many instances, the drives are configured so that they can be simply and directly triggered by low level voltage and current outputs from a computer system. As we shall now observe, the stepper motor currents are not sinusoidal but switched, d.c. inputs.

The stepper motor, as its name suggests, can be made to rotate from one stator pole to another by sequentially exciting each phase of the armature. It is this stepping characteristic, rather than its torque-speed characteristic, that is of primary interest. Within the accuracy limits of the motor, and assuming it is not overloaded, it is possible to use stepper motors for a myriad of applications without the need to develop the complex drives and closed-loop control algorithms required by other types of motors. Typical applications for stepper motors could include such diverse tasks as drives for x-y plotters and axis/work-piece positioning devices in machines.

The stepper motor is essentially a multi-pole synchronous motor, with a permanent magnet supplying the magnetic field for the rotor. The four-phase motor arrangement, shown in Figure 9.53, works with a rotor composed of a 2-pole permanent magnet. Energising each of the phase windings sequentially creates a Lorentz force (and hence torque) causing the rotor to align itself with a stator pole. For example, if phase "a" is energised for a sufficiently long period, then the rotor will rotate a quarter turn anticlockwise to the point where the field produced by phase "a" and the rotor field are aligned. Energising phase "b" will cause the rotor to move another quarter turn, and so on. The timing diagram for the phase currents is shown in Figure 9.54.

This simplistic arrangement, shown in solid line in Figure 9.54, apparently only gives us the ability to index to within 90°. In fact, we can also energise two windings simultaneously, thereby changing the effective position of the stator poles. The net effect is that we can actually position this particular system to a resolution of 45°. This concept is shown in the dotted lines in Figure 9.54. Increasing the number of stator phases and rotor poles on a stepper motor increases the resolution to which we can index the motor. In particular, we can increase the number of rotor poles, since they do not actually need to match the number of stator poles in order for the motor to function. A larger number of rotor poles not only provides a higher resolution but also leads to smoother motor operation.


Another issue related to stepper motors is the number of steps per second at which the motor can physically operate. Each current pulse corresponds to a quantum of electrical energy, which is converted to mechanical energy through the provision of torque to the rotor. If we make the current pulses too short in duration, then we may not provide sufficient energy for the rotor to move from one step to the next. In other words, stepper motors have an effective slew rate that limits the speed at which current switching can occur. This ultimately means that as the mechanical load on the motor is increased, the stepping speed must be reduced in order to provide sufficient torque to rotate the load.
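The switching patterns of Figure 9.54 map naturally onto a small lookup table in software. The sketch below shows full-step and half-step sequences for the 4-phase motor of Figure 9.53; the bit assignments and the write_port() routine mentioned in the comments are hypothetical and would depend on the actual drive electronics.

#include <stdio.h>

/* Bit patterns for phases (bit 0 = phase a, bit 1 = b, bit 2 = c, bit 3 = d) */
static const unsigned char full_step[4] = {
    0x01,    /* phase a only */
    0x02,    /* phase b only */
    0x04,    /* phase c only */
    0x08     /* phase d only */
};

static const unsigned char half_step[8] = {
    0x01, 0x03, 0x02, 0x06, 0x04, 0x0C, 0x08, 0x09   /* a, a+b, b, b+c, ... */
};

int main(void)
{
    /* One revolution of the 2-pole rotor in full-step mode (4 x 90 degrees) */
    for (int i = 0; i < 4; i++)
        printf("full step %d: phase pattern 0x%02X\n", i, full_step[i]);

    /* One revolution of the 2-pole rotor in half-step mode (8 x 45 degrees) */
    for (int i = 0; i < 8; i++) {
        printf("half step %d: phase pattern 0x%02X\n", i, half_step[i]);
        /* In a real system: write_port(half_step[i]); then wait long enough
           for the rotor to settle, respecting the maximum step rate noted
           above.                                                           */
    }
    return 0;
}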


Figure 9.54 - Driving Currents in a 4-Phase Stepper Motor


The difficulty with applying stepper motors to a range of problems is that, because they are open loop, assumptions need to be made about the maximum torque applied to the motor shaft. In a conventional servo system, even if the mechanical load instantaneously exceeds the rated torque output of the machine, it is still possible for the servo to achieve an end position accurately. In the stepper motor, however, an excessive load (even if instantaneous) can cause the rotor to miss a step and hence irrecoverably lose positional accuracy, because there is no positional feedback.

With these limitations in mind, there are still innumerable open-loop situations in which the stepper motor can be effectively applied to eliminate the need for closed-loop servo systems. Stepper motors can also be configured to operate in a closed servo loop. However, there is little advantage in doing so, because the cost of the end system becomes comparable to that of a system incorporating traditional a.c. or d.c. motors, which provide smoother torque-speed characteristics and a much wider range of power options.


9.6 A Computer Controlled Servo Drive System


The term "servo drive" has developed a number of different connotations over the years and is now somewhat ambiguous. Depending upon which text or promotional literature one reads, the term servo drive can refer to:

• The motor, power electronics amplification stage, feedback and computer control system
• The power electronics amplification stage that drives the motors
• The power electronics and computer control system
• The motor, shaft and ball-screw feed arrangement.

Although rather different, all the definitions tend to allude to the fact that the servo drive is part of, or all of, a closed-loop positioning system, composed of a motor, power electronics, feedback and control components. Older servo drive systems had no computer control component and simply provided an analog amplification stage whose output was proportional to the difference between a position feedback signal (originally from a potentiometer) and a reference signal (obtained by adjusting a manual potentiometer). The negative feedback could be achieved via a differential amplifier circuit. The arrangement is shown in Figure 9.55. This simple form of proportional control was used for many years.


Figure 9.55 - A Simple Proportional Feedback Servo Control System

A number of linear elements (differentiator and integrator circuits) could be added to such a system to create more sophisticated forms of control, as in Proportional, Integral, Derivative (PID) based systems.
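The discrete form of such a controller, as it might run inside a digital servo drive, is sketched below. The gains, sample period and the crude first-order stand-in for the motor and load are all arbitrary assumptions used only to show the structure of the algorithm.

#include <stdio.h>

typedef struct {
    double kp, ki, kd;      /* proportional, integral and derivative gains */
    double dt;              /* sample period (s)                           */
    double integral;
    double prev_error;
} pid_controller;

/* One iteration of a discrete PID position loop */
static double pid_update(pid_controller *c, double reference, double feedback)
{
    double error = reference - feedback;
    c->integral += error * c->dt;
    double derivative = (error - c->prev_error) / c->dt;
    c->prev_error = error;
    return c->kp * error + c->ki * c->integral + c->kd * derivative;
}

int main(void)
{
    pid_controller ctl = { 2.0, 0.5, 0.05, 0.001, 0.0, 0.0 };
    double position = 0.0, reference = 100.0;     /* encoder counts, say */

    for (int k = 0; k < 10; k++) {
        double drive = pid_update(&ctl, reference, position);
        position += 0.01 * drive;   /* crude stand-in for the motor and load */
        printf("sample %d: drive %.2f, position %.2f\n", k, drive, position);
    }
    return 0;
}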


Most of the older servo control systems were based on d.c. motors because speed could be easily controlled by variation of terminal or armature voltage. Attempts were also made at developing analog control systems for a.c. (synchronous and induction) motors but these were rather unwieldy because of the difficulty in generating variable frequency terminal voltages using the then limited circuitry. The low cost of digital circuitry means that the traditional servo system can be made far more intelligent through the introduction of dedicated microprocessor/DSP controllers or a personal computer control. The analog feedback stage is eliminated by replacing the potentiometer with a digital encoder. The amplification of the driving force signals produced by the processor can either remain as analog or be converted to a more compact, energy-efficient, digital form using a PWM amplifier. The various alternatives are shown in Figure 9.56.

(a) Personal computer or dedicated microprocessor/DSP controller -> D/A converter -> analogue amplifier -> motor, with encoder feedback to the controller
(b) Personal computer or dedicated microprocessor/DSP controller -> PWM circuit -> digital amplifier -> motor, with encoder feedback to the controller

Figure 9.56 - Intelligent Servo Drives (a) Retaining Analog Amplification (b) Complete Digital Servo Drive


In Figure 9.56, the "intelligent processing" portion of the drive can carry out functions such as differentiation and integration of feedback signals so that only position feedback is required. This leaves scope for both simple proportional control as well as PID control or any other suitable algorithm. In Figure 9.56 (a), a linear amplifier circuit can be designed or purchased to amplify the analog output waveform produced by the D/A converter. Although this has traditionally been the way of achieving the required result, the linear amplifier generates a substantial amount of heat because the transistors therein are acting as variable resistors that control the flow of energy from the supply rails to the output stage (ie: the motor). In Figure 9.56 (b), a commercially available PWM circuit provides a digital output, with a duty cycle governed by a digital output from the intelligent processing section. The low power digital output is then connected to the base/gate of a BJT/FET device that switches between supply rails. The result is a high power digital output voltage waveform, the average value of which is governed by the duty cycle of the PWM. In servo applications, the motor has to spin in both directions and so the amplifier circuit must be capable of switching the polarity of the voltage applied to the armature terminals. The total circuit is then composed of four power transistors, configured in what is referred to as a "H-Bridge" formation and driven by the PWM output and combinational logic that defines motor direction. This is shown in Figure 9.57.


Figure 9.57 - "H-Bridge" Amplifier Configuration, Driven by PWM Output Coupled With Combinational (Boolean) Logic

The inductance of the armature and/or field windings of the machine is normally sufficient to smooth the current flowing through the windings and so the machine runs efficiently even with the digital input waveform.


PWM based amplification techniques are most important because they can reduce the physical size and complexity of power drive circuits. Provided that some intelligent processing is provided, then PWM amplification can also be applied to generate the variable frequency, three-phase voltage waveforms needed to drive synchronous and induction motors. The techniques used to achieve this are considerably more complex than those required for a d.c. motor drive because they are effectively aimed at controlling the orientation of the rotating flux vector. However, they are nevertheless used to generate commercially viable, compact a.c. servo motor drives. The "intelligent processing" (controller) portion of the servo drive can be implemented in a number of different ways, depending on system requirements. For a single motor drive, the most likely scenario is that the processing will be carried out by a dedicated microprocessor or DSP based circuit that may even be built onto the same board (card) as the power amplification stage. In this case, additional inputs to the servo drive controller will come from some external system such as another computer (the host) to set the reference positions for the motor. The connection between the servo drive controller and the host can either be:

• Hard-wired
• A point-to-point serial or parallel communications link
• A serial or parallel communications network (as shown in Figure 9.58).

The combination of a host (which may also be a general-purpose computer or dedicated controller) with the servo drive controllers is the common basis for CNC machine and robot control.


Figure 9.58 - A Distributed Control System Based Upon a Number of Intelligent Servo Drive Systems


If the servo control system were intended for some special purpose, where it was important that some user interaction occurred during operation, then it may be sensible to use a general-purpose personal computer as the controlling processor. This provides a low cost (and already implemented) screen/keyboard/mouse/disk-drive interface. In this case, the combination of the PC, amplification stage, feedback stage, etc. becomes the servo drive system.

Another possible arrangement is a hybrid of the above two cases and is available as a commercial entity. In the hybrid solution, a number of intelligent (microprocessor or DSP controlled) servo cards can be plugged into the back-plane bus of a personal computer. The actual intelligent control of the motors is carried out by the servo cards and the personal computer acts as a host system that can dispatch commands to each of the servo drive cards. This too can be used to create a robotic or CNC control system. However, such a system has to be designed with care if the drive cards, containing the amplification stage (which generates considerable heat), are placed within the same casing that contains the personal computer mother-board.

It is also interesting to note that a number of variations exist between commercially available servo drive cards, regardless of whether they are for a.c. or d.c. motors and regardless of how they connect to host systems. Some servo drive cards are essentially closed velocity loop systems - in other words, the reference input to the cards represents velocity and not position. The feedback loop is closed when the motor reaches the reference velocity. As noted in Chapter 2, velocity loop control of servo motors is sometimes used in CNC or robot control systems to help stabilise the end-system in what is a dual-feedback-loop arrangement, reproduced in Figure 9.59.

Supervisory controller (executing program) -> axis controller (axes 1 to N, reference velocity) -> servo drive controller (voltage or current) -> motor -> mechanical coupling -> end effector, with velocity feedback to the servo drive controller and position feedback (via a position transducer) to the axis controller

Figure 9.59 - Schematic of a CNC or Robotic Axis Control System (Reproduced from Figure 2.10)


The end result is a multiple-axis positioning system that is used in a wide range of CNC machines, robots and other high-accuracy applications. A complete system is composed of:

•	A mechanical end-effector (the external system), coupled to one or more servo drives by a range of linkages or ball-screw feeds
•	A range of feedback transducers (encoders, resolvers, tacho-generators, etc.)
•	Feedback scaling devices for linear feedback elements
•	A/D conversion for analog feedback elements
•	A controlling computer
•	D/A converters for analog motor drives
•	Amplification circuits (analog or digital PWM based)
•	Electromagnetic actuators (d.c. or a.c. motors).
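One informal way of tying these elements together is to describe each axis of the system with a record along the following lines. This is purely an illustrative sketch - the field names are invented, and a real controller would carry far more configuration and state information.

/*
 * Illustrative per-axis record gathering the interfacing elements listed
 * above.  All field names are invented for the example.
 */
struct axis_channel {
    /* feedback path */
    long   encoder_counts;       /* position feedback (encoder/resolver) */
    int    tacho_adc_channel;    /* A/D channel for analog velocity f/b  */
    double feedback_scale;       /* counts (or volts) per millimetre     */

    /* command path */
    int    dac_channel;          /* D/A channel for an analog drive, or  */
    int    pwm_channel;          /* PWM channel for a digital drive      */

    /* control parameters */
    double kp, ki, kd;           /* control-law gains                    */
    long   reference_position;   /* set by the supervisory controller    */
};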

When one considers these elements, one can correlate them with the overall interfacing picture that has been presented throughout the course of this book.



Appendix B

Index


A
Addition circuit, 177-179 Address bus, 223, 230-233 Decoding, 236 Alpha-numerics, 146-149 Ampere's law, 344 Amplifier characteristics, 283 Analog addition, 3 Analog and Digital Computation, 2 Analog to Digital (A/D) converter / conversion, 265-266, 268-276 Apple Computer Corporation, 324 Armature Reaction (AR) in d.c. machines, 361-362 Artificial Intelligence (AI), 339 ASCII system, 146-149 ASM Charts, 219-220 Assemblers / Assembly language programming, 247 Attenuation (of signals), 304-309 Automated Guided Vehicle (AGV), 44

B
Ball-screw-feed, 24 Basic programming language, 333 BIOS, 248 Bipolar Junction Transistor (BJT), 69, 70-86 Alpha, 77 Beta, 74 Common-Emitter Circuit, 76 Digital circuits / Transistor to Transistor Logic (TTL), 78 Emitter-Feedback Circuit, 80 Operating Modes, 72, 73-74 Small-Signal (Hybrid-π) Model, 82 Transconductance, 83 Bit-slice logic, 227 Boolean Algebra, 150-165 Laws and Postulates of Boolean Algebra, 156 Truth table, 150, 151 Boolean Logic, 135, 152, Gates, 151


C
C programming language, 332, 333, 334 Cache / caching, 228-229 Central Processing Unit (CPU), 136, 222 Chip Enable / Chip Select, 203, 234 CISC architecture, 228 Clock circuits (digital), 186 Closed control loop, 6, 7, 262 Closed-Position-Loop servo drive, 27, 28 Closed-Velocity-Loop servo drive, 27, 28 CMOS, 89, 90, 92-93 CMOS Logic Gates, 175-176 CNC Machine, 28, 30-35, 45 Coils, 343 Computer interfacing - software development, 10 Computer Numerical Control (CNC), 24,28, 30-35, 41 Counter circuits, 192-196 Asynchronous counter, 193 Synchronous Hexadecimal and BCD counters, 194 Cross-talk, 309

D
Data bus, 223, 230-233 Data logging, 10 D.C. machines, 355-382 Armature and field, 356 Armature reaction (AR), 361-362 Back-emf, 360 Back-emf relationship, 363 Brushes, 357 Commutator, 357-359 Compound machines, 372-375 Electrical model, 360 Pole-pitch, 360 Rotor and Stator, 356 Separately excited machines, 364-366 Series machines, 366-368 Shunt machines, 369-371 Speed control, 376-382 Torque production, 363


Debouncing circuits, 282 Decoupling capacitors, 81 Depletion region/layer, 50, 71 Digital circuits - error margins, 4 Digital Equipment Corporation (DEC), 16, 322 Digital logic gates and circuits, 166-176 Digital Signal Processors (DSPs), 5,18, 29, 37, 137, 222 Digital to Analog (D/A) converter, 265-266, 268-270 Diodes, 50-54 Breakdown Voltage, 52 Circuit Approximations, 54 Forward/Positive Bias (Diodes), 51 Negative/Reverse Bias (Diodes), 51 Power Dissipation, 53 Zener Diodes, 55 Distributed control, 17, 18, 328 DOS (Microsoft), 323

E
EBCDIC system, 146-149 Eddy currents, 351 Electrically Erasable Programmable Read Only Memory (EEPROM), 39 Electromagnetic induction, 349 Electromagnetic Interference, 309 Electromotive force (EMF), 347 Electrons, 51 EMF equation, 353-354 Emitter-Coupled Logic (ECL), 174-175 Encoder, 26, 27, 300-301 Energy conversion in electrical machines, 342 Expert systems, 11, 339

F
Fan-out (of circuits), 98 Faraday cage, 309 Faraday's law, 348 Ferromagnetic material, 351


Field Effect Transistor, 69, 86-97 JFET, 87 MOSFET, 86-94 Small-Signal Model, 96 Source-Feedback Amplifier, 95 Transconductance, 96, 97 Filter circuits, 288-290 Five-Five-Five (555) Timer IC, 186 Flexible Manufacturing System (FMS), 44, 45, 46 Flexible Transfer Line, 44 Flip-flops, 184-191 D Flip-flop, 187 JK Flip-flop, 187-189 R-S Flip-flop, 184-186 Fortran, 333, 334 Fourier Transforms, 288 Frequency response, 115-119 Full-Adder circuit, 178-180

G
G-Code (programming language), 32 Gray-Code (count sequence), 162 Grid-array, 231

H
Half-Adder, 177 Harvard Architecture, 5, 136-137, 222 Heterarchical control, 18, 19 Hexadecimal keypads, 246 Hierarchical control, 14, 15, 18 Holes, 51 Hollerith cards, 16, 329 Hydraulic systems, 29 Hysteresis, 349 Hysteresis in energy transducers, 297


I
Icons, 329 IEEE-488, 311 IGBT, 94 IGFET, 94 Indexers, 24-25 Induction machines, 410-418 Electrical models, 413-418 Slip, 412 Speed control, 419-420 Squirrel-cage, 410, 411 Torque production, 415-417 Wound rotor, 410, 411 In-Line (dedicated) Transfer Machine, 42 Input Impedance, 98, 99 Integrated Circuits (ICs), 49 Intel Corporation, 324 Interactive graphics environment, 329 Interface Cards (intelligent), 328 Interfacing board development kits, 36-40 Interlocking (hard-wire / hardware), 34, 35 Interrupt programming, 250-254, 267, 328 Inversion (d.c. to a.c. conversion), 123 IRMX (Intel Operating System), 324 Isolation circuits, 294-296

J
JK Flip-flop, 187

K
Karnaugh Map / Mapping, 158-165


L
Light Emitting Diodes, 299 Linearity (of circuits), 115-119 Lisp programming language, 340 Local Area Networks, 43, 46, 310-312

M
Magnetic circuit, 346 Magnetic field intensity (H), 344 Magnetic flux, 345 Magnetic flux density (B), 344 Magnetomotive Force (MMF), 344 Manufacturing systems, 41-46 Memory mapping, 234-238 Micro-code, 225 Microprocessors, 222-229 Branch instructions, 243 Instruction sets, 225-226 Program execution, 239-244 RISC, 244 Microsoft (Corporation), 323 Miniature controllers, 39-40 Mnemonics, 225 MOSFETs, 88-94 Depletion Mode, 89, 91 Enhancement Mode, 89, 91 Multiplexer, 278 Multi-Tasking Multi User Systems, 257

N
N-Type semiconductor, 51 Network Bottle-necks, 23 Networks, 18, 19 Neural networks, 11, 339, 340 Norton equivalent circuit, 100-102


Number systems, 138-145 Base 8 (Octal), 141 Base 10, 140 Base 16 (Hexadecimal), 141 Binary Coded Decimal (BCD), 143, 144 Numerical Control (NC), 30 Nyquist Sampling Theorem, 276

O
Object-Oriented Programming (OOP), 332-336 One's Complement arithmetic, 181 Open Collector TTL, 79, 172 Operating System, 323-328 Operational amplifiers, 103-114 741 Amplifier, 104 a.c. and d.c. Voltage Follower Circuits, 107 Differential amplifier, 112 Differentiating amplifier, 113 Idealised Op-Amp model, 105 Integrating amplifier, 113 Inverting amplifier, 110 Non-Inverting amplifier, 111 Transconductance amplifier, 108 Transresistance amplifier, 109 Optic Fibre, 312 Opto-Isolators (Opto-Couplers), 296 Output impedance, 102

P
Paging, 255-256 Parallel Interface Adaptor (PIA), 264 Pascal, 332, 333 Turbo Pascal, 333 P-N Junction Diode, 50 P-Type semiconductor, 51 PDP-11 (Computer), 16, 322 Peripheral Interface Adaptor (PIA), 264 Permeability, 344


Pin-out (of an IC package), 167 Pipelining, 228 Pneumatic systems, 29 Point to Point links, 310-312 Polling, 250-251, 267, 328 Potentiometers, 299 Power Supplies, 57-68 Programmable Array Logic (PAL), 211-214 Programmable Logic Array (PLA), 211-214 Programmable Logic Controllers (PLCs), 20-23, 34-35, 38, 41, 316 Programmable Parallel Interface (PPI), 264-267 Prolog programming language, 340 Proportional Integral Differential (PID) Control, 29, 37, 114, 328 Protection circuits, 290-293 Pull-down menus, 329 Pulse Width Modulation (PWM Circuits), 49 PWM Digital Amplifiers, 286

Q
QNX (Operating System), 324

R
Random Access Memory (RAM), 202 DRAM, 202, 205-207 IRAM, 202, 207 SRAM, 202, 205-207 Read Only Memory (ROM), 202 EEPROM, 202, 210 EPROM, 202, 209 Masked ROM, 202, 208 NVRAM, 210 PROM, 202, 208, 209 Real-Time control, 14, 16 Real-Time Operating Systems, 326, 327


Rectification, 60-66 Chokes, 62 Ripple Voltages, 61, 62 Single-Phase, Half-wave, 62 Single-Phase, Bridge, 63 Three-Phase, Bridge, 64-66 Register 8-bit storage, 190 8-bit shift register, 191 Regulation, 66-68 Relay-ladder-logic (programming), 20, 21 Relays, 291 Reluctance (of a magnetic circuit), 346 Resolver, 26, 27, 300-301 Right-hand-thumb rule, 343 RISC architecture, 228 Robot controllers, 30-35, 41 Robot/s, 28, 30-35 Rotary transfer machine, 43 RSX-11 (Operating System), 17, 323

S
Sample and Hold circuits, 276-278 Sampling, 275 Schmitt Trigger, 280-281 Schottky TTL, 173-174 Servo drive (& controller), 24-29, 32, 425 Servo drive system, 425-430 CNC / robotic control system, 429 Distributed drive system, 428 H-Bridge amplification arrangement, 427 PWM drives, 426 Servo motor, 24-29, 32 Signed number arithmetic, 181, 183 Slew-rate, 117 Software, 320-340 User-friendliness, 329 Software Engines, 337-338 Database packages, 337 Spreadsheet packages, 337 State-Machine, 134, 218-221 Stepper Motors, 24-25


Stepper motors, 421-424 Strain Gauges, 302 Surface-Mount technology, 168 Switch-mode power supplies, 287 Switched-capacitance filters, 289 Synchronous (a.c.) machines, 383-409 Cylindrical (Round) rotor machines, 386, 387 Electrical models, 400-406 Magnetic circuit, 384-386 Rotating flux-vector, 391 Salient Pole machines, 384 Speed control, 409 Synchronous speed, 392, 394 Three-phase machines, 388-409 Torque development, 395-399

T
Tacho-generator, 27 Tautology, 3 Thermo-couples, 303 Thévenin equivalent circuit, 100-102 Thyristors, 120-129 Diacs, 127-128 SCR Crowbar circuit, 124 SCR Model, 121 SCR Phase-controller, 125 Silicon Controlled Rectifiers (SCRs), 120-127 Triacs, 127-128 Unijunction transistors (UJTs), 128-129 Torque on a current loop, 352 Transducers, 14, 32, 297-303 Encoders and resolvers, 300-301 Light Emitting Diodes, 299 Potentiometers, 299 Pressure transducers, 302 Proximity and level sensors, 303 Strain Gauges, 302 Switches, 298 Thermo-couples, 303


Transformer, 57-60 Characteristics, 284 Circuit Model, 59 Frequency Response, 59, 60, 118-119 Transistor to Transistor Logic (TTL), 5, 79, 168-173 Open-Collector TTL, 172 Performance, 171 Tristate devices, 199-200 Turns (coils and windings), 343 Two's Complement arithmetic, 182

U
UART, 237, 253 UNIX, 323

V
VAL Programming language, 33 Von Neumann Architecture, 5, 136, 137, 222

W
Windings, 343 Windows operating system environments, 325, 330, 331

X
Xerox (Corporation), 324 Palo Alto Research Centre (PARC), 324 Ventura Publisher, 324 XY-Machine, 28

Y-Z
Zener Diodes, 55-56, 292
