
Business 4 Update Optional Reading

© 2008 DeVry/Becker Educational Development Corp. All rights reserved.
Business Environment and Concepts
B4: Information Technology

OPTIONAL READING: ENHANCED INFORMATION
AND BACKGROUND STORIES


The topics in Information Technology (IT) can be quite confusing at times for many of our students who are more
involved with accounting than with IT. This optional reading section will help to provide those students with some
of the "reasoning" behind IT topics and subject matter and, hopefully, will assist in their understanding and
memory of the topics for the CPA exam.
The following sections are referenced to the textbook material that they relate to.

"Insert 1": B4-6, Item I.C.1
The functions performed on the data in the list above are collect, process, store, transform, and distribute. The
order entry system example in the box has a slightly different order in that the store and process functions are
reversed. The order of the functions is really application-specific and depends partly on exactly what is being
processed and what is being stored. The process function would almost certainly contain some store functions
because at least some of the results of the processing would have to be stored somewhere.
Note also that the categorization of systems that starts with Item I.E: ACCOUNTING INFORMATION SYSTEMS
and then continues with Item II: TRANSACTION PROCESSING SYSTEMS is somewhat arbitrary. This type of
categorization is found only in textbooks and on the CPA exam and not in the real world. In the real world,
systems are designed to perform certain specific business functions, and it does not make any real difference
whether those functions would normally fit in an Accounting Information System definition, a Transaction
Processing System definition, or something else. Many real-life systems perform functions across these
definitional boundaries. The categorization, however, is still useful for rough generalizations and for purposes of
discussion.

"Insert 2": B4-8, Item III
How Decision Support Systems Work
Decision support systems work off knowledge and rules. The rules are normally constructed as a large number of IF-THEN statements
(such as IF Income > $45,000 THEN Tax = .15 * Income). Anybody who has ever done basic programming in almost any high-level
programming language should be familiar with IF-THEN statements. The set of rules is called a rule base. At this time, most rule bases
have no more than a few thousand rules.
The AI (artificial intelligence) shell is the programming environment of an expert or decision support system. The strategy to search through
the rule base is called the inference engine. Forward chaining begins with data that is already available or with data entered by a user and
uses the rules in the rule base to extract more data until an optimal goal is reached; forward chaining is thus data driven. Backward
chaining starts with a hypothesis and works backwards by asking questions of the user to see if there are data available to support the
optimal goal; backward chaining is thus goal driven.
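To make the rule base and forward chaining ideas a little more concrete, here is a minimal sketch in Python (the income figures and tax rates are invented for illustration, not tax law, and a real AI shell would be far more elaborate than this):

    # A tiny forward-chaining "inference engine" over made-up tax rules.
    # Each rule is an IF (condition) THEN (action) pair expressed as functions.
    rules = [
        (lambda f: f.get("income", 0) > 45000, lambda f: f.update(tax_rate=0.25)),
        (lambda f: f.get("income", 0) <= 45000, lambda f: f.update(tax_rate=0.15)),
        (lambda f: "tax_rate" in f and "tax" not in f,
         lambda f: f.update(tax=f["tax_rate"] * f["income"])),
    ]

    facts = {"income": 60000}      # data already available (data driven)
    changed = True
    while changed:                 # keep firing rules until no new facts appear
        before = dict(facts)
        for condition, action in rules:
            if condition(facts):
                action(facts)
        changed = facts != before

    print(facts)   # {'income': 60000, 'tax_rate': 0.25, 'tax': 15000.0}

The loop keeps applying rules until nothing new can be concluded, which is the data-driven behavior described above.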
Fuzzy logic deals with reasoning that is approximate rather than precise. Fuzzy logic does not necessarily reason in terms of IF-THEN
statements but attempts to reason like people, in terms, for example, of "good" and "bad" and "tall" and "short" and "hot" and "cold" and
shades of grey in addition to black and white. Fuzzy logic is derived from fuzzy set theory (that certainly helps, doesn't it). Fuzzy set theory
is an extension of classical set theory; in classical set theory, membership in a set (a collection of things) is either yes or no (sort of like a
CPA exam multiple choice question with two choices); in fuzzy set theory, membership in a set is determined by a membership function
that is more complicated than just yes and no. Members may be fully in the set or not in the set at all or somewhere in between. Enough of
that; no more mathematics for now.
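To put a number on "somewhere in between," here is a tiny sketch of a membership function in Python (the height cutoffs are arbitrary, chosen only to illustrate the idea):

    def membership_tall(height_cm):
        """Degree of membership in the fuzzy set "tall" (0 = not at all, 1 = fully).

        Heights below 160 cm are not "tall" at all, heights above 190 cm are
        fully "tall", and anything in between is partially "tall" -- the shades
        of grey that classical yes/no set membership cannot express.
        """
        if height_cm <= 160:
            return 0.0
        if height_cm >= 190:
            return 1.0
        return (height_cm - 160) / (190 - 160)

    for h in (155, 170, 182, 195):
        print(h, round(membership_tall(h), 2))   # 0.0, 0.33, 0.73, 1.0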
Fuzzy logic is controversial in some circles, but it has been used in some successful applications. In the end,
however, all that is needed for the BEC exam (for this term and for the other terms in this box) is a general (a little
bit more than fuzzy but not too much) understanding of the definitions and terminology.

"Insert 3": B4-10, Item VI.E
Push Technology
Push reporting is a subset of what might be called broadcast or push technology. Historically, end users have searched out information,
either in internal application systems or on the Internet, using some kind of "pull" technology. Even search engines such as Yahoo and
Google are pull technology. Amazon sending information about new books that are available based on books that have been purchased in
the past can be considered to be a form of push technology (the ordering of books in the past can be considered to be a form of user
profile). A newspaper sending "breaking" news to people who might be interested in that news is another example of push technology
(normally there had to be something done previously by the person to indicate his/her interest in that news, some kind of user profile).

"Insert 4": B4-18, Item I.A
A Very Short History of Computer Hardware
It all really started with mainframes, big mainframes, and in the business and commercial world, "it" really started with IBM and the IBM
System/360 series of mainframes. Prior to the 1960s, computer hardware for business consisted mostly of tabulating machines that sorted
and processed and reported on punched cards. The "programs" were actually hard coded in the circuit boards of the machines, and the
programs were changed by rearranging the wires on the circuit boards (certainly a cumbersome process). Computers had been developed
previously (ENIAC, UNIVAC, and the IBM 700/7000 series), but those computers were not heavily used for business applications.
In 1959, IBM introduced the IBM 1401 (it had shipped its first electronic computer, the IBM 701, in 1953). In 1965, things really got rolling
with the introduction of the IBM System/360 (which was the first line of computers, with 6 compatible models of different sizes with a
common hardware architecture running a common operating system designed to run both business and scientific applications). Although
very primitive compared to what is available today (especially in terms of memory and input/output; a small IBM 360/30 might have all of 64
KB or kilobytes of memory), these computers were breakthroughs at the time. In 1969, IBM introduced the IBM System/370 (which
introduced virtual storage or virtual memory and a sophisticated operating system). In 1990, IBM introduced the System/390 consisting of
Enterprise System/9000 processors. In 2005, IBM introduced the z9 business class and enterprise class series of mainframes, which
according to IBM can process one billion transactions per day (that is transactions, not instructions). Mainframes, along with computer
hardware in general, have come a long way since their introduction only 40 or so years ago.
Early mainframes were installed in climate-controlled (with water cooling for the actual hardware) computer centers under the control of an
organization's IT personnel. The first mainframes processed punched cards (now obsolete except for possibly some election ballot
applications with the now infamous hanging chads) as input and produced printed reports that had to be distributed manually as output to
end users. Later, telecommunications was added; input was by terminals and output was remotely printed reports (the first airline
reservation system was introduced in 1959). At various times, there were mainframe vendors other than IBM (RCA, Univac, CDC,
Burroughs, General Electric, NCR, and some copycats such as Amdahl and Hitachi). Now the mainframe world is basically IBM. And, as
indicated previously, mainframes are still out there. They may be thought of as very large servers.
In 1965, Digital Equipment Corporation (which later became a part of Compaq which later became a part of HP) introduced the PDP series
of mid-range computers (starting with the PDP8 and later with the VAX series; the various VAX models all ran the same operating system).
DEC machines were often installed outside the computer centers and were run and controlled by engineers and scientists. Later, other
mid-range computers were developed by Sun, HP, IBM, and others that ran under the Unix operating system (each one of the
manufacturers developed its own version of Unix for marketing purposes). The first step or steps towards decentralization of computing
power had been taken. These mid-range computers are sometimes called minicomputers. They still exist today as large servers.
In 1964, Control Data Corporation (CDC) introduced the CDC 6600 supercomputer. It processed 3 million instructions per second,
considered really fast at the time. Supercomputers are optimized for calculation speed and sometimes for particular types of calculations.
They were and are used for highly calculation-intensive tasks such as weather forecasting and climate research (all those climate models
forecasting continued global warming). They are normally manufactured today as parallel processing systems connected together in
specially-designed clusters (called tightly-connected clusters or tightly-coupled clusters) or as groups of commodity PCs connected
together in LANs (called commodity clusters). The specialized applications run on supercomputers are suitable for being split up into
smaller pieces that can be processed at the same time. Supercomputer speed is measured in FLOPS (floating point operations per
second) or TFLOPS (10^12 FLOPS). In 2005, IBM introduced a supercomputer that used more than 65,000 processors and that ran at
280.6 TFLOPS. Now that is a lot faster than 3 million instructions per second.
In 1981, IBM introduced the first real personal computer (PC). At first, PCs were stand-alone processors; later they were networked in the
LANs and WANs that we know today. Since 1990 or so, processing has shifted more and more towards the Internet and Internet-based
applications. PC-level computers are sometimes called microcomputers.
In 1986, IBM introduced the first computer based on reduced instruction set technology (see the Glossary for the definition of reduced
instruction set computers or RISC). In 1990, IBM introduced the second generation of RISC computers called the RS/6000. It ran on IBM's
version of Unix called AIX. Today, RS/6000 systems can be considered to be large servers (but not mainframes). In 2000, IBM introduced
the eServer, an upgraded version of the RS/6000 server, which supposedly incorporated mainframe-class reliability and scalability, and
was designed for capacity on demand (utility computing; see the Glossary for the definition of utility or on-demand computing).
In 1969, IBM introduced the System/3, which was a commercial mid-range computer. In 1983, IBM introduced the System/36. Next
came the AS/400 in 1988. The AS/400 line of computers, now renamed the System i5 as part of IBM's eServer marketing, is still in
production. The original AS/400s came bundled with a DB2 relational database and ran Unix, even though they had their own operating system.
Programming languages were COBOL, C and C++, Java, and others.
The dates and the details of computer hardware history will not be on the BEC exam. However, the history does provide useful context for
some of the other information that may be on the exam, either as correct answers or as distractors.

"Insert 4b": B4-18, Item I.A.4.b
Multiprogramming is normally implemented in software; each program runs in a tiny time slice, one
after the other, so that the various programs look like they are all running at the same time (think of it
as multiplexing for processing). Multiprocessing and parallel processing are normally implemented in hardware;
the various programs actually do run at the same time on different processors. However, the operating system
software must be written to support the multiprocessing environment.
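As a rough analogy (not how an operating system is actually written), Python's standard threading and multiprocessing modules show the flavor of the difference: threads share one interpreter and are interleaved in small time slices, while separate processes can genuinely run at the same time on different processors.

    import threading, multiprocessing

    def busy_work(label):
        total = sum(i * i for i in range(100_000))   # invented workload
        print(label, "done")

    if __name__ == "__main__":
        # "Multiprogramming"-style: two threads are interleaved in time slices.
        threads = [threading.Thread(target=busy_work, args=(f"thread-{n}",)) for n in (1, 2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        # "Multiprocessing"-style: two separate processes can run at the same
        # time on different processor cores.
        procs = [multiprocessing.Process(target=busy_work, args=(f"process-{n}",)) for n in (1, 2)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()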

"Insert 5": B4-18, Item I.A Last ItemIncludes Enhanced Outline Material
5. PC Hardware and Hardware Terminology
PCs have a terminology all their own, and some of this terminology has shown up on the BEC exam. The
sections that follow present a very brief introduction to this terminology. Not all PC hardware and hardware
terminology is covered; there are full books on the subject. Because this information is a summary, there are
certainly exceptions to some of the general statements. See the Glossary for definitions or for any other
terminology that you might not be familiar with.
Microprocessors (or processors) can be identified by how "wide" they are (in terms of bits) and how fast they
are (in terms of MHz or GHz, which are multiples of Hertz). The "width" of a processor is measured in the
data bus, the address bus, and the internal registers. The data bus is the transfer mechanism for data into
and out of the processor. The address bus is the transfer mechanism for (memory) addresses into and out of
the processor (the width of the address bus determines the largest possible memory address for that
particular processor), and the internal registers are the pieces of hardware that store data and addresses
within the processor and actually do something with that data (the size of the internal registers determines the
size of the data on which the processor can operate). For a Pentium 4 processor (first shipped in 2000 and
the 7th generation of processors), the data bus width was 64 bits, the address bus width was 32 bits, and the
register size was 32 bits. Because the Pentium 4 was like two 32-bit processors in one, this arrangement
makes sense. For an Itanium processor (introduced in 2001 and the 8th generation of processors), the register
size was 64 bits.
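As a quick illustration of why the address bus width matters (a simplification; real processors layer paging and other tricks on top of this), an n-bit address bus can form 2^n distinct byte addresses:

    # The width of the address bus sets the largest possible memory address:
    # an n-bit address bus can form 2**n distinct byte addresses.
    for bits in (32, 64):
        max_bytes = 2 ** bits
        print(f"{bits}-bit address bus: {max_bytes:,} addressable bytes "
              f"(about {max_bytes / 2**30:,.0f} GB)")

A 32-bit address bus works out to 4,294,967,296 bytes, the familiar 4 GB limit.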
As processor speeds increased over time, memory speeds could not keep up. The solution was called cache
(pronounced "cash") or cache memory. Cache is a high-speed memory buffer that temporarily stores the
data the processor needs, allowing the processor to retrieve that data faster than if it came from memory.
The principle of cache is based on the repetitive nature of a single computation or a series of computations.
The proportion of requests that can be satisfied from data or instructions already in the cache is called the cache hit ratio. The better the cache
hit ratio, the faster the processing cycle and the faster the processing time.
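A hedged sketch of why the hit ratio matters; the access times below are invented round numbers, not measurements of any real processor:

    def average_access_time(hit_ratio, cache_ns=1.0, memory_ns=60.0):
        """Average time to fetch data, given the fraction of requests served
        from cache (cache_ns) versus main memory (memory_ns)."""
        return hit_ratio * cache_ns + (1 - hit_ratio) * memory_ns

    for ratio in (0.50, 0.90, 0.99):
        print(f"hit ratio {ratio:.0%}: {average_access_time(ratio):.1f} ns on average")
    # 50%: 30.5 ns, 90%: 6.9 ns, 99%: 1.6 ns -- the better the hit ratio,
    # the faster the effective memory access.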
Modern PCs may have two levels of cache: Level 1 cache (L1 cache) and Level 2 cache (L2 cache). Some
servers have Level 3 cache (L3 cache). All of this cache is to make the whole system run faster and to keep
the system running faster than its slowest component (which is the normal bottleneck). Dual-core processors
include two processors in the same physical package and are used to run multiple applications at the same
time (multiprocessing as discussed previously).
Early PCs often had separate floating-point coprocessors or math chips for floating-point arithmetic
calculations (calculations based on scientific notation like .112664 x 10^6). Since the introduction of the 486
chip (the one that came before the Pentium), the floating-point coprocessor has been built into the
processor itself.
The motherboard is the main circuit board of a PC. It has places to attach other circuit boards. Just about
every other internal component of a PC attaches to the motherboard. A chipset is the circuits that actually
perform the functions of the motherboard.
The chipset is divided into two pieces: the north bridge (closer to the CPU) and the south bridge (physically
below the north bridge, which is why it is called the south bridge). The north bridge handles communication with
memory and the video card. The south bridge handles communication with the PCI or PCI-Express bus, the
real time clock, the USB port, power management, the interrupt controller, and other devices. The north
bridge tends to be faster than the south bridge. There is a processor or front-side bus to transfer information
back and forth between the north bridge and the memory or cache.
Expansion slots are parts of the I/O bus (yet another bus, and that is 4 of them so far) that allow the processor
to communicate with peripheral devices. The I/O bus and the expansion slots allow the addition of whatever
peripheral devices are needed for a specific PC, such as sound cards and video cards and network interface
cards and floppy drives and printers and mice and whatever else. The 8-bit ISA bus was introduced with the
original IBM PC. A 16-bit ISA bus was introduced with the IBM PC/AT. The 32-bit MCA and EISA buses
were introduced by IBM and Compaq. All of these busses were relatively low speed. Some people may
remember LPT1 (a parallel port) and COM1 and COM2 (serial ports) that some of the peripheral devices of
early PCs were attached to. In some modern PCs, peripheral devices are built into the motherboard.
The PCI (peripheral component interconnect) bus was introduced by Intel in 1992. The PCI bus was different
from the previous busses in that it inserted yet another bus between the processor and the I/O bus, which
acted somewhat like a cache. The PCI bus is being supplanted by the PCI Express bus. PCI devices are
plug-and-play (PnP), which means that peripheral devices can be added without having to manually add new
device drivers.
Interrupts (or hardware interrupts) are used by hardware devices to indicate to the motherboard that they (the
hardware devices) are requesting service. Interrupts are wires on the motherboard and the expansion slots.
When a particular interrupt is invoked, a special software routine takes control of the system, saves the
register contents in a part of memory called a stack, and then accesses another part of memory which
indicates the memory addresses that contain the hardware device driver (the software) that services that
specific hardware interrupt. That hardware device driver is then executed.
The BIOS (Basic Input/Output System) is low-level software that controls the PC system hardware and acts
as an interface between the operating system and the hardware. The BIOS contains the hardware device
drivers. When the PC was first introduced, the BIOS was contained in nonvolatile read-only memory
(nonvolatile memory is memory that retains its data even when the power is turned off; read-only means that
it could not be updated). If new hardware was added to a system, the device drivers for that hardware were
contained on the adapter card itself or had to be manually loaded. The fact that there were new device
drivers was contained in special files called IO.SYS and CONFIG.SYS; IO.SYS and CONFIG.SYS found out
about the new hardware when the device drivers were loaded. Currently, with 32-bit device drivers and
Windows/XP, the device drivers contained in the BIOS are used to get the boot process started (and possibly
perform some diagnostic checks), and the other device drivers are then loaded from XP and take over after
that. With modern BIOS, users can select which device boots first: the hard drive, the floppy drive, a CD, or a
flash memory drive.

A Very Short History of Personal Computers
As hard as it may be to imagine, there was a time when there were no PCs. Although there were microcomputers before 1981, including
some from IBM, IBM introduced the first real PC in 1981. According to the IBM press release, the system offered many "advanced"
features and was to be sold at ComputerLand dealers and at Sears stores. It had a "high-speed" 16-bit microprocessor that operated at
speeds in the "millionths of a second." It came with an EasyWriter word processor (well before WordPerfect and Word; EasyWriter was
quickly replaced by WordStar, which is reputed still to exist) and a VisiCalc spreadsheet (well before Lotus and Excel; VisiCalc no longer
exists). It came with the DOS operating system which had been developed by a small company called Microsoft for IBM. The standard
home edition of the IBM PC sold for $2,655, with a color monitor, memory of 64 KB, and a single diskette drive (a 5¼-inch floppy drive for
which the disks really were floppy; there was no such thing as a hard drive at the time). The expanded business system of the IBM PC
came with two diskette drives, a monitor, and a printer, and cost $4,425. Throw away your slide rules. The IBM PC sounded real neat at
the time but now sounds like something that could have come over on the Mayflower.
In 1983, Compaq introduced the first PC clone (IBM copycat PC) that used the same software as the IBM PC. In 1984, Apple
Computer introduced the first Macintosh, and IBM introduced the PC/AT (the AT stood for "advanced technology"). In 1989, Intel
introduced the 486 processor, and, in 1993, Intel introduced the Pentium. In 2000, both Intel and AMD (Advanced Micro Devices, a
competitor of Intel) introduced processors running at 1 GHz, and, in 2001, Intel introduced a processor running at 2 GHz. In 2005, Intel
introduced the first multi-core processor.
Today's IBM-compatible PCs run at 3 GHz or better, about 20,000 times faster than the original PC. Modern hard drives are 200 GB and
upwards (the 5¼-inch floppy no longer exists and the 3½-inch floppy drive is getting harder and harder to find), and physical memory
can be up to 64 GB. Things are continuing to change so quickly that, by the time you read this paragraph, the paragraph will probably be
obsolete. IBM no longer even manufactures PCs, having sold that business to a Chinese company called Lenovo in 2005.
Moore's law, developed by Gordon Moore who was one of the founders of Intel, says that processing power doubles every 18 months or
24 months (Moore supposedly said 24 months, but quotes normally say 18 months). There are three variations of this statement as
follows: (1) the power of microprocessors doubles every 18 months, (2) computer power doubles every 18 months, and (3) the price of
computing falls by half every 18 months. Moore's original statement was in terms of cramming more components onto integrated circuits;
not all aspects of computer technology develop according to Moore's law. Memory speed, hard drive seek time (the time for a hard drive to
go get data), and software are all notable exceptions. In fact, Wirth's law, developed by Niklaus Wirth, states that "Software gets slower
faster than hardware gets faster." Programs tend to get bigger and more complicated over time; programmers have been known to write
inefficient programs and just assume that hardware speed would increase fast enough to make up for it. That kind of practice seldom
works all that well.
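For what the 18-month doubling compounds to, a quick bit of arithmetic:

    def moores_law_factor(years, doubling_months=18):
        """How many times more powerful after `years`, if power doubles
        every `doubling_months` months."""
        return 2 ** (years * 12 / doubling_months)

    for years in (3, 6, 12, 24):
        print(f"after {years:2d} years: about {moores_law_factor(years):,.0f}x")
    # after 3 years: 4x; 6 years: 16x; 12 years: 256x; 24 years: 65,536x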
One of the major advantages of PCs over mainframes was that the cost of a mainframe MIP was considerably more than the cost of a PC
(or Unix processor MIP). There was also a considerable difference in the cooling requirements. Then people looked at the "large" highly-
paid staffs of system programmers and the Unix processor with its "single" part-time system administrator (or PC support person). Lo,
mainframes were more expensive and besides they were less cool, since only old people (people over 40 at the time) worked on them.
And PCs had GUI as opposed to clunky old command lines. Thus, mainframes were bad and would be gone in a couple of years.
It didn't work out exactly that way. Mainframes became cheaper and smaller and, all of a sudden, they did not require water cooling any
more. And somehow, cooling for Unix processors and PCs and servers became a problem because you had to put so many of the little
things together in racks to get enough processing power and the heat of all those processors together was substantial. And system support
people (none of whom were called system programmers) proliferated just like the PCs and servers did. The advantages of mainframes
were termed reliability (or robustness), availability, and serviceability, called RAS, by IBM mainframe marketers, who were still interested in
selling mainframes. See the definitions of downsizing and the total cost of ownership (TCO) and RAS in the Glossary.
One of the advantages of PCs was that processing power was placed in the hands of the end users. This was a cultural change from the
centralized processing of mainframes. End users were used to waiting for months or years for new application systems to be developed;
now they could do some of that for themselves. Putting PCs in the hands of end users was really the beginning of the information
revolution. The flaw in the thinking, however, was that PCs would replace mainframes. A more rational approach is to match the type of
processor to the task to be done.

"Insert 6": B4-19, Item I.B
A Very Short History of Operating Systems
To expand on the previous discussion of the history of computer hardware, operating system software has a history also. For the IBM
System/360, the operating system was called DOS (not the PC operating system also called DOS that came before Windows; there were
operating systems that came before the mainframe DOS that ran off tape; talk about slow). The mainframe DOS could run three programs
at a time (one in foreground1 and one in foreground2 and one in the background, and you had to do special "overlay" programming if you
needed more memory than was physically available). The next major operating systems were part of OS/360, specifically OS/MFT
(multiprogramming with a fixed number of tasks) and then OS/MVT (multiprogramming with a variable number of tasks).
Each of these operating systems defined particular fixed regions or partitions of memory where tasks (just think of tasks as separate
programs or parts of programs) could run (and note that it was multiprogramming, not multiprocessing). The difference between MFT and
MVT was that the partitions in MVT were allocated dynamically by the operating system itself instead of statically when the operating
system was IPL'ed (booted) for MFT. Later versions of the MVS operating system utilized virtual memory (and the special overlay
programming was no longer necessary). TSO (Time Sharing Option), released in 1971, was used as an interactive time-sharing
interface. CICS (Customer Information Control System) was used for online transaction processing. The interface between the operating
system and the programs that ran on it was called JCL (Job Control Language).
In its more advanced versions, JCL is extensive and complex, and entire books have been written on it (mainframe DOS had a different
and simpler JCL than the MVS-type operating systems). It is almost a "programming" language all by itself. Note that Job Control
Language has actually shown up on the BEC exam. See the released question box immediately below this box (the released question box
includes some "constructive" comments about having a question of this type on the BEC exam; note that these comments relate to exam
technique). Now, back to the history.
In 1996, IBM introduced OS/390, which it described as a "network-ready, open, integrated large server operating system that can run both
MVS and Unix applications." A mainframe running Unix applications; now that is a revolutionary concept! If IBM had allowed Unix on its
mainframes 20-25 years earlier, much of the push for separate Unix processors might have been avoided; not all of it would have been,
since many of the people who were pushing Unix just wanted to do their own thing as far away from IBM as possible, but it would have
helped. This mistake is the same kind of mistake that IBM made in allowing Microsoft to have the license for the original PC DOS operating
system. Also note that IBM is calling its mainframe a server.
MVS originally supported 24-bit addressing. As the underlying hardware progressed, it supported 32-bit addressing and now supports 64-
bit addressing. MVS itself is no longer supported by IBM. By 2008, only the 64-bit versions of the operating system will be supported.
The operating system for the IBM PC was DOS (at the time, there was an IBM version of DOS and a Microsoft version of DOS). Later,
along came Windows in 1985 and Windows 3.1 in 1992, which were among the first PC operating systems with a Graphical User Interface
(GUI). In 1993, Microsoft introduced Windows/NT. In 1995, Microsoft introduced Windows 95, the first 32-bit consumer version of Windows. In 1998,
Microsoft introduced Windows 98. In 2000, Microsoft introduced Windows 2000. In 2001, Microsoft introduced Windows XP Home Edition
and XP Professional (which merged the Windows 98 and Windows 2000/Windows NT versions of the operating system into the same base
system). Security seems to be a continuing problem with the various versions of Windows, and Microsoft seems to distribute patches on an
continuous basis (every Tuesday) to correct these and other problems (but then so did IBM, but the only people who knew or cared about
that were the system programmers who maintained the IBM operating systems).
In early 2007, Microsoft introduced Vista, which used to be called Longhorn. Service packs to correct errors and problems were released
soon after that.

"Insert 7": B4-19, Item I.B (before I.B.1.d)
Unix and Linux
Unix was developed in 1970 by a group of programmers at AT&T's Bell Laboratories for a DEC PDP7 mid-range computer (see the History
of Computer Hardware box for DEC and the PDP line of computers). In 1972, the operating system was rewritten in the C programming
language (see Item I.B.2, Programming Languages). In 1979, the 7th edition of Unix was released. At that point, the development of Unix split
into an academic effort led by the University of California at Berkeley (BSD) and a continuing commercial effort at AT&T (System V). Many
different versions of Unix were eventually developed, often by the various hardware vendors of mid-range computers. At one time, even
Microsoft had its own version of Unix for PCs called Xenix. Currently, the license for Unix is held by the SCO Group.
Unix is an interactive operating system; it was designed to run interactive, time-sharing, online applications with considerable user interface
(like PCs) but without large amounts of data input and output. IBM mainframes, which preceded Unix, were designed to run batch jobs with
large amounts of data input and output (although online access was added later). MVS came with large general-purpose tools (or
additional tools could be purchased), and each one of them did a lot of work. Unix has many small narrowly-focused tools that perform
specific functions and that must be joined together to perform complex functions. The Unix philosophy was and is that small is good.
The Unix operating system is divided into two pieces: the kernel and the shell. The kernel handles the key tasks of memory allocation,
device input/output, process allocation, security, and user access. The shell handles the user's input and invokes other programs (joining
together small tools) to run commands. On a mainframe, MVS or a similar operating system is equivalent to the kernel, and something like
TSO (Time Sharing Option) is the shell (although it is never called that). Also, JCL is used to run the batch jobs. IBM mainframes use the
EBCDIC character set and hexadecimal (base 16 arithmetic), and Unix uses the ASCII character set. PCs use ASCII. Mainframes have
system programmers; Unix has system administrators. These people live in totally different worlds and normally don't like each other if they
are put in the same room.
Linux was developed in 1991 by Linus Torvalds specifically for PCs. Linux was not derived from Unix source code, but its interfaces and
many of its basic commands and its file structure are like Unix. Even though Linux itself is free (it is open source software licensed under
the GNU General Public License version 2 provided by the Free Software Foundation), it is packaged and distributed by a number of
independent software companies such as Red Hat for a nominal fee; full versions can also be downloaded. Linux was originally developed
for the then current PC (the 386), but it has been ported to almost everything, including to mainframes by IBM. The majority of Linux is
written in the C programming language.
At the time when Linux was introduced, there was little application software that ran on it. The situation has certainly changed. Now, there
are full Linux word processors, spreadsheets, and other office automation software.
Major software vendors like IBM (IBM is the world's largest software company after Microsoft) now fully support Linux. IBM product
announcements these days are full of Linux announcements of various kinds. In 2002, IBM and Red Hat announced "a multi-year alliance,
including services and expanded support for software and servers that will enable the two companies to provide broad Linux support to
enterprise customers around the world." In 2005, IBM announced that it planned to invest $100 million over the next three years to expand
Linux support across its software portfolio.
Linux was originally used as a server operating system. It is now often used at the desktop also. It is also often used in embedded systems
such as in mobile phones and handheld devices and TiVo and in some routers and firewalls where the operating system must be small
and stable.
It does not make any real difference whether Unix is spelled Unix or UNIX (we have used both in the text). Linux is spelled Linux (not all
caps). The exact pronunciation of Linux is subject to some dispute.

"Insert 8": B4-23, Item I.B.1.b(3)(e)
Data Warehouses
A good example of a data warehouse is a large grocery chain that keeps data on the purchases of shoppers and has identified certain
shoppers as "preferred shoppers" or "frequent buyers." Obviously, for any large grocery chain, the amount of data would be huge. But
analysis of this data (the data would more than likely be segregated in a separate data warehouse for easy access and to prevent
interference with the normal production systems) might identify buying patterns of the preferred shoppers so that possibly other less-
frequent buyers could be converted into more frequent buyers. Grocery (and other retailer) discount cards are used by the customer to
obtain discounts, but once the purchase data is captured, it can be used by the retailer for other purposes. The data can also be analyzed
for what products are selling where (and are not selling where) to help plan promotional campaigns and to determine product re-order
points and volumes.
Another good example of a data warehouse is a large telephone company that keeps data on local and long-distance telephone calls. The
company could analyze the data for calling volumes and patterns to help plan for network expansions and upgrades and to help create and
sell new products or service plans or possibly even to analyze responses to television or other ads. The volume of data in a data
warehouse of this type could range into terabytes of data.
A data warehouse seldom contains data from just one application system or application area. Data warehouse data is normally collected
from diverse applications (often running on different types of computers) and may be from both internal and external sources (integrated
company-wide data). As indicated, the amounts of data are likely to be enormous (data tends not to be deleted, so the data warehouse just
grows and grows). Note that in many cases, the data in a data warehouse duplicates some of the data in a normal production system. The
purpose of a data warehouse is data analysis. A simple operational database is seldom if ever a data warehouse. Obtaining simple reports
from data is not data analysis.
With a data warehouse, data is passed from the originating system either real time or in batch, depending on whether the data warehouse
is real time or offline. Data transformation programs receive the data from the originating system, clean the data (data cleaning, sometimes
called data cleansing or data scrubbing, is often a real problem because the data often contains errors and inconsistencies), standardize
it (for example, company numbers and codes of various types must all end up the same in the data warehouse), and pass it on to the
data warehouse. The data warehouse itself is often called a data store. Data warehouses are often described by their metadata, which is
data about data. The process of extracting, transforming, and loading the data is sometimes called ETL. IT people can make acronyms out
of anything.
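A very small sketch of the extract-transform-load idea in Python; the field names, company codes, and cleaning rules are invented for illustration, and real ETL tooling is far more involved:

    # Records "extracted" from two originating systems that code the same
    # company differently and contain some dirty data.
    system_a = [{"company": "001", "amount": "125.50"},
                {"company": "001", "amount": "bad-value"}]
    system_b = [{"co_code": "ACME", "amt": 200.00}]

    COMPANY_CODES = {"001": "ACME"}      # standardization table (invented)

    def transform(record):
        """Clean and standardize one record; return None if it cannot be fixed."""
        company = record.get("company") or record.get("co_code")
        company = COMPANY_CODES.get(company, company)    # standardize the code
        try:
            amount = float(record.get("amount", record.get("amt")))
        except (TypeError, ValueError):
            return None                                  # data cleansing: reject
        return {"company": company, "amount": amount}

    warehouse = []                                       # the "data store"
    for rec in system_a + system_b:
        cleaned = transform(rec)
        if cleaned is not None:
            warehouse.append(cleaned)                    # load

    print(warehouse)
    # [{'company': 'ACME', 'amount': 125.5}, {'company': 'ACME', 'amount': 200.0}]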
Data cleansing can be a real problem since the data from the originating systems was often edited and validated using different edit and
validation rules. Some sets of edit and validation rules may be more stringent than others, and thus different systems will almost certainly
have data of varying quality.
The process of analyzing large amounts of corporate data stored on data warehouses is often called business intelligence. Business
intelligence tools have been developed to mine data, analyze the data, and report on it. Data mining is sometimes thought of as a type of
artificial intelligence.
At the time this Addendum was written, data mining had come under considerable criticism due to its use in the war on terror to analyze
telephone and financial data.


"Insert 9": B4-26, Item I.B.2.c
A Short History of Programming Languages (and More on Objects)
It really started with FORTRAN and COBOL, or at least that is the way it seems. COBOL (common business-oriented language) was
developed in 1959 by Admiral Grace Hopper in the Department of Defense. FORTRAN (formula translator) was an engineering-oriented
language. In the early 1960s, before COBOL compilers became widely available in academia, business schools often introduced students
to computers and programming by teaching FORTRAN in introductory business statistics courses. COBOL was not very good at
computing, and FORTRAN was not very good at moving data around and reading from and writing to files. However, it is estimated that
there are still over 200 billion lines of COBOL program code in existence.
There were many other programming languages, some of which reside only in departments of computer science in universities. A very
primitive version of Basic (beginner's all-purpose symbolic instruction code, originally developed in 1963) was also taught in the early days.
None of these programming languages even remotely resemble the versions of the same languages that are available today. Each of
these languages has been expanded in terms of the functions that it can perform and the ease with which it can perform those functions.
As indicated in the text, these days, many common programming languages are object-oriented programming languages. An object-
oriented program is a collection of individual objects that act on each other, as opposed to the traditional view of a program as a sequential
list of instructions. An object is a self-contained little machine that receives messages from other objects, processes data received or
already stored in the object, and sends messages to other objects. If the objects are written correctly, they can be used by various
programs performing the same functions and can be readily reused, at least in theory. They can be modified easily because they perform
only one function or an extremely limited set of functions. Object-oriented programming is claimed by some of its proponents to be
equivalent to the invention of the wheel, especially for large programming projects. Unfortunately, it is sometimes difficult to distinguish
objects from subroutines, which have been around since early FORTRAN, other than that the objects are written in more "modern"
languages. Of course, object-oriented programming also has its share of critics.
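A minimal sketch of the object idea in Python (the account example is invented; it simply shows an object holding its own data and responding to "messages," which in most object-oriented languages are just method calls):

    class Account:
        """A self-contained little "machine": it stores its own data (the balance)
        and responds to messages (deposit, withdraw) sent by other objects."""
        def __init__(self, owner, balance=0.0):
            self.owner = owner
            self.balance = balance

        def deposit(self, amount):
            self.balance += amount

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    # Any program that needs account behavior can reuse the same object definition.
    a = Account("Hector Corp", 1000.0)
    a.deposit(250.0)
    a.withdraw(100.0)
    print(a.owner, a.balance)   # Hector Corp 1150.0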
There is considerable additional information on programming languages such as C and C++, Java, JavaScript, and Visual Basic in the
Glossary.

"Insert 10": B4-27, Item I.B (after the Hector Corp Example)
Introduction to Telecommunications Part 1
Telecommunications is the transmission of signals (electromagnetic waves or light waves) over a distance for communication purposes.
Telecommunications can be radio, television, or telephone. For telecommunications to occur, there has to be some device that transmits a
signal, some device that receives the signal transmitted, and some kind of transmission medium over which the signal is transmitted.
The worldwide telecommunications network is officially called the public switched telephone network (PSTN). Local area networks (LANs)
normally use private wiring within a local site and thus normally do not use the telephone network; wide area networks (WANs) normally
use the public telephone network in some manner. Equipment owned by the customer is called customer premises equipment. Normally,
that equipment includes some kind of PBX, which is the device that switches the various internal telephones and also hooks into the
telephone company's local loop through the telephone company's termination equipment located on the customer's premises. The local
loop connects to the telephone company's end-office switch (the point of presence or POP) which hooks into a trunk line for long-distance
transport.
A signal is a disturbance on the transmission medium. The disturbance travels down the transmission medium. The received signal may
differ from the sent signal if the signal has attenuated (weakened) or has been distorted (its wave shape has changed). These changes are
called propagation effects. If the propagation effects are too large, the receiver may not be able to interpret the signal correctly.
Noise is the random electromagnetic energy in a cable. There is always some noise. If the signal-to-noise ratio is high (the signal is much
stronger than the underlying noise), the signal can be interpreted correctly and the noise can be ignored. If the signal-to-noise ratio is low,
the noise may overcome the real signal. The mean of the noise is called the noise floor (surprisingly, the noise floor is the average noise
and not the minimum noise), but, since noise is a random process, there may also be noise spikes. Signals attenuate over distance; as the
signal attenuates, it gets closer to the noise floor. Signal attenuation can be combated by the use of repeaters, which are devices that
regenerate or re-create the signals. Note that regeneration is not just amplification; amplification would amplify both the signal and the
noise and thus would not accomplish anything.
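The signal-to-noise ratio can be put in numbers; a small sketch (the power levels are made up, and the decibel scale is the standard 10 times the base-10 logarithm of the ratio):

    import math

    def snr_db(signal_power_watts, noise_power_watts):
        """Signal-to-noise ratio expressed in decibels."""
        return 10 * math.log10(signal_power_watts / noise_power_watts)

    print(round(snr_db(1.0, 0.001), 1))    # 30.0 dB: signal well above the noise floor
    print(round(snr_db(0.002, 0.001), 1))  # 3.0 dB: signal barely above the noise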
Telephone lines normally use circuit switching, in which an entire circuit (the entire circuit between the sender and the receiver and part of
every switch and trunk line in between) is reserved for the entire duration of a call. Once there is a dial tone and the call is connected, the
circuit is reserved until the call is completed. Circuit switching works well for voice traffic where one person or the other is normally talking
most all of the time, but it does not work well for data traffic which tends to occur in bursts, with large amounts of data transmitted during
some periods of time and no data transmitted during other periods of time. There are other types of switching, some of which will be
discussed later in this Addendum.
If a guaranteed or higher-speed circuit is needed, a customer may lease a line (leased line, private line, or dedicated line). Each end of the
line is permanently connected, the line is always on, and there is no telephone number. Leased lines can be used for voice or data or a
combination of both. Leased lines are available in increments of 56 kbps or 64 kbps (1 voice circuit), up to T1's (a T1 is the equivalent of 24
voice circuits; part of a T1 is called a fractional T1; T1's are normally implemented using copper wire) and T3's (there are also T2's, but
T2's are not normally offered commercially in the U.S.). If a line is leased, the customer must provide its own network termination
equipment, called a channel service unit/data service unit (CSU/DSU), and manage its own network. Note that a private line is not the
same as a value added network; a private line normally provides voice or data transmission only. See the Glossary for a description of
kbps, T1's, and T3's.
Traditional telephone service is now often called plain old telephone service (POTS).

Introduction to Telecommunications Part 2
For data transmission, the network is called the public switched data network (PSDN). The PSDN is normally represented in network
diagrams as a cloud (it really is, not a joke). Each customer is connected to a POP by an access line and once the data gets to the cloud,
the customer does not have to worry about it any more. Most PSDN service providers offer service level agreements (SLAs) which are
quality of service guarantees within the cloud (service means throughput, availability, latency, error rate, and other matters; see the
Glossary for the definitions of these terms). Of course, these service guarantees are not free.
The most popular PSDN service was frame relay. Frame relay operates at 56 kbps to about 40 Mbps. Frame relay was quite popular for a
while to send voice and data over the same circuits. Frame relay divided messages into variable-length pieces called frames and left error-
correction up to the end points. A permanent "virtual" circuit was provided (a leased line is a permanent actual circuit), but the actual
routing of the message was controlled by the cloud. Nowadays, with cable, DSL, and virtual private networks (VPN), IP (discussed later)
has begun to replace frame relay.
For PSDN service faster than frame relay, there is asynchronous transfer mode (ATM). ATM services reach gigabit per second speeds.
ATM was originally created to be the transport mechanism for the worldwide PSDN. ATM uses short, fixed-length frames called cells (the
shortness of the cells minimizes latency). There are classes of service and service guarantees for voice traffic with ATM, but there are no
such guarantees for data traffic.
Microwave transmission is point-to-point radio transmission using large dish antennas. Microwave repeaters are installed every few (40-60)
miles since microwave transmission has to be line of sight (the transmission between microwave repeaters is called a hop). Before optical
fiber became widespread, microwave transmission was heavily used for long-distance trunk lines. Nowadays, it is still often used where
fiber is not available, such as mobile TV transmitters or for facilities such as pipelines in very remote locations (one pipeline in the
mountains of Colorado has 300 hops of microwave). The microwave frequency spectrum is normally from approximately 1 GHz to 40 GHz
and then up to 170 GHz for some applications (see the box on modems for more information on frequency). It is divided into various bands
normally designated by letters, such as the L band, the S band, the Ku band, and many others. Microwave transmission uses frequency
division multiplexing. Microwaves pass easily through the Earth's atmosphere and are sometimes used in Bluetooth and other wireless
protocols (see the box on wireless transmission for more information on Bluetooth). Microwaves are used in ovens, but, of course, that
does not count here other than to note that the first microwave oven was produced in 1947 and was almost 6 feet tall and weighed 750
pounds. What would we do without microwave popcorn?
Satellite transmission uses a microwave repeater in the sky. Communication from the ground up is called the uplink; amazingly enough,
communication from the satellite down to the ground is called the downlink. Uplink transmissions must be focused since they must hit the
satellite's dish. Any ground station in the footprint of the satellite can receive a downlink transmission; satellites are thus well suited to
wide-area transmissions. Most communications satellites use geosynchronous orbits, which means that the orbit time of the satellite
equals that of the earth so that the satellite appears to be fixed in the sky. Satellite transmission does suffer from latency since the distance
to and from the satellite is not short. Many people are familiar with VSAT (very small aperture terminal) satellites for Dish TV. The author
has tried that from several different homes at various locations in the country, but there always seems to be a tree in the way.
DSL (digital subscriber line) is a technology that provides digital data transmission over the wires of a local telephone network. There are
several different kinds of DSL lines; the one for residential service is asymmetric digital subscriber line (ADSL). The ADSL customer uses a
telephone wire splitter that separates data signals and voice signals or, alternatively, the splitting is done by the DSL modem; the DSL
modem also converts the digital data signals into phone line analog signals. DSL is a major competitor to cable modems for residential
broadband service. DSL is provided by telephone companies, and cable modem service is provided by the cable television companies.
Neither DSL nor cable modem service is necessarily available at all residential locations.
VoIP (voice over Internet protocol) is the routing of voice conversations over the Internet using some kind of high-speed Internet
connection such as a cable modem or DSL. VoIP does not provide a mechanism to ensure that data packets are delivered in sequential
order, so VoIP implementations often suffer from latency and jitter and echo (common on the author's VoIP service). Also, VoIP service is
extremely sensitive to cable or DSL outages (the cable actually carries the signals from the residence) and loss of electrical service at the
residence (none of it will work without electricity). POTS is not affected by loss of electrical power at the residence since the electrical
power is actually provided by the telephone company central office.
Multiplexing is the combination of multiple signals on a single communications circuit. As indicated, microwave transmission uses
frequency division multiplexing (FDM). In frequency division multiplexing, two or more signals at their individual frequencies are combined
and transmitted at the same time over a different, normally higher, frequency. Think of it as a 20 kHz channel divided into 5 separate
"virtual" channels that are each 4 kHz wide. The first signal is transmitted over the lowest virtual channel, and 4 other signals are
transmitted over the other virtual channels. The 5 individual signals are combined (think of it as the signals being added) and one real
signal is transmitted. Of course, at the other end, the 5 individual signals must be divided. Multiplexing has basically multiplied the capacity
of the single physical channel by a factor of 5, and there can be multiple levels of multiplexing built on top of each other. In optical
communications, FDM is called wavelength division multiplexing (WDM).
An alternative is time division multiplexing (TDM), which is often used for telephone circuits. Time division multiplexing is splitting one
communication channel for multiple messages based on giving each message a different time slot (as opposed to giving each message a
part of the frequency spectrum). Interleaving the messages can be done on a bit or byte basis (blocks of a certain number of bits or a
certain number of bytes). Time division multiplexing can be used for digital messages (data) or analog messages (telephone calls) that are
made to look and act like digital messages. In TDM, time slots are pre-allocated to the individual channels so that, theoretically, if there are
no signals on one of the channels being multiplexed, that time slot is wasted. Time division multiplexing is what allows a single T1 leased
line to carry 24 voice circuits.
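The T1 numbers line up arithmetically; a sketch of the calculation (the 8,000 samples per second and 8 bits per sample come from the sampling discussion later in this Addendum, and standard T1 framing adds 8,000 framing bits per second):

    voice_channels = 24        # one T1 carries 24 voice circuits
    bits_per_sample = 8        # each voice sample is stored in 8 bits
    samples_per_second = 8000  # voice is sampled 8,000 times per second

    per_channel_bps = bits_per_sample * samples_per_second   # 64,000 bps = 64 kbps
    payload_bps = voice_channels * per_channel_bps           # 1,536,000 bps
    framing_bps = 8000                                        # one framing bit per frame, 8,000 frames/sec
    print(payload_bps + framing_bps)   # 1,544,000 bps -- the familiar 1.544 Mbps T1 rate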
Most modern networks utilize packet switching. Packet switching is a method of slicing digital messages into pieces called packets. The
individual packets are then transmitted over various transmission channels and are then rearranged or recombined at the destination node.
Packet switching optimizes the use of the bandwidth available in the network. Note that, with packet switching, time slots are not pre-
allocated to individual channels. If there are no signals on one of the channels, that time slot is used for another channel and thus is not
wasted.
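A toy sketch of the slicing-and-reassembling idea (the packet size and sequence-number scheme are invented; real packet formats such as IP are far more detailed):

    def to_packets(message, size=8):
        """Slice a message into numbered packets so it can be reassembled
        even if the pieces arrive out of order."""
        return [(seq, message[i:i + size])
                for seq, i in enumerate(range(0, len(message), size))]

    def reassemble(packets):
        return b"".join(chunk for _, chunk in sorted(packets))

    packets = to_packets(b"packet switching slices messages into pieces")
    packets.reverse()                      # simulate out-of-order arrival
    print(reassemble(packets).decode())    # original message restored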

"Insert 11": B4-28, Item I.C.2.e
More Detail on Transmission Media
Normal home telephone cables are unshielded twisted pair (UTP), and the connector at each end of the cable is called an RJ-11 jack,
which terminates two of the four wires (the pairs of wires that are twisted) in the cable. One telephone line needs just two of the four wires;
the cable can be called one-pair voice-grade UTP cable.
Normal business telephone cables are also UTP but the connector is called an RJ-45 jack, which terminates all four pairs of wires. The
cable can be called four-pair data-grade UTP cable.
The cables are unshielded in that there is insulation around the individual wires but not around the cable as a whole (although there is a
plastic jacket around the cable as a whole). The twisting of the wires is to reduce the interference between the sets of wires (called
crosstalk).
The RJ-45 cables and connectors are sometimes called Ethernet cables (coaxial cable used to be used for Ethernet, but coax for Ethernet
has been mostly replaced by UTP). The wire is relatively easy to attach connectors to and thus is relatively easy to install and relatively
rugged. There are, however, distance limitations for such cables of 100 meters, which is one reason why switches are used to regenerate
signals.
Coaxial cable is cable consisting of a round conducting wire (instead of twisted pairs of wires) surrounded by insulation. It tends to be less
affected by interference from external electromagnetic fields since it is insulated. It is used today mostly for cable television cables. At one
time, coaxial cable was used for Ethernet, but these days, UTP is normally used. Gigabit Ethernet is the term that describes various
technologies for transmitting Ethernet packets at a speed of a gigabit per second over UTP cables.
Optical fiber (fiber optic cable) is a thin transparent fiber usually made of glass that transmits light. The extremely thin light transmitting core
is surrounded by a thicker glass cylinder called cladding. The cladding keeps the light in the central core and lessens the attenuation of the
signal. An optical fiber normally has two strands of fiber, one for transmission in each direction, called full-duplex communication. Optical
fiber is flexible and can be bundled as cables. The maximum transmission distance is not limited by attenuation but by dispersion, which is
the spreading of the optical pulses as they travel along the fiber.

"Insert 12": B4-27, Item I.C.2.g
Modems, Codecs, Analog Signals, Digital Signals, and Frequencies
Signals can be analog or digital. Analog signals (normal telephone signals) are continuous wave-type signals that vary in value over time;
digital signals are discrete values, such as zeros and ones (although it might be more complex than that). As is discussed in the text,
modems convert signals from digital to analog (so that computerized information can be transmitted over analog telephone lines);
codecs convert signals from analog to digital. Most people are familiar with modems from their early access to the Internet and
email. Modems provide only low-speed connections. Of course, where the telephone network uses digital transmission on its trunk lines but the local loop is still analog, digital signals may have to be converted to analog signals (for transmission over the local loop) and then back to digital (for transmission over a trunk line).
When a person talks into a phone, his/her voice causes the diaphragm inside the mouthpiece to vibrate. This vibration (or oscillation)
causes an electrical disturbance which propagates down the wire. The signal can be represented by a sine wave (from high-school
trigonometry, which you probably thought you would never use again). The rate of oscillation is called the frequency and is measured in
terms of cycles per second or Hertz, abbreviated Hz. One thousand cycles per second is kilohertz (kHz), and one million cycles per second
is megahertz (MHz). With sound, a higher frequency has a higher pitch.
The human ear can detect frequencies of about 30 Hz up to 20,000 Hz. However, telephone circuits do not transmit the entire range of the
human voice. Most of the energy of human speech is concentrated in the range of 300 Hz to 3,400 Hz, and telephone circuits are designed
to transmit approximately this range (3,100 Hz). The rest of the signal is merely chopped off, with no real loss (a process called bandpass
filtering). A bandwidth of 4,000 Hz (4 kHz) is used (bandwidth is the range of frequencies that is transmitted), which leaves some room for
space between the multiple signals transmitted (channels) over the same medium.
Digital signals are represented as binary numbers (zeros and ones). When digital signals travel over telephone wires, it is common to use two different voltages to represent the zeros and ones (often a high voltage for zeros and a low voltage for ones). When digital signals travel over optical fibers, it is common to use "off" for zeros and "on" for ones. Analog signals are converted into digital signals by a process of sampling. The signal is sampled roughly 8,000 times per second (twice the standard bandwidth of 4,000 Hz; too much sampling would waste resources and too little sampling would not produce a good quality signal), and the intensity of the signal in each sampling interval is recorded as a number between 0 and 255. It takes 8 bits to store that number, and 8 bits times 8,000 samples per second is 64,000 bits per second. That is the reason why most voice/telephone channels are built around 64 kbps channels. Sometimes 8 kbps is reserved for control information, so that 56 kbps is available for the data instead. Remember that leased lines are available in increments of 56 kbps or 64 kbps.
Leased lines carry digital signals and do not need to be converted from analog to digital.
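The arithmetic behind the 64 kbps figure is worth seeing in one place. The following short Python sketch simply restates the calculation above (it is an illustration, not part of any telephone system):

    bandwidth_hz = 4_000                          # standard voice channel bandwidth
    samples_per_second = 2 * bandwidth_hz         # sample at twice the bandwidth: 8,000 per second
    bits_per_sample = 8                           # each sample is a number between 0 and 255
    channel_rate = samples_per_second * bits_per_sample
    print(channel_rate)                           # 64000 bits per second (64 kbps)
    print(channel_rate - 8_000)                   # 56000 bps left if 8 kbps carries control information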

"Insert 13": B4-28, Item I.C.2.h
Communication Protocols/Network Standards and TCP/IP Part 1
As indicated in the text, standards are the rules of operations that allow two hardware or software processes to work together.
Telecommunication standards govern the exchange of messages and the syntax and format of the messages. Standards are sometimes
called protocols, and many of their names include the word protocol. An architecture is the overall structure of a set of protocols or
standards.
As an example, the hypertext transfer protocol (HTTP) is the method used to transfer information on the World Wide Web. It was originally
designed as a way to publish and receive web pages. If you type www.beckerreview.com in the File\Open section of your browser,
http://www.beckerreview.com is generated. HTTP is a request/response protocol. The originating client (the browser in your PC) initiates a
request by establishing a connection to a particular port (normally port 80) on a remote host. An HTTP server listening on that port waits for
a request message from a client. Upon receiving the request message, the HTTP server responds with a response message, which may
be the requested information or may be an error message. Each HTTP message has a header, a data field, and a trailer (although some of
them may be missing in a particular message type). The syntax of each message is very rigid. FTP (file transfer protocol) and SMTP
(simple mail transfer protocol) are other protocols. From an architecture standpoint, HTTP, FTP, DNS, and SMTP are all application-layer
protocols; the application layer is normally considered the highest layer of an architecture. More on layers later, but the names that are
being used here are from the OSI model discussed in the next box.
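As a rough sketch of that request/response pattern (not a description of how any particular browser is implemented), the following Python fragment opens a TCP connection to port 80 of a host, sends a rigidly formatted request, and prints the status line of the response; the host name is simply an example, and a live Internet connection is assumed:

    import socket

    host = "www.example.com"                     # illustrative host only
    with socket.create_connection((host, 80), timeout=10) as s:
        request = "GET / HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n"
        s.sendall(request.encode("ascii"))       # the client initiates the request
        response = b""
        while chunk := s.recv(4096):             # read the server's response message
            response += chunk
    print(response.split(b"\r\n")[0])            # status line, e.g. b'HTTP/1.1 200 OK'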
HTTP is called an unreliable protocol since it does not perform any error correction. Any messages that are discovered to be in error are
simply discarded. Unreliable protocols are error detecting but not error correcting. UDP (user datagram protocol), often used in VoIP, is
another unreliable protocol. From an architecture standpoint, UDP and TCP are both transport-layer protocols; the transport level is a lower
layer in an architecture than the application layer.
Unlike UDP, transmission control protocol (TCP) is a reliable protocol. Reliable protocols correct lost or damaged messages by resending
them. They are error detecting and error correcting. When a receiving TCP process receives a correct TCP message, it sends an
acknowledgement. If the sending TCP process does not receive an acknowledgement after a certain period of time (the message may be
either damaged or it may be lost), the sending process resends the message. Reliability, of course, is expensive in terms of network
resources. TCP is used to send messages from one network to another, using routers to determine the correct route from the source to the
destination. The work of implementing the error detecting and error correcting functions of TCP is done only on the sending and destination
hosts, and not on the routers that route the messages between the sending and destination hosts. Thus the extra work of reliability is
manageable for the network as a whole.
Protocols can also be categorized by whether they are connection-oriented or connectionless. Connection-oriented protocols establish a
connection before transmitting. Connectionless protocols do not establish a connection before transmitting. TCP is a connection-oriented
protocol. Connection-oriented protocols, of course, are expensive in terms of network resources.
IP, the protocol that is used to send messages over the Internet, is a connectionless protocol and an unreliable protocol. It does not need
to be reliable because that work is done by TCP when TCP and IP are used together. From an architecture standpoint, TCP is a transport
layer protocol and IP is a network layer protocol.

Communication Protocols/Network Standards and TCP/IP Part 2
Network standards are not created in a vacuum. There have been many different architectures, from Open Systems Interconnect (OSI) to
System Network Architecture (SNA for IBM mainframes) to DECNet (for the DEC PDP and VAX minicomputers) and Netware (for Novell).
Prior to OSI, the architectures were developed by vendors and thus were proprietary; OSI was an attempt to develop a non-proprietary
architecture. OSI, and all of the others, were constructed in layers, with each layer performing a particular function or set of functions.
The 7 OSI layers are, from the bottom up, the physical layer, the data link layer, the network layer, the transport layer, the session layer,
the presentation layer, and the application layer. Products could be developed for one or more layers and were supposed to follow the
rules for whatever layers they encompassed. The OSI architecture was joined by the Department of Defense model, which can be
considered to have 5 layers, from the bottom up, the physical layer, the network access layer, the internet layer, the transport layer, and
the application layer, and then the Internet Protocol Suite, which has 5 layers, from the bottom up, the physical layer, the data link layer,
the network layer, the transport layer, and the application layer. The TCP/IP Suite uses the OSI physical layer and data link layer
standards.
As indicated previously, IP is a network layer protocol, and TCP is a transport layer protocol. The OSI session and presentation layers are seldom used.
The bottom two layers, the physical and data link layers, govern transmission across a single network (no routers). Ethernet is a data link
layer protocol. So are WiFi and frame relay and ATM. The physical layer standards are 10BaseT, 100BaseT, 1000BaseT, 802.11, and
many others (see the Glossary for definitions). A carrier pigeon, if we still used them, would also be a physical layer protocol; the carrier
pigeon gets data from one place to another. The combination of 10BaseT (10 Mbps) and 100BaseT (100 Mbps) is called 10/100 Ethernet.
NICs and switches can automatically sense (autosense) the speed of the transmission and adjust to it.
Each layer provides services to the layer above it and receives services from the layer below it (this kind of thing is called a protocol stack).
No level worries about specifically how the other layers provide their services. The application layer, presentation layer, session layer, and
transport layer are called end-to-end layers and are needed in only the sending and receiving hosts. The network layer, data link layer, and
physical layers are called chained layers and are needed in the sending and receiving hosts and all intermediate hosts.
As each layer receives data from a higher layer, that data is placed in an "envelope," not a paper envelope but an electronic envelope.
Normally, this envelope contains some kind of header information and some kind of trailer information. The envelope of a specific layer is
intended to be read only by the corresponding layer in the intermediate or end host, as appropriate. Of course, if the message is already
too long, it might have to be broken up and additional information will have to be added to put the message back together again.
Ethernet is the data link layer protocol that constructs messages in a very specific format. An Ethernet frame or packet (frame and packet are used somewhat interchangeably in this Addendum; see the Glossary for the distinctions), in addition to the actual data, carries a considerable amount of additional information: the sending MAC address, the destination MAC address, the length of the message, possibly some pad if the actual data is not long enough, and a frame check sequence field used to detect transmission errors.
The IEEE working group that specifies Ethernet standards is the 802.3 working group; its counterparts are the 802.11 working group for wireless LANs and the 802.16 working group for WiMAX.
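As a toy sketch only (the real 802.3 frame layout has more to it, and the frame check sequence is computed by the hardware), the way a frame wraps the data with addressing, a length, and padding can be shown in a few lines of Python:

    import struct

    def build_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
        # Destination MAC, source MAC, a 2-byte length, and the payload padded to the
        # 46-byte minimum; a real frame also carries a frame check sequence.
        if len(payload) < 46:
            payload = payload + b"\x00" * (46 - len(payload))
        header = dst_mac + src_mac + struct.pack("!H", len(payload))
        return header + payload

    frame = build_frame(bytes.fromhex("37F5EE1F2E42"),   # made-up destination MAC
                        bytes.fromhex("0A1B2C3D4E5F"),   # made-up source MAC
                        b"hello")
    print(len(frame))                                    # 14-byte header + 46-byte padded payload = 60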

"Insert 14": B4-28, Item I.C.2.i
Gateways, Routers, Switches, and Hubs
Frames are forwarded within a single network by devices called switches. Switches are responsible merely for directing the frame to the
correct recipient NIC within the network, using a relatively simple process of address lookup. Each cable that is connected to the switch
has a port number. When a switch receives a frame or packet, it looks up the address of that packet and sends it out the correct port. It
also regenerates the signal. Switches use MAC addresses, not IP addresses. Both MAC addresses and IP addresses are discussed later.
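To make the "relatively simple process of address lookup" concrete, here is a hypothetical sketch of the table a switch maintains; the MAC addresses and port numbers are invented for illustration:

    mac_table = {}                                # MAC address -> port number (learned as frames arrive)

    def handle_frame(src_mac, dst_mac, in_port):
        mac_table[src_mac] = in_port              # learn which port the sender is on
        out_port = mac_table.get(dst_mac)
        if out_port is None:                      # unknown destination: flood like a hub would
            return "flood all ports except " + str(in_port)
        return "forward out port " + str(out_port)

    print(handle_frame("37-F5-EE-1F-2E-42", "0A-1B-2C-3D-4E-5F", 3))   # flood (destination not learned yet)
    print(handle_frame("0A-1B-2C-3D-4E-5F", "37-F5-EE-1F-2E-42", 7))   # forward out port 3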
Routers forward messages outside a single network. There may be several different routers involved to send a frame from where it was
sent (generated) to where it is intended to be received. The routers basically send the frame to the next router in line. Routing is much
more complicated than switching, and routers are thus more expensive than switches. Routers are special cases of gateways; some
routers handle multiple protocols and others do not. As networking moves more and more towards Ethernet, there is less and less need for
multiple protocols. Routers use IP addresses, not MAC addresses.
Hubs were used mostly before switches became economical. Switches send frames out a single port. In contrast, hubs broadcast frames
out all ports except for the port that received the frame in the first place. With switches, multiple stations on a network can broadcast at the
same time. With hubs, if two stations broadcast at the same time, their signals will collide. The control for when stations transmit is the
responsibility of media access control (MAC). The media access control method used with Ethernet hubs was called carrier sense multiple
access with collision detection (CSMA/CD). With switches, thankfully, we don't need that mouthful anymore. The problem with hubs and
CSMA/CD was that they did not work very well for larger networks. With large networks, there were just too many collisions to detect and
to fix (with retransmission).
Note that the designation of a piece of equipment by the manufacturer does not always indicate its actual function. And there are different
types of the various pieces of equipment.

"Insert 14a": B4-29, Item I.C.2.k(4)
Network Topologies
Ethernet is an example of a bus topology, or at least early Ethernet was a bus topology. As indicated in the gateways box, early Ethernet
used CSMA/CD with each station broadcasting its frames in each direction along the bus at the whopping speed of 10 Mbps. A competitor
was token ring from IBM (a ring network), which ran at 16 Mbps and was more reliable. For whatever reason, possibly a higher cost or an
anti-IBM bias, token ring never did much in the market. When Ethernet went to 100 Mbps, token ring effectively died (in the late 1990s).
IBM eventually pulled out of the working group that set the token ring standards, and that was the end of that.

"Insert 15": B4-31, Item I.C.3.b
Network Addresses and Domain Names
When applications need to communicate with each other, they send messages to one another. Within a single network, those messages
are called frames. In modern networks, messages are broken into smaller pieces called packets. In this Addendum, frame and packet are
used somewhat interchangeably.
Each NIC on a network has a unique physical hardware address (called the MAC address) that is assigned by the manufacturer. Each NIC
also has a temporary or permanent Ethernet address (most networks, these days, use Ethernet). A single-network Ethernet address is 48
bits long and is normally written in hexadecimal notation such as 37-F5-EE-1F-2E-42 (see the Glossary for more information on
hexadecimal notation). In contrast, an IP address that is used over the Internet is 32 bits long and is normally written in what is called
dotted decimal notation such as 126.11.16.17 (all of the IP addresses and Ethernet addresses in this Addendum are pulled out of the air).
Each of the four sets of numbers between the dots is between 0 and 255; 255 (2^8 - 1) is the largest decimal number that can be represented in 8 bits or 1 byte. An IP address is 4 times 8 bits, or 32 bits. With the growth in Internet usage, a new IP standard (called IPv6 instead of the current IPv4) has been introduced that has a 128-bit address.
Part of the IP address is the network address, another part of the IP address is the subnetwork address (a subnetwork, commonly called a subnet, is a part of a network), and the third part of the address is the host address (an individual PC). A network mask, also known as a subnet mask, is used to tell how long the subnet address is and how long the host address is. Since there are a limited number of bits, a balance must be maintained between the number of subnets and the number of hosts on each subnet.
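As a small sketch using Python's standard ipaddress module (the address is the made-up one from above, and the /24 mask is an assumption chosen only for illustration), the mask splits the address into its network/subnet part and its host part:

    import ipaddress

    iface = ipaddress.ip_interface("126.11.16.17/255.255.255.0")
    print(iface.network)                                     # 126.11.16.0/24 -- the network and subnet portion
    host_part = int(iface.ip) & ~int(iface.network.netmask) & 0xFFFFFFFF
    print(host_part)                                         # 17 -- the host portion on that subnet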
A domain name is a name that corresponds to one or more IP addresses. In web addresses such as www.beckerreview.com, the domain name
is beckerreview.com. The .com is the top-level domain name and beckerreview is the subdomain name. Top-level domain names are .com
for commercial organizations, .gov for governments, .edu for educational institutions, .org for nonprofit organizations, and .mil for the
military (and there are many others that are not listed here). Each organization that wants to use a top-level domain name must register its
domain name as a subdomain within a top-level domain.
The domain name system (DNS) is the system of domain names that is employed by the Internet. Since the Internet is based on IP addresses, not domain names, domain name servers (also abbreviated DNS) are needed to translate domain names into IP addresses. Think of domain name servers as large electronic telephone books.
If you type www.beckerreview.com in your browser, the software contacts a DNS server to determine the IP address of the site you are
looking for. Sooner or later (it may take several DNS servers), it gets the IP address. The networking software then sends a packet (the full
message is normally divided into multiple packets) to a default gateway, which then routes it along the network through various routers
until it gets to the site's web server. There is, of course, more to it than just this, but that is enough for our purposes.
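A minimal sketch of that lookup step, using Python's standard socket library (it needs a live Internet connection and a name that actually resolves, so treat the output as illustrative):

    import socket

    ip_address = socket.gethostbyname("www.beckerreview.com")   # ask the DNS "telephone book"
    print(ip_address)                                            # whatever address the DNS servers return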
For dial-up access to the Internet, the Internet service provider (ISP) provides a different IP address every time (dynamic IP addresses). For high-speed access, the IP address is often static, so the same IP address is used each time.
IP messages have an assigned "time to live" which is set by the sender of the message. Each router or host on the way to the destination reduces the time to live. When the time to live has been reduced to zero, the message is discarded, and a notification is normally sent back to the sender. This approach keeps undeliverable messages from circulating around the Internet forever.

"Insert 16": B4-31, Before Item II
Wireless Networking
Until recently, networked computers were connected only by wires (and wires always seem to get in the way and get tangled and cause all
sorts of problems). Now, there are plenty of wireless networks out there, some for home networks and others outside the home. And, like
anything in IT, wireless networking has its own terminology and its own acronyms. Much of this box will be terminology related to wireless
hardware and standards. Please realize that, like in all other areas of IT, new hardware is being released continually.
WiFi is a set of standards for wireless LANs, also called the 802.11 standards (the standard numbering is from the IEEE; remember the 802.3 for Ethernet). Equipment that uses this standard often can interfere with microwave ovens, cordless telephones, and other
equipment using the same frequency. 802.11b is the old standard. It covered data transmission at either 5.5 Mbps or 11 Mbps. The more
advanced standard is 802.11a, which is faster and transmits between 6 and 54 Mbps. Equipment that uses this standard often can
interfere with some of the more modern cordless phones. The most advanced current standard at the time of this writing is 802.11g,
released in 2003 and backwards compatible with 802.11b. It transmits at up to 54 Mbps. The difference in the standards is the frequency
range that they transmit on, which is the reason for some of the interference. After all, the transmission is just radio waves going through
the air. WiFi is a wireless form of Ethernet.
Bluetooth is the popular name for the 802.15 networking standard for small personal area networks. Bluetooth can be used to connect up to 8 devices (PDAs, mobile phones, laptops, PCs, printers, digital cameras, and the like) within a 10-meter (now up to 100-meter, depending on the power of the transmission) area using low-power radio-based communication. The acronym for personal area networks is PAN. If there
is a way, somebody in IT will come up with an acronym. The reason for that is that many IT people cannot spell. Many wireless keyboards
and mice utilize Bluetooth. Wireless keyboards and mice were developed seemingly to sell batteries.
An access point is a hardware device that connects wireless communication devices together to form a wireless network. An access point
is often called a wireless access point (WAP, but the acronym should not be confused with the other WAP, wireless application protocol).
The access point normally connects to a wired network (but can be standalone). Several WAPs can link together to form a larger network
that allows roaming.
The access point on the author's current home network is a Linksys WRT54G broadband router (Linksys is now a part of Cisco) that runs at 2.4 GHz, is compatible with both 802.11b and 802.11g, and connects the spouse's downstairs study to the author's upstairs study, where the wired network lives and where the cable modem Internet connection is. No Ethernet cable to the downstairs study and no having to use the house's electrical wiring! Isn't modern technology wonderful?
Wireless networking devices can operate in two modes: infrastructure mode and ad hoc mode. Infrastructure mode is where the
networking devices communicate through an access point. Ad hoc mode is where the networking devices are physically close enough
together so that they can communicate without the access point.
Remember that security should be an important consideration in anything wireless. Signals that go through the air are susceptible to
interception (that includes wireless phones, so remember that anything you say on your cell phone may be heard by people other than
those you intend). At the very least, home networks should be encrypted.
802.11i (also known as WPA2, or WiFi Protected Access 2, by the WiFi Alliance) is an IEEE standard specifying security mechanisms for WiFi. It supersedes the previous security specification called WEP (wired equivalent privacy) and is more secure since the encryption key
is longer and the key is changed periodically.

"Insert 17": B4-35, After Last Paragraph of Energy Company Example
Note that Energy Company's systems are a combination of online and batch processing. Revenue transactions are entered online and
then processed in batches (oil & gas revenue transactions are often processed in batch because each transaction that is input may
generate hundreds or thousands of additional transactions as a part of the revenue recording process). Once the transactions are posted,
reports are normally produced in batch. Interfaces to other systems are batch interfaces. For Energy Company, the data warehouse is
updated in batch.
Probably the best example of a combination of online and batch processing is a bank. Most banks these days allow depositors to view their
account activity online. Deposits that are made by the customer show up on the account immediately upon being deposited. Electronic
deposits show up on the account the next morning after being posted in batch during the night. Checks clearing the account show up on
the account the next morning after being posted in batch during the night (so that the banks will have a chance to sort checks from the
biggest to the smallest to maximize their overdraft charges). Effectively, transactions are posted to the account as they become available
or, in the case of Energy Company, as they are needed.
Another example that is sometimes used is grading the answer sheets on the BEC exam. We don't know enough about the way the
examiners' system works to say for sure one way or the other, but it is certainly quite possible that the grading is done online when the
candidate clicks on the "Done" button (since all of the questions on the BEC exam are multiple choice, online grading would be easy) and
that update of the candidate records is done in batch later. There would be no real reason to update candidate records at the same time
the quizzes are being graded since there is no apparent need for that information to be in the candidate record at that time; however, it is
certainly possible that candidate records could be updated at the time the exam is graded.
We do have to consider that there are multiple testlets on the BEC (and all of the other) exams; even if candidate records are updated when the grading of the exam as a whole is complete, they might not be updated as each individual testlet is graded. A part of the exam that is not entirely computer graded, such as discussion questions and simulation questions, which are still at least partly manually graded, would present even more of a question.
The distinction between online and batch processing for a system is often made based on when the data in the system is needed. These
days, more and more processing is online, but that does not mean that all processing is online.






"Insert 18": B4-42, Item II.DEnhanced Outline Material

D. INFORMATION SYSTEM CONTROLS
Information system controls are divided into general controls and application controls. General controls,
which are discussed in more detail later, include software controls, physical hardware controls, computer
operations controls, data security controls, controls over the system implementation process, and
administrative controls. Application controls are both manual and automated (programmed) controls that
attempt to ensure that only authorized data are completely and accurately processed by an application.
Application controls are specific to an application.
When controls that would otherwise be manual (e.g., checking for a valid account number) are built into a computerized environment, they are called programmed controls. Most programmed controls are classified as either input controls or processing controls. Output controls are also used.
1. Input Controls
Input controls verify that the transaction data entered is valid, complete, and accurate. Often, in batch
processing, a report of rejected transactions is produced; in online processing, error messages will
normally be displayed on the input device so that errors can be corrected immediately. This process is
often called editing the transactions. Good transaction editing is a key factor for the quality of data in a
system. Transaction editing is often called the application of business rules. In a perfect world, business
rules would be stored in a central data dictionary and all transaction editing would be done consistently
using that one set of business rules.
2. Processing Controls
Processing controls verify that all transactions are processed correctly (completely and accurately) during
file maintenance. This definition is oriented towards batch processing. Transactions are sometimes
rejected during file maintenance because some error conditions are not detectable until that time.
3. Output Controls
Output controls relate to the accuracy of computer reports and to the distribution of those reports.
4. Other Controls Terminology
Preventive controls (sometimes called preventative controls) are controls that reduce the occurrence of
errors in the first place, i.e., they prevent errors from happening.
Detective controls are controls that discover errors after they have occurred.
Corrective controls are controls that correct errors after they have occurred and have been detected.
COSO is the Committee of Sponsoring Organizations of the Treadway Commission, consisting of the
AICPA, the American Accounting Association, the Institute of Internal Auditors, the Institute of
Management Accountants, and the Financial Executives Institute. In 1992, COSO issued a study that
attempted to define internal control and to provide guidelines for evaluating internal control systems.
Internal control was defined as the process of providing reasonable assurance that control objectives
were achieved with respect to the effectiveness and efficiency of operations, the reliability of financial
reporting, and the compliance with applicable laws and regulations. The components of COSO's internal
control model are (1) the control environment, (2) control activities, (3) risk assessment, (4) information
and communication, and (5) monitoring.
The control environment is the policies and procedures that exist within an organization to ensure that
only valid transactions are recorded, that valid transactions are recorded accurately, and that rules are
complied with. The control environment consists of (1) management's commitment to integrity, (2)
management's philosophy and operating style, (3) the organizational structure, (4) the audit committee of
the Board of Directors, (5) the methods of assigning authority and responsibility, (6) human resources
policies and procedures, and (7) external influences.


"Insert 19": B4-37, Item II.C

This is an enhancement to the lecture materials. Add this on Page B4-37 as Item II.C SYSTEM
IMPLEMENTATION right after the Farflung Company example box, and change the current Item II.C OTHER
SYSTEM OPERATION CONSIDERATIONS and Item II.D TRANSACTION FLOW to Items D and E, respectively.

C. SYSTEM IMPLEMENTATION
The traditional stages of system implementation (often called the system implementation process) are system
analysis, system design, and system development (system development includes the programming, testing,
and conversion), with each of these stages having a definite start point and end point and conducted one after
the other. If an application system is purchased, the system design, programming, and testing (to some
extent) have already been done by the application package vendor. It should not be automatically assumed,
however, that a purchased application package has been fully tested (there are always problems of some
kind) or that the package will work exactly as expected. The application package developers did not
necessarily have the same definitions of system functions as the purchasers, the purchaser's data might or
might not be what the package is expecting, and the purchased package might actually contain bugs (the
author of this document was involved in an implementation of a major unnamed accounting system where
entire modules could not be used because they just did not work). After a system is implemented, there is
maintenance of the system. Maintenance of a system is often a major part of the total effort, because system
maintenance can last a long time and some systems are not easy to maintain. System implementation
through maintenance is sometimes called the application software life cycle.
1. System Analysis
System analysis is the analysis of a business problem that will be addressed by a new system of some
kind. It consists of analyzing the problem and suggesting a system solution. System analysis may
include a feasibility study that attempts to determine if a particular solution is technically or economically
or organizationally feasible. System analysis is sometimes called the determination of system
requirements. If the system analysis or requirements gathering is not done correctly, everything after that
will be weak at best.
2. System Design
System design is the design of the system solution: it includes specifying the input, processing, output, user interface, and file or database design. It should also include controls and security. System design is often combined with system analysis to speed up the process. Of course, such a combination also has drawbacks if some parts of the design have to be redone once the analysis is completed.
3. System Development
System development is the programming, testing, and conversion.
a. Programming
Programming is the writing of the programs specified in the system design.
b. Testing
Testing is the testing of the programs and the system. Unit testing is the testing of the individual
programs. System testing is the testing of the system as a whole (or possibly parts of the system).
Acceptance testing is the final testing of a system before it is installed in production. Even if an
application package is purchased, it should still be tested. There are plenty of bugs, even in
purchased application packages, and, besides, the package may work but that does not mean it
necessarily works in the way that the purchaser expected. In addition, it may not work as expected
with the purchaser's converted data. The problem may be with the converted data and not with the
system itself, but the processing with the data that is going to be used must be verified. Sometimes,
testing an application in a production environment before acceptance is called "technical" testing (the kind of problem that might be discovered in such testing is that the system produces all the "right" answers but is just too slow to use, for whatever reason). Other terminology is integration testing,
which tests that all components of the system work together.
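As a small illustration of unit testing (the routine being tested is purely hypothetical), Python's built-in unittest module exercises one function in isolation; system and acceptance testing would then exercise the whole system in the same spirit but on a larger scale:

    import unittest

    def extended_price(quantity, unit_price):
        # Hypothetical routine used only to illustrate unit testing.
        return round(quantity * unit_price, 2)

    class ExtendedPriceTest(unittest.TestCase):
        def test_normal_order_line(self):
            self.assertEqual(extended_price(3, 19.99), 59.97)

        def test_zero_quantity(self):
            self.assertEqual(extended_price(0, 19.99), 0.0)

    if __name__ == "__main__":
        unittest.main()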
c. Conversion
Conversion (sometimes called data conversion when only data is being discussed) is the process of changing from an old application system to a new application system. There may be a direct cutover or some kind of parallel processing where both the old application system and the new application system are run and the results are compared (often a quite difficult comparison to make if the new and old systems do not perform the same functions in the same way, which is normal).
Data conversion is converting data from an old application system to the new system. Data
conversion can be a quite sizable task. Even if consultants are utilized to install a package, internal
IT personnel are often used to design and write the conversion programs because they know the
existing systems and data.
At some point before a new system is implemented, the anticipated users of the system should be
trained. Sometimes, user training is squeezed out of the schedule when timelines are tight.
Sometimes, user training is just an afterthought. And system documentation is after that.
d. Maintenance
After a system is implemented, it must be maintained. Maintenance of a system is often a considerable portion of the total cost of the system, especially if the system has a long life. The ability
to maintain a system involves knowledge of the programming languages and/or technology involved
in the system and knowledge of the system itself. Maintenance normally becomes steadily more
difficult as a system ages and the people who were originally involved in the development of the
system go on to other jobs or other projects. Also, in many organizations, maintenance is a thankless
task, and many IT people tend to avoid it as career-limiting since it does not necessarily look that
good on a resume. It is almost always more fun and more exciting to work with new applications and
new technologies. Maintenance of a system is often divided into production support (which is just to
keep the system running) and changes to the system (to fix problems in the system or to provide
simple or even complex enhancements). Even if an application is purchased from an outside vendor,
and the vendor is providing regular updates for the software, there is still a need for some kind of
production support and maintenance (the updates for the software have to be applied at some point).
4. Other System Implementation Terminology
a. CASE
Computer aided software engineering (CASE) is a method of system development that supposedly reduces the amount of repetitive work that a system developer needs to do through the use of certain software tools that, for example, generate program code. Sometimes, CASE tools are divided into
upper CASE tools which are tools for the analysis and design phases and lower CASE tools which
are tools to support programming, testing, and configuration management. At one point in time,
CASE was the solution to all system development problems; unfortunately, it did not work out that
way. Sometimes it worked, and sometimes it did not.
b. RAD
Rapid application development (RAD) is a specific process or methodology of creating workable
systems in a very short time period. A RAD developed application starts as a prototype and
iteratively evolves into a finished application. However, sometimes the short time period results in
reduced features and reduced scalability (the ability to expand the system for an additional volume of
data or for additional functions). Sometimes, you just get what you pay for. RAD was supposed to be
another solution to all system development problems. Unfortunately, there are no panaceas.
c. Prototyping
Prototyping is the building of an experimental system rapidly and inexpensively for the system's end
users to evaluate. Often, end users cannot explain clearly what is needed in a system but can easily
critique something that they can see. Prototyping is often used when building the user interface to a
system and is used in RAD. To some extent, prototyping combines analysis, design, and
programming. Prototyping may speed up the system development process, and sometimes the
prototype may be fully developed into a part of the final application system.
d. Structured Development and Programming
Structured programming is a method of programming that attempts to eliminate the GOTO statement
and its variants (the statement was a staple of FORTRAN and COBOL, but can lead to what is known
as spaghetti code which is extremely difficult to maintain). Programmers break their programs into
small subroutines that can be easily understood by other programmers and that can be called by
higher-level routines. Structured programming is often associated with a top-down approach to
program design (top-down means write the upper routines first and then proceed to the lower-level
more detailed routines).
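A minimal sketch of that top-down, structured style (the routines and data are invented for illustration): the top-level routine is written first and simply calls small, single-purpose subroutines, with no GOTO-style jumps to untangle:

    def read_orders():
        return [("A-100", 3, 19.99), ("A-200", 1, 5.00)]     # hypothetical input data

    def price_order(order):
        item, quantity, unit_price = order
        return item, round(quantity * unit_price, 2)

    def print_invoice(priced_orders):
        for item, total in priced_orders:
            print(item, total)

    def main():                                              # the top-level routine reads like the design
        orders = read_orders()
        priced = [price_order(order) for order in orders]
        print_invoice(priced)

    main()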

"Insert 20": B4-47, Just Before Item III.B.6Enhanced Outline

e. Firewall Technology (How a Firewall Works)
To understand how a (network) firewall works, it is necessary to know something about TCP/IP and
IP addresses. This information was covered in the telecommunications sections of the text in the
appropriate boxes. It is also necessary to know something about encryption, which is covered in the
text a little later. It might be appropriate to review both of these subjects first.
Some firewalls consist of a piece of hardware with integrated software, sometimes called a firewall
appliance. With these firewalls, there is little or no software configuration. Other firewalls are
software only; most personal firewalls that protect a single PC or a small network fall into this
category.
Many/most firewalls hide the actual IP addresses of the computers on the (internal private) network
and utilize network address translation (NAT) to determine those addresses when messages are to
be sent to those computers. The firewall's IP address is the only IP address seen by the outside
network (normally nowadays the Internet). With network address translation, all computers on the
internal network use a private range of IP addresses which is not used on the Internet. When a
computer on the internal network sends a message to a computer not on the internal network, the
firewall replaces the private IP address of the computer on the internal network (the source) with its
(the firewall's) IP address. It also replaces the source's port number with an unused port number
above 1023. It then retains the source's IP address and port number (which together are called the
state) so that it will know where to return the response. In effect, the internal network's IP addresses
are hidden from anything outside the internal network.
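A hypothetical sketch of that translation table (the "state") in Python; the addresses and ports are invented, and a real firewall keeps considerably more information per connection:

    import itertools

    PUBLIC_IP = "203.0.113.5"                 # the only address the outside world ever sees
    next_port = itertools.count(1024)         # unused port numbers above 1023
    state = {}                                # public port -> (private IP, private port)

    def outbound(private_ip, private_port):
        public_port = next(next_port)
        state[public_port] = (private_ip, private_port)      # remember where to return the response
        return PUBLIC_IP, public_port                        # what the Internet sees as the source

    def inbound(public_port):
        return state.get(public_port)                        # which internal host gets the response

    source = outbound("192.168.1.20", 51515)
    print(source)                             # ('203.0.113.5', 1024)
    print(inbound(source[1]))                 # ('192.168.1.20', 51515)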
As indicated previously, a firewall examines individual incoming packets (packet filtering) and
determines whether to accept or reject each specific incoming packet based on the IP address used
and possibly the specific port that the packet is to go to. A firewall may also act as an application
proxy and examine more than just IP addresses and port numbers (packet filters examine only the IP
addresses and port numbers and that is often not enough for robust security). It may also log
(record) what is happening at the firewall on a firewall log so that after-the-fact analysis can be
performed by network security personnel (one of the first things that a hacker will attempt to do is to
modify logs so that his or her activity will not be logged). Certain network traffic patterns indicate that some kind of network intrusion attempt is occurring, and the firewall monitors for those kinds of network traffic patterns and triggers an alarm or shuts down all or part of the network if such patterns are
discovered. For larger networks, there may be two or more firewalls for load balancing purposes (so
that an individual firewall is not overloaded).
General firewall strategies are built on "Allow all" rules to allow all incoming network packets except
those that are specifically denied or "Deny all" rules to deny all incoming network packets except
those that are specifically allowed. Some firewalls use "Deny all" as a default, and all network traffic
is blocked until "Allow" rules are written by the firewall administrator.
Many firewall access rule sets are a combination of "Allow" and "Deny" rules. For example, a set of
access rules might be (ports and HTTP were discussed previously):
(1) Deny network traffic on all ports,
(2) Except, allow network traffic on port 80 (HTTP),
(3) Except, allow HTTP video for certain users (a user name or a group rule),
(4) Except, deny all HTTP video during the night (a time of day rule).
Access rules based on Deny rules are easier to administer and are more secure. There is normally a
smaller list of rules to maintain and new rules do not have to be constantly added to address
problems that are discovered.
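To make that evaluation concrete, here is a sketch in Python of a first-match-wins version of the rule set above. The rule attributes and their ordering are assumptions made purely for illustration; real firewalls express and evaluate rules quite differently:

    rules = [
        {"action": "deny",  "port": 80, "content": "video", "night": True},     # rule (4)
        {"action": "allow", "port": 80, "content": "video", "user": "analyst"}, # rule (3)
        {"action": "allow", "port": 80},                                        # rule (2)
        {"action": "deny"},                                                     # rule (1) -- deny everything else
    ]

    def decide(packet):
        for rule in rules:                     # first matching rule wins
            if all(packet.get(k) == v for k, v in rule.items() if k != "action"):
                return rule["action"]
        return "deny"                          # nothing matched: fall back to deny

    print(decide({"port": 80}))                                      # allow -- ordinary HTTP
    print(decide({"port": 25}))                                      # deny  -- not on an allowed port
    print(decide({"port": 80, "content": "video", "night": True}))   # deny  -- video at night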
Some firewalls remember the packets that are outbound and for which a response should be
expected from outside the internal network. As previously indicated, the remembered information is
called the state, and this form of packet filtering is called stateful packet filtering. Packet filtering that
does not remember the state is called stateless packet filtering.
IP network traffic is sent over all kinds of networks between the source and the destination. Not all of
these links allow the same maximum packet size. As a result, larger packets may be split into smaller
packets. These smaller packets are called IP fragments. These IP fragments have their own IP
source and destination addresses but only a small part of the original packet's information; only the
first fragment, for example, contains the port number. Something has to put the fragments back
together again. If the firewall does that, it must save all of the fragments until the final fragment has
arrived (each fragment contains a fragment sequence number so that the fragments can be
reassembled). If the final fragment never arrives, and there are a large number of fragments being
saved, the firewall may end up doing a lot of extra work, and the firewall may or may not be able to do
all of the extra work. Sending IP fragments (and never sending the final fragment) is part of many
denial-of-service attacks (not all of them but many of them). The IP fragments will be retained only
for a certain period of time (as indicated previously, IP packets have a defined "time to live" and if that
time is exceeded, the packets are discarded), but, if there are enough IP fragments, the firewall might
crash.
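A rough sketch of the bookkeeping a reassembling firewall must do, with an invented timeout. Real IP reassembly works on byte offsets and flags rather than the simple sequence numbers used here; the point is only that partial packets must be held, completed, or eventually thrown away:

    import time

    TIMEOUT_SECONDS = 30
    buffers = {}      # packet id -> {"first_seen": time, "fragments": {seq: data}, "last_seq": n}

    def receive_fragment(packet_id, seq, data, is_last):
        entry = buffers.setdefault(packet_id, {"first_seen": time.time(),
                                                "fragments": {}, "last_seq": None})
        entry["fragments"][seq] = data
        if is_last:
            entry["last_seq"] = seq
        last = entry["last_seq"]
        if last is not None and len(entry["fragments"]) == last + 1:
            whole = b"".join(entry["fragments"][i] for i in range(last + 1))
            del buffers[packet_id]
            return whole                       # complete packet, ready to inspect and forward
        return None                            # still waiting on fragments

    def expire_stale():
        now = time.time()
        for packet_id in [k for k, v in buffers.items()
                          if now - v["first_seen"] > TIMEOUT_SECONDS]:
            del buffers[packet_id]             # give up on packets whose final fragment never arrives

    print(receive_fragment(7, 0, b"GET / HT", False))   # None -- not complete yet
    print(receive_fragment(7, 1, b"TP/1.1", True))      # b'GET / HTTP/1.1'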
An application level gateway, or application proxy or application layer firewall, examines more than
just IP addresses and port numbers. It operates at the application layer, and, since it has application
level knowledge, it is capable of inspecting the entire application data portion of the IP packet and of
maintaining a more extensive log. An application proxy does not merely forward acceptable packets
(like a packet filter does); it rebuilds the packet and sends it on to its destination. Due to these extra
functions, an application proxy is often slower than a firewall that just does packet filtering. A firewall
needs to have a separate application proxy for each application.
Remember that the word "application" is used in various ways in IT; the word is sometimes used as
an application system (the accounts receivable system or the accounts payable system) and
sometimes used to designate the top layer in a network protocol (the application layer). For an
application level gateway, the word means the application layer and not an application system. At the
application layer, the applications are such things as HTTP, FTP, DNS, and SMTP.
Network intrusions by a hacker can be address scans, where the hacker scans a range of IP
addresses to see which ones are responsive, and port scans, where the hacker scans a range of
ports to see which ones are open. There are also ways to attempt a reverse query on a DNS server
to try to trick the DNS server into divulging its IP addresses. Sometimes, there is a separate intrusion
detection system in addition to the firewall, and sometimes there are multiple firewalls.
Public servers that must be available to the outside world (email servers, web servers, and DNS
servers) normally are placed in a separate subnet called a DMZ (demilitarized zone) or perimeter
network. The DMZ is often outside the main firewall or firewalls or it may be between two firewalls.
Even if a hacker succeeds in taking over a server in the DMZ, the hacker will not be able to get into
other subnets inside the main firewall.
Firewalls and Access Rules
Smokeandfire Company places great value on its customer and internal financial information. In an attempt to secure that data from
outside intruders, it has created an extensive firewall system. Smokeandfire has an outer firewall between the Internet and its web server.
It also has an inner firewall between the web server and its internal network. Both of these firewalls are governed by a set of policy rules
(rules that govern who gets access to what) contained in a separate database.
The firewalls identify names, IP addresses, applications, and other characteristics of incoming traffic. They check this information against
the policy/access rules for the firewalls. Certain IP addresses are allowed access to the internal systems (passing through the inner
firewall); other IP addresses are allowed access only to the web server and thus can access data only through the web server (the web
server may be allowed to update only limited data and then only in certain ways).
In general, network firewalls can be used to (1) block incoming network traffic, (2) block outgoing network traffic (such as not allowing
access to certain Internet sites), or (3) block network traffic based on content (possibly not allowing incoming traffic containing certain
words or not allowing outbound traffic to certain inappropriate websites). In the end, however, the key to effective firewalls is the creation
and maintenance of the access rules. Rules, rules, and more rules. A poor job of firewall administration is sometimes worse than no
firewall at all.

"Insert 21": B4-50, Item IV.CEnhanced Outline Material

Change the existing IV.C PROGRAM MODIFICATION CONTROLS to IV.D PROGRAM MODIFICATION CONTROLS and change the existing IV.D DATA ENCRYPTION to IV.E DATA ENCRYPTION.

C. DATA VALIDATION AND EDITING TECHNIQUES
Basic data validation and editing techniques include the programmed and other processing controls
necessary to ensure that data in an application system is as correct and accurate as possible. Together,
these various techniques attempt to ensure that data is correct before it enters a system. Once data gets
into and is proliferated throughout a system, it is considerably harder to correct. The best approach is to
never let incorrect data into a system in the first place. Data validation and editing techniques include:
1. Valid Code Tests
A valid code test is a data validation step where codes entered are checked against valid values
contained in some master file.
2. Check Digits
Check digits are a data validation step where an algorithm is used to compute an extra digit that is appended to an existing number, and any program that later uses that number applies the same computation to verify it. A good check digit will protect against transposition errors.
3. Field Checks
A field check is a data validation step performed on a data element to ensure that it is of the appropriate
type (alphabetic, numeric, etc.).
4. Limit Tests
A limit test is a data validation step that checks whether the value of a data element is within certain limits.
5. Reasonableness Checks
A reasonableness check is a data validation step that checks whether the value of a data element has a specified relationship with other data elements.
As indicated previously, the various data validation and editing techniques are often called business rules.
The best approach to data validation is to subject a data element to the same data validation and editing
techniques regardless of the application system where that data element is entered. However, since
systems where the data is entered were almost always developed at different times by different groups of
people, that objective is very hard to accomplish.
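As a sketch of how several of these techniques might be applied together as one set of business rules (the customer codes and the limits are invented for illustration):

    VALID_CUSTOMERS = {"C1001", "C1002", "C1003"}        # master file values for the valid code test
    CREDIT_LIMIT = 10_000.00

    def edit_transaction(customer_code, quantity, unit_price):
        errors = []
        if customer_code not in VALID_CUSTOMERS:                       # valid code test
            errors.append("unknown customer code")
        if not str(quantity).isdigit():                                # field check: must be numeric
            errors.append("quantity is not numeric")
        elif not 1 <= int(quantity) <= 999:                            # limit test
            errors.append("quantity outside allowed limits")
        elif int(quantity) * unit_price > CREDIT_LIMIT:                # reasonableness check
            errors.append("order value unreasonable for credit limit")
        return errors or ["accepted"]

    print(edit_transaction("C1001", "3", 19.99))      # ['accepted']
    print(edit_transaction("C9999", "x", 19.99))      # two errors reported back to the user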




Data Validation
A check digit algorithm is normally a reasonably complex algorithm if it is to have any value.
A simple total of the numbers in a string of numbers will not find simple transpositions of digits; as long as all of the numbers are there
somewhere in the string of numbers, a simple summation check digit will indicate that everything is ok. To actually check for transpositions,
there has to be some algorithm that takes the order of the numbers in the string into account. For example, if the algorithm added the
numbers after multiplying each number by its position in the string, 123 would sum to 1*1 + 2*2 + 3*3 = 14 and 132 would sum to 1*1 + 2*3
+ 3*2 = 13. If the number 12314 were entered as 13214, such an algorithm on the first three digits of the number would calculate a check
digit of 13 (for 13214) instead of the expected 14 (for 12314) and would indicate that there was some kind of problem. This simple
algorithm would not be able to determine what the problem is, but it could at least indicate that there was a problem. Note that, in this
algorithm, the check digit is merely added to the end of the actual number and does not actually become part of the number.
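The algorithm just described is easy to state directly; this short sketch reproduces the 123/132 calculation and the 12314/13214 comparison from the example:

    def check_value(digits):
        # Multiply each digit by its position (starting at 1) and sum the products.
        return sum(int(d) * position for position, d in enumerate(digits, start=1))

    print(check_value("123"))       # 1*1 + 2*2 + 3*3 = 14
    print(check_value("132"))       # 1*1 + 3*2 + 2*3 = 13 -- the transposition changes the value

    stored = "123" + str(check_value("123"))             # "12314", the number with its check digit appended
    entered = "13214"                                     # the transposed entry from the example
    print(check_value(entered[:3]) == int(entered[3:]))   # False -- the edit flags a problem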
Of course, as accountants, we are all familiar with this kind of thing. Back in the old days, in auditing, we used to foot columns of numbers.
If the difference between the total from footing and the expected total was evenly divisible by nine, then the problem was almost certainly a
transposition error. For example, add the numbers 36, 64, and 100. The total is 200. If, however, the numbers were entered as 63, 64, and
100, the total is 227. The difference between 200 and 227 is 27, which is evenly divisible by 9. The "evenly divisible by nine" check does
not mean that the error has to be a transposition error, but it is certainly a good first thing to look at.
Check digits are normally applied to numbers like account numbers and not to dollar amounts. Check digits are most often used to check
for errors when something is entered, and a check digit will not do that on a dollar amount since the system does not know what dollar
amounts are going to be before they are entered. Applying a check digit to a dollar amount would only ensure that that dollar amount, once
entered, is transmitted correctly (and there are plenty of other better ways to check for data transmission errors). With something like an
account number, the system may not know which account number is going to be entered, but it does know that whatever is entered should
at least fit a predetermined pattern. If the algorithm checks what is entered and the check digit on the entered account number is not what
it should be, then there is some kind of problem that can be communicated immediately to the person entering the data.

"Insert 21a": B4-50, New Items IV.D.1 and 2Enhanced Outline Material
Add on Page B4-50 as Item IV.D.1 Encryption Keys, and IV.D.2 Digital Signatures, and change the existing
IV.D.1 Digital Certificates to IV.D.3 Digital Certificates.

1. Encryption Keys
Data encryption offers a form of security, and it is based on the idea of keys. Each party has a public and
private key for their data (public key encryption is one type of encryption). The public key of the recipient is
distributed to others in a separate transmission. The sender of a message uses the public key of the recipient
to encrypt the message, and the recipient uses its private key (which hopefully never goes anywhere) to
decrypt the message. An encrypted message can be read only after it has been processed back through the encryption algorithm with the proper key. As long as the private key of the recipient is secure, the encryption scheme should
provide a secure transmission. All encryption keys can be cracked, but the longer the key is, the harder and
longer it is to crack it.
To summarize, a message is encrypted using the public key (of the recipient, which is readily available to
senders), and it is decrypted by the private key (of the recipient, which is retained by the recipient). It is
important that the private key cannot be mathematically determined from the public key, or the private key is
not going to be private very long. The mathematics of encryption, decryption, and keys will not be discussed
here.
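As a toy illustration only (the numbers are far too small to be secure, and real systems rely on tested cryptographic libraries rather than hand-rolled arithmetic), the public/private key relationship can be shown with textbook-sized RSA numbers in Python (3.8 or later, for the modular inverse):

    p, q = 61, 53                        # two small primes (real primes are hundreds of digits long)
    n = p * q                            # 3233, part of both keys
    e = 17                               # public exponent: the recipient publishes (e, n)
    d = pow(e, -1, (p - 1) * (q - 1))    # private exponent: kept secret by the recipient

    message = 65                         # a message encoded as a number smaller than n
    ciphertext = pow(message, e, n)      # the sender encrypts with the recipient's PUBLIC key
    recovered = pow(ciphertext, d, n)    # the recipient decrypts with its PRIVATE key
    print(ciphertext, recovered)         # 2790 65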
2. Digital Signatures
Digital signatures, which authenticate a document by using a form of data encryption, are another form of data
security. A mathematically condensed version of the message is produced and encrypted by the sender's
private (not public) key. It is attached to the original message. The message and the digital signature can be
decrypted by the receiver (the party with the sender's public key). The original message can be compared to
the condensed version to ensure that the original message has not been changed.
The encryption/decryption process is backwards from what was discussed with messages since a digital
signature is not a message per se. It is a way of authenticating the sender. Unless indicated otherwise, the
remaining discussion is about messages.
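Continuing the same toy key pair, here is a sketch of the signature idea: the sender condenses the message with a hash, "encrypts" the condensed value with its private key, and the receiver checks the result with the sender's public key. Again, this is an illustration, not a production technique:

    import hashlib

    p, q = 61, 53
    n, e = p * q, 17
    d = pow(e, -1, (p - 1) * (q - 1))          # the SENDER's private key this time

    message = b"Pay vendor 1234 the amount of $500.00"
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n   # condensed version
    signature = pow(digest, d, n)              # signed with the sender's private key

    # Receiver: recompute the condensed version and compare it with the decrypted signature.
    check = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    print(pow(signature, e, n) == check)       # True -- the message was not changed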

"Insert 21c": B4-50, New Item IV.EEnhanced Outline Material

4. Encryption and Differences in Terminology
The preceding question does illustrate differences in terminology that can arise on the IT portion of the BEC
exam. The question apparently was making a distinction between "private key encryption" and "public key
encryption." Other wording that can be used is "symmetric encryption" and "asymmetric encryption" or
"symmetric algorithms" and "asymmetric algorithms" or "symmetric keys" and "asymmetric keys" or "single-
key" and "two-key" encryption; private key encryption can also be called secret key encryption. Regardless of
the terminology, the distinction is whether one key (private and symmetric) or two keys (public and
asymmetric) is/are used to encrypt and decrypt the data (or even the keys, because sometimes the keys are
encrypted also). Often, these days, both methods are used in combination.
The encryption technique presented earlier is that of public key encryption, where two different keys, one a
public key and the other a private key, are used for encryption and decryption. As stated there and in the
explanation to the released question, in public key encryption, the public key is used to encrypt the
message, and the private key is generally used to decrypt the encrypted message. Each party to the
communication has both a private key (which is private to each party) and a public key (which is known to the
other party). An individual's private key and public key are mathematically related. Public key (asymmetric)
encryption algorithms include RSA (which is used for private/public key generation, for encryption, and for digital signatures). PGP (Pretty Good Privacy), which is often used for email, is often
considered an encryption algorithm although it is really an encryption program which uses both asymmetric
and symmetric algorithms.
The alternative to public key encryption is private key encryption (which was standard before public key
encryption was developed, since it was all there was at that time). In private key encryption (don't confuse
the private key used in public key encryption with private key encryption itself), there is no public key, so
each party must have the other party's private key. The private keys must be communicated, and the
communication of the private keys is subject to interception if communicated electronically. For most of the
history of cryptography, a key had to be kept absolutely secret and was kept secret by face-to-face meetings
or trusted couriers. Private key (symmetric) encryption algorithms include DES (Data Encryption Standard),
which uses a 56-bit key (now considered to be insecure due partly to the short key length); triple DES (which
uses 3 separate keys to encrypt each block of text, one after another, and is reasonably secure but
somewhat slow); and AES (Advanced Encryption Standard), which uses 128-bit, 192-bit, or 256-bit keys.
Normally, public key techniques are much more computationally intensive than purely private key techniques
and were not developed until the 1970s, when the computational power became widely available.
Remember, private came first.
There is nothing intrinsically more secure about the keys themselves in public key cryptography. The objective
was to eliminate the need to distribute the private key, which had to be done when only one key was used.
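For contrast, here is what a private key (symmetric) scheme looks like in a few lines of Python, using the Fernet recipe from the same pyca/cryptography library (Fernet is built on AES; the library choice is just for illustration). Notice that one shared key does both the encrypting and the decrypting, which is exactly why the key distribution problem exists.

# A minimal symmetric (single-key) encryption sketch using Fernet, an AES-based recipe.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()   # this ONE key must somehow reach both parties securely
cipher = Fernet(shared_key)

token = cipher.encrypt(b"Quarterly results are due Friday")   # sender side
print(cipher.decrypt(token))                                  # receiver side, same key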
5. Examples
We all use encryption on an almost daily basis, whether we realize it or not. Take the ubiquitous ATM card.
The strip on the back of the ATM card contains all sorts of identification information of the person who has
been issued the card. One of the pieces of information in the strip is the bank account number, encrypted, of
course. When the card is inserted into a card reader and a PIN is entered, the PIN/account number
combination is compared to the PIN/account number combination at the bank. A few occurrences of "no
match" and the ATM machine eats your card.
With credit (not debit) cards, there is normally no PIN number, but the process works basically the same way.
Some issuers such as American Express are requesting the billing zip code of the cardholders as additional
security.
Online, the little lock symbol down at the bottom of the screen indicates that a secure connection has been
established between your browser and a web server. The process is built upon SSL (Secure Sockets Layer)
which is used by your browser to encrypt the data that you send and to decrypt the data that the server
sends; SSL is used on the other end to decrypt what you send and to encrypt what the server sends. SSL
runs on top of TCP/IP and below application layer protocols such as HTTP and FTP. A socket is a Unix term
for a communication endpoint (roughly, an IP address plus a port).
On the other end of the connection, there is an SSL server with a digital certificate that identifies the party on
the other end (the certificate holder) and the name of the Certificate Authority that issued the certificate
(companies like RSA and VeriSign, which normally do something to verify the identity of the certificate holder)
and the public key of the certificate holder. When your browser points to a secured domain, an SSL
handshake identifies the server and the client and establishes an encryption method and a unique session
key. Once the handshake is complete, the session is secure, and real communication can begin. This overall
process is called the Public Key Infrastructure (PKI).
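From the programming side, almost all of this is handled by libraries. A minimal Python sketch using the standard ssl module (the host name is just an example) performs the handshake, checks the server's certificate against the system's trusted Certificate Authorities, and reports the negotiated protocol:

# A minimal sketch of opening a secured connection with Python's standard ssl module.
import socket
import ssl

host = "www.example.com"                   # example host; any HTTPS server would do
context = ssl.create_default_context()     # loads the system's trusted Certificate Authority certificates

with socket.create_connection((host, 443)) as raw_sock:
    # wrap_socket performs the handshake: certificate check, cipher negotiation, session key setup.
    with context.wrap_socket(raw_sock, server_hostname=host) as secure_sock:
        print(secure_sock.version())                   # negotiated protocol, e.g. 'TLSv1.3'
        print(secure_sock.getpeercert()["subject"])    # who the certificate says is on the other end

(Strictly speaking, SSL has since been superseded by TLS, which is why the negotiated protocol prints as a TLS version, but the overall handshake idea is the same.)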

Data Encryption and SSL: An Example
As a further example (which has been simplified), let's assume that Peter Olinto is trying to submit information to a government agency. He
sits down at his PC and logs onto that agency's website, entering his user-id of OlintoP and his carefully chosen and totally secure
password of Peter, which he has written upside down on a Post-It Note stuck to the side of his monitor.
Peter, whether he realizes it or not, has already entered the world of cryptography. We assume that both Peter and the government
agency have registered with a Certificate Authority and possess valid digital certificates. The first thing that Peter's browser and the
agency's web site do is to establish "authentication." Peter's browser and the agency's software exchange digital certificates so that each
system knows who the other one is. The software on both systems obtains the certificate authority's public key and uses that key to verify
the validity of the certificate authority's digital signature on the digital certificate, thereby validating the information on the digital certificate
that it has just received. Peter's browser now has the agency's public key, and the agency's system has Peter's public key, so messages
can be encrypted.
Peter sees a lock symbol at the bottom of his browser and the "http:" on his browser changes to "https." He enters the appropriate
information into the appropriate boxes on the agency's form and clicks a button to send the completed data to the agency. His web browser
(actually the encryption software associated with it) creates a summary or "digest" of the information that Peter submitted. It then encrypts
the digest, using Peter's private key. The encrypted digest becomes Peter's digital signature for this submission. Using the agency's public
key, it then encrypts the information that Peter submitted. Peter's digital signature and the encrypted data are sent to the agency.
When the agency's system receives the message from Peter, the software first decrypts Peter's digital signature (the digest of the
information that Peter submitted) using Peter's public key. It then decrypts the information that Peter submitted using the agency's own
private key. It next creates its own digest from the decrypted information and compares its digest to the original digest. If they are the
same, then it knows that the information submitted was not changed during the transmission. It then probably sends Peter an
acknowledgement that it has received his information. The process for the acknowledgement should be the reverse of what has been
described.
Actually, there are several different types of "keys" involved in this process, more than just public and private keys. The whole process is
called an SSL session. When Peter first attempted to connect to the agency's web site and authenticate, the agency's web server sent its
public key to Peter's browser in its digital certificate. The web server and Peter's browser negotiated the highest level of encryption that
both could support (40 bit, 56 bit, or 128 bit). Peter's browser generated a secret key of the agreed-upon length. This key, called the
session key, was used only for the one SSL session. Peter's browser encrypted the session key using the agency's public key and sent it
to the agency's web server. The web server decrypted the session key using its private key. All data transmitted was then encrypted using
the session key.
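The session key step at the end of the example can also be sketched in code. In the hypothetical Python fragment below, the "browser" generates a one-time symmetric session key, wraps it with the agency's public key, and the agency "server" unwraps it with its private key; the bulk data then travels under the fast symmetric key. This is a simplification of the real SSL handshake, and the library and key sizes are illustrative only.

# Hypothetical sketch of the session key exchange; library and key sizes are illustrative only.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

# The agency's key pair; in real life the public key arrives in its digital certificate.
agency_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
agency_public_key = agency_private_key.public_key()

# "Browser" side: generate a one-time session key and wrap it with the agency's public key.
session_key = Fernet.generate_key()
wrapped_key = agency_public_key.encrypt(session_key, oaep)

# "Server" side: unwrap the session key with the private key; both sides now share it.
recovered_key = agency_private_key.decrypt(wrapped_key, oaep)

# All further data in the session is encrypted with the fast symmetric session key.
data = Fernet(session_key).encrypt(b"form data from Peter")
print(Fernet(recovered_key).decrypt(data))  # b'form data from Peter'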


"Insert 22": B4-53, New Item VI (after Monolithic Corporation)

VI. LEGAL ISSUES WITH RESPECT TO INFORMATION TECHNOLOGY
A. INTELLECTUAL PROPERTY
Intellectual property is a term used to describe products of the human intellect that have economic value.
Software is just one of the many forms of intellectual property. Copyright law covers only the particular
form or manner in which ideas and concepts have been manifested and not the actual ideas or concepts.
Since software can be viewed as a work of authorship or as a commercial product, there are three
separate types of law that can be used to protect software: trade secret law, copyright law, and patent
law.
B. TRADE SECRET
A trade secret is information or know-how that is not generally known and that provides its owner with
some kind of competitive advantage in the marketplace. If the trade secret owner takes "reasonable"
steps to keep the information or know-how secret, the owner will be protected from disclosure of the
secrets by employees, competitors, and industrial spies. Trade secret protection is a matter of state and
not Federal law.
There are two ways to lose trade secret protection. The first way is for some other person or organization
to develop the same thing. The second way is if the owner of the information or know-how carelessly or
intentionally discloses the information or know-how to some other party who has no obligation to safeguard
the information or know-how. The information or know-how is then in the "public domain." If everybody
knows it, it is no secret.
Reasonable steps to keep information or know-how secret include (1) identifying the information that is a
trade secret to the organization, (2) maintaining physical security over the information, (3) maintaining
electronic security over the information, (4) marking paper documents containing that information
"confidential," and (5) using nondisclosure agreements.
C. COPYRIGHT
A copyright is a statutory grant that protects creators of intellectual property from having their work copied
for any purpose during the life of the author plus a certain number of years. Copyright law covers only the
particular form or manner in which ideas or information has been manifested and not the actual ideas or
concepts.
When custom software is developed, it is necessary to determine in advance who owns the underlying
copyright to the software. Contrary to normal expectations, (lawful) possession of software does not
necessarily imply ownership. The possessor may have (and most often does have) only a limited license
to use the software.
D. PROTECTING PRIVACY
Privacy is the right of individuals to determine what information about them is communicated to others.
Privacy has become more of an issue in the electronic age because it has become so much easier to
collect information about individuals in the first place. The principles of privacy protection are as follows:
(1) the individual should have notice/awareness of the collector's privacy policy, (2) the individual should
be able to consent to the collection of information about them (see opt-in and opt-out in the Glossary), (3)
the individual should be able to access the data that is collected and should be able to make corrections
to that data, (4) the individual should be assured that the data that is collected is safe and secure and
protected from loss or fraudulent use, and (5) there must be a method of enforcement and remedy.
As indicated by a number of highly-publicized security breaches in 2005 and 2006, these principles are
not widely followed and are sometimes ignored outright.
E. EMPLOYEE OR INDEPENDENT CONTRACTOR
In the software industry, it is not always easy to distinguish between employees and independent
contractors (just ask Microsoft, which has been involved in considerable litigation in this area). Many
companies have workers with positions that range from full-time salaried employees, to full-time
independent contractors, to part-time hourly employees, to part-time independent contractors. The law of
agency is used to determine whether an employer-employee relationship exists by considering factors
such as (1) the control by the "employer" over the work (if the employer determines how the work is done,
has the work done at the employer's location, and provides equipment or other means to create the
work), (2) the control by the "employer" over the "employee" (if the employer controls the employee's
schedule and/or has the right to hire the employee's assistants), and (3) the status and conduct of the
"employer" (if the employer is in the business to produce such work, provides the employee with benefits,
and/or withholds tax from the employee's payment). Considerable litigation has occurred in this area.
Federal Legislation Related to Information Technology
Computer Fraud and Abuse Act of 1986: This act makes it a crime to knowingly and intentionally access a computer without authorization
to obtain national security data, information in the financial records of a financial institution, etc., or to traffic in passwords with the intent to
defraud.
Computer Security Act of 1987: This act attempted to improve the security and privacy of sensitive information in Federal government
computer systems and systems of federal contractors. It was superseded by the Federal Information Security Management Act of 2002.
Privacy Act of 1974: This act provides that no Federal agency, with certain broad exemptions for law enforcement purposes and for
congressional investigations, shall disclose any information contained in a computer system without the permission of the person to whom
the information pertains. It also permits an individual to determine what information has been collected by such agencies and to correct or
change that information.
Electronic Communications Privacy Act of 1986: This act extended restrictions on wiretaps of telephone calls to other transmissions of
electronic data.
Communications Decency Act of 1996: This act prohibited the making of indecent or offensive material available to minors via computer
networks.
Health Insurance Portability and Accountability Act of 1996: This act attempted to protect patients' medical records and other health
information provided to health care providers.
This list is only selected Federal legislation, certainly not everything. And, of course, in addition to Federal legislation, there is a large
amount of state legislation.

Software Licensing
Partly to protect trade secret information, software is normally distributed in object code form and not in source code form (if the source
code is not provided, it may be escrowed to provide some protection in case the vendor goes out of business) and is normally licensed and
not sold. If software is available in source code, it is normally relatively easy to determine how it works.
The license gives the user permission to use the software under the terms of the license, which might include a nondisclosure provision,
often a restriction on copying the program for other than backup and disaster recovery purposes, and a restriction on reverse engineering.
Sometimes, the license is perpetual regardless of whether the user obtains or maintains ongoing software support. Other times, especially
with a certain unnamed vendor selling software used in large-scale corporate environments, the license is contingent on continuing to buy
ongoing software support on a yearly basis. Support is often contingent on staying current with newly released versions of the product. For
example, if a particular product is in Version 6, the vendor might provide full support for versions as far back as Version 3, might provide
only "best efforts" support for versions prior to Version 3, and might provide no support whatsoever for versions before that. Some software
is licensed on a site-license basis; other software is licensed on a per-user or per-connected-user basis. Software licenses are commonly
called End User License Agreements (EULA). Remember that software licenses, like any other contract, can normally be negotiated
(especially at the end of a quarter or the end of a year when software vendors are trying to realize revenue and salespeople are trying to
make their quotas and bonuses). Effective negotiation of such contracts can pay huge dividends in the future, in terms both of cost and of
the ability to make changes (even sometimes in corporate structure) without unnecessary restrictions.
Mass-marketed software has been commonly distributed under some form of a shrink-wrap license, where the license is written on the side
of the package or made available to the user in some other equivalent way. A shrink-wrap license typically provides that, by opening the
package and using the program, or by accepting the license provisions before the software is actually installed by the installation program,
or by accepting the license provisions before the first time the software is actually used, the user agrees to use the software under the
terms of the license agreement. There is still considerable disagreement about the enforceability of shrink-wrap licenses. Some cases
have found such agreements enforceable and other cases have found them unenforceable.
If the purchaser is required to read and accept the software use agreement online before the purchaser pays for the software and delivery
is initiated (as part of the order process), the license will more than likely be enforceable. If the purchaser is required to read and accept
the software use agreement after the purchaser pays for the software but before the software can be installed or used, there is a little more
question about enforceability.



"Insert 23": B4-59, After Item III.F
B2B Exchanges
A public exchange (sometimes called an E-hub or net marketplace) provides a single digital Internet marketplace for many buyers and
sellers. Public exchanges can be industry-owned or can operate as independent intermediaries between buyers and sellers. Sometimes
the exchanges are set up for long-term purchasing (often called systematic sourcing) and other times for spot purchasing (purchasing
when the need arises).
Public exchanges proliferated during the early years of E-commerce, but many of them failed in the dot.com bust. Suppliers often did not
like them because they tended to drive prices down, because they were required to pay transaction fees even when dealing with already
established customers, and because they had to share information that they wanted to keep private. Buyers often did not like them
because they did not know the suppliers or which ones were and were not reliable.
A private exchange (private industrial network) is an exchange run by a single large company. A private exchange links a firm to its
suppliers, distributors, and other key business partners for efficient supply chain management. In this context, a private exchange would
just be the mechanism to implement the supply chain. They can be considered large extranets. Collaborative commerce (sometimes
called C-commerce) is a general term for the methods by which organizations interact electronically to plan, design, build, buy, sell, distribute,
and support goods and services. The key word, of course, is collaborative.

"Insert 24": B4-59, Following Insert 23
Overview of ERP, SCM, and CRM
The first type of system is ERP (Enterprise Resource Planning), a system that attempts to automate a number of business
processes in the manufacturing, logistics, distribution, accounting, HR, and finance areas of an organization. Sometimes ERP systems
include supply chain and customer relationship management components, which are considered separately in this material. SAP,
PeopleSoft, and Oracle are all examples of ERP systems (be careful, Oracle is also a DBMS, and Oracle sells many more applications than
just ERP and now owns PeopleSoft). The Oracle e-business suite, for example, includes supply chain and customer components, along
with asset lifecycle management, procurement, manufacturing, and HR. SAP Business One includes financial accounting, bank
transactions, sales and distribution, purchasing, sales management, service management, MRP (material requirements planning), and
warehouse management. It is possible to buy specific modules of an ERP system and not the whole thing.
Prior to ERP systems, each department in a company would normally have its own separate systems to perform its own separate
functions, and these systems would communicate (interface) with each other. Data definitions were not necessarily consistent, and
essentially the same data was often stored in more than one place. ERP systems attempt to overcome some of these problems by
performing all of the functions and storing one set of data in one place with one definition. However, the one system is necessarily
complex.
Installing an ERP system can be a horrendous activity, requiring hordes of consultants (some of the people in this class might be such
consultants) and people who actually know the system and how the system parameters and configuration tables have to be established to
make the system work for a customer. That is not to say that ERP systems should not be installed, but that organizations should
understand what they are getting into when they decide to do that. Most often it is easier to change business practices (to the system's
best practices) than to customize the software (there is some built-in customization in the configuration table parameters, but, that
customization, of course, has a limit); anything more than that would entail changing the code of the ERP system (if that is even an option)
or building another system on the side. Building another system on the side dilutes the overall "integration" benefits.
The second type of system is Supply Chain Management (SCM). As indicated, a SCM system might be a part of an ERP system or it might
be stand-alone. Regardless, it focuses on the supply side of the business (both the supply of goods from vendors and the supply of
products to customers); it can be thought of as the front end and the back end of the manufacturing process. As an example, Oracle SCM
includes procurement, purchasing, sourcing, transportation management, warehouse management, product lifecycle management, supply
chain planning, and order configuration and management. However, it also includes manufacturing components so it really incorporates
the front end, the back end, and the middle.
The third type of system is Customer Relationship Management (CRM). As indicated, a CRM system might be a part of an ERP system or
it might be stand-alone. Regardless, it focuses on the customer side of the business (it is almost always cheaper to retain an existing
customer than to acquire a new one). As an example, JD Edwards (a part of Oracle) CRM includes pricing, customer self service, mobile
sales, sales force automation, sales order management, and service management.


"Insert 25": B4-64, Item IV.B
Electronic Payment Methods
Digital cash (also electronic cash or E-cash) is currency in an electronic form that moves outside the normal channels of money. Digital
cash can be used by people who want to make purchases over the Internet but who do not want to use their credit cards.
Digital checking is an electronic check with a secure digital signature.
A digital wallet is software that stores credit card and other information to facilitate payment for goods on the Internet. A digital wallet can
hold a user's payment information, a digital certificate to identify the user, and shipping information for delivery.
A micropayment is a digital payment of less than $10. Micropayment systems have been developed to accumulate numerous small
payments into one larger payment through a normal payment process. Think of an EasyPass or EasyTag system on a toll road (they go by
different names in different parts of the country). Each time you go through a toll booth, it might cost $1.00. The various $1.00 charges are
collected and charged to your credit card possibly at the end of the month. It might not be cost efficient to charge credit cards for each of
the individual $1.00 charges.
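The accumulation idea can be sketched with a few lines of hypothetical Python. The class, the card number, and the month-end settlement step are invented purely for illustration; real toll systems obviously have their own processing.

# Hypothetical sketch of batching $1.00 toll charges into one month-end credit card charge.
from datetime import date

class MicropaymentBatch:
    def __init__(self, card_number):
        self.card_number = card_number
        self.charges = []                      # list of (date, amount) tuples

    def record_toll(self, when, amount=1.00):
        self.charges.append((when, amount))

    def settle(self):
        """Roll the accumulated small charges into one larger charge."""
        total = sum(amount for _, amount in self.charges)
        self.charges.clear()
        # charge_credit_card(self.card_number, total)   # placeholder for the real processor call
        return total

batch = MicropaymentBatch("4111-xxxx-xxxx-1111")
for day in (1, 3, 7, 12):
    batch.record_toll(date(2008, 6, day))
print(batch.settle())   # 4.0 -- one $4.00 charge instead of four separate $1.00 charges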
A smart card is a plastic card the size of a credit card that stores digital information. The cards can contain electronic cash and can be
used for payments or access to an ATM machine.
