
Workbook 2. Hardware and Device Configuration

Table of Contents
1. Hardware Overview ........................................................................................................ 4
Discussion ................................................................................................................ 4
The Red Hat Supported Hardware Database ........................................................... 4
Sources of Hardware Information ......................................................................... 4
Kernel Messages, the dmesg Buffer, and /var/log/dmesg ................................... 4
Processor Support .............................................................................................. 6
Symmetric Multiprocessing (SMP) ............................................................... 6
/proc/cpuinfo ............................................................................................ 6
Memory ........................................................................................................... 7
/proc/meminfo .......................................................................................... 7
Harddisks ......................................................................................................... 8
USB drives .............................................................................................. 8
IDE disks ................................................................................................. 8
SCSI disks ............................................................................................... 9
Examples ................................................................................................................. 9
Example 1. Becoming Familiar with a New Machine ................................................ 9
Online Exercises ...................................................................................................... 10
Specification ................................................................................................... 10
Deliverables .................................................................................................... 11
Questions ............................................................................................................... 11
2. Kernel and Kernel Modules ............................................................................................ 14
Discussion .............................................................................................................. 14
The Static Kernel Image ................................................................................... 14
What is the Kernel? ......................................................................................... 15
The Kernel as Resource Manager ....................................................................... 15
The Kernel as Interpreter .................................................................................. 15
Kernel drivers and modules ............................................................................... 16
The lsmod Command ....................................................................................... 16
Inserting Modules with modprobe ....................................................................... 17
Requesting Module Removal with modprobe -r ..................................................... 17
The /etc/modprobe.d/*.conf Files ........................................................................ 18
The /proc/sys Directory and sysctl ...................................................................... 20
Online Exercises ...................................................................................................... 21
Specification ................................................................................................... 21
Deliverables .................................................................................................... 21
Questions ............................................................................................................... 21
3. PCI Devices ................................................................................................................. 24
Discussion .............................................................................................................. 24
The PCI bus ................................................................................................... 24
Hardware Resources ......................................................................................... 24
Interrupt Request Line (IRQ's) and /proc/interrupts ......................................... 25
I/O Ports and /proc/ioports ......................................................................... 25
Device Memory Buffers and /proc/iomem .................................................... 26
Configuring PCI Devices .................................................................................. 26
Loading Modular Device Drivers ................................................................ 26
Assigning Resources ................................................................................ 27
Examples ............................................................................................................... 28
Example 1. Exploring a New Machine .................................................................. 28
Online Exercises ...................................................................................................... 28
Specification ................................................................................................... 29
Deliverables .................................................................................................... 29

Questions ............................................................................................................... 29
4. Filesystem Device Nodes ................................................................................................ 33
Discussion .............................................................................................................. 33
"Everything is a File" ....................................................................................... 33
Filesystem Device Nodes .................................................................................. 33
Why Two Types of Nodes? ............................................................................... 34
The Anatomy of a Device Node ......................................................................... 34
Commonly Used Device Nodes .......................................................................... 36
Symbolic Links as Functional Names .......................................................... 36
Dynamic Device Node Creation: udev ................................................................. 37
Examples ............................................................................................................... 37
Example 2. Allowing udev to add device nodes ...................................................... 37
Example 1. Using the dd Command to Create a backup of a Drive's Master Boot
Record ............................................................................................................ 38
Online Exercises ...................................................................................................... 38
Specification ................................................................................................... 38
Deliverables .................................................................................................... 39
Questions ............................................................................................................... 39
5. Performance Monitoring ................................................................................................. 42
Discussion .............................................................................................................. 42
Performance Monitoring ................................................................................... 42
CPU Performance ............................................................................................ 42
The uptime Command .............................................................................. 42
The top Command ................................................................................... 42
Memory Utilization .......................................................................................... 44
Process Memory ...................................................................................... 44
Disk I/O Cache ....................................................................................... 44
Monitoring Memory Utilization with /proc/meminfo ....................................... 44
Monitoring Memory Utilization with top ...................................................... 45
Why is my memory always 90% full? ......................................................... 45
Examples ............................................................................................................... 45
Example 1. Using top to Analyze System Activity .................................................. 45
Online Exercises ...................................................................................................... 46
Specification ................................................................................................... 46
Deliverables .................................................................................................... 46
Questions ............................................................................................................... 46


Chapter 1. Hardware Overview


Key Concepts
Red Hat maintains a database of supported hardware, accessible at http://bugzilla.redhat.com/hwcert.
Kernel messages are stored in a dynamic buffer referred to as the dmesg buffer. The contents of the
dmesg buffer can be examined with the dmesg command.
The file /var/log/dmesg contains a snapshot of the dmesg buffer taken soon after the most recent
boot.
The file /proc/cpuinfo reports information about the system's processors.
The file /proc/meminfo reports information about the system's memory.
The directory /proc/scsi/ reports information about the system's SCSI devices.

Discussion
The Red Hat Supported Hardware Database
The advantages and disadvantages of the open source development model are particularly relevant to
hardware support. The disadvantage: because the open source community often does not have relationships
with hardware vendors, and is not privy to pre-release information (or sometimes any information), the
most recent version of a given video card, sound card, or some other device is often not supported. (The
emergence of Linux distributors, such as Red Hat, is changing this trend.) The advantage: eventually,
someone in the open source community will want to take advantage of the features of the new device, and
there is a greater chance that it will eventually be supported.
Because of these competing influences, keeping track of which hardware is supported and which is not
can be an issue. Red Hat maintains a database of supported hardware at https://hardware.redhat.com/. The
searchable database helps identify hardware that is well supported by the Red Hat distribution (and Red Hat
technical support services), and hardware that is known to work, but not officially supported by Red Hat.

Sources of Hardware Information


The following resources can be used to help determine which hardware is installed (and recognized by
the kernel) on your system.

Kernel Messages, the dmesg Buffer, and /var/log/dmesg


The first evidence of detected hardware is the stream of messages emitted by the kernel as the system boots,
which quickly flow off of the top of the screen, apparently never to be seen again. These messages, and all
messages emitted by the kernel, are stored in a dynamic kernel buffer referred to as the dmesg buffer. The
dmesg buffer is a "ring buffer". Once the buffer's space has been consumed with messages, it will begin
overwriting the oldest messages with newer ones.
The current contents of the dmesg buffer can be dumped to standard output with the dmesg command. Soon
after a boot, the buffer still begins with the same messages seen on the console at boot time.


[root@station root]# dmesg


Initializing cgroup subsys cpuset
Initializing cgroup subsys cpu
Linux version 2.6.32-131.4.1.el6.x86_64 (mockbuild@x86-003.build.bos.redhat.com) (gcc version 4.4.5 201
Command line: ro root=UUID=09ec85c1-d948-40ee-a288-6875ee3a5ee2 rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM
KERNEL supported cpus:
Intel GenuineIntel
AMD AuthenticAMD
Centaur CentaurHauls
BIOS-provided physical RAM map:
BIOS-e820: 0000000000000000 - 000000000009d000 (usable)
...

If a machine has been running for a while, however, the buffer will have been consumed by intermittent kernel
messages, and this boot-time information will be lost. During the default Red Hat Enterprise Linux startup process,
a snapshot of the dmesg buffer is recorded in the file /var/log/dmesg. This file is overwritten
with each boot, so its contents always reflect the most recent startup.
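Since the snapshot is a plain text file, specific boot-time detection messages can be recovered from it with grep. A minimal sketch (the search term is only an illustration; the matching lines will vary by machine):

[root@station ~]# grep -i memory /var/log/dmesg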

The Hardware Abstraction Layer: hald and lshal


While much of traditional Unix design has withstood the test of time, traditional Unix, and early versions
of Linux, did not anticipate "hot attached" devices. Hot attached devices are devices, such as USB thumb
drives, which are not present as the system boots, but instead are connected once the system is already
up and running.
Much work has gone into recent (2.6) versions of the Linux kernel to better accommodate hot attached
devices, and the hald ("Hardware Abstraction Layer Daemon") utilizes these enhancements. This daemon
connects to the D-Bus system message bus to discover, monitor, and invoke operations on devices. In Red
Hat Enterprise Linux 6, many applications communicate directly with the D-Bus APIs.
While the design of the hardware abstraction layer is beyond our scope, one can view a list of devices
currently managed by hald using the lshal command.
[root@station ~]# lshal
Dumping 109 device(s) from the Global Device List:
------------------------------------------------
udi = '/org/freedesktop/Hal/devices/computer'
info.addons = {'hald-addon-acpi'} (string list)
info.callouts.add = {'hal-storage-cleanup-all-mountpoints'} (string list)
info.interfaces = {'org.freedesktop.Hal.Device.SystemPowerManagement'} (string list)
info.product = 'Computer' (string)
info.subsystem = 'unknown' (string)
info.udi = '/org/freedesktop/Hal/devices/computer' (string)
...

By default, lshal dumps a comprehensive and intimidating amount of information.


The lshal command can also be used to monitor hardware changes dynamically, by adding a -m command
line switch. The following output reflects a USB mouse being attached to the system, followed by a power
management notification from a laptop battery.
[root@station ~]$ lshal -m
Start monitoring devicelist:
------------------------------------------------
12:00:40.899: usb_device_46d_c404_noserial added
12:00:41.031: usb_device_46d_c404_noserial_if0 added
12:00:41.057: usb_device_46d_c404_noserial_usbraw added
12:00:41.178: usb_device_46d_c404_noserial_if0_logicaldev_input added
12:01:48.705: acpi_BAT0 property battery.charge_level.current = 69460 (0x10f54)
12:01:48.707: acpi_BAT0 property battery.voltage.current = 12362 (0x304a)
12:01:48.709: acpi_BAT0 property battery.reporting.current = 69460 (0x10f54)


The /proc Filesystem


Another invaluable resource for determining hardware configuration is the proc filesystem. The proc
filesystem is a virtual filesystem which is implemented by the Linux kernel, invariably mounted to the
/proc directory.
[root@station hwdata]# mount
/dev/sda3 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
/dev/sda1 on /boot type ext4 (rw)
...

The proc virtual filesystem, associated with the device proc, is mounted to the /proc mount
point.
What is meant by a virtual filesystem? None of the files underneath /proc exist on any physical
media. They might be considered a figment of the kernel's imagination, albeit a useful figment. When a
file is read from, the kernel returns a dynamically generated response. (Some files within /proc, when
written to, can be used to change parameters within the kernel.) When the kernel no longer exists, such as
when the machine is shut down, the files in the proc filesystem no longer exist either, in any form.
In our discussion of various hardware, we will appeal to files within the /proc filesystem many times.
Later in the workbook, we focus on using the /proc filesystem as a mechanism for configuring kernel
parameters. In the meantime, the proc(5) man page, the cat command, and some time spent in exploration
can serve as an introduction.
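For example, the file /proc/version reports the running kernel's version string, the same string that appears at the top of the dmesg buffer:

[root@station ~]# cat /proc/version
Linux version 2.6.32-131.4.1.el6.x86_64 (mockbuild@x86-003.build.bos.redhat.com) (gcc version 4.4.5 ...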

Processor Support
In addition to the x86_64 (Intel and AMD compatible) family of processors, the Red Hat Enterprise Linux
distribution supports the Intel and AMD x86 (32 bit) processors, IBM POWER, IBM System z, and S/390
architectures. This course, and the RHCSA and RHCE certifications, cover only the x86_64 compatible
version of the distribution.

Symmetric Multiprocessing (SMP)


Linux supports symmetric multiprocessing, with x86_64 support for up to 4,096 CPUs. Red Hat
Enterprise Linux support is purchased based on the number of populated CPU sockets on the motherboard.
A system with two socket pairs might have 16 processors (four quad-core CPUs) but would be entitled under a
two-socket-pair subscription. Multiprocessor granularity occurs naturally at the process level (i.e., with two CPUs,
a single process will not run twice as fast, but two processes can run concurrently, each on a separate
CPU). With targeted development, Linux also supports multi-threaded processes, where a single process
can spawn multiple threads of execution, which can then run concurrently on multiple CPUs.

/proc/cpuinfo
The proc filesystem file /proc/cpuinfo reports information about detected CPU(s), as seen in the
following example.
[root@station root]# cat /proc/cpuinfo
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 37
model name : Intel(R) Core(TM) i3 CPU M 370 @ 2.40GHz
stepping : 5
cpu MHz : 933.000
cache size : 3072 KB
physical id : 0

siblings : 4
core id : 2
cpu cores : 2
apicid : 5
initial apicid : 5
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr
bogomips : 4799.85
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

The following fields tend to provide the most useful information.

processor      The processor number. On a single processor machine, this is 0. On an SMP machine, the cpuinfo
               file contains multiple stanzas, each identified by a distinct processor number.
model name     The CPU model.
cpu MHz        The speed of the CPU.
cache size     The size of the CPU memory cache, which can significantly reduce memory access times.
flags          Various flags, which indicate capabilities of the CPU. For example, the tsc flag indicates that the
               CPU supports a time stamp counter (which can be used for precise timing information), while the
               vmx flag specifies that the CPU can support full virtualization (more on these, below).
bogomips       A misleading and often misused statistic that is used to correlate CPU cycles to real world time. It
               only merits mention because it is often inappropriately used as a measure of processor speed. Use
               the MHz parameter (above) instead.
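As a quick illustration, counting the processor stanzas reveals how many logical CPUs the kernel detected. On the machine shown above, which reports processor numbers 0 through 3, this returns 4.

[root@station root]# grep -c '^processor' /proc/cpuinfo
4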

Memory
Because the x86 family is a 32-bit architecture, one could reasonably expect the Linux kernel to support
at most 4 GB of memory (2 raised to the 32nd power is roughly 4 gigabytes). Unintuitively, there
are some Intel (and compatible) 32 bit processors that support "Physical Address Extensions" (PAE),
whereby a 32 bit processor can actually address up to 64 GB of memory (though not all at the
same time).
In order to accommodate large memory systems, the Red Hat 32 bit distribution ships with multiple versions
of the kernel. The default kernel is configured to take advantage of up to 4 GB of memory. The Red Hat
Enterprise Linux 5 distribution ships with a reconfigured kernel found in the kernel-PAE package. Once
this package is manually installed, the system can be rebooted and the PAE kernel can be chosen from
GRUB's boot menu.
The x86_64 version of the distribution does not need multiple versions of the kernel to support large
amounts of memory. The standard kernel package includes support for up to 64 TB of physical memory
installed in the system.

/proc/meminfo
The proc filesystem file /proc/meminfo provides statistics about the amount of detected memory, and
current memory utilization, as seen in the following example.
[root@station root]# cat /proc/meminfo
MemTotal:         512172 kB
MemFree:           51996 kB
Buffers:           65064 kB
Cached:           322792 kB
...
SwapTotal:        530136 kB
SwapFree:         530136 kB
...

For now, our discussion is limited to the following fields.


The first two fields report how much physical memory is present, and how much is freely available.
The Buffers and Cached fields collectively report how much memory is currently devoted to the
I/O cache. On a healthy machine, the I/O cache can take up the majority of system memory. The
I/O cache will be discussed in a later lesson.
The SwapTotal and SwapFree fields report the amount of disk space available as a memory reserve
("swap space"), and how much of that reserve is still unused.
Many of the remaining statistics in this file report in more detail how the "used" memory is being utilized.
We will return to this topic in a later lesson.
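For example, the fields discussed above can be pulled directly out of the file with grep (the values shown are those from the example output above):

[root@station root]# grep -E '^(MemTotal|MemFree|SwapTotal|SwapFree):' /proc/meminfo
MemTotal:         512172 kB
MemFree:           51996 kB
SwapTotal:        530136 kB
SwapFree:         530136 kB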

Harddisks
As with processors and memory, the kernel is responsible for automatically detecting the presence of
harddisks. Harddisks generally use either the IDE or SCSI bus protocol, and Linux uses the following
naming convention for each.

USB drives
USB hard drives are also seen as SCSI drives and assigned the device names sdx, where x is replaced
with one or more lowercase letters. Even a USB floppy drive is seen as a SCSI drive and would use the
next available sd device letter.
While it does not assist in determining the hard drive device name, the lsusb command can be used to view a list
of USB devices detected on the system. In the following output, a Cruzer Flash Drive is the removable
USB stick and the Logitech device is a mouse.
[elvis@station ~]$ lsusb
Bus 002 Device 005: ID 0781:5150 SanDisk Corp. SDCZ2 Cruzer Mini Flash Drive (thin)
Bus 002 Device 004: ID 046d:c064 Logitech, Inc.
Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 003: ID 5986:0149 Acer, Inc
Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

IDE disks
The IDE bus protocol allows computers to have multiple IDE controllers, each of which can manage two
disks (referred to as the "master" and the "slave" drives). Many common desktop computers ship with
two IDE controllers, referred to as the "primary" and "secondary" controllers. This configuration allows
four IDE drives to be connected, referred to as the "primary master", "primary slave", "secondary master",
and "secondary slave" drives. The role of a particular drive is determined entirely by how it is physically
cabled to the machine.
In past versions, Linux referred to IDE drives as hdx, where x is replaced by a single lowercase letter. The
letters map directly to the physical positions of the drive.

Table 1.1. IDE Harddisk Naming Conventions

Name    Position
hda     primary master
hdb     primary slave
hdc     secondary master
hdd     secondary slave
hde     tertiary master
...     ...

Recently, drives which support the newer "Serial ATA" (SATA) interface have become popular. Although these
drives are technically IDE drives, recent Linux kernels, and Red Hat Enterprise Linux, use SCSI
emulation to interface with the SATA controller. Therefore, beginning in Red Hat Enterprise Linux 5,
SATA drives are reported as SCSI drives.

SCSI disks
The Linux naming convention for SCSI disks is not as transparent as for IDE disks. Usually, Linux uses
names of the form sdx, where x is replaced with one or more lowercase letters. Unfortunately, the names
cannot be mapped directly to the physical position of the drive, or even to the drive's SCSI ID. Linux
usually refers to the first detected SCSI drive as sda, the second drive as sdb, and so on. Predicting which
of multiple connected drives Linux will consider the "first", "second", and so on, however, is notoriously
difficult. Even worse, if a new SCSI drive is added to the machine, a preexisting disk referred to as sdb
might (upon rebooting) become sdc, with the newly connected drive taking the name sdb.
One exception to the sd rule is the CDROM drive. The CDROM or DVD drive will be seen as srx,
where x is replaced by a number.
The proc filesystem directory /proc/scsi contains subdirectories for various types of SCSI devices.
The subdirectories contain low level information about the device.
[elvis@station ~]$ ls /proc/scsi/
device_info scsi sg usb-storage

Although the proc filesystem provides a /proc/scsi directory, and in particular a file /proc/scsi/scsi
which lists all detected disks, neither provides simple information about the names of currently
recognized SCSI disks. The best approach is to watch for kernel messages (as reported by the dmesg
command, or in the file /var/log/dmesg) about detected SCSI disks.
As an example, the following output from /proc/scsi/scsi reports an "ATA" vendor for a locally
attached SATA drive and a CD-ROM device. It does not, however, mention that these are sda and
sr0 respectively.
[elvis@station ~]$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST9500325AS      Rev: 0005
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: hp       Model: DVDRAM GT30L     Rev: mP04
  Type:   CD-ROM                           ANSI SCSI revision: 05
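A minimal sketch of the recommended approach, then, is simply to search the kernel messages for SCSI disk attachments; lines of the general form "sd 0:0:0:0: [sda] Attached SCSI disk" (the exact wording varies by kernel version) reveal which sd name each detected disk received.

[elvis@station ~]$ dmesg | grep -i 'attached scsi'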

Examples
Example 1. Becoming Familiar with a New Machine
The user elvis has logged into his account on a new machine, and would like to find out more about its
hardware. First, he examines the machine's CPU.


[elvis@station elvis]$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 37
model name : Intel(R) Core(TM) i3 CPU M 370 @ 2.40GHz
stepping : 5
cpu MHz : 933.000
cache size : 3072 KB
physical id : 0
...

He notes that the processor is an Intel Core i3 processor, with a 3072 kilobyte cache. He next examines
the machine's memory.
[elvis@station elvis]$ cat /proc/meminfo
MemTotal:        3850932 kB
MemFree:         1895616 kB
Buffers:           37252 kB
Cached:           317836 kB
SwapCached:        20640 kB

Observing the MemTotal: line, he determines that the machine has approximately 4 gigabytes of RAM. He next checks
to see which drives are connected, whether they are hard disks or CD-ROMs, and their model.
[elvis@station elvis]$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST9500325AS      Rev: 0005
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: hp       Model: DVDRAM GT30L     Rev: mP04
  Type:   CD-ROM                           ANSI SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: SanDisk  Model: Cruzer Mini      Rev: 0.2
  Type:   Direct-Access                    ANSI SCSI revision: 02

Online Exercises
Lab Exercise
Objective: Determine hardware configuration information about the local machine.
Estimated Time: 10 mins.

Specification
Collect the following information about your machine's hardware, and store it in the specified files. Each
file should contain a single word answer.
File                    Contents
~/lab2.1/cpuspeed       The speed of your current processor, in megahertz.
~/lab2.1/cpucache       The size of your CPU's cache, in kilobytes.
~/lab2.1/memsize        The amount of physical memory, in megabytes.

If you have performed the lab correctly, you should be able to generate output similar to the following.
(Do not be concerned if your actual values differ).
[student@station student]$ head lab2.1/*
==> lab2.1/cpucache <==
256

==> lab2.1/cpuspeed <==
697.867

==> lab2.1/memsize <==
255

Deliverables
1. The files tabled above, each of which contains a single word answer.
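One possible (unofficial) approach to collecting these values, assuming they are extracted with grep and awk, is sketched below; the rounding convention used for memsize may differ slightly from the sample output above.

[student@station student]$ mkdir -p ~/lab2.1
[student@station student]$ grep -m1 'cpu MHz' /proc/cpuinfo | awk '{print $4}' > ~/lab2.1/cpuspeed
[student@station student]$ grep -m1 'cache size' /proc/cpuinfo | awk '{print $4}' > ~/lab2.1/cpucache
[student@station student]$ awk '/^MemTotal:/ {print int($2/1024)}' /proc/meminfo > ~/lab2.1/memsize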

Questions
1. Which processor types are supported by Red Hat Enterprise Linux?
   a. Intel and AMD x86 32 bit processors
   b. Intel and AMD x86 64 bit processors
   c. IBM s/390 processors
   d. IBM series Z processors
   e. All of the above

2. The /proc filesystem provides what function?
   a. It provides multiprocessing (SMP) support for the Linux kernel.
   b. It provides support for larger than 4GB of memory installed in the system.
   c. It provides space for memory swap (paging).
   d. It provides an interface for humans to view information in the kernel.
   e. None of the above

3. What is an expected device name for a USB drive on a system?
   a. /dev/hdb
   b. /dev/ubb
   c. /dev/sdb
   d. /dev/usbb
   e. None of the above

4. Which file contains a snapshot of the kernel's dmesg buffer soon after the most recent boot?
   a. /var/log/messages
   b. /var/log/boot.log
   c. /var/log/dmesg
   d. /var/log/klog
   e. None of the above

5. Which command displays the current contents of the kernel's dmesg buffer?
   a. klog
   b. kmesg
   c. kdump
   d. dlog
   e. None of the above

6. What command can be used to monitor hardware changes dynamically?
   a. monitor -hw
   b. lshw
   c. watchhw
   d. hwmonitor
   e. lshal -m

Use the following transcript to answer the next 4 questions.

[root@station root]$ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 8
model name      : Pentium III (Coppermine)
stepping        : 3
cpu MHz         : 797.435
cache size      : 256 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 2
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 mmx fxsr sse
bogomips        : 1592.52
[root@station root]$ cat /proc/meminfo
MemTotal:       255184 kB
MemFree:         52280 kB
Buffers:         24436 kB
Cached:          90608 kB
SwapCached:       6532 kB
...
SwapTotal:      521632 kB
SwapFree:       504940 kB
[root@station root]$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST9500325AS      Rev: 0005
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: hp       Model: DVDRAM GT30L     Rev: mP04
  Type:   CD-ROM                           ANSI SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: SanDisk  Model: Cruzer Mini      Rev: 0.2
  Type:   Direct-Access                    ANSI SCSI revision: 02

7. In general terms, what is the speed of the processor?
   a. 800 megahertz
   b. 256 megahertz
   c. 1.6 gigahertz
   d. Not enough information is provided.
   e. None of the above

8. In general terms, how much memory is installed on the local machine?
   a. 256 kilobytes
   b. 128 megabytes
   c. 256 megabytes
   d. 512 megabytes
   e. None of the above

9. How many USB disks are attached to the machine?
   a.
   b.
   c.
   d.
   e. Not enough information is provided.

10. Who manufactured the CDROM drive?
    a. ATA
    b. SANDISK
    c. Intel
    d. HP
    e. Not enough information is provided.


Chapter 2. Kernel and Kernel Modules


Key Concepts
The kernel's job is to manage system resources and facilitate communication between hardware and
software.
Kernel functionality can be either built-in or provided by modules.
Modules are managed with the lsmod, insmod, modprobe and rmmod commands.
Extra module configuration directives can be stored in /etc/modprobe.conf and /etc/modprobe.d/*.conf files.
Kernel updates are provided in the form of binary RPM packages. When installing an updated kernel,
one should take care to use rpm -i, instead of rpm -U.
Internal kernel parameters can be specified using the sysctl command, or by writing to files within the
/proc/sys directory directly.
The /etc/sysctl.conf file can be used to set kernel parameters automatically at startup.

Discussion
The Static Kernel Image
One of the reasons for the popularity of the Linux kernel is its modular design. Device drivers may be
implemented in either of two ways: as a module or as part of the static kernel image. How a device driver
is implemented dictates how its configuration parameters (if any) are specified.
The static kernel image is the file that is loaded when your system is booted. In Red Hat Enterprise Linux,
the image file conventionally lives in the /boot directory, with the name vmlinuz-version, where
version is replaced with the kernel's version number.
Device drivers which are used during the boot process, before filesystems are available, such as IDE device
drivers and console drivers, are usually implemented in the core kernel image. Because these device drivers
are loaded as part of the kernel image itself, the only way to pass parameters to them is at boottime. Most
Linux bootloaders, such as GRUB and LILO, allow users to pass parameters to the kernel as it is booted,
through the kernel command line. Here, we only introduce the concept of the kernel command line. Later
in the course, we discuss how it is configured.
When read, the file /proc/cmdline reports the command line that was used to boot the current instance
of the kernel.
[root@station root]# cat /proc/cmdline
ro root=LABEL=/ rd_NO_LUKS rhgb quiet vga=0x317

Documentation for some of the more commonly used kernel boottime parameters can be found in the
bootparam(7) man page.
While not discussing the specifics of any particular device driver, this lesson has tried to introduce the
following concepts.


Device drivers in Linux can be implemented either statically or as a module.


Static device drivers are configured at boottime through the kernel command line.
Modular device drivers are configured primarily through the /etc/modprobe.d/*.conf files at
load time.

What is the Kernel?


The kernel is the core component of an operating system. The kernel serves two primary functions: the
first is to act as a resource manager, transparently allocating memory, CPU access, etc., to processes; the
second is to serve as an interpreter, relaying messages between processes and the hardware.

The Kernel as Resource Manager


The kernel's role as resource manager is comparable to that of your own autonomic nervous system. You
do not have to think about breathing or about keeping your heart beating. Oxygen gets where it needs to
go and carbon dioxide is released without conscious action on your part. Likewise, the kernel performs
a number of functions vital to the operation of a computer system but which require no special action
by the user. These include managing the queue of processes requesting use of the processor, allocating
memory to processes and ensuring that one process is not allowed to access portions of memory reserved
for another. Because these functions are extremely complex and because the system administrator does
little or nothing to affect them, we will spend most of our time covering the kernel's other major role.

The Kernel as Interpreter


Most students should be familiar by now with the concepts of hardware and software. Hardware refers
to the physical components of the computer, whereas software refers to the magnetically or otherwise
encoded sets of data (otherwise known as files and applications) that are used to control the hardware.
Everything one does with a computer, from performing basic math to playing video games, involves
communicating with the computer's hardware via its software.
To illustrate the kernel's role as an interpreter between hardware and software, consider a simple CD-player
application. By clicking on the application's "play" button you are telling your computer to execute
the instructions associated with that button. Those instructions might be something like:
Spin up the CDROM in the drive
Read data from the CDROM
Play the data as audio through the sound card
As you can see, a single action on the part of the user can translate into a much more complicated set
of actions performed by the application. This simplification is the essence of what an application is:
an interface that allows the user to easily execute an otherwise complicated set of instructions. Such
instructions become even more complicated when one considers that a simple function like "spin up the
CDROM in the drive" can be implemented very differently on different hardware.
Each piece of hardware in a computer system is designed to perform certain actions in response to certain
electrical signals. In order to spin up the CDROM we need to send the correct signal to the device. But how
do we (or our application) know which signal to send? Answering this question at the application level gets
very sticky indeed. Do we need to build in a complete list of instructions for every known CDROM drive


into our CD-player application? What happens if the drive we want to use is newer than the application?
What if two applications want to use the same device simultaneously? How can such conflicts be managed?
The answer is to manage all communication with hardware through the operating system or, more
specifically, the kernel. Applications almost never communicate with hardware directly. Instead they
initiate system calls to the operating system. A system call is a special function that asks the OS to have the
hardware perform some action. In this case, the CD-player application performs an open() system call on
/dev/cdrom. This requests that the operating system open the device and prepare it for reading, but does not
tell the OS exactly how this should be done. Next the application issues commands such as read and write
via the ioctl() system call in order to send data from the CDROM to the sound card. Again the application
neither knows nor cares exactly how the devices it is talking to work. That is the job of the kernel.

The Linux API


The complete set of system calls that a kernel supports is referred to as the Application
Programming Interface (API) of the operating system. Detailed documentation of the system
calls available in the C implementation of Linux's API is available in chapter 2 of the Linux
manual. If you are curious what system calls a program makes, experiment with the strace
command (i.e. strace cat /etc/hosts ).
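For example, strace prints one line per system call, showing the call, its arguments, and its return value; a line similar to open("/etc/hosts", O_RDONLY) = 3 appears when cat opens the file (the full trace also includes the calls made while loading shared libraries). The -e option can filter the trace to calls of interest, a minimal sketch of which is:

[root@station ~]# strace -e trace=open cat /etc/hosts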

Kernel drivers and modules


So how does the kernel know what signals to use when communicating with a device? In some cases the
appropriate instructions are built into the kernel itself, but usually a module is used instead.
A module is a set of functions that work as part of the kernel but can be loaded and unloaded as needed.
While support for any device can be compiled directly into the kernel, by using modules and only loading
those that are necessary the kernel is kept small and versatile. When people refer to a device driver for
Linux, they are generally talking about a module.
Each module's code is stored in a separate object file (denoted by the .ko extension), which is designed
for a specific version of the Linux kernel denoted by a kernel version number. Every time a new kernel
is released, the kernel version number is incremented and a new set of modules have to be created
(these are included in the kernel's rpm). You can view your current kernel's version number with the
uname -r command. Modules reside in the /lib/modules/kernelversion/ directory tree (where
kernelversion is replaced with the current kernel version) so multiple kernel versions and their
associated modules can exist on the system simultaneously.
Your kernel will usually load the appropriate module when a new device is detected. Alternately, modules
can be handled manually with the lsmod (list modules) and modprobe (load and unload modules)
commands.

The lsmod Command


The simplest module-management command is lsmod. Run as root, it takes no arguments and simply lists
the modules that are currently loaded into the kernel. The columns displayed show the module's name, its
size in memory, the number of system components currently using the module and a comma-delimited list
of other modules that depend on it.
[root@station ~]# lsmod
Module                  Size  Used by
...
iptable_filter          2759  1
ip_tables              17765  3 iptable_nat,iptable_mangle,iptable_filter
ip6t_REJECT             4562  2
nf_conntrack_ipv6       8650  2
nf_defrag_ipv6         12148  1 nf_conntrack_ipv6
...
ext4                  359671  4
mbcache                 7918  1 ext4
jbd2                   88768  1 ext4
sd_mod                 38196  5
crc_t10dif              1507  1 sd_mod
sr_mod                 16194  0
cdrom                  39769  1 sr_mod
ahci                   40197  4
...
snd_hda_intel          25261  2
snd_hda_codec          86617  3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel
snd_hwdep               6714  1 snd_hda_codec
snd_seq                56557  0
snd_seq_device          6626  1 snd_seq
snd_pcm                84668  3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
snd_timer              23087  2 snd_seq,snd_pcm
snd                    70053  13 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,s
soundcore               8052  1 snd
...

Examine the output above. Note that, among other things, the iptables firewall (ip_tables), support for
ext4 filesystems (ext4) and support for the Intel sound card (snd_hda_intel) are all implemented through
modules currently loaded on this machine. Remember that iptables is a kernel-level service, not a device,
so it is important to understand that not all modules are device drivers.
It is also important to understand module dependencies. The lsmod output above shows that the
snd_hda_intel module depends on other modules, including snd_pcm and snd. Without these modules
loaded, snd_hda_intel cannot be loaded. Likewise, neither snd_pcm nor snd can be unloaded until
snd_hda_intel is unloaded. With that in mind, let's examine the commands for loading and unloading
modules.

Inserting Modules with modprobe


The modprobe command is used for loading modules. If the module that modprobe is instructed to load
depends on a module that is not in memory, modprobe will load the dependency first and then load the
requested module. modprobe does not work out these dependencies on its own each time; it reads
a file called /lib/modules/kernelversion/modules.dep, which lists all module dependencies.

Figure 2.1. Loading modules with modprobe

Requesting Module Removal with modprobe -r


To remove a module from memory, use the modprobe -r command. This command will remove the
specified module as well as any otherwise unused modules it was dependent upon.


The next figure illustrates the use of modprobe -r to unload the usb-storage module and assumes
that the usbcore module is not in use by any other system components.

Figure 2.2. Unloading modules with modprobe -r
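As a concrete illustration, a minimal sketch of loading and unloading a module by hand follows (usb-storage is used here only as an example; modprobe prints nothing on success, and while the module is loaded lsmod should list it, as usb_storage, with a usage count of 0):

[root@station ~]# modprobe usb-storage
[root@station ~]# lsmod | grep usb_storage
[root@station ~]# modprobe -r usb-storage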

The /etc/modprobe.d/*.conf Files


While most modules should work fine by default, parameters may be passed to modular device drivers
as they are loaded. Whenever a module is loaded "on demand" by the kernel, the /etc/modprobe.d/
*.conf files are examined for module parameters. For example, the sb kernel module, which implements
the device driver for SoundBlaster compatible soundcards, can be configured to expect a particular type
of soundcard by passing a parameter of the form type=N, where N is replaced with some integer. If the
appropriate line is added to the file /etc/modprobe.d/custom.conf, this parameter will be set
whenever the kernel loads the sb module.

[root@station root]# cat /etc/modprobe.d/custom.conf


options sb type=3
alias snd-card-0 snd-intel8x0
install snd-intel8x0 /sbin/modprobe --ignore-install snd-intel8x0 && /usr/sbin/alsactl restore >/dev/nu
remove snd-intel8x0 { /usr/sbin/alsactl store >/dev/null 2>&1 || : ; }; /sbin/modprobe -r --ignore-remo
alias usb-controller uhci-hcd

This line specifies parameters to be used whenever the sb kernel module is implicitly loaded.
While these *.conf files support a number of directives, the most common include:
alias           The alias directive allows you to associate a module with an arbitrary
                name. Aliases are usually used to bind a module (and thus a specific
                physical device) to a logical device. For example:

                alias snd-card-0 snd-intel8x0

                would cause the first detected Intel-i810 chipset soundcard to
                become the primary sound device for your system even if another
                device, such as one built onto the motherboard, would normally take
                precedence.
install, remove These directives allow you to override modprobe's default behavior
                when loading or unloading a module. Whereas modprobe normally
                just loads or unloads the specified module, sometimes proper
                management requires a more complex action. For example, when
                unloading a sound module, the command alsactl store should be
                run to store the system's volume levels so they are not lost upon
                reboot. Conversely, when loading a sound driver the command
                alsactl restore should be run to re-load the volume settings.

                The example below shows modprobe.conf lines that accomplish
                this for the snd-intel8x0 sound driver. Note that each line lists
                two commands: one to execute alsactl and another to load/unload
                the module using modprobe, which is given the --ignore-install or
                --ignore-remove argument. These options prevent modprobe from
                re-running the install or remove instructions, which would create an
                infinite loop.

                install snd-intel8x0 /sbin/modprobe --ignore-install snd-intel8x0 && /us
                remove snd-intel8x0 { /usr/sbin/alsactl store >/dev/null 2>&1 || : ; };

Determining what parameters are available for a particular module can be the hardest part. Modules
generally do not have man pages. Many modules are documented with the kernel documentation in the
kernel-doc package. Another useful tool is the modinfo command. For some modules such as msdos there
is not much information.
[root@station root]# modinfo msdos
filename:       /lib/modules/2.6.32-131.0.15.el6.x86_64/kernel/fs/fat/msdos.ko
description:    MS-DOS filesystem support
author:         Werner Almesberger
license:        GPL
srcversion:     150BBB1F801B3B55FB50F0D
depends:        fat
vermagic:       2.6.32-131.0.15.el6.x86_64 SMP mod_unload modversions

The name of the file, a brief description, author, and license information is provided. Any module
dependencies are also listed. The msdos module does not have any configurable parameters. The Intel(R)
PRO/1000 Network Driver known as e1000 does have a number of configurable parameters.
[root@station root]# modinfo e1000
...
depends:
vermagic:       2.6.32-131.0.15.el6.x86_64 SMP mod_unload modversions
parm:           TxDescriptors:Number of transmit descriptors (array of int)
parm:           RxDescriptors:Number of receive descriptors (array of int)
parm:           Speed:Speed setting (array of int)
parm:           Duplex:Duplex setting (array of int)
parm:           AutoNeg:Advertised auto-negotiation setting (array of int)
parm:           FlowControl:Flow Control setting (array of int)
parm:           XsumRX:Disable or enable Receive Checksum offload (array of int)
parm:           TxIntDelay:Transmit Interrupt Delay (array of int)
parm:           TxAbsIntDelay:Transmit Absolute Interrupt Delay (array of int)
parm:           RxIntDelay:Receive Interrupt Delay (array of int)
parm:           RxAbsIntDelay:Receive Absolute Interrupt Delay (array of int)
parm:           InterruptThrottleRate:Interrupt Throttling Rate (array of int)
parm:           SmartPowerDownEnable:Enable PHY smart power down (array of int)
parm:           KumeranLockLoss:Enable Kumeran lock loss workaround (array of int)
parm:           copybreak:Maximum size of packet that is copied to a new buffer on receive (uint)
parm:           debug:Debug level (0=none,...,16=all) (int)

The last line indicates that there is a debug parameter which can be set at a level from 0 to 16. Other
parameters are not as clear on their usage but do provide enough information to investigate further with
Internet searches or a peek at the source code.
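Such a parameter can be supplied on the command line when loading the module manually, or recorded as an options line in a custom /etc/modprobe.d file so that it is applied whenever the module is loaded. A minimal sketch (the file name e1000.conf is arbitrary, and debug=2 is only an illustrative value):

[root@station root]# modprobe e1000 debug=2
[root@station root]# cat /etc/modprobe.d/e1000.conf
options e1000 debug=2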

No Need to "Install" Device Drivers


Installing device drivers for particular devices is not the issue in Linux that it is in other operating
systems. Because of the modular nature of the kernel and the freedoms provided by open
source software, modules for most supported hardware are included by default. A device driver
implemented as a kernel module is only loaded if the hardware it manages is detected, so the only
wasted resource is a little bit of disk space.


The /proc/sys Directory and sysctl


As we learned earlier, most of the files in /proc are read-only representations of information about your
system. There is an exception to this rule: the files in /proc/sys/. The /proc/sys/ directory contains
a tree of writable files, each of which represents a different kernel setting.
There are too many kernel settings in /proc/sys/ to cover what each of them does here. Complete
coverage for the curious is available in the Red Hat Enterprise Linux Reference Guide.
While files in /proc/sys/ can be written to and read from directly, the most common method for
manipulating them is with the sysctl command. When using sysctl, the settings in /proc/sys/ are
referred to by a period-delimited name instead of a filesystem path. For example, the setting in
/proc/sys/fs/file-max, which dictates the maximum number of files that may be opened at once, is referred
to by sysctl as simply fs.file-max. Below you will see sysctl used to display the value of this setting.
[root@station]# sysctl fs.file-max
fs.file-max = 52371

This kernel is configured to allow a maximum of 52,371 files to be opened simultaneously. To alter a
setting with sysctl , the -w (as in write) command line switch must be used.
[root@station]# sysctl -w fs.file-max=60000
fs.file-max = 60000
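The same setting can also be inspected by reading the underlying proc file directly, and writing a new number into the file with echo would change it just as sysctl -w does (and, likewise, would not persist across a reboot):

[root@station]# cat /proc/sys/fs/file-max
60000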

The maximum number of open files on this system is now 60,000. However, because /proc is a pseudo-filesystem,
existing only in RAM, all non-default settings will be lost when the system reboots. In order to
make changes to /proc/sys/ persistent, sysctl must be configured to automatically reinstate settings
when the system boots up. Custom sysctl settings can be stored in the /etc/sysctl.conf file,
which is read in during system initialization by the sysctl -p command, which is run by
/etc/rc.d/rc.sysinit. Each line in sysctl.conf follows the format "path.to.setting = value". The default
sysctl.conf usually looks like this.
[root@station root]# cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
...

To make our change to fs.file-max permanent, we would add the following line to /etc/sysctl.conf.
# Increase maximum filehandles
fs.file-max = 60000

Be aware that altering sysctl.conf has no effect until the system is rebooted or sysctl -p is run
manually.


Online Exercises
Lab Exercise
Objective: Manage kernel modules, updates, and parameters.
Estimated Time: 10 mins.

Specification
1. Use the modprobe command to force the insertion of the firewire_net kernel module. Use the
lsmod command to confirm that the module was inserted.
[root@station root]# modprobe firewire_net
[root@station root]# lsmod | grep firewire
firewire_net           13401  0
firewire_core          51195  1 firewire_net
crc_itu_t               1683  1 firewire_core

2. Configure /etc/sysctl.conf such that upon bootup, the maximum number of open files on the
machine (as reported from /proc/sys/fs/file-max) is set to 50000. Use the sysctl command
to set this value directly, as well.
[root@station root]# grep file-max /etc/sysctl.conf
fs.file-max = 50000
[root@station root]# sysctl -w fs.file-max=49999
fs.file-max = 49999
[root@station root]# cat /proc/sys/fs/file-max
49999

Deliverables
1. A currently installed firewire_net kernel module.
2. A properly configured /etc/sysctl.conf configuration file, such that the kernel's maximum number of open files is set to 50000 upon reboots.
3. A current kernel configuration with the maximum number of open files, as reported by /proc/sys/fs/file-max, set to 49999.

Questions
1. In what directory are kernel modules found?
   a. /lib/modules/kernel-version/
   b. /boot/modules/
   c. /usr/share/kernel-version/modules/
   d. /var/lib/modules/

2. What command is used to request the removal of a module?
   a. depmod -x
   b. modules -r
   c. modprobe -r
   d. rm /proc/modules

3. Which of the following would be a conventionally named static kernel image?
   a. /boot/image-2.6.32-8
   b. /boot/zimage-2.6.32-8
   c. /boot/vmlinux-2.6.32-8
   d. /boot/vmlinuz-2.6.32-8
   e. None of the above

An administrator attempts to remove a module by running the following command.

[root@station root]# modprobe -r cdrom
cdrom: Device or resource busy

4. Which of the following could be a reason that the module cannot be removed?
   a. The module's usage count is greater than 0.
   b. The module is depended on by another module which is in use.
   c. Once a module is inserted into the kernel, it cannot be removed.
   d. A or B

5. What command is used to list all currently inserted kernel modules?
   a. depmod -l
   b. modules
   c. lsmod
   d. cat /proc/kmod

6. Which command can be used to request the insertion of a kernel module?
   a. modules -i
   b. modprobe
   c. vmlinuz
   d. linux -m

7. What configuration file is examined whenever modprobe inserts a module?
   a. /etc/modprobe.d/*.conf
   b. /etc/sysconfig/modules
   c. /boot/modules/modprobe.conf
   d. /etc/modtab
8. Which file contains values for /proc/sys/ directory entries which are set automatically at boottime?
   a. /etc/sysctl.conf
   b. /etc/proc.conf
   c. /etc/sysconfig/proc
   d. /etc/systab

9. After editing the file mentioned in the previous question, what command can be run to immediately implement the changes?
   a. procctl -s
   b. kernelctl -i
   c. setproc
   d. sysctl -p

10. Within the configuration file mentioned above, which of the following lines would correctly set the value of /proc/sys/net/ipv4/ip_forward to 1?
    a. proc.sys.net.ipv4.ip_forward = 1
    b. net.ipv4.ip_forward = 1
    c. ip_forward = 1
    d. sys.net.ipv4.ip_forward = 1


Chapter 3. PCI Devices


Key Concepts
The lspci command lists all detected PCI devices. Including the -v command line switch lists
configuration information associated with each device.
The file /proc/interrupts lists the system's interrupt request line (IRQ) assignments and activity.
The file /proc/ioports lists the system's I/O port assignments.
The file /proc/iomem lists the physical addresses of the system's RAM and device memory buffers.

Discussion
The PCI bus
The PCI bus plays a primary role in most x86_64 compatible architectures. All PCI devices share a
common configuration protocol, whereby PCI devices can identify themselves with hardwired Vendor
and Device IDs. PCI devices include common expansion card based devices, such as sound cards and
network controllers, but also bridges which connect other buses to the primary PCI bus. The lspci command
can be used to list all attached PCI devices, as in the following example.

[root@station root]# lspci


00:00.0 Host bridge: VIA Technologies, Inc. VT8375 [KM266/KL266] Host Bridge
00:01.0 PCI bridge: VIA Technologies, Inc. VT8633 [Apollo Pro266 AGP]
00:05.0 Multimedia audio controller: Creative Labs SB Audigy (rev 03)
00:05.1 Input device controller: Creative Labs SB Audigy MIDI/Game port (rev 03)
00:05.2 FireWire (IEEE 1394): Creative Labs SB Audigy FireWire Port
00:07.0 Ethernet controller: Linksys Network Everywhere Fast Ethernet 10/100 model NC100 (rev 11)
00:10.0 USB Controller: VIA Technologies, Inc. USB (rev 80)
00:10.1 USB Controller: VIA Technologies, Inc. USB (rev 80)
00:10.2 USB Controller: VIA Technologies, Inc. USB (rev 80)
00:10.3 USB Controller: VIA Technologies, Inc. USB 2.0 (rev 82)
00:11.0 ISA bridge: VIA Technologies, Inc. VT8235 ISA Bridge
00:11.1 IDE interface: VIA Technologies, Inc. VT82C586A/B/VT82C686/A/B/VT8233/A/C/VT8235 PIPC Bus Maste
00:12.0 Ethernet controller: VIA Technologies, Inc. VT6102 [Rhine-II] (rev 74)
01:00.0 VGA compatible controller: S3 Inc. VT8375 [ProSavage8 KM266/KL266]

This device connects the AGP ("Advanced Graphics Port"), used by many video cards, to the PCI bus.
The Audigy soundcard on this machine is an example of a "multifunction" PCI device. This and
the following three lines identify the single device filling the role of three individual devices: a
soundcard, a MIDI/joystick controller, and a FireWire controller.
These two devices are two individual network interface cards. (Although not evident from this output,
one is supplied with the motherboard as an "on board" component, and the other is an expansion card.)
These three PCI "devices" are examples of controllers for alternate buses: The USB bus, the ISA
bus, and the IDE bus. The PCI bus provides the base infrastructure for other buses. Notice that IDE,
ISA, and USB devices will not be listed by the lspci command, only the bus controllers.
The lspci command provides a good starting place when attempting to discover what hardware is connected
to an unfamiliar machine.
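For a closer look at a single device, lspci can be pointed at one bus address. As a brief sketch (using the
00:12.0 Ethernet controller from the listing above; the exact output varies from machine to machine):

[root@station root]# lspci -v -s 00:12.0
[root@station root]# lspci -n -s 00:12.0

The -s switch selects a device by its bus:slot.function address, -v adds configuration details, and -n reports
the numeric Vendor and Device ID's rather than their names.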

Hardware Resources
The x86 and x86_64 architectures provide common mechanisms for hardware devices to interact with the
Linux kernel. When adding new devices to a machine, care must be taken to share the following resources
without conflicts between the various devices.


Interrupt Request Line (IRQ's) and /proc/interrupts


Every device needs some way to grab the kernel's attention, as if to say "Hey, someone just moved the
mouse, and I want to tell you about it", or "Hey, I've finished transferring that block of information to the
disk like you told me to". Many devices use an interrupt request line, or IRQ, for this purpose. In the x86
architecture, 15 IRQ lines are available, and multiple devices can share a single IRQ line as well.
The proc filesystem file /proc/interrupts displays the available IRQ lines, and any device drivers
which are using them. The absolute number of times that the interrupt has occurred (since the machine
booted) is included as well.
[root@station root]# cat /proc/interrupts
           CPU0
  0:    6091325    XT-PIC  timer
  1:      41608    XT-PIC  keyboard
  2:          0    XT-PIC  cascade
  3:          0    XT-PIC  ehci-hcd
  5:     115473    XT-PIC  usb-uhci
  8:          1    XT-PIC  rtc
 10:   16384184    XT-PIC  usb-uhci, eth0
 11:    9720993    XT-PIC  usb-uhci, eth1, Audigy
 12:     848836    XT-PIC  PS/2 Mouse
 14:     190363    XT-PIC  ide0
 15:    1765002    XT-PIC  ide1
NMI:          0
ERR:          0

On an SMP machine, hardware interrupts are raised on individual CPU's, and a separate column of
interrupt activity for each CPU would be included.
IRQ 0 is invariably used by the timer device driver. The timer interrupts the kernel at a constant
rate of 1000 interrupts per second, prompting the kernel to interrupt the normal flow of activity and
perform any periodic or pending tasks.
Three completely unrelated device drivers (the USB controller, an Ethernet network interface card,
and the Audigy sound card) are all sharing IRQ 11.
The NMI field counts the number of occurrences of "Non Maskable Interrupts". These interrupts are
normally used to signal low level hardware error conditions.
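To watch a particular interrupt line, it is often enough to filter or repeatedly redisplay this file. A simple
sketch (the interrupt name is taken from the listing above, and the refresh interval is arbitrary):

[root@station root]# grep eth0 /proc/interrupts
[root@station root]# watch -n 1 cat /proc/interrupts

The first command shows the running count for the eth0 interrupt; the second redisplays the whole table
every second, making active interrupt lines easy to spot.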

I/O Ports and /proc/ioports


After getting the kernel's attention (by raising an interrupt), devices usually want to perform some type of
data transfer into or out of the system. The x86 architecture provides a distinct 16 bit address space for
devices, whose addresses are referred to as I/O ports. When communicating with the kernel through I/O
ports, the kernel and the device must agree on what ports are being used.
The proc filesystem file /proc/ioports displays which ports have been claimed by which device
driver. (The port addresses are displayed as hexadecimal digits.)
[root@station root]# cat /proc/ioports
0000-001f : dma1
0020-003f : pic1
0040-005f : timer
0060-006f : keyboard
0070-007f : rtc
...
03c0-03df : vesafb
03f6-03f6 : ide0
03f8-03ff : serial(auto)
0cf8-0cff : PCI conf1
c000-c01f : Creative Labs SB Audigy
c000-c01f : Audigy
c400-c407 : Creative Labs SB Audigy MIDI/Game port
c800-c8ff : Linksys Network Everywhere Fast Ethernet 10/100 model NC100
c800-c8ff : tulip
...
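Because the file is plain text, grep can quickly answer "which ports does this driver own?". A small sketch,
reusing the Audigy soundcard from the listing above (the matching lines depend on the installed hardware):

[root@station root]# grep -i audigy /proc/ioports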

Device Memory Buffers and /proc/iomem


Many modern devices implement their own memory, which once mapped into the memory address space
of a system, can be used to easily transfer data back and forth. Video cards are a classic example of devices
that provide their own memory buffers.
The proc filesystem file /proc/iomem displays all of the devices whose memory buffers have been
mapped into physical memory, and the physical memory addresses which have been assigned to each
buffer (listed in hexadecimal digits).
[root@station root]# cat /proc/iomem
00000000-0009fbff : System RAM
0009fc00-0009ffff : reserved
000a0000-000bffff : Video RAM area
000c0000-000c7fff : Video ROM
000f0000-000fffff : System ROM
00100000-2dfeffff : System RAM
00100000-002766f6 : Kernel code
002766f7-00384807 : Kernel data
...
e3000000-e3003fff : Creative Labs SB Audigy FireWire Port
e3004000-e30043ff : Linksys Network Everywhere Fast Ethernet 10/100 model NC100
e3004000-e30043ff : tulip
e3005000-e30057ff : Creative Labs SB Audigy FireWire Port
e3006000-e30060ff : VIA Technologies, Inc. USB 2.0
e3006000-e30060ff : ehci-hcd
e3007000-e30070ff : VIA Technologies, Inc. VT6102 [Rhine-II]
e3007000-e30070ff : via-rhine
...

As far as this file is concerned, the machine's main memory (RAM, or "Random Access Memory")
is considered "just another device", and mapped into lower physical address spaces.
The physical address space does not need to be used contiguously (without gaps). Here, the mapping
of the system's RAM is interrupted in order to map the VGA video device. The address of this device
occurs early in the physical address space, and for legacy reasons cannot be moved.
Most modern devices that implement memory buffers are mapped into the upper addresses of the
physical address space.
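One common use of this file is simply to list the regions labeled "System RAM". A quick sketch:

[root@station root]# grep "System RAM" /proc/iomem

Comparing these ranges against the MemTotal value in /proc/meminfo is one way to confirm that the kernel
sees all of the installed memory.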

Configuring PCI Devices


When attaching new devices to a Red Hat Enterprise Linux system, two steps must generally occur to
configure the kernel to use the new device. First, if the device driver for the device is implemented as a
kernel module, the module must be loaded (if not already). Secondly, the device driver (and the device)
must be configured to use any or all of the resources mentioned above in a manner that will not conflict
with any other devices. The following outlines how this is accomplished for PCI devices. Usually, both
steps occur with minimal intervention on the part of the administrator.

Loading Modular Device Drivers


When a new PCI device is recognized by the Hardware Abstraction Layer, if a driver for the device is
not already present in the kernel, a table of appropriate modules for various PCI devices is consulted,
found at /lib/modules/2.6.32-8/modules.pcimap (where 2.6.32-8 might be replaced by an
updated kernel version). If the device's vendor and product id are recognized, an appropriate module will
be automatically inserted into the kernel.


Often, the kernel recognizes the need for a particular driver because of the role it fulfills. As an example,
consider the following /etc/modprobe.d/dist.conf file, which is used to configure some of the
same devices discovered with the lspci command above.
[root@station root]# cat /etc/modprobe.d/dist.conf
alias block-major-8-* sd_mod
alias block-major-9-* md
...
alias plip0 plip
alias plip1 plip
alias tunl0 ipip
...
alias char-major-116-* snd
alias sound-service-*-0 snd-mixer-oss
alias sound-service-*-1 snd-seq-oss

The kernel might be aware that it must load the appropriate kernel module to fill a particular role, such
as "the sound-service", or "the tunl0 network interface", or "the SCSI controller", but the kernel is not
aware which of several different modular device drivers that could fill that role is appropriate for the local
hardware. Using aliases in the /etc/modprobe.d/*.conf files, when the kernel attempts to load
"the tunl0 interface", the appropriate modular device driver for the local hardware is loaded (in this case,
the ipip kernel module).
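The effective configuration, with all of the *.conf files merged, can be inspected without actually loading
anything. A brief sketch, reusing the tunl0 alias mentioned above:

[root@station root]# modprobe -c | grep tunl0
[root@station root]# modprobe -n -v tunl0

The -c switch dumps the configuration modprobe is working from, while -n -v performs a verbose "dry run",
showing which module (here, ipip) would be inserted without actually inserting it.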
The following table outlines some of the "roles" that the kernel is aware of, and that various software
installations configure /etc/modprobe.d/*.conf files to fill appropriately. Do not be concerned
when the table discusses concepts which are not yet familiar; they will (mostly) be discussed later in the
course.

Table 2.1. Common Aliases Found in modprobe.conf


Alias               Role

ethN                The device driver appropriate for the network interface card
                    associated with network interface ethN.

snd-card-N          The device driver appropriate for the sound card associated with the
                    appropriate slot in the sound subsystem.

usb-controller      The device driver appropriate for the system's USB controller.

scsi_hostadapter    The device driver appropriate for the system's SCSI controller.

char-major-N        The device driver associated with character device node major number N.

block-major-N       The device driver associated with block device node major number N.

You can also use the install and remove keywords to specify custom commands to use when a particular
module is loaded or unloaded or specify options to be passed to a module with the options keyword.
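As a purely illustrative sketch (the module name and its debug parameter are assumptions, not taken from
the text above), a custom file such as /etc/modprobe.d/local.conf might contain lines like the following.

options e100 debug=1
install e100 /sbin/modprobe --ignore-install e100 && /usr/bin/logger "e100 driver loaded"
remove e100 /usr/bin/logger "e100 driver unloading"; /sbin/modprobe -r --ignore-remove e100

The options line passes a parameter to the module every time it is loaded, while the install and remove lines
wrap the normal modprobe behavior with custom commands (the --ignore-install and --ignore-remove
switches keep the wrapped modprobe from recursively re-running the same rule).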

Assigning Resources
In addition to loading the appropriate kernel module, resources must be assigned to the device. Usually,
this is handled at boottime using the Plug n' Play protocol. All PCI devices use a common mechanism for
advertising which of usually multiple different configurations they can support (such as IRQ 11 at I/O port
0xe000, or IRQ 10 at I/O port 0xc800), and for assigning a particular configuration.
At bootup, all PCI devices are probed, and a set of non-conflicting resources are specified for each card.
Usually, this happens with no intervention on the part of the administrator. The lspci command introduced
above, if used with the -v command line switch, will report the resources associated with each device.
In the following example, the lspci -v command reveals the IRQ, I/O port, and physical memory address
assigned to the Linksys Ethernet card (10, 0xc800, and 0xe3004000, respectively).


[root@station root]# lspci -v


...
00:07.0 Ethernet controller: Linksys Network Everywhere Fast Ethernet 10/100 model NC100 (rev 11)
Subsystem: Linksys: Unknown device 0574
Flags: bus master, medium devsel, latency 32, IRQ 10
I/O ports at c800 [size=256]
Memory at e3004000 (32-bit, non-prefetchable) [size=1K]
Expansion ROM at <unassigned> [disabled] [size=128K]
Capabilities: [c0] Power Management version 2
...

Occasionally, an administrator might need to tweak these assignments. Usually, this can be accomplished
by using option lines in the modprobe.d/*.conf files to associate parameters with the appropriate
kernel modules. The /usr/share/doc/kernel-doc-kernel_version/Documentation/
contains information on the kernel, kernel modules, and their respective parameters. The kernel
documentation is provided by the kernel-doc package. To install this documentation, run the following
command as root:
[root@station ~]# yum install kernel-doc
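Before digging through the full kernel documentation, the modinfo command offers a quick summary of the
parameters a particular module accepts. A short sketch, using the via-rhine driver seen earlier (any installed
module name could be substituted):

[root@station ~]# modinfo -p via-rhine

The -p switch restricts the output to the module's parameters and their descriptions, which are exactly the
values that can be set with options lines in the modprobe.d/*.conf files.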

Examples
Exploring a New Machine
The user elvis is continuing to explore a new machine. In order to discover what PCI devices are connected
to the machine, elvis uses the lspci command.
[elvis@station elvis]$ /sbin/lspci
00:00.0 Host bridge: Intel Corp. 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 03)
00:01.0 PCI bridge: Intel Corp. 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 03)
00:03.0 CardBus bridge: Texas Instruments PCI1420
00:03.1 CardBus bridge: Texas Instruments PCI1420
00:07.0 Bridge: Intel Corp. 82371AB/EB/MB PIIX4 ISA (rev 02)
00:07.1 IDE interface: Intel Corp. 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.2 USB Controller: Intel Corp. 82371AB/EB/MB PIIX4 USB (rev 01)
00:07.3 Bridge: Intel Corp. 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:08.0 Multimedia audio controller: ESS Technology ES1983S Maestro-3i PCI Audio Accelerator (rev 10)
00:10.0 Ethernet controller: 3Com Corporation 3c556 Hurricane CardBus (rev 10)
00:10.1 Communication controller: 3Com Corporation Mini PCI 56k Winmodem (rev 10)
01:00.0 VGA compatible controller: ATI Technologies Inc Rage Mobility M3 AGP 2x (rev 02)
02:00.0 Ethernet controller: Xircom Cardbus Ethernet 10/100 (rev 03)
02:00.1 Serial controller: Xircom Cardbus Ethernet + 56k Modem (rev 03)

Skimming the list, he sees an ESS soundcard, a 3Com networking card, a Winmodem, an ATI video card,
and what is apparently a combined Ethernet/modem Xircom (PCMCIA) card.
Curious which kernel modules are being used as device drivers for these devices, he browses the /etc/
modprobe.d directory and discovers a maestro.conf file with specific information about his sound card.
[elvis@station elvis]$ cat /etc/modprobe.d/maestro.conf
alias snd-card-0 maestro3
install snd-card-0 /bin/aumix-minimal -f /etc/.aumixrc -L >/dev/null 2>&1 || :
remove snd-card-0 /bin/aumix-minimal -f /etc/.aumixrc -S >/dev/null 2>&1 || :
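To confirm that the aliased driver actually ended up in the kernel, elvis could list the loaded modules and
filter for it. A small sketch (the exact module name may differ on other sound hardware):

[elvis@station elvis]$ /sbin/lsmod | grep maestro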

Online Exercises
Lab Exercise
Objective: Determine hardware configuration of devices on your local machine.


Estimated Time: 10 mins.

Specification
Collect the following information about your machine's hardware, and store it in the specified files. Each
file should contain a single word answer.
File                  Contents

~/lab2.2/irq1         The name of the device which is using interrupt request line 1 (IRQ 1).

~/lab2.2/fpuports     The range of I/O ports which are being used by your machine's fpu
                      device.

~/lab2.2/videorom     The range of physical addresses being used by your machine's
                      Video ROM.

If you have performed the lab correctly, you should be able to generate output similar to the following.
(Do not be concerned if your actual values differ).
[student@station student]$ head lab2.2/*
==> lab2.2/fpuports <==
00e0-00ef
==> lab2.2/irq1 <==
rtc
==> lab2.2/videorom <==
000c0000-000c7fff

Deliverables
1. The files tabled above, each of which contains a single word answer.

Questions
1.

Which of the following commands conveniently lists all connected PCI devices?
a.

listpci

b.

pcidump

c.

dmesg

d.

kudzu

e.

lspci

Use the following transcript to answer the next 2 questions.


[root@station root]$ cat /proc/interrupts
           CPU0
  0:   29957960    XT-PIC  timer
  1:      78565    XT-PIC  keyboard
  2:          0    XT-PIC  cascade
  5:   75302119    XT-PIC  eth0
  8:          1    XT-PIC  rtc
  9:          0    XT-PIC  Intel ICH 82801AA
 11:     198578    XT-PIC  usb-uhci, eth1
 14:     258317    XT-PIC  ide0
 15:      20398    XT-PIC  ide1
NMI:          0
ERR:          0

2.

About how long has the system been running (since the last boot)?
a.

About 30,000 seconds (about 8 hours)

b.

About 20,000 seconds (about 5 hours)

c.

About 10,000 seconds (about 3 hours)

d.

About 10 days

e.

Not enough information is provided

3.

What about the above output implies a misconfiguration of the machine?


a.

There are no NMI interrupts reported.

b.

Two device drivers are conflicting over IRQ 11.

c.

No device drivers have claimed IRQ's 4-7.

d.

Only one CPU has been identified.

e.

None of these conditions necessarily imply problems.

4.

Which of the following files lists currently claimed I/O ports?


a.

/etc/ioports

b.

/var/log/ioports

c.

/etc/sysconfig/ioports

d.

/var/state/ioports

e.

/proc/ioports

5.

Which of the following files lists the physical addresses of currently mapped memory (including
RAM and other devices)?
a.

/proc/slabinfo

b.

/proc/meminfo

c.

/proc/iomem

d.

/var/log/meminfo

e.

/etc/meminfo

Use the following transcript to answer the next 3 questions.


[root@station root]$ lspci -v
...
00:1f.2 USB Controller: Intel Corp. 82801AA USB (rev 02) (prog-if 00 [UHCI])

Subsystem: Intel Corp. 82801AA USB


Flags: bus master, medium devsel, latency 0, IRQ 11
I/O ports at 2440 [size=32]
00:1f.5 Multimedia audio controller: Intel Corp. 82801AA AC'97 Audio (rev 02)
Subsystem: Compaq Computer Corporation: Unknown device b1bf
Flags: bus master, medium devsel, latency 0, IRQ 9
I/O ports at 2000 [size=256]
I/O ports at 2400 [size=64]

01:00.0 VGA compatible controller: nVidia Corporation NV5M64 [RIVA TNT2 Model 64/Model 64 Pro] (rev 15)
Subsystem: nVidia Corporation: Unknown device 0017
Flags: bus master, 66Mhz, medium devsel, latency 64, IRQ 10
Memory at 41000000 (32-bit, non-prefetchable) [size=16M]
Memory at 44000000 (32-bit, prefetchable) [size=32M]
Expansion ROM at <unassigned> [disabled] [size=64K]
Capabilities: [60] Power Management version 1
Capabilities: [44] AGP version 2.0
...

6.

What IRQ is the USB controller using?


a.

32

b.

c.

10

d.

11

e.

Not enough information is provided

7.

What I/O port base address(es) is (are) being used by the sound card?
a.

The device is not using any I/O ports

b.

41000000

c.

2440

d.

2000 and 2400

e.

Not enough information is provided

8.

How much of the video card's memory has been mapped into the system?
a.

16 megabytes

b.

32 megabytes

c.

48 megabytes

d.

41000000 bytes

e.

44000000 bytes

Use the following transcript to answer the next question.


[root@station root]$ cat /etc/modprobe.d/custom.conf
alias parport_lowlevel parport_pc
alias eth0 8139too

alias snd-card-0 i810_audio


alias usb-controller usb-uhci

9.

Which kernel module is the device driver for the network interface card associated with the
interface eth0?
a.

Network interface device drivers are always compiled into the static kernel image.

b.

usb-uhci

c.

i810_audio

d.

8139too

e.

Not enough information is provided.

10.

Which file lists modules loaded on the system?


a.

/proc/devices

b.

/proc/modules/devices

c.

/proc/hw

d.

/proc/lsmod

e.

/proc/hwdevices


Chapter 4. Filesystem Device Nodes


Key Concepts
Processes access device drivers through a special file type referred to as device nodes.
Linux supports two fundamentally different types of devices, block devices and character devices.
Consequently, file system nodes are either block or character nodes.
Every filesystem device node has a major number (which indexes a particular device driver in the kernel)
and a minor number.
Filesystem device nodes are created dynamically by the udevd daemon.

Discussion
"Everything is a File"
The previous Lessons have discussed device drivers as specialized kernel components which allow the
kernel to communicate with various devices. This Lesson addresses the next layer: how do processes
communicate with device drivers? When managing the flow of information to and from processes, Linux
(and Unix) follows a simple design philosophy: everything is a file. Following this philosophy, the previous
question is answered: processes communicate with device drivers as if they were files.
For example, the terminal device driver for virtual console number 4 is referenced by the device node /
dev/tty4. What happens when information is written to this file?
[root@station root]# echo "hello world" > /dev/tty4

The information is passed to the terminal device driver, which does what a terminal device driver should
do when information is written to it: displays it on the terminal. Switching to the (previously unused)
virtual console number 4, we find the following.
Red Hat Enterprise Linux Server release 6.1 (Santiago)
Kernel 2.6.32-131.4.1.el6.x86_64 on an x86_64

station login: hello world

The terminal device driver displayed the information that was written to /dev/tty4 at the current
cursor location.
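Other device nodes behave the same way in the opposite direction: reading from them hands data from the
device driver to the process. As a small sketch, the zero device (described later in this Lesson) can be read
like any file:

[root@station root]# head -c 8 /dev/zero | od -c
0000000  \0  \0  \0  \0  \0  \0  \0  \0
0000010

The head command reads eight bytes from the device node, and the zero device driver supplies an endless
stream of binary zeros.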

Filesystem Device Nodes


The files which reference device drivers are referred to as device nodes. The Linux filesystem supports
several different file types. Most users are familiar with three: regular files, directories, and symbolic (soft)
links. System administrators usually become acquainted with two more: block and character device nodes.
By convention, device nodes are found in a dedicated /dev directory. When a new device is detected by
the kernel, a device node is added for it here. Note that this differs from the behavior of older versions of
Linux, where a device node existed for every possible device, whether it was in use or not.
[root@station root]# ls -l /dev
total 0
crw-rw----. 1 root video  10, 175 Aug  1 10:26 agpgart
crw-rw----. 1 root root   10,  57 Aug  1 10:27 autofs
drwxr-xr-x. 2 root root       720 Aug  3 10:19 block
drwxr-xr-x. 2 root root       100 Aug  3 10:19 bsg
drwxr-xr-x. 3 root root        60 Aug  1 10:26 bus
lrwxrwxrwx. 1 root root         3 Aug  1 10:27 cdrom -> sr0
lrwxrwxrwx. 1 root root         3 Aug  1 10:27 cdrw -> sr0
...
crw-rw----. 1 root lp      6,   0 Aug  1 10:26 lp0
crw-rw----. 1 root lp      6,   1 Aug  1 10:26 lp1
...
brw-rw----. 1 root disk    8,   0 Aug  1 10:27 sda
brw-rw----. 1 root disk    8,   1 Aug  3 08:19 sda1
brw-rw----. 1 root disk    8,   2 Aug  3 08:19 sda2
brw-rw----. 1 root disk    8,   3 Aug  3 08:19 sda3
brw-rw----. 1 root disk    8,   4 Aug  3 08:19 sda4
brw-rw----. 1 root disk    8,  16 Aug  3 10:19 sdb
brw-rw----. 1 root disk    8,  17 Aug  3 10:19 sdb1

When listed with the ls -l command, block and character device nodes can be recognized by the first
character on each line. While regular files are identified by a -, and directories by a d, character device
nodes are identified by a c, and block device nodes are identified by a b.
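The file command offers another quick check of a node's type. A brief sketch (the exact wording of the output
varies between versions of the file command):

[root@station root]# file /dev/sda /dev/tty1
/dev/sda:  block special
/dev/tty1: character special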

Why Two Types of Nodes?


The filesystem allows for two different types of device nodes because the Linux kernel distinguishes
between two different types of devices.
Block Devices

Block Devices are devices that allow random access, and transfer information in fixed size chunks, or
"blocks". Generally, disks are thought of as block devices. More importantly, for improved I/O performance,
all transfers to and from block devices make use of a cache within the kernel, sometimes referred to as the
"Page Cache", "Buffer Cache", or simply "I/O Cache".

Character Devices

Character Devices are devices that don't interact with the I/O cache. Often, character devices operate on a
stream of sequential bytes (or "characters"), such as keystrokes on a terminal, or data sent to a printer.
Character devices can also provide access to arbitrary buffers, however, such as memory buffers on video
cards.

The Anatomy of a Device Node


In the following, the ls command is used to generate a long listing of a regular file, and the /dev/fd0
device node.
[root@station root]# ls -l /etc/passwd
-rw-r--r--   1 root    root         4004 Sep 17 12:34 /etc/passwd
[root@station root]# ls -l /dev/fd0
brw-rw----   1 root    floppy      2,   0 Jan 30  2003 /dev/fd0

Regular files are used to store information on a harddisk. Accordingly, the ls -l command reports the
amount of information stored in the file (i.e., the length of the file).
Device nodes, in contrast, are not used for storing information. Instead, they serve as a conduit,
transferring information to and from an underlying device driver. Therefore, the concept of a filesize
is meaningless. In its place, the ls -l command reports two numbers which are associated with every
device node, the device node's major number and minor number.
Every device driver in the kernel registers for a major number, which is an integer that is used to identify
that device. The major number of a filesystem device node correlates to the major number of the device
driver with which it is associated. A list of device drivers which have registered with the kernel, and
their major numbers, is found in the proc filesystem file /proc/devices. In the following, note that
character devices and block devices are treated separately. The character device driver with major number
7 is vcs, while the block device driver with major number 7 is loop.
[root@station root]# cat /proc/devices
Character devices:
1 mem
4 /dev/vc/0
4 tty
4 ttyS
5 /dev/tty
5 /dev/console
5 /dev/ptmx
7 vcs
10 misc
13 input
14 sound
...
251 usbmon
252 bsg
253 pcmcia
254 rtc
Block devices:
1 ramdisk
259 blkext
7 loop
8 sd
9 md
11 sr
65 sd
...
253 device-mapper

The minor number associated with a device node is treated as a parameter, which is passed to the device
driver when data is written to or read from the node. Different device drivers implement minor numbers
differently. For example, the floppy driver (block major number 2) uses the minor number to distinguish
between possible floppy formats, while the primary SCSI driver (block major number 8) uses the minor
number to distinguish between different partitions on the harddisk (more on that in a later workbook).
[root@station root]# ls -l /dev/sd* /dev/sr*
brw-rw----. 1 root disk   8,  0 Aug  1 10:27 /dev/sda
brw-rw----. 1 root disk   8,  1 Aug  3 08:19 /dev/sda1
brw-rw----. 1 root disk   8,  2 Aug  3 08:19 /dev/sda2
brw-rw----. 1 root disk   8,  3 Aug  3 08:19 /dev/sda3
brw-rw----. 1 root disk   8,  4 Aug  3 08:19 /dev/sda4
brw-rw----. 1 root disk   8, 16 Aug  3 10:19 /dev/sdb
brw-rw----. 1 root disk   8, 17 Aug  3 10:19 /dev/sdb1
brw-rw----+ 1 root cdrom 11,  0 Aug  1 10:27 /dev/sr0

Summarizing, device nodes have three parameters which are used to associate the device node to a
particular device driver: A block or character file type, a major number, and a minor number. Notice that
the filename of the device node is not included in this list, and is not used to determine the appropriate
device driver.
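To see that the filename really is irrelevant, a node with the same type, major number, and minor number as
/dev/null can be created anywhere in the filesystem and behaves identically. A sketch, using a throw-away
name of /tmp/mynull (run as root; see the udev discussion below before creating nodes under /dev itself):

[root@station root]# mknod /tmp/mynull c 1 3
[root@station root]# echo "discard me" > /tmp/mynull
[root@station root]# rm /tmp/mynull

Because the node is character type, major number 1, minor number 3, the kernel routes the write to the
same "null" device driver that backs /dev/null, and the text is silently discarded.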
This scheme has problems associated with it. Commonly, a device node might exist, for which there is no
corresponding device driver in the kernel:

[root@station root]# echo "hello world" > /dev/sde


-bash: /dev/sde: No such device or address

Less commonly, there might be a device driver in the kernel with no device node referencing it, and thus
no traditional way for processes to interact with it.

Commonly Used Device Nodes


While the concept of device nodes is shared by all versions of Unix, the names traditionally associated with
device nodes tend to vary from one version of Unix to another. The following tables list the Linux names
of commonly used device nodes, which should be part of every Linux administrator's working knowledge.

Table 4.1. Common Linux Block Device Nodes


Node Name           Associated Device

sda                 "First" SCSI Drive
sdb                 "Second" SCSI Drive
fd0                 First Floppy Disk
sr0                 First Optical Drive

Table 4.2. Common Linux Character Device Nodes


Node Name           Associated Device

ttyn                Virtual Console number n
ttySn               Serial Port n
lpn                 Parallel Port n
null                All information written to this virtual device is discarded.
zero                When read, this device is an infinite source of binary zeros.
urandom             When read, this device is an infinite source of random binary data.

Symbolic Links as Functional Names


Often, applications are more concerned with the function of a device, rather than its actual device node. For
example, most CD-ROM drives implement either an IDE or SCSI interface, and thus could be addressed
by the hdc, sr0 or sda device nodes. An audio music player application, however, would not care if the
device were an IDE or SCSI device, it would just like to use "the CD-ROM". If the CD-ROM drive is a
writable device, there may also be a symbolic link named cdrw.
Red Hat Enterprise Linux often uses symbolic links in the /dev directory to help ease the configuration
of these applications. The following table lists commonly created symbolic links, and the types of devices
to which they often resolve.

Table 4.3. Common Symbolic Links Found in the /dev Directory


Link Name                   Sample Device Nodes

/dev/cdrom, /dev/cdrw       /dev/sr0, /dev/sdb
/dev/dvd, /dev/dvdrw        /dev/sr0, /dev/sdb
/dev/modem                  /dev/ttyS0, /dev/ttyS1
/dev/pilot                  /dev/ttyS0, /dev/usb/ttyS1
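Because these are ordinary symbolic links, their current targets can be checked directly. A brief sketch (the
targets will differ from machine to machine):

[root@station root]# ls -l /dev/cdrom
[root@station root]# readlink /dev/cdrom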


Dynamic Device Node Creation: udev


The traditional Unix technique for accessing devices (device nodes) has withstood the test of time, but it's
not without problems. For example, there can be a device driver in the kernel which cannot be accessed
because there's not a complementary device node in the filesystem. Conversely, there can be device nodes
in the file system without the corresponding device driver in the kernel, which leads to confusion.
These problems are only exacerbated by the increasing popularity of hot swappable devices, which people
rightfully expect to plug into a machine and have "just work".
Originally, and up through Red Hat Enterprise Linux 3, Red Hat solved the problem by "prepopulating"
the /dev directory with every conceivable device node, even if the corresponding device drivers were not
present. As a result, the /dev directory had literally thousands of entries.
Starting with Red Hat Enterprise Linux 4 (and the Linux 2.6 kernel), Red Hat (and the Linux community)
is trying to take a more intelligent approach to the problem. The kernel now implements a notification
device (known as "netlink", because it is also associated with networking), which is monitored by a new
udevd daemon.
The gist is that when a new device is attached to the machine, the kernel notifies udevd, which consults
a database of rules found in /etc/udev. The rules tell the udevd daemon what type of device node to
create for the device, and what ownerships and permissions to apply to the node.
Likewise, when a device is removed, the udevd daemon again responds to a kernel notification, this time
by removing the device node.
Interestingly, the udevd daemon effectively considers every device as a hot swapped device. Permanent
devices "just happened to be there" as the system was bootstrapping.
There are a couple of consequences to the new technique.
1. On the plus side, the /dev directory has many fewer device nodes, and the nodes that do exist are a
direct reflection of the underlying detected hardware.
2. On the down side, you can no longer simply use mknod to create your own device nodes within
the /dev directory. More exactly, you can create the node, but because the /dev directory is now
dynamically populated by udevd, you cannot expect the node to survive a reboot.
The details of configuring the udevd daemon are beyond the scope of this course, but the interested can
use rpm -ql udev to obtain a list of man pages (such as udev(8)), and more complete documentation
in the /usr/share/doc/udev-version directory.
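Even without studying the rule syntax, it can be instructive simply to see where the rules live. As a rough
sketch (directory contents vary with the installed packages):

[root@station root]# ls /etc/udev/rules.d/
[root@station root]# ls /lib/udev/rules.d/

The first directory holds locally added or modified rules, while the second (on Red Hat Enterprise Linux 6)
holds the rules shipped by the udev and related packages.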

Examples
Allowing udev to add device nodes
Curious about the hard disks on his system, elvis looks to see what sd devices are available.
[elvis@station elvis]$ ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda3  /dev/sda4

Seeing only the one device, elvis assumes that this is his local SATA hard disk. He then inserts his USB
thumb drive and looks again.
[elvis@station elvis]$ ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda3  /dev/sda4  /dev/sdb  /dev/sdb1

Having finished working with the USB thumb drive, elvis removes the device. Curious, he looks again at
/dev and sees that the /dev/sdb files have been removed.
[elvis@station elvis]$ ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda3  /dev/sda4

Using the dd Command to Create a Backup of a Drive's Master Boot Record
In general, device nodes allow direct access to devices. Device nodes associated with disks allow users to
read the contents of the disk, byte for byte, as if the disk were one giant file. The dd command can be used
to extract specific lengths of data from specific locations of a file.
The first block (512 bytes) of a bootable harddisk is referred to as the "Master Boot Record", and includes
sensitive information, such as the boot loader and the drive's partition table. As a precaution, a system
administrator wants to make a backup of this information. He uses the dd command to make a copy of the first 512
bytes of the disk /dev/sda. He specifies that his input file should be /dev/sda, his output file should
be /tmp/MBR.backup, and that exactly 1 block should be transferred, using a block size of 512 bytes.
As a result, he ends up with a file exactly 512 bytes in length.
[root@station root]# dd if=/dev/sda of=/tmp/MBR.backup bs=512 count=1
1+0 records in
1+0 records out
[root@station root]# ls -l /tmp/MBR.backup
-rw-r--r--   1 root    root          512 Sep 28 07:58 /tmp/MBR.backup

If his Master Boot Record ever becomes corrupted, it can be restored by inverting the previous command.
[root@station root]# dd if=/tmp/MBR.backup of=/dev/sda

(Without arguments specifying otherwise, the dd command transfers the entire input file).
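The backup can also be sanity checked without restoring it. A brief sketch (the file command's wording
varies, and the comparison assumes the bash shell):

[root@station root]# file /tmp/MBR.backup
[root@station root]# cmp /tmp/MBR.backup <(dd if=/dev/sda bs=512 count=1 2>/dev/null)

The file command typically identifies the copy as a boot sector, and cmp prints nothing if the backup still
matches the first 512 bytes of the disk.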

Online Exercises
Lab Exercise
Objective: Gain familiarity with Linux filesystem device nodes.
Estimated Time: 10 mins.

Specification
1. Create the file ~/lab2.4/block-7-0, which contains the name of the block device node associated
with major number 7, minor number 0. Refer to the file using an absolute reference.
2. Create the file ~/lab2.4/char-1-8, which contains the name of the character device node
associated with major number 1, minor number 8. Refer to the file using an absolute reference.
3. Create the file ~/lab2.4/cdrom, which contains the name of the device node to which the symbolic
link /dev/cdrom resolves. Refer to the file using a relative reference (i.e., without the /dev/). If
your machine does not have a CD-ROM drive (or if the file /dev/cdrom does not exist), simply put
the word "none".
If you have performed the lab correctly, you should be able to generate output similar to the following.
[student@station student]$ ls -l lab2.4/

total 12
-rw-rw-r--   1 student student  10 Sep 28 07:11 block-7-0
-rw-rw-r--   1 student student   9 Sep 28 07:12 cdrom
-rw-rw-r--   1 student student  10 Sep 28 07:11 char-1-8
...

Deliverables
1. The file ~/lab2.4/block-7-0, which contains the name of the block device node
associated with major number 7, minor number 0, as an absolute reference.
2. The file ~/lab2.4/char-1-8, which contains the name of the character device node
associated with major number 1, minor number 8, as an absolute reference.
3. The file ~/lab2.4/cdrom, which contains the name of the device node to which the symbolic
link /dev/cdrom resolves, or the word "none".

Questions
1.

What two types of filesystem device nodes exist in Linux?


a.

character and block device nodes

b.

major and minor device nodes

c.

hard and soft device nodes

d.

hot and cold device nodes

e.

None of the above

Use the following transcript to answer the next 3 questions.


[root@station root]$ ls -l /dev/fd?
brw-rw----   1 root    floppy    2,   0 Jan 30  2003 /dev/fd0
brw-rw----   1 root    floppy    2,   1 Jan 30  2003 /dev/fd1
brw-rw----   1 root    floppy    2,   2 Jan 30  2003 /dev/fd2
brw-rw----   1 root    floppy    2,   3 Jan 30  2003 /dev/fd3
brw-rw----   1 root    floppy    2, 128 Jan 30  2003 /dev/fd4
brw-rw----   1 root    floppy    2, 129 Jan 30  2003 /dev/fd5
brw-rw----   1 root    floppy    2, 130 Jan 30  2003 /dev/fd6
brw-rw----   1 root    floppy    2, 131 Jan 30  2003 /dev/fd7

2.

What is the length of the file /dev/fd4?


a.

2 bytes

b.

128 bytes

c.

128 kilobytes

d.

4 kilobytes

e.

The question is meaningless, because /dev/fd4 is a device node.

3.

What type of file is /dev/fd2?


a.

symbolic link

b.

character device node

c.

block device node

d.

directory

e.

None of the above

4.

What is the major number of the file /dev/fd7?


a.

b.

131

c.

d.

Not enough information is provided

e.

The question is meaningless, because /dev/fd7 is a symbolic link.

Use the following transcript to answer the next 3 questions.


[root@station root]$ ls -l /dev/tty?
crw--w----   1 root    root      4,   0 Jan 30  2003 /dev/tty0
crw--w----   1 root    tty       4,   1 Oct 28 02:55 /dev/tty1
crw-------   1 root    root      4,   2 Oct 24 17:40 /dev/tty2
crw-------   1 root    root      4,   3 Oct 24 17:44 /dev/tty3
crw-------   1 root    root      4,   4 Oct 24 17:44 /dev/tty4
crw-------   1 root    root      4,   5 Oct 24 17:44 /dev/tty5
crw-------   1 root    root      4,   6 Oct 24 17:44 /dev/tty6
crw--w----   1 root    root      4,   7 Jan 30  2003 /dev/tty7
crw--w----   1 root    root      4,   8 Jan 30  2003 /dev/tty8
crw--w----   1 root    root      4,   9 Jan 30  2003 /dev/tty9

5.

What type of file is the file /dev/tty8?


a.

regular file

b.

block device node

c.

symbolic link

d.

directory

e.

none of the above

6.

What is the major number of the file /dev/tty3?


a.

b.

c.

d.

e.

None of the above


7.

What users on the machine can cause text to be displayed on the first virtual console (/dev/tty1)?

a.

root and members of the group tty

b.

root only

c.

root and members of the group console

d.

all users

e.

not enough information is provided

8.

Which of the following files is the device node for the CDROM drive?
a.

/dev/cd

b.

/dev/sr0

c.

/proc/scsi/sr0

d.

/proc/scsi/cdrom

e.

None of the above

9.

Which of the following is the device node for the SCSI disk with a SCSI ID of 3?
a.

/dev/sd2

b.

/dev/sdc

c.

/proc/scsi/sdc

d.

Not enough information is provided

e.

None of the above

10.


A USB drive is added to the system. What command must be run to create the device node?
a.

modprobe /dev/usb

b.

mknod /dev/usb

c.

No command needs to be run. USB devices do not use separate device nodes

d.

No command needs to be run. udev will create the device node automatically

e.

No command can be run. All device nodes are created at installation.


Chapter 5. Performance Monitoring


Key Concepts
Linux commonly uses a general measure of system activity referred to as the "load average".
Linux uses memory for two fundamental purposes: supporting processes and caching block device I/
O operations.
The top command can be used to monitor processor and memory activity.

Discussion
Performance Monitoring
We conclude this workbook by mentioning some simple tools and measures which are commonly used
to monitor performance on Linux systems.

CPU Performance
The uptime Command
As the name implies, the uptime command returns how long the machine has been operating without
powering down or rebooting.
[root@station root]# uptime
 08:14:10 up  1:28,  4 users,  load average: 0.56, 0.23, 0.12

While this machine has only been up one hour and 28 minutes, Linux has such a reputation for stability
that network servers often have uptimes measured in months rather than hours.
More relevant to CPU performance, the uptime command also returns a commonly used Linux (and Unix)
statistic referred to as the load average of the machine (often abbreviated loadavg). The load average
is a time average of the number of processes which are in the "Runnable" or "Involuntary Sleep" state.
Conventionally, three time averages are maintained: a 1 minute average, a 5 minute average, and a 15
minute average.
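The same three numbers come straight from the proc filesystem, which is occasionally convenient for
scripts. A small sketch (the values shown are only illustrative):

[root@station root]# cat /proc/loadavg
0.56 0.23 0.12 1/91 15368

The first three fields are the 1, 5, and 15 minute load averages; the remaining fields report the number of
currently runnable processes out of the total, and the most recently assigned process ID.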

The top Command


The top utility repeatedly lists the processes running on the machine, sorted in order of CPU activity.
The list is refreshed about every five seconds. While top is running, the keyboard is "live", implying that
single keystrokes are interpreted as commands, without having to hit the RETURN key. Notably, the q
key quits top, while the h key displays a help screen, documenting other keystroke commands.
The top three lines of top's display monitor the system's CPU related activity. The top line returns the same
information as the uptime command, notably including the load average.
[root@station root]# top
top - 09:33:48 up 12:07, 5 users, load average: 0.23, 0.40, 0.30
Tasks:  91 total,   2 running,  88 sleeping,   0 stopped,   1 zombie
Cpu(s):  8.6%us,  3.7%sy,  0.3%ni, 86.7%id,  0.7%wa,  0.0%hi,  0.0%si,  0.0%st

...


The second line enumerates the processes running on the machine, and the state they are in. Many processes
in the "runnable" state (more formally called "running") would imply a busy machine.
The third and fourth lines classify how the CPU has been using its time, using the following three standard
Unix classifications.
user - "us"

When the CPU is in "user" mode, it is performing calculations on


behalf of a user process. Processes performing computationally intensive
calculations, such as an image manipulation program, or a cryptographic
library, would cause the CPU to spend a large amount of time in user
mode.

system - "si"

When the CPU is in "system" mode, it is performing kernel acticities on


behalf of a process. Examples include servicing system calls on behalf
of a process (such as disk read and write operations). Processes which
are transferring large amounts of data to and from disk, or to and from
network connections, would cause the CPU to spend a large amount of
time in "system" mode.

idle - "id"

Just as the name suggests, this is the amount of time that the CPU spends
cooling its heals because no processes are in the runnable state.

Additionally, Red Hat Enterprise Linux tracks the following statistics.


nice - "ni"

This is the amount of time that the CPU


spends acting on behalf of processes that
have been "niced", so as to have a lower
execution priority. Time spent in this state is
also counted as time spent in the "user" or
"system" state, as appropriate.

I/O wait - "wa"

As the name implies, this is the amount of


time the CPU spends idling because a process
in blocked on I/O. Anytime the CPU would
have been marked as idle, but a process is
"blocked" in the uniterruptible sleep ("D")
state, the time is charged to wait instead.

Hardware Interrupt - "hi"

This is the amount of time that the


CPU spends servicing lowlevel hardware
requests, or "interrupts". As the name implies,
hardware may interrupt the normal flow of
CPU execution when immediately pertinent
information is available, such as a mouse
which has been moved to a new location, or
an Ethernet card which has received a packet
over the network.

Software Interrupt - "si"

This is the amount of time that the CPU


spends servicing deferred kernel tasks which
are not directly related to a process, such as
the processing of network packets.

Stolen - "st"

This is the amount of time that the CPU would


have run, but a virtualized guest was using the

rha130-6.1-1

43

Copyright 2011, Red Hat Inc.

Performance Monitoring

CPU instead. This field is only relevant on


systems using Xen virtualization.

Memory Utilization
Perhaps the most important task of the Linux kernel is to efficiently use one of the most important resources
for overall system performance, memory. Linux memory management tends to be a complicated subject,
making it difficult to give simple answers to seemingly simple questions such as "how much memory is
this process using?". In general, however, administrators can think of the Linux kernel trying to balance
the memory demands of two important tasks.

Process Memory
The various processes running on the machine each make memory demands of the kernel. For smaller
processes, the memory demands may be as modest as 100 kilobytes or less per process. Larger applications
(particularly graphically intensive ones, such as the Firefox web browser) can request tens of megabytes
of memory.

Disk I/O Cache


The Linux kernel tries to optimize disk I/O by buffering all disk (or, more exactly, block device) operations.
The process of reading or writing information to a disk is dramatically slower than reading or writing
information to memory. When a process requests to read information from a disk, the information is of
course read as soon as possible into memory. When a process requests to write information to a disk,
however, the kernel quickly stores the information in the memory cache, and returns control to the process.
When there is a "quiet moment" (i.e., when there are few other pending operations to perform), the kernel
will perform the relatively slow operation of committing the cached information to disk.
What happens if another process (or the same process) tries to read the same information from the disk
before it gets flushed from the cache? The kernel returns the information from the cache, bypassing the
disk altogether. Commonly used files, which are often read from or written to, might spend most of their
time in the I/O cache, only occasionally being synced back to disk. On systems with large amounts of
memory, the disk I/O cache may easily use more than half of it. Efficiently managing the disk I/O cache
leads to dramatically better overall system performance.

Monitoring Memory Utilization with /proc/meminfo


We look again at the contents of the /proc/meminfo, this time focusing on how much memory is
devoted to the disk I/O cache.
[root@station ~]# cat /proc/meminfo
MemTotal:       255232 kB
MemFree:          4340 kB
Buffers:         12096 kB
Cached:         138840 kB
...

Recall the MemTotal line, which lists detected memory, and the MemFree line, which lists how
much of the memory is left unused. We now focus on the Buffers and Cached lines. The
meminfo file distinguishes between two subtly different variations of the I/O cache (the latter is
for caching disk I/O which is related to the contents of regular files, the former is for all other block
device interactions). For our purposes, the combined sum of both can be considered the "I/O cache".
On this machine, which contains about 256 megabytes of physical memory, about 150 megabytes (or over
half) is currently devoted to the I/O cache.
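The free command presents the same numbers in a more compact form. A quick sketch (the figures are only
illustrative, derived from the meminfo values above):

[root@station ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        245          4          0         11        135
-/+ buffers/cache:         98        150
Swap:         1019          0       1019

The "-/+ buffers/cache" line shows memory usage with the I/O cache factored out, which is usually the more
meaningful number when judging how much memory processes are actually consuming.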


Monitoring Memory Utilization with top


The top command, introduced above, also monitors memory consumption. Above, we focused on the first
3 lines of the top display, which related to CPU activity. We now focus on aspects of the next three lines.
[root@station root]# top
top - 09:51:08 up 12:25, 6 users, load average: 0.00, 0.12, 0.28
Tasks:  91 total,   2 running,  88 sleeping,   0 stopped,   1 zombie
Cpu(s):  0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:    514604k total,   505516k used,     9088k free,   219620k buffers
Swap:  1044216k total,      132k used,  1044084k free,   110568k cached
...

This field lists total physical memory available.


This field lists how much of the available memory is currently in use.
This field lists how much of the available memory is currently free.
These fields, combined, give a measure of how much memory is currently being used to cache disk
I/O.
The remaining fields on the last line relate to Linux's use of swap space (disk based virtual memory), and
will be covered in a later Workbook.

Why is my memory always 90% full?


After observing the Linux kernel's memory dynamics during various operations, users new to Linux are
often surprised by their inability to "free up" memory, even after exiting all large applications. Why, under
Linux, is memory always 90% full? Because the kernel is doing exactly what it is supposed to be doing,
namely using its resources as efficiently as possible. Any memory not being currently used by processes
is devoted to the I/O cache to improve the speed of I/O operations.
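When benchmarking, it is sometimes useful to start from an empty cache, and recent kernels expose a control
file for exactly that. A cautious sketch (this discards clean cached data only, and is generally pointless outside
of testing):

[root@station ~]# sync
[root@station ~]# echo 3 > /proc/sys/vm/drop_caches

Writing 3 asks the kernel to drop both the page cache and the slab caches; the kernel simply rebuilds them
as I/O activity resumes.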

Examples
Using top to Analyze System Activity
The user elvis has realized that his machine is acting sluggishly. He uses the top command to analyze
his machine's behavior.
[elvis@station elvis]$ top
top - 08:09:40 up 3 days, 15:29, 2 users, load average: 12.19, 6.00, 2.34
Tasks: 103 total,   9 running,  93 sleeping,   0 stopped,   1 zombie
Cpu(s): 20.0%us, 79.8%sy,  0.0%ni,  0.2%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:    255148k total,   245412k used,     9772k free,    52968k buffers
Swap:  1044216k total,      176k used,  1044040k free,   111860k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
15276 blondie   25   0  5208 1428  632 R 26.6  0.3   6:00.14 find
15278 blondie   25   0  5352  908  632 R 25.3  0.2   6:12.23 grep
15280 blondie   25   0  4720 1432  632 R 24.3  0.3   5:58.29 find
15282 blondie   25   0  4932  904  632 R 23.3  0.2   6:15.64 grep
15273 root      17   0  3852  968  756 S  0.3  0.2   0:04.60 top
15367 root      16   0  2848  964  756 R  0.3  0.2   0:00.18 top
    1 root      16   0  1932  516  440 S  0.0  0.1   0:00.56 init

...

First, he notices the number of grep and find commands that blondie appears to be running. He makes
the following observations.


The 1 minute load average, 12.19, is high, while the 15 minute load average, 2.34, is comparatively
low. This implies that the machine is responding to a burst of activity.
His CPU is spending a lot of time in the "system" state, probably as a result of all of the I/O activity
required by the find and grep commands.
Of his machine's 256 megabytes of memory, 53 + 111 = 164 megabytes of memory is being used
to cache I/O operations. As a result, other processes on the machine will probably respond very
sluggishly, as they are short on physical memory.

Online Exercises
Lab Exercise
Objective: Become familiar with the uptime command.
Estimated Time: 5 mins.

Specification
1. Record the output of the uptime command into the file ~/lab2.5/uptime.

Deliverables
1. The file ~/lab2.5/uptime, which contains the output of the uptime command.

Questions
1.

What is the name of the utility which displays a dynamically updated list of processes currently
running on the machine, as well as CPU and memory utilization statistics?
a.

top

b.

monitor

c.

vmstat

d.

ps

e.

None of the above

Use the following transcript to answer the next 2 questions.


[root@station root]$ uptime
 05:06:00 up 3 days, 12:25,  2 users,  load average: 0.89, 3.25, 8.87

2.

Which of the following best describes the activity of the machine?


a.

The machine has recently been active, but is currently inactive.

b.

The machine has recently been inactive, but is currently active.

c.

The machine has been and continues to be active.

d.

The machine has been and continues to be inactive.

e.

None of the above

3.


How much time has passed since the machine was most recently booted?

a.

5 hours

b.

8.87 hours

c.

12 hours

d.

2 hours

e.

84 hours

Use the following top 5 lines from the top command to answer the next 4 questions.
top - 05:14:52 up 3 days, 12:34, 2 users, load average: 4.39, 1.40, 0.48
Tasks: 86 total, 79 sleeping, 7 running, 0 zombie, 0 stopped
Cpu(s): 20.0%us, 79.8%sy, 0.4%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem:    255184k total,   247244k used,     7940k free,    91620k buffers
Swap:   521632k total,    20756k used,   500876k free,    67012k cached

4.

How much physical memory does the machine have?


a.

512 megabytes

b.

128 megabytes

c.

64 megabytes

d.

1024 megabytes

e.

256 megabytes

5.

Which of the following best describes the type of activity on the machine?
a.

The machine is predominantly running processes which involve a lot of numeric computation.

b.

The machine is predominantly running processes which involve a lot of input and output activity.

c.

The machine is predominantly idle, without many processes running.

d.

None of the above describe the machine's activity.

6.

Roughly what percentage of the machine's physical memory is devoted to caching I/O operations?
a.

5%

b.

10%

c.

25%

d.

33%

e.

60%

7.

Roughly how much physical memory is being devoted to processes, in megabytes?


a.

255 - 92 = 163

b.

255 - 8 = 247

c.

255 - 67 = 188

d.

255 - 92 = 163

e.

255 - 8 - (92 + 67) = 88

8.

What file contains information about current memory utilization?


a.

/etc/memory

b.

/proc/top

c.

/var/log/mem

d.

/proc/meminfo

e.

None of the above

Use the following transcript to answer the next 2 questions.


top - 05:32:33 up 3 days, 12:52, 2 users, load average: 1.92, 1.87, 1.87
Tasks: 87 total, 82 sleeping, 5 running, 0 zombie, 0 stopped
Cpu(s): 99.0%us, 1.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem:    255184k total,   250332k used,     4852k free,     8560k buffers
Swap:   521632k total,    76584k used,   445048k free,   100384k cached

9.

On average, how many processes have been running for the past few minutes?
a.

b.

c.

d.

87

e.

Not enough information is provided

10.


Which of the following best describes the type of activity on the local machine?
a.

The machine is predominantly idle, without many processes running.

b.

The machine is predominantly running processes which involve a lot of input and output from
the filesystem.

c.

The machine is predominantly running processes which involve a lot of numeric computation.

d.

None of the above describe the machine's activity.
