SuSE Linux AG
Training Document – Article No. 45452-1INT
SuSE Linux Enterprise Server: Advanced System Administration II
Authors: Björn Lotz, Oliver Wittenburg
All programs, illustrations, and information contained in this manual were compiled to the best
of our knowledge and tested carefully. This, however, does not exclude the possibility of errors.
For this reason, the material contained in this manual does not constitute any obligation or
guarantee of any kind. The authors and SuSE Linux AG accept no responsibility and shall in no
way be held liable for damages of any kind which may result from the use of this material,
parts thereof, or for any resulting violation of the law by third parties.
The representation of registered names, trade names, the naming of goods, etc. in this training
manual does not imply, even where not specifically stipulated, that such names can be regarded
as free in the sense of trademark and trade-name protection legislation and thus be used by
anybody whatsoever.
All trade names are used without any guarantee of their free use and may be registered
trademarks. SuSE Linux AG essentially adheres to the notations of the manufacturers. Other
products named here may be trademarks of the respective manufacturers.
This work is protected by copyright. All rights in connection with the reproduction or copying
of this training manual or parts thereof are reserved. This also applies to translations. No
part of this work may be reproduced, electronically processed, duplicated, or disseminated in
any form whatsoever (print, photocopy, microfilm, or any other procedure), not even for training
purposes, without the written permission of the publisher.
1 Compiling Software
1.1 Basics
2.1 Introduction
2.2 Basics
2.2.1 Sources
2.2.2 Patches
2.4.1 BuildRoot
4 System Optimization
4.2 Powertweak
4.3 hdparm
4.4 ulimit
5 RAID
5.1 Basics
5.1.1 RAID Levels
5.2 Configuration
5.3 Administration
5.4 For More Information
Learning Aims
As most software for Linux is released under the GPL (General Public License), it is usually
available as source code. Happily, distributors spare users a lot of work by compiling these
sources, packing them into packages, and selling them in bundled form on CDs. Eventually,
however, you may come across a program for which no suitable RPM package exists and which must
therefore be compiled before you can use it.
This chapter covers the basic tools and components needed for compiling a program from
source. A small C program is used as an example in this chapter. A different procedure
may be needed for other programming languages.
1.1 Basics
Actually, computers only understand zeros and ones, which is rather difficult for humans.
In the early days of computer programming, programmers had to master this language to
communicate with their computers. As this was extremely cumbersome, various program-
ming languages were developed, enabling commands to be communicated to the computer
in a more or less intelligible language — at least more intelligible than a text consisting
exclusively of zeros and ones.
However, the computer does not understand these commands directly; first, these com-
mands must be translated to machine code, a language that the computer is able to un-
derstand. This is done by means of a special program called the compiler. Actually, the
translation of program code to machine code comprises several steps, as shown in Fig-
ure 1.1 on the facing page.
In practice, the individual programs do not have to be executed manually; the program
gcc (GNU C Compiler) makes sure that the individual steps are executed in the correct
sequence.
Figure 1.1: From Source Code to the Executable Program (source: Oualline, 1997). The figure
shows the pipeline: source code in a high-level language is translated by the compiler into a
program in assembler language, the assembler turns this into object code, and the linker
produces the executable program.
This procedure can be demonstrated by means of a simple example. For this purpose, let
us take a look at the program "Hello World", which is frequently used as a simple
example when introducing a programming language.
This manual does not attempt to provide an introduction to C programming. It merely
endeavors to show the basic workflow for creating a program.
/* hello.c
 * Prints Hello World on the screen */

#include <stdio.h>

/* Main function */
int main ()
{
    /* Prints Hello World */
    printf("Hello World\n");
    return (0);
}
Comments are inserted between “/*” and “*/”. The line #include <stdio.h> in-
cludes a header (or include) file. Files specified in this way are included in the source
code during compilation.
int main () is the main function of the program. The commands to execute are placed
between braces. In this example, only two commands are used, one for printing "Hello
World" on the screen and one for returning “0” as return value.
The program can be compiled by simply executing gcc:
tux@earth:~ > gcc -Wall -o hello hello.c
The option -Wall makes sure that all warnings are displayed. The result is the program hello,
whose only function is to print "Hello World" on the screen.
The individual steps represented in the above figure are not visible when gcc is executed.
However, some aspects can be seen later, such as the fact that the program is linked to
libraries.
tux@earth:~ > ldd ./hello
libc.so.6 => /lib/libc.so.6 (0x40022000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)
Exercise
Perform the steps described above. Write a program called hello.c then
compile and test it.
The example hello.c demonstrates the basic elements of a program. However, real
software projects consist of multiple files that have to be compiled in the correct sequence.
The resulting object files must be linked to form an executable program.
This task can be automated with the help of the program make and a makefile. Moreover,
make accelerates the development of programs, as only the parts whose sources have changed
since the last compilation are recompiled.
For example, to compile the program hello.c with the help of make, prepare a makefile
containing the needed commands.
A makefile consists of targets, dependencies, and the commands for the targets. Targets
and dependencies are separated by a colon. The commands must be placed under the target,
indented with a tab. “#” introduces comments. A makefile for the file hello.c
could appear as follows:
# Makefile for hello
all: hello

hello: hello.c
	gcc -Wall -o hello hello.c

install: hello
	install -m 755 hello /usr/local/bin/hello

uninstall: /usr/local/bin/hello
	rm -f /usr/local/bin/hello

clean:
	rm -f hello
If you execute the command make while you are in the respective directory, the program
make will search this directory for the files GNUmakefile, Makefile, or makefile
and execute the commands in the file it finds first.
If make is executed without any parameters, the first target is used. In our example, this is
all. This target depends on the target hello, which specifies the steps to take: if
hello.c is newer than the file hello, gcc is executed with the respective options.
The command make can also be used with individual targets: for exam-
ple, make install (as root) installs the file at the specified location and
make uninstall removes the file.
The use of make is not limited to the development of programs. It can also be used for
other purposes, such as updating NIS maps or other projects in which specific actions
should be performed on the basis of the changes made to files.
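Such a non-compilation use can be sketched in a few lines. The file names here are invented for the demonstration: a makefile regenerates a sorted word list only when the input file has changed since the last run:

```shell
# A one-rule makefile: rebuild words.sorted only if words.txt is newer
printf 'words.sorted: words.txt\n\tsort words.txt > words.sorted\n' > Makefile
printf 'beta\nalpha\n' > words.txt
make    # first run: sort is executed
make    # second run: make reports that the target is up to date
```

The same mechanism drives tasks such as rebuilding NIS maps whenever the source files change.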
Basically, even large software projects are carried out in the same way. Naturally, the
respective makefiles are much more extensive and complex. If the software should be
compiled to a functional program on multiple architectures, things get really complicated.
For this reason, a number of tools were developed to facilitate the programmer’s task of
creating a suitable makefile, such as the programs autoconf and automake. The end
product of these tools is a script called configure.
This script generates a makefile that is adapted to the machine on which the software will be
compiled. Options used when executing configure are taken into consideration (such
as configure --prefix=/usr).
If a configure script exists, the software installation from source is usually limited to
three short commands:
earth:~ # ./configure
creating cache ./config.cache
checking for a BSD compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking whether make sets ${MAKE}... yes
checking for working aclocal... found
...
earth:~ # make
make all-recursive
make[1]: Entering directory `/tmp/xpenguins-2.2'
Making all in src
make[2]: Entering directory `/tmp/xpenguins-2.2/src'
gcc -DHAVE_CONFIG_H -I. -I. -I.. -I/usr/X11R6/include -g -O2 -DPKGDATADIR=
\""/usr/local/share/xpenguins"\" -c main.c
...
earth:~ # make install
In contrast, the uninstallation is a bit more difficult. Usually, make uninstall is not
available, so the individual programs often have to be removed manually. This can be
solved by using the package manager RPM.
Software is never static. As time goes on, bugs are fixed, new features are programmed,
and security issues are solved. Often these changes merely affect a small part of the project
files. If the source archive is very large, it would therefore spare resources if only the
difference to the previous version is downloaded from the Internet. The programs used for
this purpose are diff and patch:
• diff generates a file containing the differences between the old and the new version of a file.
• patch generates the new version from the old file and the file generated by diff.
Consider the program hello.c. The project is internationalized by placing the English
version of hello.c in the directory en and a German version in the directory de:
/* hello.c
 * Prints Hello World on the screen */

#include <stdio.h>

/* Main function */
int main ()
{
    /* Prints Hello World in German */
    printf("Hallo Welt\n");
    return (0);
}
The difference between the two directories, each of which contains only this one file in our
example, is generated with the command diff:
tux@earth:~ > diff -rNu de/ en/
--- de/hello.c	2003-01-30 16:34:26.000000000 +0100
+++ en/hello.c	2003-01-30 16:34:49.000000000 +0100
@@ -7,7 +7,7 @@
 /* Main function */
 int main ()
 {
-    /* Prints Hello World in German */
-    printf("Hallo Welt\n");
+    /* Prints Hello World */
+    printf("Hello World\n");
     return (0);
 }
The output can be redirected to a file such as hello.diff. The options used have the
following meanings:
Option Meaning
-r Recursively compare any subdirectories found
-N Treat absent files as empty
-u Output in unified output format
Line Meaning
--- and +++ Source and target file
@@ ... @@ Range of the change
- and + Removed and added lines
Additional options and their meanings are listed in the man pages of diff.
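These options can be tried on a small scale. The directory and file names in the following sketch are invented for the demonstration:

```shell
mkdir -p en de
printf 'Hello World\n' > en/hello.txt
printf 'Hallo Welt\n'  > de/hello.txt
# diff exits with status 1 when differences are found, hence the "|| true"
diff -rNu en/ de/ > hello.diff || true
cat hello.diff    # shows the ---/+++ header lines and the -/+ change lines
```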
patch generates the new version from this file and the old file in the directory de:
tux@earth:~ > patch -d de -p1 < hello.diff
patching file hello.c
The options used with the command have the following meanings:
Option Meaning
-d First change to the specified directory
-p1 Strip the first path component (everything up to and including the first slash) from
the file names in the patch
Additional options and their meanings are listed in the manual pages of patch. The new
file can be compiled and used as usual.
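The full round trip can be sketched in a few lines (file and directory names invented for the demonstration): a diff is generated, then patch applies it to a pristine copy of the old version:

```shell
mkdir -p old new copy
printf 'Hello World\n' > old/hello.txt
printf 'Hallo Welt\n'  > new/hello.txt
diff -rNu old/ new/ > hello.diff || true   # diff exits 1 on differences

cp old/hello.txt copy/              # a pristine copy of the old version
# -d: first change into copy/; -p1 strips the leading "old/" path component
patch -d copy -p1 < hello.diff
cat copy/hello.txt                  # now contains the new version: Hallo Welt
```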
For More Information
• Steve Oualline: Practical C, 3rd Ed., O'Reilly & Associates, Sebastopol, 1997
• Andrew Oram & Steve Talbott: Managing Projects with make, O'Reilly & Associates,
Sebastopol, 1991
Summary
• Usually, the three commands ./configure, make, and make install are
sufficient for compiling programs from source.
• The programs diff and patch can be used to create a new file version based on
the difference from the previous version of the respective file.
Learning Aims
2.1 Introduction
The Red Hat Package Manager (RPM) is a powerful tool for installing and uninstalling
software on Linux machines. In the course “SuSE Linux Enterprise Server: System Ad-
ministration Basics”, you learned how RPM can be used for installing software. In contrast,
this section teaches you to build your own RPM packages.
Although the installation of a program from source archives allows the program to be
adapted to the machine in the best way possible, this approach has some disadvantages.
For instance, you need to install a compiler and other tools on the machine, and
uninstallation usually requires additional effort; often you have to delete the individual
files manually.
RPM is characterized by the following features and advantages:
• The sources are used as they are obtained from the program author.
Nevertheless, to benefit from these advantages, additional work is necessary initially, as the
RPM package first must be built.
2.2 Basics
To build an RPM package, you need at least two (sometimes three) components:
• The sources
• A spec file
• Patches (if necessary)
The sources, spec file, and patches are located in a defined directory structure. In SuSE
Linux Enterprise Server, this structure already exists under /usr/src/packages/:
• SOURCES: This directory contains the source archives and any patches.
• SPECS: This directory contains the spec files controlling the build process.
• BUILD: The sources are unpacked and compiled in this directory.
• RPMS: Upon completion, the binary RPM package is placed in this directory (or one of its
subdirectories).
• SRPMS: Upon completion, the source RPM package is placed in this directory.
2.2.1 Sources
Authors usually make the sources available as compressed .tgz archives on FTP servers. Copy
these archives to the SOURCES directory without modifying them.
2.2.2 Patches
Patches may be necessary to take specific requirements of the target system into consider-
ation or to solve platform-dependent problems.
The spec file is the core of the RPM build process. It contains the information RPM needs
for building the package. Its basic structure is as follows:
• The Preamble
The preamble contains information displayed when the user queries package infor-
mation. This includes information about the version, copyright, and a description of
the package.
As an example, we will consider the SuSE FTP Proxy, which is part of the SuSE Proxy
Suite (to date, the suite merely contains this item). The sources are available on the SuSE
FTP server:
ftp://ftp.suse.com/pub/projects/proxy-suite/src/proxy-suite-1.9.tar.gz
Copy this source archive to the directory SOURCES. As the compilation of the package is
part of the build process of an RPM package, first check if the package can be compiled
without RPM according to the instructions in the README file. This should work with-
out problems with the Proxy Suite. However, some additional packages may need to be
installed.
If a package can only be installed after some modifications (e.g., of the makefile), these
modifications must be included in a patch that is installed when the package is built with
RPM. In this way, the RPM can be built without modifying the sources. The procedure
for creating the patch exceeds the scope of this manual. However, information about this
subject is available in other documentation, such as the RPM HOWTO:
http://www.tldp.org/HOWTO/RPM-HOWTO/build-it.html, Section 7.2, "Test
Building".
#
# Spec file for the Proxy Suite.
#
Summary: The SuSE Proxy Suite: ftp-proxy.
Name: proxy-suite
Version: 1.9
Release: 1
Copyright: GPL
Group: Productivity/Networking/Ftp/Servers
Source: ftp://ftp.suse.com/pub/projects/proxy-suite/src/proxy-suite-1.9.tar.gz
%description
The SuSE FTP Proxy (part of the SuSE Proxy Suite, still waiting
to become a real suite after all). Use it to proxy FTP connections
as an application-level gateway (ALG).
Generally, the structure of this section is as follows: keyword: entry (in the same
line). The sequence of the lines within the preamble is not important.
Name, Version, and Release specify the name of the software, the version number
assigned by the author of the software, and the RPM release number, respectively. The
file name of the RPM package and the name of the package are generated from these
three components; in this example, the resulting package file on x86 would typically be called
proxy-suite-1.9-1.i386.rpm. The other lines are more or less self-explanatory. %description is
followed by a package description (possibly consisting of several lines) for the user.
In this example, the %prep section merely contains a preconfigured macro for unpacking
the source archive to the directory BUILD:
%prep
%setup
Shell scripts (without the usual #!/bin/bash at the beginning) can be inserted in this
section as well as in most of the following sections.
The %build section contains the commands needed for compiling the sources.
%build
./configure
make
The generated files must be copied to the correct location in the file system. This is handled
by the %install section:
%install
make install
Finally, the generated files must be included in the RPM package. As the previous steps do
not provide any basis for RPM to guess which files actually belong to the package, the files
must be listed explicitly in the %files section. Documentation files and configuration
files can be marked as such. In the following example, the files AUTHORS, COPYING,
INSTALL, and CREDITS are part of the documentation.
%files
%doc AUTHORS COPYING
%doc INSTALL CREDITS
/usr/local/sbin/ftp-proxy
/usr/local/man/man5/ftp-proxy.conf.5.gz
/usr/local/man/man8/ftp-proxy.8.gz
/usr/local/etc/proxy-suite/ftp-proxy.conf
In this example, the %clean section can be used to delete the files copied to the file
system:
%clean
rm /usr/local/sbin/ftp-proxy
rm /usr/local/man/man5/ftp-proxy.conf.5.gz
rm /usr/local/man/man8/ftp-proxy.8.gz
rm /usr/local/etc/proxy-suite/ftp-proxy.conf
rpm -ba builds a binary RPM package as well as a source RPM package. rpm -bb
only builds a binary RPM package. Now, run the test:
rpm -ba /usr/src/packages/SPECS/proxy-suite.spec
Exercise
The procedure described so far leads to a functional RPM package. Nevertheless, some
fine-tuning may be required. For instance, one of the greatest disadvantages is that the
program is installed on the build machine when the RPM package is built. This does not
really matter if the program is not yet installed on the build system. However, if you want to
build a package for a program that is already installed, such as Postfix, you have a problem.
In this case, the existing Postfix installation and the configuration would be overwritten,
resulting in the corruption of the existing mail configuration.
Furthermore, you may want to change the directories to which the program and data files
are copied. For example, you may want the finished files to be copied to /usr/ instead of
/usr/local/. This, too, requires some modifications.
2.4.1 BuildRoot
The terms BuildRoot and BuildDir and the associated variables RPM_BUILD_ROOT
and RPM_BUILD_DIR are quite similar to each other and can easily be
confused. The build directory mentioned above is usually the directory
/usr/src/packages/BUILD/. This is where the sources are unpacked and
compiled.
Provided nothing else is specified, BuildRoot is the root directory /. If a different
directory is specified in the initial part of the spec file, RPM will look for the files listed
under %files relative to this directory.
...
Source: ftp://ftp.suse.com/pub/projects/proxy-suite/src/proxy-suite-1.9.tar.gz
BuildRoot: /tmp/%{name}-buildroot
%description
...
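The idea behind a BuildRoot can be sketched with plain install commands, independently of RPM. The file and directory names here are invented for the demonstration: files are staged below a build root, and paths such as the %files entry /usr/local/sbin/ftp-proxy are then resolved relative to that directory:

```shell
# Stage a demo file below a build root instead of the real file system
BUILDROOT=./buildroot
mkdir -p "$BUILDROOT/usr/local/sbin"
printf '#!/bin/sh\necho ftp-proxy demo\n' > ftp-proxy.demo
install -m 755 ftp-proxy.demo "$BUILDROOT/usr/local/sbin/ftp-proxy"

# RPM would now look for /usr/local/sbin/ftp-proxy below $BUILDROOT:
ls "$BUILDROOT/usr/local/sbin/ftp-proxy"
```

This is why the build system itself stays untouched: nothing is written outside the staging directory.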
Execute the command rpm -ba proxy-suite.spec once more. The modifications
performed are not sufficient to solve all problems. make still installs the programs at
the same location in the file system and RPM cannot find anything in the BuildRoot
directory.
Therefore, additional changes are required in the %build and %install sections. The
procedure depends on the program for which to build the package. One possibility is to
enter the command ./configure --help | less in the directory containing the
unpacked sources. The output reveals the command line options you can use with the
script. The commands relevant here are those that can be used to change the installation
directory. Enter the suitable options at the correct position in the spec file:
%build
./configure --prefix=$RPM_BUILD_ROOT/
make
%install
make install
During the test stage, you can add the line rm -rf $RPM_BUILD_ROOT to the %prep
section when building the RPM package. Following a test run of RPM, you will see where
the files were copied. If this line is entered in the spec file, the directory will be removed
prior to the next run, thus preventing the files from the previous run from interfering.
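With this line added, the %prep section would read as follows (a sketch; %setup stays unchanged):

```
%prep
rm -rf $RPM_BUILD_ROOT
%setup
```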
Although the above configuration installs the files under the specified directory, thus pre-
venting them from corrupting the build system, the files still do not appear where they
should on the target system. The build process will not proceed without errors unless the
%files list is modified. However, even if you modify this list, the above configuration
will cause the binary ftp-proxy to end up in the directory /sbin/ — a place where it
does not belong. Therefore, additional measures are necessary.
Accordingly, the customized %build section of the spec file could appear as follows:
%build
./configure --prefix=$RPM_BUILD_ROOT/ --sbindir=$RPM_BUILD_ROOT/usr/sbin \
    --mandir=$RPM_BUILD_ROOT/usr/share/man
make
The entry %config marks configuration files that the user can
query with the command rpm -qpc package.rpm, just as the
entry %doc marks documentation files that can be queried with
rpm -qpd package.rpm.
Once the build process proceeds without any errors, the final step can be performed. Test
the installation of the package with rpm on another system and check if the RPM package
and the installed package work correctly.
Exercise
Modify the spec file from the previous exercise in such a way that the files
are installed in a separate directory and installed on the target system in an
FHS-compliant way.
• The book “Maximum RPM” (published by SAMS) is available in PDF format at:
http://www.redhat.com/docs/books/max-rpm/index.html
• Any spec file, such as those of the source RPMs enclosed on the installation CDs
Summary
• You need the source code, a spec file, and possibly some patch files to create your
own RPM package.
• The effort required for preparing the spec file and the RPM package is offset by the
convenience of software installation and uninstallation.
Learning Aims
The kernel is the core of the operating system — a layer between the hardware and the
application processes. The kernel performs the following tasks:
• Management of the hardware resources (CPU, RAM, devices, computing time, etc.)
• Process management
(Layer model, top to bottom: Application, Kernel, Hardware)
The first kernel was released in September 1991 under the version number 0.01. The size
of the compressed tar archive was only 70 KB (decompressed: 465 KB). In contrast, the
size of the compressed archive of the kernel sources of version 2.4.20 is almost 25 MB.
In the realm of Open Source, the version number 1.0 is reserved for the first stable version.
Linux reached this status in March 1994, when version 1.0 was released.
Today, the generally accepted version numbering scheme is as follows: linux X.Y.Z
• X: Is only incremented when the kernel undergoes drastic changes. For example,
new features in version 2.0 included multiprocessor support, dynamic loading and
unloading of kernel modules, and quotas.
• Y: Even numbers indicate stable versions. Odd numbers indicate developer versions.
• Z: Is incremented for each new patch level within a series (bug fixes and minor changes).
Example: In January 2003, the latest stable version was 2.4.20 and the developer version
was 2.5.59.
To compile a new kernel, you need the kernel sources, which are available on the Internet
or on the SuSE installation media (CDs).
The sources are available both in gzip (.gz) and in bzip2 format (.bz2). The
archives compressed with bzip2 are usually a bit smaller than those compressed with
gzip.
Kernels compiled from the unmodified kernel sources are referred to as vanilla ker-
nels.
On a standard SuSE system, the kernel is located in the directory /boot and is called
vmlinuz. The following overview shows a number of additional files and directories
associated with the kernel:
• /boot/initrd: The initial ramdisk containing all modules required for booting.
In the past, all hardware drivers were compiled into the Linux kernel. When new hardware
components not supported by the kernel were added, a new kernel had to be compiled.
This situation was remedied through the introduction of kernel modules, which can be
loaded during operation whenever necessary. These kernel modules are usually hardware
drivers, but there are also modules for various file systems, IPv6 support, firewall functions,
and so forth. Thus, modules can be defined as loadable device drivers and kernel functions.
On a standard SuSE Linux system, a number of kernel modules are usually loaded in the
memory. The command /sbin/lsmod shows which modules are currently loaded:
earth:~ # lsmod
Module Size Used by Not tainted
ipv6 150036 -1 (autoclean)
st 28428 0 (autoclean)
sr_mod 14616 0 (autoclean) (unused)
cdrom 28736 0 (autoclean) [ide-cd sr_mod]
sg 29568 0 (autoclean)
usb-uhci 23052 0 (unused)
usbcore 61696 1 [usb-uhci]
8139too 15208 1
mii 1232 0 [8139too]
lvm-mod 65184 5 (autoclean)
reiserfs 193424 3
aic7xxx 124856 0
The number in the third column indicates how many other modules use this module. The
fourth column contains further information about the module:
• (autoclean) shows that this module is managed by the kerneld (kernel ver-
sion 2.0.x) or kmod (kernel version 2.2.x or later).
Exercise
List the loaded modules with lsmod and compare the output with the content
of the file /proc/modules.
While lsmod lists the loaded kernel modules, the command /sbin/modinfo displays
information about a given module:
earth:~ # modinfo 8139too
filename:    /lib/modules/2.4.19-4GB/kernel/drivers/net/8139too.o
description: "RealTek RTL-8139 Fast Ethernet driver"
author:      "Jeff Garzik <jgarzik@mandrakesoft.com>"
license:     "GPL"
parm:        multicast_filter_limit int, description "8139too maximum
number of filtered multicast addresses"
parm:        max_interrupt_work int, description "8139too maximum events
handled per interrupt"
parm:        media int array (min = 1, max = 8), description "8139too: Bits
4+9: force full duplex, bit 5: 100Mbps"
parm:        full_duplex int array (min = 1, max = 8), description "8139too:
Force full duplex for board(s) (1)"
parm:        debug int, description "8139too bitmapped message enable number"
Section 3.3.4 on the next page shows how the parameters can be used to configure the
kernel modules.
Exercise
Execute the command modinfo for some of the modules loaded on your
system.
As a general rule, only modules that are not used and not needed by other modules can be
removed. The command for removing modules is /sbin/rmmod.
If you try to remove a module that is currently being used, the following message is dis-
played:
earth:~ # rmmod reiserfs
reiserfs: Device or resource busy
Kernel modules that are no longer needed are not removed automatically. To have them
removed automatically, configure a cron job. The following command removes all unused
modules:
rmmod -a
The option -a does the following: it tags all unused modules as unused and removes all
modules that were already tagged during a previous invocation. A module therefore only
disappears on the second consecutive run of rmmod -a during which it was unused.
Exercise
1. Use the command rmmod to remove an unused module.
2. Execute rmmod -a several times. Compare the output of lsmod be-
fore and after you execute the command.
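The cron job mentioned above could consist of a single system crontab entry. The interval chosen here is arbitrary:

```
# /etc/crontab entry: remove unused kernel modules every 10 minutes
*/10 * * * *   root   /sbin/rmmod -a
```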
Kernel modules are loaded automatically whenever necessary or manually by means of the
commands /sbin/insmod or modprobe.
Starting with kernel version 2.0, the automatic loading of the modules was handled by the
kerneld (kernel daemon). As of kernel version 2.2, this task is handled by the kernel
thread kmod.
Example for the automatic loading of a module: On a system on which the Logical Volume
Manager (LVM) is not installed, normally no module is loaded for the LVM:
If you search for logical volumes with the command lvscan, the required module will
be loaded automatically:
earth:~ # lvscan
lvscan -- no volume groups found
There are two commands for manually loading modules: insmod and modprobe. insmod can be
used to load individual modules; however, dependencies on other modules are not taken
into consideration.
Example 1:
Example 2:
In this case, various error messages (unresolved symbol) are displayed, indicating
that other kernel modules are needed. This can be remedied with the command modprobe,
which takes dependencies into consideration and automatically loads the respective
modules (see Example 3).
Example 3:
Exercise
Try to load modules. Here are some example modules that you can load and
remove:
ipsec An IPSEC module (VPN).
zft-compressor A compression module for floppy tapes.
raid5 A module for software RAID.
Settings affecting the kernel modules are stored in the file /etc/modules.conf (previously:
/etc/conf.modules). Here are some of the parameters that can be set in the
file /etc/modules.conf:
Parameter Meaning
depfile Sets the path to the file containing the module dependencies (default:
depfile=/lib/modules/version/modules.dep)
path This parameter can be used to specify additional directories to search
for kernel modules.
alias With this parameter, an additional name (an alias) can be set
for the module. Syntax: alias alias_name module_name
Example: alias eth0 e100
options This parameter can be used to specify options for a module (or an alias).
Example: options 3c505 io=0x300 irq=10
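Putting these parameters together, a hypothetical /etc/modules.conf could contain entries such as the following (the directory and the option value are invented for illustration):

```
# Search an additional directory for kernel modules
path=/lib/modules/extra
# The first network adapter uses the RealTek driver
alias eth0 8139too
# Pass options to the module via its alias
options 8139too media=0x10
```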
Today, the compilation of custom kernels is usually unnecessary, as new device drivers
can be made available as loadable modules. Furthermore, modern hardware does not require
lean, resource-friendly custom kernels. Nevertheless, there are some situations in which
compiling a custom kernel does make sense. There are some interesting
kernel patches that enhance the operating system with a number of useful features. Here is
an incomplete list of available kernel patches:
• LIDS (Linux Intrusion Detection System): LIDS is a kernel patch and an administra-
tion tool for the Linux Intrusion Detection System. LIDS expands the kernel with a
number of security features. For example, LIDS can be used to restrict the rights of
the user root. See http://www.lids.org/
This section shows how you can compile and install your own kernel and how to handle
kernel patches.
Important note: SuSE does not provide any support for systems running non-SuSE ker-
nels.
This section covers the compilation of custom kernels without kernel modules. The
description is based on the kernel sources of version 2.4.20.
The procedure is as follows:
1. Install kernel sources
2. Configure kernel
3. Compile kernel
4. Install kernel
5. Test kernel
The directory /usr/src is the correct location for installing the kernel sources.
Exercise
Ask your trainer where the kernel sources are located and decompress the
source archive in the directory /usr/src.
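Unpacking the archive works as follows. Because the real kernel source archive is not available here, the sketch creates a small stand-in archive first; only the two tar commands at the end reflect actual usage:

```shell
# Create a stand-in archive; the real linux-2.4.20.tar.gz would come from
# the Internet or the SuSE installation media
mkdir -p src/linux-2.4.20
echo 'kernel sources' > src/linux-2.4.20/README
tar czf linux-2.4.20.tar.gz -C src linux-2.4.20

# Unpack it, as you would below /usr/src: z selects gzip, j would select bzip2
mkdir -p usr-src
tar xzf linux-2.4.20.tar.gz -C usr-src
ls usr-src/linux-2.4.20
```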
To configure the new kernel, change to the directory /usr/src/linux. Basically, there
are three ways of performing the configuration:
• make config
This command queries each configuration option one after the other on the command
line.
• make menuconfig
This approach provides a simple menu that can be used easily with the keyboard.
• make xconfig
This command starts a graphical configuration tool, shown in Figure 3.2 on the fac-
ing page. This configuration tool is used for the further procedure.
To begin with, here is an overview of the most important available configuration options:
Option                       Meaning
Code maturity level options  Determines whether options that are not fully mature are displayed.
Loadable module support      Support for kernel modules.
Processor type and features  General processor settings: processor type, multiprocessor support, high-memory support, etc.
General setup                General settings: network support, PCI support, PCMCIA support, etc.
Parallel port support        For devices connected to the parallel port, such as Zip drives, or for setting up a PLIP (Parallel Line Internet Protocol) network.
Plug and Play configuration  Configuration of plug and play devices via the kernel.
Block devices                Support for block-oriented devices (RAID controllers, floppy disk drives, etc.).
Multidevice support          Support for software RAID and LVM.
Networking options           General network options (no drivers for network adapters): network protocols, router functions, net filters, etc.
ATA/IDE/MFM/RLL support      Support for mass storage media, such as IDE hard disks and ATAPI CD-ROM drives.
SCSI support                 General SCSI support and drivers for SCSI controllers.
Network device support       Drivers for network adapters.
ISDN subsystem               ISDN support for Linux.
Character devices            Character devices such as terminals and mice.
File systems                 Support for various file systems (ext2, ReiserFS, XFS, JFS, VFAT, NTFS, etc.).
Console drivers              VGA console and framebuffer device support.
Sound                        Drivers for sound cards.
USB support                  USB (Universal Serial Bus) support.
Important note concerning the kernel configuration: The kernel must be able to ad-
dress and mount the root partition. You need a driver for the hardware (e.g., IDE
or SCSI) and a driver for the file system (e.g., ext2 or ReiserFS) of the root partition.
If, for example, the file system is not supported by the kernel, the kernel will terminate the boot process with a kernel panic message:
Kernel panic: VFS: Unable to mount root fs on 03:03
Drivers and kernel properties can be integrated into the kernel or compiled as kernel modules. For example, Figure 3.3 shows a driver for a network adapter that is configured as a module (see arrow).
When the kernel configuration is completed, all configuration parameters are written to the file .config. On some Linux systems, there is a configuration file for the current kernel.
This file is located in the virtual proc file system: /proc/config.gz
Following the configuration, the kernel must be compiled. This is done in several steps,
using the command make:
1. make dep
Dependency check.
2. make clean
Removal of old log files and object files.
3. make bzImage
Compilation of the kernel. The compressed kernel image is located under
arch/i386/boot/bzImage.
4. make modules
Compilation of the kernel modules.
5. make modules_install
Installation of the modules in the directory /lib/modules.
The command make supports a number of additional targets:
Target     Meaning
mrproper   A clean-up function similar to make clean. Additionally, configuration files are removed.
distclean  Like make mrproper, but the following files are also removed: core, *.orig, *~, *.SUMS, *.bak, *.rej
spec       Generates an RPM spec file (required for generating a kernel RPM).
Exercise
Configure a new kernel in such a way that this kernel can at least mount the
root partition. At the moment, you do not need any of the other configuration
options.
The configuration options you need depend on the hardware of your machine (e.g., IDE or SCSI) and on the file system of the root partition.
To test the compiled kernel, the kernel image must be copied to the correct location (directory /boot) and the boot manager needs to be adapted accordingly.
Step-by-step description of the kernel installation:
1. Copy the compressed kernel image to the directory /boot:
earth:/usr/src/linux # cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.20
2. Copy the symbol table System.map to the directory /boot:
earth:/usr/src/linux # cp System.map /boot/System.map-2.4.20
3. Add an entry for the new kernel to the configuration of the boot manager (for GRUB, the file /boot/grub/menu.lst). If you use LILO, also execute the command lilo after adapting /etc/lilo.conf.
Exercise
1. Install your new kernel.
2. Test the kernel by rebooting the system and selecting your new kernel in
the boot manager.
3. Configure your kernel anew. Now you should also compile a module for
the network adapter in your machine.
4. Install and test this kernel too.
To update the kernel sources to a newer version, you do not need to download the entire sources from the Internet. You merely need a file containing the differences between the two versions.
The diff file (patch file) can be generated with the command diff. These files can be
downloaded from http://www.kernel.org. A diff file contains exactly the changes
from one kernel version to the next. Example: The file patch-2.4.21.bz2 contains
the changes between 2.4.20 and 2.4.21.
The differences of a patch file can be incorporated in the current version of the source files
by means of the command patch. If the kernel sources were unpacked in the directory
/usr/src, the patch should also be copied to the directory /usr/src. Example:
earth:~ # ls /usr/src
drwxr-xr-x 14 root root 632 Mar 14 08:25 linux-2.4.20
drwxr-xr-x 7 root root 168 Mar 12 10:14 packages
-rw-r--r-- 1 root root 2741941 Mar 13 08:56 patch-2.4.21.bz2
As the patch file is compressed with bzip2 or gzip, it first must be decompressed:
earth:~ # cd /usr/src
earth:/usr/src # bunzip2 patch-2.4.21.bz2
Now the kernel sources can be updated to the new version with the command patch:
earth:/usr/src # patch -p0 <patch-2.4.21
The option -p specifies how many leading path components are removed from the file names in the patch file: with -p0, the file names are used unchanged; with -p1, the first path component is stripped. As the file names in kernel patches begin with linux/, the option -p0 is correct when the patch is applied from within /usr/src.
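The effect of -p and of --reverse can be tried safely on a small example; the directory and file names below are invented for illustration and have nothing to do with the kernel sources:

```shell
set -e
cd "$(mktemp -d)"
# Build a tiny "old" and "new" tree (hypothetical files):
mkdir linux
echo "version 2.4.20" > linux/version.txt
cp -r linux linux.new
echo "version 2.4.21" > linux.new/version.txt
# Generate a diff file; the file names in it begin with "linux/":
diff -u linux/version.txt linux.new/version.txt > mini.patch || true
rm -r linux.new
# -p0 uses the file names from the diff unchanged, so the target
# linux/version.txt is found relative to the current directory:
patch -p0 < mini.patch
grep "2.4.21" linux/version.txt
# --reverse undoes the same patch again:
patch -p0 --reverse < mini.patch
grep "2.4.20" linux/version.txt
```

The same mechanism scales up to the kernel tree: applying the official patch from /usr/src with -p0, and reversing it in the same way.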
Following the successful update, old object files and so forth should be removed. This can
be done with the command make mrproper.
Exercise
1. Update the kernel sources with the patch provided by the trainer.
2. Compile and install the newly patched kernel.
Removing Patches
The command patch can also be used to undo the changes. The option required for the
command patch is -R or --reverse.
earth:~ # cd /usr/src
earth:/usr/src # bunzip2 -cd patch-2.4.21.bz2 | patch -p0 --reverse
Exercise
Apart from the kernel, most Linux distributions load a ramdisk to the RAM when the
system is booted. This ramdisk usually contains kernel modules needed for booting the
system. A ramdisk is used like a hard disk and is set up in the RAM.
Most Linux distributions use generic kernels that only support IDE hard disks and the
standard file system ext2. If the root file system is located on a SCSI hard disk with
the Reiser file system, the kernel needs the respective drivers. However, the modules in
/lib/modules cannot be accessed, as this directory is located in the root partition.
Like the kernel, the initial ramdisk is located in the directory /boot:
earth:~ # ls -l /boot/initrd
-rw-r--r-- 1 root root 475100 Mar 12 10:16 /boot/initrd
This ramdisk is a file system compressed with gzip. It can be decompressed with a command such as gunzip. In the following example, the file /boot/initrd is decompressed and stored in the file /tmp/ramdisk:
earth:~ # gunzip </boot/initrd >/tmp/ramdisk
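The compress and decompress steps can be tried risk-free on a stand-in file; ramdisk.img below is made up, and no real /boot/initrd is touched:

```shell
set -e
cd "$(mktemp -d)"
# Create a stand-in for a ramdisk image:
echo "pretend file system image" > ramdisk.img
# What the distribution ships as /boot/initrd is the gzip-compressed image:
gzip < ramdisk.img > initrd
# ...and the gunzip call shown above recovers it unchanged:
gunzip < initrd > ramdisk.copy
cmp ramdisk.img ramdisk.copy && echo "round trip ok"
```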
The decompressed ramdisk (/tmp/ramdisk) can be mounted on /mnt for test purposes. The option -o loop is required for the command mount:
earth:~ # mount -o loop /tmp/ramdisk /mnt
During the boot process, the kernel searches the initial ramdisk for a file called linuxrc
and executes it. This is normally a shell script in which kernel modules are loaded with the
command insmod.
Exercise
1. Mount the decompressed ramdisk via the loop device and have a look at the script linuxrc it contains.
2. Install the new ramdisk by compressing the new image with gzip, for
example, with:
earth:~ # gzip </tmp/ramdisk >/boot/initrd
Normally, the initial ramdisk does not have to be processed in this way, as there is a script that automatically performs the required adaptations: /sbin/mkinitrd. The corresponding configuration file is /etc/sysconfig/kernel. The kernel modules to load in the
ramdisk are specified in the variable INITRD_MODULES:
earth:~ # cat /etc/sysconfig/kernel
#
# This variable contains the list of modules to be added to the initial
# ramdisk by calling the script "mk_initrd"
# (like drivers for scsi controllers, lvm, or reiserfs)
#
INITRD_MODULES="aic7xxx reiserfs"
...
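Because sysconfig files consist of plain shell variable assignments, a script such as mkinitrd can simply source them. A sketch using a hypothetical copy of the file:

```shell
set -e
cd "$(mktemp -d)"
# A stand-in for /etc/sysconfig/kernel:
cat > kernel.sample <<'EOF'
# drivers needed before the root file system can be mounted
INITRD_MODULES="aic7xxx reiserfs"
EOF
# Sourcing the file defines the variable in the current shell:
. ./kernel.sample
# List the modules that would be put on the initial ramdisk:
for module in $INITRD_MODULES; do
    echo "$module"
done > modules.txt
cat modules.txt
```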
If you modify the variable INITRD_MODULES, you have to execute the command
mkinitrd:
earth:~ # mkinitrd
using "/dev/sda6" as root device (mounted on "/" as "reiserfs")
...
creating initrd "/boot/initrd.shipped" for kernel "/boot/vmlinuz.shipped"
(version 2.4.19-4GB)
- insmod aic7xxx (kernel/drivers/scsi/aic7xxx/aic7xxx.o)
- insmod reiserfs (kernel/fs/reiserfs/reiserfs.o)
If you are using lilo as boot manager, you may want to run 'lilo' now.
Note: If you use the boot manager LILO instead of the default boot manager GRUB, you
have to execute the command lilo.
Summary
• Kernel modules enable dynamic loading of device drivers and other kernel proper-
ties.
• New kernels can be configured with the command make menuconfig or make
xconfig.
• The command patch can be used to update kernel sources or undo patches.
• The initial ramdisk enables modules to be loaded even before the root partition is
mounted.
Learning Aims
• how to configure the system by setting various kernel parameters during operation
• how to prevent individual users from using system resources excessively at the expense of other users
The files and directories under /proc/ contain a wealth of information about various aspects of the running system. In particular, the files under /proc/sys/ can be modified during operation, affecting the characteristics of the running machine.
The individual files are ASCII text files that can be viewed with cat or less. For exam-
ple, the following listing shows information about the CD-ROM drive:
tux@earth:~> cat /proc/sys/dev/cdrom/info
CD-ROM information, Id: cdrom.c 3.12 2000/10/18
The command sysctl can be used to view all or specific modifiable values:
tux@earth:~> /sbin/sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0
tux@earth:~> /sbin/sysctl -a
sunrpc.nlm_debug = 0
sunrpc.nfsd_debug = 0
sunrpc.nfs_debug = 0
sunrpc.rpc_debug = 0
abi.fake_utsname = 0
abi.trace = 0
abi.defhandler_libcso = 68157441
...
The command echo can be used to edit individual values. For example, the following
command activates routing:
earth:~ # echo 1 > /proc/sys/net/ipv4/ip_forward
If you want a number of kernel parameters to be set when the system is booted, the command sysctl is very useful. The parameters can be entered in the file /etc/sysctl.conf; during booting, the script /etc/init.d/boot.sysctl sets them by executing the command sysctl -p.
# /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
fs.file-max = 65535
kernel.shmmax = 2147483648
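The dotted names used by sysctl and /etc/sysctl.conf map directly onto paths below /proc/sys/: every dot corresponds to a directory separator. The sketch below illustrates that mapping with a made-up copy of the file; nothing is written to /proc:

```shell
set -e
cd "$(mktemp -d)"
# A stand-in for /etc/sysctl.conf:
cat > sysctl.sample <<'EOF'
# comments and blank lines are ignored
net.ipv4.ip_forward = 1
fs.file-max = 65535
EOF
# Translate each "key = value" line into the /proc/sys file it targets:
grep -v '^#' sysctl.sample | grep '=' | while IFS='=' read key value; do
    path=$(echo "$key" | tr -d ' ' | tr '.' '/')
    echo "/proc/sys/$path gets $(echo "$value" | tr -d ' ')"
done > mapping.txt
cat mapping.txt
```

This is exactly why `sysctl net.ipv4.ip_forward` and `echo 1 > /proc/sys/net/ipv4/ip_forward` address the same kernel parameter.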
4.2 Powertweak
SuSE Linux Enterprise Server offers a special tool for setting these parameters: Powertweak. This tool comprises the daemon powertweakd and a graphical YaST2 front-end by means of which the configuration can be carried out in a convenient and transparent manner (Powertweak is not part of UnitedLinux). A significant advantage of this method of setting kernel parameters is that a short description is provided for every parameter.
When started for the first time via YaST2 ("System", "Powertweak Configuration"), the configuration file /etc/powertweak/tweaks is generated
and the daemon is started. From now on, it will be started every time the system is booted, as the links to the start script /etc/init.d/powertweakd are also set in the respective runlevel directories under /etc/init.d/.
The activation of routing as shown in the above example (see page 39) can also be per-
formed here, as shown in the following figure:
# <b>Networking - IP</b>
# <p>IP Forwarding</p>
# <p>This option enables forwarding of IP packets. E.g. from eth0 to eth1.</p>
net/ipv4/ip_forward = 0
...
Note: A disadvantage of these diverse configuration possibilities is that the same parameter can be set differently in various places. For instance, a change of the variable IP_FORWARD=no in the file /etc/sysconfig/sysctl will not have any effect if powertweakd is started and a different value was set for IP forwarding in the Powertweak configuration. Therefore, you should select one method and use it exclusively to avoid inconsistencies.
Exercise
Start yast and the module for powertweak. Modify various values and
observe the effect.
4.3 hdparm
hdparm offers a variety of options for changing the behavior of IDE hard disks. Most of
these options are only needed in exceptional cases and the manual page explicitly warns
against the use of certain options. However, there are some options that are used quite
frequently. Enter hdparm --help for an overview of the options:
earth:~ # hdparm --help
hdparm - get/set hard disk parameters - version v5.2
Usage: hdparm [options] [device] ..
Options:
-a get/set fs readahead
-A set drive read-lookahead flag (0/1)
-b get/set bus state (0 == off, 1 == on, 2 == tristate)
-B set Advanced Power Management setting (1-255)
-c get/set IDE 32-bit IO setting
-C check IDE power mode status
-d get/set using_dma flag
-D enable/disable drive defect-mgmt
-w perform device reset (DANGEROUS)
-W set drive write-caching flag (0/1) (DANGEROUS)
-x tristate device for hotswap (0/1) (DANGEROUS)
-X set IDE xfer mode (DANGEROUS)
-y put IDE drive in standby mode
-Y put IDE drive to sleep
-Z disable Seagate auto-powersaving mode
-z re-read partition table
The DMA mode setting is especially important for the hard disk performance. This mode can be set with the option -d (0 deactivates DMA, 1 activates it):
earth:~ # hdparm -d 0 /dev/hda
/dev/hda:
setting using_dma to 0 (off)
using_dma = 0 (off)
A performance test can be carried out to assess the difference between active and inactive
DMA access:
earth:~ # hdparm -t /dev/hda
/dev/hda:
Timing buffered disk reads: 64 MB in 8.73 seconds = 7.33 MB/sec
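As a quick sanity check on the reported figure: the MB/sec value is simply the amount of data read divided by the elapsed time:

```shell
# 64 MB read in 8.73 seconds, as in the hdparm output above:
awk 'BEGIN { printf "%.2f MB/sec\n", 64 / 8.73 }'
```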
Exercise
Use hdparm to check the current setting of the DMA mode. Then test the
hard disk speed with hdparm, change the DMA mode, and test the hard disk
speed again. Do you notice any difference?
4.4 ulimit
The command ulimit does not have a direct impact on the system performance. Rather, its task is to prevent individual users from using system resources excessively at the expense of other users. Accordingly, ulimit can be used to configure the memory usage, the number of possible processes, and other factors. The current limits can be viewed with the command ulimit -a:
earth:~ # ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 1023
virtual memory (kbytes, -v) unlimited
The individual values can also be set anew for the current shell and its child processes:
earth:~ # ulimit -u
1023
earth:~ # ulimit -u 100
earth:~ # ulimit -u
100
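Because a limit only affects the current shell and its children, a value can be lowered temporarily in a subshell without touching the invoking shell. A small sketch using the open-files limit (the value 64 is arbitrary):

```shell
# The soft limit for open files before any change:
before=$(ulimit -S -n)
# Lower it inside a subshell; only that subshell and its children see it:
inside=$( (ulimit -S -n 64; ulimit -S -n) )
# The surrounding shell keeps its old value:
after=$(ulimit -S -n)
echo "before=$before inside=$inside after=$after"
```

Note that a soft limit may always be lowered, but a non-root process cannot raise its hard limit again once reduced.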
The details of the individual options are described in the manual pages of bash, section
ulimit.
The settings can also be changed globally for the entire system. The configuration can be performed by means of the file /etc/profile or by way of the PAM configuration. The advantage of the configuration via PAM is that the file /etc/security/limits.conf enables user- or group-specific configuration and the files in the directory /etc/pam.d/ enable application-specific (login, sshd, etc.) configuration.
The file /etc/security/limits.conf contains preconfigured, commented-out entries that you can customize according to your needs:
...
#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#@student - maxlogins 4
This file also contains an explanation of what can be entered in the individual columns.
Exercise
2. Change the ulimit value, execute a.out again, and observe the
change in the processes. (If the default ulimit value of 1023 is used, the
computer will be virtually unusable following the execution of a.out.
Often, the only thing you can do in such a case is to reboot the system.)
Summary
• Kernel parameters can easily be modified during operation with the command
sysctl or the package powertweak.
• The command ulimit can be used to control the use of various resources by users.
Learning Aims
5.1 Basics
RAID stands for “Redundant Array of Independent (or Inexpensive) Disks.” Of the six
separate RAID levels originally defined, the Linux kernel supports the three most common
levels (0, 1, and 5). The purpose of RAID is to combine several hard disk partitions into one "virtual" hard disk to increase performance or data security.
One way of implementing RAID is the use of special hardware controllers to which the
SCSI or IDE hard disks are connected. The controller controls the hard disks and the
organization of the data structures according to the set RAID level. Linux then sees a single device representing the whole array instead of the individual hard disks.
Another possibility is to implement RAID as software RAID with the help of the drivers in
the Linux kernel. With this approach, individual hard disk partitions are combined into a multiple-disk device (e.g., /dev/md0). For practical reasons, these partitions should
be located on separate hard disks. However, for training purposes they may also be located
on the same hard disk.
RAID 0: In contrast to what is implied by the name RAID, RAID 0 does not provide any
redundancy. RAID 0 merely distributes the data across multiple partitions to achieve
higher data throughput rates. The disadvantage: if one hard disk in this array fails,
all data is lost. This kind of data management is also referred to as striping.
RAID 1: RAID 1 increases the data security by maintaining an exact mirror of the data on one disk on the other disk. If one disk fails, all data continues to be fully available on the other disk. A further advantage is the increased read speed. The disadvantages are the increased costs (two hard disks with 50 GB each merely provide a total capacity of 50 GB) and the somewhat reduced write speed. This kind of data management is often referred to as "mirroring".
RAID 5: RAID 5 distributes the data together with additional parity information across three or more partitions. If one partition fails, its content can be reconstructed from the data and parity information on the remaining partitions; the capacity of one partition is lost to the parity information. RAID 5 thus offers a compromise between the performance of RAID 0 and the redundancy of RAID 1.
5.2 Configuration
A new RAID array can be set up during the installation or at a later time. The same YaST2
dialog is used in both cases.
To date, YaST2 does not offer the functionality needed for administering an existing RAID
array. This can be done with the command-line program mdadm as described in Section 5.3
on page 52.
The first step when setting up a RAID array is the selection of the involved partitions.
Partitions in a software RAID are identified by the partition ID 0xFD (Linux RAID). Depending on the RAID level, two or more partitions of this kind are needed.
In the second step, the RAID level is determined and the partitions are joined to a RAID
array.
The last step is the creation of a file system on the new device /dev/md0 and the addition
of a suitable entry in the file /etc/fstab.
Theoretically, the steps described above could also be performed from the command line
with mdadm. However, YaST2 is more practical for these tasks.
5.3 Administration
A RAID 5 disk array prevents data loss in the event of a disk failure, thus enhancing the
system availability. Nevertheless, RAID 5 does not constitute a substitute for a backup, as
the failure of two disks would inevitably lead to a loss of data.
The program mdadm can be used to remove and insert individual partitions from and to
the array. The current state of the RAID array can be viewed any time with the following
command:
earth:~ # cat /proc/mdstat
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sdb1[1] sdc1[2] sda3[0]
4208640 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
Defective disks can be marked as “faulty” and removed from the array with an additional
command:
earth:~ # mdadm /dev/md0 -f /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0
earth:~ # mdadm /dev/md0 -r /dev/sdb1
mdadm: hot removed /dev/sdb1
A new partition is inserted into the array with the option -a:
earth:~ # mdadm /dev/md0 -a /dev/sdb2
Upon completion of the recovery, the new partition will be part of the array:
earth:~ # cat /proc/mdstat
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sdb2[1] sdc1[2] sda3[0]
4208640 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
If a fourth partition is added to a RAID array consisting of three partitions, the new partition will not be used immediately. If, however, one of the active partitions fails, the fourth partition seamlessly replaces the failed one: the data and parity information is immediately reconstructed on the spare partition, completing the array again.
Initial state:
earth:~ # cat /proc/mdstat
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sdc1[2] sdb2[1] sda3[0]
4208640 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
One of the existing partitions is marked as faulty:
linux:~ # mdadm /dev/md0 -f /dev/sda3
mdadm: set /dev/sda3 faulty in /dev/md0
The reconstruction of the parity information on the newly added partition begins without
any additional commands:
linux:~ # cat /proc/mdstat
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sdb1[3] sdc1[2] sdb2[1]
4208640 blocks level 5, 128k chunk, algorithm 2 [3/2] [_UU]
[==>..................] recovery = 11.3% (240304/2104320) \
finish=6.3min speed=4914K/sec
unused devices: <none>
YaST2 and the listed options of mdadm are sufficient for the basic administration of software RAID systems. Additional possibilities are described in the manual pages for mdadm.
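The status brackets in /proc/mdstat can also be checked mechanically: "U" marks a working device and "_" a failed or missing one. In the sketch below, a captured sample stands in for the real /proc/mdstat:

```shell
set -e
cd "$(mktemp -d)"
# A captured /proc/mdstat of a degraded RAID 5 array:
cat > mdstat.sample <<'EOF'
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sdb1[3] sdc1[2] sdb2[1]
      4208640 blocks level 5, 128k chunk, algorithm 2 [3/2] [_UU]
EOF
# Any "_" inside the status brackets means a device is missing:
if grep -q '\[[U_]*_[U_]*\]' mdstat.sample; then
    state=degraded
else
    state=complete
fi
echo "md0 is $state"
```

A check like this could form the core of a simple monitoring script that alerts the administrator before a second disk fails.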
Exercise
1. Create four partitions of the same size and set up a RAID 5 array consisting of three partitions. Remove one partition from the array. Add the fourth partition. Monitor the file /proc/mdstat.
2. Following the reconstruction of the array, reinsert the partition
you just removed and remove another partition. Monitor the file
/proc/mdstat.
Summary
Learning Aims
6.1 Basics
The “conventional” partitioning of hard disks is rather inflexible — when a partition is full,
you have to move the data to another medium before you can resize the partition, create
a new file system, and copy the files back. Usually, such changes cannot be implemented
without changing adjacent partitions, whose contents also need to be backed up to other
media and written to their original locations after the repartitioning. Although there are
tools that facilitate these steps, their use can sometimes result in loss of data and corrupt
file systems.
The Logical Volume Manager solves this problem by inserting an abstraction layer referred to as volume group (VG) between the partitions and file systems accessed by the applications and the actual physical storage media. In the ideal case, this allows resizing of the physical media during operation without affecting the applications.
The basic structure is as follows (see Figure 6.1): several physical volumes (PV) (entire hard disks or individual partitions) are combined into a higher-level unit referred to as a volume group (VG). Further hard disks or partitions can be added to the volume group during operation whenever necessary. The volume group can also be reduced in size by removing hard disks or partitions. The volume group, in turn, can be split into several (up to 256) logical volumes (LV) that can be addressed with their device names (e.g., /dev/system/usr) like conventional partitions and on which file systems can be created.
(Figure 6.1: logical volumes, such as usr and var, are created within a volume group, which in turn consists of several physical volumes.)
Note: Just as with other direct manipulations of the file system, a data backup should be
made before configuring LVM.
During the installation of SuSE Linux Enterprise Server, determine the physical volumes in the hard disk partitioning dialog by assigning them the partition ID 8e for LVM. Then click LVM and add the physical volumes to a volume group. The proposed volume group name, system, can be accepted or changed (as the individual logical volumes are addressed with /dev/VG_name/LV_name, you cannot assign any name that already exists in the directory /dev). In the following step, this volume group is split into individual logical volumes in which file systems are created as in conventional partitions and for which mount points are assigned.
Theoretically, it is possible to set up the root file system / in an LV. However, the access
with the rescue system in the event of an emergency is easier if this is not the case. As the
root partition does not have to be very large and the LVM provides a lot of flexibility, this
should not pose any problem.
Even on an installed system, additional partitions can be managed by means of the LVM.
The basic configuration procedure is the same as during the installation. While the extension of volume groups by adding physical media (partitions, hard disks) and the extension of logical volumes is usually quite easy and can be done with YaST2 during operation, the reduction of logical volumes or the removal of physical storage media from the volume group has to be done very carefully. Before a file system is shrunk, it first must be unmounted with the command umount. If, for example, you want to shrink the volume mounted on /usr, you have to use the command-line tools instead of YaST2.
pvcreate
The command pvcreate can be used to prepare partitions or entire hard disks for use in
a volume group.
earth:~ # pvcreate /dev/sda8
Entire hard disks should not have any partition table in the MBR. Any existing table should
be overwritten. The following example shows the command for overwriting the partition
table for the second SCSI hard disk:
earth:~ # dd if=/dev/zero of=/dev/sdb count=1
vgcreate
If no volume group exists or you want to create a new one, use the command vgcreate together with the name of the volume group and the physical volumes prepared with pvcreate:
earth:~ # vgcreate system /dev/sda8
vgextend
To add a new physical volume to an existing volume group, use the command vgextend.
The syntax of this command is the same as that of vgcreate (the only difference is that
the volume group already exists):
earth:~ # vgextend system /dev/sda8
lvcreate
If there is still space in the volume group or you have freed additional space as described
above, you can create additional logical volumes with the command lvcreate. Apart
from the size of the new logical volume, specify the volume group with which to associate
it and its name, unless you want to use the name assigned by default:
earth:~ # lvcreate -L 500M -n tmp system
Subsequently, create a file system in the new logical volume (using a command such as
mkfs.reiserfs or mkfs.ext3). Now the logical volume can be mounted just like
any other partition:
earth:~ # mount /dev/system/tmp /tmp
lvextend
An existing logical volume can be extended if the volume group still has space not allocated
to any logical volume. The procedure comprises two basic steps: first, the logical volume is enlarged with lvextend; then the file system is expanded to the new size. If the logical volume contains an ext2 or ext3 file system, it has to be unmounted with the command umount before the file system is extended. Reiser file systems can be extended while they are mounted.
earth:~ # lvextend -L +500M /dev/system/tmp
earth:~ # resize_reiserfs -s +500M -f /dev/system/tmp
Exercise
1. Create two partitions with the ID 8e and include them in the volume
group system. Create two logical volumes that do not cover the entire
space of the volume group. Create file systems in the logical volumes.
2. Extend one of the logical volumes by several MB and adapt the file system accordingly.
3. Mount the logical volumes in the file system and copy data to the logical
volumes.
Before a physical volume can be removed from a volume group, the physical extents (administrative units in the physical volumes) used by the logical extents (administrative units in the logical volumes) must be moved to other physical volumes. The current state of the physical volumes can be displayed with the command pvscan:
earth:~ # pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- ACTIVE PV "/dev/sda5" of VG "system" [1.46 GB / 0 free]
pvscan -- ACTIVE PV "/dev/sda6" of VG "system" [1.46 GB / 0 free]
pvscan -- ACTIVE PV "/dev/sda7" of VG "system" [1.46 GB / 668 MB free]
pvscan -- ACTIVE PV "/dev/sda8" of VG "system" [996 MB / 996 MB free]
pvscan -- total: 4 [5.39 GB] / in use: 4 [5.39 GB] / in no VG: 0 [0]
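The free-space figures in such a listing can also be evaluated mechanically; in the sketch below, a captured sample stands in for the live pvscan output:

```shell
set -e
cd "$(mktemp -d)"
# A captured pvscan listing (sample data):
cat > pvscan.sample <<'EOF'
pvscan -- ACTIVE PV "/dev/sda5" of VG "system" [1.46 GB / 0 free]
pvscan -- ACTIVE PV "/dev/sda6" of VG "system" [1.46 GB / 0 free]
pvscan -- ACTIVE PV "/dev/sda7" of VG "system" [1.46 GB / 668 MB free]
pvscan -- ACTIVE PV "/dev/sda8" of VG "system" [996 MB / 996 MB free]
EOF
# The part after the last "/" is the free space per PV; add it up,
# converting GB to MB:
free=$(awk -F'/' '{ split($NF, f, " ");
                    v = f[1]; if (f[2] == "GB") v *= 1024;
                    total += v }
                  END { print total }' pvscan.sample)
echo "free in VG system: $free MB"
```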
The command for emptying the physical volume is pvmove. LVM will decide where to
move the contents of the extents. Refer to the manual pages if you want more control over
the procedure.
earth:~ # pvmove /dev/sda5
pvmove -- moving physical extents in active volume group "system"
pvmove -- WARNING: if you lose power during the move you may need
to restore your LVM metadata from backup!
pvmove -- do you want to continue? [y/n] y
pvmove -- doing automatic backup of volume group "system"
pvmove -- 375 extents of physical volume "/dev/sda5" successfully moved
The emptied physical volume can now be “checked out” from the volume group with the
command vgreduce:
earth:~ # vgreduce system /dev/sda5
vgreduce -- doing automatic backup of volume group "system"
vgreduce -- volume group "system" successfully reduced by physical volume:
vgreduce -- /dev/sda5
The result:
earth:~ # pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- inactive PV "/dev/sda5" is in no VG [1.47 GB]
pvscan -- ACTIVE PV "/dev/sda6" of VG "system" [1.46 GB / 0 free]
pvscan -- ACTIVE PV "/dev/sda7" of VG "system" [1.46 GB / 0 free]
pvscan -- ACTIVE PV "/dev/sda8" of VG "system" [996 MB / 164 MB free]
pvscan -- total: 4 [5.39 GB] / in use: 3 [3.92 GB] / in no VG: 1 [1.47 GB]
lvreduce
Individual logical volumes can be reduced with the command lvreduce. However, two
additional steps are necessary: the logical volume must be unmounted with the command
umount and the file system must be shrunk. Depending on which part of the directory tree
the logical volume contains, it may be helpful to change to the single-user mode (init
1), as this ensures that the file system is not accessed in an uncontrolled way, which would
prevent the directory branch from being unmounted. The needed commands are umount,
resize_reiserfs or resize2fs, and lvreduce.
earth:~ # umount /dev/system/opt
earth:~ # resize_reiserfs -s -30M /dev/system/opt
resize_reiserfs 3.6.2 (2002)
You are running BETA version of reiserfs shrinker.
This version is only for testing or VERY CAREFUL use.
Backup of your data is recommended.
ReiserFS report:
blocksize 4096
block count 95232 (102912)
free blocks 4040 (11719)
bitmap block count 3 (4)
Syncing..done
Make sure that, after the reduction, the logical volume is still large enough to hold the entire (previously shrunk) file system. Otherwise, parts of the file system will be truncated. This could result in a loss of data, or it might be impossible to mount the file system.
For ext2 file systems, the program e2fsadm can be used to perform the resizing of the file system and the adaptation of the logical volume in the correct sequence and with the correct parameters.
Exercise
1. Reduce one of the logical volumes created in the previous exercise.
2. Move the logical extents from one physical extent to another and remove
the physical volume from the volume group.
6.4 Snapshots
LVM provides the possibility of freezing the state of a logical volume at a specific time
by means of a snapshot logical volume especially created for this purpose, for example, to
enable consistent backups. At the same time, the original logical volume can continue to
be used (files can be created, deleted, or modified).
To do this, a new logical volume must be created for the snapshot. The size of the volume depends on the speed with which the data changes in the volume to back up. When the snapshot is created, it initially only references the data of the original volume. As soon as data in the original volume is modified, the original blocks are first copied to the snapshot volume (copy-on-write). The more data is changed, the more the snapshot volume fills with the data as it was at the time of the snapshot.
Thus, a backup representing the state of the data at the time of the snapshot can be made
very easily:
earth:~ # lvcreate -L 200M -s -n var_snapshot /dev/system/var
lvcreate -- WARNING: the snapshot will be automatically disabled \
once it gets full
lvcreate -- INFO: using default snapshot chunk size of 64 KB for \
"/dev/system/var_snapshot"
lvcreate -- doing automatic backup of "system"
lvcreate -- logical volume "/dev/system/var_snapshot" successfully created
When the backup is ready and the snapshot is no longer needed, remove it in two steps:
earth:~ # umount /mnt
earth:~ # lvremove /dev/system/var_snapshot
lvremove -- do you really want to remove "/dev/system/var_snapshot"? [y/n]: y
lvremove -- doing automatic backup of volume group "system"
lvremove -- logical volume "/dev/system/var_snapshot" successfully removed
Exercise
Create a snapshot of a logical volume, mount it in the file system, then remove
it.
Summary
• The Logical Volume Manager facilitates the administration of storage space by liberating the system administrator from the limitations of rigid partitions.
• The configuration and administration can be done partly with YaST2 or completely
from the command line.
Command          Meaning
pvcreate         Preparation of a partition or hard disk for inclusion in a volume group.
vgcreate         Creation of a new volume group.
vgextend         Extension of a volume group by additional physical volumes.
lvcreate         Creation of a logical volume in a volume group.
lvextend         Extension of an existing logical volume.
pvscan           Display of the current state of a physical volume.
pvmove           Relocation of physical extents to other physical volumes.
vgreduce         Removal of a physical volume from a volume group.
lvreduce         Reduction of a logical volume.
lvremove         Removal of a logical volume from a volume group.
mount/umount     Mounting/unmounting of partitions or logical volumes.
resize_reiserfs  Resizing of Reiser file systems.
resize2fs        Resizing of ext2 or ext3 file systems.
e2fsadm          Resizing of ext2 file systems, adaptation of logical volumes.
Learning Aims
[Figure: two partitions in one directory tree — /dev/hda2 holds the root partition (/) with the files a, b, and c; /dev/hda3 is mounted within the tree and holds the files d, e, and f]
The file systems currently mounted in a Linux system can be queried with the command
mount or by taking a look at the file /proc/mounts:
tux@earth:~ > mount
/dev/hda2 on / type ext2 (rw)
proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/hda1 on /opt type reiserfs (rw)
/dev/hda5 on /tmp type reiserfs (rw)
/dev/hda6 on /usr type reiserfs (rw)
/dev/hda7 on /var type reiserfs (rw)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
/dev/hda8 on /home type ext3 (rw,noatime)
shmfs on /dev/shm type shm (rw)
The general syntax for mounting file systems with the command mount is:
mount [-t file_system_type] [-o mount_options] file_system mount_point
In the following example, the partition /dev/hda9 is mounted on the directory /space.
The file system type does not have to be specified, as it is usually recognized automatically:
earth:~ # mount /dev/hda9 /space
Exercise
Check the mounted partitions in your system. Compare the output of the com-
mand mount with the content of the file /proc/mounts.
The file systems and their mount points in the directory tree are specified in the file
/etc/fstab. This file contains one line comprising six fields for each mounted file
system.
Example:
/dev/hda2 / ext2 defaults 1 1
/dev/hda3 /opt reiserfs defaults 1 2
/dev/hda1 swap swap pri=42 0 0
/dev/hda5 /tmp reiserfs defaults 1 2
/dev/cdrom /media/cdrom auto ro,noauto,user,exec 0 0
Field 1 The device file of the file system to mount (e.g., /dev/hda2) or another identifier, such as a file system label.
Field 2 The mount point — the directory to which the file system should be mounted. The directory specified here must already exist.
Field 3 The file system type (e.g., ext2 or reiserfs; auto lets the type be detected automatically).
Field 4 Mount options. Multiple mount options are separated by commas (e.g.,
defaults, noauto, ro). For example, the option user means that normal users
(e.g., tux) are entitled to mount the device file in the Linux system. This option
is usually used for the CD-ROM drive (/dev/cdrom) and the floppy disk drive
(/dev/fd0).
Field 5 Determines whether the backup utility dump should be used for the file system. 0 means no backup.
Field 6 Determines the sequence of the file system checks (with the fsck utility) when the system is booted. The root file system should receive the value 1, all other file systems the value 2; a value of 0 disables the check.
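The six fields of such an entry can be picked apart with standard shell tools. The following sketch splits one line (taken from the fstab example above) into its fields by means of the shell's word splitting:

```shell
# One fstab entry as a sample line:
line="/dev/hda5 /tmp reiserfs defaults 1 2"

# Let the shell split the line into its six fields:
set -- $line

# Show which field carries which meaning:
echo "device=$1 mountpoint=$2 type=$3 options=$4 dump=$5 fsck=$6"
# prints: device=/dev/hda5 mountpoint=/tmp type=reiserfs options=defaults dump=1 fsck=2
```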
A number of options can be used when mounting file systems. These options can be entered
in the file /etc/fstab (Field 4) or specified with -o when using the mount command.
In the following example, the partition is mounted with the option ro (for read-only):
earth:~ # mount -o ro /dev/hda8 /usr/local
There are file system–specific and file system–independent options. This section covers only the file system–independent options. The file system–specific options are covered in the sections dealing with the individual file systems.
remount The option remount causes file systems that are already mounted to be
mounted anew.
Example (remounting the partition /usr with the additional option ro):
earth:~ # mount -o remount,ro /usr
rw, ro These options indicate whether a file system should be writable (rw) or only read-
able (ro).
Exercise
1. Mount your partition /usr as read-only using the options remount
and ro.
2. Check the result by entering the command mount without any further
options.
3. Remount your partition /usr as writable.
sync, async Synchronous (sync) or asynchronous (async) input and output in a file
system. The default setting is async.
atime, noatime This option determines whether the access time of a file is updated in the inode (atime) or not (noatime). The option noatime should improve the performance.
Exercise
1. Execute the following command several times and make a note of the
time that is needed:
time find /usr -name "*.so" &>/dev/null
The command time delivers the time needed by the command find.
2. Now mount the partition /usr with the option noatime and repeat the
first exercise. You should be able to see improved performance when the
command is executed.
Some options only make sense in the file /etc/fstab. This includes the options user,
nouser, auto, and noauto.
auto, noauto File systems marked with the option noauto in the file /etc/fstab
are not mounted automatically when the system is booted. These are usually floppy
disk drives or CD-ROM drives.
user, nouser The option user allows users to mount the respective file system. Nor-
mally, this is a privilege of the user root.
defaults This option causes the default options to be used: rw, suid, dev, exec,
auto, nouser, and async.
The two options noauto and user are usually combined for exchangeable media, such
as floppy disk or CD-ROM drives.
Security Options
Some mount options, such as the following, serve system security purposes:
nodev, dev nodev prevents device files from being interpreted as such in the file sys-
tem.
noexec, exec The execution of programs on a file system can be prohibited with the
option noexec.
nosuid, suid If nosuid is set, suid and sgid bits in the file system will be ignored.
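For example, a file system that should only hold data could be mounted with all three security options at once (the device and mount point are illustrative):

```
earth:~ # mount -o nodev,noexec,nosuid /dev/hda9 /space
```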
Previously, local file systems were addressed by means of device file names (e.g.,
/dev/hda1 or /dev/sda3). However, this designation is not always unique. In sys-
tems with removable disks or SCSI systems in which disks can be added and removed
during operation, file systems cannot be identified with certainty by means of device files.
For this reason, file systems can be identified with various mechanisms:
• using the file system label (e.g., with the command e2label)
• using the UUID of the file system (see Section 7.1.5)
File system labels are names for file systems that are assigned when setting up the file
system. Names or labels can be changed later. The maximum length of file system labels
is limited to sixteen characters.
In the file systems ext2 and ext3, labels can be set in various ways:
The current label of an ext2 or ext3 file system can be displayed with the command
e2label:
earth:~ # e2label /dev/sda1
newlabel
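The ways to set a label include the option -L when creating the file system and the command e2label afterwards; the device and label names are examples:

```
# assign a label when creating the file system:
earth:~ # mke2fs -L newlabel /dev/sda1
# or change the label of an existing file system:
earth:~ # e2label /dev/sda1 newlabel
```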
In the Reiser file system, labels can likewise be set when the file system is created:
earth:~ # mkreiserfs -l testlabel /dev/sda1
The label can also be changed later, provided the file system is not mounted:
earth:~ # reiserfstune -l newlabel /dev/sda1
Instead of the name of the device file, the file /etc/fstab can also have an entry in the
form LABEL=labelname:
Manual mounting using the file system label is possible with the option -L:
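Both variants might look as follows, assuming a hypothetical file system labeled data that is mounted on /data:

```
# entry in /etc/fstab:
LABEL=data /data ext2 defaults 1 2

# manual mounting by label:
earth:~ # mount -L data /data
```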
Exercise
1. Set a label for a mounted ext2 file system.
2. Modify the entry for this file system in the file /etc/fstab in such
a way that the file system is addressed by its label instead of its device
name.
7.1.5 UUID
In ext2, ext3, and Reiser file systems, a UUID (Universally Unique Identifier) is automati-
cally assigned to the respective file system. This UUID is unique for each file system (see
man uuidgen). Depending on which file system is used, different commands are needed
to find the UUID of existing file systems:
• ext2: The command tune2fs -l displays the parameters of an ext2 file system:
earth:~ # tune2fs -l /dev/hda2 | grep UUID
Filesystem UUID: 681352db-6533-4552-b37c-4a88b60b0b61
In the file /etc/fstab, a file system can be mounted via UUID with UUID=number.
The command mount provides the option -U for manually mounting a file system using
the UUID.
earth:~ # mount -U 754aaff2-b567-405c-abd2-3af19e472104 /mnt
Exercise
1. What are the UUIDs of your system’s file systems?
2. Enter a file system with its UUID in the file /etc/fstab. Check the
new entry for correctness, for example, by rebooting the computer.
The Second Extended File System, ext2 for short, is the classic among Linux file systems.
The file system structure resembles that of other Unix file systems. The file system is
arranged in groups with an identical structure, shown in Figure 7.2. Each group consists of
a number of blocks containing administrative information and the actual data blocks.
[Figure 7.2: structure of a block group — administrative blocks (superblock, group descriptor, bitmaps, inode table) followed by the data blocks]
Superblock: Contains meta information about the file system, such as the file system
label, status, UUID, block size, etc. All information in the superblock can be viewed
with the command dumpe2fs:
earth:~ # dumpe2fs -h /dev/hda2
dumpe2fs 1.28 (31-Aug-2002)
Filesystem volume name: root
Last mounted on: <not available>
Filesystem UUID: 681352db-6533-4552-b37c-4a88b60b0b61
...
Group descriptor: Information about the location of the other administrative blocks
(bitmaps and inode table).
Data blocks: The actual content of a file is stored here. A data block has a predefined size
of 1024, 2048, or 4096 bytes per block. A file occupies at least one entire block.
Every file is associated with an inode containing information on the file. The number of
inodes is predefined when the file system is set up and cannot be modified later. If too few
inodes are reserved, it might be impossible to create files even though there are still free
blocks.
The inode of a file contains the following information:
File type: Indicates whether the file is a normal file, a directory, a symbolic link, a device
file, or another type.
File permissions: The file access permissions in octal form (e.g., 0644).
Number of hard links: A link counter shows the number of hard links.
Time stamp: Three times are stored: ctime (change time), atime (access time), and mtime (modification time). atime is updated on every read access to the file, mtime when the content of the file is modified, and ctime when the inode itself (e.g., permissions or owner) is changed.
Data block addresses: Indicates the data blocks belonging to the file.
[Figure: the data block addresses in the inode point to the file's data blocks]
Exercise
The setup of an ext2 file system in a partition is relatively easy. For example, to set up
an ext2 file system in the partition /dev/hda2, simply execute one of the following
commands:
earth:~ # mkfs -t ext2 /dev/hda2
or:
earth:~ # mke2fs /dev/hda2
Caution: All data in the respective partition is lost when a new file system is set up.
In both examples, no additional options were used, so the file system was set up with
default values. Of course, all modifiable values can be influenced with options:
Option                 Meaning
-b block_size          Specification of the block size in bytes. Possible values: 1024, 2048, and 4096.
-c                     Before the file system is set up, the partition is screened for bad blocks (also see the command badblocks).
-i bytes_per_inode     Determines the number of inodes. A value of 4096 means that one inode is reserved for every 4096 bytes.
-L label               Specification of the file system label.
-m %_reserved_blocks   Indicates what percent of the available blocks should be reserved for the user root (default: 5%).
Exercise
1. Set up an ext2 file system with a block size of 2048 bytes on a test par-
tition (according to the specifications of your trainer). Select a suitable
value for the number of inodes. Discuss the value with the class.
2. Mount the new partition permanently in the system (/etc/fstab).
The mount point is the directory /ext2test.
Setting up the file system usually does not mean that everything is done. File system errors or other situations repeatedly necessitate the maintenance or repair of ext2 file systems. A number of commands are available for this purpose.
Following a power failure or a system crash, a file system may contain errors. The file
system is checked automatically when the system is booted. However, this automatic file
system check is not always successful. In this case, the administrator must perform a
manual file system check.
The file system check is performed with the command fsck or fsck.ext2 and should
only be carried out when the file system is not mounted.
Various parameters of an ext2 file system can be changed. The command for changing
parameters is tune2fs. The following values can be modified:
Time interval between file system checks: A file system check is forced after a certain time interval. This interval can be changed with -i. Example (check at the latest every 100 days; the device name is illustrative):
earth:~ # tune2fs -i 100d /dev/hda2
File system label: The file system label can be changed with -L.
Reserved blocks: The option -r changes the number of blocks reserved for root.
The command debugfs can be used to analyze and modify blocks and inodes in a detailed
way. Therefore, this tool should be used with utmost caution.
Exercise
3. In the first part of the exercise, you simulated a system crash with data
inconsistency (a block was marked as free even though it is still used).
Try to repair the file system with fsck.
One of the disadvantages of the ext2 file system is the time-consuming file system check
after an event such as a system crash, as the entire file system is checked. For very large
file systems, this can take thirty minutes or longer.
In contrast, modern journaling file systems use the journal, a special area in the file system,
to log information about the actions in the file system. The advantage of this approach is
that after a system crash, only the data areas that were actually used need to be checked.
Thus, the check of a journaling file system takes only a few seconds.
Some journaling file systems only store the meta data in the journal. Although this guaran-
tees the integrity of the file system, it does not guarantee the consistency of the actual data.
Journaling file systems that log both the meta data and the actual data in the journal also guarantee data integrity.
The successor of ext2 with journaling functionality is ext3. ext3 provides the following
advantages:
• ext3 is an ext2 file system with journaling. Therefore, an existing ext2 file system
can easily be converted to ext3.
ext3 file systems are set up almost in the same way as ext2 file systems (see Section 7.2.2
on page 74). Additionally, the option -j (for journal) must be set.
Apart from this, you can use the same options as when setting up an ext2 file system.
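For example, an ext3 file system could be set up on a test partition (the device name /dev/hda9 is an assumption):

```
earth:~ # mke2fs -j /dev/hda9
```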
One advantage of ext3 is the easy conversion of an existing ext2 file system to ext3. The
actual journal can be created on an ext2 file system during operation. However, the new
feature will only be applied after the file system is remounted. On an ext2 file system, the
journal can be created with the command tune2fs -j:
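For example (again assuming the test partition /dev/hda9):

```
earth:~ # tune2fs -j /dev/hda9
```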
Do not forget to change the file system type entry in the file /etc/fstab from ext2 to
ext3.
There are some file system–specific mount options for ext3. The following three options
affect the journaling of data (not of meta data):
data=journal: The data is transferred to the journal before it is actually written to the
file system. This increases the data security, but may inhibit the speed of the file
system.
[Figure: data=journal — both the file's meta data and its data are written to the journal before reaching the file system]
data=ordered: Only meta data is written to the journal. The actual data is written to the file system before the meta data. This is the default setting for the ext3 file system.
[Figure: data=ordered — only the meta data passes through the journal; the file's data is written to the file system first]
data=writeback: This is the fastest and least secure of all three variants. Only meta
data is written to the journal, but the meta data is written to the file system regardless
of whether the data has already been written.
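Like any other mount option, the journaling mode can be given with -o or in field 4 of /etc/fstab; the device and mount point are illustrative:

```
earth:~ # mount -o data=writeback /dev/hda9 /space
```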
Exercise
1. Convert the ext2 file system set up in Section 7.2.2 on page 75 to ext3.
2. Update the entry in the file /etc/fstab.
The Reiser file system (ReiserFS) was the first journaling file system available for Linux.
The ReiserFS developer Hans Reiser used an approach that is completely different from
ext2 and ext3.
The file system is organized in balanced trees (b-trees) which allow quicker access to the
files, especially in large directories. Moreover, ReiserFS does not always use an entire
block for a file. It tries to use the available space as efficiently as possible. However, this
efficient storage management also takes its toll on the speed. If you prefer more speed, use
the mount option notail.
Presently, the Reiser file system only provides meta data journaling. The advantages of
ReiserFS are as follows:
• easy resizing of the file system during operation (important for Logical Volume Man-
ager)
The command mkreiserfs sets up a Reiser file system. In contrast to ext2, a security
query must be confirmed before the file system is actually set up.
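For example (assuming the test partition /dev/hda9; the security query mentioned above must then be confirmed with y):

```
earth:~ # mkreiserfs /dev/hda9
```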
Exercise
Although journaling file systems rarely require a file system check, you should be familiar
with the command for checking the Reiser file system. This command is also needed
to shrink the file system. Before you shrink the file system, be sure to perform a file
system check. The file system check should only be performed on file systems that are not
mounted.
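Assuming the Reiser file system resides on the test partition /dev/hda9, unmount it first, then run the check:

```
earth:~ # umount /dev/hda9
earth:~ # reiserfsck /dev/hda9
```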
Exercise
Check the Reiser file system you set up on the test partition.
The file systems XFS from SGI and JFS from IBM have been placed under the GNU
General Public License and are also available for Linux. These file systems can be created
with the commands mkfs.jfs (for JFS) and mkfs.xfs (for XFS).
Many users consider the manual mounting (mount) and unmounting (umount) of CD-
ROM drives cumbersome. This is where the kernel-based automounter autofs comes
into play. The automounter automatically mounts file systems when needed and unmounts
them automatically from the directory tree when they are not used.
To use the kernel-based automounter, the programs in the package autofs are required.
This package should be installed. Use the following command to check if the package is
installed:
earth:~ # rpm -qi autofs
For the automounter service to be started automatically, it must be activated in the run-
levels:
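On SuSE Linux this can be sketched as follows (insserv activates the service for its default runlevels; the rcautofs link starts it immediately):

```
earth:~ # insserv autofs
earth:~ # rcautofs start
```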
Exercise
To automount file systems on floppy disks or CD-ROMs, only two configuration files need
to be modified:
In the file /etc/auto.master, remove the comment mark (#) introducing the line for the directory /misc:
# /misc /etc/auto.misc
The first column specifies the directory, while the second column specifies the corresponding configuration file (/etc/auto.misc).
The following example demonstrates how a CD-ROM drive is automounted, allowing the
CD-ROM to be mounted on the directory /misc/cdrom when necessary and unmounted
automatically after a certain time.
Step 1: Generate an entry for the directory /misc in the file /etc/auto.master:
/misc /etc/auto.misc --timeout 10
The option timeout specifies after how many seconds the mount point is released
when the CD-ROM drive is no longer accessed (umount).
Step 2: Configure a mount point for the CD-ROM drive in the directory-specific configu-
ration file /etc/auto.misc:
cdrom -fstype=auto,ro :/dev/cdrom
The first column contains the name of the mount point — the respective subdirectory
of /misc.
The second column contains the mount options: -fstype=auto indicates that the file system type is recognized automatically. ro stands for read-only — only read access is permitted.
The third column specifies the location of the file system. For local devices, the device name must be preceded by a colon.
The mount points themselves do not need to be created manually. They are dynamically generated and deleted by the automounter.
Step 5: Now test the automounter by inserting any data CD-ROM in the drive. A glance
at the directory /misc/cdrom should reveal the content of the CD-ROM:
earth:~ # ls /misc/cdrom
Exercise
1. Configure the automounter for the floppy disk drive and the CD-ROM
drive with a time-out of ten seconds.
2. Check the automounting function of the floppy disk drive and the CD-
ROM drive. Is the mount point removed?
The Linux automounter is also able to automount network directories, such as Samba
shares or exported NFS directories. For this purpose, the configuration files should have
the following entries:
/etc/auto.master
/etc/auto.net
When accessing the directory /net/earth, the directory /export of the server
earth.example.com is automounted.
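A minimal configuration matching this example could look as follows (the time-out value is an assumption):

```
# /etc/auto.master:
/net /etc/auto.net --timeout 60

# /etc/auto.net:
earth -fstype=nfs,ro earth.example.com:/export
```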
Exercise
1. The trainer configures an NFS export on the trainer host. Create an au-
tomounter configuration file for this directory.
2. Check if the automounter configuration is correct.
Linux provides the possibility of swapping part of the memory to the hard disk. This
“virtual” memory is located on one or several swap partitions. Multiple (up to 32) swap
partitions can be used to distribute the load to several hard disks. In the past, the rec-
ommended size of the swap partition was twice the size of the RAM. Today, this rule is
obsolete. Usually, 200 to 500 MB of swap space should be sufficient. If a large portion of
this space is used, consider increasing the RAM.
A number of commands provide information about whether and how the swap partition is
used:
free The command free prints the load status of the memory and the swap partition.
earth:~ # free
total used free shared buffers cached
Mem: 126360 117448 8912 0 5684 33476
-/+ buffers/cache: 78288 48072
Swap: 136040 41884 94156
The file /proc/swaps lists all available swap partitions, their size, priority, and load
status.
earth:~ # cat /proc/swaps
Filename Type Size Used Priority
/dev/hda1 partition 136040 41884 42
top The command top can be used to track the load of the RAM and the swap space
over an extended period.
Normally the swap partition is created automatically by the installation program YaST2
during the installation. To create a swap partition manually, proceed as follows:
Step 1: Create a partition with the ID 82 (Linux swap), using a program such as
fdisk, parted, or cfdisk.
• The partition can also be entered as swap space in the file /etc/fstab:
/dev/sda1 swap swap pri=42 0 0
The priority (pri=42) only plays a role if multiple swap partitions are used.
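After the partition has been created, it can be prepared and activated from the command line (the device name follows the fstab example above):

```
earth:~ # mkswap /dev/sda1
earth:~ # swapon -p 42 /dev/sda1
earth:~ # cat /proc/swaps
```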
Apart from entire swap partitions, you can also create swap files. However, the perfor-
mance of swap files is not as good as that of pure swap partitions.
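A swap file can be set up with standard tools; the size (128 MB) and the path are illustrative:

```
earth:~ # dd if=/dev/zero of=/var/swapfile bs=1M count=128
earth:~ # chmod 600 /var/swapfile
earth:~ # mkswap /var/swapfile
earth:~ # swapon /var/swapfile
```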
• Manual pages:
– man 5 fstab
– man 8 mount
• Internet:
• Manual pages:
– man 8 dumpe2fs
– man 8 e2label
– man 8 mke2fs
– man 8 tune2fs
• Internet:
– http://web.mit.edu/tytso/www/linux/ext2intro.html
• Manual pages:
– man 8 mkreiserfs
– man 8 reiserfsck
– man 8 reiserfstune
• Internet:
– http://www.namesys.com/
• Manual pages:
– man 8 mkfs.jfs
– man 8 mkfs.xfs
• Internet:
– JFS: http://www-124.ibm.com/developer/opensource/jfs/
– XFS: http://oss.sgi.com/projects/xfs/
7.8.5 Automounter
• Manual pages:
– man 5 autofs
– man 8 autofs
– man 8 automount
7.8.6 Swap
• Manual pages:
– man 8 mkswap
– man 8 swapon
– man 8 swapoff