ADMIN MAGAZINE
NEW MAGAZINE!

ADMIN – Network & Security
Smart tools for better networks

ANALYZE AND REPAIR PC SYSTEMS
Four Live systems on one disc
✚ SystemRescueCd ✚ Parted Magic ✚ Clonezilla Live ✚ Redo Backup

5 Fantastic Free Backup Tools

System Management: Spacewalk
Icinga
MySQL Forks
Teamviewer – A perfect tool for every task

ADMIN Magazine Issue 01 £7.99
WWW.ADMIN-MAGAZINE.COM
Welcome to Admin

Management, monitoring, database forks, and other pertinent glimpses at innovations across the IT landscape:

8 Spacewalk
Red Hat released the source code for the Satellite Server management tool in 2008. Spacewalk is a new free version.

14 Icinga
This is not your father's Nagios fork.

20 MySQL Forks
We investigate some invaluable variations on the MySQL theme.

86 ModSecurity
Careful admins know new exploits appear every day. Keep the intruders off your pages with this web application firewall for Apache web servers.

Save time and simplify your workday with these useful tools for real-world networks:

42 Synergy
Too many monitors on your desk? This handy tool lets you control your servers from a single desktop.

Still finding your way through the cloud? Keep on course with these cool tools for virtual environments:

52 OpenVZ
Container-based virtualization tools like OpenVZ are sometimes more efficient than hypervisor systems.

Toolbox
Use these practical apps to extend, simplify, and automate routine admin tasks:

64 Teamviewer
This popular remote control tool isn't just for Mac and Windows anymore.

68 Chef
This snappy configuration manager lets you roll out Linux systems with a couple of mouse clicks.

74 Sysinternals
Get the pulse of your Windows network with this convenient collection of management tools.

Timely tutorials on fundamental techniques for system administrators:

78 PAM
The powerful Pluggable Authentication System offers centralized authentication for Unix and Linux systems.

86 ModSecurity
How safe is your web server? This powerful Apache extension will help keep intruders from getting control.

92 Monitoring Daemons
Why write a custom script? A few simple shell commands might be all you need to monitor system daemons.

Rescue CD Toolbox (see p. 6 for full details)
✚ Four fabulous Live Linux systems
✚ SystemRescueCd – A handy collection of advanced rescue utilities
✚ Parted Magic – Partition your hard disk
✚ Clonezilla Live – Clone and restore Linux systems
✚ Redo Backup – World‘s easiest backup distro

Service
3 Welcome
4 Table of Contents
6 On the CD
98 Call for Papers
Rescue CD Toolbox

On the CD

The CD included with this issue lets you boot to four special-purpose Live Linux systems:

■ SystemRescueCd – This versatile rescue distro includes a copious collection of networking and troubleshooting tools, including utilities for accessing and repairing Windows systems.
■ Parted Magic – A little Linux that specializes in formatting, resizing, and recovering partitions.
■ Clonezilla Live – Clone systems for fast and easy backup and restore. The bare-metal backup and recovery approach will bring back your system in a fraction of the time.
■ Redo Backup – Another bare-metal backup tool with an emphasis on simplicity.

Place the CD in the drive and reboot your system. (Make sure your computer is configured to boot to the CD drive.) Choose a distro in the handy boot menu. See the box titled "Resources" for links to more information on the tools in our Rescue CD Toolbox. ■

DEFECTIVE CD?
Defective CDs will be replaced. Please send an email to subs@admin-magazine.com.
Resources
[1] SystemRescueCd: [http://www.sysresccd.org/]
[2] Parted Magic: [http://partedmagic.com/]
[3] Clonezilla: [http://www.clonezilla.org/]
[4] Redo Backup: [http://redobackup.org/]
Features: Spacewalk

Moon Landing

As your network grows, managing Linux systems manually becomes time-consuming and impractical. Enter Spacewalk: an open source tool that takes the footwork out of network management.

By Thorsten Scherf

© patrimonio designs, Fotolia.com
Spacewalk [1] is the open source derivative of the popular Red Hat Network Satellite Server. Red Hat published the source code for the server in the summer of 2008, and the community has now released version 1.0. The application's core tasks include RPM package software provisioning, managing configuration files, and kickstart trees, thus supporting the installation of bare-metal systems.

The approach that Spacewalk uses is quite simple: Before a system can access Spacewalk's resources, it first has to register with the server. Registration can be based either on a username/password combination or an activation key that is pregenerated by the Spacewalk server. After registration, the system appears in the server's web GUI. If the server has more resources, you can assign them to the system at this point. Resources include software packages or configuration files that are normally organized in channels. A system always has exactly one base channel with optional subchannels. The base channel contains the RPM-based operating system, such as Red Hat Enterprise Linux, Fedora, or CentOS. The subchannels contain additional software packages that are independent of the operating system, such as the Red Hat Cluster Suite or the 389 Directory Server.

Spacewalk can clone existing channels and create new channels from scratch. This feature gives you full control of the software stack that you provide via Spacewalk. Configuration channels help you distribute the configuration files for the software packages. Spacewalk also keeps older versions of the files to let you roll back to a previous version at any time if the need arises.

The software packages or configuration files can be installed either via the target system or centrally in the Spacewalk web front end. To avoid spending too much time on the installation of a large number of systems, you can assign systems to logical groups and apply the installation of a resource to a group. For example, it might make sense to assign all your web servers to a WWW-Server group in Spacewalk. When a new version of the web server software is released, you would simply tell Spacewalk to apply the update to the group, automatically updating all the systems belonging to the group.

The installation uses polling by default; in other words, the client systems query the server at a predefined interval (which defaults to four hours) to see if new actions have been defined since the last poll. If so, Spacewalk then runs these actions. As an alternative, you can trigger the installation of software packages and other actions using a push approach. The client system and the Spacewalk server talk to each other constantly using the Jabber protocol. Any new actions you define are immediately run on the client by Spacewalk when you issue the command centrally on the Spacewalk server.

Installing new systems is also quite simple. Spacewalk has the installation files you need in the form of kickstart trees. The installation candidate uses a boot medium such as a CD, a USB stick, or a PXE-capable network card to contact the server. The First-Stage Installer, which is part of the installation medium, defines which server will handle the installation. The remaining installation steps are handled by the Second-Stage Installer, located on the Spacewalk server and transferred to the client system when the installation starts. If you want to automate the installation fully, define the kickstart file location in the boot medium. The kickstart file is a kind of answer file that describes the properties of the installation candidate, such as partitioning, software, language, and firewall settings. Of course, you can create a kickstart file on the Spacewalk server and just include a link to the file on the boot medium.

Spacewalk can manage any RPM-based distribution. You even have the option of operating client systems across multiple organizations. Using the web interface, the administrator creates various organizations and assigns a certain number of system entitlements to them. Entitlements are linked to certificates that Spacewalk automatically generates during the installation. You can then add users to the organizations.

The Spacewalk server itself runs on an RPM-based system such as Fedora [4] or CentOS [3] Linux. Note that Spacewalk does need a current Java Runtime Version 1.6.0 or newer. You can use the Open JDK for this; Fedora includes it out of the box. Admins on RHEL or CentOS can retrieve the package via the additional EPEL (Extra Packages for Enterprise Linux) software repository.

Besides the Java package, an Oracle 10g database is also required for installing Spacewalk. Oracle XE provides a free version of the database. The developers are currently working hard on implementing support for an open source database after identifying PostgreSQL as the best alternative to Oracle. As of this writing it is hard to say when official support for PostgreSQL will be available, but it makes sense to check the roadmap [5] or the mailing lists [6] at regular intervals.

Oracle XE

After installing the repository RPM for your distribution, the first step is to install Oracle Express, which you can download for free [7]. You will need version 10.2.0.1. Besides the database, you also need the oracle-instantclient-basic and oracle-instantclient-sqlplus packages, which you can then install with Yum:

yum localinstall --nogpgcheck \
  oracle-xe-univ*.rpm \
  oracle-instantclient-basic*.rpm
After installing the packages, run /etc/init.d/oracle-xe configure to set up the database; make a note of the values you enter because you will need them with the Oracle Listener configuration later on. Use the following parameters for the configuration:

  HTTP port for Oracle Application Express: 9055
  Database listener port: 1521
  Password for SYS/SYSTEM: Password
  Start at boot: y

The default HTTP port for the Oracle Express application (8080) is already occupied by the Tomcat application server, so you need to choose an alternative port to avoid conflicts.

To talk to the database, you need to configure the listener in the /etc/tnsnames.ora file (Listing 1). Now you just need to make a few changes to the database. To do this, log in to the database with sqlplus and create a spacewalk user, to which you could assign a password of spacewalk (Listing 2). The standard configuration of Oracle Express supports a maximum of 40 simultaneous connections, which is not enough for Spacewalk operations. The instructions in Listing 3 change the limit to a maximum of 400 connections. Now you need to restart the database by giving the /sbin/service oracle-xe command.

Spacewalk Setup

The next step is to install the Spacewalk server. To do so, you need to include the Spacewalk repository as described previously. You should have a spacewalk.repo file that points to the appropriate repository in /etc/yum.repos.d/. The following command starts the installation:

  yum install spacewalk-oracle

Because this package depends on all the other Spacewalk packages, the package manager will automatically download and install the dependencies in the next step. Then you can configure the application interactively with the setup tool or with the use of an answer file (Listing 4). Pass the file in to the setup tool as follows:

  spacewalk-setup --disconnected \
    --answer-file=answerfile

The configuration can take some time to complete as the process sets up the database tables. The setup tool then launches all the required services. You can manually restart using the /usr/sbin/rhn-satellite tool. To configure the system, launch the Spacewalk web interface via its URL (http://spacewalk.server.tld). Besides contact information, you can also set the password for the Spacewalk administrator here.

Software Channels

The next step is to set up an initial software channel for the client systems. When you register a client, you must specify exactly one base channel for the client; it will use this channel to retrieve its operating system packages and their updates. Of course, you can set up subchannels for the base channel and assign the subchannels to clients as needed. After doing so, you can use the subchannels to distribute more RPM packages to the systems. The packages can be your own creations or RPMs from other repositories.

The easiest approach to setting up a software channel is to use the web interface (Channels | Manage Software Channels | Create; Figure 1). Thanks to the Spacewalk API, you can also script this process [8]. Call the script as follows:

  create_channel.py --label=fedora-12-i386 \
    --name "Fedora 12 32-bit" \
    --summary "32-bit Fedora 12 channel"

In the script, you need to provide the Fully Qualified Domain Name (FQDN) for the Spacewalk server and the user account for creating the channels, such as the Spacewalk administrator account created previously. The Users tab also gives you the option of creating more users with specific privileges (Figure 2).

The channel you set up should now be visible in the Channels tab of the web interface but will not contain any software packages. Although you can upload software packages to the server in several ways, the method you choose will depend on whether the packages are available locally (e.g., DVD) or you want to synchronize a remote Yum repository with the Spacewalk server. If you choose the local upload, you can use the
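Listing 1 is not reproduced legibly in this copy; for reference, a minimal /etc/tnsnames.ora entry for a local XE instance on the listener port chosen above typically looks like the following sketch (treat the host and service values as placeholders for your setup):

```text
XE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = XE)
    )
  )
```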
Listing 3 (fragment):

     ...=false scope=spfile;
SQL> alter system set "_optimizer_cost_based_transformation"=off scope=spfile;
SQL> quit

Figure 1: The easiest approach to setting up a software channel is to use the web graphical interface.

rhnreg_ks --serverUrl=http://spacewalk.server.tld/XMLRPC \
  --activationkey=key
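Listings 2 and 3 survive only as fragments in this copy. The steps they describe, creating the spacewalk user and raising the connection limit, can be sketched in sqlplus roughly as follows; the grants and the processes parameter are standard Oracle, but verify them against a complete copy of the listings:

```sql
-- Create the application user (the example password from the text)
create user spacewalk identified by spacewalk;
grant connect, resource to spacewalk;
-- Raise the connection limit from 40 to 400 (takes effect after a restart)
alter system set processes = 400 scope = spfile;
```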
kickstart profiles. To install the client, simply select the required profile from the list. The client is then automatically registered on the Spacewalk server. Existing systems can easily be reinstalled using:

  koan --replace-self \
    --server=Spacewalk-Server \
    --profile=Kickstart-Profile

This creates an entry in the system's bootloader menu and automatically selects the entry when the system reboots.

System Management

All of the systems registered on the Spacewalk server retrieve their software packages from this source, with no need to access external repositories. This method not only improves your security posture but also saves network bandwidth. With a registered system, you can customize various settings in the System Properties section (Figure 5). For example, you can assign new software or configuration channels, compare the installed software with profiles on other systems, or create snapshots as a backup that you can roll back later. Additionally, you can install new software or distribute configuration files from a centralized location.

Thanks to the ability to assign registered systems to groups, you can point and click to do this for a large number of systems. The rhnsd service on the systems queries the Spacewalk server at predefined intervals to check for new actions, such as software installations. When a system finds an action, it then executes it. If the osad service is enabled on the system, you can even run actions immediately without waiting for the polling interval to elapse. The client and the server then use the Jabber protocol for a continuous exchange.

Finally, don't forget the feature-rich Spacewalk API, which is accessible at http://Servername/rhn/apidoc/index.jsp on the installed server. This tool gives you access to a plethora of functions that are not available in the web interface. The API can be accessed with XML-RPC, which makes it perfect for your own Perl or Python scripts. A Python script [8] for creating a software channel is just one example of accessing the Spacewalk server via the API (Figure 6).

Conclusions

Spacewalk gives administrators a very powerful tool for managing large-scale Linux landscapes. It facilitates many daily tasks, such as the installation of software updates or uploading of configuration files. Advanced features, such as channel cloning, make it possible to put any software through a quality assurance process before rolling it out to your production systems. Thanks to the comprehensive API, many tasks can also be scripted. ■

Info
[1] Spacewalk project homepage: [https://fedorahosted.org/spacewalk]
[2] Spacewalk network ports: [http://magazine.redhat.com/2008/09/30/tips-and-tricks-what-tcpip-ports-are-required-to-be-open-on-an-rhn-satellite-proxy-or-client-system/]
[3] RHEL5, CentOS5 Spacewalk Server Repository RPM: [http://spacewalk.redhat.com/yum/1.0/RHEL/5/i386/spacewalk-repo-1.0-2.el5.noarch.rpm]
[4] Fedora12 Spacewalk Server Repository RPM: [http://spacewalk.redhat.com/yum/1.0/Fedora/12/i386/spacewalk-repo-1.0-2.fc12.noarch.rpm]
[5] Spacewalk Roadmap: [http://fedorahosted.org/spacewalk/roadmap]
[6] Spacewalk mailing list: [http://www.redhat.com/spacewalk/communicate.html#lists]
[7] Oracle XE: [http://www.oracle.com/technology/software/products/database/xe/htdocs/102xelinsoft.html]
[8] Spacewalk API script for creating a software channel: [http://fedorahosted.org/spacewalk/attachment/wiki/UploadFedoraContent/create_channel.py]
[9] Repository sync: [http://fedorahosted.org/spacewalk/attachment/wiki/UploadFedoraContent/sync_repos.py]
[10] Fedora12 Spacewalk Client Repository RPM: [http://spacewalk.redhat.com/yum/1.0/Fedora/12/i386/spacewalk-client-repo-1.0-2.fc12.noarch.rpm]
[11] RHEL5 and CentOS5 Client Repository RPM: [http://spacewalk.redhat.com/yum/1.0/RHEL/5/i386/spacewalk-client-repo-1.0-2.el5.noarch.rpm]
[12] EPEL Repository: [http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-3.noarch.rpm]
[13] Cobbler: [https://fedorahosted.org/cobbler/]
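The XML-RPC access described above can be sketched in a few lines of Python. The endpoint path and the method names below (auth.login, channel.listAllChannels, auth.logout) follow the published Spacewalk API, but treat them, and the example server name, as assumptions to check against the apidoc pages on your own installation:

```python
import xmlrpc.client

def list_channel_labels(server_fqdn, user, password):
    # The Spacewalk/RHN XML-RPC endpoint conventionally lives under /rpc/api
    client = xmlrpc.client.ServerProxy("http://%s/rpc/api" % server_fqdn)
    session = client.auth.login(user, password)  # returns a session token
    try:
        # Ask the server for all channels and keep just their labels
        channels = client.channel.listAllChannels(session)
        return [channel["label"] for channel in channels]
    finally:
        client.auth.logout(session)
```

Calling list_channel_labels("spacewalk.server.tld", "admin", "secret") would then return the labels of all channels, such as the fedora-12-i386 channel created earlier.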
The Author
Thorsten Scherf is a Senior Consultant for Red Hat EMEA. You can meet him as a speaker at conferences. He is also a keen marathon runner whenever time permits.

Figure 6: An XML-RPC interface opens up a huge selection of Spacewalk server functions via the programmable API.
Server Observer

Icinga's developers grew weary of waiting for updates to the popular Nagios monitoring tool, so they started their own project. By Falko Benthin

© Alterfalter, 123RF.com

A server can struggle for many reasons: System resources like the CPU, RAM, or hard disk space could be overloaded, or network services might have crashed. Depending on the applications that run on a server, consequences can be dire – from irked users to massive financial implications. Therefore, it is more important than ever in a highly networked world to be able to monitor the state of your server and take action immediately. Of course, you could check every server and service individually, but it is far more convenient to use a monitoring tool like Icinga.

Nagios Fork

Icinga [1] is a relatively young project that was forked from Nagios [2] because of disagreements regarding the pace and direction of development. Icinga delivers improved database connectors (for MySQL, Oracle, and PostgreSQL), a more user-friendly web interface, and an API that lets administrators integrate numerous extensions without complicated modification of the Icinga core. The Icinga developers also seek to reflect community needs more closely and to integrate patches more quickly. The first stable version, 1.0, was released in December 2009, and the version
Listing 1: my_hosts.cfg

01 # Webserver
02 define host{
03    host_name                webserver
04    alias                    languagecenter
05    display_name             Server at language center
06    address                  141.20.108.124
07    active_checks_enabled    1
08    passive_checks_enabled   0
09    max_check_attempts       3
10    check_command            check-host-alive
11    check_interval           5
12    retry_interval           1
13    contacts                 spz_admin
14    notification_period      24x7
16    notification_options     d
17 }
18
19 # Fileserver
20 define host{
21    host_name                fileserver
22    alias                    Fileserver
23    display_name             Fileserver
24    address                  192.168.10.127
25    active_checks_enabled    1
26    passive_checks_enabled   0
27    max_check_attempts       3
28    check_command            check-host-alive
29    check_interval           5
30    retry_interval           1
31    contacts                 admin
33    notification_interval    60
34    notification_options     d,u,r
35 }
Table 1: States

Server
  Option   Status
  o        OK
  d        Down
  u        Unreachable
  r        Recovered

Services
  Option   Status
  o        OK
  w        Warning
  c        Critical
  r        Recovered
  u        Unknown
administrators only need to customize. In principle, you can define multiple objects in a CFG file, but you can just as easily create separate files for each object in a directory below /path-to-Icinga/etc/objects. Lines that start with a hash mark within an object definition are regarded as comments, as is everything within a line to the right of a semicolon.

Defining Hosts and Services

Listing 1 provides a sample host definition. The host is the web server at a language center (display_name) and is displayed accordingly in the web interface. To inform the administrator (contacts) when the server goes down (notification_options), I want Icinga to ping (check_command) the server every 5 minutes (check_interval). If the server is still down 60 minutes (notification_interval) after notifying the administrator, I want to send another message.

Icinga is capable of deciding whether a host is down or unreachable (see Table 1). However, to determine that a host is unreachable, you have to define the nodes passed along the route to the host as parents – and this will only work if the routes for outgoing packets are known. The file server definition looks similar.

Once the servers are defined, the administrator configures the respective services that Icinga will monitor (Listing 2), along with the matching commands (Listing 3), the intervals (Listing 4), and the stakeholding administrators (Listing 5). The individual configuration files have a similar structure. For each service, you
Conference Topics:
• NSClient++ (Michael Medin)
• Clientless Windows Monitoring about WMI with Samba4 (Thomas Sesselmann)
• The social seismograph at XING (Dr Johannes Mainusch)
• RRDCacheD – how to escape the I/O hell (Sebastian Harl)
• Monitoring at Thales Hengelo using Nagios (Pieter van Emmerik)
Figure 3: A manual check of commands in commands.cfg reveals the culprit.
Figure 4: Mail dispatched by Icinga is short and to the point.
need to consider the interval between checks. One useful feature is the ability to define time slots, within which Icinga will perform checks and, if necessary, notify the administrator. Here, time limitations or holidays can be defined.

The contact configuration can include email addresses or cell phone numbers, but to integrate each contact with, for example, an Email2SMS gateway or a Text2Speech system (e.g., Festival), you need a matching command.

Icinga can use macros, which noticeably simplifies and accelerates many tasks because you can use a single command for multiple hosts and services. Listings 2 and 3 give examples of macros. All services defined for monitoring the file server include a check_nrpe instruction with an exclamation mark. Each exclamation mark can be followed by an argument, which in turn is evaluated by the macros in other definitions. Macros are nested in $ signs.

After creating the configuration files and storing them in etc/objects, you still need to tell Icinga about them by adding a new line,

  cfg_file=/usr/local/icinga/etc/objects/object.cfg

to the main configuration file and then restarting Icinga with /etc/init.d/icinga restart.

GUI and Messages

Icinga works without a graphical interface, but it's much nicer to have one. The standard interface can't deny its Nagios ancestry, but it is clear-cut and intuitive. If everything is working, you'll see a lot of green in the user interface (Figure 1), but if something goes wrong somewhere, the color will change and move closer and closer to red to reflect the status of the hosts or services (Figures 2 and 3). Status messages are typically linked, so that clicking one takes you to more detailed information.

If something is so drastically wrong that a message is necessary, Icinga will check its complex ruleset to see whether it should send a message and, if so, to whom (Figure 4). The filters through which the message passes check the following: whether notifications are required, if the problem occurred at a time when the host and service should be running, if messages should be sent for this service in the current time slot, and what the contacts linked to the service actually want. Each contact can
Figure 6: Network overview. If you need to monitor a large number of machines and have defined "parents," you can also visualize the intermediate nodes.
Figure 7: The alert histogram, another useful gadget Icinga offers, shows peak trouble times.
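Listings 2 and 3 themselves are not legible in this copy, but the pattern the text describes – a service that calls check_nrpe with arguments after the exclamation marks, and a command definition that picks them up as $ARG…$ macros – would look something like the following sketch. All names and thresholds here are illustrative, not the article's originals:

```text
# Illustrative service: a disk check on the file server via NRPE;
# the values after "!" reach the command as $ARG1$ and $ARG2$
define service{
    host_name              fileserver
    service_description    Disk usage
    check_command          check_nrpe!check_disk!20
    check_interval         5
    max_check_attempts     3
    notification_options   w,c,r
    contacts               admin
}

# Illustrative command: the macros expand to the host address and the
# arguments supplied in the service definition
define command{
    command_name    check_nrpe
    command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ -a $ARG2$
}
```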
define its own rules to stipulate when it wants to receive messages and for what status. If multiple administrators exist and belong to a single group, Icinga will notify all of them. Again, you can define individual notification periods so that each admin will be responsible for one period.

Interesting Features

Icinga contains several interesting features that allow administrators to customize the network monitor to reflect their needs and system environment. For example, you can define distributed monitoring environments. If you need to monitor several hundred or thousand hosts, the Icinga server might conceivably run out of resources because every active check requires system resources. To take some of the load off the main server, Icinga can delegate individual tasks to auxiliary servers which, in turn, forward the results to a central server. Scheduling the checks can also help reduce this load. Instead of running all your active checks in parallel, you can let Icinga stagger them.

Another interesting feature is the ability to escalate notifications. Not every administrator can be available and ready for action 24/7. If the contact that Icinga notifies does not respond within a defined period, Icinga can attempt to establish contact on another channel (e.g., a cell phone instead of email). If this notification fails as well, the case can be escalated to someone higher up the chain of responsibility – the team leader, for example.

Conclusions

Icinga is a complex tool that provides valuable services whenever an administrator needs to monitor computers on a network. But don't expect to be able to set up the network monitor in a couple of minutes of spare time; if all goes well, the installation and configuration will take at least a couple of hours. Once you have battled through the extensive configuration, you can reward yourself with an extended lunch break: If something happens that requires your attention, Icinga will tell you all about it.

The traditional web interface is clear-cut and packed with information; when this article went to print, however, the new interface wasn't entirely convincing (Figure 5). The installation was tricky, the documentation required some imagination at times, and the final results were disappointing. The interface was buggy and very slow on my, admittedly, not very powerful Icinga test server (Via C3, 800MHz, 256MB RAM). As a default, you need a new username and password for Icinga Web. That said, however, the current status does reveal some potential; it makes sense to check how the new interface is developing from time to time.

The Icinga kernel is well and comprehensively documented and leaves no questions unanswered. Icinga also offers a plethora of useful gadgets, such as the status map (Figure 6) or the alert histogram (Figure 7), making the job of monitoring hosts less boring – at least initially. The depth of information that Icinga provides is impressive and promises an escape route for avoiding calls from end users. In short, Icinga is a useful tool that makes the administrator's life more pleasant. ■

Info
[1] Icinga: [http://www.icinga.org/]
[2] Nagios: [http://www.nagios.org/]
[3] NRPE: [https://git.icinga.org/]
[4] Nagios plugins: [http://sourceforge.net/projects/nagiosplug/]
Modern MySQL Forks and Patches

Spoiled for Choice

© Dmitry Karasew, 123RF.com

Reporting

Extended reporting allows administrators to collect more granular information about the MySQL server's behavior under load. Thus far, slow.log, which offers very little in the way of configuration options, might be your first port of call. However, its utility value is restricted to identifying individual, computationally intensive queries on the basis of the time they use – and non-used indexes. The MicroSlow patch offers new filters for a more targeted search for poorly formulated queries. Thus, it logs queries that are responsible for writing temporary tables to disk, performing complete table scans, or reading a freely defined minimum number of
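The option names for these filters vary between patch versions; as a sketch, a slow-log configuration in the spirit of the MicroSlow filters might look like this in my.cnf. The names below follow the Percona slow-log patches and should be verified against your build's documentation:

```text
[mysqld]
# Log queries that run longer than half a second (sub-second resolution
# is one of the patch's additions)
long_query_time        = 0.5
# Only log queries that match these conditions (names per the Percona
# slow-log patch; treat them as assumptions for other builds)
log_slow_filter        = full_scan,tmp_table_on_disk
# Log only queries that examine at least this many rows
min_examined_row_limit = 10000
```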
Listing 1: Userstats Patch in Action

mysql> select * from information_schema.TABLE_STATISTICS ORDER BY ROWS_READ DESC LIMIT 0,5;
| TABLE_SCHEMA | TABLE_NAME | ROWS_READ   | ROWS_CHANGED | ROWS_CHANGED_X_INDEXES |
| moviepilot   | images     | 13138219791 |        14778 |                 118224 |
| moviepilot   | events     |  3957858216 |        59964 |                 359784 |

The evaluation in Listing 1 shows that read access to the images table is particularly frequent. The next step would be for the database user to experiment with code optimization or other changes to the table format to reduce access incidence and save time. The relatively write-intensive movies table has a suspiciously high write-access count and, at the same time, a large number of index updates.

Because the statistics can be reset easily with FLUSH TABLE_STATISTICS, interval-based evaluation – by means of a Munin plugin that you write yourself, or some similar method – would be your best bet. Retrospectively, you could investigate load peaks in relation to table access and modification.

New Functions

New functions in the MySQL kernel give administrators additional options so that maintenance of the MySQL server is more secure and convenient. A typical task is to stop MySQL processes with the KILL command. Under load, you might see a process listed as Idle by SHOW PROCESSLIST start to handle a new query just at the moment you kill it. The "Kill if Idle" patch adds an option to kill a process only if it is doing nothing: KILL IF_IDLE Process_Id. This saves you the embarrassment of accidentally killing a process while it is handling a query.

LVM and ZFS snapshots are commonly regarded as the simplest methods for backing up an InnoDB database on the fly without interrupting operations. If this method is not an option for you and you are forced to rely on a legacy dump file, you need to make sure that the data on your MySQL server do not change while a file is being dumped. The legacy approach to doing this is FLUSH TABLES WITH READ LOCK, which might not be sufficient in the InnoDB case, where background threads also write to the database. The InnoDB Freeze patch executes SET innodb_disallow_writes = 1 to stop all writes; SET innodb_disallow_writes = 0 disables the freeze.

Performance Enhancements

Thanks to its support for transactions and row-level locking, InnoDB has developed into a modern alternative for the now fairly ancient MyISAM engine. Despite performance gains in write access thanks to row-level locking, the overhead for supporting transactions, foreign keys, and other functions (even if you don't use them) costs valuable CPU cycles and hardware I/O resources. MySQL systems under heavy load thus need a perfectly configured and powerful InnoDB engine.

A variety of performance-boosting patches are available for the version of InnoDB that ships with MySQL; some of them are included in the OurDelta version. One that is worthy of mention is a reworked RW lock that improves locking behavior on multi-processor systems in particular. A description of all the improvements is beyond the scope of this article, but one thing is clear: patches for InnoDB exist that retain compatibility and offer transparent optimization of the engine. Typically, it is difficult to measure the performance gain in an objective way because the performance of the MySQL server will depend to a great extent on the hardware, configuration, and data it uses. The most exhaustive and reliable source is the MySQL Performance Blog [4], which regularly publishes test results.

Improvements by selectively installing patches for the legacy InnoDB engine are regarded as a fairly conservative approach. The use of an alternative database engine holds more promise. The InnoDB plugin and Percona XtraDB engines are becoming increasingly widespread.

The InnoDB Plugin

The InnoBase InnoDB plugin is an ongoing development of the InnoDB engine that ships with MySQL [6]. Improvements include general optimization of CPU load and I/O access, a faster locking mechanism, extended configuration and reporting options, and optional table compression. MySQL 5.1 introduced the option of unloading the standard engine and replacing it with a different version. Starting with MySQL 5.1.38, MySQL additionally supplies the InnoDB plugin. As MySQL describes this as a release candidate, administrators do need to enable it manually. The official MySQL documentation describes the steps required to do so [7]. To benefit from the combination of distribution updates for the MySQL kernel and the latest functions and optimizations of the InnoDB plugin, it is a good idea to install the latest InnoDB plugin version from the InnoDB website.

Ubuntu 10.04 comes with MySQL server version 5.1.41. The /usr/lib/mysql/plugin/ directory houses an InnoDB plugin version 1.0.4 that is disabled by default. The InnoDB website has the current version, 1.0.6, which you can download and unpack. Then copy ha_innodb.so to the /usr/lib/mysql/plugin/ directory. Because Ubuntu uses AppArmor to protect services by default, you need to disable or modify AppArmor to let you load content from the plugin directory by adding
| moviepilot | comments | 2650553183 | 3408 | 20448 |
GLOBAL innodb_disallow_writes = 1
| moviepilot | movies | 2013076357 | 598505 | 7780565 |
and then freezes all processes that
| omdb | log_entries | 1106683022 | 2737 | 5474 |
write InnoDB data so you can create
+‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑+‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑+‑‑‑‑‑‑‑‑‑‑‑‑‑+‑‑‑‑‑‑‑‑‑‑‑‑‑‑+‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑+
a backup. Afterward, SET GLOBAL in‑
these lines to the /etc/apparmor.d/usr.sbin.mysqld ruleset:

/usr/lib/mysql/plugin/ r,
/usr/lib/mysql/plugin/* mr,

Then restart AppArmor with the service apparmor restart command to enable the new rules. In my.cnf, you then need to enable the InnoDB engine and load the InnoDB plugin as shown in Listing 2. The InnoDB plugin documentation contains detailed information on this procedure.

Listing 2: Loading the InnoDB Plugin in my.cnf

[mysqld]
ignore_builtin_innodb
plugin_load=innodb=ha_innodb.so;innodb_trx=ha_innodb.so;innodb_locks=ha_innodb.so;innodb_lock_waits=ha_innodb.so;innodb_cmp=ha_innodb.so;innodb_cmp_reset=ha_innodb.so;innodb_cmpmem=ha_innodb.so;innodb_cmpmem_reset=ha_innodb.so

The MySQL server error log contains the messages shown in Listing 3 after loading the InnoDB plugin.

Listing 3: With and Without the InnoDB Plugin

MySQL server without InnoDB plugin:
InnoDB: Started; log sequence number 0 44233

MySQL server with current InnoDB plugin:
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
InnoDB: highest supported file format is Barracuda.
InnoDB Plugin 1.0.6 started; log sequence number 44233

The documentation provides detailed information on the optimizations and new features offered in the InnoDB plugin. One thing that stands out against the rest is the new "Barracuda" file format. In contrast to the standard "Antelope" format, it stores the InnoDB tables in a compressed format. Although this could cost additional CPU cycles, it will give you huge I/O performance savings – depending on your data structure – because far fewer operations are required on disk/SSD. To discover whether it is worthwhile changing to Barracuda, you will need to measure performance. Tables with large text and blob fields in particular will benefit from compression. Incidentally, MyISAM has supported "compressed" tables for some time; however, you cannot modify compressed MyISAM tables in ongoing operations.

Percona XtraDB

Percona XtraDB [8] takes things one step further than the InnoDB plugin. This storage engine is a merge of the current InnoDB plugin version with additional performance and feature patches. From a codebase point of view, XtraDB is thus the most innovative version of InnoDB. But don't let the name worry you: XtraDB is an InnoDB engine. The new name simply serves to underline the major differences between it and the version of InnoDB that ships with MySQL.

Your easiest approach to installing XtraDB is to resort to the MariaDB packages created by OurDelta. MariaDB [9] itself is a MySQL fork by the well-known MySQL developer Michael "Monty" Widenius. Widenius is working on a transactional alternative to MyISAM – the new Maria engine. At the same time, the MariaDB fork happily integrates XtraDB as a high-performance update to InnoDB. For the administrator, this means a whole lot more optimizations: a state-of-the-art MySQL version reworked by the MariaDB project and extended to include XtraDB (and Maria), along with additional patches courtesy of the OurDelta project.

A conversion from InnoDB to XtraDB tables is not needed because XtraDB replaces the standard InnoDB engine, just as the InnoDB plugin does. Existing and new tables are automatically managed by XtraDB. A downgrade to the InnoDB plugin and the standard InnoDB engine is also possible. To see that XtraDB still refers to itself as "InnoDB," you can call SHOW ENGINES – there, you will also see the other modern engines, such as Maria and PBXT. Listing 4 shows the engines on a current MariaDB server.

Tests on live systems demonstrate that migrating to the MariaDB package is unproblematic for the most part. MyISAM tables are left unchanged; InnoDB tables continue to work. However, MySQL-specific configurations in my.cnf are interpreted in a fairly strict manner. sql-mode=NO_ENGINE_SUBSTITUTION,TRADITIONAL will put MySQL in traditional mode, which handles many warnings as errors. For example, a typical Rails-style database migration failed because it did not use completely standards-compliant queries, such as setting default values for text and blob fields.
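The strict-mode behavior is easy to reproduce at the mysql prompt. The following transcript is only a sketch (the table and column names are invented for illustration); setting a default value on a text column is precisely the kind of non-standard construct that traditional mode rejects:

```sql
-- With TRADITIONAL in sql_mode, the non-standard default is a hard error:
mysql> SET SESSION sql_mode = 'NO_ENGINE_SUBSTITUTION,TRADITIONAL';
mysql> CREATE TABLE demo (notes TEXT DEFAULT 'none');
ERROR 1101 (42000): BLOB/TEXT column 'notes' can't have a default value

-- Without TRADITIONAL, the same statement runs with a warning,
-- and the default value is silently ignored:
mysql> SET SESSION sql_mode = 'NO_ENGINE_SUBSTITUTION';
mysql> CREATE TABLE demo (notes TEXT DEFAULT 'none');
Query OK, 0 rows affected, 1 warning
```

Calling SHOW WARNINGS after the second CREATE TABLE displays the same 1101 message, demoted to a warning.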
In non-traditional mode, MySQL ignores the default value and still runs the queries; in traditional mode, the query quits with an error. Thus, it is a good idea in many cases to change the line to sql-mode=NO_ENGINE_SUBSTITUTION and then restart the server. This problem is not specific to MariaDB; it is simply a very restrictive configuration in line with the MySQL standard. Additionally, it makes sense to check whether programs compiled against Libmysqlclient-dev need to be recompiled against the current Libmariadbclient-dev. In a Rails environment, this will affect Mysql-Gem.

Besides the benefits of the InnoDB plugin described thus far, XtraDB also offers a considerable performance gain, as benchmarking with various setups proves [10]. Numerous additional functions make life easier for developers and administrators – and don't forget the ability to write out the InnoDB buffer to disk. If you do need to restart the MySQL or MariaDB server, the InnoDB engine loses its valuable buffer pool in RAM. Depending on your configuration and the application, the pool can be several gigabytes and might be filled over the course of hours. Storing the buffer pool before quitting and loading it again after restarting will save valuable warmup time, which you would otherwise notice as slow response on the part of the database server. Listing 5 shows the commands and returns for storing and loading the buffer pool.

Listing 5: Storing and Loading the Buffer Pool

// storing the buffer pool
MariaDB [(none)]> select * from information_schema.XTRADB_ADMIN_COMMAND /*!XTRA_LRU_DUMP*/;
+------------------------------+
| result_message               |
+------------------------------+
| XTRA_LRU_DUMP was succeeded. |
+------------------------------+

// loading the buffer pool
MariaDB [(none)]> select * from information_schema.XTRADB_ADMIN_COMMAND /*!XTRA_LRU_RESTORE*/;
+---------------------------------+
| result_message                  |
+---------------------------------+
| XTRA_LRU_RESTORE was succeeded. |
+---------------------------------+

Drizzle

At this point, I'll take a quick look at Drizzle [11]. According to the project, the fork is a return to the original MySQL values: simplicity, reliability, and performance. Drizzle was originally based on the code of the not-yet-released MySQL 6.0 and mainly pursues the goals of removing unnecessary functions and reducing complexity. The developers really have been radical in the features they eradicated:
Storage engines such as Federated and Merge have been removed; others, such as CSV and MyISAM, have been demoted to temporary engines. Modern engines such as XtraDB are maintained in a separate branch. The standard engine for Drizzle is InnoDB. However, this does not mean that data dumped from a classical MySQL server with InnoDB tables can be integrated without problems, because Drizzle has also eradicated many field types, such as TINYINT, TINYTEXT, and YEAR. Migrating to Drizzle thus means architectural changes to your database design. Although a change from TINYINT to INT could simply mean searching and replacing occurrences in a dump file, the lack of a YEAR field can have a more serious effect on existing applications. A generic solution for the migration does not exist.

On a more positive note, Drizzle offers totally new replication mechanisms. One feature that stands out is the ability to replicate to NoSQL databases such as Voldemort [12] or services such as Memcached [13]; thus, you would be able to provision a variety of back ends automatically from a central location. The Drizzle project is also working on BlitzDB, a state-of-the-art, high-performance, non-transactional database engine that will be positioned as an alternative to MyISAM.

Conclusions

The community's response to the lethargic integration of patches into the Community Edition of MySQL server is to launch new and active projects (Figure 1). Existing MySQL 5.0 installations can be replaced easily by the OurDelta MySQL 5.0 build, which accelerates the server thanks to performance patches and offers advanced reporting functionality so the administrator can plan further steps on the basis of run-time statistics. Installations of version 5.1 can benefit from the latest optimizations by installing the current InnoDB plugin – ideally without needing to rebuild. Migrating to the state-of-the-art MariaDB, which outperforms the InnoDB plugin in performance tests, turns out to be even more effective. Luckily, MariaDB is packaged by the OurDelta project, which also adds a number of additional patches.

The Drizzle database, with its simplified variant of InnoDB, is still at a very early stage. The Maria engine also represents a possible fast future alternative to the classical combination of MyISAM/InnoDB; however, you need to perform extensive checks before using it. Both projects' engines require architectural changes to the database system and the program code that accesses it, in contrast to the InnoDB plugin, XtraDB, and popular MySQL patches.

Administrators and developers are put in an ambivalent situation: Although it has become inevitable for administrators to concern themselves with alternative MySQL patches, engines, and forks and, ideally, to deploy benchmarks to discover the

Info

[1] MySQL on Facebook: https://launchpad.net/mysqlatfacebook
[2] Google patches: http://code.google.com/p/google-mysql-tools/wiki/Mysql5Patches
[3] Percona patches: http://www.percona.com/docs/wiki/patches:start
[4] MySQL Performance Blog: http://www.mysqlperformanceblog.com/
[5] Patches from OurDelta: http://ourdelta.org/patches
[6] InnoDB plugin: http://www.innodb.com/products/innodb_plugin/features/
[7] Installing the InnoDB plugin: http://dev.mysql.com/doc/refman/5.1/en/innodb.html
[8] Percona XtraDB: http://www.percona.com/docs/wiki/percona-xtradb:start
[9] MariaDB: http://askmonty.org/wiki/MariaDB
[10] Benchmarks: InnoDB plugin and XtraDB vs. InnoDB: http://www.mysqlperformanceblog.com/2010/01/13/innodb-innodb-plugin-vs-xtradb-on-fast-storage/
[11] Drizzle: http://drizzle.org/
[12] Project Voldemort: http://project-voldemort.com
[13] "Memcached" by Tim Schürmann, Linux Magazine, November 2009, pg. 28
[14] MariaDB in Ubuntu: https://wiki.ubuntu.com/Lucid-MariaDB-Inclusion

The Author

Caspar Clemens Mierau's "Screenage" project provides consultancy services to Rails and PHP portals such as moviepilot.de, omdb.org, and Artfacts.Net. Caspar Clemens works as a freelance author and is collecting literature for his thesis on development environments.
Microsoft Exchange 2010: The Highlights
Message Exchange
For years, Exchange has been the standard in-house server solution for all messaging tasks on Windows. This article introduces the highlights of the new Exchange version 2010.
By Björn Bürstinghaus
Exchange 2010 [1] is the latest generation of the Microsoft email server, and it comes with a whole bunch of new and useful functions for administrators and users alike [2]. The new version is interesting even for companies currently using Exchange 2007: Besides an archiving function, the new Exchange also integrates an intelligent SMS gateway architecture, thus removing the need for expensive third-party add-ons.

Improved Management for Admins

Many functions that posed tedious Exchange Management Shell tasks for administrators in Exchange 2007 have now been integrated into the console. For example, you can create new certificates or view the current crop of certificates at the console, without needing to compose your own PowerShell request (Figure 1). This version of Exchange also offers a console view of the number of licenses used in the company. This new transparency allows administrators to identify cases in which a company is using more licenses than it has purchased.

Giving users permission to create distribution groups and manage their members is also new; this task can be handled in the new, web-based Exchange Control Panel (ECP). In ECP, users can modify their own Active Directory information, such as cellphone numbers or addresses, without needing to contact IT to do so. In this way, ECP will probably make a major contribution to reducing IT costs in the enterprise.

Whereas Exchange 2007 restricted bulk changes to PowerShell, Exchange 2010 finally lets administrators make bulk changes in the management console, so you can run a task simultaneously against multiple mailboxes, which wasn't possible in previous versions (Figure 2).

The new transport cache prevents email messages transmitted via SMTP from being deleted until the downstream node confirms that they have been forwarded successfully. In other words, if a hub transport server in your company goes down, the transport cache will retain the messages until the server becomes available again or the transport rules are modified.

New Features from the User's Point of View

Exchange 2010 lets end users search multiple mailboxes and send the results as a PST export to another person. The Unified Messaging function integrates a voicemail function for each user in Exchange and supports speech-to-text conversion, making it possible to display a voicemail as text as well as listen to the attached audio file.

The ability to display messages as conversations, both in Microsoft Office Outlook 2010 and in the Outlook Web App, gives users a clearer view of their email folders. Users do not need to check the sent items for email they sent to the same person on the same subject, which can save time.

The Outlook Web App's premium functions were previously restricted to Internet Explorer. Exchange 2010 lifts this restriction: The new Outlook Web App now supports premium functions in Mozilla Firefox and Apple Safari (Figure 3).
Another new feature is the approach to migrating mailboxes. Previously, a user's mailbox was switched offline during migration to another mailbox database, which meant delays of a few minutes to several hours. Now, the user can stay online while the mailbox is in transit. Exchange copies the total content of the mailbox and then synchronizes any changes that occurred during the migration.

Archive Mailbox

Centralized archiving of email previously relied on third-party add-ons. Again, Exchange 2010 puts an end to this. In addition to a normal mailbox, the administrator can create an archive for each member of staff, although this is a premium feature and does require an Exchange Premium Client Access License for each user (Figure 4). Once enabled, the archive is automatically displayed in the Outlook 2010 and Outlook Web App folder navigation, in addition to the normal mailbox.

This solution allows administrators to quickly create an archiving setup that lets end users restore messages they deleted from their mailboxes without having to call the help desk. All you need for this is the journaling function, enterprise-wide policies for automatically synchronizing or moving email from the mailbox to the archive mailbox, and a matching retention policy that prevents deletion.

Exchange 2010 currently does not offer the same feature set as archiving solutions by third parties such as MailStore Server or GFI MailArchiver; however, the functionality that is currently available does relieve the burden on mailboxes, and it puts an end to the "PST Hell" that some administrators rightly complained of.

Text Messages

If a company's employees use smartphones with Windows Mobile 6.5,
Safety Net

cluster machines with a hardware RAID. Local machines could be backed up twice a day: The first time to copy the backup from the previ-
rectory), finding the right location for a given file can be a nightmare. Even if you are dealing with just a few systems, administration of the backups can become a burden.

This leads into the question of how easy it is to recover your data. Can you easily find files from a specific date if there are multiple copies? How easy is it to restore individual files? What about all files changed on a specific date?

Depending on your business, you might have legal obligations in terms of how long you are required to keep certain kinds of data. In some cases, it might be a matter of weeks; in other cases, it can be 10 years or longer. Can you recover data from that long ago? Even if it's not required by law, having long-term backups is a good idea. If you accidentally delete something and don't notice it has happened for a period longer than your backup cycle, you will probably never get your data back. How easy is it for your backup software to make full backups at the end of each month – for example, to ensure that the media does not get overwritten?

Scheduling

If your situation prevents you from doing complete backups all the time, consider how easy it is to schedule them. Can you ensure that a complete backup is done every weekend, for example? Also, you need to consider the scheduling options of the respective tool. Can it start backups automatically? Is it dependent on some command? Is it simply a GUI for an existing tool, so that all operations need to be started manually? Just because a particular operating system has no client does not mean you are out of luck: You can mount filesystems using Samba or NFS and then back up the files.

rsync

Sometimes you do not need to look farther than your own backyard. Rsync is available for all Linux distributions, all major Unix versions, Mac OS X, and Windows. With a handful of machines, configuring rsync by hand might be a viable solution. If you prefer a graphical interface, several are available; in fact, many different backup applications rely on rsync to do the actual work.

The rsync tool can be used to copy files either from a local machine to a remote machine or the other way around. A number of features also make rsync a useful tool for synchronizing directories (which is part of its name). For example, rsync can skip files that have not been changed since the last backup, and it can delete files on the target system that no longer exist on the source. If you don't want existing files to be overwritten but still want all of the files to be copied, you can tell rsync to add a suffix to files that already exist on the target. The ability to specify files and directories to include or exclude is very useful when doing backups. This can be done by full name or with wildcards, and rsync allows you to specify a file that it reads to determine what to include or exclude. When determining whether a file is a new version or not, rsync can look at the size and modification date, but it can also compare a checksum of the files.

A "batch mode" can be used to update multiple destinations from a single source machine. For example, changes to the configuration files can be propagated to all of your machines without having to specify the changed files for each target. Rsync also has a GUI, Grsync [Figure 1].

luckyBackup

At first, I was hesitant to go into details about luckyBackup [1] because it is still a 0.x version and has a somewhat "amateurish" appearance. However, my skepticism quickly faded as I began working with it. luckyBackup is very easy to use and provides a surprising number of options. Despite its simplicity, luckyBackup had the distinction of winning third place in the 2009 SourceForge Community Choice Awards as a "Best New Project." The repository I used had version 0.33, so I downloaded and installed that (although v0.4.1 is current). The source code is available, but various Linux distributions have compiled packages.

Describing itself as a backup and synchronization tool, luckyBackup uses rsync, to which it passes various configuration options. It provides the ability to pass any option to rsync, if necessary. Although it's not a client-server application, all it needs is an rsync connection to back up data from a remote system.

When you define which files and directories to back up, you create a "profile" that is stored under the user's home directory. Profiles can be imported and exported, so it is possible to create backup templates that are copied to remote machines. (You still need the luckyBackup binary to run the commands.)

Each profile contains one or more tasks, each with a specific source and target directory, and includes the configuration options you select [Figure 2]. Thus, it is possible to have different options for different directories (tasks), all for a single machine (profile).

Within a profile, the tool makes it easy to define a restore task on the basis of a given backup task. Essentially, this is the reverse of what you defined for the backup task, but it is very straightforward to change options for the restores, such as restoring to a different directory.

Scheduling of the backup profiles is done by cron, but the tool provides a simple interface. The cron parameters are selected in the GUI; you click a button, and the job is submitted to cron.

A console, or command-line, mode allows you to manage and configure your backups even when a GUI is not available, such as when connecting via ssh. Because the profiles are stored in the user's home directory, it would be possible for users to create their own profiles and make their own backups.

Although I would not recommend it for large companies (no insult in-
tended), luckyBackup does provide a basic set of features that can satisfy home users and small companies.

Figure 1: Grsync – a simple front end to rsync.
Figure 2: luckyBackup profile configuration.

Amanda

Initially developed internally at the University of Maryland, the Advanced Maryland Automatic Network Disk Archiver (Amanda) [2] is one of the most widely used open source backup tools. The software development is "sponsored" by the company Zmanda [3], which provides an "enterprise" version of Amanda that you can purchase from the Zmanda website. The server only runs on Linux and Solaris (including OpenSolaris), but Mac OS X and various Windows versions also have clients.

The documentation describes Amanda as having been designed to work in "moderately sized computer centers." This and other parts of the product description seem to indicate the free, community version might have problems with larger computer centers. Perhaps this is one reason for selling an "enterprise" version. The latest version is 3.1.1, which came out in June 2010, but it just provided bug fixes; version 3.1.0 was released in May 2010.

Amanda stores the index of the files and their locations in a text file. This naturally has the potential to slow down searches when you need to recover specific files. The commercial version, however, uses MySQL to store the information.

Backups from multiple machines can be configured to run in parallel, even if you only have one tape drive. Data are written to a "holding disk" and from there go onto tape. Data are written with the use of standard ("built-in") tools like tar, which means data can be recovered independently of Amanda. Proprietary tools typically have a proprietary format, which often means you cannot access your data if the server is down.

Scheduling is also done with a local tool: cron. Commands are simply started at the desired time with the respective configuration file as an argument.

Amanda supports the concept of "virtual tapes," which are stored on your disk. These can be of any size smaller than the physical hard disk. This approach is useful for splitting up your files into small enough chunks to be written to DVD, or even CD.

Backups are defined by "levels," with level 0 indicating a full backup; each subsequent level n backs up the changes made since the last backup at level n - 1 or lower. The wiki indicates that Amanda's scheduling mechanism uses these levels to implement a strategy of "optimization" in your backups. Although optimization can be useful in many situations, the explanation is somewhat vague about how this is accomplished – and vague descriptions of how a system makes decisions on its own always annoy me.

One important caveat is that Amanda was developed with a particular environment in mind, and it is possible (if not likely) that you will need to jump through hoops to get it to behave the way you want it to. The default should always be to trust the administrator, in my opinion. If the admin wants to configure it a certain way, the product shouldn't think it knows better.

For example, you should determine whether the scheduling mechanism is doing full backups at times other than when you expect or even want. In many cases, large data centers do full backups on the weekend when there is less traffic, not simply "every five days." If your installation has sudden spikes in data, Amanda might think it knows better and change the schedule.

Although such situations can be addressed by tweaking the system, I have a bad feeling when software has the potential for doing something unexpected. After all, as a sys admin, I was hired to think, not simply to push buttons. To make things easier
in this regard, Zmanda recommends their commercial enterprise product.

Although Amanda has been around for years and is used by many organizations, I was left with a bad taste in my mouth. Much of the information on the website was outdated, and many links went to the commercial Zmanda website, where you could purchase their products. Additionally, a page with the wish list and planned features is as old as 2004. Although a note states that the page is old, there is no mention of why the page is still online or any explanation of which items are still valid. Half of the pages on the administration table of contents (last updated in 2007) simply list the title with no link to another page.

Also, I must admit I was shocked when I read the "Zmanda Contributor License Agreement." Amanda is an open source tool, which is freely available to everyone. However, in the agreement, "you assign and transfer the copyrights of your contribution to Zmanda." In return, you receive a broad license to use and distribute your contribution. Translated, this means you give up your copyright, not simply give Zmanda the right to use your work – which also means Zmanda is free to add your changes to their commercial product and make money off of it. All you get is a T-shirt!

Areca Backup

The documentation states that Areca cannot do filesystem or disk images, nor can it write to CDs or DVDs. Backups can be stored on remote machines with FTP or FTPS, and you can back up from remotely mounted filesystems, but there is no remote agent. Areca provides no scheduler, so it expects you to use some other "task-scheduling software" (e.g., cron) to start your backup automatically.

In my opinion, the interface is not as intuitive as others, and it uses terminology that differs from other backup tools, making for slower progress at the beginning. For example, the configuration directory is called a "workspace," and a collection of configurations (which can be started at once) is a "group," as opposed to a collection of machines.

Areca provides three "modes," which determine how the files are saved: standard, delta, and image. The standard mode is more or less an incremental backup, storing all new files and those modified since the last backup. The delta mode stores only the modified parts of files. The image mode is explicitly not a disk image; basically, it is a snapshot that stores a unique archive of all your files with each backup. The standard backup types (differential, incremental, or full) determine which files to include.

The GUI provides two views of your backups. The physical view lists the archives created by a given target. The logical view is a consolidated view of the files and directories in the archive.

Areca is able to trigger pre- and post-actions, like sending a backup report by email, launching shell scripts before or after your backup, and so forth. It also provides a number of variables, such as the archive and computer name, which you can pass to a script. Additionally, you can define multiple scripts and specify when they should run. For example, you can start one script when the backup is successful but a different one if it returns errors.

Areca provides a number of interesting options when creating backups. It allows you to compress the individual files as well as create a single compressed file. To avoid problems with very large files, you can configure the backup to split the compressed archive into files of a given size. Also, you can encrypt the archives with either AES 128 or AES 256. One aspect I liked was the ability to drop directories from the file explorer directly into Areca.

The Areca forum has relatively low traffic, but posts are fairly current. However, I did see a number of recent posts remain unanswered for a month or longer. The wiki is pretty limited, so you should probably look through the user documentation, which I found to be very extensive and easy to understand.
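Areca's split-and-compress option can be approximated with standard tools. The following sketch uses invented paths and is not Areca's own mechanism; it packs a directory, splits the archive into 10MB chunks, and restores by concatenating the parts:

```shell
#!/bin/sh
# Sketch: Areca-style "split archive" backups with standard tools.
# All paths are examples only; this is not how Areca works internally.
set -e

SRC=/tmp/split_demo_src
OUT=/tmp/split_demo_out
RESTORE=/tmp/split_demo_restore
mkdir -p "$SRC" "$OUT" "$RESTORE"

# 25MB of incompressible test data so the archive really gets split
dd if=/dev/urandom of="$SRC/data.bin" bs=1M count=25 2>/dev/null

# pack and split into 10MB parts: data.tgz.aa, data.tgz.ab, ...
tar czf - -C "$SRC" . | split -b 10M - "$OUT/data.tgz."

# restore: concatenate the parts in order and unpack
cat "$OUT"/data.tgz.* | tar xzf - -C "$RESTORE"
```

Encryption comparable to Areca's AES option could be layered into the same pipeline, for example with gpg --symmetric between tar and split.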
Two "wizards" also ease the creation of backups. The Backup Shortcut wizard simplifies the process of creating the necessary Areca commands, which are then stored in a script that you can execute from the command line or with cron.

The Backup Strategy wizard generates a script containing a set of backup commands to implement a specific strategy for the given target. For example, you can create a backup every day for a week, a weekly backup for three weeks, and a monthly backup for six months.

Bacula

The Backup Dracula "comes by night and sucks the vital essence from your computers." Despite this somewhat cheesy tag line, Bacula [5] is an amazing product. Although it's a newer product than Amanda, I definitely think it surpasses Amanda in both features and quality. To be honest, the setup is not the point-and-click type that you get with other products, but that is not really to be expected considering the range of features Bacula offers.

Although Bacula uses local tools to do the backup, it is a true client-server product with five major components that use authenticated communication: Director, Console, File, Storage, and Catalog. These elements are deployed individually on the basis of the function of the respective machine.

The Director supervises all backup, restore, and other operations, including scheduling backup jobs. Backup jobs can start simultaneously as well as on a priority basis. The Director also provides the centralized control and administration and is responsible for maintaining the file catalog. The Console is used for interaction with the Bacula Director and is available as a GUI or command-line tool.

The File component is also referred to as the client program, which is the software that is installed on the machines to be backed up. As its name implies, the Storage component is responsible for the storage and recovery of data to and from the physical media. It receives instructions from the Director and then transfers data to or from a file daemon as appropriate. It then updates the catalog by sending file location information to the Director.

The Catalog is responsible for maintaining the file indexes and volume database, allowing the user to locate and restore files quickly. The Catalog maintains a record of not only the files but also the jobs run. Currently, Bacula supports MySQL, PostgreSQL, and SQLite. As of this writing, the Director and Storage daemons on Windows are not directly supported by Bacula, although they are reported to work.

One interesting aspect of Bacula is the built-in Python interpreter for scripting that can be used, for example, before starting a job, on errors, when the job ends, and so on. Additionally, you can create a rescue CD for a "bare metal" recovery, which avoids the necessity of reinstalling your system manually and then recovering your data. This process is supported by a "bootstrap file" that contains a compact form of Bacula commands, thus allowing you to restore your system without having access to the Catalog.

The basic unit is called a "job," which consists of one client and one set of files, the level of backup, what is being done (backing up, migrating, restoring), and so forth. Bacula supports the concept of a "media pool," which is a set of volumes (i.e., disk, tape). With labeled volumes, it can easily match the external labels on the medium (e.g., tape) as well as prevent accidental overwriting of that medium. It also supports backing up to a single medium from multiple clients, even if they are on different operating systems.

The Bacula website is not as fancy as Amanda's, but I found it more useful because the details about how the program works are much more accessible, and the information is more up to date.

The Right Fit
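These components come together in the Director's configuration. The following job definition is only an illustrative sketch (the resource names and values are hypothetical, and a complete bacula-dir.conf also needs matching Client, FileSet, Schedule, Storage, Pool, and Messages resources):

```
# Sketch of a Job resource in bacula-dir.conf (hypothetical names)
Job {
  Name = "nightly-www"      # one client, one set of files, one level
  Type = Backup             # what is being done: Backup, Restore, Migrate
  Level = Incremental
  Client = www-fd           # the File daemon on the machine to back up
  FileSet = "Web Content"
  Schedule = "Nightly"
  Storage = tape-sd         # the Storage daemon that owns the media
  Pool = Weekly             # media pool: a set of labeled volumes
  Messages = Standard
  Priority = 10             # jobs can also be started on a priority basis
}
```

One such Job resource corresponds to the "basic unit" described later in the article: one client, one fileset, one backup level.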
Although I only skimmed the surface of these products, this article should give you a good idea of what is possible in a backup application. Naturally, each product has many more features than I looked at, so if any of these products piqued your interest, take a look at its website to see everything that product has to offer.
Figure 3: Areca backup.

Info
[1] luckyBackup: [http://luckybackup.sourceforge.net]
[2] Amanda: [http://www.amanda.org]
[3] Zmanda: [http://www.zmanda.com]
[4] Areca Backup: [http://www.areca‑backup.org]
[5] Bacula: [http://www.bacula.org]
Wherever two or more computers need to access the same set of data, Linux and Unix systems have multiple competing approaches. (For an overview of the various technologies, see the "Shared Filesystems" box.) In this article, I take a close look at OCFS2, the Oracle Cluster File System shared disk filesystem [1]. As the name suggests, this filesystem is mainly suitable for cluster setups with multiple servers.

Before you can set up a cluster filesystem based on shared disks, you need to look out for a couple of things. First, the administrator needs to establish the basic framework of a cluster, including stipulating the computers that belong to the cluster, how to access it via TCP/IP, and the cluster name. In OCFS2's case, a single ASCII file is all it takes (Listing 1). The second task to tackle with a cluster filesystem is that of controlled and orderly access to the data with the use of file locking to avoid conflict situations. In OCFS2's case, the Distributed Lock Manager (DLM) prevents filesystem inconsistencies. Initializing the OCFS2 cluster automatically launches the DLM, so you don't need to configure this separately. However, the best file locking is worthless if the computer writing to the filesystem goes haywire. The only way to prevent such computers from writing is fencing.
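The single ASCII file in question, /etc/ocfs2/cluster.conf, uses a simple stanza syntax. As an illustrative sketch (the node names and IP addresses here are hypothetical, not taken from the article's listing), a two-node configuration might look like this:

```
cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.101
        number = 0
        name = node1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.102
        number = 1
        name = node2
        cluster = ocfs2
```

Each node stanza names one cluster member and the TCP port the cluster stack uses for its communication.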
Getting Started
Figure 1: Cluster configuration with the ocfs2console GUI tool.
As I mentioned earlier, OCFS2 is a cluster filesystem based on shared disks. The range of technologies that can provide a shared disk spans from expensive SAN over Fibre Channel and iSCSI to low-budget DRBD [7]. In this article, I will use iSCSI and NDAS (Network Direct Attached Storage). The second ingredient in the OCFS2 setup is computers with an OCFS2-capable operating system. The best choices here are Oracle's Enterprise Linux, SUSE Linux Enterprise Server, openSUSE, Red Hat Enterprise Linux, and Fedora.

The software suite for OCFS2 comprises the ocfs2-tools and ocfs2console packages and the ocfs2‑`uname ‑r` kernel modules. Typing ocfs2console launches a graphical interface in which you can create the cluster configuration and distribute it over the nodes involved (Figure 1). However, you can just as easily do this with vi and scp. Table 1 lists the actions the graphical front end supports and the equivalent command-line tools.

After creating the cluster configuration, /etc/init.d/o2cb online launches the subsystem (Listing 2). The init script loads the kernel modules and sets a couple of defaults. The man page for mkfs.ocfs2 provides a full list of options, the most important of which are covered by Table 2. Once you have created the filesystem, you need to mount it. The mount command works much like that on unclustered filesystems (Figure 2). When mounting and unmounting OCFS2 volumes, you can expect a short delay: During mounting, the executing machine needs to register with the DLM. In a similar fashion, the DLM resolves any existing locks or manages them on the remaining systems in case of a umount. The documentation points to various options you can set for the mount operation.

Table 1: ocfs2console Actions and Command-Line Equivalents
Function                   GUI Menu              CLI Tool
Mount                      Mount                 mount.ocfs2
Unmount                    Unmount               umount
Create                     Format                mkfs.ocfs2
Repair                     Check                 fsck.ocfs2
Repair                     Repair                fsck.ocfs2
Change name                Change Label          tunefs.ocfs2
Maximum number of nodes    Edit Node Slot Count  tunefs.ocfs2

Table 2: Important Options for mkfs.ocfs2
Option  Purpose
b       Block size
C       Cluster size
L       Label
N       Maximum number of computers with simultaneous access
J       Journal options
T       Filesystem type (optimization for many small files or a few large ones)

Listing 1: /etc/ocfs2/cluster.conf
node: ip_port = 7777
If OCFS2 detects an error in the data structure, it will default to read-only mode. In certain situations, a reboot can clear this up; the errors=panic mount option handles this. Another interesting option is commit=seconds. The default value is 5, which means that OCFS2 writes the data out to disk every five seconds. If a crash occurs, a consistent filesystem can be guaranteed – thanks to journaling – and only the work from the last five seconds will be lost. The mount option that specifies the way data are handled for journaling is also important here. The latest version lets OCFS2 write all the data out to disk before updating the journal; data=writeback forces the predecessor's mode.

Inexperienced OCFS2 admins might wonder why the OCFS2 volume is not available after a reboot despite an entry in /etc/fstab. The init script that comes with the distribution, /etc/init.d/ocfs2, makes the OCFS2 mount resistant to reboots. Once enabled, this script scans /etc/fstab for OCFS2 entries and integrates these filesystems.

Just as with ext3/4, the administrator can modify a couple of filesystem properties after mounting the filesystem without destroying data. The tunefs.ocfs2 tool helps with this. If the cluster grows unexpectedly and you want more computers to access OCFS2 at the same time, too small a value for the N option in mkfs.ocfs2 can become a problem. The tunefs.ocfs2 tool lets you change this in next to no time. The same thing applies to the journal size (Listing 4). Also, you can use this tool to modify the filesystem label and enable or disable certain features (see also Listing 8). Unfortunately, the man page doesn't tell you which changes are permitted on the fly and which aren't. Thus, you could experience a tunefs.ocfs2: Trylock failed while opening device "/dev/sda1" message when you try to run some commands on OCFS2.

More Detail

As I mentioned earlier, you do not need to preconfigure the cluster heartbeat or fencing. When the cluster stack is initialized, default values are set for both. However, you can modify the defaults to suit your needs.
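To tie the mount options discussed above together, an /etc/fstab entry for an OCFS2 volume might look like the following sketch (the device, mount point, and exact option mix are illustrative assumptions; _netdev delays mounting until networking is up):

```
# hypothetical fstab entry for an OCFS2 volume
/dev/sda1  /cluster  ocfs2  _netdev,data=ordered,commit=5,errors=panic  0 0
```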
Listing 2: Starting the OCFS2 Subsystem

# /etc/init.d/o2cb online
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
#
# /etc/init.d/o2cb status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold = 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Checking O2CB heartbeat: Not active
#

Shared Filesystems

The shared filesystem family is a fairly colorful bunch. By definition, they all share the ability to grant multiple computers simultaneous access to certain data. The differences are in the way they implement these requirements.

On the one hand are network filesystems, of which the most popular representative in the Unix/Linux camp is the Network Filesystem (NFS) [2]. NFS is available for more or less any operating system and to all intents and purposes only asks the operating system to provide a TCP/IP stack. The setup is also fairly simple. The Andrew filesystem (AFS) is another network filesystem that is available in a free implementation, OpenAFS [3].

On the other hand are cluster filesystems. Before computers can access "distributed" data, they first need to enter the cluster. The cluster setup requires additional infrastructure, such as additional I/O cards, cluster software, and, of course, a configuration. Cluster filesystems are also categorized by the way they store data. Those based on shared disks allow multiple computers to read and write to the same medium. I/O is handled via Fibre Channel ("classical SAN") or TCP/IP (iSCSI). The most popular representatives in the Linux camp here are OCFS2 and the Global Filesystem (GFS2) [4].

Parallel cluster filesystems are a more recent invention. They distribute data over computers in the cluster by striping single files across multiple storage nodes. Lustre [5] and Ceph [6] are popular examples of this technology.

Listing 3: OCFS2 Optimized for Mail Server

# mkfs.ocfs2 ‑T mail ‑L data /dev/sda1
mkfs.ocfs2 1.4.2
Cluster stack: classic o2cb
Filesystem Type of mail
Filesystem label=data
Block size=2048 (bits=11)
Cluster size=4096 (bits=12)
Volume size=1011675136 (246991 clusters) (493982 blocks)
16 cluster groups (tail covers 8911 clusters, rest cover 15872 clusters)
Journal size=67108864
Initial number of node slots: 2
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 0 block(s)
Formatting Journals: done
Formatting slot map: done
Writing lost+found: done
mkfs.ocfs2 successful

History

OCFS2 is a fairly young filesystem. As the "2" in the name suggests, the current version is an enhancement. Oracle developed the predecessor, OCFS, for use in Real Application Clusters for Oracle databases. The new OCFS2 is designed to fulfill the requirements placed on a mature filesystem capable of storing arbitrary data. POSIX compatibility and the typical – and necessary – performance required for databases were further criteria.

After two years of development, the programmers released version 1.0 of OCFS2, and it made its way into the vanilla kernel (2.6.16) just a year later. Version 1.2 became more widespread, with a great deal of support from various Enterprise Linux distributions.

OCFS2 has been available for the major players in the Linux world for some time. This applies to commercial variants, such as SLES, RHEL, or Oracle EL, and to the free Debian, Fedora, and openSUSE systems. Depending on the kernel version, users either get version 1.4, which was released in 2008, or version 1.2, which is two years older. The "OCFS2 Choices" box and Table 3 show you what you need to watch out for.
Figure 2: Unspectacular: the OCFS2 mount process.

Figure 3: Automatic reboot after 20 seconds on OCFS2 cluster errors.
The easiest approach here is via the /etc/init.d/o2cb configure script, which prompts you for the required values – for example, when the OCFS2 cluster should regard a node or network connection as down. At the same time, you can specify when the cluster stack should try to reconnect and when it should send a keep-alive packet. Apart from the heartbeat timeout, all of these values are given in milliseconds. However, for the heartbeat timeout, you need a little bit of math to determine when the cluster should consider that a computer is down. The value represents the number of two-second iterations, plus one for the heartbeat. The default value of 31 is thus equivalent to 60 seconds. On larger networks, you might need to increase all these values to avoid false alarms.

If OCFS2 stumbles across a critical error, it switches the filesystem to read-only mode and generates a kernel oops or even a kernel panic. In production use, you will probably want to remedy this state without in-depth error analysis (i.e., reboot the cluster node). For this to happen, you need to modify the underlying operating system so that it automatically reboots in case of a kernel oops or panic (Figure 3). Your best bet for this on Linux is the /proc filesystem for temporary changes, or sysctl if you want the change to survive a reboot.

Just like any other filesystem, OCFS2 has a couple of internal limits you need to take into account when designing your storage. The number of subdirectories in a directory is restricted to 32,000. OCFS2 stores data in clusters of between 4KB and 1,024KB. Because the number of cluster addresses is restricted to 2^32, the maximum file size is 4PB. This limit is more or less irrelevant because another restriction – the use of JBD journaling – limits the maximum OCFS2 filesystem size to 16TB, which can address a maximum of 2^32 blocks of 4KB.

An active OCFS2 cluster uses a handful of processes to handle its work (Listing 5). DLM-related tasks are handled by dlm_thread, dlm_reco_thread, and dlm_wq. The ocfs2dc, ocfs2cmt, ocfs2_wq, and ocfs2rec processes are responsible for access to the filesystem. o2net and o2hb‑XXXXXXXXXX handle cluster communications and the heartbeat. All of these processes are started and stopped by the init scripts for the cluster framework and OCFS2.

OCFS2 stores its management files in the filesystem's system directory, which is invisible to normal commands such as ls. The debugfs.ocfs2 command lets you make the system directory visible (Figure 4). The objects in the system directory are divided into two groups: global and local (i.e., node-specific) files.

Listing 4: Maintenance with tunefs.ocfs2

# tunefs.ocfs2 ‑Q "NumSlots = %N\n" /dev/sda1
NumSlots = 2
# tunefs.ocfs2 ‑N 4 /dev/sda1
# tunefs.ocfs2 ‑Q "NumSlots = %N\n" /dev/sda1
NumSlots = 4
#
# tunefs.ocfs2 ‑Q "Label = %V\n" /dev/sda1
Label = data
# tunefs.ocfs2 ‑L oldata /dev/sda1
# tunefs.ocfs2 ‑Q "Label = %V\n" /dev/sda1
Label = oldata
#
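The heartbeat arithmetic and the size limits quoted above are easy to double-check; a quick sketch in Python (the formulas simply restate the rules given in the text):

```python
# Quick checks of the OCFS2 numbers quoted in the text.

def heartbeat_timeout(threshold):
    # The dead threshold counts two-second iterations, plus one for the heartbeat.
    return (threshold - 1) * 2

print(heartbeat_timeout(31))             # default threshold of 31 -> 60 seconds

ADDRESSES = 2 ** 32                      # cluster/block addresses are 32 bit

max_file_size = ADDRESSES * 1024 * 1024  # largest cluster size: 1,024KB
print(max_file_size // 1024 ** 5, "PB")  # -> 4 PB

jbd_limit = ADDRESSES * 4096             # JBD: 2^32 blocks of 4KB
print(jbd_limit // 1024 ** 4, "TB")      # -> 16 TB
```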
Figure 4: The metadata for OCFS2 are stored in files that are invisible to the ls command. They can be listed with the debugfs.ocfs2 command.
The first of these groups includes global_inode_alloc, slot_map, heartbeat, and global_bitmap. They are accessible to each node on the cluster; inconsistencies are prevented by a locking mechanism. The only programs that access global_inode_alloc are those for creating and tuning the filesystem. To increase the number of slots, it is necessary to create further node-specific system files.

What Else?

When you make plans to install OCFS2, you need to know which version you will be putting on the new machine. Although the filesystem itself – that is, the structure on the medium – is downward compatible, mixed operations with OCFS2 v1.2 and OCFS2 v1.4 are not supported. The network protocol is to blame for this. The developers enabled a tag in the active protocol version so that future OCFS2 versions would be downward compatible through the network stack. This comes at the price of incompatibility with v1.2. Otherwise, administrators have a certain degree of flexibility when mounting OCFS2 media. OCFS2 v1.4 computers will understand the data structure of v1.2 and mount it without any trouble. This even works the other way around: If the OCFS2 v1.4 volume does not use the newer features in this version, you can use an OCFS2 v1.2 computer to access the data.

Debugging

A filesystem has a number of potential issues, and the added degree of complexity in a cluster filesystem doesn't help. From the viewpoint of OCFS2, things can go wrong in three different layers – the filesystem structure on the disk, the cluster configuration, or the cluster infrastructure – or even a combination of the three. The cluster infrastructure includes the network stack for the heartbeat, cluster communications, and possibly media access. Problems with Fibre Channel (FC) and iSCSI also belong to this group.

For problems with the cluster infrastructure, you can troubleshoot just as you would for normal network, FC, or iSCSI problems. Problems can also occur if the cluster configuration is not identical on all nodes. Armed with vi, scp, and md5sum, you can check this and resolve the problem. The alternative – assuming the cluster infrastructure is up and running – is to synchronize the cluster configuration on all of your computers by updating the configuration with ocfs2console.

It can be useful to take the problematic OCFS2 volume offline – that is, to unmount it and restart the cluster service on all of your computers by giving the /etc/init.d/o2cb restart command. You can even switch the filesystem to a kind of single-user mode with tunefs.ocfs2.

Listing 6: Debugging with mounted.ocfs2

# grep ‑i ocfs /proc/mounts |grep ‑v dlm
# hostname
testvm2.seidelnet.de
# tunefs.ocfs2 ‑L olddata /dev/sda1
tunefs.ocfs2: Trylock failed while opening device "/dev/sda1"
# mounted.ocfs2 ‑f
Device     FS     Nodes
/dev/sda1  ocfs2  testvm
#

Listing 7: Restoring Corrupted OCFS2 Superblocks

# mount /dev/sda1 /cluster/
mount: you must specify the filesystem type
# fsck.ocfs2 /dev/sda1
fsck.ocfs2: Bad magic number in superblock while opening "/dev/sda1"
# fsck.ocfs2 ‑r1 /dev/sda1
[RECOVER_BACKUP_SUPERBLOCK] Recover superblock information from backup block#262144? <n> y
Checking OCFS2 filesystem in /dev/sda1:
  label:
  uuid:               31 18 de 29 69 f3 4d 95 a0 99 a7 23 ab 27 f5 04
  number of blocks:   367486
  bytes per block:    4096
  number of clusters: 367486
  bytes per cluster:  4096
  max slots:          2
/dev/sda1 is clean. It will be checked after 20 additional mounts.
#

Table 3: New Features in OCFS2 v1.4
Feature               Description
Ordered journal mode  OCFS2 writes data before metadata.
Flexible allocation   OCFS2 now supports sparse files – that is, gaps in files. Additionally, preallocation of extents is possible.
Inline data           OCFS2 stores the data from small files directly in the inode and not in extents.
Clustered flock()     The flock() system call is cluster capable.

Listing 8: Enabling/Disabling OCFS2 v1.4 Features

# tunefs.ocfs2 ‑Q "Incompatible: %H\n" /dev/sda1
Incompatible: sparse inline‑data
# tunefs.ocfs2 ‑‑fs‑features=nosparse /dev/sda1
# tunefs.ocfs2 ‑Q "Incompatible: %H\n" /dev/sda1
Incompatible: inline‑data
# tunefs.ocfs2 ‑‑fs‑features=noinline‑data /dev/sda1
# tunefs.ocfs2 ‑Q "Incompatible: %H\n" /dev/sda1
Incompatible: None
#
# tunefs.ocfs2 ‑‑fs‑features=sparse,inline‑data /dev/sda1
# tunefs.ocfs2 ‑Q "Incompatible: %H\n" /dev/sda1
Incompatible: sparse inline‑data
#

OCFS2 Choices

Administrators will basically come across two versions of OCFS2: version 1.2 or 1.4. As regards the data structure on disk, the two versions are compatible; however, this does mean doing without the newer features in v1.4. The documentation lists 10 significant differences between versions 1.2 and 1.4. Table 3 lists the most interesting of these. No matter which version you decide on, you should always watch for a couple of things.

The mkfs.ocfs2 supplied with version 1.4 automatically enables all the new features, thus effectively preventing OCFS2 v1.2 machines from accessing the filesystem. To change this, use tunefs.ocfs2 to disable the new functions (Listing 8). An easier approach is to create the filesystem with the ‑‑fs‑feature‑level=max‑compat option set. tunefs.ocfs2 will help you migrate from version 1.2 to 1.4.
Side Effect

All the important filesystem structure data are contained in the superblock. Just like other Linux filesystems, OCFS2 creates backup copies of the superblock; however, the approach the OCFS2 developers took is slightly unusual. OCFS2 creates a maximum of six copies at non-configurable offsets: 1GB, 4GB, 16GB, 64GB, 256GB, and 1TB. Needless to say, OCFS2 volumes smaller than 1GB (!) don't have a copy of the superblock. To be fair, mkfs.ocfs2 does tell you this when you generate the filesystem. You need to watch out for the Writing backup superblock: ... line.

A neat side effect of these static backup superblocks is that you can reference them by number during a filesystem check. The example in Listing 7 shows a damaged primary superblock that is preventing mounting and a simple fsck.ocfs2 from working; the ‑r1 option tells fsck.ocfs2 to recover the superblock information from the first backup copy.

Because OCFS2 is simpler and less complex than other cluster filesystems, it is well worth investigating.

Info
[1] OCFS2: [http://oss.oracle.com/projects/ocfs2/]
[2] First NFS RFC: [http://tools.ietf.org/html/rfc1094]
[3] OpenAFS: [http://www.openafs.org/]
[4] GFS: [http://sources.redhat.com/cluster/gfs/]
[5] Lustre: [http://wiki.lustre.org]
[6] Ceph: [http://ceph.newdream.net/]
[7] DRBD: [http://www.drbd.org/]

The Author

Udo Seidel is a teacher of math and physics and has been an avid supporter of Linux since 1996. After completing his PhD, he worked as a Linux/Unix trainer, system administrator, and senior solutions engineer. He now works as the head of a Linux/Unix team for Amadeus Data Processing GmbH in Erding, Germany.
The many approaches to managing remote computers include VNC, NoMachine, and SSH. Synergy is a clever tool that does a bit of lateral thinking and connects multiple PCs to create a virtual desktop. By Florian Effenberger

To operate Synergy, you need at least two PCs, each with its own operating system, monitor, and functional network card. The software supports Windows 95 through Windows 7, Mac OS X as of version 10.2, and Linux with a current X server. Prebuilt packages for Windows and Mac OS X are available from the Synergy homepage [1]. An RPM file is available for Linux and can be installed on most popular distributions with tools such as Alien [2], if needed. Some distributions also offer prebuilt packages; for example, Ubuntu Universe contains a package called synergy.

Test Case

The administrator's workplace comprises a large desktop system running Ubuntu and, on the right, a small notebook running Vista. To avoid constantly switching between keyboards, the administrator has decided to use Synergy. The admin will work mainly on the large PC, which is the Ubuntu system. In Synergy-speak, this is referred to as the control system; the administrator will use the keyboard and mouse on this server. The other devices are clients.

Configuration

Before you start using Synergy, you need to configure it by editing the /etc/synergy.conf or ~/.synergy.conf text file. The elementary unit is a screen: Each computer belonging to a group, whether server or client, is a screen with a precisely defined position – just like the display arrangement in a configuration with multiple monitors. For each computer, you need to enter into the configuration file the name of the screen, its aliases, and its position relative to other devices – in both directions. Listing 1 contains an example with comments for the test case. The Synergy homepage documents many additional options [3].

All options in the configuration file should be in lowercase. Also, make sure you use the correct line breaks, because Synergy is fussy about them and will not use the file if they are wrong. After completing all this work, you can launch the Synergy server on Ubuntu as a normal user by typing synergys. The ‑f parameter will prevent Synergy from disappearing into the background.

QuickSynergy gives you an even more convenient approach to configuration. On Ubuntu, you can download the package from the Universe repository and launch it with Applications | Tools | QuickSynergy after the install. Unfortunately, the program failed to launch a working server during my test.

The Vista client, which I want to control remotely with the Ubuntu system, is even easier to configure. After the installation, you can launch Synergy …
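As an illustrative sketch of the screen-and-link layout described above (the screen names are assumptions based on the test case, not the article's original listing), a minimal ~/.synergy.conf might look like this:

```
section: screens
    # one entry per computer; the server is a screen, too
    ubuntu:
    vista:
end

section: links
    # positions must be declared in both directions
    ubuntu:
        right = vista
    vista:
        left = ubuntu
end

section: aliases
    # vista -> notebook
    vista:
        notebook
end
```

Remember that all options must be lowercase and that the line breaks must be correct, or Synergy will refuse the file.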
Data on Tap

Does your application data take ages to creep off your disk or your network card, even if no noticeable activity is taking place? Tools such as OProfile and SystemTap help you find out why. By Thorsten Scherf
Experienced administrators tend to use tools such as ps, vmstat, or the like when they need statistics for individual subsystems such as the network, memory, or block I/O. These tools can help identify hardware or software bottlenecks, and they are indisputably useful for a general appraisal, but if you want to delve deeper, you need something with more punch.

Again, the standard toolbox offers a couple of utilities. For example, the popular strace traces applications. In the simplest cases, the tool lists all the system calls (syscalls) with their arguments and return codes for a specific application. Setting options allows for highly selective strace output. For example, if you need to investigate whether an application is parsing the configuration file that you painstakingly put together, you can call strace:

strace -e trace=open -o mutt.trace mutt

This command line sends all open syscalls for the Mutt application to the mutt.trace output file. Then, you can easily grep for the configuration file in the results.

Profiling tools such as the popular OProfile [1] take this a step further by giving you details of the performance of individual applications, the kernel, or the complete system (see Figure 1). For this to happen, OProfile accesses the CPU performance counters on state-of-the-art CPUs.
… a counter (500), and an optional mask (0xc0). The counter defines the accuracy of a profile. The lower the value, the more often the event will be queried. Special properties of an event are available by querying with the mask. For example, the L2_RQSTS event tells you how many requests have been made to the CPU's L2 cache. When called with a mask of 0xc0, OProfile returns the value for all the available …

CPU: Core 2, speed 2401 MHz (estimated)
Counted L2_RQSTS events (number of L2 cache requests) with a unit mask of 0x7f (multiple flags) count 500
samples  %        image name  symbol name
414      10.8377  mutt        imap_exec_msgset
162       4.2408  mutt        parse_set
161       4.2147  mutt        mutt_buffer_add
145       3.7958  mutt        mutt_extract_token
126       3.2984  mutt        ascii_strncasecmp
124       3.2461  mutt        imap_read_headers
[...]

SystemTap combines the capabilities of tracing and profiling tools such as strace and OProfile while providing a simple but powerful interface for the user. SystemTap was originally developed for monitoring the Linux kernel, although more recent versions also let you monitor userspace applications. SystemTap builds on the kprobes kernel subsystem. It lets the user insert arbitrary program code before any event in kernel space – for example, at important locations. Because developers know their program better than anybody else, this kind of information is a big help.

SystemTap programs are written in a language that is similar to Awk. A parser checks the script for syntax errors before converting it to the faster C language, which is then loaded as a kernel module (Figure 2). Using the module on another system that doesn't have a compiler is not a problem, though. SystemTap lets you build modules for kernel versions besides the kernel on your own system. Then you can copy the module to the target system and run it with staprun – more on this subject later.

Because all the major Linux distributions support SystemTap, you can easily install it from the standard software repository. The important thing is to install the kernel-debuginfo package along with kernel-devel:

yum install kernel‑debuginfo kernel‑devel systemtap systemtap‑runtime

The latest versions are available from the project's Git repository:

git clone http://sources.redhat.com/git/systemtap.git systemtap

Assuming the installation is successful, you can use the following one-liner to check that SystemTap is working properly:

stap ‑v ‑e 'probe vfs.read {printf("Reading data from disk.\n"); exit()}'

If this accesses the kernel's VFS subsystem, stap will send a message to standard output and terminate. The ubiquitous "Hello World" program for SystemTap is shown in Listing 4.

Listing 4: "Hello World" in SystemTap

#!/usr/bin/stap
probe begin {printf("Hello, world!\n");}
probe timer.sec(5) {exit();}
probe end {printf("Good‑bye, world!\n");}

# stap helloword.stp
Hello, world!
<5 seconds later>
Good‑bye, world!

The "Hello World" example is a great demonstration of the generic structure of a SystemTap script. A script always comprises two parts: an event and a handler. …

Tapsets are easy to integrate with your own scripts. The templates are typically located below /usr/share/systemtap/tapsets. Besides these synchronous events, there are asynchronous events that are not bound to a specific event in the kernel or program code; typically, they are used when you need to create a header or footer for your script. They are also suitable for running specific events multiple times.

Listing 5 shows a simple example with two probes, each with an asynchronous and a synchronous event. The first outputs a header at one-second intervals; the second calls the prebuilt tcp.receive tapset, which is defined in Listing 6. This example shows the extent to which the use of tapsets reduces the complexity of your own scripting. When you launch the script from Listing 5 by typing stap tcpdump.stp, you see the network packets arriving at one-second intervals with various other pieces of information. If you omit timer.s(1) in the first event, the header is only output before the first network packet.

Listing 5: tcpdump via SystemTap

#!/usr/bin/stap

// A TCP dump like example

probe begin, timer.s(1) {
    printf("‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑\n")
    printf("    Source IP        Dest IP  SPort DPort  U A P R S F\n")
    printf("‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑‑\n")
}

probe tcp.receive {
    printf(" %15s %15s %5d %5d %d %d %d %d %d %d\n",
           saddr, daddr, sport, dport, urg, ack, psh, rst, syn, fin)
}

The handler, also known as a body, supports instructions that will be familiar from various programming languages. For example, you can initialize variables and arrays, call functions, and query positional parameters with $ (integer) or @ (string). Of course, you wouldn't want to do without loops (while, until, for, if/else), which give you useful flow control options for the script.
Listing 6: Tapset tcp.stp and a handler – typically
01
probe tcp.receive = kernel.function("tcp_v4_rcv") { preceded by a probe in-
02 iphdr = __get_skb_iphdr($skb)
struction. In this example,
03 saddr = ip_ntop(__ip_skb_saddr(iphdr))
the event is the read.vfs
04 daddr = ip_ntop(__ip_skb_daddr(iphdr))
function and the handler
05 protocol = __ip_skb_proto(iphdr)
is the printf command
06
07 tcphdr = __get_skb_tcphdr($skb)
that outputs text to stdout.
08 dport = __tcp_skb_dport(tcphdr)
The handler is always ex-
09 sport = __tcp_skb_sport(tcphdr) ecuted when the specified
10 urg = __tcp_skb_urg(tcphdr) event occurs. Events can
11 ack = __tcp_skb_ack(tcphdr) be kernel functions, sys-
12 psh = __tcp_skb_psh(tcphdr) calls, or, as in this exam-
13 rst = __tcp_skb_rst(tcphdr) ple, prebuilt tapsets – that
14 syn = __tcp_skb_syn(tcphdr)
is, prebuilt code blocks for
15 fin = __tcp_skb_fin(tcphdr)
specific kernel functions Figure 2: After the syntax of the SystemTap script is checked, the
16
}
and system calls. script is converted to C and loaded as a kernel module.
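To illustrate these constructs, the following sketch (not from the article; the probe point and the 10-second cut-off are arbitrary choices) counts open() system calls per process in an associative array and prints the five busiest processes in a foreach loop:

```systemtap
#!/usr/bin/stap
global opens

probe syscall.open {
    opens[execname()]++        # count calls per process name
}

probe timer.s(10) {
    foreach (name in opens- limit 5)   # sorted by count, descending
        printf("%-20s %d\n", name, opens[name])
    exit()
}
```

The trailing minus in opens- asks foreach to iterate in descending order of the stored values, so the noisiest processes come first.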
Instead of searching through a mass of data for the required information, as is the case with Strace, SystemTap lets you output the information after a specific threshold value is exceeded, or when a specific event occurs. Thanks to the functional scope of the language, the choice of language constructs is more than adequate.

Listing 7 shows another example that uses the vfs.read tapset. The global variable totals is an associative array in this case. It contains the process names and process IDs of all the applications that access the VFS subsystem to read data from disk. The counter is incremented each time this happens.

If you are interested in a specific userspace program, you'll need to install the matching debuginfo package for the application. To make things easy, I will look at the ls tool as an example (Listing 8). [...] If the parameters of a function are also of interest, you can change the call to stap as shown in Listing 9.

Cross-Compiling

If you want to run a SystemTap script on multiple systems, you will probably prefer not to have to install the compiler and the kernel debug information on all of these machines. In fact, you only need to do so on a build system. The target systems only require the systemtap-runtime RPM and the staprun program it contains. The following command creates a prebuilt binary kernel module for the target system:

  stap -r kernel-PAE-2.6.31.12-174.2.22 capt-io.stp -m read-io

The build system also needs the kernel-debuginfo package to match the target system version, and you must ensure that the build and target systems have the same hardware architecture. After creating a kernel module, copy it to the target system and launch it with staprun:

  staprun capt-io.ko

If you want non-root users to load this kernel module, they need to be members of the stapusr group; members of the stapdev group can additionally compile their own scripts.

[...] userspace programs without rebooting the whole system. Thanks to the comprehensive tapset library, this can be done without serious programming skills. Advanced users will enjoy the flexible, Awk-like scripting language that gives them the freedom to create highly complex tracing and profiling scripts. The SystemTap FAQ [4] and the language reference [5] are useful ports of call for more help.

Listing 7: Finding I/O-Intensive Apps

  #!/usr/bin/stap

  global totals;

  probe vfs.read
  {
      totals[execname(), pid()]++
  [...]
          totals[name,pid])
  }

Listing 8: SystemTap Tracing a Userspace App

  stap -e 'probe process("ls").function("*").call {log (pp())}' -c 'ls -l'
  total 20
  -rw-rw-r--. 1 tscherf tscherf 17347 2010-04-12 08:43 systemtap.txt
  process("/bin/ls").function("main@/usr/src/debug/coreutils-7.6/src/ls.c:1225").call
  process("/bin/ls").function("set_program_name@/usr/src/debug/coreutils-7.6/lib/progname.c:35").call
  process("/bin/ls").function("human_options@/usr/src/debug/coreutils-7.6/lib/human.c:462").call
  process("/bin/ls").function("clone_quoting_options@/usr/src/debug/coreutils-7.6/lib/quotearg.c:99").call
  process("/bin/ls").function("xmemdup@/usr/src/debug/coreutils-7.6/lib/xmalloc.c:107").call
  process("/bin/ls").function("xmalloc@/usr/src/debug/coreutils-7.6/lib/xmalloc.c:43").call
  process("/bin/ls").function("get_quoting_style@/usr/src/debug/coreutils-7.6/lib/quotearg.c:110").call
  process("/bin/ls").function("clone_quoting_options@/usr/src/debug/coreutils-7.6/lib/quotearg.c:99").call
  process("/bin/ls").function("free_pending_ent@/usr/src/debug/coreutils-7.6/src/ls.c:1132").call
  process("/bin/ls").function("close_stdout@/usr/src/debug/coreutils-7.6/lib/closeout.c:107").call
  process("/bin/ls").function("close_stream@/usr/src/debug/coreutils-7.6/lib/close-stream.c:57").call
  process("/bin/ls").function("close_stream@/usr/src/debug/coreutils-7.6/lib/close-stream.c:57").call

Listing 9: Parameter Tracing

  stap -e 'probe process("ls").function("clone_quoting_options").call {log (probefunc() . " " . $$parms) }'

The Author

Thorsten Scherf is a Senior Consultant for Red Hat EMEA. You can meet him as a speaker at conferences. He is also a keen marathon runner whenever time permits.

Info

[1] [http://oprofile.sourceforge.net/news]
[2] Kernel debuginfo information: [http://fedoraproject.org/wiki/StackTraces#What_are_debuginfo_rpms.2C_and_how_do_I_get_them.3F]
[3] SystemTap project homepage: [...]
It's a Wrap

Bundle your custom apps in a Debian package

Get standard scripts and custom applications into the cloud with the Debian packaging system. By Dan Frost

You've got a cloud. It's great. You can scale, and you've got redundancy. But you have about 20 scripts for a bunch of tasks (e.g., one for when an instance is booted up and another for when its IP changes), and these scripts aren't getting any shorter; they're getting better and longer. If you want to manage them in your favorite versioning software (which I hope is Git, but might be something else), how do you get that onto the new instances simply?

Enter the not-new-at-all technology of Debian packages. They are straightforward to use across any Debian-based Linux and simple to create, and they provide an ideal way of containing and releasing your scripts.

In this article, I'll show how to create Debian packages and how to install them (which you probably already know). And, I'll explain how the process will make you feel more comfortable about pushing changes live across your cloud. I talked to cloudy people about how to get code onto new instances, and I tried lots of different things, but the Debian package is such a solid, reliable format that I just had to share it.

Debian Packages

Debian packages are simply archives that are very easy to install, usually in one line. If you've ever worked with packages in Ubuntu or any other Debian-related Linux, you've probably needed to download a package from an online source and install it.

On the inside, a Debian package is an archive of binary files, scripts, and any other resources an application needs, plus a handful of control files that the various command-line tools use to install the package. Because this standard package format is so easy to install on any Debian-based Linux, it's a great way to get standard scripts and custom applications into the cloud. Often you need a few lines to configure a new instance or to connect the instance to the rest of your cloud, and storing all those scripts in a package keeps things very neat. [...] testing and development servers all sit inside one Debian package.

The parts of a Debian package I'm most interested in are the control files, which live in a directory called DEBIAN. Control files tell Debian all about what the package contains, what it's called, the version, and so on. The second part is the code, which I'll look at in detail later.

Creating the Package

To show this process at work, I'll create a simple package. Suppose I need a simple page that can sit on your server until an application is installed. Good practice dictates that your instances should fail nicely, so if you start 10 instances when your app gets 6 million tweets, you at least want them to deliver a nice page before they're ready to do business. To begin:

- Create a directory called myserver and the directories and files inside it:

    ./DEBIAN/
    ./var/www/index.html

- Put the code shown in Listing 1 in ./DEBIAN/control.
- Put the file shown in Listing 2 in ./var/www/index.html.

If the path to the index.html file looks familiar, it's because the file structure inside your package mirrors exactly the structure in the target instance. If you want a file in /var/some/where/here, just create that path inside your package project.

Once you've created this amazing page, package it up with:

  $ dpkg-deb --build myserver
  dpkg-deb: building package `myserver' in `myserver.deb'

When you look in the directory, you'll see a file called myserver.deb. Now that your project is all packaged up, you can download and install it on an instance:

  wget -O myserver.deb http://mybucket.s3.amazonaws.com/myserver.deb
  dpkg -i myserver.deb

After running this command, you'll find the HTML file sitting there (Listing 3).

A Cloudy Package

A tiny HTML file isn't all that useful in the cloud, so I'll look at something a bit more useful. Server configuration can be set from the Debian package simply by placing your preferred configuration file in the package:

  ./etc/apache2/conf.d/our-config.conf

As long as Apache is configured to include this file (which, in Ubuntu, it often is), it will take effect right away. Although you might want to control this with tools such as Puppet after deployment, starting the instance with a good configuration will help keep the environment sane from the outset.

Cloud hosting becomes difficult when you use strange configurations – creating exceptions for some apps or generally working against the grain (e.g., using Tomcat's configuration style and Apache's config directories). Avoid customizing the environment too much because it will mean extra maintenance in the future and could limit how you can scale.

Another common script for cloud servers will run tasks at certain points in the instance's life cycle. For instance, a service such as Scalr will run scripts on various events, such as OnHostUp, OnHostInit, and OnIPAddressChanged. You can create some scripts for these events in your Debian package:

  ./usr/local/myserver/bin/on-host-up.sh
  ./usr/local/myserver/bin/on-ip-address-changed.sh

[...]

  cd /var/www/
  wget -O tmp.tgz http://mybucket.s3.amazonaws.com/website.tgz

[...] interaction. Everything has to run automatically when new instances start up, and you really don't want your script waiting for a human that doesn't exist.

Next, package your project into a .deb file and place it somewhere public from which you can download it. This might be where you host, but it is much better to put it somewhere resilient, like S3 [2]. Then, log in to Scalr and add the following lines to a new script that will run on the event OnHostUp (Figure 1):

  # install the package
  wget -O myserver.deb http://mybucket.s3.amazonaws.com/myserver.deb
  dpkg -i myserver.deb
  # run the script
  /usr/local/myserver/bin/on-host-up.sh

Save the Scalr script under the name of the event that you want to trigger it and go to the farm configuration.

Listing 1: Control Files

  Package: myserver
  Version: 0.0.1
  Section: server
  Priority: optional
  Architecture: all
  Essential: no
  Installed-Size: 1024
  Maintainer: Dan Frost [dan@3ev.com]
  Description: My scripts for running stuff in the cloud

Listing 2: Message Page

  <html>
  <head>
  <title>We're getting there...</title>
  </head>
  [...]

Listing 3: HTML File

  $ cat /var/www/index.html
  [...]

Figure 1: Creating a new script.
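The steps above can be condensed into a short shell session. This is a sketch rather than a definitive recipe: the tree and control fields mirror Listings 1 and 2, and the dpkg-deb call is guarded because the tool is only present on Debian-based systems.

```shell
# Recreate the package layout from the article
mkdir -p myserver/DEBIAN myserver/var/www

# Control file, field for field as in Listing 1
cat > myserver/DEBIAN/control <<'EOF'
Package: myserver
Version: 0.0.1
Section: server
Priority: optional
Architecture: all
Essential: no
Installed-Size: 1024
Maintainer: Dan Frost [dan@3ev.com]
Description: My scripts for running stuff in the cloud
EOF

# Holding page (Listing 2, condensed)
cat > myserver/var/www/index.html <<'EOF'
<html><head><title>We're getting there...</title></head></html>
EOF

# Build the .deb only where dpkg-deb exists
if command -v dpkg-deb >/dev/null 2>&1; then
    dpkg-deb --build myserver        # produces myserver.deb
fi
```

Because the tree mirrors the target filesystem, adding another script is just a matter of creating the path under myserver/ and rebuilding.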
Now you can add your neatly organized scripts without having to edit them via a web interface (Figure 2).

If you have a team working on your cloud hosting, you can even start using standard code management, such as Git or SVN, to version your cloud environment's bootup and configuration scripts.

A second event script, which is called each time the IP address changes, would typically update Dynamic DNS with a one-liner (you'll need to set up your Dynamic DNS account first):

  curl 'http://www.dnsmadeeasy.com/servlet/updateip?username=myuser&password=mypassword&id=99999999&ip=123.231.123.231'

Once you've placed this code in the script on-ip-address-changed.sh, simply package it up into your .deb file, upload it to S3 again, and start a new instance. With this approach, testing small changes takes a little longer, but because the scripts are all in a .deb, you can test them more easily outside the cloud.

The Package in Production

Everything thus far might feel a bit heavy-handed. I put a lot of effort into getting a short script up onto a cloud instance. But suppose you have a running server farm, and you need to update some scripts across the farm. Several cloud services let you edit scripts via a web interface, which is fine up to a point, but beyond a few lines, you will start pining for Emacs or your favorite editor. A custom .deb package makes it easy to create and test the script on local machines or a development cloud before uploading the final version to the production environment. Installing the script on instances is simply a matter of dpkg -i myserver.deb.

The ability to test server configuration in the cloud, for the cloud, is really important. If you've been running nice chunky servers for years, you wouldn't make changes to them unless you were 100 percent sure, but with cloud computing, you can prototype your configurations and settle your nerves before putting things live. When your cloud is running, you will want every opportunity to test the scripts, so being able to install, run, and test them on any instance is valuable.

After Installation: Uninstall

So far, Debian packages might just look like glorified tarballs, so why not just tar up your scripts? Well … they're better than that. Two hooks are provided: post-install and pre-uninstall. Once your Debian package's files have been copied to the filesystem, the post-install script, ./DEBIAN/postinst, is run; when you uninstall, Debian runs ./DEBIAN/prerm before removing your files. With these scripts, you can install software, start services, and call a monitoring system to tell it exactly what's going on with the new instance.

Figure 2: Adding a script.

For example, open ./DEBIAN/postinst and add something like:
  http://my-monitor.example.com/?event=installed-apache&server=$SERVER_NAME

How you keep your monitoring systems informed depends on what you're running, but you can add arbitrary commands here to keep yourself happy. A more typical post-install task is symlinking your scripts into the standard path:

  ln -s /usr/local/myserver/bin/on-host-up.sh /usr/bin/on-host-up.sh

Anything that gets your scripts working, such as starting any services that are provided or used by your scripts, should be done in the post-install script:

  service apache2 start
  service my-monitor start

However, this is not where you install your web app's code, nor is it where you grab the latest data. Stick to getting the helper scripts running in the Debian hooks and installing your site from the scripts inside your package.

Remember that the key to cloud computing is scaling without friction. Your scripts must install themselves without the need for checking the OS afterward, so use the hooks to leave everything ready to go live.

Why It All Makes Sense

To finish, I'll look at a real-world example. Suppose you want to change your proxy from Apache to HAProxy, and you want your web servers to host some extra code because this makes your app more scalable. Instead of changing to HAProxy on the instance, you create the script that installs and configures HAProxy, but you do this on your local machine. When you're happy, you commit this into your Debian package and install it on some cloud instances for testing. When your HAProxy scripts all work fine, simply push your Debian package, along with the new script, up to S3. Next, just terminate your HAProxy instance and wait for a new one to replace it that will run the new scripts, installing and running HAProxy instead of whatever you had before.

To get extra code onto an instance, just pull it by using SVN, Git, or Wget; put it into place; and the work is done. So, if you have a huge repository of PDFs that never change or a massive archive database, your scripts can copy this down to instances so that each runs independently.

Anything you can do on the command line can be scripted, and packing up your common tasks into a Debian package means your best scripts and best config will be used on all of your instances.

Finally, remember that being scalable means being friction-free. The people I work with use Debian packages, because if the package installs, we've won half the battle: Our scripts are on the instance. It's a standard and convenient way of deploying, and it works every time.

Info

[1] "Scaling the Cloud with Scalr" by Dan Frost, Linux Magazine, August 2010, pg. 20
[2] Amazon Simple Storage Service (S3): [http://aws.amazon.com/s3/]
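To make the hook mechanism from the "After Installation: Uninstall" section concrete, here is a minimal sketch of a postinst/prerm pair for the myserver package. The symlink mirrors the article's on-host-up.sh example; the exact script bodies are illustrative assumptions, not the article's own files.

```shell
# Sketch: a matching postinst/prerm pair for the myserver package
mkdir -p myserver/DEBIAN

cat > myserver/DEBIAN/postinst <<'EOF'
#!/bin/sh
set -e
# symlink the helper script into the standard path
ln -sf /usr/local/myserver/bin/on-host-up.sh /usr/bin/on-host-up.sh
exit 0
EOF

cat > myserver/DEBIAN/prerm <<'EOF'
#!/bin/sh
set -e
# undo what postinst did, before dpkg removes the package files
rm -f /usr/bin/on-host-up.sh
exit 0
EOF

# maintainer scripts must be executable or dpkg-deb refuses to build
chmod 755 myserver/DEBIAN/postinst myserver/DEBIAN/prerm
```

Keeping prerm as the exact inverse of postinst is what makes the package uninstall cleanly, which a plain tarball never does.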
Container Service

The virtualization technology market is currently concentrating on hypervisor-based systems, but hosting providers often use an alternative technology. Container-based solutions such as OpenVZ/Virtuozzo are the most efficient way to go if the guest and host systems are both Linux. By Thomas Drilling
Hypervisor-based virtualization solutions are all the rage. Many companies use Xen, KVM, or VMware to gradually abstract their hardware landscape from its physical underpinnings. The situation is different if you look at leased servers, however. People who decide to lease a virtual server are not typically given a fully virtualized system based on Xen or ESXi, and definitely not a root server. Instead, they might be given a resource container, which is several magnitudes more efficient for Linux guest systems and also easier to set up and manage. A resource container can be implemented with the use of Linux VServer [1], OpenVZ [2], or Virtuozzo [3].

Benefits

Hypervisor-based virtualization solutions emulate a complete hardware layer for the guest system. Ideally, any operating system including applications can be installed on the guest, which will seem to have total control of the CPU, chipset, and peripherals. If you have state-of-the-art hardware (a CPU with a virtualization extension – VT), the performance is good.

However, hypervisor-based systems do have some disadvantages. Because each guest installs its own operating system, it will perform many tasks in its own context just like the host system does, meaning that some services might run multiple times. This can affect performance because of overlapping – one example of this being cache strategies for the hard disk subsystem. Caching the emulated disks on the guest system is a waste of time because the host system already does this, and emulated hard disks are actually just files on the filesystem.

Parallel Universes

Resource containers use a different principle on the basis that – from the application's point of view – every operating system comprises a filesystem with installed software, space for data, and a number of functions for accessing devices. For the application, all of this appears to be a separate universe. A container has to be designed so that the application thinks it has access to a complete operating system with a run-time environment.

From the host's point of view, containers are simply directories. Because all the guests share the same kernel, they can only be of the same type as the host operating system or its kernel. This means a Linux-based container solution like OpenVZ can only host Linux guests. From a technical point of view, resource containers extend the host system's kernel. Adding an abstraction layer then isolates the containers from one another and provides resources, such as CPU cycles, memory, and disk capacity (Figure 1).

Installing a container means creating a sub-filesystem in a directory on the host system, such as /var/lib/vz/gast1; this is the root directory for the guest. Below /var/lib/vz/gast1 is a regular Linux filesystem hierarchy but without a kernel, just as in a normal chroot environment.
[...] live migration, checkpointing, and restoring (see the box "Live Migration, Checkpointing and Restoring"). Incidentally, the host is referred to as the hardware node in OpenVZ-speak.

To be able to use OpenVZ, you will need a kernel with OpenVZ patches. One problem is that the current stable release of OpenVZ is still based on kernel 2.6.18, and what is known as the super stable version is based on 2.6.9. It looks like the OpenVZ developers can't keep pace with official kernel development. Various distributions have had an OpenVZ kernel, such as the last LTS release (v8.04) of Ubuntu, on which this article is based (Figure 2).

Live Migration, Checkpointing and Restoring

OpenVZ containers can be shifted from one physical host to another during operations (live migration). Ideally, the user will not even notice this process. However, the host environment must be configured to support live migration from a technical point of view. In other words, both virtual environments must reside on the same subnet, and the data transmission rate must be high enough. Additionally, the target virtual environment (VE) must have sufficient hard disk space. If these conditions are fulfilled, the following command starts the migration:

  vzmigrate --online target-IP VEID

Target IP is the network address of the host to which you want to migrate the VE with the ID VEID. Of course, the vzmigrate tool supports a plethora of different options (e.g., for migrating over secure connections). The exact syntax and other examples of its use are discussed in [12]. Additionally, OpenVZ can create what it refers to as checkpoints (snapshots) of VEs: A checkpoint freezes the current state of the VE and saves it in a file. The checkpoint can be created from within the host context with the vzctl chkpnt VEID command. The checkpoint file can be used later to restore the VE on another OpenVZ host with vzctl restore VEID.
The container abstraction layer makes sure that the guest system sees its own process namespace with separate process IDs. On top of this, the kernel extension that provides the interface is required to create, delete, shut down, and assign resources to containers. Because the container data are accessible on the host filesystem, resource containers are easy to manage from within the host context.

Efficiency

Resource containers are magnitudes more efficient than hypervisor systems because each container uses only as many CPU cycles and as much memory as its active applications need. The resources the abstraction layer itself needs are negligible. The Linux installation on the guest only consumes hard disk space. However, you can't load any drivers or kernels from within a container. The predecessors of container virtualization in the Unix world are technologies that have been used for decades, such as chroot (Linux), jails (BSD), or Zones (Solaris). With the exception of (container) virtualization in User-Mode Linux [4], only a single host kernel runs with resource containers.

OpenVZ

OpenVZ is the free variant of a commercial product called Parallels Virtuozzo. The kernel component is available under the GPL; the source code for the matching tools is available under the QPL. OpenVZ runs on any CPU type, including CPUs without VT extensions. It supports snapshots of active containers as well as the live migration of containers to a different host.

Ubuntu 9.04 and 9.10 no longer feature OpenVZ, apart from the VZ tools; this also applies to Ubuntu 10.04. If you really need a current kernel on your host system, your only option is to download the beta release, which uses kernel 2.6.32. The option of using OpenVZ and KVM on the same host system opens up interesting possibilities for a free super virtualization solution with which administrators can experiment.

If you are planning to deploy OpenVZ in a production environment, I suggest you keep to the following recommendations: You must disable SELinux because OpenVZ will not work correctly otherwise. Additionally, the host system should only be a minimal system. You will probably want to dedicate a separate partition to OpenVZ and to mount it below /ovz, for example. Besides this, you should have at least 5GB of hard disk space, a fair amount of RAM (at least 4GB), and enough swap space.

Figure 1: In virtualization based on resource containers, the host and guest use the same kernel; therefore, they must be of the same type. This means that a Linux host can only support Linux guests.

Figure 2: openSUSE with Ubuntu: System virtualization with resource containers is an interesting option if you need to host (multiple) Linux guest systems as efficiently as possible on a Linux host system.

Starting OpenVZ

Installing OpenVZ is simple. Users of RPM-based operating systems such as RHEL or CentOS can simply include the Yum repository specified in the quick install manual on the project homepage. Ubuntu 8.04 users will find a linux-openvz meta-package in the multiverse repository, which installs the required OpenVZ kernel, including the kernel modules and header files (Figure 3). At the time of writing, no OpenVZ kernel was available for Ubuntu 10.04. If you are interested in using OpenVZ with a current version of Ubuntu, you will find a prebuilt deb package in Debian's unstable branch. To install, type: [...]

For detailed information, refer to the sysctl section in the quick install guide, which covers providing network access to the guest systems, involving setting up packet forwarding for IPv4 [5]. Then, you need to reboot with the new kernel. If you edit the sysctl settings after rebooting, you can reload them by typing sudo sysctl -p. Typing

  sudo /etc/init.d/vz start

wakes up the virtualization machine. Next, you should make sure all the OpenVZ services are running; this is done easily (on Ubuntu) by issuing:

  sudo sysv-rc-conf --list vz

If the tool is missing, you can type

  sudo apt-get install sysv-rc-conf

to install it. Debian and Red Hat users can run the legacy chkconfig tool. A check of service vz status should now tell you that OpenVZ is running.

Figure 3: Installing OpenVZ from the package sources for Ubuntu 8.04 – the last version of Ubuntu to officially include an OpenVZ kernel. The only package needed for this was the linux-openvz meta-package.

Figure 4: The OpenVZ developers provide container templates for various guest systems; this makes installing a guest system a quick and easy experience. Templates from the community are also available.

Container Templates

OpenVZ users don't need to install an operating system in the traditional sense of the word. The most convenient approach to setting up OpenVZ containers is with templates (i.e., tarballs with a minimal version of the distribution you want to use in the container). Administrators can create templates themselves, although it's not exactly trivial [6]. Downloading prebuilt templates [7] and copying them to the template folder is easier:

  sudo cp path_to_template /var/lib/vz/template/cache

Besides templates provided by the OpenVZ team, the page also offers a number of community templates (Figure 4).

Configuring the Host Environment

The /etc/vz/vz.conf file lets you configure the host environment. This is where you specify the path to the container and template data on the host filesystem, if you prefer not to use the defaults of:

  TEMPLATE=/var/lib/vz/template
  VE_ROOT=/var/lib/vz/root/$VEID
  VE_PRIVATE=/var/lib/vz/private/$VEID

Creating Containers

The vzctl tool, which is only available in the host context, creates containers and handles most management tasks, too. In the following example, I used it to create a new VE based on a template for openSUSE 11.1 that I downloaded: [...]

Network Configuration

The next step is to configure network access for the container. OpenVZ supports various network modes for this. The easiest option is to assign the VEs an IP on the local network/subnet and tell them the DNS server address, which lets OpenVZ create venet devices. All of the following commands must be given in the host context. To do this, you first need to stop the VE and then set all the basic parameters. For example, you can set the hostname for the VE as follows:

  sudo vzctl set VEID --hostname Hostname --save

The --ipadd option lets you assign a local IP address. If you need to install a large number of VEs, use the VEID as the host part of the numeric address:

  sudo vzctl set VEID --ipadd IP-Address --save

The DNS server can be configured using the --nameserver option:

  sudo vzctl set VEID --nameserver Nameserver-address --save

Figure 7: The vzlist command outputs a list of active VEs.

Figure 8: User Bean Counters, a set of configuration parameters, allow the administrator to limit resources for each virtual environment.
OpenVZ Administration

The vzctl tool handles a number of additional configuration tasks. Besides starting, stopping, entering, and exiting VEs, you can use the set option to set a number of operational parameters. Running vzlist in the host context displays a list of the currently active VEs, including some additional information such as the network parameter configuration (Figure 7).

In the VE, you can display the process list in the usual way by typing ps. And, if the package sources are configured correctly, patches and software updates should work in the normal way using apt, yum, or yast, depending on your guest system. For the next step, it is a good idea to enter the VE by typing vzctl enter VEID. Then, you can set the root password, create more users, and assign the privileges you wish to give them; otherwise, you can only use the VEs in trusted environments.

Without additional configuration, the use of VEs is a security risk because only one host kernel exists, and each container has a superuser. Besides this, you need to be able to restrict the resources available to each VE, such as the disk, memory, and CPU cycles, in the host context. OpenVZ has a set of configuration parameters for defining resource limits known as User Bean Counters (UBCs) [8]. The parameters are classified by importance as Primary, Secondary, and Auxiliary. Most of these parameters can also be set with vzctl set (Figure 8). For example, you can enter

  sudo vzctl set 100 --cpus 2

to set the maximum number of CPUs for the VE. The maximum permitted disk space is set by the --diskspace parameter. A colon between two values lets you specify a soft and a hard limit:

  sudo vzctl set 100 --diskspace 8G:10G --quotatime 300

Incidentally, sudo vzlist -o lists the set UBC parameters. Note that some UBC parameters can clash, so you will need to familiarize yourself with the details by reading the exhaustive documentation. To completely remove a container from the system, just type the

  sudo vzctl destroy VEID

command.

Figure 9: Virtual Ethernet devices make the VE a full-fledged member of the network with all its advantages and disadvantages.

Conclusions

Resource containers with OpenVZ offer a simple approach to running virtual Linux machines on a Linux host. According to the developers, the [...]
Network Modes

In many cases, a venet device is all you need in the line of network interfaces in a VE. Each venet device sets up a point-to-point connection to the host context and can be addressed using an IP address from the host context. Venet devices have a couple of drawbacks, however: They don't have a MAC address and thus don't support ARP or broadcasting, which makes it impossible to use DHCP to assign IP addresses. Also, network services like Samba rely on functional broadcasting.

A virtual Ethernet (veth) device solves this problem (Figure 9). These devices are supported by a kernel module that uses vzctl to present a virtual network card to the VE. The vzethdev module sets up two Ethernet devices: one in the host context and one in the VE. The devices can be named individually, and you can manually or automatically assign a MAC address to them. The host-side device can also reside on a bridge to give the VE a network environment that is visible in the host context. Within the container, the administrator can then use Linux tools to configure the network interface with a static address or even use DHCP.

The kernel module is loaded when the OpenVZ kernel boots. You can check that you have it by issuing the sudo lsmod | grep vzethdev command. To configure a veth device in the container, run

sudo vzctl set 101 --netif_add eth0 --save

where eth0 is the interface name in the container context. The device name in the host context defaults to vethVEID. If needed, you can assign MAC addresses and device names explicitly. The device can be listed in the host context in the normal way with ifconfig. The number following the dot (0) refers to the device number; here, this is the first veth device in the container with the VEID of 100:

sudo ifconfig veth100.0

A bridge device is the only thing missing for host network access; to set this up host-side, give the sudo brctl addbr vmbr0 command, then sudo brctl addif vmbr0 veth100.0 to bind it to the veth device, assuming bridge-utils is installed. Host-side you now have the interfaces lo, eth0, venet0, and veth100.0. If the bridge device is set up correctly, brctl show gives you a listing similar to Figure 10. The additional veth device set up here, 100.1, is for test purposes only and is not important to further steps.

Virtual network devices are slightly slower than venet devices. Also, security might be an issue with a veth device – this places a full-fledged Ethernet device in the container context, which the container owner could theoretically use to sniff all the traffic outside the container.
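The veth-plus-bridge steps from this box can be collected into one sequence. This is a dry-run sketch that echoes the commands rather than executing them; VEID 100 and the bridge name vmbr0 follow the box, while address configuration inside the container is left out.

```shell
# Dry-run sketch of the veth/bridge setup described in the box.
# Remove the echo prefixes to run it on a real OpenVZ host.
VEID=100
BRIDGE=vmbr0

echo sudo vzctl set "$VEID" --netif_add eth0 --save   # veth pair; host side becomes veth100.0
echo sudo brctl addbr "$BRIDGE"                       # create the bridge (bridge-utils)
echo sudo brctl addif "$BRIDGE" "veth${VEID}.0"       # attach the host-side veth device
echo sudo ifconfig "$BRIDGE" up                       # bring the bridge up
```

After these steps, brctl show should list veth100.0 as a port of vmbr0, as described above.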
virtualization overhead with OpenVZ is only two to three percent more CPU and disk load: These numbers compare with the approximately five percent quoted by the Xen developers. The excellent values for OpenVZ are the result of the use of only one kernel. The host and guest kernels don't need to run identical services, and caching effects for the host and guest kernels do not interfere with each other. The containers themselves provide a complete Linux environment without installing an operating system. The environment only uses the resources that the applications running in it actually need.

The only disadvantage of operating system virtualization compared with paravirtualization or hardware virtualization is that, apart from the network interfaces, it is not possible to assign physical resources exclusively to a single guest.

Otherwise, you can do just about anything in the containers, including installing packages and providing services. Additionally, setting up the OpenVZ kernel requires just a couple of simple steps, and the template system gives you everything you need to set up guest Linux distributions quickly.

OpenVZ has a head start of several years' development compared with modern hypervisor solutions such as KVM and is thus regarded as mature. Unfortunately, the OpenVZ kernel lags behind vanilla kernel development.

However, if you are thinking of deploying OpenVZ commercially, you might consider its commercial counterpart Virtuozzo. Besides support, there are a number of aspects to take into consideration when using resource containers. For example, hosting providers need to offer customers seamless administration via a web interface, with SSH and FTP, or by both methods; of course, the security concerns mentioned previously cannot be overlooked.

Parallels offers seamless integration of OpenVZ with Plesk and convenient administration tools for, say, imposing resource limits in the form of the GUI-based Parallels Management Console [9] or Parallels Infrastructure Manager [10]. The excellent OpenVZ wiki covers many topics, such as the installation of Plesk in a VE or setting up an X11 system [11]. OpenVZ is the only system that currently offers Linux guest systems a level of performance that can compete with that of a physical system without sacrificing performance to the implementation itself. This makes OpenVZ a good choice for virtualized Linux servers of any kind.

Info

[1] Linux VServer: [http://linux-vserver.org/Welcome_to_Linux-VServer.org]
[2] OpenVZ: [http://wiki.openvz.org/Main_Page]
[3] Virtuozzo: [http://www.parallels.com/de/products/pvc45]
[4] User-Mode Linux: [http://user-mode-linux.sourceforge.net]
[5] OpenVZ quick install guide: [http://wiki.openvz.org/Quick_installation]
[6] Creating your own OpenVZ templates: [http://wiki.openvz.org/Category:Templates]
[7] Prebuilt OpenVZ templates: [http://wiki.openvz.org/Download/template/precreated]
[8] OpenVZ User Bean Counters: [http://wiki.openvz.org/UBC_parameters_table]
[9] Parallels Management Console: [http://www.parallels.com/de/products/virtuozzo/tools/vzmc]
[10] Parallels Infrastructure Manager: [http://www.parallels.com/de/products/pva45]
[11] X11 forwarding: [http://wiki.openvz.org/X_inside_VE]
[12] Live migration: [http://openvz.org/documentation/mans/vzmigrate.8]
The Author

Thomas Drilling has been a freelance journalist and editor for scientific and IT magazines for more than 10 years. With his editorial office team, he regularly writes on the subject of open source, Linux, servers, IT administration, and Mac OS X. In addition to this, Thomas Drilling is also a book author and publisher, a consultant to small and medium-sized companies, and a regular speaker on Linux, open source, and IT security.

Figure 10: This example includes one venet and one veth device in the host context. The latter is physically connected to the host network via a bridge device. The host-side veth bridge looks like a normal Ethernet device (eth0) from the container context.
Virtualization: Virtual Machine Manager

Virtual Windows

In theory, virtualizing all your old servers is a good idea, but managing them won't necessarily become any easier. Virtual Machine Manager gives Windows administrators an easy option. By Björn Bürstinghaus

Service virtualization is no longer just an interesting topic for large corporations and data centers. In fact, virtualization of multiple server systems on a single physical machine has become an affordable option for small and medium-sized businesses, too. With virtualization and the consolidation benefits that it offers, system management also changes. The machines you are managing are only available as virtual entities, and as the number of virtual machines and virtualization hosts continues to rise, administrators need to consider centralizing their management.

Microsoft System Center Virtual Machine Manager 2008 R2 (SCVMM) is a management solution for Hyper-V (R2) hosts, Virtual Server 2005 R2 hosts, and VMware ESX hosts that use the VMware VCenter. SCVMM [1] offers excellent scalability, easy management of hosts and virtual machines, and many benefits for the administrator. A Workgroup Edition is available for deployment in small and medium-sized businesses: If you use a maximum of five hosts, this version is a cost-effective way of managing an unlimited number of virtual machines.

System Requirements

To install SCVMM, you need a 64-bit version of Windows Server 2008 (R2), which you can run as a virtual machine in smaller environments (with a maximum of five hosts). The system on which you install SCVMM must be a member of an Active Directory domain, but you can use it to manage host systems in non-member networks. In this case, you'll need to install the agents manually because automatic installation only works inside the domain. SCVMM relies on Microsoft SQL Server to store data. Depending on the size of your environment, you can use the free SQL Server 2005 Express Edition supplied with the bundle, which has a database size limit of 4GB, or you can use an instance of SQL Server 2005 or SQL Server 2008.

It makes sense to use a separate server for the database in larger environments. You can install the SCVMM

Figure 1: The Virtual Machine Manager startup screen shows a selection of the components to install.
After starting the installation, you'll be given a choice of components to install for SCVMM (Figure 1). The management server, management console, and self-service portal are all installed separately. When you install the management server, you can place the individual modules, such as the database and the library server, on different systems. This arrangement improves performance if you have a large number of hosts and virtual machines.

Before the installation starts, an automatic check is performed to make sure that the hardware and software requirements are fulfilled. If this is the case for all the components, provisioning SCVMM will take less than 20 minutes.

Before installing the self-service portal, you must enable the web server role on Windows Server 2008 (IIS 7.0) or Windows Server 2008 R2 (IIS 7.5), as well as the ASP.NET, IIS 6 metabase compatibility, and IIS 6 WMI compatibility role services. If the role or one of the additional services is not installed, you will see an error message to this effect. Portal access privileges are configured in the management console preferences.

The agents for managing hosts via SCVMM can be installed through the management console or manually. If you use the management console for the install, an automatic check is performed to see whether the host has a hypervisor. If not, the Hyper-V role will be installed automatically on Windows Server 2008 (R2), and Virtual Server 2005 R2 will be installed on Windows Server 2003 (R2).

Microsoft offers a free configuration analyzer for SCVMM; after the install, the analyzer checks whether all the components are installed optimally. Also, you can use the configuration analyzer to check the configuration on remote systems that you will be deploying as hosts and systems intended for P2V conversion. If any issues are detected, you'll be given a detailed description and possible

Figure 3: The Job queue shows modified and outstanding jobs.
evaluation function for intelligent placement.

Libraries and Templates

The library component in SCVMM is a shared directory of virtual hard disks, ISO images, hardware, and guest operating system profiles. Templates automatically provision Windows client and server systems quickly. A template comprises a virtual hard disk and predefined hardware and operating system profiles. The hardware profile lets you specify the minimum requirements for the CPU type or the amount of RAM the virtual machine needs. When a new virtual machine with the specified CPU type is created, SCVMM automatically searches for a host with resources to match the hardware profile requirements. The operating system profile helps automate operating system provisioning. Besides selecting the operating system, you can also configure the administrator password, a license key, the computer name, and the domain membership.

P2V Conversion

SCVMM also converts physical systems to virtual machines on the fly with physical-to-virtual (P2V) migration. For this, simply install a small client on the machine; the client checks the system and displays potential issues before using the volume shadow copy service to create an image. On-the-fly conversion works with client systems as of Windows XP and for server systems as of Windows Server 2003. For older systems, you have only an offline conversion option. After conversion, you can shut down the physical system and boot the system as a virtual machine.

Higher Availability with Clustering

Host clustering is a useful way to guarantee system availability. Instead of expensive SAN memory, the data is provided by cheaper iSCSI solutions. To create a Hyper-V cluster, you need two host systems, both of which access the same SAN or iSCSI storage. Live migration introduced in Hyper-V R2 means you can move a virtual machine between clusters without taking the virtual machine offline. The previous version only supported virtual machine migration if you used the same processor type in both clusters. Although this restriction has not been completely lifted, it only applies to the CPU vendor, thus improving support for a variety of hardware in the cluster and offering more flexibility.

Resource Monitoring

SCVMM can be combined with the System Center Operations Manager 2007 R2 [3] for monitoring host and virtual machine availability. In this case, SCVMM not only uses its own agent to monitor the systems but also provides performance analysis and reporting for a host or virtual machine. The performance and resource optimization (PRO) function built into SCVMM can use Operations Manager 2007 R2 to collect performance data down to the application layer and thus suggest optimization strategies, which are displayed as PRO tips in the management console.

Conclusions

Microsoft System Center Virtual Machine Manager 2008 R2 greatly facilitates the management and administration of homogeneous or heterogeneous virtual infrastructures under Windows. Automated provisioning of new client and server systems can be done in minutes with SCVMM. Thanks to the integration of System Center Operations Manager 2007 R2, SCVMM also directly supports performance and availability monitoring for hosts and virtual machines. Because system management always takes a great deal of your time, whether you have five or 50 host systems, it makes sense to plan a centralized solution for all aspects of virtualization, which is exactly what SCVMM offers.
Info

[1] Microsoft System Center Virtual Machine Manager 2008 R2: [http://support.microsoft.com/kb/974722]
[2] PowerShell: [http://www.microsoft.com/windowsserver2003/technologies/management/powershell/default.mspx]
[3] Microsoft System Center Operations Manager 2007 R2: [http://www.microsoft.com/systemcenter/en/us/operations-manager.aspx]
The Author

Björn Bürstinghaus is a system administrator with simyo GmbH in Düsseldorf, Germany. In his leisure time, he runs Björn's Windows Blog, a blog on Microsoft Windows topics located at [http://blog.buerstinghaus.net].

Figure 6: Using the PowerShell to move virtual machines between hosts.
Remotely Controlled

Teamviewer is an impressive demonstration of how easy remote control across routers and firewalls can be. The popular software is now available for Linux. By Daniel Kottmair

Some 60 million users already have the Teamviewer [1] commercial remote control solution running on Windows and Mac OS X. Because of the many requests from customers, Teamviewer's manufacturer now provides a variant for Linux in version 5. Teamviewer facilitates remote access to other computers across a network. The only requirement is that the machine at the other end is also running Teamviewer. Teamviewer provides all this functionality in a standalone program; special client or server versions are not available.

Teamviewer automatically generates a globally unique ID on each machine. When it is launched, Teamviewer generates a new password that the computer on the opposite end of the connection can use to access the local machine. This scheme prevents anybody who has ever logged in to that machine from doing so again without the owner's authorization. You can keep the newly generated password or define one yourself.

Connections

Remote access without port forwarding works across routers and firewalls thanks to one of the globally distributed Teamviewer servers on the web, which initiates a 256-bit encrypted UDP connection between the two parties. If a proxy server or a firewall with content filtering makes this connection impossible, the traffic is routed via HTTP through the Teamviewer server. The HTTP label in the window header, rather than the UDP label, identifies this kind of connection. If you are worried about using a third-party server, Teamviewer will sell you your own authentication server on request.

Teamviewer will even let you remotely control computers that only have a modem connection. The software vendor improved compression in version 5 to reduce the amount of data crossing the wire. Video, Flash banners, and other applications that permanently change screen content are problematic, but a fast DSL connection will let even those types of applications run at an acceptable speed.

Private users can run the program free of charge, and the vendor offers commercial licenses for commercial use. Teamviewer is available for Windows, Mac OS X, and Linux; any platform can remotely control any other platform. An iPhone client, available after registering for free online, lets you remotely control a computer as well.

The web and iPhone clients are the only versions that can only control, rather than work in both directions. The other variants let you control and be controlled.
Linux Specifics

Teamviewer offers downloads of deb and RPM packages of version 5.0.8206 for Linux, along with an X64 deb package and a simple tarball that you don't even need to install. Teamviewer for Linux is still beta, and the vendor asks for feedback and bug reports.

The program is based on a modified version of Wine, although the vendor has made some Linux-specific changes (e.g., to accelerate reading of X server graphics). Although it uses Wine, it is not just a copy of the Windows version. A native Linux version is not available right now, but the vendor is considering creating one depending on acceptance and popularity of the Wine-based version.

The program offers a plethora of built-in remote control solutions, such as the ability to change direction, reboot, simulate a Ctrl+Alt+Del key press, and transfer files conveniently between two machines. Multiple logins on a single machine are also supported (e.g., for training purposes in which you need to demonstrate something to a group of users). The VoIP and video chat function introduced in version 5 is also useful, and the application relies on free codecs: Speex for audio and Theora for video. Video on Linux only works in the receiving direction right now. V4L-connected Linux webcams are not currently supported by Teamviewer.

The Linux version has a couple of other restrictions: The whiteboard function, which lets users draw on a whiteboard at the same time, and VPN support both fail to provide the goods. The program does not transmit virtual consoles, so you need an X server. The reboot, Ctrl+Alt+Delete key sequence, and Disable Input/Display on remote computer functions all require the remote machine to run Windows – Mac users have a similar problem. And, the same thing applies to changing the resolution and removing the wallpaper (to avoid unnecessary data traffic).

Figure 1: The Teamviewer window at program launch.

In the Lab

The Linux version works fairly well despite its beta status. One thing that always worked during testing – no matter what network the computers used or what firewalls they were hiding behind – was the connection setup. Teamviewer has thus mastered the most important discipline in remote control with flying colors. Also, no version problems emerged; a connection between a v4 Mac client and a v5 Linux client worked without problems.

The program offers three operating modes when it launches (Figure 1): Remote support, Presentation, and File transfer. Remote support lets you remotely control a system, and presentation mode lets you demonstrate an action to one or multiple users on their own machines. File transfer leaves out the administrative and graphic functions and simply sends files to, or retrieves them from, a remote machine. This mode is accessible at any time during normal remote support.

From the Teamviewer startup window, use Extras | Options to change various settings, such as your own computer name, or to assign a fixed password for remote logins. Also, you can specify which privileges you want to grant a remote user accessing a system across the wire. Teamviewer will also accept a whitelist or blacklist of computers that are allowed or not allowed to access your computer.

File Transfers

File transfers occur in a separate window (Figure 2) that features a two-column view, with your own computer on the left and the remote computer on the right. To transfer files, just select them and click the button above the column. This seems a little convoluted; drag-and-drop, or at least double-clicking, to transfer files would make more sense. The vendor aims to change this soon. Also, the Windows-style Wine symlinks and drive letters are a little irritating for Linux users.

Figure 2: File transfers between computers are convenient and easy to keep track of.

In normal remote support mode, a hideable Teamviewer function bar is displayed on the remote desktop, and you can use it to access a full set of important remote control functions. On the bottom right is the connection monitor, which you can also hide and which tells you who is accessing the computer across the wire (Figure 3). If you so desired – again this could be useful for training purposes – the program will use screencasting to monitor activities on the remote computer. The screencast files you can save only contain the data stream transmitted across the wire by Teamviewer and are thus quite compact. Administrators can create a list of computers for single-click access to remote machines.

If needed, you can change the transmission focus. The High speed option reduces the color depth to speed up the transmission. If you prefer perfect image quality at the cost of smoother operations, you can opt for this in the View menu. The Automatic setting changes the mode to reflect the connection. Unfortunately, Teamviewer changes the standard gray of some desktops (including Gnome on Ubuntu) to a horrible pink. Because I didn't notice any speed boosts in reduced color mode, you might prefer to keep a high-quality mode, for Linux-to-Linux connections, at least.

A couple of options make the remote support experience smoother. To begin with, you will want to disable desktop effects, such as those provided by Compiz. Teamviewer only transfers the window content and not the windows, so window zooms and soft fades just create unnecessary traffic. A Flash blocker for the browser is also a good idea for the same reason. Animated ads cause an unnecessarily high level of data – and the bigger the ad space, the worse the problem.

Beta Blockers

The older the distribution I tried, the more difficulties I had in testing. The program performs better and is more stable on more recent versions; for example, virtually no problems occurred in connections between Ubuntu 9.04 and 9.10. I definitely advise against changing the resolution on remote Linux clients, because it typically caused Teamviewer to crash on the client. Also, you shouldn't change the native system settings during remote access, because that can interrupt the data stream from the remote machine. Another failing: The mouse cursor doesn't change its appearance from an arrow to a hand, for example, if it moves outside of a window or title bar. The vendor has promised to fix this before the final release.

Conclusions
Teamviewer is an easy-to-use and
practical piece of software. Even if
you aren’t an administrator or con-
sole jockey, it gives you a really sim-
ple approach to managing machines
remotely – or for accessing your com-
puter when you’re on the road. Team-
viewer works quite well on Linux,
even though the beta had a few bugs.
The vendor promises to have every-
thing resolved by the time of the final
release. One of Teamviewer’s main
strengths is its cross-platform compat-
ibility between Linux, Windows, Mac
OS X, and even web browsers and the
iPhone. Also, the connection always
works.
Info
[1] Teamviewer:
[http://www.teamviewer.com/index.aspx]

Figure 3: View of the remote desktop in Teamviewer.
Chef de Config
Ever dream of rolling out a complete computer farm with a single mouse click? If you stick to Linux computers and you speak a little Ruby, Chef can go a long way toward making that dream come true. By Tim Schürmann

Chef is basically a server that stores customized configuration guides for software. Clients connected to the server access the recipes and automatically configure their systems on the basis of the rulesets the recipes contain. To do so, the clients not only modify their configuration files but – if needed – launch their package managers. If the recipes change, or new ones are added at a later date, the clients affected automatically update to reflect the changes. In an ideal environment, this just leaves it up to the administrator to manage the recipes on the server.

Bulk Shopping

Before you can enjoy the benefits, the developers behind Chef expect you to put in a modicum of work. For example, recipes are made up of one or multiple standard Ruby scripts. If you need anything beyond the fairly generic recipes available on the web, you need to have a good command of the Ruby scripting language. In other words, your mileage will vary before you deploy a home-grown and home-tested solution.

The installation is another obstacle – and a fairly complex one, too, because the Chef server depends on several other components, each of which in turn requires even more software packages. The Chef server itself is written in Ruby but relies on the RabbitMQ server and on a Java-based full-text search engine, at the same time storing its data in a CouchDB database.

Finally, your choice of operating system is also important. Chef prefers Linux underpinnings, but it will also run on other Unix-flavored operating systems such as Mac OS X, OpenSolaris, FreeBSD, and OpenBSD, according to the wiki [1]. The fastest approach today is offered by Debian 5, Ubuntu 8.10 or later, or CentOS 5.x. Setting up the server on any other system can be an adventure. This article mainly relates to Debian and Ubuntu for this reason. If this is the first time you have ever cooked one of Chef's recipes, it is also a good idea to run your kitchen on a virtual machine. This prevents things boiling over and making a mess on the server room floor.

Valuable Ingredients

A full-fledged Chef installation comprises the systems you want to configure (nodes) and the server that manages and stores the recipes. Chef clients do all the hard work, picking up the recipes from the server via a REST interface and running the scripts. Each client runs on one node but can apply recipes to multiple nodes. Figure 1 shows you how this works.

For simplicity's sake, the following examples just use the Chef server and a single client. The latter only configures the computer on which it is running. The first thing you need to have in place is Ruby version 1.8.5 through 1.9.2 (with SSL bindings). Add to this RubyGems, which will want to build various extensions and libraries later on, thus necessitating the existence of make, gcc, g++, and the Ruby developer packages. Additionally, you need the wget tool for various downloads. The following command installs the whole enchilada on Debian and Ubuntu Linux:

sudo apt-get install ruby ruby1.8-dev libopenssl-ruby1.8 rdoc ri irb build-essential wget ssl-cert
search_index_path "/var/chef/search_index"
role_path "/var/chef/roles"
validation_client_name "validator"
umask 0022
Mixlib::Log::Formatter.show_time = false

-r http://s3.amazonaws.com/chef-solo/bootstrap-latest.tar.gz

Listing 2: SSL Certificates for the Chef Server

server_ssl_req="/C=US/ST=Several/L=Locality/O=Example/OU=Operations/CN=chef.example.com/emailAddress=ops@example.com"
openssl genrsa 2048 > /etc/chef/validation.key
openssl req -subj "${server_ssl_req}" -new -x509 -nodes -sha1 -days 3650 -key /etc/chef/validation.key > /etc/chef/validation.crt

The tool creates a couple of directories, corrects the configuration files, and adds chef-client to the init scripts.
sudo chef‑client
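If you want to experiment with the certificate commands from Listing 2 before touching /etc/chef, the same steps can be run against a throwaway directory first. This is a sketch that mirrors the listing's subject string; adjust it to your own site:

```shell
# Generate a key and a self-signed certificate as in Listing 2,
# but in a temporary directory instead of /etc/chef.
tmp=$(mktemp -d)
subj="/C=US/ST=Several/L=Locality/O=Example/OU=Operations/CN=chef.example.com/emailAddress=ops@example.com"
openssl genrsa 2048 > "$tmp/validation.key" 2>/dev/null
openssl req -subj "$subj" -new -x509 -nodes -sha1 -days 3650 \
  -key "$tmp/validation.key" > "$tmp/validation.crt"
# Inspect the result; the subject should match the string above
openssl x509 -in "$tmp/validation.crt" -noout -subject
```

Once the output looks right, repeat the commands with the real /etc/chef paths from Listing 2.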
The client automatically creates a key, which you need to add to the /etc/chef/client.pem file and which will sign every transaction with the server from this point on. Then you want to delete the validation.pem file for security reasons.

Librarian

Now that you have the server and the client running, the next step is to create a repository server-side for your recipes: This is simply a hierarchy of multiple, standardized (sub-)directories. Of course, you could create them all manually, but the template provided by Opscode does a quicker job; you just need to download and unpack:

wget http://github.com/opscode/chef-repo/tarball/master
tar -zxf opscode-chef-repo-123454567878.tar.gz

Because this cryptic number is difficult to remember in the daily grind, you might want to rename the directory (incidentally, the number comes from the versioning system and represents the Commit ID):

mv opscode-chef-repo-123454567878 chef-repo
cd chef-repo

[Table 1] explains the directory hierarchy in chef-repo. The recipes stored here are injected into the server by a tool named knife. When you set up knife, confirm the default responses by pressing Enter – except, enter your own username when asked Your client user name?, and type . (dot) in response to the Path to a chef repository (or leave blank)? query. Knife then registers a new client on the Chef server, creates the above-mentioned certificate in ~/.chef/my-knife.pem, and finally creates the ~/.chef/knife.rb configuration file.

Convenience Food

Multiple recipes with the same objective can be grouped in a cookbook. For example, the mysql cookbook contains all the recipes required to install and set up the free database. For an initial test, it is a good idea to look for a simple cookbook [5]. In the section that follows, I will use the cookbook for emacs from the applications group as an example. In this example, I'll use the package manager to install the popular Emacs text editor.
After downloading the cookbook archive, unpack it in the cookbooks subdirectory, then introduce the server to the new recipes:

rake upload_cookbooks

The rake command automatically calls knife with the correct parameters, and knife then uploads all the cookbooks from the corresponding directory. To upload a single cookbook to the server, do this:

rake upload_cookbook[emacs]

GUI Management

The server now knows the emacs cookbook, but the clients don't. To change this, launch a browser and access the web front end at http://chefserver.example.com:4040. Chef does not offer SSL encryption here. If you prefer a more secure approach, you could use Apache as a proxy. In the form that then appears, log in by typing the admin username [Figure 2]. The matching password is stored in the web_ui_admin_default_password line of the /etc/chef/server.rb file.

Listing 3: ~/chef.json for the Server
01 {
02   "bootstrap": {
03     "chef": {
04       "url_type": "http",
05       "init_style": "runit",
06       "path": "/srv/chef",
07       "serve_path": "/srv/chef",
08       "server_fqdn": "chefserver.example.com",
09       "webui_enabled": true
10     }
11   },
12   "run_list": [ "recipe[bootstrap::server]" ]
13 }

Listing 4: ~/chef.json for the Client
01 {
02   "bootstrap": {
03     "chef": {
04       "url_type": "http",
05       "init_style": "runit",
06       "path": "/srv/chef",
07       "serve_path": "/srv/chef",
08       "server_fqdn": "chefserver.example.com"
09     }
10   },
11   "run_list": [ "recipe[bootstrap::client]" ]
12 }
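The ~/.chef/knife.rb that knife writes is plain Ruby. Its exact contents depend on your answers to the prompts, but it typically looks something like the following sketch; the node name and paths match the example names used in this article, and the API port shown here is an assumption, so use whatever your own server reports:

```ruby
# ~/.chef/knife.rb (sketch; values are illustrative)
log_level        :info
log_location     STDOUT
node_name        "my-knife"
client_key       "/home/user/.chef/my-knife.pem"
chef_server_url  "http://chefserver.example.com:4000"
cookbook_path    ["/home/user/chef-repo/cookbooks"]
```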
Role-Out

To group multiple cookbooks in a role, create a new file below Roles in the repository, say, beispiel.rb, with the following content:

name "beispiel"
description "Example of a role"
run_list("recipe[emacs]", "recipe[zsh]", "recipe[git]")

Then push the new roles to the server:

rake roles
Figure 2: The web front end login page: the default password specified on the right is incorrect.
Changing the slightly cryptic default password after logging in the first time is a good idea.
Now go to the Nodes menu. When you get there, click the client name, change to the Edit tab, and finally drag the recipe you want to use from Available Recipes and drop it into the Run List (the recipe will slot into the top position in the list). In the example, you would now see emacs at the top [Figure 3]. To store this assignment, press the Save Node button bottom left on the page.
Client-side now, manually launch the chef-client tool:

sudo chef-client

This command line immediately opens a server connection, picks up the recipes assigned to the client (only emacs for the time being) and executes the recipes [Figure 5]. To allow this to happen on a regular basis, you should run the client at regular intervals as a daemon:

chef-client -i 3600 -s 600

In this example, the client contacts the server every 3,600 seconds. The -s parameter lets you vary the period slightly. If you don't set this, all of your clients might query the server at the same time.
In the web front end, you can assign roles to a node just like cookbooks using drag and drop.

Freshly Stirred

Ready-made recipes and cookbooks off the Internet will only cover standard application cases. For special cases, or individual configurations, you will typically need to create your own cookbook. The following, extremely simple example, based on the quick_start cookbook [6], creates a text file on the client called /tmp/thoughts.txt and adds a sentence to the file.
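A recipe for that kind of job is just a short Ruby file inside the cookbook. The following sketch uses Chef's standard file resource; the path within the cookbook and the sentence itself are assumptions for illustration, not the literal contents of quick_start [6]:

```ruby
# cookbooks/quick_start/recipes/default.rb (hypothetical path)
file "/tmp/thoughts.txt" do
  owner "root"
  group "root"
  mode "0644"
  content "Chef was here.\n"   # the sentence written to the file
end
```

After uploading the cookbook and assigning it to the node, the next chef-client run creates the file.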
Figure 3: Using drag and drop to assign recipes to a node. In this example, the client runs the beispiel recipe first, followed by emacs.
Figure 4: The Status tab lists all the nodes with their last contact attempts and recipe assignments (following Run List).
Health Check
The Sysinternal tools are free tools from Microsoft that can help Windows administrators handle many tasks. This article introduces the Sysinternal tools that are useful for system monitoring. All of the tools described here can be downloaded free of charge from the Microsoft site [1], either as individual downloads or as part of the Sysinternals suite.
One advantage of the Sysinternals utilities is that you don't need to install them, so they can be launched conveniently from a USB stick. When launched for the first time, the programs display a license dialog; you can suppress this dialog with the /accepteula option, which can be useful in scripting. Unfortunately, this option does not work for all of the Sysinternal tools.
The programs only run on Windows systems as of Windows 2000 Server. For this article, I used Windows Server 2008 R2 and Windows 7. Windows Server 2008 R2, Windows Server 2008, Windows Vista, and Windows 7 do not support access to the hidden system shares such as C$ or admin$ as easily as Windows XP or Windows Server 2003 if the computers do not belong to a Windows domain, because the newer operating systems block access to administrative shares by authentication of local user accounts.
Some Sysinternal tools, such as PSInfo.exe, require access to the admin share and thus will not work at first. To allow access, you must enable local logins to administrative shares in the Registry of standalone computers. To do so, launch the Registry Editor by typing regedit, then navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System. Create a new Dword entry with the label LocalAccountTokenFilterPolicy, set the value to 1, then restart the computer.

LDAP Microscope

Insight for Active Directory, also known as AdInsight, lets you monitor the LDAP connections on a domain controller in real time with a GUI. The user interface is similar to the Sysinternal tools Regmon and Filemon. The tool investigates calls to the wldap32.dll file, which most programs, including Exchange, use for LDAP-based access to Active Directory.
AdInsight lists all requests, including those that are blocked. This gives administrators an easy option for analyzing authentication problems with Active Directory-aware programs and identifying clients and servers that set up a connection to the domain controller. AdInsight logs all requests issued to domain controllers and stores them as an HTML report or text file for troubleshooting purposes. The logfile contains the client requests and the responses that the client received via LDAP.
AdInsight also logs access to system services (Figure 1). When a program such as Exchange accesses the domain controller, the window fills with information; then, you can right-click to display details of the individual entries, as well as filter the display via the menu.
The display also includes the name of the accessing user. Unfortunately, AdInsight only lets you monitor local access; over-the-wire diagnostics via remote access are not supported. However, AdInsight's search function does let you filter by process, error, or request response. The tool selects the response to let you perform specific monitoring.
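Incidentally, the LocalAccountTokenFilterPolicy change described earlier doesn't have to be clicked together in regedit on every standalone machine; the setting can be captured once in a .reg file and imported with regedit /s. This is a sketch of such a file:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"LocalAccountTokenFilterPolicy"=dword:00000001
```

As with the manual method, the change only takes effect after a restart.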
Filesystem, Registry, Processes
Figure 1: AdInsight helps you investigate LDAP access to domain controllers.

The Process Monitor provides a graphical user interface for monitoring and color-tagging the filesystem, registry, and process/thread activity in real time. The tool combines two programs standard to Sysinternals: Filemon and Regmon. With a click of the button, you can enable and disable the individual monitoring options and restrict your monitoring of registry and filesystem access and process calls to an area in which you are interested.
Process Monitor is a valuable aid for monitoring stability and identifying bottlenecks; it is capable of logging all read and write access to the system and its media. Additionally, it displays comprehensive data on any process or thread that is launched or terminated. Also, you can monitor any TCP/IP connections that are opened, as well as UDP traffic. Note that Process Monitor does not save the content of the TCP packets or payload data; it is not specifically designed for network monitoring. If that is what you need, you might prefer a tool such as Wireshark [2].
The Process Monitor tool is also extremely useful for troubleshooting local connection and privilege problems (Figure 2). If needed, it can display additional information on active processes (e.g., their DLL files or the parameters set when the process was launched).
The filter function allows you to reduce the volume of data output generated by Process Monitor. For example, it lets you hide any processes with a specific string in their names, without filtering out registry entries that contain the same string. Additionally, you can enable multiple filters at the same time and save the configuration.
For more efficient diagnostics, you might want to save the logfile and then load it for analysis, applying additional filters as needed. The Tools menu gives you a selection of preconfigured views.
Process Monitor can also monitor the boot process on a server because it launches at a very early stage. You can redirect all the output to a file. If Windows fails to boot, analysis of the output file gives you a fast way to identify the issue. Just like all Sysinternal tools, Process Monitor is easy to use and does not have an extended learning curve.
Process Explorer takes things a step further than Process Monitor, in that Process Explorer lists all the processes in a window and includes more detailed information on the current process, such as access to directories (Figure 3).
In DLL View mode, the tool shows which libraries are used and where they originated. The process menu contains a Restart entry that first kills a process and then restarts it. Also, you can temporarily stop individual threads and highlight processes in different colors. Process Explorer nests in the taskbar, which provides an at-a-glance view.

Figure 2: Monitoring processes, the registry, and filesystem access with Process Monitor.
The PsLogList tool retrieves the event display from various computers and then displays and compares events. If you run the tool without any options, it will output the entries in the local system event log. The program has numerous options that give you various ways of comparing the event logs that you retrieve.
Table 1 lists some of PsLogList's command-line options. By default, the tool uses the system event log; you can select the event log by entering the first letter or by entering an abbreviation, such as sec for security.

Table 1: Selected PsLogList Options
Name               Function
File               Runs the command on all the computers listed in the file. Each computer needs a separate line in the text file.
-a dd/mm/yy        Lists the entries after the specified date.
-b dd/mm/yy        Lists entries before the specified date.
-c                 Deletes the event logs after displaying them in PsLogList, which is useful in the case of batch-controlled retrieval.
-d n               Displays the entries for the past n days.
-e id1, id3, ...   Filters entries with defined IDs.
-f                 Filters entries with specific types (e.g., -f w filters warnings). You can use arbitrary strings here.
-h n               Lists entries for the past n hours.
-i id1, id3, ...   Shows entries with IDs defined in a comma-separated list of IDs.
-l event_log_file  Stores entries for the defined event log.
-m n               Lists entries for the past n minutes.
-n n               Only shows the n latest defined entries.

Monitoring Shares

The ShareEnum tool lets you monitor shares and their security settings by scanning either an IP range or all the PCs and servers in a domain (or all the domains in a network) for shares (Figure 6). To display all of this information reliably, you need to log in as the domain administrator, which is the only account with privileges on all the PCs and servers in the domain. In networks without a domain, you can use the administrator account for the login; however, you need to set the password to be the same on all the computers you want to monitor. ShareEnum shows you not only the shares but also the local paths for the shares on the computer.
The Refresh button tells ShareEnum to launch a scan. If you want to scan an individual computer, enter the same IP address as the start and end address of the IP range. The tool will show you all of the shares on the network in a single window and list the access privileges to boot.

Figure 6: ShareEnum gives you a neat list of all the shares and assigned privileges on the network.

If you only want to see local privileges, Sysinternals gives you a choice of two tools: AccessChk and AccessEnum.
AccessChk outputs an exhaustive list of a user's rights at file, service, or registry level at the command line, giving you a quick overview of how access privileges are defined for a specific user. To see the privileges for the administrator in the C:\Windows\System32 directory, you would type accesschk administrator c:\windows\system32. For each file, you are told whether the user has read (R), write (W), or both (RW) types of access. If you use | more to redirect the output from this command, the display will pause on each page, and you can continue by pressing any key. In a similar fashion, >file.txt redirects the output to a file. With this approach, you do not see any command-line output.
The accesschk user -cw * command shows you the Windows services to which a group or user has write access. If you want to see a user's access privileges for a specific registry key, the accesschk -kns contoso\tami hklm\software command is your best bet. The AccessChk tool is excellent for checking computers for vulnerabilities, and it also supports scripting.
AccessEnum gives you a GUI that presents a full directory tree of user privileges. In other words, AccessEnum is the graphical front end for AccessChk. The download contains both files because AccessEnum relies on the AccessChk program for performing scans. In the GUI, you can select a directory and scan it for privilege assignments. The tools also show denied privileges.

Conclusions

The free command-line and GUI tools in the Sysinternals suite are a feature-rich addition to any Windows administrator's toolbox. The individual programs are useful for monitoring processes, users, and network connections, and tools like AdInsight are indispensable aids if you need to troubleshoot Active Directory logins.

Info
[1] Sysinternals tools: [http://www.sysinternals.com]
[2] Wireshark: [http://www.wireshark.org]

The Author
Thomas Joos is a freelance IT consultant who has worked in the IT industry for more than 20 years. Among his many projects, Joos writes practical guides and articles on Windows and other Microsoft topics. You can meet him online at [http://thomasjoos.spaces.live.com].
Turnkey Solution
PAM is a very powerful framework for handling software- and hardware-based user authentication, giving administrators a choice of implementation methods. By Thorsten Scherf
Hardware innovations are daily business in user account authentication. Pluggable Authentication Modules (PAM) help transparently integrate these new devices into a system. This gives experienced administrators the option of offering a variety of different authentication methods to their users while providing scope for controlling the total user session workflow.

Old School

User logins on Linux systems are traditionally handled by the /etc/passwd and /etc/shadow files. When a user runs the login command to log in to the system with a name and password, the program creates a cryptographic checksum of the password and compares the result with the checksum stored for this user in the /etc/shadow file. If the checksums match, the user is authenticated; if not, the login will fail.
This approach doesn't scale well. In larger environments, user credentials are typically stored centrally on an LDAP server, for example. In this case, the login program doesn't retrieve the password checksum from the /etc/shadow file but from a directory service. This task can be simplified by deploying PAM [1].

Modular Authentication

Originally developed in the mid-1990s by Sun Microsystems, PAM is available on most Unix-style systems today. PAM offloads the whole authentication process from the application itself to a central framework comprising an extensive collection of modules (Figure 1). Each of these modules handles a specific task; however, the application only gets to know whether or not the user logged in successfully. In other words, it is PAM's job to find a suitable method for authenticating the user. The PAM framework defines what this method looks like, and the application remains blissfully unaware of it.

Figure 1: PAM provides a centralized user management framework for the application.
Figure 2: A classic PAM configuration file contains modules and libraries that the administrator can use to customize PAM.

PAM can use various authentication methods. Besides popular network-based methods like LDAP, NIS, or Winbind, PAM can use more recent libraries to access a variety of hardware devices, thus supporting logins based on smartcards or the user's digital fingerprint. One-time password systems, such as S/Key or SecurID, are also supported by PAM, and some methods even require a specific Bluetooth device to log in the user.
The way PAM works is fairly simple. Each PAM-aware application (the application must be linked against the libpam library) has a separate configuration file in the /etc/pam.d/ folder. The file will typically be named after the application itself – login, for example. Within the file, modules distribute PAM tasks among themselves. Numerous libraries are available in each group, and they handle a variety of tasks within the group (Figure 2). Control flags let you manage PAM's behavior in case of error – for example, if a user fails to provide the correct password or if the system is unable to verify a fingerprint.

Fingerprints

More recent PAM libraries allow administrators to authenticate users by means of smartcards, USB tokens, or biometric features. State-of-the-art notebooks often include a fingerprint reader that allows the owner to use a digital fingerprint when logging into the system. The PAM ThinkFinger library [2] provides the necessary support. According to the documentation, the module will support the UPEK/SGS Thomson Microelectronics fingerprint reader used by most recent Lenovo notebooks and many external devices.
Most major Linux distributions offer prebuilt packages for the PAM libraries. You can use your distribution's package manager to install the software from the repositories. To install the required packages on your hard disk, you would give the

yum install thinkfinger

command on a Fedora system and

apt-get install thinkfinger-tools libpam-thinkfinger

on Ubuntu Hardy. Gentoo admins can issue a compact command:

emerge sys-auth/thinkfinger

If you're using openSUSE, you'll need the libthinkfinger and pam_thinkfinger packages, the repository versions of which are not up to date. You might prefer a manual install with the typical ./configure, make, make install steps and files from the current source code archive. Debian users on Lenny will need to access the Experimental repository and then type

aptitude install libthinkfinger0 libpam-thinkfinger thinkfinger-tools

for the install.
Before you modify the existing PAM configuration, you might want to test the device itself. To do so, scan a fingerprint by giving the

tf-tool --acquire

command (Figure 3). Then you can use

tf-tool --verify

to verify the results. You might see a Fingerprint does *not* match message at this point; initial attempts can be fairly inaccurate because you will need to familiarize yourself with the device.
If you drag your finger too quickly or too slowly across the scanner, the device could fail to identify the fingerprint correctly. In this case, it will output an error message and quit. When you achieve reliable results from fingerprint scans, you can delete the temporary file with the test scan in /tmp and create an individual file for each user on the system that will contain the user's fingerprint. The command is

tf-tool --add-user username

(Figure 4). Users must scan their fingerprints three times for this to work. If the fingerprint is identified correctly each time, the tool will store it in a separate file below /etc/pam_thinkfinger/.

Figure 3: tf-tool gives you an option for testing your fingerprint scanner …
Figure 4: … and then creating a fingerprint for each user.

Once everything is working, you can begin the PAM configuration. Figure 2 shows a PAM configuration for the login program that lists just one authentication module: pam_unix. If you want to authenticate against the fingerprint scanner first, you need to call the pam_thinkfinger PAM module before pam_unix.
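What that ordering looks like in practice: a minimal sketch of the relevant auth lines in /etc/pam.d/login, with ThinkFinger first and pam_unix as the fallback. The pam_unix options shown are illustrative; your distribution's stock file will carry more entries:

```
auth  sufficient  pam_thinkfinger.so
auth  required    pam_unix.so try_first_pass
```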
To prevent PAM from prompting users to enter their password despite passing the fingerprint test, you need to add a sufficient control flag. This tells PAM not to call any more libraries once an authentication test has succeeded and to return PAM_SUCCESS to the calling program – login in this example. If the fingerprint-based login fails, pam_unix is called as a last resort and will prompt for the user's regular password.
Manually entering the PAM libraries for all of your PAM-aware applications in every single PAM configuration file would be fairly tedious. A centralized PAM configuration file gives you an alternative. On Fedora or Red Hat, this file is named /etc/pam.d/system-auth, although other Linux distributions call it /etc/pam.d/common-auth. You can enter all the libraries against which you want to authenticate your users in the file (Figure 5). The include control flag then includes the file in all your other PAM configuration files. From now on, this makes all the PAM libraries listed by the centralized configuration file available in the individual PAM files, including the pam_thinkfinger module.

Figure 5: On Fedora, system-config-authentication provides a basic PAM configuration tool.

USB Tokens

The pam_usb library supports another hardware-based approach, in which PAM checks to see whether a specific USB device is plugged into the machine. If so, the user is logged in; if not, access to the system is denied. The plugged-in device is identified by its unique serial number and its model and vendor names. Additionally, a random number is stored on the USB device and on the computer; the number changes after each successful login attempt.

Figure 6: The USB device is identified by its properties. If the user tries to log in without the device, it will not work.

When a user logs in, PAM checks both the specified USB device properties and the random number. If the number stored on the USB does not match the number on the disk, the login fails. This prevents attackers from stealing the random number, placing it on their own USB device, and then modifying the properties of their own device to access the system. Because the random number on the system changes after each login, the stolen number will not match the number on the system.
Gentoo and Debian Linux offer prebuilt packages of this PAM library. In both cases, you can use the package manager to install, as described for pam_thinkfinger. Users on any other Linux distribution can download the current source code archive [3] and run make; make install to compile the required files and install them on the local system. Then you need to connect any USB device – it can be a cellphone with an SD card if you like – and store its properties in the /etc/pamusb.conf file. The command for this is

pamusb-conf --add-device USB-device-name

(Figure 6). The command

pamusb-conf --add-user user

lets you add more users to the configuration and generates a matching random number. The number for each user is stored both on the USB device and on the system. Also, the tool adds each user to the XML-based /etc/pamusb.conf configuration file. You can use the file to define actions for each user; these actions will run when the USB device is plugged in or unplugged. For example, the entry in Listing 1 of the configuration file automatically blocks the screen if the USB device is unplugged.

Listing 1: Configuration File for pam_usb
01 <user id="tscherf">
02   <device>
03     /dev/sdb1
04   </device>
05   <agent event="lock">gnome-screensaver-command --lock</agent>
06   <agent event="unlock">gnome-screensaver-command --deactivate</agent>
07 </user>

Then you need to add the pam_usb PAM library to the corresponding PAM configuration file, /etc/pam.d/system-auth or /etc/pam.d/common-auth. If you use the sufficient control flag, users can log in to the system by plugging in the USB device, assuming the random number for the user matches on both devices (Listing 2).

Listing 2: USB Device-Based Authentication
[tscherf@tiffy ~]$ id -u
500
[tscherf@tiffy ~]$ su -
* pam_usb v0.4.2
* Authentication request for user "root" (su-l)
* Device "/dev/sdb1" is connected (good).
* Performing one time pad verification...
* Verification match, updating one time pads...
* Access granted.
[root@tiffy ~]# id -u
0

To enhance security, you can replace the sufficient control flag with required. This setting first looks for the USB device, but even if the device is identified correctly, PAM still prompts the user for a password in the next stage of the login process. Both of these tests have to complete successfully for the user to log in.
All of the hardware-based login methods I have looked at thus far are easily set up, but they all have vulnerabilities: it is easy to fake fingerprints, and USB sticks can be stolen, thereby putting an end to any security they offered. If you take your security seriously, you will probably want to use two-factor authentication. This method inevitably involves using chip cards with readers or USB tokens with one-time passwords and PINs.

Yubikey

A small company from Sweden, Yubico [4], recently started selling Yubikeys (Figure 7), which are small USB tokens that emulate a regular USB keyboard. The key has a button on top which, when pressed, tells the token to send a one-time password (OTP) to the active application, such as a login prompt on an SSH server or the login window of a web service. The OTP is verified in real time by a Yubico authentication server. Because the software was released under an open source license, you could theoretically set up your own authentication server on your LAN. This would remove the need for an Internet connection.
The way the token works is quite simple. In contrast to popular RSA tokens, the Yubikey doesn't need a battery because the OTP is not generated on the fly; instead, one-time passwords are defined in advance. The passwords are stored on the token and in a database on the authentication server.
When you press the Yubikey button, the key sends one of these OTPs to the active application, which then uses an API to access the server and verify the password. If this fails (Unknown Key) or if the password has already been used (Replayed Key), an error message is output and the login fails. If the server identifies the key as valid, it sets usage-count to 1 and the user is authenticated. The user cannot log in with this key any more times.
Because of the simple API, more and more applications are relying on authentication against the Yubico server. One example is the plugin for the popular WordPress blog, which allows users with a Yubikey to log in to the blog. A project from Google's Summer of Code produced a PAM module that supports logging in to an SSH server [5].
Instead of typing your user password at the login prompt, you simply press the button on the Yubikey to send a 44-character, modhex-encoded password string to the SSH server. The server then verifies the string by querying the Yubico server. The first 12 characters uniquely identify the user on the Yubikey server; the remaining 32 characters represent the one-time password.
You can define a central file on the SSH server to specify users permitted to log in by producing a Yubikey. To do so, first create a /etc/yubikey-users.txt file with a username, a colon separator, and the matching Yubikey ID (i.e., the first 12 characters of the user's OTP) for each user. Alternatively, users can create a file (~/.yubico/authorized_yubikeys) with the same information in their home directory.
You need to configure PAM to verify the OTP against the Yubico server. To do so, add a line for the Yubikey to your /etc/pam.d/sshd file (Listing 3). The configuration shown in Listing 3 runs this authentication in addition to the regular, system-auth-driven authentication method.
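Listing 3 itself is not reproduced in this excerpt. For orientation, the added line is a standard pam_yubico entry; in a sketch like the following, the id and authfile values are assumptions taken from the pam_yubico documentation, not the article's exact listing – substitute your own API ID and user mapping file:

```
auth  required  pam_yubico.so  id=16 authfile=/etc/yubikey-users.txt
```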
But if you replace the required flag with sufficient, there is no need for the user to log in after the Yubikey OTP has been validated. Unfortunately, the Yubikey is not protected by an additional PIN, and the system is vulnerable if the token is stolen. An unauthorized user in possession of a token would be able to spoof a third party's identity. The developers are working on adding PIN protection for OTPs, and an unofficial patch is already available.

X.509 Certificates and PAM

Classic two-factor authentication typically relies on chip cards. The cards typically contain a certificate protected by a PIN. The PAM pam_pkcs11 library allows users to log in to the system via an X.509 certificate. The certificate contains a private/public key pair. Both can be stored on a suitable chip card, with the private key protected by an additional PIN to prevent identity spoofing. If the certificate on the card is valid, the user is mapped onto the system. The mapping process can retrieve a variety of information from the certificate, typically the Common Name or the UID stored in the certificate.
To make sure the user really is who they claim to be, the system generates a random 128-bit number. A function on the chip card then encrypts the number using the private key, which is also stored on the card. The user needs to enter the right PIN to be able to access the private key. The system then uses the freely available public key to decrypt the encrypted number. If the results match the random number, the user is correctly authenticated because the two keys match.
The hardware required for this setup is a chip card with a matching reader – for example, the Gemalto e-Gate or the SCR USB device by SCM. You can use any Java Card 2.1.1 or Global Platform 2.0.1-compatible token: Gemalto Cyberflex tokens are widely available. Various software solutions are also available. The approach described in this example uses Dogtag [6] from the Fedora project as a PKI solution. Users with other distributions might prefer OpenSC [7]. The PAM library is the same for both variants, pam_pkcs11.
Dogtag consists of various components. For this setup, you'll also need a Certificate Authority (CA) to create the X.509 certificates. Online Certificate Status Protocol (OCSP) is used for online validation of the certificates on the chip cards. For offline validation, you just need the latest version of the Certificate Revocation List (CRL) on the client system. Of course, you also need a way of moving the user certificate from the certificate authority to the chip card. You can use the Enterprise Security Client (ESC) to open a connection to another PKI component, the Token Processing System, for this.
Assuming correct authentication, the user certificate is then copied to the chip card in the enrollment process. The ESC tool then gives the user a convenient approach to managing the
by stealing a chip card. To log in, this article relies on the pcsc-lite and card. If the user needs to request a
you need both the chip card and the pcsc-lite-libs packages for accessing new certificate from the CA or needs
matching PIN. If the PIN is unknown, the reader. a new PIN for the private key on the
the login fails. card, it’s no problem with ESC.
The details of the login process are as Public Key Infrastructure If you use OpenSC to manage your
follows: The user inserts the chip card
into the reader and enters the PIN. It makes sense to use X.509 certifi- Listing 3: PAM Configuration for a Yubikey
The system searches for the certificate cates, but only if you have a complete auth required pam_yubico.so authfile=/etc/yubikey‑users.
with the public key and private key Public Key Infrastructure (PKI) set up. txt
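As far as I can tell from the yubico-pam project, the mapping file named in Listing 3 (and the per-user ~/.yubico/authorized_yubikeys) is a simple colon-separated list: the username, followed by one or more token IDs – the fixed first 12 modhex characters of the key's OTP output. A sketch with a hypothetical user and token IDs:

```text
# username:token-id[:token-id...]
tscherf:ccccccbdjtub:ccccccbcrefv
```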
If you use OpenSC to manage your chip card, you can transfer a prebuilt PKCS#12 file [8] to the card using:

pkcs15-init --store-private-key tscherf.p12 --format pkcs12 --auth-id 01

The PKCS#12 file contains both the public key and private key. If you have a user certificate from a public certification authority like CACert [9], you can use your browser's certificate management facility to export the certificate to a file and then transfer it to the chip card as described. If you don't have a certificate, you can create a request and send it to the appropriate certificate authority. Once the authority has verified your request, it will return a certificate to you.
For both approaches, you can use the /etc/pam_pkcs11/pam_pkcs11.conf file to define the driver for access to the chip card. The driver can be modified in the configuration file, as shown in Listing 4. Here, you must specify the correct paths to the local CRL and CA certificate repository. The CRL database is necessary to check that the certificate on a user's chip card is still valid and has not been revoked by the certificate authority. You need the certificate for the certificate authority that issued the user's certificate from the CA certificate repository. This makes it possible to validate the user's authenticity.

Certificates for Thunderbird and Co.

Applications that rely on Network Security Services (NSS) for signing or encrypting email with S/MIME, such as Thunderbird, use a file in the nss_dir as the CA database; applications based on the OpenSSL libraries use the database in the ca_dir directory. The certutil tool can import the CA certificate into the NSS database; OpenSSL-based certificates can simply be appended to the existing file. Finally, you can define the mappings between user certificates and Linux users in the pam_pkcs11 configuration file. Various mapping tools are available for this, specified as follows:

use_mappers = cn, uid

Next, you still need to add the PAM pam_pkcs11 library to the correct PAM configuration file – that is, /etc/pam.d/login or /etc/pam.d/gdm. You can edit the file manually or use the system-config-authentication tool referred to previously.
When you insert the chip card into the reader and launch the ESC tool, you should be able to see the certificate (Figure 8). If you now attempt to log in via the console or a new GDM session, the authentication process starts.

Figure 8: The Security Client gives you an easy option for managing chip cards.

Conclusions

PAM is a very powerful framework for handling authentication. As you can see from the PAM libraries introduced in this article, the functional scope is not just restricted to authenticating users but also covers tasks such as authorization, password management, and session management. Administrators who take the time to familiarize themselves with configuring PAM, which isn't always trivial, will be rewarded with a feast of feature-rich and flexible options for password- and hardware-based authentication and authorization.

Info
[1] Linux PAM: http://www.kernel.org/pub/linux/libs/pam/
[2] PAM ThinkFinger: http://thinkfinger.sourceforge.net
[3] pam_usb: http://downloads.sourceforge.net/pamusb/pam_usb-0.4.2.tar.gz?download
[4] Yubico website: http://www.yubico.com/products/yubikey/
[5] SSH server for Yubikey: http://code.google.com/p/yubico-pam/downloads/
[6] Dogtag PKI: http://pki-svn.fedora.redhat.com/wiki/PKI_Main_Page
[7] OpenSC: http://www.opensc-project.org
[8] PKCS specifications: http://en.wikipedia.org/wiki/PKCS
[9] CACert certificate authority: http://cacert.com
[10] ESC Guide: http://directory.fedoraproject.org/wiki/ESC_Guide

The Author
Thorsten Scherf is a Senior Consultant for Red Hat EMEA. You can meet him as a speaker at conferences. He is also a keen marathon runner whenever time permits.
Apache Protector

© KrishnaKumar Sivaraman, 123RF.com

Even securely configured and patched web servers can be compromised because of vulnerabilities in a web application. ModSecurity is an Apache extension that acts as a web application firewall to protect the web server against attacks. By Sebastian Wolfgarten

Security issues on the web are no longer typically a result of poor configuration or the lack of up-to-date server software. Tomcat, Apache, and even IIS have become extremely mature over the past few years – so much so that they don't have any noticeable vulnerabilities, although exceptions can always turn up to prove the rule. Thus, hackers have turned their attention to the web applications and scripts running on the servers. Increasingly complex user requirements are making web applications more complex, too: Ajax, interaction with external databases, back-end interfaces, and directory services are just part of the package for a modern application. And, attack vectors grow to match this development (see the "Attacks on Web Servers" box).

Firewalls for the Web

In contrast to legacy packet filters, Web Application Firewalls (WAFs) don't inspect data in the network or transport layer, but rather at the HTTP protocol level (i.e., in OSI Layer 7) [1]. They actually speak HTTP. For this to happen, these firewalls analyze incoming and outgoing client requests and server responses to distinguish between benevolent and malevolent requests on the basis of rules. If necessary, they can even launch countermeasures; if configured to do so, the software will also inspect encrypted HTTPS connections.

Accessories en Masse

Where classical network-based firewalls – I'm exaggerating slightly here – either permit any or no HTTP connections, WAFs target individual HTTP connections based on their content. ModSecurity is a high-performance WAF for Apache and a complex module for the Apache web server. Originally developed by Ivan Ristic, Breach Security handles its distribution and development [2]. Two variants of the software are available: the open source variant released under the GPLv2, and a commercial version with professional support, pre-configured appliances, and management consoles. ModSecurity runs on Linux, Solaris, FreeBSD, OpenBSD, NetBSD, AIX, and Windows, with the later versions only available for Apache 2.x. This article discusses version 2.5.10; the successor 2.5.11 is merely a bugfix.

The software's functional scope is enormous but comprehensively documented [3]. It logs HTTP requests and gives administrators unrestricted access to the individual elements of a request, such as the content of a POST request. It also identifies attacks in real time based on positive or negative security models and detects anomalies based on supplied patterns for known vulnerabilities. The powerful rules discover whether credit cards are in the data stream or use GeoIP to prevent access from certain regions. ModSecurity checks not only incoming requests but also the server's outgoing responses. The software can implement chroot environments. As a reverse proxy, it protects web applications on other web servers, such as Tomcat or IIS.

Breach also provides a collection of core rules that guarantees the basic security of the web server. Comprehensive documentation, many examples, and a mailing list provide support for the user. This makes ModSecurity a good choice for protecting web servers and their applications against vulnerabilities. But before you can even consider tackling the highly …
Figure 1: Once ModSecurity has been enabled, it will log suspicious activity at the detail level specified in SecAuditLogParts – in this case, an SQL injection attack.

The third instruction defines the auditlog storage location relative to the Apache installation path. Finally, the SecAuditLogParts instruction defines the information that ModSecurity logs in the auditlog (Table 2). In this case, this is the header and the content of the request, along with the ModSecurity reaction. The results are shown in Figure 1.

After this preparation, you can add the most important directive to your configuration: SecRule. This defines a filter rule and optionally an action that the module will perform if it discovers a match for the rule. If you don't define an action, the tool will run the standard command defined in the SecDefaultAction directive. Rules always follow this pattern:

SecRule Variable Operator [Action]

The number of variables is huge, and they cover every single element of the client request (both for POST and for GET), as well as the most important server environment details [6]. Additionally, you can use regular expressions. For example, to investigate an HTTP request to find out whether the client requests the /etc/passwd string in a GET method, you would use this rule:

…

SecRule REQUEST_HEADERS:User-Agent "nikto"

This example tells the web server to refuse requests from the Nikto security scanner [7]. If you want ModSecurity to run a specific action for a rule, you can overwrite the default action:

SecRule REQUEST_HEADERS:User-Agent "nikto" "phase:2,pass,msg:'Nikto-Scan logged'"

This rule tells the module to write a Nikto scan logged message to the logfile when it detects the Nikto user agent in a client request in phase 2. The rule then overwrites the drop default action, which is defined by SecDefaultAction, with the pass action. This allows the client request to pass. To test ModSecurity, Listing 1 gives you an overview of the basic configuration discussed thus far.

Practice Session

After restarting Apache, a request like http://www.example.com/index.html?file=/etc/passwd would trigger the sample rule in line 8. Then the action defined in line 9 would block the request. The client sees an HTTP 403 Forbidden error. At the same time, lines 3 through 7 tell ModSecurity to log the transaction in the auditlog and send a detailed overview of the way the client request was processed to the debuglog. This means that your Apache error_log file will contain a note to the effect that a client request was effectively blocked, as shown in Listing 2. Once ModSecurity is working correctly, you can start adding rules and modifying them for the web applications you want to protect.

The Art of Detecting Attacks

To provide effective protection against a huge assortment of attacks, system administrators need to set up a robust ruleset for ModSecurity. To formulate rules that protect you against SQL injection, cross-site scripting, or local and remote file inclusion attacks, for example, you need in-depth knowledge of how attacks on web servers work. Of course, not every administrator has this knowledge or the time to re-invent the wheel. To address this, the Open Web Application Security Project (OWASP) offers a predefined ruleset for ModSecurity [8] that relies on anomaly detection to protect web servers against a number of attacks.
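A negative-model rule of the kind discussed above – refusing any request that tries to pull /etc/passwd into a parameter – could be sketched like this. This is a hypothetical illustration written against the ModSecurity 2.x rule syntax, not one of the article's own listings:

```apache
# Deny any request whose GET or POST arguments contain /etc/passwd
SecRule ARGS "/etc/passwd" "phase:2,deny,log,status:403,msg:'Local file inclusion attempt'"
```

Because no transformation or anchoring is applied, a production rule would normally be tightened with a regular expression and the t: transformations described in the reference manual [3].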
<Location /apps/news/infocus/sgspeeches/statments_full.asp>
SecRule &ARGS "!@eq 1"
SecRule ARGS_NAMES "!^statid$"
SecRule ARGS:statID "!^\d{1,3}$"
</Location>

Three lines embedded in an Apache location container state that valid user requests for the statements_full.asp file are only allowed to have one argument (first rule) called statid (second rule) with numbers of one to three digits (third rule) as their parameters. Any requests that do not follow this pattern are cleaned up by the default action, as defined in SecDefaultAction. This would effectively prevent an attacker exploiting the SQL injection vulnerability.

No Inside Information

ModSecurity also filters outgoing data, especially server responses to incoming requests. The PHP programming language throws error messages such as this:

Fatal error: Connecting to MySQL server 'dbserv.example.com' failed

Although you can disable PHP error messages in responses, Google still lists a bunch of websites where PHP error messages reveal many juicy details of applications. This information is very useful to an attacker, because it can help them understand the internal workings and structure of a web application and attack it in a more targeted way. To tell ModSecurity to catch PHP error messages and prevent them from being sent to users, you can define a rule like this:

SecRule RESPONSE_BODY "Fatal error:"

RESPONSE_BODY refers to the content of the server response to the client, and although it is not particularly elegant, it does indicate what potential you have. With a carefully crafted regular expression, you can use the same technique to prevent credit card numbers from being revealed by, for example, a compromised application in the aftermath of a successful SQL injection attack.

The Chinese Wall in Reverse

Another advanced scenario for ModSecurity involves cooperating with the GeoIP provider, Maxmind. GeoIP locates users geographically on the basis of their IP address, which means you can restrict access to a website to a specific region, such as Pennsylvania – if you have a site in Pennsylvanian Dutch that nobody else would understand – or block a country entirely. To do this, you would install the mod_geoip2 module on Apache 2, along with the GeoIP software and GeoLiteCity.dat geographical database [11].

Imagine a mechanical engineering company in Germany's Swabian region that is afraid of industrial espionage from the Far East; in this case, they could use the configuration in Listing 3 to prevent access from China – if the people in China didn't spoof their origins. The last two lines form a filter rule chain. Line 6 locates the geographical region for the requesting IP address, then line 7 dumps the request and a message into the logfile if the request comes from China. This might not be politically correct, but it is technically effective.

Listing 3: GeoIP Access

01 LoadModule geoip_module modules/mod_geoip.so
02 LoadModule security2_module modules/mod_security2.so
03 GeoIPEnable On
04 GeoIPDBFile /usr/tmp/GeoLiteCity.dat
05 SecRuleEngine On
06 SecGeoLookupDb /usr/tmp/GeoLiteCity.dat
07 SecRule REMOTE_ADDR "@geoLookup" "chain,drop,msg:'Connection attempt from .CN!'"
08 SecRule GEO:COUNTRY_CODE "@streq CN" "t:none"

Full Insurance Coverage

ModSecurity has an enormous feature scope, and it can take some time to understand it completely. But if you go to the trouble to plumb the depths of the module, it will pay dividends with comprehensive methods that give you additional protection against attacks on web applications. Thankfully, the prebuilt rulesets make it easier to get started. And, the vendor behind the project offers commercial products and services such as training. Armed with ModSecurity, administrators can sit up tall in their saddles, even if attackers are trying to make their horses bolt.

Info
[1] OWASP Best Practices: Use of Web Application Firewalls: http://www.owasp.org/index.php/Category:OWASP_Best_Practices:_Use_of_Web_Application_Firewalls
[2] ModSecurity: http://modsecurity.org
[3] ModSecurity Reference Manual: http://modsecurity.org/documentation/modsecurity-apache/2.5.10/html-multipage
[4] ModSecurity Processing Phases: http://www.modsecurity.org/documentation/modsecurity-apache/2.5.0/html-multipage/processing-phases.html
[5] ModSecurity Actions: http://www.modsecurity.org/documentation/modsecurity-apache/1.9.3/html-multipage/05-actions.html
[6] ModSecurity Variables: http://www.modsecurity.org/documentation/modsecurity-apache/2.1.0/html-multipage/05-variables.html
[7] Nikto: http://cirt.net/nikto2
[8] OWASP ModSecurity Core Rule Set Project: http://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project
[9] ModSecurity blog, "Virtual Patching During Incident Response: United Nations Defacement": http://blog.modsecurity.org/2007/08/27/
[10] Talks by the UN General Secretary: http://www.un.org/apps/news/infocus/sgspeeches/
[11] "Apache ModSecurity with GeoIP blocking country specific traffic: ModSecurity + GeoIP" by Suvabrata Mukherjee: http://linuxhelp123.wordpress.com/2008/12/11/apache

The Author
Sebastian Wolfgarten works as an IT Security Expert with the European Central Bank as an advisor, manager, and auditor of internal projects designed to improve the security of the IT infrastructure. Before this, he spent several years working for Ernst & Young AG in Germany, and as an Advisor for Information Security in Ireland. He has also worked as an IT security expert with T-Mobile Germany.
NUTS AND BOLTS: Daemon Monitoring

Watching the Daemons

© Shariff Che'Lah, 123RF.com

Administrators often write custom monitoring programs to make sure their daemons are providing the intended functionality. But simple shell tools are just as well suited to this task, and not just for systems that are low on resources. By Harald Zisler

Unix daemons typically go about their work discreetly in the background. The process table, which is output with the ps command, only shows you that these willing helpers have been launched, although in the worst case they could just be hanging around as zombies. Whether or not a daemon is actually working is not something that the process table will tell you. In other words, you need more granular diagnostics. The underlying idea is to write a "sensor" script for each service that performs a tangible check of its individual functionality.

Because almost every program outputs standardized exit codes when it terminates, you can use Unix conventions: 0 stands for error-free processing, whereas 1 indicates some problems were encountered. This value is stored in the $? shell variable, which a shell script evaluates immediately after launching the sensor.

Various programs are suitable for automated, "unmanned" access to the service provided by a given daemon; all of them will run in the shell without a GUI. These programs often provide an option (typically -q) that suppresses output, and this is fine for accessing the exit code. Error logs are obtainable by redirecting the error output to a file or, if available, by setting the corresponding program option. The only thing left to do is to find the matching client program to test the functionality of each service.

Web Servers

To check a web server, you could use wget. The shell script command line for this would be:

wget --spider -q ip-address

The --spider option tells wget to check that the page exists but not to load it. Defining the IP address instead of the hostname avoids a false positive if DNS-based name resolution fails for some reason.
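Such a sensor amounts to just a few lines of shell around the exit code. The following sketch uses a placeholder probe command so it can run anywhere; the commented-out wget line with the hypothetical address 192.0.2.10 shows where the real check would go:

```shell
#!/bin/sh
# sensor: run a probe command and turn its exit code into a verdict
# probe="wget --spider -q 192.0.2.10"   # real check, hypothetical address
probe="true"                            # placeholder that always succeeds

if $probe
then
    verdict="OK"        # probe exited 0: service answered
else
    verdict="FAILED"    # non-zero exit code: service did not answer
fi
echo "web server check: $verdict"
```

Swapping the probe variable for a different client program (psql, lpq, and so on) turns the same skeleton into a sensor for any other daemon.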
… vidual printers, you additionally need … then grep the exit code. To make sure the exit code complies with this behavior, Grep checks the output that …

while true
do
    lpq -Plp | grep -q "lp is ready"
    …
done

… need to modify the search string for grep. … groups to own the script and the … daemons).

Database Restart

… successfully restarted (Figure 1). If it can't start the daemon, it waits …

if [ $? -eq 2 ];
then
    echo "$time: Database is not accessible! ****************" >> dba.log
    /usr/local/etc/rc.d/002pgsql.sh start
    sleep 15
    …
    echo "$time: Database: serious error! ***************" >> dba.log
    echo "$time: Unable to restart! ****************" >> dba.log
    while true
    do
        psql -U monitor -d monitor -c "select * from watch;"
        …
    done
    sleep 15
fi

© Maxim Kazmin, 123RF.com
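The listing fragments above all follow the same pattern: probe, log, optionally restart, sleep, repeat. A condensed, self-contained sketch of that loop follows; the restart path and service are hypothetical stand-ins, and the loop is bounded here for illustration, whereas a real watchdog would run with while true and a longer sleep:

```shell
#!/bin/sh
# minimal daemon watchdog: probe a service and react to its exit code
probe() {
    true    # stand-in sensor; replace with e.g.: psql -U monitor -d monitor -c "select 1;"
}

checks=0
while [ $checks -lt 3 ]          # a real watchdog loops forever: while true
do
    if probe
    then
        status="OK"
    else
        status="DOWN"
        # /usr/local/etc/rc.d/002pgsql.sh start   # hypothetical restart hook
    fi
    checks=$((checks + 1))
    sleep 1                      # a real watchdog would wait longer, e.g. 15 seconds
done
echo "performed $checks checks, last status: $status"
```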
Private Affair

Because Microsoft's legacy VPN protocol, PPTP, has a couple of vulnerabilities, SSTP, which routes data via an SSL connection, was introduced as the new VPN protocol with Vista, Windows Server 2008, and Windows 7. By Thomas Drilling

Virtual private networks (VPNs) have established themselves as a standard solution for convenient remote access to enterprise networks. However, they can cause some issues in combination with standard tunneling protocols like PPTP if, for example, NAT routers are involved or you need to work around the local firewall. Typically, it is not in the administrator's best interest to modify the firewall, NAT, or proxy configuration to suit requirements for remote access. The Secure Socket Tunneling Protocol (SSTP), which was introduced with Microsoft Windows Server 2008, provides a solution by setting up a VPN tunnel that encapsulates PPP or L2TP traffic on a Secure Sockets Layer (SSL) channel (Figure 1).

For administrators, this means that SSTP is a new VPN tunnel type in the Windows Server 2008 routing and RAS server role. It encapsulates PPP (point-to-point protocol) packets in HTTPS, thus supporting the VPN connection through a firewall, a NAT device, or a proxy. Like all SSL VPNs, SSTP uses TCP port 443 (HTTPS) for data transfer. Compared with other commercial or proprietary solutions (e.g., IPsec, L2TP, or PPTP), the advantage is that port 443 is open in almost any router or server configuration, and SSTP packets can thus pass through without any additional configuration overhead. The "Handshake" box explains what Windows does with all of these protocols.

Strong encryption in SSL 3.0 ensures maximum security and performance. SSTP VPNs are thus a class of SSL VPNs, like Cisco's WebVPN or the Vigor Router by Draytek, that basically work in the same way as IPsec, L2TP, or PPTP, but use SSL to handle the data transfer. Because SSTP encapsulates complete IP packets, the connections act just like a PPTP or IPsec tunnel on the client side. According to the Microsoft definition, SSTP is a protocol mainly intended for dialup connections in the application layer that guarantees the confidentiality, authenticity, and integrity of the data to be transferred. A public key infrastructure (PKI) is used for authentication purposes. Microsoft introduced SSTP with Windows Server 2008 and Vista SP1. Today, Windows Server 2008 R2 and Windows 7 also support SSTP [1]. But what does SSTP offer the administrator, and how do you set up a VPN server with SSTP?

Handshake

Microsoft's SSTP is basically another proprietary implementation of an SSL VPN. SSTP relies on standards such as SSL and TLS for encryption and authentication, but Microsoft has modified the tried-and-trusted SSL handshake by introducing proprietary extensions that bind in the proprietary PPP protocol. If you look at the basic handshake, SSTP at first keeps to the standard SSL handshake procedure:
1. The client opens a connection to TCP port 443 on the server.
2. The client sends an "SSL Session Setup Message" to indicate that it wants to set up an SSL connection to the server.
3. The server sends an SSL certificate to the client.
4. The client validates the server certificate, identifies the correct encryption method for the SSL session, and creates a session key, which it encrypts using the public key from the server certificate.
5. The client sends the encrypted SSL session key to the server.
6. The server decrypts the client's SSL session key using its own private key and encrypts the communications with the session key. Up to this point, the procedure is no different from standard SSL communication. However, Microsoft then implements additional handshake steps that build on what has happened thus far.
7. The client sends an HTTP-over-SSL request message to the server and negotiates an SSTP tunnel with the server.
8. After this, the client negotiates a PPP connection with the SSTP server, which includes authenticating the user's login credentials with a PPP authentication method and configuring the settings for the data traffic.
9. The client starts to transfer data via the PPP connection.

Sample Scenario

In this example, I am using Windows Server 2008 R2 to provide an SSTP-based VPN server behind a NAT device. The server is configured as the domain controller and needs two network cards for the VPN setup. At least one card (preferably both) needs a static IP address. The client will be MS Windows 7.
The server and clients are both members of an Active Directory (AD) domain. Additionally, the server and clients should have all the current updates in place, such as the current Service Pack 2 for Windows Server 2008 R2. After installing a Windows server, you need to install the Active Directory Domain Services to make the server a domain controller. The easiest way to do this is to run dcpromo at the command line and then follow the wizard.
The server also needs to provide DNS, DHCP, and certificate services, which you can achieve by configuring the matching server roles in the role wizard. VPN functionality is also provided via the Network Policy and Access Services server role.

PKI

Because SSTP uses HTTPS (port 443) to handle all the data traffic, there is no alternative to configuring a public key infrastructure. At a minimum, this means installing at least one certificate on the SSTP server and a root certificate authority certificate on all SSTP VPN clients. You might have to modify the packet filter rules, too, even though SSTP doesn't actually need any additional NAT configuration because port 443 is typically open. The "Port Customizer" box explains how you can use a port other than 443 for SSTP.
Next, you'll need to configure an Active Directory-integrated root certification authority on the domain controller. In combination with a group policy, this causes clients that are domain members to request certificates automatically when they open a connection. Certificates are then issued …

Port Customizer

SSTP normally uses TCP port 443, which is open in most router and NAT configurations. Security-conscious Windows administrators might prefer to modify the standard port used by SSTP. To do so, you need to edit the following registry key in the Registry Editor:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SstpSvc\Parameters

Look for the ListenerPort parameter. Changing the view to Decimal (right-click) lets you specify a different port. Then you need to restart the Routing and RAS service. If you change the ListenerPort, you need to reconfigure your NAT device to match and forward all incoming traffic addressed to port 443 to the newly configured port on the SSTP-based VPN server.

Figure 1: The SSTP handshake is not much different from a standard SSL handshake. In contrast to IPsec, SSTP sends PPP packets (not IP packets) through the tunnel. (Diagram: SSTP VPN connection on TCP port 443 between the SSTP VPN client and the SSTP gateway server – 1. Open TCP connection; 2. Set up SSL connection and validate certificate; 3. HTTPS request; 4. Initiate SSL tunnel; 5. Data communication via PPP.)

Figure 2: If you are unable to request a certificate, you can check the privileges in the certificate template for web servers to see whether Enroll is allowed.

Figure 3: A CRL distribution point controls the availability of the certificate revocation list, which all clients need to be able to access at any time.

Figure 4: You need to set up the RAS services to run a VPN server.
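The registry change from the "Port Customizer" box can also be made from an elevated command prompt instead of the Registry Editor, using the standard reg add tool. This is a sketch; the port number 8443 is an arbitrary example:

```shell
reg add HKLM\SYSTEM\CurrentControlSet\Services\SstpSvc\Parameters /v ListenerPort /t REG_DWORD /d 8443 /f
```

As with the manual edit, the Routing and RAS service must be restarted afterwards for the new port to take effect.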
WRITE FOR US
Editor in Chief
Joe Casad, jcasad@admin-magazine.com
Managing Editor
Rita L Sooby, rsooby@admin-magazine.com
Contributing Editors
Admin: Network and Security is looking • unheralded open source utilities Oliver Frommel, Uli Bantle, Andreas Bohle,
for good, practical articles on system ad- • Windows networking techniques that Jens-Christoph Brendel, Hans-Georg Eßer,
Markus Feilner, Marcel Hilzinger, Mathias Huber,
ministration topics. We love to hear from aren’t explained (or aren’t explained Anika Kehrer, Kristian Kißling, Jan Kleinert,
IT professionals who have discovered well) in the standard documentation. Daniel Kottmair, Thomas Leichtenstern,
Jörg Luther, Nils Magnus
innovative tools or techniques for solving We need concrete, fully developed solu-
Localization & Translation
real-world problems. tions: installation steps, configuration Ian Travis
Tell us about your favorite: files, examples – we are looking for a Proofreading & Polishing
• interoperability solutions complete discussion, not just a “hot tip” Amber Ankerholz
Layout
• practical tools for cloud environments that leaves the details to the reader. Klaus Rehfeld, Judith Erb
• security problems and how you solved If you have an idea for an article, send Cover
them a 1-2 paragraph proposal describing your Illustration based on graphics by James Thew,
123RF
• ingenious custom scripts topic to: edit@admin-magazine.com.
Advertising
www.admin-magazine.com/Advertise
United Kingdom and Ireland
Penny Wilby, pwilby@admin-magazine.com
Phone: +44 1787 211 100
North America
Amy Phalen, aphalen@admin-magazine.com
Phone: +1 785 856 3434
All other countries
Hubert Wiest, anzeigen@admin-magazine.com
Phone: +49 89 9934 1123
Corporate Management (Vorstand)
Hermann Plank, hplank@linuxnewmedia.com
Brian Osborn, bosborn@linuxnewmedia.com
Management North America
Brian Osborn, bosborn@linuxnewmedia.com
Associate Publisher
Rikki Kite, rkite@linuxnewmedia.com
Product Management
Hans-Jörg Ehren, hjehren@linuxnewmedia.com
Customer Service / Subscription
For USA and Canada:
Email: subs@admin-magazine.com
Phone: 1-866-247-2802 (Toll Free from the US and Canada)
Fax: 1-785-274-4305
For all other countries:
Email: subs@admin-magazine.com
Phone: +49 89 9934 1167
Fax: +49 89 9934 1199
Admin Magazine • c/o Linux New Media • Putzbrunner Str 71 • 81739 Munich • Germany
www.admin-magazine.com
While every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the DVD provided with the magazine or any material provided on it is at your own risk.
Copyright and Trademarks © 2010 Linux New Media Ltd.
No material may be reproduced in any form whatsoever in whole or in part without the written permission of the publishers. It is assumed that all correspondence sent, for example, letters, emails, faxes, photographs, articles, and drawings, are supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing.
Printed in Germany
Distributed by COMAG Specialist, Tavistock Road, West Drayton, Middlesex, UB7 7QE, United Kingdom
Admin Magazine ISSN 2045-0702
Admin Magazine is published by Linux New Media USA, LLC, 719 Massachusetts, Lawrence, KS 66044, USA, and Linux New Media Ltd, Manchester, England. Company registered in England.
Linux is a trademark of Linus Torvalds.
AUTHORS
Falko Benthin 14
Björn Bürstinghaus 25, 60
Thomas Drilling 52, 94
Florian Effenberger 42
Dan Frost 48
Thomas Joos 74
Daniel Kottmair 64
Caspar Clemens Mierau 20
James Mohr 28
Thorsten Scherf 8, 44, 78
Tim Schürmann 68
Udo Seidel 36
Kurt Seifried 34
Sebastian Wolfgarten 86
Harald Zisler 92
BRAND NEW DELL RANGE WITH WINDOWS OR LINUX (priced per month)
PROCESSOR    Intel® Xeon       Intel® Xeon       Intel® Xeon       Intel® Xeon       Intel® Xeon
CORES        Quad 2.4GHz       Quad 2.66GHz      Quad 2.66GHz      Quad 2.26GHz      2 x Quad 2.26GHz
MEMORY       1GB DDR2 ECC      4GB DDR3 ECC      8GB DDR3 ECC      8GB DDR3 ECC      16GB DDR3 ECC
HARD DISKS   1 x 160GB SATA    2 x 250GB SATA    2 x 500GB SATA    2 x 1000GB SATA   4 x 1000GB SATA
RAID         None              Hardware RAID     Hardware RAID     Hardware RAID     Hardware RAID
CONNECTION   100 Mbps          100 Mbps          100 Mbps          100 Mbps          100 Mbps
OR BUILD YOUR OWN FULLY CUSTOMISED DELL DEDICATED SERVER AT: linux.redstation.com