










26 Improve Python Code by
Using a Profiler
30 Understanding the
Document Object Model
(DOM) in Mozilla
40 Introducing AngularJS
45 Use Bugzilla to Manage
Defects in Software
48 An Introduction to Device
Drivers in the Linux Kernel
52 Creating Dynamic Web
Portals Using Joomla and
56 Compile a GPIO Control
Application and Test It On
the Raspberry Pi
59 Use Pound on RHEL to
Balance the Load on Web
67 Boost the Performance of
CloudStack with Varnish
74 Use Wireshark to
Detect ARP Spoofing
77 Make Your Own PBX with Asterisk
80 How to Make Your USB Boot
with Multiple ISOs
86 Contiki OS Connecting
Microcontrollers to the
Internet of Things

08 You Said It...
09 Offers of the Month
10 New Products
13 FOSSBytes
25 Editorial Calendar
100 Tips & Tricks
105 FOSS Jobs
Experimenting with More Functions in Haskell
Why We Need to Handle Bounced Emails
Online access to old issues
I want all the issues of OSFY from 2011, right up to the
current issue. How can I get these online, and what would be
the cost?
C. Kiran Kumar;
ED: It feels great to know that we have such valuable readers.
Thank you, Kiran, for bringing this request to us. You can avail
all the back issues of Open Source For You in e-zine format from
Request for a sample issue
I am with a company called Relia-Tech, which is a brick-
and-mortar computer service company. We are interested in
subscribing to your magazine. Would you be willing to send us a
magazine to check out before we commit to anything?
Lindsay Steele;
ED: Thanks for your mail. You can visit our website www.ezine. and access our sample issue.
A thank-you and a request for more help
I began reading your magazine in my college library and
thought of offering some feedback.
I was facing a problem with Oracle VirtualBox, but after
reading an article on the topic in OSFY, the task became so easy.
Thanks for the wonderful help. I am also trying to set up
my local (LAN-based) Git server, but I have no idea how to
go about it. I have worked a little with GitHub. I do wish your
magazine would feature content on this topic in upcoming issues.
Abhinav Ambure;
ED: Thank you so much for your valuable feedback. We
really value our readers and are glad that our content proves
Share Your
Please send your comments
or suggestions to:
Open Source For You,
The Editor,
D-87/1, Okhla Industrial Area, Phase I,
New Delhi 110020, Phone: 011-26810601/02/03,
Fax: 011-26817563, Email:
helpful to them. We will surely look into your request and
try to include the topic you have asked for in upcoming
issues. Keep reading OSFY and continue sending us your feedback.
Annual subscription
I've bought the July 2014 issue of OSFY and I loved
it. I want the latest version of Ubuntu 14.04 LTS and the
programming tools (the JDK and other tools for C, C++, Java and
Python). Also, how can I subscribe to your magazine for one
year, and can I get it at my village (address enclosed)?
Parveen Kumar;
ED: Thank you for the compliments. We're glad to know that
you enjoy reading our magazine. We will definitely look into
your request. Also, I am forwarding your query regarding
subscribing to the magazine to the concerned team. Please
feel free to get back to us in case of any other suggestions or
questions. We're always happy to help.
Availability of OSFY in your city
I want to purchase Open Source For You for the
library in my organisation, but I am unable to find copies
in the city I live in (Jabalpur in Madhya Pradesh). I cannot
go in for a subscription either. Please give me the name
of the distributor or dealer in my city through whom I can
purchase the magazine.
Gaurav Singh;
ED: We have a website where you can locate the nearest store
in your city that supplies Open Source For You. Do log on
to it, and you will find
there are two dealers of the magazine in your city: Sahu News
Agency (Sanjay Sahu, Ph: 09301201157) and Janta News
Agency (Harish, Ph: 09039675118). They can ensure regular
supply of the magazine to your organisation.
Get 10%
Free Dedicated hosting/VPS for one
month. Subscribe for annual package
of Dedicated hosting/VPS and get
one month FREE
Reseller package special offer !
Contact us at 09841073179
or Write to
No condition attached for trial of our
cloud platform
(Free Trial Coupon)
Enjoy & Please share Feedback at
For more information, call us on
1800-212-2022 / +91-120-666-7718
Offer valid till 30 September 2014!
Free Dedicated Server Hosting
for one month
For more information, call us
on 1800-209-3006/ +91-253-6636500
Subscribe for our Annual Package of Dedicated
Server Hosting & enjoy one month free service
Subscribe for the Annual Packages of
Dedicated Server Hosting & Enjoy Next
12 Months Free Services
For more information, call us on
1800-212-2022 / +91-120-666-7777
Offer valid till 30 September 2014!
Pay Annually & get 12 Months Free
Services on Dedicated Server Hosting
Get 35% off on course fees and if you appear
for two Red Hat exams, the second shot is free
Contact us @ 98409 82184/85 or
Write to
Offer valid till 30 September 2014!
Do not wait! Be a part of
the winning team
Contact us at 98769-44977 or
Write to
Get 25%
Considering VPS or a Dedicated
Server? Save Big !!! And go
with our ProX Plans
Offer valid till 30 September 2014!
ProX
Time to go PRO now
25% Off on ProX Plans - Ideal for running
High Traffic or E-Commerce Website
Coupon Code : OSFY2014
Contact us at +91-98453-65845 or
Write to
Embedded RTOS - Architecture, Internals
and Programming - on ARM platform
Date: 20-21 Sept 2014 (2-day programme)
Faculty: Mr. Babu Krishnamurthy,
Visiting Faculty, CDAC/ACTS, with 18 years
of industry and faculty experience
To advertise here, contact
Omar on +91-995 888 1862 or
011-26810601/02/03 or
Write to
(all inclusive)
Ubuntu 14.04.1 LTS is out
Ubuntu 14.04 LTS has been around for quite some time now, and most people
must have upgraded to it. A smaller update, 14.04.1, is now ready. Canonical
has announced that this Ubuntu update fixes many bugs and includes security
updates. There is also a list of bugs and other updates in Ubuntu 14.04.1 that
you might want to have a look at, in order to see the scope of this update. If you
haven't upgraded to 14.04.1 yet, do so as soon as possible. It is a worthy upgrade
if you use an older version of Ubuntu.
Android Device Manager makes it easier to
search for lost phones!
Google has released an update to Android Device
Manager that gives users better security for their
devices. This latest version, 1.3.8, lets users add
a phone number to the remote locking screen, and
the lock screen password can also be changed. An
optional message can be set up as well. If a phone
number is added, a big green button saying 'Call
owner' will appear on the lock screen, so that
whoever finds the lost phone can easily contact
the owner. Earlier, users could only add a message.
The call-back number can be set up through the Android
Device Manager app as well as the Web interface,
if another Android device is not at hand. Both
the message and call-back features are optional,
though it's highly recommended that they are used so that a lost
phone can be easily recovered.
Ubuntu's Amazon shopping feature complies
with UK Data Protection Act
The independent body investigating
the implementation of Ubuntu's Unity
Shopping Lens feature and its compliance
with the UK Data Protection Act (DPA) of
1998 has found no instance of Canonical
being in breach of the act. Ubuntu's
controversial Amazon shopping feature
has been found to be compliant with the
relevant data protection and privacy laws
in the UK, something that was checked in response to a complaint filed by blogger
Luis de Sousa last year. Notably, the feature sends queries made in the Dash to an
intermediary Canonical server, which forwards them to Amazon. The e-commerce
giant then returns product suggestions matching the query back to the Dash. The
feature also sends out non-identifiable location data in the process.
VLC 2.1.5 has been released
VideoLAN has announced the
release of the final update in the
2.1.x series of its popular open
source, cross-platform media player
and streaming media
server: the VLC media
player. VLC 2.1.5 is
now available for
download and
installation on
Windows, Mac and
Linux operating systems. Notably,
the next big release for the VLC
media player will be that of the
2.2.x branch. A careful look at the
change log reveals that although the
VLC 2.1.5 update has been released
across multiple platforms, the most
noticeable improvements are for OS
X users. Others could consider it as a
minor update.
For OS X users, VLC 2.1.5
brings additional stability
to the QTSound capture module as
well as improved support for Retina
displays. Other notable changes
include compilation fixes for the
OS/2 operating system.
Also, MP3 file conversions will no
longer be renamed .raw under the
Qt interface following the update. A
few decoder fixes benefit
DxVA2 sample decoding, MAD
resilience with broken MP3 streams
and PGS alignment in MKV files.
In terms of security, the new release
comes with fixes for GnuTLS and
libpng as well. One should remember
that VLC is a portable, free and open
source, cross-platform media player
and streaming media server written by
the VideoLAN project. It supports
many audio and video compression
methods and file formats, and comes
with a large number of free decoding
and encoding libraries, thereby
eliminating the need to find or
calibrate proprietary plugins.
Powered by | OPEN SOURCE FOR YOU | SEPTEMBER 2014 | 13
Name, Date and Venue; Description; Contact Details and Website

4th Annual Datacenter Dynamics Converged; September 18, 2014
The event aims to assist the community in the data centre domain by exchanging
ideas, accessing market knowledge and launching new initiatives.
Contact: Praveen Nair; Email: Praveen.nair@; Ph: +91 9820003158; Website:

Gartner Symposium IT Xpo; October 14-17, 2014; Grand Hyatt, Goa
CIOs and senior IT executives from across the world will gather at this event,
which offers talks and workshops on new ideas and strategies in the IT industry.

Open Source India; November 7-8, 2014; NIMHANS Center, Bengaluru
Asia's premier open source conference, which aims to nurture and promote the
open source ecosystem across the sub-continent.
Contact: Omar Farooq; Email: omar.farooq@; Ph: 09958881862

November 12-14, 2014; BIEC, Bengaluru
This is one of the world's leading business IT events, and offers a combination
of services and benefits that will strengthen the Indian IT and ITES markets.

5th Annual Datacenter Dynamics Converged; December 9, 2014; Riyadh
The event aims to assist the community in the datacentre domain by exchanging
ideas, accessing market knowledge and launching new initiatives.
Contact: Praveen Nair; Email: Praveen.nair@; Ph: +91 9820003158; Website:

December 12-13, 2014; NCPA, Jamshedji Bhabha Theatre, Mumbai
This event will be attended by Web hosting companies, Web design companies,
domain and hosting resellers, ISPs and SMBs from across the world.
According to Sousa, the Shopping Lens implementation contravened a
1995 EU Directive on the protection of users' personal data, and he had provided
a number of instances to support his point. Initially, Sousa reached
out to Canonical for clarification, but to no avail. He was finally forced to file a
complaint with the Information Commissioner's Office regarding his privacy
concerns. Finally, the ICO responded to Sousa's need for clarification by clearly
stating that the Shopping Lens feature complies with the DPA (Data Protection Act)
and in no way breaches users' privacy.
Oracle launches Solaris 11.2 with OpenStack support
Oracle Corp recently launched the latest
version of its Solaris enterprise UNIX
platform: Solaris 11.2. Notably, this new
version had been in beta since April. The
latest release comes with several key
enhancements: support for OpenStack
as well as software-defined networking
(SDN). Additionally, various
security, performance and compliance
enhancements have been introduced in Oracle's
new release. Solaris 11.2 comes with OpenStack integration, which is perhaps its
most crucial enhancement. The latest version runs the most recent version of the
popular toolbox for building clouds: OpenStack Havana. Meanwhile, the inclusion
of software-defined networking (SDN) support is seen as part of Oracle's ongoing effort to
transform its Exalogic Elastic Cloud boxes into one-stop data centres. Until now, Exalogic
boxes were being increasingly used as massive servers or for transaction
processing. They were therefore not fulfilling their real purpose, which is to work
Here's what's new in Linux 3.16
The founder of Linux, Linus Torvalds,
recently announced the release of the stable
build of Linux 3.16. This version is known
to developers as 'Shuffling Zombie Juror'.
There are a host of improvements and new
features in this new stable build of Linux.
These include new and improved drivers,
and some complex internal improvements
like a unified control hierarchy. This new
Linux 3.16 stable version will form the basis
of the Ubuntu 14.10 kernel; LTS version
users will get this update once the 14.10
kernel is released.
Shutter 0.92 for Linux released
and fixes a number of bugs
Users have had some trouble using the
popular Shutter screenshot tool for Linux,
owing to the many irritating bugs and
stability issues that came along with it. But
they are in for a pleasant surprise, as
developers have now released a new bug-fix
version of the tool that aims to address some
of its more prominent issues. The new
release, Shutter 0.92, is now available for
download for the Linux platform, and a
number of stability issues have been dealt
with for good.
Open source community irked
by broken Linux kernel patches
One of the many fine threads that bind the
open source community is the avid participation
and cooperation between developers across
the globe, with the common goal of improving
the Linux kernel. However, not everyone out
there is actually trying to help, as recent
happenings suggest. Trolls exist even in the
Linux community, and one who has managed
to make a big impression is Nick Krause.
Krause's recent antics have led to significant
bouts of frustration among Linux kernel
maintainers. Krause continuously tries to get
broken patches past the maintainers; only,
his goals are not very clear at the moment.
Many developers believe that Krause aims to
damage the Linux kernel. While that might
be a distant dream for him (at least for now),
he has managed to irk quite a lot of people,
slowing down the whole development process
because of the need to keep fixing broken
patches introduced by him.
as cloud-hosting systems. However, with SDN support added, Oracle is aiming
to change all this. Oracle plans to directly take on network equipment makers
like Cisco, Hewlett-Packard and Brocade with the introduction of Solaris 11.2.
Enterprises using Solaris can now simply purchase a handful of Solaris boxes and
run their mission-critical clouds. In addition, they can also use bits of OpenStack
without acquiring additional hardware.
Canonical launches Ubuntu 12.04.5 LTS
Marking its fifth point release, Canonical has announced that Ubuntu 12.04.5 LTS
is available for download and installation. Ubuntu 12.04
LTS was first released back in April 2012. Canonical
will continue supporting the LTS until 2017 with regular
updates from time to time. This is also the first major
release for Canonical since the debut of Ubuntu 14.04
LTS earlier this year. The most notable improvement
in the new release is the inclusion of an updated kernel
(3.13) and stack, both of which have been carried over
from Ubuntu 14.04 LTS. The new release is out now for
desktop, server, cloud and core products, as well as other
flavours of Ubuntu with long-term support. In addition, the new release also comes
with security updates and corrections for other high-impact bugs, with a focus
on maintaining stability and compatibility with Ubuntu 12.04 LTS. Meanwhile,
Kubuntu 12.04.5 LTS, Edubuntu 12.04.5 LTS and Ubuntu Studio 12.04.5 LTS are
also available for download and installation.
Storm Energy's SunSniffer charmed by Raspberry Pi!
The humble Raspberry Pi single board computer is indeed going places, receiving critical
acclaim for, well, being downright awesome. The latest to be smitten by it is the German
company Storm Energy, which builds products like SunSniffer, a
solar plant monitoring system. The SunSniffer system is designed
to monitor photovoltaic (PV) solar power installations of varied
sizes. The company has now upgraded the system to a Linux-
based platform running on a Raspberry Pi. In addition, the
latest SunSniffer version also comes with a custom expansion
board and a customised Linux OS. The SunSniffer is IP65-rated,
and the new Connection Box's custom Raspberry Pi expansion
board comes with five RS-485 ports and eight analogue/digital
I/O interfaces to help simultaneously monitor a wide variety
of solar inverters (Refusol, Huawei and Kostal, among others). In short, the new system
can remotely control solar inverters via a radio ripple control receiver, whereas earlier
versions could only monitor their data.
The Raspberry Pi-laden SunSniffer also offers SSL encryption and optional
integrated anti-theft protection.
Italian city of Turin switching to open source technology
In a recent development, the Italian city of Turin is considering ditching all
Microsoft products in favour of open source alternatives. The move is directly
aimed at cutting government costs, while not compromising on functionality. If
Turin does get rid of all proprietary software, it will go on to become one of the first
Italian open source cities and save itself at least a whopping six million euros. A
report suggests that as many as 8,300 computers of the local administration in Turin
will soon have Ubuntu under the hood and will be shipped with the Mozilla Firefox
Android-x86 4.4 R1
Linux distro available for
download and testing
The team behind Android-x86
recently launched version 4.4 R1 of
its port of the Android OS, designed
specifically for the x86 platform.
Android-x86 4.4 KitKat is now
available for download and testing
on your PC. Android is actually based on a
modified Linux kernel, with many
believing it to be a standalone Linux
distribution in its own right. That
said, developers have managed to
tweak Android and port it to the
PC and the x86 platform; that's what
Android-x86 is really all about.
Linux Mint Debian edition
to switch from snapshot
cycle to Debian stable
package base
The team behind Linux Mint has
decided to let go of the current
snapshot cycle in the Debian edition
of the distribution, and instead
switch over to a Debian stable
package base. The main Linux
Mint editions are based on Ubuntu,
and the team is most likely to stick
to that for at least a couple of years.
The team recently launched the
latest iteration of Linux Mint, a.k.a.
Qiana. Both the Cinnamon and
MATE versions are now available for
download, with the KDE and Xfce
versions expected to come out soon.
Meanwhile, it has been announced
that the next three Linux Mint
releases would also, in all probability,
be based on Ubuntu 14.04 LTS.
Web browser and OpenOffice, two joys of the open source world. The local
government has argued that a large amount of money is spent on buying licences
for proprietary software, wasting a lot of the local taxpayers' money. Therefore,
a decision to drop Microsoft in favour of cost-effective open source alternatives
seems to be a viable option.
LibreOffice coming to Android
LibreOffice needs no introduction. The Document Foundation's popular open
source office suite is widely used by millions of people across the globe. Therefore,
news that the suite could soon be
launched on Android is something to
watch out for. You heard that right! A
new report by Tech Republic suggests
that the Document Foundation is
currently working rigorously to
make this happen. However, as things
stand, there is still some time before that happens for real. Even as the Document
Foundation came out with the first Release Candidate (RC) version of the upcoming
LibreOffice 4.2.5 recently (it has been quite consistent in updating its stable version
on a timely basis), work is on to make LibreOffice available for Google's much
loved Android platform as well, the report says. The buzz is that developers
are currently talking about (and working at) getting the file size right, that is,
something well below the Google limit. Until they are able to do that, LibreOffice
for Android remains a distant dream, sadly.
However, as and when this happens, LibreOffice would be in direct competition
with Google Docs. Since there is a genuine need for Open Document Format (ODF)
support in Android, the release might just be what the doctor ordered for many users.
This is more of a rumour at the moment, and things will get clearer in time. There is
no official word from either Google or the Document Foundation about this, but we
will keep you posted on developments. The recent release, the LibreOffice 4.2.5
RC1, meanwhile, tries to curb many key bugs that plagued the last 4.2.4 final release.
This, in turn, has improved its usability and stability to a significant extent.
RHEL 6.6 beta is released; draws major inspiration from RHEL 7
Just so RHEL 6.x users (who wish to continue with this branch of the distribution for
a bit longer) don't feel left out, Red Hat has launched a beta release of its Red Hat
Enterprise Linux 6.6 (RHEL 6.6) platform. Taking much of its inspiration from the
recently released RHEL 7, the move is directed towards RHEL 6.x users, so that they
benefit from new platform features. At the same time, it comes with some really cool
features that are quite independent of RHEL 7 and that make the 6.6 beta stand out
on its own merits. Red Hat offers Application Binary Interface (ABI) compatibility
for RHEL for a period of ten years, so, technically speaking, it cannot drastically
change major elements of an in-production release. Quite simply put, it can't and
won't change an in-production release in a way that could alter stability or existing
compatibility. This would eventually mean that the new release on offer cannot go
much against the tide with respect to RHEL 6. Although the feature list for the RHEL
6.6 beta ties in closely with the feature list of the major release (6.0), it doesn't
mean RHEL 6.6 beta is simply old wine served in a new bottle. It does manage to
introduce some key improvements for RHEL 6.x users. To begin with, RHEL 6.6
beta includes some features that were first introduced with RHEL 7, the most notable
being Performance Co-Pilot (PCP). The new beta release will also offer RHEL 6.x
users more integrated Remote Direct Memory Access (RDMA) capabilities.
Khronos releases OpenGL NG
The Khronos Group recently announced
the release of the latest iteration of
OpenGL (the oldest high-level 3D
graphics API still in popular use).
Although OpenGL 4.5 is a noteworthy
release in its own right, the Group's
second major release, in the next
generation OpenGL initiative, is garnering
widespread appreciation. While OpenGL
4.5 is what some might call a fairly
standard annual OpenGL update, OpenGL
NG is a complete rebuild of the OpenGL
API, designed with the idea of building an
entirely new version of OpenGL. This new
version will have significantly reduced
overhead, owing to the removal of a lot
of abstraction. It will also do away with
the major inefficiencies of older versions
when working at a low level with the bare
metal GPU hardware.
Being a very high-level API, earlier
versions of OpenGL made it hard to
run code efficiently on the GPU directly.
While this didn't matter so much earlier,
things have now changed. Fuelled by
more mature GPUs, developers today
tend to ask for graphics APIs that allow
them to get much closer to the bare
metal. The next generation OpenGL
initiative is directed at developers who
are looking to improve performance and
reduce overhead.
Dropbox's updated Android
app offers improved features
A major update has been announced
by Dropbox for its official Android
app, and it is available on Google
Play. This new update carries version
number 2.4.3 and comes with a lot of
improved features. As the Google Play
listing suggests, this new Dropbox
version supports in-app previews of
Word, PowerPoint and PDF files. A
better search experience is also offered
in this new version, which keeps track
of recent queries and also displays
suggestions. One can also search in
specific folders from now onwards.
CPU socket
The central processing unit is the key component on a motherboard,
and the board's performance is primarily determined by the kind of
processor it is designed to hold. The CPU socket can be defined
as an electrical component that attaches to the
motherboard and is designed to house a microprocessor. So,
when you're buying a motherboard, you should look for a CPU
socket that is compatible with the CPU you plan to use.
Most of the time, motherboards use one of the following five
sockets -- LGA1155, LGA2011, AM3, AM3+ and FM1. Some
of the sockets are backward compatible and some of the chips
are interchangeable. Once you opt for a motherboard, you will be
limited to using the processors that offer similar specifications.
Form factor
A motherboard's capabilities are broadly determined by its
shape, size and how much it can be expanded; these aspects
are known as its form factor. Although there is no fixed design or
form for motherboards, and they are available in many variations,
two form factors have always been the favourites -- ATX and
microATX. The ATX motherboard measures around 30.5cm
x 23cm (12 inch x 9 inch) and offers the highest number of
expansion slots, RAM bays and data connectors. MicroATX
motherboards measure 24.38cm x 24.38cm (9.6 inch x 9.6 inch) and
have fewer expansion slots, RAM bays and other components.
The form factor of a motherboard can be decided according to
what purpose the motherboard is expected to serve.
RAM bays
Random access memory (RAM) is considered the most important
workspace of a PC, where data is worked on after being retrieved
from the hard disk drive or solid state drive. The
efficiency of your PC directly depends on the speed and size of your
RAM: the more space you have on your RAM, the more efficient
your computing will be. But it's no use having RAM with greater
capability than your motherboard can support, as that will just be a
waste of the extra potential. Neither can you have RAM with lower
capability than the motherboard expects, as then the PC will not work well
due to the bottlenecks caused by mismatched capabilities. Choosing
a motherboard which supports just the right RAM is vital.
Apart from these factors, there are many others to consider before
selecting a motherboard. These include the audio system, display,
LAN support, expansion capabilities and peripheral interfaces.
If you are a gamer, or like to customise your PC and build it from scratch, the motherboard is
what you require to link all the important and key components together. Let's find out how to
select the best desktop motherboards.
The central processing unit (CPU) can be considered to
be the brain of a system or, in layman's language, of a PC,
but it still needs a nervous system to be connected
with all the other components in your PC. A motherboard
plays this role, as all the components are attached to it and
to each other with the help of this board. It can be defined
as a PCB (printed circuit board) with the capability of
being expanded. As the name suggests, a motherboard is considered
to be the mother of all the components attached to it,
including network cards, sound cards, hard drives, TV tuner
cards, slots, etc. It holds the most significant sub-systems,
such as the processor, along with other important components.
Motherboards are found in many electronic devices, like TVs,
washing machines and other embedded systems. Since the motherboard
provides the electrical connections through which the other
components are connected and linked with each other, it needs
the most attention. Unlike a backplane, it hosts other devices and
subsystems and also contains the central processing unit.
There are quite a lot of companies that deal in
motherboards, and Simmtronics is among the leading players.
According to Dr Inderjeet Sabbrawal, chairman, Simmtronics,
"Simmtronics has been one of the exclusive manufacturers of
motherboards in the hardware industry over the last 20 years. We
strongly believe in creativity, innovation and R&D. Currently, we
are fulfilling our commitment to provide the latest mainstream
motherboards. At Simmtronics, the quality of the motherboards
is strictly controlled. At present, the market is not growing.
India still has a varied market for older generation models as well
as the latest models of motherboards."
Factors to consider while buying a motherboard
In a desktop, several essential units and components
are attached directly to the motherboard, such as the
microprocessor, main memory, etc. Other components, such
as the external storage controllers for sound and video display
and various peripheral devices, are attached to it through
slots, plug-in cards or cables. There are a number of factors to
keep in mind while buying a motherboard, and these depend
on the specific requirements. Linux is slowly taking over the
PC world and, hence, people now look for Linux-supported
motherboards. As a result, almost every motherboard now
supports Linux. The many factors to keep in mind when
buying a Linux-supported motherboard are discussed below.
Buyers Guide
The Lifeline of Your Desktop
A few desktop motherboards with the latest chipsets

Intel: DZ87KLT-75K
Supported CPU: Fourth generation Intel Core i7
processor, Intel Core i5 processor and other Intel
processors in the LGA1150 package
Memory supported: 32GB of system memory, dual
channel DDR3 2400+ MHz, DDR3 1600/1333 MHz
Form factor: ATX form factor

Simmtronics: SIMM-INT H61 (V3) motherboard
CPU supported: Intel 2nd and 3rd Generation
Core i7/i5/i3/Pentium/Celeron
Main memory supported: Dual channel DDR3
BIOS: 132MB Flash ROM
Connectors: 14-pin ATX 12V power connector
Chipset: Intel H61 (B3 Version)

Asus: Z87-K
Supported CPU: Fourth generation Intel Core
i7 processor, Intel Core i5 processor and other
Intel processors
Memory supported: Dual channel memory
architecture, supports Intel XMP
Form factor: ATX form factor

Gigabyte Technology: GA-Z87X-OC motherboard
CPU supported: Fourth generation Intel Core i7
processor, Intel Core i5 processor and other Intel processors
Memory supported: Supports DDR3 3000
Form factor: MicroATX

By: Manvi Saxena
The author is a part of the editorial team at EFY.
Sandya Mannarswamy
For the past few months, we have been discussing
information retrieval and natural language processing,
as well as the algorithms associated with them. This
month, we continue our discussion on natural language
processing (NLP) and look at how NLP can be applied
in the field of software engineering. Given one or more
text documents, NLP techniques can be applied to extract
information from them. The software
engineering (SE) lifecycle gives rise to a number of textual
documents, to which NLP can be applied.
So what are the software artifacts that arise in SE?
During the requirements phase, a requirements document
is an important textual artifact. This specifes the expected
behaviour of the software product being designed, in terms
of its functionality, user interface, performance, etc. It is
important that the requirements being specifed are clear
and unambiguous, since during product delivery, customers
would like to confrm that the delivered product meets all
their specifed requirements.
Having vague, ambiguous requirements can hamper
requirement verification. So text analysis techniques can
be applied to the requirements document to determine
whether there are any ambiguous or vague statements.
For instance, consider a statement like, "Servicing of user
requests should be fast, and request waiting time should
be low." This statement is ambiguous since it is not clear
what exactly the customer's expectations of fast service
or low waiting time may be. NLP tools can detect such
ambiguous requirements. It is also important that there are
no logical inconsistencies in the requirements. For instance,
a requirement that "Login names should allow a maximum
of 16 characters" and another that "The login database will
have a field for login names which is 8 characters wide"
conflict with each other. While the user interface allows
up to a maximum of 16 characters, the backend login
database will support fewer characters, which is inconsistent
with the earlier requirement. Though currently such inconsistent
requirements are flagged by human inspection, it is possible
to design text analysis tools to detect them.
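A lexical check of this kind can be sketched in a few lines of Python. The vague-term list and the sample requirements below are purely illustrative, not taken from any real requirements tool:

```python
# Toy ambiguity detector: flags requirement sentences that contain
# vague, unquantified terms. The term list is illustrative only.
VAGUE_TERMS = {"fast", "slow", "low", "high", "efficient", "user-friendly",
               "adequate", "reasonable", "quickly", "easily"}

def find_ambiguous(requirements):
    """Return (sentence, matched_terms) pairs for vague requirements."""
    flagged = []
    for sentence in requirements:
        words = {w.strip(".,").lower() for w in sentence.split()}
        hits = sorted(words & VAGUE_TERMS)
        if hits:
            flagged.append((sentence, hits))
    return flagged

reqs = [
    "Servicing of user requests should be fast.",
    "Login names should allow a maximum of 16 characters.",
    "Request waiting time should be low.",
]
for sentence, hits in find_ambiguous(reqs):
    print(f"AMBIGUOUS ({', '.join(hits)}): {sentence}")
```

A real tool would go further, using part-of-speech tagging and context to avoid flagging legitimate uses of these words, but the idea of scanning for unquantified qualifiers is the same.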
The software design phase also produces a number of
SE artifacts, such as the design document and design models
in the form of UML documents, which can also be
mined for information. Design documents can be analysed
to generate automatic test cases in order to test the final
product. During the development and maintenance phases,
a number of textual artifacts are generated. Source code
itself can be considered a textual document. Apart from
source code, source code control system logs such as SVN/
GIT logs, Bugzilla defect reports, developers' mailing lists,
field reports, crash reports, etc, are the various SE artifacts to
which text mining can be applied.
Various types of text analysis techniques can be applied
to SE artifacts. One popular method is duplicate or similar
document detection. This technique can be applied to
find duplicate bug reports in bug tracking systems. A
variation of this technique can be applied to detect code
clones and copy-and-pasted snippets.
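One simple way to flag likely duplicate reports is bag-of-words cosine similarity. The sketch below uses only the standard library; the sample reports and the 0.4 threshold are invented for illustration (production systems use TF-IDF weighting and trained models):

```python
import math
from collections import Counter

def tokens(text):
    return [w.strip(".,!?").lower() for w in text.split() if w.strip(".,!?")]

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    va, vb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def likely_duplicates(reports, threshold=0.4):
    """Return pairs of report indices whose similarity crosses the threshold."""
    pairs = []
    for i in range(len(reports)):
        for j in range(i + 1, len(reports)):
            if cosine(reports[i], reports[j]) >= threshold:
                pairs.append((i, j))
    return pairs

bugs = [
    "Crash when opening a large file in the editor",
    "Editor crashes on opening large files",
    "Toolbar icons are misaligned on high DPI screens",
]
print(likely_duplicates(bugs))  # → [(0, 1)]
```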
Automatic summarisation is another popular technique
in NLP. These techniques try to generate a summary of a
given document by looking for the key points contained in it.
There are two approaches to automatic summarisation. One
is known as extractive summarisation, in which key
phrases and sentences in the given document are extracted
and put back together to provide a summary of the document.
The other is the abstractive summarisation technique, which
builds an internal semantic representation of the
given document, from which key concepts are extracted and
a summary generated using natural language understanding.
The abstractive summarisation technique is close to how
humans would summarise a given document. Typically, we
would proceed by building a knowledge representation of
the document in our minds and then use our own words
to provide a summary of the key concepts. Abstractive
summarisation is obviously more complex than extractive
summarisation, but yields better summaries.
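The extractive approach can be illustrated with a toy frequency-based scorer: sentences whose words occur often in the document score highest and are kept. This is a sketch of the idea only, not a production summariser:

```python
from collections import Counter

def extractive_summary(text, n=1):
    """Score sentences by summed word frequency; return the top-n
    sentences in their original order. A toy extractive summariser."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    words = [w.lower() for s in sentences for w in s.split()]
    freq = Counter(words)
    scored = sorted(
        range(len(sentences)),
        key=lambda i: sum(freq[w.lower()] for w in sentences[i].split()),
        reverse=True,
    )
    keep = sorted(scored[:n])  # restore document order
    return ". ".join(sentences[i] for i in keep) + "."

doc = ("NLP can mine software artifacts. "
       "Bug reports are software artifacts. "
       "The weather was pleasant.")
print(extractive_summary(doc, n=1))
```

Real extractive summarisers add stop-word removal, sentence position features and redundancy checks, but the select-and-concatenate structure is the same.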
Coming to SE artifacts, automatic summarisation
techniques can be applied to generate summaries of large bug
reports. They can also be applied to generate high-level comments
for the methods contained in source code. In this case, each method
can be treated as an independent document, and the high-level
comment associated with that method or function is nothing but a
short summary of the method.
Another popular text analysis technique involves the use of
language models, which enable predicting what the next word
would be in a particular sentence. This technique is typically used on
documents generated by optical character recognition (OCR), where, due
to OCR errors, a word may be unreadable or lost, and hence the
tool needs to make a best estimate of the word that should appear
there. A similar need also arises in speech recognition
systems. In case of poor speech quality, when a sentence is being
transcribed by the speech recognition tool, a particular word may
not be clear or could get lost in transmission. In such a case, the tool
needs to predict what the missing word is and add it automatically.
Language modelling techniques can also be applied in integrated
development environments (IDEs) to provide auto-completion
suggestions to developers. Note that in this case, the source code
itself is being treated as text and analysed.
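A minimal bigram model illustrates the idea: count which word follows which, then predict the most frequent continuation. The tiny "corpus" below stands in for the text or source code a real tool would train on:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word bigrams in a whitespace-tokenised corpus."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Most frequent word seen after `word`, or None if unseen."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = [
    "the build failed on the server",
    "the build passed on the laptop",
    "the tests failed on the server",
]
model = train_bigrams(corpus)
print(predict_next(model, "build"))
print(predict_next(model, "the"))
```

Practical language models use much longer contexts and smoothing for unseen word pairs, but auto-completion in an IDE follows this same predict-the-next-token pattern.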
Classifying a set of documents into specific categories is another
well-known text analysis technique. Consider a large number of news
articles that need to be categorised based on topic or genre, such
as politics, business, sports, etc. A number of well-known text analysis
techniques are available for document classification. Document
classification techniques can also be applied to defect reports in SE, to
identify the category to which a defect belongs. For instance, security
related bug reports need to be prioritised. While people currently
inspect bug reports, or search for specific keywords in a bug category
field in Bugzilla reports in order to classify them, more robust
and automated techniques are needed to classify defect reports in large
scale open source projects. Text analysis techniques for document
classification can be employed in such cases.
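As a sketch of such classification, a toy keyword-based classifier might look like the following. The categories and keyword lists are invented for illustration; real systems would use trained statistical classifiers rather than hand-picked keywords:

```python
# Toy defect-report classifier: scores a report against per-category
# keyword sets. Categories and keywords are illustrative only.
CATEGORIES = {
    "security": {"overflow", "injection", "xss", "privilege", "vulnerability"},
    "ui": {"button", "layout", "font", "icon", "dialog"},
    "performance": {"slow", "latency", "memory", "cpu", "leak"},
}

def classify(report):
    """Return the category whose keywords best match the report."""
    words = {w.strip(".,").lower() for w in report.split()}
    scores = {cat: len(words & kws) for cat, kws in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

print(classify("SQL injection possible via login form"))        # security
print(classify("Dialog layout breaks when font size changes"))  # ui
```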
Another important need in the SE lifecycle is to trace source
code back to its origin in the requirements document. If a feature X
is present in the source code, what is the requirement Y in the
requirements document that necessitated the development
of this feature? This is known as traceability of source code to
requirements. As source code evolves over time, maintaining
traceability links automatically through tools is essential for
scaling to large software projects. Text analysis techniques
can be employed to connect a particular requirement in the
requirements document to a feature in the source code, and hence
automatically generate the traceability links.
We have now covered automatic summarisation techniques
for generating summaries of bug reports and generating header
level comments for methods. Another possible use for such
techniques in SE artifacts is to enable the automatic generation
of the user documentation associated with a software project.
A number of text mining techniques have been employed to
mine Stack Overflow and mailing lists to generate automatic user
documentation or FAQ documents for different software projects.
As with the identification of inconsistencies in the
requirements document, inconsistency detection techniques
can be applied to source code comments as well. It is a general
expectation that source code comments express the programmer's
intent. Hence, the code written by the developer and the comment
associated with that piece of code should be consistent with each
other. Consider the simple code sample shown below:
/* linux/drivers/scsi/in2000.c: */

/* caller must hold instance lock */
static int reset_hardware(...);

static int in2000_bus_reset(...)
{
    ...
    reset_hardware(...);  /* lock is never acquired before this call */
    ...
}
In the above code snippet, the developer has expressed, as a
code comment, the intention that the instance lock must be held
before the function reset_hardware is called. However, in the
actual source code, the lock is not acquired before the call to
reset_hardware is made. This is a logical inconsistency, which can
arise either due to: (a) comments being outdated with respect to the
source code; or (b) incorrect code. Hence, flagging such errors is
useful to the developer, who can fix either the comment or the code,
depending on which is incorrect.
My must-read book for this month
This month's book suggestion comes from one of our readers,
Sharada, and her recommendation is very appropriate to the
current column. She recommends an excellent resource for natural
language processing: the book 'Speech and Language
Processing: An Introduction to Natural Language Processing' by
Jurafsky and Martin. The book describes different algorithms for
NLP techniques and can be used as an introduction to the subject.
Thank you, Sharada, for your valuable recommendation.
If you have a favourite programming book or article that you
think is a must-read for every programmer, please do send me
a note with the book's name and a short write-up on why you
think it is useful, so I can mention it in the column. This would
help many readers who want to improve their software skills.
If you have any favourite programming questions/software
topics that you would like to discuss on this forum, please
send them to me, along with your solutions and feedback, at
sandyasm_AT_yahoo_DOT_com. Till we meet again next
month, happy programming!
The author is an expert in systems software and is currently working
with Hewlett Packard India Ltd. Her interests include compilers,
multi-core and storage systems. If you are preparing for systems
software interviews, you may find it useful to visit Sandya's LinkedIn
group, Computer Science Interview Training India, at http://www.
By: Sandya Mannarswamy
Guest Column: Exploring Software
Hadoop is a large scale, open source storage and processing
framework for data sets. In this article, the author sets up Hadoop
on a single node, takes the reader through testing it, and later
tests it on multiple nodes.
Exploring Big Data on a Desktop
Getting Started with Hadoop
Fedora 20 makes it easy to install Hadoop. Version 2.2
is packaged and available in the standard repositories.
It will place the configuration files in /etc/hadoop,
with reasonable defaults, so that you can get started easily. As
you may expect, managing the various Hadoop services is
integrated with systemd.
Setting up a single node
First, start an instance, with the name h-mstr, in OpenStack
using a Fedora Cloud image (http://fedoraproject.org/get-fedora#clouds).
You will need to choose at least the
m1.small flavour, i.e., 2GB RAM and 20GB disk. Note the
IP address the instance gets, and add an entry in /etc/hosts
for convenience:

<instance-ip> h-mstr
Now, install the Hadoop packages on the virtual machine:

$ ssh fedora@h-mstr
$ sudo yum install hadoop-common hadoop-common-native hadoop-hdfs \
hadoop-mapreduce hadoop-mapreduce-examples hadoop-yarn

This will download over 200MB of packages and take about
500MB of disk space.
Create an entry in the /etc/hosts file for h-mstr using the
name in /etc/hostname, e.g.:

<instance-ip> h-mstr h-mstr.novalocal

Now, you can test the installation. First, run a script to
create the needed HDFS directories:

$ sudo hdfs-create-dirs

Then, start the Hadoop services using systemctl:

$ sudo systemctl start hadoop-namenode hadoop-datanode \
hadoop-nodemanager hadoop-resourcemanager

You can find out the HDFS directories created as
follows. The command may look complex, but you are just
running the hadoop fs command in a shell as Hadoop's
internal user, hdfs:

$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -ls /"
Found 3 items
drwxrwxrwt - hdfs supergroup 0 2014-07-15 13:21 /tmp
drwxr-xr-x - hdfs supergroup 0 2014-07-15 14:18 /user
drwxr-xr-x - hdfs supergroup 0 2014-07-15 13:22 /var
Testing the single node
Create a directory with the right permissions for the user
fedora, so that it can run the test scripts:

$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs
-mkdir /user/fedora"
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs
-chown fedora /user/fedora"

Disable the firewall (iptables) and run a
mapreduce example. You can monitor the progress at
http://h-mstr:8088/. Figure 1 shows an example running
on three nodes.
The first test is to calculate pi using 10 maps and
1,000,000 samples. It took about 90 seconds to estimate the
value of pi to be 3.1415844:
$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar pi 10 1000000
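For reference, the pi job estimates pi by the Monte Carlo method: it generates random points in the unit square and counts how many fall inside the quarter circle. A single-process Python sketch of the same computation (purely illustrative, not the Hadoop implementation) is:

```python
import random

def estimate_pi(maps=10, samples=1_000_000, seed=42):
    """Monte Carlo estimate of pi: the fraction of random points in the
    unit square that land inside the quarter circle, times 4.
    Single-process sketch of what the Hadoop example distributes."""
    rng = random.Random(seed)
    inside = 0
    total = maps * samples
    for _ in range(total):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / total

print(estimate_pi(maps=2, samples=50_000))  # close to 3.14
```

Hadoop simply runs each "map" worth of samples on a different node and sums the counts in the reduce step, which is why the job parallelises so naturally.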
In the next test, you create 10 million records of 100
bytes each, that is, 1GB of data (~1 minute). Then, sort it (~8
minutes) and, finally, verify it (~1 minute). You may want to clean
up the directories created in the process:
$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar teragen 10000000 gendata
$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar terasort gendata sortdata
$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar teravalidate sortdata reportdata
$ hadoop fs -rm -r gendata sortdata reportdata
Stop the Hadoop services before creating and working
with multiple data nodes, and clean up the data directories:
$ sudo systemctl stop hadoop-namenode hadoop-datanode \
hadoop-nodemanager hadoop-resourcemanager
$ sudo rm -rf /var/cache/hadoop-hdfs/hdfs/dfs/*
Testing with multiple nodes
The following steps simplify the creation of multiple instances.
Generate ssh keys for password-less login from any node
to any other node:

$ ssh-keygen
$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys

In /etc/ssh/ssh_config, add the following to ensure that
ssh does not prompt for authenticating a new host the first
time you try to log in:

StrictHostKeyChecking no

In /etc/hosts, add entries for the slave nodes yet to be created:

<ip1> h-mstr h-mstr.novalocal
<ip2> h-slv1 h-slv1.novalocal
<ip3> h-slv2 h-slv2.novalocal
Now, modify the configuration files located in /etc/hadoop.
Edit core-site.xml and modify the value of the default
filesystem property by replacing localhost with h-mstr.
Edit mapred-site.xml and modify the value of
mapred.job.tracker by replacing localhost with h-mstr.
Delete the following lines from hdfs-site.xml:

<!-- Immediately exit safemode as soon as one DataNode
checks in.
On a multi-node cluster, these configurations must be
removed. -->

Edit, or create if needed, the slaves file with the host names of the
data nodes:

[fedora@h-mstr hadoop]$ cat slaves
h-slv1
h-slv2

Also edit yarn-site.xml so that multiple
node managers can be run.
Now, create a snapshot, Hadoop-Base. Its creation will
take time, and it may not give you any indication of an error if it
runs out of disk space!
Figure 1: OpenStack-Hadoop
Guest Col umn Exploring Software
Launch the instances h-slv1 and h-slv2 serially, using
Hadoop-Base as the instance boot source. Launching the
first instance from a snapshot is pretty slow. In case the IP
addresses are not the same as your guesses in /etc/hosts, edit
/etc/hosts on each of the three nodes to the correct values. For
convenience, you may want to make entries for h-slv1
and h-slv2 in the desktop's /etc/hosts file as well.
The following commands should be run as fedora on
h-mstr. Reformat the namenode to make sure that the single
node tests are not causing any unexpected issues:

$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop
namenode -format"
Start the Hadoop services on h-mstr:

$ sudo systemctl start hadoop-namenode hadoop-datanode \
hadoop-nodemanager hadoop-resourcemanager

Start the datanode and yarn node manager services on the slave nodes:

$ ssh -t fedora@h-slv1 sudo systemctl start hadoop-datanode hadoop-nodemanager
$ ssh -t fedora@h-slv2 sudo systemctl start hadoop-datanode hadoop-nodemanager
Create the hdfs directories and a directory for user fedora
as on a single node:
$ sudo hdfs-create-dirs
By: Dr Anil Seth
The author has earned the right to do what interests him.
You can find him online at http://sethanil. and reach him via email at
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs
-mkdir /user/fedora"
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs
-chown fedora /user/fedora"
You can run the same tests again. Although you are using
three nodes, the improvement in performance compared to
the single node is not expected to be noticeable, as the nodes
are running on a single desktop.
The pi example took about one minute on the three nodes,
compared to the 90 seconds taken earlier. Terasort took 7
minutes instead of 8.
Note: I used an AMD Phenom II X4 965 with 16GB
RAM to arrive at the timings. All virtual machines and their
data were on a single physical disk.
Both OpenStack and MapReduce are collections of
interrelated services working together. Diagnosing problems,
especially in the beginning, is tough, as each service has its
own log files. It takes a while to learn where to
look. However, once these are working, it is incredible how
easy they make distributed processing!
OSFY Magazine Attractions During 2014-15

March 2014: Network Monitoring | Security | —
April 2014: Android Special | Anti-Virus | Wi-Fi Hotspot Devices
May 2014: Backup and Data Storage | Certification | External Storage
June 2014: Open Source on Windows | Mobile Apps | UTMs for SMEs
July 2014: Firewall and Network Security | Web Hosting Solutions Providers | MFD Printers for SMEs
August 2014: Kernel Development | Big Data Solution Providers | SSDs for Servers
September 2014: Open Source for Start-ups | Cloud | Android Devices
October 2014: Mobile App Development | Training on Programming Languages | Projectors
November 2014: Cloud Special | Virtualisation Solutions Providers | Network Switches and Routers
December 2014: Web Development | Leading E-commerce Sites | AV Conferencing
January 2015: Programming Languages | IT Consultancy Service Providers | Laser Printers for SMEs
February 2015: Top 10 of Everything on Open Source | Storage Solutions Providers | Wireless Routers
Developers Insight
Have you ever wondered which module is slowing
down your Python program, and how to optimise
it? Well, there are profilers that can come to
your rescue.
Profiling, in simple terms, is the analysis of a program
to measure the memory used by a certain module, the
frequency and duration of function calls, and the time
complexity of the same. Such profiling tools are termed
profilers. This article will discuss line_profiler for Python.
Installing prerequisites: Before installing line_profiler,
make sure you install these prerequisites:
a) For Ubuntu/Debian-based systems (recent versions):

sudo apt-get install mercurial python python3 python-pip
python3-pip cython cython3

b) For Fedora systems:

sudo yum install -y mercurial python python3 python-pip

Note: 1. I have used the -y argument to
automatically confirm installation of the packages resolved
by yum.
2. Mac users can use Homebrew to install these packages.

Cython is a prerequisite because the source releases
require a C compiler. If the Cython package is not found, or is
too old in your current Linux distribution version, install it by
running the following command in a terminal:

sudo pip install Cython

Note: Mac OS X users can also install Cython using pip.
Improve Python Code
by Using a Profiler
The line_profiler gives
a line-by-line analysis
of the Python code
and can thus identify
bottlenecks that slow
down the execution of
a program. By making
modifications to the
code based on the
results of this profiler,
developers can
improve the code and
refine the program.
Figure 1: line_profiler output
Cloning line_profiler: Let us begin
by cloning the line_profiler source code
from Bitbucket.
To do so, run the following
command in a terminal:

hg clone

The above repository is the official
line_profiler repository, with support for
Python 2.4 - 2.7.x.
For Python 3.x support, we will
need to clone a fork of the official
source code that provides Python 3.x
compatibility for line_profiler and kernprof:

hg clone
Installing line_profiler: Navigate to the cloned
repository by running the following command in a terminal:

cd line_profiler

To build and install line_profiler on your system, run the
following command:
a) For the official source (supporting Python 2.4 - 2.7.x):

sudo python setup.py install

b) For the forked source (supporting Python 3.x):

sudo python3 setup.py install
Using line_profiler
Adding the profiler to your code: Since line_profiler has been
designed to be used as a decorator, we need to decorate the
specified function with a @profile decorator. We can do so
by adding an extra line before the function, as follows:

@profile
def foo(bar):
    ...
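A complete, hypothetical example.py might look like the following. The script name and function are invented for illustration; the try/except fallback defines a no-op profile decorator, so the script also runs on its own, outside kernprof (which normally injects `profile` as a builtin):

```python
# example.py -- hypothetical script to profile with line_profiler.
# When run via kernprof, `profile` is injected as a builtin; the
# fallback below keeps the script runnable on its own too.
try:
    profile
except NameError:
    def profile(func):
        return func

@profile
def build_squares(n):
    """Deliberately naive loop so each line gets measurable hits."""
    result = []
    for i in range(n):
        result.append(i * i)
    return result

if __name__ == "__main__":
    print(len(build_squares(1000)))
```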
Running line_profiler: Once the slow module has been
decorated, the next step is to run line_profiler, which
will give a line-by-line computation of the code within the
profiled function.
Open a terminal, navigate to the folder where the .py file
is located, and type the following command:

kernprof.py -l example.py; python3 -m line_profiler example.py.lprof

Note: I have combined both commands on a single
line, separated by a semicolon (;), to immediately show the
profiled results. You can run the two commands separately,
or run kernprof.py with the -v argument to view the formatted
result in the terminal.

kernprof.py -l runs the profiled function line by line and stores
the result in a binary file with a .lprof extension (here,
example.py.lprof). We then run line_profiler on this binary file by
using the -m line_profiler argument. Here, -m is followed by the
module name, i.e., line_profiler.
Case study: We will use the Gnome-Music source code
for our case study. There is a module named _connect_view
in the file, which handles the different views (artists,
albums, playlists, etc) within the music player. This module is
reportedly running slow because a variable is initialised each
time the view is changed.
By profiling the source code, we get the following result:

Wrote profile results to gnome-music.lprof
Timer unit: 1e-06 s

File: ./gnomemusic/
Function: _connect_view at line 211
Total time: 0.000627 s

Line #  Hits  Time  Per Hit  % Time  Line Contents
211                                  @profile
212                                  def _connect_view(self):
213     4    205     51.2    32.7        vadjustment = self.view.get_vadjustment()
214     4     98     24.5    15.6        self._adjustmentValueId = vadjustment.connect(
215     4     79     19.8    12.6            'value-changed',
216     4    245     61.2    39.1            self._on_scrolled_win_change)
In the above code, line 213,
vadjustment = self.view.get_vadjustment(), is executed too many times,
which makes the process slower than
expected. After caching (initialising) it
in the init function, we get the following
result, tested under the same conditions.
You can see that there is a significant
improvement in the results (Figure 2).

Wrote profile results to gnome-music.lprof
Timer unit: 1e-06 s

File: ./gnomemusic/
Function: _connect_view at line 211
Total time: 0.000466 s

Line #  Hits  Time  Per Hit  % Time  Line Contents
211                                  @profile
212                                  def _connect_view(self):
213     4     86     21.5    18.5        self._adjustmentValueId = self.vadjustment.connect(
214     4    161     40.2    34.5            'value-changed',
215     4    219     54.8    47.0            self._on_scrolled_win_change)
Understanding the output
Here is an analysis of the output shown in the above snippet.
Function: Displays the name of the function that is
profiled, and its line number.
Line #: The line number of the code in the respective file.
Hits: The number of times the code on the corresponding
line was executed.
Time: The total amount of time spent executing the line,
in the timer unit (i.e., 1e-06s here). This may vary from
system to system.
Per hit: The average amount of time spent executing
the line once, in the timer unit.
% time: The percentage of time spent on a line with respect
to the total amount of recorded time spent in the function.
Line contents: Displays the actual source code.

Note: If you make changes to the source code, you
need to run kernprof and line_profiler again in order to
profile the updated code and get the latest results.

line_profiler helps us profile our code line by line,
giving the number of hits, the time taken for each hit and
the % time. This helps us understand which part of our code
is running slow. It also helps in testing large projects and
measuring the time spent by modules in executing a particular
function. Using this data, we can make changes and improve our
code to build faster and better programs.

Figure 2: Optimised code line_profiler output

By: Jackson Isaac
The author is an active open source contributor to projects
like gnome-music, Mozilla Firefox and Mozillians. Follow
him on or email him at
Understanding the Document Object Model (DOM) in Mozilla

This article is an introduction to the DOM programming interface and the DOM inspector,
which is a tool that can be used to inspect and edit the live DOM of any Web document or
XUL application.

The Document Object Model (DOM) is a programming
interface for HTML and XML documents. It provides
a structured representation of a document, and it
defines a way that the structure can be accessed from
programs so that they can change the document's structure,
style and content. The DOM provides a representation of the
document as a structured group of nodes and objects that have
properties and methods. Essentially, it connects Web pages to
scripts or programming languages.
A Web page is a document that can be displayed either in
the browser window or as the HTML source of that same
document. The DOM provides another way to represent, store
and manipulate that same document. In simple terms, we can
say that the DOM is a fully object-oriented representation of a
Web page, which can be modified by any scripting language.
The W3C DOM standard forms the basis of the DOM
implementation in most modern browsers, and many browsers
offer extensions beyond the W3C standard.
All the properties, methods and events available for
manipulating and creating Web pages are organised into
objects: for example, the document object that represents the
document itself, the table object that implements the special
HTMLTableElement DOM interface to access HTML
tables, and so forth.

Why is DOM important?
Dynamic HTML (DHTML) is a term used by some vendors
to describe the combination of HTML, style sheets and
scripts that allows documents to be animated. The W3C DOM
working group is aiming to make sure interoperable and
language-neutral solutions are agreed upon.
As Mozilla claims the title of 'Web Application Platform',
support for the DOM is one of the most requested features; in
fact, it is a necessity if Mozilla wants to be a viable alternative
to the other browsers. The user interface of Mozilla (also
Firefox and Thunderbird) is built using XUL and uses the DOM to
manipulate its own user interface.

How do I access the DOM?
You don't have to do anything special to begin using the
DOM. Different browsers have different implementations of
it, which exhibit varying degrees of conformity to the actual
DOM standard, but every browser uses some DOM to make
Web pages accessible to scripts.
When you create a script, whether it's inline in a
script element or included in the Web page by means of
a script-loading instruction, you can immediately begin
using the API for the document or window objects,
to manipulate the document itself or to get at the
children of that document, which are the various elements
in the Web page.
Your DOM programming may be something as simple as
the following, which displays an alert message by using the
alert() function from the window object, or it may use more
sophisticated DOM methods to actually create content, as in the
longer example that follows:
<body onload="window.alert('Welcome to my home page!');">
Aside from placing it in a script element in which the JavaScript is
defined, this JavaScript sets a function to run when the
document is loaded. This function creates a new element H1,
adds text to that element, and then adds the H1 to the tree for this
document, as shown below:

// run this function when the document is loaded
window.onload = function() {
    // create a couple of elements
    // in an otherwise empty HTML page
    var heading = document.createElement("h1");
    var heading_text = document.createTextNode("Big Head!");
    heading.appendChild(heading_text);
    document.body.appendChild(heading);
}
DOM interfaces
These interfaces give you an idea of the actual things
that you can use to manipulate the DOM hierarchy. The object
representing an HTML form gets its name property
from the HTMLFormElement interface, but its className
property from the HTMLElement interface. In both cases, the
property you want is simply on the form object.
Interfaces and objects
Many objects borrow from several different interfaces. The
table object, for example, implements a specialised HTML
table element interface, which includes such methods as
createCaption and insertRow. Since an HTML element is
also, as far as the DOM is concerned, a node in the tree of
nodes that makes up the object model for a Web page or an
XML page, the table element also implements the more basic
node interface, from which the element derives.
When you get a reference to a table object, as in
the following example, you routinely use all three of
these interfaces interchangeably on the object, perhaps
without knowing it:

var table = document.getElementById("table");
var tableAttrs = table.attributes; // Node/Element interface
for (var i = 0; i < tableAttrs.length; i++) {
    // HTMLTableElement interface: border attribute
    if (tableAttrs[i].nodeName.toLowerCase() == "border")
        table.border = "1";
}
// HTMLTableElement interface: summary attribute
table.summary = "note: increased border";
Core interfaces in the DOM
These are some of the most important and commonly
used interfaces in the DOM. These common APIs are
used in the longer examples of DOM. You will often
see the following APIs, which are types of methods and
properties, when you use the DOM.
The interfaces of document and window objects are
generally used most often in DOM programming. In
simple terms, the window object represents something
like the browser, and the document object is the root of
the document itself. The element inherits from the generic
node interface and, together, these two interfaces provide
many of the methods and properties you use on individual
elements. These elements may also have specific interfaces
for dealing with the kind of data those elements hold, as in
the table object example.
Figure 1: DOM inspector
Figure 2: Inspecting content documents
Developers Insight
The following are a few common APIs in XML and Web
page scripting that show the use of DOM:
document.getElementById(id)
element.getElementsByTagName(name)
document.createElement(name)
parentNode.appendChild(node)
Testing the DOM API
Here, you will be provided samples for every interface
that you can use in Web development. In some cases, the
samples are complete HTML pages, with the DOM access
in a <script> element, the interface (e.g., buttons) necessary
to fire up the script in a form, and the HTML elements upon
which the DOM operates listed as well. When this is the
case, you can cut and paste the example into a new HTML
document, save it, and run the example from the browser.
There are some cases, however, when the examples are
more concise. To run examples that only demonstrate the
basic relationship of the interface to the HTML elements,
you may want to set up a test page in which interfaces can be
easily accessed from scripts.
An introduction to the DOM inspector
The DOM inspector is a Mozilla extension that you can
access from the Tools -> Web Development menu in
SeaMonkey, by selecting the DOM inspector menu item
from the Tools menu in Firefox and Thunderbird, or by
using Ctrl/Cmd+Shift+I in either application. The DOM
inspector is a standalone extension; it supports all toolkit
applications, and it's possible to embed it in your own
XULRunner app. The DOM inspector can serve as a sanity
check to verify the state of the DOM, or it can be used to
manipulate the DOM manually, if desired.
When you first start the DOM inspector, you are presented
with a two-pane application window that looks a little like the
main Mozilla browser. Like the browser, the DOM inspector
includes an address bar and some of the same menus. In
SeaMonkey, additional global menus are available.
Using the DOM inspector
Once you've opened the document for the page you are
interested in, you'll see that the DOM inspector loads the DOM nodes
viewer in the document pane and the DOM node viewer in
the object pane. In the DOM nodes viewer, there should be a
structured, hierarchical view of the DOM.
By clicking around in the document pane, you'll see
that the viewers are linked; whenever you select a new node
in the DOM nodes viewer, the DOM node viewer is
automatically updated to reflect the information for that node.
Linked viewers are the first major aspect to understand when
learning how to use the DOM inspector.
Inspecting a document
When the DOM inspector opens, it may or may not load an
associated document, depending on the host application. If it
doesn't automatically load a document, or loads a document
other than the one you'd like to inspect, you can select the
desired document in a few different ways.
Figure 3: Inspecting Chrome documents
Figure 4: Inspecting arbitrary URLs
Figure 5: Inspecting a Web page
Developers Insight
There are three ways of inspecting any document, which
are described below.
Inspecting content documents: The Inspect Content
Document menu popup can be accessed from the File menu,
and it will list the currently loaded content documents. In
the Firefox and SeaMonkey browsers, these will be the
Web pages you have opened in tabs. For Thunderbird and
SeaMonkey Mail and News, any messages you're viewing
will be listed here.
Inspecting Chrome documents: The Inspect Chrome
Document menu popup can be accessed from the File menu,
and it will contain the list of currently loaded Chrome
windows and sub-documents. A browser window and the
DOM inspector are likely to already be open and displayed
in this list. The DOM inspector keeps track of all the
windows that are open, so to inspect the DOM of a particular
window in the DOM inspector, simply access that window
as you would normally do and then choose its title from this
dynamically updated menu list.
Inspecting arbitrary URLs: We can also inspect the
DOM of arbitrary URLs by using the Inspect a URL menu
item in the File menu, or by just entering a URL into the
DOM inspector's address bar and clicking Inspect or pressing
Enter. We should not use this approach to inspect Chrome
documents, but instead ensure that the Chrome document
loads normally, and use the Inspect Chrome Document menu
popup to inspect the document.
When you inspect a Web page by this method, a browser
pane at the bottom of the DOM inspector window will open
up, displaying the Web page. This allows you to use the DOM
inspector without having to use a separate browser window,
or without embedding a browser in your application at all. If
you find that the browser pane takes up too much space, you
may close it, but you will not be able to visually observe any
of the consequences of your actions.
DOM inspector viewers
You can use the DOM nodes viewer in the document pane
of the DOM inspector to find and inspect the nodes you
are interested in. One of the biggest and most immediate
advantages that this brings to your Web and application
development is that it makes it possible to find the mark-up
and the nodes in which the interesting parts of a page or a
piece of the user interface are defined.
One common use of the DOM inspector is to find the
name and location of a particular icon being used in the
Figure 6: Finding app content
Figure 7: Search on Click
user interface, which is not an easy task otherwise. If you're
inspecting a Chrome document, as you select nodes in the
DOM nodes viewer, the rendered versions of those nodes are
highlighted in the user interface itself. Note that there are
bugs that may prevent the flasher used by the DOM inspector
APIs from working on certain platforms.
If you inspect the main browser window, for example,
and select nodes in the DOM nodes viewer, you will see the
various parts of the browser interface being highlighted with
a blinking red border. You can traverse the structure and go
from the topmost parts of the DOM tree to lower level nodes,
such as the search-go-button icon that lets users perform a
query using the selected search engine.
The list of viewers available from the viewer menu gives
you some idea of how extensive the DOM inspector's
capabilities are. The following descriptions provide an
overview of these viewers' capabilities:
1. The DOM nodes viewer shows attributes of nodes that can
take them, or the text content of text nodes, comments and
processing instructions. The attributes and text contents
may also be edited.
2. The Box Model viewer gives various metrics about XUL
and HTML elements, including placement and size.
3. The XBL Bindings viewer lists the XBL bindings attached
to elements. If a binding extends another binding, the
binding menu list will list them in descending order down
to the root binding.
4. The CSS Rules viewer shows the CSS rules that
are applied to the node. Alternatively, when used in
conjunction with the Style Sheets viewer, the CSS Rules
viewer lists all recognised rules from that style sheet.
Properties may also be edited. Rules applying to pseudo-
elements do not appear.
5. The JavaScript Object viewer gives a hierarchical tree of
the object pane's subject. It also allows
JavaScript to be evaluated by selecting the appropriate
menu item in the context menu.
Three basic actions of DOM node viewers are
described below.
Selecting elements by clicking: A powerful interactive
feature of the DOM inspector is that when you have it open
and have enabled this functionality by choosing Edit >
Select Element by Click (or by clicking the little magnifying
glass icon in the upper left portion of the DOM Inspector
application), you can click anywhere in a loaded Web page or
an inspected Chrome document. The element you click will be
selected in the DOM nodes viewer in the document pane, and
its information will be displayed in the object pane.
Searching for nodes in the DOM: Another way to inspect
the DOM is to search for particular elements you're interested in
by ID, class or attribute. When you select Edit > Find Nodes...
or press Ctrl + F, the DOM inspector displays a Find dialogue
that lets you find elements in various ways, and that gives you
incremental searching by way of the <F3> shortcut key.
Updating the DOM, dynamically: Another feature
worth mentioning is the ability the DOM inspector gives
you to dynamically update information reflected in the DOM
about Web pages, the user interface and other elements. Note
that when the DOM inspector displays information about a
particular node or sub-tree, it presents individual nodes and
their values in an active list. You can perform actions on the
individual items in this list from the Context menu and the
Edit menu, both of which contain menu items that allow you
to edit the values of those attributes.
This interactivity allows you to shrink and grow the
element size, change icons, and do other layout-tweaking
updates, all without actually changing the DOM as it is
defined in the file on disk.
By: Anup Allamsetty
The author is an active contributor to Mozilla and GNOME. He
blogs at and you can email him
A prefix function in Haskell has the function name
followed by arguments. An infix operator function
has operands on either side of it. A simple infix
add operation is shown below:
*Main> 3 + 5
8
If you wish to convert an infix function to a prefix
function, it must be enclosed within parentheses:
*Main> (+) 3 5
8
Similarly, if you wish to convert a prefix function
into an infix function, you must enclose the function
name within backquotes (`). The elem function takes an
element and a list, and returns true if the element is a
member of the list:
*Main> 3 `elem` [1, 2, 3]
True
*Main> 4 `elem` [1, 2, 3]
False
Functions can also be partially applied in Haskell. A function that
subtracts ten from a given number can be defined as:
diffTen :: Integer -> Integer
diffTen = (10 -)
Loading the file in GHCi and passing three as an argument yields:
*Main> diffTen 3
7
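The partial application above can be generalised with operator sections; here is a minimal sketch (the helper names beyond diffTen are my own, not from the article):

```haskell
-- An operator section: (10 -) is a function still awaiting
-- the second operand, equivalent to \x -> 10 - x.
diffTen :: Integer -> Integer
diffTen = (10 -)

-- The section (+ 10) fixes the other operand instead.
addTen :: Integer -> Integer
addTen = (+ 10)

-- Ordinary prefix functions can be partially applied too;
-- scale and double are illustrative names of my own.
scale :: Integer -> Integer -> Integer
scale factor x = factor * x

double :: Integer -> Integer
double = scale 2   -- scale applied to only its first argument
```

Loading this in GHCi, diffTen 3 gives 7, addTen 3 gives 13, and double 21 gives 42.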
Haskell exhibits polymorphism. A type variable in a function
is said to be polymorphic if it can take any type. Consider the last
function, which returns the last element of a list. Its type signature is:
Experimenting with
More Functions in Haskell
We continue our exploration of the open source, advanced and purely functional
programming language, Haskell. In the third article in the series, we will focus on more
Haskell functions, conditional constructs and their usage.
Developers Let's Try
*Main> :t last
last :: [a] -> a
The a in the above snippet refers to a type variable and
can represent any type. Thus, the last function can operate on a
list of integers or characters (string):
*Main> last [1, 2, 3, 4, 5]
5
*Main> last "Hello, World"
'd'
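As a further illustration of polymorphism, here is a small hand-rolled helper (the name swap' is my own; Data.Tuple exports a similar swap) whose signature uses two type variables:

```haskell
-- Both a and b are type variables, so this works for a pair
-- of any two types: (Int, Char), (String, Bool), and so on.
swap' :: (a, b) -> (b, a)
swap' (x, y) = (y, x)
```

In GHCi, swap' (1, 'a') evaluates to ('a', 1).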
You can use a where clause for local definitions inside a
function, as shown in the following example, to compute the
area of a circle:
areaOfCircle :: Float -> Float
areaOfCircle radius = pi * radius * radius
where pi = 3.1415
Loading it in GHCi and computing the area for radius 1 gives:
*Main> areaOfCircle 1
3.1415
You can also use the let expression with the in statement to
compute the area of a circle:
areaOfCircle :: Float -> Float
areaOfCircle radius = let pi = 3.1415 in pi * radius * radius
Executing the above with input radius 1 gives:
*Main> areaOfCircle 1
3.1415
Indentation is very important in Haskell as it helps in code
readability; the compiler will emit errors otherwise. You must
use spaces instead of tabs when aligning code. If
the let and in constructs in a function span multiple lines, they
must be aligned vertically as shown below:
compute :: Integer -> Integer -> Integer
compute x y =
    let a = x + 1
        b = y + 2
    in a * b
Loading the example with GHCi, you get the following output:
*Main> compute 1 2
8
Similarly, the if and else constructs must be neatly aligned.
The else statement is mandatory in Haskell. For example:
sign :: Integer -> String
sign x =
    if x > 0
    then "Positive"
    else if x < 0
         then "Negative"
         else "Zero"
Running the example with GHCi, you get:
*Main> sign 0
"Zero"
*Main> sign 1
"Positive"
*Main> sign (-1)
"Negative"
The case construct can be used for pattern matching
against possible expression values. It needs to be combined
with the of keyword. The different values need to be aligned
and the resulting action must be specified after the ->
symbol for each case. For example:
sign :: Integer -> String
sign x =
case compare x 0 of
LT -> "Negative"
GT -> "Positive"
EQ -> "Zero"
The compare function compares two arguments and
returns LT if the first argument is less than the second, GT
if the first argument is greater than the second, and EQ if both
are equal. Executing the above example, you get:
*Main> sign 2
"Positive"
*Main> sign 0
"Zero"
*Main> sign (-2)
"Negative"
The sign function can also be expressed using guards
(|) for readability. The action for a matching case must be
specified after the = sign. You can use a default guard with
the otherwise keyword:
sign :: Integer -> String
sign x
| x > 0 = "Positive"
| x < 0 = "Negative"
| otherwise = "Zero"
The guards have to be neatly aligned:
*Main> sign 0
"Zero"
*Main> sign 3
"Positive"
*Main> sign (-3)
"Negative"
There are three very important higher order functions in
Haskell: map, filter and fold.
The map function takes a function and a list, and applies
the function to each and every element of the list. Its type
signature is:
*Main> :t map
map :: (a -> b) -> [a] -> [b]
The first function argument accepts an element of type a
and returns an element of type b. An example of adding two
to every element in a list can be implemented using map:
*Main> map (+ 2) [1, 2, 3, 4, 5]
[3,4,5,6,7]
The filter function accepts a predicate function for
evaluation, and a list, and returns the list with those elements
that satisfy the predicate. For example:
*Main> filter (> 0) [-2, -1, 0, 1, 2]
[1,2]
Its type signature is:
filter :: (a -> Bool) -> [a] -> [a]
The predicate function for filter takes as its first argument
an element of type a and returns True or False.
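Since both map and filter return lists, the two compose naturally. A small sketch (the function name is my own, for illustration only):

```haskell
-- Keep only the positive elements, then square each survivor.
squaresOfPositives :: [Integer] -> [Integer]
squaresOfPositives xs = map (^ 2) (filter (> 0) xs)
```

In GHCi, squaresOfPositives [-2, -1, 0, 1, 2] evaluates to [1,4].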
The fold function performs a cumulative operation on a
list. It takes as arguments a function, an accumulator (starting
with an initial value) and a list. It cumulatively aggregates the
computation of the function on the accumulator value as well
as each member of the list. There are two types of folds: the
left fold and the right fold.
*Main> foldl (+) 0 [1, 2, 3, 4, 5]
15
*Main> foldr (+) 0 [1, 2, 3, 4, 5]
15
Their type signatures are, respectively:
*Main> :t foldl
foldl :: (a -> b -> a) -> a -> [b] -> a
*Main> :t foldr
foldr :: (a -> b -> b) -> b -> [a] -> b
The way the fold is evaluated differs between the two
types, as demonstrated below for the left fold:
foldl (+) 0 [1, 2, 3]
= foldl (+) 1 [2, 3]
= foldl (+) 3 [3]
= foldl (+) 6 []
= 6
It can be represented as f (f (f a b1) b2) b3 where f is the
function, a is the accumulator value, and b1, b2 and b3
are the elements of the list. The parentheses accumulate on
the left for a left fold. The computation looks like this:
(+) ((+) ((+) 0 1) 2) 3

The expression is built up step by step as the list is consumed:

(+) 0 1
(+) ((+) 0 1) 2
(+) ((+) ((+) 0 1) 2) 3
With the recursion, the expression is constructed and
evaluated only when it is finally formed. It can thus cause a
stack overflow, or never complete when working with infinite
lists. The foldr evaluation looks like this:
foldr (+) 0 [1, 2, 3]
= 1 + foldr (+) 0 [2, 3]
= 1 + (2 + foldr (+) 0 [3])
= 1 + (2 + (3 + 0))
= 6
It can be represented as f b1 (f b2 (f b3 a)) where f is the
function, a is the accumulator value, and b1, b2 and b3
are the elements of the list. The computation looks like this:
(+) 1 ((+) 2 ((+) 3 0))

The expression is built up from the innermost term outwards:

(+) 3 0
(+) 2 ((+) 3 0)
(+) 1 ((+) 2 ((+) 3 0))
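With (+) both folds give the same number, so the difference in grouping is invisible. It becomes visible with a non-associative operator such as subtraction; the following sketch (the names are mine) makes the contrast concrete:

```haskell
-- Left fold groups to the left: ((0 - 1) - 2) - 3 = -6
leftResult :: Integer
leftResult = foldl (-) 0 [1, 2, 3]

-- Right fold groups to the right: 1 - (2 - (3 - 0)) = 2
rightResult :: Integer
rightResult = foldr (-) 0 [1, 2, 3]
```

Evaluating these in GHCi gives -6 and 2, respectively.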
Continued on page 44
the Hello World program in minutes. With the help of
Angular, the combined power of HTML and JavaScript can
be put to maximum use. One of the prominent features of
Angular is that it is extremely easy to test. And that makes
it very suitable for creating large-scale applications. Also,
the Angular community, comprising Google's developers
primarily, is very active in the development process.
Google Trends gives assuring proof of Angular's future in
the field of Web development (Figure 1).
Core features
Before getting into the basics of AngularJS, you need to
understand two key terms: templates and models. The
HTML page that is rendered out to you is pretty much the
template. So basically, your template has HTML, Angular
entities (directives, filters, model variables, etc) and CSS (if
necessary). The example code given below for data binding
is a template.
In an SPA, the data and presentation of data is separated
by a model layer that handles data and a view layer that reads
AngularJS can be introduced as a front-end
framework capable of incorporating the
dynamicity of JavaScript with HTML. The self-
proclaimed 'super heroic' JavaScript MVW (Model View
Whatever) framework is maintained by Google and many
other developers at Github. This open source framework
works its magic on Web applications of the Single Page
Applications (SPA) category. The logic behind an SPA is
that an initial page is loaded at the start of an application
from the server. When an action is performed, the
application fetches the required resources from the server
and adds them to the initial page. The key point here is
that an SPA just makes one server round trip, providing
you with the initial page. This makes your applications
very responsive.
Why AngularJS?
AngularJS brings out the beauty in Web development.
It is extremely simple to understand and code. If you're
familiar with HTML and JavaScript, you can write
AngularJS is an open source Web application framework maintained by Google and the
community, which helps to build Single Page Applications (SPA). Let's get to know it better.
from models. This helps an SPA in redrawing any part of the
UI without requiring a server round trip to retrieve HTML.
When the data is updated, its view is notified and the altered
data is produced in the view.
Data binding
AngularJS provides you with two-way binding between the
model variables and HTML elements. One-way binding
would mean a one-way relation between the two: when the
model variables are updated, so are the values in the HTML
elements; but not the other way around. Let's understand
two-way binding by looking at an example:
<html ng-app >
<script src="
<body ng-init="yourtext = 'Data binding is cool!'">
Enter your text: <input type="text" ng-model="yourtext" />
<strong>You entered :</strong> {{yourtext}}
The model variable yourtext is bound to the HTML input
element. Whenever you change the value in the input box,
yourtext gets updated. Also, the value of the HTML input box
is initialised to that of the yourtext variable.
In the above example, many words like ng-app, ng-init
and ng-model may have struck you as odd. Well, these
are attributes that represent directives: ngApp, ngInit and
ngModel, respectively. As described in the official AngularJS
developer guide, "Directives are markers on a DOM element
(such as an attribute, element name, comment or CSS class)
that tell AngularJS's HTML compiler ($compile) to attach a
specified behaviour to that DOM element." Let's look into
the purpose of some common directives.
ngApp: This directive bootstraps your Angular
application and considers the HTML element in which the
attribute is specified to be the root element of Angular.
In the above example, the entire HTML page becomes an
Angular application, since the ng-app attribute is given
to the <html> tag. If it was given to the <body> tag,
the body alone becomes the root element. Or you could
create your own Angular module and let that be the root
of your application. An AngularJS module might consist
of controllers, services, directives, etc. To create a new
module, use the following commands:

var moduleName = angular.module('moduleName', []);
// The array is a list of modules our module depends on
Also, remember to initialise your ng-app attribute to
moduleName. For instance,
<html ng-app="moduleName">
ngModel: The purpose of this directive is to bind the
view with the model. For instance,
<input type = "text" ng-model = "sometext" />
<p> Your text: {{ sometext }}</p>
Here, the model sometext is bound (two-way) to the
view. The double curly braces will notify Angular to put the
value of sometext in its place.
ngClick: This directive functions similarly to the
onclick event of JavaScript.

<button ng-click="mul = mul * 2" ng-init="mul = 1"> Multiply
with 2 </button>
After multiplying : {{mul}}

Whenever the button is clicked, mul gets multiplied by 2.
Filters
A filter helps you in modifying the output to your view. You
can subject your expression to any kind of constraints to give
out the desired output. The format is:
{{ expression | filter }}
You can filter the output of filter1 again with filter2, using
the following format:
{{ expression | filter1 | filter2 }}
The following code filters the members of the people
array using the name as the criteria:
Figure 1: Report from Google Trends
<body ng-init=" people=[{name:'Tony',branch:'CSE'},
{name:'Santhosh', branch:'EEE'},
{name:'Manisha', branch:'ECE'}];">
Name: <input type="text" ng-model="name"/>
<li ng-repeat="person in people | filter: name"> {{ }} - {{ person.branch }}
Advanced features
Controllers: To bring some more action to our app, we
need controllers. These are JavaScript functions that add
behaviour to our app. Let's make use of the ngController
directive to bind the controller to the DOM:
<body ng-controller="ContactController">
<input type="text" ng-model="name"/>
<button ng-click="disp()">Alert !</button>
<script type="text/javascript">
function ContactController($scope) {
    $scope.disp = function( ) {
        alert("Hey " + $;
    };
}
</script>
One term to be explained here is $scope. To quote
from the developer guide: "Scope is an object that
refers to the application model." With the help of scope,
the model variables can be initialised and accessed.
In the above example, when the button is clicked the
disp( ) comes into play, i.e., the scope is assigned with
a behaviour. Inside disp( ), the model variable name is
accessed using scope.
Views and routes: In any usual application, we
navigate to different pages. In an SPA, instead of pages, we
have views. So, you can use views to load different parts
of your application. Switching to different views is done
through routing. For routing, we make use of the ngRoute
and ngView directives:
var miniApp = angular.module( 'miniApp', ['ngRoute'] );
miniApp.config(function( $routeProvider ){
    $routeProvider.when( '/home', { templateUrl: 'partials/home.html' } );
    $routeProvider.when( '/animal', { templateUrl: 'partials/animals.html' } );
    $routeProvider.otherwise( { redirectTo: '/home' } );
});
ngRoute enables routing in applications and
$routeProvider is used to configure the routes. home.html
and animals.html are examples of partials; these are
files that will be loaded to your view, depending on the
URL passed. For example, you could have an app that has
URL passed. For example, you could have an app that has
icons and whenever the icon is clicked, a link is passed.
Depending on the link, the corresponding partial is loaded to
the view. This is how you pass links:
<a href='#/home'><img src='partials/home.jpg' /></a>
<a href='#/animal'><img src='partials/animals.jpg' /></a>
Don't forget to add the ng-view attribute to the HTML
component of your choice. That component will act as a
placeholder for your views.
<div ng-view=""></div>
Services: According to the official documentation of
AngularJS, "Angular services are substitutable objects
that are wired together using dependency injection (DI)."
You can use services to organise and share code across
your app. With DI, every component will receive
a reference to the service. Angular provides useful
services like $http, $window, and $location. In order to
use these services in controllers, you can add them as
dependencies. As in:

var testapp = angular.module('testapp', []);
testapp.controller('testcont', function( $window ) {
    //body of controller
});
To define a custom service, write the following:

testapp.factory('serviceName', function( ) {
    var obj;
    return obj; // the returned object will be injected into
                // the component that has called the service
});
Testing
Testing is done to correct your code on-the-go and avoid
ending up with a pile of errors on completing your app's
development. Testing can get complicated when your
app grows in size and APIs start to get tangled up, but
Angular has got its own defined testing schemes. Usually,
two kinds of testing are employed, unit and end-to-end
testing (E2E). Unit testing is used to test individual API
components, while in E2E testing, the working of a set of
components is tested.
The usual components of unit testing are describe( ),
beforeEach( ) and it( ). You have to load the angular module
before testing and beforeEach( ) does this. Also, this function
By: Tina Johnson
The author is a FOSS enthusiast who has contributed to
Mediawiki and Mozilla's Bugzilla. She is also working on a project
to build a browser (using AngularJS) for autistic children.
makes use of the injector method to inject dependencies.
The test to be conducted is given in it( ). The test suite is
describe( ), and both beforeEach( ) and it( ) come inside
it. E2E testing makes use of all the above functions.
One other function used is expect( ). This creates
expectations, which verify if the particular application's
state (value of a variable or URL) is the same as the
expected values.
Recommended frameworks for unit testing are
Jasmine and Karma, and for E2E testing, Protractor is the
one to go with.
Who uses AngularJS?
The following corporate giants use AngularJS:
Sony (YouTube on PS3)
Virgin America
msnbc (
You can find a lot of interesting and innovative apps in
the 'Built with AngularJS' page.
Competing technologies
Features Ember.js AngularJS Backbone.js
Routing Yes Yes Yes
Views Yes Yes Yes
Two-way binding Yes Yes No
The chart above covers only the core features of the three
frameworks. Angular is the oldest of the lot and has the
biggest community.
Continued from page 37
There are some cases, like condition checking,
where f b1 can be computed even without requiring the
subsequent arguments, and hence the foldr function can
work with infinite lists. There is also a strict version of
foldl (foldl') that forces the computation before proceeding
with the recursion.
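The point about infinite lists can be demonstrated directly. In the sketch below (the names are mine), foldr produces its result lazily, so the computation can stop early:

```haskell
-- foldr (:) [] rebuilds the list incrementally, so take can
-- stop after five elements even though [1 ..] never ends.
firstFive :: [Integer]
firstFive = take 5 (foldr (:) [] [1 ..])

-- A combining function that ignores its second argument never
-- forces the rest of the fold, so this terminates as well.
firstElement :: Integer
firstElement = foldr (\x _ -> x) 0 [1 ..]
```

In GHCi, firstFive evaluates to [1,2,3,4,5] and firstElement to 1; the corresponding foldl expressions would never terminate.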
If you want a reference to a matched pattern, you can use
the as pattern syntax. The tail function accepts an input list
and returns everything except the head of the list. You can
write a tailString function that accepts a string as input and
returns the string with the first character removed:
tailString :: String -> String
tailString "" = ""
tailString input@(x:xs) = "Tail of " ++ input ++ " is " ++ xs
The entire matched pattern is represented by input in the
above code snippet.
Functions can be chained to create other functions. This is
called composing functions. The mathematical definition is
as follows:
(f o g)(x) = f(g(x))
The dot (.) operator has the highest operator precedence
and is right-associative. If you wish to avoid parentheses, you
can use the function application operator ($), which has the
lowest precedence and is also right-associative. For example:
*Main> reverse ((++) "yrruC " (unwords ["skoorB", "lleksaH"]))
"Haskell Brooks Curry"
You can rewrite the above using the function application
operator that is right-associative:
Prelude> reverse $ (++) "yrruC " $ unwords ["skoorB", "lleksaH"]
"Haskell Brooks Curry"
You can also use the dot notation to make it even more
readable, but the final argument needs to be evaluated first;
hence, you need to use the function application operator for it:
*Main> reverse . (++) "yrruC " . unwords $ ["skoorB", "lleksaH"]
"Haskell Brooks Curry"
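To see composition and the application operator working together in a named definition, here is a small pipeline (the function shout and its behaviour are my own invention, for illustration):

```haskell
import Data.Char (toUpper)

-- Composition reads right to left: reverse the input first,
-- then upper-case it, then append an exclamation mark.
shout :: String -> String
shout = (++ "!") . map toUpper . reverse
```

In GHCi, shout "olleh" evaluates to "HELLO!".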

By: Shakthi Kannan
The author is a free software enthusiast and blogs
Use Bugzilla
to Manage Defects in Software
them are on your Linux system before proceeding with the
installation. This specific installation covers MySQL as the
backend database.
Step 2: User and database creation
Before proceeding with the installation, the user and database
need to be created by following the steps mentioned below.
The names used here for the database or the users are
specifc to this installation, which can change between
Start the service by issuing the following command:
$/etc/rc.d/init.d/mysql start
Trigger MySQL by issuing the following command (you
will be asked for the root password, so ensure you keep it
handy):
$mysql -u root -p
Run the following statements at the MySQL
prompt to create a user in the database for Bugzilla:
mysql > CREATE USER 'bugzilla'@'localhost' IDENTIFIED BY 'cspasswd';
In any project, defect management and various types of
testing play key roles in ensuring quality. Defects need
to be logged, tracked and closed to ensure the project
meets quality expectations. Generating defect trends also
helps project managers to take informed decisions and make
the appropriate course corrections while the project is being
executed. Bugzilla is one of the most popular open source
defect management tools and helps project managers to track
the complete lifecycle of a defect.
Installation and configuration of Bugzilla
Step 1: Getting the source code
Bugzilla is a part of the Mozilla Foundation. Its latest releases
are available from the official website. This article will
be covering the installation of Bugzilla version 4.4.2.
The steps mentioned here should apply to later releases
as well. However, for version-specific releases, check the
appropriate release notes. Here is the URL for downloading
Bugzilla version 4.4.2 on a Linux system: http://www.
Pre-requisites for Bugzilla include a CGI-enabled Web
server (an Apache http server), a database engine (MySQL,
PostgreSQL, etc) and the latest Perl modules. Ensure all of
In the quest for excellence in software products, developers have to go through the process of
defect management. The tool of choice for defect containment is Mozilla's Bugzilla. Learn how to
install, configure and use it to file a bug report and act on it.
mysql > GRANT ALL PRIVILEGES ON *.* TO 'bugzilla'@'localhost';
mysql > CREATE DATABASE bugzilla_db CHARACTER SET utf8;
mysql > GRANT SELECT, INSERT, UPDATE, DELETE, INDEX, ALTER, CREATE,
LOCK TABLES, CREATE TEMPORARY TABLES, DROP, REFERENCES ON bugzilla_db.*
TO 'bugzilla'@'localhost' IDENTIFIED BY 'cspasswd';
Use the following command to connect the user with the
database:
$mysql -u bugzilla -p bugzilla_db
mysql > use bugzilla_db;
Step 3: Bugzilla installation and configuration
After downloading the Bugzilla archive from the URL
mentioned above, untar the package into the /var/www
directory. All the configuration-related information can
be modified via the localconfig file. To start with, set the
variable $webservergroup to 'www' and set other items as
mentioned in Figure 1.
Following the configuration, installation can be
completed by executing the following Perl script. Ensure this
script is executed with root privileges:
$ ./
Step 4: Integrating Bugzilla with Apache
Insert the following lines in the Apache server configuration
file (httpd.conf) to integrate Bugzilla into it. Place the
directory bugzilla inside www in our build folder:
<Directory /var/www/bugzilla>
AddHandler cgi-script .cgi
Options +ExecCGI
DirectoryIndex index.cgi index.html
AllowOverride Limit FileInfo Indexes Options
</Directory>
Our set-up is now ready. Let's hit the address in the
browser to see the home page of our freshly deployed Web
application (http://localhost/bugzilla).
Defect lifecycle management
The main purpose of Bugzilla is to manage the defect
lifecycle. Defects are created and logged in various phases of
the project (e.g., functional testing), where they are created by
the test engineer and assigned to development engineers for
resolution. Along with that, managers or team members need
to be aware of the change in the state of the defect to ensure
that there is a good amount of traceability of the defects.
When the defect is created, it is given the 'new' state, after
which it is assigned to a development engineer for resolution.
Subsequently, it will get resolved and eventually be moved
to the 'closed' state.
Step 1: User account creation
To start using Bugzilla, various user accounts have to be
created. In this example, Bugzilla is deployed in a server
named hydrogen. On the home page, click the New
Account link available in the header/footer of the pages (refer
to Figure 4). You will be asked for your email address; enter
it and click the Send button. After registration is accepted,
you should receive an email at the address you provided
confirming your registration. Now all you need to do is to
Figure 1: Configuring Bugzilla by changing the localconfig file
Figure 2: Bugzilla main page
Figure 3: Defect lifecycle
Figure 4: New account creation
Step 4: Reports and dashboards
Typically, in large-scale projects, there could be thousands of
defects logged and fixed by hundreds of development and test
engineers. To monitor the project at various phases, generation of
reports and dashboards becomes very important. Bugzilla offers
very simple but very powerful search and reporting features with
which all the necessary information can be obtained immediately.
By exploring the Search and Reports options, one can easily
figure out ways to generate reports. A couple of simple examples
are provided in Figure 7 (search) and Figure 8 (reports). Outputs
can be exported to formats like CSV for further analysis.
Bugzilla is a simple but powerful open source tool
that helps in complete defect management in projects. Along
with the features described above, Bugzilla also exposes its
source code, which can be explored for further scripting and
programming. This makes Bugzilla a highly customisable
defect-tracking tool for effectively managing defects.
By: Satyanarayana Sampangi
Satyanarayana Sampangi is a member of the embedded software
team at Emertxe Information Technologies ( His
areas of interest lie in embedded C programming combined with
data structures and microcontrollers. He likes to experiment with
C programming and open source tools in his spare time to explore
new horizons. He can be reached at
Figure 5: New defect creation
Figure 6: Defect resolution
Figure 7: Simple search
Figure 8: Simple dashboard of defects
click the Log in link in the header/footer at the bottom of
the page in your browser, enter your email address and the
password you just chose into the login form, and click on the
Log in button. You will be redirected to the Bugzilla home
page for defect interfacing.
Step 2: Reporting the new bug
1. Click the New link available in the header/footer of the
pages, or the File a bug option displayed on the home
page of the Bugzilla installation as shown in Figure 5.
2. Select the product in which you found a bug. Please note
that the administrator will be able to create an appropriate
product and corresponding versions from his account,
which is not demonstrated here.
3. You now see a form on which you can specify the
component, the version of the program you were using, the
operating system and platform your program is running on,
and the severity of the bug, as shown in Figure 5.
4. If there is an attachment, like a screenshot of the bug,
attach it using the Add an attachment option at the
bottom of the page; otherwise, click Submit Bug.
Step 3: Defect resolution and closure
Once the bug is filed, the assignees (typically, developers)
get an email notification. When the developers fix the bug
successfully, they add details like a bug-fixing summary,
mark the status as Resolved, and route the defect back to the
tester or to the development team leader for further review.
This can be easily done by changing the assignee field
of the defect and filling it with the appropriate email ID.
When the developers complete fixing the defect, it can
be marked as shown in Figure 6. When the test engineers
receive the resolved defect report, they can verify it and
mark the status as Closed. At every step, notes from each
individual are captured and logged along with a
time-stamp. This helps in backtracking the defect in case
any clarifications are required.
Developers How To
Device: This can be the actual device present at the
hardware level, or a pseudo device.
Let us take an example where a user-space
application sends data to a character device.
Instead of using an actual device, we are going to
use a pseudo device. As the name suggests, this
device is not a physical device. In GNU/Linux,
/dev/null is the most commonly used pseudo
device. It accepts any kind of data
(i.e., input) and simply discards it, and it
doesn't produce any output.
Let us send some data to the /dev/null
pseudo device:
[mickey]$ echo -n 'a' > /dev/null
In the above example, echo is a user-
space application and null is a special
file present in the /dev directory. There
is a null driver present in the kernel to
control the pseudo device.
To send or receive data to and
from the device or application,
use the corresponding device
file that is connected to the driver
through the Virtual File System (VFS)
layer. Whenever an application wants to perform any
operation on the actual device, it performs this on the
device file. The VFS layer redirects those operations to
the appropriate functions that are implemented inside the
driver. This means that whenever an application performs
the open() operation on a device file, in reality the open()
function from the driver is invoked, and the same concept
applies to the other functions. The implementation of these
operations is device-specific.
Major and minor numbers
We have seen that the echo command directly sends data to
the device file. Hence, it is clear that to send or receive data to
and from the device, the application uses special device files.
But how does communication between the device file and the
driver take place? It happens via a pair of numbers referred to
as the major and minor numbers.
The command below lists the major and minor numbers
associated with a character device file:
Have you ever wondered how a computer
plays audio or shows video? The
answer is: by using device drivers.
A few years ago, we would always install
audio or video drivers after installing MS
Windows XP. Only then were we able
to listen to audio. Let us explore device
drivers in this column.
A device driver (often referred to as
a 'driver') is a piece of software that controls
a particular type of device that is
connected to the computer system.
It provides a software interface to
the hardware device, and enables
access to the operating system
and other applications. There are
various types of drivers present
in GNU/Linux such as Character,
Block, Network and USB
drivers. In this column,
we will explore only
character drivers.
Character drivers
are the most common
drivers. They provide
unbuffered, direct access to hardware
devices. One can think of a character device as a
long sequence of bytes -- the same as a regular file, but one that
can be accessed only in sequential order. Character drivers support
at least the open(), close(), read() and write() operations. The
text console, i.e., /dev/console, serial consoles /dev/ttyS*, and
audio/video drivers fall under this category.
To make a device usable there must be a driver present
for it. So let us understand how an application accesses data
from a device with the help of a driver. We will discuss the
following four major entities.
User-space application: This can be any simple utility
like echo, or any complex application.
Device file: This is a special file that provides an interface
for the driver. It is present in the file system as an ordinary
file. The application can perform all supported operations on
it, just like on an ordinary file -- it can move, copy, delete,
rename, read and write these device files.
Device driver: This is the software interface for the device
and resides in the kernel space.
In the article An Introduction to the Linux Kernel in the August 2014 issue of OSFY, we wrote and
compiled a kernel module. In the second article in this series, we move on to device drivers.
An Introduction to
Device Drivers in the Linux Kernel
[bash]$ ls -l /dev/null
crw-rw-rw- 1 root root 1, 3 Jul 11 20:47 /dev/null
In the above output there are two numbers separated by a
comma (1 and 3). Here, 1 is the major number and 3 is the minor
number. The major number identifies the driver associated
with the device, i.e., which driver is to be used. The minor
number is used by the kernel to determine exactly which
device is being referred to. For instance, a hard disk may
have three partitions. Each partition will have a separate
minor number but only one major number, because the same
storage driver is used for all the partitions.
Older kernels used to have a separate major number
for each driver. But modern Linux kernels allow multiple
drivers to share the same major number. For instance, /
dev/full, /dev/null, /dev/random and /dev/zero use the same
major number but different minor numbers. The output
below illustrates this:
[bash]$ ls -l /dev/full /dev/null /dev/random /dev/zero
crw-rw-rw- 1 root root 1, 7 Jul 11 20:47 /dev/full
crw-rw-rw- 1 root root 1, 3 Jul 11 20:47 /dev/null
crw-rw-rw- 1 root root 1, 8 Jul 11 20:47 /dev/random
crw-rw-rw- 1 root root 1, 5 Jul 11 20:47 /dev/zero
The kernel uses the dev_t type to store major and minor
numbers. The dev_t type is defined in the <linux/types.h> header
file. Given below is the representation of the dev_t type from the
header file:

#ifndef _LINUX_TYPES_H
#define _LINUX_TYPES_H

#include <uapi/linux/types.h>

typedef __u32 __kernel_dev_t;

typedef __kernel_dev_t dev_t;
dev_t is an unsigned 32-bit integer, where 12 bits are used
to store the major number and the remaining 20 bits are used to
store the minor number. But don't try to extract the major and
minor numbers directly. Instead, use the MAJOR and MINOR
macros provided by the kernel. The definition of the MAJOR
and MINOR macros from the <linux/kdev_t.h> header file is
given below:
#ifndef _LINUX_KDEV_T_H
#define _LINUX_KDEV_T_H

#include <uapi/linux/kdev_t.h>

#define MINORBITS 20
#define MINORMASK ((1U << MINORBITS) - 1)

#define MAJOR(dev) ((unsigned int) ((dev) >> MINORBITS))
#define MINOR(dev) ((unsigned int) ((dev) & MINORMASK))
If you have the major and minor numbers and you want to
combine them into the dev_t type, the MKDEV macro will do
the needful. The definition of the MKDEV macro from the
<linux/kdev_t.h> header file is given below:

#define MKDEV(ma,mi) (((ma) << MINORBITS) | (mi))
We now know what major and minor numbers are and the
role they play. Let us see how we can allocate major numbers.
Here is the prototype of register_chrdev():

int register_chrdev(unsigned int major, const char *name,
struct file_operations *fops);

This function registers a major number for character
devices. The arguments are self-explanatory: major is the
major number of interest, name is the name of the driver as it
appears in /proc/devices and, finally, fops is a pointer to the
file_operations structure.
Certain major numbers are reserved for special drivers;
hence, one should avoid those and use a dynamically allocated
major number instead. To allocate a major number dynamically,
pass zero as the first argument, i.e., major == 0. The function
will then dynamically allocate and return a major number.
To deallocate an allocated major number use the
unregister_chrdev() function. The prototype is given below
and the parameters of the function are self-explanatory:
void unregister_chrdev(unsigned int major, const char *name)
The values of the major and name parameters must be
the same as those passed to the register_chrdev() function;
otherwise, the call will fail.
File operations
So we know how to allocate/deallocate the major number, but
we haven't yet connected any of our driver's operations to it.
To set up this connection, we are going to use the
file_operations structure, which is defined in the
<linux/fs.h> header file.
Each field in the structure must point to the function in the
driver that implements the corresponding operation, or be left
NULL for unsupported operations. The example given below
illustrates this.
Without lengthy theory, let us write our first null
driver, which mimics the functionality of the /dev/null
pseudo device. Given below is the complete working code for
the null driver.
Open a file using your favourite text editor and save the
code given below as null_driver.c:
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/kdev_t.h>

static int major;
static char *name = "null_driver";

static int null_open(struct inode *i, struct file *f)
{
	printk(KERN_INFO "Calling: %s\n", __func__);
	return 0;
}

static int null_release(struct inode *i, struct file *f)
{
	printk(KERN_INFO "Calling: %s\n", __func__);
	return 0;
}

static ssize_t null_read(struct file *f, char __user *buf,
			 size_t len, loff_t *off)
{
	printk(KERN_INFO "Calling: %s\n", __func__);
	return 0;
}

static ssize_t null_write(struct file *f, const char __user *buf,
			  size_t len, loff_t *off)
{
	printk(KERN_INFO "Calling: %s\n", __func__);
	return len;
}

static struct file_operations null_ops = {
	.owner = THIS_MODULE,
	.open = null_open,
	.release = null_release,
	.read = null_read,
	.write = null_write
};

static int __init null_init(void)
{
	major = register_chrdev(0, name, &null_ops);
	if (major < 0) {
		printk(KERN_INFO "Failed to register driver.");
		return -1;
	}
	printk(KERN_INFO "Device registered successfully.\n");
	return 0;
}

static void __exit null_exit(void)
{
	unregister_chrdev(major, name);
	printk(KERN_INFO "Device unregistered successfully.\n");
}

module_init(null_init);
module_exit(null_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
Our driver code is ready. Let us compile and insert the
module. In last month's article, we learnt how to write a
Makefile for kernel modules.
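For reference, a minimal kbuild Makefile for this module might look like the sketch below; this follows the standard pattern for out-of-tree modules and assumes the headers for the running kernel are installed (it is not the exact Makefile from last month's article).

```make
# Minimal kbuild Makefile for the null driver (a sketch)
obj-m += null_driver.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
```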
[mickey]$ make
[root]# insmod ./null_driver.ko
We are now going to create a device file for our driver.
But for this we need the major number, and we know that
our driver's register_chrdev() call allocates the
major number dynamically. Let us find this dynamically
allocated major number in /proc/devices, which lists the
currently registered character and block devices:

[root]# grep "null_driver" /proc/devices
248 null_driver
From the above output, we are going to use 248 as the
major number for our driver. We are only interested in the
major number; the minor number can be anything within
a valid range. I'll use 0 as the minor number. To create the
character device file, use the mknod utility. Please note that to
create the device file you must have superuser privileges:

[root]# mknod /dev/null_driver c 248 0
Now it's time for the action. Let us send some data to the
pseudo device using the echo command and check the output
of the dmesg command:
[root]# echo "Hello" > /dev/null_driver
[root]# dmesg
Device registered successfully.
Calling: null_open
Calling: null_write
Calling: null_release
Yes! We got the expected output. When open, write and close
operations are performed on the device file, the appropriate
functions from our driver's code get called. Let us perform the
read operation and check the output of the dmesg command:

[root]# cat /dev/null_driver
[root]# dmesg
Calling: null_open
Calling: null_read
Calling: null_release
To keep things simple, I have used printk() statements in
every function. If we remove these statements, then /dev/null_
driver will behave exactly the same as the /dev/null pseudo
device. Our code is working as expected. Let us now understand
the details of our character driver.
First, take a look at the driver's functions. Given below are the
prototypes of a few functions from the file_operations structure:

int (*open)(struct inode *i, struct file *f);
int (*release)(struct inode *i, struct file *f);
ssize_t (*read)(struct file *f, char __user *buf, size_t len,
loff_t *off);
ssize_t (*write)(struct file *f, const char __user *buf,
size_t len, loff_t *off);
The prototypes of the open() and release() functions are
exactly the same. These functions accept two parameters: the first
is a pointer to the inode structure. All file-related information,
such as the size, owner, access permissions, creation
timestamps, number of hard links, etc, is represented by the
inode structure. Each open file is represented internally by
the file structure. The open() function is responsible for opening
the device and allocating the required resources. The release()
function does exactly the reverse job: it closes the device
and deallocates the resources.
As the name suggests, the read() function reads data from the
device and sends it to the application. The first parameter of this
function is a pointer to the file structure. The second parameter
is the user-space buffer. The third parameter is the size, i.e.,
the number of bytes to be transferred to the user-space
buffer. And, finally, the fourth parameter is the file offset, which
tracks the current file position. Whenever the read() operation
is performed on a device file, the driver should copy len bytes
of data from the device to the user-space buffer buf and update
the file offset off accordingly. This function returns the number
of bytes read successfully. Our null driver doesn't read anything;
that is why the return value is always zero, i.e., EOF.
The driver's write() function accepts data from the
user-space application. The first parameter is a pointer
to the file structure. The second parameter is the user-space
buffer, which holds the data received from the application.
The third parameter, len, is the size of the data. The
fourth parameter is the file offset. Whenever the write() operation
is performed on a device file, the driver should transfer len bytes
of data to the device and update the file offset off accordingly.
Our null driver accepts input of any length; hence, the return
value is always len, i.e., all bytes are written successfully.
Next, we initialised the file_operations structure with the
appropriate driver functions. In the initialisation function we
registered the character device, and we deregister it in the
cleanup function.
Implementation of the full pseudo driver
Let us implement one more pseudo device, namely, full. Any write
operation on this device fails with the ENOSPC error. It
can be used to test how a program handles disk-full errors. Given
below is the complete working code of the full driver:
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/kdev_t.h>

static int major;
static char *name = "full_driver";

static int full_open(struct inode *i, struct file *f)
{
	return 0;
}

static int full_release(struct inode *i, struct file *f)
{
	return 0;
}

static ssize_t full_read(struct file *f, char __user *buf,
			 size_t len, loff_t *off)
{
	return 0;
}

static ssize_t full_write(struct file *f, const char __user *buf,
			  size_t len, loff_t *off)
{
	return -ENOSPC;
}

static struct file_operations full_ops = {
	.owner = THIS_MODULE,
	.open = full_open,
	.release = full_release,
	.read = full_read,
	.write = full_write
};
To be continued on page.... 55
Developers Insight
Joomla and WordPress are popular
Web content management
systems, which provide authoring,
collaboration and administration
tools designed to allow amateurs
to create and manage websites
with ease.
Creating Dynamic Web Portals
Using Joomla and WordPress
be called by the programmer, depending upon the module and
feature required in the application. As far as user-friendliness is
concerned, CMSs are very easy to use. CMS products can
be used and deployed even by those who do not have strong
programming skills.
A framework can be considered as a model, a structure
or simply a programming template that provides classes,
events and methods to develop an application. Generally,
the software framework is a real or conceptual structure of
software intended to serve as a support or guide to build
something that expands the structure into something useful.
The software framework can be seen as a layered structure,
indicating which kind of programs can or should be built and
the way they interrelate.
Content Management Systems (CMSs)
Digital repositories and CMSs have a lot of overlapping
features, but the two systems are unique in terms of their
underlying purposes and the functions they fulfil.
A CMS for developing Web applications is an integrated
application that is used to create, deploy, manage and store
content on Web pages. The Web content includes plain or
formatted text, embedded graphics in multiple formats,
photos, video, audio as well as the code that can be third party
APIs for interaction with the user.
Nowadays, every organisation wishes to have an online
presence for maximum visibility as well as reach.
Industries across different sectors have their
own websites with detailed portfolios so that marketing as
well as broadcasting can be integrated very effectively.
Web 2.0 applications are quite popular in the global market.
With Web 2.0, the applications developed are fully dynamic,
so that the website can provide customised results or output to
the client. Traditionally, long-term core coding using different
programming or scripting languages like CGI/Perl, Python,
Java, PHP, ASP and many others has been in vogue. But today,
excellent applications can be developed in very little
time. The major factor behind the implementation of RAD
frameworks is re-usability. By making changes to existing
code or by merely reusing applications, development has
now become very fast and easy.
Software frameworks
Software frameworks and content management systems
(CMS) are entirely different concepts. In the case of CMSs, the
reusable modules, plugins and related components are provided
with the source code and all that is required is to only plug in or
plug out. The frameworks need to be installed and imported on
the host machine and then the functions are called. This means
that the framework with different classes and functions needs to
Digital repositories
An institutional repository is an online archive or
library for collecting, preserving and disseminating digital
copies of the intellectual output of an institution, particularly
in the field of research.
For an academic institution like a university, it also
includes digital content such as academic journal articles. It
covers both pre-prints and post-prints, articles undergoing
peer review, as well as digital versions of theses and
dissertations. It even includes other digital assets
generated in an institution, such as administrative documents,
course notes or learning objectives. Depositing material in
an institutional repository is sometimes mandated by some
institutions.
Joomla CMS
Joomla is an award-winning open source CMS written in
PHP. It enables the building of websites and powerful online
applications. Many aspects, including its user-friendliness and
extensible nature, make Joomla one of the most popular Web-based
software development CMSs. Joomla is built on the model-
view-controller (MVC) Web application framework, which
can be used independently of the CMS.
Joomla can store data in a MySQL, MS SQL or
PostgreSQL database, and includes features like page caching,
RSS feeds, printable versions of pages, news flashes, blogs,
polls, search and support for language internationalisation.
According to reports by Market Wire, New York, as of
February 2014, Joomla had been downloaded over 50 million
times. Over 7,700 free and commercial extensions are available
from the official Joomla Extension Directory, and more are
available from other sources. It is supposedly the second most
used CMS on the Internet after WordPress. Many websites
provide information on installing and maintaining Joomla sites.
Joomla is used across the globe to power websites of all
types and sizes:
Corporate websites or portals
Corporate intranets and extranets
Online magazines, newspapers and publications
E-commerce and online reservation sites
Sites offering government applications
Websites of small businesses and NGOs
Community-based portals
School and church websites
Personal or family home pages
Joomla's user base includes:
The military -
US Army Corps of Engineers - Country: http://www.spl.
MTV Networks Quizilla (social networking) - http://www.
New Hampshire National Guard - https://www.nh.ngb.
United Nations Regional Information Centre - http://www.
IHOP (a restaurant chain) -
Harvard University -
and many others
The essential features of Joomla are:
User management
Media manager
Language manager
Banner management
Contact management
Web link management
Content management
Syndication and newsfeed management
Menu manager
Template management
Integrated help system
System features
Web services
Powerful extensibility
Joomla extensions
Joomla extensions are used to extend the functionality of
Joomla-based Web applications. The Joomla extensions for
multiple categories and services can be downloaded from
PHP-based open source CMSs
PHP-based open source frameworks
Figure 1: Joomla extensions
Installing and working with Joomla
For Joomla installation on a Web server, whether local or hosted,
we need to download the Joomla installation package, which
ought to be done from the official website. If Joomla
is downloaded from websites other than the official one, there is a
risk of viruses or malicious code in the set-up files.
Once you click the Download button for the latest stable
Joomla version, the installation package will be saved to the local
hard disk. Extract it so that it is ready for deployment.
Now upload the extracted files and folders
to the Web server. The easiest and safest method to upload the
Joomla installation files is via FTP.
If Joomla is to be installed live on a specific
domain, upload the extracted files to the public_html folder
in the online file manager of the domain. If Joomla is needed
in a sub-folder of a domain (www.mydomain.
com/myjoomla), it should be uploaded to the appropriate sub-
directory (public_html/myjoomla/).
After this step, create a blank MySQL database and assign
a user to it with full permissions. A blank database is created
because Joomla will automatically create the tables inside
that database. Once you have created your MySQL database
and user, note down the database name, database user name and
password you just created because, during Joomla installation,
you will be asked for these credentials.
After uploading the installation files, open the Web
browser and navigate to the main domain (http://www., or to the appropriate sub-domain (http://www., depending upon the location the Joomla
installation package was uploaded to. Once done, the first screen
of the Joomla Web Installer will open up.
Once you fill in all the required fields, press the Next button
to proceed with the installation. On the next screen, you will have
to enter the necessary information for your MySQL database.
After all the necessary information has been filled in at each
stage, press the Next button to proceed. You will be forwarded
to the last page of the installation process. On this page, specify
whether you want any sample data installed on your server.
The second part of the page shows the pre-installation
checks. The Web hosting server will check that all Joomla
requirements and prerequisites have been met, and you should
see a green check after each line.
Finally, click the Install button to start the actual Joomla
installation. In a few moments, you will be redirected to the last
screen of the Joomla Web Installer. On this last screen,
press the Remove installation folder button.
This is required for security reasons; otherwise, the
installation will restart every time. Joomla is now ready to be used.
Creating articles and linking them with the menu
After installation, the administrator panel for controlling the
Joomla website is displayed. Here, different modules, plugins
and components, along with the HTML content, can be
added or modified.
WordPress CMS
WordPress is another free and open source blogging CMS
based on PHP and MySQL. Its features
include a specialised plugin architecture with a template
system. WordPress is the most popular blogging system in use
on the Web, used by more than 60 million websites. It was
initially released in 2003 with the objective of providing an
easy-to-use CMS for multiple domains.
The installation steps for all CMSs are almost the same.
The compressed file is extracted and deployed in the public_
html folder of the Web server. In the same way, a blank
database is created and the credentials are supplied during the
installation steps.
According to the official declaration by WordPress, this
CMS powers more than 17 per cent of the Web, and the figure
is rising every day.
Figure 2: Creating a MySQL user in a Web hosting panel
Figure 3: Database configuration panel for setting up Joomla
The salient features of
WordPress are:
Ease of publishing
Publishing tools
User management
Media management
Full standards
Easy theme system
Can be extended with plugins
Built-in comments
Search engine optimised
Easy installation and upgrades
Strong community of troubleshooters
Worldwide users of WordPress include:
FIU College of Engineering and Computing
MTV Newsroom
Sony Music
Nicholls State University
Milwaukee School of Engineering
...and many others
Figure 4: Administrator login for Joomla
Figure 5: WYSIWYG editor for creating articles
By: Dr Gaurav Kumar
The author is the MD of Magma Research & Consultancy Pvt Ltd,
Ambala. He is associated with a number of academic institutes,
where he delivers lectures and conducts technical workshops
on the latest technologies and tools. He can be contacted at
static int __init full_init(void)
{
	major = register_chrdev(0, name, &full_ops);
	if (major < 0) {
		printk(KERN_INFO "Failed to register driver.");
		return -1;
	}
	return 0;
}

static void __exit full_exit(void)
{
	unregister_chrdev(major, name);
}

module_init(full_init);
module_exit(full_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
Let us compile and insert the module.
[mickey]$ make
[root]# insmod ./full_driver.ko
[root]# grep "full_driver" /proc/devices
248 full_driver
[root]# mknod /dev/full_driver c 248 0
[root]# echo "Hello" > /dev/full_driver
-bash: echo: write error: No space left on device
If you want to learn more about GNU/Linux device
drivers, the Linux kernel's source code is the best place to do
so. You can browse the kernel's source code from http://lxr. You can also download the latest source
code from Additionally, there are a
few good books available in the market, like 'Linux Kernel
Development' (3rd Edition) by Robert Love, and 'Linux
Device Drivers' (3rd Edition), which is a free book. You can
download it from These books
By: Narendra Kangralkar
The author is a FOSS enthusiast and loves exploring
anything related to open source. He can be reached at
To be continued from page.... 51
Jumper (female-to-female)
SD card (with bootable Raspbian image)
Here's a quick overview of what device drivers are. As the
name suggests, they are pieces of code that drive your device.
One can even consider them a part of the OS (in this case,
Linux), or a mediator between your hardware and the UI.
A basic understanding of how device drivers actually
work is required, so do learn more about that in case you need
to. Let's move on to the GPIO driver, assuming that one
knows the basics of device drivers (like inserting/removing
the driver from the kernel, probe functionality, etc).
When you insert (insmod) this driver, it will register itself
as a platform driver with the OS. The platform device is also
registered in the same driver, although registering
the platform device in the board file is considered the better
practice. A peripheral can be termed a platform device if it is
a part of the SoC (system-on-chip). Once the driver is inserted,
the registration (platform device and platform driver) takes place,
This article goes deep into what really goes on inside
an OS while managing and controlling the hardware.
The OS hides all the complexities, carries out all the
operations and gives end users their requirements through the
UI (User Interface). GPIO can be considered as the simplest
of all the peripherals to work on any board. A small GPIO
driver would be the best medium to explain what goes on
under the hood.
A good embedded systems engineer should, at the very
least, be well versed in the C language. Even if the following
demonstration can't be replicated (due to the unavailability of
hardware or software resources), a careful read through this
article will give readers an idea of the underlying processes.
Prerequisites to perform this experiment
C language (high priority)
Raspberry Pi board (any model)
BCM2835-ARM-peripherals datasheet (just Google for it!)
GPIO is the acronym for General Purpose Input/Output. The role played by these drivers is to handle
I/O requests to read from or write to groups of GPIO pins. Let's try and compile a GPIO driver.
Compile a GPIO Control Application and
Test It On the Raspberry Pi
Figure 1: System layout (user applications, system call interface, architecture-dependent kernel code, hardware platform)
Figure 2: Console
after which the probe function gets called.
Generic information
The driver's probe function gets called whenever an (already
registered) device's name matches the name of your platform
driver (here, it is bcm-gpio). The second major functionality
is ioctl which acts as a bridge between the application
space and your driver. In technical terms, whenever your
application invokes this (ioctl) system call, the call will be
routed to this function of your driver. Once the call from the
application is in your driver, you can process or provide data
inside the driver and can respond to the application.
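The ioctl call relies on command numbers that are shared between the application and the driver, typically declared in a common header (gpio.h in this article's file list). Below is a minimal sketch of what such a header might contain; the struct, command names and the 'G' magic byte are illustrative assumptions, not the article's actual code:

```c
/* Hypothetical sketch of a shared gpio.h; names and the 'G' magic
 * byte are illustrative, not taken from the article's actual header. */
#include <sys/ioctl.h>

struct gpio_request {
    int pin;    /* GPIO number, e.g., 24 */
    int value;  /* 0 or 1 */
};

/* _IOW/_IOR encode a transfer direction, a magic byte, a command
 * number and the argument size into a single ioctl command number. */
#define GPIO_IOC_SET _IOW('G', 1, struct gpio_request) /* app -> driver */
#define GPIO_IOC_GET _IOR('G', 2, struct gpio_request) /* driver -> app */
```

The application would then open /dev/bcm-gpio and call ioctl(fd, GPIO_IOC_SET, &req), and the driver's ioctl function would switch on the command number to decide what to do.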
The SoC datasheet, i.e., BCM2835-ARM-Peripherals,
plays a pivotal role in building up this driver. It consists of
all the information pertaining to the peripherals supported by
your SoC. It exposes all the registers relevant to a particular
peripheral, which is where the key is. Once you know what
registers of a peripheral are to be configured, half the job
is done. Be cautious about which address has to be used to
access these peripherals.
Types of addressing modes
There are three kinds of addressing modes: virtual
addressing, physical addressing and system bus addressing.
To learn the details, turn to Page 6 of the datasheet.
The macro __io_address implemented in the probe
function of the driver returns the virtual address of the
physical address passed as an argument. For GPIO, the
physical address is 0x20200000 (0x20000000 + 0x200000),
where 0x20000000 is the base address and 0x200000 is the
peripheral offset. Turn to Page 5 of the datasheet for more
details. Any guesses on which address the macro __io_
address would return? The address returned by this macro can
then be used for accessing (reading or writing) the concerned
peripheral registers.
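The address arithmetic above can be checked in a couple of lines of C. The macro names here are illustrative; only the base and offset values come from the article and the datasheet:

```c
#include <stdint.h>

#define BCM2835_PERIPH_BASE 0x20000000u  /* SoC peripheral base address */
#define GPIO_OFFSET         0x00200000u  /* GPIO peripheral offset */

/* Physical address of the GPIO register block as per the datasheet;
 * the driver's __io_address() macro maps this to a virtual address. */
static uint32_t gpio_phys_address(void)
{
    return BCM2835_PERIPH_BASE + GPIO_OFFSET;
}
```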
The GPIO control application is analogous to a simple
C program with an additional ioctl call. This call is capable
of passing data from the application layer to the driver layer
with an appropriate command. I have restricted the use of
other GPIOs as they are not exposed to headers like others.
So, modify the application as per your requirements. More
information is available on this peripheral from Page 89 of
the datasheet. In this code, I have just added functionality for
setting or clearing a GPIO. Another interesting feature is that
by configuring the appropriate registers, you can configure
GPIOs as interrupt pins. So whenever a pulse is routed to
that pin, the processor, i.e., ARM, is interrupted and the
corresponding handler registered for that interrupt is invoked
to handle and process it. This interesting aspect will be taken
up in later articles.
Compilation of the GPIO device driver
There are two ways in which you can compile your driver.
Cross compilation on the host PC
Local compilation on the target board
In the first method, one needs to have certain packages
downloaded. These are:
ARM cross-compiler
Raspbian kernel source (the kernel version must match
with the one running on your Pi; otherwise, the driver will
not load onto the OS due to the version mismatch)
In the second method, one needs to install certain
packages on Pi.
Go to the following link and follow the steps indicated:
Or, follow the third answer at this link, the starting line of
which says, "Here are the steps I used to build the Hello
World kernel module on Raspbian."
I went ahead with the second method as it was more convenient.
Testing on your Raspberry Pi
Boot up your Raspberry Pi using minicom and you will see
the console that resembles mine (Figure 2).
Run sudo dmesg -c. (This command cleans up all
the kernel boot print logs.)
Run sudo make. (This command compiles
the GPIO driver. Do this only for
the second method.)
Run sudo insmod gpio_driver.ko. (This
command inserts the driver into the OS.)
Run dmesg. You can see the prints
from the GPIO driver and the major
number allocated to it, as shown in
Figure 3. (The major number plays a
unique role in identifying the specific
driver with which the process from the
application space wants to communicate,
whereas the minor number is used
to recognise hardware.)
Run sudo mknod /dev/bcm-gpio c major-num 0. (The
mknod command creates a node in the /dev directory,
c stands for character device and 0 is the minor
number.)
Run sudo gcc gpio_app.c -o gpio_app. (Compile the
GPIO control application.)
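The major/minor split used by the mknod step can also be seen from userspace with the standard makedev()/major()/minor() macros; 248 is the major number allocated in this article's run and may differ on your system:

```c
#include <sys/sysmacros.h>
#include <sys/types.h>

/* Builds the device number that mknod /dev/bcm-gpio c 248 0 creates:
 * the major number selects the driver, the minor number the device. */
static dev_t bcm_gpio_devnum(unsigned int major_num)
{
    return makedev(major_num, 0);  /* minor 0, as in the mknod command */
}
```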
Now let's test our GPIO driver and application.
To verify whether our driver is indeed communicating
with GPIO, short pins 25 and 24 (one can use other
available pins like 17, 22 and 23 as well but make sure that
they aren't mixed up for any other peripheral) using the
female-to-female jumper (Figure 4). The default values of
both the pins will be 0. To confirm the default values, run
the following commands:
sudo ./app -n 25 -g 1
This will be the output. The output value of GPIO 25 = 0.
Now run the following command:
sudo ./app -n 24 -g 1
This will again be the output. The output value of GPIO
24 = 0.
That's it. It's verified (see Figure 5). Now, as the GPIO
pins are shorted, if we output 1 to 24 then it would be the
input value of 25 and vice versa.
To test this, run:
sudo ./app -n 24 -d 1 -v 1 -s 1
By: Sumeet Jain
The author works at eInfochips as an embedded systems
engineer. You can reach him at
Figure 3: dmesg output
Figure 5: Output showing GPIO 24=0
Figure 4: R-pi GPIO
Figure 6: Output showing GPIO 25=1
This command will drive the value of GPIO 24 to 1,
which in turn will be routed to GPIO 25. To verify the value
of GPIO 25, run:
sudo ./app -n 25 -g 1
This will give the output. The output value of GPIO 25 = 1
(see Figure 6).
One can also connect any external device or a simple LED
(through a resistor) to the GPIO pin and test its output.
Arguments passed to the application through the command
lines are:
-n : GPIO number
-d : GPIO direction (0 - IN or 1 - OUT)
-v : GPIO value (0 or 1)
-s/g : set/get GPIO
The files are:
gpio_driver.c : GPIO driver file
gpio_app.c : GPIO control application
gpio.h : GPIO header file
Makefile : File to compile the GPIO driver
After conducting this experiment, some curious folk may
have questions like:
Why does one have to use virtual addresses to access GPIO?
How does one determine the virtual address from the
physical address?
We will discuss the answers to these in later articles.
Admin How To
[root@apachewebsever1 Packages]#
Start the service:
[root@apachewebsever1 ~]# service httpd start
Starting httpd:
[ OK ]
[root@apachewebsever1 ~]#
Start the service at boot time:
[root@apachewebsever1 ~]# chkconfig httpd on
[root@apachewebsever1 ~]#
[root@apachewebsever1 ~]# chkconfig --list httpd
httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@apachewebsever1 ~]#
The directory location of Apache HTTP Service is /etc/
httpd/. Figure 2 gives the default test page for Apache Web
Server on Red Hat Enterprise Linux.
Now, let's create a Web page index.html at /var/www/html.
Restart Apache Web Service to bring the changes into effect.
The index.html Web page will be displayed (Figure 3).
Repeat the above steps for Web Server2, except for the following:
Set the IP address to
The contents of the custom Web page index.html should
be ApacheWebServer2 as shown in Figure 4.
Installation and configuration of Pound
gateway server
First, ensure YUM is up and running:
[root@poundgateway ~]# ps -ef | grep yum
root 2050 1998 0 13:30 pts/1 00:00:00 grep yum
[root@poundgateway ~]#
[root@poundgateway ~]# yum clean all
Loaded plugins: product-id, refresh-packagekit, subscription-manager
Updating Red Hat repositories.
Cleaning repos:
Cleaning up Everything
[root@poundgateway ~]#
[root@poundgateway ~]# yum update all
Loaded plugins: product-id, refresh-packagekit, subscription-manager
Updating Red Hat repositories.
Setting up Update Process
No Match for argument: all
No package all available.
No Packages marked for Update
[root@poundgateway ~]#
Then, check the default directory of YUM:
[root@poundgateway ~]# cd /etc/yum.repos.d/
[root@poundgateway yum.repos.d]#
[root@poundgateway yum.repos.d]# ll
total 8
-rw-r--r-- 1 root root 67 Jul 27 13:30 redhat.repo
-rw-r--r--. 1 root root 529 Apr 27 2011 rhel-source.repo
[root@poundgateway yum.repos.d]#
By default, the repo file rhel-source.repo is disabled. To
enable it, edit the file rhel-source.repo and change the value
enabled = 0
to
enabled = 1
For now, you can leave this repository disabled.
Figure 1: Load balancing using the Pound server (users' requests reach the Pound server, which performs the required load balancing between Apache Web Server1 and Web Server2)
Figure 2: Default page
Figure 3: Custom web page of Apache Web Server1
Figure 4: Custom web page of Apache Web Server2
Now, download the epel-release-6-8.noarch.rpm package
and install it.
Important notes on EPEL
1. EPEL stands for Extra Packages for Enterprise Linux.
2. EPEL is not a part of RHEL but provides a lot of open
source packages for major Linux distributions.
3. EPEL packages are maintained by the Fedora team and
are fully open source, with no core duplicate packages and
no compatibility issues. They are to be installed using the
YUM utility.
The link to download the EPEL release for RHEL 6 (32-bit)
And for 64 bit, it is:
Here, epel-release-6-8.noarch.rpm is kept at /opt.
Go to the /opt directory and change the permission of
the file:
[root@poundgateway opt]# chmod -R 755 epel-release-6-8.noarch.rpm
[root@poundgateway opt]#
Now, install epel-release-6-8.noarch.rpm:
[root@poundgateway opt]# rpm -ivh --aid --force epel-release-6-8.noarch.rpm
warning: epel-release-6-8.noarch.rpm: Header V3 RSA/SHA256
Signature, key ID 0608b895: NOKEY
Preparing... ###################################
######## [100%]
1:epel-release ###################################
######## [100%]
[root@poundgateway opt]#
epel-release-6-8.noarch.rpm installs the repo files necessary
to download the Pound package:
[root@poundgateway ~]# cd /etc/yum.repos.d/
[root@poundgateway yum.repos.d]# ll
total 16
-rw-r--r-- 1 root root 957 Nov 4 2012 epel.repo
-rw-r--r-- 1 root root 1056 Nov 4 2012 epel-testing.repo
-rw-r--r-- 1 root root 67 Jul 27 13:30 redhat.repo
-rw-r--r--. 1 root root 529 Apr 27 2011 rhel-source.repo
[root@poundgateway yum.repos.d]#
As observed, epel.repo and epel-testing.repo are the newly
added repo files. No changes are made in epel.repo and
epel-testing.repo. Move the default redhat.repo and rhel-
source.repo to a backup location. Now, connect the server
to the Internet and, using the yum utility, install Pound:
[root@PoundGateway ~]# yum install Pound*
This will install Pound, Pound-debuginfo and will also
install required dependencies along with it.
To verify Pound's installation, type:
[root@PoundGateway ~]# rpm -qa Pound
[root@PoundGateway ~]#
The location of the Pound configuration file is /etc/pound.cfg.
You can view the default Pound configuration file by
using the command given below:
[root@PoundGateway ~]# cat /etc/pound.cfg
Make the changes to the Pound configuration file as shown
in the code snippet given below:
We will comment out the section related to ListenHTTPS,
as we do not need HTTPS for now.
Add the IP address under the
ListenHTTP section.
Add the IP addresses of the two Web servers,
with Port 80, under the Service BackEnd section, where
[] is for the Pound server; []
is for Web Server1 and [ ] is for Web Server2.
The edited Pound configuration file is:
[root@PoundGateway ~]# cat /etc/pound.cfg
# Default pound.cfg
# Pound listens on port 80 for HTTP and port 443 for HTTPS
# and distributes requests to 2 backends running on
# see pound(8) for configuration directives.
# You can enable/disable backends with poundctl(8).
User "pound"
Group "pound"
Control "/var/lib/pound/pound.cfg"
Port 80
# Address
# Port 443
# Cert "/etc/pki/tls/certs/pound.pem"
Port 80
Port 80
[root@PoundGateway ~]#
Now, start the Pound service:
[root@PoundGateway ~]# service pound start
Starting Pound: starting... [OK]
[root@PoundGateway ~]#
By: Arindam Mitra
The author can be reached at or
To configure the service to be started at boot time, type:
[root@PoundGateway ~]# chkconfig pound on
[root@PoundGateway ~]# chkconfig --list pound
pound 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@PoundGateway ~]#
Now open a Web browser and access the URL It displays the Web page from Web Server1.
Refresh the page, and it will display the Web page from
Web Server2. Keep refreshing the Web page; it will flip from Web
Server1 to Web Server2, back and forth. We have now
configured a system where the load on the Web server is
being balanced between two physical servers.
Wikipedia defines a bounce email as a system-
generated failed delivery status notification
(DSN) or a non-delivery report (NDR), which
informs the original sender about a delivery problem. When
that happens, the original email is said to have bounced.
Broadly, bounces are categorised into two types:
A hard/permanent bounce: This indicates that there
exists a permanent reason for the email not to get
delivered. These are valid bounces, and can be due to the
non-existence of the email address, an invalid domain
name (DNS lookup failure), or the email provider
blacklisting the sender/recipient email address.
A soft/temporary bounce: This can occur due to
various reasons at the sender or recipient level. It
can arise due to a network failure, the recipient
mailbox being full (quota-exceeded), the recipient
having turned on a vacation reply, the local Message
Transfer Agent (MTA) not responding or being badly
configured, and a whole lot of other reasons. Such
Bounced emails are the bane of marketing campaigns and mailing lists. In this article, the
author explains the nature of bounce messages and describes how to handle them.
Why We Need to Handle Bounced Emails
bounces cannot be used to determine the status of a
failing recipient, and therefore need to be sorted out
effectively from our bounce processing.
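The hard/soft distinction maps onto SMTP reply codes: 5xx replies (such as the 550 seen later in this article) indicate permanent failures, while 4xx replies are temporary. A minimal sketch of that rule of thumb (not code from the article):

```c
/* Rule of thumb: SMTP 5xx replies indicate permanent (hard) failures,
 * 4xx replies indicate temporary (soft) failures. Real bounce handling
 * still needs header parsing, since bounces arrive as emails. */
static int is_permanent_bounce(int smtp_code)
{
    return smtp_code >= 500 && smtp_code <= 599;
}

static int is_temporary_bounce(int smtp_code)
{
    return smtp_code >= 400 && smtp_code <= 499;
}
```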
To understand this better, consider a sender, Alice,
sending an email to bob@somewhere.com. She mistyped
the recipient's address as bub@somewhere.com. The email
message will have a default envelope sender, set by the
local MTA running there, or by the PHP script, to Alice's
own address. Now, Alice's MTA looks up the
DNS MX records for somewhere.com, chooses a host
from that list, gets its IP address and tries to connect
to the MTA running on somewhere.com, port 25, via
an SMTP connection. Now, the MTA of somewhere.
com is in trouble, as it can't find the user bub in its
local user table. It responds to the sending MTA
with an SMTP failure code, stating that
the user lookup failed (Code: 550). It's time for the
sending MTA to generate a bounce email to the address
in the Return-Path email header (the envelope sender),
with a message that the email to bub@somewhere.com
failed. That's a bounce email. Properly maintained
mailing lists will have every email passing through
them branded with a generic email ID as the envelope
sender, and bounces to that will be wasted if left unhandled.
VERP (Variable Envelope Return-Path)
In the above example, you will have noticed that the
delivery failure message was sent back to the address of
the Return-Path header in the original email. If there is
a key to handle the bounced emails, it comes from the
Return-Path header.
The idea of VERP is to safely encode the recipient details,
too, somehow in the return path, so that we can parse the
received bounce effectively and extract the failing recipient
from it. We specifically use the Return-Path header, as that's
the only header that is not going to get tampered with by the
intervention of a number of MTAs.
Typically, an email from Alice to Bob in the above
example will have headers like the following:
Now, we create a custom Return-Path header by encoding
the To address as a combination of prefix-delim-hash. The
hash can be generated by the PHP HMAC functions, so that the
new email headers become something like what follows:
Return-Path: {encode(bob@somewhere.com)}
Now, the bounces will get directed to our new return-path
and can be handled to extract the failing recipient.
Generating a VERP address
The task now is to generate a secure return path, which is not
bulky, and cannot be mimicked by an attacker. A very simple
VERP address for a mail to bob@somewhere.com will be:
Since it can be easily exploited by an attacker, we need to
also include a hash generated with a secret key, along with the
address. Please note that the secret key is only visible to the
sender and in no way to the receiver or an attacker.
Therefore, a standard VERP address will be of the form:
bounces-{ prefix }-{ hash(prefix, secretkey) }@sender_domain
PHP has its own hash-generating functions that can make
things easier. Since PHP's HMACs cannot be decoded, but only
compared, the idea will be to adjust the recipient email ID in
the prefix part of the VERP address along with its hash. On
receipt, the prefix and the hash can be compared to validate
the integrity of the bounce.
We will string-replace the @ in the recipient email ID to
attach it along with the hash.
You need to edit your email headers to generate the
custom Return-Path, and make sure you pass it as the fifth
argument to the PHP mail() function to tell your Exim MTA to
set it as the default envelope sender.
$to = 'bob@somewhere.com';
$subject = 'This is the message subject';
$body = 'This is the message body';

/* Altering the return path */
$alteredReturnPath = generateVERPAddress( $to );
$headers = 'Return-Path: ' . $alteredReturnPath;
$envelopeSender = '-f' . $alteredReturnPath;

mail( $to, $subject, $body, $headers, $envelopeSender );

/*
 * We need to produce a return address of the form
 * bounces-{ prefix }-{ hash(prefix) }@sender_domain, where the
 * prefix is a str_replace()d to_address.
 */
function generateVERPAddress( $to ) {
    $hashAlgorithm = 'md5';
    $hashSecretKey = 'myKey';
    $emailDomain = 'sender_domain';  // your sending domain
    $addressPrefix = str_replace( '@', '.', $to );
    $verpHash = hash_hmac( $hashAlgorithm, $to, $hashSecretKey );
    return 'bounces-' . $addressPrefix . '-' . $verpHash . '@' . $emailDomain;
}
Including security features is yet another concern, and can
be done effectively by adding the current timestamp value (in
UNIX time) to the VERP prefix. This will make it easy for
the bounce processor to decode the email delivery time, and
adds protection against brute-forcing the hash. Decoding
and comparing the value of the timestamp with the current
timestamp will also help to understand how old the bounce is.
Therefore, a more secure VERP address will look like
what follows:
bounces-{ to_address }-{ delivery_timestamp }-{ hash( to_address-delivery_timestamp, secretKey ) }@sender_domain
The current timestamp can be generated in PHP by:
$current_timestamp = time();
There's still work to do before the email is sent, as the
local MTA at the sender's end may try to set its own custom
return path for messages it transmits. In the example below,
we adjust the Exim configuration on the MTA to override
this behaviour.
$ sudo nano /etc/exim4/exim4.conf
# Do not remove the Return-Path header
return_path_remove = false
# Remove the field errors_to from the current router.
# This will enable Exim to use the fifth param of
# PHP mail(), prefixed by -f, as the default
# envelope sender
Every email ID will correspond to a user_id field in a
standard user database, and this can be used instead of an
email ID to generate a tidy and easy-to-look-up VERP hash.
Redirect your bounces to a PHP bounce-
handling script
We now have a VERP address being generated on every
sent email, and it will have all the necessary information
we need securely embedded in it. The remaining part of our
task is to capture and validate the bounces, which would
require redirecting the bounces to a processing PHP script.
By default, every bounce message will reach all the way
back to the MTA that sent it, as its
return path gets set to that MTA's domain, with or without
VERP. The advantage of using VERP is that we will have
the encoded failing address, too, somewhere in the bounce.
To get that out from the bounce, we can HTTP POST
the email via curl to the bounce-processing script, say
localhost/handleBounce.php, using an Exim pipe transport,
as follows:
$ sudo nano /etc/exim4/exim4.conf
# Suppose you have a receive_all router that will accept
# all the emails to your domain.
# This can be the system_alias router too.
driver = accept
transport = pipe_transport

# Edit the pipe_transport
driver = pipe
command = /usr/bin/curl http://localhost/handleBounce.php --data-urlencode "email@-"
group = nogroup
return_path_add # adds a Return-Path header to the incoming mail
delivery_date_add # adds the bounce timestamp
envelope_to_add # copies the return path to the To: header of the bounce
The email can be made use of in handleBounce.php
via a simple POST request:
$email = $_POST['email'];
Decoding the failing recipient from the
bounce email
Now that the mail is successfully in the PHP script, our task
will be to extract the failing recipient from the encoded email
headers. Thanks to Exim configurations like envelope_to_add
in the pipe transport (above), the VERP address gets pasted to
the To header of the bounce email, and that's the place to look
for the failing recipient.
Some common regex functions to extract the headers are:
function extractHeaders( $email ) {
    $bounceHeaders = array();
    $lineBreaks = explode( "\n", $email );
    foreach ( $lineBreaks as $lineBreak ) {
        if ( preg_match( "/^To: (.*)/", $lineBreak, $toMatch ) ) {
            $bounceHeaders['to'] = $toMatch[1];
        }
        if ( preg_match( "/^Subject: (.*)/", $lineBreak, $subjectMatch ) ) {
            $bounceHeaders['subject'] = $subjectMatch[1];
        }
        if ( preg_match( "/^Date: (.*)/", $lineBreak, $dateMatch ) ) {
            $bounceHeaders['date'] = $dateMatch[1];
        }
        if ( trim( $lineBreak ) == "" ) {
            // An empty line denotes that the header part is over
            return $bounceHeaders;
        }
    }
    return $bounceHeaders;
}
After extracting the headers, we need to decode the
original failed-recipient email ID from the VERP-hashed
$bounceHeaders['to'], which involves more or less the
reverse of what we did earlier. This also helps us validate
the bounced email.

/*
 * Considering the received $headers['to'] is of the form
 * bounces-{ to_address }-{ delivery_timestamp }-{ hash( to_address-delivery_timestamp, secretKey ) }@sender_domain
 */
By: Tony Thomas
The author is currently doing his Google SoC project for Wikimedia
on handling email bounces effectively. You can contact the author
at Github:
$hashedTo = $headers['to'];
$to = extractToAddress( $hashedTo );

function extractToAddress( $hashedTo ) {
    global $hashAlgorithm, $hashSecretKey;
    $timeNow = time();
    // This will help us get the local part of address@domain
    preg_match( '~(.*?)@~', $hashedTo, $hashedSlice );
    // This will help us cut the local part at the '-' delimiters
    $hashedAddressPart = explode( '-', $hashedSlice[1] );
    // Now we have the prefix in $hashedAddressPart[0-2]
    // and the hash in $hashedAddressPart[3]
    $verpPrefix = $hashedAddressPart[0] . '-' . $hashedAddressPart[1] . '-' . $hashedAddressPart[2];
    // Extracting the bounce time
    $bounceTime = $hashedAddressPart[2];
    // Valid time for a bounce to happen. The values can be subtracted
    // to find out the time in between, and even used to set an accept
    // window, say 3 days.
    if ( $bounceTime < $timeNow ) {
        if ( hash_hmac( $hashAlgorithm, $verpPrefix, $hashSecretKey ) === $hashedAddressPart[3] ) {
            // The bounce is valid, as the comparison returns true.
            // Put back the '@' that was replaced with a '.'
            // (assumes there are no dots in the local part)
            return preg_replace( '/\./', '@', $hashedAddressPart[1], 1 );
        }
    }
    return false;
}
Taking action on the failing recipient
Now that you have got the failing recipient, the task would
be to record their bounce history and take relevant action.
A recommended approach would be to maintain a bounce
records table in the database, which would store the failed
recipient, bounce timestamp and failure reason. A row can be
inserted into the database on every bounce processed, and can
be as simple as:
/* extractHeaders() is defined above */
$bounceHeaders = extractHeaders( $email );
$failureReason = $bounceHeaders['subject'];
$bounceTimestamp = $bounceHeaders['date'];
$hashedTo = $bounceHeaders['to'];  // This will hold the VERP address
$failedRecipient = extractToAddress( $hashedTo );

$con = mysqli_connect( "database_server", "dbuser", "dbpass", "databaseName" );
mysqli_query( $con, "INSERT INTO bounceRecords ( failedRecipient, bounceTimestamp, failureReason ) VALUES ( '$failedRecipient', '$bounceTimestamp', '$failureReason' )" );
mysqli_close( $con );
Simple tests to differentiate between a
permanent and temporary bounce
One of the greatest challenges while writing a bounce
processor is to make sure it handles only the right bounces,
i.e., the permanent ones. A bounce-processing script that reacts to
every single bounce can lead to the mass unsubscription of users
from the mailing list, and a lot of havoc. Exim helps us here in
a great way by including an additional X-Failed-Recipients:
header in a permanent bounce email. This header can be checked
for in the regex function we wrote earlier, and action can be
taken only if it exists.
/*
 * Check if the bounce corresponds to a permanent failure.
 * This check can be added to the extractHeaders() function above.
 */
function isPermanentFailure( $email ) {
    $lineBreaks = explode( "\n", $email );
    foreach ( $lineBreaks as $lineBreak ) {
        if ( preg_match( "/^X-Failed-Recipients: (.*)/", $lineBreak, $permanentFailMatch ) ) {
            return true;
        }
    }
    return false;
}
Even today, we have a number of large organisations
that send more than 100 emails every minute and still
have all bounces directed to /dev/null. This results in far
too many emails being sent to undeliverable addresses,
and eventually leads to frequent blacklisting of the
organisation's mail server by popular providers like
Gmail, Hotmail, etc.
If bounces are directed to an IMAP maildir, the regex
functions won't be necessary, as the PHP IMAP library can
parse the headers readily for you.
A word about Varnish
Varnish is a Web application accelerator, or reverse proxy.
It's installed in front of the Web server to handle HTTP
requests. This way, it speeds up the site and improves its
performance significantly. In some cases, it can improve the
performance of a site by 300 to 1,000 times.
It does this by caching the Web pages; when visitors
come to the site, Varnish serves the cached pages rather than
requesting them from the Web server. Thus the load on the Web
server reduces. This method improves the site's performance
and scalability. It can also act as a failsafe mechanism if the Web
server goes down, because Varnish will continue to serve the
cached pages in the absence of the Web server.
With that said, let's begin by installing Varnish on a VPS,
and then connect it to a single NGINX Web server. Then let's
add another NGINX Web server so that we can implement
a failsafe mechanism. This will accomplish the
performance goals that we stated. So let's get
started. For the rest of the article, let's
assume that you are using the CentOS
6.4 OS. However, we have provided
information for Ubuntu users
wherever we felt it was necessary.
Enable the required repositories
First, enable the appropriate
repositories. For CentOS, Varnish
is available from the EPEL
repository. Add this repository to
your repos list, but before you do so,
you'll need to import the GPG keys. So
open a terminal and enter the following:
[root@bookingwire sridhar]#wget https://
[root@bookingwire sridhar]#mv 0608B895.txt /
[root@bookingwire sridhar]#rpm --import /
[root@bookingwire sridhar]#rpm -qa
The current cloud inventory for one of the SaaS
applications at our firm is as follows:
Web server: CentOS 6.4 + NGINX + MySQL + PHP + Drupal
Mail server: CentOS 6.4 + Postfix + Dovecot + SquirrelMail
A quick test on Pingdom showed a load time of 3.92
seconds for a page size of 2.9MB with 105 requests.
Tests using Apache Bench (ab -c1 -n500 http://www.
yielded almost the same figures: a mean
response time of 2.52 seconds.
We wanted to improve the page load times by caching the
content upstream, scaling the site to handle much greater http
workloads, and implementing a failsafe mechanism.
The first step was to handle all incoming http requests
from anonymous users that were loading our Web server.
Since anonymous users are served content that seldom
changes, we wanted to prevent these requests
from reaching the Web server so that
its resources would be available
to handle the requests from
authenticated users. Varnish
was our first choice to
handle this.
Our next concern was to
find a mechanism to handle
the SSL requests mainly
on the sign-up pages,
where we had interfaces
to Paypal. Our aim was
to include a second Web
server that handled a
portion of the load,
and we wanted to
configure Varnish to
distribute http traffic
using a round-robin
mechanism between these
two servers. Subsequently,
we planned on configuring
Varnish in such a way that
even if the Web servers
were down, the system would
continue to serve pages. During the course of
this exercise we documented our experiences,
and that's what you're reading about here.
In this article, the author demonstrates how the performance of CloudStack can
be dramatically improved by using Varnish. He does so by drawing upon his practical
experience with administering SaaS servers at his own firm.
Boost the Performance of
CloudStack with Varnish
After importing the GPG keys you can enable the repository.
[root@bookingwire sridhar]#wget
[root@bookingwire sridhar]#rpm -Uhv epel-release-6*.rpm
To verify if the new repositories have been added to the
repo list, run the following command and check the output to
see if the repository has been added:
[root@bookingwire sridhar]#yum repolist
If you happen to use an Ubuntu VPS, then you should use
the following commands to enable the repositories:
[root@bookingwire sridhar]# wget http://repo.varnish-cache.
[root@bookingwire sridhar]# apt-key add GPG-key.txt
[root@bookingwire sridhar]# echo deb http://repo.varnish- precise varnish-3.0 | sudo tee -a /etc/
[root@bookingwire sridhar]# sudo apt-get update
Installing Varnish
Once the repositories are enabled, we can install Varnish:
[root@bookingwire sridhar]# yum -y install varnish
On Ubuntu, you should run the following command:
[root@bookingwire sridhar]# sudo apt-get install varnish
After a few seconds, Varnish will be installed. Let's verify
the installation before we go further. In the terminal, enter the
following command; the output should contain the lines that
follow the input command (we have reproduced only a few
lines for the sake of clarity).
[root@bookingwire sridhar]# yum info varnish
Installed Packages
Name : varnish
Arch : i686
Version : 3.0.5
Release : 1.el6
Size : 1.1 M
Repo : installed
That looks good; so we can be sure that Varnish is installed.
Now, let's configure Varnish to start up on boot, so that in case
you have to restart your VPS, Varnish will be started automatically.
[root@bookingwire sridhar]#chkconfig --level 345 varnish on
Having done that, let's now start Varnish:
[root@bookingwire sridhar]#/etc/init.d/varnish start
We have now installed Varnish and it's up and running.
Let's configure it to cache the pages from our NGINX server.
Basic Varnish configuration
The Varnish configuration file is located at /etc/sysconfig/
varnish on CentOS and /etc/default/varnish on Ubuntu.
Open the file in your terminal using the nano or vim text
editor. Varnish provides three ways of configuring it; we
prefer Option 3. So for our 2GB server, the configuration
steps are as shown below (the lines with comments have been
stripped off for the sake of clarity):
Figure 1: Pingdom result (the site tested faster than 41 per cent of all tested websites)
Figure 2: Apache Bench result
-u varnish -g varnish \
-p thread_pool_add_delay=2 \
-p thread_pools=2 \
-p thread_pool_min=400 \
-p thread_pool_max=4000 \
-p session_linger=50 \
-p sess_workspace=262144 \
The first line, when substituted with the variables, will read -a
:80,:443 and instruct Varnish to serve all requests made on ports
80 and 443. We want Varnish to serve all http and https requests.
To set the thread pools, first determine the number
of CPU cores that your VPS uses and then update the
thread pool parameters accordingly:
[root@bookingwire sridhar]# grep processor /proc/cpuinfo
processor : 0
processor : 1
This means you have two cores.
The formula to use is:

-p thread_pools=<Number of CPU cores> \
-p thread_pool_min=<800 / Number of CPU cores> \
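These two values can be computed in the shell. A minimal sketch, assuming a Linux VPS where /proc/cpuinfo is available (as used above):

```shell
# Derive the Varnish thread pool settings from the CPU core count,
# using the formula above (800 is the overall minimum-thread target).
CORES=$(grep -c ^processor /proc/cpuinfo)
THREAD_POOLS=$CORES
THREAD_POOL_MIN=$((800 / CORES))
echo "-p thread_pools=$THREAD_POOLS -p thread_pool_min=$THREAD_POOL_MIN"
```

On the two-core machine shown above, this would print -p thread_pools=2 -p thread_pool_min=400.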
The -s ${VARNISH_STORAGE} translates to -s
malloc,1G after variable substitution and is the most
important directive. This allocates 1GB of RAM for
exclusive use by Varnish. You could also specify -s file,/
var/lib/varnish/varnish_storage.bin,10G, which tells
Varnish to use the file caching mechanism on the disk and
that 10GB has been allocated to it. Our suggestion is that
you should use the RAM.
Configure the default.vcl file
The default.vcl file is where you will have to make most of
the configuration changes in order to tell Varnish about your
Web servers, assets that shouldn't be cached, etc. Open the
default.vcl file in your favourite editor:
[root@bookingwire sridhar]# nano /etc/varnish/default.vcl
Since we expect to have two NGINX servers running
our application, we want Varnish to distribute the http
requests between these two servers. If, for any reason, one
of the servers fails, then all requests should be routed to the
healthy server. To do this, add the following to your default.
vcl fle:
backend bw1 { .host =;
.probe = { .url = "/google0ccdbf1e9571f6ef.html";
.interval = 5s;
.timeout = 1s;
.window = 5;
.threshold = 3; }}
backend bw2 { .host =;
.probe = { .url = "/google0ccdbf1e9571f6ef.html";
.interval = 5s;
.timeout = 1s;
.window = 5;
.threshold = 3; }}
backend bw1ssl { .host =;
.port = 443;
.probe = { .url = "/google0ccdbf1e9571f6ef.html";
.interval = 5s;
.timeout = 1s;
.window = 5;
.threshold = 3; }}
backend bw2ssl { .host =;
.port = 443;
.probe = { .url = "/google0ccdbf1e9571f6ef.html";
.interval = 5s;
.timeout = 1s;
.window = 5;
.threshold = 3; }}
director default_director round-robin {
    { .backend = bw1; }
    { .backend = bw2; }
}

director ssl_director round-robin {
    { .backend = bw1ssl; }
    { .backend = bw2ssl; }
}

sub vcl_recv {
    if (server.port == 443) {
        set req.backend = ssl_director;
    } else {
        set req.backend = default_director;
    }
}
You might have noticed that we have used public IP
addresses, since we had not enabled private networking
within our servers. You should define one backend for
each type of traffic you want to handle. Hence, we
have one set to handle http requests and another to handle
the https requests.
It's a good practice to perform a health check to see
if the NGINX Web servers are up. In our case, we kept it
simple by checking if the Google webmaster file was present
in the document root. If it isn't present, then Varnish will not
include the Web server in the round robin league and won't
redirect traffic to it.

.probe = { .url = "/google0ccdbf1e9571f6ef.html";

The above directive checks for the existence of this file at
each backend. You can use this to take an NGINX server out
intentionally, either to update the version of the application or
to run scheduled maintenance checks. All you have to do is to
rename this file so that the check fails!
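The rename can be wrapped in a pair of tiny helper functions; this is a sketch of our own (the function names and the document-root argument are not from the original setup):

```shell
# Toggle a backend in or out of the Varnish round robin by renaming
# the health-probe file in the given document root.
probe_disable() {
    mv "$1/google0ccdbf1e9571f6ef.html" "$1/google0ccdbf1e9571f6ef.html.off"
}
probe_enable() {
    mv "$1/google0ccdbf1e9571f6ef.html.off" "$1/google0ccdbf1e9571f6ef.html"
}
```

Run probe_disable with your NGINX document root before maintenance and probe_enable afterwards; within a few probe intervals Varnish drops or restores the backend.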
In spite of our best efforts to keep our servers sterile,
there are a number of reasons that can cause a server to
go down. Two weeks back, we had one of our servers go
down, taking more than a dozen sites with it, because the
master boot record of CentOS was corrupted. In such cases,
Varnish can handle the incoming requests even if your Web
server is down. The NGINX Web server sets an Expires
header (HTTP 1.0) and the max-age (HTTP 1.1) for each
page that it serves. If set, the max-age takes precedence
over the Expires header. Varnish is designed to ask
the backend Web servers for new content every time the
content in its cache goes stale. However, in a scenario
like the one we faced, it's impossible for Varnish to obtain
fresh content. In this case, setting the 'grace' period in the
configuration file allows Varnish to serve (stale) content
even if the Web server is down. To have Varnish serve the
(stale) content, add the following lines to your default.vcl:
sub vcl_recv {
    set req.grace = 6h;
}

sub vcl_fetch {
    set beresp.grace = 6h;
}

sub vcl_recv {
    if (!req.backend.healthy) {
        unset req.http.Cookie;
    }
}
The last segment tells Varnish to strip all cookies for an
authenticated user and serve an anonymous version of the
page if all the NGINX backends are down.
Most browsers support encoding but report it differently.
NGINX sets the encoding as Vary: Cookie, Accept-Encoding.
If you don't handle this, Varnish will cache the same page
once for each type of encoding, thus wasting server
resources. In our case, it would gobble up memory. So add the
following to vcl_recv to have Varnish cache
the content only once:
if (req.http.Accept-Encoding) {
    if (req.http.Accept-Encoding ~ "gzip") {
        # If the browser supports it, we'll use gzip.
        set req.http.Accept-Encoding = "gzip";
    } else if (req.http.Accept-Encoding ~ "deflate") {
        # Next, try deflate if it is supported.
        set req.http.Accept-Encoding = "deflate";
    } else {
        # Unknown algorithm. Remove it and send unencoded.
        unset req.http.Accept-Encoding;
    }
}
Now, restart Varnish.
[root@bookingwire sridhar]# service varnish restart
Additional configuration for content management
systems, especially Drupal
A CMS like Drupal throws up additional challenges when
configuring the VCL file. We'll need to include additional
directives to handle its various quirks. You can modify the
directives below to suit the CMS that you are using. When
using a CMS like Drupal, if there are files that you don't want
cached for some reason, add the following to your
default.vcl file in the vcl_recv section:
if (req.url ~ "^/status\.php$" ||
    req.url ~ "^/update\.php$" ||
    req.url ~ "^/ooyala/ping$" ||
    req.url ~ "^/admin/build/features" ||
    req.url ~ "^/info/.*$" ||
    req.url ~ "^/flag/.*$" ||
    req.url ~ "^.*/ajax/.*$" ||
    req.url ~ "^.*/ahah/.*$") {
    return (pass);
}
Varnish sends the length of the content (see the
Varnish log output above) so that browsers can display
the progress bar. However, in some cases when Varnish
is unable to tell the browser the specified content-length
(like streaming audio) you will have to pass the request
directly to the Web server. To do this, add the following
command to your default.vcl:
if (req.url ~ "^/content/music/$") {
    return (pipe);
}
Drupal has certain files that shouldn't be accessible to
the outside world, e.g., cron.php or install.php. However,
you should be able to access these files from the set of IPs
that your development team uses. At the top of default.vcl,
include the following, replacing the IP address block with
your own:
acl internal {;
Now, to prevent the outside world from accessing these
pages, we'll throw an error. So, inside the vcl_recv function,
include the following:

if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~
internal) {
    error 404 "Page not found";
}
If you prefer to redirect to an error page, then use this
instead:

if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~
internal) {
    set req.url = "/404";
}
Our approach is to cache all assets like images, JavaScript
and CSS for both anonymous and authenticated users. So
include this snippet inside vcl_recv to unset the cookie set by
Drupal for these assets:

if (req.url ~ "(?i)\.(png|gif|jpeg|jpg|ico|swf|css|js|html|
htm)(\?[a-z0-9]+)?$") {
    unset req.http.Cookie;
}
Drupal throws up a challenge, especially when
you have enabled several contributed modules. These
modules set cookies, thus preventing Varnish from
caching assets. Google Analytics, a very popular module,
sets a cookie. To remove this, include the following in
your default.vcl:
set req.http.Cookie = regsuball(req.http.Cookie, (^|;\s*)
If there are other modules that set JavaScript cookies,
then Varnish will cease to cache those pages; in which case,
you should track down the cookie and update the regex
above to strip it.
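As a sketch, a complete cookie-stripping rule of the kind truncated above could look like the following; the cookie names in the pattern (__utm* for Google Analytics, has_js) are illustrative, not taken from our configuration:

```vcl
sub vcl_recv {
    # Strip JavaScript-set cookies such as Google Analytics' __utm*
    # (extend the pattern for other modules' cookies as you find them).
    set req.http.Cookie = regsuball(req.http.Cookie,
        "(^|;\s*)(__utm[a-z]+|has_js)=[^;]*", "");
    # If nothing is left, drop the header so Varnish can cache the page.
    if (req.http.Cookie == "") {
        unset req.http.Cookie;
    }
}
```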
Once you have done that, head to /admin/config/
development/performance, enable the 'Page cache' setting
and set a non-zero time for 'Expiration of cached pages'.
Then update settings.php with the following snippet,
replacing the IP address with that of your machine running
Varnish:

$conf['reverse_proxy'] = TRUE;
$conf['reverse_proxy_addresses'] = array('...');
$conf['page_cache_invoke_hooks'] = FALSE;
$conf['cache'] = 1;
$conf['cache_lifetime'] = 0;
$conf['page_cache_maximum_age'] = 21600;
You can install the Drupal Varnish module, which provides
better integration with Varnish, and include the following
lines in your settings.php:

$conf['cache_backends'] = array('sites/all/modules/varnish/');
$conf['cache_class_cache_page'] = 'VarnishCache';
Checking if Varnish is running and
serving requests
Instead of logging to a normal log file, Varnish logs to a
shared memory segment. Run varnishlog from the command
line, access your IP address/URL from the browser and
view the Varnish messages. It is not uncommon to see a '503
Service Unavailable' message. This means that Varnish is
unable to connect to NGINX, in which case you will see an
error line in the log (only the relevant portion of the log is
reproduced for clarity).

[root@bookingwire sridhar]# varnishlog
12 StatSess c 34869 0 1 0 0 0 0 0 0
12 SessionOpen c 34870 :80
12 ReqStart c 34870 1343640981
12 RxRequest c GET
12 RxURL c /
12 RxProtocol c HTTP/1.1
12 RxHeader c Host:
12 RxHeader c User-Agent: Mozilla/5.0 (X11; Ubuntu;
Linux i686; rv:27.0) Gecko/20100101 Firefox/27.0
12 RxHeader c Accept: text/html,application/
12 RxHeader c Accept-Language: en-US,en;q=0.5
12 RxHeader c Accept-Encoding: gzip, deflate
12 RxHeader c Referer:
12 RxHeader c Cookie: __zlcmid=OAdeVVXMB32GuW
12 RxHeader c Connection: keep-alive
12 FetchError c no backend connection
12 VCL_call c error
12 TxProtocol c HTTP/1.1
12 TxStatus c 503
12 TxResponse c Service Unavailable
12 TxHeader c Server: Varnish
12 TxHeader c Retry-After: 0
12 TxHeader c Content-Type: text/html; charset=utf-8
12 TxHeader c Content-Length: 686
12 TxHeader c Date: Thu, 03 Apr 2014 09:08:16 GMT
12 TxHeader c X-Varnish: 1343640981
12 TxHeader c Age: 0
12 TxHeader c Via: 1.1 varnish
12 TxHeader c Connection: close
12 Length c 686
Resolve the error and you should have Varnish running.
But that isn't enough; we should check if it's caching the
pages. Fortunately, the folks at the URL below have
made it simple for us.
Check if Varnish is serving pages
Visit the site, provide your URL/
IP address and you should see your Gold Star! (See Figure
3.) If you don't, but instead see other messages, it means that
Varnish is running but not caching.
Then you should look at your code and ensure that it
sends the appropriate headers. If you are using a content
management system, particularly Drupal, check the
additional parameters in the VCL file and set them correctly.
You also have to enable caching in the Performance page.
Running the tests
Running Pingdom tests showed improved response times
of 2.14 seconds. If you noticed, there was an improvement
in the response time in spite of the payload of
the page increasing from 2.9MB to 4.1MB. If you are
wondering why it increased, remember, we switched the
site to a new theme.
Apache Bench reports better figures, at 744.722 ms.
Configuring client IP forwarding
Check the IP address for each request in the access logs of
your Web servers. For NGINX, the access logs are available
at /var/log/nginx, and for Apache, they are available at /var/
log/httpd or /var/log/apache2, depending on whether you are
running CentOS or Ubuntu.
It's not surprising to see the same IP address (that of the
Varnish machine) for each request. Such a configuration will
throw all Web analytics out of gear. However, there is a way
out. If you run NGINX, try out the following procedure.
Determine the NGINX configuration that you currently run by
executing the command below:

[root@bookingwire sridhar]# nginx -V

Look for --with-http_realip_module. If this is
available, add the following to your NGINX configuration file
in the http section. Remember to replace the IP address with
that of your Varnish machine. If Varnish and NGINX run on
the same machine, do not make any changes.
real_ip_header X-Forwarded-For;
Restart NGINX and check the logs once again. You will
see the client IP addresses.
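Put together, the http-section addition could be sketched as below; set_real_ip_from is the realip module's companion directive, and the address shown is only a placeholder for your Varnish machine's IP:

```nginx
http {
    # Trust X-Forwarded-For only when it comes from the Varnish machine.
    set_real_ip_from;   # placeholder: your Varnish IP
    real_ip_header X-Forwarded-For;
}
```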
If you are using Drupal, then include the following line in
settings.php:

$conf['reverse_proxy_header'] = 'HTTP_X_FORWARDED_FOR';
Other Varnish tools
Varnish includes several tools to help you as an administrator.
varnishstat -1 -f n_lru_nuked: This shows the number of
objects nuked from the cache.
varnishtop: This reads the logs and displays the most
frequently accessed URLs. With a number of optional flags, it
can display a lot more information.
varnishhist: Reads the shared memory logs, and displays
a histogram showing the distribution of the last N requests on
the basis of their processing time.
varnishadm: A command line utility for Varnish.
varnishstat: Displays the statistics.
Figure 3: Varnish status result
Figure 4: Pingdom test result after configuring Varnish (the site tested faster than 68 per cent of all tested websites)
Dealing with SSL: SSL-offloader, SSL-
accelerator and SSL-terminator
SSL termination is probably the most misunderstood term
in the whole mix. The mechanism of SSL termination is
employed in situations where the Web traffic is heavy.
Administrators usually have a proxy handle SSL
requests before they hit Varnish. The SSL requests are
decrypted and the unencrypted requests are passed to
the Web servers. This reduces the load
on the Web servers by moving the decryption and other
cryptographic processing upstream.
Since Varnish by itself does not process or understand
SSL, administrators employ additional mechanisms to
terminate SSL requests before they reach Varnish. Pound
and Stud (https://github.com/bumptech/stud) are reverse
proxies that handle SSL termination. Stunnel is a program
that acts as a wrapper and can be deployed in front of Varnish.
Alternatively, you could also use another NGINX in front of
Varnish to terminate SSL.
However, in our case, since only the sign-in pages
required SSL connections, we let Varnish pass all SSL
requests to our backend Web server.
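That pass-through can be sketched in the VCL as follows (a minimal sketch of the choice described above; it simply bypasses the cache for anything arriving on the SSL port):

```vcl
sub vcl_recv {
    # Do not cache SSL traffic; hand it straight to the backend.
    if (server.port == 443) {
        return (pass);
    }
}
```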
Additional repositories
There are other repositories from where you can get the latest
release of Varnish:

rpm --nosignature -i varnish-release-3.0-1.el6.noarch.rpm

If you have the Remi repo and the Varnish cache repo
enabled, install Varnish by specifying the repository
explicitly:

yum install varnish --enablerepo=epel
yum install varnish --enablerepo=varnish-3.0
Our experience has been that Varnish reduces the number
of requests sent to the NGINX server by caching assets, thus
improving page response times. It also acts as a failover
mechanism if the Web server fails.
We had over 55 JavaScript files (two as part of the theme
and the others as part of the modules) in Drupal, and we
aggregated JavaScript by setting the flag in the Performance
page. We found a 50 per cent drop in the number of requests;
however, some of the JavaScript files were not
loaded on a few pages, and we had to disable the aggregation.
This is something we are investigating. Our recommendation
is not to choose 'Aggregate JavaScript files' in your Drupal
CMS. Instead, use the Varnish module.
The module allows you to set long object lifetimes
(Drupal doesn't set it beyond 24 hours), and use Drupal's
existing cache expiration logic to dynamically purge
Varnish when things change.
You can scale this architecture to handle higher loads
either vertically or horizontally. For vertical scaling, resize
your VPS to include additional memory and make that
available to Varnish using the -s directive.
To scale horizontally, i.e., to distribute the requests between
several machines, you could add additional Web servers and
update the round robin directives in the VCL fle.
You can take it a bit further by including HAProxy
right upstream and have HAProxy route requests to
Varnish, which then serves the content or passes it
downstream to NGINX.
To remove a Web server from the round robin league,
you can improve upon the example that we have mentioned
by writing a small PHP snippet to automatically shut down
or exit() if some checks fail.
Figure 5: Apache Bench result after configuring Varnish

By: Sridhar Pandurangiah
The author is the co-founder and director of Sastra
Technologies, a start-up engaged in providing EDI solutions on
the cloud. He can be contacted at sridhar@sastratechnologies.in.
He maintains a technical blog.
Use Wireshark to Detect ARP Spoofing
The first two articles in the series on Wireshark, which appeared in the July and August 2014
issues of OSFY, covered a few simple protocols and various methods to capture traffic in a
switched environment. This article describes an attack called ARP spoofing and explains
how you could use Wireshark to capture it.

Imagine an old Hindi movie where the villain and his
subordinate are conversing over the telephone, and the
hero intercepts this call to listen in on their conversation:
a perfect man in the middle (MITM) scenario. Now
extend this to the network, where an attacker intercepts
communication between two computers.
Here are two possibilities with respect to what an attacker
can do to intercepted traffic:
1. Passive attacks (also called eavesdropping, or only
listening to the traffic): These can reveal sensitive
information such as clear text (unencrypted) login IDs and
passwords.
2. Active attacks: These modify the traffic and can be used
for various types of attacks such as replay, spoofing, etc.
An MITM attack can be launched against cryptographic
systems, networks, etc. In this article, we will limit our
discussions to MITM attacks that use ARP spoofing.

ARP spoofing
Joseph Goebbels, Nazi Germany's minister for propaganda,
famously said, "If you tell a lie big enough and keep repeating
it, people will eventually come to believe it. The lie can
be maintained only for such time as the state can shield
the people from the political, economic and/or military
consequences of the lie. It thus becomes vitally important
for the state to use all of its powers to repress dissent, for the
truth is the mortal enemy of the lie, and thus by extension, the
truth is the greatest enemy of the state."
So let us interpret this quote by a leader of the
infamous Nazi regime from the perspective of the ARP
protocol: if you repeatedly tell a device who a particular
MAC address belongs to, the device will eventually
believe you, even if this is not true. Further, the device
will remember this MAC address only as long as you keep
telling the device about it. Thus, not securing an ARP
cache is dangerous to network security.

Note: From the network security professional's
view, it becomes absolutely necessary to monitor ARP
traffic continuously and limit it to below a threshold. Many
managed switches and routers can be configured to monitor
and control ARP traffic below a threshold.

An MITM attack is easy to understand in this
context. Attackers trying to listen to traffic between any two
devices, say a victim's computer system and a router, will
launch an ARP spoofing attack by sending unsolicited (what
this means is an ARP reply packet sent out without receiving
an ARP request) ARP reply packets with the following
source addresses:
Towards the victim's computer system: router IP address
and attacker's PC MAC address;
Towards the router: victim's computer IP address and
attacker's PC MAC address.
After receiving such packets continuously, due to ARP
protocol characteristics, the ARP caches of the router and the
victim's PC will be poisoned as follows:
Router: the MAC address of the attacker's PC registered
against the IP address of the victim;
Victim's PC: the MAC address of the attacker's PC
registered against the IP address of the router.
The Ettercap tool
ARP spoofing is the most common type of MITM attack, and
can be launched using the Ettercap tool available under Linux.
A few sites claim to have Windows executables; I have never
tested these, though. You may install the tool on any Linux
distro, or use distros such as Kali Linux, which has it bundled.
The tool has command line options, but its GUI is easier,
and can be started by using:

ettercap -G
Launch the MITM ARP spoofing attack by using the Ettercap
menus (Figure 1) in the following sequence (words in italics
indicate Ettercap menus):
Sniff (unified sniffing) selects the interface to be
sniffed (for example, eth0 for a wired network).
Hosts scans for hosts. It scans for all active IP addresses
in the eth0 network.
The hosts list displays the list of scanned hosts.
The required hosts are added to Target1 and Target2.
An ARP spoofing attack will be performed so as to read
traffic between all hosts selected under Target1 and
Target2.
Targets gives the current targets. It verifies selection of
the correct targets.
MITM ARP poisoning: Sniff remote connections will
start the attack.
The success of the attack can be confirmed as follows:
In the router, check the ARP cache (for a Cisco router, the
command is show ip arp).
In the victim's PC, use the arp -a command. Figure 2
gives the output of the command before and after a
successful ARP spoofing attack.
The attacker's PC captures traffic using Wireshark to
check unsolicited ARP replies. Once the attack is successful,
the traffic between the two targets will also be captured. Be
careful: if traffic from the victim's PC contains clear text
authentication packets, the credentials could be revealed.
Figure 1: Ettercap menus
Figure 2: Successful ARP poisoning
Figure 3: Wireshark capture on the attacker's PC: ARP packets
Figure 4: Wireshark capture on the attacker's PC: packets sniffed
from the victim's PC and router
Note that Wireshark shows information such as 'Duplicate
use of IP detected' under the Info column once the attack is
successful.
Here is how the actual packet
travels and is captured after a
successful ARP poisoning attack:
When the packet from the
victim's PC starts for the router,
at Layer 2, the poisoned MAC
address of the attacker (instead
of the original router MAC)
is inserted as the target MAC;
thus the packet reaches the
attacker's PC.
The attacker sees this packet
and forwards the same to the
router with the correct MAC
address.
The reply from the router is
logically sent towards the spoofed destination MAC
address of the attacker's system (rather than the victim's
PC). It is captured and forwarded by the attacker to the
victim's PC.
In between, the sniffer software, Wireshark, which is
running on the attacker's PC, reads this traffic.
Here are various ways to prevent ARP spoofing attacks:
Monitor arpwatch logs on Linux.
Use static ARP commands on Windows and Ubuntu,
as follows:
Windows: arp -s DeviceIP DeviceMAC
Ubuntu: arp -i eth0 -s DeviceIP DeviceMAC
Control ARP packets on managed switches.
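As a quick additional check on Linux, a MAC address answering for more than one IP in the ARP cache is a classic symptom of spoofing. The helper below is our own sketch, reading the kernel's ARP table:

```shell
# Print MAC addresses that appear against more than one IP in the
# kernel ARP table (column 4 of /proc/net/arp is the hardware address).
duplicate_macs() {
    awk 'NR > 1 { print $4 }' "${1:-/proc/net/arp}" | sort | uniq -d
}
duplicate_macs
```

Any MAC printed here deserves investigation; an empty result means no duplicate entries were found.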
Can MITM ARP spoofing be put to fruitful use?
Definitely! Consider capturing packets from a system
suspected of malware (virus) infection in a switched
environment. There are two ways to do this: use a
wiretap or MITM ARP spoofing. Sometimes, you may
not have a wiretap handy, or may not want the system
to go offline even for the time required to connect the
wiretap. Here, MITM ARP spoofing will definitely
serve the purpose.
Note: This attack specifically targets
OSI Layer 2, the data link layer; thus, it can be executed
only from within your network. Be assured, this attack
cannot be used from outside the local network to
sniff packets between your computer and your bank's
Web server; the attacker must be within the local
network.
Wireshark feature called capture flters.
We did go through the basics of display filters in
Figure 5: Wiresharks capture filter
No broadcast and no Multicast
IP only
IP address
IPX only
TCP only
UDP only
By: Rajesh Deodhar
The author has been an IS auditor and network security
consultant-trainer for the last two decades. He is a BE in Industrial
Electronics, and holds CISA, CISSP, CCNA and DCL certications.
Please feel free to contact him at
Packets captured using the test scenarios described in
this series of articles are capable of revealing sensitive
information such as login names and passwords. Using
ARP spoong, in particular, will disturb the network
temporarily. Make sure to use these techniques only in
a test environment. If at all you wish to use them in a
live environment, do not forget to avail explicit written
permission before doing so.
the previous article. But, in a busy network, capturing
all traffic and using display filters to see only the desired
traffic may require a lot of effort. Wireshark's capture
filters provide a way out.
In the beginning, before selecting the interface, you can
click on Capture Options and use capture filters to capture
only the desired traffic. Click on the Capture filter button to
see various filters, such as ARP, No ARP, TCP only, UDP only,
traffic from specific IP addresses, and so on. Select the desired
filter and Wireshark will capture only the defined traffic.
For example, MITM ARP spoofing can be captured
using the ARP filter from capture filters instead of display
filtering the entire captured traffic.
Keep a watch on this column for exciting Wireshark
Admin Insight
Make Your Own PBX with Asterisk
This article, the first of a multi-part series, familiarises readers with Asterisk, which is a
software implementation of a private branch exchange (PBX).

Asterisk is a revolutionary open source platform
started by Mark Spencer that has shaken up the
telecom world. This series is meant to familiarise
you with it, and educate you enough to be a part of it in
order to enjoy its many benefits.
If you are a technology freak, you will be able to make
your own PBX for your office or home after going through
this series. As a middle level manager, you will be able to
guide a techie to do the job, while senior level managers with
a good appreciation of the technology and the minimal costs
involved would be in a position to direct somebody to set up
an Asterisk PBX. If you are an entrepreneur, you can adopt
one of the many business models built around Asterisk. As you
will see, it is worthwhile to at least evaluate the option.
In 1999, Mark Spencer of Digium fame started a Linux
technical support company with US$ 4,000. Initially, he had
to be very frugal, so buying one of those expensive PBXs
was unthinkable. Instead, he started programming a PBX for
his requirements. Later, he published the software as open
source and a lot of others joined the community to further
develop the software. The rest is history.

The statistics
Today, Asterisk claims to have 2 million downloads
every year, and is running on over 1 million servers,
with 1.3 million new endpoints created annually. A
2012 statistic by Eastern Management claims that 18
per cent of all PBX lines in North America are open
source-based, and the majority of them run on Asterisk.
Indian companies have also been adopting Asterisk
for a few years now. The initial thrust came from international
call centres. A large majority of the smaller call centres
(50-100 seats) use 'Vicidial', another open source
application based on Asterisk. IP PBX penetration in the
Indian market is not very high due to certain regulatory
misinterpretations. Anyhow, this unclear environment is
gradually gaining clarity, and very soon we will see
astronomical growth of Asterisk in the Indian market.
The call centre boom also led to the development of the
Asterisk ecosystem, comprising Asterisk-based product
companies, software supporters, hardware resellers, etc,
across India. This presents a huge opportunity for
entrepreneurs.
Some terminology
Before starting, I would like to
introduce some basic terms for the
benefit of readers who are novices in this field. Let us start
with the PBX or private branch exchange, which is the
heart of all corporate communication. All the telephones
seen in an office environment are connected to the PBX,
which in turn connects you to the outside world. The
internal telephones are called subscribers and the external
lines are called trunk lines.
The trunk lines connect the PBX to the outside world
or the PSTN (Public Switched Telephone Network).
Analogue trunks (FXO: Foreign eXchange Office) are
based on very old analogue technology, which is still
in use in our homes and in some companies. Digital
trunk technology or ISDN (Integrated Services Digital
Network) evolved in the '80s with mainly two types of
connections: BRI (Basic Rate Interface) for SOHO
(small office/home office) use, and PRI (Primary Rate
Interface) for corporate use. In India, analogue trunks
are used for SOHO trunking, but BRI is no longer used
at all. PRI, however, is quite popular among companies.
IP/SIP (Internet Protocol/Session Initiation Protocol)
trunking has been used by international call centres for
quite some time. Now, many private providers like Tata
Telecom have started offering SIP trunking for domestic
calls as well. The option of GSM trunking through a GSM
gateway using SIM cards is also quite popular, due to the
flexibility offered in costs, prepaid options and network
coverage.
The users connected to the PBX are called subscribers.
Analogue telephones (FXS: Foreign eXchange Subscriber)
are still very commonly used and are the cheapest. As
Asterisk is an IP PBX, we need a VoIP FXS gateway to
convert the IP signals to analogue signals. Asterisk supports
IP telephones, mainly using SIP.
Nowadays, Wi-Fi clients are available even for
smartphones, which enable the latter to work like extensions.
These clients bring a revolutionary transformation to the
telephony landscape, analogous to paperless offices and
telephone-less desks. The same smartphone used to make
calls over GSM networks becomes a dual-purpose phone,
also working like a desk extension. Just for a minute, consider
the limitless possibilities enabled by this new transformed
extension phone.
Extension roaming: Employees can roam about
anywhere in the office, participate in a conference, visit
a colleague, doctors can visit their in-patients, and yet
receive calls as if they were seated at their desks.
External extensions: The employees could be at home,
at a friend's house, or even out making a purchase, and
still receive the same calls, as if at their desks.
Increased call accountability: Calls can be recorded and
monitored for quality or security purposes at the PBX.
Lower telephone costs: The volume of calls passing
through the PBX makes it possible to negotiate with the
service provider for better rates.
The advantages that a roaming extension brings
are many, which we will explore in more detail in
subsequent editions.
Let us look into the basics of Asterisk. "Asterisk is like a box of Lego blocks for people who want to create communications applications. It includes all the building blocks needed to create a PBX, an IVR system, a conference bridge and virtually any other communications app you can imagine," says an excerpt from the Asterisk website.
Asterisk is actually a piece of software. In very simple
and generic terms, the following are the steps required to
create an application based on it:
1. Procure standard hardware.
2. Install Linux.
3. Download Asterisk software.
4. Install Asterisk.
5. Configure it.
6. Procure hardware interfaces for the trunk line and configure them.
7. Procure hardware for subscribers and configure them.
8. You're then ready to make your calls.
Procure standard desktop or server hardware, based on Pentium, Xeon, i3, etc. RAM is an important factor, and could be 2GB, 4GB or 8GB; these two factors decide the number of concurrent calls. A hard disk capacity of 500GB or 1TB is mainly for space to store voice files for VoiceMail or VoiceLogger. The hard disk's speed also influences the number of concurrent calls.
The next step is to choose a suitable OS; Fedora, Debian, CentOS or Ubuntu are well suited for this purpose. After this, the Asterisk software may be downloaded from the Asterisk website. Either the newest LTS (Long Term Support) release or the latest standard version can be downloaded. LTS versions are released once in four years. They are more stable, but have fewer features than the standard version, which is released once a year. Once the software is downloaded, the installation may be carried out as per the instructions provided. We'll go into the details of the installation in later sessions.
The download page also offers the option to download AsteriskNow, which is an ISO image of Linux, Asterisk and the FreePBX GUI. If you prefer a very quick and simple installation without much flexibility, you may choose this variant.
Mark Spencer, founder of Asterisk
After the installation, one needs to create the trunks and users, and set up some more features to be able to start using the system. Administrators can make these configurations directly in the dial plan, or use GUIs like FreePBX, which make administration easy.
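To give an idea of what configuring a subscriber and a dial plan entry looks like, here is a minimal sketch; the extension number, secret and context names are invented for illustration, not taken from this article:

```ini
; sip.conf - define a SIP subscriber (illustrative values only)
[6001]
type=friend
secret=ChangeMe6001
host=dynamic
context=internal

; extensions.conf - a minimal dial plan entry for that subscriber
[internal]
exten => 6001,1,Dial(SIP/6001,20)        ; ring the phone for 20 seconds
exten => 6001,n,Voicemail(6001@default,u) ; fall back to voice mail
exten => 6001,n,Hangup()
```

A GUI like FreePBX generates equivalent entries behind the scenes; editing the files directly gives finer control.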
Depending on the type of trunk chosen, we need to
procure hardware. If we are connecting a normal analogue
line, an FXO card with one port needs to be procured, in PCI
or PCIe format, depending on the slots available on the server.
After inserting the card, it has to be configured. Similarly, if
you have to connect analogue phones, you need to procure
FXS gateways. IP phones can be directly connected to the
system over the LAN.
Exploring the PBX further, you will be astonished
by the power of Asterisk. It comes with a built-in voice logger, which can be customised to record either all calls or those from selected people. In most proprietary PBXs,
this would have been an additional component. Asterisk not
only provides a voice mail box, but also has the option to
convert the voice mail to an attachment that can be sent to
you as an email. The Asterisk IVR is very powerful; it has
multiple levels, digit collection, database and Web-service
integration, and speech recognition.
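To give a flavour of the multi-level IVR described above, a dial plan sketch might look like the following; the context names and prompt file names are invented for illustration:

```ini
; extensions.conf - a two-level IVR sketch (hypothetical names)
[ivr-main]
exten => s,1,Answer()
exten => s,n,Background(welcome-prompt)  ; play the menu, collect a digit
exten => 1,1,Goto(ivr-sales,s,1)         ; digit 1 -> sales sub-menu
exten => 2,1,Dial(SIP/6001,20)           ; digit 2 -> reach an extension
exten => t,1,Hangup()                    ; timeout with no input

[ivr-sales]
exten => s,1,Background(sales-prompt)
exten => 9,1,Goto(ivr-main,s,1)          ; 9 returns to the main menu
```

Each additional level is just another context, which is how Asterisk dial plans scale to deep menus.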
There are also lots of applications based on Asterisk
like Vicidial, which is a call-centre suite for inbound
and outbound dialling. For the latter, one can configure
campaigns with lists of numbers, dial these numbers
in predictive dialling mode and connect to the agents.
Similarly, inbound dialling can also be configured with
multiple agents, and the calls routed based on multiple
criteria like the region, skills, etc.
Asterisk also easily integrates with multiple enterprise
applications (like CRM and ERP) over CTI (computer
telephony interfaces) like TAPI (Telephony API) or by using
simple URL integration.
O'Reilly has a book titled 'Asterisk: The Future of Telephony', which can be downloaded. I would like to take you through the power of Asterisk in subsequent issues, so that you and your network can benefit from this remarkable
product, which is expected to change the telephony landscape
of the future.
By: Devasia Kurian
The author is the founder and CEO of *astTECS.
Please share your feedback/thoughts/views via email.
Open Gurus How To
Choose Fat32 (0x0c) as the file system. You can also choose the ext2/ext3 file systems, but some OSs will not load from them. So, Fat32 is the best choice for most of the ISOs.
Now download grub4dos-0.4.5c (not grub4dos-0.4.6a) from the grub4dos downloads page and extract it on the desktop.
Next, install grub4dos on the MBR of your USB stick with a zero-second time-out, by typing the following command at the terminal:

sudo ~/Desktop/grub4dos-0.4.5c/ --time-out=0 /dev/sdb

Note: You can change the path to your grub4dos folder. sdb is your USB drive; it can be identified with the df command in a terminal, or by using the gparted or disk utility tools.
Copying Easy2Boot files to USB
Your pen drive is ready to boot, but we need menu files,
which are necessary to detect the .ISO files in your USB.
Systems administrators and other Linux enthusiasts use multiple CDs or DVDs to boot and install operating systems on their PCs. But it is somewhat difficult and costly to maintain one CD or DVD for each OS (ISO image file) and to carry around all these optical disks; so, let's look at the alternative: a multi-boot USB.
The Internet provides so many ways (in Windows and
in Linux) to convert a USB drive into a bootable USB. In
Most of these methods create a bootable USB that contains a single OS. So, if you want to change the OS (ISO image), you have to format the USB drive. To avoid formatting the USB each time the ISO is changed, use Easy2Boot. In my case, the RMPrepUSB website saved me from unnecessarily formatting the USB drive by introducing the Easy2Boot option. Easy2Boot is open source: it consists of plain-text batch files and open source grub4dos utilities. It has no proprietary software.
Making the USB drive bootable
To make your USB bootable, just connect it to your Linux system. Open the disk utility or gparted tool and format it as Fat32.
This DIY article is for systems admins and software hobbyists, and teaches them
how to create a bootable USB that is loaded with multiple ISOs.
How to Make Your USB Boot
with Multiple ISOs
Figure 1: Folders for different OSs
Figure 2: Easy2Boot OS selection menu
Figure 3: Ubuntu boot menu
The menu (.mnu) files and other boot-related files can
be downloaded from the Easy2boot website. Extract
the Easy2Boot file to your USB drive and you can
observe the different folders that are related to different
operating systems and applications. Now, just place the
corresponding .ISO file in the corresponding folder. For
example, all the Linux-related .ISO files should be placed
in the Linux folder, all the backup-Linux related files
should be placed in the corresponding folder, utilities
should be placed in the utilities folder, and so on.
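The sorting step above can be sketched as follows; the mount point and ISO file name here are stand-ins (a temporary directory and an empty file, so the sketch is runnable), and on a real drive you would use the folders Easy2Boot actually extracted:

```shell
# Sketch of sorting ISOs into Easy2Boot folders. /tmp/e2b-demo stands in
# for your USB mount point; the ISO name is an example, not a real image.
USB=/tmp/e2b-demo
mkdir -p "$USB/_ISO/LINUX" "$USB/_ISO/UTILITIES"
# pretend we have a Linux installer ISO lying around
touch /tmp/ubuntu-14.04-desktop-amd64.iso
# Linux-related ISOs go into the Linux folder
cp /tmp/ubuntu-14.04-desktop-amd64.iso "$USB/_ISO/LINUX/"
ls "$USB/_ISO/LINUX"
```

Adding or removing an OS later is just a copy or delete in the matching folder, followed by the defrag step described below.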
Your USB drive is now ready to be loaded with any
(almost all) Linux image files, backup utilities and some
other Windows related .ISOs without formatting it. After
placing your required image files, either installation
ISOs or live ISOs, you need to defragment the folders in
the USB drive. To defrag your USB drive, download the defragfs-1.1.1.gz file from the defragfs download page and extract it to the desktop. Now run the following commands at the terminal:

sudo umount /dev/sdb1

Here, sdb1 is the partition on my USB drive which has the E2B files.

sudo mkdir ~/Desktop/usb && sudo mount /dev/sdb1 ~/Desktop/usb
sudo perl ~/Desktop/defragfs ~/Desktop/usb -f
That's it. Your USB drive is ready with a number of ISO files to boot on any system. Just run the defragfs command every time you modify (add or remove) the ISO files on the USB drive, to make all the files in the drive contiguous.
Using the QEMU emulator for testing
After completing the final stage, test how well your USB drive boots with lots of .ISO files loaded on it, using the QEMU tool. Alternatively, you can choose any of the virtualisation tools like VirtualBox or VMware. We used QEMU (it is easy but somewhat slow) on our Linux machine by typing the following command at the terminal:

sudo qemu -m 512M /dev/sdb
Note: The loading of every .ISO file in the corresponding folder is based only on the .mnu file for that .ISO. So, by creating your own .mnu file you can add your own flavour to the USB menu list. For further details and help regarding .mnu file creation, just visit the Easy2Boot website.
Your USB will boot and the Easy2Boot OS selection menu
will appear. Choose the OS you want, which is placed under
the corresponding folder. You can use your USB in real
time, and can add or remove the .ISOs in the corresponding
folders simply by copy-pasting. You can use the same USB drive for copying documents and other files, provided you keep all the files that belong to Easy2Boot contiguous.
By: Gaali Mahesh and Nagaram Suresh Kumar
The authors are assistant professors at VNITSW (Vignan's Nirula Institute of Technology and Science for Women, Andhra Pradesh). They blog about tech tricks and their practical experiences with open source.
Open Gurus Let's Try
Since this process is similar for most target boards, you can apply these techniques on other boards too.
Device tree
Flattened Device Tree (FDT) is a data structure that describes the hardware; it originates from Open Firmware. With the device tree approach, the kernel no longer contains the hardware description, which is located in a separate binary called the device tree blob (dtb) file. So, one compiled kernel can support various hardware configurations within a wider architecture family. For example, the same kernel built for the OMAP family can work with various targets like the BeagleBoard, BeagleBone, PandaBoard, etc, with different dtb files. The boot loader should be customised to support this, as two binaries (the kernel image and the dtb file) are to be loaded in memory. The boot loader passes the hardware description to the kernel in the form of a dtb file. Recent kernel versions come with a built-in device tree compiler, which can generate all the dtb files related to the selected architecture family from device tree source (dts) files. Using the device tree for ARM has become mandatory for all new SoCs supported by recent kernel versions.
You may have heard of the many embedded target
boards available today, like the BeagleBoard,
Raspberry Pi, BeagleBone, PandaBoard, Cubieboard,
Wandboard, etc. But once you decide to start development for
them, the right hardware with all the peripherals may not be
available. The solution to starting development on embedded
Linux for ARM is by emulating hardware with QEMU, which
can be done easily without the need for any hardware. There are no risks involved, either.
QEMU is an open source emulator that can emulate
the execution of a whole machine with a full-fledged OS running. QEMU supports various architectures, CPUs and target boards. To start with, let's emulate the Versatile Express
Board as a reference, since it is simple and well supported by
recent kernel versions. This board comes with the Cortex-A9
(ARMv7) based CPU.
In this article, I would like to describe the process of cross compiling the Linux kernel for the ARM architecture with device tree support. It covers the entire process, from boot loader to file system, with SD card support.
This article is intended for those who would like to experiment with the many embedded
boards in the market but do not have access to them for one reason or the other. With the
QEMU emulator, DIY enthusiasts can experiment to their hearts' content.
How to Cross Compile the Linux Kernel
with Device Tree Support
Building mkimage
The mkimage command is used to create images for use with
the u-boot boot loader.
Here, we'll use this tool to transform the kernel image to
be used with u-boot. Since this tool is available only through
u-boot, we need to go for a quick build of this boot loader
to generate mkimage. Download a recent stable version of
u-boot (tested on u-boot-2014.04.tar.bz2) from
tar -jxvf u-boot-2014.04.tar.bz2
cd u-boot-2014.04
make tools-only
Now, copy mkimage from the tools directory to any
directory under the standard path (like /usr/local/bin) as a
super user, or set the path to the tools directory each time,
before the kernel build.
Building the Linux kernel
Download the most recent stable version of the kernel source from kernel.org (tested with linux-3.14.10.tar.xz):

tar -xvf linux-3.14.10.tar.xz
cd linux-3.14.10
make mrproper  #clean all built files and configuration files
make ARCH=arm vexpress_defconfig  #default configuration for the given board
make ARCH=arm menuconfig  #customise the configuration
Then, to customise the kernel configuration (Figure 1), follow
the steps listed below:
1) Set a personalised string, say -osfy-fdt, as the local
version of the kernel under general setup.
2) Ensure that ARM EABI and old ABI compatibility are
enabled under kernel features.
3) Under device drivers--> block devices, enable RAM disk
support for initrd usage as static module, and increase
default size to 65536 (64MB).
You can use the arrow keys to navigate between the various options.
Building QEMU from sources
You may obtain pre-built QEMU binaries from your distro repositories or build QEMU from sources, as follows. Download the recent stable version of QEMU, say qemu-2.0.tar.bz2, then extract and build it:

tar -jxvf qemu-2.0.tar.bz2
cd qemu-2.0
./configure --target-list=arm-softmmu,arm-linux-user --prefix=/opt/qemu-arm
make && make install
You will observe commands like qemu-arm, qemu-
system-arm, qemu-img under /opt/qemu-arm/bin.
Among these, qemu-system-arm is useful to emulate the
whole system with OS support.
Preparing an image for the SD card
QEMU can emulate an image file as storage media in the form of an SD card, flash memory, hard disk or CD drive. Let's create an image file using qemu-img in raw format and create a FAT file system in it, as follows. This image file acts like a physical SD card for the actual target board:

qemu-img create -f raw sdcard.img 128M
#optionally, you may create a partition table in this image
#using tools like sfdisk, parted
mkfs.vfat sdcard.img
#mount this image under some directory and copy the required files
mkdir /mnt/sdcard
mount -o loop,rw,sync sdcard.img /mnt/sdcard
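As an aside, the raw image qemu-img produces is just a zero-filled file, so if qemu-img is not at hand, plain dd creates an equivalent container (the file name and location here are examples):

```shell
# Create a 128MB raw image with dd instead of qemu-img; QEMU only needs
# a plain file of the right size to attach as the emulated SD card.
dd if=/dev/zero of=/tmp/sdcard-demo.img bs=1M count=128 status=none
ls -l /tmp/sdcard-demo.img
```

You would then run mkfs.vfat on it and mount it exactly as shown above.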
Setting up the toolchain
We need a toolchain, which is a collection of various cross
development tools to build components for the target
platform. Getting a toolchain for your Linux kernel is
always tricky, so until you are comfortable with the process
please use tested versions only. I have tested with pre-built
toolchains from the Linaro organisation, which can be got
from the following link
gnueabihf-4.8-2014.04_linux.tar.xz or any latest stable
version. Next, set the path for cross tools under this toolchain,
as follows:
tar -xvf gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux.
tar.xz -C /opt
export PATH=/opt/gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_
You will notice various tools like gcc, ld, etc, under /opt/
gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux/bin with
the prefx arm-linux-gnueabihf-
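Before starting the kernel build, it is worth sanity-checking that the cross tools are actually on the PATH; this sketch uses the toolchain directory from this article (adjust it to your own extraction path):

```shell
# Verify the cross compiler is reachable before building the kernel.
TOOLCHAIN=/opt/gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux/bin
export PATH="$TOOLCHAIN:$PATH"
if command -v arm-linux-gnueabihf-gcc >/dev/null; then
    arm-linux-gnueabihf-gcc --version | head -n 1
else
    echo "cross compiler not found - check the toolchain path"
fi
```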
Figure 1: Kernel configuration, main menu
(Use the space bar to select among the various states: blank, m or *.)
4) Make sure devtmpfs is enabled under the Device Drivers
and Generic Driver options.
Now, let's go ahead with building the kernel, as follows:

#generate the kernel image as zImage and the necessary dtb files
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage dtbs
#transform zImage for use with u-boot
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- uImage
#copy the necessary files to the SD card
cp arch/arm/boot/zImage /mnt/sdcard
cp arch/arm/boot/uImage /mnt/sdcard
cp arch/arm/boot/dts/*.dtb /mnt/sdcard
#build dynamic modules and copy them to a suitable destination
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- modules
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- modules_install INSTALL_MOD_PATH=<mount point of rootfs>

You may skip the last two steps for the moment, as the given configuration steps avoid dynamic modules. All the necessary modules are configured as static.
Getting rootfs
We require a file system to work with the kernel we've built. Download a pre-built rootfs image, such as image-minimal-qemuarm.ext3, to test with QEMU, and copy it to the SD card (/mnt/sdcard), renaming it rootfs.img for easy usage. You may obtain the rootfs image from some other repository or build it from sources using Busybox.
Your first try
Let's boot this kernel image (zImage) directly, without u-boot,
as follows:
export PATH=/opt/qemu-arm/bin:$PATH
qemu-system-arm -M vexpress-a9 -m 1024 -serial stdio \
-kernel /mnt/sdcard/zImage \
-dtb /mnt/sdcard/vexpress-v2p-ca9.dtb \
-initrd /mnt/sdcard/rootfs.img -append root=/dev/ram0
In the above command, we are treating rootfs as an initrd image, which is fine when rootfs is small in size. You can connect larger file systems in the form of a hard disk or SD card. Let's try out rootfs through an SD card:
qemu-system-arm -M vexpress-a9 -m 1024 -serial stdio \
-kernel /mnt/sdcard/zImage \
-dtb /mnt/sdcard/vexpress-v2p-ca9.dtb \
-sd /mnt/sdcard/rootfs.img -append root=/dev/mmcblk0
In case the SD card image file holds a valid partition table, we need to refer to the individual partitions like /dev/mmcblk0p1, /dev/mmcblk0p2, etc. Since the current image file is not partitioned, we can refer to it by the device file name /dev/mmcblk0.
Building u-boot
Switch back to the u-boot directory (u-boot-2014.04), build u-boot as follows and copy it to the SD card:

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- vexpress_ca9x4_config
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf-
cp u-boot /mnt/sdcard

# you can go for a quick test of the generated u-boot as follows
qemu-system-arm -M vexpress-a9 -kernel /mnt/sdcard/u-boot -serial stdio

Let's ignore errors such as u-boot not being able to locate the kernel image or other suitable files.
The final steps
Let's boot the system with u-boot using the SD card image file, and make sure the QEMU PATH is not disturbed. Unmount the SD card image and then boot using QEMU.
umount /mnt/sdcard
Figure 2: Kernel configuration, RAM disk support
Figure 3: U-boot loading
qemu-system-arm -M vexpress-a9 -sd sdcard.img -m 1024
-serial stdio -kernel u-boot
You can stop autoboot by hitting any key within the time limit, and enter the following commands at the u-boot prompt to load rootfs.img, uImage and the dtb file from the SD card to suitable memory locations without overlapping. Also, set the kernel boot parameters using setenv, as shown below (here, 0x82000000 stands for the location of the loaded rootfs image and 8388608 is the size of the rootfs image).
Note: The following commands are internal to u-boot
and must be entered within the u-boot prompt.
fatls mmc 0:0 #list out partition contents
fatload mmc 0:0 0x82000000 rootfs.img  #note down the size of the image being loaded
I thank Babu Krishnamurthy, a freelance trainer, for his valuable inputs on embedded Linux and OMAP hardware during the course of my embedded journey. I am also grateful to C-DAC for the good support I've received.
By: Rajesh Sola
The author is a faculty member of C-DAC's Advanced Computing Training School, Pune, in the embedded systems domain.
Figure 4: Loading of kernel with FDT support
fatload mmc 0:0 0x80200000 uImage
fatload mmc 0:0 0x80100000 vexpress-v2p-ca9.dtb
setenv bootargs 'console=ttyAMA0 root=/dev/ram0 rw initrd=0x82000000,8388608'
bootm 0x80200000 - 0x80100000
Ensure a space before and after the '-' symbol in the above command.
Log in using root as the username and a blank password
to play around with the system.
I hope this article proves useful for bootstrapping with
embedded Linux and for teaching the concepts when there is
no hardware available.
Step 3: Open the Virtual Machine and open the Contiki
OS; then wait till the login screen appears.
Step 4: Input the password as 'user'; this shows the desktop of Ubuntu (Contiki).
Running the simulation
To run a simulation, Contiki comes with many prebuilt
modules that can be readily run on the Cooja simulator or on
the real hardware platform. There are two methods of opening
the Cooja simulator window.
Method 1: In the desktop, as shown in Figure 1, double
click the Cooja icon. It will compile the binaries for the first time and open the simulation windows.
Method 2: Open the terminal and go to the Cooja directory:
pradeep@localhost$] cd contiki/tools/cooja
pradeep@localhost$] ant run
You can see the simulation window as shown in Figure 2.
Creating a new simulation
To create a simulation in Contiki, go to the File menu -> New Simulation and name it as shown in Figure 3. Select any one radio medium (in this case, Unit Disk Graph Medium (UDGM): Distance Loss) and click Create.
Figure 4 shows the simulation window, which has the
following windows.
Network window: This shows all the motes in the
simulated network.
Timeline window: This shows all the events over time.
Mote output window: All serial port outputs will be
shown here.
Contiki is an open source operating system for connecting tiny, low-cost, low-power microcontrollers to the Internet. It is preferred because it supports
various Internet standards, rapid development, a selection
of hardware, has an active community to help, and has
commercial support bundled with an open source licence.
Contiki is designed for tiny devices and thus the memory
footprint is far less when compared with other systems.
It supports full TCP with IPv6, and the device's power management is handled by the OS. All the modules of Contiki are loaded and unloaded at run time; it implements protothreads, uses a lightweight file system, and supports various hardware platforms with sleepy routers (routers that sleep between message relays).
One important feature of Contiki is its use of the Cooja
simulator for emulation in case any of the hardware devices
are not available.
Installation of Contiki
Contiki can be downloaded as Instant Contiki, which is
available in a single download that contains an entire Contiki
development environment. It is an Ubuntu Linux virtual
machine that runs in VMware Player, and has Contiki and
all the development tools, compilers and simulators used in
Contiki development already installed. Most users prefer
Instant Contiki over the source code binaries. The current
version of Contiki (at the time of writing this post) is 2.7.
Step 1: Install VMware Player (which is free for
academic and personal use).
Step 2: Download the Instant Contiki virtual image, approximately 2.5GB in size, from the Contiki downloads page on SourceForge, and unzip it.
As the Internet of Things becomes more of a reality, Contiki, an open source OS, allows DIY enthusiasts to experiment with connecting tiny, low-cost microcontrollers to the Internet.
Notes window: User notes information can be put here.
Simulation control window: Users can start, stop and
pause the simulation from here.
Adding the sensor motes
Once the simulation window is opened, motes can be added to
the simulation using Menu: Motes-> Add Motes. Since we are
adding the motes for the frst time, the type of mote has to be
specifed. There are more than 10 types of motes supported by
Contiki. Here are some of them:
Contiki will generate object codes for these motes to run
on the real hardware and also to run on the simulator if the
hardware platform is not available.
Step 1: To add a mote, go to Add Motes -> Select any of the motes given above -> MicaZ mote. You will get the screen shown in Figure 5.
Step 2: Cooja opens the Create Mote Type dialogue
box, which gives the name of the mote type as well as the
Contiki application that the mote type will run. For this example, click the button on the right-hand side to choose the Contiki application and select the hello-world application. Then, click Compile.
Step 3: Once compiled without errors, click Create (Figure 5).
Step 4: Now the screen asks you to enter the number of
motes to be created and their positions (random, ellipse, linear
or manual positions).
In this example, 10 motes are created. Click the Start
button in the Simulation Control window and enable the
mote's Log Output: printf() statements in the View menu of
the Network window. The Network window shows the output
Hello World in the sensors. Figure 6 illustrates this.
This is a simple output of the Network window. If the real
MicaZ motes are connected, the Hello World will be displayed
in the LCD panel of the sensor motes. The overall output is
shown in Figure 7.
The output of the above Hello World application can also
be run using the terminal.
To compile and test the program, go into the hello-world directory:

pradeep@localhost $] cd /home/user/contiki/examples/hello-world
pradeep@localhost $] make
This will compile the Hello World program in the native
target, which causes the entire Contiki operating system and
the Hello World application to be compiled into a single
program that can be run by typing the following command
(depicted in Figure 8):
Figure 1: Contiki OS desktop
Figure 3: New simulation
Figure 2: Cooja compilation
Figure 4: Simulation window
pradeep@localhost$] ./hello-world.native
This will print out the following text:
Contiki initiated, now starting process scheduling
Hello, world
The program will then appear to hang, and must be
stopped by pressing Control + C.
Developing new modules
Contiki comes with numerous pre-built modules like
IPv6, IPV6 UDP, hello world, sensor nets, EEPROM,
IRC, Ping, Ping-IPv6, etc. These modules can run with
all the sensors irrespective of their make. Also, there
are modules that run only on specific sensors. For
example, the energy of a sky mote can be used only on
Sky Motes and gives errors if run with other motes like
Z1 or MicaZ.
Developers can build new modules for various sensor
motes that can be used with different sensor BSPs using
conventional C programming, and then be deployed in the
corresponding sensors.
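A new application of this kind typically needs only a small Makefile next to its C file; this sketch uses an invented project name and assumes the standard Contiki source layout:

```make
# Makefile for a hypothetical new Contiki application
CONTIKI_PROJECT = my-sensor-app
all: $(CONTIKI_PROJECT)

# Relative path from the project directory to the Contiki source tree
CONTIKI = ../..
include $(CONTIKI)/Makefile.include
```

Running make with a TARGET value (for example, make TARGET=micaz) would then build the same code for the chosen mote platform.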
By: T S Pradeep Kumar
The author is a professor at VIT University, Chennai. He has two websites, including http://www.pradeepkumar.org.
Figure 5: Mote creation and compilation in Contiki
Figure 6: Log output in motes
Figure 7: Simulation window of Contiki
Figure 8: Compilation using the terminal
Here is the C source code for the above Hello World application:

#include "contiki.h"
#include <stdio.h> /* For printf() */
PROCESS(hello_world_process, "Hello world process");
AUTOSTART_PROCESSES(&hello_world_process);
PROCESS_THREAD(hello_world_process, ev, data)
{
  PROCESS_BEGIN();
  printf("Hello, world\n");
  PROCESS_END();
}
The Internet of Things is an emerging technology that leads
to concepts like smart cities, smart homes, etc. Implementing
the IoT is a real challenge but the Contiki OS can be of great
help here. It can be very useful for deploying applications like
automatic lighting systems in buildings, smart refrigerators, wearable computing systems, domestic power management for homes and offices, etc.
Linux is versatile and full of choices. Every other day you wake up to hear about a new distro. Most of these are based on a more famous distro and use its package
manager. There are many package managers like Zypper and
Yum for Red Hat-based systems; Aptitude and apt-get for
Debian-based systems; and others like Pacman and Emerge. No
matter how many package managers you have, you may still run
into dependency hell or you may not be able to install multiple
versions of the same package, especially for tinkering and
testing. If you frequently mess up your system, you should try
out Nix, which is more than just another package manager.
Nix is a purely functional package manager. According
to its site, "Nix is a powerful package manager for Linux and other UNIX systems that makes package management reliable and reproducible. It provides atomic upgrades and roll-backs, side-by-side installation of multiple versions of a package, multi-user package management and easy set-up of build environments." Here are some reasons for which the site recommends you ought to try Nix.
Reliable: Nix's purely functional approach ensures that installing or upgrading one package cannot break other packages.
other. This ensures that they are reproducible and do not
have undeclared dependencies. So if a package works on
one machine, it will also work on another.
It's great for developers: Nix makes it simple to set up and share build environments for your projects, regardless of what programming languages and tools you're using.
Multi-user, multi-version: Nix supports multi-user
package management. Multiple users can share a common
Nix store securely without the need to have root privileges
to install software, and can install and use different
versions of a package.
Source/binary model: Conceptually, Nix builds packages
from source, but can transparently use binaries from a
binary cache, if available.
Portable: Nix runs on Linux, Mac OS X, FreeBSD and
other systems. Nixpkgs, the Nix packages collection,
contains thousands of packages, many pre-compiled.
Installation is pretty straightforward for Linux and Macs;
everything is handled magically for you by a script, but there
are some pre-requisites like sudo, curl and bash, so make sure
you have them installed before moving on. Type the following
command at a terminal:
bash <(curl https://nixos.org/nix/install)
It will ask for sudo access to create a directory named Nix.
You may see something similar to what's shown in Figure 1.
There are binary packages available for Nix, but since we are looking for a new package manager, using another package manager to install it is bad form (though you can, if you want to). If you are running a distro with no binary packages available, or are on Darwin or OpenBSD, you have the option of installing it from source. To set the environment variables right, use the following command:

source ~/.nix-profile/etc/profile.d/nix.sh

Now that we have Nix installed, let's use it for further testing.
To see a list of installable packages, run the following:
nix-env -qa
This will list the installable packages. To search for a specific package, pipe the output of the previous command to grep, with the name of the target package as the argument. Let's search for Ruby, with the following command:
nix-env -qa | grep ruby
It informs us that there are three versions of Ruby available.
This article introduces the reader to Nix, a reliable, multi-user, multi-version, portable,
reproducible and purely functional package manager. Software enthusiasts will find it a
powerful package manager for Linux and UNIX systems.
For U & Me
Let's install Ruby 2.0. There are two ways to install a package, since packages can be referred to by two identifiers. The first one is the name of the package, which might not be unique, and the second is the attribute set value. As our search for the various Ruby versions showed that the name of the package for Ruby 2.0 is ruby-2.0.0-p353, let's try to install it, as follows:

nix-env -i ruby-2.0.0-p353
It gives the following error as the output:

error: unable to fork: Cannot allocate memory
nix-env: src/libutil/ int nix::Pid::wait(bool): Assertion `pid != -1' failed.
Aborted (core dumped)
As per the Nix wiki, the name of the
package might not be unique and may
yield an error with some packages. So
we could try things out with the attribute
set value. For Ruby 2.0, the attribute set
value is nixpkgs.ruby2 and can be used
with the following command:
nix-env -iA nixpkgs.ruby2
This worked. Notice the use of the -iA flag when using the attribute set value.
I talked to Nix developer Domen Kožar about this and he said, "Multiple packages may share the same name and version; that's why using attribute sets is a better idea, since it guarantees uniqueness. This is some kind of a downside of Nix, but this is how it functions :)"
To see the attribute name along with the package name, use the following command:

nix-env -qaP | grep package_name

In the case of Ruby, I replaced package_name with ruby2, and it listed the attribute name along with the package name (see Figure 3).
By: Jatin Dhankhar
The author is a C++ lover and a Rubyist. His areas of interest include robotics,
programming and Web development. He can be reached at
Figure 1: Nix installation
Figure 2: Nix search result Figure 3: Package and attribute usage
To update a specific package and all its
dependencies, use:
nix-env -uA nixpkgs.package_attribute_name
To update all the installed packages, use:
nix-env -u
To uninstall a package, use:
nix-env -e package_name
In my case, while using Ruby
2.0, I replaced it with ruby-
2.0.0-p353, which was the package
name and not the attribute name.
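Nix also keeps previous generations of a profile, so a bad install or upgrade can be undone. The commands below are a sketch of that rollback workflow (the generations listed will, of course, depend on your own profile's history):

```shell
# List the generations of the current profile, then roll back one step.
nix-env --list-generations
nix-env --rollback
```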
Well, that's just the tip of the
iceberg. To learn more, refer to the
Nix manual.
There is a distro named
NixOS, which uses Nix for
both configuration and package management.
For U & Me Let's Try
Solve Engineering Problems
with Laplace Transforms
In higher mathematics, transforms play an important role.
A transform is a mathematical operation that converts
a mathematical expression into another mathematical
expression, typically from one domain to another. Laplace
and Fourier are two very common examples, transforming
from the time domain to the frequency domain. In general,
such transforms have their corresponding inverse transforms.
And this combination of direct and inverse transforms is very
powerful in solving many real life engineering problems. The
focus of this article is Laplace and its inverse transform, along
with some problem-solving insights.
The Laplace transform
Mathematically, the Laplace transform F(s) of a function f(t)
is defined as follows:

F(s) = ∫₀^∞ f(t) e^(-st) dt

where t represents time and s represents complex
angular frequency.
To demonstrate it, let's take a simple example of f(t) = 1.
Substituting and integrating, we get F(s) = 1/s. Maxima has
the function laplace() to do the same. In fact, with that, we
can choose to let our variables t and s be anything else as
well. But, as per our mathematical notations, preserving them
as t and s would be the most appropriate. Let's start with
some basic Laplace transforms. (Note that string() has been
used to just flatten the expression.)
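As a quick cross-check of the F(s) = 1/s result quoted above, the defining integral for f(t) = 1 evaluates directly (assuming Re(s) > 0 for convergence):

```latex
F(s) = \int_0^\infty 1 \cdot e^{-st}\,dt
     = \left[-\frac{e^{-st}}{s}\right]_0^\infty
     = 0 - \left(-\frac{1}{s}\right)
     = \frac{1}{s}
```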
$ maxima -q
(%i1) string(laplace(1, t, s));
(%o1) 1/s
(%i2) string(laplace(t, t, s));
(%o2) 1/s^2
(%i3) string(laplace(t^2, t, s));
(%o3) 2/s^3
(%i4) string(laplace(t+1, t, s));
(%o4) 1/s+1/s^2
(%i5) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero?
p; /* Our input */
(%o5) gamma(n+1)*s^(-n-1)
(%i6) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero?
n; /* Our input */
(%o6) gamma_incomplete(n+1,0)*s^(-n-1)
(%i7) string(laplace(t^n, t, s));
Is n + 1 positive, negative, or zero?
z; /* Our input, making it non-solvable */
(%o7) laplace(t^n,t,s)
(%i8) string(laplace(1/t, t, s)); /* Non-solvable */
(%o8) laplace(1/t,t,s)
(%i9) string(laplace(1/t^2, t, s)); /* Non-solvable */
(%o9) laplace(1/t^2,t,s)
(%i10) quit();
In the above examples, the expression is preserved as is, in
case of non-solvability.
Laplace transforms are integral mathematical transforms widely used in physics and
engineering. In this 21st article in the series on mathematics in open source, the author
demonstrates Laplace transforms through Maxima.
(%o4) [s = -w,s = w]
(%i5) string(solve(denom(laplace(cos(w*t), t, s)), s));
(%o5) [s = -%i*w,s = %i*w]
(%i6) string(solve(denom(laplace(cosh(w*t), t, s)), s));
(%o6) [s = -w,s = w]
(%i7) string(solve(denom(laplace(exp(w*t), t, s)), s));
(%o7) [s = w]
(%i8) string(solve(denom(laplace(log(w*t), t, s)), s));
(%o8) [s = 0]
(%i9) string(solve(denom(laplace(delta(w*t), t, s)), s));
(%o9) []
(%i10) string(solve(denom(laplace(erf(w*t), t, s)), s));
(%o10) [s = 0]
(%i11) quit();
Involved Laplace transforms
laplace() also understands derivative() / diff(), integrate(),
sum(), and ilt() - the inverse Laplace transform. Here are some
interesting transforms showing the same:
$ maxima -q
(%i1) laplace(f(t), t, s);
(%o1) laplace(f(t), t, s)
(%i2) string(laplace(derivative(f(t), t), t, s));
(%o2) s*laplace(f(t),t,s)-f(0)
(%i3) string(laplace(integrate(f(x), x, 0, t), t, s));
(%o3) laplace(f(t),t,s)/s
(%i4) string(laplace(derivative(sin(t), t), t, s));
(%o4) s/(s^2+1)
(%i5) string(laplace(integrate(sin(t), t), t, s));
(%o5) -s/(s^2+1)
(%i6) string(sum(t^i, i, 0, 5));
(%o6) t^5+t^4+t^3+t^2+t+1
(%i7) string(laplace(sum(t^i, i, 0, 5), t, s));
(%o7) 1/s+1/s^2+2/s^3+6/s^4+24/s^5+120/s^6
(%i8) string(laplace(ilt(1/s, s, t), t, s));
(%o8) 1/s
(%i9) quit();
Note the usage of ilt(), the inverse Laplace transform, in %i8
of the above example. Calling laplace() and ilt() one after the
other cancels their effect; that is what is meant by inverse. Let's
look into some common inverse Laplace transforms.
Inverse Laplace transforms
$ maxima -q
(%i1) string(ilt(1/s, s, t));
(%o1) 1
(%i2) string(ilt(1/s^2, s, t));
(%o2) t
(%i3) string(ilt(1/s^3, s, t));
(%o3) t^2/2
(%i4) string(ilt(1/s^4, s, t));
laplace() is designed to understand various symbolic
functions, such as sin(), cos(), sinh(), cosh(), log(), exp(),
delta() and erf(). delta() is the Dirac delta function, and erf() is the
error function; the others are the usual mathematical functions.
$ maxima -q
(%i1) string(laplace(sin(t), t, s));
(%o1) 1/(s^2+1)
(%i2) string(laplace(sin(w*t), t, s));
(%o2) w/(w^2+s^2)
(%i3) string(laplace(cos(t), t, s));
(%o3) s/(s^2+1)
(%i4) string(laplace(cos(w*t), t, s));
(%o4) s/(w^2+s^2)
(%i5) string(laplace(sinh(t), t, s));
(%o5) 1/(s^2-1)
(%i6) string(laplace(sinh(w*t), t, s));
(%o6) -w/(w^2-s^2)
(%i7) string(laplace(cosh(t), t, s));
(%o7) s/(s^2-1)
(%i8) string(laplace(cosh(w*t), t, s));
(%o8) -s/(w^2-s^2)
(%i9) string(laplace(log(t), t, s));
(%o9) (-log(s)-%gamma)/s
(%i10) string(laplace(exp(t), t, s));
(%o10) 1/(s-1)
(%i11) string(laplace(delta(t), t, s));
(%o11) 1
(%i12) string(laplace(erf(t), t, s));
(%o12) %e^(s^2/4)*(1-erf(s/2))/s
(%i13) quit();
Interpreting the transform
A Laplace transform is typically a fractional expression
consisting of a numerator and a denominator. Solving
the denominator, by equating it to zero, gives the various
complex frequencies associated with the original function.
These are called the poles of the function. For example, the
Laplace transform of sin(w * t) is w/(s^2 + w^2), where the
denominator is s^2 + w^2. Equating that to zero and solving
it, gives the complex frequency s = +iw, -iw; thus, indicating
that the frequency of the original expression sin(w * t) is w,
which indeed it is. Here are a few demonstrations of the same:
$ maxima -q
(%i1) string(laplace(sin(w*t), t, s));
(%o1) w/(w^2+s^2)
(%i2) string(denom(laplace(sin(w*t), t, s))); /* The Denominator */
(%o2) w^2+s^2
(%i3) string(solve(denom(laplace(sin(w*t), t, s)), s)); /* The
Poles */
(%o3) [s = -%i*w,s = %i*w]
(%i4) string(solve(denom(laplace(sinh(w*t), t, s)), s));
(%o4) t^3/6
(%i5) string(ilt(1/s^5, s, t));
(%o5) t^4/24
(%i6) string(ilt(1/s^10, s, t));
(%o6) t^9/362880
(%i7) string(ilt(1/s^100, s, t));
(%o7) t^99/933262154439441526816992388562667004907159682643816
(%i8) string(ilt(1/(s-a), s, t));
(%o8) %e^(a*t)
(%i9) string(ilt(1/(s^2-a^2), s, t));
(%o9) %e^(a*t)/(2*a)-%e^-(a*t)/(2*a)
(%i10) string(ilt(s/(s^2-a^2), s, t));
(%o10) %e^(a*t)/2+%e^-(a*t)/2
(%i11) string(ilt(1/(s^2+a^2), s, t));
Is a zero or nonzero?
n; /* Our input */
(%o11) sin(a*t)/a
(%i12) string(ilt(s/(s^2+a^2), s, t));
Is a zero or nonzero?
n; /* Our input */
(%o12) cos(a*t)
(%i13) assume(a > 0)$ /* or assume(a < 0) */
(%i14) string(ilt(1/(s^2+a^2), s, t));
(%o14) sin(a*t)/a
(%i15) string(ilt(s/(s^2+a^2), s, t));
(%o15) cos(a*t)
(%i16) string(ilt((s^2+s+1)/(s^3+s^2+s+1), s, t));
(%o16) sin(t)/2+cos(t)/2+%e^-t/2
(%i17) string(laplace(sin(t)/2+cos(t)/2+%e^-t/2, t, s));
(%o17) s/(2*(s^2+1))+1/(2*(s^2+1))+1/(2*(s+1))
(%i18) string(rat(laplace(sin(t)/2+cos(t)/2+%e^-t/2, t, s)));
(%o18) (s^2+s+1)/(s^3+s^2+s+1)
(%i19) quit();
Observe that if we take the Laplace transform of the
above %o outputs, they would give back the expressions,
which are input to ilt() of the corresponding %i's. %i18
specifically shows one such example. It does laplace() of
the output at %o16, giving back the expression, which was
input to ilt() of %i16.
Solving differential and integral equations
Now, with these insights, we can easily solve many
interesting and otherwise complex problems. One of
them is solving differential equations. Let's explore a
simple example of solving f'(t) + f(t) = e^t, where f(0) =
0. First, let's take the Laplace transform of the equation.
Then substitute the value for f(0), and simplify to obtain
the Laplace of f(t), i.e., F(s). Finally, compute the inverse
Laplace transform of F(s) to get the solution for f(t).
$ maxima -q
(%i1) string(laplace(diff(f(t), t) + f(t) = exp(t), t, s));
(%o1) s*laplace(f(t),t,s)+laplace(f(t),t,s)-f(0) = 1/(s-1)
Substituting f(0) as 0, and then simplifying, we get
laplace(f(t),t,s) = 1/((s-1)*(s+1)), for which we do an inverse
Laplace transform:
(%i2) string(ilt(1/((s-1)*(s+1)), s, t));
(%o2) %e^t/2-%e^-t/2
(%i3) quit();
That gives us f(t) = (e^t - e^-t) / 2, i.e., sinh(t), which
definitely satisfies the given differential equation.
Similarly, we can solve equations with integrals. And not
just integrals, but also equations with both differentials and
integrals. Such equations come up very often when solving
problems linked to electrical circuits with resistors, capacitors
and inductors. Let's again look at a simple example that
demonstrates the fact. Let's assume we have a 1 ohm resistor,
a 1 farad capacitor, and a 1 henry inductor in series, being
powered by a sinusoidal voltage source of frequency w. What
would be the current in the circuit, assuming it to be zero at t =
0? It would yield the following equation: R * i(t) + (1/C) * ∫ i(t)
dt + L * di(t)/dt = sin(w*t), where R = 1, C = 1, L = 1.
So, the equation can be simplified to i(t) + ∫ i(t) dt + di(t)/
dt = sin(w*t). Now, following the procedure as described
above, let's carry out the following steps:
$ maxima -q
(%i1) string(laplace(i(t) + integrate(i(x), x, 0, t) + diff(i(t),
t) = sin(w*t), t, s));
(%o1) s*laplace(i(t),t,s)+laplace(i(t),t,s)/
s+laplace(i(t),t,s)-i(0) = w/(w^2+s^2)
Substituting i(0) as 0, and simplifying, we get
laplace(i(t), t, s) = w/((w^2+s^2)*(s+1/s+1)). Solving that by
inverse Laplace transform, we very easily get the complex
expression for i(t) as follows:
(%i2) string(ilt(w/((w^2+s^2)*(s+1/s+1)), s, t));
Is w zero or nonzero?
n; /* Our input: Non-zero frequency */
(%o2) w^2*sin(t*w)/(w^4-w^2+1)-(w^3-w)*cos(t*w)/(w^4-w^2+1)+%e^-
(%i3) quit();
By: Anil Kumar Pugalia
The author is a gold medallist from NIT Warangal and IISc
Bangalore and he is also a hobbyist in open source hardware
and software, with a passion for mathematics. Learn more
about him and his experiments at He can be
reached at
Discover the Z shell, a powerful scripting
language, which is designed for interactive use.
The Z shell (zsh) is a powerful interactive login shell
and command interpreter for shell scripting. A big
improvement over older shells, it has a lot of new
features, and the support of the Oh-My-Zsh framework
makes using the terminal fun.
Released in 1990, the zsh shell is fairly new compared
to its older counterpart, the bash shell. Although more than
two decades have passed since its release, it is still very
popular among programmers and developers who use the
command-line interface on a daily basis.
Why zsh is better than the rest
Most of what is mentioned below can probably be
implemented or configured in the bash shell as well; however,
it is much more powerful in the zsh shell.
Advanced tab completion
Tab completion in zsh supports the command line option for
the auto completion of commands. Pressing the tab key twice
enables the auto complete mode, and you can cycle through
the options using the tab key.
You can also move through the files in a directory with
the tab key.
Zsh has tab completion for the path of directories or files
in the command line too.
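If completion does not behave this way out of the box, zsh's completion system may simply not be initialised yet. A minimal ~/.zshrc sketch to enable it (standard zsh builtins, no extra packages assumed):

```shell
# ~/.zshrc: initialise zsh's programmable completion system
autoload -Uz compinit   # load the completion initialiser
compinit                # set up completion for supported commands
```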
Another great feature is that you can switch paths by
using cd -1 to switch to the previous path, cd -2 to switch to
the one before that, and so on.
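This numbered-path trick relies on zsh's directory stack. A sketch of how it is usually set up and used (the option names are standard zsh, though your distribution's defaults may differ):

```shell
# ~/.zshrc: keep a stack of visited directories automatically
setopt AUTO_PUSHD          # push every directory you cd into onto the stack
setopt PUSHD_IGNORE_DUPS   # avoid duplicate stack entries

# Then, interactively:
dirs -v   # list the stack with numeric indices
cd -1     # jump back to the previous directory on the stack
```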
Real time highlighting and themeable prompts
To include real time highlighting, clone the zsh-syntax-
highlighting repository from github (
users/zsh-syntax-highlighting). This makes the command-
line look stunning. In some terminals, existing commands
are highlighted in green and those typed incorrectly are
highlighted in red. Also, quoted text is highlighted in yellow.
All this can be configured further according to your needs.
Prompts on zsh can be customised to be right-aligned,
left-aligned or as multi-lined prompts.
Extended globbing
Wikipedia defines globbing as follows: "In computer
programming, in particular in a UNIX-like environment,
the term globbing is sometimes used to refer to pattern
matching based on wildcard characters." Shells
before zsh also offered globbing; however, zsh offers
extended globbing. Extra features can be enabled if the
EXTENDEDGLOB option is set.
Here are some examples of the
extended globbing offered by zsh. The
^ character is used to negate any pattern
following it.

setopt EXTENDEDGLOB  # Enables extended globbing in zsh.
ls *(.)      # Displays all regular files.
ls -d ^*.c   # Displays all directories and files that are not .c files.
ls -d ^*.*   # Displays directories and files that have no extension.
ls -d ^file  # Displays everything in the directory except the file called file.
ls -d *.^c   # Displays files with extensions, except .c files.
An expression of the form <x-y> matches a range of
integers. Also, files can be grouped in the search pattern.
% ls (foo|bar).*
bar.o foo.c foo.o
% ls *.(c|o|pro)
bar.o foo.c foo.o main.o q.c
To exclude a certain file from the search, the ~ character
can be used.
% ls *.c
foo.c foob.c bar.c
% ls *.c~bar.c
foo.c foob.c
% ls *.c~f*
bar.c
These and several more extended globbing features can
help immensely while working through large directories.
Case insensitive matching
Zsh supports pattern matching that is independent of
whether the letters of the alphabet are upper or lower case.
Zsh first searches through the directory to find a match, and if
one does not exist, it carries out a case insensitive search for
the file or directory.
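This behaviour can also be requested explicitly for tab completion. The zstyle line below is a common sketch for case-insensitive completion matching:

```shell
# ~/.zshrc: try an exact match first, then fold lower case to upper case
zstyle ':completion:*' matcher-list '' 'm:{a-z}={A-Z}'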
Sharing of command history among running shells
Running shells share command history, thereby eradicating
the difficulty of having to remember the commands you typed
earlier in another shell.
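A sketch of the ~/.zshrc settings that control this behaviour (the option and variable names are standard zsh):

```shell
# ~/.zshrc: share one history across all running zsh sessions
HISTFILE=~/.zsh_history
HISTSIZE=10000
SAVEHIST=10000
setopt SHARE_HISTORY        # read and write the history file continuously
setopt INC_APPEND_HISTORY   # append commands as they are entered
```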
Figure 1: Tab completion for command options
Figure 2: Tab completion for files

Global aliases
Aliases are used to abbreviate commands and command
options that are used very often, or combinations of
commands. Most other shells have aliases, but zsh supports
global aliases. These are aliases that are substituted anywhere
in the line. Global aliases can be used to abbreviate
frequently-typed usernames, hostnames, etc. Here are some
examples of aliases:
alias -g mr='rm'
alias -g TL='| tail -10'
alias -g NUL='> /dev/null 2>&1'
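To see why global aliases are handy, note that they expand anywhere on the line, not just in the command position. With the TL and NUL aliases above, for example:

```shell
# In an interactive zsh session, 'TL' expands to '| tail -10'
ps aux TL        # zsh runs: ps aux | tail -10
dmesg TL NUL     # zsh runs: dmesg | tail -10 > /dev/null 2>&1
```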
Installing zsh
To install zsh in Ubuntu or Debian-based distros, type the following:
sudo apt-get update && sudo apt-get install zsh # install zsh
chsh -s /bin/zsh # to make zsh your default shell
To install it on SUSE-based distros, type:
sudo zypper install zsh
To confirm that zsh is now your default shell, run (replacing yoda with your username):

finger yoda | grep zsh
Configuring zsh
The .zshrc file looks something like what is shown in Figure 4.
Add your own aliases for commands you use frequently.
Customising zsh with Oh-My-Zsh
Oh-My-Zsh is an open source, community-driven
framework for managing the zsh configuration. Although zsh
is itself powerful, the main attraction of Oh-My-Zsh is the
themes, plugins and other features that come with it.
To install Oh-My-Zsh you need to clone the Oh-My-Zsh
repository from Github (
oh-my-zsh). A wide range of themes are available so there is
something for everybody.
To clone the repository from Github, use the following
command. This installs Oh-My-Zsh in ~/.oh-my-zsh (a hidden
directory in your home directory). The default path can be
changed by setting the environment variable for zsh using
export ZSH=/your/path
git clone
To install Oh-My-Zsh via curl, type:
curl -L | sh
To install it via wget, type:
wget --no-check-certificate -O - | sh
To customise zsh, create a new zsh configuration, i.e., a
~/.zshrc file, by copying any of the existing templates provided:
cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc
Restart your zsh terminal to view the changes.
To check out the numerous plugins offered in Oh-My-Zsh,
you can go to the plugins directory in ~/.oh-my-zsh.
To enable these plugins, add them to the ~/.zshrc file and
then source them.
cd ~/.oh-my-zsh
vim ~/.zshrc
source ~/.zshrc
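Concretely, enabling plugins is a one-line edit: the Oh-My-Zsh template for ~/.zshrc contains a plugins array. A sketch (the plugin names here are just examples from the plugins/ directory):

```shell
# ~/.zshrc: enable the git and sudo plugins shipped with Oh-My-Zsh
plugins=(git sudo)
```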
If you want to install some plugin that is not present in
the plugins directory, you can clone the plugin from Github or
install it using wget or curl and then source the plugin.
To view the themes in zsh go to the themes/ directory. To
change your theme, set ZSH_THEME in ~/.zshrc to the theme
you desire and then source Oh-My-Zsh. If you do not want any
theme enabled, set ZSH_THEME = . If you cant decide
on a theme, you can set ZSH_THEME = random. This will
change the theme every time you open a shell and you can
decide upon the one that you fnd most suitable for your needs.
To make your own theme, copy any one of the existing
themes from the themes/ directory to a new file with a
.zsh-theme extension and make your changes to that.
A customised theme is shown in Figure 6.
Here, the user name, represented by %n, has been set to
the colour green and the computer name, represented by %m,
has been set to the colour cyan. This is followed by the path
represented by %d. The prompt variable then looks like this:

PROMPT='$fg[green]%n $fg[red]at $fg[cyan]%m---'
The prompt can be changed to incorporate spacing, and
git states, battery charge, etc, by declaring functions that do
the same.
For example, here, instead of printing the entire path
including /home/darshana, we can define a function such that
if PWD contains $HOME, it replaces the same with ~:

function get_pwd() {
  echo ${PWD/$HOME/~}
}
To view the status of the current Git repository, the
following code can be used:
function git_prompt_info() {
Figure 3: View previous paths
Figure 5: Setting aliases in ~/.zshrc file
Figure 4: ~/.zshrc file
Panasonic Looks to Engage with
Developers in India!
Panasonic entered the Indian smartphone market last year. In just one year, the
company has assessed the potential of the market and has found that it could make
India the headquarters for its smartphone division. But this cannot happen without
that bit of extra effort from the company. While Panasonic is banking big on India's
favourite operating system, Android, it is also leaving no stone unturned to provide
a unique user experience on its devices. Diksha P Gupta from Open Source For You
spoke to Pankaj Rana, head, smartphones and tablets, Panasonic India Pvt Ltd, to
get a clearer picture of the company's growth plans. Excerpts:
about the strategy, Pankaj Rana, head, smartphones
and tablets, Panasonic India Pvt Ltd, says, "We
are banking on Android purely because it provides
the choice of customisation. Based on this ability
of Android, we have created a very different user
experience for Panasonic smartphones." What
Rana is referring to here is the single fit-home UI
launched by Panasonic. He explains, "While we
have provided the standard Android UI in the feature
phones, the highly-efficient fit-home UI is available
on Panasonic smartphones. When working on the
standard Android UI, users need to use both hands to
perform any task. However, the fit-home UI allows
single-hand operations, making it easy for the user
to function."
Yet another feature of the UI is that it can
be operated in the landscape mode. Rana claims
that many phones do not allow the use of various
functions like settings, et al, in the landscape mode.
He says, "We have kept the comfort of the users as
our top priority and, hence, designed the UI in such
a way that it offers a tablet-like experience as well."
The Panasonic Eluga is a 12.7cm (5-inch) phone.
This kind of a UI will be a great advantage on big
screen devices. For users of feature phones who
are migrating to smartphones now, this kind of UI
makes the transition easier.
Coming soon: An exclusive Panasonic app store
Well, if you thought the unique user experience was
the end of the show, hold on. There's more coming.
The company plans to leave no stone unturned
when it comes to making its Android experience
complete for the Indian region. Rana reveals, "We
are planning to come up with a Panasonic-exclusive
app store, which should come into existence in the
next 3-4 months."
Pankaj Rana, head, smartphones and tablets, Panasonic India Pvt Ltd
Panasonic is all set to launch 15 smartphones and
eight feature phones in India this year. While the
company will keep its focus on the smartphone
segment, it has no plans of losing its feature phone
lovers as Panasonic believes that there is still scope for
the latter in the Indian market. That said, Panasonic will
invest more energy in grabbing what it hopes will be a
5 per cent share in the Indian smartphone market. And
that will happen with the help of Android. Speaking
For U & Me Open Strategy
When it comes to the development for this
app store, Panasonic will look at hiring in-house
developers, as well as associate with third party
developers. Rana says, "We will look at all possible
ways to make our app ecosystem an enriched one.
Just for the record, this UI has been built within
the company, with engineers from various facilities
including India, Japan and Vietnam. For the exclusive
app store that we are planning to build, we will have
some third-party developers. But besides that, we
plan to develop our in-house team as well. Right now,
we have about 25 software engineers working with
us in India, who are from Japan. We also have some
Vietnamese resources working for us."
The company plans to do the hiring for the in-
house team within the next six months. The team
may comprise about 100 people. Rana clarifies that
the developers hired in India are going to be based
in Gurgaon, Bengaluru and Hyderabad. He says,
"We already have about 20 developers in Bengaluru,
who are on third party rolls. We are in the process
of switching them to the company's rolls over the
next couple of months. Similarly, we have about 10
developers in Gurgaon. In addition, our R&D team in
Vietnam has 70 members. We are also planning to shift
the Vietnam operations to India, making the country
our smartphone headquarters."
To take the idea of the Panasonic-exclusive app
store further, the company is planning some developer
engagement activities this November and December.
The consumer is the king!
While Rana asserts that Panasonic can make one of the
best offerings in the smartphone world, he recognises that
consumers are looking for something different every time,
when it comes to these fancy devices. He says, "Right
now, companies are working at the UI level to offer that
newness in the experience. But six months down the line,
things will not remain the same. The situation is bound to
change and, to survive in this business, developers need
to predict the tastes of the consumers. But for now, it is
about providing an easy experience, so that the feature
phone users who are looking to migrate to smartphones
find it convenient enough."
For U & Me Open Strategy | OPEN SOURCE FOR YOU | SEPTEMBER 2014 | 99
Convert images to PDF
Often, your scanned copies will be in an image
format that you would like to convert to the PDF format. In
Linux, there is an easy-to-use tool called convert that can
convert any image to PDF. The following example shows
you how:
$convert scan1.jpg scan1.pdf
To convert multiple images into one PDF file, use the
following command:
$convert scan*.jpg scanned_docs.pdf
The convert tool comes with the imagemagick
package. If you do not find the convert command on your
system, you will need to install imagemagick.
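Scanned JPEGs can make the resulting PDF quite large. The options below are a hedged sketch of how imagemagick's convert can shrink it; tune the numbers to taste:

```shell
# Downscale each page to 50% and recompress before writing the PDF
convert -resize 50% -quality 75 scan*.jpg scanned_docs_small.pdf
```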
Madhusudana Y N,
Your own notepad
Here is a simple and fast method to create a notepad-
like application that works in your Web browser. All you
need is a browser that supports HTML 5 and the commands
mentioned below.
Open your HTML 5 supported Web browser and paste
the following code in the address bar:
data:text/html, <html contenteditable>
Then use the following code:
data:text/html, <title>Text Editor</title><body
contenteditable style="font-size:2rem;line-height:1.4;max-
width:60rem;margin:0 auto;padding:4rem;">
And finally:
data:text/html, <style>html,body{margin: 0; padding: 0;}</
style><textarea style="font-size: 1.5em; line-height:
1.5em; background: %23000; color: %233a3; width: 100%;
height: 100%; border: none; outline: none; margin: 0;
padding: 90px;" autofocus placeholder="wake up Neo..." />
Your Web browser-based notepad is ready.
Chintan Umarani,
How to find a swap partition or
file in Linux
Swap space can be a dedicated swap partition, a swap file,
or a combination of swap partitions and swap files.
To find a swap partition or file in Linux, use either of the
following commands:

swapon -s
cat /proc/swaps
The output will be something like what's shown below:
Filename Type Size Used Priority
/dev/sda5 partition 2110460 0 -1
Here, the swap is a partition and not a file.
Sharad Chhetri,
Monitoring a process in Linux
To monitor the systems performance on a per-
process basis, use the following command:
pidstat -d -h -r -u -p <PID> <Delay in seconds>
This command will show the CPU utilisation, memory
utilisation and IO utilisation of the process, along with the PID.
pidstat -d -h -r -u -p 1234 5
where 1234 is the PID of the process to be monitored
and 5 is the delay in seconds.
Prasanna Mohanasundaram,
Finding the number of threads
a process has
By using the ps command and reading the /proc file entries,
we can find the number of threads in a process.
Here, the nlwp option will report the number of light
weight processes (i.e., threads).
[bash]$ ps -o nlwp <PID Of Process>
[bash]$ ps -o nlwp 3415
We can also obtain the same information by reading the
/proc file entries:
[bash]$ cat /proc/<PID>/status | grep Threads
[bash]$ cat /proc/3415/status | grep Threads
Threads: 34
Narendra Kangralkar,
Find out the number of times a user has
executed a command
In Linux, the command known as hash will display the
number of hits or number of times that a particular shell
command has been executed.
test@linux-erv3:~/Desktop> hash
hits command
1 /bin/ps
2 /usr/bin/pidstat
1 /usr/bin/man
2 /usr/bin/top
In the above output of hash, we can see that the commands
pidstat and top have been used twice and the rest only once.
If hash is the first command run in the terminal, it will
return the following as output:
test@linux-erv3:~/Desktop> hash
hash: hash table empty
Bharanidharan Ambalavanan,
Searching for a specific string entirely
within a particular location
If you're working on a big project with multiple files
organised in various hierarchies and need to know
where a specific string/word is located, the following Grep
command will prove to be priceless:

grep -HRnF "foo test" <dir_loc> --include "*.c"
The above command will search for foo test (without
the quotes) under <dir_loc> recursively, in all the files
ending with the .c extension, and will print the output with
the line numbers.
Remember that foo test is case sensitive, so a search
for Foo Test would return different results.
Here is a sample output:
<dir_loc>/tip/src/WWW/cgi-bin/admin/file1.c:880: line
beginning of foo test in sample;
<dir_loc>/tip/src/WWW/cgi-bin/admin/file1.c:1034: foo test
line ending;
<dir_loc>/tip/src/WWW/cgi-bin/user/file2.c:166: partial foo
<dir_loc>/file3.c:176: somewhere foo testing of sample;
You can always replace <dir_loc> with . (dot) to
indicate a recursive search from the current directory, and
also leave out the --include parameter to search all files;
or provide it as --include "*.*":

grep -HRnF "foo test" . --include "*.*"
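If you do want a case-insensitive search instead, adding -i to the same command makes the fixed string match any case. A small self-contained demo (the /tmp/grepdemo path and its file are made up for illustration):

```shell
# Create a throwaway file whose only line differs in case from our pattern
mkdir -p /tmp/grepdemo
printf 'partial Foo Test line\n' > /tmp/grepdemo/a.c

# -i makes "foo test" also match "Foo Test"
grep -HRniF "foo test" /tmp/grepdemo --include "*.c"
# -> /tmp/grepdemo/a.c:1:partial Foo Test line
```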
Runcy Oommen,
Share Your Linux Recipes!
The joy of using Linux is in finding ways to get around
problems: take them head on, defeat them! We invite you to
share your tips and tricks with us for publication in OSFY so
that they can reach a wider audience. Your tips could be related
to administration, programming, troubleshooting or general
tweaking. Submit them at The sender
of each published tip will get a T-shirt.
We are
looking to hire
people with
core Android
Before Xiaomi entered the
Indian market, many assumed
that this was just another Chinese
smartphone coming their way. But
perceptions changed after the brand
entered the sub-continent. Flipkart
got bombarded with orders and
Xiaomi eventually could not meet
the Indian demand. There are quite
a few reasons for this explosive
demand, but one of the most
important factors is the unique user
experience that the device offers. It
runs on Android, but on a different
version, one that originates from the
brain of Hugo Barra, vice president,
Xiaomi. When he was at Google,
he was pretty much instrumental
in making the Android OS what
it is. Currently, he is focused on
offering a different taste of it
at a unique price point. He has
launched MIUI, an Android-based
OS that he wants to be ported to
devices other than Xiaomi. For this,
he needs a lot of help from the
developers community. Diksha P
Gupta from Open Source For You
caught up with him to discuss his
plans for India and how he wants
to contribute to and leverage
the developer ecosystem in the
country. Read on...
Hugo Barra, vice president, Xiaomi
For U & Me Interview
What are the top features of Xiaomi MIUI that you think
are lacking in other devices?
First of all, we have a dramatically simplified UI for the
average person, so it feels simpler than anything else in the
market right now.
Second, it is very customisable and is really appealing
to our customers. We have thousands of themes that
can completely change the experience, not just the wall
paper or the lock screen. From very detailed to very
minimalistic designs, from cartoonist styles to cultural
statements and on to other things, there is a huge list of
options to choose from.
Third, I would say that there's customisation for power
users as well. You can change a lot of things in the system.
You can control permissions on apps, and you can decide
which apps are allowed to run in the background. There
is a lot you can do to fine tune the performance of your
device if you choose to. For example, you can decide
which apps are allowed to access the 3G network. So I
can say that out of the 45 apps that I have running on
my phone, the only ones that are allowed to use 3G are
WhatsApp, Hike, my email and my browser. I don't want
any of the other apps that are running on this phone to be
allowed to access 3G at all, which I won't know about and
which may use bandwidth that I am paying for. It is a very
simple menu. Like checkboxes, it lets you choose the apps
that you want to allow 3G access to. So if you care about
how much bandwidth you are consuming and presently
control that by turning 3G on and off (which people do all
the time), now you can allow 3G access only to messaging
apps like WhatsApp or Hike that use almost no bandwidth
at all. Those are the apps that you're all the time connected
to because if someone sends you a message, you want to
get it as soon as possible.
Fourth, we have added a number of features to the core
apps that make them more interesting. These include in-
call features that allow users to take notes during a phone
call, the ability to record a phone call and a bunch of other
things. So it is not the dialler app alone, but also dozens of
features all around the OS like turning on the flash light
from the lock screen, having a private messaging inbox and
a whole lot of other features.
Fifth, on your text inbox, you can pin a person to the
top, if there is really someone who matters to you and you
always want to have their messages on top. You can
decide at what time an SMS needs to be sent out. You can
compose a message saying, "I want this message to go out
at 7 p.m.", because maybe you're going to be asleep, for
example, but still want that message to go out.
Then there are little things like, if you fire up the
camera app and point it at a QR code,
it just automatically recognises it. You don't have to
download a special app just to read QR codes. If
you are connected to a Wi-Fi network with your Mi phone,
you may want to share this Wi-Fi network with someone
else. So, you just go into the Wi-Fi menu and tap "Share
this connection", and the phone will then display a QR code.
The person you want to share your Wi-Fi connection with
can just scan this QR code and immediately get connected
to the network without having to enter a password. So
lots and lots of little things like that add up to a pretty
delightful experience.
After MIUI from Xiaomi, Panasonic has launched its own
UI and Xolo has launched HIVE. So do you think the war
has now shifted to the UI level?
I think it would be an injustice to say that our operating
system, MIUI, is just another UI on top of Android, because it
is so much more than that. We have had a team of 500
engineers working on our operating systems for the last
four years, so it is not just a re-skinning of Android. It is
a much, much more significant effort. I can spend five hours
with you just explaining the features of MIUI. I don't think
there are many companies out there with a software effort
as significant, or as long-running, as ours. So while I
haven't looked closely at the operating systems that you are
talking about, my instinct is that they
are not as profoundly different and well founded as MIUI.
What are your plans to reach out to the developers?
From a development perspective, first and foremost, we
are very Android compliant. All of our builds, before OTA,
go to Google for approval, like every other OEM out there.
We are focused on Android APIs. We are not building new
APIs. We believe that doing so would create fragmentation.
It's not ideal and goes against the ecosystem. So,
from our point of view, we see developers as our early
adopters. They are the people who we think are best
equipped to try our products. We see developers as the first
people that we take our devices to try out. That's primarily
how we view the developer community.
There are some interesting twists out there as well. For
instance, we are the first company to make a Tegra K1 tablet.
So, already, the MiPad is being used by game developers as the
reference development platform for K1 game titles. This is one
of the few ways in which we get in touch with the developers
and work with them.
How do you want to involve the Indian developer
community, considering the fact that it is one of the
largest in the world?
First of all, we are looking to hire developers here. We are
looking to build a software engineering team in India, and
in Bengaluru, to be precise, where we are headquartered.
So that is the first and the most important step for us. The
second angle is MIUI. It's not an open source operating
system, but it is an open operating system that is based on
Android. A lot of the actual code is closed, but it's open
and is configurable.
For U & Me Interview | OPEN SOURCE FOR YOU | SEPTEMBER 2014 | 103
A really interesting initiative where we would love to get
some help from the developer community is porting MIUI to
all of the local devices. If someone with a Micromax or a
Karbonn device wants to run MIUI on it, let them do it. So,
we would like the help of the developer community for things
like that.
We do have some developers in the country porting our
builds on different devices. So, that's something we would
love the developer community to help us with.
What would be the size of the engineering team and by
when can we expect that team to come up?
We will start hiring people pretty much right away. As for
the size, I don't know yet. I suspect that we would only be
limited by our ability to hire quickly, as we need a really
high count of engineers. The Bengaluru tech talent scene is
incredibly competitive, and there is a shortage of talented
people. So we will be working hard to recruit as fast as we
can, but I don't think we will be limited by any particular
quota or budget; it's about how quickly we can take this up.
What kind of developers are you looking to hire in India?
Mainly, Android developers. We are looking to hire
people with core Android experience: software engineers
who have written code below the application level. Java
developers and Android developers are totally fine.
You are originally from Android, but you chose to do
something that is not open source but closed source.
Any reasons for this choice?
This is something the company has thought about a lot.
Managing an open source project is a very different thing
from what we do today. So we thought that what we would
rather do is contribute back. So, while our core UI is not
open, we actually contribute a lot back, not only to Android,
but to other initiatives as well. HBase is one database
initiative for which our team is the No. 1 contributor.
What are your thoughts on the Android One platform?
I think it's a phenomenal effort from Google. It's a
very clever program. It is designed to empower members
of the ecosystem, who you might label as challengers, and
to help them reap real incentives. So I think it is a very,
very cleverly designed program. I am very excited to see it
taking off in India.
Is it true that Xiaomi provides weekly updates for its devices?
We provide weekly updates for our Mi device family. We
have two families: the Redmi family and the Mi family.
We provide weekly updates for the devices of the Mi
family on what we call the beta channel. So when you
buy a Mi 3, for example, its on what we call the stable
channel, which gets updates every one or two months. But
if you want to be on the beta channel and get updates every
week, to be the one to try out features first, all you have to
do is go to the MIUI forum and download the beta channel
ROM to flash it to your device. It is a very simple thing
to do, and then you are on the beta channel and will get
weekly updates.
So what is the idea behind such weekly updates? Are
they not too frequent for even power users to adopt?
The power users take them all the time. Their take rate
is incredibly high. These are people who have chosen to
take these over-the-air updates. They have actively said:
"I want to be on the beta channel because I want weekly
updates." And the reason they do it is because they want to
get access to features early, they want to provide feedback,
they like having the latest and greatest software on their
devices. They want to participate and that is why people
want to join in.
Google promotes stock Android usage. You also did the
same when you were with Google, so forking Android is
not happily accepted by Google. Given that, where do you
see Android reaching eventually in this game?
From Google's perspective and from my perspective as
well, the most important thing in the Android ecosystem
is for the devices to remain compatible. To me, the two
most important things are: first, every Android device must
be 100 per cent compatible, and second, all of Google's
innovations must reach every Android user. In the approach
that we have taken as an OS developer, we are checking
both these boxes one hundred per cent.
Our devices, even those selling in China without any of
Google's apps, are 100 per cent compatible and have always
been so, which means that all of the Android APIs work
perfectly. They pass all the compatibility tests, which means
that you will never have apps crashing. Second, because we
are pre-loading all of Google's apps (outside of China, of
course, where we are allowed to preload them), we are
enabling all of the innovation that Google has to reach
every MIUI user. So we are checking all the boxes.
I think you need to be careful when using the word
"forking", because you may be confusing yourself and
your users with forks that are not compatible: forks that
break away from the Android ecosystem, if you will.
I'm not going to mention names, but that is unhealthy for
the ecosystem. That is almost a sin, because it is going
to lead to user frustration at some point, sooner or later.
Apps are going to start crashing or not working. That is a
bad thing to do. It's just not a good idea, and obviously we
will not do that. We are 100 per cent Android compatible,
we are 100 per cent behind Android, and we love what the
Android teams do.